Developer's Guide to Hacking the Calendar Server
================================================
If you are interested in contributing to the Calendar and Contacts
Server project, please read this document.
Participating in the Community
==============================
Although the Calendar and Contacts Server is sponsored and hosted by
Apple Inc. (http://www.apple.com/), it's a true open-source project
under an Apache license. Contributions from other developers are
welcome, and, as with all open development projects, may lead to
"commit access" and a voice in the future of the project.
The community exists mainly through mailing lists and a Subversion
repository. To participate, go to:
http://trac.calendarserver.org/projects/calendarserver/wiki/MailLists
and join the appropriate mailing lists. We also use IRC, as described
here:
http://trac.calendarserver.org/projects/calendarserver/wiki/IRC
There are many ways to join the project. One may write code, test the
software and file bugs, write documentation, etc.
The bug tracking database is here:
http://trac.calendarserver.org/projects/calendarserver/report
To help manage the issues database, read over the issue summaries,
looking and testing for issues that are either invalid, or are
duplicates of other issues. Both kinds are very common, the first
because bugs often get unknowingly fixed as side effects of other
changes in the code, and the second because people sometimes file an
issue without noticing that it has already been reported. If you are
not sure about an issue, post a question to
calendarserver-dev@lists.macosforge.org.
Before filing bugs, please take a moment to perform a quick search to
see if someone else has already filed your bug. In that case, add a
comment to the existing bug if appropriate and monitor it, rather than
filing a duplicate.
Obtaining the Code
==================
The source code to the Calendar and Contacts Server is available via
Subversion at this repository URL:
http://svn.calendarserver.org/repository/calendarserver/CalendarServer/trunk/
You can also browse the repository directly in your web browser, or
with a WebDAV client such as Mac OS X's Finder
(`Go -> Connect to Server`).
A richer web interface which provides access to version history and
logs is available via Trac here:
http://trac.calendarserver.org/browser/
Most developers will want to use a full-featured Subversion client.
More information about Subversion, including documentation and client
download instructions, is available from the Subversion project:
http://subversion.tigris.org/
Directory Layout
================
A rough guide to the source tree:
* ``doc/`` - User and developer documentation, including relevant
protocol specifications and extensions.
* ``bin/`` - Executable programs.
* ``conf/`` - Configuration files.
* ``calendarserver/`` - Source code for the Calendar and Contacts
Server
* ``twistedcaldav/`` - Source code for CalDAV library
* ``twext/`` - Source code for extensions to Twisted
* ``twisted/`` - Files required to set up the Calendar and Contacts
Server as a Twisted service. Twisted (http://twistedmatrix.com/)
is a networking framework upon which the Calendar and Contacts
Server is built.
* ``locales/`` - Localization files.
* ``contrib/`` - Extra stuff that works with the Calendar and
Contacts Server, or that helps integrate with other software
(including operating systems), but that the Calendar and Contacts
Server does not depend on.
* ``support/`` - Support files of possible use to developers.
Coding Standards
================
The vast majority of the Calendar and Contacts Server is written in
the Python programming language. When writing Python code for the
Calendar and Contacts Server, please observe the following
conventions.
Please note that not all of our code currently follows these
standards, but that does not mean that one shouldn't bother to do so.
On the contrary, code changes that do nothing but reformat code to
comply with these standards are welcome, and code changes that do not
conform to these standards are discouraged.
**We require Python 2.6 or higher.** It therefore is OK to write code
that does not work with Python versions older than 2.6.
Read PEP-8:
http://www.python.org/dev/peps/pep-0008/
For the most part, our code should follow PEP-8, with a few exceptions
and a few additions. It is also useful to review the Twisted Coding
Standard, from which we borrow some standards, though we don't
strictly follow it:
http://twistedmatrix.com/trac/browser/trunk/doc/development/policy/coding-standard.xhtml?format=raw
Key items to follow, and specifics:
* Indent level is 4 spaces.
* Never indent code with tabs. Always use spaces.
PEP-8 items we do not follow:
* PEP-8 recommends using a backslash to break long lines up:
::
if width == 0 and height == 0 and \
color == 'red' and emphasis == 'strong' or \
highlight > 100:
raise ValueError("sorry, you lose")
Don't do that, it's gross, and the indentation for the ``raise`` line
gets confusing. Use parentheses:
::
if (
width == 0 and
height == 0 and
color == "red" and
emphasis == "strong" or
highlight > 100
):
raise ValueError("sorry, you lose")
Just don't do it the way PEP-8 suggests:
::
if width == 0 and height == 0 and (color == 'red' or
emphasis is None):
raise ValueError("I don't think so")
Because that's just silly.
Additions:
* Close parentheses and brackets such as ``()``, ``[]`` and ``{}`` at the
same indent level as the line in which you opened it:
::
launchAtTarget(
target="David",
object=PaperWad(
message="Yo!",
crumpleFactor=0.7,
),
speed=0.4,
)
* Long lines are often due to long strings. Try to break strings up
into multiple lines:
::
processString(
"This is a very long string with a lot of text. "
"Fortunately, it is easy to break it up into parts "
"like this."
)
Similarly, callables that take many arguments can be broken up into
multiple lines, as in the ``launchAtTarget()`` example above.
* Breaking generator expressions and list comprehensions into
multiple lines can improve readability. For example:
::
myStuff = (
item.obtainUsefulValue()
for item in someDataStore
if item.owner() == me
)
* Import symbols (especially class names) from modules instead of
importing modules and referencing the symbol via the module unless
it doesn't make sense to do so. For example:
::
from subprocess import Popen
process = Popen(...)
Instead of:
::
import subprocess
process = subprocess.Popen(...)
This makes code shorter and makes it easier to replace one implementation
with another.
* All files should have an ``__all__`` specification. Put them at the
top of the file, before imports (PEP-8 puts them at the top, but
after the imports), so you can see what the public symbols are for
a file right at the top.
* It is more important that symbol names are meaningful than it is
that they be concise. ``x`` is rarely an appropriate name for a
variable. Avoid contractions: ``transmogrifierStatus`` is more useful
to the reader than ``trmgStat``.
* A deferred that will be immediately returned may be called ``d``:
::
d = doThisAndThat()
d.addCallback(onResult)
d.addErrback(onError)
return d
* Do not use ``deferredGenerator``. Use ``inlineCallbacks`` instead.
* That said, avoid using ``inlineCallbacks`` when chaining deferreds
is straightforward, as they are more expensive. Use
``inlineCallbacks`` when necessary for keeping code maintainable,
such as when creating serialized deferreds in a for loop.
* ``_`` may be used to denote unused callback arguments:
::
def onCompletion(_):
# Don't care about result of doThisAndThat() in here;
# we only care that it has completed.
doNextThing()
d = doThisAndThat()
d.addCallback(onCompletion)
return d
* Do not prefix symbols with ``_`` unless they might otherwise be
exposed as a public symbol: a private method name should begin with
``_``, but a locally scoped variable should not, as there is no
danger of it being exposed. Locally scoped variables are already
private.
* Per twisted convention, use camel-case (``fuzzyWidget``,
``doThisAndThat()``) for symbol names instead of using underscores
(``fuzzy_widget``, ``do_this_and_that()``).
Use of underscores is reserved for implied dispatching and the like
(eg. ``http_FOO()``). See the Twisted Coding Standard for details.
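A hypothetical sketch of the implied-dispatch convention: the underscore separates a fixed prefix from a runtime token (here an HTTP method) used to look up the handler. The ``Resource`` class and ``render()`` method below are illustrative only, not the actual server API:

```python
class Resource(object):
    def render(self, method):
        # The underscore-separated name is assembled at runtime,
        # which is why these handlers don't use camel-case.
        handler = getattr(self, "http_" + method, None)
        if handler is None:
            raise NotImplementedError(method)
        return handler()

    def http_GET(self):
        return "got it"

resource = Resource()
resource.render("GET")  # dispatches to http_GET()
```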
* Do not use ``%``-formatting:
::
error = "Unexpected value: %s" % (value,)
Use PEP-3101 formatting instead:
::
error = "Unexpected value: {value}".format(value=value)
* If you must use ``%``-formatting for some reason, always use a tuple as
the format argument, even when only one value is being provided:
::
error = "Unexpected value: %s" % (value,)
Never use the non-tuple form:
::
error = "Unexpected value: %s" % value
Which is allowed in Python, but results in a programming error if
``type(value) is tuple and len(value) != 1``.
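A small demonstration of that pitfall: the tuple form works for any value, while the non-tuple form breaks as soon as the value happens to be a tuple of the wrong length:

```python
value = ("a", "b")

# Tuple form: the one-element tuple wraps value, so this always works.
ok = "Unexpected value: %s" % (value,)

# Non-tuple form: value itself is treated as the argument tuple, so
# Python expects two %s specifiers and raises TypeError.
failed = False
try:
    "Unexpected value: %s" % value
except TypeError:
    failed = True
```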
* Don't use a trailing ``,`` at the end of a tuple if it's on one line:
::
numbers = (1,2,3,) # No
numbers = (1,2,3) # Yes
The trailing comma is desirable on multiple lines, though, as that makes
re-ordering items easy, and avoids a diff on the last line when adding
another:
::
strings = (
"This is a string.",
"And so is this one.",
"And here is yet another string.",
)
* Docstrings are important. All public symbols (anything declared in
``__all__``) must have a correct docstring. The script
``docs/Developer/gendocs`` will generate the API documentation using
``pydoctor``. See the ``pydoctor`` documentation for details on the
formatting:
http://codespeak.net/~mwh/pydoctor/
Note: existing docstrings need a complete review.
* Use PEP-257 as a guideline for docstrings.
* Begin all multi-line docstrings with 3 double quotes and a
newline:
::
def doThisAndThat(...):
"""
Do this, and that.
...
"""
Best Practices
==============
* If a callable is going to return a Deferred some of the time, it
should return a deferred all of the time. Return ``succeed(value)``
instead of ``value`` if necessary. This avoids forcing the caller to
check whether the value is a deferred (eg. by using
``maybeDeferred()``), which is both annoying to code and potentially
expensive at runtime.
* Be proactive about closing files and file-like objects.
For a lot of Python software, letting Python close the stream for
you works fine, but in a long-lived server that's processing many
data streams at a time, it is important to close them as soon as
possible.
On some platforms (eg. Windows), deleting a file will fail if the
file is still open. By leaving it up to Python to decide when to
close a file, you may find yourself being unable to reliably delete
it.
The most reliable way to ensure that a stream is closed is to put
the call to ``close()`` in a ``finally`` block:
::
stream = file(somePath)
try:
... do something with stream ...
finally:
stream.close()
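On Python 2.6 or later (the project's minimum), a ``with`` statement gives the same guarantee more concisely, closing the stream when the block exits, normally or via an exception:

```python
import os
import tempfile

fd, path = tempfile.mkstemp()
os.close(fd)

with open(path, "w") as stream:
    stream.write("some data")

# The stream is already closed here, so deleting the file is safe
# even on platforms (eg. Windows) where open files cannot be deleted.
os.remove(path)
```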
Testing
=======
Be sure that all of the unit tests pass before you commit new code.
Code that breaks unit tests may be reverted without further
discussion; it is up to the committer to fix the problem and try
again.
Note that repeatedly committing code that breaks unit tests presents
a possible time sink for other developers, and is not looked upon
favorably.
Unit tests can be run rather easily by executing the ``./bin/test`` script
at the top of the Calendar and Contacts Server source tree. By
default, it will run all of the Calendar and Contacts Server tests
followed by all of the Twisted tests. You can run specific tests by
specifying them as arguments like this:
::
./bin/test twistedcaldav.static
All non-trivial public callables must have unit tests. (Note we don't
totally comply with this rule; that's a problem we'd like to fix.)
All other callables should have unit tests.
Unit tests are written using the ``twisted.trial`` framework. Test
module names should start with ``test_``. Twisted has some tips on
writing tests here:
http://twistedmatrix.com/projects/core/documentation/howto/testing.html
http://twistedmatrix.com/trac/browser/trunk/doc/development/policy/test-standard.xhtml?format=raw
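A minimal sketch of a trial-style test module; in the tree it would live in a file named something like ``test_formatting.py`` so trial can discover it. Since ``twisted.trial``'s ``TestCase`` extends the stdlib ``unittest.TestCase``, the fallback import below behaves identically for simple synchronous tests:

```python
try:
    from twisted.trial.unittest import TestCase
except ImportError:
    # Stdlib fallback; trial's TestCase is compatible for sync tests.
    from unittest import TestCase

class FormattingTests(TestCase):
    """
    Tests for PEP-3101 string formatting.
    """
    def test_format(self):
        self.assertEqual(
            "Unexpected value: {value}".format(value=42),
            "Unexpected value: 42",
        )
```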
We also use CalDAVTester (which is a companion to the Calendar and
Contacts Server in the same Mac OS Forge project), which performs more
"black box"-type testing against the server to ensure compliance with
the CalDAV protocol. That requires running the server with a test
configuration and then running CalDAVTester against it. Information
about CalDAVTester is available here:
http://trac.calendarserver.org/projects/calendarserver/wiki/CalDAVTester
Commit Policy
=============
We follow a commit-then-review policy for relatively "safe" changes to
the code. If you have a rather straightforward change or are working
on new functionality that does not affect existing functionality, you
can commit that code without review at your discretion.
Developers are encouraged to monitor the commit notifications that are
sent via email after each commit and review/critique/comment on
modifications as appropriate.
Any changes that impact existing functionality should be reviewed by
another developer before being committed. Large changes should be
made on a branch and merged after review.
This policy relies on the discretion of committers.
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
==========================
Getting Started
==========================
This is the core code base for the Calendar and Contacts Server, which is a CalDAV, CardDAV, WebDAV, and HTTP server.
For general information about the server, see http://www.calendarserver.org/
==========================
Copyright and License
==========================
Copyright (c) 2005-2015 Apple Inc. All rights reserved.
This software is licensed under the Apache License, Version 2.0. The Apache License is a well-established open source license, enabling collaborative open source software development.
See the "LICENSE" file for the full text of the license terms.
==========================
QuickStart
==========================
**WARNING**: these instructions are for running a server from the source tree, which is useful for development. These are not the correct steps for running the server in deployment or as part of an OS install. You should not be using the run script in system startup files (eg. /etc/init.d); it does things (like download software) that you don't want to happen in that context.
Begin by creating a directory to contain Calendar and Contacts Server and all its dependencies::
mkdir ~/CalendarServer
cd CalendarServer
Next, check out the source code from the SVN repository. To check out the latest trunk code::
svn co http://svn.calendarserver.org/repository/calendarserver/CalendarServer/trunk/ trunk
The server requires various external libraries in order to operate. The bin/develop script in the sources will retrieve these dependencies and install them to the .develop directory. Note that this behavior is currently also a side-effect of bin/run, but that is likely to change in the future::
cd trunk
./bin/develop
____________________________________________________________
Using system version of libffi.
____________________________________________________________
Using system version of OpenLDAP.
____________________________________________________________
Using system version of SASL.
____________________________________________________________
...
Pip is used to retrieve the Python dependencies and stage them for use by virtualenv; however, if your system does not have pip (and virtualenv), you can use bin/install_pip (or your installation / upgrade method of choice).
Before you can run the server, you need to set up a configuration file for development. There is a provided test configuration that you can use to start with, conf/caldavd-test.plist, which can be copied to conf/caldavd-dev.plist (the default config file used by the bin/run script). If conf/caldavd-dev.plist is not present when the server starts, you will be prompted to create a new one from conf/caldavd-test.plist.
You will need to choose a directory service to use to populate your server's principals (users, groups, resources, and locations). A directory service provides the Calendar and Contacts Server with information about these principals. The directory services supported by Calendar and Contacts Server are:
- XMLDirectoryService: this service is configurable via an XML file that contains principal information. The file conf/auth/accounts.xml provides an example principals configuration.
- OpenDirectoryService: this service uses Apple's OpenDirectory client, the bulk of the configuration for which is handled external to Calendar and Contacts Server (e.g. System Preferences --> Users & Groups --> Login Options --> Network Account Server).
- LdapDirectoryService: a highly flexible LDAP client that can leverage existing LDAP servers. See ``twistedcaldav/stdconfig.py`` for the available LdapDirectoryService options and their defaults.
The caldavd-test.plist configuration uses XMLDirectoryService by default, set up to use conf/auth/accounts-test.xml. This is a generally useful configuration for development and testing.
This file contains a user principal, named admin, with password admin, which is set up (in caldavd-test.plist) to have administrative permissions on the server.
Start the server using the bin/run script, and use the -n option to bypass dependency setup::
bin/run -n
Using /Users/andre/CalendarServer/trunk/.develop/roots/py_modules/bin/python as Python
Missing config file: /Users/andre/CalendarServer/trunk/conf/caldavd-dev.plist
You might want to start by copying the test configuration:
cp conf/caldavd-test.plist conf/caldavd-dev.plist
Would you like to copy the test configuration now? [y/n]y
Copying test configuration...
Starting server...
The server should then start up and bind to port 8008 for HTTP and 8443 for HTTPS. You should then be able to connect to the server using your web browser (eg. Safari, Firefox) or with a CalDAV client (eg. Calendar).
# -*- sh-basic-offset: 2 -*-
##
# Copyright (c) 2005-2015 Apple Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
##
. "${wd}/bin/_py.sh";
# Provide a default value: if the variable named by the first argument is
# empty, set it to the default in the second argument.
conditional_set () {
local var="$1"; shift;
local default="$1"; shift;
if [ -z "$(eval echo "\${${var}:-}")" ]; then
eval "${var}=\${default:-}";
fi;
}
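The helper above can be exercised in isolation; ``conditional_set`` (reproduced here so the snippet is self-contained) assigns the default only when the named variable is empty or unset:

```shell
conditional_set () {
  local var="$1"; shift;
  local default="$1"; shift;
  if [ -z "$(eval echo "\${${var}:-}")" ]; then
    eval "${var}=\${default:-}";
  fi;
}

do_get="";
conditional_set do_get "true";
echo "${do_get}";    # -> true

do_setup="false";
conditional_set do_setup "true";
echo "${do_setup}";  # -> false: an existing value is left alone
```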
c_macro () {
local sys_header="$1"; shift;
local version_macro="$1"; shift;
local value="$(printf "#include <${sys_header}>\n${version_macro}\n" | cc -x c -E - 2>/dev/null | tail -1)";
if [ "${value}" = "${version_macro}" ]; then
# Macro was not replaced
return 1;
fi;
echo "${value}";
}
# Checks for presence of a C header, optionally with a version comparison.
# With only a header file name, try to include it, returning nonzero if absent.
# With 3 params, also attempt a version check, returning nonzero if too old.
# Param 2 is a minimum acceptable version number
# Param 3 is a #define from the source that holds the installed version number
# Examples:
# Assert that ldap.h is present
# find_header "ldap.h"
# Assert that ldap.h is present with a version >= 20344
# find_header "ldap.h" 20344 "LDAP_VENDOR_VERSION"
find_header () {
local sys_header="$1"; shift;
if [ $# -ge 1 ]; then
local min_version="$1"; shift;
local version_macro="$1"; shift;
fi;
# No min_version given:
# Check for presence of a header. We use the "-c" cc option because we don't
# need to emit a file; cc exits nonzero if it can't find the header
if [ -z "${min_version:-}" ]; then
echo "#include <${sys_header}>" | cc -x c -c - -o /dev/null 2> /dev/null;
return "$?";
fi;
# Check for presence of a header of specified version
local found_version="$(c_macro "${sys_header}" "${version_macro}")";
if [ -n "${found_version}" ] && cmp_version "${min_version}" "${found_version}"; then
return 0;
else
return 1;
fi;
};
# Initialize all the global state required to use this library.
init_build () {
cd "${wd}";
init_py;
# These variables are defaults for things which might be configured by
# environment; only set them if they're un-set.
conditional_set wd "$(pwd)";
conditional_set do_get "true";
conditional_set do_setup "true";
conditional_set force_setup "false";
conditional_set requirements "${wd}/requirements-dev.txt";
dev_home="${wd}/.develop";
dev_roots="${dev_home}/roots";
dep_packages="${dev_home}/pkg";
dep_sources="${dev_home}/src";
py_virtualenv="${dev_home}/virtualenv";
py_bindir="${py_virtualenv}/bin";
python="${bootstrap_python}";
export PYTHON="${python}";
if [ -z "${TWEXT_PKG_CACHE-}" ]; then
dep_packages="${dev_home}/pkg";
else
dep_packages="${TWEXT_PKG_CACHE}";
fi;
project="$(setup_print name)" || project="";
# Find some hashing commands
# sha1() = sha1 hash, if available
# md5() = md5 hash, if available
# hash() = default hash function
# $hash = name of the type of hash used by hash()
hash="";
if find_cmd openssl > /dev/null; then
if [ -z "${hash}" ]; then hash="md5"; fi;
# strip the "(stdin)= " prefix which openssl emits on some platforms
md5 () { "$(find_cmd openssl)" dgst -md5 "$@" | sed 's/^.* //'; }
elif find_cmd md5 > /dev/null; then
if [ -z "${hash}" ]; then hash="md5"; fi;
md5 () { "$(find_cmd md5)" "$@"; }
elif find_cmd md5sum > /dev/null; then
if [ -z "${hash}" ]; then hash="md5"; fi;
md5 () { "$(find_cmd md5sum)" "$@"; }
fi;
if find_cmd sha1sum > /dev/null; then
if [ -z "${hash}" ]; then hash="sha1"; fi;
sha1 () { "$(find_cmd sha1sum)" "$@"; }
fi;
if find_cmd shasum > /dev/null; then
if [ -z "${hash}" ]; then hash="sha1"; fi;
sha1 () { "$(find_cmd shasum)" "$@"; }
fi;
if [ "${hash}" = "sha1" ]; then
hash () { sha1 "$@"; }
elif [ "${hash}" = "md5" ]; then
hash () { md5 "$@"; }
elif find_cmd cksum > /dev/null; then
hash="hash";
hash () { cksum "$@" | cut -f 1 -d " "; }
elif find_cmd sum > /dev/null; then
hash="hash";
hash () { sum "$@" | cut -f 1 -d " "; }
else
hash () { echo "INTERNAL ERROR: No hash function."; exit 1; }
fi;
}
setup_print () {
local what="$1"; shift;
PYTHONPATH="${wd}:${PYTHONPATH:-}" "${bootstrap_python}" - 2>/dev/null << EOF
from __future__ import print_function
import setup
print(setup.${what})
EOF
}
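# Example: print attributes of the project's setup module, e.g.:
#   setup_print name     # the project name, as used by init_build above
#   setup_print version  # the version string, assuming setup.py defines one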
# If do_get is turned on, get an archive file containing a dependency via HTTP.
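# Example invocation (hypothetical package name, URL, and MD5 sum):
#   www_get -m d41d8cd98f00b204e9800998ecf8427e \
#     "libfoo" "${dep_sources}/libfoo-1.0" "http://example.com/libfoo-1.0.tar.gz"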
www_get () {
if ! "${do_get}"; then return 0; fi;
local md5="";
local sha1="";
local OPTIND=1;
while getopts "m:s:" option; do
case "${option}" in
'm') md5="${OPTARG}"; ;;
's') sha1="${OPTARG}"; ;;
esac;
done;
shift $((${OPTIND} - 1));
local name="$1"; shift;
local path="$1"; shift;
local url="$1"; shift;
if "${force_setup}"; then
rm -rf "${path}";
fi;
if [ ! -d "${path}" ]; then
local ext="$(echo "${url}" | sed 's|^.*\.\([^.]*\)$|\1|')";
local decompress="";
local unpack="";
untar () { tar -xvf -; }
unzipstream () { local tmp="$(mktemp -t ccsXXXXX)"; cat > "${tmp}"; unzip "${tmp}"; rm "${tmp}"; }
case "${ext}" in
gz|tgz) decompress="gzip -d -c"; unpack="untar"; ;;
bz2) decompress="bzip2 -d -c"; unpack="untar"; ;;
tar) decompress="untar"; unpack="untar"; ;;
zip) decompress="cat"; unpack="unzipstream"; ;;
*)
echo "Error in www_get of URL ${url}: Unknown extension ${ext}";
exit 1;
;;
esac;
echo "";
if [ -n "${dep_packages}" ] && [ -n "${hash}" ]; then
mkdir -p "${dep_packages}";
local cache_basename="$(echo ${name} | tr '[ ]' '_')-$(echo "${url}" | hash)-$(basename "${url}")";
local cache_file="${dep_packages}/${cache_basename}";
check_hash () {
local file="$1"; shift;
local sum="$(md5 "${file}" | perl -pe 's|^.*([0-9a-f]{32}).*$|\1|')";
if [ -n "${md5}" ]; then
echo "Checking MD5 sum for ${name}...";
if [ "${md5}" != "${sum}" ]; then
echo "ERROR: MD5 sum for downloaded file is wrong: ${sum} != ${md5}";
return 1;
fi;
else
echo "MD5 sum for ${name} is ${sum}";
fi;
local sum="$(sha1 "${file}" | perl -pe 's|^.*([0-9a-f]{40}).*$|\1|')";
if [ -n "${sha1}" ]; then
echo "Checking SHA1 sum for ${name}...";
if [ "${sha1}" != "${sum}" ]; then
echo "ERROR: SHA1 sum for downloaded file is wrong: ${sum} != ${sha1}";
return 1;
fi;
else
echo "SHA1 sum for ${name} is ${sum}";
fi;
}
if [ ! -f "${cache_file}" ]; then
echo "No cache file: ${cache_file}";
echo "Downloading ${name}...";
local pkg_host="static.calendarserver.org";
local pkg_path="/pkg";
#
# Try getting a copy from calendarserver.org.
#
local tmp="$(mktemp "/tmp/${cache_basename}.XXXXXX")";
curl -L "http://${pkg_host}${pkg_path}/${cache_basename}" -o "${tmp}" || true;
echo "";
if [ ! -s "${tmp}" ] || grep '404 Not Found' "${tmp}" > /dev/null; then
rm -f "${tmp}";
echo "${name} is not available from calendarserver.org; trying upstream source.";
elif ! check_hash "${tmp}"; then
rm -f "${tmp}";
echo "${name} from calendarserver.org is invalid; trying upstream source.";
fi;
#
# That didn't work. Try getting a copy from the upstream source.
#
if [ ! -f "${tmp}" ]; then
curl -L "${url}" -o "${tmp}";
echo "";
if [ ! -s "${tmp}" ] || grep '404 Not Found' "${tmp}" > /dev/null; then
rm -f "${tmp}";
echo "${name} is not available from upstream source: ${url}";
exit 1;
elif ! check_hash "${tmp}"; then
rm -f "${tmp}";
echo "${name} from upstream source is invalid: ${url}";
exit 1;
fi;
if egrep "^${pkg_host}" "${HOME}/.ssh/known_hosts" > /dev/null 2>&1; then
echo "Copying cache file up to ${pkg_host}.";
if ! scp "${tmp}" "${pkg_host}:/var/www/static${pkg_path}/${cache_basename}"; then
echo "Failed to copy cache file up to ${pkg_host}.";
fi;
echo ""
fi;
fi;
#
# OK, we should be good
#
mv "${tmp}" "${cache_file}";
else
#
# We have the file cached, just verify hash
#
if ! check_hash "${cache_file}"; then
exit 1;
fi;
fi;
echo "Unpacking ${name} from cache...";
get () { cat "${cache_file}"; }
else
echo "Downloading ${name}...";
get () { curl -L "${url}"; }
fi;
rm -rf "${path}";
cd "$(dirname "${path}")";
get | ${decompress} | ${unpack};
cd "${wd}";
fi;
}
# Run 'make' with the given command line, prepending a -j option appropriate to
# the number of CPUs on the current machine, if that can be determined.
jmake () {
local ncpu="";
case "$(uname -s)" in
Darwin|Linux)
ncpu="$(getconf _NPROCESSORS_ONLN)";
;;
FreeBSD)
ncpu="$(sysctl hw.ncpu)";
ncpu="${ncpu##hw.ncpu: }";
;;
esac;
if [ -n "${ncpu:-}" ] && [[ "${ncpu}" =~ ^[0-9]+$ ]]; then
make -j "${ncpu}" "$@";
else
make "$@";
fi;
}
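# Example: on a 4-CPU machine, "jmake all" runs "make -j 4 all"; if the CPU
# count cannot be determined, it falls back to plain "make all".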
# Declare a dependency on a C project built with autotools.
# The configure script (-c), prebuild command (-p), and build command (-b) may
# be customized; the prebuild and build phases may be skipped by passing the
# corresponding option with the empty string as the value.
# By default, do: ./configure --prefix ... ; jmake ; make install
c_dependency () {
local f_hash="";
local configure="configure";
local prebuild_cmd="";
local build_cmd="jmake";
local install_cmd="make install";
local OPTIND=1;
while getopts "m:s:c:p:b:" option; do
case "${option}" in
'm') f_hash="-m ${OPTARG}"; ;;
's') f_hash="-s ${OPTARG}"; ;;
'c') configure="${OPTARG}"; ;;
'p') prebuild_cmd="${OPTARG}"; ;;
'b') build_cmd="${OPTARG}"; ;;
esac;
done;
shift $((${OPTIND} - 1));
local name="$1"; shift;
local path="$1"; shift;
local uri="$1"; shift;
# Extra arguments are processed below, as arguments to configure.
mkdir -p "${dep_sources}";
local srcdir="${dep_sources}/${path}";
local dstroot="${dev_roots}/${name}";
www_get ${f_hash} "${name}" "${srcdir}" "${uri}";
export PATH="${dstroot}/bin:${PATH}";
export C_INCLUDE_PATH="${dstroot}/include:${C_INCLUDE_PATH:-}";
export LD_LIBRARY_PATH="${dstroot}/lib:${dstroot}/lib64:${LD_LIBRARY_PATH:-}";
export CPPFLAGS="-I${dstroot}/include ${CPPFLAGS:-} ";
export LDFLAGS="-L${dstroot}/lib -L${dstroot}/lib64 ${LDFLAGS:-} ";
export DYLD_LIBRARY_PATH="${dstroot}/lib:${dstroot}/lib64:${DYLD_LIBRARY_PATH:-}";
export PKG_CONFIG_PATH="${dstroot}/lib/pkgconfig:${PKG_CONFIG_PATH:-}";
if "${do_setup}"; then
if "${force_setup}"; then
rm -rf "${dstroot}";
fi;
if [ ! -d "${dstroot}" ]; then
echo "Building ${name}...";
cd "${srcdir}";
"./${configure}" --prefix="${dstroot}" "$@";
if [ ! -z "${prebuild_cmd}" ]; then
eval ${prebuild_cmd};
fi;
eval ${build_cmd};
eval ${install_cmd};
cd "${wd}";
else
echo "Using built ${name}.";
echo "";
fi;
fi;
}
ruler () {
if "${do_setup}"; then
echo "____________________________________________________________";
echo "";
if [ $# -gt 0 ]; then
echo "$@";
fi;
fi;
}
using_system () {
if "${do_setup}"; then
local name="$1"; shift;
echo "Using system version of ${name}.";
echo "";
fi;
}
#
# Build C dependencies
#
c_dependencies () {
local c_glue_root="${dev_roots}/c_glue";
local c_glue_include="${c_glue_root}/include";
export C_INCLUDE_PATH="${c_glue_include}:${C_INCLUDE_PATH:-}";
# The OpenSSL version number is special. Our strategy is to get the integer
# value of OPENSSL_VERSION_NUMBER for use in inequality comparison.
ruler;
local min_ssl_version="9470463"; # OpenSSL 0.9.8zf
local ssl_version="$(c_macro openssl/ssl.h OPENSSL_VERSION_NUMBER)";
if [ -z "${ssl_version}" ]; then ssl_version="0x0"; fi;
ssl_version="$("${bootstrap_python}" -c "print ${ssl_version}")";
if [ "${ssl_version}" -ge "${min_ssl_version}" ]; then
using_system "OpenSSL";
else
local v="0.9.8zf";
local n="openssl";
local p="${n}-${v}";
# use 'config' instead of 'configure'; 'make' instead of 'jmake'.
# also pass 'shared' to config to build shared libs.
c_dependency -c "config" -m "c69a4a679233f7df189e1ad6659511ec" \
-p "make depend" -b "make" \
"openssl" "${p}" \
"http://www.openssl.org/source/${p}.tar.gz" "shared";
fi;
ruler;
if find_header ffi.h; then
using_system "libffi";
elif find_header ffi/ffi.h; then
if "${do_setup}"; then
mkdir -p "${c_glue_include}";
echo "#include <ffi/ffi.h>" > "${c_glue_include}/ffi.h";
using_system "libffi";
fi;
else
local v="3.0.13";
local n="libffi";
local p="${n}-${v}";
c_dependency -m "45f3b6dbc9ee7c7dfbbbc5feba571529" \
"libffi" "${p}" \
"ftp://sourceware.org/pub/libffi/${p}.tar.gz"
fi;
ruler;
if find_header ldap.h 20428 LDAP_VENDOR_VERSION; then
using_system "OpenLDAP";
else
local v="2.4.38";
local n="openldap";
local p="${n}-${v}";
c_dependency -m "39831848c731bcaef235a04e0d14412f" \
"OpenLDAP" "${p}" \
"http://www.openldap.org/software/download/OpenLDAP/${n}-release/${p}.tgz" \
--disable-bdb --disable-hdb --with-tls=openssl;
fi;
ruler;
if find_header sasl.h; then
using_system "SASL";
elif find_header sasl/sasl.h; then
if "${do_setup}"; then
mkdir -p "${c_glue_include}";
echo "#include <sasl/sasl.h>" > "${c_glue_include}/sasl.h";
using_system "SASL";
fi;
else
local v="2.1.26";
local n="cyrus-sasl";
local p="${n}-${v}";
c_dependency -m "a7f4e5e559a0e37b3ffc438c9456e425" \
"CyrusSASL" "${p}" \
"ftp://ftp.cyrusimap.org/cyrus-sasl/${p}.tar.gz" \
--disable-macos-framework;
fi;
ruler;
if command -v memcached > /dev/null; then
using_system "memcached";
else
local v="2.0.21-stable";
local n="libevent";
local p="${n}-${v}";
c_dependency -m "b2405cc9ebf264aa47ff615d9de527a2" \
"libevent" "${p}" \
"http://github.com/downloads/libevent/libevent/${p}.tar.gz";
local v="1.4.20";
local n="memcached";
local p="${n}-${v}";
c_dependency -m "92f702bcb28d7bec8fdf9418360fc062" \
"memcached" "${p}" \
"http://www.memcached.org/files/${p}.tar.gz";
fi;
ruler;
if command -v postgres > /dev/null; then
using_system "Postgres";
else
local v="9.3.1";
local n="postgresql";
local p="${n}-${v}";
if command -v dtrace > /dev/null; then
local enable_dtrace="--enable-dtrace";
else
local enable_dtrace="";
fi;
c_dependency -m "c003d871f712d4d3895956b028a96e74" \
"PostgreSQL" "${p}" \
"http://ftp.postgresql.org/pub/source/v${v}/${p}.tar.bz2" \
--with-python ${enable_dtrace};
fi;
}
#
# Build Python dependencies
#
py_dependencies () {
python="${py_bindir}/python";
py_ve_tools="${dev_home}/ve_tools";
export PATH="${py_virtualenv}/bin:${PATH}";
export PYTHON="${python}";
export PYTHONPATH="${py_ve_tools}/lib:${wd}:${PYTHONPATH:-}";
# Work around a change in the Xcode tools that breaks Python modules on OS X
# 10.9.2 and prior: clang now raises a hard error on the -mno-fused-madd flag,
# which was used to build the system Python and is therefore passed along by
# distutils.
if [ "$(uname -s)" = "Darwin" ]; then
if "${bootstrap_python}" -c 'import distutils.sysconfig; print distutils.sysconfig.get_config_var("CFLAGS")' \
| grep -e -mno-fused-madd > /dev/null; then
export ARCHFLAGS="-Wno-error=unused-command-line-argument-hard-error-in-future";
fi;
fi;
if ! "${do_setup}"; then return 0; fi;
# Set up virtual environment
if "${force_setup}"; then
# Nuke the virtual environment first
rm -rf "${py_virtualenv}";
fi;
if [ ! -d "${py_virtualenv}" ]; then
bootstrap_virtualenv;
"${bootstrap_python}" -m virtualenv \
--system-site-packages \
--no-setuptools \
"${py_virtualenv}";
fi;
cd "${wd}";
# Make sure setup got called enough to write the version file.
PYTHONPATH="${PYTHONPATH}" "${python}" "${wd}/setup.py" check > /dev/null;
if [ -d "${dev_home}/pip_downloads" ]; then
pip_install="pip_install_from_cache";
else
pip_install="pip_download_and_install";
fi;
ruler "Preparing Python requirements";
echo "";
"${pip_install}" --requirement="${requirements}";
for option in $("${bootstrap_python}" -c 'import setup; print "\n".join(setup.extras_requirements.keys())'); do
ruler "Preparing Python requirements for optional feature: ${option}";
echo "";
if ! "${pip_install}" --editable="${wd}[${option}]"; then
echo "Feature ${option} is optional; continuing.";
fi;
done;
echo "";
}
bootstrap_virtualenv () {
mkdir -p "${py_ve_tools}";
mkdir -p "${py_ve_tools}/lib";
mkdir -p "${py_ve_tools}/junk";
for pkg in \
setuptools-17.0 \
pip-7.0.3 \
virtualenv-13.0.3 \
; do
local name="${pkg%-*}";
local version="${pkg#*-}";
local first="$(echo "${name}" | sed 's|^\(.\).*$|\1|')";
local url="https://pypi.python.org/packages/source/${first}/${name}/${pkg}.tar.gz";
ruler "Downloading ${pkg}";
local tmp="$(mktemp -d -t ccsXXXXX)";
curl -L "${url}" | tar -C "${tmp}" -xvzf -;
cd "${tmp}/$(basename "${pkg}")";
PYTHONPATH="${py_ve_tools}/lib" \
"${bootstrap_python}" setup.py install \
--install-base="${py_ve_tools}" \
--install-lib="${py_ve_tools}/lib" \
--install-headers="${py_ve_tools}/junk" \
--install-scripts="${py_ve_tools}/junk" \
--install-data="${py_ve_tools}/junk" \
; \
cd "${wd}";
rm -rf "${tmp}";
done;
}
pip_download () {
mkdir -p "${dev_home}/pip_downloads";
"${python}" -m pip install \
--disable-pip-version-check \
--download="${dev_home}/pip_downloads" \
--pre --allow-all-external \
--no-cache-dir \
--log-file="${dev_home}/pip.log" \
"$@";
}
pip_install_from_cache () {
"${python}" -m pip install \
--disable-pip-version-check \
--pre --allow-all-external \
--no-index \
--no-cache-dir \
--find-links="${dev_home}/pip_downloads" \
--log-file="${dev_home}/pip.log" \
"$@";
}
pip_download_and_install () {
"${python}" -m pip install \
--disable-pip-version-check \
--pre --allow-all-external \
--no-cache-dir \
--log-file="${dev_home}/pip.log" \
"$@";
}
#
# Set up for development
#
develop () {
init_build;
c_dependencies;
py_dependencies;
}
develop_clean () {
init_build;
# Clean
rm -rf "${dev_roots}";
rm -rf "${py_virtualenv}";
}
develop_distclean () {
init_build;
# Clean
rm -rf "${dev_home}";
}
# File: calendarserver-7.0+dfsg/bin/_py.sh
# -*- sh-basic-offset: 2 -*-
##
# Copyright (c) 2005-2015 Apple Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
##
find_cmd () {
local cmd="$1"; shift;
local path="$(type "${cmd}" 2>/dev/null | sed "s|^${cmd} is \(a tracked alias for \)\{0,1\}||")";
if [ -z "${path}" ]; then
return 1;
fi;
echo "${path}";
}
# Echo the major.minor version of the given Python interpreter.
py_version () {
local python="$1"; shift;
echo "$("${python}" -c "from distutils.sysconfig import get_python_version; print get_python_version()")";
}
#
# Test if a particular python interpreter is available, given the full path to
# that interpreter.
#
try_python () {
local python="$1"; shift;
if [ -z "${python}" ]; then
return 1;
fi;
if ! type "${python}" > /dev/null 2>&1; then
return 1;
fi;
local py_version="$(py_version "${python}")";
if [ "$(echo "${py_version}" | sed 's|\.||')" -lt "25" ]; then
return 1;
fi;
return 0;
}
#
# Detect which version of Python to use, then print out which one was detected.
#
# This will prefer the python interpreter in the PYTHON environment variable.
# If that's not found, it will check for "python2.7" and "python", looking for
# each in your PATH.
#
detect_python_version () {
local v;
local p;
for v in "2.7" ""
do
for p in \
"${PYTHON:=}" \
"python${v}" \
;
do
if p="$(find_cmd "${p}")"; then
if try_python "${p}"; then
echo "${p}";
return 0;
fi;
fi;
done;
done;
return 1;
}
#
# Compare version numbers
#
cmp_version () {
local v="$1"; shift;
local mv="$1"; shift;
local vh;
local mvh;
local result;
while true; do
vh="${v%%.*}"; # Get highest-order segment
mvh="${mv%%.*}";
if [ "${vh}" -gt "${mvh}" ]; then
result=1;
break;
fi;
if [ "${vh}" -lt "${mvh}" ]; then
result=0;
break;
fi;
if [ "${v}" = "${v#*.}" ]; then
# No dots left, so we're ok
result=0;
break;
fi;
if [ "${mv}" = "${mv#*.}" ]; then
# No dots left, so we're not gonna match
result=1;
break;
fi;
v="${v#*.}";
mv="${mv#*.}";
done;
return ${result};
}
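# Examples (the first argument is the minimum version, the second the found
# version; returns 0 when the found version satisfies the minimum):
#   cmp_version 2.4 2.6.1  # returns 0: version 2.6.1 meets the 2.4 minimum
#   cmp_version 2.4 2.3    # returns 1: version 2.3 is too old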
#
# Detect which python to use, and store it in the 'python' variable, as well as
# setting up variables related to version and build configuration.
#
init_py () {
# First, detect the appropriate version of Python to use, based on our version
# requirements and the environment. Note that all invocations of python in
# our build scripts should therefore be '"${python}"', not 'python'; this is
# important on systems with older system pythons (2.4 or earlier) with an
# alternate install of Python, or alternate python installation mechanisms
# like virtualenv.
bootstrap_python="$(detect_python_version)";
# Set the $PYTHON environment variable to an absolute path pointing at the
# appropriate python executable, a standard-ish mechanism used by certain
# non-distutils things that need to find the "right" python. For instance,
# the part of the PostgreSQL build process which builds pl_python. Note that
# detect_python_version, above, already honors $PYTHON, so if this is already
# set it won't be stomped on, it will just be re-set to the same value.
export PYTHON="$(find_cmd ${bootstrap_python})";
if [ -z "${bootstrap_python:-}" ]; then
echo "No suitable python found. Python 2.6 or 2.7 is required.";
exit 1;
fi;
}
# File: calendarserver-7.0+dfsg/bin/benchmark
#!/bin/sh
##
# Copyright (c) 2005-2015 Apple Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
##
set -e;
set -u;
wd="$(cd "$(dirname "$0")/.." && pwd -L)";
. "${wd}/bin/_build.sh";
init_build > /dev/null;
exec "${python}" "${wd}/contrib/performance/benchmark" "$@";
# File: calendarserver-7.0+dfsg/bin/benchreport
#!/bin/sh
##
# Copyright (c) 2005-2015 Apple Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
##
set -e;
set -u;
wd="$(cd "$(dirname "$0")/.." && pwd -L)";
. "${wd}/bin/_build.sh";
init_build > /dev/null;
args="$@"
echo "Parameter: 1";
"${python}" "${wd}/contrib/performance/report" ${args} 1 HTTP summarize;
echo;
echo "Parameter: 9";
"${python}" "${wd}/contrib/performance/report" ${args} 9 HTTP summarize;
echo;
echo "Parameter: 81";
"${python}" "${wd}/contrib/performance/report" ${args} 81 HTTP summarize;
# File: calendarserver-7.0+dfsg/bin/caldavd
#!/bin/sh
# -*- sh-basic-offset: 2 -*-
##
# Copyright (c) 2005-2015 Apple Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
##
#PATH
#PYTHONPATH
daemonize="";
errorlogenabled="";
username="";
groupname="";
configfile="";
twistdpath="$(command -v twistd)";
plugin_name="caldav";
service_type="";
profile="";
twistd_reactor="";
child_reactor="";
extra="";
py_version () {
local python="$1"; shift;
echo "$("${python}" -c "from distutils.sysconfig import get_python_version; print get_python_version()")";
}
try_python () {
local python="$1"; shift;
if [ -z "${python}" ]; then
return 1;
fi;
if ! type "${python}" > /dev/null 2>&1; then
return 1;
fi;
local py_version="$(py_version "${python}")";
if [ "$(echo "${py_version}" | sed 's|\.||')" -lt "25" ]; then
return 1;
fi;
return 0;
}
for v in "2.7" ""; do
for p in \
"${PYTHON:=}" \
"python${v}" \
"/usr/local/bin/python${v}" \
"/usr/local/python/bin/python${v}" \
"/usr/local/python${v}/bin/python${v}" \
"/opt/bin/python${v}" \
"/opt/python/bin/python${v}" \
"/opt/python${v}/bin/python${v}" \
"/Library/Frameworks/Python.framework/Versions/${v}/bin/python" \
"/opt/local/bin/python${v}" \
"/sw/bin/python${v}" \
;
do
if try_python "${p}"; then python="${p}"; break; fi;
done;
if [ -n "${python:-}" ]; then break; fi;
done;
if [ -z "${python:-}" ]; then
echo "No suitable python found.";
exit 1;
fi;
usage ()
{
program="$(basename "$0")";
if [ "${1--}" != "-" ]; then echo "$@"; echo; fi;
echo "Usage: ${program} [-hX] [-u username] [-g groupname] [-T twistd] [-t type] [-f caldavd.plist] [-p statsfile]";
echo "Options:";
echo " -h Print this help and exit";
echo " -X Do not daemonize";
echo " -L Do not log errors to file; instead use stdout";
echo " -u User name to run as";
echo " -g Group name to run as";
echo " -f Configuration file to read";
echo " -T Path to twistd binary";
echo " -t Process type (Master, Slave or Combined)";
echo " -p Path to the desired pstats file.";
echo " -R The Twisted Reactor to run";
if [ "${1-}" = "-" ]; then return 0; fi;
exit 64;
}
while getopts 'hXLu:g:f:T:P:t:p:R:o:' option; do
case "${option}" in
'?') usage; ;;
'h') usage -; exit 0; ;;
'X') daemonize="-n"; ;;
'L') errorlogenabled="-o ErrorLogEnabled=False"; ;;
'f') configfile="-f ${OPTARG}"; ;;
'T') twistdpath="${OPTARG}"; ;;
'u') username="-u ${OPTARG}"; ;;
'g') groupname="-g ${OPTARG}"; ;;
'P') plugin_name="${OPTARG}"; ;;
't') service_type="-o ProcessType=${OPTARG}"; ;;
'p') twistd_profile="--profiler=cprofile-cpu --profile=${OPTARG}/master.pstats --savestats";
profile="-o Profiling/Enabled=True -o Profiling/BaseDirectory=${OPTARG}"; ;;
'R') twistd_reactor="--reactor=${OPTARG}"; child_reactor="-o Twisted/reactor=${OPTARG}"; ;;
'o') extra="${extra} -o ${OPTARG}"; ;;
esac;
done;
shift $((${OPTIND} - 1));
if [ $# != 0 ]; then usage "Unrecognized arguments:" "$@"; fi;
export PYTHONHASHSEED=random
export PYTHONPATH
exec "${python}" "${twistdpath}" ${twistd_profile} ${twistd_reactor} ${daemonize} ${username} ${groupname} "${plugin_name}" ${configfile} ${service_type} ${errorlogenabled} ${profile} ${child_reactor} ${extra};
# File: calendarserver-7.0+dfsg/bin/dependencies
#!/bin/sh
# -*- sh-basic-offset: 2 -*-
##
# Copyright (c) 2015 Apple Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
##
set -e
set -u
wd="$(cd "$(dirname "$0")/.." && pwd)";
. "${wd}/bin/_build.sh";
do_setup="false";
develop > /dev/null;
##
# Argument handling
##
all_extras="false";
output="nested";
extras () {
PYTHONPATH="${wd}:${PYTHONPATH:-}" "${bootstrap_python}" - 2>/dev/null << EOF
from __future__ import print_function
import setup
print(" ".join(setup.extras_requirements.keys()))
EOF
}
usage ()
{
program="$(basename "$0")";
if [ "${1--}" != "-" ]; then echo "$@"; echo; fi;
echo "Usage: ${program} [-hanl] [extra ...]";
echo "Supported extras: $(extras)";
echo "Options:";
echo " -h Print this help and exit";
echo " -a Enable all extras";
echo " -n Output a nested list (default)";
echo " -l Output a simple list";
if [ "${1-}" = "-" ]; then return 0; fi;
exit 64;
}
while getopts 'hanl' option; do
case "${option}" in
'?') usage; ;;
'h') usage -; exit 0; ;;
'a') all_extras="true"; ;;
'n') output="nested"; ;;
'l') output="list"; ;;
esac;
done;
shift $((${OPTIND} - 1));
##
# Do The Right Thing
##
cmd="${py_bindir}/eggdeps";
if "${all_extras}"; then
extras="$(extras)";
else
extras="$@";
fi;
extras="$(echo ${extras} | sed 's| |,|g')";
spec="calendarserver[${extras}]";
case "${output}" in
'nested')
"${cmd}" "${spec}";
;;
'list')
PYTHONPATH="${wd}:${PYTHONPATH:-}" "${bootstrap_python}" - 2>/dev/null << EOF
from __future__ import print_function
import setup
print("\n".join($("${cmd}" --requirements "${spec}")))
EOF
esac;
# File: calendarserver-7.0+dfsg/bin/develop
#!/bin/sh
# -*- sh-basic-offset: 2 -*-
##
# Copyright (c) 2005-2015 Apple Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
##
set -e;
set -u;
if [ -z "${wd:-}" ]; then
wd="$(cd "$(dirname "$0")/.." && pwd)";
fi;
. "${wd}/bin/_build.sh";
develop;
cd ${wd};
#
# Link to scripts for convenience
#
find .develop/virtualenv/bin -type f -name "calendarserver_*" | {
while read source; do
target="${wd}/bin/$(basename ${source})";
rm -f "${target}";
cat << __END__ > "${target}"
#!/bin/sh
export PATH="${PATH:-}";
export LD_LIBRARY_PATH="${LD_LIBRARY_PATH:-}";
export DYLD_LIBRARY_PATH="${DYLD_LIBRARY_PATH:-}";
exec "\$(dirname \$0)/../${source}" --config "${wd}/conf/caldavd-dev.plist" "\$@";
__END__
chmod +x "${target}";
done;
}
#
# Create a subprojects directory with -e checkouts for convenience.
#
find .develop/virtualenv/src -mindepth 1 -maxdepth 1 -type d | {
while read source; do
deps="${wd}/subprojects";
mkdir -p "${deps}";
target="${deps}/$(basename ${source} | sed 's|twextpy|twext|')";
if [ -L "${target}" ]; then
rm "${target}";
fi;
ln -s "../${source}" "${target}";
# NO! This causes bin/develop to hang the second time it is run
#rm -f "${target}/.develop";
#ln -s "${wd}/.develop" "${target}/.develop";
done;
}
echo "Dependency setup complete."
# File: calendarserver-7.0+dfsg/bin/environment
#!/bin/sh
# -*- sh-basic-offset: 2 -*-
##
# Copyright (c) 2005-2015 Apple Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
##
set -e
set -u
wd="$(cd "$(dirname "$0")/.." && pwd)";
. "${wd}/bin/_build.sh";
do_setup="false";
develop > /dev/null;
if [ $# -gt 0 ]; then
for name in "$@"; do
# Use python to avoid using eval.
"${python}" -c 'import os; print os.environ.get("'${name}'", "")';
done;
else
env;
fi;
# File: calendarserver-7.0+dfsg/bin/make-ssl-ca
#!/bin/sh
# -*- sh-basic-offset: 2 -*-
set -e
set -u
##
# Handle command line
##
usage ()
{
program=$(basename "$0");
if [ $# != 0 ]; then echo "$@"; echo ""; fi;
echo "usage: ${program} name";
}
if [ $# != 1 ]; then
usage;
exit 1;
fi;
name="$1";
##
# Do The Right Thing
##
newfile ()
{
# New file is not readable and empty
local name="$1";
rm -f "${name}";
tmp="$(mktemp "${name}")";
if [ "${tmp}" != "${name}" ]; then
mv "${tmp}" "${name}";
fi;
}
if [ ! -s "${name}.key" ]; then
echo "Generating certificate authority key...";
newfile "${name}.key";
openssl genrsa -des3 -out "${name}.key" 2048;
echo "";
else
echo "Key for ${name} already exists.";
fi;
if [ ! -s "${name}.crt" ]; then
echo "Generating certificate...";
openssl req -new -x509 -days 3650 -key "${name}.key" -out "${name}.crt";
chmod 644 "${name}.crt";
echo "";
else
echo "Certificate for ${name} already exists.";
fi;
# Print the certificate
openssl x509 -in "${name}.crt" -text -noout;
# File: calendarserver-7.0+dfsg/bin/make-ssl-key
#!/bin/sh
# -*- sh-basic-offset: 2 -*-
set -e
set -u
##
# Handle command line
##
authority="ca";
usage ()
{
program=$(basename "$0");
if [ $# != 0 ]; then echo "$@"; echo ""; fi;
echo "usage: ${program} [options] host_name";
echo "";
echo " -h Show this help";
echo " -a authority Use given certificate authority [${authority}].";
}
while getopts 'ha:' option; do
case "$option" in
'?') usage; exit 64; ;;
'h') usage; exit 0; ;;
'a') authority="${OPTARG}"; ;;
esac;
done;
shift $((${OPTIND} - 1));
if [ $# != 1 ]; then
usage;
exit 64;
fi;
host="$1";
##
# Do The Right Thing
##
if [ "${authority}" != "NONE" ]; then
if [ ! -s "${authority}.key" ]; then
echo "Not a certificate authority key: ${authority}.key";
exit 1;
fi;
if [ ! -s "${authority}.crt" ]; then
echo "Not a certificate authority certificate: ${authority}.crt";
exit 1;
fi;
fi;
newfile ()
{
# New file is not readable and empty
name="$1";
rm -f "${name}";
tmp="$(mktemp "${name}")";
if [ "${tmp}" != "${name}" ]; then
mv "${tmp}" "${name}";
fi;
}
#
# FIXME:
# Remove requirement that user type in a pass phrase here, which
# we then simply strip out.
#
if [ ! -s "${host}.key" ]; then
echo "Generating host key...";
newfile "${host}.key.tmp";
openssl genrsa -des3 -out "${host}.key.tmp" 1024;
echo "";
echo "Removing pass phrase from key...";
newfile "${host}.key";
openssl rsa -in "${host}.key.tmp" -out "${host}.key";
rm "${host}.key.tmp";
echo "";
else
echo "Key for ${host} already exists.";
fi;
#
# FIXME:
# Remove requirement that user type the common name, which we
# already know ($hostname).
#
if [ ! -s "${host}.csr" ]; then
echo "Generating certificate request...";
newfile "${host}.csr";
openssl req -new -key "${host}.key" -out "${host}.csr";
echo "";
else
echo "Certificate request for ${host} already exists.";
fi;
if [ "${authority}" != "NONE" ]; then
if [ ! -s "${host}.crt" ]; then
echo "Generating certificate...";
openssl x509 -req -in "${host}.csr" -out "${host}.crt" -sha1 -days 3650 \
-CA "${authority}.crt" -CAkey "${authority}.key" -CAcreateserial;
chmod 644 "${host}.crt";
echo "";
else
echo "Certificate for ${host} already exists.";
fi;
# Print the certificate
openssl x509 -in "${host}.crt" -text -noout;
fi;
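A quick way to confirm that a key/CSR/certificate produced by this flow really chains to the CA is `openssl verify`. The sketch below replays the whole pipeline non-interactively (swapping in `-nodes` and `-subj`, as the FIXME notes above suggest); file names are hypothetical and only the stock `openssl` CLI is assumed:

```shell
#!/bin/sh
set -e
set -u

cd "$(mktemp -d)"

# CA key + self-signed CA certificate, as make-ssl-ca would produce.
openssl req -new -x509 -days 1 -nodes -newkey rsa:2048 \
  -keyout ca.key -subj "/CN=Demo CA" -out ca.crt

# Host key, CSR, and CA-signed certificate, as make-ssl-key would produce.
openssl genrsa -out host.key 2048
openssl req -new -key host.key -subj "/CN=localhost" -out host.csr
openssl x509 -req -in host.csr -out host.crt -days 1 \
  -CA ca.crt -CAkey ca.key -CAcreateserial

# Should print "host.crt: OK" if the chain is intact.
openssl verify -CAfile ca.crt host.crt
```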
calendarserver-7.0+dfsg/bin/package 0000775 0000000 0000000 00000010614 12607514243 0017376 0 ustar 00root root 0000000 0000000 #!/bin/sh
# -*- sh-basic-offset: 2 -*-
##
# Copyright (c) 2005-2015 Apple Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
##
##
# WARNING: This script is intended for use by developers working on
# the Calendar Server code base. It is not intended for use in a
# deployment configuration.
#
# DO NOT use this script as a system startup tool (eg. in /etc/init.d,
# /Library/StartupItems, launchd plists, etc.)
#
# For those uses, install the server properly (eg. with "./run -i
# /tmp/foo && cd /tmp/foo && pax -pe -rvw . /") and use the caldavd
# executable to start the server.
##
set -e;
set -u;
wd="$(cd "$(dirname "$0")/.." && pwd)";
#
# Usage
#
clean="false";
usage () {
program="$(basename "$0")";
if [ "${1--}" != "-" ]; then
echo "$@";
echo;
fi;
echo "Usage: ${program} [-hFfn] destination";
echo "Options:";
echo " -h Print this help and exit";
echo " -F Clean and force setup to run";
echo " -f Force setup to run";
echo " -n Do not run setup";
if [ "${1-}" = "-" ]; then
return 0;
fi;
exit 64;
}
parse_options () {
local OPTIND=1;
while getopts "hFfn" option; do
case "${option}" in
'?') usage; ;;
'h') usage -; exit 0; ;;
'F') do_setup="true" ; force_setup="true" ; clean="true" ; ;;
'f') do_setup="true" ; force_setup="true" ; clean="false"; ;;
'n') do_setup="false"; force_setup="false"; clean="false"; ;;
esac;
done;
shift $((${OPTIND} - 1));
if [ $# -le 0 ]; then
usage "No destination provided.";
fi;
destination="$1"; shift;
if [ $# != 0 ]; then
usage "Unrecognized arguments:" "$@";
fi;
}
main () {
. "${wd}/bin/_build.sh";
parse_options "$@";
#
# Build everything
#
if "${clean}"; then
develop_clean;
fi;
install -d "${destination}";
local destination="$(cd "${destination}" && pwd)";
init_build;
dev_roots="${destination}/roots";
py_virtualenv="${destination}/virtualenv";
py_bindir="${py_virtualenv}/bin";
py_ve_tools="${dev_home}/ve_tools";
if [ ! -d "${py_virtualenv}" ]; then
bootstrap_virtualenv;
"${bootstrap_python}" -m virtualenv \
--always-copy \
--system-site-packages \
--no-setuptools \
"${py_virtualenv}";
fi;
c_dependencies;
py_dependencies;
install -d "${destination}/bin";
install -d "${destination}/lib";
cd "${destination}/bin";
find ../virtualenv/bin \
"(" -name "caldavd" -o -name 'calendarserver_*' -o -name "python" ")" \
-exec ln -fs "{}" . ";" \
;
for executable in \
"memcached/bin/memcached" \
"OpenLDAP/bin/ldapsearch" \
"openssl/bin/openssl" \
"PostgreSQL/bin/initdb" \
"PostgreSQL/bin/pg_ctl" \
"PostgreSQL/bin/psql" \
; do
if [ -e "../roots/${executable}" ]; then
ln -s "../roots/${executable}" .;
else
echo "No executable: ${executable}";
fi;
done;
cd "${destination}/lib";
find "../roots" "(" -type f -o -type l ")" \
"(" \
-name '*.so' -o \
-name '*.so.*' -o \
-name '*.dylib' \
")" -print0 \
| xargs -0 -I % ln -s % .;
# Write out environment.sh
local dst="${destination}";
cat > "${dst}/environment.sh" << __EOF__
export PATH="${dst}/bin:\${PATH}";
export C_INCLUDE_PATH="${dst}/include:\${C_INCLUDE_PATH:-}";
export LD_LIBRARY_PATH="${dst}/lib:${dst}/lib64:\${LD_LIBRARY_PATH:-}:\$ORACLE_HOME";
export CPPFLAGS="-I${dst}/include \${CPPFLAGS:-} ";
export LDFLAGS="-L${dst}/lib -L${dst}/lib64 \${LDFLAGS:-} ";
export DYLD_LIBRARY_PATH="${dst}/lib:${dst}/lib64:\${DYLD_LIBRARY_PATH:-}:\$ORACLE_HOME";
__EOF__
# Install CalendarServer into venv
cd "${wd}";
"${python}" setup.py install;
}
main "$@";
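The `environment.sh` heredoc written by `package` mixes both heredoc quoting behaviors: unescaped expansions like `${dst}` are resolved when the file is written, while escaped ones like `\${PATH}` survive as literal `${PATH}` to be expanded when the file is later sourced. A minimal demonstration of just that mechanism (hypothetical `/opt/demo` path):

```shell
#!/bin/sh
set -e
set -u

dst="/opt/demo"
out="$(mktemp)"

# Unescaped ${dst} expands now; escaped \${PATH} is written literally.
cat > "${out}" << __EOF__
export PATH="${dst}/bin:\${PATH}";
__EOF__

cat "${out}"
# → export PATH="/opt/demo/bin:${PATH}";
```

Quoting the delimiter (`<< '__EOF__'`) would instead suppress all expansion, which is why `package` leaves it unquoted and escapes only the variables meant for source time.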
calendarserver-7.0+dfsg/bin/pyflakes 0000775 0000000 0000000 00000001652 12607514243 0017623 0 ustar 00root root 0000000 0000000 #!/bin/sh
# -*- sh-basic-offset: 2 -*-
##
# Copyright (c) 2005-2015 Apple Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
##
set -e;
set -u;
wd="$(cd "$(dirname "$0")/.." && pwd -L)";
. "${wd}/bin/_build.sh";
do_setup="false";
develop > /dev/null;
if [ $# -eq 0 ]; then
set - calendarserver contrib twisted twistedcaldav txdav txweb2;
fi;
echo "Checking modules:" "$@";
exec "${python}" -m pyflakes "$@";
calendarserver-7.0+dfsg/bin/python 0000775 0000000 0000000 00000001434 12607514243 0017324 0 ustar 00root root 0000000 0000000 #!/bin/sh
# -*- sh-basic-offset: 2 -*-
##
# Copyright (c) 2005-2015 Apple Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
##
set -e
set -u
wd="$(cd "$(dirname "$0")/.." && pwd)";
. "${wd}/bin/_build.sh";
do_setup="false";
develop > /dev/null;
exec "${python}" "$@";
calendarserver-7.0+dfsg/bin/run 0000775 0000000 0000000 00000014242 12607514243 0016610 0 ustar 00root root 0000000 0000000 #!/bin/sh
# -*- sh-basic-offset: 2 -*-
##
# Copyright (c) 2005-2015 Apple Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
##
##
# WARNING: This script is intended for use by developers working on
# the Calendar Server code base. It is not intended for use in a
# deployment configuration.
#
# DO NOT use this script as a system startup tool (eg. in /etc/init.d,
# /Library/StartupItems, launchd plists, etc.)
#
# For those uses, install the server properly (eg. with "./run -i
# /tmp/foo && cd /tmp/foo && pax -pe -rvw . /") and use the caldavd
# executable to start the server.
##
set -e;
set -u;
wd="$(cd "$(dirname "$0")/.." && pwd)";
#
# Usage
#
config="${wd}/conf/caldavd-dev.plist";
reactor="";
plugin_name="caldav";
service_type="Combined";
do_setup="true";
run="true";
daemonize="-X -L";
kill="false";
restart="false";
profile="";
usage () {
program="$(basename "$0")";
if [ "${1--}" != "-" ]; then
echo "$@";
echo;
fi;
echo "Usage: ${program} [-hvgsfnpdkrR] [-K key] [-iIb dst] [-t type] [-S statsdirectory] [-P plugin]";
echo "Options:";
echo " -h Print this help and exit";
echo " -s Run setup only; don't run server";
echo " -f Force setup to run";
echo " -n Do not run setup";
echo " -p Print PYTHONPATH value for server and exit";
echo " -e Print =-separated environment variables required to run and exit";
echo " -d Run caldavd as a daemon";
echo " -k Stop caldavd";
echo " -r Restart caldavd";
echo " -K Print value of configuration key and exit";
echo " -t Select the server process type (Master, Slave or Combined) [${service_type}]";
echo " -S Write a pstats object for each process to the given directory when the server is stopped.";
echo " -P Select the twistd plugin name [${plugin_name}]";
echo " -R Twisted Reactor plugin to execute [${reactor}]";
if [ "${1-}" = "-" ]; then
return 0;
fi;
exit 64;
}
# Parse command-line options to set up state which controls the behavior of the
# functions in build.sh.
parse_options () {
OPTIND=1;
print_path="false";
while getopts "ahsfnpedkrc:K:i:I:b:t:S:P:R:" option; do
case "${option}" in
'?') usage; ;;
'h') usage -; exit 0; ;;
'f') force_setup="true"; do_setup="true"; ;;
'k') kill="true"; do_setup="false"; ;;
'r') restart="true"; do_setup="false"; ;;
'd') daemonize=""; ;;
'P') plugin_name="${OPTARG}"; ;;
'c') config="${OPTARG}"; ;;
'R') reactor="-R ${OPTARG}"; ;;
't') service_type="${OPTARG}"; ;;
'K') read_key="${OPTARG}"; ;;
'S') profile="-p ${OPTARG}"; ;;
's') do_setup="true"; run="false"; ;;
'p')
do_get="false"; do_setup="false"; run="false"; print_path="true";
;;
'n') do_setup="false"; force_setup="false"; ;;
esac;
done;
shift $((${OPTIND} - 1));
if [ $# != 0 ]; then
usage "Unrecognized arguments:" "$@";
fi;
}
# Actually run the server. (Or, exit, if things aren't sufficiently set up in
# order to do that.)
run () {
echo "Using ${python} as Python";
if "${run}"; then
if [ ! -f "${config}" ]; then
echo "";
echo "Missing config file: ${config}";
echo "You might want to start by copying the test configuration:";
echo "";
echo " cp conf/caldavd-test.plist conf/caldavd-dev.plist";
echo "";
if [ -t 0 ]; then
# Interactive shell
echo -n "Would you like to copy the test configuration now? [y/n]";
read answer;
case "${answer}" in
y|yes|Y|YES|Yes)
echo "Copying test configuration...";
cp "${wd}/conf/caldavd-test.plist" "${wd}/conf/caldavd-dev.plist";
;;
*)
exit 1;
;;
esac;
else
exit 1;
fi;
fi;
cd "${wd}";
if [ ! -d "${wd}/data" ]; then
mkdir "${wd}/data";
fi;
#if [ "$(uname -s)" = "Darwin" ] && [ "$(uname -r | cut -d . -f 1)" -ge 9 ]; then
# caldavd_wrapper_command="launchctl /";
#else
caldavd_wrapper_command="";
#fi;
echo "";
echo "Starting server...";
export PYTHON="${python}";
exec ${caldavd_wrapper_command} \
"${wd}/bin/caldavd" ${daemonize} \
-f "${config}" \
-P "${plugin_name}" \
-t "${service_type}" \
${reactor} \
${profile};
cd /;
fi;
}
# The main-point of the 'run' script: parse all options, decide what to do,
# then do it.
run_main () {
. "${wd}/bin/_build.sh";
parse_options "$@";
# If we've been asked to read a configuration key, just read it and exit.
if [ -n "${read_key:-}" ]; then
value="$("${wd}/bin/calendarserver_config" "${read_key}")";
IFS="="; set ${value}; echo "$2"; unset IFS;
exit $?;
fi;
if "${print_path}"; then
exec "${wd}/bin/environment" PYTHONPATH;
fi;
# About to do something for real; let's enumerate (and depending on options,
# possibly download and/or install) the dependencies.
develop;
if "${kill}" || "${restart}"; then
pidfile="$("${wd}/bin/calendarserver_config" "-f" "${config}" "PIDFile")";
# Split key and value on "=" and just grab the value
IFS="="; set ${pidfile}; pidfile="$2"; unset IFS;
if [ ! -r "${pidfile}" ]; then
echo "Unreadable PID file: ${pidfile}";
exit 1;
fi;
pid="$(cat "${pidfile}" | head -1)";
if [ -z "${pid}" ]; then
echo "No PID in PID file: ${pidfile}";
exit 1;
fi;
echo "Killing process ${pid}";
kill -TERM "${pid}";
if ! "${restart}"; then
exit 0;
fi;
fi;
run;
}
run_main "$@";
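`run_main` extracts plist values by splitting the `Key=Value` lines printed by `calendarserver_config` on `=` with a temporarily overridden `IFS`. The trick in isolation, with a hypothetical sample line:

```shell
#!/bin/sh
set -e
set -u

# calendarserver_config prints lines like "PIDFile=/var/run/caldavd.pid";
# setting IFS to "=" makes word splitting break the line at the "=",
# and "set --" loads the pieces into $1 and $2.
line="PIDFile=/var/run/caldavd.pid"
IFS="="; set -- ${line}; value="$2"; unset IFS

echo "${value}"
# → /var/run/caldavd.pid
```

Note this keeps only `$2`, so it assumes the value itself contains no further `=` characters — adequate for the paths and ports read here.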
calendarserver-7.0+dfsg/bin/sim 0000775 0000000 0000000 00000001501 12607514243 0016566 0 ustar 00root root 0000000 0000000 #!/bin/sh
# -*- sh-basic-offset: 2 -*-
##
# Copyright (c) 2005-2015 Apple Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
##
set -e;
set -u;
wd="$(cd "$(dirname "$0")/.." && pwd -L)";
. "${wd}/bin/_build.sh";
do_setup="false";
develop > /dev/null;
exec "${python}" "${wd}/contrib/performance/sim" "$@";
calendarserver-7.0+dfsg/bin/test 0000775 0000000 0000000 00000012405 12607514243 0016762 0 ustar 00root root 0000000 0000000 #!/bin/sh
# -*- sh-basic-offset: 2 -*-
##
# Copyright (c) 2005-2015 Apple Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
##
set -e;
set -u;
#
# Initialize build support
#
wd="$(cd "$(dirname "$0")/.." && pwd -L)";
. "${wd}/bin/_build.sh";
init_build > /dev/null;
#
# Options
#
do_setup="false";
do_get="false";
viewlog="false";
random="--random=$(date "+%s")";
no_color="";
until_fail="";
coverage="";
numjobs="";
reactor="";
if [ "$(uname -s)" = "Darwin" ]; then
reactor="--reactor=kqueue";
fi;
usage ()
{
program="$(basename "$0")";
if [ "${1--}" != "-" ]; then echo "$@"; echo; fi;
echo "Usage: ${program} [options]";
echo "Options:";
echo " -h Print this help and exit";
echo " -n Do not use color";
echo " -o Do not run tests in random order.";
echo " -r Use specified seed to determine order.";
echo " -u Run until the tests fail.";
echo " -c Generate coverage reports.";
echo " -L Show logging output in a separate view.";
if [ "${1-}" = "-" ]; then return 0; fi;
exit 64;
}
while getopts "hor:nucj:L" option; do
case "${option}" in
'?') usage; ;;
'h') usage -; exit 0; ;;
'o') random=""; ;;
'r') random="--random=$OPTARG"; ;;
'n') no_color="--reporter=bwverbose"; ;;
'u') until_fail="--until-failure"; ;;
'c') coverage="--coverage"; ;;
'j') numjobs="-j $OPTARG"; ;;
'L') viewlog="true";
esac;
done;
shift $((${OPTIND} - 1));
#
# Dependencies
#
develop > /dev/null 2>> "${dev_home}/setup.log";
echo "Using ${python} as Python:";
"${python}" -V;
echo "";
#
# Clean up
#
find "${wd}" -name \*.pyc -print0 | xargs -0 rm;
#
# Unit tests
#
trial_temp="${dev_home}/trial";
if "${viewlog}"; then
if [ "${TERM_PROGRAM:-}" = "Apple_Terminal" ]; then
if [ ! -d "${trial_temp}" ]; then
mkdir "${trial_temp}";
fi;
touch "${trial_temp}/_trial_marker";
cp /dev/null "${trial_temp}/test.log";
open -a Console "${trial_temp}/test.log";
else
echo "Unable to view log.";
fi;
fi;
error=0;
if [ $# -eq 0 ]; then
lint="true";
# Test these modules with a single invocation of trial
set - calendarserver twistedcaldav txdav txweb2 contrib
cd "${wd}" && \
"${wd}/bin/trial" \
--temp-directory="${dev_home}/trial" \
--rterrors \
${reactor} \
${random} \
${until_fail} \
${no_color} \
${coverage} \
${numjobs} \
"$@" \
|| error=$?;
else
lint="false";
# Loop over the passed-in modules
for module in "$@"; do
if [ -f "${module}" ]; then
module="--testmodule=${module}";
fi;
cd "${wd}" && \
"${wd}/bin/trial" \
--temp-directory="${dev_home}/trial" \
--rterrors \
${reactor} \
${random} \
${until_fail} \
${no_color} \
${coverage} \
${numjobs} \
"${module}" \
|| error=$? ;
done;
fi;
if [ ${error} -ne 0 ]; then
exit ${error};
fi;
#
# Linting
#
if ! "${lint}"; then
exit 0;
fi;
echo "";
echo "Running pyflakes...";
"${python}" -m pip install pyflakes --upgrade >> "${dev_home}/setup.log";
tmp="$(mktemp -t "twext_flakes.XXXXX")";
cd "${wd}" && "${python}" -m pyflakes "$@" | tee "${tmp}" 2>&1;
if [ -s "${tmp}" ]; then
echo "**** Pyflakes says you have some code to clean up. ****";
exit 1;
fi;
rm -f "${tmp}";
#
# Empty files
#
echo "";
echo "Checking for empty files...";
tmp="$(mktemp -t "twext_test_empty.XXXXX")";
find "${wd}" \
'!' '(' \
-type d \
'(' -path '*/.*' -or -name data -or -name build ')' \
-prune \
')' \
-type f -size 0 \
> "${tmp}";
if [ -s "${tmp}" ]; then
echo "**** Empty files: ****";
cat "${tmp}";
exit 1;
fi;
rm -f "${tmp}";
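The empty-file check above relies on `find`'s `-prune` to skip whole subtrees (dot-directories, `data`, `build`) before the `-size 0` test. The same pattern can be tried in isolation; the directory layout below is hypothetical:

```shell
#!/bin/sh
set -e
set -u

d="$(mktemp -d)"
mkdir -p "${d}/src" "${d}/build"
: > "${d}/src/empty.py"        # empty file that should be reported
: > "${d}/build/ignored.py"    # empty, but inside a pruned directory
echo "x" > "${d}/src/full.py"  # non-empty, so not reported

# Prune the build subtree, then report remaining empty regular files.
find "${d}" \
  '!' '(' -type d '(' -name build ')' -prune ')' \
  -type f -size 0
# → prints only ${d}/src/empty.py
```

The `'!' '(' ... -prune ')'` shape makes pruned directories evaluate false (so they are neither printed nor descended into) while ordinary files fall through to the `-type f -size 0` test.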
calendarserver-7.0+dfsg/bin/testpods 0000775 0000000 0000000 00000013151 12607514243 0017647 0 ustar 00root root 0000000 0000000 #!/bin/sh
# -*- sh-basic-offset: 2 -*-
##
# Copyright (c) 2005-2015 Apple Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
##
set -e;
set -u;
wd="$(cd "$(dirname "$0")/.." && pwd -L)";
. "${wd}/bin/_build.sh";
init_build > /dev/null;
cdt="${py_virtualenv}/src/caldavtester";
##
# Command line handling
##
verbose="";
serverinfo="${cdt}/scripts/server/serverinfo-pod.xml";
printres="";
subdir="";
random="--random";
seed="";
ssl="";
cdtdebug="";
usage ()
{
program="$(basename "$0")";
echo "Usage: ${program} [-v] [-s serverinfo]";
echo "Options:";
echo " -d Set the script subdirectory";
echo " -h Print this help and exit";
echo " -o Execute tests in order";
echo " -r Print request and response";
echo " -s Set the serverinfo.xml";
echo " -t Set the CalDAVTester directory";
echo " -x Random seed to use.";
echo " -v Verbose.";
echo " -z Use SSL.";
echo " -D Turn on CalDAVTester debugging";
if [ "${1-}" = "-" ]; then return 0; fi;
exit 64;
}
while getopts 'Dhvrozt:s:d:x:' option; do
case "$option" in
'?') usage; ;;
'h') usage -; exit 0; ;;
't') cdt="${OPTARG}"; serverinfo="${OPTARG}/scripts/server/serverinfo-pod.xml"; ;;
'd') subdir="--subdir=${OPTARG}"; ;;
's') serverinfo="${OPTARG}"; ;;
'r') printres="--always-print-request --always-print-response"; ;;
'v') verbose="v"; ;;
'o') random=""; ;;
'x') seed="--random-seed ${OPTARG}"; ;;
'z') ssl="--ssl"; ;;
'D') cdtdebug="--debug"; ;;
esac;
done;
shift $((${OPTIND} - 1));
if [ $# = 0 ]; then
set - "--all";
fi;
##
# Do The Right Thing
##
do_setup="false";
develop > /dev/null;
# Set up sandbox
sandboxdir="/tmp/cdt_server_sandbox"
if [ -d "${sandboxdir}" ]; then
rm -rf "${sandboxdir}"
fi;
configdir="${sandboxdir}/Config"
serverrootA="${sandboxdir}/podA"
serverrootB="${sandboxdir}/podB"
mkdir -p "${configdir}/auth"
mkdir -p "${serverrootA}/Logs" "${serverrootA}/Run" "${serverrootA}/Data/Documents"
mkdir -p "${serverrootB}/Logs" "${serverrootB}/Run" "${serverrootB}/Data/Documents"
cp conf/caldavd-test.plist "${configdir}/caldavd-cdt.plist"
cp conf/caldavd-test-podA.plist "${configdir}/caldavd-cdt-podA.plist"
cp conf/caldavd-test-podB.plist "${configdir}/caldavd-cdt-podB.plist"
cp conf/auth/proxies-test-pod.xml "${configdir}/auth/proxies-cdt.xml"
cp conf/auth/resources-test-pod.xml "${configdir}/auth/resources-cdt.xml"
cp conf/auth/augments-test-pod.xml "${configdir}/auth/augments-cdt.xml"
cp conf/auth/accounts-test-pod.xml "${configdir}/auth/accounts-cdt.xml"
# Modify the plists
python -c "import plistlib; f=plistlib.readPlist('${configdir}/caldavd-cdt.plist'); f['ConfigRoot'] = '${configdir}'; f['RunRoot'] = 'Run'; f['Authentication']['Kerberos']['Enabled'] = False; plistlib.writePlist(f, '${configdir}/caldavd-cdt.plist');"
python -c "import plistlib; f=plistlib.readPlist('${configdir}/caldavd-cdt-podA.plist'); f['ImportConfig'] = '${configdir}/caldavd-cdt.plist'; f['ServerRoot'] = '${serverrootA}'; f['ConfigRoot'] = '${configdir}'; f['ProxyLoadFromFile'] = '${configdir}/auth/proxies-cdt.xml'; f['ResourceService']['params']['xmlFile'] = '${configdir}/auth/resources-cdt.xml'; f['DirectoryService']['params']['xmlFile'] = '${configdir}/auth/accounts-cdt.xml'; f['AugmentService']['params']['xmlFiles'] = ['${configdir}/auth/augments-cdt.xml']; plistlib.writePlist(f, '${configdir}/caldavd-cdt-podA.plist');"
python -c "import plistlib; f=plistlib.readPlist('${configdir}/caldavd-cdt-podB.plist'); f['ImportConfig'] = '${configdir}/caldavd-cdt.plist'; f['ServerRoot'] = '${serverrootB}'; f['ConfigRoot'] = '${configdir}'; f['ProxyLoadFromFile'] = '${configdir}/auth/proxies-cdt.xml'; f['ResourceService']['params']['xmlFile'] = '${configdir}/auth/resources-cdt.xml'; f['DirectoryService']['params']['xmlFile'] = '${configdir}/auth/accounts-cdt.xml'; f['AugmentService']['params']['xmlFiles'] = ['${configdir}/auth/augments-cdt.xml']; plistlib.writePlist(f, '${configdir}/caldavd-cdt-podB.plist');"
runpod() {
local podsuffix="$1"; shift;
# Start the server
"${wd}/bin/run" -nd -c "${configdir}/caldavd-cdt-${podsuffix}.plist"
/bin/echo -n "Waiting for server ${podsuffix} to start up..."
while [ ! -f "${sandboxdir}/${podsuffix}/Run/caldav-instance-0.pid" ]; do
sleep 1
/bin/echo -n "."
done;
echo "Server ${podsuffix} has started"
}
stoppod() {
local podsuffix="$1"; shift;
echo "Stopping server ${podsuffix}"
"${wd}/bin/run" -nk -c "${configdir}/caldavd-cdt-${podsuffix}.plist"
}
runpod "podA";
runpod "podB";
# Don't exit if testcaldav.py fails, because we need to clean up afterwards.
set +e
# Run CDT
echo ""
echo "Starting CDT run"
cd "${cdt}" && "${python}" testcaldav.py ${random} ${seed} ${ssl} ${cdtdebug} --print-details-onfail ${printres} -s "${serverinfo}" -x scripts/tests-pod ${subdir} "$@";
# Capture exit status of testcaldav.py to use as this script's exit status.
STATUS=$?
# Re-enable exit on failure in case run -nk fails
set -e
stoppod "podA";
stoppod "podB";
# Exit with the exit status of testcaldav.py, to reflect the test suite's result
exit $STATUS
calendarserver-7.0+dfsg/bin/testserver 0000775 0000000 0000000 00000011572 12607514243 0020215 0 ustar 00root root 0000000 0000000 #!/bin/sh
# -*- sh-basic-offset: 2 -*-
##
# Copyright (c) 2005-2015 Apple Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
##
set -e;
set -u;
wd="$(cd "$(dirname "$0")/.." && pwd -L)";
. "${wd}/bin/_build.sh";
init_build > /dev/null;
cdt="${py_virtualenv}/src/caldavtester";
##
# Command line handling
##
verbose="";
serverinfo="${cdt}/scripts/server/serverinfo.xml";
pretests="";
printres="";
subdir="";
random="--random";
seed="";
ssl="";
cdtdebug="";
usage ()
{
program="$(basename "$0")";
echo "Usage: ${program} [-v] [-s serverinfo]";
echo "Options:";
echo " -d Set the script subdirectory";
echo " -h Print this help and exit";
echo " -o Execute tests in order";
echo " -p Run pretests";
echo " -r Print request and response";
echo " -s Set the serverinfo.xml";
echo " -t Set the CalDAVTester directory";
echo " -x Random seed to use.";
echo " -v Verbose.";
echo " -z Use SSL.";
echo " -D Turn on CalDAVTester debugging";
if [ "${1-}" = "-" ]; then return 0; fi;
exit 64;
}
while getopts 'Dhvprozt:s:d:x:' option; do
case "$option" in
'?') usage; ;;
'h') usage -; exit 0; ;;
't') cdt="${OPTARG}"; serverinfo="${OPTARG}/scripts/server/serverinfo.xml"; ;;
'd') subdir="--subdir=${OPTARG}"; ;;
'p') pretests="--pretest CalDAV/pretest.xml --posttest CalDAV/pretest.xml"; ;;
's') serverinfo="${OPTARG}"; ;;
'r') printres="--always-print-request --always-print-response"; ;;
'v') verbose="v"; ;;
'o') random=""; ;;
'x') seed="--random-seed ${OPTARG}"; ;;
'z') ssl="--ssl"; ;;
'D') cdtdebug="--debug"; ;;
esac;
done;
shift $((${OPTIND} - 1));
if [ $# = 0 ]; then
set - "--all";
fi;
##
# Do The Right Thing
##
do_setup="false";
develop > /dev/null;
# Set up sandbox
sandboxdir="/tmp/cdt_server_sandbox💣"
if [ -d "${sandboxdir}" ]; then
rm -rf "${sandboxdir}"
fi;
configdir="${sandboxdir}/Config"
datadir="${sandboxdir}/Data"
mkdir -p "${configdir}/Config" "${sandboxdir}/Logs" "${sandboxdir}/Run" "${datadir}/Documents"
cp conf/caldavd-test.plist "${configdir}/caldavd-cdt.plist"
cp conf/auth/proxies-test.xml "${datadir}/proxies-cdt.xml"
cp conf/auth/resources-test.xml "${datadir}/resources-cdt.xml"
cp conf/auth/augments-test.xml "${datadir}/augments-cdt.xml"
cp conf/auth/accounts-test.xml "${datadir}/accounts-cdt.xml"
# Modify the plist
python -c "import plistlib; f=plistlib.readPlist('${configdir}/caldavd-cdt.plist'); f['EnableTrashCollection'] = True; f['HTTPPort'] = 18008; f['BindHTTPPorts'] = [18008]; f['SSLPort'] = 18443; f['BindSSLPorts'] = [18443]; f['Notifications']['Services']['AMP']['Port'] = 62312; f['ServerRoot'] = u'/tmp/cdt_server_sandbox\ud83d\udca3'; f['ConfigRoot'] = 'Config'; f['RunRoot'] = 'Run'; f['ProxyLoadFromFile'] = u'/tmp/cdt_server_sandbox\ud83d\udca3/Data/proxies-cdt.xml'; f['ResourceService']['params']['xmlFile'] = u'/tmp/cdt_server_sandbox\ud83d\udca3/Data/resources-cdt.xml'; f['DirectoryService']['params']['xmlFile'] = u'/tmp/cdt_server_sandbox\ud83d\udca3/Data/accounts-cdt.xml'; f['AugmentService']['params']['xmlFiles'] = [u'/tmp/cdt_server_sandbox\ud83d\udca3/Data/augments-cdt.xml']; f['Authentication']['Kerberos']['Enabled'] = False; plistlib.writePlist(f, '${configdir}/caldavd-cdt.plist');"
# Modify serverinfo to update ports
sed "s/8008/18008/g;s/8443/18443/g;s/<\!-- \(trash-collection<\/feature>\) -->/\1/g" "${serverinfo}" > "${configdir}/serverinfo-cdt.xml"
serverinfo="${configdir}/serverinfo-cdt.xml"
# Start the server
"${wd}/bin/run" -nd -c "${configdir}/caldavd-cdt.plist"
/bin/echo -n "Waiting for server to start up..."
while [ ! -f "${sandboxdir}/Run/caldav-instance-0.pid" ]; do
sleep 1
/bin/echo -n "."
done;
echo "Server has started"
# Don't exit if testcaldav.py fails, because we need to clean up afterwards.
set +e
# Run CDT
echo "Starting CDT run"
cd "${cdt}" && "${python}" testcaldav.py ${random} ${seed} ${ssl} ${cdtdebug} ${pretests} --print-details-onfail ${printres} -s "${serverinfo}" ${subdir} "$@";
# Capture exit status of testcaldav.py to use as this script's exit status.
STATUS=$?
# Re-enable exit on failure in case run -nk fails
set -e
echo "Stopping server"
"${wd}/bin/run" -nk -c "${configdir}/caldavd-cdt.plist"
# Exit with the exit status of testcaldav.py, to reflect the test suite's result
exit $STATUS
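The `sed` invocation in `testserver` rewrites the well-known ports and un-comments the trash-collection feature in one pass over `serverinfo.xml`. The port-rewriting half on its own, against a hypothetical one-line input instead of the real file:

```shell
#!/bin/sh
set -e
set -u

# Two substitutions separated by ";" run in sequence on each input line,
# mirroring how testserver patches serverinfo.xml port numbers.
echo "<host>localhost:8008</host> <ssl>8443</ssl>" \
  | sed "s/8008/18008/g;s/8443/18443/g"
# → <host>localhost:18008</host> <ssl>18443</ssl>
```

Because the patterns are bare digit strings, any other occurrence of `8008`/`8443` in the file would be rewritten too — acceptable for this test fixture, but worth knowing.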
calendarserver-7.0+dfsg/bin/trial 0000775 0000000 0000000 00000001647 12607514243 0017124 0 ustar 00root root 0000000 0000000 #!/bin/sh
# -*- sh-basic-offset: 2 -*-
##
# Copyright (c) 2005-2015 Apple Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
##
set -e
set -u
wd="$(cd "$(dirname "$0")/.." && pwd)";
. "${wd}/bin/_build.sh";
do_setup="false";
develop > /dev/null;
exec "${python}" \
-c "\
import sys;\
import os;\
sys.path.insert(0, os.path.abspath(os.getcwd()));\
from twisted.scripts.trial import run;\
run()\
" \
"$@";
calendarserver-7.0+dfsg/bin/twistd 0000775 0000000 0000000 00000001650 12607514243 0017321 0 ustar 00root root 0000000 0000000 #!/bin/sh
# -*- sh-basic-offset: 2 -*-
##
# Copyright (c) 2005-2015 Apple Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
##
set -e
set -u
wd="$(cd "$(dirname "$0")/.." && pwd)";
. "${wd}/bin/_build.sh";
do_setup="false";
develop > /dev/null;
exec "${python}" \
-c "\
import sys;\
import os;\
sys.path.insert(0, os.path.abspath(os.getcwd()));\
from twisted.scripts.twistd import run;\
run()\
" \
"$@";
calendarserver-7.0+dfsg/bin/update_copyrights 0000775 0000000 0000000 00000003550 12607514243 0021541 0 ustar 00root root 0000000 0000000 #!/bin/sh
# -*- sh-basic-offset: 2 -*-
##
# Copyright (c) 2013 Apple Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
##
set -e;
set -u;
find_files () {
where="$1"; shift;
find "${where}" \
! \( \
-type d \
\( \
-name .svn -o \
-name build -o \
-name data -o \
-name '_trial_temp*' \
\) \
-prune \
\) \
-type f \
! -name '.#*' \
! -name '#*#' \
! -name '*~' \
! -name '*.pyc' \
! -name '*.log' \
! -name update_copyrights \
-print0;
}
wd="$(pwd -P)";
this_year="$(date "+%Y")";
last_year=$((${this_year} - 1));
tmp="$(mktemp -t "$$")";
find_files "${wd}" > "${tmp}";
ff () { cat "${tmp}"; }
echo "Updating copyrights from ${last_year} to ${this_year}...";
ff | xargs -0 perl -i -pe 's|(Copyright \(c\) .*-)'"${last_year}"'( Apple)|${1}'"${this_year}"'${2}|';
ff | xargs -0 perl -i -pe 's|(Copyright \(c\) )'"${last_year}"'( Apple)|${1}'"${last_year}-${this_year}"'${2}|';
ff | xargs -0 grep -e 'Copyright (c) .* Apple' \
| grep -v -e 'Copyright (c) .*'"${this_year}"' Apple' \
;
rm "${tmp}";
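A quick Python sketch (a hypothetical `update` helper, mirroring the two perl substitutions above) of how the copyright years are rewritten: an existing range ending in last year is extended, and a bare last year becomes a range.

```python
import re

this_year, last_year = "2015", "2014"

def update(line):
    # Extend an existing range that ends in last_year: "2006-2014" -> "2006-2015".
    line = re.sub(r"(Copyright \(c\) .*-)%s( Apple)" % last_year,
                  r"\g<1>%s\g<2>" % this_year, line)
    # Turn a bare last_year into a range: "2014" -> "2014-2015".
    line = re.sub(r"(Copyright \(c\) )%s( Apple)" % last_year,
                  r"\g<1>%s-%s\g<2>" % (last_year, this_year), line)
    return line

print(update("Copyright (c) 2006-2014 Apple Inc."))  # Copyright (c) 2006-2015 Apple Inc.
print(update("Copyright (c) 2014 Apple Inc."))       # Copyright (c) 2014-2015 Apple Inc.
```

Note the two patterns are mutually exclusive on any one line: the first requires a `-` before the year, the second requires the year to follow `(c) ` directly, so applying both in sequence is safe.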
calendarserver-7.0+dfsg/bin/watch_memcached:
#!/bin/sh
# -*- sh-basic-offset: 2 -*-
set -e
set -u
if [ $# -gt 0 ]; then
interval="$1"; shift;
else
interval="5";
fi;
watch ()
{
while :; do
echo stats;
sleep "${interval}";
echo "----------------------------------------" >&2;
done | nc localhost 11211 2>&1;
}
if ! watch; then
echo "Error contacting memcached for stats.";
exit 1;
fi;
calendarserver-7.0+dfsg/calendarserver/__init__.py:
# -*- test-case-name: calendarserver -*-
##
# Copyright (c) 2005-2015 Apple Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
##
"""
CalendarServer application code.
"""
#
# setuptools is annoying
#
from warnings import filterwarnings
filterwarnings("ignore", "Module (.*) was already imported (.*)")
del filterwarnings
#
# Set __version__
#
try:
from calendarserver.version import version as __version__
except ImportError:
__version__ = None
#
# Silence the twisted.mail.imap4 module
#
# from twisted.mail.imap4 import Command
# Command._1_RESPONSES += tuple(['BYE'])
calendarserver-7.0+dfsg/calendarserver/accesslog.py:
##
# Copyright (c) 2006-2015 Apple Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
##
"""
Access logs.
"""
__all__ = [
"DirectoryLogWrapperResource",
"RotatingFileAccessLoggingObserver",
"AMPCommonAccessLoggingObserver",
"AMPLoggingFactory",
]
import collections
import datetime
import json
import os
try:
import psutil
except ImportError:
psutil = None
from sys import platform
import time
from calendarserver.logAnalysis import getAdjustedMethodName, \
getAdjustedClientName
from twext.python.log import Logger
from twext.who.idirectory import RecordType
from txweb2 import iweb
from txweb2.log import BaseCommonAccessLoggingObserver
from txweb2.log import LogWrapperResource
from twisted.internet import protocol, task
from twisted.protocols import amp
from twistedcaldav.config import config
log = Logger()
class DirectoryLogWrapperResource(LogWrapperResource):
def __init__(self, resource, directory):
super(DirectoryLogWrapperResource, self).__init__(resource)
self.directory = directory
def getDirectory(self):
return self.directory
class CommonAccessLoggingObserverExtensions(BaseCommonAccessLoggingObserver):
"""
A base class for our extension to the L{BaseCommonAccessLoggingObserver}
"""
def emit(self, eventDict):
format = None
formatArgs = None
if eventDict.get("interface") is iweb.IRequest:
request = eventDict["request"]
response = eventDict["response"]
loginfo = eventDict["loginfo"]
# Try to determine authentication and authorization identifiers
uid = "-"
if getattr(request, "authnUser", None) is not None:
def convertPrincipaltoShortName(principal):
if principal.record.recordType == RecordType.user:
return principal.record.shortNames[0]
else:
return "({rtype}){name}".format(rtype=principal.record.recordType, name=principal.record.shortNames[0],)
uidn = convertPrincipaltoShortName(request.authnUser)
uidz = convertPrincipaltoShortName(request.authzUser)
if uidn != uidz:
uid = '"{authn} as {authz}"'.format(authn=uidn, authz=uidz,)
else:
uid = uidn
#
# For some methods which basically allow you to tunnel a
# custom request (eg. REPORT, POST), the method name
# itself doesn't tell you much about what action is being
# requested. This allows a method to tack a submethod
# attribute to the request, so we can provide a little
# more detail here.
#
if config.EnableExtendedAccessLog and hasattr(request, "submethod"):
method = "%s(%s)" % (request.method, request.submethod)
else:
method = request.method
# Standard Apache access log fields
format = (
'%(host)s - %(uid)s [%(date)s]'
' "%(method)s %(uri)s HTTP/%(protocolVersion)s"'
' %(statusCode)s %(bytesSent)d'
' "%(referer)s" "%(userAgent)s"'
)
formatArgs = {
"host" : request.remoteAddr.host,
"uid" : uid,
"date" : self.logDateString(response.headers.getHeader("date", 0)),
"method" : method,
"uri" : request.uri.replace('"', "%22"),
"protocolVersion" : ".".join(str(x) for x in request.clientproto),
"statusCode" : response.code,
"bytesSent" : loginfo.bytesSent,
"referer" : request.headers.getHeader("referer", "-"),
"userAgent" : request.headers.getHeader("user-agent", "-"),
}
# Add extended items to format and formatArgs
if config.EnableExtendedAccessLog:
format += ' i=%(serverInstance)s'
formatArgs["serverInstance"] = config.LogID if config.LogID else "0"
if request.chanRequest: # This can be None during tests
format += ' or=%(outstandingRequests)s'
formatArgs["outstandingRequests"] = request.chanRequest.channel.factory.outstandingRequests
# Tags for time stamps collected along the way - the first one in the list is the initial
# time for request creation - we use that to track the entire request/response time
nowtime = time.time()
if config.EnableExtendedTimingAccessLog:
basetime = request.timeStamps[0][1]
request.timeStamps[0] = ("t", nowtime,)
for tag, timestamp in request.timeStamps:
format += " %s=%%(%s).1f" % (tag, tag,)
formatArgs[tag] = (timestamp - basetime) * 1000
if tag != "t":
basetime = timestamp
if len(request.timeStamps) > 1:
format += " t-log=%(t-log).1f"
formatArgs["t-log"] = (timestamp - basetime) * 1000
else:
format += " t=%(t).1f"
formatArgs["t"] = (nowtime - request.timeStamps[0][1]) * 1000
if hasattr(request, "extendedLogItems"):
for k, v in sorted(request.extendedLogItems.iteritems(), key=lambda x: x[0]):
k = str(k).replace('"', "%22")
v = str(v).replace('"', "%22")
if " " in v:
v = '"%s"' % (v,)
format += " %s=%%(%s)s" % (k, k,)
formatArgs[k] = v
# Add the name of the XML error element for debugging purposes
if hasattr(response, "error"):
format += " err=%(err)s"
formatArgs["err"] = response.error.qname()[1]
fwdHeaders = request.headers.getRawHeaders("x-forwarded-for", "")
if fwdHeaders:
# Limit each x-forwarded-for value to 50 characters in case
# someone is trying to overwhelm the logs
forwardedFor = ",".join([hdr[:50] for hdr in fwdHeaders])
forwardedFor = forwardedFor.replace(" ", "")
format += " fwd=%(fwd)s"
formatArgs["fwd"] = forwardedFor
if formatArgs["host"] == "0.0.0.0":
fwdHeaders = request.headers.getRawHeaders("x-forwarded-for", "")
if fwdHeaders:
formatArgs["host"] = fwdHeaders[-1].split(",")[-1].strip()
format += " unix=%(unix)s"
formatArgs["unix"] = "true"
elif "overloaded" in eventDict:
overloaded = eventDict.get("overloaded")
format = (
'%(host)s - %(uid)s [%(date)s]'
' "%(method)s"'
' %(statusCode)s %(bytesSent)d'
' "%(referer)s" "%(userAgent)s"'
)
formatArgs = {
"host" : overloaded.transport.hostname,
"uid" : "-",
"date" : self.logDateString(time.time()),
"method" : "???",
"uri" : "",
"protocolVersion" : "",
"statusCode" : 503,
"bytesSent" : 0,
"referer" : "-",
"userAgent" : "-",
}
if config.EnableExtendedAccessLog:
format += ' p=%(serverPort)s'
formatArgs["serverPort"] = overloaded.transport.server.port
format += ' or=%(outstandingRequests)s'
formatArgs["outstandingRequests"] = overloaded.outstandingRequests
# Write anything we got to the log and stats
if format is not None:
# sanitize output to mitigate log injection
for k, v in formatArgs.items():
if not isinstance(v, basestring):
continue
v = v.replace("\r", "\\r")
v = v.replace("\n", "\\n")
v = v.replace("\"", "\\\"")
formatArgs[k] = v
formatArgs["type"] = "access-log"
formatArgs["log-format"] = format
self.logStats(formatArgs)
class RotatingFileAccessLoggingObserver(CommonAccessLoggingObserverExtensions):
"""
Class to do "apache" style access logging to a rotating log file. The log
file is rotated after midnight each day.
This class also currently handles the collection of system and log statistics.
"""
def __init__(self, logpath):
self.logpath = logpath
self.systemStats = None
self.statsByMinute = []
def accessLog(self, message, allowrotate=True):
"""
Log a message to the file and possibly rotate if date has changed.
@param message: C{str} for the message to log.
@param allowrotate: C{True} if log rotate allowed, C{False} to log to current file
without testing for rotation.
"""
if self.shouldRotate() and allowrotate:
self.flush()
self.rotate()
self.f.write(message + "\n")
def start(self):
"""
Start logging. Open the log file and log an "open" message.
"""
super(RotatingFileAccessLoggingObserver, self).start()
self._open()
self.accessLog("Log opened - server start: [%s]." % (datetime.datetime.now().ctime(),))
def stop(self):
"""
Stop logging. Close the log file and log a "close" message.
"""
self.accessLog("Log closed - server stop: [%s]." % (datetime.datetime.now().ctime(),), False)
super(RotatingFileAccessLoggingObserver, self).stop()
self._close()
if self.systemStats is not None:
self.systemStats.stop()
def _open(self):
"""
Open the log file.
"""
self.f = open(self.logpath, "a", 1)
self.lastDate = self.toDate(os.stat(self.logpath)[8])
def _close(self):
"""
Close the log file.
"""
self.f.close()
def flush(self):
"""
Flush the log file.
"""
self.f.flush()
def shouldRotate(self):
"""
Rotate when the date has changed since last write
"""
if config.RotateAccessLog:
return self.toDate() > self.lastDate
else:
return False
def toDate(self, *args):
"""
Convert a unixtime to (year, month, day) localtime tuple,
or return the current (year, month, day) localtime tuple.
This function primarily exists so that it can be overridden with
gmtime, or a stub, to make unit testing possible.
"""
return time.localtime(*args)[:3]
def suffix(self, tupledate):
"""
Return the suffix given a (year, month, day) tuple or unixtime
"""
try:
return "_".join(map(str, tupledate))
except TypeError:
# argument was not a tuple; try taking a float unixtime
return "_".join(map(str, self.toDate(tupledate)))
def rotate(self):
"""
Rotate the file and create a new one.
If it's not possible to open a new logfile, this will fail silently
and continue logging to the old logfile.
"""
newpath = "%s.%s" % (self.logpath, self.suffix(self.lastDate))
if os.path.exists(newpath):
log.info("Cannot rotate log file to %s because it already exists." % (newpath,))
return
self.accessLog("Log closed - rotating: [%s]." % (datetime.datetime.now().ctime(),), False)
log.info("Rotating log file to: %s" % (newpath,), system="Logging")
self.f.close()
os.rename(self.logpath, newpath)
self._open()
self.accessLog("Log opened - rotated: [%s]." % (datetime.datetime.now().ctime(),), False)
def logStats(self, stats):
"""
Update stats
"""
if self.systemStats is None:
self.systemStats = SystemMonitor()
# Currently only storing stats for access log type
if "type" not in stats or stats["type"] != "access-log":
return
currentStats = self.ensureSequentialStats()
self.updateStats(currentStats, stats)
# The guard above ensures this is an access-log entry
self.accessLog(stats["log-format"] % stats)
def getStats(self):
"""
Return the stats
"""
if self.systemStats is None:
self.systemStats = SystemMonitor()
# The current stats
currentStats = self.ensureSequentialStats()
# Get previous minute details
index = min(2, len(self.statsByMinute))
if index > 0:
previousMinute = self.statsByMinute[-index][1]
else:
previousMinute = self.initStats()
# Do five minute aggregate
fiveMinutes = self.initStats()
index = min(6, len(self.statsByMinute))
for i in range(-index, -1):
stat = self.statsByMinute[i][1]
self.mergeStats(fiveMinutes, stat)
# Do one hour aggregate
oneHour = self.initStats()
index = min(61, len(self.statsByMinute))
for i in range(-index, -1):
stat = self.statsByMinute[i][1]
self.mergeStats(oneHour, stat)
printStats = {
"system": self.systemStats.items,
"current": currentStats,
"1m": previousMinute,
"5m": fiveMinutes,
"1h": oneHour,
}
return printStats
def ensureSequentialStats(self):
"""
Make sure the list of timed stats is contiguous with respect to time.
"""
dtindex = int(time.time() / 60.0) * 60
if len(self.statsByMinute) > 0:
if self.statsByMinute[-1][0] != dtindex:
oldindex = self.statsByMinute[-1][0]
while oldindex != dtindex:
oldindex += 60
self.statsByMinute.append((oldindex, self.initStats(),))
else:
self.statsByMinute.append((dtindex, self.initStats(),))
return self.statsByMinute[-1][1]
def initStats(self):
def initTimeHistogram():
return {
"<10ms": 0,
"10ms<->100ms" : 0,
"100ms<->1s" : 0,
"1s<->10s" : 0,
"10s<->30s" : 0,
"30s<->60s" : 0,
">60s" : 0,
"Over 1s" : 0,
"Over 10s" : 0,
}
return {
"requests" : 0,
"method" : collections.defaultdict(int),
"method-t" : collections.defaultdict(float),
"uid" : collections.defaultdict(int),
"user-agent" : collections.defaultdict(int),
"500" : 0,
"t" : 0.0,
"t-resp-wr" : 0.0,
"slots" : 0,
"T" : initTimeHistogram(),
"T-RESP-WR" : initTimeHistogram(),
"T-MAX" : 0.0,
"cpu" : self.systemStats.items["cpu use"],
}
def updateStats(self, current, stats):
# Gather specific information and aggregate into our persistent stats
adjustedMethod = getAdjustedMethodName(stats)
adjustedClient = getAdjustedClientName(stats)
if current["requests"] == 0:
current["cpu"] = 0.0
current["requests"] += 1
current["method"][adjustedMethod] += 1
current["method-t"][adjustedMethod] += stats.get("t", 0.0)
current["uid"][stats["uid"]] += 1
current["user-agent"][adjustedClient] += 1
if stats["statusCode"] >= 500:
current["500"] += 1
current["t"] += stats.get("t", 0.0)
current["t-resp-wr"] += stats.get("t-resp-wr", 0.0)
current["slots"] += stats.get("outstandingRequests", 0)
current["cpu"] += self.systemStats.items["cpu use"]
def histogramUpdate(t, key):
if t >= 60000.0:
current[key][">60s"] += 1
elif t >= 30000.0:
current[key]["30s<->60s"] += 1
elif t >= 10000.0:
current[key]["10s<->30s"] += 1
elif t >= 1000.0:
current[key]["1s<->10s"] += 1
elif t >= 100.0:
current[key]["100ms<->1s"] += 1
elif t >= 10.0:
current[key]["10ms<->100ms"] += 1
else:
current[key]["<10ms"] += 1
# These counters are cumulative, so do not chain them with elif:
# a time over 10s is also over 1s.
if t >= 1000.0:
current[key]["Over 1s"] += 1
if t >= 10000.0:
current[key]["Over 10s"] += 1
t = stats.get("t", None)
if t is not None:
histogramUpdate(t, "T")
current["T-MAX"] = max(current["T-MAX"], t)
t = stats.get("t-resp-wr", None)
if t is not None:
histogramUpdate(t, "T-RESP-WR")
def mergeStats(self, current, stats):
# Gather specific information and aggregate into our persistent stats
if current["requests"] == 0:
current["cpu"] = 0.0
current["requests"] += stats["requests"]
for method in stats["method"].keys():
current["method"][method] += stats["method"][method]
for method in stats["method-t"].keys():
current["method-t"][method] += stats["method-t"][method]
for uid in stats["uid"].keys():
current["uid"][uid] += stats["uid"][uid]
for ua in stats["user-agent"].keys():
current["user-agent"][ua] += stats["user-agent"][ua]
current["500"] += stats["500"]
current["t"] += stats["t"]
current["t-resp-wr"] += stats["t-resp-wr"]
current["slots"] += stats["slots"]
current["cpu"] += stats["cpu"]
def histogramUpdate(t, key):
if t >= 60000.0:
current[key][">60s"] += 1
elif t >= 30000.0:
current[key]["30s<->60s"] += 1
elif t >= 10000.0:
current[key]["10s<->30s"] += 1
elif t >= 1000.0:
current[key]["1s<->10s"] += 1
elif t >= 100.0:
current[key]["100ms<->1s"] += 1
elif t >= 10.0:
current[key]["10ms<->100ms"] += 1
else:
current[key]["<10ms"] += 1
# These counters are cumulative, so do not chain them with elif:
# a time over 10s is also over 1s.
if t >= 1000.0:
current[key]["Over 1s"] += 1
if t >= 10000.0:
current[key]["Over 10s"] += 1
for bin in stats["T"].keys():
current["T"][bin] += stats["T"][bin]
current["T-MAX"] = max(current["T-MAX"], stats["T-MAX"])
for bin in stats["T-RESP-WR"].keys():
current["T-RESP-WR"][bin] += stats["T-RESP-WR"][bin]
class SystemMonitor(object):
"""
Keeps track of system usage information. This installs a reactor task to
run about once per second and track system use.
"""
CPUStats = collections.namedtuple("CPUStats", ("total", "idle",))
def __init__(self):
self.items = {
"cpu count" : psutil.NUM_CPUS if psutil is not None else -1,
"cpu use" : 0.0,
"memory used" : 0,
"memory percent": 0.0,
"start time" : time.time(),
}
if psutil is not None:
times = psutil.cpu_times()
self.previous_cpu = SystemMonitor.CPUStats(sum(times), times.idle,)
else:
self.previous_cpu = SystemMonitor.CPUStats(0, 0)
self.task = task.LoopingCall(self.update)
self.task.start(1.0)
def stop(self):
"""
Just stop the task
"""
self.task.stop()
def update(self):
# CPU usage based on diff'ing CPU times
if psutil is not None:
times = psutil.cpu_times()
cpu_now = SystemMonitor.CPUStats(sum(times), times.idle,)
try:
self.items["cpu use"] = 100.0 * (1.0 - (cpu_now.idle - self.previous_cpu.idle) / (cpu_now.total - self.previous_cpu.total))
except ZeroDivisionError:
self.items["cpu use"] = 0.0
self.previous_cpu = cpu_now
# Memory usage
if psutil is not None and 'freebsd' not in platform:
mem = psutil.virtual_memory()
self.items["memory used"] = mem.used
self.items["memory percent"] = mem.percent
class LogStats(amp.Command):
arguments = [("message", amp.String())]
class AMPCommonAccessLoggingObserver(CommonAccessLoggingObserverExtensions):
def __init__(self):
self.protocol = None
self._buffer = []
def flushBuffer(self):
if self._buffer:
# Swap out the buffer before sending, so the messages are not
# re-sent if another client connects later.
buffered, self._buffer = self._buffer, []
for msg in buffered:
self.logStats(msg)
def addClient(self, connectedClient):
"""
An AMP client connected; hook it up to this observer.
"""
self.protocol = connectedClient
self.flushBuffer()
def logStats(self, message):
"""
Log server stats via the remote AMP Protocol
"""
if self.protocol is not None:
message = json.dumps(message)
if isinstance(message, unicode):
message = message.encode("utf-8")
d = self.protocol.callRemote(LogStats, message=message)
d.addErrback(log.error)
else:
self._buffer.append(message)
class AMPLoggingProtocol(amp.AMP):
"""
A server side protocol for logging to the given observer.
"""
def __init__(self, observer):
self.observer = observer
super(AMPLoggingProtocol, self).__init__()
def logStats(self, message):
stats = json.loads(message)
self.observer.logStats(stats)
return {}
LogStats.responder(logStats)
class AMPLoggingFactory(protocol.ServerFactory):
def __init__(self, observer):
self.observer = observer
def doStart(self):
self.observer.start()
def doStop(self):
self.observer.stop()
def buildProtocol(self, addr):
return AMPLoggingProtocol(self.observer)
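A standalone sketch (a hypothetical `bucket` helper, not part of this module) of the response-time histogram that `updateStats()` and `mergeStats()` maintain: each time lands in exactly one range bucket, while the "Over 1s" and "Over 10s" counters are cumulative and may both fire for the same request.

```python
def bucket(t_ms):
    """Return the histogram keys a response time (in ms) contributes to."""
    buckets = []
    if t_ms >= 60000.0:
        buckets.append(">60s")
    elif t_ms >= 30000.0:
        buckets.append("30s<->60s")
    elif t_ms >= 10000.0:
        buckets.append("10s<->30s")
    elif t_ms >= 1000.0:
        buckets.append("1s<->10s")
    elif t_ms >= 100.0:
        buckets.append("100ms<->1s")
    elif t_ms >= 10.0:
        buckets.append("10ms<->100ms")
    else:
        buckets.append("<10ms")
    # The "Over" counters are cumulative, not mutually exclusive.
    if t_ms >= 1000.0:
        buckets.append("Over 1s")
    if t_ms >= 10000.0:
        buckets.append("Over 10s")
    return buckets

print(bucket(5.0))      # ['<10ms']
print(bucket(15000.0))  # ['10s<->30s', 'Over 1s', 'Over 10s']
```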
calendarserver-7.0+dfsg/calendarserver/controlsocket.py:
##
# Copyright (c) 2012-2015 Apple Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
##
"""
Multiplexing control socket. Currently used for messages related to queueing
and logging, but extensible to more.
"""
from zope.interface import implements
from twisted.internet.protocol import Factory
from twisted.protocols.amp import BinaryBoxProtocol, IBoxReceiver, IBoxSender
from twisted.application.service import Service
class DispatchingSender(object):
implements(IBoxSender)
def __init__(self, sender, route):
self.sender = sender
self.route = route
def sendBox(self, box):
box['_route'] = self.route
self.sender.sendBox(box)
def unhandledError(self, failure):
self.sender.unhandledError(failure)
class DispatchingBoxReceiver(object):
implements(IBoxReceiver)
def __init__(self, receiverMap):
self.receiverMap = receiverMap
def startReceivingBoxes(self, boxSender):
for key, receiver in self.receiverMap.items():
receiver.startReceivingBoxes(DispatchingSender(boxSender, key))
def ampBoxReceived(self, box):
self.receiverMap[box['_route']].ampBoxReceived(box)
def stopReceivingBoxes(self, reason):
for receiver in self.receiverMap.values():
receiver.stopReceivingBoxes(reason)
class ControlSocket(Factory, object):
"""
An AMP control socket that aggregates other AMP factories. This is the
service that listens in the master process.
"""
def __init__(self):
"""
Initialize this L{ControlSocket}.
"""
self._factoryMap = {}
def addFactory(self, key, otherFactory):
"""
Add another L{Factory} - one that returns L{AMP} instances - to this
socket.
"""
self._factoryMap[key] = otherFactory
def buildProtocol(self, addr):
"""
Build a thing that will multiplex AMP to all the relevant sockets.
"""
receiverMap = {}
for k, f in self._factoryMap.items():
receiverMap[k] = f.buildProtocol(addr)
return BinaryBoxProtocol(DispatchingBoxReceiver(receiverMap))
def doStart(self):
"""
Relay start notification to all added factories.
"""
for f in self._factoryMap.values():
f.doStart()
def doStop(self):
"""
Relay stop notification to all added factories.
"""
for f in self._factoryMap.values():
f.doStop()
class ControlSocketConnectingService(Service, object):
def __init__(self, endpointFactory, controlSocket):
super(ControlSocketConnectingService, self).__init__()
self.endpointFactory = endpointFactory
self.controlSocket = controlSocket
def privilegedStartService(self):
from twisted.internet import reactor
endpoint = self.endpointFactory(reactor)
endpoint.connect(self.controlSocket)
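A minimal sketch (with a hypothetical `FakeSender` standing in for a real AMP box sender) of the multiplexing idea above: `DispatchingSender` stamps every outgoing AMP box with its `_route` key, which is what lets the peer's `DispatchingBoxReceiver` hand each box to the right receiver.

```python
class FakeSender(object):
    """Collects boxes instead of writing them to a transport."""
    def __init__(self):
        self.boxes = []

    def sendBox(self, box):
        self.boxes.append(box)

class DispatchingSender(object):  # same shape as the class in this module
    def __init__(self, sender, route):
        self.sender = sender
        self.route = route

    def sendBox(self, box):
        # Tag the box so the receiving end can dispatch it by route key.
        box['_route'] = self.route
        self.sender.sendBox(box)

sender = FakeSender()
DispatchingSender(sender, "log").sendBox({"message": "hi"})
print(sender.boxes)  # [{'message': 'hi', '_route': 'log'}]
```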
calendarserver-7.0+dfsg/calendarserver/dashboard_service.py:
##
# Copyright (c) 2012-2015 Apple Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
##
from twext.enterprise.jobs.jobitem import JobItem
from twisted.internet.defer import inlineCallbacks, succeed, returnValue
from twisted.internet.error import ConnectError
from twisted.internet.protocol import Factory
from twisted.protocols.basic import LineReceiver
from twistedcaldav.config import config
from txdav.dps.client import DirectoryService as DirectoryProxyClientService
from txdav.who.cache import CachingDirectoryService
import json
"""
Protocol and other items that enable introspection of the server. This will include
server protocol analysis statistics, job queue load, and other useful information that
a server admin or developer would like to keep an eye on.
"""
class DashboardProtocol (LineReceiver):
"""
A protocol that receives a line containing a JSON object representing a request,
and returns a line containing a JSON object as a response.
"""
unknown_cmd = json.dumps({"result": "unknown command"})
bad_cmd = json.dumps({"result": "bad command"})
def lineReceived(self, line):
"""
Process a request which is expected to be a JSON object.
@param line: The line which was received with the delimiter removed.
@type line: C{bytes}
"""
# Look for exit
if line in ("exit", "quit"):
self.sendLine("Done")
self.transport.loseConnection()
return
def _write(result):
self.sendLine(result)
try:
j = json.loads(line)
if isinstance(j, list):
self.process_data(j)
else:
# A request must be a JSON list of data names
_write(self.bad_cmd)
except (ValueError, KeyError):
_write(self.bad_cmd)
@inlineCallbacks
def process_data(self, j):
results = {}
for data in j:
if hasattr(self, "data_{}".format(data)):
result = yield getattr(self, "data_{}".format(data))()
elif data.startswith("stats_"):
result = yield self.data_stats()
result = result.get(data[6:], "")
else:
result = ""
results[data] = result
self.sendLine(json.dumps(results))
def data_stats(self):
"""
Return the logging protocol statistics.
@return: a string containing the JSON result.
@rtype: L{str}
"""
return succeed(self.factory.logger.getStats())
def data_slots(self):
"""
Return the child process slot states.
@return: a string containing the JSON result.
@rtype: L{str}
"""
states = tuple(self.factory.limiter.dispatcher.slavestates) if self.factory.limiter is not None else ()
results = []
for num, status in states:
result = {"slot": num}
result.update(status.items())
results.append(result)
return succeed({"slots": results, "overloaded": self.factory.limiter.overloaded if self.factory.limiter is not None else False})
def data_jobcount(self):
"""
Return a count of job types.
@return: the number of registered job types.
@rtype: L{int}
"""
return succeed(JobItem.numberOfWorkTypes())
@inlineCallbacks
def data_jobs(self):
"""
Return a summary of the job queue.
@return: a string containing the JSON result.
@rtype: L{str}
"""
if self.factory.store:
txn = self.factory.store.newTransaction()
records = (yield JobItem.histogram(txn))
yield txn.commit()
else:
records = {}
returnValue(records)
def data_job_assignments(self):
"""
Return a summary of the job assignments to workers.
@return: a string containing the JSON result.
@rtype: L{str}
"""
if self.factory.store:
pool = self.factory.store.pool
loads = pool.workerPool.eachWorkerLoad()
level = pool.workerPool.loadLevel()
else:
loads = []
level = 0
return succeed({"workers": loads, "level": level})
def data_directory(self):
"""
Return a summary of directory service calls.
@return: the JSON result.
@rtype: L{dict}
"""
try:
return self.factory.directory.stats()
except (AttributeError, ConnectError):
return succeed({})
class DashboardServer(Factory):
protocol = DashboardProtocol
def __init__(self, logObserver, limiter):
self.logger = logObserver
self.limiter = limiter
self.store = None
self.directory = None
def makeDirectoryProxyClient(self):
self.directory = DirectoryProxyClientService(config.DirectoryRealmName)
if config.DirectoryProxy.InProcessCachingSeconds:
self.directory = CachingDirectoryService(
self.directory,
expireSeconds=config.DirectoryProxy.InProcessCachingSeconds
)
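A synchronous sketch (a hypothetical `handle` function, standing in for `DashboardProtocol.lineReceived`/`process_data`) of the wire format: the client sends a JSON list of data names and receives a single JSON object keyed by those names, with unknown names mapping to an empty string.

```python
import json

def handle(line, handlers):
    """Dispatch each requested data name to a handler; unknown names yield ""."""
    keys = json.loads(line)
    results = {}
    for key in keys:
        results[key] = handlers[key]() if key in handlers else ""
    return json.dumps(results)

out = handle('["jobcount", "nope"]', {"jobcount": lambda: 3})
print(out)  # a JSON object such as {"jobcount": 3, "nope": ""}
```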
calendarserver-7.0+dfsg/calendarserver/logAnalysis.py:
##
# Copyright (c) 2012-2015 Apple Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
##
# Adjust method names
# PROPFINDs
METHOD_PROPFIND_CALENDAR_HOME = "PROPFIND Calendar Home"
METHOD_PROPFIND_CACHED_CALENDAR_HOME = "PROPFIND cached Calendar Home"
METHOD_PROPFIND_CALENDAR = "PROPFIND Calendar"
METHOD_PROPFIND_INBOX = "PROPFIND Inbox"
METHOD_PROPFIND_ADDRESSBOOK_HOME = "PROPFIND Adbk Home"
METHOD_PROPFIND_CACHED_ADDRESSBOOK_HOME = "PROPFIND cached Adbk Home"
METHOD_PROPFIND_ADDRESSBOOK = "PROPFIND Adbk"
METHOD_PROPFIND_DIRECTORY = "PROPFIND Directory"
METHOD_PROPFIND_PRINCIPALS = "PROPFIND Principals"
METHOD_PROPFIND_CACHED_PRINCIPALS = "PROPFIND cached Principals"
# PROPPATCHs
METHOD_PROPPATCH_CALENDAR = "PROPPATCH Calendar"
METHOD_PROPPATCH_ADDRESSBOOK = "PROPPATCH Adbk Home"
# REPORTs
METHOD_REPORT_CALENDAR_MULTIGET = "REPORT cal-multi"
METHOD_REPORT_CALENDAR_QUERY = "REPORT cal-query"
METHOD_REPORT_CALENDAR_FREEBUSY = "REPORT freebusy"
METHOD_REPORT_CALENDAR_SYNC = "REPORT cal-sync"
METHOD_REPORT_ADDRESSBOOK_MULTIGET = "REPORT adbk-multi"
METHOD_REPORT_ADDRESSBOOK_QUERY = "REPORT adbk-query"
METHOD_REPORT_DIRECTORY_QUERY = "REPORT dir-query"
METHOD_REPORT_ADDRESSBOOK_SYNC = "REPORT adbk-sync"
METHOD_REPORT_P_SEARCH_P_SET = "REPORT p-set"
METHOD_REPORT_P_P_SEARCH = "REPORT p-search"
METHOD_REPORT_EXPAND_P = "REPORT expand"
# POSTs
METHOD_POST_CALENDAR_HOME = "POST Calendar Home"
METHOD_POST_CALENDAR = "POST Calendar"
METHOD_POST_CALENDAR_OBJECT = "POST Calendar Object"
METHOD_POST_ADDRESSBOOK_HOME = "POST Adbk Home"
METHOD_POST_ADDRESSBOOK = "POST Adbk"
METHOD_POST_ISCHEDULE_FREEBUSY = "POST Freebusy iSchedule"
METHOD_POST_ISCHEDULE = "POST iSchedule"
METHOD_POST_TIMEZONES = "POST Timezones"
METHOD_POST_FREEBUSY = "POST Freebusy"
METHOD_POST_ORGANIZER = "POST Organizer"
METHOD_POST_ATTENDEE = "POST Attendee"
METHOD_POST_OUTBOX = "POST Outbox"
METHOD_POST_APNS = "POST apns"
# PUTs
METHOD_PUT_ICS = "PUT ics"
METHOD_PUT_ORGANIZER = "PUT Organizer"
METHOD_PUT_ATTENDEE = "PUT Attendee"
METHOD_PUT_DROPBOX = "PUT dropbox"
METHOD_PUT_VCF = "PUT VCF"
# GETs
METHOD_GET_CALENDAR_HOME = "GET Calendar Home"
METHOD_GET_CALENDAR = "GET Calendar"
METHOD_GET_ICS = "GET ics"
METHOD_GET_INBOX_ICS = "GET inbox ics"
METHOD_GET_DROPBOX = "GET dropbox"
METHOD_GET_ADDRESSBOOK_HOME = "GET Adbk Home"
METHOD_GET_ADDRESSBOOK = "GET Adbk"
METHOD_GET_VCF = "GET VCF"
METHOD_GET_TIMEZONES = "GET Timezones"
# DELETEs
METHOD_DELETE_CALENDAR_HOME = "DELETE Calendar Home"
METHOD_DELETE_CALENDAR = "DELETE Calendar"
METHOD_DELETE_ICS = "DELETE ics"
METHOD_DELETE_INBOX_ICS = "DELETE inbox ics"
METHOD_DELETE_DROPBOX = "DELETE dropbox"
METHOD_DELETE_ADDRESSBOOK_HOME = "DELETE Adbk Home"
METHOD_DELETE_ADDRESSBOOK = "DELETE Adbk"
METHOD_DELETE_VCF = "DELETE vcf"
def getAdjustedMethodName(stats):
method = stats["method"]
uribits = stats["uri"].rstrip("/").split('/')[1:]
if len(uribits) == 0:
uribits = [stats["uri"]]
calendar_specials = ("attachments", "dropbox", "notification", "freebusy", "outbox",)
adbk_specials = ("notification",)
def _PROPFIND():
cached = "cached" in stats
if uribits[0] == "calendars":
if len(uribits) == 3:
return METHOD_PROPFIND_CACHED_CALENDAR_HOME if cached else METHOD_PROPFIND_CALENDAR_HOME
elif len(uribits) > 3:
if uribits[3] in calendar_specials:
return "PROPFIND %s" % (uribits[3],)
elif len(uribits) == 4:
if uribits[3] == "inbox":
return METHOD_PROPFIND_INBOX
else:
return METHOD_PROPFIND_CALENDAR
elif uribits[0] == "addressbooks":
if len(uribits) == 3:
return METHOD_PROPFIND_CACHED_ADDRESSBOOK_HOME if cached else METHOD_PROPFIND_ADDRESSBOOK_HOME
elif len(uribits) > 3:
if uribits[3] in adbk_specials:
return "PROPFIND %s" % (uribits[3],)
elif len(uribits) == 4:
return METHOD_PROPFIND_ADDRESSBOOK
elif uribits[0] == "directory":
return METHOD_PROPFIND_DIRECTORY
elif uribits[0] == "principals":
return METHOD_PROPFIND_CACHED_PRINCIPALS if cached else METHOD_PROPFIND_PRINCIPALS
return method
def _REPORT():
if "(" in method:
report_type = method.split("}" if "}" in method else ":")[1][:-1]
if report_type == "addressbook-query":
if uribits[0] == "directory":
report_type = "directory-query"
if report_type == "sync-collection":
if uribits[0] == "calendars":
report_type = "cal-sync"
elif uribits[0] == "addressbooks":
report_type = "adbk-sync"
mappedNames = {
"calendar-multiget" : METHOD_REPORT_CALENDAR_MULTIGET,
"calendar-query" : METHOD_REPORT_CALENDAR_QUERY,
"free-busy-query" : METHOD_REPORT_CALENDAR_FREEBUSY,
"cal-sync" : METHOD_REPORT_CALENDAR_SYNC,
"addressbook-multiget" : METHOD_REPORT_ADDRESSBOOK_MULTIGET,
"addressbook-query" : METHOD_REPORT_ADDRESSBOOK_QUERY,
"directory-query" : METHOD_REPORT_DIRECTORY_QUERY,
"adbk-sync" : METHOD_REPORT_ADDRESSBOOK_SYNC,
"principal-search-property-set" : METHOD_REPORT_P_SEARCH_P_SET,
"principal-property-search" : METHOD_REPORT_P_P_SEARCH,
"expand-property" : METHOD_REPORT_EXPAND_P,
}
return mappedNames.get(report_type, "REPORT %s" % (report_type,))
return method
def _PROPPATCH():
if uribits[0] == "calendars":
return METHOD_PROPPATCH_CALENDAR
elif uribits[0] == "addressbooks":
return METHOD_PROPPATCH_ADDRESSBOOK
return method
def _POST():
if uribits[0] == "calendars":
if len(uribits) == 3:
return METHOD_POST_CALENDAR_HOME
elif len(uribits) == 4:
if uribits[3] == "outbox":
if "recipients" in stats:
return METHOD_POST_FREEBUSY
elif "freebusy" in stats:
return METHOD_POST_FREEBUSY
elif "itip.request" in stats or "itip.cancel" in stats:
return METHOD_POST_ORGANIZER
elif "itip.reply" in stats:
return METHOD_POST_ATTENDEE
else:
return METHOD_POST_OUTBOX
elif uribits[3] in calendar_specials:
pass
else:
return METHOD_POST_CALENDAR
elif len(uribits) == 5:
return METHOD_POST_CALENDAR_OBJECT
elif uribits[0] == "addressbooks":
if len(uribits) == 3:
return METHOD_POST_ADDRESSBOOK_HOME
elif len(uribits) == 4:
if uribits[3] in adbk_specials:
pass
else:
return METHOD_POST_ADDRESSBOOK
elif uribits[0] == "ischedule":
if "fb-cached" in stats or "fb-uncached" in stats or "freebusy" in stats:
return METHOD_POST_ISCHEDULE_FREEBUSY
else:
return METHOD_POST_ISCHEDULE
elif uribits[0].startswith("timezones"):
return METHOD_POST_TIMEZONES
elif uribits[0].startswith("apns"):
return METHOD_POST_APNS
return method
def _PUT():
if uribits[0] == "calendars":
if len(uribits) > 3:
if uribits[3] in calendar_specials:
return "PUT %s" % (uribits[3],)
elif len(uribits) == 4:
pass
else:
if "itip.requests" in stats:
return METHOD_PUT_ORGANIZER
elif "itip.reply" in stats:
return METHOD_PUT_ATTENDEE
else:
return METHOD_PUT_ICS
elif uribits[0] == "addressbooks":
if len(uribits) > 3:
if uribits[3] in adbk_specials:
return "PUT %s" % (uribits[3],)
elif len(uribits) == 4:
pass
else:
return METHOD_PUT_VCF
return method
def _GET():
if uribits[0] == "calendars":
if len(uribits) == 3:
return METHOD_GET_CALENDAR_HOME
elif len(uribits) > 3:
if uribits[3] in calendar_specials:
return "GET %s" % (uribits[3],)
elif len(uribits) == 4:
return METHOD_GET_CALENDAR
elif uribits[3] == "inbox":
return METHOD_GET_INBOX_ICS
else:
return METHOD_GET_ICS
elif uribits[0] == "addressbooks":
if len(uribits) == 3:
return METHOD_GET_ADDRESSBOOK_HOME
elif len(uribits) > 3:
if uribits[3] in adbk_specials:
return "GET %s" % (uribits[3],)
elif len(uribits) == 4:
return METHOD_GET_ADDRESSBOOK
else:
return METHOD_GET_VCF
elif uribits[0].startswith("timezones"):
return METHOD_GET_TIMEZONES
return method
def _DELETE():
if uribits[0] == "calendars":
if len(uribits) == 3:
return METHOD_DELETE_CALENDAR_HOME
elif len(uribits) > 3:
if uribits[3] in calendar_specials:
return "DELETE %s" % (uribits[3],)
elif len(uribits) == 4:
return METHOD_DELETE_CALENDAR
elif uribits[3] == "inbox":
return METHOD_DELETE_INBOX_ICS
else:
return METHOD_DELETE_ICS
elif uribits[0] == "addressbooks":
if len(uribits) == 3:
return METHOD_DELETE_ADDRESSBOOK_HOME
elif len(uribits) > 3:
if uribits[3] in adbk_specials:
return "DELETE %s" % (uribits[3],)
elif len(uribits) == 4:
return METHOD_DELETE_ADDRESSBOOK
else:
return METHOD_DELETE_VCF
return method
def _ANY():
return method
return {
"DELETE" : _DELETE,
"GET" : _GET,
"POST" : _POST,
"PROPFIND" : _PROPFIND,
"PROPPATCH" : _PROPPATCH,
"PUT" : _PUT,
"REPORT" : _REPORT,
}.get(method.split("(")[0], _ANY)()
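# The function above splits the request URI into segments and then dispatches
# on the HTTP method via a dict of local helper functions. The following is a
# minimal standalone sketch of that bucketing idea (the `adjusted_name`
# function, its label strings, and the sample stats dicts are illustrative,
# not part of this module); only the PROPFIND-on-calendar-home case is shown:

```python
def adjusted_name(stats):
    # Split "/calendars/__uids__/user01/" into ["calendars", "__uids__", "user01"].
    uribits = stats["uri"].rstrip("/").split("/")[1:]
    # Strip any "(...)" suffix from the method name before dispatching.
    method = stats["method"].split("(")[0]
    if method == "PROPFIND" and uribits and uribits[0] == "calendars":
        if len(uribits) == 3:
            # Three segments means the calendar home collection itself.
            cached = "cached" in stats
            return "PROPFIND cached Calendar Home" if cached else "PROPFIND Calendar Home"
    # Anything unrecognized falls back to the raw method string.
    return stats["method"]

print(adjusted_name({"method": "PROPFIND", "uri": "/calendars/__uids__/user01/"}))
# -> PROPFIND Calendar Home
```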
osClients = (
"Mac OS X/",
"Mac_OS_X/",
"iOS/",
)
versionClients = (
"iCal/",
"iPhone/",
"CalendarAgent/",
"Calendar/",
"CoreDAV/",
"Safari/",
"dataaccessd/",
"Preferences/",
"curl/",
"DAVKit/",
)
quickclients = (
("InterMapper/", "InterMapper"),
("CardDAVPlugin/", "CardDAVPlugin"),
("Address%20Book/", "AddressBook"),
("AddressBook/", "AddressBook"),
("Mail/", "Mail"),
("iChat/", "iChat"),
)
def getAdjustedClientName(stats):
userAgent = stats["userAgent"]
os = ""
for client in osClients:
index = userAgent.find(client)
if index != -1:
l = len(client)
endex = userAgent.find(' ', index + l)
os = (userAgent[index:] if endex == -1 else userAgent[index:endex]) + " "
for client in versionClients:
index = userAgent.find(client)
if index != -1:
if os:
return os + client[:-1]
else:
l = len(client)
endex = userAgent.find(' ', index + l)
return os + (userAgent[index:] if endex == -1 else userAgent[index:endex])
for quick, result in quickclients:
index = userAgent.find(quick)
if index != -1:
return os + result
return userAgent[:20]
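# getAdjustedClientName above looks for an OS token first, then a client
# version token, and otherwise truncates the raw User-Agent. A standalone
# sketch of the same prefix-scanning approach (the function name, the
# trimmed prefix tuples, and the sample User-Agent string are illustrative):

```python
def client_name(user_agent,
                os_prefixes=("Mac OS X/", "Mac_OS_X/", "iOS/"),
                version_prefixes=("iCal/", "CalendarAgent/", "dataaccessd/")):
    # First pass: capture an OS token such as "Mac OS X/10.10.5 ".
    os_part = ""
    for prefix in os_prefixes:
        i = user_agent.find(prefix)
        if i != -1:
            end = user_agent.find(" ", i + len(prefix))
            os_part = (user_agent[i:] if end == -1 else user_agent[i:end]) + " "
    # Second pass: a client token ends the search; drop its version if an
    # OS token was already found, otherwise keep "Client/version".
    for prefix in version_prefixes:
        i = user_agent.find(prefix)
        if i != -1:
            if os_part:
                return os_part + prefix[:-1]
            end = user_agent.find(" ", i + len(prefix))
            return user_agent[i:] if end == -1 else user_agent[i:end]
    # Unknown agents are truncated to keep the report keys bounded.
    return user_agent[:20]

print(client_name("Mac OS X/10.10.5 (14F27) CalendarAgent/316.1"))
# -> Mac OS X/10.10.5 CalendarAgent
```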
# File: calendarserver-7.0+dfsg/calendarserver/profiling.py
##
# Copyright (c) 2014-2015 Apple Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
##
from twisted.internet.defer import inlineCallbacks, returnValue
import cProfile as profile
import pstats
def profile_method():
"""
Decorator to profile a function
"""
def wrapper(method):
def inner(*args, **kwargs):
prof = profile.Profile()
prof.enable()
result = method(*args, **kwargs)
prof.disable()
stats = pstats.Stats(prof)
stats.sort_stats('cumulative').print_stats()
return result
return inner
return wrapper
def profile_inline_callback():
"""
Decorator to profile an inlineCallback function
"""
def wrapper(method):
@inlineCallbacks
def inner(*args, **kwargs):
prof = profile.Profile()
prof.enable()
result = yield method(*args, **kwargs)
prof.disable()
stats = pstats.Stats(prof)
stats.sort_stats('cumulative').print_stats()
returnValue(result)
return inner
return wrapper
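# A usage sketch of the cProfile enable/disable pattern used by the two
# decorators above. This is a simplified variant (a plain decorator rather
# than a decorator factory, with the report captured in a string instead of
# printed); the names `profile_method`, `busy`, and `last_report` here are
# illustrative:

```python
import cProfile
import io
import pstats

def profile_method(method):
    def inner(*args, **kwargs):
        prof = cProfile.Profile()
        prof.enable()
        result = method(*args, **kwargs)
        prof.disable()
        # Capture the cumulative-time report instead of printing it.
        buf = io.StringIO()
        pstats.Stats(prof, stream=buf).sort_stats("cumulative").print_stats(5)
        inner.last_report = buf.getvalue()
        return result
    return inner

@profile_method
def busy():
    return sum(i * i for i in range(10000))

print(busy())
# -> 333283335000
```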
# File: calendarserver-7.0+dfsg/calendarserver/provision/__init__.py
##
# Copyright (c) 2009-2015 Apple Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
##
# File: calendarserver-7.0+dfsg/calendarserver/provision/root.py
# -*- test-case-name: calendarserver.provision.test.test_root -*-
##
# Copyright (c) 2005-2015 Apple Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
##
__all__ = [
"RootResource",
]
try:
from twext.python.sacl import checkSACL
except ImportError:
# OS X Server SACLs not supported on this system, make SACL check a no-op
checkSACL = lambda *ignored: True
from twext.python.log import Logger
from twisted.cred.error import LoginFailed, UnauthorizedLogin
from twisted.internet.defer import inlineCallbacks, returnValue, succeed
from twisted.python.reflect import namedClass
from twisted.web.error import Error as WebError
from twistedcaldav.cache import DisabledCache
from twistedcaldav.cache import MemcacheResponseCache, MemcacheChangeNotifier
from twistedcaldav.cache import _CachedResponseResource
from twistedcaldav.config import config
from twistedcaldav.directory.principal import DirectoryPrincipalResource
from twistedcaldav.extensions import DAVFile, CachingPropertyStore
from twistedcaldav.extensions import DirectoryPrincipalPropertySearchMixIn
from twistedcaldav.extensions import ReadOnlyResourceMixIn
from twistedcaldav.resource import CalDAVComplianceMixIn
from txdav.who.wiki import DirectoryService as WikiDirectoryService
from txdav.who.wiki import uidForAuthToken
from txweb2 import responsecode
from txweb2.auth.wrapper import UnauthorizedResponse
from txweb2.dav.xattrprops import xattrPropertyStore
from txweb2.http import HTTPError, StatusResponse, RedirectResponse
log = Logger()
class RootResource(
ReadOnlyResourceMixIn, DirectoryPrincipalPropertySearchMixIn,
CalDAVComplianceMixIn, DAVFile
):
    """
    A special root resource that adds support for checking SACLs
    as well as for adding response filters.
    """
useSacls = False
# Mapping of top-level resource paths to SACLs. If a request path
# starts with any of these, then the list of SACLs are checked. If the
# request path does not start with any of these, then no SACLs are checked.
saclMap = {
"addressbooks": ("addressbook",),
"calendars": ("calendar",),
"directory": ("addressbook",),
"principals": ("addressbook", "calendar"),
"webcal": ("calendar",),
}
# If a top-level resource path starts with any of these, an unauthenticated
# request is redirected to the auth url (config.WebCalendarAuthPath)
authServiceMap = {
"webcal": True,
}
def __init__(self, path, *args, **kwargs):
super(RootResource, self).__init__(path, *args, **kwargs)
if config.EnableSACLs:
self.useSacls = True
self.contentFilters = []
if (
config.EnableResponseCache and
config.Memcached.Pools.Default.ClientEnabled
):
self.responseCache = MemcacheResponseCache(self.fp)
            # These class attributes need to be set up with our memcache
            # notifier
DirectoryPrincipalResource.cacheNotifierFactory = (
MemcacheChangeNotifier
)
else:
self.responseCache = DisabledCache()
if config.ResponseCompression:
from txweb2.filter import gzip
self.contentFilters.append((gzip.gzipfilter, True))
def deadProperties(self):
if not hasattr(self, "_dead_properties"):
# Get the property store from super
deadProperties = (
namedClass(config.RootResourcePropStoreClass)(self)
)
# Wrap the property store in a memory store
if isinstance(deadProperties, xattrPropertyStore):
deadProperties = CachingPropertyStore(deadProperties)
self._dead_properties = deadProperties
return self._dead_properties
def defaultAccessControlList(self):
return succeed(config.RootResourceACL)
@inlineCallbacks
def checkSACL(self, request):
"""
Check SACLs against the current request
"""
topLevel = request.path.strip("/").split("/")[0]
saclServices = self.saclMap.get(topLevel, None)
if not saclServices:
returnValue(True)
try:
authnUser, authzUser = yield self.authenticate(request)
except Exception:
response = (yield UnauthorizedResponse.makeResponse(
request.credentialFactories,
request.remoteAddr
))
raise HTTPError(response)
# SACLs are enabled in the plist, but there may not actually
# be a SACL group assigned to this service. Let's see if
# unauthenticated users are allowed by calling CheckSACL
# with an empty string.
if authzUser is None:
for saclService in saclServices:
if checkSACL("", saclService):
# No group actually exists for this SACL, so allow
# unauthenticated access
returnValue(True)
# There is a SACL group for at least one of the SACLs, so no
# unauthenticated access
response = (yield UnauthorizedResponse.makeResponse(
request.credentialFactories,
request.remoteAddr
))
log.info("Unauthenticated user denied by SACLs")
raise HTTPError(response)
# Cache the authentication details
request.authnUser = authnUser
request.authzUser = authzUser
# Figure out the "username" from the davxml.Principal object
username = authzUser.record.shortNames[0]
access = False
for saclService in saclServices:
if checkSACL(username, saclService):
# Access is allowed
access = True
break
# Mark SACLs as having been checked so we can avoid doing it
# multiple times
request.checkedSACL = True
if access:
returnValue(True)
log.warn(
"User {user!r} is not enabled with the {sacl!r} SACL(s)",
user=username, sacl=saclServices
)
raise HTTPError(responsecode.FORBIDDEN)
@inlineCallbacks
def locateChild(self, request, segments):
for filter in self.contentFilters:
request.addResponseFilter(filter[0], atEnd=filter[1])
# Examine cookies for wiki auth token; if there, ask the paired wiki
# server for the corresponding record name. If that maps to a
        # principal, assign that to authnUser.
# Also, certain non-browser clients send along the wiki auth token
# sometimes, so we now also look for the presence of x-requested-with
# header that the webclient sends. However, in the case of a GET on
# /webcal that header won't be sent so therefore we allow wiki auth
# for any path in the authServiceMap even if that header is missing.
allowWikiAuth = False
topLevel = request.path.strip("/").split("/")[0]
if self.authServiceMap.get(topLevel, False):
allowWikiAuth = True
if not hasattr(request, "checkedWiki"):
# Only do this once per request
request.checkedWiki = True
wikiConfig = config.Authentication.Wiki
cookies = request.headers.getHeader("cookie")
requestedWith = request.headers.hasHeader("x-requested-with")
if (
wikiConfig["Enabled"] and
(requestedWith or allowWikiAuth) and
cookies is not None
):
for cookie in cookies:
if cookie.name == wikiConfig["Cookie"]:
token = cookie.value
break
else:
token = None
if token is not None and token != "unauthenticated":
log.debug(
"Wiki sessionID cookie value: {token}", token=token
)
try:
uid = yield uidForAuthToken(token, wikiConfig["EndpointDescriptor"])
if uid == "unauthenticated":
uid = None
except WebError as w:
uid = None
                        # NOT_FOUND status means it's an unknown token
                        if int(w.status) == responsecode.NOT_FOUND:
log.debug(
"Unknown wiki token: {token}", token=token
)
else:
log.error(
"Failed to look up wiki token {token}: "
"{message}",
token=token, message=w.message
)
except Exception as e:
log.error(
"Failed to look up wiki token: {error}",
error=e
)
uid = None
if uid is not None:
log.debug(
"Wiki lookup returned uid: {uid}", uid=uid
)
principal = yield self.principalForUID(request, uid)
if principal:
log.debug(
"Wiki-authenticated principal {uid} "
"being assigned to authnUser and authzUser",
uid=uid
)
request.authzUser = request.authnUser = principal
if not hasattr(request, "authzUser") and config.WebCalendarAuthPath:
topLevel = request.path.strip("/").split("/")[0]
if self.authServiceMap.get(topLevel, False):
# We've not been authenticated and the auth service is enabled
# for this resource, so redirect.
# Use config.ServerHostName if no x-forwarded-host header,
# otherwise use the final hostname in x-forwarded-host.
host = request.headers.getRawHeaders(
"x-forwarded-host",
[config.ServerHostName]
)[-1].split(",")[-1].strip()
port = 443 if config.EnableSSL else 80
scheme = "https" if config.EnableSSL else "http"
response = RedirectResponse(
request.unparseURL(
host=host,
port=port,
scheme=scheme,
path=config.WebCalendarAuthPath,
querystring="redirect={}://{}{}".format(
scheme,
host,
request.path
)
),
temporary=True
)
raise HTTPError(response)
# We don't want the /inbox resource to pay attention to SACLs because
# we just want it to use the hard-coded ACL for the imip reply user.
# The /timezones resource is used by the wiki web calendar, so open
# up that resource.
if segments[0] in ("inbox", "timezones"):
request.checkedSACL = True
elif (
(
len(segments) > 2 and
segments[0] in ("calendars", "principals") and
(
segments[1] == "wikis" or
(
segments[1] == "__uids__" and
segments[2].startswith(WikiDirectoryService.uidPrefix)
)
)
)
):
# This is a wiki-related calendar resource. SACLs are not checked.
request.checkedSACL = True
# The authzuser value is set to that of the wiki principal if
# not already set.
if not hasattr(request, "authzUser") and segments[2]:
wikiUid = None
if segments[1] == "wikis":
wikiUid = "{}{}".format(WikiDirectoryService.uidPrefix, segments[2])
else:
wikiUid = segments[2]
if wikiUid:
log.debug(
"Wiki principal {name} being assigned to authzUser",
name=wikiUid
)
request.authzUser = yield self.principalForUID(request, wikiUid)
elif (
self.useSacls and
not hasattr(request, "checkedSACL")
):
yield self.checkSACL(request)
if config.RejectClients:
#
# Filter out unsupported clients
#
agent = request.headers.getHeader("user-agent")
if agent is not None:
for reject in config.RejectClients:
if reject.search(agent) is not None:
log.info("Rejecting user-agent: {agent}", agent=agent)
raise HTTPError(StatusResponse(
responsecode.FORBIDDEN,
"Your client software ({}) is not allowed to "
"access this service."
.format(agent)
))
if not hasattr(request, "authnUser"):
try:
authnUser, authzUser = yield self.authenticate(request)
request.authnUser = authnUser
request.authzUser = authzUser
except (UnauthorizedLogin, LoginFailed):
response = yield UnauthorizedResponse.makeResponse(
request.credentialFactories,
request.remoteAddr
)
raise HTTPError(response)
if (
config.EnableResponseCache and
request.method == "PROPFIND" and
not getattr(request, "notInCache", False) and
len(segments) > 1
):
try:
if not getattr(request, "checkingCache", False):
request.checkingCache = True
response = yield self.responseCache.getResponseForRequest(
request
)
if response is None:
request.notInCache = True
raise KeyError("Not found in cache.")
returnValue((_CachedResponseResource(response), []))
except KeyError:
pass
child = yield super(RootResource, self).locateChild(
request, segments
)
returnValue(child)
@inlineCallbacks
def principalForUID(self, request, uid):
principal = None
directory = request.site.resource.getDirectory()
record = yield directory.recordWithUID(uid)
if record is not None:
username = record.shortNames[0]
log.debug(
"Wiki user record for user {user}: {record}",
user=username, record=record
)
for collection in self.principalCollections():
principal = yield collection.principalForRecord(record)
if principal is not None:
break
returnValue(principal)
def http_COPY(self, request):
return responsecode.FORBIDDEN
def http_MOVE(self, request):
return responsecode.FORBIDDEN
def http_DELETE(self, request):
return responsecode.FORBIDDEN
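# checkSACL above selects which SACL services to consult from the first URI
# segment via saclMap. A minimal standalone sketch of that lookup step (the
# function name and the trimmed map below are illustrative):

```python
SACL_MAP = {
    "addressbooks": ("addressbook",),
    "calendars": ("calendar",),
    "principals": ("addressbook", "calendar"),
}

def sacl_services_for(path, sacl_map=SACL_MAP):
    # The first path segment decides which SACLs, if any, apply;
    # unknown top-level paths require no SACL check at all.
    top = path.strip("/").split("/")[0]
    return sacl_map.get(top, ())

print(sacl_services_for("/calendars/__uids__/user01/"))
# -> ('calendar',)
```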
# File: calendarserver-7.0+dfsg/calendarserver/provision/test/__init__.py
##
# Copyright (c) 2009-2015 Apple Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
##
# File: calendarserver-7.0+dfsg/calendarserver/provision/test/test_root.py
##
# Copyright (c) 2005-2015 Apple Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
##
from twisted.internet.defer import inlineCallbacks, maybeDeferred, returnValue
from txweb2 import http_headers
from txweb2 import responsecode
from txdav.xml import element as davxml
from txweb2.http import HTTPError
from txweb2.iweb import IResponse
from twistedcaldav.test.util import StoreTestCase, SimpleStoreRequest
from twistedcaldav.directory.principal import DirectoryPrincipalProvisioningResource
import calendarserver.provision.root # for patching checkSACL
TEST_SACLS = {
"calendar": [
"dreid"
],
"addressbook": [
"dreid"
],
}
def stubCheckSACL(username, service):
if service not in TEST_SACLS:
return True
if username in TEST_SACLS[service]:
return True
return False
class RootTests(StoreTestCase):
@inlineCallbacks
def setUp(self):
yield super(RootTests, self).setUp()
self.patch(calendarserver.provision.root, "checkSACL", stubCheckSACL)
class ComplianceTests(RootTests):
"""
Tests to verify CalDAV compliance of the root resource.
"""
@inlineCallbacks
def issueRequest(self, segments, method="GET"):
"""
Get a resource from a particular path from the root URI, and return a
Deferred which will fire with (something adaptable to) an HTTP response
object.
"""
request = SimpleStoreRequest(self, method, ("/".join([""] + segments)))
rsrc = self.actualRoot
while segments:
rsrc, segments = (yield maybeDeferred(
rsrc.locateChild, request, segments
))
result = yield rsrc.renderHTTP(request)
returnValue(result)
@inlineCallbacks
def test_optionsIncludeCalendar(self):
"""
OPTIONS request should include a DAV header that mentions the
addressbook capability.
"""
response = yield self.issueRequest([""], "OPTIONS")
self.assertIn("addressbook", response.headers.getHeader("DAV"))
class SACLTests(RootTests):
@inlineCallbacks
def test_noSacls(self):
"""
Test the behaviour of locateChild when SACLs are not enabled.
should return a valid resource
"""
self.actualRoot.useSacls = False
request = SimpleStoreRequest(self, "GET", "/principals/")
resrc, segments = (yield maybeDeferred(
self.actualRoot.locateChild, request, ["principals"]
))
self.failUnless(
isinstance(resrc, DirectoryPrincipalProvisioningResource),
"Did not get a DirectoryPrincipalProvisioningResource: %s"
% (resrc,)
)
self.assertEquals(segments, [])
@inlineCallbacks
def test_inSacls(self):
"""
Test the behavior of locateChild when SACLs are enabled and the
user is in the SACL group
should return a valid resource
"""
self.actualRoot.useSacls = True
principal = yield self.actualRoot.findPrincipalForAuthID("dreid")
request = SimpleStoreRequest(
self,
"GET",
"/principals/",
authPrincipal=principal
)
resrc, segments = (yield maybeDeferred(
self.actualRoot.locateChild, request, ["principals"]
))
self.failUnless(
isinstance(resrc, DirectoryPrincipalProvisioningResource),
"Did not get a DirectoryPrincipalProvisioningResource: %s"
% (resrc,)
)
self.assertEquals(segments, [])
self.assertEquals(
request.authzUser.principalElement(),
davxml.Principal(
davxml.HRef(
"/principals/__uids__/5FF60DAD-0BDE-4508-8C77-15F0CA5C8DD1/"
)
)
)
@inlineCallbacks
def test_notInSacls(self):
"""
Test the behavior of locateChild when SACLs are enabled and the
user is not in the SACL group
should return a 403 forbidden response
"""
self.actualRoot.useSacls = True
principal = yield self.actualRoot.findPrincipalForAuthID("wsanchez")
request = SimpleStoreRequest(
self,
"GET",
"/principals/",
authPrincipal=principal
)
try:
_ignore_resrc, _ignore_segments = (yield maybeDeferred(
self.actualRoot.locateChild, request, ["principals"]
))
raise AssertionError(
"RootResource.locateChild did not return an error"
)
        except HTTPError as e:
self.assertEquals(e.response.code, 403)
@inlineCallbacks
def test_unauthenticated(self):
"""
Test the behavior of locateChild when SACLs are enabled and the request
is unauthenticated
should return a 401 UnauthorizedResponse
"""
self.actualRoot.useSacls = True
request = SimpleStoreRequest(
self,
"GET",
"/principals/"
)
try:
_ignore_resrc, _ignore_segments = (yield maybeDeferred(
self.actualRoot.locateChild, request, ["principals"]
))
raise AssertionError(
"RootResource.locateChild did not return an error"
)
        except HTTPError as e:
self.assertEquals(e.response.code, 401)
@inlineCallbacks
def test_badCredentials(self):
"""
Test the behavior of locateChild when SACLS are enabled, and
incorrect credentials are given.
should return a 401 UnauthorizedResponse
"""
self.actualRoot.useSacls = True
request = SimpleStoreRequest(
self,
"GET",
"/principals/",
headers=http_headers.Headers(
{
"Authorization": [
"basic", "%s" % ("dreid:dreid".encode("base64"),)
]
}
)
)
try:
_ignore_resrc, _ignore_segments = (yield maybeDeferred(
self.actualRoot.locateChild, request, ["principals"]
))
raise AssertionError(
"RootResource.locateChild did not return an error"
)
        except HTTPError as e:
self.assertEquals(e.response.code, 401)
def test_DELETE(self):
def do_test(response):
response = IResponse(response)
if response.code != responsecode.FORBIDDEN:
self.fail("Incorrect response for DELETE /: %s"
% (response.code,))
request = SimpleStoreRequest(self, "DELETE", "/")
return self.send(request, do_test)
def test_COPY(self):
def do_test(response):
response = IResponse(response)
if response.code != responsecode.FORBIDDEN:
self.fail("Incorrect response for COPY /: %s"
% (response.code,))
request = SimpleStoreRequest(
self,
"COPY",
"/",
headers=http_headers.Headers({"Destination": "/copy/"})
)
return self.send(request, do_test)
def test_MOVE(self):
def do_test(response):
response = IResponse(response)
if response.code != responsecode.FORBIDDEN:
self.fail("Incorrect response for MOVE /: %s"
% (response.code,))
request = SimpleStoreRequest(
self,
"MOVE",
"/",
headers=http_headers.Headers({"Destination": "/copy/"})
)
return self.send(request, do_test)
class SACLCacheTests(RootTests):
class StubResponseCacheResource(object):
def __init__(self):
self.cache = {}
self.responseCache = self
self.cacheHitCount = 0
def getResponseForRequest(self, request):
if str(request) in self.cache:
self.cacheHitCount += 1
return self.cache[str(request)]
def cacheResponseForRequest(self, request, response):
self.cache[str(request)] = response
return response
@inlineCallbacks
def setUp(self):
yield super(SACLCacheTests, self).setUp()
self.actualRoot.responseCache = SACLCacheTests.StubResponseCacheResource()
@inlineCallbacks
def test_PROPFIND(self):
self.actualRoot.useSacls = True
body = """
"""
principal = yield self.actualRoot.findPrincipalForAuthID("dreid")
request = SimpleStoreRequest(
self,
"PROPFIND",
"/principals/users/dreid/",
headers=http_headers.Headers({
'Depth': '1',
}),
authPrincipal=principal,
content=body
)
response = yield self.send(request)
response = IResponse(response)
if response.code != responsecode.MULTI_STATUS:
self.fail("Incorrect response for PROPFIND /principals/: %s" % (response.code,))
request = SimpleStoreRequest(
self,
"PROPFIND",
"/principals/users/dreid/",
headers=http_headers.Headers({
'Depth': '1',
}),
authPrincipal=principal,
content=body
)
response = yield self.send(request)
response = IResponse(response)
if response.code != responsecode.MULTI_STATUS:
self.fail("Incorrect response for PROPFIND /principals/: %s" % (response.code,))
self.assertEqual(self.actualRoot.responseCache.cacheHitCount, 1)
class WikiTests(RootTests):
@inlineCallbacks
def test_oneTime(self):
"""
Make sure wiki auth lookup is only done once per request;
request.checkedWiki will be set to True
"""
request = SimpleStoreRequest(self, "GET", "/principals/")
_ignore_resrc, _ignore_segments = (yield maybeDeferred(
self.actualRoot.locateChild, request, ["principals"]
))
self.assertTrue(request.checkedWiki)
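# SACLCacheTests above asserts that the second of two identical PROPFINDs is
# served from the response cache (cacheHitCount == 1). A standalone sketch of
# that hit-counting cache protocol, outside the Twisted machinery (class and
# variable names here are illustrative):

```python
class StubResponseCache(object):
    def __init__(self):
        self.cache = {}
        self.hits = 0

    def get(self, request_key):
        # Count only successful lookups, mirroring cacheHitCount.
        if request_key in self.cache:
            self.hits += 1
            return self.cache[request_key]
        return None

    def put(self, request_key, response):
        self.cache[request_key] = response
        return response

cache = StubResponseCache()
key = "PROPFIND /principals/users/dreid/ Depth:1"
if cache.get(key) is None:          # first request: miss, so store it
    cache.put(key, "207 Multi-Status")
print(cache.get(key), cache.hits)   # second request: served from cache
# -> 207 Multi-Status 1
```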
# File: calendarserver-7.0+dfsg/calendarserver/push/__init__.py
##
# Copyright (c) 2011-2015 Apple Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
##
"""
CalendarServer push notification code.
"""
# File: calendarserver-7.0+dfsg/calendarserver/push/amppush.py
##
# Copyright (c) 2012-2015 Apple Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
##
from calendarserver.push.util import PushScheduler
from twext.python.log import Logger
from twisted.internet.defer import inlineCallbacks, returnValue
from twisted.internet.endpoints import TCP4ClientEndpoint
from twisted.internet.protocol import Factory, ServerFactory
from twisted.protocols import amp
import time
import uuid
from calendarserver.push.util import PushPriority
log = Logger()
# Control socket message-routing constants
PUSH_ROUTE = "push"
# AMP Commands sent to server
class SubscribeToID(amp.Command):
arguments = [('token', amp.String()), ('id', amp.String())]
response = [('status', amp.String())]
class UnsubscribeFromID(amp.Command):
arguments = [('token', amp.String()), ('id', amp.String())]
response = [('status', amp.String())]
# AMP Commands sent to client (and forwarded to Master)
class NotificationForID(amp.Command):
arguments = [('id', amp.String()),
('dataChangedTimestamp', amp.Integer(optional=True)),
('priority', amp.Integer(optional=True))]
response = [('status', amp.String())]
# Server classes
class AMPPushForwardingFactory(Factory):
log = Logger()
def __init__(self, forwarder):
self.forwarder = forwarder
def buildProtocol(self, addr):
protocol = amp.AMP()
self.forwarder.protocols.append(protocol)
return protocol
class AMPPushForwarder(object):
"""
Runs in the slaves, forwards notifications to the master via AMP
"""
log = Logger()
def __init__(self, controlSocket):
self.protocols = []
controlSocket.addFactory(PUSH_ROUTE, AMPPushForwardingFactory(self))
@inlineCallbacks
def enqueue(
self, transaction, id, dataChangedTimestamp=None,
priority=PushPriority.high
):
if dataChangedTimestamp is None:
dataChangedTimestamp = int(time.time())
for protocol in self.protocols:
yield protocol.callRemote(
NotificationForID, id=id,
dataChangedTimestamp=dataChangedTimestamp,
priority=priority.value)
class AMPPushMasterListeningProtocol(amp.AMP):
"""
Listens for notifications coming in over AMP from the slaves
"""
log = Logger()
def __init__(self, master):
super(AMPPushMasterListeningProtocol, self).__init__()
self.master = master
@NotificationForID.responder
def enqueueFromWorker(
self, id, dataChangedTimestamp=None,
priority=PushPriority.high.value
):
if dataChangedTimestamp is None:
dataChangedTimestamp = int(time.time())
self.master.enqueue(
None, id, dataChangedTimestamp=dataChangedTimestamp,
priority=PushPriority.lookupByValue(priority))
return {"status" : "OK"}
class AMPPushMasterListenerFactory(Factory):
log = Logger()
def __init__(self, master):
self.master = master
def buildProtocol(self, addr):
protocol = AMPPushMasterListeningProtocol(self.master)
return protocol
class AMPPushMaster(object):
"""
AMPPushMaster allows clients to use AMP to subscribe to,
and receive, change notifications.
"""
log = Logger()
def __init__(
self, controlSocket, parentService, port, enableStaggering,
staggerSeconds, reactor=None
):
if reactor is None:
from twisted.internet import reactor
from twisted.application.strports import service as strPortsService
if port:
# Service which listens for client subscriptions and sends
# notifications to them
strPortsService(
str(port), AMPPushNotifierFactory(self),
reactor=reactor).setServiceParent(parentService)
if controlSocket is not None:
# Set up the listener which gets notifications from the slaves
controlSocket.addFactory(
PUSH_ROUTE, AMPPushMasterListenerFactory(self)
)
self.subscribers = []
if enableStaggering:
self.scheduler = PushScheduler(
reactor, self.sendNotification,
staggerSeconds=staggerSeconds)
else:
self.scheduler = None
def addSubscriber(self, p):
self.log.debug("Added subscriber")
self.subscribers.append(p)
def removeSubscriber(self, p):
self.log.debug("Removed subscriber")
self.subscribers.remove(p)
def enqueue(
self, transaction, pushKey, dataChangedTimestamp=None,
priority=PushPriority.high
):
"""
Sends an AMP push notification to any clients subscribing to this pushKey.
@param pushKey: The identifier of the resource that was updated, including
a prefix indicating whether this is CalDAV or CardDAV related.
"/CalDAV/abc/def/"
@type pushKey: C{str}
@param dataChangedTimestamp: Timestamp (epoch seconds) for the data change
which triggered this notification (Only used for unit tests)
@type dataChangedTimestamp: C{int}
"""
# Unit tests can pass this value in; otherwise it defaults to now
if dataChangedTimestamp is None:
dataChangedTimestamp = int(time.time())
tokens = []
for subscriber in self.subscribers:
token = subscriber.subscribedToID(pushKey)
if token is not None:
tokens.append(token)
if tokens:
return self.scheduleNotifications(
tokens, pushKey,
dataChangedTimestamp, priority)
@inlineCallbacks
def sendNotification(self, token, id, dataChangedTimestamp, priority):
for subscriber in self.subscribers:
if subscriber.subscribedToID(id):
yield subscriber.notify(
token, id, dataChangedTimestamp,
priority)
@inlineCallbacks
def scheduleNotifications(self, tokens, id, dataChangedTimestamp, priority):
if self.scheduler is not None:
self.scheduler.schedule(tokens, id, dataChangedTimestamp, priority)
else:
for token in tokens:
yield self.sendNotification(
token, id, dataChangedTimestamp,
priority)
class AMPPushNotifierProtocol(amp.AMP):
log = Logger()
def __init__(self, service):
super(AMPPushNotifierProtocol, self).__init__()
self.service = service
self.subscriptions = {}
self.any = None
def subscribe(self, token, id):
if id == "any":
self.any = token
else:
self.subscriptions[id] = token
return {"status" : "OK"}
SubscribeToID.responder(subscribe)
def unsubscribe(self, token, id):
try:
del self.subscriptions[id]
except KeyError:
pass
return {"status" : "OK"}
UnsubscribeFromID.responder(unsubscribe)
def notify(self, token, id, dataChangedTimestamp, priority):
if self.subscribedToID(id) == token:
self.log.debug("Sending notification for %s to %s" % (id, token))
return self.callRemote(
NotificationForID, id=id,
dataChangedTimestamp=dataChangedTimestamp,
priority=priority.value)
def subscribedToID(self, id):
if self.any is not None:
return self.any
return self.subscriptions.get(id, None)
def connectionLost(self, reason=None):
self.service.removeSubscriber(self)
class AMPPushNotifierFactory(ServerFactory):
log = Logger()
protocol = AMPPushNotifierProtocol
def __init__(self, service):
self.service = service
def buildProtocol(self, addr):
p = self.protocol(self.service)
self.service.addSubscriber(p)
p.service = self.service
return p
# Client classes
class AMPPushClientProtocol(amp.AMP):
"""
Implements the client side of the AMP push protocol. Whenever
the NotificationForID Command arrives, the registered callback
will be called with the id.
"""
def __init__(self, callback):
super(AMPPushClientProtocol, self).__init__()
self.callback = callback
@inlineCallbacks
def notificationForID(self, id, dataChangedTimestamp, priority):
yield self.callback(id, dataChangedTimestamp, PushPriority.lookupByValue(priority))
returnValue({"status" : "OK"})
NotificationForID.responder(notificationForID)
class AMPPushClientFactory(Factory):
log = Logger()
protocol = AMPPushClientProtocol
def __init__(self, callback):
self.callback = callback
def buildProtocol(self, addr):
p = self.protocol(self.callback)
return p
# Client helper methods
@inlineCallbacks
def subscribeToIDs(host, port, ids, callback, reactor=None):
"""
Clients can call this helper method to register a callback which
will get called whenever a push notification is fired for any
id in the ids list.
@param host: AMP host name to connect to
@type host: string
@param port: AMP port to connect to
@type port: integer
@param ids: The push IDs to subscribe to
@type ids: list of strings
@param callback: The method to call whenever a notification is
received.
@type callback: callable which is passed an id (string)
"""
if reactor is None:
from twisted.internet import reactor
token = str(uuid.uuid4())
endpoint = TCP4ClientEndpoint(reactor, host, port)
factory = AMPPushClientFactory(callback)
protocol = yield endpoint.connect(factory)
for id in ids:
yield protocol.callRemote(SubscribeToID, token=token, id=id)
returnValue(factory)
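# A minimal callback suitable for subscribeToIDs() might look like the sketch
# below (illustrative only; "_examplePushCallback" is not part of this module's
# API). It would be wired up with something like:
#     subscribeToIDs("localhost", 62311, ["/CalDAV/abc/def/"], _examplePushCallback)
# where the host, port, and push key are hypothetical values.
def _examplePushCallback(id, dataChangedTimestamp, priority):
    """Format a received push notification as a log line."""
    return "push for %s at %d (priority %s)" % (id, dataChangedTimestamp, priority)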
# -*- test-case-name: calendarserver.push.test.test_applepush -*-
##
# Copyright (c) 2011-2015 Apple Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
##
from twext.internet.ssl import ChainingOpenSSLContextFactory
from twext.python.log import Logger
from txweb2 import responsecode
from txdav.xml import element as davxml
from txweb2.dav.noneprops import NonePropertyStore
from txweb2.http import Response
from txweb2.http_headers import MimeType
from txweb2.server import parsePOSTData
from twisted.application import service
from twisted.internet.protocol import Protocol
from twisted.internet.defer import inlineCallbacks, returnValue, succeed
from twisted.internet.protocol import ClientFactory, ReconnectingClientFactory
from twisted.internet.task import LoopingCall
from twistedcaldav.extensions import DAVResource, DAVResourceWithoutChildrenMixin
from twistedcaldav.resource import ReadOnlyNoCopyResourceMixIn
import json
import OpenSSL
import struct
import time
from txdav.common.icommondatastore import InvalidSubscriptionValues
from calendarserver.push.util import (
validToken, TokenHistory, PushScheduler, PushPriority
)
from twext.internet.adaptendpoint import connect
from twext.internet.gaiendpoint import GAIEndpoint
from twisted.python.constants import Values, ValueConstant
log = Logger()
class ApplePushPriority(Values):
"""
Maps calendarserver.push.util.PushPriority values to APNS-specific values
"""
low = ValueConstant(PushPriority.low.value)
medium = ValueConstant(PushPriority.medium.value)
high = ValueConstant(PushPriority.high.value)
class ApplePushNotifierService(service.MultiService):
"""
ApplePushNotifierService is a MultiService responsible for
setting up the APN provider and feedback connections. Once
connected, calling its enqueue( ) method sends notifications
to any device token which is subscribed to the enqueued key.
The Apple Push Notification protocol is described here:
https://developer.apple.com/library/ios/documentation/NetworkingInternet/Conceptual/RemoteNotificationsPG/Chapters/CommunicatingWIthAPS.html
"""
log = Logger()
@classmethod
def makeService(
cls, settings, store, testConnectorClass=None,
reactor=None
):
"""
Creates the various "subservices" that work together to implement
APN, including "provider" and "feedback" services for CalDAV and
CardDAV.
@param settings: The portion of the configuration specific to APN
@type settings: C{dict}
@param store: The db store for storing/retrieving subscriptions
@type store: L{IDataStore}
@param testConnectorClass: Used for unit testing; implements
connect( ) and receiveData( )
@type testConnectorClass: C{class}
@param reactor: Used for unit testing; allows tests to advance the
clock in order to test the feedback polling service.
@type reactor: L{twisted.internet.task.Clock}
@return: instance of L{ApplePushNotifierService}
"""
service = cls()
service.store = store
service.providers = {}
service.feedbacks = {}
service.purgeCall = None
service.purgeIntervalSeconds = settings["SubscriptionPurgeIntervalSeconds"]
service.purgeSeconds = settings["SubscriptionPurgeSeconds"]
for protocol in ("CalDAV", "CardDAV"):
if settings[protocol]["CertificatePath"]:
providerTestConnector = None
feedbackTestConnector = None
if testConnectorClass is not None:
providerTestConnector = testConnectorClass()
feedbackTestConnector = testConnectorClass()
provider = APNProviderService(
service.store,
settings["ProviderHost"],
settings["ProviderPort"],
settings[protocol]["CertificatePath"],
settings[protocol]["PrivateKeyPath"],
chainPath=settings[protocol]["AuthorityChainPath"],
passphrase=settings[protocol]["Passphrase"],
staggerNotifications=settings["EnableStaggering"],
staggerSeconds=settings["StaggerSeconds"],
testConnector=providerTestConnector,
reactor=reactor,
)
provider.setServiceParent(service)
service.providers[protocol] = provider
service.log.info(
"APNS %s topic: %s" %
(protocol, settings[protocol]["Topic"]))
feedback = APNFeedbackService(
service.store,
settings["FeedbackUpdateSeconds"],
settings["FeedbackHost"],
settings["FeedbackPort"],
settings[protocol]["CertificatePath"],
settings[protocol]["PrivateKeyPath"],
chainPath=settings[protocol]["AuthorityChainPath"],
passphrase=settings[protocol]["Passphrase"],
testConnector=feedbackTestConnector,
reactor=reactor,
)
feedback.setServiceParent(service)
service.feedbacks[protocol] = feedback
return service
def startService(self):
"""
In addition to starting the provider and feedback sub-services, start a
LoopingCall whose job it is to purge old subscriptions
"""
service.MultiService.startService(self)
self.log.debug("ApplePushNotifierService startService")
self.purgeCall = LoopingCall(self.purgeOldSubscriptions, self.purgeSeconds)
self.purgeCall.start(self.purgeIntervalSeconds, now=False)
def stopService(self):
"""
In addition to stopping the provider and feedback sub-services, stop the
LoopingCall
"""
service.MultiService.stopService(self)
self.log.debug("ApplePushNotifierService stopService")
if self.purgeCall is not None:
self.purgeCall.stop()
self.purgeCall = None
@inlineCallbacks
def purgeOldSubscriptions(self, purgeSeconds):
"""
Remove any subscriptions that were registered more than purgeSeconds ago
@param purgeSeconds: The cutoff given in seconds
@type purgeSeconds: C{int}
"""
self.log.debug("ApplePushNotifierService purgeOldSubscriptions")
txn = self.store.newTransaction(label="ApplePushNotifierService.purgeOldSubscriptions")
yield txn.purgeOldAPNSubscriptions(int(time.time()) - purgeSeconds)
yield txn.commit()
@inlineCallbacks
def enqueue(
self, transaction, pushKey, dataChangedTimestamp=None,
priority=PushPriority.high
):
"""
Sends an Apple Push Notification to any device token subscribed to
this pushKey.
@param pushKey: The identifier of the resource that was updated, including
a prefix indicating whether this is CalDAV or CardDAV related.
"/CalDAV/abc/def/"
@type pushKey: C{str}
@param dataChangedTimestamp: Timestamp (epoch seconds) for the data change
which triggered this notification (Only used for unit tests)
@type dataChangedTimestamp: C{int}
@param priority: the priority level
@type priority: L{PushPriority}
"""
try:
protocol = pushKey.split("/")[1]
except IndexError:
# pushKey has no protocol, so we can't do anything with it
self.log.error("Push key '%s' is missing protocol" % (pushKey,))
return
# Unit tests can pass this value in; otherwise it defaults to now
if dataChangedTimestamp is None:
dataChangedTimestamp = int(time.time())
provider = self.providers.get(protocol, None)
if provider is not None:
# Look up subscriptions for this key
subscriptions = (yield transaction.apnSubscriptionsByKey(pushKey))
numSubscriptions = len(subscriptions)
if numSubscriptions > 0:
self.log.debug(
"Sending %d APNS notifications for %s" %
(numSubscriptions, pushKey))
tokens = [record.token for record in subscriptions if record.token and record.subscriberGUID]
if tokens:
provider.scheduleNotifications(
tokens, pushKey,
dataChangedTimestamp, priority)
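# The pushKey convention used by enqueue() above ("/CalDAV/abc/def/") encodes
# the protocol as the first path segment; a standalone sketch of the
# extraction (hypothetical helper, not used by the service):
def _exampleProtocolFromPushKey(pushKey):
    """Return e.g. "CalDAV" or "CardDAV" from a key like "/CalDAV/abc/def/"."""
    return pushKey.split("/")[1]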
class APNProviderProtocol(Protocol):
"""
Implements the Provider portion of APNS
"""
log = Logger()
# Sent by provider
COMMAND_PROVIDER = 2
# Received by provider
COMMAND_ERROR = 8
# Returned only for an error. Successful notifications get no response.
STATUS_CODES = {
0 : "No errors encountered",
1 : "Processing error",
2 : "Missing device token",
3 : "Missing topic",
4 : "Missing payload",
5 : "Invalid token size",
6 : "Invalid topic size",
7 : "Invalid payload size",
8 : "Invalid token",
255 : "None (unknown)",
}
# If error code comes back as one of these, remove the associated device
# token
TOKEN_REMOVAL_CODES = (5, 8)
MESSAGE_LENGTH = 6
def makeConnection(self, transport):
self.history = TokenHistory()
self.log.debug("ProviderProtocol makeConnection")
Protocol.makeConnection(self, transport)
def connectionMade(self):
self.log.debug("ProviderProtocol connectionMade")
self.buffer = ""
# Store a reference to ourself on the factory so the service can
# later call us
self.factory.connection = self
self.factory.clientConnectionMade()
def connectionLost(self, reason=None):
# self.log.debug("ProviderProtocol connectionLost: %s" % (reason,))
# Clear the reference to us from the factory
self.factory.connection = None
@inlineCallbacks
def dataReceived(self, data, fn=None):
"""
Buffer and divide up received data into error messages which are
always 6 bytes long
"""
if fn is None:
fn = self.processError
self.log.debug("ProviderProtocol dataReceived %d bytes" % (len(data),))
self.buffer += data
while len(self.buffer) >= self.MESSAGE_LENGTH:
message = self.buffer[:self.MESSAGE_LENGTH]
self.buffer = self.buffer[self.MESSAGE_LENGTH:]
try:
command, status, identifier = struct.unpack("!BBI", message)
if command == self.COMMAND_ERROR:
yield fn(status, identifier)
except Exception, e:
self.log.warn(
"ProviderProtocol could not process error: %s (%s)" %
(message.encode("hex"), e))
@inlineCallbacks
def processError(self, status, identifier):
"""
Handles an error message we've received on the provider channel.
If the error code is one that indicates a bad token, remove all
subscriptions corresponding to that token.
@param status: The status value returned from APN Feedback server
@type status: C{int}
@param identifier: The identifier of the outbound push notification
message which had a problem.
@type identifier: C{int}
"""
msg = self.STATUS_CODES.get(status, "Unknown status code")
self.log.info("Received APN error %d on identifier %d: %s" % (status, identifier, msg))
if status in self.TOKEN_REMOVAL_CODES:
token = self.history.extractIdentifier(identifier)
if token is not None:
self.log.debug(
"Removing subscriptions for bad token: %s" %
(token,))
txn = self.factory.store.newTransaction(label="APNProviderProtocol.processError")
subscriptions = (yield txn.apnSubscriptionsByToken(token))
for record in subscriptions:
self.log.debug(
"Removing subscription: %s %s" %
(token, record.resourceKey))
yield txn.removeAPNSubscription(token, record.resourceKey)
yield txn.commit()
def sendNotification(self, token, key, dataChangedTimestamp, priority):
"""
Sends a push notification message for the key to the device associated
with the token.
@param token: The device token subscribed to the key
@type token: C{str}
@param key: The key we're sending a notification about
@type key: C{str}
@param dataChangedTimestamp: Timestamp (epoch seconds) for the data change
which triggered this notification
@type dataChangedTimestamp: C{int}
"""
if not (token and key and dataChangedTimestamp):
return
try:
binaryToken = token.replace(" ", "").decode("hex")
except:
self.log.error("Invalid APN token in database: %s" % (token,))
return
identifier = self.history.add(token)
apnsPriority = ApplePushPriority.lookupByValue(priority.value).value
payload = json.dumps(
{
"key" : key,
"dataChangedTimestamp" : dataChangedTimestamp,
"pushRequestSubmittedTimestamp" : int(time.time()),
}
)
payloadLength = len(payload)
self.log.debug(
"Sending APNS notification to {token}: id={id} payload={payload} priority={priority}",
token=token, id=identifier, payload=payload, priority=apnsPriority)
"""
Notification format
Top level: Command (1 byte), Frame length (4 bytes), Frame data (variable)
Within Frame data: Item ...
Item: Item number (1 byte), Item data length (2 bytes), Item data (variable)
Item 1: Device token (32 bytes)
Item 2: Payload (variable length) in JSON format, not null-terminated
Item 3: Notification ID (4 bytes) an opaque value used for reporting errors
Item 4: Expiration date (4 bytes) UNIX epoch in seconds UTC
Item 5: Priority (1 byte): 10 (push sent immediately) or 5 (push sent
at a time that conserves power on the device receiving it)
"""
# Frame struct.pack format ! Network byte order
command = self.COMMAND_PROVIDER # B
frameLength = (# I
# Item 1 (Device token)
1 + # Item number # B
2 + # Item length # H
32 + # device token # 32s
# Item 2 (Payload)
1 + # Item number # B
2 + # Item length # H
payloadLength + # the JSON payload # %d s
# Item 3 (Notification ID)
1 + # Item number # B
2 + # Item length # H
4 + # Notification ID # I
# Item 4 (Expiration)
1 + # Item number # B
2 + # Item length # H
4 + # Expiration seconds since epoch # I
# Item 5 (Priority)
1 + # Item number # B
2 + # Item length # H
1 # Priority # B
)
self.transport.write(
struct.pack(
"!BIBH32sBH%dsBHIBHIBHB" % (payloadLength,),
command, # Command
frameLength, # Frame length
1, # Item 1 (Device token)
32, # Token Length
binaryToken, # Token
2, # Item 2 (Payload)
payloadLength, # Payload length
payload, # Payload
3, # Item 3 (Notification ID)
4, # Notification ID Length
identifier, # Notification ID
4, # Item 4 (Expiration)
4, # Expiration length
int(time.time()) + 72 * 60 * 60, # Expires in 72 hours
5, # Item 5 (Priority)
1, # Priority length
apnsPriority, # Priority
)
)
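# The binary frame layout documented in sendNotification() above can be
# sketched as a standalone pack (illustrative only; "_exampleProviderFrame"
# is a hypothetical helper using the same "!BIBH32sBH%dsBHIBHIBHB" layout
# as the real method, with stdlib struct only):
def _exampleProviderFrame(binaryToken, payload, identifier, expiration, priority):
    """Pack one COMMAND_PROVIDER (2) frame; returns the raw bytes."""
    import struct
    frameLength = (
        (1 + 2 + 32) +            # Item 1: device token
        (1 + 2 + len(payload)) +  # Item 2: JSON payload
        (1 + 2 + 4) +             # Item 3: notification ID
        (1 + 2 + 4) +             # Item 4: expiration
        (1 + 2 + 1)               # Item 5: priority
    )
    return struct.pack(
        "!BIBH32sBH%dsBHIBHIBHB" % (len(payload),),
        2, frameLength,
        1, 32, binaryToken,
        2, len(payload), payload,
        3, 4, identifier,
        4, 4, expiration,
        5, 1, priority)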
class APNProviderFactory(ReconnectingClientFactory):
log = Logger()
protocol = APNProviderProtocol
def __init__(self, service, store):
self.service = service
self.store = store
self.noisy = True
self.maxDelay = 30 # max seconds between connection attempts
self.shuttingDown = False
def clientConnectionMade(self):
self.log.info("Connection to APN server made")
self.service.clientConnectionMade()
self.delay = 1.0
def clientConnectionLost(self, connector, reason):
if not self.shuttingDown:
self.log.info("Connection to APN server lost: %s" % (reason,))
ReconnectingClientFactory.clientConnectionLost(self, connector, reason)
def clientConnectionFailed(self, connector, reason):
self.log.error("Unable to connect to APN server: %s" % (reason,))
self.connected = False
ReconnectingClientFactory.clientConnectionFailed(
self, connector,
reason)
def retry(self, connector=None):
self.log.info("Reconnecting to APN server")
ReconnectingClientFactory.retry(self, connector)
def stopTrying(self):
self.shuttingDown = True
ReconnectingClientFactory.stopTrying(self)
class APNConnectionService(service.Service):
log = Logger()
def __init__(
self, host, port, certPath, keyPath, chainPath="",
passphrase="", sslMethod="TLSv1_METHOD", testConnector=None,
reactor=None
):
self.host = host
self.port = port
self.certPath = certPath
self.keyPath = keyPath
self.chainPath = chainPath
self.passphrase = passphrase
self.sslMethod = sslMethod
self.testConnector = testConnector
if reactor is None:
from twisted.internet import reactor
self.reactor = reactor
def connect(self, factory):
if self.testConnector is not None:
# For testing purposes
self.testConnector.connect(self, factory)
else:
if self.passphrase:
passwdCallback = lambda *ignored : self.passphrase
else:
passwdCallback = None
context = ChainingOpenSSLContextFactory(
self.keyPath,
self.certPath,
certificateChainFile=self.chainPath,
passwdCallback=passwdCallback,
sslmethod=getattr(OpenSSL.SSL, self.sslMethod)
)
connect(GAIEndpoint(self.reactor, self.host, self.port, context),
factory)
class APNProviderService(APNConnectionService):
def __init__(
self, store, host, port, certPath, keyPath, chainPath="",
passphrase="", sslMethod="TLSv1_METHOD",
staggerNotifications=False, staggerSeconds=3,
testConnector=None, reactor=None
):
APNConnectionService.__init__(
self, host, port, certPath, keyPath,
chainPath=chainPath, passphrase=passphrase, sslMethod=sslMethod,
testConnector=testConnector, reactor=reactor)
self.store = store
self.factory = None
self.queue = []
if staggerNotifications:
self.scheduler = PushScheduler(
self.reactor, self.sendNotification,
staggerSeconds=staggerSeconds)
else:
self.scheduler = None
def startService(self):
self.log.debug("APNProviderService startService")
self.factory = APNProviderFactory(self, self.store)
self.connect(self.factory)
def stopService(self):
self.log.debug("APNProviderService stopService")
if self.factory is not None:
self.factory.stopTrying()
if self.scheduler is not None:
self.scheduler.stop()
def clientConnectionMade(self):
# Service the queue
if self.queue:
# Copy and clear the queue. Any notifications that don't get
# sent will be put back into the queue.
queued = list(self.queue)
self.queue = []
for (token, key), dataChangedTimestamp, priority in queued:
if token and key and dataChangedTimestamp and priority:
self.sendNotification(
token, key, dataChangedTimestamp,
priority)
def scheduleNotifications(self, tokens, key, dataChangedTimestamp, priority):
"""
The starting point for getting notifications to the APNS server. If there is
a connection to the APNS server, these notifications are scheduled (or directly
sent if there is no scheduler). If there is no connection, the notifications
are saved for later.
@param tokens: The device tokens to schedule notifications for
@type tokens: List of strings
@param key: The key to use for this batch of notifications
@type key: String
@param dataChangedTimestamp: Timestamp (epoch seconds) for the data change
which triggered this notification
@type dataChangedTimestamp: C{int}
"""
# Service has reference to factory has reference to protocol instance
connection = getattr(self.factory, "connection", None)
if connection is not None:
if self.scheduler is not None:
self.scheduler.schedule(tokens, key, dataChangedTimestamp, priority)
else:
for token in tokens:
self.sendNotification(token, key, dataChangedTimestamp, priority)
else:
self._saveForWhenConnected(tokens, key, dataChangedTimestamp, priority)
def _saveForWhenConnected(self, tokens, key, dataChangedTimestamp, priority):
"""
Called in order to save notifications that can't be sent now because there
is no connection to the APNS server. (token, key) tuples are appended to
the queue which is serviced during clientConnectionMade()
@param tokens: The device tokens to schedule notifications for
@type tokens: List of C{str}
@param key: The key to use for this batch of notifications
@type key: C{str}
@param dataChangedTimestamp: Timestamp (epoch seconds) for the data change
which triggered this notification
@type dataChangedTimestamp: C{int}
"""
for token in tokens:
tokenKeyPair = (token, key)
for existingPair, _ignore_timestamp, _ignore_priority in self.queue:
if tokenKeyPair == existingPair:
self.log.debug("APNProviderService has no connection; skipping duplicate: %s %s" % (token, key))
break # Already scheduled
else:
self.log.debug("APNProviderService has no connection; queuing: %s %s" % (token, key))
self.queue.append(((token, key), dataChangedTimestamp, priority))
def sendNotification(self, token, key, dataChangedTimestamp, priority):
"""
If there is a connection the notification is sent right away, otherwise
the notification is saved for later.
@param token: The device token to send a notifications to
@type token: C{str}
@param key: The key to use for this notification
@type key: C{str}
@param dataChangedTimestamp: Timestamp (epoch seconds) for the data change
which triggered this notification
@type dataChangedTimestamp: C{int}
"""
if not (token and key and dataChangedTimestamp and priority):
return
# Service has reference to factory has reference to protocol instance
connection = getattr(self.factory, "connection", None)
if connection is None:
self._saveForWhenConnected([token], key, dataChangedTimestamp, priority)
else:
connection.sendNotification(token, key, dataChangedTimestamp, priority)
class APNFeedbackProtocol(Protocol):
"""
Implements the Feedback portion of APNS
"""
log = Logger()
MESSAGE_LENGTH = 38
def connectionMade(self):
self.log.debug("FeedbackProtocol connectionMade")
self.buffer = ""
@inlineCallbacks
def dataReceived(self, data, fn=None):
"""
Buffer and divide up received data into feedback messages which are
always 38 bytes long
"""
if fn is None:
fn = self.processFeedback
self.log.debug("FeedbackProtocol dataReceived %d bytes" % (len(data),))
self.buffer += data
while len(self.buffer) >= self.MESSAGE_LENGTH:
message = self.buffer[:self.MESSAGE_LENGTH]
self.buffer = self.buffer[self.MESSAGE_LENGTH:]
try:
timestamp, _ignore_tokenLength, binaryToken = struct.unpack(
"!IH32s",
message)
token = binaryToken.encode("hex").lower()
yield fn(timestamp, token)
except Exception, e:
self.log.warn(
"FeedbackProtocol could not process message: %s (%s)" %
(message.encode("hex"), e))
@inlineCallbacks
def processFeedback(self, timestamp, token):
"""
Handles a feedback message indicating that the given token is no
longer active as of the timestamp, and its subscription should be
removed as long as that device has not re-subscribed since the
timestamp.
@param timestamp: Seconds since the epoch
@type timestamp: C{int}
@param token: The device token to unsubscribe
@type token: C{str}
"""
self.log.debug(
"FeedbackProtocol processFeedback time=%d token=%s" %
(timestamp, token))
txn = self.factory.store.newTransaction(label="APNFeedbackProtocol.processFeedback")
subscriptions = (yield txn.apnSubscriptionsByToken(token))
for record in subscriptions:
if timestamp > record.modified:
self.log.debug(
"FeedbackProtocol removing subscription: %s %s" %
(token, record.resourceKey))
yield txn.removeAPNSubscription(token, record.resourceKey)
yield txn.commit()
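# Each 38-byte feedback record parsed by dataReceived() above has a fixed
# layout: timestamp (4 bytes), token length (2 bytes, always 32), binary
# token (32 bytes). A standalone parsing sketch (illustrative only;
# "_exampleParseFeedback" is a hypothetical helper, stdlib only):
def _exampleParseFeedback(message):
    """Unpack one feedback record; returns (timestamp, lowercase hex token)."""
    import struct
    import binascii
    timestamp, _tokenLength, binaryToken = struct.unpack("!IH32s", message)
    return timestamp, binascii.hexlify(binaryToken).lower()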
class APNFeedbackFactory(ClientFactory):
log = Logger()
protocol = APNFeedbackProtocol
def __init__(self, store):
self.store = store
def clientConnectionFailed(self, connector, reason):
self.log.error(
"Unable to connect to APN feedback server: %s" %
(reason,))
self.connected = False
ClientFactory.clientConnectionFailed(self, connector, reason)
class APNFeedbackService(APNConnectionService):
def __init__(
self, store, updateSeconds, host, port,
certPath, keyPath, chainPath="", passphrase="", sslMethod="TLSv1_METHOD",
testConnector=None, reactor=None
):
APNConnectionService.__init__(
self, host, port, certPath, keyPath,
chainPath=chainPath, passphrase=passphrase, sslMethod=sslMethod,
testConnector=testConnector, reactor=reactor)
self.store = store
self.updateSeconds = updateSeconds
self.nextCheck = None
def startService(self):
self.log.debug("APNFeedbackService startService")
self.factory = APNFeedbackFactory(self.store)
self.checkForFeedback()
def stopService(self):
self.log.debug("APNFeedbackService stopService")
if self.nextCheck is not None:
self.nextCheck.cancel()
def checkForFeedback(self):
self.nextCheck = None
self.log.debug("APNFeedbackService checkForFeedback")
self.connect(self.factory)
self.nextCheck = self.reactor.callLater(
self.updateSeconds,
self.checkForFeedback)
class APNSubscriptionResource(
ReadOnlyNoCopyResourceMixIn,
DAVResourceWithoutChildrenMixin, DAVResource
):
"""
The DAV resource allowing clients to subscribe to Apple push notifications.
To subscribe, a client should first determine the key they are interested
in by examining the "pushkey" DAV property on the home or collection they
want to monitor. Next the client sends an authenticated HTTP GET or POST
request to this resource, passing their device token and the key in either
the URL params or in the POST body.
"""
log = Logger()
def __init__(self, parent, store):
DAVResource.__init__(
self, principalCollections=parent.principalCollections()
)
self.parent = parent
self.store = store
def deadProperties(self):
if not hasattr(self, "_dead_properties"):
self._dead_properties = NonePropertyStore(self)
return self._dead_properties
def etag(self):
return succeed(None)
def checkPreconditions(self, request):
return None
def defaultAccessControlList(self):
return succeed(
davxml.ACL(
# DAV:Read for authenticated principals
davxml.ACE(
davxml.Principal(davxml.Authenticated()),
davxml.Grant(
davxml.Privilege(davxml.Read()),
),
davxml.Protected(),
),
# DAV:Write for authenticated principals
davxml.ACE(
davxml.Principal(davxml.Authenticated()),
davxml.Grant(
davxml.Privilege(davxml.Write()),
),
davxml.Protected(),
),
)
)
def contentType(self):
return MimeType.fromString("text/html; charset=utf-8")
def resourceType(self):
return None
def isCollection(self):
return False
def isCalendarCollection(self):
return False
def isPseudoCalendarCollection(self):
return False
@inlineCallbacks
def http_POST(self, request):
yield self.authorize(request, (davxml.Write(),))
yield parsePOSTData(request)
code, msg = (yield self.processSubscription(request))
returnValue(self.renderResponse(code, body=msg))
http_GET = http_POST
@inlineCallbacks
def processSubscription(self, request):
"""
Given an authenticated request, use the token and key arguments
to add a subscription entry to the database.
@param request: The request to process
@type request: L{txweb2.server.Request}
"""
token = request.args.get("token", ("",))[0].replace(" ", "").lower()
key = request.args.get("key", ("",))[0]
userAgent = request.headers.getHeader("user-agent", "-")
host = request.remoteAddr.host
fwdHeaders = request.headers.getRawHeaders("x-forwarded-for", [])
if fwdHeaders:
host = fwdHeaders[0]
if not (key and token):
code = responsecode.BAD_REQUEST
msg = "Invalid request: both 'token' and 'key' must be provided"
elif not validToken(token):
code = responsecode.BAD_REQUEST
msg = "Invalid request: bad 'token' %s" % (token,)
else:
uid = request.authnUser.record.uid
try:
yield self.addSubscription(token, key, uid, userAgent, host)
code = responsecode.OK
msg = None
except InvalidSubscriptionValues:
code = responsecode.BAD_REQUEST
msg = "Invalid subscription values"
returnValue((code, msg))
@inlineCallbacks
def addSubscription(self, token, key, uid, userAgent, host):
"""
Add a subscription (or update its timestamp if already there).
@param token: The device token, must be lowercase
@type token: C{str}
@param key: The push key
@type key: C{str}
@param uid: The uid of the subscriber principal
@type uid: C{str}
@param userAgent: The user-agent requesting the subscription
@type userAgent: C{str}
@param host: The host requesting the subscription
@type host: C{str}
"""
now = int(time.time()) # epoch seconds
txn = self.store.newTransaction(label="APNSubscriptionResource.addSubscription")
yield txn.addAPNSubscription(token, key, now, uid, userAgent, host)
yield txn.commit()
def renderResponse(self, code, body=None):
response = Response(code, {}, body)
response.headers.setHeader("content-type", MimeType("text", "html"))
return response
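The subscription handler above lowercases the token and strips embedded spaces before validating it. A standalone sketch of that normalization, with the 64-hex-digit check mirroring what `validToken` (from `calendarserver.push.util`) is exercised for in the tests — the helper name and regex here are illustrative assumptions, not the library's actual implementation:

```python
import re

# Assumed check: an APNs device token is exactly 64 hexadecimal characters.
_TOKEN_PATTERN = re.compile(r"^[0-9a-f]{64}$")

def normalize_and_validate(raw_token):
    """Strip spaces, lowercase, and validate an APNs device token."""
    token = raw_token.replace(" ", "").lower()
    return token, bool(_TOKEN_PATTERN.match(token))

# Tokens often arrive space-separated, as copied from device logs.
token, ok = normalize_and_validate(
    "2D0D 55CD 7F98 BCB8 1C6E 24AB CDC3 5168 "
    "254C 7846 A43E 2828 B1BA 5A8F 82E2 19DF"
)
```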
calendarserver-7.0+dfsg/calendarserver/push/notifier.py
##
# Copyright (c) 2005-2015 Apple Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
##
"""
Notification framework for Calendar Server
"""
from twext.enterprise.dal.record import fromTable
from twext.enterprise.dal.syntax import Delete, Select, Parameter
from twext.enterprise.jobs.jobitem import JobItem
from twext.enterprise.jobs.workitem import WorkItem, WORK_PRIORITY_HIGH, \
WORK_WEIGHT_1
from twext.python.log import Logger
from twisted.internet.defer import inlineCallbacks
from txdav.common.datastore.sql_tables import schema
from txdav.idav import IStoreNotifierFactory, IStoreNotifier
from zope.interface.declarations import implements
import datetime
from calendarserver.push.util import PushPriority
log = Logger()
class PushNotificationWork(WorkItem, fromTable(schema.PUSH_NOTIFICATION_WORK)):
group = property(lambda self: (self.table.PUSH_ID == self.pushID))
default_priority = WORK_PRIORITY_HIGH
default_weight = WORK_WEIGHT_1
@inlineCallbacks
def doWork(self):
# Find all work items with the same push ID and find the highest
# priority. Delete matching work items.
results = (yield Select(
[self.table.WORK_ID, self.table.JOB_ID, self.table.PUSH_PRIORITY],
From=self.table, Where=self.table.PUSH_ID == self.pushID).on(
self.transaction))
maxPriority = self.pushPriority
# If there are other enqueued work items for this push ID, find the
# highest priority one and use that value. Note that L{results} will
# not contain this work item as job processing behavior will have already
# deleted it. So we need to make sure the max priority calculation includes
# this one.
if results:
workIDs, jobIDs, priorities = zip(*results)
maxPriority = max(priorities + (self.pushPriority,))
# Delete the work items and jobs we selected - deleting the job ensures that
# no "orphaned" jobs are left in the job queue which would otherwise run at
# some later point and do nothing because there is no related work item.
yield Delete(
From=self.table,
Where=self.table.WORK_ID.In(Parameter("workIDs", len(workIDs)))
).on(self.transaction, workIDs=workIDs)
yield Delete(
From=JobItem.table, #@UndefinedVariable
Where=JobItem.jobID.In(Parameter("jobIDs", len(jobIDs))) #@UndefinedVariable
).on(self.transaction, jobIDs=jobIDs)
pushDistributor = self.transaction._pushDistributor
if pushDistributor is not None:
# Convert the integer priority value back into a constant
priority = PushPriority.lookupByValue(maxPriority)
yield pushDistributor.enqueue(self.transaction, self.pushID, priority=priority)
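The coalescing rule in `doWork()` above can be summarized in isolation: among all queued work items sharing a push ID, only one notification goes out, at the highest priority seen. A minimal sketch of that rule (the numeric priority values are assumptions standing in for `PushPriority` members):

```python
# Assumed stand-ins for PushPriority.low/.medium/.high values.
LOW, MEDIUM, HIGH = 1, 5, 10

def coalesce(items):
    """Group (pushID, priority) work items, keeping the max priority per ID."""
    merged = {}
    for push_id, priority in items:
        merged[push_id] = max(priority, merged.get(push_id, LOW))
    return merged

queued = [
    ("/CalDAV/localhost/bar/", LOW),
    ("/CalDAV/localhost/bar/", HIGH),
    ("/CalDAV/localhost/bar/", MEDIUM),
    ("/CalDAV/localhost/baz/", LOW),
]
result = coalesce(queued)
```

This matches the behavior exercised in `test_notifier.py` below, where enqueuing low, medium, and high work items for one push ID yields a single high-priority push.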
# - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
# Classes used within calendarserver itself
#
class Notifier(object):
"""
Provides a hook for sending change notifications to the
L{NotifierFactory}.
"""
log = Logger()
implements(IStoreNotifier)
def __init__(self, notifierFactory, storeObject):
self._notifierFactory = notifierFactory
self._storeObject = storeObject
self._notify = True
def enableNotify(self, arg):
self.log.debug("enableNotify: %s" % (self._storeObject,))
self._notify = True
def disableNotify(self):
self.log.debug("disableNotify: %s" % (self._storeObject,))
self._notify = False
@inlineCallbacks
def notify(self, txn, priority=PushPriority.high):
"""
Send the notification. For a home object we just push using the home id. For a home
child we push both the owner home id and the owned home child id.
@param txn: The transaction to create the work item with
@type txn: L{CommonStoreTransaction}
@param priority: the priority level
@type priority: L{PushPriority}
"""
# Push ids from the store objects are a tuple of (prefix, name,) and we need to compose that
# into a single token.
ids = (self._storeObject.notifierID(),)
# For resources that are children of a home, we need to add the home id too.
if hasattr(self._storeObject, "parentNotifierID"):
ids += (self._storeObject.parentNotifierID(),)
for prefix, id in ids:
if self._notify:
self.log.debug(
"Notifications are enabled: %s %s/%s priority=%d" %
(self._storeObject, prefix, id, priority.value))
yield self._notifierFactory.send(
prefix, id, txn,
priority=priority)
else:
self.log.debug(
"Skipping notification for: %s %s/%s" %
(self._storeObject, prefix, id,))
def clone(self, storeObject):
return self.__class__(self._notifierFactory, storeObject)
def nodeName(self):
"""
The push key is the notifier id of the home collection for home and owned
home child objects. For a shared home child, the push key is the notifier
id of the owner's home child.
"""
if hasattr(self._storeObject, "ownerHome"):
if self._storeObject.owned():
prefix, id = self._storeObject.ownerHome().notifierID()
else:
prefix, id = self._storeObject.notifierID()
else:
prefix, id = self._storeObject.notifierID()
return self._notifierFactory.pushKeyForId(prefix, id)
class NotifierFactory(object):
"""
Notifier Factory
Creates Notifier instances and forwards notifications from them to the
work queue.
"""
log = Logger()
implements(IStoreNotifierFactory)
def __init__(self, hostname, coalesceSeconds, reactor=None):
self.store = None # Initialized after the store is created
self.hostname = hostname
self.coalesceSeconds = coalesceSeconds
if reactor is None:
from twisted.internet import reactor
self.reactor = reactor
@inlineCallbacks
def send(self, prefix, id, txn, priority=PushPriority.high):
"""
Enqueue a push notification work item on the provided transaction.
"""
yield txn.enqueue(
PushNotificationWork,
pushID=self.pushKeyForId(prefix, id),
notBefore=datetime.datetime.utcnow() + datetime.timedelta(seconds=self.coalesceSeconds),
pushPriority=priority.value
)
def newNotifier(self, storeObject):
return Notifier(self, storeObject)
def pushKeyForId(self, prefix, id):
key = "/%s/%s/%s/" % (prefix, self.hostname, id)
return key[:255]
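`pushKeyForId` above composes the token that clients subscribe to. A self-contained sketch of that composition (the 255-character truncation is taken directly from the code above; treating it as a storage column limit is an assumption):

```python
def push_key_for_id(prefix, hostname, id):
    # Mirrors NotifierFactory.pushKeyForId above: join the parts with
    # slashes and truncate to 255 characters (assumed storage limit).
    return ("/%s/%s/%s/" % (prefix, hostname, id))[:255]

key = push_key_for_id("CalDAV", "localhost", "user01")
```

The resulting keys are the same `/CalDAV/localhost/user01/`-style strings the AMP push tests below subscribe to.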
def getPubSubAPSConfiguration(notifierID, config):
"""
Returns the Apple push notification settings specific to the pushKey
"""
try:
protocol, ignored = notifierID
except ValueError:
# id has no protocol, so we can't look up APS config
return None
# If we are directly talking to apple push, advertise those settings
applePushSettings = config.Notifications.Services.APNS
if applePushSettings.Enabled:
settings = {}
settings["APSBundleID"] = applePushSettings[protocol]["Topic"]
if config.EnableSSL:
url = "https://%s:%s/%s" % (
config.ServerHostName, config.SSLPort,
applePushSettings.SubscriptionURL)
else:
url = "http://%s:%s/%s" % (
config.ServerHostName, config.HTTPPort,
applePushSettings.SubscriptionURL)
settings["SubscriptionURL"] = url
settings["SubscriptionRefreshIntervalSeconds"] = applePushSettings.SubscriptionRefreshIntervalSeconds
settings["APSEnvironment"] = applePushSettings.Environment
return settings
return None
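The `SubscriptionURL` branch in `getPubSubAPSConfiguration()` above picks the scheme and port from the server config. A minimal sketch of just that branch, extracted as a function for clarity (the function name is an illustrative assumption):

```python
def subscription_url(enable_ssl, host, ssl_port, http_port, path):
    # Mirrors the URL construction in getPubSubAPSConfiguration() above:
    # https with the SSL port when SSL is enabled, plain http otherwise.
    if enable_ssl:
        return "https://%s:%s/%s" % (host, ssl_port, path)
    return "http://%s:%s/%s" % (host, http_port, path)

url = subscription_url(True, "calendars.example.com", 8443, 8008, "apns")
```

This is the same value asserted in `test_getPubSubAPSConfiguration` below.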
class PushDistributor(object):
"""
Distributes notifications to the protocol-specific subservices
"""
def __init__(self, observers):
"""
@param observers: the list of observers to distribute pushKeys to
@type observers: C{list}
"""
# TODO: add an IPushObservers interface?
self.observers = observers
@inlineCallbacks
def enqueue(self, transaction, pushKey, priority=PushPriority.high):
"""
Pass along enqueued pushKey to any observers
@param transaction: a transaction to use, if needed
@type transaction: L{CommonStoreTransaction}
@param pushKey: the push key to distribute to the observers
@type pushKey: C{str}
@param priority: the priority level
@type priority: L{PushPriority}
"""
for observer in self.observers:
yield observer.enqueue(
transaction, pushKey,
dataChangedTimestamp=None, priority=priority)
calendarserver-7.0+dfsg/calendarserver/push/test/__init__.py
##
# Copyright (c) 2011-2015 Apple Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
##
"""
CalendarServer push notification code.
"""
calendarserver-7.0+dfsg/calendarserver/push/test/test_amppush.py
##
# Copyright (c) 2011-2015 Apple Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
##
from calendarserver.push.amppush import AMPPushMaster, AMPPushNotifierProtocol
from calendarserver.push.amppush import NotificationForID
from twistedcaldav.test.util import StoreTestCase
from twisted.internet.task import Clock
from calendarserver.push.util import PushPriority
class AMPPushMasterTests(StoreTestCase):
def test_AMPPushMaster(self):
# Set up the service
clock = Clock()
service = AMPPushMaster(None, None, 0, True, 3, reactor=clock)
self.assertEquals(service.subscribers, [])
client1 = TestProtocol(service)
client1.subscribe("token1", "/CalDAV/localhost/user01/")
client1.subscribe("token1", "/CalDAV/localhost/user02/")
client2 = TestProtocol(service)
client2.subscribe("token2", "/CalDAV/localhost/user01/")
client3 = TestProtocol(service)
client3.subscribe("token3", "any")
service.addSubscriber(client1)
service.addSubscriber(client2)
service.addSubscriber(client3)
self.assertEquals(len(service.subscribers), 3)
self.assertTrue(client1.subscribedToID("/CalDAV/localhost/user01/"))
self.assertTrue(client1.subscribedToID("/CalDAV/localhost/user02/"))
self.assertFalse(client1.subscribedToID("nonexistent"))
self.assertTrue(client2.subscribedToID("/CalDAV/localhost/user01/"))
self.assertFalse(client2.subscribedToID("/CalDAV/localhost/user02/"))
self.assertTrue(client3.subscribedToID("/CalDAV/localhost/user01/"))
self.assertTrue(client3.subscribedToID("/CalDAV/localhost/user02/"))
self.assertTrue(client3.subscribedToID("/CalDAV/localhost/user03/"))
dataChangedTimestamp = 1354815999
service.enqueue(
None, "/CalDAV/localhost/user01/",
dataChangedTimestamp=dataChangedTimestamp,
priority=PushPriority.high)
self.assertEquals(len(client1.history), 0)
self.assertEquals(len(client2.history), 0)
self.assertEquals(len(client3.history), 0)
clock.advance(1)
self.assertEquals(
client1.history,
[
(
NotificationForID,
{
'id' : '/CalDAV/localhost/user01/',
'dataChangedTimestamp' : 1354815999,
'priority' : PushPriority.high.value,
}
)
]
)
self.assertEquals(len(client2.history), 0)
self.assertEquals(len(client3.history), 0)
clock.advance(3)
self.assertEquals(
client2.history,
[
(
NotificationForID,
{
'id' : '/CalDAV/localhost/user01/',
'dataChangedTimestamp' : 1354815999,
'priority' : PushPriority.high.value,
}
)
]
)
self.assertEquals(len(client3.history), 0)
clock.advance(3)
self.assertEquals(
client3.history,
[
(
NotificationForID,
{
'id' : '/CalDAV/localhost/user01/',
'dataChangedTimestamp' : 1354815999,
'priority' : PushPriority.high.value,
}
)
]
)
client1.reset()
client2.reset()
client2.unsubscribe("token2", "/CalDAV/localhost/user01/")
service.enqueue(
None, "/CalDAV/localhost/user01/",
dataChangedTimestamp=dataChangedTimestamp,
priority=PushPriority.low)
self.assertEquals(len(client1.history), 0)
clock.advance(1)
self.assertEquals(
client1.history,
[
(
NotificationForID,
{
'id' : '/CalDAV/localhost/user01/',
'dataChangedTimestamp' : 1354815999,
'priority' : PushPriority.low.value,
}
)
]
)
self.assertEquals(len(client2.history), 0)
clock.advance(3)
self.assertEquals(len(client2.history), 0)
# Turn off staggering
service.scheduler = None
client1.reset()
client2.reset()
client2.subscribe("token2", "/CalDAV/localhost/user01/")
service.enqueue(
None, "/CalDAV/localhost/user01/",
dataChangedTimestamp=dataChangedTimestamp,
priority=PushPriority.medium)
self.assertEquals(
client1.history,
[
(
NotificationForID,
{
'id' : '/CalDAV/localhost/user01/',
'dataChangedTimestamp' : 1354815999,
'priority' : PushPriority.medium.value,
}
)
]
)
self.assertEquals(
client2.history,
[
(
NotificationForID,
{
'id' : '/CalDAV/localhost/user01/',
'dataChangedTimestamp' : 1354815999,
'priority' : PushPriority.medium.value,
}
)
]
)
class TestProtocol(AMPPushNotifierProtocol):
def __init__(self, service):
super(TestProtocol, self).__init__(service)
self.reset()
def callRemote(self, cls, **kwds):
self.history.append((cls, kwds))
def reset(self):
self.history = []
calendarserver-7.0+dfsg/calendarserver/push/test/test_applepush.py
##
# Copyright (c) 2011-2015 Apple Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
##
import json
import struct
import time
from calendarserver.push.applepush import (
ApplePushNotifierService, APNProviderProtocol, ApplePushPriority
)
from calendarserver.push.util import validToken, TokenHistory, PushPriority
from twistedcaldav.test.util import StoreTestCase
from twisted.internet.defer import inlineCallbacks, succeed
from twisted.internet.task import Clock
from txdav.common.icommondatastore import InvalidSubscriptionValues
class ApplePushNotifierServiceTests(StoreTestCase):
@inlineCallbacks
def test_ApplePushNotifierService(self):
settings = {
"Enabled" : True,
"SubscriptionURL" : "apn",
"SubscriptionPurgeSeconds" : 24 * 60 * 60,
"SubscriptionPurgeIntervalSeconds" : 24 * 60 * 60,
"ProviderHost" : "gateway.push.apple.com",
"ProviderPort" : 2195,
"FeedbackHost" : "feedback.push.apple.com",
"FeedbackPort" : 2196,
"FeedbackUpdateSeconds" : 300,
"EnableStaggering" : True,
"StaggerSeconds" : 3,
"CalDAV" : {
"CertificatePath" : "caldav.cer",
"PrivateKeyPath" : "caldav.pem",
"AuthorityChainPath" : "chain.pem",
"Passphrase" : "",
"Topic" : "caldav_topic",
},
"CardDAV" : {
"CertificatePath" : "carddav.cer",
"PrivateKeyPath" : "carddav.pem",
"AuthorityChainPath" : "chain.pem",
"Passphrase" : "",
"Topic" : "carddav_topic",
},
}
# Add subscriptions
txn = self._sqlCalendarStore.newTransaction()
# Ensure empty values don't get through
try:
yield txn.addAPNSubscription("", "", "", "", "", "")
except InvalidSubscriptionValues:
pass
try:
yield txn.addAPNSubscription("", "1", "2", "3", "", "")
except InvalidSubscriptionValues:
pass
token = "2d0d55cd7f98bcb81c6e24abcdc35168254c7846a43e2828b1ba5a8f82e219df"
token2 = "3d0d55cd7f98bcb81c6e24abcdc35168254c7846a43e2828b1ba5a8f82e219df"
key1 = "/CalDAV/calendars.example.com/user01/calendar/"
timestamp1 = 1000
uid = "D2256BCC-48E2-42D1-BD89-CBA1E4CCDFFB"
userAgent = "test agent"
ipAddr = "127.0.0.1"
yield txn.addAPNSubscription(token, key1, timestamp1, uid, userAgent, ipAddr)
yield txn.addAPNSubscription(token2, key1, timestamp1, uid, userAgent, ipAddr)
key2 = "/CalDAV/calendars.example.com/user02/calendar/"
timestamp2 = 3000
yield txn.addAPNSubscription(token, key2, timestamp2, uid, userAgent, ipAddr)
subscriptions = (yield txn.apnSubscriptionsBySubscriber(uid))
subscriptions = [[record.token, record.resourceKey, record.modified, record.userAgent, record.ipAddr] for record in subscriptions]
self.assertTrue([token, key1, timestamp1, userAgent, ipAddr] in subscriptions)
self.assertTrue([token, key2, timestamp2, userAgent, ipAddr] in subscriptions)
self.assertTrue([token2, key1, timestamp1, userAgent, ipAddr] in subscriptions)
# Verify an update to a subscription with a different uid takes on
# the new uid
timestamp3 = 5000
uid2 = "D8FFB335-9D36-4CE8-A3B9-D1859E38C0DA"
yield txn.addAPNSubscription(token, key2, timestamp3, uid2, userAgent, ipAddr)
subscriptions = (yield txn.apnSubscriptionsBySubscriber(uid))
subscriptions = [[record.token, record.resourceKey, record.modified, record.userAgent, record.ipAddr] for record in subscriptions]
self.assertTrue([token, key1, timestamp1, userAgent, ipAddr] in subscriptions)
self.assertFalse([token, key2, timestamp3, userAgent, ipAddr] in subscriptions)
subscriptions = (yield txn.apnSubscriptionsBySubscriber(uid2))
subscriptions = [[record.token, record.resourceKey, record.modified, record.userAgent, record.ipAddr] for record in subscriptions]
self.assertTrue([token, key2, timestamp3, userAgent, ipAddr] in subscriptions)
# Change it back
yield txn.addAPNSubscription(token, key2, timestamp2, uid, userAgent, ipAddr)
yield txn.commit()
# Set up the service
clock = Clock()
service = (yield ApplePushNotifierService.makeService(
settings,
self._sqlCalendarStore, testConnectorClass=TestConnector, reactor=clock))
self.assertEquals(set(service.providers.keys()), set(["CalDAV", "CardDAV"]))
self.assertEquals(set(service.feedbacks.keys()), set(["CalDAV", "CardDAV"]))
# First, enqueue a notification while we have no connection, in this
# case by doing it prior to startService()
# Notification arrives from calendar server
dataChangedTimestamp = 1354815999
txn = self._sqlCalendarStore.newTransaction()
yield service.enqueue(
txn, "/CalDAV/calendars.example.com/user01/calendar/",
dataChangedTimestamp=dataChangedTimestamp, priority=PushPriority.high)
yield txn.commit()
# The notifications should be in the queue
self.assertTrue(
((token, key1), dataChangedTimestamp, PushPriority.high)
in service.providers["CalDAV"].queue)
self.assertTrue(
((token2, key1), dataChangedTimestamp, PushPriority.high)
in service.providers["CalDAV"].queue)
# Start the service, making the connection which should service the
# queue
service.startService()
# The queue should be empty
self.assertEquals(service.providers["CalDAV"].queue, [])
# Verify data sent to APN
providerConnector = service.providers["CalDAV"].testConnector
rawData = providerConnector.transport.data
self.assertEquals(len(rawData), 199)
data = struct.unpack("!BI", rawData[:5])
self.assertEquals(data[0], 2) # command
self.assertEquals(data[1], 194) # frame length
# Item 1 (device token)
data = struct.unpack("!BH32s", rawData[5:40])
self.assertEquals(data[0], 1)
self.assertEquals(data[1], 32)
self.assertEquals(data[2].encode("hex"), token.replace(" ", "")) # token
# Item 2 (payload)
data = struct.unpack("!BH", rawData[40:43])
self.assertEquals(data[0], 2)
payloadLength = data[1]
self.assertEquals(payloadLength, 138)
payload = struct.unpack("!%ds" % (payloadLength,), rawData[43:181])
payload = json.loads(payload[0])
self.assertEquals(payload["key"], u"/CalDAV/calendars.example.com/user01/calendar/")
self.assertEquals(payload["dataChangedTimestamp"], dataChangedTimestamp)
self.assertTrue("pushRequestSubmittedTimestamp" in payload)
# Item 3 (notification id)
data = struct.unpack("!BHI", rawData[181:188])
self.assertEquals(data[0], 3)
self.assertEquals(data[1], 4)
self.assertEquals(data[2], 2)
# Item 4 (expiration)
data = struct.unpack("!BHI", rawData[188:195])
self.assertEquals(data[0], 4)
self.assertEquals(data[1], 4)
# Item 5 (priority)
data = struct.unpack("!BHB", rawData[195:199])
self.assertEquals(data[0], 5)
self.assertEquals(data[1], 1)
self.assertEquals(data[2], ApplePushPriority.high.value)
# Verify token history is updated
self.assertTrue(token in [t for (_ignore_i, t) in providerConnector.service.protocol.history.history])
self.assertTrue(token2 in [t for (_ignore_i, t) in providerConnector.service.protocol.history.history])
#
# Verify staggering behavior
#
# Reset sent data
providerConnector.transport.data = None
# Send notification while service is connected
txn = self._sqlCalendarStore.newTransaction()
yield service.enqueue(
txn, "/CalDAV/calendars.example.com/user01/calendar/",
priority=PushPriority.low)
yield txn.commit()
clock.advance(1) # so that first push is sent
self.assertEquals(len(providerConnector.transport.data), 199)
# Ensure that the priority is "low"
data = struct.unpack("!BHB", providerConnector.transport.data[195:199])
self.assertEquals(data[0], 5)
self.assertEquals(data[1], 1)
self.assertEquals(data[2], ApplePushPriority.low.value)
# Reset sent data
providerConnector.transport.data = None
clock.advance(3) # so that second push is sent
self.assertEquals(len(providerConnector.transport.data), 199)
history = []
def errorTestFunction(status, identifier):
history.append((status, identifier))
return succeed(None)
# Simulate an error
errorData = struct.pack("!BBI", APNProviderProtocol.COMMAND_ERROR, 1, 2)
yield providerConnector.receiveData(errorData, fn=errorTestFunction)
clock.advance(301)
# Simulate multiple errors and dataReceived called
# with amounts of data not fitting message boundaries
# Send 1st 4 bytes
history = []
errorData = struct.pack(
"!BBIBBI",
APNProviderProtocol.COMMAND_ERROR, 3, 4,
APNProviderProtocol.COMMAND_ERROR, 5, 6,
)
yield providerConnector.receiveData(errorData[:4], fn=errorTestFunction)
# Send remaining bytes
yield providerConnector.receiveData(errorData[4:], fn=errorTestFunction)
self.assertEquals(history, [(3, 4), (5, 6)])
# Buffer is empty
self.assertEquals(len(providerConnector.service.protocol.buffer), 0)
# Sending 7 bytes
yield providerConnector.receiveData("!" * 7, fn=errorTestFunction)
# Buffer has 1 byte remaining
self.assertEquals(len(providerConnector.service.protocol.buffer), 1)
# Prior to feedback, there are 2 subscriptions
txn = self._sqlCalendarStore.newTransaction()
subscriptions = (yield txn.apnSubscriptionsByToken(token))
yield txn.commit()
self.assertEquals(len(subscriptions), 2)
# Simulate feedback with a single token
feedbackConnector = service.feedbacks["CalDAV"].testConnector
timestamp = 2000
binaryToken = token.decode("hex")
feedbackData = struct.pack(
"!IH32s", timestamp, len(binaryToken),
binaryToken)
yield feedbackConnector.receiveData(feedbackData)
# Simulate feedback with multiple tokens, and dataReceived called
# with amounts of data not fitting message boundaries
history = []
def feedbackTestFunction(timestamp, token):
history.append((timestamp, token))
return succeed(None)
timestamp = 2000
binaryToken = token.decode("hex")
feedbackData = struct.pack(
"!IH32sIH32s",
timestamp, len(binaryToken), binaryToken,
timestamp, len(binaryToken), binaryToken,
)
# Send 1st 10 bytes
yield feedbackConnector.receiveData(feedbackData[:10], fn=feedbackTestFunction)
# Send remaining bytes
yield feedbackConnector.receiveData(feedbackData[10:], fn=feedbackTestFunction)
self.assertEquals(history, [(timestamp, token), (timestamp, token)])
# Buffer is empty
self.assertEquals(len(feedbackConnector.service.protocol.buffer), 0)
# Sending 39 bytes
yield feedbackConnector.receiveData("!" * 39, fn=feedbackTestFunction)
# Buffer has 1 byte remaining
self.assertEquals(len(feedbackConnector.service.protocol.buffer), 1)
# The second subscription should now be gone
txn = self._sqlCalendarStore.newTransaction()
subscriptions = (yield txn.apnSubscriptionsByToken(token))
yield txn.commit()
self.assertEquals(len(subscriptions), 1)
self.assertEqual(subscriptions[0].resourceKey, "/CalDAV/calendars.example.com/user02/calendar/")
self.assertEqual(subscriptions[0].modified, 3000)
self.assertEqual(subscriptions[0].subscriberGUID, "D2256BCC-48E2-42D1-BD89-CBA1E4CCDFFB")
# Verify processError removes associated subscriptions and history
# First find the id corresponding to token2
for (id, t) in providerConnector.service.protocol.history.history:
if t == token2:
break
yield providerConnector.service.protocol.processError(8, id)
# The token for this identifier is gone
self.assertTrue((id, token2) not in providerConnector.service.protocol.history.history)
# All subscriptions for this token should now be gone
txn = self._sqlCalendarStore.newTransaction()
subscriptions = (yield txn.apnSubscriptionsByToken(token2))
yield txn.commit()
self.assertEquals(subscriptions, [])
#
# Verify purgeOldAPNSubscriptions
#
# Create two subscriptions, one old and one new
txn = self._sqlCalendarStore.newTransaction()
now = int(time.time())
yield txn.addAPNSubscription(token2, key1, now - 2 * 24 * 60 * 60, uid, userAgent, ipAddr) # old
yield txn.addAPNSubscription(token2, key2, now, uid, userAgent, ipAddr) # recent
yield txn.commit()
# Purge old subscriptions
txn = self._sqlCalendarStore.newTransaction()
yield txn.purgeOldAPNSubscriptions(now - 60 * 60)
yield txn.commit()
# Check that only the recent subscription remains
txn = self._sqlCalendarStore.newTransaction()
subscriptions = (yield txn.apnSubscriptionsByToken(token2))
yield txn.commit()
self.assertEquals(len(subscriptions), 1)
self.assertEquals(subscriptions[0].resourceKey, key2)
service.stopService()
def test_validToken(self):
self.assertTrue(validToken("2d0d55cd7f98bcb81c6e24abcdc35168254c7846a43e2828b1ba5a8f82e219df"))
self.assertFalse(validToken("d0d55cd7f98bcb81c6e24abcdc35168254c7846a43e2828b1ba5a8f82e219df"))
self.assertFalse(validToken("foo"))
self.assertFalse(validToken(""))
def test_TokenHistory(self):
history = TokenHistory(maxSize=5)
# Ensure returned identifiers increment
for id, token in enumerate(
("one", "two", "three", "four", "five"),
start=1
):
self.assertEquals(id, history.add(token))
self.assertEquals(len(history.history), 5)
# History size never exceeds maxSize
id = history.add("six")
self.assertEquals(id, 6)
self.assertEquals(len(history.history), 5)
self.assertEquals(
history.history,
[(2, "two"), (3, "three"), (4, "four"), (5, "five"), (6, "six")]
)
id = history.add("seven")
self.assertEquals(id, 7)
self.assertEquals(len(history.history), 5)
self.assertEquals(
history.history,
[(3, "three"), (4, "four"), (5, "five"), (6, "six"), (7, "seven")]
)
# Look up non-existent identifier
token = history.extractIdentifier(9999)
self.assertEquals(token, None)
self.assertEquals(
history.history,
[(3, "three"), (4, "four"), (5, "five"), (6, "six"), (7, "seven")]
)
# Look up oldest identifier in history
token = history.extractIdentifier(3)
self.assertEquals(token, "three")
self.assertEquals(
history.history,
[(4, "four"), (5, "five"), (6, "six"), (7, "seven")]
)
# Look up latest identifier in history
token = history.extractIdentifier(7)
self.assertEquals(token, "seven")
self.assertEquals(
history.history,
[(4, "four"), (5, "five"), (6, "six")]
)
# Look up an identifier in the middle
token = history.extractIdentifier(5)
self.assertEquals(token, "five")
self.assertEquals(
history.history,
[(4, "four"), (6, "six")]
)
class TestConnector(object):
def connect(self, service, factory):
self.service = service
service.protocol = factory.buildProtocol(None)
service.connected = 1
self.transport = StubTransport()
service.protocol.makeConnection(self.transport)
def receiveData(self, data, fn=None):
return self.service.protocol.dataReceived(data, fn=fn)
class StubTransport(object):
def __init__(self):
self.data = None
def write(self, data):
self.data = data
calendarserver-7.0+dfsg/calendarserver/push/test/test_notifier.py
##
# Copyright (c) 2011-2015 Apple Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
##
from twistedcaldav.test.util import StoreTestCase
from calendarserver.push.notifier import PushDistributor
from calendarserver.push.notifier import getPubSubAPSConfiguration
from calendarserver.push.notifier import PushNotificationWork
from twisted.internet.defer import inlineCallbacks, succeed
from twistedcaldav.config import ConfigDict
from txdav.common.datastore.test.util import populateCalendarsFrom
from txdav.common.datastore.sql_tables import _BIND_MODE_WRITE
from calendarserver.push.util import PushPriority
from txdav.idav import ChangeCategory
from twext.enterprise.jobs.jobitem import JobItem
from twisted.internet import reactor
class StubService(object):
def __init__(self):
self.reset()
def reset(self):
self.history = []
def enqueue(
self, transaction, id, dataChangedTimestamp=None,
priority=None
):
self.history.append((id, priority))
return(succeed(None))
class PushDistributorTests(StoreTestCase):
@inlineCallbacks
def test_enqueue(self):
stub = StubService()
dist = PushDistributor([stub])
yield dist.enqueue(None, "testing", PushPriority.high)
self.assertEquals(stub.history, [("testing", PushPriority.high)])
def test_getPubSubAPSConfiguration(self):
config = ConfigDict({
"EnableSSL" : True,
"ServerHostName" : "calendars.example.com",
"SSLPort" : 8443,
"HTTPPort" : 8008,
"Notifications" : {
"Services" : {
"APNS" : {
"CalDAV" : {
"Topic" : "test topic",
},
"SubscriptionRefreshIntervalSeconds" : 42,
"SubscriptionURL" : "apns",
"Environment" : "prod",
"Enabled" : True,
},
},
},
})
result = getPubSubAPSConfiguration(("CalDAV", "foo",), config)
self.assertEquals(
result,
{
"SubscriptionRefreshIntervalSeconds": 42,
"SubscriptionURL": "https://calendars.example.com:8443/apns",
"APSBundleID": "test topic",
"APSEnvironment": "prod"
}
)
class StubDistributor(object):
def __init__(self):
self.reset()
def reset(self):
self.history = []
def enqueue(
self, transaction, pushID, dataChangedTimestamp=None,
priority=None
):
self.history.append((pushID, priority))
return(succeed(None))
class PushNotificationWorkTests(StoreTestCase):
@inlineCallbacks
def test_work(self):
self.patch(JobItem, "failureRescheduleInterval", 2)
pushDistributor = StubDistributor()
def decorateTransaction(txn):
txn._pushDistributor = pushDistributor
self._sqlCalendarStore.callWithNewTransactions(decorateTransaction)
txn = self._sqlCalendarStore.newTransaction()
yield txn.enqueue(
PushNotificationWork,
pushID="/CalDAV/localhost/foo/",
pushPriority=PushPriority.high.value
)
yield txn.commit()
yield JobItem.waitEmpty(self.storeUnderTest().newTransaction, reactor, 60)
self.assertEquals(
pushDistributor.history,
[("/CalDAV/localhost/foo/", PushPriority.high)])
pushDistributor.reset()
txn = self._sqlCalendarStore.newTransaction()
yield txn.enqueue(
PushNotificationWork,
pushID="/CalDAV/localhost/bar/",
pushPriority=PushPriority.high.value
)
yield txn.enqueue(
PushNotificationWork,
pushID="/CalDAV/localhost/bar/",
pushPriority=PushPriority.high.value
)
yield txn.enqueue(
PushNotificationWork,
pushID="/CalDAV/localhost/bar/",
pushPriority=PushPriority.high.value
)
# Enqueue a different pushID to ensure those are not grouped with
# the others:
yield txn.enqueue(
PushNotificationWork,
pushID="/CalDAV/localhost/baz/",
pushPriority=PushPriority.high.value
)
yield txn.commit()
yield JobItem.waitEmpty(self.storeUnderTest().newTransaction, reactor, 60)
self.assertEquals(
set(pushDistributor.history),
set([
("/CalDAV/localhost/bar/", PushPriority.high),
("/CalDAV/localhost/baz/", PushPriority.high)
])
)
# Ensure only the high-water-mark priority push goes out, by
# enqueuing low, medium, and high notifications
pushDistributor.reset()
txn = self._sqlCalendarStore.newTransaction()
yield txn.enqueue(
PushNotificationWork,
pushID="/CalDAV/localhost/bar/",
pushPriority=PushPriority.low.value
)
yield txn.enqueue(
PushNotificationWork,
pushID="/CalDAV/localhost/bar/",
pushPriority=PushPriority.high.value
)
yield txn.enqueue(
PushNotificationWork,
pushID="/CalDAV/localhost/bar/",
pushPriority=PushPriority.medium.value
)
yield txn.commit()
yield JobItem.waitEmpty(self.storeUnderTest().newTransaction, reactor, 60)
self.assertEquals(
pushDistributor.history,
[("/CalDAV/localhost/bar/", PushPriority.high)])
class NotifierFactory(StoreTestCase):
requirements = {
"user01" : {
"calendar_1" : {}
},
"user02" : {
"calendar_1" : {}
},
}
@inlineCallbacks
def populate(self):
# Need to bypass normal validation inside the store
yield populateCalendarsFrom(self.requirements, self.storeUnderTest())
self.notifierFactory.reset()
def test_storeInit(self):
self.assertTrue("push" in self._sqlCalendarStore._notifierFactories)
@inlineCallbacks
def test_homeNotifier(self):
home = yield self.homeUnderTest(name="user01")
yield home.notifyChanged(category=ChangeCategory.default)
self.assertEquals(
self.notifierFactory.history,
[("/CalDAV/example.com/user01/", PushPriority.high)])
yield self.commit()
@inlineCallbacks
def test_calendarNotifier(self):
calendar = yield self.calendarUnderTest(home="user01")
yield calendar.notifyChanged(category=ChangeCategory.default)
self.assertEquals(
set(self.notifierFactory.history),
set([
("/CalDAV/example.com/user01/", PushPriority.high),
("/CalDAV/example.com/user01/calendar_1/", PushPriority.high)])
)
yield self.commit()
@inlineCallbacks
def test_shareWithNotifier(self):
calendar = yield self.calendarUnderTest(home="user01")
yield calendar.inviteUIDToShare("user02", _BIND_MODE_WRITE, "")
self.assertEquals(
set(self.notifierFactory.history),
set([
("/CalDAV/example.com/user01/", PushPriority.high),
("/CalDAV/example.com/user01/calendar_1/", PushPriority.high),
("/CalDAV/example.com/user02/", PushPriority.high),
("/CalDAV/example.com/user02/notification/", PushPriority.high),
])
)
yield self.commit()
calendar = yield self.calendarUnderTest(home="user01")
yield calendar.uninviteUIDFromShare("user02")
self.assertEquals(
set(self.notifierFactory.history),
set([
("/CalDAV/example.com/user01/", PushPriority.high),
("/CalDAV/example.com/user01/calendar_1/", PushPriority.high),
("/CalDAV/example.com/user02/", PushPriority.high),
("/CalDAV/example.com/user02/notification/", PushPriority.high),
])
)
yield self.commit()
@inlineCallbacks
def test_sharedCalendarNotifier(self):
calendar = yield self.calendarUnderTest(home="user01")
shareeView = yield calendar.inviteUIDToShare("user02", _BIND_MODE_WRITE, "")
yield shareeView.acceptShare("")
shareName = shareeView.name()
yield self.commit()
self.notifierFactory.reset()
shared = yield self.calendarUnderTest(home="user02", name=shareName)
yield shared.notifyChanged(category=ChangeCategory.default)
self.assertEquals(
set(self.notifierFactory.history),
set([
("/CalDAV/example.com/user01/", PushPriority.high),
("/CalDAV/example.com/user01/calendar_1/", PushPriority.high)])
)
yield self.commit()
@inlineCallbacks
def test_notificationNotifier(self):
notifications = yield self.transactionUnderTest().notificationsWithUID("user01", create=True)
yield notifications.notifyChanged(category=ChangeCategory.default)
self.assertEquals(
set(self.notifierFactory.history),
set([
("/CalDAV/example.com/user01/", PushPriority.high),
("/CalDAV/example.com/user01/notification/", PushPriority.high)])
)
yield self.commit()
# calendarserver-7.0+dfsg/calendarserver/push/util.py
##
# Copyright (c) 2011-2015 Apple Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
##
from OpenSSL import crypto
from twext.python.log import Logger
from twisted.python.constants import Values, ValueConstant
class PushPriority(Values):
"""
Constants to use for push priorities
"""
low = ValueConstant(1)
medium = ValueConstant(5)
high = ValueConstant(10)
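The numeric values are significant: when several pushes for the same ID are queued (as the `PushNotificationWork` tests elsewhere in this tree exercise), only the high-water-mark priority is sent. A minimal stdlib sketch of that comparison, using `enum.IntEnum` in place of Twisted's `Values`/`ValueConstant` (an assumption of this example):

```python
from enum import IntEnum

# Stand-in for the Values-based PushPriority above, with the same weights.
class Priority(IntEnum):
    low = 1
    medium = 5
    high = 10

def high_water_mark(priorities):
    # Coalesce a batch of queued priorities down to the single winner.
    return max(priorities)
```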
def getAPNTopicFromCertificate(certPath):
"""
Given the path to a certificate, extract the UID value portion of the
subject, which in this context is used for the associated APN topic.
@param certPath: file path of the certificate
@type certPath: C{str}
@return: C{str} topic, or empty string if value is not found
"""
certData = open(certPath).read()
x509 = crypto.load_certificate(crypto.FILETYPE_PEM, certData)
subject = x509.get_subject()
components = subject.get_components()
for name, value in components:
if name == "UID":
return value
return ""
def validToken(token):
"""
Return True if token is in hex and is 64 characters long, False
otherwise
"""
if len(token) != 64:
return False
try:
token.decode("hex")
except TypeError:
return False
return True
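`validToken` relies on Python 2's `str.decode("hex")`; the equivalent check in Python 3 would use `bytes.fromhex`, which raises `ValueError` on bad input. A sketch, under that assumption:

```python
def valid_token_py3(token):
    # 64 hex characters == a 32-byte APNs device token.
    if len(token) != 64:
        return False
    try:
        bytes.fromhex(token)
    except ValueError:
        return False
    return True
```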
class TokenHistory(object):
"""
Manages a queue of tokens and corresponding identifiers. The queue is
kept at or below maxSize in length (older entries removed as needed).
"""
def __init__(self, maxSize=200):
"""
@param maxSize: How large the history is allowed to grow. Once this
size is reached, older entries are pruned as needed.
@type maxSize: C{int}
"""
self.maxSize = maxSize
self.identifier = 0
self.history = []
def add(self, token):
"""
Add a token to the history, and return the new identifier associated
with this token. Identifiers begin at 1 and increase each time this
is called. If the number of items in the history exceeds maxSize,
older entries are removed to get the size down to maxSize.
@param token: The token to store
@type token: C{str}
@returns: the message identifier associated with this token, C{int}
"""
self.identifier += 1
self.history.append((self.identifier, token))
del self.history[:-self.maxSize]
return self.identifier
def extractIdentifier(self, identifier):
"""
Look for the token associated with the identifier. Remove the
identifier-token pair from the history and return the token. Return
None if the identifier is not found in the history.
@param identifier: The identifier to look up
@type identifier: C{int}
@returns: the token associated with this message identifier, C{str},
or None if not found
"""
for index, (id, token) in enumerate(self.history):
if id == identifier:
del self.history[index]
return token
return None
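A standalone sketch of TokenHistory's two invariants — identifiers increase monotonically from 1, and the queue is pruned to maxSize — using a trimmed copy of the class (the names follow the original, but this is illustration, not the shipped code):

```python
class TokenHistorySketch(object):
    def __init__(self, maxSize=200):
        self.maxSize = maxSize
        self.identifier = 0
        self.history = []

    def add(self, token):
        # Identifiers begin at 1; oldest entries past maxSize are pruned.
        self.identifier += 1
        self.history.append((self.identifier, token))
        del self.history[:-self.maxSize]
        return self.identifier

    def extractIdentifier(self, identifier):
        # Remove and return the token for this identifier, else None.
        for index, (ident, token) in enumerate(self.history):
            if ident == identifier:
                del self.history[index]
                return token
        return None
```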
class PushScheduler(object):
"""
Allows staggered scheduling of push notifications
"""
log = Logger()
def __init__(self, reactor, callback, staggerSeconds=1):
"""
@param callback: The method to call when it's time to send a push
@type callback: callable
@param staggerSeconds: The number of seconds to stagger between each
push
@type staggerSeconds: integer
"""
self.outstanding = {}
self.reactor = reactor
self.callback = callback
self.staggerSeconds = staggerSeconds
def schedule(self, tokens, key, dataChangedTimestamp, priority):
"""
Schedules a batch of notifications for the given tokens, staggered
with self.staggerSeconds between each one. Duplicates are ignored,
so if a token/key pair is already scheduled and not yet sent, a new
one will not be scheduled for that pair.
@param tokens: The device tokens to schedule notifications for
@type tokens: List of C{str}
@param key: The key to use for this batch of notifications
@type key: C{str}
@param dataChangedTimestamp: Timestamp (epoch seconds) for the data change
which triggered this notification
@type dataChangedTimestamp: C{int}
"""
scheduleTime = 0.0
for token in tokens:
internalKey = (token, key)
if internalKey in self.outstanding:
self.log.debug(
"PushScheduler already has this scheduled: %s" %
(internalKey,))
else:
self.outstanding[internalKey] = self.reactor.callLater(
scheduleTime, self.send, token, key, dataChangedTimestamp,
priority)
self.log.debug(
"PushScheduler scheduled: %s in %.0f sec" %
(internalKey, scheduleTime))
scheduleTime += self.staggerSeconds
def send(self, token, key, dataChangedTimestamp, priority):
"""
This method is what actually gets scheduled. Its job is to remove
its corresponding entry from the outstanding dict and call the
callback.
@param token: The device token to send a notification to
@type token: C{str}
@param key: The notification key
@type key: C{str}
@param dataChangedTimestamp: Timestamp (epoch seconds) for the data change
which triggered this notification
@type dataChangedTimestamp: C{int}
"""
self.log.debug("PushScheduler fired for %s %s %d" % (token, key, dataChangedTimestamp))
del self.outstanding[(token, key)]
return self.callback(token, key, dataChangedTimestamp, priority)
def stop(self):
"""
Cancel all outstanding delayed calls
"""
for (token, key), delayed in self.outstanding.iteritems():
self.log.debug("PushScheduler cancelling %s %s" % (token, key))
delayed.cancel()
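The stagger/dedupe behaviour of `PushScheduler.schedule` can be shown without a real reactor; here a stub records `callLater` arguments instead of scheduling them (`StubReactor` and the free-standing `schedule` function are this example's invention):

```python
class StubReactor(object):
    # Records callLater() calls instead of actually scheduling them.
    def __init__(self):
        self.calls = []

    def callLater(self, delay, func, *args):
        self.calls.append((delay, args))
        return object()  # stand-in for the IDelayedCall handle

def schedule(reactor, tokens, key, outstanding, staggerSeconds=1):
    when = 0.0
    for token in tokens:
        if (token, key) in outstanding:
            continue  # duplicate token/key pair: already queued, skip
        outstanding[(token, key)] = reactor.callLater(when, None, token, key)
        when += staggerSeconds
```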
# calendarserver-7.0+dfsg/calendarserver/tap/__init__.py
##
# Copyright (c) 2005-2015 Apple Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
##
"""
CalendarServer TAP plugin support.
"""
from calendarserver.tap import profiling # Pre-imported for side-effect
__all__ = [
"caldav",
"carddav",
"profiling",
"util"
]
# calendarserver-7.0+dfsg/calendarserver/tap/caldav.py
# -*- test-case-name: calendarserver.tap.test.test_caldav -*-
##
# Copyright (c) 2005-2015 Apple Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
##
from __future__ import print_function
__all__ = [
"CalDAVService",
"CalDAVOptions",
"CalDAVServiceMaker",
]
import sys
from collections import OrderedDict
from os import getuid, getgid, umask, remove, environ, stat, chown, W_OK
from os.path import exists, basename
import socket
from stat import S_ISSOCK
import time
from subprocess import Popen, PIPE
from pwd import getpwuid, getpwnam
from grp import getgrnam
import OpenSSL
from OpenSSL.SSL import Error as SSLError
from zope.interface import implements
from twistedcaldav.stdconfig import config
from twisted.python.log import FileLogObserver, ILogObserver
from twisted.python.logfile import LogFile
from twisted.python.usage import Options, UsageError
from twisted.python.util import uidFromString, gidFromString
from twisted.plugin import IPlugin
from twisted.internet.defer import (
gatherResults, Deferred, inlineCallbacks, succeed
)
from twisted.internet.endpoints import UNIXClientEndpoint, TCP4ClientEndpoint
from twisted.internet.process import ProcessExitedAlready
from twisted.internet.protocol import ProcessProtocol
from twisted.internet.protocol import Factory
from twisted.application.internet import TCPServer, UNIXServer
from twisted.application.service import MultiService, IServiceMaker
from twisted.application.service import Service
from twisted.protocols.amp import AMP
from twext.python.filepath import CachingFilePath
from twext.python.log import Logger, LogLevel, replaceTwistedLoggers
from twext.internet.fswatch import (
DirectoryChangeListener, IDirectoryChangeListenee
)
from twext.internet.ssl import ChainingOpenSSLContextFactory
from twext.internet.tcp import MaxAcceptTCPServer, MaxAcceptSSLServer
from twext.enterprise.adbapi2 import ConnectionPool
from twext.enterprise.ienterprise import ORACLE_DIALECT
from twext.enterprise.ienterprise import POSTGRES_DIALECT
from twext.enterprise.jobs.jobitem import JobItem
from twext.enterprise.jobs.queue import NonPerformingQueuer
from twext.enterprise.jobs.queue import ControllerQueue
from twext.enterprise.jobs.queue import WorkerFactory as QueueWorkerFactory
from twext.application.service import ReExecService
from txdav.who.groups import GroupCacherPollingWork
from calendarserver.tools.purge import PrincipalPurgePollingWork
from txweb2.channel.http import (
LimitingHTTPFactory, SSLRedirectRequest, HTTPChannel
)
from txweb2.metafd import ConnectionLimiter, ReportingHTTPService
from txweb2.server import Site
from txdav.base.datastore.dbapiclient import DBAPIConnector
from txdav.caldav.datastore.scheduling.imip.inbound import MailRetriever
from txdav.caldav.datastore.scheduling.imip.inbound import scheduleNextMailPoll
from txdav.common.datastore.upgrade.migrate import UpgradeToDatabaseStep
from txdav.common.datastore.upgrade.sql.upgrade import (
UpgradeDatabaseCalendarDataStep, UpgradeDatabaseOtherStep,
UpgradeDatabaseSchemaStep, UpgradeDatabaseAddressBookDataStep,
UpgradeAcquireLockStep, UpgradeReleaseLockStep,
UpgradeDatabaseNotificationDataStep
)
from txdav.common.datastore.work.inbox_cleanup import InboxCleanupWork
from txdav.common.datastore.work.revision_cleanup import FindMinValidRevisionWork
from txdav.who.groups import GroupCacher
from twistedcaldav import memcachepool
from twistedcaldav.cache import MemcacheURLPatternChangeNotifier
from twistedcaldav.config import ConfigurationError
from twistedcaldav.localization import processLocalizationFiles
from twistedcaldav.stdconfig import DEFAULT_CONFIG, DEFAULT_CONFIG_FILE
from twistedcaldav.upgrade import (
UpgradeFileSystemFormatStep, PostDBImportStep,
)
from calendarserver.accesslog import AMPCommonAccessLoggingObserver
from calendarserver.accesslog import AMPLoggingFactory
from calendarserver.accesslog import RotatingFileAccessLoggingObserver
from calendarserver.controlsocket import ControlSocket
from calendarserver.controlsocket import ControlSocketConnectingService
from calendarserver.dashboard_service import DashboardServer
from calendarserver.push.amppush import AMPPushMaster, AMPPushForwarder
from calendarserver.push.applepush import ApplePushNotifierService
from calendarserver.push.notifier import PushDistributor
from calendarserver.tap.util import (
ConnectionDispenser, Stepper,
checkDirectories, getRootResource,
pgServiceFromConfig, getDBPool, MemoryLimitService,
storeFromConfig, getSSLPassphrase, preFlightChecks,
storeFromConfigWithDPSClient, storeFromConfigWithoutDPS,
)
try:
from calendarserver.version import version
except ImportError:
from twisted.python.modules import getModule
sys.path.insert(
0, getModule(__name__).pathEntry.filePath.child("support").path
)
from version import version as getVersion
version = "{} ({}*)".format(*getVersion())
from txweb2.server import VERSION as TWISTED_VERSION
TWISTED_VERSION = "CalendarServer/{} {}".format(
version.replace(" ", ""), TWISTED_VERSION,
)
log = Logger()
# Control socket message-routing constants.
_LOG_ROUTE = "log"
_QUEUE_ROUTE = "queue"
_CONTROL_SERVICE_NAME = "control"
def getid(uid, gid):
if uid is not None:
uid = uidFromString(uid)
if gid is not None:
gid = gidFromString(gid)
return (uid, gid)
def conflictBetweenIPv4AndIPv6():
"""
Is there a conflict between binding an IPv6 and an IPv4 port? Return True
if there is, False if there isn't.
This is a temporary workaround until maybe Twisted starts setting
C{IPPROTO_IPV6 / IPV6_V6ONLY} on IPv6 sockets.
@return: C{True} if listening on IPv4 conflicts with listening on IPv6.
"""
s4 = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s6 = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
try:
s4.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
s6.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
s4.bind(("", 0))
s4.listen(1)
usedport = s4.getsockname()[1]
try:
s6.bind(("::", usedport))
except socket.error:
return True
else:
return False
finally:
s4.close()
s6.close()
def _computeEnvVars(parent):
"""
Compute environment variables to be propagated to child processes.
"""
result = {}
requiredVars = [
"PATH",
"PYTHONPATH",
"LD_LIBRARY_PATH",
"LD_PRELOAD",
"DYLD_LIBRARY_PATH",
"DYLD_INSERT_LIBRARIES",
]
optionalVars = [
"PYTHONHASHSEED",
"KRB5_KTNAME",
"ORACLE_HOME",
"VERSIONER_PYTHON_PREFER_32_BIT",
]
for varname in requiredVars:
result[varname] = parent.get(varname, "")
for varname in optionalVars:
if varname in parent:
result[varname] = parent[varname]
return result
PARENT_ENVIRONMENT = _computeEnvVars(environ)
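The propagation rule above — required variables always defined for children (empty string if unset), optional ones forwarded only when the parent has them — in a standalone sketch:

```python
def compute_env_vars(parent, required, optional):
    # Required names default to "" so children always see them defined.
    result = {name: parent.get(name, "") for name in required}
    # Optional names are forwarded only if the parent actually has them.
    result.update((name, parent[name]) for name in optional if name in parent)
    return result
```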
class ErrorLoggingMultiService(MultiService, object):
""" Registers a rotating file logger for error logging, if
config.ErrorLogEnabled is True. """
def __init__(self, logEnabled, logPath, logRotateLength, logMaxFiles):
"""
@param logEnabled: Whether to write to a log file
@type logEnabled: C{boolean}
@param logPath: the full path to the log file
@type logPath: C{str}
@param logRotateLength: rotate when files exceed this many bytes
@type logRotateLength: C{int}
@param logMaxFiles: keep at most this many files
@type logMaxFiles: C{int}
"""
MultiService.__init__(self)
self.logEnabled = logEnabled
self.logPath = logPath
self.logRotateLength = logRotateLength
self.logMaxFiles = logMaxFiles
def setServiceParent(self, app):
MultiService.setServiceParent(self, app)
if self.logEnabled:
errorLogFile = LogFile.fromFullPath(
self.logPath,
rotateLength=self.logRotateLength,
maxRotatedFiles=self.logMaxFiles
)
errorLogObserver = FileLogObserver(errorLogFile).emit
# Registering ILogObserver with the Application object
# gets our observer picked up within AppLogger.start( )
app.setComponent(ILogObserver, errorLogObserver)
class CalDAVService (ErrorLoggingMultiService):
# The ConnectionService is a MultiService which bundles all the connection
# services together for the purposes of being able to stop them and wait
# for all of their connections to close before shutting down.
connectionServiceName = "ConnectionService"
def __init__(self, logObserver):
self.logObserver = logObserver # accesslog observer
ErrorLoggingMultiService.__init__(
self, config.ErrorLogEnabled,
config.ErrorLogFile, config.ErrorLogRotateMB * 1024 * 1024,
config.ErrorLogMaxRotatedFiles
)
def privilegedStartService(self):
MultiService.privilegedStartService(self)
self.logObserver.start()
@inlineCallbacks
def stopService(self):
"""
Wait for outstanding requests to finish
@return: a Deferred which fires when all outstanding requests are
complete
"""
connectionService = self.getServiceNamed(self.connectionServiceName)
# Note: removeService() also calls stopService()
yield self.removeService(connectionService)
# At this point, all outstanding requests have been responded to
yield super(CalDAVService, self).stopService()
self.logObserver.stop()
class CalDAVOptions (Options):
log = Logger()
optParameters = [[
"config", "f", DEFAULT_CONFIG_FILE, "Path to configuration file."
]]
zsh_actions = {"config": "_files -g '*.plist'"}
def __init__(self, *args, **kwargs):
super(CalDAVOptions, self).__init__(*args, **kwargs)
self.overrides = {}
@staticmethod
def coerceOption(configDict, key, value):
"""
Coerce the given C{value} to the type of C{configDict[key]}
"""
if key in configDict:
if isinstance(configDict[key], bool):
value = value == "True"
elif isinstance(configDict[key], (int, float, long)):
value = type(configDict[key])(value)
elif isinstance(configDict[key], (list, tuple)):
value = value.split(",")
elif isinstance(configDict[key], dict):
raise UsageError(
"Dict options not supported on the command line"
)
elif value == "None":
value = None
return value
@classmethod
def setOverride(cls, configDict, path, value, overrideDict):
"""
Set the value at path in configDict
"""
key = path[0]
if len(path) == 1:
overrideDict[key] = cls.coerceOption(configDict, key, value)
return
if key in configDict:
if not isinstance(configDict[key], dict):
raise UsageError(
"Found intermediate path element that is not a dictionary"
)
if key not in overrideDict:
overrideDict[key] = {}
cls.setOverride(
configDict[key], path[1:],
value, overrideDict[key]
)
def opt_option(self, option):
"""
Set an option to override a value in the config file. True, False, int,
and float options are supported, as well as comma separated lists. Only
one option may be given for each --option flag, however multiple
--option flags may be specified.
"""
if "=" in option:
path, value = option.split("=")
self.setOverride(
DEFAULT_CONFIG,
path.split("/"),
value,
self.overrides
)
else:
self.opt_option("{}=True".format(option))
opt_o = opt_option
def postOptions(self):
try:
self.loadConfiguration()
self.checkConfiguration()
except ConfigurationError, e:
print("Invalid configuration:", e)
sys.exit(1)
def loadConfiguration(self):
if not exists(self["config"]):
raise ConfigurationError(
"Config file {} not found. Exiting."
.format(self["config"])
)
print("Reading configuration from file:", self["config"])
config.load(self["config"])
for path in config.getProvider().importedFiles:
print("Imported configuration from file:", path)
for path in config.getProvider().includedFiles:
print("Adding configuration from file:", path)
for path in config.getProvider().missingFiles:
print("Missing configuration file:", path)
config.updateDefaults(self.overrides)
def checkDirectories(self, config):
checkDirectories(config)
def checkConfiguration(self):
uid, gid = None, None
if self.parent["uid"] or self.parent["gid"]:
uid, gid = getid(self.parent["uid"], self.parent["gid"])
def gottaBeRoot():
if getuid() != 0:
username = getpwuid(getuid()).pw_name
raise UsageError(
"Only root can drop privileges. You are: {}"
.format(username)
)
if uid and uid != getuid():
gottaBeRoot()
if gid and gid != getgid():
gottaBeRoot()
self.parent["pidfile"] = config.PIDFile
self.checkDirectories(config)
#
# Nuke the file log observer's time format.
#
if not config.ErrorLogFile and config.ProcessType == "Slave":
FileLogObserver.timeFormat = ""
# Check current umask and warn if changed
oldmask = umask(config.umask)
if oldmask != config.umask:
self.log.info(
"WARNING: changing umask from: 0{old:03o} to 0{new:03o}",
old=oldmask, new=config.umask
)
self.parent["umask"] = config.umask
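How a --option flag such as `Notifications/Enabled=True` flows through `coerceOption`/`setOverride` can be sketched with plain dicts (simplified: the Python 2 `long` case and the dict-rejection error are omitted):

```python
def coerce_option(defaults, key, value):
    # Coerce the command-line string to the type of the existing default.
    if key in defaults:
        if isinstance(defaults[key], bool):  # bool before int: bool is an int
            return value == "True"
        if isinstance(defaults[key], (int, float)):
            return type(defaults[key])(value)
        if isinstance(defaults[key], (list, tuple)):
            return value.split(",")
    return None if value == "None" else value

def set_override(defaults, path, value, overrides):
    # Walk "A/B/C" one key at a time, mirroring the nesting in overrides.
    key = path[0]
    if len(path) == 1:
        overrides[key] = coerce_option(defaults, key, value)
        return
    set_override(defaults[key], path[1:], value, overrides.setdefault(key, {}))
```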
class GroupOwnedUNIXServer(UNIXServer, object):
"""
A L{GroupOwnedUNIXServer} is a L{UNIXServer} which changes the group
ownership of its socket immediately after binding its port.
@ivar gid: the group ID which should own the socket after it is bound.
"""
def __init__(self, gid, *args, **kw):
super(GroupOwnedUNIXServer, self).__init__(*args, **kw)
self.gid = gid
def privilegedStartService(self):
"""
Bind the UNIX socket and then change its group.
"""
super(GroupOwnedUNIXServer, self).privilegedStartService()
# Unfortunately, there's no public way to access this. -glyph
fileName = self._port.port
chown(fileName, getuid(), self.gid)
class SlaveSpawnerService(Service):
"""
Service to add all Python subprocesses that need to do work to a
L{DelayedStartupProcessMonitor}:
- regular slave processes (CalDAV workers)
"""
def __init__(
self, maker, monitor, dispenser, dispatcher, stats, configPath,
inheritFDs=None, inheritSSLFDs=None
):
self.maker = maker
self.monitor = monitor
self.dispenser = dispenser
self.dispatcher = dispatcher
self.stats = stats
self.configPath = configPath
self.inheritFDs = inheritFDs
self.inheritSSLFDs = inheritSSLFDs
def startService(self):
# Add the directory proxy sidecar first so it at least get spawned
# prior to the caldavd worker processes:
log.info("Adding directory proxy service")
dpsArgv = [
sys.executable,
sys.argv[0],
]
if config.UserName:
dpsArgv.extend(("-u", config.UserName))
if config.GroupName:
dpsArgv.extend(("-g", config.GroupName))
dpsArgv.extend((
"--reactor={}".format(config.Twisted.reactor),
"-n", "caldav_directoryproxy",
"-f", self.configPath,
))
self.monitor.addProcess(
"directoryproxy", dpsArgv, env=PARENT_ENVIRONMENT
)
for slaveNumber in xrange(0, config.MultiProcess.ProcessCount):
if config.UseMetaFD:
extraArgs = dict(
metaSocket=self.dispatcher.addSocket(slaveNumber)
)
else:
extraArgs = dict(inheritFDs=self.inheritFDs,
inheritSSLFDs=self.inheritSSLFDs)
if self.dispenser is not None:
extraArgs.update(ampSQLDispenser=self.dispenser)
process = TwistdSlaveProcess(
sys.argv[0], self.maker.tapname, self.configPath, slaveNumber,
config.BindAddresses, **extraArgs
)
self.monitor.addProcessObject(process, PARENT_ENVIRONMENT)
# Hook up the stats service directory to the DPS server after a short delay
if self.stats is not None:
self.monitor._reactor.callLater(5, self.stats.makeDirectoryProxyClient)
class WorkSchedulingService(Service):
"""
A Service to kick off the initial scheduling of periodic work items.
"""
log = Logger()
def __init__(self, store, doImip, doGroupCaching, doPrincipalPurging):
"""
@param store: the Store to use for enqueuing work
@param doImip: whether to schedule imip polling
@type doImip: boolean
@param doGroupCaching: whether to schedule group caching
@type doGroupCaching: boolean
@param doPrincipalPurging: whether to schedule principal purging
@type doPrincipalPurging: boolean
"""
self.store = store
self.doImip = doImip
self.doGroupCaching = doGroupCaching
self.doPrincipalPurging = doPrincipalPurging
@inlineCallbacks
def startService(self):
# Note: the "seconds in the future" args are being set to the LogID
# numbers to spread them out. This is only needed until
# ultimatelyPerform( ) handles groups correctly. Once that is fixed
# these can be set to zero seconds in the future.
if self.doImip:
yield scheduleNextMailPoll(
self.store,
int(config.LogID) if config.LogID else 5
)
if self.doGroupCaching:
yield GroupCacherPollingWork.initialSchedule(
self.store,
int(config.LogID) if config.LogID else 5
)
if self.doPrincipalPurging:
yield PrincipalPurgePollingWork.initialSchedule(
self.store,
int(config.LogID) if config.LogID else 5
)
yield FindMinValidRevisionWork.initialSchedule(
self.store,
int(config.LogID) if config.LogID else 5
)
yield InboxCleanupWork.initialSchedule(
self.store,
int(config.LogID) if config.LogID else 5
)
class PreProcessingService(Service):
"""
A Service responsible for running any work that needs to be finished prior
to the main service starting. Once that work is done, it instantiates the
main service and adds it to the Service hierarchy (specifically to its
parent). If the final work step does not return a Failure, that is an
indication the store is ready and it is passed to the main service.
Otherwise, None is passed to the main service in place of a store. This
is mostly useful in the case of command line utilities that need to do
something different if the store is not available (e.g. utilities that
aren't allowed to upgrade the database).
"""
def __init__(
self, serviceCreator, connectionPool, store, logObserver,
storageService, reactor=None
):
"""
@param serviceCreator: callable which will be passed the connection
pool, store, and log observer, and should return a Service
@param connectionPool: connection pool to pass to serviceCreator
@param store: the store object being processed
@param logObserver: log observer to pass to serviceCreator
@param storageService: the service responsible for starting/stopping
the data store
"""
self.serviceCreator = serviceCreator
self.connectionPool = connectionPool
self.store = store
self.logObserver = logObserver
self.storageService = storageService
self.stepper = Stepper()
if reactor is None:
from twisted.internet import reactor
self.reactor = reactor
def stepWithResult(self, result):
"""
The final "step"; if we get here we know our store is ready, so
we create the main service and pass in the store.
"""
service = self.serviceCreator(
self.connectionPool, self.store,
self.logObserver, self.storageService
)
if self.parent is not None:
self.reactor.callLater(0, service.setServiceParent, self.parent)
return succeed(None)
def stepWithFailure(self, failure):
"""
The final "step", but if we get here we know our store is not ready,
so we create the main service and pass in a None for the store.
"""
try:
service = self.serviceCreator(
self.connectionPool, None,
self.logObserver, self.storageService
)
if self.parent is not None:
self.reactor.callLater(
0, service.setServiceParent, self.parent
)
except StoreNotAvailable:
log.error("Data store not available; shutting down")
if config.ServiceDisablingProgram:
# If the store is not available, we don't want launchd to
# repeatedly launch us, we want our job to get unloaded.
# If the config.ServiceDisablingProgram is assigned and exists
# we execute it now. Its job is to carry out the platform-
# specific tasks of disabling the service.
if exists(config.ServiceDisablingProgram):
log.error(
"Disabling service via {exe}",
exe=config.ServiceDisablingProgram
)
Popen(
args=[config.ServiceDisablingProgram],
stdout=PIPE,
stderr=PIPE,
).communicate()
time.sleep(60)
self.reactor.stop()
return succeed(None)
def addStep(self, step):
"""
Hand the step to our Stepper
@param step: an object implementing stepWithResult( )
"""
self.stepper.addStep(step)
return self
def startService(self):
"""
Add ourself as the final step, and then tell the coordinator to start
working on each step one at a time.
"""
self.addStep(self)
self.stepper.start()
class StoreNotAvailable(Exception):
"""
Raised when we want to give up because the store is not available
"""
class CalDAVServiceMaker (object):
log = Logger()
implements(IPlugin, IServiceMaker)
tapname = "caldav"
description = "Calendar and Contacts Server"
options = CalDAVOptions
def makeService(self, options):
"""
Create the top-level service.
"""
replaceTwistedLoggers()
self.log.info(
"{log_source.description} {version} starting "
"{config.ProcessType} process...",
version=version, config=config
)
try:
from setproctitle import setproctitle
except ImportError:
pass
else:
execName = basename(sys.argv[0])
if config.LogID:
logID = " #{}".format(config.LogID)
else:
logID = ""
if config.ProcessType != "Utility":
execName = ""
setproctitle(
"CalendarServer {} [{}{}] {}"
.format(version, config.ProcessType, logID, execName)
)
serviceMethod = getattr(
self, "makeService_{}".format(config.ProcessType), None
)
if not serviceMethod:
raise UsageError(
"Unknown server type {}. "
"Please choose: Slave, Single or Combined"
.format(config.ProcessType)
)
else:
#
# Configure Memcached Client Pool
#
memcachepool.installPools(
config.Memcached.Pools,
config.Memcached.MaxClients,
)
if config.ProcessType in ("Combined", "Single"):
# Process localization string files
processLocalizationFiles(config.Localization)
try:
service = serviceMethod(options)
except ConfigurationError, e:
sys.stderr.write("Configuration error: {}\n".format(e))
sys.exit(1)
#
# Note: if there is a stopped process in the same session
# as the calendar server and the calendar server is the
# group leader then when twistd forks to drop privileges a
# SIGHUP may be sent by the kernel, which can cause the
# process to exit. This SIGHUP should be, at a minimum,
# ignored.
#
def location(frame):
if frame is None:
return "Unknown"
else:
return "{frame.f_code.co_name}: {frame.f_lineno}".format(
frame=frame
)
return service
def createContextFactory(self):
"""
Create an SSL context factory for use with any SSL socket talking to
this server.
"""
return ChainingOpenSSLContextFactory(
config.SSLPrivateKey,
config.SSLCertificate,
certificateChainFile=config.SSLAuthorityChain,
passwdCallback=getSSLPassphrase,
sslmethod=getattr(OpenSSL.SSL, config.SSLMethod),
ciphers=config.SSLCiphers.strip(),
verifyClient=config.Authentication.ClientCertificate.Enabled,
requireClientCertificate=config.Authentication.ClientCertificate.Required,
clientCACertFileNames=config.Authentication.ClientCertificate.CAFiles,
sendCAsToClient=config.Authentication.ClientCertificate.SendCAsToClient,
)
def makeService_Slave(self, options):
"""
Create a "slave" service, a subprocess of a service created with
L{makeService_Combined}, which does the work of actually handling
CalDAV and CardDAV requests.
"""
pool, txnFactory = getDBPool(config)
store = storeFromConfigWithDPSClient(config, txnFactory)
directory = store.directoryService()
logObserver = AMPCommonAccessLoggingObserver()
result = self.requestProcessingService(options, store, logObserver)
if pool is not None:
pool.setServiceParent(result)
self._initJobQueue(None)
if config.ControlSocket:
id = config.ControlSocket
self.log.info("Control via AF_UNIX: {id}", id=id)
endpointFactory = lambda reactor: UNIXClientEndpoint(
reactor, id
)
else:
id = int(config.ControlPort)
self.log.info("Control via AF_INET: {id}", id=id)
endpointFactory = lambda reactor: TCP4ClientEndpoint(
reactor, "127.0.0.1", id
)
controlSocketClient = ControlSocket()
class LogClient(AMP):
def startReceivingBoxes(self, sender):
super(LogClient, self).startReceivingBoxes(sender)
logObserver.addClient(self)
f = Factory()
f.protocol = LogClient
controlSocketClient.addFactory(_LOG_ROUTE, f)
from txdav.common.datastore.sql import CommonDataStore as SQLStore
if isinstance(store, SQLStore):
def queueMasterAvailable(connectionFromMaster):
store.queuer = connectionFromMaster
queueFactory = QueueWorkerFactory(
store.newTransaction, queueMasterAvailable
)
controlSocketClient.addFactory(_QUEUE_ROUTE, queueFactory)
controlClient = ControlSocketConnectingService(
endpointFactory, controlSocketClient
)
controlClient.setServiceParent(result)
# Optionally set up push notifications
pushDistributor = None
if config.Notifications.Enabled:
observers = []
if config.Notifications.Services.APNS.Enabled:
pushSubService = ApplePushNotifierService.makeService(
config.Notifications.Services.APNS, store)
observers.append(pushSubService)
pushSubService.setServiceParent(result)
if config.Notifications.Services.AMP.Enabled:
pushSubService = AMPPushForwarder(controlSocketClient)
observers.append(pushSubService)
if observers:
pushDistributor = PushDistributor(observers)
# Optionally set up mail retrieval
if config.Scheduling.iMIP.Enabled:
mailRetriever = MailRetriever(
store, directory, config.Scheduling.iMIP.Receiving
)
mailRetriever.setServiceParent(result)
else:
mailRetriever = None
# Optionally set up group cacher
if config.GroupCaching.Enabled:
cacheNotifier = (
MemcacheURLPatternChangeNotifier(
"/principals/__uids__/{token}/",
cacheHandle="PrincipalToken"
) if config.EnableResponseCache else None
)
groupCacher = GroupCacher(
directory,
updateSeconds=config.GroupCaching.UpdateSeconds,
useDirectoryBasedDelegates=config.GroupCaching.UseDirectoryBasedDelegates,
cacheNotifier=cacheNotifier,
)
else:
groupCacher = None
def decorateTransaction(txn):
txn._pushDistributor = pushDistributor
txn._rootResource = result.rootResource
txn._mailRetriever = mailRetriever
txn._groupCacher = groupCacher
store.callWithNewTransactions(decorateTransaction)
# Optionally enable Manhole access
if config.Manhole.Enabled:
try:
from twisted.conch.manhole_tap import (
makeService as manholeMakeService
)
portString = "tcp:{:d}:interface=127.0.0.1".format(
config.Manhole.StartingPortNumber + int(config.LogID) + 1
)
manholeService = manholeMakeService({
"sshPort": None,
"telnetPort": portString,
"namespace": {
"config": config,
"service": result,
"store": store,
"directory": directory,
},
"passwd": config.Manhole.PasswordFilePath,
})
manholeService.setServiceParent(result)
# Using print() because logging isn't ready at this point
print("Manhole access enabled:", portString)
except ImportError:
print(
"Manhole access could not be enabled because "
"manhole_tap could not be imported"
)
return result
def requestProcessingService(self, options, store, logObserver):
"""
Make a service that will actually process HTTP requests.
This may be a 'Slave' service, which runs as a worker subprocess of the
'Combined' configuration, or a 'Single' service, which is a stand-alone
process that answers CalDAV/CardDAV requests by itself.
"""
#
# Change default log level to "info" as it's useful to have
# that during startup
#
oldLogLevel = log.publisher.levels.logLevelForNamespace(None)
log.publisher.levels.setLogLevelForNamespace(None, LogLevel.info)
# Note: 'additional' was used for the IMIP reply resource; perhaps
# we can remove this
additional = []
#
# Configure the service
#
self.log.info("Setting up service")
self.log.info(
"Configuring access log observer: {observer}", observer=logObserver
)
service = CalDAVService(logObserver)
rootResource = getRootResource(config, store, additional)
service.rootResource = rootResource
underlyingSite = Site(rootResource)
# Need to cache SSL port info here so we can access it in a Request to
# deal with the possibility of being behind an SSL decoder
underlyingSite.EnableSSL = config.EnableSSL
underlyingSite.SSLPort = config.SSLPort
underlyingSite.BindSSLPorts = config.BindSSLPorts
requestFactory = underlyingSite
if config.EnableSSL and config.RedirectHTTPToHTTPS:
self.log.info(
"Redirecting to HTTPS port {port}", port=config.SSLPort
)
def requestFactory(*args, **kw):
return SSLRedirectRequest(site=underlyingSite, *args, **kw)
# Setup HTTP connection behaviors
HTTPChannel.allowPersistentConnections = config.EnableKeepAlive
HTTPChannel.betweenRequestsTimeOut = config.PipelineIdleTimeOut
HTTPChannel.inputTimeOut = config.IncomingDataTimeOut
HTTPChannel.idleTimeOut = config.IdleConnectionTimeOut
HTTPChannel.closeTimeOut = config.CloseConnectionTimeOut
# Add the Strict-Transport-Security header to all secured requests
# if enabled.
if config.StrictTransportSecuritySeconds:
previousRequestFactory = requestFactory
def requestFactory(*args, **kw):
request = previousRequestFactory(*args, **kw)
def responseFilter(ignored, response):
ignored, secure = request.chanRequest.getHostInfo()
if secure:
response.headers.addRawHeader(
"Strict-Transport-Security",
"max-age={max_age:d}".format(
max_age=config.StrictTransportSecuritySeconds
)
)
return response
responseFilter.handleErrors = True
request.addResponseFilter(responseFilter)
return request
httpFactory = LimitingHTTPFactory(
requestFactory,
maxRequests=config.MaxRequests,
maxAccepts=config.MaxAccepts,
betweenRequestsTimeOut=config.IdleConnectionTimeOut,
vary=True,
)
def updateFactory(configDict, reloading=False):
httpFactory.maxRequests = configDict.MaxRequests
httpFactory.maxAccepts = configDict.MaxAccepts
config.addPostUpdateHooks((updateFactory,))
# Bundle the various connection services within a single MultiService
# that can be stopped before the others for graceful shutdown.
connectionService = MultiService()
connectionService.setName(CalDAVService.connectionServiceName)
connectionService.setServiceParent(service)
# Service to schedule initial work
WorkSchedulingService(
store,
config.Scheduling.iMIP.Enabled,
config.GroupCaching.Enabled,
config.AutomaticPurging.Enabled
).setServiceParent(service)
# For calendarserver.tap.test
# .test_caldav.BaseServiceMakerTests.getSite():
connectionService.underlyingSite = underlyingSite
if config.InheritFDs or config.InheritSSLFDs:
# Inherit sockets to call accept() on them individually.
if config.EnableSSL:
for fdAsStr in config.InheritSSLFDs:
try:
contextFactory = self.createContextFactory()
except SSLError, e:
log.error(
"Unable to set up SSL context factory: {error}",
error=e
)
else:
MaxAcceptSSLServer(
int(fdAsStr), httpFactory,
contextFactory,
backlog=config.ListenBacklog,
inherit=True
).setServiceParent(connectionService)
for fdAsStr in config.InheritFDs:
MaxAcceptTCPServer(
int(fdAsStr), httpFactory,
backlog=config.ListenBacklog,
inherit=True
).setServiceParent(connectionService)
elif config.MetaFD:
# Inherit a single socket to receive accept()ed connections via
# recvmsg() and SCM_RIGHTS.
if config.SocketFiles.Enabled:
# TLS will be handled by a front-end web proxy
contextFactory = None
else:
try:
contextFactory = self.createContextFactory()
except SSLError, e:
self.log.error(
"Unable to set up SSL context factory: {error}",
error=e
)
# None is okay as a context factory for ReportingHTTPService as
# long as we will never receive a file descriptor with the
# 'SSL' tag on it, since that's the only time it's used.
contextFactory = None
ReportingHTTPService(
requestFactory, int(config.MetaFD), contextFactory,
usingSocketFile=config.SocketFiles.Enabled
).setServiceParent(connectionService)
else: # Not inheriting, therefore we open our own:
for bindAddress in self._allBindAddresses():
self._validatePortConfig()
if config.EnableSSL:
for port in config.BindSSLPorts:
self.log.info(
"Adding SSL server at {address}:{port}",
address=bindAddress, port=port
)
try:
contextFactory = self.createContextFactory()
except SSLError, e:
self.log.error(
"Unable to set up SSL context factory: {error}; "
"disabling SSL port: {port}",
error=e, port=port
)
else:
httpsService = MaxAcceptSSLServer(
int(port), httpFactory,
contextFactory, interface=bindAddress,
backlog=config.ListenBacklog,
inherit=False
)
httpsService.setServiceParent(connectionService)
for port in config.BindHTTPPorts:
MaxAcceptTCPServer(
int(port), httpFactory,
interface=bindAddress,
backlog=config.ListenBacklog,
inherit=False
).setServiceParent(connectionService)
# Change log level back to what it was before
log.publisher.levels.setLogLevelForNamespace(None, oldLogLevel)
return service
def _validatePortConfig(self):
"""
If BindHTTPPorts is specified, HTTPPort must also be specified to
indicate which is the preferred port (the one to be used in URL
generation, etc). If only HTTPPort is specified, BindHTTPPorts should
be set to a list containing only that port number. Similarly for
BindSSLPorts/SSLPort.
@raise UsageError: if configuration is not valid.
"""
if config.BindHTTPPorts:
if config.HTTPPort == 0:
raise UsageError(
"HTTPPort required if BindHTTPPorts is not empty"
)
elif config.HTTPPort != 0:
config.BindHTTPPorts = [config.HTTPPort]
if config.BindSSLPorts:
if config.SSLPort == 0:
raise UsageError(
"SSLPort required if BindSSLPorts is not empty"
)
elif config.SSLPort != 0:
config.BindSSLPorts = [config.SSLPort]
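# A sketch of the normalization above, using hypothetical values: if
# HTTPPort is 8008 and BindHTTPPorts is empty, BindHTTPPorts becomes
# [8008]; if BindHTTPPorts is [8008, 8800] but HTTPPort is 0, a
# UsageError is raised because no preferred port was identified. The
# same rules apply to SSLPort/BindSSLPorts.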
def _allBindAddresses(self):
"""
An empty BindAddresses config value is treated as a list of two
addresses: the empty string (all IPv4 interfaces) and "::" (all
IPv6 interfaces), meaning "bind everything on both IPv4
and IPv6".
"""
if not config.BindAddresses:
if getattr(socket, "has_ipv6", False):
if conflictBetweenIPv4AndIPv6():
# If there's a conflict between v4 and v6, then almost by
# definition, v4 is mapped into the v6 space, so we will
# listen "only" on v6.
config.BindAddresses = ["::"]
else:
config.BindAddresses = ["", "::"]
else:
config.BindAddresses = [""]
return config.BindAddresses
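# For example, assuming a host whose Python reports socket.has_ipv6
# and no v4/v6 address conflict, an empty BindAddresses expands to
# ["", "::"] so both stacks are bound; on an IPv4-only host it
# becomes [""].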
def _spawnMemcached(self, monitor=None):
"""
Optionally start memcached through the specified ProcessMonitor,
or if monitor is None, use Popen.
"""
for name, pool in config.Memcached.Pools.items():
if pool.ServerEnabled:
memcachedArgv = [
config.Memcached.memcached,
"-U", "0",
]
# Use Unix domain sockets by default
if pool.get("MemcacheSocket"):
memcachedArgv.extend([
"-s", str(pool.MemcacheSocket),
])
else:
# ... or INET sockets
memcachedArgv.extend([
"-p", str(pool.Port),
"-l", pool.BindAddress,
])
if config.Memcached.MaxMemory != 0:
memcachedArgv.extend(
["-m", str(config.Memcached.MaxMemory)]
)
if config.UserName:
memcachedArgv.extend(["-u", config.UserName])
memcachedArgv.extend(config.Memcached.Options)
if monitor is not None:
monitor.addProcess(
"memcached-{}".format(name), memcachedArgv,
env=PARENT_ENVIRONMENT
)
else:
Popen(memcachedArgv)
def makeService_Single(self, options):
"""
Create a service to be used in a single-process, stand-alone
configuration. Memcached will be spawned automatically.
"""
def slaveSvcCreator(pool, store, logObserver, storageService):
if store is None:
raise StoreNotAvailable()
result = self.requestProcessingService(options, store, logObserver)
# Optionally set up push notifications
pushDistributor = None
if config.Notifications.Enabled:
observers = []
if config.Notifications.Services.APNS.Enabled:
pushSubService = ApplePushNotifierService.makeService(
config.Notifications.Services.APNS, store
)
observers.append(pushSubService)
pushSubService.setServiceParent(result)
if config.Notifications.Services.AMP.Enabled:
pushSubService = AMPPushMaster(
None, result,
config.Notifications.Services.AMP.Port,
config.Notifications.Services.AMP.EnableStaggering,
config.Notifications.Services.AMP.StaggerSeconds,
)
observers.append(pushSubService)
if observers:
pushDistributor = PushDistributor(observers)
directory = store.directoryService()
# Job queues always required
from twisted.internet import reactor
pool = ControllerQueue(
reactor, store.newTransaction,
useWorkerPool=False,
disableWorkProcessing=config.MigrationOnly,
)
self._initJobQueue(pool)
store.queuer = store.pool = pool
pool.setServiceParent(result)
# Optionally set up mail retrieval
if config.Scheduling.iMIP.Enabled:
mailRetriever = MailRetriever(
store, directory, config.Scheduling.iMIP.Receiving
)
mailRetriever.setServiceParent(result)
else:
mailRetriever = None
# Start listening on the stats socket, for administrators to inspect
# the current stats on the server.
stats = None
if config.Stats.EnableUnixStatsSocket:
stats = DashboardServer(logObserver, None)
stats.store = store
statsService = GroupOwnedUNIXServer(
gid, config.Stats.UnixStatsSocket, stats, mode=0660
)
statsService.setName("unix-stats")
statsService.setServiceParent(result)
if config.Stats.EnableTCPStatsSocket:
stats = DashboardServer(logObserver, None)
stats.store = store
statsService = TCPServer(
config.Stats.TCPStatsPort, stats, interface=""
)
statsService.setName("tcp-stats")
statsService.setServiceParent(result)
# Optionally set up group cacher
if config.GroupCaching.Enabled:
cacheNotifier = (
MemcacheURLPatternChangeNotifier(
"/principals/__uids__/{token}/",
cacheHandle="PrincipalToken"
) if config.EnableResponseCache else None
)
groupCacher = GroupCacher(
directory,
updateSeconds=config.GroupCaching.UpdateSeconds,
useDirectoryBasedDelegates=config.GroupCaching.UseDirectoryBasedDelegates,
cacheNotifier=cacheNotifier,
)
else:
groupCacher = None
# Optionally enable Manhole access
if config.Manhole.Enabled:
try:
from twisted.conch.manhole_tap import (
makeService as manholeMakeService
)
portString = "tcp:{:d}:interface=127.0.0.1".format(
config.Manhole.StartingPortNumber
)
manholeService = manholeMakeService({
"sshPort": None,
"telnetPort": portString,
"namespace": {
"config": config,
"service": result,
"store": store,
"directory": directory,
},
"passwd": config.Manhole.PasswordFilePath,
})
manholeService.setServiceParent(result)
# Using print() because logging isn't ready at this point
print("Manhole access enabled:", portString)
except ImportError:
print(
"Manhole access could not be enabled because "
"manhole_tap could not be imported"
)
def decorateTransaction(txn):
txn._pushDistributor = pushDistributor
txn._rootResource = result.rootResource
txn._mailRetriever = mailRetriever
txn._groupCacher = groupCacher
store.callWithNewTransactions(decorateTransaction)
return result
uid, gid = getSystemIDs(config.UserName, config.GroupName)
# Make sure no old socket files are lying around.
self.deleteStaleSocketFiles()
logObserver = RotatingFileAccessLoggingObserver(
config.AccessLogFile,
)
# Maybe spawn memcached. Note, this is not going through a
# ProcessMonitor because there is code elsewhere that needs to
# access memcached before startService() gets called
self._spawnMemcached(monitor=None)
return self.storageService(
slaveSvcCreator, logObserver, uid=uid, gid=gid
)
def makeService_Utility(self, options):
"""
Create a service to be used in a command-line utility
Specify the actual utility service class in config.UtilityServiceClass.
When created, that service will have access to the storage facilities.
"""
def toolServiceCreator(pool, store, ignored, storageService):
return config.UtilityServiceClass(store)
uid, gid = getSystemIDs(config.UserName, config.GroupName)
return self.storageService(
toolServiceCreator, None, uid=uid, gid=gid
)
def makeService_Agent(self, options):
"""
Create an agent service which listens for configuration requests
"""
# Don't use memcached initially -- calendar server might take it away
# at any moment. However, when we run a command through the gateway,
# it will conditionally set ClientEnabled at that time.
def agentPostUpdateHook(configDict, reloading=False):
configDict.Memcached.Pools.Default.ClientEnabled = False
config.addPostUpdateHooks((agentPostUpdateHook,))
config.reload()
# Verify that server root actually exists and is not phantom
from calendarserver.tools.util import checkDirectory
checkDirectory(
config.ServerRoot,
"Server root",
access=W_OK,
wait=True # Wait in a loop until ServerRoot exists and is not phantom
)
# These we need to set in order to open the store
config.EnableCalDAV = config.EnableCardDAV = True
def agentServiceCreator(pool, store, ignored, storageService):
from calendarserver.tools.agent import makeAgentService
if storageService is not None:
# Shut down if DataRoot becomes unavailable
from twisted.internet import reactor
dataStoreWatcher = DirectoryChangeListener(
reactor,
config.DataRoot,
DataStoreMonitor(reactor, storageService)
)
dataStoreWatcher.startListening()
if store is not None:
store.queuer = NonPerformingQueuer()
return makeAgentService(store)
uid, gid = getSystemIDs(config.UserName, config.GroupName)
svc = self.storageService(
agentServiceCreator, None, uid=uid, gid=gid
)
agentLoggingService = ErrorLoggingMultiService(
config.ErrorLogEnabled,
config.AgentLogFile,
config.ErrorLogRotateMB * 1024 * 1024,
config.ErrorLogMaxRotatedFiles
)
svc.setServiceParent(agentLoggingService)
return agentLoggingService
def storageService(
self, createMainService, logObserver, uid=None, gid=None
):
"""
If necessary, create a service used for storage; for example, one
that starts a database backend. This service will then start the
main service.
This has the effect of delaying any child process spawning or
stand alone port-binding until the backing for the selected data store
implementation is ready to process requests.
@param createMainService: a callable that creates the service doing
the main work of the current process. If the configured storage
mode does not require any particular setup, its result may be
returned directly.
@type createMainService: C{callable} taking C{(connectionPool,
store, logObserver, storageService)} and returning L{IService}
@param uid: the user ID to run the backend as, if this process is
running as root (also the uid to chown Attachments to).
@type uid: C{int}
@param gid: the group ID to run the backend as, if this process is
running as root (also the gid to chown Attachments to).
@type gid: C{int}
@return: the appropriate service to start.
@rtype: L{IService}
"""
def createSubServiceFactory(dbtype):
if dbtype in ("", "postgres"):
dialect = POSTGRES_DIALECT
paramstyle = "pyformat"
elif dbtype == "oracle":
dialect = ORACLE_DIALECT
paramstyle = "numeric"
def subServiceFactory(connectionFactory, storageService):
ms = MultiService()
cp = ConnectionPool(
connectionFactory, dialect=dialect,
paramstyle=paramstyle,
maxConnections=config.MaxDBConnectionsPerPool
)
cp.setServiceParent(ms)
store = storeFromConfigWithoutDPS(config, cp.connection)
pps = PreProcessingService(
createMainService, cp, store, logObserver, storageService
)
# The following "steps" will run sequentially when the service
# hierarchy is started. If any of the steps raise an exception
# the subsequent steps' stepWithFailure methods will be called
# instead, until one of them returns a non-Failure.
pps.addStep(
UpgradeAcquireLockStep(store)
)
# Still need this for Snow Leopard support
pps.addStep(
UpgradeFileSystemFormatStep(config, store)
)
pps.addStep(
UpgradeDatabaseSchemaStep(
store, uid=overrideUID, gid=overrideGID,
failIfUpgradeNeeded=config.FailIfUpgradeNeeded
)
)
pps.addStep(
UpgradeDatabaseAddressBookDataStep(
store, uid=overrideUID, gid=overrideGID
)
)
pps.addStep(
UpgradeDatabaseCalendarDataStep(
store, uid=overrideUID, gid=overrideGID
)
)
pps.addStep(
UpgradeDatabaseNotificationDataStep(
store, uid=overrideUID, gid=overrideGID
)
)
pps.addStep(
UpgradeToDatabaseStep(
UpgradeToDatabaseStep.fileStoreFromPath(
CachingFilePath(config.DocumentRoot)
),
store, uid=overrideUID, gid=overrideGID,
merge=config.MergeUpgrades
)
)
pps.addStep(
UpgradeDatabaseOtherStep(
store, uid=overrideUID, gid=overrideGID
)
)
pps.addStep(
PostDBImportStep(
store, config, getattr(self, "doPostImport", True)
)
)
pps.addStep(
UpgradeReleaseLockStep(store)
)
pps.setServiceParent(ms)
return ms
return subServiceFactory
# FIXME: this is replicating the logic of getDBPool(), except for the
# part where the pgServiceFromConfig service is actually started here,
# and discarded in that function. This should be refactored to simply
# use getDBPool.
if config.UseDatabase:
if getuid() == 0: # Only override if root
overrideUID = uid
overrideGID = gid
else:
overrideUID = None
overrideGID = None
if config.DBType == '':
# Spawn our own database as an inferior process, then connect
# to it.
pgserv = pgServiceFromConfig(
config,
createSubServiceFactory(""),
uid=overrideUID, gid=overrideGID
)
return pgserv
else:
# Connect to a database that is already running.
return createSubServiceFactory(config.DBType)(
DBAPIConnector.connectorFor(config.DBType, **config.DatabaseConnection).connect, None
)
else:
store = storeFromConfig(config, None, None)
return createMainService(None, store, logObserver, None)
def makeService_Combined(self, options):
"""
Create a master service to coordinate a multi-process configuration,
spawning subprocesses that use L{makeService_Slave} to perform work.
"""
s = ErrorLoggingMultiService(
config.ErrorLogEnabled,
config.ErrorLogFile,
config.ErrorLogRotateMB * 1024 * 1024,
config.ErrorLogMaxRotatedFiles
)
# Perform early pre-flight checks. If this returns True, continue on.
# Otherwise one of two things will happen:
# A) preFlightChecks( ) will have registered an "after startup" system
# event trigger to request a shutdown via the
# ServiceDisablingProgram, and we want to return our
# ErrorLoggingMultiService so that logging is set up enough to
# emit our reason for shutting down into the error.log.
# B) preFlightChecks( ) will sys.exit(1) if there is not a
# ServiceDisablingProgram configured.
if not preFlightChecks(config):
return s
# Add a service to re-exec the master when it receives SIGHUP
ReExecService(config.PIDFile).setServiceParent(s)
# Make sure no old socket files are lying around.
self.deleteStaleSocketFiles()
# The logger service must come before the monitor service, otherwise
# we won't know which logging port to pass to the slaves' command lines
logger = AMPLoggingFactory(
RotatingFileAccessLoggingObserver(config.AccessLogFile)
)
if config.GroupName:
try:
gid = getgrnam(config.GroupName).gr_gid
except KeyError:
raise ConfigurationError(
"Invalid group name: {}".format(config.GroupName)
)
else:
gid = getgid()
if config.UserName:
try:
uid = getpwnam(config.UserName).pw_uid
except KeyError:
raise ConfigurationError(
"Invalid user name: {}".format(config.UserName)
)
else:
uid = getuid()
controlSocket = ControlSocket()
controlSocket.addFactory(_LOG_ROUTE, logger)
# Optionally set up AMPPushMaster
if (
config.Notifications.Enabled and
config.Notifications.Services.AMP.Enabled
):
ampSettings = config.Notifications.Services.AMP
AMPPushMaster(
controlSocket,
s,
ampSettings["Port"],
ampSettings["EnableStaggering"],
ampSettings["StaggerSeconds"]
)
if config.ControlSocket:
controlSocketService = GroupOwnedUNIXServer(
gid, config.ControlSocket, controlSocket, mode=0660
)
else:
controlSocketService = ControlPortTCPServer(
config.ControlPort, controlSocket, interface="127.0.0.1"
)
controlSocketService.setName(_CONTROL_SERVICE_NAME)
controlSocketService.setServiceParent(s)
monitor = DelayedStartupProcessMonitor()
s.processMonitor = monitor
monitor.setServiceParent(s)
if config.MemoryLimiter.Enabled:
memoryLimiter = MemoryLimitService(
monitor, config.MemoryLimiter.Seconds,
config.MemoryLimiter.Bytes, config.MemoryLimiter.ResidentOnly
)
memoryLimiter.setServiceParent(s)
# Maybe spawn memcached through a ProcessMonitor
self._spawnMemcached(monitor=monitor)
# Open the socket(s) to be inherited by the slaves
inheritFDs = []
inheritSSLFDs = []
if config.UseMetaFD:
cl = ConnectionLimiter(config.MaxAccepts,
(config.MaxRequests *
config.MultiProcess.ProcessCount))
dispatcher = cl.dispatcher
else:
# keep a reference to these so they don't close
s._inheritedSockets = []
dispatcher = None
if config.SocketFiles.Enabled:
if config.SocketFiles.Secured:
# TLS-secured requests will arrive via this Unix domain socket file
cl.addSocketFileService(
"SSL", config.SocketFiles.Secured, config.ListenBacklog
)
if config.SocketFiles.Unsecured:
# Unsecured requests will arrive via this Unix domain socket file
cl.addSocketFileService(
"TCP", config.SocketFiles.Unsecured, config.ListenBacklog
)
else:
for bindAddress in self._allBindAddresses():
self._validatePortConfig()
if config.UseMetaFD:
portsList = [(config.BindHTTPPorts, "TCP")]
if config.EnableSSL:
portsList.append((config.BindSSLPorts, "SSL"))
for ports, description in portsList:
for port in ports:
cl.addPortService(
description, port, bindAddress,
config.ListenBacklog
)
else:
def _openSocket(addr, port):
log.info(
"Opening socket for inheritance at {address}:{port}",
address=addr, port=port
)
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setblocking(0)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind((addr, port))
sock.listen(config.ListenBacklog)
s._inheritedSockets.append(sock)
return sock
for portNum in config.BindHTTPPorts:
sock = _openSocket(bindAddress, int(portNum))
inheritFDs.append(sock.fileno())
if config.EnableSSL:
for portNum in config.BindSSLPorts:
sock = _openSocket(bindAddress, int(portNum))
inheritSSLFDs.append(sock.fileno())
# Start listening on the stats socket, for administrators to inspect
# the current stats on the server.
stats = None
if config.Stats.EnableUnixStatsSocket:
stats = DashboardServer(logger.observer, cl if config.UseMetaFD else None)
statsService = GroupOwnedUNIXServer(
gid, config.Stats.UnixStatsSocket, stats, mode=0660
)
statsService.setName("unix-stats")
statsService.setServiceParent(s)
elif config.Stats.EnableTCPStatsSocket:
stats = DashboardServer(logger.observer, cl if config.UseMetaFD else None)
statsService = TCPServer(
config.Stats.TCPStatsPort, stats, interface=""
)
statsService.setName("tcp-stats")
statsService.setServiceParent(s)
# Optionally enable Manhole access
if config.Manhole.Enabled:
try:
from twisted.conch.manhole_tap import (
makeService as manholeMakeService
)
portString = "tcp:{:d}:interface=127.0.0.1".format(
config.Manhole.StartingPortNumber
)
manholeService = manholeMakeService({
"sshPort": None,
"telnetPort": portString,
"namespace": {
"config": config,
"service": s,
},
"passwd": config.Manhole.PasswordFilePath,
})
manholeService.setServiceParent(s)
# Using print() because logging isn't ready at this point
print("Manhole access enabled:", portString)
except ImportError:
print(
"Manhole access could not be enabled because "
"manhole_tap could not be imported"
)
# Finally, let's get the real show on the road. Create a service that
# will spawn all of our worker processes when started, and wrap that
# service in zero to two necessary layers before it's started: first,
# the service which spawns a subsidiary database (if that's necessary,
# and we don't have an external, already-running database to connect
# to), and second, the service which does an upgrade from the
# filesystem to the database (if that's necessary, and there is
# filesystem data in need of upgrading).
def spawnerSvcCreator(pool, store, ignored, storageService):
if store is None:
raise StoreNotAvailable()
if stats is not None:
stats.store = store
from twisted.internet import reactor
pool = ControllerQueue(
reactor, store.newTransaction,
disableWorkProcessing=config.MigrationOnly,
)
self._initJobQueue(pool)
# The master should not perform queued work
store.queuer = NonPerformingQueuer()
store.pool = pool
controlSocket.addFactory(
_QUEUE_ROUTE, pool.workerListenerFactory()
)
# TODO: now that we have the shared control socket, we should get
# rid of the connection dispenser and make a shared / async
# connection pool implementation that can dispense transactions
# synchronously as the interface requires.
if pool is not None and config.SharedConnectionPool:
self.log.warn("Using Shared Connection Pool")
dispenser = ConnectionDispenser(pool)
else:
dispenser = None
multi = MultiService()
pool.setServiceParent(multi)
spawner = SlaveSpawnerService(
self, monitor, dispenser, dispatcher, stats, options["config"],
inheritFDs=inheritFDs, inheritSSLFDs=inheritSSLFDs
)
spawner.setServiceParent(multi)
if config.UseMetaFD:
cl.setServiceParent(multi)
directory = store.directoryService()
rootResource = getRootResource(config, store, [])
# Optionally set up mail retrieval
if config.Scheduling.iMIP.Enabled:
mailRetriever = MailRetriever(
store, directory, config.Scheduling.iMIP.Receiving
)
mailRetriever.setServiceParent(multi)
else:
mailRetriever = None
# Optionally set up group cacher
if config.GroupCaching.Enabled:
cacheNotifier = (
MemcacheURLPatternChangeNotifier(
"/principals/__uids__/{token}/",
cacheHandle="PrincipalToken"
) if config.EnableResponseCache else None
)
groupCacher = GroupCacher(
directory,
updateSeconds=config.GroupCaching.UpdateSeconds,
useDirectoryBasedDelegates=config.GroupCaching.UseDirectoryBasedDelegates,
cacheNotifier=cacheNotifier,
)
else:
groupCacher = None
def decorateTransaction(txn):
txn._pushDistributor = None
txn._rootResource = rootResource
txn._mailRetriever = mailRetriever
txn._groupCacher = groupCacher
store.callWithNewTransactions(decorateTransaction)
return multi
ssvc = self.storageService(
spawnerSvcCreator, None, uid, gid
)
ssvc.setServiceParent(s)
return s
def deleteStaleSocketFiles(self):
# Check all socket files we use.
for checkSocket in [
config.ControlSocket, config.Stats.UnixStatsSocket
]:
# See if the file exists.
            if exists(checkSocket):
                # See if the file represents a socket. If not, delete it.
                if not S_ISSOCK(stat(checkSocket).st_mode):
self.log.warn(
"Deleting stale socket file (not a socket): {socket}",
socket=checkSocket
)
remove(checkSocket)
else:
# It looks like a socket.
# See if it's accepting connections.
                    numConnectFailures = 0
                    testPorts = [config.HTTPPort, config.SSLPort]
                    for testPort in testPorts:
                        # Use a fresh socket for each probe; a TCP socket
                        # cannot be reconnected after shutdown().
                        tmpSocket = socket.socket(
                            socket.AF_INET, socket.SOCK_STREAM
                        )
                        try:
                            tmpSocket.connect(("127.0.0.1", testPort))
                            tmpSocket.shutdown(2)
                        except socket.error:
                            numConnectFailures += 1
                        finally:
                            tmpSocket.close()
# If the file didn't connect on any expected ports,
# consider it stale and remove it.
if numConnectFailures == len(testPorts):
self.log.warn(
"Deleting stale socket file "
"(not accepting connections): {socket}",
socket=checkSocket
)
remove(checkSocket)
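The same check can be sketched as a standalone predicate (`isStaleSocketFile` is a hypothetical name; the port probing mirrors the method above):

```python
import os
import socket
import stat

def isStaleSocketFile(path, testPorts, host="127.0.0.1"):
    # A path that exists but is not a socket is stale by definition.
    if not stat.S_ISSOCK(os.stat(path).st_mode):
        return True
    # Probe the expected server ports; if nothing accepts a connection,
    # treat the socket file as a leftover from a previous run.
    for port in testPorts:
        probe = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        try:
            probe.connect((host, port))
            probe.shutdown(socket.SHUT_RDWR)
            return False
        except socket.error:
            pass
        finally:
            probe.close()
    return True
```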
def _initJobQueue(self, pool):
"""
Common job queue initialization
        @param pool: the connection pool to initialize, or C{None}
@type pool: L{PeerConnectionPool} or C{None}
"""
# Initialize queue polling parameters from config settings
if pool is not None:
for attr in (
"queuePollInterval",
"queueOverdueTimeout",
"overloadLevel",
"highPriorityLevel",
"mediumPriorityLevel",
):
setattr(pool, attr, getattr(config.WorkQueue, attr))
# Initialize job parameters from config settings
for attr in (
"failureRescheduleInterval",
"lockRescheduleInterval",
):
setattr(JobItem, attr, getattr(config.WorkQueue, attr))
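The setattr/getattr loops above amount to copying a fixed list of tunables from the config object onto a target; a sketch with stand-in classes:

```python
class _Pool(object):
    # Stand-in for PeerConnectionPool with default tunables.
    queuePollInterval = 0.1
    overloadLevel = 95

class _WorkQueueConfig(object):
    # Stand-in for config.WorkQueue.
    queuePollInterval = 5.0
    overloadLevel = 80

def applyTunables(target, source, names):
    # Copy each named tunable from the config object onto the target,
    # the same setattr/getattr loop _initJobQueue uses.
    for name in names:
        setattr(target, name, getattr(source, name))

pool = _Pool()
applyTunables(pool, _WorkQueueConfig(), ("queuePollInterval", "overloadLevel"))
```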
class TwistdSlaveProcess(object):
"""
A L{TwistdSlaveProcess} is information about how to start a slave process
running a C{twistd} plugin, to be used by
L{DelayedStartupProcessMonitor.addProcessObject}.
@ivar twistd: The path to the twistd executable to launch.
@type twistd: C{str}
@ivar tapname: The name of the twistd plugin to launch.
@type tapname: C{str}
@ivar id: The instance identifier for this slave process.
@type id: C{int}
@ivar interfaces: A sequence of interface addresses which the process will
be configured to bind to.
@type interfaces: sequence of C{str}
@ivar inheritFDs: File descriptors to be inherited for calling accept() on
in the subprocess.
@type inheritFDs: C{list} of C{int}, or C{None}
@ivar inheritSSLFDs: File descriptors to be inherited for calling accept()
on in the subprocess, and speaking TLS on the resulting sockets.
@type inheritSSLFDs: C{list} of C{int}, or C{None}
@ivar metaSocket: an AF_UNIX/SOCK_DGRAM socket (initialized from the
dispatcher passed to C{__init__}) that is to be inherited by the
subprocess and used to accept incoming connections.
@type metaSocket: L{socket.socket}
@ivar ampSQLDispenser: a factory for AF_UNIX/SOCK_STREAM sockets that are
to be inherited by subprocesses and used for sending AMP SQL commands
back to its parent.
"""
prefix = "caldav"
def __init__(self, twistd, tapname, configFile, id, interfaces,
inheritFDs=None, inheritSSLFDs=None, metaSocket=None,
ampSQLDispenser=None):
self.twistd = twistd
self.tapname = tapname
self.configFile = configFile
self.id = id
def emptyIfNone(x):
if x is None:
return []
else:
return x
self.inheritFDs = emptyIfNone(inheritFDs)
self.inheritSSLFDs = emptyIfNone(inheritSSLFDs)
self.metaSocket = metaSocket
self.interfaces = interfaces
self.ampSQLDispenser = ampSQLDispenser
self.ampDBSocket = None
def starting(self):
"""
Called when the process is being started (or restarted). Allows for
various initialization operations to be done. The child process will
itself signal back to the master when it is ready to accept sockets -
until then the master socket is marked as "starting" which means it is
not active and won't be dispatched to.
"""
# Always tell any metafd socket that we have started, so it can
# re-initialize state.
if self.metaSocket is not None:
self.metaSocket.start()
def stopped(self):
"""
Called when the process has stopped (died). The socket is marked as
"stopped" which means it is not active and won't be dispatched to.
"""
        # Always tell any metafd socket that we have stopped, so it can
        # update its state.
if self.metaSocket is not None:
self.metaSocket.stop()
def getName(self):
return "{}-{}".format(self.prefix, self.id)
def getFileDescriptors(self):
"""
Get the file descriptors that will be passed to the subprocess, as a
mapping that will be used with L{IReactorProcess.spawnProcess}.
If this server is configured to use a meta FD, pass the client end of
the meta FD. If this server is configured to use an AMP database
connection pool, pass a pre-connected AMP socket.
Note that, contrary to the documentation for
L{twext.internet.sendfdport.InheritedSocketDispatcher.addSocket}, this
does I{not} close the added child socket; this method
(C{getFileDescriptors}) is called repeatedly to start a new process
with the same C{LogID} if the previous one exits. Therefore we
consistently re-use the same file descriptor and leave it open in the
master, relying upon process-exit notification rather than noticing the
meta-FD socket was closed in the subprocess.
@return: a mapping of file descriptor numbers for the new (child)
process to file descriptor numbers in the current (master) process.
"""
fds = {}
extraFDs = []
if self.metaSocket is not None:
extraFDs.append(self.metaSocket.childSocket().fileno())
if self.ampSQLDispenser is not None:
self.ampDBSocket = self.ampSQLDispenser.dispense()
extraFDs.append(self.ampDBSocket.fileno())
for fd in self.inheritSSLFDs + self.inheritFDs + extraFDs:
fds[fd] = fd
return fds
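The identity mapping returned here is later merged with the standard stdio pipe map before `spawnProcess` is called (see `reallyStartProcess` below); a sketch of that combination:

```python
def buildChildFDs(inheritedFDs):
    # Standard stdio pipes (stdin writable from the parent, stdout and
    # stderr readable), plus identity-mapped inherited sockets so each
    # descriptor keeps the same number in the child process.
    childFDs = {0: "w", 1: "r", 2: "r"}
    childFDs.update((fd, fd) for fd in inheritedFDs)
    return childFDs
```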
def getCommandLine(self):
"""
@return: a list of command-line arguments, including the executable to
be used to start this subprocess.
@rtype: C{list} of C{str}
"""
args = [sys.executable, self.twistd]
if config.UserName:
args.extend(("-u", config.UserName))
if config.GroupName:
args.extend(("-g", config.GroupName))
if config.Profiling.Enabled:
args.append("--profile={}/{}.pstats".format(
config.Profiling.BaseDirectory, self.getName()
))
args.extend(("--savestats", "--profiler", "cprofile-cpu"))
args.extend([
"--reactor={}".format(config.Twisted.reactor),
"-n", self.tapname,
"-f", self.configFile,
"-o", "ProcessType=Slave",
"-o", "BindAddresses={}".format(",".join(self.interfaces)),
"-o", "PIDFile={}-instance-{}.pid".format(self.tapname, self.id),
"-o", "ErrorLogFile=None",
"-o", "ErrorLogEnabled=False",
"-o", "LogID={}".format(self.id),
"-o", "MultiProcess/ProcessCount={:d}".format(
config.MultiProcess.ProcessCount
),
"-o", "ControlPort={:d}".format(config.ControlPort),
])
if self.inheritFDs:
args.extend([
"-o", "InheritFDs={}".format(
",".join(map(str, self.inheritFDs))
)
])
if self.inheritSSLFDs:
args.extend([
"-o", "InheritSSLFDs={}".format(
",".join(map(str, self.inheritSSLFDs))
)
])
if self.metaSocket is not None:
args.extend([
"-o", "MetaFD={}".format(
self.metaSocket.childSocket().fileno()
)
])
if self.ampDBSocket is not None:
args.extend([
"-o", "DBAMPFD={}".format(self.ampDBSocket.fileno())
])
return args
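Most of the command line consists of `-o key=value` overrides; the flattening can be sketched as a small helper (`configOverrides` is a hypothetical name, not part of the module):

```python
def configOverrides(options):
    # Flatten a dict of config overrides into the ["-o", "key=value", ...]
    # argument pairs that getCommandLine passes to the twistd plugin.
    args = []
    for key in sorted(options):
        args.extend(["-o", "{}={}".format(key, options[key])])
    return args
```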
class ControlPortTCPServer(TCPServer):
""" This TCPServer retrieves the port number that was actually assigned
when the service was started, and stores that into config.ControlPort
"""
def startService(self):
TCPServer.startService(self)
# Record the port we were actually assigned
config.ControlPort = self._port.getHost().port
class DelayedStartupProcessMonitor(Service, object):
"""
A L{DelayedStartupProcessMonitor} is a L{procmon.ProcessMonitor} that
defers building its command lines until the service is actually ready to
start. It also specializes process-starting to allow for process objects
to determine their arguments as they are started up rather than entirely
ahead of time.
Also, unlike L{procmon.ProcessMonitor}, its C{stopService} returns a
L{Deferred} which fires only when all processes have shut down, to allow
for a clean service shutdown.
@ivar reactor: an L{IReactorProcess} for spawning processes, defaulting to
the global reactor.
@ivar delayInterval: the amount of time to wait between starting subsequent
processes.
@ivar stopping: a flag used to determine whether it is time to fire the
Deferreds that track service shutdown.
"""
threshold = 1
killTime = 5
minRestartDelay = 1
maxRestartDelay = 3600
def __init__(self, reactor=None):
super(DelayedStartupProcessMonitor, self).__init__()
if reactor is None:
from twisted.internet import reactor
self._reactor = reactor
self.processes = OrderedDict()
self.protocols = {}
self.delay = {}
self.timeStarted = {}
self.murder = {}
self.restart = {}
self.stopping = False
if config.MultiProcess.StaggeredStartup.Enabled:
self.delayInterval = config.MultiProcess.StaggeredStartup.Interval
else:
self.delayInterval = 0
def addProcess(self, name, args, uid=None, gid=None, env={}):
"""
Add a new monitored process and start it immediately if the
L{DelayedStartupProcessMonitor} service is running.
Note that args are passed to the system call, not to the shell. If
running the shell is desired, the common idiom is to use
C{ProcessMonitor.addProcess("name", ['/bin/sh', '-c', shell_script])}
@param name: A name for this process. This value must be
unique across all processes added to this monitor.
@type name: C{str}
@param args: The argv sequence for the process to launch.
@param uid: The user ID to use to run the process. If C{None},
the current UID is used.
@type uid: C{int}
@param gid: The group ID to use to run the process. If C{None},
the current GID is used.
        @type gid: C{int}
@param env: The environment to give to the launched process. See
L{IReactorProcess.spawnProcess}'s C{env} parameter.
@type env: C{dict}
@raises: C{KeyError} if a process with the given name already
exists
"""
class SimpleProcessObject(object):
def starting(self):
pass
def stopped(self):
pass
def getName(self):
return name
def getCommandLine(self):
return args
def getFileDescriptors(self):
return []
self.addProcessObject(SimpleProcessObject(), env, uid, gid)
def addProcessObject(self, process, env, uid=None, gid=None):
"""
Add a process object to be run when this service is started.
@param env: a dictionary of environment variables.
        @param process: a L{TwistdSlaveProcess} object to be started upon
            service startup.
"""
name = process.getName()
self.processes[name] = (process, env, uid, gid)
self.delay[name] = self.minRestartDelay
if self.running:
self.startProcess(name)
def startService(self):
# Now we're ready to build the command lines and actually add the
# processes to procmon.
super(DelayedStartupProcessMonitor, self).startService()
# This will be in the order added, since self.processes is an
# OrderedDict
for name in self.processes:
self.startProcess(name)
def stopService(self):
"""
Return a deferred that fires when all child processes have ended.
"""
self.stopping = True
self.deferreds = {}
for name in self.processes:
self.deferreds[name] = Deferred()
super(DelayedStartupProcessMonitor, self).stopService()
# Cancel any outstanding restarts
for name, delayedCall in self.restart.items():
if delayedCall.active():
delayedCall.cancel()
# Stop processes in the reverse order from which they were added and
# started
for name in reversed(self.processes):
self.stopProcess(name)
return gatherResults(self.deferreds.values())
def removeProcess(self, name):
"""
Stop the named process and remove it from the list of monitored
processes.
@type name: C{str}
@param name: A string that uniquely identifies the process.
"""
self.stopProcess(name)
del self.processes[name]
def stopProcess(self, name):
"""
@param name: The name of the process to be stopped
"""
if name not in self.processes:
raise KeyError("Unrecognized process name: {}".format(name))
proto = self.protocols.get(name, None)
if proto is not None:
proc = proto.transport
try:
proc.signalProcess('TERM')
except ProcessExitedAlready:
pass
else:
self.murder[name] = self._reactor.callLater(
self.killTime, self._forceStopProcess, proc
)
def processEnded(self, name):
"""
        Called when a child process has ended. Fires the appropriate
        deferred created in stopService, and signals the dispatcher so
        that the socket is marked as inactive.
"""
# Cancel the scheduled _forceStopProcess function if the process
# dies naturally
if name in self.murder:
if self.murder[name].active():
self.murder[name].cancel()
del self.murder[name]
self.processes[name][0].stopped()
del self.protocols[name]
if self._reactor.seconds() - self.timeStarted[name] < self.threshold:
# The process died too fast - back off
nextDelay = self.delay[name]
self.delay[name] = min(self.delay[name] * 2, self.maxRestartDelay)
else:
# Process had been running for a significant amount of time
# restart immediately
nextDelay = 0
self.delay[name] = self.minRestartDelay
# Schedule a process restart if the service is running
if self.running and name in self.processes:
self.restart[name] = self._reactor.callLater(nextDelay,
self.startProcess,
name)
if self.stopping:
deferred = self.deferreds.pop(name, None)
if deferred is not None:
deferred.callback(None)
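The restart backoff above can be isolated as a pure function (hypothetical name `nextRestart`, returning the delay to use now and the backoff to remember for next time):

```python
def nextRestart(delay, ranFor, threshold=1, minDelay=1, maxDelay=3600):
    # Mirrors processEnded: a child that dies within `threshold` seconds
    # is restarted after the current backoff delay, which then doubles
    # (capped at maxDelay); a long-lived child restarts immediately and
    # the backoff resets to minDelay.
    if ranFor < threshold:
        return delay, min(delay * 2, maxDelay)
    return 0, minDelay
```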
def _forceStopProcess(self, proc):
"""
@param proc: An L{IProcessTransport} provider
"""
try:
proc.signalProcess('KILL')
except ProcessExitedAlready:
pass
def signalAll(self, signal, startswithname=None):
"""
Send a signal to all child processes.
@param signal: the signal to send
@type signal: C{int}
        @param startswithname: if set, only signal those processes
            whose name starts with this string
        @type startswithname: C{str}
"""
for name in self.processes.keys():
if startswithname is None or name.startswith(startswithname):
self.signalProcess(signal, name)
def signalProcess(self, signal, name):
"""
Send a signal to a single monitored process, by name, if that process
is running; otherwise, do nothing.
@param signal: the signal to send
@type signal: C{int}
@param name: the name of the process to signal.
        @type name: C{str}
"""
if name not in self.protocols:
return
proc = self.protocols[name].transport
try:
proc.signalProcess(signal)
except ProcessExitedAlready:
pass
def reallyStartProcess(self, name):
"""
Actually start a process. (Re-implemented here rather than just using
the inherited implementation of startService because ProcessMonitor
doesn't allow customization of subprocess environment).
"""
if name in self.protocols:
return
p = self.protocols[name] = DelayedStartupLoggingProtocol()
p.service = self
p.name = name
procObj, env, uid, gid = self.processes[name]
self.timeStarted[name] = time.time()
childFDs = {0: "w", 1: "r", 2: "r"}
childFDs.update(procObj.getFileDescriptors())
procObj.starting()
args = procObj.getCommandLine()
self._reactor.spawnProcess(
p, args[0], args, uid=uid, gid=gid, env=env,
childFDs=childFDs
)
_pendingStarts = 0
def startProcess(self, name):
"""
Start process named 'name'. If another process has started recently,
wait until some time has passed before actually starting this process.
@param name: the name of the process to start.
"""
interval = (self.delayInterval * self._pendingStarts)
self._pendingStarts += 1
def delayedStart():
self._pendingStarts -= 1
self.reallyStartProcess(name)
self._reactor.callLater(interval, delayedStart)
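The staggered-startup arithmetic reduces to delaying each start by `delayInterval` times the number of starts already pending; a sketch computing the whole schedule up front (hypothetical helper):

```python
def staggerSchedule(names, interval):
    # Each start is delayed by interval * number-of-starts-already-pending,
    # the same arithmetic startProcess applies via _pendingStarts.
    schedule = []
    for pending, name in enumerate(names):
        schedule.append((interval * pending, name))
    return schedule
```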
def restartAll(self):
"""
Restart all processes. This is useful for third party management
services to allow a user to restart servers because of an outside
change in circumstances -- for example, a new version of a library is
installed.
"""
for name in self.processes:
self.stopProcess(name)
def __repr__(self):
l = []
        for name, (procObj, _ignore_env, uid, gid) in self.processes.items():
uidgid = ''
if uid is not None:
uidgid = str(uid)
if gid is not None:
uidgid += ':' + str(gid)
if uidgid:
uidgid = '(' + uidgid + ')'
l.append("{}{}: {}".format(name, uidgid, procObj))
return (
"<{self.__class__.__name__} {l}>"
.format(self=self, l=" ".join(l))
)
class DelayedStartupLineLogger(object):
"""
A line logger that can handle very long lines.
"""
MAX_LENGTH = 1024
CONTINUED_TEXT = " (truncated, continued)"
tag = None
exceeded = False # Am I in the middle of parsing a long line?
_buffer = ''
def makeConnection(self, transport):
"""
Ignore this IProtocol method, since I don't need a transport.
"""
pass
def dataReceived(self, data):
lines = (self._buffer + data).split("\n")
while len(lines) > 1:
line = lines.pop(0)
if len(line) > self.MAX_LENGTH:
self.lineLengthExceeded(line)
elif self.exceeded:
self.lineLengthExceeded(line)
self.exceeded = False
else:
self.lineReceived(line)
lastLine = lines.pop(0)
if len(lastLine) > self.MAX_LENGTH:
self.lineLengthExceeded(lastLine)
self.exceeded = True
self._buffer = ''
else:
self._buffer = lastLine
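The buffering in `dataReceived` reduces to a split that carries the trailing partial line forward; a sketch (hypothetical `splitLines` helper):

```python
def splitLines(buffered, data):
    # Everything up to the last newline forms complete lines; the
    # trailing partial line is carried over to the next call, mirroring
    # dataReceived's buffering.
    lines = (buffered + data).split("\n")
    return lines[:-1], lines[-1]
```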
def lineReceived(self, line):
from twisted.python.log import msg
msg("[{}] {}".format(self.tag, line))
def lineLengthExceeded(self, line):
"""
A very long line is being received. Log it immediately and forget
about buffering it.
"""
segments = self._breakLineIntoSegments(line)
for segment in segments:
self.lineReceived(segment)
def _breakLineIntoSegments(self, line):
"""
Break a line into segments no longer than self.MAX_LENGTH. Each
segment (except for the final one) has self.CONTINUED_TEXT appended.
Returns the array of segments.
@param line: The line to break up
@type line: C{str}
@return: array of C{str}
"""
length = len(line)
        numSegments = (
            length // self.MAX_LENGTH +
            (1 if length % self.MAX_LENGTH else 0)
        )
segments = []
for i in range(numSegments):
msg = line[i * self.MAX_LENGTH:(i + 1) * self.MAX_LENGTH]
if i < numSegments - 1: # not the last segment
msg += self.CONTINUED_TEXT
segments.append(msg)
return segments
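The segmentation logic can be exercised standalone; this sketch reuses the class's constants and mirrors `_breakLineIntoSegments`:

```python
MAX_LENGTH = 1024
CONTINUED_TEXT = " (truncated, continued)"

def breakLine(line, maxLength=MAX_LENGTH, marker=CONTINUED_TEXT):
    # Split into maxLength-sized chunks; every chunk but the last is
    # tagged with the continuation marker, as _breakLineIntoSegments does.
    numSegments = len(line) // maxLength + (1 if len(line) % maxLength else 0)
    segments = []
    for i in range(numSegments):
        segment = line[i * maxLength:(i + 1) * maxLength]
        if i < numSegments - 1:
            segment += marker
        segments.append(segment)
    return segments
```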
class DelayedStartupLoggingProtocol(ProcessProtocol):
"""
Logging protocol that handles lines which are too long.
"""
service = None
name = None
empty = 1
def connectionMade(self):
"""
Replace the superclass's output monitoring logic with one that can
handle lineLengthExceeded.
"""
self.output = DelayedStartupLineLogger()
self.output.makeConnection(self.transport)
self.output.tag = self.name
def outReceived(self, data):
self.output.dataReceived(data)
        self.empty = data.endswith('\n')
errReceived = outReceived
def processEnded(self, reason):
"""
Let the service know that this child process has ended
"""
if not self.empty:
self.output.dataReceived('\n')
self.service.processEnded(self.name)
def getSystemIDs(userName, groupName):
"""
Return the system ID numbers corresponding to either:
A) the userName and groupName if non-empty, or
B) the real user ID and group ID of the process
@param userName: The name of the user to look up the ID of. An empty
value indicates the real user ID of the process should be returned
instead.
@type userName: C{str}
@param groupName: The name of the group to look up the ID of. An empty
value indicates the real group ID of the process should be returned
instead.
@type groupName: C{str}
"""
if userName:
try:
uid = getpwnam(userName).pw_uid
except KeyError:
raise ConfigurationError(
"Invalid user name: {}".format(userName)
)
else:
uid = getuid()
if groupName:
try:
gid = getgrnam(groupName).gr_gid
except KeyError:
raise ConfigurationError(
"Invalid group name: {}".format(groupName)
)
else:
gid = getgid()
return uid, gid
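The name-to-ID resolution above boils down to two conditional lookups; a compact sketch (hypothetical `systemIDs` helper, without the ConfigurationError wrapping):

```python
import os
import pwd
import grp

def systemIDs(userName, groupName):
    # Empty names fall back to the real IDs of the current process;
    # unknown names raise KeyError from getpwnam/getgrnam (the caller
    # above wraps that in a ConfigurationError).
    uid = pwd.getpwnam(userName).pw_uid if userName else os.getuid()
    gid = grp.getgrnam(groupName).gr_gid if groupName else os.getgid()
    return uid, gid
```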
class DataStoreMonitor(object):
implements(IDirectoryChangeListenee)
def __init__(self, reactor, storageService):
"""
@param storageService: the service making use of the DataStore
directory; we send it a hardStop() to shut it down
"""
self._reactor = reactor
self._storageService = storageService
def disconnected(self):
self._storageService.hardStop()
self._reactor.stop()
def deleted(self):
self._storageService.hardStop()
self._reactor.stop()
def renamed(self):
self._storageService.hardStop()
self._reactor.stop()
def connectionLost(self, reason):
pass
# ---- File: calendarserver-7.0+dfsg/calendarserver/tap/profiling.py ----
##
# Copyright (c) 2010-2015 Apple Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
##
import time
from twisted.application.app import CProfileRunner, AppProfiler
class CProfileCPURunner(CProfileRunner):
"""
Runner for the cProfile module which uses C{time.clock} to measure
CPU usage instead of the default wallclock time.
"""
def run(self, reactor):
"""
Run reactor under the cProfile profiler.
"""
try:
import cProfile
import pstats
except ImportError, e:
self._reportImportError("cProfile", e)
p = cProfile.Profile(time.clock)
p.runcall(reactor.run)
if self.saveStats:
p.dump_stats(self.profileOutput)
else:
stream = open(self.profileOutput, 'w')
s = pstats.Stats(p, stream=stream)
s.strip_dirs()
s.sort_stats(-1)
s.print_stats()
stream.close()
AppProfiler.profilers["cprofile-cpu"] = CProfileCPURunner
# ---- File: calendarserver-7.0+dfsg/calendarserver/tap/test/__init__.py ----
##
# Copyright (c) 2005-2015 Apple Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
##
"""
Tests for the calendarserver.tap module.
"""
# ---- File: calendarserver-7.0+dfsg/calendarserver/tap/test/longlines.py ----
##
# Copyright (c) 2007-2015 Apple Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
##
import sys
length = int(sys.argv[1])
data = (("x" * length) +
("y" * (length + 1)) + "\n" +
("z" + "\n"))
sys.stdout.write(data)
sys.stdout.flush()
# ---- File: calendarserver-7.0+dfsg/calendarserver/tap/test/reexec.tac ----
from twisted.application import service
from calendarserver.tap.caldav import ReExecService
class TestService(service.Service):
def startService(self):
print "START"
def stopService(self):
print "STOP"
application = service.Application("ReExec Tester")
reExecService = ReExecService("twistd.pid")
reExecService.setServiceParent(application)
testService = TestService()
testService.setServiceParent(reExecService)
# ---- File: calendarserver-7.0+dfsg/calendarserver/tap/test/test_caldav.py ----
##
# Copyright (c) 2007-2015 Apple Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
##
import sys
import os
import stat
import grp
import random
from time import sleep
from os.path import dirname, abspath
from collections import namedtuple
from zope.interface import implements
from twisted.application import service
from twisted.python import log as logging
from twisted.python.threadable import isInIOThread
from twisted.internet.reactor import callFromThread
from twisted.python.usage import Options, UsageError
from twisted.python.procutils import which
from twisted.runner.procmon import ProcessMonitor, LoggingProtocol
from twisted.internet.interfaces import IProcessTransport, IReactorProcess
from twisted.internet.protocol import ServerFactory
from twisted.internet.defer import Deferred, inlineCallbacks, succeed, gatherResults
from twisted.internet.task import Clock
from twisted.internet import reactor
from twisted.application.service import IService, IServiceCollection
from twisted.application import internet
from twext.python.log import Logger
from twext.python.filepath import CachingFilePath as FilePath
from plistlib import writePlist # @UnresolvedImport
from txweb2.dav import auth
from txweb2.log import LogWrapperResource
from twext.internet.tcp import MaxAcceptTCPServer, MaxAcceptSSLServer
from twistedcaldav import memcacheclient
from twistedcaldav.config import config, ConfigDict, ConfigurationError
from twistedcaldav.resource import AuthenticationWrapper
from twistedcaldav.stdconfig import DEFAULT_CONFIG
from twistedcaldav.directory.calendar import DirectoryCalendarHomeProvisioningResource
from twistedcaldav.directory.principal import DirectoryPrincipalProvisioningResource
from twistedcaldav.test.util import StoreTestCase, CapturingProcessProtocol, \
TestCase
from calendarserver.tap.caldav import (
CalDAVOptions, CalDAVServiceMaker, CalDAVService, GroupOwnedUNIXServer,
DelayedStartupProcessMonitor, DelayedStartupLineLogger, TwistdSlaveProcess,
_CONTROL_SERVICE_NAME, getSystemIDs, PreProcessingService,
DataStoreMonitor
)
from calendarserver.provision.root import RootResource
from StringIO import StringIO
import tempfile
log = Logger()
# Points to top of source tree.
sourceRoot = dirname(dirname(dirname(dirname(abspath(__file__)))))
class NotAProcessTransport(object):
"""
Simple L{IProcessTransport} stub.
"""
implements(IProcessTransport)
def __init__(self, processProtocol, executable, args, env, path,
uid, gid, usePTY, childFDs):
"""
Hold on to all the attributes passed to spawnProcess.
"""
self.processProtocol = processProtocol
self.executable = executable
self.args = args
self.env = env
self.path = path
self.uid = uid
self.gid = gid
self.usePTY = usePTY
self.childFDs = childFDs
class InMemoryProcessSpawner(Clock):
"""
Stub out L{IReactorProcess} and L{IReactorClock} so that we can examine the
interaction of L{DelayedStartupProcessMonitor} and the reactor.
"""
implements(IReactorProcess)
def __init__(self):
"""
Create some storage to hold on to all the fake processes spawned.
"""
super(InMemoryProcessSpawner, self).__init__()
self.processTransports = []
self.waiting = []
def waitForOneProcess(self, amount=10.0):
"""
Wait for an L{IProcessTransport} to be created by advancing the clock.
If none are created in the specified amount of time, raise an
AssertionError.
"""
self.advance(amount)
if self.processTransports:
return self.processTransports.pop(0)
else:
raise AssertionError(
"There were no process transports available. Calls: " +
repr(self.calls)
)
def spawnProcess(self, processProtocol, executable, args=(), env={},
path=None, uid=None, gid=None, usePTY=0,
childFDs=None):
transport = NotAProcessTransport(
processProtocol, executable, args, env, path, uid, gid, usePTY,
childFDs
)
transport.startedAt = self.seconds()
self.processTransports.append(transport)
if self.waiting:
self.waiting.pop(0).callback(transport)
return transport
class TestCalDAVOptions (CalDAVOptions):
"""
A fake implementation of CalDAVOptions that provides
empty implementations of checkDirectory and checkFile.
"""
def checkDirectory(self, *args, **kwargs):
pass
def checkFile(self, *args, **kwargs):
pass
def checkDirectories(self, *args, **kwargs):
pass
def loadConfiguration(self):
"""
Simple wrapper to avoid printing during test runs.
"""
oldout = sys.stdout
newout = sys.stdout = StringIO()
try:
return CalDAVOptions.loadConfiguration(self)
finally:
sys.stdout = oldout
log.info(
"load configuration console output: %s" % newout.getvalue()
)
class CalDAVOptionsTest(StoreTestCase):
"""
Test various parameters of our usage.Options subclass
"""
@inlineCallbacks
def setUp(self):
"""
Set up our options object, giving it a parent, and forcing the
global config to be loaded from defaults.
"""
yield super(CalDAVOptionsTest, self).setUp()
self.config = TestCalDAVOptions()
self.config.parent = Options()
self.config.parent["uid"] = 0
self.config.parent["gid"] = 0
self.config.parent["nodaemon"] = False
def tearDown(self):
config.setDefaults(DEFAULT_CONFIG)
config.reload()
def test_overridesConfig(self):
"""
        Test that values on the command line's -o and --option options
        override the config file.
"""
myConfig = ConfigDict(DEFAULT_CONFIG)
myConfigFile = self.mktemp()
writePlist(myConfig, myConfigFile)
argv = [
"-f", myConfigFile,
"-o", "EnableSACLs",
"-o", "HTTPPort=80",
"-o", "BindAddresses=127.0.0.1,127.0.0.2,127.0.0.3",
"-o", "DocumentRoot=/dev/null",
"-o", "UserName=None",
"-o", "EnableProxyPrincipals=False",
]
self.config.parseOptions(argv)
self.assertEquals(config.EnableSACLs, True)
self.assertEquals(config.HTTPPort, 80)
self.assertEquals(config.BindAddresses,
["127.0.0.1", "127.0.0.2", "127.0.0.3"])
self.assertEquals(config.DocumentRoot, "/dev/null")
self.assertEquals(config.UserName, None)
self.assertEquals(config.EnableProxyPrincipals, False)
argv = ["-o", "Authentication=This Doesn't Matter"]
self.assertRaises(UsageError, self.config.parseOptions, argv)
def test_setsParent(self):
"""
        Test that certain values are set on the parent (i.e. twistd's
        Options object).
"""
myConfig = ConfigDict(DEFAULT_CONFIG)
myConfigFile = self.mktemp()
writePlist(myConfig, myConfigFile)
argv = [
"-f", myConfigFile,
"-o", "PIDFile=/dev/null",
"-o", "umask=63",
# integers in plists & calendarserver command line are always
# decimal; umask is traditionally in octal.
]
self.config.parseOptions(argv)
self.assertEquals(self.config.parent["pidfile"], "/dev/null")
self.assertEquals(self.config.parent["umask"], 0077)
def test_specifyConfigFile(self):
"""
Test that specifying a config file from the command line
loads the global config with those values properly.
"""
myConfig = ConfigDict(DEFAULT_CONFIG)
myConfig.Authentication.Basic.Enabled = False
myConfig.HTTPPort = 80
myConfig.ServerHostName = "calendar.calenderserver.org"
myConfigFile = self.mktemp()
writePlist(myConfig, myConfigFile)
args = ["-f", myConfigFile]
self.config.parseOptions(args)
self.assertEquals(config.ServerHostName, myConfig["ServerHostName"])
self.assertEquals(config.HTTPPort, myConfig.HTTPPort)
self.assertEquals(
config.Authentication.Basic.Enabled,
myConfig.Authentication.Basic.Enabled
)
def test_specifyDictPath(self):
"""
        Test that we can specify command line overrides to leaves using
        a "/" separated path, such as "-o MultiProcess/ProcessCount=1".
"""
myConfig = ConfigDict(DEFAULT_CONFIG)
myConfigFile = self.mktemp()
writePlist(myConfig, myConfigFile)
argv = [
"-o", "MultiProcess/ProcessCount=102",
"-f", myConfigFile,
]
self.config.parseOptions(argv)
self.assertEquals(config.MultiProcess["ProcessCount"], 102)
def inServiceHierarchy(svc, predicate):
"""
Find services in the service collection which satisfy the given predicate.
"""
for subsvc in svc.services:
if IServiceCollection.providedBy(subsvc):
for value in inServiceHierarchy(subsvc, predicate):
yield value
if predicate(subsvc):
yield subsvc
def determineAppropriateGroupID():
"""
Determine a secondary group ID which can be used for testing.
"""
return os.getgroups()[1]
class SocketGroupOwnership(StoreTestCase):
"""
Tests for L{GroupOwnedUNIXServer}.
"""
def test_groupOwnedUNIXSocket(self):
"""
When a L{GroupOwnedUNIXServer} is started, it will change the group of
its socket.
"""
alternateGroup = determineAppropriateGroupID()
socketName = self.mktemp()
gous = GroupOwnedUNIXServer(alternateGroup, socketName, ServerFactory(), mode=0660)
gous.privilegedStartService()
self.addCleanup(gous.stopService)
filestat = os.stat(socketName)
self.assertTrue(stat.S_ISSOCK(filestat.st_mode))
self.assertEquals(filestat.st_gid, alternateGroup)
self.assertEquals(filestat.st_uid, os.getuid())
# Tests for the various makeService_ flavors:
class CalDAVServiceMakerTestBase(StoreTestCase):
@inlineCallbacks
def setUp(self):
yield super(CalDAVServiceMakerTestBase, self).setUp()
self.options = TestCalDAVOptions()
self.options.parent = Options()
self.options.parent["gid"] = None
self.options.parent["uid"] = None
self.options.parent["nodaemon"] = None
class CalDAVServiceMakerTestSingle(CalDAVServiceMakerTestBase):
def configure(self):
super(CalDAVServiceMakerTestSingle, self).configure()
config.ProcessType = "Single"
def test_makeService(self):
CalDAVServiceMaker().makeService(self.options)
# No error
class CalDAVServiceMakerTestSlave(CalDAVServiceMakerTestBase):
def configure(self):
super(CalDAVServiceMakerTestSlave, self).configure()
config.ProcessType = "Slave"
def test_makeService(self):
CalDAVServiceMaker().makeService(self.options)
# No error
class CalDAVServiceMakerTestUnknown(CalDAVServiceMakerTestBase):
def configure(self):
super(CalDAVServiceMakerTestUnknown, self).configure()
config.ProcessType = "Unknown"
def test_makeService(self):
self.assertRaises(UsageError, CalDAVServiceMaker().makeService, self.options)
# error
class ModesOnUNIXSocketsTests(CalDAVServiceMakerTestBase):
def configure(self):
super(ModesOnUNIXSocketsTests, self).configure()
config.ProcessType = "Combined"
config.HTTPPort = 0
self.alternateGroup = determineAppropriateGroupID()
config.GroupName = grp.getgrgid(self.alternateGroup).gr_name
config.Stats.EnableUnixStatsSocket = True
def test_modesOnUNIXSockets(self):
"""
The logging and stats UNIX sockets that are bound as part of the
'Combined' service hierarchy should have a secure mode specified: only
the executing user should be able to open and send to them.
"""
svc = CalDAVServiceMaker().makeService(self.options)
for serviceName in [_CONTROL_SERVICE_NAME]:
socketService = svc.getServiceNamed(serviceName)
self.assertIsInstance(socketService, GroupOwnedUNIXServer)
m = socketService.kwargs.get("mode", 0666)
self.assertEquals(
m, int("660", 8),
"Wrong mode on %s: %s" % (serviceName, oct(m))
)
self.assertEquals(socketService.gid, self.alternateGroup)
for serviceName in ["unix-stats"]:
socketService = svc.getServiceNamed(serviceName)
self.assertIsInstance(socketService, GroupOwnedUNIXServer)
m = socketService.kwargs.get("mode", 0666)
self.assertEquals(
m, int("660", 8),
"Wrong mode on %s: %s" % (serviceName, oct(m))
)
self.assertEquals(socketService.gid, self.alternateGroup)
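The `int("660", 8)` comparison above is simply the octal spelling of the expected mode. A small standalone sketch (using a hypothetical throwaway file, not the actual sockets) showing how permission bits are isolated and compared:

```python
import os
import stat
import tempfile

# Create a throwaway file and give it the same mode the sockets should have.
fd, path = tempfile.mkstemp()
os.close(fd)
os.chmod(path, 0o660)

# S_IMODE strips the file-type bits, leaving only the permission bits.
mode = stat.S_IMODE(os.stat(path).st_mode)
assert mode == int("660", 8) == 0o660

os.remove(path)
```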
class TestLoggingProtocol(LoggingProtocol):
def processEnded(self, reason):
LoggingProtocol.processEnded(self, reason)
self.service.processEnded(self.name)
class TestProcessMonitor(ProcessMonitor):
def startProcess(self, name):
"""
@param name: The name of the process to be started
"""
# If a protocol instance already exists, it means the process is
# already running
if name in self.protocols:
return
args, uid, gid, env = self.processes[name]
proto = TestLoggingProtocol()
proto.service = self
proto.name = name
self.protocols[name] = proto
self.timeStarted[name] = self._reactor.seconds()
self._reactor.spawnProcess(
proto, args[0], args, uid=uid, gid=gid, env=env
)
def stopService(self):
"""
Return a deferred that fires when all child processes have ended.
"""
service.Service.stopService(self)
self.stopping = True
self.deferreds = {}
for name in self.processes:
self.deferreds[name] = Deferred()
# Cancel any outstanding restarts
for name, delayedCall in self.restart.items():
if delayedCall.active():
delayedCall.cancel()
for name in self.processes:
self.stopProcess(name)
return gatherResults(self.deferreds.values())
def processEnded(self, name):
if self.stopping:
deferred = self.deferreds.pop(name, None)
if deferred is not None:
deferred.callback(None)
class MemcacheSpawner(TestCase):
def setUp(self):
super(MemcacheSpawner, self).setUp()
self.monitor = TestProcessMonitor()
self.monitor.startService()
self.socket = os.path.join(tempfile.gettempdir(), "memcache.sock")
self.patch(config.Memcached.Pools.Default, "ServerEnabled", True)
def test_memcacheUnix(self):
"""
Spawn a memcached process listening on a unix socket that becomes
connectable in no more than one second. Connect and interact.
Verify secure file permissions on the socket file.
"""
self.patch(config.Memcached.Pools.Default, "MemcacheSocket", self.socket)
CalDAVServiceMaker()._spawnMemcached(monitor=self.monitor)
sleep(1)
mc = memcacheclient.Client(["unix:{}".format(self.socket)], debug=1)
rando = random.random()
mc.set("the_answer", rando)
self.assertEquals(rando, mc.get("the_answer"))
# The socket file should not be usable to other users
st = os.stat(self.socket)
self.assertTrue(str(oct(st.st_mode)).endswith("00"))
mc.disconnect_all()
def test_memcacheINET(self):
"""
Spawn a memcached process listening on a network socket that becomes
connectable in no more than one second. Interact with it.
"""
self.patch(config.Memcached.Pools.Default, "MemcacheSocket", "")
ba = config.Memcached.Pools.Default.BindAddress
bp = config.Memcached.Pools.Default.Port
CalDAVServiceMaker()._spawnMemcached(monitor=self.monitor)
sleep(1)
mc = memcacheclient.Client(["{}:{}".format(ba, bp)], debug=1)
rando = random.random()
mc.set("the_password", rando)
self.assertEquals(rando, mc.get("the_password"))
mc.disconnect_all()
def tearDown(self):
"""
Verify that our spawned memcached can be reaped.
"""
return self.monitor.stopService()
class ProcessMonitorTests(CalDAVServiceMakerTestBase):
def configure(self):
super(ProcessMonitorTests, self).configure()
config.ProcessType = "Combined"
def test_processMonitor(self):
"""
In the master, there should be exactly one
L{DelayedStartupProcessMonitor} in the service hierarchy so that it
will be started by startup.
"""
self.assertEquals(
1,
len(
list(inServiceHierarchy(
CalDAVServiceMaker().makeService(self.options),
lambda x: isinstance(x, DelayedStartupProcessMonitor)))
)
)
class SlaveServiceTests(CalDAVServiceMakerTestBase):
"""
Test various configurations of the Slave service
"""
def configure(self):
super(SlaveServiceTests, self).configure()
config.ProcessType = "Slave"
config.HTTPPort = 8008
config.SSLPort = 8443
pemFile = os.path.join(sourceRoot, "twistedcaldav/test/data/server.pem")
config.SSLPrivateKey = pemFile
config.SSLCertificate = pemFile
config.EnableSSL = True
def test_defaultService(self):
"""
Test the value of a Slave service in its simplest
configuration.
"""
service = CalDAVServiceMaker().makeService(self.options)
self.failUnless(
IService(service),
"%s does not provide IService" % (service,)
)
self.failUnless(
service.services,
"No services configured"
)
self.failUnless(
isinstance(service, CalDAVService),
"%s is not a CalDAVService" % (service,)
)
def test_defaultListeners(self):
"""
Test that the Slave service has sub services with the
default TCP and SSL configuration
"""
# Note: the listeners are bundled within a MultiService named "ConnectionService"
service = CalDAVServiceMaker().makeService(self.options)
service = service.getServiceNamed(CalDAVService.connectionServiceName)
expectedSubServices = dict((
(MaxAcceptTCPServer, config.HTTPPort),
(MaxAcceptSSLServer, config.SSLPort),
))
configuredSubServices = [(s.__class__, getattr(s, 'args', None))
for s in service.services]
checked = 0
for serviceClass, serviceArgs in configuredSubServices:
if serviceClass in expectedSubServices:
checked += 1
self.assertEquals(
serviceArgs[0],
dict(expectedSubServices)[serviceClass]
)
# TCP+SSL services for each bind address
self.assertEquals(checked, 2 * len(config.BindAddresses))
def test_SSLKeyConfiguration(self):
"""
Test that the configuration of the SSLServer reflect the config file's
SSL Private Key and SSL Certificate
"""
# Note: the listeners are bundled within a MultiService named "ConnectionService"
service = CalDAVServiceMaker().makeService(self.options)
service = service.getServiceNamed(CalDAVService.connectionServiceName)
sslService = None
for s in service.services:
if isinstance(s, internet.SSLServer):
sslService = s
break
self.failIf(sslService is None, "No SSL Service found")
context = sslService.args[2]
self.assertEquals(
config.SSLPrivateKey,
context.privateKeyFileName
)
self.assertEquals(
config.SSLCertificate,
context.certificateFileName,
)
class NoSSLTests(CalDAVServiceMakerTestBase):
def configure(self):
super(NoSSLTests, self).configure()
config.ProcessType = "Slave"
config.HTTPPort = 8008
# pemFile = os.path.join(sourceRoot, "twistedcaldav/test/data/server.pem")
# config.SSLPrivateKey = pemFile
# config.SSLCertificate = pemFile
# config.EnableSSL = True
def test_noSSL(self):
"""
Test the Slave service to make sure there is no SSLServer when SSL
is disabled
"""
# Note: the listeners are bundled within a MultiService named "ConnectionService"
service = CalDAVServiceMaker().makeService(self.options)
service = service.getServiceNamed(CalDAVService.connectionServiceName)
self.assertNotIn(
internet.SSLServer,
[s.__class__ for s in service.services]
)
class NoHTTPTests(CalDAVServiceMakerTestBase):
def configure(self):
super(NoHTTPTests, self).configure()
config.ProcessType = "Slave"
config.SSLPort = 8443
pemFile = os.path.join(sourceRoot, "twistedcaldav/test/data/server.pem")
config.SSLPrivateKey = pemFile
config.SSLCertificate = pemFile
config.EnableSSL = True
def test_noHTTP(self):
"""
Test the Slave service to make sure there is no TCPServer when
HTTPPort is not configured
"""
# Note: the listeners are bundled within a MultiService named "ConnectionService"
service = CalDAVServiceMaker().makeService(self.options)
service = service.getServiceNamed(CalDAVService.connectionServiceName)
self.assertNotIn(
internet.TCPServer,
[s.__class__ for s in service.services]
)
class SingleBindAddressesTests(CalDAVServiceMakerTestBase):
def configure(self):
super(SingleBindAddressesTests, self).configure()
config.ProcessType = "Slave"
config.HTTPPort = 8008
config.BindAddresses = ["127.0.0.1"]
def test_singleBindAddresses(self):
"""
Test that the TCPServer and SSLServers are bound to the proper address
"""
# Note: the listeners are bundled within a MultiService named "ConnectionService"
service = CalDAVServiceMaker().makeService(self.options)
service = service.getServiceNamed(CalDAVService.connectionServiceName)
for s in service.services:
if isinstance(s, (internet.TCPServer, internet.SSLServer)):
self.assertEquals(s.kwargs["interface"], "127.0.0.1")
class MultipleBindAddressesTests(CalDAVServiceMakerTestBase):
def configure(self):
super(MultipleBindAddressesTests, self).configure()
config.ProcessType = "Slave"
config.HTTPPort = 8008
config.SSLPort = 8443
pemFile = os.path.join(sourceRoot, "twistedcaldav/test/data/server.pem")
config.SSLPrivateKey = pemFile
config.SSLCertificate = pemFile
config.EnableSSL = True
config.BindAddresses = [
"127.0.0.1",
"10.0.0.2",
"172.53.13.123",
]
def test_multipleBindAddresses(self):
"""
Test that the TCPServer and SSLServers are bound to the proper
addresses.
"""
# Note: the listeners are bundled within a MultiService named "ConnectionService"
service = CalDAVServiceMaker().makeService(self.options)
service = service.getServiceNamed(CalDAVService.connectionServiceName)
tcpServers = []
sslServers = []
for s in service.services:
if isinstance(s, internet.TCPServer):
tcpServers.append(s)
elif isinstance(s, internet.SSLServer):
sslServers.append(s)
self.assertEquals(len(tcpServers), len(config.BindAddresses))
self.assertEquals(len(sslServers), len(config.BindAddresses))
for addr in config.BindAddresses:
for s in tcpServers:
if s.kwargs["interface"] == addr:
tcpServers.remove(s)
for s in sslServers:
if s.kwargs["interface"] == addr:
sslServers.remove(s)
self.assertEquals(len(tcpServers), 0)
self.assertEquals(len(sslServers), 0)
class ListenBacklogTests(CalDAVServiceMakerTestBase):
def configure(self):
super(ListenBacklogTests, self).configure()
config.ProcessType = "Slave"
config.ListenBacklog = 1024
config.HTTPPort = 8008
config.SSLPort = 8443
pemFile = os.path.join(sourceRoot, "twistedcaldav/test/data/server.pem")
config.SSLPrivateKey = pemFile
config.SSLCertificate = pemFile
config.EnableSSL = True
config.BindAddresses = [
"127.0.0.1",
"10.0.0.2",
"172.53.13.123",
]
def test_listenBacklog(self):
"""
Test that the backlog argument is set on the TCPServer and SSLServer services
"""
# Note: the listeners are bundled within a MultiService named "ConnectionService"
service = CalDAVServiceMaker().makeService(self.options)
service = service.getServiceNamed(CalDAVService.connectionServiceName)
for s in service.services:
if isinstance(s, (internet.TCPServer, internet.SSLServer)):
self.assertEquals(s.kwargs["backlog"], 1024)
class AuthWrapperAllEnabledTests(CalDAVServiceMakerTestBase):
def configure(self):
super(AuthWrapperAllEnabledTests, self).configure()
config.HTTPPort = 8008
config.Authentication.Digest.Enabled = True
config.Authentication.Kerberos.Enabled = True
config.Authentication.Kerberos.ServicePrincipal = "http/hello@bob"
config.Authentication.Basic.Enabled = True
def test_AuthWrapperAllEnabled(self):
"""
Test the configuration of the authentication wrapper
when all schemes are enabled.
"""
authWrapper = self.rootResource.resource
self.failUnless(
isinstance(
authWrapper,
auth.AuthenticationWrapper
)
)
expectedSchemes = ["negotiate", "digest", "basic"]
for scheme in authWrapper.credentialFactories:
self.failUnless(scheme in expectedSchemes)
self.assertEquals(len(expectedSchemes),
len(authWrapper.credentialFactories))
ncf = authWrapper.credentialFactories["negotiate"]
self.assertEquals(ncf.service, "http@HELLO")
self.assertEquals(ncf.realm, "bob")
class ServicePrincipalNoneTests(CalDAVServiceMakerTestBase):
def configure(self):
super(ServicePrincipalNoneTests, self).configure()
config.HTTPPort = 8008
config.Authentication.Digest.Enabled = True
config.Authentication.Kerberos.Enabled = True
config.Authentication.Kerberos.ServicePrincipal = ""
config.Authentication.Basic.Enabled = True
def test_servicePrincipalNone(self):
"""
Test that a Kerberos principal lookup is attempted when the configured
principal is empty; the lookup fails here, so the negotiate scheme is
not enabled.
"""
authWrapper = self.rootResource.resource
self.assertFalse("negotiate" in authWrapper.credentialFactories)
class AuthWrapperPartialEnabledTests(CalDAVServiceMakerTestBase):
def configure(self):
super(AuthWrapperPartialEnabledTests, self).configure()
config.Authentication.Digest.Enabled = True
config.Authentication.Kerberos.Enabled = False
config.Authentication.Basic.Enabled = False
def test_AuthWrapperPartialEnabled(self):
"""
Test that the expected credential factories exist when
only a partial set of authentication schemes is
enabled.
"""
authWrapper = self.rootResource.resource
expectedSchemes = ["digest"]
for scheme in authWrapper.credentialFactories:
self.failUnless(scheme in expectedSchemes)
self.assertEquals(
len(expectedSchemes),
len(authWrapper.credentialFactories)
)
class ResourceTests(CalDAVServiceMakerTestBase):
def test_LogWrapper(self):
"""
Test the configuration of the log wrapper
"""
self.failUnless(isinstance(self.rootResource, LogWrapperResource))
def test_AuthWrapper(self):
"""
Test the configuration of the auth wrapper
"""
self.failUnless(isinstance(self.rootResource.resource, AuthenticationWrapper))
def test_rootResource(self):
"""
Test the root resource
"""
self.failUnless(isinstance(self.rootResource.resource.resource, RootResource))
@inlineCallbacks
def test_principalResource(self):
"""
Test the principal resource
"""
self.failUnless(isinstance(
(yield self.actualRoot.getChild("principals")),
DirectoryPrincipalProvisioningResource
))
@inlineCallbacks
def test_calendarResource(self):
"""
Test the calendar resource
"""
self.failUnless(isinstance(
(yield self.actualRoot.getChild("calendars")),
DirectoryCalendarHomeProvisioningResource
))
@inlineCallbacks
def test_sameDirectory(self):
"""
Test that the principal hierarchy has a reference
to the same DirectoryService as the calendar hierarchy
"""
principals = yield self.actualRoot.getChild("principals")
calendars = yield self.actualRoot.getChild("calendars")
self.assertEquals(principals.directory, calendars.directory)
class DummyProcessObject(object):
"""
Simple stub for Process Object API which just has an executable and some
arguments.
This is a stand in for L{TwistdSlaveProcess}.
"""
def __init__(self, scriptname, *args):
self.scriptname = scriptname
self.args = args
def starting(self):
pass
def stopped(self):
pass
def getCommandLine(self):
"""
Simple command line.
"""
return [self.scriptname] + list(self.args)
def getFileDescriptors(self):
"""
Return a dummy, empty mapping of file descriptors.
"""
return {}
def getName(self):
"""
Get a dummy name.
"""
return 'Dummy'
class ScriptProcessObject(DummyProcessObject):
"""
Simple stub for the Process Object API that will run a test script.
"""
def getCommandLine(self):
"""
Get the command line to invoke this script.
"""
return [
sys.executable,
FilePath(__file__).sibling(self.scriptname).path
] + list(self.args)
class DelayedStartupProcessMonitorTests(StoreTestCase):
"""
Test cases for L{DelayedStartupProcessMonitor}.
"""
def test_lineAfterLongLine(self):
"""
A "long" line of output from a monitored process (longer than
L{LineReceiver.MAX_LENGTH}) should be logged in chunks rather than all
at once, to avoid resource exhaustion.
"""
dspm = DelayedStartupProcessMonitor()
dspm.addProcessObject(
ScriptProcessObject(
'longlines.py',
str(DelayedStartupLineLogger.MAX_LENGTH)
),
os.environ
)
dspm.startService()
self.addCleanup(dspm.stopService)
logged = []
def tempObserver(event):
# Probably won't be a problem, but let's not have any intermittent
# test issues that stem from multi-threaded log messages randomly
# going off...
if not isInIOThread():
callFromThread(tempObserver, event)
return
if event.get('isError'):
d.errback()
m = event.get('message')[0]
if m.startswith('[Dummy] '):
logged.append(event)
if m == '[Dummy] z':
d.callback("done")
logging.addObserver(tempObserver)
self.addCleanup(logging.removeObserver, tempObserver)
d = Deferred()
def assertions(result):
self.assertEquals(["[Dummy] x",
"[Dummy] y",
"[Dummy] y", # final segment
"[Dummy] z"],
[''.join(evt['message'])[:len('[Dummy]') + 2]
for evt in logged])
self.assertEquals([" (truncated, continued)",
" (truncated, continued)",
"[Dummy] y",
"[Dummy] z"],
[''.join(evt['message'])[-len(" (truncated, continued)"):]
for evt in logged])
d.addCallback(assertions)
return d
def test_breakLineIntoSegments(self):
"""
Exercise the line-breaking logic with various key lengths
"""
testLogger = DelayedStartupLineLogger()
testLogger.MAX_LENGTH = 10
for input, output in [
("", []),
("a", ["a"]),
("abcde", ["abcde"]),
("abcdefghij", ["abcdefghij"]),
(
"abcdefghijk",
[
"abcdefghij (truncated, continued)",
"k"
]
),
(
"abcdefghijklmnopqrst",
[
"abcdefghij (truncated, continued)",
"klmnopqrst"
]
),
(
"abcdefghijklmnopqrstuv",
[
"abcdefghij (truncated, continued)",
"klmnopqrst (truncated, continued)",
"uv"
]
),
]:
self.assertEquals(output, testLogger._breakLineIntoSegments(input))
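The segmentation behavior exercised above can be modeled in a few lines. This is an inferred sketch of the rule the test table encodes (split into MAX_LENGTH chunks, tagging every non-final chunk with a continuation marker), not the actual `_breakLineIntoSegments` implementation:

```python
def break_line_into_segments(line, max_length=10,
                             marker=" (truncated, continued)"):
    """Split a long line into max_length chunks, marking continuations."""
    segments = []
    while len(line) > max_length:
        segments.append(line[:max_length] + marker)
        line = line[max_length:]
    if line:
        segments.append(line)
    return segments

assert break_line_into_segments("") == []
assert break_line_into_segments("abcdefghij") == ["abcdefghij"]
assert break_line_into_segments("abcdefghijk") == [
    "abcdefghij (truncated, continued)", "k"
]
```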
def test_acceptDescriptorInheritance(self):
"""
If a L{TwistdSlaveProcess} specifies some file descriptors to be
inherited, they should be inherited by the subprocess.
"""
imps = InMemoryProcessSpawner()
dspm = DelayedStartupProcessMonitor(imps)
# Most arguments here will be ignored, so these are bogus values.
slave = TwistdSlaveProcess(
twistd="bleh",
tapname="caldav",
configFile="/does/not/exist",
id=10,
interfaces='127.0.0.1',
inheritFDs=[3, 7],
inheritSSLFDs=[19, 25],
)
dspm.addProcessObject(slave, {})
dspm.startService()
# We can easily stub out spawnProcess, because caldav calls it, but a
# bunch of callLater calls are buried in procmon itself, so we need to
# use the real clock.
oneProcessTransport = imps.waitForOneProcess()
self.assertEquals(oneProcessTransport.childFDs,
{0: 'w', 1: 'r', 2: 'r',
3: 3, 7: 7,
19: 19, 25: 25})
def test_changedArgumentEachSpawn(self):
"""
If the result of C{getCommandLine} changes on subsequent calls,
subsequent calls should result in different arguments being passed to
C{spawnProcess} each time.
"""
imps = InMemoryProcessSpawner()
dspm = DelayedStartupProcessMonitor(imps)
slave = DummyProcessObject('scriptname', 'first')
dspm.addProcessObject(slave, {})
dspm.startService()
oneProcessTransport = imps.waitForOneProcess()
self.assertEquals(oneProcessTransport.args,
['scriptname', 'first'])
slave.args = ['second']
oneProcessTransport.processProtocol.processEnded(None)
twoProcessTransport = imps.waitForOneProcess()
self.assertEquals(twoProcessTransport.args,
['scriptname', 'second'])
def test_metaDescriptorInheritance(self):
"""
If a L{TwistdSlaveProcess} specifies a meta-file-descriptor to be
inherited, it should be inherited by the subprocess, and a
configuration argument should be passed that indicates the inherited
descriptor to the subprocess.
"""
imps = InMemoryProcessSpawner()
dspm = DelayedStartupProcessMonitor(imps)
# Most arguments here will be ignored, so these are bogus values.
slave = TwistdSlaveProcess(
twistd="bleh",
tapname="caldav",
configFile="/does/not/exist",
id=10,
interfaces='127.0.0.1',
metaSocket=FakeDispatcher().addSocket()
)
dspm.addProcessObject(slave, {})
dspm.startService()
oneProcessTransport = imps.waitForOneProcess()
self.assertIn("MetaFD=4", oneProcessTransport.args)
self.assertEquals(
oneProcessTransport.args[oneProcessTransport.args.index("MetaFD=4") - 1],
'-o',
"MetaFD argument was not passed as an option"
)
self.assertEquals(oneProcessTransport.childFDs,
{0: 'w', 1: 'r', 2: 'r',
4: 4})
def test_startServiceDelay(self):
"""
Starting a L{DelayedStartupProcessMonitor} should result in the process
objects that have been added to it being started once per
delayInterval.
"""
imps = InMemoryProcessSpawner()
dspm = DelayedStartupProcessMonitor(imps)
dspm.delayInterval = 3.0
sampleCounter = range(0, 5)
for counter in sampleCounter:
slave = TwistdSlaveProcess(
twistd="bleh",
tapname="caldav",
configFile="/does/not/exist",
id=counter * 10,
interfaces='127.0.0.1',
metaSocket=FakeDispatcher().addSocket()
)
dspm.addProcessObject(slave, {"SAMPLE_ENV_COUNTER": str(counter)})
dspm.startService()
# Advance the clock a bunch of times, allowing us to time things with a
# comprehensible resolution.
imps.pump([0] + [dspm.delayInterval / 2.0] * len(sampleCounter) * 3)
expectedValues = [dspm.delayInterval * n for n in sampleCounter]
self.assertEquals([x.startedAt for x in imps.processTransports],
expectedValues)
class FakeFD(object):
def __init__(self, n):
self.fd = n
def fileno(self):
return self.fd
class FakeSubsocket(object):
def __init__(self, fakefd):
self.fakefd = fakefd
def childSocket(self):
return self.fakefd
def start(self):
pass
def restarted(self):
pass
def stop(self):
pass
class FakeDispatcher(object):
n = 3
def addSocket(self):
self.n += 1
return FakeSubsocket(FakeFD(self.n))
class TwistdSlaveProcessTests(StoreTestCase):
"""
Tests for L{TwistdSlaveProcess}.
"""
def test_pidfile(self):
"""
The result of L{TwistdSlaveProcess.getCommandLine} includes an option
setting the name of the pidfile to something including the instance id.
"""
slave = TwistdSlaveProcess("/path/to/twistd", "something", "config", 7, [])
commandLine = slave.getCommandLine()
option = 'PIDFile=something-instance-7.pid'
self.assertIn(option, commandLine)
self.assertEquals(commandLine[commandLine.index(option) - 1], '-o')
class ReExecServiceTests(StoreTestCase):
@inlineCallbacks
def test_reExecService(self):
"""
Verify that sending a HUP to the test reexec.tac causes startService
and stopService to be called again by counting the number of times
START and STOP appear in the process output.
"""
# Inherit the reactor used to run trial
reactorArg = "--reactor=select"
for arg in sys.argv:
if arg.startswith("--reactor"):
reactorArg = arg
break
tacFilePath = os.path.join(os.path.dirname(__file__), "reexec.tac")
twistd = which("twistd")[0]
deferred = Deferred()
proc = reactor.spawnProcess(
CapturingProcessProtocol(deferred, None),
sys.executable,
[sys.executable, '-W', 'ignore', twistd, reactorArg, '-n', '-y', tacFilePath],
env=os.environ
)
reactor.callLater(3, proc.signalProcess, "HUP")
reactor.callLater(6, proc.signalProcess, "TERM")
output = yield deferred
self.assertEquals(output.count("START"), 2)
self.assertEquals(output.count("STOP"), 2)
class SystemIDsTests(StoreTestCase):
"""
Verifies the behavior of calendarserver.tap.caldav.getSystemIDs
"""
def _wrappedFunction(self):
"""
Return a copy of the getSystemIDs function with test implementations
of the ID lookup functions swapped into the namespace.
"""
def _getpwnam(name):
if name == "exists":
Getpwnam = namedtuple("Getpwnam", ("pw_uid"))
return Getpwnam(42)
else:
raise KeyError(name)
def _getgrnam(name):
if name == "exists":
Getgrnam = namedtuple("Getgrnam", ("gr_gid"))
return Getgrnam(43)
else:
raise KeyError(name)
def _getuid():
return 44
def _getgid():
return 45
return type(getSystemIDs)(
getSystemIDs.func_code, #@UndefinedVariable
{
"getpwnam": _getpwnam,
"getgrnam": _getgrnam,
"getuid": _getuid,
"getgid": _getgid,
"KeyError": KeyError,
"ConfigurationError": ConfigurationError,
}
)
def test_getSystemIDs_UserNameNotFound(self):
"""
If userName is passed in but is not found on the system, raise a
ConfigurationError
"""
self.assertRaises(
ConfigurationError, self._wrappedFunction(),
"nonexistent", "exists"
)
def test_getSystemIDs_GroupNameNotFound(self):
"""
If groupName is passed in but is not found on the system, raise a
ConfigurationError
"""
self.assertRaises(
ConfigurationError, self._wrappedFunction(),
"exists", "nonexistent"
)
def test_getSystemIDs_NamesNotSpecified(self):
"""
If names are not provided, use the IDs of the process
"""
self.assertEquals(self._wrappedFunction()("", ""), (44, 45))
def test_getSystemIDs_NamesSpecified(self):
"""
If names are provided, use the IDs corresponding to those names
"""
self.assertEquals(self._wrappedFunction()("exists", "exists"), (42, 43))
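`_wrappedFunction` above rebuilds `getSystemIDs` from its code object with a substitute globals dict (`func_code` is the Python 2 spelling; Python 3 calls it `__code__`). A minimal standalone illustration of the same trick, using hypothetical names:

```python
import types

GREETING = "original"

def greet():
    return GREETING

# Rebuild greet() against a different globals dict; the original function
# keeps its own module globals and is untouched.
patched = types.FunctionType(greet.__code__, {"GREETING": "patched"})

assert greet() == "original"
assert patched() == "patched"
```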
#
# Tests for PreProcessingService
#
class Step(object):
def __init__(self, recordCallback, shouldFail):
self._recordCallback = recordCallback
self._shouldFail = shouldFail
def stepWithResult(self, result):
self._recordCallback(self.successValue, None)
if self._shouldFail:
1 / 0
return succeed(result)
def stepWithFailure(self, failure):
self._recordCallback(self.errorValue, failure)
if self._shouldFail:
return failure
class StepOne(Step):
successValue = "one success"
errorValue = "one failure"
class StepTwo(Step):
successValue = "two success"
errorValue = "two failure"
class StepThree(Step):
successValue = "three success"
errorValue = "three failure"
class StepFour(Step):
successValue = "four success"
errorValue = "four failure"
class PreProcessingServiceTestCase(TestCase):
def fakeServiceCreator(self, cp, store, lo, storageService):
self.history.append(("serviceCreator", store, storageService))
def setUp(self):
self.history = []
self.clock = Clock()
self.pps = PreProcessingService(
self.fakeServiceCreator, None, "store",
None, "storageService", reactor=self.clock
)
def _record(self, value, failure):
self.history.append(value)
def test_allSuccess(self):
self.pps.addStep(
StepOne(self._record, False)
).addStep(
StepTwo(self._record, False)
).addStep(
StepThree(self._record, False)
).addStep(
StepFour(self._record, False)
)
self.pps.startService()
self.assertEquals(
self.history,
[
'one success', 'two success', 'three success', 'four success',
('serviceCreator', 'store', 'storageService')
]
)
def test_allFailure(self):
self.pps.addStep(
StepOne(self._record, True)
).addStep(
StepTwo(self._record, True)
).addStep(
StepThree(self._record, True)
).addStep(
StepFour(self._record, True)
)
self.pps.startService()
self.assertEquals(
self.history,
[
'one success', 'two failure', 'three failure', 'four failure',
('serviceCreator', None, 'storageService')
]
)
def test_partialFailure(self):
self.pps.addStep(
StepOne(self._record, True)
).addStep(
StepTwo(self._record, False)
).addStep(
StepThree(self._record, True)
).addStep(
StepFour(self._record, False)
)
self.pps.startService()
self.assertEquals(
self.history,
[
'one success', 'two failure', 'three success', 'four failure',
('serviceCreator', 'store', 'storageService')
]
)
class StubStorageService(object):
def __init__(self):
self.hardStopCalled = False
def hardStop(self):
self.hardStopCalled = True
class StubReactor(object):
def __init__(self):
self.stopCalled = False
def stop(self):
self.stopCalled = True
class DataStoreMonitorTestCase(TestCase):
def test_monitor(self):
storageService = StubStorageService()
stubReactor = StubReactor()
monitor = DataStoreMonitor(stubReactor, storageService)
monitor.disconnected()
self.assertTrue(storageService.hardStopCalled)
self.assertTrue(stubReactor.stopCalled)
storageService.hardStopCalled = False
stubReactor.stopCalled = False
monitor.deleted()
self.assertTrue(storageService.hardStopCalled)
self.assertTrue(stubReactor.stopCalled)
storageService.hardStopCalled = False
stubReactor.stopCalled = False
monitor.renamed()
self.assertTrue(storageService.hardStopCalled)
self.assertTrue(stubReactor.stopCalled)
# calendarserver/tap/test/test_util.py
##
# Copyright (c) 2007-2015 Apple Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
##
from calendarserver.tap.util import (
MemoryLimitService, Stepper, verifyTLSCertificate
)
from twistedcaldav.util import computeProcessCount
from twistedcaldav.test.util import TestCase
from twisted.internet.task import Clock
from twisted.internet.defer import succeed, inlineCallbacks
from twisted.python.filepath import FilePath
from twistedcaldav.config import ConfigDict
class ProcessCountTestCase(TestCase):
def test_count(self):
data = (
# minimum, perCPU, perGB, cpu, memory (in GB), expected:
(0, 1, 1, 0, 0, 0),
(1, 2, 2, 0, 0, 1),
(1, 2, 1, 1, 1, 1),
(1, 2, 2, 1, 1, 2),
(1, 2, 2, 2, 1, 2),
(1, 2, 2, 1, 2, 2),
(1, 2, 2, 2, 2, 4),
(4, 1, 2, 8, 2, 4),
(6, 2, 2, 2, 2, 6),
(1, 2, 1, 4, 99, 8),
(2, 1, 2, 2, 2, 2), # 2 cores, 2GB = 2
(2, 1, 2, 2, 4, 2), # 2 cores, 4GB = 2
(2, 1, 2, 8, 6, 8), # 8 cores, 6GB = 8
(2, 1, 2, 8, 16, 8), # 8 cores, 16GB = 8
)
for min, perCPU, perGB, cpu, mem, expected in data:
mem *= (1024 * 1024 * 1024)
self.assertEquals(
expected,
computeProcessCount(min, perCPU, perGB, cpuCount=cpu, memSize=mem)
)
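The table above is consistent with a simple sizing rule: the result is bounded below by `minimum` and otherwise limited by whichever of the CPU-derived and memory-derived caps is smaller. This is an inferred model of `computeProcessCount` for illustration, not its actual source:

```python
def process_count(minimum, per_cpu, per_gb, cpu_count, mem_size):
    # mem_size is in bytes; the per-GB cap scales with gigabytes of RAM.
    mem_gb = mem_size / (1024 * 1024 * 1024)
    return max(minimum, min(per_cpu * cpu_count, int(per_gb * mem_gb)))

GB = 1024 * 1024 * 1024
assert process_count(6, 2, 2, 2, 2 * GB) == 6    # minimum wins
assert process_count(2, 1, 2, 8, 6 * GB) == 8    # CPU-limited
assert process_count(1, 2, 1, 4, 99 * GB) == 8   # CPU-limited again
```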
# Stub classes for MemoryLimitServiceTestCase
class StubProtocol(object):
def __init__(self, transport):
self.transport = transport
class StubProcess(object):
def __init__(self, pid):
self.pid = pid
class StubProcessMonitor(object):
def __init__(self, processes, protocols):
self.processes = processes
self.protocols = protocols
self.history = []
def stopProcess(self, name):
self.history.append(name)
class MemoryLimitServiceTestCase(TestCase):
def test_checkMemory(self):
"""
Set up stub objects to verify MemoryLimitService.checkMemory()
only stops the processes whose memory usage exceeds the configured
limit, and skips memcached
"""
data = {
# PID : (name, resident memory-in-bytes, virtual memory-in-bytes)
101: ("process #1", 10, 1010),
102: ("process #2", 30, 1030),
103: ("process #3", 50, 1050),
99: ("memcached-Default", 10, 1010),
}
processes = []
protocols = {}
for pid, (name, _ignore_resident, _ignore_virtual) in data.iteritems():
protocols[name] = StubProtocol(StubProcess(pid))
processes.append(name)
processMonitor = StubProcessMonitor(processes, protocols)
clock = Clock()
service = MemoryLimitService(processMonitor, 10, 15, True, reactor=clock)
# For testing, use a stub implementation of memory-usage lookup
def testMemoryForPID(pid, residentOnly):
return data[pid][1 if residentOnly else 2]
service._memoryForPID = testMemoryForPID
# After 5 seconds, nothing should have happened, since the interval is 10 seconds
service.startService()
clock.advance(5)
self.assertEquals(processMonitor.history, [])
# After 7 more seconds, processes 2 and 3 should have been stopped since their
# memory usage exceeds 10 bytes
clock.advance(7)
self.assertEquals(processMonitor.history, ['process #2', 'process #3'])
# Now switch to looking at virtual memory, in which case all 3 processes
# should be stopped
service._residentOnly = False
processMonitor.history = []
clock.advance(10)
self.assertEquals(processMonitor.history, ['process #1', 'process #2', 'process #3'])
#
# Tests for Stepper
#
class Step(object):
def __init__(self, recordCallback, shouldFail):
self._recordCallback = recordCallback
self._shouldFail = shouldFail
def stepWithResult(self, result):
self._recordCallback(self.successValue, None)
if self._shouldFail:
1 / 0
return succeed(result)
def stepWithFailure(self, failure):
self._recordCallback(self.errorValue, failure)
if self._shouldFail:
return failure
class StepOne(Step):
successValue = "one success"
errorValue = "one failure"
class StepTwo(Step):
successValue = "two success"
errorValue = "two failure"
class StepThree(Step):
successValue = "three success"
errorValue = "three failure"
class StepFour(Step):
successValue = "four success"
errorValue = "four failure"
class StepperTestCase(TestCase):
def setUp(self):
self.history = []
self.stepper = Stepper()
def _record(self, value, failure):
self.history.append(value)
@inlineCallbacks
def test_allSuccess(self):
self.stepper.addStep(
StepOne(self._record, False)
).addStep(
StepTwo(self._record, False)
).addStep(
StepThree(self._record, False)
).addStep(
StepFour(self._record, False)
)
result = (yield self.stepper.start("abc"))
self.assertEquals(result, "abc") # original result passed through
self.assertEquals(
self.history,
['one success', 'two success', 'three success', 'four success'])
def test_allFailure(self):
self.stepper.addStep(StepOne(self._record, True))
self.stepper.addStep(StepTwo(self._record, True))
self.stepper.addStep(StepThree(self._record, True))
self.stepper.addStep(StepFour(self._record, True))
d = self.failUnlessFailure(self.stepper.start(), ZeroDivisionError)
self.assertEquals(
self.history,
['one success', 'two failure', 'three failure', 'four failure'])
return d
@inlineCallbacks
def test_partialFailure(self):
self.stepper.addStep(StepOne(self._record, True))
self.stepper.addStep(StepTwo(self._record, False))
self.stepper.addStep(StepThree(self._record, True))
self.stepper.addStep(StepFour(self._record, False))
result = (yield self.stepper.start("abc"))
self.assertEquals(result, None) # original result is gone
self.assertEquals(
self.history,
['one success', 'two failure', 'three success', 'four failure'])
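The three tests above encode Stepper's routing contract: a successful step feeds its result to the next step's stepWithResult, a raising step diverts the chain to stepWithFailure, and a stepWithFailure that returns normally consumes the failure (losing the original result, as test_partialFailure asserts). A hypothetical synchronous stand-in capturing that contract (the real Stepper in calendarserver.tap.util is Deferred-based):

```python
class MiniStepper(object):
    """Synchronous stand-in for Stepper's success/failure routing."""

    def __init__(self):
        self.steps = []

    def addStep(self, step):
        self.steps.append(step)
        return self  # chainable, like Stepper.addStep

    def start(self, result=None):
        failure = None
        for step in self.steps:
            try:
                if failure is None:
                    result = step.stepWithResult(result)
                else:
                    # A stepWithFailure that returns without raising
                    # consumes the failure; the original result is gone.
                    step.stepWithFailure(failure)
                    failure, result = None, None
            except Exception as exc:
                failure = exc
        if failure is not None:
            raise failure
        return result
```

With all steps succeeding, the initial value passes through unchanged; once any step fails and a later step handles it, the chain continues with None.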
class PreFlightChecksTestCase(TestCase):
"""
Verify that missing, empty, or bogus TLS Certificates are detected
"""
def test_missingCertificate(self):
success, _ignore_reason = verifyTLSCertificate(
ConfigDict(
{
"SSLCertificate": "missing",
}
)
)
self.assertFalse(success)
def test_emptyCertificate(self):
certFilePath = FilePath(self.mktemp())
certFilePath.setContent("")
success, _ignore_reason = verifyTLSCertificate(
ConfigDict(
{
"SSLCertificate": certFilePath.path,
}
)
)
self.assertFalse(success)
def test_bogusCertificate(self):
certFilePath = FilePath(self.mktemp())
certFilePath.setContent("bogus")
keyFilePath = FilePath(self.mktemp())
keyFilePath.setContent("bogus")
success, _ignore_reason = verifyTLSCertificate(
ConfigDict(
{
"SSLCertificate": certFilePath.path,
"SSLPrivateKey": keyFilePath.path,
"SSLAuthorityChain": "",
"SSLMethod": "SSLv3_METHOD",
"SSLCiphers": "ALL:!aNULL:!ADH:!eNULL:!LOW:!EXP:RC4+RSA:+HIGH:+MEDIUM",
}
)
)
self.assertFalse(success)
calendarserver-7.0+dfsg/calendarserver/tap/util.py
# -*- test-case-name: calendarserver.tap.test.test_caldav -*-
##
# Copyright (c) 2005-2015 Apple Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
##
"""
Utilities for assembling the service and resource hierarchy.
"""
__all__ = [
"FakeRequest",
"getDBPool",
"getRootResource",
"getSSLPassphrase",
"MemoryLimitService",
"postAlert",
"preFlightChecks",
]
from calendarserver.accesslog import DirectoryLogWrapperResource
from calendarserver.provision.root import RootResource
from calendarserver.push.applepush import APNSubscriptionResource
from calendarserver.push.notifier import NotifierFactory
from calendarserver.push.util import getAPNTopicFromCertificate
from calendarserver.tools import diagnose
from calendarserver.tools.util import checkDirectory
from calendarserver.webadmin.landing import WebAdminLandingResource
from calendarserver.webcal.resource import WebCalendarResource
from socket import fromfd, AF_UNIX, SOCK_STREAM, socketpair
from subprocess import Popen, PIPE
from twext.enterprise.adbapi2 import ConnectionPool, ConnectionPoolConnection
from twext.enterprise.adbapi2 import ConnectionPoolClient
from twext.enterprise.ienterprise import ORACLE_DIALECT
from twext.enterprise.ienterprise import POSTGRES_DIALECT
from twext.internet.ssl import ChainingOpenSSLContextFactory
from twext.python.filepath import CachingFilePath
from twext.python.filepath import CachingFilePath as FilePath
from twext.python.log import Logger
from twext.who.checker import HTTPDigestCredentialChecker
from twext.who.checker import UsernamePasswordCredentialChecker
from twisted.application.service import Service
from twisted.cred.error import UnauthorizedLogin
from twisted.cred.portal import Portal
from twisted.internet import reactor as _reactor
from twisted.internet.defer import inlineCallbacks, returnValue, Deferred, succeed
from twisted.internet.reactor import addSystemEventTrigger
from twisted.internet.tcp import Connection
from twisted.python.usage import UsageError
from twistedcaldav.bind import doBind
from twistedcaldav.cache import CacheStoreNotifierFactory
from twistedcaldav.config import ConfigurationError
from twistedcaldav.controlapi import ControlAPIResource
from twistedcaldav.directory.addressbook import DirectoryAddressBookHomeProvisioningResource
from twistedcaldav.directory.calendar import DirectoryCalendarHomeProvisioningResource
from twistedcaldav.directory.digest import QopDigestCredentialFactory
from twistedcaldav.directory.principal import DirectoryPrincipalProvisioningResource
from twistedcaldav.directorybackedaddressbook import DirectoryBackedAddressBookResource
from twistedcaldav.resource import AuthenticationWrapper
from twistedcaldav.simpleresource import SimpleResource, SimpleRedirectResource, \
SimpleUnavailableResource
from twistedcaldav.stdconfig import config
from twistedcaldav.timezones import TimezoneCache
from twistedcaldav.timezoneservice import TimezoneServiceResource
from twistedcaldav.timezonestdservice import TimezoneStdServiceResource
from twistedcaldav.util import getPasswordFromKeychain
from twistedcaldav.util import KeychainAccessError, KeychainPasswordNotFound
from txdav.base.datastore.dbapiclient import DBAPIConnector
from txdav.base.datastore.subpostgres import PostgresService
from txdav.caldav.datastore.scheduling.ischedule.dkim import DKIMUtils, DomainKeyResource
from txdav.caldav.datastore.scheduling.ischedule.localservers import buildServersDB
from txdav.caldav.datastore.scheduling.ischedule.resource import IScheduleInboxResource
from txdav.common.datastore.file import CommonDataStore as CommonFileDataStore
from txdav.common.datastore.podding.resource import ConduitResource
from txdav.common.datastore.sql import CommonDataStore as CommonSQLDataStore
from txdav.common.datastore.sql import current_sql_schema
from txdav.common.datastore.upgrade.sql.upgrade import NotAllowedToUpgrade
from txdav.dps.client import DirectoryService as DirectoryProxyClientService
from txdav.who.cache import CachingDirectoryService
from txdav.who.util import directoryFromConfig
from txweb2.auth.basic import BasicCredentialFactory
from txweb2.auth.tls import TLSCredentialsFactory, TLSCredentials
from txweb2.dav import auth
from txweb2.dav.auth import IPrincipalCredentials
from txweb2.dav.util import joinURL
from txweb2.http_headers import Headers
from txweb2.resource import Resource
from txweb2.static import File as FileResource
from urllib import quote
import OpenSSL
import errno
import os
import psutil
import sys
try:
from twistedcaldav.authkerb import NegotiateCredentialFactory
NegotiateCredentialFactory # pacify pyflakes
except ImportError:
NegotiateCredentialFactory = None
log = Logger()
def pgServiceFromConfig(config, subServiceFactory, uid=None, gid=None):
"""
Construct a L{PostgresService} from a given configuration and subservice.
@param config: the configuration to derive postgres configuration
parameters from.
@param subServiceFactory: A factory for the service to start once the
L{PostgresService} has been initialized.
@param uid: The user-ID to run the PostgreSQL server as.
@param gid: The group-ID to run the PostgreSQL server as.
@return: a service which can start postgres.
@rtype: L{PostgresService}
"""
dbRoot = CachingFilePath(config.DatabaseRoot)
# Construct a PostgresService exactly as the parent would, so that we
# can establish connection information.
return PostgresService(
dbRoot, subServiceFactory, current_sql_schema,
databaseName=config.Postgres.DatabaseName,
clusterName=config.Postgres.ClusterName,
logFile=config.Postgres.LogFile,
logDirectory=config.LogRoot if config.Postgres.LogRotation else "",
socketDir=config.Postgres.SocketDirectory,
socketName=config.Postgres.SocketName,
listenAddresses=config.Postgres.ListenAddresses,
sharedBuffers=config.Postgres.SharedBuffers,
maxConnections=config.Postgres.MaxConnections,
options=config.Postgres.Options,
uid=uid, gid=gid,
spawnedDBUser=config.SpawnedDBUser,
pgCtl=config.Postgres.Ctl,
initDB=config.Postgres.Init,
)
class ConnectionWithPeer(Connection):
connected = True
def getPeer(self):
return "<peer: %s %s>" % (self.socket.fileno(), id(self))
def getHost(self):
return "<host: %s %s>" % (self.socket.fileno(), id(self))
def transactionFactoryFromFD(dbampfd, dialect, paramstyle):
"""
Create a transaction factory from an inherited file descriptor, such as one
created by L{ConnectionDispenser}.
"""
skt = fromfd(dbampfd, AF_UNIX, SOCK_STREAM)
os.close(dbampfd)
protocol = ConnectionPoolClient(dialect=dialect, paramstyle=paramstyle)
transport = ConnectionWithPeer(skt, protocol)
protocol.makeConnection(transport)
transport.startReading()
return protocol.newTransaction
class ConnectionDispenser(object):
"""
A L{ConnectionDispenser} can dispense already-connected file descriptors,
for use with subprocess spawning.
"""
# Very long term FIXME: this mechanism should ideally be eliminated, by
# making all subprocesses have a single stdio AMP connection that
# multiplexes between multiple protocols.
def __init__(self, connectionPool):
self.pool = connectionPool
def dispense(self):
"""
Dispense a socket object, already connected to a server, for a client
in a subprocess.
"""
# FIXME: these sockets need to be re-dispensed when the process is
# respawned, and they currently won't be.
c, s = socketpair(AF_UNIX, SOCK_STREAM)
protocol = ConnectionPoolConnection(self.pool)
transport = ConnectionWithPeer(s, protocol)
protocol.makeConnection(transport)
transport.startReading()
return c
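ConnectionDispenser.dispense hands one end of an AF_UNIX socketpair to a subprocess as an inherited file descriptor, and transactionFactoryFromFD above rebuilds a socket object from that raw descriptor with socket.fromfd (which duplicates it, hence the os.close of the original). Stripped of the AMP protocol layer, the mechanics are plain stdlib; this sketch simulates both ends in one process:

```python
import socket

# Parent creates a connected pair; one end would be inherited by the child.
parent_end, child_end = socket.socketpair(socket.AF_UNIX, socket.SOCK_STREAM)

# Simulate the child: it only knows the integer fd, not the socket object.
inherited_fd = child_end.fileno()
rebuilt = socket.fromfd(inherited_fd, socket.AF_UNIX, socket.SOCK_STREAM)
child_end.close()  # fromfd duplicated the fd, so the original can be closed

parent_end.sendall(b"BEGIN TRANSACTION")
data = rebuilt.recv(32)
print(data)  # b'BEGIN TRANSACTION'

rebuilt.close()
parent_end.close()
```

The duplicate-then-close dance mirrors transactionFactoryFromFD exactly; the FIXME above notes the real limitation, which is that these descriptors are not re-dispensed when a subprocess respawns.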
def storeFromConfigWithoutDPS(config, txnFactory):
store = storeFromConfig(config, txnFactory, None)
directory = directoryFromConfig(config, store)
# if config.DirectoryProxy.InProcessCachingSeconds:
# directory = CachingDirectoryService(
# directory,
# expireSeconds=config.DirectoryProxy.InProcessCachingSeconds
# )
store.setDirectoryService(directory)
return store
def storeFromConfigWithDPSClient(config, txnFactory):
store = storeFromConfig(config, txnFactory, None)
directory = DirectoryProxyClientService(config.DirectoryRealmName)
if config.Servers.Enabled:
directory.setServersDB(buildServersDB(config.Servers.MaxClients))
if config.DirectoryProxy.InProcessCachingSeconds:
directory = CachingDirectoryService(
directory,
expireSeconds=config.DirectoryProxy.InProcessCachingSeconds
)
store.setDirectoryService(directory)
return store
def storeFromConfigWithDPSServer(config, txnFactory):
store = storeFromConfig(config, txnFactory, None)
directory = directoryFromConfig(config, store)
# if config.DirectoryProxy.InSidecarCachingSeconds:
# directory = CachingDirectoryService(
# directory,
# expireSeconds=config.DirectoryProxy.InSidecarCachingSeconds
# )
store.setDirectoryService(directory)
return store
def storeFromConfig(config, txnFactory, directoryService):
"""
Produce an L{IDataStore} from the given configuration, transaction factory,
and notifier factory.
If the transaction factory is C{None}, a filesystem store is created;
otherwise, a SQL store using that connection information.
"""
#
# Configure NotifierFactory
#
notifierFactories = {}
if config.Notifications.Enabled:
notifierFactories["push"] = NotifierFactory(config.ServerHostName, config.Notifications.CoalesceSeconds)
if config.EnableResponseCache and config.Memcached.Pools.Default.ClientEnabled:
notifierFactories["cache"] = CacheStoreNotifierFactory()
quota = config.UserQuota
if quota == 0:
quota = None
if txnFactory is not None:
if config.EnableSSL:
uri = "https://{config.ServerHostName}:{config.SSLPort}".format(config=config)
else:
uri = "http://{config.ServerHostName}:{config.HTTPPort}".format(config=config)
attachments_uri = uri + "/calendars/__uids__/%(home)s/dropbox/%(dropbox_id)s/%(name)s"
store = CommonSQLDataStore(
txnFactory, notifierFactories,
directoryService,
FilePath(config.AttachmentsRoot), attachments_uri,
config.EnableCalDAV, config.EnableCardDAV,
config.EnableManagedAttachments,
quota=quota,
logLabels=config.LogDatabase.LabelsInSQL,
logStats=config.LogDatabase.Statistics,
logStatsLogFile=config.LogDatabase.StatisticsLogFile,
logSQL=config.LogDatabase.SQLStatements,
logTransactionWaits=config.LogDatabase.TransactionWaitSeconds,
timeoutTransactions=config.TransactionTimeoutSeconds,
cacheQueries=config.QueryCaching.Enabled,
cachePool=config.QueryCaching.MemcachedPool,
cacheExpireSeconds=config.QueryCaching.ExpireSeconds
)
else:
store = CommonFileDataStore(
FilePath(config.DocumentRoot),
notifierFactories, directoryService,
config.EnableCalDAV, config.EnableCardDAV,
quota=quota
)
# FIXME: NotifierFactories need a reference to the store in order
# to get a txn in order to possibly create a Work item
for notifierFactory in notifierFactories.values():
notifierFactory.store = store
return store
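The base URI in storeFromConfig above is built with str.format's attribute-access syntax: the config object is passed once as a keyword and the template pulls named attributes off it. A minimal demonstration with a hypothetical stand-in object (the real values come from twistedcaldav.stdconfig):

```python
class Cfg(object):
    # Hypothetical stand-in values for illustration only.
    ServerHostName = "calendar.example.com"
    SSLPort = 8443

uri = "https://{config.ServerHostName}:{config.SSLPort}".format(config=Cfg())
print(uri)  # https://calendar.example.com:8443
```

This keeps the template readable with many fields from one object, at the cost of tying the format string to the config object's attribute names.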
# MOVE2WHO -- should we move this class somewhere else?
class PrincipalCredentialChecker(object):
credentialInterfaces = (IPrincipalCredentials,)
@inlineCallbacks
def requestAvatarId(self, credentials):
credentials = IPrincipalCredentials(credentials)
if credentials.authnPrincipal is None:
raise UnauthorizedLogin(
"No such user: {user}".format(
user=credentials.credentials.username
)
)
# See if record is enabledForLogin
if not credentials.authnPrincipal.record.isLoginEnabled():
raise UnauthorizedLogin(
"User not allowed to log in: {user}".format(
user=credentials.credentials.username
)
)
# Handle Kerberos as a separate behavior
try:
from twistedcaldav.authkerb import NegotiateCredentials
except ImportError:
NegotiateCredentials = None
if NegotiateCredentials and isinstance(credentials.credentials, NegotiateCredentials):
# If we get here with Kerberos, then authentication has already succeeded
returnValue(
(
credentials.authnPrincipal,
credentials.authzPrincipal,
)
)
# Handle TLS Client Certificate
elif isinstance(credentials.credentials, TLSCredentials):
# If we get here with TLS, then authentication (certificate verification) has already succeeded
returnValue(
(
credentials.authnPrincipal,
credentials.authzPrincipal,
)
)
else:
if (yield credentials.authnPrincipal.record.verifyCredentials(credentials.credentials)):
returnValue(
(
credentials.authnPrincipal,
credentials.authzPrincipal,
)
)
else:
raise UnauthorizedLogin(
"Incorrect credentials for user: {user}".format(
user=credentials.credentials.username
)
)
def getRootResource(config, newStore, resources=None):
"""
Set up directory service and resource hierarchy based on config.
Return root resource.
Additional resources can be added to the hierarchy by passing a list of
tuples containing: path, resource class, __init__ args list, and optional
authentication schemes list ("basic", "digest").
If the store is specified, then it has already been constructed, so use it.
Otherwise build one with L{storeFromConfig}.
"""
if newStore is None:
raise RuntimeError("Internal error, 'newStore' must be specified.")
if resources is None:
resources = []
# FIXME: this is only here to workaround circular imports
doBind()
#
# Default resource classes
#
rootResourceClass = RootResource
calendarResourceClass = DirectoryCalendarHomeProvisioningResource
iScheduleResourceClass = IScheduleInboxResource
conduitResourceClass = ConduitResource
timezoneServiceResourceClass = TimezoneServiceResource
timezoneStdServiceResourceClass = TimezoneStdServiceResource
webCalendarResourceClass = WebCalendarResource
webAdminResourceClass = WebAdminLandingResource
addressBookResourceClass = DirectoryAddressBookHomeProvisioningResource
directoryBackedAddressBookResourceClass = DirectoryBackedAddressBookResource
apnSubscriptionResourceClass = APNSubscriptionResource
principalResourceClass = DirectoryPrincipalProvisioningResource
controlResourceClass = ControlAPIResource
directory = newStore.directoryService()
principalCollection = principalResourceClass("/principals/", directory)
#
# Configure the Site and Wrappers
#
wireEncryptedCredentialFactories = []
wireUnencryptedCredentialFactories = []
portal = Portal(auth.DavRealm())
portal.registerChecker(UsernamePasswordCredentialChecker(directory))
portal.registerChecker(HTTPDigestCredentialChecker(directory))
portal.registerChecker(PrincipalCredentialChecker())
realm = directory.realmName.encode("utf-8") or ""
log.info("Configuring authentication for realm: {realm}", realm=realm)
for scheme, schemeConfig in config.Authentication.iteritems():
scheme = scheme.lower()
credFactory = None
if schemeConfig["Enabled"]:
log.info("Setting up scheme: {scheme}", scheme=scheme)
if scheme == "kerberos":
if not NegotiateCredentialFactory:
log.info("Kerberos support not available")
continue
try:
principal = schemeConfig["ServicePrincipal"]
if not principal:
credFactory = NegotiateCredentialFactory(
serviceType="HTTP",
hostname=config.ServerHostName,
)
else:
credFactory = NegotiateCredentialFactory(
principal=principal,
)
except ValueError:
log.info("Could not start Kerberos")
continue
elif scheme == "digest":
credFactory = QopDigestCredentialFactory(
schemeConfig["Algorithm"],
schemeConfig["Qop"],
realm,
)
elif scheme == "basic":
credFactory = BasicCredentialFactory(realm)
elif scheme == TLSCredentialsFactory.scheme:
credFactory = TLSCredentialsFactory(realm)
elif scheme == "wiki":
pass
else:
log.error("Unknown scheme: {scheme}", scheme=scheme)
if credFactory:
wireEncryptedCredentialFactories.append(credFactory)
if schemeConfig.get("AllowedOverWireUnencrypted", False):
wireUnencryptedCredentialFactories.append(credFactory)
#
# Setup Resource hierarchy
#
log.info("Setting up document root at: {root}", root=config.DocumentRoot)
if config.EnableCalDAV:
log.info("Setting up calendar collection: {cls}", cls=calendarResourceClass)
calendarCollection = calendarResourceClass(
directory,
"/calendars/",
newStore,
)
principalCollection.calendarCollection = calendarCollection
if config.EnableCardDAV:
log.info("Setting up address book collection: {cls}", cls=addressBookResourceClass)
addressBookCollection = addressBookResourceClass(
directory,
"/addressbooks/",
newStore,
)
principalCollection.addressBookCollection = addressBookCollection
if config.DirectoryAddressBook.Enabled and config.EnableSearchAddressBook:
log.info(
"Setting up directory address book: {cls}",
cls=directoryBackedAddressBookResourceClass)
directoryBackedAddressBookCollection = directoryBackedAddressBookResourceClass(
principalCollections=(principalCollection,),
principalDirectory=directory,
uri=joinURL("/", config.DirectoryAddressBook.name, "/")
)
if _reactor._started:
directoryBackedAddressBookCollection.provisionDirectory()
else:
addSystemEventTrigger("after", "startup", directoryBackedAddressBookCollection.provisionDirectory)
else:
# remove /directory from previous runs that may have created it
directoryPath = os.path.join(config.DocumentRoot, config.DirectoryAddressBook.name)
try:
FilePath(directoryPath).remove()
log.info("Deleted: {path}", path=directoryPath)
except (OSError, IOError) as e:
if e.errno != errno.ENOENT:
log.error("Could not delete: {path} : {error}", path=directoryPath, error=e)
if config.MigrationOnly:
unavailable = SimpleUnavailableResource((principalCollection,))
else:
unavailable = None
log.info("Setting up root resource: {cls}", cls=rootResourceClass)
root = rootResourceClass(
config.DocumentRoot,
principalCollections=(principalCollection,),
)
root.putChild("principals", principalCollection if unavailable is None else unavailable)
if config.EnableCalDAV:
root.putChild("calendars", calendarCollection if unavailable is None else unavailable)
if config.EnableCardDAV:
root.putChild('addressbooks', addressBookCollection if unavailable is None else unavailable)
if config.DirectoryAddressBook.Enabled and config.EnableSearchAddressBook:
root.putChild(config.DirectoryAddressBook.name, directoryBackedAddressBookCollection if unavailable is None else unavailable)
# /.well-known
if config.EnableWellKnown:
log.info("Setting up .well-known collection resource")
wellKnownResource = SimpleResource(
principalCollections=(principalCollection,),
isdir=True,
defaultACL=SimpleResource.allReadACL
)
root.putChild(".well-known", wellKnownResource)
for enabled, wellknown_name, redirected_to in (
(config.EnableCalDAV, "caldav", "/principals/",),
(config.EnableCardDAV, "carddav", "/principals/",),
(config.TimezoneService.Enabled, "timezone", config.TimezoneService.URI,),
(config.Scheduling.iSchedule.Enabled, "ischedule", "/ischedule"),
):
if enabled:
if config.EnableSSL:
scheme = "https"
port = config.SSLPort
else:
scheme = "http"
port = config.HTTPPort
wellKnownResource.putChild(
wellknown_name,
SimpleRedirectResource(
principalCollections=(principalCollection,),
isdir=False,
defaultACL=SimpleResource.allReadACL,
scheme=scheme, port=port, path=redirected_to)
)
for alias in config.Aliases:
url = alias.get("url", None)
path = alias.get("path", None)
if not url or not path or url[0] != "/":
log.error("Invalid alias: URL: {url} Path: {path}", url=url, path=path)
continue
urlbits = url[1:].split("/")
parent = root
for urlpiece in urlbits[:-1]:
child = parent.getChild(urlpiece)
if child is None:
child = Resource()
parent.putChild(urlpiece, child)
parent = child
if parent.getChild(urlbits[-1]) is not None:
log.error("Invalid alias: URL: {url} Path: {path} already exists", url=url, path=path)
continue
resource = FileResource(path)
parent.putChild(urlbits[-1], resource)
log.info("Added alias {url} -> {path}", url=url, path=path)
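The alias loop above walks each URL one segment at a time, creating placeholder Resource nodes for intermediate path components that don't exist yet, and refuses to overwrite an existing leaf. Isolated from txweb2, the traversal pattern is nested-dict insertion; this is a hypothetical sketch, not the resource classes used above:

```python
def addAlias(tree, url, target):
    """Insert target into a nested dict keyed by URL segments.

    Returns False (and changes nothing) if the leaf already exists,
    mirroring the "already exists" check in the alias loop.
    """
    segments = url.strip("/").split("/")
    node = tree
    for segment in segments[:-1]:
        node = node.setdefault(segment, {})  # placeholder, like Resource()
    if segments[-1] in node:
        return False
    node[segments[-1]] = target
    return True

tree = {}
addAlias(tree, "/status/health", "health.html")
print(tree)  # {'status': {'health': 'health.html'}}
```

setdefault plays the role of the getChild/putChild pair: reuse an existing intermediate node if present, otherwise create and attach an empty one.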
# Need timezone cache before setting up any timezone service
log.info("Setting up Timezone Cache")
TimezoneCache.create()
# Timezone service is optional
if config.EnableTimezoneService:
log.info(
"Setting up time zone service resource: {cls}",
cls=timezoneServiceResourceClass)
timezoneService = timezoneServiceResourceClass(
root,
)
root.putChild("timezones", timezoneService)
# Standard Timezone service is optional
if config.TimezoneService.Enabled:
log.info(
"Setting up standard time zone service resource: {cls}",
cls=timezoneStdServiceResourceClass)
timezoneStdService = timezoneStdServiceResourceClass(
root,
)
root.putChild("stdtimezones", timezoneStdService)
# TODO: we only want the master to do this
if _reactor._started:
_reactor.callLater(0, timezoneStdService.onStartup)
else:
addSystemEventTrigger("after", "startup", timezoneStdService.onStartup)
#
# iSchedule/cross-pod service for podding
#
if config.Servers.Enabled:
log.info("Setting up iSchedule podding inbox resource: {cls}", cls=iScheduleResourceClass)
ischedule = iScheduleResourceClass(
root,
newStore,
podding=True
)
root.putChild(config.Servers.InboxName, ischedule if unavailable is None else unavailable)
log.info("Setting up podding conduit resource: {cls}", cls=conduitResourceClass)
conduit = conduitResourceClass(
root,
newStore,
)
root.putChild(config.Servers.ConduitName, conduit)
#
# iSchedule service (not used for podding)
#
if config.Scheduling.iSchedule.Enabled:
log.info("Setting up iSchedule inbox resource: {cls}", cls=iScheduleResourceClass)
ischedule = iScheduleResourceClass(
root,
newStore,
)
root.putChild("ischedule", ischedule if unavailable is None else unavailable)
# Do DomainKey resources
DKIMUtils.validConfiguration(config)
if config.Scheduling.iSchedule.DKIM.Enabled:
log.info("Setting up domainkey resource: {res}", res=DomainKeyResource)
domain = config.Scheduling.iSchedule.DKIM.Domain or config.ServerHostName
dk = DomainKeyResource(
domain,
config.Scheduling.iSchedule.DKIM.KeySelector,
config.Scheduling.iSchedule.DKIM.PublicKeyFile,
)
wellKnownResource.putChild("domainkey", dk)
#
# WebCal
#
if config.WebCalendarRoot:
log.info(
"Setting up WebCalendar resource: {res}",
res=config.WebCalendarRoot)
webCalendar = webCalendarResourceClass(
config.WebCalendarRoot,
principalCollections=(principalCollection,),
)
root.putChild("webcal", webCalendar if unavailable is None else unavailable)
#
# WebAdmin
#
if config.EnableWebAdmin:
log.info("Setting up WebAdmin resource")
webAdmin = webAdminResourceClass(
config.WebCalendarRoot,
root,
directory,
newStore,
principalCollections=(principalCollection,),
)
root.putChild("admin", webAdmin)
#
# Control API
#
if config.EnableControlAPI:
log.info("Setting up Control API resource")
controlAPI = controlResourceClass(
root,
directory,
newStore,
principalCollections=(principalCollection,),
)
root.putChild("control", controlAPI)
#
# Apple Push Notification Subscriptions
#
apnConfig = config.Notifications.Services.APNS
if apnConfig.Enabled:
log.info(
"Setting up APNS resource at /{url}",
url=apnConfig["SubscriptionURL"])
apnResource = apnSubscriptionResourceClass(root, newStore)
root.putChild(apnConfig["SubscriptionURL"], apnResource)
#
# Configure ancillary data
#
# MOVE2WHO
log.info("Configuring authentication wrapper")
overrides = {}
if resources:
for path, cls, args, schemes in resources:
# putChild doesn't want "/" starting the path
root.putChild(path, cls(root, newStore, *args))
# overrides requires "/" prepended
path = "/" + path
overrides[path] = []
for scheme in schemes:
if scheme == "basic":
overrides[path].append(BasicCredentialFactory(realm))
elif scheme == "digest":
schemeConfig = config.Authentication.Digest
overrides[path].append(QopDigestCredentialFactory(
schemeConfig["Algorithm"],
schemeConfig["Qop"],
realm,
))
log.info(
"Overriding {path} with {cls} ({schemes})",
path=path, cls=cls, schemes=schemes)
authWrapper = AuthenticationWrapper(
root,
portal,
wireEncryptedCredentialFactories,
wireUnencryptedCredentialFactories,
(auth.IPrincipal,),
overrides=overrides
)
logWrapper = DirectoryLogWrapperResource(
authWrapper,
directory,
)
# FIXME: Storing a reference to the root resource on the store
# until scheduling no longer needs resource objects
newStore.rootResource = root
return logWrapper
def getDBPool(config):
"""
Inspect configuration to determine what database connection pool
to set up.
@return: (L{ConnectionPool}, transactionFactory)
"""
if config.DBType == 'oracle':
dialect = ORACLE_DIALECT
paramstyle = 'numeric'
else:
dialect = POSTGRES_DIALECT
paramstyle = 'pyformat'
pool = None
if config.DBAMPFD:
txnFactory = transactionFactoryFromFD(
int(config.DBAMPFD), dialect, paramstyle
)
elif not config.UseDatabase:
txnFactory = None
elif not config.SharedConnectionPool:
if config.DBType == '':
# get a PostgresService to tell us what the local connection
# info is, but *don't* start it (that would start one postgres
# master per slave, resulting in all kinds of mayhem...)
connectionFactory = pgServiceFromConfig(config, None).produceConnection
else:
connectionFactory = DBAPIConnector.connectorFor(config.DBType, **config.DatabaseConnection).connect
pool = ConnectionPool(connectionFactory, dialect=dialect,
paramstyle=paramstyle,
maxConnections=config.MaxDBConnectionsPerPool)
txnFactory = pool.connection
else:
raise UsageError(
"trying to use DB in slave, but no connection info from parent"
)
return (pool, txnFactory)
class FakeRequest(object):
def __init__(self, rootResource, method, path, uri='/', transaction=None):
self.rootResource = rootResource
self.method = method
self.path = path
self.uri = uri
self._resourcesByURL = {}
self._urlsByResource = {}
self.headers = Headers()
if transaction is not None:
self._newStoreTransaction = transaction
@inlineCallbacks
def _getChild(self, resource, segments):
if not segments:
returnValue(resource)
child, remaining = (yield resource.locateChild(self, segments))
returnValue((yield self._getChild(child, remaining)))
@inlineCallbacks
def locateResource(self, url):
url = url.strip("/")
segments = url.split("/")
resource = (yield self._getChild(self.rootResource, segments))
if resource:
self._rememberResource(resource, url)
returnValue(resource)
@inlineCallbacks
def locateChildResource(self, parent, childName):
if parent is None or childName is None:
returnValue(None)
parentURL = self.urlForResource(parent)
if not parentURL.endswith("/"):
parentURL += "/"
url = parentURL + quote(childName)
segment = childName
resource = (yield self._getChild(parent, [segment]))
if resource:
self._rememberResource(resource, url)
returnValue(resource)
def _rememberResource(self, resource, url):
self._resourcesByURL[url] = resource
self._urlsByResource[resource] = url
return resource
def _forgetResource(self, resource, url):
if url in self._resourcesByURL:
del self._resourcesByURL[url]
if resource in self._urlsByResource:
del self._urlsByResource[resource]
def urlForResource(self, resource):
url = self._urlsByResource.get(resource, None)
if url is None:
class NoURLForResourceError(RuntimeError):
pass
raise NoURLForResourceError(resource)
return url
def addResponseFilter(self, *args, **kwds):
pass
def memoryForPID(pid, residentOnly=True):
"""
Return the amount of memory in use for the given process. If residentOnly is True,
then RSS is returned; if False, then virtual memory is returned.
@param pid: process id
@type pid: C{int}
@param residentOnly: Whether only resident memory should be included
@type residentOnly: C{boolean}
@return: Memory used by process in bytes
@rtype: C{int}
"""
memoryInfo = psutil.Process(pid).get_memory_info()
return memoryInfo.rss if residentOnly else memoryInfo.vms
class MemoryLimitService(Service, object):
"""
A service which when paired with a DelayedStartupProcessMonitor will periodically
examine the memory usage of the monitored processes and stop any which exceed
a configured limit. Memcached processes are ignored.
"""
def __init__(self, processMonitor, intervalSeconds, limitBytes, residentOnly, reactor=None):
"""
@param processMonitor: the DelayedStartupProcessMonitor
@param intervalSeconds: how often to check
@type intervalSeconds: C{int}
@param limitBytes: any monitored process over this limit is stopped
@type limitBytes: C{int}
@param residentOnly: whether only resident memory should be included
@type residentOnly: C{boolean}
@param reactor: for testing
"""
self._processMonitor = processMonitor
self._seconds = intervalSeconds
self._bytes = limitBytes
self._residentOnly = residentOnly
self._delayedCall = None
if reactor is None:
from twisted.internet import reactor
self._reactor = reactor
# Unit tests can swap out _memoryForPID
self._memoryForPID = memoryForPID
def startService(self):
"""
Start scheduling the memory checks
"""
super(MemoryLimitService, self).startService()
self._delayedCall = self._reactor.callLater(self._seconds, self.checkMemory)
def stopService(self):
"""
Stop checking memory
"""
super(MemoryLimitService, self).stopService()
if self._delayedCall is not None and self._delayedCall.active():
self._delayedCall.cancel()
self._delayedCall = None
def checkMemory(self):
"""
Stop any processes monitored by our paired processMonitor whose resident
memory exceeds our configured limitBytes. Reschedule intervalSeconds in
the future.
"""
try:
for name in self._processMonitor.processes:
if name.startswith("memcached"):
continue
proto = self._processMonitor.protocols.get(name, None)
if proto is not None:
proc = proto.transport
pid = proc.pid
try:
memory = self._memoryForPID(pid, self._residentOnly)
except Exception as e:
log.error(
"Unable to determine memory usage of PID: {pid} ({err})",
pid=pid, err=e)
continue
if memory > self._bytes:
log.warn(
"Killing large process: {name} PID:{pid} {memtype}:{mem}",
name=name, pid=pid,
memtype=("Resident" if self._residentOnly else "Virtual"),
mem=memory)
self._processMonitor.stopProcess(name)
finally:
self._delayedCall = self._reactor.callLater(self._seconds, self.checkMemory)
def checkDirectories(config):
"""
Make sure that various key directories exist (and create if needed)
"""
#
# Verify that server root actually exists
#
checkDirectory(
config.ServerRoot,
"Server root",
# Require write access because one might not allow editing on /
access=os.W_OK,
wait=True # Wait in a loop until ServerRoot exists
)
#
# Verify that other root paths are OK
#
if config.DataRoot.startswith(config.ServerRoot + os.sep):
checkDirectory(
config.DataRoot,
"Data root",
access=os.W_OK,
create=(0750, config.UserName, config.GroupName),
)
if config.DocumentRoot.startswith(config.DataRoot + os.sep):
checkDirectory(
config.DocumentRoot,
"Document root",
# Don't require write access because one might not allow editing on /
access=os.R_OK,
create=(0750, config.UserName, config.GroupName),
)
if config.ConfigRoot.startswith(config.ServerRoot + os.sep):
checkDirectory(
config.ConfigRoot,
"Config root",
access=os.W_OK,
create=(0750, config.UserName, config.GroupName),
)
if config.SocketFiles.Enabled:
checkDirectory(
config.SocketRoot,
"Socket file root",
access=os.W_OK,
create=(
config.SocketFiles.Permissions,
config.SocketFiles.Owner,
config.SocketFiles.Group
)
)
# Always create these:
checkDirectory(
config.LogRoot,
"Log root",
access=os.W_OK,
create=(0750, config.UserName, config.GroupName),
)
checkDirectory(
config.RunRoot,
"Run root",
access=os.W_OK,
create=(0770, config.UserName, config.GroupName),
)
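The checkDirectory() helper is defined elsewhere in the tree; as a rough stdlib-only sketch (function name hypothetical), its create-if-missing-then-verify-access pattern looks like:

```python
import os

def ensure_directory(path, access=os.W_OK, mode=0o750):
    """Create path if missing, then verify the requested access bits.

    A simplified sketch of the checkDirectory() pattern above; the real
    helper also handles ownership and an optional wait-until-present loop.
    """
    if not os.path.exists(path):
        os.makedirs(path, mode)
    if not os.access(path, access):
        raise OSError("insufficient access to %s" % (path,))
    return path
```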
class Stepper(object):
"""
Manages the sequential, deferred execution of "steps" which are objects
implementing these methods:
- stepWithResult(result)
@param result: the result returned from the previous step
@returns: Deferred
- stepWithFailure(failure)
@param failure: a Failure encapsulating the exception from the
previous step
@returns: Failure to continue down the errback chain, or a
Deferred returning a non-Failure to switch back to the
callback chain
"Step" objects are added in order by calling addStep(), and when start()
is called, the Stepper will call the stepWithResult() of the first step.
If stepWithResult() doesn't raise an Exception, the Stepper will call the
next step's stepWithResult(). If a stepWithResult() raises an Exception,
the Stepper will call the next step's stepWithFailure() -- if it's
implemented -- passing it a Failure object. If the stepWithFailure()
decides it can handle the Failure and proceed, it can return a non-Failure
which is an indicator to the Stepper to call the next step's
stepWithResult().
TODO: Create an IStep interface (?)
"""
def __init__(self):
self.steps = []
self.failure = None
self.result = None
self.running = False
def addStep(self, step):
"""
Adds a step object to the ordered list of steps
@param step: the object to add
@type step: an object implementing stepWithResult()
@return: the Stepper object itself so addStep() calls can be chained
"""
if self.running:
raise RuntimeError("Can't add step after start")
self.steps.append(step)
return self
def defaultStepWithResult(self, result):
return succeed(result)
def defaultStepWithFailure(self, failure):
if failure.type not in (
NotAllowedToUpgrade, ConfigurationError,
OpenSSL.SSL.Error
):
log.failure("Step failure", failure=failure)
return failure
# def protectStep(self, callback):
# def _protected(result):
# try:
# return callback(result)
# except Exception, e:
# # TODO: how to turn Exception into Failure
# return Failure()
# return _protected
def start(self, result=None):
"""
Begin executing the added steps in sequence. If a step object
does not implement a stepWithResult/stepWithFailure method, a
default implementation will be used.
@param result: an optional value to pass to the first step
@return: the Deferred that will fire when steps are done
"""
self.running = True
self.deferred = Deferred()
for step in self.steps:
# See if we need to use a default implementation of the step methods:
if hasattr(step, "stepWithResult"):
callBack = step.stepWithResult
# callBack = self.protectStep(step.stepWithResult)
else:
callBack = self.defaultStepWithResult
if hasattr(step, "stepWithFailure"):
errBack = step.stepWithFailure
else:
errBack = self.defaultStepWithFailure
# Add callbacks to the Deferred
self.deferred.addCallbacks(callBack, errBack)
# Get things going
self.deferred.callback(result)
return self.deferred
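The callback/errback chaining above rides on Twisted Deferreds; the control flow can be illustrated with a synchronous, Twisted-free sketch (class name hypothetical, and recovery is simplified to "any return value resumes the result chain"):

```python
class MiniStepper(object):
    """Synchronous illustration of Stepper's chaining rules: a step
    failure skips ahead to the next step that implements
    stepWithFailure, and a recovered failure resumes the result chain.
    """
    def __init__(self):
        self.steps = []

    def addStep(self, step):
        self.steps.append(step)
        return self  # chainable, like Stepper.addStep()

    def start(self, result=None):
        failure = None
        for step in self.steps:
            if failure is None:
                try:
                    result = step.stepWithResult(result)
                except Exception as e:
                    failure = e  # switch to the "errback" path
            elif hasattr(step, "stepWithFailure"):
                # Recovery: resume the result chain with the returned value
                result, failure = step.stepWithFailure(failure), None
        if failure is not None:
            raise failure
        return result
```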
def requestShutdown(programPath, reason):
"""
Log the shutdown reason and call the shutdown-requesting program.
When the service is spawned by launchd (or equivalent) and decides it
needs to shut itself down (because of a misconfiguration, for example),
it can't simply exit. We may need to go through the system
machinery to unload our job, manage reverse proxies, update admin UI, etc.
Therefore you can configure the ServiceDisablingProgram plist key to point
to a program to run which will stop our service.
@param programPath: the full path to a program to call (with no args)
@type programPath: C{str}
@param reason: a shutdown reason to log
@type reason: C{str}
"""
log.error("Shutting down Calendar and Contacts server")
log.error(reason)
Popen(
args=[config.ServiceDisablingProgram],
stdout=PIPE,
stderr=PIPE,
).communicate()
def preFlightChecks(config):
"""
Perform checks prior to spawning any processes. Returns True if the checks
pass, or False if they fail and a ServiceDisablingProgram is configured;
otherwise exits.
"""
success, reason = verifyConfig(config)
if success:
success, reason = verifyServerRoot(config)
if success:
success, reason = verifyTLSCertificate(config)
if success:
success, reason = verifyAPNSCertificate(config)
if not success:
if config.ServiceDisablingProgram:
# If pre-flight checks fail, we don't want launchd to
# repeatedly launch us, we want our job to get unloaded.
# If the config.ServiceDisablingProgram is assigned and exists
# we schedule it to run after startService finishes.
# Its job is to carry out the platform-specific tasks of disabling
# the service.
if os.path.exists(config.ServiceDisablingProgram):
addSystemEventTrigger(
"after", "startup",
requestShutdown, config.ServiceDisablingProgram, reason
)
return False
else:
print(reason)
sys.exit(1)
return True
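The nested `if success:` ladder above is a run-until-first-failure chain over `(ok, reason)` checks; a compact equivalent (helper name hypothetical):

```python
def runChecks(checks, config):
    """Run each check(config) -> (ok, reason) in order, stopping at the
    first failure -- mirroring the verifyConfig/verifyServerRoot/
    verifyTLSCertificate/verifyAPNSCertificate ladder above."""
    success, reason = True, "All checks passed"
    for check in checks:
        success, reason = check(config)
        if not success:
            break
    return success, reason
```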
def verifyConfig(config):
"""
At least one of EnableCalDAV or EnableCardDAV must be True
"""
if config.EnableCalDAV or config.EnableCardDAV:
return True, "A protocol is enabled"
return False, "Neither CalDAV nor CardDAV are enabled"
def verifyServerRoot(config):
"""
Ensure server root is not on a phantom volume
"""
result = diagnose.detectPhantomVolume(config.ServerRoot)
if result == diagnose.EXIT_CODE_SERVER_ROOT_MISSING:
return False, "ServerRoot is missing"
if result == diagnose.EXIT_CODE_PHANTOM_DATA_VOLUME:
return False, "ServerRoot is supposed to be on a non-boot-volume but it's not"
return True, "ServerRoot is ok"
def verifyTLSCertificate(config):
"""
If a TLS certificate is configured, make sure it exists, is non-empty,
and is valid.
"""
if config.SSLCertificate:
if not os.path.exists(config.SSLCertificate):
message = (
"The configured TLS certificate ({cert}) is missing".format(
cert=config.SSLCertificate
)
)
postAlert("MissingCertificateAlert", ["path", config.SSLCertificate])
return False, message
else:
return True, "TLS disabled"
length = os.stat(config.SSLCertificate).st_size
if length == 0:
message = (
"The configured TLS certificate ({cert}) is empty".format(
cert=config.SSLCertificate
)
)
return False, message
try:
ChainingOpenSSLContextFactory(
config.SSLPrivateKey,
config.SSLCertificate,
certificateChainFile=config.SSLAuthorityChain,
passwdCallback=getSSLPassphrase,
sslmethod=getattr(OpenSSL.SSL, config.SSLMethod),
ciphers=config.SSLCiphers.strip()
)
except Exception as e:
message = (
"The configured TLS certificate ({cert}) cannot be used: {reason}".format(
cert=config.SSLCertificate,
reason=str(e)
)
)
return False, message
return True, "TLS enabled"
def verifyAPNSCertificate(config):
"""
If APNS certificates are configured, make sure they're valid.
"""
if config.Notifications.Services.APNS.Enabled:
for protocol, accountName in (
("CalDAV", "apns:com.apple.calendar"),
("CardDAV", "apns:com.apple.contact"),
):
protoConfig = config.Notifications.Services.APNS[protocol]
# Verify the cert exists
if not os.path.exists(protoConfig.CertificatePath):
message = (
"The {proto} APNS certificate ({cert}) is missing".format(
proto=protocol,
cert=protoConfig.CertificatePath
)
)
postAlert("PushNotificationCertificateAlert", [])
return False, message
# Verify we can extract the topic
if not protoConfig.Topic:
topic = getAPNTopicFromCertificate(protoConfig.CertificatePath)
protoConfig.Topic = topic
if not protoConfig.Topic:
postAlert("PushNotificationCertificateAlert", [])
message = "Cannot extract APN topic"
return False, message
# Verify we can acquire the passphrase
if not protoConfig.Passphrase:
try:
passphrase = getPasswordFromKeychain(accountName)
protoConfig.Passphrase = passphrase
except KeychainAccessError:
# The system doesn't support keychain
pass
except KeychainPasswordNotFound:
# The password doesn't exist in the keychain.
postAlert("PushNotificationCertificateAlert", [])
message = "Cannot retrieve APN passphrase from keychain"
return False, message
# Let OpenSSL try to use the cert
try:
if protoConfig.Passphrase:
passwdCallback = lambda *ignored: protoConfig.Passphrase
else:
passwdCallback = None
ChainingOpenSSLContextFactory(
protoConfig.PrivateKeyPath,
protoConfig.CertificatePath,
certificateChainFile=protoConfig.AuthorityChainPath,
passwdCallback=passwdCallback,
sslmethod=getattr(OpenSSL.SSL, "TLSv1_METHOD"),
)
except Exception as e:
message = (
"The {proto} APNS certificate ({cert}) cannot be used: {reason}".format(
proto=protocol,
cert=protoConfig.CertificatePath,
reason=str(e)
)
)
postAlert("PushNotificationCertificateAlert", [])
return False, message
return True, "APNS enabled"
else:
return True, "APNS disabled"
def getSSLPassphrase(*ignored):
if not config.SSLPrivateKey:
return None
if config.SSLCertAdmin and os.path.isfile(config.SSLCertAdmin):
child = Popen(
args=[
"sudo", config.SSLCertAdmin,
"--get-private-key-passphrase", config.SSLPrivateKey,
],
stdout=PIPE, stderr=PIPE,
)
output, error = child.communicate()
if child.returncode:
log.error(
"Could not get passphrase for {key}: {error}",
key=config.SSLPrivateKey, error=error
)
else:
log.info(
"Obtained passphrase for {key}", key=config.SSLPrivateKey
)
return output.strip()
if (
config.SSLPassPhraseDialog and
os.path.isfile(config.SSLPassPhraseDialog)
):
sslPrivKey = open(config.SSLPrivateKey)
try:
keyType = None
for line in sslPrivKey.readlines():
if "-----BEGIN RSA PRIVATE KEY-----" in line:
keyType = "RSA"
break
elif "-----BEGIN DSA PRIVATE KEY-----" in line:
keyType = "DSA"
break
finally:
sslPrivKey.close()
if keyType is None:
log.error(
"Could not get private key type for {key}",
key=config.SSLPrivateKey
)
else:
child = Popen(
args=[
config.SSLPassPhraseDialog,
"{}:{}".format(config.ServerHostName, config.SSLPort),
keyType,
],
stdout=PIPE, stderr=PIPE,
)
output, error = child.communicate()
if child.returncode:
log.error(
"Could not get passphrase for {key}: {error}",
key=config.SSLPrivateKey, error=error
)
else:
return output.strip()
return None
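The PEM header scan inside getSSLPassphrase() generalizes to a small helper; a stdlib-only sketch (function name hypothetical):

```python
def pemKeyType(path):
    """Return "RSA" or "DSA" based on the first PEM BEGIN marker found in
    the file, or None if neither marker is present -- the same scan
    getSSLPassphrase() performs before invoking SSLPassPhraseDialog."""
    markers = {
        "-----BEGIN RSA PRIVATE KEY-----": "RSA",
        "-----BEGIN DSA PRIVATE KEY-----": "DSA",
    }
    with open(path) as f:
        for line in f:
            for marker, keyType in markers.items():
                if marker in line:
                    return keyType
    return None
```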
def postAlert(alertType, args):
if (
config.AlertPostingProgram and
os.path.exists(config.AlertPostingProgram)
):
try:
commandLine = [config.AlertPostingProgram, alertType]
commandLine.extend(args)
Popen(
commandLine,
stdout=PIPE,
stderr=PIPE,
).communicate()
except Exception as e:
log.error(
"Could not post alert: {alertType} {args} ({error})",
alertType=alertType, args=args, error=e
)
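postAlert() and requestShutdown() both shell out via `Popen(...).communicate()`; the basic capture pattern looks like the sketch below (helper name hypothetical; the Python interpreter itself stands in for the external program):

```python
import sys
from subprocess import Popen, PIPE

def runHelper(argv):
    """Run an external helper and return (returncode, stdout, stderr),
    the same Popen/communicate pattern postAlert() uses above."""
    child = Popen(argv, stdout=PIPE, stderr=PIPE)
    output, error = child.communicate()
    return child.returncode, output, error

# Example: invoke the interpreter in place of an AlertPostingProgram.
rc, out, err = runHelper([sys.executable, "-c", "print('posted')"])
```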
calendarserver-7.0+dfsg/calendarserver/test/__init__.py
##
# Copyright (c) 2005-2015 Apple Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
##
"""
Tests for the calendarserver module.
"""
calendarserver-7.0+dfsg/calendarserver/test/test_logAnalysis.py
##
# Copyright (c) 2012-2015 Apple Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
##
from twisted.trial.unittest import TestCase
from calendarserver.logAnalysis import getAdjustedMethodName, \
getAdjustedClientName
class LogAnalysis(TestCase):
def test_getAdjustedMethodName(self):
"""
L{getAdjustedMethodName} returns the appropriate method.
"""
data = (
("PROPFIND", "/calendars/users/user01/", {}, "PROPFIND Calendar Home",),
("PROPFIND", "/calendars/users/user01/", {"cached": "1"}, "PROPFIND cached Calendar Home",),
("PROPFIND", "/calendars/users/user01/ABC/", {}, "PROPFIND Calendar",),
("PROPFIND", "/calendars/users/user01/inbox/", {}, "PROPFIND Inbox",),
("PROPFIND", "/addressbooks/users/user01/", {}, "PROPFIND Adbk Home",),
("PROPFIND", "/addressbooks/users/user01/", {"cached": "1"}, "PROPFIND cached Adbk Home",),
("PROPFIND", "/addressbooks/users/user01/ABC/", {}, "PROPFIND Adbk",),
("PROPFIND", "/addressbooks/users/user01/inbox/", {}, "PROPFIND Adbk",),
("PROPFIND", "/principals/users/user01/", {}, "PROPFIND Principals",),
("PROPFIND", "/principals/users/user01/", {"cached": "1"}, "PROPFIND cached Principals",),
("PROPFIND", "/.well-known/caldav", {}, "PROPFIND",),
("REPORT(CalDAV:sync-collection)", "/calendars/users/user01/ABC/", {}, "REPORT cal-sync",),
("REPORT(CalDAV:calendar-query)", "/calendars/users/user01/ABC/", {}, "REPORT cal-query",),
("POST", "/calendars/users/user01/", {}, "POST Calendar Home",),
("POST", "/calendars/users/user01/outbox/", {"recipients": "1"}, "POST Freebusy",),
("POST", "/calendars/users/user01/outbox/", {}, "POST Outbox",),
("POST", "/apns", {}, "POST apns",),
("PUT", "/calendars/users/user01/calendar/1.ics", {}, "PUT ics",),
("PUT", "/calendars/users/user01/calendar/1.ics", {"itip.requests": "1"}, "PUT Organizer",),
("PUT", "/calendars/users/user01/calendar/1.ics", {"itip.reply": "1"}, "PUT Attendee",),
("GET", "/calendars/users/user01/", {}, "GET Calendar Home",),
("GET", "/calendars/users/user01/calendar/", {}, "GET Calendar",),
("GET", "/calendars/users/user01/calendar/1.ics", {}, "GET ics",),
("DELETE", "/calendars/users/user01/", {}, "DELETE Calendar Home",),
("DELETE", "/calendars/users/user01/calendar/", {}, "DELETE Calendar",),
("DELETE", "/calendars/users/user01/calendar/1.ics", {}, "DELETE ics",),
("DELETE", "/calendars/users/user01/inbox/1.ics", {}, "DELETE inbox ics",),
("ACL", "/calendars/users/user01/", {}, "ACL",),
)
for method, uri, extras, result in data:
extras["method"] = method
extras["uri"] = uri
self.assertEqual(getAdjustedMethodName(extras), result, "Failed getAdjustedMethodName: %s" % (result,))
def test_getAdjustedClientName(self):
"""
L{getAdjustedClientName} returns the appropriate method.
"""
data = (
("Mac OS X/10.8.2 (12C60) CalendarAgent/55", "Mac OS X/10.8.2 CalendarAgent",),
("CalendarStore/5.0.3 (1204.2); iCal/5.0.3 (1605.4); Mac OS X/10.7.5 (11G63b)", "Mac OS X/10.7.5 iCal",),
("DAVKit/5.0 (767); iCalendar/5.0 (79); iPhone/4.2.1 8C148", "iPhone/4.2.1",),
("iOS/6.0 (10A405) Preferences/1.0", "iOS/6.0 Preferences",),
("iOS/6.0.1 (10A523) dataaccessd/1.0", "iOS/6.0.1 dataaccessd",),
("InterMapper/5.4.3", "InterMapper",),
("Mac OS X/10.8.2 (12C60) AddressBook/1167", "Mac OS X/10.8.2 AddressBook",),
)
for ua, result in data:
self.assertEqual(getAdjustedClientName({"userAgent": ua}), result, "Failed getAdjustedClientName: %s" % (ua,))
calendarserver-7.0+dfsg/calendarserver/tools/__init__.py
##
# Copyright (c) 2005-2015 Apple Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
##
"""
CalendarServer tools.
"""
calendarserver-7.0+dfsg/calendarserver/tools/agent.py
#!/usr/bin/env python
# -*- test-case-name: calendarserver.tools.test.test_agent -*-
##
# Copyright (c) 2013-2015 Apple Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
##
"""
A service spawned on-demand by launchd, meant to handle configuration requests
from Server.app. When a request comes in on the socket specified in the
launchd agent.plist, launchd will run "caldavd -t Agent" which ends up creating
this service. Requests are made using HTTP POSTS to /gateway, and are
authenticated by OpenDirectory.
"""
from __future__ import print_function
__all__ = [
"makeAgentService",
]
import cStringIO
from plistlib import readPlistFromString, writePlistToString
import socket
from twext.python.launchd import launchActivateSocket
from twext.python.log import Logger
from twext.who.checker import HTTPDigestCredentialChecker
from twext.who.opendirectory import (
DirectoryService as OpenDirectoryDirectoryService,
NoQOPDigestCredentialFactory
)
from twisted.application.internet import StreamServerEndpointService
from twisted.cred.portal import IRealm, Portal
from twisted.internet.defer import inlineCallbacks, returnValue
from twisted.internet.endpoints import AdoptedStreamServerEndpoint
from twisted.internet.protocol import Factory
from twisted.protocols import amp
from twisted.web.guard import HTTPAuthSessionWrapper
from twisted.web.resource import IResource, Resource, ForbiddenResource
from twisted.web.server import Site, NOT_DONE_YET
from zope.interface import implements
log = Logger()
class AgentRealm(object):
"""
Only allow a specified list of avatar IDs to access the site
"""
implements(IRealm)
def __init__(self, root, allowedAvatarIds):
"""
@param root: The root resource of the site
@param allowedAvatarIds: The list of IDs to allow access to
"""
self.root = root
self.allowedAvatarIds = allowedAvatarIds
def requestAvatar(self, avatarId, mind, *interfaces):
if IResource in interfaces:
if avatarId.shortNames[0] in self.allowedAvatarIds:
return (IResource, self.root, lambda: None)
else:
return (IResource, ForbiddenResource(), lambda: None)
raise NotImplementedError()
class AgentGatewayResource(Resource):
"""
The gateway resource which forwards incoming requests through
gateway.Runner.
"""
isLeaf = True
def __init__(self, store, directory, inactivityDetector):
"""
@param store: an already opened store
@param directory: a directory service
@param inactivityDetector: the InactivityDetector to tell when requests
come in
"""
Resource.__init__(self)
self.store = store
self.directory = directory
self.inactivityDetector = inactivityDetector
def render_POST(self, request):
"""
Take the body of the POST request and feed it to gateway.Runner();
return the result as the response body.
"""
self.inactivityDetector.activity()
def onSuccess(result, output):
txt = output.getvalue()
output.close()
request.write(txt)
request.finish()
def onError(failure):
message = failure.getErrorMessage()
tbStringIO = cStringIO.StringIO()
failure.printTraceback(file=tbStringIO)
tbString = tbStringIO.getvalue()
tbStringIO.close()
error = {
"Error": message,
"Traceback": tbString,
}
log.error("command failed {error}", error=failure)
request.write(writePlistToString(error))
request.finish()
from calendarserver.tools.gateway import Runner
body = request.content.read()
command = readPlistFromString(body)
output = cStringIO.StringIO()
runner = Runner(self.store, [command], output=output)
d = runner.run()
d.addCallback(onSuccess, output)
d.addErrback(onError)
return NOT_DONE_YET
def makeAgentService(store):
"""
Returns a service which will process GatewayAMPCommands, using a socket
file descriptor acquired by launchd
@param store: an already opened store
@returns: service
"""
from twisted.internet import reactor
sockets = launchActivateSocket("AgentSocket")
fd = sockets[0]
family = socket.AF_INET
endpoint = AdoptedStreamServerEndpoint(reactor, fd, family)
directory = store.directoryService()
def becameInactive():
log.warn("Agent inactive; shutting down")
reactor.stop()
from twistedcaldav.config import config
inactivityDetector = InactivityDetector(
reactor, config.AgentInactivityTimeoutSeconds, becameInactive
)
root = Resource()
root.putChild(
"gateway",
AgentGatewayResource(
store, directory, inactivityDetector
)
)
# We need this service to be able to return com.apple.calendarserver,
# so tell it not to suppress system accounts.
directory = OpenDirectoryDirectoryService(
"/Local/Default", suppressSystemRecords=False
)
portal = Portal(
AgentRealm(root, [u"com.apple.calendarserver"]),
[HTTPDigestCredentialChecker(directory)]
)
credentialFactory = NoQOPDigestCredentialFactory(
"md5", "/Local/Default"
)
wrapper = HTTPAuthSessionWrapper(portal, [credentialFactory])
site = Site(wrapper)
return StreamServerEndpointService(endpoint, site)
class InactivityDetector(object):
"""
If no 'activity' takes place for a specified amount of time, a method
will get called. Activity causes the inactivity time threshold to be
reset.
"""
def __init__(self, reactor, timeoutSeconds, becameInactive):
"""
@param reactor: the reactor
@param timeoutSeconds: the number of seconds considered to mean inactive
@param becameInactive: the method to call (with no arguments) when
inactivity is reached
"""
self._reactor = reactor
self._timeoutSeconds = timeoutSeconds
self._becameInactive = becameInactive
if self._timeoutSeconds > 0:
self._delayedCall = self._reactor.callLater(
self._timeoutSeconds,
self._inactivityThresholdReached
)
def _inactivityThresholdReached(self):
"""
The delayed call has fired. We're inactive. Call the becameInactive
method.
"""
self._becameInactive()
def activity(self):
"""
Call this to let the InactivityDetector know that there has been activity.
It will reset the timeout.
"""
if self._timeoutSeconds > 0:
if self._delayedCall.active():
self._delayedCall.reset(self._timeoutSeconds)
else:
self._delayedCall = self._reactor.callLater(
self._timeoutSeconds,
self._inactivityThresholdReached
)
def stop(self):
"""
Cancels the delayed call
"""
if self._timeoutSeconds > 0:
if self._delayedCall.active():
self._delayedCall.cancel()
#
# Alternate implementation using AMP instead of HTTP
#
class GatewayAMPCommand(amp.Command):
"""
A command to be executed by gateway.Runner
"""
arguments = [('command', amp.String())]
response = [('result', amp.String())]
class GatewayAMPProtocol(amp.AMP):
"""
Passes commands to gateway.Runner and returns the results
"""
def __init__(self, store, directory):
"""
@param store: an already opened store
@param directory: a directory service
"""
amp.AMP.__init__(self)
self.store = store
self.directory = directory
@GatewayAMPCommand.responder
@inlineCallbacks
def gatewayCommandReceived(self, command):
"""
Process a command via gateway.Runner
@param command: GatewayAMPCommand
@returns: a deferred returning a dict
"""
command = readPlistFromString(command)
output = cStringIO.StringIO()
from calendarserver.tools.gateway import Runner
runner = Runner(
self.store,
[command], output=output
)
try:
yield runner.run()
result = output.getvalue()
output.close()
except Exception as e:
error = {"Error": str(e)}
result = writePlistToString(error)
output.close()
returnValue(dict(result=result))
class GatewayAMPFactory(Factory):
"""
Builds GatewayAMPProtocols
"""
protocol = GatewayAMPProtocol
def __init__(self, store):
"""
@param store: an already opened store
"""
self.store = store
self.directory = self.store.directoryService()
def buildProtocol(self, addr):
return GatewayAMPProtocol(
self.store, self.directory
)
#
# A test AMP client
#
command = """<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>command</key>
    <string>getLocationAndResourceList</string>
</dict>
</plist>
"""
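The command payloads exchanged with the agent are XML plists. This module uses the Python 2 plistlib names (readPlistFromString/writePlistToString); on Python 3 the equivalents are plistlib.loads/dumps, e.g.:

```python
import plistlib

# Build the same kind of payload the sample client sends (Python 3 API).
payload = plistlib.dumps({"command": "getLocationAndResourceList"})

# The agent side decodes it back into a dict before dispatching.
decoded = plistlib.loads(payload)
```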
def getList():
# For the sample client, below:
from twisted.internet import reactor
from twisted.internet.protocol import ClientCreator
creator = ClientCreator(reactor, amp.AMP)
host = '127.0.0.1'
import sys
if len(sys.argv) > 1:
host = sys.argv[1]
d = creator.connectTCP(host, 62308)
def connected(ampProto):
return ampProto.callRemote(GatewayAMPCommand, command=command)
d.addCallback(connected)
def resulted(result):
return result['result']
d.addCallback(resulted)
def done(result):
print('Done: %s' % (result,))
reactor.stop()
d.addCallback(done)
reactor.run()
if __name__ == '__main__':
getList()
calendarserver-7.0+dfsg/calendarserver/tools/ampnotifications.py
#!/usr/bin/env python
##
# Copyright (c) 2012-2015 Apple Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
##
from __future__ import print_function
from calendarserver.push.amppush import subscribeToIDs
from calendarserver.tools.cmdline import utilityMain, WorkerService
from getopt import getopt, GetoptError
from twext.python.log import Logger
from twisted.internet.defer import inlineCallbacks, succeed
import os
import sys
log = Logger()
def usage(e=None):
name = os.path.basename(sys.argv[0])
print("usage: %s [options] [pushkey ...]" % (name,))
print("")
print(" Monitor AMP Push Notifications")
print("")
print("options:")
print(" -h --help: print this help and exit")
print(" -f --config : Specify caldavd.plist configuration path")
print(" -p --port : AMP port to connect to")
print(" -s --server : AMP server to connect to")
print(" --debug: verbose logging")
print("")
if e:
sys.stderr.write("%s\n" % (e,))
sys.exit(64)
else:
sys.exit(0)
class MonitorAMPNotifications(WorkerService):
ids = []
hostname = None
port = None
def doWork(self):
return monitorAMPNotifications(self.hostname, self.port, self.ids)
def postStartService(self):
"""
Don't quit right away
"""
pass
def main():
try:
(optargs, args) = getopt(
sys.argv[1:], "f:hp:s:v", [
"config=",
"debug",
"help",
"port=",
"server=",
],
)
except GetoptError as e:
usage(e)
#
# Get configuration
#
configFileName = None
hostname = "localhost"
port = 62311
debug = False
for opt, arg in optargs:
if opt in ("-h", "--help"):
usage()
elif opt in ("-f", "--config"):
configFileName = arg
elif opt in ("-p", "--port"):
port = int(arg)
elif opt in ("-s", "--server"):
hostname = arg
elif opt in ("--debug"):
debug = True
else:
raise NotImplementedError(opt)
if not args:
usage("Not enough arguments")
MonitorAMPNotifications.ids = args
MonitorAMPNotifications.hostname = hostname
MonitorAMPNotifications.port = port
utilityMain(
configFileName,
MonitorAMPNotifications,
verbose=debug
)
def notificationCallback(id, dataChangedTimestamp, priority):
print("Received notification for:", id, "Priority", priority)
return succeed(True)
@inlineCallbacks
def monitorAMPNotifications(hostname, port, ids):
print("Subscribing to notifications...")
yield subscribeToIDs(hostname, port, ids, notificationCallback)
print("Waiting for notifications...")
calendarserver-7.0+dfsg/calendarserver/tools/anonymize.py
#!/usr/bin/env python
##
# Copyright (c) 2006-2015 Apple Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
##
from __future__ import print_function
from __future__ import with_statement
from getopt import getopt, GetoptError
from subprocess import Popen, PIPE, STDOUT
import datetime
import hashlib
import os
import random
import shutil
import sys
import urllib
import uuid
import xattr
import zlib
from plistlib import readPlistFromString
from pycalendar.icalendar.calendar import Calendar
from pycalendar.parameter import Parameter
COPY_CAL_XATTRS = (
'WebDAV:{DAV:}resourcetype',
'WebDAV:{urn:ietf:params:xml:ns:caldav}calendar-timezone',
'WebDAV:{http:%2F%2Fapple.com%2Fns%2Fical%2F}calendar-color',
'WebDAV:{http:%2F%2Fapple.com%2Fns%2Fical%2F}calendar-order',
)
COPY_EVENT_XATTRS = ('WebDAV:{DAV:}getcontenttype',)
def usage(e=None):
if e:
print(e)
print("")
name = os.path.basename(sys.argv[0])
print("usage: %s [options] source destination" % (name,))
print("")
print(" Anonymizes calendar data")
print("")
print(" source and destination should refer to document root directories")
print("")
print("options:")
print(" -h --help: print this help and exit")
print(" -n --node : Directory node (defaults to /Search)")
print("")
if e:
sys.exit(64)
else:
sys.exit(0)
def main():
try:
(optargs, args) = getopt(
sys.argv[1:], "hn:", [
"help",
"node=",
],
)
except GetoptError as e:
usage(e)
#
# Get configuration
#
directoryNode = "/Search"
for opt, arg in optargs:
if opt in ("-h", "--help"):
usage()
elif opt in ("-n", "--node"):
directoryNode = arg
if len(args) != 2:
usage("Source and destination directories must be specified.")
sourceDirectory, destDirectory = args
directoryMap = DirectoryMap(directoryNode)
anonymizeRoot(directoryMap, sourceDirectory, destDirectory)
directoryMap.printStats()
directoryMap.dumpDsImports(os.path.join(destDirectory, "dsimports"))
def anonymizeRoot(directoryMap, sourceDirectory, destDirectory):
# sourceDirectory and destDirectory are DocumentRoots
print("Anonymizing calendar data from %s into %s" % (sourceDirectory, destDirectory))
homes = 0
calendars = 0
resources = 0
if not os.path.exists(sourceDirectory):
print("Can't find source: %s" % (sourceDirectory,))
sys.exit(1)
if not os.path.exists(destDirectory):
os.makedirs(destDirectory)
sourceCalRoot = os.path.join(sourceDirectory, "calendars")
destCalRoot = os.path.join(destDirectory, "calendars")
if not os.path.exists(destCalRoot):
os.makedirs(destCalRoot)
sourceUidHomes = os.path.join(sourceCalRoot, "__uids__")
if os.path.exists(sourceUidHomes):
destUidHomes = os.path.join(destCalRoot, "__uids__")
if not os.path.exists(destUidHomes):
os.makedirs(destUidHomes)
homeList = []
for first in os.listdir(sourceUidHomes):
if len(first) == 2:
firstPath = os.path.join(sourceUidHomes, first)
for second in os.listdir(firstPath):
if len(second) == 2:
secondPath = os.path.join(firstPath, second)
for home in os.listdir(secondPath):
record = directoryMap.lookupCUA(home)
if not record:
print("Couldn't find %s, skipping." % (home,))
continue
sourceHome = os.path.join(secondPath, home)
destHome = os.path.join(
destUidHomes,
record['guid'][0:2], record['guid'][2:4],
record['guid'])
homeList.append((sourceHome, destHome, record))
else:
home = first
sourceHome = os.path.join(sourceUidHomes, home)
if not os.path.isdir(sourceHome):
continue
record = directoryMap.lookupCUA(home)
if not record:
print("Couldn't find %s, skipping." % (home,))
continue
sourceHome = os.path.join(sourceUidHomes, home)
destHome = os.path.join(destUidHomes, record['guid'])
homeList.append((sourceHome, destHome, record))
print("Processing %d calendar homes..." % (len(homeList),))
for sourceHome, destHome, record in homeList:
quotaUsed = 0
if not os.path.exists(destHome):
os.makedirs(destHome)
homes += 1
# Iterate calendars
freeBusies = []
for cal in os.listdir(sourceHome):
# Skip these:
if cal in ("dropbox", "notifications"):
continue
# Don't include these in freebusy list
if cal not in ("inbox", "outbox"):
freeBusies.append(cal)
sourceCal = os.path.join(sourceHome, cal)
destCal = os.path.join(destHome, cal)
if not os.path.exists(destCal):
os.makedirs(destCal)
calendars += 1
# Copy calendar xattrs
for attr, value in xattr.xattr(sourceCal).iteritems():
if attr in COPY_CAL_XATTRS:
xattr.setxattr(destCal, attr, value)
# Copy index
sourceIndex = os.path.join(sourceCal, ".db.sqlite")
destIndex = os.path.join(destCal, ".db.sqlite")
if os.path.exists(sourceIndex):
shutil.copyfile(sourceIndex, destIndex)
# Iterate resources
for resource in os.listdir(sourceCal):
if resource.startswith("."):
continue
sourceResource = os.path.join(sourceCal, resource)
# Skip directories
if os.path.isdir(sourceResource):
continue
with open(sourceResource) as res:
data = res.read()
data = anonymizeData(directoryMap, data)
if data is None:
# Ignore data we can't parse
continue
destResource = os.path.join(destCal, resource)
with open(destResource, "w") as res:
res.write(data)
quotaUsed += len(data)
for attr, value in xattr.xattr(sourceResource).iteritems():
if attr in COPY_EVENT_XATTRS:
xattr.setxattr(destResource, attr, value)
# Set new etag
xml = "<?xml version='1.0' encoding='UTF-8'?>\r\n<getcontentmd5 xmlns='http://twistedmatrix.com/xml_namespace/dav/'>%s</getcontentmd5>\r\n" % (hashlib.md5(data).hexdigest(),)
xattr.setxattr(destResource, "WebDAV:{http:%2F%2Ftwistedmatrix.com%2Fxml_namespace%2Fdav%2F}getcontentmd5", zlib.compress(xml))
resources += 1
# Store new ctag on calendar
xml = "<?xml version='1.0' encoding='UTF-8'?>\r\n<getctag xmlns='http://calendarserver.org/ns/'>%s</getctag>\r\n" % (str(datetime.datetime.now()),)
xattr.setxattr(destCal, "WebDAV:{http:%2F%2Fcalendarserver.org%2Fns%2F}getctag", zlib.compress(xml))
# Calendar home quota
xml = "<?xml version='1.0' encoding='UTF-8'?>\r\n<quota-used xmlns='http://twistedmatrix.com/xml_namespace/dav/private/'>%d</quota-used>\r\n" % (quotaUsed,)
xattr.setxattr(destHome, "WebDAV:{http:%2F%2Ftwistedmatrix.com%2Fxml_namespace%2Fdav%2Fprivate%2F}quota-used", zlib.compress(xml))
# Inbox free busy calendars list
destInbox = os.path.join(destHome, "inbox")
if not os.path.exists(destInbox):
os.makedirs(destInbox)
xml = "<?xml version='1.0' encoding='UTF-8'?>\n<calendar-free-busy-set xmlns='urn:ietf:params:xml:ns:caldav'>\n"
for freeBusy in freeBusies:
xml += "<href xmlns='DAV:'>/calendars/__uids__/%s/%s/</href>\n" % (record['guid'], freeBusy)
xml += "</calendar-free-busy-set>\n"
xattr.setxattr(
destInbox,
"WebDAV:{urn:ietf:params:xml:ns:caldav}calendar-free-busy-set",
zlib.compress(xml)
)
if not (homes % 100):
print(" %d..." % (homes,))
print("Done.")
print("")
print("Calendar totals:")
print(" Calendar homes: %d" % (homes,))
print(" Calendars: %d" % (calendars,))
print(" Events: %d" % (resources,))
print("")
def anonymizeData(directoryMap, data):
try:
pyobj = Calendar.parseText(data)
except Exception as e:
print("Failed to parse (%s): %s" % (e, data))
return None
# Anonymize top-level calendar properties
try:
for prop in pyobj.getProperties('x-wr-calname'):
prop.setValue(anonymize(prop.getValue().getValue()))
except KeyError:
pass
for comp in pyobj.getComponents():
# Replace with anonymized CUAs:
for propName in ('organizer', 'attendee'):
try:
for prop in tuple(comp.getProperties(propName)):
cua = prop.getValue().getValue()
record = directoryMap.lookupCUA(cua)
if record is None:
# print("Can't find record for", cua)
record = directoryMap.addRecord(cua=cua)
if record is None:
comp.removeProperty(prop)
continue
prop.setValue("urn:uuid:%s" % (record['guid'],))
if prop.hasParameter('X-CALENDARSERVER-EMAIL'):
prop.replaceParameter(Parameter('X-CALENDARSERVER-EMAIL', record['email']))
else:
prop.removeParameters('EMAIL')
prop.addParameter(Parameter('EMAIL', record['email']))
prop.removeParameters('CN')
prop.addParameter(Parameter('CN', record['name']))
except KeyError:
pass
# Replace with anonymized text:
for propName in ('summary', 'location', 'description'):
try:
for prop in comp.getProperties(propName):
prop.setValue(anonymize(prop.getValue().getValue()))
except KeyError:
pass
# Replace with anonymized URL:
try:
for prop in comp.getProperties('url'):
prop.setValue("http://example.com/%s/" % (anonymize(prop.getValue().getValue()),))
except KeyError:
pass
# Remove properties:
for propName in ('x-apple-dropbox', 'attach'):
try:
for prop in tuple(comp.getProperties(propName)):
comp.removeProperty(prop)
except KeyError:
pass
return pyobj.getText(includeTimezones=Calendar.ALL_TIMEZONES)
class DirectoryMap(object):
def __init__(self, node):
self.map = {}
self.byType = {
'users' : [],
'groups' : [],
'locations' : [],
'resources' : [],
}
self.counts = {
'users' : 0,
'groups' : 0,
'locations' : 0,
'resources' : 0,
'unknown' : 0,
}
self.strings = {
'users' : ('Users', 'user'),
'groups' : ('Groups', 'group'),
'locations' : ('Places', 'location'),
'resources' : ('Resources', 'resource'),
}
print("Fetching records from directory: %s" % (node,))
for internalType, (recordType, _ignore_friendlyType) in self.strings.iteritems():
print(" %s..." % (internalType,))
child = Popen(
args=[
"/usr/bin/dscl", "-plist", node, "-readall",
"/%s" % (recordType,),
"GeneratedUID", "RecordName", "EMailAddress", "GroupMembers"
],
stdout=PIPE, stderr=STDOUT,
)
output, error = child.communicate()
if child.returncode:
raise DirectoryError(error)
else:
records = readPlistFromString(output)
random.shuffle(records) # so we don't go alphabetically
for record in records:
origGUID = record.get('dsAttrTypeStandard:GeneratedUID', [None])[0]
if not origGUID:
continue
origRecordNames = record['dsAttrTypeStandard:RecordName']
origEmails = record.get('dsAttrTypeStandard:EMailAddress', [])
origMembers = record.get('dsAttrTypeStandard:GroupMembers', [])
self.addRecord(
internalType=internalType, guid=origGUID,
names=origRecordNames, emails=origEmails,
members=origMembers)
print("Done.")
print("")
def addRecord(
self, internalType="users", guid=None, names=None,
emails=None, members=None, cua=None
):
if cua:
keys = [self.cua2key(cua)]
self.counts['unknown'] += 1
else:
keys = self.getKeys(guid, names, emails)
if keys:
self.counts[internalType] += 1
count = self.counts[internalType]
namePrefix = randomName(6)
typeStr = self.strings[internalType][1]
record = {
'guid' : str(uuid.uuid4()).upper(),
'name' : "%s %s%d" % (namePrefix, typeStr, count,),
'first' : namePrefix,
'last' : "%s%d" % (typeStr, count,),
'recordName' : "%s%d" % (typeStr, count,),
'email' : ("%s%d@example.com" % (typeStr, count,)),
'type' : self.strings[internalType][0],
'cua' : cua,
'members' : members,
}
for key in keys:
self.map[key] = record
self.byType[internalType].append(record)
return record
else:
return None
def getKeys(self, guid, names, emails):
keys = []
if guid:
keys.append(guid.lower())
if names:
for name in names:
try:
name = name.encode('utf-8')
name = urllib.quote(name).lower()
keys.append(name)
except Exception:
# print("Failed to urllib.quote( ) %s. Skipping." % (name,))
pass
if emails:
for email in emails:
email = email.lower()
keys.append(email)
return keys
def cua2key(self, cua):
key = cua.lower()
if key.startswith("mailto:"):
key = key[7:]
elif key.startswith("urn:uuid:"):
key = key[9:]
elif (key.startswith("/") or key.startswith("http")):
key = key.rstrip("/")
key = key.split("/")[-1]
return key
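The calendar user address forms handled by `cua2key` above (mailto:, urn:uuid:, and principal-URL paths) can be exercised in isolation with a standalone sketch of the same normalization (`normalize_cua` is a hypothetical free-function rewrite, not part of this script):

```python
def normalize_cua(cua):
    # Lowercase, then strip the scheme or path prefix to get a bare lookup key,
    # mirroring DirectoryMap.cua2key above.
    key = cua.lower()
    if key.startswith("mailto:"):
        key = key[len("mailto:"):]
    elif key.startswith("urn:uuid:"):
        key = key[len("urn:uuid:"):]
    elif key.startswith("/") or key.startswith("http"):
        # Principal URLs: keep only the last path segment.
        key = key.rstrip("/").split("/")[-1]
    return key

print(normalize_cua("mailto:User@Example.com"))      # user@example.com
print(normalize_cua("/principals/users/wsanchez/"))  # wsanchez
```

All three forms collapse to the same kind of key, which is why one `map` dict can serve GUIDs, record names, and email addresses alike.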
def lookupCUA(self, cua):
key = self.cua2key(cua)
if key and key in self.map:
return self.map[key]
else:
return None
def printStats(self):
print("Directory totals:")
for internalType, (recordType, ignore) in self.strings.iteritems():
print(" %s: %d" % (recordType, self.counts[internalType]))
unknown = self.counts['unknown']
if unknown:
print(" Principals not found in directory: %d" % (unknown,))
def dumpDsImports(self, dirPath):
if not os.path.exists(dirPath):
os.makedirs(dirPath)
uid = 1000000
filePath = os.path.join(dirPath, "users.dsimport")
with open(filePath, "w") as out:
out.write("0x0A 0x5C 0x3A 0x2C dsRecTypeStandard:Users 12 dsAttrTypeStandard:RecordName dsAttrTypeStandard:AuthMethod dsAttrTypeStandard:Password dsAttrTypeStandard:UniqueID dsAttrTypeStandard:GeneratedUID dsAttrTypeStandard:PrimaryGroupID dsAttrTypeStandard:RealName dsAttrTypeStandard:FirstName dsAttrTypeStandard:LastName dsAttrTypeStandard:NFSHomeDirectory dsAttrTypeStandard:UserShell dsAttrTypeStandard:EMailAddress\n")
for record in self.byType['users']:
fields = []
fields.append(record['recordName'])
fields.append("dsAuthMethodStandard\\:dsAuthClearText")
fields.append("test") # password
fields.append(str(uid))
fields.append(record['guid'])
fields.append("20") # primary group id
fields.append(record['name'])
fields.append(record['first'])
fields.append(record['last'])
fields.append("/var/empty")
fields.append("/usr/bin/false")
fields.append(record['email'])
out.write(":".join(fields))
out.write("\n")
uid += 1
gid = 2000000
filePath = os.path.join(dirPath, "groups.dsimport")
with open(filePath, "w") as out:
out.write("0x0A 0x5C 0x3A 0x2C dsRecTypeStandard:Groups 5 dsAttrTypeStandard:RecordName dsAttrTypeStandard:PrimaryGroupID dsAttrTypeStandard:GeneratedUID dsAttrTypeStandard:RealName dsAttrTypeStandard:GroupMembership\n")
for record in self.byType['groups']:
fields = []
fields.append(record['recordName'])
fields.append(str(gid))
fields.append(record['guid'])
fields.append(record['name'])
anonMembers = []
for member in record['members']:
memberRec = self.lookupCUA("urn:uuid:%s" % (member,))
if memberRec:
anonMembers.append(memberRec['guid'])
if anonMembers: # skip empty groups
fields.append(",".join(anonMembers))
out.write(":".join(fields))
out.write("\n")
gid += 1
filePath = os.path.join(dirPath, "resources.dsimport")
with open(filePath, "w") as out:
out.write("0x0A 0x5C 0x3A 0x2C dsRecTypeStandard:Resources 3 dsAttrTypeStandard:RecordName dsAttrTypeStandard:GeneratedUID dsAttrTypeStandard:RealName\n")
for record in self.byType['resources']:
fields = []
fields.append(record['recordName'])
fields.append(record['guid'])
fields.append(record['name'])
out.write(":".join(fields))
out.write("\n")
filePath = os.path.join(dirPath, "places.dsimport")
with open(filePath, "w") as out:
out.write("0x0A 0x5C 0x3A 0x2C dsRecTypeStandard:Places 3 dsAttrTypeStandard:RecordName dsAttrTypeStandard:GeneratedUID dsAttrTypeStandard:RealName\n")
for record in self.byType['locations']:
fields = []
fields.append(record['recordName'])
fields.append(record['guid'])
fields.append(record['name'])
out.write(":".join(fields))
out.write("\n")
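The dsimport headers written above begin with the delimiter declaration `0x0A 0x5C 0x3A 0x2C` (record separator, escape character, field separator, value separator) followed by the record type and the ordered attribute list; each record is then just a colon-joined line. A sketch of assembling one such line (all field values here are made up):

```python
# Attribute order must match the attribute list declared in the file header.
fields = [
    "user1",                                   # RecordName
    "dsAuthMethodStandard\\:dsAuthClearText",  # AuthMethod (embedded colon escaped with 0x5C)
    "test",                                    # Password
    "1000000",                                 # UniqueID
    "0C8BDE62-0000-0000-0000-000000000001",    # GeneratedUID (made up)
    "20",                                      # PrimaryGroupID
]
record_line = ":".join(fields) + "\n"
print(record_line)
```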
class DirectoryError(Exception):
"""
Error trying to access dscl
"""
class DatabaseError(Exception):
"""
Error trying to access sqlite3
"""
def anonymize(text):
"""
Return a string whose value is the hex digest of text, repeated as needed
to create a string of the same length as text.
Useful for anonymizing strings in a deterministic manner.
"""
if isinstance(text, unicode):
try:
text = text.encode('utf-8')
except UnicodeEncodeError:
print("Failed to anonymize:", text)
text = "Anonymize me!"
h = hashlib.md5(text)
h = h.hexdigest()
l = len(text)
return (h * ((l / 32) + 1))[:-(32 - (l % 32))]
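The digest-tiling trick in `anonymize()` can be restated in Python 3 (a sketch, not the script's own code; the `[:n]` slice is equivalent to the negative-slice arithmetic above, since the digest is tiled to at least `n` characters):

```python
import hashlib

def anonymize3(text):
    # Tile the 32-char hex MD5 digest of the text until it covers len(text),
    # then trim to exactly the original length. Same input -> same output.
    digest = hashlib.md5(text.encode("utf-8")).hexdigest()
    n = len(text)
    return (digest * (n // 32 + 1))[:n]

s = anonymize3("Weekly staff meeting")
print(len(s))                                   # 20 -- same length as the input
print(s == anonymize3("Weekly staff meeting"))  # True -- deterministic
```

Length preservation matters here because the anonymized files must exercise roughly the same quota and index behavior as the originals.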
nameChars = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ"
def randomName(length):
l = []
for _ignore in xrange(length):
l.append(random.choice(nameChars))
return "".join(l)
if __name__ == "__main__":
main()
calendarserver-7.0+dfsg/calendarserver/tools/calverify.py
#!/usr/bin/env python
# -*- test-case-name: calendarserver.tools.test.test_calverify -*-
##
# Copyright (c) 2011-2015 Apple Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
##
from __future__ import print_function
"""
This tool scans the calendar store to analyze organizer/attendee event
states to verify that the organizer's view of attendee state matches up
with the attendees' views. It can optionally apply a fix to bring the two
views back into line.
In theory the implicit scheduling model should eliminate the possibility
of mismatches, however, because we store separate resources for organizer
and attendee events, there is a possibility of mismatch. This is greatly
lessened via the new transaction model of database changes, but it is
possible there are edge cases or actual implicit processing errors we have
missed. This tool will allow us to track mismatches to help determine these
errors and get them fixed.
Even in the long term if we move to a "single instance" store where the
organizer event resource is the only one we store (with attendee views
derived from that), in a situation where we have server-to-server scheduling
it is possible for mismatches to creep in. In that case having a way to analyze
multiple DBs for inconsistency would be good too.
"""
import collections
import sys
import time
import traceback
from uuid import uuid4
from calendarserver.tools import tables
from calendarserver.tools.cmdline import utilityMain, WorkerService
from pycalendar.datetime import DateTime
from pycalendar.exceptions import ErrorBase
from pycalendar.icalendar import definitions
from pycalendar.icalendar.calendar import Calendar
from pycalendar.period import Period
from pycalendar.timezone import Timezone
from twext.enterprise.dal.syntax import Select, Parameter, Count
from twext.python.log import Logger
from twisted.internet.defer import inlineCallbacks, returnValue
from twisted.python import usage
from twisted.python.usage import Options
from twistedcaldav.datafilters.peruserdata import PerUserDataFilter
from twistedcaldav.dateops import pyCalendarToSQLTimestamp
from twistedcaldav.ical import Component, InvalidICalendarDataError, Property, PERUSER_COMPONENT
from twistedcaldav.stdconfig import DEFAULT_CONFIG_FILE
from twistedcaldav.timezones import TimezoneCache
from txdav.caldav.datastore.scheduling.icalsplitter import iCalSplitter
from txdav.caldav.datastore.scheduling.implicit import ImplicitScheduler
from txdav.caldav.datastore.scheduling.itip import iTipGenerator
from txdav.caldav.datastore.sql import CalendarStoreFeatures
from txdav.caldav.datastore.util import normalizationLookup
from txdav.caldav.icalendarstore import ComponentUpdateState
from txdav.common.datastore.sql_tables import schema, _BIND_MODE_OWN
from txdav.common.icommondatastore import InternalDataStoreError
from txdav.who.idirectory import (
RecordType as CalRecordType, AutoScheduleMode
)
log = Logger()
# Monkey patch
def new_validRecurrenceIDs(self, doFix=True):
fixed = []
unfixed = []
# Detect invalid occurrences and fix by adding RDATEs for them
master = self.masterComponent()
if master is not None:
# Get the set of all recurrence IDs
all_rids = set(self.getComponentInstances())
if None in all_rids:
all_rids.remove(None)
# If the master has no recurrence properties treat any other components as invalid
if master.isRecurring():
# Remove all EXDATEs with a matching RECURRENCE-ID. Do this before we start
# processing of valid instances just in case the matching R-ID is also not valid and
# thus will need RDATE added.
exdates = {}
for property in list(master.properties("EXDATE")):
for exdate in property.value():
exdates[exdate.getValue()] = property
for rid in all_rids:
if rid in exdates:
if doFix:
property = exdates[rid]
for value in property.value():
if value.getValue() == rid:
property.value().remove(value)
break
master.removeProperty(property)
if len(property.value()) > 0:
master.addProperty(property)
del exdates[rid]
fixed.append("Removed EXDATE for valid override: %s" % (rid,))
else:
unfixed.append("EXDATE for valid override: %s" % (rid,))
# Get the set of all valid recurrence IDs
valid_rids = self.validInstances(all_rids, ignoreInvalidInstances=True)
# Get the set of all RDATEs and add those to the valid set
rdates = []
for property in master.properties("RDATE"):
rdates.extend([_rdate.getValue() for _rdate in property.value()])
valid_rids.update(set(rdates))
# Remove EXDATEs predating master
dtstart = master.propertyValue("DTSTART")
if dtstart is not None:
for property in list(master.properties("EXDATE")):
newValues = []
changed = False
for exdate in property.value():
exdateValue = exdate.getValue()
if exdateValue < dtstart:
if doFix:
fixed.append("Removed earlier EXDATE: %s" % (exdateValue,))
else:
unfixed.append("EXDATE earlier than master: %s" % (exdateValue,))
changed = True
else:
newValues.append(exdateValue)
if changed and doFix:
# Remove the property...
master.removeProperty(property)
if newValues:
# ...and add it back only if it still has values
property.setValue(newValues)
master.addProperty(property)
else:
valid_rids = set()
# Determine the invalid recurrence IDs by set subtraction
invalid_rids = all_rids - valid_rids
# Add RDATEs for the invalid ones, or remove any EXDATE.
for invalid_rid in invalid_rids:
brokenComponent = self.overriddenComponent(invalid_rid)
brokenRID = brokenComponent.propertyValue("RECURRENCE-ID")
if doFix:
master.addProperty(Property("RDATE", [brokenRID, ]))
fixed.append(
"Added RDATE for invalid occurrence: %s" %
(brokenRID,))
else:
unfixed.append("Invalid occurrence: %s" % (brokenRID,))
return fixed, unfixed
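The invalid-override computation above is plain set subtraction: every RECURRENCE-ID present in the data, minus those the master's recurrence rules (plus RDATEs) can actually generate. A minimal sketch with made-up recurrence IDs:

```python
# All RECURRENCE-IDs found as overridden components in the calendar object.
all_rids = {"20150101T100000Z", "20150108T100000Z", "20150115T103000Z"}

# IDs the master component's RRULE/RDATE set can actually produce.
valid_rids = {"20150101T100000Z", "20150108T100000Z"}

# Anything left over needs an RDATE added (the doFix branch above).
invalid_rids = all_rids - valid_rids
print(sorted(invalid_rids))  # ['20150115T103000Z']
```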
def new_hasDuplicateAlarms(self, doFix=False):
"""
test and optionally remove alarms that have the same ACTION and TRIGGER values in the same component.
"""
changed = False
if self.name() in ("VCALENDAR", PERUSER_COMPONENT,):
for component in self.subcomponents():
if component.name() in ("VTIMEZONE",):
continue
changed = component.hasDuplicateAlarms(doFix) or changed
else:
action_trigger = set()
for component in tuple(self.subcomponents()):
if component.name() == "VALARM":
item = (component.propertyValue("ACTION"), component.propertyValue("TRIGGER"),)
if item in action_trigger:
if doFix:
self.removeComponent(component)
changed = True
else:
action_trigger.add(item)
return changed
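The duplicate-alarm check above is a first-wins dedup keyed on the (ACTION, TRIGGER) pair. The same pattern in isolation (a sketch with hypothetical names, not the Component API):

```python
def dedupe_first_wins(items, key):
    # Keep the first item seen for each key; later duplicates are dropped,
    # just as repeated VALARMs with the same ACTION/TRIGGER are removed above.
    seen = set()
    kept = []
    for item in items:
        k = key(item)
        if k not in seen:
            seen.add(k)
            kept.append(item)
    return kept

alarms = [("AUDIO", "-PT5M"), ("AUDIO", "-PT5M"), ("DISPLAY", "-PT5M")]
print(dedupe_first_wins(alarms, key=lambda a: a))
# [('AUDIO', '-PT5M'), ('DISPLAY', '-PT5M')]
```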
Component.validRecurrenceIDs = new_validRecurrenceIDs
if not hasattr(Component, "maxAlarmCounts"):
Component.hasDuplicateAlarms = new_hasDuplicateAlarms
VERSION = "12"
def printusage(e=None):
if e:
print(e)
print("")
try:
CalVerifyOptions().opt_help()
except SystemExit:
pass
if e:
sys.exit(64)
else:
sys.exit(0)
description = """
Usage: calendarserver_verify_data [options]
Version: %s
This tool scans the calendar store to look for and correct any
problems.
OPTIONS:
Modes of operation:
-h : print help and exit.
--ical : verify iCalendar data.
--mismatch : verify scheduling state.
--missing : display orphaned calendar homes - can be used
with either --ical or --mismatch.
--double : detect double-bookings.
--dark-purge : purge room/resource events with invalid organizer
--split : split recurring event
--nuke PATH|RID : remove specific calendar resources - can
only be used by itself. PATH is the full
/calendars/__uids__/XXX/YYY/ZZZ.ics object
resource path, RID is the SQL DB resource-id.
Options for all modes:
--fix : changes are only made when this is present.
--config : caldavd.plist file for the server.
-v : verbose logging
Options for --ical:
--uuid : only scan specified calendar homes. Can be a partial GUID
to scan all GUIDs with that as a prefix.
--uid : scan only calendar data with the specific iCalendar UID.
Options for --mismatch:
--uid : look for mismatches with the specified iCalendar UID only.
--details : log extended details on each mismatch.
--tzid : timezone to adjust details to.
Options for --double:
--uuid : only scan specified calendar homes. Can be a partial GUID
to scan all GUIDs with that as a prefix or "*" for all GUIDS
(that are marked as resources or locations in the directory).
--tzid : timezone to adjust details to.
--summary : report only which GUIDs have double-bookings - no details.
--days : number of days ahead to scan [DEFAULT: 365]
Options for --dark-purge:
--uuid : only scan specified calendar homes. Can be a partial GUID
to scan all GUIDs with that as a prefix or "*" for all GUIDS
(that are marked as resources or locations in the directory).
--summary : report only which GUIDs have dark events - no details.
--no-organizer : only detect events without an organizer
--invalid-organizer : only detect events with an organizer not in the directory
--disabled-organizer : only detect events with an organizer disabled for calendaring
If none of (--no-organizer, --invalid-organizer, --disabled-organizer) is present, it
will default to (--invalid-organizer, --disabled-organizer).
Options for --split:
--path : URI path to resource to split.
--rid : UTC date-time where split occurs (YYYYMMDDTHHMMSSZ).
--summary : only print a list of recurrences in the resource - no splitting.
CHANGES
v8: Detects ORGANIZER or ATTENDEE properties with mailto: calendar user
addresses for users that have valid directory records. Fix is to
replace the value with a urn:x-uid: form.
v9: Detects double-bookings.
v10: Purges data for invalid users.
v11: Allows manual splitting of recurring events.
v12: Fix double-booking false positives caused by timezones-by-reference.
""" % (VERSION,)
def safePercent(x, y, multiplier=100.0):
return ((multiplier * x) / y) if y else 0
class CalVerifyOptions(Options):
"""
Command-line options for 'calendarserver_verify_data'
"""
synopsis = description
optFlags = [
['ical', 'i', "Calendar data check."],
['debug', 'D', "Debug logging."],
['mismatch', 's', "Detect organizer/attendee mismatches."],
['missing', 'm', "Show 'orphaned' homes."],
['double', 'd', "Detect double-bookings."],
['dark-purge', 'p', "Purge room/resource events with invalid organizer."],
['split', 'l', "Split an event."],
['fix', 'x', "Fix problems."],
['verbose', 'v', "Verbose logging."],
['details', 'V', "Detailed logging."],
['summary', 'S', "Summary of double-bookings/split."],
['no-organizer', '', "Detect dark events without an organizer"],
['invalid-organizer', '', "Detect dark events with an organizer not in the directory"],
['disabled-organizer', '', "Detect dark events with a disabled organizer"],
]
optParameters = [
['config', 'f', DEFAULT_CONFIG_FILE, "Specify caldavd.plist configuration path."],
['uuid', 'u', "", "Only check this user."],
['uid', 'U', "", "Only this event UID."],
['nuke', 'e', "", "Remove event given its path."],
['days', 'T', "365", "Number of days for scanning events into the future."],
['tzid', 't', "", "Timezone to adjust displayed times to."],
['path', '', "", "Split event given its path."],
['rid', '', "", "Split date-time."],
]
def __init__(self):
super(CalVerifyOptions, self).__init__()
self.outputName = '-'
def getUsage(self, width=None):
return ""
def opt_output(self, filename):
"""
Specify output file path (default: '-', meaning stdout).
"""
self.outputName = filename
opt_o = opt_output
def openOutput(self):
"""
Open the appropriate output file based on the '--output' option.
"""
if self.outputName == '-':
return sys.stdout
else:
return open(self.outputName, 'wb')
class CalVerifyService(WorkerService, object):
"""
Base class for common service behaviors.
"""
def __init__(self, store, options, output, reactor, config):
super(CalVerifyService, self).__init__(store)
self.options = options
self.output = output
self.reactor = reactor
self.config = config
self._directory = store.directoryService()
self._principalCollection = self.rootResource().getChild("principals")
self.cuaCache = {}
self.results = {}
self.summary = []
self.total = 0
self.totalErrors = None
self.totalExceptions = None
TimezoneCache.create()
def title(self):
return ""
@inlineCallbacks
def doWork(self):
"""
Do the operation stopping the reactor when done.
"""
self.output.write("\n---- CalVerify %s version: %s ----\n" % (self.title(), VERSION,))
try:
yield self.doAction()
self.output.close()
except:
log.failure("doWork()")
def directoryService(self):
"""
Return the directory service
"""
return self._directory
@inlineCallbacks
def getAllHomeUIDs(self):
ch = schema.CALENDAR_HOME
rows = (yield Select(
[ch.OWNER_UID, ],
From=ch,
).on(self.txn))
returnValue(tuple([uid[0] for uid in rows]))
@inlineCallbacks
def getMatchingHomeUIDs(self, uuid):
ch = schema.CALENDAR_HOME
kwds = {"uuid": uuid}
rows = (yield Select(
[ch.OWNER_UID, ],
From=ch,
Where=(ch.OWNER_UID.StartsWith(Parameter("uuid"))),
).on(self.txn, **kwds))
returnValue(tuple([uid[0] for uid in rows]))
@inlineCallbacks
def countHomeContents(self, uid):
ch = schema.CALENDAR_HOME
cb = schema.CALENDAR_BIND
co = schema.CALENDAR_OBJECT
kwds = {"UID" : uid}
rows = (yield Select(
[Count(co.RESOURCE_ID), ],
From=ch.join(
cb, type="inner", on=(ch.RESOURCE_ID == cb.CALENDAR_HOME_RESOURCE_ID).And(
cb.BIND_MODE == _BIND_MODE_OWN)).join(
co, type="inner", on=(cb.CALENDAR_RESOURCE_ID == co.CALENDAR_RESOURCE_ID)),
Where=(ch.OWNER_UID == Parameter("UID"))
).on(self.txn, **kwds))
returnValue(int(rows[0][0]) if rows else 0)
@inlineCallbacks
def getAllResourceInfo(self, inbox=False):
co = schema.CALENDAR_OBJECT
cb = schema.CALENDAR_BIND
ch = schema.CALENDAR_HOME
if inbox:
cojoin = (cb.CALENDAR_RESOURCE_ID == co.CALENDAR_RESOURCE_ID).And(
cb.BIND_MODE == _BIND_MODE_OWN)
else:
cojoin = (cb.CALENDAR_RESOURCE_ID == co.CALENDAR_RESOURCE_ID).And(
cb.BIND_MODE == _BIND_MODE_OWN).And(
cb.CALENDAR_RESOURCE_NAME != "inbox")
kwds = {}
rows = (yield Select(
[ch.OWNER_UID, co.RESOURCE_ID, co.ICALENDAR_UID, cb.CALENDAR_RESOURCE_NAME, co.MD5, co.ORGANIZER, co.CREATED, co.MODIFIED],
From=ch.join(
cb, type="inner", on=(ch.RESOURCE_ID == cb.CALENDAR_HOME_RESOURCE_ID)).join(
co, type="inner", on=cojoin),
GroupBy=(ch.OWNER_UID, co.RESOURCE_ID, co.ICALENDAR_UID, cb.CALENDAR_RESOURCE_NAME, co.MD5, co.ORGANIZER, co.CREATED, co.MODIFIED,),
).on(self.txn, **kwds))
returnValue(tuple(rows))
@inlineCallbacks
def getAllResourceInfoWithUUID(self, uuid, inbox=False):
co = schema.CALENDAR_OBJECT
cb = schema.CALENDAR_BIND
ch = schema.CALENDAR_HOME
if inbox:
cojoin = (cb.CALENDAR_RESOURCE_ID == co.CALENDAR_RESOURCE_ID).And(
cb.BIND_MODE == _BIND_MODE_OWN)
else:
cojoin = (cb.CALENDAR_RESOURCE_ID == co.CALENDAR_RESOURCE_ID).And(
cb.BIND_MODE == _BIND_MODE_OWN).And(
cb.CALENDAR_RESOURCE_NAME != "inbox")
kwds = {"uuid": uuid}
if len(uuid) != 36:
where = (ch.OWNER_UID.StartsWith(Parameter("uuid")))
else:
where = (ch.OWNER_UID == Parameter("uuid"))
rows = (yield Select(
[ch.OWNER_UID, co.RESOURCE_ID, co.ICALENDAR_UID, cb.CALENDAR_RESOURCE_NAME, co.MD5, co.ORGANIZER, co.CREATED, co.MODIFIED],
From=ch.join(
cb, type="inner", on=(ch.RESOURCE_ID == cb.CALENDAR_HOME_RESOURCE_ID)).join(
co, type="inner", on=cojoin),
Where=where,
GroupBy=(ch.OWNER_UID, co.RESOURCE_ID, co.ICALENDAR_UID, cb.CALENDAR_RESOURCE_NAME, co.MD5, co.ORGANIZER, co.CREATED, co.MODIFIED,),
).on(self.txn, **kwds))
returnValue(tuple(rows))
@inlineCallbacks
def getAllResourceInfoTimeRange(self, start):
co = schema.CALENDAR_OBJECT
cb = schema.CALENDAR_BIND
ch = schema.CALENDAR_HOME
tr = schema.TIME_RANGE
kwds = {
"Start" : pyCalendarToSQLTimestamp(start),
"Max" : pyCalendarToSQLTimestamp(DateTime(1900, 1, 1, 0, 0, 0))
}
rows = (yield Select(
[ch.OWNER_UID, co.RESOURCE_ID, co.ICALENDAR_UID, cb.CALENDAR_RESOURCE_NAME, co.MD5, co.ORGANIZER, co.CREATED, co.MODIFIED],
From=ch.join(
cb, type="inner", on=(ch.RESOURCE_ID == cb.CALENDAR_HOME_RESOURCE_ID)).join(
co, type="inner", on=(cb.CALENDAR_RESOURCE_ID == co.CALENDAR_RESOURCE_ID).And(
cb.BIND_MODE == _BIND_MODE_OWN).And(
cb.CALENDAR_RESOURCE_NAME != "inbox").And(
co.ORGANIZER != "")).join(
tr, type="left", on=(co.RESOURCE_ID == tr.CALENDAR_OBJECT_RESOURCE_ID)),
Where=(tr.START_DATE >= Parameter("Start")).Or(co.RECURRANCE_MAX <= Parameter("Start")),
GroupBy=(ch.OWNER_UID, co.RESOURCE_ID, co.ICALENDAR_UID, cb.CALENDAR_RESOURCE_NAME, co.MD5, co.ORGANIZER, co.CREATED, co.MODIFIED,),
).on(self.txn, **kwds))
returnValue(tuple(rows))
@inlineCallbacks
def getAllResourceInfoWithUID(self, uid, inbox=False):
co = schema.CALENDAR_OBJECT
cb = schema.CALENDAR_BIND
ch = schema.CALENDAR_HOME
if inbox:
cojoin = (cb.CALENDAR_RESOURCE_ID == co.CALENDAR_RESOURCE_ID).And(
cb.BIND_MODE == _BIND_MODE_OWN)
else:
cojoin = (cb.CALENDAR_RESOURCE_ID == co.CALENDAR_RESOURCE_ID).And(
cb.BIND_MODE == _BIND_MODE_OWN).And(
cb.CALENDAR_RESOURCE_NAME != "inbox")
kwds = {
"UID" : uid,
}
rows = (yield Select(
[ch.OWNER_UID, co.RESOURCE_ID, co.ICALENDAR_UID, cb.CALENDAR_RESOURCE_NAME, co.MD5, co.ORGANIZER, co.CREATED, co.MODIFIED],
From=ch.join(
cb, type="inner", on=(ch.RESOURCE_ID == cb.CALENDAR_HOME_RESOURCE_ID)).join(
co, type="inner", on=cojoin),
Where=(co.ICALENDAR_UID == Parameter("UID")),
GroupBy=(ch.OWNER_UID, co.RESOURCE_ID, co.ICALENDAR_UID, cb.CALENDAR_RESOURCE_NAME, co.MD5, co.ORGANIZER, co.CREATED, co.MODIFIED,),
).on(self.txn, **kwds))
returnValue(tuple(rows))
@inlineCallbacks
def getAllResourceInfoTimeRangeWithUUID(self, start, uuid):
co = schema.CALENDAR_OBJECT
cb = schema.CALENDAR_BIND
ch = schema.CALENDAR_HOME
tr = schema.TIME_RANGE
kwds = {
"Start" : pyCalendarToSQLTimestamp(start),
"Max" : pyCalendarToSQLTimestamp(DateTime(1900, 1, 1, 0, 0, 0)),
"UUID" : uuid,
}
rows = (yield Select(
[ch.OWNER_UID, co.RESOURCE_ID, co.ICALENDAR_UID, cb.CALENDAR_RESOURCE_NAME, co.MD5, co.ORGANIZER, co.CREATED, co.MODIFIED],
From=ch.join(
cb, type="inner", on=(ch.RESOURCE_ID == cb.CALENDAR_HOME_RESOURCE_ID)).join(
co, type="inner", on=(cb.CALENDAR_RESOURCE_ID == co.CALENDAR_RESOURCE_ID).And(
cb.BIND_MODE == _BIND_MODE_OWN).And(
cb.CALENDAR_RESOURCE_NAME != "inbox")).join(
tr, type="left", on=(co.RESOURCE_ID == tr.CALENDAR_OBJECT_RESOURCE_ID)),
Where=(ch.OWNER_UID == Parameter("UUID")).And((tr.START_DATE >= Parameter("Start")).Or(co.RECURRANCE_MAX <= Parameter("Start"))),
GroupBy=(ch.OWNER_UID, co.RESOURCE_ID, co.ICALENDAR_UID, cb.CALENDAR_RESOURCE_NAME, co.MD5, co.ORGANIZER, co.CREATED, co.MODIFIED,),
).on(self.txn, **kwds))
returnValue(tuple(rows))
@inlineCallbacks
def getAllResourceInfoTimeRangeWithUUIDForAllUID(self, start, uuid):
co = schema.CALENDAR_OBJECT
cb = schema.CALENDAR_BIND
ch = schema.CALENDAR_HOME
tr = schema.TIME_RANGE
cojoin = (cb.CALENDAR_RESOURCE_ID == co.CALENDAR_RESOURCE_ID).And(
cb.BIND_MODE == _BIND_MODE_OWN).And(
cb.CALENDAR_RESOURCE_NAME != "inbox")
kwds = {
"Start" : pyCalendarToSQLTimestamp(start),
"Max" : pyCalendarToSQLTimestamp(DateTime(1900, 1, 1, 0, 0, 0)),
"UUID" : uuid,
}
rows = (yield Select(
[ch.OWNER_UID, co.RESOURCE_ID, co.ICALENDAR_UID, cb.CALENDAR_RESOURCE_NAME, co.MD5, co.ORGANIZER, co.CREATED, co.MODIFIED],
From=ch.join(
cb, type="inner", on=(ch.RESOURCE_ID == cb.CALENDAR_HOME_RESOURCE_ID)).join(
co, type="inner", on=cojoin),
Where=(co.ICALENDAR_UID.In(Select(
[co.ICALENDAR_UID],
From=ch.join(
cb, type="inner", on=(ch.RESOURCE_ID == cb.CALENDAR_HOME_RESOURCE_ID)).join(
co, type="inner", on=(cb.CALENDAR_RESOURCE_ID == co.CALENDAR_RESOURCE_ID).And(
cb.BIND_MODE == _BIND_MODE_OWN).And(
cb.CALENDAR_RESOURCE_NAME != "inbox").And(
co.ORGANIZER != "")).join(
tr, type="left", on=(co.RESOURCE_ID == tr.CALENDAR_OBJECT_RESOURCE_ID)),
Where=(ch.OWNER_UID == Parameter("UUID")).And((tr.START_DATE >= Parameter("Start")).Or(co.RECURRANCE_MAX <= Parameter("Start"))),
GroupBy=(ch.OWNER_UID, co.RESOURCE_ID, co.ICALENDAR_UID, cb.CALENDAR_RESOURCE_NAME, co.MD5, co.ORGANIZER, co.CREATED, co.MODIFIED,),
))),
GroupBy=(ch.OWNER_UID, co.RESOURCE_ID, co.ICALENDAR_UID, cb.CALENDAR_RESOURCE_NAME, co.MD5, co.ORGANIZER, co.CREATED, co.MODIFIED,),
).on(self.txn, **kwds))
returnValue(tuple(rows))
@inlineCallbacks
def getAllResourceInfoForResourceID(self, resid):
co = schema.CALENDAR_OBJECT
cb = schema.CALENDAR_BIND
ch = schema.CALENDAR_HOME
kwds = {"resid": resid}
rows = (yield Select(
[ch.RESOURCE_ID, cb.CALENDAR_RESOURCE_ID, ],
From=ch.join(
cb, type="inner", on=(ch.RESOURCE_ID == cb.CALENDAR_HOME_RESOURCE_ID)).join(
co, type="inner", on=(cb.CALENDAR_RESOURCE_ID == co.CALENDAR_RESOURCE_ID).And(
cb.BIND_MODE == _BIND_MODE_OWN)),
Where=(co.RESOURCE_ID == Parameter("resid")),
).on(self.txn, **kwds))
returnValue(rows[0])
@inlineCallbacks
def getResourceID(self, home, calendar, resource):
co = schema.CALENDAR_OBJECT
cb = schema.CALENDAR_BIND
ch = schema.CALENDAR_HOME
kwds = {
"home": home,
"calendar": calendar,
"resource": resource,
}
rows = (yield Select(
[co.RESOURCE_ID],
From=ch.join(
cb, type="inner", on=(ch.RESOURCE_ID == cb.CALENDAR_HOME_RESOURCE_ID)).join(
co, type="inner", on=(cb.CALENDAR_RESOURCE_ID == co.CALENDAR_RESOURCE_ID)),
Where=(ch.OWNER_UID == Parameter("home")).And(
cb.CALENDAR_RESOURCE_NAME == Parameter("calendar")).And(
co.RESOURCE_NAME == Parameter("resource")
),
).on(self.txn, **kwds))
returnValue(rows[0][0] if rows else None)
@inlineCallbacks
def getCalendar(self, resid, doFix=False):
co = schema.CALENDAR_OBJECT
kwds = {"ResourceID" : resid}
rows = (yield Select(
[co.ICALENDAR_TEXT],
From=co,
Where=(
co.RESOURCE_ID == Parameter("ResourceID")
),
).on(self.txn, **kwds))
try:
caldata = Calendar.parseText(rows[0][0]) if rows else None
except ErrorBase:
self.parseError = "Failed to parse"
returnValue(None)
self.parseError = None
returnValue(caldata)
@inlineCallbacks
def getCalendarForOwnerByUID(self, owner, uid):
co = schema.CALENDAR_OBJECT
cb = schema.CALENDAR_BIND
ch = schema.CALENDAR_HOME
kwds = {"OWNER": owner, "UID": uid}
rows = (yield Select(
[co.ICALENDAR_TEXT, co.RESOURCE_ID, co.CREATED, co.MODIFIED, ],
From=ch.join(
cb, type="inner", on=(ch.RESOURCE_ID == cb.CALENDAR_HOME_RESOURCE_ID)).join(
co, type="inner", on=(cb.CALENDAR_RESOURCE_ID == co.CALENDAR_RESOURCE_ID).And(
cb.BIND_MODE == _BIND_MODE_OWN).And(
cb.CALENDAR_RESOURCE_NAME != "inbox")),
Where=(ch.OWNER_UID == Parameter("OWNER")).And(co.ICALENDAR_UID == Parameter("UID")),
).on(self.txn, **kwds))
try:
caldata = Calendar.parseText(rows[0][0]) if rows else None
except ErrorBase:
returnValue((None, None, None, None,))
returnValue((caldata, rows[0][1], rows[0][2], rows[0][3],) if rows else (None, None, None, None,))
@inlineCallbacks
def removeEvent(self, resid):
"""
Remove the calendar resource specified by resid. This is a forced removal: no implicit
scheduling is required, so we use the store APIs directly.
"""
try:
homeID, calendarID = yield self.getAllResourceInfoForResourceID(resid)
home = yield self.txn.calendarHomeWithResourceID(homeID)
calendar = yield home.childWithID(calendarID)
calendarObj = yield calendar.objectResourceWithID(resid)
objname = calendarObj.name()
yield calendarObj.purge(implicitly=False)
yield self.txn.commit()
self.txn = self.store.newTransaction()
self.results.setdefault("Fix remove", set()).add((home.name(), calendar.name(), objname,))
returnValue(True)
except Exception as e:
print("Failed to remove resource whilst fixing: %d\n%s" % (resid, e,))
returnValue(False)
def logResult(self, key, value, total=None):
self.output.write("%s: %s\n" % (key, value,))
self.results[key] = value
self.addToSummary(key, value, total)
def addToSummary(self, title, count, total=None):
if total is not None:
percent = safePercent(count, total)
else:
percent = ""
self.summary.append((title, count, percent))
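`safePercent`, used throughout these summaries and progress reports, is not defined in this chunk; a minimal sketch of the helper as assumed from its call sites above (including the three-argument form used later with a `1000.0` factor) would be:

```python
def safePercent(x, total, factor=100.0):
    # Guard against division by zero when a scan finds no rows at all.
    return factor * x / total if total else 0.0
```

This is an illustration of the assumed behavior, not the project's actual implementation.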
def addSummaryBreak(self):
self.summary.append(None)
def printSummary(self):
# Print summary of results
table = tables.Table()
table.addHeader(("Item", "Count", "%"))
table.setDefaultColumnFormats(
(
tables.Table.ColumnFormat("%s", tables.Table.ColumnFormat.LEFT_JUSTIFY),
tables.Table.ColumnFormat("%s", tables.Table.ColumnFormat.RIGHT_JUSTIFY),
tables.Table.ColumnFormat("%.1f%%", tables.Table.ColumnFormat.RIGHT_JUSTIFY),
)
)
for item in self.summary:
table.addRow(item)
if self.totalErrors is not None:
table.addRow(None)
table.addRow(("Total Errors", self.totalErrors, safePercent(self.totalErrors, self.total),))
self.output.write("\n")
self.output.write("Overall Summary:\n")
table.printTable(os=self.output)
class NukeService(CalVerifyService):
"""
Service which removes specific events.
"""
def title(self):
return "Nuke Service"
@inlineCallbacks
def doAction(self):
"""
Remove a resource using either its path or its resource id. When doing this, do not
read the iCalendar data, which may be corrupt.
"""
self.output.write("\n---- Removing calendar resource ----\n")
self.txn = self.store.newTransaction()
nuke = self.options["nuke"]
if nuke.startswith("/calendars/__uids__/"):
pathbits = nuke.split("/")
if len(pathbits) != 6:
printusage("Not a valid calendar object resource path: %s" % (nuke,))
homeName = pathbits[3]
calendarName = pathbits[4]
resourceName = pathbits[5]
rid = yield self.getResourceID(homeName, calendarName, resourceName)
if rid is None:
yield self.txn.commit()
self.txn = None
self.output.write("\n")
self.output.write("Path does not exist. Nothing nuked.\n")
returnValue(None)
rid = int(rid)
else:
try:
rid = int(nuke)
except ValueError:
printusage("nuke argument must be a calendar object path or an SQL resource-id")
if self.options["fix"]:
result = yield self.removeEvent(rid)
if result:
self.output.write("\n")
self.output.write("Removed resource: %s.\n" % (rid,))
else:
self.output.write("\n")
self.output.write("Resource: %s.\n" % (rid,))
yield self.txn.commit()
self.txn = None
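The path handling in `NukeService.doAction` above relies on a calendar object resource path splitting into exactly six components. As a quick illustration (the path below is hypothetical):

```python
# A calendar object resource path has the form
# /calendars/__uids__/<home>/<calendar>/<resource>
path = "/calendars/__uids__/user01/calendar/event.ics"  # hypothetical example
pathbits = path.split("/")
assert len(pathbits) == 6  # leading "/" yields an empty first component
homeName, calendarName, resourceName = pathbits[3], pathbits[4], pathbits[5]
```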
class OrphansService(CalVerifyService):
"""
Service which detects orphaned calendar homes.
"""
def title(self):
return "Orphans Service"
@inlineCallbacks
def doAction(self):
"""
Report on home collections for which there is no directory record, or whose record is for
a user on a different pod, or for a user not enabled for calendaring.
"""
self.output.write("\n---- Finding calendar homes with missing or disabled directory records ----\n")
self.txn = self.store.newTransaction()
if self.options["verbose"]:
t = time.time()
uids = yield self.getAllHomeUIDs()
if self.options["verbose"]:
self.output.write("getAllHomeUIDs time: %.1fs\n" % (time.time() - t,))
missing = []
wrong_server = []
disabled = []
uids_len = len(uids)
uids_div = 1 if uids_len < 100 else uids_len / 100
self.addToSummary("Total Homes", uids_len)
for ctr, uid in enumerate(uids):
if self.options["verbose"] and divmod(ctr, uids_div)[1] == 0:
self.output.write(("\r%d of %d (%d%%)" % (
ctr + 1,
uids_len,
((ctr + 1) * 100 / uids_len),
)).ljust(80))
self.output.flush()
record = yield self.directoryService().recordWithUID(uid)
if record is None:
contents = yield self.countHomeContents(uid)
missing.append((uid, contents,))
elif not record.thisServer():
contents = yield self.countHomeContents(uid)
wrong_server.append((uid, contents,))
elif not record.hasCalendars:
contents = yield self.countHomeContents(uid)
disabled.append((uid, contents,))
# To avoid holding locks on all the rows scanned, commit every 100 resources
if divmod(ctr, 100)[1] == 0:
yield self.txn.commit()
self.txn = self.store.newTransaction()
yield self.txn.commit()
self.txn = None
if self.options["verbose"]:
self.output.write("\r".ljust(80) + "\n")
# Print table of results
table = tables.Table()
table.addHeader(("Owner UID", "Calendar Objects"))
for uid, count in sorted(missing, key=lambda x: x[0]):
table.addRow((
uid,
count,
))
self.output.write("\n")
self.logResult("Homes without a matching directory record", len(missing), uids_len)
table.printTable(os=self.output)
# Print table of results
table = tables.Table()
table.addHeader(("Owner UID", "Calendar Objects"))
for uid, count in sorted(wrong_server, key=lambda x: x[0]):
record = yield self.directoryService().recordWithUID(uid)
table.addRow((
"%s/%s (%s)" % (record.recordType if record else "-", record.shortNames[0] if record else "-", uid,),
count,
))
self.output.write("\n")
self.logResult("Homes not hosted on this server", len(wrong_server), uids_len)
table.printTable(os=self.output)
# Print table of results
table = tables.Table()
table.addHeader(("Owner UID", "Calendar Objects"))
for uid, count in sorted(disabled, key=lambda x: x[0]):
record = yield self.directoryService().recordWithUID(uid)
table.addRow((
"%s/%s (%s)" % (record.recordType if record else "-", record.shortNames[0] if record else "-", uid,),
count,
))
self.output.write("\n")
self.logResult("Homes without an enabled directory record", len(disabled), uids_len)
table.printTable(os=self.output)
self.printSummary()
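The scan above commits its read transaction every 100 rows to avoid holding row locks across the whole table, and the same periodic-commit pattern recurs in the services below. In isolation it looks like the following sketch, where `txn_factory` and `process` are hypothetical stand-ins for the store's `newTransaction` and the per-row work:

```python
def scanWithPeriodicCommit(rows, txn_factory, process, batch=100):
    # Commit and reopen the transaction every `batch` rows so a long scan
    # does not pin row locks for its entire duration. Mirrors the
    # divmod(ctr, 100)[1] == 0 test above, which also commits on the first row.
    txn = txn_factory()
    for ctr, row in enumerate(rows):
        process(txn, row)
        if ctr % batch == 0:
            txn = txn.commit() or txn_factory()
    txn.commit()
```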
class BadDataService(CalVerifyService):
"""
Service which scans for bad calendar data.
"""
def title(self):
return "Bad Data Service"
@inlineCallbacks
def doAction(self):
self.output.write("\n---- Scanning calendar data ----\n")
self.now = DateTime.getNowUTC()
self.start = DateTime.getToday()
self.start.setDateOnly(False)
self.end = self.start.duplicate()
self.end.offsetYear(1)
self.fix = self.options["fix"]
self.tzid = Timezone(tzid=self.options["tzid"] if self.options["tzid"] else "America/Los_Angeles")
self.txn = self.store.newTransaction()
if self.options["verbose"]:
t = time.time()
descriptor = None
if self.options["uuid"]:
rows = yield self.getAllResourceInfoWithUUID(self.options["uuid"], inbox=True)
descriptor = "getAllResourceInfoWithUUID"
elif self.options["uid"]:
rows = yield self.getAllResourceInfoWithUID(self.options["uid"], inbox=True)
descriptor = "getAllResourceInfoWithUID"
else:
rows = yield self.getAllResourceInfo(inbox=True)
descriptor = "getAllResourceInfo"
yield self.txn.commit()
self.txn = None
if self.options["verbose"]:
self.output.write("%s time: %.1fs\n" % (descriptor, time.time() - t,))
self.total = len(rows)
self.logResult("Number of events to process", self.total)
self.addSummaryBreak()
yield self.calendarDataCheck(rows)
self.printSummary()
@inlineCallbacks
def calendarDataCheck(self, rows):
"""
Check each calendar resource for valid iCalendar data.
"""
self.output.write("\n---- Verifying each calendar object resource ----\n")
self.txn = self.store.newTransaction()
if self.options["verbose"]:
t = time.time()
results_bad = []
count = 0
total = len(rows)
badlen = 0
rjust = 10
for owner, resid, uid, calname, _ignore_md5, _ignore_organizer, _ignore_created, _ignore_modified in rows:
try:
result, message = yield self.validCalendarData(resid, calname == "inbox")
except Exception as e:
result = False
message = "Exception for validCalendarData"
if self.options["verbose"]:
print(e)
if not result:
results_bad.append((owner, uid, resid, message))
badlen += 1
count += 1
if self.options["verbose"]:
if count == 1:
self.output.write("Bad".rjust(rjust) + "Current".rjust(rjust) + "Total".rjust(rjust) + "Complete".rjust(rjust) + "\n")
if divmod(count, 100)[1] == 0:
self.output.write((
"\r" +
("%s" % badlen).rjust(rjust) +
("%s" % count).rjust(rjust) +
("%s" % total).rjust(rjust) +
("%d%%" % safePercent(count, total)).rjust(rjust)
).ljust(80))
self.output.flush()
# To avoid holding locks on all the rows scanned, commit every 100 resources
if divmod(count, 100)[1] == 0:
yield self.txn.commit()
self.txn = self.store.newTransaction()
yield self.txn.commit()
self.txn = None
if self.options["verbose"]:
self.output.write((
"\r" +
("%s" % badlen).rjust(rjust) +
("%s" % count).rjust(rjust) +
("%s" % total).rjust(rjust) +
("%d%%" % safePercent(count, total)).rjust(rjust)
).ljust(80) + "\n")
# Print table of results
table = tables.Table()
table.addHeader(("Owner", "Event UID", "RID", "Problem",))
for item in sorted(results_bad, key=lambda x: (x[0], x[1])):
owner, uid, resid, message = item
owner_record = yield self.directoryService().recordWithUID(owner)
table.addRow((
"%s/%s (%s)" % (owner_record.recordType if owner_record else "-", owner_record.shortNames[0] if owner_record else "-", owner,),
uid,
resid,
message,
))
self.output.write("\n")
self.logResult("Bad iCalendar data", len(results_bad), total)
self.results["Bad iCalendar data"] = results_bad
table.printTable(os=self.output)
if self.options["verbose"]:
diff_time = time.time() - t
self.output.write("Time: %.2f s Average: %.1f ms/resource\n" % (
diff_time,
safePercent(diff_time, total, 1000.0),
))
errorPrefix = "Calendar data had unfixable problems:\n "
@inlineCallbacks
def validCalendarData(self, resid, isinbox):
"""
Check the calendar resource for valid iCalendar data.
"""
caldata = yield self.getCalendar(resid, self.fix)
if caldata is None:
if self.parseError:
returnValue((False, self.parseError))
else:
returnValue((True, "Nothing to scan"))
component = Component(None, pycalendar=caldata)
if getattr(self.config, "MaxInstancesForRRULE", 0):
component.truncateRecurrence(self.config.MaxInstancesForRRULE)
result = True
message = ""
try:
if self.options["ical"]:
component.validCalendarData(doFix=False, validateRecurrences=True)
component.validCalendarForCalDAV(methodAllowed=isinbox)
component.validOrganizerForScheduling(doFix=False)
if component.hasDuplicateAlarms(doFix=False):
raise InvalidICalendarDataError("Duplicate VALARMS")
yield self.noPrincipalPathCUAddresses(component, doFix=False)
if self.options["ical"]:
self.attendeesWithoutOrganizer(component, doFix=False)
except ValueError as e:
result = False
message = str(e)
if message.startswith(self.errorPrefix):
message = message[len(self.errorPrefix):]
lines = message.splitlines()
message = lines[0] + (" ++" if len(lines) > 1 else "")
if self.fix:
fixresult, fixmessage = yield self.fixCalendarData(resid, isinbox)
if fixresult:
message = "Fixed: " + message
else:
message = fixmessage + message
returnValue((result, message,))
@inlineCallbacks
def noPrincipalPathCUAddresses(self, component, doFix):
@inlineCallbacks
def recordWithCalendarUserAddress(address):
principal = yield self._principalCollection.principalForCalendarUserAddress(address)
returnValue(principal.record)
@inlineCallbacks
def lookupFunction(cuaddr, recordFunction, conf):
# Return cached results, if any.
if cuaddr in self.cuaCache:
returnValue(self.cuaCache[cuaddr])
result = yield normalizationLookup(cuaddr, recordFunction, conf)
_ignore_name, guid, _ignore_cutype, _ignore_cuaddrs = result
if guid is None:
if cuaddr.find("__uids__") != -1:
guid = cuaddr[cuaddr.find("__uids__/") + 9:][:36]
result = ("", guid, "", set(),)
# Cache the result
self.cuaCache[cuaddr] = result
returnValue(result)
for subcomponent in component.subcomponents(ignore=True):
organizer = subcomponent.getProperty("ORGANIZER")
if organizer:
cuaddr = organizer.value()
# http(s) principals need to be converted to urn:uuid
if cuaddr.startswith("http"):
if doFix:
yield component.normalizeCalendarUserAddresses(lookupFunction, recordWithCalendarUserAddress)
else:
raise InvalidICalendarDataError("iCalendar ORGANIZER starts with 'http(s)'")
elif cuaddr.startswith("mailto:"):
if (yield lookupFunction(cuaddr, recordWithCalendarUserAddress, self.config))[1] is not None:
if doFix:
yield component.normalizeCalendarUserAddresses(lookupFunction, recordWithCalendarUserAddress)
else:
raise InvalidICalendarDataError("iCalendar ORGANIZER starts with 'mailto:' and record exists")
else:
if ("@" in cuaddr) and (":" not in cuaddr) and ("/" not in cuaddr):
if doFix:
# Add back in mailto: then re-normalize to urn:uuid if possible
organizer.setValue("mailto:%s" % (cuaddr,))
yield component.normalizeCalendarUserAddresses(lookupFunction, recordWithCalendarUserAddress)
# Remove any SCHEDULE-AGENT=NONE
if organizer.parameterValue("SCHEDULE-AGENT", "SERVER") == "NONE":
organizer.removeParameter("SCHEDULE-AGENT")
else:
raise InvalidICalendarDataError("iCalendar ORGANIZER missing mailto:")
for attendee in subcomponent.properties("ATTENDEE"):
cuaddr = attendee.value()
# http(s) principals need to be converted to urn:uuid
if cuaddr.startswith("http"):
if doFix:
yield component.normalizeCalendarUserAddresses(lookupFunction, recordWithCalendarUserAddress)
else:
raise InvalidICalendarDataError("iCalendar ATTENDEE starts with 'http(s)'")
elif cuaddr.startswith("mailto:"):
if (yield lookupFunction(cuaddr, recordWithCalendarUserAddress, self.config))[1] is not None:
if doFix:
yield component.normalizeCalendarUserAddresses(lookupFunction, recordWithCalendarUserAddress)
else:
raise InvalidICalendarDataError("iCalendar ATTENDEE starts with 'mailto:' and record exists")
else:
if ("@" in cuaddr) and (":" not in cuaddr) and ("/" not in cuaddr):
if doFix:
# Add back in mailto: then re-normalize to urn:uuid if possible
attendee.setValue("mailto:%s" % (cuaddr,))
yield component.normalizeCalendarUserAddresses(lookupFunction, recordWithCalendarUserAddress)
else:
raise InvalidICalendarDataError("iCalendar ATTENDEE missing mailto:")
def attendeesWithoutOrganizer(self, component, doFix):
"""
Look for events with ATTENDEE properties and no ORGANIZER property.
"""
organizer = component.getOrganizer()
attendees = component.getAttendees()
if organizer is None and attendees:
if doFix:
raise ValueError("ATTENDEEs without ORGANIZER")
else:
raise InvalidICalendarDataError("ATTENDEEs without ORGANIZER")
@inlineCallbacks
def fixCalendarData(self, resid, isinbox):
"""
Fix problems in calendar data using store APIs.
"""
homeID, calendarID = yield self.getAllResourceInfoForResourceID(resid)
home = yield self.txn.calendarHomeWithResourceID(homeID)
calendar = yield home.childWithID(calendarID)
calendarObj = yield calendar.objectResourceWithID(resid)
try:
component = yield calendarObj.component()
except InternalDataStoreError:
returnValue((False, "Failed parse: "))
result = True
message = ""
try:
if self.options["ical"]:
component.validCalendarData(doFix=True, validateRecurrences=True)
component.validCalendarForCalDAV(methodAllowed=isinbox)
component.validOrganizerForScheduling(doFix=True)
component.hasDuplicateAlarms(doFix=True)
yield self.noPrincipalPathCUAddresses(component, doFix=True)
if self.options["ical"]:
self.attendeesWithoutOrganizer(component, doFix=True)
except ValueError:
result = False
message = "Failed fix: "
if result:
# Write out fix, commit and get a new transaction
try:
# Use _migrating to ignore possible overridden instance errors - we are either correcting or ignoring those
self.txn._migrating = True
component = yield calendarObj._setComponentInternal(component, internal_state=ComponentUpdateState.RAW)
except Exception as e:
print(e, component)
print(traceback.print_exc())
result = False
message = "Exception fix: "
yield self.txn.commit()
self.txn = self.store.newTransaction()
returnValue((result, message,))
class SchedulingMismatchService(CalVerifyService):
"""
Service which detects mismatched scheduled events.
"""
metadata = {
"accessMode": "PUBLIC",
"isScheduleObject": True,
"scheduleTag": "abc",
"scheduleEtags": (),
"hasPrivateComment": False,
}
metadata_inbox = {
"accessMode": "PUBLIC",
"isScheduleObject": False,
"scheduleTag": "",
"scheduleEtags": (),
"hasPrivateComment": False,
}
def __init__(self, store, options, output, reactor, config):
super(SchedulingMismatchService, self).__init__(store, options, output, reactor, config)
self.validForCalendaringUUIDs = {}
self.fixAttendeesForOrganizerMissing = 0
self.fixAttendeesForOrganizerMismatch = 0
self.fixOrganizersForAttendeeMissing = 0
self.fixOrganizersForAttendeeMismatch = 0
self.fixFailed = 0
self.fixedAutoAccepts = []
def title(self):
return "Scheduling Mismatch Service"
@inlineCallbacks
def doAction(self):
self.output.write("\n---- Scanning calendar data ----\n")
self.now = DateTime.getNowUTC()
self.start = self.options["start"] if "start" in self.options else DateTime.getToday()
self.start.setDateOnly(False)
self.end = self.start.duplicate()
self.end.offsetYear(1)
self.fix = self.options["fix"]
self.tzid = Timezone(tzid=self.options["tzid"] if self.options["tzid"] else "America/Los_Angeles")
self.txn = self.store.newTransaction()
if self.options["verbose"]:
t = time.time()
descriptor = None
if self.options["uid"]:
rows = yield self.getAllResourceInfoWithUID(self.options["uid"])
descriptor = "getAllResourceInfoWithUID"
elif self.options["uuid"]:
rows = yield self.getAllResourceInfoTimeRangeWithUUIDForAllUID(self.start, self.options["uuid"])
descriptor = "getAllResourceInfoTimeRangeWithUUIDForAllUID"
self.options["uuid"] = None
else:
rows = yield self.getAllResourceInfoTimeRange(self.start)
descriptor = "getAllResourceInfoTimeRange"
yield self.txn.commit()
self.txn = None
if self.options["verbose"]:
self.output.write("%s time: %.1fs\n" % (descriptor, time.time() - t,))
self.total = len(rows)
self.logResult("Number of events to process", self.total)
# Split into organizer events and attendee events
self.organized = []
self.organized_byuid = {}
self.attended = []
self.attended_byuid = collections.defaultdict(list)
self.matched_attendee_to_organizer = collections.defaultdict(set)
skipped, inboxes = yield self.buildResourceInfo(rows)
self.logResult("Number of organizer events to process", len(self.organized), self.total)
self.logResult("Number of attendee events to process", len(self.attended), self.total)
self.logResult("Number of skipped events", skipped, self.total)
self.logResult("Number of inbox events", inboxes)
self.addSummaryBreak()
self.totalErrors = 0
yield self.verifyAllAttendeesForOrganizer()
yield self.verifyAllOrganizersForAttendee()
# Need to add fix summary information
if self.fix:
self.addSummaryBreak()
self.logResult("Fixed missing attendee events", self.fixAttendeesForOrganizerMissing)
self.logResult("Fixed mismatched attendee events", self.fixAttendeesForOrganizerMismatch)
self.logResult("Fixed missing organizer events", self.fixOrganizersForAttendeeMissing)
self.logResult("Fixed mismatched organizer events", self.fixOrganizersForAttendeeMismatch)
self.logResult("Fix failures", self.fixFailed)
self.logResult("Fixed Auto-Accepts", len(self.fixedAutoAccepts))
self.results["Auto-Accepts"] = self.fixedAutoAccepts
self.printAutoAccepts()
self.printSummary()
@inlineCallbacks
def buildResourceInfo(self, rows, onlyOrganizer=False, onlyAttendee=False):
"""
For each resource, determine whether it is an organizer or attendee event, and also
cache the attendee partstats.
@param rows: set of DB query rows
@type rows: C{list}
@param onlyOrganizer: whether organizer information only is required
@type onlyOrganizer: C{bool}
@param onlyAttendee: whether attendee information only is required
@type onlyAttendee: C{bool}
"""
skipped = 0
inboxes = 0
for owner, resid, uid, calname, md5, organizer, created, modified in rows:
# Skip owners not enabled for calendaring
if not (yield self.testForCalendaringUUID(owner)):
skipped += 1
continue
# Skip inboxes
if calname == "inbox":
inboxes += 1
continue
# If targeting a specific organizer, skip events belonging to others
if self.options["uuid"]:
if not organizer.startswith("urn:x-uid:") or self.options["uuid"] != organizer[10:]:
continue
# Cache organizer/attendee states
if organizer.startswith("urn:x-uid:") and owner == organizer[10:]:
if not onlyAttendee:
self.organized.append((owner, resid, uid, md5, organizer, created, modified,))
self.organized_byuid[uid] = (owner, resid, uid, md5, organizer, created, modified,)
else:
if not onlyOrganizer:
self.attended.append((owner, resid, uid, md5, organizer, created, modified,))
self.attended_byuid[uid].append((owner, resid, uid, md5, organizer, created, modified,))
returnValue((skipped, inboxes))
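The organizer/attendee split above hinges on comparing the home owner's UID against the ORGANIZER calendar user address. Distilled into a standalone sketch (assuming the `urn:x-uid:` address form used in this chunk):

```python
def classifyEvent(owner, organizer):
    # An event copy belongs to the organizer when its home owner matches the
    # ORGANIZER's urn:x-uid: value; otherwise it is an attendee copy.
    prefix = "urn:x-uid:"
    if organizer.startswith(prefix) and owner == organizer[len(prefix):]:
        return "organizer"
    return "attendee"
```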
@inlineCallbacks
def testForCalendaringUUID(self, uuid):
"""
Determine if the specified directory UUID is valid for calendaring. Keep a cache of
valid and invalid UUIDs so we can do this quickly.
@param uuid: the directory UUID to test
@type uuid: C{str}
@return: C{True} if valid, C{False} if not
"""
if uuid not in self.validForCalendaringUUIDs:
record = yield self.directoryService().recordWithUID(uuid)
self.validForCalendaringUUIDs[uuid] = record is not None and record.hasCalendars and record.thisServer()
returnValue(self.validForCalendaringUUIDs[uuid])
@inlineCallbacks
def verifyAllAttendeesForOrganizer(self):
"""
Make sure that for each organizer, each referenced attendee has a consistent view of the organizer's event.
We will look for events that exist for the organizer but are missing for an attendee, and events where the
organizer's view of an attendee's status does not match that attendee's view of their own status.
"""
self.output.write("\n---- Verifying Organizer events against Attendee copies ----\n")
self.txn = self.store.newTransaction()
results_missing = []
results_mismatch = []
attendeeResIDs = {}
organized_len = len(self.organized)
organizer_div = 1 if organized_len < 100 else organized_len / 100
# Test organized events
t = time.time()
for ctr, organizerEvent in enumerate(self.organized):
if self.options["verbose"] and divmod(ctr, organizer_div)[1] == 0:
self.output.write(("\r%d of %d (%d%%) Missing: %d Mismatched: %d" % (
ctr + 1,
organized_len,
((ctr + 1) * 100 / organized_len),
len(results_missing),
len(results_mismatch),
)).ljust(80))
self.output.flush()
# To avoid holding locks on all the rows scanned, commit every 10 seconds
if time.time() - t > 10:
yield self.txn.commit()
self.txn = self.store.newTransaction()
t = time.time()
# Get the organizer's view of attendee states
organizer, resid, uid, _ignore_md5, _ignore_organizer, org_created, org_modified = organizerEvent
calendar = yield self.getCalendar(resid)
if calendar is None:
continue
if self.options["verbose"] and self.masterComponent(calendar) is None:
self.output.write("Missing master for organizer: %s, resid: %s, uid: %s\n" % (organizer, resid, uid,))
organizerViewOfAttendees = self.buildAttendeeStates(calendar, self.start, self.end)
try:
del organizerViewOfAttendees[organizer]
except KeyError:
# Odd - the organizer is not an attendee - this usually does not happen
pass
if len(organizerViewOfAttendees) == 0:
continue
# Get attendee states for matching UID
eachAttendeesOwnStatus = {}
attendeeCreatedModified = {}
for attendeeEvent in self.attended_byuid.get(uid, ()):
owner, attresid, attuid, _ignore_md5, _ignore_organizer, att_created, att_modified = attendeeEvent
attendeeCreatedModified[owner] = (att_created, att_modified,)
calendar = yield self.getCalendar(attresid)
if calendar is None:
continue
eachAttendeesOwnStatus[owner] = self.buildAttendeeStates(calendar, self.start, self.end, attendee_only=owner)
attendeeResIDs[(owner, attuid)] = attresid
# Look at each attendee in the organizer's meeting
for organizerAttendee, organizerViewOfStatus in organizerViewOfAttendees.iteritems():
missing = False
mismatch = False
self.matched_attendee_to_organizer[uid].add(organizerAttendee)
# Skip attendees not enabled for calendaring
if not (yield self.testForCalendaringUUID(organizerAttendee)):
continue
# Double check the missing attendee situation in case we missed it during the original query
if organizerAttendee not in eachAttendeesOwnStatus:
# Try to reload the attendee data
calendar, attresid, att_created, att_modified = yield self.getCalendarForOwnerByUID(organizerAttendee, uid)
if calendar is not None:
eachAttendeesOwnStatus[organizerAttendee] = self.buildAttendeeStates(calendar, self.start, self.end, attendee_only=organizerAttendee)
attendeeResIDs[(organizerAttendee, uid)] = attresid
attendeeCreatedModified[organizerAttendee] = (att_created, att_modified,)
# print("Reloaded missing attendee data")
# If an entry for the attendee exists, then check whether attendee status matches
if organizerAttendee in eachAttendeesOwnStatus:
attendeeOwnStatus = eachAttendeesOwnStatus[organizerAttendee].get(organizerAttendee, set())
att_created, att_modified = attendeeCreatedModified[organizerAttendee]
if organizerViewOfStatus != attendeeOwnStatus:
# Check that the difference is only cancelled or declined on the organizers side
for _organizerInstance, partstat in organizerViewOfStatus.difference(attendeeOwnStatus):
if partstat not in ("DECLINED", "CANCELLED"):
results_mismatch.append((uid, resid, organizer, org_created, org_modified, organizerAttendee, att_created, att_modified))
self.results.setdefault("Mismatch Attendee", set()).add((uid, organizer, organizerAttendee,))
mismatch = True
if self.options["details"]:
self.output.write("Mismatch: on Organizer's side:\n")
self.output.write(" UID: %s\n" % (uid,))
self.output.write(" Organizer: %s\n" % (organizer,))
self.output.write(" Attendee: %s\n" % (organizerAttendee,))
self.output.write(" Instance: %s\n" % (_organizerInstance,))
break
# Check that the difference is only cancelled on the attendees side
for _attendeeInstance, partstat in attendeeOwnStatus.difference(organizerViewOfStatus):
if partstat not in ("CANCELLED",):
if not mismatch:
results_mismatch.append((uid, resid, organizer, org_created, org_modified, organizerAttendee, att_created, att_modified))
self.results.setdefault("Mismatch Attendee", set()).add((uid, organizer, organizerAttendee,))
mismatch = True
if self.options["details"]:
self.output.write("Mismatch: on Attendee's side:\n")
self.output.write(" Organizer: %s\n" % (organizer,))
self.output.write(" Attendee: %s\n" % (organizerAttendee,))
self.output.write(" Instance: %s\n" % (_attendeeInstance,))
break
# Check that the status for this attendee is always declined, which means a missing copy of the event is OK
else:
for _ignore_instance_id, partstat in organizerViewOfStatus:
if partstat not in ("DECLINED", "CANCELLED"):
results_missing.append((uid, resid, organizer, organizerAttendee, org_created, org_modified))
self.results.setdefault("Missing Attendee", set()).add((uid, organizer, organizerAttendee,))
missing = True
break
# If there was a problem, attempt a fix
if (missing or mismatch) and self.fix:
fix_result = (yield self.fixByReinvitingAttendee(resid, attendeeResIDs.get((organizerAttendee, uid)), organizerAttendee))
if fix_result:
if missing:
self.fixAttendeesForOrganizerMissing += 1
else:
self.fixAttendeesForOrganizerMismatch += 1
else:
self.fixFailed += 1
yield self.txn.commit()
self.txn = None
if self.options["verbose"]:
self.output.write("\r".ljust(80) + "\n")
# Print table of results
table = tables.Table()
table.addHeader(("Organizer", "Attendee", "Event UID", "Organizer RID", "Created", "Modified",))
results_missing.sort()
for item in results_missing:
uid, resid, organizer, attendee, created, modified = item
organizer_record = yield self.directoryService().recordWithUID(organizer)
attendee_record = yield self.directoryService().recordWithUID(attendee)
table.addRow((
"%s/%s (%s)" % (organizer_record.recordType if organizer_record else "-", organizer_record.shortNames[0] if organizer_record else "-", organizer,),
"%s/%s (%s)" % (attendee_record.recordType if attendee_record else "-", attendee_record.shortNames[0] if attendee_record else "-", attendee,),
uid,
resid,
created,
"" if modified == created else modified,
))
self.output.write("\n")
self.logResult("Events missing from Attendee's calendars", len(results_missing), self.total)
table.printTable(os=self.output)
self.totalErrors += len(results_missing)
# Print table of results
table = tables.Table()
table.addHeader(("Organizer", "Attendee", "Event UID", "Organizer RID", "Created", "Modified", "Attendee RID", "Created", "Modified",))
results_mismatch.sort()
for item in results_mismatch:
uid, org_resid, organizer, org_created, org_modified, attendee, att_created, att_modified = item
organizer_record = yield self.directoryService().recordWithUID(organizer)
attendee_record = yield self.directoryService().recordWithUID(attendee)
table.addRow((
"%s/%s (%s)" % (organizer_record.recordType if organizer_record else "-", organizer_record.shortNames[0] if organizer_record else "-", organizer,),
"%s/%s (%s)" % (attendee_record.recordType if attendee_record else "-", attendee_record.shortNames[0] if attendee_record else "-", attendee,),
uid,
org_resid,
org_created,
"" if org_modified == org_created else org_modified,
attendeeResIDs[(attendee, uid)],
att_created,
"" if att_modified == att_created else att_modified,
))
self.output.write("\n")
self.logResult("Events mismatched between Organizer's and Attendee's calendars", len(results_mismatch), self.total)
table.printTable(os=self.output)
self.totalErrors += len(results_mismatch)
@inlineCallbacks
def verifyAllOrganizersForAttendee(self):
"""
Make sure that for each attendee, there is a matching event for the organizer.
"""
self.output.write("\n---- Verifying Attendee events against Organizer copies ----\n")
self.txn = self.store.newTransaction()
# Now try to match up each attendee event
missing = []
mismatched = []
attended_len = len(self.attended)
attended_div = 1 if attended_len < 100 else attended_len / 100
t = time.time()
for ctr, attendeeEvent in enumerate(tuple(self.attended)): # self.attended might mutate during the loop
if self.options["verbose"] and divmod(ctr, attended_div)[1] == 0:
self.output.write(("\r%d of %d (%d%%) Missing: %d Mismatched: %d" % (
ctr + 1,
attended_len,
((ctr + 1) * 100 / attended_len),
len(missing),
len(mismatched),
)).ljust(80))
self.output.flush()
# To avoid holding locks on all the rows scanned, commit every 10 seconds
if time.time() - t > 10:
yield self.txn.commit()
self.txn = self.store.newTransaction()
t = time.time()
attendee, resid, uid, _ignore_md5, organizer, att_created, att_modified = attendeeEvent
calendar = yield self.getCalendar(resid)
if calendar is None:
continue
eachAttendeesOwnStatus = self.buildAttendeeStates(calendar, self.start, self.end, attendee_only=attendee)
if attendee not in eachAttendeesOwnStatus:
continue
# Only care about data for hosted organizers
if not organizer.startswith("urn:x-uid:"):
continue
organizer = organizer[10:]
# Skip organizers not enabled for calendaring
if not (yield self.testForCalendaringUUID(organizer)):
continue
# Double check the missing attendee situation in case we missed it during the original query
if uid not in self.organized_byuid:
# Try to reload the organizer info data
rows = yield self.getAllResourceInfoWithUID(uid)
yield self.buildResourceInfo(rows, onlyOrganizer=True)
# if uid in self.organized_byuid:
# print("Reloaded missing organizer data: %s" % (uid,))
if uid not in self.organized_byuid:
# Check whether attendee has all instances cancelled
if self.allCancelled(eachAttendeesOwnStatus):
continue
missing.append((uid, attendee, organizer, resid, att_created, att_modified,))
self.results.setdefault("Missing Organizer", set()).add((uid, attendee, organizer,))
# If the organizer copy is missing, fix by removing the attendee data
if self.fix:
# This is where we attempt a fix
fix_result = (yield self.removeEvent(resid))
if fix_result:
self.fixOrganizersForAttendeeMissing += 1
else:
self.fixFailed += 1
elif attendee not in self.matched_attendee_to_organizer[uid]:
# Check whether attendee has all instances cancelled
if self.allCancelled(eachAttendeesOwnStatus):
continue
mismatched.append((uid, attendee, organizer, resid, att_created, att_modified,))
self.results.setdefault("Mismatch Organizer", set()).add((uid, attendee, organizer,))
# If there is a mismatch, fix by re-inviting the attendee
if self.fix:
fix_result = (yield self.fixByReinvitingAttendee(self.organized_byuid[uid][1], resid, attendee))
if fix_result:
self.fixOrganizersForAttendeeMismatch += 1
else:
self.fixFailed += 1
yield self.txn.commit()
self.txn = None
if self.options["verbose"]:
self.output.write("\r".ljust(80) + "\n")
# Print table of results
table = tables.Table()
table.addHeader(("Organizer", "Attendee", "UID", "Attendee RID", "Created", "Modified",))
missing.sort()
unique_set = set()
for item in missing:
uid, attendee, organizer, resid, created, modified = item
unique_set.add(uid)
if organizer:
organizerRecord = yield self.directoryService().recordWithUID(organizer)
organizer = "%s/%s (%s)" % (organizerRecord.recordType if organizerRecord else "-", organizerRecord.shortNames[0] if organizerRecord else "-", organizer,)
attendeeRecord = yield self.directoryService().recordWithUID(attendee)
table.addRow((
organizer,
"%s/%s (%s)" % (attendeeRecord.recordType if attendeeRecord else "-", attendeeRecord.shortNames[0] if attendeeRecord else "-", attendee,),
uid,
resid,
created,
"" if modified == created else modified,
))
self.output.write("\n")
self.output.write("Attendee events missing in Organizer's calendar (total=%d, unique=%d):\n" % (len(missing), len(unique_set),))
table.printTable(os=self.output)
self.addToSummary("Attendee events missing in Organizer's calendar", len(missing), self.total)
self.totalErrors += len(missing)
# Print table of results
table = tables.Table()
table.addHeader(("Organizer", "Attendee", "UID", "Organizer RID", "Created", "Modified", "Attendee RID", "Created", "Modified",))
mismatched.sort()
for item in mismatched:
uid, attendee, organizer, resid, att_created, att_modified = item
if organizer:
organizerRecord = yield self.directoryService().recordWithUID(organizer)
organizer = "%s/%s (%s)" % (organizerRecord.recordType if organizerRecord else "-", organizerRecord.shortNames[0] if organizerRecord else "-", organizer,)
attendeeRecord = yield self.directoryService().recordWithUID(attendee)
table.addRow((
organizer,
"%s/%s (%s)" % (attendeeRecord.recordType if attendeeRecord else "-", attendeeRecord.shortNames[0] if attendeeRecord else "-", attendee,),
uid,
self.organized_byuid[uid][1],
self.organized_byuid[uid][5],
self.organized_byuid[uid][6],
resid,
att_created,
"" if att_modified == att_created else att_modified,
))
self.output.write("\n")
self.logResult("Attendee events mismatched in Organizer's calendar", len(mismatched), self.total)
table.printTable(os=self.output)
self.totalErrors += len(mismatched)
@inlineCallbacks
def fixByReinvitingAttendee(self, orgresid, attresid, attendee):
"""
Fix a mismatch/missing error by having the organizer send a REQUEST for the entire event to the attendee
to trigger implicit scheduling to resync the attendee event.
We do not have implicit scheduling APIs in the store, but we really want to use store-only APIs here to avoid
having to create "fake" HTTP requests and manipulate HTTP resources. So we emulate the implicit behavior by copying the
organizer resource to the attendee (filtering it for the attendee's view of the event) and depositing an inbox item
for the same event. Right now that will wipe out any per-attendee data - notably alarms.
"""
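# Outline of the fix performed below: 1) load the organizer's copy of the event,
# 2) generate an attendee-filtered iTIP REQUEST from it, 3) replace or create the
# attendee's calendar resource with that data, and 4) deposit the iTIP message in
# the attendee's inbox to mimic implicit scheduling delivery.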
try:
cuaddr = "urn:x-uid:%s" % attendee
# Get the organizer's calendar data
calendar = (yield self.getCalendar(orgresid))
calendar = Component(None, pycalendar=calendar)
# Generate an iTip message for the entire event filtered for the attendee's view
itipmsg = iTipGenerator.generateAttendeeRequest(calendar, (cuaddr,), None)
# Handle the case where the attendee is not actually in the organizer event at all by
# removing the attendee event instead of re-inviting
if itipmsg.resourceUID() is None:
yield self.removeEvent(attresid)
returnValue(True)
# Convert iTip message into actual calendar data - just remove METHOD
attendee_calendar = itipmsg.duplicate()
attendee_calendar.removeProperty(attendee_calendar.getProperty("METHOD"))
# Adjust TRANSP to match PARTSTAT
self.setTransparencyForAttendee(attendee_calendar, cuaddr)
# Get attendee home store object
home = (yield self.txn.calendarHomeWithUID(attendee))
if home is None:
raise ValueError("Cannot find home")
inbox = (yield home.calendarWithName("inbox"))
if inbox is None:
raise ValueError("Cannot find inbox")
details = {}
# Replace existing resource data, or create a new one
if attresid:
# TODO: transfer over per-attendee data - valarms
_ignore_homeID, calendarID = yield self.getAllResourceInfoForResourceID(attresid)
calendar = yield home.childWithID(calendarID)
calendarObj = yield calendar.objectResourceWithID(attresid)
calendarObj.scheduleTag = str(uuid4())
yield calendarObj._setComponentInternal(attendee_calendar, internal_state=ComponentUpdateState.RAW)
self.results.setdefault("Fix change event", set()).add((home.name(), calendar.name(), attendee_calendar.resourceUID(),))
details["path"] = "/calendars/__uids__/%s/%s/%s" % (home.name(), calendar.name(), calendarObj.name(),)
details["rid"] = attresid
else:
# Find default calendar for VEVENTs
defaultCalendar = (yield self.defaultCalendarForAttendee(home))
if defaultCalendar is None:
raise ValueError("Cannot find suitable default calendar")
new_name = str(uuid4()) + ".ics"
calendarObj = (yield defaultCalendar._createCalendarObjectWithNameInternal(new_name, attendee_calendar, internal_state=ComponentUpdateState.RAW, options=self.metadata))
self.results.setdefault("Fix add event", set()).add((home.name(), defaultCalendar.name(), attendee_calendar.resourceUID(),))
details["path"] = "/calendars/__uids__/%s/%s/%s" % (home.name(), defaultCalendar.name(), new_name,)
details["rid"] = calendarObj._resourceID
details["uid"] = attendee_calendar.resourceUID()
instances = attendee_calendar.expandTimeRanges(self.end)
for key in instances:
instance = instances[key]
if instance.start > self.now:
break
details["start"] = instance.start.adjustTimezone(self.tzid)
details["title"] = instance.component.propertyValue("SUMMARY")
# Write new itip message to attendee inbox
yield inbox.createCalendarObjectWithName(str(uuid4()) + ".ics", itipmsg, options=self.metadata_inbox)
self.results.setdefault("Fix add inbox", set()).add((home.name(), itipmsg.resourceUID(),))
yield self.txn.commit()
self.txn = self.store.newTransaction()
# Need to know whether the attendee is a location or resource with auto-accept set
record = yield self.directoryService().recordWithUID(attendee)
autoScheduleMode = getattr(record, "autoScheduleMode", None)
if autoScheduleMode not in (None, AutoScheduleMode.none):
# Log details about the event so we can have a human manually process
self.fixedAutoAccepts.append(details)
returnValue(True)
except Exception, e:
print("Failed to fix resource: %d for attendee: %s\n%s" % (orgresid, attendee, e,))
returnValue(False)
@inlineCallbacks
def defaultCalendarForAttendee(self, home):
# Check for property
calendar = (yield home.defaultCalendar("VEVENT"))
returnValue(calendar)
def printAutoAccepts(self):
# Print summary of results
table = tables.Table()
table.addHeader(("Path", "RID", "UID", "Start Time", "Title"))
for item in sorted(self.fixedAutoAccepts, key=lambda x: x["path"]):
table.addRow((
item["path"],
item["rid"],
item["uid"],
item["start"],
item["title"],
))
self.output.write("\n")
self.output.write("Auto-Accept Fixes:\n")
table.printTable(os=self.output)
def masterComponent(self, calendar):
"""
Return the master iCal component in this calendar.
@return: the L{Component} for the master component,
or C{None} if there isn't one.
"""
for component in calendar.getComponents(definitions.cICalComponent_VEVENT):
if not component.hasProperty("RECURRENCE-ID"):
return component
return None
def buildAttendeeStates(self, calendar, start, end, attendee_only=None):
# Expand events into instances in the start/end range
results = []
calendar.getVEvents(
Period(
start=start,
end=end,
),
results
)
# Need to do iCal fake master fixup
overrides = len(calendar.getComponents(definitions.cICalComponent_VEVENT)) > 1
# Create map of each attendee's instances with the instance id (start time) and attendee part-stat
attendees = {}
for item in results:
# Fake master fixup
if overrides:
if not item.getOwner().isRecurrenceInstance():
if item.getOwner().getRecurrenceSet() is None or not item.getOwner().getRecurrenceSet().hasRecurrence():
continue
# Get Status - ignore cancelled events
status = item.getOwner().loadValueString(definitions.cICalProperty_STATUS)
cancelled = status == definitions.cICalProperty_STATUS_CANCELLED
# Get instance start
item.getInstanceStart().adjustToUTC()
instance_id = item.getInstanceStart().getText()
props = item.getOwner().getProperties().get(definitions.cICalProperty_ATTENDEE, [])
for prop in props:
caladdr = prop.getCalAddressValue().getValue()
if caladdr.startswith("urn:x-uid:"):
caladdr = caladdr[10:]
else:
continue
if attendee_only is not None and attendee_only != caladdr:
continue
if cancelled:
partstat = "CANCELLED"
else:
if not prop.hasParameter(definitions.cICalParameter_PARTSTAT):
partstat = definitions.cICalParameter_PARTSTAT_NEEDSACTION
else:
partstat = prop.getParameterValue(definitions.cICalParameter_PARTSTAT)
attendees.setdefault(caladdr, set()).add((instance_id, partstat))
return attendees
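# For reference, the mapping returned by buildAttendeeStates() has the shape:
#   { "<attendee-uid>": set([("20140101T120000Z", "ACCEPTED"), ...]) }
# i.e. keyed by the attendee's bare UID (the "urn:x-uid:" prefix is stripped),
# with each entry pairing a UTC instance start (the instance id) with a PARTSTAT.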
def allCancelled(self, attendeesStatus):
# Check whether attendees have all instances cancelled
all_cancelled = True
for _ignore_guid, states in attendeesStatus.iteritems():
for _ignore_instance_id, partstat in states:
if partstat not in ("CANCELLED", "DECLINED",):
all_cancelled = False
break
if not all_cancelled:
break
return all_cancelled
def setTransparencyForAttendee(self, calendar, attendee):
"""
Set the TRANSP property based on the PARTSTAT value on matching ATTENDEE properties
in each component.
"""
for component in calendar.subcomponents(ignore=True):
prop = component.getAttendeeProperty(attendee)
addTransp = False
if prop:
partstat = prop.parameterValue("PARTSTAT", "NEEDS-ACTION")
addTransp = partstat in ("NEEDS-ACTION", "DECLINED",)
component.replaceProperty(Property("TRANSP", "TRANSPARENT" if addTransp else "OPAQUE"))
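# The PARTSTAT -> TRANSP mapping applied above is:
#   NEEDS-ACTION or DECLINED -> TRANSPARENT (instance is ignored for free-busy)
#   any other PARTSTAT       -> OPAQUE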
class DoubleBookingService(CalVerifyService):
"""
Service which detects double-booked events.
"""
def title(self):
return "Double Booking Service"
@inlineCallbacks
def doAction(self):
if self.options["fix"]:
self.output.write("\nFixing is not supported.\n")
returnValue(None)
self.output.write("\n---- Scanning calendar data ----\n")
self.tzid = Timezone(tzid=self.options["tzid"] if self.options["tzid"] else "America/Los_Angeles")
self.now = DateTime.getNowUTC()
self.start = DateTime.getToday()
self.start.setDateOnly(False)
self.start.setTimezone(self.tzid)
self.end = self.start.duplicate()
self.end.offsetYear(1)
self.fix = self.options["fix"]
if self.options["verbose"] and self.options["summary"]:
ot = time.time()
# Check loop over uuid
UUIDDetails = collections.namedtuple("UUIDDetails", ("uuid", "rname", "auto", "doubled",))
self.uuid_details = []
if len(self.options["uuid"]) != 36:
self.txn = self.store.newTransaction()
if self.options["uuid"]:
homes = yield self.getMatchingHomeUIDs(self.options["uuid"])
else:
homes = yield self.getAllHomeUIDs()
yield self.txn.commit()
self.txn = None
uuids = []
for uuid in sorted(homes):
record = yield self.directoryService().recordWithUID(uuid)
if record is not None and record.recordType in (CalRecordType.location, CalRecordType.resource):
uuids.append(uuid)
else:
uuids = [self.options["uuid"], ]
count = 0
for uuid in uuids:
self.results = {}
self.summary = []
self.total = 0
count += 1
record = yield self.directoryService().recordWithUID(uuid)
if record is None:
continue
if not record.thisServer() or not record.hasCalendars:
continue
rname = record.displayName
autoScheduleMode = getattr(record, "autoScheduleMode", AutoScheduleMode.none)
if len(uuids) > 1 and not self.options["summary"]:
self.output.write("\n\n-----------------------------\n")
self.txn = self.store.newTransaction()
if self.options["verbose"]:
t = time.time()
rows = yield self.getTimeRangeInfoWithUUID(uuid, self.start)
descriptor = "getTimeRangeInfoWithUUID"
yield self.txn.commit()
self.txn = None
if self.options["verbose"]:
if not self.options["summary"]:
self.output.write("%s time: %.1fs\n" % (descriptor, time.time() - t,))
else:
self.output.write("%s (%d/%d)" % (uuid, count, len(uuids),))
self.output.flush()
self.total = len(rows)
if not self.options["summary"]:
self.logResult("UUID to process", uuid)
self.logResult("Record name", rname)
self.logResult("Auto-schedule-mode", autoScheduleMode.description)
self.addSummaryBreak()
self.logResult("Number of events to process", self.total)
if rows:
if not self.options["summary"]:
self.addSummaryBreak()
doubled = yield self.doubleBookCheck(rows, uuid, self.start)
else:
doubled = False
self.uuid_details.append(UUIDDetails(uuid, rname, autoScheduleMode, doubled))
if not self.options["summary"]:
self.printSummary()
else:
self.output.write(" - %s\n" % ("Double-booked" if doubled else "OK",))
self.output.flush()
if self.options["summary"]:
table = tables.Table()
table.addHeader(("GUID", "Name", "Auto-Schedule", "Double-Booked",))
doubled = 0
for item in sorted(self.uuid_details):
if not item.doubled:
continue
table.addRow((
item.uuid,
item.rname,
item.auto,
item.doubled,
))
doubled += 1
table.addFooter(("Total", "", "", "%d of %d" % (doubled, len(self.uuid_details),),))
self.output.write("\n")
table.printTable(os=self.output)
if self.options["verbose"]:
self.output.write("%s time: %.1fs\n" % ("Summary", time.time() - ot,))
@inlineCallbacks
def getTimeRangeInfoWithUUID(self, uuid, start):
co = schema.CALENDAR_OBJECT
cb = schema.CALENDAR_BIND
ch = schema.CALENDAR_HOME
tr = schema.TIME_RANGE
kwds = {
"uuid": uuid,
"Start" : pyCalendarToSQLTimestamp(start),
}
rows = (yield Select(
[co.RESOURCE_ID, ],
From=ch.join(
cb, type="inner", on=(ch.RESOURCE_ID == cb.CALENDAR_HOME_RESOURCE_ID)).join(
co, type="inner", on=(cb.CALENDAR_RESOURCE_ID == co.CALENDAR_RESOURCE_ID).And(
cb.BIND_MODE == _BIND_MODE_OWN).And(
cb.CALENDAR_RESOURCE_NAME != "inbox").And(
co.ORGANIZER != "")).join(
tr, type="left", on=(co.RESOURCE_ID == tr.CALENDAR_OBJECT_RESOURCE_ID)),
Where=(ch.OWNER_UID == Parameter("uuid")).And((tr.START_DATE >= Parameter("Start")).Or(co.RECURRANCE_MAX <= Parameter("Start"))),
Distinct=True,
).on(self.txn, **kwds))
returnValue(tuple(rows))
@inlineCallbacks
def doubleBookCheck(self, rows, uuid, start):
"""
Check each calendar resource by expanding instances within the next year, and looking for
any overlapping instances whose STATUS is not CANCELLED and whose PARTSTAT is ACCEPTED.
"""
if not self.options["summary"]:
self.output.write("\n---- Checking instances for double-booking ----\n")
self.txn = self.store.newTransaction()
if self.options["verbose"]:
t = time.time()
InstanceDetails = collections.namedtuple("InstanceDetails", ("resid", "uid", "start", "end", "organizer", "summary",))
end = start.duplicate()
end.offsetDay(int(self.options["days"]))
count = 0
total = len(rows)
total_instances = 0
booked_instances = 0
details = []
rjust = 10
tzid = None
hasFloating = False
for resid in rows:
resid = resid[0]
caldata = yield self.getCalendar(resid, self.fix)
if caldata is None:
if self.parseError:
returnValue((False, self.parseError))
else:
returnValue((True, "Nothing to scan"))
cal = Component(None, pycalendar=caldata)
cal = PerUserDataFilter(uuid).filter(cal)
uid = cal.resourceUID()
instances = cal.expandTimeRanges(end, start, ignoreInvalidInstances=False)
count += 1
for instance in instances.instances.values():
total_instances += 1
# See if it is CANCELLED or TRANSPARENT
if instance.component.propertyValue("STATUS") == "CANCELLED":
continue
if instance.component.propertyValue("TRANSP") == "TRANSPARENT":
continue
dtstart = instance.component.propertyValue("DTSTART")
if tzid is None and dtstart.getTimezoneID():
tzid = Timezone(tzid=dtstart.getTimezoneID())
hasFloating |= dtstart.isDateOnly() or dtstart.floating()
details.append(InstanceDetails(resid, uid, instance.start, instance.end, instance.component.getOrganizer(), instance.component.propertyValue("SUMMARY")))
booked_instances += 1
if self.options["verbose"] and not self.options["summary"]:
if count == 1:
self.output.write("Instances".rjust(rjust) + "Current".rjust(rjust) + "Total".rjust(rjust) + "Complete".rjust(rjust) + "\n")
if divmod(count, 100)[1] == 0:
self.output.write((
"\r" +
("%s" % total_instances).rjust(rjust) +
("%s" % count).rjust(rjust) +
("%s" % total).rjust(rjust) +
("%d%%" % safePercent(count, total)).rjust(rjust)
).ljust(80))
self.output.flush()
# To avoid holding locks on all the rows scanned, commit every 100 resources
if divmod(count, 100)[1] == 0:
yield self.txn.commit()
self.txn = self.store.newTransaction()
yield self.txn.commit()
self.txn = None
if self.options["verbose"] and not self.options["summary"]:
self.output.write((
"\r" +
("%s" % total_instances).rjust(rjust) +
("%s" % count).rjust(rjust) +
("%s" % total).rjust(rjust) +
("%d%%" % safePercent(count, total)).rjust(rjust)
).ljust(80) + "\n")
if not self.options["summary"]:
self.logResult("Number of instances in time-range", total_instances)
self.logResult("Number of booked instances", booked_instances)
# Adjust floating and sort
if hasFloating and tzid is not None:
utc = Timezone.UTCTimezone
for item in details:
if item.start.floating():
item.start.setTimezone(tzid)
item.start.adjustTimezone(utc)
if item.end.floating():
item.end.setTimezone(tzid)
item.end.adjustTimezone(utc)
details.sort(key=lambda x: x.start)
# Now look for double-bookings
DoubleBookedDetails = collections.namedtuple("DoubleBookedDetails", ("resid1", "uid1", "resid2", "uid2", "start",))
double_booked = []
current = details[0] if details else None
for next in details[1:]:
if current.end > next.start and current.resid != next.resid and not (current.organizer == next.organizer and current.summary == next.summary):
dt = next.start.duplicate()
dt.adjustTimezone(self.tzid)
double_booked.append(DoubleBookedDetails(current.resid, current.uid, next.resid, next.uid, dt,))
current = next
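# A pair of consecutive instances (sorted by start) is flagged as double-booked
# above when the earlier instance ends after the later one starts, the two come
# from different resources, and they are not the same meeting seen twice (i.e.
# the organizer or summary differs).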
# Print table of results
if double_booked and not self.options["summary"]:
table = tables.Table()
table.addHeader(("RID #1", "UID #1", "RID #2", "UID #2", "Start",))
previous1 = None
previous2 = None
unique_events = 0
for item in sorted(double_booked):
if previous1 != item.resid1:
unique_events += 1
resid1 = item.resid1 if previous1 != item.resid1 else "."
uid1 = item.uid1 if previous1 != item.resid1 else "."
resid2 = item.resid2 if previous2 != item.resid2 else "."
uid2 = item.uid2 if previous2 != item.resid2 else "."
table.addRow((
resid1,
uid1,
resid2,
uid2,
item.start,
))
previous1 = item.resid1
previous2 = item.resid2
self.output.write("\n")
self.logResult("Number of double-bookings", len(double_booked))
self.logResult("Number of unique double-bookings", unique_events)
table.printTable(os=self.output)
self.results["Double-bookings"] = double_booked
if self.options["verbose"] and not self.options["summary"]:
diff_time = time.time() - t
self.output.write("Time: %.2f s Average: %.1f ms/resource\n" % (
diff_time,
safePercent(diff_time, total, 1000.0),
))
returnValue(len(double_booked) != 0)
class DarkPurgeService(CalVerifyService):
"""
Service which detects room/resource events that have an invalid organizer.
"""
def title(self):
return "Dark Purge Service"
@inlineCallbacks
def doAction(self):
if not self.options["no-organizer"] and not self.options["invalid-organizer"] and not self.options["disabled-organizer"]:
self.options["invalid-organizer"] = self.options["disabled-organizer"] = True
self.output.write("\n---- Scanning calendar data ----\n")
self.tzid = Timezone(tzid=self.options["tzid"] if self.options["tzid"] else "America/Los_Angeles")
self.now = DateTime.getNowUTC()
self.start = self.options["start"] if "start" in self.options else DateTime.getToday()
self.start.setDateOnly(False)
self.start.setTimezone(self.tzid)
self.fix = self.options["fix"]
if self.options["verbose"] and self.options["summary"]:
ot = time.time()
# Check loop over uuid
UUIDDetails = collections.namedtuple("UUIDDetails", ("uuid", "rname", "purged",))
self.uuid_details = []
if len(self.options["uuid"]) != 36:
self.txn = self.store.newTransaction()
if self.options["uuid"]:
homes = yield self.getMatchingHomeUIDs(self.options["uuid"])
else:
homes = yield self.getAllHomeUIDs()
yield self.txn.commit()
self.txn = None
uuids = []
if self.options["verbose"]:
self.output.write("%d uuids to check\n" % (len(homes),))
for uuid in sorted(homes):
record = yield self.directoryService().recordWithUID(uuid)
if record is not None and record.recordType in (CalRecordType.location, CalRecordType.resource):
uuids.append(uuid)
else:
uuids = [self.options["uuid"], ]
if self.options["verbose"]:
self.output.write("%d uuids to scan\n" % (len(uuids),))
count = 0
for uuid in uuids:
self.results = {}
self.summary = []
self.total = 0
count += 1
record = yield self.directoryService().recordWithUID(uuid)
if record is None:
continue
if not record.thisServer() or not record.hasCalendars:
continue
rname = record.displayName
if len(uuids) > 1 and not self.options["summary"]:
self.output.write("\n\n-----------------------------\n")
self.txn = self.store.newTransaction()
if self.options["verbose"]:
t = time.time()
rows = yield self.getAllResourceInfoTimeRangeWithUUID(self.start, uuid)
descriptor = "getAllResourceInfoTimeRangeWithUUID"
yield self.txn.commit()
self.txn = None
if self.options["verbose"]:
if not self.options["summary"]:
self.output.write("%s time: %.1fs\n" % (descriptor, time.time() - t,))
else:
self.output.write("%s (%d/%d)" % (uuid, count, len(uuids),))
self.output.flush()
self.total = len(rows)
if not self.options["summary"]:
self.logResult("UUID to process", uuid)
self.logResult("Record name", rname)
self.addSummaryBreak()
self.logResult("Number of events to process", self.total)
if rows:
if not self.options["summary"]:
self.addSummaryBreak()
purged = yield self.darkPurge(rows, uuid)
else:
purged = False
self.uuid_details.append(UUIDDetails(uuid, rname, purged))
if not self.options["summary"]:
self.printSummary()
else:
self.output.write(" - %s\n" % ("Dark Events" if purged else "OK",))
self.output.flush()
if count == 0:
self.output.write("Nothing to scan\n")
if self.options["summary"]:
table = tables.Table()
table.addHeader(("GUID", "Name", "RID", "UID", "Organizer",))
purged = 0
for item in sorted(self.uuid_details):
if not item.purged:
continue
uuid = item.uuid
rname = item.rname
for detail in item.purged:
table.addRow((
uuid,
rname,
detail.resid,
detail.uid,
detail.organizer,
))
uuid = ""
rname = ""
purged += 1
table.addFooter(("Total", "%d" % (purged,), "", "", "",))
self.output.write("\n")
table.printTable(os=self.output)
if self.options["verbose"]:
self.output.write("%s time: %.1fs\n" % ("Summary", time.time() - ot,))
@inlineCallbacks
def darkPurge(self, rows, uuid):
"""
Check each calendar resource by looking at any ORGANIZER property value and verifying it is valid.
"""
if not self.options["summary"]:
self.output.write("\n---- Checking for dark events ----\n")
self.txn = self.store.newTransaction()
if self.options["verbose"]:
t = time.time()
Details = collections.namedtuple("Details", ("resid", "uid", "organizer",))
count = 0
total = len(rows)
details = []
fixed = 0
rjust = 10
for resid in rows:
resid = resid[1]
count += 1
caldata = yield self.getCalendar(resid, self.fix)
if caldata is None:
if self.parseError:
returnValue((False, self.parseError))
else:
returnValue((True, "Nothing to scan"))
cal = Component(None, pycalendar=caldata)
uid = cal.resourceUID()
fail = False
organizer = cal.getOrganizer()
if organizer is None:
if self.options["no-organizer"]:
fail = True
else:
principal = yield self.directoryService().recordWithCalendarUserAddress(organizer)
# FIXME: Why the mix of records and principals here?
if principal is None and organizer.startswith("urn:x-uid:"):
principal = yield self.directoryService().principalCollection.principalForUID(organizer[10:])
if principal is None:
if self.options["invalid-organizer"]:
fail = True
elif not principal.calendarsEnabled():
if self.options["disabled-organizer"]:
fail = True
if fail:
details.append(Details(resid, uid, organizer,))
if self.fix:
yield self.removeEvent(resid)
fixed += 1
if self.options["verbose"] and not self.options["summary"]:
if count == 1:
self.output.write("Current".rjust(rjust) + "Total".rjust(rjust) + "Complete".rjust(rjust) + "\n")
if divmod(count, 100)[1] == 0:
self.output.write((
"\r" +
("%s" % count).rjust(rjust) +
("%s" % total).rjust(rjust) +
("%d%%" % safePercent(count, total)).rjust(rjust)
).ljust(80))
self.output.flush()
# To avoid holding locks on all the rows scanned, commit every 100 resources
if divmod(count, 100)[1] == 0:
yield self.txn.commit()
self.txn = self.store.newTransaction()
yield self.txn.commit()
self.txn = None
if self.options["verbose"] and not self.options["summary"]:
self.output.write((
"\r" +
("%s" % count).rjust(rjust) +
("%s" % total).rjust(rjust) +
("%d%%" % safePercent(count, total)).rjust(rjust)
).ljust(80) + "\n")
# Print table of results
if not self.options["summary"]:
self.logResult("Number of dark events", len(details))
self.results["Dark Events"] = details
if self.fix:
self.results["Fix dark events"] = fixed
if self.options["verbose"] and not self.options["summary"]:
diff_time = time.time() - t
self.output.write("Time: %.2f s Average: %.1f ms/resource\n" % (
diff_time,
safePercent(diff_time, total, 1000.0),
))
returnValue(details)
class EventSplitService(CalVerifyService):
"""
Service which splits a recurring event at a specific date-time value.
"""
def title(self):
return "Event Split Service"
@inlineCallbacks
def doAction(self):
"""
Split a resource using either its path or resource id.
"""
self.txn = self.store.newTransaction()
path = self.options["path"]
if path.startswith("/calendars/__uids__/"):
try:
pathbits = path.split("/")
except TypeError:
printusage("Not a valid calendar object resource path: %s" % (path,))
if len(pathbits) != 6:
printusage("Not a valid calendar object resource path: %s" % (path,))
homeName = pathbits[3]
calendarName = pathbits[4]
resourceName = pathbits[5]
resid = yield self.getResourceID(homeName, calendarName, resourceName)
if resid is None:
yield self.txn.commit()
self.txn = None
self.output.write("\n")
self.output.write("Path does not exist. Nothing split.\n")
returnValue(None)
resid = int(resid)
else:
try:
resid = int(path)
except ValueError:
printusage("path argument must be a calendar object path or an SQL resource-id")
calendarObj = yield CalendarStoreFeatures(self.txn._store).calendarObjectWithID(self.txn, resid)
ical = yield calendarObj.component()
# Must be the ORGANIZER's copy
organizer = ical.getOrganizer()
if organizer is None:
printusage("Calendar object has no ORGANIZER property - cannot split")
# Only allow organizers to split
scheduler = ImplicitScheduler()
is_attendee = (yield scheduler.testAttendeeEvent(calendarObj.calendar(), calendarObj, ical,))
if is_attendee:
printusage("Calendar object is not owned by the ORGANIZER - cannot split")
if self.options["summary"]:
result = self.doSummary(ical)
else:
result = yield self.doSplit(resid, calendarObj, ical)
returnValue(result)
def doSummary(self, ical):
"""
Print a summary of the recurrence instances of the specified event.
@param ical: calendar to process
@type ical: L{Component}
"""
self.output.write("\n---- Calendar resource instances ----\n")
# Find the instance RECURRENCE-ID where a split is going to happen
now = DateTime.getNowUTC()
now.offsetDay(1)
instances = ical.cacheExpandedTimeRanges(now)
instances = sorted(instances.instances.values(), key=lambda x: x.start)
for instance in instances:
self.output.write(instance.rid.getText() + (" *\n" if instance.overridden else "\n"))
@inlineCallbacks
def doSplit(self, resid, calendarObj, ical):
rid = self.options["rid"]
try:
if rid[-1] != "Z":
raise ValueError
rid = DateTime.parseText(rid)
except ValueError:
printusage("rid must be a valid UTC date-time value: 'YYYYMMDDTHHMMSSZ'")
self.output.write("\n---- Splitting calendar resource ----\n")
# Find actual RECURRENCE-ID of split
splitter = iCalSplitter(1024, 14)
rid = splitter.whereSplit(ical, break_point=rid, allow_past_the_end=False)
if rid is None:
printusage("rid is not a valid recurrence instance")
self.output.write("\n")
self.output.write("Actual RECURRENCE-ID: %s.\n" % (rid,))
oldObj = yield calendarObj.split(rid=rid)
oldUID = oldObj.uid()
oldRelated = (yield oldObj.component()).mainComponent().propertyValue("RELATED-TO")
self.output.write("\n")
self.output.write("Split Resource: %s at %s, old UID: %s.\n" % (resid, rid, oldUID,))
yield self.txn.commit()
self.txn = None
returnValue((oldUID, oldRelated,))
def main(argv=sys.argv, stderr=sys.stderr, reactor=None):
if reactor is None:
from twisted.internet import reactor
options = CalVerifyOptions()
try:
options.parseOptions(argv[1:])
except usage.UsageError, e:
printusage(e)
try:
output = options.openOutput()
except IOError, e:
stderr.write("Unable to open output file for writing: %s\n" % (e,))
sys.exit(1)
def makeService(store):
from twistedcaldav.config import config
config.TransactionTimeoutSeconds = 0
if options["nuke"]:
return NukeService(store, options, output, reactor, config)
elif options["missing"]:
return OrphansService(store, options, output, reactor, config)
elif options["ical"]:
return BadDataService(store, options, output, reactor, config)
elif options["mismatch"]:
return SchedulingMismatchService(store, options, output, reactor, config)
elif options["double"]:
return DoubleBookingService(store, options, output, reactor, config)
elif options["dark-purge"]:
return DarkPurgeService(store, options, output, reactor, config)
elif options["split"]:
return EventSplitService(store, options, output, reactor, config)
else:
printusage("Invalid operation")
sys.exit(1)
utilityMain(options['config'], makeService, reactor)
if __name__ == '__main__':
main()
calendarserver-7.0+dfsg/calendarserver/tools/calverify_diff.py
#!/usr/bin/env python
# -*- test-case-name: calendarserver.tools.test.test_calverify -*-
##
# Copyright (c) 2012-2015 Apple Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
##
from __future__ import print_function
import getopt
import sys
import os
def analyze(fname):
with open(os.path.expanduser(fname)) as f:
    lines = f.read().splitlines()
total = len(lines)
ctr = 0
results = {
"table1": [],
"table2": [],
"table3": [],
"table4": [],
}
def _tableParser(ctr, tableName, parseFn):
ctr += 4
while ctr < total:
line = lines[ctr]
if line.startswith("+------"):
break
else:
results[tableName].append(parseFn(line))
ctr += 1
return ctr
while ctr < total:
line = lines[ctr]
if line.startswith("Events missing from Attendee's calendars"):
ctr = _tableParser(ctr, "table1", parseTableMissing)
elif line.startswith("Events mismatched between Organizer's and Attendee's calendars"):
ctr = _tableParser(ctr, "table2", parseTableMismatch)
elif line.startswith("Attendee events missing in Organizer's calendar"):
ctr = _tableParser(ctr, "table3", parseTableMissing)
elif line.startswith("Attendee events mismatched in Organizer's calendar"):
ctr = _tableParser(ctr, "table4", parseTableMismatch)
ctr += 1
return results
def parseTableMissing(line):
splits = line.split("|")
organizer = splits[1].strip()
attendee = splits[2].strip()
uid = splits[3].strip()
resid = splits[4].strip()
return (organizer, attendee, uid, resid,)
def parseTableMismatch(line):
splits = line.split("|")
organizer = splits[1].strip()
attendee = splits[2].strip()
uid = splits[3].strip()
organizer_resid = splits[4].strip()
attendee_resid = splits[7].strip()
return (organizer, attendee, uid, organizer_resid, attendee_resid,)
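The two parsers above pull cells out of a pipe-delimited calverify table row, where cell 0 and the last cell are the empty strings outside the leading and trailing pipes. A minimal, hypothetical sketch of the same cell extraction (the row contents and function name here are illustrative, not part of the tool):

```python
def parse_mismatch_row(line):
    # Splitting on "|" yields an empty first and last element, so the
    # first real cell is index 1; strip whitespace padding from each cell.
    cells = [c.strip() for c in line.split("|")]
    organizer, attendee, uid = cells[1], cells[2], cells[3]
    # In the real mismatch table, columns 5 and 6 hold other data and the
    # attendee resource-id is in column 7.
    organizer_resid, attendee_resid = cells[4], cells[7]
    return (organizer, attendee, uid, organizer_resid, attendee_resid)

row = "| user01 | user02 | uid-123 | 55 | x | y | 99 |"
print(parse_mismatch_row(row))
```

The same slicing logic applies to the "missing" table, which just has fewer columns.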
def diff(results1, results2):
print("\n\nEvents missing from Attendee's calendars")
diffSets(results1["table1"], results2["table1"])
print("\n\nEvents mismatched between Organizer's and Attendee's calendars")
diffSets(results1["table2"], results2["table2"])
print("\n\nAttendee events missing in Organizer's calendar")
diffSets(results1["table3"], results2["table3"])
print("\n\nAttendee events mismatched in Organizer's calendar")
diffSets(results1["table4"], results2["table4"])
def diffSets(results1, results2):
s1 = set(results1)
s2 = set(results2)
d = s1 - s2
print("\nIn first, not in second: (%d)" % (len(d),))
for i in sorted(d):
print(i)
d = s2 - s1
print("\nIn second, not in first: (%d)" % (len(d),))
for i in sorted(d):
print(i)
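The diff itself is just a symmetric set difference over the parsed row tuples, reported in both directions. A self-contained sketch of that idea, with made-up row tuples (names are illustrative only):

```python
def diff_sets(rows1, rows2):
    # Rows present in one calverify run but not the other, both directions,
    # sorted for stable output.
    s1, s2 = set(rows1), set(rows2)
    return sorted(s1 - s2), sorted(s2 - s1)

only_first, only_second = diff_sets(
    [("org1", "att1", "uid-a", "1"), ("org2", "att2", "uid-b", "2")],
    [("org2", "att2", "uid-b", "2"), ("org3", "att3", "uid-c", "3")],
)
print(only_first)
print(only_second)
```

Because rows are plain tuples of strings, set membership gives exact-match comparison for free.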
def usage(error_msg=None):
if error_msg:
print(error_msg)
print("""Usage: calverify_diff [options] FILE1 FILE2
Options:
-h Print this help and exit
Arguments:
FILE1 File containing calverify output to analyze
FILE2 File containing calverify output to analyze
Description:
This utility will analyze the output of two calverify runs
and show what is different between the two.
""")
if error_msg:
raise ValueError(error_msg)
else:
sys.exit(0)
if __name__ == '__main__':
options, args = getopt.getopt(sys.argv[1:], "h", [])
for option, value in options:
if option == "-h":
usage()
else:
usage("Unrecognized option: %s" % (option,))
if len(args) != 2:
usage("Must have two arguments")
else:
fname1 = args[0]
fname2 = args[1]
print("*** CalVerify diff from %s to %s" % (
os.path.basename(fname1),
os.path.basename(fname2),
))
results1 = analyze(fname1)
results2 = analyze(fname2)
diff(results1, results2)
calendarserver-7.0+dfsg/calendarserver/tools/changeip_calendar.py
#!/usr/bin/env python
#
# changeip script for calendar server
#
# Copyright (c) 2005-2015 Apple Inc. All Rights Reserved.
#
# IMPORTANT NOTE: This file is licensed only for use on Apple-labeled
# computers and is subject to the terms and conditions of the Apple
# Software License Agreement accompanying the package this file is a
# part of. You may not port this file to another platform without
# Apple's written consent.
from __future__ import print_function
from __future__ import with_statement
import datetime
from getopt import getopt, GetoptError
import os
import plistlib
import subprocess
import sys
SERVER_APP_ROOT = "/Applications/Server.app/Contents/ServerRoot"
CALENDARSERVER_CONFIG = "%s/usr/sbin/calendarserver_config" % (SERVER_APP_ROOT,)
def serverRootLocation():
"""
Return the ServerRoot value from the servermgr_calendar.plist. If not
present, return the default.
"""
plist = "/Library/Server/Preferences/Calendar.plist"
serverRoot = u"/Library/Server/Calendar and Contacts"
if os.path.exists(plist):
serverRoot = plistlib.readPlist(plist).get("ServerRoot", serverRoot)
return serverRoot
def usage():
name = os.path.basename(sys.argv[0])
print("Usage: %s [-hv] old-ip new-ip [old-hostname new-hostname]" % (name,))
print(" Options:")
print(" -h - print this message and exit")
print(" -f - path to config file")
print(" Arguments:")
print(" old-ip - current IPv4 address of the server")
print(" new-ip - new IPv4 address of the server")
print(" old-hostname - current FQDN for the server")
print(" new-hostname - new FQDN for the server")
def log(msg):
serverRoot = serverRootLocation()
logDir = os.path.join(serverRoot, "Logs")
logFile = os.path.join(logDir, "changeip.log")
try:
timestamp = datetime.datetime.now().strftime("%b %d %H:%M:%S")
msg = "changeip_calendar: %s %s" % (timestamp, msg)
with open(logFile, 'a') as output:
output.write("%s\n" % (msg,))
except IOError:
# Could not write to log
pass
def main():
name = os.path.basename(sys.argv[0])
# The serveradmin command must be run as root, so this script must be run as root too
if os.getuid() != 0:
print("%s must be run as root" % (name,))
sys.exit(1)
try:
(optargs, args) = getopt(
sys.argv[1:], "hf:", [
"help",
"config=",
]
)
except GetoptError:
usage()
sys.exit(1)
configFile = None
for opt, arg in optargs:
if opt in ("-h", "--help"):
usage()
sys.exit(1)
elif opt in ("-f", "--config"):
configFile = arg
try:
    oldIP, newIP = args[0:2]
except ValueError:
    usage()
    sys.exit(1)
try:
oldHostname, newHostname = args[2:4]
except ValueError:
oldHostname = newHostname = None
log("args: {}".format(args))
config = readConfig(configFile=configFile)
updateConfig(
config,
oldIP, newIP,
oldHostname, newHostname
)
writeConfig(config)
def sendCommand(commandDict, configFile=None):
args = [CALENDARSERVER_CONFIG]
if configFile is not None:
# Pass the flag and its value as separate argv elements
args.extend(["-f", configFile])
child = subprocess.Popen(
args=args,
stdin=subprocess.PIPE,
stdout=subprocess.PIPE,
stderr=subprocess.PIPE,
)
commandString = plistlib.writePlistToString(commandDict)
log("Sending to calendarserver_config: {}".format(commandString))
output, error = child.communicate(input=commandString)
log("Output from calendarserver_config: {}".format(output))
if child.returncode:
log(
"Error from calendarserver_config: {}, {}".format(
child.returncode, error
)
)
return None
else:
return plistlib.readPlistFromString(output)["result"]
def readConfig(configFile=None):
"""
Ask calendarserver_config for the current configuration
"""
command = {
"command": "readConfig"
}
return sendCommand(command, configFile=configFile)
def writeConfig(valuesDict, configFile=None):
"""
Ask calendarserver_config to update the configuration
"""
command = {
"command": "writeConfig",
"Values": valuesDict,
}
return sendCommand(command, configFile=configFile)
def updateConfig(
config,
oldIP, newIP,
oldHostname, newHostname,
configFile=None
):
keys = (
("Scheduling", "iMIP", "Receiving", "Server"),
("Scheduling", "iMIP", "Sending", "Server"),
("Scheduling", "iMIP", "Sending", "Address"),
("ServerHostName",),
)
def _replace(value, oldIP, newIP, oldHostname, newHostname):
newValue = value.replace(oldIP, newIP)
if oldHostname and newHostname:
newValue = newValue.replace(oldHostname, newHostname)
if value != newValue:
log("Changed %s -> %s" % (value, newValue))
return newValue
for keyPath in keys:
parent = config
path = keyPath[:-1]
key = keyPath[-1]
for step in path:
if step not in parent:
parent = None
break
parent = parent[step]
if parent:
if key in parent:
value = parent[key]
if isinstance(value, list):
newValue = []
for item in value:
item = _replace(
item, oldIP, newIP, oldHostname, newHostname
)
newValue.append(item)
else:
newValue = _replace(
value, oldIP, newIP, oldHostname, newHostname
)
parent[key] = newValue
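updateConfig() walks each key path down the nested plist dictionary to the parent of the final key, skipping any path whose intermediate keys are missing, and then rewrites the value in place. A minimal sketch of that traversal for the scalar case (function name and sample config are illustrative, not part of the tool):

```python
def replace_at_keypath(config, keypath, old, new):
    # Walk to the dict that holds the last key; give up quietly if any
    # intermediate step is missing, mirroring the tool's behavior.
    parent = config
    for step in keypath[:-1]:
        if step not in parent:
            return
        parent = parent[step]
    key = keypath[-1]
    if key in parent and isinstance(parent[key], str):
        parent[key] = parent[key].replace(old, new)

cfg = {"Scheduling": {"iMIP": {"Sending": {"Server": "10.0.0.1"}}}}
replace_at_keypath(cfg, ("Scheduling", "iMIP", "Sending", "Server"),
                   "10.0.0.1", "10.0.0.2")
print(cfg["Scheduling"]["iMIP"]["Sending"]["Server"])
```

The real function additionally handles list values by mapping the replacement over each item, and substitutes the hostname pair as well as the IP pair.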
if __name__ == '__main__':
main()
calendarserver-7.0+dfsg/calendarserver/tools/checkdatabaseschema.py
##
# Copyright (c) 2014-2015 Apple Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
##
from __future__ import print_function
from getopt import getopt, GetoptError
import os
import re
import subprocess
import sys
from twext.enterprise.dal.model import Schema, Table, Column, Sequence
from twext.enterprise.dal.parseschema import addSQLToSchema, schemaFromPath
from twisted.python.filepath import FilePath
USERNAME = "caldav"
DATABASENAME = "caldav"
PGSOCKETDIR = "127.0.0.1"
SCHEMADIR = "./txdav/common/datastore/sql_schema/"
# Executables:
PSQL = "../postgresql/_root/bin/psql"
def usage(e=None):
name = os.path.basename(sys.argv[0])
print("usage: %s [options]" % (name,))
print("")
print(" Check calendar server postgres database and schema")
print("")
print("options:")
print(" -d: path to server's sql_schema directory [./txdav/common/datastore/sql_schema/]")
print(" -k: postgres socket path (value for psql -h argument [127.0.0.1])")
print(" -p: location of psql tool if not on PATH already [psql]")
print(" -x: use default values for OS X server")
print(" -h --help: print this help and exit")
print(" -v --verbose: print additional information")
print("")
if e:
sys.stderr.write("%s\n" % (e,))
sys.exit(64)
else:
sys.exit(0)
def execSQL(title, stmt, verbose=False):
"""
Execute the provided SQL statement, return results as a list of rows.
@param stmt: the SQL to execute
@type stmt: L{str}
"""
cmdArgs = [
PSQL,
"-h", PGSOCKETDIR,
"-d", DATABASENAME,
"-U", USERNAME,
"-t",
"-c", stmt,
]
try:
if verbose:
print("\n{}".format(title))
print("Executing: {}".format(" ".join(cmdArgs)))
out = subprocess.check_output(cmdArgs, stderr=subprocess.STDOUT)
if verbose:
print(out)
except subprocess.CalledProcessError, e:
if verbose:
print(e.output)
raise CheckSchemaError(
"%s failed:\n%s (exit code = %d)" %
(PSQL, e.output, e.returncode)
)
return [s.strip() for s in out.splitlines()[:-1]]
def getSchemaVersion(verbose=False):
"""
Return the version number for the schema installed in the database.
Raise CheckSchemaError if there is an issue.
"""
out = execSQL(
"Reading schema version...",
"select value from calendarserver where name='VERSION';",
verbose
)
try:
version = int(out[0])
except (IndexError, ValueError), e:
raise CheckSchemaError(
"Failed to parse schema version: %s" % (e,)
)
return version
def dumpCurrentSchema(verbose=False):
schema = Schema("Dumped schema")
tables = {}
# Tables
rows = execSQL(
"Schema tables...",
"select table_name from information_schema.tables where table_schema = 'public';",
verbose
)
for row in rows:
name = row
table = Table(schema, name)
tables[name] = table
# Columns
rows = execSQL(
"Reading table '{}' columns...".format(name),
"select column_name from information_schema.columns where table_schema = 'public' and table_name = '{}';".format(name),
verbose
)
for row in rows:
name = row
# TODO: figure out the type
column = Column(table, name, None)
table.columns.append(column)
# Indexes
# TODO: handle implicit indexes created via primary key() and unique() statements within CREATE TABLE
rows = execSQL(
"Schema indexes...",
"select indexdef from pg_indexes where schemaname = 'public';",
verbose
)
for indexdef in rows:
addSQLToSchema(schema, indexdef.replace("public.", ""))
# Sequences
rows = execSQL(
"Schema sequences...",
"select sequence_name from information_schema.sequences where sequence_schema = 'public';",
verbose
)
for row in rows:
name = row
Sequence(schema, name)
return schema
def checkSchema(dbversion, verbose=False):
"""
Compare schema in the database with the expected schema file.
"""
dbschema = dumpCurrentSchema(verbose)
# Find current schema
fp = FilePath(SCHEMADIR)
fpschema = fp.child("old").child("postgres-dialect").child("v{}.sql".format(dbversion))
if not fpschema.exists():
fpschema = fp.child("current.sql")
expectedSchema = schemaFromPath(fpschema)
mismatched = dbschema.compare(expectedSchema)
if mismatched:
print("\nCurrent schema in database is mismatched:\n\n" + "\n".join(mismatched))
else:
print("\nCurrent schema in database is a match to the expected server version")
class CheckSchemaError(Exception):
pass
def error(s):
sys.stderr.write("%s\n" % (s,))
sys.exit(1)
def main():
try:
(optargs, _ignore_args) = getopt(
sys.argv[1:], "d:hk:p:vx", [
"help",
"verbose",
],
)
except GetoptError, e:
usage(e)
verbose = False
global SCHEMADIR, PGSOCKETDIR, PSQL
for opt, arg in optargs:
if opt in ("-h", "--help"):
usage()
elif opt in ("-d",):
SCHEMADIR = arg
elif opt in ("-k",):
PGSOCKETDIR = arg
elif opt in ("-p",):
PSQL = arg
elif opt in ("-x",):
sktdir = FilePath("/var/run/caldavd")
for skt in sktdir.children():
if skt.basename().startswith("ccs_postgres_"):
PGSOCKETDIR = skt.path
PSQL = "/Applications/Server.app/Contents/ServerRoot/usr/bin/psql"
SCHEMADIR = "/Applications/Server.app/Contents/ServerRoot/usr/share/caldavd/lib/python/txdav/common/datastore/sql_schema/"
elif opt in ("-v", "--verbose"):
verbose = True
else:
raise NotImplementedError(opt)
# Retrieve the db_version number of the installed schema
try:
db_version = getSchemaVersion(verbose=verbose)
except CheckSchemaError, e:
db_version = 0
# Retrieve the version number from the schema file
currentschema = FilePath(SCHEMADIR).child("current.sql")
try:
data = currentschema.getContent()
except IOError:
print("Unable to open the current schema file: %s" % (currentschema.path,))
else:
found = re.search(r"insert into CALENDARSERVER values \('VERSION', '(\d+)'\);", data)
if found is None:
print("Schema is missing required schema VERSION insert statement: %s" % (currentschema.path,))
else:
current_version = int(found.group(1))
if db_version == current_version:
print("Schema version {} is current".format(db_version))
else: # upgrade needed
print("Schema needs to be upgraded from {} to {}".format(db_version, current_version))
checkSchema(db_version, verbose)
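main() determines the expected schema version by regex-matching the VERSION insert statement in current.sql and comparing the extracted integer against the version read from the database. A standalone sketch of that extraction (sample SQL and function name are illustrative):

```python
import re

def expected_schema_version(schema_sql):
    # Pull the integer VERSION value from the CALENDARSERVER insert
    # statement; return None if the statement is missing.
    found = re.search(
        r"insert into CALENDARSERVER values \('VERSION', '(\d+)'\);",
        schema_sql)
    return int(found.group(1)) if found else None

sample = "insert into CALENDARSERVER values ('VERSION', '51');"
print(expected_schema_version(sample))
```

When the extracted value differs from the database's VERSION row, the tool reports that an upgrade is needed and then compares the installed schema against the matching old schema file.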
if __name__ == "__main__":
main()
calendarserver-7.0+dfsg/calendarserver/tools/cmdline.py
##
# Copyright (c) 2005-2015 Apple Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
##
"""
Shared main-point between utilities.
"""
from calendarserver.tap.util import checkDirectories
from calendarserver.tools.util import loadConfig, autoDisableMemcached
from twext.python.log import StandardIOObserver
from twistedcaldav.config import ConfigurationError
from twisted.internet.defer import inlineCallbacks, succeed
from twisted.application.service import Service
from twisted.python.logfile import LogFile
from twisted.python.log import FileLogObserver
import sys
from calendarserver.tap.util import getRootResource
from errno import ENOENT, EACCES
from twext.enterprise.jobs.queue import NonPerformingQueuer
from twistedcaldav.timezones import TimezoneCache
# TODO: direct unit tests for these functions.
def utilityMain(
configFileName, serviceClass, reactor=None, serviceMaker=None,
patchConfig=None, onShutdown=None, verbose=False, loadTimezones=False
):
"""
Shared main-point for utilities.
This function will:
- Load the configuration file named by C{configFileName},
- launch a L{CalDAVServiceMaker} with the C{ProcessType} of
C{"Utility"}
- run the reactor, with start/stop events hooked up to the service's
C{startService}/C{stopService} methods.
It is C{serviceClass}'s responsibility to stop the reactor when it's
complete.
@param configFileName: the name of the configuration file to load.
@type configFileName: C{str}
@param serviceClass: a 1-argument callable which takes an object that
provides L{ICalendarStore} and/or L{IAddressbookStore} and returns an
L{IService}.
@param patchConfig: a 1-argument callable which takes a config object
and makes any changes necessary for the tool.
@param onShutdown: a 0-argument callable which will run on shutdown.
@param reactor: if specified, the L{IReactorTime} / L{IReactorThreads} /
L{IReactorTCP} (etc) provider to use. If C{None}, the default reactor
will be imported and used.
"""
from calendarserver.tap.caldav import CalDAVServiceMaker, CalDAVOptions
if serviceMaker is None:
serviceMaker = CalDAVServiceMaker
# We want to validate that the actual service is always an instance of WorkerService, so wrap the
# service maker callback inside a function that does that check
def _makeValidService(store):
service = serviceClass(store)
assert isinstance(service, WorkerService)
return service
# Install std i/o observer
if verbose:
observer = StandardIOObserver()
observer.start()
if reactor is None:
from twisted.internet import reactor
try:
config = loadConfig(configFileName)
if patchConfig is not None:
patchConfig(config)
checkDirectories(config)
utilityLogFile = LogFile.fromFullPath(
config.UtilityLogFile,
rotateLength=config.ErrorLogRotateMB * 1024 * 1024,
maxRotatedFiles=config.ErrorLogMaxRotatedFiles
)
utilityLogObserver = FileLogObserver(utilityLogFile)
utilityLogObserver.start()
config.ProcessType = "Utility"
config.UtilityServiceClass = _makeValidService
autoDisableMemcached(config)
maker = serviceMaker()
# Only perform post-import duties if someone has explicitly said to
maker.doPostImport = getattr(maker, "doPostImport", False)
options = CalDAVOptions()
service = maker.makeService(options)
reactor.addSystemEventTrigger("during", "startup", service.startService)
reactor.addSystemEventTrigger("before", "shutdown", service.stopService)
if onShutdown is not None:
reactor.addSystemEventTrigger("before", "shutdown", onShutdown)
if loadTimezones:
TimezoneCache.create()
except (ConfigurationError, OSError), e:
sys.stderr.write("Error: %s\n" % (e,))
return
reactor.run()
class WorkerService(Service):
def __init__(self, store):
self.store = store
def rootResource(self):
try:
from twistedcaldav.config import config
rootResource = getRootResource(config, self.store)
except OSError, e:
if e.errno == ENOENT:
# Trying to re-write resources.xml but its parent directory does
# not exist. The server's never been started, so we're missing
# state required to do any work.
raise ConfigurationError(
"It appears that the server has never been started.\n"
"Please start it at least once before running this tool.")
elif e.errno == EACCES:
# Trying to re-write resources.xml but it is not writable by the
# current user. This most likely means we're in a system
# configuration and the user doesn't have sufficient privileges
# to do the other things the tool might need to do either.
raise ConfigurationError("You must run this tool as root.")
else:
raise
return rootResource
@inlineCallbacks
def startService(self):
try:
# Work can be queued but will not be performed by the command
# line tool
if self.store is not None:
self.store.queuer = NonPerformingQueuer()
yield self.doWork()
else:
yield self.doWorkWithoutStore()
except ConfigurationError, ce:
sys.stderr.write("Error: %s\n" % (str(ce),))
except Exception, e:
sys.stderr.write("Error: %s\n" % (e,))
raise
finally:
self.postStartService()
def doWorkWithoutStore(self):
"""
Subclasses can override doWorkWithoutStore if there is any work they
can accomplish without access to the store, or if they want to emit
their own error message.
"""
sys.stderr.write("Error: Data store is not available\n")
return succeed(None)
def postStartService(self):
"""
By default, stop the reactor after doWork( ) finishes. Subclasses
can override this if they want different behavior.
"""
if hasattr(self, "reactor"):
self.reactor.stop()
else:
from twisted.internet import reactor
reactor.stop()
calendarserver-7.0+dfsg/calendarserver/tools/config.py
#!/usr/bin/env python
##
# Copyright (c) 2010-2015 Apple Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
##
from __future__ import print_function
# Suppress warning that occurs on Linux
import sys
if sys.platform.startswith("linux"):
from Crypto.pct_warnings import PowmInsecureWarning
import warnings
warnings.simplefilter("ignore", PowmInsecureWarning)
"""
This tool gets and sets Calendar Server configuration keys
"""
from getopt import getopt, GetoptError
from itertools import chain
import os
import plistlib
import signal
import subprocess
import xml
from plistlib import readPlistFromString, writePlistToString
from twistedcaldav.config import config, ConfigDict, ConfigurationError, mergeData
from twistedcaldav.stdconfig import DEFAULT_CONFIG_FILE
from twisted.python.filepath import FilePath
WRITABLE_CONFIG_KEYS = [
"AccountingCategories",
"Authentication.Basic.AllowedOverWireUnencrypted",
"Authentication.Basic.Enabled",
"Authentication.Digest.AllowedOverWireUnencrypted",
"Authentication.Digest.Enabled",
"Authentication.Kerberos.AllowedOverWireUnencrypted",
"Authentication.Kerberos.Enabled",
"Authentication.Wiki.Enabled",
"DefaultLogLevel",
"DirectoryAddressBook.params.queryPeopleRecords",
"DirectoryAddressBook.params.queryUserRecords",
"EnableCalDAV",
"EnableCardDAV",
"EnableSACLs",
"EnableSearchAddressBook",
"EnableSSL",
"HTTPPort",
"LogLevels",
"Notifications.Services.APNS.Enabled",
"RedirectHTTPToHTTPS",
"Scheduling.iMIP.Enabled",
"Scheduling.iMIP.Receiving.Port",
"Scheduling.iMIP.Receiving.Server",
"Scheduling.iMIP.Receiving.Type",
"Scheduling.iMIP.Receiving.Username",
"Scheduling.iMIP.Receiving.UseSSL",
"Scheduling.iMIP.Sending.Address",
"Scheduling.iMIP.Sending.Port",
"Scheduling.iMIP.Sending.Server",
"Scheduling.iMIP.Sending.Username",
"Scheduling.iMIP.Sending.UseSSL",
"ServerHostName",
"SSLAuthorityChain",
"SSLCertificate",
"SSLPort",
"SSLPrivateKey",
]
READONLY_CONFIG_KEYS = [
"ServerRoot",
]
ACCOUNTING_CATEGORIES = {
"http": ("HTTP",),
"itip": ("iTIP", "iTIP-VFREEBUSY",),
"implicit": ("Implicit Errors",),
"autoscheduling": ("AutoScheduling",),
"ischedule": ("iSchedule",),
"migration": ("migration",),
}
LOGGING_CATEGORIES = {
"directory": ("twext^who", "txdav^who",),
"imip": ("txdav^caldav^datastore^scheduling^imip",),
}
def usage(e=None):
if e:
print(e)
print("")
name = os.path.basename(sys.argv[0])
print("usage: %s [options] config_key" % (name,))
print("")
print("Print the value of the given config key, or set it using config_key=value.")
print("options:")
print(" -a --accounting: Specify accounting categories (combinations of http, itip, implicit, autoscheduling, ischedule, migration, or off)")
print(" -h --help: print this help and exit")
print(" -f --config: Specify caldavd.plist configuration path")
print(" -l --logging: Specify logging categories (combinations of directory, imip, or default)")
print(" -r --restart: Restart the calendar service")
print(" --start: Start the calendar service and agent")
print(" --stop: Stop the calendar service and agent")
print(" -w --writeconfig: Specify caldavd.plist configuration path for writing")
if e:
sys.exit(64)
else:
sys.exit(0)
def runAsRootCheck():
"""
If we're running in Server.app context and are not running as root, exit.
"""
if os.path.abspath(__file__).startswith("/Applications/Server.app/"):
if os.getuid() != 0:
print("Must be run as root")
sys.exit(1)
def main():
runAsRootCheck()
try:
(optargs, args) = getopt(
sys.argv[1:], "hf:rw:a:l:", [
"help",
"config=",
"writeconfig=",
"restart",
"start",
"stop",
"accounting=",
"logging=",
],
)
except GetoptError, e:
usage(e)
configFileName = DEFAULT_CONFIG_FILE
writeConfigFileName = ""
accountingCategories = None
loggingCategories = None
doStop = False
doStart = False
doRestart = False
for opt, arg in optargs:
if opt in ("-h", "--help"):
usage()
elif opt in ("-f", "--config"):
configFileName = arg
elif opt in ("-w", "--writeconfig"):
writeConfigFileName = arg
elif opt in ("-a", "--accounting"):
accountingCategories = arg.split(",")
elif opt in ("-l", "--logging"):
loggingCategories = arg.split(",")
if opt == "--stop":
doStop = True
if opt == "--start":
doStart = True
if opt in ("-r", "--restart"):
doRestart = True
try:
config.load(configFileName)
except ConfigurationError, e:
sys.stdout.write("%s\n" % (e,))
sys.exit(1)
if not writeConfigFileName:
# If --writeconfig was not passed, use WritableConfigFile from
# main plist. If that's an empty string, writes will happen to
# the main file.
writeConfigFileName = config.WritableConfigFile
if not writeConfigFileName:
writeConfigFileName = configFileName
if doStop:
setServiceState("org.calendarserver.agent", "disable")
setServiceState("org.calendarserver.calendarserver", "disable")
setEnabled(False)
if doStart:
setServiceState("org.calendarserver.agent", "enable")
setServiceState("org.calendarserver.calendarserver", "enable")
setEnabled(True)
if doStart or doStop:
setReverseProxies()
sys.exit(0)
if doRestart:
restartService(config.PIDFile)
sys.exit(0)
writable = WritableConfig(config, writeConfigFileName)
writable.read()
# Convert logging categories to actual config key changes
if loggingCategories is not None:
args = ["LogLevels="]
if loggingCategories != ["default"]:
for cat in loggingCategories:
if cat in LOGGING_CATEGORIES:
for moduleName in LOGGING_CATEGORIES[cat]:
args.append("LogLevels.{}=debug".format(moduleName))
# Convert accounting categories to actual config key changes
if accountingCategories is not None:
args = ["AccountingCategories="]
if accountingCategories != ["off"]:
for cat in accountingCategories:
if cat in ACCOUNTING_CATEGORIES:
for key in ACCOUNTING_CATEGORIES[cat]:
args.append("AccountingCategories.{}=True".format(key))
processArgs(writable, args)
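main() expands the -l/-a category names into the same key=value assignment strings a user could pass on the command line: an empty assignment (e.g. "LogLevels=") clears the existing dictionary, then one "LogLevels.<module>=debug" entry is appended per module in each selected category. A self-contained sketch of that expansion, reusing the directory/imip table from LOGGING_CATEGORIES above:

```python
LOGGING = {
    "directory": ("twext^who", "txdav^who"),
    "imip": ("txdav^caldav^datastore^scheduling^imip",),
}

def logging_args(categories):
    # Start by clearing LogLevels; "default" means leave it cleared,
    # otherwise set each category's modules to debug.
    args = ["LogLevels="]
    if categories != ["default"]:
        for cat in categories:
            for module in LOGGING.get(cat, ()):
                args.append("LogLevels.{}=debug".format(module))
    return args

print(logging_args(["directory"]))
```

The accounting expansion works identically, producing "AccountingCategories.<key>=True" entries instead.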
def setServiceState(service, state):
"""
Invoke serverctl to enable/disable a service
"""
SERVERCTL = "/Applications/Server.app/Contents/ServerRoot/usr/sbin/serverctl"
child = subprocess.Popen(
args=[SERVERCTL, state, "service={}".format(service)],
stdin=subprocess.PIPE,
stdout=subprocess.PIPE,
stderr=subprocess.PIPE,
)
output, error = child.communicate()
if child.returncode:
sys.stdout.write(
"Error from serverctl: %d, %s\n" % (child.returncode, error)
)
def setEnabled(enabled):
command = {
"command": "writeConfig",
"Values": {
"EnableCalDAV": enabled,
"EnableCardDAV": enabled,
},
}
runner = Runner([command], quiet=True)
runner.run()
def setReverseProxies():
"""
Invoke calendarserver_reverse_proxies
"""
SERVERCTL = "/Applications/Server.app/Contents/ServerRoot/usr/libexec/calendarserver_reverse_proxies"
child = subprocess.Popen(
args=[SERVERCTL, ],
stdin=subprocess.PIPE,
stdout=subprocess.PIPE,
stderr=subprocess.PIPE,
)
child.communicate()
def processArgs(writable, args, restart=True):
"""
Perform the read/write operations requested in the command line args.
If there are no args, stdin is read, and plist-formatted commands are
processed from there.
@param writable: the WritableConfig
@param args: a list of utf-8 encoded strings
@param restart: whether to restart the calendar server after making a
config change.
"""
if args:
for configKey in args:
# args come in as utf-8 encoded strings
configKey = configKey.decode("utf-8")
if "=" in configKey:
# This is an assignment
configKey, stringValue = configKey.split("=", 1)
if stringValue:
value = writable.convertToValue(stringValue)
valueDict = setKeyPath({}, configKey, value)
writable.set(valueDict)
else:
writable.delete(configKey)
else:
# This is a read
c = config
for subKey in configKey.split("."):
c = c.get(subKey, None)
if c is None:
sys.stderr.write("No such config key: %s\n" % configKey)
break
sys.stdout.write("%s=%s\n" % (configKey.encode("utf-8"), c))
writable.save(restart=restart)
else:
# Read plist commands from stdin
rawInput = sys.stdin.read()
try:
plist = readPlistFromString(rawInput)
# Note: values in plist will already be unicode
except xml.parsers.expat.ExpatError, e:
respondWithError(str(e))
return
# If the plist is an array, each element of the array is a separate
# command dictionary.
if isinstance(plist, list):
commands = plist
else:
commands = [plist]
runner = Runner(commands)
runner.run()
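The per-argument dispatch in processArgs (assignment when the string contains "=", deletion when the value after "=" is empty, otherwise a read) can be sketched in isolation. `classify` below is a hypothetical helper for illustration only, not part of this module:

```python
# Hypothetical sketch of processArgs' argument classification; shown only
# to document the "key", "key=", and "key=value" forms accepted above.

def classify(arg):
    if "=" in arg:
        # partition (unlike split) tolerates "=" inside the value
        key, _, value = arg.partition("=")
        return ("delete" if value == "" else "assign", key)
    return ("read", arg)

assert classify("EnableCalDAV=True") == ("assign", "EnableCalDAV")
assert classify("EnableCalDAV=") == ("delete", "EnableCalDAV")
assert classify("EnableCalDAV") == ("read", "EnableCalDAV")
```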
class Runner(object):
"""
A class which carries out commands; each command is a dict (parsed from
a plist) with a "command" key, plus command-specific data.
"""
def __init__(self, commands, quiet=False):
"""
@param commands: the commands to run
@type commands: list of dicts
"""
self.commands = commands
self.quiet = quiet
def validate(self):
"""
Validate all the commands by making sure this class implements
all the command keys.
@return: True if all commands are valid, False otherwise
"""
# Make sure commands are valid
for command in self.commands:
if 'command' not in command:
respondWithError("'command' missing from plist")
return False
commandName = command['command']
methodName = "command_%s" % (commandName,)
if not hasattr(self, methodName):
respondWithError("Unknown command '%s'" % (commandName,))
return False
return True
def run(self):
"""
Find the appropriate method for each command and call them.
"""
try:
for command in self.commands:
commandName = command['command']
methodName = "command_%s" % (commandName,)
if hasattr(self, methodName):
getattr(self, methodName)(command)
else:
respondWithError("Unknown command '%s'" % (commandName,))
except Exception, e:
respondWithError("Command failed: '%s'" % (str(e),))
raise
def command_readConfig(self, command):
"""
Return current configuration
@param command: the dictionary parsed from the plist read from stdin
@type command: C{dict}
"""
result = {}
for keyPath in chain(WRITABLE_CONFIG_KEYS, READONLY_CONFIG_KEYS):
value = getKeyPath(config, keyPath)
if value is not None:
# Note: config contains utf-8 encoded strings, but plistlib
# wants unicode, so decode here:
if isinstance(value, str):
value = value.decode("utf-8")
setKeyPath(result, keyPath, value)
respond(command, result)
def command_writeConfig(self, command):
"""
Write config to secondary, writable plist
@param command: the dictionary parsed from the plist read from stdin
@type command: C{dict}
"""
writable = WritableConfig(config, config.WritableConfigFile)
writable.read()
valuesToWrite = command.get("Values", {})
# Note: values are unicode if they contain non-ascii
for keyPath, value in flattenDictionary(valuesToWrite):
if keyPath in WRITABLE_CONFIG_KEYS:
writable.set(setKeyPath(ConfigDict(), keyPath, value))
try:
writable.save(restart=False)
except Exception, e:
respond(command, {"error": str(e)})
else:
config.reload()
if not self.quiet:
self.command_readConfig(command)
def setKeyPath(parent, keyPath, value):
"""
Allows the setting of arbitrary nested dictionary keys via a single
dot-separated string. For example, setKeyPath(parent, "foo.bar.baz",
"xyzzy") would create any intermediate missing dictionaries (or whatever
class parent is, such as ConfigDict) so that the following structure
results: parent = { "foo" : { "bar" : { "baz" : "xyzzy" } } }
@param parent: the object to modify
@type parent: any dict-like object
@param keyPath: a dot-delimited string specifying the path of keys to
traverse
@type keyPath: C{str}
@param value: the value to set
@type value: C{object}
@return: parent
"""
original = parent
# Embedded ^ should be replaced by .
# That way, we can still use . as key path separator even though sometimes
# a key part wants to have a . in it (such as a LogLevels module).
# For example: LogLevels.twext^who will get converted to a "LogLevels"
# dict containing the key "twext.who"
parts = [p.replace("^", ".") for p in keyPath.split(".")]
for part in parts[:-1]:
child = parent.get(part, None)
if child is None:
parent[part] = child = parent.__class__()
parent = child
parent[parts[-1]] = value
return original
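A minimal, dependency-free restatement of the keyPath convention above, using plain dicts instead of ConfigDict (`set_key_path` is illustrative only):

```python
# Illustrative restatement of setKeyPath on plain dicts; "^" escapes a
# literal "." inside a single key part, as described in the comment above.

def set_key_path(parent, key_path, value):
    parts = [p.replace("^", ".") for p in key_path.split(".")]
    for part in parts[:-1]:
        parent = parent.setdefault(part, {})
    parent[parts[-1]] = value

d = {}
set_key_path(d, "LogLevels.twext^who", "debug")
assert d == {"LogLevels": {"twext.who": "debug"}}
```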
def getKeyPath(parent, keyPath):
"""
Allows the getting of arbitrary nested dictionary keys via a single
dot-separated string. For example, getKeyPath(parent, "foo.bar.baz")
would fetch parent["foo"]["bar"]["baz"]. If any of the keys don't
exist, None is returned instead.
@param parent: the object to traverse
@type parent: any dict-like object
@param keyPath: a dot-delimited string specifying the path of keys to
traverse
@type keyPath: C{str}
@return: the value at keyPath
"""
parts = keyPath.split(".")
for part in parts[:-1]:
child = parent.get(part, None)
if child is None:
return None
parent = child
return parent.get(parts[-1], None)
def flattenDictionary(dictionary, current=""):
"""
Returns a generator of (keyPath, value) tuples for the given dictionary,
where each keyPath is a dot-separated string representing the complete
path to a nested key.
@param dictionary: the dict object to traverse
@type dictionary: C{dict}
@param current: do not use; used internally for recursion
@type current: C{str}
@return: generator of (keyPath, value) tuples
"""
for key, value in dictionary.iteritems():
if isinstance(value, dict):
for result in flattenDictionary(value, current + key + "."):
yield result
else:
yield (current + key, value)
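flattenDictionary is the rough inverse of setKeyPath. The same traversal in a Python 3 spelling (illustrative only; the file itself uses Python 2's iteritems(), and the key paths below are made up for the example):

```python
# Illustrative Python 3 spelling of flattenDictionary's traversal.

def flatten(dictionary, current=""):
    for key, value in dictionary.items():
        if isinstance(value, dict):
            for pair in flatten(value, current + key + "."):
                yield pair
        else:
            yield (current + key, value)

pairs = sorted(flatten({"EnableCalDAV": True,
                        "Scheduling": {"iMIP": {"Enabled": False}}}))
assert pairs == [("EnableCalDAV", True), ("Scheduling.iMIP.Enabled", False)]
```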
def restartService(pidFilename):
"""
Given the path to a PID file, sends a HUP signal to the contained pid
in order to cause calendar server to restart.
@param pidFilename: an absolute path to a PID file
@type pidFilename: C{str}
"""
if os.path.exists(pidFilename):
pidFile = open(pidFilename, "r")
pid = pidFile.read().strip()
pidFile.close()
try:
pid = int(pid)
except ValueError:
return
try:
os.kill(pid, signal.SIGHUP)
except OSError:
pass
class WritableConfig(object):
"""
A wrapper around a Config object which allows writing of values. The idea
is a deployment could have a master plist which doesn't change, and have
it include a plist file which does. This class facilitates writing to that
included plist.
"""
def __init__(self, wrappedConfig, fileName):
"""
@param wrappedConfig: the Config object to read from
@type wrappedConfig: C{Config}
@param fileName: the full path to the modifiable plist
@type fileName: C{str}
"""
self.config = wrappedConfig
self.fileName = fileName
self.changes = None
self.currentConfigSubset = ConfigDict()
self.dirty = False
def set(self, data):
"""
Merges data into a ConfigDict of changes intended to be saved to disk
when save( ) is called.
@param data: a dict containing new values
@type data: C{dict}
"""
if not isinstance(data, ConfigDict):
data = ConfigDict(mapping=data)
mergeData(self.currentConfigSubset, data)
self.dirty = True
def delete(self, key):
"""
Deletes the specified key
@param key: the key to delete
@type key: C{str}
"""
try:
del self.currentConfigSubset[key]
self.dirty = True
except KeyError:
pass
def read(self):
"""
Reads in the data contained in the writable plist file.
@return: C{ConfigDict}
"""
if os.path.exists(self.fileName):
self.currentConfigSubset = ConfigDict(mapping=plistlib.readPlist(self.fileName))
else:
self.currentConfigSubset = ConfigDict()
def toString(self):
return plistlib.writePlistToString(self.currentConfigSubset)
def save(self, restart=False):
"""
Writes any outstanding changes to the writable plist file. Optionally
restart calendar server.
@param restart: whether to restart the calendar server.
@type restart: C{bool}
"""
if self.dirty:
content = writePlistToString(self.currentConfigSubset)
fp = FilePath(self.fileName)
fp.setContent(content)
self.dirty = False
if restart:
restartService(self.config.PIDFile)
@classmethod
def convertToValue(cls, string):
"""
Inspect string and convert the value into an appropriate Python data type
TODO: change this to look at actual types defined within stdconfig
"""
if "." in string:
try:
value = float(string)
except ValueError:
value = string
else:
try:
value = int(string)
except ValueError:
if string == "True":
value = True
elif string == "False":
value = False
else:
value = string
return value
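The conversion heuristic above ("." forces a float attempt, otherwise int, then the literals True/False, falling back to the raw string) behaves like this; `convert_to_value` is a standalone copy for illustration:

```python
# Standalone copy of convertToValue's heuristic, for illustration only.

def convert_to_value(string):
    if "." in string:
        try:
            return float(string)
        except ValueError:
            return string
    try:
        return int(string)
    except ValueError:
        if string == "True":
            return True
        if string == "False":
            return False
        return string

assert convert_to_value("8443") == 8443
assert convert_to_value("0.5") == 0.5
assert convert_to_value("True") is True
assert convert_to_value("example.com") == "example.com"
```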
def respond(command, result):
sys.stdout.write(writePlistToString({'command': command['command'], 'result': result}))
def respondWithError(msg, status=1):
sys.stdout.write(writePlistToString({'error': msg, }))
calendarserver-7.0+dfsg/calendarserver/tools/dashboard.py
##
# Copyright (c) 2012-2015 Apple Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
##
"""
A curses (or plain text) based dashboard for viewing various aspects of the
server as exposed by the L{DashboardProtocol} stats socket.
"""
from getopt import getopt, GetoptError
import curses.panel
import errno
import fcntl
import json
import logging
import os
import sched
import socket
import struct
import sys
import termios
import time
LOG_FILENAME = 'db.log'
# logging.basicConfig(filename=LOG_FILENAME,level=logging.DEBUG)
def usage(e=None):
name = os.path.basename(sys.argv[0])
print("usage: %s [options]" % (name,))
print("")
print(" TODO: describe usage")
print("")
print("options:")
print(" -h --help: print this help and exit")
print(" -t: text output, not curses")
print(" -s: server host (and optional port) [localhost:8100]")
print(" or unix socket path prefixed by 'unix:'")
print("")
if e:
sys.exit(64)
else:
sys.exit(0)
BOX_WIDTH = 52
def main():
try:
(optargs, _ignore_args) = getopt(
sys.argv[1:], "hs:t", [
"help",
],
)
except GetoptError, e:
usage(e)
#
# Get configuration
#
useCurses = True
server = ("localhost", 8100)
for opt, arg in optargs:
if opt in ("-h", "--help"):
usage()
elif opt == "-t":
useCurses = False
elif opt == "-s":
if not arg.startswith("unix:"):
server = arg.split(":")
if len(server) == 1:
server.append(8100)
else:
server[1] = int(server[1])
server = tuple(server)
else:
server = arg
else:
raise NotImplementedError(opt)
if useCurses:
def _wrapped(stdscrn):
curses.curs_set(0)
curses.use_default_colors()
curses.init_pair(1, curses.COLOR_RED, curses.COLOR_WHITE)
d = Dashboard(server, stdscrn, True)
d.run()
curses.wrapper(_wrapped)
else:
d = Dashboard(server, None, False)
d.run()
def safeDivision(value, total, factor=1):
return value * factor / total if total else 0
def defaultIfNone(x, default):
return x if x is not None else default
def terminal_size():
h, w, _ignore_hp, _ignore_wp = struct.unpack(
'HHHH',
fcntl.ioctl(
0, termios.TIOCGWINSZ,
struct.pack('HHHH', 0, 0, 0, 0)
)
)
return w, h
class Dashboard(object):
"""
Main dashboard controller. Use Python's L{sched} feature to schedule
updates.
"""
screen = None
registered_windows = {}
registered_order = []
def __init__(self, server, screen, usesCurses):
self.screen = screen
self.usesCurses = usesCurses
self.paused = False
self.seconds = 0.1 if usesCurses else 1.0
self.sched = sched.scheduler(time.time, time.sleep)
self.client = DashboardClient(server)
self.client_error = False
@classmethod
def registerWindow(cls, wtype, keypress):
"""
Register a window type along with a key press action. This allows the
controller to select the appropriate window when its key is pressed,
and also provides help information to the L{HelpWindow} for each
available window type.
"""
cls.registered_windows[keypress] = wtype
cls.registered_order.append(keypress)
def run(self):
"""
Create the initial window and run the L{scheduler}.
"""
self.windows = []
self.displayWindow(None)
self.sched.enter(self.seconds, 0, self.updateDisplay, ())
self.sched.run()
def displayWindow(self, wtype):
"""
Toggle the specified window, or reset to launch state if None.
"""
# Toggle a specific window on or off
if wtype is not None:
if wtype not in [type(w) for w in self.windows]:
self.windows.append(wtype(self.usesCurses, self.client).makeWindow())
self.windows[-1].activate()
else:
for window in self.windows:
if type(window) == wtype:
window.deactivate()
self.windows.remove(window)
if len(self.windows) == 0:
self.displayWindow(self.registered_windows["h"])
self.resetWindows()
# Reset the screen to the default config
else:
if self.windows:
for window in self.windows:
window.deactivate()
self.windows = []
top = 0
ordered_windows = [self.registered_windows[i] for i in self.registered_order]
for wtype in filter(lambda x: x.all, ordered_windows):
new_win = wtype(self.usesCurses, self.client).makeWindow(top=top)
logging.debug('created %r at panel level %r' % (new_win, new_win.z_order))
self.windows.append(wtype(self.usesCurses, self.client).makeWindow(top=top))
self.windows[-1].activate()
top += self.windows[-1].nlines + 1
# Don't display help panel if the window is too narrow
term_w, term_h = terminal_size()
logging.debug("logger displayWindow: rows: %s cols: %s" % (term_h, term_w))
if int(term_w) > 100:
logging.debug('term_w > 100, making window with top at %d' % (top))
self.windows.append(HelpWindow(self.usesCurses, self.client).makeWindow(top=top))
self.windows[-1].activate()
curses.panel.update_panels()
self.updateDisplay(True)
def resetWindows(self):
"""
Reset the current set of windows.
"""
if self.windows:
logging.debug('resetting windows: %r' % (self.windows))
for window in self.windows:
window.deactivate()
old_windows = self.windows
self.windows = []
top = 0
for old in old_windows:
logging.debug('processing window of type %r' % (type(old)))
self.windows.append(old.__class__(self.usesCurses, self.client).makeWindow(top=top))
self.windows[-1].activate()
# Allow the help window to float on the right edge
if old.__class__.__name__ != "HelpWindow":
top += self.windows[-1].nlines + 1
def updateDisplay(self, initialUpdate=False):
"""
Periodic update of the current window and check for a key press.
"""
self.client.update()
client_error = len(self.client.currentData) == 0
if client_error ^ self.client_error:
self.client_error = client_error
self.resetWindows()
elif filter(lambda x: x.requiresReset(), self.windows):
self.resetWindows()
try:
if not self.paused or initialUpdate:
for window in filter(
lambda x: x.requiresUpdate() or initialUpdate,
self.windows
):
window.update()
except Exception:
# print(str(e))
pass
if not self.usesCurses:
print("-------------")
# Check keystrokes
if self.usesCurses:
try:
c = self.windows[-1].window.getkey()
except curses.error:
c = -1
if c == "q":
sys.exit(0)
elif c == " ":
self.paused = not self.paused
elif c == "t":
self.seconds = 1.0 if self.seconds == 0.1 else 0.1
elif c == "a":
self.displayWindow(None)
elif c == "n":
if self.windows:
for window in self.windows:
window.deactivate()
self.windows = []
self.displayWindow(self.registered_windows["h"])
elif c in self.registered_windows:
self.displayWindow(self.registered_windows[c])
if not initialUpdate:
self.sched.enter(self.seconds, 0, self.updateDisplay, ())
class DashboardClient(object):
"""
Client that connects to a server and fetches information.
"""
def __init__(self, sockname):
self.socket = None
if isinstance(sockname, str):
self.sockname = sockname[5:]
self.useTCP = False
else:
self.sockname = sockname
self.useTCP = True
self.currentData = {}
self.items = []
def readSock(self, items):
"""
Open a socket, send the specified request, and retrieve the response. Keep the socket open.
"""
try:
if self.socket is None:
self.socket = socket.socket(socket.AF_INET if self.useTCP else socket.AF_UNIX, socket.SOCK_STREAM)
self.socket.connect(self.sockname)
self.socket.setblocking(0)
self.socket.sendall(json.dumps(items) + "\r\n")
data = ""
t = time.time()
while not data.endswith("\n"):
try:
d = self.socket.recv(1024)
except socket.error as se:
if se.args[0] != errno.EWOULDBLOCK:
raise
if time.time() - t > 5:
raise socket.error
continue
if d:
data += d
else:
break
data = json.loads(data)
except socket.error:
data = {}
self.socket = None
except ValueError:
data = {}
return data
def update(self):
"""
Update the current data from the server.
"""
self.currentData = self.readSock(self.items)
def getOneItem(self, item):
"""
Update the current data from the server.
"""
data = self.readSock([item])
return data[item] if data else None
def addItem(self, item):
"""
Add a server data item to monitor.
"""
self.items.append(item)
def removeItem(self, item):
"""
No need to monitor this item.
"""
self.items.remove(item)
class BaseWindow(object):
"""
Common behavior for window types.
"""
help = "Not Implemented"
all = True
clientItem = None
def __init__(self, usesCurses, client):
self.usesCurses = usesCurses
self.client = client
self.rowCount = 0
self.needsReset = False
self.z_order = 'bottom'
def makeWindow(self, top=0, left=0):
raise NotImplementedError()
def _createWindow(
self, title, nlines, ncols=BOX_WIDTH, begin_y=0, begin_x=0
):
"""
Initialize a curses window based on the sizes required.
"""
if self.usesCurses:
self.window = curses.newwin(nlines, ncols, begin_y, begin_x)
self.window.nodelay(1)
self.panel = curses.panel.new_panel(self.window)
getattr(self.panel, self.z_order)()
else:
self.window = None
self.title = title
self.nlines = nlines
self.ncols = ncols
self.iter = 0
self.lastResult = {}
def requiresUpdate(self):
"""
Indicates whether a window type has dynamic data that should be
refreshed on each update, or whether it is static data (e.g.,
L{HelpWindow}) that only needs to be drawn once.
"""
return True
def requiresReset(self):
"""
Indicates that the window needs a full reset because, e.g., the
number of items it displays has changed.
"""
return self.needsReset
def activate(self):
"""
About to start displaying.
"""
if self.clientItem:
self.client.addItem(self.clientItem)
def deactivate(self):
"""
Clear any drawing done by the current window type.
"""
if self.clientItem:
self.client.removeItem(self.clientItem)
if self.usesCurses:
self.window.erase()
self.window.refresh()
def update(self):
"""
Periodic window update - redraw the window.
"""
raise NotImplementedError()
def clientData(self):
return self.client.currentData.get(self.clientItem)
def readItem(self, item):
return self.client.getOneItem(item)
class HelpWindow(BaseWindow):
"""
Display help for the dashboard.
"""
help = "dashboard help"
all = False
helpItems = (
"",
"a - all windows",
"n - no windows",
" - (space) pause dashboard polling",
"t - toggle update between 0.1 and 1.0 seconds",
"",
"q - exit the dashboard",
)
def makeWindow(self, top=0, left=0):
term_w, _ignore_term_h = terminal_size()
help_x_offset = term_w - BOX_WIDTH + 4
self._createWindow(
"Help",
len(self.helpItems) + len(Dashboard.registered_windows) + 2,
ncols=BOX_WIDTH - 4,
begin_y=0,
begin_x=help_x_offset,
)
return self
def requiresUpdate(self):
return False
def update(self):
if self.usesCurses:
self.window.erase()
self.window.border()
self.window.addstr(0, 2, "Hotkeys")
x = 1
y = 1
items = []
for keypress, wtype in sorted(
Dashboard.registered_windows.items(), key=lambda x: x[0]
):
items.append("{} - {}".format(keypress, wtype.help))
items.extend(self.helpItems)
for item in items:
if self.usesCurses:
self.window.addstr(y, x, item)
else:
print(item)
y += 1
if self.usesCurses:
self.window.refresh()
class JobsWindow(BaseWindow):
"""
Display the status of the server's job queue.
"""
help = "server jobs"
clientItem = "jobs"
FORMAT_WIDTH = 98
def makeWindow(self, top=0, left=0):
nlines = defaultIfNone(self.readItem("jobcount"), 0)
self.rowCount = nlines
self._createWindow("Jobs", self.rowCount + 6, ncols=self.FORMAT_WIDTH, begin_y=top, begin_x=left)
return self
def update(self):
records = defaultIfNone(self.clientData(), {})
if len(records) != self.rowCount:
self.needsReset = True
return
self.iter += 1
if self.usesCurses:
self.window.erase()
self.window.border()
self.window.addstr(
0, 2,
self.title + " {} ({})".format(len(records), self.iter)
)
x = 1
y = 1
s1 = " {:<40}{:>8}{:>10}{:>8}{:>8}{:>10}{:>10} ".format(
"Work Type", "Queued", "Assigned", "Late", "Failed", "Completed", "Av-Time",
)
s2 = " {:<40}{:>8}{:>10}{:>8}{:>8}{:>10}{:>10} ".format(
"", "", "", "", "", "", "(ms)",
)
if self.usesCurses:
self.window.addstr(y, x, s1, curses.A_REVERSE)
self.window.addstr(y + 1, x, s2, curses.A_REVERSE)
else:
print(s1)
print(s2)
y += 2
total_queued = 0
total_assigned = 0
total_late = 0
total_failed = 0
total_completed = 0
total_time = 0.0
for work_type, details in sorted(records.items(), key=lambda x: x[0]):
total_queued += details["queued"]
total_assigned += details["assigned"]
total_late += details["late"]
total_failed += details["failed"]
total_completed += details["completed"]
total_time += details["time"]
changed = (
work_type in self.lastResult and
self.lastResult[work_type]["queued"] != details["queued"]
)
s = "{}{:<40}{:>8}{:>10}{:>8}{:>8}{:>10}{:>10.1f} ".format(
">" if details["queued"] else " ",
work_type,
details["queued"],
details["assigned"],
details["late"],
details["failed"],
details["completed"],
safeDivision(details["time"], details["completed"], 1000.0)
)
try:
if self.usesCurses:
self.window.addstr(
y, x, s,
curses.A_REVERSE if changed else (
curses.A_BOLD if details["queued"] else curses.A_NORMAL
)
)
else:
print(s)
except curses.error:
pass
y += 1
s = " {:<40}{:>8}{:>10}{:>8}{:>8}{:>10}{:>10.1f} ".format(
"Total:",
total_queued,
total_assigned,
total_late,
total_failed,
total_completed,
safeDivision(total_time, total_completed, 1000.0)
)
if self.usesCurses:
self.window.hline(y, x, "-", self.FORMAT_WIDTH - 2)
y += 1
self.window.addstr(y, x, s)
else:
print(s)
y += 1
if self.usesCurses:
self.window.refresh()
self.lastResult = records
class AssignmentsWindow(BaseWindow):
"""
Displays the job assignments across the server's child worker processes.
"""
help = "server child job assignments"
clientItem = "job_assignments"
FORMAT_WIDTH = 40
def makeWindow(self, top=0, left=0):
slots = defaultIfNone(self.readItem(self.clientItem), {"workers": ()})["workers"]
self.rowCount = len(slots)
self._createWindow(
"Job Assignments", self.rowCount + 5, self.FORMAT_WIDTH,
begin_y=top, begin_x=left
)
return self
def update(self):
data = defaultIfNone(self.clientData(), {"workers": {}, "level": 0})
records = data["workers"]
if len(records) != self.rowCount:
self.needsReset = True
return
self.iter += 1
if self.usesCurses:
self.window.erase()
self.window.border()
self.window.addstr(
0, 2,
self.title + " {} ({})".format(len(records), self.iter)
)
x = 1
y = 1
s = " {:>4}{:>12}{:>8}{:>12} ".format(
"Slot", "assigned", "load", "completed"
)
if self.usesCurses:
self.window.addstr(y, x, s, curses.A_REVERSE)
else:
print(s)
y += 1
total_assigned = 0
total_completed = 0
for ctr, details in enumerate(records):
assigned, load, completed = details
total_assigned += assigned
total_completed += completed
changed = (
ctr in self.lastResult and
self.lastResult[ctr] != assigned
)
s = " {:>4}{:>12}{:>8}{:>12} ".format(
ctr,
assigned,
load,
completed,
)
try:
if self.usesCurses:
self.window.addstr(
y, x, s,
curses.A_REVERSE if changed else curses.A_NORMAL
)
else:
print(s)
except curses.error:
pass
y += 1
s = " {:<6}{:>10}{:>8}{:>12}".format(
"Total:",
total_assigned,
"{}%".format(data["level"]),
total_completed,
)
if self.usesCurses:
self.window.hline(y, x, "-", self.FORMAT_WIDTH - 2)
y += 1
self.window.addstr(y, x, s)
else:
print(s)
y += 1
if self.usesCurses:
self.window.refresh()
self.lastResult = records
class HTTPSlotsWindow(BaseWindow):
"""
Displays the status of the server's master process worker slave slots.
"""
help = "server child slots"
clientItem = "slots"
FORMAT_WIDTH = 72
def makeWindow(self, top=0, left=0):
slots = defaultIfNone(self.readItem(self.clientItem), {"slots": ()})["slots"]
self.rowCount = len(slots)
self._createWindow(
"HTTP Slots", self.rowCount + 5, self.FORMAT_WIDTH,
begin_y=top, begin_x=left
)
return self
def update(self):
data = defaultIfNone(self.clientData(), {"slots": {}, "overloaded": False})
records = data["slots"]
if len(records) != self.rowCount:
self.needsReset = True
return
self.iter += 1
if self.usesCurses:
self.window.erase()
self.window.border()
self.window.addstr(
0, 2,
self.title + " {} ({})".format(len(records), self.iter)
)
x = 1
y = 1
s = " {:>4}{:>8}{:>8}{:>8}{:>8}{:>8}{:>8}{:>8}{:>8} ".format(
"Slot", "unack", "ack", "uncls", "total",
"start", "strting", "stopped", "abd"
)
if self.usesCurses:
self.window.addstr(y, x, s, curses.A_REVERSE)
else:
print(s)
y += 1
for record in sorted(records, key=lambda x: x["slot"]):
changed = (
record["slot"] in self.lastResult and
self.lastResult[record["slot"]] != record
)
s = " {:>4}{:>8}{:>8}{:>8}{:>8}{:>8}{:>8}{:>8}{:>8} ".format(
record["slot"],
record["unacknowledged"],
record["acknowledged"],
record["unclosed"],
record["total"],
record["started"],
record["starting"],
record["stopped"],
record["abandoned"],
)
try:
count = record["unacknowledged"] + record["acknowledged"]
if self.usesCurses:
self.window.addstr(
y, x, s,
curses.A_REVERSE if changed else (
curses.A_BOLD if count else curses.A_NORMAL
)
)
else:
print(s)
except curses.error:
pass
y += 1
s = " {:<12}{:>8}{:>16}".format(
"Total:",
sum(
[
record["unacknowledged"] + record["acknowledged"]
for record in records
]
),
sum([record["total"] for record in records]),
)
if self.usesCurses:
self.window.hline(y, x, "-", self.FORMAT_WIDTH - 2)
y += 1
self.window.addstr(y, x, s)
x += len(s) + 4
s = "{:>10}".format("OVERLOADED" if data["overloaded"] else "")
self.window.addstr(
y, x, s,
curses.color_pair(1) + curses.A_BOLD if data["overloaded"] else curses.A_NORMAL
)
else:
if data["overloaded"]:
s += " OVERLOADED"
print(s)
y += 1
if self.usesCurses:
self.window.refresh()
self.lastResult = records
class SystemWindow(BaseWindow):
"""
Displays the system information provided by the server.
"""
help = "system details"
clientItem = "stats_system"
def makeWindow(self, top=0, left=0):
slots = defaultIfNone(self.readItem(self.clientItem), (1, 2, 3, 4,))
self.rowCount = len(slots)
self._createWindow("System", self.rowCount + 3, begin_y=top, begin_x=left)
return self
def update(self):
records = defaultIfNone(self.clientData(), {
"cpu use": 0.0,
"memory percent": 0.0,
"memory used": 0,
"start time": time.time(),
})
if len(records) != self.rowCount:
self.needsReset = True
return
self.iter += 1
if self.usesCurses:
self.window.erase()
self.window.border()
self.window.addstr(
0, 2,
self.title + " {} ({})".format(len(records), self.iter)
)
x = 1
y = 1
s = " {:<30}{:>18} ".format("Item", "Value")
if self.usesCurses:
self.window.addstr(y, x, s, curses.A_REVERSE)
else:
print(s)
y += 1
records["cpu use"] = "{:.2f}".format(records["cpu use"])
records["memory percent"] = "{:.1f}".format(records["memory percent"])
records["memory used"] = "{:.2f} GB".format(
records["memory used"] / (1000.0 * 1000.0 * 1000.0)
)
records["uptime"] = int(time.time() - records["start time"])
hours, mins = divmod(records["uptime"] / 60, 60)
records["uptime"] = "{}:{:02d} hh:mm".format(hours, mins)
del records["start time"]
for item, value in sorted(records.items(), key=lambda x: x[0]):
changed = (
item in self.lastResult and self.lastResult[item] != value
)
s = " {:<30}{:>18} ".format(item, value)
try:
if self.usesCurses:
self.window.addstr(
y, x, s,
curses.A_REVERSE if changed else curses.A_NORMAL
)
else:
print(s)
except curses.error:
pass
y += 1
if self.usesCurses:
self.window.refresh()
self.lastResult = records
class RequestStatsWindow(BaseWindow):
"""
Displays the server's request statistics.
"""
help = "server request stats"
clientItem = "stats"
FORMAT_WIDTH = 84
def makeWindow(self, top=0, left=0):
self._createWindow("Request Statistics", 8, self.FORMAT_WIDTH, begin_y=top, begin_x=left)
return self
def update(self):
records = defaultIfNone(self.clientData(), {})
self.iter += 1
if self.usesCurses:
self.window.erase()
self.window.border()
self.window.addstr(0, 2, self.title + " {} ({})".format(len(records), self.iter,))
x = 1
y = 1
s1 = " {:<8}{:>8}{:>10}{:>10}{:>10}{:>10}{:>8}{:>8}{:>8} ".format(
"Period", "Reqs", "Av-Reqs", "Av-Resp", "Av-NoWr", "Max-Resp", "Slot", "CPU ", "500's"
)
s2 = " {:<8}{:>8}{:>10}{:>10}{:>10}{:>10}{:>8}{:>8}{:>8} ".format(
"", "", "per sec", "(ms)", "(ms)", "(ms)", "Avg.", "Avg.", ""
)
if self.usesCurses:
self.window.addstr(y, x, s1, curses.A_REVERSE)
self.window.addstr(y + 1, x, s2, curses.A_REVERSE)
else:
print(s1)
print(s2)
y += 2
for key, seconds in (("current", 60,), ("1m", 60,), ("5m", 5 * 60,), ("1h", 60 * 60,),):
stat = records.get(key, {
"requests": 0,
"t": 0.0,
"t-resp-wr": 0.0,
"T-MAX": 0.0,
"slots": 0,
"cpu": 0.0,
"500": 0,
})
s = " {:<8}{:>8}{:>10.1f}{:>10.1f}{:>10.1f}{:>10.1f}{:>8.2f}{:>7.1f}%{:>8} ".format(
key,
stat["requests"],
safeDivision(float(stat["requests"]), seconds),
safeDivision(stat["t"], stat["requests"]),
safeDivision(stat["t"] - stat["t-resp-wr"], stat["requests"]),
stat["T-MAX"],
safeDivision(float(stat["slots"]), stat["requests"]),
safeDivision(stat["cpu"], stat["requests"]),
stat["500"],
)
try:
if self.usesCurses:
self.window.addstr(y, x, s)
else:
print(s)
except curses.error:
pass
y += 1
if self.usesCurses:
self.window.refresh()
self.lastResult = records
class DirectoryStatsWindow(BaseWindow):
"""
Displays the status of the server's directory service calls
"""
help = "directory service stats"
clientItem = "directory"
FORMAT_WIDTH = 89
def makeWindow(self, top=0, left=0):
nlines = len(defaultIfNone(self.readItem("directory"), {}))
self.rowCount = nlines
self._createWindow(
"Directory Service", self.rowCount + 8, ncols=self.FORMAT_WIDTH,
begin_y=top, begin_x=left
)
return self
def update(self):
records = defaultIfNone(self.clientData(), {})
if len(records) != self.rowCount:
self.needsReset = True
return
self.iter += 1
if self.usesCurses:
self.window.erase()
self.window.border()
self.window.addstr(0, 2, self.title + " {} ({})".format(len(records), self.iter,))
x = 1
y = 1
s1 = " {:<40}{:>15}{:>15}{:>15} ".format(
"Method", "Calls", "Total", "Average"
)
s2 = " {:<40}{:>15}{:>15}{:>15} ".format(
"", "", "(sec)", "(ms)"
)
if self.usesCurses:
self.window.addstr(y, x, s1, curses.A_REVERSE)
self.window.addstr(y + 1, x, s2, curses.A_REVERSE)
else:
print(s1)
print(s2)
y += 2
overallCount = 0
overallCountRatio = 0
overallCountCached = 0
overallCountUncached = 0
overallTimeSpent = 0.0
for methodName, result in sorted(records.items(), key=lambda x: x[0]):
if isinstance(result, int):
count, timeSpent = result, 0.0
else:
count, timeSpent = result
overallCount += count
if methodName.endswith("-hit"):
overallCountRatio += count
overallCountCached += count
if methodName.endswith("-miss") or methodName.endswith("-expired"):
overallCountRatio += count
overallCountUncached += count
overallTimeSpent += timeSpent
s = " {:<40}{:>15d}{:>15.1f}{:>15.3f} ".format(
methodName,
count,
timeSpent,
safeDivision(timeSpent, count, 1000.0),
)
try:
if self.usesCurses:
self.window.addstr(y, x, s)
else:
print(s)
except curses.error:
pass
y += 1
s = " {:<40}{:>15d}{:>15.1f}{:>15.3f} ".format(
"Total:",
overallCount,
overallTimeSpent,
safeDivision(overallTimeSpent, overallCount, 1000.0)
)
s_cached = " {:<40}{:>15d}{:>14.1f}%{:>15s} ".format(
"Total Cached:",
overallCountCached,
safeDivision(overallCountCached, overallCountRatio, 100.0),
"",
)
s_uncached = " {:<40}{:>15d}{:>14.1f}%{:>15s} ".format(
"Total Uncached:",
overallCountUncached,
safeDivision(overallCountUncached, overallCountRatio, 100.0),
"",
)
if self.usesCurses:
self.window.hline(y, x, "-", self.FORMAT_WIDTH - 2)
y += 1
self.window.addstr(y, x, s)
self.window.addstr(y + 1, x, s_cached)
self.window.addstr(y + 2, x, s_uncached)
else:
print(s)
print(s_cached)
print(s_uncached)
y += 3
if self.usesCurses:
self.window.refresh()
Dashboard.registerWindow(HelpWindow, "h")
Dashboard.registerWindow(SystemWindow, "s")
Dashboard.registerWindow(AssignmentsWindow, "w")
Dashboard.registerWindow(RequestStatsWindow, "r")
Dashboard.registerWindow(JobsWindow, "j")
Dashboard.registerWindow(HTTPSlotsWindow, "c")
Dashboard.registerWindow(DirectoryStatsWindow, "d")
if __name__ == "__main__":
main()
calendarserver-7.0+dfsg/calendarserver/tools/dbinspect.py
#!/usr/bin/env python
# -*- test-case-name: calendarserver.tools.test.test_calverify -*-
##
# Copyright (c) 2011-2015 Apple Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
##
"""
This tool allows data in the database to be directly inspected using a set
of simple commands.
"""
from __future__ import print_function
from calendarserver.tools import tables
from calendarserver.tools.cmdline import utilityMain, WorkerService
from pycalendar.datetime import DateTime
from twext.enterprise.dal.syntax import Select, Parameter, Count, Delete, \
Constant
from twext.who.idirectory import RecordType
from twisted.internet.defer import inlineCallbacks, returnValue, succeed
from twisted.python.filepath import FilePath
from twisted.python.text import wordWrap
from twisted.python.usage import Options
from twistedcaldav import caldavxml
from twistedcaldav.config import config
from twistedcaldav.directory import calendaruserproxy
from twistedcaldav.stdconfig import DEFAULT_CONFIG_FILE
from txdav.caldav.datastore.query.filter import Filter
from txdav.common.datastore.sql_tables import schema, _BIND_MODE_OWN
from txdav.who.idirectory import RecordType as CalRecordType
from uuid import UUID
import os
import sys
import traceback
def usage(e=None):
if e:
print(e)
print("")
try:
DBInspectOptions().opt_help()
except SystemExit:
pass
if e:
sys.exit(64)
else:
sys.exit(0)
description = '\n'.join(
wordWrap(
"""
Usage: calendarserver_dbinspect [options] [input specifiers]\n
""",
int(os.environ.get('COLUMNS', '80'))
)
)
class DBInspectOptions(Options):
"""
Command-line options for 'calendarserver_dbinspect'
"""
synopsis = description
optFlags = [
['verbose', 'v', "Verbose logging."],
['debug', 'D', "Debug logging."],
['purging', 'p', "Enable Purge command."],
]
optParameters = [
['config', 'f', DEFAULT_CONFIG_FILE, "Specify caldavd.plist configuration path."],
]
def __init__(self):
super(DBInspectOptions, self).__init__()
self.outputName = '-'
@inlineCallbacks
def UserNameFromUID(txn, uid):
record = yield txn.directoryService().recordWithUID(uid)
returnValue(record.shortNames[0] if record else "(%s)" % (uid,))
@inlineCallbacks
def UIDFromInput(txn, value):
try:
returnValue(str(UUID(value)).upper())
except (ValueError, TypeError):
pass
record = yield txn.directoryService().recordWithShortName(RecordType.user, value)
if record is None:
record = yield txn.directoryService().recordWithShortName(CalRecordType.location, value)
if record is None:
record = yield txn.directoryService().recordWithShortName(CalRecordType.resource, value)
if record is None:
record = yield txn.directoryService().recordWithShortName(RecordType.group, value)
returnValue(record.uid if record else None)
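`UIDFromInput` first tries to treat the input as a raw UUID, normalizing it to the canonical upper-case form before falling back to the short-name directory lookups. The normalization step in isolation (the UUID below is a made-up example):

```python
from uuid import UUID

raw = "6423f94a-6b76-11e1-b9cd-001e4fb23ffa"  # hypothetical example value
normalized = str(UUID(raw)).upper()
# Input that is not a valid UUID raises ValueError/TypeError instead,
# which the function catches before trying the record lookups.
```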
class Cmd(object):
_name = None
@classmethod
def name(cls):
return cls._name
def doIt(self, txn):
raise NotImplementedError
class TableSizes(Cmd):
_name = "Show Size of each Table"
@inlineCallbacks
def doIt(self, txn):
results = []
for dbtable in schema.model.tables: #@UndefinedVariable
dbtable = getattr(schema, dbtable.name)
count = yield self.getTableSize(txn, dbtable)
results.append((dbtable.model.name, count,))
# Print table of results
table = tables.Table()
table.addHeader(("Table", "Row Count"))
for dbtable, count in sorted(results):
table.addRow((
dbtable,
count,
))
print("\n")
print("Database Tables (total=%d):\n" % (len(results),))
table.printTable()
@inlineCallbacks
def getTableSize(self, txn, dbtable):
rows = (yield Select(
[Count(Constant(1)), ],
From=dbtable,
).on(txn))
returnValue(rows[0][0])
class CalendarHomes(Cmd):
_name = "List Calendar Homes"
@inlineCallbacks
def doIt(self, txn):
uids = yield self.getAllHomeUIDs(txn)
# Print table of results
missing = 0
table = tables.Table()
table.addHeader(("Owner UID", "Short Name"))
for uid in sorted(uids):
shortname = yield UserNameFromUID(txn, uid)
if shortname.startswith("("):
missing += 1
table.addRow((
uid,
shortname,
))
print("\n")
print("Calendar Homes (total=%d, missing=%d):\n" % (len(uids), missing,))
table.printTable()
@inlineCallbacks
def getAllHomeUIDs(self, txn):
ch = schema.CALENDAR_HOME
rows = (yield Select(
[ch.OWNER_UID, ],
From=ch,
).on(txn))
returnValue(tuple([row[0] for row in rows]))
class CalendarHomesSummary(Cmd):
_name = "List Calendar Homes with summary information"
@inlineCallbacks
def doIt(self, txn):
uids = yield self.getCalendars(txn)
results = {}
for uid, calname, count in sorted(uids, key=lambda x: x[0]):
totalname, totalcount = results.get(uid, (0, 0,))
if calname != "inbox":
totalname += 1
totalcount += count
results[uid] = (totalname, totalcount,)
# Print table of results
table = tables.Table()
table.addHeader(("Owner UID", "Short Name", "Calendars", "Resources"))
totals = [0, 0, 0]
for uid in sorted(results.keys()):
shortname = yield UserNameFromUID(txn, uid)
table.addRow((
uid,
shortname,
results[uid][0],
results[uid][1],
))
totals[0] += 1
totals[1] += results[uid][0]
totals[2] += results[uid][1]
table.addFooter(("Total", totals[0], totals[1], totals[2]))
table.addFooter((
"Average",
"",
"%.2f" % ((1.0 * totals[1]) / totals[0] if totals[0] else 0,),
"%.2f" % ((1.0 * totals[2]) / totals[0] if totals[0] else 0,),
))
print("\n")
print("Calendars with resource count (total=%d):\n" % (len(results),))
table.printTable()
@inlineCallbacks
def getCalendars(self, txn):
ch = schema.CALENDAR_HOME
cb = schema.CALENDAR_BIND
co = schema.CALENDAR_OBJECT
rows = (yield Select(
[
ch.OWNER_UID,
cb.CALENDAR_RESOURCE_NAME,
Count(co.RESOURCE_ID),
],
From=ch.join(
cb, type="inner", on=(ch.RESOURCE_ID == cb.CALENDAR_HOME_RESOURCE_ID).And(
cb.BIND_MODE == _BIND_MODE_OWN)).join(
co, type="left", on=(cb.CALENDAR_RESOURCE_ID == co.CALENDAR_RESOURCE_ID)),
GroupBy=(ch.OWNER_UID, cb.CALENDAR_RESOURCE_NAME)
).on(txn))
returnValue(tuple(rows))
class Calendars(Cmd):
_name = "List Calendars"
@inlineCallbacks
def doIt(self, txn):
uids = yield self.getCalendars(txn)
# Print table of results
table = tables.Table()
table.addHeader(("Owner UID", "Short Name", "Calendar", "Resources"))
for uid, calname, count in sorted(uids, key=lambda x: (x[0], x[1])):
shortname = yield UserNameFromUID(txn, uid)
table.addRow((
uid,
shortname,
calname,
count
))
print("\n")
print("Calendars with resource count (total=%d):\n" % (len(uids),))
table.printTable()
@inlineCallbacks
def getCalendars(self, txn):
ch = schema.CALENDAR_HOME
cb = schema.CALENDAR_BIND
co = schema.CALENDAR_OBJECT
rows = (yield Select(
[
ch.OWNER_UID,
cb.CALENDAR_RESOURCE_NAME,
Count(co.RESOURCE_ID),
],
From=ch.join(
cb, type="inner", on=(ch.RESOURCE_ID == cb.CALENDAR_HOME_RESOURCE_ID).And(
cb.BIND_MODE == _BIND_MODE_OWN)).join(
co, type="left", on=(cb.CALENDAR_RESOURCE_ID == co.CALENDAR_RESOURCE_ID)),
GroupBy=(ch.OWNER_UID, cb.CALENDAR_RESOURCE_NAME)
).on(txn))
returnValue(tuple(rows))
class CalendarsByOwner(Cmd):
_name = "List Calendars for Owner UID/Short Name"
@inlineCallbacks
def doIt(self, txn):
uid = raw_input("Owner UID/Name: ")
uid = yield UIDFromInput(txn, uid)
uids = yield self.getCalendars(txn, uid)
# Print table of results
table = tables.Table()
table.addHeader(("Owner UID", "Short Name", "Calendars", "ID", "Resources"))
totals = [0, 0, ]
for uid, calname, resid, count in sorted(uids, key=lambda x: x[1]):
shortname = yield UserNameFromUID(txn, uid)
table.addRow((
uid if totals[0] == 0 else "",
shortname if totals[0] == 0 else "",
calname,
resid,
count,
))
totals[0] += 1
totals[1] += count
table.addFooter(("Total", "", totals[0], "", totals[1]))
print("\n")
print("Calendars with resource count (total=%d):\n" % (len(uids),))
table.printTable()
@inlineCallbacks
def getCalendars(self, txn, uid):
ch = schema.CALENDAR_HOME
cb = schema.CALENDAR_BIND
co = schema.CALENDAR_OBJECT
rows = (yield Select(
[
ch.OWNER_UID,
cb.CALENDAR_RESOURCE_NAME,
co.CALENDAR_RESOURCE_ID,
Count(co.RESOURCE_ID),
],
From=ch.join(
cb, type="inner", on=(ch.RESOURCE_ID == cb.CALENDAR_HOME_RESOURCE_ID).And(
cb.BIND_MODE == _BIND_MODE_OWN)).join(
co, type="left", on=(cb.CALENDAR_RESOURCE_ID == co.CALENDAR_RESOURCE_ID)),
Where=(ch.OWNER_UID == Parameter("UID")),
GroupBy=(ch.OWNER_UID, cb.CALENDAR_RESOURCE_NAME, co.CALENDAR_RESOURCE_ID)
).on(txn, **{"UID": uid}))
returnValue(tuple(rows))
class Events(Cmd):
_name = "List Events"
@inlineCallbacks
def doIt(self, txn):
uids = yield self.getEvents(txn)
# Print table of results
table = tables.Table()
table.addHeader(("Owner UID", "Short Name", "Calendar", "ID", "Type", "UID"))
for uid, calname, id, caltype, caluid in sorted(uids, key=lambda x: (x[0], x[1])):
shortname = yield UserNameFromUID(txn, uid)
table.addRow((
uid,
shortname,
calname,
id,
caltype,
caluid
))
print("\n")
print("Calendar events (total=%d):\n" % (len(uids),))
table.printTable()
@inlineCallbacks
def getEvents(self, txn):
ch = schema.CALENDAR_HOME
cb = schema.CALENDAR_BIND
co = schema.CALENDAR_OBJECT
rows = (yield Select(
[
ch.OWNER_UID,
cb.CALENDAR_RESOURCE_NAME,
co.RESOURCE_ID,
co.ICALENDAR_TYPE,
co.ICALENDAR_UID,
],
From=ch.join(
cb, type="inner", on=(ch.RESOURCE_ID == cb.CALENDAR_HOME_RESOURCE_ID).And(
cb.BIND_MODE == _BIND_MODE_OWN)).join(
co, type="inner", on=(cb.CALENDAR_RESOURCE_ID == co.CALENDAR_RESOURCE_ID)),
).on(txn))
returnValue(tuple(rows))
class EventsByCalendar(Cmd):
_name = "List Events for a specific calendar"
@inlineCallbacks
def doIt(self, txn):
rid = raw_input("Resource-ID: ")
try:
rid = int(rid)
except ValueError:
print('Resource ID must be an integer')
returnValue(None)
uids = yield self.getEvents(txn, rid)
# Print table of results
table = tables.Table()
table.addHeader(("Type", "UID", "Resource Name", "Resource ID",))
for caltype, caluid, rname, rid in sorted(uids, key=lambda x: x[1]):
table.addRow((
caltype,
caluid,
rname,
rid,
))
print("\n")
print("Calendar events (total=%d):\n" % (len(uids),))
table.printTable()
@inlineCallbacks
def getEvents(self, txn, rid):
ch = schema.CALENDAR_HOME
cb = schema.CALENDAR_BIND
co = schema.CALENDAR_OBJECT
rows = (yield Select(
[
co.ICALENDAR_TYPE,
co.ICALENDAR_UID,
co.RESOURCE_NAME,
co.RESOURCE_ID,
],
From=ch.join(
cb, type="inner", on=(ch.RESOURCE_ID == cb.CALENDAR_HOME_RESOURCE_ID).And(
cb.BIND_MODE == _BIND_MODE_OWN)).join(
co, type="inner", on=(cb.CALENDAR_RESOURCE_ID == co.CALENDAR_RESOURCE_ID)),
Where=(co.CALENDAR_RESOURCE_ID == Parameter("RID")),
).on(txn, **{"RID": rid}))
returnValue(tuple(rows))
class EventDetails(Cmd):
"""
Base class for common event details commands.
"""
@inlineCallbacks
def printEventDetails(self, txn, details):
owner, calendar, resource_id, resource, created, modified, data = details
shortname = yield UserNameFromUID(txn, owner)
table = tables.Table()
table.addRow(("Owner UID:", owner,))
table.addRow(("User Name:", shortname,))
table.addRow(("Calendar:", calendar,))
table.addRow(("Resource Name:", resource))
table.addRow(("Resource ID:", resource_id))
table.addRow(("Created", created))
table.addRow(("Modified", modified))
print("\n")
table.printTable()
print(data)
@inlineCallbacks
def getEventData(self, txn, whereClause, whereParams):
ch = schema.CALENDAR_HOME
cb = schema.CALENDAR_BIND
co = schema.CALENDAR_OBJECT
rows = (yield Select(
[
ch.OWNER_UID,
cb.CALENDAR_RESOURCE_NAME,
co.RESOURCE_ID,
co.RESOURCE_NAME,
co.CREATED,
co.MODIFIED,
co.ICALENDAR_TEXT,
],
From=ch.join(
cb, type="inner", on=(ch.RESOURCE_ID == cb.CALENDAR_HOME_RESOURCE_ID).And(
cb.BIND_MODE == _BIND_MODE_OWN)).join(
co, type="inner", on=(cb.CALENDAR_RESOURCE_ID == co.CALENDAR_RESOURCE_ID)),
Where=whereClause,
).on(txn, **whereParams))
returnValue(tuple(rows))
class Event(EventDetails):
_name = "Get Event Data by Resource-ID"
@inlineCallbacks
def doIt(self, txn):
rid = raw_input("Resource-ID: ")
try:
rid = int(rid)
except ValueError:
print('Resource ID must be an integer')
returnValue(None)
result = yield self.getData(txn, rid)
if result:
yield self.printEventDetails(txn, result[0])
else:
print("Could not find resource")
def getData(self, txn, rid):
co = schema.CALENDAR_OBJECT
return self.getEventData(txn, (co.RESOURCE_ID == Parameter("RID")), {"RID": rid})
class EventsByUID(EventDetails):
_name = "Get Event Data by iCalendar UID"
@inlineCallbacks
def doIt(self, txn):
uid = raw_input("UID: ")
rows = yield self.getData(txn, uid)
if rows:
for result in rows:
yield self.printEventDetails(txn, result)
else:
print("Could not find icalendar data")
def getData(self, txn, uid):
co = schema.CALENDAR_OBJECT
return self.getEventData(txn, (co.ICALENDAR_UID == Parameter("UID")), {"UID": uid})
class EventsByName(EventDetails):
_name = "Get Event Data by resource name"
@inlineCallbacks
def doIt(self, txn):
name = raw_input("Resource Name: ")
rows = yield self.getData(txn, name)
if rows:
for result in rows:
yield self.printEventDetails(txn, result)
else:
print("Could not find icalendar data")
def getData(self, txn, name):
co = schema.CALENDAR_OBJECT
return self.getEventData(txn, (co.RESOURCE_NAME == Parameter("NAME")), {"NAME": name})
class EventsByOwner(EventDetails):
_name = "Get Event Data by Owner UID/Short Name"
@inlineCallbacks
def doIt(self, txn):
uid = raw_input("Owner UID/Name: ")
uid = yield UIDFromInput(txn, uid)
rows = yield self.getData(txn, uid)
if rows:
for result in rows:
yield self.printEventDetails(txn, result)
else:
print("Could not find icalendar data")
def getData(self, txn, uid):
ch = schema.CALENDAR_HOME
return self.getEventData(txn, (ch.OWNER_UID == Parameter("UID")), {"UID": uid})
class EventsByOwnerCalendar(EventDetails):
_name = "Get Event Data by Owner UID/Short Name and calendar name"
@inlineCallbacks
def doIt(self, txn):
uid = raw_input("Owner UID/Name: ")
uid = yield UIDFromInput(txn, uid)
name = raw_input("Calendar resource name: ")
rows = yield self.getData(txn, uid, name)
if rows:
for result in rows:
yield self.printEventDetails(txn, result)
else:
print("Could not find icalendar data")
def getData(self, txn, uid, name):
ch = schema.CALENDAR_HOME
cb = schema.CALENDAR_BIND
return self.getEventData(txn, (ch.OWNER_UID == Parameter("UID")).And(cb.CALENDAR_RESOURCE_NAME == Parameter("NAME")), {"UID": uid, "NAME": name})
class EventsByPath(EventDetails):
_name = "Get Event Data by HTTP Path"
@inlineCallbacks
def doIt(self, txn):
path = raw_input("Path: ")
pathbits = path.split("/")
if len(pathbits) != 6:
print("Not a valid calendar object resource path")
returnValue(None)
homeName = pathbits[3]
calendarName = pathbits[4]
resourceName = pathbits[5]
rows = yield self.getData(txn, homeName, calendarName, resourceName)
if rows:
for result in rows:
yield self.printEventDetails(txn, result)
else:
print("Could not find icalendar data")
def getData(self, txn, homeName, calendarName, resourceName):
ch = schema.CALENDAR_HOME
cb = schema.CALENDAR_BIND
co = schema.CALENDAR_OBJECT
return self.getEventData(
txn,
(
ch.OWNER_UID == Parameter("HOME")).And(
cb.CALENDAR_RESOURCE_NAME == Parameter("CALENDAR")).And(
co.RESOURCE_NAME == Parameter("RESOURCE")
),
{"HOME": homeName, "CALENDAR": calendarName, "RESOURCE": resourceName}
)
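`EventsByPath` expects a six-segment calendar object path, which on this server takes the form `/calendars/__uids__/<home>/<calendar>/<resource>` (the concrete path below is a hypothetical example). Splitting on `/` yields a leading empty string, so indices 3 through 5 hold the names of interest:

```python
path = "/calendars/__uids__/user01/calendar/event1.ics"  # hypothetical example
pathbits = path.split("/")
# The leading "/" produces an empty first element, giving six elements total.
homeName, calendarName, resourceName = pathbits[3], pathbits[4], pathbits[5]
```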
class EventsByContent(EventDetails):
_name = "Get Event Data by Searching its Text Data"
@inlineCallbacks
def doIt(self, txn):
uid = raw_input("Search for: ")
rows = yield self.getData(txn, uid)
if rows:
for result in rows:
yield self.printEventDetails(txn, result)
else:
print("Could not find icalendar data")
def getData(self, txn, text):
co = schema.CALENDAR_OBJECT
return self.getEventData(txn, (co.ICALENDAR_TEXT.Contains(Parameter("TEXT"))), {"TEXT": text})
class EventsInTimerange(Cmd):
_name = "Get Event Data within a specified time range"
@inlineCallbacks
def doIt(self, txn):
uid = raw_input("Owner UID/Name: ")
start = raw_input("Start Time (UTC YYYYMMDDTHHMMSSZ or YYYYMMDD): ")
if len(start) == 8:
start += "T000000Z"
end = raw_input("End Time (UTC YYYYMMDDTHHMMSSZ or YYYYMMDD): ")
if len(end) == 8:
end += "T000000Z"
try:
start = DateTime.parseText(start)
except ValueError:
print("Invalid start value")
returnValue(None)
try:
end = DateTime.parseText(end)
except ValueError:
print("Invalid end value")
returnValue(None)
timerange = caldavxml.TimeRange(start=start.getText(), end=end.getText())
home = yield txn.calendarHomeWithUID(uid)
if home is None:
print("Could not find calendar home")
returnValue(None)
yield self.eventsForEachCalendar(home, uid, timerange)
@inlineCallbacks
def eventsForEachCalendar(self, home, uid, timerange):
calendars = yield home.calendars()
for calendar in calendars:
if calendar.name() == "inbox":
continue
yield self.eventsInTimeRange(calendar, uid, timerange)
@inlineCallbacks
def eventsInTimeRange(self, calendar, uid, timerange):
# Create fake filter element to match time-range
filter = caldavxml.Filter(
caldavxml.ComponentFilter(
caldavxml.ComponentFilter(
timerange,
name=("VEVENT",),
),
name="VCALENDAR",
)
)
filter = Filter(filter)
filter.settimezone(None)
matches = yield calendar.search(filter, useruid=uid, fbtype=False)
if matches is None:
returnValue(None)
for name, _ignore_uid, _ignore_type in matches:
event = yield calendar.calendarObjectWithName(name)
ical_data = yield event.componentForUser()
ical_data.stripStandardTimezones()
table = tables.Table()
table.addRow(("Calendar:", calendar.name(),))
table.addRow(("Resource Name:", name))
table.addRow(("Resource ID:", event._resourceID))
table.addRow(("Created", event.created()))
table.addRow(("Modified", event.modified()))
print("\n")
table.printTable()
print(ical_data.getTextWithoutTimezones())
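`EventsInTimerange` accepts either a full UTC timestamp or a bare `YYYYMMDD` date; bare dates are widened to midnight UTC before being handed to `DateTime.parseText`:

```python
start = "20150101"  # hypothetical user input in bare-date form
if len(start) == 8:
    # Widen a bare date to midnight UTC so it parses as a full timestamp.
    start += "T000000Z"
```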
class Purge(Cmd):
_name = "Purge all data from tables"
@inlineCallbacks
def doIt(self, txn):
if not raw_input("Do you really want to remove all data [y/n]: ").lower().startswith('y'):
print("No data removed")
returnValue(None)
wipeout = (
# These are ordered in such a way as to ensure key constraints are not
# violated as data is removed
schema.RESOURCE_PROPERTY,
schema.CALENDAR_OBJECT_REVISIONS,
schema.CALENDAR,
# schema.CALENDAR_BIND, - cascades
# schema.CALENDAR_OBJECT, - cascades
# schema.TIME_RANGE, - cascades
# schema.TRANSPARENCY, - cascades
schema.CALENDAR_HOME,
# schema.CALENDAR_HOME_METADATA - cascades
schema.ATTACHMENT,
schema.ADDRESSBOOK_OBJECT_REVISIONS,
schema.ADDRESSBOOK_HOME,
# schema.ADDRESSBOOK_HOME_METADATA, - cascades
# schema.ADDRESSBOOK_BIND, - cascades
# schema.ADDRESSBOOK_OBJECT, - cascades
schema.NOTIFICATION_HOME,
schema.NOTIFICATION,
# schema.NOTIFICATION_OBJECT_REVISIONS - cascades,
)
for tableschema in wipeout:
yield self.removeTableData(txn, tableschema)
print("Removed rows in table %s" % (tableschema.model.name,))
if calendaruserproxy.ProxyDBService is not None:
calendaruserproxy.ProxyDBService.clean() #@UndefinedVariable
print("Removed all proxies")
else:
print("No proxy database to clean.")
fp = FilePath(config.AttachmentsRoot)
if fp.exists():
for child in fp.children():
child.remove()
print("Removed attachments.")
else:
print("No attachments path to delete.")
@inlineCallbacks
def removeTableData(self, txn, tableschema):
yield Delete(
From=tableschema,
Where=None # Deletes all rows
).on(txn)
class DBInspectService(WorkerService, object):
"""
Service which runs an interactive command loop for inspecting the database,
then stops the reactor.
"""
def __init__(self, store, options, reactor, config):
super(DBInspectService, self).__init__(store)
self.options = options
self.reactor = reactor
self.config = config
self.commands = []
self.commandMap = {}
def doWork(self):
"""
Start the service.
"""
# Register commands
self.registerCommand(TableSizes)
self.registerCommand(CalendarHomes)
self.registerCommand(CalendarHomesSummary)
self.registerCommand(Calendars)
self.registerCommand(CalendarsByOwner)
self.registerCommand(Events)
self.registerCommand(EventsByCalendar)
self.registerCommand(Event)
self.registerCommand(EventsByUID)
self.registerCommand(EventsByName)
self.registerCommand(EventsByOwner)
self.registerCommand(EventsByOwnerCalendar)
self.registerCommand(EventsByPath)
self.registerCommand(EventsByContent)
self.registerCommand(EventsInTimerange)
self.doDBInspect()
return succeed(None)
def postStartService(self):
"""
Don't quit right away
"""
pass
def registerCommand(self, cmd):
self.commands.append(cmd.name())
self.commandMap[cmd.name()] = cmd
@inlineCallbacks
def runCommandByPosition(self, position):
try:
yield self.runCommandByName(self.commands[position])
except IndexError:
print("Position %d not available" % (position,))
returnValue(None)
@inlineCallbacks
def runCommandByName(self, name):
try:
yield self.runCommand(self.commandMap[name])
except KeyError:
print("Unknown command: '%s'" % (name,))
@inlineCallbacks
def runCommand(self, cmd):
txn = self.store.newTransaction()
try:
yield cmd().doIt(txn)
yield txn.commit()
except Exception, e:
traceback.print_exc()
print("Command '%s' failed because of: %s" % (cmd.name(), e,))
yield txn.abort()
def printCommands(self):
print("\n<---- Commands ---->")
for ctr, name in enumerate(self.commands):
print("%d. %s" % (ctr + 1, name,))
if self.options["purging"]:
print("P. Purge\n")
print("Q. Quit\n")
@inlineCallbacks
def doDBInspect(self):
"""
Poll for commands, stopping the reactor when done.
"""
while True:
self.printCommands()
cmd = raw_input("Command: ")
if cmd.lower() == 'q':
break
if self.options["purging"] and cmd.lower() == 'p':
yield self.runCommand(Purge)
else:
try:
position = int(cmd)
except ValueError:
print("Invalid command. Try again.\n")
continue
yield self.runCommandByPosition(position - 1)
self.reactor.stop()
def stopService(self):
"""
Stop the service. Nothing to do; everything should be finished by this
time.
"""
# TODO: stopping this service mid-run should really stop the command
# loop, but this is not implemented because nothing will actually do it
# except hitting ^C (which also calls reactor.stop(), so that will exit
# anyway).
def main(argv=sys.argv, stderr=sys.stderr, reactor=None):
"""
Run the database inspection tool.
"""
if reactor is None:
from twisted.internet import reactor
options = DBInspectOptions()
options.parseOptions(argv[1:])
def makeService(store):
return DBInspectService(store, options, reactor, config)
utilityMain(options['config'], makeService, reactor, verbose=options['debug'])
if __name__ == '__main__':
main()
calendarserver-7.0+dfsg/calendarserver/tools/delegatesmigration.py
#!/usr/bin/env python
##
# Copyright (c) 2015 Apple Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
##
from __future__ import print_function
# Suppress warning that occurs on Linux
import sys
if sys.platform.startswith("linux"):
from Crypto.pct_warnings import PowmInsecureWarning
import warnings
warnings.simplefilter("ignore", PowmInsecureWarning)
from getopt import getopt, GetoptError
from calendarserver.tools.cmdline import utilityMain, WorkerService
from twext.python.log import Logger
from twext.who.expression import MatchExpression, MatchType
from twext.who.idirectory import RecordType, FieldName
from twext.who.util import uniqueResult
from twisted.internet.defer import inlineCallbacks, returnValue
from twistedcaldav.config import config
from twistedcaldav.directory import calendaruserproxy
from txdav.who.delegates import Delegates
log = Logger()
def usage(e=None):
if e:
print(e)
print("")
print(" Migrates delegate assignments from external Postgres DB to Calendar Server Store")
print("")
print("options:")
print(" -h --help: print this help and exit")
print(" -f --config : Specify caldavd.plist configuration path")
print(" -s --server ")
print(" -u --user ")
print(" -p --password ")
print(" -P --pod ")
print(" -d --database (default = 'proxies')")
print(" -t --dbtype (default = 'ProxyDB')")
print("")
if e:
sys.exit(64)
else:
sys.exit(0)
class DelegatesMigrationService(WorkerService):
"""
Worker service that runs the migration function assigned to "function".
"""
function = None
params = []
@inlineCallbacks
def doWork(self):
"""
Calls the function that's been assigned to "function", passing the store
along with whatever has been assigned to "params".
"""
if self.function is not None:
yield self.function(self.store, *self.params)
def main():
try:
(optargs, args) = getopt(
sys.argv[1:], "d:f:hp:P:s:t:u:", [
"help",
"config=",
"server=",
"user=",
"password=",
"pod=",
"database=",
"dbtype=",
],
)
except GetoptError, e:
usage(e)
#
# Get configuration
#
configFileName = None
server = None
user = None
password = None
pod = None
database = "proxies"
dbtype = "ProxyDB"
for opt, arg in optargs:
# Args come in as encoded bytes
arg = arg.decode("utf-8")
if opt in ("-h", "--help"):
usage()
elif opt in ("-f", "--config"):
configFileName = arg
elif opt in ("-s", "--server"):
server = arg
elif opt in ("-u", "--user"):
user = arg
elif opt in ("-P", "--pod"):
pod = arg
elif opt in ("-p", "--password"):
password = arg
elif opt in ("-d", "--database"):
database = arg
elif opt in ("-t", "--dbtype"):
dbtype = arg
else:
raise NotImplementedError(opt)
DelegatesMigrationService.function = migrateDelegates
DelegatesMigrationService.params = [server, user, password, pod, database, dbtype]
utilityMain(configFileName, DelegatesMigrationService)
@inlineCallbacks
def getAssignments(db):
"""
Returns all the delegate assignments from the db.
@return: a list of (delegator group, delegate) tuples
"""
print("Fetching delegate assignments...")
rows = yield db.query("select GROUPNAME, MEMBER from GROUPS;")
print("Fetched {} delegate assignments".format(len(rows)))
returnValue(rows)
@inlineCallbacks
def copyAssignments(assignments, pod, directory, store):
"""
Go through the list of assignments from the old db, and selectively copy them
into the new store.
@param assignments: the assignments from the old db
@type assignments: a list of (delegator group, delegate) tuples
@param pod: the name of the pod you want to migrate assignments for; assignments
for delegators who don't reside on this pod will be ignored. Set this
to None to copy all assignments.
@param directory: the directory service
@param store: the store
"""
delegatorsMissingPodInfo = set()
numCopied = 0
numOtherPod = 0
numDirectoryBased = 0
numExamined = 0
# If locations and resources' delegate assignments come from the directory,
# then we're only interested in copying assignments where the delegator is a
# user.
if config.GroupCaching.Enabled and config.GroupCaching.UseDirectoryBasedDelegates:
delegatorRecordTypes = (RecordType.user,)
else:
delegatorRecordTypes = None
# When faulting in delegates, only worry about users and groups.
delegateRecordTypes = (RecordType.user, RecordType.group)
total = len(assignments)
for groupname, delegateUID in assignments:
numExamined += 1
if numExamined % 100 == 0:
print("Processed: {} of {}...".format(numExamined, total))
if "#" in groupname:
delegatorUID, permission = groupname.split("#")
try:
delegatorRecords = yield directory.recordsFromExpression(
MatchExpression(FieldName.uid, delegatorUID, matchType=MatchType.equals),
recordTypes=delegatorRecordTypes
)
delegatorRecord = uniqueResult(delegatorRecords)
except Exception, e:
print("Failed to look up record for {}: {}".format(delegatorUID, str(e)))
continue
if delegatorRecord is None:
continue
if config.GroupCaching.Enabled and config.GroupCaching.UseDirectoryBasedDelegates:
if delegatorRecord.recordType != RecordType.user:
print("Skipping non-user")
numDirectoryBased += 1
continue
if pod:
try:
if delegatorRecord.serviceNodeUID != pod:
numOtherPod += 1
continue
except AttributeError:
print("Record missing serviceNodeUID", delegatorRecord.fullNames)
delegatorsMissingPodInfo.add(delegatorUID)
continue
try:
delegateRecords = yield directory.recordsFromExpression(
MatchExpression(FieldName.uid, delegateUID, matchType=MatchType.equals),
recordTypes=delegateRecordTypes
)
delegateRecord = uniqueResult(delegateRecords)
except Exception, e:
print("Failed to look up record for {}: {}".format(delegateUID, str(e)))
continue
if delegateRecord is None:
continue
txn = store.newTransaction(label="DelegatesMigrationService")
yield Delegates.addDelegate(
txn, delegatorRecord, delegateRecord,
(permission == "calendar-proxy-write")
)
numCopied += 1
yield txn.commit()
print("Total delegate assignments examined: {}".format(numExamined))
print("Total delegate assignments migrated: {}".format(numCopied))
if pod:
print("Total ignored assignments because they're on another pod: {}".format(numOtherPod))
if numDirectoryBased:
print("Total ignored assignments because they come from the directory: {}".format(numDirectoryBased))
if delegatorsMissingPodInfo:
print("Delegators missing pod info:")
for uid in delegatorsMissingPodInfo:
print(uid)
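The legacy proxy database encodes both the delegator and the proxy type in the single `GROUPNAME` column as `<delegatorUID>#<permission>`; the loop above splits that pair and maps `calendar-proxy-write` to write access (the UID below is a hypothetical example):

```python
groupname = "6423F94A-6B76-11E1-B9CD-001E4FB23FFA#calendar-proxy-write"
delegatorUID, permission = groupname.split("#")
# Write access is granted only for the calendar-proxy-write permission;
# anything else (e.g. calendar-proxy-read) becomes a read-only delegate.
readWrite = (permission == "calendar-proxy-write")
```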
@inlineCallbacks
def migrateDelegates(service, store, server, user, password, pod, database, dbtype):
print("Migrating from server {}".format(server))
try:
calendaruserproxy.ProxyDBService = calendaruserproxy.ProxyPostgreSQLDB(server, database, user, password, dbtype)
calendaruserproxy.ProxyDBService.open()
assignments = yield getAssignments(calendaruserproxy.ProxyDBService)
yield copyAssignments(assignments, pod, store.directoryService(), store)
calendaruserproxy.ProxyDBService.close()
except IOError:
log.error("Could not start proxydb service")
raise
if __name__ == "__main__":
main()
calendarserver-7.0+dfsg/calendarserver/tools/diagnose.py
#!/usr/bin/env python
##
# Copyright (c) 2014-2015 Apple Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
##
from __future__ import print_function
import sys
from getopt import getopt, GetoptError
import os
from plistlib import readPlist, readPlistFromString
import re
import subprocess
import urllib2
PREFS_PLIST = "/Library/Server/Preferences/Calendar.plist"
ServerHostName = ""
class FileNotFound(Exception):
"""
Missing file exception
"""
def usage(e=None):
if e:
print(e)
print("")
name = os.path.basename(sys.argv[0])
print("usage: {} [options]".format(name))
print("options:")
print(" -h --help: print this help and exit")
if e:
sys.exit(64)
else:
sys.exit(0)
def main():
if os.getuid() != 0:
usage("This program must be run as root")
try:
optargs, _ignore_args = getopt(
sys.argv[1:], "h", [
"help",
"phantom",
],
)
except GetoptError as e:
usage(e)
for opt, arg in optargs:
# Args come in as encoded bytes
arg = arg.decode("utf-8")
if opt in ("-h", "--help"):
usage()
elif opt == "--phantom":
result = detectPhantomVolume()
sys.exit(result)
osBuild = getOSBuild()
print("OS Build: {}".format(osBuild))
serverBuild = getServerBuild()
print("Server Build: {}".format(serverBuild))
print()
try:
if checkPlist(PREFS_PLIST):
print("{} exists and can be parsed".format(PREFS_PLIST))
else:
print("{} exists but cannot be parsed".format(PREFS_PLIST))
except FileNotFound:
print("{} does not exist (but that's ok)".format(PREFS_PLIST))
serverRoot = getServerRoot()
print("Prefs plist says ServerRoot directory is: {}".format(serverRoot.encode("utf-8")))
result = detectPhantomVolume(serverRoot)
if result == EXIT_CODE_OK:
print("ServerRoot volume ok")
elif result == EXIT_CODE_SERVER_ROOT_MISSING:
print("ServerRoot directory missing")
elif result == EXIT_CODE_PHANTOM_DATA_VOLUME:
print("Phantom ServerRoot volume detected")
systemPlist = os.path.join(serverRoot, "Config", "caldavd-system.plist")
try:
if checkPlist(systemPlist):
print("{} exists and can be parsed".format(systemPlist.encode("utf-8")))
else:
print("{} exists but cannot be parsed".format(systemPlist.encode("utf-8")))
except FileNotFound:
print("{} does not exist".format(systemPlist.encode("utf-8")))
userPlist = os.path.join(serverRoot, "Config", "caldavd-user.plist")
try:
if checkPlist(userPlist):
print("{} exists and can be parsed".format(userPlist.encode("utf-8")))
else:
print("{} exists but cannot be parsed".format(userPlist.encode("utf-8")))
except FileNotFound:
print("{} does not exist".format(userPlist.encode("utf-8")))
keys = showConfigKeys()
showProcesses()
showServerctlStatus()
showDiskSpace(serverRoot)
postgresRunning = showPostgresStatus(serverRoot)
if postgresRunning:
showPostgresContent()
password = getPasswordFromKeychain("com.apple.calendarserver")
connectToAgent(password)
connectToCaldavd(keys)
showWebApps()
EXIT_CODE_OK = 0
EXIT_CODE_SERVER_ROOT_MISSING = 1
EXIT_CODE_PHANTOM_DATA_VOLUME = 2
def detectPhantomVolume(serverRoot=None):
"""
Check to see if serverRoot directory exists in a "phantom" volume, meaning
it's simply a directory under /Volumes residing on the boot volume, rather
than a real separate volume.
"""
if not serverRoot:
serverRoot = getServerRoot()
if not os.path.exists(serverRoot):
return EXIT_CODE_SERVER_ROOT_MISSING
if serverRoot.startswith("/Volumes/"):
bootDevice = os.stat("/").st_dev
dataDevice = os.stat(serverRoot).st_dev
if bootDevice == dataDevice:
return EXIT_CODE_PHANTOM_DATA_VOLUME
return EXIT_CODE_OK
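The phantom-volume check above reduces to a device-number comparison: a path under /Volumes that shares `st_dev` with `/` is just a directory on the boot disk, not a mounted volume. The stdlib-only sketch below mirrors that logic; the function name `classify_server_root` is illustrative, not part of the tool:

```python
import os

EXIT_CODE_OK = 0
EXIT_CODE_SERVER_ROOT_MISSING = 1
EXIT_CODE_PHANTOM_DATA_VOLUME = 2

def classify_server_root(path):
    # A directory under /Volumes whose st_dev matches the boot volume's
    # is a "phantom" mount point (a plain directory), not a real volume.
    if not os.path.exists(path):
        return EXIT_CODE_SERVER_ROOT_MISSING
    if path.startswith("/Volumes/"):
        if os.stat("/").st_dev == os.stat(path).st_dev:
            return EXIT_CODE_PHANTOM_DATA_VOLUME
    return EXIT_CODE_OK
```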
def showProcesses():
print()
print("Calendar and Contacts service processes:")
_ignore_code, stdout, _ignore_stderr = runCommand(
"/bin/ps", "ax",
"-o user",
"-o pid",
"-o %cpu",
"-o %mem",
"-o rss",
"-o etime",
"-o lstart",
"-o command"
)
for line in stdout.split("\n"):
if "_calendar" in line or "CalendarServer" in line or "COMMAND" in line:
print(line)
def showServerctlStatus():
print()
print("Serverd status:")
_ignore_code, stdout, _ignore_stderr = runCommand(
"/Applications/Server.app/Contents/ServerRoot/usr/sbin/serverctl",
"list",
)
services = {
"org.calendarserver.agent": False,
"org.calendarserver.calendarserver": False,
"org.calendarserver.relocate": False,
}
enabledBucket = False
for line in stdout.split("\n"):
if "enabledServices" in line:
enabledBucket = True
if "disabledServices" in line:
enabledBucket = False
for service in services:
if service in line:
services[service] = enabledBucket
for service, enabled in services.iteritems():
print(
"{service} is {enabled}".format(
service=service,
enabled="enabled" if enabled else "disabled"
)
)
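The enabled/disabled bucketing in `showServerctlStatus` can be isolated as a pure function over the `serverctl list` output: lines after an "enabledServices" marker flip the current bucket on, and "disabledServices" flips it off. `parse_serverctl` below is an illustrative stand-alone sketch of that scan, not code from the tool:

```python
def parse_serverctl(stdout, services):
    # Lines seen after "enabledServices" mark services as enabled,
    # until a "disabledServices" line switches the bucket back off.
    state = dict.fromkeys(services, False)
    enabled_bucket = False
    for line in stdout.split("\n"):
        if "enabledServices" in line:
            enabled_bucket = True
        if "disabledServices" in line:
            enabled_bucket = False
        for service in services:
            if service in line:
                state[service] = enabled_bucket
    return state
```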
def showDiskSpace(serverRoot):
print()
print("Disk space on boot volume:")
_ignore_code, stdout, _ignore_stderr = runCommand(
"/bin/df",
"-H",
"/",
)
print(stdout)
print("Disk space on service data volume:")
_ignore_code, stdout, _ignore_stderr = runCommand(
"/bin/df",
"-H",
serverRoot
)
print(stdout)
print("Disk space used by Calendar and Contacts service:")
_ignore_code, stdout, _ignore_stderr = runCommand(
"/usr/bin/du",
"-sh",
os.path.join(serverRoot, "Config"),
os.path.join(serverRoot, "Data"),
os.path.join(serverRoot, "Logs"),
)
print(stdout)
def showPostgresStatus(serverRoot):
clusterPath = os.path.join(serverRoot, "Data", "Database.xpg", "cluster.pg")
print()
print("Postgres status for cluster {}:".format(clusterPath.encode("utf-8")))
code, stdout, stderr = runCommand(
"/usr/bin/sudo",
"-u",
"calendar",
"/Applications/Server.app/Contents/ServerRoot/usr/bin/pg_ctl",
"status",
"-D",
clusterPath
)
if stdout:
print(stdout)
if stderr:
print(stderr)
if code:
return False
return True
def runSQLQuery(query):
_ignore_code, stdout, stderr = runCommand(
"/Applications/Server.app/Contents/ServerRoot/usr/bin/psql",
"-h",
"/var/run/caldavd/PostgresSocket",
"--dbname=caldav",
"--username=caldav",
"--command={}".format(query),
)
if stdout:
print(stdout)
if stderr:
print(stderr)
def countFromSQLQuery(query):
_ignore_code, stdout, _ignore_stderr = runCommand(
"/Applications/Server.app/Contents/ServerRoot/usr/bin/psql",
"-h",
"/var/run/caldavd/PostgresSocket",
"--dbname=caldav",
"--username=caldav",
"--command={}".format(query),
)
lines = stdout.split("\n")
try:
count = int(lines[2])
except (IndexError, ValueError):
count = 0
return count
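The third-line parse in `countFromSQLQuery` assumes psql's default table output (header, separator, then the value). This stand-alone sketch (the helper name is hypothetical) shows the same extraction, also guarding against a non-numeric third line:

```python
def count_from_psql_output(stdout):
    # psql's default output is: column header, dashed separator, the
    # value, then "(1 row)". The count therefore sits on line index 2;
    # anything missing or non-numeric is treated as zero.
    lines = stdout.split("\n")
    try:
        return int(lines[2])
    except (IndexError, ValueError):
        return 0
```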
def listDatabases():
_ignore_code, stdout, stderr = runCommand(
"/Applications/Server.app/Contents/ServerRoot/usr/bin/psql",
"-h",
"/var/run/caldavd/PostgresSocket",
"--dbname=caldav",
"--username=caldav",
"--list",
)
if stdout:
print(stdout)
if stderr:
print(stderr)
def showPostgresContent():
print()
print("Postgres content:")
print()
listDatabases()
print("'calendarserver' table...")
runSQLQuery("select * from calendarserver;")
count = countFromSQLQuery("select count(*) from calendar_home;")
print("Number of calendar homes: {}".format(count))
count = countFromSQLQuery("select count(*) from calendar_object;")
print("Number of calendar events: {}".format(count))
count = countFromSQLQuery("select count(*) from addressbook_home;")
print("Number of contacts homes: {}".format(count))
count = countFromSQLQuery("select count(*) from addressbook_object;")
print("Number of contacts cards: {}".format(count))
count = countFromSQLQuery("select count(*) from delegates;")
print("Number of non-group delegate assignments: {}".format(count))
count = countFromSQLQuery("select count(*) from delegate_groups;")
print("Number of group delegate assignments: {}".format(count))
print("'job' table...")
runSQLQuery("select * from job;")
def showConfigKeys():
print()
print("Configuration:")
_ignore_code, stdout, _ignore_stderr = runCommand(
"/Applications/Server.app/Contents/ServerRoot/usr/sbin/calendarserver_config",
"EnableCalDAV",
"EnableCardDAV",
"Notifications.Services.APNS.Enabled",
"Scheduling.iMIP.Enabled",
"Authentication.Basic.Enabled",
"Authentication.Digest.Enabled",
"Authentication.Kerberos.Enabled",
"ServerHostName",
"HTTPPort",
"SSLPort",
)
hidden = [
"ServerHostName",
]
keys = {}
for line in stdout.split("\n"):
if "=" in line:
key, value = line.strip().split("=", 1)
keys[key] = value
if key not in hidden:
print("{key} : {value}".format(key=key, value=value))
return keys
def runCommand(commandPath, *args):
"""
Run a command line tool and return the output
"""
if not os.path.exists(commandPath):
raise FileNotFound
commandLine = [commandPath]
if args:
commandLine.extend(args)
child = subprocess.Popen(
args=commandLine,
stdin=subprocess.PIPE,
stdout=subprocess.PIPE,
stderr=subprocess.PIPE,
)
output, error = child.communicate()
return child.returncode, output, error
def getOSBuild():
try:
code, stdout, _ignore_stderr = runCommand("/usr/bin/sw_vers", "-buildVersion")
if not code:
return stdout.strip()
except:
pass
return "Unknown"
def getServerBuild():
try:
code, stdout, _ignore_stderr = runCommand("/usr/sbin/serverinfo", "--buildversion")
if not code:
return stdout.strip()
except:
pass
return "Unknown"
def getServerRoot():
"""
Return the ServerRoot value from the servermgr_calendar.plist. If not
present, return the default.
@rtype: C{unicode}
"""
try:
serverRoot = u"/Library/Server/Calendar and Contacts"
if os.path.exists(PREFS_PLIST):
serverRoot = readPlist(PREFS_PLIST).get("ServerRoot", serverRoot)
if isinstance(serverRoot, str):
serverRoot = serverRoot.decode("utf-8")
return serverRoot
except:
return "Unknown"
def checkPlist(plistPath):
if not os.path.exists(plistPath):
raise FileNotFound
try:
readPlist(plistPath)
except:
return False
return True
def showWebApps():
print()
print("Web apps:")
_ignore_code, stdout, _ignore_stderr = runCommand(
"/Applications/Server.app/Contents/ServerRoot/usr/sbin/webappctl",
"status",
"-"
)
print(stdout)
##
# Keychain access
##
passwordRegExp = re.compile(r'password: "(.*)"')
def getPasswordFromKeychain(account):
code, _ignore_stdout, stderr = runCommand(
"/usr/bin/security",
"find-generic-password",
"-a",
account,
"-g",
)
if code:
return None
else:
match = passwordRegExp.search(stderr)
if not match:
print(
"Password for {} not found in keychain".format(account)
)
return None
else:
return match.group(1)
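The stderr regex above can be exercised in isolation. `extract_password` is an illustrative stand-in for the keychain lookup (the real tool shells out to `/usr/bin/security`, which prints the password on stderr in the form `password: "..."`):

```python
import re

password_re = re.compile(r'password: "(.*)"')

def extract_password(security_stderr):
    # Pull the quoted password out of `security find-generic-password -g`
    # stderr output; return None when no password line is present.
    match = password_re.search(security_stderr)
    return match.group(1) if match else None
```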
readCommand = """
commandreadConfig
"""
def connectToAgent(password):
print()
print("Agent:")
url = "http://localhost:62308/gateway/"
user = "com.apple.calendarserver"
auth_handler = urllib2.HTTPDigestAuthHandler()
auth_handler.add_password(
realm="/Local/Default",
uri=url,
user=user,
passwd=password
)
opener = urllib2.build_opener(auth_handler)
# ...and install it globally so it can be used with urlopen.
urllib2.install_opener(opener)
# Send HTTP POST request
request = urllib2.Request(url, readCommand)
try:
print("Attempting to send a request to the agent...")
response = urllib2.urlopen(request, timeout=30)
except Exception as e:
print("Can't connect to agent: {}".format(e))
return False
html = response.read()
code = response.getcode()
if code == 200:
try:
data = readPlistFromString(html)
except Exception as e:
print(
"Could not parse response from agent: {error}\n{html}".format(
error=e, html=html
)
)
return False
if "result" in data:
print("...success")
else:
print("Error in agent's response:\n{}".format(html))
return False
else:
print("Got an error back from the agent: {code} {html}".format(
code=code, html=html)
)
return True
def connectToCaldavd(keys):
print()
print("Server connection:")
url = "https://{host}/principals/".format(host=keys["ServerHostName"])
try:
print("Attempting to send a request to port 443...")
response = urllib2.urlopen(url, timeout=30)
html = response.read()
code = response.getcode()
print(code, html)
if code == 200:
print("Received 200 response")
except urllib2.HTTPError as e:
code = e.code
reason = e.reason
if code == 401:
print("Got the expected response")
else:
print(
"Got an unexpected response: {code} {reason}".format(
code=code, reason=reason
)
)
except Exception as e:
print("Can't connect to port 443: {error}".format(error=e))
if __name__ == "__main__":
main()
calendarserver-7.0+dfsg/calendarserver/tools/dkimtool.py
#!/usr/bin/env python
##
# Copyright (c) 2012-2015 Apple Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
##
import os
import sys
from Crypto.PublicKey import RSA
from StringIO import StringIO
from twisted.internet import reactor
from twisted.internet.defer import inlineCallbacks
from twisted.python.usage import Options
from twext.python.log import Logger, LogLevel, StandardIOObserver
from txweb2.http_headers import Headers
from txdav.caldav.datastore.scheduling.ischedule.dkim import RSA256, DKIMRequest, \
PublicKeyLookup, DKIMVerifier, DKIMVerificationError
log = Logger()
def _doKeyGeneration(options):
key = RSA.generate(options["key-size"])
output = key.exportKey()
lineBreak = False
if options["key"]:
open(options["key"], "w").write(output)
else:
print(output)
lineBreak = True
output = key.publickey().exportKey()
if options["pub-key"]:
open(options["pub-key"], "w").write(output)
else:
if lineBreak:
print("")
print(output)
lineBreak = True
if options["txt"]:
output = "".join(output.splitlines()[1:-1])
txt = "v=DKIM1; p=%s" % (output,)
if lineBreak:
print("")
print(txt)
@inlineCallbacks
def _doRequest(options):
if options["verbose"]:
log.publisher.levels.setLogLevelForNamespace("txdav.caldav.datastore.scheduling.ischedule.dkim", LogLevel.debug)
# Parse the HTTP file
request = open(options["request"]).read()
method, uri, headers, stream = _parseRequest(request)
# Setup signing headers
sign_headers = options["signing"]
if sign_headers is None:
sign_headers = []
for hdr in ("Host", "Content-Type", "Originator", "Recipient+"):
if headers.hasHeader(hdr.rstrip("+")):
sign_headers.append(hdr)
else:
sign_headers = sign_headers.split(":")
dkim = DKIMRequest(
method,
uri,
headers,
stream,
options["domain"],
options["selector"],
options["key"],
options["algorithm"],
sign_headers,
True,
True,
False,
int(options["expire"]),
)
if options["fake-time"]:
dkim.time = "100"
dkim.expire = "200"
dkim.message_id = "1"
yield dkim.sign()
s = StringIO()
_writeRequest(dkim, s)
print(s.getvalue())
@inlineCallbacks
def _doVerify(options):
# Parse the HTTP file
verify = open(os.path.expanduser(options["verify"])).read()
_method, _uri, headers, body = _parseRequest(verify)
# Check for local public key
if options["pub-key"]:
PublicKeyLookup_File.pubkeyfile = os.path.expanduser(options["pub-key"])
lookup = (PublicKeyLookup_File,)
else:
lookup = None
dkim = DKIMVerifier(headers, body, lookup)
if options["fake-time"]:
dkim.time = 0
try:
yield dkim.verify()
except DKIMVerificationError as e:
print("Verification Failed: %s" % (e,))
else:
print("Verification Succeeded")
def _parseRequest(request):
lines = request.splitlines(True)
method, uri, _ignore_version = lines.pop(0).split()
hdrs = []
body = None
for line in lines:
if body is not None:
body.append(line)
elif line.strip() == "":
body = []
elif line[0] in (" ", "\t"):
hdrs[-1] += line
else:
hdrs.append(line)
headers = Headers()
for hdr in hdrs:
name, value = hdr.split(':', 1)
headers.addRawHeader(name, value.strip())
stream = "".join(body)
return method, uri, headers, stream
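`_parseRequest`'s header folding can be demonstrated without txweb2. This sketch substitutes a plain dict for the `Headers` object (a simplification: the real class preserves multi-valued headers), keeping the same request-line, folded-continuation, and blank-line-starts-body handling:

```python
def parse_request(text):
    # Simplified stand-in for dkimtool's _parseRequest: split off the
    # request line, accumulate headers (joining folded continuations
    # that start with space/tab onto the previous header), and collect
    # everything after the first blank line as the body.
    lines = text.splitlines(True)
    method, uri, _version = lines.pop(0).split()
    hdrs, body = [], None
    for line in lines:
        if body is not None:
            body.append(line)
        elif line.strip() == "":
            body = []
        elif line[0] in (" ", "\t"):
            hdrs[-1] += line
        else:
            hdrs.append(line)
    headers = {}
    for hdr in hdrs:
        name, value = hdr.split(":", 1)
        headers[name] = value.strip()
    return method, uri, headers, "".join(body or [])
```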
def _writeRequest(request, f):
f.write("%s %s HTTP/1.1\r\n" % (request.method, request.uri,))
for name, valuelist in request.headers.getAllRawHeaders():
for value in valuelist:
f.write("%s: %s\r\n" % (name, value))
f.write("\r\n")
f.write(request.stream.read())
class PublicKeyLookup_File(PublicKeyLookup):
method = "*"
pubkeyfile = None
def getPublicKey(self):
"""
Do the key lookup using the actual lookup method.
"""
return RSA.importKey(open(self.pubkeyfile).read())
def usage(e=None):
if e:
print(e)
print("")
try:
DKIMToolOptions().opt_help()
except SystemExit:
pass
if e:
sys.exit(64)
else:
sys.exit(0)
description = """Usage: dkimtool [options]
Options:
-h Print this help and exit
# Key Generation
--key-gen Generate private/public key files
--key FILE Private key file to create [stdout]
--pub-key FILE Public key file to create [stdout]
--key-size SIZE Key size [1024]
--txt Also generate the public key TXT record
--fake-time Use fake t=, x= values when signing and also
ignore expiration on verification
# Request
--request FILE An HTTP request to sign
--algorithm ALGO Signature algorithm [rsa-sha256]
--domain DOMAIN Signature domain [example.com]
--selector SELECTOR Signature selector [dkim]
--key FILE Private key to use
--signing HEADERS List of headers to sign [automatic]
--expire SECONDS When to expire signature [no expiry]
# Verify
--verify FILE An HTTP request to verify
--pub-key FILE Public key to use in place of
q= lookup
Description:
This utility is for testing DKIM signed HTTP requests. Key operations are:
--key-gen: generate a private/public RSA key.
--request: sign an HTTP request.
--verify: verify a signed HTTP request.
"""
class DKIMToolOptions(Options):
"""
Command-line options for 'calendarserver_dkimtool'
"""
synopsis = description
optFlags = [
['verbose', 'v', "Verbose logging."],
['key-gen', 'g', "Generate private/public key files"],
['txt', 't', "Also generate the public key TXT record"],
['fake-time', 'f', "Fake time values for signing/verification"],
]
optParameters = [
['key', 'k', None, "Private key file to create [default: stdout]"],
['pub-key', 'p', None, 'Public key file to create [default: stdout]'],
['key-size', 'x', 1024, 'Key size'],
['request', 'r', None, 'An HTTP request to sign'],
['algorithm', 'a', RSA256, 'Signature algorithm'],
['domain', 'd', 'example.com', 'Signature domain'],
['selector', 's', 'dkim', 'Signature selector'],
['signing', 'h', None, 'List of headers to sign [automatic]'],
['expire', 'e', 3600, 'When to expire signature'],
['verify', 'w', None, 'An HTTP request to verify'],
]
def __init__(self):
super(DKIMToolOptions, self).__init__()
self.outputName = '-'
@inlineCallbacks
def _runInReactor(fn, options):
try:
yield fn(options)
except Exception as e:
print(e)
finally:
reactor.stop()
def main(argv=sys.argv, stderr=sys.stderr):
options = DKIMToolOptions()
options.parseOptions(argv[1:])
#
# Send logging output to stdout
#
observer = StandardIOObserver()
observer.start()
if options["verbose"]:
log.publisher.levels.setLogLevelForNamespace("txdav.caldav.datastore.scheduling.ischedule.dkim", LogLevel.debug)
if options["key-gen"]:
_doKeyGeneration(options)
elif options["request"]:
reactor.callLater(0, _runInReactor, _doRequest, options)
reactor.run()
elif options["verify"]:
reactor.callLater(0, _runInReactor, _doVerify, options)
reactor.run()
else:
usage("Invalid options")
if __name__ == '__main__':
main()
calendarserver-7.0+dfsg/calendarserver/tools/export.py
#!/usr/bin/env python
# -*- test-case-name: calendarserver.tools.test.test_export -*-
##
# Copyright (c) 2006-2015 Apple Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
##
"""
This tool reads calendar data from a series of inputs and generates a single
iCalendar file which can be opened in many calendar applications.
This can be used to quickly create an iCalendar file from a user's calendars.
This tool requires access to the calendar server's configuration and data
storage; it does not operate by talking to the server via the network. It
therefore does not apply any of the access restrictions that the server would.
As such, one should be mindful that data exported via this tool may be sensitive.
Please also note that this is not an appropriate tool for backups, as there is
data associated with users and calendars beyond the iCalendar as visible to the
owner of that calendar, including DAV properties, information about sharing, and
per-user data such as alarms.
"""
from __future__ import print_function
import itertools
import os
import shutil
import sys
from calendarserver.tools.cmdline import utilityMain, WorkerService
from twext.python.log import Logger
from twisted.internet.defer import inlineCallbacks, returnValue, succeed
from twisted.python.text import wordWrap
from twisted.python.usage import Options, UsageError
from twistedcaldav import customxml
from twistedcaldav.ical import Component, Property
from twistedcaldav.stdconfig import DEFAULT_CONFIG_FILE
from txdav.base.propertystore.base import PropertyName
from txdav.xml import element as davxml
log = Logger()
def usage(e=None):
if e:
print(e)
print("")
try:
ExportOptions().opt_help()
except SystemExit:
pass
if e:
sys.exit(64)
else:
sys.exit(0)
description = '\n'.join(
wordWrap(
"""
Usage: calendarserver_export [options] [input specifiers]\n
""" + __doc__,
int(os.environ.get('COLUMNS', '80'))
)
)
class ExportOptions(Options):
"""
Command-line options for 'calendarserver_export'
@ivar exporters: a list of L{DirectoryExporter} objects which can identify the
calendars to export, given a directory service. This list is built by
parsing --record and --collection options.
"""
synopsis = description
optFlags = [
['debug', 'D', "Debug logging."],
]
optParameters = [
['config', 'f', DEFAULT_CONFIG_FILE, "Specify caldavd.plist configuration path."],
]
def __init__(self):
super(ExportOptions, self).__init__()
self.exporters = []
self.outputName = '-'
self.outputDirectoryName = None
def opt_uid(self, uid):
"""
Add a calendar home directly by its UID (which is usually a directory
service's GUID).
"""
self.exporters.append(UIDExporter(uid))
def opt_record(self, recordName):
"""
Add a directory record's calendar home (format: 'recordType:shortName').
"""
recordType, shortName = recordName.split(":", 1)
self.exporters.append(DirectoryExporter(recordType, shortName))
opt_r = opt_record
def opt_collection(self, collectionName):
"""
Add a calendar collection. This option must be passed after --record
(or a synonym, like --user). For example, to export user1's calendars
called 'meetings' and 'team', invoke 'calendarserver_export --user=user1
--collection=meetings --collection=team'.
"""
self.exporters[-1].collections.append(collectionName)
opt_c = opt_collection
def opt_directory(self, dirname):
"""
Specify output directory path.
"""
self.outputDirectoryName = dirname
opt_d = opt_directory
def opt_output(self, filename):
"""
Specify output file path (default: '-', meaning stdout).
"""
self.outputName = filename
opt_o = opt_output
def opt_user(self, user):
"""
Add a user's calendar home (shorthand for '-r users:shortName').
"""
self.opt_record("users:" + user)
opt_u = opt_user
def openOutput(self):
"""
Open the appropriate output file based on the '--output' option.
"""
if self.outputName == '-':
return sys.stdout
else:
return open(self.outputName, 'wb')
class _ExporterBase(object):
"""
Base exporter implementation that works from a home UID.
@ivar collections: A list of the names of collections that this exporter
should enumerate.
@type collections: C{list} of C{str}
"""
def __init__(self):
self.collections = []
def getHomeUID(self, exportService):
"""
Subclasses must implement this.
"""
raise NotImplementedError()
@inlineCallbacks
def listCalendars(self, txn, exportService):
"""
Enumerate all calendars based on the directory record and/or calendars
for this calendar home.
"""
uid = yield self.getHomeUID(exportService)
home = yield txn.calendarHomeWithUID(uid, create=True)
result = []
if self.collections:
for collection in self.collections:
result.append((yield home.calendarWithName(collection)))
else:
for collection in (yield home.calendars()):
if collection.name() != 'inbox':
result.append(collection)
returnValue(result)
class UIDExporter(_ExporterBase):
"""
An exporter that constructs a list of calendars based on the UID of the
home, and an optional list of calendar names.
@ivar uid: The UID of a calendar home to export.
@type uid: C{str}
"""
def __init__(self, uid):
super(UIDExporter, self).__init__()
self.uid = uid
def getHomeUID(self, exportService):
return succeed(self.uid)
class DirectoryExporter(_ExporterBase):
"""
An exporter that constructs a list of calendars based on the directory
services record ID of the home, and an optional list of calendar names.
@ivar recordType: The directory record type to export. For example:
'users'.
@type recordType: C{str}
@ivar shortName: The shortName of the directory record to export, according
to C{recordType}.
@type shortName: C{str}
"""
def __init__(self, recordType, shortName):
super(DirectoryExporter, self).__init__()
self.recordType = recordType
self.shortName = shortName
@inlineCallbacks
def getHomeUID(self, exportService):
"""
Retrieve the home UID.
"""
directory = exportService.directoryService()
record = yield directory.recordWithShortName(
directory.oldNameToRecordType(self.recordType),
self.shortName
)
returnValue(record.uid)
@inlineCallbacks
def exportToFile(calendars, fileobj):
"""
Export some calendars to a file as their owner would see them.
@param calendars: an iterable of L{ICalendar} providers (or L{Deferred}s of
same).
@param fileobj: an object with a C{write} method that will accept some
iCalendar data.
@return: a L{Deferred} which fires when the export is complete. (Note that
the file will not be closed.)
@rtype: L{Deferred} that fires with C{None}
"""
comp = Component.newCalendar()
for calendar in calendars:
calendar = yield calendar
for obj in (yield calendar.calendarObjects()):
evt = yield obj.filteredComponent(
calendar.ownerCalendarHome().uid(), True
)
for sub in evt.subcomponents():
if sub.name() != 'VTIMEZONE':
# Omit all VTIMEZONE components here - we will include them later
# when we serialize the whole calendar.
comp.addComponent(sub)
fileobj.write(comp.getTextWithTimezones(True))
@inlineCallbacks
def exportToDirectory(calendars, dirname):
"""
Export some calendars to a directory as their owner would see them.
@param calendars: an iterable of L{ICalendar} providers (or L{Deferred}s of
same).
@param dirname: the path to a directory to store calendar files in; each
calendar being exported will have its own .ics file
@return: a L{Deferred} which fires when the export is complete.
@rtype: L{Deferred} that fires with C{None}
"""
for calendar in calendars:
homeUID = calendar.ownerCalendarHome().uid()
calendarProperties = calendar.properties()
comp = Component.newCalendar()
for element, propertyName in (
(davxml.DisplayName, "NAME"),
(customxml.CalendarColor, "COLOR"),
):
value = calendarProperties.get(PropertyName.fromElement(element), None)
if value:
comp.addProperty(Property(propertyName, str(value)))
source = "/calendars/__uids__/{}/{}/".format(homeUID, calendar.name())
comp.addProperty(Property("SOURCE", source))
for obj in (yield calendar.calendarObjects()):
evt = yield obj.filteredComponent(homeUID, True)
for sub in evt.subcomponents():
if sub.name() != 'VTIMEZONE':
# Omit all VTIMEZONE components here - we will include them later
# when we serialize the whole calendar.
comp.addComponent(sub)
filename = os.path.join(dirname, "{}_{}.ics".format(homeUID, calendar.name()))
fileobj = open(filename, 'wb')
fileobj.write(comp.getTextWithTimezones(True))
fileobj.close()
class ExporterService(WorkerService, object):
"""
Service which runs, exports the appropriate records, then stops the reactor.
"""
def __init__(self, store, options, output, reactor, config):
super(ExporterService, self).__init__(store)
self.options = options
self.output = output
self.reactor = reactor
self.config = config
self._directory = self.store.directoryService()
@inlineCallbacks
def doWork(self):
"""
Do the export, stopping the reactor when done.
"""
txn = self.store.newTransaction()
try:
allCalendars = itertools.chain(
*[(yield exporter.listCalendars(txn, self)) for exporter in
self.options.exporters]
)
if self.options.outputDirectoryName:
dirname = self.options.outputDirectoryName
if os.path.exists(dirname):
shutil.rmtree(dirname)
os.mkdir(dirname)
yield exportToDirectory(allCalendars, dirname)
else:
yield exportToFile(allCalendars, self.output)
if self.output is not None:
self.output.close()
yield txn.commit()
# TODO: should be read-only, so commit/abort shouldn't make a
# difference. commit() for now, in case any transparent cache /
# update stuff needed to happen, don't want to undo it.
except:
log.failure("doWork()")
def directoryService(self):
"""
Get an appropriate directory service.
"""
return self._directory
def stopService(self):
"""
Stop the service. Nothing to do; everything should be finished by this
time.
"""
# TODO: stopping this service mid-export should really stop the export
# loop, but this is not implemented because nothing will actually do it
# except hitting ^C (which also calls reactor.stop(), so that will exit
# anyway).
def main(argv=sys.argv, stderr=sys.stderr, reactor=None):
"""
Do the export.
"""
if reactor is None:
from twisted.internet import reactor
options = ExportOptions()
try:
options.parseOptions(argv[1:])
except UsageError as e:
usage(e)
if options.outputDirectoryName:
output = None
else:
try:
output = options.openOutput()
except IOError as e:
stderr.write(
"Unable to open output file for writing: %s\n" % (e)
)
sys.exit(1)
def makeService(store):
from twistedcaldav.config import config
return ExporterService(store, options, output, reactor, config)
utilityMain(options["config"], makeService, reactor, verbose=options["debug"])
calendarserver-7.0+dfsg/calendarserver/tools/gateway.py
#!/usr/bin/env python
##
# Copyright (c) 2006-2015 Apple Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
##
from __future__ import print_function
# Suppress warning that occurs on Linux
import sys
if sys.platform.startswith("linux"):
from Crypto.pct_warnings import PowmInsecureWarning
import warnings
warnings.simplefilter("ignore", PowmInsecureWarning)
from getopt import getopt, GetoptError
import os
from plistlib import readPlistFromString, writePlistToString
import uuid
import xml
from calendarserver.tools.cmdline import utilityMain
from calendarserver.tools.config import (
WRITABLE_CONFIG_KEYS, setKeyPath, getKeyPath, flattenDictionary,
WritableConfig
)
from calendarserver.tools.principals import (
getProxies, setProxies
)
from calendarserver.tools.purge import (
WorkerService, PurgeOldEventsService,
DEFAULT_BATCH_SIZE, DEFAULT_RETAIN_DAYS,
PrincipalPurgeWork
)
from calendarserver.tools.util import (
recordForPrincipalID, autoDisableMemcached
)
from pycalendar.datetime import DateTime
from twext.who.directory import DirectoryRecord
from twisted.internet.defer import inlineCallbacks, succeed, returnValue
from twistedcaldav.config import config, ConfigDict
from txdav.who.idirectory import RecordType as CalRecordType
from twext.who.idirectory import FieldName
from twisted.python.constants import Names, NamedConstant
from txdav.who.delegates import Delegates, RecordType as DelegateRecordType
attrMap = {
'GeneratedUID': {'attr': 'uid', },
'RealName': {'attr': 'fullNames', },
'RecordName': {'attr': 'shortNames', },
'AutoScheduleMode': {'attr': 'autoScheduleMode', },
'AutoAcceptGroup': {'attr': 'autoAcceptGroup', },
# 'Comment': {'extras': True, 'attr': 'comment', },
# 'Description': {'extras': True, 'attr': 'description', },
# 'Type': {'extras': True, 'attr': 'type', },
# For "Locations", i.e. scheduled spaces
'Capacity': {'attr': 'capacity', },
'Floor': {'attr': 'floor', },
'AssociatedAddress': {'attr': 'associatedAddress', },
# For "Addresses", i.e. nonscheduled areas containing Locations
'AbbreviatedName': {'attr': 'abbreviatedName', },
'StreetAddress': {'attr': 'streetAddress', },
'GeographicLocation': {'attr': 'geographicLocation', },
}
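# The attrMap above drives _saveRecord's conversion of plist command keys
# into directory-record fields. The following is a simplified, standalone
# sketch of that conversion; attr_map here is a hypothetical subset, and
# the multi_value_keys parameter stands in for the field metadata the real
# code looks up via self.dir.fieldName.isMultiValue().

```python
# Hypothetical subset of attrMap: plist key -> directory field name.
attr_map = {"RealName": "fullNames", "Capacity": "capacity"}


def command_to_fields(command, multi_value_keys=("RealName",)):
    # _saveRecord-style conversion: each known plist key maps to a
    # directory field name, and a bare value is wrapped in a list for
    # multi-value fields (backwards compatibility with older callers
    # that sent a single string).
    fields = {}
    for key, attr in attr_map.items():
        if key in command:
            value = command[key]
            if key in multi_value_keys and not isinstance(value, list):
                value = [value]
            fields[attr] = value
    return fields


assert command_to_fields({"RealName": "Room 101", "Capacity": 20}) == {
    "fullNames": ["Room 101"], "capacity": 20}
```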
def usage(e=None):
name = os.path.basename(sys.argv[0])
print("usage: %s [options]" % (name,))
print("")
print(" TODO: describe usage")
print("")
print("options:")
print(" -h --help: print this help and exit")
print(" -e --error: send stderr to stdout")
print(" -f --config : Specify caldavd.plist configuration path")
print("")
if e:
sys.exit(64)
else:
sys.exit(0)
class RunnerService(WorkerService):
"""
A wrapper around Runner which uses utilityMain to get the store
"""
commands = None
@inlineCallbacks
def doWork(self):
"""
Create/run a Runner to execute the commands
"""
runner = Runner(self.store, self.commands)
if runner.validate():
yield runner.run()
def doWorkWithoutStore(self):
respondWithError("Database is not available")
return succeed(None)
def main():
try:
(optargs, _ignore_args) = getopt(
sys.argv[1:], "hef:", [
"help",
"error",
"config=",
],
)
except GetoptError, e:
usage(e)
#
# Get configuration
#
configFileName = None
debug = False
for opt, arg in optargs:
if opt in ("-h", "--help"):
usage()
if opt in ("-e", "--error"):
debug = True
elif opt in ("-f", "--config"):
configFileName = arg
else:
raise NotImplementedError(opt)
#
# Read commands from stdin
#
rawInput = sys.stdin.read()
try:
plist = readPlistFromString(rawInput)
except xml.parsers.expat.ExpatError, e:
respondWithError(str(e))
return
# If the plist is an array, each element of the array is a separate
# command dictionary.
if isinstance(plist, list):
commands = plist
else:
commands = [plist]
RunnerService.commands = commands
utilityMain(configFileName, RunnerService, verbose=debug)
class Runner(object):
def __init__(self, store, commands, output=None):
self.store = store
self.dir = store.directoryService()
self.commands = commands
if output is None:
output = sys.stdout
self.output = output
def validate(self):
# Make sure commands are valid
for command in self.commands:
if 'command' not in command:
self.respondWithError("'command' missing from plist")
return False
commandName = command['command']
methodName = "command_%s" % (commandName,)
if not hasattr(self, methodName):
self.respondWithError("Unknown command '%s'" % (commandName,))
return False
return True
@inlineCallbacks
def run(self):
# This method can be called as the result of an agent request. We
# check to see if memcached is there for each call because the server
# could have stopped/started since the last time.
for pool in config.Memcached.Pools.itervalues():
pool.ClientEnabled = True
autoDisableMemcached(config)
try:
for command in self.commands:
commandName = command['command']
methodName = "command_%s" % (commandName,)
if hasattr(self, methodName):
(yield getattr(self, methodName)(command))
else:
self.respondWithError("Unknown command '%s'" % (commandName,))
except Exception, e:
self.respondWithError("Command failed: '%s'" % (str(e),))
raise
# Locations
# deferred
def command_getLocationList(self, command):
return self.respondWithRecordsOfTypes(self.dir, command, ["locations"])
@inlineCallbacks
def _saveRecord(self, typeName, recordType, command, oldFields=None):
"""
Save a record using the values in the command plist, starting with
any fields in the optional oldFields.
@param typeName: one of "locations", "resources", "addresses"; used
to return the appropriate list of records afterwards.
@param recordType: the type of record to save
@param command: the command containing values
@type command: C{dict}
@param oldFields: the optional fields to start with, which will be
overridden by values from command
        @type oldFields: C{dict}
"""
if oldFields is None:
fields = {
self.dir.fieldName.recordType: recordType,
self.dir.fieldName.hasCalendars: True,
self.dir.fieldName.hasContacts: True,
}
create = True
else:
fields = oldFields.copy()
create = False
for key, info in attrMap.iteritems():
if key in command:
attrName = info['attr']
field = self.dir.fieldName.lookupByName(attrName)
valueType = self.dir.fieldName.valueType(field)
value = command[key]
# For backwards compatibility, convert to a list if needed
if (
self.dir.fieldName.isMultiValue(field) and
not isinstance(value, list)
):
value = [value]
if valueType == int:
value = int(value)
elif issubclass(valueType, Names):
if value is not None:
value = valueType.lookupByName(value)
else:
if isinstance(value, list):
newList = []
for item in value:
if isinstance(item, str):
newList.append(item.decode("utf-8"))
else:
newList.append(item)
value = newList
elif isinstance(value, str):
value = value.decode("utf-8")
fields[field] = value
if FieldName.uid not in fields:
# No uid provided, so generate one
fields[FieldName.uid] = unicode(uuid.uuid4()).upper()
if FieldName.shortNames not in fields:
# No short names were provided, so copy from uid
fields[FieldName.shortNames] = [fields[FieldName.uid]]
record = DirectoryRecord(self.dir, fields)
yield self.dir.updateRecords([record], create=create)
readProxies = command.get("ReadProxies", None)
if readProxies:
proxyRecords = []
for proxyUID in readProxies:
proxyRecord = yield self.dir.recordWithUID(proxyUID)
if proxyRecord is not None:
proxyRecords.append(proxyRecord)
readProxies = proxyRecords
writeProxies = command.get("WriteProxies", None)
if writeProxies:
proxyRecords = []
for proxyUID in writeProxies:
proxyRecord = yield self.dir.recordWithUID(proxyUID)
if proxyRecord is not None:
proxyRecords.append(proxyRecord)
writeProxies = proxyRecords
yield setProxies(record, readProxies, writeProxies)
yield self.respondWithRecordsOfTypes(self.dir, command, [typeName])
def command_createLocation(self, command):
return self._saveRecord("locations", CalRecordType.location, command)
def command_createResource(self, command):
return self._saveRecord("resources", CalRecordType.resource, command)
def command_createAddress(self, command):
return self._saveRecord("addresses", CalRecordType.address, command)
@inlineCallbacks
def command_setLocationAttributes(self, command):
uid = command['GeneratedUID']
record = yield self.dir.recordWithUID(uid)
yield self._saveRecord(
"locations",
CalRecordType.location,
command,
oldFields=record.fields
)
@inlineCallbacks
def command_setResourceAttributes(self, command):
uid = command['GeneratedUID']
record = yield self.dir.recordWithUID(uid)
yield self._saveRecord(
"resources",
CalRecordType.resource,
command,
oldFields=record.fields
)
@inlineCallbacks
def command_setAddressAttributes(self, command):
uid = command['GeneratedUID']
record = yield self.dir.recordWithUID(uid)
yield self._saveRecord(
"addresses",
CalRecordType.address,
command,
oldFields=record.fields
)
@inlineCallbacks
def command_getLocationAttributes(self, command):
uid = command['GeneratedUID']
record = yield self.dir.recordWithUID(uid)
if record is None:
self.respondWithError("Principal not found: %s" % (uid,))
return
recordDict = recordToDict(record)
# recordDict['AutoSchedule'] = yield principal.getAutoSchedule()
try:
recordDict['AutoAcceptGroup'] = record.autoAcceptGroup
except AttributeError:
pass
readProxies, writeProxies = yield getProxies(record)
recordDict['ReadProxies'] = [r.uid for r in readProxies]
recordDict['WriteProxies'] = [r.uid for r in writeProxies]
self.respond(command, recordDict)
command_getResourceAttributes = command_getLocationAttributes
command_getAddressAttributes = command_getLocationAttributes
# Resources
def command_getResourceList(self, command):
return self.respondWithRecordsOfTypes(self.dir, command, ["resources"])
# deferred
def command_getLocationAndResourceList(self, command):
return self.respondWithRecordsOfTypes(self.dir, command, ["locations", "resources"])
# Addresses
def command_getAddressList(self, command):
return self.respondWithRecordsOfTypes(self.dir, command, ["addresses"])
@inlineCallbacks
def _delete(self, typeName, command):
uid = command['GeneratedUID']
yield self.dir.removeRecords([uid])
yield self.respondWithRecordsOfTypes(self.dir, command, [typeName])
@inlineCallbacks
def command_deleteLocation(self, command):
txn = self.store.newTransaction()
uid = command['GeneratedUID']
yield txn.enqueue(PrincipalPurgeWork, uid=uid)
yield txn.commit()
yield self._delete("locations", command)
@inlineCallbacks
def command_deleteResource(self, command):
txn = self.store.newTransaction()
uid = command['GeneratedUID']
yield txn.enqueue(PrincipalPurgeWork, uid=uid)
yield txn.commit()
yield self._delete("resources", command)
def command_deleteAddress(self, command):
return self._delete("addresses", command)
# Config
def command_readConfig(self, command):
"""
Return current configuration
@param command: the dictionary parsed from the plist read from stdin
@type command: C{dict}
"""
config.reload()
# config.Memcached.Pools.Default.ClientEnabled = False
result = {}
for keyPath in WRITABLE_CONFIG_KEYS:
value = getKeyPath(config, keyPath)
if value is not None:
# Note: config contains utf-8 encoded strings, but plistlib
# wants unicode, so decode here:
if isinstance(value, str):
value = value.decode("utf-8")
setKeyPath(result, keyPath, value)
self.respond(command, result)
def command_writeConfig(self, command):
"""
Write config to secondary, writable plist
@param command: the dictionary parsed from the plist read from stdin
@type command: C{dict}
"""
writable = WritableConfig(config, config.WritableConfigFile)
writable.read()
valuesToWrite = command.get("Values", {})
# Note: values are unicode if they contain non-ascii
for keyPath, value in flattenDictionary(valuesToWrite):
if keyPath in WRITABLE_CONFIG_KEYS:
writable.set(setKeyPath(ConfigDict(), keyPath, value))
try:
writable.save(restart=False)
except Exception, e:
self.respond(command, {"error": str(e)})
else:
self.command_readConfig(command)
# Proxies
def command_listWriteProxies(self, command):
return self._listProxies(command, "write")
def command_listReadProxies(self, command):
return self._listProxies(command, "read")
@inlineCallbacks
def _listProxies(self, command, proxyType):
record = yield recordForPrincipalID(self.dir, command['Principal'])
if record is None:
self.respondWithError("Principal not found: %s" % (command['Principal'],))
returnValue(None)
yield self.respondWithProxies(command, record, proxyType)
def command_addReadProxy(self, command):
return self._addProxy(command, "read")
def command_addWriteProxy(self, command):
return self._addProxy(command, "write")
@inlineCallbacks
def _addProxy(self, command, proxyType):
record = yield recordForPrincipalID(self.dir, command['Principal'])
if record is None:
self.respondWithError("Principal not found: %s" % (command['Principal'],))
returnValue(None)
proxyRecord = yield recordForPrincipalID(self.dir, command['Proxy'])
if proxyRecord is None:
self.respondWithError("Proxy not found: %s" % (command['Proxy'],))
returnValue(None)
txn = self.store.newTransaction()
yield Delegates.addDelegate(txn, record, proxyRecord, (proxyType == "write"))
yield txn.commit()
yield self.respondWithProxies(command, record, proxyType)
def command_removeReadProxy(self, command):
return self._removeProxy(command, "read")
def command_removeWriteProxy(self, command):
return self._removeProxy(command, "write")
@inlineCallbacks
def _removeProxy(self, command, proxyType):
record = yield recordForPrincipalID(self.dir, command['Principal'])
if record is None:
self.respondWithError("Principal not found: %s" % (command['Principal'],))
returnValue(None)
proxyRecord = yield recordForPrincipalID(self.dir, command['Proxy'])
if proxyRecord is None:
self.respondWithError("Proxy not found: %s" % (command['Proxy'],))
returnValue(None)
txn = self.store.newTransaction()
yield Delegates.removeDelegate(txn, record, proxyRecord, (proxyType == "write"))
yield txn.commit()
yield self.respondWithProxies(command, record, proxyType)
@inlineCallbacks
def command_purgeOldEvents(self, command):
"""
Convert RetainDays from the command dictionary into a date, then purge
events older than that date.
@param command: the dictionary parsed from the plist read from stdin
@type command: C{dict}
"""
retainDays = command.get("RetainDays", DEFAULT_RETAIN_DAYS)
cutoff = DateTime.getToday()
cutoff.setDateOnly(False)
cutoff.offsetDay(-retainDays)
eventCount = (yield PurgeOldEventsService.purgeOldEvents(self.store, None, cutoff, DEFAULT_BATCH_SIZE))
self.respond(command, {'EventsRemoved': eventCount, "RetainDays": retainDays})
@inlineCallbacks
def respondWithProxies(self, command, record, proxyType):
proxies = []
recordType = {
"read": DelegateRecordType.readDelegateGroup,
"write": DelegateRecordType.writeDelegateGroup,
}[proxyType]
proxyGroup = yield self.dir.recordWithShortName(recordType, record.uid)
for member in (yield proxyGroup.members()):
proxies.append(member.uid)
self.respond(command, {
'Principal': record.uid, 'Proxies': proxies
})
@inlineCallbacks
def respondWithRecordsOfTypes(self, directory, command, recordTypes):
result = []
for recordType in recordTypes:
recordType = directory.oldNameToRecordType(recordType)
for record in (yield directory.recordsWithRecordType(recordType)):
recordDict = recordToDict(record)
result.append(recordDict)
self.respond(command, result)
def respond(self, command, result):
self.output.write(writePlistToString({'command': command['command'], 'result': result}))
def respondWithError(self, msg, status=1):
self.output.write(writePlistToString({'error': msg, }))
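# command_purgeOldEvents above turns the RetainDays value from the command
# plist into a cutoff date: today minus RetainDays, with events older than
# the cutoff purged. A minimal sketch of the same arithmetic using the
# standard library's datetime (the server itself uses pycalendar's DateTime
# and offsetDay()):

```python
from datetime import date, timedelta


def purge_cutoff(today, retain_days):
    # Events strictly older than the returned date are eligible for
    # purging; retain_days defaults to DEFAULT_RETAIN_DAYS in the tool.
    return today - timedelta(days=retain_days)


assert purge_cutoff(date(2015, 10, 15), 365) == date(2014, 10, 15)
```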
def recordToDict(record):
recordDict = {}
for key, info in attrMap.iteritems():
try:
value = record.fields[record.service.fieldName.lookupByName(info['attr'])]
if value is None:
continue
# For backwards compatibility, present fullName/RealName as single
# value even though twext.who now has it as multiValue
if key == "RealName":
value = value[0]
if isinstance(value, str):
value = value.decode("utf-8")
elif isinstance(value, NamedConstant):
value = value.name
recordDict[key] = value
except KeyError:
pass
return recordDict
def respondWithError(msg, status=1):
sys.stdout.write(writePlistToString({'error': msg, }))
if __name__ == "__main__":
main()
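# The gateway reads its commands as a plist on stdin, accepting either a
# single command dictionary or an array of them, and normalizes both into
# a list (see main() above). A sketch of that normalization and of how a
# caller might serialize commands, using Python 3's plistlib.dumps/loads;
# the Python 2 server code uses the older readPlistFromString API:

```python
import plistlib


def normalize_commands(plist_bytes):
    # Mirror gateway main(): a top-level array holds multiple command
    # dicts; a single dict is wrapped into a one-element list.
    plist = plistlib.loads(plist_bytes)
    return plist if isinstance(plist, list) else [plist]


single = plistlib.dumps({"command": "getLocationList"})
batch = plistlib.dumps([
    {"command": "getLocationList"},
    {"command": "getResourceList"},
])

assert normalize_commands(single) == [{"command": "getLocationList"}]
assert [c["command"] for c in normalize_commands(batch)] == [
    "getLocationList", "getResourceList"]
```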
calendarserver-7.0+dfsg/calendarserver/tools/icalsplit.py
##
# Copyright (c) 2005-2015 Apple Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
##
from __future__ import print_function
import os
import sys
from getopt import getopt, GetoptError
from twistedcaldav.ical import Component as iComponent
def splitICalendarFile(inputFileName, outputDirectory):
"""
Reads iCalendar data from a file and outputs a separate file for
each iCalendar component into a directory. This is useful for
converting a monolithic iCalendar object into a set of objects
that comply with CalDAV's requirements on resources.
"""
inputFile = open(inputFileName)
try:
calendar = iComponent.fromStream(inputFile)
finally:
inputFile.close()
assert calendar.name() == "VCALENDAR"
topLevelProperties = tuple(calendar.properties())
for subcomponent in calendar.subcomponents():
subcalendar = iComponent("VCALENDAR")
#
# Add top-level properties from monolithic calendar to
# top-level properties of subcalendar.
#
for property in topLevelProperties:
subcalendar.addProperty(property)
subcalendar.addComponent(subcomponent)
uid = subcalendar.resourceUID()
subFileName = os.path.join(outputDirectory, uid + ".ics")
print("Writing %s" % (subFileName,))
        subcalendar_file = open(subFileName, "w")
try:
subcalendar_file.write(str(subcalendar))
finally:
subcalendar_file.close()
def usage(e=None):
if e:
print(e)
print("")
name = os.path.basename(sys.argv[0])
print("usage: %s [options] input_file output_directory" % (name,))
print("")
print(" Splits up monolithic iCalendar data into separate files for each")
print(" subcomponent so as to comply with CalDAV requirements for")
print(" individual resources.")
print("")
print("options:")
print(" -h --help: print this help and exit")
print("")
if e:
sys.exit(64)
else:
sys.exit(0)
def main():
try:
(optargs, args) = getopt(
sys.argv[1:], "h", [
"help",
],
)
except GetoptError, e:
usage(e)
for opt, _ignore_arg in optargs:
if opt in ("-h", "--help"):
usage()
try:
inputFileName, outputDirectory = args
except ValueError:
if len(args) > 2:
many = "many"
else:
many = "few"
usage("Too %s arguments" % (many,))
splitICalendarFile(inputFileName, outputDirectory)
if __name__ == "__main__":
main()
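# splitICalendarFile above pairs the monolithic calendar's top-level
# properties with each subcomponent to produce one standalone VCALENDAR
# per resource. A simplified, line-based sketch of that split; real
# iCalendar parsing (line folding, nested components, VTIMEZONEs) is
# handled by twistedcaldav's Component and is more involved than this:

```python
def split_icalendar(text):
    # Keep shared top-level properties; emit one VCALENDAR per flat
    # subcomponent. Assumes unfolded lines and no nested BEGIN/END.
    lines = text.strip().splitlines()
    assert lines[0] == "BEGIN:VCALENDAR" and lines[-1] == "END:VCALENDAR"
    top, comps, current = [], [], None
    for line in lines[1:-1]:
        if line.startswith("BEGIN:"):
            current = [line]
        elif line.startswith("END:"):
            current.append(line)
            comps.append(current)
            current = None
        elif current is not None:
            current.append(line)
        else:
            top.append(line)
    return [
        "\n".join(["BEGIN:VCALENDAR"] + top + comp + ["END:VCALENDAR"])
        for comp in comps
    ]


cal = (
    "BEGIN:VCALENDAR\n"
    "VERSION:2.0\n"
    "BEGIN:VEVENT\nUID:event-a\nEND:VEVENT\n"
    "BEGIN:VEVENT\nUID:event-b\nEND:VEVENT\n"
    "END:VCALENDAR"
)
parts = split_icalendar(cal)
assert len(parts) == 2
assert "VERSION:2.0" in parts[0] and "UID:event-a" in parts[0]
```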
calendarserver-7.0+dfsg/calendarserver/tools/importer.py
#!/usr/bin/env python
# -*- test-case-name: calendarserver.tools.test.test_importer -*-
##
# Copyright (c) 2014-2015 Apple Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
##
"""
This tool imports calendar data
This tool requires access to the calendar server's configuration and data
storage; it does not operate by talking to the server via the network. It
therefore does not apply any of the access restrictions that the server would.
"""
from __future__ import print_function
import os
import sys
import uuid
from calendarserver.tools.cmdline import utilityMain, WorkerService
from twext.python.log import Logger
from twisted.internet.defer import inlineCallbacks
from twisted.python.text import wordWrap
from twisted.python.usage import Options, UsageError
from twistedcaldav import customxml
from twistedcaldav.ical import Component
from twistedcaldav.stdconfig import DEFAULT_CONFIG_FILE
from twistedcaldav.timezones import TimezoneCache
from txdav.base.propertystore.base import PropertyName
from txdav.caldav.datastore.scheduling.cuaddress import LocalCalendarUser
from txdav.caldav.datastore.scheduling.itip import iTipGenerator
from txdav.caldav.datastore.scheduling.processing import ImplicitProcessor
from txdav.caldav.icalendarstore import ComponentUpdateState
from txdav.common.icommondatastore import UIDExistsError
from txdav.xml import element as davxml
log = Logger()
def usage(e=None):
if e:
print(e)
print("")
try:
ImportOptions().opt_help()
except SystemExit:
pass
if e:
sys.exit(64)
else:
sys.exit(0)
description = '\n'.join(
wordWrap(
"""
Usage: calendarserver_import [options] [input specifiers]\n
""" + __doc__,
int(os.environ.get('COLUMNS', '80'))
)
)
class ImportException(Exception):
"""
An error occurred during import
"""
class ImportOptions(Options):
"""
Command-line options for 'calendarserver_import'
"""
synopsis = description
optFlags = [
['debug', 'D', "Debug logging."],
]
optParameters = [
['config', 'f', DEFAULT_CONFIG_FILE, "Specify caldavd.plist configuration path."],
]
def __init__(self):
super(ImportOptions, self).__init__()
self.inputName = '-'
self.inputDirectoryName = None
def opt_directory(self, dirname):
"""
Specify input directory path.
"""
self.inputDirectoryName = dirname
opt_d = opt_directory
def opt_input(self, filename):
"""
Specify input file path (default: '-', meaning stdin).
"""
self.inputName = filename
opt_i = opt_input
def openInput(self):
"""
Open the appropriate input file based on the '--input' option.
"""
if self.inputName == '-':
return sys.stdin
else:
return open(self.inputName, 'r')
# These could probably live on the collection class:
def setCollectionPropertyValue(collection, element, value):
collectionProperties = collection.properties()
collectionProperties[PropertyName.fromElement(element)] = (
element.fromString(value)
)
def getCollectionPropertyValue(collection, element):
collectionProperties = collection.properties()
name = PropertyName.fromElement(element)
if name in collectionProperties:
return str(collectionProperties[name])
else:
return None
@inlineCallbacks
def importCollectionComponent(store, component):
"""
Import a component representing a collection (e.g. VCALENDAR) into the
store.
The homeUID and collection resource name the component will be imported
into is derived from the SOURCE property on the VCALENDAR (which must
be present). The code assumes it will be a URI with slash-separated parts
with the penultimate part specifying the homeUID and the last part
specifying the calendar resource name. The NAME property will be used
to set the DAV:display-name, while the COLOR property will be used to set
calendar-color.
Subcomponents (e.g. VEVENTs) are grouped into resources by UID. Objects
which have a UID already in use within the home will be skipped.
@param store: The db store to add the component to
@type store: L{IDataStore}
@param component: The component to store
@type component: L{twistedcaldav.ical.Component}
"""
sourceURI = component.propertyValue("SOURCE")
if not sourceURI:
raise ImportException("Calendar is missing SOURCE property")
ownerUID, collectionResourceName = sourceURI.strip("/").split("/")[-2:]
dir = store.directoryService()
ownerRecord = yield dir.recordWithUID(ownerUID)
if not ownerRecord:
raise ImportException("{} is not in the directory".format(ownerUID))
# Set properties on the collection
txn = store.newTransaction()
home = yield txn.calendarHomeWithUID(ownerUID, create=True)
collection = yield home.childWithName(collectionResourceName)
if not collection:
print("Creating calendar: {}".format(collectionResourceName))
collection = yield home.createChildWithName(collectionResourceName)
for propertyName, element in (
("NAME", davxml.DisplayName),
("COLOR", customxml.CalendarColor),
):
value = component.propertyValue(propertyName)
if value is not None:
setCollectionPropertyValue(collection, element, value)
print(
"Setting {name} to {value}".format(name=propertyName, value=value)
)
yield txn.commit()
# Populate the collection; NB we use a txn for each object, and we might
# want to batch them?
groupedComponents = Component.componentsFromComponent(component)
for groupedComponent in groupedComponents:
try:
uid = list(groupedComponent.subcomponents())[0].propertyValue("UID")
except:
continue
# If event is unscheduled or the organizer matches homeUID, store the
# component
print("Event UID: {}".format(uid))
storeDirectly = True
organizer = groupedComponent.getOrganizer()
if organizer is not None:
organizerRecord = yield dir.recordWithCalendarUserAddress(organizer)
if organizerRecord is None:
# Organizer does not exist, so skip this event
continue
else:
if ownerRecord.uid != organizerRecord.uid:
# Owner is not the organizer
storeDirectly = False
if storeDirectly:
resourceName = "{}.ics".format(str(uuid.uuid4()))
try:
yield storeComponentInHomeAndCalendar(
store, groupedComponent, ownerUID, collectionResourceName,
resourceName
)
print("Imported: {}".format(uid))
except UIDExistsError:
# That event is already in the home
print("Skipping since UID already exists: {}".format(uid))
except Exception, e:
print(
"Failed to import due to: {error}\n{comp}".format(
error=e,
comp=groupedComponent
)
)
else:
# Owner is an attendee, not the organizer
# Apply the PARTSTATs from the import and from the possibly
# existing event (existing event takes precedence) to the
# organizer's copy.
# Put the attendee copy into the right calendar now otherwise it
# could end up on the default calendar when the change to the
# organizer's copy causes an attendee update
resourceName = "{}.ics".format(str(uuid.uuid4()))
try:
yield storeComponentInHomeAndCalendar(
store, groupedComponent, ownerUID, collectionResourceName,
resourceName, asAttendee=True
)
print("Imported: {}".format(uid))
except UIDExistsError:
# No need since the event is already in the home
pass
# Now use the iTip reply processing to update the organizer's copy
# with the PARTSTATs from the component we're restoring.
attendeeCUA = ownerRecord.canonicalCalendarUserAddress()
organizerCUA = organizerRecord.canonicalCalendarUserAddress()
processor = ImplicitProcessor()
newComponent = iTipGenerator.generateAttendeeReply(groupedComponent, attendeeCUA, method="X-RESTORE")
if newComponent is not None:
txn = store.newTransaction()
yield processor.doImplicitProcessing(
txn,
newComponent,
LocalCalendarUser(attendeeCUA, ownerRecord),
LocalCalendarUser(organizerCUA, organizerRecord)
)
yield txn.commit()
@inlineCallbacks
def storeComponentInHomeAndCalendar(
store, component, homeUID, collectionResourceName, objectResourceName,
asAttendee=False
):
"""
Add a component to the store as an objectResource
If the calendar home does not yet exist for homeUID it will be created.
If the collection by the name collectionResourceName does not yet exist
it will be created.
@param store: The db store to add the component to
@type store: L{IDataStore}
@param component: The component to store
@type component: L{twistedcaldav.ical.Component}
    @param homeUID: uid of the home collection
    @type homeUID: C{str}
@param collectionResourceName: name of the collection resource
@type collectionResourceName: C{str}
@param objectResourceName: name of the objectresource
@type objectResourceName: C{str}
"""
txn = store.newTransaction()
home = yield txn.calendarHomeWithUID(homeUID, create=True)
collection = yield home.childWithName(collectionResourceName)
if not collection:
collection = yield home.createChildWithName(collectionResourceName)
yield collection._createCalendarObjectWithNameInternal(
objectResourceName, component,
(
ComponentUpdateState.ATTENDEE_ITIP_UPDATE
if asAttendee else
ComponentUpdateState.NORMAL
)
)
yield txn.commit()
class ImporterService(WorkerService, object):
"""
Service which runs, imports the data, then stops the reactor.
"""
def __init__(self, store, options, reactor, config):
super(ImporterService, self).__init__(store)
self.options = options
self.reactor = reactor
self.config = config
self._directory = self.store.directoryService()
TimezoneCache.create()
@inlineCallbacks
def doWork(self):
"""
        Do the import, stopping the reactor when done.
"""
try:
if self.options.inputDirectoryName:
dirname = self.options.inputDirectoryName
if not os.path.exists(dirname):
sys.stderr.write(
"Directory does not exist: {}\n".format(dirname)
)
sys.exit(1)
for filename in os.listdir(dirname):
fullpath = os.path.join(dirname, filename)
print("Importing {}".format(fullpath))
fileobj = open(fullpath, 'r')
component = Component.allFromStream(fileobj)
fileobj.close()
yield importCollectionComponent(self.store, component)
else:
try:
input = self.options.openInput()
except IOError, e:
sys.stderr.write(
"Unable to open input file for reading: %s\n" % (e)
)
sys.exit(1)
component = Component.allFromStream(input)
input.close()
yield importCollectionComponent(self.store, component)
except:
log.failure("doWork()")
def directoryService(self):
"""
Get an appropriate directory service.
"""
return self._directory
def stopService(self):
"""
Stop the service. Nothing to do; everything should be finished by this
time.
"""
# TODO: stopping this service mid-import should really stop the import
# loop, but this is not implemented because nothing will actually do it
# except hitting ^C (which also calls reactor.stop(), so that will exit
# anyway).
def main(argv=sys.argv, reactor=None):
"""
Do the import.
"""
if reactor is None:
from twisted.internet import reactor
options = ImportOptions()
try:
options.parseOptions(argv[1:])
except UsageError, e:
usage(e)
def makeService(store):
from twistedcaldav.config import config
return ImporterService(store, options, reactor, config)
utilityMain(options["config"], makeService, reactor, verbose=options["debug"])
calendarserver-7.0+dfsg/calendarserver/tools/managetimezones.py
#!/usr/bin/env python
##
# Copyright (c) 2011-2015 Apple Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
##
from __future__ import print_function
from pycalendar.icalendar.calendar import Calendar
from pycalendar.datetime import DateTime
from twext.python.log import StandardIOObserver
from twisted.internet import reactor
from twisted.internet.defer import inlineCallbacks
from twisted.python.filepath import FilePath
from calendarserver.tools.util import loadConfig
from twistedcaldav.config import ConfigurationError
from twistedcaldav.timezones import TimezoneCache
from twistedcaldav.timezonestdservice import PrimaryTimezoneDatabase, \
SecondaryTimezoneDatabase
from zonal.tzconvert import tzconvert
import getopt
import os
import sys
import tarfile
import tempfile
import urllib
from twistedcaldav.stdconfig import DEFAULT_CONFIG_FILE
def _doPrimaryActions(action, tzpath, xmlfile, changed, tzvers):
tzdb = PrimaryTimezoneDatabase(tzpath, xmlfile)
if action == "refresh":
_doRefresh(tzpath, xmlfile, tzdb, tzvers)
elif action == "create":
_doCreate(xmlfile, tzdb)
elif action == "update":
_doUpdate(xmlfile, tzdb)
elif action == "list":
_doList(xmlfile, tzdb)
elif action == "changed":
_doChanged(xmlfile, changed, tzdb)
else:
usage("Invalid action: %s" % (action,))
def _doRefresh(tzpath, xmlfile, tzdb, tzvers):
"""
Refresh data from IANA.
"""
print("Downloading latest data from IANA")
if tzvers:
path = "https://www.iana.org/time-zones/repository/releases/tzdata%s.tar.gz" % (tzvers,)
else:
path = "https://www.iana.org/time-zones/repository/tzdata-latest.tar.gz"
data = urllib.urlretrieve(path)
print("Extract data at: %s" % (data[0]))
rootdir = tempfile.mkdtemp()
zonedir = os.path.join(rootdir, "tzdata")
os.mkdir(zonedir)
with tarfile.open(data[0], "r:gz") as t:
t.extractall(zonedir)
# Get the version from the Makefile
    try:
        with open(os.path.join(zonedir, "Makefile")) as f:
            makefile = f.read()
        for line in makefile.splitlines():
            if line.startswith("VERSION="):
                tzvers = line[8:].strip()
                break
    except IOError:
        pass
if not tzvers:
tzvers = DateTime.getToday().getText()
print("Converting data (version: %s) at: %s" % (tzvers, zonedir,))
startYear = 1800
endYear = DateTime.getToday().getYear() + 10
Calendar.sProdID = "-//calendarserver.org//Zonal//EN"
zonefiles = "northamerica", "southamerica", "europe", "africa", "asia", "australasia", "antarctica", "etcetera", "backward"
parser = tzconvert()
for file in zonefiles:
parser.parse(os.path.join(zonedir, file))
# Check for windows aliases
print("Downloading latest data from unicode.org")
path = "http://unicode.org/repos/cldr/tags/latest/common/supplemental/windowsZones.xml"
data = urllib.urlretrieve(path)
wpath = data[0]
# Generate the iCalendar data
print("Generating iCalendar data")
parser.generateZoneinfoFiles(os.path.join(rootdir, "zoneinfo"), startYear, endYear, windowsAliases=wpath, filterzones=())
    print("Copying new zoneinfo to destination: %s" % (tzpath,))
z = FilePath(os.path.join(rootdir, "zoneinfo"))
tz = FilePath(tzpath)
z.copyTo(tz)
print("Updating XML file at: %s" % (xmlfile,))
tzdb.readDatabase()
tzdb.updateDatabase()
print("Current total: %d" % (len(tzdb.timezones),))
print("Total Changed: %d" % (tzdb.changeCount,))
if tzdb.changeCount:
print("Changed:")
for k in sorted(tzdb.changed):
print(" %s" % (k,))
versfile = os.path.join(os.path.dirname(xmlfile), "version.txt")
print("Updating version file at: %s" % (versfile,))
with open(versfile, "w") as f:
f.write(TimezoneCache.IANA_VERSION_PREFIX + tzvers)
def _doCreate(xmlfile, tzdb):
"""
Create new xml file.
"""
print("Creating new XML file at: %s" % (xmlfile,))
tzdb.createNewDatabase()
print("Current total: %d" % (len(tzdb.timezones),))
def _doUpdate(xmlfile, tzdb):
"""
Update xml file.
"""
print("Updating XML file at: %s" % (xmlfile,))
tzdb.readDatabase()
tzdb.updateDatabase()
print("Current total: %d" % (len(tzdb.timezones),))
print("Total Changed: %d" % (tzdb.changeCount,))
if tzdb.changeCount:
print("Changed:")
for k in sorted(tzdb.changed):
print(" %s" % (k,))
def _doList(xmlfile, tzdb):
"""
List current timezones from xml file.
"""
print("Listing XML file at: %s" % (xmlfile,))
tzdb.readDatabase()
print("Current timestamp: %s" % (tzdb.dtstamp,))
print("Timezones:")
for k in sorted(tzdb.timezones.keys()):
print(" %s" % (k,))
def _doChanged(xmlfile, changed, tzdb):
"""
Check for local timezone changes.
"""
print("Changes from XML file at: %s" % (xmlfile,))
tzdb.readDatabase()
print("Check timestamp: %s" % (changed,))
print("Current timestamp: %s" % (tzdb.dtstamp,))
results = [k for k, v in tzdb.timezones.items() if v.dtstamp > changed]
print("Total Changed: %d" % (len(results),))
if results:
print("Changed:")
for k in sorted(results):
print(" %s" % (k,))
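The "changed since" check in `_doChanged` reduces to a per-zone timestamp comparison; a standalone sketch under simplified assumptions (the function name and data shapes are illustrative):

```python
def changed_since(timezones, stamp):
    # timezones maps tzid -> last-modified timestamp; ISO 8601 strings
    # compare correctly as plain strings, so this mirrors the list
    # comprehension in _doChanged().
    return sorted(k for k, v in timezones.items() if v > stamp)

zones = {
    "America/New_York": "2015-08-10T12:00:00Z",
    "Europe/London": "2014-01-01T00:00:00Z",
}
print(changed_since(zones, "2015-01-01T00:00:00Z"))  # -> ['America/New_York']
```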
@inlineCallbacks
def _runInReactor(tzdb):
try:
new, changed = yield tzdb.syncWithServer()
print("New: %d" % (new,))
print("Changed: %d" % (changed,))
print("Current total: %d" % (len(tzdb.timezones),))
except Exception, e:
print("Could not sync with server: %s" % (str(e),))
finally:
reactor.stop()
def _doSecondaryActions(action, tzpath, xmlfile, url):
tzdb = SecondaryTimezoneDatabase(tzpath, xmlfile, url)
    try:
        tzdb.readDatabase()
    except:
        # Existing XML database may be missing or invalid; carry on and cache
        pass
if action == "cache":
print("Caching from secondary server: %s" % (url,))
observer = StandardIOObserver()
observer.start()
reactor.callLater(0, _runInReactor, tzdb)
reactor.run()
else:
usage("Invalid action: %s" % (action,))
def usage(error_msg=None):
if error_msg:
print(error_msg)
print("")
print("""Usage: managetimezones [options]
Options:
-h Print this help and exit
    -f            config file path (default: the system caldavd.plist)
-x XML file path
-z zoneinfo file path
--tzvers year/release letter of IANA data to refresh
default: use the latest release
--url URL or domain of secondary service
# Primary service
--refresh refresh data from IANA
--refreshpkg refresh package data from IANA
--create create new XML file
--update update XML file
--list list timezones in XML file
--changed changed since timestamp
# Secondary service
--cache Cache data from service
Description:
    This utility will create, update, or list an XML timezone database
    summary file, or refresh the iCalendar timezone data from IANA (Olson).
    It can also be used to update the server's own zoneinfo database from
    IANA, and it creates timezone aliases from the unicode.org windowsZones
    data.
""")
if error_msg:
raise ValueError(error_msg)
else:
sys.exit(0)
def main():
configFileName = DEFAULT_CONFIG_FILE
primary = False
secondary = False
action = None
tzpath = None
xmlfile = None
changed = None
url = None
tzvers = None
updatepkg = False
# Get options
options, _ignore_args = getopt.getopt(
sys.argv[1:],
"f:hx:z:",
[
"refresh",
"refreshpkg",
"create",
"update",
"list",
"changed=",
"url=",
"cache",
"tzvers=",
]
)
for option, value in options:
if option == "-h":
usage()
elif option == "-f":
configFileName = value
elif option == "-x":
xmlfile = value
elif option == "-z":
tzpath = value
elif option == "--refresh":
action = "refresh"
primary = True
elif option == "--refreshpkg":
action = "refresh"
primary = True
updatepkg = True
elif option == "--create":
action = "create"
primary = True
elif option == "--update":
action = "update"
primary = True
elif option == "--list":
action = "list"
primary = True
elif option == "--changed":
action = "changed"
primary = True
changed = value
elif option == "--url":
url = value
secondary = True
elif option == "--cache":
action = "cache"
secondary = True
elif option == "--tzvers":
tzvers = value
else:
usage("Unrecognized option: %s" % (option,))
if configFileName is None:
usage("A configuration file must be specified")
try:
loadConfig(configFileName)
except ConfigurationError, e:
sys.stdout.write("%s\n" % (e,))
sys.exit(1)
if action is None:
action = "list"
primary = True
if tzpath is None:
if updatepkg:
try:
import pkg_resources
except ImportError:
tzpath = os.path.join(os.path.dirname(__file__), "zoneinfo")
else:
tzpath = pkg_resources.resource_filename("twistedcaldav", "zoneinfo") #@UndefinedVariable
else:
# Setup the correct zoneinfo path based on the config
tzpath = TimezoneCache.getDBPath()
TimezoneCache.validatePath()
if xmlfile is None:
xmlfile = os.path.join(tzpath, "timezones.xml")
if primary and not os.path.isdir(tzpath):
usage("Invalid zoneinfo path: %s" % (tzpath,))
if primary and not os.path.isfile(xmlfile) and action != "create":
usage("Invalid XML file path: %s" % (xmlfile,))
if primary and secondary:
usage("Cannot use primary and secondary options together")
if primary:
_doPrimaryActions(action, tzpath, xmlfile, changed, tzvers)
else:
_doSecondaryActions(action, tzpath, xmlfile, url)
if __name__ == '__main__':
main()
calendarserver-7.0+dfsg/calendarserver/tools/migrate.py
#!/usr/bin/env python
##
# Copyright (c) 2006-2015 Apple Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
##
"""
This tool migrates existing calendar data from any previous calendar server
version to the current version.

This tool requires access to the calendar server's configuration and
data storage; it does not operate by talking to the server via the
network.
"""

from __future__ import print_function
import os
import sys
from getopt import getopt, GetoptError
from twistedcaldav.config import ConfigurationError
from twistedcaldav.upgrade import upgradeData
from calendarserver.tools.util import loadConfig
def usage(e=None):
if e:
print(e)
print("")
name = os.path.basename(sys.argv[0])
print("usage: %s [options]" % (name,))
print("")
print("Migrate calendar data to current version")
print(__doc__)
print("options:")
print(" -h --help: print this help and exit")
print(" -f --config: Specify caldavd.plist configuration path")
if e:
sys.exit(64)
else:
sys.exit(0)
def main():
try:
(optargs, args) = getopt(
sys.argv[1:], "hf:", [
"config=",
"help",
],
)
except GetoptError, e:
usage(e)
configFileName = None
for opt, arg in optargs:
if opt in ("-h", "--help"):
usage()
elif opt in ("-f", "--config"):
configFileName = arg
if args:
usage("Too many arguments: %s" % (" ".join(args),))
try:
config = loadConfig(configFileName)
except ConfigurationError, e:
sys.stdout.write("%s\n" % (e,))
sys.exit(1)
profiling = False
if profiling:
import cProfile
cProfile.runctx("upgradeData(c)", globals(), {"c": config}, "/tmp/upgrade.prof")
else:
upgradeData(config)
if __name__ == "__main__":
main()
calendarserver-7.0+dfsg/calendarserver/tools/migrate_verify.py
#!/usr/bin/env python
# -*- test-case-name: calendarserver.tools.test.test_calverify -*-
##
# Copyright (c) 2012-2015 Apple Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
##
"""
This tool takes a list of file paths from a file store being migrated
and compares that to the results of a migration to an SQL store. Items
not migrated are logged.
"""

from __future__ import print_function
import os
import sys
from twisted.internet.defer import inlineCallbacks, returnValue
from twisted.python.text import wordWrap
from twisted.python.usage import Options
from twext.enterprise.dal.syntax import Select, Parameter
from twext.python.log import Logger
from twistedcaldav.stdconfig import DEFAULT_CONFIG_FILE
from txdav.common.datastore.sql_tables import schema, _BIND_MODE_OWN
from calendarserver.tools.cmdline import utilityMain, WorkerService
log = Logger()
VERSION = "1"
def usage(e=None):
if e:
print(e)
print("")
try:
MigrateVerifyOptions().opt_help()
except SystemExit:
pass
if e:
sys.exit(64)
else:
sys.exit(0)
description = ''.join(
wordWrap(
"""
Usage: calendarserver_migrate_verify [options] [input specifiers]
""",
int(os.environ.get('COLUMNS', '80'))
)
)
description += "\nVersion: %s" % (VERSION,)
class ConfigError(Exception):
pass
class MigrateVerifyOptions(Options):
"""
Command-line options for 'calendarserver_migrate_verify'
"""
synopsis = description
optFlags = [
['debug', 'D', "Debug logging."],
]
optParameters = [
['config', 'f', DEFAULT_CONFIG_FILE, "Specify caldavd.plist configuration path."],
['data', 'd', "./paths.txt", "List of file paths for migrated data."],
]
def __init__(self):
super(MigrateVerifyOptions, self).__init__()
self.outputName = '-'
def opt_output(self, filename):
"""
Specify output file path (default: '-', meaning stdout).
"""
self.outputName = filename
opt_o = opt_output
def openOutput(self):
"""
Open the appropriate output file based on the '--output' option.
"""
if self.outputName == '-':
return sys.stdout
else:
return open(self.outputName, 'wb')
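The `'-'` convention used by `openOutput` is a common CLI idiom; a minimal standalone sketch (the helper name is illustrative, and text mode is assumed rather than this class's `'wb'`):

```python
import sys

def open_output(name):
    # "-" is the conventional spelling for standard output; anything
    # else is treated as a file path, as in MigrateVerifyOptions.
    return sys.stdout if name == "-" else open(name, "w")
```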
class MigrateVerifyService(WorkerService, object):
"""
Service which runs, does its stuff, then stops the reactor.
"""
def __init__(self, store, options, output, reactor, config):
super(MigrateVerifyService, self).__init__(store)
self.options = options
self.output = output
self.reactor = reactor
self.config = config
self.pathsByGUID = {}
self.badPaths = []
self.validPaths = 0
self.ignoreInbox = 0
self.ignoreDropbox = 0
self.missingGUIDs = []
self.missingCalendars = []
self.missingResources = []
@inlineCallbacks
def doWork(self):
"""
Do the work, stopping the reactor when done.
"""
self.output.write("\n---- Migrate Verify version: %s ----\n" % (VERSION,))
try:
self.readPaths()
yield self.doCheck()
self.output.close()
except ConfigError:
pass
except:
log.failure("doWork()")
def readPaths(self):
self.output.write("-- Reading data file: %s\n" % (self.options["data"]))
datafile = open(os.path.expanduser(self.options["data"]))
total = 0
invalidGUIDs = set()
for line in datafile:
line = line.strip()
total += 1
segments = line.split("/")
while segments and segments[0] != "__uids__":
segments.pop(0)
if segments and len(segments) >= 6:
guid = segments[3]
calendar = segments[4]
resource = segments[5]
if calendar == "inbox":
self.ignoreInbox += 1
invalidGUIDs.add(guid)
elif calendar == "dropbox":
self.ignoreDropbox += 1
invalidGUIDs.add(guid)
elif len(segments) > 6:
self.badPaths.append(line)
invalidGUIDs.add(guid)
else:
self.pathsByGUID.setdefault(guid, {}).setdefault(calendar, set()).add(resource)
self.validPaths += 1
else:
if segments and len(segments) >= 4:
invalidGUIDs.add(segments[3])
self.badPaths.append(line)
        # Remove any invalid GUIDs that actually were valid
invalidGUIDs = [pguid for pguid in invalidGUIDs if pguid not in self.pathsByGUID]
self.output.write("\nTotal lines read: %d\n" % (total,))
self.output.write("Total guids: valid: %d invalid: %d overall: %d\n" % (
len(self.pathsByGUID),
len(invalidGUIDs),
len(self.pathsByGUID) + len(invalidGUIDs),
))
self.output.write("Total valid calendars: %d\n" % (sum([len(v) for v in self.pathsByGUID.values()]),))
self.output.write("Total valid resources: %d\n" % (self.validPaths,))
self.output.write("Total inbox resources: %d\n" % (self.ignoreInbox,))
self.output.write("Total dropbox resources: %d\n" % (self.ignoreDropbox,))
self.output.write("Total bad paths: %d\n" % (len(self.badPaths),))
self.output.write("\n-- Invalid GUIDs\n")
for invalidGUID in sorted(invalidGUIDs):
self.output.write("Invalid GUID: %s\n" % (invalidGUID,))
self.output.write("\n-- Bad paths\n")
for badPath in sorted(self.badPaths):
self.output.write("Bad path: %s\n" % (badPath,))
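The path-splitting rule in `readPaths` — drop everything before `__uids__`, then take the GUID, calendar, and resource from fixed offsets — can be sketched as a pure function (the function name is hypothetical, and the inbox/dropbox/bad-path bookkeeping is omitted):

```python
def split_migration_path(path):
    # Mirrors readPaths(): discard leading segments up to "__uids__";
    # the GUID, calendar, and resource then sit at offsets 3, 4, and 5
    # (offsets 1 and 2 are the two-level hash directories).
    segments = path.strip().split("/")
    while segments and segments[0] != "__uids__":
        segments.pop(0)
    if len(segments) < 6:
        return None  # too short to name a calendar resource
    return segments[3], segments[4], segments[5]

print(split_migration_path(
    "/docs/__uids__/AB/CD/ABCD1234/calendar/event.ics"
))  # -> ('ABCD1234', 'calendar', 'event.ics')
```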
@inlineCallbacks
def doCheck(self):
"""
Check path data against the SQL store.
"""
self.output.write("\n-- Scanning database for missed migrations\n")
# Get list of distinct resource_property resource_ids to delete
self.txn = self.store.newTransaction()
total = len(self.pathsByGUID)
totalMissingCalendarResources = 0
count = 0
for guid in self.pathsByGUID:
            if count % 10 == 0:
self.output.write(("\r%d of %d (%d%%)" % (
count,
total,
(count * 100 / total),
)).ljust(80))
self.output.flush()
# First check the presence of each guid and the calendar count
homeID = (yield self.guid2ResourceID(guid))
if homeID is None:
self.missingGUIDs.append(guid)
continue
# Now get the list of calendar names and calendar resource IDs
results = (yield self.calendarsForUser(homeID))
if results is None:
results = []
calendars = dict(results)
for calendar in self.pathsByGUID[guid].keys():
if calendar not in calendars:
self.missingCalendars.append("%s/%s (resources: %d)" % (guid, calendar, len(self.pathsByGUID[guid][calendar])))
totalMissingCalendarResources += len(self.pathsByGUID[guid][calendar])
else:
# Now get list of all calendar resources
results = (yield self.resourcesForCalendar(calendars[calendar]))
if results is None:
results = []
results = [result[0] for result in results]
db_resources = set(results)
# Also check for split calendar
if "%s-vtodo" % (calendar,) in calendars:
results = (yield self.resourcesForCalendar(calendars["%s-vtodo" % (calendar,)]))
if results is None:
results = []
results = [result[0] for result in results]
db_resources.update(results)
# Also check for split calendar
if "%s-vevent" % (calendar,) in calendars:
results = (yield self.resourcesForCalendar(calendars["%s-vevent" % (calendar,)]))
if results is None:
results = []
results = [result[0] for result in results]
db_resources.update(results)
old_resources = set(self.pathsByGUID[guid][calendar])
self.missingResources.extend(["%s/%s/%s" % (guid, calendar, resource,) for resource in old_resources.difference(db_resources)])
            # Commit every 10 times through the loop
            if (count + 1) % 10 == 0:
yield self.txn.commit()
self.txn = self.store.newTransaction()
count += 1
yield self.txn.commit()
self.txn = None
self.output.write("\n\nTotal missing GUIDs: %d\n" % (len(self.missingGUIDs),))
for guid in sorted(self.missingGUIDs):
self.output.write("%s\n" % (guid,))
self.output.write("\nTotal missing Calendars: %d (resources: %d)\n" % (len(self.missingCalendars), totalMissingCalendarResources,))
for calendar in sorted(self.missingCalendars):
self.output.write("%s\n" % (calendar,))
self.output.write("\nTotal missing Resources: %d\n" % (len(self.missingResources),))
for resource in sorted(self.missingResources):
self.output.write("%s\n" % (resource,))
@inlineCallbacks
def guid2ResourceID(self, guid):
ch = schema.CALENDAR_HOME
kwds = {"GUID" : guid}
rows = (yield Select(
[
ch.RESOURCE_ID,
],
From=ch,
Where=(
ch.OWNER_UID == Parameter("GUID")
),
).on(self.txn, **kwds))
returnValue(rows[0][0] if rows else None)
@inlineCallbacks
def calendarsForUser(self, rid):
cb = schema.CALENDAR_BIND
kwds = {"RID" : rid}
rows = (yield Select(
[
cb.CALENDAR_RESOURCE_NAME,
cb.CALENDAR_RESOURCE_ID,
],
From=cb,
Where=(
cb.CALENDAR_HOME_RESOURCE_ID == Parameter("RID")
).And(cb.BIND_MODE == _BIND_MODE_OWN),
).on(self.txn, **kwds))
returnValue(rows)
@inlineCallbacks
def resourcesForCalendar(self, rid):
co = schema.CALENDAR_OBJECT
kwds = {"RID" : rid}
rows = (yield Select(
[
co.RESOURCE_NAME,
],
From=co,
Where=(
co.CALENDAR_RESOURCE_ID == Parameter("RID")
),
).on(self.txn, **kwds))
returnValue(rows)
def stopService(self):
"""
Stop the service. Nothing to do; everything should be finished by this
time.
"""
def main(argv=sys.argv, stderr=sys.stderr, reactor=None):
    """
    Run the migration verification tool.
    """
if reactor is None:
from twisted.internet import reactor
options = MigrateVerifyOptions()
options.parseOptions(argv[1:])
try:
output = options.openOutput()
except IOError, e:
        stderr.write("Unable to open output file for writing: %s\n" % (e,))
sys.exit(1)
def makeService(store):
from twistedcaldav.config import config
config.TransactionTimeoutSeconds = 0
return MigrateVerifyService(store, options, output, reactor, config)
utilityMain(options['config'], makeService, reactor, verbose=options["debug"])
if __name__ == '__main__':
main()
calendarserver-7.0+dfsg/calendarserver/tools/notifications.py
##
# Copyright (c) 2011-2015 Apple Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
##
from __future__ import print_function
from calendarserver.tools.util import loadConfig
from datetime import datetime
from getopt import getopt, GetoptError
from getpass import getpass
from twisted.application.service import Service
from twisted.internet import reactor
from twisted.internet.defer import inlineCallbacks, returnValue
from twisted.web import client
from twisted.words.protocols.jabber import xmlstream
from twisted.words.protocols.jabber.client import XMPPAuthenticator, IQAuthInitializer
from twisted.words.protocols.jabber.jid import JID
from twisted.words.protocols.jabber.xmlstream import IQ
from twisted.words.xish import domish
from twext.internet.gaiendpoint import GAIEndpoint
from twext.internet.adaptendpoint import connect
from twistedcaldav.config import config, ConfigurationError
from twistedcaldav.util import AuthorizedHTTPGetter
from xml.etree import ElementTree
import os
import signal
import sys
import uuid
def usage(e=None):
name = os.path.basename(sys.argv[0])
print("usage: %s [options] username" % (name,))
print("")
print(" Monitor push notification events from calendar server")
print("")
print("options:")
print(" -a --admin : Specify an administrator username")
print(" -f --config : Specify caldavd.plist configuration path")
print(" -h --help: print this help and exit")
print(" -H --host : calendar server host name")
print(" -n --node : pubsub node to subscribe to *")
print(" -p --port : calendar server port number")
print(" -s --ssl: use https (default is http)")
print(" -v --verbose: print additional information including XMPP traffic")
print("")
print(" * The --node option is only required for calendar servers that")
print(" don't advertise the push-transports DAV property (such as a Snow")
print(" Leopard server). In this case, --host should specify the name")
    print("    of the XMPP server and --port should specify the port XMPP")
    print("    is listening on.")
print("")
if e:
sys.stderr.write("%s\n" % (e,))
sys.exit(64)
else:
sys.exit(0)
def main():
try:
(optargs, args) = getopt(
sys.argv[1:], "a:f:hH:n:p:sv", [
"admin=",
"config=",
"help",
"host=",
"node=",
"port=",
"ssl",
"verbose",
],
)
except GetoptError, e:
usage(e)
admin = None
configFileName = None
host = None
nodes = None
port = None
useSSL = False
verbose = False
for opt, arg in optargs:
if opt in ("-h", "--help"):
usage()
elif opt in ("-f", "--config"):
configFileName = arg
elif opt in ("-a", "--admin"):
admin = arg
elif opt in ("-H", "--host"):
host = arg
elif opt in ("-n", "--node"):
nodes = [arg]
elif opt in ("-p", "--port"):
port = int(arg)
elif opt in ("-s", "--ssl"):
useSSL = True
elif opt in ("-v", "--verbose"):
verbose = True
else:
raise NotImplementedError(opt)
if len(args) != 1:
usage("Username not specified")
username = args[0]
if host is None:
# No host specified, so try loading config to look up settings
try:
loadConfig(configFileName)
except ConfigurationError, e:
print("Error in configuration: %s" % (e,))
sys.exit(1)
useSSL = config.EnableSSL
host = config.ServerHostName
port = config.SSLPort if useSSL else config.HTTPPort
if port is None:
usage("Must specify a port number")
if admin:
password = getpass("Password for administrator %s: " % (admin,))
else:
password = getpass("Password for %s: " % (username,))
admin = username
monitorService = PushMonitorService(
useSSL, host, port, nodes, admin,
username, password, verbose)
reactor.addSystemEventTrigger(
"during", "startup",
monitorService.startService)
reactor.addSystemEventTrigger(
"before", "shutdown",
monitorService.stopService)
reactor.run()
class PubSubClientFactory(xmlstream.XmlStreamFactory):
"""
An XMPP pubsub client that subscribes to nodes and prints a message
whenever a notification arrives.
"""
pubsubNS = 'http://jabber.org/protocol/pubsub'
def __init__(self, jid, password, service, nodes, verbose, sigint=True):
resource = "pushmonitor.%s" % (uuid.uuid4().hex,)
self.jid = "%s/%s" % (jid, resource)
self.service = service
self.nodes = nodes
self.verbose = verbose
if self.verbose:
print("JID:", self.jid, "Pubsub service:", self.service)
self.presenceSeconds = 60
self.presenceCall = None
self.xmlStream = None
self.doKeepAlive = True
xmlstream.XmlStreamFactory.__init__(
self,
XMPPAuthenticator(JID(self.jid), password))
self.addBootstrap(xmlstream.STREAM_CONNECTED_EVENT, self.connected)
self.addBootstrap(xmlstream.STREAM_END_EVENT, self.disconnected)
self.addBootstrap(xmlstream.INIT_FAILED_EVENT, self.initFailed)
self.addBootstrap(xmlstream.STREAM_AUTHD_EVENT, self.authenticated)
self.addBootstrap(
IQAuthInitializer.INVALID_USER_EVENT,
self.authFailed)
self.addBootstrap(
IQAuthInitializer.AUTH_FAILED_EVENT,
self.authFailed)
if sigint:
signal.signal(signal.SIGINT, self.sigint_handler)
@inlineCallbacks
def sigint_handler(self, num, frame):
print(" Shutting down...")
yield self.unsubscribeAll()
reactor.stop()
@inlineCallbacks
def unsubscribeAll(self):
if self.xmlStream is not None:
for node, (_ignore_url, name, kind) in self.nodes.iteritems():
yield self.unsubscribe(node, name, kind)
def connected(self, xmlStream):
self.xmlStream = xmlStream
if self.verbose:
print("XMPP connection successful")
xmlStream.rawDataInFn = self.rawDataIn
xmlStream.rawDataOutFn = self.rawDataOut
xmlStream.addObserver("/message/event/items",
self.handleMessageEventItems)
def disconnected(self, xmlStream):
self.xmlStream = None
if self.presenceCall is not None:
self.presenceCall.cancel()
self.presenceCall = None
if self.verbose:
print("XMPP disconnected")
def initFailed(self, failure):
self.xmlStream = None
print("XMPP connection failure: %s" % (failure,))
reactor.stop()
@inlineCallbacks
def authenticated(self, xmlStream):
if self.verbose:
print("XMPP authentication successful")
self.sendPresence()
for node, (_ignore_url, name, kind) in self.nodes.iteritems():
yield self.subscribe(node, name, kind)
print("Awaiting notifications (hit Control-C to end)")
def authFailed(self, e):
print("XMPP authentication failed")
reactor.stop()
def sendPresence(self):
if self.doKeepAlive and self.xmlStream is not None:
presence = domish.Element(('jabber:client', 'presence'))
self.xmlStream.send(presence)
self.presenceCall = reactor.callLater(
self.presenceSeconds,
self.sendPresence)
def handleMessageEventItems(self, iq):
item = iq.firstChildElement().firstChildElement()
if item:
node = item.getAttribute("node")
if node:
node = str(node)
url, name, kind = self.nodes.get(node, ("Not subscribed", "Unknown", "Unknown"))
timestamp = datetime.now().strftime("%Y-%m-%d %H:%M:%S")
print('%s | Notification for "%s" (%s)' % (timestamp, name, kind))
if self.verbose:
print(" node = %s" % (node,))
print(" url = %s" % (url,))
@inlineCallbacks
def subscribe(self, node, name, kind):
iq = IQ(self.xmlStream)
pubsubElement = iq.addElement("pubsub", defaultUri=self.pubsubNS)
subElement = pubsubElement.addElement("subscribe")
subElement["node"] = node
subElement["jid"] = self.jid
print('Subscribing to "%s" (%s)' % (name, kind))
if self.verbose:
print(node)
try:
yield iq.send(to=self.service)
print("OK")
except Exception, e:
print("Subscription failure: %s %s" % (node, e))
@inlineCallbacks
def unsubscribe(self, node, name, kind):
iq = IQ(self.xmlStream)
pubsubElement = iq.addElement("pubsub", defaultUri=self.pubsubNS)
subElement = pubsubElement.addElement("unsubscribe")
subElement["node"] = node
subElement["jid"] = self.jid
print('Unsubscribing from "%s" (%s)' % (name, kind))
if self.verbose:
print(node)
try:
yield iq.send(to=self.service)
print("OK")
except Exception, e:
print("Unsubscription failure: %s %s" % (node, e))
@inlineCallbacks
def retrieveSubscriptions(self):
# This isn't supported by Apple's pubsub service
iq = IQ(self.xmlStream)
pubsubElement = iq.addElement("pubsub", defaultUri=self.pubsubNS)
pubsubElement.addElement("subscriptions")
print("Requesting list of subscriptions")
try:
yield iq.send(to=self.service)
except Exception, e:
print("Subscription list failure: %s" % (e,))
def rawDataIn(self, buf):
print("RECV: %s" % unicode(buf, 'utf-8').encode('ascii', 'replace'))
def rawDataOut(self, buf):
print("SEND: %s" % unicode(buf, 'utf-8').encode('ascii', 'replace'))
class PropfindRequestor(AuthorizedHTTPGetter):
    handleStatus_207 = lambda self: self.handleStatus_200()
class PushMonitorService(Service):
"""
A service which uses CalDAV to determine which pubsub node(s) correspond
to any calendar the user has access to (including any belonging to users
who have delegated access to the user). Those nodes are subscribed to
using XMPP and monitored for updates.
"""
def __init__(
self, useSSL, host, port, nodes, authname, username, password,
verbose
):
self.useSSL = useSSL
self.host = host
self.port = port
self.nodes = nodes
self.authname = authname
self.username = username
self.password = password
self.verbose = verbose
@inlineCallbacks
def startService(self):
try:
subscribeNodes = {}
if self.nodes is None:
paths = set()
principal = "/principals/users/%s/" % (self.username,)
name, homes = (yield self.getPrincipalDetails(principal))
if self.verbose:
print(name, homes)
for home in homes:
paths.add(home)
for principal in (yield self.getProxyFor()):
name, homes = (yield self.getPrincipalDetails(
principal,
includeCardDAV=False))
if self.verbose:
print(name, homes)
for home in homes:
if home.startswith("/"):
# Only support homes on the same server for now.
paths.add(home)
for path in paths:
host, port, nodes = (yield self.getPushInfo(path))
subscribeNodes.update(nodes)
else:
for node in self.nodes:
subscribeNodes[node] = ("Unknown", "Unknown", "Unknown")
host = self.host
port = self.port
# TODO: support talking to multiple hosts (to support delegates
# from other calendar servers)
if subscribeNodes:
self.startMonitoring(host, port, subscribeNodes)
else:
print("No nodes to monitor")
reactor.stop()
except Exception, e:
print("Error:", e)
reactor.stop()
@inlineCallbacks
def getPrincipalDetails(self, path, includeCardDAV=True):
"""
Given a principal path, retrieve and return the corresponding
displayname and set of calendar/addressbook homes.
"""
name = ""
homes = []
headers = {
"Depth" : "0",
}
body = """
"""
try:
responseBody = (yield self.makeRequest(
path, "PROPFIND", headers,
body))
try:
doc = ElementTree.fromstring(responseBody)
for response in doc.findall("{DAV:}response"):
href = response.find("{DAV:}href")
for propstat in response.findall("{DAV:}propstat"):
status = propstat.find("{DAV:}status")
if "200 OK" in status.text:
for prop in propstat.findall("{DAV:}prop"):
calendarHomeSet = prop.find("{urn:ietf:params:xml:ns:caldav}calendar-home-set")
if calendarHomeSet is not None:
for href in calendarHomeSet.findall("{DAV:}href"):
href = href.text
if href:
homes.append(href)
if includeCardDAV:
addressbookHomeSet = prop.find("{urn:ietf:params:xml:ns:carddav}addressbook-home-set")
if addressbookHomeSet is not None:
for href in addressbookHomeSet.findall("{DAV:}href"):
href = href.text
if href:
homes.append(href)
displayName = prop.find("{DAV:}displayname")
if displayName is not None:
displayName = displayName.text
if displayName:
name = displayName
except Exception, e:
print("Unable to parse principal details", e)
print(responseBody)
raise
except Exception, e:
print("Unable to look up principal details", e)
raise
returnValue((name, homes))
@inlineCallbacks
def getProxyFor(self):
"""
Retrieve and return the principal paths for any user that has delegated
calendar access to this user.
"""
proxies = set()
headers = {
"Depth" : "0",
}
body = """
"""
path = "/principals/users/%s/" % (self.username,)
try:
responseBody = (yield self.makeRequest(
path, "PROPFIND", headers,
body))
try:
doc = ElementTree.fromstring(responseBody)
for response in doc.findall("{DAV:}response"):
href = response.find("{DAV:}href")
for propstat in response.findall("{DAV:}propstat"):
status = propstat.find("{DAV:}status")
if "200 OK" in status.text:
for prop in propstat.findall("{DAV:}prop"):
for element in (
"{http://calendarserver.org/ns/}calendar-proxy-read-for",
"{http://calendarserver.org/ns/}calendar-proxy-write-for",
):
proxyFor = prop.find(element)
if proxyFor is not None:
for href in proxyFor.findall("{DAV:}href"):
href = href.text
if href:
proxies.add(href)
except Exception, e:
print("Unable to parse proxy information", e)
print(responseBody)
raise
except Exception, e:
print("Unable to look up who %s is a proxy for" % (self.username,))
raise
returnValue(proxies)
@inlineCallbacks
def getPushInfo(self, path):
"""
Given a calendar home path, retrieve push notification info including
xmpp hostname and port, the pushkey for the home and for each shared
collection bound into the home.
"""
headers = {
"Depth" : "1",
}
body = """
"""
try:
responseBody = (yield self.makeRequest(
path, "PROPFIND", headers,
body))
host = None
port = None
nodes = {}
try:
doc = ElementTree.fromstring(responseBody)
for response in doc.findall("{DAV:}response"):
href = response.find("{DAV:}href")
key = None
name = None
if path.startswith("/calendars"):
kind = "Calendar home"
else:
kind = "AddressBook home"
for propstat in response.findall("{DAV:}propstat"):
status = propstat.find("{DAV:}status")
if "200 OK" in status.text:
for prop in propstat.findall("{DAV:}prop"):
displayName = prop.find("{DAV:}displayname")
if displayName is not None:
displayName = displayName.text
if displayName:
name = displayName
resourceType = prop.find("{DAV:}resourcetype")
if resourceType is not None:
shared = resourceType.find("{http://calendarserver.org/ns/}shared")
if shared is not None:
kind = "Shared calendar"
pushKey = prop.find("{http://calendarserver.org/ns/}pushkey")
if pushKey is not None:
pushKey = pushKey.text
if pushKey:
key = pushKey
pushTransports = prop.find("{http://calendarserver.org/ns/}push-transports")
if pushTransports is not None:
if self.verbose:
print("push-transports:\n\n", ElementTree.tostring(pushTransports))
for transport in pushTransports.findall("{http://calendarserver.org/ns/}transport"):
if transport.attrib["type"] == "XMPP":
xmppServer = transport.find("{http://calendarserver.org/ns/}xmpp-server")
if xmppServer is not None:
xmppServer = xmppServer.text
if xmppServer:
if ":" in xmppServer:
host, port = xmppServer.split(":")
port = int(port)
else:
host = xmppServer
port = 5222
if key and key not in nodes:
nodes[key] = (href.text, name, kind)
except Exception, e:
print("Unable to parse push information", e)
print(responseBody)
raise
except Exception, e:
print("Unable to look up push information for %s" % (self.username,), e)
raise
if host is None:
raise Exception("Unable to determine xmpp server name")
if port is None:
raise Exception("Unable to determine xmpp server port")
returnValue((host, port, nodes))
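The XMPP server value parsed above is a plain `host[:port]` string with 5222 as the fallback client port. That split-with-default logic can be sketched in isolation (`parse_server` is a hypothetical helper, not part of this tool):

```python
def parse_server(value, default_port=5222):
    # Split "host:port" into its parts, falling back to the
    # default XMPP client port when no port is given.
    if ":" in value:
        host, port = value.split(":")
        return host, int(port)
    return value, default_port

print(parse_server("xmpp.example.com:5223"))  # ('xmpp.example.com', 5223)
print(parse_server("xmpp.example.com"))       # ('xmpp.example.com', 5222)
```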
def startMonitoring(self, host, port, nodes):
service = "pubsub.%s" % (host,)
jid = "%s@%s" % (self.authname, host)
pubsubFactory = PubSubClientFactory(
jid, self.password, service, nodes,
self.verbose)
connect(GAIEndpoint(reactor, host, port), pubsubFactory)
def makeRequest(self, path, method, headers, body):
scheme = "https:" if self.useSSL else "http:"
url = "%s//%s:%d%s" % (scheme, self.host, self.port, path)
caldavFactory = client.HTTPClientFactory(
url, method=method,
headers=headers, postdata=body, agent="Push Monitor")
caldavFactory.username = self.authname
caldavFactory.password = self.password
caldavFactory.noisy = False
caldavFactory.protocol = PropfindRequestor
if self.useSSL:
connect(GAIEndpoint(reactor, self.host, self.port,
self.ClientContextFactory()),
caldavFactory)
else:
connect(GAIEndpoint(reactor, self.host, self.port), caldavFactory)
return caldavFactory.deferred
if __name__ == "__main__":
main()
calendarserver-7.0+dfsg/calendarserver/tools/obliterate.py
#!/usr/bin/env python
# -*- test-case-name: calendarserver.tools.test.test_calverify -*-
##
# Copyright (c) 2012-2015 Apple Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
##
from __future__ import print_function
"""
This tool wipes out user data without using the slow store object APIs
that attempt to keep the DB consistent. Instead, it assumes facts about the
schema and how the various tables are related. Normally the purge principal
tool should be used to "correctly" remove user data. This is an emergency tool
needed when data has been accidentally migrated into the DB but no users
actually have access to it because they are not enabled on the server.
"""
import os
import sys
import time
from twisted.internet.defer import inlineCallbacks, returnValue
from twisted.python.text import wordWrap
from twisted.python.usage import Options
from twext.enterprise.dal.syntax import Parameter, Delete, Select, Union, \
CompoundComparison, ExpressionSyntax, Count
from twext.python.log import Logger
from twistedcaldav.stdconfig import DEFAULT_CONFIG_FILE
from txdav.common.datastore.sql_tables import schema, _BIND_MODE_OWN
from calendarserver.tools.cmdline import utilityMain, WorkerService
log = Logger()
VERSION = "1"
def usage(e=None):
if e:
print(e)
print("")
try:
ObliterateOptions().opt_help()
except SystemExit:
pass
if e:
sys.exit(64)
else:
sys.exit(0)
description = ''.join(
wordWrap(
"""
Usage: calendarserver_obliterate [options] [input specifiers]
""",
int(os.environ.get('COLUMNS', '80'))
)
)
description += "\nVersion: %s" % (VERSION,)
class ConfigError(Exception):
pass
class ObliterateOptions(Options):
"""
Command-line options for 'calendarserver_obliterate'
"""
synopsis = description
optFlags = [
['verbose', 'v', "Verbose logging."],
['debug', 'D', "Debug logging."],
['fix-props', 'p', "Fix orphaned resource properties only."],
['dry-run', 'n', "Do not make any changes."],
]
optParameters = [
['config', 'f', DEFAULT_CONFIG_FILE, "Specify caldavd.plist configuration path."],
['data', 'd', "./uuids.txt", "Path where list of uuids to obliterate is."],
['uuid', 'u', "", "Obliterate this user's data."],
]
def __init__(self):
super(ObliterateOptions, self).__init__()
self.outputName = '-'
def opt_output(self, filename):
"""
Specify output file path (default: '-', meaning stdout).
"""
self.outputName = filename
opt_o = opt_output
def openOutput(self):
"""
Open the appropriate output file based on the '--output' option.
"""
if self.outputName == '-':
return sys.stdout
else:
return open(self.outputName, 'wb')
# Need to patch this in if not present in actual server code
def NotIn(self, subselect):
# Can't be Select.__contains__ because __contains__ gets __nonzero__
# called on its result by the 'in' syntax.
return CompoundComparison(self, 'not in', subselect)
if not hasattr(ExpressionSyntax, "NotIn"):
ExpressionSyntax.NotIn = NotIn
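The comment above notes why `NotIn` cannot be expressed through `Select.__contains__`: the `in` operator coerces whatever `__contains__` returns to a boolean, so a rich comparison object would be lost. A minimal standalone demonstration of that coercion (the `Subselect` class here is an illustrative stand-in, not real DAL code):

```python
class Subselect(object):
    # A stand-in for a SQL expression object whose __contains__
    # returns a rich comparison object rather than a boolean.
    def __contains__(self, item):
        return "NOT IN (...)"  # truthy, but not a bool

# The 'in' syntax coerces the __contains__ result to bool, so the
# comparison object itself is discarded -- hence the explicit
# NotIn() method patched in above.
print(1 in Subselect())            # True, not "NOT IN (...)"
print(Subselect().__contains__(1))  # the raw comparison survives a direct call
```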
class ObliterateService(WorkerService, object):
"""
Service which runs, does its stuff, then stops the reactor.
"""
def __init__(self, store, options, output, reactor, config):
super(ObliterateService, self).__init__(store)
self.options = options
self.output = output
self.reactor = reactor
self.config = config
self.results = {}
self.summary = []
self.totalHomes = 0
self.totalCalendars = 0
self.totalResources = 0
self.attachments = set()
@inlineCallbacks
def doWork(self):
"""
Do the work, stopping the reactor when done.
"""
self.output.write("\n---- Obliterate version: %s ----\n" % (VERSION,))
if self.options["dry-run"]:
self.output.write("---- DRY RUN No Changes Being Made ----\n")
try:
if self.options["fix-props"]:
yield self.obliterateOrphanedProperties()
else:
yield self.obliterateUUIDs()
self.output.close()
except ConfigError:
pass
except:
log.failure("doWork()")
@inlineCallbacks
def obliterateOrphanedProperties(self):
"""
Obliterate orphaned data in RESOURCE_PROPERTIES table.
"""
# Get list of distinct resource_property resource_ids to delete
self.txn = self.store.newTransaction()
ch = schema.CALENDAR_HOME
ca = schema.CALENDAR
co = schema.CALENDAR_OBJECT
ah = schema.ADDRESSBOOK_HOME
ao = schema.ADDRESSBOOK_OBJECT
rp = schema.RESOURCE_PROPERTY
rows = (yield Select(
[rp.RESOURCE_ID, ],
Distinct=True,
From=rp,
Where=(rp.RESOURCE_ID.NotIn(
Select(
[ch.RESOURCE_ID],
From=ch,
SetExpression=Union(
Select(
[ca.RESOURCE_ID],
From=ca,
SetExpression=Union(
Select(
[co.RESOURCE_ID],
From=co,
SetExpression=Union(
Select(
[ah.RESOURCE_ID],
From=ah,
SetExpression=Union(
Select(
[ao.RESOURCE_ID],
From=ao,
),
),
),
),
),
),
),
),
),
))
).on(self.txn))
if not rows:
self.output.write("No orphaned resource properties\n")
returnValue(None)
resourceIDs = [row[0] for row in rows]
resourceIDs_len = len(resourceIDs)
t = time.time()
for ctr, resourceID in enumerate(resourceIDs):
self.output.write("%d of %d (%d%%): ResourceID: %s\n" % (
ctr + 1,
resourceIDs_len,
((ctr + 1) * 100 / resourceIDs_len),
resourceID,
))
yield self.removePropertiesForResourceID(resourceID)
# Commit every 10 DELETEs
if (ctr + 1) % 10 == 0:
yield self.txn.commit()
self.txn = self.store.newTransaction()
yield self.txn.commit()
self.txn = None
self.output.write("Obliteration time: %.1fs\n" % (time.time() - t,))
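The loop above commits the transaction after every tenth DELETE and once more at the end, which bounds transaction size while amortizing commit overhead. A minimal sketch of that batching rhythm, with hypothetical `handle` and `commit` callbacks standing in for the per-resource delete and `txn.commit()`:

```python
def process_in_batches(items, handle, commit, batch_size=10):
    # Apply handle() to each item, calling commit() after every
    # batch_size items and once more at the end for any remainder.
    for ctr, item in enumerate(items):
        handle(item)
        if (ctr + 1) % batch_size == 0:
            commit()
    commit()

commits = []
process_in_batches(range(25), lambda i: None, lambda: commits.append(1), 10)
print(len(commits))  # 3: after items 10 and 20, plus the final partial batch
```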
@inlineCallbacks
def obliterateUUIDs(self):
"""
Obliterate specified UUIDs.
"""
if self.options["uuid"]:
uuids = [self.options["uuid"], ]
elif self.options["data"]:
if not os.path.exists(self.options["data"]):
self.output.write("%s is not a valid file\n" % (self.options["data"],))
raise ConfigError
uuids = open(self.options["data"]).read().split()
else:
self.output.write("One of --data or --uuid must be specified\n")
raise ConfigError
t = time.time()
uuids_len = len(uuids)
for ctr, uuid in enumerate(uuids):
self.txn = self.store.newTransaction()
self.output.write("%d of %d (%d%%): UUID: %s - " % (
ctr + 1,
uuids_len,
((ctr + 1) * 100 / uuids_len),
uuid,
))
result = (yield self.processUUID(uuid))
self.output.write("%s\n" % (result,))
yield self.txn.commit()
self.txn = None
self.output.write("\nTotal Homes: %d\n" % (self.totalHomes,))
self.output.write("Total Calendars: %d\n" % (self.totalCalendars,))
self.output.write("Total Resources: %d\n" % (self.totalResources,))
if self.attachments:
self.output.write("Attachments removed: %d\n" % (len(self.attachments),))
# for attachment in self.attachments:
# self.output.write(" %s\n" % (attachment,))
self.output.write("Obliteration time: %.1fs\n" % (time.time() - t,))
@inlineCallbacks
def processUUID(self, uuid):
# Get the resource-id for the home
ch = schema.CALENDAR_HOME
kwds = {"UUID" : uuid}
rows = (yield Select(
[ch.RESOURCE_ID, ],
From=ch,
Where=(
ch.OWNER_UID == Parameter("UUID")
),
).on(self.txn, **kwds))
if not rows:
returnValue("No home found")
homeID = rows[0][0]
self.totalHomes += 1
# Count resources
resourceCount = (yield self.countResources(uuid))
self.totalResources += resourceCount
# Remove revisions - do before deleting calendars to remove
# foreign key constraint
yield self.removeRevisionsForHomeResourceID(homeID)
# Look at each calendar and unbind/delete-if-owned
count = (yield self.deleteCalendars(homeID))
self.totalCalendars += count
# Remove properties
yield self.removePropertiesForResourceID(homeID)
# Remove notifications
yield self.removeNotificationsForUUID(uuid)
# Remove attachments
attachmentCount = (yield self.removeAttachments(homeID))
# Now remove the home
yield self.removeHomeForResourceID(homeID)
returnValue("Home, %d calendars, %d resources%s - deleted" % (
count,
resourceCount,
(", %d attachments" % (attachmentCount,)) if attachmentCount else "",
))
@inlineCallbacks
def countResources(self, uuid):
ch = schema.CALENDAR_HOME
cb = schema.CALENDAR_BIND
co = schema.CALENDAR_OBJECT
kwds = {"UUID" : uuid}
rows = (yield Select(
[
Count(co.RESOURCE_ID),
],
From=ch.join(
cb, type="inner", on=(ch.RESOURCE_ID == cb.CALENDAR_HOME_RESOURCE_ID).And(
cb.BIND_MODE == _BIND_MODE_OWN)).join(
co, type="left", on=(cb.CALENDAR_RESOURCE_ID == co.CALENDAR_RESOURCE_ID)),
Where=(
ch.OWNER_UID == Parameter("UUID")
),
).on(self.txn, **kwds))
returnValue(rows[0][0] if rows else 0)
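The DAL `Select` above joins the home to its owned calendar binds and left-joins the calendar objects so that empty calendars do not inflate the count. An approximate rendering of the SQL it builds, runnable against a simplified in-memory schema (table shapes are pared down here, and `_BIND_MODE_OWN` is assumed to be 0 for illustration):

```python
import sqlite3

# Simplified stand-ins for CALENDAR_HOME / CALENDAR_BIND / CALENDAR_OBJECT.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE calendar_home (resource_id INTEGER PRIMARY KEY, owner_uid TEXT);
CREATE TABLE calendar_bind (calendar_home_resource_id INTEGER,
                            calendar_resource_id INTEGER, bind_mode INTEGER);
CREATE TABLE calendar_object (resource_id INTEGER PRIMARY KEY,
                              calendar_resource_id INTEGER);
INSERT INTO calendar_home VALUES (1, 'ABC');
INSERT INTO calendar_bind VALUES (1, 10, 0), (1, 11, 0);
INSERT INTO calendar_object VALUES (100, 10), (101, 10), (102, 11);
""")
cur.execute("""
    SELECT COUNT(co.resource_id)
      FROM calendar_home ch
      JOIN calendar_bind cb
        ON ch.resource_id = cb.calendar_home_resource_id AND cb.bind_mode = 0
      LEFT JOIN calendar_object co
        ON cb.calendar_resource_id = co.calendar_resource_id
     WHERE ch.owner_uid = ?
""", ("ABC",))
count = cur.fetchone()[0]
print(count)  # 3 resources across the two owned calendars
```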
@inlineCallbacks
def deleteCalendars(self, homeID):
# Get list of binds and bind mode
cb = schema.CALENDAR_BIND
kwds = {"resourceID" : homeID}
rows = (yield Select(
[cb.CALENDAR_RESOURCE_ID, cb.BIND_MODE, ],
From=cb,
Where=(
cb.CALENDAR_HOME_RESOURCE_ID == Parameter("resourceID")
),
).on(self.txn, **kwds))
if not rows:
returnValue(0)
for resourceID, mode in rows:
if mode == _BIND_MODE_OWN:
yield self.deleteCalendar(resourceID)
else:
yield self.deleteBind(homeID, resourceID)
returnValue(len(rows))
@inlineCallbacks
def deleteCalendar(self, resourceID):
# Need to delete any remaining CALENDAR_OBJECT_REVISIONS entries
yield self.removeRevisionsForCalendarResourceID(resourceID)
# Delete the CALENDAR entry (will cascade to CALENDAR_BIND and CALENDAR_OBJECT)
if not self.options["dry-run"]:
ca = schema.CALENDAR
kwds = {
"ResourceID" : resourceID,
}
yield Delete(
From=ca,
Where=(
ca.RESOURCE_ID == Parameter("ResourceID")
),
).on(self.txn, **kwds)
# Remove properties
yield self.removePropertiesForResourceID(resourceID)
@inlineCallbacks
def deleteBind(self, homeID, resourceID):
if not self.options["dry-run"]:
cb = schema.CALENDAR_BIND
kwds = {
"HomeID" : homeID,
"ResourceID" : resourceID,
}
yield Delete(
From=cb,
Where=(
(cb.CALENDAR_HOME_RESOURCE_ID == Parameter("HomeID")).And
(cb.CALENDAR_RESOURCE_ID == Parameter("ResourceID"))
),
).on(self.txn, **kwds)
@inlineCallbacks
def removeRevisionsForHomeResourceID(self, resourceID):
if not self.options["dry-run"]:
rev = schema.CALENDAR_OBJECT_REVISIONS
kwds = {"ResourceID" : resourceID}
yield Delete(
From=rev,
Where=(
rev.CALENDAR_HOME_RESOURCE_ID == Parameter("ResourceID")
),
).on(self.txn, **kwds)
@inlineCallbacks
def removeRevisionsForCalendarResourceID(self, resourceID):
if not self.options["dry-run"]:
rev = schema.CALENDAR_OBJECT_REVISIONS
kwds = {"ResourceID" : resourceID}
yield Delete(
From=rev,
Where=(
rev.CALENDAR_RESOURCE_ID == Parameter("ResourceID")
),
).on(self.txn, **kwds)
@inlineCallbacks
def removePropertiesForResourceID(self, resourceID):
if not self.options["dry-run"]:
props = schema.RESOURCE_PROPERTY
kwds = {"ResourceID" : resourceID}
yield Delete(
From=props,
Where=(
props.RESOURCE_ID == Parameter("ResourceID")
),
).on(self.txn, **kwds)
@inlineCallbacks
def removeNotificationsForUUID(self, uuid):
# Get NOTIFICATION_HOME.RESOURCE_ID
nh = schema.NOTIFICATION_HOME
kwds = {"UUID" : uuid}
rows = (yield Select(
[nh.RESOURCE_ID, ],
From=nh,
Where=(
nh.OWNER_UID == Parameter("UUID")
),
).on(self.txn, **kwds))
if rows:
resourceID = rows[0][0]
# Delete NOTIFICATION rows
if not self.options["dry-run"]:
no = schema.NOTIFICATION
kwds = {"ResourceID" : resourceID}
yield Delete(
From=no,
Where=(
no.NOTIFICATION_HOME_RESOURCE_ID == Parameter("ResourceID")
),
).on(self.txn, **kwds)
# Delete NOTIFICATION_HOME (will cascade to NOTIFICATION_OBJECT_REVISIONS)
if not self.options["dry-run"]:
kwds = {"UUID" : uuid}
yield Delete(
From=nh,
Where=(
nh.OWNER_UID == Parameter("UUID")
),
).on(self.txn, **kwds)
@inlineCallbacks
def removeAttachments(self, resourceID):
# Get ATTACHMENT paths
at = schema.ATTACHMENT
kwds = {"resourceID" : resourceID}
rows = (yield Select(
[at.PATH, ],
From=at,
Where=(
at.CALENDAR_HOME_RESOURCE_ID == Parameter("resourceID")
),
).on(self.txn, **kwds))
if rows:
self.attachments.update([row[0] for row in rows])
# Delete ATTACHMENT rows
if not self.options["dry-run"]:
at = schema.ATTACHMENT
kwds = {"resourceID" : resourceID}
yield Delete(
From=at,
Where=(
at.CALENDAR_HOME_RESOURCE_ID == Parameter("resourceID")
),
).on(self.txn, **kwds)
returnValue(len(rows) if rows else 0)
@inlineCallbacks
def removeHomeForResourceID(self, resourceID):
if not self.options["dry-run"]:
ch = schema.CALENDAR_HOME
kwds = {"ResourceID" : resourceID}
yield Delete(
From=ch,
Where=(
ch.RESOURCE_ID == Parameter("ResourceID")
),
).on(self.txn, **kwds)
def stopService(self):
"""
Stop the service. Nothing to do; everything should be finished by this
time.
"""
def main(argv=sys.argv, stderr=sys.stderr, reactor=None):
"""
Do the obliteration.
"""
if reactor is None:
from twisted.internet import reactor
options = ObliterateOptions()
options.parseOptions(argv[1:])
try:
output = options.openOutput()
except IOError, e:
stderr.write("Unable to open output file for writing: %s\n" % (e,))
sys.exit(1)
def makeService(store):
from twistedcaldav.config import config
config.TransactionTimeoutSeconds = 0
return ObliterateService(store, options, output, reactor, config)
utilityMain(options['config'], makeService, reactor, verbose=options["debug"])
if __name__ == '__main__':
main()
calendarserver-7.0+dfsg/calendarserver/tools/pod_migration.py
#!/usr/bin/env python
# -*- test-case-name: calendarserver.tools.test.test_calverify -*-
##
# Copyright (c) 2015 Apple Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
##
from __future__ import print_function
"""
This tool manages an overall pod migration. Migration is done in a series of steps,
with the system admin triggering each step individually by running this tool.
"""
import os
import sys
from twisted.internet.defer import inlineCallbacks
from twisted.python.text import wordWrap
from twisted.python.usage import Options, UsageError
from twistedcaldav.stdconfig import DEFAULT_CONFIG_FILE
from twistedcaldav.timezones import TimezoneCache
from txdav.common.datastore.podding.migration.home_sync import CrossPodHomeSync
from twext.python.log import Logger
from twext.who.idirectory import RecordType
from calendarserver.tools.cmdline import utilityMain, WorkerService
log = Logger()
VERSION = "1"
def usage(e=None):
if e:
print(e)
print("")
try:
PodMigrationOptions().opt_help()
except SystemExit:
pass
if e:
sys.exit(64)
else:
sys.exit(0)
description = ''.join(
wordWrap(
"""
Usage: calendarserver_pod_migration [options] [input specifiers]
""",
int(os.environ.get('COLUMNS', '80'))
)
)
description += "\nVersion: %s" % (VERSION,)
class ConfigError(Exception):
pass
class PodMigrationOptions(Options):
"""
Command-line options for 'calendarserver_pod_migration'
"""
synopsis = description
optFlags = [
['verbose', 'v', "Verbose logging."],
['debug', 'D', "Debug logging."],
['step1', '1', "Run step 1 of the migration (initial sync)"],
['step2', '2', "Run step 2 of the migration (incremental sync)"],
['step3', '3', "Run step 3 of the migration (prepare for final sync)"],
['step4', '4', "Run step 4 of the migration (final incremental sync)"],
['step5', '5', "Run step 5 of the migration (final reconcile sync)"],
['step6', '6', "Run step 6 of the migration (enable new home)"],
['step7', '7', "Run step 7 of the migration (remove old home)"],
]
optParameters = [
['config', 'f', DEFAULT_CONFIG_FILE, "Specify caldavd.plist configuration path."],
['uid', 'u', "", "Directory record uid of user to migrate [REQUIRED]"],
]
longdesc = "Only one step option is allowed."
def __init__(self):
super(PodMigrationOptions, self).__init__()
self.outputName = '-'
def opt_output(self, filename):
"""
Specify output file path (default: '-', meaning stdout).
"""
self.outputName = filename
opt_o = opt_output
def openOutput(self):
"""
Open the appropriate output file based on the '--output' option.
"""
if self.outputName == '-':
return sys.stdout
else:
return open(self.outputName, 'wb')
def postOptions(self):
runstep = None
for step in range(7):
if self["step{}".format(step + 1)]:
if runstep is None:
runstep = step
self["runstep"] = step + 1
else:
raise UsageError("Only one step option allowed")
else:
if runstep is None:
raise UsageError("One step option must be present")
if not self["uid"]:
raise UsageError("A uid is required")
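`postOptions` above enforces that exactly one of the seven `--stepN` flags is set. The same exclusive-flag validation can be sketched as a plain function over a flag dictionary (hypothetical helper, mirroring the loop's two `UsageError` cases with `ValueError`):

```python
def pick_single_step(flags):
    # Given {"step1": bool, ..., "step7": bool}, return the single
    # selected step number, enforcing exactly one selection.
    chosen = [n for n in range(1, 8) if flags.get("step%d" % n)]
    if len(chosen) > 1:
        raise ValueError("Only one step option allowed")
    if not chosen:
        raise ValueError("One step option must be present")
    return chosen[0]

print(pick_single_step({"step3": True}))  # 3
```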
class PodMigrationService(WorkerService, object):
"""
Service which runs, does its stuff, then stops the reactor.
"""
def __init__(self, store, options, output, reactor, config):
super(PodMigrationService, self).__init__(store)
self.options = options
self.output = output
self.reactor = reactor
self.config = config
TimezoneCache.create()
@inlineCallbacks
def doWork(self):
"""
Do the work, stopping the reactor when done.
"""
self.output.write("\n---- Pod Migration version: %s ----\n" % (VERSION,))
# Map short name to uid
record = yield self.store.directoryService().recordWithUID(self.options["uid"])
if record is None:
record = yield self.store.directoryService().recordWithShortName(RecordType.user, self.options["uid"])
if record is not None:
self.options["uid"] = record.uid
try:
yield getattr(self, "step{}".format(self.options["runstep"]))()
self.output.close()
except ConfigError:
pass
except:
log.failure("doWork()")
@inlineCallbacks
def step1(self):
syncer = CrossPodHomeSync(
self.store,
self.options["uid"],
uselog=self.output if self.options["verbose"] else None
)
syncer.accounting("Pod Migration Step 1\n")
yield syncer.sync()
@inlineCallbacks
def step2(self):
syncer = CrossPodHomeSync(
self.store,
self.options["uid"],
uselog=self.output if self.options["verbose"] else None
)
syncer.accounting("Pod Migration Step 2\n")
yield syncer.sync()
@inlineCallbacks
def step3(self):
syncer = CrossPodHomeSync(
self.store,
self.options["uid"],
uselog=self.output if self.options["verbose"] else None
)
syncer.accounting("Pod Migration Step 3\n")
yield syncer.disableRemoteHome()
@inlineCallbacks
def step4(self):
syncer = CrossPodHomeSync(
self.store,
self.options["uid"],
final=True,
uselog=self.output if self.options["verbose"] else None
)
syncer.accounting("Pod Migration Step 4\n")
yield syncer.sync()
@inlineCallbacks
def step5(self):
syncer = CrossPodHomeSync(
self.store,
self.options["uid"],
final=True,
uselog=self.output if self.options["verbose"] else None
)
syncer.accounting("Pod Migration Step 5\n")
yield syncer.finalSync()
@inlineCallbacks
def step6(self):
syncer = CrossPodHomeSync(
self.store,
self.options["uid"],
uselog=self.output if self.options["verbose"] else None
)
syncer.accounting("Pod Migration Step 6\n")
yield syncer.enableLocalHome()
@inlineCallbacks
def step7(self):
syncer = CrossPodHomeSync(
self.store,
self.options["uid"],
final=True,
uselog=self.output if self.options["verbose"] else None
)
syncer.accounting("Pod Migration Step 7\n")
yield syncer.removeRemoteHome()
def main(argv=sys.argv, stderr=sys.stderr, reactor=None):
"""
Do the selected pod migration step.
"""
if reactor is None:
from twisted.internet import reactor
options = PodMigrationOptions()
try:
options.parseOptions(argv[1:])
except UsageError as e:
stderr.write("Invalid options specified: %s\n" % (e,))
options.opt_help()
try:
output = options.openOutput()
except IOError, e:
stderr.write("Unable to open output file for writing: %s\n" % (e,))
sys.exit(1)
def makeService(store):
from twistedcaldav.config import config
config.TransactionTimeoutSeconds = 0
return PodMigrationService(store, options, output, reactor, config)
utilityMain(options['config'], makeService, reactor, verbose=options["debug"])
if __name__ == '__main__':
main()
calendarserver-7.0+dfsg/calendarserver/tools/principals.py
#!/usr/bin/env python
##
# Copyright (c) 2006-2015 Apple Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
##
from __future__ import print_function
# Suppress warning that occurs on Linux
import sys
if sys.platform.startswith("linux"):
from Crypto.pct_warnings import PowmInsecureWarning
import warnings
warnings.simplefilter("ignore", PowmInsecureWarning)
from getopt import getopt, GetoptError
import operator
import os
import uuid
from calendarserver.tools.cmdline import utilityMain, WorkerService
from calendarserver.tools.util import (
recordForPrincipalID, prettyRecord
)
from twext.who.directory import DirectoryRecord
from twext.who.idirectory import RecordType, InvalidDirectoryRecordError
from twisted.internet import reactor
from twisted.internet.defer import inlineCallbacks, returnValue, succeed
from twistedcaldav.config import config
from txdav.who.delegates import Delegates, RecordType as DelegateRecordType
from txdav.who.idirectory import AutoScheduleMode
from txdav.who.groups import GroupCacherPollingWork
allowedAutoScheduleModes = {
"default": None,
"none": AutoScheduleMode.none,
"accept-always": AutoScheduleMode.accept,
"decline-always": AutoScheduleMode.decline,
"accept-if-free": AutoScheduleMode.acceptIfFree,
"decline-if-busy": AutoScheduleMode.declineIfBusy,
"automatic": AutoScheduleMode.acceptIfFreeDeclineIfBusy,
}
def usage(e=None):
if e:
print(e)
print("")
name = os.path.basename(sys.argv[0])
print("usage: %s [options] action_flags principal [principal ...]" % (name,))
print(" %s [options] --list-principal-types" % (name,))
print(" %s [options] --list-principals type" % (name,))
print("")
print(" Performs the given actions against the given principals.")
print("")
print(" Principals are identified by one of the following:")
print(" Type and shortname (e.g.: users:wsanchez)")
# print(" A principal path (e.g.: /principals/users/wsanchez/)")
print(" A GUID (e.g.: E415DBA7-40B5-49F5-A7CC-ACC81E4DEC79)")
print("")
print("options:")
print(" -h --help: print this help and exit")
print(" -f --config : Specify caldavd.plist configuration path")
print(" -v --verbose: print debugging information")
print("")
print("actions:")
print(" --context : {user|group|location|resource|attendee}; must be used in conjunction with --search")
print(" --search : search using one or more tokens")
print(" --list-principal-types: list all of the known principal types")
print(" --list-principals type: list all principals of the given type")
print(" --list-read-proxies: list proxies with read-only access")
print(" --list-write-proxies: list proxies with read-write access")
print(" --list-proxies: list all proxies")
print(" --list-proxy-for: principals this principal is a proxy for")
print(" --add-read-proxy=principal: add a read-only proxy")
print(" --add-write-proxy=principal: add a read-write proxy")
print(" --remove-proxy=principal: remove a proxy")
print(" --set-auto-schedule-mode={default|none|accept-always|decline-always|accept-if-free|decline-if-busy|automatic}: set auto-schedule mode")
print(" --get-auto-schedule-mode: read auto-schedule mode")
print(" --set-auto-accept-group=principal: set auto-accept-group")
print(" --get-auto-accept-group: read auto-accept-group")
print(" --add {locations|resources|addresses} full-name record-name UID: add a principal")
print(" --remove: remove a principal")
print(" --set-geo=url: set the geo: url for an address (e.g. geo:37.331741,-122.030333)")
print(" --get-geo: get the geo: url for an address")
print(" --set-street-address=streetaddress: set the street address string for an address")
print(" --get-street-address: get the street address string for an address")
print(" --set-address=guid: associate principal with an address (by guid)")
print(" --get-address: get the associated address's guid")
print(" --refresh-groups: schedule a group membership refresh")
print(" --print-group-info : prints group delegation and membership")
if e:
sys.exit(64)
else:
sys.exit(0)
class PrincipalService(WorkerService):
"""
Executes principals-related functions in a context which has access to the store
"""
function = None
params = []
@inlineCallbacks
def doWork(self):
"""
Calls the function that's been assigned to "function" and passes the root
resource, directory, store, and whatever has been assigned to "params".
"""
if self.function is not None:
yield self.function(self.store, *self.params)
def main():
try:
(optargs, args) = getopt(
sys.argv[1:], "a:hf:P:v", [
"help",
"config=",
"add=",
"remove",
"context=",
"search",
"list-principal-types",
"list-principals=",
# Proxies
"list-read-proxies",
"list-write-proxies",
"list-proxies",
"list-proxy-for",
"add-read-proxy=",
"add-write-proxy=",
"remove-proxy=",
# Groups
"list-group-members",
"add-group-member=",
"remove-group-member=",
"print-group-info",
"refresh-groups",
# Scheduling
"set-auto-schedule-mode=",
"get-auto-schedule-mode",
"set-auto-accept-group=",
"get-auto-accept-group",
# Principal details
"set-geo=",
"get-geo",
"set-address=",
"get-address",
"set-street-address=",
"get-street-address",
"verbose",
],
)
except GetoptError, e:
usage(e)
#
# Get configuration
#
configFileName = None
addType = None
listPrincipalTypes = False
listPrincipals = None
searchContext = None
searchTokens = None
printGroupInfo = False
scheduleGroupRefresh = False
principalActions = []
verbose = False
for opt, arg in optargs:
# Args come in as encoded bytes
arg = arg.decode("utf-8")
if opt in ("-h", "--help"):
usage()
elif opt in ("-v", "--verbose"):
verbose = True
elif opt in ("-f", "--config"):
configFileName = arg
elif opt in ("-a", "--add"):
addType = arg
elif opt in ("-r", "--remove"):
principalActions.append((action_removePrincipal,))
elif opt in ("", "--list-principal-types"):
listPrincipalTypes = True
elif opt in ("", "--list-principals"):
listPrincipals = arg
elif opt in ("", "--context"):
searchContext = arg
elif opt in ("", "--search"):
searchTokens = args
elif opt in ("", "--list-read-proxies"):
principalActions.append((action_listProxies, "read"))
elif opt in ("", "--list-write-proxies"):
principalActions.append((action_listProxies, "write"))
elif opt in ("-L", "--list-proxies"):
principalActions.append((action_listProxies, "read", "write"))
elif opt in ("", "--list-proxy-for"):
principalActions.append((action_listProxyFor, "read", "write"))
elif opt in ("--add-read-proxy", "--add-write-proxy"):
if "read" in opt:
proxyType = "read"
elif "write" in opt:
proxyType = "write"
else:
raise AssertionError("Unknown proxy type")
principalActions.append((action_addProxy, proxyType, arg))
elif opt in ("", "--remove-proxy"):
principalActions.append((action_removeProxy, arg))
elif opt in ("", "--list-group-members"):
principalActions.append((action_listGroupMembers,))
elif opt in ("", "--add-group-member"):
principalActions.append((action_addGroupMember, arg))
elif opt in ("", "--remove-group-member"):
principalActions.append((action_removeGroupMember, arg))
elif opt in ("", "--print-group-info"):
printGroupInfo = True
elif opt in ("", "--refresh-groups"):
scheduleGroupRefresh = True
elif opt in ("", "--set-auto-schedule-mode"):
try:
if arg not in allowedAutoScheduleModes:
raise ValueError("Unknown auto-schedule mode: {mode}".format(
mode=arg))
autoScheduleMode = allowedAutoScheduleModes[arg]
except ValueError, e:
abort(e)
principalActions.append((action_setAutoScheduleMode, autoScheduleMode))
elif opt in ("", "--get-auto-schedule-mode"):
principalActions.append((action_getAutoScheduleMode,))
elif opt in ("", "--set-auto-accept-group"):
principalActions.append((action_setAutoAcceptGroup, arg))
elif opt in ("", "--get-auto-accept-group"):
principalActions.append((action_getAutoAcceptGroup,))
elif opt in ("", "--set-geo"):
principalActions.append((action_setValue, u"geographicLocation", arg))
elif opt in ("", "--get-geo"):
principalActions.append((action_getValue, u"geographicLocation"))
elif opt in ("", "--set-street-address"):
principalActions.append((action_setValue, u"streetAddress", arg))
elif opt in ("", "--get-street-address"):
principalActions.append((action_getValue, u"streetAddress"))
elif opt in ("", "--set-address"):
principalActions.append((action_setValue, u"associatedAddress", arg))
elif opt in ("", "--get-address"):
principalActions.append((action_getValue, u"associatedAddress"))
else:
raise NotImplementedError(opt)
#
# List principals
#
if listPrincipalTypes:
if args:
usage("Too many arguments")
function = runListPrincipalTypes
params = ()
elif printGroupInfo:
function = printGroupCacherInfo
params = (args,)
elif scheduleGroupRefresh:
function = scheduleGroupRefreshJob
params = ()
elif addType:
try:
addType = matchStrings(
addType,
[
"locations", "resources", "addresses", "users", "groups"
]
)
except ValueError, e:
print(e)
return
try:
fullName, shortName, uid = parseCreationArgs(args)
except ValueError, e:
print(e)
return
if fullName is not None:
fullNames = [fullName]
else:
fullNames = ()
if shortName is not None:
shortNames = [shortName]
else:
shortNames = ()
function = runAddPrincipal
params = (addType, uid, shortNames, fullNames)
elif listPrincipals:
try:
listPrincipals = matchStrings(
listPrincipals,
["users", "groups", "locations", "resources", "addresses"]
)
except ValueError, e:
print(e)
return
if args:
usage("Too many arguments")
function = runListPrincipals
params = (listPrincipals,)
elif searchTokens:
function = runSearch
searchTokens = [t.decode("utf-8") for t in searchTokens]
params = (searchTokens, searchContext)
else:
if not args:
usage("No principals specified.")
unicodeArgs = [a.decode("utf-8") for a in args]
function = runPrincipalActions
params = (unicodeArgs, principalActions)
PrincipalService.function = function
PrincipalService.params = params
utilityMain(configFileName, PrincipalService, verbose=verbose)
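# A quick standalone illustration (not part of the tool itself) of why the
# option tests above use tuples such as ("", "--remove-proxy"): a
# parenthesized single string is not a tuple, so `opt in ("--flag")` silently
# becomes a substring check on a str; the trailing comma restores real tuple
# membership.

```python
opt = "group"
# Without a trailing comma this is `str in str`, i.e. a substring check
print(opt in ("--add-group-member"))   # True
# With the trailing comma it is membership in a one-element tuple
print(opt in ("--add-group-member",))  # False
```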
def runListPrincipalTypes(service, store):
directory = store.directoryService()
for recordType in directory.recordTypes():
print(directory.recordTypeToOldName(recordType))
return succeed(None)
@inlineCallbacks
def runListPrincipals(service, store, listPrincipals):
directory = store.directoryService()
recordType = directory.oldNameToRecordType(listPrincipals)
try:
records = list((yield directory.recordsWithRecordType(recordType)))
if records:
printRecordList(records)
else:
print("No records of type %s" % (listPrincipals,))
except InvalidDirectoryRecordError, e:
usage(e)
returnValue(None)
@inlineCallbacks
def runPrincipalActions(service, store, principalIDs, actions):
directory = store.directoryService()
for principalID in principalIDs:
# Resolve the given principal IDs to records
try:
record = yield recordForPrincipalID(directory, principalID)
except ValueError:
record = None
if record is None:
sys.stderr.write("Invalid principal ID: %s\n" % (principalID,))
continue
        # Perform the requested actions
for action in actions:
(yield action[0](store, record, *action[1:]))
print("")
@inlineCallbacks
def runSearch(service, store, tokens, context=None):
directory = store.directoryService()
records = list(
(
yield directory.recordsMatchingTokens(
tokens, context=context
)
)
)
if records:
records.sort(key=operator.attrgetter('fullNames'))
print("{n} matches found:".format(n=len(records)))
for record in records:
print(
"\n{d} ({rt})".format(
d=record.displayName,
rt=record.recordType.name
)
)
print(" UID: {u}".format(u=record.uid,))
try:
print(
" Record name{plural}: {names}".format(
plural=("s" if len(record.shortNames) > 1 else ""),
names=(", ".join(record.shortNames))
)
)
except AttributeError:
pass
try:
if record.emailAddresses:
print(
" Email{plural}: {emails}".format(
plural=("s" if len(record.emailAddresses) > 1 else ""),
emails=(", ".join(record.emailAddresses))
)
)
except AttributeError:
pass
else:
print("No matches found")
print("")
@inlineCallbacks
def runAddPrincipal(service, store, addType, uid, shortNames, fullNames):
directory = store.directoryService()
recordType = directory.oldNameToRecordType(addType)
# See if that UID is in use
record = yield directory.recordWithUID(uid)
if record is not None:
print("UID already in use: {uid}".format(uid=uid))
returnValue(None)
# See if the shortnames are in use
for shortName in shortNames:
record = yield directory.recordWithShortName(recordType, shortName)
if record is not None:
print("Record name already in use: {name}".format(name=shortName))
returnValue(None)
fields = {
directory.fieldName.recordType: recordType,
directory.fieldName.uid: uid,
directory.fieldName.shortNames: shortNames,
directory.fieldName.fullNames: fullNames,
directory.fieldName.hasCalendars: True,
directory.fieldName.hasContacts: True,
}
record = DirectoryRecord(directory, fields)
yield record.service.updateRecords([record], create=True)
print("Added '{name}'".format(name=fullNames[0]))
@inlineCallbacks
def action_removePrincipal(store, record):
directory = store.directoryService()
fullName = record.displayName
shortNames = ",".join(record.shortNames)
yield directory.removeRecords([record.uid])
print(
"Removed '{full}' {shorts} {uid}".format(
full=fullName, shorts=shortNames, uid=record.uid
)
)
@inlineCallbacks
def action_listProxies(store, record, *proxyTypes):
directory = store.directoryService()
for proxyType in proxyTypes:
groupRecordType = {
"read": directory.recordType.readDelegateGroup,
"write": directory.recordType.writeDelegateGroup,
}.get(proxyType)
pseudoGroup = yield directory.recordWithShortName(
groupRecordType,
record.uid
)
proxies = yield pseudoGroup.members()
if proxies:
print("%s proxies for %s:" % (
{"read": "Read-only", "write": "Read/write"}[proxyType],
prettyRecord(record)
))
printRecordList(proxies)
print("")
else:
print("No %s proxies for %s" % (proxyType, prettyRecord(record)))
@inlineCallbacks
def action_listProxyFor(store, record, *proxyTypes):
directory = store.directoryService()
if record.recordType != directory.recordType.user:
print("You must pass a user principal to this command")
returnValue(None)
for proxyType in proxyTypes:
groupRecordType = {
"read": directory.recordType.readDelegatorGroup,
"write": directory.recordType.writeDelegatorGroup,
}.get(proxyType)
pseudoGroup = yield directory.recordWithShortName(
groupRecordType,
record.uid
)
proxies = yield pseudoGroup.members()
if proxies:
print("%s is a %s proxy for:" % (
prettyRecord(record),
{"read": "Read-only", "write": "Read/write"}[proxyType]
))
printRecordList(proxies)
print("")
else:
print(
"{r} is not a {t} proxy for anyone".format(
r=prettyRecord(record),
t={"read": "Read-only", "write": "Read/write"}[proxyType]
)
)
@inlineCallbacks
def _addRemoveProxy(msg, fn, store, record, proxyType, *proxyIDs):
directory = store.directoryService()
readWrite = (proxyType == "write")
for proxyID in proxyIDs:
proxyRecord = yield recordForPrincipalID(directory, proxyID)
if proxyRecord is None:
print("Invalid principal ID: %s" % (proxyID,))
else:
txn = store.newTransaction()
yield fn(txn, record, proxyRecord, readWrite)
yield txn.commit()
print(
"{msg} {proxy} as a {proxyType} proxy for {record}".format(
msg=msg, proxy=prettyRecord(proxyRecord),
proxyType=proxyType, record=prettyRecord(record)
)
)
@inlineCallbacks
def action_addProxy(store, record, proxyType, *proxyIDs):
if config.GroupCaching.Enabled and config.GroupCaching.UseDirectoryBasedDelegates:
if record.recordType in (
record.service.recordType.location,
record.service.recordType.resource,
):
            print("You are not allowed to add proxies for locations or resources via the command line when their proxy assignments come from the directory service.")
returnValue(None)
yield _addRemoveProxy("Added", Delegates.addDelegate, store, record, proxyType, *proxyIDs)
@inlineCallbacks
def action_removeProxy(store, record, *proxyIDs):
if config.GroupCaching.Enabled and config.GroupCaching.UseDirectoryBasedDelegates:
if record.recordType in (
record.service.recordType.location,
record.service.recordType.resource,
):
            print("You are not allowed to remove proxies for locations or resources via the command line when their proxy assignments come from the directory service.")
returnValue(None)
# Write
yield _addRemoveProxy("Removed", Delegates.removeDelegate, store, record, "write", *proxyIDs)
# Read
yield _addRemoveProxy("Removed", Delegates.removeDelegate, store, record, "read", *proxyIDs)
@inlineCallbacks
def setProxies(record, readProxyRecords, writeProxyRecords):
"""
Set read/write proxies en masse for a record
@param record: L{IDirectoryRecord}
@param readProxyRecords: a list of records
@param writeProxyRecords: a list of records
"""
proxyTypes = [
(DelegateRecordType.readDelegateGroup, readProxyRecords),
(DelegateRecordType.writeDelegateGroup, writeProxyRecords),
]
for recordType, proxyRecords in proxyTypes:
if proxyRecords is None:
continue
proxyGroup = yield record.service.recordWithShortName(
recordType, record.uid
)
yield proxyGroup.setMembers(proxyRecords)
@inlineCallbacks
def getProxies(record):
"""
Returns a tuple containing the records for read proxies and write proxies
of the given record
"""
allProxies = {
DelegateRecordType.readDelegateGroup: [],
DelegateRecordType.writeDelegateGroup: [],
}
for recordType in allProxies.iterkeys():
proxyGroup = yield record.service.recordWithShortName(
recordType, record.uid
)
allProxies[recordType] = yield proxyGroup.members()
returnValue(
(
allProxies[DelegateRecordType.readDelegateGroup],
allProxies[DelegateRecordType.writeDelegateGroup]
)
)
@inlineCallbacks
def action_listGroupMembers(store, record):
members = yield record.members()
if members:
print("Group members for %s:\n" % (
prettyRecord(record)
))
printRecordList(members)
print("")
else:
print("No group members for %s" % (prettyRecord(record),))
@inlineCallbacks
def action_addGroupMember(store, record, *memberIDs):
directory = store.directoryService()
existingMembers = yield record.members()
existingMemberUIDs = set([member.uid for member in existingMembers])
add = set()
for memberID in memberIDs:
memberRecord = yield recordForPrincipalID(directory, memberID)
if memberRecord is None:
print("Invalid member ID: %s" % (memberID,))
elif memberRecord.uid in existingMemberUIDs:
print("Existing member ID: %s" % (memberID,))
else:
add.add(memberRecord)
if add:
yield record.addMembers(add)
for memberRecord in add:
print(
"Added {member} for {record}".format(
member=prettyRecord(memberRecord),
record=prettyRecord(record)
)
)
yield record.service.updateRecords([record], create=False)
@inlineCallbacks
def action_removeGroupMember(store, record, *memberIDs):
directory = store.directoryService()
existingMembers = yield record.members()
existingMemberUIDs = set([member.uid for member in existingMembers])
remove = set()
for memberID in memberIDs:
memberRecord = yield recordForPrincipalID(directory, memberID)
if memberRecord is None:
print("Invalid member ID: %s" % (memberID,))
elif memberRecord.uid not in existingMemberUIDs:
print("Missing member ID: %s" % (memberID,))
else:
remove.add(memberRecord)
if remove:
yield record.removeMembers(remove)
for memberRecord in remove:
print(
"Removed {member} for {record}".format(
member=prettyRecord(memberRecord),
record=prettyRecord(record)
)
)
yield record.service.updateRecords([record], create=False)
@inlineCallbacks
def printGroupCacherInfo(service, store, principalIDs):
"""
Print all groups that have been delegated to, their cached members, and
who delegated to those groups.
"""
directory = store.directoryService()
txn = store.newTransaction()
if not principalIDs:
groupUIDs = yield txn.allGroupDelegates()
else:
groupUIDs = []
for principalID in principalIDs:
record = yield recordForPrincipalID(directory, principalID)
if record:
groupUIDs.append(record.uid)
for groupUID in groupUIDs:
group = yield txn.groupByUID(groupUID)
print("Group: \"{name}\" ({uid})".format(name=group.name, uid=group.groupUID))
for txt, readWrite in (("read-only", False), ("read-write", True)):
delegatorUIDs = yield txn.delegatorsToGroup(group.groupID, readWrite)
for delegatorUID in delegatorUIDs:
delegator = yield directory.recordWithUID(delegatorUID)
print(
"...has {rw} access to {rec}".format(
rw=txt, rec=prettyRecord(delegator)
)
)
print("Group members:")
memberUIDs = yield txn.groupMemberUIDs(group.groupID)
for memberUID in memberUIDs:
record = yield directory.recordWithUID(memberUID)
print(prettyRecord(record))
print("Last cached: {} GMT".format(group.modified))
print()
yield txn.commit()
@inlineCallbacks
def scheduleGroupRefreshJob(service, store):
"""
Schedule GroupCacherPollingWork
"""
txn = store.newTransaction()
print("Scheduling a group refresh")
yield GroupCacherPollingWork.reschedule(txn, 0, force=True)
yield txn.commit()
def action_getAutoScheduleMode(store, record):
print(
"Auto-schedule mode for {record} is {mode}".format(
record=prettyRecord(record),
mode=(
record.autoScheduleMode.description if record.autoScheduleMode
else "Default"
)
)
)
@inlineCallbacks
def action_setAutoScheduleMode(store, record, autoScheduleMode):
if record.recordType == RecordType.group:
print(
"Setting auto-schedule-mode for {record} is not allowed.".format(
record=prettyRecord(record)
)
)
elif (
record.recordType == RecordType.user and
not config.Scheduling.Options.AutoSchedule.AllowUsers
):
print(
"Setting auto-schedule-mode for {record} is not allowed.".format(
record=prettyRecord(record)
)
)
else:
print(
"Setting auto-schedule-mode to {mode} for {record}".format(
mode=autoScheduleMode.description,
record=prettyRecord(record),
)
)
# Get original fields
newFields = record.fields.copy()
# Set new values
newFields[record.service.fieldName.autoScheduleMode] = autoScheduleMode
updatedRecord = DirectoryRecord(record.service, newFields)
yield record.service.updateRecords([updatedRecord], create=False)
@inlineCallbacks
def action_setAutoAcceptGroup(store, record, autoAcceptGroup):
if record.recordType == RecordType.group:
print(
"Setting auto-accept-group for {record} is not allowed.".format(
record=prettyRecord(record)
)
)
elif (
record.recordType == RecordType.user and
not config.Scheduling.Options.AutoSchedule.AllowUsers
):
print(
"Setting auto-accept-group for {record} is not allowed.".format(
record=prettyRecord(record)
)
)
else:
groupRecord = yield recordForPrincipalID(record.service, autoAcceptGroup)
if groupRecord is None or groupRecord.recordType != RecordType.group:
print("Invalid principal ID: {id}".format(id=autoAcceptGroup))
else:
print("Setting auto-accept-group to {group} for {record}".format(
group=prettyRecord(groupRecord),
record=prettyRecord(record),
))
# Get original fields
newFields = record.fields.copy()
# Set new values
newFields[record.service.fieldName.autoAcceptGroup] = groupRecord.uid
updatedRecord = DirectoryRecord(record.service, newFields)
yield record.service.updateRecords([updatedRecord], create=False)
@inlineCallbacks
def action_getAutoAcceptGroup(store, record):
if record.autoAcceptGroup:
groupRecord = yield record.service.recordWithUID(
record.autoAcceptGroup
)
if groupRecord is not None:
print(
"Auto-accept-group for {record} is {group}".format(
record=prettyRecord(record),
group=prettyRecord(groupRecord),
)
)
else:
print(
"Invalid auto-accept-group assigned: {uid}".format(
uid=record.autoAcceptGroup
)
)
else:
print(
"No auto-accept-group assigned to {record}".format(
record=prettyRecord(record)
)
)
@inlineCallbacks
def action_setValue(store, record, name, value):
print(
"Setting {name} to {value} for {record}".format(
name=name, value=value, record=prettyRecord(record),
)
)
# Get original fields
newFields = record.fields.copy()
# Set new value
newFields[record.service.fieldName.lookupByName(name)] = value
updatedRecord = DirectoryRecord(record.service, newFields)
yield record.service.updateRecords([updatedRecord], create=False)
def action_getValue(store, record, name):
try:
value = record.fields[record.service.fieldName.lookupByName(name)]
print(
"{name} for {record} is {value}".format(
name=name, record=prettyRecord(record), value=value
)
)
except KeyError:
print(
"{name} is not set for {record}".format(
name=name, record=prettyRecord(record),
)
)
def abort(msg, status=1):
sys.stdout.write("%s\n" % (msg,))
try:
reactor.stop()
except RuntimeError:
pass
sys.exit(status)
def parseCreationArgs(args):
"""
Look at the command line arguments for --add; the first arg is required
and is the full name. If only that one arg is provided, generate a UUID
and use it for record name and uid. If two args are provided, use the
second arg as the record name and generate a UUID for the uid. If three
args are provided, the second arg is the record name and the third arg
is the uid.
"""
numArgs = len(args)
if numArgs == 0:
print(
        "When adding a principal, you must provide the full name"
)
sys.exit(64)
fullName = args[0].decode("utf-8")
if numArgs == 1:
shortName = uid = unicode(uuid.uuid4()).upper()
elif numArgs == 2:
shortName = args[1].decode("utf-8")
uid = unicode(uuid.uuid4()).upper()
else:
shortName = args[1].decode("utf-8")
uid = args[2].decode("utf-8")
return fullName, shortName, uid
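# A standalone sketch (simplified: plain strings, no utf-8 decoding) of the
# argument handling documented in parseCreationArgs() above: one argument
# generates a UUID for both record name and uid, two arguments generate only
# the uid, and three arguments are used exactly as given.

```python
import uuid

def parse_creation_args(args):
    # args is [fullName], [fullName, shortName] or [fullName, shortName, uid]
    full_name = args[0]
    if len(args) == 1:
        short_name = uid = str(uuid.uuid4()).upper()
    elif len(args) == 2:
        short_name = args[1]
        uid = str(uuid.uuid4()).upper()
    else:
        short_name = args[1]
        uid = args[2]
    return full_name, short_name, uid

print(parse_creation_args(["Conference Room", "room1", "ABC123"]))
```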
def isUUID(value):
try:
uuid.UUID(value)
return True
    except (ValueError, TypeError, AttributeError):
return False
def matchStrings(value, validValues):
for validValue in validValues:
if validValue.startswith(value):
return validValue
raise ValueError("'%s' is not a recognized value" % (value,))
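# The prefix matching in matchStrings() above is what lets abbreviated type
# names on the command line (e.g. "loc") resolve to their full form. A minimal
# standalone sketch of that behavior:

```python
def match_prefix(value, valid_values):
    # Return the first valid value that the input is a prefix of
    for valid_value in valid_values:
        if valid_value.startswith(value):
            return valid_value
    raise ValueError("'%s' is not a recognized value" % (value,))

print(match_prefix("loc", ["users", "groups", "locations", "resources"]))
```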
def printRecordList(records):
results = []
for record in records:
try:
shortNames = record.shortNames
except AttributeError:
shortNames = []
results.append(
(record.displayName, record.recordType.name, record.uid, shortNames)
)
results.sort()
format = "%-22s %-10s %-20s %s"
print(format % ("Full name", "Type", "UID", "Short names"))
print(format % ("---------", "----", "---", "-----------"))
for fullName, recordType, uid, shortNames in results:
print(format % (fullName, recordType, uid, u", ".join(shortNames)))
if __name__ == "__main__":
main()
#!/usr/bin/env python
# -*- test-case-name: calendarserver.tools.test.test_purge -*-
##
# Copyright (c) 2006-2015 Apple Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
##
from __future__ import print_function
import collections
import datetime
from getopt import getopt, GetoptError
import os
import sys
from calendarserver.tools import tables
from calendarserver.tools.cmdline import utilityMain, WorkerService
from pycalendar.datetime import DateTime
from twext.enterprise.dal.record import fromTable
from twext.enterprise.dal.syntax import Delete, Select, Union, Parameter, Max
from twext.enterprise.jobs.workitem import WorkItem, RegeneratingWorkItem
from twext.python.log import Logger
from twisted.internet.defer import inlineCallbacks, returnValue, succeed
from twistedcaldav import caldavxml
from twistedcaldav.config import config
from twistedcaldav.dateops import parseSQLDateToPyCalendar, pyCalendarToSQLTimestamp
from twistedcaldav.ical import Component, InvalidICalendarDataError
from txdav.caldav.datastore.query.filter import Filter
from txdav.common.datastore.sql_tables import schema, _HOME_STATUS_NORMAL, _BIND_MODE_OWN
log = Logger()
DEFAULT_BATCH_SIZE = 100
DEFAULT_RETAIN_DAYS = 365
class PrincipalPurgePollingWork(
RegeneratingWorkItem,
fromTable(schema.PRINCIPAL_PURGE_POLLING_WORK)
):
"""
A work item that scans the existing set of provisioned homes in the
store and creates a work item for each to be checked against the
directory to see if they need purging.
"""
group = "principal_purge_polling"
@classmethod
def initialSchedule(cls, store, seconds):
def _enqueue(txn):
return PrincipalPurgePollingWork.reschedule(txn, seconds)
if config.AutomaticPurging.Enabled:
return store.inTransaction("PrincipalPurgePollingWork.initialSchedule", _enqueue)
else:
return succeed(None)
def regenerateInterval(self):
"""
Return the interval in seconds between regenerating instances.
"""
return config.AutomaticPurging.PollingIntervalSeconds if config.AutomaticPurging.Enabled else None
@inlineCallbacks
def doWork(self):
# If not enabled, punt here
if not config.AutomaticPurging.Enabled:
returnValue(None)
# Do the scan
allUIDs = set()
for home in (schema.CALENDAR_HOME, schema.ADDRESSBOOK_HOME):
for [uid] in (
yield Select(
[home.OWNER_UID],
From=home,
Where=(home.STATUS == _HOME_STATUS_NORMAL),
).on(self.transaction)
):
allUIDs.add(uid)
        # Spread out the per-uid checks so they do not all fire at once
seconds = 0
for uid in allUIDs:
notBefore = (
datetime.datetime.utcnow() +
datetime.timedelta(seconds=config.AutomaticPurging.CheckStaggerSeconds)
)
seconds += 1
yield self.transaction.enqueue(
PrincipalPurgeCheckWork,
uid=uid,
notBefore=notBefore
)
class PrincipalPurgeCheckWork(
WorkItem,
fromTable(schema.PRINCIPAL_PURGE_CHECK_WORK)
):
"""
Work item for checking for the existence of a UID in the directory. This
work item is created by L{PrincipalPurgePollingWork} - one for each
unique user UID to check.
"""
group = property(lambda self: (self.table.UID == self.uid))
@inlineCallbacks
def doWork(self):
# Delete any other work items for this UID
yield Delete(
From=self.table,
Where=self.group,
).on(self.transaction)
# If not enabled, punt here
if not config.AutomaticPurging.Enabled:
returnValue(None)
log.debug("Checking for existence of {uid} in directory", uid=self.uid)
directory = self.transaction.store().directoryService()
record = yield directory.recordWithUID(self.uid)
if record is None:
# Schedule purge of this UID a week from now
notBefore = (
datetime.datetime.utcnow() +
datetime.timedelta(seconds=config.AutomaticPurging.PurgeIntervalSeconds)
)
log.warn(
"Principal {uid} is no longer in the directory; scheduling clean-up at {when}",
uid=self.uid, when=notBefore
)
yield self.transaction.enqueue(
PrincipalPurgeWork,
uid=self.uid,
notBefore=notBefore
)
else:
log.debug("{uid} is still in the directory", uid=self.uid)
class PrincipalPurgeWork(
WorkItem,
fromTable(schema.PRINCIPAL_PURGE_WORK)
):
"""
Work item for purging a UID's data
"""
group = property(lambda self: (self.table.UID == self.uid))
@inlineCallbacks
def doWork(self):
# Delete any other work items for this UID
yield Delete(
From=self.table,
Where=self.group,
).on(self.transaction)
# If not enabled, punt here
if not config.AutomaticPurging.Enabled:
returnValue(None)
# Check for UID in directory again
log.debug("One last existence check for {uid}", uid=self.uid)
directory = self.transaction.store().directoryService()
record = yield directory.recordWithUID(self.uid)
if record is None:
# Time to go
service = PurgePrincipalService(self.transaction.store())
log.warn(
"Cleaning up future events for principal {uid} since they are no longer in directory",
uid=self.uid
)
yield service.purgeUIDs(
self.transaction.store(),
directory,
[self.uid],
proxies=True,
when=None
)
else:
log.debug("{uid} has re-appeared in the directory", uid=self.uid)
class PrincipalPurgeHomeWork(
WorkItem,
fromTable(schema.PRINCIPAL_PURGE_HOME_WORK)
):
"""
Work item for removing a UID's home
"""
group = property(lambda self: (self.table.HOME_RESOURCE_ID == self.homeResourceID))
@inlineCallbacks
def doWork(self):
# Delete any other work items for this UID
yield Delete(
From=self.table,
Where=self.group,
).on(self.transaction)
# NB We do not check config.AutomaticPurging.Enabled here because if this work
# item was enqueued we always need to complete it
# Check for pending scheduling operations
sow = schema.SCHEDULE_ORGANIZER_WORK
sosw = schema.SCHEDULE_ORGANIZER_SEND_WORK
srw = schema.SCHEDULE_REPLY_WORK
rows = yield Select(
[sow.HOME_RESOURCE_ID],
From=sow,
Where=(sow.HOME_RESOURCE_ID == self.homeResourceID),
SetExpression=Union(
Select(
[sosw.HOME_RESOURCE_ID],
From=sosw,
Where=(sosw.HOME_RESOURCE_ID == self.homeResourceID),
SetExpression=Union(
Select(
[srw.HOME_RESOURCE_ID],
From=srw,
Where=(srw.HOME_RESOURCE_ID == self.homeResourceID),
)
),
)
),
).on(self.transaction)
        if rows:
# Regenerate this job
notBefore = (
datetime.datetime.utcnow() +
datetime.timedelta(seconds=config.AutomaticPurging.HomePurgeDelaySeconds)
)
yield self.transaction.enqueue(
PrincipalPurgeHomeWork,
homeResourceID=self.homeResourceID,
notBefore=notBefore
)
else:
# Get the home and remove it - only if properly marked as being purged
home = yield self.transaction.calendarHomeWithResourceID(self.homeResourceID)
if home.purging():
yield home.remove()
class PurgeOldEventsService(WorkerService):
uuid = None
cutoff = None
batchSize = None
dryrun = False
debug = False
@classmethod
def usage(cls, e=None):
name = os.path.basename(sys.argv[0])
print("usage: %s [options]" % (name,))
print("")
print(" Remove old events from the calendar server")
print("")
print("options:")
print(" -h --help: print this help and exit")
print(" -f --config : Specify caldavd.plist configuration path")
        print(" -u --uuid <uuid>: Only process users whose UID starts with this value [REQUIRED]")
print(" -d --days : specify how many days in the past to retain (default=%d)" % (DEFAULT_RETAIN_DAYS,))
print(" -n --dry-run: calculate how many events to purge, but do not purge data")
print(" -v --verbose: print progress information")
print(" -D --debug: debug logging")
print("")
if e:
sys.stderr.write("%s\n" % (e,))
sys.exit(64)
else:
sys.exit(0)
@classmethod
def main(cls):
try:
(optargs, args) = getopt(
sys.argv[1:], "Dd:b:f:hnu:v", [
"days=",
"batch=",
"dry-run",
"config=",
"uuid=",
"help",
"verbose",
"debug",
],
)
except GetoptError, e:
cls.usage(e)
#
# Get configuration
#
configFileName = None
uuid = None
days = DEFAULT_RETAIN_DAYS
batchSize = DEFAULT_BATCH_SIZE
dryrun = False
verbose = False
debug = False
for opt, arg in optargs:
if opt in ("-h", "--help"):
cls.usage()
elif opt in ("-d", "--days"):
try:
days = int(arg)
except ValueError, e:
print("Invalid value for --days: %s" % (arg,))
cls.usage(e)
elif opt in ("-b", "--batch"):
try:
batchSize = int(arg)
except ValueError, e:
print("Invalid value for --batch: %s" % (arg,))
cls.usage(e)
elif opt in ("-v", "--verbose"):
verbose = True
elif opt in ("-D", "--debug"):
debug = True
elif opt in ("-n", "--dry-run"):
dryrun = True
elif opt in ("-f", "--config"):
configFileName = arg
elif opt in ("-u", "--uuid"):
uuid = arg
else:
raise NotImplementedError(opt)
if args:
cls.usage("Too many arguments: %s" % (args,))
if uuid is None:
cls.usage("uuid must be specified")
cls.uuid = uuid
if dryrun:
verbose = True
cutoff = DateTime.getToday()
cutoff.setDateOnly(False)
cutoff.offsetDay(-days)
cls.cutoff = cutoff
cls.batchSize = batchSize
cls.dryrun = dryrun
cls.debug = debug
utilityMain(
configFileName,
cls,
verbose=verbose,
)
@classmethod
@inlineCallbacks
def purgeOldEvents(cls, store, uuid, cutoff, batchSize, debug=False, dryrun=False):
service = cls(store)
service.uuid = uuid
service.cutoff = cutoff
service.batchSize = batchSize
service.dryrun = dryrun
service.debug = debug
result = yield service.doWork()
returnValue(result)
@inlineCallbacks
def getMatchingHomeUIDs(self):
"""
Find all the calendar homes that match the uuid cli argument.
"""
log.debug("Searching for calendar homes matching: '{}'".format(self.uuid))
txn = self.store.newTransaction(label="Find matching homes")
ch = schema.CALENDAR_HOME
if self.uuid:
kwds = {"uuid": self.uuid}
rows = (yield Select(
[ch.RESOURCE_ID, ch.OWNER_UID, ],
From=ch,
Where=(ch.OWNER_UID.StartsWith(Parameter("uuid"))),
).on(txn, **kwds))
else:
rows = (yield Select(
[ch.RESOURCE_ID, ch.OWNER_UID, ],
From=ch,
).on(txn))
yield txn.commit()
log.debug(" Found {} calendar homes".format(len(rows)))
returnValue(sorted(rows, key=lambda x: x[1]))
@inlineCallbacks
def getMatchingCalendarIDs(self, home_id, owner_uid):
"""
Find all the owned calendars for the specified calendar home.
@param home_id: resource-id of calendar home to check
@type home_id: L{int}
@param owner_uid: owner UUID of home to check
@type owner_uid: L{str}
"""
log.debug("Checking calendar home: {} '{}'".format(home_id, owner_uid))
txn = self.store.newTransaction(label="Find matching calendars")
cb = schema.CALENDAR_BIND
kwds = {"home_id": home_id}
rows = (yield Select(
[cb.CALENDAR_RESOURCE_ID, cb.CALENDAR_RESOURCE_NAME, ],
From=cb,
Where=(cb.CALENDAR_HOME_RESOURCE_ID == Parameter("home_id")).And(
cb.BIND_MODE == _BIND_MODE_OWN
),
).on(txn, **kwds))
yield txn.commit()
log.debug(" Found {} calendars".format(len(rows)))
returnValue(rows)
PurgeEvent = collections.namedtuple("PurgeEvent", ("home", "calendar", "resource",))
@inlineCallbacks
def getResourceIDsToPurge(self, home_id, calendar_id, calendar_name):
"""
For the given calendar find which calendar objects are older than the cut-off and return the
resource-ids of those.
@param home_id: resource-id of calendar home
@type home_id: L{int}
@param calendar_id: resource-id of the calendar to check
@type calendar_id: L{int}
@param calendar_name: name of the calendar to check
@type calendar_name: L{str}
"""
log.debug(" Checking calendar: {} '{}'".format(calendar_id, calendar_name))
purge = set()
txn = self.store.newTransaction(label="Find matching resources")
co = schema.CALENDAR_OBJECT
tr = schema.TIME_RANGE
kwds = {"calendar_id": calendar_id}
rows = (yield Select(
[co.RESOURCE_ID, co.RECURRANCE_MAX, co.RECURRANCE_MIN, Max(tr.END_DATE)],
From=co.join(tr, on=(co.RESOURCE_ID == tr.CALENDAR_OBJECT_RESOURCE_ID)),
Where=(co.CALENDAR_RESOURCE_ID == Parameter("calendar_id")).And(
co.ICALENDAR_TYPE == "VEVENT"
),
GroupBy=(co.RESOURCE_ID, co.RECURRANCE_MAX, co.RECURRANCE_MIN,),
Having=(
(co.RECURRANCE_MAX == None).And(Max(tr.END_DATE) < pyCalendarToSQLTimestamp(self.cutoff))
).Or(
(co.RECURRANCE_MAX != None).And(co.RECURRANCE_MAX < pyCalendarToSQLTimestamp(self.cutoff))
),
).on(txn, **kwds))
log.debug(" Found {} resources to check".format(len(rows)))
for resource_id, recurrence_max, recurrence_min, max_end_date in rows:
recurrence_max = parseSQLDateToPyCalendar(recurrence_max) if recurrence_max else None
recurrence_min = parseSQLDateToPyCalendar(recurrence_min) if recurrence_min else None
max_end_date = parseSQLDateToPyCalendar(max_end_date) if max_end_date else None
# Find events where we know the max(end_date) represents a valid,
# untruncated expansion
if recurrence_min is None or recurrence_min < self.cutoff:
if recurrence_max is None:
                    # Here we know max_end_date is the end of the fully
                    # expanded final instance
if max_end_date < self.cutoff:
purge.add(self.PurgeEvent(home_id, calendar_id, resource_id,))
continue
elif recurrence_max > self.cutoff:
# Here we know that there are instances newer than the cut-off
# but they have not yet been indexed out that far
continue
# Manually detect the max_end_date from the actual calendar data
calendar = yield self.getCalendar(txn, resource_id)
if calendar is not None:
if self.checkLastInstance(calendar):
purge.add(self.PurgeEvent(home_id, calendar_id, resource_id,))
yield txn.commit()
log.debug(" Found {} resources to purge".format(len(purge)))
returnValue(purge)
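# A pure-Python sketch of the purge test encoded in the HAVING clause above
# (the names here are illustrative, not the schema's): an event is a purge
# candidate when it has no recurrence expansion limit and its latest indexed
# instance ends before the cutoff, or when its recurrence expansion limit
# itself falls before the cutoff.

```python
import datetime

def is_purge_candidate(recurrence_max, max_end_date, cutoff):
    if recurrence_max is None:
        # Non-recurring (or fully expanded): compare the last instance end
        return max_end_date < cutoff
    # Recurring with an expansion limit: compare the limit itself
    return recurrence_max < cutoff

cutoff = datetime.datetime(2015, 1, 1)
print(is_purge_candidate(None, datetime.datetime(2014, 6, 1), cutoff))  # True
print(is_purge_candidate(datetime.datetime(2016, 1, 1), None, cutoff))  # False
```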
@inlineCallbacks
def getCalendar(self, txn, resid):
"""
Get the calendar data for a calendar object resource.
@param resid: resource-id of the calendar object resource to load
@type resid: L{int}
"""
co = schema.CALENDAR_OBJECT
kwds = {"ResourceID" : resid}
rows = (yield Select(
[co.ICALENDAR_TEXT],
From=co,
Where=(
co.RESOURCE_ID == Parameter("ResourceID")
),
).on(txn, **kwds))
try:
caldata = Component.fromString(rows[0][0]) if rows else None
except InvalidICalendarDataError:
returnValue(None)
returnValue(caldata)
def checkLastInstance(self, calendar):
"""
Determine the last instance of a calendar event. Try a "static" analysis of the data first,
and only if needed, do an instance expansion.
@param calendar: the calendar object to examine
@type calendar: L{Component}
"""
# Is it recurring
master = calendar.masterComponent()
if not calendar.isRecurring() or master is None:
# Just check the end date
for comp in calendar.subcomponents():
if comp.name() == "VEVENT":
if comp.getEndDateUTC() > self.cutoff:
return False
else:
return True
elif calendar.isRecurringUnbounded():
return False
else:
# First test all sub-components
# Just check the end date
for comp in calendar.subcomponents():
if comp.name() == "VEVENT":
if comp.getEndDateUTC() > self.cutoff:
return False
# If we get here we need to test the RRULE - if there is an until use
# that as the end point, if a count, we have to expand
rrules = tuple(master.properties("RRULE"))
if len(rrules):
if rrules[0].value().getUseUntil():
return rrules[0].value().getUntil() < self.cutoff
else:
return not calendar.hasInstancesAfter(self.cutoff)
return True
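The decision order used by checkLastInstance above can be sketched independently of the store types. This is a hedged, standalone illustration: the plain-dict event shape and the function name are assumptions for the example, not part of the server's Component API, and the COUNT case is handled conservatively instead of doing a full expansion.

```python
# Standalone sketch of the checkLastInstance decision order, using a
# hypothetical plain-dict event instead of the server's Component API.
from datetime import datetime

def last_instance_before(event, cutoff):
    """True when every instance of `event` ends before `cutoff`.

    `event` keys (all illustrative): "end" (last known end datetime),
    "unbounded" (bool), "rrule" (None, or {"until": datetime}, or
    {"count": int}).
    """
    rrule = event.get("rrule")
    if rrule is None:
        # Non-recurring: the single end date decides.
        return event["end"] < cutoff
    if event.get("unbounded"):
        # Unbounded recurrences always have future instances.
        return False
    if event["end"] > cutoff:
        # Some already-indexed instance ends after the cut-off.
        return False
    if "until" in rrule:
        # UNTIL gives the final instance directly.
        return rrule["until"] < cutoff
    # COUNT-limited rules would need a full expansion; the real code
    # expands them, this sketch just declines to purge.
    return False

cutoff = datetime(2015, 1, 1)
old = {"end": datetime(2014, 6, 1), "rrule": None}
bounded = {"end": datetime(2014, 6, 1), "rrule": {"until": datetime(2014, 12, 1)}}
print(last_instance_before(old, cutoff), last_instance_before(bounded, cutoff))
```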
@inlineCallbacks
def getResourcesToPurge(self, home_id, owner_uid):
"""
Find all the resource-ids of calendar object resources that need to be purged in the specified home.
@param home_id: resource-id of calendar home to check
@type home_id: L{int}
@param owner_uid: owner UUID of home to check
@type owner_uid: L{str}
"""
purge = set()
calendars = yield self.getMatchingCalendarIDs(home_id, owner_uid)
for calendar_id, calendar_name in calendars:
purge.update((yield self.getResourceIDsToPurge(home_id, calendar_id, calendar_name)))
returnValue(purge)
@inlineCallbacks
def purgeResources(self, events):
"""
Remove up to batchSize events and return how
many were removed.
"""
txn = self.store.newTransaction(label="Remove old events")
count = 0
last_home = None
last_calendar = None
for event in events:
if event.home != last_home:
home = (yield txn.calendarHomeWithResourceID(event.home))
last_home = event.home
if event.calendar != last_calendar:
calendar = (yield home.childWithID(event.calendar))
last_calendar = event.calendar
resource = (yield calendar.objectResourceWithID(event.resource))
yield resource.purge(implicitly=False)
log.debug("Removed resource {} '{}' from calendar {} '{}' of calendar home '{}'".format(
resource.id(),
resource.name(),
resource.parentCollection().id(),
resource.parentCollection().name(),
resource.parentCollection().ownerHome().uid()
))
count += 1
yield txn.commit()
returnValue(count)
@inlineCallbacks
def doWork(self):
if self.debug:
# Turn on debug logging for this module
config.LogLevels[__name__] = "debug"
else:
config.LogLevels[__name__] = "info"
config.update()
homes = yield self.getMatchingHomeUIDs()
if not homes:
log.info("No homes to process")
returnValue(0)
if self.dryrun:
log.info("Purge dry run only")
log.info("Searching for old events...")
purge = set()
for home_id, owner_uid in homes:
purge.update((yield self.getResourcesToPurge(home_id, owner_uid)))
if self.dryrun:
eventCount = len(purge)
if eventCount == 0:
log.info("No events are older than %s" % (self.cutoff,))
elif eventCount == 1:
log.info("1 event is older than %s" % (self.cutoff,))
else:
log.info("%d events are older than %s" % (eventCount, self.cutoff))
returnValue(eventCount)
purge = list(purge)
purge.sort()
totalEvents = len(purge)
log.info("Removing {} events older than {}...".format(len(purge), self.cutoff,))
numEventsRemoved = -1
totalRemoved = 0
while numEventsRemoved:
numEventsRemoved = (yield self.purgeResources(purge[:self.batchSize]))
if numEventsRemoved:
totalRemoved += numEventsRemoved
log.debug(" Removed {} of {} events...".format(totalRemoved, totalEvents))
purge = purge[numEventsRemoved:]
if totalRemoved == 0:
log.info("No events were removed")
elif totalRemoved == 1:
log.info("1 event was removed in total")
else:
log.info("%d events were removed in total" % (totalRemoved,))
returnValue(totalRemoved)
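The removal loop in doWork above (take up to batchSize items, stop when a pass removes nothing) can be sketched as plain synchronous Python. `purge_batch` is a stand-in for purgeResources and is an assumption of this sketch, not a server API.

```python
# Minimal sketch of the batch-removal loop used in doWork above:
# keep removing up to `batch_size` items until a pass removes nothing.
def purge_in_batches(items, purge_batch, batch_size):
    total_removed = 0
    num_removed = -1  # sentinel so the loop body runs at least once
    while num_removed:
        num_removed = purge_batch(items[:batch_size])
        if num_removed:
            total_removed += num_removed
            items = items[num_removed:]
    return total_removed

# With purge_batch=len every offered item is "removed".
print(purge_in_batches(list(range(10)), len, 3))
```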
class PurgeAttachmentsService(WorkerService):
uuid = None
cutoff = None
batchSize = None
dryrun = False
verbose = False
@classmethod
def usage(cls, e=None):
name = os.path.basename(sys.argv[0])
print("usage: %s [options]" % (name,))
print("")
print(" Remove old or orphaned attachments from the calendar server")
print("")
print("options:")
print(" -h --help: print this help and exit")
print(" -f --config : Specify caldavd.plist configuration path")
print(" -u --uuid : target a specific user UID")
# print(" -b --batch : number of attachments to remove in each transaction (default=%d)" % (DEFAULT_BATCH_SIZE,))
print(" -d --days : specify how many days in the past to retain (default=%d) zero means no removal of old attachments" % (DEFAULT_RETAIN_DAYS,))
print(" -n --dry-run: calculate how many attachments to purge, but do not purge data")
print(" -v --verbose: print progress information")
print(" -D --debug: debug logging")
print("")
if e:
sys.stderr.write("%s\n" % (e,))
sys.exit(64)
else:
sys.exit(0)
@classmethod
def main(cls):
try:
(optargs, args) = getopt(
sys.argv[1:], "Dd:b:f:hnu:v", [
"uuid=",
"days=",
"batch=",
"dry-run",
"config=",
"help",
"verbose",
"debug",
],
)
except GetoptError as e:
cls.usage(e)
#
# Get configuration
#
configFileName = None
uuid = None
days = DEFAULT_RETAIN_DAYS
batchSize = DEFAULT_BATCH_SIZE
dryrun = False
verbose = False
debug = False
for opt, arg in optargs:
if opt in ("-h", "--help"):
cls.usage()
elif opt in ("-u", "--uuid"):
uuid = arg
elif opt in ("-d", "--days"):
try:
days = int(arg)
except ValueError as e:
print("Invalid value for --days: %s" % (arg,))
cls.usage(e)
elif opt in ("-b", "--batch"):
try:
batchSize = int(arg)
except ValueError as e:
print("Invalid value for --batch: %s" % (arg,))
cls.usage(e)
elif opt in ("-v", "--verbose"):
verbose = True
elif opt in ("-D", "--debug"):
debug = True
elif opt in ("-n", "--dry-run"):
dryrun = True
elif opt in ("-f", "--config"):
configFileName = arg
else:
raise NotImplementedError(opt)
if args:
cls.usage("Too many arguments: %s" % (args,))
if dryrun:
verbose = True
cls.uuid = uuid
if days > 0:
cutoff = DateTime.getToday()
cutoff.setDateOnly(False)
cutoff.offsetDay(-days)
cls.cutoff = cutoff
else:
cls.cutoff = None
cls.batchSize = batchSize
cls.dryrun = dryrun
cls.verbose = verbose
utilityMain(
configFileName,
cls,
verbose=debug,
)
@classmethod
@inlineCallbacks
def purgeAttachments(cls, store, uuid, days, limit, dryrun, verbose):
service = cls(store)
service.uuid = uuid
if days > 0:
cutoff = DateTime.getToday()
cutoff.setDateOnly(False)
cutoff.offsetDay(-days)
service.cutoff = cutoff
else:
service.cutoff = None
service.batchSize = limit
service.dryrun = dryrun
service.verbose = verbose
result = yield service.doWork()
returnValue(result)
@inlineCallbacks
def doWork(self):
if self.dryrun:
orphans = yield self._orphansDryRun()
if self.cutoff is not None:
dropbox = yield self._dropboxDryRun()
managed = yield self._managedDryRun()
else:
dropbox = ()
managed = ()
returnValue(self._dryRunSummary(orphans, dropbox, managed))
else:
total = yield self._orphansPurge()
if self.cutoff is not None:
total += yield self._dropboxPurge()
total += yield self._managedPurge()
returnValue(total)
@inlineCallbacks
def _orphansDryRun(self):
if self.verbose:
print("(Dry run) Searching for orphaned attachments...")
txn = self.store.newTransaction(label="Find orphaned attachments")
orphans = yield txn.orphanedAttachments(self.uuid)
returnValue(orphans)
@inlineCallbacks
def _dropboxDryRun(self):
if self.verbose:
print("(Dry run) Searching for old dropbox attachments...")
txn = self.store.newTransaction(label="Find old dropbox attachments")
cutoffs = yield txn.oldDropboxAttachments(self.cutoff, self.uuid)
yield txn.commit()
returnValue(cutoffs)
@inlineCallbacks
def _managedDryRun(self):
if self.verbose:
print("(Dry run) Searching for old managed attachments...")
txn = self.store.newTransaction(label="Find old managed attachments")
cutoffs = yield txn.oldManagedAttachments(self.cutoff, self.uuid)
yield txn.commit()
returnValue(cutoffs)
def _dryRunSummary(self, orphans, dropbox, managed):
if self.verbose:
byuser = {}
ByUserData = collections.namedtuple(
'ByUserData',
['quota', 'orphanSize', 'orphanCount', 'dropboxSize', 'dropboxCount', 'managedSize', 'managedCount']
)
for user, quota, size, count in orphans:
byuser[user] = ByUserData(quota=quota, orphanSize=size, orphanCount=count, dropboxSize=0, dropboxCount=0, managedSize=0, managedCount=0)
for user, quota, size, count in dropbox:
if user in byuser:
byuser[user] = byuser[user]._replace(dropboxSize=size, dropboxCount=count)
else:
byuser[user] = ByUserData(quota=quota, orphanSize=0, orphanCount=0, dropboxSize=size, dropboxCount=count, managedSize=0, managedCount=0)
for user, quota, size, count in managed:
if user in byuser:
byuser[user] = byuser[user]._replace(managedSize=size, managedCount=count)
else:
byuser[user] = ByUserData(quota=quota, orphanSize=0, orphanCount=0, dropboxSize=0, dropboxCount=0, managedSize=size, managedCount=count)
# Print table of results
table = tables.Table()
table.addHeader(("User", "Current Quota", "Orphan Size", "Orphan Count", "Dropbox Size", "Dropbox Count", "Managed Size", "Managed Count", "Total Size", "Total Count"))
table.setDefaultColumnFormats((
tables.Table.ColumnFormat("%s", tables.Table.ColumnFormat.LEFT_JUSTIFY),
tables.Table.ColumnFormat("%d", tables.Table.ColumnFormat.RIGHT_JUSTIFY),
tables.Table.ColumnFormat("%d", tables.Table.ColumnFormat.RIGHT_JUSTIFY),
tables.Table.ColumnFormat("%d", tables.Table.ColumnFormat.RIGHT_JUSTIFY),
tables.Table.ColumnFormat("%d", tables.Table.ColumnFormat.RIGHT_JUSTIFY),
tables.Table.ColumnFormat("%d", tables.Table.ColumnFormat.RIGHT_JUSTIFY),
tables.Table.ColumnFormat("%d", tables.Table.ColumnFormat.RIGHT_JUSTIFY),
tables.Table.ColumnFormat("%d", tables.Table.ColumnFormat.RIGHT_JUSTIFY),
tables.Table.ColumnFormat("%d", tables.Table.ColumnFormat.RIGHT_JUSTIFY),
tables.Table.ColumnFormat("%d", tables.Table.ColumnFormat.RIGHT_JUSTIFY),
))
totals = [0] * 8
for user, data in sorted(byuser.items(), key=lambda x: x[0]):
cols = (
data.orphanSize,
data.orphanCount,
data.dropboxSize,
data.dropboxCount,
data.managedSize,
data.managedCount,
data.orphanSize + data.dropboxSize + data.managedSize,
data.orphanCount + data.dropboxCount + data.managedCount,
)
table.addRow((user, data.quota,) + cols)
for ctr, value in enumerate(cols):
totals[ctr] += value
table.addFooter(("Total:", "",) + tuple(totals))
total = totals[7]
print("\n")
print("Orphaned/Old Attachments by User:\n")
table.printTable()
else:
total = sum([x[3] for x in orphans]) + sum([x[3] for x in dropbox]) + sum([x[3] for x in managed])
return total
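The per-user merge in _dryRunSummary above folds each category's rows into a single namedtuple per user, filling in fields via `_replace`. A reduced sketch with illustrative names (two categories instead of three):

```python
# Sketch of the per-user merge in _dryRunSummary: later categories fill
# fields on an existing row via namedtuple._replace, or create a new row.
import collections

ByUser = collections.namedtuple("ByUser", ["quota", "orphans", "dropbox"])

def merge(orphan_rows, dropbox_rows):
    byuser = {}
    for user, quota, count in orphan_rows:
        byuser[user] = ByUser(quota=quota, orphans=count, dropbox=0)
    for user, quota, count in dropbox_rows:
        if user in byuser:
            # _replace returns a new tuple with only that field changed
            byuser[user] = byuser[user]._replace(dropbox=count)
        else:
            byuser[user] = ByUser(quota=quota, orphans=0, dropbox=count)
    return byuser

result = merge([("alice", 100, 3)], [("alice", 100, 2), ("bob", 50, 1)])
print(result["alice"], result["bob"])
```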
@inlineCallbacks
def _orphansPurge(self):
if self.verbose:
print("Removing orphaned attachments...",)
numOrphansRemoved = -1
totalRemoved = 0
while numOrphansRemoved:
txn = self.store.newTransaction(label="Remove orphaned attachments")
numOrphansRemoved = yield txn.removeOrphanedAttachments(self.uuid, batchSize=self.batchSize)
yield txn.commit()
if numOrphansRemoved:
totalRemoved += numOrphansRemoved
if self.verbose:
print(" %d," % (totalRemoved,),)
elif self.verbose:
print("")
if self.verbose:
if totalRemoved == 0:
print("No orphaned attachments were removed")
elif totalRemoved == 1:
print("1 orphaned attachment was removed in total")
else:
print("%d orphaned attachments were removed in total" % (totalRemoved,))
print("")
returnValue(totalRemoved)
@inlineCallbacks
def _dropboxPurge(self):
if self.verbose:
print("Removing old dropbox attachments...",)
numOldRemoved = -1
totalRemoved = 0
while numOldRemoved:
txn = self.store.newTransaction(label="Remove old dropbox attachments")
numOldRemoved = yield txn.removeOldDropboxAttachments(self.cutoff, self.uuid, batchSize=self.batchSize)
yield txn.commit()
if numOldRemoved:
totalRemoved += numOldRemoved
if self.verbose:
print(" %d," % (totalRemoved,),)
elif self.verbose:
print("")
if self.verbose:
if totalRemoved == 0:
print("No old dropbox attachments were removed")
elif totalRemoved == 1:
print("1 old dropbox attachment was removed in total")
else:
print("%d old dropbox attachments were removed in total" % (totalRemoved,))
print("")
returnValue(totalRemoved)
@inlineCallbacks
def _managedPurge(self):
if self.verbose:
print("Removing old managed attachments...",)
numOldRemoved = -1
totalRemoved = 0
while numOldRemoved:
txn = self.store.newTransaction(label="Remove old managed attachments")
numOldRemoved = yield txn.removeOldManagedAttachments(self.cutoff, self.uuid, batchSize=self.batchSize)
yield txn.commit()
if numOldRemoved:
totalRemoved += numOldRemoved
if self.verbose:
print(" %d," % (totalRemoved,),)
elif self.verbose:
print("")
if self.verbose:
if totalRemoved == 0:
print("No old managed attachments were removed")
elif totalRemoved == 1:
print("1 old managed attachment was removed in total")
else:
print("%d old managed attachments were removed in total" % (totalRemoved,))
print("")
returnValue(totalRemoved)
class PurgePrincipalService(WorkerService):
root = None
directory = None
uids = None
dryrun = False
verbose = False
proxies = True
when = None
@classmethod
def usage(cls, e=None):
name = os.path.basename(sys.argv[0])
print("usage: %s [options]" % (name,))
print("")
print(" Remove a principal's events and contacts from the calendar server")
print(" Future events are declined or cancelled")
print("")
print("options:")
print(" -h --help: print this help and exit")
print(" -f --config : Specify caldavd.plist configuration path")
print(" -n --dry-run: calculate how many events and contacts to purge, but do not purge data")
print(" -v --verbose: print progress information")
print(" -D --debug: debug logging")
print("")
if e:
sys.stderr.write("%s\n" % (e,))
sys.exit(64)
else:
sys.exit(0)
@classmethod
def main(cls):
try:
(optargs, args) = getopt(
sys.argv[1:], "Df:hnv", [
"dry-run",
"config=",
"help",
"verbose",
"debug",
],
)
except GetoptError as e:
cls.usage(e)
#
# Get configuration
#
configFileName = None
dryrun = False
verbose = False
debug = False
for opt, arg in optargs:
if opt in ("-h", "--help"):
cls.usage()
elif opt in ("-v", "--verbose"):
verbose = True
elif opt in ("-D", "--debug"):
debug = True
elif opt in ("-n", "--dry-run"):
dryrun = True
elif opt in ("-f", "--config"):
configFileName = arg
else:
raise NotImplementedError(opt)
# args is a list of uids
cls.uids = args
cls.dryrun = dryrun
cls.verbose = verbose
utilityMain(
configFileName,
cls,
verbose=debug,
)
@classmethod
@inlineCallbacks
def purgeUIDs(cls, store, directory, uids, verbose=False, dryrun=False,
proxies=True, when=None):
service = cls(store)
service.directory = directory
service.uids = uids
service.verbose = verbose
service.dryrun = dryrun
service.proxies = proxies
service.when = when
result = yield service.doWork()
returnValue(result)
@inlineCallbacks
def doWork(self):
if self.directory is None:
self.directory = self.store.directoryService()
total = 0
for uid in self.uids:
count = yield self._purgeUID(uid)
total += count
if self.verbose:
amount = "%d event%s" % (total, "s" if total > 1 else "")
if self.dryrun:
print("Would have modified or deleted %s" % (amount,))
else:
print("Modified or deleted %s" % (amount,))
returnValue(total)
@inlineCallbacks
def _purgeUID(self, uid):
if self.when is None:
self.when = DateTime.getNowUTC()
cuas = set((
"urn:uuid:{}".format(uid),
"urn:x-uid:{}".format(uid)
))
# See if calendar home is provisioned
txn = self.store.newTransaction()
storeCalHome = yield txn.calendarHomeWithUID(uid)
calHomeProvisioned = storeCalHome is not None
# Always, unshare collections, remove notifications
if calHomeProvisioned:
yield self._cleanHome(txn, storeCalHome)
yield txn.commit()
count = 0
if calHomeProvisioned:
count = yield self._cancelEvents(txn, uid, cuas)
# Remove empty calendar collections (and calendar home if no more
# calendars)
yield self._removeCalendarHome(uid)
# Remove VCards
count += (yield self._removeAddressbookHome(uid))
if self.proxies and not self.dryrun:
if self.verbose:
print("Deleting any proxy assignments")
yield self._purgeProxyAssignments(self.store, uid)
returnValue(count)
@inlineCallbacks
def _cleanHome(self, txn, storeCalHome):
# Process shared and shared-to-me calendars
children = list((yield storeCalHome.children()))
for child in children:
if self.verbose:
if self.dryrun:
print("Would unshare: %s" % (child.name(),))
else:
print("Unsharing: %s" % (child.name(),))
if not self.dryrun:
yield child.unshare()
if not self.dryrun:
yield storeCalHome.removeUnacceptedShares()
notificationHome = yield txn.notificationsWithUID(storeCalHome.uid())
if notificationHome is not None:
yield notificationHome.purge()
@inlineCallbacks
def _cancelEvents(self, txn, uid, cuas):
# Anything in the past is left alone
whenString = self.when.getText()
query_filter = caldavxml.Filter(
caldavxml.ComponentFilter(
caldavxml.ComponentFilter(
caldavxml.TimeRange(start=whenString,),
name=("VEVENT",),
),
name="VCALENDAR",
)
)
query_filter = Filter(query_filter)
count = 0
txn = self.store.newTransaction()
storeCalHome = yield txn.calendarHomeWithUID(uid)
calendarNames = yield storeCalHome.listCalendars()
yield txn.commit()
for calendarName in calendarNames:
txn = self.store.newTransaction(authz_uid=uid)
storeCalHome = yield txn.calendarHomeWithUID(uid)
calendar = yield storeCalHome.calendarWithName(calendarName)
allChildNames = []
futureChildNames = set()
# Only purge owned calendars
if calendar.owned():
# all events
for childName in (yield calendar.listCalendarObjects()):
allChildNames.append(childName)
# events matching filter
for childName, _ignore_childUid, _ignore_childType in (yield calendar.search(query_filter)):
futureChildNames.add(childName)
yield txn.commit()
for childName in allChildNames:
txn = self.store.newTransaction(authz_uid=uid)
storeCalHome = yield txn.calendarHomeWithUID(uid)
calendar = yield storeCalHome.calendarWithName(calendarName)
doScheduling = childName in futureChildNames
try:
childResource = yield calendar.calendarObjectWithName(childName)
uri = "/calendars/__uids__/%s/%s/%s" % (storeCalHome.uid(), calendar.name(), childName)
incrementCount = self.dryrun
if self.verbose:
if self.dryrun:
print("Would delete%s: %s" % (" with scheduling" if doScheduling else "", uri,))
else:
print("Deleting%s: %s" % (" with scheduling" if doScheduling else "", uri,))
if not self.dryrun:
retry = False
try:
yield childResource.purge(implicitly=doScheduling)
incrementCount = True
except Exception as e:
print("Exception deleting %s: %s" % (uri, str(e)))
retry = True
if retry and doScheduling:
# Try again with implicit scheduling off
print("Retrying deletion of %s with scheduling turned off" % (uri,))
try:
yield childResource.purge(implicitly=False)
incrementCount = True
except Exception as e:
print("Still couldn't delete %s even with scheduling turned off: %s" % (uri, str(e)))
if incrementCount:
count += 1
# Commit
yield txn.commit()
except Exception:
# Abort and re-raise with the original traceback
yield txn.abort()
raise
returnValue(count)
@inlineCallbacks
def _removeCalendarHome(self, uid):
try:
txn = self.store.newTransaction(authz_uid=uid)
# Remove empty calendar collections (and calendar home if no more
# calendars)
storeCalHome = yield txn.calendarHomeWithUID(uid)
if storeCalHome is not None:
calendars = list((yield storeCalHome.calendars()))
calendars.extend(list((yield storeCalHome.calendars(onlyInTrash=True))))
remainingCalendars = len(calendars)
for calColl in calendars:
if len(list((yield calColl.calendarObjects()))) == 0:
remainingCalendars -= 1
calendarName = calColl.name()
if self.verbose:
if self.dryrun:
print("Would delete calendar: %s" % (calendarName,))
else:
print("Deleting calendar: %s" % (calendarName,))
if not self.dryrun:
if calColl.owned():
yield storeCalHome.removeChildWithName(calendarName, useTrash=False)
else:
yield calColl.unshare()
if not remainingCalendars:
if self.verbose:
if self.dryrun:
print("Would delete calendar home")
else:
print("Deleting calendar home")
if not self.dryrun:
# Queue a job to delete the calendar home after any scheduling operations
# are complete
notBefore = (
datetime.datetime.utcnow() +
datetime.timedelta(seconds=config.AutomaticPurging.HomePurgeDelaySeconds)
)
yield txn.enqueue(
PrincipalPurgeHomeWork,
homeResourceID=storeCalHome.id(),
notBefore=notBefore
)
# Also mark the home as purging so it won't be looked at again during
# purge polling
yield storeCalHome.purge()
# Commit
yield txn.commit()
except Exception:
# Abort and re-raise with the original traceback
yield txn.abort()
raise
@inlineCallbacks
def _removeAddressbookHome(self, uid):
count = 0
txn = self.store.newTransaction(authz_uid=uid)
try:
# Remove VCards
storeAbHome = yield txn.addressbookHomeWithUID(uid)
if storeAbHome is not None:
for abColl in list((yield storeAbHome.addressbooks())):
for card in list((yield abColl.addressbookObjects())):
cardName = card.name()
if self.verbose:
uri = "/addressbooks/__uids__/%s/%s/%s" % (uid, abColl.name(), cardName)
if self.dryrun:
print("Would delete: %s" % (uri,))
else:
print("Deleting: %s" % (uri,))
if not self.dryrun:
yield card.remove()
count += 1
abName = abColl.name()
if self.verbose:
if self.dryrun:
print("Would delete addressbook: %s" % (abName,))
else:
print("Deleting addressbook: %s" % (abName,))
if not self.dryrun:
# Also remove the addressbook collection itself
if abColl.owned():
yield storeAbHome.removeChildWithName(abName, useTrash=False)
else:
yield abColl.unshare()
if self.verbose:
if self.dryrun:
print("Would delete addressbook home")
else:
print("Deleting addressbook home")
if not self.dryrun:
yield storeAbHome.remove()
# Commit
yield txn.commit()
except Exception:
# Abort and re-raise with the original traceback
yield txn.abort()
raise
returnValue(count)
@inlineCallbacks
def _purgeProxyAssignments(self, store, uid):
txn = store.newTransaction()
for readWrite in (True, False):
yield txn.removeDelegates(uid, readWrite)
yield txn.removeDelegateGroups(uid, readWrite)
yield txn.commit()
calendarserver-7.0+dfsg/calendarserver/tools/push.py
#!/usr/bin/env python
##
# Copyright (c) 2012-2015 Apple Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
##
from __future__ import print_function
from calendarserver.tools.cmdline import utilityMain, WorkerService
from argparse import ArgumentParser
from twext.python.log import Logger
from twisted.internet.defer import inlineCallbacks
from twext.who.idirectory import RecordType
import time
log = Logger()
class DisplayAPNSubscriptions(WorkerService):
users = []
def doWork(self):
rootResource = self.rootResource()
directory = rootResource.getDirectory()
return displayAPNSubscriptions(self.store, directory, rootResource,
self.users)
def main():
parser = ArgumentParser(description='Display Apple Push Notification subscriptions')
parser.add_argument('-f', '--config', dest='configFileName', metavar='CONFIGFILE', help='caldavd.plist configuration file path')
parser.add_argument('-d', '--debug', action='store_true', help='show debug logging')
parser.add_argument('user', help='one or more users to display', nargs='+') # Required
args = parser.parse_args()
DisplayAPNSubscriptions.users = args.user
utilityMain(
args.configFileName,
DisplayAPNSubscriptions,
verbose=args.debug,
)
@inlineCallbacks
def displayAPNSubscriptions(store, directory, root, users):
for user in users:
print()
record = yield directory.recordWithShortName(RecordType.user, user)
if record is not None:
print("User %s (%s)..." % (user, record.uid))
txn = store.newTransaction(label="Display APN Subscriptions")
subscriptions = (yield txn.apnSubscriptionsBySubscriber(record.uid))
(yield txn.commit())
if subscriptions:
byKey = {}
for apnrecord in subscriptions:
byKey.setdefault(apnrecord.resourceKey, []).append(apnrecord)
for key, apnsrecords in byKey.iteritems():
print()
protocol, _ignore_host, path = key.strip("/").split("/", 2)
resource = {
"CalDAV": "calendar",
"CardDAV": "addressbook",
}[protocol]
if "/" in path:
uid, collection = path.split("/")
else:
uid = path
collection = None
record = yield directory.recordWithUID(uid)
user = record.shortNames[0]
if collection:
print("...is subscribed to a share from %s's %s home" % (user, resource),)
else:
print("...is subscribed to %s's %s home" % (user, resource),)
# print(" (key: %s)\n" % (key,))
print("with %d device(s):" % (len(apnsrecords),))
for apnrecord in apnsrecords:
print(" %s\n '%s' from %s\n %s" % (
apnrecord.token, apnrecord.userAgent, apnrecord.ipAddr,
time.strftime(
"on %a, %d %b %Y at %H:%M:%S %z(%Z)",
time.localtime(apnrecord.modified)
)
))
else:
print(" ...is not subscribed to anything.")
else:
print("User %s not found" % (user,))
calendarserver-7.0+dfsg/calendarserver/tools/resources.py
#!/usr/bin/env python
##
# Copyright (c) 2006-2015 Apple Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
##
from __future__ import print_function
__all__ = [
"migrateResources",
]
from getopt import getopt, GetoptError
import os
import sys
from calendarserver.tools.cmdline import utilityMain, WorkerService
from twext.python.log import Logger
from twisted.internet.defer import inlineCallbacks, returnValue
from txdav.who.directory import CalendarDirectoryRecordMixin
from twext.who.directory import DirectoryRecord as BaseDirectoryRecord
from txdav.who.idirectory import RecordType
log = Logger()
class ResourceMigrationService(WorkerService):
@inlineCallbacks
def doWork(self):
try:
from txdav.who.opendirectory import (
DirectoryService as OpenDirectoryService
)
except ImportError:
returnValue(None)
sourceService = OpenDirectoryService()
sourceService.recordType = RecordType
destService = self.store.directoryService()
yield migrateResources(sourceService, destService)
def usage(e=None):
name = os.path.basename(sys.argv[0])
print("usage: %s [options] " % (name,))
print("")
print(" Migrates resources and locations from OD to Calendar Server")
print("")
print("options:")
print(" -h --help: print this help and exit")
print(" -f --config : Specify caldavd.plist configuration path")
print(" -v --verbose: print debugging information")
print("")
if e:
sys.stderr.write("%s\n" % (e,))
sys.exit(64)
else:
sys.exit(0)
def main():
try:
(optargs, _ignore_args) = getopt(
sys.argv[1:], "hf:v", [
"help",
"config=",
"verbose",
],
)
except GetoptError as e:
cls_usage = usage
cls_usage(e)
#
# Get configuration
#
configFileName = None
verbose = False
for opt, arg in optargs:
if opt in ("-h", "--help"):
usage()
elif opt in ("-f", "--config"):
configFileName = arg
elif opt in ("-v", "--verbose"):
verbose = True
else:
raise NotImplementedError(opt)
utilityMain(configFileName, ResourceMigrationService, verbose=verbose)
class DirectoryRecord(BaseDirectoryRecord, CalendarDirectoryRecordMixin):
pass
@inlineCallbacks
def migrateResources(sourceService, destService, verbose=False):
"""
Fetch all the locations and resources from sourceService that are not
already in destService and copy them into destService.
"""
destRecords = []
for recordType in (
RecordType.resource,
RecordType.location,
):
records = yield sourceService.recordsWithRecordType(recordType)
for sourceRecord in records:
destRecord = yield destService.recordWithUID(sourceRecord.uid)
if destRecord is None:
if verbose:
print(
"Migrating {recordType} {uid}".format(
recordType=recordType.name,
uid=sourceRecord.uid
)
)
fields = sourceRecord.fields.copy()
fields[destService.fieldName.recordType] = destService.recordType.lookupByName(recordType.name)
# Only interested in these fields:
fn = destService.fieldName
interestingFields = [
fn.recordType, fn.shortNames, fn.uid, fn.fullNames, fn.guid
]
for key in fields.keys():
if key not in interestingFields:
del fields[key]
destRecord = DirectoryRecord(destService, fields)
destRecords.append(destRecord)
if destRecords:
yield destService.updateRecords(destRecords, create=True)
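The field-pruning step in migrateResources above (keep only the interesting field names, delete the rest) can be shown on a plain dict. The helper name is illustrative; note that iterating over a copied key list keeps the mutation safe on Python 3 as well (`fields.keys()` is already a list copy on Python 2).

```python
# Sketch of the field-pruning step in migrateResources: drop every key
# that is not in the interesting set, mutating the dict in place.
def keep_fields(fields, interesting):
    for key in list(fields.keys()):
        if key not in interesting:
            del fields[key]
    return fields

record = {"uid": "u1", "shortNames": ["room1"], "password": "x"}
print(keep_fields(record, {"uid", "shortNames"}))
```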
if __name__ == "__main__":
main()
calendarserver-7.0+dfsg/calendarserver/tools/shell/__init__.py
##
# Copyright (c) 2012-2015 Apple Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
##
"""
Data store interactive shell.
"""
calendarserver-7.0+dfsg/calendarserver/tools/shell/cmd.py
##
# Copyright (c) 2011-2015 Apple Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
##
"""
Data store commands.
"""
__all__ = [
"UsageError",
"UnknownArguments",
"CommandsBase",
"Commands",
]
from getopt import getopt
from twext.python.log import Logger
from twisted.internet.defer import succeed
from twisted.internet.defer import inlineCallbacks, returnValue
from twisted.conch.manhole import ManholeInterpreter
from txdav.common.icommondatastore import NotFoundError
from calendarserver.version import version
from calendarserver.tools.tables import Table
from calendarserver.tools.purge import PurgePrincipalService
from calendarserver.tools.shell.vfs import Folder, RootFolder
from calendarserver.tools.shell.directory import findRecords, summarizeRecords, recordInfo
log = Logger()
class UsageError(Exception):
"""
Usage error.
"""
class UnknownArguments(UsageError):
"""
Unknown arguments.
"""
def __init__(self, arguments):
UsageError.__init__(self, "Unknown arguments: %s" % (arguments,))
self.arguments = arguments
class InsufficientArguments(UsageError):
"""
Insufficient arguments.
"""
def __init__(self):
UsageError.__init__(self, "Insufficient arguments.")
class CommandsBase(object):
"""
Base class for commands.
@ivar protocol: a protocol for parsing the incoming command line.
@type protocol: L{calendarserver.tools.shell.terminal.ShellProtocol}
"""
def __init__(self, protocol):
self.protocol = protocol
self.wd = RootFolder(protocol.service)
@property
def terminal(self):
return self.protocol.terminal
#
# Utilities
#
def documentationForCommand(self, command):
"""
@return: the documentation for the given C{command} as a
string.
"""
m = getattr(self, "cmd_%s" % (command,), None)
if m:
doc = (m.__doc__ or "").split("\n")
# Throw out first and last line if it's empty
if doc:
if not doc[0].strip():
doc.pop(0)
if not doc[-1].strip():
doc.pop()
if doc:
# Get length of indentation
i = len(doc[0]) - len(doc[0].lstrip())
result = []
for line in doc:
result.append(line[i:])
return "\n".join(result)
else:
self.terminal.write("(No documentation available for %s)\n" % (command,))
else:
raise NotFoundError("Unknown command: %s" % (command,))
def getTarget(self, tokens, wdFallback=False):
"""
Pops the first token from C{tokens} and locates the File
indicated by that token.
@return: a C{File}.
"""
if tokens:
return self.wd.locate(tokens.pop(0).split("/"))
else:
if wdFallback:
return succeed(self.wd)
else:
return succeed(None)
@inlineCallbacks
def getTargets(self, tokens, wdFallback=False):
"""
For each given C{token}, locate a File to operate on.
@return: iterable of C{File} objects.
"""
if tokens:
result = []
for token in tokens:
try:
target = (yield self.wd.locate(token.split("/")))
except NotFoundError:
raise UsageError("No such target: %s" % (token,))
result.append(target)
returnValue(result)
else:
if wdFallback:
returnValue((self.wd,))
else:
returnValue(())
@inlineCallbacks
def directoryRecordWithID(self, id):
"""
Obtains a directory record corresponding to the given C{id}.
        C{id} is assumed to be a record UID.  For convenience, it may
        also take the form C{type:name}, where C{type} is a record
        type and C{name} is a record short name.
@return: an C{IDirectoryRecord}
"""
directory = self.protocol.service.directory
record = yield directory.recordWithUID(id)
if not record:
# Try type:name form
try:
recordType, shortName = id.split(":")
except ValueError:
pass
else:
record = yield directory.recordWithShortName(recordType, shortName)
returnValue(record)
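The UID-or-C{type:name} fallback above can be illustrated with a plain parser; ``parseID`` is a hypothetical sketch, not an API of the shell:

```python
def parseID(id):
    # A bare record UID, or the convenience form "recordType:shortName".
    # Anything that does not split into exactly two fields is treated
    # as a UID, matching the ValueError fallback in the shell code.
    try:
        recordType, shortName = id.split(":")
    except ValueError:
        return ("uid", id)
    return (recordType, shortName)

print(parseID("users:wsanchez"))
print(parseID("6423F94A-6B76-4A3A-815B-D52CFD77935D"))
```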
def commands(self, showHidden=False):
"""
@return: an iterable of C{(name, method)} tuples, where
C{name} is the name of the command and C{method} is the method
that implements it.
"""
for attr in dir(self):
if attr.startswith("cmd_"):
m = getattr(self, attr)
if showHidden or not hasattr(m, "hidden"):
yield (attr[4:], m)
@staticmethod
def complete(word, items):
"""
List completions for the given C{word} from the given
C{items}.
Completions are the remaining portions of words in C{items}
that start with C{word}.
For example, if C{"foobar"} and C{"foo"} are in C{items}, then
        C{""} and C{"bar"} are completions when C{word} is C{"foo"}.
@return: an iterable of completions.
"""
for item in items:
if item.startswith(word):
yield item[len(word):]
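The example in the docstring can be checked with a standalone copy of the same prefix rule:

```python
def complete(word, items):
    # Yield the remainder of each item that starts with the given word.
    for item in items:
        if item.startswith(word):
            yield item[len(word):]

print(sorted(complete("foo", ["foobar", "foo", "baz"])))  # ['', 'bar']
```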
def complete_commands(self, word):
"""
@return: an iterable of command name completions.
"""
def complete(showHidden):
return self.complete(
word,
(name for name, method in self.commands(showHidden=showHidden))
)
completions = tuple(complete(False))
# If no completions are found, try hidden commands.
if not completions:
completions = complete(True)
return completions
@inlineCallbacks
def complete_files(self, tokens, filter=None):
"""
@return: an iterable of C{File} path completions.
"""
if filter is None:
filter = lambda item: True
if tokens:
token = tokens[-1]
i = token.rfind("/")
if i == -1:
# No "/" in token
base = self.wd
word = token
else:
base = (yield self.wd.locate(token[:i].split("/")))
word = token[i + 1:]
else:
base = self.wd
word = ""
files = (
entry.toString()
for entry in (yield base.list())
if filter(entry)
)
if len(tokens) == 0:
returnValue(files)
else:
returnValue(self.complete(word, files))
class Commands(CommandsBase):
"""
Data store commands.
"""
#
# Basic CLI tools
#
def cmd_exit(self, tokens):
"""
Exit the shell.
usage: exit
"""
if tokens:
raise UnknownArguments(tokens)
self.protocol.exit()
def cmd_help(self, tokens):
"""
Show help.
usage: help [command]
"""
if tokens:
command = tokens.pop(0)
else:
command = None
if tokens:
raise UnknownArguments(tokens)
if command:
self.terminal.write(self.documentationForCommand(command))
self.terminal.nextLine()
else:
self.terminal.write("Available commands:\n")
result = []
max_len = 0
for name, m in self.commands():
            for line in (m.__doc__ or "").split("\n"):
line = line.strip()
if line:
doc = line
break
else:
doc = "(no info available)"
if len(name) > max_len:
max_len = len(name)
result.append((name, doc))
format = " %%%ds - %%s\n" % (max_len,)
for info in sorted(result):
self.terminal.write(format % (info))
def complete_help(self, tokens):
if len(tokens) == 0:
return (name for name, method in self.commands())
elif len(tokens) == 1:
return self.complete_commands(tokens[0])
else:
return ()
def cmd_emulate(self, tokens):
"""
Emulate editor behavior.
        Supported editors are: emacs, none
usage: emulate editor
"""
if not tokens:
if self.protocol.emulate:
self.terminal.write("Emulating %s.\n" % (self.protocol.emulate,))
else:
self.terminal.write("Emulation disabled.\n")
return
editor = tokens.pop(0).lower()
if tokens:
raise UnknownArguments(tokens)
if editor == "none":
self.terminal.write("Disabling emulation.\n")
editor = None
elif editor in self.protocol.emulation_modes:
self.terminal.write("Emulating %s.\n" % (editor,))
else:
raise UsageError("Unknown editor: %s" % (editor,))
self.protocol.emulate = editor
# FIXME: Need to update key registrations
cmd_emulate.hidden = "incomplete"
def complete_emulate(self, tokens):
if len(tokens) == 0:
return self.protocol.emulation_modes
elif len(tokens) == 1:
return self.complete(tokens[0], self.protocol.emulation_modes)
else:
return ()
def cmd_log(self, tokens):
"""
Enable logging.
usage: log [file]
"""
if hasattr(self, "_logFile"):
self.terminal.write("Already logging to file: %s\n" % (self._logFile,))
return
if tokens:
fileName = tokens.pop(0)
else:
fileName = "/tmp/shell.log"
if tokens:
raise UnknownArguments(tokens)
from twisted.python.log import startLogging
try:
f = open(fileName, "w")
except (IOError, OSError), e:
self.terminal.write("Unable to open file %s: %s\n" % (fileName, e))
return
startLogging(f)
self._logFile = fileName
cmd_log.hidden = "debug tool"
def cmd_version(self, tokens):
"""
Print version.
usage: version
"""
if tokens:
raise UnknownArguments(tokens)
self.terminal.write("%s\n" % (version,))
#
# Filesystem tools
#
def cmd_pwd(self, tokens):
"""
Print working folder.
usage: pwd
"""
if tokens:
raise UnknownArguments(tokens)
self.terminal.write("%s\n" % (self.wd,))
@inlineCallbacks
def cmd_cd(self, tokens):
"""
Change working folder.
usage: cd [folder]
"""
if tokens:
dirname = tokens.pop(0)
else:
return
if tokens:
raise UnknownArguments(tokens)
wd = (yield self.wd.locate(dirname.split("/")))
if not isinstance(wd, Folder):
raise NotFoundError("Not a folder: %s" % (wd,))
# log.info("wd -> %s" % (wd,))
self.wd = wd
@inlineCallbacks
def complete_cd(self, tokens):
returnValue((yield self.complete_files(
tokens,
filter=lambda item: True #issubclass(item[0], Folder)
)))
@inlineCallbacks
def cmd_ls(self, tokens):
"""
List target.
usage: ls [target ...]
"""
targets = (yield self.getTargets(tokens, wdFallback=True))
        multiple = len(targets) > 1
for target in targets:
entries = sorted((yield target.list()), key=lambda e: e.fileName)
#
# FIXME: this can be ugly if, for example, there are zillions
# of entries to output. Paging would be good.
#
table = Table()
for entry in entries:
table.addRow(entry.toFields())
if multiple:
self.terminal.write("%s:\n" % (target,))
if table.rows:
table.printTable(self.terminal)
self.terminal.nextLine()
complete_ls = CommandsBase.complete_files
@inlineCallbacks
def cmd_info(self, tokens):
"""
Print information about a target.
usage: info [target]
"""
target = (yield self.getTarget(tokens, wdFallback=True))
if tokens:
raise UnknownArguments(tokens)
description = (yield target.describe())
self.terminal.write(description)
self.terminal.nextLine()
complete_info = CommandsBase.complete_files
@inlineCallbacks
def cmd_cat(self, tokens):
"""
Show contents of target.
usage: cat target [target ...]
"""
targets = (yield self.getTargets(tokens))
if not targets:
raise InsufficientArguments()
for target in targets:
if hasattr(target, "text"):
text = (yield target.text())
self.terminal.write(text)
complete_cat = CommandsBase.complete_files
@inlineCallbacks
def cmd_rm(self, tokens):
"""
Remove target.
usage: rm target [target ...]
"""
options, tokens = getopt(tokens, "", ["no-implicit"])
implicit = True
for option, _ignore_value in options:
if option == "--no-implicit":
# Not in docstring; this is really dangerous.
implicit = False
else:
                raise AssertionError("We shouldn't be here.")
targets = (yield self.getTargets(tokens))
if not targets:
raise InsufficientArguments()
for target in targets:
if hasattr(target, "delete"):
target.delete(implicit=implicit)
else:
                self.terminal.write("Cannot delete read-only target: %s\n" % (target,))
cmd_rm.hidden = "Incomplete"
complete_rm = CommandsBase.complete_files
#
# Principal tools
#
@inlineCallbacks
def cmd_find_principals(self, tokens):
"""
Search for matching principals
        usage: find_principals search_term
"""
if not tokens:
raise UsageError("No search term")
directory = self.protocol.service.directory
records = (yield findRecords(directory, tokens))
if records:
self.terminal.write((yield summarizeRecords(directory, records)))
else:
self.terminal.write("No matching principals found.")
self.terminal.nextLine()
@inlineCallbacks
def cmd_print_principal(self, tokens):
"""
Print information about a principal.
usage: print_principal principal_id
"""
if tokens:
id = tokens.pop(0)
else:
raise UsageError("Principal ID required")
if tokens:
raise UnknownArguments(tokens)
directory = self.protocol.service.directory
record = yield self.directoryRecordWithID(id)
if record:
self.terminal.write((yield recordInfo(directory, record)))
else:
self.terminal.write("No such principal.")
self.terminal.nextLine()
#
# Data purge tools
#
@inlineCallbacks
def cmd_purge_principals(self, tokens):
"""
        Purge data associated with the given principals.
usage: purge_principals principal_id [principal_id ...]
"""
dryRun = True
completely = False
doimplicit = True
directory = self.protocol.service.directory
records = []
for id in tokens:
record = yield self.directoryRecordWithID(id)
records.append(record)
if not record:
self.terminal.write("Unknown UID: %s\n" % (id,))
if None in records:
self.terminal.write("Aborting.\n")
return
if dryRun:
toPurge = "to purge"
else:
toPurge = "purged"
total = 0
for record in records:
count, _ignore_assignments = (yield PurgePrincipalService.purgeUIDs(
self.protocol.service.store,
directory,
(record.uid,),
verbose=False,
dryrun=dryRun,
completely=completely,
doimplicit=doimplicit,
))
total += count
self.terminal.write(
"%d events %s for UID %s.\n"
% (count, toPurge, record.uid)
)
self.terminal.write(
"%d total events %s.\n"
% (total, toPurge)
)
cmd_purge_principals.hidden = "incomplete"
#
# Sharing
#
@inlineCallbacks
def cmd_share(self, tokens):
"""
Share a resource with a principal.
usage: share mode principal_id target [target ...]
mode: r (read) or rw (read/write)
"""
if len(tokens) < 3:
raise InsufficientArguments()
mode = tokens.pop(0)
principalID = tokens.pop(0)
        record = yield self.directoryRecordWithID(principalID)
        if not record:
            self.terminal.write("Principal not found: %s\n" % (principalID,))
            return
        targets = yield self.getTargets(tokens)
        # FIXME: Map these onto real share modes once sharing is implemented.
        if mode == "r":
            mode = None
        elif mode == "rw":
            mode = None
        else:
            raise UsageError("Unknown mode: %s" % (mode,))
for _ignore_target in targets:
raise NotImplementedError()
cmd_share.hidden = "incomplete"
#
# Python prompt, for the win
#
def cmd_python(self, tokens):
"""
Switch to a python prompt.
usage: python
"""
if tokens:
raise UnknownArguments(tokens)
if not hasattr(self, "_interpreter"):
# Bring in some helpful local variables.
from txdav.common.datastore.sql_tables import schema
from twext.enterprise.dal import syntax
localVariables = dict(
self=self,
store=self.protocol.service.store,
schema=schema,
)
# FIXME: Use syntax.__all__, which needs to be defined
for key, value in syntax.__dict__.items():
if not key.startswith("_"):
localVariables[key] = value
class Handler(object):
def addOutput(innerSelf, bytes, async=False): #@NoSelf
"""
This is a delegate method, called by ManholeInterpreter.
"""
if async:
self.terminal.write("... interrupted for Deferred ...\n")
self.terminal.write(bytes)
if async:
self.terminal.write("\n")
self.protocol.drawInputLine()
self._interpreter = ManholeInterpreter(Handler(), localVariables)
def evalSomePython(line):
if line == "exit":
# Return to normal command mode.
del self.protocol.lineReceived
del self.protocol.ps
try:
del self.protocol.pn
except AttributeError:
pass
self.protocol.drawInputLine()
return
more = self._interpreter.push(line)
self.protocol.pn = bool(more)
lw = self.terminal.lastWrite
if not (lw.endswith("\n") or lw.endswith("\x1bE")):
self.terminal.write("\n")
self.protocol.drawInputLine()
self.protocol.lineReceived = evalSomePython
self.protocol.ps = (">>> ", "... ")
cmd_python.hidden = "debug tool"
#
# SQL prompt, for not as winning
#
def cmd_sql(self, tokens):
"""
Switch to an SQL prompt.
usage: sql
"""
if tokens:
raise UnknownArguments(tokens)
raise NotImplementedError("Command not implemented")
cmd_sql.hidden = "not implemented"
#
# Test tools
#
def cmd_raise(self, tokens):
"""
Raises an exception.
usage: raise [message ...]
"""
raise RuntimeError(" ".join(tokens))
cmd_raise.hidden = "test tool"
def cmd_reload(self, tokens):
"""
Reloads code.
usage: reload
"""
if tokens:
raise UnknownArguments(tokens)
import calendarserver.tools.shell.vfs
reload(calendarserver.tools.shell.vfs)
import calendarserver.tools.shell.directory
reload(calendarserver.tools.shell.directory)
self.protocol.reloadCommands()
cmd_reload.hidden = "test tool"
def cmd_xyzzy(self, tokens):
"""
"""
self.terminal.write("Nothing happens.")
self.terminal.nextLine()
    cmd_xyzzy.hidden = ""
calendarserver-7.0+dfsg/calendarserver/tools/shell/directory.py
##
# Copyright (c) 2012-2015 Apple Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
##
"""
Directory tools
"""
__all__ = [
"findRecords",
"recordInfo",
"recordBasicInfo",
"recordGroupMembershipInfo",
"recordProxyAccessInfo",
]
import operator
from twisted.internet.defer import succeed
from twisted.internet.defer import inlineCallbacks, returnValue
from calendarserver.tools.tables import Table
@inlineCallbacks
def findRecords(directory, terms):
records = tuple((yield directory.recordsMatchingTokens(terms)))
returnValue(sorted(records, key=operator.attrgetter("fullName")))
@inlineCallbacks
def recordInfo(directory, record):
"""
Complete record information.
"""
info = []
def add(name, subInfo):
if subInfo:
info.append("%s:" % (name,))
info.append(subInfo)
add("Directory record" , (yield recordBasicInfo(directory, record)))
add("Group memberships", (yield recordGroupMembershipInfo(directory, record)))
add("Proxy access" , (yield recordProxyAccessInfo(directory, record)))
returnValue("\n".join(info))
def recordBasicInfo(directory, record):
"""
Basic information for a record.
"""
table = Table()
def add(name, value):
if value:
table.addRow((name, value))
add("Service" , record.service)
add("Record Type", record.recordType)
for shortName in record.shortNames:
add("Short Name", shortName)
add("GUID" , record.guid)
add("Full Name" , record.fullName)
add("First Name", record.firstName)
add("Last Name" , record.lastName)
try:
for email in record.emailAddresses:
add("Email Address", email)
except AttributeError:
pass
try:
for cua in record.calendarUserAddresses:
add("Calendar User Address", cua)
except AttributeError:
pass
add("Server ID" , record.serverID)
add("Enabled" , record.enabled)
add("Enabled for Calendar", record.hasCalendars)
add("Enabled for Contacts", record.hasContacts)
return succeed(table.toString())
def recordGroupMembershipInfo(directory, record):
"""
Group membership info for a record.
"""
rows = []
for group in record.groups():
rows.append((group.uid, group.shortNames[0], group.fullName))
if not rows:
return succeed(None)
rows = sorted(
rows,
key=lambda row: (row[1], row[2])
)
table = Table()
table.addHeader(("UID", "Short Name", "Full Name"))
for row in rows:
table.addRow(row)
return succeed(table.toString())
@inlineCallbacks
def recordProxyAccessInfo(directory, record):
"""
Group membership info for a record.
"""
# FIXME: This proxy finding logic should be in DirectoryRecord.
def meAndMyGroups(record=record, groups=set((record,))):
for group in record.groups():
groups.add(group)
meAndMyGroups(group, groups)
return groups
# FIXME: This module global is really gross.
from twistedcaldav.directory.calendaruserproxy import ProxyDBService
rows = []
proxyInfoSeen = set()
for record in meAndMyGroups():
proxyUIDs = (yield ProxyDBService.getMemberships(record.uid))
for proxyUID in proxyUIDs:
# These are of the form: F153A05B-FF27-4B6C-BD6D-D1239D0082B0#calendar-proxy-read
# I don't know how to get DirectoryRecord objects for the proxyUID here, so, let's cheat for now.
proxyUID, proxyType = proxyUID.split("#")
if (proxyUID, proxyType) not in proxyInfoSeen:
proxyRecord = yield directory.recordWithUID(proxyUID)
rows.append((proxyUID, proxyRecord.recordType, proxyRecord.shortNames[0], proxyRecord.fullName, proxyType))
proxyInfoSeen.add((proxyUID, proxyType))
if not rows:
returnValue(None)
rows = sorted(
rows,
key=lambda row: (row[1], row[2], row[4])
)
table = Table()
table.addHeader(("UID", "Record Type", "Short Name", "Full Name", "Access"))
for row in rows:
table.addRow(row)
returnValue(table.toString())
def summarizeRecords(directory, records):
table = Table()
table.addHeader((
"UID",
"Record Type",
"Short Names",
"Email Addresses",
"Full Name",
))
def formatItems(items):
if items:
return ", ".join(items)
else:
return None
for record in records:
table.addRow((
record.uid,
record.recordType,
formatItems(record.shortNames),
formatItems(record.emailAddresses),
record.fullName,
))
if table.rows:
return succeed(table.toString())
else:
return succeed(None)
calendarserver-7.0+dfsg/calendarserver/tools/shell/terminal.py
##
# Copyright (c) 2011-2015 Apple Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
##
from __future__ import print_function
"""
Interactive shell for terminals.
"""
__all__ = [
"usage",
"ShellOptions",
"ShellService",
"ShellProtocol",
"main",
]
import string
import os
import sys
import tty
import termios
from shlex import shlex
from twisted.python.failure import Failure
from twisted.python.text import wordWrap
from twisted.python.usage import Options, UsageError
from twisted.internet.defer import Deferred, succeed
from twisted.internet.defer import inlineCallbacks
from twisted.internet.stdio import StandardIO
from twisted.conch.recvline import HistoricRecvLine as ReceiveLineProtocol
from twisted.conch.insults.insults import ServerProtocol
from twext.python.log import Logger
from txdav.common.icommondatastore import NotFoundError
from twistedcaldav.stdconfig import DEFAULT_CONFIG_FILE
from calendarserver.tools.cmdline import utilityMain, WorkerService
from calendarserver.tools.shell.cmd import Commands, UsageError as CommandUsageError
log = Logger()
def usage(e=None):
if e:
print(e)
print("")
try:
ShellOptions().opt_help()
except SystemExit:
pass
if e:
sys.exit(64)
else:
sys.exit(0)
class ShellOptions(Options):
"""
Command line options for "calendarserver_shell".
"""
synopsis = "\n".join(
wordWrap(
"""
Usage: calendarserver_shell [options]\n
""" + __doc__,
int(os.environ.get("COLUMNS", "80"))
)
)
optParameters = [
["config", "f", DEFAULT_CONFIG_FILE, "Specify caldavd.plist configuration path."],
]
def __init__(self):
super(ShellOptions, self).__init__()
class ShellService(WorkerService, object):
"""
A L{ShellService} collects all the information that a shell needs to run;
when run, it invokes the shell on stdin/stdout.
@ivar store: the calendar / addressbook store.
@type store: L{txdav.idav.IDataStore}
@ivar directory: the directory service, to look up principals' names
@type directory: L{twistedcaldav.directory.idirectory.IDirectoryService}
@ivar options: the command-line options used to create this shell service
@type options: L{ShellOptions}
@ivar reactor: the reactor under which this service is running
@type reactor: L{IReactorTCP}, L{IReactorTime}, L{IReactorThreads} etc
@ivar config: the configuration associated with this shell service.
@type config: L{twistedcaldav.config.Config}
"""
def __init__(self, store, options, reactor, config):
super(ShellService, self).__init__(store)
self.directory = store.directoryService()
self.options = options
self.reactor = reactor
self.config = config
self.terminalFD = None
self.protocol = None
def doWork(self):
"""
Service startup.
"""
# Set up the terminal for interactive action
self.terminalFD = sys.__stdin__.fileno()
self._oldTerminalSettings = termios.tcgetattr(self.terminalFD)
tty.setraw(self.terminalFD)
self.protocol = ServerProtocol(lambda: ShellProtocol(self))
StandardIO(self.protocol)
return succeed(None)
def postStartService(self):
"""
Don't quit right away
"""
pass
def stopService(self):
"""
Stop the service.
"""
# Restore terminal settings
termios.tcsetattr(self.terminalFD, termios.TCSANOW, self._oldTerminalSettings)
os.write(self.terminalFD, "\r\x1bc\r")
class ShellProtocol(ReceiveLineProtocol):
"""
Data store shell protocol.
@ivar service: a service representing the running shell
@type service: L{ShellService}
"""
# FIXME:
# * Received lines are being echoed; find out why and stop it.
# * Backspace transposes characters in the terminal.
ps = ("ds% ", "... ")
emulation_modes = ("emacs", "none")
def __init__(self, service, commandsClass=Commands):
ReceiveLineProtocol.__init__(self)
self.service = service
self.inputLines = []
self.commands = commandsClass(self)
self.activeCommand = None
self.emulate = "emacs"
def reloadCommands(self):
# FIXME: doesn't work for alternative Commands classes passed
# to __init__.
self.terminal.write("Reloading commands class...\n")
import calendarserver.tools.shell.cmd
reload(calendarserver.tools.shell.cmd)
self.commands = calendarserver.tools.shell.cmd.Commands(self)
#
# Input handling
#
def connectionMade(self):
ReceiveLineProtocol.connectionMade(self)
self.keyHandlers['\x03'] = self.handle_INT # Control-C
self.keyHandlers['\x04'] = self.handle_EOF # Control-D
self.keyHandlers['\x1c'] = self.handle_QUIT # Control-\
self.keyHandlers['\x0c'] = self.handle_FF # Control-L
# self.keyHandlers['\t' ] = self.handle_TAB # Tab
if self.emulate == "emacs":
# EMACS key bindinds
self.keyHandlers['\x10'] = self.handle_UP # Control-P
self.keyHandlers['\x0e'] = self.handle_DOWN # Control-N
self.keyHandlers['\x02'] = self.handle_LEFT # Control-B
self.keyHandlers['\x06'] = self.handle_RIGHT # Control-F
self.keyHandlers['\x01'] = self.handle_HOME # Control-A
self.keyHandlers['\x05'] = self.handle_END # Control-E
def observer(event):
if not event["isError"]:
return
text = log.textFromEventDict(event)
if text is None:
return
self.service.reactor.callFromThread(self.terminal.write, text)
log.startLoggingWithObserver(observer)
def handle_INT(self):
return self.resetInputLine()
def handle_EOF(self):
if self.lineBuffer:
if self.emulate == "emacs":
self.handle_DELETE()
else:
self.terminal.write("\a")
else:
self.handle_QUIT()
def handle_FF(self):
"""
Handle a "form feed" byte - generally used to request a screen
refresh/redraw.
"""
# FIXME: Clear screen != redraw screen.
return self.clearScreen()
def handle_QUIT(self):
return self.exit()
def handle_TAB(self):
return self.completeLine()
#
# Utilities
#
def clearScreen(self):
"""
Clear the display.
"""
self.terminal.eraseDisplay()
self.terminal.cursorHome()
self.drawInputLine()
def resetInputLine(self):
"""
Reset the current input variables to their initial state.
"""
self.pn = 0
self.lineBuffer = []
self.lineBufferIndex = 0
self.terminal.nextLine()
self.drawInputLine()
@inlineCallbacks
def completeLine(self):
"""
Perform auto-completion on the input line.
"""
# Tokenize the text before the cursor
tokens = self.tokenize("".join(self.lineBuffer[:self.lineBufferIndex]))
if tokens:
if len(tokens) == 1 and self.lineBuffer[-1] in string.whitespace:
word = ""
else:
word = tokens[-1]
cmd = tokens.pop(0)
else:
word = cmd = ""
if cmd and (tokens or word == ""):
# Completing arguments
m = getattr(self.commands, "complete_%s" % (cmd,), None)
if not m:
return
try:
completions = tuple((yield m(tokens)))
except Exception, e:
self.handleFailure(Failure(e))
return
log.info("COMPLETIONS: %r" % (completions,))
else:
# Completing command name
completions = tuple(self.commands.complete_commands(cmd))
if len(completions) == 1:
for c in completions.__iter__().next():
self.characterReceived(c, True)
# FIXME: Add a space only if we know we've fully completed the term.
# self.characterReceived(" ", False)
else:
self.terminal.nextLine()
for completion in completions:
# FIXME Emitting these in columns would be swell
self.terminal.write("%s%s\n" % (word, completion))
self.drawInputLine()
def exit(self):
"""
Exit.
"""
self.terminal.loseConnection()
self.service.reactor.stop()
def handleFailure(self, f):
"""
        Handle a failure raised in the interpreter by printing a
        traceback and resetting the input line.
"""
if self.lineBuffer:
self.terminal.nextLine()
self.terminal.write("Error: %s !!!" % (f.value,))
if not f.check(NotImplementedError, NotFoundError):
log.info(f.getTraceback())
self.resetInputLine()
#
# Command dispatch
#
def lineReceived(self, line):
if self.activeCommand is not None:
self.inputLines.append(line)
return
tokens = self.tokenize(line)
if tokens:
cmd = tokens.pop(0)
# print("Arguments: %r" % (tokens,))
m = getattr(self.commands, "cmd_%s" % (cmd,), None)
if m:
def handleUsageError(f):
f.trap(CommandUsageError)
self.terminal.write("%s\n" % (f.value,))
doc = self.commands.documentationForCommand(cmd)
if doc:
self.terminal.nextLine()
self.terminal.write(doc)
self.terminal.nextLine()
def next(_):
self.activeCommand = None
if self.inputLines:
line = self.inputLines.pop(0)
self.lineReceived(line)
d = self.activeCommand = Deferred()
d.addCallback(lambda _: m(tokens))
if True:
d.callback(None)
else:
# Add time to test callbacks
self.service.reactor.callLater(4, d.callback, None)
d.addErrback(handleUsageError)
d.addCallback(lambda _: self.drawInputLine())
d.addErrback(self.handleFailure)
d.addCallback(next)
else:
self.terminal.write("Unknown command: %s\n" % (cmd,))
self.drawInputLine()
else:
self.drawInputLine()
@staticmethod
def tokenize(line):
"""
Tokenize input line.
@return: an iterable of tokens
"""
lexer = shlex(line)
lexer.whitespace_split = True
tokens = []
while True:
token = lexer.get_token()
if not token:
break
tokens.append(token)
return tokens
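The tokenizer above can be exercised standalone; this is a minimal copy of the same shlex-with-whitespace_split loop:

```python
from shlex import shlex


def tokenize(line):
    # Split on whitespace only; shlex handles quoting and returns an
    # empty token at end of input, which terminates the loop.
    lexer = shlex(line)
    lexer.whitespace_split = True
    tokens = []
    while True:
        token = lexer.get_token()
        if not token:
            break
        tokens.append(token)
    return tokens

print(tokenize("cd users/wsanchez"))  # ['cd', 'users/wsanchez']
```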
def main(argv=sys.argv, stderr=sys.stderr, reactor=None):
if reactor is None:
from twisted.internet import reactor
options = ShellOptions()
try:
options.parseOptions(argv[1:])
except UsageError, e:
usage(e)
def makeService(store):
from twistedcaldav.config import config
return ShellService(store, options, reactor, config)
print("Initializing shell...")
utilityMain(options["config"], makeService, reactor)
calendarserver-7.0+dfsg/calendarserver/tools/shell/test/__init__.py
##
# Copyright (c) 2012-2015 Apple Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
##
"""
Data store interactive shell.
"""
calendarserver-7.0+dfsg/calendarserver/tools/shell/test/test_cmd.py
#!/usr/bin/env python
##
# Copyright (c) 2012-2015 Apple Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
##
import twisted.trial.unittest
from twisted.internet.defer import inlineCallbacks
from txdav.common.icommondatastore import NotFoundError
from calendarserver.tools.shell.cmd import CommandsBase
from calendarserver.tools.shell.terminal import ShellProtocol
class TestCommandsBase(twisted.trial.unittest.TestCase):
def setUp(self):
self.protocol = ShellProtocol(None, commandsClass=CommandsBase)
self.commands = self.protocol.commands
@inlineCallbacks
def test_getTargetNone(self):
target = (yield self.commands.getTarget([], wdFallback=True))
self.assertEquals(target, self.commands.wd)
target = (yield self.commands.getTarget([]))
self.assertEquals(target, None)
def test_getTargetMissing(self):
return self.assertFailure(self.commands.getTarget(["/foo"]), NotFoundError)
@inlineCallbacks
def test_getTargetOne(self):
target = (yield self.commands.getTarget(["users"]))
match = (yield self.commands.wd.locate(["users"]))
self.assertEquals(target, match)
@inlineCallbacks
def test_getTargetSome(self):
target = (yield self.commands.getTarget(["users", "blah"]))
match = (yield self.commands.wd.locate(["users"]))
self.assertEquals(target, match)
def test_commandsNone(self):
allCommands = self.commands.commands()
self.assertEquals(sorted(allCommands), [])
allCommands = self.commands.commands(showHidden=True)
self.assertEquals(sorted(allCommands), [])
def test_commandsSome(self):
protocol = ShellProtocol(None, commandsClass=SomeCommands)
commands = protocol.commands
allCommands = commands.commands()
self.assertEquals(
sorted(allCommands),
[
("a", commands.cmd_a),
("b", commands.cmd_b),
]
)
allCommands = commands.commands(showHidden=True)
self.assertEquals(
sorted(allCommands),
[
("a", commands.cmd_a),
("b", commands.cmd_b),
("hidden", commands.cmd_hidden),
]
)
def test_complete(self):
items = (
"foo",
"bar",
"foobar",
"baz",
"quux",
)
def c(word):
return sorted(CommandsBase.complete(word, items))
self.assertEquals(c(""), sorted(items))
self.assertEquals(c("f"), ["oo", "oobar"])
self.assertEquals(c("foo"), ["", "bar"])
self.assertEquals(c("foobar"), [""])
self.assertEquals(c("foobars"), [])
self.assertEquals(c("baz"), [""])
self.assertEquals(c("q"), ["uux"])
self.assertEquals(c("xyzzy"), [])
def test_completeCommands(self):
protocol = ShellProtocol(None, commandsClass=SomeCommands)
commands = protocol.commands
def c(word):
return sorted(commands.complete_commands(word))
self.assertEquals(c(""), ["a", "b"])
self.assertEquals(c("a"), [""])
self.assertEquals(c("h"), ["idden"])
self.assertEquals(c("f"), [])
@inlineCallbacks
def _test_completeFiles(self, tests):
protocol = ShellProtocol(None, commandsClass=SomeCommands)
commands = protocol.commands
def c(word):
# One token
d = commands.complete_files((word,))
d.addCallback(lambda c: sorted(c))
return d
def d(word):
# Multiple tokens
d = commands.complete_files(("XYZZY", word))
d.addCallback(lambda c: sorted(c))
return d
def e(word):
# No tokens
d = commands.complete_files(())
d.addCallback(lambda c: sorted(c))
return d
for word, completions in tests:
if word is None:
self.assertEquals((yield e(word)), completions, "Completing %r" % (word,))
else:
self.assertEquals((yield c(word)), completions, "Completing %r" % (word,))
self.assertEquals((yield d(word)), completions, "Completing %r" % (word,))
def test_completeFilesLevelOne(self):
return self._test_completeFiles((
(None , ["groups/", "locations/", "resources/", "uids/", "users/"]),
("" , ["groups/", "locations/", "resources/", "uids/", "users/"]),
("u" , ["ids/", "sers/"]),
("g" , ["roups/"]),
("gr" , ["oups/"]),
("groups", ["/"]),
))
def test_completeFilesLevelOneSlash(self):
return self._test_completeFiles((
("/" , ["groups/", "locations/", "resources/", "uids/", "users/"]),
("/u" , ["ids/", "sers/"]),
("/g" , ["roups/"]),
("/gr" , ["oups/"]),
("/groups", ["/"]),
))
def test_completeFilesDirectory(self):
return self._test_completeFiles((
("users/" , ["wsanchez", "admin"]), # FIXME: Look up users
))
test_completeFilesDirectory.todo = "Doesn't work yet"
def test_completeFilesLevelTwo(self):
return self._test_completeFiles((
("users/w" , ["sanchez"]), # FIXME: Look up users?
))
test_completeFilesLevelTwo.todo = "Doesn't work yet"
class SomeCommands(CommandsBase):
def cmd_a(self, tokens):
pass
def cmd_b(self, tokens):
pass
def cmd_hidden(self, tokens):
pass
cmd_hidden.hidden = "Hidden"
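The completion tests above assert suffix-style behavior: each candidate that starts with the typed word contributes only its remainder, and an exact match contributes the empty string. A minimal standalone sketch of that contract (the real `CommandsBase.complete` implementation may differ in detail):

```python
def complete(word, items):
    # Yield the unmatched tail of every candidate that starts with `word`;
    # an exact match contributes "" (meaning "nothing left to type").
    return [item[len(word):] for item in items if item.startswith(word)]
```

This is why `c("f")` yields `["oo", "oobar"]` and `c("foobars")` yields `[]` in the assertions above.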
calendarserver-7.0+dfsg/calendarserver/tools/shell/test/test_vfs.py
#!/usr/bin/env python
##
# Copyright (c) 2012-2015 Apple Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
##
from twisted.trial.unittest import TestCase
from twisted.internet.defer import succeed, inlineCallbacks
# from twext.who.test.test_xml import xmlService
# from txdav.common.datastore.test.util import buildStore
from calendarserver.tools.shell.vfs import ListEntry
from calendarserver.tools.shell.vfs import File, Folder
# from calendarserver.tools.shell.vfs import UIDsFolder
# from calendarserver.tools.shell.terminal import ShellService
class TestListEntry(TestCase):
def test_toString(self):
self.assertEquals(
ListEntry(None, File, "thingo").toString(),
"thingo"
)
self.assertEquals(
ListEntry(None, File, "thingo", Foo="foo").toString(),
"thingo"
)
self.assertEquals(
ListEntry(None, Folder, "thingo").toString(),
"thingo/"
)
self.assertEquals(
ListEntry(None, Folder, "thingo", Foo="foo").toString(),
"thingo/"
)
def test_fieldNamesImplicit(self):
# This test assumes File doesn't set list.fieldNames.
assert not hasattr(File.list, "fieldNames")
self.assertEquals(
set(ListEntry(File(None, ()), File, "thingo").fieldNames),
set(("Name",))
)
def test_fieldNamesExplicit(self):
def fieldNames(fileClass):
return ListEntry(
fileClass(None, ()), fileClass, "thingo",
Flavor="Coconut", Style="Hard"
)
# Full list
class MyFile1(File):
def list(self):
return succeed(())
list.fieldNames = ("Name", "Flavor")
self.assertEquals(fieldNames(MyFile1).fieldNames, ("Name", "Flavor"))
# Full list, different order
class MyFile2(File):
def list(self):
return succeed(())
list.fieldNames = ("Flavor", "Name")
self.assertEquals(fieldNames(MyFile2).fieldNames, ("Flavor", "Name"))
# Omits Name, which is implicitly added
class MyFile3(File):
def list(self):
return succeed(())
list.fieldNames = ("Flavor",)
self.assertEquals(fieldNames(MyFile3).fieldNames, ("Name", "Flavor"))
# Empty
class MyFile4(File):
def list(self):
return succeed(())
list.fieldNames = ()
self.assertEquals(fieldNames(MyFile4).fieldNames, ("Name",))
def test_toFieldsImplicit(self):
# This test assumes File doesn't set list.fieldNames.
assert not hasattr(File.list, "fieldNames")
# Name first, rest sorted by field name
self.assertEquals(
tuple(
ListEntry(
File(None, ()), File, "thingo",
Flavor="Coconut", Style="Hard"
).toFields()
),
("thingo", "Coconut", "Hard")
)
def test_toFieldsExplicit(self):
def fields(fileClass):
return tuple(
ListEntry(
fileClass(None, ()), fileClass, "thingo",
Flavor="Coconut", Style="Hard"
).toFields()
)
# Full list
class MyFile1(File):
def list(self):
return succeed(())
list.fieldNames = ("Name", "Flavor")
self.assertEquals(fields(MyFile1), ("thingo", "Coconut"))
# Full list, different order
class MyFile2(File):
def list(self):
return succeed(())
list.fieldNames = ("Flavor", "Name")
self.assertEquals(fields(MyFile2), ("Coconut", "thingo"))
# Omits Name, which is implicitly added
class MyFile3(File):
def list(self):
return succeed(())
list.fieldNames = ("Flavor",)
self.assertEquals(fields(MyFile3), ("thingo", "Coconut"))
# Empty
class MyFile4(File):
def list(self):
return succeed(())
list.fieldNames = ()
self.assertEquals(fields(MyFile4), ("thingo",))
class UIDsFolderTests(TestCase):
"""
L{UIDsFolder} contains all principals and is keyed by UID.
"""
# @inlineCallbacks
# def setUp(self):
# """
# Create a L{UIDsFolder}.
# """
# directory = xmlService(self.mktemp())
# self.svc = ShellService(
# store=(yield buildStore(self, None, directoryService=directory)),
# directory=directory,
# options=None, reactor=None, config=None
# )
# self.folder = UIDsFolder(self.svc, ())
@inlineCallbacks
def test_list(self):
"""
L{UIDsFolder.list} returns a L{Deferred} firing an iterable of
L{ListEntry} objects, reflecting the directory information for all
calendars and addressbooks created in the store.
"""
txn = self.svc.store.newTransaction()
wsanchez = "6423F94A-6B76-4A3A-815B-D52CFD77935D"
dreid = "5FF60DAD-0BDE-4508-8C77-15F0CA5C8DD1"
yield txn.calendarHomeWithUID(wsanchez, create=True)
yield txn.addressbookHomeWithUID(dreid, create=True)
yield txn.commit()
listing = list((yield self.folder.list()))
self.assertEquals(
[x.fields for x in listing],
[
{
"Record Type": "users",
"Short Name": "wsanchez",
"Full Name": "Wilfredo Sanchez",
"Name": wsanchez
},
{
"Record Type": "users",
"Short Name": "dreid",
"Full Name": "David Reid",
"Name": dreid
},
]
)
test_list.todo = "setUp() needs to be reimplemented"
calendarserver-7.0+dfsg/calendarserver/tools/shell/vfs.py
# -*- test-case-name: calendarserver.tools.shell.test.test_vfs -*-
##
# Copyright (c) 2011-2015 Apple Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
##
"""
Virtual file system for data store objects.
"""
__all__ = [
"ListEntry",
"File",
"Folder",
"RootFolder",
"UIDsFolder",
"RecordFolder",
"UsersFolder",
"LocationsFolder",
"ResourcesFolder",
"GroupsFolder",
"PrincipalHomeFolder",
"CalendarHomeFolder",
"CalendarFolder",
"CalendarObject",
"AddressBookHomeFolder",
]
from cStringIO import StringIO
from time import strftime, localtime
from twisted.internet.defer import succeed
from twisted.internet.defer import inlineCallbacks, returnValue
from twext.python.log import Logger
from txdav.common.icommondatastore import NotFoundError
from twistedcaldav.ical import InvalidICalendarDataError
from calendarserver.tools.tables import Table
from calendarserver.tools.shell.directory import recordInfo
log = Logger()
class ListEntry(object):
"""
Information about a C{File} as returned by C{File.list()}.
"""
def __init__(self, parent, Class, Name, **fields):
self.parent = parent # The class implementing list()
self.fileClass = Class
self.fileName = Name
self.fields = fields
fields["Name"] = Name
def __str__(self):
return self.toString()
def __repr__(self):
fields = self.fields.copy()
del fields["Name"]
if fields:
fields = " %s" % (fields,)
else:
fields = ""
return "<%s(%s): %r%s>" % (
self.__class__.__name__,
self.fileClass.__name__,
self.fileName,
fields,
)
def isFolder(self):
return issubclass(self.fileClass, Folder)
def toString(self):
if self.isFolder():
return "%s/" % (self.fileName,)
else:
return self.fileName
@property
def fieldNames(self):
if not hasattr(self, "_fieldNames"):
if hasattr(self.parent.list, "fieldNames"):
if "Name" in self.parent.list.fieldNames:
self._fieldNames = tuple(self.parent.list.fieldNames)
else:
self._fieldNames = ("Name",) + tuple(self.parent.list.fieldNames)
else:
self._fieldNames = ("Name",) + tuple(sorted(n for n in self.fields if n != "Name"))
return self._fieldNames
def toFields(self):
try:
return tuple(self.fields[fieldName] for fieldName in self.fieldNames)
except KeyError, e:
raise AssertionError(
"Field %s is not in %r, defined by %s"
% (e, self.fields.keys(), self.parent.__name__)
)
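The `fieldNames` property above applies one ordering rule: honor an explicit `list.fieldNames` (prepending `"Name"` when it is absent), otherwise fall back to `"Name"` followed by the remaining field names sorted. A standalone sketch of just that rule, with hypothetical argument names:

```python
def field_names(explicit, fields):
    # `explicit` mirrors list.fieldNames (a tuple, or None when unset);
    # `fields` mirrors ListEntry.fields, which always contains "Name".
    if explicit is not None:
        if "Name" in explicit:
            return tuple(explicit)
        return ("Name",) + tuple(explicit)
    # Implicit case: "Name" first, the rest sorted alphabetically.
    return ("Name",) + tuple(sorted(n for n in fields if n != "Name"))
```

This matches the `test_fieldNamesImplicit`/`test_fieldNamesExplicit` cases in test_vfs.py below.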
class File(object):
"""
Object in virtual data hierarchy.
"""
def __init__(self, service, path):
assert type(path) is tuple
self.service = service
self.path = path
def __str__(self):
return "/" + "/".join(self.path)
def __repr__(self):
return "<%s: %s>" % (self.__class__.__name__, self)
def __eq__(self, other):
if isinstance(other, File):
return self.path == other.path
else:
return NotImplemented
def describe(self):
return succeed("%s (%s)" % (self, self.__class__.__name__))
def list(self):
return succeed((
ListEntry(self, self.__class__, self.path[-1]),
))
class Folder(File):
"""
Location in virtual data hierarchy.
"""
def __init__(self, service, path):
File.__init__(self, service, path)
self._children = {}
self._childClasses = {}
def __str__(self):
if self.path:
return "/" + "/".join(self.path) + "/"
else:
return "/"
@inlineCallbacks
def locate(self, path):
if path and path[-1] == "":
path.pop()
if not path:
returnValue(RootFolder(self.service))
name = path[0]
if name:
target = (yield self.child(name))
if len(path) > 1:
target = (yield target.locate(path[1:]))
else:
target = (yield RootFolder(self.service).locate(path[1:]))
returnValue(target)
@inlineCallbacks
def child(self, name):
# FIXME: Move this logic to locate()
# if not name:
# return succeed(self)
# if name == ".":
# return succeed(self)
# if name == "..":
# path = self.path[:-1]
# if not path:
# path = "/"
# return RootFolder(self.service).locate(path)
if name in self._children:
returnValue(self._children[name])
if name in self._childClasses:
child = (yield self._childClasses[name](self.service, self.path + (name,)))
self._children[name] = child
returnValue(child)
raise NotFoundError("Folder %r has no child %r" % (str(self), name))
def list(self):
result = {}
for name in self._children:
result[name] = ListEntry(self, self._children[name].__class__, name)
for name in self._childClasses:
if name not in result:
result[name] = ListEntry(self, self._childClasses[name], name)
return succeed(result.itervalues())
class RootFolder(Folder):
"""
Root of virtual data hierarchy.
Hierarchy::
/ RootFolder
uids/ UIDsFolder
/ PrincipalHomeFolder
calendars/ CalendarHomeFolder
/ CalendarFolder
CalendarObject
addressbooks/ AddressBookHomeFolder
/ AddressBookFolder
AddressBookObject
users/ UsersFolder
/ PrincipalHomeFolder
...
locations/ LocationsFolder
/ PrincipalHomeFolder
...
resources/ ResourcesFolder
/ PrincipalHomeFolder
...
"""
def __init__(self, service):
Folder.__init__(self, service, ())
self._childClasses["uids"] = UIDsFolder
self._childClasses["users"] = UsersFolder
self._childClasses["locations"] = LocationsFolder
self._childClasses["resources"] = ResourcesFolder
self._childClasses["groups"] = GroupsFolder
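`Folder.locate` above resolves a path one segment at a time through `child()`, skipping empty segments produced by trailing or doubled slashes. The walk can be sketched over a plain nested dict standing in (hypothetically) for the folder tree:

```python
def locate(tree, path):
    # Walk one segment at a time, as Folder.locate does via child().
    node = tree
    for name in path:
        if not name:
            # Empty segment, e.g. from "users//wsanchez" or a trailing "/".
            continue
        if name not in node:
            raise KeyError("Folder has no child %r" % (name,))
        node = node[name]
    return node
```

An empty path resolves to the starting node, just as `locate([])` returns the root folder above.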
class UIDsFolder(Folder):
"""
Folder containing all principals by UID.
"""
def child(self, name):
return PrincipalHomeFolder(self.service, self.path + (name,), name)
@inlineCallbacks
def list(self):
results = {}
# FIXME: This should be the merged total of calendar homes and address book homes.
# FIXME: Merge in directory UIDs also?
# FIXME: Add directory info (eg. name) to list entry
@inlineCallbacks
def addResult(ignoredTxn, home):
uid = home.uid()
record = yield self.service.directory.recordWithUID(uid)
if record:
info = {
"Record Type": record.recordType,
"Short Name" : record.shortNames[0],
"Full Name" : record.fullName,
}
else:
info = {}
results[uid] = ListEntry(self, PrincipalHomeFolder, uid, **info)
yield self.service.store.withEachCalendarHomeDo(addResult)
yield self.service.store.withEachAddressbookHomeDo(addResult)
returnValue(results.itervalues())
class RecordFolder(Folder):
def _recordForName(self, name):
recordTypeAttr = "recordType_" + self.recordType
recordType = getattr(self.service.directory, recordTypeAttr)
return self.service.directory.recordWithShortName(recordType, name)
def child(self, name):
record = self._recordForName(name)
if record is None:
return Folder.child(self, name)
return PrincipalHomeFolder(
self.service,
self.path + (name,),
record.uid,
record=record
)
@inlineCallbacks
def list(self):
names = set()
for record in (yield self.service.directory.recordsWithRecordType(
self.recordType
)):
for shortName in record.shortNames:
if shortName in names:
continue
names.add(shortName)
yield ListEntry(
self,
PrincipalHomeFolder,
shortName,
**{
"UID": record.uid,
"Full Name": record.fullName,
}
)
list.fieldNames = ("UID", "Full Name")
class UsersFolder(RecordFolder):
"""
Folder containing all user principals by name.
"""
recordType = "users"
class LocationsFolder(RecordFolder):
"""
Folder containing all location principals by name.
"""
recordType = "locations"
class ResourcesFolder(RecordFolder):
"""
Folder containing all resource principals by name.
"""
recordType = "resources"
class GroupsFolder(RecordFolder):
"""
Folder containing all group principals by name.
"""
recordType = "groups"
class PrincipalHomeFolder(Folder):
"""
Folder containing everything related to a given principal.
"""
def __init__(self, service, path, uid, record=None):
Folder.__init__(self, service, path)
if record is None:
# FIXME: recordWithUID returns a Deferred but we cannot return or yield it in an __init__ method
record = self.service.directory.recordWithUID(uid)
if record is not None:
assert uid == record.uid
self.uid = uid
self.record = record
@inlineCallbacks
def _initChildren(self):
if not hasattr(self, "_didInitChildren"):
txn = self.service.store.newTransaction()
try:
if (
self.record is not None and
self.service.config.EnableCalDAV and
self.record.hasCalendars
):
create = True
else:
create = False
# Try assuming it exists
home = (yield txn.calendarHomeWithUID(self.uid, create=False))
if home is None and create:
# Doesn't exist, so create it in a different
# transaction, to avoid having to commit the live
# transaction.
txnTemp = self.service.store.newTransaction()
try:
home = (yield txnTemp.calendarHomeWithUID(self.uid, create=True))
(yield txnTemp.commit())
# Fetch the home again. This time we expect it to be there.
home = (yield txn.calendarHomeWithUID(self.uid, create=False))
assert home
finally:
(yield txn.abort())
if home:
self._children["calendars"] = CalendarHomeFolder(
self.service,
self.path + ("calendars",),
home,
self.record,
)
if (
self.record is not None and
self.service.config.EnableCardDAV and
self.record.enabledForAddressBooks
):
create = True
else:
create = False
# Again, assume it exists
home = (yield txn.addressbookHomeWithUID(self.uid))
if not home and create:
# Repeat the above dance.
txnTemp = self.service.store.newTransaction()
try:
home = (yield txnTemp.addressbookHomeWithUID(self.uid, create=True))
(yield txnTemp.commit())
# Fetch the home again. This time we expect it to be there.
home = (yield txn.addressbookHomeWithUID(self.uid, create=False))
assert home
finally:
(yield txn.abort())
if home:
self._children["addressbooks"] = AddressBookHomeFolder(
self.service,
self.path + ("addressbooks",),
home,
self.record,
)
finally:
(yield txn.abort())
self._didInitChildren = True
def _needsChildren(m): #@NoSelf
def decorate(self, *args, **kwargs):
d = self._initChildren()
d.addCallback(lambda _: m(self, *args, **kwargs))
return d
return decorate
@_needsChildren
def child(self, name):
return Folder.child(self, name)
@_needsChildren
def list(self):
return Folder.list(self)
def describe(self):
return recordInfo(self.service.directory, self.record)
class CalendarHomeFolder(Folder):
"""
Calendar home folder.
"""
def __init__(self, service, path, home, record):
Folder.__init__(self, service, path)
self.home = home
self.record = record
@inlineCallbacks
def child(self, name):
calendar = (yield self.home.calendarWithName(name))
if calendar:
returnValue(CalendarFolder(self.service, self.path + (name,), calendar))
else:
raise NotFoundError("Calendar home %r has no calendar %r" % (self, name))
@inlineCallbacks
def list(self):
calendars = (yield self.home.calendars())
result = []
for calendar in calendars:
displayName = calendar.displayName()
if displayName is None:
displayName = "(unset)"
info = {
"Display Name": displayName,
"Sync Token" : (yield calendar.syncToken()),
}
result.append(ListEntry(self, CalendarFolder, calendar.name(), **info))
returnValue(result)
@inlineCallbacks
def describe(self):
description = ["Calendar home:\n"]
#
# Attributes
#
uid = (yield self.home.uid())
created = (yield self.home.created())
modified = (yield self.home.modified())
quotaUsed = (yield self.home.quotaUsedBytes())
quotaAllowed = (yield self.home.quotaAllowedBytes())
recordType = (yield self.record.recordType)
recordShortName = (yield self.record.shortNames[0])
rows = []
rows.append(("UID", uid))
rows.append(("Owner", "(%s)%s" % (recordType, recordShortName)))
rows.append(("Created" , timeString(created)))
rows.append(("Last modified", timeString(modified)))
if quotaUsed is not None:
rows.append((
"Quota",
"%s of %s (%.2f%%)"
% (quotaUsed, quotaAllowed, quotaUsed * 100.0 / quotaAllowed)
))
description.append("Attributes:")
description.append(tableString(rows))
#
# Properties
#
properties = (yield self.home.properties())
if properties:
description.append(tableStringForProperties(properties))
returnValue("\n".join(description))
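`describe()` above reports quota usage as a percentage of `quotaAllowedBytes`. Under Python 2, dividing two byte counts with `/` is integer division and truncates any partial usage to 0, so the ratio needs float math; a small sketch of the safe form:

```python
def quota_percent(used, allowed):
    # Multiply by 100.0 first so the division happens in floating point,
    # avoiding Python 2 integer truncation (512 / 1024 == 0).
    return used * 100.0 / allowed
```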
class CalendarFolder(Folder):
"""
Calendar.
"""
def __init__(self, service, path, calendar):
Folder.__init__(self, service, path)
self.calendar = calendar
@inlineCallbacks
def _childWithObject(self, object):
uid = (yield object.uid())
returnValue(CalendarObject(self.service, self.path + (uid,), object, uid))
@inlineCallbacks
def child(self, name):
object = (yield self.calendar.calendarObjectWithUID(name))
if not object:
raise NotFoundError("Calendar %r has no object %r" % (str(self), name))
child = (yield self._childWithObject(object))
returnValue(child)
@inlineCallbacks
def list(self):
result = []
for object in (yield self.calendar.calendarObjects()):
object = (yield self._childWithObject(object))
items = (yield object.list())
result.append(items[0])
returnValue(result)
@inlineCallbacks
def describe(self):
description = ["Calendar:\n"]
#
# Attributes
#
ownerHome = (yield self.calendar.ownerCalendarHome()) # FIXME: Translate into human
syncToken = (yield self.calendar.syncToken())
rows = []
rows.append(("Owner" , ownerHome))
rows.append(("Sync Token", syncToken))
description.append("Attributes:")
description.append(tableString(rows))
#
# Properties
#
properties = (yield self.calendar.properties())
if properties:
description.append(tableStringForProperties(properties))
returnValue("\n".join(description))
def delete(self, implicit=True):
if implicit:
# We need data store-level scheduling support to implement
# this.
raise NotImplementedError("Delete not implemented.")
else:
self.calendar.remove()
class CalendarObject(File):
"""
Calendar object.
"""
def __init__(self, service, path, calendarObject, uid):
File.__init__(self, service, path)
self.object = calendarObject
self.uid = uid
@inlineCallbacks
def lookup(self):
if not hasattr(self, "component"):
component = (yield self.object.component())
try:
mainComponent = component.mainComponent()
assert self.uid == mainComponent.propertyValue("UID")
self.componentType = mainComponent.name()
self.summary = mainComponent.propertyValue("SUMMARY")
self.mainComponent = mainComponent
except InvalidICalendarDataError, e:
log.error("%s: %s" % (self.path, e))
self.componentType = "?"
self.summary = "** Invalid data **"
self.mainComponent = None
self.component = component
@inlineCallbacks
def list(self):
(yield self.lookup())
returnValue((ListEntry(self, CalendarObject, self.uid, **{
"Component Type": self.componentType,
"Summary": self.summary.replace("\n", " "),
}),))
list.fieldNames = ("Component Type", "Summary")
@inlineCallbacks
def text(self):
(yield self.lookup())
returnValue(str(self.component))
@inlineCallbacks
def describe(self):
(yield self.lookup())
description = ["Calendar object:\n"]
#
# Calendar object attributes
#
rows = []
rows.append(("UID", self.uid))
rows.append(("Component Type", self.componentType))
rows.append(("Summary", self.summary))
organizer = self.mainComponent.getProperty("ORGANIZER")
if organizer:
organizerName = organizer.parameterValue("CN")
organizerEmail = organizer.parameterValue("EMAIL")
name = " (%s)" % (organizerName ,) if organizerName else ""
email = " <%s>" % (organizerEmail,) if organizerEmail else ""
rows.append(("Organizer", "%s%s%s" % (organizer.value(), name, email)))
rows.append(("Created" , timeString(self.object.created())))
rows.append(("Modified", timeString(self.object.modified())))
description.append("Attributes:")
description.append(tableString(rows))
#
# Attachments
#
attachments = (yield self.object.attachments())
for attachment in attachments:
contentType = attachment.contentType()
contentType = "%s/%s" % (contentType.mediaType, contentType.mediaSubtype)
rows = []
rows.append(("Name" , attachment.name()))
rows.append(("Size" , "%s bytes" % (attachment.size(),)))
rows.append(("Content Type", contentType))
rows.append(("MD5 Sum" , attachment.md5()))
rows.append(("Created" , timeString(attachment.created())))
rows.append(("Modified" , timeString(attachment.modified())))
description.append("Attachment:")
description.append(tableString(rows))
#
# Properties
#
properties = (yield self.object.properties())
if properties:
description.append(tableStringForProperties(properties))
returnValue("\n".join(description))
class AddressBookHomeFolder(Folder):
"""
Address book home folder.
"""
def __init__(self, service, path, home, record):
Folder.__init__(self, service, path)
self.home = home
self.record = record
# FIXME
def tableString(rows, header=None):
table = Table()
if header:
table.addHeader(header)
for row in rows:
table.addRow(row)
output = StringIO()
table.printTable(os=output)
return output.getvalue()
def tableStringForProperties(properties):
return "Properties:\n%s" % (tableString((
(name.toString(), truncateAtNewline(properties[name]))
for name in sorted(properties)
)))
def timeString(time):
if time is None:
return "(unknown)"
return strftime("%a, %d %b %Y %H:%M:%S %z(%Z)", localtime(time))
def truncateAtNewline(text):
text = str(text)
try:
index = text.index("\n")
except ValueError:
return text
return text[:index] + "..."
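The helpers above keep `describe()` output to one line per property: `truncateAtNewline` cuts a value at its first newline and appends an ellipsis. Its behavior, repeated as a self-contained snippet:

```python
def truncateAtNewline(text):
    # Keep only the first line of a multi-line value, marking the cut.
    text = str(text)
    try:
        index = text.index("\n")
    except ValueError:
        return text
    return text[:index] + "..."
```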
calendarserver-7.0+dfsg/calendarserver/tools/tables.py
##
# Copyright (c) 2009-2015 Apple Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
##
"""
Tables for fixed-width text display.
"""
__all__ = [
"Table",
]
from sys import stdout
import types
from cStringIO import StringIO
class Table(object):
"""
Class that allows pretty-printing ASCII tables.
The table supports multiline headers and footers, independent
column formatting by row, and alternative tab-delimited output.
"""
class ColumnFormat(object):
"""
Defines the format string, justification and span for a column.
"""
LEFT_JUSTIFY = 0
RIGHT_JUSTIFY = 1
CENTER_JUSTIFY = 2
def __init__(self, strFormat="%s", justify=LEFT_JUSTIFY, span=1):
self.format = strFormat
self.justify = justify
self.span = span
def __init__(self, table=None):
self.headers = []
self.headerColumnFormats = []
self.rows = []
self.footers = []
self.footerColumnFormats = []
self.columnCount = 0
self.defaultColumnFormats = []
self.columnFormatsByRow = {}
if table:
self.setData(table)
def setData(self, table):
self.hasTitles = True
self.headers.append(table[0])
self.rows = table[1:]
self._getMaxColumnCount()
def setDefaultColumnFormats(self, columnFormats):
self.defaultColumnFormats = columnFormats
def addDefaultColumnFormat(self, columnFormat):
self.defaultColumnFormats.append(columnFormat)
def setHeaders(self, rows, columnFormats=None):
self.headers = rows
self.headerColumnFormats = columnFormats if columnFormats else [None, ] * len(self.headers)
self._getMaxColumnCount()
def addHeader(self, row, columnFormats=None):
self.headers.append(row)
self.headerColumnFormats.append(columnFormats)
self._getMaxColumnCount()
def addHeaderDivider(self, skipColumns=()):
self.headers.append((None, skipColumns,))
self.headerColumnFormats.append(None)
def setFooters(self, row, columnFormats=None):
self.footers = row
self.footerColumnFormats = columnFormats if columnFormats else [None, ] * len(self.footers)
self._getMaxColumnCount()
def addFooter(self, row, columnFormats=None):
self.footers.append(row)
self.footerColumnFormats.append(columnFormats)
self._getMaxColumnCount()
def addRow(self, row=None, columnFormats=None):
self.rows.append(row)
if columnFormats:
self.columnFormatsByRow[len(self.rows) - 1] = columnFormats
self._getMaxColumnCount()
def addDivider(self, skipColumns=()):
self.rows.append((None, skipColumns,))
def toString(self):
output = StringIO()
self.printTable(os=output)
return output.getvalue()
def printTable(self, os=stdout):
maxWidths = self._getMaxWidths()
self.printDivider(os, maxWidths, False)
if self.headers:
for header, format in zip(self.headers, self.headerColumnFormats):
self.printRow(os, header, self._getHeaderColumnFormat(format), maxWidths)
self.printDivider(os, maxWidths)
for ctr, row in enumerate(self.rows):
self.printRow(os, row, self._getColumnFormatForRow(ctr), maxWidths)
if self.footers:
self.printDivider(os, maxWidths, double=True)
for footer, format in zip(self.footers, self.footerColumnFormats):
self.printRow(os, footer, self._getFooterColumnFormat(format), maxWidths)
self.printDivider(os, maxWidths, False)
def printRow(self, os, row, format, maxWidths):
if row is None or type(row) is tuple and row[0] is None:
self.printDivider(os, maxWidths, skipColumns=row[1] if type(row) is tuple else ())
else:
if len(row) != len(maxWidths):
row = list(row)
row.extend([""] * (len(maxWidths) - len(row)))
t = "|"
ctr = 0
while ctr < len(row):
startCtr = ctr
maxWidth = 0
for _ignore_span in xrange(format[startCtr].span if format else 1):
maxWidth += maxWidths[ctr]
ctr += 1
maxWidth += 3 * ((format[startCtr].span - 1) if format else 0)
text = self._columnText(row, startCtr, format, width=maxWidth)
t += " " + text + " |"
t += "\n"
os.write(t)
def printDivider(self, os, maxWidths, intermediate=True, double=False, skipColumns=()):
t = "|" if intermediate else "+"
for widthctr, width in enumerate(maxWidths):
if widthctr in skipColumns:
c = " "
else:
c = "=" if double else "-"
t += c * (width + 2)
t += "+" if widthctr < len(maxWidths) - 1 else ("|" if intermediate else "+")
t += "\n"
os.write(t)
def printTabDelimitedData(self, os=stdout, footer=True):
if self.headers:
titles = [""] * len(self.headers[0])
for row, header in enumerate(self.headers):
for col, item in enumerate(header):
titles[col] += (" " if row and item else "") + item
self.printTabDelimitedRow(os, titles, self._getHeaderColumnFormat(self.headerColumnFormats[0]))
for ctr, row in enumerate(self.rows):
self.printTabDelimitedRow(os, row, self._getColumnFormatForRow(ctr))
if self.footers and footer:
for footer in self.footers:
self.printTabDelimitedRow(os, footer, self._getFooterColumnFormat(self.footerColumnFormats[0]))
def printTabDelimitedRow(self, os, row, format):
if row is None:
row = [""] * self.columnCount
if len(row) != self.columnCount:
row = list(row)
row.extend([""] * (self.columnCount - len(row)))
textItems = [self._columnText(row, ctr, format) for ctr in xrange((len(row)))]
os.write("\t".join(textItems) + "\n")
def _getMaxColumnCount(self):
self.columnCount = 0
if self.headers:
for header in self.headers:
self.columnCount = max(self.columnCount, len(header) if header else 0)
for row in self.rows:
self.columnCount = max(self.columnCount, len(row) if row else 0)
if self.footers:
for footer in self.footers:
self.columnCount = max(self.columnCount, len(footer) if footer else 0)
def _getMaxWidths(self):
maxWidths = [0] * self.columnCount
if self.headers:
for header, format in zip(self.headers, self.headerColumnFormats):
self._updateMaxWidthsFromRow(header, self._getHeaderColumnFormat(format), maxWidths)
for ctr, row in enumerate(self.rows):
self._updateMaxWidthsFromRow(row, self._getColumnFormatForRow(ctr), maxWidths)
if self.footers:
for footer, format in zip(self.footers, self.footerColumnFormats):
self._updateMaxWidthsFromRow(footer, self._getFooterColumnFormat(format), maxWidths)
return maxWidths
def _updateMaxWidthsFromRow(self, row, format, maxWidths):
if row and (type(row) is not tuple or row[0] is not None):
ctr = 0
while ctr < len(row):
text = self._columnText(row, ctr, format)
startCtr = ctr
for _ignore_span in xrange(format[startCtr].span if format else 1):
maxWidths[ctr] = max(maxWidths[ctr], len(text) / (format[startCtr].span if format else 1))
ctr += 1
def _getHeaderColumnFormat(self, format):
if format:
return format
else:
justify = Table.ColumnFormat.CENTER_JUSTIFY if len(self.headers) == 1 else Table.ColumnFormat.LEFT_JUSTIFY
return [Table.ColumnFormat(justify=justify)] * self.columnCount
def _getFooterColumnFormat(self, format):
if format:
return format
else:
return self.defaultColumnFormats
def _getColumnFormatForRow(self, ctr):
if ctr in self.columnFormatsByRow:
return self.columnFormatsByRow[ctr]
else:
return self.defaultColumnFormats
def _columnText(self, row, column, format, width=0):
if row is None or column >= len(row):
return ""
colData = row[column]
if colData is None:
colData = ""
columnFormat = format[column] if format and column < len(format) else Table.ColumnFormat()
if type(colData) in types.StringTypes:
text = colData
else:
text = columnFormat.format % colData
if width:
if columnFormat.justify == Table.ColumnFormat.LEFT_JUSTIFY:
text = text.ljust(width)
elif columnFormat.justify == Table.ColumnFormat.RIGHT_JUSTIFY:
text = text.rjust(width)
elif columnFormat.justify == Table.ColumnFormat.CENTER_JUSTIFY:
text = text.center(width)
return text
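`Table` above sizes every column to its widest cell (`_getMaxWidths`) and then justifies each cell to that width (`_columnText`). The core of that fixed-width layout can be sketched independently of the class, using its default left justification:

```python
def render(rows):
    # One width per column: the longest cell in that column.
    widths = [max(len(str(row[i])) for row in rows) for i in range(len(rows[0]))]
    # Left-justify each cell to its column width (Table's default).
    return "\n".join(
        "| " + " | ".join(str(cell).ljust(width) for cell, width in zip(row, widths)) + " |"
        for row in rows
    )
```

The real class adds dividers, spans, and per-row formats on top of this same width computation.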
calendarserver-7.0+dfsg/calendarserver/tools/test/__init__.py
##
# Copyright (c) 2005-2015 Apple Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
##
"""
Tests for the calendarserver.tools module.
"""
calendarserver-7.0+dfsg/calendarserver/tools/test/deprovision/ 0000775 0000000 0000000 00000000000 12607514243 0024763 5 ustar 00root root 0000000 0000000 calendarserver-7.0+dfsg/calendarserver/tools/test/deprovision/augments.xml 0000664 0000000 0000000 00000001757 12607514243 0027342 0 ustar 00root root 0000000 0000000
E9E78C86-4829-4520-A35D-70DDADAB2092truetrue291C2C29-B663-4342-8EA1-A055E6A04D65truetrue
calendarserver-7.0+dfsg/calendarserver/tools/test/deprovision/caldavd.plist 0000664 0000000 0000000 00000043132 12607514243 0027441 0 ustar 00root root 0000000 0000000
ServerHostNameHTTPPort8008SSLPort8443RedirectHTTPToHTTPSBindAddressesBindHTTPPortsBindSSLPortsServerRoot%(ServerRoot)sDataRootDataDocumentRootDocumentsConfigRoot/etc/caldavdLogRoot/var/log/caldavdRunRoot/var/runAliasesUserQuota104857600MaximumAttachmentSize1048576MaxAttendeesPerInstance100DirectoryServicetypetwistedcaldav.directory.xmlfile.XMLDirectoryServiceparamsxmlFileaccounts.xmlrecordTypesusersgroupsResourceServiceEnabledtypetwistedcaldav.directory.xmlfile.XMLDirectoryServiceparamsxmlFileresources.xmlrecordTypesresourceslocationsAugmentServicetypetwistedcaldav.directory.augment.AugmentXMLDBparamsxmlFilesaugments.xmlProxyDBServicetypetwistedcaldav.directory.calendaruserproxy.ProxySqliteDBparamsdbpathproxies.sqliteProxyLoadFromFileconf/auth/proxies-test.xmlAdminPrincipals/principals/__uids__/admin/ReadPrincipalsEnableProxyPrincipalsEnableAnonymousReadRootEnableAnonymousReadNavEnablePrincipalListingsEnableMonolithicCalendarsAuthenticationBasicEnabledDigestEnabledAlgorithmmd5QopKerberosEnabledServicePrincipalWikiEnabledCookiesessionIDURLhttp://127.0.0.1/RPC2UserMethoduserForSessionWikiMethodaccessLevelForUserWikiCalendarAccessLogFilelogs/access.logRotateAccessLogErrorLogFilelogs/error.logDefaultLogLevelwarnLogLevelsPIDFilelogs/caldavd.pidAccountingCategoriesiTIPHTTPAccountingPrincipalsSSLCertificatetwistedcaldav/test/data/server.pemSSLAuthorityChainSSLPrivateKeytwistedcaldav/test/data/server.pemUserNameGroupNameProcessTypeCombinedMultiProcessProcessCount2NotificationsCoalesceSeconds3InternalNotificationHostlocalhostInternalNotificationPort62309ServicesSimpleLineNotifierServicetwistedcaldav.notify.SimpleLineNotifierServiceEnabledPort62308XMPPNotifierServicetwistedcaldav.notify.XMPPNotifierServiceEnabledHostxmpp.host.namePort5222JIDjid@xmpp.host.name/resourcePasswordpassword_goes_hereServiceAddresspubsub.xmpp.host.nameNodeConfigurationpubsub#deliver_payloads1pubsub#persist_items1KeepAliveSeconds120HeartbeatMinutes30AllowedJIDsSchedulingCalDAVEmailDomainHTTPDomainAddr
essPatternsOldDraftCompatibilityScheduleTagCompatibilityEnablePrivateCommentsiScheduleEnabledAddressPatternsServersconf/servertoserver-test.xmliMIPEnabledMailGatewayServerlocalhostMailGatewayPort62310SendingServerPort587UseSSLUsernamePasswordAddressReceivingServerPort995TypeUseSSLUsernamePasswordPollingSeconds30AddressPatternsmailto:.*OptionsAllowGroupAsOrganizerAllowLocationAsOrganizerAllowResourceAsOrganizerFreeBusyURLEnabledTimePeriod14AnonymousAccessEnableDropBoxEnablePrivateEventsEnableTimezoneServiceUsePackageTimezonesEnableSACLsEnableWebAdminResponseCompressionHTTPRetryAfter180ControlSocketlogs/caldavd.sockMemcachedMaxClients5memcachedmemcachedOptionsPoolsDefaultClientEnabledServerEnabledTwistedtwistd../Twisted/bin/twistdLocalizationLocalesDirectorylocalesLanguageEnglish
calendarserver-7.0+dfsg/calendarserver/tools/test/deprovision/resources-locations.xml 0000664 0000000 0000000 00000002025 12607514243 0031507 0 ustar 00root root 0000000 0000000
location%02dlocation%02dlocation%02dRoom %02dresource%02dresource%02dresource%02dResource %02d
calendarserver-7.0+dfsg/calendarserver/tools/test/deprovision/users-groups.xml 0000664 0000000 0000000 00000002236 12607514243 0030166 0 ustar 00root root 0000000 0000000
deprovisionedE9E78C86-4829-4520-A35D-70DDADAB2092testDeprovisioned UserDeprovisionedUserkeeper291C2C29-B663-4342-8EA1-A055E6A04D65testKeeper UserKeeperUser
calendarserver-7.0+dfsg/calendarserver/tools/test/gateway/ 0000775 0000000 0000000 00000000000 12607514243 0024063 5 ustar 00root root 0000000 0000000 calendarserver-7.0+dfsg/calendarserver/tools/test/gateway/augments.xml 0000664 0000000 0000000 00000002054 12607514243 0026431 0 ustar 00root root 0000000 0000000
user01truetrueuser02falsefalselocation01truetrue
calendarserver-7.0+dfsg/calendarserver/tools/test/gateway/caldavd.plist 0000664 0000000 0000000 00000046277 12607514243 0026556 0 ustar 00root root 0000000 0000000
ServerHostNameEnableCalDAVEnableCardDAVHTTPPort8008SSLPort8443RedirectHTTPToHTTPSBindAddressesBindHTTPPortsBindSSLPortsServerRoot%(ServerRoot)sDataRoot%(DataRoot)sDatabaseRoot%(DatabaseRoot)sDocumentRoot%(DocumentRoot)sConfigRoot%(ConfigRoot)sLogRoot%(LogRoot)sRunRoot%(RunRoot)sAliasesUserQuota104857600MaximumAttachmentSize1048576MaxAttendeesPerInstance100DirectoryServicetypetwistedcaldav.directory.xmlfile.XMLDirectoryServiceparamsxmlFileaccounts.xmlrecordTypesusersgroupsResourceServiceEnabledtypetwistedcaldav.directory.xmlfile.XMLDirectoryServiceparamsxmlFileresources.xmlrecordTypesresourceslocationsaddressesAugmentServicetypetwistedcaldav.directory.augment.AugmentXMLDBparamsxmlFilesaugments.xmlProxyDBServicetypetwistedcaldav.directory.calendaruserproxy.ProxySqliteDBparamsdbpathproxies.sqliteProxyLoadFromFileAdminPrincipals/principals/__uids__/admin/ReadPrincipalsEnableProxyPrincipalsEnableAnonymousReadRootEnableAnonymousReadNavEnablePrincipalListingsEnableMonolithicCalendarsAuthenticationBasicEnabledDigestEnabledAlgorithmmd5QopKerberosEnabledServicePrincipalWikiEnabledCookiesessionIDURLhttp://127.0.0.1/RPC2UserMethoduserForSessionWikiMethodaccessLevelForUserWikiCalendarAccessLogFilelogs/access.logRotateAccessLogErrorLogFilelogs/error.logDefaultLogLevelwarnLogLevelsPIDFilelogs/caldavd.pidAccountingCategoriesiTIPHTTPAccountingPrincipalsSSLCertificatetwistedcaldav/test/data/server.pemSSLAuthorityChainSSLPrivateKeytwistedcaldav/test/data/server.pemUserNameGroupNameProcessTypeCombinedMultiProcessProcessCount2NotificationsCoalesceSeconds3InternalNotificationHostlocalhostInternalNotificationPort62309ServicesAPNSEnabledEnableStaggeringStaggerSeconds5CalDAVCertificatePath/example/calendar.cerPrivateKeyPath/example/calendar.pemCardDAVCertificatePath/example/contacts.cerPrivateKeyPath/example/contacts.pemSimpleLineNotifierServicetwistedcaldav.notify.SimpleLineNotifierServiceEnabledPort62308XMPPNotifierServicetwistedcaldav.notify.XMPPNotifierServiceEnabledHostxmpp.host.namePo
rt5222JIDjid@xmpp.host.name/resourcePasswordpassword_goes_hereServiceAddresspubsub.xmpp.host.nameNodeConfigurationpubsub#deliver_payloads1pubsub#persist_items1KeepAliveSeconds120HeartbeatMinutes30AllowedJIDsSchedulingCalDAVEmailDomainHTTPDomainAddressPatternsOldDraftCompatibilityScheduleTagCompatibilityEnablePrivateCommentsiScheduleEnabledAddressPatternsServersconf/servertoserver-test.xmliMIPEnabledMailGatewayServerlocalhostMailGatewayPort62310SendingServerPort587UseSSLUsernamePasswordAddressReceivingServerPort995TypeUseSSLUsernamePasswordPollingSeconds30AddressPatternsmailto:.*OptionsAllowGroupAsOrganizerAllowLocationAsOrganizerAllowResourceAsOrganizerFreeBusyURLEnabledTimePeriod14AnonymousAccessEnableDropBoxEnablePrivateEventsEnableTimezoneServiceUsePackageTimezonesEnableSACLsEnableWebAdminResponseCompressionHTTPRetryAfter180ControlSocketlogs/caldavd.sockMemcachedMaxClients5memcachedmemcachedOptionsPoolsDefaultClientEnabledServerEnabledEnableResponseCacheResponseCacheTimeout30SharedConnectionPoolDirectoryProxyInProcessCachingSeconds0InSidecarCachingSeconds0Twistedtwistd../Twisted/bin/twistdLocalizationLocalesDirectorylocalesLanguageEnglishIncludes%(WritablePlist)sWritableConfigFile%(WritablePlist)s
calendarserver-7.0+dfsg/calendarserver/tools/test/gateway/resources-locations.xml 0000664 0000000 0000000 00000007474 12607514243 0030624 0 ustar 00root root 0000000 0000000
location01location01Room 01location02location02Room 02location03location03Room 03location04location04Room 04location05location05Room 05location06location06Room 06location07location07Room 07location08location08Room 08location09location09Room 09location10location10Room 10resource01resource01Resource 01resource02resource02Resource 02resource03resource03Resource 03resource04resource04Resource 04resource05resource05Resource 05resource06resource06Resource 06resource07resource07Resource 07resource08resource08Resource 08resource09resource09Resource 09resource10resource10Resource 10
calendarserver-7.0+dfsg/calendarserver/tools/test/gateway/users-groups.xml 0000664 0000000 0000000 00000006621 12607514243 0027270 0 ustar 00root root 0000000 0000000
user01user01user01User 01user01@example.comuser02user02user02User 02user02@example.comuser03user03user03User 03user03@example.comuser04user04user04User 04user04@example.comuser05user05user05User 05user05@example.comuser06user06user06User 06user06@example.comuser07user07user07User 07user07@example.comuser08user08user08User 08user08@example.comuser09user09user09User 09user09@example.comuser10user10user10User 10user10@example.come5a6142c-4189-4e9e-90b0-9cd0268b314btestgroup1Group 01user01user02
calendarserver-7.0+dfsg/calendarserver/tools/test/principals/ 0000775 0000000 0000000 00000000000 12607514243 0024566 5 ustar 00root root 0000000 0000000 calendarserver-7.0+dfsg/calendarserver/tools/test/principals/augments.xml 0000664 0000000 0000000 00000002054 12607514243 0027134 0 ustar 00root root 0000000 0000000
user01truetrueuser02falsefalselocation01truetrue
calendarserver-7.0+dfsg/calendarserver/tools/test/principals/caldavd.plist 0000664 0000000 0000000 00000043770 12607514243 0027254 0 ustar 00root root 0000000 0000000
ServerHostNameHTTPPort8008SSLPort8443RedirectHTTPToHTTPSBindAddressesBindHTTPPortsBindSSLPortsServerRoot%(ServerRoot)sDataRoot%(DataRoot)sDatabaseRoot%(DatabaseRoot)sDocumentRoot%(DocumentRoot)sConfigRootconfigLogRoot%(LogRoot)sRunRoot%(LogRoot)sAliasesUserQuota104857600MaximumAttachmentSize1048576MaxAttendeesPerInstance100DirectoryServicetypetwistedcaldav.directory.xmlfile.XMLDirectoryServiceparamsxmlFileaccounts.xmlrecordTypesusersgroupsResourceServiceEnabledtypetwistedcaldav.directory.xmlfile.XMLDirectoryServiceparamsxmlFileresources.xmlrecordTypesresourceslocationsaddressesAugmentServicetypetwistedcaldav.directory.augment.AugmentXMLDBparamsxmlFilesaugments.xmlProxyDBServicetypetwistedcaldav.directory.calendaruserproxy.ProxySqliteDBparamsdbpathproxies.sqliteProxyLoadFromFileAdminPrincipals/principals/__uids__/admin/ReadPrincipalsEnableProxyPrincipalsEnableAnonymousReadRootEnableAnonymousReadNavEnablePrincipalListingsEnableMonolithicCalendarsAuthenticationBasicEnabledDigestEnabledAlgorithmmd5QopKerberosEnabledServicePrincipalWikiEnabledCookiesessionIDURLhttp://127.0.0.1/RPC2UserMethoduserForSessionWikiMethodaccessLevelForUserWikiCalendarAccessLogFilelogs/access.logRotateAccessLogErrorLogFilelogs/error.logDefaultLogLevelwarnLogLevelsPIDFilelogs/caldavd.pidAccountingCategoriesiTIPHTTPAccountingPrincipalsSSLCertificatetwistedcaldav/test/data/server.pemSSLAuthorityChainSSLPrivateKeytwistedcaldav/test/data/server.pemUserNameGroupNameProcessTypeCombinedMultiProcessProcessCount2NotificationsCoalesceSeconds3InternalNotificationHostlocalhostInternalNotificationPort62309ServicesSimpleLineNotifierServicetwistedcaldav.notify.SimpleLineNotifierServiceEnabledPort62308XMPPNotifierServicetwistedcaldav.notify.XMPPNotifierServiceEnabledHostxmpp.host.namePort5222JIDjid@xmpp.host.name/resourcePasswordpassword_goes_hereServiceAddresspubsub.xmpp.host.nameNodeConfigurationpubsub#deliver_payloads1pubsub#persist_items1KeepAliveSeconds120HeartbeatMinutes30AllowedJIDsSchedulingCalDAVEmailDo
mainHTTPDomainAddressPatternsOldDraftCompatibilityScheduleTagCompatibilityEnablePrivateCommentsiScheduleEnabledAddressPatternsServersconf/servertoserver-test.xmliMIPEnabledMailGatewayServerlocalhostMailGatewayPort62310SendingServerPort587UseSSLUsernamePasswordAddressReceivingServerPort995TypeUseSSLUsernamePasswordPollingSeconds30AddressPatternsmailto:.*OptionsAllowGroupAsOrganizerAllowLocationAsOrganizerAllowResourceAsOrganizerFreeBusyURLEnabledTimePeriod14AnonymousAccessEnableDropBoxEnablePrivateEventsEnableTimezoneServiceUsePackageTimezonesEnableSACLsEnableWebAdminResponseCompressionHTTPRetryAfter180ControlSocketlogs/caldavd.sockMemcachedMaxClients5memcachedmemcachedOptionsPoolsDefaultClientEnabledServerEnabledEnableResponseCacheResponseCacheTimeout30SharedConnectionPoolTwistedtwistd../Twisted/bin/twistdLocalizationLocalesDirectorylocalesLanguageEnglish
calendarserver-7.0+dfsg/calendarserver/tools/test/principals/resources-locations.xml 0000664 0000000 0000000 00000007474 12607514243 0031327 0 ustar 00root root 0000000 0000000
location01location01Room 01location02location02Room 02location03location03Room 03location04location04Room 04location05location05Room 05location06location06Room 06location07location07Room 07location08location08Room 08location09location09Room 09location10location10Room 10resource01resource01Resource 01resource02resource02Resource 02resource03resource03Resource 03resource04resource04Resource 04resource05resource05Resource 05resource06resource06Resource 06resource07resource07Resource 07resource08resource08Resource 08resource09resource09Resource 09resource10resource10Resource 10
calendarserver-7.0+dfsg/calendarserver/tools/test/principals/users-groups.xml 0000664 0000000 0000000 00000007601 12607514243 0027772 0 ustar 00root root 0000000 0000000
user01user01user01User 01user01@example.comuser02user02user02User 02user02@example.comuser03user03user03User 03user03@example.comuser04user04user04User 04user04@example.comuser05user05user05User 05user05@example.comuser06user06user06User 06user06@example.comuser07user07user07User 07user07@example.comuser08user08user08User 08user08@example.comuser09user09user09User 09user09@example.comuser10user10user10User 10user10@example.comtestuser1testuser1testuser1Test User Onetestuser10@example.come5a6142c-4189-4e9e-90b0-9cd0268b314btestgroup1Group 01user01user02group2group2Test Group Twogroup3group3Test Group Three
calendarserver-7.0+dfsg/calendarserver/tools/test/test_agent.py 0000664 0000000 0000000 00000007046 12607514243 0025140 0 ustar 00root root 0000000 0000000 ##
# Copyright (c) 2013-2015 Apple Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
##
try:
from calendarserver.tools.agent import AgentRealm
from calendarserver.tools.agent import InactivityDetector
from twistedcaldav.test.util import TestCase
from twisted.internet.task import Clock
from twisted.web.resource import IResource
from twisted.web.resource import ForbiddenResource
except ImportError:
pass
else:
class FakeRecord(object):
def __init__(self, shortName):
self.shortNames = [shortName]
class AgentTestCase(TestCase):
def test_AgentRealm(self):
realm = AgentRealm("root", ["abc"])
# Valid avatar
_ignore_interface, resource, ignored = realm.requestAvatar(
FakeRecord("abc"), None, IResource
)
self.assertEquals(resource, "root")
# Not allowed avatar
_ignore_interface, resource, ignored = realm.requestAvatar(
FakeRecord("def"), None, IResource
)
self.assertTrue(isinstance(resource, ForbiddenResource))
# Interface unhandled
try:
realm.requestAvatar(FakeRecord("def"), None, None)
except NotImplementedError:
pass
else:
self.fail("Didn't raise NotImplementedError")
class InactivityDetectorTestCase(TestCase):
def test_inactivity(self):
clock = Clock()
self.inactivityReached = False
def becameInactive():
self.inactivityReached = True
id = InactivityDetector(clock, 5, becameInactive)
# After 3 seconds, not inactive
clock.advance(3)
self.assertFalse(self.inactivityReached)
# Activity happens, pushing out the inactivity threshold
id.activity()
clock.advance(3)
self.assertFalse(self.inactivityReached)
# Time passes without activity
clock.advance(3)
self.assertTrue(self.inactivityReached)
id.stop()
# Verify a timeout of 0 does not ever fire
id = InactivityDetector(clock, 0, becameInactive)
self.assertEquals(clock.getDelayedCalls(), [])
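The behaviour exercised above can be sketched without Twisted: a detector schedules a callback, `activity()` re-arms the deadline, and a timeout of 0 never schedules anything. The names below (`FakeClock`, `SimpleInactivityDetector`) are hypothetical stand-ins for illustration, not the real `InactivityDetector` API:

```python
class FakeClock(object):
    # Deterministic stand-in for a reactor clock.
    def __init__(self):
        self.now = 0.0
        self.calls = []  # each entry: [fire_time, callback]

    def callLater(self, delay, fn):
        call = [self.now + delay, fn]
        self.calls.append(call)
        return call

    def advance(self, seconds):
        self.now += seconds
        for call in [c for c in self.calls if c[0] <= self.now]:
            self.calls.remove(call)
            call[1]()


class SimpleInactivityDetector(object):
    def __init__(self, clock, timeout, callback):
        self._clock = clock
        self._timeout = timeout
        self._callback = callback
        self._call = None
        if timeout:  # a timeout of 0 means "never fire"
            self._call = clock.callLater(timeout, callback)

    def _cancel(self):
        if self._call in self._clock.calls:
            self._clock.calls.remove(self._call)

    def activity(self):
        # Push the inactivity deadline out by another full timeout period.
        self._cancel()
        if self._timeout:
            self._call = self._clock.callLater(self._timeout, self._callback)

    def stop(self):
        self._cancel()


clock = FakeClock()
fired = []
detector = SimpleInactivityDetector(clock, 5, lambda: fired.append(True))
clock.advance(3)     # 3s of silence: not yet inactive
detector.activity()  # re-arms the 5s deadline
clock.advance(3)     # only 3s since the last activity
clock.advance(3)     # 6s of silence: callback fires
detector.stop()
```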
class FakeRequest(object):
def getClientIP(self):
return "127.0.0.1"
class FakeOpenDirectory(object):
def returnThisRecord(self, response):
self.recordResponse = response
def getUserRecord(self, ignored, username):
return self.recordResponse
def returnThisAuthResponse(self, response):
self.authResponse = response
def authenticateUserDigest(
self, ignored, node, username, challenge, response, method
):
return self.authResponse
ODNSerror = "Error"
class FakeCredentials(object):
def __init__(self, username, fields):
self.username = username
self.fields = fields
self.method = "POST"
calendarserver-7.0+dfsg/calendarserver/tools/test/test_calverify.py 0000664 0000000 0000000 00000276350 12607514243 0026034 0 ustar 00root root 0000000 0000000 ##
# Copyright (c) 2012-2015 Apple Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
##
"""
Tests for calendarserver.tools.calverify
"""
from __future__ import print_function
from calendarserver.tools.calverify import BadDataService, \
SchedulingMismatchService, DoubleBookingService, DarkPurgeService, \
EventSplitService
from pycalendar.datetime import DateTime
from twisted.internet import reactor
from twisted.internet.defer import inlineCallbacks
from twistedcaldav.config import config
from twistedcaldav.ical import normalize_iCalStr
from twistedcaldav.test.util import StoreTestCase
from txdav.common.datastore.test.util import populateCalendarsFrom
from StringIO import StringIO
OK_ICS = """BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Apple Inc.//iCal 4.0.1//EN
CALSCALE:GREGORIAN
BEGIN:VEVENT
CREATED:20100303T181216Z
UID:OK
DTEND:20100307T151500Z
TRANSP:OPAQUE
SUMMARY:Ancient event
DTSTART:20100307T111500Z
DTSTAMP:20100303T181220Z
SEQUENCE:2
END:VEVENT
END:VCALENDAR
""".replace("\n", "\r\n")
# Missing DTSTAMP
BAD1_ICS = """BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Apple Inc.//iCal 4.0.1//EN
CALSCALE:GREGORIAN
BEGIN:VEVENT
CREATED:20100303T181216Z
UID:BAD1
DTEND:20100307T151500Z
TRANSP:OPAQUE
SUMMARY:Ancient event
DTSTART:20100307T111500Z
SEQUENCE:2
END:VEVENT
END:VCALENDAR
""".replace("\n", "\r\n")
# Bad recurrence
BAD2_ICS = """BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Apple Inc.//iCal 4.0.1//EN
CALSCALE:GREGORIAN
BEGIN:VEVENT
CREATED:20100303T181216Z
UID:BAD2
DTEND:20100307T151500Z
TRANSP:OPAQUE
SUMMARY:Ancient event
DTSTART:20100307T111500Z
DTSTAMP:20100303T181220Z
RRULE:FREQ=DAILY;COUNT=3
SEQUENCE:2
END:VEVENT
BEGIN:VEVENT
CREATED:20100303T181216Z
UID:BAD2
RECURRENCE-ID:20100307T120000Z
DTEND:20100307T151500Z
TRANSP:OPAQUE
SUMMARY:Ancient event
DTSTART:20100307T111500Z
DTSTAMP:20100303T181220Z
SEQUENCE:2
END:VEVENT
END:VCALENDAR
""".replace("\n", "\r\n")
# Missing Organizer
BAD3_ICS = """BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Apple Inc.//iCal 4.0.1//EN
CALSCALE:GREGORIAN
BEGIN:VEVENT
CREATED:20100303T181216Z
UID:BAD3
DTEND:20100307T151500Z
TRANSP:OPAQUE
SUMMARY:Ancient event
DTSTART:20100307T111500Z
DTSTAMP:20100303T181220Z
RRULE:FREQ=DAILY;COUNT=3
SEQUENCE:2
END:VEVENT
BEGIN:VEVENT
CREATED:20100303T181216Z
UID:BAD3
RECURRENCE-ID:20100307T111500Z
DTEND:20100307T151500Z
TRANSP:OPAQUE
SUMMARY:Ancient event
DTSTART:20100307T111500Z
DTSTAMP:20100303T181220Z
SEQUENCE:2
ORGANIZER:urn:x-uid:47B16BB4-DB5F-4BF6-85FE-A7DA54230F92
ATTENDEE:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0
ATTENDEE:urn:x-uid:47B16BB4-DB5F-4BF6-85FE-A7DA54230F92
END:VEVENT
END:VCALENDAR
""".replace("\n", "\r\n")
# https Organizer
BAD4_ICS = """BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Apple Inc.//iCal 4.0.1//EN
CALSCALE:GREGORIAN
BEGIN:VEVENT
CREATED:20100303T181216Z
UID:BAD4
DTEND:20100307T151500Z
TRANSP:OPAQUE
SUMMARY:Ancient event
DTSTART:20100307T111500Z
DTSTAMP:20100303T181220Z
ORGANIZER:http://demo.com:8008/principals/__uids__/D46F3D71-04B7-43C2-A7B6-6F92F92E61D0
ATTENDEE:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0
ATTENDEE:urn:x-uid:47B16BB4-DB5F-4BF6-85FE-A7DA54230F92
SEQUENCE:2
END:VEVENT
END:VCALENDAR
""".replace("\n", "\r\n")
# https Attendee
BAD5_ICS = """BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Apple Inc.//iCal 4.0.1//EN
CALSCALE:GREGORIAN
BEGIN:VEVENT
CREATED:20100303T181216Z
UID:BAD5
DTEND:20100307T151500Z
TRANSP:OPAQUE
SUMMARY:Ancient event
DTSTART:20100307T111500Z
DTSTAMP:20100303T181220Z
ORGANIZER:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0
ATTENDEE:http://demo.com:8008/principals/__uids__/D46F3D71-04B7-43C2-A7B6-6F92F92E61D0
ATTENDEE:urn:x-uid:47B16BB4-DB5F-4BF6-85FE-A7DA54230F92
SEQUENCE:2
END:VEVENT
END:VCALENDAR
""".replace("\n", "\r\n")
# https Organizer and Attendee
BAD6_ICS = """BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Apple Inc.//iCal 4.0.1//EN
CALSCALE:GREGORIAN
BEGIN:VEVENT
CREATED:20100303T181216Z
UID:BAD6
DTEND:20100307T151500Z
TRANSP:OPAQUE
SUMMARY:Ancient event
DTSTART:20100307T111500Z
DTSTAMP:20100303T181220Z
ORGANIZER:http://demo.com:8008/principals/__uids__/D46F3D71-04B7-43C2-A7B6-6F92F92E61D0
ATTENDEE:http://demo.com:8008/principals/__uids__/D46F3D71-04B7-43C2-A7B6-6F92F92E61D0
ATTENDEE:urn:x-uid:47B16BB4-DB5F-4BF6-85FE-A7DA54230F92
SEQUENCE:2
END:VEVENT
END:VCALENDAR
""".replace("\n", "\r\n")
# Base64 Organizer and Attendee parameter
OK8_ICS = """BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Apple Inc.//iCal 4.0.1//EN
CALSCALE:GREGORIAN
BEGIN:VEVENT
CREATED:20100303T181216Z
UID:OK8
DTEND:20100307T151500Z
TRANSP:OPAQUE
SUMMARY:Ancient event
DTSTART:20100307T111500Z
DTSTAMP:20100303T181220Z
ORGANIZER;CALENDARSERVER-OLD-CUA="base64-aHR0cDovL2RlbW8uY29tOjgwMDgvcHJpbm
NpcGFscy9fX3VpZHNfXy9ENDZGM0Q3MS0wNEI3LTQzQzItQTdCNi02RjkyRjkyRTYxRDA=":
urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0
ATTENDEE;CALENDARSERVER-OLD-CUA="base64-aHR0cDovL2RlbW8uY29tOjgwMDgvcHJpbmN
pcGFscy9fX3VpZHNfXy9ENDZGM0Q3MS0wNEI3LTQzQzItQTdCNi02RjkyRjkyRTYxRDA=":u
rn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0
ATTENDEE:urn:x-uid:47B16BB4-DB5F-4BF6-85FE-A7DA54230F92
SEQUENCE:2
END:VEVENT
END:VCALENDAR
""".replace("\n", "\r\n")
# Non-mailto: Organizer
BAD10_ICS = """BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Apple Inc.//iCal 4.0.1//EN
CALSCALE:GREGORIAN
BEGIN:VEVENT
CREATED:20100303T181216Z
UID:BAD10
DTEND:20100307T151500Z
TRANSP:OPAQUE
SUMMARY:Ancient event
DTSTART:20100307T111500Z
DTSTAMP:20100303T181220Z
ORGANIZER;CN=Example User1;SCHEDULE-AGENT=NONE:example1@example.com
ATTENDEE;CN=Example User1:example1@example.com
ATTENDEE;CN=Example User2:example2@example.com
ATTENDEE;CN=Example User3:/principals/users/example3
ATTENDEE;CN=Example User4:http://demo.com:8008/principals/users/example4
SEQUENCE:2
END:VEVENT
END:VCALENDAR
""".replace("\n", "\r\n")
# Bad recurrence EXDATE
BAD11_ICS = """BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Apple Inc.//iCal 4.0.1//EN
CALSCALE:GREGORIAN
BEGIN:VEVENT
CREATED:20100303T181216Z
UID:BAD11
DTEND:20100307T151500Z
TRANSP:OPAQUE
SUMMARY:Ancient event
DTSTART:20100307T111500Z
DTSTAMP:20100303T181220Z
EXDATE:20100314T111500Z
RRULE:FREQ=WEEKLY
SEQUENCE:2
END:VEVENT
BEGIN:VEVENT
CREATED:20100303T181216Z
UID:BAD11
RECURRENCE-ID:20100314T111500Z
DTEND:20100314T151500Z
TRANSP:OPAQUE
SUMMARY:Ancient event
DTSTART:20100314T111500Z
DTSTAMP:20100303T181220Z
SEQUENCE:2
END:VEVENT
END:VCALENDAR
""".replace("\n", "\r\n")
BAD12_ICS = """BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Apple Inc.//iCal 4.0.1//EN
CALSCALE:GREGORIAN
BEGIN:VEVENT
CREATED:20100303T181216Z
UID:BAD12
DTEND:20100307T151500Z
TRANSP:OPAQUE
SUMMARY:Ancient event
DTSTART:20100307T111500Z
DTSTAMP:20100303T181220Z
ORGANIZER:mailto:example2@example.com
ATTENDEE:mailto:example1@example.com
ATTENDEE:mailto:example2@example.com
SEQUENCE:2
END:VEVENT
END:VCALENDAR
""".replace("\n", "\r\n")
BAD13_ICS = """BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Apple Inc.//iCal 4.0.1//EN
CALSCALE:GREGORIAN
BEGIN:VEVENT
CREATED:20100303T181216Z
UID:BAD13
DTEND:20100307T151500Z
TRANSP:OPAQUE
SUMMARY:Ancient event
DTSTART:20100307T111500Z
DTSTAMP:20100303T181220Z
ATTENDEE:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0
ATTENDEE:urn:x-uid:47B16BB4-DB5F-4BF6-85FE-A7DA54230F92
SEQUENCE:2
END:VEVENT
END:VCALENDAR
""".replace("\n", "\r\n")
class CalVerifyDataTests(StoreTestCase):
"""
Tests calverify for iCalendar data problems.
"""
metadata = {
"accessMode": "PUBLIC",
"isScheduleObject": True,
"scheduleTag": "abc",
"scheduleEtags": (),
"hasPrivateComment": False,
}
requirements = {
"home1" : {
"calendar_1" : {
"ok.ics" : (OK_ICS, metadata,),
"bad1.ics" : (BAD1_ICS, metadata,),
"bad2.ics" : (BAD2_ICS, metadata,),
"bad3.ics" : (BAD3_ICS, metadata,),
"bad4.ics" : (BAD4_ICS, metadata,),
"bad5.ics" : (BAD5_ICS, metadata,),
"bad6.ics" : (BAD6_ICS, metadata,),
"ok8.ics" : (OK8_ICS, metadata,),
"bad10.ics" : (BAD10_ICS, metadata,),
"bad11.ics" : (BAD11_ICS, metadata,),
"bad12.ics" : (BAD12_ICS, metadata,),
"bad13.ics" : (BAD13_ICS, metadata,),
}
},
}
number_to_process = len(requirements["home1"]["calendar_1"])
@inlineCallbacks
def populate(self):
# Need to bypass normal validation inside the store
yield populateCalendarsFrom(self.requirements, self.storeUnderTest())
self.notifierFactory.reset()
def storeUnderTest(self):
"""
Create and return a L{CalendarStore} for testing.
"""
return self._sqlCalendarStore
def verifyResultsByUID(self, results, expected):
reported = set([(home, uid) for home, uid, _ignore_resid, _ignore_reason in results])
self.assertEqual(reported, expected)
@inlineCallbacks
def test_scanBadData(self):
"""
CalVerifyService.doScan without fix. Make sure it detects common errors.
Make sure sync-token is not changed.
"""
sync_token_old = (yield (yield self.calendarUnderTest()).syncToken())
yield self.commit()
options = {
"ical": True,
"fix": False,
"nobase64": False,
"verbose": False,
"uid": "",
"uuid": "",
"tzid": "",
}
output = StringIO()
calverify = BadDataService(self._sqlCalendarStore, options, output, reactor, config)
calverify.emailDomain = "example.com"
yield calverify.doAction()
self.assertEqual(calverify.results["Number of events to process"], self.number_to_process)
self.verifyResultsByUID(calverify.results["Bad iCalendar data"], set((
("home1", "BAD1",),
("home1", "BAD2",),
("home1", "BAD3",),
("home1", "BAD4",),
("home1", "BAD5",),
("home1", "BAD6",),
("home1", "BAD10",),
("home1", "BAD11",),
("home1", "BAD12",),
("home1", "BAD13",),
)))
sync_token_new = (yield (yield self.calendarUnderTest()).syncToken())
self.assertEqual(sync_token_old, sync_token_new)
@inlineCallbacks
def test_fixBadData(self):
"""
CalVerifyService.doScan with fix. Make sure it detects and fixes as much as it can.
Make sure sync-token is changed.
"""
sync_token_old = (yield (yield self.calendarUnderTest()).syncToken())
yield self.commit()
options = {
"ical": True,
"fix": True,
"nobase64": False,
"verbose": False,
"uid": "",
"uuid": "",
"tzid": "",
}
output = StringIO()
# Do fix
self.patch(config.Scheduling.Options, "PrincipalHostAliases", "demo.com")
self.patch(config, "HTTPPort", 8008)
calverify = BadDataService(self._sqlCalendarStore, options, output, reactor, config)
calverify.emailDomain = "example.com"
yield calverify.doAction()
self.assertEqual(calverify.results["Number of events to process"], self.number_to_process)
self.verifyResultsByUID(calverify.results["Bad iCalendar data"], set((
("home1", "BAD1",),
("home1", "BAD2",),
("home1", "BAD3",),
("home1", "BAD4",),
("home1", "BAD5",),
("home1", "BAD6",),
("home1", "BAD10",),
("home1", "BAD11",),
("home1", "BAD12",),
("home1", "BAD13",),
)))
# Do scan
options["fix"] = False
calverify = BadDataService(self._sqlCalendarStore, options, output, reactor, config)
calverify.emailDomain = "example.com"
yield calverify.doAction()
self.assertEqual(calverify.results["Number of events to process"], self.number_to_process)
self.verifyResultsByUID(calverify.results["Bad iCalendar data"], set((
("home1", "BAD1",),
("home1", "BAD13",),
)))
sync_token_new = (yield (yield self.calendarUnderTest()).syncToken())
self.assertNotEqual(sync_token_old, sync_token_new)
# Make sure mailto: fix results in urn:x-uid value without SCHEDULE-AGENT
obj = yield self.calendarObjectUnderTest(name="bad10.ics")
ical = yield obj.component()
org = ical.getOrganizerProperty()
self.assertEqual(org.value(), "urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0")
self.assertFalse(org.hasParameter("SCHEDULE-AGENT"))
for attendee in ical.getAllAttendeeProperties():
self.assertTrue(
attendee.value().startswith("urn:x-uid:") or
attendee.value().startswith("/principals")
)
@inlineCallbacks
def test_scanBadCuaOnly(self):
"""
CalVerifyService.doScan without fix for CALENDARSERVER-OLD-CUA only. Make sure it
detects the bad calendar user addresses. Make sure sync-token is not changed.
"""
sync_token_old = (yield (yield self.calendarUnderTest()).syncToken())
yield self.commit()
options = {
"ical": False,
"fix": False,
"badcua": True,
"nobase64": False,
"verbose": False,
"uid": "",
"uuid": "",
"tzid": "",
}
output = StringIO()
calverify = BadDataService(self._sqlCalendarStore, options, output, reactor, config)
calverify.emailDomain = "example.com"
yield calverify.doAction()
self.assertEqual(calverify.results["Number of events to process"], self.number_to_process)
self.verifyResultsByUID(calverify.results["Bad iCalendar data"], set((
("home1", "BAD4",),
("home1", "BAD5",),
("home1", "BAD6",),
("home1", "BAD10",),
("home1", "BAD12",),
)))
sync_token_new = (yield (yield self.calendarUnderTest()).syncToken())
self.assertEqual(sync_token_old, sync_token_new)
@inlineCallbacks
def test_fixBadCuaOnly(self):
"""
CalVerifyService.doScan with fix for CALENDARSERVER-OLD-CUA only. Make sure it detects
and fixes as much as it can. Make sure sync-token is changed.
"""
sync_token_old = (yield (yield self.calendarUnderTest()).syncToken())
yield self.commit()
options = {
"ical": False,
"fix": True,
"badcua": True,
"nobase64": False,
"verbose": False,
"uid": "",
"uuid": "",
"tzid": "",
}
output = StringIO()
# Do fix
self.patch(config.Scheduling.Options, "PrincipalHostAliases", "demo.com")
self.patch(config, "HTTPPort", 8008)
calverify = BadDataService(self._sqlCalendarStore, options, output, reactor, config)
calverify.emailDomain = "example.com"
yield calverify.doAction()
self.assertEqual(calverify.results["Number of events to process"], self.number_to_process)
self.verifyResultsByUID(calverify.results["Bad iCalendar data"], set((
("home1", "BAD4",),
("home1", "BAD5",),
("home1", "BAD6",),
("home1", "BAD10",),
("home1", "BAD12",),
)))
# Do scan
options["fix"] = False
calverify = BadDataService(self._sqlCalendarStore, options, output, reactor, config)
calverify.emailDomain = "example.com"
yield calverify.doAction()
self.assertEqual(calverify.results["Number of events to process"], self.number_to_process)
self.verifyResultsByUID(calverify.results["Bad iCalendar data"], set())
sync_token_new = (yield (yield self.calendarUnderTest()).syncToken())
self.assertNotEqual(sync_token_old, sync_token_new)
class CalVerifyMismatchTestsBase(StoreTestCase):
"""
Tests calverify for iCalendar mismatch problems.
"""
metadata = {
"accessMode": "PUBLIC",
"isScheduleObject": True,
"scheduleTag": "abc",
"scheduleEtags": (),
"hasPrivateComment": False,
}
uuid1 = "D46F3D71-04B7-43C2-A7B6-6F92F92E61D0"
uuid2 = "47B16BB4-DB5F-4BF6-85FE-A7DA54230F92"
uuid3 = "AC478592-7783-44D1-B2AE-52359B4E8415"
uuidl1 = "75EA36BE-F71B-40F9-81F9-CF59BF40CA8F"
@inlineCallbacks
def populate(self):
# Need to bypass normal validation inside the store
yield populateCalendarsFrom(self.requirements, self.storeUnderTest())
self.notifierFactory.reset()
now = DateTime.getToday()
now.setDay(1)
now.offsetMonth(2)
nowYear = now.getYear()
nowMonth = now.getMonth()
class CalVerifyMismatchTestsNonRecurring(CalVerifyMismatchTestsBase):
"""
Tests calverify for iCalendar mismatch problems for non-recurring events.
"""
# Organizer has event, attendees do not
MISSING_ATTENDEE_1_ICS = """BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Apple Inc.//iCal 4.0.1//EN
CALSCALE:GREGORIAN
BEGIN:VEVENT
CREATED:20100303T181216Z
UID:MISSING_ATTENDEE_ICS
DTEND:%(year)s%(month)02d07T151500Z
TRANSP:OPAQUE
SUMMARY:Ancient event
DTSTART:%(year)s%(month)02d07T111500Z
DTSTAMP:20100303T181220Z
SEQUENCE:2
ORGANIZER:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0
ATTENDEE:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0
ATTENDEE:urn:x-uid:47B16BB4-DB5F-4BF6-85FE-A7DA54230F92
ATTENDEE:urn:x-uid:AC478592-7783-44D1-B2AE-52359B4E8415
END:VEVENT
END:VCALENDAR
""".replace("\n", "\r\n") % {"year": nowYear, "month": nowMonth}
# Attendees have event, organizer does not
MISSING_ORGANIZER_2_ICS = """BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Apple Inc.//iCal 4.0.1//EN
CALSCALE:GREGORIAN
BEGIN:VEVENT
CREATED:20100303T181216Z
UID:MISSING_ORGANIZER_ICS
DTEND:%(year)s%(month)02d07T151500Z
TRANSP:OPAQUE
SUMMARY:Ancient event
DTSTART:%(year)s%(month)02d07T111500Z
DTSTAMP:20100303T181220Z
SEQUENCE:2
ORGANIZER:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0
ATTENDEE:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0
ATTENDEE:urn:x-uid:47B16BB4-DB5F-4BF6-85FE-A7DA54230F92
ATTENDEE:urn:x-uid:AC478592-7783-44D1-B2AE-52359B4E8415
END:VEVENT
END:VCALENDAR
""".replace("\n", "\r\n") % {"year": nowYear, "month": nowMonth}
MISSING_ORGANIZER_3_ICS = """BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Apple Inc.//iCal 4.0.1//EN
CALSCALE:GREGORIAN
BEGIN:VEVENT
CREATED:20100303T181216Z
UID:MISSING_ORGANIZER_ICS
DTEND:%(year)s%(month)02d07T151500Z
TRANSP:OPAQUE
SUMMARY:Ancient event
DTSTART:%(year)s%(month)02d07T111500Z
DTSTAMP:20100303T181220Z
SEQUENCE:2
ORGANIZER:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0
ATTENDEE:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0
ATTENDEE:urn:x-uid:47B16BB4-DB5F-4BF6-85FE-A7DA54230F92
ATTENDEE:urn:x-uid:AC478592-7783-44D1-B2AE-52359B4E8415
END:VEVENT
END:VCALENDAR
""".replace("\n", "\r\n") % {"year": nowYear, "month": nowMonth}
# Attendee partstat mismatch
MISMATCH_ATTENDEE_1_ICS = """BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Apple Inc.//iCal 4.0.1//EN
CALSCALE:GREGORIAN
BEGIN:VEVENT
CREATED:20100303T181216Z
UID:MISMATCH_ATTENDEE_ICS
DTEND:%(year)s%(month)02d07T151500Z
TRANSP:OPAQUE
SUMMARY:Ancient event
DTSTART:%(year)s%(month)02d07T111500Z
DTSTAMP:20100303T181220Z
SEQUENCE:2
ORGANIZER:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0
ATTENDEE;PARTSTAT=ACCEPTED:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0
ATTENDEE;PARTSTAT=DECLINED:urn:x-uid:47B16BB4-DB5F-4BF6-85FE-A7DA54230F92
ATTENDEE;PARTSTAT=NEEDS-ACTION:urn:x-uid:AC478592-7783-44D1-B2AE-52359B4E8415
END:VEVENT
END:VCALENDAR
""".replace("\n", "\r\n") % {"year": nowYear, "month": nowMonth}
MISMATCH_ATTENDEE_2_ICS = """BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Apple Inc.//iCal 4.0.1//EN
CALSCALE:GREGORIAN
BEGIN:VEVENT
CREATED:20100303T181216Z
UID:MISMATCH_ATTENDEE_ICS
DTEND:%(year)s%(month)02d07T151500Z
TRANSP:OPAQUE
SUMMARY:Ancient event
DTSTART:%(year)s%(month)02d07T111500Z
DTSTAMP:20100303T181220Z
SEQUENCE:2
ORGANIZER:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0
ATTENDEE;PARTSTAT=ACCEPTED:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0
ATTENDEE;PARTSTAT=NEEDS-ACTION:urn:x-uid:47B16BB4-DB5F-4BF6-85FE-A7DA54230F92
ATTENDEE;PARTSTAT=NEEDS-ACTION:urn:x-uid:AC478592-7783-44D1-B2AE-52359B4E8415
END:VEVENT
END:VCALENDAR
""".replace("\n", "\r\n") % {"year": nowYear, "month": nowMonth}
MISMATCH_ATTENDEE_3_ICS = """BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Apple Inc.//iCal 4.0.1//EN
CALSCALE:GREGORIAN
BEGIN:VEVENT
CREATED:20100303T181216Z
UID:MISMATCH_ATTENDEE_ICS
DTEND:%(year)s%(month)02d07T151500Z
TRANSP:OPAQUE
SUMMARY:Ancient event
DTSTART:%(year)s%(month)02d07T111500Z
DTSTAMP:20100303T181220Z
SEQUENCE:2
ORGANIZER:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0
ATTENDEE;PARTSTAT=ACCEPTED:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0
ATTENDEE;PARTSTAT=NEEDS-ACTION:urn:x-uid:47B16BB4-DB5F-4BF6-85FE-A7DA54230F92
ATTENDEE;PARTSTAT=ACCEPTED:urn:x-uid:AC478592-7783-44D1-B2AE-52359B4E8415
END:VEVENT
END:VCALENDAR
""".replace("\n", "\r\n") % {"year": nowYear, "month": nowMonth}
# Attendee events outside time range
MISMATCH2_ATTENDEE_1_ICS = """BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Apple Inc.//iCal 4.0.1//EN
CALSCALE:GREGORIAN
BEGIN:VEVENT
CREATED:20100303T181216Z
UID:MISMATCH2_ATTENDEE_ICS
DTEND:%(year)s%(month)02d07T151500Z
TRANSP:OPAQUE
SUMMARY:Ancient event
DTSTART:%(year)s%(month)02d07T111500Z
DTSTAMP:20100303T181220Z
SEQUENCE:2
ORGANIZER:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0
ATTENDEE;PARTSTAT=ACCEPTED:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0
ATTENDEE;PARTSTAT=DECLINED:urn:x-uid:47B16BB4-DB5F-4BF6-85FE-A7DA54230F92
ATTENDEE;PARTSTAT=NEEDS-ACTION:urn:x-uid:AC478592-7783-44D1-B2AE-52359B4E8415
END:VEVENT
END:VCALENDAR
""".replace("\n", "\r\n") % {"year": nowYear, "month": nowMonth}
MISMATCH2_ATTENDEE_2_ICS = """BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Apple Inc.//iCal 4.0.1//EN
CALSCALE:GREGORIAN
BEGIN:VEVENT
CREATED:20100303T181216Z
UID:MISMATCH2_ATTENDEE_ICS
DTEND:%(year)s%(month)02d07T151500Z
TRANSP:OPAQUE
SUMMARY:Ancient event
DTSTART:%(year)s%(month)02d07T111500Z
DTSTAMP:20100303T181220Z
SEQUENCE:2
ORGANIZER:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0
ATTENDEE;PARTSTAT=ACCEPTED:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0
ATTENDEE;PARTSTAT=NEEDS-ACTION:urn:x-uid:47B16BB4-DB5F-4BF6-85FE-A7DA54230F92
ATTENDEE;PARTSTAT=NEEDS-ACTION:urn:x-uid:AC478592-7783-44D1-B2AE-52359B4E8415
END:VEVENT
END:VCALENDAR
""".replace("\n", "\r\n") % {"year": nowYear, "month": nowMonth}
MISMATCH2_ATTENDEE_3_ICS = """BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Apple Inc.//iCal 4.0.1//EN
CALSCALE:GREGORIAN
BEGIN:VEVENT
CREATED:20100303T181216Z
UID:MISMATCH2_ATTENDEE_ICS
DTEND:%(year)s%(month)02d07T151500Z
TRANSP:OPAQUE
SUMMARY:Ancient event
DTSTART:%(year)s%(month)02d07T111500Z
DTSTAMP:20100303T181220Z
SEQUENCE:2
ORGANIZER:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0
ATTENDEE;PARTSTAT=ACCEPTED:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0
ATTENDEE;PARTSTAT=NEEDS-ACTION:urn:x-uid:47B16BB4-DB5F-4BF6-85FE-A7DA54230F92
ATTENDEE;PARTSTAT=ACCEPTED:urn:x-uid:AC478592-7783-44D1-B2AE-52359B4E8415
END:VEVENT
END:VCALENDAR
""".replace("\n", "\r\n") % {"year": nowYear, "month": nowMonth}
# Organizer event outside time range
MISMATCH_ORGANIZER_1_ICS = """BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Apple Inc.//iCal 4.0.1//EN
CALSCALE:GREGORIAN
BEGIN:VEVENT
CREATED:20100303T181216Z
UID:MISMATCH_ORGANIZER_ICS
DTEND:%(year)s%(month)02d07T151500Z
TRANSP:OPAQUE
SUMMARY:Ancient event
DTSTART:%(year)s%(month)02d07T111500Z
DTSTAMP:20100303T181220Z
SEQUENCE:2
ORGANIZER:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0
ATTENDEE;PARTSTAT=ACCEPTED:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0
ATTENDEE;PARTSTAT=DECLINED:urn:x-uid:47B16BB4-DB5F-4BF6-85FE-A7DA54230F92
ATTENDEE;PARTSTAT=NEEDS-ACTION:urn:x-uid:AC478592-7783-44D1-B2AE-52359B4E8415
END:VEVENT
END:VCALENDAR
""".replace("\n", "\r\n") % {"year": nowYear - 1, "month": nowMonth}
MISMATCH_ORGANIZER_2_ICS = """BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Apple Inc.//iCal 4.0.1//EN
CALSCALE:GREGORIAN
BEGIN:VEVENT
CREATED:20100303T181216Z
UID:MISMATCH_ORGANIZER_ICS
DTEND:%(year)s%(month)02d07T151500Z
TRANSP:OPAQUE
SUMMARY:Ancient event
DTSTART:%(year)s%(month)02d07T111500Z
DTSTAMP:20100303T181220Z
SEQUENCE:2
ORGANIZER:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0
ATTENDEE;PARTSTAT=ACCEPTED:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0
ATTENDEE;PARTSTAT=NEEDS-ACTION:urn:x-uid:47B16BB4-DB5F-4BF6-85FE-A7DA54230F92
ATTENDEE;PARTSTAT=NEEDS-ACTION:urn:x-uid:AC478592-7783-44D1-B2AE-52359B4E8415
END:VEVENT
END:VCALENDAR
""".replace("\n", "\r\n") % {"year": nowYear, "month": nowMonth}
# Attendee uuid3 has event with different organizer
MISMATCH3_ATTENDEE_1_ICS = """BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Apple Inc.//iCal 4.0.1//EN
CALSCALE:GREGORIAN
BEGIN:VEVENT
CREATED:20100303T181216Z
UID:MISMATCH3_ATTENDEE_ICS
DTEND:%(year)s%(month)02d07T151500Z
TRANSP:OPAQUE
SUMMARY:Ancient event
DTSTART:%(year)s%(month)02d07T111500Z
DTSTAMP:20100303T181220Z
SEQUENCE:2
ORGANIZER:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0
ATTENDEE;PARTSTAT=ACCEPTED:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0
ATTENDEE;PARTSTAT=ACCEPTED:urn:x-uid:47B16BB4-DB5F-4BF6-85FE-A7DA54230F92
ATTENDEE;PARTSTAT=ACCEPTED:urn:x-uid:AC478592-7783-44D1-B2AE-52359B4E8415
END:VEVENT
END:VCALENDAR
""".replace("\n", "\r\n") % {"year": nowYear, "month": nowMonth}
MISMATCH3_ATTENDEE_2_ICS = """BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Apple Inc.//iCal 4.0.1//EN
CALSCALE:GREGORIAN
BEGIN:VEVENT
CREATED:20100303T181216Z
UID:MISMATCH3_ATTENDEE_ICS
DTEND:%(year)s%(month)02d07T151500Z
TRANSP:OPAQUE
SUMMARY:Ancient event
DTSTART:%(year)s%(month)02d07T111500Z
DTSTAMP:20100303T181220Z
SEQUENCE:2
ORGANIZER:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0
ATTENDEE;PARTSTAT=ACCEPTED:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0
ATTENDEE;PARTSTAT=ACCEPTED:urn:x-uid:47B16BB4-DB5F-4BF6-85FE-A7DA54230F92
END:VEVENT
END:VCALENDAR
""".replace("\n", "\r\n") % {"year": nowYear, "month": nowMonth}
MISMATCH3_ATTENDEE_3_ICS = """BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Apple Inc.//iCal 4.0.1//EN
CALSCALE:GREGORIAN
BEGIN:VEVENT
CREATED:20100303T181216Z
UID:MISMATCH3_ATTENDEE_ICS
DTEND:%(year)s%(month)02d07T151500Z
TRANSP:OPAQUE
SUMMARY:Ancient event
DTSTART:%(year)s%(month)02d07T111500Z
DTSTAMP:20100303T181220Z
SEQUENCE:2
ORGANIZER:urn:x-uid:47B16BB4-DB5F-4BF6-85FE-A7DA54230F92
ATTENDEE;PARTSTAT=ACCEPTED:urn:x-uid:47B16BB4-DB5F-4BF6-85FE-A7DA54230F92
END:VEVENT
END:VCALENDAR
""".replace("\n", "\r\n") % {"year": nowYear, "month": nowMonth}
MISMATCH_ORGANIZER_3_ICS = """BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Apple Inc.//iCal 4.0.1//EN
CALSCALE:GREGORIAN
BEGIN:VEVENT
CREATED:20100303T181216Z
UID:MISMATCH_ORGANIZER_ICS
DTEND:%(year)s%(month)02d07T151500Z
TRANSP:OPAQUE
SUMMARY:Ancient event
DTSTART:%(year)s%(month)02d07T111500Z
DTSTAMP:20100303T181220Z
SEQUENCE:2
ORGANIZER:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0
ATTENDEE;PARTSTAT=ACCEPTED:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0
ATTENDEE;PARTSTAT=NEEDS-ACTION:urn:x-uid:47B16BB4-DB5F-4BF6-85FE-A7DA54230F92
ATTENDEE;PARTSTAT=ACCEPTED:urn:x-uid:AC478592-7783-44D1-B2AE-52359B4E8415
END:VEVENT
END:VCALENDAR
""".replace("\n", "\r\n") % {"year": nowYear, "month": nowMonth}
# Attendee uuid3 has event they are not invited to
MISMATCH2_ORGANIZER_1_ICS = """BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Apple Inc.//iCal 4.0.1//EN
CALSCALE:GREGORIAN
BEGIN:VEVENT
CREATED:20100303T181216Z
UID:MISMATCH2_ORGANIZER_ICS
DTEND:%(year)s%(month)02d07T151500Z
TRANSP:OPAQUE
SUMMARY:Ancient event
DTSTART:%(year)s%(month)02d07T111500Z
DTSTAMP:20100303T181220Z
SEQUENCE:2
ORGANIZER:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0
ATTENDEE;PARTSTAT=ACCEPTED:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0
ATTENDEE;PARTSTAT=DECLINED:urn:x-uid:47B16BB4-DB5F-4BF6-85FE-A7DA54230F92
END:VEVENT
END:VCALENDAR
""".replace("\n", "\r\n") % {"year": nowYear, "month": nowMonth}
MISMATCH2_ORGANIZER_2_ICS = """BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Apple Inc.//iCal 4.0.1//EN
CALSCALE:GREGORIAN
BEGIN:VEVENT
CREATED:20100303T181216Z
UID:MISMATCH2_ORGANIZER_ICS
DTEND:%(year)s%(month)02d07T151500Z
TRANSP:OPAQUE
SUMMARY:Ancient event
DTSTART:%(year)s%(month)02d07T111500Z
DTSTAMP:20100303T181220Z
SEQUENCE:2
ORGANIZER:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0
ATTENDEE;PARTSTAT=ACCEPTED:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0
ATTENDEE;PARTSTAT=DECLINED:urn:x-uid:47B16BB4-DB5F-4BF6-85FE-A7DA54230F92
END:VEVENT
END:VCALENDAR
""".replace("\n", "\r\n") % {"year": nowYear, "month": nowMonth}
MISMATCH2_ORGANIZER_3_ICS = """BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Apple Inc.//iCal 4.0.1//EN
CALSCALE:GREGORIAN
BEGIN:VEVENT
CREATED:20100303T181216Z
UID:MISMATCH2_ORGANIZER_ICS
DTEND:%(year)s%(month)02d07T151500Z
TRANSP:OPAQUE
SUMMARY:Ancient event
DTSTART:%(year)s%(month)02d07T111500Z
DTSTAMP:20100303T181220Z
SEQUENCE:2
ORGANIZER:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0
ATTENDEE;PARTSTAT=ACCEPTED:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0
ATTENDEE;PARTSTAT=DECLINED:urn:x-uid:47B16BB4-DB5F-4BF6-85FE-A7DA54230F92
ATTENDEE;PARTSTAT=ACCEPTED:urn:x-uid:AC478592-7783-44D1-B2AE-52359B4E8415
END:VEVENT
END:VCALENDAR
""".replace("\n", "\r\n") % {"year": nowYear, "month": nowMonth}
requirements = {
CalVerifyMismatchTestsBase.uuid1 : {
"calendar" : {
"missing_attendee.ics" : (MISSING_ATTENDEE_1_ICS, CalVerifyMismatchTestsBase.metadata,),
"mismatched_attendee.ics" : (MISMATCH_ATTENDEE_1_ICS, CalVerifyMismatchTestsBase.metadata,),
"mismatched2_attendee.ics" : (MISMATCH2_ATTENDEE_1_ICS, CalVerifyMismatchTestsBase.metadata,),
"mismatched3_attendee.ics" : (MISMATCH3_ATTENDEE_1_ICS, CalVerifyMismatchTestsBase.metadata,),
"mismatched_organizer.ics" : (MISMATCH_ORGANIZER_1_ICS, CalVerifyMismatchTestsBase.metadata,),
"mismatched2_organizer.ics" : (MISMATCH2_ORGANIZER_1_ICS, CalVerifyMismatchTestsBase.metadata,),
},
"inbox" : {},
},
CalVerifyMismatchTestsBase.uuid2 : {
"calendar" : {
"mismatched_attendee.ics" : (MISMATCH_ATTENDEE_2_ICS, CalVerifyMismatchTestsBase.metadata,),
"mismatched2_attendee.ics" : (MISMATCH2_ATTENDEE_2_ICS, CalVerifyMismatchTestsBase.metadata,),
"mismatched3_attendee.ics" : (MISMATCH3_ATTENDEE_2_ICS, CalVerifyMismatchTestsBase.metadata,),
"missing_organizer.ics" : (MISSING_ORGANIZER_2_ICS, CalVerifyMismatchTestsBase.metadata,),
"mismatched_organizer.ics" : (MISMATCH_ORGANIZER_2_ICS, CalVerifyMismatchTestsBase.metadata,),
"mismatched2_organizer.ics" : (MISMATCH2_ORGANIZER_2_ICS, CalVerifyMismatchTestsBase.metadata,),
},
"inbox" : {},
},
CalVerifyMismatchTestsBase.uuid3 : {
"calendar" : {
"mismatched_attendee.ics" : (MISMATCH_ATTENDEE_3_ICS, CalVerifyMismatchTestsBase.metadata,),
"mismatched3_attendee.ics" : (MISMATCH3_ATTENDEE_3_ICS, CalVerifyMismatchTestsBase.metadata,),
"missing_organizer.ics" : (MISSING_ORGANIZER_3_ICS, CalVerifyMismatchTestsBase.metadata,),
"mismatched2_organizer.ics" : (MISMATCH2_ORGANIZER_3_ICS, CalVerifyMismatchTestsBase.metadata,),
},
"calendar2" : {
"mismatched_organizer.ics" : (MISMATCH_ORGANIZER_3_ICS, CalVerifyMismatchTestsBase.metadata,),
"mismatched2_attendee.ics" : (MISMATCH2_ATTENDEE_3_ICS, CalVerifyMismatchTestsBase.metadata,),
},
"inbox" : {},
},
}
@inlineCallbacks
def setUp(self):
yield super(CalVerifyMismatchTestsNonRecurring, self).setUp()
home = (yield self.homeUnderTest(name=self.uuid3))
calendar = (yield self.calendarUnderTest(name="calendar2", home=self.uuid3))
yield home.setDefaultCalendar(calendar, "VEVENT")
yield self.commit()
@inlineCallbacks
def test_scanMismatchOnly(self):
"""
CalVerifyService.doScan without fix for mismatches. Make sure it detects
as much as it can. Make sure sync-token is not changed.
"""
sync_token_old1 = (yield (yield self.calendarUnderTest(home=self.uuid1, name="calendar")).syncToken())
sync_token_old2 = (yield (yield self.calendarUnderTest(home=self.uuid2, name="calendar")).syncToken())
sync_token_old3 = (yield (yield self.calendarUnderTest(home=self.uuid3, name="calendar")).syncToken())
yield self.commit()
options = {
"ical": False,
"badcua": False,
"mismatch": True,
"nobase64": False,
"fix": False,
"verbose": False,
"details": False,
"uid": "",
"uuid": "",
"tzid": "",
"start": DateTime(nowYear, 1, 1, 0, 0, 0),
}
output = StringIO()
calverify = SchedulingMismatchService(self._sqlCalendarStore, options, output, reactor, config)
yield calverify.doAction()
self.assertEqual(calverify.results["Number of events to process"], 17)
self.assertEqual(calverify.results["Missing Attendee"], set((
("MISSING_ATTENDEE_ICS", self.uuid1, self.uuid2,),
("MISSING_ATTENDEE_ICS", self.uuid1, self.uuid3,),
)))
self.assertEqual(calverify.results["Mismatch Attendee"], set((
("MISMATCH_ATTENDEE_ICS", self.uuid1, self.uuid2,),
("MISMATCH_ATTENDEE_ICS", self.uuid1, self.uuid3,),
("MISMATCH2_ATTENDEE_ICS", self.uuid1, self.uuid2,),
("MISMATCH2_ATTENDEE_ICS", self.uuid1, self.uuid3,),
("MISMATCH3_ATTENDEE_ICS", self.uuid1, self.uuid3,),
)))
self.assertEqual(calverify.results["Missing Organizer"], set((
("MISSING_ORGANIZER_ICS", self.uuid2, self.uuid1,),
("MISSING_ORGANIZER_ICS", self.uuid3, self.uuid1,),
)))
self.assertEqual(calverify.results["Mismatch Organizer"], set((
("MISMATCH_ORGANIZER_ICS", self.uuid2, self.uuid1,),
("MISMATCH_ORGANIZER_ICS", self.uuid3, self.uuid1,),
("MISMATCH2_ORGANIZER_ICS", self.uuid3, self.uuid1,),
)))
self.assertTrue("Fix change event" not in calverify.results)
self.assertTrue("Fix add event" not in calverify.results)
self.assertTrue("Fix add inbox" not in calverify.results)
self.assertTrue("Fix remove" not in calverify.results)
self.assertTrue("Fix failures" not in calverify.results)
self.assertTrue("Auto-Accepts" not in calverify.results)
sync_token_new1 = (yield (yield self.calendarUnderTest(home=self.uuid1, name="calendar")).syncToken())
sync_token_new2 = (yield (yield self.calendarUnderTest(home=self.uuid2, name="calendar")).syncToken())
sync_token_new3 = (yield (yield self.calendarUnderTest(home=self.uuid3, name="calendar")).syncToken())
self.assertEqual(sync_token_old1, sync_token_new1)
self.assertEqual(sync_token_old2, sync_token_new2)
self.assertEqual(sync_token_old3, sync_token_new3)
@inlineCallbacks
def test_fixMismatch(self):
"""
CalVerifyService.doScan with fix for mismatches. Make sure it detects
and fixes as much as it can. Make sure sync-tokens change only for the calendars that were fixed.
"""
sync_token_old1 = (yield (yield self.calendarUnderTest(home=self.uuid1, name="calendar")).syncToken())
sync_token_old2 = (yield (yield self.calendarUnderTest(home=self.uuid2, name="calendar")).syncToken())
sync_token_old3 = (yield (yield self.calendarUnderTest(home=self.uuid3, name="calendar")).syncToken())
yield self.commit()
options = {
"ical": False,
"badcua": False,
"mismatch": True,
"nobase64": False,
"fix": True,
"verbose": False,
"details": False,
"uid": "",
"uuid": "",
"tzid": "",
"start": DateTime(nowYear, 1, 1, 0, 0, 0),
}
output = StringIO()
calverify = SchedulingMismatchService(self._sqlCalendarStore, options, output, reactor, config)
yield calverify.doAction()
self.assertEqual(calverify.results["Number of events to process"], 17)
self.assertEqual(calverify.results["Missing Attendee"], set((
("MISSING_ATTENDEE_ICS", self.uuid1, self.uuid2,),
("MISSING_ATTENDEE_ICS", self.uuid1, self.uuid3,),
)))
self.assertEqual(calverify.results["Mismatch Attendee"], set((
("MISMATCH_ATTENDEE_ICS", self.uuid1, self.uuid2,),
("MISMATCH_ATTENDEE_ICS", self.uuid1, self.uuid3,),
("MISMATCH2_ATTENDEE_ICS", self.uuid1, self.uuid2,),
("MISMATCH2_ATTENDEE_ICS", self.uuid1, self.uuid3,),
("MISMATCH3_ATTENDEE_ICS", self.uuid1, self.uuid3,),
)))
self.assertEqual(calverify.results["Missing Organizer"], set((
("MISSING_ORGANIZER_ICS", self.uuid2, self.uuid1,),
("MISSING_ORGANIZER_ICS", self.uuid3, self.uuid1,),
)))
self.assertEqual(calverify.results["Mismatch Organizer"], set((
("MISMATCH_ORGANIZER_ICS", self.uuid2, self.uuid1,),
("MISMATCH_ORGANIZER_ICS", self.uuid3, self.uuid1,),
("MISMATCH2_ORGANIZER_ICS", self.uuid3, self.uuid1,),
)))
self.assertEqual(calverify.results["Fix change event"], set((
(self.uuid2, "calendar", "MISMATCH_ATTENDEE_ICS",),
(self.uuid3, "calendar", "MISMATCH_ATTENDEE_ICS",),
(self.uuid2, "calendar", "MISMATCH2_ATTENDEE_ICS",),
(self.uuid3, "calendar2", "MISMATCH2_ATTENDEE_ICS",),
(self.uuid3, "calendar", "MISMATCH3_ATTENDEE_ICS",),
(self.uuid2, "calendar", "MISMATCH_ORGANIZER_ICS",),
(self.uuid3, "calendar2", "MISMATCH_ORGANIZER_ICS",),
)))
self.assertEqual(calverify.results["Fix add event"], set((
(self.uuid2, "calendar", "MISSING_ATTENDEE_ICS",),
(self.uuid3, "calendar2", "MISSING_ATTENDEE_ICS",),
)))
self.assertEqual(calverify.results["Fix add inbox"], set((
(self.uuid2, "MISSING_ATTENDEE_ICS",),
(self.uuid3, "MISSING_ATTENDEE_ICS",),
(self.uuid2, "MISMATCH_ATTENDEE_ICS",),
(self.uuid3, "MISMATCH_ATTENDEE_ICS",),
(self.uuid2, "MISMATCH2_ATTENDEE_ICS",),
(self.uuid3, "MISMATCH2_ATTENDEE_ICS",),
(self.uuid3, "MISMATCH3_ATTENDEE_ICS",),
(self.uuid2, "MISMATCH_ORGANIZER_ICS",),
(self.uuid3, "MISMATCH_ORGANIZER_ICS",),
)))
self.assertEqual(calverify.results["Fix remove"], set((
(self.uuid2, "calendar", "missing_organizer.ics",),
(self.uuid3, "calendar", "missing_organizer.ics",),
(self.uuid3, "calendar", "mismatched2_organizer.ics",),
)))
obj = yield self.calendarObjectUnderTest(home=self.uuid2, calendar_name="calendar", name="missing_organizer.ics")
self.assertEqual(obj, None)
obj = yield self.calendarObjectUnderTest(home=self.uuid3, calendar_name="calendar", name="missing_organizer.ics")
self.assertEqual(obj, None)
obj = yield self.calendarObjectUnderTest(home=self.uuid3, calendar_name="calendar", name="mismatched2_organizer.ics")
self.assertEqual(obj, None)
self.assertEqual(calverify.results["Fix failures"], 0)
self.assertEqual(calverify.results["Auto-Accepts"], [])
sync_token_new1 = (yield (yield self.calendarUnderTest(home=self.uuid1, name="calendar")).syncToken())
sync_token_new2 = (yield (yield self.calendarUnderTest(home=self.uuid2, name="calendar")).syncToken())
sync_token_new3 = (yield (yield self.calendarUnderTest(home=self.uuid3, name="calendar")).syncToken())
self.assertEqual(sync_token_old1, sync_token_new1)
self.assertNotEqual(sync_token_old2, sync_token_new2)
self.assertNotEqual(sync_token_old3, sync_token_new3)
# Re-scan after changes to make sure there are no errors
yield self.commit()
options["fix"] = False
calverify = SchedulingMismatchService(self._sqlCalendarStore, options, output, reactor, config)
yield calverify.doAction()
self.assertEqual(calverify.results["Number of events to process"], 14)
self.assertTrue("Missing Attendee" not in calverify.results)
self.assertTrue("Mismatch Attendee" not in calverify.results)
self.assertTrue("Missing Organizer" not in calverify.results)
self.assertTrue("Mismatch Organizer" not in calverify.results)
self.assertTrue("Fix add event" not in calverify.results)
self.assertTrue("Fix add inbox" not in calverify.results)
self.assertTrue("Fix remove" not in calverify.results)
self.assertTrue("Fix failures" not in calverify.results)
self.assertTrue("Auto-Accepts" not in calverify.results)
class CalVerifyMismatchTestsAutoAccept(CalVerifyMismatchTestsBase):
"""
Tests calverify for iCalendar mismatch problems for auto-accept attendees.
"""
# Organizer has event, attendee does not
MISSING_ATTENDEE_1_ICS = """BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Apple Inc.//iCal 4.0.1//EN
CALSCALE:GREGORIAN
BEGIN:VEVENT
CREATED:20100303T181216Z
UID:MISSING_ATTENDEE_ICS
DTEND:%(year)s%(month)02d07T151500Z
TRANSP:OPAQUE
SUMMARY:Ancient event
DTSTART:%(year)s%(month)02d07T111500Z
DTSTAMP:20100303T181220Z
SEQUENCE:2
ORGANIZER:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0
ATTENDEE:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0
ATTENDEE:urn:x-uid:75EA36BE-F71B-40F9-81F9-CF59BF40CA8F
END:VEVENT
END:VCALENDAR
""".replace("\n", "\r\n") % {"year": nowYear, "month": nowMonth}
# Attendee partstat mismatch
MISMATCH_ATTENDEE_1_ICS = """BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Apple Inc.//iCal 4.0.1//EN
CALSCALE:GREGORIAN
BEGIN:VEVENT
CREATED:20100303T181216Z
UID:MISMATCH_ATTENDEE_ICS
DTEND:%(year)s%(month)02d07T151500Z
TRANSP:OPAQUE
SUMMARY:Ancient event
DTSTART:%(year)s%(month)02d07T111500Z
DTSTAMP:20100303T181220Z
SEQUENCE:2
ORGANIZER:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0
ATTENDEE;PARTSTAT=ACCEPTED:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0
ATTENDEE;PARTSTAT=NEEDS-ACTION:urn:x-uid:75EA36BE-F71B-40F9-81F9-CF59BF40CA8F
END:VEVENT
END:VCALENDAR
""".replace("\n", "\r\n") % {"year": nowYear, "month": nowMonth}
MISMATCH_ATTENDEE_L1_ICS = """BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Apple Inc.//iCal 4.0.1//EN
CALSCALE:GREGORIAN
BEGIN:VEVENT
CREATED:20100303T181216Z
UID:MISMATCH_ATTENDEE_ICS
DTEND:%(year)s%(month)02d07T151500Z
TRANSP:OPAQUE
SUMMARY:Ancient event
DTSTART:%(year)s%(month)02d07T111500Z
DTSTAMP:20100303T181220Z
SEQUENCE:2
ORGANIZER:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0
ATTENDEE;PARTSTAT=ACCEPTED:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0
ATTENDEE;PARTSTAT=ACCEPTED:urn:x-uid:75EA36BE-F71B-40F9-81F9-CF59BF40CA8F
END:VEVENT
END:VCALENDAR
""".replace("\n", "\r\n") % {"year": nowYear, "month": nowMonth}
requirements = {
CalVerifyMismatchTestsBase.uuid1 : {
"calendar" : {
"missing_attendee.ics" : (MISSING_ATTENDEE_1_ICS, CalVerifyMismatchTestsBase.metadata,),
"mismatched_attendee.ics" : (MISMATCH_ATTENDEE_1_ICS, CalVerifyMismatchTestsBase.metadata,),
},
"inbox" : {},
},
CalVerifyMismatchTestsBase.uuid2 : {
"calendar" : {},
"inbox" : {},
},
CalVerifyMismatchTestsBase.uuid3 : {
"calendar" : {},
"inbox" : {},
},
CalVerifyMismatchTestsBase.uuidl1 : {
"calendar" : {
"mismatched_attendee.ics" : (MISMATCH_ATTENDEE_L1_ICS, CalVerifyMismatchTestsBase.metadata,),
},
"inbox" : {},
},
}
@inlineCallbacks
def test_scanMismatchOnly(self):
"""
CalVerifyService.doScan without fix for mismatches. Make sure it detects
as much as it can. Make sure sync-token is not changed.
"""
sync_token_old1 = (yield (yield self.calendarUnderTest(home=self.uuid1, name="calendar")).syncToken())
sync_token_oldl1 = (yield (yield self.calendarUnderTest(home=self.uuidl1, name="calendar")).syncToken())
yield self.commit()
options = {
"ical": False,
"badcua": False,
"mismatch": True,
"nobase64": False,
"fix": False,
"verbose": False,
"details": False,
"uid": "",
"uuid": "",
"tzid": "",
"start": DateTime(nowYear, 1, 1, 0, 0, 0),
}
output = StringIO()
calverify = SchedulingMismatchService(self._sqlCalendarStore, options, output, reactor, config)
yield calverify.doAction()
self.assertEqual(calverify.results["Number of events to process"], 3)
self.assertEqual(calverify.results["Missing Attendee"], set((
("MISSING_ATTENDEE_ICS", self.uuid1, self.uuidl1,),
)))
self.assertEqual(calverify.results["Mismatch Attendee"], set((
("MISMATCH_ATTENDEE_ICS", self.uuid1, self.uuidl1,),
)))
self.assertTrue("Missing Organizer" not in calverify.results)
self.assertTrue("Mismatch Organizer" not in calverify.results)
self.assertTrue("Fix change event" not in calverify.results)
self.assertTrue("Fix add event" not in calverify.results)
self.assertTrue("Fix add inbox" not in calverify.results)
self.assertTrue("Fix remove" not in calverify.results)
self.assertTrue("Fix failures" not in calverify.results)
self.assertTrue("Auto-Accepts" not in calverify.results)
sync_token_new1 = (yield (yield self.calendarUnderTest(home=self.uuid1, name="calendar")).syncToken())
sync_token_newl1 = (yield (yield self.calendarUnderTest(home=self.uuidl1, name="calendar")).syncToken())
self.assertEqual(sync_token_old1, sync_token_new1)
self.assertEqual(sync_token_oldl1, sync_token_newl1)
@inlineCallbacks
def test_fixMismatch(self):
"""
CalVerifyService.doScan with fix for mismatches. Make sure it detects
and fixes as much as it can. Make sure the sync-token changes only for the calendar that was fixed.
"""
sync_token_old1 = (yield (yield self.calendarUnderTest(home=self.uuid1, name="calendar")).syncToken())
sync_token_oldl1 = (yield (yield self.calendarUnderTest(home=self.uuidl1, name="calendar")).syncToken())
yield self.commit()
options = {
"ical": False,
"badcua": False,
"mismatch": True,
"nobase64": False,
"fix": True,
"verbose": False,
"details": False,
"uid": "",
"uuid": "",
"tzid": "",
"start": DateTime(nowYear, 1, 1, 0, 0, 0),
}
output = StringIO()
calverify = SchedulingMismatchService(self._sqlCalendarStore, options, output, reactor, config)
yield calverify.doAction()
self.assertEqual(calverify.results["Number of events to process"], 3)
self.assertEqual(calverify.results["Missing Attendee"], set((
("MISSING_ATTENDEE_ICS", self.uuid1, self.uuidl1,),
)))
self.assertEqual(calverify.results["Mismatch Attendee"], set((
("MISMATCH_ATTENDEE_ICS", self.uuid1, self.uuidl1,),
)))
self.assertTrue("Missing Organizer" not in calverify.results)
self.assertTrue("Mismatch Organizer" not in calverify.results)
self.assertEqual(calverify.results["Fix change event"], set((
(self.uuidl1, "calendar", "MISMATCH_ATTENDEE_ICS",),
)))
self.assertEqual(calverify.results["Fix add event"], set((
(self.uuidl1, "calendar", "MISSING_ATTENDEE_ICS",),
)))
self.assertEqual(calverify.results["Fix add inbox"], set((
(self.uuidl1, "MISSING_ATTENDEE_ICS",),
(self.uuidl1, "MISMATCH_ATTENDEE_ICS",),
)))
self.assertTrue("Fix remove" not in calverify.results)
self.assertEqual(calverify.results["Fix failures"], 0)
testResults = sorted(calverify.results["Auto-Accepts"], key=lambda x: x["uid"])
self.assertEqual(testResults[0]["path"], "/calendars/__uids__/%s/calendar/mismatched_attendee.ics" % self.uuidl1)
self.assertEqual(testResults[0]["uid"], "MISMATCH_ATTENDEE_ICS")
self.assertEqual(testResults[0]["start"].getText()[:8], "%(year)s%(month)02d07" % {"year": nowYear, "month": nowMonth})
self.assertEqual(testResults[1]["uid"], "MISSING_ATTENDEE_ICS")
self.assertEqual(testResults[1]["start"].getText()[:8], "%(year)s%(month)02d07" % {"year": nowYear, "month": nowMonth})
sync_token_new1 = (yield (yield self.calendarUnderTest(home=self.uuid1, name="calendar")).syncToken())
sync_token_newl1 = (yield (yield self.calendarUnderTest(home=self.uuidl1, name="calendar")).syncToken())
self.assertEqual(sync_token_old1, sync_token_new1)
self.assertNotEqual(sync_token_oldl1, sync_token_newl1)
# Re-scan after changes to make sure there are no errors
yield self.commit()
options["fix"] = False
calverify = SchedulingMismatchService(self._sqlCalendarStore, options, output, reactor, config)
yield calverify.doAction()
self.assertEqual(calverify.results["Number of events to process"], 4)
self.assertTrue("Missing Attendee" not in calverify.results)
self.assertTrue("Mismatch Attendee" not in calverify.results)
self.assertTrue("Missing Organizer" not in calverify.results)
self.assertTrue("Mismatch Organizer" not in calverify.results)
self.assertTrue("Fix add event" not in calverify.results)
self.assertTrue("Fix add inbox" not in calverify.results)
self.assertTrue("Fix remove" not in calverify.results)
self.assertTrue("Fix failures" not in calverify.results)
self.assertTrue("Auto-Accepts" not in calverify.results)
class CalVerifyMismatchTestsUUID(CalVerifyMismatchTestsBase):
"""
Tests calverify for iCalendar mismatch problems when limited to a single UUID.
"""
# Organizer has event, attendee does not
MISSING_ATTENDEE_1_ICS = """BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Apple Inc.//iCal 4.0.1//EN
CALSCALE:GREGORIAN
BEGIN:VEVENT
CREATED:20100303T181216Z
UID:MISSING_ATTENDEE_ICS
DTEND:%(year)s%(month)02d07T151500Z
TRANSP:OPAQUE
SUMMARY:Ancient event
DTSTART:%(year)s%(month)02d07T111500Z
DTSTAMP:20100303T181220Z
SEQUENCE:2
ORGANIZER:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0
ATTENDEE:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0
ATTENDEE:urn:x-uid:75EA36BE-F71B-40F9-81F9-CF59BF40CA8F
END:VEVENT
END:VCALENDAR
""".replace("\n", "\r\n") % {"year": nowYear, "month": nowMonth}
# Attendee partstat mismatch
MISMATCH_ATTENDEE_1_ICS = """BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Apple Inc.//iCal 4.0.1//EN
CALSCALE:GREGORIAN
BEGIN:VEVENT
CREATED:20100303T181216Z
UID:MISMATCH_ATTENDEE_ICS
DTEND:%(year)s%(month)02d07T151500Z
TRANSP:OPAQUE
SUMMARY:Ancient event
DTSTART:%(year)s%(month)02d07T111500Z
DTSTAMP:20100303T181220Z
SEQUENCE:2
ORGANIZER:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0
ATTENDEE;PARTSTAT=ACCEPTED:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0
ATTENDEE;PARTSTAT=NEEDS-ACTION:urn:x-uid:75EA36BE-F71B-40F9-81F9-CF59BF40CA8F
END:VEVENT
END:VCALENDAR
""".replace("\n", "\r\n") % {"year": nowYear, "month": nowMonth}
MISMATCH_ATTENDEE_L1_ICS = """BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Apple Inc.//iCal 4.0.1//EN
CALSCALE:GREGORIAN
BEGIN:VEVENT
CREATED:20100303T181216Z
UID:MISMATCH_ATTENDEE_ICS
DTEND:%(year)s%(month)02d07T151500Z
TRANSP:OPAQUE
SUMMARY:Ancient event
DTSTART:%(year)s%(month)02d07T111500Z
DTSTAMP:20100303T181220Z
SEQUENCE:2
ORGANIZER:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0
ATTENDEE;PARTSTAT=ACCEPTED:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0
ATTENDEE;PARTSTAT=ACCEPTED:urn:x-uid:75EA36BE-F71B-40F9-81F9-CF59BF40CA8F
END:VEVENT
END:VCALENDAR
""".replace("\n", "\r\n") % {"year": nowYear, "month": nowMonth}
requirements = {
CalVerifyMismatchTestsBase.uuid1 : {
"calendar" : {
"missing_attendee.ics" : (MISSING_ATTENDEE_1_ICS, CalVerifyMismatchTestsBase.metadata,),
"mismatched_attendee.ics" : (MISMATCH_ATTENDEE_1_ICS, CalVerifyMismatchTestsBase.metadata,),
},
"inbox" : {},
},
CalVerifyMismatchTestsBase.uuid2 : {
"calendar" : {},
"inbox" : {},
},
CalVerifyMismatchTestsBase.uuid3 : {
"calendar" : {},
"inbox" : {},
},
CalVerifyMismatchTestsBase.uuidl1 : {
"calendar" : {
"mismatched_attendee.ics" : (MISMATCH_ATTENDEE_L1_ICS, CalVerifyMismatchTestsBase.metadata,),
},
"inbox" : {},
},
}
@inlineCallbacks
def test_scanMismatchOnly(self):
"""
CalVerifyService.doScan without fix for mismatches. Make sure it detects
as much as it can. Make sure sync-token is not changed.
"""
sync_token_old1 = (yield (yield self.calendarUnderTest(home=self.uuid1, name="calendar")).syncToken())
sync_token_oldl1 = (yield (yield self.calendarUnderTest(home=self.uuidl1, name="calendar")).syncToken())
yield self.commit()
options = {
"ical": False,
"badcua": False,
"mismatch": True,
"nobase64": False,
"fix": False,
"verbose": False,
"details": False,
"uid": "",
"uuid": CalVerifyMismatchTestsBase.uuidl1,
"tzid": "",
"start": DateTime(nowYear, 1, 1, 0, 0, 0),
}
output = StringIO()
calverify = SchedulingMismatchService(self._sqlCalendarStore, options, output, reactor, config)
yield calverify.doAction()
self.assertEqual(calverify.results["Number of events to process"], 2)
self.assertTrue("Missing Attendee" not in calverify.results)
self.assertEqual(calverify.results["Mismatch Attendee"], set((
("MISMATCH_ATTENDEE_ICS", self.uuid1, self.uuidl1,),
)))
self.assertTrue("Missing Organizer" not in calverify.results)
self.assertTrue("Mismatch Organizer" not in calverify.results)
self.assertTrue("Fix change event" not in calverify.results)
self.assertTrue("Fix add event" not in calverify.results)
self.assertTrue("Fix add inbox" not in calverify.results)
self.assertTrue("Fix remove" not in calverify.results)
self.assertTrue("Fix failures" not in calverify.results)
self.assertTrue("Auto-Accepts" not in calverify.results)
sync_token_new1 = (yield (yield self.calendarUnderTest(home=self.uuid1, name="calendar")).syncToken())
sync_token_newl1 = (yield (yield self.calendarUnderTest(home=self.uuidl1, name="calendar")).syncToken())
self.assertEqual(sync_token_old1, sync_token_new1)
self.assertEqual(sync_token_oldl1, sync_token_newl1)
@inlineCallbacks
def test_fixMismatch(self):
"""
CalVerifyService.doScan with fix for mismatches. Make sure it detects
and fixes as much as it can. Make sure the fixed home's sync-token
changes while the other home's sync-token does not.
"""
sync_token_old1 = (yield (yield self.calendarUnderTest(home=self.uuid1, name="calendar")).syncToken())
sync_token_oldl1 = (yield (yield self.calendarUnderTest(home=self.uuidl1, name="calendar")).syncToken())
yield self.commit()
options = {
"ical": False,
"badcua": False,
"mismatch": True,
"nobase64": False,
"fix": True,
"verbose": False,
"details": False,
"uid": "",
"uuid": CalVerifyMismatchTestsBase.uuidl1,
"tzid": "",
"start": DateTime(nowYear, 1, 1, 0, 0, 0),
}
output = StringIO()
calverify = SchedulingMismatchService(self._sqlCalendarStore, options, output, reactor, config)
yield calverify.doAction()
self.assertEqual(calverify.results["Number of events to process"], 2)
self.assertTrue("Missing Attendee" not in calverify.results)
self.assertEqual(calverify.results["Mismatch Attendee"], set((
("MISMATCH_ATTENDEE_ICS", self.uuid1, self.uuidl1,),
)))
self.assertTrue("Missing Organizer" not in calverify.results)
self.assertTrue("Mismatch Organizer" not in calverify.results)
self.assertEqual(calverify.results["Fix change event"], set((
(self.uuidl1, "calendar", "MISMATCH_ATTENDEE_ICS",),
)))
self.assertTrue("Fix add event" not in calverify.results)
self.assertEqual(calverify.results["Fix add inbox"], set((
(self.uuidl1, "MISMATCH_ATTENDEE_ICS",),
)))
self.assertTrue("Fix remove" not in calverify.results)
self.assertEqual(calverify.results["Fix failures"], 0)
testResults = sorted(calverify.results["Auto-Accepts"], key=lambda x: x["uid"])
self.assertEqual(testResults[0]["path"], "/calendars/__uids__/%s/calendar/mismatched_attendee.ics" % self.uuidl1)
self.assertEqual(testResults[0]["uid"], "MISMATCH_ATTENDEE_ICS")
self.assertEqual(testResults[0]["start"].getText()[:8], "%(year)s%(month)02d07" % {"year": nowYear, "month": nowMonth})
sync_token_new1 = (yield (yield self.calendarUnderTest(home=self.uuid1, name="calendar")).syncToken())
sync_token_newl1 = (yield (yield self.calendarUnderTest(home=self.uuidl1, name="calendar")).syncToken())
self.assertEqual(sync_token_old1, sync_token_new1)
self.assertNotEqual(sync_token_oldl1, sync_token_newl1)
# Re-scan after changes to make sure there are no errors
yield self.commit()
options["fix"] = False
options["uuid"] = CalVerifyMismatchTestsBase.uuidl1
calverify = SchedulingMismatchService(self._sqlCalendarStore, options, output, reactor, config)
yield calverify.doAction()
self.assertEqual(calverify.results["Number of events to process"], 2)
self.assertTrue("Missing Attendee" not in calverify.results)
self.assertTrue("Mismatch Attendee" not in calverify.results)
self.assertTrue("Missing Organizer" not in calverify.results)
self.assertTrue("Mismatch Organizer" not in calverify.results)
self.assertTrue("Fix add event" not in calverify.results)
self.assertTrue("Fix add inbox" not in calverify.results)
self.assertTrue("Fix remove" not in calverify.results)
self.assertTrue("Fix failures" not in calverify.results)
self.assertTrue("Auto-Accepts" not in calverify.results)
class CalVerifyDoubleBooked(CalVerifyMismatchTestsBase):
"""
Tests calverify for double-bookings.
"""
# No overlap
INVITE_NO_OVERLAP_ICS = """BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Apple Inc.//iCal 4.0.1//EN
CALSCALE:GREGORIAN
BEGIN:VEVENT
CREATED:20100303T181216Z
UID:INVITE_NO_OVERLAP_ICS
TRANSP:OPAQUE
SUMMARY:INVITE_NO_OVERLAP_ICS
DTSTART:%(year)s%(month)02d07T100000Z
DURATION:PT1H
DTSTAMP:20100303T181220Z
SEQUENCE:2
ORGANIZER:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0
ATTENDEE:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0
ATTENDEE:urn:x-uid:75EA36BE-F71B-40F9-81F9-CF59BF40CA8F
END:VEVENT
END:VCALENDAR
""".replace("\n", "\r\n") % {"year": nowYear, "month": nowMonth}
# Two overlapping
INVITE_NO_OVERLAP1_1_ICS = """BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Apple Inc.//iCal 4.0.1//EN
CALSCALE:GREGORIAN
BEGIN:VEVENT
CREATED:20100303T181216Z
UID:INVITE_NO_OVERLAP1_1_ICS
TRANSP:OPAQUE
SUMMARY:INVITE_NO_OVERLAP1_1_ICS
DTSTART:%(year)s%(month)02d07T110000Z
DURATION:PT2H
DTSTAMP:20100303T181220Z
SEQUENCE:2
ORGANIZER:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0
ATTENDEE:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0
ATTENDEE:urn:x-uid:75EA36BE-F71B-40F9-81F9-CF59BF40CA8F
END:VEVENT
END:VCALENDAR
""".replace("\n", "\r\n") % {"year": nowYear, "month": nowMonth}
INVITE_NO_OVERLAP1_2_ICS = """BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Apple Inc.//iCal 4.0.1//EN
CALSCALE:GREGORIAN
BEGIN:VEVENT
CREATED:20100303T181216Z
UID:INVITE_NO_OVERLAP1_2_ICS
TRANSP:OPAQUE
SUMMARY:INVITE_NO_OVERLAP1_2_ICS
DTSTART:%(year)s%(month)02d07T120000Z
DURATION:PT1H
DTSTAMP:20100303T181220Z
SEQUENCE:2
ORGANIZER:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0
ATTENDEE:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0
ATTENDEE:urn:x-uid:75EA36BE-F71B-40F9-81F9-CF59BF40CA8F
END:VEVENT
END:VCALENDAR
""".replace("\n", "\r\n") % {"year": nowYear, "month": nowMonth}
# Two overlapping with one transparent
INVITE_NO_OVERLAP2_1_ICS = """BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Apple Inc.//iCal 4.0.1//EN
CALSCALE:GREGORIAN
BEGIN:VEVENT
CREATED:20100303T181216Z
UID:INVITE_NO_OVERLAP2_1_ICS
TRANSP:OPAQUE
SUMMARY:INVITE_NO_OVERLAP2_1_ICS
DTSTART:%(year)s%(month)02d07T140000Z
DURATION:PT2H
DTSTAMP:20100303T181220Z
SEQUENCE:2
ORGANIZER:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0
ATTENDEE:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0
ATTENDEE:urn:x-uid:75EA36BE-F71B-40F9-81F9-CF59BF40CA8F
END:VEVENT
END:VCALENDAR
""".replace("\n", "\r\n") % {"year": nowYear, "month": nowMonth}
INVITE_NO_OVERLAP2_2_ICS = """BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Apple Inc.//iCal 4.0.1//EN
CALSCALE:GREGORIAN
BEGIN:VEVENT
CREATED:20100303T181216Z
UID:INVITE_NO_OVERLAP2_2_ICS
SUMMARY:INVITE_NO_OVERLAP2_2_ICS
DTSTART:%(year)s%(month)02d07T150000Z
DURATION:PT1H
DTSTAMP:20100303T181220Z
SEQUENCE:2
ORGANIZER:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0
ATTENDEE:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0
ATTENDEE:urn:x-uid:75EA36BE-F71B-40F9-81F9-CF59BF40CA8F
TRANSP:TRANSPARENT
END:VEVENT
END:VCALENDAR
""".replace("\n", "\r\n") % {"year": nowYear, "month": nowMonth}
# Two overlapping with one cancelled
INVITE_NO_OVERLAP3_1_ICS = """BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Apple Inc.//iCal 4.0.1//EN
CALSCALE:GREGORIAN
BEGIN:VEVENT
CREATED:20100303T181216Z
UID:INVITE_NO_OVERLAP3_1_ICS
TRANSP:OPAQUE
SUMMARY:Ancient event
DTSTART:%(year)s%(month)02d07T170000Z
DURATION:PT2H
DTSTAMP:20100303T181220Z
SEQUENCE:2
ORGANIZER:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0
ATTENDEE:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0
ATTENDEE:urn:x-uid:75EA36BE-F71B-40F9-81F9-CF59BF40CA8F
END:VEVENT
END:VCALENDAR
""".replace("\n", "\r\n") % {"year": nowYear, "month": nowMonth}
INVITE_NO_OVERLAP3_2_ICS = """BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Apple Inc.//iCal 4.0.1//EN
CALSCALE:GREGORIAN
BEGIN:VEVENT
CREATED:20100303T181216Z
UID:INVITE_NO_OVERLAP3_2_ICS
SUMMARY:INVITE_NO_OVERLAP3_2_ICS
DTSTART:%(year)s%(month)02d07T180000Z
DURATION:PT1H
DTSTAMP:20100303T181220Z
SEQUENCE:2
ORGANIZER:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0
ATTENDEE:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0
ATTENDEE:urn:x-uid:75EA36BE-F71B-40F9-81F9-CF59BF40CA8F
STATUS:CANCELLED
END:VEVENT
END:VCALENDAR
""".replace("\n", "\r\n") % {"year": nowYear, "month": nowMonth}
# Two overlapping recurring
INVITE_NO_OVERLAP4_1_ICS = """BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Apple Inc.//iCal 4.0.1//EN
CALSCALE:GREGORIAN
BEGIN:VEVENT
CREATED:20100303T181216Z
UID:INVITE_NO_OVERLAP4_1_ICS
TRANSP:OPAQUE
SUMMARY:INVITE_NO_OVERLAP4_1_ICS
DTSTART:%(year)s%(month)02d08T120000Z
DURATION:PT2H
DTSTAMP:20100303T181220Z
SEQUENCE:2
ORGANIZER:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0
ATTENDEE:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0
ATTENDEE:urn:x-uid:75EA36BE-F71B-40F9-81F9-CF59BF40CA8F
RRULE:FREQ=DAILY;COUNT=3
END:VEVENT
END:VCALENDAR
""".replace("\n", "\r\n") % {"year": nowYear, "month": nowMonth}
INVITE_NO_OVERLAP4_2_ICS = """BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Apple Inc.//iCal 4.0.1//EN
CALSCALE:GREGORIAN
BEGIN:VEVENT
CREATED:20100303T181216Z
UID:INVITE_NO_OVERLAP4_2_ICS
SUMMARY:INVITE_NO_OVERLAP4_2_ICS
DTSTART:%(year)s%(month)02d09T120000Z
DURATION:PT1H
DTSTAMP:20100303T181220Z
SEQUENCE:2
ORGANIZER:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0
ATTENDEE:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0
ATTENDEE:urn:x-uid:75EA36BE-F71B-40F9-81F9-CF59BF40CA8F
RRULE:FREQ=DAILY;COUNT=2
END:VEVENT
END:VCALENDAR
""".replace("\n", "\r\n") % {"year": nowYear, "month": nowMonth}
# Two overlapping on one recurrence instance
INVITE_NO_OVERLAP5_1_ICS = """BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Apple Inc.//iCal 4.0.1//EN
CALSCALE:GREGORIAN
BEGIN:VEVENT
CREATED:20100303T181216Z
UID:INVITE_NO_OVERLAP5_1_ICS
TRANSP:OPAQUE
SUMMARY:INVITE_NO_OVERLAP5_1_ICS
DTSTART:%(year)s%(month)02d12T120000Z
DURATION:PT2H
DTSTAMP:20100303T181220Z
SEQUENCE:2
ORGANIZER:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0
ATTENDEE:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0
ATTENDEE:urn:x-uid:75EA36BE-F71B-40F9-81F9-CF59BF40CA8F
RRULE:FREQ=DAILY;COUNT=3
END:VEVENT
END:VCALENDAR
""".replace("\n", "\r\n") % {"year": nowYear, "month": nowMonth}
INVITE_NO_OVERLAP5_2_ICS = """BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Apple Inc.//iCal 4.0.1//EN
CALSCALE:GREGORIAN
BEGIN:VEVENT
CREATED:20100303T181216Z
UID:INVITE_NO_OVERLAP5_2_ICS
SUMMARY:INVITE_NO_OVERLAP5_2_ICS
DTSTART:%(year)s%(month)02d13T140000Z
DURATION:PT1H
DTSTAMP:20100303T181220Z
SEQUENCE:2
ORGANIZER:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0
ATTENDEE:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0
ATTENDEE:urn:x-uid:75EA36BE-F71B-40F9-81F9-CF59BF40CA8F
RRULE:FREQ=DAILY;COUNT=2
END:VEVENT
BEGIN:VEVENT
CREATED:20100303T181216Z
UID:INVITE_NO_OVERLAP5_2_ICS
SUMMARY:INVITE_NO_OVERLAP5_2_ICS
RECURRENCE-ID:%(year)s%(month)02d14T140000Z
DTSTART:%(year)s%(month)02d14T130000Z
DURATION:PT1H
DTSTAMP:20100303T181220Z
SEQUENCE:2
ORGANIZER:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0
ATTENDEE:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0
ATTENDEE:urn:x-uid:75EA36BE-F71B-40F9-81F9-CF59BF40CA8F
END:VEVENT
END:VCALENDAR
""".replace("\n", "\r\n") % {"year": nowYear, "month": nowMonth}
# Two not overlapping - one all-day
INVITE_NO_OVERLAP6_1_ICS = """BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Apple Inc.//iCal 4.0.1//EN
CALSCALE:GREGORIAN
BEGIN:VTIMEZONE
TZID:America/Los_Angeles
BEGIN:DAYLIGHT
DTSTART:20070311T020000
RRULE:FREQ=YEARLY;BYDAY=2SU;BYMONTH=3
TZNAME:PDT
TZOFFSETFROM:-0800
TZOFFSETTO:-0700
END:DAYLIGHT
BEGIN:STANDARD
DTSTART:20071104T020000
RRULE:FREQ=YEARLY;BYDAY=1SU;BYMONTH=11
TZNAME:PST
TZOFFSETFROM:-0700
TZOFFSETTO:-0800
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
CREATED:20100303T181216Z
UID:INVITE_NO_OVERLAP6_1_ICS
TRANSP:OPAQUE
SUMMARY:INVITE_NO_OVERLAP6_1_ICS
DTSTART;TZID=America/Los_Angeles:%(year)s%(month)02d20T200000
DURATION:PT2H
DTSTAMP:20100303T181220Z
SEQUENCE:2
ORGANIZER:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0
ATTENDEE:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0
ATTENDEE:urn:x-uid:75EA36BE-F71B-40F9-81F9-CF59BF40CA8F
END:VEVENT
END:VCALENDAR
""".replace("\n", "\r\n") % {"year": nowYear, "month": nowMonth}
INVITE_NO_OVERLAP6_2_ICS = """BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Apple Inc.//iCal 4.0.1//EN
CALSCALE:GREGORIAN
BEGIN:VEVENT
CREATED:20100303T181216Z
UID:INVITE_NO_OVERLAP6_2_ICS
TRANSP:OPAQUE
SUMMARY:INVITE_NO_OVERLAP6_2_ICS
DTSTART;VALUE=DATE:%(year)s%(month)02d21
DTSTAMP:20100303T181220Z
SEQUENCE:2
ORGANIZER:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0
ATTENDEE:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0
ATTENDEE:urn:x-uid:75EA36BE-F71B-40F9-81F9-CF59BF40CA8F
END:VEVENT
END:VCALENDAR
""".replace("\n", "\r\n") % {"year": nowYear, "month": nowMonth}
# Two overlapping - same organizer and summary
INVITE_NO_OVERLAP7_1_ICS = """BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Apple Inc.//iCal 4.0.1//EN
CALSCALE:GREGORIAN
BEGIN:VEVENT
CREATED:20100303T181216Z
UID:INVITE_NO_OVERLAP7_1_ICS
TRANSP:OPAQUE
SUMMARY:INVITE_NO_OVERLAP7_1_ICS
DTSTART:%(year)s%(month)02d23T110000Z
DURATION:PT2H
DTSTAMP:20100303T181220Z
SEQUENCE:2
ORGANIZER:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0
ATTENDEE:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0
ATTENDEE:urn:x-uid:75EA36BE-F71B-40F9-81F9-CF59BF40CA8F
END:VEVENT
END:VCALENDAR
""".replace("\n", "\r\n") % {"year": nowYear, "month": nowMonth}
INVITE_NO_OVERLAP7_2_ICS = """BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Apple Inc.//iCal 4.0.1//EN
CALSCALE:GREGORIAN
BEGIN:VEVENT
CREATED:20100303T181216Z
UID:INVITE_NO_OVERLAP7_2_ICS
TRANSP:OPAQUE
SUMMARY:INVITE_NO_OVERLAP7_1_ICS
DTSTART:%(year)s%(month)02d23T120000Z
DURATION:PT1H
DTSTAMP:20100303T181220Z
SEQUENCE:2
ORGANIZER:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0
ATTENDEE:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0
ATTENDEE:urn:x-uid:75EA36BE-F71B-40F9-81F9-CF59BF40CA8F
END:VEVENT
END:VCALENDAR
""".replace("\n", "\r\n") % {"year": nowYear, "month": nowMonth}
allEvents = {
"invite1.ics" : (INVITE_NO_OVERLAP_ICS, CalVerifyMismatchTestsBase.metadata,),
"invite2.ics" : (INVITE_NO_OVERLAP1_1_ICS, CalVerifyMismatchTestsBase.metadata,),
"invite3.ics" : (INVITE_NO_OVERLAP1_2_ICS, CalVerifyMismatchTestsBase.metadata,),
"invite4.ics" : (INVITE_NO_OVERLAP2_1_ICS, CalVerifyMismatchTestsBase.metadata,),
"invite5.ics" : (INVITE_NO_OVERLAP2_2_ICS, CalVerifyMismatchTestsBase.metadata,),
"invite6.ics" : (INVITE_NO_OVERLAP3_1_ICS, CalVerifyMismatchTestsBase.metadata,),
"invite7.ics" : (INVITE_NO_OVERLAP3_2_ICS, CalVerifyMismatchTestsBase.metadata,),
"invite8.ics" : (INVITE_NO_OVERLAP4_1_ICS, CalVerifyMismatchTestsBase.metadata,),
"invite9.ics" : (INVITE_NO_OVERLAP4_2_ICS, CalVerifyMismatchTestsBase.metadata,),
"invite10.ics" : (INVITE_NO_OVERLAP5_1_ICS, CalVerifyMismatchTestsBase.metadata,),
"invite11.ics" : (INVITE_NO_OVERLAP5_2_ICS, CalVerifyMismatchTestsBase.metadata,),
"invite12.ics" : (INVITE_NO_OVERLAP6_1_ICS, CalVerifyMismatchTestsBase.metadata,),
"invite13.ics" : (INVITE_NO_OVERLAP6_2_ICS, CalVerifyMismatchTestsBase.metadata,),
"invite14.ics" : (INVITE_NO_OVERLAP7_1_ICS, CalVerifyMismatchTestsBase.metadata,),
"invite15.ics" : (INVITE_NO_OVERLAP7_2_ICS, CalVerifyMismatchTestsBase.metadata,),
}
requirements = {
CalVerifyMismatchTestsBase.uuid1 : {
"calendar" : allEvents,
"inbox" : {},
},
CalVerifyMismatchTestsBase.uuid2 : {
"calendar" : {},
"inbox" : {},
},
CalVerifyMismatchTestsBase.uuid3 : {
"calendar" : {},
"inbox" : {},
},
CalVerifyMismatchTestsBase.uuidl1 : {
"calendar" : allEvents,
"inbox" : {},
},
}
@inlineCallbacks
def test_scanDoubleBookingOnly(self):
"""
CalVerifyService.doScan without fix for double-bookings. Make sure it
detects as much as it can. Make sure sync-token is not changed.
"""
sync_token_old1 = (yield (yield self.calendarUnderTest(home=self.uuid1, name="calendar")).syncToken())
sync_token_oldl1 = (yield (yield self.calendarUnderTest(home=self.uuidl1, name="calendar")).syncToken())
yield self.commit()
options = {
"ical": False,
"badcua": False,
"mismatch": False,
"nobase64": False,
"double": True,
"fix": False,
"verbose": False,
"details": False,
"summary": False,
"days": 365,
"uid": "",
"uuid": self.uuidl1,
"tzid": "utc",
"start": DateTime(nowYear, 1, 1, 0, 0, 0),
}
output = StringIO()
calverify = DoubleBookingService(self._sqlCalendarStore, options, output, reactor, config)
yield calverify.doAction()
self.assertEqual(calverify.results["Number of events to process"], len(self.requirements[CalVerifyMismatchTestsBase.uuidl1]["calendar"]))
self.assertEqual(
[(sorted((i.uid1, i.uid2,)), str(i.start),) for i in calverify.results["Double-bookings"]],
[
(["INVITE_NO_OVERLAP1_1_ICS", "INVITE_NO_OVERLAP1_2_ICS"], "%(year)s%(month)02d07T120000Z" % {"year": nowYear, "month": nowMonth}),
(["INVITE_NO_OVERLAP4_1_ICS", "INVITE_NO_OVERLAP4_2_ICS"], "%(year)s%(month)02d09T120000Z" % {"year": nowYear, "month": nowMonth}),
(["INVITE_NO_OVERLAP4_1_ICS", "INVITE_NO_OVERLAP4_2_ICS"], "%(year)s%(month)02d10T120000Z" % {"year": nowYear, "month": nowMonth}),
(["INVITE_NO_OVERLAP5_1_ICS", "INVITE_NO_OVERLAP5_2_ICS"], "%(year)s%(month)02d14T130000Z" % {"year": nowYear, "month": nowMonth}),
],
)
self.assertEqual(calverify.results["Number of double-bookings"], 4)
self.assertEqual(calverify.results["Number of unique double-bookings"], 3)
sync_token_new1 = (yield (yield self.calendarUnderTest(home=self.uuid1, name="calendar")).syncToken())
sync_token_newl1 = (yield (yield self.calendarUnderTest(home=self.uuidl1, name="calendar")).syncToken())
self.assertEqual(sync_token_old1, sync_token_new1)
self.assertEqual(sync_token_oldl1, sync_token_newl1)
class CalVerifyDarkPurge(CalVerifyMismatchTestsBase):
"""
Tests calverify dark-purge handling of events with missing or invalid
organizers.
"""
# No organizer
INVITE_NO_ORGANIZER_ICS = """BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Apple Inc.//iCal 4.0.1//EN
CALSCALE:GREGORIAN
BEGIN:VEVENT
CREATED:20100303T181216Z
UID:INVITE_NO_ORGANIZER_ICS
TRANSP:OPAQUE
SUMMARY:INVITE_NO_ORGANIZER_ICS
DTSTART:%(year)s%(month)02d07T100000Z
DURATION:PT1H
DTSTAMP:20100303T181220Z
SEQUENCE:2
END:VEVENT
END:VCALENDAR
""".replace("\n", "\r\n") % {"year": nowYear, "month": nowMonth}
# Valid organizer
INVITE_VALID_ORGANIZER_ICS = """BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Apple Inc.//iCal 4.0.1//EN
CALSCALE:GREGORIAN
BEGIN:VEVENT
CREATED:20100303T181216Z
UID:INVITE_VALID_ORGANIZER_ICS
TRANSP:OPAQUE
SUMMARY:INVITE_VALID_ORGANIZER_ICS
DTSTART:%(year)s%(month)02d08T100000Z
DURATION:PT1H
DTSTAMP:20100303T181220Z
SEQUENCE:2
ORGANIZER:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0
ATTENDEE:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0
ATTENDEE:urn:x-uid:75EA36BE-F71B-40F9-81F9-CF59BF40CA8F
END:VEVENT
END:VCALENDAR
""".replace("\n", "\r\n") % {"year": nowYear, "month": nowMonth}
# Invalid organizer #1
INVITE_INVALID_ORGANIZER_1_ICS = """BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Apple Inc.//iCal 4.0.1//EN
CALSCALE:GREGORIAN
BEGIN:VEVENT
CREATED:20100303T181216Z
UID:INVITE_INVALID_ORGANIZER_1_ICS
TRANSP:OPAQUE
SUMMARY:INVITE_INVALID_ORGANIZER_1_ICS
DTSTART:%(year)s%(month)02d09T100000Z
DURATION:PT1H
DTSTAMP:20100303T181220Z
SEQUENCE:2
ORGANIZER:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0-1
ATTENDEE:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0-1
ATTENDEE:urn:x-uid:75EA36BE-F71B-40F9-81F9-CF59BF40CA8F
END:VEVENT
END:VCALENDAR
""".replace("\n", "\r\n") % {"year": nowYear, "month": nowMonth}
# Invalid organizer #2
INVITE_INVALID_ORGANIZER_2_ICS = """BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Apple Inc.//iCal 4.0.1//EN
CALSCALE:GREGORIAN
BEGIN:VEVENT
CREATED:20100303T181216Z
UID:INVITE_INVALID_ORGANIZER_2_ICS
TRANSP:OPAQUE
SUMMARY:INVITE_INVALID_ORGANIZER_2_ICS
DTSTART:%(year)s%(month)02d10T100000Z
DURATION:PT1H
DTSTAMP:20100303T181220Z
SEQUENCE:2
ORGANIZER:mailto:foobar@example.com
ATTENDEE:mailto:foobar@example.com
ATTENDEE:urn:x-uid:75EA36BE-F71B-40F9-81F9-CF59BF40CA8F
END:VEVENT
END:VCALENDAR
""".replace("\n", "\r\n") % {"year": nowYear, "month": nowMonth}
allEvents = {
"invite1.ics" : (INVITE_NO_ORGANIZER_ICS, CalVerifyMismatchTestsBase.metadata,),
"invite2.ics" : (INVITE_VALID_ORGANIZER_ICS, CalVerifyMismatchTestsBase.metadata,),
"invite3.ics" : (INVITE_INVALID_ORGANIZER_1_ICS, CalVerifyMismatchTestsBase.metadata,),
"invite4.ics" : (INVITE_INVALID_ORGANIZER_2_ICS, CalVerifyMismatchTestsBase.metadata,),
}
requirements = {
CalVerifyMismatchTestsBase.uuid1 : {
"calendar" : {},
"inbox" : {},
},
CalVerifyMismatchTestsBase.uuid2 : {
"calendar" : {},
"inbox" : {},
},
CalVerifyMismatchTestsBase.uuid3 : {
"calendar" : {},
"inbox" : {},
},
CalVerifyMismatchTestsBase.uuidl1 : {
"calendar" : allEvents,
"inbox" : {},
},
}
@inlineCallbacks
def test_scanDarkEvents(self):
"""
CalVerifyService.doScan without fix for dark events. Make sure it detects
as much as it can. Make sure sync-token is not changed.
"""
sync_token_oldl1 = (yield (yield self.calendarUnderTest(home=self.uuidl1, name="calendar")).syncToken())
yield self.commit()
options = {
"ical": False,
"badcua": False,
"mismatch": False,
"nobase64": False,
"double": True,
"dark-purge": False,
"fix": False,
"verbose": False,
"details": False,
"summary": False,
"days": 365,
"uid": "",
"uuid": self.uuidl1,
"tzid": "utc",
"start": DateTime(nowYear, 1, 1, 0, 0, 0),
"no-organizer": False,
"invalid-organizer": False,
"disabled-organizer": False,
}
output = StringIO()
calverify = DarkPurgeService(self._sqlCalendarStore, options, output, reactor, config)
yield calverify.doAction()
self.assertEqual(calverify.results["Number of events to process"], len(self.requirements[CalVerifyMismatchTestsBase.uuidl1]["calendar"]))
self.assertEqual(
sorted([i.uid for i in calverify.results["Dark Events"]]),
["INVITE_INVALID_ORGANIZER_1_ICS", "INVITE_INVALID_ORGANIZER_2_ICS", ]
)
self.assertEqual(calverify.results["Number of dark events"], 2)
self.assertTrue("Fix dark events" not in calverify.results)
self.assertTrue("Fix remove" not in calverify.results)
sync_token_newl1 = (yield (yield self.calendarUnderTest(home=self.uuidl1, name="calendar")).syncToken())
self.assertEqual(sync_token_oldl1, sync_token_newl1)
@inlineCallbacks
def test_fixDarkEvents(self):
"""
CalVerifyService.doScan with fix for dark events. Make sure it detects
as much as it can. Make sure sync-token is changed.
"""
sync_token_oldl1 = (yield (yield self.calendarUnderTest(home=self.uuidl1, name="calendar")).syncToken())
yield self.commit()
options = {
"ical": False,
"badcua": False,
"mismatch": False,
"nobase64": False,
"double": True,
"dark-purge": False,
"fix": True,
"verbose": False,
"details": False,
"summary": False,
"days": 365,
"uid": "",
"uuid": self.uuidl1,
"tzid": "utc",
"start": DateTime(nowYear, 1, 1, 0, 0, 0),
"no-organizer": False,
"invalid-organizer": False,
"disabled-organizer": False,
}
output = StringIO()
calverify = DarkPurgeService(self._sqlCalendarStore, options, output, reactor, config)
yield calverify.doAction()
self.assertEqual(calverify.results["Number of events to process"], len(self.requirements[CalVerifyMismatchTestsBase.uuidl1]["calendar"]))
self.assertEqual(
sorted([i.uid for i in calverify.results["Dark Events"]]),
["INVITE_INVALID_ORGANIZER_1_ICS", "INVITE_INVALID_ORGANIZER_2_ICS", ]
)
self.assertEqual(calverify.results["Number of dark events"], 2)
self.assertEqual(calverify.results["Fix dark events"], 2)
self.assertTrue("Fix remove" in calverify.results)
sync_token_newl1 = (yield (yield self.calendarUnderTest(home=self.uuidl1, name="calendar")).syncToken())
self.assertNotEqual(sync_token_oldl1, sync_token_newl1)
# Re-scan after changes to make sure there are no errors
yield self.commit()
options["fix"] = False
options["uuid"] = self.uuidl1
calverify = DarkPurgeService(self._sqlCalendarStore, options, output, reactor, config)
yield calverify.doAction()
self.assertEqual(calverify.results["Number of events to process"], 2)
self.assertEqual(len(calverify.results["Dark Events"]), 0)
self.assertTrue("Fix dark events" not in calverify.results)
self.assertTrue("Fix remove" not in calverify.results)
@inlineCallbacks
def test_fixDarkEventsNoOrganizerOnly(self):
"""
CalVerifyService.doScan with fix restricted to dark events that have no
organizer. Make sure it detects as much as it can. Make sure sync-token
is changed.
"""
sync_token_oldl1 = (yield (yield self.calendarUnderTest(home=self.uuidl1, name="calendar")).syncToken())
yield self.commit()
options = {
"ical": False,
"badcua": False,
"mismatch": False,
"nobase64": False,
"double": True,
"dark-purge": False,
"fix": True,
"verbose": False,
"details": False,
"summary": False,
"days": 365,
"uid": "",
"uuid": self.uuidl1,
"tzid": "utc",
"start": DateTime(nowYear, 1, 1, 0, 0, 0),
"no-organizer": True,
"invalid-organizer": False,
"disabled-organizer": False,
}
output = StringIO()
calverify = DarkPurgeService(self._sqlCalendarStore, options, output, reactor, config)
yield calverify.doAction()
self.assertEqual(calverify.results["Number of events to process"], len(self.requirements[CalVerifyMismatchTestsBase.uuidl1]["calendar"]))
self.assertEqual(
sorted([i.uid for i in calverify.results["Dark Events"]]),
["INVITE_NO_ORGANIZER_ICS", ]
)
self.assertEqual(calverify.results["Number of dark events"], 1)
self.assertEqual(calverify.results["Fix dark events"], 1)
self.assertTrue("Fix remove" in calverify.results)
sync_token_newl1 = (yield (yield self.calendarUnderTest(home=self.uuidl1, name="calendar")).syncToken())
self.assertNotEqual(sync_token_oldl1, sync_token_newl1)
# Re-scan after changes to make sure there are no errors
yield self.commit()
options["fix"] = False
options["uuid"] = self.uuidl1
calverify = DarkPurgeService(self._sqlCalendarStore, options, output, reactor, config)
yield calverify.doAction()
self.assertEqual(calverify.results["Number of events to process"], 3)
self.assertEqual(len(calverify.results["Dark Events"]), 0)
self.assertTrue("Fix dark events" not in calverify.results)
self.assertTrue("Fix remove" not in calverify.results)
@inlineCallbacks
def test_fixDarkEventsAllTypes(self):
"""
CalVerifyService.doScan with fix for all types of dark events (no
organizer, invalid organizer, and disabled organizer). Make sure it
detects as much as it can. Make sure sync-token is changed.
"""
sync_token_oldl1 = (yield (yield self.calendarUnderTest(home=self.uuidl1, name="calendar")).syncToken())
yield self.commit()
options = {
"ical": False,
"badcua": False,
"mismatch": False,
"nobase64": False,
"double": True,
"dark-purge": False,
"fix": True,
"verbose": False,
"details": False,
"summary": False,
"days": 365,
"uid": "",
"uuid": self.uuidl1,
"tzid": "utc",
"start": DateTime(nowYear, 1, 1, 0, 0, 0),
"no-organizer": True,
"invalid-organizer": True,
"disabled-organizer": True,
}
output = StringIO()
calverify = DarkPurgeService(self._sqlCalendarStore, options, output, reactor, config)
yield calverify.doAction()
self.assertEqual(calverify.results["Number of events to process"], len(self.requirements[CalVerifyMismatchTestsBase.uuidl1]["calendar"]))
self.assertEqual(
sorted([i.uid for i in calverify.results["Dark Events"]]),
["INVITE_INVALID_ORGANIZER_1_ICS", "INVITE_INVALID_ORGANIZER_2_ICS", "INVITE_NO_ORGANIZER_ICS", ]
)
self.assertEqual(calverify.results["Number of dark events"], 3)
self.assertEqual(calverify.results["Fix dark events"], 3)
self.assertTrue("Fix remove" in calverify.results)
sync_token_newl1 = (yield (yield self.calendarUnderTest(home=self.uuidl1, name="calendar")).syncToken())
self.assertNotEqual(sync_token_oldl1, sync_token_newl1)
# Re-scan after changes to make sure there are no errors
yield self.commit()
options["fix"] = False
options["uuid"] = self.uuidl1
calverify = DarkPurgeService(self._sqlCalendarStore, options, output, reactor, config)
yield calverify.doAction()
self.assertEqual(calverify.results["Number of events to process"], 1)
self.assertEqual(len(calverify.results["Dark Events"]), 0)
self.assertTrue("Fix dark events" not in calverify.results)
self.assertTrue("Fix remove" not in calverify.results)
class CalVerifyEventPurge(CalVerifyMismatchTestsBase):
"""
Tests calverify purging of a principal's events.
"""
# No organizer
NO_ORGANIZER_ICS = """BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Apple Inc.//iCal 4.0.1//EN
CALSCALE:GREGORIAN
BEGIN:VEVENT
CREATED:20100303T181216Z
UID:INVITE_NO_ORGANIZER_ICS
TRANSP:OPAQUE
SUMMARY:INVITE_NO_ORGANIZER_ICS
DTSTART:%(now_fwd10)s
DURATION:PT1H
DTSTAMP:20100303T181220Z
SEQUENCE:2
END:VEVENT
END:VCALENDAR
""".replace("\n", "\r\n")
# Valid organizer
VALID_ORGANIZER_ICS = """BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Apple Inc.//iCal 4.0.1//EN
BEGIN:VEVENT
UID:INVITE_VALID_ORGANIZER_ICS
DTSTART:%(now)s
DURATION:PT1H
ATTENDEE;PARTSTAT=ACCEPTED:urn:x-uid:%(uuid1)s
ATTENDEE;RSVP=TRUE:urn:x-uid:%(uuid2)s
ORGANIZER:urn:x-uid:%(uuid1)s
RRULE:FREQ=DAILY
SUMMARY:INVITE_VALID_ORGANIZER_ICS
END:VEVENT
END:VCALENDAR
""".replace("\n", "\r\n")
# Valid attendee
VALID_ATTENDEE_ICS = """BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Apple Inc.//iCal 4.0.1//EN
BEGIN:VEVENT
UID:INVITE_VALID_ORGANIZER_ICS
DTSTART:%(now)s
DURATION:PT1H
ATTENDEE;PARTSTAT=ACCEPTED:urn:x-uid:%(uuid1)s
ATTENDEE;RSVP=TRUE:urn:x-uid:%(uuid2)s
ORGANIZER:urn:x-uid:%(uuid1)s
RRULE:FREQ=DAILY
SUMMARY:INVITE_VALID_ORGANIZER_ICS
END:VEVENT
END:VCALENDAR
""".replace("\n", "\r\n")
# Valid organizer
VALID_ORGANIZER_FUTURE_ICS = """BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Apple Inc.//iCal 4.0.1//EN
BEGIN:VEVENT
UID:INVITE_VALID_ORGANIZER_ICS
DTSTART:%(now_fwd11)s
DURATION:PT1H
ATTENDEE;PARTSTAT=ACCPETED:urn:x-uid:%(uuid1)s
ATTENDEE;RSVP=TRUE:urn:x-uid:%(uuid2)s
ORGANIZER:urn:x-uid:%(uuid1)s
RELATED-TO;RELTYPE=X-CALENDARSERVER-RECURRENCE-SET:%(relID)s
RRULE:FREQ=DAILY
SEQUENCE:1
SUMMARY:INVITE_VALID_ORGANIZER_ICS
END:VEVENT
END:VCALENDAR
""".replace("\n", "\r\n")
# Valid attendee
VALID_ATTENDEE_FUTURE_ICS = """BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Apple Inc.//iCal 4.0.1//EN
BEGIN:VEVENT
UID:INVITE_VALID_ORGANIZER_ICS
DTSTART:%(now_fwd11)s
DURATION:PT1H
ATTENDEE;PARTSTAT=ACCPETED:urn:x-uid:%(uuid1)s
ATTENDEE;RSVP=TRUE:urn:x-uid:%(uuid2)s
ORGANIZER:urn:x-uid:%(uuid1)s
RELATED-TO;RELTYPE=X-CALENDARSERVER-RECURRENCE-SET:%(relID)s
RRULE:FREQ=DAILY
SEQUENCE:1
SUMMARY:INVITE_VALID_ORGANIZER_ICS
END:VEVENT
END:VCALENDAR
""".replace("\n", "\r\n")
# Valid organizer
VALID_ORGANIZER_PAST_ICS = """BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Apple Inc.//iCal 4.0.1//EN
BEGIN:VEVENT
UID:%(uid)s
DTSTART:%(now)s
DURATION:PT1H
ATTENDEE;PARTSTAT=ACCPETED:urn:x-uid:%(uuid1)s
ATTENDEE;RSVP=TRUE:urn:x-uid:%(uuid2)s
ORGANIZER:urn:x-uid:%(uuid1)s
RELATED-TO;RELTYPE=X-CALENDARSERVER-RECURRENCE-SET:%(relID)s
RRULE:FREQ=DAILY;UNTIL=%(now_fwd11_1)s
SEQUENCE:1
SUMMARY:INVITE_VALID_ORGANIZER_ICS
END:VEVENT
END:VCALENDAR
""".replace("\n", "\r\n")
# Valid attendee
VALID_ATTENDEE_PAST_ICS = """BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Apple Inc.//iCal 4.0.1//EN
BEGIN:VEVENT
UID:%(uid)s
DTSTART:%(now)s
DURATION:PT1H
ATTENDEE;PARTSTAT=ACCPETED:urn:x-uid:%(uuid1)s
ATTENDEE;RSVP=TRUE:urn:x-uid:%(uuid2)s
ORGANIZER:urn:x-uid:%(uuid1)s
RELATED-TO;RELTYPE=X-CALENDARSERVER-RECURRENCE-SET:%(relID)s
RRULE:FREQ=DAILY;UNTIL=%(now_fwd11_1)s
SEQUENCE:1
SUMMARY:INVITE_VALID_ORGANIZER_ICS
END:VEVENT
END:VCALENDAR
""".replace("\n", "\r\n")
# Valid organizer
VALID_ORGANIZER_OVERRIDE_ICS = """BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Apple Inc.//iCal 4.0.1//EN
BEGIN:VEVENT
UID:VALID_ORGANIZER_OVERRIDE_ICS
DTSTART:%(now)s
DURATION:PT1H
ATTENDEE;PARTSTAT=ACCPETED:urn:x-uid:%(uuid1)s
ATTENDEE;RSVP=TRUE:urn:x-uid:%(uuid2)s
ORGANIZER:urn:x-uid:%(uuid1)s
RRULE:FREQ=DAILY
SUMMARY:INVITE_VALID_ORGANIZER_ICS
END:VEVENT
BEGIN:VEVENT
UID:VALID_ORGANIZER_OVERRIDE_ICS
RECURRENCE-ID:%(now_fwd11)s
DTSTART:%(now_fwd11)s
DURATION:PT2H
ATTENDEE;PARTSTAT=ACCPETED:urn:x-uid:%(uuid1)s
ATTENDEE;RSVP=TRUE:urn:x-uid:%(uuid2)s
ORGANIZER:urn:x-uid:%(uuid1)s
RRULE:FREQ=DAILY
SUMMARY:INVITE_VALID_ORGANIZER_ICS
END:VEVENT
END:VCALENDAR
""".replace("\n", "\r\n")
@inlineCallbacks
def setUp(self):
self.subs = {
"uuid1" : CalVerifyMismatchTestsBase.uuid1,
"uuid2" : CalVerifyMismatchTestsBase.uuid2,
}
self.now = DateTime.getNowUTC()
self.now.setHHMMSS(0, 0, 0)
self.subs["now"] = self.now
for i in range(30):
attrname = "now_back%s" % (i + 1,)
setattr(self, attrname, self.now.duplicate())
getattr(self, attrname).offsetDay(-(i + 1))
self.subs[attrname] = getattr(self, attrname)
attrname_12h = "now_back%s_12h" % (i + 1,)
setattr(self, attrname_12h, getattr(self, attrname).duplicate())
getattr(self, attrname_12h).offsetHours(12)
self.subs[attrname_12h] = getattr(self, attrname_12h)
attrname_1 = "now_back%s_1" % (i + 1,)
setattr(self, attrname_1, getattr(self, attrname).duplicate())
getattr(self, attrname_1).offsetSeconds(-1)
self.subs[attrname_1] = getattr(self, attrname_1)
for i in range(30):
attrname = "now_fwd%s" % (i + 1,)
setattr(self, attrname, self.now.duplicate())
getattr(self, attrname).offsetDay(i + 1)
self.subs[attrname] = getattr(self, attrname)
attrname_12h = "now_fwd%s_12h" % (i + 1,)
setattr(self, attrname_12h, getattr(self, attrname).duplicate())
getattr(self, attrname_12h).offsetHours(12)
self.subs[attrname_12h] = getattr(self, attrname_12h)
attrname_1 = "now_fwd%s_1" % (i + 1,)
setattr(self, attrname_1, getattr(self, attrname).duplicate())
getattr(self, attrname_1).offsetSeconds(-1)
self.subs[attrname_1] = getattr(self, attrname_1)
self.requirements = {
CalVerifyMismatchTestsBase.uuid1 : {
"calendar" : {
"invite1.ics" : (self.NO_ORGANIZER_ICS % self.subs, CalVerifyMismatchTestsBase.metadata,),
"invite2.ics" : (self.VALID_ORGANIZER_ICS % self.subs, CalVerifyMismatchTestsBase.metadata,),
"invite3.ics" : (self.VALID_ORGANIZER_OVERRIDE_ICS % self.subs, CalVerifyMismatchTestsBase.metadata,),
},
"inbox" : {},
},
CalVerifyMismatchTestsBase.uuid2 : {
"calendar" : {
"invite2a.ics" : (self.VALID_ATTENDEE_ICS % self.subs, CalVerifyMismatchTestsBase.metadata,),
},
"inbox" : {},
},
CalVerifyMismatchTestsBase.uuid3 : {
"calendar" : {},
"inbox" : {},
},
CalVerifyMismatchTestsBase.uuidl1 : {
"calendar" : {},
"inbox" : {},
},
}
yield super(CalVerifyEventPurge, self).setUp()
@inlineCallbacks
def test_validSplit(self):
"""
EventSplitService.doAction splits a recurring event at the requested
RECURRENCE-ID, keeping the future portion in the original resource and
moving the past portion to a new resource for organizer and attendee.
"""
options = {
"nuke": False,
"missing": False,
"ical": False,
"mismatch": False,
"double": True,
"dark-purge": False,
"split": True,
"path": "/calendars/__uids__/%(uuid1)s/calendar/invite2.ics" % self.subs,
"rid": "%(now_fwd11)s" % self.subs,
"summary": False,
}
output = StringIO()
calverify = EventSplitService(self._sqlCalendarStore, options, output, reactor, config)
oldUID, oldRelatedTo = yield calverify.doAction()
relsubs = dict(self.subs)
relsubs["uid"] = oldUID
relsubs["relID"] = oldRelatedTo
calendar = yield self.calendarUnderTest(home=CalVerifyMismatchTestsBase.uuid1, name="calendar")
objs = yield calendar.listObjectResources()
self.assertEqual(len(objs), 4)
self.assertTrue("invite2.ics" in objs)
oldName = filter(lambda x: not x.startswith("invite"), objs)[0]
obj1 = yield calendar.objectResourceWithName("invite2.ics")
ical1 = yield obj1.component()
self.assertEqual(normalize_iCalStr(ical1), self.VALID_ORGANIZER_FUTURE_ICS % relsubs)
obj2 = yield calendar.objectResourceWithName(oldName)
ical2 = yield obj2.component()
self.assertEqual(normalize_iCalStr(ical2), self.VALID_ORGANIZER_PAST_ICS % relsubs)
calendar = yield self.calendarUnderTest(home=CalVerifyMismatchTestsBase.uuid2, name="calendar")
objs = yield calendar.listObjectResources()
self.assertEqual(len(objs), 2)
self.assertTrue("invite2a.ics" in objs)
oldName = filter(lambda x: not x.startswith("invite"), objs)[0]
obj1 = yield calendar.objectResourceWithName("invite2a.ics")
ical1 = yield obj1.component()
self.assertEqual(normalize_iCalStr(ical1), self.VALID_ATTENDEE_FUTURE_ICS % relsubs)
obj2 = yield calendar.objectResourceWithName(oldName)
ical2 = yield obj2.component()
self.assertEqual(normalize_iCalStr(ical2), self.VALID_ATTENDEE_PAST_ICS % relsubs)
@inlineCallbacks
def test_summary(self):
"""
EventSplitService with the "summary" option only prints the candidate
instances of the targeted event, marking the requested split point.
"""
options = {
"nuke": False,
"missing": False,
"ical": False,
"mismatch": False,
"double": True,
"dark-purge": False,
"split": True,
"path": "/calendars/__uids__/%(uuid1)s/calendar/invite3.ics" % self.subs,
"rid": "%(now_fwd11)s" % self.subs,
"summary": True,
}
output = StringIO()
calverify = EventSplitService(self._sqlCalendarStore, options, output, reactor, config)
yield calverify.doAction()
result = output.getvalue().splitlines()
self.assertTrue("%(now)s" % self.subs in result)
self.assertTrue("%(now_fwd10)s" % self.subs in result)
self.assertTrue("%(now_fwd11)s *" % self.subs in result)
self.assertTrue("%(now_fwd12)s" % self.subs in result)
calendarserver-7.0+dfsg/calendarserver/tools/test/test_changeip.py
##
# Copyright (c) 2005-2015 Apple Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
##
from twistedcaldav.test.util import TestCase
from calendarserver.tools.changeip_calendar import updateConfig
class ChangeIPTestCase(TestCase):
def test_updateConfig(self):
plist = {
"Untouched": "dont_change_me",
"ServerHostName": "",
"Scheduling": {
"iMIP": {
"Receiving": {
"Server": "original_hostname",
},
"Sending": {
"Server": "original_hostname",
"Address": "user@original_hostname",
},
},
},
}
updateConfig(
plist, "10.1.1.1", "10.1.1.2", "original_hostname", "new_hostname"
)
self.assertEquals(
plist,
{
"Untouched": "dont_change_me",
"ServerHostName": "",
"Scheduling": {
"iMIP": {
"Receiving": {
"Server": "new_hostname",
},
"Sending": {
"Server": "new_hostname",
"Address": "user@new_hostname",
},
},
},
}
)
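The updateConfig helper exercised above rewrites hostname references throughout a nested plist dictionary. A minimal sketch of that recursive replacement (the helper name below is hypothetical, not the real implementation in changeip_calendar):

```python
def update_hostnames(node, old_host, new_host):
    # Recursively walk a plist-style structure, replacing the old
    # hostname wherever it appears inside string values; dicts and
    # lists are rebuilt, all other values pass through unchanged.
    if isinstance(node, dict):
        return dict(
            (key, update_hostnames(value, old_host, new_host))
            for key, value in node.items()
        )
    if isinstance(node, list):
        return [update_hostnames(item, old_host, new_host) for item in node]
    if isinstance(node, str):
        return node.replace(old_host, new_host)
    return node
```

This mirrors the test's expectation that "user@original_hostname" becomes "user@new_hostname" while untouched keys survive unchanged.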
calendarserver-7.0+dfsg/calendarserver/tools/test/test_config.py
##
# Copyright (c) 2013-2015 Apple Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
##
from twistedcaldav.test.util import TestCase
from twistedcaldav.config import ConfigDict
from calendarserver.tools.config import (
WritableConfig, setKeyPath, getKeyPath,
flattenDictionary, processArgs)
from calendarserver.tools.test.test_gateway import RunCommandTestCase
from twisted.internet.defer import inlineCallbacks
from twisted.python.filepath import FilePath
from xml.parsers.expat import ExpatError
import plistlib
PREAMBLE = """<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
"""
class WritableConfigTestCase(TestCase):
def setUp(self):
self.configFile = self.mktemp()
self.fp = FilePath(self.configFile)
def test_readSuccessful(self):
content = """<plist version="1.0"><dict><key>string</key><string>foo</string></dict></plist>"""
self.fp.setContent(PREAMBLE + content)
config = ConfigDict()
writable = WritableConfig(config, self.configFile)
writable.read()
self.assertEquals(writable.currentConfigSubset, {"string": "foo"})
def test_readInvalidXML(self):
self.fp.setContent("invalid")
config = ConfigDict()
writable = WritableConfig(config, self.configFile)
self.assertRaises(ExpatError, writable.read)
def test_updates(self):
content = """<plist version="1.0"><dict><key>key1</key><string>before</string><key>key2</key><integer>10</integer></dict></plist>"""
self.fp.setContent(PREAMBLE + content)
config = ConfigDict()
writable = WritableConfig(config, self.configFile)
writable.read()
writable.set({"key1": "after"})
writable.set({"key2": 15})
writable.set({"key2": 20}) # override previous set
writable.set({"key3": ["a", "b", "c"]})
self.assertEquals(writable.currentConfigSubset, {"key1": "after", "key2": 20, "key3": ["a", "b", "c"]})
writable.save()
writable2 = WritableConfig(config, self.configFile)
writable2.read()
self.assertEquals(writable2.currentConfigSubset, {"key1": "after", "key2": 20, "key3": ["a", "b", "c"]})
def test_convertToValue(self):
self.assertEquals(True, WritableConfig.convertToValue("True"))
self.assertEquals(False, WritableConfig.convertToValue("False"))
self.assertEquals(1, WritableConfig.convertToValue("1"))
self.assertEquals(1.2, WritableConfig.convertToValue("1.2"))
self.assertEquals("xyzzy", WritableConfig.convertToValue("xyzzy"))
self.assertEquals("xy.zzy", WritableConfig.convertToValue("xy.zzy"))
def test_processArgs(self):
"""
Ensure utf-8 encoded command line args are handled properly
"""
content = """<plist version="1.0"><dict><key>key1</key><string>before</string></dict></plist>"""
self.fp.setContent(PREAMBLE + content)
config = ConfigDict()
writable = WritableConfig(config, self.configFile)
writable.read()
processArgs(writable, ["key1=\xf0\x9f\x92\xa3"], restart=False)
writable2 = WritableConfig(config, self.configFile)
writable2.read()
self.assertEquals(writable2.currentConfigSubset, {'key1': u'\U0001f4a3'})
class ConfigTestCase(RunCommandTestCase):
@inlineCallbacks
def test_readConfig(self):
"""
Verify readConfig returns with only the writable keys
"""
results = yield self.runCommand(
command_readConfig,
script="calendarserver_config")
self.assertEquals(results["result"]["RedirectHTTPToHTTPS"], False)
self.assertEquals(results["result"]["EnableSearchAddressBook"], False)
self.assertEquals(results["result"]["EnableCalDAV"], True)
self.assertEquals(results["result"]["EnableCardDAV"], True)
self.assertEquals(results["result"]["EnableSSL"], False)
self.assertEquals(results["result"]["DefaultLogLevel"], "warn")
self.assertEquals(results["result"]["Notifications"]["Services"]["APNS"]["Enabled"], False)
# Verify not all keys are present, such as umask which is not accessible
self.assertFalse("umask" in results["result"])
@inlineCallbacks
def test_writeConfig(self):
"""
Verify writeConfig updates the writable plist file only
"""
results = yield self.runCommand(
command_writeConfig,
script="calendarserver_config")
self.assertEquals(results["result"]["EnableCalDAV"], False)
self.assertEquals(results["result"]["EnableCardDAV"], False)
self.assertEquals(results["result"]["EnableSSL"], True)
self.assertEquals(results["result"]["Notifications"]["Services"]["APNS"]["Enabled"], True)
hostName = "hostname_%s_%s" % (unichr(208), u"\ud83d\udca3")
self.assertTrue(results["result"]["ServerHostName"].endswith(hostName))
# We tried to modify ServerRoot, but make sure our change did not take
self.assertNotEquals(results["result"]["ServerRoot"], "/You/Shall/Not/Pass")
# The static plist should still have EnableCalDAV = True
staticPlist = plistlib.readPlist(self.configFileName)
self.assertTrue(staticPlist["EnableCalDAV"])
@inlineCallbacks
def test_error(self):
"""
Verify sending a bogus command returns an error
"""
results = yield self.runCommand(
command_bogusCommand,
script="calendarserver_config")
self.assertEquals(results["error"], "Unknown command 'bogus'")
@inlineCallbacks
def test_commandLineArgs(self):
"""
Verify commandline assignments works
"""
# Check current values
results = yield self.runCommand(
command_readConfig,
script="calendarserver_config")
self.assertEquals(results["result"]["DefaultLogLevel"], "warn")
self.assertEquals(results["result"]["Scheduling"]["iMIP"]["Enabled"], False)
# Set a single-level key and a multi-level key
results = yield self.runCommand(
None,
script="calendarserver_config",
additionalArgs=["DefaultLogLevel=debug", "Scheduling.iMIP.Enabled=True"],
parseOutput=False
)
results = yield self.runCommand(
command_readConfig,
script="calendarserver_config")
self.assertEquals(results["result"]["DefaultLogLevel"], "debug")
self.assertEquals(results["result"]["Scheduling"]["iMIP"]["Enabled"], True)
self.assertEquals(results["result"]["LogLevels"], {})
# Directly set some LogLevels sub-keys
results = yield self.runCommand(
None,
script="calendarserver_config",
additionalArgs=["LogLevels.testing=debug", "LogLevels.testing2=warn"],
parseOutput=False
)
results = yield self.runCommand(
command_readConfig,
script="calendarserver_config")
self.assertEquals(
results["result"]["LogLevels"],
{"testing": "debug", "testing2" : "warn"}
)
# Test that setting additional sub-keys retains previous sub-keys
results = yield self.runCommand(
None,
script="calendarserver_config",
additionalArgs=["LogLevels.testing3=info", "LogLevels.testing4=warn"],
parseOutput=False
)
results = yield self.runCommand(
command_readConfig,
script="calendarserver_config")
self.assertEquals(
results["result"]["LogLevels"],
{"testing": "debug", "testing2" : "warn", "testing3": "info", "testing4" : "warn"}
)
# Test that an empty value deletes the key
results = yield self.runCommand(
None,
script="calendarserver_config",
additionalArgs=["LogLevels="],
parseOutput=False
)
results = yield self.runCommand(
command_readConfig,
script="calendarserver_config")
self.assertEquals(
results["result"]["LogLevels"], # now an empty dict
{}
)
@inlineCallbacks
def test_loggingCategories(self):
"""
Verify logging categories get mapped to the correct python module names
"""
# Check existing values
results = yield self.runCommand(
command_readConfig,
script="calendarserver_config")
self.assertEquals(results["result"]["LogLevels"], {})
# Set to directory and imip
results = yield self.runCommand(
None,
script="calendarserver_config",
additionalArgs=["--logging=directory,imip"],
parseOutput=False
)
results = yield self.runCommand(
command_readConfig,
script="calendarserver_config")
self.assertEquals(
results["result"]["LogLevels"],
{
"twext.who": "debug",
"txdav.who": "debug",
"txdav.caldav.datastore.scheduling.imip": "debug",
}
)
# Set to just imip, and the directory modules should go away
results = yield self.runCommand(
None,
script="calendarserver_config",
additionalArgs=["--logging=imip"],
parseOutput=False
)
results = yield self.runCommand(
command_readConfig,
script="calendarserver_config")
self.assertEquals(
results["result"]["LogLevels"],
{
"txdav.caldav.datastore.scheduling.imip": "debug",
}
)
# Set to default, and all modules should go away
results = yield self.runCommand(
None,
script="calendarserver_config",
additionalArgs=["--logging=default"],
parseOutput=False
)
results = yield self.runCommand(
command_readConfig,
script="calendarserver_config")
self.assertEquals(
results["result"]["LogLevels"], {}
)
@inlineCallbacks
def test_accountingCategories(self):
"""
Verify accounting categories get mapped to the correct keys
"""
# Check existing values
results = yield self.runCommand(
command_readConfig,
script="calendarserver_config")
self.assertEquals(
results["result"]["AccountingCategories"],
{
'AutoScheduling': False,
'HTTP': False,
'Implicit Errors': False,
'iSchedule': False,
'iTIP': False,
'iTIP-VFREEBUSY': False,
'migration': False,
}
)
# Set to http and itip
results = yield self.runCommand(
None,
script="calendarserver_config",
additionalArgs=["--accounting=http,itip"],
parseOutput=False
)
results = yield self.runCommand(
command_readConfig,
script="calendarserver_config")
self.assertEquals(
results["result"]["AccountingCategories"],
{
'AutoScheduling': False,
'HTTP': True,
'Implicit Errors': False,
'iSchedule': False,
'iTIP': True,
'iTIP-VFREEBUSY': True,
'migration': False,
}
)
# Set to off, so all are off
results = yield self.runCommand(
None,
script="calendarserver_config",
additionalArgs=["--accounting=off"],
parseOutput=False
)
results = yield self.runCommand(
command_readConfig,
script="calendarserver_config")
self.assertEquals(
results["result"]["AccountingCategories"],
{
'AutoScheduling': False,
'HTTP': False,
'Implicit Errors': False,
'iSchedule': False,
'iTIP': False,
'iTIP-VFREEBUSY': False,
'migration': False,
}
)
def test_keyPath(self):
d = ConfigDict()
setKeyPath(d, "one", "A")
setKeyPath(d, "one", "B")
setKeyPath(d, "two.one", "C")
setKeyPath(d, "two.one", "D")
setKeyPath(d, "two.two", "E")
setKeyPath(d, "three.one.one", "F")
setKeyPath(d, "three.one.two", "G")
self.assertEquals(d.one, "B")
self.assertEquals(d.two.one, "D")
self.assertEquals(d.two.two, "E")
self.assertEquals(d.three.one.one, "F")
self.assertEquals(d.three.one.two, "G")
self.assertEquals(getKeyPath(d, "one"), "B")
self.assertEquals(getKeyPath(d, "two.one"), "D")
self.assertEquals(getKeyPath(d, "two.two"), "E")
self.assertEquals(getKeyPath(d, "three.one.one"), "F")
self.assertEquals(getKeyPath(d, "three.one.two"), "G")
def test_flattenDictionary(self):
dictionary = {
"one" : "A",
"two" : {
"one" : "D",
"two" : "E",
},
"three" : {
"one" : {
"one" : "F",
"two" : "G",
},
},
}
self.assertEquals(
set(list(flattenDictionary(dictionary))),
set([("one", "A"), ("three.one.one", "F"), ("three.one.two", "G"), ("two.one", "D"), ("two.two", "E")])
)
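The dotted key-path behaviour asserted in test_keyPath and test_flattenDictionary can be sketched with plain dicts; the helpers below are illustrative re-implementations, not the ones imported from calendarserver.tools.config:

```python
def set_key_path(target, key_path, value):
    # Create intermediate dicts along the dotted path and set the leaf.
    parts = key_path.split(".")
    for part in parts[:-1]:
        target = target.setdefault(part, {})
    target[parts[-1]] = value

def get_key_path(target, key_path):
    # Follow the dotted path down to the leaf value.
    for part in key_path.split("."):
        target = target[part]
    return target

def flatten_dictionary(dictionary, current=""):
    # Yield (dottedKey, value) for every non-dict leaf.
    for key, value in dictionary.items():
        if isinstance(value, dict):
            for pair in flatten_dictionary(value, current + key + "."):
                yield pair
        else:
            yield (current + key, value)
```

As in the tests above, a second set on the same path overwrites the leaf, and flattening yields one (dottedKey, value) pair per leaf.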
command_readConfig = """<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
        <key>command</key>
        <string>readConfig</string>
</dict>
</plist>
"""
command_writeConfig = """<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
        <key>command</key>
        <string>writeConfig</string>
        <key>Values</key>
        <dict>
            <key>EnableCalDAV</key>
            <false/>
            <key>EnableCardDAV</key>
            <false/>
            <key>EnableSSL</key>
            <true/>
            <key>Notifications.Services.APNS.Enabled</key>
            <true/>
            <key>ServerRoot</key>
            <string>/You/Shall/Not/Pass</string>
            <key>ServerHostName</key>
            <string>hostname_%s_%s</string>
        </dict>
</dict>
</plist>
""" % (unichr(208), u"\ud83d\udca3")
command_bogusCommand = """<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
        <key>command</key>
        <string>bogus</string>
</dict>
</plist>
"""
calendarserver-7.0+dfsg/calendarserver/tools/test/test_diagnose.py
##
# Copyright (c) 2014-2015 Apple Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
##
from __future__ import print_function
from calendarserver.tools.diagnose import (
runCommand, FileNotFound
)
from twisted.trial.unittest import TestCase
class DiagnoseTestCase(TestCase):
def test_runCommand(self):
code, stdout, stderr = runCommand(
"/bin/ls", "-al", "/"
)
self.assertEquals(code, 0)
self.assertEquals(stderr, "")
self.assertTrue("total" in stdout)
def test_runCommand_nonExistent(self):
self.assertRaises(FileNotFound, runCommand, "/xyzzy/plugh/notthere")
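runCommand in calendarserver.tools.diagnose returns a (code, stdout, stderr) triple, as the assertions above rely on. A rough stand-in built on the standard library (a sketch only; the names run_command and CommandNotFound are hypothetical stand-ins for the real runCommand/FileNotFound):

```python
import os
import subprocess

class CommandNotFound(Exception):
    # Stand-in for the FileNotFound exception raised by the real tool.
    pass

def run_command(executable, *args):
    # Refuse to run a non-existent executable, then return the
    # (exitcode, stdout, stderr) triple the tests above assert on.
    if not os.path.exists(executable):
        raise CommandNotFound(executable)
    result = subprocess.run(
        (executable,) + args, capture_output=True, text=True
    )
    return result.returncode, result.stdout, result.stderr
```

Used as in the tests above: run_command("/bin/ls", "-al", "/") yields exit code 0 with the listing on stdout, while a missing path raises before anything is executed.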
calendarserver-7.0+dfsg/calendarserver/tools/test/test_export.py
##
# Copyright (c) 2011-2015 Apple Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
##
"""
Unit tests for L{calendarserver.tools.export}.
"""
import sys
from cStringIO import StringIO
from twisted.trial.unittest import TestCase
from twisted.internet.defer import inlineCallbacks
from twisted.python.modules import getModule
from twext.enterprise.ienterprise import AlreadyFinishedError
from twistedcaldav.ical import Component
from twistedcaldav.datafilters.test.test_peruserdata import dataForTwoUsers
from twistedcaldav.datafilters.test.test_peruserdata import resultForUser2
from twistedcaldav.test.util import StoreTestCase
from calendarserver.tools import export
from calendarserver.tools.export import ExportOptions, main
from calendarserver.tools.export import DirectoryExporter, UIDExporter
from twisted.python.filepath import FilePath
from twisted.internet.defer import Deferred
from txdav.common.datastore.test.util import populateCalendarsFrom
from calendarserver.tools.export import usage, exportToFile
def holiday(uid):
return (
getModule("twistedcaldav.test").filePath
.sibling("data").child("Holidays").child(uid + ".ics")
.getContent()
)
def sample(name):
return (
getModule("twistedcaldav.test").filePath
.sibling("data").child(name + ".ics").getContent()
)
valentines = holiday("C31854DA-1ED0-11D9-A5E0-000A958A3252")
newYears = holiday("C3184A66-1ED0-11D9-A5E0-000A958A3252")
payday = sample("PayDay")
one = sample("OneEvent")
another = sample("AnotherEvent")
third = sample("ThirdEvent")
class CommandLine(TestCase):
"""
Simple tests for command-line parsing.
"""
def test_usageMessage(self):
"""
The 'usage' message should print something to standard output (and
nothing to standard error) and exit.
"""
orig = sys.stdout
orige = sys.stderr
try:
out = sys.stdout = StringIO()
err = sys.stderr = StringIO()
self.assertRaises(SystemExit, usage)
finally:
sys.stdout = orig
sys.stderr = orige
self.assertEquals(len(out.getvalue()) > 0, True, "No output.")
self.assertEquals(len(err.getvalue()), 0)
def test_oneRecord(self):
"""
One '--record' option will result in a single L{DirectoryExporter}
object with no calendars in its list.
"""
eo = ExportOptions()
eo.parseOptions(["--record", "users:bob"])
self.assertEquals(len(eo.exporters), 1)
exp = eo.exporters[0]
self.assertIsInstance(exp, DirectoryExporter)
self.assertEquals(exp.recordType, "users")
self.assertEquals(exp.shortName, "bob")
self.assertEquals(exp.collections, [])
def test_oneUID(self):
"""
One '--uid' option will result in a single L{UIDExporter} object with no
calendars in its list.
"""
eo = ExportOptions()
eo.parseOptions(["--uid", "bob's your guid"])
self.assertEquals(len(eo.exporters), 1)
exp = eo.exporters[0]
self.assertIsInstance(exp, UIDExporter)
self.assertEquals(exp.uid, "bob's your guid")
def test_homeAndCollections(self):
"""
The --collection option adds specific calendars to the last home that
was exported.
"""
eo = ExportOptions()
eo.parseOptions(["--record", "users:bob",
"--collection", "work stuff",
"--uid", "jethroUID",
"--collection=fun stuff"])
self.assertEquals(len(eo.exporters), 2)
exp = eo.exporters[0]
self.assertEquals(exp.recordType, "users")
self.assertEquals(exp.shortName, "bob")
self.assertEquals(exp.collections, ["work stuff"])
exp = eo.exporters[1]
self.assertEquals(exp.uid, "jethroUID")
self.assertEquals(exp.collections, ["fun stuff"])
def test_outputFileSelection(self):
"""
The --output option selects the file to write to, '-' or no parameter
meaning stdout; the L{ExportOptions.openOutput} method returns that
file.
"""
eo = ExportOptions()
eo.parseOptions([])
self.assertIdentical(eo.openOutput(), sys.stdout)
eo = ExportOptions()
eo.parseOptions(["--output", "-"])
self.assertIdentical(eo.openOutput(), sys.stdout)
eo = ExportOptions()
tmpnam = self.mktemp()
eo.parseOptions(["--output", tmpnam])
self.assertEquals(eo.openOutput().name, tmpnam)
def test_outputFileError(self):
"""
If the output file cannot be opened for writing, an error will be
displayed to the user on stderr.
"""
io = StringIO()
systemExit = self.assertRaises(
SystemExit, main, ['calendarserver_export',
'--output', '/not/a/file'], io
)
self.assertEquals(systemExit.code, 1)
self.assertEquals(
io.getvalue(),
"Unable to open output file for writing: "
"[Errno 2] No such file or directory: '/not/a/file'\n")
class IntegrationTests(StoreTestCase):
"""
Tests for exporting data from a live store.
"""
@inlineCallbacks
def setUp(self):
"""
Set up a store and fix the imported C{utilityMain} function (normally
from L{calendarserver.tools.cmdline.utilityMain}) to point to a
temporary method of this class. Also, patch the imported C{reactor},
since the SUT needs to call C{reactor.stop()} in order to work with
L{utilityMain}.
"""
self.mainCalled = False
self.patch(export, "utilityMain", self.fakeUtilityMain)
yield super(IntegrationTests, self).setUp()
self.store = self.storeUnderTest()
self.waitToStop = Deferred()
def stop(self):
"""
Emulate reactor.stop(), which the service must call when it is done with
work.
"""
self.waitToStop.callback(None)
def fakeUtilityMain(self, configFileName, serviceClass, reactor=None, verbose=False):
"""
Verify a few basic things.
"""
if self.mainCalled:
raise RuntimeError(
"Main called twice during this test; duplicate reactor run.")
self.mainCalled = True
self.usedConfigFile = configFileName
self.usedReactor = reactor
self.exportService = serviceClass(self.store)
self.exportService.startService()
self.addCleanup(self.exportService.stopService)
@inlineCallbacks
def test_serviceState(self):
"""
export.main() invokes utilityMain with the configuration file specified
on the command line, and creates an L{ExporterService} pointed at the
appropriate store.
"""
tempConfig = self.mktemp()
main(['calendarserver_export', '--config', tempConfig, '--output',
self.mktemp()], reactor=self)
self.assertEquals(self.mainCalled, True, "Main not called.")
self.assertEquals(self.usedConfigFile, tempConfig)
self.assertEquals(self.usedReactor, self)
self.assertEquals(self.exportService.store, self.store)
yield self.waitToStop
@inlineCallbacks
def test_emptyCalendar(self):
"""
Exporting an empty calendar results in an empty calendar.
"""
io = StringIO()
value = yield exportToFile([], io)
# it doesn't return anything, it writes to the file.
self.assertEquals(value, None)
# but it should write a valid component to the file.
self.assertEquals(Component.fromString(io.getvalue()),
Component.newCalendar())
def txn(self):
"""
Create a new transaction and automatically clean it up when the test
completes.
"""
aTransaction = self.store.newTransaction()
@inlineCallbacks
def maybeAbort():
try:
yield aTransaction.abort()
except AlreadyFinishedError:
pass
self.addCleanup(maybeAbort)
return aTransaction
@inlineCallbacks
def test_oneEventCalendar(self):
"""
Exporting a calendar with one event in it will result in just that
event.
"""
yield populateCalendarsFrom(
{
"user01": {
"calendar1": {
"valentines-day.ics": (valentines, {})
}
}
}, self.store
)
expected = Component.newCalendar()
[theComponent] = Component.fromString(valentines).subcomponents()
expected.addComponent(theComponent)
io = StringIO()
yield exportToFile(
[(yield self.txn().calendarHomeWithUID("user01"))
.calendarWithName("calendar1")], io
)
self.assertEquals(Component.fromString(io.getvalue()),
expected)
@inlineCallbacks
def test_twoSimpleEvents(self):
"""
Exporting a calendar with two events in it will result in a VCALENDAR
component with both VEVENTs in it.
"""
yield populateCalendarsFrom(
{
"user01": {
"calendar1": {
"valentines-day.ics": (valentines, {}),
"new-years-day.ics": (newYears, {})
}
}
}, self.store
)
expected = Component.newCalendar()
a = Component.fromString(valentines)
b = Component.fromString(newYears)
for comp in a, b:
for sub in comp.subcomponents():
expected.addComponent(sub)
io = StringIO()
yield exportToFile(
[(yield self.txn().calendarHomeWithUID("user01"))
.calendarWithName("calendar1")], io
)
self.assertEquals(Component.fromString(io.getvalue()),
expected)
@inlineCallbacks
def test_onlyOneVTIMEZONE(self):
"""
C{VTIMEZONE} subcomponents with matching TZIDs in multiple event
calendar objects should only be rendered in the resulting output once.
(Note that the code to support this is actually in PyCalendar, not the
export tool itself.)
"""
yield populateCalendarsFrom(
{
"user01": {
"calendar1": {
"1.ics": (one, {}), # EST
"2.ics": (another, {}), # EST
"3.ics": (third, {}) # PST
}
}
}, self.store
)
io = StringIO()
yield exportToFile(
[(yield self.txn().calendarHomeWithUID("user01"))
.calendarWithName("calendar1")], io
)
result = Component.fromString(io.getvalue())
def filtered(name):
for c in result.subcomponents():
if c.name() == name:
yield c
timezones = list(filtered("VTIMEZONE"))
events = list(filtered("VEVENT"))
# Sanity check to make sure we picked up all three events:
self.assertEquals(len(events), 3)
self.assertEquals(len(timezones), 2)
self.assertEquals(set([tz.propertyValue("TZID") for tz in timezones]),
# Use an intentionally wrong TZID in order to make
# sure we don't depend on caching effects elsewhere.
set(["America/New_Yrok", "US/Pacific"]))
@inlineCallbacks
def test_perUserFiltering(self):
"""
L{exportToFile} performs per-user component filtering based on the owner
of that calendar.
"""
yield populateCalendarsFrom(
{
"user02": {
"calendar1": {
"peruser.ics": (dataForTwoUsers, {}), # EST
}
}
}, self.store
)
io = StringIO()
yield exportToFile(
[(yield self.txn().calendarHomeWithUID("user02"))
.calendarWithName("calendar1")], io
)
self.assertEquals(
Component.fromString(resultForUser2),
Component.fromString(io.getvalue())
)
@inlineCallbacks
def test_full(self):
"""
Running C{calendarserver_export} on the command line exports an ics
file. (Almost-full integration test, starting from the main point, using
as few test fakes as possible.)
Note: currently the only test for directory interaction.
"""
yield populateCalendarsFrom(
{
"user02": {
# TODO: more direct test for skipping inbox
"inbox": {
"inbox-item.ics": (valentines, {})
},
"calendar1": {
"peruser.ics": (dataForTwoUsers, {}), # EST
}
}
}, self.store
)
output = FilePath(self.mktemp())
main(['calendarserver_export', '--output',
output.path, '--user', 'user02'], reactor=self)
yield self.waitToStop
self.assertEquals(
Component.fromString(resultForUser2),
Component.fromString(output.getContent())
)
calendarserver-7.0+dfsg/calendarserver/tools/test/test_gateway.py
##
# Copyright (c) 2005-2015 Apple Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
##
from __future__ import print_function
import os
from plistlib import readPlistFromString
import plistlib
import xml
from twistedcaldav.stdconfig import config
from twext.python.filepath import CachingFilePath as FilePath
from twisted.internet import reactor
from twisted.internet.defer import inlineCallbacks, Deferred, returnValue
from twisted.trial.unittest import TestCase
from twistedcaldav import memcacher
from twistedcaldav.memcacheclient import ClientFactory
from twistedcaldav.test.util import CapturingProcessProtocol
from txdav.common.datastore.test.util import (
theStoreBuilder, StubNotifierFactory
)
from txdav.who.idirectory import AutoScheduleMode
from txdav.who.util import directoryFromConfig
class RunCommandTestCase(TestCase):
@inlineCallbacks
def setUp(self):
self.serverRoot = self.mktemp()
os.mkdir(self.serverRoot)
self.absoluteServerRoot = os.path.abspath(self.serverRoot)
configRoot = os.path.join(self.absoluteServerRoot, "Config")
if not os.path.exists(configRoot):
os.makedirs(configRoot)
dataRoot = os.path.join(self.absoluteServerRoot, "Data")
if not os.path.exists(dataRoot):
os.makedirs(dataRoot)
documentRoot = os.path.join(self.absoluteServerRoot, "Documents")
if not os.path.exists(documentRoot):
os.makedirs(documentRoot)
logRoot = os.path.join(self.absoluteServerRoot, "Logs")
if not os.path.exists(logRoot):
os.makedirs(logRoot)
runRoot = os.path.join(self.absoluteServerRoot, "Run")
if not os.path.exists(runRoot):
os.makedirs(runRoot)
config.reset()
testRoot = os.path.join(os.path.dirname(__file__), "gateway")
templateName = os.path.join(testRoot, "caldavd.plist")
templateFile = open(templateName)
template = templateFile.read()
templateFile.close()
databaseRoot = os.path.abspath("_spawned_scripts_db" + str(os.getpid()))
newConfig = template % {
"ServerRoot": self.absoluteServerRoot,
"DataRoot": dataRoot,
"DatabaseRoot": databaseRoot,
"DocumentRoot": documentRoot,
"ConfigRoot": configRoot,
"LogRoot": logRoot,
"RunRoot": runRoot,
"WritablePlist": os.path.join(
os.path.abspath(configRoot), "caldavd-writable.plist"
),
}
configFilePath = FilePath(
os.path.join(configRoot, "caldavd.plist")
)
configFilePath.setContent(newConfig)
self.configFileName = configFilePath.path
config.load(self.configFileName)
config.Memcached.Pools.Default.ClientEnabled = False
config.Memcached.Pools.Default.ServerEnabled = False
ClientFactory.allowTestCache = True
memcacher.Memcacher.allowTestCache = True
memcacher.Memcacher.reset()
config.DirectoryAddressBook.Enabled = False
config.UsePackageTimezones = True
origUsersFile = FilePath(
os.path.join(
os.path.dirname(__file__),
"gateway",
"users-groups.xml"
)
)
copyUsersFile = FilePath(
os.path.join(config.DataRoot, "accounts.xml")
)
origUsersFile.copyTo(copyUsersFile)
origResourcesFile = FilePath(
os.path.join(
os.path.dirname(__file__),
"gateway",
"resources-locations.xml"
)
)
copyResourcesFile = FilePath(
os.path.join(config.DataRoot, "resources.xml")
)
origResourcesFile.copyTo(copyResourcesFile)
origAugmentFile = FilePath(
os.path.join(
os.path.dirname(__file__),
"gateway",
"augments.xml"
)
)
copyAugmentFile = FilePath(os.path.join(config.DataRoot, "augments.xml"))
origAugmentFile.copyTo(copyAugmentFile)
self.notifierFactory = StubNotifierFactory()
self.store = yield theStoreBuilder.buildStore(self, self.notifierFactory)
self.directory = directoryFromConfig(config, self.store)
@inlineCallbacks
def runCommand(
self, command, error=False, script="calendarserver_command_gateway",
additionalArgs=None, parseOutput=True
):
"""
Run the given command by feeding it as standard input to
calendarserver_command_gateway in a subprocess.
"""
if isinstance(command, unicode):
command = command.encode("utf-8")
sourceRoot = os.path.dirname(
os.path.dirname(os.path.dirname(os.path.dirname(__file__)))
)
cmd = script # assumes it's on PATH
args = [cmd, "-f", self.configFileName]
if error:
args.append("--error")
if additionalArgs:
args.extend(additionalArgs)
cwd = sourceRoot
deferred = Deferred()
reactor.spawnProcess(CapturingProcessProtocol(deferred, command), cmd, args, env=os.environ, path=cwd)
output = yield deferred
if parseOutput:
try:
plist = readPlistFromString(output)
returnValue(plist)
except xml.parsers.expat.ExpatError as e:
print("Error (%s) parsing (%s)" % (e, output))
raise
else:
returnValue(output)
class GatewayTestCase(RunCommandTestCase):
def _flush(self):
# Flush both XML directories
self.directory._directory.services[1].flush()
self.directory._directory.services[2].flush()
@inlineCallbacks
def test_getLocationAndResourceList(self):
results = yield self.runCommand(command_getLocationAndResourceList)
self.assertEquals(len(results["result"]), 20)
@inlineCallbacks
def test_getLocationList(self):
results = yield self.runCommand(command_getLocationList)
self.assertEquals(len(results["result"]), 10)
@inlineCallbacks
def test_getLocationAttributes(self):
yield self.runCommand(command_createLocation)
# Tell the resource services to flush their caches and re-read the XML
self._flush()
results = yield self.runCommand(command_getLocationAttributes)
# self.assertEquals(results["result"]["Capacity"], "40")
# self.assertEquals(results["result"]["Description"], "Test Description")
self.assertEquals(results["result"]["RecordName"], ["createdlocation01"])
self.assertEquals(
results["result"]["RealName"],
"Created Location 01 %s %s" % (unichr(208), u"\ud83d\udca3"))
# self.assertEquals(results["result"]["Comment"], "Test Comment")
self.assertEquals(results["result"]["AutoScheduleMode"], u"acceptIfFree")
self.assertEquals(results["result"]["AutoAcceptGroup"], "E5A6142C-4189-4E9E-90B0-9CD0268B314B")
self.assertEquals(set(results["result"]["ReadProxies"]), set(['user03', 'user04']))
self.assertEquals(set(results["result"]["WriteProxies"]), set(['user05', 'user06']))
@inlineCallbacks
def test_getResourceList(self):
results = yield self.runCommand(command_getResourceList)
self.assertEquals(len(results["result"]), 10)
@inlineCallbacks
def test_getResourceAttributes(self):
yield self.runCommand(command_createResource)
# Tell the resource services to flush their caches and re-read the XML
self._flush()
results = yield self.runCommand(command_getResourceAttributes)
# self.assertEquals(results["result"]["Comment"], "Test Comment")
# self.assertEquals(results["result"]["Type"], "Computer")
self.assertEquals(set(results["result"]["ReadProxies"]), set(['user03', 'user04']))
self.assertEquals(set(results["result"]["WriteProxies"]), set(['user05', 'user06']))
@inlineCallbacks
def test_createAddress(self):
record = yield self.directory.recordWithUID("C701069D-9CA1-4925-A1A9-5CD94767B74B")
self.assertEquals(record, None)
yield self.runCommand(command_createAddress)
# Tell the resource services to flush their caches and re-read the XML
self._flush()
record = yield self.directory.recordWithUID("C701069D-9CA1-4925-A1A9-5CD94767B74B")
self.assertEquals(
record.displayName,
"Created Address 01 %s %s" % (unichr(208), u"\ud83d\udca3")
)
self.assertEquals(record.abbreviatedName, "Addr1")
self.assertEquals(record.streetAddress, "1 Infinite Loop\nCupertino, 95014\nCA")
self.assertEquals(record.geographicLocation, "geo:37.331,-122.030")
results = yield self.runCommand(command_getAddressList)
self.assertEquals(len(results["result"]), 1)
results = yield self.runCommand(command_getAddressAttributes)
self.assertEquals(results["result"]["RealName"], u'Created Address 01 \xd0 \U0001f4a3')
results = yield self.runCommand(command_setAddressAttributes)
results = yield self.runCommand(command_getAddressAttributes)
self.assertEquals(results["result"]["RealName"], u'Updated Address')
self.assertEquals(results["result"]["StreetAddress"], u'Updated Street Address')
self.assertEquals(results["result"]["GeographicLocation"], u'Updated Geo')
results = yield self.runCommand(command_deleteAddress)
results = yield self.runCommand(command_getAddressList)
self.assertEquals(len(results["result"]), 0)
@inlineCallbacks
def test_createLocation(self):
record = yield self.directory.recordWithUID("836B1B66-2E9A-4F46-8B1C-3DD6772C20B2")
self.assertEquals(record, None)
yield self.runCommand(command_createLocation)
# Tell the resource services to flush their caches and re-read the XML
self._flush()
record = yield self.directory.recordWithUID("836B1B66-2E9A-4F46-8B1C-3DD6772C20B2")
self.assertEquals(
record.fullNames[0],
u"Created Location 01 %s %s" % (unichr(208), u"\ud83d\udca3"))
self.assertNotEquals(record, None)
# self.assertEquals(record.autoScheduleMode, "")
self.assertEquals(record.floor, u"First")
# self.assertEquals(record.extras["capacity"], "40")
results = yield self.runCommand(command_getLocationAttributes)
self.assertEquals(set(results["result"]["ReadProxies"]), set(['user03', 'user04']))
self.assertEquals(set(results["result"]["WriteProxies"]), set(['user05', 'user06']))
@inlineCallbacks
def test_createLocationMinimal(self):
"""
Ensure we can create a location with just the bare minimum of values,
i.e. RealName
"""
yield self.runCommand(command_createLocationMinimal)
# Tell the resource services to flush their caches and re-read the XML
self._flush()
records = yield self.directory.recordsWithRecordType(self.directory.recordType.location)
for record in records:
if record.displayName == u"Minimal":
break
else:
self.fail("We did not find the Minimal record")
@inlineCallbacks
def test_setLocationAttributes(self):
yield self.runCommand(command_createLocation)
yield self.runCommand(command_setLocationAttributes)
# Tell the resource services to flush their caches and re-read the XML
self._flush()
record = yield self.directory.recordWithUID("836B1B66-2E9A-4F46-8B1C-3DD6772C20B2")
# self.assertEquals(record.extras["comment"], "Updated Test Comment")
self.assertEquals(record.floor, "Second")
# self.assertEquals(record.extras["capacity"], "41")
self.assertEquals(record.autoScheduleMode, AutoScheduleMode.acceptIfFree)
self.assertEquals(record.autoAcceptGroup, "F5A6142C-4189-4E9E-90B0-9CD0268B314B")
results = yield self.runCommand(command_getLocationAttributes)
self.assertEquals(results["result"]["AutoScheduleMode"], "acceptIfFree")
self.assertEquals(results["result"]["AutoAcceptGroup"], "F5A6142C-4189-4E9E-90B0-9CD0268B314B")
self.assertEquals(set(results["result"]["ReadProxies"]), set(['user03']))
self.assertEquals(set(results["result"]["WriteProxies"]), set(['user05', 'user06', 'user07']))
@inlineCallbacks
def test_setAddressOnLocation(self):
yield self.runCommand(command_createLocation)
yield self.runCommand(command_createAddress)
yield self.runCommand(command_setAddressOnLocation)
results = yield self.runCommand(command_getLocationAttributes)
self.assertEquals(results["result"]["AssociatedAddress"], "C701069D-9CA1-4925-A1A9-5CD94767B74B")
self._flush()
record = yield self.directory.recordWithUID("836B1B66-2E9A-4F46-8B1C-3DD6772C20B2")
self.assertEquals(record.associatedAddress, "C701069D-9CA1-4925-A1A9-5CD94767B74B")
yield self.runCommand(command_removeAddressFromLocation)
results = yield self.runCommand(command_getLocationAttributes)
self.assertEquals(results["result"]["AssociatedAddress"], "")
self._flush()
record = yield self.directory.recordWithUID("836B1B66-2E9A-4F46-8B1C-3DD6772C20B2")
self.assertEquals(record.associatedAddress, u"")
@inlineCallbacks
def test_destroyLocation(self):
record = yield self.directory.recordWithUID("location01")
self.assertNotEquals(record, None)
yield self.runCommand(command_deleteLocation)
# Tell the resource services to flush their caches and re-read the XML
self._flush()
record = yield self.directory.recordWithUID("location01")
self.assertEquals(record, None)
@inlineCallbacks
def test_createResource(self):
record = yield self.directory.recordWithUID("AF575A61-CFA6-49E1-A0F6-B5662C9D9801")
self.assertEquals(record, None)
yield self.runCommand(command_createResource)
# Tell the resource services to flush their caches and re-read the XML
self._flush()
record = yield self.directory.recordWithUID("AF575A61-CFA6-49E1-A0F6-B5662C9D9801")
self.assertNotEquals(record, None)
@inlineCallbacks
def test_setResourceAttributes(self):
yield self.runCommand(command_createResource)
record = yield self.directory.recordWithUID("AF575A61-CFA6-49E1-A0F6-B5662C9D9801")
self.assertEquals(record.displayName, "Laptop 1")
yield self.runCommand(command_setResourceAttributes)
# Tell the resource services to flush their caches and re-read the XML
self._flush()
record = yield self.directory.recordWithUID("AF575A61-CFA6-49E1-A0F6-B5662C9D9801")
self.assertEquals(record.displayName, "Updated Laptop 1")
@inlineCallbacks
def test_destroyResource(self):
record = yield self.directory.recordWithUID("resource01")
self.assertNotEquals(record, None)
yield self.runCommand(command_deleteResource)
# Tell the resource services to flush their caches and re-read the XML
self._flush()
record = yield self.directory.recordWithUID("resource01")
self.assertEquals(record, None)
@inlineCallbacks
def test_addWriteProxy(self):
results = yield self.runCommand(command_addWriteProxy)
self.assertEquals(len(results["result"]["Proxies"]), 1)
@inlineCallbacks
def test_removeWriteProxy(self):
yield self.runCommand(command_addWriteProxy)
results = yield self.runCommand(command_removeWriteProxy)
self.assertEquals(len(results["result"]["Proxies"]), 0)
@inlineCallbacks
def test_purgeOldEvents(self):
results = yield self.runCommand(command_purgeOldEvents)
self.assertEquals(results["result"]["EventsRemoved"], 0)
self.assertEquals(results["result"]["RetainDays"], 42)
results = yield self.runCommand(command_purgeOldEventsNoDays)
self.assertEquals(results["result"]["RetainDays"], 365)
@inlineCallbacks
def test_readConfig(self):
"""
Verify readConfig returns only the writable keys
"""
results = yield self.runCommand(
command_readConfig,
script="calendarserver_config"
)
self.assertEquals(results["result"]["RedirectHTTPToHTTPS"], False)
self.assertEquals(results["result"]["EnableSearchAddressBook"], False)
self.assertEquals(results["result"]["EnableCalDAV"], True)
self.assertEquals(results["result"]["EnableCardDAV"], True)
self.assertEquals(results["result"]["EnableSSL"], False)
self.assertEquals(results["result"]["DefaultLogLevel"], "warn")
self.assertEquals(results["result"]["Notifications"]["Services"]["APNS"]["Enabled"], False)
# This is a read-only key that is still returned
self.assertEquals(results["result"]["ServerRoot"], self.absoluteServerRoot)
# Verify other non-writable keys, such as DataRoot, are not present
self.assertFalse("DataRoot" in results["result"])
@inlineCallbacks
def test_writeConfig(self):
"""
Verify writeConfig updates only the writable plist file
"""
results = yield self.runCommand(
command_writeConfig,
script="calendarserver_config"
)
self.assertEquals(results["result"]["EnableCalDAV"], False)
self.assertEquals(results["result"]["EnableCardDAV"], False)
self.assertEquals(results["result"]["EnableSSL"], True)
self.assertEquals(results["result"]["Notifications"]["Services"]["APNS"]["Enabled"], True)
hostName = "hostname_%s_%s" % (unichr(208), u"\ud83d\udca3")
self.assertTrue(results["result"]["ServerHostName"].endswith(hostName))
# The static plist should still have EnableCalDAV = True
staticPlist = plistlib.readPlist(self.configFileName)
self.assertTrue(staticPlist["EnableCalDAV"])
command_addReadProxy = """<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple Computer//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
<key>command</key>
<string>addReadProxy</string>
<key>Principal</key>
<string>location01</string>
<key>Proxy</key>
<string>user03</string>
</dict>
</plist>
"""
command_addWriteProxy = """<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple Computer//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
<key>command</key>
<string>addWriteProxy</string>
<key>Principal</key>
<string>location01</string>
<key>Proxy</key>
<string>user01</string>
</dict>
</plist>
"""
command_createAddress = """<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple Computer//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
<key>command</key>
<string>createAddress</string>
<key>GeneratedUID</key>
<string>C701069D-9CA1-4925-A1A9-5CD94767B74B</string>
<key>RealName</key>
<string>Created Address 01 %s %s</string>
<key>AbbreviatedName</key>
<string>Addr1</string>
<key>RecordName</key>
<string>createdaddress01</string>
<key>StreetAddress</key>
<string>1 Infinite Loop\nCupertino, 95014\nCA</string>
<key>GeographicLocation</key>
<string>geo:37.331,-122.030</string>
</dict>
</plist>
""" % (unichr(208), u"\ud83d\udca3")
command_createLocation = """<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple Computer//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
<key>command</key>
<string>createLocation</string>
<key>AutoScheduleMode</key>
<string>acceptIfFree</string>
<key>AutoAcceptGroup</key>
<string>E5A6142C-4189-4E9E-90B0-9CD0268B314B</string>
<key>GeneratedUID</key>
<string>836B1B66-2E9A-4F46-8B1C-3DD6772C20B2</string>
<key>RealName</key>
<string>Created Location 01 %s %s</string>
<key>RecordName</key>
<string>createdlocation01</string>
<key>Comment</key>
<string>Test Comment</string>
<key>Description</key>
<string>Test Description</string>
<key>Floor</key>
<string>First</string>
<key>AssociatedAddress</key>
<string>C701069D-9CA1-4925-A1A9-5CD94767B74B</string>
<key>ReadProxies</key>
<array>
<string>user03</string>
<string>user04</string>
</array>
<key>WriteProxies</key>
<array>
<string>user05</string>
<string>user06</string>
</array>
</dict>
</plist>
""" % (unichr(208), u"\ud83d\udca3")
command_createLocationMinimal = """<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple Computer//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
<key>command</key>
<string>createLocation</string>
<key>RealName</key>
<string>Minimal</string>
</dict>
</plist>
"""
command_createResource = """<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple Computer//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
<key>command</key>
<string>createResource</string>
<key>AutoScheduleMode</key>
<string>declineIfBusy</string>
<key>GeneratedUID</key>
<string>AF575A61-CFA6-49E1-A0F6-B5662C9D9801</string>
<key>RealName</key>
<string>Laptop 1</string>
<key>RecordName</key>
<string>laptop1</string>
<key>ReadProxies</key>
<array>
<string>user03</string>
<string>user04</string>
</array>
<key>WriteProxies</key>
<array>
<string>user05</string>
<string>user06</string>
</array>
</dict>
</plist>
"""
command_deleteLocation = """<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple Computer//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
<key>command</key>
<string>deleteLocation</string>
<key>GeneratedUID</key>
<string>location01</string>
</dict>
</plist>
"""
command_deleteResource = """<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple Computer//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
<key>command</key>
<string>deleteResource</string>
<key>GeneratedUID</key>
<string>resource01</string>
</dict>
</plist>
"""
command_deleteAddress = """<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple Computer//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
<key>command</key>
<string>deleteAddress</string>
<key>GeneratedUID</key>
<string>C701069D-9CA1-4925-A1A9-5CD94767B74B</string>
</dict>
</plist>
"""
command_getLocationAndResourceList = """<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple Computer//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
<key>command</key>
<string>getLocationAndResourceList</string>
</dict>
</plist>
"""
command_getLocationList = """<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple Computer//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
<key>command</key>
<string>getLocationList</string>
</dict>
</plist>
"""
command_getResourceList = """<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple Computer//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
<key>command</key>
<string>getResourceList</string>
</dict>
</plist>
"""
command_getAddressList = """<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple Computer//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
<key>command</key>
<string>getAddressList</string>
</dict>
</plist>
"""
command_listReadProxies = """<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple Computer//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
<key>command</key>
<string>listReadProxies</string>
<key>Principal</key>
<string>836B1B66-2E9A-4F46-8B1C-3DD6772C20B2</string>
</dict>
</plist>
"""
command_listWriteProxies = """<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple Computer//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
<key>command</key>
<string>listWriteProxies</string>
<key>Principal</key>
<string>836B1B66-2E9A-4F46-8B1C-3DD6772C20B2</string>
</dict>
</plist>
"""
command_removeReadProxy = """<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple Computer//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
<key>command</key>
<string>removeReadProxy</string>
<key>Principal</key>
<string>location01</string>
<key>Proxy</key>
<string>user03</string>
</dict>
</plist>
"""
command_removeWriteProxy = """<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple Computer//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
<key>command</key>
<string>removeWriteProxy</string>
<key>Principal</key>
<string>location01</string>
<key>Proxy</key>
<string>user01</string>
</dict>
</plist>
"""
command_setLocationAttributes = """<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple Computer//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
<key>command</key>
<string>setLocationAttributes</string>
<key>AutoSchedule</key>
<true/>
<key>AutoAcceptGroup</key>
<string>F5A6142C-4189-4E9E-90B0-9CD0268B314B</string>
<key>GeneratedUID</key>
<string>836B1B66-2E9A-4F46-8B1C-3DD6772C20B2</string>
<key>RealName</key>
<string>Updated Location 01</string>
<key>RecordName</key>
<string>createdlocation01</string>
<key>Comment</key>
<string>Updated Test Comment</string>
<key>Description</key>
<string>Updated Test Description</string>
<key>Floor</key>
<string>Second</string>
<key>ReadProxies</key>
<array>
<string>user03</string>
</array>
<key>WriteProxies</key>
<array>
<string>user05</string>
<string>user06</string>
<string>user07</string>
</array>
</dict>
</plist>
"""
command_setAddressOnLocation = """<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple Computer//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
<key>command</key>
<string>setLocationAttributes</string>
<key>GeneratedUID</key>
<string>836B1B66-2E9A-4F46-8B1C-3DD6772C20B2</string>
<key>AssociatedAddress</key>
<string>C701069D-9CA1-4925-A1A9-5CD94767B74B</string>
</dict>
</plist>
"""
command_removeAddressFromLocation = """<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple Computer//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
<key>command</key>
<string>setLocationAttributes</string>
<key>GeneratedUID</key>
<string>836B1B66-2E9A-4F46-8B1C-3DD6772C20B2</string>
<key>AssociatedAddress</key>
<string></string>
</dict>
</plist>
"""
command_getLocationAttributes = """<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple Computer//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
<key>command</key>
<string>getLocationAttributes</string>
<key>GeneratedUID</key>
<string>836B1B66-2E9A-4F46-8B1C-3DD6772C20B2</string>
</dict>
</plist>
"""
command_getAddressAttributes = """<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple Computer//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
<key>command</key>
<string>getAddressAttributes</string>
<key>GeneratedUID</key>
<string>C701069D-9CA1-4925-A1A9-5CD94767B74B</string>
</dict>
</plist>
"""
command_setAddressAttributes = """<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple Computer//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
<key>command</key>
<string>setAddressAttributes</string>
<key>GeneratedUID</key>
<string>C701069D-9CA1-4925-A1A9-5CD94767B74B</string>
<key>RealName</key>
<string>Updated Address</string>
<key>StreetAddress</key>
<string>Updated Street Address</string>
<key>GeographicLocation</key>
<string>Updated Geo</string>
</dict>
</plist>
"""
command_setResourceAttributes = """<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple Computer//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
<key>command</key>
<string>setResourceAttributes</string>
<key>AutoScheduleMode</key>
<string>acceptIfFree</string>
<key>GeneratedUID</key>
<string>AF575A61-CFA6-49E1-A0F6-B5662C9D9801</string>
<key>RealName</key>
<string>Updated Laptop 1</string>
<key>RecordName</key>
<string>laptop1</string>
</dict>
</plist>
"""
command_getResourceAttributes = """<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple Computer//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
<key>command</key>
<string>getResourceAttributes</string>
<key>GeneratedUID</key>
<string>AF575A61-CFA6-49E1-A0F6-B5662C9D9801</string>
</dict>
</plist>
"""
command_purgeOldEvents = """<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple Computer//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
<key>command</key>
<string>purgeOldEvents</string>
<key>RetainDays</key>
<integer>42</integer>
</dict>
</plist>
"""
command_purgeOldEventsNoDays = """<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple Computer//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
<key>command</key>
<string>purgeOldEvents</string>
</dict>
</plist>
"""
command_readConfig = """<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple Computer//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
<key>command</key>
<string>readConfig</string>
</dict>
</plist>
"""
command_writeConfig = """<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple Computer//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
<key>command</key>
<string>writeConfig</string>
<key>Values</key>
<dict>
<key>EnableCalDAV</key>
<false/>
<key>EnableCardDAV</key>
<false/>
<key>EnableSSL</key>
<true/>
<key>Notifications.Services.APNS.Enabled</key>
<true/>
<key>ServerHostName</key>
<string>hostname_%s_%s</string>
</dict>
</dict>
</plist>
""" % (unichr(208), u"\ud83d\udca3")
calendarserver-7.0+dfsg/calendarserver/tools/test/test_importer.py
##
# Copyright (c) 2014-2015 Apple Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
##
"""
Unit tests for L{calendarserver.tools.importer}.
"""
from calendarserver.tools.importer import (
importCollectionComponent, ImportException,
storeComponentInHomeAndCalendar
)
from twext.enterprise.jobs.jobitem import JobItem
from twisted.internet import reactor
from twisted.internet.defer import inlineCallbacks
from twistedcaldav import customxml
from twistedcaldav.ical import Component
from twistedcaldav.test.util import StoreTestCase
from txdav.base.propertystore.base import PropertyName
from txdav.xml import element as davxml
DATA_MISSING_SOURCE = """BEGIN:VCALENDAR
CALSCALE:GREGORIAN
PRODID:-//Apple Computer\, Inc//iCal 2.0//EN
VERSION:2.0
END:VCALENDAR
"""
DATA_NO_SCHEDULING = """BEGIN:VCALENDAR
VERSION:2.0
NAME:Sample Import Calendar
COLOR:#0E61B9FF
SOURCE;VALUE=URI:http://example.com/calendars/__uids__/user01/calendar/
PRODID:-//CALENDARSERVER.ORG//NONSGML Version 1//EN
BEGIN:VEVENT
UID:5CE3B280-DBC9-4E8E-B0B2-996754020E5F
DTSTART;TZID=America/Los_Angeles:20141108T093000
DTEND;TZID=America/Los_Angeles:20141108T103000
CREATED:20141106T192546Z
DTSTAMP:20141106T192546Z
RRULE:FREQ=DAILY
SEQUENCE:0
SUMMARY:repeating event
TRANSP:OPAQUE
END:VEVENT
BEGIN:VEVENT
UID:5CE3B280-DBC9-4E8E-B0B2-996754020E5F
RECURRENCE-ID;TZID=America/Los_Angeles:20141111T093000
DTSTART;TZID=America/Los_Angeles:20141111T110000
DTEND;TZID=America/Los_Angeles:20141111T120000
CREATED:20141106T192546Z
DTSTAMP:20141106T192546Z
SEQUENCE:0
SUMMARY:repeating event
TRANSP:OPAQUE
END:VEVENT
BEGIN:VEVENT
UID:F6342D53-7D5E-4B5E-9E0A-F0A08977AFE5
DTSTART;TZID=America/Los_Angeles:20141108T080000
DTEND;TZID=America/Los_Angeles:20141108T091500
CREATED:20141104T205338Z
DTSTAMP:20141104T205338Z
SEQUENCE:0
SUMMARY:simple event
TRANSP:OPAQUE
END:VEVENT
END:VCALENDAR
"""
DATA_NO_SCHEDULING_REIMPORT = """BEGIN:VCALENDAR
VERSION:2.0
NAME:Sample Import Calendar Reimported
COLOR:#FFFFFFFF
SOURCE;VALUE=URI:http://example.com/calendars/__uids__/user01/calendar/
PRODID:-//CALENDARSERVER.ORG//NONSGML Version 1//EN
BEGIN:VEVENT
UID:5CE3B280-DBC9-4E8E-B0B2-996754020E5F
DTSTART;TZID=America/Los_Angeles:20141108T093000
DTEND;TZID=America/Los_Angeles:20141108T103000
CREATED:20141106T192546Z
DTSTAMP:20141106T192546Z
RRULE:FREQ=DAILY
SEQUENCE:0
SUMMARY:repeating event
TRANSP:OPAQUE
END:VEVENT
BEGIN:VEVENT
UID:5CE3B280-DBC9-4E8E-B0B2-996754020E5F
RECURRENCE-ID;TZID=America/Los_Angeles:20141111T093000
DTSTART;TZID=America/Los_Angeles:20141111T110000
DTEND;TZID=America/Los_Angeles:20141111T120000
CREATED:20141106T192546Z
DTSTAMP:20141106T192546Z
SEQUENCE:0
SUMMARY:repeating event
TRANSP:OPAQUE
END:VEVENT
BEGIN:VEVENT
UID:CB194340-9B3C-40B1-B4E2-062FDB7C8829
DTSTART;TZID=America/Los_Angeles:20141108T080000
DTEND;TZID=America/Los_Angeles:20141108T091500
CREATED:20141104T205338Z
DTSTAMP:20141104T205338Z
SEQUENCE:0
SUMMARY:new event
TRANSP:OPAQUE
END:VEVENT
END:VCALENDAR
"""
DATA_WITH_ORGANIZER = """BEGIN:VCALENDAR
VERSION:2.0
NAME:I'm the organizer
COLOR:#0000FFFF
SOURCE;VALUE=URI:http://example.com/calendars/__uids__/user01/calendar/
PRODID:-//CALENDARSERVER.ORG//NONSGML Version 1//EN
BEGIN:VEVENT
UID:AB49C0C0-4238-41A4-8B43-F3E3DDF0E59C
DTSTART;TZID=America/Los_Angeles:20141108T053000
DTEND;TZID=America/Los_Angeles:20141108T070000
ATTENDEE;CN=User 01;CUTYPE=INDIVIDUAL;ROLE=CHAIR:urn:x-uid:user01
ATTENDEE;CN=User 02;CUTYPE=INDIVIDUAL:urn:x-uid:user02
ATTENDEE;CN=User 03;CUTYPE=INDIVIDUAL:urn:x-uid:user03
ATTENDEE;CN=Mercury Seven;CUTYPE=ROOM:urn:x-uid:mercury
CREATED:20141107T172645Z
DTSTAMP:20141107T172645Z
LOCATION:Mercury
ORGANIZER;CN=User 01:urn:x-uid:user01
SEQUENCE:0
SUMMARY:I'm the organizer
TRANSP:OPAQUE
END:VEVENT
END:VCALENDAR
"""
DATA_USER02_INVITES_USER01_ORGANIZER_COPY = """BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//CALENDARSERVER.ORG//NONSGML Version 1//EN
BEGIN:VEVENT
UID:6DB84FB1-C943-4144-BE65-9B0DD9A9E2C7
DTSTART;TZID=America/Los_Angeles:20141115T074500
DTEND;TZID=America/Los_Angeles:20141115T091500
RRULE:FREQ=DAILY;COUNT=20
ATTENDEE;CN=User 01;CUTYPE=INDIVIDUAL:urn:x-uid:user01
ATTENDEE;CN=User 02;CUTYPE=INDIVIDUAL;ROLE=CHAIR:urn:x-uid:user02
ATTENDEE;CN=User 03;CUTYPE=INDIVIDUAL:urn:x-uid:user03
ATTENDEE;CN=Mercury Seven;CUTYPE=ROOM:urn:x-uid:mercury
CREATED:20141107T172645Z
DTSTAMP:20141107T172645Z
LOCATION:Mercury
ORGANIZER;CN=User 02:urn:x-uid:user02
SEQUENCE:0
SUMMARY:Other organizer
TRANSP:OPAQUE
END:VEVENT
BEGIN:VEVENT
UID:6DB84FB1-C943-4144-BE65-9B0DD9A9E2C7
RECURRENCE-ID;TZID=America/Los_Angeles:20141117T074500
DTSTART;TZID=America/Los_Angeles:20141117T093000
DTEND;TZID=America/Los_Angeles:20141117T110000
ATTENDEE;CN=User 01;CUTYPE=INDIVIDUAL:urn:x-uid:user01
ATTENDEE;CN=User 02;CUTYPE=INDIVIDUAL;ROLE=CHAIR:urn:x-uid:user02
ATTENDEE;CN=User 03;CUTYPE=INDIVIDUAL:urn:x-uid:user03
ATTENDEE;CN=Mercury Seven;CUTYPE=ROOM:urn:x-uid:mercury
CREATED:20141107T172645Z
DTSTAMP:20141107T172645Z
LOCATION:Mercury
ORGANIZER;CN=User 02:urn:x-uid:user02
SEQUENCE:0
SUMMARY:Other organizer
TRANSP:OPAQUE
END:VEVENT
BEGIN:VEVENT
UID:6DB84FB1-C943-4144-BE65-9B0DD9A9E2C7
RECURRENCE-ID;TZID=America/Los_Angeles:20141119T074500
DTSTART;TZID=America/Los_Angeles:20141119T103000
DTEND;TZID=America/Los_Angeles:20141119T120000
ATTENDEE;CN=User 01;CUTYPE=INDIVIDUAL:urn:x-uid:user01
ATTENDEE;CN=User 02;CUTYPE=INDIVIDUAL;ROLE=CHAIR:urn:x-uid:user02
ATTENDEE;CN=User 03;CUTYPE=INDIVIDUAL:urn:x-uid:user03
ATTENDEE;CN=Mercury Seven;CUTYPE=ROOM:urn:x-uid:mercury
CREATED:20141107T172645Z
DTSTAMP:20141107T172645Z
LOCATION:Mercury
ORGANIZER;CN=User 02:urn:x-uid:user02
SEQUENCE:0
SUMMARY:Other organizer
TRANSP:OPAQUE
END:VEVENT
END:VCALENDAR
"""
DATA_USER02_INVITES_USER01_ATTENDEE_COPY = """BEGIN:VCALENDAR
VERSION:2.0
NAME:I'm an attendee
COLOR:#0000FFFF
SOURCE;VALUE=URI:http://example.com/calendars/__uids__/user01/calendar/
PRODID:-//CALENDARSERVER.ORG//NONSGML Version 1//EN
BEGIN:VEVENT
UID:6DB84FB1-C943-4144-BE65-9B0DD9A9E2C7
DTSTART;TZID=America/Los_Angeles:20141115T074500
DTEND;TZID=America/Los_Angeles:20141115T091500
RRULE:FREQ=DAILY;COUNT=20
ATTENDEE;CN=User 01;CUTYPE=INDIVIDUAL;PARTSTAT=ACCEPTED:urn:x-uid:user01
ATTENDEE;CN=User 02;CUTYPE=INDIVIDUAL;ROLE=CHAIR:urn:x-uid:user02
ATTENDEE;CN=User 03;CUTYPE=INDIVIDUAL:urn:x-uid:user03
ATTENDEE;CN=Mercury Seven;CUTYPE=ROOM:urn:x-uid:mercury
CREATED:20141107T172645Z
DTSTAMP:20141107T172645Z
LOCATION:Mercury
ORGANIZER;CN=User 02:urn:x-uid:user02
SEQUENCE:0
SUMMARY:Other organizer
TRANSP:OPAQUE
END:VEVENT
BEGIN:VEVENT
UID:6DB84FB1-C943-4144-BE65-9B0DD9A9E2C7
RECURRENCE-ID;TZID=America/Los_Angeles:20141117T074500
DTSTART;TZID=America/Los_Angeles:20141117T093000
DTEND;TZID=America/Los_Angeles:20141117T110000
ATTENDEE;CN=User 01;CUTYPE=INDIVIDUAL;PARTSTAT=TENTATIVE:urn:x-uid:user01
ATTENDEE;CN=User 02;CUTYPE=INDIVIDUAL;ROLE=CHAIR:urn:x-uid:user02
ATTENDEE;CN=User 03;CUTYPE=INDIVIDUAL:urn:x-uid:user03
ATTENDEE;CN=Mercury Seven;CUTYPE=ROOM:urn:x-uid:mercury
CREATED:20141107T172645Z
DTSTAMP:20141107T172645Z
LOCATION:Mercury
ORGANIZER;CN=User 02:urn:x-uid:user02
SEQUENCE:0
SUMMARY:Other organizer
TRANSP:OPAQUE
END:VEVENT
BEGIN:VEVENT
UID:6DB84FB1-C943-4144-BE65-9B0DD9A9E2C7
RECURRENCE-ID;TZID=America/Los_Angeles:20141121T074500
DTSTART;TZID=America/Los_Angeles:20141121T101500
DTEND;TZID=America/Los_Angeles:20141121T114500
ATTENDEE;CN=User 01;CUTYPE=INDIVIDUAL;PARTSTAT=ACCEPTED:urn:x-uid:user01
ATTENDEE;CN=User 02;CUTYPE=INDIVIDUAL;ROLE=CHAIR:urn:x-uid:user02
ATTENDEE;CN=User 03;CUTYPE=INDIVIDUAL:urn:x-uid:user03
ATTENDEE;CN=Mercury Seven;CUTYPE=ROOM:urn:x-uid:mercury
CREATED:20141107T172645Z
DTSTAMP:20141107T172645Z
LOCATION:Mercury
ORGANIZER;CN=User 02:urn:x-uid:user02
SEQUENCE:0
SUMMARY:Other organizer
TRANSP:OPAQUE
END:VEVENT
END:VCALENDAR
"""
class ImportTests(StoreTestCase):
"""
Tests for importing data to a live store.
"""
def configure(self):
super(ImportTests, self).configure()
# Enable the queue and make it fast
self.patch(self.config.Scheduling.Options.WorkQueues, "Enabled", True)
self.patch(self.config.Scheduling.Options.WorkQueues, "RequestDelaySeconds", 0.1)
self.patch(self.config.Scheduling.Options.WorkQueues, "ReplyDelaySeconds", 0.1)
self.patch(self.config.Scheduling.Options.WorkQueues, "AutoReplyDelaySeconds", 0.1)
self.patch(self.config.Scheduling.Options.WorkQueues, "AttendeeRefreshBatchDelaySeconds", 0.1)
self.patch(self.config.Scheduling.Options.WorkQueues, "AttendeeRefreshBatchIntervalSeconds", 0.1)
# Reschedule quickly
self.patch(JobItem, "failureRescheduleInterval", 1)
self.patch(JobItem, "lockRescheduleInterval", 1)
@inlineCallbacks
def test_ImportComponentMissingSource(self):
component = Component.allFromString(DATA_MISSING_SOURCE)
try:
yield importCollectionComponent(self.store, component)
except ImportException:
pass
else:
self.fail("Did not raise ImportException")
@inlineCallbacks
def test_ImportComponentNoScheduling(self):
component = Component.allFromString(DATA_NO_SCHEDULING)
yield importCollectionComponent(self.store, component)
txn = self.store.newTransaction()
home = yield txn.calendarHomeWithUID("user01")
collection = yield home.childWithName("calendar")
# Verify properties have been set
collectionProperties = collection.properties()
for element, value in (
(davxml.DisplayName, "Sample Import Calendar"),
(customxml.CalendarColor, "#0E61B9FF"),
):
self.assertEquals(
value,
collectionProperties[PropertyName.fromElement(element)]
)
# Verify child objects
objects = yield collection.listObjectResources()
self.assertEquals(len(objects), 2)
yield txn.commit()
# Reimport different component into same collection
component = Component.allFromString(DATA_NO_SCHEDULING_REIMPORT)
yield importCollectionComponent(self.store, component)
txn = self.store.newTransaction()
home = yield txn.calendarHomeWithUID("user01")
collection = yield home.childWithName("calendar")
# Verify properties have been changed
collectionProperties = collection.properties()
for element, value in (
(davxml.DisplayName, "Sample Import Calendar Reimported"),
(customxml.CalendarColor, "#FFFFFFFF"),
):
self.assertEquals(
value,
collectionProperties[PropertyName.fromElement(element)]
)
# Verify child objects (should be 3 now)
objects = yield collection.listObjectResources()
self.assertEquals(len(objects), 3)
yield txn.commit()
@inlineCallbacks
def test_ImportComponentOrganizer(self):
component = Component.allFromString(DATA_WITH_ORGANIZER)
yield importCollectionComponent(self.store, component)
yield JobItem.waitEmpty(self.store.newTransaction, reactor, 60)
txn = self.store.newTransaction()
home = yield txn.calendarHomeWithUID("user01")
collection = yield home.childWithName("calendar")
# Verify properties have been set
collectionProperties = collection.properties()
for element, value in (
(davxml.DisplayName, "I'm the organizer"),
(customxml.CalendarColor, "#0000FFFF"),
):
self.assertEquals(
value,
collectionProperties[PropertyName.fromElement(element)]
)
# Verify the organizer's child objects
objects = yield collection.listObjectResources()
self.assertEquals(len(objects), 1)
# Verify the attendees' child objects
home = yield txn.calendarHomeWithUID("user02")
collection = yield home.childWithName("calendar")
objects = yield collection.listObjectResources()
self.assertEquals(len(objects), 1)
home = yield txn.calendarHomeWithUID("user03")
collection = yield home.childWithName("calendar")
objects = yield collection.listObjectResources()
self.assertEquals(len(objects), 1)
home = yield txn.calendarHomeWithUID("mercury")
collection = yield home.childWithName("calendar")
objects = yield collection.listObjectResources()
self.assertEquals(len(objects), 1)
yield txn.commit()
@inlineCallbacks
def test_ImportComponentAttendee(self):
# Have user02 invite user01
yield storeComponentInHomeAndCalendar(
self.store,
Component.allFromString(DATA_USER02_INVITES_USER01_ORGANIZER_COPY),
"user02",
"calendar",
"invite.ics"
)
yield JobItem.waitEmpty(self.store.newTransaction, reactor, 60)
# Delete the attendee's copy, thus declining the event
txn = self.store.newTransaction()
home = yield txn.calendarHomeWithUID("user01")
collection = yield home.childWithName("calendar")
objects = yield collection.objectResources()
self.assertEquals(len(objects), 1)
yield objects[0].remove()
yield txn.commit()
yield JobItem.waitEmpty(self.store.newTransaction, reactor, 60)
# Make sure attendee's copy is gone
txn = self.store.newTransaction()
home = yield txn.calendarHomeWithUID("user01")
collection = yield home.childWithName("calendar")
objects = yield collection.objectResources()
self.assertEquals(len(objects), 0)
yield txn.commit()
# Make sure attendee shows as declined to the organizer
txn = self.store.newTransaction()
home = yield txn.calendarHomeWithUID("user02")
collection = yield home.childWithName("calendar")
objects = yield collection.objectResources()
self.assertEquals(len(objects), 1)
component = yield objects[0].component()
prop = component.getAttendeeProperty(("urn:x-uid:user01",))
self.assertEquals(
prop.parameterValue("PARTSTAT"),
"DECLINED"
)
yield txn.commit()
# When importing the event again, update through the organizer's copy
# of the event as if it were an iTIP reply
component = Component.allFromString(DATA_USER02_INVITES_USER01_ATTENDEE_COPY)
yield importCollectionComponent(self.store, component)
yield JobItem.waitEmpty(self.store.newTransaction, reactor, 60)
# Make sure organizer now sees the right partstats
txn = self.store.newTransaction()
home = yield txn.calendarHomeWithUID("user02")
collection = yield home.childWithName("calendar")
objects = yield collection.objectResources()
self.assertEquals(len(objects), 1)
component = yield objects[0].component()
# print(str(component))
props = component.getAttendeeProperties(("urn:x-uid:user01",))
# The master is ACCEPTED
self.assertEquals(
props[0].parameterValue("PARTSTAT"),
"ACCEPTED"
)
# 2nd instance is TENTATIVE
self.assertEquals(
props[1].parameterValue("PARTSTAT"),
"TENTATIVE"
)
# 3rd instance is not in the attendee's copy, so remains DECLINED
self.assertEquals(
props[2].parameterValue("PARTSTAT"),
"DECLINED"
)
yield txn.commit()
# Make sure attendee now sees the right partstats
txn = self.store.newTransaction()
home = yield txn.calendarHomeWithUID("user01")
collection = yield home.childWithName("calendar")
objects = yield collection.objectResources()
self.assertEquals(len(objects), 1)
component = yield objects[0].component()
# print(str(component))
props = component.getAttendeeProperties(("urn:x-uid:user01",))
# The master is ACCEPTED
self.assertEquals(
props[0].parameterValue("PARTSTAT"),
"ACCEPTED"
)
# 2nd instance is TENTATIVE
self.assertEquals(
props[1].parameterValue("PARTSTAT"),
"TENTATIVE"
)
# 3rd instance is not in the organizer's copy, so should inherit
# the value from the master, which is ACCEPTED
self.assertEquals(
props[2].parameterValue("PARTSTAT"),
"ACCEPTED"
)
yield txn.commit()
test_ImportComponentAttendee.todo = "Need to fix iTip reply processing"
calendarserver-7.0+dfsg/calendarserver/tools/test/test_principals.py
##
# Copyright (c) 2005-2015 Apple Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
##
import os
from twistedcaldav.stdconfig import config
from calendarserver.tools.principals import (
parseCreationArgs, matchStrings,
recordForPrincipalID, getProxies, setProxies
)
from twext.python.filepath import CachingFilePath as FilePath
from twisted.internet import reactor
from twisted.internet.defer import inlineCallbacks, Deferred, returnValue
from twistedcaldav.test.util import (
TestCase, StoreTestCase, CapturingProcessProtocol, ErrorOutput
)
class ManagePrincipalsTestCase(TestCase):
def setUp(self):
super(ManagePrincipalsTestCase, self).setUp()
testRoot = os.path.join(os.path.dirname(__file__), "principals")
templateName = os.path.join(testRoot, "caldavd.plist")
templateFile = open(templateName)
template = templateFile.read()
templateFile.close()
databaseRoot = os.path.abspath("_spawned_scripts_db" + str(os.getpid()))
newConfig = template % {
"ServerRoot": os.path.abspath(config.ServerRoot),
"DataRoot": os.path.abspath(config.DataRoot),
"DatabaseRoot": databaseRoot,
"DocumentRoot": os.path.abspath(config.DocumentRoot),
"LogRoot": os.path.abspath(config.LogRoot),
}
configFilePath = FilePath(os.path.join(config.ConfigRoot, "caldavd.plist"))
configFilePath.setContent(newConfig)
self.configFileName = configFilePath.path
config.load(self.configFileName)
origUsersFile = FilePath(
os.path.join(
os.path.dirname(__file__),
"principals",
"users-groups.xml"
)
)
copyUsersFile = FilePath(os.path.join(config.DataRoot, "accounts.xml"))
origUsersFile.copyTo(copyUsersFile)
origResourcesFile = FilePath(
os.path.join(
os.path.dirname(__file__),
"principals",
"resources-locations.xml"
)
)
copyResourcesFile = FilePath(os.path.join(config.DataRoot, "resources.xml"))
origResourcesFile.copyTo(copyResourcesFile)
origAugmentFile = FilePath(
os.path.join(
os.path.dirname(__file__),
"principals",
"augments.xml"
)
)
copyAugmentFile = FilePath(os.path.join(config.DataRoot, "augments.xml"))
origAugmentFile.copyTo(copyAugmentFile)
# Make sure trial puts the reactor in the right state, by letting it
# run one reactor iteration. (Ignore me, please.)
d = Deferred()
reactor.callLater(0, d.callback, True)
return d
@inlineCallbacks
def runCommand(self, *additional):
"""
Run calendarserver_manage_principals, passing additional as args.
"""
sourceRoot = os.path.dirname(os.path.dirname(os.path.dirname(os.path.dirname(__file__))))
cmd = "calendarserver_manage_principals" # assumes it's on PATH
args = [cmd, "-f", self.configFileName]
args.extend(additional)
cwd = sourceRoot
deferred = Deferred()
reactor.spawnProcess(CapturingProcessProtocol(deferred, None), cmd, args, env=os.environ, path=cwd)
output = yield deferred
returnValue(output)
@inlineCallbacks
def test_help(self):
results = yield self.runCommand("--help")
self.assertTrue(results.startswith("usage:"))
@inlineCallbacks
def test_principalTypes(self):
results = yield self.runCommand("--list-principal-types")
self.assertTrue("groups" in results)
self.assertTrue("users" in results)
self.assertTrue("locations" in results)
self.assertTrue("resources" in results)
self.assertTrue("addresses" in results)
@inlineCallbacks
def test_listPrincipals(self):
results = yield self.runCommand("--list-principals=users")
for i in xrange(1, 10):
self.assertTrue("user%02d" % (i,) in results)
@inlineCallbacks
def test_search(self):
results = yield self.runCommand("--search", "user")
self.assertTrue("11 matches found" in results)
for i in xrange(1, 10):
self.assertTrue("user%02d" % (i,) in results)
results = yield self.runCommand("--search", "user", "04")
self.assertTrue("1 matches found" in results)
self.assertTrue("user04" in results)
results = yield self.runCommand("--context=group", "--search", "test")
self.assertTrue("2 matches found" in results)
self.assertTrue("group2" in results)
self.assertTrue("group3" in results)
results = yield self.runCommand("--context=attendee", "--search", "test")
self.assertTrue("3 matches found" in results)
self.assertTrue("testuser1" in results)
self.assertTrue("group2" in results)
self.assertTrue("group3" in results)
@inlineCallbacks
def test_addRemove(self):
results = yield self.runCommand(
"--add", "resources",
"New Resource", "newresource", "newresourceuid"
)
self.assertTrue("Added 'New Resource'" in results)
results = yield self.runCommand(
"--get-auto-schedule-mode",
"resources:newresource"
)
self.assertTrue(
results.startswith(
'Auto-schedule mode for "New Resource" newresourceuid (resource) newresource is accept if free, decline if busy'
)
)
results = yield self.runCommand("--list-principals=resources")
self.assertTrue("newresource" in results)
results = yield self.runCommand(
"--add", "resources", "New Resource",
"newresource1", "newresourceuid"
)
self.assertTrue("UID already in use: newresourceuid" in results)
results = yield self.runCommand(
"--add", "resources", "New Resource",
"newresource", "uniqueuid"
)
self.assertTrue("Record name already in use" in results)
results = yield self.runCommand("--remove", "resources:newresource")
self.assertTrue("Removed 'New Resource'" in results)
results = yield self.runCommand("--list-principals=resources")
self.assertFalse("newresource" in results)
def test_parseCreationArgs(self):
self.assertEquals(
("full name", "short name", "uid"),
parseCreationArgs(("full name", "short name", "uid"))
)
def test_matchStrings(self):
self.assertEquals("abc", matchStrings("a", ("abc", "def")))
self.assertEquals("def", matchStrings("de", ("abc", "def")))
self.assertRaises(
ValueError,
matchStrings, "foo", ("abc", "def")
)
@inlineCallbacks
def test_modifyWriteProxies(self):
results = yield self.runCommand(
"--add-write-proxy=users:user01", "locations:location01"
)
self.assertTrue(
results.startswith('Added "User 01" user01 (user) user01 as a write proxy for "Room 01" location01 (location) location01')
)
results = yield self.runCommand(
"--list-write-proxies", "locations:location01"
)
self.assertTrue("User 01" in results)
results = yield self.runCommand(
"--remove-proxy=users:user01", "locations:location01"
)
results = yield self.runCommand(
"--list-write-proxies", "locations:location01"
)
self.assertTrue(
'No write proxies for "Room 01" location01 (location) location01' in results
)
@inlineCallbacks
def test_modifyReadProxies(self):
results = yield self.runCommand(
"--add-read-proxy=users:user01", "locations:location01"
)
self.assertTrue(
results.startswith('Added "User 01" user01 (user) user01 as a read proxy for "Room 01" location01 (location) location01')
)
results = yield self.runCommand(
"--list-read-proxies", "locations:location01"
)
self.assertTrue("User 01" in results)
results = yield self.runCommand(
"--remove-proxy=users:user01", "locations:location01"
)
results = yield self.runCommand(
"--list-read-proxies", "locations:location01"
)
self.assertTrue(
'No read proxies for "Room 01" location01 (location) location01' in results
)
@inlineCallbacks
def test_autoScheduleMode(self):
results = yield self.runCommand(
"--get-auto-schedule-mode", "locations:location01"
)
self.assertTrue(
results.startswith('Auto-schedule mode for "Room 01" location01 (location) location01 is accept if free, decline if busy')
)
results = yield self.runCommand(
"--set-auto-schedule-mode=accept-if-free", "locations:location01"
)
self.assertTrue(
results.startswith('Setting auto-schedule-mode to accept if free for "Room 01" location01 (location) location01')
)
results = yield self.runCommand(
"--get-auto-schedule-mode",
"locations:location01"
)
self.assertTrue(
results.startswith('Auto-schedule mode for "Room 01" location01 (location) location01 is accept if free')
)
results = yield self.runCommand(
"--set-auto-schedule-mode=decline-if-busy", "users:user01"
)
self.assertTrue(results.startswith('Setting auto-schedule-mode for "User 01" user01 (user) user01 is not allowed.'))
try:
results = yield self.runCommand(
"--set-auto-schedule-mode=bogus",
"users:user01"
)
except ErrorOutput:
pass
else:
self.fail("Expected command failure")
@inlineCallbacks
def test_groupChanges(self):
results = yield self.runCommand(
"--list-group-members", "groups:testgroup1"
)
self.assertTrue("user01" in results)
self.assertTrue("user02" in results)
self.assertTrue("user03" not in results)
results = yield self.runCommand(
"--add-group-member", "users:user03", "groups:testgroup1"
)
self.assertTrue("Added" in results)
self.assertTrue("Existing" not in results)
self.assertTrue("Invalid" not in results)
results = yield self.runCommand(
"--list-group-members", "groups:testgroup1"
)
self.assertTrue("user01" in results)
self.assertTrue("user02" in results)
self.assertTrue("user03" in results)
results = yield self.runCommand(
"--add-group-member", "users:user03", "groups:testgroup1"
)
self.assertTrue("Added" not in results)
self.assertTrue("Existing" in results)
self.assertTrue("Invalid" not in results)
results = yield self.runCommand(
"--add-group-member", "users:bogus", "groups:testgroup1"
)
self.assertTrue("Added" not in results)
self.assertTrue("Existing" not in results)
self.assertTrue("Invalid" in results)
results = yield self.runCommand(
"--remove-group-member", "users:user03", "groups:testgroup1"
)
self.assertTrue("Removed" in results)
self.assertTrue("Missing" not in results)
self.assertTrue("Invalid" not in results)
results = yield self.runCommand(
"--list-group-members", "groups:testgroup1"
)
self.assertTrue("user01" in results)
self.assertTrue("user02" in results)
self.assertTrue("user03" not in results)
results = yield self.runCommand(
"--remove-group-member", "users:user03", "groups:testgroup1"
)
self.assertTrue("Removed" not in results)
self.assertTrue("Missing" in results)
self.assertTrue("Invalid" not in results)
class SetProxiesTestCase(StoreTestCase):
@inlineCallbacks
def test_setProxies(self):
"""
Read and Write proxies can be set en masse
"""
directory = self.directory
record = yield recordForPrincipalID(directory, "users:user01")
readProxies, writeProxies = yield getProxies(record)
self.assertEquals(readProxies, []) # initially empty
self.assertEquals(writeProxies, []) # initially empty
readProxies = [
(yield recordForPrincipalID(directory, "users:user03")),
(yield recordForPrincipalID(directory, "users:user04")),
]
writeProxies = [
(yield recordForPrincipalID(directory, "users:user05")),
]
yield setProxies(record, readProxies, writeProxies)
readProxies, writeProxies = yield getProxies(record)
self.assertEquals(set([r.uid for r in readProxies]), set(["user03", "user04"]))
self.assertEquals(set([r.uid for r in writeProxies]), set(["user05"]))
# Using None for a proxy list indicates a no-op
yield setProxies(record, [], None)
readProxies, writeProxies = yield getProxies(record)
self.assertEquals(readProxies, []) # now empty
self.assertEquals(set([r.uid for r in writeProxies]), set(["user05"])) # unchanged
calendarserver-7.0+dfsg/calendarserver/tools/test/test_purge.py
##
# Copyright (c) 2005-2015 Apple Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
##
from calendarserver.tools.purge import PurgePrincipalService, \
PrincipalPurgeHomeWork, PrincipalPurgePollingWork, PrincipalPurgeCheckWork, \
PrincipalPurgeWork
from pycalendar.datetime import DateTime
from twisted.internet import reactor
from twisted.internet.defer import inlineCallbacks, Deferred
from twistedcaldav.config import config
from twistedcaldav.test.util import StoreTestCase
from txdav.common.datastore.sql_tables import _BIND_MODE_WRITE
from txdav.common.datastore.test.util import populateCalendarsFrom
from txweb2.http_headers import MimeType
import datetime
future = DateTime.getNowUTC()
future.offsetDay(1)
future = future.getText()
past = DateTime.getNowUTC()
past.offsetDay(-1)
past = past.getText()
# For test_purgeExistingGUID
# No organizer/attendee
NON_INVITE_ICS = """BEGIN:VCALENDAR
VERSION:2.0
BEGIN:VEVENT
UID:151AFC76-6036-40EF-952B-97D1840760BF
SUMMARY:Non Invitation
DTSTART:%s
DURATION:PT1H
END:VEVENT
END:VCALENDAR
""".replace("\n", "\r\n") % (past,)
# Purging existing organizer; has existing attendee
ORGANIZER_ICS = """BEGIN:VCALENDAR
VERSION:2.0
BEGIN:VEVENT
UID:7ED97931-9A19-4596-9D4D-52B36D6AB803
SUMMARY:Organizer
DTSTART:%s
DURATION:PT1H
ORGANIZER:urn:uuid:10000000-0000-0000-0000-000000000001
ATTENDEE;CUTYPE=INDIVIDUAL;PARTSTAT=ACCEPTED:urn:uuid:10000000-0000-0000-0000-000000000001
ATTENDEE;CUTYPE=INDIVIDUAL;PARTSTAT=ACCEPTED:urn:uuid:10000000-0000-0000-0000-000000000002
END:VEVENT
END:VCALENDAR
""".replace("\n", "\r\n") % (future,)
# Purging existing attendee; has existing organizer
ATTENDEE_ICS = """BEGIN:VCALENDAR
VERSION:2.0
BEGIN:VEVENT
UID:1974603C-B2C0-4623-92A0-2436DEAB07EF
SUMMARY:Attendee
DTSTART:%s
DURATION:PT1H
ORGANIZER:urn:uuid:10000000-0000-0000-0000-000000000002
ATTENDEE;CUTYPE=INDIVIDUAL;PARTSTAT=ACCEPTED:urn:uuid:10000000-0000-0000-0000-000000000001
ATTENDEE;CUTYPE=INDIVIDUAL;PARTSTAT=ACCEPTED:urn:uuid:10000000-0000-0000-0000-000000000002
END:VEVENT
END:VCALENDAR
""".replace("\n", "\r\n") % (future,)
# For test_purgeNonExistentGUID
# No organizer/attendee, in the past
NON_INVITE_PAST_ICS = """BEGIN:VCALENDAR
VERSION:2.0
BEGIN:VEVENT
UID:151AFC76-6036-40EF-952B-97D1840760BF
SUMMARY:Non Invitation
DTSTART:%s
DURATION:PT1H
END:VEVENT
END:VCALENDAR
""".replace("\n", "\r\n") % (past,)
# No organizer/attendee, in the future
NON_INVITE_FUTURE_ICS = """BEGIN:VCALENDAR
VERSION:2.0
BEGIN:VEVENT
UID:251AFC76-6036-40EF-952B-97D1840760BF
SUMMARY:Non Invitation
DTSTART:%s
DURATION:PT1H
END:VEVENT
END:VCALENDAR
""".replace("\n", "\r\n") % (future,)
# Purging non-existent organizer; has existing attendee
ORGANIZER_ICS_2 = """BEGIN:VCALENDAR
VERSION:2.0
BEGIN:VEVENT
UID:7ED97931-9A19-4596-9D4D-52B36D6AB803
SUMMARY:Organizer
DTSTART:%s
DURATION:PT1H
ORGANIZER:urn:uuid:F0000000-0000-0000-0000-000000000001
ATTENDEE;CUTYPE=INDIVIDUAL;PARTSTAT=ACCEPTED:urn:uuid:F0000000-0000-0000-0000-000000000001
ATTENDEE;CUTYPE=INDIVIDUAL;PARTSTAT=ACCEPTED:urn:uuid:10000000-0000-0000-0000-000000000002
END:VEVENT
END:VCALENDAR
""".replace("\n", "\r\n") % (future,)
# Purging non-existent attendee; has existing organizer
ATTENDEE_ICS_2 = """BEGIN:VCALENDAR
VERSION:2.0
BEGIN:VEVENT
UID:1974603C-B2C0-4623-92A0-2436DEAB07EF
SUMMARY:Attendee
DTSTART:%s
DURATION:PT1H
ORGANIZER:urn:uuid:10000000-0000-0000-0000-000000000002
ATTENDEE;CUTYPE=INDIVIDUAL;PARTSTAT=ACCEPTED:urn:uuid:F0000000-0000-0000-0000-000000000001
ATTENDEE;CUTYPE=INDIVIDUAL;PARTSTAT=ACCEPTED:urn:uuid:10000000-0000-0000-0000-000000000002
END:VEVENT
END:VCALENDAR
""".replace("\n", "\r\n") % (future,)
# Purging non-existent organizer; has existing attendee; repeating
REPEATING_ORGANIZER_ICS = """BEGIN:VCALENDAR
VERSION:2.0
BEGIN:VEVENT
UID:8ED97931-9A19-4596-9D4D-52B36D6AB803
SUMMARY:Repeating Organizer
DTSTART:%s
DURATION:PT1H
RRULE:FREQ=DAILY;COUNT=400
ORGANIZER:urn:uuid:F0000000-0000-0000-0000-000000000001
ATTENDEE;CUTYPE=INDIVIDUAL;PARTSTAT=ACCEPTED:urn:uuid:F0000000-0000-0000-0000-000000000001
ATTENDEE;CUTYPE=INDIVIDUAL;PARTSTAT=ACCEPTED:urn:uuid:10000000-0000-0000-0000-000000000002
END:VEVENT
END:VCALENDAR
""".replace("\n", "\r\n") % (past,)
# For test_purgeMultipleNonExistentGUIDs
# No organizer/attendee
NON_INVITE_ICS_3 = """BEGIN:VCALENDAR
VERSION:2.0
BEGIN:VEVENT
UID:151AFC76-6036-40EF-952B-97D1840760BF
SUMMARY:Non Invitation
DTSTART:%s
DURATION:PT1H
END:VEVENT
END:VCALENDAR
""".replace("\n", "\r\n") % (past,)
# Purging non-existent organizer; has non-existent and existent attendees
ORGANIZER_ICS_3 = """BEGIN:VCALENDAR
VERSION:2.0
BEGIN:VEVENT
UID:7ED97931-9A19-4596-9D4D-52B36D6AB803
SUMMARY:Organizer
DTSTART:%s
DURATION:PT1H
ORGANIZER:urn:uuid:F0000000-0000-0000-0000-000000000001
ATTENDEE;CUTYPE=INDIVIDUAL;PARTSTAT=ACCEPTED:urn:uuid:F0000000-0000-0000-0000-000000000001
ATTENDEE;CUTYPE=INDIVIDUAL;PARTSTAT=ACCEPTED:urn:uuid:F0000000-0000-0000-0000-000000000002
ATTENDEE;CUTYPE=INDIVIDUAL;PARTSTAT=ACCEPTED:urn:uuid:10000000-0000-0000-0000-000000000002
END:VEVENT
END:VCALENDAR
""".replace("\n", "\r\n") % (future,)
# Purging non-existent attendee; has non-existent organizer and existent attendee
# (Note: Implicit scheduling doesn't update this at all for the existing attendee)
ATTENDEE_ICS_3 = """BEGIN:VCALENDAR
VERSION:2.0
BEGIN:VEVENT
UID:1974603C-B2C0-4623-92A0-2436DEAB07EF
SUMMARY:Attendee
DTSTART:%s
DURATION:PT1H
ORGANIZER:urn:uuid:F0000000-0000-0000-0000-000000000002
ATTENDEE;CUTYPE=INDIVIDUAL;PARTSTAT=ACCEPTED:urn:uuid:F0000000-0000-0000-0000-000000000002
ATTENDEE;CUTYPE=INDIVIDUAL;PARTSTAT=ACCEPTED:urn:uuid:10000000-0000-0000-0000-000000000001
ATTENDEE;CUTYPE=INDIVIDUAL;PARTSTAT=ACCEPTED:urn:uuid:F0000000-0000-0000-0000-000000000001
END:VEVENT
END:VCALENDAR
""".replace("\n", "\r\n") % (future,)
# Purging non-existent attendee; has non-existent attendee and existent organizer
ATTENDEE_ICS_4 = """BEGIN:VCALENDAR
VERSION:2.0
BEGIN:VEVENT
UID:79F26B10-6ECE-465E-9478-53F2A9FCAFEE
SUMMARY:2 non-existent attendees
DTSTART:%s
DURATION:PT1H
ORGANIZER:urn:uuid:10000000-0000-0000-0000-000000000002
ATTENDEE;CUTYPE=INDIVIDUAL;PARTSTAT=ACCEPTED:urn:uuid:10000000-0000-0000-0000-000000000002
ATTENDEE;CUTYPE=INDIVIDUAL;PARTSTAT=ACCEPTED:urn:uuid:F0000000-0000-0000-0000-000000000001
ATTENDEE;CUTYPE=INDIVIDUAL;PARTSTAT=ACCEPTED:urn:uuid:F0000000-0000-0000-0000-000000000002
END:VEVENT
END:VCALENDAR
""".replace("\n", "\r\n") % (future,)
ATTACHMENT_ICS = """BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Apple Inc.//iCal 4.0.1//EN
CALSCALE:GREGORIAN
BEGIN:VTIMEZONE
TZID:US/Pacific
BEGIN:DAYLIGHT
TZOFFSETFROM:-0800
RRULE:FREQ=YEARLY;BYMONTH=3;BYDAY=2SU
DTSTART:20070311T020000
TZNAME:PDT
TZOFFSETTO:-0700
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0700
RRULE:FREQ=YEARLY;BYMONTH=11;BYDAY=1SU
DTSTART:20071104T020000
TZNAME:PST
TZOFFSETTO:-0800
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
CREATED:20100303T195159Z
UID:F2F14D94-B944-43D9-8F6F-97F95B2764CA
DTEND;TZID=US/Pacific:20100304T141500
TRANSP:OPAQUE
SUMMARY:Attachment
DTSTART;TZID=US/Pacific:20100304T120000
DTSTAMP:20100303T195203Z
SEQUENCE:2
X-APPLE-DROPBOX:/calendars/__uids__/6423F94A-6B76-4A3A-815B-D52CFD77935D/dropbox/F2F14D94-B944-43D9-8F6F-97F95B2764CA.dropbox
END:VEVENT
END:VCALENDAR
""".replace("\n", "\r\n")
# Purging non-existent organizer; has existing attendee; repeating
REPEATING_PUBLIC_EVENT_ORGANIZER_ICS = """BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Apple Inc.//iCal 4.0.1//EN
X-CALENDARSERVER-ACCESS:PRIVATE
BEGIN:VEVENT
UID:8ED97931-9A19-4596-9D4D-52B36D6AB803
SUMMARY:Repeating Organizer
DTSTART:%s
DURATION:PT1H
RRULE:FREQ=DAILY;COUNT=400
ORGANIZER:urn:x-uid:user01
ATTENDEE;CUTYPE=INDIVIDUAL;PARTSTAT=ACCEPTED:urn:x-uid:user01
ATTENDEE;CUTYPE=INDIVIDUAL;PARTSTAT=ACCEPTED:urn:x-uid:user02
DTSTAMP:20100303T195203Z
END:VEVENT
END:VCALENDAR
""".replace("\n", "\r\n") % (past,)
class PurgePrincipalTests(StoreTestCase):
"""
Tests for purging the data belonging to a given principal
"""
uid = "user01"
uid2 = "user02"
metadata = {
"accessMode": "PUBLIC",
"isScheduleObject": True,
"scheduleTag": "abc",
"scheduleEtags": (),
"hasPrivateComment": False,
}
requirements = {
uid : {
"calendar1" : {
"attachment.ics" : (ATTACHMENT_ICS, metadata,),
"organizer.ics" : (REPEATING_PUBLIC_EVENT_ORGANIZER_ICS, metadata,),
},
"inbox": {},
},
uid2 : {
"calendar2" : {
"attendee.ics" : (REPEATING_PUBLIC_EVENT_ORGANIZER_ICS, metadata,),
},
"inbox": {},
},
}
@inlineCallbacks
def setUp(self):
yield super(PurgePrincipalTests, self).setUp()
txn = self._sqlCalendarStore.newTransaction()
# Add attachment to attachment.ics
self._sqlCalendarStore._dropbox_ok = True
home = yield txn.calendarHomeWithUID(self.uid)
calendar = yield home.calendarWithName("calendar1")
event = yield calendar.calendarObjectWithName("attachment.ics")
attachment = yield event.createAttachmentWithName("attachment.txt")
t = attachment.store(MimeType("text", "x-fixture"))
t.write("attachment")
t.write(" text")
yield t.loseConnection()
self._sqlCalendarStore._dropbox_ok = False
# Share calendars each way
home2 = yield txn.calendarHomeWithUID(self.uid2)
calendar2 = yield home2.calendarWithName("calendar2")
self.sharedName = yield calendar2.shareWith(home, _BIND_MODE_WRITE)
self.sharedName2 = yield calendar.shareWith(home2, _BIND_MODE_WRITE)
yield txn.commit()
txn = self._sqlCalendarStore.newTransaction()
home = yield txn.calendarHomeWithUID(self.uid)
calendar2 = yield home.childWithName(self.sharedName)
self.assertNotEquals(calendar2, None)
home2 = yield txn.calendarHomeWithUID(self.uid2)
calendar1 = yield home2.childWithName(self.sharedName2)
self.assertNotEquals(calendar1, None)
yield txn.commit()
# Now remove user01
yield self.directory.removeRecords((self.uid,))
self.patch(config.Scheduling.Options.WorkQueues, "Enabled", False)
self.patch(config.AutomaticPurging, "Enabled", True)
self.patch(config.AutomaticPurging, "PollingIntervalSeconds", -1)
self.patch(config.AutomaticPurging, "CheckStaggerSeconds", 1)
self.patch(config.AutomaticPurging, "PurgeIntervalSeconds", 3)
self.patch(config.AutomaticPurging, "HomePurgeDelaySeconds", 1)
@inlineCallbacks
def populate(self):
yield populateCalendarsFrom(self.requirements, self.storeUnderTest())
self.notifierFactory.reset()
@inlineCallbacks
def test_purgeUIDs(self):
"""
Verify purgeUIDs removes homes, and doesn't provision homes that don't exist
"""
# Now you see it
home = yield self.homeUnderTest(name=self.uid)
self.assertNotEquals(home, None)
calobj2 = yield self.calendarObjectUnderTest(name="attendee.ics", calendar_name="calendar2", home=self.uid2)
comp = yield calobj2.componentForUser()
self.assertTrue("STATUS:CANCELLED" not in str(comp))
self.assertTrue(";UNTIL=" not in str(comp))
yield self.commit()
count = (yield PurgePrincipalService.purgeUIDs(
self.storeUnderTest(), self.directory,
(self.uid,), verbose=False, proxies=False))
self.assertEquals(count, 2) # 2 events
# Wait for queue to process
while True:
txn = self.transactionUnderTest()
work = yield PrincipalPurgeHomeWork.all(txn)
yield self.commit()
if len(work) == 0:
break
d = Deferred()
reactor.callLater(1, lambda : d.callback(None))
yield d
# Now you don't
home = yield self.homeUnderTest(name=self.uid)
self.assertEquals(home, None)
# Verify calendar1 was unshared to uid2
home2 = yield self.homeUnderTest(name=self.uid2)
self.assertEquals((yield home2.childWithName(self.sharedName)), None)
yield self.commit()
count = yield PurgePrincipalService.purgeUIDs(
self.storeUnderTest(),
self.directory,
(self.uid,),
verbose=False,
proxies=False,
)
self.assertEquals(count, 0)
# And you still don't (making sure it's not provisioned)
home = yield self.homeUnderTest(name=self.uid)
self.assertEquals(home, None)
yield self.commit()
calobj2 = yield self.calendarObjectUnderTest(name="attendee.ics", calendar_name="calendar2", home=self.uid2)
comp = yield calobj2.componentForUser()
self.assertTrue("STATUS:CANCELLED" in str(comp))
self.assertTrue(";UNTIL=" not in str(comp))
yield self.commit()
class PurgePrincipalTestsWithWorkQueue(PurgePrincipalTests):
"""
Same as L{PurgePrincipalTests} but with the work queue enabled.
"""
@inlineCallbacks
def setUp(self):
yield super(PurgePrincipalTestsWithWorkQueue, self).setUp()
self.patch(config.Scheduling.Options.WorkQueues, "Enabled", True)
self.patch(config.AutomaticPurging, "Enabled", True)
self.patch(config.AutomaticPurging, "PollingIntervalSeconds", -1)
self.patch(config.AutomaticPurging, "CheckStaggerSeconds", 1)
self.patch(config.AutomaticPurging, "PurgeIntervalSeconds", 3)
self.patch(config.AutomaticPurging, "HomePurgeDelaySeconds", 1)
@inlineCallbacks
def test_purgeUIDService(self):
"""
Test that the full sequence of work items are processed via automatic polling.
"""
# Now you see it
home = yield self.homeUnderTest(name=self.uid)
self.assertNotEquals(home, None)
calobj2 = yield self.calendarObjectUnderTest(name="attendee.ics", calendar_name="calendar2", home=self.uid2)
comp = yield calobj2.componentForUser()
self.assertTrue("STATUS:CANCELLED" not in str(comp))
self.assertTrue(";UNTIL=" not in str(comp))
yield self.commit()
txn = self.transactionUnderTest()
notBefore = (
datetime.datetime.utcnow() +
datetime.timedelta(seconds=3)
)
yield txn.enqueue(PrincipalPurgePollingWork, notBefore=notBefore)
yield self.commit()
while True:
txn = self.transactionUnderTest()
work1 = yield PrincipalPurgePollingWork.all(txn)
work2 = yield PrincipalPurgeCheckWork.all(txn)
work3 = yield PrincipalPurgeWork.all(txn)
work4 = yield PrincipalPurgeHomeWork.all(txn)
if len(work4) != 0:
home = yield txn.calendarHomeWithUID(self.uid)
self.assertTrue(home.purging())
yield self.commit()
# print len(work1), len(work2), len(work3), len(work4)
if len(work1) + len(work2) + len(work3) + len(work4) == 0:
break
d = Deferred()
reactor.callLater(1, lambda : d.callback(None))
yield d
# Now you don't
home = yield self.homeUnderTest(name=self.uid)
self.assertEquals(home, None)
# Verify calendar1 was unshared to uid2
home2 = yield self.homeUnderTest(name=self.uid2)
self.assertEquals((yield home2.childWithName(self.sharedName)), None)
calobj2 = yield self.calendarObjectUnderTest(name="attendee.ics", calendar_name="calendar2", home=self.uid2)
comp = yield calobj2.componentForUser()
self.assertTrue("STATUS:CANCELLED" in str(comp))
self.assertTrue(";UNTIL=" not in str(comp))
yield self.commit()
calendarserver-7.0+dfsg/calendarserver/tools/test/test_purge_old_events.py
##
# Copyright (c) 2011-2015 Apple Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
##
"""
Tests for calendarserver.tools.purge
"""
import os
from calendarserver.tools.purge import (
PurgeOldEventsService, PurgeAttachmentsService, PurgePrincipalService, PrincipalPurgeHomeWork
)
from pycalendar.datetime import DateTime
from twext.enterprise.dal.syntax import Update, Delete
from twext.enterprise.util import parseSQLTimestamp
from twisted.internet import reactor
from twisted.internet.defer import inlineCallbacks, returnValue, Deferred
from twistedcaldav.config import config
from twistedcaldav.test.util import StoreTestCase
from twistedcaldav.vcard import Component as VCardComponent
from txdav.common.datastore.sql_tables import schema
from txdav.common.datastore.test.util import populateCalendarsFrom
from txweb2.http_headers import MimeType
now = DateTime.getToday().getYear()
OLD_ICS = """BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Apple Inc.//iCal 4.0.1//EN
CALSCALE:GREGORIAN
BEGIN:VTIMEZONE
TZID:US/Pacific
BEGIN:STANDARD
TZOFFSETFROM:-0700
RRULE:FREQ=YEARLY;UNTIL=20061029T090000Z;BYMONTH=10;BYDAY=-1SU
DTSTART:19621028T020000
TZNAME:PST
TZOFFSETTO:-0800
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:-0800
RRULE:FREQ=YEARLY;UNTIL=20060402T100000Z;BYMONTH=4;BYDAY=1SU
DTSTART:19870405T020000
TZNAME:PDT
TZOFFSETTO:-0700
END:DAYLIGHT
BEGIN:DAYLIGHT
TZOFFSETFROM:-0800
RRULE:FREQ=YEARLY;BYMONTH=3;BYDAY=2SU
DTSTART:20070311T020000
TZNAME:PDT
TZOFFSETTO:-0700
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0700
RRULE:FREQ=YEARLY;BYMONTH=11;BYDAY=1SU
DTSTART:20071104T020000
TZNAME:PST
TZOFFSETTO:-0800
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
CREATED:20100303T181216Z
UID:685BC3A1-195A-49B3-926D-388DDACA78A6
DTEND;TZID=US/Pacific:%(year)s0307T151500
TRANSP:OPAQUE
SUMMARY:Ancient event
DTSTART;TZID=US/Pacific:%(year)s0307T111500
DTSTAMP:20100303T181220Z
SEQUENCE:2
END:VEVENT
END:VCALENDAR
""".replace("\n", "\r\n") % {"year": now - 5}
ATTACHMENT_ICS = """BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Apple Inc.//iCal 4.0.1//EN
CALSCALE:GREGORIAN
BEGIN:VTIMEZONE
TZID:US/Pacific
BEGIN:STANDARD
TZOFFSETFROM:-0700
RRULE:FREQ=YEARLY;UNTIL=20061029T090000Z;BYMONTH=10;BYDAY=-1SU
DTSTART:19621028T020000
TZNAME:PST
TZOFFSETTO:-0800
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:-0800
RRULE:FREQ=YEARLY;UNTIL=20060402T100000Z;BYMONTH=4;BYDAY=1SU
DTSTART:19870405T020000
TZNAME:PDT
TZOFFSETTO:-0700
END:DAYLIGHT
BEGIN:DAYLIGHT
TZOFFSETFROM:-0800
RRULE:FREQ=YEARLY;BYMONTH=3;BYDAY=2SU
DTSTART:20070311T020000
TZNAME:PDT
TZOFFSETTO:-0700
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0700
RRULE:FREQ=YEARLY;BYMONTH=11;BYDAY=1SU
DTSTART:20071104T020000
TZNAME:PST
TZOFFSETTO:-0800
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
CREATED:20100303T181216Z
UID:57A5D1F6-9A57-4F74-9520-25C617F54B88-%(uid)s
TRANSP:OPAQUE
SUMMARY:Ancient event with attachment
DTSTART;TZID=US/Pacific:%(year)s0308T111500
DTEND;TZID=US/Pacific:%(year)s0308T151500
DTSTAMP:20100303T181220Z
X-APPLE-DROPBOX:/calendars/__uids__/user01/dropbox/%(dropboxid)s.dropbox
SEQUENCE:2
END:VEVENT
END:VCALENDAR
""".replace("\n", "\r\n")
MATTACHMENT_ICS = """BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Apple Inc.//iCal 4.0.1//EN
CALSCALE:GREGORIAN
BEGIN:VTIMEZONE
TZID:US/Pacific
BEGIN:STANDARD
TZOFFSETFROM:-0700
RRULE:FREQ=YEARLY;UNTIL=20061029T090000Z;BYMONTH=10;BYDAY=-1SU
DTSTART:19621028T020000
TZNAME:PST
TZOFFSETTO:-0800
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:-0800
RRULE:FREQ=YEARLY;UNTIL=20060402T100000Z;BYMONTH=4;BYDAY=1SU
DTSTART:19870405T020000
TZNAME:PDT
TZOFFSETTO:-0700
END:DAYLIGHT
BEGIN:DAYLIGHT
TZOFFSETFROM:-0800
RRULE:FREQ=YEARLY;BYMONTH=3;BYDAY=2SU
DTSTART:20070311T020000
TZNAME:PDT
TZOFFSETTO:-0700
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0700
RRULE:FREQ=YEARLY;BYMONTH=11;BYDAY=1SU
DTSTART:20071104T020000
TZNAME:PST
TZOFFSETTO:-0800
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
CREATED:20100303T181216Z
UID:57A5D1F6-9A57-4F74-9520-25C617F54B88-%(uid)s
TRANSP:OPAQUE
SUMMARY:Ancient event with attachment
DTSTART;TZID=US/Pacific:%(year)s0308T111500
DTEND;TZID=US/Pacific:%(year)s0308T151500
DTSTAMP:20100303T181220Z
SEQUENCE:2
END:VEVENT
END:VCALENDAR
""".replace("\n", "\r\n")
ENDLESS_ICS = """BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Apple Inc.//iCal 4.0.1//EN
CALSCALE:GREGORIAN
BEGIN:VTIMEZONE
TZID:US/Pacific
BEGIN:STANDARD
TZOFFSETFROM:-0700
RRULE:FREQ=YEARLY;UNTIL=20061029T090000Z;BYMONTH=10;BYDAY=-1SU
DTSTART:19621028T020000
TZNAME:PST
TZOFFSETTO:-0800
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:-0800
RRULE:FREQ=YEARLY;UNTIL=20060402T100000Z;BYMONTH=4;BYDAY=1SU
DTSTART:19870405T020000
TZNAME:PDT
TZOFFSETTO:-0700
END:DAYLIGHT
BEGIN:DAYLIGHT
TZOFFSETFROM:-0800
RRULE:FREQ=YEARLY;BYMONTH=3;BYDAY=2SU
DTSTART:20070311T020000
TZNAME:PDT
TZOFFSETTO:-0700
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0700
RRULE:FREQ=YEARLY;BYMONTH=11;BYDAY=1SU
DTSTART:20071104T020000
TZNAME:PST
TZOFFSETTO:-0800
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
CREATED:20100303T194654Z
UID:9FDE0E4C-1495-4CAF-863B-F7F0FB15FE8C
DTEND;TZID=US/Pacific:%(year)s0308T151500
RRULE:FREQ=YEARLY;INTERVAL=1
TRANSP:OPAQUE
SUMMARY:Ancient Repeating Endless
DTSTART;TZID=US/Pacific:%(year)s0308T111500
DTSTAMP:20100303T194710Z
SEQUENCE:4
END:VEVENT
END:VCALENDAR
""".replace("\n", "\r\n") % {"year": now - 5}
REPEATING_AWHILE_ICS = """BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Apple Inc.//iCal 4.0.1//EN
CALSCALE:GREGORIAN
BEGIN:VTIMEZONE
TZID:US/Pacific
BEGIN:STANDARD
TZOFFSETFROM:-0700
RRULE:FREQ=YEARLY;UNTIL=20061029T090000Z;BYMONTH=10;BYDAY=-1SU
DTSTART:19621028T020000
TZNAME:PST
TZOFFSETTO:-0800
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:-0800
RRULE:FREQ=YEARLY;UNTIL=20060402T100000Z;BYMONTH=4;BYDAY=1SU
DTSTART:19870405T020000
TZNAME:PDT
TZOFFSETTO:-0700
END:DAYLIGHT
BEGIN:DAYLIGHT
TZOFFSETFROM:-0800
RRULE:FREQ=YEARLY;BYMONTH=3;BYDAY=2SU
DTSTART:20070311T020000
TZNAME:PDT
TZOFFSETTO:-0700
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0700
RRULE:FREQ=YEARLY;BYMONTH=11;BYDAY=1SU
DTSTART:20071104T020000
TZNAME:PST
TZOFFSETTO:-0800
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
CREATED:20100303T194716Z
UID:76236B32-2BC4-4D78-956B-8D42D4086200
DTEND;TZID=US/Pacific:%(year)s0309T151500
RRULE:FREQ=YEARLY;INTERVAL=1;COUNT=3
TRANSP:OPAQUE
SUMMARY:Ancient Repeat Awhile
DTSTART;TZID=US/Pacific:%(year)s0309T111500
DTSTAMP:20100303T194747Z
SEQUENCE:6
END:VEVENT
END:VCALENDAR
""".replace("\n", "\r\n") % {"year": now - 5}
STRADDLING_ICS = """BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Apple Inc.//iCal 4.0.1//EN
CALSCALE:GREGORIAN
BEGIN:VTIMEZONE
TZID:US/Pacific
BEGIN:DAYLIGHT
TZOFFSETFROM:-0800
RRULE:FREQ=YEARLY;BYMONTH=3;BYDAY=2SU
DTSTART:20070311T020000
TZNAME:PDT
TZOFFSETTO:-0700
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0700
RRULE:FREQ=YEARLY;BYMONTH=11;BYDAY=1SU
DTSTART:20071104T020000
TZNAME:PST
TZOFFSETTO:-0800
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
CREATED:20100303T213643Z
UID:1C219DAD-D374-4822-8C98-ADBA85E253AB
DTEND;TZID=US/Pacific:%(year)s0508T121500
RRULE:FREQ=MONTHLY;INTERVAL=1;UNTIL=%(until)s0509T065959Z
TRANSP:OPAQUE
SUMMARY:Straddling cut-off
DTSTART;TZID=US/Pacific:%(year)s0508T111500
DTSTAMP:20100303T213704Z
SEQUENCE:5
END:VEVENT
END:VCALENDAR
""".replace("\n", "\r\n") % {"year": now - 2, "until": now + 1}
RECENT_ICS = """BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Apple Inc.//iCal 4.0.1//EN
CALSCALE:GREGORIAN
BEGIN:VTIMEZONE
TZID:US/Pacific
BEGIN:DAYLIGHT
TZOFFSETFROM:-0800
RRULE:FREQ=YEARLY;BYMONTH=3;BYDAY=2SU
DTSTART:20070311T020000
TZNAME:PDT
TZOFFSETTO:-0700
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0700
RRULE:FREQ=YEARLY;BYMONTH=11;BYDAY=1SU
DTSTART:20071104T020000
TZNAME:PST
TZOFFSETTO:-0800
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
CREATED:20100303T195159Z
UID:F2F14D94-B944-43D9-8F6F-97F95B2764CA
DTEND;TZID=US/Pacific:%(year)s0304T141500
TRANSP:OPAQUE
SUMMARY:Recent
DTSTART;TZID=US/Pacific:%(year)s0304T120000
DTSTAMP:20100303T195203Z
SEQUENCE:2
END:VEVENT
END:VCALENDAR
""".replace("\n", "\r\n") % {"year": now}
VCARD_1 = """BEGIN:VCARD
VERSION:3.0
N:User;Test
FN:Test User
EMAIL;type=INTERNET;PREF:testuser@example.com
UID:12345-67890-1.1
END:VCARD
""".replace("\n", "\r\n")
class PurgeOldEventsTests(StoreTestCase):
"""
Tests for deleting events older than a given date
"""
metadata = {
"accessMode": "PUBLIC",
"isScheduleObject": True,
"scheduleTag": "abc",
"scheduleEtags": (),
"hasPrivateComment": False,
}
requirements = {
"home1" : {
"calendar1" : {
"old.ics" : (OLD_ICS, metadata,),
"endless.ics" : (ENDLESS_ICS, metadata,),
"oldattachment1.ics" : (ATTACHMENT_ICS % {"year": now - 5, "uid": "1.1", "dropboxid": "1.1"}, metadata,),
"oldattachment2.ics" : (ATTACHMENT_ICS % {"year": now - 5, "uid": "1.2", "dropboxid": "1.2"}, metadata,),
"currentattachment3.ics" : (ATTACHMENT_ICS % {"year": now + 1, "uid": "1.3", "dropboxid": "1.3"}, metadata,),
"oldmattachment1.ics" : (MATTACHMENT_ICS % {"year": now - 5, "uid": "1.1m"}, metadata,),
"oldmattachment2.ics" : (MATTACHMENT_ICS % {"year": now - 5, "uid": "1.2m"}, metadata,),
"currentmattachment3.ics" : (MATTACHMENT_ICS % {"year": now + 1, "uid": "1.3m"}, metadata,),
},
"inbox": {},
},
"home2" : {
"calendar2" : {
"straddling.ics" : (STRADDLING_ICS, metadata,),
"recent.ics" : (RECENT_ICS, metadata,),
"oldattachment1.ics" : (ATTACHMENT_ICS % {"year": now - 5, "uid": "2.1", "dropboxid": "2.1"}, metadata,),
"currentattachment2.ics" : (ATTACHMENT_ICS % {"year": now + 1, "uid": "2.2", "dropboxid": "2.1"}, metadata,),
"oldattachment3.ics" : (ATTACHMENT_ICS % {"year": now - 5, "uid": "2.3", "dropboxid": "2.2"}, metadata,),
"oldattachment4.ics" : (ATTACHMENT_ICS % {"year": now - 5, "uid": "2.4", "dropboxid": "2.2"}, metadata,),
"oldmattachment1.ics" : (MATTACHMENT_ICS % {"year": now - 5, "uid": "2.1"}, metadata,),
"currentmattachment2.ics" : (MATTACHMENT_ICS % {"year": now + 1, "uid": "2.2"}, metadata,),
"oldmattachment3.ics" : (MATTACHMENT_ICS % {"year": now - 5, "uid": "2.3"}, metadata,),
"oldmattachment4.ics" : (MATTACHMENT_ICS % {"year": now - 5, "uid": "2.4"}, metadata,),
},
"calendar3" : {
"repeating_awhile.ics" : (REPEATING_AWHILE_ICS, metadata,),
},
"inbox": {},
}
}
def configure(self):
super(PurgeOldEventsTests, self).configure()
# Turn off delayed indexing option so we can have some useful tests
self.patch(config, "FreeBusyIndexDelayedExpand", False)
# Tweak queue timing to speed things up
self.patch(config.Scheduling.Options.WorkQueues, "Enabled", False)
self.patch(config.AutomaticPurging, "PollingIntervalSeconds", -1)
self.patch(config.AutomaticPurging, "CheckStaggerSeconds", 1)
self.patch(config.AutomaticPurging, "PurgeIntervalSeconds", 3)
self.patch(config.AutomaticPurging, "HomePurgeDelaySeconds", 1)
# self.patch(config.DirectoryService.params, "xmlFile",
# os.path.join(
# os.path.dirname(__file__), "purge", "accounts.xml"
# )
# )
# self.patch(config.ResourceService.params, "xmlFile",
# os.path.join(
# os.path.dirname(__file__), "purge", "resources.xml"
# )
# )
@inlineCallbacks
def populate(self):
yield populateCalendarsFrom(self.requirements, self.storeUnderTest())
self.notifierFactory.reset()
txn = self._sqlCalendarStore.newTransaction()
Delete(
From=schema.ATTACHMENT,
Where=None
).on(txn)
(yield txn.commit())
@inlineCallbacks
def test_eventsOlderThan(self):
cutoff = DateTime(now, 4, 1, 0, 0, 0)
txn = self._sqlCalendarStore.newTransaction()
# Query for all old events
results = (yield txn.eventsOlderThan(cutoff))
self.assertEquals(
sorted(results),
sorted([
['home1', 'calendar1', 'old.ics', parseSQLTimestamp('1901-01-01 01:00:00')],
['home1', 'calendar1', 'oldattachment1.ics', parseSQLTimestamp('1901-01-01 01:00:00')],
['home1', 'calendar1', 'oldattachment2.ics', parseSQLTimestamp('1901-01-01 01:00:00')],
['home1', 'calendar1', 'oldmattachment1.ics', parseSQLTimestamp('1901-01-01 01:00:00')],
['home1', 'calendar1', 'oldmattachment2.ics', parseSQLTimestamp('1901-01-01 01:00:00')],
['home2', 'calendar3', 'repeating_awhile.ics', parseSQLTimestamp('1901-01-01 01:00:00')],
['home2', 'calendar2', 'recent.ics', parseSQLTimestamp('%s-03-04 22:15:00' % (now,))],
['home2', 'calendar2', 'oldattachment1.ics', parseSQLTimestamp('1901-01-01 01:00:00')],
['home2', 'calendar2', 'oldattachment3.ics', parseSQLTimestamp('1901-01-01 01:00:00')],
['home2', 'calendar2', 'oldattachment4.ics', parseSQLTimestamp('1901-01-01 01:00:00')],
['home2', 'calendar2', 'oldmattachment1.ics', parseSQLTimestamp('1901-01-01 01:00:00')],
['home2', 'calendar2', 'oldmattachment3.ics', parseSQLTimestamp('1901-01-01 01:00:00')],
['home2', 'calendar2', 'oldmattachment4.ics', parseSQLTimestamp('1901-01-01 01:00:00')],
])
)
# Query for the oldest event. With limited time caching the oldest event
# cannot be precisely known; all we get back is the first item in the sorted
# list, where each entry carries the 1901 "dummy" timestamp indicating a partial cache.
results = (yield txn.eventsOlderThan(cutoff, batchSize=1))
self.assertEquals(len(results), 1)
@inlineCallbacks
def test_removeOldEvents(self):
cutoff = DateTime(now, 4, 1, 0, 0, 0)
txn = self._sqlCalendarStore.newTransaction()
# Remove oldest event - except we don't know what that is because of the dummy timestamps
# used with a partial index. So all we can check is that one event was removed.
count = (yield txn.removeOldEvents(cutoff, batchSize=1))
self.assertEquals(count, 1)
results = (yield txn.eventsOlderThan(cutoff))
self.assertEquals(len(results), 12)
# Remove remaining oldest events
count = (yield txn.removeOldEvents(cutoff))
self.assertEquals(count, 12)
results = (yield txn.eventsOlderThan(cutoff))
self.assertEquals(list(results), [])
# Remove oldest events (none left)
count = (yield txn.removeOldEvents(cutoff))
self.assertEquals(count, 0)
@inlineCallbacks
def _addAttachment(self, home, calendar, event, name):
self._sqlCalendarStore._dropbox_ok = True
txn = self._sqlCalendarStore.newTransaction()
# Create an event with an attachment
home = (yield txn.calendarHomeWithUID(home))
calendar = (yield home.calendarWithName(calendar))
event = (yield calendar.calendarObjectWithName(event))
attachment = (yield event.createAttachmentWithName(name))
t = attachment.store(MimeType("text", "x-fixture"))
t.write("%s/%s/%s/%s" % (home, calendar, event, name,))
t.write(" attachment")
(yield t.loseConnection())
(yield txn.commit())
self._sqlCalendarStore._dropbox_ok = False
returnValue(attachment)
@inlineCallbacks
def _orphanAttachment(self, home, calendar, event):
txn = self._sqlCalendarStore.newTransaction()
# Reset dropbox id in calendar_object
home = (yield txn.calendarHomeWithUID(home))
calendar = (yield home.calendarWithName(calendar))
event = (yield calendar.calendarObjectWithName(event))
co = schema.CALENDAR_OBJECT
Update(
{co.DROPBOX_ID: None, },
Where=co.RESOURCE_ID == event._resourceID,
).on(txn)
(yield txn.commit())
@inlineCallbacks
def _addManagedAttachment(self, home, calendar, event, name):
txn = self._sqlCalendarStore.newTransaction()
# Create an event with an attachment
home = (yield txn.calendarHomeWithUID(home))
calendar = (yield home.calendarWithName(calendar))
event = (yield calendar.calendarObjectWithName(event))
attachment = (yield event.createManagedAttachment())
t = attachment.store(MimeType("text", "x-fixture"), name)
t.write("%s/%s/%s/%s" % (home, calendar, event, name,))
t.write(" managed attachment")
(yield t.loseConnection())
(yield txn.commit())
returnValue(attachment)
@inlineCallbacks
def test_removeOrphanedAttachments(self):
home = (yield self.transactionUnderTest().calendarHomeWithUID("home1"))
quota1 = (yield home.quotaUsedBytes())
(yield self.commit())
self.assertEqual(quota1, 0)
attachment = (yield self._addAttachment("home1", "calendar1", "oldattachment1.ics", "att1"))
attachmentPath = attachment._path.path
self.assertTrue(os.path.exists(attachmentPath))
mattachment1 = (yield self._addManagedAttachment("home1", "calendar1", "oldmattachment1.ics", "matt1"))
mattachment2 = (yield self._addManagedAttachment("home1", "calendar1", "currentmattachment3.ics", "matt3"))
mattachmentPath1 = mattachment1._path.path
self.assertTrue(os.path.exists(mattachmentPath1))
mattachmentPath2 = mattachment2._path.path
self.assertTrue(os.path.exists(mattachmentPath2))
home = (yield self.transactionUnderTest().calendarHomeWithUID("home1"))
quota2 = (yield home.quotaUsedBytes())
(yield self.commit())
self.assertTrue(quota2 > quota1)
orphans = (yield self.transactionUnderTest().orphanedAttachments())
self.assertEquals(len(orphans), 0)
count = (yield self.transactionUnderTest().removeOrphanedAttachments(batchSize=100))
self.assertEquals(count, 0)
home = (yield self.transactionUnderTest().calendarHomeWithUID("home1"))
quota = (yield home.quotaUsedBytes())
(yield self.commit())
self.assertNotEqual(quota, 0)
# Files still exist
self.assertTrue(os.path.exists(attachmentPath))
self.assertTrue(os.path.exists(mattachmentPath1))
self.assertTrue(os.path.exists(mattachmentPath2))
# Delete all old events (including the event containing the attachment)
cutoff = DateTime(now, 4, 1, 0, 0, 0)
count = (yield self.transactionUnderTest().removeOldEvents(cutoff))
# See which events have gone and which exist
home = (yield self.transactionUnderTest().calendarHomeWithUID("home1"))
calendar = (yield home.calendarWithName("calendar1"))
self.assertNotEqual((yield calendar.calendarObjectWithName("endless.ics")), None)
self.assertNotEqual((yield calendar.calendarObjectWithName("currentmattachment3.ics")), None)
self.assertEqual((yield calendar.calendarObjectWithName("old.ics")), None)
self.assertEqual((yield calendar.calendarObjectWithName("oldattachment1.ics")), None)
self.assertEqual((yield calendar.calendarObjectWithName("oldmattachment1.ics")), None)
(yield self.commit())
home = (yield self.transactionUnderTest().calendarHomeWithUID("home1"))
quota3 = (yield home.quotaUsedBytes())
(yield self.commit())
self.assertTrue(quota3 < quota2)
self.assertNotEqual(quota3, 0)
# Just look for orphaned attachments - none left
orphans = (yield self.transactionUnderTest().orphanedAttachments())
self.assertEquals(len(orphans), 0)
# Files
self.assertFalse(os.path.exists(attachmentPath))
self.assertFalse(os.path.exists(mattachmentPath1))
self.assertTrue(os.path.exists(mattachmentPath2))
@inlineCallbacks
def test_purgeOldEvents(self):
# Dry run
total = (yield PurgeOldEventsService.purgeOldEvents(
self._sqlCalendarStore,
None,
DateTime(now, 4, 1, 0, 0, 0),
2,
dryrun=True,
debug=True
))
self.assertEquals(total, 13)
# Actually remove
total = (yield PurgeOldEventsService.purgeOldEvents(
self._sqlCalendarStore,
None,
DateTime(now, 4, 1, 0, 0, 0),
2,
debug=True
))
self.assertEquals(total, 13)
# There should be no more left
total = (yield PurgeOldEventsService.purgeOldEvents(
self._sqlCalendarStore,
None,
DateTime(now, 4, 1, 0, 0, 0),
2,
debug=True
))
self.assertEquals(total, 0)
@inlineCallbacks
def test_purgeOldEvents_home_filtering(self):
# Dry run
total = (yield PurgeOldEventsService.purgeOldEvents(
self._sqlCalendarStore,
"ho",
DateTime(now, 4, 1, 0, 0, 0),
2,
dryrun=True,
debug=True
))
self.assertEquals(total, 13)
# Dry run
total = (yield PurgeOldEventsService.purgeOldEvents(
self._sqlCalendarStore,
"home",
DateTime(now, 4, 1, 0, 0, 0),
2,
dryrun=True,
debug=True
))
self.assertEquals(total, 13)
# Dry run
total = (yield PurgeOldEventsService.purgeOldEvents(
self._sqlCalendarStore,
"home1",
DateTime(now, 4, 1, 0, 0, 0),
2,
dryrun=True,
debug=True
))
self.assertEquals(total, 5)
# Dry run
total = (yield PurgeOldEventsService.purgeOldEvents(
self._sqlCalendarStore,
"home2",
DateTime(now, 4, 1, 0, 0, 0),
2,
dryrun=True,
debug=True
))
self.assertEquals(total, 8)
@inlineCallbacks
def test_purgeOldEvents_old_cutoff(self):
# Dry run
cutoff = DateTime.getToday()
cutoff.setDateOnly(False)
cutoff.offsetDay(-400)
total = (yield PurgeOldEventsService.purgeOldEvents(
self._sqlCalendarStore,
"ho",
cutoff,
2,
dryrun=True,
debug=True
))
self.assertEquals(total, 12)
# Actually remove
total = (yield PurgeOldEventsService.purgeOldEvents(
self._sqlCalendarStore,
None,
cutoff,
2,
debug=True
))
self.assertEquals(total, 12)
total = (yield PurgeOldEventsService.purgeOldEvents(
self._sqlCalendarStore,
"ho",
cutoff,
2,
dryrun=True,
debug=True
))
self.assertEquals(total, 0)
@inlineCallbacks
def test_purgeUID(self):
txn = self._sqlCalendarStore.newTransaction()
# Create an addressbook and one CardDAV resource
abHome = (yield txn.addressbookHomeWithUID("home1", create=True))
abColl = (yield abHome.addressbookWithName("addressbook"))
(yield abColl.createAddressBookObjectWithName(
"card1",
VCardComponent.fromString(VCARD_1)))
self.assertEquals(len((yield abColl.addressbookObjects())), 1)
# Verify there are 8 events in calendar1
calHome = (yield txn.calendarHomeWithUID("home1"))
calColl = (yield calHome.calendarWithName("calendar1"))
self.assertEquals(len((yield calColl.calendarObjects())), 8)
# Make the newly created objects available to the purgeUID transaction
(yield txn.commit())
# Purge home1 completely
total = yield PurgePrincipalService.purgeUIDs(
self._sqlCalendarStore, self.directory,
("home1",), verbose=False, proxies=False)
# Wait for queue to process
while True:
txn = self.transactionUnderTest()
work = yield PrincipalPurgeHomeWork.all(txn)
yield self.commit()
if len(work) == 0:
break
d = Deferred()
reactor.callLater(1, lambda: d.callback(None))
yield d
# 9 items deleted: 8 events and 1 vcard
self.assertEquals(total, 9)
# Homes have been deleted as well
txn = self._sqlCalendarStore.newTransaction()
abHome = (yield txn.addressbookHomeWithUID("home1"))
self.assertEquals(abHome, None)
calHome = (yield txn.calendarHomeWithUID("home1"))
self.assertEquals(calHome, None)
@inlineCallbacks
def test_purgeAttachmentsWithoutCutoffWithPurgeOld(self):
"""
L{PurgeAttachmentsService.purgeAttachments} purges only orphaned attachments once old events have already been removed.
"""
home = (yield self.transactionUnderTest().calendarHomeWithUID("home1"))
quota1 = (yield home.quotaUsedBytes())
(yield self.commit())
self.assertEqual(quota1, 0)
(yield self._addAttachment("home1", "calendar1", "oldattachment1.ics", "att1"))
(yield self._addAttachment("home1", "calendar1", "oldattachment2.ics", "att2.1"))
(yield self._addAttachment("home1", "calendar1", "oldattachment2.ics", "att2.2"))
(yield self._addAttachment("home1", "calendar1", "currentattachment3.ics", "att3"))
(yield self._addAttachment("home2", "calendar2", "oldattachment1.ics", "att4"))
(yield self._addAttachment("home2", "calendar2", "currentattachment2.ics", "att5"))
(yield self._addAttachment("home2", "calendar2", "oldattachment3.ics", "att6"))
(yield self._addAttachment("home2", "calendar2", "oldattachment4.ics", "att7"))
(yield self._orphanAttachment("home1", "calendar1", "oldattachment1.ics"))
(yield self._addManagedAttachment("home1", "calendar1", "oldmattachment1.ics", "matt1"))
(yield self._addManagedAttachment("home1", "calendar1", "oldmattachment2.ics", "matt2.1"))
(yield self._addManagedAttachment("home1", "calendar1", "oldmattachment2.ics", "matt2.2"))
(yield self._addManagedAttachment("home1", "calendar1", "currentmattachment3.ics", "matt3"))
(yield self._addManagedAttachment("home2", "calendar2", "oldmattachment1.ics", "matt4"))
(yield self._addManagedAttachment("home2", "calendar2", "currentmattachment2.ics", "matt5"))
(yield self._addManagedAttachment("home2", "calendar2", "oldmattachment3.ics", "matt6"))
(yield self._addManagedAttachment("home2", "calendar2", "oldmattachment4.ics", "matt7"))
home = (yield self.transactionUnderTest().calendarHomeWithUID("home1"))
quota2 = (yield home.quotaUsedBytes())
(yield self.commit())
self.assertTrue(quota2 > quota1)
# Remove old events first
total = (yield PurgeOldEventsService.purgeOldEvents(
self._sqlCalendarStore,
None,
DateTime(now, 4, 1, 0, 0, 0),
2,
debug=False
))
self.assertEquals(total, 13)
home = (yield self.transactionUnderTest().calendarHomeWithUID("home1"))
quota3 = (yield home.quotaUsedBytes())
(yield self.commit())
self.assertTrue(quota3 < quota2)
# Dry run
total = (yield PurgeAttachmentsService.purgeAttachments(self._sqlCalendarStore, None, 0, 2, dryrun=True, verbose=False))
self.assertEquals(total, 1)
home = (yield self.transactionUnderTest().calendarHomeWithUID("home1"))
quota4 = (yield home.quotaUsedBytes())
(yield self.commit())
self.assertTrue(quota4 == quota3)
# Actually remove
total = (yield PurgeAttachmentsService.purgeAttachments(self._sqlCalendarStore, None, 0, 2, dryrun=False, verbose=False))
self.assertEquals(total, 1)
home = (yield self.transactionUnderTest().calendarHomeWithUID("home1"))
quota5 = (yield home.quotaUsedBytes())
(yield self.commit())
self.assertTrue(quota5 < quota4)
# There should be no more left
total = (yield PurgeAttachmentsService.purgeAttachments(self._sqlCalendarStore, None, 0, 2, dryrun=False, verbose=False))
self.assertEquals(total, 0)
@inlineCallbacks
def test_purgeAttachmentsWithoutCutoff(self):
"""
L{PurgeAttachmentsService.purgeAttachments} purges only orphaned attachments, not current ones.
"""
home = (yield self.transactionUnderTest().calendarHomeWithUID("home1"))
quota1 = (yield home.quotaUsedBytes())
(yield self.commit())
self.assertEqual(quota1, 0)
(yield self._addAttachment("home1", "calendar1", "oldattachment1.ics", "att1"))
(yield self._addAttachment("home1", "calendar1", "oldattachment2.ics", "att2.1"))
(yield self._addAttachment("home1", "calendar1", "oldattachment2.ics", "att2.2"))
(yield self._addAttachment("home1", "calendar1", "currentattachment3.ics", "att3"))
(yield self._addAttachment("home2", "calendar2", "oldattachment1.ics", "att4"))
(yield self._addAttachment("home2", "calendar2", "currentattachment2.ics", "att5"))
(yield self._addAttachment("home2", "calendar2", "oldattachment3.ics", "att6"))
(yield self._addAttachment("home2", "calendar2", "oldattachment4.ics", "att7"))
(yield self._orphanAttachment("home1", "calendar1", "oldattachment1.ics"))
(yield self._orphanAttachment("home2", "calendar2", "oldattachment1.ics"))
(yield self._orphanAttachment("home2", "calendar2", "currentattachment2.ics"))
(yield self._addManagedAttachment("home1", "calendar1", "oldmattachment1.ics", "matt1"))
(yield self._addManagedAttachment("home1", "calendar1", "oldmattachment2.ics", "matt2.1"))
(yield self._addManagedAttachment("home1", "calendar1", "oldmattachment2.ics", "matt2.2"))
(yield self._addManagedAttachment("home1", "calendar1", "currentmattachment3.ics", "matt3"))
(yield self._addManagedAttachment("home2", "calendar2", "oldmattachment1.ics", "matt4"))
(yield self._addManagedAttachment("home2", "calendar2", "currentmattachment2.ics", "matt5"))
(yield self._addManagedAttachment("home2", "calendar2", "oldmattachment3.ics", "matt6"))
(yield self._addManagedAttachment("home2", "calendar2", "oldmattachment4.ics", "matt7"))
home = (yield self.transactionUnderTest().calendarHomeWithUID("home1"))
quota2 = (yield home.quotaUsedBytes())
(yield self.commit())
self.assertTrue(quota2 > quota1)
# Dry run
total = (yield PurgeAttachmentsService.purgeAttachments(self._sqlCalendarStore, None, 0, 2, dryrun=True, verbose=False))
self.assertEquals(total, 3)
home = (yield self.transactionUnderTest().calendarHomeWithUID("home1"))
quota3 = (yield home.quotaUsedBytes())
(yield self.commit())
self.assertTrue(quota3 == quota2)
# Actually remove
total = (yield PurgeAttachmentsService.purgeAttachments(self._sqlCalendarStore, None, 0, 2, dryrun=False, verbose=False))
self.assertEquals(total, 3)
home = (yield self.transactionUnderTest().calendarHomeWithUID("home1"))
quota4 = (yield home.quotaUsedBytes())
(yield self.commit())
self.assertTrue(quota4 < quota3)
# There should be no more left
total = (yield PurgeAttachmentsService.purgeAttachments(self._sqlCalendarStore, None, 0, 2, dryrun=False, verbose=False))
self.assertEquals(total, 0)
@inlineCallbacks
def test_purgeAttachmentsWithoutCutoffWithMatchingUUID(self):
"""
L{PurgeAttachmentsService.purgeAttachments} purges only orphaned attachments in the home matching the given UUID, not current ones.
"""
home = (yield self.transactionUnderTest().calendarHomeWithUID("home1"))
quota1 = (yield home.quotaUsedBytes())
(yield self.commit())
self.assertEqual(quota1, 0)
(yield self._addAttachment("home1", "calendar1", "oldattachment1.ics", "att1"))
(yield self._addAttachment("home1", "calendar1", "oldattachment2.ics", "att2.1"))
(yield self._addAttachment("home1", "calendar1", "oldattachment2.ics", "att2.2"))
(yield self._addAttachment("home1", "calendar1", "currentattachment3.ics", "att3"))
(yield self._addAttachment("home2", "calendar2", "oldattachment1.ics", "att4"))
(yield self._addAttachment("home2", "calendar2", "currentattachment2.ics", "att5"))
(yield self._addAttachment("home2", "calendar2", "oldattachment3.ics", "att6"))
(yield self._addAttachment("home2", "calendar2", "oldattachment4.ics", "att7"))
(yield self._orphanAttachment("home1", "calendar1", "oldattachment1.ics"))
(yield self._orphanAttachment("home2", "calendar2", "oldattachment1.ics"))
(yield self._orphanAttachment("home2", "calendar2", "currentattachment2.ics"))
(yield self._addManagedAttachment("home1", "calendar1", "oldmattachment1.ics", "matt1"))
(yield self._addManagedAttachment("home1", "calendar1", "oldmattachment2.ics", "matt2.1"))
(yield self._addManagedAttachment("home1", "calendar1", "oldmattachment2.ics", "matt2.2"))
(yield self._addManagedAttachment("home1", "calendar1", "currentmattachment3.ics", "matt3"))
(yield self._addManagedAttachment("home2", "calendar2", "oldmattachment1.ics", "matt4"))
(yield self._addManagedAttachment("home2", "calendar2", "currentmattachment2.ics", "matt5"))
(yield self._addManagedAttachment("home2", "calendar2", "oldmattachment3.ics", "matt6"))
(yield self._addManagedAttachment("home2", "calendar2", "oldmattachment4.ics", "matt7"))
home = (yield self.transactionUnderTest().calendarHomeWithUID("home1"))
quota2 = (yield home.quotaUsedBytes())
(yield self.commit())
self.assertTrue(quota2 > quota1)
# Dry run
total = (yield PurgeAttachmentsService.purgeAttachments(self._sqlCalendarStore, "home1", 0, 2, dryrun=True, verbose=False))
self.assertEquals(total, 1)
home = (yield self.transactionUnderTest().calendarHomeWithUID("home1"))
quota3 = (yield home.quotaUsedBytes())
(yield self.commit())
self.assertTrue(quota3 == quota2)
# Actually remove
total = (yield PurgeAttachmentsService.purgeAttachments(self._sqlCalendarStore, "home1", 0, 2, dryrun=False, verbose=False))
self.assertEquals(total, 1)
home = (yield self.transactionUnderTest().calendarHomeWithUID("home1"))
quota4 = (yield home.quotaUsedBytes())
(yield self.commit())
self.assertTrue(quota4 < quota3)
# There should be no more left
total = (yield PurgeAttachmentsService.purgeAttachments(self._sqlCalendarStore, "home1", 0, 2, dryrun=False, verbose=False))
self.assertEquals(total, 0)
@inlineCallbacks
def test_purgeAttachmentsWithoutCutoffWithoutMatchingUUID(self):
"""
L{PurgeAttachmentsService.purgeAttachments} purges nothing when the given UUID matches a home with no orphaned attachments.
"""
home = (yield self.transactionUnderTest().calendarHomeWithUID("home1"))
quota1 = (yield home.quotaUsedBytes())
(yield self.commit())
self.assertEqual(quota1, 0)
(yield self._addAttachment("home1", "calendar1", "oldattachment1.ics", "att1"))
(yield self._addAttachment("home1", "calendar1", "oldattachment2.ics", "att2.1"))
(yield self._addAttachment("home1", "calendar1", "oldattachment2.ics", "att2.2"))
(yield self._addAttachment("home1", "calendar1", "currentattachment3.ics", "att3"))
(yield self._addAttachment("home2", "calendar2", "oldattachment1.ics", "att4"))
(yield self._addAttachment("home2", "calendar2", "currentattachment2.ics", "att5"))
(yield self._addAttachment("home2", "calendar2", "oldattachment3.ics", "att6"))
(yield self._addAttachment("home2", "calendar2", "oldattachment4.ics", "att7"))
(yield self._orphanAttachment("home1", "calendar1", "oldattachment1.ics"))
(yield self._addManagedAttachment("home1", "calendar1", "oldmattachment1.ics", "matt1"))
(yield self._addManagedAttachment("home1", "calendar1", "oldmattachment2.ics", "matt2.1"))
(yield self._addManagedAttachment("home1", "calendar1", "oldmattachment2.ics", "matt2.2"))
(yield self._addManagedAttachment("home1", "calendar1", "currentmattachment3.ics", "matt3"))
(yield self._addManagedAttachment("home2", "calendar2", "oldmattachment1.ics", "matt4"))
(yield self._addManagedAttachment("home2", "calendar2", "currentmattachment2.ics", "matt5"))
(yield self._addManagedAttachment("home2", "calendar2", "oldmattachment3.ics", "matt6"))
(yield self._addManagedAttachment("home2", "calendar2", "oldmattachment4.ics", "matt7"))
home = (yield self.transactionUnderTest().calendarHomeWithUID("home1"))
quota2 = (yield home.quotaUsedBytes())
(yield self.commit())
self.assertTrue(quota2 > quota1)
# Dry run
total = (yield PurgeAttachmentsService.purgeAttachments(self._sqlCalendarStore, "home2", 0, 2, dryrun=True, verbose=False))
self.assertEquals(total, 0)
home = (yield self.transactionUnderTest().calendarHomeWithUID("home1"))
quota3 = (yield home.quotaUsedBytes())
(yield self.commit())
self.assertTrue(quota3 == quota2)
# Actually remove
total = (yield PurgeAttachmentsService.purgeAttachments(self._sqlCalendarStore, "home2", 0, 2, dryrun=False, verbose=False))
self.assertEquals(total, 0)
home = (yield self.transactionUnderTest().calendarHomeWithUID("home1"))
quota4 = (yield home.quotaUsedBytes())
(yield self.commit())
self.assertTrue(quota4 == quota3)
# There should be no more left
total = (yield PurgeAttachmentsService.purgeAttachments(self._sqlCalendarStore, "home2", 0, 2, dryrun=False, verbose=False))
self.assertEquals(total, 0)
@inlineCallbacks
def test_purgeAttachmentsWithCutoffOld(self):
"""
L{PurgeAttachmentsService.purgeAttachments} purges only orphaned attachments, not current ones.
"""
home = (yield self.transactionUnderTest().calendarHomeWithUID("home1"))
quota1 = (yield home.quotaUsedBytes())
(yield self.commit())
self.assertEqual(quota1, 0)
(yield self._addAttachment("home1", "calendar1", "oldattachment1.ics", "att1"))
(yield self._addAttachment("home1", "calendar1", "oldattachment2.ics", "att2.1"))
(yield self._addAttachment("home1", "calendar1", "oldattachment2.ics", "att2.2"))
(yield self._addAttachment("home1", "calendar1", "currentattachment3.ics", "att3"))
(yield self._addAttachment("home2", "calendar2", "oldattachment1.ics", "att4"))
(yield self._addAttachment("home2", "calendar2", "currentattachment2.ics", "att5"))
(yield self._addAttachment("home2", "calendar2", "oldattachment3.ics", "att6"))
(yield self._addAttachment("home2", "calendar2", "oldattachment4.ics", "att7"))
(yield self._orphanAttachment("home1", "calendar1", "oldattachment1.ics"))
(yield self._orphanAttachment("home2", "calendar2", "oldattachment1.ics"))
(yield self._orphanAttachment("home2", "calendar2", "currentattachment2.ics"))
(yield self._addManagedAttachment("home1", "calendar1", "oldmattachment1.ics", "matt1"))
(yield self._addManagedAttachment("home1", "calendar1", "oldmattachment2.ics", "matt2.1"))
(yield self._addManagedAttachment("home1", "calendar1", "oldmattachment2.ics", "matt2.2"))
(yield self._addManagedAttachment("home1", "calendar1", "currentmattachment3.ics", "matt3"))
(yield self._addManagedAttachment("home2", "calendar2", "oldmattachment1.ics", "matt4"))
(yield self._addManagedAttachment("home2", "calendar2", "currentmattachment2.ics", "matt5"))
(yield self._addManagedAttachment("home2", "calendar2", "oldmattachment3.ics", "matt6"))
(yield self._addManagedAttachment("home2", "calendar2", "oldmattachment4.ics", "matt7"))
home = (yield self.transactionUnderTest().calendarHomeWithUID("home1"))
quota2 = (yield home.quotaUsedBytes())
(yield self.commit())
self.assertTrue(quota2 > quota1)
# Dry run
total = (yield PurgeAttachmentsService.purgeAttachments(self._sqlCalendarStore, None, 14, 2, dryrun=True, verbose=False))
self.assertEquals(total, 13)
home = (yield self.transactionUnderTest().calendarHomeWithUID("home1"))
quota3 = (yield home.quotaUsedBytes())
(yield self.commit())
self.assertTrue(quota3 == quota2)
# Actually remove
total = (yield PurgeAttachmentsService.purgeAttachments(self._sqlCalendarStore, None, 14, 2, dryrun=False, verbose=False))
self.assertEquals(total, 13)
home = (yield self.transactionUnderTest().calendarHomeWithUID("home1"))
quota4 = (yield home.quotaUsedBytes())
(yield self.commit())
self.assertTrue(quota4 < quota3)
# There should be no more left
total = (yield PurgeAttachmentsService.purgeAttachments(self._sqlCalendarStore, None, 14, 2, dryrun=False, verbose=False))
self.assertEquals(total, 0)
@inlineCallbacks
def test_purgeAttachmentsWithCutoffOldWithMatchingUUID(self):
"""
L{PurgeAttachmentsService.purgeAttachments} purges only orphaned attachments, not current ones.
"""
home = (yield self.transactionUnderTest().calendarHomeWithUID("home1"))
quota1 = (yield home.quotaUsedBytes())
(yield self.commit())
self.assertEqual(quota1, 0)
(yield self._addAttachment("home1", "calendar1", "oldattachment1.ics", "att1"))
(yield self._addAttachment("home1", "calendar1", "oldattachment2.ics", "att2.1"))
(yield self._addAttachment("home1", "calendar1", "oldattachment2.ics", "att2.2"))
(yield self._addAttachment("home1", "calendar1", "currentattachment3.ics", "att3"))
(yield self._addAttachment("home2", "calendar2", "oldattachment1.ics", "att4"))
(yield self._addAttachment("home2", "calendar2", "currentattachment2.ics", "att5"))
(yield self._addAttachment("home2", "calendar2", "oldattachment3.ics", "att6"))
(yield self._addAttachment("home2", "calendar2", "oldattachment4.ics", "att7"))
(yield self._orphanAttachment("home1", "calendar1", "oldattachment1.ics"))
(yield self._orphanAttachment("home2", "calendar2", "oldattachment1.ics"))
(yield self._orphanAttachment("home2", "calendar2", "currentattachment2.ics"))
(yield self._addManagedAttachment("home1", "calendar1", "oldmattachment1.ics", "matt1"))
(yield self._addManagedAttachment("home1", "calendar1", "oldmattachment2.ics", "matt2.1"))
(yield self._addManagedAttachment("home1", "calendar1", "oldmattachment2.ics", "matt2.2"))
(yield self._addManagedAttachment("home1", "calendar1", "currentmattachment3.ics", "matt3"))
(yield self._addManagedAttachment("home2", "calendar2", "oldmattachment1.ics", "matt4"))
(yield self._addManagedAttachment("home2", "calendar2", "currentmattachment2.ics", "matt5"))
(yield self._addManagedAttachment("home2", "calendar2", "oldmattachment3.ics", "matt6"))
(yield self._addManagedAttachment("home2", "calendar2", "oldmattachment4.ics", "matt7"))
home = (yield self.transactionUnderTest().calendarHomeWithUID("home1"))
quota2 = (yield home.quotaUsedBytes())
(yield self.commit())
self.assertTrue(quota2 > quota1)
# Dry run
total = (yield PurgeAttachmentsService.purgeAttachments(self._sqlCalendarStore, "home1", 14, 2, dryrun=True, verbose=False))
self.assertEquals(total, 6)
home = (yield self.transactionUnderTest().calendarHomeWithUID("home1"))
quota3 = (yield home.quotaUsedBytes())
(yield self.commit())
self.assertTrue(quota3 == quota2)
# Actually remove
total = (yield PurgeAttachmentsService.purgeAttachments(self._sqlCalendarStore, "home1", 14, 2, dryrun=False, verbose=False))
self.assertEquals(total, 6)
home = (yield self.transactionUnderTest().calendarHomeWithUID("home1"))
quota4 = (yield home.quotaUsedBytes())
(yield self.commit())
self.assertTrue(quota4 < quota3)
# There should be no more left
total = (yield PurgeAttachmentsService.purgeAttachments(self._sqlCalendarStore, "home1", 14, 2, dryrun=False, verbose=False))
self.assertEquals(total, 0)
@inlineCallbacks
def test_purgeAttachmentsWithCutoffOldWithoutMatchingUUID(self):
"""
L{PurgeAttachmentsService.purgeAttachments} purges only orphaned attachments, not current ones.
"""
home = (yield self.transactionUnderTest().calendarHomeWithUID("home1"))
quota1 = (yield home.quotaUsedBytes())
(yield self.commit())
self.assertEqual(quota1, 0)
(yield self._addAttachment("home1", "calendar1", "oldattachment1.ics", "att1"))
(yield self._addAttachment("home1", "calendar1", "oldattachment2.ics", "att2.1"))
(yield self._addAttachment("home1", "calendar1", "oldattachment2.ics", "att2.2"))
(yield self._addAttachment("home1", "calendar1", "currentattachment3.ics", "att3"))
(yield self._addAttachment("home2", "calendar2", "oldattachment1.ics", "att4"))
(yield self._addAttachment("home2", "calendar2", "currentattachment2.ics", "att5"))
(yield self._addAttachment("home2", "calendar2", "oldattachment3.ics", "att6"))
(yield self._addAttachment("home2", "calendar2", "oldattachment4.ics", "att7"))
(yield self._orphanAttachment("home1", "calendar1", "oldattachment1.ics"))
(yield self._orphanAttachment("home2", "calendar2", "oldattachment1.ics"))
(yield self._orphanAttachment("home2", "calendar2", "currentattachment2.ics"))
(yield self._addManagedAttachment("home1", "calendar1", "oldmattachment1.ics", "matt1"))
(yield self._addManagedAttachment("home1", "calendar1", "oldmattachment2.ics", "matt2.1"))
(yield self._addManagedAttachment("home1", "calendar1", "oldmattachment2.ics", "matt2.2"))
(yield self._addManagedAttachment("home1", "calendar1", "currentmattachment3.ics", "matt3"))
(yield self._addManagedAttachment("home2", "calendar2", "oldmattachment1.ics", "matt4"))
(yield self._addManagedAttachment("home2", "calendar2", "currentmattachment2.ics", "matt5"))
(yield self._addManagedAttachment("home2", "calendar2", "oldmattachment3.ics", "matt6"))
(yield self._addManagedAttachment("home2", "calendar2", "oldmattachment4.ics", "matt7"))
home = (yield self.transactionUnderTest().calendarHomeWithUID("home1"))
quota2 = (yield home.quotaUsedBytes())
(yield self.commit())
self.assertTrue(quota2 > quota1)
# Dry run
total = (yield PurgeAttachmentsService.purgeAttachments(self._sqlCalendarStore, "home2", 14, 2, dryrun=True, verbose=False))
self.assertEquals(total, 7)
home = (yield self.transactionUnderTest().calendarHomeWithUID("home1"))
quota3 = (yield home.quotaUsedBytes())
(yield self.commit())
self.assertTrue(quota3 == quota2)
# Actually remove
total = (yield PurgeAttachmentsService.purgeAttachments(self._sqlCalendarStore, "home2", 14, 2, dryrun=False, verbose=False))
self.assertEquals(total, 7)
home = (yield self.transactionUnderTest().calendarHomeWithUID("home1"))
quota4 = (yield home.quotaUsedBytes())
(yield self.commit())
self.assertTrue(quota4 == quota3)
# There should be no more left
total = (yield PurgeAttachmentsService.purgeAttachments(self._sqlCalendarStore, "home2", 14, 2, dryrun=False, verbose=False))
self.assertEquals(total, 0)
calendarserver-7.0+dfsg/calendarserver/tools/test/test_resources.py
##
# Copyright (c) 2005-2015 Apple Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
##
from twisted.internet.defer import inlineCallbacks
from calendarserver.tools.resources import migrateResources
from twistedcaldav.test.util import StoreTestCase
from txdav.who.test.support import InMemoryDirectoryService
from twext.who.directory import DirectoryRecord
from txdav.who.idirectory import RecordType as CalRecordType
from txdav.who.directory import CalendarDirectoryRecordMixin
class TestRecord(DirectoryRecord, CalendarDirectoryRecordMixin):
pass
class MigrateResourcesTest(StoreTestCase):
@inlineCallbacks
def setUp(self):
yield super(MigrateResourcesTest, self).setUp()
self.store = self.storeUnderTest()
self.sourceService = InMemoryDirectoryService(None)
fieldName = self.sourceService.fieldName
records = (
TestRecord(
self.sourceService,
{
fieldName.uid: u"location1",
fieldName.shortNames: (u"loc1",),
fieldName.recordType: CalRecordType.location,
}
),
TestRecord(
self.sourceService,
{
fieldName.uid: u"location2",
fieldName.shortNames: (u"loc2",),
fieldName.recordType: CalRecordType.location,
}
),
TestRecord(
self.sourceService,
{
fieldName.uid: u"resource1",
fieldName.shortNames: (u"res1",),
fieldName.recordType: CalRecordType.resource,
}
),
)
yield self.sourceService.updateRecords(records, create=True)
@inlineCallbacks
def test_migrateResources(self):
# Record location1 has not been migrated
record = yield self.directory.recordWithUID(u"location1")
self.assertEquals(record, None)
# Migrate location1, location2, and resource1
yield migrateResources(self.sourceService, self.directory)
record = yield self.directory.recordWithUID(u"location1")
self.assertEquals(record.uid, u"location1")
self.assertEquals(record.shortNames[0], u"loc1")
record = yield self.directory.recordWithUID(u"location2")
self.assertEquals(record.uid, u"location2")
self.assertEquals(record.shortNames[0], u"loc2")
record = yield self.directory.recordWithUID(u"resource1")
self.assertEquals(record.uid, u"resource1")
self.assertEquals(record.shortNames[0], u"res1")
# Add a new location to the sourceService, and modify an existing
# location
fieldName = self.sourceService.fieldName
newRecords = (
TestRecord(
self.sourceService,
{
fieldName.uid: u"location1",
fieldName.shortNames: (u"newloc1",),
fieldName.recordType: CalRecordType.location,
}
),
TestRecord(
self.sourceService,
{
fieldName.uid: u"location3",
fieldName.shortNames: (u"loc3",),
fieldName.recordType: CalRecordType.location,
}
),
)
yield self.sourceService.updateRecords(newRecords, create=True)
yield migrateResources(self.sourceService, self.directory)
# Ensure an existing record does not get migrated again; verified by
# seeing if shortNames changed, which they should not:
record = yield self.directory.recordWithUID(u"location1")
self.assertEquals(record.uid, u"location1")
self.assertEquals(record.shortNames[0], u"loc1")
# Ensure new record does get migrated
record = yield self.directory.recordWithUID(u"location3")
self.assertEquals(record.uid, u"location3")
self.assertEquals(record.shortNames[0], u"loc3")
calendarserver-7.0+dfsg/calendarserver/tools/test/test_util.py
##
# Copyright (c) 2005-2015 Apple Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
##
import os
import tempfile
from twistedcaldav.test.util import TestCase
from twistedcaldav.config import ConfigurationError
from calendarserver.tools.util import loadConfig, checkDirectory
class UtilTestCase(TestCase):
def test_loadConfig(self):
testRoot = os.path.join(os.path.dirname(__file__), "util")
configPath = os.path.join(testRoot, "caldavd.plist")
config = loadConfig(configPath)
self.assertEquals(config.EnableCalDAV, True)
self.assertEquals(config.EnableCardDAV, True)
def test_checkDirectory(self):
tmpDir = tempfile.mkdtemp()
tmpFile = os.path.join(tmpDir, "tmpFile")
self.assertRaises(ConfigurationError, checkDirectory, tmpFile, "Test file")
os.rmdir(tmpDir)
calendarserver-7.0+dfsg/calendarserver/tools/test/util/caldavd.plist
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>EnableCalDAV</key>
    <true/>
    <key>EnableCardDAV</key>
    <true/>
</dict>
</plist>
calendarserver-7.0+dfsg/calendarserver/tools/trash.py
#!/usr/bin/env python
# -*- test-case-name: calendarserver.tools.test.test_trash -*-
##
# Copyright (c) 2015 Apple Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
##
from __future__ import print_function
from argparse import ArgumentParser
import datetime
from calendarserver.tools.cmdline import utilityMain, WorkerService
from calendarserver.tools.util import prettyRecord, displayNameForCollection, agoString, locationString
from pycalendar.datetime import DateTime, Timezone
from twext.python.log import Logger
from twisted.internet.defer import inlineCallbacks, returnValue
log = Logger()
class TrashRestorationService(WorkerService):
operation = None
operationArgs = []
def doWork(self):
return self.operation(self.store, *self.operationArgs)
def main():
parser = ArgumentParser(description='Restore events from trash')
parser.add_argument('-f', '--config', dest='configFileName', metavar='CONFIGFILE', help='caldavd.plist configuration file path')
parser.add_argument('-d', '--debug', action='store_true', help='show debug logging')
parser.add_argument('-p', '--principal', dest='principal', help='the principal to use (uid)')
parser.add_argument('-e', '--events', action='store_true', help='list trashed events')
parser.add_argument('-c', '--collections', action='store_true', help='list trashed collections for principal (uid)')
parser.add_argument('-r', '--recover', dest='resourceID', type=int, help='recover trashed collection or event (by resource ID)')
parser.add_argument('--empty', action='store_true', help='empty the principal\'s trash')
parser.add_argument('--days', type=int, default=0, help='number of past days to retain')
args = parser.parse_args()
if not args.principal:
print("--principal missing")
return
if args.empty:
operation = emptyTrashForPrincipal
operationArgs = [args.principal, args.days]
elif args.collections:
if args.resourceID:
operation = restoreTrashedCollection
operationArgs = [args.principal, args.resourceID]
else:
operation = listTrashedCollectionsForPrincipal
operationArgs = [args.principal]
elif args.events:
if args.resourceID:
operation = restoreTrashedEvent
operationArgs = [args.principal, args.resourceID]
else:
operation = listTrashedEventsForPrincipal
operationArgs = [args.principal]
else:
operation = listTrashedCollectionsForPrincipal
operationArgs = [args.principal]
TrashRestorationService.operation = operation
TrashRestorationService.operationArgs = operationArgs
utilityMain(
args.configFileName,
TrashRestorationService,
verbose=args.debug,
loadTimezones=True
)
@inlineCallbacks
def listTrashedCollectionsForPrincipal(service, store, principalUID):
directory = store.directoryService()
record = yield directory.recordWithUID(principalUID)
if record is None:
print("No record found for:", principalUID)
returnValue(None)
@inlineCallbacks
def doIt(txn):
home = yield txn.calendarHomeWithUID(principalUID)
if home is None:
print("No home for principal")
returnValue(None)
trash = yield home.getTrash()
if trash is None:
print("No trash available")
returnValue(None)
trashedCollections = yield home.children(onlyInTrash=True)
if len(trashedCollections) == 0:
print("No trashed collections for:", prettyRecord(record))
returnValue(None)
print("Trashed collections for:", prettyRecord(record))
nowDT = datetime.datetime.utcnow()
for collection in trashedCollections:
displayName = displayNameForCollection(collection)
whenTrashed = collection.whenTrashed()
ago = nowDT - whenTrashed
print()
print(" Trashed {}:".format(agoString(ago)))
print(
" \"{}\" (collection) Recovery ID = {}".format(
displayName.encode("utf-8"), collection._resourceID
)
)
startTime = whenTrashed - datetime.timedelta(minutes=5)
children = yield trash.trashForCollection(
collection._resourceID, start=startTime
)
print(" ...containing events:")
for child in children:
component = yield child.component()
summary = component.mainComponent().propertyValue("SUMMARY", "")
print(" \"{}\"".format(summary.encode("utf-8")))
yield store.inTransaction(label="List trashed collections", operation=doIt)
def startString(pydt):
return pydt.getLocaleDateTime(DateTime.FULLDATE, False, True, pydt.getTimezoneID())
@inlineCallbacks
def printEventDetails(event):
nowPyDT = DateTime.getNowUTC()
nowDT = datetime.datetime.utcnow()
oneYearInFuture = DateTime.getNowUTC()
oneYearInFuture.offsetDay(365)
component = yield event.component()
mainSummary = component.mainComponent().propertyValue("SUMMARY", u"")
whenTrashed = event.whenTrashed()
ago = nowDT - whenTrashed
print(" Trashed {}:".format(agoString(ago)))
if component.isRecurring():
print(
" \"{}\" (repeating) Recovery ID = {}".format(
mainSummary, event._resourceID
)
)
print(" ...upcoming instances:")
instances = component.cacheExpandedTimeRanges(oneYearInFuture)
instances = sorted(instances.instances.values(), key=lambda x: x.start)
limit = 3
count = 0
for instance in instances:
if instance.start >= nowPyDT:
summary = instance.component.propertyValue("SUMMARY", u"")
location = locationString(instance.component)
tzid = instance.component.getProperty("DTSTART").parameterValue("TZID", None)
dtstart = instance.start
if tzid is not None:
timezone = Timezone(tzid=tzid)
dtstart.adjustTimezone(timezone)
print(" \"{}\" {} {}".format(summary, startString(dtstart), location))
count += 1
limit -= 1
if limit == 0:
break
if not count:
print(" (none)")
else:
print(
" \"{}\" (non-repeating) Recovery ID = {}".format(
mainSummary, event._resourceID
)
)
dtstart = component.mainComponent().propertyValue("DTSTART")
location = locationString(component.mainComponent())
print(" {} {}".format(startString(dtstart), location))
@inlineCallbacks
def listTrashedEventsForPrincipal(service, store, principalUID):
directory = store.directoryService()
record = yield directory.recordWithUID(principalUID)
if record is None:
print("No record found for:", principalUID)
returnValue(None)
@inlineCallbacks
def doIt(txn):
home = yield txn.calendarHomeWithUID(principalUID)
if home is None:
print("No home for principal")
returnValue(None)
trash = yield home.getTrash()
if trash is None:
print("No trash available")
returnValue(None)
untrashedCollections = yield home.children(onlyInTrash=False)
if len(untrashedCollections) == 0:
print("No untrashed collections for:", prettyRecord(record))
returnValue(None)
for collection in untrashedCollections:
displayName = displayNameForCollection(collection)
children = yield trash.trashForCollection(collection._resourceID)
if len(children) == 0:
continue
print("Trashed events in calendar \"{}\":".format(displayName.encode("utf-8")))
for child in children:
print()
yield printEventDetails(child)
print("")
yield store.inTransaction(label="List trashed events", operation=doIt)
@inlineCallbacks
def restoreTrashedCollection(service, store, principalUID, resourceID):
directory = store.directoryService()
record = yield directory.recordWithUID(principalUID)
if record is None:
print("No record found for:", principalUID)
returnValue(None)
@inlineCallbacks
def doIt(txn):
home = yield txn.calendarHomeWithUID(principalUID)
if home is None:
print("No home for principal")
returnValue(None)
collection = yield home.childWithID(resourceID, onlyInTrash=True)
if collection is None:
print("Collection {} is not in the trash".format(resourceID))
returnValue(None)
yield collection.fromTrash(
restoreChildren=True, delta=datetime.timedelta(minutes=5), verbose=True
)
yield store.inTransaction(label="Restore trashed collection", operation=doIt)
@inlineCallbacks
def restoreTrashedEvent(service, store, principalUID, resourceID):
directory = store.directoryService()
record = yield directory.recordWithUID(principalUID)
if record is None:
print("No record found for:", principalUID)
returnValue(None)
@inlineCallbacks
def doIt(txn):
home = yield txn.calendarHomeWithUID(principalUID)
if home is None:
print("No home for principal")
returnValue(None)
trash = yield home.getTrash()
if trash is None:
print("No trash available")
returnValue(None)
child = yield trash.objectResourceWithID(resourceID)
if child is None:
print("Event not found")
returnValue(None)
component = yield child.component()
summary = component.mainComponent().propertyValue("SUMMARY", "")
print("Restoring \"{}\"".format(summary.encode("utf-8")))
yield child.fromTrash()
yield store.inTransaction(label="Restore trashed event", operation=doIt)
@inlineCallbacks
def emptyTrashForPrincipal(service, store, principalUID, days, txn=None, verbose=True):
directory = store.directoryService()
record = yield directory.recordWithUID(principalUID)
if record is None:
if verbose:
print("No record found for:", principalUID)
returnValue(None)
@inlineCallbacks
def doIt(txn):
home = yield txn.calendarHomeWithUID(principalUID)
if home is None:
if verbose:
print("No home for principal")
returnValue(None)
yield home.emptyTrash(days=days, verbose=verbose)
if txn is None:
yield store.inTransaction(label="Empty trash", operation=doIt)
else:
yield doIt(txn)
if __name__ == "__main__":
main()
calendarserver-7.0+dfsg/calendarserver/tools/upgrade.py
#!/usr/bin/env python
# -*- test-case-name: calendarserver.tools.test.test_upgrade -*-
##
# Copyright (c) 2006-2015 Apple Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
##
"""
This tool allows any necessary upgrade to complete, then exits.
"""
from __future__ import print_function
import os
import sys
import time
from twisted.internet.defer import succeed
from twisted.python.text import wordWrap
from twisted.python.usage import Options, UsageError
from twext.python.log import Logger, LogLevel, formatEvent, addObserver
from twistedcaldav.stdconfig import DEFAULT_CONFIG_FILE
from calendarserver.tools.cmdline import utilityMain, WorkerService
from calendarserver.tap.caldav import CalDAVServiceMaker
log = Logger()
def usage(e=None):
if e:
print(e)
print("")
try:
UpgradeOptions().opt_help()
except SystemExit:
pass
if e:
sys.exit(64)
else:
sys.exit(0)
description = '\n'.join(
wordWrap(
"""
Usage: calendarserver_upgrade [options] [input specifiers]\n
""" + __doc__,
int(os.environ.get('COLUMNS', '80'))
)
)
class UpgradeOptions(Options):
"""
Command-line options for 'calendarserver_upgrade'
@ivar upgraders: a list of L{DirectoryUpgrader} objects which can identify the
calendars to upgrade, given a directory service. This list is built by
parsing --record and --collection options.
"""
synopsis = description
optFlags = [
['status', 's', "Check database status and exit."],
['postprocess', 'p', "Perform post-database-import processing."],
['debug', 'D', "Print log messages to STDOUT."],
]
optParameters = [
['config', 'f', DEFAULT_CONFIG_FILE, "Specify caldavd.plist configuration path."],
['prefix', 'x', "", "Only upgrade homes with the specified GUID prefix - partial upgrade only."],
]
def __init__(self):
super(UpgradeOptions, self).__init__()
self.upgradeers = []
self.outputName = '-'
self.merge = False
def opt_output(self, filename):
"""
Specify output file path (default: '-', meaning stdout).
"""
self.outputName = filename
opt_o = opt_output
def opt_merge(self):
"""
Rather than skipping homes that exist on the filesystem but not in the
database, merge their data into the existing homes.
"""
self.merge = True
opt_m = opt_merge
def openOutput(self):
"""
Open the appropriate output file based on the '--output' option.
"""
if self.outputName == '-':
return sys.stdout
else:
return open(self.outputName, 'wb')
class UpgraderService(WorkerService, object):
"""
Service which reports the result of the upgrade performed at startup, then stops the reactor.
"""
started = False
def __init__(self, store, options, output, reactor, config):
super(UpgraderService, self).__init__(store)
self.options = options
self.output = output
self.reactor = reactor
self.config = config
self._directory = None
def doWork(self):
"""
Immediately stop. The upgrade will have been run before this.
"""
# If we get this far the database is OK
if self.options["status"]:
self.output.write("Database OK.\n")
else:
self.output.write("Upgrade complete, shutting down.\n")
UpgraderService.started = True
return succeed(None)
def doWorkWithoutStore(self):
"""
Immediately stop. The upgrade will have been run before this.
"""
if self.options["status"]:
self.output.write("Upgrade needed.\n")
else:
self.output.write("Upgrade failed.\n")
UpgraderService.started = True
return succeed(None)
def postStartService(self):
"""
Quit right away
"""
from twisted.internet import reactor
from twisted.internet.error import ReactorNotRunning
try:
reactor.stop()
except ReactorNotRunning:
# I don't care.
pass
def stopService(self):
"""
Stop the service. Nothing to do; everything should be finished by this
time.
"""
def main(argv=sys.argv, stderr=sys.stderr, reactor=None):
"""
Run the upgrade tool.
"""
from twistedcaldav.config import config
if reactor is None:
from twisted.internet import reactor
options = UpgradeOptions()
try:
options.parseOptions(argv[1:])
except UsageError, e:
usage(e)
try:
output = options.openOutput()
except IOError, e:
stderr.write("Unable to open output file for writing: %s\n" % (e,))
sys.exit(1)
if options.merge:
def setMerge(data):
data.MergeUpgrades = True
config.addPostUpdateHooks([setMerge])
def makeService(store):
return UpgraderService(store, options, output, reactor, config)
def onlyUpgradeEvents(eventDict):
text = formatEvent(eventDict)
output.write(logDateString() + " " + text + "\n")
output.flush()
if not options["status"]:
log.publisher.levels.setLogLevelForNamespace(None, LogLevel.debug)
addObserver(onlyUpgradeEvents)
def customServiceMaker():
customService = CalDAVServiceMaker()
customService.doPostImport = options["postprocess"]
return customService
def _patchConfig(config):
config.FailIfUpgradeNeeded = options["status"]
if options["prefix"]:
config.UpgradeHomePrefix = options["prefix"]
def _onShutdown():
if not UpgraderService.started:
print("Failed to start service.")
utilityMain(options["config"], makeService, reactor, customServiceMaker, patchConfig=_patchConfig, onShutdown=_onShutdown, verbose=options["debug"])
def logDateString():
logtime = time.localtime()
Y, M, D, h, m, s = logtime[:6]
tz = computeTimezoneForLog(time.timezone)
return '%02d-%02d-%02d %02d:%02d:%02d%s' % (Y, M, D, h, m, s, tz)
def computeTimezoneForLog(tz):
if tz > 0:
neg = 1
else:
neg = 0
tz = -tz
h, rem = divmod(tz, 3600)
m, rem = divmod(rem, 60)
if neg:
return '-%02d%02d' % (h, m)
else:
return '+%02d%02d' % (h, m)
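Restated as a standalone sketch (duplicating the helper above so it can be exercised outside the tool): the input is seconds *west* of UTC, as `time.timezone` reports it, so a positive value yields a negative offset string.

```python
def computeTimezoneForLog(tz):
    # tz is seconds west of UTC (the time.timezone convention), so a
    # positive value means the local zone is behind UTC.
    if tz > 0:
        neg = 1
    else:
        neg = 0
        tz = -tz
    h, rem = divmod(tz, 3600)
    m, rem = divmod(rem, 60)
    if neg:
        return '-%02d%02d' % (h, m)
    else:
        return '+%02d%02d' % (h, m)

print(computeTimezoneForLog(28800))   # -0800 (US Pacific, 8 hours west)
print(computeTimezoneForLog(-3600))   # +0100 (1 hour east of UTC)
```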
if __name__ == '__main__':
main()
calendarserver-7.0+dfsg/calendarserver/tools/util.py
##
# Copyright (c) 2008-2015 Apple Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
##
"""
Utility functionality shared between calendarserver tools.
"""
__all__ = [
"loadConfig",
"UsageError",
"booleanArgument",
]
import datetime
import os
from time import sleep
import socket
from pwd import getpwnam
from grp import getgrnam
from uuid import UUID
from calendarserver.tools import diagnose
from twistedcaldav.config import config, ConfigurationError
from twistedcaldav.stdconfig import DEFAULT_CONFIG_FILE
from twext.python.log import Logger
from twisted.internet.defer import inlineCallbacks, returnValue
from txdav.xml import element as davxml
from twistedcaldav import memcachepool
from txdav.base.propertystore.base import PropertyName
from txdav.xml import element
from pycalendar.datetime import DateTime, Timezone
log = Logger()
def loadConfig(configFileName):
"""
Helper method for command-line utilities to load configuration plist
and override certain values.
"""
if configFileName is None:
configFileName = DEFAULT_CONFIG_FILE
if not os.path.isfile(configFileName):
raise ConfigurationError("No config file: %s" % (configFileName,))
config.load(configFileName)
# Command-line utilities always want these enabled:
config.EnableCalDAV = True
config.EnableCardDAV = True
return config
class UsageError (StandardError):
pass
def booleanArgument(arg):
if arg in ("true", "yes", "yup", "uh-huh", "1", "t", "y"):
return True
elif arg in ("false", "no", "nope", "nuh-uh", "0", "f", "n"):
return False
else:
raise ValueError("Not a boolean: %s" % (arg,))
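booleanArgument() maps a fixed set of spellings to True/False and rejects everything else. A standalone re-implementation of the same mapping, for illustration only:

```python
TRUE_WORDS = ("true", "yes", "yup", "uh-huh", "1", "t", "y")
FALSE_WORDS = ("false", "no", "nope", "nuh-uh", "0", "f", "n")

def boolean_argument(arg):
    # Mirrors booleanArgument() above: accept common CLI spellings,
    # raise ValueError for anything unrecognized.
    if arg in TRUE_WORDS:
        return True
    if arg in FALSE_WORDS:
        return False
    raise ValueError("Not a boolean: %s" % (arg,))
```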
def autoDisableMemcached(config):
"""
Set ClientEnabled to False for each pool whose memcached is not running
"""
for pool in config.Memcached.Pools.itervalues():
if pool.ClientEnabled:
try:
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.connect((pool.BindAddress, pool.Port))
s.close()
except socket.error:
pool.ClientEnabled = False
def setupMemcached(config):
#
# Connect to memcached
#
memcachepool.installPools(
config.Memcached.Pools,
config.Memcached.MaxClients
)
autoDisableMemcached(config)
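autoDisableMemcached() above probes each configured pool with a plain TCP connect and flips ClientEnabled off if the connect fails. A self-contained sketch of that probe pattern (the helper name and timeout are illustrative, not part of the server):

```python
import socket

def tcp_port_open(host, port, timeout=1.0):
    # Same liveness check autoDisableMemcached() performs:
    # try to connect; success means something is listening.
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.settimeout(timeout)
    try:
        s.connect((host, port))
        return True
    except socket.error:
        return False
    finally:
        s.close()
```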
def checkDirectory(dirpath, description, access=None, create=None, wait=False):
"""
Make sure dirpath is an existing directory, and optionally ensure it has the
expected permissions. Alternatively the function can create the directory or
can wait for someone else to create it.
@param dirpath: The directory path we're checking
@type dirpath: string
@param description: A description of what the directory path represents, used in
log messages
@type description: string
@param access: The type of access we're expecting, either os.W_OK or os.R_OK
@param create: A tuple of (file permissions mode, username, groupname) to use
when creating the directory. If create=None then no attempt will be made
to create the directory.
@type create: tuple
@param wait: Whether the function should wait in a loop for the directory to be
created by someone else (or mounted, etc.)
@type wait: boolean
"""
# Note: we have to use print here because the logging mechanism has not
# been set up yet.
if not os.path.exists(dirpath) or (diagnose.detectPhantomVolume(dirpath) == diagnose.EXIT_CODE_PHANTOM_DATA_VOLUME):
if wait:
# If we're being told to wait, post an alert that we can't continue
# until the volume is mounted
if not os.path.exists(dirpath) or (diagnose.detectPhantomVolume(dirpath) == diagnose.EXIT_CODE_PHANTOM_DATA_VOLUME):
from calendarserver.tap.util import postAlert
postAlert("MissingDataVolumeAlert", ["volumePath", dirpath])
while not os.path.exists(dirpath) or (diagnose.detectPhantomVolume(dirpath) == diagnose.EXIT_CODE_PHANTOM_DATA_VOLUME):
if not os.path.exists(dirpath):
print("Path does not exist: %s" % (dirpath,))
else:
print("Path is not a real volume: %s" % (dirpath,))
sleep(5)
else:
try:
mode, username, groupname = create
except TypeError:
raise ConfigurationError("%s does not exist: %s"
% (description, dirpath))
try:
os.mkdir(dirpath)
except (OSError, IOError), e:
print("Could not create %s: %s" % (dirpath, e))
raise ConfigurationError(
"%s does not exist and cannot be created: %s"
% (description, dirpath)
)
if username:
uid = getpwnam(username).pw_uid
else:
uid = -1
if groupname:
gid = getgrnam(groupname).gr_gid
else:
gid = -1
try:
os.chmod(dirpath, mode)
os.chown(dirpath, uid, gid)
except (OSError, IOError), e:
print("Unable to change mode/owner of %s: %s" % (dirpath, e))
print("Created directory: %s" % (dirpath,))
if not os.path.isdir(dirpath):
raise ConfigurationError("%s is not a directory: %s" % (description, dirpath))
if access and not os.access(dirpath, access):
raise ConfigurationError(
"Insufficient permissions for server on %s directory: %s"
% (description, dirpath)
)
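checkDirectory()'s create path boils down to mkdir followed by chmod (and chown when a user/group is given). A minimal stdlib sketch of the same idea, with chown omitted and an illustrative helper name:

```python
import os

def ensure_directory(path, mode=0o750):
    # Sketch of checkDirectory()'s create branch: make the directory if
    # missing, then force its permissions regardless of the umask.
    if not os.path.exists(path):
        os.mkdir(path)
        os.chmod(path, mode)
    if not os.path.isdir(path):
        raise ValueError("%s is not a directory" % (path,))
    return path
```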
@inlineCallbacks
def principalForPrincipalID(principalID, checkOnly=False, directory=None):
# Allow a directory parameter to be passed in, but default to config.directory
# But config.directory isn't set right away, so only use it when we're doing more
# than checking.
if not checkOnly and not directory:
directory = config.directory
if principalID.startswith("/"):
segments = principalID.strip("/").split("/")
if (
len(segments) == 3 and
segments[0] == "principals" and segments[1] == "__uids__"
):
uid = segments[2]
else:
raise ValueError("Can't resolve all paths yet")
if checkOnly:
returnValue(None)
returnValue((yield directory.principalCollection.principalForUID(uid)))
if principalID.startswith("("):
try:
i = principalID.index(")")
if checkOnly:
returnValue(None)
recordType = principalID[1:i]
shortName = principalID[i + 1:]
if not recordType or not shortName or "(" in recordType:
raise ValueError()
returnValue((yield directory.principalCollection.principalForShortName(recordType, shortName)))
except ValueError:
pass
if ":" in principalID:
if checkOnly:
returnValue(None)
recordType, shortName = principalID.split(":", 1)
returnValue((yield directory.principalCollection.principalForShortName(recordType, shortName)))
try:
UUID(principalID)
if checkOnly:
returnValue(None)
returnValue((yield directory.principalCollection.principalForUID(principalID)))
except ValueError:
pass
raise ValueError("Invalid principal identifier: %s" % (principalID,))
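principalForPrincipalID() accepts four spellings of a principal identifier: a /principals/__uids__/&lt;uid&gt;/ path, the legacy (recordType)shortName form, recordType:shortName, and a bare GUID. A standalone classifier mirroring that dispatch (the function name and returned tuple shapes are illustrative only):

```python
from uuid import UUID

def classify_principal_id(principal_id):
    # Mirrors the dispatch above: path, "(type)name", "type:name", or GUID.
    if principal_id.startswith("/"):
        segments = principal_id.strip("/").split("/")
        if len(segments) == 3 and segments[0] == "principals" and segments[1] == "__uids__":
            return ("uid", segments[2])
        raise ValueError("Can't resolve all paths yet")
    if principal_id.startswith("(") and ")" in principal_id:
        i = principal_id.index(")")
        record_type, short_name = principal_id[1:i], principal_id[i + 1:]
        if record_type and short_name and "(" not in record_type:
            return ("shortName", record_type, short_name)
    if ":" in principal_id:
        record_type, short_name = principal_id.split(":", 1)
        return ("shortName", record_type, short_name)
    UUID(principal_id)  # raises ValueError if not a GUID
    return ("uid", principal_id)
```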
@inlineCallbacks
def recordForPrincipalID(directory, principalID, checkOnly=False):
if principalID.startswith("/"):
segments = principalID.strip("/").split("/")
if (
len(segments) == 3 and
segments[0] == "principals" and segments[1] == "__uids__"
):
uid = segments[2]
else:
raise ValueError("Can't resolve all paths yet")
if checkOnly:
returnValue(None)
returnValue((yield directory.recordWithUID(uid)))
if principalID.startswith("("):
try:
i = principalID.index(")")
if checkOnly:
returnValue(None)
recordType = directory.oldNameToRecordType(principalID[1:i])
shortName = principalID[i + 1:]
if not recordType or not shortName or "(" in recordType:
raise ValueError()
returnValue((yield directory.recordWithShortName(recordType, shortName)))
except ValueError:
pass
if ":" in principalID:
if checkOnly:
returnValue(None)
recordType, shortName = principalID.split(":", 1)
recordType = directory.oldNameToRecordType(recordType)
if recordType is None:
returnValue(None)
returnValue((yield directory.recordWithShortName(recordType, shortName)))
try:
if checkOnly:
returnValue(None)
returnValue((yield directory.recordWithUID(principalID)))
except ValueError:
pass
raise ValueError("Invalid principal identifier: %s" % (principalID,))
def proxySubprincipal(principal, proxyType):
return principal.getChild("calendar-proxy-" + proxyType)
@inlineCallbacks
def action_addProxyPrincipal(rootResource, directory, store, principal, proxyType, proxyPrincipal):
try:
(yield addProxy(rootResource, directory, store, principal, proxyType, proxyPrincipal))
print("Added %s as a %s proxy for %s" % (
prettyPrincipal(proxyPrincipal), proxyType,
prettyPrincipal(principal)))
except ProxyError, e:
print("Error:", e)
except ProxyWarning, e:
print(e)
@inlineCallbacks
def action_removeProxyPrincipal(rootResource, directory, store, principal, proxyPrincipal, **kwargs):
try:
removed = (yield removeProxy(
rootResource, directory, store,
principal, proxyPrincipal, **kwargs
))
if removed:
print("Removed %s as a proxy for %s" % (
prettyPrincipal(proxyPrincipal),
prettyPrincipal(principal)))
except ProxyError, e:
print("Error:", e)
except ProxyWarning, e:
print(e)
@inlineCallbacks
def addProxy(rootResource, directory, store, principal, proxyType, proxyPrincipal):
proxyURL = proxyPrincipal.url()
subPrincipal = proxySubprincipal(principal, proxyType)
if subPrincipal is None:
raise ProxyError(
"Unable to edit %s proxies for %s\n" % (
proxyType,
prettyPrincipal(principal)
)
)
membersProperty = (yield subPrincipal.readProperty(davxml.GroupMemberSet, None))
for memberURL in membersProperty.children:
if str(memberURL) == proxyURL:
raise ProxyWarning("%s is already a %s proxy for %s" % (
prettyPrincipal(proxyPrincipal), proxyType,
prettyPrincipal(principal)))
else:
memberURLs = list(membersProperty.children)
memberURLs.append(davxml.HRef(proxyURL))
membersProperty = davxml.GroupMemberSet(*memberURLs)
(yield subPrincipal.writeProperty(membersProperty, None))
proxyTypes = ["read", "write"]
proxyTypes.remove(proxyType)
yield action_removeProxyPrincipal(
rootResource, directory, store,
principal, proxyPrincipal, proxyTypes=proxyTypes
)
# Schedule work the PeerConnectionPool will pick up as overdue
def groupPollNow(txn):
from txdav.who.groups import GroupCacherPollingWork
return GroupCacherPollingWork.reschedule(txn, 0, force=True)
yield store.inTransaction("addProxy groupPollNow", groupPollNow)
@inlineCallbacks
def removeProxy(rootResource, directory, store, principal, proxyPrincipal, **kwargs):
removed = False
proxyTypes = kwargs.get("proxyTypes", ("read", "write"))
for proxyType in proxyTypes:
proxyURL = proxyPrincipal.url()
subPrincipal = proxySubprincipal(principal, proxyType)
if subPrincipal is None:
raise ProxyError(
"Unable to edit %s proxies for %s\n" % (
proxyType,
prettyPrincipal(principal)
)
)
membersProperty = (yield subPrincipal.readProperty(davxml.GroupMemberSet, None))
memberURLs = [
m for m in membersProperty.children
if str(m) != proxyURL
]
if len(memberURLs) == len(membersProperty.children):
# No change
continue
else:
removed = True
membersProperty = davxml.GroupMemberSet(*memberURLs)
(yield subPrincipal.writeProperty(membersProperty, None))
if removed:
# Schedule work the PeerConnectionPool will pick up as overdue
def groupPollNow(txn):
from txdav.who.groups import GroupCacherPollingWork
return GroupCacherPollingWork.reschedule(txn, 0, force=True)
yield store.inTransaction("removeProxy groupPollNow", groupPollNow)
returnValue(removed)
def prettyPrincipal(principal):
    return prettyRecord(principal.record)
def prettyRecord(record):
try:
shortNames = record.shortNames
except AttributeError:
shortNames = []
return "\"{d}\" {uid} ({rt}) {sn}".format(
d=record.displayName.encode("utf-8"),
rt=record.recordType.name,
uid=record.uid,
sn=(", ".join([sn.encode("utf-8") for sn in shortNames]))
)
def displayNameForCollection(collection):
try:
displayName = collection.properties()[
PropertyName.fromElement(element.DisplayName)
]
displayName = displayName.toString()
except:
displayName = collection.name()
return displayName
def agoString(delta):
if delta.days:
agoString = "{} days ago".format(delta.days)
elif delta.seconds:
if delta.seconds < 60:
agoString = "{} second{} ago".format(delta.seconds, "s" if delta.seconds > 1 else "")
else:
minutesAgo = delta.seconds / 60
if minutesAgo < 60:
agoString = "{} minute{} ago".format(minutesAgo, "s" if minutesAgo > 1 else "")
else:
hoursAgo = minutesAgo / 60
agoString = "{} hour{} ago".format(hoursAgo, "s" if hoursAgo > 1 else "")
return agoString
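As written, agoString() would raise UnboundLocalError for a delta under one second, since neither branch assigns the result. A standalone version of the same pluralization logic with an explicit fallback (the "just now" wording is an assumption, not upstream behavior; integer division is used so it behaves the same on Python 2 and 3):

```python
from datetime import timedelta

def ago_string(delta):
    # Same day/second/minute/hour cascade as agoString() above,
    # plus a fallback for sub-second deltas.
    if delta.days:
        return "{0} days ago".format(delta.days)
    if delta.seconds:
        if delta.seconds < 60:
            return "{0} second{1} ago".format(delta.seconds, "s" if delta.seconds > 1 else "")
        minutes = delta.seconds // 60
        if minutes < 60:
            return "{0} minute{1} ago".format(minutes, "s" if minutes > 1 else "")
        hours = minutes // 60
        return "{0} hour{1} ago".format(hours, "s" if hours > 1 else "")
    return "just now"
```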
def locationString(component):
locationProps = component.properties("LOCATION")
if locationProps is not None:
locations = []
for locationProp in locationProps:
locations.append(locationProp.value())
locationString = ", ".join(locations)
else:
locationString = ""
return locationString
@inlineCallbacks
def getEventDetails(event):
detail = {}
nowPyDT = DateTime.getNowUTC()
nowDT = datetime.datetime.utcnow()
oneYearInFuture = DateTime.getNowUTC()
oneYearInFuture.offsetDay(365)
component = yield event.component()
mainSummary = component.mainComponent().propertyValue("SUMMARY", u"")
whenTrashed = event.whenTrashed()
ago = nowDT - whenTrashed
detail["summary"] = mainSummary
detail["whenTrashed"] = agoString(ago)
detail["recoveryID"] = event._resourceID
if component.isRecurring():
detail["recurring"] = True
detail["instances"] = []
instances = component.cacheExpandedTimeRanges(oneYearInFuture)
instances = sorted(instances.instances.values(), key=lambda x: x.start)
limit = 3
count = 0
for instance in instances:
if instance.start >= nowPyDT:
summary = instance.component.propertyValue("SUMMARY", u"")
location = locationString(instance.component)
tzid = instance.component.getProperty("DTSTART").parameterValue("TZID", None)
dtstart = instance.start
if tzid is not None:
timezone = Timezone(tzid=tzid)
dtstart.adjustTimezone(timezone)
detail["instances"].append({
"summary": summary,
"starttime": dtstart.getLocaleDateTime(DateTime.FULLDATE, False, True, dtstart.getTimezoneID()),
"location": location
})
count += 1
limit -= 1
if limit == 0:
break
else:
detail["recurring"] = False
dtstart = component.mainComponent().propertyValue("DTSTART")
detail["starttime"] = dtstart.getLocaleDateTime(DateTime.FULLDATE, False, True, dtstart.getTimezoneID())
detail["location"] = locationString(component.mainComponent())
returnValue(detail)
class ProxyError(Exception):
"""
Raised when proxy assignments cannot be performed
"""
class ProxyWarning(Exception):
"""
Raised for harmless proxy assignment failures such as trying to add a
duplicate or remove a non-existent assignment.
"""
calendarserver-7.0+dfsg/calendarserver/tools/validcalendardata.py 0000775 0000000 0000000 00000014524 12607514243 0025451 0 ustar 00root root 0000000 0000000 #!/usr/bin/env python
##
# Copyright (c) 2012-2015 Apple Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
##
from __future__ import print_function
"""
This tool takes data from stdin and validates it as iCalendar data suitable
for the server.
"""
from calendarserver.tools.cmdline import utilityMain, WorkerService
from twisted.internet.defer import succeed
from twisted.python.text import wordWrap
from twisted.python.usage import Options
from twistedcaldav.config import config
from twistedcaldav.ical import Component, InvalidICalendarDataError
from twistedcaldav.stdconfig import DEFAULT_CONFIG_FILE
import os
import sys
def usage(e=None):
if e:
print(e)
print("")
try:
ValidOptions().opt_help()
except SystemExit:
pass
if e:
sys.exit(64)
else:
sys.exit(0)
description = '\n'.join(
wordWrap(
"""
Usage: validcalendardata [options] [input specifiers]\n
""",
int(os.environ.get('COLUMNS', '80'))
)
)
class ValidOptions(Options):
"""
Command-line options for 'validcalendardata'
"""
synopsis = description
optFlags = [
['verbose', 'v', "Verbose logging."],
['debug', 'D', "Debug logging."],
['parse-only', 'p', "Only validate parsing of the data."],
]
optParameters = [
['config', 'f', DEFAULT_CONFIG_FILE, "Specify caldavd.plist configuration path."],
]
def __init__(self):
super(ValidOptions, self).__init__()
self.outputName = '-'
self.inputName = '-'
def opt_output(self, filename):
"""
Specify output file path (default: '-', meaning stdout).
"""
self.outputName = filename
opt_o = opt_output
def openOutput(self):
"""
Open the appropriate output file based on the '--output' option.
"""
if self.outputName == '-':
return sys.stdout
else:
return open(self.outputName, "wb")
def opt_input(self, filename):
"""
Specify input file path (default: '-', meaning stdin).
"""
self.inputName = filename
opt_i = opt_input
def openInput(self):
"""
Open the appropriate input file based on the '--input' option.
"""
if self.inputName == '-':
return sys.stdin
else:
return open(os.path.expanduser(self.inputName), "rb")
errorPrefix = "Calendar data had unfixable problems:\n "
class ValidService(WorkerService, object):
"""
Service which runs, validates the calendar data from the input, then stops the reactor.
"""
def __init__(self, store, options, output, input, reactor, config):
super(ValidService, self).__init__(store)
self.options = options
self.output = output
self.input = input
self.reactor = reactor
self.config = config
self._directory = None
def doWork(self):
"""
Start the service.
"""
super(ValidService, self).startService()
if self.options["parse-only"]:
result, message = self.parseCalendarData()
else:
result, message = self.validCalendarData()
if result:
print("Calendar data OK")
else:
print(message)
return succeed(None)
def parseCalendarData(self):
"""
Parse the calendar data and report fixable and unfixable problems.
"""
result = True
message = ""
try:
component = Component.fromString(self.input.read())
# Do underlying iCalendar library validation with data fix
fixed, unfixed = component._pycalendar.validate(doFix=True)
if unfixed:
raise InvalidICalendarDataError("Calendar data had unfixable problems:\n %s" % ("\n ".join(unfixed),))
if fixed:
print("Calendar data had fixable problems:\n %s" % ("\n ".join(fixed),))
except ValueError, e:
result = False
message = str(e)
if message.startswith(errorPrefix):
message = message[len(errorPrefix):]
return (result, message,)
def validCalendarData(self):
"""
Check the calendar data for valid iCalendar data.
"""
result = True
message = ""
truncated = False
try:
component = Component.fromString(self.input.read())
if getattr(self.config, "MaxInstancesForRRULE", 0) != 0:
truncated = component.truncateRecurrence(config.MaxInstancesForRRULE)
component.validCalendarData(doFix=False, validateRecurrences=True)
component.validCalendarForCalDAV(methodAllowed=True)
component.validOrganizerForScheduling(doFix=False)
except ValueError, e:
result = False
message = str(e)
if message.startswith(errorPrefix):
message = message[len(errorPrefix):]
if truncated:
message = "Calendar data RRULE truncated\n" + message
return (result, message,)
def main(argv=sys.argv, stderr=sys.stderr, reactor=None):
"""
Do the export.
"""
if reactor is None:
from twisted.internet import reactor
options = ValidOptions()
options.parseOptions(argv[1:])
try:
output = options.openOutput()
except IOError, e:
stderr.write("Unable to open output file for writing: %s\n" % (e,))
sys.exit(1)
try:
input = options.openInput()
except IOError, e:
stderr.write("Unable to open input file for reading: %s\n" % (e,))
sys.exit(1)
def makeService(store):
return ValidService(store, options, output, input, reactor, config)
utilityMain(options["config"], makeService, reactor, verbose=options["debug"])
if __name__ == "__main__":
main()
calendarserver-7.0+dfsg/calendarserver/webadmin/ 0000775 0000000 0000000 00000000000 12607514243 0022071 5 ustar 00root root 0000000 0000000 calendarserver-7.0+dfsg/calendarserver/webadmin/__init__.py 0000664 0000000 0000000 00000001205 12607514243 0024200 0 ustar 00root root 0000000 0000000 ##
# Copyright (c) 2009-2015 Apple Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
##
"""
Calendar Server Web Admin UI.
"""
calendarserver-7.0+dfsg/calendarserver/webadmin/config.py 0000664 0000000 0000000 00000010126 12607514243 0023710 0 ustar 00root root 0000000 0000000 # -*- test-case-name: calendarserver.webadmin.test.test_landing -*-
##
# Copyright (c) 2009-2015 Apple Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
##
"""
Calendar Server Configuration UI.
"""
__all__ = [
"ConfigurationResource",
]
from twisted.web.template import renderer, tags as html
from twext.python.log import Logger
from .resource import PageElement, TemplateResource
class ConfigurationPageElement(PageElement):
"""
Configuration management page element.
"""
def __init__(self, configuration):
super(ConfigurationPageElement, self).__init__(u"config")
self.configuration = configuration
def pageSlots(self):
slots = {
u"title": u"Server Configuration",
}
for key in (
"ServerHostName",
"HTTPPort",
"SSLPort",
"BindAddresses",
"BindHTTPPorts",
"BindSSLPorts",
"EnableSSL",
"RedirectHTTPToHTTPS",
"SSLCertificate",
"SSLPrivateKey",
"SSLAuthorityChain",
"SSLMethod",
"SSLCiphers",
"EnableCalDAV",
"EnableCardDAV",
"ServerRoot",
"UserQuota",
"MaxCollectionsPerHome",
"MaxResourcesPerCollection",
"MaxResourceSize",
"UserName",
"GroupName",
"ProcessType",
"MultiProcess.MinProcessCount",
"MultiProcess.ProcessCount",
"MaxRequests",
):
value = self.configuration
for subkey in key.split("."):
value = getattr(value, subkey)
def describe(value):
if value == u"":
return u"(empty string)"
if value is None:
return u"(no value)"
return html.code(unicode(value))
if isinstance(value, list):
result = []
for item in value:
result.append(describe(item))
result.append(html.br())
result.pop()
else:
result = describe(value)
slots[key] = result
return slots
@renderer
def settings_header(self, request, tag):
return tag(
html.tr(
html.th(u"Option"),
html.th(u"Value"),
html.th(u"Note"),
),
)
@renderer
def log_level_row(self, request, tag):
def rowsForNamespaces(namespaces):
for namespace in namespaces:
if namespace is None:
name = u"(default)"
else:
name = html.code(namespace)
value = html.code(
Logger.publisher.levels.logLevelForNamespace(
namespace
).name
)
yield tag.clone().fillSlots(
log_level_name=name,
log_level_value=value,
)
# FIXME: Using private attributes is bad.
namespaces = Logger.publisher.levels._logLevelsByNamespace.keys()
return rowsForNamespaces(namespaces)
class ConfigurationResource(TemplateResource):
"""
Configuration management page resource.
"""
addSlash = False
def __init__(self, configuration, principalCollections):
super(ConfigurationResource, self).__init__(
lambda: ConfigurationPageElement(configuration), principalCollections, isdir=False
)
calendarserver-7.0+dfsg/calendarserver/webadmin/config.xhtml 0000664 0000000 0000000 00000017537 12607514243 0024431 0 ustar 00root root 0000000 0000000
Public Network Address Information
These settings determine how one accesses the server on the Internet.
This may or may not be the same address that the server listens to itself.
This may, for example, be the address for a load balancer which forwards requests to the server.
Host Name
HTTP Port
For most servers, this should be 80.
HTTPS (TLS) Port
For most servers, this should be 443.
Bound Network Address Information
These settings determine the actual network address that the server binds to.
By default, this is the same as the public network address information provided above.
IP Addresses
HTTP Ports
TLS Ports
Transport Layer Security (TLS) Settings
These settings control whether the server accepts TLS connections and which certificate, private key, and cipher configuration it uses.
Enable HTTPS (TLS)
It is strongly advised that TLS is enabled on production servers.
Redirect To HTTPS
Redirect clients to the secure port (recommended).
TLS Certificate
TLS Private Key
TLS Authority Chain
TLS Methods
TLS Ciphers
Service Settings
Enable CalDAV
Enable CardDAV
Data Store
Server Root
Data Source Name
Database connection string.
Attachment Quota (Bytes)
Max # Collections Per Account
Collections include calendars and address books.
Max # Objects Per Collection
Objects include events, to-dos and contacts.
Max Object Size
Process Management
User Name
Group Name
Process Type
Min Process Count
Max Process Count
Max Requests Per Process
Log Levels
Namespace
Log Level
calendarserver-7.0+dfsg/calendarserver/webadmin/eventsource.py 0000664 0000000 0000000 00000017211 12607514243 0025007 0 ustar 00root root 0000000 0000000 # -*- test-case-name: calendarserver.webadmin.test.test_principals -*-
##
# Copyright (c) 2014-2015 Apple Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
##
from __future__ import print_function
"""
Calendar Server principal management web UI.
"""
__all__ = [
"textAsEvent",
"EventSourceResource",
]
from collections import deque
from zope.interface import implementer, Interface
from twistedcaldav.simpleresource import SimpleResource
from twisted.internet.defer import Deferred, succeed
from txweb2.stream import IByteStream, fallbackSplit
from txweb2.http_headers import MimeType
from txweb2.http import Response
def textAsEvent(text, eventID=None, eventClass=None, eventRetry=None):
"""
Format some text as an HTML5 EventSource event. Since the EventSource data
format is text-oriented, this function expects L{unicode}, not L{bytes};
binary data should be encoded as text if it is to be used in an EventSource
stream.
UTF-8 encoded L{bytes} are returned because
http://www.w3.org/TR/eventsource/ states that the only allowed encoding
for C{text/event-stream} is UTF-8.
@param text: The text (ie. the message) to send in the event.
@type text: L{unicode}
@param eventID: A unique identifier for the event.
@type eventID: L{unicode}
@param eventClass: A class name (ie. a categorization) for the event.
@type eventClass: L{unicode}
@param eventRetry: The retry interval (in milliseconds) for the client to
wait before reconnecting if it gets disconnected.
@type eventRetry: L{int}
@return: An HTML5 EventSource event as text.
@rtype: UTF-8 encoded L{bytes}
"""
if text is None:
raise TypeError("text may not be None")
event = []
if eventID is not None:
event.append(u"id: {0}".format(eventID))
if eventClass is not None:
event.append(u"event: {0}".format(eventClass))
if eventRetry is not None:
event.append(u"retry: {0:d}".format(eventRetry))
event.extend(
u"data: {0}".format(l) for l in text.split("\n")
)
return (u"\n".join(event) + u"\n\n").encode("utf-8")
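textAsEvent() emits the text/event-stream wire format: optional id:/event:/retry: fields followed by one data: line per line of text, terminated by a blank line. A trimmed re-implementation showing just that framing (retry omitted; the function name is illustrative):

```python
def text_as_event(text, event_id=None, event_class=None):
    # Minimal version of textAsEvent() above: build one UTF-8 encoded
    # text/event-stream frame.
    lines = []
    if event_id is not None:
        lines.append(u"id: {0}".format(event_id))
    if event_class is not None:
        lines.append(u"event: {0}".format(event_class))
    # One "data:" line per line of message text.
    lines.extend(u"data: {0}".format(l) for l in text.split(u"\n"))
    return (u"\n".join(lines) + u"\n\n").encode("utf-8")
```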
class IEventDecoder(Interface):
"""
An object that can be used to extract data from an application-specific
event object for encoding into an EventSource data stream.
"""
def idForEvent(event):
"""
@return: A unique identifier for the given event.
@rtype: L{unicode}
"""
def classForEvent(event):
"""
@return: A class name (ie. a categorization) for the event.
@rtype: L{unicode}
"""
def textForEvent(event):
"""
@return: The text (ie. the message) to send in the event.
@rtype: L{unicode}
"""
def retryForEvent(event):
"""
@return: The retry interval (in milliseconds) for the client to wait
before reconnecting if it gets disconnected.
@rtype: L{int}
"""
class EventSourceResource(SimpleResource):
"""
Resource that vends HTML5 EventSource events.
Events are stored in a ring buffer and streamed to clients.
"""
addSlash = False
def __init__(self, eventDecoder, principalCollections, bufferSize=400):
"""
@param eventDecoder: An object that can be used to extract data from
an event for encoding into an EventSource data stream.
@type eventDecoder: L{IEventDecoder}
@param bufferSize: The maximum number of events to keep in the ring
buffer.
@type bufferSize: L{int}
"""
super(EventSourceResource, self).__init__(principalCollections, isdir=False)
self._eventDecoder = eventDecoder
self._events = deque(maxlen=bufferSize)
self._streams = set()
def addEvents(self, events):
self._events.extend(events)
# Notify outbound streams that there is new data to vend
for stream in self._streams:
stream.didAddEvents()
def render(self, request):
lastID = request.headers.getRawHeaders(u"last-event-id")
response = Response()
response.stream = EventStream(self._eventDecoder, self._events, lastID)
response.headers.setHeader(
b"content-type", MimeType.fromString(b"text/event-stream")
)
# Keep track of the event streams
def cleanupFilter(_request, _response):
self._streams.remove(response.stream)
return _response
request.addResponseFilter(cleanupFilter)
self._streams.add(response.stream)
return response
@implementer(IByteStream)
class EventStream(object):
"""
L{IByteStream} that streams out HTML5 EventSource events.
"""
length = None
def __init__(self, eventDecoder, events, lastID):
"""
@param eventDecoder: An object that can be used to extract data from
an event for encoding into an EventSource data stream.
@type eventDecoder: L{IEventDecoder}
@param events: Application-specific event objects.
@type events: sequence of L{object}
@param lastID: The identifier for the last event that was vended from
C{events}. Vending will resume starting from the following event.
@type lastID: L{int}
"""
super(EventStream, self).__init__()
self._eventDecoder = eventDecoder
self._events = events
self._lastID = lastID
self._closed = False
self._deferredRead = None
def didAddEvents(self):
d = self._deferredRead
if d is not None:
d.addCallback(lambda _: self.read())
d.callback(None)
def read(self):
if self._closed:
return succeed(None)
lastID = self._lastID
eventID = None
idForEvent = self._eventDecoder.idForEvent
classForEvent = self._eventDecoder.classForEvent
textForEvent = self._eventDecoder.textForEvent
retryForEvent = self._eventDecoder.retryForEvent
for event in self._events:
eventID = idForEvent(event)
# If lastID is not None, skip messages up to and including the one
# referenced by lastID.
if lastID is not None:
if eventID == lastID:
eventID = None
lastID = None
continue
eventClass = classForEvent(event)
eventText = textForEvent(event)
eventRetry = retryForEvent(event)
self._lastID = eventID
return succeed(
textAsEvent(eventText, eventID, eventClass, eventRetry)
)
if eventID is not None:
# We just scanned all the messages, and none are the last one the
# client saw.
self._lastID = None
return succeed(b"")
# # This causes the client to poll, which is undesirable, but the
# # deferred below doesn't seem to work in real use...
# return succeed(None)
d = Deferred()
self._deferredRead = d
return d
def split(self, point):
return fallbackSplit(self, point)
def close(self):
self._closed = True
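EventStream.read() resumes a reconnecting client from its Last-Event-ID: events up to and including that ID are skipped, and if the ID is no longer in the ring buffer the whole buffer is replayed from the start. A standalone sketch of that resume rule (names are illustrative, not part of the module):

```python
def events_after(events, last_id, id_for_event):
    # Mirrors EventStream.read()'s resume logic: skip everything up to and
    # including last_id; if last_id was never seen, replay everything.
    ids = [id_for_event(e) for e in events]
    if last_id is not None and last_id in ids:
        seen = False
        out = []
        for event in events:
            if seen:
                out.append(event)
            elif id_for_event(event) == last_id:
                seen = True
        return out
    return list(events)
```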
calendarserver-7.0+dfsg/calendarserver/webadmin/landing.py 0000664 0000000 0000000 00000006475 12607514243 0024073 0 ustar 00root root 0000000 0000000 # -*- test-case-name: calendarserver.webadmin.test.test_landing -*-
##
# Copyright (c) 2009-2015 Apple Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
##
"""
Calendar Server Web Admin UI.
"""
__all__ = [
"WebAdminLandingResource",
]
# from twisted.web.template import renderer
from .resource import PageElement, TemplateResource
class WebAdminLandingPageElement(PageElement):
"""
Web administration landing page element.
"""
def __init__(self):
super(WebAdminLandingPageElement, self).__init__(u"landing")
def pageSlots(self):
return {
u"title": u"Server Administration",
}
class WebAdminLandingResource(TemplateResource):
"""
Web administration landing page resource.
"""
addSlash = True
def __init__(self, path, root, directory, store, principalCollections=()):
super(WebAdminLandingResource, self).__init__(WebAdminLandingPageElement, principalCollections, True)
from twistedcaldav.config import config as configuration
self.configuration = configuration
self.directory = directory
self.store = store
# self._path = path
# self._root = root
# self._principalCollections = principalCollections
# from .config import ConfigurationResource
# self.putChild(u"config", ConfigurationResource(configuration, principalCollections))
# from .principals import PrincipalsResource
# self.putChild(u"principals", PrincipalsResource(directory, store, principalCollections))
# from .logs import LogsResource
# self.putChild(u"logs", LogsResource())
# from .work import WorkMonitorResource
# self.putChild(u"work", WorkMonitorResource(store))
def getChild(self, name):
bound = super(WebAdminLandingResource, self).getChild(name)
if bound is not None:
return bound
#
# Dynamically load and vend child resources not bound using putChild()
# in __init__(). This is useful for development, since it allows one
# to comment out the putChild() call above, and then code will be
# re-loaded for each request.
#
from . import config, principals, logs, work
if name == u"config":
reload(config)
return config.ConfigurationResource(self.configuration, self._principalCollections)
elif name == u"principals":
reload(principals)
return principals.PrincipalsResource(self.directory, self.store, self._principalCollections)
elif name == u"logs":
reload(logs)
return logs.LogsResource(self._principalCollections)
elif name == u"work":
reload(work)
return work.WorkMonitorResource(self.store, self._principalCollections)
return None
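The reload-per-request technique in `getChild()` above can be sketched in isolation: skip `putChild()` and re-import the module on every lookup, so code edits take effect without a server restart. This is a minimal stand-alone sketch (the function name `fresh_resource` is illustrative, not part of the server's API); it uses `importlib.reload` where Python 2 would use the `reload()` builtin.

```python
import importlib

def fresh_resource(module_name, factory_name, *args):
    """Re-import a module and build an object from its factory on every call.

    Sketch of the develop-time pattern used by getChild() above: reloading
    per request means edits to the module are picked up immediately.
    """
    module = importlib.import_module(module_name)
    try:
        module = importlib.reload(module)  # Python 3
    except AttributeError:
        module = reload(module)  # Python 2 builtin
    return getattr(module, factory_name)(*args)
```

The trade-off is exactly the one the comment in `getChild()` notes: convenient during development, but reloading on every request is wasteful in production, which is why the `putChild()` calls exist (commented out) for the static alternative.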
calendarserver-7.0+dfsg/calendarserver/webadmin/landing.xhtml 0000664 0000000 0000000 00000001013 12607514243 0024556 0 ustar 00root root 0000000 0000000
calendarserver-7.0+dfsg/calendarserver/webadmin/logs.py 0000664 0000000 0000000 00000010506 12607514243 0023411 0 ustar 00root root 0000000 0000000 # -*- test-case-name: calendarserver.webadmin.test.test_logs -*-
##
# Copyright (c) 2014-2015 Apple Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
##
from __future__ import print_function
"""
Calendar Server log viewing web UI.
"""
__all__ = [
"LogsResource",
"LogEventsResource",
]
from zope.interface import implementer
from twisted.python.log import FileLogObserver
from calendarserver.accesslog import CommonAccessLoggingObserverExtensions
from .eventsource import EventSourceResource, IEventDecoder
from .resource import PageElement, TemplateResource
class LogsPageElement(PageElement):
"""
Logs page element.
"""
def __init__(self):
super(LogsPageElement, self).__init__(u"logs")
def pageSlots(self):
return {
u"title": u"Server Logs",
}
class LogsResource(TemplateResource):
"""
Logs page resource.
"""
addSlash = True
def __init__(self, principalCollections):
super(LogsResource, self).__init__(LogsPageElement, principalCollections, isdir=False)
self.putChild(u"events", LogEventsResource(principalCollections))
class LogEventsResource(EventSourceResource):
"""
Log event vending resource.
"""
def __init__(self, principalCollections):
super(LogEventsResource, self).__init__(EventDecoder, principalCollections)
self.observers = (
AccessLogObserver(self),
ServerLogObserver(self),
)
for observer in self.observers:
observer.start()
class ServerLogObserver(FileLogObserver):
"""
Log observer that sends events to an L{EventSourceResource} instead of
writing to a file.
@note: L{ServerLogObserver} is an old-style log observer, as it inherits
from L{FileLogObserver}.
"""
timeFormat = None
def __init__(self, resource):
class FooIO(object):
def write(_, s):
self._lastMessage = s
def flush(_):
pass
FileLogObserver.__init__(self, FooIO())
self._lastMessage = None
self._resource = resource
def emit(self, event):
self._resource.addEvents(((self, u"server", event),))
def formatEvent(self, event):
self._lastMessage = None
FileLogObserver.emit(self, event)
return self._lastMessage
class AccessLogObserver(CommonAccessLoggingObserverExtensions):
"""
Log observer that captures apache-style access log text entries in a
buffer.
@note: L{AccessLogObserver} is an old-style log observer, as it
ultimately inherits from L{txweb2.log.BaseCommonAccessLoggingObserver}.
"""
def __init__(self, resource):
super(AccessLogObserver, self).__init__()
self._resource = resource
def logStats(self, event):
# Only look at access log events
if event[u"type"] != u"access-log":
return
self._resource.addEvents(((self, u"access", event),))
@implementer(IEventDecoder)
class EventDecoder(object):
"""
Decodes logging events.
"""
@staticmethod
def idForEvent(event):
_ignore_observer, _ignore_eventClass, logEvent = event
return id(logEvent)
@staticmethod
def classForEvent(event):
_ignore_observer, eventClass, _ignore_logEvent = event
return eventClass
@staticmethod
def textForEvent(event):
observer, eventClass, logEvent = event
try:
if eventClass == u"access":
text = logEvent[u"log-format"] % logEvent
else:
text = observer.formatEvent(logEvent)
if text is None:
text = u""
except Exception:
text = u"*** Error while formatting event ***"
return text
@staticmethod
def retryForEvent(event):
return None
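The access-log branch of `textForEvent()` above interpolates the event's own `log-format` template against the event dictionary itself (`logEvent[u"log-format"] % logEvent`). A minimal illustration of that pattern, where the field names `host` and `status` are hypothetical, real events carry whatever fields the access logger recorded:

```python
def access_log_text(event):
    # Mirrors the access-log branch of textForEvent(): the event dict
    # doubles as the mapping for its own %-style format template.
    return event[u"log-format"] % event

# Hypothetical event; real access-log events have more fields.
sample_event = {
    u"type": u"access-log",
    u"log-format": u"%(host)s - - %(status)d",
    u"host": u"10.0.0.1",
    u"status": 200,
}
```

Calling `access_log_text(sample_event)` renders the template with the event's own values, so the observer never needs to know which fields a given log format uses.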
calendarserver-7.0+dfsg/calendarserver/webadmin/logs.xhtml 0000664 0000000 0000000 00000005061 12607514243 0024115 0 ustar 00root root 0000000 0000000
Access Log
Server Log
calendarserver-7.0+dfsg/calendarserver/webadmin/principals.py 0000664 0000000 0000000 00000022364 12607514243 0024616 0 ustar 00root root 0000000 0000000 # -*- test-case-name: calendarserver.webadmin.test.test_principals -*-
##
# Copyright (c) 2014-2015 Apple Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
##
from __future__ import print_function
"""
Calendar Server principal management web UI.
"""
__all__ = [
"PrincipalsResource",
]
from cStringIO import StringIO
from zipfile import ZipFile
from twistedcaldav.simpleresource import SimpleResource
from twisted.internet.defer import inlineCallbacks, returnValue
from twisted.web.template import tags as html, renderer
from txweb2.stream import MemoryStream
from txweb2.http import Response
from txweb2.http_headers import MimeType
from .resource import PageElement, TemplateResource
class PrincipalsPageElement(PageElement):
"""
Principal management page element.
"""
def __init__(self, directory):
super(PrincipalsPageElement, self).__init__(u"principals")
self._directory = directory
def pageSlots(self):
return {
u"title": u"Principal Management",
}
@renderer
def search_terms(self, request, tag):
"""
Inserts search terms as a text child of C{tag}.
"""
terms = searchTerms(request)
if terms:
return tag(value=u" ".join(terms))
else:
return tag
@renderer
def if_search_results(self, request, tag):
"""
Renders C{tag} if there are search results, otherwise removes it.
"""
if searchTerms(request):
return tag
else:
return u""
@renderer
def search_results_row(self, request, tag):
def rowsForRecords(records):
for record in records:
yield tag.clone().fillSlots(
**slotsForRecord(record)
)
d = self.recordsForSearchTerms(request)
d.addCallback(rowsForRecords)
return d
@inlineCallbacks
def recordsForSearchTerms(self, request):
if not hasattr(request, "_search_result_records"):
terms = searchTerms(request)
records = yield self._directory.recordsMatchingTokens(terms)
request._search_result_records = tuple(records)
returnValue(request._search_result_records)
class PrincipalsResource(TemplateResource):
"""
Principal management page resource.
"""
addSlash = True
def __init__(self, directory, store, principalCollections):
super(PrincipalsResource, self).__init__(
lambda: PrincipalsPageElement(directory), principalCollections, isdir=False
)
self._directory = directory
self._store = store
@inlineCallbacks
def getChild(self, name):
if name == "":
returnValue(self)
record = yield self._directory.recordWithUID(name)
if record:
returnValue(PrincipalResource(record, self._store, self._principalCollections))
else:
returnValue(None)
class PrincipalPageElement(PageElement):
"""
Principal editing page element.
"""
def __init__(self, record):
super(PrincipalPageElement, self).__init__(u"principals_edit")
self._record = record
def pageSlots(self):
slots = slotsForRecord(self._record)
slots[u"title"] = u"Calendar & Contacts Server Principal Information"
slots[u"service"] = (
u"{self._record.service.__class__.__name__}: "
"{self._record.service.realmName}"
.format(self=self)
)
return slots
class PrincipalResource(TemplateResource):
"""
Principal editing resource.
"""
addSlash = True
def __init__(self, record, store, principalCollections):
super(PrincipalResource, self).__init__(
lambda: PrincipalPageElement(record), principalCollections, isdir=False
)
self._record = record
self._store = store
def getChild(self, name):
if name == "":
return self
if name == "calendars_combined":
return PrincipalCalendarsExportResource(self._record, self._store, self._principalCollections)
class PrincipalCalendarsExportResource(SimpleResource):
"""
Resource that vends a principal's calendars as iCalendar text.
"""
addSlash = False
def __init__(self, record, store, principalCollections):
super(PrincipalCalendarsExportResource, self).__init__(principalCollections, isdir=False)
self._record = record
self._store = store
@inlineCallbacks
def calendarComponents(self):
uid = self._record.uid
calendarComponents = []
txn = self._store.newTransaction()
try:
calendarHome = yield txn.calendarHomeWithUID(uid)
if calendarHome is None:
raise RuntimeError("No calendar home for UID: {}".format(uid))
for calendar in (yield calendarHome.calendars()):
name = calendar.displayName()
for calendarObject in (yield calendar.calendarObjects()):
perUser = yield calendarObject.filteredComponent(uid, True)
calendarComponents.append((name, perUser))
finally:
txn.abort()
returnValue(calendarComponents)
@inlineCallbacks
def iCalendarZipArchiveData(self):
calendarComponents = yield self.calendarComponents()
fileHandle = StringIO()
try:
zipFile = ZipFile(fileHandle, "w", allowZip64=True)
try:
zipFile.comment = (
"Calendars for UID: {}".format(self._record.uid)
)
names = set()
for name, component in calendarComponents:
if name in names:
i = 0
while True:
i += 1
nextName = "{} {:d}".format(name, i)
if nextName not in names:
name = nextName
break
assert i < len(calendarComponents)
text = component.getText().encode("utf-8")
zipFile.writestr(name.encode("utf-8"), text)
finally:
zipFile.close()
data = fileHandle.getvalue()
finally:
fileHandle.close()
returnValue(data)
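The duplicate-name loop inside `iCalendarZipArchiveData()` above (two calendars may share a display name, but zip entry names must be unique) can be exercised on its own. A sketch of the same algorithm over plain strings; `unique_name` is an illustrative name, not a helper in this module:

```python
def unique_name(name, taken):
    """Return name unchanged if unused, else "name N" for the smallest
    integer N >= 1 not already in taken -- the same probing strategy as
    the loop in iCalendarZipArchiveData() above."""
    if name not in taken:
        return name
    i = 0
    while True:
        i += 1
        candidate = "{} {:d}".format(name, i)
        if candidate not in taken:
            return candidate
```

The caller is responsible for adding each returned name to `taken`, as the export loop does by accumulating into its `names` set.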
@inlineCallbacks
def render(self, request):
response = Response()
response.stream = MemoryStream((yield self.iCalendarZipArchiveData()))
# FIXME: Use content-encoding instead?
response.headers.setHeader(
b"content-type",
MimeType.fromString(b"application/zip")
)
returnValue(response)
def searchTerms(request):
if not hasattr(request, "_search_terms"):
terms = set()
if request.args:
for query in request.args.get(u"search", []):
for term in query.split(u" "):
if term:
terms.add(term)
for term in request.args.get(u"term", []):
if term:
terms.add(term)
request._search_terms = terms
return request._search_terms
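`searchTerms()` above merges two query parameters: each `search` value is split on spaces into individual tokens, while each `term` value is taken whole; empty strings are dropped and duplicates collapse into a set. The same tokenizing behavior, sketched over a plain dict instead of a request object:

```python
def parse_search_terms(args):
    """Collect unique search tokens from a dict of query arguments,
    mirroring searchTerms() above (minus the per-request caching)."""
    terms = set()
    for query in args.get(u"search", []):
        for term in query.split(u" "):
            if term:  # drop empties from repeated spaces
                terms.add(term)
    for term in args.get(u"term", []):
        if term:
            terms.add(term)
    return terms
```

Caching the result on the request (`request._search_terms`) lets multiple renderers on the same page share one parse per request.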
#
# This should work when we switch to twext.who
#
def slotsForRecord(record):
def asText(obj):
if obj is None:
return u"(no value)"
else:
try:
return unicode(obj)
except UnicodeDecodeError:
try:
return unicode(repr(obj))
except UnicodeDecodeError:
return u"(error rendering value)"
def joinWithBR(elements):
noValues = True
for element in elements:
if not noValues:
yield html.br()
yield asText(element)
noValues = False
if noValues:
yield u"(no values)"
# slots = {}
# for field, values in record.fields.iteritems():
# if not record.service.fieldName.isMultiValue(field):
# values = (values,)
# slots[field.name] = joinWithBR(asText(value) for value in values)
# return slots
return {
u"service": (
u"{record.service.__class__.__name__}: {record.service.realmName}"
.format(record=record)
),
u"uid": joinWithBR((record.uid,)),
u"guid": joinWithBR((record.guid,)),
u"recordType": joinWithBR((record.recordType,)),
u"shortNames": joinWithBR(record.shortNames),
u"fullNames": joinWithBR((record.fullName,)),
u"emailAddresses": joinWithBR(record.emailAddresses),
u"calendarUserAddresses": joinWithBR(record.calendarUserAddresses),
u"serverID": joinWithBR((record.serverID,)),
}
calendarserver-7.0+dfsg/calendarserver/webadmin/principals.xhtml 0000664 0000000 0000000 00000002666 12607514243 0025325 0 ustar 00root root 0000000 0000000
calendarserver-7.0+dfsg/calendarserver/webadmin/resource.py 0000664 0000000 0000000 00000005577 12607514243 0024310 0 ustar 00root root 0000000 0000000 # -*- test-case-name: calendarserver.webadmin.test.test_resource -*-
##
# Copyright (c) 2009-2015 Apple Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
##
"""
Calendar Server Web Admin UI.
"""
__all__ = [
"PageElement",
"TemplateResource",
]
from twistedcaldav.simpleresource import SimpleResource
from twisted.internet.defer import inlineCallbacks, returnValue
from twisted.python.modules import getModule
from twisted.web.template import (
Element, renderer, XMLFile, flattenString, tags
)
from txweb2.stream import MemoryStream
from txweb2.http import Response
from txweb2.http_headers import MimeType
class PageElement(Element):
"""
Page element.
"""
def __init__(self, templateName):
super(PageElement, self).__init__()
self.loader = XMLFile(
getModule(__name__).filePath.sibling(
u"{name}.xhtml".format(name=templateName)
)
)
def pageSlots(self):
return {}
@renderer
def main(self, request, tag):
"""
Main renderer, which fills page-global slots like 'title'.
"""
tag.fillSlots(**self.pageSlots())
return tag
@renderer
def stylesheet(self, request, tag):
return tags.link(
rel=u"stylesheet",
media=u"screen",
href=u"/style.css",
type=u"text/css",
)
class TemplateResource(SimpleResource):
"""
Resource that renders a template.
"""
# @staticmethod
# def queryValue(request, argument):
# for value in request.args.get(argument, []):
# return value
# return u""
# @staticmethod
# def queryValues(request, arguments):
# return request.args.get(arguments, [])
def __init__(self, elementClass, pc, isdir):
super(TemplateResource, self).__init__(pc, isdir=isdir)
self.elementClass = elementClass
# def handleQueryArguments(self, request):
# return succeed(None)
@inlineCallbacks
def render(self, request):
# yield self.handleQueryArguments(request)
htmlContent = yield flattenString(request, self.elementClass())
response = Response()
response.stream = MemoryStream(htmlContent)
response.headers.setHeader(
b"content-type", MimeType.fromString(b"text/html; charset=utf-8")
)
returnValue(response)
calendarserver-7.0+dfsg/calendarserver/webadmin/test/ 0000775 0000000 0000000 00000000000 12607514243 0023050 5 ustar 00root root 0000000 0000000 calendarserver-7.0+dfsg/calendarserver/webadmin/test/__init__.py 0000664 0000000 0000000 00000001136 12607514243 0025162 0 ustar 00root root 0000000 0000000 ##
# Copyright (c) 2011-2015 Apple Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
##
calendarserver-7.0+dfsg/calendarserver/webadmin/test/test_eventsource.py 0000664 0000000 0000000 00000017007 12607514243 0027030 0 ustar 00root root 0000000 0000000 ##
# Copyright (c) 2011-2015 Apple Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
##
"""
Tests for L{calendarserver.webadmin.eventsource}.
"""
from __future__ import print_function
from zope.interface import implementer
from twisted.internet.defer import inlineCallbacks
from twisted.trial.unittest import TestCase
from txweb2.server import Request
from txweb2.http_headers import Headers
from ..eventsource import textAsEvent, EventSourceResource, IEventDecoder
class EventGenerationTests(TestCase):
"""
Tests for emitting HTML5 EventSource events.
"""
def test_textAsEvent(self):
"""
Generate an event from some text.
"""
self.assertEquals(
textAsEvent(u"Hello, World!"),
b"data: Hello, World!\n\n"
)
def test_textAsEvent_newlines(self):
"""
Text with newlines generates multiple C{data:} lines in event.
"""
self.assertEquals(
textAsEvent(u"AAPL\nApple Inc.\n527.10"),
b"data: AAPL\ndata: Apple Inc.\ndata: 527.10\n\n"
)
def test_textAsEvent_newlineTrailing(self):
"""
Text with trailing newline generates trailing blank C{data:} line.
"""
self.assertEquals(
textAsEvent(u"AAPL\nApple Inc.\n527.10\n"),
b"data: AAPL\ndata: Apple Inc.\ndata: 527.10\ndata: \n\n"
)
def test_textAsEvent_encoding(self):
"""
Event text is encoded as UTF-8.
"""
self.assertEquals(
textAsEvent(u"S\xe1nchez"),
b"data: S\xc3\xa1nchez\n\n"
)
def test_textAsEvent_emptyText(self):
"""
Empty text generates an event with an empty C{data:} line.
"""
self.assertEquals(
textAsEvent(u""),
b"data: \n\n" # Note that b"data\n\n" is a valid result here also
)
def test_textAsEvent_NoneText(self):
"""
Text may not be L{None}.
"""
self.assertRaises(TypeError, textAsEvent, None)
def test_textAsEvent_eventID(self):
"""
Event ID goes into the C{id} field.
"""
self.assertEquals(
textAsEvent(u"", eventID=u"1234"),
b"id: 1234\ndata: \n\n" # Note order is allowed to vary
)
def test_textAsEvent_eventClass(self):
"""
Event class goes into the C{event} field.
"""
self.assertEquals(
textAsEvent(u"", eventClass=u"buckets"),
b"event: buckets\ndata: \n\n" # Note order is allowed to vary
)
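The tests above pin down the EventSource wire format: one `data:` line per line of text, optional `id:` and `event:` fields, UTF-8 encoding, and a blank line terminating the event. A minimal sketch consistent with those tests — the real implementation lives in `calendarserver.webadmin.eventsource` and may differ in detail:

```python
def text_as_event_sketch(text, event_id=None, event_class=None):
    """Encode text as an HTML5 EventSource event (sketch only)."""
    if text is None:
        raise TypeError("text may not be None")
    lines = []
    if event_id is not None:
        lines.append(u"id: {0}".format(event_id))
    if event_class is not None:
        lines.append(u"event: {0}".format(event_class))
    # Each line of the text becomes its own "data:" field line;
    # a trailing newline in text yields a trailing empty data line.
    lines.extend(u"data: {0}".format(part) for part in text.split(u"\n"))
    # Events are terminated by a blank line, and the stream is UTF-8.
    return (u"\n".join(lines) + u"\n\n").encode("utf-8")
```

As the test comments note, the SSE spec allows the `id:`/`event:`/`data:` fields in any order, and `data` with no colon is equivalent to an empty `data:` line.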
class EventSourceResourceTests(TestCase):
"""
Tests for L{EventSourceResource}.
"""
def eventSourceResource(self):
return EventSourceResource(DictionaryEventDecoder, None)
def render(self, resource):
headers = Headers()
request = Request(
chanRequest=None,
command=None,
path="/",
version=None,
contentLength=None,
headers=headers
)
return resource.render(request)
def test_status(self):
"""
Response status is C{200}.
"""
resource = self.eventSourceResource()
response = self.render(resource)
self.assertEquals(response.code, 200)
def test_contentType(self):
"""
Response content type is C{text/event-stream}.
"""
resource = self.eventSourceResource()
response = self.render(resource)
self.assertEquals(
response.headers.getRawHeaders(b"Content-Type"),
["text/event-stream"]
)
def test_contentLength(self):
"""
Response content length is not provided.
"""
resource = self.eventSourceResource()
response = self.render(resource)
self.assertEquals(
response.headers.getRawHeaders(b"Content-Length"),
None
)
@inlineCallbacks
def test_streamBufferedEvents(self):
"""
Events already buffered are vended immediately, then we get EOF.
"""
events = (
dict(eventID=u"1", eventText=u"A"),
dict(eventID=u"2", eventText=u"B"),
dict(eventID=u"3", eventText=u"C"),
dict(eventID=u"4", eventText=u"D"),
)
resource = self.eventSourceResource()
resource.addEvents(events)
response = self.render(resource)
# Each result from read() is another event
for i in range(len(events)):
result = yield response.stream.read()
self.assertEquals(
result,
textAsEvent(
text=events[i]["eventText"],
eventID=events[i]["eventID"]
)
)
@inlineCallbacks
def test_streamWaitForEvents(self):
"""
Stream reading blocks on additional events.
"""
resource = self.eventSourceResource()
response = self.render(resource)
# Read should block on new events.
d = response.stream.read()
self.assertFalse(d.called)
d.addErrback(lambda f: None)
d.cancel()
test_streamWaitForEvents.todo = "Feature disabled; needs debugging"
@inlineCallbacks
def test_streamNewEvents(self):
"""
Events not already buffered are vended after they are posted.
"""
events = (
dict(eventID=u"1", eventText=u"A"),
dict(eventID=u"2", eventText=u"B"),
dict(eventID=u"3", eventText=u"C"),
dict(eventID=u"4", eventText=u"D"),
)
resource = self.eventSourceResource()
response = self.render(resource)
# The first read should block on new events.
d = response.stream.read()
self.assertFalse(d.called)
# Add some events
resource.addEvents(events)
# We should now be unblocked
self.assertTrue(d.called)
# Each result from read() is another event
for i in range(len(events)):
if d is None:
result = yield response.stream.read()
else:
result = yield d
d = None
self.assertEquals(
result,
textAsEvent(
text=events[i]["eventText"],
eventID=(events[i]["eventID"])
)
)
# The next read should block on new events.
d = response.stream.read()
self.assertFalse(d.called)
d.addErrback(lambda f: None)
d.cancel()
test_streamNewEvents.todo = "Feature disabled; needs debugging"
@implementer(IEventDecoder)
class DictionaryEventDecoder(object):
"""
Decodes events represented as dictionaries.
"""
@staticmethod
def idForEvent(event):
return event.get("eventID")
@staticmethod
def classForEvent(event):
return event.get("eventClass")
@staticmethod
def textForEvent(event):
return event.get("eventText")
@staticmethod
def retryForEvent(event):
return None
calendarserver-7.0+dfsg/calendarserver/webadmin/work.py 0000664 0000000 0000000 00000016341 12607514243 0023432 0 ustar 00root root 0000000 0000000 # -*- test-case-name: calendarserver.webadmin.test.test_principals -*-
##
# Copyright (c) 2014-2015 Apple Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
##
from __future__ import print_function
"""
Calendar Server work queue monitoring web UI.
"""
__all__ = [
"WorkMonitorResource",
]
from time import time
from json import dumps
from zope.interface import implementer
from twisted.internet.defer import inlineCallbacks, returnValue
from twext.python.log import Logger
from twext.enterprise.jobs.jobitem import JobItem
from txdav.caldav.datastore.scheduling.imip.inbound import (
IMIPPollingWork, IMIPReplyWork
)
from txdav.who.groups import GroupCacherPollingWork
from calendarserver.push.notifier import PushNotificationWork
from txdav.caldav.datastore.scheduling.work import (
ScheduleOrganizerWork, ScheduleRefreshWork,
ScheduleReplyWork, ScheduleAutoReplyWork,
)
from .eventsource import EventSourceResource, IEventDecoder
from .resource import PageElement, TemplateResource
class WorkMonitorPageElement(PageElement):
"""
Workload monitor page element.
"""
def __init__(self):
super(WorkMonitorPageElement, self).__init__(u"work")
def pageSlots(self):
return {
u"title": u"Workload Monitor",
}
class WorkMonitorResource(TemplateResource):
"""
Workload monitor page resource.
"""
addSlash = True
def __init__(self, store, principalCollections):
super(WorkMonitorResource, self).__init__(
lambda: WorkMonitorPageElement(), principalCollections, isdir=False
)
self.putChild(u"events", WorkEventsResource(store, principalCollections))
class WorkEventsResource(EventSourceResource):
"""
Resource that vends work queue information via HTML5 EventSource events.
"""
log = Logger()
def __init__(self, store, principalCollections, pollInterval=1000):
super(WorkEventsResource, self).__init__(EventDecoder, principalCollections, bufferSize=100)
self._store = store
self._pollInterval = pollInterval
self._polling = False
@inlineCallbacks
def render(self, request):
yield self.poll()
returnValue(super(WorkEventsResource, self).render(request))
@inlineCallbacks
def poll(self):
if self._polling:
return
self._polling = True
txn = self._store.newTransaction()
try:
# Look up all of the jobs
events = []
jobsByTypeName = {}
for job in (yield JobItem.all(txn)):
jobsByTypeName.setdefault(job.workType, []).append(job)
totalsByTypeName = {}
for workType in JobItem.workTypes():
typeName = workType.table.model.name
jobs = jobsByTypeName.get(typeName, [])
totalsByTypeName[typeName] = len(jobs)
jobDicts = []
for job in jobs:
def formatTime(datetime):
if datetime is None:
return None
else:
# FIXME: Use HTTP time format
return datetime.ctime()
jobDict = dict(
job_jobID=job.jobID,
job_priority=job.priority,
job_weight=job.weight,
job_notBefore=formatTime(job.notBefore),
)
work = yield job.workItem()
attrs = ("workID", "group")
if workType == PushNotificationWork:
attrs += ("pushID", "priority")
elif workType == ScheduleOrganizerWork:
attrs += ("icalendarUID", "attendeeCount")
elif workType == ScheduleRefreshWork:
attrs += ("icalendarUID", "attendeeCount")
elif workType == ScheduleReplyWork:
attrs += ("icalendarUID",)
elif workType == ScheduleAutoReplyWork:
attrs += ("icalendarUID",)
elif workType == GroupCacherPollingWork:
attrs += ()
elif workType == IMIPPollingWork:
attrs += ()
elif workType == IMIPReplyWork:
attrs += ("organizer", "attendee")
else:
attrs = ()
if attrs:
if work is None:
self.log.error(
"workItem() returned None for job: {job}",
job=job
)
# jobDict.update((attr, None) for attr in attrs)
for attr in attrs:
jobDict["work_{}".format(attr)] = None
else:
# jobDict.update(
# ("work_{}".format(attr), getattr(work, attr))
# for attr in attrs
# )
for attr in attrs:
jobDict["work_{}".format(attr)] = (
getattr(work, attr)
)
jobDicts.append(jobDict)
if jobDicts:
events.append(dict(
eventClass=typeName,
eventID=time(),
eventText=asJSON(jobDicts),
))
events.append(dict(
eventClass=u"work-total",
eventID=time(),
eventText=asJSON(totalsByTypeName),
eventRetry=(self._pollInterval),
))
# Send data
self.addEvents(events)
except:
self._polling = False
yield txn.abort()
raise
else:
yield txn.commit()
# Schedule the next poll
if not hasattr(self, "_clock"):
from twisted.internet import reactor
self._clock = reactor
self._clock.callLater(self._pollInterval / 1000, self.poll)
@implementer(IEventDecoder)
class EventDecoder(object):
@staticmethod
def idForEvent(event):
return event.get("eventID")
@staticmethod
def classForEvent(event):
return event.get("eventClass")
@staticmethod
def textForEvent(event):
return event.get("eventText")
@staticmethod
def retryForEvent(event):
return event.get("eventRetry")
def asJSON(obj):
return dumps(obj, separators=(',', ':'))
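`asJSON()` above passes `separators=(',', ':')` so the serialized payload carries no whitespace, which matters because every byte of `eventText` is re-sent over the long-lived EventSource connection on each poll. A quick illustration of the difference:

```python
import json

def as_json(obj):
    # Same approach as asJSON() above: compact separators strip the
    # spaces json.dumps() inserts after ',' and ':' by default.
    return json.dumps(obj, separators=(',', ':'))

# Default:  '[1, 2, {"k": "v"}]'
# Compact:  '[1,2,{"k":"v"}]'
```

For large job listings the savings compound, since the work monitor emits one JSON event per work type on every poll interval.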
calendarserver-7.0+dfsg/calendarserver/webadmin/work.xhtml 0000664 0000000 0000000 00000030753 12607514243 0024141 0 ustar 00root root 0000000 0000000