offlineimap-6.6.1/000077500000000000000000000000001264010144500140125ustar00rootroot00000000000000offlineimap-6.6.1/.gitignore000066400000000000000000000002161264010144500160010ustar00rootroot00000000000000# Backups. .*.swp .*.swo *.html *~ # websites. /website/ /wiki/ # Generated files. /docs/dev-doc/ /build/ *.pyc offlineimap.1 offlineimapui.7 offlineimap-6.6.1/CONTRIBUTING.rst000066400000000000000000000075151264010144500164630ustar00rootroot00000000000000.. -*- coding: utf-8 -*- .. vim: spelllang=en ts=2 expandtab: .. _OfflineIMAP: https://github.com/OfflineIMAP/offlineimap .. _Github: https://github.com/OfflineIMAP/offlineimap .. _repository: git://github.com/OfflineIMAP/offlineimap.git .. _maintainers: https://github.com/OfflineIMAP/offlineimap/blob/next/MAINTAINERS.rst .. _mailing list: http://lists.alioth.debian.org/mailman/listinfo/offlineimap-project .. _Developer's Certificate of Origin: https://github.com/OfflineIMAP/offlineimap/blob/next/docs/doc-src/dco.rst .. _Community's website: http://offlineimap.org .. _APIs in OfflineIMAP: http://offlineimap.org/documentation.html#available-apis .. _documentation: http://offlineimap.org/documentation.html .. _Coding Guidelines: http://offlineimap.org/doc/CodingGuidelines.html .. _Know the status of your patches: http://offlineimap.org/doc/GitAdvanced.html#know-the-status-of-your-patch-after-submission ================= HOW TO CONTRIBUTE ================= You'll find here the **basics** to contribute to OfflineIMAP_, addressed to users as well as learning or experienced developers to quickly provide contributions. **For more detailed documentation, see the** `Community's website`_. .. contents:: :depth: 3 Submit issues ============= Issues are welcome to both Github_ and the `mailing list`_, at your own convenience. You might help closing some issues, too. :-) For the imaptients ================== - `Coding Guidelines`_ - `APIs in OfflineIMAP`_ - `Know the status of your patches`_ after submission - All the `documentation`_ Community ========= All contributors to OfflineIMAP_ are benevolent volunteers. This makes hacking to OfflineIMAP_ **fun and open**. Thanks to Python, almost every developer can quickly become productive. Students and novices are welcome. Third-parties patches are essential and proved to be a wonderful source of changes for both fixes and new features. OfflineIMAP_ is entirely written in Python, works on IMAP and source code is tracked with Git. *It is expected that most contributors don't have skills to all of these areas.* That's why the best thing you could do for you, is to ask us about any difficulty or question raising in your mind. We actually do our best to help new comers. **We've all started like this.** - The official repository_ is maintained by the core team maintainers_. - The `mailing list`_ is where all the exciting things happen. Getting started =============== Occasional contributors ----------------------- * Clone the official repository_. Regular contributors -------------------- * Create an account and login to Github. * Fork the official repository_. * Clone your own fork to your local workspace. * Add a reference to your fork (once):: $ git remote add myfork https://github.com//offlineimap.git * Regularly fetch the changes applied by the maintainers:: $ git fetch origin $ git checkout master $ git merge offlineimap/master $ git checkout next $ git merge offlineimap/next Making changes (all contributors) --------------------------------- 1. 
Create your own topic branch off of ``next`` (recently updated) via:: $ git checkout -b my_topic next 2. Check for unnecessary whitespaces with ``git diff --check`` before committing. 3. Commit your changes into logical/atomic commits. **Sign-off your work** to confirm you agree with the `Developer's Certificate of Origin`_. 4. Write a good *commit message* about **WHY** this patch (take samples from the ``git log``). Learn more ========== There is already a lot of documentation. Here's where you might want to look first: - The directory ``offlineimap/docs`` has all kind of additional documentation (man pages, RFCs). - The file ``offlineimap.conf`` allows to know all the supported features. - The file ``TODO.rst`` express code changes we'd like and current *Work In Progress* (WIP). offlineimap-6.6.1/COPYING000066400000000000000000000456271264010144500150630ustar00rootroot00000000000000# This program is free software; you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation; either version 2 of the License, or # (at your option) any later version. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. GNU GENERAL PUBLIC LICENSE Version 2, June 1991 Copyright (C) 1989, 1991 Free Software Foundation, Inc. 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA Everyone is permitted to copy and distribute verbatim copies of this license document, but changing it is not allowed. Preamble The licenses for most software are designed to take away your freedom to share and change it. By contrast, the GNU General Public License is intended to guarantee your freedom to share and change free software--to make sure the software is free for all its users. This General Public License applies to most of the Free Software Foundation's software and to any other program whose authors commit to using it. (Some other Free Software Foundation software is covered by the GNU Library General Public License instead.) You can apply it to your programs, too. When we speak of free software, we are referring to freedom, not price. Our General Public Licenses are designed to make sure that you have the freedom to distribute copies of free software (and charge for this service if you wish), that you receive source code or can get it if you want it, that you can change the software or use pieces of it in new free programs; and that you know you can do these things. To protect your rights, we need to make restrictions that forbid anyone to deny you these rights or to ask you to surrender the rights. These restrictions translate to certain responsibilities for you if you distribute copies of the software, or if you modify it. For example, if you distribute copies of such a program, whether gratis or for a fee, you must give the recipients all the rights that you have. You must make sure that they, too, receive or can get the source code. And you must show them these terms so they know their rights. We protect your rights with two steps: (1) copyright the software, and (2) offer you this license which gives you legal permission to copy, distribute and/or modify the software. Also, for each author's protection and ours, we want to make certain that everyone understands that there is no warranty for this free software. 
If the software is modified by someone else and passed on, we want its recipients to know that what they have is not the original, so that any problems introduced by others will not reflect on the original authors' reputations. Finally, any free program is threatened constantly by software patents. We wish to avoid the danger that redistributors of a free program will individually obtain patent licenses, in effect making the program proprietary. To prevent this, we have made it clear that any patent must be licensed for everyone's free use or not licensed at all. The precise terms and conditions for copying, distribution and modification follow. GNU GENERAL PUBLIC LICENSE TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION 0. This License applies to any program or other work which contains a notice placed by the copyright holder saying it may be distributed under the terms of this General Public License. The "Program", below, refers to any such program or work, and a "work based on the Program" means either the Program or any derivative work under copyright law: that is to say, a work containing the Program or a portion of it, either verbatim or with modifications and/or translated into another language. (Hereinafter, translation is included without limitation in the term "modification".) Each licensee is addressed as "you". Activities other than copying, distribution and modification are not covered by this License; they are outside its scope. The act of running the Program is not restricted, and the output from the Program is covered only if its contents constitute a work based on the Program (independent of having been made by running the Program). Whether that is true depends on what the Program does. 1. You may copy and distribute verbatim copies of the Program's source code as you receive it, in any medium, provided that you conspicuously and appropriately publish on each copy an appropriate copyright notice and disclaimer of warranty; keep intact all the notices that refer to this License and to the absence of any warranty; and give any other recipients of the Program a copy of this License along with the Program. You may charge a fee for the physical act of transferring a copy, and you may at your option offer warranty protection in exchange for a fee. 2. You may modify your copy or copies of the Program or any portion of it, thus forming a work based on the Program, and copy and distribute such modifications or work under the terms of Section 1 above, provided that you also meet all of these conditions: a) You must cause the modified files to carry prominent notices stating that you changed the files and the date of any change. b) You must cause any work that you distribute or publish, that in whole or in part contains or is derived from the Program or any part thereof, to be licensed as a whole at no charge to all third parties under the terms of this License. c) If the modified program normally reads commands interactively when run, you must cause it, when started running for such interactive use in the most ordinary way, to print or display an announcement including an appropriate copyright notice and a notice that there is no warranty (or else, saying that you provide a warranty) and that users may redistribute the program under these conditions, and telling the user how to view a copy of this License. 
(Exception: if the Program itself is interactive but does not normally print such an announcement, your work based on the Program is not required to print an announcement.) These requirements apply to the modified work as a whole. If identifiable sections of that work are not derived from the Program, and can be reasonably considered independent and separate works in themselves, then this License, and its terms, do not apply to those sections when you distribute them as separate works. But when you distribute the same sections as part of a whole which is a work based on the Program, the distribution of the whole must be on the terms of this License, whose permissions for other licensees extend to the entire whole, and thus to each and every part regardless of who wrote it. Thus, it is not the intent of this section to claim rights or contest your rights to work written entirely by you; rather, the intent is to exercise the right to control the distribution of derivative or collective works based on the Program. In addition, mere aggregation of another work not based on the Program with the Program (or with a work based on the Program) on a volume of a storage or distribution medium does not bring the other work under the scope of this License. 3. You may copy and distribute the Program (or a work based on it, under Section 2) in object code or executable form under the terms of Sections 1 and 2 above provided that you also do one of the following: a) Accompany it with the complete corresponding machine-readable source code, which must be distributed under the terms of Sections 1 and 2 above on a medium customarily used for software interchange; or, b) Accompany it with a written offer, valid for at least three years, to give any third party, for a charge no more than your cost of physically performing source distribution, a complete machine-readable copy of the corresponding source code, to be distributed under the terms of Sections 1 and 2 above on a medium customarily used for software interchange; or, c) Accompany it with the information you received as to the offer to distribute corresponding source code. (This alternative is allowed only for noncommercial distribution and only if you received the program in object code or executable form with such an offer, in accord with Subsection b above.) The source code for a work means the preferred form of the work for making modifications to it. For an executable work, complete source code means all the source code for all modules it contains, plus any associated interface definition files, plus the scripts used to control compilation and installation of the executable. However, as a special exception, the source code distributed need not include anything that is normally distributed (in either source or binary form) with the major components (compiler, kernel, and so on) of the operating system on which the executable runs, unless that component itself accompanies the executable. If distribution of executable or object code is made by offering access to copy from a designated place, then offering equivalent access to copy the source code from the same place counts as distribution of the source code, even though third parties are not compelled to copy the source along with the object code. 4. You may not copy, modify, sublicense, or distribute the Program except as expressly provided under this License. 
Any attempt otherwise to copy, modify, sublicense or distribute the Program is void, and will automatically terminate your rights under this License. However, parties who have received copies, or rights, from you under this License will not have their licenses terminated so long as such parties remain in full compliance. 5. You are not required to accept this License, since you have not signed it. However, nothing else grants you permission to modify or distribute the Program or its derivative works. These actions are prohibited by law if you do not accept this License. Therefore, by modifying or distributing the Program (or any work based on the Program), you indicate your acceptance of this License to do so, and all its terms and conditions for copying, distributing or modifying the Program or works based on it. 6. Each time you redistribute the Program (or any work based on the Program), the recipient automatically receives a license from the original licensor to copy, distribute or modify the Program subject to these terms and conditions. You may not impose any further restrictions on the recipients' exercise of the rights granted herein. You are not responsible for enforcing compliance by third parties to this License. 7. If, as a consequence of a court judgment or allegation of patent infringement or for any other reason (not limited to patent issues), conditions are imposed on you (whether by court order, agreement or otherwise) that contradict the conditions of this License, they do not excuse you from the conditions of this License. If you cannot distribute so as to satisfy simultaneously your obligations under this License and any other pertinent obligations, then as a consequence you may not distribute the Program at all. For example, if a patent license would not permit royalty-free redistribution of the Program by all those who receive copies directly or indirectly through you, then the only way you could satisfy both it and this License would be to refrain entirely from distribution of the Program. If any portion of this section is held invalid or unenforceable under any particular circumstance, the balance of the section is intended to apply and the section as a whole is intended to apply in other circumstances. It is not the purpose of this section to induce you to infringe any patents or other property right claims or to contest validity of any such claims; this section has the sole purpose of protecting the integrity of the free software distribution system, which is implemented by public license practices. Many people have made generous contributions to the wide range of software distributed through that system in reliance on consistent application of that system; it is up to the author/donor to decide if he or she is willing to distribute software through any other system and a licensee cannot impose that choice. This section is intended to make thoroughly clear what is believed to be a consequence of the rest of this License. 8. If the distribution and/or use of the Program is restricted in certain countries either by patents or by copyrighted interfaces, the original copyright holder who places the Program under this License may add an explicit geographical distribution limitation excluding those countries, so that distribution is permitted only in or among countries not thus excluded. In such case, this License incorporates the limitation as if written in the body of this License. 9. 
The Free Software Foundation may publish revised and/or new versions of the General Public License from time to time. Such new versions will be similar in spirit to the present version, but may differ in detail to address new problems or concerns. Each version is given a distinguishing version number. If the Program specifies a version number of this License which applies to it and "any later version", you have the option of following the terms and conditions either of that version or of any later version published by the Free Software Foundation. If the Program does not specify a version number of this License, you may choose any version ever published by the Free Software Foundation. 10. If you wish to incorporate parts of the Program into other free programs whose distribution conditions are different, write to the author to ask for permission. For software which is copyrighted by the Free Software Foundation, write to the Free Software Foundation; we sometimes make exceptions for this. Our decision will be guided by the two goals of preserving the free status of all derivatives of our free software and of promoting the sharing and reuse of software generally. NO WARRANTY 11. BECAUSE THE PROGRAM IS LICENSED FREE OF CHARGE, THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING, REPAIR OR CORRECTION. 12. IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MAY MODIFY AND/OR REDISTRIBUTE THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES. END OF TERMS AND CONDITIONS How to Apply These Terms to Your New Programs If you develop a new program, and you want it to be of the greatest possible use to the public, the best way to achieve this is to make it free software which everyone can redistribute and change under these terms. To do so, attach the following notices to the program. It is safest to attach them to the start of each source file to most effectively convey the exclusion of warranty; and each file should have at least the "copyright" line and a pointer to where the full notice is found. Copyright (C) This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. 
You should have received a copy of the GNU General Public License along with this program; if not, write to the Free Software Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA Also add information on how to contact you by electronic and paper mail. If the program is interactive, make it output a short notice like this when it starts in an interactive mode: Gnomovision version 69, Copyright (C) year name of author Gnomovision comes with ABSOLUTELY NO WARRANTY; for details type `show w'. This is free software, and you are welcome to redistribute it under certain conditions; type `show c' for details. The hypothetical commands `show w' and `show c' should show the appropriate parts of the General Public License. Of course, the commands you use may be called something other than `show w' and `show c'; they could even be mouse-clicks or menu items--whatever suits your program. You should also get your employer (if you work as a programmer) or your school, if any, to sign a "copyright disclaimer" for the program, if necessary. Here is a sample; alter the names: Yoyodyne, Inc., hereby disclaims all copyright interest in the program `Gnomovision' (which makes passes at compilers) written by James Hacker. , 1 April 1989 Ty Coon, President of Vice This General Public License does not permit incorporating your program into proprietary programs. If your program is a subroutine library, you may consider it more useful to permit linking proprietary applications with the library. If this is what you want to do, use the GNU Library General Public License instead of this License. ---------------------------------------------------------------- In addition, as a special exception, the copyright holders give permission to link the code of portions of this program with the OpenSSL library under certain conditions as described in each individual source file, and distribute linked combinations including the two. You must obey the GNU General Public License in all respects for all of the code used other than OpenSSL. If you modify file(s) with this exception, you may extend this exception to your version of the file(s), but you are not obligated to do so. If you do not wish to do so, delete this exception statement from your version. If you delete this exception statement from all source files in the program, then also delete it here. offlineimap-6.6.1/Changelog.maint.md000066400000000000000000000011531264010144500173320ustar00rootroot00000000000000--- layout: page title: Changelog of the stable branch --- * The following excerpt is only usefull when rendered in the website. {:toc} This is the Changelog of the maintenance branch. **NOTE FROM THE MAINTAINER:** This branch comes almost as-is. With no URGENT requirements to update this branch (e.g. big security fix), it is left behind. If anyone volunteers to maintain it and backport patches, let us know! ### OfflineIMAP v6.3.2.1 (2011-03-23) #### Bug Fixes * Sanity checks for SSL cacertfile configuration. * Fix regression (UIBase is no more). * Make profiling mode really enforce single-threading. offlineimap-6.6.1/Changelog.md000066400000000000000000001174601264010144500162340ustar00rootroot00000000000000--- layout: page title: Changelog of mainline --- * The following excerpt is only usefull when rendered in the website. {:toc} ### OfflineIMAP v6.6.1 (2015-12-28) #### Notes This is a very small new stable release for two fixes. Amending support for BINARY APPEND which is not correctly implemented. 
Also, remove potential harms from dot files in a local maildir. #### Fixes - Bump imaplib2 from 2.52 to 2.53. Remove support for binary send. - Ignore aloo dot files in the Maildir while scanning for mails. ### OfflineIMAP v6.6.0 (2015-12-05) #### Features - Maildir learns to mimic Dovecot's format of lower-case letters (a,b,c..) for "custom flags" or user keywords. #### Fixes - Broken retry loop would break connection management. - Replace rogue `print` statement by `self.ui.debug`. #### Changes - Bump imaplib2 from v2.52 to v2.53. - Code cleanups. - Add a full stack of all thread dump upon EXIT or KILL signal in thread debug mode. ### OfflineIMAP v6.6.0-rc3 (2015-11-05) #### Notes Changes are slowing down and the code is under serious testing by some new contributors. Everything expected at this time in the release cycle. Thanks to them. SSL is now enabled by default to prevent from sending private data in clear stream to the wild. #### Features - Add new config option `filename_use_mail_timestamp`. #### Fixes - Bump from imaplib2 v2.51 to v2.52. - Minor fixes. #### Changes - Enable SSL by default. - Fix: avoid writing password to log. - offlineimap.conf: improve namtrans doc a bit. ### OfflineIMAP v6.6.0-rc2 (2015-10-15) #### Notes Interesting job was done in this release with 3 new features: - Support for XOAUTH2; - New 'tls_level' configuration option to automatically discard insecure SSL protocols; - New interface 'syslog' comes in, next to the -s CLI option. This allows better integration with systemd. I won't merge big changes until the stable is out. IOW, you can seriously start testing this rc2. #### Features - Add a new syslog ui. - Introduce the 'tls_level' configuration option. - Learn XOAUTH2 authentication (used by Gmail servers). - Manual IDLE section improved (minor). #### Fixes - Configuration option utime_from_header handles out-of-bounds dates. - offlineimap.conf: fix erroneous assumption about ssl23. - Fix status code to reflect success or failure of a sync. - contrib/release.sh: fix changelog edition. #### Changes - Bump imaplib2 from v2.48 to v2.51. - README: new section status and future. - Minor code cleanups. - Makefile: improve building of targz. - systemd: log to syslog rather than stderr for better integration. ### OfflineIMAP v6.6.0-rc1 (2015-09-28) #### Notes Let's go with a new release. Basic UTF support was implemented while it is still exeprimental. Use this with care. OfflineIMAP can now send the logs to syslog and notify on new mail. #### Features - logging: add a switch to log to syslog. - Added the newmail_hook. - utf-7 feature is set experimental. #### Fixes - offlineimap.conf: fix a typo in the new mail hook example. - Fix language. - Fix spelling inconsistency. - offlineimap.conf: don't use quotes for sep option. - man page: fingerprint can be used with SSL. - fix #225 « Runonce (offlineimap -o) does not stop if autorefresh is declared in DEFAULT section ». - CONTRIBUTING: fix links to offlineimap.org. #### Changes - Bump imaplib2 from 2.43 to 2.48 - README: small improvements ### OfflineIMAP v6.5.7 (2015-05-15) #### Notes Almost no change since last release candidate. This is a sign that this release is stable. ,-) There was big changes since previous stable and users - especially distribution maintainers - should really read the intermediate changelogs. At the beginning of this year, I've tried to implement Unicode support. As you know, I was not satisfied with the result. 
Then, I've published my code analysis where I talk about doing a lot of refactoring for more proper OOP practices. What's new is that I've actually done it and stopped this work as soon as I realized that it means entirely rewriting the software. On top of this, I'm not fully satisfied with other current limitations: - old legacy support; - migration to Python 3; - complex multithreading design; - some restrictions of the GPLv2 license; - etc. That's why I've started a new product. I'll publish it in the coming weeks under the MIT license. #### Features - Better documentation for Windows users. - contrib/release.sh (v0.2): fixes and improvements. #### Fixes - Report exceptions via exit code. - Proxy feature leaks DNS support: offlineimap.conf talks about this. - Email parsing for date coudn't work: fix datetuple dst check. #### Changes - Little code refactoring. ### OfflineIMAP v6.5.7-rc4 (2015-04-07) #### Notes Contrary to what the detailed following changes look like, here is a much bigger release than expected. Most important change is about maxage being sightly revisited. The whole internal logic was found broken. Janna Martl did the hard work of raising the issues and get them fixed. New configuration options are added. Maintainer Dmitrijs Ledkovs has left the organization. We wish you well! ,-) Sebastian Spaeth let us know he will be almost inactive. We wish you well, too! #### Features - Add configuration option "utime_from_header" (TESTING). - Add systemd integration files. - mbnames: add new option "incremental" to write the file once per account. #### Fixes - maxage: fix timezone issues, remove IMAP-IMAP support, add startdate option. - Test suites fixed and improved. - Fix inaccurate UI messages when some messages are internally excluded from the cached lists. #### Changes - imaplib2: bump to v2.43. - More documentations moves to the website. - Maintainer Dmitrijs has left the organization. - Remove unnecessary imaplib2 workaround. - release.sh: script for maintainers improved. ### OfflineIMAP v6.5.7-rc3 (2015-03-19) #### Notes Here comes a much bigger release than expected! With this release, the new website is made official. Distribution maintainers, be aware that we now have a new man page offlineimapui(7)! Also, the man page offlineimap(1) is sightly revised to explain the command line options. Since `offlineimap --help` won't detail the options anymore, it becomes critical. The maxage feature was broken by design and could delete mails on one side. It is still under heavy work to fix issues when timezones are not synced. Gmail is known to use different timezones accross mailboxes. The IMAP library imaplib2 was updated for the upcoming course to Python 3. The most other important changes are: - Possibility to use a proxy. - All the documentation are SIGHTLY revisited and updated from all the available places (sources files in the repository, wiki, website). A lot was moved from the wiki and the sources to the website. - the RFCs are available in the repository. #### Features - Add proxy support powered by PySocks. - New man page offlineimapui to explain the available UIs. - Add a CONTRIBUTING.rst file. - Add a `TODO.rst` list for the contributors. - Add a script for maintainers to roll out new releases. - Add the `scripts/get-repository.sh` script to work on the website and the wiki. - Doc: add IMAP RFCs. #### Fixes - Don't loose local mails because of maxage. - Properly handle the cached messagelist. - Do not error if `remoteuser` is not configured. 
- imaplibutil: add missing errno import. - LocalStatusSQLite: labels: don't fail if database returns unexpected None value. - IDLE: continue trying selecting the folder on `OfflineImapError.Error`. #### Changes - imaplib2: bump to v2.42 - `--help` becomes concise. - Changelogs: move format back to markdown/kramdown to be more compatible with Jekyll. - README: deep cleanups. - code cleanups. - code: more style consistency. - sqlite: provide offending filename when open fails. - MANUAL: full refactoring, change format to asciidoc. - MANUAL: rename "KNOWN BUGS" TO "KNOWN ISSUES". - MANUAL: add known issues entry about socktimeout for suspended sessions. - offlineimap.conf: say what is the default value for the sep option. - sqlite: provide information on what is failing for `OperationalError`. - remove obsolete documentation. ### OfflineIMAP v6.5.7-rc2 (2015-01-18) #### Notes This release candidate should be minor for most users. The best points are about SSL not falling back on other authentication methods when failing, better RAM footprint and reduced I/O access. Documentation had our attention, too. There's some code cleanups and code refactoring, as usual. #### Features * Do not keep reloading pyhtonfile, make it stateful. * HACKING: how to create tags. * MANUAL: add minor sample on how to retrieve a password with a helper python file. #### Fixes * Make OS-default CA certificate file to be requested explicitely. * SSL: do not fallback on other authentication mode if it fails. * Fix regression introduced while style patching. * API documentation: properly auto-document main class, fixes. * ui: Machine: remove offending param for a _printData() call. * Drop caches after having processed folders. #### Changes * Fix unexpected garbage code. * Properly re-raise exception to save original tracebacks. * Refactoring: avoid redefining various Python keywords. * Code: improvements of comments and more style consistency. * Configuration file: better design and other small improvements. * nametrans documentation: fix minor error. * Unused import removal. * Add a note about the incorrect rendering of the docstring with Sphinx. * Errors handling: log the messages with level ERROR. * MAINTAINERS: add mailing list maintainers. * Fixed copyright statement. * COPYING: fix unexpected characters. ### OfflineIMAP v6.5.7-rc1 (2015-01-07) #### Notes I think it's time for a new release candidate. Our release cycles are long enough and users are asked to use the current TIP of the next branch to test our recent patches. The current version makes better support for environment variable expansion and improves OS portability. Gmail should be better supported: we are still expecting feedbacks. Embedded library imaplib2 is updated to v2.37. Debugging messages are added and polished. There's some code cleanups and refactoring, also. #### Features * Expand environment variables in the following configuration items: - general.pythonfile; - general.metadata; - mbnames.filename; - Repository.localfolders. - Repository.sslcacertfile. Make tilde and environment variable expansion in the following configuration items: - Repository.sslclientcert; - Repository.sslclientkey. * Support default CA bundle locations for a couple of known Unix systems (Michael Vogt, GutHub pull #19) * Added default CA bundle location for OpenBSD (GitHub pull #120) and DragonFlyBSD. #### Fixes * Fix unbounded recursion during flag update (Josh Berry). 
* Do not ignore gmail labels if header appears multiple times * Delete gmail labels header before adding a new one * Fix improper header separator for X-OfflineIMAP header * Match header names case-insensitively * Create SQLite database directory if it doesn't exist yet; warn if path is not a directory (Nick Farrell, GutHub pull #102) * Properly manipulate contents of messagelist for folder * Fix label processing in GmailMaildir * Properly capitalize OpenSSL * Fix warning-level message processing by MachineUI (GitHub pull #64, GitHub pull #118). * Properly generate tarball from "sdist" command (GitHub #137) * Fix Markdown formatting * Fix typo in apply_xforms invocation * Merge pull request #136 from aroig/gh/label-fix * Fix mangled message headers for servers without UIDPLUS: X-OfflineIMAP was added with preceeding '\n' instead of '\r\n' just before message was uploaded to the IMAP server. * Add missing version bump for 6.5.6 (it was released with 6.5.5 in setup.py and other places). #### Changes * Warn about a tricky piece of code in addmessageheader * Rename addmessageheader()'s crlf parameter to linebreak * addmessageheader: fix case #2 and flesh out docstring * addmessageheader(): add debug for header insertion * Add version qualifier to differentiate releases and development ones * More clearly show results of folder name translation * IMAP: provide message-id in error messages * Trade recursion by plain old cycle * Avoid copying array every time, just slice it * Added OpenSSL exception clause to our main GPL to allow people to link with OpenSSL in run-time. It is needed at least for Debian, see https://lists.debian.org/debian-legal/2002/10/msg00113.html for details. * Brought CustomConfig.py into more proper shape * Updated bundled imaplib2 to 2.37: - add missing idle_lock in _handler() * Imaplib2: trade backticks to repr() * Introduce CustomConfig method that applies set of transforms * imaplibutil.py: remove unused imports * CustomConfig.py: remove unused imports * init.py: remove unused import * repository/Base.py: remove unused import * repository/GmailMaildir.py: remove unused import * repository/LocalStatus.py: remove unused import * ui/Curses.py: remove unused import * ui/UIBase.py: remove unused import * localeval: comment on security issues * docs: remove obsolete comment about SubmittingPatches.rst * utils/const.py: fix ident * ui/UIBase: folderlist(): avoid built-in list() redefinition * more consistent style ### OfflineIMAP v6.5.6 (2014-05-14) * Fix IDLE mode regression (it didn't worked) introduced after v6.5.5 (pointy hat goes to Eygene Ryabinkin, kudos -- to Tomasz Żok) ### OfflineIMAP v6.5.6-rc1 (2014-05-14) * Add knob to invoke folderfilter dynamically on each sync (GitHub#73) * Add knob to apply compression to IMAP connections (Abdó Roig-Maranges) * Add knob to filter some headers before uploading message to IMAP server (Abdó Roig-Maranges) * Allow to sync GMail labels and implement GmailMaildir repository that adds mechanics to change message labels (Abdó Roig-Maranges) * Allow to migrate status data across differend backends (Abdó Roig-Maranges) * Support XDG Base Directory Specification (if $XDG_CONFIG_HOME/offlineimap/config exists, use it as the default configuration path; ~/.offlineimaprc is still tried after XDG location) (GitHub#32) * Allow multiple certificate fingerprints to be specified inside 'cert_fingerprint' ### OfflineIMAP v6.5.5 (2013-10-07) * Avoid lockups for IMAP synchronizations running with the "-1" command-line switch (X-Ryl669 ) * Dump 
stacktrace for all threads on SIGQUIT: ease debugging of threading and other issues * SIGHUP is now handled as the termination notification rather than the signal to reread the configuration (Dmitrijs Ledkovs) * Honor the timezone of emails (Tobias Thierer) * Allow mbnames output to be sorted by a custom sort key by specifying a 'sort_keyfunc' function in the [mbnames] section of the config. * Support SASL PLAIN authentication method. (Andreas Mack) * Support transport-only tunnels that requre full IMAP authentication. (Steve Purcell) * Make the list of authentication mechanisms to be configurable. (Andreas Mack) * Allow to set message access and modification timestamps based on the "Date" header of the message itself. (Cyril Russo) * "peritem" format string for [mbnames] got new expansion key "localfolders" that corresponds to the same parameter of the local repository for the account being processed. * [regression] pass folder names to the foldersort function, revert the documented behaviour * Fix handling of zero-sized IMAP data items (GitHub#15). * Updated bundled imaplib2 to 2.35: - fix for Gmail sending a BYE response after reading >100 messages in a session; - includes fix for GitHub#15: patch was accepted upstream. * Updated bundled imaplib2 to 2.36: it includes support for SSL version override that was integrated into our code before, no other changes. * Fixed parsing of quoted strings in IMAP responses: strings like "\\" were treated as having \" as the escaped quote, rather than treating it as the quoted escaped backslash (GitHub#53). * Execute pre/post-sync hooks during synchronizations toggled by IMAP IDLE message processing. (maxgerer@gmail.com) * Catch unsuccessful local mail uploads when IMAP server responds with "NO" status; that resulted in a loss of such local messages. (Adam Spiers) * Don't create folders if readonly is enabled. * Learn to deal with readonly folders to properly detect this condition and act accordingly. One example is Gmail's "Chats" folder that is read-only, but contains logs of the quick chats. (E. Ryabinkin) * Fix str.format() calls for Python 2.6 (D. Logie) * Remove APPENDUID hack, previously introduced to fix Gmail, no longer necessary, it might have been breaking things. (J. Wiegley) * Improve regex that could lead to 'NoneType' object has no attribute 'group' (D. Franke) * Improved error throwing on repository misconfiguration ### OfflineIMAP v6.5.4 (2012-06-02) * bump bundled imaplib2 library 2.29 --> 2.33 * Actually perform the SSL fingerprint check (reported by J. Cook) * Curses UI, don't use colors after we shut down curses already (C.Höger) * Document that '%' needs encoding as '%%' in configuration files. * Fix crash when IMAP.quickchanged() led to an Error (reported by sharat87) * Implement the createfolders setting to disable folder propagation (see docs) ### OfflineIMAP v6.5.3.1 (2012-04-03) * Don't fail if no dry-run setting exists in offlineimap.conf (introduced in 6.5.3) ### OfflineIMAP v6.5.3 (2012-04-02) * --dry-run mode protects us from performing any actual action. It will not precisely give the exact information what will happen. If e.g. it would need to create a folder, it merely outputs "Would create folder X", but not how many and which mails it would transfer. * internal code changes to prepare for Python3 * Improve user documentation of nametrans/folderfilter * Fixed some cases where invalid nametrans rules were not caught and we would not propagate local folders to the remote repository. 
(now tested in test03) * Revert "* Slight performance enhancement uploading mails to an IMAP server in the common case." It might have led to instabilities. * Revamped documentation structure. `make` in the `docs` dir or `make doc` in the root dir will now create the 1) man page and 2) the user documentation using sphinx (requiring python-doctools, and sphinx). The resulting user docs are in `docs/html`. You can also only create the man pages with `make man` in the `docs` dir. * -f command line option only works on the untranslated remote repository folder names now. Previously folderfilters had to match both the local AND remote name which caused unwanted behavior in combination with nametrans rules. Clarify in the help text. * Some better output when using nonsensical configuration settings * Improve compatability of the curses UI with python 2.6 ### OfflineIMAP v6.5.2.1 (2012-04-04) * Fix python2.6 compatibility with the TTYUI backend (crash) * Fix TTYUI regression from 6.5.2 in refresh loop (crash) * Fix crashes related to UIDVALIDITY returning "None" * Beginning of a test suite. So far there is only one test. Configure test/credentials.conf and invoke with "python setup.py test" * Make folders containing quotes work rather than crashing (reported by Mark Eichin) * Improve delete msg performance with SQLITE backend * Enforce basic UI when using the --info switch * Remove the Gmail "realdelete" option, as it could lead to potential data loss. ### OfflineIMAP v6.5.2 (2012-01-17) * Gmail "realdelete" option is considered harmful and has the potential for data loss. Analysis at http://article.gmane.org/gmane.mail.imap.offlineimap.general/5265 Warnings were added to offlineimap.conf * Rather than write out the nametrans'lated folder names for mbnames, we now write out the local untransformed box names. This is generally what we want. This became relevant since we support nametrans rules on the local side since only a short time. Reported by Paul Collignan. * Some sanity checks and improved error messages. * Revert 6.5.1.1 change to use public imaplib2 function, it was reported to not always work. * Don't fail when ~/netrc is not readable by us. * Don't emit noisy regular sleeping announcements in Basic UI. ### OfflineIMAP v6.5.1.2 (2012-01-07) - "Baby steps" Smallish bug fixes that deserve to be put out. * Fix possible crash during --info run * Fix reading in Maildirs, where we would attempt to create empty directories on REMOTE. * Do not attempt to sync lower case custom Maildir flags. We do not support them (yet) (this prevents many scary bogus sync messages) * Add filter information to the filter list in --info output ### OfflineIMAP v6.5.1.1 (2012-01-07) - "Das machine control is nicht fur gerfinger-poken und mittengrabben" Blinkenlights UI 6.5.0 regression fixes only. * Sleep led to crash ('abort_signal' not existing) * Make exit via 'q' key work again cleanly ### OfflineIMAP v6.5.1 (2012-01-07) - "Quest for stability" * Fixed Maildir regression "flagmatchre" not found. (regressed in 6.5.0) * Have console output go by default to STDOUT and not STDERR (regression in 6.5.0) * Fixed MachineUI to urlencode() output lines again, rather than outputting multi-line items. It's ugly as hell, but it had been that way for years. * Remove the old global locking system. We lock only the accounts that we currently sync, so you can invoke OfflineImap multiple times now as long as you sync different accounts. 
This system is compatible with all releases >= 6.4.0, so don't run older releases simultanous to this one. ### OfflineIMAP v6.5.0 (2012-01-06) This is a CRITICAL bug fix release for everyone who is on the 6.4.x series. Please upgrade to avoid potential data loss! The version has been bumped to 6.5.0, please let everyone know that the 6.4.x series is problematic. * Uploading multiple emails to an IMAP server would lead to wrong UIDs being returned (ie the same for all), which confused offlineimap and led to recurrent upload/download loops and inconsistencies in the IMAP<->IMAP uid mapping. * Uploading of Messages from Maildir and IMAP<->IMAP has been made more efficient by renaming files/mapping entries, rather than actually loading and saving the message under a new UID. * Fix regression that broke MachineUI ### OfflineIMAP v6.4.4 (2012-01-06) This is a bugfix release, fixing regressions occurring in or since 6.4.0. * Fix the missing folder error that occured when a new remote folder was detected (IMAP<->Maildir) * Possibly fixed bug that prevented us from ever re-reading Maildir folders, so flag changes and deletions were not detected when running in a refresh loop. This is a regression that was introduced in about 6.4.0. * Never mangle maildir file names when using nonstandard Maildir flags (such as 'a'), note that they will still be deleted as they are not supported in the sync to an IMAP server. ### OfflineIMAP v6.4.3 (2012-01-04) #### New Features * add a --info command line switch that outputs useful information about the server and the configuration for all enabled accounts. #### Changes * Reworked logging which was reported to e.g. not flush output to files often enough. User-visible changes: a) console output goes to stderr (for now). b) file output has timestamps and looks identical in the basic and ttyui UIs. c) File output should be flushed after logging by default (do report if not). * Bumped bundled imaplib2 to release 2.29 * Make ctrl-c exit cleanly rather aborting brutally (which could leave around temporary files, half-written cache files, etc). Exiting on SIGTERM and CTRL-C can take a little longer, but will be clean. ### OfflineIMAP v6.4.2 (2011-12-01) * IMAP<->IMAP sync with a readonly local IMAP repository failed with a rather mysterious "TypeError: expected a character buffer object" error. Fix this my retrieving the list of folders early enough even for readonly repositories. * Fix regression from 6.4.0. When using local Maildirs with "/" as a folder separator, all folder names would get a trailing slash appended, which is plain wrong. ### OfflineIMAP v6.4.1 (2011-11-17) #### Changes * Indicate progress when copying many messages (slightly change log format) * Output how long an account sync took (min:sec). #### Bug Fixes * Syncing multiple accounts in single-threaded mode would fail as we try to "register" a thread as belonging to two accounts which was fatal. Make it non-fatal (it can be legitimate). * New folders on the remote would be skipped on the very sync run they are created and only by synced in subsequent runs. Fixed. * a readonly parameter to select() was not always treated correctly, which could result in some folders being opened read-only when we really needed read-write. ### OfflineIMAP v6.4.0 (2011-09-29) This is the first stable release to support the forward-compatible per-account locks and remote folder creation that has been introduced in the 6.3.5 series. 
* Various regression and bug fixes from the last couple of RCs ### OfflineIMAP v6.3.5-rc3 (2011-09-21) #### Changes * Refresh server capabilities after login, so we know that Gmail supports UIDPLUS (it only announces that after login, not before). This prevents us from adding custom headers to Gmail uploads. #### Bug Fixes * Fix the creation of folders on remote repositories, which was still botched on rc2. ### OfflineIMAP v6.3.5-rc2 (2011-09-19) #### New Features * Implement per-account locking, so that it will possible to sync different accounts at the same time. The old global lock is still in place for backward compatibility reasons (to be able to run old and new versions of OfflineImap concurrently) and will be removed in the future. Starting with this version, OfflineImap will be forward-compatible with the per-account locking style. * Implement RFC 2595 LOGINDISABLED. Warn the user and abort when we attempt a plaintext login but the server has explicitly disabled plaintext logins rather than crashing. * Folders will now also be automatically created on the REMOTE side of an account if they exist on the local side. Use the folderfilters setting on the local side to prevent some folders from migrating to the remote side. Also, if you have a nametrans setting on the remote repository, you might need a nametrans setting on the local repository that leads to the original name (reverse nametrans). #### Changes * Documentation improvements concerning 'restoreatime' and some code cleanup * Maildir repositories now also respond to folderfilter= configurations. #### Bug Fixes * New emails are not created with "-rwxr-xr-x" but as "-rw-r--r--" anymore, fixing a regression in 6.3.4. ### OfflineIMAP v6.3.5-rc1 (2011-09-12) #### Notes Idle feature and SQLite backend leave the experimental stage! ,-) #### New Features * When a message upload/download fails, we do not abort the whole folder synchronization, but only skip that message, informing the user at the end of the sync run. * If you connect via ssl and 'cert_fingerprint' is configured, we check that the server certificate is actually known and identical by comparing the stored sha1 fingerprint with the current one. #### Changes * Refactor our IMAPServer class. Background work without user-visible changes. * Remove the configurability of the Blinkenlights statuschar. It cluttered the main configuration file for little gain. * Updated bundled imaplib2 to version 2.28. #### Bug Fixes * We protect more robustly against asking for inexistent messages from the IMAP server, when someone else deletes or moves messages while we sync. * Selecting inexistent folders specified in folderincludes now throws nice errors and continues to sync with all other folders rather than exiting offlineimap with a traceback. ### OfflineIMAP v6.3.4 (2011-08-10) #### Notes Here we are. A nice release since v6.3.3, I think. #### Changes * Handle when UID can't be found on saved messages. ### OfflineIMAP v6.3.4-rc4 (2011-07-27) #### Notes There is nothing exciting in this release. This is somewhat expected due to the late merge on -rc3. #### New Features * Support maildir for Windows. #### Changes * Manual improved. ### OfflineIMAP v6.3.4-rc3 (2011-07-07) #### Notes Here is a surprising release. :-) As expected we have a lot bug fixes in this round (see git log for details), including a fix for a bug we had for ages (details below) which is a very good news. What makes this cycle so unusual is that I merged a feature to support StartTLS automatically (thanks Sebastian!). 
Another very good news. We usually don't do much changes so late in a cycle. Now, things are highly calming down and I hope a lot of people will test this release. Next one could be the stable! #### New Features * Added StartTLS support, it will automatically be used if the server supports it. #### Bug Fixes * We protect more robustly against asking for inexistent messages from the IMAP server, when someone else deletes or moves messages while we sync. ### OfflineIMAP v6.3.4-rc2 (2011-06-15) #### Notes This was a very active rc1 and we could expect a lot of new fixes for the next release. The most important fix is about a bug that could lead to data loss. Find more information about his bug here: http://permalink.gmane.org/gmane.mail.imap.offlineimap.general/3803 The IDLE support is merged as experimental feature. #### New Features * Implement experimental IDLE feature. #### Changes * Maildirs use less memory while syncing. #### Bug Fixes * Saving to Maildirs now checks for file existence without race conditions. * A bug in the underlying imap library has been fixed that could potentially lead to data loss if the server interrupted responses with unexpected but legal server status responses. This would mainly occur in folders with many thousands of emails. Upgrading from the previous release is strongly recommended. ### OfflineIMAP v6.3.4-rc1 (2011-05-16) #### Notes Welcome to the v6.3.4 pre-release cycle. Your favorite IMAP tool wins 2 new features which were asked for a long time: * an experimental SQL-based backend for the local cache; * one-way synchronization cabability. Logic synchronization is reviewed and simplified (from 4 to 3 passes) giving improved performance. Lot of work was done to give OfflineIMAP a better code base. Raised errors can now rely on a new error system and should become the default in the coming releases. As usual, we ask our users to test this release as much as possible, especially the SQL backend. Have fun! #### New Features * Begin sphinx-based documentation for the code. * Enable 1-way synchronization by settting a [Repository ...] to readonly = True. When e.g. using offlineimap for backup purposes you can thus make sure that no changes in your backup trickle back into the main IMAP server. * Optional: experimental SQLite-based backend for the LocalStatus cache. Plain text remains the default. #### Changes * Start a enhanced error handling background system. This is designed to not stop a whole sync process on all errors (not much used, yet). * Documentation improvements: the FAQ wins new entries and add a new HACKING file for developers. * Lot of code cleanups. * Reduced our sync logic from 4 passes to 3 passes (integrating upload of "new" and "existing" messages into one function). This should result in a slight speedup. * No whitespace is stripped from comma-separated arguments passed via the -f option. * Give more detailed error when encountering a corrupt UID mapping file. #### Bug Fixes * Drop connection if synchronization failed. This is needed if resuming the system from suspend mode gives a wrong connection. * Fix the offlineimap crash when invoking debug option 'thread'. * Make 'thread' command line option work. ### OfflineIMAP v6.3.3 (2011-04-24) #### Notes Make this last candidate cycle short. It looks like we don't need more tests as most issues were raised and solved in the second round. Also, we have huge work to merge big and expected features into OfflineIMAP. Thanks to all contributors, again. 
With such a contribution rate, we can release stable faster. I hope it will be confirmed in the longer run! #### Changes * Improved documentation for querying password. ### OfflineIMAP v6.3.3-rc3 (2011-04-19) #### Notes It's more than a week since the previous release. Most of the issues raised were discussed and fixed since last release. I think we can be glad and confident for the future while the project live his merry life. #### Changes * The -f option did not work with Folder names with spaces. It works now, use with quoting e.g. -f "INBOX, Deleted Mails". * Improved documentation. * Bump from imaplib2 v2.20 to v2.22. * Code refactoring. #### Bug Fixes * Fix IMAP4 tunnel with imaplib2. ### OfflineIMAP v6.3.3-rc2 (2011-04-07) #### Notes We are now at the third week of the -rc1 cycle. I think it's welcome to begin the -rc2 cycle. Things are highly calming down in the code even if we had much more feedbacks than usual. Keep going your effort! I'd like to thank reporters who involved in this cycle: - Баталов Григорий - Alexander Skwar - Christoph Höger - dtk - Greg Grossmeier - h2oz7v - Iain Dalton - Pan Tsu - Vincent Beffara - Will Styler (my apologies if I forget somebody) ...and all active developers, of course! The imaplib2 migration looks to go the right way to be definetly released but still needs more tests. So, here we go... #### Changes * Increase compatability with Gmail servers which claim to not support the UIDPLUS extension but in reality do. #### Bug Fixes * Fix hang when using Ctrl+C in some cases. ### OfflineIMAP v6.3.3-rc1 (2011-03-16) #### Notes Here is time to begin the tests cycle. If feature topics are sent, I may merge or delay them until the next stable release. Main change comes from the migration from imaplib to imaplib2. It's internal code changes and doesn't impact users. UIDPLUS and subjectAltName for SSL are also great improvements. This release includes a hang fix due to infinite loop. Users seeing OfflineIMAP hang and consuming a lot of CPU are asked to update. That beeing said, this is still an early release candidate you should use for non-critical data only! #### New Features * Implement UIDPLUS extension support. OfflineIMAP will now not insert an X-OfflineIMAP header if the mail server supports the UIDPLUS extension. * SSL: support subjectAltName. #### Changes * Use imaplib2 instead of imaplib. * Makefile use magic to find the version number. * Rework the repository module * Change UI names to Blinkenlights,TTYUI,Basic,Quiet,MachineUI. Old names will still work, but are deprecated. Document that we don't accept a list of UIs anymore. * Reworked the syncing strategy. The only user-visible change is that blowing away LocalStatus will not require you to redownload ALL of your mails if you still have the local Maildir. It will simply recreate LocalStatus. * TTYUI ouput improved. * Code cleanups. #### Bug Fixes * Fix ignoring output while determining the rst2xxx command name to build documentation. * Fix hang because of infinite loop reading EOF. * Allow SSL connections to send keep-alive messages. * Fix regression (UIBase is no more). * Make profiling mode really enforce single-threading * Do not send localized date strings to the IMAP server as it will either ignore or refuse them. ### OfflineIMAP v6.3.2 (2010-02-21) #### Notes First of all I'm really happy to announce our new official `website `_. Most of the work started from the impulse of Philippe LeCavalier with the help of Sebastian Spaeth and other contributors. Thanks to everybody. 
In this release, we are still affected by the "SSL3 write pending" bug, but I think enough time was spent trying to fix it. We have our first entry in the "KNOWN BUG" section of the manual about that. I'm afraid it could impact a lot of users if some distributions package an SSL library that does not meet the underlying (still obscure) requirements. Distribution maintainers should be careful about it. I hope this release will help us to get more reports. This release will also be the root of our long maintenance support. Other bugs were fixed. #### Bug Fixes * Fix crashes for getglobalui(). * Fix documentation build. * Restore compatibility with python 2.5. ### OfflineIMAP v6.3.2-rc3 (2011-02-06) #### Notes We are still affected by the "SSL3 write pending" bug, which it would be really nice to fix before releasing the coming stable. In the worst case, we'll have to add the first entry in the "KNOWN BUG" section of the manual. I'm afraid it could impact a lot of users if some distributions package an SSL library that does not meet the underlying (still obscure) requirements. The best news with this release are the fixed Curses UI and the better error reports. In this release I won't merge any patch not fixing a bug or a security issue. More feedback on the main issue would be appreciated. #### Changes * Sample offlineimap.conf states it expects a PEM-formatted certificate. * Give better trace information if an error occurs. * Have --version ONLY print the version number. * Code cleanups. #### Bug Fixes * Fix Curses UI (simplified by moving from MultiLock to Rlock implementation). * Makefile: the docutils build works whether the python extension command is stripped or not. * Makefile: clean now removes HTML documentation files. ### OfflineIMAP v6.3.2-rc2 (2010-12-21) #### Notes We are beginning a new test cycle. At this stage, I expect most people will try hard to get OfflineIMAP stuck. :-) #### New Features * The Makefile learns to build the package and makes it the default target. * Introduce a Changelog to involve the community in the release process. * Migrate documentation to restructuredtext. #### Changes * Improve CustomConfig documentation. * Imply single threading mode in debug mode except for "-d thread". * Code and import cleanups. * Allow UI to have arbitrary names. * Code refactoring around UI and UIBase. * Improve version management and make it easier. * Introduce a true single threading mode. #### Bug Fixes * Understand multiple EXISTS replies from servers like Zimbra. * Only verify hostname if we actually use CA cert. * Fix ssl ca-cert in the sample configuration file. * Fix 'Ctrl+C' interruptions in threads. * Fix makefile clean for files having whitespaces. * Fix makefile to not remove unrelated files. * Fixes in README. * Remove unneeded files. ### OfflineIMAP v6.3.2-rc1 (2010-12-19) #### Notes We are beginning a test cycle. If feature topics are sent, I may merge or delay them until the next stable release. #### New Features * Primitive implementation of SSL certificate checking. #### Changes * Use OptionParser instead of getopts. * Code cleanups. #### Bug Fixes * Fix reading password from UI. ### OfflineIMAP v6.3.1 (2010-12-11) #### Notes Yes, I know I've just announced the v6.3.0 in the same week. As said, it was not really a true release for the software. This last release includes fixes and improvements that it might be nice to update to. Thanks to everybody who helped to make this release with patches and tips through the mailing list. This is clearly a release they own. #### Changes * cProfile becomes the default profiler.
Sebastian Spaeth did refactoring to prepare to the coming unit test suites. * UI output formating enhanced. * Some code cleanups. #### Bug Fixes * Fix possible overflow while working with Exchange. * Fix time sleep while exiting threads. ### OfflineIMAP v6.3.0 (2010-12-09) #### Notes This release is more "administrative" than anything else and mainly marks the change of the maintainer. New workflow and policy for developers come in. BTW, I don't think I'll maintain debian/changelog. At least, not in the debian way. Most users and maintainers may rather want to skip this release. #### Bug Fixes * Fix terminal display on exit. * netrc password authentication. * User name querying from netrc. offlineimap-6.6.1/MAINTAINERS.rst000066400000000000000000000007431264010144500163220ustar00rootroot00000000000000.. -*- coding: utf-8 -*- Official maintainers ==================== Eygene Ryabinkin email: rea at freebsd.org github: konvpalto Sebastian Spaeth email: sebastian at sspaeth.de github: spaetz Nicolas Sebrecht email: nicolas.s-dev at laposte.net github: nicolas33 Mailing List maintainers ======================== Eygene Ryabinkin email: rea at freebsd.org Sebastian Spaeth email: sebastian at sspaeth.de Nicolas Sebrecht email: nicolas.s-dev at laposte.net offlineimap-6.6.1/MANIFEST.in000066400000000000000000000005171264010144500155530ustar00rootroot00000000000000global-exclude .gitignore .git *.bak *.orig *.rej include setup.py include COPYING include Changelog* include MAINTAINERS include MANIFEST.in include Makefile include README.md include offlineimap.conf* include offlineimap.py recursive-include offlineimap *.py recursive-include bin * recursive-include docs * recursive-include test * offlineimap-6.6.1/Makefile000066400000000000000000000037461264010144500154640ustar00rootroot00000000000000# Copyright (C) 2002 - 2006 John Goerzen # # # This program is free software; you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation; either version 2 of the License, or # (at your option) any later version. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA VERSION=$(shell ./offlineimap.py --version) ABBREV=$(shell git log --format='%h' HEAD~1..) TARGZ=offlineimap-$(VERSION)-$(ABBREV) SHELL=/bin/bash RST2HTML=`type rst2html >/dev/null 2>&1 && echo rst2html || echo rst2html.py` all: build build: python setup.py build @echo @echo "Build process finished, run 'python setup.py install' to install" \ "or 'python setup.py --help' for more information". clean: -python setup.py clean --all -rm -f bin/offlineimapc 2>/dev/null -find . -name '*.pyc' -exec rm -f {} \; -find . -name '*.pygc' -exec rm -f {} \; -find . -name '*.class' -exec rm -f {} \; -find . -name '.cache*' -exec rm -f {} \; -rm -f manpage.links manpage.refs 2>/dev/null -find . -name auth -exec rm -vf {}/password {}/username \; @$(MAKE) -C clean docs: @$(MAKE) -C docs websitedoc: @$(MAKE) -C websitedoc targz: ../$(TARGZ) ../$(TARGZ): cd .. 
&& tar -zhcv --transform s,^offlineimap,$(TARGZ), -f $(TARGZ).tar.gz --exclude '*.pyc' offlineimap/{bin,Changelog.md,contrib,CONTRIBUTING.rst,COPYING,docs,MAINTAINERS.rst,MANIFEST.in,offlineimap,offlineimap.conf,offlineimap.conf.minimal,offlineimap.py,README.md,scripts,setup.py,test,TODO.rst} rpm: targz cd .. && sudo rpmbuild -ta $(TARGZ) offlineimap-6.6.1/README.md000066400000000000000000000061601264010144500152740ustar00rootroot00000000000000[offlineimap]: http://github.com/OfflineIMAP/offlineimap [website]: http://offlineimap.org [wiki]: http://github.com/OfflineIMAP/offlineimap/wiki [blog]: http://offlineimap.org/posts.html # OfflineIMAP ## Description OfflineIMAP is a software to dispose your e-mail mailbox(es) as a **local Maildir**. OfflineIMAP will synchronize both sides via *IMAP*. The main downside about IMAP is that you have to **trust** your email provider to not lose your mails. This is not something impossible while not very common. With OfflineIMAP, you can download your Mailboxes and make you own backups of the [Maildir](https://en.wikipedia.org/wiki/Maildir). This allows reading your email while offline without the need for the mail reader (MUA) to support IMAP disconnected operations. Need an attachment from a message without internet connection? It's fine, the message is still there. ## License GNU General Public License v2. ## Why should I use OfflineIMAP? * It is **fast**. * It is **reliable**. * It is **flexible**. * It is **safe**. ## Project status and future > As one of the maintainer of OfflineIMAP, I'd like to put my efforts into > [imapfw](http://github.com/OfflineIMAP/imapfw). **imapfw** is a software in > development that I intend to replace OfflineIMAP in the long term. > > That's why I'm not going to do development in OfflineIMAP. I continue to do > the minimal maintenance job in OfflineIMAP: fixing small bugs, (quick) > reviewing/merging patches and rolling out new releases, but that's all. > > While I keep tracking issues for OfflineIMAP, you should not expect support > from me anymore. > > You won't be left at the side. OfflineIMAP's community is large enough so that > you'll find people for most of your issues. > > Get news from the [blog][blog]. > > Nicolas Sebrecht. ,-) ## Downloads You should first check if your distribution already packages OfflineIMAP for you. Downloads releases as [tarball or zipball](https://github.com/OfflineIMAP/offlineimap/tags). ## Feedbacks and contributions **The user discussions, development, announcements and all the exciting stuff take place on the mailing list.** While not mandatory to send emails, you can [subscribe here](http://lists.alioth.debian.org/mailman/listinfo/offlineimap-project). Bugs, issues and contributions can be requested to both the mailing list or the [official Github project][offlineimap]. ## The community * OfflineIMAP's main site is the [project page at Github][offlineimap]. * There is the [OfflineIMAP community's website][website]. * And finally, [the wiki][wiki]. ## Requirements * Python v2.7 * Python SQlite (optional while recommended) * Python json and urllib (used for XOAuth2 authentication) ## Documentation All the current and updated documentation is at the [community's website][website]. ### Read documentation locally You might want to read the documentation locally. Get the sources of the website. 
For the other documentation, run the appropriate make target: ``` $ ./scripts/get-repository.sh website $ cd docs $ make html # Requires rst2html $ make man # Requires a2x $ make api # Requires sphinx ``` offlineimap-6.6.1/TODO.rst000066400000000000000000000103551264010144500153150ustar00rootroot00000000000000.. vim: spelllang=en ts=2 expandtab : .. _coding style: https://github.com/OfflineIMAP/offlineimap/blob/next/docs/CodingGuidelines.rst ============================ TODO list by relevance order ============================ Should be the starting point to improve the `coding style`_. Write your WIP directly in this file. TODO list --------- * Better names for variables, objects, etc. * Improve comments. Most of the current comments assume a very good knowledge of the internals. That sucks because I guess nobody is anymore aware of ALL of them. Time when this was a one guy made project has long passed. * Better policy on objects. - Turn ALL attributes private and use accessors. This is not "pythonic" but such pythonic thing turn the code into intricated code. - Turn ALL methods not intended to be used outside, private. * Revamp the factorization. It's not unusual to find "factorized" code for bad reasons: because it made the code /look/ nicer, but the factorized function/methods is actually called from ONE place. While it might locally help, such practice globally defeat the purpose because we lose the view of what is true factorized code and what is not. * Namespace the factorized code. If a method require a local function, DON'T USE yet another method. Use a local namespaced function.:: class BLah(object): def _internal_method(self, arg): def local_factorized(local_arg): # local_factorized's code # _internal_method's code. Python allows local namespaced functions for good reasons. * Better inheritance policy. Take the sample of the folder/LocalStatus(SQlite) and folder/Base stuffs. It's *nearly IMPOSSIBLE* to know and understand what parent method is used by what child, for what purpose, etc. So, instead of (re)defining methods in the wild, keep the well common NON-redefined stuff into the parent and define the required methods in the childs. We really don't want anything like:: def method(self): raise NotImplemented While this is common practice in Python, think about that again: how a parent object should know all the expected methods/accessors of all the possible kind of childs? Inheritance is about factorizing, certainly **NOT** about **defining the interface** of the childs. * Introduce as many as intermediate inherited objects as required. Keeping linear inheritance is good because Python sucks at playing with multiple parents and it keeps things simple. But a parent should have ALL its methods used in ALL the childs. If not, it's a good sign that a new intermediate object should be introduced in the inheritance line. * Don't blindly inherit from library objects. We do want **well defined interfaces**. For example, we do too much things like imapobj.methodcall() while the imapobj is far inherited from imaplib2. We have NO clue about what we currently use from the library. Having a dump wrappper for each call should be made mandatory for objects inherited from a library. Using composed objects should be seriously considered in this case, instead of using inheritance. * Use factories. Current objects do too much initialization stuff varying with the context it is used. Move things like that into factories and keep the objects definitions clean. 
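  For example, such a factory could look roughly like this (a sketch only: ``FolderFactorySketch``, ``FolderSketch`` and the exact configuration lookups are hypothetical illustrations, not existing OfflineIMAP APIs)::

    class FolderSketch(object):
        """A stand-in for a real folder class; it only holds a name here."""
        def __init__(self, name):
            self.name = name
            self.readonly = False

    class FolderFactorySketch(object):
        """Build fully-initialized folder objects so the folder classes
        themselves stay free of context-dependent setup code."""

        def __init__(self, config, repository_name):
            self.config = config                    # a ConfigParser-like object
            self.repository_name = repository_name

        def create(self, folder_name):
            folder = FolderSketch(folder_name)
            # Context-dependent initialization happens here, not inside the
            # folder object itself.
            folder.readonly = self.config.getboolean(
                'Repository ' + self.repository_name, 'readonly')
            return folder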
* Make it clear when we expect a composite object and what we expect exactly. Even the more obvious composed objects are badly defined. For example, the ``conf`` instances are spread across a lot of objects. Did you know that such composed objects are sometimes restricted to the section the object works on, and most of the time it's not restricted at all? How many time it requires to find and understand on what we are currently working? * Seriously improve our debugging/hacking sessions (AGAIN). Until now, we have limited the improvements to allow better/full stack traces. While this was actually required, we now hit some limitations of the whole exception-based paradigm. For example, it's very HARD to follow an instance during its life time. I have a good overview of what we could do in this area, so don't matter much about that if you don't get the point or what could be done. * Support Unicode. offlineimap-6.6.1/bin/000077500000000000000000000000001264010144500145625ustar00rootroot00000000000000offlineimap-6.6.1/bin/offlineimap000077500000000000000000000016421264010144500170040ustar00rootroot00000000000000#!/usr/bin/env python # Startup from system-wide installation # Copyright (C) 2002 - 2009 John Goerzen # # # This program is free software; you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation; either version 2 of the License, or # (at your option) any later version. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA from offlineimap import OfflineImap oi = OfflineImap() oi.run() offlineimap-6.6.1/contrib/000077500000000000000000000000001264010144500154525ustar00rootroot00000000000000offlineimap-6.6.1/contrib/release.sh000077500000000000000000000227131264010144500174360ustar00rootroot00000000000000#!/bin/sh # # Put into Public Domain, by Nicolas Sebrecht # # Create new releases in OfflineIMAP. # TODO: https://developer.github.com/v3/repos/releases/#create-a-release # https://developer.github.com/libraries/ # https://github.com/turnkeylinux/octohub # https://github.com/michaelliao/githubpy (onefile) # https://github.com/sigmavirus24/github3.py # https://github.com/copitux/python-github3 # https://github.com/PyGithub/PyGithub # https://github.com/micha/resty (curl) # TODO: move configuration out and source it. # TODO: implement rollback. 
__VERSION__='v0.3' SPHINXBUILD=sphinx-build MAILING_LIST='offlineimap-project@lists.alioth.debian.org' DOCSDIR='docs' ANNOUNCE_MAGIC='#### Notes ' CHANGELOG_MAGIC='{:toc}' CHANGELOG='Changelog.md' CACHEDIR='.git/offlineimap-release' WEBSITE='website' WEBSITE_LATEST="${WEBSITE}/_data/latest.yml" TMP_CHANGELOG_EXCERPT="${CACHEDIR}/changelog.excerpt.md" TMP_CHANGELOG_EXCERPT_OLD="${TMP_CHANGELOG_EXCERPT}.old" TMP_CHANGELOG="${CACHEDIR}/changelog.md" TMP_ANNOUNCE="${CACHEDIR}/announce.txt" True=0 False=1 Yes=$True No=$False DEBUG=$True # # $1: EXIT_CODE # $2..: message function die () { n=$1 shift echo $* exit $n } function debug () { if test $DEBUG -eq $True then echo "DEBUG: $*" >&2 fi } # # $1: question # $2: message on abort # function ask () { echo echo -n "--- $1 " read -r ans test "n$ans" = 'n' -o "n$ans" = 'ny' && return $Yes test "n$ans" = "ns" -o "n$ans" = 'nn' && return $No die 1 "! $2" } # # $1: message # $1: path to file # function edit_file () { ask "Press Enter to $1" test $? -eq $Yes && { $EDITOR "$2" reset } } function fix_pwd () { debug 'in fix_pwd' cd "$(git rev-parse --show-toplevel)" || \ die 2 "cannot determine the root of the repository" } function prepare_env () { debug 'in prepare_env' mkdir "$CACHEDIR" 2>/dev/null test ! -d "$CACHEDIR" && die 5 "Could not make cache directory $CACHEDIR" } function check_dirty () { debug 'in check_dirty' git diff --quiet 2>/dev/null && git diff --quiet --cached 2>/dev/null || { die 4 "Commit all your changes first!" } } function welcome () { debug 'in welcome' cat <' : yes, continue - 'n' : no - 's' : skip (ONLY where applicable, otherwise continue) Any other key will abort the program. EOF ask 'Ready?' } function checkout_next () { debug 'in checkout_next' git checkout --quiet next || { die 6 "Could not checkout 'next' branch" } } function get_version () { debug 'in get_version' echo "v$(./offlineimap.py --version)" } function update_offlineimap_version () { debug 'in update_offlineimap_version' edit_file 'update the version in __init__.py' offlineimap/__init__.py } # # $1: previous version # function get_git_history () { debug 'in get_git_history' git log --oneline "${1}.." | sed -r -e 's,^(.),\- \1,' } # # $1: new version # $2: shortlog function changelog_template () { debug 'in changelog_template' cat < "$TMP_CHANGELOG_EXCERPT" get_git_history "$2" >> "$TMP_CHANGELOG_EXCERPT" edit_file "the Changelog excerpt" $TMP_CHANGELOG_EXCERPT # Remove comments. grep -v '//' "$TMP_CHANGELOG_EXCERPT" > "${TMP_CHANGELOG_EXCERPT}.nocomment" mv -f "${TMP_CHANGELOG_EXCERPT}.nocomment" "$TMP_CHANGELOG_EXCERPT" fi # Write new Changelog. cat "$CHANGELOG" > "$TMP_CHANGELOG" debug "include excerpt $TMP_CHANGELOG_EXCERPT to $TMP_CHANGELOG" sed -i -e "/${CHANGELOG_MAGIC}/ r ${TMP_CHANGELOG_EXCERPT}" "$TMP_CHANGELOG" debug 'remove trailing whitespaces' sed -i -r -e 's, +$,,' "$TMP_CHANGELOG" # Remove trailing whitespaces. debug "copy to $TMP_CHANGELOG -> $CHANGELOG" cp -f "$TMP_CHANGELOG" "$CHANGELOG" # Check and edit Changelog. ask "Next step: you'll be asked to review the diff of $CHANGELOG" while true do git diff -- "$CHANGELOG" | less ask 'edit Changelog?' $CHANGELOG test ! $? -eq $Yes && break # Asked to edit the Changelog; will loop again. 
$EDITOR "$CHANGELOG" done } # # $1: new version # function git_release () { debug 'in git_release' git commit -as -m"$1" git tag -a "$1" -m"$1" git checkout master git merge next git checkout next } function get_last_rc () { git tag | grep -E '^v([0-9][\.-]){3}rc' | sort -n | tail -n1 } function get_last_stable () { git tag | grep -E '^v([0-9][\.])+' | grep -v '\-rc' | sort -n | tail -n1 } function update_website_releases_info() { cat > "$WEBSITE_LATEST" < /dev/null 2>&1 if test ! $? -eq 0 then echo "Oops! you don't have $SPHINXBUILD installed?" echo "Cannot update the webite documentation..." echo "You should install it and run:" echo " $ cd docs" echo " $ make websitedoc" echo "Then, commit and push changes of the website." ask 'continue' return fi # Check website sources are available. cd website if test ! $? -eq 0 then echo "ERROR: cannot go to the website sources" ask 'continue' return fi # Stash any WIP in the website sources. git diff --quiet 2>/dev/null && git diff --quiet --cached 2>/dev/null || { echo "There is WIP in the website repository, stashing" echo "git stash create 'WIP during offlineimap API import'" git stash create 'WIP during offlineimap API import' ask 'continue' } cd .. # Back to offlineimap.git. update_website_releases_info cd "./$DOCSDIR" # Enter the docs directory in offlineimap.git. # Build the docs! make websitedoc && { # Commit changes in a branch. cd ../website # Enter the website sources. branch_name="import-$1" git checkout -b "$branch_name" git add '_doc/versions' git commit -a -s -m"update for offlineimap $1" echo "website: branch '$branch_name' ready for a merge in master!" } ask 'website updated locally; continue' fi } function git_username () { git config --get user.name } function git_usermail () { git config --get user.email } # # $1: new version # function announce_header () { cat < Date: $(git log HEAD~1.. --oneline --pretty='%cD') From: $(git_username) <$(git_usermail)> To: $MAILING_LIST Subject: [ANNOUNCE] OfflineIMAP $1 released OfflineIMAP $1 is out. Downloads: http://github.com/OfflineIMAP/offlineimap/archive/${1}.tar.gz http://github.com/OfflineIMAP/offlineimap/archive/${1}.zip EOF } function announce_footer () { cat < "$TMP_ANNOUNCE" grep -v '^### OfflineIMAP' "$TMP_CHANGELOG_EXCERPT" | \ grep -v '^#### Notes' >> "$TMP_ANNOUNCE" sed -i -r -e "s,^$ANNOUNCE_MAGIC,," "$TMP_ANNOUNCE" sed -i -r -e "s,^#### ,# ," "$TMP_ANNOUNCE" announce_footer >> "$TMP_ANNOUNCE" } function edit_announce () { edit_file 'edit announce' "$TMP_ANNOUNCE" } # # run # function run () { debug 'in run' fix_pwd check_dirty prepare_env checkout_next clear welcome if test -f "$TMP_CHANGELOG_EXCERPT" then head "$TMP_CHANGELOG_EXCERPT" ask "A previous Changelog excerpt (head above) was found, use it?" if test ! $? -eq $Yes then mv -f "$TMP_CHANGELOG_EXCERPT" "$TMP_CHANGELOG_EXCERPT_OLD" fi fi previous_version="$(get_version)" message="Safety check: release after version:" ask "$message $previous_version ?" update_offlineimap_version new_version="$(get_version)" ask "Safety check: make a new release with version: '$new_version'" "Clear changes and restart" update_changelog "$new_version" "$previous_version" build_announce "$new_version" "$previous_version" edit_announce git_release $new_version update_website $new_version } run cat < master:master - git push next:next - git push $new_version - cd website - git checkout master - git merge $branch_name - git push master:master - cd .. - git send-email $TMP_ANNOUNCE Have fun! 
,-) EOF # vim: expandtab ts=2 : offlineimap-6.6.1/contrib/systemd/000077500000000000000000000000001264010144500171425ustar00rootroot00000000000000offlineimap-6.6.1/contrib/systemd/README.md000066400000000000000000000016661264010144500204320ustar00rootroot00000000000000--- layout: page title: Integrating OfflineIMAP into systemd author: Ben Boeckel date: 2015-03-22 contributors: Abdo Roig-Maranges updated: 2015-03-25 --- ## Systemd units These unit files are meant to be used in the user session. You may drop them into `/etc/systemd/user` or `${XDG_DATA_HOME}/systemd/user` followed by `systemctl --user daemon-reload` to have systemd aware of the unit files. These files are meant to be triggered either manually using `systemctl --user start offlineimap.service` or by enabling the timer unit using `systemctl --user enable offlineimap.timer`. Additionally, specific accounts may be triggered by using `offlineimap@myaccount.timer` or `offlineimap@myaccount.service`. These unit files are installed as being enabled via a `mail.target` unit which is intended to be a catch-all for mail-related unit files. A simple `mail.target` file is also provided. offlineimap-6.6.1/contrib/systemd/mail.target000066400000000000000000000001021264010144500212650ustar00rootroot00000000000000[Unit] Description=Mail Target [Install] WantedBy=default.target offlineimap-6.6.1/contrib/systemd/offlineimap.service000066400000000000000000000002131264010144500230110ustar00rootroot00000000000000[Unit] Description=Offlineimap Service [Service] Type=oneshot ExecStart=/usr/bin/offlineimap -o -u syslog [Install] WantedBy=mail.target offlineimap-6.6.1/contrib/systemd/offlineimap.timer000066400000000000000000000002031264010144500224700ustar00rootroot00000000000000[Unit] Description=Offlineimap Query Timer [Timer] OnUnitInactiveSec=15m Unit=offlineimap.service [Install] WantedBy=mail.target offlineimap-6.6.1/contrib/systemd/offlineimap@.service000066400000000000000000000002401264010144500231110ustar00rootroot00000000000000[Unit] Description=Offlineimap Service for account %i [Service] Type=oneshot ExecStart=/usr/bin/offlineimap -o -a %i -u syslog [Install] WantedBy=mail.target offlineimap-6.6.1/contrib/systemd/offlineimap@.timer000066400000000000000000000002251264010144500225740ustar00rootroot00000000000000[Unit] Description=Offlineimap Query Timer for account %i [Timer] OnUnitInactiveSec=15m Unit=offlineimap@%i.service [Install] WantedBy=mail.target offlineimap-6.6.1/docs/000077500000000000000000000000001264010144500147425ustar00rootroot00000000000000offlineimap-6.6.1/docs/Makefile000066400000000000000000000017011264010144500164010ustar00rootroot00000000000000# This program is free software under the terms of the GNU General Public # License. See the COPYING file which must come with this package. SOURCES = $(wildcard *.rst) HTML_TARGETS = $(patsubst %.rst,%.html,$(SOURCES)) RM = rm RST2HTML=`type rst2html >/dev/null 2>&1 && echo rst2html || echo rst2html.py` RST2MAN=`type rst2man >/dev/null 2>&1 && echo rst2man || echo rst2man.py` SPHINXBUILD = sphinx-build docs: man api html: $(HTML_TARGETS) $(HTML_TARGETS): %.html : %.rst $(RST2HTML) $? $@ man: offlineimap.1 offlineimapui.7 offlineimap.1: offlineimap.txt a2x -v -f manpage $? offlineimapui.7: offlineimapui.txt a2x -v -f manpage $? 
api: $(SPHINXBUILD) -b html -d html/doctrees doc-src html websitedoc: ./website-doc.sh releases ./website-doc.sh api ./website-doc.sh contrib clean: $(RM) -f $(HTML_TARGETS) $(RM) -f offlineimap.1 $(RM) -f offlineimap.7 $(RM) -rf html/* -find ./docs -name '*.html' -exec rm -f {} \; .PHONY: clean doc offlineimap-6.6.1/docs/doc-src/000077500000000000000000000000001264010144500162745ustar00rootroot00000000000000offlineimap-6.6.1/docs/doc-src/API.rst000066400000000000000000000051331264010144500174410ustar00rootroot00000000000000.. OfflineImap API documentation .. currentmodule:: offlineimap .. _API docs: :mod:`offlineimap's` API documentation ====================================== Within :mod:`offlineimap`, the classes :class:`OfflineImap` provides the high-level functionality. The rest of the classes should usually not needed to be touched by the user. Email repositories are represented by a :class:`offlineimap.repository.Base.BaseRepository` or derivatives (see :mod:`offlineimap.repository` for details). A folder within a repository is represented by a :class:`offlineimap.folder.Base.BaseFolder` or any derivative from :mod:`offlineimap.folder`. This page contains the main API overview of OfflineImap |release|. OfflineImap can be imported as:: from offlineimap import OfflineImap :mod:`offlineimap` -- The OfflineImap module ============================================= .. module:: offlineimap .. autoclass:: offlineimap.OfflineImap(cmdline_opts = None) :members: :inherited-members: :undoc-members: :private-members: :class:`offlineimap.account` ============================ An :class:`accounts.Account` connects two email repositories that are to be synced. It comes in two flavors, normal and syncable. .. autoclass:: offlineimap.accounts.Account .. autoclass:: offlineimap.accounts.SyncableAccount :members: :inherited-members: .. autodata:: ui Contains the current :mod:`offlineimap.ui`, and can be used for logging etc. :exc:`OfflineImapError` -- A Notmuch execution error -------------------------------------------------------- .. autoexception:: offlineimap.error.OfflineImapError :members: This exception inherits directly from :exc:`Exception` and is raised on errors during the offlineimap execution. It has an attribute `severity` that denotes the severity level of the error. :mod:`offlineimap.globals` -- module with global variables ========================================================== Module `offlineimap.globals` provides the read-only storage for the global variables. All exported module attributes can be set manually, but this practice is highly discouraged and shouldn't be used. However, attributes of all stored variables can only be read, write access to them is denied. Currently, we have only :attr:`options` attribute that holds command-line options as returned by OptionParser. The value of :attr:`options` must be set by :func:`set_options` prior to its first use. .. automodule:: offlineimap.globals :members: .. data:: options You can access the values of stored options using the usual syntax, offlineimap.globals.options. offlineimap-6.6.1/docs/doc-src/conf.py000066400000000000000000000150501264010144500175740ustar00rootroot00000000000000# -*- coding: utf-8 -*- # # pyDNS documentation build configuration file, created by # sphinx-quickstart on Tue Feb 2 10:00:47 2010. # # This file is execfile()d with the current directory set to its containing dir. # # Note that not all possible configuration values are present in this # autogenerated file. 
# # All configuration values have a default; values that are commented out # serve to show the default. import sys, os # If extensions (or modules to document with autodoc) are in another directory, # add these directories to sys.path here. If the directory is relative to the # documentation root, use os.path.abspath to make it absolute, like shown here. sys.path.insert(0, os.path.abspath('../..')) from offlineimap import __version__, __bigversion__, __author__, __copyright__ # -- General configuration ----------------------------------------------------- # Add any Sphinx extension module names here, as strings. They can be extensions # coming with Sphinx (named 'sphinx.ext.*') or your custom ones. extensions = ['sphinx.ext.autodoc', 'sphinx.ext.doctest', 'sphinx.ext.intersphinx', 'sphinx.ext.todo', 'sphinx.ext.viewcode'] autoclass_content = "both" # Add any paths that contain templates here, relative to this directory. templates_path = ['_templates'] # The suffix of source filenames. source_suffix = '.rst' # The encoding of source files. #source_encoding = 'utf-8' # The master toctree document. master_doc = 'index' # General information about the project. project = u'OfflineIMAP' copyright = __copyright__ # The version info for the project you're documenting, acts as replacement for # |version| and |release|, also used in various other places throughout the # built documents. # # The short X.Y version. version = __version__ # The full version, including alpha/beta/rc tags. release = __bigversion__ # The language for content autogenerated by Sphinx. Refer to documentation # for a list of supported languages. #language = None # There are two options for replacing |today|: either, you set today to some # non-false value, then it is used: #today = '' # Else, today_fmt is used as the format for a strftime call. #today_fmt = '%B %d, %Y' # List of documents that shouldn't be included in the build. #unused_docs = [] # List of directories, relative to source directory, that shouldn't be searched # for source files. exclude_trees = [] # The reST default role (used for this markup: `text`) to use for all documents. #default_role = None # If true, '()' will be appended to :func: etc. cross-reference text. #add_function_parentheses = True # If true, the current module name will be prepended to all description # unit titles (such as .. function::). add_module_names = False # If true, sectionauthor and moduleauthor directives will be shown in the # output. They are ignored by default. #show_authors = False # The name of the Pygments (syntax highlighting) style to use. pygments_style = 'sphinx' # A list of ignored prefixes for module index sorting. #modindex_common_prefix = [] # -- Options for HTML output --------------------------------------------------- # The theme to use for HTML and HTML Help pages. Major themes that come with # Sphinx are currently 'default' and 'sphinxdoc'. html_theme = 'default' #html_style = '' # Theme options are theme-specific and customize the look and feel of a theme # further. For a list of options available for each theme, see the # documentation. #html_theme_options = {} # Add any paths that contain custom themes here, relative to this directory. #html_theme_path = [] # The name for this set of Sphinx documents. If None, it defaults to # " v documentation". #html_title = None # A shorter title for the navigation bar. Default is the same as html_title. #html_short_title = None # The name of an image file (relative to this directory) to place at the top # of the sidebar. 
#html_logo = None # The name of an image file (within the static path) to use as favicon of the # docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32 # pixels large. #html_favicon = None # Add any paths that contain custom static files (such as style sheets) here, # relative to this directory. They are copied after the builtin static files, # so a file named "default.css" will overwrite the builtin "default.css". #html_static_path = ['html'] # If not '', a 'Last updated on:' timestamp is inserted at every page bottom, # using the given strftime format. #html_last_updated_fmt = '%b %d, %Y' # If true, SmartyPants will be used to convert quotes and dashes to # typographically correct entities. #html_use_smartypants = True # Custom sidebar templates, maps document names to template names. #html_sidebars = {} # Additional templates that should be rendered to pages, maps page names to # template names. #html_additional_pages = {} # If false, no module index is generated. html_use_modindex = False # If false, no index is generated. #html_use_index = True # If true, the index is split into individual pages for each letter. #html_split_index = False # If true, links to the reST sources are added to the pages. #html_show_sourcelink = True # If true, an OpenSearch description file will be output, and all pages will # contain a tag referring to it. The value of this option must be the # base URL from which the finished HTML is served. #html_use_opensearch = '' # If nonempty, this is the file name suffix for HTML files (e.g. ".xhtml"). #html_file_suffix = '' # Output file base name for HTML help builder. htmlhelp_basename = 'dev-doc' # -- Options for LaTeX output -------------------------------------------------- # The paper size ('letter' or 'a4'). #latex_paper_size = 'letter' # The font size ('10pt', '11pt' or '12pt'). #latex_font_size = '10pt' # Grouping the document tree into LaTeX files. List of tuples # (source start file, target name, title, author, documentclass [howto/manual]). latex_documents = [ ('index', 'offlineimap.tex', u'OfflineIMAP Documentation', u'OfflineIMAP contributors', 'manual'), ] # The name of an image file (relative to this directory) to place at the top of # the title page. #latex_logo = None # For "manual" documents, if this is true, then toplevel headings are parts, # not chapters. #latex_use_parts = False # Additional stuff for the LaTeX preamble. #latex_preamble = '' # Documents to append as an appendix to all manuals. #latex_appendices = [] # If false, no module index is generated. #latex_use_modindex = True # Example configuration for intersphinx: refer to the Python standard library. intersphinx_mapping = {'http://docs.python.org/': None} offlineimap-6.6.1/docs/doc-src/dco.rst000066400000000000000000000050511264010144500175740ustar00rootroot00000000000000.. 
_dco Developer's Certificate of Origin ================================= v1.1:: By making a contribution to this project, I certify that: (a) The contribution was created in whole or in part by me and I have the right to submit it under the open source license indicated in the file; or (b) The contribution is based upon previous work that, to the best of my knowledge, is covered under an appropriate open source license and I have the right under that license to submit that work with modifications, whether created in whole or in part by me, under the same open source license (unless I am permitted to submit under a different license), as indicated in the file; or (c) The contribution was provided directly to me by some other person who certified (a), (b) or (c) and I have not modified it. (d) I understand and agree that this project and the contribution are public and that a record of the contribution (including all personal information I submit with it, including my sign-off) is maintained indefinitely and may be redistributed consistent with this project or the open source license(s) involved. Then, you just add a line saying:: Signed-off-by: Random J Developer This line can be automatically added by git if you run the git-commit command with the ``-s`` option. Signing can made be afterword with ``--amend -s``. Notice that you can place your own ``Signed-off-by:`` line when forwarding somebody else's patch with the above rules for D-C-O. Indeed you are encouraged to do so. Do not forget to place an in-body ``From:`` line at the beginning to properly attribute the change to its true author (see above). Also notice that a real name is used in the ``Signed-off-by:`` line. Please don't hide your real name. If you like, you can put extra tags at the end: Reported-by is used to to credit someone who found the bug that the patch attempts to fix. Acked-by says that the person who is more familiar with the area the patch attempts to modify liked the patch. Reviewed-by unlike the other tags, can only be offered by the reviewer and means that she is completely satisfied that the patch is ready for application. It is usually offered only after a detailed review. Tested-by is used to indicate that the person applied the patch and found it to have the desired effect. You can also create your own tag or use one that's in common usage such as ``Thanks-to:``, ``Based-on-patch-by:``, or ``Mentored-by:``. offlineimap-6.6.1/docs/doc-src/index.rst000066400000000000000000000006731264010144500201430ustar00rootroot00000000000000.. OfflineImap documentation master file .. _OfflineIMAP: http://offlineimap.org Welcome to OfflineIMAP's developer documentation ================================================ **License** :doc:`dco` (dco) **Documented APIs** .. toctree:: API repository ui .. moduleauthor:: John Goerzen, and many others. See AUTHORS and the git history for a full list. :License: This module is covered under the GNU GPL v2 (or later). offlineimap-6.6.1/docs/doc-src/repository.rst000066400000000000000000000043071264010144500212510ustar00rootroot00000000000000.. currentmodule:: offlineimap.repository :mod:`offlineimap.repository` -- Email repositories ------------------------------------------------------------ A derivative of class :class:`Base.BaseRepository` represents an email repository depending on the type of storage, possible options are: * :class:`IMAPRepository`, * :class:`MappedIMAPRepository` * :class:`GmailRepository`, * :class:`MaildirRepository`, or * :class:`LocalStatusRepository`. 
Which class you need depends on your account configuration. The helper class :class:`offlineimap.repository.Repository` is an *autoloader*, that returns the correct class depending on your configuration. So when you want to instanciate a new :mod:`offlineimap.repository`, you will mostly do it through this class. .. autoclass:: offlineimap.repository.Repository :members: :inherited-members: :mod:`offlineimap.repository.Base.BaseRepository` -- Representation of a mail repository ------------------------------------------------------------------------------------------ .. autoclass:: offlineimap.repository.Base.BaseRepository :members: :inherited-members: :undoc-members: .. .. note:: :meth:`foo` .. .. attribute:: Database.MODE Defines constants that are used as the mode in which to open a database. MODE.READ_ONLY Open the database in read-only mode MODE.READ_WRITE Open the database in read-write mode .. autoclass:: offlineimap.repository.IMAPRepository .. autoclass:: offlineimap.repository.MappedIMAPRepository .. autoclass:: offlineimap.repository.GmailRepository .. autoclass:: offlineimap.repository.MaildirRepository .. autoclass:: offlineimap.repository.LocalStatusRepository :mod:`offlineimap.folder` -- Basic representation of a local or remote Mail folder --------------------------------------------------------------------------------------------------------- .. autoclass:: offlineimap.folder.Base.BaseFolder :members: :inherited-members: :undoc-members: .. .. attribute:: Database.MODE Defines constants that are used as the mode in which to open a database. MODE.READ_ONLY Open the database in read-only mode MODE.READ_WRITE Open the database in read-write mode offlineimap-6.6.1/docs/doc-src/ui.rst000066400000000000000000000015311264010144500174430ustar00rootroot00000000000000:mod:`offlineimap.ui` -- A flexible logging system -------------------------------------------------------- .. currentmodule:: offlineimap.ui OfflineImap has various ui systems, that can be selected. They offer various functionalities. They must implement all functions that the :class:`offlineimap.ui.UIBase` offers. Early on, the ui must be set using :meth:`getglobalui` .. automethod:: offlineimap.ui.setglobalui .. automethod:: offlineimap.ui.getglobalui Base UI plugin ^^^^^^^^^^^^^^^^^^^^^^^^^^ .. autoclass:: offlineimap.ui.UIBase.UIBase :members: :inherited-members: .. .. note:: :meth:`foo` .. .. attribute:: Database.MODE Defines constants that are used as the mode in which to open a database. MODE.READ_ONLY Open the database in read-only mode MODE.READ_WRITE Open the database in read-write mode offlineimap-6.6.1/docs/offlineimap.txt000066400000000000000000000355561264010144500200120ustar00rootroot00000000000000 offlineimap(1) ============== NAME ---- offlineimap - Synchronize mailboxes and Maildirs SYNOPSIS -------- [verse] 'offlineimap' (options) DESCRIPTION ----------- Synchronize the accounts configured in the configuration file via IMAP. Each account has two sides. One of the side must be an IMAP server. The other side can either be a Maildir or another IMAP server. OPTIONS ------- -h:: --help:: Display summary of options. --version:: Output version. --dry-run:: Run in dry run mode. + Do not actually modify any store but check and print what synchronization actions would be taken if a sync would be performed. It will not precisely give the exact information what will happen. If e.g. we need to create a folder, it merely outputs 'Would create folder X', but not how many and which mails it would transfer. 
--info:: Output information on the configured email repositories. + Useful for debugging and bug reporting. Use in conjunction with the -a option to limit the output to a single account. This mode will prevent any actual sync from occurring and exits after it outputs the debug information. -1:: Limit multithreading operations and run solely a single-thread sync. + This effectively sets the maxsyncaccounts and all maxconnections configuration file variables to 1. This is 1, the number. -P <DIR>:: Set OfflineIMAP into profile mode. + The program will create DIR (it must not already exist). As it runs, Python profiling information about each thread is logged into DIR. Please note: This option is present for debugging and optimization only, and should NOT be used unless you have a specific reason to do so. It will significantly decrease program performance, may reduce reliability, and can generate huge amounts of data. This option implies the -1 option. -a <accountlist>:: Overrides the accounts section in the config file. + Allows one to specify a particular account or set of accounts to sync without having to edit the config file. -c <configfile>:: Specifies a configuration file to use. -d <debugtype[,debugtype...]>:: Enables debugging for OfflineIMAP. + This is useful if you are trying to track down a malfunction or figure out what is going on under the hood. This option requires one or more debugtypes, separated by commas. These define what exactly will be debugged; the recognized values so far are: imap, thread, maildir or ALL. The imap option will enable IMAP protocol stream and parsing debugging. Note that the output may contain passwords, so take care to remove that from the debugging output before sending it to anyone else. The maildir option will enable debugging for certain Maildir operations. The use of any debug option (unless 'thread' is included) implies the single-thread option -1. -l <filename>:: Send logs to <filename>. -s:: Send logs to syslog. -f <folderlist>:: Only sync the specified folders. + The folder names are the untranslated foldernames of the remote repository. This command-line option overrides any 'folderfilter' and 'folderincludes' options in the configuration file. -k <[section:]option=value>:: Override any configuration file option. + If "section" is omitted, it defaults to "general". Any underscores in the section name are replaced with spaces: for instance, to override option "autorefresh" in the "[Account Personal]" section in the config file one would use "-k Account_Personal:autorefresh=30". Repeat this option as many times as necessary to redefine multiple options. -o:: Run only once. + Ignore any autorefresh setting in the configuration file. -q:: Run only quick synchronizations. + Ignore any flag updates on IMAP servers. If a flag on the remote IMAP changes, and we have the message locally, it will be left untouched in a quick run. This option is ignored if maxage is set. -u <interface>:: Specifies an alternative user interface to use. + This overrides the default specified in the configuration file. The UI specified with -u will be forced to be used, even if checks determine that it is not usable. Possible interface choices are: quiet, basic, syslog, ttyui, blinkenlights, machineui. --column[=<options>]:: --no-column:: Display branch listing in columns. See configuration variable column.branch for option syntax. `--column` and `--no-column` without options are equivalent to 'always' and 'never' respectively. + This option is only applicable in non-verbose mode.
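Some example invocations combining the options above (the account and folder names are placeholders for illustration only):

  offlineimap -o -a Personal -u ttyui              # sync only the "Personal" account once, with the TTY interface
  offlineimap -q -f INBOX                          # quick sync restricted to INBOX, e.g. from cron between full syncs
  offlineimap -k Account_Personal:autorefresh=30   # override a single configuration option for this run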
Synchronization Performance --------------------------- By default, we use fairly conservative settings that are safe for syncing but that might not be the best performing ones. Once you have got everything set up and running, you might want to look into speeding up your synchronization. Here are a couple of hints and tips on how to achieve this. 1. Use maxconnections > 1. + By default we only use one connection to an IMAP server. Using 2 or even 3 speeds things up considerably in most cases. This setting goes into the [Repository XXX] section. 2. Use folderfilters. + The quickest sync is a sync that can ignore some folders. I sort my inbox into monthly folders, and ignore every folder that is more than 2-3 months old; this lets me inspect only a fraction of my mails on every sync. If you haven't done this yet, do it :). See the folderfilter section in the example offlineimap.conf. 3. The cache. + The default status cache is a plain text file that will write out the complete file for each single new message (or even changed flag) to a temporary file. If you have plenty of files in a folder, this is a few hundred kilobytes to megabytes for each mail and is bound to make things slower. I recommend using the sqlite backend for that. See the status_backend = sqlite setting in the example offlineimap.conf. You will need to have python-sqlite installed in order to use this. This will save you plenty of disk activity. Do note that the sqlite backend is still considered experimental as it has only been included recently (although a loss of your status cache should not be a tragedy as that file can be rebuilt automatically). 4. Use quick sync. + A regular sync will request all flags and all UIDs of all mails in each folder, which takes quite some time. A 'quick' sync only compares the number of messages in a folder on the IMAP side (it will detect flag changes on the Maildir side of things though). A quick sync on my smallish account will take 7 seconds rather than 40 seconds. E.g., I run a cron script that does a regular sync once a day, and does quick syncs (-q) synchronizing only the INBOX (-f INBOX) in between. 5. Turn off fsync. + In the [general] section you can set fsync to True or False. If you want to play 110% safe and wait for all operations to hit the disk before continuing, you can set this to True. If you set it to False, you lose some of that safety, trading it for speed. Upgrading from plain text to SQLite cache format ------------------------------------------------ OfflineImap uses a cache to store the last known status of mails (flags etc). Historically that has meant plain text files, but recently we introduced a sqlite-based cache, which helps with performance and CPU usage on large folders. Here is how to upgrade existing plain text cache installations to the sqlite-based one: 1. Sync to make sure things are reasonably similar. 2. Change the account section to "status_backend = sqlite". 3. Run a new sync. + This will convert your plain text cache to an sqlite cache (but leave the old plain text cache around for easy reverting). This should be quick and not involve any mail up/downloading. 4. See if it works! :-) 5. If it does not work, go back to the old version or set "status_backend = plain". 6. Delete the old text cache files.
Once you are sure it works, you can delete the ~/.offlineimap/Account-foo/LocalStatus folder (the new cache will be in the LocalStatus-sqlite folder). Security and SSL ---------------- By default, OfflineIMAP will connect using any method that 'openssl' supports, that is SSLv2, SSLv3, or TLSv1. Do note that SSLv2 is notoriously insecure and deprecated. Unfortunately, python2 does not offer easy ways to disable SSLv2. It is recommended you test your setup and make sure that the mail server does not use an SSLv2 connection. Use e.g. "openssl s_client -host mail.server -port 443" to find out the connection that is used by default. * Certificate checking + Unfortunately, by default we will not verify the certificate of an IMAP TLS/SSL server we connect to, so connecting by SSL is no guarantee against man-in-the-middle attacks. When verifying a server certificate, checking the fingerprint is recommended. There is currently only one safe way to ensure that you connect to the correct server in an encrypted manner: you can specify a 'sslcacertfile' setting in your repository section of offlineimap.conf pointing to a file that contains (among others) a CA Certificate in PEM format which validates your server certificate. In this case, we will check that: 1. The server SSL certificate is validated by the CA Certificate. 2. The server host name matches the SSL certificate. 3. The server certificate is not past its expiration date. The FAQ has an entry on how to create your own certificate and CA certificate. * StartTLS + If you have not configured your account to connect via SSL anyway, OfflineImap will still attempt to set up an SSL connection via the STARTTLS function, in case the imap server supports it. + There is no certificate or fingerprint checking involved at all when using STARTTLS (the underlying imaplib library does not support this yet). This means that you will be protected against passively listening eavesdroppers and they will not be able to see your password or email contents. However, this will not protect you from active attacks, such as Man-In-The-Middle attacks which cause you to connect to the wrong server pretending to be your mail server. + DO NOT RELY ON STARTTLS AS A SAFE CONNECTION GUARANTEEING THE AUTHENTICITY OF YOUR IMAP SERVER! Unix Signals ------------ OfflineImap listens to the unix signals SIGUSR1, SIGUSR2, SIGTERM, SIGINT, SIGHUP, SIGQUIT. * If sent a SIGUSR1, it will abort any current (or next future) sleep of all accounts that are configured to "autorefresh". In effect, this will trigger a full sync of all accounts to be performed as soon as possible. * If sent a SIGUSR2, it will stop "autorefresh mode" for all accounts. That is, accounts will abort any current sleep and will exit after a currently running synchronization has finished. This signal can be used to gracefully exit out of a running offlineimap "daemon". * SIGTERM, SIGINT, SIGHUP are all handled by terminating gracefully as soon as possible. This means it will finish syncing the current folder in each account, close keep-alive connections, remove locks on the accounts and exit. + It may take up to 10 seconds if the autorefresh option is used. * If sent a SIGQUIT, it dumps stack traces for all threads and tries to dump the process core. Known Issues ------------ * SSL3 write pending. + Users enabling SSL may hit a bug about "SSL3 write pending". If so, the account(s) will stay unsynchronised from the time the bug appeared. Running OfflineIMAP again can help. We are still working on this bug.
Patches or detailed bug reports would be appreciated. Please check you're running the latest stable version and send a report to the mailing list including the full log. * IDLE support is incomplete and experimental. Bugs may be encountered. - No hook exists for "run after an IDLE response". + Email will show up, but may not be processed until the next refresh cycle. - nametrans may not be supported correctly. - IMAP IDLE <-> IMAP IDLE doesn't work yet. - IDLE might stop syncing on a system suspend/resume. - IDLE may only work "once" per refresh. + If you encounter this bug, please send a report to the list! * Maildir support on Windows drives. + Maildir uses the colon character (:) in message file names. The colon is, however, a forbidden character on Windows drives. There are several workarounds for that situation: . Enable file name character translation in the Windows registry (not tested). - . Use a cygwin managed mount (not tested). - not available anymore since cygwin 1.7 . Use "maildir-windows-compatible = yes" in the account OfflineIMAP configuration. - That makes OfflineIMAP use an exclamation mark (!) instead of a colon for storing messages. Such files can be written to Windows partitions. But you will probably lose compatibility with other programs trying to read the same Maildir. + - The exclamation mark was chosen because of the note in http://docs.python.org/library/mailbox.html + - If you have some messages already stored without this option, you will have to re-sync them. * OfflineIMAP confused after system suspend. + When resuming a suspended session, OfflineIMAP does not cleanly handle the broken socket(s) if the socktimeout option is not set. You should enable this option with a value like 10. * OfflineIMAP confused when mails change while in a sync. + When OfflineIMAP is syncing, some events happening since the invocation on the remote or local side are badly handled. OfflineIMAP won't track changes during the sync. * Sharing a maildir with multiple IMAP servers. + Generally, a word of caution about mixing IMAP repositories on the same Maildir root. You have to be careful that you *never* use the same maildir folder for two IMAP servers. In the best case, the folder MD5 will be different, and you will get a loop where it will upload your mails to both servers in turn (infinitely!) as it thinks you have placed new mails in the local Maildir. In the worst case, the MD5 is the same (likely) and mail UIDs overlap (likely too!) and it will fail to sync some mails as it thinks they already exist. + I would create a new local Maildir Repository for the Personal Gmail and use a different root to be on the safe side here. You could e.g. use `~/mail/Pro` as the Maildir root for the ProGmail and `~/mail/Personal` as the root for the personal one. + If you then point your local mutt, or whatever MUA you use, to `~/mail/` as root, it should still recognize all folders. * Edge cases with maxage causing too many messages to be synced. + All messages from at most maxage days ago (+/- a few hours, depending on timezones) are synced, but there are cases in which older messages can also be synced. This happens when a message's UID is significantly higher than those of other messages with similar dates, e.g. when messages are added to the local folder behind offlineimap's back, causing them to get assigned a new UID, or when offlineimap first syncs a pre-existing Maildir. In the latter case, it could appear as if a noticeable and random subset of old messages are synced.
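For reference, the two configuration-based workarounds mentioned in the issues above look roughly like this in offlineimap.conf (a sketch only; the section placement follows the descriptions above, and the account name is a placeholder):

  [general]
  # Let OfflineIMAP detect and drop broken sockets after suspend/resume.
  socktimeout = 10

  [Account Personal]
  # Store messages with '!' instead of ':' so they can live on Windows drives.
  maildir-windows-compatible = yes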
Main authors
------------

John Goerzen, Sebastian Spaetz, Eygene Ryabinkin, Nicolas Sebrecht.

See Also
--------

offlineimapui(7), openssl(1), signal(7), sqlite3(1).

http://offlineimap.org
offlineimap-6.6.1/docs/offlineimapui.txt000066400000000000000000000101151264010144500203300ustar00rootroot00000000000000
offlineimapui(7)
================

NAME
----
offlineimapui - The User Interfaces

DESCRIPTION
-----------

OfflineIMAP comes with several different user interfaces, each aimed at its
own purpose.

TTYUI
-----

The TTYUI interface is for people running OfflineIMAP in a terminal. It
prints out basic status messages and is generally friendly to use on a
console or xterm.

Basic
-----

Basic is designed for situations in which OfflineIMAP will be run
non-attended and the status of its execution will be logged. For example, it
will not print periodic sleep announcements and tends to be a tad less
verbose, in general.
+
This user interface is not capable of reading a password from the keyboard;
account passwords must be specified using one of the configuration file
options.

Blinkenlights
-------------

Blinkenlights is an interface designed to be sleek, fun to watch, and
informative of the overall picture of what OfflineIMAP is doing.

Blinkenlights contains a row of "LEDs" with command buttons and a log. The
log shows more detail about what is happening and is color-coded to match the
color of the lights.

Each light in the Blinkenlights interface represents a thread of execution --
that is, a particular task that OfflineIMAP is performing right now. The
colors indicate what task the particular thread is performing, and are as
follows:

* Black indicates that this light's thread has terminated; it will light up
  again later when new threads start up. So, black indicates no activity.

* Red (meaning 1) is the color of the main program's thread, which basically
  does nothing but monitor the others. It might remind you of HAL 9000 in
  2001.

* Gray indicates that the thread is establishing a new connection to the IMAP
  server.

* Purple is the color of an account synchronization thread that is monitoring
  the progress of the folders in that account (not generating any I/O).

* Cyan indicates that the thread is syncing a folder.

* Green means that a folder's message list is being loaded.

* Blue is the color of a message synchronization controller thread.

* Orange indicates that an actual message is being copied. (We use fuchsia
  for fake messages.)

* Red (meaning 2) indicates that a message is being deleted.

* Yellow / bright orange indicates that message flags are being added.

* Pink / bright red indicates that message flags are being removed.

* Red / Black flashing corresponds to the countdown timer that runs between
  synchronizations.

The name of this interface derives from a bit of computer history. Eric
Raymond's Jargon File defines blinkenlights, in part, as:

  Front-panel diagnostic lights on a computer, esp. a dinosaur. Now that
  dinosaurs are rare, this term usually refers to status lights on a modem,
  network hub, or the like.

  This term derives from the last word of the famous blackletter-Gothic sign
  in mangled pseudo-German that once graced about half the computer rooms in
  the English-speaking world. One version ran in its entirety as follows:

    ACHTUNG! ALLES LOOKENSPEEPERS!

    Das computermachine ist nicht fuer gefingerpoken und mittengrabben. Ist
    easy schnappen der springenwerk, blowenfusen und poppencorken mit
    spitzensparken. Ist nicht fuer gewerken bei das dumpkopfen.
    Das rubbernecken sichtseeren keepen das cotten-pickenen hans in das
    pockets muss; relaxen und watchen das blinkenlichten.

Quiet
-----

The Quiet interface will output nothing except errors and serious warnings.
Like Basic, this user interface is not capable of reading a password from the
keyboard; account passwords must be specified using one of the configuration
file options.

Syslog
------

Syslog is designed for situations where OfflineIMAP is run as a daemon (e.g.,
as a systemd --user service), but errors should be forwarded to the system
log. Like Basic, this user interface is not capable of reading a password
from the keyboard; account passwords must be specified using one of the
configuration file options.

MachineUI
---------

MachineUI generates output in a machine-parsable format. It is designed for
other programs that will interface with OfflineIMAP.

See Also
--------

offlineimap(1)
offlineimap-6.6.1/docs/website-doc.sh000077500000000000000000000052061264010144500175110ustar00rootroot00000000000000
#!/bin/sh
#
# vim: expandtab ts=2 :

ARGS=$*

SPHINXBUILD=sphinx-build

TMPDIR='/tmp/offlineimap-sphinx-doctrees'
WEBSITE='./website'
DOCBASE="${WEBSITE}/_doc"
DESTBASE="${DOCBASE}/versions"
VERSIONS_YML="${WEBSITE}/_data/versions.yml"
ANNOUNCES_YML="${WEBSITE}/_data/announces.yml"
CONTRIB_YML="${WEBSITE}/_data/contribs.yml"
CONTRIB="${DOCBASE}/contrib"
HEADER="# DO NOT EDIT MANUALLY: it is generated by a script (website-doc.sh)."

fix_pwd () {
  cd "$(git rev-parse --show-toplevel)" || {
    echo "cannot determine the root of the repository" >&2
    exit 2
  }
  test -d "$DESTBASE" || exit 1
}

fix_pwd
version="v$(./offlineimap.py --version)"

#
# Add the doc for the contrib files.
#
contrib () {
  echo "$HEADER" > "$CONTRIB_YML"

  # systemd
  cp -afv "./contrib/systemd/README.md" "${CONTRIB}/systemd.md"
  echo "- {filename: 'systemd', linkname: 'Integrate with systemd'}" >> "$CONTRIB_YML"
}

#
# Build the sphinx documentation.
#
api () {
  # Build the doc with sphinx.
  dest="${DESTBASE}/${version}"
  echo "Cleaning target directory: $dest"
  rm -rf "$dest"
  $SPHINXBUILD -b html -d "$TMPDIR" ./docs/doc-src "$dest"

  # Build the YAML definitions for Jekyll.
  # This lets the website know about the available API documentation.
  echo "Building Jekyll data: $VERSIONS_YML"
  # Erase previous content.
  echo "$HEADER" > "$VERSIONS_YML"
  for version in $(ls "$DESTBASE" -1 | sort -nr)
  do
    echo "- $version"
  done >> "$VERSIONS_YML"
}

#
# Make the Changelogs public and save links to them as YAML.
#
releases () {
  # Copy the Changelogs.
  for foo in ./Changelog.md ./Changelog.maint.md
  do
    cp -afv "$foo" "$DOCBASE"
  done

  # Build the announces list. Format is YAML:
  # - {version: '<version>', link: '<link>'}
  # - ...
  echo "$HEADER" > "$ANNOUNCES_YML"
  grep -E '^### OfflineIMAP' ./Changelog.md | while read title
  do
    link="$(echo $title | sed -r -e 's,^### (OfflineIMAP.*)\),\1,' \
      | tr '[:upper:]' '[:lower:]' \
      | sed -r -e 's,[\.("],,g' \
      | sed -r -e 's, ,-,g' )"
    v="$(echo $title \
      | sed -r -e 's,^### [A-Za-z]+ (v[^ ]+).*,\1,' )"
    echo "- {version: '${v}', link: '$link'}"
  done | tee -a "$ANNOUNCES_YML"
}

exit_code=0
test "n$ARGS" = 'n' && ARGS='usage' # no option passed
for arg in $ARGS
do
  # PWD was fixed at the very beginning.
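  # Dispatch each requested action. Prefixing both sides of the comparison
  # with "n" guards against empty values. Recognized actions are: releases,
  # api, contrib and usage; anything unknown increments the exit code.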
case "n$arg" in "nreleases") releases ;; "napi") api ;; "ncontrib") contrib ;; "nusage") echo "Usage: website-doc.sh " ;; *) echo "unkown option $arg" exit_code=$(( $exit_code + 1 )) ;; esac done exit $exit_code offlineimap-6.6.1/offlineimap.conf000066400000000000000000001204651264010144500171620ustar00rootroot00000000000000# Offlineimap sample configuration file # This file documents *all* possible options and can be quite scary. # Looking for a quick start? Take a look at offlineimap.conf.minimal. # More details can be found at http://offlineimap.org . ################################################## # Overview ################################################## # The default configuration file is "~/.offlineimaprc". # # OfflineIMAP ships with a file named "offlineimap.conf" that you should copy to # that location and then edit. # # OfflineIMAP also ships a file named "offlineimap.conf.minimal" that you can # also try. It's useful if you want to get started with the most basic feature # set, and you can read about other features later with "offlineimap.conf". # # If you want to be XDG-compatible, you can put your configuration file into # "$XDG_CONFIG_HOME/offlineimap/config". ################################################## # General definitions ################################################## # NOTE 1: Settings generally support python interpolation. This means # values can contain python format strings which refer to other values # in the same section, or values in a special DEFAULT section. This # allows you for example to use common settings for multiple accounts: # # [Repository Gmail1] # trashfolder: %(gmailtrashfolder)s # # [Repository Gmail2] # trashfolder: %(gmailtrashfolder)s # # [DEFAULT] # gmailtrashfolder = [Gmail]/Papierkorb # # would set the trashfolder setting for your German Gmail accounts. # NOTE 2: Above feature implies that any '%' needs to be encoded as '%%' # NOTE 3: Any variable that is subject to the environment variables # ($NAME) and tilde (~username/~) expansions will receive tilde # expansion first and only after the environment variable will be # expanded in the resulting string. This behaviour is intentional # as it coincides with typical shell expansion strategy. # NOTE 4: multiple same-named sections. # The library used to parse the configuration file has known issue when multiple # sections have the same name. In such case, only the last section is considered. # It is strongly discouraged to have multiple sections with the same name. # See https://github.com/OfflineIMAP/offlineimap/issues/143 for more details. [general] # This specifies where OfflineIMAP is to store its metadata. # This directory will be created if it does not already exist. # # Tilde and environment variable expansions will be performed. # #metadata = ~/.offlineimap # This option stands in the [general] section. # # This variable specifies which accounts are defined. Separate them with commas. # Account names should be alphanumeric only. You will need to specify one # section per account below. You may not use "general" for an account name. # # Always use ASCII characters only. # accounts = Test # This option stands in the [general] section. # # Offlineimap can synchronize more than one account at a time. If you want to # enable this feature, set the below value to something greater than 1. To # force it to synchronize only one account at a time, set it to 1. 
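#
# For illustration only (this is not a shipped default): someone with three
# accounts who wants them all synced in parallel might use a value like:
#
#maxsyncaccounts = 3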
# # NOTE: if you are using autorefresh and have more than one account, you must # set this number to be >= to the number of accounts you have; since any given # sync run never "finishes" due to a timer, you will never sync your additional # accounts if this is 1. # #maxsyncaccounts = 1 # This option stands in the [general] section. # # You can specify one or more user interface. OfflineIMAP will try the first in # the list, and if it fails, the second, and so forth. # # The pre-defined options are: # Blinkenlights -- A fancy (terminal) interface # TTYUI -- a text-based (terminal) interface # Basic -- Noninteractive interface suitable for cron'ing # Quiet -- Noninteractive interface, generates no output # except for errors. # MachineUI -- Interactive interface suitable for machine # parsing. # # See also offlineimapui(7) # # You can override this with a command-line option -u. # #ui = basic # This option stands in the [general] section. # # If you try to synchronize messages to a folder which the IMAP server # considers read-only, OfflineIMAP will generate a warning. If you want # to suppress these warnings, set ignore-readonly to yes. Read-only # IMAP folders allow reading but not modification, so if you try to # change messages in the local copy of such a folder, the IMAP server # will prevent OfflineIMAP from propagating those changes to the IMAP # server. Note that ignore-readonly is UNRELATED to the "readonly" # setting which prevents a repository from being modified at all. # #ignore-readonly = no ########## Advanced settings # This option stands in the [general] section. # # You can give a Python source filename here and all config file # python snippets will be evaluated in the context of that file. # This allows you to e.g. define helper functions in the Python # source file and call them from this config file. You can find # an example of this in the manual. # # Tilde and environment variable expansions will be performed. # #pythonfile = ~/.offlineimap.py # This option is in the [general] section. # # By default, OfflineIMAP will not exit due to a network error until the # operating system returns an error code. Operating systems can sometimes take # forever to notice this. Here you can activate a timeout on the socket. This # timeout applies to individual socket reads and writes, not to an overall sync # operation. You could perfectly well have a 30s timeout here and your sync # still take minutes. # # Values in the 30-120 second range are reasonable. # # The default is to have no timeout beyond the OS. Times are given in seconds. # #socktimeout = 60 # This option stands in the [general] section. # # By default, OfflineIMAP will use fsync() to force data out to disk at # opportune times to ensure consistency. This can, however, reduce performance. # Users where /home is on SSD (Flash) may also wish to reduce write cycles. # Therefore, you can disable OfflineIMAP's use of fsync(). Doing so will come # at the expense of greater risk of message duplication in the event of a system # crash or power loss. Default is true. Set it to false to disable fsync. # #fsync = true ################################################## # Mailbox name recorder ################################################## [mbnames] # OfflineIMAP can record your mailbox names in a format you specify. # You can define the header, each mailbox item, the separator, # and the footer. Here is an example for Mutt. # If enabled is yes, all six setting must be specified, even if they # are just the empty string "". 
# # The header, peritem, sep, and footer are all Python expressions passed # through eval, so you can (and must) use Python quoting. # # The incremental setting controls whether the file is written after each # account completes or once all synced accounts are complete. This is usefull if # an account is sightly slower than the other. It allows keeping the previous # file rather than having it partially written. # This works best with "no" if in one-shot mode started by cron or systemd # timers. Default: no. # # The following hash key are available to the expansion for 'peritem': # - accountname: the name of the corresponding account; # - foldername: the name of the folder; # - localfolders: path to the local directory hosting all Maildir # folders for the account. # # Tilde and environment variable expansions will be performed # for "filename" knob. # #enabled = no #filename = ~/Mutt/muttrc.mailboxes #header = "mailboxes " #peritem = "+%(accountname)s/%(foldername)s" #sep = " " #footer = "\n" #incremental = no # This option stands in the [mbnames] section. # # You can also specify a folderfilter. It will apply to the *translated* folder # name here, and it takes TWO arguments: accountname and foldername. In all # other ways, it will behave identically to the folderfilter for accounts. # Please see the folderfilter option for more information and examples. # # This filter can be used only to further restrict mbnames to a subset of # folders that pass the account's folderfilter. # # You can customize the order in which mailbox names are listed in the generated # file by specifying a sort_keyfunc, which takes a single dict argument # containing keys 'accountname' and 'foldername'. This function will be called # once for each mailbox, and should return a suitable sort key that defines this # mailbox' position in the custom ordering. # # This is useful with e.g. Mutt-sidebar, which uses the mailbox order from the # generated file when listing mailboxes in the sidebar. # # Default setting is: #sort_keyfunc = lambda d: (d['accountname'], d['foldername']) ################################################## # Accounts ################################################## # This is an account definition clause. You'll have one of these for each # account listed in the "accounts" option in [general] section (above). [Account Test] # These settings specify the two folders that you will be syncing. # You'll need to have a "Repository ..." section for each one. localrepository = LocalExample remoterepository = RemoteExample ########## Advanced settings # This option stands in the [Account Test] section. # # You can have OfflineIMAP continue running indefinitely, automatically syncing # your mail periodically. If you want that, specify how frequently to do that # (in minutes) here. Fractional minutes (ie, 3.25) is allowed. # #autorefresh = 5 # This option stands in the [Account Test] section. # # OfflineImap can replace a number of full updates by quick synchronizations. # This option is ignored if maxage or startdate are used. # # It only synchronizes a folder if # # 1) a Maildir folder has changed # # or # # 2) if an IMAP folder has received new messages or had messages deleted, ie # it does not update if only IMAP flags have changed. # # Full updates need to fetch ALL flags for all messages, so this makes quite a # performance difference (especially if syncing between two IMAP servers). 
# # Specify 0 for never, -1 for always (works even in non-autorefresh mode) # # A positive integer to do quick updates before doing another full # synchronization (requires autorefresh). Updates are always performed after # minutes, be they quick or full. # #quick = 10 # This option stands in the [Account Test] section. # # You can specify a pre and post sync hook to execute a external command. In # this case a call to imapfilter to filter mail before the sync process starts # and a custom shell script after the sync completes. # # The pre sync script has to complete before a sync to the account will start. # #presynchook = imapfilter -c someotherconfig.lua #postsynchook = notifysync.sh # This option stands in the [Account Test] section. # # You can specify a newmail hook to execute an external command upon receipt # of new mail in the INBOX. # # This example plays a sound file of your chosing when new mail arrives. # # This feature is experimental. # #newmail_hook = lambda: os.system("cvlc --play-and-stop --play-and-exit /path/to/sound/file.mp3" + # " > /dev/null 2>&1") # This option stands in the [Account Test] section. # # OfflineImap caches the state of the synchronisation to e.g. be able to # determine if a mail has been added or deleted on either side. # # The default and historical backend is 'plain' which writes out the # state in plain text files. On Repositories with large numbers of # mails, the performance might not be optimal, as we write out the # complete file for each change. Another new backend 'sqlite' is # available which stores the status in sqlite databases. # # If you switch the backend, you may want to delete the old cache # directory in ~/.offlineimap/Account-/LocalStatus manually # once you are sure that things work. # #status_backend = plain # This option stands in the [Account Test] section. # # If you have a limited amount of bandwidth available you can exclude larger # messages (e.g. those with large attachments etc). If you do this it will # appear to OfflineIMAP that these messages do not exist at all. They will not # be copied, have flags changed etc. For this to work on an IMAP server the # server must have server side search enabled. This works with Gmail and most # imap servers (e.g. cyrus etc) # # The maximum size should be specified in bytes - e.g. 2000000 for approx 2MB # #maxsize = 2000000 # This option stands in the [Account Test] section. # # maxage enables you to sync only recent messages. There are two ways to specify # what "recent" means: if maxage is given as an integer, then only messages from # the last maxage days will be synced. If maxage is given as a date, then only # messages later than that date will be synced. # # Messages older than the cutoff will not be synced, their flags will not be # changed, they will not be deleted, etc. For OfflineIMAP it will be like these # messages do not exist. This will perform an IMAP search in the case of IMAP or # Gmail and therefore requires that the server support server side searching. # # Known edge cases are described in offlineimap(1). # # maxage is allowed only when the local folder is of type Maildir. It can't be # used with startdate. # # The maxage option expects an integer (for the number of days) or a date of the # form yyyy-mm-dd. # #maxage = 3 #maxage = 2015-04-01 # This option stands in the [Account Test] section. # # Maildir file format uses colon (:) separator between uniq name and info. # Unfortunatelly colon is not allowed character in windows file name. 
If you # enable maildir-windows-compatible option, OfflineIMAP will be able to store # messages on windows drive, but you will probably loose compatibility with # other programs working with the maildir. # #maildir-windows-compatible = no # This option stands in the [Account Test] section. # # Specifies if we want to sync GMail labels with the local repository. # Effective only for GMail IMAP repositories. # # Non-ASCII characters in labels are bad handled or won't work at all. # #synclabels = no # This option stands in the [Account Test] section. # # Name of the header to use for label storage. Format for the header # value differs for different headers, because there are some de-facto # "standards" set by popular clients: # # - X-Label or Keywords keep values separated with spaces; for these # you, obviously, should not have label values that contain spaces; # # - X-Keywords use comma (',') as the separator. # # To be consistent with the usual To-like headers, for the rest of header # types we use comma as the separator. # # Use ASCII characters only. # #labelsheader = X-Keywords # This option stands in the [Account Test] section. # # Set of labels to be ignored. Comma-separated list. GMail-specific # labels all start with backslash ('\'). # # Use ASCII characters only. # #ignorelabels = \Inbox, \Starred, \Sent, \Draft, \Spam, \Trash, \Important # This option stands in the [Account Test] section. # # OfflineIMAP can strip off some headers when your messages are propagated # back to the IMAP server. This option carries the comma-separated list # of headers to trim off. Header name matching is case-sensitive. # # This knob is respected only by IMAP-based accounts. Value of labelsheader # for GMail-based accounts is automatically added to this list, you don't # need to specify it explicitely. # # Use ASCII characters only. # #filterheaders = X-Some-Weird-Header # This option stands in the [Account Test] section. # # Use proxy connection for this account. Usefull to bypass the GFW in China. # To specify a proxy connection, join proxy type, host and port with colons. # Available proxy types are SOCKS5, SOCKS4, HTTP. # You also need to install PySocks through pip. # # Currently, this feature leaks DNS support. # #proxy = SOCKS5:IP:9999 [Repository LocalExample] # Each repository requires a "type" declaration. The types supported for # local repositories are Maildir, GmailMaildir and IMAP. # type = Maildir # This option stands in the [Repository LocalExample] section. # # Specify local repository. Your IMAP folders will be synchronized # to maildirs created under this path. OfflineIMAP will create the # maildirs for you as needed. # localfolders = ~/Test # This option stands in the [Repository LocalExample] section. # # You can specify the "folder separator character" used for your Maildir # folders. It is inserted in-between the components of the tree. If you # want your folders to be nested directories, set it to "/". 'sep' is # ignored for IMAP repositories, as it is queried automatically. # Otherwise, default value is ".". # # Don't use quotes. # #sep = . # This option stands in the [Repository LocalExample] section. # # startdate syncs mails starting from a given date. It applies the date # restriction to LocalExample only. The remote repository MUST be empty # at the first sync where this option is used. # # Unlike maxage, this is supported for IMAP-IMAP sync. # # startdate can't be used with maxage. # # The startdate option expects a date in the format yyyy-mm-dd. 
# #startdate = 2015-04-01 # This option stands in the [Repository LocalExample] section. # # Some users may not want the atime (last access time) of folders to be # modified by OfflineIMAP. If 'restoreatime' is set to yes, OfflineIMAP # will restore the atime of the "new" and "cur" folders in each maildir # folder to their original value after each sync. # # In nearly all cases, the default should be fine. # #restoreatime = no # This option stands in the [Repository LocalExample] section. # # Set modification time of messages basing on the message's "Date" header. This # option makes sense for the Maildir type, only. # # This is useful if you are doing some processing/finding on your Maildir (for # example, finding messages older than 3 months), without parsing each # file/message content. # # If enabled, this forbid the -q (quick mode) CLI option to work correctly. # This option is still "TESTING" feature. # # Default: no. # #utime_from_header = no # This option stands in the [Repository LocalExample] section. # # This option is similar to "utime_from_header" and could be use as a # complementary feature to keep track of a message date. This option only # makes sense for the Maildir type. # # By default each message is stored in a file which prefix is the fetch # timestamp and an order rank such as "1446590057_0". In a multithreading # environment message are fetched in a random order, then you can't trust # the file name to sort your boxes. # # If set to "yes" the file name prefix if build on the message "Date" header # (which should be present) or the "Received-date" if "Date" is not # found. If neither "Received-date" nor "Date" is found, the current system # date is used. Now you can quickly sort your messages using their file # names. # # Used in combination with "utime_from_header" all your message would be in # order with the correct mtime attribute. # #filename_use_mail_timestamp = no # This option stands in the [Repository LocalExample] section. # # Map IMAP [user-defined] keywords to lowercase letters, similar to Dovecot's # format described in http://wiki2.dovecot.org/MailboxFormat/Maildir . This # option makes sense for the Maildir type, only. # # Configuration example: # customflag_x = some_keyword # # With the configuration example above enabled, all IMAP messages that have # 'some_keyword' in their FLAGS field will have an 'x' in the flags part of the # maildir filename: # 1234567890.M20046P2137.mailserver,S=4542,W=4642:2,Sx # # Valid fields are customflag_[a-z], valid values are whatever the IMAP server # allows. # # Comparison in offlineimap is case-sensitive. # # This option is EXPERIMENTAL. # #customflag_a = some_keyword #customflag_b = $OtherKeyword #customflag_c = NonJunk #customflag_d = ToDo [Repository GmailLocalExample] # This type of repository enables syncing of Gmail. All Maildir # configuration settings are also valid here. # # This is a separate Repository type from Maildir because it involves # some extra overhead which sometimes may be significant. We look for # modified tags in local messages by looking only to the files # modified since last run. This is usually rather fast, but the first # time OfflineIMAP runs with synclabels enabled, it will have to check # the contents of all individual messages for labels and this may take # a while. # type = GmailMaildir [Repository RemoteExample] # The remote repository. We only support IMAP or Gmail here. # type = IMAP # These options stands in the [Repository RemoteExample] section. 
# # The following can fetch the account credentials via a python expression that # is parsed from the pythonfile parameter. For example, a function called # "getcredentials" that parses a file "filename" and returns the account # details for "hostname". # #remotehosteval = getcredentials("filename", "hostname", "hostname") #remoteporteval = getcredentials("filename", "hostname", "port") #remoteusereval = getcredentials("filename", "hostname", "user") #remotepasseval = getcredentials("filename", "hostname", "passwd") # This option stands in the [Repository RemoteExample] section. # # Specify the remote hostname. # remotehost = examplehost # This option stands in the [Repository RemoteExample] section. # # Whether or not to use SSL. # # Note: be care to configure the 'remotehost' line with the domain name defined # in the certificate. E.g., if you trust your provider and want to use the # certificate it provides on a shared server. Otherwise, OfflineIMAP will stop # and say that the domain is not named in the certificate. # #ssl = yes # This option stands in the [Repository RemoteExample] section. # # SSL Client certificate (optional). # # Tilde and environment variable expansions will be performed. # #sslclientcert = /path/to/file.crt # This option stands in the [Repository RemoteExample] section. # # SSL Client key (optional). # # Tilde and environment variable expansions will be performed. # #sslclientkey = /path/to/file.key # This option stands in the [Repository RemoteExample] section. # # SSL CA Cert(s) to verify the server cert against (optional). # No SSL verification is done without this option. If it is # specified, the CA Cert(s) need to verify the Server cert AND # match the hostname (* wildcard allowed on the left hand side) # The certificate should be in PEM format. # # Tilde and environment variable expansions will be performed. # # Special value OS-DEFAULT makes OfflineIMAP to automatically # determine system-wide location of standard trusted CA roots file # for known OS distributions and use the first bundle encountered # (if any). If no system-wide CA bundle is found, OfflineIMAP # will refuse to continue; this is done to prevent creation # of false security expectations ("I had configured CA bundle, # thou certificate verification shalt be present"). # # You can also use fingerprint verification via cert_fingerprint. # See below for more verbose explanation. # #sslcacertfile = /path/to/cacertfile.crt # This option stands in the [Repository RemoteExample] section. # # If you connect via SSL/TLS (ssl = yes) and you have no CA certificate # specified, OfflineIMAP will refuse to sync as it connects to a server # with an unknown "fingerprint". If you are sure you connect to the # correct server, you can then configure the presented server # fingerprint here. OfflineIMAP will verify that the server fingerprint # has not changed on each connect and refuse to connect otherwise. # # You can also configure fingerprint validation in addition to # CA certificate validation above and it will check both: # OfflineIMAP fill verify certificate first and if things will be fine, # fingerprint will be validated. # # Multiple fingerprints can be specified, separated by commas. # # In Windows, Microsoft uses the term "thumbprint" instead of "fingerprint". # # Fingerprints must be in hexadecimal form without leading '0x': # 40 hex digits like bbfe29cf97acb204591edbafe0aa8c8f914287c9. # #cert_fingerprint = [, ] # This option stands in the [Repository RemoteExample] section. 
# # Set SSL version to use (optional). # # It is best to leave this unset, in which case the correct version will be # automatically detected. In rare cases, it may be necessary to specify a # particular version from: tls1, ssl2, ssl3, ssl23. # # ssl23 is the highest protocol version that both the client and server support. # Despite the name, this option can select “TLS” protocols as well as “SSL”. # # See the configuration option tls_level to automatically disable insecure # protocols. # #ssl_version = ssl23 # This option stands in the [Repository RemoteExample] section. # # TLS support level (optional). # # Specify the level of support that should be allowed for this repository. # Can be used to disallow insecure SSL versions as defined by IETF # (see https://tools.ietf.org/html/rfc6176). # # Supported values are: # tls_secure, tls_no_ssl, tls_compat (the default). # # This option is EXPERIMENTAL. # #tls_level = tls_compat # This option stands in the [Repository RemoteExample] section. # # Specify the port. If not specified, use a default port. # #remoteport = 993 # This option stands in the [Repository RemoteExample] section. # # Specify the remote user name. # remoteuser = username # This option stands in the [Repository RemoteExample] section. # # Specify the user to be authorized as. Sometimes we want to # authenticate with our login/password, but tell the server that we # really want to be treated as some other user; perhaps server will # allow us to do that (or maybe not). Some IMAP servers migrate # account names using this functionality: your credentials remain # intact, but remote identity changes. # # Currently this variable is used only for SASL PLAIN authentication # mechanism, so consider using auth_mechanisms to prioritize PLAIN # or even make it the only mechanism to be tried. # #remote_identity = authzuser # This option stands in the [Repository RemoteExample] section. # # Specify which authentication/authorization mechanisms we should try and the # order in which OfflineIMAP will try them. # # NOTE: any given mechanism will be tried ONLY if it is supported by the remote # IMAP server. # # Default value is ranged is from strongest to more weak ones. Due to technical # limitations, if GSSAPI is set, it will be tried first, no matter where it was # specified in the list. # #auth_mechanisms = GSSAPI, CRAM-MD5, XOAUTH2, PLAIN, LOGIN # This option stands in the [Repository RemoteExample] section. # # XOAuth2 authentication (for instance, to use with Gmail). # # This feature is currently EXPERIMENTAL (tested on Gmail only, but should work # with type = IMAP for compatible servers). # # Mandatory parameters are "oauth2_client_id", "oauth2_client_secret" and # "oauth2_refresh_token". See below to learn how to get those. # # Specify the OAuth2 client id and secret to use for the connection.. # Here's how to register an OAuth2 client for Gmail, as of 10-2-2016: # - Go to the Google developer console # https://console.developers.google.com/project # - Create a new project # - In API & Auth, select Credentials # - Setup the OAuth Consent Screen # - Then add Credentials of type OAuth 2.0 Client ID # - Choose application type Other; type in a name for your client # - You now have a client ID and client secret # #oauth2_client_id = YOUR_CLIENT_ID #oauth2_client_secret = YOUR_CLIENT_SECRET # Specify the refresh token to use for the connection to the mail server. 
# Here's an example of a way to get a refresh token: # - Clone this project: https://github.com/google/gmail-oauth2-tools # - Type the following command-line in a terminal and follow the instructions # python python/oauth2.py --generate_oauth2_token \ # --client_id=YOUR_CLIENT_ID --client_secret=YOUR_CLIENT_SECRET # #oauth2_refresh_token = REFRESH_TOKEN ########## Passwords # There are six ways to specify the password for the IMAP server: # # 1. No password at all specified in the config file. # If a matching entry is found in ~/.netrc (see netrc (5) for # information) this password will be used. Do note that netrc only # allows one entry per hostname. If there is no ~/.netrc file but # there is an /etc/netrc file, the password will instead be taken # from there. Otherwise you will be prompted for the password when # OfflineIMAP starts when using a UI that supports this. # # 2. The remote password stored in this file with the remotepass # option. Any '%' needs to be encoded as '%%'. Example: # remotepass = mypassword # # 3. The remote password stored as a single line in an external # file, which is referenced by the remotefile option. Example: # remotepassfile = ~/Password.IMAP.Account1 # # 4. With a preauth tunnel. With this method, you invoke an external # program that is guaranteed *NOT* to ask for a password, but rather # to read from stdin and write to stdout an IMAP procotol stream that # begins life in the PREAUTH state. When you use a tunnel, you do # NOT specify a user or password (if you do, they'll be ignored.) # Instead, you specify a preauthtunnel, as this example illustrates # for Courier IMAP on Debian: # preauthtunnel = ssh -q imaphost '/usr/bin/imapd ./Maildir' # # 5. If you are using Kerberos and have the Python Kerberos package # installed, you should not specify a remotepass. If the user has a # valid Kerberos TGT, OfflineIMAP will figure out the rest all by # itself, and fall back to password authentication if needed. # # 6. Using arbitrary python code. With this method, you invoke a # function from your pythonfile. To use this method assign the name # of the function to the variable 'remotepasseval'. Example: # remotepasseval = get_password("imap.example.net") # You can also query for the username: # remoteusereval = get_username("imap.example.net") # This method can be used to design more elaborate setups, e.g. by # querying the gnome-keyring via its python bindings. ########## Advanced settings # These options stands in the [Repository RemoteExample] section. # # Tunnels. There are two types: # # - preauth: they teleport your connection to the remote system # and you don't need to authenticate yourself there; the sole # fact that you succeeded to get the tunnel running is enough. # This tunnel type was explained above in the 'Passwords' section. # # - transport: the just provide the transport (probably encrypted) # to the IMAP server, but you still need to authenticate at the # IMAP server. # # Tunnels are currently working only with IMAP servers and their # derivatives (GMail currently). Additionally, for GMail accounts # preauth tunnel settings are ignored: we don't believe that there # are ways to preauthenticate at Google mail system IMAP servers. # # You must choose at most one tunnel type, be wise M'Lord! # #preauthtunnel = ssh -q imaphost '/usr/bin/imapd ./Maildir' #transporttunnel = openssl s_client -host myimap -port 993 -quiet # This option stands in the [Repository RemoteExample] section. 
# # Some IMAP servers need a "reference" which often refers to the "folder root". # # This is most commonly needed with UW IMAP, where you might need to specify the # directory in which your mail is stored. The 'reference' value will be prefixed # to all folder paths refering to that repository. E.g. accessing folder 'INBOX' # with "reference = Mail" will try to access Mail/INBOX. # # The nametrans and folderfilter functions will apply to the full path, # including the reference prefix. Most users will not need this. # #reference = Mail # This option stands in the [Repository RemoteExample] section. # # IMAP defines an encoding for non-ASCII ("international") characters. Enable # this option if you want to decode them to the nowadays ubiquitous UTF-8. # # Note that the IMAP 4rev1 specification (RFC 3501) allows both UTF-8 and # modified UTF-7 folder names. # # This option is disabled by default to retain compatibility with older versions # of offlineimap. # # This option is EXPERIMENTAL. # #decodefoldernames = no # This option stands in the [Repository RemoteExample] section. # # In between synchronisations, OfflineIMAP can monitor mailboxes for new # messages using the IDLE command. If you want to enable this, specify here the # folders you wish to monitor. IMAP protocol requires a separate connection for # each folder monitored in this way, so setting this option will force settings # for: # # - maxconnections: to be at least the number of folders you give # - holdconnectionopen: to be true # - keepalive: to be 29 minutes unless you specify otherwise # # This feature isn't complete and may well have problems. See the "Known Issues" # entry in the manual for more details. # # This option should return a Python list. For example # #idlefolders = ['INBOX', 'INBOX.Alerts'] # This option stands in the [Repository RemoteExample] section. # # OfflineIMAP can use a compressed connection to the IMAP server. # This can result in faster downloads for some cases. # #usecompression = yes # This option stands in the [Repository RemoteExample] section. # # OfflineIMAP can use multiple connections to the server in order # to perform multiple synchronization actions simultaneously. # This may place a higher burden on the server. In most cases, # setting this value to 2 or 3 will speed up the sync, but in some # cases, it may slow things down. The safe answer is 1. You should # probably never set it to a value more than 5. # #maxconnections = 2 # This option stands in the [Repository RemoteExample] section. # # OfflineIMAP normally closes IMAP server connections between refreshes if # the global option autorefresh is specified. If you wish it to keep the # connection open, set this to true. If not specified, the default is # false. Keeping the connection open means a faster sync start the # next time and may use fewer server resources on connection, but uses # more server memory. This setting has no effect if autorefresh is not set. # #holdconnectionopen = no # This option stands in the [Repository RemoteExample] section. # # If you want to have "keepalives" sent while waiting between syncs, specify the # amount of time IN SECONDS between keepalives here. Note that sometimes more # than this amount of time might pass, so don't make it tight. This setting has # no effect if autorefresh and holdconnectionopen are not both set. # #keepalive = 60 # This option stands in the [Repository RemoteExample] section. # # Normally, OfflineIMAP will expunge deleted messages from the server. 
You can # disable that if you wish. This means that OfflineIMAP will mark them deleted # on the server, but not actually delete them. You must use some other IMAP # client to delete them if you use this setting; otherwise, the messages will # just pile up there forever. Therefore, this setting is definitely NOT # recommended for a long term. # #expunge = no # This option stands in the [Repository RemoteExample] section. # # Specify whether to process all mail folders on the server, or only # those listed as "subscribed". # #subscribedonly = no # This option stands in the [Repository RemoteExample] section. # # You can specify a folder translator. This must be a eval-able. # # Python expression that takes a foldername arg and returns the new value. A # lambda function is suggested. # # WARNING: you MUST construct it so that it NEVER returns the same value for two # folders, UNLESS the second values are filtered out by folderfilter below. # Failure to follow this rule will result in undefined behavior. # # If you enable nametrans, you will likely need to set the reversed nametrans on # the other side. See the user documentation for details and use cases. They # are also online at: http://offlineimap.org/doc/nametrans.html # # This example below will remove "INBOX." from the leading edge of folders # (great for Courier IMAP users). # #nametrans = lambda foldername: re.sub('^INBOX\.', '', foldername) # # Using Courier remotely and want to duplicate its mailbox naming locally? Try # this: # #nametrans = lambda foldername: re.sub('^INBOX\.*', '.', foldername) # This option stands in the [Repository RemoteExample] section. # # Determines if folderfilter will be invoked on each run (dynamic folder # filtering) or filtering status will be determined at startup (default # behaviour). # #dynamic_folderfilter = False # This option stands in the [Repository RemoteExample] section. # # You can specify which folders to sync using the folderfilter setting. You can # provide any python function (e.g. a lambda function) which will be invoked for # each foldername. If the filter function returns True, the folder will be # synced, if it returns False, it. # # The folderfilter operates on the *UNTRANSLATED* name (before any nametrans # translation takes place). # # Example 1: synchronizing only INBOX and Sent. # #folderfilter = lambda foldername: foldername in ['INBOX', 'Sent'] # # Example 2: synchronizing everything except Trash. # #folderfilter = lambda foldername: foldername not in ['Trash'] # # Example 3: Using a regular expression to exclude Trash and all folders # containing the characters "Del". # #folderfilter = lambda foldername: not re.search('(^Trash$|Del)', foldername) # # If folderfilter is not specified, ALL remote folders will be synchronized. # # You can span multiple lines by indenting the others. (Use backslashes at the # end when required by Python syntax) For instance: # #folderfilter = lambda foldername: foldername in [ # 'INBOX', 'Sent Mail', # 'Deleted Items', 'Received'] # This option stands in the [Repository RemoteExample] section. # # You can specify folderincludes to include additional folders. It should # return a Python list. This might be used to include a folder that was # excluded by your folderfilter rule, to include a folder that your server does # not specify with its LIST option, or to include a folder that is outside your # basic reference. # # The 'reference' value will not be prefixed to this folder name, even if you # have specified one. 
For example: # #folderincludes = ['debian.user', 'debian.personal'] # This option stands in the [Repository RemoteExample] section. # # If you do not want to have any folders created on this repository, # set the createfolders variable to False, the default is True. Using # this feature you can e.g. disable the propagation of new folders to # the new repository. # #createfolders = True # This option stands in the [Repository RemoteExample] section. # # 'foldersort' determines how folders are sorted. # # This affects order of synchronization and mbnames. The expression should # return -1, 0, or 1, as the default Python cmp() does. The two arguments, x # and y, are strings representing the names of the folders to be sorted. The # sorting is applied *AFTER* nametrans, if any. The default is to sort IMAP # folders alphabetically (case-insensitive). Usually, you should never have to # modify this. To eg. reverse the sort: # #foldersort = lambda x, y: -cmp(x, y) # This option stands in the [Repository RemoteExample] section. # # Enable 1-way synchronization. When setting 'readonly' to True, this # repository will not be modified during synchronization. Usefull to # e.g. backup an IMAP server. The readonly setting can be applied to any # type of Repository (Maildir, Imap, etc). # #readonly = False [Repository GmailExample] # A repository using Gmail's IMAP interface. # # Any configuration parameter of "IMAP" type repositories can be used here. # Only "remoteuser" (or "remoteusereval" ) is mandatory. Default values for # other parameters are OK, and you should not need fiddle with those. # # The Gmail repository will use hard-coded values for "remotehost", # "remoteport", "tunnel" and "ssl". Any attempt to set those parameters will be # silently ignored. For details, see # # http://mail.google.com/support/bin/answer.py?answer=78799&topic=12814 # # To enable GMail labels synchronisation, set the option "synclabels" # in the corresponding "Account" section. # type = Gmail # This option stands in the [Repository GmailExample] section. # # Specify the Gmail user name. This is the only mandatory parameter. # remoteuser = username@gmail.com # This option stands in the [Repository GmailExample] section. # # The trash folder name may be different from [Gmail]/Trash due to localization. # You should look for the localized names of the spam folder too: "spamfolder" # tunable will help you to override the standard name. # # For example on German Gmail, this setting should be: # #trashfolder = [Gmail]/Papierkorb offlineimap-6.6.1/offlineimap.conf.minimal000066400000000000000000000005031264010144500205750ustar00rootroot00000000000000# Sample minimal config file. Copy this to ~/.offlineimaprc and edit to # get started fast. [general] accounts = Test [Account Test] localrepository = Local remoterepository = Remote [Repository Local] type = Maildir localfolders = ~/Test [Repository Remote] type = IMAP remotehost = examplehost remoteuser = jgoerzen offlineimap-6.6.1/offlineimap.py000077500000000000000000000024311264010144500166600ustar00rootroot00000000000000#!/usr/bin/env python # Startup from single-user installation # Copyright (C) 2002 - 2008 John Goerzen # # # This program is free software; you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation; either version 2 of the License, or # (at your option) any later version. 
# # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA import os import sys if not 'DEVELOPING_OFFLINEIMAP_PYTHON3_SUPPORT' in os.environ: if sys.version_info[0] > 2: sys.stderr.write("""IIMAPS! Sorry, OfflineIMAP currently doesn't support Python higher than 2.x. We're doing our best to bring in support for 3.x really soon. You can also join us at https://github.com/OfflineIMAP/offlineimap/ and help. """) sys.exit(1) from offlineimap import OfflineImap oi = OfflineImap() oi.run() offlineimap-6.6.1/offlineimap/000077500000000000000000000000001264010144500163035ustar00rootroot00000000000000offlineimap-6.6.1/offlineimap/CustomConfig.py000066400000000000000000000252671264010144500212710ustar00rootroot00000000000000# Copyright (C) 2003-2015 John Goerzen & contributors # # This program is free software; you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation; either version 2 of the License, or # (at your option) any later version. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA import os import re from sys import exc_info try: from ConfigParser import SafeConfigParser, Error except ImportError: #python3 from configparser import SafeConfigParser, Error from offlineimap.localeval import LocalEval class CustomConfigParser(SafeConfigParser): def __init__(self): SafeConfigParser.__init__(self) self.localeval = None def getdefault(self, section, option, default, *args, **kwargs): """Same as config.get, but returns the value of `default` if there is no such option specified.""" if self.has_option(section, option): return self.get(*(section, option) + args, **kwargs) else: return default def getdefaultint(self, section, option, default, *args, **kwargs): """Same as config.getint, but returns the value of `default` if there is no such option specified.""" if self.has_option(section, option): return self.getint(*(section, option) + args, **kwargs) else: return default def getdefaultfloat(self, section, option, default, *args, **kwargs): """Same as config.getfloat, but returns the value of `default` if there is no such option specified.""" if self.has_option(section, option): return self.getfloat(*(section, option) + args, **kwargs) else: return default def getdefaultboolean(self, section, option, default, *args, **kwargs): """Same as config.getboolean, but returns the value of `default` if there is no such option specified.""" if self.has_option(section, option): return self.getboolean(*(section, option) + args, **kwargs) else: return default def getlist(self, section, option, separator_re): """Parses option as the list of values separated by the given regexp.""" try: val = self.get(section, option).strip() return re.split(separator_re, val) 
except re.error as e: raise Error("Bad split regexp '%s': %s" % \ (separator_re, e)), None, exc_info()[2] def getdefaultlist(self, section, option, default, separator_re): """Same as getlist, but returns the value of `default` if there is no such option specified.""" if self.has_option(section, option): return self.getlist(*(section, option, separator_re)) else: return default def getmetadatadir(self): xforms = [os.path.expanduser, os.path.expandvars] d = self.getdefault("general", "metadata", "~/.offlineimap") metadatadir = self.apply_xforms(d, xforms) if not os.path.exists(metadatadir): os.mkdir(metadatadir, 0o700) return metadatadir def getlocaleval(self): # We already loaded pythonfile, so return this copy. if self.localeval is not None: return self.localeval xforms = [os.path.expanduser, os.path.expandvars] if self.has_option("general", "pythonfile"): path = self.get("general", "pythonfile") path = self.apply_xforms(path, xforms) else: path = None self.localeval = LocalEval(path) return self.localeval def getsectionlist(self, key): """Returns a list of sections that start with (str) key + " ". That is, if key is "Account", returns all section names that start with "Account ", but strips off the "Account ". For instance, for "Account Test", returns "Test".""" key = key + ' ' return [x[len(key):] for x in self.sections() \ if x.startswith(key)] def set_if_not_exists(self, section, option, value): """Set a value if it does not exist yet. This allows to set default if the user has not explicitly configured anything.""" if not self.has_option(section, option): self.set(section, option, value) def apply_xforms(self, string, transforms): """Applies set of transformations to a string. Arguments: - string: source string; if None, then no processing will take place. - transforms: iterable that returns transformation function on each turn. Returns transformed string.""" if string == None: return None for f in transforms: string = f(string) return string def CustomConfigDefault(): """Just a constant that won't occur anywhere else. This allows us to differentiate if the user has passed in any default value to the getconf* functions in ConfigHelperMixin derived classes.""" pass class ConfigHelperMixin: """Allow comfortable retrieving of config values pertaining to a section. If a class inherits from cls:`ConfigHelperMixin`, it needs to provide 2 functions: - meth:`getconfig` (returning a CustomConfigParser object) - and meth:`getsection` (returning a string which represents the section to look up). All calls to getconf* will then return the configuration values for the CustomConfigParser object in the specific section. """ def _confighelper_runner(self, option, default, defaultfunc, mainfunc, *args): """Returns configuration or default value for option that contains in section identified by getsection(). Arguments: - option: name of the option to retrieve; - default: governs which function we will call. * When CustomConfigDefault is passed, we will call the mainfunc. * When any other value is passed, we will call the defaultfunc and the value of `default` will be passed as the third argument to this function. - defaultfunc and mainfunc: processing helpers. - args: additional trailing arguments that will be passed to all processing helpers. 
""" lst = [self.getsection(), option] if default == CustomConfigDefault: return mainfunc(*(lst + list(args))) else: lst.append(default) return defaultfunc(*(lst + list(args))) def getconfig(self): """Returns CustomConfigParser object that we will use for all our actions. Must be overriden in all classes that use this mix-in.""" raise NotImplementedError("ConfigHelperMixin.getconfig() " "is to be overriden") def getsection(self): """Returns name of configuration section in which our class keeps its configuration. Must be overriden in all classes that use this mix-in.""" raise NotImplementedError("ConfigHelperMixin.getsection() " "is to be overriden") def getconf(self, option, default = CustomConfigDefault): """Retrieves string from the configuration. Arguments: - option: option name whose value is to be retrieved; - default: default return value if no such option exists. """ return self._confighelper_runner(option, default, self.getconfig().getdefault, self.getconfig().get) def getconf_xform(self, option, xforms, default = CustomConfigDefault): """Retrieves string from the configuration transforming the result. Arguments: - option: option name whose value is to be retrieved; - xforms: iterable that returns transform functions to be applied to the value of the option, both retrieved and default one; - default: default value for string if no such option exists. """ value = self.getconf(option, default) return self.getconfig().apply_xforms(value, xforms) def getconfboolean(self, option, default = CustomConfigDefault): """Retrieves boolean value from the configuration. Arguments: - option: option name whose value is to be retrieved; - default: default return value if no such option exists. """ return self._confighelper_runner(option, default, self.getconfig().getdefaultboolean, self.getconfig().getboolean) def getconfint(self, option, default = CustomConfigDefault): """ Retrieves integer value from the configuration. Arguments: - option: option name whose value is to be retrieved; - default: default return value if no such option exists. """ return self._confighelper_runner(option, default, self.getconfig().getdefaultint, self.getconfig().getint) def getconffloat(self, option, default = CustomConfigDefault): """Retrieves floating-point value from the configuration. Arguments: - option: option name whose value is to be retrieved; - default: default return value if no such option exists. """ return self._confighelper_runner(option, default, self.getconfig().getdefaultfloat, self.getconfig().getfloat) def getconflist(self, option, separator_re, default = CustomConfigDefault): """Retrieves strings from the configuration and splits it into the list of strings. Arguments: - option: option name whose value is to be retrieved; - separator_re: regular expression for separator to be used for split operation; - default: default return value if no such option exists. 
""" return self._confighelper_runner(option, default, self.getconfig().getdefaultlist, self.getconfig().getlist, separator_re) offlineimap-6.6.1/offlineimap/__init__.py000066400000000000000000000015131264010144500204140ustar00rootroot00000000000000__all__ = ['OfflineImap'] __productname__ = 'OfflineIMAP' __version__ = "6.6.1" __revision__ = "" __bigversion__ = __version__ + __revision__ __copyright__ = "Copyright 2002-2015 John Goerzen & contributors" __author__ = "John Goerzen" __author_email__= "offlineimap-project@lists.alioth.debian.org" __description__ = "Disconnected Universal IMAP Mail Synchronization/Reader Support" __license__ = "Licensed under the GNU GPL v2 or any later version (with an OpenSSL exception)" __bigcopyright__ = """%(__productname__)s %(__bigversion__)s %(__license__)s""" % locals() __homepage__ = "http://offlineimap.org" banner = __bigcopyright__ from offlineimap.error import OfflineImapError # put this last, so we don't run into circular dependencies using # e.g. offlineimap.__version__. from offlineimap.init import OfflineImap offlineimap-6.6.1/offlineimap/accounts.py000066400000000000000000000563511264010144500205060ustar00rootroot00000000000000# Copyright (C) 2003-2015 John Goerzen & contributors # # This program is free software; you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation; either version 2 of the License, or # (at your option) any later version. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA from subprocess import Popen, PIPE from threading import Event import os import time from sys import exc_info import traceback from offlineimap import mbnames, CustomConfig, OfflineImapError from offlineimap import globals from offlineimap.repository import Repository from offlineimap.ui import getglobalui from offlineimap.threadutil import InstanceLimitedThread try: import fcntl except: pass # ok if this fails, we can do without # FIXME: spaghetti code alert! def getaccountlist(customconfig): # Account names in a list. return customconfig.getsectionlist('Account') # FIXME: spaghetti code alert! def AccountListGenerator(customconfig): """Returns a list of instanciated Account class, one per account name.""" return [Account(customconfig, accountname) for accountname in getaccountlist(customconfig)] # FIXME: spaghetti code alert! def AccountHashGenerator(customconfig): """Returns a dict of instanciated Account class with the account name as key.""" retval = {} for item in AccountListGenerator(customconfig): retval[item.getname()] = item return retval class Account(CustomConfig.ConfigHelperMixin): """Represents an account (ie. 2 repositories) to sync. Most of the time you will actually want to use the derived :class:`accounts.SyncableAccount` which contains all functions used for syncing an account.""" # Signal gets set when we should stop looping. abort_soon_signal = Event() # Signal gets set on CTRL-C/SIGTERM. abort_NOW_signal = Event() def __init__(self, config, name): """ :param config: Representing the offlineimap configuration file. 
:type config: :class:`offlineimap.CustomConfig.CustomConfigParser` :param name: A (str) string denoting the name of the Account as configured. """ self.config = config self.name = name self.metadatadir = config.getmetadatadir() self.localeval = config.getlocaleval() # current :mod:`offlineimap.ui`, can be used for logging: self.ui = getglobalui() self.refreshperiod = self.getconffloat('autorefresh', 0.0) # should we run in "dry-run" mode? self.dryrun = self.config.getboolean('general', 'dry-run') self.quicknum = 0 if self.refreshperiod == 0.0: self.refreshperiod = None def getlocaleval(self): return self.localeval # Interface from CustomConfig.ConfigHelperMixin def getconfig(self): return self.config def getname(self): return self.name def __str__(self): return self.name def getaccountmeta(self): return os.path.join(self.metadatadir, 'Account-' + self.name) # Interface from CustomConfig.ConfigHelperMixin def getsection(self): return 'Account ' + self.getname() @classmethod def set_abort_event(cls, config, signum): """Set skip sleep/abort event for all accounts. If we want to skip a current (or the next) sleep, or if we want to abort an autorefresh loop, the main thread can use set_abort_event() to send the corresponding signal. Signum = 1 implies that we want all accounts to abort or skip the current or next sleep phase. Signum = 2 will end the autorefresh loop, ie all accounts will return after they finished a sync. signum=3 means, abort NOW, e.g. on SIGINT or SIGTERM. This is a class method, it will send the signal to all accounts. """ if signum == 1: # resync signal, set config option for all accounts for acctsection in getaccountlist(config): config.set('Account ' + acctsection, "skipsleep", '1') elif signum == 2: # don't autorefresh anymore cls.abort_soon_signal.set() elif signum == 3: # abort ASAP cls.abort_NOW_signal.set() def get_abort_event(self): """Checks if an abort signal had been sent. If the 'skipsleep' config option for this account had been set, with `set_abort_event(config, 1)` it will get cleared in this function. Ie, we will only skip one sleep and not all. :returns: True, if the main thread had called :meth:`set_abort_event` earlier, otherwise 'False'. """ skipsleep = self.getconfboolean("skipsleep", 0) if skipsleep: self.config.set(self.getsection(), "skipsleep", '0') return skipsleep or Account.abort_soon_signal.is_set() or \ Account.abort_NOW_signal.is_set() def _sleeper(self): """Sleep if the account is set to autorefresh. :returns: 0:timeout expired, 1: canceled the timer, 2:request to abort the program, 100: if configured to not sleep at all. 
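
        A sketch of how ``syncrunner()`` (below) consumes this value; both 2
        (abort requested) and 100 (autorefresh not configured) stop the
        account loop::

            if looping and self._sleeper() >= 2:
                looping = 0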
""" if not self.refreshperiod: return 100 kaobjs = [] if hasattr(self, 'localrepos'): kaobjs.append(self.localrepos) if hasattr(self, 'remoterepos'): kaobjs.append(self.remoterepos) for item in kaobjs: item.startkeepalive() refreshperiod = int(self.refreshperiod * 60) sleepresult = self.ui.sleep(refreshperiod, self) # Cancel keepalive for item in kaobjs: item.stopkeepalive() if sleepresult: if Account.abort_soon_signal.is_set() or \ Account.abort_NOW_signal.is_set(): return 2 self.quicknum = 0 return 1 return 0 def serverdiagnostics(self): """Output diagnostics for all involved repositories.""" remote_repo = Repository(self, 'remote') local_repo = Repository(self, 'local') #status_repo = Repository(self, 'status') self.ui.serverdiagnostics(remote_repo, 'Remote') self.ui.serverdiagnostics(local_repo, 'Local') #self.ui.serverdiagnostics(statusrepos, 'Status') class SyncableAccount(Account): """A syncable email account connecting 2 repositories. Derives from :class:`accounts.Account` but contains the additional functions :meth:`syncrunner`, :meth:`sync`, :meth:`syncfolders`, used for syncing.""" def __init__(self, *args, **kwargs): Account.__init__(self, *args, **kwargs) self._lockfd = None self._lockfilepath = os.path.join( self.config.getmetadatadir(), "%s.lock"% self) def __lock(self): """Lock the account, throwing an exception if it is locked already.""" self._lockfd = open(self._lockfilepath, 'w') try: fcntl.lockf(self._lockfd, fcntl.LOCK_EX|fcntl.LOCK_NB) except NameError: #fcntl not available (Windows), disable file locking... :( pass except IOError: self._lockfd.close() raise OfflineImapError("Could not lock account %s. Is another " "instance using this account?"% self, OfflineImapError.ERROR.REPO), None, exc_info()[2] def _unlock(self): """Unlock the account, deleting the lock file""" #If we own the lock file, delete it if self._lockfd and not self._lockfd.closed: self._lockfd.close() try: os.unlink(self._lockfilepath) except OSError: pass # Failed to delete for some reason. def syncrunner(self): self.ui.registerthread(self) try: accountmetadata = self.getaccountmeta() if not os.path.exists(accountmetadata): os.mkdir(accountmetadata, 0o700) self.remoterepos = Repository(self, 'remote') self.localrepos = Repository(self, 'local') self.statusrepos = Repository(self, 'status') except OfflineImapError as e: self.ui.error(e, exc_info()[2]) if e.severity >= OfflineImapError.ERROR.CRITICAL: raise return # Loop account sync if needed (bail out after 3 failures) looping = 3 while looping: self.ui.acct(self) try: self.__lock() self.__sync() except (KeyboardInterrupt, SystemExit): raise except OfflineImapError as e: # Stop looping and bubble up Exception if needed. if e.severity >= OfflineImapError.ERROR.REPO: if looping: looping -= 1 if e.severity >= OfflineImapError.ERROR.CRITICAL: raise self.ui.error(e, exc_info()[2]) except Exception as e: self.ui.error(e, exc_info()[2], msg= "While attempting to sync account '%s'"% self) else: # after success sync, reset the looping counter to 3 if self.refreshperiod: looping = 3 finally: self.ui.acctdone(self) self._unlock() if looping and self._sleeper() >= 2: looping = 0 def get_local_folder(self, remotefolder): """Return the corresponding local folder for a given remotefolder.""" return self.localrepos.getfolder( remotefolder.getvisiblename(). replace(self.remoterepos.getsep(), self.localrepos.getsep())) def __sync(self): """Synchronize the account once, then return. 
Assumes that `self.remoterepos`, `self.localrepos`, and `self.statusrepos` has already been populated, so it should only be called from the :meth:`syncrunner` function.""" folderthreads = [] hook = self.getconf('presynchook', '') self.callhook(hook) quickconfig = self.getconfint('quick', 0) if quickconfig < 0: quick = True elif quickconfig > 0: if self.quicknum == 0 or self.quicknum > quickconfig: self.quicknum = 1 quick = False else: self.quicknum = self.quicknum + 1 quick = True else: quick = False try: remoterepos = self.remoterepos localrepos = self.localrepos statusrepos = self.statusrepos # init repos with list of folders, so we have them (and the # folder delimiter etc) remoterepos.getfolders() localrepos.getfolders() remoterepos.sync_folder_structure(localrepos, statusrepos) # replicate the folderstructure between REMOTE to LOCAL if not localrepos.getconfboolean('readonly', False): self.ui.syncfolders(remoterepos, localrepos) # iterate through all folders on the remote repo and sync for remotefolder in remoterepos.getfolders(): # check for CTRL-C or SIGTERM if Account.abort_NOW_signal.is_set(): break if not remotefolder.sync_this: self.ui.debug('', "Not syncing filtered folder '%s'" "[%s]"% (remotefolder, remoterepos)) continue # Ignore filtered folder localfolder = self.get_local_folder(remotefolder) if not localfolder.sync_this: self.ui.debug('', "Not syncing filtered folder '%s'" "[%s]"% (localfolder, localfolder.repository)) continue # Ignore filtered folder if not globals.options.singlethreading: thread = InstanceLimitedThread(\ instancename = 'FOLDER_' + self.remoterepos.getname(), target = syncfolder, name = "Folder %s [acc: %s]"% (remotefolder.getexplainedname(), self), args = (self, remotefolder, quick)) thread.start() folderthreads.append(thread) else: syncfolder(self, remotefolder, quick) # wait for all threads to finish for thr in folderthreads: thr.join() # Write out mailbox names if required and not in dry-run mode if not self.dryrun: mbnames.write(False) localrepos.forgetfolders() remoterepos.forgetfolders() except: #error while syncing. Drop all connections that we have, they #might be bogus by now (e.g. after suspend) localrepos.dropconnections() remoterepos.dropconnections() raise else: # sync went fine. Hold or drop depending on config localrepos.holdordropconnections() remoterepos.holdordropconnections() hook = self.getconf('postsynchook', '') self.callhook(hook) def callhook(self, cmd): # check for CTRL-C or SIGTERM and run postsynchook if Account.abort_NOW_signal.is_set(): return if not cmd: return try: self.ui.callhook("Calling hook: " + cmd) if self.dryrun: # don't if we are in dry-run mode return p = Popen(cmd, shell=True, stdin=PIPE, stdout=PIPE, stderr=PIPE, close_fds=True) r = p.communicate() self.ui.callhook("Hook stdout: %s\nHook stderr:%s\n"% r) self.ui.callhook("Hook return code: %d"% p.returncode) except (KeyboardInterrupt, SystemExit): raise except Exception as e: self.ui.error(e, exc_info()[2], msg="Calling hook") def syncfolder(account, remotefolder, quick): """Synchronizes given remote folder for the specified account. Filtered folders on the remote side will not invoke this function.""" def check_uid_validity(localfolder, remotefolder, statusfolder): # If either the local or the status folder has messages and # there is a UID validity problem, warn and abort. If there are # no messages, UW IMAPd loses UIDVALIDITY. But we don't really # need it if both local folders are empty. So, in that case, # just save it off. 
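        # Illustrative outcome (sketch): if the cached UIDVALIDITY is 1234
        # but the server now reports 5678, check_uidvalidity() returns False,
        # the problem is reported via ui.validityproblem() and syncing of
        # this folder is aborted instead of risking mismatched UIDs.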
if localfolder.getmessagecount() > 0 or statusfolder.getmessagecount() > 0: if not localfolder.check_uidvalidity(): ui.validityproblem(localfolder) localfolder.repository.restore_atime() return if not remotefolder.check_uidvalidity(): ui.validityproblem(remotefolder) localrepos.restore_atime() return else: # Both folders empty, just save new UIDVALIDITY localfolder.save_uidvalidity() remotefolder.save_uidvalidity() def save_min_uid(folder, min_uid): uidfile = folder.get_min_uid_file() fd = open(uidfile, 'wt') fd.write(str(min_uid) + "\n") fd.close() def cachemessagelists_upto_date(localfolder, remotefolder, date): """ Returns messages with uid > min(uids of messages newer than date).""" localfolder.cachemessagelist(min_date=date) check_uid_validity(localfolder, remotefolder, statusfolder) # local messagelist had date restriction applied already. Restrict # sync to messages with UIDs >= min_uid from this list. # # local messagelist might contain new messages (with uid's < 0). positive_uids = filter( lambda uid: uid > 0, localfolder.getmessageuidlist()) if len(positive_uids) > 0: remotefolder.cachemessagelist(min_uid=min(positive_uids)) else: # No messages with UID > 0 in range in localfolder. # date restriction was applied with respect to local dates but # remote folder timezone might be different from local, so be # safe and make sure the range isn't bigger than in local. remotefolder.cachemessagelist( min_date=time.gmtime(time.mktime(date) + 24*60*60)) def cachemessagelists_startdate(new, partial, date): """ Retrieve messagelists when startdate has been set for the folder 'partial'. Idea: suppose you want to clone the messages after date in one account (partial) to a new one (new). If new is empty, then copy messages in partial newer than date to new, and keep track of the min uid. On subsequent syncs, sync all the messages in new against those after that min uid in partial. This is a partial replacement for maxage in the IMAP-IMAP sync case, where maxage doesn't work: the UIDs of the messages in localfolder might not be in the same order as those of corresponding messages in remotefolder, so if L in local corresponds to R in remote, the ranges [L, ...] and [R, ...] might not correspond. But, if we're cloning a folder into a new one, [min_uid, ...] does correspond to [1, ...]. This is just for IMAP-IMAP. For Maildir-IMAP, use maxage instead. """ new.cachemessagelist() min_uid = partial.retrieve_min_uid() if min_uid == None: # min_uid file didn't exist if len(new.getmessageuidlist()) > 0: raise OfflineImapError("To use startdate on Repository %s, " "Repository %s must be empty"% (partial.repository.name, new.repository.name), OfflineImapError.ERROR.MESSAGE) else: partial.cachemessagelist(min_date=date) # messagelist.keys() instead of getuidmessagelist() because in # the UID mapped case we want the actual local UIDs, not their # remote counterparts positive_uids = filter( lambda uid: uid > 0, partial.messagelist.keys()) if len(positive_uids) > 0: min_uid = min(positive_uids) else: min_uid = 1 save_min_uid(partial, min_uid) else: partial.cachemessagelist(min_uid=min_uid) remoterepos = account.remoterepos localrepos = account.localrepos statusrepos = account.statusrepos ui = getglobalui() ui.registerthread(account) try: # Load local folder. localfolder = account.get_local_folder(remotefolder) # Write the mailboxes mbnames.add(account.name, localfolder.getname(), localrepos.getlocalroot()) # Load status folder. statusfolder = statusrepos.getfolder(remotefolder.getvisiblename(). 
replace(remoterepos.getsep(), statusrepos.getsep())) if localfolder.get_uidvalidity() == None: # This is a new folder, so delete the status cache to be # sure we don't have a conflict. # TODO: This does not work. We always return a value, need # to rework this... statusfolder.deletemessagelist() statusfolder.cachemessagelist() # Load local folder. ui.syncingfolder(remoterepos, remotefolder, localrepos, localfolder) # Retrieve messagelists, taking into account age-restriction # options maxage = localfolder.getmaxage() localstart = localfolder.getstartdate() remotestart = remotefolder.getstartdate() if (maxage != None) + (localstart != None) + (remotestart != None) > 1: raise OfflineImapError("You can set at most one of the " "following: maxage, startdate (for the local folder), " "startdate (for the remote folder)", OfflineImapError.ERROR.REPO), None, exc_info()[2] if (maxage != None or localstart or remotestart) and quick: # IMAP quickchanged isn't compatible with options that # involve restricting the messagelist, since the "quick" # check can only retrieve a full list of UIDs in the folder. ui.warn("Quick syncs (-q) not supported in conjunction " "with maxage or startdate; ignoring -q.") if maxage != None: cachemessagelists_upto_date(localfolder, remotefolder, maxage) elif localstart != None: cachemessagelists_startdate(remotefolder, localfolder, localstart) check_uid_validity(localfolder, remotefolder, statusfolder) elif remotestart != None: cachemessagelists_startdate(localfolder, remotefolder, remotestart) check_uid_validity(localfolder, remotefolder, statusfolder) else: localfolder.cachemessagelist() if quick: if (not localfolder.quickchanged(statusfolder) and not remotefolder.quickchanged(statusfolder)): ui.skippingfolder(remotefolder) localrepos.restore_atime() return check_uid_validity(localfolder, remotefolder, statusfolder) remotefolder.cachemessagelist() # Synchronize remote changes. 
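        # A sketch of the two passes that follow: remote -> local is skipped
        # if the local repository is configured readonly, local -> remote is
        # skipped if the remote repository is configured readonly; the status
        # folder is saved afterwards in either case.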
if not localrepos.getconfboolean('readonly', False): ui.syncingmessages(remoterepos, remotefolder, localrepos, localfolder) remotefolder.syncmessagesto(localfolder, statusfolder) else: ui.debug('imap', "Not syncing to read-only repository '%s'" \ % localrepos.getname()) # Synchronize local changes if not remoterepos.getconfboolean('readonly', False): ui.syncingmessages(localrepos, localfolder, remoterepos, remotefolder) localfolder.syncmessagesto(remotefolder, statusfolder) else: ui.debug('', "Not syncing to read-only repository '%s'" \ % remoterepos.getname()) statusfolder.save() localrepos.restore_atime() except (KeyboardInterrupt, SystemExit): raise except OfflineImapError as e: # bubble up severe Errors, skip folder otherwise if e.severity > OfflineImapError.ERROR.FOLDER: raise else: ui.error(e, exc_info()[2], msg = "Aborting sync, folder '%s' " "[acc: '%s']" % (localfolder, account)) except Exception as e: ui.error(e, msg = "ERROR in syncfolder for %s folder %s: %s"% (account, remotefolder.getvisiblename(), traceback.format_exc())) finally: for folder in ["statusfolder", "localfolder", "remotefolder"]: if folder in locals(): locals()[folder].dropmessagelistcache() offlineimap-6.6.1/offlineimap/emailutil.py000066400000000000000000000030201264010144500206350ustar00rootroot00000000000000# Some useful functions to extract data out of emails # Copyright (C) 2002-2015 John Goerzen & contributors # # This program is free software; you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation; either version 2 of the License, or # (at your option) any later version. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA import email from email.Parser import Parser as MailParser def get_message_date(content, header='Date'): """Parses mail and returns resulting timestamp. :param header: the header to extract date from; :returns: timestamp or `None` in the case of failure. """ message = MailParser().parsestr(content, True) dateheader = message.get(header) # parsedate_tz returns a 10-tuple that can be passed to mktime_tz # Will be None if missing or not in a valid format. Note that # indexes 6, 7, and 8 of the result tuple are not usable. datetuple = email.utils.parsedate_tz(dateheader) if datetuple is None: return None return email.utils.mktime_tz(datetuple) offlineimap-6.6.1/offlineimap/error.py000066400000000000000000000026371264010144500200160ustar00rootroot00000000000000class OfflineImapError(Exception): """An Error during offlineimap synchronization""" class ERROR: """Severity level of an Exception * **MESSAGE**: Abort the current message, but continue with folder * **FOLDER_RETRY**: Error syncing folder, but do retry * **FOLDER**: Abort folder sync, but continue with next folder * **REPO**: Abort repository sync, continue with next account * **CRITICAL**: Immediately exit offlineimap """ MESSAGE, FOLDER_RETRY, FOLDER, REPO, CRITICAL = 0, 10, 15, 20, 30 def __init__(self, reason, severity, errcode=None): """ :param reason: Human readable string suitable for logging :param severity: denoting which operations should be aborted. 
E.g. a ERROR.MESSAGE can occur on a faulty message, but a ERROR.REPO occurs when the server is offline. :param errcode: optional number denoting a predefined error situation (which let's us exit with a predefined exit value). So far, no errcodes have been defined yet. :type severity: OfflineImapError.ERROR value""" self.errcode = errcode self.severity = severity # 'reason' is stored in the Exception().args tuple. super(OfflineImapError, self).__init__(reason) @property def reason(self): return self.args[0] offlineimap-6.6.1/offlineimap/folder/000077500000000000000000000000001264010144500175565ustar00rootroot00000000000000offlineimap-6.6.1/offlineimap/folder/Base.py000066400000000000000000001200241264010144500210010ustar00rootroot00000000000000# Base folder support # Copyright (C) 2002-2015 John Goerzen & contributors # # This program is free software; you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation; either version 2 of the License, or # (at your option) any later version. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA import os.path import re import time from sys import exc_info from offlineimap import threadutil from offlineimap import globals from offlineimap.ui import getglobalui from offlineimap.error import OfflineImapError import offlineimap.accounts class BaseFolder(object): def __init__(self, name, repository): """ :param name: Path & name of folder minus root or reference :param repository: Repository() in which the folder is. """ self.ui = getglobalui() # Save original name for folderfilter operations self.ffilter_name = name # Top level dir name is always '' self.root = None self.name = name if not name == self.getsep() else '' self.newmail_hook = None # Only set the newmail_hook if the IMAP folder is named 'INBOX' if self.name == 'INBOX': self.newmail_hook = repository.newmail_hook self.have_newmail = False self.repository = repository self.visiblename = repository.nametrans(name) # In case the visiblename becomes '.' or '/' (top-level) we use # '' as that is the name that e.g. the Maildir scanning will # return for the top-level dir. if self.visiblename == self.getsep(): self.visiblename = '' self.config = repository.getconfig() utime_from_header_global = self.config.getdefaultboolean( "general", "utime_from_header", False) repo = "Repository " + repository.name self._utime_from_header = self.config.getdefaultboolean(repo, "utime_from_header", utime_from_header_global) # Do we need to use mail timestamp for filename prefix? 
filename_use_mail_timestamp_global = self.config.getdefaultboolean( "general", "filename_use_mail_timestamp", False) repo = "Repository " + repository.name self._filename_use_mail_timestamp = self.config.getdefaultboolean(repo, "filename_use_mail_timestamp", filename_use_mail_timestamp_global) # Determine if we're running static or dynamic folder filtering # and check filtering status self._dynamic_folderfilter = self.config.getdefaultboolean( repo, "dynamic_folderfilter", False) self._sync_this = repository.should_sync_folder(self.ffilter_name) if self._dynamic_folderfilter: self.ui.debug('', "Running dynamic folder filtering on '%s'[%s]"% (self.ffilter_name, repository)) elif not self._sync_this: self.ui.debug('', "Filtering out '%s'[%s] due to folderfilter"% (self.ffilter_name, repository)) # Passes for syncmessagesto self.syncmessagesto_passes = [('copying messages' , self.__syncmessagesto_copy), ('deleting messages' , self.__syncmessagesto_delete), ('syncing flags' , self.__syncmessagesto_flags)] def getname(self): """Returns name""" return self.name def __str__(self): # FIMXE: remove calls of this. We have getname(). return self.name @property def accountname(self): """Account name as string""" return self.repository.accountname @property def sync_this(self): """Should this folder be synced or is it e.g. filtered out?""" if not self._dynamic_folderfilter: return self._sync_this else: return self.repository.should_sync_folder(self.ffilter_name) @property def utime_from_header(self): return self._utime_from_header def suggeststhreads(self): """Returns true if this folder suggests using threads for actions; false otherwise. Probably only IMAP will return true.""" return 0 def waitforthread(self): """Implements method that waits for thread to be usable. Should be implemented only for folders that suggest threads.""" raise NotImplementedError # XXX: we may need someting like supports_quickstatus() to check # XXX: if user specifies 'quick' flag for folder that doesn't # XXX: support quick status queries, so one believes that quick # XXX: status checks will be done, but it won't really be so. def quickchanged(self, statusfolder): """ Runs quick check for folder changes and returns changed status: True -- changed, False -- not changed. :param statusfolder: keeps track of the last known folder state. """ return True def getcopyinstancelimit(self): """For threading folders, returns the instancelimitname for InstanceLimitedThreads.""" raise NotImplementedError def storesmessages(self): """Should be true for any backend that actually saves message bodies. (Almost all of them). False for the LocalStatus backend. 
Saves us from having to slurp up messages just for localstatus purposes.""" return 1 def getvisiblename(self): """The nametrans-transposed name of the folder's name.""" return self.visiblename def getexplainedname(self): """Name that shows both real and nametrans-mangled values.""" if self.name == self.visiblename: return self.name else: return "%s [remote name %s]"% (self.visiblename, self.name) def getrepository(self): """Returns the repository object that this folder is within.""" return self.repository def getroot(self): """Returns the root of the folder, in a folder-specific fashion.""" return self.root def getsep(self): """Returns the separator for this folder type.""" return self.sep def getfullname(self): if self.getroot(): return self.getroot() + self.getsep() + self.getname() else: return self.getname() def getfolderbasename(self): """Return base file name of file to store Status/UID info in.""" if not self.name: basename = '.' else: # Avoid directory hierarchies and file names such as '/'. basename = self.name.replace('/', '.') # Replace with literal 'dot' if final path name is '.' as '.' is # an invalid file name. basename = re.sub('(^|\/)\.$','\\1dot', basename) return basename def check_uidvalidity(self): """Tests if the cached UIDVALIDITY match the real current one If required it saves the UIDVALIDITY value. In this case the function is not threadsafe. So don't attempt to call it from concurrent threads. :returns: Boolean indicating the match. Returns True in case it implicitely saved the UIDVALIDITY.""" if self.get_saveduidvalidity() != None: return self.get_saveduidvalidity() == self.get_uidvalidity() else: self.save_uidvalidity() return True def _getuidfilename(self): """provides UIDVALIDITY cache filename for class internal purposes.""" return os.path.join(self.repository.getuiddir(), self.getfolderbasename()) def get_saveduidvalidity(self): """Return the previously cached UIDVALIDITY value :returns: UIDVALIDITY as (long) number or None, if None had been saved yet.""" if hasattr(self, '_base_saved_uidvalidity'): return self._base_saved_uidvalidity uidfilename = self._getuidfilename() if not os.path.exists(uidfilename): self._base_saved_uidvalidity = None else: file = open(uidfilename, "rt") self._base_saved_uidvalidity = long(file.readline().strip()) file.close() return self._base_saved_uidvalidity def save_uidvalidity(self): """Save the UIDVALIDITY value of the folder to the cache This function is not threadsafe, so don't attempt to call it from concurrent threads.""" newval = self.get_uidvalidity() uidfilename = self._getuidfilename() with open(uidfilename + ".tmp", "wt") as file: file.write("%d\n"% newval) os.rename(uidfilename + ".tmp", uidfilename) self._base_saved_uidvalidity = newval def get_uidvalidity(self): """Retrieve the current connections UIDVALIDITY value This function needs to be implemented by each Backend :returns: UIDVALIDITY as a (long) number""" raise NotImplementedError def cachemessagelist(self): """Reads the message list from disk or network and stores it in memory for later use. This list will not be re-read from disk or memory unless this function is called again.""" raise NotImplementedError def ismessagelistempty(self): """Empty everythings we know about messages.""" if len(self.messagelist.keys()) < 1: return True return False def dropmessagelistcache(self): """Empty everythings we know about messages.""" self.messagelist = {} def getmessagelist(self): """Gets the current message list. 
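
        A typical call pattern in the sync code (sketch)::

            folder.cachemessagelist()
            for uid, item in folder.getmessagelist().items():
                flags = item['flags']
                # the dict layout of 'item' comes from
                # msglist_item_initializer(), e.g. the Gmail backend uses
                # {'uid': ..., 'flags': set(), 'labels': set(), 'time': 0}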
You must call cachemessagelist() before calling this function!""" raise NotImplementedError def msglist_item_initializer(self, uid): """Returns value for empty messagelist element with given UID. This function must initialize all fields of messagelist item and must be called every time when one creates new messagelist entry to ensure that all fields that must be present are present.""" raise NotImplementedError def uidexists(self, uid): """Returns True if uid exists""" return uid in self.getmessagelist() def getmessageuidlist(self): """Gets a list of UIDs. You may have to call cachemessagelist() before calling this function!""" return self.getmessagelist().keys() def getmessagecount(self): """Gets the number of messages.""" return len(self.getmessagelist()) def getmessage(self, uid): """Returns the content of the specified message.""" raise NotImplementedError def getmaxage(self): """ maxage is allowed to be either an integer or a date of the form YYYY-mm-dd. This returns a time_struct. """ maxagestr = self.config.getdefault("Account %s"% self.accountname, "maxage", None) if maxagestr == None: return None # is it a number? try: maxage = int(maxagestr) if maxage < 1: raise OfflineImapError("invalid maxage value %d"% maxage, OfflineImapError.ERROR.MESSAGE) return time.gmtime(time.time() - 60*60*24*maxage) except ValueError: pass # maybe it was a date # is it a date string? try: date = time.strptime(maxagestr, "%Y-%m-%d") if date[0] < 1900: raise OfflineImapError("maxage led to year %d. " "Abort syncing."% date[0], OfflineImapError.ERROR.MESSAGE) return date except ValueError: raise OfflineImapError("invalid maxage value %s"% maxagestr, OfflineImapError.ERROR.MESSAGE) def getmaxsize(self): return self.config.getdefaultint("Account %s"% self.accountname, "maxsize", None) def getstartdate(self): """ Retrieve the value of the configuration option startdate """ datestr = self.config.getdefault("Repository " + self.repository.name, 'startdate', None) try: if not datestr: return None date = time.strptime(datestr, "%Y-%m-%d") if date[0] < 1900: raise OfflineImapError("startdate led to year %d. " "Abort syncing."% date[0], OfflineImapError.ERROR.MESSAGE) return date except ValueError: raise OfflineImapError("invalid startdate value %s", OfflineImapError.ERROR.MESSAGE) def get_min_uid_file(self): startuiddir = os.path.join(self.config.getmetadatadir(), 'Repository-' + self.repository.name, 'StartUID') if not os.path.exists(startuiddir): os.mkdir(startuiddir, 0o700) return os.path.join(startuiddir, self.getfolderbasename()) def retrieve_min_uid(self): uidfile = self.get_min_uid_file() if not os.path.exists(uidfile): return None try: fd = open(uidfile, 'rt') min_uid = long(fd.readline().strip()) fd.close() return min_uid except: raise IOError("Can't read %s"% uidfile) def savemessage(self, uid, content, flags, rtime): """Writes a new message, with the specified uid. If the uid is < 0: The backend should assign a new uid and return it. In case it cannot assign a new uid, it returns the negative uid passed in WITHOUT saving the message. If the backend CAN assign a new uid, but cannot find out what this UID is (as is the case with some IMAP servers), it returns 0 but DOES save the message. IMAP backend should be the only one that can assign a new uid. If the uid is > 0, the backend should set the uid to this, if it can. If it cannot set the uid to that, it will save it anyway. It will return the uid assigned in any case. 
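
        A sketch of how a caller interprets the returned UID (this mirrors
        copymessageto() further below)::

            new_uid = dstfolder.savemessage(uid, content, flags, rtime)
            if new_uid > 0:
                # saved; record it in the status cache
                statusfolder.savemessage(new_uid, content, flags, rtime)
            elif new_uid == 0:
                # saved, but the new UID could not be determined
                pass
            else:
                # not saved; no UID could be assigned
                pass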
Note that savemessage() does not check against dryrun settings, so you need to ensure that savemessage is never called in a dryrun mode.""" raise NotImplementedError def getmessagetime(self, uid): """Return the received time for the specified message.""" raise NotImplementedError def getmessagemtime(self, uid): """Returns the message modification time of the specified message.""" raise NotImplementedError def getmessageflags(self, uid): """Returns the flags for the specified message.""" raise NotImplementedError def getmessagekeywords(self, uid): """Returns the keywords for the specified message.""" raise NotImplementedError def savemessageflags(self, uid, flags): """Sets the specified message's flags to the given set. Note that this function does not check against dryrun settings, so you need to ensure that it is never called in a dryrun mode.""" raise NotImplementedError def addmessageflags(self, uid, flags): """Adds the specified flags to the message's flag set. If a given flag is already present, it will not be duplicated. Note that this function does not check against dryrun settings, so you need to ensure that it is never called in a dryrun mode. :param flags: A set() of flags""" newflags = self.getmessageflags(uid) | flags self.savemessageflags(uid, newflags) def addmessagesflags(self, uidlist, flags): """Note that this function does not check against dryrun settings, so you need to ensure that it is never called in a dryrun mode.""" for uid in uidlist: if self.uidexists(uid): self.addmessageflags(uid, flags) def deletemessageflags(self, uid, flags): """Removes each flag given from the message's flag set. If a given flag is already removed, no action will be taken for that flag. Note that this function does not check against dryrun settings, so you need to ensure that it is never called in a dryrun mode.""" newflags = self.getmessageflags(uid) - flags self.savemessageflags(uid, newflags) def deletemessagesflags(self, uidlist, flags): """ Note that this function does not check against dryrun settings, so you need to ensure that it is never called in a dryrun mode.""" for uid in uidlist: self.deletemessageflags(uid, flags) def getmessagelabels(self, uid): """Returns the labels for the specified message.""" raise NotImplementedError def savemessagelabels(self, uid, labels, ignorelabels=set(), mtime=0): """Sets the specified message's labels to the given set. Note that this function does not check against dryrun settings, so you need to ensure that it is never called in a dryrun mode.""" raise NotImplementedError def addmessagelabels(self, uid, labels): """Adds the specified labels to the message's labels set. If a given label is already present, it will not be duplicated. Note that this function does not check against dryrun settings, so you need to ensure that it is never called in a dryrun mode. :param labels: A set() of labels""" newlabels = self.getmessagelabels(uid) | labels self.savemessagelabels(uid, newlabels) def addmessageslabels(self, uidlist, labels): """Note that this function does not check against dryrun settings, so you need to ensure that it is never called in a dryrun mode.""" for uid in uidlist: self.addmessagelabels(uid, labels) def deletemessagelabels(self, uid, labels): """Removes each label given from the message's label set. If a given label is already removed, no action will be taken for that label. 
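
        A worked example (sketch): with current labels {'work', 'todo'},
        calling this with labels={'todo', 'archived'} leaves the message
        labelled {'work'} only.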
Note that this function does not check against dryrun settings, so you need to ensure that it is never called in a dryrun mode.""" newlabels = self.getmessagelabels(uid) - labels self.savemessagelabels(uid, newlabels) def deletemessageslabels(self, uidlist, labels): """ Note that this function does not check against dryrun settings, so you need to ensure that it is never called in a dryrun mode.""" for uid in uidlist: self.deletemessagelabels(uid, labels) def addmessageheader(self, content, linebreak, headername, headervalue): """Adds new header to the provided message. WARNING: This function is a bit tricky, and modifying it in the wrong way, may easily lead to data-loss. Arguments: - content: message content, headers and body as a single string - linebreak: string that carries line ending - headername: name of the header to add - headervalue: value of the header to add .. note:: The following documentation will not get displayed correctly after being processed by Sphinx. View the source of this method to read it. This has to deal with strange corner cases where the header is missing or empty. Here are illustrations for all the cases, showing where the header gets inserted and what the end result is. In each illustration, '+' means the added contents. Note that these examples assume LF for linebreak, not CRLF, so '\n' denotes a linebreak and '\n\n' corresponds to the transition between header and body. However if the linebreak parameter is set to '\r\n' then you would have to substitute '\r\n' for '\n' in the below examples. * Case 1: No '\n\n', leading '\n' +X-Flying-Pig-Header: i am here\n \n This is the body\n next line\n * Case 2: '\n\n' at position 0 +X-Flying-Pig-Header: i am here \n \n This is the body\n next line\n * Case 3: No '\n\n', no leading '\n' +X-Flying-Pig-Header: i am here\n +\n This is the body\n next line\n * Case 4: '\n\n' at non-zero position Subject: Something wrong with OI\n From: some@person.at +\nX-Flying-Pig-Header: i am here \n \n This is the body\n next line\n """ self.ui.debug('', 'addmessageheader: called to add %s: %s'% (headername, headervalue)) insertionpoint = content.find(linebreak * 2) if insertionpoint == -1: self.ui.debug('', 'addmessageheader: headers were missing') else: self.ui.debug('', 'addmessageheader: headers end at position %d' % insertionpoint) mark = '==>EOH<==' contextstart = max(0, insertionpoint - 100) contextend = min(len(content), insertionpoint + 100) self.ui.debug('', 'addmessageheader: header/body transition context (marked by %s): %s' % (mark, repr(content[contextstart:insertionpoint]) + \ mark + repr(content[insertionpoint:contextend]))) # Hoping for case #4 prefix = linebreak suffix = '' # Case #2 if insertionpoint == 0: prefix = '' suffix = '' # Either case #1 or #3 elif insertionpoint == -1: prefix = '' suffix = linebreak insertionpoint = 0 # Case #3: when body starts immediately, without preceding '\n' # (this shouldn't happen with proper mail messages, but # we seen many broken ones), we should add '\n' to make # new (and the only header, in this case) to be properly # separated from the message body. 
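            # Illustrative trace of case #3 (sketch): with linebreak='\n',
            # headername/headervalue as in the docstring examples and
            # content='This is the body\n', the function returns
            # 'X-Flying-Pig-Header: i am here\n\nThis is the body\n' -- the
            # doubled suffix newline separates the lone header from the body.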
if content[0:len(linebreak)] != linebreak: suffix = suffix + linebreak self.ui.debug('', 'addmessageheader: insertionpoint = %d'% insertionpoint) headers = content[0:insertionpoint] self.ui.debug('', 'addmessageheader: headers = %s'% repr(headers)) new_header = prefix + ("%s: %s" % (headername, headervalue)) + suffix self.ui.debug('', 'addmessageheader: new_header = ' + repr(new_header)) return headers + new_header + content[insertionpoint:] def __find_eoh(self, content): """ Searches for the point where mail headers end. Either double '\n', or end of string. Arguments: - content: contents of the message to search in Returns: position of the first non-header byte. """ eoh_cr = content.find('\n\n') if eoh_cr == -1: eoh_cr = len(content) return eoh_cr def getmessageheader(self, content, name): """Searches for the first occurence of the given header and returns its value. Header name is case-insensitive. Arguments: - contents: message itself - name: name of the header to be searched Returns: header value or None if no such header was found """ self.ui.debug('', 'getmessageheader: called to get %s'% name) eoh = self.__find_eoh(content) self.ui.debug('', 'getmessageheader: eoh = %d'% eoh) headers = content[0:eoh] self.ui.debug('', 'getmessageheader: headers = %s'% repr(headers)) m = re.search('^%s:(.*)$' % name, headers, flags = re.MULTILINE | re.IGNORECASE) if m: return m.group(1).strip() else: return None def getmessageheaderlist(self, content, name): """Searches for the given header and returns a list of values for that header. Arguments: - contents: message itself - name: name of the header to be searched Returns: list of header values or emptylist if no such header was found """ self.ui.debug('', 'getmessageheaderlist: called to get %s' % name) eoh = self.__find_eoh(content) self.ui.debug('', 'getmessageheaderlist: eoh = %d' % eoh) headers = content[0:eoh] self.ui.debug('', 'getmessageheaderlist: headers = %s' % repr(headers)) return re.findall('^%s:(.*)$' % name, headers, flags = re.MULTILINE | re.IGNORECASE) def deletemessageheaders(self, content, header_list): """Deletes headers in the given list from the message content. Arguments: - content: message itself - header_list: list of headers to be deleted or just the header name We expect our message to have '\n' as line endings. """ if type(header_list) != type([]): header_list = [header_list] self.ui.debug('', 'deletemessageheaders: called to delete %s'% (header_list)) if not len(header_list): return content eoh = self.__find_eoh(content) self.ui.debug('', 'deletemessageheaders: end of headers = %d'% eoh) headers = content[0:eoh] rest = content[eoh:] self.ui.debug('', 'deletemessageheaders: headers = %s'% repr(headers)) new_headers = [] for h in headers.split('\n'): keep_it = True for trim_h in header_list: if len(h) > len(trim_h) and h[0:len(trim_h)+1] == (trim_h + ":"): keep_it = False break if keep_it: new_headers.append(h) return ('\n'.join(new_headers) + rest) def change_message_uid(self, uid, new_uid): """Change the message from existing uid to new_uid If the backend supports it (IMAP does not). :param new_uid: (optional) If given, the old UID will be changed to a new UID. 
This allows backends efficient renaming of messages if the UID has changed.""" raise NotImplementedError def deletemessage(self, uid): """Note that this function does not check against dryrun settings, so you need to ensure that it is never called in a dryrun mode.""" raise NotImplementedError def deletemessages(self, uidlist): """Note that this function does not check against dryrun settings, so you need to ensure that it is never called in a dryrun mode.""" for uid in uidlist: self.deletemessage(uid) def copymessageto(self, uid, dstfolder, statusfolder, register = 1): """Copies a message from self to dst if needed, updating the status Note that this function does not check against dryrun settings, so you need to ensure that it is never called in a dryrun mode. :param uid: uid of the message to be copied. :param dstfolder: A BaseFolder-derived instance :param statusfolder: A LocalStatusFolder instance :param register: whether we should register a new thread." :returns: Nothing on success, or raises an Exception.""" # Sometimes, it could be the case that if a sync takes awhile, # a message might be deleted from the maildir before it can be # synced to the status cache. This is only a problem with # self.getmessage(). So, don't call self.getmessage unless # really needed. if register: # output that we start a new thread self.ui.registerthread(self.repository.account) try: message = None flags = self.getmessageflags(uid) rtime = self.getmessagetime(uid) # If any of the destinations actually stores the message body, # load it up. if dstfolder.storesmessages(): message = self.getmessage(uid) # Succeeded? -> IMAP actually assigned a UID. If newid # remained negative, no server was willing to assign us an # UID. If newid is 0, saving succeeded, but we could not # retrieve the new UID. Ignore message in this case. new_uid = dstfolder.savemessage(uid, message, flags, rtime) if new_uid > 0: if new_uid != uid: # Got new UID, change the local uid to match the new one. self.change_message_uid(uid, new_uid) statusfolder.deletemessage(uid) # Got new UID, change the local uid. # Save uploaded status in the statusfolder statusfolder.savemessage(new_uid, message, flags, rtime) # Check whether the mail has been seen if 'S' not in flags: self.have_newmail = True elif new_uid == 0: # Message was stored to dstfolder, but we can't find it's UID # This means we can't link current message to the one created # in IMAP. So we just delete local message and on next run # we'll sync it back # XXX This could cause infinite loop on syncing between two # IMAP servers ... self.deletemessage(uid) else: raise OfflineImapError("Trying to save msg (uid %d) on folder " "%s returned invalid uid %d"% (uid, dstfolder.getvisiblename(), new_uid), OfflineImapError.ERROR.MESSAGE) except (KeyboardInterrupt): # bubble up CTRL-C raise except OfflineImapError as e: if e.severity > OfflineImapError.ERROR.MESSAGE: raise # bubble severe errors up self.ui.error(e, exc_info()[2]) except Exception as e: self.ui.error(e, exc_info()[2], msg = "Copying message %s [acc: %s]"% (uid, self.accountname)) raise #raise on unknown errors, so we can fix those def __syncmessagesto_copy(self, dstfolder, statusfolder): """Pass1: Copy locally existing messages not on the other side. This will copy messages to dstfolder that exist locally but are not in the statusfolder yet. The strategy is: 1) Look for messages present in self but not in statusfolder. 2) invoke copymessageto() on those which: - If dstfolder doesn't have it yet, add them to dstfolder. 
- Update statusfolder This function checks and protects us from action in dryrun mode.""" # We have no new mail yet self.have_newmail = False threads = [] copylist = filter(lambda uid: not statusfolder.uidexists(uid), self.getmessageuidlist()) num_to_copy = len(copylist) if num_to_copy and self.repository.account.dryrun: self.ui.info("[DRYRUN] Copy {0} messages from {1}[{2}] to {3}".format( num_to_copy, self, self.repository, dstfolder.repository)) return for num, uid in enumerate(copylist): # bail out on CTRL-C or SIGTERM if offlineimap.accounts.Account.abort_NOW_signal.is_set(): break if uid > 0 and dstfolder.uidexists(uid): # dst has message with that UID already, only update status flags = self.getmessageflags(uid) rtime = self.getmessagetime(uid) statusfolder.savemessage(uid, None, flags, rtime) continue self.ui.copyingmessage(uid, num+1, num_to_copy, self, dstfolder) # exceptions are caught in copymessageto() if self.suggeststhreads() and not globals.options.singlethreading: self.waitforthread() thread = threadutil.InstanceLimitedThread(\ self.getcopyinstancelimit(), target = self.copymessageto, name = "Copy message from %s:%s" % (self.repository, self), args = (uid, dstfolder, statusfolder)) thread.start() threads.append(thread) else: self.copymessageto(uid, dstfolder, statusfolder, register = 0) for thread in threads: thread.join() # Execute new mail hook if we have new mail if self.have_newmail: if self.newmail_hook != None: self.newmail_hook(); def __syncmessagesto_delete(self, dstfolder, statusfolder): """Pass 2: Remove locally deleted messages on dst. Get all UIDS in statusfolder but not self. These are messages that were deleted in 'self'. Delete those from dstfolder and statusfolder. This function checks and protects us from action in dryrun mode. """ deletelist = filter(lambda uid: uid >= 0 and not self.uidexists(uid), statusfolder.getmessageuidlist()) if len(deletelist): # Delete in statusfolder first to play safe. In case of abort, we # won't lose message, we will just unneccessarily retransmit some. # Delete messages from statusfolder that were either deleted by the # user, or not being tracked (e.g. because of maxage). statusfolder.deletemessages(deletelist) # Filter out untracked messages deletelist = filter(lambda uid: dstfolder.uidexists(uid), deletelist) if len(deletelist): self.ui.deletingmessages(deletelist, [dstfolder]) if self.repository.account.dryrun: return #don't delete messages in dry-run mode dstfolder.deletemessages(deletelist) def combine_flags_and_keywords(self, uid, dstfolder): """Combine the message's flags and keywords using the mapping for the destination folder.""" # Take a copy of the message flag set, otherwise # __syncmessagesto_flags() will fail because statusflags is actually a # reference to selfflags (which it should not, but I don't have time to # debug THAT). 
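        # Illustrative sketch (hypothetical keyword map): with a destination
        # keywordmap of {'work': 'a', 'todo': 'b'} and message keywords
        # {'work'}, the flag set computed below additionally contains 'a';
        # keywords absent from the map are skipped with a warning.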
selfflags = set(self.getmessageflags(uid)) try: keywordmap = dstfolder.getrepository().getkeywordmap() if keywordmap is None: return selfflags knownkeywords = set(keywordmap.keys()) selfkeywords = self.getmessagekeywords(uid) if not knownkeywords >= selfkeywords: #some of the message's keywords are not in the mapping, so #skip them skipped_keywords = list(selfkeywords - knownkeywords) selfkeywords &= knownkeywords self.ui.warn("Unknown keywords skipped: %s\n" "You may want to change your configuration to include " "those\n" % (skipped_keywords)) keywordletterset = set([keywordmap[keyw] for keyw in selfkeywords]) #add the mapped keywords to the list of message flags selfflags |= keywordletterset except NotImplementedError: pass return selfflags def __syncmessagesto_flags(self, dstfolder, statusfolder): """Pass 3: Flag synchronization. Compare flag mismatches in self with those in statusfolder. If msg has a valid UID and exists on dstfolder (has not e.g. been deleted there), sync the flag change to both dstfolder and statusfolder. This function checks and protects us from action in ryrun mode. """ # For each flag, we store a list of uids to which it should be # added. Then, we can call addmessagesflags() to apply them in # bulk, rather than one call per message. addflaglist = {} delflaglist = {} for uid in self.getmessageuidlist(): # Ignore messages with negative UIDs missed by pass 1 and # don't do anything if the message has been deleted remotely if uid < 0 or not dstfolder.uidexists(uid): continue if statusfolder.uidexists(uid): statusflags = statusfolder.getmessageflags(uid) else: statusflags = set() selfflags = self.combine_flags_and_keywords(uid, dstfolder) addflags = selfflags - statusflags delflags = statusflags - selfflags for flag in addflags: if not flag in addflaglist: addflaglist[flag] = [] addflaglist[flag].append(uid) for flag in delflags: if not flag in delflaglist: delflaglist[flag] = [] delflaglist[flag].append(uid) for flag, uids in addflaglist.items(): self.ui.addingflags(uids, flag, dstfolder) if self.repository.account.dryrun: continue #don't actually add in a dryrun dstfolder.addmessagesflags(uids, set(flag)) statusfolder.addmessagesflags(uids, set(flag)) for flag,uids in delflaglist.items(): self.ui.deletingflags(uids, flag, dstfolder) if self.repository.account.dryrun: continue #don't actually remove in a dryrun dstfolder.deletemessagesflags(uids, set(flag)) statusfolder.deletemessagesflags(uids, set(flag)) def syncmessagesto(self, dstfolder, statusfolder): """Syncs messages in this folder to the destination dstfolder. This is the high level entry for syncing messages in one direction. Syncsteps are: Pass1: Copy locally existing messages Copy messages in self, but not statusfolder to dstfolder if not already in dstfolder. dstfolder might assign a new UID (e.g. if uploading to IMAP). Update statusfolder. Pass2: Remove locally deleted messages Get all UIDS in statusfolder but not self. These are messages that were deleted in 'self'. Delete those from dstfolder and statusfolder. After this pass, the message lists should be identical wrt the uids present (except for potential negative uids that couldn't be placed anywhere). Pass3: Synchronize flag changes Compare flag mismatches in self with those in statusfolder. If msg has a valid UID and exists on dstfolder (has not e.g. been deleted there), sync the flag change to both dstfolder and statusfolder. Pass4: Synchronize label changes (Gmail only) Compares label mismatches in self with those in statusfolder. 
If msg has a valid UID and exists on dstfolder, syncs the labels to both dstfolder and statusfolder. :param dstfolder: Folderinstance to sync the msgs to. :param statusfolder: LocalStatus instance to sync against. """ for (passdesc, action) in self.syncmessagesto_passes: # bail out on CTRL-C or SIGTERM if offlineimap.accounts.Account.abort_NOW_signal.is_set(): break try: action(dstfolder, statusfolder) except (KeyboardInterrupt): raise except OfflineImapError as e: if e.severity > OfflineImapError.ERROR.FOLDER: raise self.ui.error(e, exc_info()[2]) except Exception as e: self.ui.error(e, exc_info()[2], "Syncing folder %s [acc: %s]" %\ (self, self.accountname)) raise # raise unknown Exceptions so we can fix them def __eq__(self, other): """Comparisons work either on string comparing folder names or on the same instance. MailDirFolder('foo') == 'foo' --> True a = MailDirFolder('foo'); a == b --> True MailDirFolder('foo') == 'moo' --> False MailDirFolder('foo') == IMAPFolder('foo') --> False MailDirFolder('foo') == MaildirFolder('foo') --> False """ if isinstance(other, basestring): return other == self.name return id(self) == id(other) def __ne__(self, other): return not self.__eq__(other) offlineimap-6.6.1/offlineimap/folder/Gmail.py000066400000000000000000000375641264010144500212000ustar00rootroot00000000000000# Gmail IMAP folder support # Copyright (C) 2008 Riccardo Murri # Copyright (C) 2002-2007 John Goerzen # # This program is free software; you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation; either version 2 of the License, or # (at your option) any later version. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA import re from sys import exc_info from offlineimap import imaputil, OfflineImapError from offlineimap import imaplibutil import offlineimap.accounts from .IMAP import IMAPFolder """Folder implementation to support features of the Gmail IMAP server.""" class GmailFolder(IMAPFolder): """Folder implementation to support features of the Gmail IMAP server. Removing a message from a folder will only remove the "label" from the message and keep it in the "All mails" folder. To really delete a message it needs to be copied to the Trash folder. However, this is dangerous as our folder moves are implemented as a 1) delete in one folder and 2) append to the other. If 2 comes before 1, this will effectively delete the message from all folders. So we cannot do that until we have a smarter folder move mechanism. 
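
    When label syncing is enabled (``synclabels``), getmessage() embeds the
    Gmail labels into the message under the header named by the
    ``labelsheader`` option ('X-Keywords' by default), roughly like this
    (a sketch -- the exact formatting is produced by
    imaputil.format_labels_string())::

        X-Keywords: Lists, Work

    so that copies of the message outside Gmail carry the labels along.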
For more information on the Gmail IMAP server: http://mail.google.com/support/bin/answer.py?answer=77657&topic=12815 https://developers.google.com/google-apps/gmail/imap_extensions """ def __init__(self, imapserver, name, repository): super(GmailFolder, self).__init__(imapserver, name, repository) self.trash_folder = repository.gettrashfolder(name) # Gmail will really delete messages upon EXPUNGE in these folders self.real_delete_folders = [ self.trash_folder, repository.getspamfolder() ] # The header under which labels are stored self.labelsheader = self.repository.account.getconf('labelsheader', 'X-Keywords') # enables / disables label sync self.synclabels = self.repository.account.getconfboolean('synclabels', False) # if synclabels is enabled, add a 4th pass to sync labels if self.synclabels: self.imap_query.insert(0, 'X-GM-LABELS') self.syncmessagesto_passes.append(('syncing labels', self.syncmessagesto_labels)) # Labels to be left alone ignorelabels = self.repository.account.getconf('ignorelabels', '') self.ignorelabels = set([l for l in re.split(r'\s*,\s*', ignorelabels) if len(l)]) def getmessage(self, uid): """Retrieve message with UID from the IMAP server (incl body). Also gets Gmail labels and embeds them into the message. :returns: the message body or throws and OfflineImapError (probably severity MESSAGE) if e.g. no message with this UID could be found. """ data = self._fetch_from_imap(str(uid), 2) # data looks now e.g. #[('320 (X-GM-LABELS (...) UID 17061 BODY[] {2565}','msgbody....')] # we only asked for one message, and that msg is in data[0]. # msbody is in [0][1]. body = data[0][1].replace("\r\n", "\n") # Embed the labels into the message headers if self.synclabels: m = re.search('X-GM-LABELS\s*\(([^\)]*)\)', data[0][0]) if m: labels = set([imaputil.dequote(lb) for lb in imaputil.imapsplit(m.group(1))]) else: labels = set() labels = labels - self.ignorelabels labels_str = imaputil.format_labels_string(self.labelsheader, sorted(labels)) # First remove old label headers that may be in the message content retrieved # from gmail Then add a labels header with current gmail labels. body = self.deletemessageheaders(body, self.labelsheader) body = self.addmessageheader(body, '\n', self.labelsheader, labels_str) if len(body)>200: dbg_output = "%s...%s"% (str(body)[:150], str(body)[-50:]) else: dbg_output = body self.ui.debug('imap', "Returned object from fetching %d: '%s'"% (uid, dbg_output)) return body def getmessagelabels(self, uid): if 'labels' in self.messagelist[uid]: return self.messagelist[uid]['labels'] else: return set() # Interface from BaseFolder def msglist_item_initializer(self, uid): return {'uid': uid, 'flags': set(), 'labels': set(), 'time': 0} # TODO: merge this code with the parent's cachemessagelist: # TODO: they have too much common logics. def cachemessagelist(self, min_date=None, min_uid=None): if not self.synclabels: return super(GmailFolder, self).cachemessagelist( min_date=min_date, min_uid=min_uid) self.messagelist = {} self.ui.collectingdata(None, self) imapobj = self.imapserver.acquireconnection() try: msgsToFetch = self._msgs_to_fetch( imapobj, min_date=min_date, min_uid=min_uid) if not msgsToFetch: return # No messages to sync # Get the flags and UIDs for these. single-quotes prevent # imaplib2 from quoting the sequence. # # NB: msgsToFetch are sequential numbers, not UID's res_type, response = imapobj.fetch("'%s'"% msgsToFetch, '(FLAGS X-GM-LABELS UID)') if res_type != 'OK': raise OfflineImapError("FETCHING UIDs in folder [%s]%s failed. 
" % \ (self.getrepository(), self) + \ "Server responded '[%s] %s'" % \ (res_type, response), OfflineImapError.ERROR.FOLDER), \ None, exc_info()[2] finally: self.imapserver.releaseconnection(imapobj) for messagestr in response: # looks like: '1 (FLAGS (\\Seen Old) X-GM-LABELS (\\Inbox \\Favorites) UID 4807)' or None if no msg # Discard initial message number. if messagestr == None: continue messagestr = messagestr.split(' ', 1)[1] options = imaputil.flags2hash(messagestr) if not 'UID' in options: self.ui.warn('No UID in message with options %s' %\ str(options), minor = 1) else: uid = long(options['UID']) self.messagelist[uid] = self.msglist_item_initializer(uid) flags = imaputil.flagsimap2maildir(options['FLAGS']) m = re.search('\(([^\)]*)\)', options['X-GM-LABELS']) if m: labels = set([imaputil.dequote(lb) for lb in imaputil.imapsplit(m.group(1))]) else: labels = set() labels = labels - self.ignorelabels rtime = imaplibutil.Internaldate2epoch(messagestr) self.messagelist[uid] = {'uid': uid, 'flags': flags, 'labels': labels, 'time': rtime} def savemessage(self, uid, content, flags, rtime): """Save the message on the Server This backend always assigns a new uid, so the uid arg is ignored. This function will update the self.messagelist dict to contain the new message after sucessfully saving it, including labels. See folder/Base for details. Note that savemessage() does not check against dryrun settings, so you need to ensure that savemessage is never called in a dryrun mode. :param rtime: A timestamp to be used as the mail date :returns: the UID of the new message as assigned by the server. If the message is saved, but it's UID can not be found, it will return 0. If the message can't be written (folder is read-only for example) it will return -1.""" if not self.synclabels: return super(GmailFolder, self).savemessage(uid, content, flags, rtime) labels = set() for hstr in self.getmessageheaderlist(content, self.labelsheader): labels.update(imaputil.labels_from_header(self.labelsheader, hstr)) ret = super(GmailFolder, self).savemessage(uid, content, flags, rtime) self.savemessagelabels(ret, labels) return ret def _messagelabels_aux(self, arg, uidlist, labels): """Common code to savemessagelabels and addmessagelabels""" labels = labels - self.ignorelabels uidlist = [uid for uid in uidlist if uid > 0] if len(uidlist) > 0: imapobj = self.imapserver.acquireconnection() try: labels_str = '(' + ' '.join([imaputil.quote(lb) for lb in labels]) + ')' # Coalesce uid's into ranges uid_str = imaputil.uid_sequence(uidlist) result = self._store_to_imap(imapobj, uid_str, arg, labels_str) except imapobj.readonly: self.ui.labelstoreadonly(self, uidlist, labels) return None finally: self.imapserver.releaseconnection(imapobj) if result: retlabels = imaputil.flags2hash(imaputil.imapsplit(result)[1])['X-GM-LABELS'] retlabels = set([imaputil.dequote(lb) for lb in imaputil.imapsplit(retlabels)]) return retlabels return None def savemessagelabels(self, uid, labels): """Change a message's labels to `labels`. 
Note that this function does not check against dryrun settings, so you need to ensure that it is never called in a dryrun mode.""" if uid in self.messagelist and 'labels' in self.messagelist[uid]: oldlabels = self.messagelist[uid]['labels'] else: oldlabels = set() labels = labels - self.ignorelabels newlabels = labels | (oldlabels & self.ignorelabels) if oldlabels != newlabels: result = self._messagelabels_aux('X-GM-LABELS', [uid], newlabels) if result: self.messagelist[uid]['labels'] = newlabels else: self.messagelist[uid]['labels'] = oldlabels def addmessageslabels(self, uidlist, labels): """Add `labels` to all messages in uidlist. Note that this function does not check against dryrun settings, so you need to ensure that it is never called in a dryrun mode.""" labels = labels - self.ignorelabels result = self._messagelabels_aux('+X-GM-LABELS', uidlist, labels) if result: for uid in uidlist: self.messagelist[uid]['labels'] = self.messagelist[uid]['labels'] | labels def deletemessageslabels(self, uidlist, labels): """Delete `labels` from all messages in uidlist. Note that this function does not check against dryrun settings, so you need to ensure that it is never called in a dryrun mode.""" labels = labels - self.ignorelabels result = self._messagelabels_aux('-X-GM-LABELS', uidlist, labels) if result: for uid in uidlist: self.messagelist[uid]['labels'] = self.messagelist[uid]['labels'] - labels def copymessageto(self, uid, dstfolder, statusfolder, register = 1): """Copies a message from self to dst if needed, updating the status Note that this function does not check against dryrun settings, so you need to ensure that it is never called in a dryrun mode. :param uid: uid of the message to be copied. :param dstfolder: A BaseFolder-derived instance :param statusfolder: A LocalStatusFolder instance :param register: whether we should register a new thread." :returns: Nothing on success, or raises an Exception.""" # Check if we are really copying realcopy = uid > 0 and not dstfolder.uidexists(uid) # first copy the message super(GmailFolder, self).copymessageto(uid, dstfolder, statusfolder, register) # sync labels and mtime now when the message is new (the embedded labels are up to date) # otherwise we may be spending time for nothing, as they will get updated on a later pass. if realcopy and self.synclabels: try: mtime = dstfolder.getmessagemtime(uid) labels = dstfolder.getmessagelabels(uid) statusfolder.savemessagelabels(uid, labels, mtime=mtime) # dstfolder is not GmailMaildir. except NotImplementedError: return def syncmessagesto_labels(self, dstfolder, statusfolder): """Pass 4: Label Synchronization (Gmail only) Compare label mismatches in self with those in statusfolder. If msg has a valid UID and exists on dstfolder (has not e.g. been deleted there), sync the labels change to both dstfolder and statusfolder. This function checks and protects us from action in dryrun mode. """ # This applies the labels message by message, as this makes more sense for a # Maildir target. If applied with an other Gmail IMAP target it would not be # the fastest thing in the world though... 
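        # Rough flow of the code below (descriptive note): a fast pass over all
        # UIDs collects those whose labels differ from statusfolder; a slow
        # pass then calls dstfolder.savemessagelabels() per message and records
        # the resulting mtimes; finally labels and mtimes are pushed to
        # statusfolder in single bulk DB transactions.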
uidlist = [] # filter the uids (fast) try: for uid in self.getmessageuidlist(): # bail out on CTRL-C or SIGTERM if offlineimap.accounts.Account.abort_NOW_signal.is_set(): break # Ignore messages with negative UIDs missed by pass 1 and # don't do anything if the message has been deleted remotely if uid < 0 or not dstfolder.uidexists(uid): continue selflabels = self.getmessagelabels(uid) - self.ignorelabels if statusfolder.uidexists(uid): statuslabels = statusfolder.getmessagelabels(uid) - self.ignorelabels else: statuslabels = set() if selflabels != statuslabels: uidlist.append(uid) # now sync labels (slow) mtimes = {} labels = {} for i, uid in enumerate(uidlist): # bail out on CTRL-C or SIGTERM if offlineimap.accounts.Account.abort_NOW_signal.is_set(): break selflabels = self.getmessagelabels(uid) - self.ignorelabels if statusfolder.uidexists(uid): statuslabels = statusfolder.getmessagelabels(uid) - self.ignorelabels else: statuslabels = set() if selflabels != statuslabels: self.ui.settinglabels(uid, i+1, len(uidlist), sorted(selflabels), dstfolder) if self.repository.account.dryrun: continue #don't actually add in a dryrun dstfolder.savemessagelabels(uid, selflabels, ignorelabels = self.ignorelabels) mtime = dstfolder.getmessagemtime(uid) mtimes[uid] = mtime labels[uid] = selflabels # Update statusfolder in a single DB transaction. It is safe, as if something fails, # statusfolder will be updated on the next run. statusfolder.savemessageslabelsbulk(labels) statusfolder.savemessagesmtimebulk(mtimes) except NotImplementedError: self.ui.warn("Can't sync labels. You need to configure a local repository of type GmailMaildir") offlineimap-6.6.1/offlineimap/folder/GmailMaildir.py000066400000000000000000000321261264010144500224670ustar00rootroot00000000000000# Maildir folder support with labels # Copyright (C) 2002 - 2011 John Goerzen & contributors # # This program is free software; you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation; either version 2 of the License, or # (at your option) any later version. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA import os from sys import exc_info from .Maildir import MaildirFolder from offlineimap import OfflineImapError import offlineimap.accounts from offlineimap import imaputil class GmailMaildirFolder(MaildirFolder): """Folder implementation to support adding labels to messages in a Maildir. """ def __init__(self, root, name, sep, repository): super(GmailMaildirFolder, self).__init__(root, name, sep, repository) # The header under which labels are stored self.labelsheader = self.repository.account.getconf('labelsheader', 'X-Keywords') # enables / disables label sync self.synclabels = self.repository.account.getconfboolean('synclabels', 0) # if synclabels is enabled, add a 4th pass to sync labels if self.synclabels: self.syncmessagesto_passes.append(('syncing labels', self.syncmessagesto_labels)) def quickchanged(self, statusfolder): """Returns True if the Maildir has changed. 
Checks uids, flags and mtimes""" self.cachemessagelist() # Folder has different uids than statusfolder => TRUE if sorted(self.getmessageuidlist()) != \ sorted(statusfolder.getmessageuidlist()): return True # check for flag changes, it's quick on a Maildir for (uid, message) in self.getmessagelist().iteritems(): if message['flags'] != statusfolder.getmessageflags(uid): return True # check for newer mtimes. it is also fast for (uid, message) in self.getmessagelist().iteritems(): if message['mtime'] > statusfolder.getmessagemtime(uid): return True return False #Nope, nothing changed # Interface from BaseFolder def msglist_item_initializer(self, uid): return {'flags': set(), 'labels': set(), 'labels_cached': False, 'filename': '/no-dir/no-such-file/', 'mtime': 0} def cachemessagelist(self, min_date=None, min_uid=None): if self.ismessagelistempty(): self.messagelist = self._scanfolder(min_date=min_date, min_uid=min_uid) # Get mtimes if self.synclabels: for uid, msg in self.messagelist.items(): filepath = os.path.join(self.getfullname(), msg['filename']) msg['mtime'] = long(os.stat(filepath).st_mtime) def getmessagelabels(self, uid): # Labels are not cached in cachemessagelist because it is too slow. if not self.messagelist[uid]['labels_cached']: filename = self.messagelist[uid]['filename'] filepath = os.path.join(self.getfullname(), filename) if not os.path.exists(filepath): return set() file = open(filepath, 'rt') content = file.read() file.close() self.messagelist[uid]['labels'] = set() for hstr in self.getmessageheaderlist(content, self.labelsheader): self.messagelist[uid]['labels'].update( imaputil.labels_from_header(self.labelsheader, hstr)) self.messagelist[uid]['labels_cached'] = True return self.messagelist[uid]['labels'] def getmessagemtime(self, uid): if not 'mtime' in self.messagelist[uid]: return 0 else: return self.messagelist[uid]['mtime'] def savemessage(self, uid, content, flags, rtime): """Writes a new message, with the specified uid. See folder/Base for detail. Note that savemessage() does not check against dryrun settings, so you need to ensure that savemessage is never called in a dryrun mode.""" if not self.synclabels: return super(GmailMaildirFolder, self).savemessage(uid, content, flags, rtime) labels = set() for hstr in self.getmessageheaderlist(content, self.labelsheader): labels.update(imaputil.labels_from_header(self.labelsheader, hstr)) ret = super(GmailMaildirFolder, self).savemessage(uid, content, flags, rtime) # Update the mtime and labels filename = self.messagelist[uid]['filename'] filepath = os.path.join(self.getfullname(), filename) self.messagelist[uid]['mtime'] = long(os.stat(filepath).st_mtime) self.messagelist[uid]['labels'] = labels return ret def savemessagelabels(self, uid, labels, ignorelabels=set()): """Change a message's labels to `labels`. 
Note that this function does not check against dryrun settings, so you need to ensure that it is never called in a dryrun mode.""" filename = self.messagelist[uid]['filename'] filepath = os.path.join(self.getfullname(), filename) file = open(filepath, 'rt') content = file.read() file.close() oldlabels = set() for hstr in self.getmessageheaderlist(content, self.labelsheader): oldlabels.update(imaputil.labels_from_header(self.labelsheader, hstr)) labels = labels - ignorelabels ignoredlabels = oldlabels & ignorelabels oldlabels = oldlabels - ignorelabels # Nothing to change if labels == oldlabels: return # Change labels into content labels_str = imaputil.format_labels_string(self.labelsheader, sorted(labels | ignoredlabels)) # First remove old labels header, and then add the new one content = self.deletemessageheaders(content, self.labelsheader) content = self.addmessageheader(content, '\n', self.labelsheader, labels_str) mtime = long(os.stat(filepath).st_mtime) # write file with new labels to a unique file in tmp messagename = self.new_message_filename(uid, set()) tmpname = self.save_to_tmp_file(messagename, content) tmppath = os.path.join(self.getfullname(), tmpname) # move to actual location try: os.rename(tmppath, filepath) except OSError as e: raise OfflineImapError("Can't rename file '%s' to '%s': %s" % \ (tmppath, filepath, e[1]), OfflineImapError.ERROR.FOLDER), \ None, exc_info()[2] # if utime_from_header=true, we don't want to change the mtime. if self.utime_from_header and mtime: os.utime(filepath, (mtime, mtime)) # save the new mtime and labels self.messagelist[uid]['mtime'] = long(os.stat(filepath).st_mtime) self.messagelist[uid]['labels'] = labels def copymessageto(self, uid, dstfolder, statusfolder, register = 1): """Copies a message from self to dst if needed, updating the status Note that this function does not check against dryrun settings, so you need to ensure that it is never called in a dryrun mode. :param uid: uid of the message to be copied. :param dstfolder: A BaseFolder-derived instance :param statusfolder: A LocalStatusFolder instance :param register: whether we should register a new thread." :returns: Nothing on success, or raises an Exception.""" # Check if we are really copying realcopy = uid > 0 and not dstfolder.uidexists(uid) # first copy the message super(GmailMaildirFolder, self).copymessageto(uid, dstfolder, statusfolder, register) # sync labels and mtime now when the message is new (the embedded labels are up to date, # and have already propagated to the remote server. # for message which already existed on the remote, this is useless, as later the labels may # get updated. if realcopy and self.synclabels: try: labels = dstfolder.getmessagelabels(uid) statusfolder.savemessagelabels(uid, labels, mtime=self.getmessagemtime(uid)) # dstfolder is not GmailMaildir. except NotImplementedError: return def syncmessagesto_labels(self, dstfolder, statusfolder): """Pass 4: Label Synchronization (Gmail only) Compare label mismatches in self with those in statusfolder. If msg has a valid UID and exists on dstfolder (has not e.g. been deleted there), sync the labels change to both dstfolder and statusfolder. Also skips messages whose mtime remains the same as statusfolder, as the contents have not changed. This function checks and protects us from action in ryrun mode. """ # For each label, we store a list of uids to which it should be # added. Then, we can call addmessageslabels() to apply them in # bulk, rather than one call per message. 
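        # Illustrative shape of these maps (hypothetical labels and UIDs):
        #   addlabellist = {'work': [4807, 4810], 'travel': [4810]}
        #   dellabellist = {'todo': [4807]}
        # so that e.g. dstfolder.addmessageslabels([4807, 4810], set(['work']))
        # runs once per label instead of once per message.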
addlabellist = {} dellabellist = {} uidlist = [] try: # filter uids (fast) for uid in self.getmessageuidlist(): # bail out on CTRL-C or SIGTERM if offlineimap.accounts.Account.abort_NOW_signal.is_set(): break # Ignore messages with negative UIDs missed by pass 1 and # don't do anything if the message has been deleted remotely if uid < 0 or not dstfolder.uidexists(uid): continue selfmtime = self.getmessagemtime(uid) if statusfolder.uidexists(uid): statusmtime = statusfolder.getmessagemtime(uid) else: statusmtime = 0 if selfmtime > statusmtime: uidlist.append(uid) self.ui.collectingdata(uidlist, self) # This can be slow if there is a lot of modified files for uid in uidlist: # bail out on CTRL-C or SIGTERM if offlineimap.accounts.Account.abort_NOW_signal.is_set(): break selflabels = self.getmessagelabels(uid) if statusfolder.uidexists(uid): statuslabels = statusfolder.getmessagelabels(uid) else: statuslabels = set() addlabels = selflabels - statuslabels dellabels = statuslabels - selflabels for lb in addlabels: if not lb in addlabellist: addlabellist[lb] = [] addlabellist[lb].append(uid) for lb in dellabels: if not lb in dellabellist: dellabellist[lb] = [] dellabellist[lb].append(uid) for lb, uids in addlabellist.items(): # bail out on CTRL-C or SIGTERM if offlineimap.accounts.Account.abort_NOW_signal.is_set(): break self.ui.addinglabels(uids, lb, dstfolder) if self.repository.account.dryrun: continue #don't actually add in a dryrun dstfolder.addmessageslabels(uids, set([lb])) statusfolder.addmessageslabels(uids, set([lb])) for lb, uids in dellabellist.items(): # bail out on CTRL-C or SIGTERM if offlineimap.accounts.Account.abort_NOW_signal.is_set(): break self.ui.deletinglabels(uids, lb, dstfolder) if self.repository.account.dryrun: continue #don't actually remove in a dryrun dstfolder.deletemessageslabels(uids, set([lb])) statusfolder.deletemessageslabels(uids, set([lb])) # Update mtimes on StatusFolder. It is done last to be safe. If something els fails # and the mtime is not updated, the labels will still be synced next time. mtimes = {} for uid in uidlist: # bail out on CTRL-C or SIGTERM if offlineimap.accounts.Account.abort_NOW_signal.is_set(): break if self.repository.account.dryrun: continue #don't actually update statusfolder filename = self.messagelist[uid]['filename'] filepath = os.path.join(self.getfullname(), filename) mtimes[uid] = long(os.stat(filepath).st_mtime) # finally update statusfolder in a single DB transaction statusfolder.savemessagesmtimebulk(mtimes) except NotImplementedError: self.ui.warn("Can't sync labels. You need to configure a remote repository of type Gmail.") offlineimap-6.6.1/offlineimap/folder/IMAP.py000066400000000000000000001121551264010144500206630ustar00rootroot00000000000000# IMAP folder support # Copyright (C) 2002-2012 John Goerzen & contributors # # This program is free software; you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation; either version 2 of the License, or # (at your option) any later version. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. 
# # You should have received a copy of the GNU General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA import random import binascii import re import os import time from sys import exc_info from .Base import BaseFolder from offlineimap import imaputil, imaplibutil, emailutil, OfflineImapError from offlineimap import globals from offlineimap.imaplib2 import MonthNames # Globals CRLF = '\r\n' # NB: message returned from getmessage() will have '\n' all over the place, # NB: there will be no CRLFs. Just before the sending stage of savemessage() # NB: '\n' will be transformed back to CRLF. So, for the most parts of the # NB: code the stored content will be clean of CRLF and one can rely that # NB: line endings will be pure '\n'. class IMAPFolder(BaseFolder): def __init__(self, imapserver, name, repository): # FIXME: decide if unquoted name is from the responsability of the # caller or not, but not both. name = imaputil.dequote(name) self.sep = imapserver.delim super(IMAPFolder, self).__init__(name, repository) self.expunge = repository.getexpunge() self.root = None # imapserver.root self.imapserver = imapserver self.messagelist = {} self.randomgenerator = random.Random() #self.ui is set in BaseFolder self.imap_query = ['BODY.PEEK[]'] fh_conf = self.repository.account.getconf('filterheaders', '') self.filterheaders = [h for h in re.split(r'\s*,\s*', fh_conf) if h] def __selectro(self, imapobj, force=False): """Select this folder when we do not need write access. Prefer SELECT to EXAMINE if we can, since some servers (Courier) do not stabilize UID validity until the folder is selected. .. todo: Still valid? Needs verification :param: Enforce new SELECT even if we are on that folder already. :returns: raises :exc:`OfflineImapError` severity FOLDER on error""" try: imapobj.select(self.getfullname(), force = force) except imapobj.readonly: imapobj.select(self.getfullname(), readonly = True, force = force) # Interface from BaseFolder def suggeststhreads(self): return not globals.options.singlethreading # Interface from BaseFolder def waitforthread(self): self.imapserver.connectionwait() def getmaxage(self): if self.config.getdefault("Account %s"% self.accountname, "maxage", None): raise OfflineImapError("maxage is not supported on IMAP-IMAP sync", OfflineImapError.ERROR.REPO), None, exc_info()[2] # Interface from BaseFolder def getcopyinstancelimit(self): return 'MSGCOPY_' + self.repository.getname() # Interface from BaseFolder def get_uidvalidity(self): """Retrieve the current connections UIDVALIDITY value UIDVALIDITY value will be cached on the first call. :returns: The UIDVALIDITY as (long) number.""" if hasattr(self, '_uidvalidity'): # use cached value if existing return self._uidvalidity imapobj = self.imapserver.acquireconnection() try: # SELECT (if not already done) and get current UIDVALIDITY self.__selectro(imapobj) typ, uidval = imapobj.response('UIDVALIDITY') assert uidval != [None] and uidval != None, \ "response('UIDVALIDITY') returned [None]!" self._uidvalidity = long(uidval[-1]) return self._uidvalidity finally: self.imapserver.releaseconnection(imapobj) # Interface from BaseFolder def quickchanged(self, statusfolder): # An IMAP folder has definitely changed if the number of # messages or the UID of the last message have changed. Otherwise # only flag changes could have occurred. retry = True # Should we attempt another round or exit? 
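        # The loop below re-selects the folder, re-acquiring the connection and
        # retrying when the server drops it (OfflineImapError with severity
        # FOLDER_RETRY); any other error is re-raised after the connection is
        # released.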
while retry: retry = False imapobj = self.imapserver.acquireconnection() try: # Select folder and get number of messages restype, imapdata = imapobj.select(self.getfullname(), True, True) self.imapserver.releaseconnection(imapobj) except OfflineImapError as e: # retry on dropped connections, raise otherwise self.imapserver.releaseconnection(imapobj, True) if e.severity == OfflineImapError.ERROR.FOLDER_RETRY: retry = True else: raise except: # cleanup and raise on all other errors self.imapserver.releaseconnection(imapobj, True) raise # 1. Some mail servers do not return an EXISTS response # if the folder is empty. 2. ZIMBRA servers can return # multiple EXISTS replies in the form 500, 1000, 1500, # 1623 so check for potentially multiple replies. if imapdata == [None]: return True maxmsgid = 0 for msgid in imapdata: maxmsgid = max(long(msgid), maxmsgid) # Different number of messages than last time? if maxmsgid != statusfolder.getmessagecount(): return True return False def _msgs_to_fetch(self, imapobj, min_date=None, min_uid=None): """Determines sequence numbers of messages to be fetched. Message sequence numbers (MSNs) are more easily compacted into ranges which makes transactions slightly faster. Arguments: - imapobj: instance of IMAPlib - min_date (optional): a time_struct; only fetch messages newer than this - min_uid (optional): only fetch messages with UID >= min_uid This function should be called with at MOST one of min_date OR min_uid set but not BOTH. Returns: range(s) for messages or None if no messages are to be fetched.""" def search(search_conditions): """Actually request the server with the specified conditions. Returns: range(s) for messages or None if no messages are to be fetched.""" res_type, res_data = imapobj.search(None, search_conditions) if res_type != 'OK': raise OfflineImapError("SEARCH in folder [%s]%s failed. " "Search string was '%s'. Server responded '[%s] %s'"% ( self.getrepository(), self, search_cond, res_type, res_data), OfflineImapError.ERROR.FOLDER) return res_data[0].split() res_type, imapdata = imapobj.select(self.getfullname(), True, True) if imapdata == [None] or imapdata[0] == '0': # Empty folder, no need to populate message list. return None conditions = [] # 1. min_uid condition. if min_uid != None: conditions.append("UID %d:*"% min_uid) # 2. date condition. elif min_date != None: # Find out what the oldest message is that we should look at. conditions.append("SINCE %02d-%s-%d"% ( min_date[2], MonthNames[min_date[1]], min_date[0])) # 3. maxsize condition. maxsize = self.getmaxsize() if maxsize != None: conditions.append("SMALLER %d"% maxsize) if len(conditions) >= 1: # Build SEARCH command. search_cond = "(%s)"% ' '.join(conditions) search_result = search(search_cond) return imaputil.uid_sequence(search_result) # By default consider all messages in this folder. return '1:*' # Interface from BaseFolder def msglist_item_initializer(self, uid): return {'uid': uid, 'flags': set(), 'time': 0} # Interface from BaseFolder def cachemessagelist(self, min_date=None, min_uid=None): self.ui.loadmessagelist(self.repository, self) self.messagelist = {} imapobj = self.imapserver.acquireconnection() try: msgsToFetch = self._msgs_to_fetch( imapobj, min_date=min_date, min_uid=min_uid) if not msgsToFetch: return # No messages to sync # Get the flags and UIDs for these. single-quotes prevent # imaplib2 from quoting the sequence. 
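            # msgsToFetch is a compacted message-set string as built by
            # _msgs_to_fetch(), e.g. '1:*' or something like '1,3:5'
            # (hypothetical range), covering message sequence numbers.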
res_type, response = imapobj.fetch("'%s'"% msgsToFetch, '(FLAGS UID INTERNALDATE)') if res_type != 'OK': raise OfflineImapError("FETCHING UIDs in folder [%s]%s failed. " "Server responded '[%s] %s'"% (self.getrepository(), self, res_type, response), OfflineImapError.ERROR.FOLDER) finally: self.imapserver.releaseconnection(imapobj) for messagestr in response: # looks like: '1 (FLAGS (\\Seen Old) UID 4807)' or None if no msg # Discard initial message number. if messagestr == None: continue messagestr = messagestr.split(' ', 1)[1] options = imaputil.flags2hash(messagestr) if not 'UID' in options: self.ui.warn('No UID in message with options %s'% \ str(options), minor = 1) else: uid = long(options['UID']) self.messagelist[uid] = self.msglist_item_initializer(uid) flags = imaputil.flagsimap2maildir(options['FLAGS']) keywords = imaputil.flagsimap2keywords(options['FLAGS']) rtime = imaplibutil.Internaldate2epoch(messagestr) self.messagelist[uid] = {'uid': uid, 'flags': flags, 'time': rtime, 'keywords': keywords} self.ui.messagelistloaded(self.repository, self, self.getmessagecount()) def dropmessagelistcache(self): self.messagelist = {} # Interface from BaseFolder def getvisiblename(self): vname = super(IMAPFolder, self).getvisiblename() if self.repository.getdecodefoldernames(): return imaputil.decode_mailbox_name(vname) return vname # Interface from BaseFolder def getmessagelist(self): return self.messagelist # Interface from BaseFolder def getmessage(self, uid): """Retrieve message with UID from the IMAP server (incl body). After this function all CRLFs will be transformed to '\n'. :returns: the message body or throws and OfflineImapError (probably severity MESSAGE) if e.g. no message with this UID could be found. """ data = self._fetch_from_imap(str(uid), 2) # data looks now e.g. [('320 (UID 17061 BODY[] # {2565}','msgbody....')] we only asked for one message, # and that msg is in data[0]. msbody is in [0][1] data = data[0][1].replace(CRLF, "\n") if len(data)>200: dbg_output = "%s...%s"% (str(data)[:150], str(data)[-50:]) else: dbg_output = data self.ui.debug('imap', "Returned object from fetching %d: '%s'"% (uid, dbg_output)) return data # Interface from BaseFolder def getmessagetime(self, uid): return self.messagelist[uid]['time'] # Interface from BaseFolder def getmessageflags(self, uid): return self.messagelist[uid]['flags'] # Interface from BaseFolder def getmessagekeywords(self, uid): return self.messagelist[uid]['keywords'] def __generate_randomheader(self, content): """Returns a unique X-OfflineIMAP header Generate an 'X-OfflineIMAP' mail header which contains a random unique value (which is based on the mail content, and a random number). This header allows us to fetch a mail after APPENDing it to an IMAP server and thus find out the UID that the server assigned it. :returns: (headername, headervalue) tuple, consisting of strings headername == 'X-OfflineIMAP' and headervalue will be a random string """ headername = 'X-OfflineIMAP' # We need a random component too. If we ever upload the same # mail twice (e.g. in different folders), we would still need to # get the UID for the correct one. As we won't have too many # mails with identical content, the randomness requirements are # not extremly critial though. 
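        # The generated value has the form
        # '<unsigned crc32 of content>-<random integer>',
        # e.g. (hypothetical) '2673943441-123456789'.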
# compute unsigned crc32 of 'content' as unique hash # NB: crc32 returns unsigned only starting with python 3.0 headervalue = str( binascii.crc32(content) & 0xffffffff ) + '-' headervalue += str(self.randomgenerator.randint(0,9999999999)) return (headername, headervalue) def __savemessage_searchforheader(self, imapobj, headername, headervalue): self.ui.debug('imap', '__savemessage_searchforheader called for %s: %s'% \ (headername, headervalue)) # Now find the UID it got. headervalue = imapobj._quote(headervalue) try: matchinguids = imapobj.uid('search', 'HEADER', headername, headervalue)[1][0] except imapobj.error as err: # IMAP server doesn't implement search or had a problem. self.ui.debug('imap', "__savemessage_searchforheader: got IMAP error '%s' while attempting to UID SEARCH for message with header %s"% (err, headername)) return 0 self.ui.debug('imap', '__savemessage_searchforheader got initial matchinguids: ' + repr(matchinguids)) if matchinguids == '': self.ui.debug('imap', "__savemessage_searchforheader: UID SEARCH for message with header %s yielded no results"% headername) return 0 matchinguids = matchinguids.split(' ') self.ui.debug('imap', '__savemessage_searchforheader: matchinguids now ' + \ repr(matchinguids)) if len(matchinguids) != 1 or matchinguids[0] == None: raise ValueError("While attempting to find UID for message with " "header %s, got wrong-sized matchinguids of %s"%\ (headername, str(matchinguids))) return long(matchinguids[0]) def __savemessage_fetchheaders(self, imapobj, headername, headervalue): """ We fetch all new mail headers and search for the right X-OfflineImap line by hand. The response from the server has form: ( 'OK', [ ( '185 (RFC822.HEADER {1789}', '... mail headers ...' ), ' UID 2444)', ( '186 (RFC822.HEADER {1789}', '... 2nd mail headers ...' ), ' UID 2445)' ] ) We need to locate the UID just after mail headers containing our X-OfflineIMAP line. Returns UID when found, 0 when not found.""" self.ui.debug('imap', '__savemessage_fetchheaders called for %s: %s'% \ (headername, headervalue)) # run "fetch X:* rfc822.header" # since we stored the mail we are looking for just recently, it would # not be optimal to fetch all messages. So we'll find highest message # UID in our local messagelist and search from there (exactly from # UID+1). That works because UIDs are guaranteed to be unique and # ascending. if self.getmessagelist(): start = 1 + max(self.getmessagelist().keys()) else: # Folder was empty - start from 1 start = 1 # Imaplib quotes all parameters of a string type. That must not happen # with the range X:*. So we use bytearray to stop imaplib from getting # in our way result = imapobj.uid('FETCH', bytearray('%d:*'% start), 'rfc822.header') if result[0] != 'OK': raise OfflineImapError('Error fetching mail headers: %s'% '. 
'.join(result[1]), OfflineImapError.ERROR.MESSAGE) result = result[1] found = 0 for item in result: if found == 0 and type(item) == type( () ): # Walk just tuples if re.search("(?:^|\\r|\\n)%s:\s*%s(?:\\r|\\n)"% (headername, headervalue), item[1], flags=re.IGNORECASE): found = 1 elif found == 1: if type(item) == type (""): uid = re.search("UID\s+(\d+)", item, flags=re.IGNORECASE) if uid: return int(uid.group(1)) else: self.ui.warn("Can't parse FETCH response, can't find UID: %s", result.__repr__()) else: self.ui.warn("Can't parse FETCH response, we awaited string: %s", result.__repr__()) return 0 def __getmessageinternaldate(self, content, rtime=None): """Parses mail and returns an INTERNALDATE string It will use information in the following order, falling back as an attempt fails: - rtime parameter - Date header of email We return None, if we couldn't find a valid date. In this case the IMAP server will use the server local time when appening (per RFC). Note, that imaplib's Time2Internaldate is inherently broken as it returns localized date strings which are invalid for IMAP servers. However, that function is called for *every* append() internally. So we need to either pass in `None` or the correct string (in which case Time2Internaldate() will do nothing) to append(). The output of this function is designed to work as input to the imapobj.append() function. TODO: We should probably be returning a bytearray rather than a string here, because the IMAP server will expect plain ASCII. However, imaplib.Time2INternaldate currently returns a string so we go with the same for now. :param rtime: epoch timestamp to be used rather than analyzing the email. :returns: string in the form of "DD-Mmm-YYYY HH:MM:SS +HHMM" (including double quotes) or `None` in case of failure (which is fine as value for append).""" if rtime is None: rtime = emailutil.get_message_date(content) if rtime == None: return None datetuple = time.localtime(rtime) try: # Check for invalid dates if datetuple[0] < 1981: raise ValueError # Check for invalid dates datetuple_check = time.localtime(time.mktime(datetuple)) if datetuple[:2] != datetuple_check[:2]: raise ValueError except (ValueError, OverflowError): # Argh, sometimes it's a valid format but year is 0102 # or something. Argh. It seems that Time2Internaldate # will rause a ValueError if the year is 0102 but not 1902, # but some IMAP servers nonetheless choke on 1902. self.ui.debug('imap', "Message with invalid date %s. " "Server will use local time."% datetuple) return None # Produce a string representation of datetuple that works as # INTERNALDATE. num2mon = {1:'Jan', 2:'Feb', 3:'Mar', 4:'Apr', 5:'May', 6:'Jun', 7:'Jul', 8:'Aug', 9:'Sep', 10:'Oct', 11:'Nov', 12:'Dec'} # tm_isdst coming from email.parsedate is not usable, we still use it # here, mhh. if datetuple.tm_isdst == 1: zone = -time.altzone else: zone = -time.timezone offset_h, offset_m = divmod(zone//60, 60) internaldate = '"%02d-%s-%04d %02d:%02d:%02d %+03d%02d"'% \ (datetuple.tm_mday, num2mon[datetuple.tm_mon], datetuple.tm_year, \ datetuple.tm_hour, datetuple.tm_min, datetuple.tm_sec, offset_h, offset_m) return internaldate # Interface from BaseFolder def savemessage(self, uid, content, flags, rtime): """Save the message on the Server This backend always assigns a new uid, so the uid arg is ignored. This function will update the self.messagelist dict to contain the new message after sucessfully saving it. See folder/Base for details. 
Note that savemessage() does not check against dryrun settings, so you need to ensure that savemessage is never called in a dryrun mode. :param rtime: A timestamp to be used as the mail date :returns: the UID of the new message as assigned by the server. If the message is saved, but it's UID can not be found, it will return 0. If the message can't be written (folder is read-only for example) it will return -1.""" self.ui.savemessage('imap', uid, flags, self) # already have it, just save modified flags if uid > 0 and self.uidexists(uid): self.savemessageflags(uid, flags) return uid content = self.deletemessageheaders(content, self.filterheaders) # Use proper CRLF all over the message content = re.sub("(?200: dbg_output = "%s...%s"% (content[:150], content[-50:]) else: dbg_output = content self.ui.debug('imap', "savemessage: date: %s, content: '%s'"% (date, dbg_output)) try: # Select folder for append and make the box READ-WRITE imapobj.select(self.getfullname()) except imapobj.readonly: # readonly exception. Return original uid to notify that # we did not save the message. (see savemessage in Base.py) self.ui.msgtoreadonly(self, uid, content, flags) return uid #Do the APPEND try: (typ, dat) = imapobj.append(self.getfullname(), imaputil.flagsmaildir2imap(flags), date, content) # This should only catch 'NO' responses since append() # will raise an exception for 'BAD' responses: if typ != 'OK': # For example, Groupwise IMAP server can return something like: # # NO APPEND The 1500 MB storage limit has been exceeded. # # In this case, we should immediately abort the repository sync # and continue with the next account. msg = \ "Saving msg (%s) in folder '%s', repository '%s' failed (abort). " \ "Server responded: %s %s\n"% \ (msg_id, self, self.getrepository(), typ, dat) raise OfflineImapError(msg, OfflineImapError.ERROR.REPO) retry_left = 0 # Mark as success except imapobj.abort as e: # connection has been reset, release connection and retry. retry_left -= 1 self.imapserver.releaseconnection(imapobj, True) imapobj = self.imapserver.acquireconnection() if not retry_left: raise OfflineImapError("Saving msg (%s) in folder '%s', " "repository '%s' failed (abort). Server responded: %s\n" "Message content was: %s"% (msg_id, self, self.getrepository(), str(e), dbg_output), OfflineImapError.ERROR.MESSAGE), \ None, exc_info()[2] # XXX: is this still needed? self.ui.error(e, exc_info()[2]) except imapobj.error as e: # APPEND failed # If the server responds with 'BAD', append() # raise()s directly. So we catch that too. # drop conn, it might be bad. self.imapserver.releaseconnection(imapobj, True) imapobj = None raise OfflineImapError("Saving msg (%s) folder '%s', repo '%s'" "failed (error). Server responded: %s\nMessage content was: " "%s" % (msg_id, self, self.getrepository(), str(e), dbg_output), OfflineImapError.ERROR.MESSAGE), None, exc_info()[2] # Checkpoint. Let it write out stuff, etc. Eg searches for # just uploaded messages won't work if we don't do this. (typ,dat) = imapobj.check() assert(typ == 'OK') # get the new UID, do we use UIDPLUS? if use_uidplus: # get new UID from the APPENDUID response, it could look # like OK [APPENDUID 38505 3955] APPEND completed with # 38505 bein folder UIDvalidity and 3955 the new UID. # note: we would want to use .response() here but that # often seems to return [None], even though we have # data. 
TODO resp = imapobj._get_untagged_response('APPENDUID') if resp == [None] or resp is None: self.ui.warn("Server supports UIDPLUS but got no APPENDUID " "appending a message.") return 0 uid = long(resp[-1].split(' ')[1]) if uid == 0: self.ui.warn("savemessage: Server supports UIDPLUS, but" " we got no usable uid back. APPENDUID reponse was " "'%s'"% str(resp)) else: # we don't support UIDPLUS uid = self.__savemessage_searchforheader(imapobj, headername, headervalue) # See docs for savemessage in Base.py for explanation # of this and other return values if uid == 0: self.ui.debug('imap', 'savemessage: attempt to get new UID ' 'UID failed. Search headers manually.') uid = self.__savemessage_fetchheaders(imapobj, headername, headervalue) self.ui.warn('imap', "savemessage: Searching mails for new " "Message-ID failed. Could not determine new UID.") finally: if imapobj: self.imapserver.releaseconnection(imapobj) if uid: # avoid UID FETCH 0 crash happening later on self.messagelist[uid] = self.msglist_item_initializer(uid) self.messagelist[uid]['flags'] = flags self.ui.debug('imap', 'savemessage: returning new UID %d'% uid) return uid def _fetch_from_imap(self, uids, retry_num=1): """Fetches data from IMAP server. Arguments: - imapobj: IMAPlib object - uids: message UIDS - retry_num: number of retries to make Returns: data obtained by this query.""" imapobj = self.imapserver.acquireconnection() try: query = "(%s)"% (" ".join(self.imap_query)) fails_left = retry_num ## retry on dropped connection while fails_left: try: imapobj.select(self.getfullname(), readonly = True) res_type, data = imapobj.uid('fetch', uids, query) break except imapobj.abort as e: fails_left -= 1 # self.ui.error() will show the original traceback if fails_left <= 0: message = ("%s, while fetching msg %r in folder %r." " Max retry reached (%d)"% (e, uids, self.name, retry_num)) severity = OfflineImapError.ERROR.MESSAGE raise OfflineImapError(message, OfflineImapError.ERROR.MESSAGE) # Release dropped connection, and get a new one self.imapserver.releaseconnection(imapobj, True) imapobj = self.imapserver.acquireconnection() self.ui.error("%s. While fetching msg %r in folder %r." " Retrying (%d/%d)"% (e, uids, self.name, retry_num - fails_left, retry_num)) finally: # The imapobj here might be different than the one created before # the ``try`` clause. So please avoid transforming this to a nice # ``with`` without taking this into account. 
self.imapserver.releaseconnection(imapobj) if data == [None] or res_type != 'OK': #IMAP server says bad request or UID does not exist severity = OfflineImapError.ERROR.MESSAGE reason = "IMAP server '%s' failed to fetch messages UID '%s'."\ "Server responded: %s %s"% (self.getrepository(), uids, res_type, data) if data == [None]: #IMAP server did not find a message with this UID reason = "IMAP server '%s' does not have a message "\ "with UID '%s'" % (self.getrepository(), uids) raise OfflineImapError(reason, severity) return data def _store_to_imap(self, imapobj, uid, field, data): """Stores data to IMAP server Arguments: - imapobj: instance of IMAPlib to use - uid: message UID - field: field name to be stored/updated - data: field contents """ imapobj.select(self.getfullname()) res_type, retdata = imapobj.uid('store', uid, field, data) if res_type != 'OK': severity = OfflineImapError.ERROR.MESSAGE reason = "IMAP server '%s' failed to store %s for message UID '%d'."\ "Server responded: %s %s"% ( self.getrepository(), field, uid, res_type, retdata) raise OfflineImapError(reason, severity) return retdata[0] # Interface from BaseFolder def savemessageflags(self, uid, flags): """Change a message's flags to `flags`. Note that this function does not check against dryrun settings, so you need to ensure that it is never called in a dryrun mode.""" imapobj = self.imapserver.acquireconnection() try: result = self._store_to_imap(imapobj, str(uid), 'FLAGS', imaputil.flagsmaildir2imap(flags)) except imapobj.readonly: self.ui.flagstoreadonly(self, [uid], flags) return finally: self.imapserver.releaseconnection(imapobj) if not result: self.messagelist[uid]['flags'] = flags else: flags = imaputil.flags2hash(imaputil.imapsplit(result)[1])['FLAGS'] self.messagelist[uid]['flags'] = imaputil.flagsimap2maildir(flags) # Interface from BaseFolder def addmessageflags(self, uid, flags): self.addmessagesflags([uid], flags) def __addmessagesflags_noconvert(self, uidlist, flags): self.__processmessagesflags('+', uidlist, flags) # Interface from BaseFolder def addmessagesflags(self, uidlist, flags): """This is here for the sake of UIDMaps.py -- deletemessages must add flags and get a converted UID, and if we don't have noconvert, then UIDMaps will try to convert it twice.""" self.__addmessagesflags_noconvert(uidlist, flags) # Interface from BaseFolder def deletemessageflags(self, uid, flags): self.deletemessagesflags([uid], flags) # Interface from BaseFolder def deletemessagesflags(self, uidlist, flags): self.__processmessagesflags('-', uidlist, flags) def __processmessagesflags_real(self, operation, uidlist, flags): imapobj = self.imapserver.acquireconnection() try: try: imapobj.select(self.getfullname()) except imapobj.readonly: self.ui.flagstoreadonly(self, uidlist, flags) return r = imapobj.uid('store', imaputil.uid_sequence(uidlist), operation + 'FLAGS', imaputil.flagsmaildir2imap(flags)) assert r[0] == 'OK', 'Error with store: ' + '. '.join(r[1]) r = r[1] finally: self.imapserver.releaseconnection(imapobj) # Some IMAP servers do not always return a result. Therefore, # only update the ones that it talks about, and manually fix # the others. needupdate = list(uidlist) for result in r: if result == None: # Compensate for servers that don't return anything from # STORE. continue attributehash = imaputil.flags2hash(imaputil.imapsplit(result)[1]) if not ('UID' in attributehash and 'FLAGS' in attributehash): # Compensate for servers that don't return a UID attribute. 
continue flagstr = attributehash['FLAGS'] uid = long(attributehash['UID']) self.messagelist[uid]['flags'] = imaputil.flagsimap2maildir(flagstr) try: needupdate.remove(uid) except ValueError: # Let it slide if it's not in the list pass for uid in needupdate: if operation == '+': self.messagelist[uid]['flags'] |= flags elif operation == '-': self.messagelist[uid]['flags'] -= flags def __processmessagesflags(self, operation, uidlist, flags): # Hack for those IMAP servers with a limited line length batch_size = 100 for i in xrange(0, len(uidlist), batch_size): self.__processmessagesflags_real(operation, uidlist[i:i + batch_size], flags) return # Interface from BaseFolder def change_message_uid(self, uid, new_uid): """Change the message from existing uid to new_uid If the backend supports it. IMAP does not and will throw errors.""" raise OfflineImapError('IMAP backend cannot change a messages UID from ' '%d to %d'% (uid, new_uid), OfflineImapError.ERROR.MESSAGE) # Interface from BaseFolder def deletemessage(self, uid): self.__deletemessages_noconvert([uid]) # Interface from BaseFolder def deletemessages(self, uidlist): self.__deletemessages_noconvert(uidlist) def __deletemessages_noconvert(self, uidlist): if not len(uidlist): return self.__addmessagesflags_noconvert(uidlist, set('T')) imapobj = self.imapserver.acquireconnection() try: try: imapobj.select(self.getfullname()) except imapobj.readonly: self.ui.deletereadonly(self, uidlist) return if self.expunge: assert(imapobj.expunge()[0] == 'OK') finally: self.imapserver.releaseconnection(imapobj) for uid in uidlist: del self.messagelist[uid] offlineimap-6.6.1/offlineimap/folder/LocalStatus.py000066400000000000000000000226231264010144500223730ustar00rootroot00000000000000# Local status cache virtual folder # Copyright (C) 2002-2015 John Goerzen & contributors # # This program is free software; you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation; either version 2 of the License, or # (at your option) any later version. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA from sys import exc_info import os import threading from .Base import BaseFolder class LocalStatusFolder(BaseFolder): """LocalStatus backend implemented as a plain text file.""" cur_version = 2 magicline = "OFFLINEIMAP LocalStatus CACHE DATA - DO NOT MODIFY - FORMAT %d" def __init__(self, name, repository): self.sep = '.' #needs to be set before super.__init__() super(LocalStatusFolder, self).__init__(name, repository) self.root = repository.root self.filename = os.path.join(self.getroot(), self.getfolderbasename()) self.messagelist = {} self.savelock = threading.Lock() # Should we perform fsyncs as often as possible? 
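        # When the [general] 'fsync' option is enabled, saveall() below fsyncs
        # both the temporary cache file and its containing directory after
        # each write.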
self.doautosave = self.config.getdefaultboolean( "general", "fsync", False) # Interface from BaseFolder def storesmessages(self): return 0 def isnewfolder(self): return not os.path.exists(self.filename) # Interface from BaseFolder def getfullname(self): return self.filename def deletemessagelist(self): if not self.isnewfolder(): os.unlink(self.filename) # Interface from BaseFolder def msglist_item_initializer(self, uid): return {'uid': uid, 'flags': set(), 'labels': set(), 'time': 0, 'mtime': 0} def readstatus_v1(self, fp): """Read status folder in format version 1. Arguments: - fp: I/O object that points to the opened database file. """ for line in fp.xreadlines(): line = line.strip() try: uid, flags = line.split(':') uid = long(uid) flags = set(flags) except ValueError as e: errstr = "Corrupt line '%s' in cache file '%s'" % \ (line, self.filename) self.ui.warn(errstr) raise ValueError(errstr), None, exc_info()[2] self.messagelist[uid] = self.msglist_item_initializer(uid) self.messagelist[uid]['flags'] = flags def readstatus(self, fp): """Read status file in the current format. Arguments: - fp: I/O object that points to the opened database file. """ for line in fp.xreadlines(): line = line.strip() try: uid, flags, mtime, labels = line.split('|') uid = long(uid) flags = set(flags) mtime = long(mtime) labels = set([lb.strip() for lb in labels.split(',') if len(lb.strip()) > 0]) except ValueError as e: errstr = "Corrupt line '%s' in cache file '%s'"% \ (line, self.filename) self.ui.warn(errstr) raise ValueError(errstr), None, exc_info()[2] self.messagelist[uid] = self.msglist_item_initializer(uid) self.messagelist[uid]['flags'] = flags self.messagelist[uid]['mtime'] = mtime self.messagelist[uid]['labels'] = labels # Interface from BaseFolder def cachemessagelist(self): if self.isnewfolder(): self.messagelist = {} return # Loop as many times as version, and update format. for i in range(1, self.cur_version + 1): self.messagelist = {} cachefd = open(self.filename, "rt") line = cachefd.readline().strip() # Format is up to date. break. if line == (self.magicline % self.cur_version): break # Convert from format v1. elif line == (self.magicline % 1): self.ui._msg('Upgrading LocalStatus cache from version 1' 'to version 2 for %s:%s'% (self.repository, self)) self.readstatus_v1(cachefd) cachefd.close() self.save() # NOTE: Add other format transitions here in the future. # elif line == (self.magicline % 2): # self.ui._msg(u'Upgrading LocalStatus cache from version 2' # 'to version 3 for %s:%s'% (self.repository, self)) # self.readstatus_v2(cache) # cache.close() # cache.save() # Something is wrong. else: errstr = "Unrecognized cache magicline in '%s'" % self.filename self.ui.warn(errstr) raise ValueError(errstr) if not line: # The status file is empty - should not have happened, # but somehow did. errstr = "Cache file '%s' is empty."% self.filename self.ui.warn(errstr) cachefd.close() return assert(line == (self.magicline % self.cur_version)) self.readstatus(cachefd) cachefd.close() def save(self): """Save changed data to disk. 
For this backend it is the same as saveall.""" self.saveall() def saveall(self): """Saves the entire messagelist to disk.""" with self.savelock: cachefd = open(self.filename + ".tmp", "wt") cachefd.write((self.magicline % self.cur_version) + "\n") for msg in self.messagelist.values(): flags = ''.join(sorted(msg['flags'])) labels = ', '.join(sorted(msg['labels'])) cachefd.write("%s|%s|%d|%s\n" % (msg['uid'], flags, msg['mtime'], labels)) cachefd.flush() if self.doautosave: os.fsync(cachefd.fileno()) cachefd.close() os.rename(self.filename + ".tmp", self.filename) if self.doautosave: fd = os.open(os.path.dirname(self.filename), os.O_RDONLY) os.fsync(fd) os.close(fd) # Interface from BaseFolder def getmessagelist(self): return self.messagelist # Interface from BaseFolder def savemessage(self, uid, content, flags, rtime, mtime=0, labels=set()): """Writes a new message, with the specified uid. See folder/Base for detail. Note that savemessage() does not check against dryrun settings, so you need to ensure that savemessage is never called in a dryrun mode.""" if uid < 0: # We cannot assign a uid. return uid if self.uidexists(uid): # already have it self.savemessageflags(uid, flags) return uid self.messagelist[uid] = self.msglist_item_initializer(uid) self.messagelist[uid]['flags'] = flags self.messagelist[uid]['time'] = rtime self.messagelist[uid]['mtime'] = mtime self.messagelist[uid]['labels'] = labels self.save() return uid # Interface from BaseFolder def getmessageflags(self, uid): return self.messagelist[uid]['flags'] # Interface from BaseFolder def getmessagetime(self, uid): return self.messagelist[uid]['time'] # Interface from BaseFolder def savemessageflags(self, uid, flags): self.messagelist[uid]['flags'] = flags self.save() def savemessagelabels(self, uid, labels, mtime=None): self.messagelist[uid]['labels'] = labels if mtime: self.messagelist[uid]['mtime'] = mtime self.save() def savemessageslabelsbulk(self, labels): """Saves labels from a dictionary in a single database operation.""" for uid, lb in labels.items(): self.messagelist[uid]['labels'] = lb self.save() def addmessageslabels(self, uids, labels): for uid in uids: self.messagelist[uid]['labels'] = self.messagelist[uid]['labels'] | labels self.save() def deletemessageslabels(self, uids, labels): for uid in uids: self.messagelist[uid]['labels'] = self.messagelist[uid]['labels'] - labels self.save() def getmessagelabels(self, uid): return self.messagelist[uid]['labels'] def savemessagesmtimebulk(self, mtimes): """Saves mtimes from the mtimes dictionary in a single database operation.""" for uid, mt in mtimes.items(): self.messagelist[uid]['mtime'] = mt self.save() def getmessagemtime(self, uid): return self.messagelist[uid]['mtime'] # Interface from BaseFolder def deletemessage(self, uid): self.deletemessages([uid]) # Interface from BaseFolder def deletemessages(self, uidlist): # Weed out ones not in self.messagelist uidlist = [uid for uid in uidlist if uid in self.messagelist] if not len(uidlist): return for uid in uidlist: del(self.messagelist[uid]) self.save() offlineimap-6.6.1/offlineimap/folder/LocalStatusSQLite.py000066400000000000000000000371751264010144500234650ustar00rootroot00000000000000# Local status cache virtual folder: SQLite backend # Copyright (C) 2009-2011 Stewart Smith and contributors # # This program is free software; you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation; either version 2 of the License, or # (at your 
option) any later version. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA import os from sys import exc_info from threading import Lock try: import sqlite3 as sqlite except: pass #fail only if needed later on, not on import from .Base import BaseFolder class LocalStatusSQLiteFolder(BaseFolder): """LocalStatus backend implemented with an SQLite database As python-sqlite currently does not allow to access the same sqlite objects from various threads, we need to open get and close a db connection and cursor for all operations. This is a big disadvantage and we might want to investigate if we cannot hold an object open for a thread somehow.""" # Though. According to sqlite docs, you need to commit() before # the connection is closed or your changes will be lost! # get db connection which autocommits # connection = sqlite.connect(self.filename, isolation_level=None) # cursor = connection.cursor() # return connection, cursor # Current version of our db format. cur_version = 2 def __init__(self, name, repository): self.sep = '.' # Needs to be set before super.__init__() super(LocalStatusSQLiteFolder, self).__init__(name, repository) self.root = repository.root self.filename = os.path.join(self.getroot(), self.getfolderbasename()) self.messagelist = {} self._newfolder = False # Flag if the folder is new. dirname = os.path.dirname(self.filename) if not os.path.exists(dirname): os.makedirs(dirname) if not os.path.isdir(dirname): raise UserWarning("SQLite database path '%s' is not a directory."% dirname) # dblock protects against concurrent writes in same connection. self._dblock = Lock() # Try to establish connection, no need for threadsafety in __init__. try: self.connection = sqlite.connect(self.filename, check_same_thread=False) except NameError: # sqlite import had failed. raise UserWarning("SQLite backend chosen, but cannot connect " "with available bindings to '%s'. Is the sqlite3 package " "installed?."% self.filename), None, exc_info()[2] except sqlite.OperationalError as e: # Operation had failed. raise UserWarning("cannot open database file '%s': %s.\nYou might " "want to check the rights to that file and if it cleanly opens " "with the 'sqlite<3>' command."% (self.filename, e)), None, exc_info()[2] # Make sure sqlite is in multithreading SERIALIZE mode. assert sqlite.threadsafety == 1, 'Your sqlite is not multithreading safe.' # Test if db version is current enough and if db is readable. try: cursor = self.connection.execute( "SELECT value from metadata WHERE key='db_version'") except sqlite.DatabaseError: #db file missing or corrupt, recreate it. 
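            # A minimal sketch of the version gate that this try/except/else
            # implements (table and key names come from __create_db() below;
            # 'conn', create_schema() and upgrade_schema() are hypothetical
            # stand-ins, not part of this module):
            #
            #   try:
            #       row = conn.execute(
            #           "SELECT value FROM metadata WHERE key='db_version'"
            #       ).fetchone()
            #   except sqlite3.DatabaseError:
            #       create_schema(conn)                 # missing/corrupt file
            #   else:
            #       if int(row[0]) < LocalStatusSQLiteFolder.cur_version:
            #           upgrade_schema(conn, from_ver=int(row[0]))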
self.__create_db() else: # fetch db version and upgrade if needed version = int(cursor.fetchone()[0]) if version < LocalStatusSQLiteFolder.cur_version: self.__upgrade_db(version) def storesmessages(self): return False def getfullname(self): return self.filename # Interface from LocalStatusFolder def isnewfolder(self): return self._newfolder # Interface from LocalStatusFolder def deletemessagelist(self): """Delete all messages in the db.""" self.__sql_write('DELETE FROM status') def __sql_write(self, sql, vars=None, executemany=False): """Execute some SQL, retrying if the db was locked. :param sql: the SQL string passed to execute() :param vars: the variable values to `sql`. E.g. (1,2) or {uid:1, flags:'T'}. See sqlite docs for possibilities. :param executemany: bool indicating whether we want to perform conn.executemany() or conn.execute(). :returns: the Cursor() or raises an Exception""" success = False while not success: self._dblock.acquire() try: if vars is None: if executemany: cursor = self.connection.executemany(sql) else: cursor = self.connection.execute(sql) else: if executemany: cursor = self.connection.executemany(sql, vars) else: cursor = self.connection.execute(sql, vars) success = True self.connection.commit() except sqlite.OperationalError as e: if e.args[0] == 'cannot commit - no transaction is active': pass elif e.args[0] == 'database is locked': self.ui.debug('', "Locked sqlite database, retrying.") success = False else: raise finally: self._dblock.release() return cursor def __upgrade_db(self, from_ver): """Upgrade the sqlite format from version 'from_ver' to current""" if hasattr(self, 'connection'): self.connection.close() #close old connections first self.connection = sqlite.connect(self.filename, check_same_thread = False) # Upgrade from database version 1 to version 2 # This change adds labels and mtime columns, to be used by Gmail IMAP and Maildir folders. if from_ver <= 1: self.ui._msg('Upgrading LocalStatus cache from version 1 to version 2 for %s:%s'% (self.repository, self)) self.connection.executescript("""ALTER TABLE status ADD mtime INTEGER DEFAULT 0; ALTER TABLE status ADD labels VARCHAR(256) DEFAULT ''; UPDATE metadata SET value='2' WHERE key='db_version'; """) self.connection.commit() # Future version upgrades come here... # if from_ver <= 2: ... #upgrade from 2 to 3 # if from_ver <= 3: ... #upgrade from 3 to 4 def __create_db(self): """Create a new db file. self.connection must point to the opened and valid SQlite database connection.""" self.ui._msg('Creating new Local Status db for %s:%s' \ % (self.repository, self)) self.connection.executescript(""" CREATE TABLE metadata (key VARCHAR(50) PRIMARY KEY, value VARCHAR(128)); INSERT INTO metadata VALUES('db_version', '2'); CREATE TABLE status (id INTEGER PRIMARY KEY, flags VARCHAR(50), mtime INTEGER, labels VARCHAR(256)); """) self.connection.commit() self._newfolder = True # Interface from BaseFolder def msglist_item_initializer(self, uid): return {'uid': uid, 'flags': set(), 'labels': set(), 'time': 0, 'mtime': 0} # Interface from BaseFolder def cachemessagelist(self): self.messagelist = {} cursor = self.connection.execute('SELECT id,flags,mtime,labels from status') for row in cursor: uid = row[0] self.messagelist[uid] = self.msglist_item_initializer(uid) flags = set(row[1]) try: labels = set([lb.strip() for lb in row[3].split(',') if len(lb.strip()) > 0]) except AttributeError: # FIXME: This except clause was introduced because row[3] from # database can be found of unexpected type NoneType. 
See # https://github.com/OfflineIMAP/offlineimap/issues/103 # # We are fixing the type here but this would require more # researches to find the true root cause. row[3] is expected to # be a (empty) string, not None. # # Also, since database might return None, we have to fix the # database, too. labels = set() self.messagelist[uid]['flags'] = flags self.messagelist[uid]['labels'] = labels self.messagelist[uid]['mtime'] = row[2] def dropmessagelistcache(self): self.messagelist = {} # Interface from LocalStatusFolder def save(self): pass # Noop. every transaction commits to database! def saveall(self): """Saves the entire messagelist to the database.""" data = [] for uid, msg in self.messagelist.items(): mtime = msg['mtime'] flags = ''.join(sorted(msg['flags'])) labels = ', '.join(sorted(msg['labels'])) data.append((uid, flags, mtime, labels)) self.__sql_write('INSERT OR REPLACE INTO status ' '(id,flags,mtime,labels) VALUES (?,?,?,?)', data, executemany=True) # Following some pure SQLite functions, where we chose to use # BaseFolder() methods instead. Doing those on the in-memory list is # quicker anyway. If our db becomes so big that we don't want to # maintain the in-memory list anymore, these might come in handy # in the future though. # #def uidexists(self,uid): # conn, cursor = self.get_cursor() # with conn: # cursor.execute('SELECT id FROM status WHERE id=:id',{'id': uid}) # return cursor.fetchone() # This would be the pure SQLite solution, use BaseFolder() method, # to avoid threading with sqlite... #def getmessageuidlist(self): # conn, cursor = self.get_cursor() # with conn: # cursor.execute('SELECT id from status') # r = [] # for row in cursor: # r.append(row[0]) # return r #def getmessagecount(self): # conn, cursor = self.get_cursor() # with conn: # cursor.execute('SELECT count(id) from status'); # return cursor.fetchone()[0] #def getmessageflags(self, uid): # conn, cursor = self.get_cursor() # with conn: # cursor.execute('SELECT flags FROM status WHERE id=:id', # {'id': uid}) # for row in cursor: # flags = [x for x in row[0]] # return flags # assert False,"getmessageflags() called on non-existing message" # Interface from BaseFolder def getmessagelist(self): return self.messagelist # Interface from BaseFolder def savemessage(self, uid, content, flags, rtime, mtime=0, labels=set()): """Writes a new message, with the specified uid. See folder/Base for detail. Note that savemessage() does not check against dryrun settings, so you need to ensure that savemessage is never called in a dryrun mode.""" if uid < 0: # We cannot assign a uid. return uid if self.uidexists(uid): # already have it self.savemessageflags(uid, flags) return uid self.messagelist[uid] = self.msglist_item_initializer(uid) self.messagelist[uid] = {'uid': uid, 'flags': flags, 'time': rtime, 'mtime': mtime, 'labels': labels} flags = ''.join(sorted(flags)) labels = ', '.join(sorted(labels)) self.__sql_write('INSERT INTO status (id,flags,mtime,labels) VALUES (?,?,?,?)', (uid,flags,mtime,labels)) return uid # Interface from BaseFolder def savemessageflags(self, uid, flags): assert self.uidexists(uid) self.messagelist[uid]['flags'] = flags flags = ''.join(sorted(flags)) self.__sql_write('UPDATE status SET flags=? 
WHERE id=?',(flags,uid)) def getmessageflags(self, uid): return self.messagelist[uid]['flags'] def savemessagelabels(self, uid, labels, mtime=None): self.messagelist[uid]['labels'] = labels if mtime: self.messagelist[uid]['mtime'] = mtime labels = ', '.join(sorted(labels)) if mtime: self.__sql_write('UPDATE status SET labels=?, mtime=? WHERE id=?',(labels,mtime,uid)) else: self.__sql_write('UPDATE status SET labels=? WHERE id=?',(labels,uid)) def savemessageslabelsbulk(self, labels): """ Saves labels from a dictionary in a single database operation. """ data = [(', '.join(sorted(l)), uid) for uid, l in labels.items()] self.__sql_write('UPDATE status SET labels=? WHERE id=?', data, executemany=True) for uid, l in labels.items(): self.messagelist[uid]['labels'] = l def addmessageslabels(self, uids, labels): data = [] for uid in uids: newlabels = self.messagelist[uid]['labels'] | labels data.append((', '.join(sorted(newlabels)), uid)) self.__sql_write('UPDATE status SET labels=? WHERE id=?', data, executemany=True) for uid in uids: self.messagelist[uid]['labels'] = self.messagelist[uid]['labels'] | labels def deletemessageslabels(self, uids, labels): data = [] for uid in uids: newlabels = self.messagelist[uid]['labels'] - labels data.append((', '.join(sorted(newlabels)), uid)) self.__sql_write('UPDATE status SET labels=? WHERE id=?', data, executemany=True) for uid in uids: self.messagelist[uid]['labels'] = self.messagelist[uid]['labels'] - labels def getmessagelabels(self, uid): return self.messagelist[uid]['labels'] def savemessagesmtimebulk(self, mtimes): """Saves mtimes from the mtimes dictionary in a single database operation.""" data = [(mt, uid) for uid, mt in mtimes.items()] self.__sql_write('UPDATE status SET mtime=? WHERE id=?', data, executemany=True) for uid, mt in mtimes.items(): self.messagelist[uid]['mtime'] = mt def getmessagemtime(self, uid): return self.messagelist[uid]['mtime'] # Interface from BaseFolder def deletemessage(self, uid): if not uid in self.messagelist: return self.__sql_write('DELETE FROM status WHERE id=?', (uid, )) del(self.messagelist[uid]) # Interface from BaseFolder def deletemessages(self, uidlist): """Delete list of UIDs from status cache This function uses sqlites executemany() function which is much faster than iterating through deletemessage() when we have many messages to delete.""" # Weed out ones not in self.messagelist uidlist = [uid for uid in uidlist if uid in self.messagelist] if not len(uidlist): return # arg2 needs to be an iterable of 1-tuples [(1,),(2,),...] self.__sql_write('DELETE FROM status WHERE id=?', zip(uidlist, ), True) for uid in uidlist: del(self.messagelist[uid]) offlineimap-6.6.1/offlineimap/folder/Maildir.py000066400000000000000000000472171264010144500215240ustar00rootroot00000000000000# Maildir folder support # Copyright (C) 2002-2015 John Goerzen & contributors # # This program is free software; you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation; either version 2 of the License, or # (at your option) any later version. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. 
# # You should have received a copy of the GNU General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA import socket import time import re import os from sys import exc_info from .Base import BaseFolder from threading import Lock try: from hashlib import md5 except ImportError: from md5 import md5 try: # python 2.6 has set() built in set except NameError: from sets import Set as set from offlineimap import OfflineImapError, emailutil # Find the UID in a message filename re_uidmatch = re.compile(',U=(\d+)') # Find a numeric timestamp in a string (filename prefix) re_timestampmatch = re.compile('(\d+)'); timehash = {} timelock = Lock() def _gettimeseq(date=None): global timehash, timelock timelock.acquire() try: if date is None: date = long(time.time()) if timehash.has_key(date): timehash[date] += 1 else: timehash[date] = 0 return (date, timehash[date]) finally: timelock.release() class MaildirFolder(BaseFolder): def __init__(self, root, name, sep, repository): self.sep = sep # needs to be set before super().__init__ super(MaildirFolder, self).__init__(name, repository) self.dofsync = self.config.getdefaultboolean("general", "fsync", True) self.root = root self.messagelist = {} # check if we should use a different infosep to support Win file systems self.wincompatible = self.config.getdefaultboolean( "Account "+self.accountname, "maildir-windows-compatible", False) self.infosep = '!' if self.wincompatible else ':' """infosep is the separator between maildir name and flag appendix""" self.re_flagmatch = re.compile('%s2,(\w*)'% self.infosep) #self.ui is set in BaseFolder.init() # Everything up to the first comma or colon (or ! if Windows): self.re_prefixmatch = re.compile('([^'+ self.infosep + ',]*)') # folder's md, so we can match with recorded file md5 for validity. self._foldermd5 = md5(self.getvisiblename()).hexdigest() # Cache the full folder path, as we use getfullname() very often. self._fullname = os.path.join(self.getroot(), self.getname()) # Interface from BaseFolder def getfullname(self): """Return the absolute file path to the Maildir folder (sans cur|new)""" return self._fullname # Interface from BaseFolder def get_uidvalidity(self): """Retrieve the current connections UIDVALIDITY value Maildirs have no notion of uidvalidity, so we just return a magic token.""" return 42 def _iswithintime(self, messagename, date): """Check to see if the given message is newer than date (a time_struct) according to the maildir name which should begin with a timestamp.""" timestampmatch = re_timestampmatch.search(messagename) if not timestampmatch: return True timestampstr = timestampmatch.group() timestamplong = long(timestampstr) if(timestamplong < time.mktime(date)): return False else: return True def _parse_filename(self, filename): """Returns a messages file name components Receives the file name (without path) of a msg. Usual format is '<%d_%d.%d.%s>,U=<%d>,FMD5=<%s>:2,' (pointy brackets denoting the various components). If FMD5 does not correspond with the current folder MD5, we will return None for the UID & FMD5 (as it is not valid in this folder). If UID or FMD5 can not be detected, we return `None` for the respective element. If flags are empty or cannot be detected, we return an empty flags list. :returns: (prefix, UID, FMD5, flags). UID is a numeric "long" type. flags is a set() of Maildir flags. 
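        Illustrative example (hypothetical filename): for
        '1424362000_0.1234.host,U=42,FMD5=<this folder's md5>:2,RS' this
        returns prefix '1424362000_0.1234.host', UID 42 and flags
        set(['R', 'S']); with a missing or mismatching FMD5 part, UID and
        FMD5 are returned as None instead.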
""" prefix, uid, fmd5, flags = None, None, None, set() prefixmatch = self.re_prefixmatch.match(filename) if prefixmatch: prefix = prefixmatch.group(1) folderstr = ',FMD5=%s'% self._foldermd5 foldermatch = folderstr in filename # If there was no folder MD5 specified, or if it mismatches, # assume it is a foreign (new) message and ret: uid, fmd5 = None, None if foldermatch: uidmatch = re_uidmatch.search(filename) if uidmatch: uid = long(uidmatch.group(1)) flagmatch = self.re_flagmatch.search(filename) if flagmatch: flags = set((c for c in flagmatch.group(1))) return prefix, uid, fmd5, flags def _scanfolder(self, min_date=None, min_uid=None): """Cache the message list from a Maildir. If min_date is set, this finds the min UID of all messages newer than min_date and uses it as the real cutoff for considering messages. This handles the edge cases where the date is much earlier than messages with similar UID's (e.g. the UID was reassigned much later). Maildir flags are: R (replied) S (seen) T (trashed) D (draft) F (flagged), plus lower-case letters for custom flags. :returns: dict that can be used as self.messagelist. """ maxsize = self.getmaxsize() retval = {} files = [] nouidcounter = -1 # Messages without UIDs get negative UIDs. for dirannex in ['new', 'cur']: fulldirname = os.path.join(self.getfullname(), dirannex) files.extend((dirannex, filename) for filename in os.listdir(fulldirname)) date_excludees = {} for dirannex, filename in files: if filename.startswith('.'): continue # Ignore dot files. # We store just dirannex and filename, ie 'cur/123...' filepath = os.path.join(dirannex, filename) # Check maxsize if this message should be considered. if maxsize and (os.path.getsize(os.path.join( self.getfullname(), filepath)) > maxsize): continue (prefix, uid, fmd5, flags) = self._parse_filename(filename) if uid is None: # Assign negative uid to upload it. uid = nouidcounter nouidcounter -= 1 else: # It comes from our folder. uidmatch = re_uidmatch.search(filename) uid = None if not uidmatch: uid = nouidcounter nouidcounter -= 1 else: uid = long(uidmatch.group(1)) if min_uid != None and uid > 0 and uid < min_uid: continue if min_date != None and not self._iswithintime(filename, min_date): # Keep track of messages outside of the time limit, because they # still might have UID > min(UIDs of within-min_date). We hit # this case for maxage if any message had a known/valid datetime # and was re-uploaded because the UID in the filename got lost # (e.g. local copy/move). On next sync, it was assigned a new # UID from the server and will be included in the SEARCH # condition. So, we must re-include them later in this method # in order to avoid inconsistent lists of messages. date_excludees[uid] = self.msglist_item_initializer(uid) date_excludees[uid]['flags'] = flags date_excludees[uid]['filename'] = filepath else: # 'filename' is 'dirannex/filename', e.g. cur/123,U=1,FMD5=1:2,S retval[uid] = self.msglist_item_initializer(uid) retval[uid]['flags'] = flags retval[uid]['filename'] = filepath if min_date != None: # Re-include messages with high enough uid's. positive_uids = filter(lambda uid: uid > 0, retval) if positive_uids: min_uid = min(positive_uids) for uid in date_excludees.keys(): if uid > min_uid: # This message was originally excluded because of # its date. It is re-included now because we want all # messages with UID > min_uid. 
retval[uid] = date_excludees[uid] return retval # Interface from BaseFolder def quickchanged(self, statusfolder): """Returns True if the Maildir has changed Assumes cachemessagelist() has already been called """ # Folder has different uids than statusfolder => TRUE. if sorted(self.getmessageuidlist()) != \ sorted(statusfolder.getmessageuidlist()): return True # Also check for flag changes, it's quick on a Maildir. for (uid, message) in self.getmessagelist().iteritems(): if message['flags'] != statusfolder.getmessageflags(uid): return True return False # Nope, nothing changed. # Interface from BaseFolder def msglist_item_initializer(self, uid): return {'flags': set(), 'filename': '/no-dir/no-such-file/'} # Interface from BaseFolder def cachemessagelist(self, min_date=None, min_uid=None): if self.ismessagelistempty(): self.ui.loadmessagelist(self.repository, self) self.messagelist = self._scanfolder(min_date=min_date, min_uid=min_uid) self.ui.messagelistloaded(self.repository, self, self.getmessagecount()) # Interface from BaseFolder def getmessagelist(self): return self.messagelist # Interface from BaseFolder def getmessage(self, uid): """Return the content of the message.""" filename = self.messagelist[uid]['filename'] filepath = os.path.join(self.getfullname(), filename) file = open(filepath, 'rt') retval = file.read() file.close() #TODO: WHY are we replacing \r\n with \n here? And why do we # read it as text? return retval.replace("\r\n", "\n") # Interface from BaseFolder def getmessagetime(self, uid): filename = self.messagelist[uid]['filename'] filepath = os.path.join(self.getfullname(), filename) return os.path.getmtime(filepath) def new_message_filename(self, uid, flags=set(), date=None): """Creates a new unique Maildir filename :param uid: The UID`None`, or a set of maildir flags :param flags: A set of maildir flags :returns: String containing unique message filename""" timeval, timeseq = _gettimeseq(date) return '%d_%d.%d.%s,U=%d,FMD5=%s%s2,%s'% \ (timeval, timeseq, os.getpid(), socket.gethostname(), uid, self._foldermd5, self.infosep, ''.join(sorted(flags))) def save_to_tmp_file(self, filename, content): """Saves given content to the named temporary file in the 'tmp' subdirectory of $CWD. Arguments: - filename: name of the temporary file; - content: data to be saved. Returns: relative path to the temporary file that was created.""" tmpname = os.path.join('tmp', filename) # open file and write it out tries = 7 while tries: tries = tries - 1 try: fd = os.open(os.path.join(self.getfullname(), tmpname), os.O_EXCL|os.O_CREAT|os.O_WRONLY, 0o666) break except OSError as e: if e.errno == e.EEXIST: if tries: time.sleep(0.23) continue severity = OfflineImapError.ERROR.MESSAGE raise OfflineImapError("Unique filename %s already exists."% filename, severity), None, exc_info()[2] else: raise fd = os.fdopen(fd, 'wt') fd.write(content) # Make sure the data hits the disk. fd.flush() if self.dofsync: os.fsync(fd) fd.close() return tmpname # Interface from BaseFolder def savemessage(self, uid, content, flags, rtime): """Writes a new message, with the specified uid. See folder/Base for detail. Note that savemessage() does not check against dryrun settings, so you need to ensure that savemessage is never called in a dryrun mode.""" # This function only ever saves to tmp/, # but it calls savemessageflags() to actually save to cur/ or new/. self.ui.savemessage('maildir', uid, flags, self) if uid < 0: # We cannot assign a new uid. 
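            # (Negative UIDs mark messages that exist only locally and still
            # need to be uploaded; _scanfolder() assigns them from its
            # decreasing nouidcounter.)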
return uid if uid in self.messagelist: # We already have it, just update flags. self.savemessageflags(uid, flags) return uid # Otherwise, save the message in tmp/ and then call savemessageflags() # to give it a permanent home. tmpdir = os.path.join(self.getfullname(), 'tmp') # use the mail timestamp given by either Date or Delivery-date mail # headers. message_timestamp = None if self._filename_use_mail_timestamp: try: message_timestamp = emailutil.get_message_date(content, 'Date') if message_timestamp is None: # Give a try with Delivery-date date = emailutil.get_message_date(content, 'Delivery-date') except: # This should never happen from email.Parser import Parser from offlineimap.ui import getglobalui datestr = Parser().parsestr(content, True).get("Date") ui = getglobalui() ui.warn("UID %d has invalid date %s: %s\n" "Not using message timestamp as file prefix" % (uid, datestr, e)) # No need to check if date is None here since it would # be overridden by _gettimeseq. messagename = self.new_message_filename(uid, flags, date=message_timestamp) tmpname = self.save_to_tmp_file(messagename, content) if self.utime_from_header: try: date = emailutil.get_message_date(content, 'Date') if date is not None: os.utime(os.path.join(self.getfullname(), tmpname), (date, date)) # In case date is wrongly so far into the future as to be > max int32 except Exception as e: from email.Parser import Parser from offlineimap.ui import getglobalui datestr = Parser().parsestr(content, True).get("Date") ui = getglobalui() ui.warn("UID %d has invalid date %s: %s\n" "Not changing file modification time" % (uid, datestr, e)) self.messagelist[uid] = self.msglist_item_initializer(uid) self.messagelist[uid]['flags'] = flags self.messagelist[uid]['filename'] = tmpname # savemessageflags moves msg to 'cur' or 'new' as appropriate self.savemessageflags(uid, flags) self.ui.debug('maildir', 'savemessage: returning uid %d' % uid) return uid # Interface from BaseFolder def getmessageflags(self, uid): return self.messagelist[uid]['flags'] # Interface from BaseFolder def savemessageflags(self, uid, flags): """Sets the specified message's flags to the given set. This function moves the message to the cur or new subdir, depending on the 'S'een flag. 
Note that this function does not check against dryrun settings, so you need to ensure that it is never called in a dryrun mode.""" assert uid in self.messagelist oldfilename = self.messagelist[uid]['filename'] dir_prefix, filename = os.path.split(oldfilename) # If a message has been seen, it goes into 'cur' dir_prefix = 'cur' if 'S' in flags else 'new' if flags != self.messagelist[uid]['flags']: # Flags have actually changed, construct new filename Strip # off existing infostring infomatch = self.re_flagmatch.search(filename) if infomatch: filename = filename[:-len(infomatch.group())] #strip off infostr = '%s2,%s'% (self.infosep, ''.join(sorted(flags))) filename += infostr newfilename = os.path.join(dir_prefix, filename) if (newfilename != oldfilename): try: os.rename(os.path.join(self.getfullname(), oldfilename), os.path.join(self.getfullname(), newfilename)) except OSError as e: raise OfflineImapError("Can't rename file '%s' to '%s': %s" % ( oldfilename, newfilename, e[1]), OfflineImapError.ERROR.FOLDER), \ None, exc_info()[2] self.messagelist[uid]['flags'] = flags self.messagelist[uid]['filename'] = newfilename # Interface from BaseFolder def change_message_uid(self, uid, new_uid): """Change the message from existing uid to new_uid This will not update the statusfolder UID, you need to do that yourself. :param new_uid: (optional) If given, the old UID will be changed to a new UID. The Maildir backend can implement this as an efficient rename. """ if not uid in self.messagelist: raise OfflineImapError("Cannot change unknown Maildir UID %s"% uid) if uid == new_uid: return oldfilename = self.messagelist[uid]['filename'] dir_prefix, filename = os.path.split(oldfilename) flags = self.getmessageflags(uid) newfilename = os.path.join(dir_prefix, self.new_message_filename(new_uid, flags)) os.rename(os.path.join(self.getfullname(), oldfilename), os.path.join(self.getfullname(), newfilename)) self.messagelist[new_uid] = self.messagelist[uid] self.messagelist[new_uid]['filename'] = newfilename del self.messagelist[uid] # Interface from BaseFolder def deletemessage(self, uid): """Unlinks a message file from the Maildir. :param uid: UID of a mail message :type uid: String :return: Nothing, or an Exception if UID but no corresponding file found. """ filename = self.messagelist[uid]['filename'] filepath = os.path.join(self.getfullname(), filename) try: os.unlink(filepath) except OSError: # Can't find the file -- maybe already deleted? newmsglist = self._scanfolder() if uid in newmsglist: # Nope, try new filename. filename = newmsglist[uid]['filename'] filepath = os.path.join(self.getfullname(), filename) os.unlink(filepath) # Yep -- return. del(self.messagelist[uid]) offlineimap-6.6.1/offlineimap/folder/UIDMaps.py000066400000000000000000000275741264010144500214110ustar00rootroot00000000000000# Base folder support # Copyright (C) 2002-2015 John Goerzen & contributors # # This program is free software; you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation; either version 2 of the License, or # (at your option) any later version. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. 
# # You should have received a copy of the GNU General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA from sys import exc_info from threading import Lock from offlineimap import OfflineImapError from .IMAP import IMAPFolder import os.path class MappedIMAPFolder(IMAPFolder): """IMAP class to map between Folder() instances where both side assign a uid This Folder is used on the local side, while the remote side should be an IMAPFolder. Instance variables (self.): r2l: dict mapping message uids: self.r2l[remoteuid]=localuid l2r: dict mapping message uids: self.r2l[localuid]=remoteuid #TODO: what is the difference, how are they used? diskr2l: dict mapping message uids: self.r2l[remoteuid]=localuid diskl2r: dict mapping message uids: self.r2l[localuid]=remoteuid""" def __init__(self, *args, **kwargs): IMAPFolder.__init__(self, *args, **kwargs) self.maplock = Lock() (self.diskr2l, self.diskl2r) = self._loadmaps() self._mb = IMAPFolder(*args, **kwargs) """Representing the local IMAP Folder using local UIDs""" def _getmapfilename(self): return os.path.join(self.repository.getmapdir(), self.getfolderbasename()) def _loadmaps(self): self.maplock.acquire() try: mapfilename = self._getmapfilename() if not os.path.exists(mapfilename): return ({}, {}) file = open(mapfilename, 'rt') r2l = {} l2r = {} while 1: line = file.readline() if not len(line): break try: line = line.strip() except ValueError: raise Exception("Corrupt line '%s' in UID mapping file '%s'"% (line, mapfilename)), None, exc_info()[2] (str1, str2) = line.split(':') loc = long(str1) rem = long(str2) r2l[rem] = loc l2r[loc] = rem return (r2l, l2r) finally: self.maplock.release() def _savemaps(self, dolock = 1): mapfilename = self._getmapfilename() if dolock: self.maplock.acquire() try: file = open(mapfilename + ".tmp", 'wt') for (key, value) in self.diskl2r.iteritems(): file.write("%d:%d\n"% (key, value)) file.close() os.rename(mapfilename + '.tmp', mapfilename) finally: if dolock: self.maplock.release() def _uidlist(self, mapping, items): try: return [mapping[x] for x in items] except KeyError as e: raise OfflineImapError("Could not find UID for msg '{0}' (f:'{1}'." " This is usually a bad thing and should be reported on the ma" "iling list.".format(e.args[0], self), OfflineImapError.ERROR.MESSAGE), None, exc_info()[2] # Interface from BaseFolder def cachemessagelist(self, min_date=None, min_uid=None): self._mb.cachemessagelist(min_date=min_date, min_uid=min_uid) reallist = self._mb.getmessagelist() self.messagelist = self._mb.messagelist self.maplock.acquire() try: # OK. Now we've got a nice list. First, delete things from the # summary that have been deleted from the folder. for luid in self.diskl2r.keys(): if not luid in reallist: ruid = self.diskl2r[luid] del self.diskr2l[ruid] del self.diskl2r[luid] # Now, assign negative UIDs to local items. self._savemaps(dolock = 0) nextneg = -1 self.r2l = self.diskr2l.copy() self.l2r = self.diskl2r.copy() for luid in reallist.keys(): if not luid in self.l2r: ruid = nextneg nextneg -= 1 self.l2r[luid] = ruid self.r2l[ruid] = luid finally: self.maplock.release() def dropmessagelistcache(self): self._mb.dropmessagelistcache() # Interface from BaseFolder def uidexists(self, ruid): """Checks if the (remote) UID exists in this Folder""" # This implementation overrides the one in BaseFolder, as it is # much more efficient for the mapped case. 
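        # Illustrative sketch of the mapping this relies on (UID values are
        # hypothetical): the map file stores one 'localuid:remoteuid' pair
        # per line, which _loadmaps() turns into two dicts, e.g.
        #     diskl2r = {10: 1042, 11: 1043}
        #     diskr2l = {1042: 10, 1043: 11}
        # so checking a remote UID is a plain dict membership test: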
return ruid in self.r2l # Interface from BaseFolder def getmessageuidlist(self): """Gets a list of (remote) UIDs. You may have to call cachemessagelist() before calling this function!""" # This implementation overrides the one in BaseFolder, as it is # much more efficient for the mapped case. return self.r2l.keys() # Interface from BaseFolder def getmessagecount(self): """Gets the number of messages in this folder. You may have to call cachemessagelist() before calling this function!""" # This implementation overrides the one in BaseFolder, as it is # much more efficient for the mapped case. return len(self.r2l) # Interface from BaseFolder def getmessagelist(self): """Gets the current message list. This function's implementation is quite expensive for the mapped UID case. You must call cachemessagelist() before calling this function!""" retval = {} localhash = self._mb.getmessagelist() self.maplock.acquire() try: for key, value in localhash.items(): try: key = self.l2r[key] except KeyError: # Sometimes, the IMAP backend may put in a new message, # then this function acquires the lock before the system # has the chance to note it in the mapping. In that case, # just ignore it. continue value = value.copy() value['uid'] = self.l2r[value['uid']] retval[key] = value return retval finally: self.maplock.release() # Interface from BaseFolder def getmessage(self, uid): """Returns the content of the specified message.""" return self._mb.getmessage(self.r2l[uid]) # Interface from BaseFolder def savemessage(self, uid, content, flags, rtime): """Writes a new message, with the specified uid. The UIDMaps class will not return a newly assigned uid, as it internally maps different uids between IMAP servers. So a successful savemessage() invocation will return the same uid it has been invoked with. As it maps between 2 IMAP servers which means the source message must already have an uid, it requires a positive uid to be passed in. Passing in a message with a negative uid will do nothing and return the negative uid. If the uid is > 0, the backend should set the uid to this, if it can. If it cannot set the uid to that, it will save it anyway. It will return the uid assigned in any case. See folder/Base for details. Note that savemessage() does not check against dryrun settings, so you need to ensure that savemessage is never called in a dryrun mode. """ self.ui.savemessage('imap', uid, flags, self) # Mapped UID instances require the source to already have a # positive UID, so simply return here. if uid < 0: return uid # If msg uid already exists, just modify the flags. 
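        # Sketch of the mapped save flow below (UID values are hypothetical):
        # saving remote UID 1044 asks the local backend to pick its own UID
        # via self._mb.savemessage(-1, ...) -> e.g. 12, records
        #     l2r[12] = 1044 and r2l[1044] = 12 (plus the disk* copies),
        # persists the pair with _savemaps(), and returns 1044 unchanged.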
if uid in self.r2l: self.savemessageflags(uid, flags) return uid newluid = self._mb.savemessage(-1, content, flags, rtime) if newluid < 1: raise ValueError("Backend could not find uid for message, " "returned %s"% newluid) self.maplock.acquire() try: self.diskl2r[newluid] = uid self.diskr2l[uid] = newluid self.l2r[newluid] = uid self.r2l[uid] = newluid self._savemaps(dolock = 0) finally: self.maplock.release() return uid # Interface from BaseFolder def getmessageflags(self, uid): return self._mb.getmessageflags(self.r2l[uid]) # Interface from BaseFolder def getmessagetime(self, uid): return None # Interface from BaseFolder def savemessageflags(self, uid, flags): """Note that this function does not check against dryrun settings, so you need to ensure that it is never called in a dryrun mode.""" self._mb.savemessageflags(self.r2l[uid], flags) # Interface from BaseFolder def addmessageflags(self, uid, flags): self._mb.addmessageflags(self.r2l[uid], flags) # Interface from BaseFolder def addmessagesflags(self, uidlist, flags): self._mb.addmessagesflags(self._uidlist(self.r2l, uidlist), flags) # Interface from BaseFolder def change_message_uid(self, ruid, new_ruid): """Change the message from existing ruid to new_ruid :param new_uid: The old remote UID will be changed to a new UID. The UIDMaps case handles this efficiently by simply changing the mappings file.""" if ruid not in self.r2l: raise OfflineImapError("Cannot change unknown Maildir UID %s"% ruid, OfflineImapError.ERROR.MESSAGE) if ruid == new_ruid: return # sanity check shortcut self.maplock.acquire() try: luid = self.r2l[ruid] self.l2r[luid] = new_ruid del self.r2l[ruid] self.r2l[new_ruid] = luid # TODO: diskl2r|r2l are a pain to sync and should be done away with # diskl2r only contains positive UIDs, so wrap in ifs. if luid > 0: self.diskl2r[luid] = new_ruid if ruid > 0: del self.diskr2l[ruid] if new_ruid > 0: self.diskr2l[new_ruid] = luid self._savemaps(dolock = 0) finally: self.maplock.release() def _mapped_delete(self, uidlist): self.maplock.acquire() try: needssave = 0 for ruid in uidlist: luid = self.r2l[ruid] del self.r2l[ruid] del self.l2r[luid] if ruid > 0: del self.diskr2l[ruid] del self.diskl2r[luid] needssave = 1 if needssave: self._savemaps(dolock = 0) finally: self.maplock.release() # Interface from BaseFolder def deletemessageflags(self, uid, flags): self._mb.deletemessageflags(self.r2l[uid], flags) # Interface from BaseFolder def deletemessagesflags(self, uidlist, flags): self._mb.deletemessagesflags(self._uidlist(self.r2l, uidlist), flags) # Interface from BaseFolder def deletemessage(self, uid): self._mb.deletemessage(self.r2l[uid]) self._mapped_delete([uid]) # Interface from BaseFolder def deletemessages(self, uidlist): self._mb.deletemessages(self._uidlist(self.r2l, uidlist)) self._mapped_delete(uidlist) offlineimap-6.6.1/offlineimap/folder/__init__.py000066400000000000000000000000671264010144500216720ustar00rootroot00000000000000from . import Base, Gmail, IMAP, Maildir, LocalStatus offlineimap-6.6.1/offlineimap/globals.py000066400000000000000000000004531264010144500203020ustar00rootroot00000000000000# Copyright 2013 Eygene A. Ryabinkin. # # Module that holds various global objects. from offlineimap.utils import const # Holds command-line options for OfflineIMAP. 
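# A typical usage sketch (the 'args' object and the 'dryrun' attribute are
# hypothetical examples of whatever the caller passes in):
#
#   from offlineimap import globals
#   globals.set_options(args)        # done once, early at startup
#   if globals.options.dryrun:       # read-only access everywhere else
#       ...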
options = const.ConstProxy() def set_options (source): """ Sets the source for options variable """ options.set_source (source) offlineimap-6.6.1/offlineimap/imaplibutil.py000066400000000000000000000210351264010144500211710ustar00rootroot00000000000000# imaplib utilities # Copyright (C) 2002-2015 John Goerzen & contributors # This program is free software; you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation; either version 2 of the License, or # (at your option) any later version. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA import os import fcntl import time import subprocess from sys import exc_info import threading from hashlib import sha1 import socket import errno from offlineimap.ui import getglobalui from offlineimap import OfflineImapError from offlineimap.imaplib2 import IMAP4, IMAP4_SSL, zlib, InternalDate, Mon2num class UsefulIMAPMixIn(object): def __getselectedfolder(self): if self.state == 'SELECTED': return self.mailbox return None def select(self, mailbox='INBOX', readonly=False, force=False): """Selects a mailbox on the IMAP server :returns: 'OK' on success, nothing if the folder was already selected or raises an :exc:`OfflineImapError`.""" if self.__getselectedfolder() == mailbox and \ self.is_readonly == readonly and \ not force: # No change; return. return try: result = super(UsefulIMAPMixIn, self).select(mailbox, readonly) except self.readonly as e: # pass self.readonly to our callers raise except self.abort as e: # self.abort is raised when we are supposed to retry errstr = "Server '%s' closed connection, error on SELECT '%s'. Ser"\ "ver said: %s" % (self.host, mailbox, e.args[0]) severity = OfflineImapError.ERROR.FOLDER_RETRY raise OfflineImapError(errstr, severity), None, exc_info()[2] if result[0] != 'OK': #in case of error, bail out with OfflineImapError errstr = "Error SELECTing mailbox '%s', server reply:\n%s" %\ (mailbox, result) severity = OfflineImapError.ERROR.FOLDER raise OfflineImapError(errstr, severity) return result # Overrides private function from IMAP4 (@imaplib2) def _mesg(self, s, tn=None, secs=None): new_mesg(self, s, tn, secs) # Overrides private function from IMAP4 (@imaplib2) def open_socket(self): """open_socket() Open socket choosing first address family available.""" msg = (-1, 'could not open socket') for res in socket.getaddrinfo(self.host, self.port, socket.AF_UNSPEC, socket.SOCK_STREAM): af, socktype, proto, canonname, sa = res try: # use socket of our own, possiblly socksified socket. s = self.socket(af, socktype, proto) except socket.error, msg: continue try: for i in (0, 1): try: s.connect(sa) break except socket.error, msg: if len(msg.args) < 2 or msg.args[0] != errno.EINTR: raise else: raise socket.error(msg) except socket.error, msg: s.close() continue break else: raise socket.error(msg) return s class IMAP4_Tunnel(UsefulIMAPMixIn, IMAP4): """IMAP4 client class over a tunnel Instantiate with: IMAP4_Tunnel(tunnelcmd) tunnelcmd -- shell command to generate the tunnel. 
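    An illustrative (hypothetical) value would be
    'ssh -q mailhost /usr/lib/dovecot/imap', i.e. any command that speaks
    IMAP on stdin/stdout.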
The result will be in PREAUTH stage.""" def __init__(self, tunnelcmd, **kwargs): if "use_socket" in kwargs: self.socket = kwargs['use_socket'] del kwargs['use_socket'] IMAP4.__init__(self, tunnelcmd, **kwargs) def open(self, host, port): """The tunnelcmd comes in on host!""" self.host = host self.process = subprocess.Popen(host, shell=True, close_fds=True, stdin=subprocess.PIPE, stdout=subprocess.PIPE) (self.outfd, self.infd) = (self.process.stdin, self.process.stdout) # imaplib2 polls on this fd self.read_fd = self.infd.fileno() self.set_nonblocking(self.read_fd) def set_nonblocking(self, fd): """Mark fd as nonblocking""" # get the file's current flag settings fl = fcntl.fcntl(fd, fcntl.F_GETFL) # clear non-blocking mode from flags fl = fl & ~os.O_NONBLOCK fcntl.fcntl(fd, fcntl.F_SETFL, fl) def read(self, size): """data = read(size) Read at most 'size' bytes from remote.""" if self.decompressor is None: return os.read(self.read_fd, size) if self.decompressor.unconsumed_tail: data = self.decompressor.unconsumed_tail else: data = os.read(self.read_fd, 8192) return self.decompressor.decompress(data, size) def send(self, data): if self.compressor is not None: data = self.compressor.compress(data) data += self.compressor.flush(zlib.Z_SYNC_FLUSH) self.outfd.write(data) def shutdown(self): self.infd.close() self.outfd.close() self.process.wait() def new_mesg(self, s, tn=None, secs=None): if secs is None: secs = time.time() if tn is None: tn = threading.currentThread().getName() tm = time.strftime('%M:%S', time.localtime(secs)) getglobalui().debug('imap', ' %s.%02d %s %s' % (tm, (secs*100)%100, tn, s)) class WrappedIMAP4_SSL(UsefulIMAPMixIn, IMAP4_SSL): """Improved version of imaplib.IMAP4_SSL overriding select().""" def __init__(self, *args, **kwargs): if "use_socket" in kwargs: self.socket = kwargs['use_socket'] del kwargs['use_socket'] self._fingerprint = kwargs.get('fingerprint', None) if type(self._fingerprint) != type([]): self._fingerprint = [self._fingerprint] if 'fingerprint' in kwargs: del kwargs['fingerprint'] super(WrappedIMAP4_SSL, self).__init__(*args, **kwargs) def open(self, host=None, port=None): if not self.ca_certs and not self._fingerprint: raise OfflineImapError("No CA certificates " "and no server fingerprints configured. " "You must configure at least something, otherwise " "having SSL helps nothing.", OfflineImapError.ERROR.REPO) super(WrappedIMAP4_SSL, self).open(host, port) if self._fingerprint: # compare fingerprints fingerprint = sha1(self.sock.getpeercert(True)).hexdigest() if fingerprint not in self._fingerprint: raise OfflineImapError("Server SSL fingerprint '%s' " "for hostname '%s' " "does not match configured fingerprint(s) %s. " "Please verify and set 'cert_fingerprint' accordingly " "if not set yet."% (fingerprint, host, self._fingerprint), OfflineImapError.ERROR.REPO) class WrappedIMAP4(UsefulIMAPMixIn, IMAP4): """Improved version of imaplib.IMAP4 overriding select().""" def __init__(self, *args, **kwargs): if "use_socket" in kwargs: self.socket = kwargs['use_socket'] del kwargs['use_socket'] IMAP4.__init__(self, *args, **kwargs) def Internaldate2epoch(resp): """Convert IMAP4 INTERNALDATE to UT. 
Returns seconds since the epoch.""" from calendar import timegm mo = InternalDate.match(resp) if not mo: return None mon = Mon2num[mo.group('mon')] zonen = mo.group('zonen') day = int(mo.group('day')) year = int(mo.group('year')) hour = int(mo.group('hour')) min = int(mo.group('min')) sec = int(mo.group('sec')) zoneh = int(mo.group('zoneh')) zonem = int(mo.group('zonem')) # INTERNALDATE timezone must be subtracted to get UT zone = (zoneh*60 + zonem)*60 if zonen == '-': zone = -zone tt = (year, mon, day, hour, min, sec, -1, -1, -1) return timegm(tt) - zone offlineimap-6.6.1/offlineimap/imapserver.py000066400000000000000000001006551264010144500210410ustar00rootroot00000000000000# IMAP server support # Copyright (C) 2002 - 2011 John Goerzen & contributors # # This program is free software; you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation; either version 2 of the License, or # (at your option) any later version. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA from threading import Lock, BoundedSemaphore, Thread, Event, currentThread import hmac import socket import base64 import json import urllib import time import errno from sys import exc_info from socket import gaierror from ssl import SSLError, cert_time_to_seconds from offlineimap import imaplibutil, imaputil, threadutil, OfflineImapError import offlineimap.accounts from offlineimap.ui import getglobalui try: # do we have a recent pykerberos? have_gss = False import kerberos if 'authGSSClientWrap' in dir(kerberos): have_gss = True except ImportError: pass class IMAPServer: """Initializes all variables from an IMAPRepository() instance Various functions, such as acquireconnection() return an IMAP4 object on which we can operate. Public instance variables are: self.: delim The server's folder delimiter. 
Only valid after acquireconnection() """ GSS_STATE_STEP = 0 GSS_STATE_WRAP = 1 def __init__(self, repos): self.ui = getglobalui() self.repos = repos self.config = repos.getconfig() self.preauth_tunnel = repos.getpreauthtunnel() self.transport_tunnel = repos.gettransporttunnel() if self.preauth_tunnel and self.transport_tunnel: raise OfflineImapError('%s: '% repos + 'you must enable precisely one ' 'type of tunnel (preauth or transport), ' 'not both', OfflineImapError.ERROR.REPO) self.tunnel = \ self.preauth_tunnel if self.preauth_tunnel \ else self.transport_tunnel self.username = \ None if self.preauth_tunnel else repos.getuser() self.user_identity = repos.get_remote_identity() self.authmechs = repos.get_auth_mechanisms() self.password = None self.passworderror = None self.goodpassword = None self.usessl = repos.getssl() self.hostname = \ None if self.preauth_tunnel else repos.gethost() self.port = repos.getport() if self.port == None: self.port = 993 if self.usessl else 143 self.sslclientcert = repos.getsslclientcert() self.sslclientkey = repos.getsslclientkey() self.sslcacertfile = repos.getsslcacertfile() if self.sslcacertfile is None: self.__verifycert = None # disable cert verification self.fingerprint = repos.get_ssl_fingerprint() self.sslversion = repos.getsslversion() self.tlslevel = repos.gettlslevel() self.oauth2_refresh_token = repos.getoauth2_refresh_token() self.oauth2_client_id = repos.getoauth2_client_id() self.oauth2_client_secret = repos.getoauth2_client_secret() self.oauth2_request_url = repos.getoauth2_request_url() self.oauth2_access_token = None self.delim = None self.root = None self.maxconnections = repos.getmaxconnections() self.availableconnections = [] self.assignedconnections = [] self.lastowner = {} self.semaphore = BoundedSemaphore(self.maxconnections) self.connectionlock = Lock() self.reference = repos.getreference() self.idlefolders = repos.getidlefolders() self.gss_step = self.GSS_STATE_STEP self.gss_vc = None self.gssapi = False # In order to support proxy connection, we have to override the # default socket instance with our own socksified socket instance. # We add this option to bypass the GFW in China. _account_section = 'Account ' + self.repos.account.name if not self.config.has_option(_account_section, 'proxy'): self.proxied_socket = socket.socket else: proxy = self.config.get(_account_section, 'proxy') # Powered by PySocks. try: import socks proxy_type, host, port = proxy.split(":") port = int(port) socks.setdefaultproxy(getattr(socks, proxy_type), host, port) self.proxied_socket = socks.socksocket except ImportError: self.ui.warn("PySocks not installed, ignoring proxy option.") self.proxied_socket = socket.socket except (AttributeError, ValueError) as e: self.ui.warn("Bad proxy option %s for account %s: %s " "Ignoring proxy option."% (proxy, self.repos.account.name, e)) self.proxied_socket = socket.socket def __getpassword(self): """Returns the server password or None""" if self.goodpassword != None: # use cached good one first return self.goodpassword if self.password != None and self.passworderror == None: return self.password # non-failed preconfigured one # get 1) configured password first 2) fall back to asking via UI self.password = self.repos.getpassword() or \ self.ui.getpass(self.repos.getname(), self.config, self.passworderror) self.passworderror = None return self.password # XXX: is this function used anywhere? def getroot(self): """Returns this server's folder root. 
Can only be called after one or more calls to acquireconnection.""" return self.root def releaseconnection(self, connection, drop_conn=False): """Releases a connection, returning it to the pool. :param drop_conn: If True, the connection will be released and not be reused. This can be used to indicate broken connections.""" if connection is None: return #noop on bad connection self.connectionlock.acquire() self.assignedconnections.remove(connection) # Don't reuse broken connections if connection.Terminate or drop_conn: connection.logout() else: self.availableconnections.append(connection) self.connectionlock.release() self.semaphore.release() def __md5handler(self, response): challenge = response.strip() self.ui.debug('imap', '__md5handler: got challenge %s'% challenge) passwd = self.__getpassword() retval = self.username + ' ' + hmac.new(passwd, challenge).hexdigest() self.ui.debug('imap', '__md5handler: returning %s'% retval) return retval def __loginauth(self, imapobj): """ Basic authentication via LOGIN command.""" self.ui.debug('imap', 'Attempting IMAP LOGIN authentication') imapobj.login(self.username, self.__getpassword()) def __plainhandler(self, response): """Implements SASL PLAIN authentication, RFC 4616, http://tools.ietf.org/html/rfc4616""" authc = self.username passwd = self.__getpassword() authz = '' if self.user_identity != None: authz = self.user_identity NULL = u'\x00' retval = NULL.join((authz, authc, passwd)).encode('utf-8') logsafe_retval = NULL.join((authz, authc, "(passwd hidden for log)")).encode('utf-8') self.ui.debug('imap', '__plainhandler: returning %s' % logsafe_retval) return retval def __xoauth2handler(self, response): if self.oauth2_refresh_token is None: return None if self.oauth2_access_token is None: # need to move these to config # generate new access token params = {} params['client_id'] = self.oauth2_client_id params['client_secret'] = self.oauth2_client_secret params['refresh_token'] = self.oauth2_refresh_token params['grant_type'] = 'refresh_token' self.ui.debug('imap', 'xoauth2handler: url "%s"' % self.oauth2_request_url) self.ui.debug('imap', 'xoauth2handler: params "%s"' % params) response = urllib.urlopen(self.oauth2_request_url, urllib.urlencode(params)).read() resp = json.loads(response) self.ui.debug('imap', 'xoauth2handler: response "%s"' % resp) self.oauth2_access_token = resp['access_token'] self.ui.debug('imap', 'xoauth2handler: access_token "%s"' % self.oauth2_access_token) auth_string = 'user=%s\1auth=Bearer %s\1\1' % (self.username, self.oauth2_access_token) #auth_string = base64.b64encode(auth_string) self.ui.debug('imap', 'xoauth2handler: returning "%s"' % auth_string) return auth_string def __gssauth(self, response): data = base64.b64encode(response) try: if self.gss_step == self.GSS_STATE_STEP: if not self.gss_vc: rc, self.gss_vc = kerberos.authGSSClientInit( 'imap@' + self.hostname) response = kerberos.authGSSClientResponse(self.gss_vc) rc = kerberos.authGSSClientStep(self.gss_vc, data) if rc != kerberos.AUTH_GSS_CONTINUE: self.gss_step = self.GSS_STATE_WRAP elif self.gss_step == self.GSS_STATE_WRAP: rc = kerberos.authGSSClientUnwrap(self.gss_vc, data) response = kerberos.authGSSClientResponse(self.gss_vc) rc = kerberos.authGSSClientWrap( self.gss_vc, response, self.username) response = kerberos.authGSSClientResponse(self.gss_vc) except kerberos.GSSError as err: # Kerberos errored out on us, respond with None to cancel the # authentication self.ui.debug('imap', '%s: %s'% (err[0][0], err[1][0])) return None if not response: response = 
'' return base64.b64decode(response) def __start_tls(self, imapobj): if 'STARTTLS' in imapobj.capabilities and not self.usessl: self.ui.debug('imap', 'Using STARTTLS connection') try: imapobj.starttls() except imapobj.error as e: raise OfflineImapError("Failed to start " "TLS connection: %s"% str(e), OfflineImapError.ERROR.REPO, None, exc_info()[2]) ## All __authn_* procedures are helpers that do authentication. ## They are class methods that take one parameter, IMAP object. ## ## Each function should return True if authentication was ## successful and False if authentication wasn't even tried ## for some reason (but not when IMAP has no such authentication ## capability, calling code checks that). ## ## Functions can also raise exceptions; two types are special ## and will be handled by the calling code: ## ## - imapobj.error means that there was some error that ## comes from imaplib2; ## ## - OfflineImapError means that function detected some ## problem by itself. def __authn_gssapi(self, imapobj): if not have_gss: return False self.connectionlock.acquire() try: imapobj.authenticate('GSSAPI', self.__gssauth) return True except imapobj.error as e: self.gssapi = False raise else: self.gssapi = True kerberos.authGSSClientClean(self.gss_vc) self.gss_vc = None self.gss_step = self.GSS_STATE_STEP finally: self.connectionlock.release() def __authn_cram_md5(self, imapobj): imapobj.authenticate('CRAM-MD5', self.__md5handler) return True def __authn_plain(self, imapobj): imapobj.authenticate('PLAIN', self.__plainhandler) return True def __authn_xoauth2(self, imapobj): imapobj.authenticate('XOAUTH2', self.__xoauth2handler) return True def __authn_login(self, imapobj): # Use LOGIN command, unless LOGINDISABLED is advertized # (per RFC 2595) if 'LOGINDISABLED' in imapobj.capabilities: raise OfflineImapError("IMAP LOGIN is " "disabled by server. Need to use SSL?", OfflineImapError.ERROR.REPO) else: self.__loginauth(imapobj) return True def __authn_helper(self, imapobj): """Authentication machinery for self.acquireconnection(). Raises OfflineImapError() of type ERROR.REPO when there are either fatal problems or no authentications succeeded. If any authentication method succeeds, routine should exit: warnings for failed methods are to be produced in the respective except blocks.""" # Authentication routines, hash keyed by method name # with value that is a tuple with # - authentication function, # - tryTLS flag, # - check IMAP capability flag. auth_methods = { "GSSAPI": (self.__authn_gssapi, False, True), "CRAM-MD5": (self.__authn_cram_md5, True, True), "XOAUTH2": (self.__authn_xoauth2, True, True), "PLAIN": (self.__authn_plain, True, True), "LOGIN": (self.__authn_login, True, False), } # Stack stores pairs of (method name, exception) exc_stack = [] tried_to_authn = False tried_tls = False mechs = self.authmechs # GSSAPI must be tried first: we will probably go TLS after it # and GSSAPI mustn't be tunneled over TLS. if "GSSAPI" in mechs: mechs.remove("GSSAPI") mechs.insert(0, "GSSAPI") for m in mechs: if m not in auth_methods: raise Exception("Bad authentication method %s, " "please, file OfflineIMAP bug" % m) func, tryTLS, check_cap = auth_methods[m] # TLS must be initiated before checking capabilities: # they could have been changed after STARTTLS. 
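            # Worked example of one pass through this loop (hypothetical
            # server): with authmechs = ['GSSAPI', 'PLAIN'] and a server that
            # only advertises AUTH=PLAIN after STARTTLS, GSSAPI is skipped at
            # the capability check below, STARTTLS is issued once before the
            # first TLS-capable method, and __authn_plain() is then attempted;
            # any failure it raises is recorded on exc_stack.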
if tryTLS and not tried_tls: tried_tls = True self.__start_tls(imapobj) if check_cap: cap = "AUTH=" + m if cap not in imapobj.capabilities: continue tried_to_authn = True self.ui.debug('imap', u'Attempting ' '%s authentication'% m) try: if func(imapobj): return except (imapobj.error, OfflineImapError) as e: self.ui.warn('%s authentication failed: %s'% (m, e)) exc_stack.append((m, e)) if len(exc_stack): msg = "\n\t".join(map( lambda x: ": ".join((x[0], str(x[1]))), exc_stack )) raise OfflineImapError("All authentication types " "failed:\n\t%s"% msg, OfflineImapError.ERROR.REPO) if not tried_to_authn: methods = ", ".join(map( lambda x: x[5:], filter(lambda x: x[0:5] == "AUTH=", imapobj.capabilities) )) raise OfflineImapError(u"Repository %s: no supported " "authentication mechanisms found; configured %s, " "server advertises %s"% (self.repos, ", ".join(self.authmechs), methods), OfflineImapError.ERROR.REPO) # XXX: move above, closer to releaseconnection() def acquireconnection(self): """Fetches a connection from the pool, making sure to create a new one if needed, to obey the maximum connection limits, etc. Opens a connection to the server and returns an appropriate object.""" self.semaphore.acquire() self.connectionlock.acquire() curThread = currentThread() imapobj = None if len(self.availableconnections): # One is available. # Try to find one that previously belonged to this thread # as an optimization. Start from the back since that's where # they're popped on. imapobj = None for i in range(len(self.availableconnections) - 1, -1, -1): tryobj = self.availableconnections[i] if self.lastowner[tryobj] == curThread.ident: imapobj = tryobj del(self.availableconnections[i]) break if not imapobj: imapobj = self.availableconnections[0] del(self.availableconnections[0]) self.assignedconnections.append(imapobj) self.lastowner[imapobj] = curThread.ident self.connectionlock.release() return imapobj self.connectionlock.release() # Release until need to modify data # Must be careful here that if we fail we should bail out gracefully # and release locks / threads so that the next attempt can try... success = 0 try: while not success: # Generate a new connection. if self.tunnel: self.ui.connecting('tunnel', self.tunnel) imapobj = imaplibutil.IMAP4_Tunnel( self.tunnel, timeout=socket.getdefaulttimeout(), use_socket=self.proxied_socket, ) success = 1 elif self.usessl: self.ui.connecting(self.hostname, self.port) imapobj = imaplibutil.WrappedIMAP4_SSL( self.hostname, self.port, self.sslclientkey, self.sslclientcert, self.sslcacertfile, self.__verifycert, self.sslversion, timeout=socket.getdefaulttimeout(), fingerprint=self.fingerprint, use_socket=self.proxied_socket, tls_level=self.tlslevel, ) else: self.ui.connecting(self.hostname, self.port) imapobj = imaplibutil.WrappedIMAP4( self.hostname, self.port, timeout=socket.getdefaulttimeout(), use_socket=self.proxied_socket, ) if not self.preauth_tunnel: try: self.__authn_helper(imapobj) self.goodpassword = self.password success = 1 except OfflineImapError as e: self.passworderror = str(e) raise # Enable compression if self.repos.getconfboolean('usecompression', 0): imapobj.enable_compression() # update capabilities after login, e.g. gmail serves different ones typ, dat = imapobj.capability() if dat != [None]: imapobj.capabilities = tuple(dat[-1].upper().split()) if self.delim == None: listres = imapobj.list(self.reference, '""')[1] if listres == [None] or listres == None: # Some buggy IMAP servers do not respond well to LIST "" "" # Work around them. 
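# Fall back to a wildcard LIST; servers that return nothing for
# LIST <reference> "" will usually still answer LIST <reference> "*".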
listres = imapobj.list(self.reference, '"*"')[1] if listres == [None] or listres == None: # No Folders were returned. This occurs, e.g. if the # 'reference' prefix does not exist on the mail # server. Raise exception. err = "Server '%s' returned no folders in '%s'"% \ (self.repos.getname(), self.reference) self.ui.warn(err) raise Exception(err) self.delim, self.root = \ imaputil.imapsplit(listres[0])[1:] self.delim = imaputil.dequote(self.delim) self.root = imaputil.dequote(self.root) with self.connectionlock: self.assignedconnections.append(imapobj) self.lastowner[imapobj] = curThread.ident return imapobj except Exception as e: """If we are here then we did not succeed in getting a connection - we should clean up and then re-raise the error...""" self.semaphore.release() severity = OfflineImapError.ERROR.REPO if type(e) == gaierror: #DNS related errors. Abort Repo sync #TODO: special error msg for e.errno == 2 "Name or service not known"? reason = "Could not resolve name '%s' for repository "\ "'%s'. Make sure you have configured the ser"\ "ver name correctly and that you are online."%\ (self.hostname, self.repos) raise OfflineImapError(reason, severity), None, exc_info()[2] elif isinstance(e, SSLError) and e.errno == errno.EPERM: # SSL unknown protocol error # happens e.g. when connecting via SSL to a non-SSL service if self.port != 993: reason = "Could not connect via SSL to host '%s' and non-s"\ "tandard ssl port %d configured. Make sure you connect"\ " to the correct port."% (self.hostname, self.port) else: reason = "Unknown SSL protocol connecting to host '%s' for "\ "repository '%s'. OpenSSL responded:\n%s"\ % (self.hostname, self.repos, e) raise OfflineImapError(reason, severity), None, exc_info()[2] elif isinstance(e, socket.error) and e.args[0] == errno.ECONNREFUSED: # "Connection refused", can be a non-existing port, or an unauthorized # webproxy (open WLAN?) reason = "Connection to host '%s:%d' for repository '%s' was "\ "refused. Make sure you have the right host and port "\ "configured and that you are actually able to access the "\ "network."% (self.hostname, self.port, self.repos) raise OfflineImapError(reason, severity), None, exc_info()[2] # Could not acquire connection to the remote; # socket.error(last_error) raised if str(e)[:24] == "can't open socket; error": raise OfflineImapError("Could not connect to remote server '%s' "\ "for repository '%s'. Remote does not answer." % (self.hostname, self.repos), OfflineImapError.ERROR.REPO), None, exc_info()[2] else: # re-raise all other errors raise def connectionwait(self): """Waits until there is a connection available. Note that between the time that a connection becomes available and the time it is requested, another thread may have grabbed it. This function is mainly present as a way to avoid spawning thousands of threads to copy messages, then have them all wait for 3 available connections. It's OK if we have maxconnections + 1 or 2 threads, which is what this will help us do.""" self.semaphore.acquire() self.semaphore.release() def close(self): # Make sure I own all the semaphores. Let the threads finish # their stuff. This is a blocking method. with self.connectionlock: # first, wait till all connections had been released. # TODO: won't work IMHO, as releaseconnection() also # requires the connectionlock, leading to a potential # deadlock! Audit & check! 
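# semaphorereset() waits until every slot of the connection semaphore can
# be claimed, i.e. until no other thread appears to hold a connection any
# more (hence the deadlock concern noted in the TODO above).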
threadutil.semaphorereset(self.semaphore, self.maxconnections) for imapobj in self.assignedconnections + self.availableconnections: imapobj.logout() self.assignedconnections = [] self.availableconnections = [] self.lastowner = {} # reset kerberos state self.gss_step = self.GSS_STATE_STEP self.gss_vc = None self.gssapi = False def keepalive(self, timeout, event): """Sends a NOOP to each connection recorded. It will wait a maximum of timeout seconds between doing this, and will continue to do so until the Event object as passed is true. This method is expected to be invoked in a separate thread, which should be join()'d after the event is set.""" self.ui.debug('imap', 'keepalive thread started') while not event.isSet(): self.connectionlock.acquire() numconnections = len(self.assignedconnections) + \ len(self.availableconnections) self.connectionlock.release() threads = [] for i in range(numconnections): self.ui.debug('imap', 'keepalive: processing connection %d of %d'% (i, numconnections)) if len(self.idlefolders) > i: # IDLE thread idler = IdleThread(self, self.idlefolders[i]) else: # NOOP thread idler = IdleThread(self) idler.start() threads.append(idler) self.ui.debug('imap', 'keepalive: waiting for timeout') event.wait(timeout) self.ui.debug('imap', 'keepalive: after wait') for idler in threads: # Make sure all the commands have completed. idler.stop() idler.join() self.ui.debug('imap', 'keepalive: all threads joined') self.ui.debug('imap', 'keepalive: event is set; exiting') return def __verifycert(self, cert, hostname): """Verify that cert (in socket.getpeercert() format) matches hostname. CRLs are not handled. Returns error message if any problems are found and None on success.""" errstr = "CA Cert verifying failed: " if not cert: return ('%s no certificate received'% errstr) dnsname = hostname.lower() certnames = [] # cert expired? notafter = cert.get('notAfter') if notafter: if time.time() >= cert_time_to_seconds(notafter): return '%s certificate expired %s'% (errstr, notafter) # First read commonName for s in cert.get('subject', []): key, value = s[0] if key == 'commonName': certnames.append(value.lower()) if len(certnames) == 0: return ('%s no commonName found in certificate'% errstr) # Then read subjectAltName for key, value in cert.get('subjectAltName', []): if key == 'DNS': certnames.append(value.lower()) # And finally try to match hostname with one of these names for certname in certnames: if (certname == dnsname or '.' in dnsname and certname == '*.' + dnsname.split('.', 1)[1]): return None return ('%s no matching domain name found in certificate'% errstr) class IdleThread(object): def __init__(self, parent, folder=None): """If invoked without 'folder', perform a NOOP and wait for self.stop() to be called. If invoked with folder, switch to IDLE mode and synchronize once we have a new message""" self.parent = parent self.folder = folder self.stop_sig = Event() self.ui = getglobalui() if folder is None: self.thread = Thread(target=self.noop) else: self.thread = Thread(target=self.__idle) self.thread.setDaemon(1) def start(self): self.thread.start() def stop(self): self.stop_sig.set() def join(self): self.thread.join() def noop(self): # TODO: AFAIK this is not optimal, we will send a NOOP on one # random connection (ie not enough to keep all connections # open). In case we do the noop multiple times, we can well use # the same connection every time, as we get a random one. This # function should IMHO send a noop on ALL available connections # to the server. 
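# One possible (untested) sketch for the TODO above, using the pool
# attributes defined on IMAPServer: snapshot the idle connections under
# the lock and NOOP each of them instead of one random connection:
#
#     with self.parent.connectionlock:
#         idle_conns = list(self.parent.availableconnections)
#     for conn in idle_conns:
#         try:
#             conn.noop()
#         except conn.abort:
#             pass  # dropped connection; the acquire/release path cleans up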
imapobj = self.parent.acquireconnection() try: imapobj.noop() except imapobj.abort: self.ui.warn('Attempting NOOP on dropped connection %s'% imapobj.identifier) self.parent.releaseconnection(imapobj, True) imapobj = None finally: if imapobj: self.parent.releaseconnection(imapobj) self.stop_sig.wait() # wait until we are supposed to quit def __dosync(self): remoterepos = self.parent.repos account = remoterepos.account localrepos = account.localrepos remoterepos = account.remoterepos statusrepos = account.statusrepos remotefolder = remoterepos.getfolder(self.folder) hook = account.getconf('presynchook', '') account.callhook(hook) offlineimap.accounts.syncfolder(account, remotefolder, quick=False) hook = account.getconf('postsynchook', '') account.callhook(hook) ui = getglobalui() ui.unregisterthread(currentThread()) #syncfolder registered the thread def __idle(self): """Invoke IDLE mode until timeout or self.stop() is invoked.""" def callback(args): """IDLE callback function invoked by imaplib2. This is invoked when a) The IMAP server tells us something while in IDLE mode, b) we get an Exception (e.g. on dropped connections, or c) the standard imaplib IDLE timeout of 29 minutes kicks in.""" result, cb_arg, exc_data = args if exc_data is None and not self.stop_sig.isSet(): # No Exception, and we are not supposed to stop: self.needsync = True self.stop_sig.set() # Continue to sync. while not self.stop_sig.isSet(): self.needsync = False success = False # Successfully selected FOLDER? while not success: imapobj = self.parent.acquireconnection() try: imapobj.select(self.folder) except OfflineImapError as e: if e.severity == OfflineImapError.ERROR.FOLDER_RETRY: # Connection closed, release connection and retry. self.ui.error(e, exc_info()[2]) self.parent.releaseconnection(imapobj, True) elif e.severity == OfflineImapError.ERROR.FOLDER: # Just continue the process on such error for now. self.ui.error(e, exc_info()[2]) else: # Stops future attempts to sync this account. raise else: success = True if "IDLE" in imapobj.capabilities: imapobj.idle(callback=callback) else: self.ui.warn("IMAP IDLE not supported on server '%s'." "Sleep until next refresh cycle."% imapobj.identifier) imapobj.noop() self.stop_sig.wait() # self.stop() or IDLE callback are invoked. try: # End IDLE mode with noop, imapobj can point to a dropped conn. imapobj.noop() except imapobj.abort: self.ui.warn('Attempting NOOP on dropped connection %s'% imapobj.identifier) self.parent.releaseconnection(imapobj, True) else: self.parent.releaseconnection(imapobj) if self.needsync: # Here not via self.stop, but because IDLE responded. Do # another round and invoke actual syncing. self.stop_sig.clear() self.__dosync() offlineimap-6.6.1/offlineimap/imaputil.py000066400000000000000000000277611264010144500205160ustar00rootroot00000000000000# IMAP utility module # Copyright (C) 2002-2015 John Goerzen & contributors # # This program is free software; you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation; either version 2 of the License, or # (at your option) any later version. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. 
# # You should have received a copy of the GNU General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA import re import string from offlineimap.ui import getglobalui ## Globals # Message headers that use space as the separator (for label storage) SPACE_SEPARATED_LABEL_HEADERS = ('X-Label', 'Keywords') # Find the modified UTF-7 shifts of an international mailbox name. MUTF7_SHIFT_RE = re.compile(r'&[^-]*-|\+') def __debug(*args): msg = [] for arg in args: msg.append(str(arg)) getglobalui().debug('imap', " ".join(msg)) def dequote(s): """Takes string which may or may not be quoted and unquotes it. It only considers double quotes. This function does NOT consider parenthised lists to be quoted.""" if s and s.startswith('"') and s.endswith('"'): s = s[1:-1] # Strip off the surrounding quotes. s = s.replace('\\"', '"') s = s.replace('\\\\', '\\') return s def quote(s): """Takes an unquoted string and quotes it. It only adds double quotes. This function does NOT consider parenthised lists to be quoted.""" s = s.replace('"', '\\"') s = s.replace('\\', '\\\\') return '"%s"'% s def flagsplit(s): """Converts a string of IMAP flags to a list :returns: E.g. '(\\Draft \\Deleted)' returns ['\\Draft','\\Deleted']. (FLAGS (\\Seen Old) UID 4807) returns ['FLAGS,'(\\Seen Old)','UID', '4807'] """ if s[0] != '(' or s[-1] != ')': raise ValueError("Passed s '%s' is not a flag list"% s) return imapsplit(s[1:-1]) def __options2hash(list): """convert list [1,2,3,4,5,6] to {1:2, 3:4, 5:6}""" # effectively this does dict(zip(l[::2],l[1::2])), however # measurements seemed to have indicated that the manual variant is # faster for mosly small lists. retval = {} counter = 0 while (counter < len(list)): retval[list[counter]] = list[counter + 1] counter += 2 __debug("__options2hash returning:", retval) return retval def flags2hash(flags): """Converts IMAP response string from eg IMAP4.fetch() to a hash. E.g. '(FLAGS (\\Seen Old) UID 4807)' leads to {'FLAGS': '(\\Seen Old)', 'UID': '4807'}""" return __options2hash(flagsplit(flags)) def imapsplit(imapstring): """Takes a string from an IMAP conversation and returns a list containing its components. One example string is: (\\HasNoChildren) "." "INBOX.Sent" The result from parsing this will be: ['(\\HasNoChildren)', '"."', '"INBOX.Sent"']""" if not isinstance(imapstring, basestring): __debug("imapsplit() got a non-string input; working around.") # Sometimes, imaplib will throw us a tuple if the input # contains a literal. See Python bug # #619732 at https://sourceforge.net/tracker/index.php?func=detail&aid=619732&group_id=5470&atid=105470 # One example is: # result[0] = '() "\\\\" Admin' # result[1] = ('() "\\\\" {19}', 'Folder\\2') # # This function will effectively get result[0] or result[1], so # if we get the result[1] version, we need to parse apart the tuple # and figure out what to do with it. Each even-numbered # part of it should end with the {} number, and each odd-numbered # part should be directly a part of the result. We'll # artificially quote it to help out. retval = [] for i in range(len(imapstring)): if i % 2: # Odd: quote then append. arg = imapstring[i] # Quote code lifted from imaplib arg = arg.replace('\\', '\\\\') arg = arg.replace('"', '\\"') arg = '"%s"' % arg __debug("imapsplit() non-string [%d]: Appending %s"% (i, arg)) retval.append(arg) else: # Even -- we have a string that ends with a literal # size specifier. 
We need to strip off that, then run # what remains through the regular imapsplit parser. # Recursion to the rescue. arg = imapstring[i] arg = re.sub('\{\d+\}$', '', arg) __debug("imapsplit() non-string [%d]: Feeding %s to recursion"%\ (i, arg)) retval.extend(imapsplit(arg)) __debug("imapsplit() non-string: returning %s" % str(retval)) return retval workstr = imapstring.strip() retval = [] while len(workstr): # handle parenthized fragments (...()...) if workstr[0] == '(': rparenc = 1 # count of right parenthesis to match rpareni = 1 # position to examine while rparenc: # Find the end of the group. if workstr[rpareni] == ')': # end of a group rparenc -= 1 elif workstr[rpareni] == '(': # start of a group rparenc += 1 rpareni += 1 # Move to next character. parenlist = workstr[0:rpareni] workstr = workstr[rpareni:].lstrip() retval.append(parenlist) elif workstr[0] == '"': # quoted fragments '"...\"..."' (quoted, rest) = __split_quoted(workstr) retval.append(quoted) workstr = rest else: splits = string.split(workstr, maxsplit = 1) splitslen = len(splits) # The unquoted word is splits[0]; the remainder is splits[1] if splitslen == 2: # There's an unquoted word, and more string follows. retval.append(splits[0]) workstr = splits[1] # split will have already lstripped it continue elif splitslen == 1: # We got a last unquoted word, but nothing else retval.append(splits[0]) # Nothing remains. workstr would be '' break elif splitslen == 0: # There was not even an unquoted word. break return retval flagmap = [('\\Seen', 'S'), ('\\Answered', 'R'), ('\\Flagged', 'F'), ('\\Deleted', 'T'), ('\\Draft', 'D')] def flagsimap2maildir(flagstring): """Convert string '(\\Draft \\Deleted)' into a flags set(DR).""" retval = set() imapflaglist = flagstring[1:-1].split() for imapflag, maildirflag in flagmap: if imapflag in imapflaglist: retval.add(maildirflag) return retval def flagsimap2keywords(flagstring): """Convert string '(\\Draft \\Deleted somekeyword otherkeyword)' into a keyword set (somekeyword otherkeyword).""" imapflagset = set(flagstring[1:-1].split()) serverflagset = set([flag for (flag, c) in flagmap]) return imapflagset - serverflagset def flagsmaildir2imap(maildirflaglist): """Convert set of flags ([DR]) into a string '(\\Deleted \\Draft)'.""" retval = [] for imapflag, maildirflag in flagmap: if maildirflag in maildirflaglist: retval.append(imapflag) return '(' + ' '.join(sorted(retval)) + ')' def uid_sequence(uidlist): """Collapse UID lists into shorter sequence sets [1,2,3,4,5,10,12,13] will return "1:5,10,12:13". This function sorts the list, and only collapses if subsequent entries form a range. :returns: The collapsed UID list as string.""" def getrange(start, end): if start == end: return(str(start)) return "%s:%s"% (start, end) if not len(uidlist): return '' # Empty list, return start, end = None, None retval = [] # Force items to be longs and sort them sorted_uids = sorted(map(int, uidlist)) for item in iter(sorted_uids): item = int(item) if start == None: # First item start, end = item, item elif item == end + 1: # Next item in a range end = item else: # Starting a new range retval.append(getrange(start, end)) start, end = item, item retval.append(getrange(start, end)) # Add final range/item return ",".join(retval) def __split_quoted(s): """Looks for the ending quote character in the string that starts with quote character, splitting out quoted component and the rest of the string (without possible space between these two parts. First character of the string is taken to be quote character. 
Examples: - "this is \" a test" (\\None) => ("this is \" a test", (\\None)) - "\\" => ("\\", ) """ if len(s) == 0: return ('', '') q = quoted = s[0] rest = s[1:] while True: next_q = rest.find(q) if next_q == -1: raise ValueError("can't find ending quote '%s' in '%s'"% (q, s)) # If quote is preceeded by even number of backslashes, # then it is the ending quote, otherwise the quote # character is escaped by backslash, so we should # continue our search. is_escaped = False i = next_q - 1 while i >= 0 and rest[i] == '\\': i -= 1 is_escaped = not is_escaped quoted += rest[0:next_q + 1] rest = rest[next_q + 1:] if not is_escaped: return (quoted, rest.lstrip()) def format_labels_string(header, labels): """Formats labels for embedding into a message, with format according to header name. Headers from SPACE_SEPARATED_LABEL_HEADERS keep space-separated list of labels, the rest uses comma (',') as the separator. Also see parse_labels_string() and modify it accordingly if logics here gets changed.""" if header in SPACE_SEPARATED_LABEL_HEADERS: sep = ' ' else: sep = ',' return sep.join(labels) def parse_labels_string(header, labels_str): """Parses a string into a set of labels, with a format according to the name of the header. See __format_labels_string() for explanation on header handling and keep these two functions synced with each other. TODO: add test to ensure that - format_labels_string * parse_labels_string is unity and - parse_labels_string * format_labels_string is unity """ if header in SPACE_SEPARATED_LABEL_HEADERS: sep = ' ' else: sep = ',' labels = labels_str.strip().split(sep) return set([l.strip() for l in labels if l.strip()]) def labels_from_header(header_name, header_value): """Helper that builds label set from the corresponding header value. Arguments: - header_name: name of the header that keeps labels; - header_value: value of the said header, can be None Returns: set of labels parsed from the header (or empty set). """ if header_value: labels = parse_labels_string(header_name, header_value) else: labels = set() return labels def decode_mailbox_name(name): """Decodes a modified UTF-7 mailbox name. If the string cannot be decoded, it is returned unmodified. See RFC 3501, sec. 5.1.3. Arguments: - name: string, possibly encoded with modified UTF-7 Returns: decoded UTF-8 string. """ def demodify(m): s = m.group() if s == '+': return '+-' return '+' + s[1:-1].replace(',', '/') + '-' ret = MUTF7_SHIFT_RE.sub(demodify, name) try: return ret.decode('utf-7').encode('utf-8') except UnicodeEncodeError: return name offlineimap-6.6.1/offlineimap/init.py000066400000000000000000000420371264010144500176260ustar00rootroot00000000000000# OfflineIMAP initialization code # Copyright (C) 2002-2015 John Goerzen & contributors # # This program is free software; you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation; either version 2 of the License, or # (at your option) any later version. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. 
# # You should have received a copy of the GNU General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA import os import sys import threading import offlineimap.imaplib2 as imaplib import signal import socket import logging from optparse import OptionParser import offlineimap from offlineimap import accounts, threadutil, syncmaster from offlineimap import globals from offlineimap.ui import UI_LIST, setglobalui, getglobalui from offlineimap.CustomConfig import CustomConfigParser from offlineimap.utils import stacktrace import traceback import collections class OfflineImap: """The main class that encapsulates the high level use of OfflineImap. To invoke OfflineImap you would call it with:: oi = OfflineImap() oi.run() """ def run(self): """Parse the commandline and invoke everything""" # next line also sets self.config and self.ui options, args = self.__parse_cmd_options() if options.diagnostics: self.__serverdiagnostics(options) else: return self.__sync(options) def __parse_cmd_options(self): parser = OptionParser(version=offlineimap.__bigversion__, description="%s.\n\n%s" % (offlineimap.__copyright__, offlineimap.__license__)) parser.add_option("--dry-run", action="store_true", dest="dryrun", default=False, help="dry run mode") parser.add_option("--info", action="store_true", dest="diagnostics", default=False, help="output information on the configured email repositories") parser.add_option("-1", action="store_true", dest="singlethreading", default=False, help="(the number one) disable all multithreading operations") parser.add_option("-P", dest="profiledir", metavar="DIR", help="sets OfflineIMAP into profile mode.") parser.add_option("-a", dest="accounts", metavar="account1[,account2[,...]]", help="list of accounts to sync") parser.add_option("-c", dest="configfile", metavar="FILE", default=None, help="specifies a configuration file to use") parser.add_option("-d", dest="debugtype", metavar="type1[,type2[,...]]", help="enables debugging for OfflineIMAP " " (types: imap, maildir, thread)") parser.add_option("-l", dest="logfile", metavar="FILE", help="log to FILE") parser.add_option("-s", action="store_true", dest="syslog", default=False, help="log to syslog") parser.add_option("-f", dest="folders", metavar="folder1[,folder2[,...]]", help="only sync the specified folders") parser.add_option("-k", dest="configoverride", action="append", metavar="[section:]option=value", help="override configuration file option") parser.add_option("-o", action="store_true", dest="runonce", default=False, help="run only once (ignore autorefresh)") parser.add_option("-q", action="store_true", dest="quick", default=False, help="run only quick synchronizations (don't update flags)") parser.add_option("-u", dest="interface", help="specifies an alternative user interface" " (quiet, basic, syslog, ttyui, blinkenlights, machineui)") (options, args) = parser.parse_args() globals.set_options (options) #read in configuration file if not options.configfile: # Try XDG location, then fall back to ~/.offlineimaprc xdg_var = 'XDG_CONFIG_HOME' if not xdg_var in os.environ or not os.environ[xdg_var]: xdg_home = os.path.expanduser('~/.config') else: xdg_home = os.environ[xdg_var] options.configfile = os.path.join(xdg_home, "offlineimap", "config") if not os.path.exists(options.configfile): options.configfile = os.path.expanduser('~/.offlineimaprc') configfilename = options.configfile else: configfilename = 
os.path.expanduser(options.configfile) config = CustomConfigParser() if not os.path.exists(configfilename): # TODO, initialize and make use of chosen ui for logging logging.error(" *** Config file '%s' does not exist; aborting!"% configfilename) sys.exit(1) config.read(configfilename) #profile mode chosen? if options.profiledir: if not options.singlethreading: # TODO, make use of chosen ui for logging logging.warn("Profile mode: Forcing to singlethreaded.") options.singlethreading = True if os.path.exists(options.profiledir): # TODO, make use of chosen ui for logging logging.warn("Profile mode: Directory '%s' already exists!"% options.profiledir) else: os.mkdir(options.profiledir) threadutil.ExitNotifyThread.set_profiledir(options.profiledir) # TODO, make use of chosen ui for logging logging.warn("Profile mode: Potentially large data will be " "created in '%s'"% options.profiledir) #override a config value if options.configoverride: for option in options.configoverride: (key, value) = option.split('=', 1) if ':' in key: (secname, key) = key.split(':', 1) section = secname.replace("_", " ") else: section = "general" config.set(section, key, value) #which ui to use? cmd line option overrides config file ui_type = config.getdefault('general', 'ui', 'ttyui') if options.interface != None: ui_type = options.interface if '.' in ui_type: #transform Curses.Blinkenlights -> Blinkenlights ui_type = ui_type.split('.')[-1] # TODO, make use of chosen ui for logging logging.warning('Using old interface name, consider using one ' 'of %s'% ', '.join(UI_LIST.keys())) if options.diagnostics: ui_type = 'basic' # enforce basic UI for --info # dry-run? Set [general]dry-run=True if options.dryrun: dryrun = config.set('general', 'dry-run', 'True') config.set_if_not_exists('general', 'dry-run', 'False') try: # create the ui class self.ui = UI_LIST[ui_type.lower()](config) except KeyError: logging.error("UI '%s' does not exist, choose one of: %s"% \ (ui_type, ', '.join(UI_LIST.keys()))) sys.exit(1) setglobalui(self.ui) #set up additional log files if options.logfile: self.ui.setlogfile(options.logfile) #set up syslog if options.syslog: self.ui.setup_sysloghandler() #welcome blurb self.ui.init_banner() if options.debugtype: self.ui.logger.setLevel(logging.DEBUG) if options.debugtype.lower() == 'all': options.debugtype = 'imap,maildir,thread' #force single threading? if not ('thread' in options.debugtype.split(',') \ and not options.singlethreading): self.ui._msg("Debug mode: Forcing to singlethreaded.") options.singlethreading = True debugtypes = options.debugtype.split(',') + [''] for dtype in debugtypes: dtype = dtype.strip() self.ui.add_debug(dtype) if dtype.lower() == u'imap': imaplib.Debug = 5 if options.runonce: # Must kill the possible default option if config.has_option('DEFAULT', 'autorefresh'): config.remove_option('DEFAULT', 'autorefresh') # FIXME: spaghetti code alert! for section in accounts.getaccountlist(config): config.remove_option('Account ' + section, "autorefresh") if options.quick: for section in accounts.getaccountlist(config): config.set('Account ' + section, "quick", '-1') #custom folder list specified? 
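# The -f option is translated into per-repository overrides below: e.g.
# "-f INBOX,Sent" becomes folderfilter = "lambda f: f in ['INBOX', 'Sent']"
# on every account's remote repository section, and folderincludes is
# cleared so nothing outside that list sneaks back in.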
if options.folders: foldernames = options.folders.split(",") folderfilter = "lambda f: f in %s"% foldernames folderincludes = "[]" for accountname in accounts.getaccountlist(config): account_section = 'Account ' + accountname remote_repo_section = 'Repository ' + \ config.get(account_section, 'remoterepository') config.set(remote_repo_section, "folderfilter", folderfilter) config.set(remote_repo_section, "folderincludes", folderincludes) if options.logfile: sys.stderr = self.ui.logfile socktimeout = config.getdefaultint("general", "socktimeout", 0) if socktimeout > 0: socket.setdefaulttimeout(socktimeout) threadutil.initInstanceLimit('ACCOUNTLIMIT', config.getdefaultint('general', 'maxsyncaccounts', 1)) for reposname in config.getsectionlist('Repository'): for instancename in ["FOLDER_" + reposname, "MSGCOPY_" + reposname]: if options.singlethreading: threadutil.initInstanceLimit(instancename, 1) else: threadutil.initInstanceLimit(instancename, config.getdefaultint('Repository ' + reposname, 'maxconnections', 2)) self.config = config return (options, args) def __dumpstacks(self, context=1, sighandler_deep=2): """ Signal handler: dump a stack trace for each existing thread.""" currentThreadId = threading.currentThread().ident def unique_count(l): d = collections.defaultdict(lambda: 0) for v in l: d[tuple(v)] += 1 return list((k, v) for k, v in d.iteritems()) stack_displays = [] for threadId, stack in sys._current_frames().items(): stack_display = [] for filename, lineno, name, line in traceback.extract_stack(stack): stack_display.append(' File: "%s", line %d, in %s' % (filename, lineno, name)) if line: stack_display.append(" %s" % (line.strip())) if currentThreadId == threadId: stack_display = stack_display[:- (sighandler_deep * 2)] stack_display.append(' => Stopped to handle current signal. ') stack_displays.append(stack_display) stacks = unique_count(stack_displays) self.ui.debug('thread', "** Thread List:\n") for stack, times in stacks: if times == 1: msg = "%s Thread is at:\n%s\n" else: msg = "%s Threads are at:\n%s\n" self.ui.debug('thread', msg % (times, '\n'.join(stack[- (context * 2):]))) self.ui.debug('thread', "Dumped a total of %d Threads." % len(sys._current_frames().keys())) def __sync(self, options): """Invoke the correct single/multithread syncing self.config is supposed to have been correctly initialized already.""" try: pidfd = open(self.config.getmetadatadir() + "/pid", "w") pidfd.write(str(os.getpid()) + "\n") pidfd.close() except: pass try: # Honor CLI --account option, only. # Accounts to sync are put into syncaccounts variable. activeaccounts = self.config.get("general", "accounts") if options.accounts: activeaccounts = options.accounts activeaccounts = activeaccounts.replace(" ", "") activeaccounts = activeaccounts.split(",") allaccounts = accounts.AccountHashGenerator(self.config) syncaccounts = [] for account in activeaccounts: if account not in allaccounts: if len(allaccounts) == 0: errormsg = "The account '%s' does not exist because no" \ " accounts are defined!"% account else: errormsg = "The account '%s' does not exist. 
Valid ac" \ "counts are: %s"% \ (account, ", ".join(allaccounts.keys())) self.ui.terminate(1, errormsg=errormsg) if account not in syncaccounts: syncaccounts.append(account) def sig_handler(sig, frame): if sig == signal.SIGUSR1: # tell each account to stop sleeping accounts.Account.set_abort_event(self.config, 1) elif sig == signal.SIGUSR2: # tell each account to stop looping getglobalui().warn("Terminating after this sync...") accounts.Account.set_abort_event(self.config, 2) elif sig in (signal.SIGTERM, signal.SIGINT, signal.SIGHUP): # tell each account to ABORT ASAP (ctrl-c) getglobalui().warn("Terminating NOW (this may "\ "take a few seconds)...") accounts.Account.set_abort_event(self.config, 3) if 'thread' in self.ui.debuglist: self.__dumpstacks(5) elif sig == signal.SIGQUIT: stacktrace.dump(sys.stderr) os.abort() signal.signal(signal.SIGHUP, sig_handler) signal.signal(signal.SIGUSR1, sig_handler) signal.signal(signal.SIGUSR2, sig_handler) signal.signal(signal.SIGTERM, sig_handler) signal.signal(signal.SIGINT, sig_handler) signal.signal(signal.SIGQUIT, sig_handler) #various initializations that need to be performed: offlineimap.mbnames.init(self.config, syncaccounts) if options.singlethreading: #singlethreaded self.__sync_singlethreaded(syncaccounts) else: # multithreaded t = threadutil.ExitNotifyThread(target=syncmaster.syncitall, name='Sync Runner', kwargs = {'accounts': syncaccounts, 'config': self.config}) t.start() threadutil.exitnotifymonitorloop(threadutil.threadexited) if not options.dryrun: offlineimap.mbnames.write(True) self.ui.terminate() return 0 except (SystemExit): raise except Exception as e: self.ui.error(e) self.ui.terminate() return 1 def __sync_singlethreaded(self, accs): """Executed if we do not want a separate syncmaster thread :param accs: A list of accounts that should be synced """ for accountname in accs: account = offlineimap.accounts.SyncableAccount(self.config, accountname) threading.currentThread().name = "Account sync %s"% accountname account.syncrunner() def __serverdiagnostics(self, options): activeaccounts = self.config.get("general", "accounts") if options.accounts: activeaccounts = options.accounts activeaccounts = activeaccounts.split(",") allaccounts = accounts.AccountListGenerator(self.config) for account in allaccounts: if account.name not in activeaccounts: continue account.serverdiagnostics() offlineimap-6.6.1/offlineimap/localeval.py000066400000000000000000000032261264010144500206220ustar00rootroot00000000000000"""Eval python code with global namespace of a python source file.""" # Copyright (C) 2002-2014 John Goerzen & contributors # # This program is free software; you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation; either version 2 of the License, or # (at your option) any later version. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. 
# # You should have received a copy of the GNU General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA import imp try: import errno except: pass class LocalEval: """Here is a powerfull but very dangerous option, of course.""" def __init__(self, path=None): self.namespace = {} if path is not None: # FIXME: limit opening files owned by current user with rights set # to fixed mode 644. foo = open(path, 'r') module = imp.load_module( '', foo, path, ('', 'r', imp.PY_SOURCE)) for attr in dir(module): self.namespace[attr] = getattr(module, attr) def eval(self, text, namespace=None): names = {} names.update(self.namespace) if namespace is not None: names.update(namespace) return eval(text, names) offlineimap-6.6.1/offlineimap/mbnames.py000066400000000000000000000071341264010144500203040ustar00rootroot00000000000000# Mailbox name generator # # Copyright (C) 2002-2015 John Goerzen & contributors # # This program is free software; you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation; either version 2 of the License, or # (at your option) any later version. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA import os.path import re # for folderfilter from threading import Lock boxes = {} localroots = {} config = None accounts = None mblock = Lock() def init(conf, accts): global config, accounts config = conf accounts = accts def add(accountname, foldername, localfolders): if not accountname in boxes: boxes[accountname] = [] localroots[accountname] = localfolders if not foldername in boxes[accountname]: boxes[accountname].append(foldername) def write(allcomplete): incremental = config.getdefaultboolean("mbnames", "incremental", False) # Skip writing if we don't want incremental writing and we're not done. if not incremental and not allcomplete: return # Skip writing if we want incremental writing and we're done. if incremental and allcomplete: return # See if we're ready to write it out. 
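# Only generate the file once every account passed to init() has
# registered at least one folder via add(); bailing out early here keeps a
# partially-synced run from writing an incomplete mbnames file.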
for account in accounts: if account not in boxes: return __genmbnames() def __genmbnames(): """Takes a configparser object and a boxlist, which is a list of hashes containing 'accountname' and 'foldername' keys.""" xforms = [os.path.expanduser, os.path.expandvars] mblock.acquire() try: localeval = config.getlocaleval() if not config.getdefaultboolean("mbnames", "enabled", 0): return path = config.apply_xforms(config.get("mbnames", "filename"), xforms) file = open(path, "wt") file.write(localeval.eval(config.get("mbnames", "header"))) folderfilter = lambda accountname, foldername: 1 if config.has_option("mbnames", "folderfilter"): folderfilter = localeval.eval(config.get("mbnames", "folderfilter"), {'re': re}) mb_sort_keyfunc = lambda d: (d['accountname'], d['foldername']) if config.has_option("mbnames", "sort_keyfunc"): mb_sort_keyfunc = localeval.eval(config.get("mbnames", "sort_keyfunc"), {'re': re}) itemlist = [] for accountname in boxes.keys(): localroot = localroots[accountname] for foldername in boxes[accountname]: if folderfilter(accountname, foldername): itemlist.append({'accountname': accountname, 'foldername': foldername, 'localfolders': localroot}) itemlist.sort(key = mb_sort_keyfunc) format_string = config.get("mbnames", "peritem", raw=1) itemlist = [format_string % d for d in itemlist] file.write(localeval.eval(config.get("mbnames", "sep")).join(itemlist)) file.write(localeval.eval(config.get("mbnames", "footer"))) file.close() finally: mblock.release() offlineimap-6.6.1/offlineimap/repository/000077500000000000000000000000001264010144500205225ustar00rootroot00000000000000offlineimap-6.6.1/offlineimap/repository/Base.py000066400000000000000000000256441264010144500217610ustar00rootroot00000000000000# Base repository support # Copyright (C) 2002-2015 John Goerzen & contributors # # This program is free software; you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation; either version 2 of the License, or # (at your option) any later version. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA import re import os.path from sys import exc_info from offlineimap import CustomConfig from offlineimap.ui import getglobalui from offlineimap.error import OfflineImapError class BaseRepository(CustomConfig.ConfigHelperMixin, object): def __init__(self, reposname, account): self.ui = getglobalui() self.account = account self.config = account.getconfig() self.name = reposname self.localeval = account.getlocaleval() self._accountname = self.account.getname() self._readonly = self.getconfboolean('readonly', False) self.uiddir = os.path.join(self.config.getmetadatadir(), 'Repository-' + self.name) if not os.path.exists(self.uiddir): os.mkdir(self.uiddir, 0o700) self.mapdir = os.path.join(self.uiddir, 'UIDMapping') if not os.path.exists(self.mapdir): os.mkdir(self.mapdir, 0o700) # FIXME: self.uiddir variable name is lying about itself. 
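# Note: from here on self.uiddir points at the FolderValidity
# subdirectory, while self.mapdir (set above) keeps pointing at
# UIDMapping.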
self.uiddir = os.path.join(self.uiddir, 'FolderValidity') if not os.path.exists(self.uiddir): os.mkdir(self.uiddir, 0o700) self.nametrans = lambda foldername: foldername self.folderfilter = lambda foldername: 1 self.folderincludes = [] self.foldersort = None self.newmail_hook = None if self.config.has_option(self.getsection(), 'nametrans'): self.nametrans = self.localeval.eval( self.getconf('nametrans'), {'re': re}) if self.config.has_option(self.getsection(), 'folderfilter'): self.folderfilter = self.localeval.eval( self.getconf('folderfilter'), {'re': re}) if self.config.has_option(self.getsection(), 'folderincludes'): self.folderincludes = self.localeval.eval( self.getconf('folderincludes'), {'re': re}) if self.config.has_option(self.getsection(), 'foldersort'): self.foldersort = self.localeval.eval( self.getconf('foldersort'), {'re': re}) def restore_atime(self): """Sets folders' atime back to their values after a sync Controlled by the 'restoreatime' config parameter (default False), applies only to local Maildir mailboxes and does nothing on all other repository types.""" pass def connect(self): """Establish a connection to the remote, if necessary. This exists so that IMAP connections can all be established up front, gathering passwords as needed. It was added in order to support the error recovery -- we need to connect first outside of the error trap in order to validate the password, and that's the point of this function.""" pass def holdordropconnections(self): pass def dropconnections(self): pass def getaccount(self): return self.account def getname(self): return self.name def __str__(self): return self.name @property def accountname(self): """Account name as string""" return self._accountname def getuiddir(self): return self.uiddir def getmapdir(self): return self.mapdir # Interface from CustomConfig.ConfigHelperMixin def getsection(self): return 'Repository ' + self.name # Interface from CustomConfig.ConfigHelperMixin def getconfig(self): return self.config @property def readonly(self): """Is the repository readonly?""" return self._readonly def getlocaleval(self): return self.account.getlocaleval() def getfolders(self): """Returns a list of ALL folders on this server.""" return [] def forgetfolders(self): """Forgets the cached list of folders, if any. Useful to run after a sync run.""" pass def getsep(self): raise NotImplementedError def getkeywordmap(self): raise NotImplementedError def should_sync_folder(self, fname): """Should this folder be synced?""" return fname in self.folderincludes or self.folderfilter(fname) def get_create_folders(self): """Is folder creation enabled on this repository? It is disabled by either setting the whole repository 'readonly' or by using the 'createfolders' setting.""" return (not self._readonly) and \ self.getconfboolean('createfolders', True) def makefolder(self, foldername): """Create a new folder.""" raise NotImplementedError def deletefolder(self, foldername): raise NotImplementedError def getfolder(self, foldername): raise NotImplementedError def sync_folder_structure(self, dst_repo, status_repo): """Syncs the folders in this repository to those in dest. It does NOT sync the contents of those folders. nametrans rules in both directions will be honored, but there are NO checks yet that forward and backward nametrans actually match up! 
Configuring nametrans on BOTH repositories therefore could lead to infinite folder creation cycles.""" if not self.get_create_folders() and not dst_repo.get_create_folders(): # quick exit if no folder creation is enabled on either side. return src_repo = self src_folders = src_repo.getfolders() dst_folders = dst_repo.getfolders() # Do we need to refresh the folder list afterwards? src_haschanged, dst_haschanged = False, False # Create hashes with the names, but convert the source folders # to the dest folder's sep. src_hash = {} for folder in src_folders: src_hash[folder.getvisiblename().replace( src_repo.getsep(), dst_repo.getsep())] = folder dst_hash = {} for folder in dst_folders: dst_hash[folder.getvisiblename().replace( dst_repo.getsep(), src_repo.getsep())] = folder # Find new folders on src_repo. for src_name_t, src_folder in src_hash.iteritems(): # Don't create on dst_repo, if it is readonly if not dst_repo.get_create_folders(): break if src_folder.sync_this and not src_name_t in dst_folders: try: dst_repo.makefolder(src_name_t) dst_haschanged = True # Need to refresh list except OfflineImapError as e: self.ui.error(e, exc_info()[2], "Creating folder %s on repository %s"% (src_name_t, dst_repo)) raise status_repo.makefolder(src_name_t.replace(dst_repo.getsep(), status_repo.getsep())) # Find new folders on dst_repo. for dst_name_t, dst_folder in dst_hash.iteritems(): if not src_repo.get_create_folders(): # Don't create missing folder on readonly repo. break if dst_folder.sync_this and not dst_name_t in src_folders: # nametrans sanity check! # Does nametrans back&forth lead to identical names? # 1) would src repo filter out the new folder name? In this # case don't create it on it: if not self.should_sync_folder(dst_name_t): self.ui.debug('', "Not creating folder '%s' (repository '%s" "') as it would be filtered out on that repository."% (dst_name_t, self)) continue # get IMAPFolder and see if the reverse nametrans # works fine TODO: getfolder() works only because we # succeed in getting inexisting folders which I would # like to change. Take care! folder = self.getfolder(dst_name_t) # apply reverse nametrans to see if we end up with the same name newdst_name = folder.getvisiblename().replace( src_repo.getsep(), dst_repo.getsep()) if dst_folder.name != newdst_name: raise OfflineImapError("INFINITE FOLDER CREATION DETECTED! " "Folder '%s' (repository '%s') would be created as fold" "er '%s' (repository '%s'). The latter becomes '%s' in " "return, leading to infinite folder creation cycles.\n " "SOLUTION: 1) Do set your nametrans rules on both repos" "itories so they lead to identical names if applied bac" "k and forth. 2) Use folderfilter settings on a reposit" "ory to prevent some folders from being created on the " "other side." % (dst_folder.name, dst_repo, dst_name_t, src_repo, newdst_name), OfflineImapError.ERROR.REPO) # end sanity check, actually create the folder try: src_repo.makefolder(dst_name_t) src_haschanged = True # Need to refresh list except OfflineImapError as e: self.ui.error(e, exc_info()[2], "Creating folder %s on " "repository %s" % (dst_name_t, src_repo)) raise status_repo.makefolder(dst_name_t.replace( src_repo.getsep(), status_repo.getsep())) # Find deleted folders. # TODO: We don't delete folders right now. 
#Forget old list of cached folders so we get new ones if needed if src_haschanged: self.forgetfolders() if dst_haschanged: dst_repo.forgetfolders() def startkeepalive(self): """The default implementation will do nothing.""" pass def stopkeepalive(self): """Stop keep alive, but don't bother waiting for the threads to terminate.""" pass def getlocalroot(self): """ Local root folder for storing messages. Will not be set for remote repositories.""" return None offlineimap-6.6.1/offlineimap/repository/Gmail.py000066400000000000000000000063531264010144500221340ustar00rootroot00000000000000# Gmail IMAP repository support # Copyright (C) 2008 Riccardo Murri # # This program is free software; you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation; either version 2 of the License, or # (at your option) any later version. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA from offlineimap.repository.IMAP import IMAPRepository from offlineimap import folder, OfflineImapError class GmailRepository(IMAPRepository): """Gmail IMAP repository. Falls back to hard-coded gmail host name and port, if none were specified: http://mail.google.com/support/bin/answer.py?answer=78799&topic=12814 """ # Gmail IMAP server hostname HOSTNAME = "imap.gmail.com" # Gmail IMAP server port PORT = 993 OAUTH2_URL = 'https://accounts.google.com/o/oauth2/token' def __init__(self, reposname, account): """Initialize a GmailRepository object.""" # Enforce SSL usage account.getconfig().set('Repository ' + reposname, 'ssl', 'yes') IMAPRepository.__init__(self, reposname, account) def gethost(self): """Return the server name to connect to. Gmail implementation first checks for the usual IMAP settings and falls back to imap.gmail.com if not specified.""" try: return super(GmailRepository, self).gethost() except OfflineImapError: # nothing was configured, cache and return hardcoded one self._host = GmailRepository.HOSTNAME return self._host def getoauth2_request_url(self): """Return the server name to connect to. Gmail implementation first checks for the usual IMAP settings and falls back to imap.gmail.com if not specified.""" url = super(GmailRepository, self).getoauth2_request_url() if url is None: # Nothing was configured, cache and return hardcoded one. 
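# The hardcoded fallback is Google's OAuth2 token endpoint; the
# oauth2_client_id, oauth2_client_secret and oauth2_refresh_token options
# still need to be set on the repository for XOAUTH2 to work.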
self._oauth2_request_url = GmailRepository.OAUTH2_URL else: self._oauth2_request_url = url return self._oauth2_request_url def getport(self): return GmailRepository.PORT def getssl(self): return 1 def getpreauthtunnel(self): return None def getfolder(self, foldername): return self.getfoldertype()(self.imapserver, foldername, self) def getfoldertype(self): return folder.Gmail.GmailFolder def gettrashfolder(self, foldername): #: Where deleted mail should be moved return self.getconf('trashfolder','[Gmail]/Trash') def getspamfolder(self): #: Gmail also deletes messages upon EXPUNGE in the Spam folder return self.getconf('spamfolder','[Gmail]/Spam') offlineimap-6.6.1/offlineimap/repository/GmailMaildir.py000066400000000000000000000024671264010144500234400ustar00rootroot00000000000000# Maildir repository support # Copyright (C) 2002-2015 John Goerzen & contributors # # # This program is free software; you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation; either version 2 of the License, or # (at your option) any later version. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA from offlineimap.repository.Maildir import MaildirRepository from offlineimap.folder.GmailMaildir import GmailMaildirFolder class GmailMaildirRepository(MaildirRepository): def __init__(self, reposname, account): """Initialize a MaildirRepository object. Takes a path name to the directory holding all the Maildir directories.""" super(GmailMaildirRepository, self).__init__(reposname, account) def getfoldertype(self): return GmailMaildirFolder offlineimap-6.6.1/offlineimap/repository/IMAP.py000066400000000000000000000435611264010144500216330ustar00rootroot00000000000000# IMAP repository support # Copyright (C) 2002-2015 John Goerzen & contributors # # This program is free software; you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation; either version 2 of the License, or # (at your option) any later version. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. 
# # You should have received a copy of the GNU General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA from threading import Event import os from sys import exc_info import netrc import errno from offlineimap.repository.Base import BaseRepository from offlineimap import folder, imaputil, imapserver, OfflineImapError from offlineimap.folder.UIDMaps import MappedIMAPFolder from offlineimap.threadutil import ExitNotifyThread from offlineimap.utils.distro import get_os_sslcertfile, get_os_sslcertfile_searchpath class IMAPRepository(BaseRepository): def __init__(self, reposname, account): """Initialize an IMAPRepository object.""" BaseRepository.__init__(self, reposname, account) # self.ui is being set by the BaseRepository self._host = None self._oauth2_request_url = None self.imapserver = imapserver.IMAPServer(self) self.folders = None # Only set the newmail_hook in an IMAP repository. if self.config.has_option(self.getsection(), 'newmail_hook'): self.newmail_hook = self.localeval.eval( self.getconf('newmail_hook')) if self.getconf('sep', None): self.ui.info("The 'sep' setting is being ignored for IMAP " "repository '%s' (it's autodetected)"% self) def startkeepalive(self): keepalivetime = self.getkeepalive() if not keepalivetime: return self.kaevent = Event() self.kathread = ExitNotifyThread(target = self.imapserver.keepalive, name = "Keep alive " + self.getname(), args = (keepalivetime, self.kaevent)) self.kathread.setDaemon(1) self.kathread.start() def stopkeepalive(self): if not hasattr(self, 'kaevent'): # Keepalive is not active. return self.kaevent.set() del self.kathread del self.kaevent def holdordropconnections(self): if not self.getholdconnectionopen(): self.dropconnections() def dropconnections(self): self.imapserver.close() def getholdconnectionopen(self): if self.getidlefolders(): return 1 return self.getconfboolean("holdconnectionopen", 0) def getkeepalive(self): num = self.getconfint("keepalive", 0) if num == 0 and self.getidlefolders(): return 29*60 else: return num def getsep(self): """Return the folder separator for the IMAP repository This requires that self.imapserver has been initialized with an acquireconnection() or it will still be `None`""" assert self.imapserver.delim != None, "'%s' " \ "repository called getsep() before the folder separator was " \ "queried from the server"% self return self.imapserver.delim def gethost(self): """Return the configured hostname to connect to :returns: hostname as string or throws Exception""" if self._host: # use cached value if possible return self._host # 1) check for remotehosteval setting if self.config.has_option(self.getsection(), 'remotehosteval'): host = self.getconf('remotehosteval') try: host = self.localeval.eval(host) except Exception as e: raise OfflineImapError("remotehosteval option for repository " "'%s' failed:\n%s"% (self, e), OfflineImapError.ERROR.REPO), \ None, exc_info()[2] if host: self._host = host return self._host # 2) check for plain remotehost setting host = self.getconf('remotehost', None) if host != None: self._host = host return self._host # no success raise OfflineImapError("No remote host for repository " "'%s' specified."% self, OfflineImapError.ERROR.REPO) def get_remote_identity(self): """Remote identity is used for certain SASL mechanisms (currently -- PLAIN) to inform server about the ID we want to authorize as instead of our login name.""" return self.getconf('remote_identity', default=None) 
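# Illustrative repository-section snippet for the two options handled
# here and in get_auth_mechanisms() below (values are made up):
#
#     [Repository RemoteExample]
#     remote_identity = shared.mailbox@example.org
#     auth_mechanisms = GSSAPI, CRAM-MD5, PLAIN
#
# auth_mechanisms is a comma-separated subset of the 'supported' list
# validated below; unknown names raise an OfflineImapError.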
def get_auth_mechanisms(self): supported = ["GSSAPI", "XOAUTH2", "CRAM-MD5", "PLAIN", "LOGIN"] # Mechanisms are ranged from the strongest to the # weakest ones. # TODO: we need DIGEST-MD5, it must come before CRAM-MD5 # TODO: due to the chosen-plaintext resistance. default = ["GSSAPI", "XOAUTH2", "CRAM-MD5", "PLAIN", "LOGIN"] mechs = self.getconflist('auth_mechanisms', r',\s*', default) for m in mechs: if m not in supported: raise OfflineImapError("Repository %s: "% self + \ "unknown authentication mechanism '%s'"% m, OfflineImapError.ERROR.REPO) self.ui.debug('imap', "Using authentication mechanisms %s" % mechs) return mechs def getuser(self): user = None localeval = self.localeval if self.config.has_option(self.getsection(), 'remoteusereval'): user = self.getconf('remoteusereval') if user != None: return localeval.eval(user) if self.config.has_option(self.getsection(), 'remoteuser'): user = self.getconf('remoteuser') if user != None: return user try: netrcentry = netrc.netrc().authenticators(self.gethost()) except IOError as inst: if inst.errno != errno.ENOENT: raise else: if netrcentry: return netrcentry[0] try: netrcentry = netrc.netrc('/etc/netrc').authenticators(self.gethost()) except IOError as inst: if inst.errno not in (errno.ENOENT, errno.EACCES): raise else: if netrcentry: return netrcentry[0] def getport(self): port = None if self.config.has_option(self.getsection(), 'remoteporteval'): port = self.getconf('remoteporteval') if port != None: return self.localeval.eval(port) return self.getconfint('remoteport', None) def getssl(self): return self.getconfboolean('ssl', 1) def getsslclientcert(self): xforms = [os.path.expanduser, os.path.expandvars, os.path.abspath] return self.getconf_xform('sslclientcert', xforms, None) def getsslclientkey(self): xforms = [os.path.expanduser, os.path.expandvars, os.path.abspath] return self.getconf_xform('sslclientkey', xforms, None) def getsslcacertfile(self): """Determines CA bundle. Returns path to the CA bundle. It is either explicitely specified or requested via "OS-DEFAULT" value (and we will search known locations for the current OS and distribution). If search via "OS-DEFAULT" route yields nothing, we will throw an exception to make our callers distinguish between not specified value and non-existent default CA bundle. It is also an error to specify non-existent file via configuration: it will error out later, but, perhaps, with less verbose explanation, so we will also throw an exception. It is consistent with the above behaviour, so any explicitely-requested configuration that doesn't result in an existing file will give an exception. """ xforms = [os.path.expanduser, os.path.expandvars, os.path.abspath] cacertfile = self.getconf_xform('sslcacertfile', xforms, None) if self.getconf('sslcacertfile', None) == "OS-DEFAULT": cacertfile = get_os_sslcertfile() if cacertfile == None: searchpath = get_os_sslcertfile_searchpath() if searchpath: reason = "Default CA bundle was requested, "\ "but no existing locations available. "\ "Tried %s." % (", ".join(searchpath)) else: reason = "Default CA bundle was requested, "\ "but OfflineIMAP doesn't know any for your "\ "current operating system." raise OfflineImapError(reason, OfflineImapError.ERROR.REPO) if cacertfile is None: return None if not os.path.isfile(cacertfile): reason = "CA certfile for repository '%s' couldn't be found. 
"\ "No such file: '%s'" % (self.name, cacertfile) raise OfflineImapError(reason, OfflineImapError.ERROR.REPO) return cacertfile def gettlslevel(self): return self.getconf('tls_level', 'tls_compat') def getsslversion(self): return self.getconf('ssl_version', None) def get_ssl_fingerprint(self): """Return array of possible certificate fingerprints. Configuration item cert_fingerprint can contain multiple comma-separated fingerprints in hex form.""" value = self.getconf('cert_fingerprint', "") return [f.strip().lower() for f in value.split(',') if f] def getoauth2_request_url(self): if self._oauth2_request_url: # Use cached value if possible. return self._oauth2_request_url oauth2_request_url = self.getconf('oauth2_request_url', None) if oauth2_request_url != None: self._oauth2_request_url = oauth2_request_url return self._oauth2_request_url #raise OfflineImapError("No remote oauth2_request_url for repository " #"'%s' specified."% self, OfflineImapError.ERROR.REPO) def getoauth2_refresh_token(self): return self.getconf('oauth2_refresh_token', None) def getoauth2_client_id(self): return self.getconf('oauth2_client_id', None) def getoauth2_client_secret(self): return self.getconf('oauth2_client_secret', None) def getpreauthtunnel(self): return self.getconf('preauthtunnel', None) def gettransporttunnel(self): return self.getconf('transporttunnel', None) def getreference(self): return self.getconf('reference', '') def getdecodefoldernames(self): return self.getconfboolean('decodefoldernames', 0) def getidlefolders(self): localeval = self.localeval return localeval.eval(self.getconf('idlefolders', '[]')) def getmaxconnections(self): num1 = len(self.getidlefolders()) num2 = self.getconfint('maxconnections', 1) return max(num1, num2) def getexpunge(self): return self.getconfboolean('expunge', 1) def getpassword(self): """Return the IMAP password for this repository. It tries to get passwords in the following order: 1. evaluate Repository 'remotepasseval' 2. read password from Repository 'remotepass' 3. read password from file specified in Repository 'remotepassfile' 4. read password from ~/.netrc 5. read password from /etc/netrc On success we return the password. If all strategies fail we return None.""" # 1. evaluate Repository 'remotepasseval' passwd = self.getconf('remotepasseval', None) if passwd != None: return self.localeval.eval(passwd) # 2. read password from Repository 'remotepass' password = self.getconf('remotepass', None) if password != None: return password # 3. read password from file specified in Repository 'remotepassfile' passfile = self.getconf('remotepassfile', None) if passfile != None: fd = open(os.path.expanduser(passfile)) password = fd.readline().strip() fd.close() return password # 4. read password from ~/.netrc try: netrcentry = netrc.netrc().authenticators(self.gethost()) except IOError as inst: if inst.errno != errno.ENOENT: raise else: if netrcentry: user = self.getuser() if user == None or user == netrcentry[0]: return netrcentry[2] # 5. read password from /etc/netrc try: netrcentry = netrc.netrc('/etc/netrc').authenticators(self.gethost()) except IOError as inst: if inst.errno not in (errno.ENOENT, errno.EACCES): raise else: if netrcentry: user = self.getuser() if user == None or user == netrcentry[0]: return netrcentry[2] # no strategy yielded a password! 
return None def getfolder(self, foldername): """Return instance of OfflineIMAP representative folder.""" return self.getfoldertype()(self.imapserver, foldername, self) def getfoldertype(self): return folder.IMAP.IMAPFolder def connect(self): imapobj = self.imapserver.acquireconnection() self.imapserver.releaseconnection(imapobj) def forgetfolders(self): self.folders = None def getfolders(self): """Return a list of instances of OfflineIMAP representative folder.""" if self.folders != None: return self.folders retval = [] imapobj = self.imapserver.acquireconnection() # check whether to list all folders, or subscribed only listfunction = imapobj.list if self.getconfboolean('subscribedonly', False): listfunction = imapobj.lsub try: listresult = listfunction(directory = self.imapserver.reference)[1] finally: self.imapserver.releaseconnection(imapobj) for s in listresult: if s == None or \ (isinstance(s, basestring) and s == ''): # Bug in imaplib: empty strings in results from # literals. TODO: still relevant? continue flags, delim, name = imaputil.imapsplit(s) flaglist = [x.lower() for x in imaputil.flagsplit(flags)] if '\\noselect' in flaglist: continue foldername = imaputil.dequote(name) retval.append(self.getfoldertype()(self.imapserver, foldername, self)) # Add all folderincludes if len(self.folderincludes): imapobj = self.imapserver.acquireconnection() try: for foldername in self.folderincludes: try: imapobj.select(foldername, readonly = True) except OfflineImapError as e: # couldn't select this folderinclude, so ignore folder. if e.severity > OfflineImapError.ERROR.FOLDER: raise self.ui.error(e, exc_info()[2], 'Invalid folderinclude:') continue retval.append(self.getfoldertype()( self.imapserver, foldername, self)) finally: self.imapserver.releaseconnection(imapobj) if self.foldersort is None: # default sorting by case insensitive transposed name retval.sort(key=lambda x: str.lower(x.getvisiblename())) else: # do foldersort in a python3-compatible way # http://bytes.com/topic/python/answers/844614-python-3-sorting-comparison-function def cmp2key(mycmp): """Converts a cmp= function into a key= function We need to keep cmp functions for backward compatibility""" class K: def __init__(self, obj, *args): self.obj = obj def __cmp__(self, other): return mycmp(self.obj.getvisiblename(), other.obj.getvisiblename()) return K retval.sort(key=cmp2key(self.foldersort)) self.folders = retval return self.folders def makefolder(self, foldername): """Create a folder on the IMAP server This will not update the list cached in :meth:`getfolders`. You will need to invoke :meth:`forgetfolders` to force new caching when you are done creating folders yourself. :param foldername: Full path of the folder to be created.""" if self.getreference(): foldername = self.getreference() + self.getsep() + foldername if not foldername: # Create top level folder as folder separator foldername = self.getsep() self.ui.makefolder(self, foldername) if self.account.dryrun: return imapobj = self.imapserver.acquireconnection() try: result = imapobj.create(foldername) if result[0] != 'OK': raise OfflineImapError("Folder '%s'[%s] could not be created. 
" "Server responded: %s"% (foldername, self, str(result)), OfflineImapError.ERROR.FOLDER) finally: self.imapserver.releaseconnection(imapobj) class MappedIMAPRepository(IMAPRepository): def getfoldertype(self): return MappedIMAPFolder offlineimap-6.6.1/offlineimap/repository/LocalStatus.py000066400000000000000000000105631264010144500233370ustar00rootroot00000000000000# Local status cache repository support # Copyright (C) 2002-2015 John Goerzen & contributors # # This program is free software; you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation; either version 2 of the License, or # (at your option) any later version. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA import os from offlineimap.folder.LocalStatus import LocalStatusFolder from offlineimap.folder.LocalStatusSQLite import LocalStatusSQLiteFolder from offlineimap.repository.Base import BaseRepository class LocalStatusRepository(BaseRepository): def __init__(self, reposname, account): BaseRepository.__init__(self, reposname, account) # class and root for all backends self.backends = {} self.backends['sqlite'] = { 'class': LocalStatusSQLiteFolder, 'root': os.path.join(account.getaccountmeta(), 'LocalStatus-sqlite') } self.backends['plain'] = { 'class': LocalStatusFolder, 'root': os.path.join(account.getaccountmeta(), 'LocalStatus') } # Set class and root for the configured backend self.setup_backend(self.account.getconf('status_backend', 'plain')) if not os.path.exists(self.root): os.mkdir(self.root, 0o700) # self._folders is a dict of name:LocalStatusFolders() self._folders = {} def setup_backend(self, backend): if backend in self.backends.keys(): self._backend = backend self.root = self.backends[backend]['root'] self.LocalStatusFolderClass = self.backends[backend]['class'] else: raise SyntaxWarning("Unknown status_backend '%s' for account '%s'"% (backend, self.account.name)) def import_other_backend(self, folder): for bk, dic in self.backends.items(): # skip folder's own type if dic['class'] == type(folder): continue repobk = LocalStatusRepository(self.name, self.account) repobk.setup_backend(bk) # fake the backend folderbk = dic['class'](folder.name, repobk) # if backend contains data, import it to folder. if not folderbk.isnewfolder(): self.ui._msg('Migrating LocalStatus cache from %s to %s " \ "status folder for %s:%s'% (bk, self._backend, self.name, folder.name)) folderbk.cachemessagelist() folder.messagelist = folderbk.messagelist folder.saveall() break def getsep(self): return '.' def makefolder(self, foldername): """Create a LocalStatus Folder.""" if self.account.dryrun: return # bail out in dry-run mode # Create an empty StatusFolder folder = self.LocalStatusFolderClass(foldername, self) folder.save() # Invalidate the cache. self.forgetfolders() def getfolder(self, foldername): """Return the Folder() object for a foldername.""" if foldername in self._folders: return self._folders[foldername] folder = self.LocalStatusFolderClass(foldername, self) # If folder is empty, try to import data from an other backend. 
if folder.isnewfolder(): self.import_other_backend(folder) self._folders[foldername] = folder return folder def getfolders(self): """Returns a list of all cached folders. Does nothing for this backend. We mangle the folder file names (see getfolderfilename) so we can not derive folder names from the file names that we have available. TODO: need to store a list of folder names somehow?""" pass def forgetfolders(self): """Forgets the cached list of folders, if any. Useful to run after a sync run.""" self._folders = {} offlineimap-6.6.1/offlineimap/repository/Maildir.py000066400000000000000000000214071264010144500224610ustar00rootroot00000000000000# Maildir repository support # Copyright (C) 2002-2015 John Goerzen & contributors # # This program is free software; you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation; either version 2 of the License, or # (at your option) any later version. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA from offlineimap import folder from offlineimap.ui import getglobalui from offlineimap.error import OfflineImapError from offlineimap.repository.Base import BaseRepository import os from stat import * class MaildirRepository(BaseRepository): def __init__(self, reposname, account): """Initialize a MaildirRepository object. Takes a path name to the directory holding all the Maildir directories.""" BaseRepository.__init__(self, reposname, account) self.root = self.getlocalroot() self.folders = None self.ui = getglobalui() self.debug("MaildirRepository initialized, sep is %s"% repr(self.getsep())) self.folder_atimes = [] # Create the top-level folder if it doesn't exist if not os.path.isdir(self.root): os.mkdir(self.root, 0o700) # Create the keyword->char mapping self.keyword2char = dict() for c in 'abcdefghijklmnopqrstuvwxyz': confkey = 'customflag_' + c keyword = self.getconf(confkey, None) if keyword is not None: self.keyword2char[keyword] = c def _append_folder_atimes(self, foldername): """Store the atimes of a folder's new|cur in self.folder_atimes""" p = os.path.join(self.root, foldername) new = os.path.join(p, 'new') cur = os.path.join(p, 'cur') atimes = (p, os.path.getatime(new), os.path.getatime(cur)) self.folder_atimes.append(atimes) def restore_atime(self): """Sets folders' atime back to their values after a sync Controlled by the 'restoreatime' config parameter.""" if not self.getconfboolean('restoreatime', False): return # not configured to restore for (dirpath, new_atime, cur_atime) in self.folder_atimes: new_dir = os.path.join(dirpath, 'new') cur_dir = os.path.join(dirpath, 'cur') os.utime(new_dir, (new_atime, os.path.getmtime(new_dir))) os.utime(cur_dir, (cur_atime, os.path.getmtime(cur_dir))) def getlocalroot(self): xforms = [os.path.expanduser] return self.getconf_xform('localfolders', xforms) def debug(self, msg): self.ui.debug('maildir', msg) def getsep(self): return self.getconf('sep', '.').strip() def getkeywordmap(self): return self.keyword2char if len(self.keyword2char) > 0 else None def makefolder(self, foldername): """Create new Maildir folder if necessary 
This will not update the list cached in getfolders(). You will need to invoke :meth:`forgetfolders` to force new caching when you are done creating folders yourself. :param foldername: A relative mailbox name. The maildir will be created in self.root+'/'+foldername. All intermediate folder levels will be created if they do not exist yet. 'cur', 'tmp', and 'new' subfolders will be created in the maildir. """ self.ui.makefolder(self, foldername) if self.account.dryrun: return full_path = os.path.abspath(os.path.join(self.root, foldername)) # sanity tests if self.getsep() == '/': for component in foldername.split('/'): assert not component in ['new', 'cur', 'tmp'],\ "When using nested folders (/ as a Maildir separator), "\ "folder names may not contain 'new', 'cur', 'tmp'." assert foldername.find('../') == -1, "Folder names may not contain ../" assert not foldername.startswith('/'), "Folder names may not begin with /" # If we're using hierarchical folders, it's possible that # sub-folders may be created before higher-up ones. self.debug("makefolder: calling makedirs '%s'"% full_path) try: os.makedirs(full_path, 0o700) except OSError as e: if e.errno == 17 and os.path.isdir(full_path): self.debug("makefolder: '%s' already a directory"% foldername) else: raise for subdir in ['cur', 'new', 'tmp']: try: os.mkdir(os.path.join(full_path, subdir), 0o700) except OSError as e: if e.errno == 17 and os.path.isdir(full_path): self.debug("makefolder: '%s' already has subdir %s"% (foldername, subdir)) else: raise def deletefolder(self, foldername): self.ui.warn("NOT YET IMPLEMENTED: DELETE FOLDER %s"% foldername) def getfolder(self, foldername): """Return a Folder instance of this Maildir If necessary, scan and cache all foldernames to make sure that we only return existing folders and that 2 calls with the same name will return the same object.""" # getfolders() will scan and cache the values *if* necessary folders = self.getfolders() for f in folders: if foldername == f.name: return f raise OfflineImapError("getfolder() asked for a nonexisting " "folder '%s'."% foldername, OfflineImapError.ERROR.FOLDER) def _getfolders_scandir(self, root, extension=None): """Recursively scan folder 'root'; return a list of MailDirFolder :param root: (absolute) path to Maildir root :param extension: (relative) subfolder to examine within root""" self.debug("_GETFOLDERS_SCANDIR STARTING. root = %s, extension = %s"% (root, extension)) retval = [] # Configure the full path to this repository -- "toppath" if extension: toppath = os.path.join(root, extension) else: toppath = root self.debug(" toppath = %s"% toppath) # Iterate over directories in top & top itself. for dirname in os.listdir(toppath) + ['']: self.debug(" dirname = %s"% dirname) if dirname == '' and extension is not None: self.debug(' skip this entry (already scanned)') continue if dirname in ['cur', 'new', 'tmp']: self.debug(" skip this entry (Maildir special)") # Bypass special files. continue fullname = os.path.join(toppath, dirname) if not os.path.isdir(fullname): self.debug(" skip this entry (not a directory)") # Not a directory -- not a folder. continue # extension can be None. 
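# (Illustrative note, not from the original file: a directory is treated as a
# mail folder only when it contains all three of 'cur', 'new' and 'tmp', for
# example ~/Mail/INBOX/{cur,new,tmp}; with "sep = /" nested folders such as
# ~/Mail/lists/python are picked up by the recursive call further below.)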
if extension: foldername = os.path.join(extension, dirname) else: foldername = dirname if (os.path.isdir(os.path.join(fullname, 'cur')) and os.path.isdir(os.path.join(fullname, 'new')) and os.path.isdir(os.path.join(fullname, 'tmp'))): # This directory has maildir stuff -- process self.debug(" This is maildir folder '%s'."% foldername) if self.getconfboolean('restoreatime', False): self._append_folder_atimes(foldername) fd = self.getfoldertype()(self.root, foldername, self.getsep(), self) retval.append(fd) if self.getsep() == '/' and dirname != '': # Recursively check sub-directories for folders too. retval.extend(self._getfolders_scandir(root, foldername)) self.debug("_GETFOLDERS_SCANDIR RETURNING %s"% \ repr([x.getname() for x in retval])) return retval def getfolders(self): if self.folders == None: self.folders = self._getfolders_scandir(self.root) return self.folders def getfoldertype(self): return folder.Maildir.MaildirFolder def forgetfolders(self): """Forgets the cached list of folders, if any. Useful to run after a sync run.""" self.folders = None offlineimap-6.6.1/offlineimap/repository/__init__.py000066400000000000000000000072011264010144500226330ustar00rootroot00000000000000# Copyright (C) 2002-2007 John Goerzen # 2010 Sebastian Spaeth and contributors # # This program is free software; you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation; either version 2 of the License, or # (at your option) any later version. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA from sys import exc_info try: from configparser import NoSectionError except ImportError: #python2 from ConfigParser import NoSectionError from offlineimap.repository.IMAP import IMAPRepository, MappedIMAPRepository from offlineimap.repository.Gmail import GmailRepository from offlineimap.repository.Maildir import MaildirRepository from offlineimap.repository.GmailMaildir import GmailMaildirRepository from offlineimap.repository.LocalStatus import LocalStatusRepository from offlineimap.error import OfflineImapError class Repository(object): """Abstract class that returns the correct Repository type instance based on 'account' and 'reqtype', e.g. a class:`ImapRepository` instance.""" def __new__(cls, account, reqtype): """ :param account: :class:`Account` :param regtype: 'remote', 'local', or 'status'""" if reqtype == 'remote': name = account.getconf('remoterepository') # We don't support Maildirs on the remote side. typemap = {'IMAP': IMAPRepository, 'Gmail': GmailRepository} elif reqtype == 'local': name = account.getconf('localrepository') typemap = {'IMAP': MappedIMAPRepository, 'Maildir': MaildirRepository, 'GmailMaildir': GmailMaildirRepository} elif reqtype == 'status': # create and return a LocalStatusRepository. name = account.getconf('localrepository') return LocalStatusRepository(name, account) else: errstr = "Repository type %s not supported" % reqtype raise OfflineImapError(errstr, OfflineImapError.ERROR.REPO) # Get repository type. 
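# (Hypothetical configuration excerpt, not from the original file: the "type"
# option read below selects the concrete class from the typemap above, e.g.
#
#     [Repository LocalExample]
#     type = Maildir
#     localfolders = ~/Mail
#
#     [Repository RemoteExample]
#     type = IMAP            # or Gmail; the Maildir flavours are local-only
#     remotehost = imap.example.org
# )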
config = account.getconfig() try: repostype = config.get('Repository ' + name, 'type').strip() except NoSectionError as e: errstr = ("Could not find section '%s' in configuration. Required " "for account '%s'." % ('Repository %s' % name, account)) raise OfflineImapError(errstr, OfflineImapError.ERROR.REPO), \ None, exc_info()[2] try: repo = typemap[repostype] except KeyError: errstr = "'%s' repository not supported for '%s' repositories."% \ (repostype, reqtype) raise OfflineImapError(errstr, OfflineImapError.ERROR.REPO), \ None, exc_info()[2] return repo(name, account) def __init__(self, account, reqtype): """Load the correct Repository type and return that. The __init__ of the corresponding Repository class will be executed instead of this stub :param account: :class:`Account` :param regtype: 'remote', 'local', or 'status' """ pass offlineimap-6.6.1/offlineimap/syncmaster.py000066400000000000000000000032571264010144500210540ustar00rootroot00000000000000# OfflineIMAP synchronization master code # Copyright (C) 2002-2007 John Goerzen # # # This program is free software; you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation; either version 2 of the License, or # (at your option) any later version. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA from offlineimap.threadutil import threadlist, InstanceLimitedThread from offlineimap.accounts import SyncableAccount from threading import currentThread def syncaccount(threads, config, accountname): account = SyncableAccount(config, accountname) thread = InstanceLimitedThread(instancename = 'ACCOUNTLIMIT', target = account.syncrunner, name = "Account sync %s" % accountname) thread.setDaemon(True) thread.start() threads.add(thread) def syncitall(accounts, config): # Special exit message for SyncRunner thread, so main thread can exit currentThread().exit_message = 'SYNCRUNNER_EXITED_NORMALLY' threads = threadlist() for accountname in accounts: syncaccount(threads, config, accountname) # Wait for the threads to finish. threads.reset() offlineimap-6.6.1/offlineimap/threadutil.py000066400000000000000000000203231264010144500210220ustar00rootroot00000000000000# Copyright (C) 2002-2012 John Goerzen & contributors # Thread support module # # This program is free software; you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation; either version 2 of the License, or # (at your option) any later version. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. 
# # You should have received a copy of the GNU General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA from threading import Lock, Thread, BoundedSemaphore, currentThread try: from Queue import Queue, Empty except ImportError: # python3 from queue import Queue, Empty import traceback import os.path import sys from offlineimap.ui import getglobalui ###################################################################### # General utilities ###################################################################### def semaphorereset(semaphore, originalstate): """Block until `semaphore` gets back to its original state, ie all acquired resources have been released.""" for i in range(originalstate): semaphore.acquire() # Now release these. for i in range(originalstate): semaphore.release() class threadlist: """Store the list of all threads in the software so it can be used to find out what's running and what's not.""" def __init__(self): self.lock = Lock() self.list = [] def add(self, thread): self.lock.acquire() try: self.list.append(thread) finally: self.lock.release() def remove(self, thread): self.lock.acquire() try: self.list.remove(thread) finally: self.lock.release() def pop(self): self.lock.acquire() try: if not len(self.list): return None return self.list.pop() finally: self.lock.release() def reset(self): while 1: thread = self.pop() if not thread: return thread.join() ###################################################################### # Exit-notify threads ###################################################################### exitthreads = Queue(100) def exitnotifymonitorloop(callback): """An infinite "monitoring" loop watching for finished ExitNotifyThread's. This one is supposed to run in the main thread. :param callback: the function to call when a thread terminated. That function is called with a single argument -- the ExitNotifyThread that has terminated. The monitor will not continue to monitor for other threads until 'callback' returns, so if it intends to perform long calculations, it should start a new thread itself -- but NOT an ExitNotifyThread, or else an infinite loop may result. Furthermore, the monitor will hold the lock all the while the other thread is waiting. :type callback: a callable function """ global exitthreads do_loop = True while do_loop: # Loop forever and call 'callback' for each thread that exited try: # we need a timeout in the get() call, so that ctrl-c can throw # a SIGINT (http://bugs.python.org/issue1360). A timeout with empty # Queue will raise `Empty`. thrd = exitthreads.get(True, 60) # request to abort when callback returns true do_loop = (callback(thrd) != True) except Empty: pass def threadexited(thread): """Called when a thread exits. Main thread is aborted when this returns True.""" ui = getglobalui() if thread.exit_exception: if isinstance(thread.exit_exception, SystemExit): # Bring a SystemExit into the main thread. # Do not send it back to UI layer right now. # Maybe later send it to ui.terminate? raise SystemExit ui.threadException(thread) # Expected to terminate sys.exit(100) # Just in case... elif thread.exit_message == 'SYNCRUNNER_EXITED_NORMALLY': return True else: ui.threadExited(thread) return False class ExitNotifyThread(Thread): """This class is designed to alert a "monitor" to the fact that a thread has exited and to provide for the ability for it to find out why. 
All instances are made daemon threads (setDaemon(True), so we bail out when the mainloop dies. The thread can set instance variables self.exit_message for a human readable reason of the thread exit.""" profiledir = None """Class variable that is set to the profile directory if required.""" def __init__(self, *args, **kwargs): super(ExitNotifyThread, self).__init__(*args, **kwargs) # These are all child threads that are supposed to go away when # the main thread is killed. self.setDaemon(True) self.exit_message = None self._exit_exc = None self._exit_stacktrace = None def run(self): global exitthreads try: if not ExitNotifyThread.profiledir: # normal case Thread.run(self) else: try: import cProfile as profile except ImportError: import profile prof = profile.Profile() try: prof = prof.runctx("Thread.run(self)", globals(), locals()) except SystemExit: pass prof.dump_stats(os.path.join(ExitNotifyThread.profiledir, "%s_%s.prof"% (self.ident, self.getName()))) except Exception as e: # Thread exited with Exception, store it tb = traceback.format_exc() self.set_exit_exception(e, tb) if exitthreads: exitthreads.put(self, True) def set_exit_exception(self, exc, st=None): """Sets Exception and stacktrace of a thread, so that other threads can query its exit status""" self._exit_exc = exc self._exit_stacktrace = st @property def exit_exception(self): """Returns the cause of the exit, one of: Exception() -- the thread aborted with this exception None -- normal termination.""" return self._exit_exc @property def exit_stacktrace(self): """Returns a string representing the stack trace if set""" return self._exit_stacktrace @classmethod def set_profiledir(cls, directory): """If set, will output profile information to 'directory'""" cls.profiledir = directory ###################################################################### # Instance-limited threads ###################################################################### instancelimitedsems = {} instancelimitedlock = Lock() def initInstanceLimit(instancename, instancemax): """Initialize the instance-limited thread implementation to permit up to intancemax threads with the given instancename.""" instancelimitedlock.acquire() if not instancename in instancelimitedsems: instancelimitedsems[instancename] = BoundedSemaphore(instancemax) instancelimitedlock.release() class InstanceLimitedThread(ExitNotifyThread): def __init__(self, instancename, *args, **kwargs): self.instancename = instancename super(InstanceLimitedThread, self).__init__(*args, **kwargs) def start(self): instancelimitedsems[self.instancename].acquire() ExitNotifyThread.start(self) def run(self): try: ExitNotifyThread.run(self) finally: if instancelimitedsems and instancelimitedsems[self.instancename]: instancelimitedsems[self.instancename].release() offlineimap-6.6.1/offlineimap/ui/000077500000000000000000000000001264010144500167205ustar00rootroot00000000000000offlineimap-6.6.1/offlineimap/ui/Curses.py000066400000000000000000000550131264010144500205420ustar00rootroot00000000000000# Curses-based interfaces # Copyright (C) 2003-2015 John Goerzen & contributors # # This program is free software; you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation; either version 2 of the License, or # (at your option) any later version. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. 
See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA from threading import RLock, currentThread, Lock, Event from collections import deque import time import sys import os import curses import logging from offlineimap.ui.UIBase import UIBase from offlineimap.threadutil import ExitNotifyThread import offlineimap class CursesUtil: def __init__(self, *args, **kwargs): # iolock protects access to the self.iolock = RLock() self.tframe_lock = RLock() # tframe_lock protects the self.threadframes manipulation to # only happen from 1 thread. self.colormap = {} """dict, translating color string to curses color pair number""" def curses_colorpair(self, col_name): """Return the curses color pair, that corresponds to the color.""" return curses.color_pair(self.colormap[col_name]) def init_colorpairs(self): """Initialize the curses color pairs available.""" # set special colors 'gray' and 'banner' self.colormap['white'] = 0 #hardcoded by curses curses.init_pair(1, curses.COLOR_WHITE, curses.COLOR_BLUE) self.colormap['banner'] = 1 # color 'banner' for bannerwin bcol = curses.COLOR_BLACK colors = ( # name, color, bold? ('black', curses.COLOR_BLACK, False), ('blue', curses.COLOR_BLUE,False), ('red', curses.COLOR_RED, False), ('purple', curses.COLOR_MAGENTA, False), ('cyan', curses.COLOR_CYAN, False), ('green', curses.COLOR_GREEN, False), ('orange', curses.COLOR_YELLOW, False)) #set the rest of all colors starting at pair 2 i = 1 for name, fcol, bold in colors: i += 1 self.colormap[name] = i curses.init_pair(i, fcol, bcol) def lock(self, block=True): """Locks the Curses ui thread. Can be invoked multiple times from the owning thread. Invoking from a non-owning thread blocks and waits until it has been unlocked by the owning thread.""" return self.iolock.acquire(block) def unlock(self): """Unlocks the Curses ui thread. Decrease the lock counter by one and unlock the ui thread if the counter reaches 0. Only call this method when the calling thread owns the lock. A RuntimeError is raised if this method is called when the lock is unlocked.""" self.iolock.release() def exec_locked(self, target, *args, **kwargs): """Perform an operation with full locking.""" self.lock() try: target(*args, **kwargs) finally: self.unlock() def refresh(self): def lockedstuff(): curses.panel.update_panels() curses.doupdate() self.exec_locked(lockedstuff) def isactive(self): return hasattr(self, 'stdscr') class CursesAccountFrame: """Notable instance variables: - account: corresponding Account() - children - ui - key - window: curses window associated with an account """ def __init__(self, ui, account): """ :param account: An Account() or None (for eg SyncrunnerThread)""" self.children = [] self.account = account if account else '*Control' self.ui = ui self.window = None # Curses window associated with this acc. self.acc_num = None # Account number (& hotkey) associated with this acc. self.location = 0 # length of the account prefix string def drawleadstr(self, secs = 0): """Draw the account status string. 
secs tells us how long we are going to sleep.""" sleepstr = '%3d:%02d'% (secs // 60, secs % 60) if secs else 'active' accstr = '%s: [%s] %12.12s: '% (self.acc_num, sleepstr, self.account) self.ui.exec_locked(self.window.addstr, 0, 0, accstr) self.location = len(accstr) def setwindow(self, curses_win, acc_num): """Register an curses win and a hotkey as Account window. :param curses_win: the curses window associated with an account :param acc_num: int denoting the hotkey associated with this account.""" self.window = curses_win self.acc_num = acc_num self.drawleadstr() # Update the child ThreadFrames for child in self.children: child.update(curses_win, self.location, 0) self.location += 1 def get_new_tframe(self): """Create a new ThreadFrame and append it to self.children. :returns: The new ThreadFrame""" tf = CursesThreadFrame(self.ui, self.window, self.location, 0) self.location += 1 self.children.append(tf) return tf def sleeping(self, sleepsecs, remainingsecs): """Show how long we are going to sleep and sleep. :returns: Boolean, whether we want to abort the sleep""" self.drawleadstr(remainingsecs) self.ui.exec_locked(self.window.refresh) time.sleep(sleepsecs) return self.account.get_abort_event() def syncnow(self): """Request that we stop sleeping asap and continue to sync.""" # if this belongs to an Account (and not *Control), set the # skipsleep pref if isinstance(self.account, offlineimap.accounts.Account): self.ui.info("Requested synchronization for acc: %s"% self.account) self.account.config.set('Account %s'% self.account.name, 'skipsleep', '1') class CursesThreadFrame: """curses_color: current color pair for logging.""" def __init__(self, ui, acc_win, x, y): """ :param ui: is a Blinkenlights() instance :param acc_win: curses Account window""" self.ui = ui self.window = acc_win self.x = x self.y = y self.curses_color = curses.color_pair(0) #default color def setcolor(self, color, modifier=0): """Draw the thread symbol '@' in the specified color :param modifier: Curses modified, such as curses.A_BOLD """ self.curses_color = modifier | self.ui.curses_colorpair(color) self.colorname = color self.display() def display(self): def locked_display(): self.window.addch(self.y, self.x, '@', self.curses_color) self.window.refresh() # lock the curses IO while fudging stuff self.ui.exec_locked(locked_display) def update(self, acc_win, x, y): """Update the xy position of the '.' (and possibly the aframe).""" self.window = acc_win self.y = y self.x = x self.display() def std_color(self): self.setcolor('black') class InputHandler(ExitNotifyThread): """Listens for input via the curses interfaces""" #TODO, we need to use the ugly exitnotifythread (rather than simply #threading.Thread here, so exiting this thread via the callback #handler, kills off all parents too. Otherwise, they would simply #continue. def __init__(self, ui): super(InputHandler, self).__init__() self.char_handler = None self.ui = ui self.enabled = Event() # We will only parse input if we are enabled. self.inputlock = RLock() # denotes whether we should be handling the next char. self.start() #automatically start the thread def get_next_char(self): """Return the key pressed or -1. Wait until `enabled` and loop internally every stdscr.timeout() msecs, releasing the inputlock. 
:returns: char or None if disabled while in here""" self.enabled.wait() while self.enabled.is_set(): with self.inputlock: char = self.ui.stdscr.getch() if char != -1: yield char def run(self): while True: char_gen = self.get_next_char() for char in char_gen: self.char_handler(char) #curses.ungetch(char) def set_char_hdlr(self, callback): """Sets a character callback handler. If a key is pressed it will be passed to this handler. Keys include the curses.KEY_RESIZE key. callback is a function taking a single arg -- the char pressed. If callback is None, input will be ignored.""" with self.inputlock: self.char_handler = callback # start or stop the parsing of things if callback is None: self.enabled.clear() else: self.enabled.set() def input_acquire(self): """Call this method when you want exclusive input control. Make sure to call input_release afterwards! While this lockis held, input can go to e.g. the getpass input.""" self.enabled.clear() self.inputlock.acquire() def input_release(self): """Call this method when you are done getting input.""" self.inputlock.release() self.enabled.set() class CursesLogHandler(logging.StreamHandler): """self.ui has been set to the UI class before anything is invoked""" def emit(self, record): log_str = logging.StreamHandler.format(self, record) color = self.ui.gettf().curses_color # We must acquire both locks. Otherwise, deadlock can result. # This can happen if one thread calls _msg (locking curses, then # tf) and another tries to set the color (locking tf, then curses) # # By locking both up-front here, in this order, we prevent deadlock. self.ui.tframe_lock.acquire() self.ui.lock() try: y,x = self.ui.logwin.getyx() if y or x: self.ui.logwin.addch(10) # no \n before 1st item self.ui.logwin.addstr(log_str, color) finally: self.ui.unlock() self.ui.tframe_lock.release() self.ui.logwin.noutrefresh() self.ui.stdscr.refresh() class Blinkenlights(UIBase, CursesUtil): """Curses-cased fancy UI. Notable instance variables self. ....: - stdscr: THe curses std screen - bannerwin: The top line banner window - width|height: The total curses screen dimensions - logheight: Available height for the logging part - log_con_handler: The CursesLogHandler() - threadframes: - accframes[account]: 'Accountframe'""" def __init__(self, *args, **kwargs): super(Blinkenlights, self).__init__(*args, **kwargs) CursesUtil.__init__(self) ################################################## UTILS def setup_consolehandler(self): """Backend specific console handler. Sets up things and adds them to self.logger. :returns: The logging.Handler() for console output""" # create console handler with a higher log level ch = CursesLogHandler() #ch.setLevel(logging.DEBUG) # create formatter and add it to the handlers self.formatter = logging.Formatter("%(message)s") ch.setFormatter(self.formatter) # add the handlers to the logger self.logger.addHandler(ch) # the handler is not usable yet. We still need all the # intialization stuff currently done in init_banner. Move here? return ch def isusable(s): """Returns true if the backend is usable ie Curses works.""" # Not a terminal? Can't use curses. if not sys.stdout.isatty() and sys.stdin.isatty(): return False # No TERM specified? Can't use curses. if not os.environ.get('TERM', None): return False # Test if ncurses actually starts up fine. Only do so for # python>=2.6.6 as calling initscr() twice messing things up. 
# see http://bugs.python.org/issue7567 in python 2.6 to 2.6.5 if sys.version_info[0:3] < (2,6) or sys.version_info[0:3] >= (2,6,6): try: curses.initscr() curses.endwin() except: return False return True def init_banner(self): self.availablethreadframes = {} self.threadframes = {} self.accframes = {} self.aflock = Lock() self.stdscr = curses.initscr() # turn off automatic echoing of keys to the screen curses.noecho() # react to keys instantly, without Enter key curses.cbreak() # return special key values, eg curses.KEY_LEFT self.stdscr.keypad(1) # wait 1s for input, so we don't block the InputHandler infinitely self.stdscr.timeout(1000) curses.start_color() # turn off cursor and save original state self.oldcursor = None try: self.oldcursor = curses.curs_set(0) except: pass self.stdscr.clear() self.stdscr.refresh() self.init_colorpairs() # set log handlers ui to ourself self._log_con_handler.ui = self self.setupwindows() # Settup keyboard handler self.inputhandler = InputHandler(self) self.inputhandler.set_char_hdlr(self.on_keypressed) self.gettf().setcolor('red') self.info(offlineimap.banner) def acct(self, *args): """Output that we start syncing an account (and start counting).""" self.gettf().setcolor('purple') super(Blinkenlights, self).acct(*args) def connecting(self, *args): self.gettf().setcolor('white') super(Blinkenlights, self).connecting(*args) def syncfolders(self, *args): self.gettf().setcolor('blue') super(Blinkenlights, self).syncfolders(*args) def syncingfolder(self, *args): self.gettf().setcolor('cyan') super(Blinkenlights, self).syncingfolder(*args) def skippingfolder(self, *args): self.gettf().setcolor('cyan') super(Blinkenlights, self).skippingfolder(*args) def loadmessagelist(self, *args): self.gettf().setcolor('green') super(Blinkenlights, self).loadmessagelist(*args) def syncingmessages(self, *args): self.gettf().setcolor('blue') super(Blinkenlights, self).syncingmessages(*args) def copyingmessage(self, *args): self.gettf().setcolor('orange') super(Blinkenlights, self).copyingmessage(*args) def deletingmessages(self, *args): self.gettf().setcolor('red') super(Blinkenlights, self).deletingmessages(*args) def addingflags(self, *args): self.gettf().setcolor('blue') super(Blinkenlights, self).addingflags(*args) def deletingflags(self, *args): self.gettf().setcolor('blue') super(Blinkenlights, self).deletingflags(*args) def callhook(self, *args): self.gettf().setcolor('white') super(Blinkenlights, self).callhook(*args) ############ Generic logging functions ############################# def warn(self, msg, minor=0): self.gettf().setcolor('red', curses.A_BOLD) super(Blinkenlights, self).warn(msg) def threadExited(self, thread): acc = self.getthreadaccount(thread) with self.tframe_lock: if thread in self.threadframes[acc]: tf = self.threadframes[acc][thread] tf.setcolor('black') self.availablethreadframes[acc].append(tf) del self.threadframes[acc][thread] super(Blinkenlights, self).threadExited(thread) def gettf(self): """Return the ThreadFrame() of the current thread.""" cur_thread = currentThread() acc = self.getthreadaccount() #Account() or None with self.tframe_lock: # Ideally we already have self.threadframes[accountname][thread] try: if cur_thread in self.threadframes[acc]: return self.threadframes[acc][cur_thread] except KeyError: # Ensure threadframes already has an account dict self.threadframes[acc] = {} self.availablethreadframes[acc] = deque() # If available, return a ThreadFrame() if len(self.availablethreadframes[acc]): tf = 
self.availablethreadframes[acc].popleft() tf.std_color() else: tf = self.getaccountframe(acc).get_new_tframe() self.threadframes[acc][cur_thread] = tf return tf def on_keypressed(self, key): # received special KEY_RESIZE, resize terminal if key == curses.KEY_RESIZE: self.resizeterm() if key < 1 or key > 255: return if chr(key) == 'q': # Request to quit completely. self.warn("Requested shutdown via 'q'") offlineimap.accounts.Account.set_abort_event(self.config, 3) try: index = int(chr(key)) except ValueError: return # Key not a valid number: exit. if index >= len(self.hotkeys): # Not in our list of valid hotkeys. return # Trying to end sleep somewhere. self.getaccountframe(self.hotkeys[index]).syncnow() def sleep(self, sleepsecs, account): self.gettf().setcolor('red') self.info("Next sync in %d:%02d"% (sleepsecs / 60, sleepsecs % 60)) return super(Blinkenlights, self).sleep(sleepsecs, account) def sleeping(self, sleepsecs, remainingsecs): if not sleepsecs: # reset color to default if we are done sleeping. self.gettf().setcolor('white') accframe = self.getaccountframe(self.getthreadaccount()) return accframe.sleeping(sleepsecs, remainingsecs) def resizeterm(self): """Resize the current windows.""" self.exec_locked(self.setupwindows, True) def mainException(self): UIBase.mainException(self) def getpass(self, accountname, config, errmsg=None): # disable the hotkeys inputhandler self.inputhandler.input_acquire() # See comment on _msg for info on why both locks are obtained. self.lock() try: #s.gettf().setcolor('white') self.warn(" *** Input Required") self.warn(" *** Please enter password for account %s: " % \ accountname) self.logwin.refresh() password = self.logwin.getstr() finally: self.unlock() self.inputhandler.input_release() return password def setupwindows(self, resize=False): """Setup and draw bannerwin and logwin. If `resize`, don't create new windows, just adapt size. 
This function should be invoked with CursesUtils.locked().""" self.height, self.width = self.stdscr.getmaxyx() self.logheight = self.height - len(self.accframes) - 1 if resize: curses.resizeterm(self.height, self.width) self.bannerwin.resize(1, self.width) self.logwin.resize(self.logheight, self.width) else: self.bannerwin = curses.newwin(1, self.width, 0, 0) self.logwin = curses.newwin(self.logheight, self.width, 1, 0) self.draw_bannerwin() self.logwin.idlok(True) # needed for scrollok below self.logwin.scrollok(True) # scroll window when too many lines added self.draw_logwin() self.accounts = reversed(sorted(self.accframes.keys())) pos = self.height - 1 index = 0 self.hotkeys = [] for account in self.accounts: acc_win = curses.newwin(1, self.width, pos, 0) self.accframes[account].setwindow(acc_win, index) self.hotkeys.append(account) index += 1 pos -= 1 curses.doupdate() def draw_bannerwin(self): """Draw the top-line banner line.""" if curses.has_colors(): color = curses.A_BOLD | self.curses_colorpair('banner') else: color = curses.A_REVERSE self.bannerwin.clear() # Delete old content (eg before resizes) self.bannerwin.bkgd(' ', color) # Fill background with that color string = "%s %s"% (offlineimap.__productname__, offlineimap.__bigversion__) self.bannerwin.addstr(0, 0, string, color) self.bannerwin.addstr(0, self.width -len(offlineimap.__copyright__) -1, offlineimap.__copyright__, color) self.bannerwin.noutrefresh() def draw_logwin(self): """(Re)draw the current logwindow.""" if curses.has_colors(): color = curses.color_pair(0) #default colors else: color = curses.A_NORMAL self.logwin.move(0, 0) self.logwin.erase() self.logwin.bkgd(' ', color) def getaccountframe(self, acc_name): """Return an AccountFrame() corresponding to acc_name. Note that the *control thread uses acc_name `None`.""" with self.aflock: # 1) Return existing or 2) create a new CursesAccountFrame. if acc_name in self.accframes: return self.accframes[acc_name] self.accframes[acc_name] = CursesAccountFrame(self, acc_name) # update the window layout self.setupwindows(resize= True) return self.accframes[acc_name] def terminate(self, *args, **kwargs): curses.nocbreak(); self.stdscr.keypad(0); curses.echo() curses.endwin() # need to remove the Curses console handler now and replace with # basic one, so exceptions and stuff are properly displayed self.logger.removeHandler(self._log_con_handler) UIBase.setup_consolehandler(self) # reset the warning method, we do not have curses anymore self.warn = super(Blinkenlights, self).warn # finally call parent terminate which prints out exceptions etc super(Blinkenlights, self).terminate(*args, **kwargs) def threadException(self, thread): #self._log_con_handler.stop() UIBase.threadException(self, thread) offlineimap-6.6.1/offlineimap/ui/Machine.py000066400000000000000000000170731264010144500206460ustar00rootroot00000000000000# Copyright (C) 2007-2015 John Goerzen & contributors # # This program is free software; you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation; either version 2 of the License, or # (at your option) any later version. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. 
# # You should have received a copy of the GNU General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA try: from urllib import urlencode except ImportError: # python3 from urllib.parse import urlencode import sys import time import logging from threading import currentThread from offlineimap.ui.UIBase import UIBase import offlineimap protocol = '7.0.0' class MachineLogFormatter(logging.Formatter): """urlencodes any outputted line, to avoid multi-line output""" def format(s, record): # Mapping of log levels to historic tag names severity_map = { 'info': 'msg', 'warning': 'warn', } line = super(MachineLogFormatter, s).format(record) severity = record.levelname.lower() if severity in severity_map: severity = severity_map[severity] if hasattr(record, "machineui"): command = record.machineui["command"] whoami = record.machineui["id"] else: command = "" whoami = currentThread().getName() prefix = "%s:%s"% (command, urlencode([('', whoami)])[1:]) return "%s:%s:%s"% (severity, prefix, urlencode([('', line)])[1:]) class MachineUI(UIBase): def __init__(s, config, loglevel=logging.INFO): super(MachineUI, s).__init__(config, loglevel) s._log_con_handler.createLock() """lock needed to block on password input""" # Set up the formatter that urlencodes the strings... s._log_con_handler.setFormatter(MachineLogFormatter()) # Arguments: # - handler: must be method from s.logger that reflects # the severity of the passed message # - command: command that produced this message # - msg: the message itself def _printData(s, handler, command, msg): handler(msg, extra = { 'machineui': { 'command': command, 'id': currentThread().getName(), } }) def _msg(s, msg): s._printData(s.logger.info, '_display', msg) def warn(s, msg, minor=0): # TODO, remove and cleanup the unused minor stuff s._printData(s.logger.warning, '', msg) def registerthread(s, account): super(MachineUI, s).registerthread(account) s._printData(s.logger.info, 'registerthread', account) def unregisterthread(s, thread): UIBase.unregisterthread(s, thread) s._printData(s.logger.info, 'unregisterthread', thread.getName()) def debugging(s, debugtype): s._printData(s.logger.debug, 'debugging', debugtype) def acct(s, accountname): s._printData(s.logger.info, 'acct', accountname) def acctdone(s, accountname): s._printData(s.logger.info, 'acctdone', accountname) def validityproblem(s, folder): s._printData(s.logger.warning, 'validityproblem', "%s\n%s\n%s\n%s"% (folder.getname(), folder.getrepository().getname(), folder.get_saveduidvalidity(), folder.get_uidvalidity())) def connecting(s, hostname, port): s._printData(s.logger.info, 'connecting', "%s\n%s"% (hostname, str(port))) def syncfolders(s, srcrepos, destrepos): s._printData(s.logger.info, 'syncfolders', "%s\n%s"% (s.getnicename(srcrepos), s.getnicename(destrepos))) def syncingfolder(s, srcrepos, srcfolder, destrepos, destfolder): s._printData(s.logger.info, 'syncingfolder', "%s\n%s\n%s\n%s\n"% (s.getnicename(srcrepos), srcfolder.getname(), s.getnicename(destrepos), destfolder.getname())) def loadmessagelist(s, repos, folder): s._printData(s.logger.info, 'loadmessagelist', "%s\n%s"% (s.getnicename(repos), folder.getvisiblename())) def messagelistloaded(s, repos, folder, count): s._printData(s.logger.info, 'messagelistloaded', "%s\n%s\n%d"% (s.getnicename(repos), folder.getname(), count)) def syncingmessages(s, sr, sf, dr, df): s._printData(s.logger.info, 'syncingmessages', "%s\n%s\n%s\n%s\n"% 
(s.getnicename(sr), sf.getname(), s.getnicename(dr), df.getname())) def copyingmessage(s, uid, num, num_to_copy, srcfolder, destfolder): s._printData(s.logger.info, 'copyingmessage', "%d\n%s\n%s\n%s[%s]"% (uid, s.getnicename(srcfolder), srcfolder.getname(), s.getnicename(destfolder), destfolder)) def folderlist(s, list): return ("\f".join(["%s\t%s"% (s.getnicename(x), x.getname()) for x in list])) def uidlist(s, list): return ("\f".join([str(u) for u in list])) def deletingmessages(s, uidlist, destlist): ds = s.folderlist(destlist) s._printData(s.logger.info, 'deletingmessages', "%s\n%s"% (s.uidlist(uidlist), ds)) def addingflags(s, uidlist, flags, dest): s._printData(s.logger.info, "addingflags", "%s\n%s\n%s"% (s.uidlist(uidlist), "\f".join(flags), dest)) def deletingflags(s, uidlist, flags, dest): s._printData(s.logger.info, 'deletingflags', "%s\n%s\n%s"% (s.uidlist(uidlist), "\f".join(flags), dest)) def threadException(s, thread): s._printData(s.logger.warning, 'threadException', "%s\n%s"% (thread.getName(), s.getThreadExceptionString(thread))) s.delThreadDebugLog(thread) s.terminate(100) def terminate(s, exitstatus=0, errortitle='', errormsg=''): s._printData(s.logger.info, 'terminate', "%d\n%s\n%s"% (exitstatus, errortitle, errormsg)) sys.exit(exitstatus) def mainException(s): s._printData(s.logger.warning, 'mainException', s.getMainExceptionString()) def threadExited(s, thread): s._printData(s.logger.info, 'threadExited', thread.getName()) UIBase.threadExited(s, thread) def sleeping(s, sleepsecs, remainingsecs): s._printData(s.logger.info, 'sleeping', "%d\n%d"% (sleepsecs, remainingsecs)) if sleepsecs > 0: time.sleep(sleepsecs) return 0 def getpass(s, accountname, config, errmsg=None): if errmsg: s._printData(s.logger.warning, 'getpasserror', "%s\n%s"% (accountname, errmsg), False) s._log_con_handler.acquire() # lock the console output try: s._printData(s.logger.info, 'getpass', accountname) return (sys.stdin.readline()[:-1]) finally: s._log_con_handler.release() def init_banner(s): s._printData(s.logger.info, 'protocol', protocol) s._printData(s.logger.info, 'initbanner', offlineimap.banner) def callhook(s, msg): s._printData(s.logger.info, 'callhook', msg) offlineimap-6.6.1/offlineimap/ui/Noninteractive.py000066400000000000000000000037221264010144500222660ustar00rootroot00000000000000# Noninteractive UI # Copyright (C) 2002-2012 John Goerzen & contributors # # This program is free software; you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation; either version 2 of the License, or # (at your option) any later version. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. 
# # You should have received a copy of the GNU General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA import logging from offlineimap.ui.UIBase import UIBase import offlineimap class Basic(UIBase): """'Basic' simply sets log level to INFO""" def __init__(self, config, loglevel = logging.INFO): return super(Basic, self).__init__(config, loglevel) class Quiet(UIBase): """'Quiet' simply sets log level to WARNING""" def __init__(self, config, loglevel = logging.WARNING): return super(Quiet, self).__init__(config, loglevel) class Syslog(UIBase): """'Syslog' sets log level to INFO and outputs to syslog instead of stdout""" def __init__(self, config, loglevel = logging.INFO): return super(Syslog, self).__init__(config, loglevel) def setup_consolehandler(self): # create syslog handler ch = logging.handlers.SysLogHandler('/dev/log') # create formatter and add it to the handlers self.formatter = logging.Formatter("%(message)s") ch.setFormatter(self.formatter) # add the handlers to the logger self.logger.addHandler(ch) self.logger.info(offlineimap.banner) return ch def setup_sysloghandler(self): pass # Do not honor -s (log to syslog) CLI option. offlineimap-6.6.1/offlineimap/ui/TTY.py000066400000000000000000000101511264010144500177500ustar00rootroot00000000000000# TTY UI # Copyright (C) 2002-2015 John Goerzen & contributors # # This program is free software; you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation; either version 2 of the License, or # (at your option) any later version. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA import logging import sys import time from getpass import getpass from offlineimap import banner from offlineimap.ui.UIBase import UIBase class TTYFormatter(logging.Formatter): """Specific Formatter that adds thread information to the log output.""" def __init__(self, *args, **kwargs): #super() doesn't work in py2.6 as 'logging' uses old-style class logging.Formatter.__init__(self, *args, **kwargs) self._last_log_thread = None def format(self, record): """Override format to add thread information.""" #super() doesn't work in py2.6 as 'logging' uses old-style class log_str = logging.Formatter.format(self, record) # If msg comes from a different thread than our last, prepend # thread info. Most look like 'Account sync foo' or 'Folder # sync foo'. t_name = record.threadName if t_name == 'MainThread': return log_str # main thread doesn't get things prepended if t_name != self._last_log_thread: self._last_log_thread = t_name log_str = "%s:\n %s" % (t_name, log_str) else: log_str = " %s"% log_str return log_str class TTYUI(UIBase): def setup_consolehandler(self): """Backend specific console handler Sets up things and adds them to self.logger. 
:returns: The logging.Handler() for console output""" # create console handler with a higher log level ch = logging.StreamHandler() #ch.setLevel(logging.DEBUG) # create formatter and add it to the handlers self.formatter = TTYFormatter("%(message)s") ch.setFormatter(self.formatter) # add the handlers to the logger self.logger.addHandler(ch) self.logger.info(banner) # init lock for console output ch.createLock() return ch def isusable(self): """TTYUI is reported as usable when invoked on a terminal.""" return sys.stdout.isatty() and sys.stdin.isatty() def getpass(self, accountname, config, errmsg=None): """TTYUI backend is capable of querying the password.""" if errmsg: self.warn("%s: %s"% (accountname, errmsg)) self._log_con_handler.acquire() # lock the console output try: return getpass("Enter password for account '%s': " % accountname) finally: self._log_con_handler.release() def mainException(self): if isinstance(sys.exc_info()[1], KeyboardInterrupt): self.logger.warn("Timer interrupted at user request; program " "terminating.\n") self.terminate() else: UIBase.mainException(self) def sleeping(self, sleepsecs, remainingsecs): """Sleep for sleepsecs, display remainingsecs to go. Does nothing if sleepsecs <= 0. Display a message on the screen if we pass a full minute. This implementation in UIBase does not support this, but some implementations return 0 for successful sleep and 1 for an 'abort', ie a request to sync immediately.""" if sleepsecs > 0: if remainingsecs//60 != (remainingsecs-sleepsecs)//60: self.logger.info("Next refresh in %.1f minutes" % ( remainingsecs/60.0)) time.sleep(sleepsecs) return 0 offlineimap-6.6.1/offlineimap/ui/UIBase.py000066400000000000000000000571041264010144500204110ustar00rootroot00000000000000# UI base class # Copyright (C) 2002-2015 John Goerzen & contributors # # This program is free software; you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation; either version 2 of the License, or # (at your option) any later version. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA import logging import logging.handlers import re import time import sys import traceback import threading try: from Queue import Queue except ImportError: #python3 from queue import Queue from collections import deque from offlineimap.error import OfflineImapError import offlineimap debugtypes = {'':'Other offlineimap related sync messages', 'imap': 'IMAP protocol debugging', 'maildir': 'Maildir repository debugging', 'thread': 'Threading debugging'} globalui = None def setglobalui(newui): """Set the global ui object to be used for logging.""" global globalui globalui = newui def getglobalui(): """Return the current ui object.""" global globalui return globalui class UIBase(object): def __init__(self, config, loglevel=logging.INFO): self.config = config # Is this a 'dryrun'? 
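# (Read from the 'dry-run' option of the [general] section just below; when it is
# enabled, logging methods such as makefolder(), deletingmessages() and callhook()
# prefix their output with '[DRYRUN]' instead of announcing a real action.)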
self.dryrun = config.getdefaultboolean('general', 'dry-run', False) self.debuglist = [] # list of debugtypes we are supposed to log self.debugmessages = {} # debugmessages in a deque(v) per thread(k) self.debugmsglen = 15 self.threadaccounts = {} # dict linking active threads (k) to account names (v) self.acct_startimes = {} # linking active accounts with the time.time() when sync started self.logfile = None self.exc_queue = Queue() # saves all occuring exceptions, so we can output them at the end self.uidval_problem = False # at least one folder skipped due to UID validity problem # create logger with 'OfflineImap' app self.logger = logging.getLogger('OfflineImap') self.logger.setLevel(loglevel) self._log_con_handler = self.setup_consolehandler() """The console handler (we need access to be able to lock it).""" ################################################## UTILS def setup_consolehandler(self): """Backend specific console handler. Sets up things and adds them to self.logger. :returns: The logging.Handler() for console output""" # create console handler with a higher log level ch = logging.StreamHandler(sys.stdout) #ch.setLevel(logging.DEBUG) # create formatter and add it to the handlers self.formatter = logging.Formatter("%(message)s") ch.setFormatter(self.formatter) # add the handlers to the logger self.logger.addHandler(ch) self.logger.info(offlineimap.banner) return ch def setup_sysloghandler(self): """Backend specific syslog handler.""" # create syslog handler ch = logging.handlers.SysLogHandler('/dev/log') # create formatter and add it to the handlers self.formatter = logging.Formatter("%(message)s") ch.setFormatter(self.formatter) # add the handlers to the logger self.logger.addHandler(ch) def setlogfile(self, logfile): """Create file handler which logs to file.""" fh = logging.FileHandler(logfile, 'at') file_formatter = logging.Formatter("%(asctime)s %(levelname)s: " "%(message)s", '%Y-%m-%d %H:%M:%S') fh.setFormatter(file_formatter) self.logger.addHandler(fh) # write out more verbose initial info blurb on the log file p_ver = ".".join([str(x) for x in sys.version_info[0:3]]) msg = "OfflineImap %s starting...\n Python: %s Platform: %s\n "\ "Args: %s"% (offlineimap.__bigversion__, p_ver, sys.platform, " ".join(sys.argv)) self.logger.info(msg) def _msg(self, msg): """Display a message.""" # TODO: legacy function, rip out. self.info(msg) def info(self, msg): """Display a message.""" self.logger.info(msg) def warn(self, msg, minor=0): self.logger.warning(msg) def error(self, exc, exc_traceback=None, msg=None): """Log a message at severity level ERROR. Log Exception 'exc' to error log, possibly prepended by a preceding error "msg", detailing at what point the error occurred. In debug mode, we also output the full traceback that occurred if one has been passed in via sys.info()[2]. Also save the Exception to a stack that can be output at the end of the sync run when offlineiamp exits. It is recommended to always pass in exceptions if possible, so we can give the user the best debugging info. We are always pushing tracebacks to the exception queue to make them to be output at the end of the run to allow users pass sensible diagnostics to the developers or to solve problems by themselves. 
One example of such a call might be: ui.error(exc, sys.exc_info()[2], msg="While syncing Folder %s in " "repo %s") """ if msg: self.logger.error("ERROR: %s\n %s" % (msg, exc)) else: self.logger.error("ERROR: %s" % (exc)) instant_traceback = exc_traceback if not self.debuglist: # only output tracebacks in debug mode instant_traceback = None # push exc on the queue for later output self.exc_queue.put((msg, exc, exc_traceback)) if instant_traceback: self.logger.error(traceback.format_tb(instant_traceback)) def registerthread(self, account): """Register current thread as being associated with an account name.""" cur_thread = threading.currentThread() if cur_thread in self.threadaccounts: # was already associated with an old account, update info self.debug('thread', "Register thread '%s' (previously '%s', now " "'%s')" % (cur_thread.getName(), self.getthreadaccount(cur_thread), account)) else: self.debug('thread', "Register new thread '%s' (account '%s')"% (cur_thread.getName(), account)) self.threadaccounts[cur_thread] = account def unregisterthread(self, thr): """Unregister a thread as being associated with an account name.""" if thr in self.threadaccounts: del self.threadaccounts[thr] self.debug('thread', "Unregister thread '%s'" % thr.getName()) def getthreadaccount(self, thr=None): """Get Account() for a thread (current if None) If no account has been registered with this thread, return 'None'.""" if thr == None: thr = threading.currentThread() if thr in self.threadaccounts: return self.threadaccounts[thr] return None def debug(self, debugtype, msg): cur_thread = threading.currentThread() if not cur_thread in self.debugmessages: # deque(..., self.debugmsglen) would be handy but was # introduced in p2.6 only, so we'll need to work around and # shorten our debugmsg list manually :-( self.debugmessages[cur_thread] = deque() self.debugmessages[cur_thread].append("%s: %s" % (debugtype, msg)) # Shorten queue if needed if len(self.debugmessages[cur_thread]) > self.debugmsglen: self.debugmessages[cur_thread].popleft() if debugtype in self.debuglist: # log if we are supposed to do so self.logger.debug("[%s]: %s" % (debugtype, msg)) def add_debug(self, debugtype): global debugtypes if debugtype in debugtypes: if not debugtype in self.debuglist: self.debuglist.append(debugtype) self.debugging(debugtype) else: self.invaliddebug(debugtype) def debugging(self, debugtype): global debugtypes self.logger.debug("Now debugging for %s: %s" % (debugtype, debugtypes[debugtype])) def invaliddebug(self, debugtype): self.warn("Invalid debug type: %s" % debugtype) def getnicename(self, object): """Return the type of a repository or Folder as string. (IMAP, Gmail, Maildir, etc...)""" prelimname = object.__class__.__name__.split('.')[-1] # Strip off extra stuff. return re.sub('(Folder|Repository)', '', prelimname) def isusable(self): """Returns true if this UI object is usable in the current environment. 
For instance, an X GUI would return true if it's being run in X with a valid DISPLAY setting, and false otherwise.""" return True ################################################## INPUT def getpass(self, accountname, config, errmsg = None): raise NotImplementedError("Prompting for a password is not supported" " in this UI backend.") def folderlist(self, folder_list): return ', '.join(["%s[%s]"% \ (self.getnicename(x), x.getname()) for x in folder_list]) ################################################## WARNINGS def msgtoreadonly(self, destfolder, uid, content, flags): if self.config.has_option('general', 'ignore-readonly') and \ self.config.getboolean('general', 'ignore-readonly'): return self.warn("Attempted to synchronize message %d to folder %s[%s], " "but that folder is read-only. The message will not be " "copied to that folder."% ( uid, self.getnicename(destfolder), destfolder)) def flagstoreadonly(self, destfolder, uidlist, flags): if self.config.has_option('general', 'ignore-readonly') and \ self.config.getboolean('general', 'ignore-readonly'): return self.warn("Attempted to modify flags for messages %s in folder %s[%s], " "but that folder is read-only. No flags have been modified " "for that message."% ( str(uidlist), self.getnicename(destfolder), destfolder)) def labelstoreadonly(self, destfolder, uidlist, labels): if self.config.has_option('general', 'ignore-readonly') and \ self.config.getboolean('general', 'ignore-readonly'): return self.warn("Attempted to modify labels for messages %s in folder %s[%s], " "but that folder is read-only. No labels have been modified " "for that message."% ( str(uidlist), self.getnicename(destfolder), destfolder)) def deletereadonly(self, destfolder, uidlist): if self.config.has_option('general', 'ignore-readonly') and \ self.config.getboolean('general', 'ignore-readonly'): return self.warn("Attempted to delete messages %s in folder %s[%s], but that " "folder is read-only. No messages have been deleted in that " "folder."% (str(uidlist), self.getnicename(destfolder), destfolder)) ################################################## MESSAGES def init_banner(self): """Called when the UI starts. Must be called before any other UI call except isusable(). Displays the copyright banner. 
This is where the UI should do its setup -- TK, for instance, would create the application window here.""" pass def connecting(self, hostname, port): """Log 'Establishing connection to'.""" if not self.logger.isEnabledFor(logging.INFO): return displaystr = '' hostname = hostname if hostname else '' port = "%s"% port if port else '' if hostname: displaystr = ' to %s:%s' % (hostname, port) self.logger.info("Establishing connection%s" % displaystr) def acct(self, account): """Output that we start syncing an account (and start counting).""" self.acct_startimes[account] = time.time() self.logger.info("*** Processing account %s" % account) def acctdone(self, account): """Output that we finished syncing an account (in which time).""" sec = time.time() - self.acct_startimes[account] del self.acct_startimes[account] self.logger.info("*** Finished account '%s' in %d:%02d"% (account, sec // 60, sec % 60)) def syncfolders(self, src_repo, dst_repo): """Log 'Copying folder structure...'.""" if self.logger.isEnabledFor(logging.DEBUG): self.debug('', "Copying folder structure from %s to %s" %\ (src_repo, dst_repo)) ############################## Folder syncing def makefolder(self, repo, foldername): """Called when a folder is created.""" prefix = "[DRYRUN] " if self.dryrun else "" self.info(("{0}Creating folder {1}[{2}]".format( prefix, foldername, repo))) def syncingfolder(self, srcrepos, srcfolder, destrepos, destfolder): """Called when a folder sync operation is started.""" self.logger.info("Syncing %s: %s -> %s"% (srcfolder, self.getnicename(srcrepos), self.getnicename(destrepos))) def skippingfolder(self, folder): """Called when a folder sync operation is started.""" self.logger.info("Skipping %s (not changed)" % folder) def validityproblem(self, folder): self.uidval_problem = True self.logger.warning("UID validity problem for folder %s (repo %s) " "(saved %d; got %d); skipping it. 
Please see FAQ " "and manual on how to handle this."% \ (folder, folder.getrepository(), folder.get_saveduidvalidity(), folder.get_uidvalidity())) def loadmessagelist(self, repos, folder): self.logger.debug("Loading message list for %s[%s]"% ( self.getnicename(repos), folder)) def messagelistloaded(self, repos, folder, count): self.logger.debug("Message list for %s[%s] loaded: %d messages" % ( self.getnicename(repos), folder, count)) ############################## Message syncing def syncingmessages(self, sr, srcfolder, dr, dstfolder): self.logger.debug("Syncing messages %s[%s] -> %s[%s]" % ( self.getnicename(sr), srcfolder, self.getnicename(dr), dstfolder)) def copyingmessage(self, uid, num, num_to_copy, src, destfolder): """Output a log line stating which message we copy""" self.logger.info("Copy message %s (%d of %d) %s:%s -> %s" % ( uid, num, num_to_copy, src.repository, src, destfolder.repository)) def deletingmessages(self, uidlist, destlist): ds = self.folderlist(destlist) prefix = "[DRYRUN] " if self.dryrun else "" self.info("{0}Deleting {1} messages ({2}) in {3}".format( prefix, len(uidlist), offlineimap.imaputil.uid_sequence(uidlist), ds)) def addingflags(self, uidlist, flags, dest): self.logger.info("Adding flag %s to %d messages on %s" % ( ", ".join(flags), len(uidlist), dest)) def deletingflags(self, uidlist, flags, dest): self.logger.info("Deleting flag %s from %d messages on %s" % ( ", ".join(flags), len(uidlist), dest)) def addinglabels(self, uidlist, label, dest): self.logger.info("Adding label %s to %d messages on %s" % ( label, len(uidlist), dest)) def deletinglabels(self, uidlist, label, dest): self.logger.info("Deleting label %s from %d messages on %s" % ( label, len(uidlist), dest)) def settinglabels(self, uid, num, num_to_set, labels, dest): self.logger.info("Setting labels to message %d on %s (%d of %d): %s" % ( uid, dest, num, num_to_set, ", ".join(labels))) def collectingdata(self, uidlist, source): if uidlist: self.logger.info("Collecting data from %d messages on %s"% ( len(uidlist), source)) else: self.logger.info("Collecting data from messages on %s"% source) def serverdiagnostics(self, repository, type): """Connect to repository and output useful information for debugging.""" conn = None self._msg("%s repository '%s': type '%s'" % (type, repository.name, self.getnicename(repository))) try: if hasattr(repository, 'gethost'): # IMAP self._msg("Host: %s Port: %s SSL: %s"% (repository.gethost(), repository.getport(), repository.getssl())) try: conn = repository.imapserver.acquireconnection() except OfflineImapError as e: self._msg("Failed to connect. 
Reason %s" % e) else: if 'ID' in conn.capabilities: self._msg("Server supports ID extension.") #TODO: Debug and make below working, it hangs Gmail #res_type, response = conn.id(( # 'name', offlineimap.__productname__, # 'version', offlineimap.__bigversion__)) #self._msg("Server ID: %s %s" % (res_type, response[0])) self._msg("Server welcome string: %s" % str(conn.welcome)) self._msg("Server capabilities: %s\n" % str(conn.capabilities)) repository.imapserver.releaseconnection(conn) if type != 'Status': folderfilter = repository.getconf('folderfilter', None) if folderfilter: self._msg("folderfilter= %s\n" % folderfilter) folderincludes = repository.getconf('folderincludes', None) if folderincludes: self._msg("folderincludes= %s\n" % folderincludes) nametrans = repository.getconf('nametrans', None) if nametrans: self._msg("nametrans= %s\n" % nametrans) folders = repository.getfolders() foldernames = [(f.name, f.getvisiblename(), f.sync_this) for f in folders] folders = [] for name, visiblename, sync_this in foldernames: syncstr = "" if sync_this else " (disabled)" if name == visiblename: folders.append("%s%s" % (name, syncstr)) else: folders.append("%s -> %s%s" % (name, visiblename, syncstr)) self._msg("Folderlist:\n %s\n" % "\n ".join(folders)) finally: if conn: #release any existing IMAP connection repository.imapserver.close() def savemessage(self, debugtype, uid, flags, folder): """Output a log line stating that we save a msg.""" self.debug(debugtype, "Write mail '%s:%d' with flags %s"% (folder, uid, repr(flags))) ################################################## Threads def getThreadDebugLog(self, thread): if thread in self.debugmessages: message = "\nLast %d debug messages logged for %s prior to exception:\n"\ % (len(self.debugmessages[thread]), thread.getName()) message += "\n".join(self.debugmessages[thread]) else: message = "\nNo debug messages were logged for %s."% \ thread.getName() return message def delThreadDebugLog(self, thread): if thread in self.debugmessages: del self.debugmessages[thread] def getThreadExceptionString(self, thread): message = "Thread '%s' terminated with exception:\n%s"% \ (thread.getName(), thread.exit_stacktrace) message += "\n" + self.getThreadDebugLog(thread) return message def threadException(self, thread): """Called when a thread has terminated with an exception. The argument is the ExitNotifyThread that has so terminated.""" self.warn(self.getThreadExceptionString(thread)) self.delThreadDebugLog(thread) self.terminate(100) def terminate(self, exitstatus = 0, errortitle = None, errormsg = None): """Called to terminate the application.""" #print any exceptions that have occurred over the run if not self.exc_queue.empty(): self.warn("ERROR: Exceptions occurred during the run!") if exitstatus == 0: exitstatus = 1 while not self.exc_queue.empty(): msg, exc, exc_traceback = self.exc_queue.get() if msg: self.warn("ERROR: %s\n %s"% (msg, exc)) else: self.warn("ERROR: %s"% (exc)) if exc_traceback: self.warn("\nTraceback:\n%s"% "".join( traceback.format_tb(exc_traceback))) if errormsg and errortitle: self.warn('ERROR: %s\n\n%s\n'% (errortitle, errormsg)) elif errormsg: self.warn('%s\n'% errormsg) if self.uidval_problem: self.warn('At least one folder skipped due to UID validity problem') if exitstatus == 0: exitstatus = 2 sys.exit(exitstatus) def threadExited(self, thread): """Called when a thread has exited normally. 
Many UIs will just ignore this.""" self.delThreadDebugLog(thread) self.unregisterthread(thread) ################################################## Hooks def callhook(self, msg): if self.dryrun: self.info("[DRYRUN] {0}".format(msg)) else: self.info(msg) ################################################## Other def sleep(self, sleepsecs, account): """This function does not actually output anything, but handles the overall sleep, dealing with updates as necessary. It will, however, call sleeping() which DOES output something. :returns: 0/False if timeout expired, 1/2/True if there is a request to cancel the timer. """ abortsleep = False while sleepsecs > 0 and not abortsleep: if account.get_abort_event(): abortsleep = True else: abortsleep = self.sleeping(10, sleepsecs) sleepsecs -= 10 self.sleeping(0, 0) # Done sleeping. return abortsleep def sleeping(self, sleepsecs, remainingsecs): """Sleep for sleepsecs, display remainingsecs to go. Does nothing if sleepsecs <= 0. Display a message on the screen if we pass a full minute. This implementation in UIBase does not support this, but some implementations return 0 for successful sleep and 1 for an 'abort', ie a request to sync immediately. """ if sleepsecs > 0: if remainingsecs//60 != (remainingsecs-sleepsecs)//60: self.logger.debug("Next refresh in %.1f minutes" % ( remainingsecs/60.0)) time.sleep(sleepsecs) return 0 offlineimap-6.6.1/offlineimap/ui/__init__.py000066400000000000000000000024421264010144500210330ustar00rootroot00000000000000# UI module # Copyright (C) 2010-2011 Sebastian Spaeth & contributors # # This program is free software; you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation; either version 2 of the License, or # (at your option) any later version. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA from offlineimap.ui.UIBase import getglobalui, setglobalui from offlineimap.ui import TTY, Noninteractive, Machine UI_LIST = {'ttyui': TTY.TTYUI, 'basic': Noninteractive.Basic, 'quiet': Noninteractive.Quiet, 'syslog': Noninteractive.Syslog, 'machineui': Machine.MachineUI} #add Blinkenlights UI if it imports correctly (curses installed) try: from offlineimap.ui import Curses UI_LIST['blinkenlights'] = Curses.Blinkenlights except ImportError: pass offlineimap-6.6.1/offlineimap/ui/debuglock.py000066400000000000000000000033141264010144500212320ustar00rootroot00000000000000# Locking debugging code -- temporary # Copyright (C) 2003-2015 John Goerzen & contributors # # This program is free software; you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation; either version 2 of the License, or # (at your option) any later version. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. 
# # You should have received a copy of the GNU General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA from threading import Lock, currentThread import traceback logfile = open("/tmp/logfile", "wt") loglock = Lock() class DebuggingLock: def __init__(self, name): self.lock = Lock() self.name = name def acquire(self, blocking = 1): self.print_tb("Acquire lock") self.lock.acquire(blocking) self.logmsg("===== %s: Thread %s acquired lock\n"% (self.name, currentThread().getName())) def release(self): self.print_tb("Release lock") self.lock.release() def logmsg(self, msg): loglock.acquire() logfile.write(msg + "\n") logfile.flush() loglock.release() def print_tb(self, msg): self.logmsg(".... %s: Thread %s attempting to %s\n"% \ (self.name, currentThread().getName(), msg) + \ "\n".join(traceback.format_list(traceback.extract_stack()))) offlineimap-6.6.1/offlineimap/utils/000077500000000000000000000000001264010144500174435ustar00rootroot00000000000000offlineimap-6.6.1/offlineimap/utils/__init__.py000066400000000000000000000000001264010144500215420ustar00rootroot00000000000000offlineimap-6.6.1/offlineimap/utils/const.py000066400000000000000000000021271264010144500211450ustar00rootroot00000000000000# Copyright (C) 2013-2014 Eygene A. Ryabinkin and contributors # # Collection of classes that implement const-like behaviour # for various objects. import copy class ConstProxy(object): """Implements read-only access to a given object that can be attached to each instance only once.""" def __init__(self): self.__dict__['__source'] = None def __getattr__(self, name): src = self.__dict__['__source'] if src == None: raise ValueError("using non-initialized ConstProxy() object") return copy.deepcopy(getattr(src, name)) def __setattr__(self, name, value): raise AttributeError("tried to set '%s' to '%s' for constant object"% \ (name, value)) def __delattr__(self, name): raise RuntimeError("tried to delete field '%s' from constant object"% \ (name)) def set_source(self, source): """ Sets source object for this instance. """ if (self.__dict__['__source'] != None): raise ValueError("source object is already set") self.__dict__['__source'] = source offlineimap-6.6.1/offlineimap/utils/distro.py000066400000000000000000000047771264010144500213400ustar00rootroot00000000000000# Copyright 2014 Eygene A. Ryabinkin. # # Module that supports distribution-specific functions. import platform import os # Each dictionary value is either string or some iterable. # # For the former we will just return the value, for an iterable # we will walk through the values and will return the first # one that corresponds to the existing file. __DEF_OS_LOCATIONS = { 'freebsd': '/usr/local/share/certs/ca-root-nss.crt', 'openbsd': '/etc/ssl/cert.pem', 'dragonfly': '/etc/ssl/cert.pem', 'darwin': [ # MacPorts, port curl-ca-bundle '/opt/local/share/curl/curl-ca-bundle.crt', ], 'linux-ubuntu': '/etc/ssl/certs/ca-certificates.crt', 'linux-debian': '/etc/ssl/certs/ca-certificates.crt', 'linux-fedora': '/etc/pki/tls/certs/ca-bundle.crt', 'linux-redhat': '/etc/pki/tls/certs/ca-bundle.crt', 'linux-suse': '/etc/ssl/ca-bundle.pem', } def get_os_name(): """ Finds out OS name. For non-Linux system it will be just a plain OS name (like FreeBSD), for Linux it will be "linux-", where is the name of the distribution, as returned by the first component of platform.linux_distribution. 
Return value will be all-lowercase to avoid confusion about proper name capitalisation. """ OS = platform.system().lower() if OS.startswith('linux'): DISTRO = platform.linux_distribution()[0] if DISTRO: OS = OS + "-%s" % DISTRO.lower() return OS def get_os_sslcertfile_searchpath(): """Returns search path for CA bundle for the current OS. We will return an iterable even if configuration has just a single value: it is easier for our callers to be sure that they can iterate over result. Returned value of None means that there is no search path at all. """ OS = get_os_name() l = None if OS in __DEF_OS_LOCATIONS: l = __DEF_OS_LOCATIONS[OS] if not hasattr(l, '__iter__'): l = (l, ) return l def get_os_sslcertfile(): """ Finds out the location for the distribution-specific CA certificate file bundle. Returns the location of the file or None if there is no known CA certificate file or all known locations correspond to non-existing filesystem objects. """ l = get_os_sslcertfile_searchpath() if l == None: return None for f in l: assert (type(f) == type("")) if os.path.exists(f) and \ (os.path.isfile(f) or os.path.islink(f)): return f return None offlineimap-6.6.1/offlineimap/utils/stacktrace.py000066400000000000000000000012471264010144500221450ustar00rootroot00000000000000# Copyright 2013 Eygene A. Ryabinkin # Functions to perform stack tracing (for multithreaded programs # as well as for single-threaded ones). import sys import threading import traceback def dump(out): """ Dumps current stack trace into I/O object 'out' """ id2name = {} for th in threading.enumerate(): id2name[th.ident] = th.name n = 0 for i, stack in sys._current_frames().items(): out.write ("\n# Thread #%d (id=%d), %s\n" % \ (n, i, id2name[i])) n = n + 1 for f, lno, name, line in traceback.extract_stack (stack): out.write ('File: "%s", line %d, in %s' % \ (f, lno, name)) if line: out.write (" %s" % (line.strip())) out.write ("\n") offlineimap-6.6.1/scripts/000077500000000000000000000000001264010144500155015ustar00rootroot00000000000000offlineimap-6.6.1/scripts/get-repository.sh000077500000000000000000000037301264010144500210370ustar00rootroot00000000000000#!/bin/bash # # Licence: this file is in the public deomain. # # Download and configure the repositories of the website or wiki. repository=$1 github_remote=$2 # # TODO # function final_note () { cat < $ cd ./$1 $ git remote add myfork https://github.com//.git EOF } function setup () { target_dir=$1 remote_url=$2 # Adjust $PWD if necessary. test -d scripts || cd .. if test ! -d scripts then echo "cannot figure the correct workdir..." exit 2 fi if test -d $target_dir then echo "Directory '$target_dir' already exists..." exit 3 fi git clone "${remote_url}.git" "$1" echo '' if test $? -gt 0 then echo "Cannot fork $remote_url to $1" exit 4 fi } function configure_website () { renderer='./render.sh' echo "Found Github username: '$1'" echo "If it's wrong, please fix the script ./website/render.sh" cd ./website if test $? -eq 0 then sed -r -i -e "s,{{USERNAME}},$1," "$renderer" cd .. else echo "ERROR: could not enter ./website. (?)" fi } function configure_wiki () { : # noop } test n$github_remote = 'n' && github_remote='origin' # Get Github username. 
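# (The username is parsed with sed from the URL of the configured Git remote.
# Note that the active line below queries 'remote.nicolas33.url'; adjust the
# remote name, or switch to the commented-out 'origin' line, to match your clone.)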
#offlineimap_url="$(git config --local --get remote.origin.url)" offlineimap_url="$(git config --local --get remote.nicolas33.url)" username=$(echo $offlineimap_url | sed -r -e 's,.*github.com.([^/]+)/.*,\1,') case n$repository in nwebsite) upstream=https://github.com/OfflineIMAP/offlineimap.github.io setup website "$upstream" configure_website "$username" final_note website "$upstream" ;; nwiki) upstream=https://github.com/OfflineIMAP/offlineimap.wiki setup wiki "$upstream" configure_wiki final_note wiki "$upstream" ;; *) cat <] : The name of the Git repository of YOUR fork at Github. Default: origin EOF exit 1 ;; esac offlineimap-6.6.1/setup.py000066400000000000000000000046341264010144500155330ustar00rootroot00000000000000#!/usr/bin/env python # $Id: setup.py,v 1.1 2002/06/21 18:10:49 jgoerzen Exp $ # IMAP synchronization # Module: installer # COPYRIGHT # # Copyright (C) 2002 - 2006 John Goerzen # # This program is free software; you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation; either version 2 of the License, or # (at your option) any later version. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA import os from distutils.core import setup, Command import offlineimap import logging from test.OLItest import TextTestRunner, TestLoader, OLITestLib class TestCommand(Command): """runs the OLI testsuite""" description = """Runs the test suite. In order to execute only a single test, you could also issue e.g. 'python -m unittest test.tests.test_01_basic.TestBasicFunctions.test_01_olistartup' on the command line.""" user_options = [] def initialize_options(self): pass def finalize_options(self): pass def run(self): logging.basicConfig(format='%(message)s') # set credentials and OfflineImap command to be executed: OLITestLib(cred_file='./test/credentials.conf', cmd='./offlineimap.py') suite = TestLoader().discover('./test/tests') #TODO: failfast does not seem to exist in python2.6? 
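# Note: TextTestRunner only gained the 'failfast' keyword argument with the
# unittest module shipped in Python 2.7; on Python 2.6 the unittest2 backport
# mentioned in test/README provides an equivalent runner.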
TextTestRunner(verbosity=2,failfast=True).run(suite) setup(name = "offlineimap", version = offlineimap.__version__, description = offlineimap.__description__, author = offlineimap.__author__, author_email = offlineimap.__author_email__, url = offlineimap.__homepage__, packages = ['offlineimap', 'offlineimap.folder', 'offlineimap.repository', 'offlineimap.ui', 'offlineimap.utils'], scripts = ['bin/offlineimap'], license = offlineimap.__copyright__ + \ ", Licensed under the GPL version 2", cmdclass = { 'test': TestCommand} ) offlineimap-6.6.1/test/000077500000000000000000000000001264010144500147715ustar00rootroot00000000000000offlineimap-6.6.1/test/.gitignore000066400000000000000000000000261264010144500167570ustar00rootroot00000000000000credentials.conf tmp_*offlineimap-6.6.1/test/OLItest/000077500000000000000000000000001264010144500163145ustar00rootroot00000000000000offlineimap-6.6.1/test/OLItest/TestRunner.py000066400000000000000000000236241264010144500210060ustar00rootroot00000000000000# Copyright (C) 2012- Sebastian Spaeth & contributors # # This program is free software; you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation; either version 2 of the License, or # (at your option) any later version. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA import imaplib import unittest import logging import os import re import sys import shutil import subprocess import tempfile import random random.seed() from offlineimap.CustomConfig import CustomConfigParser from . import default_conf class OLITestLib(): cred_file = None testdir = None """Absolute path of the current temporary test directory""" cmd = None """command that will be executed to invoke offlineimap""" def __init__(self, cred_file = None, cmd='offlineimap'): """ :param cred_file: file of the configuration snippet for authenticating against the test IMAP server(s). :param cmd: command that will be executed to invoke offlineimap""" OLITestLib.cred_file = cred_file if not os.path.isfile(cred_file): raise UserWarning("Please copy 'credentials.conf.sample' to '%s' " "and set your credentials there." % cred_file) OLITestLib.cmd = cmd @classmethod def create_test_dir(cls, suffix=''): """Creates a test directory and places OLI config there Note that this is a class method. There can only be one test directory at a time. OLITestLib is not suited for running several tests in parallel. The user is responsible for cleaning that up herself.""" assert cls.cred_file != None # creating temporary dir for testing in same dir as credentials.conf cls.testdir = os.path.abspath( tempfile.mkdtemp(prefix='tmp_%s_'%suffix, dir=os.path.dirname(cls.cred_file))) cls.write_config_file() return cls.testdir @classmethod def get_default_config(cls): """Creates a default ConfigParser file and returns it The returned config can be manipulated and then saved with write_config_file()""" #TODO, only do first time and cache then for subsequent calls? 
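# As written, every call re-parses the bundled default_conf StringIO (rewound
# with seek(0) below) and then overlays the credentials file, so each caller
# receives a fresh, independent config object.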
assert cls.cred_file != None assert cls.testdir != None config = CustomConfigParser() config.readfp(default_conf) default_conf.seek(0) # rewind config_file to start config.read(cls.cred_file) config.set("general", "metadata", cls.testdir) return config @classmethod def write_config_file(cls, config=None): """Creates a OLI configuration file It is created in testdir (so create_test_dir has to be called earlier) using the credentials information given (so they had to be set earlier). Failure to do either of them will raise an AssertionException. If config is None, a default one will be used via get_default_config, otherwise it needs to be a config object derived from that.""" if config is None: config = cls.get_default_config() localfolders = os.path.join(cls.testdir, 'mail') config.set("Repository Maildir", "localfolders", localfolders) with open(os.path.join(cls.testdir, 'offlineimap.conf'), "wt") as f: config.write(f) @classmethod def delete_test_dir(cls): """Deletes the current test directory The users is responsible for cleaning that up herself.""" if os.path.isdir(cls.testdir): shutil.rmtree(cls.testdir) @classmethod def run_OLI(cls): """Runs OfflineImap :returns: (rescode, stdout (as unicode)) """ try: output = subprocess.check_output( [cls.cmd, "-c%s" % os.path.join(cls.testdir, 'offlineimap.conf')], shell=False) except subprocess.CalledProcessError as e: return (e.returncode, e.output.decode('utf-8')) return (0, output.decode('utf-8')) @classmethod def delete_remote_testfolders(cls, reponame=None): """Delete all INBOX.OLITEST* folders on the remote IMAP repository reponame: All on `reponame` or all IMAP-type repositories if None""" config = cls.get_default_config() if reponame: sections = ['Repository {0}'.format(reponame)] else: sections = [r for r in config.sections() \ if r.startswith('Repository')] sections = filter(lambda s: \ config.get(s, 'Type').lower() == 'imap', sections) for sec in sections: # Connect to each IMAP repo and delete all folders # matching the folderfilter setting. We only allow basic # settings and no fancy password getting here... # 1) connect and get dir listing host = config.get(sec, 'remotehost') user = config.get(sec, 'remoteuser') passwd = config.get(sec, 'remotepass') imapobj = imaplib.IMAP4(host) imapobj.login(user, passwd) res_t, data = imapobj.list() assert res_t == 'OK' dirs = [] for d in data: if d == '': continue if isinstance(d, tuple): # literal (unquoted) folder = b'"%s"' % d[1].replace('"', '\\"') else: m = re.search(br''' [ ] # space (?P (?P"?) # starting quote ([^"]|\\")* # a non-quote or a backslashded quote (?P=quote))$ # ending quote ''', d, flags=re.VERBOSE) folder = bytearray(m.group('dir')) if not m.group('quote'): folder = '"%s"' % folder #folder = folder.replace(br'\"', b'"') # remove quoting dirs.append(folder) # 2) filter out those not starting with INBOX.OLItest and del... dirs = [d for d in dirs if d.startswith(b'"INBOX.OLItest') or d.startswith(b'"INBOX/OLItest')] for folder in dirs: res_t, data = imapobj.delete(folder) assert res_t == 'OK', "Folder deletion of {0} failed with error"\ ":\n{1} {2}".format(folder.decode('utf-8'), res_t, data) imapobj.logout() @classmethod def create_maildir(cls, folder): """Create empty maildir 'folder' in our test maildir Does not fail if it already exists""" assert cls.testdir != None maildir = os.path.join(cls.testdir, 'mail', folder) for subdir in ('','tmp','cur','new'): try: os.makedirs(os.path.join(maildir, subdir)) except OSError as e: if e.errno != 17: # 'already exists' is ok. 
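# errno 17 is EEXIST; any other failure while creating the maildir
# subdirectories is re-raised.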
raise @classmethod def delete_maildir(cls, folder): """Delete maildir 'folder' in our test maildir Does not fail if not existing""" assert cls.testdir != None maildir = os.path.join(cls.testdir, 'mail', folder) shutil.rmtree(maildir, ignore_errors=True) @classmethod def create_mail(cls, folder, mailfile=None, content=None): """Create a mail in maildir 'folder'/new Use default mailfilename if not given. Use some default content if not given""" assert cls.testdir != None while True: # Loop till we found a unique filename mailfile = '{0}:2,'.format(random.randint(0,999999999)) mailfilepath = os.path.join(cls.testdir, 'mail', folder, 'new', mailfile) if not os.path.isfile(mailfilepath): break with open(mailfilepath,"wb") as mailf: mailf.write(b'''From: test Subject: Boo Date: 1 Jan 1980 To: test@offlineimap.org Content here.''') @classmethod def count_maildir_mails(cls, folder): """Returns the number of mails in maildir 'folder' Counting only those in cur&new (ignoring tmp).""" assert cls.testdir != None maildir = os.path.join(cls.testdir, 'mail', folder) boxes, mails = 0, 0 for dirpath, dirs, files in os.walk(maildir, False): if set(dirs) == set(['cur', 'new', 'tmp']): # New maildir folder boxes += 1 #raise RuntimeError("%s is not Maildir" % maildir) if dirpath.endswith(('/cur', '/new')): mails += len(files) return boxes, mails # find UID in a maildir filename re_uidmatch = re.compile(',U=(\d+)') @classmethod def get_maildir_uids(cls, folder): """Returns a list of maildir mail uids, 'None' if no valid uid""" assert cls.testdir != None mailfilepath = os.path.join(cls.testdir, 'mail', folder) assert os.path.isdir(mailfilepath) ret = [] for dirpath, dirs, files in os.walk(mailfilepath): if not dirpath.endswith((os.path.sep + 'new', os.path.sep + 'cur')): continue # only /new /cur are interesting for file in files: m = cls.re_uidmatch.search(file) uid = m.group(1) if m else None ret.append(uid) return ret offlineimap-6.6.1/test/OLItest/__init__.py000066400000000000000000000026351264010144500204330ustar00rootroot00000000000000# OfflineImap test library # Copyright (C) 2012- Sebastian Spaeth & contributors # # This program is free software; you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation; either version 2 of the License, or # (at your option) any later version. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. 
# # You should have received a copy of the GNU General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA __all__ = ['OLITestLib', 'TextTestRunner','TestLoader'] __productname__ = 'OfflineIMAP Test suite' __version__ = '0' __copyright__ = "Copyright 2012- Sebastian Spaeth & contributors" __author__ = 'Sebastian Spaeth' __author_email__= 'Sebastian@SSpaeth.de' __description__ = 'Moo' __license__ = "Licensed under the GNU GPL v2+ (v2 or any later version)" __homepage__ = "http://offlineimap.org" banner = """%(__productname__)s %(__version__)s %(__license__)s""" % locals() import unittest from unittest import TestLoader, TextTestRunner from .globals import default_conf from .TestRunner import OLITestLib offlineimap-6.6.1/test/OLItest/globals.py000066400000000000000000000025761264010144500203230ustar00rootroot00000000000000#Constants, that don't rely on anything else in the module # Copyright (C) 2012- Sebastian Spaeth & contributors # # This program is free software; you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation; either version 2 of the License, or # (at your option) any later version. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA try: from cStringIO import StringIO except ImportError: #python3 from io import StringIO default_conf=StringIO("""[general] #will be set automatically metadata = accounts = test ui = quiet [Account test] localrepository = Maildir remoterepository = IMAP [Repository Maildir] Type = Maildir # will be set automatically during tests localfolders = [Repository IMAP] type=IMAP # Don't hammer the server with too many connection attempts: maxconnections=1 folderfilter= lambda f: f.startswith('INBOX.OLItest') or f.startswith('INBOX/OLItest') """) offlineimap-6.6.1/test/README000066400000000000000000000013511264010144500156510ustar00rootroot00000000000000Documentation for the OfflineImap Test suite. How to run the tests ==================== - Copy the credentials.conf.sample to credentials.conf and insert credentials for an IMAP account and a Gmail account. Delete the Gmail section if you don't have a Gmail account. Do note, that the tests will change the account and upload/delete/modify it's contents and folder structure. So don't use a real used account here... - go to the top level dir (one above this one) and execute: 'python setup.py test' System requirements =================== This test suite depend on python>=2.7 to run out of the box. 
If you want to run this with python 2.6 you will need to install the backport from http://pypi.python.org/pypi/unittest2 instead.offlineimap-6.6.1/test/__init__.py000066400000000000000000000000001264010144500170700ustar00rootroot00000000000000offlineimap-6.6.1/test/credentials.conf.sample000066400000000000000000000003401264010144500214120ustar00rootroot00000000000000[Repository IMAP] type = IMAP remotehost = localhost ssl = no #sslcacertfile = #cert_fingerprint = remoteuser = user@domain remotepass = SeKr3t [Repository Gmail] type = Gmail remoteuser = user@domain remotepass = SeKr3t offlineimap-6.6.1/test/tests/000077500000000000000000000000001264010144500161335ustar00rootroot00000000000000offlineimap-6.6.1/test/tests/__init__.py000066400000000000000000000000011264010144500202330ustar00rootroot00000000000000 offlineimap-6.6.1/test/tests/test_00_globals.py000077500000000000000000000022251264010144500214720ustar00rootroot00000000000000#!/usr/bin/env python # Copyright 2013 Eygene A. Ryabinkin from offlineimap import globals import unittest class Opt: def __init__(self): self.one = "baz" self.two = 42 self.three = True class TestOfflineimapGlobals(unittest.TestCase): @classmethod def setUpClass(klass): klass.o = Opt() globals.set_options (klass.o) def test_initial_state(self): for k in self.o.__dict__.keys(): self.assertTrue(getattr(self.o, k) == getattr(globals.options, k)) def test_object_changes(self): self.o.one = "one" self.o.two = 119 self.o.three = False return self.test_initial_state() def test_modification(self): with self.assertRaises(AttributeError): globals.options.two = True def test_deletion(self): with self.assertRaises(RuntimeError): del globals.options.three def test_nonexistent_key(self): with self.assertRaises(AttributeError): a = globals.options.nosuchoption def test_double_init(self): with self.assertRaises(ValueError): globals.set_options (True) if __name__ == "__main__": suite = unittest.TestLoader().loadTestsFromTestCase(TestOfflineimapGlobals) unittest.TextTestRunner(verbosity=2).run(suite) offlineimap-6.6.1/test/tests/test_00_imaputil.py000066400000000000000000000074331264010144500216760ustar00rootroot00000000000000# Copyright (C) 2012- Sebastian Spaeth & contributors # # This program is free software; you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation; either version 2 of the License, or # (at your option) any later version. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA import unittest import logging from offlineimap import imaputil from offlineimap.ui import UI_LIST, setglobalui from offlineimap.CustomConfig import CustomConfigParser from test.OLItest import OLITestLib # Things need to be setup first, usually setup.py initializes everything. # but if e.g. 
called from command line, we take care of default values here: if not OLITestLib.cred_file: OLITestLib(cred_file='./test/credentials.conf', cmd='./offlineimap.py') def setUpModule(): logging.info("Set Up test module %s" % __name__) tdir = OLITestLib.create_test_dir(suffix=__name__) def tearDownModule(): logging.info("Tear Down test module") # comment out next line to keep testdir after test runs. TODO: make nicer OLITestLib.delete_test_dir() #Stuff that can be used #self.assertEqual(self.seq, range(10)) # should raise an exception for an immutable sequence #self.assertRaises(TypeError, random.shuffle, (1,2,3)) #self.assertTrue(element in self.seq) #self.assertFalse(element in self.seq) class TestInternalFunctions(unittest.TestCase): """While the other test files test OfflineImap as a program, these tests directly invoke internal helper functions to guarantee that they deliver results as expected""" @classmethod def setUpClass(cls): #This is run before all tests in this class config= OLITestLib.get_default_config() setglobalui(UI_LIST['quiet'](config)) def test_01_imapsplit(self): """Test imaputil.imapsplit()""" res = imaputil.imapsplit(b'(\\HasNoChildren) "." "INBOX.Sent"') self.assertEqual(res, [b'(\\HasNoChildren)', b'"."', b'"INBOX.Sent"']) res = imaputil.imapsplit(b'"mo\\" o" sdfsdf') self.assertEqual(res, [b'"mo\\" o"', b'sdfsdf']) def test_02_flagsplit(self): """Test imaputil.flagsplit()""" res = imaputil.flagsplit(b'(\\Draft \\Deleted)') self.assertEqual(res, [b'\\Draft', b'\\Deleted']) res = imaputil.flagsplit(b'(FLAGS (\\Seen Old) UID 4807)') self.assertEqual(res, [b'FLAGS', b'(\\Seen Old)', b'UID', b'4807']) def test_04_flags2hash(self): """Test imaputil.flags2hash()""" res = imaputil.flags2hash(b'(FLAGS (\\Seen Old) UID 4807)') self.assertEqual(res, {b'FLAGS': b'(\\Seen Old)', b'UID': b'4807'}) def test_05_flagsimap2maildir(self): """Test imaputil.flagsimap2maildir()""" res = imaputil.flagsimap2maildir(b'(\\Draft \\Deleted)') self.assertEqual(res, set(b'DT')) def test_06_flagsmaildir2imap(self): """Test imaputil.flagsmaildir2imap()""" res = imaputil.flagsmaildir2imap(set(b'DR')) self.assertEqual(res, b'(\\Answered \\Draft)') # test all possible flags res = imaputil.flagsmaildir2imap(set(b'SRFTD')) self.assertEqual(res, b'(\\Answered \\Deleted \\Draft \\Flagged \\Seen)') def test_07_uid_sequence(self): """Test imaputil.uid_sequence()""" res = imaputil.uid_sequence([1,2,3,4,5,10,12,13]) self.assertEqual(res, b'1:5,10,12:13') offlineimap-6.6.1/test/tests/test_01_basic.py000066400000000000000000000156171264010144500211370ustar00rootroot00000000000000# Copyright (C) 2012- Sebastian Spaeth & contributors # # This program is free software; you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation; either version 2 of the License, or # (at your option) any later version. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA import random import unittest import logging import os, sys from test.OLItest import OLITestLib # Things need to be setup first, usually setup.py initializes everything. # but if e.g. 
called from command line, we take care of default values here: if not OLITestLib.cred_file: OLITestLib(cred_file='./test/credentials.conf', cmd='./offlineimap.py') def setUpModule(): logging.info("Set Up test module %s" % __name__) tdir = OLITestLib.create_test_dir(suffix=__name__) def tearDownModule(): logging.info("Tear Down test module") OLITestLib.delete_test_dir() #Stuff that can be used #self.assertEqual(self.seq, range(10)) # should raise an exception for an immutable sequence #self.assertRaises(TypeError, random.shuffle, (1,2,3)) #self.assertTrue(element in self.seq) #self.assertFalse(element in self.seq) class TestBasicFunctions(unittest.TestCase): def setUp(self): OLITestLib.delete_remote_testfolders() def tearDown(self): OLITestLib.delete_remote_testfolders() def test_01_olistartup(self): """Tests if OLI can be invoked without exceptions Cleans existing remote test folders. Then syncs all "OLItest* (specified in the default config) to our local Maildir. The result should be 0 folders and 0 mails.""" code, res = OLITestLib.run_OLI() self.assertEqual(res, "") boxes, mails = OLITestLib.count_maildir_mails('') self.assertTrue((boxes, mails)==(0,0), msg="Expected 0 folders and 0 " "mails, but sync led to {0} folders and {1} mails".format( boxes, mails)) def test_02_createdir(self): """Create local 'OLItest 1', sync""" OLITestLib.delete_maildir('') #Delete all local maildir folders OLITestLib.create_maildir('INBOX.OLItest 1') code, res = OLITestLib.run_OLI() self.assertEqual(res, "") boxes, mails = OLITestLib.count_maildir_mails('') self.assertTrue((boxes, mails)==(1,0), msg="Expected 1 folders and 0 " "mails, but sync led to {0} folders and {1} mails".format( boxes, mails)) def test_03_createdir_quote(self): """Create local 'OLItest "1"' maildir, sync Folder names with quotes used to fail and have been fixed, so one is included here as a small challenge.""" OLITestLib.delete_maildir('') #Delete all local maildir folders OLITestLib.create_maildir('INBOX.OLItest "1"') code, res = OLITestLib.run_OLI() if 'unallowed folder' in res: raise unittest.SkipTest("remote server doesn't handle quote") self.assertEqual(res, "") boxes, mails = OLITestLib.count_maildir_mails('') self.assertTrue((boxes, mails)==(1,0), msg="Expected 1 folders and 0 " "mails, but sync led to {0} folders and {1} mails".format( boxes, mails)) def test_04_nametransmismatch(self): """Create mismatching remote and local nametrans rules This should raise an error.""" config = OLITestLib.get_default_config() config.set('Repository IMAP', 'nametrans', 'lambda f: f' ) config.set('Repository Maildir', 'nametrans', 'lambda f: f + "moo"' ) OLITestLib.write_config_file(config) code, res = OLITestLib.run_OLI() #logging.warn("%s %s "% (code, res)) # We expect an INFINITE FOLDER CREATION WARNING HERE.... mismatch = "ERROR: INFINITE FOLDER CREATION DETECTED!" in res self.assertEqual(mismatch, True, msg="Mismatching nametrans rules did " "NOT trigger an 'infinite folder generation' error. Output was:\n" "{0}".format(res)) # Write out default config file again OLITestLib.write_config_file() def test_05_createmail(self): """Create mail in OLItest 1, sync, wipe folder sync Currently, this will mean the folder will be recreated locally. 
At some point when remote folder deletion is implemented, this behavior will change.""" OLITestLib.delete_remote_testfolders() OLITestLib.delete_maildir('') #Delete all local maildir folders OLITestLib.create_maildir('INBOX.OLItest') OLITestLib.create_mail('INBOX.OLItest') code, res = OLITestLib.run_OLI() #logging.warn("%s %s "% (code, res)) self.assertEqual(res, "") boxes, mails = OLITestLib.count_maildir_mails('') self.assertTrue((boxes, mails)==(1,1), msg="Expected 1 folders and 1 " "mails, but sync led to {0} folders and {1} mails".format( boxes, mails)) # The local Mail should have been assigned a proper UID now, check! uids = OLITestLib.get_maildir_uids('INBOX.OLItest') self.assertFalse (None in uids, msg = "All mails should have been "+ \ "assigned the IMAP's UID number, but {0} messages had no valid ID "\ .format(len([None for x in uids if x==None]))) def test_06_createfolders(self): """Test if createfolders works as expected Create a local Maildir, then sync with remote "createfolders" disabled. Delete local Maildir and sync. We should have no new local maildir then. TODO: Rewrite this test to directly test and count the remote folders when the helper functions have been written""" config = OLITestLib.get_default_config() config.set('Repository IMAP', 'createfolders', 'False' ) OLITestLib.write_config_file(config) # delete all remote and local testfolders OLITestLib.delete_remote_testfolders() OLITestLib.delete_maildir('') OLITestLib.create_maildir('INBOX.OLItest') code, res = OLITestLib.run_OLI() #logging.warn("%s %s "% (code, res)) self.assertEqual(res, "") OLITestLib.delete_maildir('INBOX.OLItest') code, res = OLITestLib.run_OLI() self.assertEqual(res, "") boxes, mails = OLITestLib.count_maildir_mails('') self.assertTrue((boxes, mails)==(0,0), msg="Expected 0 folders and 0 " "mails, but sync led to {} folders and {} mails".format( boxes, mails)) offlineimap-6.6.1/test/tests/test_02_MappedIMAP.py000066400000000000000000000052471264010144500217320ustar00rootroot00000000000000# Copyright (C) 2012- Sebastian Spaeth & contributors # # This program is free software; you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation; either version 2 of the License, or # (at your option) any later version. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA import random import unittest import logging import os, sys from test.OLItest import OLITestLib # Things need to be setup first, usually setup.py initializes everything. # but if e.g. 
called from command line, we take care of default values here: if not OLITestLib.cred_file: OLITestLib(cred_file='./test/credentials.conf', cmd='./offlineimap.py') def setUpModule(): logging.info("Set Up test module %s" % __name__) tdir = OLITestLib.create_test_dir(suffix=__name__) def tearDownModule(): logging.info("Tear Down test module") OLITestLib.delete_test_dir() #Stuff that can be used #self.assertEqual(self.seq, range(10)) # should raise an exception for an immutable sequence #self.assertRaises(TypeError, random.shuffle, (1,2,3)) #self.assertTrue(element in self.seq) #self.assertFalse(element in self.seq) class TestBasicFunctions(unittest.TestCase): #@classmethod #def setUpClass(cls): #This is run before all tests in this class # cls._connection = createExpensiveConnectionObject() #@classmethod #This is run after all tests in this class #def tearDownClass(cls): # cls._connection.destroy() # This will be run before each test #def setUp(self): # self.seq = range(10) def test_01_MappedImap(self): """Tests if a MappedIMAP sync can be invoked without exceptions Cleans existing remote test folders. Then syncs all "OLItest* (specified in the default config) to our local IMAP (Gmail). The result should be 0 folders and 0 mails.""" pass #TODO #OLITestLib.delete_remote_testfolders() #code, res = OLITestLib.run_OLI() #self.assertEqual(res, "") #boxes, mails = OLITestLib.count_maildir_mails('') #self.assertTrue((boxes, mails)==(0,0), msg="Expected 0 folders and 0" # "mails, but sync led to {} folders and {} mails".format( # boxes, mails))
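# ---------------------------------------------------------------------------
# A minimal, self-contained sketch (not part of the suite above) of a
# round-trip property implied by test_05 and test_06 of TestInternalFunctions:
# converting a maildir flag set to IMAP flag syntax with
# imaputil.flagsmaildir2imap() and back with imaputil.flagsimap2maildir()
# should return the original set. The class and method names are invented for
# illustration only; the quiet-UI setup mirrors
# TestInternalFunctions.setUpClass, and only flag letters whose IMAP
# counterparts appear in the assertions above ('DT', 'SRFTD') are used.

import unittest

from offlineimap import imaputil
from offlineimap.ui import UI_LIST, setglobalui

from test.OLItest import OLITestLib

# As in the test modules above: if this were run directly from the command
# line instead of via setup.py, OLITestLib would need the credentials file.
if not OLITestLib.cred_file:
    OLITestLib(cred_file='./test/credentials.conf', cmd='./offlineimap.py')


class TestFlagRoundTrip(unittest.TestCase):
    """Hedged sketch: maildir -> IMAP -> maildir flag conversion is lossless."""

    @classmethod
    def setUpClass(cls):
        # Same quiet UI as TestInternalFunctions, so the helpers can log.
        config = OLITestLib.get_default_config()
        setglobalui(UI_LIST['quiet'](config))

    def test_flag_roundtrip(self):
        for flagset in (set(b'DT'), set(b'SRFTD')):
            imapflags = imaputil.flagsmaildir2imap(flagset)
            # flagsimap2maildir() returns a set, so this check makes no
            # assumption about how flagsmaildir2imap() orders the IMAP
            # flag names (test_06 suggests alphabetical ordering).
            self.assertEqual(imaputil.flagsimap2maildir(imapflags), flagset)

# Like the other modules here, such a file would normally be picked up by the
# project's test runner; assuming ./test/credentials.conf exists, it could
# presumably also be run standalone with Python's unittest discovery.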