curator-5.2.0/.gitignore

.*.swp
*~
*.py[co]
build
dist
scratch
docs/_build
*.egg
.eggs
elasticsearch_curator.egg-info
elasticsearch_curator_dev.egg-info
.coverage
.idea
coverage.xml
nosetests.xml
index.html
docs/asciidoc/html_docs
wheelhouse
Elastic.ico
Vagrant/centos/6/.vagrant
Vagrant/centos/7/.vagrant
Vagrant/ubuntu/14.04/.vagrant

curator-5.2.0/.travis.yml

language: python
python:
  - "2.7"
  - "3.4"
  - "3.5"
  - "3.6"
env:
  - ES_VERSION=5.0.2
  - ES_VERSION=5.1.2
  - ES_VERSION=5.2.2
  - ES_VERSION=5.3.3
  - ES_VERSION=5.4.3
  - ES_VERSION=5.5.2
os: linux
cache:
  pip: true
jdk:
  - oraclejdk8
install:
  - pip install -r requirements.txt
  - pip install .
script:
  - sudo apt-get update && sudo apt-get install oracle-java8-installer
  - java -version
  - sudo update-alternatives --set java /usr/lib/jvm/java-8-oracle/jre/bin/java
  - java -version
  - ./travis-run.sh

curator-5.2.0/CONTRIBUTING.md

# Contributing to Curator

All contributions are welcome: ideas, patches, documentation, bug reports, complaints, etc!

Programming is not a required skill, and there are many ways to help out! It is more important to us that you are able to contribute. That said, here are some basic guidelines, which you are free to ignore :)

## Want to learn?

Want to write your own code to do something Curator doesn't do out of the box?

* [Curator API Documentation](http://curator.readthedocs.io/)

Since version 2.0, Curator ships with both an API and wrapper scripts (which are actually defined as entry points). This allows you to write your own scripts to accomplish similar goals, or even new and different things, with the [Curator API](http://curator.readthedocs.io/) and the [Elasticsearch Python API](http://elasticsearch-py.readthedocs.io/).

Want to know how to use the command-line interface (CLI)?

* [Curator CLI Documentation](http://www.elastic.co/guide/en/elasticsearch/client/curator/current/index.html)

The Curator CLI documentation is now part of the document repository at http://elastic.co/guide, at http://www.elastic.co/guide/en/elasticsearch/client/curator/current/index.html

Want to lurk about and see what others are doing with Curator?

* The IRC channels (#logstash and #elasticsearch on irc.freenode.org) are good places for this

## Got Questions?

Have a problem you want Curator to solve for you?

* You are welcome to join the IRC channel #logstash (or #elasticsearch) on irc.freenode.org and ask for help there!

## Have an Idea or Feature Request?

* File a ticket on [github](https://github.com/elastic/curator/issues)

## Something Not Working? Found a Bug?

If you think you found a bug, it probably is a bug.

* File it on [github](https://github.com/elastic/curator/issues)

# Contributing Documentation and Code Changes

If you have a bugfix or new feature that you would like to contribute to Curator, and you think it will take more than a few minutes to produce the fix (i.e., write code), it is worth discussing the change with the Curator users and developers first!
You can reach us via [github](https://github.com/elastic/curator/issues), or via IRC (#logstash or #elasticsearch on Freenode IRC).

Documentation is in two parts: API and CLI documentation.

API documentation is generated from comments inside the classes and methods within the code. This documentation is rendered and hosted at http://curator.readthedocs.io

CLI documentation is in Asciidoc format in the GitHub repository at https://github.com/elastic/curator/tree/master/docs/asciidoc. This documentation can be changed via a pull request, as with any other code change.

## Contribution Steps

1. Test your changes! Run the test suite (`python setup.py test`). Please note that this requires an Elasticsearch instance. The tests will try to connect to your local Elasticsearch instance and run integration tests against it. **This will delete all the data stored there!** You can use the env variable `TEST_ES_SERVER` to point to a different instance (for example, `otherhost:9203`).
2. Please make sure you have signed our [Contributor License Agreement](http://www.elastic.co/contributor-agreement/). We are not asking you to assign copyright to us, but to give us the right to distribute your code without restriction. We ask this of all contributors in order to assure our users of the origin and continuing existence of the code. You only need to sign the CLA once.
3. Send a pull request! Push your changes to your fork of the repository and [submit a pull request](https://help.github.com/articles/using-pull-requests). In the pull request, describe what your changes do and mention any bugs/issues related to the pull request.

curator-5.2.0/CONTRIBUTORS

The following is a list of people who have contributed ideas, code, bug reports, or in general have helped Curator along its way.

Contributors:
* Jordan Sissel (jordansissel) (For Logstash, first and foremost)
* Shay Banon (kimchy) (For Elasticsearch, of course!)
* Aaron Mildenstein (untergeek)
* Njal Karevoll
* François Deppierraz
* Honza Kral (HonzaKral)
* Benjamin Smith (benjaminws)
* Colin Moller (LeftyBC)
* Elliot (edgeofnite)
* Ram Viswanadha (ramv)
* Chris Meisinger (cmeisinger)
* Stuart Warren (stuart-warren)
* (gitshaw)
* (sfritz)
* (sjoelsam)
* Jose Diaz-Gonzalez (josegonzalez)
* Arie Bro (arieb)
* David Harrigan (dharrigan)
* Mathieu Geli (gelim)
* Nick Ethier (nickethier)
* Mohab Usama (mohabusama)
* Xavier Calland (xavier-calland)
* Chad Schellenger (cschellenger)
* Kamil Essekkat (ekamil)
* (gbutt)
* Ben Buchacher (bbuchacher)
* Ehtesh Choudhury (shurane)
* Markus Fischer (mfn)
* Fabien Wernli (faxm0dem)
* Michael Weiser (michaelweiser)
* (digital-wonderland)
* cassiano (cassianoleal)
* Matt Dainty (bodgit)
* Alex Philipp (alex-sf)
* (krzaczek)
* Justin Lintz (jlintz)
* Jeremy Falling (jjfalling)
* Ian Babrou (bobrik)
* Ferenc Erki (ferki)
* George Heppner (gheppner)
* Matt Hughes (matthughes)
* Brian Lalor (blalor)
* Paweł Krzaczkowski (krzaczek)
* Ben Tse (bt5e)
* Tom Hendrikx (whyscream)
* Christian Vozar (christianvozar)
* Magnus Baeck (magnusbaeck)
* Robin Kearney (rk295)
* (cfeio)
* (malagoli)
* Dan Sheridan (djs52)
* Michael-Keith Bernard (SegFaultAX)
* Simon Lundström (simmel)
* (pkr1234)
* Mark Feltner (feltnerm)
* William Jimenez (wjimenez5271)
* Jeremy Canady (jrmycanady)
* Steven Ottenhoff (steffo)
* Ole Rößner (Basster)
* Jack (univerio)
* Tomáš Mózes (hydrapolic)
* Gary Gao (garyelephant)
* Panagiotis Moustafellos (pmoust)
* (pbamba)
* Pavel Strashkin (xaka)
* Wadim Kruse (wkruse)
* Richard Megginson (richm)
* Thibaut Ackermann (thib-ack)
* (zzugg)
* Julien Mancuso (petitout)
* Spencer Herzberg (sherzberg)
* Luke Waite (lukewaite)
* (dtrv)
* Christopher "Chief" Najewicz (chiefy)
* Filipe Gonçalves (basex)
* Sönke Liebau (soenkeliebau)
* Timothy Schroder (tschroeder-zendesk)
* Jared Carey (jpcarey)
* Juraj Seffer (jurajseffer)

curator-5.2.0/Changelog.rst

.. _changelog:

Changelog
=========

5.1.2 (? ? ?)
-------------

**Bug Fixes**

* The ``restore_check`` function did not work properly with wildcard index patterns. This has been rectified, and an integration test added to satisfy this. Reported in #989 (untergeek)

5.1.1 (8 June 2017)
-------------------

**Bug Fixes**

* Mock and cx_Freeze don't play well together. Packages weren't working, so I reverted to the string-based comparison, as before.

5.1.0 (8 June 2017)
-------------------

**New Features**

* Index Settings are here! First requested as far back as #160, it's been requested in various forms culminating in #656. The official documentation addresses the usage. (untergeek)
* Remote reindex now adds the ability to migrate from one cluster to another, preserving the index names, or optionally adding a prefix and/or a suffix. The official documentation shows you how, and a sketch follows at the end of this entry. (untergeek)
* Added support for naming rollover indices. #970 (jurajseffer)
* Testing against ES 5.4.1, 5.3.3

**Bug Fixes**

* Since Curator no longer supports old versions of python, convert tests to use ``isinstance``. #973 (untergeek)
* Fix stray instance of ``is not`` comparison instead of ``!=`` #972 (untergeek)
* Increase remote client timeout to 180 seconds for remote reindex. #930 (untergeek)

**General**

* elasticsearch-py dependency bumped to 5.4.0
* Added mock dependency due to isinstance and testing requirements
* AWS ES 5.3 officially supports Curator now. Documentation has been updated to reflect this.
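For illustration, here is a minimal action-file sketch of the remote reindex migration described in this entry. The remote host and the ``logstash-`` prefix are hypothetical placeholders, and the official documentation remains the authoritative reference for the option names:

.. code-block:: yaml

   actions:
     1:
       action: reindex
       description: >-
         Migrate all logstash- indices from the remote cluster into this one,
         preserving their names, with an optional prefix added to each.
       options:
         wait_interval: 9
         max_wait: -1
         migration_prefix: archive-
         request_body:
           source:
             remote:
               host: http://otherhost:9200
             index: REINDEX_SELECTION
           dest:
             index: MIGRATION
         remote_filters:
         - filtertype: pattern
           kind: prefix
           value: logstash-
       filters:
       - filtertype: none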
5.0.4 (16 May 2017)
-------------------

**Bug Fixes**

* The ``_recovery`` check needs to compare using ``!=`` instead of ``is not``, which apparently does not accurately compare unicode strings. Reported in #966. (untergeek)

5.0.3 (15 May 2017)
-------------------

**Bug Fixes**

* Restoring a snapshot on an exceptionally fast cluster/node can create a race condition where a ``_recovery`` check returns an empty dictionary ``{}``, which causes Curator to fail. Added test and code to correct this. Reported in #962. (untergeek)

5.0.2 (4 May 2017)
------------------

**Bug Fixes**

* Nasty bug in schema validation fixed where boolean options or filter flags would validate as ``True`` if non-boolean types were submitted. Reported in #945. (untergeek)
* Check for presence of alias after reindex, in case the reindex was to an alias. Reported in #941. (untergeek)
* Fix an edge case where an index named with `1970.01.01` could not be sorted by index-name age. Reported in #951. (untergeek)
* Update tests to include ES 5.3.2
* Bump certifi requirement to 2017.4.17.

**Documentation**

* Document substitute strftime symbols for doing ISO Week timestrings added in #932. (untergeek)
* Document how to include file paths better. Fixes #944. (untergeek)

5.0.1 (10 April 2017)
---------------------

**Bug Fixes**

* Fixed default values for ``include_global_state`` on the restore action to be in line with defaults in Elasticsearch 5.3

**Documentation**

* Huge improvement to documentation, with many more examples.
* Address age filter limitations per #859 (untergeek)
* Address date matching behavior better per #858 (untergeek)

5.0.0 (5 April 2017)
--------------------

The full feature set of 5.0 (including alpha releases) is included here.

**New Features**

* Reindex is here! The new reindex action has a ton of flexibility. You can even reindex from remote locations, so long as the remote cluster is Elasticsearch 1.4 or newer.
* Added the ``period`` filter (#733). This allows you to select indices or snapshots based on whether they fit within a period of hours, days, weeks, months, or years. A sketch follows the **General** list below.
* Add dedicated "wait for completion" functionality. This supports health checks, recovery (restore) checks, snapshot checks, and operations which support the new tasks API. All actions which can use this have been refactored to take advantage of this. The benefit of this new feature is that client timeouts will be less likely to happen when performing long operations, like snapshot and restore. NOTE: There is one caveat: forceMerge does not support this, per the Elasticsearch API. A forceMerge call will hold the client until complete, or the client times out. There is no clean way around this that I can discern.
* Elasticsearch date math naming is supported and documented for the ``create_index`` action. An integration test is included for validation.
* Allow allocation action to unset a key/value pair by using an empty value. Requested in #906. (untergeek)
* Added support for the Rollover API. Requested in #898, and by countless others.
* Added ``warn_if_no_indices`` option for ``alias`` action in response to #883. Using this option will permit the ``alias`` add or remove to continue with a logged warning, even if the filters result in a NoIndices condition. Use with care.

**General**

* Bumped ``click`` (python module) version dependency to 6.7
* Bumped ``urllib3`` (python module) version dependency to 1.20
* Bumped ``elasticsearch`` (python module) version dependency to 5.3
* Refactored a ton of code to be cleaner and hopefully more consistent.
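For illustration, here is a minimal sketch of the new ``period`` filter described in the feature list above, assuming hypothetical daily ``logstash-`` indices named with a ``%Y.%m.%d`` timestring; it selects the indices that fall within the previous week:

.. code-block:: yaml

   actions:
     1:
       action: delete_indices
       description: Delete the daily indices belonging to the previous week.
       options:
         ignore_empty_list: True
       filters:
       - filtertype: pattern
         kind: prefix
         value: logstash-
       - filtertype: period
         source: name
         range_from: -1
         range_to: -1
         timestring: '%Y.%m.%d'
         unit: weeks
         week_starts_on: sunday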
**Bug Fixes**

* Curator now logs version incompatibilities as an error, rather than just raising an Exception. #874 (untergeek)
* The ``get_repository()`` function now properly raises an exception instead of returning `False` if nothing is found. #761 (untergeek)
* Check if an index is in an alias before attempting to delete it from the alias. Issue raised in #887. (untergeek)
* Fix allocation issues when using Elasticsearch 5.1+. Issue raised in #871 (untergeek)

**Documentation**

* Add missing repository arg to auto-gen API docs. Reported in #888 (untergeek)
* Add all new documentation and clean up for v5 specifics.

**Breaking Changes**

* IndexList no longer checks to see if there are indices on initialization.

5.0.0a1 (23 March 2017)
-----------------------

This is the first alpha release of Curator 5. This should not be used for production! There `will` be many more changes before 5.0.0 is released.

**New Features**

* Allow allocation action to unset a key/value pair by using an empty value. Requested in #906. (untergeek)
* Added support for the Rollover API. Requested in #898, and by countless others.
* Added ``warn_if_no_indices`` option for ``alias`` action in response to #883. Using this option will permit the ``alias`` add or remove to continue with a logged warning, even if the filters result in a NoIndices condition. Use with care.

**Bug Fixes**

* Check if an index is in an alias before attempting to delete it from the alias. Issue raised in #887. (untergeek)
* Fix allocation issues when using Elasticsearch 5.1+. Issue raised in #871 (untergeek)

**Documentation**

* Add missing repository arg to auto-gen API docs. Reported in #888 (untergeek)

4.2.6 (27 January 2017)
-----------------------

**General**

* Update Curator to use version 5.1 of the ``elasticsearch-py`` python module. With this change, there will be no reverse compatibility with Elasticsearch 2.x. For 2.x versions, continue to use the 4.x branches of Curator.
* Tests were updated to reflect the changes in API calls, which were minimal.
* Remove "official" support for Python 2.6. If you must use Curator on a system that uses Python 2.6 (RHEL/CentOS 6 users), it is recommended that you use the official RPM package as it is a frozen binary built on Python 3.5.x which will not conflict with your system Python.
* Use ``isinstance()`` to verify client object. #862 (cp2587)
* Prune older versions from Travis CI tests.
* Update ``certifi`` dependency to latest version

**Documentation**

* Add version compatibility section to official documentation.
* Update docs to reflect changes. Remove cruft and references to older versions.

4.2.5 (22 December 2016)
------------------------

**General**

* Add and increment test versions for Travis CI. #839 (untergeek)
* Make `filter_list` optional in snapshot, show_snapshot and show_indices singleton actions. #853 (alexef)

**Bug Fixes**

* Fix cli integration test when different host/port are specified. Reported in #843 (untergeek)
* Catch empty list condition during filter iteration in singleton actions. Reported in #848 (untergeek)

**Documentation**

* Add docs regarding how filters are ANDed together, and how to do an OR with the regex pattern filter type. Requested in #842 (untergeek)
* Fix typo in Click version in docs. #850 (breml)
* Where applicable, replace `[source,text]` with `[source,yaml]` for better formatting in the resulting docs.

4.2.4 (7 December 2016)
-----------------------

**Bug Fixes**

* ``--wait_for_completion`` should be `True` by default for Snapshot singleton action.
  Reported in #829 (untergeek)
* Increase `version_max` to 5.1.99. Prematurely reported in #832 (untergeek)
* Make the '.security' index visible for snapshots so long as proper credentials are used. Reported in #826 (untergeek)

4.2.3.post1 (22 November 2016)
------------------------------

This fix is `only` going in for ``pip``-based installs. There are no other code changes.

**Bug Fixes**

* Fixed incorrect assumption of PyPI picking up dependency for certifi. It is still a dependency, but should not affect ``pip`` installs with an error any more. Reported in #821 (untergeek)

4.2.3 (21 November 2016)
------------------------

4.2.2 was pulled immediately after release after it was discovered that the Windows binary distributions were still not including the certifi-provided certificates. This has now been remedied.

**General**

* ``certifi`` is now officially a requirement.
* ``setup.py`` now forcibly includes the ``certifi`` certificate PEM file in the "frozen" distributions (i.e., the compiled versions). The ``get_client`` method was updated to reflect this and catch it for both the Linux and Windows binary distributions. This should `finally` put to rest #810

4.2.2 (21 November 2016)
------------------------

**Bug Fixes**

* The certifi-provided certificates were not propagating to the compiled RPM/DEB packages. This has been corrected. Reported in #810 (untergeek)

**General**

* Added missing ``--ignore_empty_list`` option to singleton actions. Requested in #812 (untergeek)

**Documentation**

* Add a FAQ entry regarding the click module's need for Unicode when using Python 3. Kind of a bug fix too, as the entry_points were altered to catch this omission and report a potential solution on the command-line. Reported in #814 (untergeek)
* Change the "Command-Line" documentation header to be "Running Curator"

4.2.1 (8 November 2016)
-----------------------

**Bug Fixes**

* In the course of package release testing, an undesirable scenario was caught where default values for boolean flags in ``curator_cli`` were improperly overriding values from a YAML config file.

**General**

* Adding in direct download URLs for the RPM, DEB, tarball and zip packages.

4.2.0 (4 November 2016)
-----------------------

**New Features**

* Shard routing allocation enable/disable. This will allow you to disable shard allocation routing before performing one or more actions, and then re-enable after it is complete. Requested in #446 (untergeek)
* Curator 3.x-style command-line. This is now ``curator_cli``, to differentiate it from the current binary. Not all actions are available, but the most commonly used ones are. With the addition in 4.1.0 of schema and configuration validation, there's even a way to still do filter chaining on the command-line! Requested in #767, and by many other users (untergeek)

**General**

* Update testing to the most recent versions.
* Lock elasticsearch-py module version at >= 2.4.0 and <= 3.0.0. There are API changes in the 5.0 release that cause tests to fail.

**Bug Fixes**

* Guarantee that binary packages are built from the latest Python + libraries. This ensures that SSL/TLS will work without warning messages about insecure connections, unless they actually are insecure. Reported in #780, though the reported problem isn't what was fixed. The fix is needed based on what was discovered while troubleshooting the problem.
  (untergeek)

4.1.2 (6 October 2016)
----------------------

This release does not actually add any new code to Curator, but instead improves documentation and includes new Linux binary packages.

**General**

* New Curator binary packages for common Linux systems! These will be found in the same repositories that the python-based packages are in, but have no dependencies. All necessary libraries/modules are bundled with the binary, so everything should work out of the box. This feature doesn't change any other behavior, so it's not a major release. These binaries have been tested in:

  * CentOS 6 & 7
  * Ubuntu 12.04, 14.04, 16.04
  * Debian 8

  They do not work in Debian 7 (library mismatch). They may work in other systems, but that is untested. The script used is in the unix_packages directory. The Vagrantfiles for the various build systems are in the Vagrant directory.

**Bug Fixes**

* The only bug that can be called a bug is actually a stray ``.exe`` suffix in the binary package creation section (cx_freeze) of ``setup.py``. The Windows binaries should have ``.exe`` extensions, but not unix variants.
* Elasticsearch 5.0.0-beta1 testing revealed that a document ID is required during document creation in tests. This has been fixed, and a redundant bit of code in the forcemerge integration test was removed.

**Documentation**

* The documentation has been updated and improved. Examples and installation are now top-level events, with the sub-sections each having their own link. They also now show how to install and use the binary packages, and the section on installation from source has been improved. The missing section on installing the voluptuous schema verification module has been written and included. #776 (untergeek)

4.1.1 (27 September 2016)
-------------------------

**Bug Fixes**

* String-based booleans are now properly coerced. This fixes an issue where `True`/`False` were used in environment variables, but not recognized. #765 (untergeek)
* Fix missing `count` method in ``__map_method`` in SnapshotList. Reported in #766 (untergeek)

**General**

* Update es_repo_mgr to use the same client/logging YAML config file. Requested in #752 (untergeek)

**Schema Validation**

* Cases where ``source`` was not defined in a filter (but should have been) were informing users that a `timestring` field was there that shouldn't have been. This edge case has been corrected.

**Documentation**

* Added notifications and FAQ entry to explain that AWS ES is not supported.

4.1.0 (6 September 2016)
------------------------

**New Features**

* Configuration and Action file schema validation. Requested in #674 (untergeek)
* Alias filtertype! With this filter, you can select indices based on whether they are part of an alias. Merged in #748 (untergeek)
* Count filtertype! With this filter, you can now configure Curator to only keep the most recent _n_ indices (or snapshots!). Merged in #749 (untergeek)
* Experimental! Use environment variables in your YAML configuration files. This was a popular request, #697. (untergeek)

**General**

* New requirement! ``voluptuous`` Python schema validation module
* Requirement version bump: Now requires ``elasticsearch-py`` 2.4.0

**Bug Fixes**

* ``delete_aliases`` option in ``close`` action no longer results in an error if not all selected indices have an alias. Add test to confirm expected behavior. Reported in #736 (untergeek)

**Documentation**

* Add information to FAQ regarding indices created before Elasticsearch 1.4.
  Merged in #747

4.0.6 (15 August 2016)
----------------------

**Bug Fixes**

* Update old calls used with ES 1.x to reflect changes in 2.x+. This was necessary to work with Elasticsearch 5.0.0-alpha5. Fixed in #728 (untergeek)

**Doc Fixes**

* Add section detailing that the value of a ``value`` filter element should be encapsulated in single quotes. Reported in #726. (untergeek)

4.0.5 (3 August 2016)
---------------------

**Bug Fixes**

* Fix incorrect variable name for AWS Region reported in #679 (basex)
* Fix ``filter_by_space()`` to not fail when index age metadata is not present. Indices without the appropriate age metadata will instead be excluded, with a debug-level message. Reported in #724 (untergeek)

**Doc Fixes**

* Fix documentation for the space filter and the source filter element.

4.0.4 (1 August 2016)
---------------------

**Bug Fixes**

* Fix incorrect variable name in Allocation action. #706 (lukewaite)
* Incorrect error message in ``create_snapshot_body`` reported in #711 (untergeek)
* Test for empty index list object should happen in action initialization for snapshot action. Discovered in #711. (untergeek)

**Doc Fixes**

* Add menus to asciidoc chapters #704 (untergeek)
* Add pyyaml dependency #710 (dtrv)

4.0.3 (22 July 2016)
--------------------

**General**

* 4.0.2 didn't work for ``pip`` installs due to an omission in the MANIFEST.in file. This came up during release testing, but before the release was fully published. As the release was never fully published, this should not have actually affected anyone.

**Bug Fixes**

* These are the same as 4.0.2, but it was never fully released.
* All default settings are now values returned from functions instead of constants. This was resulting in settings getting stomped on. New test addresses the original complaint. This removes the need for ``deepcopy``. See issue #687 (untergeek)
* Fix ``host`` vs. ``hosts`` issue in ``get_client()`` rather than the non-functional function in ``repomgrcli.py``.
* Update versions being tested.
* Community contributed doc fixes.
* Reduced logging verbosity by making most messages debug level. #684 (untergeek)
* Fixed log whitelist behavior (and switched to blacklisting instead). Default behavior will now filter traffic from the ``elasticsearch`` and ``urllib3`` modules.
* Fix Travis CI testing to accept some skipped tests, as needed. #695 (untergeek)
* Fix missing empty index test in snapshot action. #682 (sherzberg)

4.0.2 (22 July 2016)
--------------------

**Bug Fixes**

* All default settings are now values returned from functions instead of constants. This was resulting in settings getting stomped on. New test addresses the original complaint. This removes the need for ``deepcopy``. See issue #687 (untergeek)
* Fix ``host`` vs. ``hosts`` issue in ``get_client()`` rather than the non-functional function in ``repomgrcli.py``.
* Update versions being tested.
* Community contributed doc fixes.
* Reduced logging verbosity by making most messages debug level. #684 (untergeek)
* Fixed log whitelist behavior (and switched to blacklisting instead). Default behavior will now filter traffic from the ``elasticsearch`` and ``urllib3`` modules.
* Fix Travis CI testing to accept some skipped tests, as needed. #695 (untergeek)
* Fix missing empty index test in snapshot action. #682 (sherzberg)

4.0.1 (1 July 2016)
-------------------

**Bug Fixes**

* Coerce Logstash/JSON logformat type timestamp value to always use UTC.
  #661 (untergeek)
* Catch and remove indices from the actionable list if they do not have a `creation_date` field in settings. This field was introduced in ES v1.4, so that indicates a rather old index. #663 (untergeek)
* Replace missing ``state`` filter for ``snapshotlist``. #665 (untergeek)
* Restore ``es_repo_mgr`` as a stopgap until other CLI scripts are added. It will remain undocumented for now, as I am debating whether to make repository creation its own action in the API. #668 (untergeek)
* Fix dry run results for snapshot action. #673 (untergeek)

4.0.0 (24 June 2016)
--------------------

It's official! Curator 4.0.0 is released!

**Breaking Changes**

* New and improved API!
* Command-line changes. No more command-line args, except for ``--config``, ``--actions``, and ``--dry-run``:

  - ``--config`` points to a YAML client and logging configuration file. The default location is ``~/.curator/curator.yml``
  - ``--actions`` arg points to a YAML action configuration file
  - ``--dry-run`` will simulate the action(s) which would have taken place, but not actually make any changes to the cluster or its indices.

**New Features**

* Snapshot restore is here!
* YAML configuration files. Now a single file can define an entire batch of commands, each with their own filters, to be performed in sequence.
* Sort by index age not only by index name (as with previous versions of Curator), but also by index `creation_date`, or by calculations from the Field Stats API on a timestamp field.
* Atomically add/remove indices from aliases! This is possible by way of the new `IndexList` class and YAML configuration files.
* State of indices pulled and stored in `IndexList` instance. Fewer API calls required to serially test for open/close, `size_in_bytes`, etc.
* Filter by space now allows sorting by age!
* Experimental! Use AWS IAM credentials to sign requests to Elasticsearch. This requires the end user to *manually* install the `requests_aws4auth` python module.
* Optionally delete aliases from indices before closing.
* An empty index or snapshot list no longer results in an error if you set ``ignore_empty_list`` to `True`. If `True` it will still log that the action was not performed, but will continue to the next action. If `False` it will log an ERROR and exit with code 1.

**API**

* Updated API documentation
* Class: `IndexList`. This pulls all indices at instantiation, and you apply filters, which are class methods. You can iterate over as many filters as you like, in fact, due to the YAML config file.
* Class: `SnapshotList`. This pulls all snapshots from the given repository at instantiation, and you apply filters, which are class methods. You can iterate over as many filters as you like, in fact, due to the YAML config file.
* Add `wait_for_completion` to Allocation and Replicas actions. These will use the client timeout, as set by default or `timeout_override`, to determine how long to wait for timeout. These are handled in batches of indices for now.
* Allow `timeout_override` option for all actions. This allows for different timeout values per action.
* Improve API by giving each action its own `do_dry_run()` method.

**General**

* Updated use documentation for Elastic main site.
* Include example files for ``--config`` and ``--actions``.

4.0.0b2 (16 June 2016)
----------------------

**Second beta release of the 4.0 branch**

**New Feature**

* An empty index or snapshot list no longer results in an error if you set ``ignore_empty_list`` to `True`.
  If `True` it will still log that the action was not performed, but will continue to the next action. If `False` it will log an ERROR and exit with code 1. (untergeek)

4.0.0b1 (13 June 2016)
----------------------

**First beta release of the 4.0 branch!**

The release notes will be rehashing the new features in 4.0, rather than the bug fixes done during the alphas.

**Breaking Changes**

* New and improved API!
* Command-line changes. No more command-line args, except for ``--config``, ``--actions``, and ``--dry-run``:

  - ``--config`` points to a YAML client and logging configuration file. The default location is ``~/.curator/curator.yml``
  - ``--actions`` arg points to a YAML action configuration file
  - ``--dry-run`` will simulate the action(s) which would have taken place, but not actually make any changes to the cluster or its indices.

**New Features**

* Snapshot restore is here!
* YAML configuration files. Now a single file can define an entire batch of commands, each with their own filters, to be performed in sequence.
* Sort by index age not only by index name (as with previous versions of Curator), but also by index `creation_date`, or by calculations from the Field Stats API on a timestamp field.
* Atomically add/remove indices from aliases! This is possible by way of the new `IndexList` class and YAML configuration files.
* State of indices pulled and stored in `IndexList` instance. Fewer API calls required to serially test for open/close, `size_in_bytes`, etc.
* Filter by space now allows sorting by age!
* Experimental! Use AWS IAM credentials to sign requests to Elasticsearch. This requires the end user to *manually* install the `requests_aws4auth` python module.
* Optionally delete aliases from indices before closing.

**API**

* Updated API documentation
* Class: `IndexList`. This pulls all indices at instantiation, and you apply filters, which are class methods. You can iterate over as many filters as you like, in fact, due to the YAML config file.
* Class: `SnapshotList`. This pulls all snapshots from the given repository at instantiation, and you apply filters, which are class methods. You can iterate over as many filters as you like, in fact, due to the YAML config file.
* Add `wait_for_completion` to Allocation and Replicas actions. These will use the client timeout, as set by default or `timeout_override`, to determine how long to wait for timeout. These are handled in batches of indices for now.
* Allow `timeout_override` option for all actions. This allows for different timeout values per action.
* Improve API by giving each action its own `do_dry_run()` method.

**General**

* Updated use documentation for Elastic main site.
* Include example files for ``--config`` and ``--actions``.

4.0.0a10 (10 June 2016)
-----------------------

**New Features**

* Snapshot restore is here!
* Optionally delete aliases from indices before closing. Fixes #644 (untergeek)

**General**

* Add `wait_for_completion` to Allocation and Replicas actions. These will use the client timeout, as set by default or `timeout_override`, to determine how long to wait for timeout. These are handled in batches of indices for now.
* Allow `timeout_override` option for all actions. This allows for different timeout values per action.

**Bug Fixes**

* Disallow use of `master_only` if multiple hosts are used. Fixes #615 (untergeek)
* Fix an issue where arguments weren't being properly passed and populated.
* ForceMerge replaced Optimize in ES 2.1.0.
* Fix prune_nones to work with Python 2.6.
  Fixes #619 (untergeek)
* Fix TimestringSearch to work with Python 2.6. Fixes #622 (untergeek)
* Add language classifiers to ``setup.py``. Fixes #640 (untergeek)
* Changed references to readthedocs.org to be readthedocs.io.

4.0.0a9 (27 Apr 2016)
---------------------

**General**

* Changed `create_index` API to use kwarg `extra_settings` instead of `body`
* Normalized Alias action to use `name` instead of `alias`. This simplifies documentation by reducing the number of option elements.
* Streamlined some code
* Made `exclude` a filter element setting for all filters. Updated all examples to show this.
* Improved documentation

**New Features**

* Alias action can now accept `extra_settings` to allow adding filters, and/or routing.

4.0.0a8 (26 Apr 2016)
---------------------

**Bug Fixes**

* Fix to use `optimize` with versions of Elasticsearch < 5.0
* Fix missing setting in testvars

4.0.0a7 (25 Apr 2016)
---------------------

**Bug Fixes**

* Fix AWS4Auth error.

4.0.0a6 (25 Apr 2016)
---------------------

**General**

* Documentation updates.
* Improve API by giving each action its own `do_dry_run()` method.

**Bug Fixes**

* Do not escape characters other than ``.`` and ``-`` in timestrings. Fixes #602 (untergeek)

**New Features**

* Added `CreateIndex` action.

4.0.0a4 (21 Apr 2016)
---------------------

**Bug Fixes**

* Require `pyyaml` 3.10 or better.
* In the case that no `options` are in an action, apply the defaults.

4.0.0a3 (21 Apr 2016)
---------------------

It's time for Curator 4.0 alpha!

**Breaking Changes**

* New API! (again?!)
* Command-line changes. No more command-line args, except for ``--config``, ``--actions``, and ``--dry-run``:

  - ``--config`` points to a YAML client and logging configuration file. The default location is ``~/.curator/curator.yml``
  - ``--actions`` arg points to a YAML action configuration file
  - ``--dry-run`` will simulate the action(s) which would have taken place, but not actually make any changes to the cluster or its indices.

**General**

* Updated API documentation
* Updated use documentation for Elastic main site.
* Include example files for ``--config`` and ``--actions``.

**New Features**

* Sort by index age not only by index name (as with previous versions of Curator), but also by index `creation_date`, or by calculations from the Field Stats API on a timestamp field.
* Class: `IndexList`. This pulls all indices at instantiation, and you apply filters, which are class methods. You can iterate over as many filters as you like, in fact, due to the YAML config file.
* Class: `SnapshotList`. This pulls all snapshots from the given repository at instantiation, and you apply filters, which are class methods. You can iterate over as many filters as you like, in fact, due to the YAML config file.
* YAML configuration files. Now a single file can define an entire batch of commands, each with their own filters, to be performed in sequence (a sketch of the configuration files follows this entry).
* Atomically add/remove indices from aliases! This is possible by way of the new `IndexList` class and YAML configuration files.
* State of indices pulled and stored in `IndexList` instance. Fewer API calls required to serially test for open/close, `size_in_bytes`, etc.
* Filter by space now allows sorting by age!
* Experimental! Use AWS IAM credentials to sign requests to Elasticsearch. This requires the end user to *manually* install the `requests_aws4auth` python module.
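To illustrate the file-based configuration introduced in this alpha, here is a minimal sketch of the two YAML files, with hypothetical hosts and filter values; the bundled example files are the authoritative reference for the option names:

.. code-block:: yaml

   # ~/.curator/curator.yml -- client and logging configuration
   client:
     hosts:
       - 127.0.0.1
     port: 9200
     timeout: 30
   logging:
     loglevel: INFO
     logformat: default

.. code-block:: yaml

   # actions.yml -- a batch of actions, each with its own filters
   actions:
     1:
       action: close
       description: Close logstash- indices older than 30 days.
       options:
         delete_aliases: False
       filters:
       - filtertype: pattern
         kind: prefix
         value: logstash-
       - filtertype: age
         source: name
         direction: older
         timestring: '%Y.%m.%d'
         unit: days
         unit_count: 30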
3.5.1 (21 March 2016)
---------------------

**Bug fixes**

* Add more logging information to snapshot delete method #582 (untergeek)
* Improve default timeout, logging, and exception handling for `seal` command #583 (untergeek)
* Fix use of default snapshot name. #584 (untergeek)

3.5.0 (16 March 2016)
---------------------

**General**

* Add support for the `--client-cert` and `--client-key` command line parameters and client_cert and client_key parameters to the get_client() call. #520 (richm)

**Bug fixes**

* Disallow users from creating snapshots with upper-case letters, which is not permitted by Elasticsearch. #562 (untergeek)
* Remove `print()` command from ``setup.py`` as it causes issues with command-line retrieval of ``--url``, etc. #568 (thib-ack)
* Remove unnecessary argument from `build_filter()` #530 (zzugg)
* Allow day of year filter to be made up with 1, 2 or 3 digits #578 (petitout)

3.4.1 (10 February 2016)
------------------------

**General**

* Update license copyright to 2016
* Use slim python version with Docker #527 (xaka)
* Changed ``--master-only`` exit code to 0 when connected to non-master node #540 (wkruse)
* Add ``cx_Freeze`` capability to ``setup.py``, plus a ``binary_release.py`` script to simplify binary package creation. #554 (untergeek)
* Set Elastic as author. #555 (untergeek)
* Put repository creation methods into API and document them. Requested in #550 (untergeek)

**Bug fixes**

* Fix sphinx documentation build error #506 (hydrapolic)
* Ensure snapshots are found before iterating #507 (garyelephant)
* Fix a doc inconsistency #509 (pmoust)
* Fix a typo in `show` documentation #513 (pbamba)
* Default to trying the cluster state for checking whether indices are closed, and then fall back to using the _cat API (for Amazon ES instances). #519 (untergeek)
* Improve logging to show time delay between optimize runs, if selected. #525 (untergeek)
* Allow elasticsearch-py module versions through 2.3.0 (a presumption at this point) #524 (untergeek)
* Improve logging in snapshot api method to reveal when a repository appears to be missing. Reported in #551 (untergeek)
* Test that ``--timestring`` has the correct variable for ``--time-unit``. Reported in #544 (untergeek)
* Allocation will exit with exit_code 0 now when there are no indices to work on. Reported in #531 (untergeek)

3.4.0 (28 October 2015)
-----------------------

**General**

* API change in elasticsearch-py 1.7.0 prevented alias operations. Fixed in #486 (HonzaKral)
* During index selection you can now select only closed indices with ``--closed-only``. Does not impact ``--all-indices``. Reported in #476. Fixed in #487 (Basster)
* API Changes in Elasticsearch 2.0.0 required some refactoring. All tests pass for ES versions 1.0.3 through 2.0.0-rc1. Fixed in #488 (untergeek)
* es_repo_mgr now has access to the same SSL options from #462. #489 (untergeek)
* Logging improvements requested in #475. (untergeek)
* Added ``--quiet`` flag. #494 (untergeek)
* Fixed ``index_closed`` to work with AWS Elasticsearch. #499 (univerio)
* Acceptable versions of Elasticsearch-py module are 1.8.0 up to 2.1.0 (untergeek)

3.3.0 (31 August 2015)
----------------------

**Announcement**

* Curator is tested in Jenkins. Each commit to the master branch is tested with both Python versions 2.7.6 and 3.4.0 against each of the following Elasticsearch versions:

  * 1.7_nightly
  * 1.6_nightly
  * 1.7.0
  * 1.6.1
  * 1.5.1
  * 1.4.4
  * 1.3.9
  * 1.2.4
  * 1.1.2
  * 1.0.3

* If you are using a version different from this, your results may vary.
**General**

* Allocation type can now also be ``include`` or ``exclude``, in addition to the existing default ``require`` type. Add ``--type`` to the allocation command to specify the type. #443 (steffo)
* Bump elasticsearch python module dependency to 1.6.0+ to enable synced_flush API call. Reported in #447 (untergeek)
* Add SSL features, ``--ssl-no-validate`` and ``certificate`` to provide other ways to validate SSL connections to Elasticsearch. #436 (untergeek)

**Bug fixes**

* Delete by space was only reporting space used by primary shards. Fixed to show all space consumed. Reported in #455 (untergeek)
* Update exit codes and messages for snapshot selection. Reported in #452 (untergeek)
* Fix potential int/float casting issues. Reported in #465 (untergeek)

3.2.3 (16 July 2015)
--------------------

**Bug fix**

* In order to address customer and community issues with bulk deletes, the ``master_timeout`` is now invoked for delete operations. This should address 503s with 30s timeouts in the debug log, even when ``--timeout`` is set to a much higher value. The ``master_timeout`` is tied to the ``--timeout`` flag value, but will not exceed 300 seconds. #420 (untergeek)

**General**

* Mixing it up a bit here by putting `General` second! The only other changes are that logging has been improved for deletes so you won't need to have the ``--debug`` flag to see if you have error codes >= 400, and some code documentation improvements.

3.2.2 (13 July 2015)
--------------------

**General**

* This is a very minor change. The ``mock`` library recently removed support for Python 2.6. As many Curator users are using RHEL/CentOS 6, which is pinned to Python 2.6, this requires the mock version referenced by Curator to also be pinned to a supported version (``mock==1.0.1``).

3.2.1 (10 July 2015)
--------------------

**General**

* Added delete verification & retry (fixed at 3x) to potentially cover an edge case in #420 (untergeek)
* Since GitHub allows rST (reStructuredText) README documents, and that's what PyPI wants also, the README has been rebuilt in rST. (untergeek)

**Bug fixes**

* If closing indices with ES 1.6+, and all indices are closed, ensure that the seal command does not try to seal all indices. Reported in #426 (untergeek)
* Capture AttributeError when sealing indices if a non-TransportError occurs. Reported in #429 (untergeek)

3.2.0 (25 June 2015)
--------------------

**New!**

* Added support to manually seal, or perform a [synced flush](http://www.elastic.co/guide/en/elasticsearch/reference/current/indices-synced-flush.html) on indices with the ``seal`` command. #394 (untergeek)
* Added *experimental* support for SSL certificate validation. In order for this to work, you must install the ``certifi`` python module: ``pip install certifi`` This feature *should* automatically work if the ``certifi`` module is installed. Please report any issues.

**General**

* Changed logging to go to stdout rather than stderr. Reopened #121 and figured they were right. This is better. (untergeek)
* Exit code 99 was unpopular. It has been removed. Reported in #371 and #391 (untergeek)
* Add ``--skip-repo-validation`` flag for snapshots. Do not validate write access to repository on all cluster nodes before proceeding. Useful for shared filesystems where intermittent timeouts can affect validation, but won't likely affect snapshot success. Requested in #396 (untergeek)
* An alias no longer needs to be pre-existent in order to use the alias command.
  #317 (untergeek)
* es_repo_mgr now passes through upstream errors in the event a repository fails to be created. Requested in #405 (untergeek)

**Bug fixes**

* In rare cases, ``*`` wildcard would not expand. Replaced with _all. Reported in #399 (untergeek)
* Beginning with Elasticsearch 1.6, closed indices cannot have their replica count altered. Attempting to do so results in this error: ``org.elasticsearch.ElasticsearchIllegalArgumentException: Can't update [index.number_of_replicas] on closed indices [[test_index]] - can leave index in an unopenable state`` As a result, the ``change_replicas`` method has been updated to prune closed indices. This change will apply to all versions of Elasticsearch. Reported in #400 (untergeek)
* Fixed es_repo_mgr repository creation verification error. Reported in #389 (untergeek)

3.1.0 (21 May 2015)
-------------------

**General**

* If ``wait_for_completion`` is true, snapshot success is now tested and logged. Reported in #253 (untergeek)
* Log & return false if a snapshot is already in progress (untergeek)
* Logs individual deletes per index, even though they happen in batch mode. Also log individual snapshot deletions. Reported in #372 (untergeek)
* Moved ``chunk_index_list`` from cli to api utils as it's now also used by ``filter.py``
* Added a warning and 10 second timer countdown if you use ``--timestring`` to filter indices, but do not use ``--older-than`` or ``--newer-than`` in conjunction with it. This is to address #348, whose behavior isn't a bug, but prevents accidental action against all of your time-series indices. The warning and timer are not displayed for ``show`` and ``--dry-run`` operations.
* Added tests for ``es_repo_mgr`` in #350
* Doc fixes

**Bug fixes**

* delete-by-space needed the same fix used for #245. Fixed in #353 (untergeek)
* Increase default client timeout for ``es_repo_mgr`` as node discovery and availability checks for S3 repositories can take a bit. Fixed in #352 (untergeek)
* If an index is closed, indicate in ``show`` and ``--dry-run`` output. Reported in #327. (untergeek)
* Fix issue where CLI parameters were not being passed to the ``es_repo_mgr`` create sub-command. Reported in #337. (feltnerm)

3.0.3 (27 Mar 2015)
-------------------

**Announcement**

This is a bug fix release. #319 and #320 are affecting a few users, so this release is being expedited.

Test count: 228
Code coverage: 99%

**General**

* Documentation for the CLI converted to Asciidoc and moved to http://www.elastic.co/guide/en/elasticsearch/client/curator/current/index.html
* Improved logging, and refactored a few methods to help with this.
* Dry-run output is now more like v2, with the index or snapshot in the log line, along with the command. Several tests needed refactoring with this change, along with a bit of documentation.

**Bug fixes**

* Fix links to repository in setup.py. Reported in #318 (untergeek)
* No more ``--delay`` with optimized indices. Reported in #319 (untergeek)
* ``--request_timeout`` not working as expected. Reinstate the version 2 timeout override feature to prevent default timeouts for ``optimize`` and ``snapshot`` operations. Reported in #320 (untergeek)
* Reduce index count to 200 for test.integration.test_cli_commands.TestCLISnapshot.test_cli_snapshot_huge_list in order to reduce or eliminate Jenkins CI test timeouts. Reported in #324 (untergeek)
* ``--dry-run`` no longer calls ``show``, but will show output in the log, as in v2. This was a recurring complaint.
  See #328 (untergeek)

3.0.2 (23 Mar 2015)
-------------------

**Announcement**

This is a bug fix release. #307 and #309 were big enough to warrant an expedited release.

**Bug fixes**

* Purge unneeded constants, and clean up config options for snapshot. Reported in #303 (untergeek)
* Don't split large index list if performing snapshots. Reported in #307 (untergeek)
* Act correctly if a zero value for `--older-than` or `--newer-than` is provided. #309 (untergeek)

3.0.1 (16 Mar 2015)
-------------------

**Announcement**

The ``regex_iterate`` method was horribly named. It has been renamed to ``apply_filter``. Methods have been added to allow API users to build a filtered list of indices similarly to how the CLI does. This was an oversight. Props to @SegFaultAX for pointing this out.

**General**

* In conjunction with the rebrand to Elastic, URLs and documentation were updated.
* Renamed horribly named `regex_iterate` method to `apply_filter` #298 (untergeek)
* Added `build_filter` method to mimic CLI calls. #298 (untergeek)
* Added Examples page in the API documentation. #298 (untergeek)

**Bug fixes**

* Refactored to show `--dry-run` info for `--disk-space` calls. Reported in #290 (untergeek)
* Added list chunking so acting on huge lists of indices won't result in a URL bigger than 4096 bytes (Elasticsearch's default limit.) Reported in https://github.com/elastic/curator/issues/245#issuecomment-77916081
* Refactored `to_csv()` method to be simpler.
* Added and removed tests according to changes. Code coverage still at 99%

3.0.0 (9 March 2015)
--------------------

**Release Notes**

The full release of Curator 3.0 is out! Check out all of the changes here!

*Note:* This release is _not_ reverse compatible with any previous version.

Because 3.0 is a major point release, there have been some major changes to both the API as well as the CLI arguments and structure. Be sure to read the updated command-line specific docs in the [wiki](https://github.com/elasticsearch/curator/wiki) and change your command-line arguments accordingly.

The API docs are still at http://curator.readthedocs.io. Be sure to read the latest docs, or select the docs for 3.0.0.

**General**

* **Breaking changes to the API.** Because this is a major point revision, changes to the API have been made which are non-reverse compatible. Before upgrading, be sure to update your scripts and test them thoroughly.
* **Python 3 support** Somewhere along the line, Curator would no longer work with Python 3. All tests now pass for both Python2 and Python3, with 99% code coverage in both environments.
* **New CLI library.** Using Click now. http://click.pocoo.org/3/ This change is especially important as it allows very easy CLI integration testing.
* **Pipelined filtering!** You can now use ``--older-than`` & ``--newer-than`` in the same command! You can also provide your own regex via the ``--regex`` parameter. You can use multiple instances of the ``--exclude`` flag.
* **Manually include indices!** With the ``--index`` parameter, you can add an index to the working list. You can provide multiple instances of the ``--index`` parameter as well!
* **Tests!** So many tests now. Test coverage of the API methods is at 100% now, and at 99% for the CLI methods. This doesn't mean that all of the tests are perfect, or that I haven't missed some scenarios. It does mean, however, that it will be much easier to write tests if something turns up missed. It also means that any new functionality will now need to have tests.
* **Iteration changes** Methods now only iterate through each index when appropriate! In fact, the only commands that iterate are `alias` and `optimize`. The `bloom` command will iterate, but only if you have added the `--delay` flag with a value greater than zero.
* **Improved packaging!** Methods have been moved into categories of ``api`` and ``cli``, and further broken out into individual modules to help them be easier to find and read.
* Check for allocation before potentially re-applying an allocation rule. #273 (ferki)
* Assigning replica count and routing allocation rules _can_ be done to closed indices. #283 (ferki)

**Bug fixes**

* Don't accidentally delete ``.kibana`` index. #261 (malagoli)
* Fix segment count for empty indices. #265 (untergeek)
* Change bloom filter cutoff Elasticsearch version to 1.4. Reported in #267 (untergeek)

3.0.0rc1 (5 March 2015)
-----------------------

**Release Notes**

RC1 is here! I'm re-releasing the Changes from all betas here, minus the intra-beta code fixes. Barring any show stoppers, the official release will be soon.

**General**

* **Breaking changes to the API.** Because this is a major point revision, changes to the API have been made which are non-reverse compatible. Before upgrading, be sure to update your scripts and test them thoroughly.
* **Python 3 support** Somewhere along the line, Curator would no longer work with Python 3. All tests now pass for both Python2 and Python3, with 99% code coverage in both environments.
* **New CLI library.** Using Click now. http://click.pocoo.org/3/ This change is especially important as it allows very easy CLI integration testing.
* **Pipelined filtering!** You can now use ``--older-than`` & ``--newer-than`` in the same command! You can also provide your own regex via the ``--regex`` parameter. You can use multiple instances of the ``--exclude`` flag.
* **Manually include indices!** With the ``--index`` parameter, you can add an index to the working list. You can provide multiple instances of the ``--index`` parameter as well!
* **Tests!** So many tests now. Test coverage of the API methods is at 100% now, and at 99% for the CLI methods. This doesn't mean that all of the tests are perfect, or that I haven't missed some scenarios. It does mean, however, that it will be much easier to write tests if something turns up missed. It also means that any new functionality will now need to have tests.
* Methods now only iterate through each index when appropriate!
* Improved packaging! Hopefully the ``entry_point`` issues some users have had will be addressed by this. Methods have been moved into categories of ``api`` and ``cli``, and further broken out into individual modules to help them be easier to find and read.
* Check for allocation before potentially re-applying an allocation rule. #273 (ferki)
* Assigning replica count and routing allocation rules _can_ be done to closed indices. #283 (ferki)

**Bug fixes**

* Don't accidentally delete ``.kibana`` index. #261 (malagoli)
* Fix segment count for empty indices. #265 (untergeek)
* Change bloom filter cutoff Elasticsearch version to 1.4. Reported in #267 (untergeek)

3.0.0b4 (5 March 2015)
----------------------

**Notes**

Integration testing! Because I finally figured out how to use the Click Testing API, I now have a good collection of command-line simulations, complete with a real back-end. This testing found a few bugs (this is why testing exists, right?), and fixed a few of them.

**Bug fixes**

* HUGE! `curator show snapshots` would _delete_ snapshots.
  This is fixed.
* Return values are now being sent from the commands.
* `scripttest` is no longer necessary (click.Test works!)
* Calling `get_snapshot` without a snapshot name returns all snapshots

3.0.0b3 (4 March 2015)
----------------------

**Bug fixes**

* setup.py was lacking the new packages "curator.api" and "curator.cli". The package works now.
* Python3 suggested I had to normalize the beta tag to just b3, so that's also changed.
* Cleaned out superfluous imports and logger references from the __init__.py files.

3.0.0-beta2 (3 March 2015)
--------------------------

**Bug fixes**

* Python3 issues resolved. Tests now pass on both Python2 and Python3

3.0.0-beta1 (3 March 2015)
--------------------------

**General**

* **Breaking changes to the API.** Because this is a major point revision, changes to the API have been made which are non-reverse compatible. Before upgrading, be sure to update your scripts and test them thoroughly.
* **New CLI library.** Using Click now. http://click.pocoo.org/3/
* **Pipelined filtering!** You can now use ``--older-than`` & ``--newer-than`` in the same command! You can also provide your own regex via the ``--regex`` parameter. You can use multiple instances of the ``--exclude`` flag.
* **Manually include indices!** With the ``--index`` parameter, you can add an index to the working list. You can provide multiple instances of the ``--index`` parameter as well!
* **Tests!** So many tests now. Unit test coverage of the API methods is at 100% now. This doesn't mean that all of the tests are perfect, or that I haven't missed some scenarios. It does mean that any new functionality will need to also have tests, now.
* Methods now only iterate through each index when appropriate!
* Improved packaging! Hopefully the ``entry_point`` issues some users have had will be addressed by this. Methods have been moved into categories of ``api`` and ``cli``, and further broken out into individual modules to help them be easier to find and read.
* Check for allocation before potentially re-applying an allocation rule. #273 (ferki)

**Bug fixes**

* Don't accidentally delete ``.kibana`` index. #261 (malagoli)
* Fix segment count for empty indices. #265 (untergeek)
* Change bloom filter cutoff Elasticsearch version to 1.4. Reported in #267 (untergeek)

2.1.2 (22 January 2015)
-----------------------

**Bug fixes**

* Do not try to set replica count if count matches provided argument. #247 (bobrik)
* Fix JSON logging (Logstash format). #250 (magnusbaeck)
* Fix bug in `filter_by_space()` which would match all indices if the provided patterns found no matches. Reported in #254 (untergeek)

2.1.1 (30 December 2014)
------------------------

**Bug fixes**

* Renamed unnecessarily redundant ``--replicas`` to ``--count`` in args for ``curator_script.py``

2.1.0 (30 December 2014)
------------------------

**General**

* Snapshot name now appears in log output or STDOUT. #178 (untergeek)
* Replicas! You can now change the replica count of indices. Requested in #175 (untergeek)
* Delay option added to Bloom Filter functionality. #206 (untergeek)
* Add 2-digit years as acceptable pattern (y vs. Y). Reported in #209 (untergeek)
* Add Docker container definition #226 (christianvozar)
* Allow the use of 0 with --older-than, --most-recent and --delete-older-than. See #208. #211 (bobrik)

**Bug fixes**

* Edge case where 1.4.0.Beta1-SNAPSHOT would break version check. Reported in #183 (untergeek)
* Typo fixed. #193 (ferki)
* Typo fixed.
#204 (gheppner) * Shows proper error in the event of concurrent snapshots. #177 (untergeek) * Fixes erroneous index display of ``_, a, l, l`` when --all-indices selected. Reported in #222 (untergeek) * Use json.dumps() to escape exceptions. Reported in #210 (untergeek) * Check if index is closed before adding to alias. Reported in #214 (bt5e) * No longer force-install argparse if pre-installed #216 (whyscream) * Bloom filters have been removed from Elasticsearch 1.5.0. Update methods and tests to act accordingly. #233 (untergeek) 2.0.2 (8 October 2014) ---------------------- **Bug fixes** * Snapshot name not displayed in log or STDOUT #185 (untergeek) * Variable name collision in delete_snapshot() #186 (untergeek) 2.0.1 (1 October 2014) ---------------------- **Bug fix** * Override default timeout when snapshotting --all-indices #179 (untergeek) 2.0.0 (25 September 2014) ------------------------- **General** * New! Separation of Elasticsearch Curator Python API and curator_script.py (untergeek) * New! ``--delay`` after optimize to allow cluster to quiesce #131 (untergeek) * New! ``--suffix`` option in addition to ``--prefix`` #136 (untergeek) * New! Support for wildcards in prefix & suffix #136 (untergeek) * Complete refactor of snapshots. Now supporting incrementals! (untergeek) **Bug fix** * Incorrect error msg if no indices sent to create_snapshot (untergeek) * Correct for API change coming in ES 1.4 #168 (untergeek) * Missing ``"`` in Logstash log format #143 (cassianoleal) * Change non-master node test to exit code 0, log as ``INFO``. #145 (untergeek) * `months` option missing from validate_timestring() (untergeek) 1.2.2 (29 July 2014) -------------------- **Bug fix** * Updated ``README.md`` to briefly explain what curator does #117 (untergeek) * Fixed ``es_repo_mgr`` logging whitelist #119 (untergeek) * Fixed absent ``months`` time-unit #120 (untergeek) * Filter out ``.marvel-kibana`` when prefix is ``.marvel-`` #120 (untergeek) * Clean up arg parsing code where redundancy exists #123 (untergeek) * Properly divide debug from non-debug logging #125 (untergeek) * Fixed ``show`` command bug caused by changes to command structure #126 (michaelweiser) 1.2.1 (24 July 2014) -------------------- **Bug fix** * Fixed the new logging when called by ``curator`` entrypoint. 1.2.0 (24 July 2014) -------------------- **General** * New! Allow user-specified date patterns: ``--timestring`` #111 (untergeek) * New! Curate weekly indices (must use week of year) #111 (untergeek) * New! Log output in logstash format ``--logformat logstash`` #111 (untergeek) * Updated! Cleaner default logs (debug still shows everything) (untergeek) * Improved! Dry runs are more visible in log output (untergeek) Errata * The ``--separator`` option was removed in lieu of user-specified date patterns. * Default ``--timestring`` for days: ``%Y.%m.%d`` (Same as before) * Default ``--timestring`` for hours: ``%Y.%m.%d.%H`` (Same as before) * Default ``--timestring`` for weeks: ``%Y.%W`` 1.1.3 (18 July 2014) -------------------- **Bug fix** * Prefix not passed in ``get_object_list()`` #106 (untergeek) * Use ``os.devnull`` instead of ``/dev/null`` for Windows #102 (untergeek) * The http auth feature was erroneously omitted #100 (bbuchacher) 1.1.2 (13 June 2014) -------------------- **Bug fix** * This was a showstopper bug for anyone using RHEL/CentOS with a Python 2.6 dependency for yum * Python 2.6 does not like format calls without an index. #96 via #95 (untergeek) * We won't talk about what happened to 1.1.1. No really. 
I hate git today :( 1.1.0 (12 June 2014) -------------------- **General** * Updated! New command structure * New! Snapshot to fs or s3 #82 (untergeek) * New! Add/Remove indices to alias #82 via #86 (cschellenger) * New! ``--exclude-pattern`` #80 (ekamil) * New! (sort of) Restored ``--log-level`` support #73 (xavier-calland) * New! show command-line options #82 via #68 (untergeek) * New! Shard Allocation Routing #82 via #62 (nickethier) **Bug fix** * Fix ``--max_num_segments`` not being passed correctly #74 (untergeek) * Change ``BUILD_NUMBER`` to ``CURATOR_BUILD_NUMBER`` in ``setup.py`` #60 (mohabusama) * Fix off-by-one error in time calculations #66 (untergeek) * Fix testing with python3 #92 (untergeek) Errata * Removed ``optparse`` compatibility. Now requires ``argparse``. 1.0.0 (25 Mar 2014) ------------------- **General** * compatible with ``elasticsearch-py`` 1.0 and Elasticsearch 1.0 (honzakral) * Lots of tests! (honzakral) * Streamline code for 1.0 ES versions (honzakral) **Bug fix** * Fix ``find_expired_indices()`` to not skip closed indices (honzakral) 0.6.2 (18 Feb 2014) ------------------- **General** * Documentation fixes #38 (dharrigan) * Add support for HTTPS URI scheme and ``optparse`` compatibility for Python 2.6 (gelim) * Add elasticsearch module version checking for future compatibility checks (untergeek) 0.6.1 (08 Feb 2014) ------------------- **General** * Added tarball versioning to ``setup.py`` (untergeek) **Bug fix** * Fix ``long_description`` by including ``README.md`` in ``MANIFEST.in`` (untergeek) * Incorrect version number in ``curator.py`` (untergeek) 0.6.0 (08 Feb 2014) ------------------- **General** * Restructured repository to a be a proper python package. (arieb) * Added ``setup.py`` file. (arieb) * Removed the deprecated file ``logstash_index_cleaner.py`` (arieb) * Updated ``README.md`` to fit the new package, most importantly the usage and installation. (arieb) * Fixes and package push to PyPI (untergeek) 0.5.2 (26 Jan 2014) ------------------- **General** * Fix boolean logic determining hours or days for time selection (untergeek) 0.5.1 (20 Jan 2014) ------------------- **General** * Fix ``can_bloom`` to compare numbers (HonzaKral) * Switched ``find_expired_indices()`` to use ``datetime`` and ``timedelta`` * Do not try and catch unrecoverable exceptions. (HonzaKral) * Future proofing the use of the elasticsearch client (i.e. work with version 1.0+ of Elasticsearch) (HonzaKral) Needs more testing, but should work. * Add tests for these scenarios (HonzaKral) 0.5.0 (17 Jan 2014) ------------------- **General** * Deprecated ``logstash_index_cleaner.py`` Use new ``curator.py`` instead (untergeek) * new script change: ``curator.py`` (untergeek) * new add index optimization (Lucene forceMerge) to reduce segments and therefore memory usage. (untergeek) * update refactor of args and several functions to streamline operation and make it more readable (untergeek) * update refactor further to clean up and allow immediate (and future) portability (HonzaKral) 0.4.0 ----- **General** * First version logged in ``CHANGELOG`` * new ``--disable-bloom-days`` feature requires 0.90.9+ http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/index-modules-codec.html#bloom-postings This can save a lot of heap space on cold indexes (i.e. not actively indexing documents) curator-5.2.0/Dockerfile000066400000000000000000000003271315226075300151750ustar00rootroot00000000000000# Docker Definition for ElasticSearch Curator FROM python:2.7.8-slim MAINTAINER Christian R. 
Vozar RUN pip install --quiet elasticsearch-curator ENTRYPOINT [ "/usr/local/bin/curator" ] curator-5.2.0/LICENSE.txt000066400000000000000000000011261315226075300150240ustar00rootroot00000000000000Copyright 2011–2017 Elasticsearch and contributors. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. curator-5.2.0/MANIFEST.in000066400000000000000000000007441315226075300147440ustar00rootroot00000000000000include Changelog.rst include CONTRIBUTORS CONTRIBUTING.md include LICENSE.txt include README.rst include Dockerfile recursive-exclude * __pycache__ recursive-exclude * *.py[co] recursive-include curator * recursive-include test * recursive-include docs * recursive-exclude curator *.pyc recursive-exclude curator *.pyo recursive-exclude docs *.pyc recursive-exclude docs *.pyo recursive-exclude test *.pyc recursive-exclude test *.pyo prune docs/_build prune docs/asciidoc/html_docs curator-5.2.0/NOTICE000066400000000000000000000103641315226075300141110ustar00rootroot00000000000000In accordance with section 4d of the Apache 2.0 license (http://www.apache.org/licenses/LICENSE-2.0), this NOTICE file is included. All users mentioned in the CONTRIBUTORS file at https://github.com/elastic/curator/blob/master/CONTRIBUTORS must be included in any derivative work. All conditions of section 4 of the Apache 2.0 license will be enforced: 4. Redistribution. You may reproduce and distribute copies of the Work or Derivative Works thereof in any medium, with or without modifications, and in Source or Object form, provided that You meet the following conditions: a. You must give any other recipients of the Work or Derivative Works a copy of this License; and b. You must cause any modified files to carry prominent notices stating that You changed the files; and c. You must retain, in the Source form of any Derivative Works that You distribute, all copyright, patent, trademark, and attribution notices from the Source form of the Work, excluding those notices that do not pertain to any part of the Derivative Works; and d. If the Work includes a "NOTICE" text file as part of its distribution, then any Derivative Works that You distribute must include a readable copy of the attribution notices contained within such NOTICE file, excluding those notices that do not pertain to any part of the Derivative Works, in at least one of the following places: within a NOTICE text file distributed as part of the Derivative Works; within the Source form or documentation, if provided along with the Derivative Works; or, within a display generated by the Derivative Works, if and wherever such third-party notices normally appear. The contents of the NOTICE file are for informational purposes only and do not modify the License. You may add Your own attribution notices within Derivative Works that You distribute, alongside or as an addendum to the NOTICE text from the Work, provided that such additional attribution notices cannot be construed as modifying the License. 
You may add Your own copyright statement to Your modifications and may provide additional or different license terms and conditions for use, reproduction, or distribution of Your modifications, or for any such Derivative Works as a whole, provided Your use, reproduction, and distribution of the Work otherwise complies with the conditions stated in this License. Contributors: * Jordan Sissel (jordansissel) (For Logstash, first and foremost) * Shay Banon (kimchy) (For Elasticsearch, of course!) * Aaron Mildenstein (untergeek) * Njal Karevoll * François Deppierraz * Honza Kral (HonzaKral) * Benjamin Smith (benjaminws) * Colin Moller (LeftyBC) * Elliot (edgeofnite) * Ram Viswanadha (ramv) * Chris Meisinger (cmeisinger) * Stuart Warren (stuart-warren) * (gitshaw) * (sfritz) * (sjoelsam) * Jose Diaz-Gonzalez (josegonzalez) * Arie Bro (arieb) * David Harrigan (dharrigan) * Mathieu Geli (gelim) * Nick Ethier (nickethier) * Mohab Usama (mohabusama) * (gitshaw) * Stuart Warren (stuart-warren) * Xavier Calland (xavier-calland) * Chad Schellenger (cschellenger) * Kamil Essekkat (ekamil) * (gbutt) * Ben Buchacher (bbuchacher) * Ehtesh Choudhury (shurane) * Markus Fischer (mfn) * Fabien Wernli (faxm0dem) * Michael Weiser (michaelweiser) * (digital-wonderland) * cassiano (cassianoleal) * Matt Dainty (bodgit) * Alex Philipp (alex-sf) * (krzaczek) * Justin Lintz (jlintz) * Jeremy Falling (jjfalling) * Ian Babrou (bobrik) * Ferenc Erki (ferki) * George Heppner (gheppner) * Matt Hughes (matthughes) * Brian Lalor (blalor) * Paweł Krzaczkowski (krzaczek) * Ben Tse (bt5e) * Tom Hendrikx (whyscream) * Christian Vozar (christianvozar) * Magnus Baeck (magnusbaeck) * Robin Kearney (rk295) * (cfeio) * (malagoli) * Dan Sheridan (djs52) * Michael-Keith Bernard (SegFaultAX) * Simon Lundström (simmel) * (pkr1234) * Mark Feltner (feltnerm) * William Jimenez (wjimenez5271) * Jeremy Canady (jrmycanady) * Steven Ottenhoff (steffo) * Ole Rößner (Basster) * Jack (univerio) * Tomáš Mózes (hydrapolic) * Gary Gao (garyelephant) * Panagiotis Moustafellos (pmoust) * (pbamba) * Pavel Strashkin (xaka) * Wadim Kruse (wkruse) * Richard Megginson (richm) * Thibaut Ackermann (thib-ack) * (zzugg) * Julien Mancuso (petitout) curator-5.2.0/README.rst000066400000000000000000000240401315226075300146700ustar00rootroot00000000000000.. _readme: Curator ======= Have indices in Elasticsearch? This is the tool for you! Like a museum curator manages the exhibits and collections on display, Elasticsearch Curator helps you curate, or manage your indices. Compatibility Matrix ==================== +--------+----------+------------+----------+------------+----------+------------+ |Version | ES 1.x | AWS ES 1.x | ES 2.x | AWS ES 2.x | ES 5.x | AWS ES 5.x | +========+==========+============+==========+============+==========+============+ | 3 | yes | yes* | yes | yes* | no | no | +--------+----------+------------+----------+------------+----------+------------+ | 4 | no | no | yes | no | yes | no | +--------+----------+------------+----------+------------+----------+------------+ | 5 | no | no | no | no | yes | yes* | +--------+----------+------------+----------+------------+----------+------------+ It is important to note that Curator 4 will not work with indices created in versions of Elasticsearch older than 1.4 (if they have been subsequently re-indexed, they will work). This is because those older indices lack index metadata that Curator 4 requires. 
Curator 4 will simply exclude any such indices from being acted on, and you will get a warning message like the following: :: 2016-07-31 10:36:17,423 WARNING Index: YOUR_INDEX_NAME has no "creation_date"! This implies that the index predates Elasticsearch v1.4. For safety, this index will be removed from the actionable list. It is also important to note that Curator 4 requires access to the ``/_cluster/state/metadata`` endpoint. Forks of Elasticsearch which do not support this endpoint (such as AWS ES, see #717) *will not* be able to use Curator version 4. \* It appears that AWS ES `does not allow access to the snapshot status endpoint`_ for the 1.x, 2.x, 5.1, and 5.3 versions. This prevents Curator 3 from being used to make snapshots. .. _does not allow access to the snapshot status endpoint: https://github.com/elastic/curator/issues/796 ? Curator 4 and 5 should work with AWS ES 5.x, but the ``/_cluster/state/metadata`` endpoint is still not fully supported (see #880). If a future patch fixes this, then Curator 4 and 5 should work with AWS ES 5.x. Build Status ------------ +--------+----------+ | Branch | Status | +========+==========+ | Master | |master| | +--------+----------+ | 5.x | |5_x| | +--------+----------+ | 5.1 | |5_1| | +--------+----------+ | 5.0 | |5_0| | +--------+----------+ | 4.x | |4_x| | +--------+----------+ | 4.3 | |4_3| | +--------+----------+ PyPI: |pypi_pkg| .. |master| image:: https://travis-ci.org/elastic/curator.svg?branch=master :target: https://travis-ci.org/elastic/curator .. |5_x| image:: https://travis-ci.org/elastic/curator.svg?branch=5.x :target: https://travis-ci.org/elastic/curator .. |5_1| image:: https://travis-ci.org/elastic/curator.svg?branch=5.1 :target: https://travis-ci.org/elastic/curator .. |5_0| image:: https://travis-ci.org/elastic/curator.svg?branch=5.0 :target: https://travis-ci.org/elastic/curator .. |4_x| image:: https://travis-ci.org/elastic/curator.svg?branch=4.x :target: https://travis-ci.org/elastic/curator .. |4_3| image:: https://travis-ci.org/elastic/curator.svg?branch=4.3 :target: https://travis-ci.org/elastic/curator .. |pypi_pkg| image:: https://badge.fury.io/py/elasticsearch-curator.svg :target: https://badge.fury.io/py/elasticsearch-curator `Curator API Documentation`_ ---------------------------- Version 5 of Curator ships with both an API and a wrapper script (which is actually defined as an entry point). The API allows you to write your own scripts to accomplish similar goals, or even new and different things with the `Curator API`_, and the `Elasticsearch Python API`_. .. _Curator API: http://curator.readthedocs.io/ .. _Curator API Documentation: `Curator API`_ .. _Elasticsearch Python API: http://elasticsearch-py.readthedocs.io/ `Curator CLI Documentation`_ ---------------------------- The `Curator CLI Documentation`_ is now a part of the document repository at http://elastic.co/guide at http://www.elastic.co/guide/en/elasticsearch/client/curator/current/index.html .. _Curator CLI Documentation: http://www.elastic.co/guide/en/elasticsearch/client/curator/current/index.html `Getting Started`_ ------------------ .. _Getting Started: https://www.elastic.co/guide/en/elasticsearch/client/curator/current/about.html See the `Installation guide `_ and the `command-line usage guide `_ Running ``curator --help`` will also show usage information.
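As a companion to the API documentation above, here is a minimal sketch of what a self-written script might look like (illustrative only: the host, the ``logstash-`` index pattern, and the 30-day age filter are assumptions, not requirements)::

    import elasticsearch
    import curator

    client = elasticsearch.Elasticsearch(hosts=['localhost:9200'])
    ilo = curator.IndexList(client)
    # Illustrative filters: select logstash- indices older than 30 days by name
    ilo.filter_by_regex(kind='prefix', value='logstash-')
    ilo.filter_by_age(source='name', direction='older',
        timestring='%Y.%m.%d', unit='days', unit_count=30)
    curator.DeleteIndices(ilo).do_dry_run()  # use .do_action() to actually delete

`Frequently Asked Questions`_ ----------------------------- ..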
_Frequently Asked Questions: http://www.elastic.co/guide/en/elasticsearch/client/curator/current/faq.html Encountering issues like ``DistributionNotFound``? See the FAQ_ for that issue, and more. .. _FAQ: http://www.elastic.co/guide/en/elasticsearch/client/curator/current/entrypoint-fix.html `Documentation & Examples`_ --------------------------- .. _Documentation & Examples: http://www.elastic.co/guide/en/elasticsearch/client/curator/current/index.html The documentation for the CLI is now part of the document repository at http://elastic.co/guide at http://www.elastic.co/guide/en/elasticsearch/client/curator/current/index.html The `Curator Wiki `_ on Github is now a place to add your own examples and ideas. Contributing ------------ * fork the repo * make changes in your fork * add tests to cover your changes (if necessary) * run tests * sign the `CLA `_ * send a pull request! To run from source, use the ``run_curator.py`` script in the root directory of the project. Running Tests ------------- To run the test suite just run ``python setup.py test`` When changing code, contributing new code or fixing a bug please make sure you include tests in your PR (or mark it as without tests so that someone else can pick it up to add the tests). When fixing a bug please make sure the test actually tests the bug - it should fail without the code changes and pass after they're applied (it can still be one commit of course). The tests will try to connect to your local elasticsearch instance and run integration tests against it. This will delete all the data stored there! You can use the env variable ``TEST_ES_SERVER`` to point to a different instance (for example, 'otherhost:9203'). Binary Executables ------------------ The combination of `setuptools `_ and `cx_Freeze `_ allows for Curator to be compiled into binary packages. These consist of a binary file placed in a directory which contains all the libraries required to run it. In order to make a binary package you must manually install the ``cx_freeze`` python module. You can do this via ``pip``, or ``python setup.py install``, or by package, if such exists for your platform. In order to make it compile on recent Debian/Ubuntu platforms, a patch had to be applied to the ``setup.py`` file in the extracted folder. This patch file is in the ``unix_packages`` directory in this repository. With ``cx_freeze`` installed, building a binary package is as simple as running ``python setup.py build_exe``. In Linux distributions, the results will be in the ``build`` directory, in a subdirectory labelled ``exe.linux-x86_64-${PYVER}``, where `${PYVER}` is the current major/minor version of Python, e.g. ``2.7``. This directory can be renamed as desired. Other entry-points that are defined in the ``setup.py`` file, such as ``es_repo_mgr``, will also appear in this directory. The process is identical for building the binary package for Windows. It must be run from a Windows machine with all dependencies installed. Executables in Windows will have the ``.exe`` suffix attached. The directory in ``build`` will be named ``exe.win-amd64-${PYVER}``, where `${PYVER}` is the current major/minor version of Python, e.g. ``2.7``. This directory can be renamed as desired. In Windows, cx_Freeze also allows for building rudimentary MSI installers. This can be done by invoking ``python setup.py bdist_msi``. 
The MSI file will be in the ``dist`` directory, and will be named ``elasticsearch-curator-#.#.#-amd64.msi``, where the major, minor, and patch version numbers are substituted accordingly. One drawback to this rudimentary MSI is that it does not allow updates to be installed on top of the existing installation. You must uninstall the old version before installing the newer one. The ``unix_packages`` directory contains the ``build_packages.sh`` script used to generate the packages for the Curator YUM and APT repositories. The ``Vagrant`` directory has the Vagrantfiles used in conjunction with the ``build_packages.sh`` script. If you wish to use this method on your own, you must ensure that the shared folders exist. ``/curator_packages`` is where the packages will be placed after building. ``/curator_source`` is the path to the Curator source code, so that the ``build_packages.sh`` script can be called from there. The ``build_packages.sh`` script does `not` use the local source code, but rather pulls the version specified as an argument directly from GitHub. Versioning ---------- Version 5 of Curator is the current ``master`` branch. It supports only 5.x versions of Elasticsearch. Origins ------- Curator was first called ``clearESindices.py`` [1] and was almost immediately renamed to ``logstash_index_cleaner.py`` [1]. After a time it was migrated under the [logstash](https://github.com/elastic/logstash) repository as ``expire_logs``. Soon thereafter, Jordan Sissel was hired by Elasticsearch, as was the original author of this tool. It became Elasticsearch Curator after that and is now hosted at [1] curator-5.2.0/Vagrant/000077500000000000000000000000001315226075300146035ustar00rootroot00000000000000curator-5.2.0/Vagrant/centos/000077500000000000000000000000001315226075300160765ustar00rootroot00000000000000curator-5.2.0/Vagrant/centos/6/000077500000000000000000000000001315226075300162435ustar00rootroot00000000000000curator-5.2.0/Vagrant/centos/6/Vagrantfile000066400000000000000000000012241315226075300204270ustar00rootroot00000000000000# -*- mode: ruby -*- # vi: set ft=ruby : Vagrant.configure(2) do |config| config.vm.box = "elastic/centos-6-x86_64" config.vm.provision "shell", inline: <<-SHELL sudo yum -y groupinstall "Development Tools" sudo yum -y install python-devel zlib-devel bzip2-devel sqlite sqlite-devel openssl-devel SHELL config.vm.synced_folder "/curator_packages", "/curator_packages", create: true, owner: "vagrant", group: "vagrant" config.vm.synced_folder "/curator_source", "/curator_source", create: true, owner: "vagrant", group: "vagrant" config.vm.provider "virtualbox" do |v| v.customize ["modifyvm", :id, "--nictype1", "virtio"] end end curator-5.2.0/Vagrant/centos/7/000077500000000000000000000000001315226075300162445ustar00rootroot00000000000000curator-5.2.0/Vagrant/centos/7/Vagrantfile000066400000000000000000000012241315226075300204300ustar00rootroot00000000000000# -*- mode: ruby -*- # vi: set ft=ruby : Vagrant.configure(2) do |config| config.vm.box = "elastic/centos-7-x86_64" config.vm.provision "shell", inline: <<-SHELL sudo yum -y groupinstall "Development Tools" sudo yum -y install python-devel zlib-devel bzip2-devel sqlite sqlite-devel openssl-devel SHELL config.vm.synced_folder "/curator_packages", "/curator_packages", create: true, owner: "vagrant", group: "vagrant" config.vm.synced_folder "/curator_source", "/curator_source", create: true, owner: "vagrant", group: "vagrant" config.vm.provider "virtualbox" do |v| v.customize ["modifyvm", :id, "--nictype1", "virtio"]
end end curator-5.2.0/Vagrant/ubuntu/000077500000000000000000000000001315226075300161255ustar00rootroot00000000000000curator-5.2.0/Vagrant/ubuntu/14.04/000077500000000000000000000000001315226075300165735ustar00rootroot00000000000000curator-5.2.0/Vagrant/ubuntu/14.04/Vagrantfile000066400000000000000000000012671315226075300207660ustar00rootroot00000000000000# -*- mode: ruby -*- # vi: set ft=ruby : Vagrant.configure(2) do |config| config.vm.box = "ubuntu/trusty64" config.vm.provision "shell", inline: <<-SHELL sudo apt-get -y autoremove sudo apt-get update sudo apt-get install -y libxml2-dev zlib1g-dev pkg-config python-dev make build-essential libssl-dev libbz2-dev libsqlite3-dev SHELL config.vm.synced_folder "/curator_packages", "/curator_packages", create: true, owner: "vagrant", group: "vagrant" config.vm.synced_folder "/curator_source", "/curator_source", create: true, owner: "vagrant", group: "vagrant" config.vm.provider "virtualbox" do |v| v.customize ["modifyvm", :id, "--nictype1", "virtio"] end end curator-5.2.0/binary_release.py000066400000000000000000000074401315226075300165440ustar00rootroot00000000000000import os import re import sys import shutil import hashlib # This script simply takes the output of `python setup.py build_exe` and makes # a compressed archive (zip for windows, tar.gz for Linux) for distribution. # Utility function to read from file. def fread(fname): return open(os.path.join(os.path.dirname(__file__), fname)).read() def get_version(): VERSIONFILE="curator/_version.py" verstrline = fread(VERSIONFILE).strip() vsre = r"^__version__ = ['\"]([^'\"]*)['\"]" mo = re.search(vsre, verstrline, re.M) if mo: VERSION = mo.group(1) else: raise RuntimeError("Unable to find version string in %s." % (VERSIONFILE,)) build_number = os.environ.get('CURATOR_BUILD_NUMBER', None) if build_number: return VERSION + "b{}".format(build_number) return VERSION archive_format = 'gztar' enviro = dict(os.environ) platform = sys.platform pyver = str(sys.version_info[0]) + '.' + str(sys.version_info[1]) if platform == 'win32': # Win32 stuff archive_format = 'zip' build_name = 'exe.win-' + enviro['PROCESSOR_ARCHITECTURE'].lower() + '-' + pyver target_name = "curator-" + str(get_version()) + "-amd64" elif platform == 'linux' or platform == 'linux2': sys_string = enviro['_system_type'].lower() + '-' + enviro['_system_arch'].lower() build_name = 'exe.' + sys_string + '-' + pyver target_name = "curator-" + str(get_version()) + "-" + sys_string else: # Unsupported platform? print('Your platform ({0}) is not yet supported for binary build/distribution.'.format(platform)) sys.exit(1) #sys_string = sys_type + '-' + sys_arch #build_name = 'exe.' + sys_string + '-' + pyver #print('Expected build directory: {0}'.format(build_name)) build_path = os.path.join('build', build_name) if os.path.exists(build_path): #print("I found the path: {0}".format(build_path)) target_path = os.path.join('.', target_name) # Check to see if an older directory exists... if os.path.exists(target_path): print('An older build exists at {0}. 
Please delete this before continuing.'.format(target_path)) sys.exit(1) else: shutil.copytree(build_path, target_path) # Ensure the rename went smoothly, then continue if os.path.exists(target_path): #print("Build successfully renamed") if float(pyver) >= 2.7: shutil.make_archive('elasticsearch-' + target_name, archive_format, '.', target_path) if platform == 'win32': fname = 'elasticsearch-' + target_name + '.zip' else: fname = 'elasticsearch-' + target_name + '.tar.gz' # Clean up directory if we made a viable archive. if os.path.exists(fname): shutil.rmtree(target_path) else: print('Something went wrong creating the archive {0}'.format(fname)) sys.exit(1) md5sum = hashlib.md5(open(fname, 'rb').read()).hexdigest() sha1sum = hashlib.sha1(open(fname, 'rb').read()).hexdigest() with open(fname + ".md5.txt", "w") as md5_file: md5_file.write("{0}".format(md5sum)) with open(fname + ".sha1.txt", "w") as sha1_file: sha1_file.write("{0}".format(sha1sum)) print('Archive: {0}'.format(fname)) print('{0} = {1}'.format(fname + ".md5.txt", md5sum)) print('{0} = {1}'.format(fname + ".sha1.txt", sha1sum)) else: print('Your python version ({0}) is too old to use with shutil.make_archive.'.format(pyver)) print('You can manually compress the {0} directory to achieve the same result.'.format(target_name)) else: # We couldn't find a build_path print("Build not found. Please run 'python setup.py build_exe' to create the build directory.") sys.exit(1) curator-5.2.0/curator/000077500000000000000000000000001315226075300146605ustar00rootroot00000000000000curator-5.2.0/curator/__init__.py000066400000000000000000000004051315226075300167700ustar00rootroot00000000000000from .exceptions import * from .defaults import * from .validators import * from .logtools import * from .utils import * from .indexlist import IndexList from .snapshotlist import SnapshotList from .actions import * from .cli import * from .repomgrcli import * curator-5.2.0/curator/__main__.py000066400000000000000000000001371315226075300167530ustar00rootroot00000000000000 """Executed when package directory is called as a script""" from .curator import main main() curator-5.2.0/curator/_version.py000066400000000000000000000000271315226075300170550ustar00rootroot00000000000000__version__ = '5.2.0' curator-5.2.0/curator/actions.py000066400000000000000000002626701315226075300167070ustar00rootroot00000000000000from .exceptions import * from .utils import * import logging import time from copy import deepcopy from datetime import datetime class Alias(object): def __init__(self, name=None, extra_settings={}, **kwargs): """ Define the Alias object. :arg name: The alias name :arg extra_settings: Extra settings, including filters and routing. For more information see https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-aliases.html :type extra_settings: dict, representing the settings. """ if not name: raise MissingArgument('No value for "name" provided.') #: Instance variable #: The strftime parsed version of `name`. self.name = parse_date_pattern(name) #: The list of actions to perform. Populated by #: :mod:`curator.actions.Alias.add` and #: :mod:`curator.actions.Alias.remove` self.actions = [] #: Instance variable. #: The Elasticsearch Client object derived from `ilo` self.client = None #: Instance variable. #: Any extra things to add to the alias, like filters, or routing. 
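#: For example, a hypothetical payload such as ``{'filter': {'term': {'user': 'kimchy'}}, 'routing': '1'}`` would be merged into each generated ``add`` action (any keys the aliases API accepts may appear here).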
self.extra_settings = extra_settings self.loggit = logging.getLogger('curator.actions.alias') def add(self, ilo, warn_if_no_indices=False): """ Create `add` statements for each index in `ilo` for `alias`, then append them to `actions`. Add any `extras` that may be there. :arg ilo: A :class:`curator.indexlist.IndexList` object """ verify_index_list(ilo) if not self.client: self.client = ilo.client try: ilo.empty_list_check() except NoIndices: # Add a warning if there are no indices to add, if so set in options if warn_if_no_indices: self.loggit.warn( 'No indices found after processing filters. ' 'Nothing to add to {0}'.format(self.name) ) return else: # Re-raise the NoIndices so it will behave as before raise NoIndices for index in ilo.working_list(): self.loggit.debug( 'Adding index {0} to alias {1} with extra settings ' '{2}'.format(index, self.name, self.extra_settings) ) add_dict = { 'add' : { 'index' : index, 'alias': self.name } } add_dict['add'].update(self.extra_settings) self.actions.append(add_dict) def remove(self, ilo, warn_if_no_indices=False): """ Create `remove` statements for each index in `ilo` for `alias`, then append them to `actions`. :arg ilo: A :class:`curator.indexlist.IndexList` object """ verify_index_list(ilo) if not self.client: self.client = ilo.client try: ilo.empty_list_check() except NoIndices: # Add a warning if there are no indices to remove, if so set in options if warn_if_no_indices: self.loggit.warn( 'No indices found after processing filters. ' 'Nothing to remove from {0}'.format(self.name) ) return else: # Re-raise the NoIndices so it will behave as before raise NoIndices aliases = self.client.indices.get_alias() for index in ilo.working_list(): if index in aliases: self.loggit.debug( 'Index {0} in get_aliases output'.format(index)) # Only remove if the index is associated with the alias if self.name in aliases[index]['aliases']: self.loggit.debug( 'Removing index {0} from alias ' '{1}'.format(index, self.name) ) self.actions.append( { 'remove' : { 'index' : index, 'alias': self.name } }) else: self.loggit.debug( 'Cannot remove: Index {0} is not associated with alias' ' {1}'.format(index, self.name) ) def body(self): """ Return a `body` dict suitable for use with the `update_aliases` API call. """ if not self.actions: raise ActionError('No "add" or "remove" operations') self.loggit.debug('Alias actions: {0}'.format(self.actions)) return { 'actions' : self.actions } def do_dry_run(self): """ Log what the output would be, but take no action. """ self.loggit.info('DRY-RUN MODE. No changes will be made.') for item in self.body()['actions']: job = list(item.keys())[0] index = item[job]['index'] alias = item[job]['alias'] # We want our log to look clever, so if job is "remove", strip the # 'e' so "remove" can become "removing". "adding" works already. self.loggit.info( 'DRY-RUN: alias: {0}ing index "{1}" {2} alias ' '"{3}"'.format( job.rstrip('e'), index, 'to' if job == 'add' else 'from', alias ) ) def do_action(self): """ Run the API call `update_aliases` with the results of `body()` """ self.loggit.info('Updating aliases...') self.loggit.info('Alias actions: {0}'.format(self.body())) try: self.client.indices.update_aliases(body=self.body()) except Exception as e: report_failure(e) class Allocation(object): def __init__(self, ilo, key=None, value=None, allocation_type='require', wait_for_completion=False, wait_interval=3, max_wait=-1, ): """ :arg ilo: A :class:`curator.indexlist.IndexList` object :arg key: An arbitrary metadata attribute key.
Must match the key assigned to at least some of your nodes to have any effect. :arg value: An arbitrary metadata attribute value. Must correspond to values associated with `key` assigned to at least some of your nodes to have any effect. If a `None` value is provided, it will remove any setting associated with that `key`. :arg allocation_type: Type of allocation to apply. Default is `require` :arg wait_for_completion: Wait (or not) for the operation to complete before returning. (default: `False`) :type wait_for_completion: bool :arg wait_interval: How long in seconds to wait between checks for completion. :arg max_wait: Maximum number of seconds to `wait_for_completion` .. note:: See: https://www.elastic.co/guide/en/elasticsearch/reference/current/shard-allocation-filtering.html """ verify_index_list(ilo) if not key: raise MissingArgument('No value for "key" provided') if allocation_type not in ['require', 'include', 'exclude']: raise ValueError( '{0} is an invalid allocation_type. Must be one of "require", ' '"include", "exclude".'.format(allocation_type) ) #: Instance variable. #: Internal reference to `ilo` self.index_list = ilo #: Instance variable. #: The Elasticsearch Client object derived from `ilo` self.client = ilo.client self.loggit = logging.getLogger('curator.actions.allocation') #: Instance variable. #: Populated at instance creation time. Value is #: ``index.routing.allocation.`` `allocation_type` ``.`` `key` ``.`` `value` bkey = 'index.routing.allocation.{0}.{1}'.format(allocation_type, key) self.body = { bkey : value } #: Instance variable. #: Internal reference to `wait_for_completion` self.wfc = wait_for_completion #: Instance variable #: How many seconds to wait between checks for completion. self.wait_interval = wait_interval #: Instance variable. #: How long in seconds to `wait_for_completion` before returning with an #: exception. A value of -1 means wait forever. self.max_wait = max_wait def do_dry_run(self): """ Log what the output would be, but take no action. """ show_dry_run(self.index_list, 'allocation', body=self.body) def do_action(self): """ Change allocation settings for indices in `index_list.indices` with the settings in `body`. """ self.loggit.debug( 'Cannot change shard routing allocation of closed indices. ' 'Omitting any closed indices.' ) self.index_list.filter_closed() self.index_list.empty_list_check() self.loggit.info('Updating index setting {0}'.format(self.body)) try: index_lists = chunk_index_list(self.index_list.indices) for l in index_lists: self.client.indices.put_settings( index=to_csv(l), body=self.body ) if self.wfc: logger.debug( 'Waiting for shards to complete relocation for indices:' ' {0}'.format(to_csv(l)) ) wait_for_it( self.client, 'allocation', wait_interval=self.wait_interval, max_wait=self.max_wait ) except Exception as e: report_failure(e) class Close(object): def __init__(self, ilo, delete_aliases=False): """ :arg ilo: A :class:`curator.indexlist.IndexList` object :arg delete_aliases: If `True`, will delete any associated aliases before closing indices. :type delete_aliases: bool """ verify_index_list(ilo) #: Instance variable. #: Internal reference to `ilo` self.index_list = ilo #: Instance variable. #: Internal reference to `delete_aliases` self.delete_aliases = delete_aliases #: Instance variable. #: The Elasticsearch Client object derived from `ilo` self.client = ilo.client self.loggit = logging.getLogger('curator.actions.close') def do_dry_run(self): """ Log what the output would be, but take no action.
""" show_dry_run( self.index_list, 'close', **{'delete_aliases':self.delete_aliases}) def do_action(self): """ Close open indices in `index_list.indices` """ self.index_list.filter_closed() self.index_list.empty_list_check() self.loggit.info( 'Closing selected indices: {0}'.format(self.index_list.indices)) try: index_lists = chunk_index_list(self.index_list.indices) for l in index_lists: if self.delete_aliases: self.loggit.info( 'Deleting aliases from indices before closing.') self.loggit.debug('Deleting aliases from: {0}'.format(l)) try: self.client.indices.delete_alias( index=to_csv(l), name='_all') except Exception as e: self.loggit.warn( 'Some indices may not have had aliases. Exception:' ' {0}'.format(e) ) self.client.indices.flush_synced( index=to_csv(l), ignore_unavailable=True) self.client.indices.close( index=to_csv(l), ignore_unavailable=True) except Exception as e: report_failure(e) class ClusterRouting(object): def __init__( self, client, routing_type=None, setting=None, value=None, wait_for_completion=False, wait_interval=9, max_wait=-1 ): """ For now, the cluster routing settings are hardcoded to be ``transient`` :arg client: An :class:`elasticsearch.Elasticsearch` client object :arg routing_type: Type of routing to apply. Either `allocation` or `rebalance` :arg setting: Currently, the only acceptable value for `setting` is ``enable``. This is here in case that changes. :arg value: Used only if `setting` is `enable`. Semi-dependent on `routing_type`. Acceptable values for `allocation` and `rebalance` are ``all``, ``primaries``, and ``none`` (string, not `NoneType`). If `routing_type` is `allocation`, this can also be ``new_primaries``, and if `rebalance`, it can be ``replicas``. :arg wait_for_completion: Wait (or not) for the operation to complete before returning. (default: `False`) :type wait_for_completion: bool :arg wait_interval: How long in seconds to wait between checks for completion. :arg max_wait: Maximum number of seconds to `wait_for_completion` """ verify_client_object(client) #: Instance variable. #: An :class:`elasticsearch.Elasticsearch` client object self.client = client self.loggit = logging.getLogger('curator.actions.cluster_routing') #: Instance variable. #: Internal reference to `wait_for_completion` self.wfc = wait_for_completion #: Instance variable #: How many seconds to wait between checks for completion. self.wait_interval = wait_interval #: Instance variable. #: How long in seconds to `wait_for_completion` before returning with an #: exception. A value of -1 means wait forever. self.max_wait = max_wait if setting != 'enable': raise ValueError( 'Invalid value for "setting": {0}.'.format(setting) ) if routing_type == 'allocation': if value not in ['all', 'primaries', 'new_primaries', 'none']: raise ValueError( 'Invalid "value": {0} with "routing_type":' '{1}.'.format(value, routing_type) ) elif routing_type == 'rebalance': if value not in ['all', 'primaries', 'replicas', 'none']: raise ValueError( 'Invalid "value": {0} with "routing_type":' '{1}.'.format(value, routing_type) ) else: raise ValueError( 'Invalid value for "routing_type": {0}.'.format(routing_type) ) bkey = 'cluster.routing.{0}.{1}'.format(routing_type,setting) self.body = { 'transient' : { bkey : value } } def do_dry_run(self): """ Log what the output would be, but take no action. """ logger.info('DRY-RUN MODE. 
No changes will be made.') self.loggit.info( 'DRY-RUN: Update cluster routing settings with arguments: ' '{0}'.format(self.body) ) def do_action(self): """ Change cluster routing settings with the settings in `body`. """ self.loggit.info('Updating cluster settings: {0}'.format(self.body)) try: self.client.cluster.put_settings(body=self.body) if self.wfc: logger.debug( 'Waiting for shards to complete routing and/or rebalancing' ) wait_for_it( self.client, 'cluster_routing', wait_interval=self.wait_interval, max_wait=self.max_wait ) except Exception as e: report_failure(e) class CreateIndex(object): def __init__(self, client, name, extra_settings={}): """ :arg client: An :class:`elasticsearch.Elasticsearch` client object :arg name: A name, which can contain :py:func:`time.strftime` strings :arg extra_settings: The `settings` and `mappings` for the index. For more information see https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-create-index.html :type extra_settings: dict, representing the settings and mappings. """ verify_client_object(client) if not name: raise ConfigurationError('Value for "name" not provided.') #: Instance variable. #: The parsed version of `name` self.name = parse_date_pattern(name) #: Instance variable. #: Extracted from the config yaml, it should be a dictionary of #: mappings and settings suitable for index creation. self.body = extra_settings #: Instance variable. #: An :class:`elasticsearch.Elasticsearch` client object self.client = client self.loggit = logging.getLogger('curator.actions.create_index') def do_dry_run(self): """ Log what the output would be, but take no action. """ logger.info('DRY-RUN MODE. No changes will be made.') self.loggit.info( 'DRY-RUN: create_index "{0}" with arguments: ' '{1}'.format(self.name, self.body) ) def do_action(self): """ Create index identified by `name` with settings in `body` """ self.loggit.info( 'Creating index "{0}" with settings: ' '{1}'.format(self.name, self.body) ) try: self.client.indices.create(index=self.name, body=self.body) except Exception as e: report_failure(e) class DeleteIndices(object): def __init__(self, ilo, master_timeout=30): """ :arg ilo: A :class:`curator.indexlist.IndexList` object :arg master_timeout: Number of seconds to wait for master node response """ verify_index_list(ilo) if not isinstance(master_timeout, int): raise TypeError( 'Incorrect type for "master_timeout": {0}. ' 'Should be integer value.'.format(type(master_timeout)) ) #: Instance variable. #: Internal reference to `ilo` self.index_list = ilo #: Instance variable. #: The Elasticsearch Client object derived from `ilo` self.client = ilo.client #: Instance variable. #: String value of `master_timeout` + 's', for seconds. 
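#: (e.g. a ``master_timeout`` of ``30`` becomes the string ``'30s'``)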
self.master_timeout = str(master_timeout) + 's' self.loggit = logging.getLogger('curator.actions.delete_indices') self.loggit.debug('master_timeout value: {0}'.format( self.master_timeout)) def _verify_result(self, result, count): """ Breakout method to aid readability :arg result: A list of indices from `_get_result_list` :arg count: The number of tries that have occurred :rtype: bool """ if len(result) > 0: self.loggit.error( 'The following indices failed to delete on try ' '#{0}:'.format(count) ) for idx in result: self.loggit.error("---{0}".format(idx)) return False else: self.loggit.debug( 'Successfully deleted all indices on try #{0}'.format(count) ) return True def __chunk_loop(self, chunk_list): """ Loop through deletes 3 times to ensure they complete :arg chunk_list: A list of indices pre-chunked so it won't overload the URL size limit. """ working_list = chunk_list for count in range(1, 4): # Try 3 times for i in working_list: self.loggit.info("---deleting index {0}".format(i)) self.client.indices.delete( index=to_csv(working_list), master_timeout=self.master_timeout) result = [ i for i in working_list if i in get_indices(self.client)] if self._verify_result(result, count): return else: working_list = result self.loggit.error( 'Unable to delete the following indices after 3 attempts: ' '{0}'.format(result) ) def do_dry_run(self): """ Log what the output would be, but take no action. """ show_dry_run(self.index_list, 'delete_indices') def do_action(self): """ Delete indices in `index_list.indices` """ self.index_list.empty_list_check() self.loggit.info( 'Deleting selected indices: {0}'.format(self.index_list.indices)) try: index_lists = chunk_index_list(self.index_list.indices) for l in index_lists: self.__chunk_loop(l) except Exception as e: report_failure(e) class ForceMerge(object): def __init__(self, ilo, max_num_segments=None, delay=0): """ :arg ilo: A :class:`curator.indexlist.IndexList` object :arg max_num_segments: Number of segments per shard to forceMerge :arg delay: Number of seconds to delay between forceMerge operations """ verify_index_list(ilo) if not max_num_segments: raise MissingArgument('Missing value for "max_num_segments"') #: Instance variable. #: The Elasticsearch Client object derived from `ilo` self.client = ilo.client #: Instance variable. #: Internal reference to `ilo` self.index_list = ilo #: Instance variable. #: Internally accessible copy of `max_num_segments` self.max_num_segments = max_num_segments #: Instance variable. #: Internally accessible copy of `delay` self.delay = delay self.loggit = logging.getLogger('curator.actions.forcemerge') def do_dry_run(self): """ Log what the output would be, but take no action. """ show_dry_run( self.index_list, 'forcemerge', max_num_segments=self.max_num_segments, delay=self.delay, ) def do_action(self): """ forcemerge indices in `index_list.indices` """ self.index_list.empty_list_check() self.index_list.filter_forceMerged( max_num_segments=self.max_num_segments) self.loggit.info('forceMerging selected indices') try: for index_name in self.index_list.indices: self.loggit.info( 'forceMerging index {0} to {1} segments per shard. 
' 'Please wait...'.format(index_name, self.max_num_segments) ) self.client.indices.forcemerge(index=index_name, max_num_segments=self.max_num_segments) if self.delay > 0: self.loggit.info( 'Pausing for {0} seconds before continuing...'.format( self.delay) ) time.sleep(self.delay) except Exception as e: report_failure(e) class IndexSettings(object): def __init__(self, ilo, index_settings={}, ignore_unavailable=False, preserve_existing=False): """ :arg ilo: A :class:`curator.indexlist.IndexList` object :arg index_settings: A dictionary structure with one or more index settings to change. :arg ignore_unavailable: Whether specified concrete indices should be ignored when unavailable (missing or closed) :arg preserve_existing: Whether to update existing settings. If set to ``True`` existing settings on an index remain unchanged. The default is ``False`` """ verify_index_list(ilo) if not index_settings: raise ConfigurationError('Missing value for "index_settings"') #: Instance variable. #: The Elasticsearch Client object derived from `ilo` self.client = ilo.client #: Instance variable. #: Internal reference to `ilo` self.index_list = ilo #: Instance variable. #: Internal reference to `index_settings` self.body = index_settings #: Instance variable. #: Internal reference to `ignore_unavailable` self.ignore_unavailable = ignore_unavailable #: Instance variable. #: Internal reference to `preserve_settings` self.preserve_existing = preserve_existing self.loggit = logging.getLogger('curator.actions.index_settings') self._body_check() def _body_check(self): # The body only passes the skimpiest of requirements by having 'index' # as the only root-level key, and having a 'dict' as its value if len(self.body) == 1: if 'index' in self.body: if isinstance(self.body['index'], dict): return True raise ConfigurationError( 'Bad value for "index_settings": {0}'.format(self.body)) def _static_settings(self): return [ 'number_of_shards', 'shard', 'codec', 'routing_partition_size', ] def _dynamic_settings(self): return [ 'number_of_replicas', 'auto_expand_replicas', 'refresh_interval', 'max_result_window', 'max_rescore_window', 'blocks', 'max_refresh_listeners', 'mapping', 'merge', 'translog', ] def _settings_check(self): # Detect if even one index is open. Save all found to open_index_list. open_index_list = [] open_indices = False for idx in self.index_list.indices: if self.index_list.index_info[idx]['state'] == 'open': open_index_list.append(idx) open_indices = True for k in self.body['index']: if k in self._static_settings(): if not self.ignore_unavailable: if open_indices: raise ActionError( 'Static Setting "{0}" detected with open indices: ' '{1}. Static settings can only be used with closed ' 'indices. Recommend filtering out open indices, ' 'or setting ignore_unavailable to True'.format( k, open_index_list ) ) elif k in self._dynamic_settings(): # Dynamic settings should be appliable to open or closed indices # Act here if the case is different for some settings. pass else: self.loggit.warn( '"{0}" is not a setting Curator recognizes and may or may ' 'not work.'.format(k) ) def do_dry_run(self): """ Log what the output would be, but take no action. 
""" show_dry_run(self.index_list, 'indexsettings', **self.body) def do_action(self): self._settings_check() # Ensure that the open indices filter applied in _settings_check() # didn't result in an empty list (or otherwise empty) self.index_list.empty_list_check() self.loggit.info( 'Applying index settings to indices: ' '{0}'.format(self.index_list.indices) ) try: index_lists = chunk_index_list(self.index_list.indices) for l in index_lists: response = self.client.indices.put_settings( index=to_csv(l), body=self.body, ignore_unavailable=self.ignore_unavailable, preserve_existing=self.preserve_existing ) self.loggit.debug('PUT SETTINGS RESPONSE: {0}'.format(response)) except Exception as e: report_failure(e) class Open(object): def __init__(self, ilo): """ :arg ilo: A :class:`curator.indexlist.IndexList` object """ verify_index_list(ilo) #: Instance variable. #: The Elasticsearch Client object derived from `ilo` self.client = ilo.client #: Instance variable. #: Internal reference to `ilo` self.index_list = ilo self.loggit = logging.getLogger('curator.actions.open') def do_dry_run(self): """ Log what the output would be, but take no action. """ show_dry_run(self.index_list, 'open') def do_action(self): """ Open closed indices in `index_list.indices` """ self.index_list.empty_list_check() self.loggit.info( 'Opening selected indices: {0}'.format(self.index_list.indices)) try: index_lists = chunk_index_list(self.index_list.indices) for l in index_lists: self.client.indices.open(index=to_csv(l)) except Exception as e: report_failure(e) class Replicas(object): def __init__(self, ilo, count=None, wait_for_completion=False, wait_interval=9, max_wait=-1): """ :arg ilo: A :class:`curator.indexlist.IndexList` object :arg count: The count of replicas per shard :arg wait_for_completion: Wait (or not) for the operation to complete before returning. (default: `False`) :type wait_for_completion: bool :arg wait_interval: How long in seconds to wait between checks for completion. :arg max_wait: Maximum number of seconds to `wait_for_completion` """ verify_index_list(ilo) # It's okay for count to be zero if count == 0: pass elif not count: raise MissingArgument('Missing value for "count"') #: Instance variable. #: The Elasticsearch Client object derived from `ilo` self.client = ilo.client #: Instance variable. #: Internal reference to `ilo` self.index_list = ilo #: Instance variable. #: Internally accessible copy of `count` self.count = count #: Instance variable. #: Internal reference to `wait_for_completion` self.wfc = wait_for_completion #: Instance variable #: How many seconds to wait between checks for completion. self.wait_interval = wait_interval #: Instance variable. #: How long in seconds to `wait_for_completion` before returning with an #: exception. A value of -1 means wait forever. self.max_wait = max_wait self.loggit = logging.getLogger('curator.actions.replicas') def do_dry_run(self): """ Log what the output would be, but take no action. """ show_dry_run(self.index_list, 'replicas', count=self.count) def do_action(self): """ Update the replica count of indices in `index_list.indices` """ self.index_list.empty_list_check() self.loggit.debug( 'Cannot get update replica count of closed indices. ' 'Omitting any closed indices.' 
) self.index_list.filter_closed() self.loggit.info( 'Setting the replica count to {0} for indices: ' '{1}'.format(self.count, self.index_list.indices) ) try: index_lists = chunk_index_list(self.index_list.indices) for l in index_lists: self.client.indices.put_settings(index=to_csv(l), body={'number_of_replicas' : self.count}) if self.wfc and self.count > 0: logger.debug( 'Waiting for shards to complete replication for ' 'indices: {0}'.format(to_csv(l)) ) wait_for_it( self.client, 'replicas', wait_interval=self.wait_interval, max_wait=self.max_wait ) except Exception as e: report_failure(e) class Rollover(object): def __init__( self, client, name, conditions, new_index=None, extra_settings=None, wait_for_active_shards=1 ): """ :arg client: An :class:`elasticsearch.Elasticsearch` client object :arg name: The name of the single-index-mapped alias to test for rollover conditions. :new_index: The new index name :arg conditions: A dictionary of conditions to test :arg extra_settings: Must be either `None`, or a dictionary of settings to apply to the new index on rollover. This is used in place of `settings` in the Rollover API, mostly because it's already existent in other places here in Curator :arg wait_for_active_shards: The number of shards expected to be active before returning. """ verify_client_object(client) self.loggit = logging.getLogger('curator.actions.rollover') if not isinstance(conditions, dict): raise ConfigurationError('"conditions" must be a dictionary') else: self.loggit.debug('"conditions" is {0}'.format(conditions)) if not isinstance(extra_settings, dict) and extra_settings is not None: raise ConfigurationError( '"extra_settings" must be a dictionary or None') #: Instance variable. #: The Elasticsearch Client object self.client = client #: Instance variable. #: Internal reference to `conditions` self.conditions = conditions #: Instance variable. #: Internal reference to `extra_settings` self.settings = extra_settings #: Instance variable. #: Internal reference to `new_index` self.new_index = parse_date_pattern(new_index) if new_index else new_index #: Instance variable. #: Internal reference to `wait_for_active_shards` self.wait_for_active_shards = wait_for_active_shards # Verify that `conditions` and `settings` are good? # Verify that `name` is an alias, and is only mapped to one index. if rollable_alias(client, name): self.name = name else: raise ValueError( 'Unable to perform index rollover with alias ' '"{0}". See previous logs for more details.'.format(name) ) def body(self): """ Create a body from conditions and settings """ retval = {} retval['conditions'] = self.conditions if self.settings: retval['settings'] = self.settings return retval def doit(self, dry_run=False): """ This exists solely to prevent having to have duplicate code in both `do_dry_run` and `do_action` """ return self.client.indices.rollover( alias=self.name, new_index=self.new_index, body=self.body(), dry_run=dry_run, wait_for_active_shards=self.wait_for_active_shards, ) def do_dry_run(self): """ Log what the output would be, but take no action. """ logger.info('DRY-RUN MODE. 
No changes will be made.') result = self.doit(dry_run=True) logger.info('DRY-RUN: rollover: {0} result: ' '{1}'.format(self.name, result)) def do_action(self): """ Rollover the index referenced by alias `name` """ self.loggit.info('Performing index rollover') try: self.doit() except Exception as e: report_failure(e) class DeleteSnapshots(object): def __init__(self, slo, retry_interval=120, retry_count=3): """ :arg slo: A :class:`curator.snapshotlist.SnapshotList` object :arg retry_interval: Number of seconds to delay between retries. Default: 120 (seconds) :arg retry_count: Number of attempts to make. Default: 3 """ verify_snapshot_list(slo) #: Instance variable. #: The Elasticsearch Client object derived from `slo` self.client = slo.client #: Instance variable. #: Internally accessible copy of `retry_interval` self.retry_interval = retry_interval #: Instance variable. #: Internally accessible copy of `retry_count` self.retry_count = retry_count #: Instance variable. #: Internal reference to `slo` self.snapshot_list = slo #: Instance variable. #: The repository name derived from `slo` self.repository = slo.repository self.loggit = logging.getLogger('curator.actions.delete_snapshots') def do_dry_run(self): """ Log what the output would be, but take no action. """ logger.info('DRY-RUN MODE. No changes will be made.') mykwargs = { 'repository' : self.repository, 'retry_interval' : self.retry_interval, 'retry_count' : self.retry_count, } for snap in self.snapshot_list.snapshots: logger.info('DRY-RUN: delete_snapshot: {0} with arguments: ' '{1}'.format(snap, mykwargs)) def do_action(self): """ Delete snapshots in `slo` Retry up to `retry_count` times, pausing `retry_interval` seconds between retries. """ self.snapshot_list.empty_list_check() self.loggit.info('Deleting selected snapshots') if not safe_to_snap( self.client, repository=self.repository, retry_interval=self.retry_interval, retry_count=self.retry_count): raise FailedExecution( 'Unable to delete snapshot(s) because a snapshot is in ' 'state "IN_PROGRESS"') try: for s in self.snapshot_list.snapshots: self.loggit.info('Deleting snapshot {0}...'.format(s)) self.client.snapshot.delete( repository=self.repository, snapshot=s) except Exception as e: report_failure(e) class Reindex(object): def __init__(self, ilo, request_body, refresh=True, requests_per_second=-1, slices=1, timeout=60, wait_for_active_shards=1, wait_for_completion=True, max_wait=-1, wait_interval=9, remote_url_prefix=None, remote_ssl_no_validate=None, remote_certificate=None, remote_client_cert=None, remote_client_key=None, remote_aws_key=None, remote_aws_secret_key=None, remote_aws_region=None, remote_filters={}, migration_prefix='', migration_suffix=''): """ :arg ilo: A :class:`curator.indexlist.IndexList` object :arg request_body: The body to send to :py:meth:`elasticsearch.Elasticsearch.reindex`, which must be complete and usable, as Curator will do no vetting of the request_body. If it fails to function, Curator will return an exception. :arg refresh: Whether to refresh the entire target index after the operation is complete. (default: `True`) :type refresh: bool :arg requests_per_second: The throttle to set on this request in sub-requests per second. ``-1`` means set no throttle as does ``unlimited`` which is the only non-float this accepts. (default: ``-1``) :arg slices: The number of slices this task should be divided into. 1 means the task will not be sliced into subtasks.
(default: ``1``) :arg timeout: The length in seconds each individual bulk request should wait for shards that are unavailable. (default: ``60``) :arg wait_for_active_shards: Sets the number of shard copies that must be active before proceeding with the reindex operation. (default: ``1``) means the primary shard only. Set to ``all`` for all shard copies, otherwise set to any non-negative value less than or equal to the total number of copies for the shard (number of replicas + 1) :arg wait_for_completion: Wait (or not) for the operation to complete before returning. (default: `True`) :type wait_for_completion: bool :arg wait_interval: How long in seconds to wait between checks for completion. :arg max_wait: Maximum number of seconds to `wait_for_completion` :arg remote_url_prefix: `Optional` url prefix, if needed to reach the Elasticsearch API (i.e., it's not at the root level) :type remote_url_prefix: str :arg remote_ssl_no_validate: If `True`, do not validate the certificate chain. This is an insecure option and you will see warnings in the log output. :type remote_ssl_no_validate: bool :arg remote_certificate: Path to SSL/TLS certificate :arg remote_client_cert: Path to SSL/TLS client certificate (public key) :arg remote_client_key: Path to SSL/TLS private key :arg remote_aws_key: AWS IAM Access Key (Only used if the :mod:`requests-aws4auth` python module is installed) :arg remote_aws_secret_key: AWS IAM Secret Access Key (Only used if the :mod:`requests-aws4auth` python module is installed) :arg remote_aws_region: AWS Region (Only used if the :mod:`requests-aws4auth` python module is installed) :arg remote_filters: Apply these filters to the remote client for remote index selection. :arg migration_prefix: When migrating, prepend this value to the index name. :arg migration_suffix: When migrating, append this value to the index name. """ self.loggit = logging.getLogger('curator.actions.reindex') verify_index_list(ilo) # Normally, we'd check for an empty list here. But since we can reindex # from remote, we might just be starting with an empty one. # ilo.empty_list_check() if not isinstance(request_body, dict): raise ConfigurationError('"request_body" is not of type dictionary') #: Instance variable. #: Internal reference to `request_body` self.body = request_body self.loggit.debug('REQUEST_BODY = {0}'.format(request_body)) #: Instance variable. #: The Elasticsearch Client object derived from `ilo` self.client = ilo.client #: Instance variable. #: Internal reference to `ilo` self.index_list = ilo #: Instance variable. #: Internal reference to `refresh` self.refresh = refresh #: Instance variable. #: Internal reference to `requests_per_second` self.requests_per_second = requests_per_second #: Instance variable. #: Internal reference to `slices` self.slices = slices #: Instance variable. #: Internal reference to `timeout`, and add "s" for seconds. self.timeout = '{0}s'.format(timeout) #: Instance variable. #: Internal reference to `wait_for_active_shards` self.wait_for_active_shards = wait_for_active_shards #: Instance variable. #: Internal reference to `wait_for_completion` self.wfc = wait_for_completion #: Instance variable #: How many seconds to wait between checks for completion. self.wait_interval = wait_interval #: Instance variable. #: How long in seconds to `wait_for_completion` before returning with an #: exception. A value of -1 means wait forever. self.max_wait = max_wait #: Instance variable. 
#: Internal reference to `migration_prefix` self.mpfx = migration_prefix #: Instance variable. #: Internal reference to `migration_suffix` self.msfx = migration_suffix # This is for error logging later... self.remote = False if 'remote' in self.body['source']: self.remote = True self.migration = False if self.body['dest']['index'] == 'MIGRATION': self.migration = True if self.migration: if not self.remote and not self.mpfx and not self.msfx: raise ConfigurationError( 'MIGRATION can only be used locally with one or both of ' 'migration_prefix or migration_suffix.' ) # REINDEX_SELECTION is the designated token. If you use this for the # source "index," it will be replaced with the list of indices from the # provided 'ilo' (index list object). if self.body['source']['index'] == 'REINDEX_SELECTION' \ and not self.remote: self.body['source']['index'] = self.index_list.indices # Remote section elif self.remote: self.loggit.debug('Remote reindex request detected') if 'host' not in self.body['source']['remote']: raise ConfigurationError('Missing remote "host"') rclient_info = {} for k in ['host', 'username', 'password']: rclient_info[k] = self.body['source']['remote'][k] \ if k in self.body['source']['remote'] else None rhost = rclient_info['host'] try: # Save these for logging later a = rhost.split(':') self.remote_port = a[2] self.remote_host = a[1][2:] except Exception as e: raise ConfigurationError( 'Host must be in the form [scheme]://[host]:[port] but ' 'was [{0}]'.format(rhost) ) rhttp_auth = '{0}:{1}'.format( rclient_info['username'],rclient_info['password']) \ if (rclient_info['username'] and rclient_info['password']) \ else None if rhost[:5] == 'http:': use_ssl = False elif rhost[:5] == 'https': use_ssl = True else: raise ConfigurationError( 'Host must be in URL format. You provided: ' '{0}'.format(rclient_info['host']) ) # Let's set a decent remote timeout for initially reading # the indices on the other side, and collecting their metadata remote_timeout = 180 # The rest only applies if using filters for remote indices if self.body['source']['index'] == 'REINDEX_SELECTION': self.loggit.debug('Filtering indices from remote') from .indexlist import IndexList self.loggit.debug('Remote client args: ' 'host={0} ' 'http_auth={1} ' 'url_prefix={2} ' 'use_ssl={3} ' 'ssl_no_validate={4} ' 'certificate={5} ' 'client_cert={6} ' 'client_key={7} ' 'aws_key={8} ' 'aws_secret_key={9} ' 'aws_region={10} ' 'timeout={11} ' 'skip_version_test=True'.format( rhost, rhttp_auth, remote_url_prefix, use_ssl, remote_ssl_no_validate, remote_certificate, remote_client_cert, remote_client_key, remote_aws_key, remote_aws_secret_key, remote_aws_region, remote_timeout ) ) try: # let's try to build a remote connection with these! rclient = get_client( host=rhost, http_auth=rhttp_auth, url_prefix=remote_url_prefix, use_ssl=use_ssl, ssl_no_validate=remote_ssl_no_validate, certificate=remote_certificate, client_cert=remote_client_cert, client_key=remote_client_key, aws_key=remote_aws_key, aws_secret_key=remote_aws_secret_key, aws_region=remote_aws_region, skip_version_test=True, timeout=remote_timeout ) except Exception as e: self.loggit.error( 'Unable to establish connection to remote Elasticsearch' ' with provided credentials/certificates/settings.' ) report_failure(e) try: rio = IndexList(rclient) rio.iterate_filters({'filters': remote_filters}) try: rio.empty_list_check() except NoIndices: raise FailedExecution( 'No actionable remote indices selected after ' 'applying filters.' 
) self.body['source']['index'] = rio.indices except Exception as e: self.loggit.error( 'Unable to get/filter list of remote indices.' ) report_failure(e) self.loggit.debug( 'Reindexing indices: {0}'.format(self.body['source']['index'])) def _get_request_body(self, source, dest): body = deepcopy(self.body) body['source']['index'] = source body['dest']['index'] = dest return body def _get_reindex_args(self, source, dest): # Always set wait_for_completion to False. Let 'wait_for_it' do its # thing if wait_for_completion is set to True. Report the task_id # either way. reindex_args = { 'body':self._get_request_body(source, dest), 'refresh':self.refresh, 'requests_per_second': self.requests_per_second, 'timeout': self.timeout, 'wait_for_active_shards': self.wait_for_active_shards, 'wait_for_completion': False, 'slices': self.slices } version = get_version(self.client) if version < (5,1,0): self.loggit.info( 'Your version of elasticsearch ({0}) does not support ' 'sliced scroll for reindex, so that setting will not be ' 'used'.format(version) ) del reindex_args['slices'] return reindex_args def _post_run_quick_check(self, index_name): # Verify the destination index is there after the fact index_exists = self.client.indices.exists(index=index_name) alias_instead = self.client.indices.exists_alias(name=index_name) if not index_exists and not alias_instead: self.loggit.error( 'The index described as "{0}" was not found after the reindex ' 'operation. Check Elasticsearch logs for more ' 'information.'.format(index_name) ) if self.remote: self.loggit.error( 'Did you forget to add "reindex.remote.whitelist: ' '{0}:{1}" to the elasticsearch.yml file on the ' '"dest" node?'.format( self.remote_host, self.remote_port ) ) raise FailedExecution( 'Reindex failed. The index or alias identified by "{0}" was ' 'not found.'.format(index_name) ) def sources(self): # Generator for sources & dests dest = self.body['dest']['index'] if not self.migration: yield self.body['source']['index'], dest # Loop over all sources (default will only be one) else: for source in ensure_list(self.body['source']['index']): if self.migration: dest = self.mpfx + source + self.msfx yield source, dest def show_run_args(self, source, dest): """ Show what will run """ return ('request body: {0} with arguments: ' 'refresh={1} ' 'requests_per_second={2} ' 'slices={3} ' 'timeout={4} ' 'wait_for_active_shards={5} ' 'wait_for_completion={6}'.format( self._get_request_body(source, dest), self.refresh, self.requests_per_second, self.slices, self.timeout, self.wait_for_active_shards, self.wfc ) ) def do_dry_run(self): """ Log what the output would be, but take no action. """ self.loggit.info('DRY-RUN MODE. No changes will be made.') for source, dest in self.sources(): self.loggit.info( 'DRY-RUN: REINDEX: {0}'.format(self.show_run_args(source, dest)) ) def do_action(self): """ Execute :py:meth:`elasticsearch.Elasticsearch.reindex` operation with the provided request_body and arguments. 
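        A minimal ``request_body`` sketch (illustrative only; the destination
        index name here is hypothetical)::

            {
                'source': { 'index': 'REINDEX_SELECTION' },
                'dest': { 'index': 'new-index' }
            }

        ``REINDEX_SELECTION`` is the designated token described in the class
        constructor; it is replaced with the indices selected by the provided
        index list object.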
""" try: # Loop over all sources (default will only be one) for source, dest in self.sources(): self.loggit.info('Commencing reindex operation') self.loggit.debug( 'REINDEX: {0}'.format(self.show_run_args(source, dest))) response = self.client.reindex( **self._get_reindex_args(source, dest)) self.loggit.debug('TASK ID = {0}'.format(response['task'])) if self.wfc: wait_for_it( self.client, 'reindex', task_id=response['task'], wait_interval=self.wait_interval, max_wait=self.max_wait ) self._post_run_quick_check(dest) else: self.loggit.warn( '"wait_for_completion" set to {0}. Remember ' 'to check task_id "{1}" for successful completion ' 'manually.'.format(self.wfc, response['task']) ) except Exception as e: report_failure(e) class Snapshot(object): def __init__(self, ilo, repository=None, name=None, ignore_unavailable=False, include_global_state=True, partial=False, wait_for_completion=True, wait_interval=9, max_wait=-1, skip_repo_fs_check=False): """ :arg ilo: A :class:`curator.indexlist.IndexList` object :arg repository: The Elasticsearch snapshot repository to use :arg name: What to name the snapshot. :arg wait_for_completion: Wait (or not) for the operation to complete before returning. (default: `True`) :type wait_for_completion: bool :arg wait_interval: How long in seconds to wait between checks for completion. :arg max_wait: Maximum number of seconds to `wait_for_completion` :arg ignore_unavailable: Ignore unavailable shards/indices. (default: `False`) :type ignore_unavailable: bool :arg include_global_state: Store cluster global state with snapshot. (default: `True`) :type include_global_state: bool :arg partial: Do not fail if primary shard is unavailable. (default: `False`) :type partial: bool :arg skip_repo_fs_check: Do not validate write access to repository on all cluster nodes before proceeding. (default: `False`). Useful for shared filesystems where intermittent timeouts can affect validation, but won't likely affect snapshot success. :type skip_repo_fs_check: bool """ verify_index_list(ilo) # Check here and don't bother with the rest of this if there are no # indices in the index list. ilo.empty_list_check() if not repository_exists(ilo.client, repository=repository): raise ActionError( 'Cannot snapshot indices to missing repository: ' '{0}'.format(repository) ) if not name: raise MissingArgument('No value for "name" provided.') #: Instance variable. #: The parsed version of `name` self.name = parse_date_pattern(name) #: Instance variable. #: The Elasticsearch Client object derived from `ilo` self.client = ilo.client #: Instance variable. #: Internal reference to `ilo` self.index_list = ilo #: Instance variable. #: Internally accessible copy of `repository` self.repository = repository #: Instance variable. #: Internally accessible copy of `wait_for_completion` self.wait_for_completion = wait_for_completion #: Instance variable #: How many seconds to wait between checks for completion. self.wait_interval = wait_interval #: Instance variable. #: How long in seconds to `wait_for_completion` before returning with an #: exception. A value of -1 means wait forever. self.max_wait = max_wait #: Instance variable. #: Internally accessible copy of `skip_repo_fs_check` self.skip_repo_fs_check = skip_repo_fs_check self.state = None #: Instance variable. 
        #: Populated at instance creation time by calling
        #: :mod:`curator.utils.create_snapshot_body` with `ilo.indices` and the
        #: provided arguments: `ignore_unavailable`, `include_global_state`,
        #: `partial`
        self.body = create_snapshot_body(
            ilo.indices,
            ignore_unavailable=ignore_unavailable,
            include_global_state=include_global_state,
            partial=partial
        )
        self.loggit = logging.getLogger('curator.actions.snapshot')

    def get_state(self):
        """
        Get the state of the snapshot
        """
        try:
            self.state = self.client.snapshot.get(
                repository=self.repository,
                snapshot=self.name)['snapshots'][0]['state']
            return self.state
        except IndexError:
            raise CuratorException(
                'Snapshot "{0}" not found in repository '
                '"{1}"'.format(self.name, self.repository)
            )

    def report_state(self):
        """
        Log the state of the snapshot
        """
        self.get_state()
        if self.state == 'SUCCESS':
            self.loggit.info(
                'Snapshot {0} successfully completed.'.format(self.name))
        else:
            self.loggit.warn(
                'Snapshot {0} completed with state: '
                '{1}'.format(self.name, self.state))

    def do_dry_run(self):
        """
        Log what the output would be, but take no action.
        """
        self.loggit.info('DRY-RUN MODE. No changes will be made.')
        self.loggit.info(
            'DRY-RUN: snapshot: {0} in repository {1} with arguments: '
            '{2}'.format(self.name, self.repository, self.body)
        )

    def do_action(self):
        """
        Snapshot indices in `index_list.indices`, with options passed.
        """
        if not self.skip_repo_fs_check:
            test_repo_fs(self.client, self.repository)
        if snapshot_running(self.client):
            raise SnapshotInProgress('Snapshot already in progress.')
        try:
            self.loggit.info('Creating snapshot "{0}" from indices: '
                '{1}'.format(self.name, self.index_list.indices)
            )
            # Always set wait_for_completion to False. Let 'wait_for_it' do its
            # thing if wait_for_completion is set to True. Report the task_id
            # either way.
            self.client.snapshot.create(
                repository=self.repository, snapshot=self.name, body=self.body,
                wait_for_completion=False
            )
            if self.wait_for_completion:
                wait_for_it(
                    self.client, 'snapshot', snapshot=self.name,
                    repository=self.repository,
                    wait_interval=self.wait_interval, max_wait=self.max_wait
                )
            else:
                self.loggit.warn(
                    '"wait_for_completion" set to {0}. '
                    'Remember to check for successful completion '
                    'manually.'.format(self.wait_for_completion)
                )
        except Exception as e:
            report_failure(e)

class Restore(object):
    def __init__(self, slo, name=None, indices=None, include_aliases=False,
                 ignore_unavailable=False, include_global_state=False,
                 partial=False, rename_pattern=None, rename_replacement=None,
                 extra_settings={}, wait_for_completion=True, wait_interval=9,
                 max_wait=-1, skip_repo_fs_check=False):
        """
        :arg slo: A :class:`curator.snapshotlist.SnapshotList` object
        :arg name: Name of the snapshot to restore.  If no name is provided, it
            will restore the most recent snapshot by age.
        :type name: str
        :arg indices: A list of indices to restore.  If no indices are
            provided, it will restore all indices in the snapshot.
        :type indices: list
        :arg include_aliases: If set to `True`, restore aliases with the
            indices. (default: `False`)
        :type include_aliases: bool
        :arg ignore_unavailable: Ignore unavailable shards/indices. (default:
            `False`)
        :type ignore_unavailable: bool
        :arg include_global_state: Restore cluster global state with snapshot.
            (default: `False`)
        :type include_global_state: bool
        :arg partial: Do not fail if primary shard is unavailable. (default:
            `False`)
        :type partial: bool
        :arg rename_pattern: A regular expression pattern with one or more
            captures, e.g. ``index_(.+)``
        :type rename_pattern: str
        :arg rename_replacement: A target index name pattern with `$#` numbered
            references to the captures in ``rename_pattern``, e.g.
            ``restored_index_$1``
        :type rename_replacement: str
        :arg extra_settings: Extra settings, including shard count and settings
            to omit. For more information see
            https://www.elastic.co/guide/en/elasticsearch/reference/current/modules-snapshots.html#_changing_index_settings_during_restore
        :type extra_settings: dict, representing the settings.
        :arg wait_for_completion: Wait (or not) for the operation to complete
            before returning.  (default: `True`)
        :type wait_for_completion: bool
        :arg wait_interval: How long in seconds to wait between checks for
            completion.
        :arg max_wait: Maximum number of seconds to `wait_for_completion`
        :arg skip_repo_fs_check: Do not validate write access to repository on
            all cluster nodes before proceeding. (default: `False`).  Useful
            for shared filesystems where intermittent timeouts can affect
            validation, but won't likely affect snapshot success.
        :type skip_repo_fs_check: bool
        """
        self.loggit = logging.getLogger('curator.actions.snapshot')
        verify_snapshot_list(slo)
        # Get the most recent snapshot.
        most_recent = slo.most_recent()
        self.loggit.debug('"most_recent" snapshot: {0}'.format(most_recent))
        #: Instance variable.
        #: Will use a provided snapshot name, or the most recent snapshot in slo
        self.name = name if name else most_recent
        # Stop here now, if it's not a successful snapshot.
        if slo.snapshot_info[self.name]['state'] == 'PARTIAL' and partial:
            self.loggit.warn(
                'Performing restore of snapshot in state PARTIAL.')
        elif slo.snapshot_info[self.name]['state'] != 'SUCCESS':
            raise CuratorException(
                'Restore operation can only be performed on snapshots with '
                'state "SUCCESS", or "PARTIAL" if partial=True.'
            )
        #: Instance variable.
        #: The Elasticsearch Client object derived from `slo`
        self.client = slo.client
        #: Instance variable.
        #: Internal reference to `slo`
        self.snapshot_list = slo
        #: Instance variable.
        #: `repository` derived from `slo`
        self.repository = slo.repository
        if indices:
            self.indices = ensure_list(indices)
        else:
            self.indices = slo.snapshot_info[self.name]['indices']
        self.wfc = wait_for_completion
        #: Instance variable
        #: How many seconds to wait between checks for completion.
        self.wait_interval = wait_interval
        #: Instance variable.
        #: How long in seconds to `wait_for_completion` before returning with
        #: an exception.  A value of -1 means wait forever.
        self.max_wait = max_wait
        #: Instance variable version of ``rename_pattern``
        self.rename_pattern = rename_pattern if rename_pattern is not None \
            else ''
        #: Instance variable version of ``rename_replacement``
        self.rename_replacement = rename_replacement if rename_replacement \
            is not None else ''
        #: Also an instance variable version of ``rename_replacement``
        #: but with Java regex group designations of ``$#``
        #: converted to Python's ``\\#`` style.
        self.py_rename_replacement = self.rename_replacement.replace('$', '\\')
        #: Instance variable.
        #: Internally accessible copy of `skip_repo_fs_check`
        self.skip_repo_fs_check = skip_repo_fs_check
        #: Instance variable.
#: Populated at instance creation time from the other options self.body = { 'indices' : self.indices, 'include_aliases' : include_aliases, 'ignore_unavailable' : ignore_unavailable, 'include_global_state' : include_global_state, 'partial' : partial, 'rename_pattern' : self.rename_pattern, 'rename_replacement' : self.rename_replacement, } if extra_settings: self.loggit.debug( 'Adding extra_settings to restore body: ' '{0}'.format(extra_settings) ) try: self.body.update(extra_settings) except: self.loggit.error( 'Unable to apply extra settings to restore body') self.loggit.debug('REPOSITORY: {0}'.format(self.repository)) self.loggit.debug('WAIT_FOR_COMPLETION: {0}'.format(self.wfc)) self.loggit.debug( 'SKIP_REPO_FS_CHECK: {0}'.format(self.skip_repo_fs_check)) self.loggit.debug('BODY: {0}'.format(self.body)) # Populate the expected output index list. self._get_expected_output() def _get_expected_output(self): if not self.rename_pattern and not self.rename_replacement: self.expected_output = self.indices return # Don't stick around if we're not replacing anything self.expected_output = [] for index in self.indices: self.expected_output.append( re.sub( self.rename_pattern, self.py_rename_replacement, index ) ) self.loggit.debug('index: {0} replacement: ' '{1}'.format(index, self.expected_output[-1]) ) def report_state(self): """ Log the state of the restore This should only be done if ``wait_for_completion`` is `True`, and only after completing the restore. """ all_indices = get_indices(self.client) found_count = 0 missing = [] for index in self.expected_output: if index in all_indices: found_count += 1 self.loggit.info('Found restored index {0}'.format(index)) else: missing.append(index) if found_count == len(self.expected_output): self.loggit.info('All indices appear to have been restored.') else: self.loggit.error( 'Some of the indices do not appear to have been restored. ' 'Missing: {0}'.format(missing) ) def do_dry_run(self): """ Log what the output would be, but take no action. """ logger.info('DRY-RUN MODE. No changes will be made.') logger.info( 'DRY-RUN: restore: Repository: {0} Snapshot name: {1} Arguments: ' '{2}'.format( self.repository, self.name, { 'wait_for_completion' : self.wfc, 'body' : self.body } ) ) for index in self.indices: if self.rename_pattern and self.rename_replacement: replacement_msg = 'as {0}'.format( re.sub( self.rename_pattern, self.py_rename_replacement, index ) ) else: replacement_msg = '' logger.info( 'DRY-RUN: restore: Index {0} {1}'.format(index, replacement_msg) ) def do_action(self): """ Restore indices with options passed. """ if not self.skip_repo_fs_check: test_repo_fs(self.client, self.repository) if snapshot_running(self.client): raise SnapshotInProgress( 'Cannot restore while a snapshot is in progress.') try: self.loggit.info('Restoring indices "{0}" from snapshot: ' '{1}'.format(self.indices, self.name) ) # Always set wait_for_completion to False. Let 'wait_for_it' do its # thing if wait_for_completion is set to True. Report the task_id # either way. self.client.snapshot.restore( repository=self.repository, snapshot=self.name, body=self.body, wait_for_completion=False ) if self.wfc: wait_for_it( self.client, 'restore', index_list=self.expected_output, wait_interval=self.wait_interval, max_wait=self.max_wait ) else: self.loggit.warn( '"wait_for_completion" set to {0}. 
' 'Remember to check for successful completion ' 'manually.'.format(self.wfc) ) except Exception as e: report_failure(e) class Shrink(object): def __init__(self, ilo, shrink_node='DETERMINISTIC', node_filters={}, number_of_shards=1, number_of_replicas=1, shrink_prefix='', shrink_suffix='-shrink', delete_after=True, post_allocation={}, wait_for_active_shards=1, extra_settings={}, wait_for_completion=True, wait_interval=9, max_wait=-1): """ :arg ilo: A :class:`curator.indexlist.IndexList` object :arg shrink_node: The node name to use as the shrink target, or ``DETERMINISTIC``, which will use the values in ``node_filters`` to determine which node will be the shrink node. :arg node_filters: If the value of ``shrink_node`` is ``DETERMINISTIC``, the values in ``node_filters`` will be used while determining which node to allocate the shards on before performing the shrink. :type node_filters: dict, representing the filters :arg number_of_shards: The number of shards the shrunk index should have :arg number_of_replicas: The number of replicas for the shrunk index :arg shrink_prefix: Prepend the shrunk index with this value :arg shrink_suffix: Append the value to the shrunk index (default: `-shrink`) :arg delete_after: Whether to delete each index after shrinking. (default: `True`) :type delete_after: bool :arg post_allocation: If populated, the `allocation_type`, `key`, and `value` will be applied to the shrunk index to re-route it. :type post_allocation: dict, with keys `allocation_type`, `key`, and `value` :arg wait_for_active_shards: The number of shards expected to be active before returning. :arg extra_settings: Permitted root keys are `settings` and `aliases`. See https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-shrink-index.html :type extra_settings: dict :arg wait_for_active_shards: Wait for active shards before returning. :arg wait_for_completion: Wait (or not) for the operation to complete before returning. You should not normally change this, ever. (default: `True`) :arg wait_interval: How long in seconds to wait between checks for completion. :arg max_wait: Maximum number of seconds to `wait_for_completion` :type wait_for_completion: bool """ self.loggit = logging.getLogger('curator.actions.shrink') verify_index_list(ilo) if not 'permit_masters' in node_filters: node_filters['permit_masters'] = False #: Instance variable. The Elasticsearch Client object derived from `ilo` self.client = ilo.client #: Instance variable. Internal reference to `ilo` self.index_list = ilo #: Instance variable. Internal reference to `shrink_node` self.shrink_node = shrink_node #: Instance variable. Internal reference to `node_filters` self.node_filters = node_filters #: Instance variable. Internal reference to `shrink_prefix` self.shrink_prefix = shrink_prefix #: Instance variable. Internal reference to `shrink_suffix` self.shrink_suffix = shrink_suffix #: Instance variable. Internal reference to `delete_after` self.delete_after = delete_after #: Instance variable. Internal reference to `post_allocation` self.post_allocation = post_allocation #: Instance variable. Internal reference to `wait_for_completion` self.wfc = wait_for_completion #: Instance variable. How many seconds to wait between checks for completion. self.wait_interval = wait_interval #: Instance variable. How long in seconds to `wait_for_completion` before returning with an exception. A value of -1 means wait forever. self.max_wait = max_wait #: Instance variable. 
        #: Internal reference to `number_of_shards`
        self.number_of_shards = number_of_shards
        self.wait_for_active_shards = wait_for_active_shards
        self.shrink_node_name = None
        self.body = {
            'settings': {
                'index.number_of_shards' : number_of_shards,
                'index.number_of_replicas' : number_of_replicas,
            }
        }
        if extra_settings:
            self._merge_extra_settings(extra_settings)

    def _merge_extra_settings(self, extra_settings):
        self.loggit.debug(
            'Adding extra_settings to shrink body: '
            '{0}'.format(extra_settings)
        )
        # Pop these here, otherwise we could overwrite our default number of
        # shards and replicas
        if 'settings' in extra_settings:
            settings = extra_settings.pop('settings')
            try:
                self.body['settings'].update(settings)
            except Exception:
                raise ConfigurationError(
                    'Unable to apply extra settings "{0}" to shrink '
                    'body'.format({'settings': settings}))
        if extra_settings:
            try: # Apply any remaining keys, should there be any.
                self.body.update(extra_settings)
            except Exception:
                raise ConfigurationError(
                    'Unable to apply extra settings "{0}" to shrink '
                    'body'.format(extra_settings))

    def _data_node(self, node_id):
        roles = node_roles(self.client, node_id)
        name = node_id_to_name(self.client, node_id)
        if 'data' not in roles:
            self.loggit.info('Skipping node "{0}": non-data node'.format(name))
            return False
        if 'master' in roles and not self.node_filters['permit_masters']:
            self.loggit.info('Skipping node "{0}": master node'.format(name))
            return False
        elif 'master' in roles and self.node_filters['permit_masters']:
            self.loggit.warn(
                'Not skipping node "{0}" which is a master node (not '
                'recommended), but permit_masters is True'.format(name))
            return True
        else: # It does have `data` as a role.
            return True

    def _exclude_node(self, name):
        if 'exclude_nodes' in self.node_filters:
            if name in self.node_filters['exclude_nodes']:
                self.loggit.info(
                    'Excluding node "{0}" due to node_filters'.format(name))
                return True
        return False

    def _shrink_target(self, name):
        return '{0}{1}{2}'.format(self.shrink_prefix, name, self.shrink_suffix)

    def qualify_single_node(self):
        node_id = name_to_node_id(self.client, self.shrink_node)
        if node_id:
            self.shrink_node_id = node_id
            self.shrink_node_name = self.shrink_node
        else:
            raise ConfigurationError(
                'Unable to find node named: "{0}"'.format(self.shrink_node))
        if self._exclude_node(self.shrink_node):
            raise ConfigurationError(
                'Node "{0}" listed for exclusion'.format(self.shrink_node))
        if not self._data_node(node_id):
            raise ActionError(
                'Node "{0}" is not usable as a shrink '
                'node'.format(self.shrink_node))
        if not single_data_path(self.client, node_id):
            raise ActionError(
                'Node "{0}" has multiple data paths and cannot be used '
                'for shrink operations.'.format(self.shrink_node)
            )
        self.shrink_node_avail = (
            self.client.nodes.stats()['nodes'][node_id]['fs']['total']['available_in_bytes']
        )

    def most_available_node(self):
        """
        Determine which data node name has the most available free space, and
        meets the other node filters settings.  Takes no arguments; the client
        object stored on the instance is used.
        """
        mvn_avail = 0
        # mvn_total = 0
        mvn_name = None
        mvn_id = None
        nodes = self.client.nodes.stats()['nodes']
        for node_id in nodes:
            # Reuse the stats payload collected above rather than calling the
            # nodes stats API again for each node.
            name = nodes[node_id]['name']
            if self._exclude_node(name):
                self.loggit.debug(
                    'Node "{0}" excluded by node filters'.format(name))
                continue
            if not self._data_node(node_id):
                self.loggit.debug('Node "{0}" is not a data node'.format(name))
                continue
            if not single_data_path(self.client, node_id):
                self.loggit.info(
                    'Node "{0}" has multiple data paths and will not be used '
                    'for shrink operations.'.format(name))
                continue
            value = nodes[node_id]['fs']['total']['available_in_bytes']
            if value > mvn_avail:
                mvn_name = name
                mvn_id = node_id
                mvn_avail = value
                # mvn_total = nodes[node_id]['fs']['total']['total_in_bytes']
        self.shrink_node_name = mvn_name
        self.shrink_node_id = mvn_id
        self.shrink_node_avail = mvn_avail
        # self.shrink_node_total = mvn_total

    def route_index(self, idx, allocation_type, key, value):
        bkey = 'index.routing.allocation.{0}.{1}'.format(allocation_type, key)
        routing = { bkey : value }
        try:
            self.client.indices.put_settings(index=idx, body=routing)
            wait_for_it(
                self.client, 'allocation', wait_interval=self.wait_interval,
                max_wait=self.max_wait)
        except Exception as e:
            report_failure(e)

    def __log_action(self, error_msg, dry_run=False):
        if not dry_run:
            raise ActionError(error_msg)
        else:
            self.loggit.warn('DRY-RUN: {0}'.format(error_msg))

    def _block_writes(self, idx):
        block = { 'index.blocks.write': True }
        self.client.indices.put_settings(index=idx, body=block)

    def _unblock_writes(self, idx):
        unblock = { 'index.blocks.write': False }
        self.client.indices.put_settings(index=idx, body=unblock)

    def _check_space(self, idx, dry_run=False):
        # Disk watermark calculation is already baked into `available_in_bytes`
        size = index_size(self.client, idx)
        padded = (size * 2) + (32 * 1024)
        if padded < self.shrink_node_avail:
            self.loggit.debug(
                'Sufficient space available for 2x the size of index "{0}". '
                'Required: {1}, available: {2}'.format(
                    idx, padded, self.shrink_node_avail))
        else:
            error_msg = (
                'Insufficient space available for 2x the size of index "{0}", '
                'shrinking will exceed space available. Required: {1}, '
                'available: {2}'.format(idx, padded, self.shrink_node_avail))
            self.__log_action(error_msg, dry_run)

    def _check_node(self):
        if self.shrink_node != 'DETERMINISTIC':
            if not self.shrink_node_name:
                self.qualify_single_node()
        else:
            self.most_available_node()
        # At this point, we should have the three shrink-node identifying
        # instance variables:
        # - self.shrink_node_name
        # - self.shrink_node_id
        # - self.shrink_node_avail
        #
        # - self.shrink_node_total - only if needed in the future

    def _check_target_exists(self, idx, dry_run=False):
        target = self._shrink_target(idx)
        if self.client.indices.exists(target):
            error_msg = 'Target index "{0}" already exists'.format(target)
            self.__log_action(error_msg, dry_run)

    def _check_doc_count(self, idx, dry_run=False):
        max_docs = 2147483519
        doc_count = self.client.indices.stats(idx)['indices'][idx]['primaries']['docs']['count']
        if doc_count > (max_docs * self.number_of_shards):
            error_msg = ('Too many documents ({0}) to fit in {1} shard(s). 
Maximum number of docs per shard is {2}'.format(doc_count, self.number_of_shards, max_docs)) self.__log_action(error_msg, dry_run) def _check_shard_count(self, idx, src_shards, dry_run=False): if self.number_of_shards >= src_shards: error_msg = ('Target number of shards ({0}) must be less than current number of shards ({1}) in index "{2}"'.format(self.number_of_shards, src_shards, idx)) self.__log_action(error_msg, dry_run) def _check_shard_factor(self, idx, src_shards, dry_run=False): # Find the list of factors of src_shards factors = [x for x in range(1,src_shards+1) if src_shards % x == 0] # Pop the last one, because it will be the value of src_shards factors.pop() if not self.number_of_shards in factors: error_msg = ( '"{0}" is not a valid factor of {1} shards. Valid values are ' '{2}'.format(self.number_of_shards, src_shards, factors) ) self.__log_action(error_msg, dry_run) def _check_all_shards(self, idx): shards = self.client.cluster.state(index=idx)['routing_table']['indices'][idx]['shards'] found = [] for shardnum in shards: for shard_idx in range(0, len(shards[shardnum])): if shards[shardnum][shard_idx]['node'] == self.shrink_node_id: found.append({'shard': shardnum, 'primary': shards[shardnum][shard_idx]['primary']}) if len(shards) != len(found): self.loggit.debug('Found these shards on node "{0}": {1}'.format(self.shrink_node_name, found)) raise ActionError('Unable to shrink index "{0}" as not all shards were found on the designated shrink node ({1}): {2}'.format(idx, self.shrink_node_name, found)) def pre_shrink_check(self, idx, dry_run=False): self.loggit.debug('BEGIN PRE_SHRINK_CHECK') self.loggit.debug('Check that target exists') self._check_target_exists(idx, dry_run) self.loggit.debug('Check doc count constraints') self._check_doc_count(idx, dry_run) self.loggit.debug('Check shard count') src_shards = int(self.client.indices.get(idx)[idx]['settings']['index']['number_of_shards']) self._check_shard_count(idx, src_shards, dry_run) self.loggit.debug('Check shard factor') self._check_shard_factor(idx, src_shards, dry_run) self.loggit.debug('Check node availability') self._check_node() self.loggit.debug('Check available disk space') self._check_space(idx, dry_run) self.loggit.debug('FINISH PRE_SHRINK_CHECK') def do_dry_run(self): """ Show what a regular run would do, but don't actually do it. """ self.index_list.filter_closed() self.index_list.empty_list_check() try: index_lists = chunk_index_list(self.index_list.indices) for l in index_lists: for idx in l: # Shrink can only be done one at a time... target = self._shrink_target(idx) self.pre_shrink_check(idx, dry_run=True) self.loggit.info('DRY-RUN: Moving shards to shrink node: "{0}"'.format(self.shrink_node_name)) self.loggit.info('DRY-RUN: Shrinking index "{0}" to "{1}" with settings: {2}, wait_for_active_shards={3}'.format(idx, target, self.body, self.wait_for_active_shards)) if self.post_allocation: self.loggit.info('DRY-RUN: Applying post-shrink allocation rule "{0}" to index "{1}"'.format('index.routing.allocation.{0}.{1}:{2}'.format(self.post_allocation['allocation_type'], self.post_allocation['key'], self.post_allocation['value']), target)) if self.delete_after: self.loggit.info('DRY-RUN: Deleting source index "{0}"'.format(idx)) except Exception as e: report_failure(e) def do_action(self): self.index_list.filter_closed() self.index_list.empty_list_check() try: index_lists = chunk_index_list(self.index_list.indices) for l in index_lists: for idx in l: # Shrink can only be done one at a time... 
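# Descriptive overview of the per-index steps that follow, in order:
#   1. pre_shrink_check() validates the target name, doc count, shard
#      count and factor, shrink node availability, and disk space
#   2. route_index() relocates all of the index's shards to the shrink node
#   3. writes are blocked and cluster health must be green before shrinking
#   4. indices.shrink() builds the target index, optionally waiting for
#      shard allocation to complete
#   5. post-allocation routing is applied and the source index is deleted
#      (or its routing unset) per configuration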
target = self._shrink_target(idx) self.loggit.info('Source index: {0} -- Target index: {1}'.format(idx, target)) # Pre-check ensures disk space available for each pass of the loop self.pre_shrink_check(idx) # Route the index to the shrink node self.loggit.info('Moving shards to shrink node: "{0}"'.format(self.shrink_node_name)) self.route_index(idx, 'require', '_name', self.shrink_node_name) # Ensure a copy of each shard is present self._check_all_shards(idx) # Block writes on index self._block_writes(idx) # Do final health check if not health_check(self.client, status='green'): raise ActionError('Unable to proceed with shrink action. Cluster health is not "green"') # Do the shrink self.loggit.info('Shrinking index "{0}" to "{1}" with settings: {2}, wait_for_active_shards={3}'.format(idx, target, self.body, self.wait_for_active_shards)) self.client.indices.shrink(index=idx, target=target, body=self.body, wait_for_active_shards=self.wait_for_active_shards) # Wait for it to complete if self.wfc: self.loggit.debug('Wait for shards to complete allocation for index: {0}'.format(target)) wait_for_it(self.client, 'shrink', wait_interval=self.wait_interval, max_wait=self.max_wait) self.loggit.info('Index "{0}" successfully shrunk to "{1}"'.format(idx, target)) # Do post-shrink steps # Unblock writes on index (just in case) self._unblock_writes(idx) ## Post-allocation, if enabled if self.post_allocation: self.loggit.info('Applying post-shrink allocation rule "{0}" to index "{1}"'.format('index.routing.allocation.{0}.{1}:{2}'.format(self.post_allocation['allocation_type'], self.post_allocation['key'], self.post_allocation['value']), target)) self.route_index(target, self.post_allocation['allocation_type'], self.post_allocation['key'], self.post_allocation['value']) ## Delete, if flagged if self.delete_after: self.loggit.info('Deleting source index "{0}"'.format(idx)) self.client.indices.delete(index=idx) else: # Let's unset the routing we applied here. self.loggit.info('Unassigning routing for source index: "{0}"'.format(idx)) self.route_index(idx, 'require', '_name', '') except Exception as e: # Just in case it fails after attempting to meet this condition self._unblock_writes(idx) report_failure(e) curator-5.2.0/curator/cli.py000066400000000000000000000202561315226075300160060ustar00rootroot00000000000000import os, sys import yaml import logging import click from voluptuous import Schema from .defaults import settings from .validators import SchemaCheck from .config_utils import process_config from .exceptions import * from .utils import * from .indexlist import IndexList from .snapshotlist import SnapshotList from .actions import * from ._version import __version__ CLASS_MAP = { 'alias' : Alias, 'allocation' : Allocation, 'close' : Close, 'cluster_routing' : ClusterRouting, 'create_index' : CreateIndex, 'delete_indices' : DeleteIndices, 'delete_snapshots' : DeleteSnapshots, 'forcemerge' : ForceMerge, 'index_settings' : IndexSettings, 'open' : Open, 'reindex' : Reindex, 'replicas' : Replicas, 'restore' : Restore, 'rollover' : Rollover, 'snapshot' : Snapshot, 'shrink' : Shrink, } def process_action(client, config, **kwargs): """ Do the `action` in the configuration dictionary, using the associated args. Other necessary args may be passed as keyword arguments :arg config: An `action` dictionary. 
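    A hypothetical sketch of such a dictionary (any action name from
    CLASS_MAP is valid; options and filters vary by action)::

        {
            'action': 'delete_indices',
            'description': 'Delete selected indices',
            'options': {'ignore_empty_list': True},
            'filters': [
                {'filtertype': 'age', 'source': 'creation_date',
                 'direction': 'older', 'unit': 'days', 'unit_count': 45}
            ]
        }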
""" logger = logging.getLogger(__name__) # Make some placeholder variables here for readability logger.debug('Configuration dictionary: {0}'.format(config)) logger.debug('kwargs: {0}'.format(kwargs)) action = config['action'] # This will always have some defaults now, so no need to do the if... # # OLD WAY: opts = config['options'] if 'options' in config else {} opts = config['options'] logger.debug('opts: {0}'.format(opts)) mykwargs = {} action_class = CLASS_MAP[action] # Add some settings to mykwargs... if action == 'delete_indices': mykwargs['master_timeout'] = ( kwargs['master_timeout'] if 'master_timeout' in kwargs else 30) ### Update the defaults with whatever came with opts, minus any Nones mykwargs.update(prune_nones(opts)) logger.debug('Action kwargs: {0}'.format(mykwargs)) ### Set up the action ### if action == 'alias': # Special behavior for this action, as it has 2 index lists logger.debug('Running "{0}" action'.format(action.upper())) action_obj = action_class(**mykwargs) if 'add' in config: logger.debug('Adding indices to alias "{0}"'.format(opts['name'])) adds = IndexList(client) adds.iterate_filters(config['add']) action_obj.add(adds, warn_if_no_indices=opts['warn_if_no_indices']) if 'remove' in config: logger.debug( 'Removing indices from alias "{0}"'.format(opts['name'])) removes = IndexList(client) removes.iterate_filters(config['remove']) action_obj.remove( removes, warn_if_no_indices= opts['warn_if_no_indices']) elif action in [ 'cluster_routing', 'create_index', 'rollover']: action_obj = action_class(client, **mykwargs) elif action == 'delete_snapshots' or action == 'restore': logger.debug('Running "{0}"'.format(action)) slo = SnapshotList(client, repository=opts['repository']) slo.iterate_filters(config) # We don't need to send this value to the action mykwargs.pop('repository') action_obj = action_class(slo, **mykwargs) else: logger.debug('Running "{0}"'.format(action.upper())) ilo = IndexList(client) ilo.iterate_filters(config) action_obj = action_class(ilo, **mykwargs) ### Do the action if 'dry_run' in kwargs and kwargs['dry_run'] == True: action_obj.do_dry_run() else: logger.debug('Doing the action here.') action_obj.do_action() def run(config, action_file, dry_run=False): """ Actually run. """ client_args = process_config(config) logger = logging.getLogger(__name__) logger.debug('Client and logging options validated.') # Extract this and save it for later, in case there's no timeout_override. 
default_timeout = client_args.pop('timeout') logger.debug('default_timeout = {0}'.format(default_timeout)) ######################################### ### Start working on the actions here ### ######################################### logger.debug('action_file: {0}'.format(action_file)) action_config = get_yaml(action_file) logger.debug('action_config: {0}'.format(action_config)) action_dict = validate_actions(action_config) actions = action_dict['actions'] logger.debug('Full list of actions: {0}'.format(actions)) action_keys = sorted(list(actions.keys())) for idx in action_keys: action = actions[idx]['action'] action_disabled = actions[idx]['options'].pop('disable_action') logger.debug('action_disabled = {0}'.format(action_disabled)) continue_if_exception = ( actions[idx]['options'].pop('continue_if_exception')) logger.debug( 'continue_if_exception = {0}'.format(continue_if_exception)) timeout_override = actions[idx]['options'].pop('timeout_override') logger.debug('timeout_override = {0}'.format(timeout_override)) ignore_empty_list = actions[idx]['options'].pop('ignore_empty_list') logger.debug('ignore_empty_list = {0}'.format(ignore_empty_list)) ### Skip to next action if 'disabled' if action_disabled: logger.info( 'Action ID: {0}: "{1}" not performed because "disable_action" ' 'is set to True'.format(idx, action) ) continue else: logger.info('Preparing Action ID: {0}, "{1}"'.format(idx, action)) # Override the timeout, if specified, otherwise use the default. if isinstance(timeout_override, int): client_args['timeout'] = timeout_override else: client_args['timeout'] = default_timeout # Set up action kwargs kwargs = {} kwargs['master_timeout'] = ( client_args['timeout'] if client_args['timeout'] <= 300 else 300) kwargs['dry_run'] = dry_run # Create a client object for each action... client = get_client(**client_args) logger.debug('client is {0}'.format(type(client))) ########################## ### Process the action ### ########################## try: logger.info('Trying Action ID: {0}, "{1}": ' '{2}'.format(idx, action, actions[idx]['description']) ) process_action(client, actions[idx], **kwargs) except Exception as e: if isinstance(e, NoIndices) or isinstance(e, NoSnapshots): if ignore_empty_list: logger.info( 'Skipping action "{0}" due to empty list: ' '{1}'.format(action, type(e)) ) else: logger.error( 'Unable to complete action "{0}". No actionable items ' 'in list: {1}'.format(action, type(e)) ) sys.exit(1) else: logger.error( 'Failed to complete action: {0}. {1}: ' '{2}'.format(action, type(e), e) ) if continue_if_exception: logger.info( 'Continuing execution with next action because ' '"continue_if_exception" is set to True for action ' '{0}'.format(action) ) else: sys.exit(1) logger.info('Action ID: {0}, "{1}" completed.'.format(idx, action)) logger.info('Job completed.') @click.command() @click.option('--config', help="Path to configuration file. Default: ~/.curator/curator.yml", type=click.Path(exists=True), default=settings.config_file() ) @click.option('--dry-run', is_flag=True, help='Do not perform any changes.') @click.argument('action_file', type=click.Path(exists=True), nargs=1) @click.version_option(version=__version__) def cli(config, dry_run, action_file): """ Curator for Elasticsearch indices. 
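    A typical invocation (illustrative; both paths are hypothetical):

        curator --config /path/to/curator.yml --dry-run /path/to/actions.yml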
See http://elastic.co/guide/en/elasticsearch/client/curator/current """ run(config, action_file, dry_run)curator-5.2.0/curator/config_utils.py000066400000000000000000000034331315226075300177220ustar00rootroot00000000000000from voluptuous import Schema from .validators import SchemaCheck, config_file from .utils import * from .logtools import LogInfo, Whitelist, Blacklist def test_config(config): # Get config from yaml file yaml_config = get_yaml(config) # if the file is empty, which is still valid yaml, set as an empty dict yaml_config = {} if not yaml_config else prune_nones(yaml_config) # Voluptuous can't verify the schema of a dict if it doesn't have keys, # so make sure the keys are at least there and are dict() for k in ['client', 'logging']: if k not in yaml_config: yaml_config[k] = {} else: yaml_config[k] = prune_nones(yaml_config[k]) return SchemaCheck(yaml_config, config_file.client(), 'Client Configuration', 'full configuration dictionary').result() def set_logging(log_opts): try: from logging import NullHandler except ImportError: from logging import Handler class NullHandler(Handler): def emit(self, record): pass # Set up logging loginfo = LogInfo(log_opts) logging.root.addHandler(loginfo.handler) logging.root.setLevel(loginfo.numeric_log_level) logger = logging.getLogger('curator.cli') # Set up NullHandler() to handle nested elasticsearch.trace Logger # instance in elasticsearch python client logging.getLogger('elasticsearch.trace').addHandler(NullHandler()) if log_opts['blacklist']: for bl_entry in ensure_list(log_opts['blacklist']): for handler in logging.root.handlers: handler.addFilter(Blacklist(bl_entry)) def process_config(yaml_file): config = test_config(yaml_file) set_logging(config['logging']) test_client_options(config['client']) return config['client'] curator-5.2.0/curator/curator_cli.py000077500000000000000000000001061315226075300175400ustar00rootroot00000000000000import click from .singletons import cli def main(): cli(obj={}) curator-5.2.0/curator/defaults/000077500000000000000000000000001315226075300164675ustar00rootroot00000000000000curator-5.2.0/curator/defaults/__init__.py000066400000000000000000000000001315226075300205660ustar00rootroot00000000000000curator-5.2.0/curator/defaults/client_defaults.py000066400000000000000000000033471315226075300222150ustar00rootroot00000000000000from voluptuous import * # Configuration file: client def config_client(): return { Optional('hosts', default='127.0.0.1'): Any(None, str, unicode, list), Optional('port', default=9200): Any( None, All(Coerce(int), Range(min=1, max=65535)) ), Optional('url_prefix', default=''): Any(None, str, unicode), Optional('use_ssl', default=False): Boolean(), Optional('certificate', default=None): Any(None, str, unicode), Optional('client_cert', default=None): Any(None, str, unicode), Optional('client_key', default=None): Any(None, str, unicode), Optional('aws_key', default=None): Any(None, str, unicode), Optional('aws_secret_key', default=None): Any(None, str, unicode), Optional('aws_region', default=None): Any(None, str, unicode), Optional('ssl_no_validate', default=False): Boolean(), Optional('http_auth', default=None): Any(None, str, unicode), Optional('timeout', default=30): All( Coerce(int), Range(min=1, max=86400)), Optional('master_only', default=False): Boolean(), } # Configuration file: logging def config_logging(): return { Optional( 'loglevel', default='INFO'): Any(None, 'NOTSET', 'DEBUG', 'INFO', 'WARNING', 'ERROR', 'CRITICAL', All(Coerce(int), Any(0, 10, 20, 30, 40, 50)) ), 
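        # A matching 'logging' block in curator.yml might look like this
        # (values are illustrative only; the logfile path is hypothetical):
        #
        #   logging:
        #     loglevel: INFO
        #     logfile: /var/log/curator.log
        #     logformat: default
        #     blacklist: ['elasticsearch', 'urllib3']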
Optional('logfile', default=None): Any(None, str, unicode), Optional( 'logformat', default='default'): Any(None, All( Any(str, unicode), Any('default', 'json', 'logstash') ) ), Optional( 'blacklist', default=['elasticsearch', 'urllib3']): Any(None, list), }curator-5.2.0/curator/defaults/filter_elements.py000066400000000000000000000122031315226075300222200ustar00rootroot00000000000000from voluptuous import * from . import settings ### Schema information ### def aliases(**kwargs): # This setting is used by the alias filtertype and is required return { Required('aliases'): Any(str, [str], unicode, [unicode]) } def allocation_type(**kwargs): return { Optional('allocation_type', default='require'): All( Any(str, unicode), Any('require', 'include', 'exclude')) } def count(**kwargs): # This setting is only used with the count filtertype and is required return { Required('count'): All(Coerce(int), Range(min=1)) } def direction(**kwargs): # This setting is only used with the age filtertype. return { Required('direction'): Any('older', 'younger') } def disk_space(**kwargs): # This setting is only used with the space filtertype and is required return { Required('disk_space'): Any(Coerce(float)) } def epoch(**kwargs): # This setting is only used with the age filtertype. return { Optional('epoch', default=None): Any(Coerce(int), None) } def exclude(**kwargs): # This setting is available in all filter types. if 'exclude' in kwargs and kwargs['exclude']: val = True else: # False by default val = False return { Optional('exclude', default=val): Any(bool, All(Any(str, unicode), Boolean())) } def field(**kwargs): # This setting is only used with the age filtertype. if 'required' in kwargs and kwargs['required']: return { Required('field'): Any(str, unicode) } else: return { Optional('field'): Any(str, unicode) } def key(**kwargs): # This setting is only used with the allocated filtertype. return { Required('key'): Any(str, unicode) } def kind(**kwargs): # This setting is only used with the pattern filtertype and is required return { Required('kind'): Any('prefix', 'suffix', 'timestring', 'regex') } def max_num_segments(**kwargs): return { Required('max_num_segments'): All(Coerce(int), Range(min=1)) } def range_from(**kwargs): return { Required('range_from'): Coerce(int) } def range_to(**kwargs): return { Required('range_to'): Coerce(int) } def reverse(**kwargs): # Only used with space filtertype # Should be ignored if `use_age` is True return { Optional('reverse', default=True): Any(bool, All(Any(str, unicode), Boolean())) } def source(**kwargs): # This setting is only used with the age filtertype, or with the space # filtertype when use_age is set to True. if 'action' in kwargs and kwargs['action'] in settings.snapshot_actions(): valuelist = Any('name', 'creation_date') else: valuelist = Any('name', 'creation_date', 'field_stats') if 'required' in kwargs and kwargs['required']: return { Required('source'): valuelist } else: return { Optional('source'): valuelist } def state(**kwargs): # This setting is only used with the state filtertype. return { Optional('state', default='SUCCESS'): Any( 'SUCCESS', 'PARTIAL', 'FAILED', 'IN_PROGRESS') } def stats_result(**kwargs): # This setting is only used with the age filtertype. return { Optional('stats_result', default='min_value'): Any( 'min_value', 'max_value') } def timestring(**kwargs): # This setting is only used with the age filtertype, or with the space # filtertype if use_age is set to True. 
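    # Illustrative example (hypothetical index name): a timestring of
    # '%Y.%m.%d' uses Python strftime symbols and would match date-stamped
    # index names such as logstash-2017.08.31.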
if 'required' in kwargs and kwargs['required']: return { Required('timestring'): Any(str, unicode) } else: return { Optional('timestring', default=None): Any(str, unicode, None) } def unit(**kwargs): # This setting is only used with the age filtertype, or with the space # filtertype if use_age is set to True. if 'period' in kwargs and kwargs['period']: return { Required('unit'): Any( 'hours', 'days', 'weeks', 'months', 'years' ) } else: return { Required('unit'): Any( 'seconds','minutes', 'hours', 'days', 'weeks', 'months', 'years' ) } def unit_count(**kwargs): # This setting is only used with the age filtertype, or with the space # filtertype if use_age is set to True. return { Required('unit_count'): Coerce(int) } def unit_count_pattern(**kwargs): # This setting is used with the age filtertype to define, whether # the unit_count value is taken from the configuration or read from # the index name via a regular expression return { Optional('unit_count_pattern'): Any(str, unicode) } def use_age(**kwargs): # Use of this setting requires the additional setting, source. return { Optional('use_age', default=False): Any(bool, All(Any(str, unicode), Boolean())) } def value(**kwargs): # This setting is only used with the pattern filtertype and is a required # setting. There is a separate value option associated with the allocation # action, and the allocated filtertype. return { Required('value'): Any(str, unicode) } def week_starts_on(**kwargs): return { Optional('week_starts_on', default='sunday'): Any( 'Sunday', 'sunday', 'SUNDAY', 'Monday', 'monday', 'MONDAY', None ) } curator-5.2.0/curator/defaults/filtertypes.py000066400000000000000000000073051315226075300214200ustar00rootroot00000000000000from voluptuous import * from . import settings from . import filter_elements import logging logger = logging.getLogger(__name__) ## Helpers ## def _age_elements(action, config): retval = [] is_req = True if config['filtertype'] in ['count', 'space']: is_req = True if 'use_age' in config and config['use_age'] else False retval.append(filter_elements.source(action=action, required=is_req)) if action in settings.index_actions(): retval.append(filter_elements.stats_result()) # This is a silly thing here, because the absence of 'source' will # show up in the actual schema check, but it keeps code from breaking here ts_req = False if 'source' in config: if config['source'] == 'name': ts_req = True elif action in settings.index_actions(): # field_stats must _only_ exist for Index actions (not Snapshot) if config['source'] == 'field_stats': retval.append(filter_elements.field(required=True)) else: retval.append(filter_elements.field(required=False)) retval.append(filter_elements.timestring(required=ts_req)) else: # If source isn't in the config, then the other elements are not # required, but should be Optional to prevent false positives retval.append(filter_elements.field(required=False)) retval.append(filter_elements.timestring(required=ts_req)) return retval ### Schema information ### def alias(action, config): return [ filter_elements.aliases(), filter_elements.exclude(), ] def age(action, config): # Required & Optional retval = [ filter_elements.direction(), filter_elements.unit(), filter_elements.unit_count(), filter_elements.unit_count_pattern(), filter_elements.epoch(), filter_elements.exclude(), ] retval += _age_elements(action, config) logger.debug('AGE FILTER = {0}'.format(retval)) return retval def allocated(action, config): return [ filter_elements.key(), filter_elements.value(), 
filter_elements.allocation_type(), filter_elements.exclude(exclude=True), ] def closed(action, config): return [ filter_elements.exclude(exclude=True) ] def count(action, config): retval = [ filter_elements.count(), filter_elements.use_age(), filter_elements.reverse(), filter_elements.exclude(exclude=True), ] retval += _age_elements(action, config) return retval def forcemerged(action, config): return [ filter_elements.max_num_segments(), filter_elements.exclude(exclude=True), ] def kibana(action, config): return [ filter_elements.exclude(exclude=True) ] def none(action, config): return [ ] def opened(action, config): return [ filter_elements.exclude(exclude=True) ] def pattern(action, config): return [ filter_elements.kind(), filter_elements.value(), filter_elements.exclude(), ] def period(action, config): retval = [ filter_elements.unit(period=True), filter_elements.range_from(), filter_elements.range_to(), filter_elements.week_starts_on(), filter_elements.epoch(), filter_elements.exclude(), ] retval += _age_elements(action, config) return retval def space(action, config): retval = [ filter_elements.disk_space(), filter_elements.reverse(), filter_elements.use_age(), filter_elements.exclude(), ] retval += _age_elements(action, config) return retval def state(action, config): return [ filter_elements.state(), filter_elements.exclude(), ] curator-5.2.0/curator/defaults/option_defaults.py000066400000000000000000000240161315226075300222430ustar00rootroot00000000000000from voluptuous import * # Action Options def allocation_type(): return { Optional('allocation_type', default='require'): All( Any(str, unicode), Any('require', 'include', 'exclude')) } def conditions(): return { Optional('conditions'): { Optional('max_age'): Any(str, unicode), Optional('max_docs'): Coerce(int) } } def continue_if_exception(): return { Optional('continue_if_exception', default=False): Any(bool, All(Any(str, unicode), Boolean())) } def count(): return { Required('count'): All(Coerce(int), Range(min=0, max=10)) } def delay(): return { Optional('delay', default=0): All( Coerce(float), Range(min=0.0, max=3600.0) ) } def delete_after(): return { Optional('delete_after', default=True): Any(bool, All(Any(str, unicode), Boolean())) } def delete_aliases(): return { Optional('delete_aliases', default=False): Any(bool, All(Any(str, unicode), Boolean())) } def disable_action(): return { Optional('disable_action', default=False): Any(bool, All(Any(str, unicode), Boolean())) } def extra_settings(): return { Optional('extra_settings', default={}): dict } def ignore_empty_list(): return { Optional('ignore_empty_list', default=False): Any(bool, All(Any(str, unicode), Boolean())) } def ignore_unavailable(): return { Optional('ignore_unavailable', default=False): Any(bool, All(Any(str, unicode), Boolean())) } def include_aliases(): return { Optional('include_aliases', default=False): Any(bool, All(Any(str, unicode), Boolean())) } def include_global_state(action): default = False if action == 'snapshot': default = True return { Optional('include_global_state', default=default): Any(bool, All(Any(str, unicode), Boolean())) } def index_settings(): return { Required('index_settings'): {'index': dict} } def indices(): return { Optional('indices', default=None): Any(None, list) } def key(): return { Required('key'): Any(str, unicode) } def max_num_segments(): return { Required('max_num_segments'): All(Coerce(int), Range(min=1, max=32768)) } def max_wait(action): # The separation is here in case I want to change defaults later... 
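    # Here, -1 is the sentinel meaning 'wait forever', matching the max_wait
    # documentation in the action classes.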
def max_wait(action):
    # The separation is here in case I want to change defaults later...
    value = -1
    # if action in ['allocation', 'cluster_routing', 'replicas']:
    #     value = -1
    # elif action in ['restore', 'snapshot', 'reindex', 'shrink']:
    #     value = -1
    return { Optional('max_wait', default=value): Any(-1, Coerce(int), None) }

def migration_prefix():
    return { Optional('migration_prefix', default=''): Any(str, unicode, None) }

def migration_suffix():
    return { Optional('migration_suffix', default=''): Any(str, unicode, None) }

def name(action):
    if action in ['alias', 'create_index', 'rollover']:
        return { Required('name'): Any(str, unicode) }
    elif action == 'snapshot':
        return {
            Optional('name', default='curator-%Y%m%d%H%M%S'): Any(str, unicode)
        }
    elif action == 'restore':
        return { Optional('name'): Any(str, unicode) }

def new_index():
    return { Optional('new_index', default=None): Any(str, unicode) }

def node_filters():
    return {
        Optional('node_filters', default={}): {
            Optional('permit_masters', default=False):
                Any(bool, All(Any(str, unicode), Boolean())),
            Optional('exclude_nodes', default=[]): Any(list, None)
        }
    }

def number_of_replicas():
    return {
        Optional('number_of_replicas', default=1):
            All(Coerce(int), Range(min=0, max=10))
    }

def number_of_shards():
    return {
        Optional('number_of_shards', default=1):
            All(Coerce(int), Range(min=1, max=99))
    }

def partial():
    return {
        Optional('partial', default=False):
            Any(bool, All(Any(str, unicode), Boolean()))
    }

def post_allocation():
    return {
        Optional('post_allocation', default={}): {
            Required('allocation_type', default='require'):
                All(Any(str, unicode), Any('require', 'include', 'exclude')),
            Required('key'): Any(str, unicode),
            Required('value', default=None): Any(str, unicode, None)
        }
    }

def preserve_existing():
    return {
        Optional('preserve_existing', default=False):
            Any(bool, All(Any(str, unicode), Boolean()))
    }

def refresh():
    return {
        Optional('refresh', default=True):
            Any(bool, All(Any(str, unicode), Boolean()))
    }

def remote_aws_key():
    return { Optional('remote_aws_key', default=None): Any(str, unicode, None) }

def remote_aws_secret_key():
    return {
        Optional('remote_aws_secret_key', default=None): Any(str, unicode, None)
    }

def remote_aws_region():
    return {
        Optional('remote_aws_region', default=None): Any(str, unicode, None)
    }

def remote_certificate():
    return {
        Optional('remote_certificate', default=None): Any(str, unicode, None)
    }

def remote_client_cert():
    return {
        Optional('remote_client_cert', default=None): Any(str, unicode, None)
    }

def remote_client_key():
    return {
        Optional('remote_client_key', default=None): Any(str, unicode, None)
    }

def remote_filters():
    # This is really just a basic check here.  The real check is in the
    # validate_actions() method in utils.py
    return {
        Optional('remote_filters', default=[
                {
                    'filtertype': 'pattern',
                    'kind': 'regex',
                    'value': '.*',
                    'exclude': True,
                }
            ]
        ): Any(list, None)
    }
def remote_ssl_no_validate():
    return {
        Optional('remote_ssl_no_validate', default=False):
            Any(bool, All(Any(str, unicode), Boolean()))
    }

def remote_url_prefix():
    return {
        Optional('remote_url_prefix', default=''): Any(None, str, unicode)
    }

def rename_pattern():
    return { Optional('rename_pattern'): Any(str, unicode) }

def rename_replacement():
    return { Optional('rename_replacement'): Any(str, unicode) }

def repository():
    return { Required('repository'): Any(str, unicode) }

def request_body():
    return {
        Required('request_body'): {
            Optional('conflicts'): Any(str, unicode),
            Optional('size'): Coerce(int),
            Required('source'): {
                Required('index'): Any(Any(str, unicode), list),
                Optional('remote'): {
                    Optional('host'): Any(str, unicode),
                    Optional('headers'): Any(str, unicode),
                    Optional('username'): Any(str, unicode),
                    Optional('password'): Any(str, unicode),
                    Optional('socket_timeout'): Any(str, unicode),
                    Optional('connect_timeout'): Any(str, unicode),
                },
                Optional('size'): Coerce(int),
                Optional('type'): Any(Any(str, unicode), list),
                Optional('query'): dict,
                Optional('sort'): dict,
                Optional('_source'): Any(Any(str, unicode), list),
            },
            Required('dest'): {
                Required('index'): Any(str, unicode),
                Optional('type'): Any(Any(str, unicode), list),
                Optional('op_type'): Any(str, unicode),
                Optional('version_type'): Any(str, unicode),
                Optional('routing'): Any(str, unicode),
                Optional('pipeline'): Any(str, unicode),
            },
            Optional('script'): dict,
        }
    }

def requests_per_second():
    return {
        Optional('requests_per_second', default=-1): Any(
            -1, Coerce(int), None)
    }

def retry_count():
    return {
        Optional('retry_count', default=3): All(
            Coerce(int), Range(min=0, max=100)
        )
    }

def retry_interval():
    return {
        Optional('retry_interval', default=120): All(
            Coerce(int), Range(min=1, max=600)
        )
    }

def routing_type():
    return { Required('routing_type'): Any('allocation', 'rebalance') }

def cluster_routing_setting():
    return { Required('setting'): Any('enable') }

def cluster_routing_value():
    return {
        Required('value'): Any(
            'all', 'primaries', 'none', 'new_primaries', 'replicas'
        )
    }

def shrink_node():
    return { Required('shrink_node'): Any(str, unicode) }

def shrink_prefix():
    return { Optional('shrink_prefix', default=''): Any(str, unicode, None) }

def shrink_suffix():
    return {
        Optional('shrink_suffix', default='-shrink'): Any(str, unicode, None)
    }

def skip_repo_fs_check():
    return {
        Optional('skip_repo_fs_check', default=False):
            Any(bool, All(Any(str, unicode), Boolean()))
    }

def slices():
    return {
        Optional('slices', default=1): Any(
            All(Coerce(int), Range(min=1, max=500)), None)
    }

def timeout(action):
    # if action == 'reindex':
    value = 60
    return { Optional('timeout', default=value): Any(Coerce(int), None) }

def timeout_override(action):
    if action in ['forcemerge', 'restore', 'snapshot']:
        value = 21600
    elif action == 'close':
        value = 180
    else:
        value = None
    return {
        Optional('timeout_override', default=value): Any(Coerce(int), None)
    }

def value():
    return { Required('value', default=None): Any(str, unicode, None) }

def wait_for_active_shards(action):
    value = 0
    if action in ['reindex', 'shrink']:
        value = 1
    return {
        Optional('wait_for_active_shards', default=value): Any(
            Coerce(int), 'all', None)
    }
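# Illustrative sketch (not part of the original file): each option function
# above returns a schema fragment that both validates and coerces one action
# option.  The input value here is an assumption for the example.
def _example_option_coercion():
    from voluptuous import Schema
    # Coerce(int) turns the string '5' into the integer 5; Range enforces
    # the 0-100 bounds declared in retry_count()
    return Schema(retry_count())({'retry_count': '5'})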
def wait_for_completion(action):
    # if action in ['reindex', 'restore', 'snapshot']:
    value = True
    if action in ['allocation', 'cluster_routing', 'replicas']:
        value = False
    return {
        Optional('wait_for_completion', default=value):
            Any(bool, All(Any(str, unicode), Boolean()))
    }

def wait_interval(action):
    minval = 1
    maxval = 30
    # if action in ['allocation', 'cluster_routing', 'replicas']:
    value = 3
    if action in ['restore', 'snapshot', 'reindex', 'shrink']:
        value = 9
    return {
        Optional('wait_interval', default=value): Any(All(
            Coerce(int), Range(min=minval, max=maxval)), None)
    }

def warn_if_no_indices():
    return {
        Optional('warn_if_no_indices', default=False):
            Any(bool, All(Any(str, unicode), Boolean()))
    }

# ==== curator-5.2.0/curator/defaults/settings.py ====

import os
from voluptuous import *

# Elasticsearch versions supported
def version_max():
    return (5, 99, 99)

def version_min():
    return (5, 0, 0)

# Default Config file location
def config_file():
    return os.path.join(os.path.expanduser('~'), '.curator', 'curator.yml')

# Default filter patterns (regular expressions)
def regex_map():
    return {
        'timestring': r'^.*{0}.*$',
        'regex': r'{0}',
        'prefix': r'^{0}.*$',
        'suffix': r'^.*{0}$',
    }

def date_regex():
    return {
        'Y': '4', 'G': '4', 'y': '2', 'm': '2', 'W': '2', 'V': '2',
        'U': '2', 'd': '2', 'H': '2', 'M': '2', 'S': '2', 'j': '3',
    }

# Actions
def cluster_actions():
    return [ 'cluster_routing' ]

def index_actions():
    return [
        'alias', 'allocation', 'close', 'create_index', 'delete_indices',
        'forcemerge', 'index_settings', 'open', 'reindex', 'replicas',
        'rollover', 'shrink', 'snapshot',
    ]

def snapshot_actions():
    return [ 'delete_snapshots', 'restore' ]

def all_actions():
    return sorted(cluster_actions() + index_actions() + snapshot_actions())

def index_filtertypes():
    return [
        'alias', 'allocated', 'age', 'closed', 'count', 'forcemerged',
        'kibana', 'none', 'opened', 'pattern', 'period', 'space',
    ]

def snapshot_filtertypes():
    return ['age', 'count', 'none', 'pattern', 'period', 'state']

def all_filtertypes():
    return sorted(list(set(index_filtertypes() + snapshot_filtertypes())))

def default_options():
    return {
        'continue_if_exception': False,
        'disable_action': False,
        'ignore_empty_list': False,
        'timeout_override': None,
    }

def default_filters():
    return { 'filters' : [{ 'filtertype' : 'none' }] }

def structural_filter_elements():
    return {
        Optional('aliases'): Any(str, [str], unicode, [unicode]),
        Optional('allocation_type'): Any(str, unicode),
        Optional('count'): Coerce(int),
        Optional('direction'): Any(str, unicode),
        Optional('disk_space'): float,
        Optional('epoch'): Any(Coerce(int), None),
        Optional('exclude'): Any(int, str, unicode, bool, None),
        Optional('field'): Any(str, unicode, None),
        Optional('key'): Any(str, unicode),
        Optional('kind'): Any(str, unicode),
        Optional('max_num_segments'): Coerce(int),
        Optional('reverse'): Any(int, str, unicode, bool, None),
        Optional('range_from'): Coerce(int),
        Optional('range_to'): Coerce(int),
        Optional('source'): Any(str, unicode),
        Optional('state'): Any(str, unicode),
        Optional('stats_result'): Any(str, unicode, None),
        Optional('timestring'): Any(str, unicode, None),
        Optional('unit'): Any(str, unicode),
        Optional('unit_count'): Coerce(int),
        Optional('unit_count_pattern'): Any(str, unicode),
        Optional('use_age'): Boolean(),
        Optional('value'): Any(int, float, str, unicode, bool),
        Optional('week_starts_on'): Any(str, unicode, None),
    }
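# Illustrative sketch (not part of the original file): how a regex_map()
# pattern is combined with a user-supplied value to build an index-matching
# regular expression.  The prefix and index name are assumptions.
def _example_prefix_regex():
    import re
    regex = regex_map()['prefix'].format('logstash-')
    return bool(re.match(regex, 'logstash-2017.08.31'))  # -> True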
# ==== curator-5.2.0/curator/exceptions.py ====

class CuratorException(Exception):
    """
    Base class for all exceptions raised by Curator which are not
    Elasticsearch exceptions.
    """

class ConfigurationError(CuratorException):
    """
    Exception raised when a misconfiguration is detected
    """

class MissingArgument(CuratorException):
    """
    Exception raised when a needed argument is not passed.
    """

class NoIndices(CuratorException):
    """
    Exception raised when an operation is attempted against an empty
    index_list
    """

class NoSnapshots(CuratorException):
    """
    Exception raised when an operation is attempted against an empty
    snapshot_list
    """

class ActionError(CuratorException):
    """
    Exception raised when an action (against an index_list or snapshot_list)
    cannot be taken.
    """

class FailedExecution(CuratorException):
    """
    Exception raised when an action fails to execute for some reason.
    """

class SnapshotInProgress(ActionError):
    """
    Exception raised when a snapshot is already in progress
    """

class ActionTimeout(CuratorException):
    """
    Exception raised when an action fails to complete in the allotted time
    """
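# Illustrative sketch (not part of the original file): the intended catch
# pattern for these exceptions.  The list_object argument is assumed to be
# an IndexList or SnapshotList instance.
def _example_safe_check(list_object):
    try:
        list_object.empty_list_check()
        return True
    except (NoIndices, NoSnapshots):
        return False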
""" self.loggit.debug( 'Building preliminary index metadata for {0}'.format(index)) if not index in self.index_info: self.index_info[index] = { "age" : {}, "number_of_replicas" : 0, "number_of_shards" : 0, "segments" : 0, "size_in_bytes" : 0, "docs" : 0, "state" : "", } def __map_method(self, ft): methods = { 'alias': self.filter_by_alias, 'age': self.filter_by_age, 'allocated': self.filter_allocated, 'closed': self.filter_closed, 'count': self.filter_by_count, 'forcemerged': self.filter_forceMerged, 'kibana': self.filter_kibana, 'none': self.filter_none, 'opened': self.filter_opened, 'period': self.filter_period, 'pattern': self.filter_by_regex, 'space': self.filter_by_space, } return methods[ft] def _get_index_stats(self): """ Populate `index_info` with index `size_in_bytes` and doc count information for each index. """ self.loggit.debug('Getting index stats') self.empty_list_check() # Subroutine to do the dirty work def iterate_over_stats(stats): for index in stats['indices']: size = stats['indices'][index]['total']['store']['size_in_bytes'] docs = stats['indices'][index]['total']['docs']['count'] self.loggit.debug( 'Index: {0} Size: {1} Docs: {2}'.format( index, byte_size(size), docs ) ) self.index_info[index]['size_in_bytes'] = size self.index_info[index]['docs'] = docs working_list = self.working_list() for index in self.working_list(): if self.index_info[index]['state'] == 'close': working_list.remove(index) if working_list: index_lists = chunk_index_list(working_list) for l in index_lists: iterate_over_stats( self.client.indices.stats(index=to_csv(l), metric='store,docs') ) def _get_metadata(self): """ Populate `index_info` with index `size_in_bytes` and doc count information for each index. """ self.loggit.debug('Getting index metadata') self.empty_list_check() index_lists = chunk_index_list(self.indices) for l in index_lists: working_list = ( self.client.cluster.state( index=to_csv(l),metric='metadata' )['metadata']['indices'] ) if working_list: for index in list(working_list.keys()): s = self.index_info[index] wl = working_list[index] if not 'creation_date' in wl['settings']['index']: self.loggit.warn( 'Index: {0} has no "creation_date"! This implies ' 'that the index predates Elasticsearch v1.4. For ' 'safety, this index will be removed from the ' 'actionable list.'.format(index) ) self.__not_actionable(index) else: s['age']['creation_date'] = ( fix_epoch(wl['settings']['index']['creation_date']) ) s['number_of_replicas'] = ( wl['settings']['index']['number_of_replicas'] ) s['number_of_shards'] = ( wl['settings']['index']['number_of_shards'] ) s['state'] = wl['state'] if 'routing' in wl['settings']['index']: s['routing'] = wl['settings']['index']['routing'] def empty_list_check(self): """Raise exception if `indices` is empty""" self.loggit.debug('Checking for empty list') if not self.indices: raise NoIndices('index_list object is empty.') def working_list(self): """ Return the current value of `indices` as copy-by-value to prevent list stomping during iterations """ # Copy by value, rather than reference to prevent list stomping during # iterations self.loggit.debug('Generating working list of indices') return self.indices[:] def _get_segmentcounts(self): """ Populate `index_info` with segment information for each index. 
""" self.loggit.debug('Getting index segment counts') self.empty_list_check() index_lists = chunk_index_list(self.indices) for l in index_lists: working_list = ( self.client.indices.segments(index=to_csv(l))['indices'] ) if working_list: for index in list(working_list.keys()): shards = working_list[index]['shards'] segmentcount = 0 for shardnum in shards: for shard in range(0,len(shards[shardnum])): segmentcount += ( shards[shardnum][shard]['num_search_segments'] ) self.index_info[index]['segments'] = segmentcount def _get_name_based_ages(self, timestring): """ Add indices to `index_info` based on the age as indicated by the index name pattern, if it matches `timestring` :arg timestring: An strftime pattern """ # Check for empty list before proceeding here to prevent non-iterable # condition self.loggit.debug('Getting ages of indices by "name"') self.empty_list_check() ts = TimestringSearch(timestring) for index in self.working_list(): epoch = ts.get_epoch(index) if isinstance(epoch, int): self.index_info[index]['age']['name'] = epoch def _get_field_stats_dates(self, field='@timestamp'): """ Add indices to `index_info` based on the value the stats api returns, as determined by `field` :arg field: The field with the date value. The field must be mapped in elasticsearch as a date datatype. Default: ``@timestamp`` """ self.loggit.debug('Getting index date from field_stats API') self.loggit.debug( 'Cannot use field_stats on closed indices. ' 'Omitting any closed indices.' ) self.filter_closed() index_lists = chunk_index_list(self.indices) for l in index_lists: working_list = self.client.field_stats( index=to_csv(l), fields=field, level='indices' )['indices'] if working_list: for index in list(working_list.keys()): try: s = self.index_info[index]['age'] wl = working_list[index]['fields'][field] # Use these new references to keep these lines more # readable s['min_value'] = fix_epoch(wl['min_value']) s['max_value'] = fix_epoch(wl['max_value']) except KeyError as e: raise ActionError( 'Field "{0}" not found in index ' '"{1}"'.format(field, index) ) def _calculate_ages(self, source=None, timestring=None, field=None, stats_result=None ): """ This method initiates index age calculation based on the given parameters. Exceptions are raised when they are improperly configured. Set instance variable `age_keyfield` for use later, if needed. :arg source: Source of index age. Can be one of 'name', 'creation_date', or 'field_stats' :arg timestring: An strftime string to match the datestamp in an index name. Only used for index filtering by ``name``. :arg field: A timestamp field name. Only used for ``field_stats`` based calculations. :arg stats_result: Either `min_value` or `max_value`. Only used in conjunction with `source`=``field_stats`` to choose whether to reference the minimum or maximum result value. """ self.age_keyfield = source if source == 'name': if not timestring: raise MissingArgument( 'source "name" requires the "timestring" keyword argument' ) self._get_name_based_ages(timestring) elif source == 'creation_date': # Nothing to do here as this comes from `get_metadata` in __init__ pass elif source == 'field_stats': if not field: raise MissingArgument( 'source "field_stats" requires the "field" keyword argument' ) if stats_result not in ['min_value', 'max_value']: raise ValueError( 'Invalid value for "stats_result": {0}'.format(stats_result) ) self.age_keyfield = stats_result self._get_field_stats_dates(field=field) else: raise ValueError( 'Invalid source: {0}. 
    def _get_name_based_ages(self, timestring):
        """
        Add indices to `index_info` based on the age as indicated by the index
        name pattern, if it matches `timestring`

        :arg timestring: An strftime pattern
        """
        # Check for empty list before proceeding here to prevent non-iterable
        # condition
        self.loggit.debug('Getting ages of indices by "name"')
        self.empty_list_check()
        ts = TimestringSearch(timestring)
        for index in self.working_list():
            epoch = ts.get_epoch(index)
            if isinstance(epoch, int):
                self.index_info[index]['age']['name'] = epoch

    def _get_field_stats_dates(self, field='@timestamp'):
        """
        Add indices to `index_info` based on the value the stats api returns,
        as determined by `field`

        :arg field: The field with the date value.  The field must be mapped
            in elasticsearch as a date datatype.  Default: ``@timestamp``
        """
        self.loggit.debug('Getting index date from field_stats API')
        self.loggit.debug(
            'Cannot use field_stats on closed indices.  '
            'Omitting any closed indices.'
        )
        self.filter_closed()
        index_lists = chunk_index_list(self.indices)
        for l in index_lists:
            working_list = self.client.field_stats(
                index=to_csv(l), fields=field, level='indices'
            )['indices']
            if working_list:
                for index in list(working_list.keys()):
                    try:
                        s = self.index_info[index]['age']
                        wl = working_list[index]['fields'][field]
                        # Use these new references to keep these lines more
                        # readable
                        s['min_value'] = fix_epoch(wl['min_value'])
                        s['max_value'] = fix_epoch(wl['max_value'])
                    except KeyError:
                        raise ActionError(
                            'Field "{0}" not found in index '
                            '"{1}"'.format(field, index)
                        )

    def _calculate_ages(self, source=None, timestring=None, field=None,
                        stats_result=None):
        """
        This method initiates index age calculation based on the given
        parameters.  Exceptions are raised when they are improperly
        configured.  Set instance variable `age_keyfield` for use later, if
        needed.

        :arg source: Source of index age. Can be one of 'name',
            'creation_date', or 'field_stats'
        :arg timestring: An strftime string to match the datestamp in an index
            name. Only used for index filtering by ``name``.
        :arg field: A timestamp field name.  Only used for ``field_stats``
            based calculations.
        :arg stats_result: Either `min_value` or `max_value`.  Only used in
            conjunction with `source`=``field_stats`` to choose whether to
            reference the minimum or maximum result value.
        """
        self.age_keyfield = source
        if source == 'name':
            if not timestring:
                raise MissingArgument(
                    'source "name" requires the "timestring" keyword argument'
                )
            self._get_name_based_ages(timestring)
        elif source == 'creation_date':
            # Nothing to do here as this comes from `get_metadata` in __init__
            pass
        elif source == 'field_stats':
            if not field:
                raise MissingArgument(
                    'source "field_stats" requires the "field" keyword '
                    'argument'
                )
            if stats_result not in ['min_value', 'max_value']:
                raise ValueError(
                    'Invalid value for "stats_result": '
                    '{0}'.format(stats_result)
                )
            self.age_keyfield = stats_result
            self._get_field_stats_dates(field=field)
        else:
            raise ValueError(
                'Invalid source: {0}.  Must be one of "name", '
                '"creation_date", "field_stats".'.format(source)
            )

    def _sort_by_age(self, index_list, reverse=True):
        """
        Take a list of indices and sort them by date.

        By default, the youngest are first with `reverse=True`, but the
        oldest can be first by setting `reverse=False`
        """
        # Do the age-based sorting here.
        # First, build a temporary dictionary with just index and age as the
        # key and value, respectively
        temp = {}
        for index in index_list:
            if self.age_keyfield in self.index_info[index]['age']:
                temp[index] = self.index_info[index]['age'][self.age_keyfield]
            else:
                msg = (
                    '{0} does not have age key "{1}" in IndexList '
                    'metadata'.format(index, self.age_keyfield)
                )
                self.__excludify(True, True, index, msg)
        # If reverse is True, this will sort so the youngest indices are
        # first.  However, if you want oldest first, set reverse to False.
        # Effectively, this should set us up to act on everything older than
        # meets the other set criteria.
        # It starts as a tuple, but then becomes a list.
        sorted_tuple = (
            sorted(temp.items(), key=lambda k: k[1], reverse=reverse)
        )
        return [x[0] for x in sorted_tuple]
    def filter_by_regex(self, kind=None, value=None, exclude=False):
        """
        Match indices by regular expression (pattern).

        :arg kind: Can be one of: ``suffix``, ``prefix``, ``regex``, or
            ``timestring``. This option defines what kind of filter you will
            be building.
        :arg value: Depends on `kind`. It is the strftime string if `kind` is
            ``timestring``. It's used to build the regular expression for
            other kinds.
        :arg exclude: If `exclude` is `True`, this filter will remove matching
            indices from `indices`. If `exclude` is `False`, then only
            matching indices will be kept in `indices`. Default is `False`
        """
        self.loggit.debug('Filtering indices by regex')
        if kind not in [ 'regex', 'prefix', 'suffix', 'timestring' ]:
            raise ValueError('{0}: Invalid value for kind'.format(kind))
        # Stop here if None or empty value, but zero is okay
        if value == 0:
            pass
        elif not value:
            raise ValueError(
                '{0}: Invalid value for "value". '
                'Cannot be "None" type, empty, or False'.format(value)
            )
        if kind == 'timestring':
            regex = settings.regex_map()[kind].format(get_date_regex(value))
        else:
            regex = settings.regex_map()[kind].format(value)
        self.empty_list_check()
        pattern = re.compile(regex)
        for index in self.working_list():
            self.loggit.debug('Filter by regex: Index: {0}'.format(index))
            match = pattern.match(index)
            if match:
                self.__excludify(True, exclude, index)
            else:
                self.__excludify(False, exclude, index)
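    # Illustrative sketch (not part of the original file): typical use of the
    # regex filter against daily logstash indices (the client object and
    # index naming are assumptions):
    #
    #     ilo = IndexList(client)
    #     ilo.filter_by_regex(kind='prefix', value='logstash-')
    #     print(ilo.indices)  # only indices beginning with 'logstash-' remain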
    def filter_by_age(self, source='name', direction=None, timestring=None,
                      unit=None, unit_count=None, field=None,
                      stats_result='min_value', epoch=None, exclude=False,
                      unit_count_pattern=False):
        """
        Match `indices` by relative age calculations.

        :arg source: Source of index age. Can be one of 'name',
            'creation_date', or 'field_stats'
        :arg direction: Time to filter, either ``older`` or ``younger``
        :arg timestring: An strftime string to match the datestamp in an index
            name. Only used for index filtering by ``name``.
        :arg unit: One of ``seconds``, ``minutes``, ``hours``, ``days``,
            ``weeks``, ``months``, or ``years``.
        :arg unit_count: The number of ``unit`` (s). ``unit_count`` * ``unit``
            will be calculated out to the relative number of seconds.
        :arg unit_count_pattern: A regular expression whose capture group
            identifies the value for ``unit_count``.
        :arg field: A timestamp field name.  Only used for ``field_stats``
            based calculations.
        :arg stats_result: Either `min_value` or `max_value`.  Only used in
            conjunction with `source`=``field_stats`` to choose whether to
            reference the minimum or maximum result value.
        :arg epoch: An epoch timestamp used in conjunction with ``unit`` and
            ``unit_count`` to establish a point of reference for calculations.
            If not provided, the current time will be used.
        :arg exclude: If `exclude` is `True`, this filter will remove matching
            indices from `indices`. If `exclude` is `False`, then only
            matching indices will be kept in `indices`. Default is `False`
        """
        self.loggit.debug('Filtering indices by age')
        # Get timestamp point of reference, PoR
        PoR = get_point_of_reference(unit, unit_count, epoch)
        if not direction:
            raise MissingArgument('Must provide a value for "direction"')
        if direction not in ['older', 'younger']:
            raise ValueError(
                'Invalid value for "direction": {0}'.format(direction)
            )
        self._calculate_ages(
            source=source, timestring=timestring, field=field,
            stats_result=stats_result
        )
        if unit_count_pattern:
            try:
                unit_count_matcher = re.compile(unit_count_pattern)
            except:
                # We got an illegal regex, so we won't be able to match
                # anything
                unit_count_matcher = None
        for index in self.working_list():
            try:
                age = int(self.index_info[index]['age'][self.age_keyfield])
                msg = (
                    'Index "{0}" age ({1}), direction: "{2}", point of '
                    'reference, ({3})'.format(
                        index, age, direction, PoR
                    )
                )
                # Because time adds to epoch, smaller numbers are actually
                # older timestamps.
                if unit_count_pattern:
                    self.loggit.debug(
                        'unit_count_pattern is set, trying to match pattern '
                        'to index "{0}"'.format(index)
                    )
                    unit_count_from_index = get_unit_count_from_name(
                        index, unit_count_matcher)
                    if unit_count_from_index:
                        self.loggit.debug(
                            'Pattern matched, applying unit_count of '
                            '"{0}"'.format(unit_count_from_index)
                        )
                        adjustedPoR = get_point_of_reference(
                            unit, unit_count_from_index, epoch)
                    elif unit_count == -1:
                        # Unable to match the pattern and unit_count is -1,
                        # meaning there is no fallback, so this index is
                        # removed from the list
                        self.loggit.debug(
                            'Unable to match pattern and no fallback value '
                            'set. Removing index "{0}" from actionable '
                            'list'.format(index)
                        )
                        exclude = True
                        # Necessary to avoid an exception if the first index
                        # is excluded
                        adjustedPoR = PoR
                    else:
                        # Unable to match the pattern and unit_count is set,
                        # so fall back to using unit_count for determining
                        # whether to keep this index in the list
                        self.loggit.debug(
                            'Unable to match pattern, using fallback value '
                            'of "{0}"'.format(unit_count)
                        )
                        adjustedPoR = PoR
                else:
                    adjustedPoR = PoR
                if direction == 'older':
                    agetest = age < adjustedPoR
                else:
                    agetest = age > adjustedPoR
                self.__excludify(agetest, exclude, index, msg)
            except KeyError:
                self.loggit.debug(
                    'Index "{0}" does not meet provided criteria. '
                    'Removing from list.'.format(index))
                self.indices.remove(index)
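    # Illustrative sketch (not part of the original file): keep only indices
    # whose name-embedded date is more than 30 days in the past (the
    # timestring and count here are assumptions):
    #
    #     ilo.filter_by_age(source='name', direction='older',
    #         timestring='%Y.%m.%d', unit='days', unit_count=30)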
    def filter_by_space(self, disk_space=None, reverse=True, use_age=False,
                        source='creation_date', timestring=None, field=None,
                        stats_result='min_value', exclude=False):
        """
        Remove indices from the actionable list based on space consumed,
        sorted reverse-alphabetically by default.  If you set `reverse` to
        `False`, it will be sorted alphabetically.

        The default is usually what you will want. If only one kind of index
        is provided--for example, indices matching ``logstash-%Y.%m.%d``--then
        reverse alphabetical sorting will mean the oldest will remain in the
        list, because lower numbers in the dates mean older indices.

        By setting `reverse` to `False`, then ``index3`` will be deleted
        before ``index2``, which will be deleted before ``index1``

        `use_age` allows ordering indices by age.  Age is determined by the
        index creation date by default, but you can specify an `source` of
        ``name``, ``max_value``, or ``min_value``.  The ``name`` `source`
        requires the timestring argument.

        :arg disk_space: Filter indices over *n* gigabytes
        :arg reverse: The filtering direction. (default: `True`).  Ignored if
            `use_age` is `True`
        :arg use_age: Sort indices by age.  ``source`` is required in this
            case.
        :arg source: Source of index age. Can be one of ``name``,
            ``creation_date``, or ``field_stats``. Default: ``creation_date``
        :arg timestring: An strftime string to match the datestamp in an index
            name. Only used if `source` ``name`` is selected.
        :arg field: A timestamp field name.  Only used if `source`
            ``field_stats`` is selected.
        :arg stats_result: Either `min_value` or `max_value`.  Only used if
            `source` ``field_stats`` is selected. It determines whether to
            reference the minimum or maximum value of `field` in each index.
        :arg exclude: If `exclude` is `True`, this filter will remove matching
            indices from `indices`. If `exclude` is `False`, then only
            matching indices will be kept in `indices`. Default is `False`
        """
        self.loggit.debug('Filtering indices by disk space')
        # Ensure that disk_space is a float
        if not disk_space:
            raise MissingArgument('No value for "disk_space" provided')
        disk_space = float(disk_space)
        disk_usage = 0.0
        disk_limit = disk_space * 2**30
        self.loggit.debug(
            'Cannot get disk usage info from closed indices.  '
            'Omitting any closed indices.'
        )
        self.filter_closed()
        # Create a copy-by-value working list
        working_list = self.working_list()
        if use_age:
            self._calculate_ages(
                source=source, timestring=timestring, field=field,
                stats_result=stats_result
            )
            # Using default value of reverse=True in self._sort_by_age()
            sorted_indices = self._sort_by_age(working_list)
        else:
            # Default to sorting by index name
            sorted_indices = sorted(working_list, reverse=reverse)
        for index in sorted_indices:
            disk_usage += self.index_info[index]['size_in_bytes']
            msg = (
                '{0}, summed disk usage is {1} and disk limit is {2}.'.format(
                    index, byte_size(disk_usage), byte_size(disk_limit)
                )
            )
            self.__excludify((disk_usage > disk_limit), exclude, index, msg)

    def filter_kibana(self, exclude=True):
        """
        Match any index named ``.kibana``, ``kibana-int``, ``.marvel-kibana``,
        or ``.marvel-es-data`` in `indices`.

        :arg exclude: If `exclude` is `True`, this filter will remove matching
            indices from `indices`. If `exclude` is `False`, then only
            matching indices will be kept in `indices`. Default is `True`
        """
        self.loggit.debug('Filtering kibana indices')
        self.empty_list_check()
        for index in self.working_list():
            if index in [
                    '.kibana', '.marvel-kibana', 'kibana-int',
                    '.marvel-es-data'
                ]:
                self.__excludify(True, exclude, index)

    def filter_forceMerged(self, max_num_segments=None, exclude=True):
        """
        Match any index which has `max_num_segments` per shard or fewer in the
        actionable list.

        :arg max_num_segments: Cutoff number of segments per shard.
        :arg exclude: If `exclude` is `True`, this filter will remove matching
            indices from `indices`. If `exclude` is `False`, then only
            matching indices will be kept in `indices`. Default is `True`
        """
        self.loggit.debug('Filtering forceMerged indices')
        if not max_num_segments:
            raise MissingArgument('Missing value for "max_num_segments"')
        self.loggit.debug(
            'Cannot get segment count of closed indices.  '
            'Omitting any closed indices.'
        )
        self.filter_closed()
        self._get_segmentcounts()
        for index in self.working_list():
            # Do this to reduce long lines and make it more readable...
            shards = int(self.index_info[index]['number_of_shards'])
            replicas = int(self.index_info[index]['number_of_replicas'])
            segments = int(self.index_info[index]['segments'])
            msg = (
                '{0} has {1} shard(s) + {2} replica(s) '
                'with a sum total of {3} segments.'.format(
                    index, shards, replicas, segments
                )
            )
            expected_count = ((shards + (shards * replicas)) * max_num_segments)
            self.__excludify((segments <= expected_count), exclude, index, msg)
    def filter_closed(self, exclude=True):
        """
        Filter out closed indices from `indices`

        :arg exclude: If `exclude` is `True`, this filter will remove matching
            indices from `indices`. If `exclude` is `False`, then only
            matching indices will be kept in `indices`. Default is `True`
        """
        self.loggit.debug('Filtering closed indices')
        self.empty_list_check()
        for index in self.working_list():
            condition = self.index_info[index]['state'] == 'close'
            self.loggit.debug(
                'Index {0} state: {1}'.format(
                    index, self.index_info[index]['state']
                )
            )
            self.__excludify(condition, exclude, index)

    def filter_opened(self, exclude=True):
        """
        Filter out opened indices from `indices`

        :arg exclude: If `exclude` is `True`, this filter will remove matching
            indices from `indices`. If `exclude` is `False`, then only
            matching indices will be kept in `indices`. Default is `True`
        """
        self.loggit.debug('Filtering open indices')
        self.empty_list_check()
        for index in self.working_list():
            condition = self.index_info[index]['state'] == 'open'
            self.loggit.debug(
                'Index {0} state: {1}'.format(
                    index, self.index_info[index]['state']
                )
            )
            self.__excludify(condition, exclude, index)

    def filter_allocated(self, key=None, value=None,
                         allocation_type='require', exclude=True):
        """
        Match indices that have the routing allocation rule of `key=value`
        from `indices`

        :arg key: The allocation attribute to check for
        :arg value: The value to check for
        :arg allocation_type: Type of allocation to apply
        :arg exclude: If `exclude` is `True`, this filter will remove matching
            indices from `indices`. If `exclude` is `False`, then only
            matching indices will be kept in `indices`. Default is `True`
        """
        self.loggit.debug(
            'Filtering indices with shard routing allocation rules')
        if not key:
            raise MissingArgument('No value for "key" provided')
        if not value:
            raise MissingArgument('No value for "value" provided')
        if not allocation_type in ['include', 'exclude', 'require']:
            raise ValueError(
                'Invalid "allocation_type": {0}'.format(allocation_type)
            )
        self.empty_list_check()
        index_lists = chunk_index_list(self.indices)
        for l in index_lists:
            working_list = self.client.indices.get_settings(index=to_csv(l))
            if working_list:
                for index in list(working_list.keys()):
                    try:
                        has_routing = (
                            working_list[index]['settings']['index']['routing']
                            ['allocation'][allocation_type][key] == value
                        )
                    except KeyError:
                        has_routing = False
                    msg = (
                        '{0}: Routing (mis)match: '
                        'index.routing.allocation.{1}.{2}={3}.'.format(
                            index, allocation_type, key, value
                        )
                    )
                    self.__excludify(has_routing, exclude, index, msg)

    def filter_none(self):
        self.loggit.debug('"None" filter selected. No filtering will be done.')

    def filter_by_alias(self, aliases=None, exclude=False):
        """
        Match indices which are associated with the alias or list of aliases
        identified by `aliases`.

        An update to Elasticsearch 5.5.0 changes the behavior of this from
        previous 5.x versions:
        https://www.elastic.co/guide/en/elasticsearch/reference/5.5/breaking-changes-5.5.html#breaking_55_rest_changes

        What this means is that indices must appear in all aliases in list
        `aliases` or a 404 error will result, leading to no indices being
        matched.  In older versions, if the index was associated with even one
        of the aliases in `aliases`, it would result in a match.

        It is unknown if this behavior affects anyone.  At the time this was
        written, no users have been bitten by this.  The code could be adapted
        to manually loop if the previous behavior is desired.  But if no users
        complain, this will become the accepted/expected behavior.

        :arg aliases: A list of alias names.
        :type aliases: list
        :arg exclude: If `exclude` is `True`, this filter will remove matching
            indices from `indices`. If `exclude` is `False`, then only
            matching indices will be kept in `indices`. Default is `False`
        """
        self.loggit.debug(
            'Filtering indices matching aliases: "{0}"'.format(aliases))
        if not aliases:
            raise MissingArgument('No value for "aliases" provided')
        aliases = ensure_list(aliases)
        self.empty_list_check()
        index_lists = chunk_index_list(self.indices)
        for l in index_lists:
            try:
                # get_alias will either return {} or raise a NotFoundError.
                has_alias = list(self.client.indices.get_alias(
                    index=to_csv(l),
                    name=to_csv(aliases)
                ).keys())
                self.loggit.debug('has_alias: {0}'.format(has_alias))
            except elasticsearch.exceptions.NotFoundError:
                # If we see the NotFoundError, we need to set has_alias to []
                has_alias = []
            for index in l:
                if index in has_alias:
                    isOrNot = 'is'
                    condition = True
                else:
                    isOrNot = 'is not'
                    condition = False
                msg = (
                    '{0} {1} associated with aliases: {2}'.format(
                        index, isOrNot, aliases
                    )
                )
                self.__excludify(condition, exclude, index, msg)
    def filter_by_count(self, count=None, reverse=True, use_age=False,
                        source='creation_date', timestring=None, field=None,
                        stats_result='min_value', exclude=True):
        """
        Remove indices from the actionable list beyond the number `count`,
        sorted reverse-alphabetically by default.  If you set `reverse` to
        `False`, it will be sorted alphabetically.

        The default is usually what you will want. If only one kind of index
        is provided--for example, indices matching ``logstash-%Y.%m.%d``--then
        reverse alphabetical sorting will mean the oldest will remain in the
        list, because lower numbers in the dates mean older indices.

        By setting `reverse` to `False`, then ``index3`` will be deleted
        before ``index2``, which will be deleted before ``index1``

        `use_age` allows ordering indices by age.  Age is determined by the
        index creation date by default, but you can specify an `source` of
        ``name``, ``max_value``, or ``min_value``.  The ``name`` `source`
        requires the timestring argument.

        :arg count: Filter indices beyond `count`.
        :arg reverse: The filtering direction. (default: `True`).
        :arg use_age: Sort indices by age.  ``source`` is required in this
            case.
        :arg source: Source of index age. Can be one of ``name``,
            ``creation_date``, or ``field_stats``. Default: ``creation_date``
        :arg timestring: An strftime string to match the datestamp in an index
            name. Only used if `source` ``name`` is selected.
        :arg field: A timestamp field name.  Only used if `source`
            ``field_stats`` is selected.
        :arg stats_result: Either `min_value` or `max_value`.  Only used if
            `source` ``field_stats`` is selected. It determines whether to
            reference the minimum or maximum value of `field` in each index.
        :arg exclude: If `exclude` is `True`, this filter will remove matching
            indices from `indices`. If `exclude` is `False`, then only
            matching indices will be kept in `indices`. Default is `True`
        """
        self.loggit.debug('Filtering indices by count')
        if not count:
            raise MissingArgument('No value for "count" provided')
        # Create a copy-by-value working list
        working_list = self.working_list()
        if use_age:
            if source != 'name':
                self.loggit.warn(
                    'Cannot get age information from closed indices unless '
                    'source="name".  Omitting any closed indices.'
                )
                self.filter_closed()
            self._calculate_ages(
                source=source, timestring=timestring, field=field,
                stats_result=stats_result
            )
            # Using default value of reverse=True in self._sort_by_age()
            sorted_indices = self._sort_by_age(working_list, reverse=reverse)
        else:
            # Default to sorting by index name
            sorted_indices = sorted(working_list, reverse=reverse)
        idx = 1
        for index in sorted_indices:
            msg = (
                '{0} is {1} of specified count of {2}.'.format(
                    index, idx, count
                )
            )
            condition = True if idx <= count else False
            self.__excludify(condition, exclude, index, msg)
            idx += 1

    def filter_period(self, source='name', range_from=None, range_to=None,
                      timestring=None, unit=None, field=None,
                      stats_result='min_value', week_starts_on='sunday',
                      epoch=None, exclude=False):
        """
        Match `indices` with ages within a given period.

        :arg source: Source of index age. Can be one of 'name',
            'creation_date', or 'field_stats'
        :arg range_from: How many ``unit`` (s) in the past/future is the
            origin?
        :arg range_to: How many ``unit`` (s) in the past/future is the end
            point?
        :arg timestring: An strftime string to match the datestamp in an index
            name. Only used for index filtering by ``name``.
        :arg unit: One of ``hours``, ``days``, ``weeks``, ``months``, or
            ``years``.
        :arg field: A timestamp field name.  Only used for ``field_stats``
            based calculations.
        :arg stats_result: Either `min_value` or `max_value`.  Only used in
            conjunction with `source`=``field_stats`` to choose whether to
            reference the minimum or maximum result value.
        :arg week_starts_on: Either ``sunday`` or ``monday``. Default is
            ``sunday``
        :arg epoch: An epoch timestamp used to establish a point of reference
            for calculations. If not provided, the current time will be used.
        :arg exclude: If `exclude` is `True`, this filter will remove matching
            indices from `indices`. If `exclude` is `False`, then only
            matching indices will be kept in `indices`. Default is `False`
        """
        self.loggit.debug('Filtering indices by period')
        try:
            start, end = date_range(
                unit, range_from, range_to, epoch,
                week_starts_on=week_starts_on
            )
        except Exception as e:
            report_failure(e)
        self._calculate_ages(
            source=source, timestring=timestring, field=field,
            stats_result=stats_result
        )
        for index in self.working_list():
            try:
                age = int(self.index_info[index]['age'][self.age_keyfield])
                msg = (
                    'Index "{0}" age ({1}), period start: "{2}", period '
                    'end, "{3}"'.format(
                        index, age, start, end
                    )
                )
                # Because time adds to epoch, smaller numbers are actually
                # older timestamps.
                inrange = ((age >= start) and (age <= end))
                self.__excludify(inrange, exclude, index, msg)
            except KeyError:
                self.loggit.debug(
                    'Index "{0}" does not meet provided criteria. '
                    'Removing from list.'.format(index))
                self.indices.remove(index)
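    # Illustrative sketch (not part of the original file): retain only the
    # five most recent indices by creation date (the count is an assumption):
    #
    #     ilo.filter_by_count(count=5, use_age=True, source='creation_date')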
    def iterate_filters(self, filter_dict):
        """
        Iterate over the filters defined in `config` and execute them.

        :arg filter_dict: The configuration dictionary

        .. note:: `filter_dict` should be a dictionary with the following
            form:

        .. code-block:: python

                { 'filters' : [
                        {
                            'filtertype': 'the_filter_type',
                            'key1' : 'value1',
                            ...
                            'keyN' : 'valueN'
                        }
                    ]
                }
        """
        self.loggit.debug('Iterating over a list of filters')
        # Make sure we actually _have_ filters to act on
        if not 'filters' in filter_dict or len(filter_dict['filters']) < 1:
            self.loggit.info(
                'No filters in config.  Returning unaltered object.')
            return
        self.loggit.debug('All filters: {0}'.format(filter_dict['filters']))
        for f in filter_dict['filters']:
            self.loggit.debug('Top of the loop: {0}'.format(self.indices))
            self.loggit.debug('Un-parsed filter args: {0}'.format(f))
            # Make sure we got at least this much in the configuration
            self.loggit.debug('Parsed filter args: {0}'.format(
                    SchemaCheck(
                        f,
                        filters.structure(),
                        'filter',
                        'IndexList.iterate_filters'
                    ).result()
                )
            )
            method = self.__map_method(f['filtertype'])
            del f['filtertype']
            # If it's a filtertype with arguments, update the defaults with
            # the provided settings.
            if f:
                self.loggit.debug('Filter args: {0}'.format(f))
                self.loggit.debug('Pre-instance: {0}'.format(self.indices))
                method(**f)
                self.loggit.debug('Post-instance: {0}'.format(self.indices))
            else:
                # Otherwise, it's a settingless filter.
                method()

# ==== curator-5.2.0/curator/logtools.py ====

import sys
import json
import logging
import time

class LogstashFormatter(logging.Formatter):
    # The LogRecord attributes we want to carry over to the Logstash message,
    # mapped to the corresponding output key.
    WANTED_ATTRS = {'levelname': 'loglevel',
                    'funcName': 'function',
                    'lineno': 'linenum',
                    'message': 'message',
                    'name': 'name'}

    # def converter(self, timevalue):
    #     return time.gmtime(timevalue)

    def format(self, record):
        self.converter = time.gmtime
        timestamp = '%s.%03dZ' % (
            self.formatTime(record, datefmt='%Y-%m-%dT%H:%M:%S'),
            record.msecs)
        result = {'message': record.getMessage(),
                  '@timestamp': timestamp}
        for attribute in set(self.WANTED_ATTRS).intersection(record.__dict__):
            result[self.WANTED_ATTRS[attribute]] = getattr(record, attribute)
        return json.dumps(result, sort_keys=True)

class Whitelist(logging.Filter):
    def __init__(self, *whitelist):
        self.whitelist = [logging.Filter(name) for name in whitelist]

    def filter(self, record):
        return any(f.filter(record) for f in self.whitelist)

class Blacklist(Whitelist):
    def filter(self, record):
        return not Whitelist.filter(self, record)

class LogInfo(object):
    def __init__(self, cfg):
        cfg['loglevel'] = 'INFO' if not 'loglevel' in cfg else cfg['loglevel']
        cfg['logfile'] = None if not 'logfile' in cfg else cfg['logfile']
        cfg['logformat'] = (
            'default' if not 'logformat' in cfg else cfg['logformat'])
        self.numeric_log_level = getattr(
            logging, cfg['loglevel'].upper(), None)
        self.format_string = '%(asctime)s %(levelname)-9s %(message)s'
        if not isinstance(self.numeric_log_level, int):
            raise ValueError('Invalid log level: {0}'.format(cfg['loglevel']))
        self.handler = logging.StreamHandler(
            open(cfg['logfile'], 'a') if cfg['logfile'] else sys.stdout
        )
        if self.numeric_log_level == 10:  # DEBUG
            self.format_string = (
                '%(asctime)s %(levelname)-9s %(name)22s '
                '%(funcName)22s:%(lineno)-4d %(message)s'
            )
        if cfg['logformat'] == 'json' or cfg['logformat'] == 'logstash':
            self.handler.setFormatter(LogstashFormatter())
        else:
            self.handler.setFormatter(logging.Formatter(self.format_string))
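# Illustrative sketch (not part of the original file): attaching a LogInfo
# handler to the root logger.  The config values are assumptions.
def _example_logging_setup():
    loginfo = LogInfo({'loglevel': 'INFO', 'logfile': None,
                       'logformat': 'logstash'})
    logging.root.addHandler(loginfo.handler)
    logging.root.setLevel(loginfo.numeric_log_level)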
# ==== curator-5.2.0/curator/repomgrcli.py ====

import elasticsearch
import click
import re
import sys
import logging
from .defaults import settings
from .exceptions import *
from .config_utils import process_config
from .utils import *
from ._version import __version__

logger = logging.getLogger('curator.repomgrcli')

def delete_callback(ctx, param, value):
    if not value:
        ctx.abort()

def show_repos(client):
    for repository in sorted(get_repository(client, '_all').keys()):
        print('{0}'.format(repository))
    sys.exit(0)

@click.command(short_help='Filesystem Repository')
@click.option('--repository', required=True, type=str, help='Repository name')
@click.option(
    '--location',
    required=True,
    type=str,
    help=(
        'Shared file-system location. '
        'Must match remote path, & be accessible to all master & data nodes'
    )
)
@click.option('--compression', type=bool, default=True, show_default=True,
              help='Enable/Disable metadata compression.')
@click.option('--chunk_size', type=str,
              help='Chunk size, e.g. 1g, 10m, 5k. [unbounded]')
@click.option('--max_restore_bytes_per_sec', type=str, default='20mb',
              show_default=True,
              help='Throttles per node restore rate (per second).')
@click.option('--max_snapshot_bytes_per_sec', type=str, default='20mb',
              show_default=True,
              help='Throttles per node snapshot rate (per second).')
@click.pass_context
def fs(ctx, repository, location, compression, chunk_size,
       max_restore_bytes_per_sec, max_snapshot_bytes_per_sec):
    """
    Create a filesystem repository.
    """
    logger = logging.getLogger('curator.repomgrcli.fs')
    client = get_client(**ctx.obj['client_args'])
    try:
        create_repository(client, repo_type='fs', **ctx.params)
    except FailedExecution as e:
        logger.critical(e)
        sys.exit(1)

@click.command(short_help='S3 Repository')
@click.option('--repository', required=True, type=str, help='Repository name')
@click.option('--bucket', required=True, type=str, help='S3 bucket name')
@click.option('--region', type=str, help='S3 region. [US Standard]')
@click.option('--base_path', type=str, help='S3 base path. [root]')
@click.option('--access_key', type=str,
              help='S3 access key. [value of cloud.aws.access_key]')
@click.option('--secret_key', type=str,
              help='S3 secret key. [value of cloud.aws.secret_key]')
@click.option('--compression', type=bool, default=True, show_default=True,
              help='Enable/Disable metadata compression.')
@click.option('--chunk_size', type=str,
              help='Chunk size, e.g. 1g, 10m, 5k. [unbounded]')
@click.option('--max_restore_bytes_per_sec', type=str, default='20mb',
              show_default=True,
              help='Throttles per node restore rate (per second).')
@click.option('--max_snapshot_bytes_per_sec', type=str, default='20mb',
              show_default=True,
              help='Throttles per node snapshot rate (per second).')
@click.pass_context
def s3(ctx, repository, bucket, region, base_path, access_key, secret_key,
       compression, chunk_size, max_restore_bytes_per_sec,
       max_snapshot_bytes_per_sec):
    """
    Create an S3 repository.
    """
    logger = logging.getLogger('curator.repomgrcli.s3')
    client = get_client(**ctx.obj['client_args'])
    try:
        create_repository(client, repo_type='s3', **ctx.params)
    except FailedExecution as e:
        logger.critical(e)
        sys.exit(1)
""" ctx.obj = {} ctx.obj['client_args'] = process_config(config) logger = logging.getLogger(__name__) logger.debug('Client and logging options validated.') @repo_mgr_cli.group('create') @click.pass_context def _create(ctx): """Create an Elasticsearch repository""" _create.add_command(fs) _create.add_command(s3) @repo_mgr_cli.command('show') @click.pass_context def show(ctx): """ Show all repositories """ client = get_client(**ctx.obj['client_args']) show_repos(client) @repo_mgr_cli.command('delete') @click.option('--repository', required=True, help='Repository name', type=str) @click.option('--yes', is_flag=True, callback=delete_callback, expose_value=False, prompt='Are you sure you want to delete the repository?') @click.pass_context def _delete(ctx, repository): """Delete an Elasticsearch repository""" client = get_client(**ctx.obj['client_args']) try: logger.info('Deleting repository {0}...'.format(repository)) client.snapshot.delete_repository(repository=repository) except elasticsearch.NotFoundError: logger.error( 'Unable to delete repository: {0} Not Found.'.format(repository)) sys.exit(1) curator-5.2.0/curator/singletons.py000066400000000000000000000555441315226075300174340ustar00rootroot00000000000000import os, sys import yaml, json import logging import click from voluptuous import Schema from .defaults import settings from .validators import SchemaCheck, config_file, options from .config_utils import test_config, set_logging from .exceptions import * from .utils import * from .indexlist import IndexList from .snapshotlist import SnapshotList from .actions import * from ._version import __version__ CLASS_MAP = { 'alias' : Alias, 'allocation' : Allocation, 'close' : Close, 'cluster_routing' : ClusterRouting, 'create_index' : CreateIndex, 'delete_indices' : DeleteIndices, 'delete_snapshots' : DeleteSnapshots, 'forcemerge' : ForceMerge, 'open' : Open, 'replicas' : Replicas, 'restore' : Restore, 'snapshot' : Snapshot, } EXCLUDED_OPTIONS = [ 'ignore_empty_list', 'timeout_override', 'continue_if_exception', 'disable_action' ] def validate_filter_json(ctx, param, value): try: filter_list = ensure_list(json.loads(value)) return filter_list except ValueError: raise click.BadParameter('Invalid JSON: {0}'.format(value)) def false_to_none(ctx, param, value): try: if value: return True else: return None except ValueError: raise click.BadParameter('Invalid value: {0}'.format(value)) def filter_schema_check(action, filter_dict): valid_filters = SchemaCheck( filter_dict, Schema(filters.Filters(action, location='singleton')), 'filters', '{0} singleton action "filters"'.format(action) ).result() return validate_filters(action, valid_filters) def _actionator(action, action_obj, dry_run=True): logger = logging.getLogger(__name__) logger.debug('Doing the singleton "{0}" action here.'.format(action)) try: if dry_run: action_obj.do_dry_run() else: action_obj.do_action() except Exception as e: if isinstance(e, NoIndices) or isinstance(e, NoSnapshots): logger.error( 'Unable to complete action "{0}". No actionable items ' 'in list: {1}'.format(action, type(e)) ) else: logger.error( 'Failed to complete action: {0}. 
# ==== curator-5.2.0/curator/singletons.py ====

import os, sys
import yaml, json
import logging
import click
from voluptuous import Schema
from .defaults import settings
from .validators import SchemaCheck, config_file, options
from .config_utils import test_config, set_logging
from .exceptions import *
from .utils import *
from .indexlist import IndexList
from .snapshotlist import SnapshotList
from .actions import *
from ._version import __version__

CLASS_MAP = {
    'alias' : Alias,
    'allocation' : Allocation,
    'close' : Close,
    'cluster_routing' : ClusterRouting,
    'create_index' : CreateIndex,
    'delete_indices' : DeleteIndices,
    'delete_snapshots' : DeleteSnapshots,
    'forcemerge' : ForceMerge,
    'open' : Open,
    'replicas' : Replicas,
    'restore' : Restore,
    'snapshot' : Snapshot,
}

EXCLUDED_OPTIONS = [
    'ignore_empty_list', 'timeout_override',
    'continue_if_exception', 'disable_action'
]

def validate_filter_json(ctx, param, value):
    try:
        filter_list = ensure_list(json.loads(value))
        return filter_list
    except ValueError:
        raise click.BadParameter('Invalid JSON: {0}'.format(value))

def false_to_none(ctx, param, value):
    try:
        if value:
            return True
        else:
            return None
    except ValueError:
        raise click.BadParameter('Invalid value: {0}'.format(value))

def filter_schema_check(action, filter_dict):
    valid_filters = SchemaCheck(
        filter_dict,
        Schema(filters.Filters(action, location='singleton')),
        'filters',
        '{0} singleton action "filters"'.format(action)
    ).result()
    return validate_filters(action, valid_filters)

def _actionator(action, action_obj, dry_run=True):
    logger = logging.getLogger(__name__)
    logger.debug('Doing the singleton "{0}" action here.'.format(action))
    try:
        if dry_run:
            action_obj.do_dry_run()
        else:
            action_obj.do_action()
    except Exception as e:
        if isinstance(e, NoIndices) or isinstance(e, NoSnapshots):
            logger.error(
                'Unable to complete action "{0}".  No actionable items '
                'in list: {1}'.format(action, type(e))
            )
        else:
            logger.error(
                'Failed to complete action: {0}.  {1}: '
                '{2}'.format(action, type(e), e)
            )
        sys.exit(1)
    logger.info('Singleton "{0}" action completed.'.format(action))

def _do_filters(list_object, filters, ignore=False):
    logger = logging.getLogger(__name__)
    logger.debug('Running filters and testing for empty list object')
    try:
        list_object.iterate_filters(filters)
        list_object.empty_list_check()
    except (NoIndices, NoSnapshots) as e:
        if isinstance(e, NoIndices):
            otype = 'index'
        else:
            otype = 'snapshot'
        if ignore:
            logger.info(
                'Singleton action not performed: empty {0} list'.format(otype)
            )
            sys.exit(0)
        else:
            logger.error(
                'Singleton action failed due to empty {0} list'.format(otype)
            )
            sys.exit(1)

def _prune_excluded(option_dict):
    for k in list(option_dict.keys()):
        if k in EXCLUDED_OPTIONS:
            del option_dict[k]
    return option_dict

def option_schema_check(action, option_dict):
    clean_options = SchemaCheck(
        prune_nones(option_dict),
        options.get_schema(action),
        'options',
        '{0} singleton action "options"'.format(action)
    ).result()
    return _prune_excluded(clean_options)

def config_override(ctx, config_dict):
    if config_dict == None:
        config_dict = {}
    for k in ['client', 'logging']:
        if not k in config_dict:
            config_dict[k] = {}
    for k in list(ctx.params.keys()):
        if k in ['dry_run', 'config']:
            pass
        elif k == 'host':
            if 'host' in ctx.params and ctx.params['host'] is not None:
                config_dict['client']['hosts'] = ctx.params[k]
        elif k in ['loglevel', 'logfile', 'logformat']:
            if k in ctx.params and ctx.params[k] is not None:
                config_dict['logging'][k] = ctx.params[k]
        else:
            if k in ctx.params and ctx.params[k] is not None:
                config_dict['client'][k] = ctx.params[k]
    # After override, prune the nones
    for k in ['client', 'logging']:
        config_dict[k] = prune_nones(config_dict[k])
    return SchemaCheck(config_dict, config_file.client(),
                       'Client Configuration',
                       'full configuration dictionary').result()

@click.command(name='allocation')
@click.option(
    '--key', type=str, required=True, help='Node identification tag'
)
@click.option(
    '--value', type=str, default=None, help='Value associated with --key'
)
@click.option(
    '--allocation_type', type=str,
    help='Must be one of: require, include, or exclude'
)
@click.option(
    '--wait_for_completion', is_flag=True,
    help='Wait for operation to complete'
)
@click.option(
    '--ignore_empty_list', is_flag=True,
    help='Do not raise exception if there are no actionable indices'
)
@click.option(
    '--filter_list', callback=validate_filter_json,
    help='JSON string representing an array of filters.', required=True
)
@click.pass_context
def allocation_singleton(ctx, key, value, allocation_type,
                         wait_for_completion, ignore_empty_list, filter_list):
    """
    Shard Routing Allocation
    """
    action = 'allocation'
    action_class = CLASS_MAP[action]
    c_args = ctx.obj['config']['client']
    client = get_client(**c_args)
    logger = logging.getLogger(__name__)
    raw_options = {
        'key': key,
        'value': value,
        'allocation_type': allocation_type,
        'wait_for_completion': wait_for_completion,
    }
    logger.debug('Validating provided options: {0}'.format(raw_options))
    mykwargs = option_schema_check(action, raw_options)
    mykwargs.update(
        { 'max_wait': c_args['timeout'] if c_args['timeout'] else 30 }
    )
    logger.debug('Validating provided filters: {0}'.format(filter_list))
    clean_filters = {
        'filters': filter_schema_check(action, filter_list)
    }
    ilo = IndexList(client)
    _do_filters(ilo, clean_filters, ignore_empty_list)
    action_obj = action_class(ilo, **mykwargs)
    ### Do the action
    _actionator(action, action_obj, dry_run=ctx.parent.params['dry_run'])
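# Illustrative sketch (not part of the original file): the --filter_list
# option used by these singleton commands accepts a JSON object or array of
# filter objects, e.g. (values assumed):
#
#     --filter_list '[{"filtertype":"pattern","kind":"prefix","value":"logstash-"}]'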
@click.command(name='close')
@click.option(
    '--delete_aliases', is_flag=True,
    help='Delete all aliases from indices to be closed'
)
@click.option(
    '--ignore_empty_list', is_flag=True,
    help='Do not raise exception if there are no actionable indices'
)
@click.option(
    '--filter_list', callback=validate_filter_json,
    help='JSON string representing an array of filters.', required=True
)
@click.pass_context
def close_singleton(ctx, delete_aliases, ignore_empty_list, filter_list):
    """
    Close indices
    """
    action = 'close'
    action_class = CLASS_MAP[action]
    c_args = ctx.obj['config']['client']
    client = get_client(**c_args)
    logger = logging.getLogger(__name__)
    raw_options = { 'delete_aliases': delete_aliases }
    logger.debug('Validating provided options: {0}'.format(raw_options))
    mykwargs = option_schema_check(action, raw_options)
    logger.debug('Validating provided filters: {0}'.format(filter_list))
    clean_filters = {
        'filters': filter_schema_check(action, filter_list)
    }
    ilo = IndexList(client)
    _do_filters(ilo, clean_filters, ignore_empty_list)
    action_obj = action_class(ilo, **mykwargs)
    ### Do the action
    _actionator(action, action_obj, dry_run=ctx.parent.params['dry_run'])

@click.command(name='delete_indices')
@click.option(
    '--ignore_empty_list', is_flag=True,
    help='Do not raise exception if there are no actionable indices'
)
@click.option(
    '--filter_list', callback=validate_filter_json,
    help='JSON string representing an array of filters.', required=True
)
@click.pass_context
def delete_indices_singleton(ctx, ignore_empty_list, filter_list):
    """
    Delete indices
    """
    action = 'delete_indices'
    action_class = CLASS_MAP[action]
    c_args = ctx.obj['config']['client']
    client = get_client(**c_args)
    logger = logging.getLogger(__name__)
    mykwargs = {
        'master_timeout':
            c_args['timeout'] if c_args['timeout'] <= 300 else 300
    }
    logger.debug('Validating provided filters: {0}'.format(filter_list))
    clean_filters = {
        'filters': filter_schema_check(action, filter_list)
    }
    ilo = IndexList(client)
    _do_filters(ilo, clean_filters, ignore_empty_list)
    action_obj = action_class(ilo, **mykwargs)
    ### Do the action
    _actionator(action, action_obj, dry_run=ctx.parent.params['dry_run'])

@click.command(name='delete_snapshots')
@click.option(
    '--repository', type=str, required=True, help='Snapshot repository name'
)
@click.option(
    '--retry_count', type=int, help='Number of times to retry (max 3)'
)
@click.option(
    '--retry_interval', type=int, help='Time in seconds between retries'
)
@click.option(
    '--ignore_empty_list', is_flag=True,
    help='Do not raise exception if there are no actionable snapshots'
)
@click.option(
    '--filter_list', callback=validate_filter_json,
    help='JSON string representing an array of filters.', required=True
)
@click.pass_context
def delete_snapshots_singleton(ctx, repository, retry_count, retry_interval,
                               ignore_empty_list, filter_list):
    """
    Delete snapshots
    """
    action = 'delete_snapshots'
    action_class = CLASS_MAP[action]
    c_args = ctx.obj['config']['client']
    client = get_client(**c_args)
    logger = logging.getLogger(__name__)
    raw_options = {
        'repository': repository,
        'retry_count': retry_count,
        'retry_interval': retry_interval
    }
    logger.debug('Validating provided options: {0}'.format(raw_options))
    mykwargs = option_schema_check(action, raw_options)
    # The repository arg is not necessary after the schema check.  It's only
    # needed for the slo object.
    del mykwargs['repository']
    logger.debug('Validating provided filters: {0}'.format(filter_list))
    clean_filters = {
        'filters': filter_schema_check(action, filter_list)
    }
    slo = SnapshotList(client, repository=repository)
    _do_filters(slo, clean_filters, ignore_empty_list)
    action_obj = action_class(slo, **mykwargs)
    ### Do the action
    _actionator(action, action_obj, dry_run=ctx.parent.params['dry_run'])
It's only for the slo object del mykwargs['repository'] logger.debug('Validating provided filters: {0}'.format(filter_list)) clean_filters = { 'filters': filter_schema_check(action, filter_list) } slo = SnapshotList(client, repository=repository) _do_filters(slo, clean_filters, ignore_empty_list) action_obj = action_class(slo, **mykwargs) ### Do the action _actionator(action, action_obj, dry_run=ctx.parent.params['dry_run']) @click.command(name='open') @click.option( '--ignore_empty_list', is_flag=True, help='Do not raise exception if there are no actionable indices' ) @click.option( '--filter_list', callback=validate_filter_json, help='JSON string representing an array of filters.', required=True ) @click.pass_context def open_singleton( ctx, ignore_empty_list, filter_list): """ Open indices """ action = 'open' action_class = CLASS_MAP[action] c_args = ctx.obj['config']['client'] client = get_client(**c_args) logger = logging.getLogger(__name__) logger.debug('Validating provided filters: {0}'.format(filter_list)) clean_filters = { 'filters': filter_schema_check(action, filter_list) } ilo = IndexList(client) _do_filters(ilo, clean_filters, ignore_empty_list) action_obj = action_class(ilo) ### Do the action _actionator(action, action_obj, dry_run=ctx.parent.params['dry_run']) @click.command(name='forcemerge') @click.option( '--max_num_segments', type=int, required=True, help='Maximum number of segments per shard (minimum of 1)' ) @click.option( '--delay', type=float, help='Time in seconds to delay between operations. Default 0, maximum 3600' ) @click.option( '--ignore_empty_list', is_flag=True, help='Do not raise exception if there are no actionable indices' ) @click.option( '--filter_list', callback=validate_filter_json, help='JSON string representing an array of filters.', required=True ) @click.pass_context def forcemerge_singleton( ctx, max_num_segments, delay, ignore_empty_list, filter_list): """ forceMerge index/shard segments """ action = 'forcemerge' action_class = CLASS_MAP[action] c_args = ctx.obj['config']['client'] client = get_client(**c_args) logger = logging.getLogger(__name__) raw_options = { 'max_num_segments': max_num_segments, 'delay': delay, } logger.debug('Validating provided options: {0}'.format(raw_options)) mykwargs = option_schema_check(action, raw_options) logger.debug('Validating provided filters: {0}'.format(filter_list)) clean_filters = { 'filters': filter_schema_check(action, filter_list) } ilo = IndexList(client) _do_filters(ilo, clean_filters, ignore_empty_list) action_obj = action_class(ilo, **mykwargs) ### Do the action _actionator(action, action_obj, dry_run=ctx.parent.params['dry_run']) @click.command(name='replicas') @click.option( '--count', type=int, required=True, help='Number of replicas (max 10)' ) @click.option( '--wait_for_completion', is_flag=True, help='Wait for operation to complete' ) @click.option( '--ignore_empty_list', is_flag=True, help='Do not raise exception if there are no actionable indices' ) @click.option( '--filter_list', callback=validate_filter_json, help='JSON string representing an array of filters.', required=True ) @click.pass_context def replicas_singleton( ctx, count, wait_for_completion, ignore_empty_list, filter_list): """ Change replica count """ action = 'replicas' action_class = CLASS_MAP[action] c_args = ctx.obj['config']['client'] client = get_client(**c_args) logger = logging.getLogger(__name__) raw_options = { 'count': count, 'wait_for_completion': wait_for_completion, } logger.debug('Validating provided options: 
{0}'.format(raw_options)) mykwargs = option_schema_check(action, raw_options) logger.debug('Validating provided filters: {0}'.format(filter_list)) clean_filters = { 'filters': filter_schema_check(action, filter_list) } ilo = IndexList(client) _do_filters(ilo, clean_filters, ignore_empty_list) action_obj = action_class(ilo, **mykwargs) ### Do the action _actionator(action, action_obj, dry_run=ctx.parent.params['dry_run']) @click.command(name='snapshot') @click.option( '--repository', type=str, required=True, help='Snapshot repository') @click.option( '--name', type=str, help='Snapshot name', show_default=True, default='curator-%Y%m%d%H%M%S' ) @click.option( '--ignore_unavailable', is_flag=True, show_default=True, help='Ignore unavailable shards/indices.' ) @click.option( '--include_global_state', type=bool, show_default=True, default=True, expose_value=True, help='Store cluster global state with snapshot.' ) @click.option( '--partial', is_flag=True, show_default=True, help='Do not fail if primary shard is unavailable.' ) @click.option( '--wait_for_completion', type=bool, show_default=True, default=True, help='Wait for operation to complete' ) @click.option( '--skip_repo_fs_check', is_flag=True, expose_value=True, help='Skip repository filesystem access validation.' ) @click.option( '--ignore_empty_list', is_flag=True, help='Do not raise exception if there are no actionable indices' ) @click.option( '--filter_list', callback=validate_filter_json, default='{"filtertype":"none"}', help='JSON string representing an array of filters.' ) @click.pass_context def snapshot_singleton( ctx, repository, name, ignore_unavailable, include_global_state, partial, skip_repo_fs_check, wait_for_completion, ignore_empty_list, filter_list): """ Snapshot indices """ action = 'snapshot' action_class = CLASS_MAP[action] c_args = ctx.obj['config']['client'] client = get_client(**c_args) logger = logging.getLogger(__name__) raw_options = { 'repository': repository, 'name': name, 'ignore_unavailable': ignore_unavailable, 'include_global_state': include_global_state, 'partial': partial, 'skip_repo_fs_check': skip_repo_fs_check, 'wait_for_completion': wait_for_completion, } logger.debug('Validating provided options: {0}'.format(raw_options)) mykwargs = option_schema_check(action, raw_options) logger.debug('Validating provided filters: {0}'.format(filter_list)) clean_filters = { 'filters': filter_schema_check(action, filter_list) } ilo = IndexList(client) _do_filters(ilo, clean_filters, ignore_empty_list) action_obj = action_class(ilo, **mykwargs) ### Do the action _actionator(action, action_obj, dry_run=ctx.parent.params['dry_run']) @click.command(name='show_indices') @click.option('--verbose', help='Show verbose output.', is_flag=True) @click.option('--header', help='Print header if --verbose', is_flag=True) @click.option('--epoch', help='Print time as epoch if --verbose', is_flag=True) @click.option( '--ignore_empty_list', is_flag=True, help='Do not raise exception if there are no actionable indices' ) @click.option( '--filter_list', callback=validate_filter_json, default='{"filtertype":"none"}', help='JSON string representing an array of filters.' ) @click.pass_context def show_indices_singleton( ctx, epoch, header, verbose, ignore_empty_list, filter_list): """ Show indices """ action = "open" c_args = ctx.obj['config']['client'] client = get_client(**c_args) logger = logging.getLogger(__name__) logger.debug( 'Using dummy "open" action for show_indices singleton. ' 'No action will be taken.' 
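# A hypothetical invocation of this singleton, assuming the curator_cli
# entry point and a cluster on localhost (option names per the decorators
# above):
#
#   curator_cli --host 127.0.0.1 show_indices --verbose --header \
#       --filter_list '{"filtertype":"pattern","kind":"prefix","value":"logstash-"}'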
) logger.debug('Validating provided filters: {0}'.format(filter_list)) clean_filters = { 'filters': filter_schema_check(action, filter_list) } ilo = IndexList(client) _do_filters(ilo, clean_filters, ignore_empty_list) indices = sorted(ilo.indices) # Do some calculations to figure out the proper column sizes allbytes = [] alldocs = [] for idx in indices: allbytes.append(byte_size(ilo.index_info[idx]['size_in_bytes'])) alldocs.append(str(ilo.index_info[idx]['docs'])) if epoch: timeformat = '{6:>13}' column = 'creation_date' else: timeformat = '{6:>20}' column = 'Creation Timestamp' formatting = ( '{0:' + str(len(max(indices, key=len))) + '} ' '{1:>5} ' '{2:>' + str(len(max(allbytes, key=len)) + 1) + '} ' '{3:>' + str(len(max(alldocs, key=len)) + 1) + '} ' '{4:>3} {5:>3} ' + timeformat ) # Print the header, if both verbose and header are enabled if header and verbose: click.secho( formatting.format( 'Index', 'State', 'Size', 'Docs', 'Pri', 'Rep', column ), bold=True, underline=True ) # Loop through indices and print info, if verbose for idx in indices: p = ilo.index_info[idx] if verbose: if epoch: datefield = p['age']['creation_date'] if 'creation_date' in p['age'] else 0 else: datefield = '{0}Z'.format( datetime.utcfromtimestamp(p['age']['creation_date'] ).isoformat()) if 'creation_date' in p['age'] else 'unknown/closed' click.echo( formatting.format( idx, p['state'], byte_size(p['size_in_bytes']), p['docs'], p['number_of_shards'], p['number_of_replicas'], datefield ) ) else: click.echo('{0}'.format(idx)) @click.command(name='show_snapshots') @click.option( '--repository', type=str, required=True, help='Snapshot repository name' ) @click.option( '--ignore_empty_list', is_flag=True, help='Do not raise exception if there are no actionable snapshots' ) @click.option( '--filter_list', callback=validate_filter_json, default='{"filtertype":"none"}', help='JSON string representing an array of filters.' ) @click.pass_context def show_snapshots_singleton( ctx, repository, ignore_empty_list, filter_list): """ Show snapshots """ action = 'delete_snapshots' c_args = ctx.obj['config']['client'] client = get_client(**c_args) logger = logging.getLogger(__name__) logger.debug('Validating provided filters: {0}'.format(filter_list)) clean_filters = { 'filters': filter_schema_check(action, filter_list) } slo = SnapshotList(client, repository=repository) _do_filters(slo, clean_filters, ignore_empty_list) snapshots = sorted(slo.snapshots) for idx in snapshots: click.secho('{0}'.format(idx)) @click.group() @click.option( '--config', help='Path to configuration file. Default: ~/.curator/curator.yml', type=click.Path(), default=settings.config_file() ) @click.option('--host', help='Elasticsearch host.') @click.option('--url_prefix', help='Elasticsearch http url prefix.') @click.option('--port', help='Elasticsearch port.') @click.option( '--use_ssl', is_flag=True, callback=false_to_none, help='Connect to Elasticsearch through SSL.' 
) @click.option( '--certificate', help='Path to certificate to use for SSL validation.') @click.option( '--client-cert', help='Path to file containing SSL certificate for client auth.', type=str ) @click.option( '--client-key', help='Path to file containing SSL key for client auth.', type=str ) @click.option( '--ssl-no-validate', is_flag=True, callback=false_to_none, help='Do not validate SSL certificate' ) @click.option('--http_auth', help='Use Basic Authentication ex: user:pass') @click.option('--timeout', help='Connection timeout in seconds.', type=int) @click.option( '--master-only', is_flag=True, callback=false_to_none, help='Only operate on elected master node.' ) @click.option('--dry-run', is_flag=True, help='Do not perform any changes.') @click.option('--loglevel', help='Log level') @click.option('--logfile', help='log file') @click.option('--logformat', help='Log output format [default|logstash|json].') @click.version_option(version=__version__) @click.pass_context def cli( ctx, config, host, url_prefix, port, use_ssl, certificate, client_cert, client_key, ssl_no_validate, http_auth, timeout, master_only, dry_run, loglevel, logfile, logformat): if os.path.isfile(config): initial_config = test_config(config) else: initial_config = None configuration = config_override(ctx, initial_config) set_logging(configuration['logging']) test_client_options(configuration['client']) logger = logging.getLogger(__name__) ctx.obj['config'] = configuration cli.add_command(allocation_singleton) cli.add_command(close_singleton) cli.add_command(delete_indices_singleton) cli.add_command(delete_snapshots_singleton) cli.add_command(forcemerge_singleton) cli.add_command(open_singleton) cli.add_command(replicas_singleton) cli.add_command(snapshot_singleton) cli.add_command(show_indices_singleton) cli.add_command(show_snapshots_singleton) curator-5.2.0/curator/snapshotlist.py000066400000000000000000000504501315226075300177710ustar00rootroot00000000000000from datetime import timedelta, datetime, date import time import re import logging from .defaults import settings from .validators import SchemaCheck, filters from .exceptions import * from .utils import * class SnapshotList(object): def __init__(self, client, repository=None): verify_client_object(client) if not repository: raise MissingArgument('No value for "repository" provided') if not repository_exists(client, repository): raise FailedExecution( 'Unable to verify existence of repository ' '{0}'.format(repository) ) self.loggit = logging.getLogger('curator.snapshotlist') #: An Elasticsearch Client object. #: Also accessible as an instance variable. self.client = client #: An Elasticsearch repository. #: Also accessible as an instance variable. self.repository = repository #: Instance variable. #: Information extracted from snapshots, such as age, etc. #: Populated by internal method `__get_snapshots` at instance creation #: time. **Type:** ``dict()`` self.snapshot_info = {} #: Instance variable. #: The running list of snapshots which will be used by an Action class. #: Populated by internal methods `__get_snapshots` at instance creation #: time. **Type:** ``list()`` self.snapshots = [] #: Instance variable. #: Raw data dump of all snapshots in the repository at instance creation #: time. **Type:** ``list()`` of ``dict()`` data. 
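        # A usage sketch (hypothetical client and repository values): the
        # snapshot list is fully populated at instance creation time, so
        # filter methods can be applied immediately afterward, e.g.:
        #
        #   slo = SnapshotList(Elasticsearch(), repository='my_backups')
        #   slo.filter_by_regex(kind='prefix', value='curator-')
        #   slo.filter_by_age(source='creation_date', direction='older',
        #       unit='days', unit_count=30)
        #   # slo.snapshots now holds only curator- snapshots over 30 days old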
self.__get_snapshots() def __actionable(self, snap): self.loggit.debug( 'Snapshot {0} is actionable and remains in the list.'.format(snap)) def __not_actionable(self, snap): self.loggit.debug( 'Snapshot {0} is not actionable, removing from ' 'list.'.format(snap) ) self.snapshots.remove(snap) def __excludify(self, condition, exclude, snap, msg=None): if condition == True: if exclude: text = "Removed from actionable list" self.__not_actionable(snap) else: text = "Remains in actionable list" self.__actionable(snap) else: if exclude: text = "Remains in actionable list" self.__actionable(snap) else: text = "Removed from actionable list" self.__not_actionable(snap) if msg: self.loggit.debug('{0}: {1}'.format(text, msg)) def __get_snapshots(self): """ Pull all snapshots into `snapshots` and populate `snapshot_info` """ self.all_snapshots = get_snapshot_data(self.client, self.repository) for list_item in self.all_snapshots: if 'snapshot' in list_item.keys(): self.snapshots.append(list_item['snapshot']) self.snapshot_info[list_item['snapshot']] = list_item self.empty_list_check() def __map_method(self, ft): methods = { 'age': self.filter_by_age, 'count': self.filter_by_count, 'none': self.filter_none, 'pattern': self.filter_by_regex, 'period': self.filter_period, 'state': self.filter_by_state, } return methods[ft] def empty_list_check(self): """Raise exception if `snapshots` is empty""" if not self.snapshots: raise NoSnapshots('snapshot_list object is empty.') def working_list(self): """ Return the current value of `snapshots` as copy-by-value to prevent list stomping during iterations """ # Copy by value, rather than reference to prevent list stomping during # iterations return self.snapshots[:] def _get_name_based_ages(self, timestring): """ Add a snapshot age to `snapshot_info` based on the age as indicated by the snapshot name pattern, if it matches `timestring`. This is stored at key ``age_by_name``. :arg timestring: An strftime pattern """ # Check for empty list before proceeding here to prevent non-iterable # condition self.empty_list_check() ts = TimestringSearch(timestring) for snapshot in self.working_list(): epoch = ts.get_epoch(snapshot) if epoch: self.snapshot_info[snapshot]['age_by_name'] = epoch else: self.snapshot_info[snapshot]['age_by_name'] = None def _calculate_ages(self, source='creation_date', timestring=None): """ This method initiates snapshot age calculation based on the given parameters. Exceptions are raised when they are improperly configured. Set instance variable `age_keyfield` for use later, if needed. :arg source: Source of snapshot age. Can be 'name' or 'creation_date'. :arg timestring: An strftime string to match the datestamp in an snapshot name. Only used if ``source`` is ``name``. """ if source == 'name': self.age_keyfield = 'age_by_name' if not timestring: raise MissingArgument( 'source "name" requires the "timestring" keyword argument' ) self._get_name_based_ages(timestring) elif source == 'creation_date': self.age_keyfield = 'start_time_in_millis' else: raise ValueError( 'Invalid source: {0}. ' 'Must be "name", or "creation_date".'.format(source) ) def _sort_by_age(self, snapshot_list, reverse=True): """ Take a list of snapshots and sort them by date. By default, the youngest are first with `reverse=True`, but the oldest can be first by setting `reverse=False` """ # Do the age-based sorting here. 
        # First, build a temporary dictionary with just snapshot and age
        # as the key and value, respectively
        temp = {}
        for snap in snapshot_list:
            if self.age_keyfield in self.snapshot_info[snap]:
                temp[snap] = self.snapshot_info[snap][self.age_keyfield]
            else:
                msg = (
                    '{0} does not have age key "{1}" in SnapshotList '
                    ' metadata'.format(snap, self.age_keyfield)
                )
                self.__excludify(True, True, snap, msg)
        # If reverse is True, this will sort so the youngest snapshots are
        # first.  However, if you want oldest first, set reverse to False.
        # Effectively, this should set us up to act on everything older than
        # meets the other set criteria.
        # It starts as a tuple, but then becomes a list.
        sorted_tuple = (
            sorted(temp.items(), key=lambda k: k[1], reverse=reverse)
        )
        return [x[0] for x in sorted_tuple]

    def most_recent(self):
        """
        Return the most recent snapshot based on `start_time_in_millis`.
        """
        self.empty_list_check()
        most_recent_time = 0
        most_recent_snap = ''
        for snapshot in self.snapshots:
            snaptime = fix_epoch(
                self.snapshot_info[snapshot]['start_time_in_millis'])
            if snaptime > most_recent_time:
                most_recent_snap = snapshot
                most_recent_time = snaptime
        return most_recent_snap

    def filter_by_regex(self, kind=None, value=None, exclude=False):
        """
        Filter out snapshots not matching the pattern, or in the case of
        exclude, filter those matching the pattern.

        :arg kind: Can be one of: ``suffix``, ``prefix``, ``regex``, or
            ``timestring``. This option defines what kind of filter you will
            be building.
        :arg value: Depends on `kind`. It is the strftime string if `kind` is
            `timestring`. It's used to build the regular expression for other
            kinds.
        :arg exclude: If `exclude` is `True`, this filter will remove matching
            snapshots from `snapshots`. If `exclude` is `False`, then only
            matching snapshots will be kept in `snapshots`.
            Default is `False`
        """
        if kind not in [ 'regex', 'prefix', 'suffix', 'timestring' ]:
            raise ValueError('{0}: Invalid value for kind'.format(kind))

        # Stop here if None or empty value, but zero is okay
        if value == 0:
            pass
        elif not value:
            raise ValueError(
                '{0}: Invalid value for "value". '
                'Cannot be "None" type, empty, or False'.format(value)
            )

        if kind == 'timestring':
            regex = settings.regex_map()[kind].format(get_date_regex(value))
        else:
            regex = settings.regex_map()[kind].format(value)

        self.empty_list_check()
        pattern = re.compile(regex)
        for snapshot in self.working_list():
            match = pattern.match(snapshot)
            self.loggit.debug('Filter by regex: Snapshot: {0}'.format(snapshot))
            if match:
                self.__excludify(True, exclude, snapshot)
            else:
                self.__excludify(False, exclude, snapshot)

    def filter_by_age(self, source='creation_date', direction=None,
        timestring=None, unit=None, unit_count=None, epoch=None, exclude=False
    ):
        """
        Remove snapshots from `snapshots` by relative age calculations.

        :arg source: Source of snapshot age. Can be 'name', or 'creation_date'.
        :arg direction: Time to filter, either ``older`` or ``younger``
        :arg timestring: An strftime string to match the datestamp in a
            snapshot name. Only used for snapshot filtering by ``name``.
        :arg unit: One of ``seconds``, ``minutes``, ``hours``, ``days``,
            ``weeks``, ``months``, or ``years``.
        :arg unit_count: The number of ``unit`` (s). ``unit_count`` * ``unit``
            will be calculated out to the relative number of seconds.
        :arg epoch: An epoch timestamp used in conjunction with ``unit`` and
            ``unit_count`` to establish a point of reference for calculations.
            If not provided, the current time will be used.
        :arg exclude: If `exclude` is `True`, this filter will remove matching
            snapshots from `snapshots`. If `exclude` is `False`, then only
            matching snapshots will be kept in `snapshots`.
            Default is `False`
        """
        self.loggit.debug('Starting filter_by_age')
        # Get timestamp point of reference, PoR
        PoR = get_point_of_reference(unit, unit_count, epoch)
        self.loggit.debug('Point of Reference: {0}'.format(PoR))
        if not direction:
            raise MissingArgument('Must provide a value for "direction"')
        if direction not in ['older', 'younger']:
            raise ValueError(
                'Invalid value for "direction": {0}'.format(direction)
            )
        self._calculate_ages(source=source, timestring=timestring)
        for snapshot in self.working_list():
            if not self.snapshot_info[snapshot][self.age_keyfield]:
                self.loggit.debug(
                    'Removing snapshot {0} for having no age'.format(snapshot))
                self.snapshots.remove(snapshot)
                continue
            msg = (
                'Snapshot "{0}" age ({1}), direction: "{2}", point of '
                'reference, ({3})'.format(
                    snapshot,
                    fix_epoch(self.snapshot_info[snapshot][self.age_keyfield]),
                    direction,
                    PoR
                )
            )
            # Because time adds to epoch, smaller numbers are actually older
            # timestamps.
            snapshot_age = fix_epoch(
                self.snapshot_info[snapshot][self.age_keyfield])
            if direction == 'older':
                agetest = snapshot_age < PoR
            else: # 'younger'
                agetest = snapshot_age > PoR
            self.__excludify(agetest, exclude, snapshot, msg)

    def filter_by_state(self, state=None, exclude=False):
        """
        Filter out snapshots not matching ``state``, or in the case of exclude,
        filter those matching ``state``.

        :arg state: The snapshot state to filter for. Must be one of
            ``SUCCESS``, ``PARTIAL``, ``FAILED``, or ``IN_PROGRESS``.
        :arg exclude: If `exclude` is `True`, this filter will remove matching
            snapshots from `snapshots`. If `exclude` is `False`, then only
            matching snapshots will be kept in `snapshots`.
            Default is `False`
        """
        if state.upper() not in ['SUCCESS', 'PARTIAL', 'FAILED', 'IN_PROGRESS']:
            raise ValueError('{0}: Invalid value for state'.format(state))

        self.empty_list_check()
        for snapshot in self.working_list():
            self.loggit.debug('Filter by state: Snapshot: {0}'.format(snapshot))
            if self.snapshot_info[snapshot]['state'] == state:
                self.__excludify(True, exclude, snapshot)
            else:
                self.__excludify(False, exclude, snapshot)

    def filter_none(self):
        self.loggit.debug('"None" filter selected. No filtering will be done.')

    def filter_by_count(
        self, count=None, reverse=True, use_age=False,
        source='creation_date', timestring=None, exclude=True
    ):
        """
        Remove snapshots from the actionable list beyond the number `count`,
        sorted reverse-alphabetically by default.  If you set `reverse` to
        `False`, it will be sorted alphabetically.

        The default is usually what you will want. If only one kind of snapshot
        is provided--for example, snapshots matching
        ``curator-%Y%m%d%H%M%S``--then reverse alphabetical sorting will mean
        the oldest will remain in the list, because lower numbers in the dates
        mean older snapshots.

        By setting `reverse` to `False`, then ``snapshot3`` will be acted on
        before ``snapshot2``, which will be acted on before ``snapshot1``.

        `use_age` allows ordering snapshots by age. Age is determined by the
        snapshot creation date (as identified by ``start_time_in_millis``) by
        default, but you can also specify a `source` of ``name``. The ``name``
        `source` requires the timestring argument.

        :arg count: Filter snapshots beyond `count`.
        :arg reverse: The filtering direction. (default: `True`).
        :arg use_age: Sort snapshots by age.  ``source`` is required in this
            case.
        :arg source: Source of snapshot age. Can be one of ``name``, or
            ``creation_date``. Default: ``creation_date``
        :arg timestring: An strftime string to match the datestamp in a
            snapshot name. Only used if `source` ``name`` is selected.
        :arg exclude: If `exclude` is `True`, this filter will remove matching
            snapshots from `snapshots`. If `exclude` is `False`, then only
            matching snapshots will be kept in `snapshots`.
            Default is `True`
        """
        self.loggit.debug('Filtering snapshots by count')
        if not count:
            raise MissingArgument('No value for "count" provided')

        # Create a copy-by-value working list
        working_list = self.working_list()

        if use_age:
            self._calculate_ages(source=source, timestring=timestring)
            # Using default value of reverse=True in self._sort_by_age()
            sorted_snapshots = self._sort_by_age(working_list, reverse=reverse)
        else:
            # Default to sorting by snapshot name
            sorted_snapshots = sorted(working_list, reverse=reverse)

        idx = 1
        for snap in sorted_snapshots:
            msg = (
                '{0} is {1} of specified count of {2}.'.format(
                    snap, idx, count
                )
            )
            condition = True if idx <= count else False
            self.__excludify(condition, exclude, snap, msg)
            idx += 1

    def filter_period(
        self, source='name', range_from=None, range_to=None, timestring=None,
        unit=None, field=None, stats_result='min_value',
        week_starts_on='sunday', epoch=None, exclude=False,
    ):
        """
        Match `snapshots` with ages within a given period.

        :arg source: Source of snapshot age. Can be 'name', or 'creation_date'.
        :arg range_from: How many ``unit`` (s) in the past/future is the
            origin?
        :arg range_to: How many ``unit`` (s) in the past/future is the end
            point?
        :arg timestring: An strftime string to match the datestamp in a
            snapshot name. Only used for snapshot filtering by ``name``.
        :arg unit: One of ``hours``, ``days``, ``weeks``, ``months``, or
            ``years``.
        :arg week_starts_on: Either ``sunday`` or ``monday``. Default is
            ``sunday``
        :arg epoch: An epoch timestamp used to establish a point of reference
            for calculations. If not provided, the current time will be used.
        :arg exclude: If `exclude` is `True`, this filter will remove matching
            snapshots from `snapshots`. If `exclude` is `False`, then only
            matching snapshots will be kept in `snapshots`.
            Default is `False`
        """
        self.loggit.debug('Filtering snapshots by period')
        try:
            start, end = date_range(
                unit, range_from, range_to, epoch,
                week_starts_on=week_starts_on
            )
        except Exception as e:
            report_failure(e)
        self._calculate_ages(source=source, timestring=timestring)
        for snapshot in self.working_list():
            if not self.snapshot_info[snapshot][self.age_keyfield]:
                self.loggit.debug(
                    'Removing snapshot {0} for having no age'.format(snapshot))
                self.snapshots.remove(snapshot)
                continue
            age = fix_epoch(self.snapshot_info[snapshot][self.age_keyfield])
            msg = (
                'Snapshot "{0}" age ({1}), period start: "{2}", period '
                'end, ({3})'.format(
                    snapshot,
                    age,
                    start,
                    end
                )
            )
            # Because time adds to epoch, smaller numbers are actually older
            # timestamps.
            inrange = ((age >= start) and (age <= end))
            self.__excludify(inrange, exclude, snapshot, msg)

    def iterate_filters(self, config):
        """
        Iterate over the filters defined in `config` and execute them.

        :arg config: A dictionary of filters, as extracted from the YAML
            configuration file.

        .. note:: `config` should be a dictionary with the following form:
        .. code-block:: python

                { 'filters' : [
                        {
                            'filtertype': 'the_filter_type',
                            'key1' : 'value1',
                            ...
                            'keyN' : 'valueN'
                        }
                    ]
                }

        """
        # Make sure we actually _have_ filters to act on
        if not 'filters' in config or len(config['filters']) < 1:
            logger.info('No filters in config. 
Returning unaltered object.') return self.loggit.debug('All filters: {0}'.format(config['filters'])) for f in config['filters']: self.loggit.debug('Top of the loop: {0}'.format(self.snapshots)) self.loggit.debug('Un-parsed filter args: {0}'.format(f)) self.loggit.debug('Parsed filter args: {0}'.format( SchemaCheck( f, filters.structure(), 'filter', 'SnapshotList.iterate_filters' ).result() ) ) method = self.__map_method(f['filtertype']) # Remove key 'filtertype' from dictionary 'f' del f['filtertype'] # If it's a filtertype with arguments, update the defaults with the # provided settings. logger.debug('Filter args: {0}'.format(f)) logger.debug('Pre-instance: {0}'.format(self.snapshots)) method(**f) logger.debug('Post-instance: {0}'.format(self.snapshots)) curator-5.2.0/curator/utils.py000066400000000000000000002001111315226075300163650ustar00rootroot00000000000000from datetime import timedelta, datetime, date import elasticsearch import time import logging import yaml, os, re, sys from voluptuous import Schema from .exceptions import * from .defaults import settings from .validators import SchemaCheck, actions, filters, options from ._version import __version__ logger = logging.getLogger(__name__) def read_file(myfile): """ Read a file and return the resulting data. :arg myfile: A file to read. :rtype: str """ try: with open(myfile, 'r') as f: data = f.read() return data except IOError as e: raise FailedExecution( 'Unable to read file {0}'.format(myfile) ) def get_yaml(path): """ Read the file identified by `path` and import its YAML contents. :arg path: The path to a YAML configuration file. :rtype: dict """ # Set the stage here to parse single scalar value environment vars from # the YAML file being read single = re.compile( r'^\$\{(.*)\}$' ) yaml.add_implicit_resolver ( "!single", single ) def single_constructor(loader,node): value = loader.construct_scalar(node) proto = single.match(value).group(1) default = None if len(proto.split(':')) > 1: envvar, default = proto.split(':') else: envvar = proto return os.environ[envvar] if envvar in os.environ else default yaml.add_constructor('!single', single_constructor) raw = read_file(path) try: cfg = yaml.load(raw) except yaml.scanner.ScannerError as e: raise ConfigurationError( 'Unable to parse YAML file. Error: {0}'.format(e)) return cfg def test_client_options(config): """ Test whether a SSL/TLS files exist. Will raise an exception if the files cannot be read. :arg config: A client configuration file data dictionary :rtype: None """ if config['use_ssl']: # Test whether certificate is a valid file path if 'certificate' in config and config['certificate']: read_file(config['certificate']) # Test whether client_cert is a valid file path if 'client_cert' in config and config['client_cert']: read_file(config['client_cert']) # Test whether client_key is a valid file path if 'client_key' in config and config['client_key']: read_file(config['client_key']) def rollable_alias(client, alias): """ Ensure that `alias` is an alias, and points to an index that can use the _rollover API. 
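    A hypothetical illustration (names are made up): an alias ``logs-write``
    that points only at ``logs-000001`` is rollable, because the index name
    ends in a digit (or a hyphen followed by a digit)::

        client.indices.create(index='logs-000001')
        client.indices.put_alias(index='logs-000001', name='logs-write')
        rollable_alias(client, 'logs-write')  # True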
    :arg client: An :class:`elasticsearch.Elasticsearch` client object
    :arg alias: An Elasticsearch alias
    """
    try:
        response = client.indices.get_alias(name=alias)
    except elasticsearch.NotFoundError as e:
        logger.error('alias "{0}" not found.'.format(alias))
        return False
    # Response should be like:
    # {'there_should_be_only_one': {u'aliases': {'value of "alias" here': {}}}}
    # Where 'there_should_be_only_one' is a single index name that ends in a
    # number, and 'value of "alias" here' reflects the value of the passed
    # parameter.
    if len(response) > 1:
        logger.error(
            '"alias" must only reference one index: {0}'.format(response))
    # elif len(response) < 1:
    #     logger.error(
    #         '"alias" must reference at least one index: {0}'.format(response))
    else:
        index = list(response.keys())[0]
        rollable = False
        # In order for `rollable` to be True, the last 2 digits of the index
        # must be digits, or a hyphen followed by a digit.
        # NOTE: This is not a guarantee that the rest of the index name is
        # necessarily correctly formatted.
        if index[-2:][1].isdigit():
            if index[-2:][0].isdigit():
                rollable = True
            elif index[-2:][0] == '-':
                rollable = True
        return rollable

def verify_client_object(test):
    """
    Test if `test` is a proper :class:`elasticsearch.Elasticsearch` client
    object and raise an exception if it is not.

    :arg test: The variable or object to test
    :rtype: None
    """
    # Ignore mock type for testing
    if str(type(test)) == "<class 'mock.Mock'>" or \
        str(type(test)) == "<class 'mock.mock.Mock'>":
        pass
    elif not isinstance(test, elasticsearch.Elasticsearch):
        raise TypeError(
            'Not a client object. Type: {0}'.format(type(test))
        )

def verify_index_list(test):
    """
    Test if `test` is a proper :class:`curator.indexlist.IndexList` object and
    raise an exception if it is not.

    :arg test: The variable or object to test
    :rtype: None
    """
    # It breaks if this import isn't local to this function
    from .indexlist import IndexList
    if not isinstance(test, IndexList):
        raise TypeError(
            'Not an IndexList object. Type: {0}.'.format(type(test))
        )

def verify_snapshot_list(test):
    """
    Test if `test` is a proper :class:`curator.snapshotlist.SnapshotList`
    object and raise an exception if it is not.

    :arg test: The variable or object to test
    :rtype: None
    """
    # It breaks if this import isn't local to this function
    from .snapshotlist import SnapshotList
    if not isinstance(test, SnapshotList):
        raise TypeError(
            'Not a SnapshotList object. Type: {0}.'.format(type(test))
        )

def report_failure(exception):
    """
    Raise a `FailedExecution` exception and include the original error message.

    :arg exception: The upstream exception.
    :rtype: None
    """
    raise FailedExecution(
        'Exception encountered. Rerun with loglevel DEBUG and/or check '
        'Elasticsearch logs for more information. '
        'Exception: {0}'.format(exception)
    )

def get_date_regex(timestring):
    """
    Return a regex string based on a provided strftime timestring.

    :arg timestring: An strftime pattern
    :rtype: str
    """
    prev = ''; curr = ''; regex = ''
    for s in range(0, len(timestring)):
        curr = timestring[s]
        if curr == '%':
            pass
        elif curr in settings.date_regex() and prev == '%':
            regex += '\d{' + settings.date_regex()[curr] + '}'
        elif curr in ['.', '-']:
            regex += "\\" + curr
        else:
            regex += curr
        prev = curr
    logger.debug("regex = {0}".format(regex))
    return regex

def get_datetime(index_timestamp, timestring):
    """
    Return the datetime extracted from the index name, which is the index
    creation time.
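    A quick illustration (hypothetical values)::

        get_datetime('2017.07.31', '%Y.%m.%d')
        # -> datetime.datetime(2017, 7, 31, 0, 0)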
    :arg index_timestamp: The timestamp extracted from an index name
    :arg timestring: An strftime pattern
    :rtype: :py:class:`datetime.datetime`
    """
    # Compensate for week of year by appending '%w' to the timestring
    # and '1' (Monday) to index_timestamp
    iso_week_number = False
    if '%W' in timestring or '%U' in timestring or '%V' in timestring:
        timestring += '%w'
        index_timestamp += '1'
        if '%V' in timestring and '%G' in timestring:
            iso_week_number = True
            # Fake as so we read Greg format instead. We will process it later
            timestring = timestring.replace("%G", "%Y").replace("%V", "%W")
    elif '%m' in timestring:
        if not '%d' in timestring:
            timestring += '%d'
            index_timestamp += '1'
    date = datetime.strptime(index_timestamp, timestring)
    # Handle ISO time string
    if iso_week_number:
        date = _handle_iso_week_number(date, timestring, index_timestamp)
    return date

def fix_epoch(epoch):
    """
    Fix value of `epoch` to be epoch, which should be 10 or fewer digits long.

    :arg epoch: An epoch timestamp, in seconds, or with trailing milliseconds,
        microseconds, or even nanoseconds.
    :rtype: int
    """
    # No decimals allowed
    epoch = int(epoch)
    # If we're still using this script past January, 2038, we have bigger
    # problems than my hacky math here...
    if len(str(epoch)) <= 10:
        return epoch
    elif len(str(epoch)) == 13:
        return int(epoch/1000)
    elif len(str(epoch)) > 10 and len(str(epoch)) < 13:
        raise ValueError(
            'Unusually formatted epoch timestamp. '
            'Should be 10, 13, or more digits'
        )
    else:
        orders_of_magnitude = len(str(epoch)) - 10
        powers_of_ten = 10**orders_of_magnitude
        epoch = int(epoch/powers_of_ten)
    return epoch

def _handle_iso_week_number(date, timestring, index_timestamp):
    date_iso = date.isocalendar()
    iso_week_str = "{Y:04d}{W:02d}".format(Y=date_iso[0], W=date_iso[1])
    greg_week_str = datetime.strftime(date, "%Y%W")
    # Edge case 1: ISO week number is bigger than Greg week number.
    #              Ex: year 2014, all ISO week numbers were 1 more than in Greg.
    if (iso_week_str > greg_week_str or
        # Edge case 2: 2010-01-01 in ISO: 2009.W53, in Greg: 2010.W00
        # For Greg converting 2009.W53 gives 2010-01-04, converting back
        # to same timestring gives: 2010.W01.
        datetime.strftime(date, timestring) != index_timestamp):
        # Remove one week in this case
        date = date - timedelta(days=7)
    return date

def datetime_to_epoch(mydate):
    # I would have used `total_seconds`, but apparently that's new
    # to Python 2.7+, and due to so many people still using
    # RHEL/CentOS 6, I need this to support Python 2.6.
    tdelta = (mydate - datetime(1970,1,1))
    return tdelta.seconds + tdelta.days * 24 * 3600

class TimestringSearch(object):
    """
    An object to allow repetitive search against a string, `searchme`, without
    having to repeatedly recreate the regex.

    :arg timestring: An strftime pattern
    """
    def __init__(self, timestring):
        regex = r'(?P<date>{0})'.format(get_date_regex(timestring))
        self.pattern = re.compile(regex)
        self.timestring = timestring

    def get_epoch(self, searchme):
        """
        Return the epoch timestamp extracted from the `timestring` appearing in
        `searchme`.

        :arg searchme: A string to be searched for a date pattern that matches
            `timestring`
        :rtype: int
        """
        match = self.pattern.search(searchme)
        if match:
            if match.group("date"):
                timestamp = match.group("date")
                return datetime_to_epoch(
                    get_datetime(timestamp, self.timestring)
                )
                # # I would have used `total_seconds`, but apparently that's
                # # new to Python 2.7+, and due to so many people still using
                # # RHEL/CentOS 6, I need this to support Python 2.6.
                # tdelta = (
                #     get_datetime(timestamp, self.timestring) -
                #     datetime(1970,1,1)
                # )
                # return tdelta.seconds + tdelta.days * 24 * 3600

def get_point_of_reference(unit, count, epoch=None):
    """
    Get a point-of-reference timestamp in epoch + milliseconds by deriving
    from a `unit` and a `count`, and an optional reference timestamp, `epoch`

    :arg unit: One of ``seconds``, ``minutes``, ``hours``, ``days``, ``weeks``,
        ``months``, or ``years``.
    :arg count: The number of ``unit`` (s). ``count`` * ``unit`` will be
        calculated out to the relative number of seconds.
    :arg epoch: An epoch timestamp used in conjunction with ``unit`` and
        ``count`` to establish a point of reference for calculations.
    :rtype: int
    """
    if unit == 'seconds':
        multiplier = 1
    elif unit == 'minutes':
        multiplier = 60
    elif unit == 'hours':
        multiplier = 3600
    elif unit == 'days':
        multiplier = 3600*24
    elif unit == 'weeks':
        multiplier = 3600*24*7
    elif unit == 'months':
        multiplier = 3600*24*30
    elif unit == 'years':
        multiplier = 3600*24*365
    else:
        raise ValueError('Invalid unit: {0}.'.format(unit))
    # Use this moment as a reference point, if one is not provided.
    if not epoch:
        epoch = time.time()
    epoch = fix_epoch(epoch)
    return epoch - multiplier * count

def get_unit_count_from_name(index_name, pattern):
    if pattern is None:
        return None
    match = pattern.search(index_name)
    if match:
        try:
            return int(match.group(1))
        except Exception:
            return None
    else:
        return None

def date_range(unit, range_from, range_to, epoch=None, week_starts_on='sunday'):
    """
    Get the epoch start time and end time of a range of ``unit``s, reckoning
    the start of the week (if that's the selected unit) based on
    ``week_starts_on``, which can be either ``sunday`` or ``monday``.

    :arg unit: One of ``hours``, ``days``, ``weeks``, ``months``, or ``years``.
    :arg range_from: How many ``unit`` (s) in the past/future is the origin?
    :arg range_to: How many ``unit`` (s) in the past/future is the end point?
    :arg epoch: An epoch timestamp used to establish a point of reference for
        calculations.
    :arg week_starts_on: Either ``sunday`` or ``monday``. Default is ``sunday``
    :rtype: tuple
    """
    acceptable_units = ['hours', 'days', 'weeks', 'months', 'years']
    if unit not in acceptable_units:
        raise ConfigurationError(
            '"unit" must be one of: {0}'.format(acceptable_units))
    if not range_to >= range_from:
        raise ConfigurationError(
            '"range_to" must be greater than or equal to "range_from"')
    if not epoch:
        epoch = time.time()
    epoch = fix_epoch(epoch)
    rawPoR = datetime.utcfromtimestamp(epoch)
    logger.debug('Raw point of Reference = {0}'.format(rawPoR))
    # Reverse the polarity, because -1 as last week makes sense when read by
    # humans, but datetime timedelta math makes -1 in the future.
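    # For example (hypothetical values): with unit='days', range_from=-1 and
    # range_to=-1 both point at "yesterday", so origin becomes 1 and the start
    # of the range is reckoned one full day before the point of reference.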
origin = range_from * -1 # These if statements help get the start date or start_delta if unit == 'hours': PoR = datetime(rawPoR.year, rawPoR.month, rawPoR.day, rawPoR.hour, 0, 0) start_delta = timedelta(hours=origin) if unit == 'days': PoR = datetime(rawPoR.year, rawPoR.month, rawPoR.day, 0, 0, 0) start_delta = timedelta(days=origin) if unit == 'weeks': PoR = datetime(rawPoR.year, rawPoR.month, rawPoR.day, 0, 0, 0) sunday = False if week_starts_on.lower() == 'sunday': sunday = True weekday = PoR.weekday() # Compensate for ISO week starting on Monday by default if sunday: weekday += 1 logger.debug('Weekday = {0}'.format(weekday)) start_delta = timedelta(days=weekday, weeks=origin) if unit == 'months': PoR = datetime(rawPoR.year, rawPoR.month, 1, 0, 0, 0) year = rawPoR.year month = rawPoR.month if origin > 0: for m in range(0, origin): if month == 1: year -= 1 month = 12 else: month -= 1 else: for m in range(origin, 0): if month == 12: year += 1 month = 1 else: month += 1 start_date = datetime(year, month, 1, 0, 0, 0) if unit == 'years': PoR = datetime(rawPoR.year, 1, 1, 0, 0, 0) start_date = datetime(rawPoR.year - origin, 1, 1, 0, 0, 0) if unit not in ['months','years']: start_date = PoR - start_delta # By this point, we know our start date and can convert it to epoch time start_epoch = datetime_to_epoch(start_date) logger.debug('Start ISO8601 = {0}'.format( datetime.utcfromtimestamp(start_epoch).isoformat())) # This is the number of units we need to consider. count = (range_to - range_from) + 1 # We have to iterate to one more month, and then subtract a second to get # the last day of the correct month if unit == 'months': month = start_date.month year = start_date.year for m in range(0, count): if month == 12: year += 1 month = 1 else: month += 1 end_date = datetime(year, month, 1, 0, 0, 0) end_epoch = datetime_to_epoch(end_date) - 1 # Similarly, with years, we need to get the last moment of the year elif unit == 'years': end_date = datetime((rawPoR.year - origin) + count, 1, 1, 0, 0, 0) end_epoch = datetime_to_epoch(end_date) - 1 # It's not months or years, which have inconsistent reckoning... else: # This lets us use an existing method to simply add unit * count seconds # to get hours, days, or weeks, as they don't change end_epoch = get_point_of_reference( unit, count * -1, epoch=start_epoch) -1 logger.debug('End ISO8601 = {0}'.format( datetime.utcfromtimestamp(end_epoch).isoformat())) return (start_epoch, end_epoch) def byte_size(num, suffix='B'): """ Return a formatted string indicating the size in bytes, with the proper unit, e.g. KB, MB, GB, TB, etc. :arg num: The number of byte :arg suffix: An arbitrary suffix, like `Bytes` :rtype: float """ for unit in ['','K','M','G','T','P','E','Z']: if abs(num) < 1024.0: return "%3.1f%s%s" % (num, unit, suffix) num /= 1024.0 return "%.1f%s%s" % (num, 'Y', suffix) def ensure_list(indices): """ Return a list, even if indices is a single value :arg indices: A list of indices to act upon :rtype: list """ if not isinstance(indices, list): # in case of a single value passed indices = [indices] return indices def to_csv(indices): """ Return a csv string from a list of indices, or a single value if only one value is present :arg indices: A list of indices to act on, or a single value, which could be in the format of a csv string already. 
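    A few illustrative calls::

        to_csv(['index-2', 'index-1'])  # -> 'index-1,index-2'
        to_csv('index-1')               # -> 'index-1'
        to_csv([])                      # -> None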
:rtype: str """ indices = ensure_list(indices) # in case of a single value passed if indices: return ','.join(sorted(indices)) else: return None def check_csv(value): """ Some of the curator methods should not operate against multiple indices at once. This method can be used to check if a list or csv has been sent. :arg value: The value to test, if list or csv string :rtype: bool """ if isinstance(value, list): return True string = False # Python3 hack because it doesn't recognize unicode as a type anymore if sys.version_info < (3, 0): if isinstance(value, unicode): value = str(value) if isinstance(value, str): if len(value.split(',')) > 1: # It's a csv string. return True else: # There's only one value here, so it's not a csv string return False else: raise TypeError( 'Passed value: {0} is not a list or a string ' 'but is of type {1}'.format(value, type(value)) ) def chunk_index_list(indices): """ This utility chunks very large index lists into 3KB chunks It measures the size as a csv string, then converts back into a list for the return value. :arg indices: A list of indices to act on. :rtype: list """ chunks = [] chunk = "" for index in indices: if len(chunk) < 3072: if not chunk: chunk = index else: chunk += "," + index else: chunks.append(chunk.split(',')) chunk = index chunks.append(chunk.split(',')) return chunks def get_indices(client): """ Get the current list of indices from the cluster. :arg client: An :class:`elasticsearch.Elasticsearch` client object :rtype: list """ try: indices = list( client.indices.get_settings( index='_all', params={'expand_wildcards': 'open,closed'}) ) version_number = get_version(client) logger.debug( 'Detected Elasticsearch version ' '{0}'.format(".".join(map(str,version_number))) ) logger.debug("All indices: {0}".format(indices)) return indices except Exception as e: raise FailedExecution('Failed to get indices. Error: {0}'.format(e)) def get_version(client): """ Return the ES version number as a tuple. Omits trailing tags like -dev, or Beta :arg client: An :class:`elasticsearch.Elasticsearch` client object :rtype: tuple """ version = client.info()['version']['number'] version = version.split('-')[0] if len(version.split('.')) > 3: version = version.split('.')[:-1] else: version = version.split('.') return tuple(map(int, version)) def is_master_node(client): """ Return `True` if the connected client node is the elected master node in the Elasticsearch cluster, otherwise return `False`. :arg client: An :class:`elasticsearch.Elasticsearch` client object :rtype: bool """ my_node_id = list(client.nodes.info('_local')['nodes'])[0] master_node_id = client.cluster.state(metric='master_node')['master_node'] return my_node_id == master_node_id def check_version(client): """ Verify version is within acceptable range. Raise an exception if it is not. 
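    A minimal sketch (assumes a reachable cluster on localhost; the supported
    range comes from ``settings.version_min()`` and ``settings.version_max()``)::

        client = elasticsearch.Elasticsearch()
        check_version(client)  # returns None, or raises CuratorException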
:arg client: An :class:`elasticsearch.Elasticsearch` client object :rtype: None """ version_number = get_version(client) logger.debug( 'Detected Elasticsearch version ' '{0}'.format(".".join(map(str,version_number))) ) if version_number >= settings.version_max() \ or version_number < settings.version_min(): logger.error( 'Elasticsearch version {0} incompatible ' 'with this version of Curator ' '({1})'.format(".".join(map(str,version_number)), __version__) ) raise CuratorException( 'Elasticsearch version {0} incompatible ' 'with this version of Curator ' '({1})'.format(".".join(map(str,version_number)), __version__) ) def check_master(client, master_only=False): """ Check if connected client is the elected master node of the cluster. If not, cleanly exit with a log message. :arg client: An :class:`elasticsearch.Elasticsearch` client object :rtype: None """ if master_only and not is_master_node(client): logger.info( 'Master-only flag detected. ' 'Connected to non-master node. Aborting.' ) sys.exit(0) def get_client(**kwargs): """ NOTE: AWS IAM parameters `aws_key`, `aws_secret_key`, and `aws_region` are provided for future compatibility, should AWS ES support the ``/_cluster/state/metadata`` endpoint. So long as this endpoint does not function in AWS ES, the client will not be able to use :class:`curator.indexlist.IndexList`, which is the backbone of Curator 4 Return an :class:`elasticsearch.Elasticsearch` client object using the provided parameters. Any of the keyword arguments the :class:`elasticsearch.Elasticsearch` client object can receive are valid, such as: :arg hosts: A list of one or more Elasticsearch client hostnames or IP addresses to connect to. Can send a single host. :type hosts: list :arg port: The Elasticsearch client port to connect to. :type port: int :arg url_prefix: `Optional` url prefix, if needed to reach the Elasticsearch API (i.e., it's not at the root level) :type url_prefix: str :arg use_ssl: Whether to connect to the client via SSL/TLS :type use_ssl: bool :arg certificate: Path to SSL/TLS certificate :arg client_cert: Path to SSL/TLS client certificate (public key) :arg client_key: Path to SSL/TLS private key :arg aws_key: AWS IAM Access Key (Only used if the :mod:`requests-aws4auth` python module is installed) :arg aws_secret_key: AWS IAM Secret Access Key (Only used if the :mod:`requests-aws4auth` python module is installed) :arg aws_region: AWS Region (Only used if the :mod:`requests-aws4auth` python module is installed) :arg ssl_no_validate: If `True`, do not validate the certificate chain. This is an insecure option and you will see warnings in the log output. :type ssl_no_validate: bool :arg http_auth: Authentication credentials in `user:pass` format. :type http_auth: str :arg timeout: Number of seconds before the client will timeout. :type timeout: int :arg master_only: If `True`, the client will `only` connect if the endpoint is the elected master node of the cluster. **This option does not work if `hosts` has more than one value.** It will raise an Exception in that case. :type master_only: bool :arg skip_version_test: If `True`, skip the version check as part of the client connection. :rtype: :class:`elasticsearch.Elasticsearch` """ if 'url_prefix' in kwargs: if ( type(kwargs['url_prefix']) == type(None) or kwargs['url_prefix'] == "None" ): kwargs['url_prefix'] = '' if 'host' in kwargs and 'hosts' in kwargs: raise ConfigurationError( 'Both "host" and "hosts" are defined. 
Pick only one.')
    elif 'host' in kwargs and not 'hosts' in kwargs:
        kwargs['hosts'] = kwargs['host']
        del kwargs['host']
    kwargs['hosts'] = '127.0.0.1' if not 'hosts' in kwargs else kwargs['hosts']
    kwargs['master_only'] = False if not 'master_only' in kwargs \
        else kwargs['master_only']
    if 'skip_version_test' in kwargs:
        skip_version_test = kwargs.pop('skip_version_test')
    else:
        skip_version_test = False
    kwargs['use_ssl'] = False if not 'use_ssl' in kwargs else kwargs['use_ssl']
    kwargs['ssl_no_validate'] = False if not 'ssl_no_validate' in kwargs \
        else kwargs['ssl_no_validate']
    kwargs['certificate'] = False if not 'certificate' in kwargs \
        else kwargs['certificate']
    kwargs['client_cert'] = False if not 'client_cert' in kwargs \
        else kwargs['client_cert']
    kwargs['client_key'] = False if not 'client_key' in kwargs \
        else kwargs['client_key']
    kwargs['hosts'] = ensure_list(kwargs['hosts'])
    logger.debug("kwargs = {0}".format(kwargs))
    master_only = kwargs.pop('master_only')
    if kwargs['use_ssl']:
        if kwargs['ssl_no_validate']:
            kwargs['verify_certs'] = False # Not needed, but explicitly defined
        else:
            logger.debug('Attempting to verify SSL certificate.')
            # If user provides a certificate:
            if kwargs['certificate']:
                kwargs['verify_certs'] = True
                kwargs['ca_certs'] = kwargs['certificate']
            else:
                # Try to use bundled certifi certificates
                if getattr(sys, 'frozen', False):
                    # The application is frozen (compiled)
                    datadir = os.path.dirname(sys.executable)
                    kwargs['verify_certs'] = True
                    kwargs['ca_certs'] = os.path.join(datadir, 'cacert.pem')
                else:
                    # Use certifi certificates via certifi.where():
                    import certifi
                    kwargs['verify_certs'] = True
                    kwargs['ca_certs'] = certifi.where()

    try:
        from requests_aws4auth import AWS4Auth
        kwargs['aws_key'] = False if not 'aws_key' in kwargs \
            else kwargs['aws_key']
        kwargs['aws_secret_key'] = False if not 'aws_secret_key' in kwargs \
            else kwargs['aws_secret_key']
        kwargs['aws_region'] = False if not 'aws_region' in kwargs \
            else kwargs['aws_region']
        if kwargs['aws_key'] or kwargs['aws_secret_key'] or kwargs['aws_region']:
            if not (kwargs['aws_key'] and kwargs['aws_secret_key']
                    and kwargs['aws_region']):
                raise MissingArgument(
                    'Missing one or more of "aws_key", "aws_secret_key", '
                    'or "aws_region".'
                )
            # Override these kwargs
            kwargs['use_ssl'] = True
            kwargs['verify_certs'] = True
            kwargs['connection_class'] = elasticsearch.RequestsHttpConnection
            kwargs['http_auth'] = (
                AWS4Auth(
                    kwargs['aws_key'], kwargs['aws_secret_key'],
                    kwargs['aws_region'], 'es')
            )
        else:
            logger.debug('"requests_aws4auth" module present, but not used.')
    except ImportError:
        logger.debug('Not using "requests_aws4auth" python module to connect.')

    if master_only:
        if len(kwargs['hosts']) > 1:
            raise ConfigurationError(
                '"master_only" cannot be True if more than one host is '
                'specified. Hosts = {0}'.format(kwargs['hosts'])
            )
    try:
        client = elasticsearch.Elasticsearch(**kwargs)
        if skip_version_test:
            logger.warn(
                'Skipping Elasticsearch version verification. This is '
                'acceptable for remote reindex operations.'
            )
        else:
            # Verify the version is acceptable.
            check_version(client)
        # Verify "master_only" status, if applicable
        check_master(client, master_only=master_only)
        return client
    except Exception as e:
        raise elasticsearch.ElasticsearchException(
            'Unable to create client connection to Elasticsearch. '
            'Error: {0}'.format(e)
        )

def show_dry_run(ilo, action, **kwargs):
    """
    Log dry run output with the action which would have been executed.

    :arg ilo: A :class:`curator.indexlist.IndexList`
    :arg action: The `action` to be performed.
:arg kwargs: Any other args to show in the log output """ logger.info('DRY-RUN MODE. No changes will be made.') logger.info( '(CLOSED) indices may be shown that may not be acted on by ' 'action "{0}".'.format(action) ) indices = sorted(ilo.indices) for idx in indices: index_closed = ilo.index_info[idx]['state'] == 'close' logger.info( 'DRY-RUN: {0}: {1}{2} with arguments: {3}'.format( action, idx, ' (CLOSED)' if index_closed else '', kwargs ) ) ### SNAPSHOT STUFF ### def get_repository(client, repository=''): """ Return configuration information for the indicated repository. :arg client: An :class:`elasticsearch.Elasticsearch` client object :arg repository: The Elasticsearch snapshot repository to use :rtype: dict """ try: return client.snapshot.get_repository(repository=repository) except (elasticsearch.TransportError, elasticsearch.NotFoundError) as e: raise CuratorException( 'Unable to get repository {0}. Response Code: {1}. Error: {2}.' 'Check Elasticsearch logs for more information.'.format( repository, e.status_code, e.error ) ) def get_snapshot(client, repository=None, snapshot=''): """ Return information about a snapshot (or a comma-separated list of snapshots) If no snapshot specified, it will return all snapshots. If none exist, an empty dictionary will be returned. :arg client: An :class:`elasticsearch.Elasticsearch` client object :arg repository: The Elasticsearch snapshot repository to use :arg snapshot: The snapshot name, or a comma-separated list of snapshots :rtype: dict """ if not repository: raise MissingArgument('No value for "repository" provided') snapname = '_all' if snapshot == '' else snapshot try: return client.snapshot.get(repository=repository, snapshot=snapshot) except (elasticsearch.TransportError, elasticsearch.NotFoundError) as e: raise FailedExecution( 'Unable to get information about snapshot {0} from repository: ' '{1}. Error: {2}'.format(snapname, repository, e) ) def get_snapshot_data(client, repository=None): """ Get ``_all`` snapshots from repository and return a list. :arg client: An :class:`elasticsearch.Elasticsearch` client object :arg repository: The Elasticsearch snapshot repository to use :rtype: list """ if not repository: raise MissingArgument('No value for "repository" provided') try: return client.snapshot.get( repository=repository, snapshot="_all")['snapshots'] except (elasticsearch.TransportError, elasticsearch.NotFoundError) as e: raise FailedExecution( 'Unable to get snapshot information from repository: {0}. ' 'Error: {1}'.format(repository, e) ) def snapshot_in_progress(client, repository=None, snapshot=None): """ Determine whether the provided snapshot in `repository` is ``IN_PROGRESS``. If no value is provided for `snapshot`, then check all of them. Return `snapshot` if it is found to be in progress, or `False` :arg client: An :class:`elasticsearch.Elasticsearch` client object :arg repository: The Elasticsearch snapshot repository to use :arg snapshot: The snapshot name """ allsnaps = get_snapshot_data(client, repository=repository) inprogress = ( [snap['snapshot'] for snap in allsnaps if 'state' in snap.keys() \ and snap['state'] == 'IN_PROGRESS'] ) if snapshot: return snapshot if snapshot in inprogress else False else: if len(inprogress) == 0: return False elif len(inprogress) == 1: return inprogress[0] else: # This should not be possible raise CuratorException( 'More than 1 snapshot in progress: {0}'.format(inprogress) ) def find_snapshot_tasks(client): """ Check if there is snapshot activity in the Tasks API. 
    Return `True` if activity is found, or `False`

    :arg client: An :class:`elasticsearch.Elasticsearch` client object
    :rtype: bool
    """
    retval = False
    tasklist = client.tasks.get()
    for node in tasklist['nodes']:
        for task in tasklist['nodes'][node]['tasks']:
            activity = tasklist['nodes'][node]['tasks'][task]['action']
            if 'snapshot' in activity:
                logger.debug('Snapshot activity detected: {0}'.format(activity))
                retval = True
    return retval

def safe_to_snap(client, repository=None, retry_interval=120, retry_count=3):
    """
    Ensure there are no snapshots in progress. Pause and retry accordingly.

    :arg client: An :class:`elasticsearch.Elasticsearch` client object
    :arg repository: The Elasticsearch snapshot repository to use
    :arg retry_interval: Number of seconds to delay between retries. Default:
        120 (seconds)
    :arg retry_count: Number of attempts to make. Default: 3
    :rtype: bool
    """
    if not repository:
        raise MissingArgument('No value for "repository" provided')
    for count in range(1, retry_count+1):
        in_progress = snapshot_in_progress(
            client, repository=repository
        )
        ongoing_task = find_snapshot_tasks(client)
        if in_progress or ongoing_task:
            if in_progress:
                logger.info(
                    'Snapshot already in progress: {0}'.format(in_progress))
            elif ongoing_task:
                logger.info('Snapshot activity detected in Tasks API')
            logger.info(
                'Pausing {0} seconds before retrying...'.format(retry_interval))
            time.sleep(retry_interval)
            logger.info('Retry {0} of {1}'.format(count, retry_count))
        else:
            return True
    return False

def create_snapshot_body(indices, ignore_unavailable=False,
                         include_global_state=True, partial=False):
    """
    Create the request body for creating a snapshot from the provided
    arguments.

    :arg indices: A single index, or list of indices to snapshot.
    :arg ignore_unavailable: Ignore unavailable shards/indices. (default:
        `False`)
    :type ignore_unavailable: bool
    :arg include_global_state: Store cluster global state with snapshot.
        (default: `True`)
    :type include_global_state: bool
    :arg partial: Do not fail if primary shard is unavailable. (default:
        `False`)
    :type partial: bool
    :rtype: dict
    """
    if not indices:
        logger.error('No indices provided.')
        return False
    body = {
        "ignore_unavailable": ignore_unavailable,
        "include_global_state": include_global_state,
        "partial": partial,
    }
    if indices == '_all':
        body["indices"] = indices
    else:
        body["indices"] = to_csv(indices)
    return body

def create_repo_body(repo_type=None,
                     compress=True, chunk_size=None,
                     max_restore_bytes_per_sec=None,
                     max_snapshot_bytes_per_sec=None,
                     location=None,
                     bucket=None, region=None, base_path=None,
                     access_key=None, secret_key=None, **kwargs):
    """
    Build the 'body' portion for use in creating a repository.

    :arg repo_type: The type of repository (presently only `fs` and `s3`)
    :arg compress: Turn on compression of the snapshot files. Compression is
        applied only to metadata files (index mapping and settings). Data files
        are not compressed. (Default: `True`)
    :arg chunk_size: The chunk size can be specified in bytes or by using size
        value notation, i.e. 1g, 10m, 5k. Defaults to `null` (unlimited chunk
        size).
    :arg max_restore_bytes_per_sec: Throttles per node restore rate. Defaults
        to ``20mb`` per second.
    :arg max_snapshot_bytes_per_sec: Throttles per node snapshot rate. Defaults
        to ``20mb`` per second.
    :arg location: Location of the snapshots. Required.
    :arg bucket: `S3 only.` The name of the bucket to be used for snapshots.
        Required.
    :arg region: `S3 only.` The region where bucket is located.
Defaults to `US Standard` :arg base_path: `S3 only.` Specifies the path within bucket to repository data. Defaults to value of ``repositories.s3.base_path`` or to root directory if not set. :arg access_key: `S3 only.` The access key to use for authentication. Defaults to value of ``cloud.aws.access_key``. :arg secret_key: `S3 only.` The secret key to use for authentication. Defaults to value of ``cloud.aws.secret_key``. :returns: A dictionary suitable for creating a repository from the provided arguments. :rtype: dict """ # This shouldn't happen, but just in case... if not repo_type: raise MissingArgument('Missing required parameter --repo_type') argdict = locals() body = {} body['type'] = argdict['repo_type'] body['settings'] = {} settingz = [] # Differentiate from module settings maybes = [ 'compress', 'chunk_size', 'max_restore_bytes_per_sec', 'max_snapshot_bytes_per_sec' ] s3 = ['bucket', 'region', 'base_path', 'access_key', 'secret_key'] settingz += [i for i in maybes if argdict[i]] # Type 'fs' if argdict['repo_type'] == 'fs': settingz.append('location') # Type 's3' if argdict['repo_type'] == 's3': settingz += [i for i in s3 if argdict[i]] for k in settingz: body['settings'][k] = argdict[k] return body def create_repository(client, **kwargs): """ Create repository with repository and body settings :arg client: An :class:`elasticsearch.Elasticsearch` client object :arg repository: The Elasticsearch snapshot repository to use :arg repo_type: The type of repository (presently only `fs` and `s3`) :arg compress: Turn on compression of the snapshot files. Compression is applied only to metadata files (index mapping and settings). Data files are not compressed. (Default: `True`) :arg chunk_size: The chunk size can be specified in bytes or by using size value notation, i.e. 1g, 10m, 5k. Defaults to `null` (unlimited chunk size). :arg max_restore_bytes_per_sec: Throttles per node restore rate. Defaults to ``20mb`` per second. :arg max_snapshot_bytes_per_sec: Throttles per node snapshot rate. Defaults to ``20mb`` per second. :arg location: Location of the snapshots. Required. :arg bucket: `S3 only.` The name of the bucket to be used for snapshots. Required. :arg region: `S3 only.` The region where bucket is located. Defaults to `US Standard` :arg base_path: `S3 only.` Specifies the path within bucket to repository data. Defaults to value of ``repositories.s3.base_path`` or to root directory if not set. :arg access_key: `S3 only.` The access key to use for authentication. Defaults to value of ``cloud.aws.access_key``. :arg secret_key: `S3 only.` The secret key to use for authentication. Defaults to value of ``cloud.aws.secret_key``. :returns: A boolean value indicating success or failure. :rtype: bool """ if not 'repository' in kwargs: raise MissingArgument('Missing required parameter "repository"') else: repository = kwargs['repository'] try: body = create_repo_body(**kwargs) logger.debug( 'Checking if repository {0} already exists...'.format(repository) ) result = repository_exists(client, repository=repository) logger.debug("Result = {0}".format(result)) if not result: logger.debug( 'Repository {0} not in Elasticsearch. Continuing...'.format( repository ) ) client.snapshot.create_repository(repository=repository, body=body) else: raise FailedExecution( 'Unable to create repository {0}. A repository with that name ' 'already exists.'.format(repository) ) except elasticsearch.TransportError as e: raise FailedExecution( """ Unable to create repository {0}. Response Code: {1}. Error: {2}. 
Check curator and elasticsearch logs for more information. """.format( repository, e.status_code, e.error ) ) logger.debug("Repository {0} creation initiated...".format(repository)) return True def repository_exists(client, repository=None): """ Verify the existence of a repository :arg client: An :class:`elasticsearch.Elasticsearch` client object :arg repository: The Elasticsearch snapshot repository to use :rtype: bool """ if not repository: raise MissingArgument('No value for "repository" provided') try: test_result = get_repository(client, repository) if repository in test_result: logger.debug("Repository {0} exists.".format(repository)) return True else: logger.debug("Repository {0} not found...".format(repository)) return False except Exception as e: logger.debug( 'Unable to find repository "{0}": Error: ' '{1}'.format(repository, e) ) return False def test_repo_fs(client, repository=None): """ Test whether all nodes have write access to the repository :arg client: An :class:`elasticsearch.Elasticsearch` client object :arg repository: The Elasticsearch snapshot repository to use """ try: nodes = client.snapshot.verify_repository( repository=repository)['nodes'] logger.debug('All nodes can write to the repository') logger.debug( 'Nodes with verified repository access: {0}'.format(nodes)) except Exception as e: try: if e.status_code == 404: msg = ( '--- Repository "{0}" not found. Error: ' '{1}, {2}'.format(repository, e.status_code, e.error) ) else: msg = ( '--- Got a {0} response from Elasticsearch. ' 'Error message: {1}'.format(e.status_code, e.error) ) except AttributeError: msg = ('--- Error message: {0}'.format(e)) raise ActionError( 'Failed to verify all nodes have repository access: ' '{0}'.format(msg) ) def snapshot_running(client): """ Return `True` if a snapshot is in progress, and `False` if not :arg client: An :class:`elasticsearch.Elasticsearch` client object :rtype: bool """ try: status = client.snapshot.status()['snapshots'] except Exception as e: report_failure(e) # We will only accept a positively identified False. Anything else is # suspect. return False if status == [] else True def parse_date_pattern(name): """ Scan and parse `name` for :py:func:`time.strftime` strings, replacing them with the associated value when found, but otherwise returning lowercase values, as uppercase snapshot names are not allowed. It will detect if the first character is a `<`, which would indicate `name` is going to be using Elasticsearch date math syntax, and skip accordingly. 
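A sketch of both behaviors, assuming a hypothetical run date of 2017-09-01 (UTC)::

    parse_date_pattern('curator-%Y.%m.%d')   # -> 'curator-2017.09.01'
    parse_date_pattern('<logstash-{now/d}>') # -> returned unchanged, for
                                             #    Elasticsearch to resolve
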
The :py:func:`time.strftime` identifiers that Curator currently recognizes as acceptable include: * ``Y``: A 4 digit year * ``y``: A 2 digit year * ``m``: The 2 digit month * ``W``: The 2 digit week of the year * ``d``: The 2 digit day of the month * ``H``: The 2 digit hour of the day, in 24 hour notation * ``M``: The 2 digit minute of the hour * ``S``: The 2 digit second of the minute * ``j``: The 3 digit day of the year :arg name: A name, which can contain :py:func:`time.strftime` strings """ prev, curr, rendered = '', '', '' for s in range(0, len(name)): curr = name[s] if curr == '<': logger.info('"{0}" is using Elasticsearch date math.'.format(name)) rendered = name break if curr == '%': pass elif curr in settings.date_regex() and prev == '%': rendered += str(datetime.utcnow().strftime('%{0}'.format(curr))) else: rendered += curr logger.debug('Partially rendered name: {0}'.format(rendered)) prev = curr logger.debug('Fully rendered name: {0}'.format(rendered)) return rendered def prune_nones(mydict): """ Remove keys from `mydict` whose values are `None` :arg mydict: The dictionary to act on :rtype: dict """ # Test explicitly for `None`; a simple truthiness test would also catch # zero values, which must be preserved return dict([(k,v) for k, v in mydict.items() if v != None and v != 'None']) def validate_filters(action, filters): """ Validate that the filters are appropriate for the action type, e.g. no index filters applied to a snapshot list. :arg action: An action name :arg filters: A list of filters to test. """ # Define which set of filtertypes to use for testing if action in settings.snapshot_actions(): filtertypes = settings.snapshot_filtertypes() else: filtertypes = settings.index_filtertypes() for f in filters: if f['filtertype'] not in filtertypes: raise ConfigurationError( '"{0}" filtertype is not compatible with action "{1}"'.format( f['filtertype'], action ) ) # If we get to this point, we're still valid. Return the original list return filters def validate_actions(data): """ Validate an Action configuration dictionary, as imported from actions.yml, for example. The method returns a validated and sanitized configuration dictionary. :arg data: The configuration dictionary :rtype: dict """ # data is the ENTIRE schema... clean_config = {} # Let's break it down into smaller chunks... # First, let's make sure it has "actions" as a key, with a subdictionary root = SchemaCheck(data, actions.root(), 'Actions File', 'root').result() # We've passed the first step. Now let's iterate over the actions... for action_id in root['actions']: # Now, let's ensure that the basic action structure is correct, with # the proper possibilities for 'action' action_dict = root['actions'][action_id] loc = 'Action ID "{0}"'.format(action_id) valid_structure = SchemaCheck( action_dict, actions.structure(action_dict, loc), 'structure', loc ).result() # With the basic structure validated, now we extract the action name current_action = valid_structure['action'] # And let's update the location with the action.
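# (The 'loc' string below is only used to build human-readable locations # for error reporting when a SchemaCheck fails; it does not alter the # validated output.)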
loc = 'Action ID "{0}", action "{1}"'.format( action_id, current_action) clean_options = SchemaCheck( prune_nones(valid_structure['options']), options.get_schema(current_action), 'options', loc ).result() clean_config[action_id] = { 'action' : current_action, 'description' : valid_structure['description'], 'options' : clean_options, } if current_action == 'alias': add_remove = {} for k in ['add', 'remove']: if k in valid_structure: current_filters = SchemaCheck( valid_structure[k]['filters'], Schema(filters.Filters(current_action, location=loc)), '"{0}" filters'.format(k), '{0}, "{1}", "filters"'.format(loc, k) ).result() add_remove.update( { k: { 'filters' : SchemaCheck( current_filters, Schema( filters.Filters( current_action, location=loc ) ), 'filters', '{0}, "{1}", "filters"'.format(loc, k) ).result() } } ) # Add/Remove here clean_config[action_id].update(add_remove) elif current_action in ['cluster_routing', 'create_index', 'rollover']: # cluster_routing, create_index, and rollover should not have filters pass else: # Filters key only appears in non-alias actions valid_filters = SchemaCheck( valid_structure['filters'], Schema(filters.Filters(current_action, location=loc)), 'filters', '{0}, "filters"'.format(loc) ).result() clean_filters = validate_filters(current_action, valid_filters) clean_config[action_id].update({'filters' : clean_filters}) # This is a special case for remote reindex if current_action == 'reindex': # Check only if populated with something. if 'remote_filters' in valid_structure['options']: valid_filters = SchemaCheck( valid_structure['options']['remote_filters'], Schema(filters.Filters(current_action, location=loc)), 'filters', '{0}, "filters"'.format(loc) ).result() clean_remote_filters = validate_filters( current_action, valid_filters) clean_config[action_id]['options'].update( { 'remote_filters' : clean_remote_filters } ) # if we've gotten this far without any Exceptions raised, it's valid! return { 'actions' : clean_config } def health_check(client, **kwargs): """ This function calls client.cluster.health and, based on the args provided, will return `True` or `False` depending on whether that particular keyword appears in the output, and has the expected value. If multiple keys are provided, all must match for a `True` response. :arg client: An :class:`elasticsearch.Elasticsearch` client object """ logger.debug('KWARGS= "{0}"'.format(kwargs)) klist = list(kwargs.keys()) if len(klist) < 1: raise MissingArgument('Must provide at least one keyword argument') hc_data = client.cluster.health() response = True for k in klist: # First, verify that all kwargs are in the list if k not in list(hc_data.keys()): raise ConfigurationError( 'Key "{0}" not in cluster health output'.format(k)) if hc_data[k] != kwargs[k]: logger.debug( 'NO MATCH: Value for key "{0}", health check data: ' '{1}'.format(kwargs[k], hc_data[k]) ) response = False else: logger.debug( 'MATCH: Value for key "{0}", health check data: ' '{1}'.format(kwargs[k], hc_data[k]) ) if response: logger.info('Health Check for all provided keys passed.') return response def snapshot_check(client, snapshot=None, repository=None): """ This function calls `client.snapshot.get` and tests to see whether the snapshot is complete, and if so, with what status. It will log errors according to the result. If the snapshot is still `IN_PROGRESS`, it will return `False`. `SUCCESS` will be an `INFO` level message, `PARTIAL` nets a `WARNING` message, `FAILED` is an `ERROR` message, and all others will be a `WARNING` level message.
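A sketch of how a caller might poll with this check (hypothetical snapshot and repository names)::

    while not snapshot_check(client, snapshot='snap1', repository='repo1'):
        time.sleep(9)  # keep polling until the snapshot leaves IN_PROGRESS
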
:arg client: An :class:`elasticsearch.Elasticsearch` client object :arg snapshot: The name of the snapshot. :arg repository: The Elasticsearch snapshot repository to use """ try: state = client.snapshot.get( repository=repository, snapshot=snapshot)['snapshots'][0]['state'] except Exception as e: raise CuratorException( 'Unable to obtain information for snapshot "{0}" in repository ' '"{1}". Error: {2}'.format(snapshot, repository, e) ) logger.debug('Snapshot state = {0}'.format(state)) if state == 'IN_PROGRESS': logger.info('Snapshot {0} still in progress.'.format(snapshot)) return False elif state == 'SUCCESS': logger.info( 'Snapshot {0} successfully completed.'.format(snapshot)) elif state == 'PARTIAL': logger.warn( 'Snapshot {0} completed with state PARTIAL.'.format(snapshot)) elif state == 'FAILED': logger.error( 'Snapshot {0} completed with state FAILED.'.format(snapshot)) else: logger.warn( 'Snapshot {0} completed with state: {1}'.format(snapshot, state)) return True def restore_check(client, index_list): """ This function calls client.indices.recovery with the list of indices to check for complete recovery. It will return `True` if recovery of those indices is complete, and `False` otherwise. It is designed to fail fast: if a single shard is encountered that is still recovering (not in `DONE` stage), it will immediately return `False`, rather than continue iterating over the rest of the response. :arg client: An :class:`elasticsearch.Elasticsearch` client object :arg index_list: The list of indices to verify having been restored. """ try: response = client.indices.recovery(index=to_csv(index_list), human=True) except Exception as e: raise CuratorException( 'Unable to obtain recovery information for specified indices. ' 'Error: {0}'.format(e) ) # This should address #962, where perhaps the cluster state hasn't yet # had a chance to add a _recovery state, so it comes back empty. if response == {}: logger.info('_recovery returned an empty response. Trying again.') return False # Fixes added in #989 logger.info('Provided indices: {0}'.format(index_list)) logger.info('Found indices: {0}'.format(list(response.keys()))) for index in response: for shard in range(0, len(response[index]['shards'])): # Apparently `is not` is not always `!=`. Unsure why, will # research later. Using != fixes #966 if response[index]['shards'][shard]['stage'] != 'DONE': logger.info( 'Index "{0}" is still in stage "{1}"'.format( index, response[index]['shards'][shard]['stage'] ) ) return False # If we've gotten here, all of the indices have recovered return True def task_check(client, task_id=None): """ This function calls client.tasks.get with the provided `task_id`. If the task data contains ``'completed': True``, then it will return `True`. If the task is not completed, it will log some information about the task and return `False`. :arg client: An :class:`elasticsearch.Elasticsearch` client object :arg task_id: A task_id which ostensibly matches a task searchable in the tasks API. """ try: task_data = client.tasks.get(task_id=task_id) except Exception as e: raise CuratorException( 'Unable to obtain task information for task_id "{0}".
Exception ' '{1}'.format(task_id, e) ) task = task_data['task'] completed = task_data['completed'] running_time = 0.000000001 * task['running_time_in_nanos'] logger.debug('running_time_in_nanos = {0}'.format(running_time)) descr = task['description'] if completed: completion_time = ((running_time * 1000) + task['start_time_in_millis']) time_string = time.strftime( '%Y-%m-%dT%H:%M:%SZ', time.localtime(completion_time/1000) ) logger.info('Task "{0}" completed at {1}.'.format(descr, time_string)) return True else: # Log the task status here. logger.debug('Full Task Data: {0}'.format(task_data)) logger.info( 'Task "{0}" with task_id "{1}" has been running for ' '{2} seconds'.format(descr, task_id, running_time)) return False def wait_for_it( client, action, task_id=None, snapshot=None, repository=None, index_list=None, wait_interval=9, max_wait=-1 ): """ This function becomes one place to do all wait_for_completion type behaviors :arg client: An :class:`elasticsearch.Elasticsearch` client object :arg action: The action name that will identify how to wait :arg task_id: If the action provided a task_id, this is where it must be declared. :arg snapshot: The name of the snapshot. :arg repository: The Elasticsearch snapshot repository to use :arg index_list: The list of indices to verify having been restored. Only used by the `restore` action. :arg wait_interval: How frequently the specified "wait" behavior will be polled to check for completion. :arg max_wait: Number of seconds the "wait" behavior will persist before giving up and raising an Exception. The default is -1, meaning it will try forever. """ action_map = { 'allocation':{ 'function': health_check, 'args': {'relocating_shards':0}, }, 'replicas':{ 'function': health_check, 'args': {'status':'green'}, }, 'cluster_routing':{ 'function': health_check, 'args': {'relocating_shards':0}, }, 'snapshot':{ 'function':snapshot_check, 'args':{'snapshot':snapshot, 'repository':repository}, }, 'restore':{ 'function':restore_check, 'args':{'index_list':index_list}, }, 'reindex':{ 'function':task_check, 'args':{'task_id':task_id}, }, 'shrink':{ 'function': health_check, 'args': {'status':'green'}, }, } wait_actions = list(action_map.keys()) if action not in wait_actions: raise ConfigurationError( '"action" must be one of {0}'.format(wait_actions) ) if action == 'reindex' and task_id is None: raise MissingArgument( 'A task_id must accompany "action" {0}'.format(action) ) if action == 'snapshot' and ((snapshot is None) or (repository is None)): raise MissingArgument( 'A snapshot and repository must accompany "action" {0}. snapshot: ' '{1}, repository: {2}'.format(action, snapshot, repository) ) if action == 'restore' and index_list is None: raise MissingArgument( 'An index_list must accompany "action" {0}'.format(action) ) elif action == 'reindex': try: task_dict = client.tasks.get(task_id=task_id) except Exception as e: # This exception should only exist in API usage. It should never # occur in regular Curator usage. raise CuratorException( 'Unable to find task_id {0}. Exception: {1}'.format(task_id, e) ) # Now with this mapped, we can perform the wait as indicated.
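# Descriptive note, not new behavior: the loop below polls the check # function mapped for this action every `wait_interval` seconds, breaking # out as soon as it returns True. If `max_wait` is not -1 and that many # seconds elapse first, the loop gives up and ActionTimeout is raised.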
start_time = datetime.now() result = False while True: elapsed = int((datetime.now() - start_time).total_seconds()) logger.debug('Elapsed time: {0} seconds'.format(elapsed)) response = action_map[action]['function']( client, **action_map[action]['args']) logger.debug('Response: {0}'.format(response)) # Success if response: logger.debug( 'Action "{0}" finished executing (may or may not have been ' 'successful)'.format(action)) result = True break # Not success, and reached maximum wait (if defined) elif (max_wait != -1) and (elapsed >= max_wait): logger.error( 'Unable to complete action "{0}" within max_wait ({1}) ' 'seconds.'.format(action, max_wait) ) break # Not success, so we wait. else: logger.debug( 'Action "{0}" not yet complete, {1} total seconds elapsed. ' 'Waiting {2} seconds before checking ' 'again.'.format(action, elapsed, wait_interval)) time.sleep(wait_interval) logger.debug('Result: {0}'.format(result)) if result == False: raise ActionTimeout( 'Action "{0}" failed to complete in the max_wait period of ' '{1} seconds'.format(action, max_wait) ) def node_roles(client, node_id): """ Return the list of roles assigned to the node identified by ``node_id`` :arg client: An :class:`elasticsearch.Elasticsearch` client object :rtype: list """ return client.nodes.info()['nodes'][node_id]['roles'] def index_size(client, idx): return client.indices.stats(index=idx)['indices'][idx]['total']['store']['size_in_bytes'] def single_data_path(client, node_id): """ In order for a shrink to work, it should be on a single filesystem, as shards cannot span filesystems. Return `True` if the node has a single filesystem, and `False` otherwise. :arg client: An :class:`elasticsearch.Elasticsearch` client object :rtype: bool """ return len(client.nodes.stats()['nodes'][node_id]['fs']['data']) == 1 def name_to_node_id(client, name): """ Return the node_id of the node identified by ``name`` :arg client: An :class:`elasticsearch.Elasticsearch` client object :rtype: str """ stats = client.nodes.stats() for node in stats['nodes']: if stats['nodes'][node]['name'] == name: logger.debug('Found node_id "{0}" for name "{1}".'.format(node, name)) return node logger.error('No node_id found matching name: "{0}"'.format(name)) return None def node_id_to_name(client, node_id): """ Return the name of the node identified by ``node_id`` :arg client: An :class:`elasticsearch.Elasticsearch` client object :rtype: str """ stats = client.nodes.stats() name = None if node_id in stats['nodes']: name = stats['nodes'][node_id]['name'] else: logger.error('No node_id found matching: "{0}"'.format(node_id)) logger.debug('Name associated with node_id "{0}": {1}'.format(node_id, name)) return name curator-5.2.0/curator/validators/000077500000000000000000000000001315226075300170305ustar00rootroot00000000000000curator-5.2.0/curator/validators/__init__.py000066400000000000000000000000451315226075300211400ustar00rootroot00000000000000from .schemacheck import SchemaCheck curator-5.2.0/curator/validators/actions.py000066400000000000000000000032711315226075300210450ustar00rootroot00000000000000from voluptuous import * from ..defaults import settings from . 
import SchemaCheck ### Schema information ### # Actions: root level def root(): return Schema({ Required('actions'): dict }) def valid_action(): return { Required('action'): Any( In(settings.all_actions()), msg='action must be one of {0}'.format( settings.all_actions() ) ) } # Basic action structure def structure(data, location): # Validate the action type first, so we can use it for other tests valid_action_type = SchemaCheck( data, Schema(valid_action(), extra=True), 'action type', location, ).result() # Build a valid schema knowing that the action has already been validated retval = valid_action() retval.update( { Optional('description', default='No description given'): Any( str, unicode ) } ) retval.update( { Optional('options', default=settings.default_options()): dict } ) action = data['action'] if action in [ 'cluster_routing', 'create_index', 'rollover']: # The cluster_routing, create_index, and rollover actions should not # have a 'filters' block pass elif action == 'alias': # The alias action should not have a filters block, but should have # an add and/or remove block. retval.update( { Optional('add'): dict, Optional('remove'): dict, } ) else: retval.update( { Optional('filters', default=settings.default_filters()): list } ) return Schema(retval) curator-5.2.0/curator/validators/config_file.py000066400000000000000000000004001315226075300216400ustar00rootroot00000000000000from voluptuous import * from ..defaults import client_defaults def client(): return Schema( { Optional('client'): client_defaults.config_client(), Optional('logging'): client_defaults.config_logging(), } ) curator-5.2.0/curator/validators/filters.py000066400000000000000000000032641315226075300210570ustar00rootroot00000000000000from voluptuous import * from ..defaults import settings, filtertypes from ..exceptions import ConfigurationError from . import SchemaCheck import logging logger = logging.getLogger(__name__) def filtertype(): return { Required('filtertype'): Any( In(settings.all_filtertypes()), msg='filtertype must be one of {0}'.format( settings.all_filtertypes() ) ) } def structure(): # This is to first ensure that only the possible keys/filter elements are # there, and get a dictionary back to work with. retval = settings.structural_filter_elements() retval.update(filtertype()) return Schema(retval) def single(action, data): try: ft = data['filtertype'] except KeyError: raise ConfigurationError('Missing key "filtertype"') f = filtertype() for each in getattr(filtertypes, ft)(action, data): f.update(each) return Schema(f) def Filters(action, location=None): def f(v): def prune_nones(mydict): return dict([(k,v) for k, v in mydict.items() if v != None and v != 'None']) # This validator method simply validates all filters in the list. 
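# Each filter dictionary is pruned of None values, validated against the # schema for its own filtertype, and then written back into the list in # place, so any defaults populated by the schema are retained.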
for idx in range(0, len(v)): pruned = prune_nones(v[idx]) filter_dict = SchemaCheck( pruned, single(action, pruned), 'filter', '{0}, filter #{1}: {2}'.format(location, idx, pruned) ).result() logger.debug('Filter #{0}: {1}'.format(idx, filter_dict)) v[idx] = filter_dict # If we've made it here without raising an Exception, it's valid return v return f curator-5.2.0/curator/validators/options.py000066400000000000000000000127111315226075300210770ustar00rootroot00000000000000from voluptuous import * from ..defaults import option_defaults ## Methods for building the schema def action_specific(action): options = { 'alias' : [ option_defaults.name(action), option_defaults.warn_if_no_indices(), option_defaults.extra_settings(), ], 'allocation' : [ option_defaults.key(), option_defaults.value(), option_defaults.allocation_type(), option_defaults.wait_for_completion(action), option_defaults.wait_interval(action), option_defaults.max_wait(action), ], 'close' : [ option_defaults.delete_aliases() ], 'cluster_routing' : [ option_defaults.routing_type(), option_defaults.cluster_routing_setting(), option_defaults.cluster_routing_value(), option_defaults.wait_for_completion(action), option_defaults.wait_interval(action), option_defaults.max_wait(action), ], 'create_index' : [ option_defaults.name(action), option_defaults.extra_settings(), ], 'delete_indices' : [], 'delete_snapshots' : [ option_defaults.repository(), option_defaults.retry_interval(), option_defaults.retry_count(), ], 'forcemerge' : [ option_defaults.delay(), option_defaults.max_num_segments(), ], 'index_settings' : [ option_defaults.index_settings(), option_defaults.ignore_unavailable(), option_defaults.preserve_existing(), ], 'open' : [], 'reindex' : [ option_defaults.request_body(), option_defaults.refresh(), option_defaults.requests_per_second(), option_defaults.slices(), option_defaults.timeout(action), option_defaults.wait_for_active_shards(action), option_defaults.wait_for_completion(action), option_defaults.wait_interval(action), option_defaults.max_wait(action), option_defaults.remote_certificate(), option_defaults.remote_client_cert(), option_defaults.remote_client_key(), option_defaults.remote_aws_key(), option_defaults.remote_aws_secret_key(), option_defaults.remote_aws_region(), option_defaults.remote_filters(), option_defaults.remote_url_prefix(), option_defaults.remote_ssl_no_validate(), option_defaults.migration_prefix(), option_defaults.migration_suffix(), ], 'replicas' : [ option_defaults.count(), option_defaults.wait_for_completion(action), option_defaults.wait_interval(action), option_defaults.max_wait(action), ], 'rollover' : [ option_defaults.name(action), option_defaults.new_index(), option_defaults.conditions(), option_defaults.extra_settings(), option_defaults.wait_for_active_shards(action), ], 'restore' : [ option_defaults.repository(), option_defaults.name(action), option_defaults.indices(), option_defaults.ignore_unavailable(), option_defaults.include_aliases(), option_defaults.include_global_state(action), option_defaults.partial(), option_defaults.rename_pattern(), option_defaults.rename_replacement(), option_defaults.extra_settings(), option_defaults.wait_for_completion(action), option_defaults.wait_interval(action), option_defaults.max_wait(action), option_defaults.skip_repo_fs_check(), ], 'snapshot' : [ option_defaults.repository(), option_defaults.name(action), option_defaults.ignore_unavailable(), option_defaults.include_global_state(action), option_defaults.partial(), 
option_defaults.wait_for_completion(action), option_defaults.wait_interval(action), option_defaults.max_wait(action), option_defaults.skip_repo_fs_check(), ], 'shrink' : [ option_defaults.shrink_node(), option_defaults.node_filters(), option_defaults.number_of_shards(), option_defaults.number_of_replicas(), option_defaults.shrink_prefix(), option_defaults.shrink_suffix(), option_defaults.delete_after(), option_defaults.post_allocation(), option_defaults.wait_for_active_shards(action), option_defaults.extra_settings(), option_defaults.wait_for_completion(action), option_defaults.wait_interval(action), option_defaults.max_wait(action), ], } return options[action] def get_schema(action): # Appending the options dictionary seems to be the best way, since the # "Required" and "Optional" elements are hashes themselves. options = {} defaults = [ option_defaults.continue_if_exception(), option_defaults.disable_action(), option_defaults.ignore_empty_list(), option_defaults.timeout_override(action), ] for each in defaults: options.update(each) for each in action_specific(action): options.update(each) return Schema(options) curator-5.2.0/curator/validators/schemacheck.py000066400000000000000000000050771315226075300216510ustar00rootroot00000000000000from voluptuous import * from ..exceptions import ConfigurationError import re import logging class SchemaCheck(object): def __init__(self, config, schema, test_what, location): """ Validate ``config`` with the provided voluptuous ``schema``. ``test_what`` and ``location`` are for reporting the results, in case of failure. If validation is successful, the method returns ``config`` as valid. :arg config: A configuration dictionary. :type config: dict :arg schema: A voluptuous schema definition :type schema: :class:`voluptuous.Schema` :arg test_what: which configuration block is being validated :type test_what: str :arg location: An string to report which configuration sub-block is being tested. :type location: str """ self.loggit = logging.getLogger('curator.validators.SchemaCheck') # Set the Schema for validation... self.loggit.debug('Schema: {0}'.format(schema)) self.loggit.debug('"{0}" config: {1}'.format(test_what, config)) self.config = config self.schema = schema self.test_what = test_what self.location = location def __parse_error(self): """ Report the error, and try to report the bad key or value as well. """ def get_badvalue(data_string, data): elements = re.sub('[\'\]]', '', data_string).split('[') elements.pop(0) # Get rid of data as the first element value = None for k in elements: try: key = int(k) except ValueError: key = k if value == None: value = data[key] # if this fails, it's caught below return value try: self.badvalue = get_badvalue(str(self.error).split()[-1], self.config) except: self.badvalue = '(could not determine)' def result(self): try: return self.schema(self.config) except Exception as e: try: self.error = e.errors[0] except: self.error = '{0}'.format(e) self.__parse_error() self.loggit.error('Schema error: {0}'.format(self.error)) raise ConfigurationError( 'Configuration: {0}: Location: {1}: Bad Value: "{2}", {3}. ' 'Check configuration file.'.format( self.test_what, self.location, self.badvalue, self.error) ) curator-5.2.0/docs/000077500000000000000000000000001315226075300141315ustar00rootroot00000000000000curator-5.2.0/docs/Changelog.rst000066400000000000000000001746161315226075300165710ustar00rootroot00000000000000.. 
_changelog: Changelog ========= 5.2.0 (1 September 2017) ------------------------ **New Features** * Shrink action! Apologies to all who have patiently waited for this feature. It's been a long time coming, but it is hopefully worth the wait. There are a lot of checks and tests associated with this action, as there are many conditions that have to be met in order for a shrink to take place. Curator will try its best to ensure that all of these conditions are met so you can comfortably rest assured that shrink will work properly unattended. See the documentation for more information. * The ``cli`` function has been split into ``cli`` and ``run`` functions. The behavior of ``cli`` will be indistinguishable from previous releases, preserving API integrity. The new ``run`` function allows lambda and other users to `run` Curator from the API with only a client configuration file and action file as arguments. Requested in #1031 (untergeek) * Allow use of time/date string interpolation for Rollover index naming. Added in #1010 (tschroeder-zendesk) * New ``unit_count_pattern`` allows you to derive the ``unit_count`` from the index name itself. This involves regular expressions, so be sure to do lots of testing in ``--dry-run`` mode before deploying to production. Added by (soenkeliebau) in #997 **Bug Fixes** * Reindex ``request_body`` allows for 2 different ``size`` options. One limits the number of documents reindexed. The other is for batch sizing. The batch sizing option was missing from the schema validator. This has been corrected. Reported in #1038 (untergeek) * A few sundry logging and notification changes were made. 5.1.2 (08 August 2017) ---------------------- **Errata** * An update to Elasticsearch 5.5.0 changes the behavior of ``filter_by_aliases``, differing from previous 5.x versions. If a list of aliases is provided, indices must appear in _all_ listed aliases or a 404 error will result, leading to no indices being matched. In older versions, if the index was associated with even one of the aliases in aliases, it would result in a match. Tests and documentation have been updated to address these changes. * Debian 9 changed SSL versions, which means that the pre-built debian packages no longer work in Debian 9. In the short term, this requires a new repository. In the long term, I will try to get a better repository system working for these so they all work together, better. Requested in #998 (untergeek) **Bug Fixes** * Support date math in reindex operations better. It did work previously, but would report failure because the test was looking for the index with that name from a list of indices, rather than letting Elasticsearch do the date math. Reported by DPattee in #1008 (untergeek) * Under rare circumstances, snapshot delete (or create) actions could fail, even when there were no snapshots in state ``IN_PROGRESS``. This was tracked down by JD557 as a collision with a previously deleted snapshot that hadn't finished deleting. It could be seen in the tasks API. An additional test for snapshot activity in the tasks API has been added to cover this scenario. Reported in #999 (untergeek) * The ``restore_check`` function did not work properly with wildcard index patterns. This has been rectified, and an integration test added to satisfy this. Reported in #989 (untergeek) * Make Curator report the Curator version, and not just reiterate the elasticsearch version when reporting version incompatibilities. Reported in #992. (untergeek) * Fix repository/snapshot name logging issue. 
#1005 (jpcarey) * Fix Windows build issue #1014 (untergeek) **Documentation** * Fix/improve rST API documentation. * Thanks to many users who not only found and reported documentation issues, but also submitted corrections. 5.1.1 (8 June 2017) ------------------- **Bug Fixes** * Mock and cx_Freeze don't play well together. Packages weren't working, so I reverted the string-based comparison as before. 5.1.0 (8 June 2017) ------------------- **New Features** * Index Settings are here! First requested as far back as #160, it's been requested in various forms culminating in #656. The official documentation addresses the usage. (untergeek) * Remote reindex now adds the ability to migrate from one cluster to another, preserving the index names, or optionally adding a prefix and/or a suffix. The official documentation shows you how. (untergeek) * Added support for naming rollover indices. #970 (jurajseffer) * Testing against ES 5.4.1, 5.3.3 **Bug Fixes** * Since Curator no longer supports old versions of python, convert tests to use ``isinstance``. #973 (untergeek) * Fix stray instance of ``is not`` comparison instead of ``!=`` #972 (untergeek) * Increase remote client timeout to 180 seconds for remote reindex. #930 (untergeek) **General** * elasticsearch-py dependency bumped to 5.4.0 * Added mock dependency due to isinstance and testing requirements * AWS ES 5.3 officially supports Curator now. Documentation has been updated to reflect this. 5.0.4 (16 May 2017) ------------------- **Bug Fixes** * The ``_recovery`` check needs to compare using ``!=`` instead of ``is not``, which apparently does not accurately compare unicode strings. Reported in #966. (untergeek) 5.0.3 (15 May 2017) ------------------- **Bug Fixes** * Restoring a snapshot on an exceptionally fast cluster/node can create a race condition where a ``_recovery`` check returns an empty dictionary ``{}``, which causes Curator to fail. Added test and code to correct this. Reported in #962. (untergeek) 5.0.2 (4 May 2017) ------------------ **Bug Fixes** * Nasty bug in schema validation fixed where boolean options or filter flags would validate as ``True`` if non-boolean types were submitted. Reported in #945. (untergeek) * Check for presence of alias after reindex, in case the reindex was to an alias. Reported in #941. (untergeek) * Fix an edge case where an index named with `1970.01.01` could not be sorted by index-name age. Reported in #951. (untergeek) * Update tests to include ES 5.3.2 * Bump certifi requirement to 2017.4.17. **Documentation** * Document substitute strftime symbols for doing ISO Week timestrings added in #932. (untergeek) * Document how to include file paths better. Fixes #944. (untergeek) 5.0.1 (10 April 2017) --------------------- **Bug Fixes** * Fixed default values for ``include_global_state`` on the restore action to be in line with defaults in Elasticsearch 5.3 **Documentation** * Huge improvement to documentation, with many more examples. * Address age filter limitations per #859 (untergeek) * Address date matching behavior better per #858 (untergeek) 5.0.0 (5 April 2017) -------------------- The full feature set of 5.0 (including alpha releases) is included here. **New Features** * Reindex is here! The new reindex action has a ton of flexibility. You can even reindex from remote locations, so long as the remote cluster is Elasticsearch 1.4 or newer. * Added the ``period`` filter (#733).
This allows you to select indices or snapshots, based on whether they fit within a period of hours, days, weeks, months, or years. * Add dedicated "wait for completion" functionality. This supports health checks, recovery (restore) checks, snapshot checks, and operations which support the new tasks API. All actions which can use this have been refactored to take advantage of this. The benefit of this new feature is that client timeouts will be less likely to happen when performing long operations, like snapshot and restore. NOTE: There is one caveat: forceMerge does not support this, per the Elasticsearch API. A forceMerge call will hold the client until complete, or the client times out. There is no clean way around this that I can discern. * Elasticsearch date math naming is supported and documented for the ``create_index`` action. An integration test is included for validation. * Allow allocation action to unset a key/value pair by using an empty value. Requested in #906. (untergeek) * Added support for the Rollover API. Requested in #898, and by countless others. * Added ``warn_if_no_indices`` option for ``alias`` action in response to #883. Using this option will permit the ``alias`` add or remove to continue with a logged warning, even if the filters result in a NoIndices condition. Use with care. **General** * Bumped ``click`` (python module) version dependency to 6.7 * Bumped ``urllib3`` (python module) version dependency to 1.20 * Bumped ``elasticsearch`` (python module) version dependency to 5.3 * Refactored a ton of code to be cleaner and hopefully more consistent. **Bug Fixes** * Curator now logs version incompatibilities as an error, rather than just raising an Exception. #874 (untergeek) * The ``get_repository()`` function now properly raises an exception instead of returning `False` if nothing is found. #761 (untergeek) * Check if an index is in an alias before attempting to delete it from the alias. Issue raised in #887. (untergeek) * Fix allocation issues when using Elasticsearch 5.1+. Issue raised in #871 (untergeek) **Documentation** * Add missing repository arg to auto-gen API docs. Reported in #888 (untergeek) * Add all new documentation and clean up for v5 specific. **Breaking Changes** * IndexList no longer checks to see if there are indices on initialization. 5.0.0a1 (23 March 2017) ----------------------- This is the first alpha release of Curator 5. This should not be used for production! There `will` be many more changes before 5.0.0 is released. **New Features** * Allow allocation action to unset a key/value pair by using an empty value. Requested in #906. (untergeek) * Added support for the Rollover API. Requested in #898, and by countless others. * Added ``warn_if_no_indices`` option for ``alias`` action in response to #883. Using this option will permit the ``alias`` add or remove to continue with a logged warning, even if the filters result in a NoIndices condition. Use with care. **Bug Fixes** * Check if an index is in an alias before attempting to delete it from the alias. Issue raised in #887. (untergeek) * Fix allocation issues when using Elasticsearch 5.1+. Issue raised in #871 (untergeek) **Documentation** * Add missing repository arg to auto-gen API docs. Reported in #888 (untergeek) 4.2.6 (27 January 2017) ----------------------- **General** * Update Curator to use version 5.1 of the ``elasticsearch-py`` python module. With this change, there will be no reverse compatibility with Elasticsearch 2.x.
For 2.x versions, continue to use the 4.x branches of Curator. * Tests were updated to reflect the changes in API calls, which were minimal. * Remove "official" support for Python 2.6. If you must use Curator on a system that uses Python 2.6 (RHEL/CentOS 6 users), it is recommended that you use the official RPM package as it is a frozen binary built on Python 3.5.x which will not conflict with your system Python. * Use ``isinstance()`` to verify client object. #862 (cp2587) * Prune older versions from Travis CI tests. * Update ``certifi`` dependency to latest version **Documentation** * Add version compatibility section to official documentation. * Update docs to reflect changes. Remove cruft and references to older versions. 4.2.5 (22 December 2016) ------------------------ **General** * Add and increment test versions for Travis CI. #839 (untergeek) * Make `filter_list` optional in snapshot, show_snapshot and show_indices singleton actions. #853 (alexef) **Bug Fixes** * Fix cli integration test when different host/port are specified. Reported in #843 (untergeek) * Catch empty list condition during filter iteration in singleton actions. Reported in #848 (untergeek) **Documentation** * Add docs regarding how filters are ANDed together, and how to do an OR with the regex pattern filter type. Requested in #842 (untergeek) * Fix typo in Click version in docs. #850 (breml) * Where applicable, replace `[source,text]` with `[source,yaml]` for better formatting in the resulting docs. 4.2.4 (7 December 2016) ----------------------- **Bug Fixes** * ``--wait_for_completion`` should be `True` by default for Snapshot singleton action. Reported in #829 (untergeek) * Increase `version_max` to 5.1.99. Prematurely reported in #832 (untergeek) * Make the '.security' index visible for snapshots so long as proper credentials are used. Reported in #826 (untergeek) 4.2.3.post1 (22 November 2016) ------------------------------ This fix is `only` going in for ``pip``-based installs. There are no other code changes. **Bug Fixes** * Fixed incorrect assumption of PyPI picking up dependency for certifi. It is still a dependency, but should not affect ``pip`` installs with an error any more. Reported in #821 (untergeek) 4.2.3 (21 November 2016) ------------------------ 4.2.2 was pulled immediately after release after it was discovered that the Windows binary distributions were still not including the certifi-provided certificates. This has now been remedied. **General** * ``certifi`` is now officially a requirement. * ``setup.py`` now forcibly includes the ``certifi`` certificate PEM file in the "frozen" distributions (i.e., the compiled versions). The ``get_client`` method was updated to reflect this and catch it for both the Linux and Windows binary distributions. This should `finally` put to rest #810 4.2.2 (21 November 2016) ------------------------ **Bug Fixes** * The certifi-provided certificates were not propagating to the compiled RPM/DEB packages. This has been corrected. Reported in #810 (untergeek) **General** * Added missing ``--ignore_empty_list`` option to singleton actions. Requested in #812 (untergeek) **Documentation** * Add a FAQ entry regarding the click module's need for Unicode when using Python 3. Kind of a bug fix too, as the entry_points were altered to catch this omission and report a potential solution on the command-line. 
Reported in #814 (untergeek) * Change the "Command-Line" documentation header to be "Running Curator" 4.2.1 (8 November 2016) ----------------------- **Bug Fixes** * In the course of package release testing, an undesirable scenario was caught where boolean flags default values for ``curator_cli`` were improperly overriding values from a yaml config file. **General** * Adding in direct download URLs for the RPM, DEB, tarball and zip packages. 4.2.0 (4 November 2016) ----------------------- **New Features** * Shard routing allocation enable/disable. This will allow you to disable shard allocation routing before performing one or more actions, and then re-enable after it is complete. Requested in #446 (untergeek) * Curator 3.x-style command-line. This is now ``curator_cli``, to differentiate between the current binary. Not all actions are available, but the most commonly used ones are. With the addition in 4.1.0 of schema and configuration validation, there's even a way to still do filter chaining on the command-line! Requested in #767, and by many other users (untergeek) **General** * Update testing to the most recent versions. * Lock elasticsearch-py module version at >= 2.4.0 and <= 3.0.0. There are API changes in the 5.0 release that cause tests to fail. **Bug Fixes** * Guarantee that binary packages are built from the latest Python + libraries. This ensures that SSL/TLS will work without warning messages about insecure connections, unless they actually are insecure. Reported in #780, though the reported problem isn't what was fixed. The fix is needed based on what was discovered while troubleshooting the problem. (untergeek) 4.1.2 (6 October 2016) ---------------------- This release does not actually add any new code to Curator, but instead improves documentation and includes new linux binary packages. **General** * New Curator binary packages for common Linux systems! These will be found in the same repositories that the python-based packages are in, but have no dependencies. All necessary libraries/modules are bundled with the binary, so everything should work out of the box. This feature doesn't change any other behavior, so it's not a major release. These binaries have been tested in: * CentOS 6 & 7 * Ubuntu 12.04, 14.04, 16.04 * Debian 8 They do not work in Debian 7 (library mismatch). They may work in other systems, but that is untested. The script used is in the unix_packages directory. The Vagrantfiles for the various build systems are in the Vagrant directory. **Bug Fixes** * The only bug that can be called a bug is actually a stray ``.exe`` suffix in the binary package creation section (cx_freeze) of ``setup.py``. The Windows binaries should have ``.exe`` extensions, but not unix variants. * Elasticsearch 5.0.0-beta1 testing revealed that a document ID is required during document creation in tests. This has been fixed, and a redundant bit of code in the forcemerge integration test was removed. **Documentation** * The documentation has been updated and improved. Examples and installation are now top-level events, with the sub-sections each having their own link. They also now show how to install and use the binary packages, and the section on installation from source has been improved. The missing section on installing the voluptuous schema verification module has been written and included. #776 (untergeek) 4.1.1 (27 September 2016) ------------------------- **Bug Fixes** * String-based booleans are now properly coerced. 
This fixes an issue where `True`/`False` were used in environment variables, but not recognized. #765 (untergeek) * Fix missing `count` method in ``__map_method`` in SnapshotList. Reported in #766 (untergeek) **General** * Update es_repo_mgr to use the same client/logging YAML config file. Requested in #752 (untergeek) **Schema Validation** * Cases where ``source`` was not defined in a filter (but should have been) were informing users that a `timestring` field was there that shouldn't have been. This edge case has been corrected. **Documentation** * Added notifications and FAQ entry to explain that AWS ES is not supported. 4.1.0 (6 September 2016) ------------------------ **New Features** * Configuration and Action file schema validation. Requested in #674 (untergeek) * Alias filtertype! With this filter, you can select indices based on whether they are part of an alias. Merged in #748 (untergeek) * Count filtertype! With this filter, you can now configure Curator to only keep the most recent _n_ indices (or snapshots!). Merged in #749 (untergeek) * Experimental! Use environment variables in your YAML configuration files. This was a popular request, #697. (untergeek) **General** * New requirement! ``voluptuous`` Python schema validation module * Requirement version bump: Now requires ``elasticsearch-py`` 2.4.0 **Bug Fixes** * ``delete_aliases`` option in ``close`` action no longer results in an error if not all selected indices have an alias. Add test to confirm expected behavior. Reported in #736 (untergeek) **Documentation** * Add information to FAQ regarding indices created before Elasticsearch 1.4. Merged in #747 4.0.6 (15 August 2016) ---------------------- **Bug Fixes** * Update old calls used with ES 1.x to reflect changes in 2.x+. This was necessary to work with Elasticsearch 5.0.0-alpha5. Fixed in #728 (untergeek) **Doc Fixes** * Add section detailing that the value of a ``value`` filter element should be encapsulated in single quotes. Reported in #726. (untergeek) 4.0.5 (3 August 2016) --------------------- **Bug Fixes** * Fix incorrect variable name for AWS Region reported in #679 (basex) * Fix ``filter_by_space()`` to not fail when index age metadata is not present. Indices without the appropriate age metadata will instead be excluded, with a debug-level message. Reported in #724 (untergeek) **Doc Fixes** * Fix documentation for the space filter and the source filter element. 4.0.4 (1 August 2016) --------------------- **Bug Fixes** * Fix incorrect variable name in Allocation action. #706 (lukewaite) * Incorrect error message in ``create_snapshot_body`` reported in #711 (untergeek) * Test for empty index list object should happen in action initialization for snapshot action. Discovered in #711. (untergeek) **Doc Fixes** * Add menus to asciidoc chapters #704 (untergeek) * Add pyyaml dependency #710 (dtrv) 4.0.3 (22 July 2016) -------------------- **General** * 4.0.2 didn't work for ``pip`` installs due to an omission in the MANIFEST.in file. This came up during release testing, but before the release was fully published. As the release was never fully published, this should not have actually affected anyone. **Bug Fixes** * These are the same as 4.0.2, but it was never fully released. * All default settings are now values returned from functions instead of constants. This was resulting in settings getting stomped on. New test addresses the original complaint. This removes the need for ``deepcopy``. See issue #687 (untergeek) * Fix ``host`` vs. 
``hosts`` issue in ``get_client()`` rather than the non-functional function in ``repomgrcli.py``. * Update versions being tested. * Community contributed doc fixes. * Reduced logging verbosity by making most messages debug level. #684 (untergeek) * Fixed log whitelist behavior (and switched to blacklisting instead). Default behavior will now filter traffic from the ``elasticsearch`` and ``urllib3`` modules. * Fix Travis CI testing to accept some skipped tests, as needed. #695 (untergeek) * Fix missing empty index test in snapshot action. #682 (sherzberg) 4.0.2 (22 July 2016) -------------------- **Bug Fixes** * All default settings are now values returned from functions instead of constants. This was resulting in settings getting stomped on. New test addresses the original complaint. This removes the need for ``deepcopy``. See issue #687 (untergeek) * Fix ``host`` vs. ``hosts`` issue in ``get_client()`` rather than the non-functional function in ``repomgrcli.py``. * Update versions being tested. * Community contributed doc fixes. * Reduced logging verbosity by making most messages debug level. #684 (untergeek) * Fixed log whitelist behavior (and switched to blacklisting instead). Default behavior will now filter traffic from the ``elasticsearch`` and ``urllib3`` modules. * Fix Travis CI testing to accept some skipped tests, as needed. #695 (untergeek) * Fix missing empty index test in snapshot action. #682 (sherzberg) 4.0.1 (1 July 2016) ------------------- **Bug Fixes** * Coerce Logstash/JSON logformat type timestamp value to always use UTC. #661 (untergeek) * Catch and remove indices from the actionable list if they do not have a `creation_date` field in settings. This field was introduced in ES v1.4, so that indicates a rather old index. #663 (untergeek) * Replace missing ``state`` filter for ``snapshotlist``. #665 (untergeek) * Restore ``es_repo_mgr`` as a stopgap until other CLI scripts are added. It will remain undocumented for now, as I am debating whether to make repository creation its own action in the API. #668 (untergeek) * Fix dry run results for snapshot action. #673 (untergeek) 4.0.0 (24 June 2016) -------------------- It's official! Curator 4.0.0 is released! **Breaking Changes** * New and improved API! * Command-line changes. No more command-line args, except for ``--config``, ``--actions``, and ``--dry-run``: - ``--config`` points to a YAML client and logging configuration file. The default location is ``~/.curator/curator.yml`` - ``--actions`` arg points to a YAML action configuration file - ``--dry-run`` will simulate the action(s) which would have taken place, but not actually make any changes to the cluster or its indices. **New Features** * Snapshot restore is here! * YAML configuration files. Now a single file can define an entire batch of commands, each with their own filters, to be performed in sequence. * Sort by index age not only by index name (as with previous versions of Curator), but also by index `creation_date`, or by calculations from the Field Stats API on a timestamp field. * Atomically add/remove indices from aliases! This is possible by way of the new `IndexList` class and YAML configuration files. * State of indices pulled and stored in `IndexList` instance. Fewer API calls required to serially test for open/close, `size_in_bytes`, etc. * Filter by space now allows sorting by age! * Experimental! Use AWS IAM credentials to sign requests to Elasticsearch. This requires the end user to *manually* install the `requests_aws4auth` python module. 
* Optionally delete aliases from indices before closing. * An empty index or snapshot list no longer results in an error if you set ``ignore_empty_list`` to `True`. If `True` it will still log that the action was not performed, but will continue to the next action. If 'False' it will log an ERROR and exit with code 1. **API** * Updated API documentation * Class: `IndexList`. This pulls all indices at instantiation, and you apply filters, which are class methods. You can iterate over as many filters as you like, in fact, due to the YAML config file. * Class: `SnapshotList`. This pulls all snapshots from the given repository at instantiation, and you apply filters, which are class methods. You can iterate over as many filters as you like, in fact, due to the YAML config file. * Add `wait_for_completion` to Allocation and Replicas actions. These will use the client timeout, as set by default or `timeout_override`, to determine how long to wait for timeout. These are handled in batches of indices for now. * Allow `timeout_override` option for all actions. This allows for different timeout values per action. * Improve API by giving each action its own `do_dry_run()` method. **General** * Updated use documentation for Elastic main site. * Include example files for ``--config`` and ``--actions``. 4.0.0b2 (16 June 2016) ---------------------- **Second beta release of the 4.0 branch** **New Feature** * An empty index or snapshot list no longer results in an error if you set ``ignore_empty_list`` to `True`. If `True` it will still log that the action was not performed, but will continue to the next action. If 'False' it will log an ERROR and exit with code 1. (untergeek) 4.0.0b1 (13 June 2016) ---------------------- **First beta release of the 4.0 branch!** The release notes will be rehashing the new features in 4.0, rather than the bug fixes done during the alphas. **Breaking Changes** * New and improved API! * Command-line changes. No more command-line args, except for ``--config``, ``--actions``, and ``--dry-run``: - ``--config`` points to a YAML client and logging configuration file. The default location is ``~/.curator/curator.yml`` - ``--actions`` arg points to a YAML action configuration file - ``--dry-run`` will simulate the action(s) which would have taken place, but not actually make any changes to the cluster or its indices. **New Features** * Snapshot restore is here! * YAML configuration files. Now a single file can define an entire batch of commands, each with their own filters, to be performed in sequence. * Sort by index age not only by index name (as with previous versions of Curator), but also by index `creation_date`, or by calculations from the Field Stats API on a timestamp field. * Atomically add/remove indices from aliases! This is possible by way of the new `IndexList` class and YAML configuration files. * State of indices pulled and stored in `IndexList` instance. Fewer API calls required to serially test for open/close, `size_in_bytes`, etc. * Filter by space now allows sorting by age! * Experimental! Use AWS IAM credentials to sign requests to Elasticsearch. This requires the end user to *manually* install the `requests_aws4auth` python module. * Optionally delete aliases from indices before closing. **API** * Updated API documentation * Class: `IndexList`. This pulls all indices at instantiation, and you apply filters, which are class methods. You can iterate over as many filters as you like, in fact, due to the YAML config file. * Class: `SnapshotList`. 
This pulls all snapshots from the given repository at instantiation, and you apply filters, which are class methods. You can iterate over as many filters as you like, in fact, due to the YAML config file. * Add `wait_for_completion` to Allocation and Replicas actions. These will use the client timeout, as set by default or `timeout_override`, to determine how long to wait for timeout. These are handled in batches of indices for now. * Allow `timeout_override` option for all actions. This allows for different timeout values per action. * Improve API by giving each action its own `do_dry_run()` method. **General** * Updated use documentation for Elastic main site. * Include example files for ``--config`` and ``--actions``. 4.0.0a10 (10 June 2016) ----------------------- **New Features** * Snapshot restore is here! * Optionally delete aliases from indices before closing. Fixes #644 (untergeek) **General** * Add `wait_for_completion` to Allocation and Replicas actions. These will use the client timeout, as set by default or `timeout_override`, to determine how long to wait for timeout. These are handled in batches of indices for now. * Allow `timeout_override` option for all actions. This allows for different timeout values per action. **Bug Fixes** * Disallow use of `master_only` if multiple hosts are used. Fixes #615 (untergeek) * Fix an issue where arguments weren't being properly passed and populated. * ForceMerge replaced Optimize in ES 2.1.0. * Fix prune_nones to work with Python 2.6. Fixes #619 (untergeek) * Fix TimestringSearch to work with Python 2.6. Fixes #622 (untergeek) * Add language classifiers to ``setup.py``. Fixes #640 (untergeek) * Changed references to readthedocs.org to be readthedocs.io. 4.0.0a9 (27 Apr 2016) --------------------- **General** * Changed `create_index` API to use kwarg `extra_settings` instead of `body` * Normalized Alias action to use `name` instead of `alias`. This simplifies documentation by reducing the number of option elements. * Streamlined some code * Made `exclude` a filter element setting for all filters. Updated all examples to show this. * Improved documentation **New Features** * Alias action can now accept `extra_settings` to allow adding filters, and/or routing. 4.0.0a8 (26 Apr 2016) --------------------- **Bug Fixes** * Fix to use `optimize` with versions of Elasticsearch < 5.0 * Fix missing setting in testvars 4.0.0a7 (25 Apr 2016) --------------------- **Bug Fixes** * Fix AWS4Auth error. 4.0.0a6 (25 Apr 2016) --------------------- **General** * Documentation updates. * Improve API by giving each action its own `do_dry_run()` method. **Bug Fixes** * Do not escape characters other than ``.`` and ``-`` in timestrings. Fixes #602 (untergeek) **New Features** * Added `CreateIndex` action. 4.0.0a4 (21 Apr 2016) --------------------- **Bug Fixes** * Require `pyyaml` 3.10 or better. * In the case that no `options` are in an action, apply the defaults. 4.0.0a3 (21 Apr 2016) --------------------- It's time for Curator 4.0 alpha! **Breaking Changes** * New API! (again?!) * Command-line changes. No more command-line args, except for ``--config``, ``--actions``, and ``--dry-run``: - ``--config`` points to a YAML client and logging configuration file. The default location is ``~/.curator/curator.yml`` - ``--actions`` arg points to a YAML action configuration file - ``--dry-run`` will simulate the action(s) which would have taken place, but not actually make any changes to the cluster or its indices.
**General** * Updated API documentation * Updated use documentation for Elastic main site. * Include example files for ``--config`` and ``--actions``. **New Features** * Sort by index age not only by index name (as with previous versions of Curator), but also by index `creation_date`, or by calculations from the Field Stats API on a timestamp field. * Class: `IndexList`. This pulls all indices at instantiation, and you apply filters, which are class methods. You can iterate over as many filters as you like, in fact, due to the YAML config file. * Class: `SnapshotList`. This pulls all snapshots from the given repository at instantiation, and you apply filters, which are class methods. You can iterate over as many filters as you like, in fact, due to the YAML config file. * YAML configuration files. Now a single file can define an entire batch of commands, each with their own filters, to be performed in sequence. * Atomically add/remove indices from aliases! This is possible by way of the new `IndexList` class and YAML configuration files. * State of indices pulled and stored in `IndexList` instance. Fewer API calls required to serially test for open/close, `size_in_bytes`, etc. * Filter by space now allows sorting by age! * Experimental! Use AWS IAM credentials to sign requests to Elasticsearch. This requires the end user to *manually* install the `requests_aws4auth` python module. 3.5.1 (21 March 2016) --------------------- **Bug fixes** * Add more logging information to snapshot delete method #582 (untergeek) * Improve default timeout, logging, and exception handling for `seal` command #583 (untergeek) * Fix use of default snapshot name. #584 (untergeek) 3.5.0 (16 March 2016) --------------------- **General** * Add support for the `--client-cert` and `--client-key` command line parameters and client_cert and client_key parameters to the get_client() call. #520 (richm) **Bug fixes** * Disallow users from creating snapshots with upper-case letters, which is not permitted by Elasticsearch. #562 (untergeek) * Remove `print()` command from ``setup.py`` as it causes issues with command-line retrieval of ``--url``, etc. #568 (thib-ack) * Remove unnecessary argument from `build_filter()` #530 (zzugg) * Allow day of year filter to be made up with 1, 2 or 3 digits #578 (petitout) 3.4.1 (10 February 2016) ------------------------ **General** * Update license copyright to 2016 * Use slim python version with Docker #527 (xaka) * Changed ``--master-only`` exit code to 0 when connected to non-master node #540 (wkruse) * Add ``cx_Freeze`` capability to ``setup.py``, plus a ``binary_release.py`` script to simplify binary package creation. #554 (untergeek) * Set Elastic as author. #555 (untergeek) * Put repository creation methods into API and document them. Requested in #550 (untergeek) **Bug fixes** * Fix sphinx documentation build error #506 (hydrapolic) * Ensure snapshots are found before iterating #507 (garyelephant) * Fix a doc inconsistency #509 (pmoust) * Fix a typo in `show` documentation #513 (pbamba) * Default to trying the cluster state for checking whether indices are closed, and then fall back to using the _cat API (for Amazon ES instances). #519 (untergeek) * Improve logging to show time delay between optimize runs, if selected. #525 (untergeek) * Allow elasticsearch-py module versions through 2.3.0 (a presumption at this point) #524 (untergeek) * Improve logging in snapshot api method to reveal when a repository appears to be missing.
Reported in #551 (untergeek) * Test that ``--timestring`` has the correct variable for ``--time-unit``. Reported in #544 (untergeek) * Allocation will exit with exit_code 0 now when there are no indices to work on. Reported in #531 (untergeek) 3.4.0 (28 October 2015) ----------------------- **General** * API change in elasticsearch-py 1.7.0 prevented alias operations. Fixed in #486 (HonzaKral) * During index selection you can now select only closed indices with ``--closed-only``. Does not impact ``--all-indices``. Reported in #476. Fixed in #487 (Basster) * API Changes in Elasticsearch 2.0.0 required some refactoring. All tests pass for ES versions 1.0.3 through 2.0.0-rc1. Fixed in #488 (untergeek) * es_repo_mgr now has access to the same SSL options from #462. #489 (untergeek) * Logging improvements requested in #475. (untergeek) * Added ``--quiet`` flag. #494 (untergeek) * Fixed ``index_closed`` to work with AWS Elasticsearch. #499 (univerio) * Acceptable versions of Elasticsearch-py module are 1.8.0 up to 2.1.0 (untergeek) 3.3.0 (31 August 2015) ---------------------- **Announcement** * Curator is tested in Jenkins. Each commit to the master branch is tested with both Python versions 2.7.6 and 3.4.0 against each of the following Elasticsearch versions: * 1.7_nightly * 1.6_nightly * 1.7.0 * 1.6.1 * 1.5.1 * 1.4.4 * 1.3.9 * 1.2.4 * 1.1.2 * 1.0.3 * If you are using a version different from this, your results may vary. **General** * Allocation type can now also be ``include`` or ``exclude``, in addition to the existing default ``require`` type. Add ``--type`` to the allocation command to specify the type. #443 (steffo) * Bump elasticsearch python module dependency to 1.6.0+ to enable synced_flush API call. Reported in #447 (untergeek) * Add SSL features, ``--ssl-no-validate`` and ``certificate`` to provide other ways to validate SSL connections to Elasticsearch. #436 (untergeek) **Bug fixes** * Delete by space was only reporting space used by primary shards. Fixed to show all space consumed. Reported in #455 (untergeek) * Update exit codes and messages for snapshot selection. Reported in #452 (untergeek) * Fix potential int/float casting issues. Reported in #465 (untergeek) 3.2.3 (16 July 2015) -------------------- **Bug fix** * In order to address customer and community issues with bulk deletes, the ``master_timeout`` is now invoked for delete operations. This should address 503s with 30s timeouts in the debug log, even when ``--timeout`` is set to a much higher value. The ``master_timeout`` is tied to the ``--timeout`` flag value, but will not exceed 300 seconds. #420 (untergeek) **General** * Mixing it up a bit here by putting `General` second! The only other changes are that logging has been improved for deletes so you won't need to have the ``--debug`` flag to see if you have error codes >= 400, and some code documentation improvements. 3.2.2 (13 July 2015) -------------------- **General** * This is a very minor change. The ``mock`` library recently removed support for Python 2.6. As many Curator users are using RHEL/CentOS 6, which is pinned to Python 2.6, this requires the mock version referenced by Curator to also be pinned to a supported version (``mock==1.0.1``). 3.2.1 (10 July 2015) -------------------- **General** * Added delete verification & retry (fixed at 3x) to potentially cover an edge case in #420 (untergeek) * Since GitHub allows rST (reStructuredText) README documents, and that's what PyPI wants also, the README has been rebuilt in rST.
(untergeek) **Bug fixes** * If closing indices with ES 1.6+, and all indices are closed, ensure that the seal command does not try to seal all indices. Reported in #426 (untergeek) * Capture AttributeError when sealing indices if a non-TransportError occurs. Reported in #429 (untergeek) 3.2.0 (25 June 2015) -------------------- **New!** * Added support to manually seal, or perform a [synced flush](http://www.elastic.co/guide/en/elasticsearch/reference/current/indices-synced-flush.html) on indices with the ``seal`` command. #394 (untergeek) * Added *experimental* support for SSL certificate validation. In order for this to work, you must install the ``certifi`` python module: ``pip install certifi`` This feature *should* automatically work if the ``certifi`` module is installed. Please report any issues. **General** * Changed logging to go to stdout rather than stderr. Reopened #121 and figured they were right. This is better. (untergeek) * Exit code 99 was unpopular. It has been removed. Reported in #371 and #391 (untergeek) * Add ``--skip-repo-validation`` flag for snapshots. Do not validate write access to repository on all cluster nodes before proceeding. Useful for shared filesystems where intermittent timeouts can affect validation, but won't likely affect snapshot success. Requested in #396 (untergeek) * An alias no longer needs to be pre-existent in order to use the alias command. #317 (untergeek) * es_repo_mgr now passes through upstream errors in the event a repository fails to be created. Requested in #405 (untergeek) **Bug fixes** * In rare cases, ``*`` wildcard would not expand. Replaced with _all. Reported in #399 (untergeek) * Beginning with Elasticsearch 1.6, closed indices cannot have their replica count altered. Attempting to do so results in this error: ``org.elasticsearch.ElasticsearchIllegalArgumentException: Can't update [index.number_of_replicas] on closed indices [[test_index]] - can leave index in an unopenable state`` As a result, the ``change_replicas`` method has been updated to prune closed indices. This change will apply to all versions of Elasticsearch. Reported in #400 (untergeek) * Fixed es_repo_mgr repository creation verification error. Reported in #389 (untergeek) 3.1.0 (21 May 2015) ------------------- **General** * If ``wait_for_completion`` is true, snapshot success is now tested and logged. Reported in #253 (untergeek) * Log & return false if a snapshot is already in progress (untergeek) * Logs individual deletes per index, even though they happen in batch mode. Also log individual snapshot deletions. Reported in #372 (untergeek) * Moved ``chunk_index_list`` from cli to api utils as it's now also used by ``filter.py`` * Added a warning and 10 second timer countdown if you use ``--timestring`` to filter indices, but do not use ``--older-than`` or ``--newer-than`` in conjunction with it. This is to address #348, which behavior isn't a bug, but prevents accidental action against all of your time-series indices. The warning and timer are not displayed for ``show`` and ``--dry-run`` operations. * Added tests for ``es_repo_mgr`` in #350 * Doc fixes **Bug fixes** * delete-by-space needed the same fix used for #245. Fixed in #353 (untergeek) * Increase default client timeout for ``es_repo_mgr`` as node discovery and availability checks for S3 repositories can take a bit. Fixed in #352 (untergeek) * If an index is closed, indicate in ``show`` and ``--dry-run`` output. Reported in #327. 
(untergeek) * Fix issue where CLI parameters were not being passed to the ``es_repo_mgr`` create sub-command. Reported in #337. (feltnerm) 3.0.3 (27 Mar 2015) ------------------- **Announcement** This is a bug fix release. #319 and #320 are affecting a few users, so this release is being expedited. Test count: 228 Code coverage: 99% **General** * Documentation for the CLI converted to Asciidoc and moved to http://www.elastic.co/guide/en/elasticsearch/client/curator/current/index.html * Improved logging, and refactored a few methods to help with this. * Dry-run output is now more like v2, with the index or snapshot in the log line, along with the command. Several tests needed refactoring with this change, along with a bit of documentation. **Bug fixes** * Fix links to repository in setup.py. Reported in #318 (untergeek) * No more ``--delay`` with optimized indices. Reported in #319 (untergeek) * ``--request_timeout`` not working as expected. Reinstate the version 2 timeout override feature to prevent default timeouts for ``optimize`` and ``snapshot`` operations. Reported in #320 (untergeek) * Reduce index count to 200 for test.integration.test_cli_commands.TestCLISnapshot.test_cli_snapshot_huge_list in order to reduce or eliminate Jenkins CI test timeouts. Reported in #324 (untergeek) * ``--dry-run`` no longer calls ``show``, but will show output in the log, as in v2. This was a recurring complaint. See #328 (untergeek) 3.0.2 (23 Mar 2015) ------------------- **Announcement** This is a bug fix release. #307 and #309 were big enough to warrant an expedited release. **Bug fixes** * Purge unneeded constants, and clean up config options for snapshot. Reported in #303 (untergeek) * Don't split large index list if performing snapshots. Reported in #307 (untergeek) * Act correctly if a zero value for `--older-than` or `--newer-than` is provided. #309 (untergeek) 3.0.1 (16 Mar 2015) ------------------- **Announcement** The ``regex_iterate`` method was horribly named. It has been renamed to ``apply_filter``. Methods have been added to allow API users to build a filtered list of indices similarly to how the CLI does. This was an oversight. Props to @SegFaultAX for pointing this out. **General** * In conjunction with the rebrand to Elastic, URLs and documentation were updated. * Renamed horribly named `regex_iterate` method to `apply_filter` #298 (untergeek) * Added `build_filter` method to mimic CLI calls. #298 (untergeek) * Added Examples page in the API documentation. #298 (untergeek) **Bug fixes** * Refactored to show `--dry-run` info for `--disk-space` calls. Reported in #290 (untergeek) * Added list chunking so acting on huge lists of indices won't result in a URL bigger than 4096 bytes (Elasticsearch's default limit.) Reported in https://github.com/elastic/curator/issues/245#issuecomment-77916081 * Refactored `to_csv()` method to be simpler. * Added and removed tests according to changes. Code coverage still at 99% 3.0.0 (9 March 2015) -------------------- **Release Notes** The full release of Curator 3.0 is out! Check out all of the changes here! *Note:* This release is _not_ reverse compatible with any previous version. Because 3.0 is a major point release, there have been some major changes to both the API as well as the CLI arguments and structure. Be sure to read the updated command-line specific docs in the [wiki](https://github.com/elasticsearch/curator/wiki) and change your command-line arguments accordingly. The API docs are still at http://curator.readthedocs.io. 
Be sure to read the latest docs, or select the docs for 3.0.0. **General** * **Breaking changes to the API.** Because this is a major point revision, changes to the API have been made which are non-reverse compatible. Before upgrading, be sure to update your scripts and test them thoroughly. * **Python 3 support** Somewhere along the line, Curator would no longer work with Python 3. All tests now pass for both Python2 and Python3, with 99% code coverage in both environments. * **New CLI library.** Using Click now. http://click.pocoo.org/3/ This change is especially important as it allows very easy CLI integration testing. * **Pipelined filtering!** You can now use ``--older-than`` & ``--newer-than`` in the same command! You can also provide your own regex via the ``--regex`` parameter. You can use multiple instances of the ``--exclude`` flag. * **Manually include indices!** With the ``--index`` parameter, you can add an index to the working list. You can provide multiple instances of the ``--index`` parameter as well! * **Tests!** So many tests now. Test coverage of the API methods is at 100% now, and at 99% for the CLI methods. This doesn't mean that all of the tests are perfect, or that I haven't missed some scenarios. It does mean, however, that it will be much easier to write tests if something turns up missed. It also means that any new functionality will now need to have tests. * **Iteration changes** Methods now only iterate through each index when appropriate! In fact, the only commands that iterate are `alias` and `optimize`. The `bloom` command will iterate, but only if you have added the `--delay` flag with a value greater than zero. * **Improved packaging!** Methods have been moved into categories of ``api`` and ``cli``, and further broken out into individual modules to help them be easier to find and read. * Check for allocation before potentially re-applying an allocation rule. #273 (ferki) * Assigning replica count and routing allocation rules _can_ be done to closed indices. #283 (ferki) **Bug fixes** * Don't accidentally delete ``.kibana`` index. #261 (malagoli) * Fix segment count for empty indices. #265 (untergeek) * Change bloom filter cutoff Elasticsearch version to 1.4. Reported in #267 (untergeek) 3.0.0rc1 (5 March 2015) ----------------------- **Release Notes** RC1 is here! I'm re-releasing the Changes from all betas here, minus the intra-beta code fixes. Barring any show stoppers, the official release will be soon. **General** * **Breaking changes to the API.** Because this is a major point revision, changes to the API have been made which are non-reverse compatible. Before upgrading, be sure to update your scripts and test them thoroughly. * **Python 3 support** Somewhere along the line, Curator would no longer work with Python 3. All tests now pass for both Python2 and Python3, with 99% code coverage in both environments. * **New CLI library.** Using Click now. http://click.pocoo.org/3/ This change is especially important as it allows very easy CLI integration testing. * **Pipelined filtering!** You can now use ``--older-than`` & ``--newer-than`` in the same command! You can also provide your own regex via the ``--regex`` parameter. You can use multiple instances of the ``--exclude`` flag. * **Manually include indices!** With the ``--index`` parameter, you can add an index to the working list. You can provide multiple instances of the ``--index`` parameter as well! * **Tests!** So many tests now.
Test coverage of the API methods is at 100% now, and at 99% for the CLI methods. This doesn't mean that all of the tests are perfect, or that I haven't missed some scenarios. It does mean, however, that it will be much easier to write tests if something turns up missed. It also means that any new functionality will now need to have tests. * Methods now only iterate through each index when appropriate! * Improved packaging! Hopefully the ``entry_point`` issues some users have had will be addressed by this. Methods have been moved into categories of ``api`` and ``cli``, and further broken out into individual modules to help them be easier to find and read. * Check for allocation before potentially re-applying an allocation rule. #273 (ferki) * Assigning replica count and routing allocation rules _can_ be done to closed indices. #283 (ferki) **Bug fixes** * Don't accidentally delete ``.kibana`` index. #261 (malagoli) * Fix segment count for empty indices. #265 (untergeek) * Change bloom filter cutoff Elasticsearch version to 1.4. Reported in #267 (untergeek) 3.0.0b4 (5 March 2015) ---------------------- **Notes** Integration testing! Because I finally figured out how to use the Click Testing API, I now have a good collection of command-line simulations, complete with a real back-end. This testing found a few bugs (this is why testing exists, right?), and fixed a few of them. **Bug fixes** * HUGE! `curator show snapshots` would _delete_ snapshots. This is fixed. * Return values are now being sent from the commands. * `scripttest` is no longer necessary (click.Test works!) * Calling `get_snapshot` without a snapshot name returns all snapshots 3.0.0b3 (4 March 2015) ---------------------- **Bug fixes** * setup.py was lacking the new packages "curator.api" and "curator.cli". The package works now. * Python3 suggested I had to normalize the beta tag to just b3, so that's also changed. * Cleaned out superfluous imports and logger references from the __init__.py files. 3.0.0-beta2 (3 March 2015) -------------------------- **Bug fixes** * Python3 issues resolved. Tests now pass on both Python2 and Python3 3.0.0-beta1 (3 March 2015) -------------------------- **General** * **Breaking changes to the API.** Because this is a major point revision, changes to the API have been made which are non-reverse compatible. Before upgrading, be sure to update your scripts and test them thoroughly. * **New CLI library.** Using Click now. http://click.pocoo.org/3/ * **Pipelined filtering!** You can now use ``--older-than`` & ``--newer-than`` in the same command! You can also provide your own regex via the ``--regex`` parameter. You can use multiple instances of the ``--exclude`` flag. * **Manually include indices!** With the ``--index`` parameter, you can add an index to the working list. You can provide multiple instances of the ``--index`` parameter as well! * **Tests!** So many tests now. Unit test coverage of the API methods is at 100% now. This doesn't mean that all of the tests are perfect, or that I haven't missed some scenarios. It does mean that any new functionality will need to also have tests, now. * Methods now only iterate through each index when appropriate! * Improved packaging! Hopefully the ``entry_point`` issues some users have had will be addressed by this. Methods have been moved into categories of ``api`` and ``cli``, and further broken out into individual modules to help them be easier to find and read. * Check for allocation before potentially re-applying an allocation rule.
#273 (ferki) **Bug fixes** * Don't accidentally delete ``.kibana`` index. #261 (malagoli) * Fix segment count for empty indices. #265 (untergeek) * Change bloom filter cutoff Elasticsearch version to 1.4. Reported in #267 (untergeek) 2.1.2 (22 January 2015) ----------------------- **Bug fixes** * Do not try to set replica count if count matches provided argument. #247 (bobrik) * Fix JSON logging (Logstash format). #250 (magnusbaeck) * Fix bug in `filter_by_space()` which would match all indices if the provided patterns found no matches. Reported in #254 (untergeek) 2.1.1 (30 December 2014) ------------------------ **Bug fixes** * Renamed unnecessarily redundant ``--replicas`` to ``--count`` in args for ``curator_script.py`` 2.1.0 (30 December 2014) ------------------------ **General** * Snapshot name now appears in log output or STDOUT. #178 (untergeek) * Replicas! You can now change the replica count of indices. Requested in #175 (untergeek) * Delay option added to Bloom Filter functionality. #206 (untergeek) * Add 2-digit years as acceptable pattern (y vs. Y). Reported in #209 (untergeek) * Add Docker container definition #226 (christianvozar) * Allow the use of 0 with --older-than, --most-recent and --delete-older-than. See #208. #211 (bobrik) **Bug fixes** * Edge case where 1.4.0.Beta1-SNAPSHOT would break version check. Reported in #183 (untergeek) * Typo fixed. #193 (ferki) * Typo fixed. #204 (gheppner) * Shows proper error in the event of concurrent snapshots. #177 (untergeek) * Fixes erroneous index display of ``_, a, l, l`` when --all-indices selected. Reported in #222 (untergeek) * Use json.dumps() to escape exceptions. Reported in #210 (untergeek) * Check if index is closed before adding to alias. Reported in #214 (bt5e) * No longer force-install argparse if pre-installed #216 (whyscream) * Bloom filters have been removed from Elasticsearch 1.5.0. Update methods and tests to act accordingly. #233 (untergeek) 2.0.2 (8 October 2014) ---------------------- **Bug fixes** * Snapshot name not displayed in log or STDOUT #185 (untergeek) * Variable name collision in delete_snapshot() #186 (untergeek) 2.0.1 (1 October 2014) ---------------------- **Bug fix** * Override default timeout when snapshotting --all-indices #179 (untergeek) 2.0.0 (25 September 2014) ------------------------- **General** * New! Separation of Elasticsearch Curator Python API and curator_script.py (untergeek) * New! ``--delay`` after optimize to allow cluster to quiesce #131 (untergeek) * New! ``--suffix`` option in addition to ``--prefix`` #136 (untergeek) * New! Support for wildcards in prefix & suffix #136 (untergeek) * Complete refactor of snapshots. Now supporting incrementals! (untergeek) **Bug fix** * Incorrect error msg if no indices sent to create_snapshot (untergeek) * Correct for API change coming in ES 1.4 #168 (untergeek) * Missing ``"`` in Logstash log format #143 (cassianoleal) * Change non-master node test to exit code 0, log as ``INFO``.
#145 (untergeek) * `months` option missing from validate_timestring() (untergeek) 1.2.2 (29 July 2014) -------------------- **Bug fix** * Updated ``README.md`` to briefly explain what curator does #117 (untergeek) * Fixed ``es_repo_mgr`` logging whitelist #119 (untergeek) * Fixed absent ``months`` time-unit #120 (untergeek) * Filter out ``.marvel-kibana`` when prefix is ``.marvel-`` #120 (untergeek) * Clean up arg parsing code where redundancy exists #123 (untergeek) * Properly divide debug from non-debug logging #125 (untergeek) * Fixed ``show`` command bug caused by changes to command structure #126 (michaelweiser) 1.2.1 (24 July 2014) -------------------- **Bug fix** * Fixed the new logging when called by ``curator`` entrypoint. 1.2.0 (24 July 2014) -------------------- **General** * New! Allow user-specified date patterns: ``--timestring`` #111 (untergeek) * New! Curate weekly indices (must use week of year) #111 (untergeek) * New! Log output in logstash format ``--logformat logstash`` #111 (untergeek) * Updated! Cleaner default logs (debug still shows everything) (untergeek) * Improved! Dry runs are more visible in log output (untergeek) Errata * The ``--separator`` option was removed in lieu of user-specified date patterns. * Default ``--timestring`` for days: ``%Y.%m.%d`` (Same as before) * Default ``--timestring`` for hours: ``%Y.%m.%d.%H`` (Same as before) * Default ``--timestring`` for weeks: ``%Y.%W`` 1.1.3 (18 July 2014) -------------------- **Bug fix** * Prefix not passed in ``get_object_list()`` #106 (untergeek) * Use ``os.devnull`` instead of ``/dev/null`` for Windows #102 (untergeek) * The http auth feature was erroneously omitted #100 (bbuchacher) 1.1.2 (13 June 2014) -------------------- **Bug fix** * This was a showstopper bug for anyone using RHEL/CentOS with a Python 2.6 dependency for yum * Python 2.6 does not like format calls without an index. #96 via #95 (untergeek) * We won't talk about what happened to 1.1.1. No really. I hate git today :( 1.1.0 (12 June 2014) -------------------- **General** * Updated! New command structure * New! Snapshot to fs or s3 #82 (untergeek) * New! Add/Remove indices to alias #82 via #86 (cschellenger) * New! ``--exclude-pattern`` #80 (ekamil) * New! (sort of) Restored ``--log-level`` support #73 (xavier-calland) * New! show command-line options #82 via #68 (untergeek) * New! Shard Allocation Routing #82 via #62 (nickethier) **Bug fix** * Fix ``--max_num_segments`` not being passed correctly #74 (untergeek) * Change ``BUILD_NUMBER`` to ``CURATOR_BUILD_NUMBER`` in ``setup.py`` #60 (mohabusama) * Fix off-by-one error in time calculations #66 (untergeek) * Fix testing with python3 #92 (untergeek) Errata * Removed ``optparse`` compatibility. Now requires ``argparse``. 1.0.0 (25 Mar 2014) ------------------- **General** * compatible with ``elasticsearch-py`` 1.0 and Elasticsearch 1.0 (honzakral) * Lots of tests! 
(honzakral) * Streamline code for 1.0 ES versions (honzakral) **Bug fix** * Fix ``find_expired_indices()`` to not skip closed indices (honzakral) 0.6.2 (18 Feb 2014) ------------------- **General** * Documentation fixes #38 (dharrigan) * Add support for HTTPS URI scheme and ``optparse`` compatibility for Python 2.6 (gelim) * Add elasticsearch module version checking for future compatibility checks (untergeek) 0.6.1 (08 Feb 2014) ------------------- **General** * Added tarball versioning to ``setup.py`` (untergeek) **Bug fix** * Fix ``long_description`` by including ``README.md`` in ``MANIFEST.in`` (untergeek) * Incorrect version number in ``curator.py`` (untergeek) 0.6.0 (08 Feb 2014) ------------------- **General** * Restructured repository to a be a proper python package. (arieb) * Added ``setup.py`` file. (arieb) * Removed the deprecated file ``logstash_index_cleaner.py`` (arieb) * Updated ``README.md`` to fit the new package, most importantly the usage and installation. (arieb) * Fixes and package push to PyPI (untergeek) 0.5.2 (26 Jan 2014) ------------------- **General** * Fix boolean logic determining hours or days for time selection (untergeek) 0.5.1 (20 Jan 2014) ------------------- **General** * Fix ``can_bloom`` to compare numbers (HonzaKral) * Switched ``find_expired_indices()`` to use ``datetime`` and ``timedelta`` * Do not try and catch unrecoverable exceptions. (HonzaKral) * Future proofing the use of the elasticsearch client (i.e. work with version 1.0+ of Elasticsearch) (HonzaKral) Needs more testing, but should work. * Add tests for these scenarios (HonzaKral) 0.5.0 (17 Jan 2014) ------------------- **General** * Deprecated ``logstash_index_cleaner.py`` Use new ``curator.py`` instead (untergeek) * new script change: ``curator.py`` (untergeek) * new add index optimization (Lucene forceMerge) to reduce segments and therefore memory usage. (untergeek) * update refactor of args and several functions to streamline operation and make it more readable (untergeek) * update refactor further to clean up and allow immediate (and future) portability (HonzaKral) 0.4.0 ----- **General** * First version logged in ``CHANGELOG`` * new ``--disable-bloom-days`` feature requires 0.90.9+ http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/index-modules-codec.html#bloom-postings This can save a lot of heap space on cold indexes (i.e. not actively indexing documents) curator-5.2.0/docs/Makefile000066400000000000000000000152061315226075300155750ustar00rootroot00000000000000# Makefile for Sphinx documentation # # You can set these variables from the command line. SPHINXOPTS = SPHINXBUILD = sphinx-build PAPER = BUILDDIR = _build # User-friendly check for sphinx-build ifeq ($(shell which $(SPHINXBUILD) >/dev/null 2>&1; echo $$?), 1) $(error The '$(SPHINXBUILD)' command was not found. Make sure you have Sphinx installed, then set the SPHINXBUILD environment variable to point to the full path of the '$(SPHINXBUILD)' executable. Alternatively you can add the directory with the executable to your PATH. If you don't have Sphinx installed, grab it from http://sphinx-doc.org/) endif # Internal variables. PAPEROPT_a4 = -D latex_paper_size=a4 PAPEROPT_letter = -D latex_paper_size=letter ALLSPHINXOPTS = -d $(BUILDDIR)/doctrees $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) . # the i18n builder cannot share the environment and doctrees with the others I18NSPHINXOPTS = $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) . 
.PHONY: help clean html dirhtml singlehtml pickle json htmlhelp qthelp devhelp epub latex latexpdf text man changes linkcheck doctest gettext help: @echo "Please use \`make <target>' where <target> is one of" @echo " html to make standalone HTML files" @echo " dirhtml to make HTML files named index.html in directories" @echo " singlehtml to make a single large HTML file" @echo " pickle to make pickle files" @echo " json to make JSON files" @echo " htmlhelp to make HTML files and a HTML help project" @echo " qthelp to make HTML files and a qthelp project" @echo " devhelp to make HTML files and a Devhelp project" @echo " epub to make an epub" @echo " latex to make LaTeX files, you can set PAPER=a4 or PAPER=letter" @echo " latexpdf to make LaTeX files and run them through pdflatex" @echo " latexpdfja to make LaTeX files and run them through platex/dvipdfmx" @echo " text to make text files" @echo " man to make manual pages" @echo " texinfo to make Texinfo files" @echo " info to make Texinfo files and run them through makeinfo" @echo " gettext to make PO message catalogs" @echo " changes to make an overview of all changed/added/deprecated items" @echo " xml to make Docutils-native XML files" @echo " pseudoxml to make pseudoxml-XML files for display purposes" @echo " linkcheck to check all external links for integrity" @echo " doctest to run all doctests embedded in the documentation (if enabled)" clean: rm -rf $(BUILDDIR)/* html: $(SPHINXBUILD) -b html $(ALLSPHINXOPTS) $(BUILDDIR)/html @echo @echo "Build finished. The HTML pages are in $(BUILDDIR)/html." dirhtml: $(SPHINXBUILD) -b dirhtml $(ALLSPHINXOPTS) $(BUILDDIR)/dirhtml @echo @echo "Build finished. The HTML pages are in $(BUILDDIR)/dirhtml." singlehtml: $(SPHINXBUILD) -b singlehtml $(ALLSPHINXOPTS) $(BUILDDIR)/singlehtml @echo @echo "Build finished. The HTML page is in $(BUILDDIR)/singlehtml." pickle: $(SPHINXBUILD) -b pickle $(ALLSPHINXOPTS) $(BUILDDIR)/pickle @echo @echo "Build finished; now you can process the pickle files." json: $(SPHINXBUILD) -b json $(ALLSPHINXOPTS) $(BUILDDIR)/json @echo @echo "Build finished; now you can process the JSON files." htmlhelp: $(SPHINXBUILD) -b htmlhelp $(ALLSPHINXOPTS) $(BUILDDIR)/htmlhelp @echo @echo "Build finished; now you can run HTML Help Workshop with the" \ ".hhp project file in $(BUILDDIR)/htmlhelp." qthelp: $(SPHINXBUILD) -b qthelp $(ALLSPHINXOPTS) $(BUILDDIR)/qthelp @echo @echo "Build finished; now you can run "qcollectiongenerator" with the" \ ".qhcp project file in $(BUILDDIR)/qthelp, like this:" @echo "# qcollectiongenerator $(BUILDDIR)/qthelp/Elasticsearch.qhcp" @echo "To view the help file:" @echo "# assistant -collectionFile $(BUILDDIR)/qthelp/Elasticsearch.qhc" devhelp: $(SPHINXBUILD) -b devhelp $(ALLSPHINXOPTS) $(BUILDDIR)/devhelp @echo @echo "Build finished." @echo "To view the help file:" @echo "# mkdir -p $$HOME/.local/share/devhelp/Elasticsearch" @echo "# ln -s $(BUILDDIR)/devhelp $$HOME/.local/share/devhelp/Elasticsearch" @echo "# devhelp" epub: $(SPHINXBUILD) -b epub $(ALLSPHINXOPTS) $(BUILDDIR)/epub @echo @echo "Build finished. The epub file is in $(BUILDDIR)/epub." latex: $(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex @echo @echo "Build finished; the LaTeX files are in $(BUILDDIR)/latex." @echo "Run \`make' in that directory to run these through (pdf)latex" \ "(use \`make latexpdf' here to do that automatically)." latexpdf: $(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex @echo "Running LaTeX files through pdflatex..."
$(MAKE) -C $(BUILDDIR)/latex all-pdf @echo "pdflatex finished; the PDF files are in $(BUILDDIR)/latex." latexpdfja: $(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex @echo "Running LaTeX files through platex and dvipdfmx..." $(MAKE) -C $(BUILDDIR)/latex all-pdf-ja @echo "pdflatex finished; the PDF files are in $(BUILDDIR)/latex." text: $(SPHINXBUILD) -b text $(ALLSPHINXOPTS) $(BUILDDIR)/text @echo @echo "Build finished. The text files are in $(BUILDDIR)/text." man: $(SPHINXBUILD) -b man $(ALLSPHINXOPTS) $(BUILDDIR)/man @echo @echo "Build finished. The manual pages are in $(BUILDDIR)/man." texinfo: $(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo @echo @echo "Build finished. The Texinfo files are in $(BUILDDIR)/texinfo." @echo "Run \`make' in that directory to run these through makeinfo" \ "(use \`make info' here to do that automatically)." info: $(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo @echo "Running Texinfo files through makeinfo..." make -C $(BUILDDIR)/texinfo info @echo "makeinfo finished; the Info files are in $(BUILDDIR)/texinfo." gettext: $(SPHINXBUILD) -b gettext $(I18NSPHINXOPTS) $(BUILDDIR)/locale @echo @echo "Build finished. The message catalogs are in $(BUILDDIR)/locale." changes: $(SPHINXBUILD) -b changes $(ALLSPHINXOPTS) $(BUILDDIR)/changes @echo @echo "The overview file is in $(BUILDDIR)/changes." linkcheck: $(SPHINXBUILD) -b linkcheck $(ALLSPHINXOPTS) $(BUILDDIR)/linkcheck @echo @echo "Link check complete; look for any errors in the above output " \ "or in $(BUILDDIR)/linkcheck/output.txt." doctest: $(SPHINXBUILD) -b doctest $(ALLSPHINXOPTS) $(BUILDDIR)/doctest @echo "Testing of doctests in the sources finished, look at the " \ "results in $(BUILDDIR)/doctest/output.txt." xml: $(SPHINXBUILD) -b xml $(ALLSPHINXOPTS) $(BUILDDIR)/xml @echo @echo "Build finished. The XML files are in $(BUILDDIR)/xml." pseudoxml: $(SPHINXBUILD) -b pseudoxml $(ALLSPHINXOPTS) $(BUILDDIR)/pseudoxml @echo @echo "Build finished. The pseudo-XML files are in $(BUILDDIR)/pseudoxml." curator-5.2.0/docs/actionclasses.rst000066400000000000000000000032101315226075300175120ustar00rootroot00000000000000.. _actionclasses: Action Classes ============== .. seealso:: It is important to note that each action has a `do_action()` method, which accepts no arguments. This is the means by which all actions are executed. * `Alias`_ * `Allocation`_ * `Close`_ * `ClusterRouting`_ * `CreateIndex`_ * `DeleteIndices`_ * `DeleteSnapshots`_ * `ForceMerge`_ * `IndexSettings`_ * `Open`_ * `Reindex`_ * `Replicas`_ * `Restore`_ * `Rollover`_ * `Shrink`_ * `Snapshot`_ Alias ----- .. autoclass:: curator.actions.Alias :members: Allocation ---------- .. autoclass:: curator.actions.Allocation :members: Close ----- .. autoclass:: curator.actions.Close :members: ClusterRouting -------------- .. autoclass:: curator.actions.ClusterRouting :members: CreateIndex -------------- .. autoclass:: curator.actions.CreateIndex :members: DeleteIndices ------------- .. autoclass:: curator.actions.DeleteIndices :members: DeleteSnapshots --------------- .. autoclass:: curator.actions.DeleteSnapshots :members: ForceMerge ---------- .. autoclass:: curator.actions.ForceMerge :members: IndexSettings -------------- .. autoclass:: curator.actions.IndexSettings :members: Open ---- .. autoclass:: curator.actions.Open :members: Reindex -------- .. autoclass:: curator.actions.Reindex :members: Replicas -------- .. autoclass:: curator.actions.Replicas :members: Restore -------- .. 
autoclass:: curator.actions.Restore :members: Rollover -------- .. autoclass:: curator.actions.Rollover :members: Shrink -------- .. autoclass:: curator.actions.Shrink :members: Snapshot -------- .. autoclass:: curator.actions.Snapshot :members: curator-5.2.0/docs/asciidoc/000077500000000000000000000000001315226075300157075ustar00rootroot00000000000000curator-5.2.0/docs/asciidoc/about.asciidoc000066400000000000000000000123771315226075300205330ustar00rootroot00000000000000[[about]] = About [partintro] -- Elasticsearch Curator helps you curate, or manage, your Elasticsearch indices and snapshots by: 1. Obtaining the full list of indices (or snapshots) from the cluster, as the _actionable list_ 2. Iterate through a list of user-defined <> to progressively remove indices (or snapshots) from this _actionable list_ as needed. 3. Perform various <> on the items which remain in the _actionable list._ Learn More: * <> * <> * <> * <> * <> * <> * <> -- [[about-origin]] == Origin Curator was first called https://logstash.jira.com/browse/LOGSTASH-211[`clearESindices.py`]. Its sole function was to delete indices. It was almost immediately renamed to https://logstash.jira.com/browse/LOGSTASH-211[`logstash_index_cleaner.py`]. After a time it was briefly relocated under the https://github.com/elastic/logstash[logstash] repository as `expire_logs`, at which point it began to gain new functionality. Soon thereafter, Jordan Sissel was hired by Elastic (then still Elasticsearch), as was the original author of Curator. Not long after that it became Elasticsearch Curator and is now hosted at https://github.com/elastic/curator Curator now performs many operations on your Elasticsearch indices, from delete to snapshot to shard allocation routing. [[about-features]] == Features Curator allows for many different operations to be performed to both indices and snapshots, including: * Add or remove indices (or both!) from an <> * Change shard routing <> * <> indices * <> * <> * <> * <> closed indices * <> indices * <> indices, including from remote clusters * Change the number of <> per shard for indices * <> indices * Take a <> (backup) of indices * <> snapshots [[about-cli]] == Command-Line Interface (CLI) Curator has always been a command-line tool. This site provides the documentation for how to use Curator on the command-line. TIP: Learn more about <>. [[about-api]] == Application Program Interface (API) Curator ships with both an API and CLI tool. The API, or Application Program Interface, allows you to write your own scripts to accomplish similar goals--or even new and different things--with the same code that Curator uses. The API documentation is not on this site, but is available at http://curator.readthedocs.io/. The Curator API is built using the http://www.elastic.co/guide/en/elasticsearch/client/python-api/current/index.html[Elasticsearch Python API]. [[license]] == License Copyright (c) 2011–2017 Elasticsearch Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. 
[[site-corrections]] == Site Corrections All documentation on this site allows for quick correction submission. To do this, use the "Edit" link to the right on any page. * You will need to log in with your GitHub account. * Make your corrections or additions to the documentation. * Please use the option to "Create a new branch for this commit and start a pull request." * Please make sure you have signed our http://www.elastic.co/contributor-agreement/[Contributor License Agreement]. We are not asking you to assign copyright to us, but to give us the right to distribute your code (even documentation corrections) without restriction. We ask this of all contributors in order to assure our users of the origin and continuing existence of the code. You only need to sign the CLA once. If you are uncomfortable with this, feel free to submit a https://github.com/elastic/curator/issues[GitHub Issue] with your suggested correction instead. * Changes will be reviewed and merged, if acceptable. [[about-contributing]] == Contributing We welcome contributions and bug fixes to Curator's API and CLI. We are grateful for the many https://github.com/elastic/curator/blob/master/CONTRIBUTORS[contributors] who have helped Curator become what it is today. Please read through our https://github.com/elastic/curator/blob/master/CONTRIBUTING.md[contribution] guide, and the Curator https://github.com/elastic/curator/blob/master/README.rst[readme] document. A brief overview of the steps * fork the repo * make changes in your fork * add tests to cover your changes (if necessary) * run tests * sign the http://elastic.co/contributor-agreement/[CLA] * send a pull request! TIP: To submit documentation fixes for this site, see <> curator-5.2.0/docs/asciidoc/actions.asciidoc000066400000000000000000001021031315226075300210440ustar00rootroot00000000000000[[actions]] = Actions [partintro] -- Actions are the tasks which Curator can perform on your indices. Snapshots, once created, can only be deleted. * <> * <> * <> * <> * <> * <> * <> * <> * <> * <> * <> * <> * <> * <> * <> * <> -- [[alias]] == Alias [source,yaml] ------------- action: alias description: "Add/Remove selected indices to or from the specified alias" options: name: alias_name add: filters: - filtertype: ... remove: filters: - filtertype: ... ------------- NOTE: Empty values and commented lines will result in the default value, if any, being selected. If a setting is set, but not used by a given action, it will be ignored. This action adds and/or removes indices from the alias identified by <> The <> under the `add` and `remove` directives define which indices will be added and/or removed. This is an atomic action, so adds and removes happen instantaneously. The <> option allows the addition of extra settings with the `add` directive. These settings are ignored for `remove`. An example of how these settings can be used to create a filtered alias might be: [source,yaml] ------------- action: alias description: "Add/Remove selected indices to or from the specified alias" options: name: alias_name extra_settings: filter: term: user: kimchy add: filters: - filtertype: ... remove: filters: - filtertype: ... ------------- WARNING: Before creating a filtered alias, first ensure that the fields already exist in the mapping. Learn more about adding filtering and routing to aliases in the {ref}/indices-aliases.html[Elasticsearch Alias API documentation]. 
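Routing can be supplied through `extra_settings` in the same way. The following is an illustrative sketch only; the `routing` value of `"1"` and the single `add` block are hypothetical examples, not taken from the shipped example files:

[source,yaml]
-------------
action: alias
description: "Add selected indices to 'alias_name' with a fixed routing value"
options:
  name: alias_name
  extra_settings:
    # hypothetical routing value for illustration
    routing: "1"
add:
  filters:
  - filtertype: ...
-------------

As with the filtered alias example above, the extra settings are applied with the `add` directive and ignored for `remove`.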
=== Required settings * <> === Optional settings * <> * <> * <> * <> * <> * <> TIP: See an example of this action in an <> <>. [[allocation]] == Allocation [source,yaml] ------------- action: allocation description: "Apply shard allocation filtering rules to the specified indices" options: key: ... value: ... allocation_type: ... filters: - filtertype: ... ------------- NOTE: Empty values and commented lines will result in the default value, if any, being selected. If a setting is set, but not used by a given action, it will be ignored. This action changes the shard routing allocation for the selected indices. See {ref}/shard-allocation-filtering.html for more information. You can optionally set `wait_for_completion` to `True` to have Curator wait for the shard routing to complete before continuing: [source,yaml] ------------- action: allocation description: "Apply shard allocation filtering rules to the specified indices" options: key: ... value: ... allocation_type: ... wait_for_completion: True max_wait: 300 wait_interval: 10 filters: - filtertype: ... ------------- This configuration will wait for a maximum of 300 seconds for shard routing and reallocation to complete before giving up. A `max_wait` value of `-1` will wait indefinitely. Curator will poll for completion at `10` second intervals, as defined by `wait_interval`. === Required settings * <> === Optional settings * <> * <> * <> * <> * <> * <> * <> * <> * <> TIP: See an example of this action in an <> <>. [[close]] == Close [source,yaml] ------------- action: close description: "Close selected indices" options: delete_aliases: False filters: - filtertype: ... ------------- NOTE: Empty values and commented lines will result in the default value, if any, being selected. If a setting is set, but not used by a given action, it will be ignored. This action closes the selected indices, and optionally deletes associated aliases beforehand. === Optional settings * <> * <> * <> * <> * <> TIP: See an example of this action in an <> <>. [[cluster_routing]] == Cluster Routing [source,yaml] ------------- action: cluster_routing description: "Apply routing rules to the entire cluster" options: routing_type: value: ... setting: enable ------------- NOTE: Empty values and commented lines will result in the default value, if any, being selected. If a setting is set, but not used by a given action, it will be ignored. This action changes the shard routing allocation for the entire cluster. See {ref}/shards-allocation.html for more information. You can optionally set `wait_for_completion` to `True` to have Curator wait for the shard routing to complete before continuing: [source,yaml] ------------- action: cluster_routing description: "Apply routing rules to the entire cluster" options: routing_type: value: ... setting: enable wait_for_completion: True max_wait: 300 wait_interval: 10 ------------- This configuration will wait for a maximum of 300 seconds for shard routing and reallocation to complete before giving up. A `max_wait` value of `-1` will wait indefinitely. Curator will poll for completion at `10` second intervals, as defined by `wait_interval`. === Required settings * <> * <> * <> Currently must be set to `enable`. This setting is a placeholder for potential future expansion. === Optional settings * <> * <> * <> * <> * <> * <> TIP: See an example of this action in an <> <>. [[create_index]] == Create Index [source,yaml] ------------- action: create_index description: "Create index as named" options: name: ...
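  # Illustrative comments (not options): 'name' may be a literal string,
  # a Python strftime pattern such as 'myindex-%Y.%m', or an Elasticsearch
  # date math expression, as described in the sections below.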
------------- NOTE: Empty values and commented lines will result in the default value, if any, being selected. If a setting is set, but not used by a given action, it will be ignored. This action creates the named index. There are multiple different ways to configure how the name is represented. === Manual naming [source,yaml] ------------- action: create_index description: "Create index as named" options: name: myindex # ... ------------- In this case, what you see is what you get. An index named `myindex` will be created. === Python strftime [source,yaml] ------------- action: create_index description: "Create index as named" options: name: 'myindex-%Y.%m' # ... ------------- For the `create_index` action, the <> option can contain Python strftime strings. The method for doing so is described in detail, including which strftime strings are acceptable, in the documentation for the <> option. === Date Math [source,yaml] ------------- action: create_index description: "Create index as named" options: name: '<logstash-{now/d}>' # ... ------------- For the `create_index` action, the <> option can be in Elasticsearch {ref}/date-math-index-names.html[date math] format. This allows index names containing dates to use deterministic math to set a date name in the past or the future. For example, if today's date were 2017-03-27, the name `<logstash-{now/d}>` will create an index named `logstash-2017.03.27`. If you wanted to create _tomorrow's_ index, you would use the name `<logstash-{now/d+1d}>`, which adds 1 day. This pattern creates an index named `logstash-2017.03.28`. For many more configuration options, read the Elasticsearch {ref}/date-math-index-names.html[date math] documentation. === Extra Settings The <> option allows the addition of extra settings, such as index settings and mappings. An example of how these settings can be used to create an index might be: [source,yaml] ------------- action: create_index description: "Create index as named" options: name: myindex # ... extra_settings: settings: number_of_shards: 1 number_of_replicas: 0 mappings: type1: properties: field1: type: string index: not_analyzed ------------- === Required settings * <> === Optional settings * <> No default value. You can add any acceptable index settings and mappings as nested YAML. See the {ref}/indices-create-index.html[Elasticsearch Create Index API documentation] for more information. * <> * <> * <> TIP: See an example of this action in an <> <>. [[delete_indices]] == Delete Indices [source,yaml] ------------- action: delete_indices description: "Delete selected indices" options: continue_if_exception: False filters: - filtertype: ... ------------- NOTE: Empty values and commented lines will result in the default value, if any, being selected. If a setting is set, but not used by a given action, it will be ignored. This action deletes the selected indices. In clusters which are overcrowded with indices, or which have a high number of shards per node, deletes can take a longer time to process. In such cases, it may be helpful to set a higher timeout than is set in the <>. You can override that <> as follows: [source,yaml] ------------- action: delete_indices description: "Delete selected indices" options: timeout_override: 300 continue_if_exception: False filters: - filtertype: ... ------------- === Optional settings * <> * <> * <> * <> TIP: See an example of this action in an <> <>. [[delete_snapshots]] == Delete Snapshots [source,yaml] ------------- action: delete_snapshots description: "Delete selected snapshots from 'repository'" options: repository: ...
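    # Illustrative comment: 'repository' must name an existing snapshot
    # repository. The retry settings below control behavior when a snapshot
    # is in progress at delete time.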
retry_interval: 120 retry_count: 3 filters: - filtertype: ... ------------- NOTE: Empty values and commented lines will result in the default value, if any, being selected. If a setting is set, but not used by a given action, it will be ignored. This action deletes the selected snapshots from the selected <>. If a snapshot is currently underway, Curator will retry up to <> times, with a delay of <> seconds between retries. === Required settings * <> === Optional settings * <> * <> * <> * <> * <> * <> TIP: See an example of this action in an <> <>. [[forcemerge]] == Forcemerge [source,yaml] ------------- action: forcemerge description: >- Perform a forceMerge on selected indices to 'max_num_segments' per shard options: max_num_segments: 2 timeout_override: 21600 filters: - filtertype: ... ------------- NOTE: Empty values and commented lines will result in the default value, if any, being selected. If a setting is set, but not used by a given action, it will be ignored. This action performs a forceMerge on the selected indices, merging them to <> per shard. You can optionally pause between each merge for <> seconds to allow the cluster to quiesce: [source,yaml] ------------- action: forcemerge description: >- Perform a forceMerge on selected indices to 'max_num_segments' per shard options: max_num_segments: 2 timeout_override: 21600 delay: 120 filters: - filtertype: ... ------------- === Required settings * <> === Optional settings * <> * <> * <> * <> * <> TIP: See an example of this action in an <> <>. [[index_settings]] == Index Settings [source,yaml] ------------- action: index_settings description: "Change settings for selected indices" options: index_settings: index: refresh_interval: 5s ignore_unavailable: False preserve_existing: False filters: - filtertype: ... ------------- NOTE: Empty values and commented lines will result in the default value, if any, being selected. If a setting is set, but not used by a given action, it will be ignored. This action updates the specified index settings for the selected indices. [IMPORTANT] ======================= While Elasticsearch allows for either dotted notation of index settings, such as [source,json] ------------- PUT /indexname/_settings { "index.blocks.read_only": true } ------------- or in nested structure, like this: [source,json] ------------- PUT /indexname/_settings { "index": { "blocks": { "read_only": true } } } ------------- In order to appropriately detect https://www.elastic.co/guide/en/elasticsearch/reference/5.4/index-modules.html#_static_index_settings[static] vs. https://www.elastic.co/guide/en/elasticsearch/reference/5.4/index-modules.html#dynamic-index-settings[dynamic] settings, and to be able to verify configurational integrity in the YAML file, **Curator does not support using dotted notation.** ======================= === Optional settings * <> * <> * <> * <> * <> * <> TIP: See an example of this action in an <> <>. [[open]] == Open [source,yaml] ------------- action: open description: "open selected indices" options: continue_if_exception: False filters: - filtertype: ... ------------- NOTE: Empty values and commented lines will result in the default value, if any, being selected. If a setting is set, but not used by a given action, it will be ignored. This action opens the selected indices. === Optional settings * <> * <> * <> * <> TIP: See an example of this action in an <> <>. 
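As a concrete illustration, this sketch opens `logstash-` prefixed indices newer than 30 days. The prefix, timestring, and unit count are assumptions made for the sake of the example, not defaults:

[source,yaml]
-------------
action: open
description: "Open logstash- prefixed indices newer than 30 days"
options:
  continue_if_exception: False
filters:
- filtertype: pattern
  kind: prefix
  value: logstash-
- filtertype: age
  source: name
  direction: younger
  timestring: '%Y.%m.%d'
  unit: days
  unit_count: 30
-------------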
[[reindex]] == Reindex [source,yaml] ------------- actions: 1: description: "Reindex index1 into index2" action: reindex options: wait_interval: 9 max_wait: -1 request_body: source: index: index1 dest: index: index2 filters: - filtertype: none ------------- There are many options for the reindex action. The best place to start is in the <> to see how to configure this action. All other options are as follows. === Required settings * <> === Optional settings * <> * <> * <> * <> * <> * <> * <> * <> * <> * <> * <> * <> * <> * <> * <> * <> * <> * <> * <> * <> * <> * <> * <> * <> TIP: See an example of this action in an <> <>. === Compatibility Generally speaking, Curator should be able to perform a remote reindex from any version of Elasticsearch, 1.4 and newer. Strictly speaking, the Reindex API in Elasticsearch _is_ able to reindex from older clusters, but Curator cannot be used to facilitate this due to Curator's dependency on changes released in 1.4. However, there is a https://github.com/elastic/elasticsearch/pull/23805[known bug] with Elasticsearch 5.3.0 not being able to reindex from remote clusters older than 2.0. The patch is available in Elasticsearch 5.3.1. Earlier versions of Elasticsearch 5.x do not suffer from this bug. This bug appeared again in Elasticsearch 5.4.0 and was fixed in 5.4.1; it should not appear in future releases. [[replicas]] == Replicas [source,yaml] ------------- action: replicas description: >- Set the number of replicas per shard for selected indices to 'count' options: count: ... filters: - filtertype: ... ------------- NOTE: Empty values and commented lines will result in the default value, if any, being selected. If a setting is set, but not used by a given action, it will be ignored. This action will set the number of replicas per shard to the value of <>. You can optionally set `wait_for_completion` to `True` to have Curator wait for the replication operation to complete before continuing: [source,yaml] ------------- action: replicas description: >- Set the number of replicas per shard for selected indices to 'count' options: count: ... wait_for_completion: True max_wait: 600 wait_interval: 10 filters: - filtertype: ... ------------- This configuration will wait for a maximum of 600 seconds for all index replicas to be complete before giving up. A `max_wait` value of `-1` will wait indefinitely. Curator will poll for completion at `10` second intervals, as defined by `wait_interval`. === Required settings * <> === Optional settings * <> * <> * <> * <> * <> * <> * <> TIP: See an example of this action in an <> <>. [[restore]] == Restore [source,yaml] ------------- actions: 1: action: restore description: >- Restore all indices in the most recent snapshot with state SUCCESS. Wait for the restore to complete before continuing. Do not skip the repository filesystem access check. Use the other options to define the index/shard settings for the restore. options: repository: # If name is blank, the most recent snapshot by age will be selected name: # If indices is blank, all indices in the snapshot will be restored indices: wait_for_completion: True max_wait: 3600 wait_interval: 10 filters: - filtertype: state state: SUCCESS exclude: - filtertype: ... ------------- NOTE: Empty values and commented lines will result in the default value, if any, being selected. If a setting is set, but not used by a given action, it will be ignored.
This action will restore indices from the indicated <>, from the most recent snapshot identified by the applied filters, or the snapshot identified by <>. === Renaming indices on restore You can cause indices to be renamed at restore with the <> and <> options: [source,yaml] ------------- actions: 1: action: restore description: >- Restore all indices in the most recent snapshot with state SUCCESS. Wait for the restore to complete before continuing. Do not skip the repository filesystem access check. Use the other options to define the index/shard settings for the restore. options: repository: # If name is blank, the most recent snapshot by age will be selected name: # If indices is blank, all indices in the snapshot will be restored indices: rename_pattern: 'index(.+)' rename_replacement: 'restored_index$1' wait_for_completion: True max_wait: 3600 wait_interval: 10 filters: - filtertype: state state: SUCCESS exclude: - filtertype: ... ------------- In this configuration, Elasticsearch will capture whatever appears after `index` and put it after `restored_index`. For example, if I was restoring `index-2017.03.01`, the resulting index would be renamed to `restored_index-2017.03.01`. === Extra settings The <> option allows the addition of extra settings, such as index settings. An example of how these settings can be used to change settings for an index being restored might be: [source,yaml] ------------- actions: 1: action: restore description: >- Restore all indices in the most recent snapshot with state SUCCESS. Wait for the restore to complete before continuing. Do not skip the repository filesystem access check. Use the other options to define the index/shard settings for the restore. options: repository: # If name is blank, the most recent snapshot by age will be selected name: # If indices is blank, all indices in the snapshot will be restored indices: extra_settings: index_settings: number_of_replicas: 0 wait_for_completion: True max_wait: 3600 wait_interval: 10 filters: - filtertype: state state: SUCCESS exclude: - filtertype: ... ------------- In this case, the number of replicas will be applied to the restored indices. For more information see the {ref}/modules-snapshots.html#_changing_index_settings_during_restore[official Elasticsearch Documentation]. === Required settings * <> === Optional settings * <> * <> * <> * <> * <> * <> * <> * <> * <> * <> * <> * <> * <> * <> * <> * <> * <> TIP: See an example of this action in an <> <>. [[rollover]] == Rollover [source,yaml] ------------- action: rollover description: >- Rollover the index associated with index 'name', which should be in the form of prefix-000001 (or similar), or prefix-YYYY.MM.DD-1. options: name: aliasname conditions: max_age: 1d max_docs: 1000000 ------------- This action uses the {ref}/indices-rollover-index.html[Elasticsearch Rollover API] to create a new index, if the described conditions are met. IMPORTANT: When choosing `conditions`, **either** one of <> or <>, **or both** may be used. If both are used, then both conditions must be matched for the rollover to occur. WARNING: If either or both of the <> or <> options are present, they must each have a value. Because there is no default value, neither of these options can be left empty, or Curator will generate an error. === Extra settings The <> option allows the addition of extra index settings (but not mappings). 
An example of how these settings can be used might be: [source,yaml] ------------- action: rollover description: >- Rollover the index associated with index 'name', which should be in the form of prefix-000001 (or similar), or prefix-YYYY.MM.DD-1. options: name: aliasname conditions: max_age: 1d max_docs: 1000000 extra_settings: index.number_of_shards: 3 index.number_of_replicas: 1 timeout_override: continue_if_exception: False disable_action: False ------------- === Required settings * <> The alias name * <> The maximum age that is allowed before triggering a rollover. This _must_ be nested under `conditions:`. There is no default value. If this condition is specified, it must have a value, or Curator will generate an error. * <> The maximum number of documents allowed in an index before triggering a rollover. This _must_ be nested under `conditions:`. There is no default value. If this condition is specified, it must have a value, or Curator will generate an error. === Optional settings * <> No default value. You can add any acceptable index settings (not mappings) as nested YAML. See the {ref}/indices-create-index.html[Elasticsearch Create Index API documentation] for more information. * <> * <> * <> TIP: See an example of this action in an <> <>. [[shrink]] == Shrink [source,yaml] ------------- action: shrink description: >- Shrink selected indices on the node with the most available space. Delete source index after successful shrink, then reroute the shrunk index with the provided parameters. options: ignore_empty_list: True shrink_node: DETERMINISTIC node_filters: permit_masters: False exclude_nodes: ['not_this_node'] number_of_shards: 1 number_of_replicas: 1 shrink_prefix: shrink_suffix: '-shrink' delete_after: True post_allocation: allocation_type: include key: node_tag value: cold wait_for_active_shards: 1 extra_settings: settings: index.codec: best_compression wait_for_completion: True wait_interval: 9 max_wait: -1 filters: - filtertype: ... ------------- NOTE: Empty values and commented lines will result in the default value, if any, being selected. If a setting is set, but not used by a given action, it will be ignored. Shrinking an index is a good way to reduce the total shard count in your cluster. https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-shrink-index.html#_shrinking_an_index[Several conditions need to be met] in order for index shrinking to take place: * The index must be marked as read-only * A (primary or replica) copy of every shard in the index must be relocated to the same node * The cluster must have health `green` * The target index must not exist * The number of primary shards in the target index must be a factor of the number of primary shards in the source index. * The source index must have more primary shards than the target index. * The index must not contain more than 2,147,483,519 documents in total across all shards that will be shrunk into a single shard on the target index as this is the maximum number of docs that can fit into a single shard. * The node handling the shrink process must have sufficient free disk space to accommodate a second copy of the existing index. Curator will try to meet these conditions. If it is unable to meet them all, it will not perform a shrink operation. This action will shrink indices to the target index, the name of which is the value of <> + the source index name + <>. The resulting index will have <> primary shards, and <> replica shards. 
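To make the resulting names concrete, the following minimal sketch (the index name and the single-shard target are illustrative assumptions) would shrink `logstash-2017.03.01` into a new one-primary-shard index named `logstash-2017.03.01-shrink`, since <> is empty and <> is `-shrink`:

[source,yaml]
-------------
action: shrink
description: >-
  Shrink the selected index to one primary shard on the node
  with the most available space
options:
  shrink_node: DETERMINISTIC
  number_of_shards: 1
  number_of_replicas: 1
  shrink_prefix:
  shrink_suffix: '-shrink'
filters:
- filtertype: pattern
  kind: prefix
  value: logstash-2017.03.01
-------------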
The shrinking will take place on the node identified by <>, unless `DETERMINISTIC` is specified, in which case Curator will evaluate all of the nodes to determine which one has the most free space. If multiple indices are identified for shrinking by the filter block, and `DETERMINISTIC` is specified, the node selection process will be repeated for each successive index, preventing all of the space being consumed on a single node. By default, Curator will delete the source index after a successful shrink. This can be disabled by setting <> to `False`. If the source index is not deleted after a successful shrink, Curator will remove the read-only setting and the shard allocation routing applied to the source index to put it on the shrink node. Curator will wait for the shards to stop rerouting before continuing. The <> option applies to the target index after the shrink is complete. If set, this shard allocation routing will be applied (after a successful shrink) and Curator will wait for all shards to stop rerouting before continuing. The only <> which are acceptable are `settings` and `aliases`. Please note that in the example above, while `best_compression` is being applied to the new index, it will not take effect until new writes are made to the index, such as when <> the shard to a single segment. The other options are usually okay to leave at the defaults, but feel free to change them as needed. === Required settings * <> === Optional settings * <> * <> * <> * <> * <> * <> * <> * <> * <> * <> * <> * <> * <> * <> * <> * <> TIP: See an example of this action in an <> <>. [[snapshot]] == Snapshot [source,yaml] ------------- action: snapshot description: >- Snapshot selected indices to 'repository' with the snapshot name or name pattern in 'name'. Use all other options as assigned options: repository: ... # Leaving name blank will result in the default 'curator-%Y%m%d%H%M%S' name: wait_for_completion: True max_wait: 3600 wait_interval: 10 filters: - filtertype: ... ------------- NOTE: Empty values and commented lines will result in the default value, if any, being selected. If a setting is set, but not used by a given action, it will be ignored. This action will snapshot indices to the indicated <>, with a name, or name pattern, as identified by <>. The other options are usually okay to leave at the defaults, but feel free to read about them and change them accordingly. === Required settings * <> === Optional settings * <> * <> * <> * <> * <> * <> * <> * <> * <> * <> * <> * <> TIP: See an example of this action in an <> <>. [[cli]] = Running Curator [partintro] -- * <> * <> * <> -- [[command-line]] == Command Line Interface The command-line arguments are as follows: [source,sh] ------- curator [--config CONFIG.YML] [--dry-run] ACTION_FILE.YML ------- The square brackets indicate optional elements. If `--config` and `CONFIG.YML` are not provided, Curator will look in `~/.curator/curator.yml` for the configuration file. `~` is the home directory of the user executing Curator. In a Unix system, this might be `/home/username/.curator/curator.yml`, while on a Windows system, it might be `C:\Users\username\.curator\curator.yml`. If `--dry-run` is included, Curator will simulate the action(s) in ACTION_FILE.YML as closely as possible without actually making any changes. The results will be in the logfile, or STDOUT/command-line if no logfile is specified.
`ACTION_FILE.YML` is a YAML <>. Command-line help is never far away: [source,sh] ------- curator --help ------- The help output looks like this: [source,sh] ------- $ curator --help Usage: curator [OPTIONS] ACTION_FILE Curator for Elasticsearch indices. See http://elastic.co/guide/en/elasticsearch/client/curator/current Options: --config PATH Path to configuration file. Default: ~/.curator/curator.yml --dry-run Do not perform any changes. --version Show the version and exit. --help Show this message and exit. ------- You can use <> in your configuration files. TIP: See <>   [[singleton-cli]] == Singleton Command Line Interface The `curator_cli` command allows users to run a single, supported action from the command-line, without needing either the client or action YAML configuration file, though it does support using the client configuration file if you want. As an important bonus, the command-line options allow you to override the settings in the `curator.yml` file! [source,sh] --------- $ curator_cli --help Usage: curator_cli [OPTIONS] COMMAND [ARGS]... Options: --config PATH Path to configuration file. Default: ~/.curator/curator.yml --host TEXT Elasticsearch host. --url_prefix TEXT Elasticsearch http url prefix. --port TEXT Elasticsearch port. --use_ssl Connect to Elasticsearch through SSL. --certificate TEXT Path to certificate to use for SSL validation. --client-cert TEXT Path to file containing SSL certificate for client auth. --client-key TEXT Path to file containing SSL key for client auth. --ssl-no-validate Do not validate SSL certificate --http_auth TEXT Use Basic Authentication ex: user:pass --timeout INTEGER Connection timeout in seconds. --master-only Only operate on elected master node. --dry-run Do not perform any changes. --loglevel TEXT Log level --logfile TEXT log file --logformat TEXT Log output format [default|logstash|json]. --version Show the version and exit. --help Show this message and exit. Commands: allocation Shard Routing Allocation close Close indices delete_indices Delete indices delete_snapshots Delete snapshots forcemerge forceMerge index/shard segments open Open indices replicas Change replica count show_indices Show indices show_snapshots Show snapshots snapshot Snapshot indices --------- The option flags for the given commands match those used for the same <>. The only difference is how filtering is handled. TIP: See <> === Command-line filtering Recent improvements in Curator include schema and setting validation. With these improvements, it is possible to validate filters and their many permutations if passed in a way that Curator can easily digest. [source,sh] ----------- --filter_list TEXT JSON string representing an array of filters. ----------- This means that filters need to be passed as a single object, or an array of objects in JSON format. Single: [source,sh] ----------- --filter_list '{"filtertype":"none"}' ----------- Multiple: [source,sh] ----------- --filter_list '[{"filtertype":"age","source":"creation_date","direction":"older","unit":"days","unit_count":13},{"filtertype":"pattern","kind":"prefix","value":"logstash"}]' ----------- This preserves the power of chained filters, making them available on the command line. NOTE: You may need to escape all of the double quotes on some platforms, or shells like PowerShell, for instance. Caveats to this approach: 1. Only one action can be taken at a time. 2. Not all actions have singleton analogs. For example, <> and + <> do not have singleton actions. 
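To relate the two interfaces, the multiple-filter `--filter_list` example above corresponds roughly to the action file sketch below. The `delete_indices` action and its options are assumptions chosen purely for illustration:

[source,yaml]
-------------
actions:
  1:
    action: delete_indices
    description: >-
      Action file equivalent of the multiple-filter --filter_list
      example shown above
    options:
      ignore_empty_list: True
    filters:
    - filtertype: age
      source: creation_date
      direction: older
      unit: days
      unit_count: 13
    - filtertype: pattern
      kind: prefix
      value: logstash
-------------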
=== Show Indices/Snapshots One feature that the singleton commands offer, which the YAML-based `curator` command does not, is the ability to show which indices and snapshots are in the system. It's a great way to visually test your filters without causing any harm to the system. [source,sh] ----------- $ curator_cli show_indices --help Usage: curator_cli show_indices [OPTIONS] Show indices Options: --verbose Show verbose output. --header Print header if --verbose --epoch Print time as epoch if --verbose --filter_list TEXT JSON string representing an array of filters. [required] --help Show this message and exit. ----------- [source,sh] ----------- $ curator_cli show_snapshots --help Usage: curator_cli show_snapshots [OPTIONS] Show snapshots Options: --repository TEXT Snapshot repository name [required] --filter_list TEXT JSON string representing an array of filters. [required] --help Show this message and exit. ----------- The `show_snapshots` command will only show snapshots matching the provided filters. The `show_indices` command will also do this, but also offers a few extra features. * `--verbose` adds state, total size of primary and all replicas, the document count, the number of primary and replica shards, and the creation date in ISO8601 format. * `--header` adds a header that shows the column names. This only occurs if `--verbose` is also selected. * `--epoch` changes the date format from ISO8601 to epoch time. If `--header` is also selected, the column header title will change to `creation_date`. There are no extra columns or `--verbose` output for the `show_snapshots` command. Without `--epoch` [source,sh] ----------- Index State Size Docs Pri Rep Creation Timestamp logstash-2016.10.20 close 0.0B 0 5 1 2016-10-20T00:00:03Z logstash-2016.10.21 open 763.3MB 5860016 5 1 2016-10-21T00:00:03Z logstash-2016.10.22 open 759.1MB 5858450 5 1 2016-10-22T00:00:04Z logstash-2016.10.23 open 757.8MB 5857456 5 1 2016-10-23T00:00:04Z logstash-2016.10.24 open 771.5MB 5859720 5 1 2016-10-24T00:00:00Z logstash-2016.10.25 open 771.0MB 5860112 5 1 2016-10-25T00:00:01Z logstash-2016.10.27 open 658.3MB 4872830 5 1 2016-10-27T00:00:03Z logstash-2016.10.28 open 655.1MB 5237250 5 1 2016-10-28T00:00:00Z ----------- With `--epoch` [source,sh] ----------- Index State Size Docs Pri Rep creation_date logstash-2016.10.20 close 0.0B 0 5 1 1476921603 logstash-2016.10.21 open 763.3MB 5860016 5 1 1477008003 logstash-2016.10.22 open 759.1MB 5858450 5 1 1477094404 logstash-2016.10.23 open 757.8MB 5857456 5 1 1477180804 logstash-2016.10.24 open 771.5MB 5859720 5 1 1477267200 logstash-2016.10.25 open 771.0MB 5860112 5 1 1477353601 logstash-2016.10.27 open 658.3MB 4872830 5 1 1477526403 logstash-2016.10.28 open 655.1MB 5237250 5 1 1477612800 ----------- [[exit-codes]] == Exit Codes Exit codes will indicate success or failure. * `0` — Success * `1` — Failure * `-1` - Exception raised that does not result in a `1` exit code. [[configuration]] = Configuration [partintro] -- These are the higher-level configuration settings used by the configuration files. <> and <> are documented separately. * <> * <> * <> -- [[envvars]] == Environment Variables WARNING: This functionality is experimental and may be changed or removed + completely in a future release. You can use environment variable references in both the <> and the <> to set values that need to be configurable at runtime.
To do this, use: [source,sh] ------- ${VAR} ------- Where `VAR` is the name of the environment variable. Each variable reference is replaced at startup by the value of the environment variable. The replacement is case-sensitive and occurs while the YAML file is parsed, but before configuration schema validation. References to undefined variables are replaced by `None` unless you specify a default value. To specify a default value, use: [source,sh] ------- ${VAR:default_value} ------- Where `default_value` is the value to use if the environment variable is undefined. [IMPORTANT] .Unsupported use cases ================================================================= When using environment variables, the value must _only_ be the environment variable. Using extra text, such as: [source,sh] ------- logfile: ${LOGPATH}/extra/path/information/file.log ------- is not supported at this time. ================================================================= === Examples Here are some examples of configurations that use environment variables and what each configuration looks like after replacement: [options="header"] |================================== |Config source |Environment setting |Config after replacement |`unit: ${UNIT}` |`export UNIT=days` |`unit: days` |`unit: ${UNIT}` |no setting |`unit:` |`unit: ${UNIT:days}` |no setting |`unit: days` |`unit: ${UNIT:days}` |`export UNIT=hours` |`unit: hours` |================================== [[actionfile]] == Action File NOTE: You can use <> in your configuration files. An action file has the following structure: [source,sh] ----------- --- # Remember, leave a key empty if there is no value. None will be a string, # not a Python "NoneType" # # Also remember that all examples have 'disable_action' set to True. If you # want to use this action as a template, be sure to set this to False after # copying it. actions: 1: action: ACTION1 description: OPTIONAL DESCRIPTION options: option1: value1 ... optionN: valueN continue_if_exception: False disable_action: True filters: - filtertype: *first* filter_element1: value1 ... filter_elementN: valueN - filtertype: *second* filter_element1: value1 ... filter_elementN: valueN 2: action: ACTION2 description: OPTIONAL DESCRIPTION options: option1: value1 ... optionN: valueN continue_if_exception: False disable_action: True filters: - filtertype: *first* filter_element1: value1 ... filter_elementN: valueN - filtertype: *second* filter_element1: value1 ... filter_elementN: valueN 3: action: ACTION3 ... 4: action: ACTION4 ... ----------- It is a YAML configuration file. The root key must be `actions`, after which there can be any number of actions, nested underneath numbers. Actions will be taken in ascending numerical order. The high-level elements of each numbered action are: * <> * <> * <> * <> In the case of the <>, there are two additional high-level elements: `add` and `remove`, which are described in the <> documentation. [[description]] === description This is an optional description which can help describe what the action and its filters are supposed to do. [source,yaml] ------------- description: >- I can make the description span multiple lines by putting ">-" at the beginning of the line, as seen above. Subsequent lines must also be indented. options: option1: ... ------------- [[configfile]] == Configuration File NOTE: The default location of the configuration file is `~/.curator/curator.yml`, but another location can be specified using the `--config` flag on the <>.
NOTE: You can use <> in your configuration files. The configuration file contains client connection and settings for logging. It looks like this: [source,sh] ----------- --- # Remember, leave a key empty if there is no value. None will be a string, # not a Python "NoneType" client: hosts: - 127.0.0.1 port: 9200 url_prefix: use_ssl: False certificate: client_cert: client_key: ssl_no_validate: False http_auth: timeout: 30 master_only: False logging: loglevel: INFO logfile: logformat: default blacklist: ['elasticsearch', 'urllib3'] ----------- It is a YAML configuration file. The two root keys must be `client` and `logging`. The subkeys of each of these will be described here. [[hosts]] === hosts This can be a single value: [source,sh] ----------- hosts: 127.0.0.1 ----------- Or multiple values in the 3 acceptable YAML ways to render sequences, or arrays: WARNING: Curator can only work with one cluster at a time. Including clients from multiple clusters in the `hosts` setting will result in errors. Flow: [source,sh] ----------- hosts: [ "10.0.0.1", "10.0.0.2" ] ----------- Spanning: [source,sh] ----------- hosts: [ "10.0.0.1", "10.0.0.2" ] ----------- Block: [source,sh] ----------- hosts: - 10.0.0.1 - 10.0.0.2 ----------- You can also provide these hosts with optional ports, and bypass the port option: [source,sh] ----------- hosts: - 10.0.0.1:9200 - 10.0.0.2:9201 ----------- WARNING: When adding a port to the end of a host or IP, the YAML Flow and Spanning styles require `host:port` to be single `'` or double `"` quote encapsulated or you will receive an error. The Block style does not have this limitation. [[port]] === port This should be a single value: [source,sh] ----------- port: 9200 ----------- The default is `9200`. This value will only be applied to <> without a port affixed, e.g. `localhost:9202`. [[url_prefix]] === url_prefix This should be a single value or left empty. [source,sh] ----------- url_prefix: ----------- In some cases you may be obliged to connect to your Elasticsearch cluster through a proxy of some kind. There may be a URL prefix before the API URI items, e.g. http://example.com/elasticsearch/ as opposed to http://localhost:9200. In such a case, set the `url_prefix` to the appropriate value, 'elasticsearch' in this example. The default is an empty string. [[use_ssl]] === use_ssl This should be `True`, `False` or left empty. [source,sh] ----------- use_ssl: ----------- If access to your Elasticsearch instance is protected by SSL encryption, you must use set `use_ssl` to `True`. The default is `False` [[certificate]] === certificate This should be a file path to your CA certificate, or left empty. [source,sh] ----------- certificate: ----------- This setting allows the use of a specified CA certificate file to validate the SSL certificate used by Elasticsearch. There is no default. include::inc_filepath.asciidoc[] [[client_cert]] === client_cert This should be a file path to a client certificate (public key), or left empty. [source,sh] ----------- client_cert: ----------- Allows the use of a specified SSL client cert file to authenticate to Elasticsearch. The file may contain both an SSL client certificate and an SSL key, in which case <> is not used. If specifying `client_cert`, and the file specified does not also contain the key, use <> to specify the file containing the SSL key. The file must be in PEM format, and the key part, if used, must be an unencrypted key in PEM format as well. 
include::inc_filepath.asciidoc[] [[client_key]] === client_key This should be a file path to a client key (private key), or left empty. [source,sh] ----------- client_key: ----------- Allows the use of a specified SSL client key file to authenticate to Elasticsearch. If using <> and the file specified does not also contain the key, use `client_key` to specify the file containing the SSL key. The key file must be an unencrypted key in PEM format. include::inc_filepath.asciidoc[] [[aws_key]] === aws_key WARNING: This feature has not been fully tested and should be considered BETA. WARNING: This setting will not work unless the `requests-aws4auth` Python module has been manually installed first. This should be an AWS IAM access key, or left empty. [source,sh] ----------- aws_key: ----------- IMPORTANT: You must set your <> to the proper hostname _with_ port. It may not work setting <> and <> to only a host name due to the different connection module used. [[aws_secret_key]] === aws_secret_key WARNING: This feature has not been fully tested and should be considered BETA. WARNING: This setting will not work unless the `requests-aws4auth` Python module has been manually installed first. This should be an AWS IAM secret access key, or left empty. [source,sh] ----------- aws_secret_key: ----------- IMPORTANT: You must set your <> to the proper hostname _with_ port. It may not work setting <> and <> to only a host name due to the different connection module used. [[aws_region]] === aws_region WARNING: This feature has not been fully tested and should be considered BETA. WARNING: This setting will not work unless the `requests-aws4auth` Python module has been manually installed first. This should be an AWS region, or left empty. [source,sh] ----------- aws_region: ----------- IMPORTANT: You must set your <> to the proper hostname _with_ port. It may not work setting <> and <> to only a host name due to the different connection module used. [[ssl_no_validate]] === ssl_no_validate This should be `True`, `False` or left empty. [source,sh] ----------- ssl_no_validate: ----------- If access to your Elasticsearch instance is protected by SSL encryption, you may set `ssl_no_validate` to `True` to disable SSL certificate verification. Valid use cases for doing so include the use of self-signed certificates that cannot be otherwise verified and would generate error messages. WARNING: Setting `ssl_no_validate` to `True` will likely result in a warning message that your SSL certificates are not trusted. This is expected behavior. The default value is `False`. [[http_auth]] === http_auth This should be authentication credentials (e.g. `user:pass`), or left empty. [source,sh] ----------- http_auth: ----------- This setting allows basic HTTP authentication to an Elasticsearch instance. The default is empty. [[timeout]] === timeout This should be an integer number of seconds, or left empty. [source,sh] ----------- timeout: ----------- You can change the default client connection timeout value with this setting. The default value is `30` (seconds) and should typically not be changed to be very large. If a longer timeout is necessary for a given action, such as <>, <>, or <>, the client timeout can be overridden on a per-action basis by setting <> in the action <>. There are default override values for some of those longer running actions. [[master_only]] === master_only This should be `True`, `False` or left empty.
[source,sh] ----------- master_only: ----------- In some situations, primarily with automated deployments, it makes sense to install Curator on every node. But you wouldn't want it to run on each node. By setting `master_only` to `True`, this is possible. It tests for, and will only continue running on, the node that is the elected master. WARNING: If `master_only` is `True`, and <> has more than one value, Curator will raise an Exception. This setting should _only_ be used with a single host in <>, as its utility centers around deploying to all nodes in the cluster. The default value is `False`. [[loglevel]] === loglevel This should be `CRITICAL`, `ERROR`, `WARNING`, `INFO`, `DEBUG`, or left empty. [source,sh] ----------- loglevel: ----------- Set the minimum acceptable log severity to display. * `CRITICAL` will only display critical messages. * `ERROR` will only display error and critical messages. * `WARNING` will display error, warning, and critical messages. * `INFO` will display informational, error, warning, and critical messages. * `DEBUG` will display debug messages, in addition to all of the above. The default value is `INFO`. [[logfile]] === logfile This should be a path to a log file, or left empty. [source,sh] ----------- logfile: ----------- include::inc_filepath.asciidoc[] The default value is empty, which will result in logging to `STDOUT`, or the console. [[logformat]] === logformat This should be `default`, `json`, `logstash`, or left empty. [source,sh] ----------- logformat: ----------- The `default` format looks like: [source,sh] ----------- 2016-04-22 11:53:09,972 INFO Action #1: ACTIONNAME ----------- The `json` or `logstash` formats look like: [source,sh] ----------- {"@timestamp": "2016-04-22T11:54:29.033Z", "function": "cli", "linenum": 178, "loglevel": "INFO", "message": "Action #1: ACTIONNAME", "name": "curator.cli"} ----------- The default value is `default`. [[blacklist]] === blacklist This should be an empty array `[]`, an array of log handler strings, or left empty. [source,sh] ----------- blacklist: ['elasticsearch', 'urllib3'] ----------- The default value is `['elasticsearch', 'urllib3']`, which will result in logs for the `elasticsearch` and `urllib3` Python modules _not_ being output. These can be quite verbose, so unless you need them to debug an issue, you should accept the default value. TIP: If you do need to troubleshoot an issue, set `blacklist` to `[]`, which is an empty array. Leaving it unset will result in the default behavior, which is to filter out `elasticsearch` and `urllib3` log traffic. [[examples]] = Examples [partintro] -- These examples should help illustrate how to build your own <>. You can use <> in your configuration files. * <> * <> * <> * <> * <> * <> * <> * <> * <> * <> * <> * <> * <> * <> -- [[ex_alias]] == alias [source,yaml] ------------- --- # Remember, leave a key empty if there is no value. None will be a string, # not a Python "NoneType" # # Also remember that all examples have 'disable_action' set to True. If you # want to use this action as a template, be sure to set this to False after # copying it. actions: 1: action: alias description: >- Alias indices from last week, with a prefix of logstash- to 'last_week', remove indices from the previous week.
options: name: last_week warn_if_no_indices: False disable_action: True add: filters: - filtertype: pattern kind: prefix value: logstash- exclude: - filtertype: period source: name range_from: -1 range_to: -1 timestring: '%Y.%m.%d' unit: weeks week_starts_on: sunday remove: filters: - filtertype: pattern kind: prefix value: logstash- - filtertype: period source: name range_from: -2 range_to: -2 timestring: '%Y.%m.%d' unit: weeks week_starts_on: sunday ------------- [[ex_allocation]] == allocation [source,yaml] ------------- --- # Remember, leave a key empty if there is no value. None will be a string, # not a Python "NoneType" # # Also remember that all examples have 'disable_action' set to True. If you # want to use this action as a template, be sure to set this to False after # copying it. actions: 1: action: allocation description: >- Apply shard allocation routing to 'require' 'tag=cold' for hot/cold node setup for logstash- indices older than 3 days, based on index_creation date options: key: tag value: cold allocation_type: require disable_action: True filters: - filtertype: pattern kind: prefix value: logstash- - filtertype: age source: creation_date direction: older unit: days unit_count: 3 ------------- [[ex_close]] == close [source,yaml] ------------- --- # Remember, leave a key empty if there is no value. None will be a string, # not a Python "NoneType" # # Also remember that all examples have 'disable_action' set to True. If you # want to use this action as a template, be sure to set this to False after # copying it. actions: 1: action: close description: >- Close indices older than 30 days (based on index name), for logstash- prefixed indices. options: delete_aliases: False disable_action: True filters: - filtertype: pattern kind: prefix value: logstash- - filtertype: age source: name direction: older timestring: '%Y.%m.%d' unit: days unit_count: 30 ------------- [[ex_cluster_routing]] == cluster_routing [source,yaml] ------------- --- # Remember, leave a key empty if there is no value. None will be a string, # not a Python "NoneType" # # Also remember that all examples have 'disable_action' set to True. If you # want to use this action as a template, be sure to set this to False after # copying it. # # This action example has a blank spot at action ID 2. This is to show that # Curator can disable allocation before one or more actions, and then re-enable # it afterward. actions: 1: action: cluster_routing description: >- Disable shard routing for the entire cluster. options: routing_type: allocation value: none setting: enable wait_for_completion: True disable_action: True 2: action: (any other action details go here) ... 3: action: cluster_routing description: >- Re-enable shard routing for the entire cluster. options: routing_type: allocation value: all setting: enable wait_for_completion: True disable_action: True ------------- [[ex_create_index]] == create_index [source,yaml] ------------- --- # Remember, leave a key empty if there is no value. None will be a string, # not a Python "NoneType" # # Also remember that all examples have 'disable_action' set to True. If you # want to use this action as a template, be sure to set this to False after # copying it. actions: 1: action: create_index description: Create the index as named, with the specified extra settings. 
options: name: myindex extra_settings: settings: number_of_shards: 2 number_of_replicas: 1 disable_action: True ------------- [[ex_delete_indices]] == delete_indices [source,yaml] ------------- --- # Remember, leave a key empty if there is no value. None will be a string, # not a Python "NoneType" # # Also remember that all examples have 'disable_action' set to True. If you # want to use this action as a template, be sure to set this to False after # copying it. actions: 1: action: delete_indices description: >- Delete indices older than 45 days (based on index name), for logstash- prefixed indices. Ignore the error if the filter does not result in an actionable list of indices (ignore_empty_list) and exit cleanly. options: ignore_empty_list: True disable_action: True filters: - filtertype: pattern kind: prefix value: logstash- - filtertype: age source: name direction: older timestring: '%Y.%m.%d' unit: days unit_count: 45 ------------- [[ex_delete_snapshots]] == delete_snapshots [source,yaml] ------------- --- # Remember, leave a key empty if there is no value. None will be a string, # not a Python "NoneType" # # Also remember that all examples have 'disable_action' set to True. If you # want to use this action as a template, be sure to set this to False after # copying it. actions: 1: action: delete_snapshots description: >- Delete snapshots from the selected repository older than 45 days (based on creation_date), for 'curator-' prefixed snapshots. options: repository: disable_action: True filters: - filtertype: pattern kind: prefix value: curator- exclude: - filtertype: age source: creation_date direction: older unit: days unit_count: 45 ------------- [[ex_forcemerge]] == forcemerge [source,yaml] ------------- --- # Remember, leave a key empty if there is no value. None will be a string, # not a Python "NoneType" # # Also remember that all examples have 'disable_action' set to True. If you # want to use this action as a template, be sure to set this to False after # copying it. actions: 1: action: forcemerge description: >- forceMerge logstash- prefixed indices older than 2 days (based on index creation_date) to 2 segments per shard. Delay 120 seconds between each forceMerge operation to allow the cluster to quiesce. Skip indices that have already been forcemerged to the minimum number of segments to avoid reprocessing. options: max_num_segments: 2 delay: 120 timeout_override: continue_if_exception: False disable_action: True filters: - filtertype: pattern kind: prefix value: logstash- exclude: - filtertype: age source: creation_date direction: older unit: days unit_count: 2 exclude: - filtertype: forcemerged max_num_segments: 2 exclude: ------------- [[ex_index_settings]] == index_settings [source,yaml] ------------- --- # Remember, leave a key empty if there is no value. None will be a string, # not a Python "NoneType" # # Also remember that all examples have 'disable_action' set to True. If you # want to use this action as a template, be sure to set this to False after # copying it. 
actions: 1: action: index_settings description: >- Set Logstash indices older than 10 days to be read only (block writes) options: disable_action: True index_settings: index: blocks: write: True ignore_unavailable: False preserve_existing: False filters: - filtertype: pattern kind: prefix value: logstash- exclude: - filtertype: age source: name direction: older timestring: '%Y.%m.%d' unit: days unit_count: 10 ------------- [[ex_open]] == open [source,yaml] ------------- --- # Remember, leave a key empty if there is no value. None will be a string, # not a Python "NoneType" # # Also remember that all examples have 'disable_action' set to True. If you # want to use this action as a template, be sure to set this to False after # copying it. actions: 1: action: open description: >- Open indices older than 30 days but younger than 60 days (based on index name), for logstash- prefixed indices. options: disable_action: True filters: - filtertype: pattern kind: prefix value: logstash- exclude: - filtertype: age source: name direction: older timestring: '%Y.%m.%d' unit: days unit_count: 30 - filtertype: age source: name direction: younger timestring: '%Y.%m.%d' unit: days unit_count: 60 ------------- [[ex_reindex]] == reindex === Manually selected reindex of a single index [source,yaml] ------------- --- # Remember, leave a key empty if there is no value. None will be a string, # not a Python "NoneType" # # Also remember that all examples have 'disable_action' set to True. If you # want to use this action as a template, be sure to set this to False after # copying it. actions: 1: description: "Reindex index1 into index2" action: reindex options: disable_action: True wait_interval: 9 max_wait: -1 request_body: source: index: index1 dest: index: index2 filters: - filtertype: none ------------- === Manually selected reindex of multiple indices [source,yaml] ------------- --- # Remember, leave a key empty if there is no value. None will be a string, # not a Python "NoneType" # # Also remember that all examples have 'disable_action' set to True. If you # want to use this action as a template, be sure to set this to False after # copying it. actions: 1: description: "Reindex index1,index2,index3 into new_index" action: reindex options: disable_action: True wait_interval: 9 max_wait: -1 request_body: source: index: ['index1', 'index2', 'index3'] dest: index: new_index filters: - filtertype: none ------------- === Filter-Selected Indices [source,yaml] ------------- --- # Remember, leave a key empty if there is no value. None will be a string, # not a Python "NoneType" # # Also remember that all examples have 'disable_action' set to True. If you # want to use this action as a template, be sure to set this to False after # copying it. actions: 1: description: >- 'Reindex all daily logstash indices from March 2017 into logstash-2017.03' action: reindex options: disable_action: True wait_interval: 9 max_wait: -1 request_body: source: index: REINDEX_SELECTION dest: index: logstash-2017.03 filters: - filtertype: pattern kind: prefix value: logstash-2017.03. ------------- === Reindex From Remote [source,yaml] ------------- --- # Remember, leave a key empty if there is no value. None will be a string, # not a Python "NoneType" # # Also remember that all examples have 'disable_action' set to True. If you # want to use this action as a template, be sure to set this to False after # copying it.
actions: 1: description: >- 'Reindex all daily logstash indices from March 2017 into logstash-2017.03' action: reindex options: disable_action: True wait_interval: 9 max_wait: -1 request_body: source: remote: host: http://otherhost:9200 username: myuser password: mypass index: index1 dest: index: index1 filters: - filtertype: none ------------- === Reindex From Remote With Filter-Selected Indices [source,yaml] ------------- --- # Remember, leave a key empty if there is no value. None will be a string, # not a Python "NoneType" # # Also remember that all examples have 'disable_action' set to True. If you # want to use this action as a template, be sure to set this to False after # copying it. actions: 1: description: >- Reindex all remote daily logstash indices from March 2017 into local index logstash-2017.03 action: reindex options: disable_action: True wait_interval: 9 max_wait: -1 request_body: source: remote: host: http://otherhost:9200 username: myuser password: mypass index: REINDEX_SELECTION dest: index: logstash-2017.03 remote_filters: - filtertype: pattern kind: prefix value: logstash-2017.03. filters: - filtertype: none ------------- [[ex_replicas]] == replicas [source,yaml] ------------- --- # Remember, leave a key empty if there is no value. None will be a string, # not a Python "NoneType" # # Also remember that all examples have 'disable_action' set to True. If you # want to use this action as a template, be sure to set this to False after # copying it. actions: 1: action: replicas description: >- Reduce the replica count to 0 for logstash- prefixed indices older than 10 days (based on index creation_date) options: count: 0 wait_for_completion: True disable_action: True filters: - filtertype: pattern kind: prefix value: logstash- - filtertype: age source: creation_date direction: older unit: days unit_count: 10 ------------- [[ex_restore]] == restore [source,yaml] ------------- --- # Remember, leave a key empty if there is no value. None will be a string, # not a Python "NoneType" # # Also remember that all examples have 'disable_action' set to True. If you # want to use this action as a template, be sure to set this to False after # copying it. actions: 1: action: restore description: >- Restore all indices in the most recent curator-* snapshot with state SUCCESS. Wait for the restore to complete before continuing. Do not skip the repository filesystem access check. Use the other options to define the index/shard settings for the restore. options: repository: # If name is blank, the most recent snapshot by age will be selected name: # If indices is blank, all indices in the snapshot will be restored indices: include_aliases: False ignore_unavailable: False include_global_state: False partial: False rename_pattern: rename_replacement: extra_settings: wait_for_completion: True skip_repo_fs_check: True disable_action: True filters: - filtertype: pattern kind: prefix value: curator- - filtertype: state state: SUCCESS ------------- [[ex_rollover]] == rollover [source,yaml] ------------- --- # Remember, leave a key empty if there is no value. None will be a string, # not a Python "NoneType" # # Also remember that all examples have 'disable_action' set to True. If you # want to use this action as a template, be sure to set this to False after # copying it. actions: 1: action: rollover description: >- Rollover the index associated with index 'name', which should be in the form of prefix-000001 (or similar), or prefix-YYYY.MM.DD-1. 
options: disable_action: True name: aliasname conditions: max_age: 1d max_docs: 1000000 extra_settings: index.number_of_shards: 3 index.number_of_replicas: 1 ------------- [[ex_shrink]] == shrink [source,yaml] ------------- --- # Remember, leave a key empty if there is no value. None will be a string, # not a Python "NoneType" # # Also remember that all examples have 'disable_action' set to True. If you # want to use this action as a template, be sure to set this to False after # copying it. actions: 1: action: shrink description: >- Shrink logstash indices older than 21 days on the node with the most available space, excluding the node named 'not_this_node'. Delete each source index after successful shrink, then reroute the shrunk index with the provided parameters. options: disable_action: True ignore_empty_list: True shrink_node: DETERMINISTIC node_filters: permit_masters: False exclude_nodes: ['not_this_node'] number_of_shards: 1 number_of_replicas: 1 shrink_prefix: shrink_suffix: '-shrink' delete_after: True post_allocation: allocation_type: include key: node_tag value: cold wait_for_active_shards: 1 extra_settings: settings: index.codec: best_compression wait_for_completion: True wait_interval: 9 max_wait: -1 filters: - filtertype: pattern kind: prefix value: logstash- - filtertype: age source: creation_date direction: older unit: days unit_count: 21 ------------- [[ex_snapshot]] == snapshot [source,yaml] ------------- --- # Remember, leave a key empty if there is no value. None will be a string, # not a Python "NoneType" # # Also remember that all examples have 'disable_action' set to True. If you # want to use this action as a template, be sure to set this to False after # copying it. actions: 1: action: snapshot description: >- Snapshot logstash- prefixed indices older than 1 day (based on index creation_date) with the default snapshot name pattern of 'curator-%Y%m%d%H%M%S'. Wait for the snapshot to complete. Do not skip the repository filesystem access check. Use the other options to create the snapshot. options: repository: # Leaving name blank will result in the default 'curator-%Y%m%d%H%M%S' name: ignore_unavailable: False include_global_state: True partial: False wait_for_completion: True skip_repo_fs_check: False disable_action: True filters: - filtertype: pattern kind: prefix value: logstash- - filtertype: age source: creation_date direction: older unit: days unit_count: 1 ------------- [[faq]] = Frequently Asked Questions [partintro] -- This section will be updated as more frequently asked questions arise. * <> * <> * <> * <> * <> * <> -- [[faq_doc_error]] == Q: How can I report an error in the documentation? === A: Use the "Edit" link on any page See <>. [[faq_partial_delete]] == Q: Can I delete only certain data from within indices? === A: It's complicated [float] TL;DR: No. Curator can only delete entire indices. ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ [float] Full answer: ^^^^^^^^^^^^ As a thought exercise, think of Elasticsearch indices as being like databases, or tablespaces within a database. If you had hundreds of millions of rows to delete from your database, would you run a separate `DELETE from TABLE where ...` statement for each row? No, you would delete the entire table, or drop the tablespace. Elasticsearch behaves similarly: deleting individual documents only _marks_ them as deleted, and the space they occupy is not reclaimed until the affected segments are merged, which makes document-level deletes expensive at scale. Deleting an entire index, on the other hand, is a very inexpensive operation, which is one of the reasons time-series indices work so well: when the data in an index ages past the retention period, the whole index can simply be dropped. If you only need to delete _some_ documents from an index, the Elasticsearch delete-by-query functionality may help, but Curator does not manage data within indices. ''''' [[faq_strange_chars]] == Q: Can Curator handle index names with strange characters? === A: Yes, by using the <> filtertype This can be accomplished with <> set to `regex`, and <> set to the needed regular expression. [float] The Problem: ^^^^^^^^^^^^ Illegal characters make it hard to delete indices.
------------------ % curl logs.example.com:9200/_cat/indices red }?ebc-2015.04.08.03 sip-request{ 5 1 0 0 632b 316b red }?ebc-2015.04.08.03 sip-response 5 1 0 0 474b 237b red ?ebc-2015.04.08.02 sip-request{ 5 1 0 0 474b 316b red eb 5 1 0 0 632b 316b red ?e 5 1 0 0 632b 316b ------------------   You can see it looks like there are some tab characters and maybe newline characters. This makes it hard to use the HTTP API to delete the indices. Dumping all the index settings out: [source,sh] ------- curl -XGET localhost:9200/*/_settings?pretty -------   ...reveals the index names as the first key in the resulting JSON. In this case, the names were very atypical: ------- }\b?\u0011ebc-2015.04.08.02\u000Bsip-request{ }\u0006?\u0011ebc-2015.04.08.03\u000Bsip-request{ }\u0003?\u0011ebc-2015.04.08.03\fsip-response ... -------   Curator lets you use regular expressions to select indices to perform actions on. WARNING: Before attempting an action, see what will be affected by using the `--dry-run` flag first. To delete the first three from the above example, use `'.*sip.*'` as your regular expression. NOTE: In an <>, regular expressions and strftime date strings _must_ be encapsulated in single-quotes. The next one is trickier. The real name of the index was `\n\u0011eb`. The regular expression `.*b$` did not work, but `'\n.*'` did. The last index can be deleted with a regular expression of `'.*e$'`. The resulting <> might look like this: [source,yaml] -------- actions: 1: description: Delete indices with strange characters that match regex '.*sip.*' action: delete_indices options: continue_if_exception: False disable_action: False filters: - filtertype: pattern kind: regex value: '.*sip.*' 2: description: Delete indices with strange characters that match regex '\n.*' action: delete_indices options: continue_if_exception: False disable_action: False filters: - filtertype: pattern kind: regex value: '\n.*' 3: description: Delete indices with strange characters that match regex '.*e$' action: delete_indices options: continue_if_exception: False disable_action: False filters: - filtertype: pattern kind: regex value: '.*e$' --------   ''''' [[entrypoint-fix]] == Q: I'm getting `DistributionNotFound` and `entry_point` errors when I try to run Curator. What am I doing wrong? === A: You likely need to upgrade `setuptools` If you are still unable to install, or get strange errors about dependencies you know you've installed, or messages mentioning `entry_point`, you may need to upgrade the `setuptools` package. This is especially common with RHEL and CentOS installs, and their variants, as they depend on Python 2.6. If you can run `pip install -U setuptools`, it should correct the problem. You may also be able to download and install manually: . `wget https://pypi.python.org/packages/source/s/setuptools/setuptools-15.1.tar.gz -O setuptools-15.1.tar.gz` . `pip install setuptools-15.1.tar.gz` Any dependencies this version of setuptools may require will have to be manually acquired and installed for your platform. For more information about setuptools, see https://pypi.python.org/pypi/setuptools This fix originally appeared https://github.com/elastic/curator/issues/56#issuecomment-77843587[here]. ''''' [[faq_aws_iam]] == Q: Why doesn't Curator work with AWS Elasticsearch? === A: Because Curator requires access to the `/_cluster/state/metadata` endpoint. NOTE: AWS ES 5.3 officially supports Curator for index management operations.
AWS ES 5.3 does not yet expose the `/_snapshot/_status` endpoint Curator uses, and therefore does not yet support snapshot operations. Older versions of AWS ES are not supported by Curator versions 4 or 5. There is some confusion because Curator 3 supported AWS ES, but Curator 4 & 5 do not. There are even some IAM credentials listed as options for client connections. These are currently available, but not able to be used. This may change at some point, so they remain at the ready until then. Curator 4 & 5 require access to the `/_cluster/state/metadata` endpoint in order to pull metadata at IndexList initialization time for _all_ indices. This metadata is used to determine index routing information, index sizing, index state (either `open` or `close`), aliases, and more. Curator 4 switched to doing this in order to reduce the number of repetitive client calls that were made in the previous versions. Curator 5 uses the same method. AWS currently has a 5.3 version of Elasticsearch, which officially supports Curator _for index management operations only._ AWS ES does not yet expose the `/_snapshot/_status` endpoint, which is required by Curator for snapshot management. Additionally, AWS ES versions 5.1 and older do not fully support the `/_cluster/state/metadata` endpoint, which means that Curator cannot be used to manage indices in AWS if your cluster is version 5.1 or older. ''''' [[faq_unicode]] == Q: Why am I getting an error message about ASCII encoding? === A: You need to change your encoding to UTF-8 If you see messages like this: [source,sh] ----------- Click will abort further execution because Python 3 was configured to use ASCII as encoding for the environment. Either run this under Python 2 or consult http://click.pocoo.org/python3/ for mitigation steps. This system lists a couple of UTF-8 supporting locales that you can pick from. The following suitable locales where discovered: aa_DJ.utf8, aa_ER.utf8, aa_ET.utf8, ... ----------- You are likely running Curator with Python 3, or the RPM/DEB package, which was compiled with Python 3. Using the command-line library http://click.pocoo.org[click] with Python 3 requires your locale to be Unicode. You can set this up by exporting the `LC_ALL` environment variable like this: [source,sh] ----------- $ export LC_ALL=mylocale.utf8 ----------- Where `mylocale.utf8` is one of the listed "suitable locales." You can also set the locale on the command-line before the Curator command: [source,sh] ----------- $ LC_ALL=mylocale.utf8 curator [ARGS] ... ----------- IMPORTANT: If you use `export`, be sure to choose the correct locale as it will be set for the duration of your terminal session. ''''' [[filter_elements]] = Filter Elements [partintro] -- * <> * <> * <> * <> * <> * <> * <> * <> * <> * <> * <> * <> * <> * <> * <> * <> * <> * <> * <> * <> * <> * <> * <> * <> You can use <> in your configuration files. -- [[fe_aliases]] == aliases include::inc_filter_by_aliases.asciidoc[] NOTE: This setting is used only when using the <> filter. The value of this setting must be a single alias name, or a list of alias names. This can be done in any of the ways YAML allows for lists or arrays. Here are a few examples.
**Single**

[source,txt]
-------
filters:
- filtertype: alias
  aliases: my_alias
  exclude: False
-------

**List**

- Flow style:
+
[source,txt]
-------
filters:
- filtertype: alias
  aliases: [ my_alias, another_alias ]
  exclude: False
-------

- Block style:
+
[source,txt]
-------
filters:
- filtertype: alias
  aliases:
    - my_alias
    - another_alias
  exclude: False
-------

There is no default value. This setting must be set by the user or an exception will be raised, and execution will halt.

[[fe_allocation_type]]
== allocation_type

NOTE: This setting is used only when using the <> filter.

[source,yaml]
-------------
- filtertype: allocated
  key: ...
  value: ...
  allocation_type: require
  exclude: True
-------------

The value of this setting must be one of `require`, `include`, or `exclude`. Read more about these settings in the {ref}/shard-allocation-filtering.html[Elasticsearch documentation].

The default value for this setting is `require`.

[[fe_count]]
== count

NOTE: This setting is only used with the <> filtertype +
      and is a required setting.

[source,yaml]
-------------
- filtertype: count
  count: 10
-------------

The value for this setting is a number of indices or snapshots to match.

Items will remain in the actionable list depending on the value of <>, and <>.

There is no default value. This setting must be set by the user or an exception will be raised, and execution will halt.

[[fe_direction]]
== direction

NOTE: This setting is only used with the <> filtertype.

[source,yaml]
-------------
- filtertype: age
  source: creation_date
  direction: older
  unit: days
  unit_count: 3
-------------

This setting must be either `older` or `younger`. This setting is used to determine whether indices or snapshots are `older` or `younger` than the reference point in time determined by <>, <>, and optionally, <>. If `direction` is `older`, then indices (or snapshots) which are _older_ than the reference point in time will be matched. Likewise, if `direction` is `younger`, then indices (or snapshots) which are _younger_ than the reference point in time will be matched.

There is no default value. This setting must be set by the user or an exception will be raised, and execution will halt.

[[fe_disk_space]]
== disk_space

NOTE: This setting is only used with the <> filtertype +
      and is a required setting.

[source,yaml]
-------------
- filtertype: space
  disk_space: 100
-------------

The value for this setting is a number of gigabytes.

Indices in excess of this number of gigabytes will be matched.

There is no default value. This setting must be set by the user or an exception will be raised, and execution will halt.

[[fe_epoch]]
== epoch

NOTE: This setting is available in the <> filtertype, and any filter which has the <> setting. This setting is strictly optional.

TIP: This setting is not common. It is most frequently used for testing.

<>, <>, and optionally, <>, are used by Curator to establish the moment in time point of reference with this formula:

[source,sh]
-----------
point_of_reference = epoch - ((number of seconds in unit) * unit_count)
-----------

If <> is unset, the current time is used. It is possible to set a point of reference in the future by using a negative value for <>.

=== Example

[source,yaml]
-------------
- filtertype: age
  source: creation_date
  direction: older
  unit: days
  unit_count: 3
  epoch: 1491577200
-------------

The value for this setting must be an epoch timestamp. In this example, the given epoch time of `1491577200` is 2017-04-07T15:00:00Z (UTC).
This will use 3 days older than that timestamp as the point of reference for age comparisons. [[fe_exclude]] == exclude NOTE: This setting is available in _all_ filter types. If `exclude` is `True`, the filter will remove matches from the actionable list. If `exclude` is `False`, then only matches will be kept in the actionable list. The default value for this setting is different for each filter type. === Examples [source,yaml] ------------- - filtertype: opened exclude: True ------------- This filter will result in only `closed` indices being in the actionable list. [source,yaml] ------------- - filtertype: opened exclude: False ------------- This filter will result in only `open` indices being in the actionable list. [[fe_field]] == field NOTE: This setting is available in the <> filtertype, and any filter which has the <> setting. This setting is strictly optional. [source,yaml] ------------- - filtertype: age source: field_stats direction: older unit: days unit_count: 3 field: '@timestamp' stats_result: min_value ------------- The value of this setting must be a timestamp field name. This field must be present in the indices being filtered or an exception will be raised, and execution will halt. `field_stats` uses the {ref}/search-field-stats.html[Field Stats API] to calculate either the `min_value` or the `max_value` of the <> as the <>, and then use that value for age comparisons. This setting is only used when <> is `field_stats`. The default value for this setting is `@timestamp`. [[fe_key]] == key NOTE: This setting is required when using the <>. [source,yaml] ------------- - filtertype: allocated key: ... value: ... allocation_type: exclude: True ------------- The value of this setting should correspond to a node setting on one or more nodes in your cluster. For example, you might have set [source,sh] ----------- node.tag: myvalue ----------- in your `elasticsearch.yml` file for one or more of your nodes. To match allocation in this case, set key to `tag`. These special attributes are also supported: [cols="2*", options="header"] |=== |attribute |description |`_name` |Match nodes by node name |`_host_ip` |Match nodes by host IP address (IP associated with hostname) |`_publish_ip` |Match nodes by publish IP address |`_ip` |Match either `_host_ip` or `_publish_ip` |`_host` |Match nodes by hostname |=== There is no default value. This setting must be set by the user or an exception will be raised, and execution will halt. [[fe_kind]] == kind NOTE: This setting is only used with the <> + filtertype and is a required setting. This setting tells the <> what pattern type to match. Acceptable values for this setting are `prefix`, `suffix`, `timestring`, and `regex`. include::inc_filter_chaining.asciidoc[] There is no default value. This setting must be set by the user or an exception will be raised, and execution will halt. include::inc_kinds.asciidoc[] [[fe_max_num_segments]] == max_num_segments NOTE: This setting is only used with the <> filtertype. [source,yaml] ------------- - filtertype: forcemerged max_num_segments: 2 exclude: True ------------- The value for this setting is the cutoff number of segments per shard. Indices which have this number of segments per shard, or fewer, will be actionable depending on the value of <>, which is `True` by default for the <> filter type. There is no default value. This setting must be set by the user or an exception will be raised, and execution will halt. 
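For illustration, here is this setting in the context of a complete <> action. This is only a sketch: the `logstash-` prefix and the 120 second `delay` are example values for this illustration, not recommendations.

[source,yaml]
-------------
actions:
  1:
    action: forcemerge
    description: >-
      forceMerge logstash- prefixed indices to 2 segments per shard,
      skipping indices that already have 2 or fewer segments per shard
    options:
      max_num_segments: 2
      delay: 120
    filters:
    - filtertype: pattern
      kind: prefix
      value: logstash-
    - filtertype: forcemerged
      max_num_segments: 2
      exclude: True
-------------

Because <> defaults to `True` for this filter, indices which are already sufficiently merged are removed from the actionable list, and only the remainder are forceMerged.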
[[fe_range_from]] == range_from NOTE: This setting is only used with the <> filtertype [source,yaml] ------------- - filtertype: period source: name range_from: -1 range_to: -1 timestring: '%Y.%m.%d' unit: days ------------- <> and <> are counters of whole <>. A negative number indicates a whole unit in the past, while a positive number indicates a whole unit in the future. A `0` indicates the present unit. Read more about this setting in context in the <>, including examples. [[fe_range_to]] == range_to NOTE: This setting is only used with the <> filtertype [source,yaml] ------------- - filtertype: period source: name range_from: -1 range_to: -1 timestring: '%Y.%m.%d' unit: days ------------- <> and <> are counters of whole <>. A negative number indicates a whole unit in the past, while a positive number indicates a whole unit in the future. A `0` indicates the present unit. Read more about this setting in context in the <>, including examples. [[fe_reverse]] == reverse NOTE: This setting is used in the <> and <> filtertypes This setting affects the sort order of the indices. `True` means reverse-alphabetical. This means that if all index names share the same pattern with a date--e.g. index-2016.03.01--older indices will be selected first. The default value of this setting is `True`. This setting is ignored if <> is `True`. TIP: There are context-specific examples of how `reverse` works in the <> and <> documentation. [[fe_source]] == source The _source_ from which to derive the index or snapshot age. Can be one of `name`, `creation_date`, or `field_stats`. NOTE: This setting is only used with the <> filtertype, or + with the <> filtertype when <> is set to `True`. NOTE: When using the <> filtertype, source requires + <>, <>, <>, + and additionally, the optional setting, <>. include::inc_sources.asciidoc[] [[fe_state]] == state NOTE: This setting is only used with the <> filtertype. [source,yaml] ------------- - filtertype: state state: SUCCESS ------------- The value for this setting must be one of `SUCCESS`, `PARTIAL`, `FAILED`, or `IN_PROGRESS`. This setting determines what kind of snapshots will be passed. The default value for this setting is `SUCCESS`. [[fe_stats_result]] == stats_result NOTE: This setting is only used with the <> filtertype. [source,yaml] ------------- - filtertype: age source: field_stats direction: older unit: days unit_count: 3 field: '@timestamp' stats_result: min_value ------------- The value for this setting can be either `min_value` or `max_value`. This setting is only used when <> is `field_stats`, and determines whether Curator will use the minimum or maximum value of <> for time calculations. The default value for this setting is `min_value`. [[fe_timestring]] == timestring NOTE: This setting is only used with the <> filtertype, or + with the <> filtertype if <> is set to `True`. === strftime This setting must be a valid Python strftime string. It is used to match and extract the timestamp in an index or snapshot name. include::inc_strftime_table.asciidoc[] These identifiers may be combined with each other, and/or separated from each other with hyphens `-`, periods `.`, underscores `_`, or other characters valid in an index name. Each identifier must be preceded by a `%` character in the timestring. For example, an index like `index-2016.04.01` would use a timestring of `'%Y.%m.%d'`. When <> is `name`, this setting must be set by the user or an exception will be raised, and execution will halt. There is no default value. 
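As a brief illustration, a single-quoted timestring in a `name`-based age filter might look like the following sketch, where the 30 day cutoff is an arbitrary example value:

[source,yaml]
-------------
- filtertype: age
  source: name
  direction: older
  timestring: '%Y.%m.%d'
  unit: days
  unit_count: 30
-------------

This would extract dates from names like `index-2017.04.01` and keep in the actionable list only the indices whose extracted date is more than 30 days in the past.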
include::inc_timestring_regex.asciidoc[]

[[fe_unit]]
== unit

NOTE: This setting is used with the <> filtertype, with the <> filtertype, or with the <> filtertype if <> is set to `True`.

[source,yaml]
-------------
- filtertype: age
  source: creation_date
  direction: older
  unit: days
  unit_count: 3
-------------

This setting must be one of `seconds`, `minutes`, `hours`, `days`, `weeks`, `months`, or `years`. The values `seconds` and `minutes` are not allowed with the <> filtertype and will result in an error condition if used there.

For the <> filtertype, or when <> is set to `True`, <>, <>, and optionally, <>, are used by Curator to establish the moment in time point of reference with this formula:

[source,sh]
-----------
point_of_reference = epoch - ((number of seconds in unit) * unit_count)
-----------

include::inc_unit_table.asciidoc[]

If <> is unset, the current time is used. It is possible to set a point of reference in the future by using a negative value for <>.

This setting must be set by the user or an exception will be raised, and execution will halt.

TIP: See the <> for more information about time calculation.

[[fe_unit_count]]
== unit_count

NOTE: This setting is only used with the <> filtertype, or +
      with the <> filtertype if <> is set to `True`.

[source,yaml]
-------------
- filtertype: age
  source: creation_date
  direction: older
  unit: days
  unit_count: 3
-------------

The value of this setting will be used as a multiplier for <>.

<>, <>, and optionally, <>, are used by Curator to establish the moment in time point of reference with this formula:

[source,sh]
-----------
point_of_reference = epoch - ((number of seconds in unit) * unit_count)
-----------

include::inc_unit_table.asciidoc[]

If <> is unset, the current time is used. It is possible to set a point of reference in the future by using a negative value for <>.

This setting must be set by the user or an exception will be raised, and execution will halt.

If this setting is used in conjunction with <>, the configured value will only be used as a fallback value in case the pattern could not be matched. The value _-1_ has a special meaning in this case and causes the index to be ignored when pattern matching fails.

TIP: See the <> for more information about time calculation.

[[fe_unit_count_pattern]]
== unit_count_pattern

NOTE: This setting is only used with the age filtertype to define whether the <> value is taken from the configuration or read from the index name via a regular expression.

[source,yaml]
-------------
- filtertype: age
  source: creation_date
  direction: older
  unit: days
  unit_count: 3
  unit_count_pattern: -([0-9]+)-
-------------

This setting can be used in cases where the value against which index age should be assessed is not a static value, but can be different for every index. For this case, there is the option of extracting the index-specific value from the index names via a regular expression defined in this parameter.

Consider for example the following index name patterns that contain the retention time in their name: _logstash-30-yyyy.mm.dd_, _logstash-12-yyyy.mm_, __3_logstash-yyyy.mm.dd_.

To extract a value from the index names, this setting will be compiled as a regular expression and matched against index names. For a successful match, the value of the first capture group from the regular expression is used as the value for <>.
If there is any error during compiling or matching the expression, or the expression does not contain a capture group, the value configured in <> is used as a fallback value, unless it is set to _-1_, in which case the index will be skipped. TIP: Regular expressions and match groups are not explained here as they are a fairly large and complex topic, but there are numerous resources online that will help. Using an online tool for testing regular expressions like https://regex101.com/[regex101.com] will help a lot when developing patterns. *Examples* * _logstash-30-yyyy.mm.dd_: Daily index that should be deleted after 30 days, indices that don't match the pattern will be deleted after 365 days [source,yaml] ------------- - filtertype: age source: creation_date direction: older unit: days unit_count: 365 unit_count_pattern: -([0-9]+)- ------------- * _logstash-12-yyyy.mm_: Monthly index that should be deleted after 12 months, indices that don't match the pattern will be deleted after 3 months [source,yaml] ------------- - filtertype: age source: creation_date direction: older unit: months unit_count: 3 unit_count_pattern: -([0-9]+)- ------------- * __3_logstash-yyyy.mm.dd_: Daily index that should be deleted after 3 years, indices that don't match the pattern will be ignored [source,yaml] ------------- - filtertype: age source: creation_date direction: older unit: years unit_count: -1 unit_count_pattern: ^_([0-9]+)_ ------------- IMPORTANT: Be sure to pay attention to the interaction of this parameter and <>! [[fe_use_age]] == use_age [source,yaml] ------------- - filtertype: count count: 10 use_age: True source: creation_date ------------- This setting allows filtering of indices by their age _after_ other considerations. The default value of this setting is `False` NOTE: Use of this setting requires the additional setting, <>. TIP: There are context-specific examples using `use_age` in the <> and <> documentation. [[fe_value]] == value NOTE: This setting is only used with the <> filtertype and is a required setting. There is a separate <> associated with the <>, and the <>. The value of this setting is used by <> as follows: * `prefix`: Search the first part of an index name for the provided value * `suffix`: Search the last part of an index name for the provided value * `regex`: Provide your own regular expression, and Curator will find the matches. * `timestring`: An strftime string to extrapolate and find indices that match. For example, given a `timestring` of `'%Y.%m.%d'`, matching indices would include `logstash-2016.04.01` and `.marvel-2016.04.01`, but not `myindex-2016-04-01`, as the pattern is different. IMPORTANT: Whatever you provide for `value` is always going to be a part of a + regular expression. The safest practice is to always encapsulate within single quotes. For example: `value: '-suffix'`, or `value: 'prefix-'` There is no default value. This setting must be set by the user or an exception will be raised, and execution will halt. TIP: There are context-specific examples using `value` in the <> documentation. [[fe_week_starts_on]] == week_starts_on NOTE: This setting is only used with the <> filtertype. [source,yaml] ------------- - filtertype: period source: name range_from: -1 range_to: -1 timestring: '%Y.%m.%d' unit: weeks week_starts_on: sunday ------------- The value of this setting indicates whether weeks should be measured starting on `sunday` or `monday`. Though Monday is the ISO standard, Sunday is frequently preferred. 
This setting is only used when <> is set to `weeks`. The default value for this setting is `sunday`. curator-5.2.0/docs/asciidoc/filters.asciidoc000066400000000000000000000616301315226075300210650ustar00rootroot00000000000000[[filters]] = Filters [partintro] -- Filters are the way to select only the indices (or snapshots) you want. include::inc_filter_chaining.asciidoc[] The index filtertypes are: * <> * <> * <> * <> * <> * <> * <> * <> * <> * <> * <> * <> The snapshot filtertypes are: * <> * <> * <> * <> * <> * <> You can use <> in your configuration files. -- [[filtertype]] == filtertype Each filter is defined first by a `filtertype`. Each filtertype has its own settings, or no settings at all. In a configuration file, filters are defined as follows: [source,yaml] ------------- - filtertype: *first* setting1: ... ... settingN: ... - filtertype: *second* setting1: ... ... settingN: ... - filtertype: *third* ------------- The `-` indicates in the YAML that this is an array element. Each filtertype declaration must be preceded by a `-` for the filters to be read properly. This is how Curator can chain filters together. Anywhere filters can be used, multiple can be chained together in this manner. The index filtertypes are: * <> * <> * <> * <> * <> * <> * <> * <> * <> * <> * <> * <> The snapshot filtertypes are: * <> * <> * <> * <> * <> * <> [[filtertype_age]] == age NOTE: Empty values and commented lines will result in the default value, if any, being selected. If a setting is set, but not used by a given <>, it may generate an error. This <> will iterate over the actionable list and match indices based on their age. They will remain in, or be removed from the actionable list based on the value of <>. === Age calculation include::inc_unit_table.asciidoc[] All calculations are in epoch time, which is the number of seconds elapsed since 1 Jan 1970. If no <> is specified in the filter, then the current epoch time-which is always UTC-is used as the basis for comparison. As epoch time is always increasing, lower numbers indicate dates and times in the past. When age is calculated, <> is multiplied by <> to obtain a total number of seconds to use as a differential. For example, if the time at execution were 2017-04-07T15:00:00Z (UTC), then the epoch timestamp would be `1491577200`. If I had an age filter defined like this: [source,yaml] ------------- - filtertype: age source: creation_date direction: older unit: days unit_count: 3 ------------- The time differential would be `3*24*60*60` seconds, which is `259200` seconds. Subtracting this value from `1491577200` gives us `1491318000`, which is 2017-04-04T15:00:00Z (UTC), exactly 3 days in the past. The `creation_date` of indices or snapshots is compared to this timestamp. If it is `older`, it stays in the actionable list, otherwise it is removed from the actionable list. [IMPORTANT] .`age` filter vs. `period` filter ================================= The time differential means of calculation can lead to frustration. Setting `unit` to `months`, and `unit_count` to `3` will actually calculate the age as `3*30*24*60*60`, which is `7776000` seconds. This may be a big deal. If the date is 2017-01-01T02:30:00Z, or `1483237800` in epoch time. Subtracting `7776000` seconds makes `1475461800`, which is 2016-10-03T02:30:00Z. If you were to try to match monthly indices, `index-2016.12`, `index-2016.11`, `2016.10`, `2016.09`, etc., then both `index-2016.09` _and_ `index-2016.10` will be _older_ than the cutoff date. This may result in unintended behavior. 
Another way this can cause issues is with weeks. Weekly indices may start on Sunday or Monday. The age filter's calculation doesn't take this into consideration, and merely tests the difference between execution time and the timestamp on the index (from any `source`). Another means of selecting indices and snapshots is the <> filter, which is perhaps a better choice for selecting weeks and months as it compensates for these differences. ================================= include::inc_sources.asciidoc[] === Required settings * <> * <> * <> * <> === Dependent settings * <> (required if `source` is `name`) * <> (required if `source` is `field_stats`) [Indices only] * <> (only used if `source` is `field_stats`) [Indices only] === Optional settings * <> * <> * <> (default is `False`) [[filtertype_alias]] == alias [source,yaml] ------------- - filtertype: alias aliases: ... ------------- NOTE: Empty values and commented lines will result in the default value, if any, being selected. If a setting is set, but not used by a given <>, it may generate an error. This <> will iterate over the actionable list and match indices based on whether they are associated with the given <>, which can be a single value, or an array. They will remain in, or be removed from the actionable list based on the value of <>. include::inc_filter_by_aliases.asciidoc[] === Required settings * <> === Optional settings * <> [[filtertype_allocated]] == allocated [source,yaml] ------------- - filtertype: allocated key: ... value: ... allocation_type: exclude: True ------------- NOTE: Empty values and commented lines will result in the default value, if any, being selected. If a setting is set, but not used by a given <>, it may generate an error. This <> will iterate over the actionable list and match indices based on their shard routing allocation settings. They will remain in, or be removed from the actionable list based on the value of <>. === Required settings * <> * <> === Optional settings * <> * <> (default is `True`) [[filtertype_closed]] == closed [source,yaml] ------------- - filtertype: closed exclude: True ------------- This <> will iterate over the actionable list and match indices which are closed. They will remain in, or be removed from the actionable list based on the value of <>. === Optional settings * <> (default is `True`) [[filtertype_count]] == count [source,yaml] ------------- - filtertype: count count: 10 ------------- NOTE: Empty values and commented lines will result in the default value, if any, being selected. If a setting is set, but not used by a given <>, it may generate an error. This <> will iterate over the actionable list of indices _or_ snapshots. They are ordered by age, or by alphabet, so as to guarantee that the correct items will remain in, or be removed from the actionable list based on the values of <>, <>, and <>. === Age-based sorting For use cases, where "like" items are being counted, and their name pattern guarantees date sorting is equal to alphabetical sorting, it is unnecessary to set <> to `True`, as item names will be sorted in <> order by default. This means that the item count will start beginning with the _newest_ indices or snapshots, and proceed through to the oldest. 
Where this is not the case, the <> setting can be used to ensure that index or snapshot ages are properly considered for sorting:

[source,yaml]
-------------
- filtertype: count
  count: 10
  use_age: True
  source: creation_date
-------------

All of the age-related settings from the <> filter are supported, and the same restrictions apply with regard to filtering indices vs. snapshots.

=== Reversing sorting

Using the default configuration, <> is `True`. Given these indices:

[source,sh]
-------------
index1
index2
index3
index4
index5
-------------

And this filter:

[source,yaml]
-------------
- filtertype: count
  count: 2
-------------

Indices `index5` and `index4` will be recognized as the `2` _most recent,_ and will be removed from the actionable list, leaving `index1`, `index2`, and `index3` to be acted on by the given <>.

Similarly, given these indices:

[source,sh]
-------------
index-2017.03.01
index-2017.03.02
index-2017.03.03
index-2017.03.04
index-2017.03.05
-------------

And this filter:

[source,yaml]
-------------
- filtertype: count
  count: 2
  use_age: True
  source: name
  timestring: '%Y.%m.%d'
-------------

The result will be similar. Indices `index-2017.03.05` and `index-2017.03.04` will be recognized as the `2` _most recent,_ and will be removed from the actionable list, leaving `index-2017.03.01`, `index-2017.03.02`, and `index-2017.03.03` to be acted on by the given <>.

In some cases, you may wish to filter for the most recent indices. To accomplish this you set <> to `False`:

[source,yaml]
-------------
- filtertype: count
  count: 2
  reverse: False
-------------

This time indices `index1` and `index2` will be the `2` which will be removed from the actionable list, leaving `index3`, `index4`, and `index5` to be acted on by the given <>.

Likewise with the age sorted indices:

[source,yaml]
-------------
- filtertype: count
  count: 2
  use_age: True
  source: name
  timestring: '%Y.%m.%d'
  reverse: False
-------------

Indices `index-2017.03.01` and `index-2017.03.02` will be the `2` which will be removed from the actionable list, leaving `index-2017.03.03`, `index-2017.03.04`, and `index-2017.03.05` to be acted on by the given <>.

=== Required settings

* <>

=== Optional settings

* <>
* <>
* <> (required if `use_age` is `True`)
* <> (required if `source` is `name`)
* <> (default is `False`)

=== Index-only settings

* <> (required if `source` is `field_stats`)
* <> (only used if `source` is `field_stats`)

[[filtertype_forcemerged]]
== forcemerged

[source,yaml]
-------------
- filtertype: forcemerged
  max_num_segments: 2
  exclude: True
-------------

NOTE: Empty values and commented lines will result in the default value, if any, being selected. If a setting is set, but not used by a given <>, it may generate an error.

This <> will iterate over the actionable list and match indices which have `max_num_segments` segments per shard, or fewer. They will remain in, or be removed from the actionable list based on the value of <>.

=== Required settings

* <>

=== Optional settings

* <> (default is `True`)

[[filtertype_kibana]]
== kibana

[source,yaml]
-------------
- filtertype: kibana
  exclude: True
-------------

This <> will remove indices `.kibana`, `.marvel-kibana`, `kibana-int`, and `.marvel-es-data` from the list of indices, if present.

This <> will iterate over the actionable list and match indices `.kibana`, `.marvel-kibana`, `kibana-int`, or `.marvel-es-data`. They will remain in, or be removed from the actionable list based on the value of <>.

=== Optional settings

* <> (default is `True`)
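To illustrate, the `kibana` filter is usually chained ahead of other filters so that these special indices can never be selected. The following is only a sketch: the 45 day retention and the use of `ignore_empty_list` are assumptions for this example, not requirements.

[source,yaml]
-------------
actions:
  1:
    action: delete_indices
    description: >-
      Delete indices older than 45 days, but never the special indices
      matched (and excluded) by the kibana filtertype
    options:
      ignore_empty_list: True
    filters:
    - filtertype: kibana
      exclude: True
    - filtertype: age
      source: creation_date
      direction: older
      unit: days
      unit_count: 45
-------------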
[[filtertype_none]]
== none

[source,yaml]
-------------
- filtertype: none
-------------

This <> will not filter anything, returning the full list of indices or snapshots.

There are no settings for this <>.

[[filtertype_opened]]
== opened

[source,yaml]
-------------
- filtertype: opened
  exclude: True
-------------

This <> will iterate over the actionable list and match indices which are opened. They will remain in, or be removed from the actionable list based on the value of <>.

=== Optional settings

* <> (default is `True`)

[[filtertype_pattern]]
== pattern

[source,yaml]
-------------
- filtertype: pattern
  kind: ...
  value: ...
-------------

NOTE: Empty values and commented lines will result in the default value, if any, being selected. If a setting is set, but not used by a given <>, it may generate an error.

This <> will iterate over the actionable list and match indices matching a given pattern. They will remain in, or be removed from the actionable list based on the value of <>.

include::inc_filter_chaining.asciidoc[]

include::inc_kinds.asciidoc[]

=== Required settings

* <>
* <>

=== Optional settings

* <> (default is `False`)

[[filtertype_period]]
== period

This <> will iterate over the actionable list and match indices or snapshots based on whether they fit within the given time range. They will remain in, or be removed from the actionable list based on the value of <>.

[source,yaml]
-------------
- filtertype: period
  source: name
  range_from: -1
  range_to: -1
  timestring: '%Y.%m.%d'
  unit: weeks
  week_starts_on: sunday
-------------

NOTE: Empty values and commented lines will result in the default value, if any, being selected. If a setting is set, but not used by a given <>, it may generate an error.

=== Periods, or Date Ranges

For the purposes of this filter, a <> can be one of `hours`, `days`, `weeks`, `months`, or `years`. Unless an <> timestamp is provided, reckoning will be centered on execution time. Reckoning is truncated to the most recent whole unit. For example, if I selected `hours` as my `unit`, and I began execution at 02:35, then the point of reckoning would be 02:00.

This is relatively easy with `days`, `months`, and `years`, but slightly more complicated with `weeks`. Some users may wish to reckon weeks by the ISO standard, which starts weeks on Monday. Others may wish to use Sunday as the first day of the week. Both are acceptable options with the `period` filter. The default behavior for `weeks` is to have Sunday be the start of the week. This can be overridden with <> as follows:

[source,yaml]
-------------
- filtertype: period
  source: name
  range_from: -1
  range_to: -1
  timestring: '%Y.%m.%d'
  unit: weeks
  week_starts_on: monday
-------------

<> and <> are counters of whole <>. A negative number indicates a whole unit in the past, while a positive number indicates a whole unit in the future. A `0` indicates the present unit. With such a timeline mentality, it is relatively easy to create a date range that will meet your needs.

If the time of execution is *2017-04-03T13:45:23.831*, this table will help you figure out what the previous whole unit, current unit, and next whole unit will be, in ISO8601 format.
[frame="topbot",options="header"] |====================================================================== |unit |-1 |0 |+1 |hours |2017-04-03T12:00:00|2017-04-03T13:00:00|2017-04-03T14:00:00 |days |2017-04-02T00:00:00|2017-04-03T00:00:00|2017-04-04T00:00:00 |weeks sun |2017-03-26T00:00:00|2017-04-02T00:00:00|2017-04-09T00:00:00 |weeks mon |2017-03-27T00:00:00|2017-04-03T00:00:00|2017-04-10T00:00:00 |months |2017-03-01T00:00:00|2017-04-01T00:00:00|2017-05-01T00:00:00 |years |2016-01-01T00:00:00|2017-01-01T00:00:00|2018-01-01T00:00:00 |====================================================================== Ranges must be from older dates to newer dates, or smaller numbers (including negative numbers) to larger numbers or Curator will return an exception. An example `period` filter demonstrating how to select all daily indices by timestring found in the index name from last month might look like this: [source,yaml] ------------- - filtertype: period source: name range_from: -1 range_to: -1 timestring: '%Y.%m.%d' unit: months ------------- Having `range_from` and `range_to` both be the same value will mean that only that whole unit will be selected, in this case, a month. === Required settings * <> * <> * <> * <> === Dependent settings * <> (required if `source` is `name`) * <> (required if `source` is `field_stats`) [Indices only] * <> (only used if `source` is `field_stats`) [Indices only] === Optional settings * <> * <> (default is `False`) * <> [[filtertype_space]] == space This <> will iterate over the actionable list and match indices when their cumulative disk consumption exceeds <> gigabytes. They are first ordered by age, or by alphabet, so as to guarantee the oldest indices are deleted first. They will remain in, or be removed from the actionable list based on the value of <>. === Deleting Indices By Space [source,yaml] ------------- - filtertype: space disk_space: 100 ------------- This <> is for those who want to retain indices based on disk consumption, rather than by a set number of days. There are some important caveats regarding this choice: * Elasticsearch cannot calculate the size of closed indices. Elasticsearch does not keep tabs on how much disk-space closed indices consume. If you close indices, your space calculations will be inaccurate. * Indices consume resources just by existing. You could run into performance and/or operational snags in Elasticsearch as the count of indices climbs. * You need to manually calculate how much space across all nodes. The total you give will be the sum of all space consumed across all nodes in your cluster. If you use shard allocation to put more shards or indices on a single node, it will not affect the total space reported by the cluster, but you may still run out of space on that node. These are only a few of the caveats. This is still a valid use-case, especially for those running a single-node test box. For use cases, where "like" indices are being counted, and their name pattern guarantees date sorting is equal to alphabetical sorting, it is unnecessary to set <> to `True`, as index names will be sorted in <> order by default. For this case, this means that disk space calculations will start beginning with the _newest_ indices, and proceeding through to the oldest. 
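Putting this together, a complete action built on this filter might look like the following sketch, where the `logstash-` prefix and the 100GB threshold are placeholder values, not recommendations:

[source,yaml]
-------------
actions:
  1:
    action: delete_indices
    description: >-
      Delete the oldest logstash- indices once the matched indices
      collectively exceed 100 gigabytes
    options:
      ignore_empty_list: True
    filters:
    - filtertype: pattern
      kind: prefix
      value: logstash-
    - filtertype: space
      disk_space: 100
-------------

As written, this relies on the alphabetical-equals-chronological name sorting described in the preceding paragraph.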
=== Age-based sorting

Where this is not the case, the <> setting can be used to ensure that index or snapshot ages are properly considered for sorting:

[source,yaml]
-------------
- filtertype: space
  disk_space: 100
  use_age: True
  source: creation_date
-------------

All of the age-related settings from the <> filter are supported.

=== Reversing sorting

IMPORTANT: The <> setting is ignored when <> is `True`. When <> is `True`, sorting is _always_ from newest to oldest, ensuring that old indices are always selected first.

Using the default configuration, <> is `True`. Given these indices:

[source,sh]
-------------
index1 10g
index2 10g
index3 10g
index4 10g
index5 10g
-------------

And this filter:

[source,yaml]
-------------
- filtertype: space
  disk_space: 21
-------------

The indices will be sorted alphabetically and iterated over in the indicated order (the value of <>) and the total disk space compared after adding the size of each successive index. In this example, that means that `index5` will be added first, and the running total of consumed disk space will be `10g`. Since `10g` is less than the indicated threshold of `21`, `index5` will be removed from the actionable list.

On the next iteration, the amount of space consumed by `index4` will be added, which brings the running total to `20g`, which is still less than the `21` threshold, so `index4` is also removed from the actionable list.

This process changes when the iteration adds the disk space consumed by `index3`. Now the running total crosses the `21` threshold established by <> (the running total is now `30g`). Even though it is only `1g` in excess of the total, `index3` will remain in the actionable list. The threshold is absolute.

The remaining indices, `index2` and `index1` will also be in excess of the threshold, so they will also remain in the actionable list.

So in this example `index1`, `index2`, and `index3` will be acted on by the <> for this block.

If you were to run this with <> set to `DEBUG`, you might see messages like these in the output:

[source,sh]
-------------
...Removed from actionable list: index5, summed disk usage is 10GB and disk limit is 21.0GB.
...Removed from actionable list: index4, summed disk usage is 20GB and disk limit is 21.0GB.
...Remains in actionable list: index3, summed disk usage is 30GB and disk limit is 21.0GB.
...Remains in actionable list: index2, summed disk usage is 40GB and disk limit is 21.0GB.
...Remains in actionable list: index1, summed disk usage is 50GB and disk limit is 21.0GB.
-------------

In some cases, you may wish to filter in the reverse order. To accomplish this, you set <> to `False`:

[source,yaml]
-------------
- filtertype: space
  disk_space: 21
  reverse: False
-------------

This time indices `index1` and `index2` will be the ones removed from the actionable list, leaving `index3`, `index4`, and `index5` to be acted on by the given <>.

If you were to run this with <> set to `DEBUG`, you might see messages like these in the output:

[source,sh]
-------------
...Removed from actionable list: index1, summed disk usage is 10GB and disk limit is 21.0GB.
...Removed from actionable list: index2, summed disk usage is 20GB and disk limit is 21.0GB. ...Remains in actionable list: index3, summed disk usage is 30GB and disk limit is 21.0GB. ...Remains in actionable list: index4, summed disk usage is 40GB and disk limit is 21.0GB. ...Remains in actionable list: index5, summed disk usage is 50GB and disk limit is 21.0GB. ------------- === Required settings * <> === Optional settings * <> * <> * <> (required if `use_age` is `True`) * <> (required if `source` is `name`) * <> (required if `source` is `field_stats`) * <> (only used if `source` is `field_stats`) * <> (default is `False`) [[filtertype_state]] == state [source,yaml] ------------- - filtertype: state state: SUCCESS ------------- NOTE: Empty values and commented lines will result in the default value, if any, being selected. If a setting is set, but not used by a given <>, it may generate an error. This <> will iterate over the actionable list and match snapshots based on the value of <>. They will remain in, or be removed from the actionable list based on the value of <>. === Required settings * <> === Optional settings * <> (default is `False`) curator-5.2.0/docs/asciidoc/inc_filepath.asciidoc000066400000000000000000000007031315226075300220340ustar00rootroot00000000000000[TIP] .File paths ===================================================================== File paths can be specified as follows: *For Windows:* [source,sh] ------------- 'C:\path\to\file' ------------- *For Linux, BSD, Mac OS:* [source,sh] ------------- '/path/to/file' ------------- Using single-quotes around your file path is encouraged, especially with Windows file paths. ===================================================================== curator-5.2.0/docs/asciidoc/inc_filter_by_aliases.asciidoc000066400000000000000000000012151315226075300237170ustar00rootroot00000000000000[IMPORTANT] .API Change in Elasticsearch 5.5.0 ============================ https://www.elastic.co/guide/en/elasticsearch/reference/5.5/breaking-changes-5.5.html#breaking_55_rest_changes[An update to Elasticsearch 5.5.0 changes the behavior of this filter, differing from previous 5.x versions]. If a list of <> is provided (instead of only one), indices must appear in _all_ listed <> or a 404 error will result, leading to no indices being matched. In older versions, if the index was associated with even one of the aliases in <>, it would result in a match. ============================ curator-5.2.0/docs/asciidoc/inc_filter_chaining.asciidoc000066400000000000000000000017141315226075300233700ustar00rootroot00000000000000[NOTE] .Filter chaining ===================================================================== It is important to note that while filters can be chained, each is linked by an implied logical *AND* operation. If you want to match from one of several different patterns, as with a logical *OR* operation, you can do so with the <> filtertype using _regex_ as the <>. This example shows how to select multiple indices based on them beginning with either `alpha-`, `bravo-`, or `charlie-`: [source,yaml] ------------- filters: - filtertype: pattern kind: regex value: '^(alpha-|bravo-|charlie-).*$' ------------- Explaining all of the different ways in which regular expressions can be used is outside the scope of this document, but hopefully this gives you some idea of how a regular expression pattern can be used when a logical *OR* is desired. 
=====================================================================

The different <> are described as follows:

=== prefix

To match all indices starting with `logstash-`:

[source,yaml]
-------------
- filtertype: pattern
  kind: prefix
  value: logstash-
-------------

To match all indices _except_ those starting with `logstash-`:

[source,yaml]
-------------
- filtertype: pattern
  kind: prefix
  value: logstash-
  exclude: True
-------------

=== suffix

To match all indices ending with `-prod`:

[source,yaml]
-------------
- filtertype: pattern
  kind: suffix
  value: -prod
-------------

To match all indices _except_ those ending with `-prod`:

[source,yaml]
-------------
- filtertype: pattern
  kind: suffix
  value: -prod
  exclude: True
-------------

=== timestring

IMPORTANT: No age calculation takes place here. It is strictly a pattern match.

To match all indices with a Year.month.day pattern, like `index-2017.04.01`:

[source,yaml]
-------------
- filtertype: pattern
  kind: timestring
  value: '%Y.%m.%d'
-------------

To match all indices _except_ those with a Year.month.day pattern, like `index-2017.04.01`:

[source,yaml]
-------------
- filtertype: pattern
  kind: timestring
  value: '%Y.%m.%d'
  exclude: True
-------------

include::inc_timestring_regex.asciidoc[]

=== regex

This <> allows you to design a regular expression to match indices or snapshots:

To match all indices starting with `a-`, `b-`, or `c-`:

[source,yaml]
-------------
- filtertype: pattern
  kind: regex
  value: '^a-|^b-|^c-'
-------------

To match all indices _except_ those starting with `a-`, `b-`, or `c-`:

[source,yaml]
-------------
- filtertype: pattern
  kind: regex
  value: '^a-|^b-|^c-'
  exclude: True
-------------

=== `name`-based ages

Using `name` as the `source` tells Curator to look for a <> within the index or snapshot name, and convert that into an epoch timestamp (epoch implies UTC).

[source,yaml]
-------------
- filtertype: age
  source: name
  direction: older
  timestring: '%Y.%m.%d'
  unit: days
  unit_count: 3
-------------

include::inc_timestring_regex.asciidoc[]

=== `creation_date`-based ages

`creation_date` extracts the epoch time of index or snapshot creation.

[source,yaml]
-------------
- filtertype: age
  source: creation_date
  direction: older
  unit: days
  unit_count: 3
-------------

=== `field_stats`-based ages

NOTE: `source` can only be `field_stats` when filtering indices.

`field_stats` uses the {ref}/search-field-stats.html[Field Stats API] to calculate either the `min_value` or the `max_value` of the <> as the <>, and then use that value for age comparisons. <> must be of type `date` in Elasticsearch.

[source,yaml]
-------------
- filtertype: age
  source: field_stats
  direction: older
  unit: days
  unit_count: 3
  field: '@timestamp'
  stats_result: min_value
-------------

The identifiers that Curator currently recognizes include:

[width="50%", cols="<m,<"]
|=======================================================================
|Unit |Value

|`%Y` |A 4 digit year

|`%y` |A 2 digit year

|`%m` |The 2 digit month

|`%W` |The 2 digit week of the year

|`%d` |The 2 digit day of the month

|`%H` |The 2 digit hour of the day, in 24 hour notation

|`%M` |The 2 digit minute

|`%S` |The 2 digit second

|`%j` |The 3 digit day of the year
|=======================================================================

<>s are calculated as follows:

[width="50%", cols="<m,<"]
|=======================================================================
|Unit |Number of seconds

|`seconds` |`1`

|`minutes` |`60`

|`hours` |`3600` (60*60)

|`days` |`86400` (24*60*60)

|`weeks` |`604800` (7*24*60*60)

|`months` |`2592000` (30*24*60*60)

|`years` |`31536000` (365*24*60*60)
|=======================================================================

[[installation]]
= Installation

[partintro]
--
See the <> page for more information.

* <>, the easiest way to use and upgrade.
* <>, installs a single, binary package!
* <>, installs a single, binary package!
* <> * <> * <> -- [[pip]] == pip This installation procedure utilizes https://pip.pypa.io/en/latest/installing.html[python pip], and requires that the target machine has internet connectivity. --------------------------------- pip install elasticsearch-curator --------------------------------- [IMPORTANT] ====================== If you plan on using Curator with AWS ES using IAM credentials, you must also install the `requests_aws4auth` python module: --------------------------------- pip install requests_aws4auth --------------------------------- ====================== === Upgrading with pip If you already have Elasticsearch Curator installed, and want to upgrade to the latest version, use the `-U` flag: ------------------------------------ pip install -U elasticsearch-curator ------------------------------------ === Installing a specific version with pip The `-U` flag uninstalls the current version (if any), then installs the latest version, or a specified one. Specify a specific version by adding `==` followed by the version you'd like to install, like this: ------------------------------------------- pip install -U elasticsearch-curator==X.Y.Z ------------------------------------------- === System-wide vs. User-only installation The above commands each imply a system-wide installation. This usually requires super-user access, or the `sudo` command. There is a way to install Curator into a path for just the current user, using the `--user` flag. ---------------------------------------- pip install --user elasticsearch-curator ---------------------------------------- This will result in the `curator` end-point being installed in the current user's home directory, in the `.local` directory, in the `bin` subdirectory. The full path might look something like this: ----------------------------- /home/user/.local/bin/curator ----------------------------- You can make an alias or a symlink to this so you can call it more easily. The `--user` flag can also be used in conjunction with the `-U` flag: ---------------------------------------- pip install -U --user elasticsearch-curator==X.Y.Z ----------------------------------------   [[apt-repository]] == APT repository We use the PGP key http://pgp.mit.edu/pks/lookup?op=vindex&search=0xD27D666CD88E42B4[D88E42B4], Elastic's Signing Key, with fingerprint 4609 5ACC 8548 582C 1A26 99A9 D27D 666C D88E 42B4 to sign all our packages. PGP is available from http://pgp.mit.edu. === Signing Key Download and install the Public Signing Key: [source,sh] -------------------------------------------------- wget -qO - https://packages.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add - -------------------------------------------------- === Direct Package Download Link * https://packages.elastic.co/curator/{curator_major}/debian/pool/main/e/elasticsearch-curator/elasticsearch-curator_{curator_version}_amd64.deb[Elasticsearch Curator {curator_version} Binary Package (DEB)] * https://packages.elastic.co/curator/{curator_major}/debian9/pool/main/e/elasticsearch-curator/elasticsearch-curator_{curator_version}_amd64.deb[Elasticsearch Curator {curator_version} Binary Package for newer Debian 9 based systems (DEB)] === Repository Configuration [IMPORTANT] ==================================================== Debian 9 was recently released, and it uses OpenSSL 1.1.0 libraries, where earlier releases, including Debian 8, Ubuntu 12.04 - 16.04 LTS, and others used OpenSSL 1.0.0 libraries. 
As a consequence, a new repository has been introduced, in order to preserve the functionality of the old one, without causing collisions. Please use the appropriate package or repository. ==================================================== Add one of the following -- **noting the correct path, `debian` or `debian9`** -- in your `/etc/apt/sources.list.d/` directory in a file with a `.list` suffix, for example `curator.list` ["source","sh",subs="attributes,callouts"] -------------------------------------------------- deb [arch=amd64] http://packages.elastic.co/curator/{curator_major}/debian stable main -------------------------------------------------- or ["source","sh",subs="attributes,callouts"] -------------------------------------------------- deb [arch=amd64] http://packages.elastic.co/curator/{curator_major}/debian9 stable main -------------------------------------------------- After running `sudo apt-get update`, the repository is ready for use. [WARNING] ================================================== Use the file edit method described above to add the Curator repository. Do not use `add-apt-repository` as it will add a `deb-src` entry as well, but we do not provide a source package. If you have added the `deb-src` entry, you will see an error like the following: Unable to find expected entry 'main/source/Sources' in Release file (Wrong sources.list entry or malformed file) Just delete the `deb-src` entry from the `/etc/apt/sources.list.d/curator.list` file and the installation should work as expected. ================================================== [[apt-binary]] === Binary Package Installation The APT version of Curator is a binary version. What this really means is that the source is frozen, and all required libraries are bundled with the `curator` binary, so there are no conflicts. The packages resulting from the creation process have been tested on Debian 8, and Ubuntu 12.04, 14.04, & 16.04. It was also tested on Debian 7, but that failed due to a libc library mismatch. This binary package may work on other similar or derivative variants, but they have not been tested. [source,sh] -------------------------------------------------- sudo apt-get update && sudo apt-get install elasticsearch-curator -------------------------------------------------- This will install the necessary files into `/opt/elasticsearch-curator` and make a symlink at `/usr/bin/curator` that points to the `curator` binary in the aforementioned directory. [[yum-repository]] == YUM repository We use the PGP key http://pgp.mit.edu/pks/lookup?op=vindex&search=0xD27D666CD88E42B4[D88E42B4], Elastic's Signing Key, with fingerprint 4609 5ACC 8548 582C 1A26 99A9 D27D 666C D88E 42B4 to sign all our packages. It is available from http://pgp.mit.edu. 
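If you download the RPM directly rather than configuring the repository, you can verify the package signature once the key described below has been imported. This is an optional sanity check; substitute the actual file name you downloaded:

[source,sh]
-----------
rpm --checksig elasticsearch-curator-*.rpm
-----------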
=== Signing Key Download and install the public signing key: [source,sh] -------------------------------------------------- rpm --import https://packages.elastic.co/GPG-KEY-elasticsearch -------------------------------------------------- === Direct Package Download Link * https://packages.elastic.co/curator/{curator_major}/centos/6/Packages/elasticsearch-curator-{curator_version}-1.x86_64.rpm[Elasticsearch Curator {curator_version} RHEL/CentOS 6 Binary Package (RPM)] * https://packages.elastic.co/curator/{curator_major}/centos/7/Packages/elasticsearch-curator-{curator_version}-1.x86_64.rpm[Elasticsearch Curator {curator_version} RHEL/CentOS 7 Binary Package (RPM)] === Repository Configuration Add the following in your `/etc/yum.repos.d/` directory in a file with a `.repo` suffix, for example `curator.repo` [WARNING] ======================================== The repositories are different for CentOS/RHEL 6 and 7 due to library and path differences. Be sure to use the correct version for your system! RHEL/CentOS 6: ["source","sh",subs="attributes,callouts"] -------------------------------------------------- [curator-{curator_major}] name=CentOS/RHEL 6 repository for Elasticsearch Curator {curator_major}.x packages baseurl=http://packages.elastic.co/curator/{curator_major}/centos/6 gpgcheck=1 gpgkey=http://packages.elastic.co/GPG-KEY-elasticsearch enabled=1 -------------------------------------------------- RHEL/CentOS 7: ["source","sh",subs="attributes,callouts"] -------------------------------------------------- [curator-{curator_major}] name=CentOS/RHEL 7 repository for Elasticsearch Curator {curator_major}.x packages baseurl=http://packages.elastic.co/curator/{curator_major}/centos/7 gpgcheck=1 gpgkey=http://packages.elastic.co/GPG-KEY-elasticsearch enabled=1 -------------------------------------------------- ========================================= [[yum-binary]] === Binary Package Installation The RPM version of Curator is a binary version. What this really means is that the source is frozen, and all required libraries are bundled with the `curator` binary, so there are no conflicts. There are separate binary packages for RedHat variants. The binary packages resulting from the creation process have been tested on CentOS 6 & 7, with a different binary for each. They may work on similar variants and/or derivatives, but they have not been tested. [source,sh] ---------------------------------------- yum install elasticsearch-curator ---------------------------------------- This will install the necessary files into `/opt/elasticsearch-curator` and make a symlink at `/usr/bin/curator` that points to the `curator` binary in the aforementioned directory. [[windows-zip]] == Windows Binary Zip Package If you do not wish to install and maintain Python on Windows, there is a compiled binary version available. It is in a directory with EXE files and all necessary libraries that Python requires. You can navigate to the directory and run the `curator` command just as you otherwise would. WARNING: If you do have Python installed, do not uncompress the zip file into your Python directory. It can cause library path collisions which will prevent Curator from properly functioning. 
* https://packages.elastic.co/curator/{curator_major}/windows/elasticsearch-curator-{curator_version}-amd64.zip[Download Curator]
** https://packages.elastic.co/curator/{curator_major}/windows/elasticsearch-curator-{curator_version}-amd64.zip.md5.txt[MD5]
** https://packages.elastic.co/curator/{curator_major}/windows/elasticsearch-curator-{curator_version}-amd64.zip.sha1.txt[SHA1]

[[windows-msi]]
== Windows MSI Installer

There is now a rudimentary MSI installer available for you to try. One known issue is that in-place upgrades are not possible. Subsequent installs will be side-by-side. The recommended course of action is to uninstall the old version, then install the new one.

The installation will default to `"C:\Program Files (x86)\elasticsearch-curator"`. The same binaries and libraries found in the Windows Binary Package will be installed into this directory.

* https://packages.elastic.co/curator/{curator_major}/windows/elasticsearch-curator-{curator_version}-amd64.msi[Download Curator Installer]
** https://packages.elastic.co/curator/{curator_major}/windows/elasticsearch-curator-{curator_version}-amd64.msi.md5.txt[MD5]
** https://packages.elastic.co/curator/{curator_major}/windows/elasticsearch-curator-{curator_version}-amd64.msi.sha1.txt[SHA1]

[[python-source]]
== Installation from source

Installing Curator from source is also possible. Doing so requires that all dependent libraries be installed first.

If you have `pip` installed, then you can install from a gzipped file. If not, you have to uncompress the gzipped file and run `python setup.py install`. That might look like this:

[source,sh]
--------------------------------------
wget https://pypi.python.org/packages/source/p/package/package-#.#.#.tar.gz
tar zxf package-#.#.#.tar.gz
cd package-#.#.#
python setup.py install
--------------------------------------

The dependencies are as follows:

=== setuptools

Download https://bootstrap.pypa.io/ez_setup.py[ez_setup.py] and run it using the target Python version. The script will download the appropriate version and install it for you:

[source,sh]
-----------
wget https://bootstrap.pypa.io/ez_setup.py -O - | python
-----------

Note that you will need to invoke the command with superuser privileges to install to the system Python:

[source,sh]
-----------
wget https://bootstrap.pypa.io/ez_setup.py -O - | sudo python
-----------

Alternatively, setuptools may be installed to a user-local path:

[source,sh]
-----------
wget https://bootstrap.pypa.io/ez_setup.py -O - | python - --user
-----------

=== Urllib3

Download and install the https://github.com/shazow/urllib3[urllib3] dependency (1.8.3 or greater):

. `wget https://pypi.python.org/packages/source/u/urllib3/urllib3-1.20.tar.gz`
. `pip install urllib3-1.20.tar.gz`

or uncompress and run `python setup.py install`

=== click

Download and install the http://click.pocoo.org/[click] dependency (6.0 or greater):

. `wget https://pypi.python.org/packages/source/c/click/click-6.7.tar.gz -O click-6.7.tar.gz`
. `pip install click-6.7.tar.gz`

or uncompress and run `python setup.py install`

=== certifi

Download and install the `certifi` dependency. Always use the most recent version:

. `wget https://github.com/certifi/python-certifi/archive/2017.1.23.tar.gz -O certifi.tar.gz`
. `pip install certifi.tar.gz`

=== PyYAML

Download and install the http://pyyaml.org/wiki/PyYAML/[PyYAML] dependency (3.10 or greater):

. `wget http://pyyaml.org/download/pyyaml/PyYAML-3.12.tar.gz -O PyYAML-3.12.tar.gz`
.
`pip install PyYAML-3.12.tar.gz` or uncompress and run `python setup.py install`   === voluptuous Download and install the https://github.com/alecthomas/voluptuous[voluptuous] dependency (0.9.3 or greater): . `wget https://github.com/alecthomas/voluptuous/archive/0.9.3.tar.gz` . `pip install 0.9.3.tar.gz` or uncompress and run `python setup.py install`   === elasticsearch (python module) Download and install the https://github.com/elastic/elasticsearch-py[elasticsearch-py] dependency: . `wget https://github.com/elastic/elasticsearch-py/archive/`+pass:attributes[{es_py_version}].tar.gz -O elasticsearch-py.tar.gz+ . `pip install elasticsearch-py.tar.gz` or uncompress and run `python setup.py install`   === elasticsearch-curator (python module) Download and install Curator: . `wget https://github.com/elastic/curator/archive/v`+pass:attributes[{curator_version}].tar.gz -O elasticsearch-curator.tar.gz+ . `pip install elasticsearch-curator.tar.gz` or uncompress and run `python setup.py install`. At this point you could also run `run_curator.py` from the source directory as well. curator-5.2.0/docs/asciidoc/options.asciidoc000066400000000000000000002331461315226075300211130ustar00rootroot00000000000000[[options]] = Options [partintro] -- Options are settings used by <>. * <> * <> * <> * <> * <> * <> * <> * <> * <> * <> * <> * <> * <> * <> * <> * <> * <> * <> * <> * <> * <> * <> * <> * <> * <> * <> * <> * <> * <> * <> * <> * <> * <> * <> * <> * <> * <> * <> * <> * <> * <> * <> * <> * <> * <> * <> * <> * <> * <> * <> You can use <> in your configuration files. -- [[option_allocation_type]] == allocation_type NOTE: This setting is used only when using the <> [source,yaml] ------------- action: allocation description: "Apply shard allocation filtering rules to the specified indices" options: key: ... value: ... allocation_type: ... filters: - filtertype: ... ------------- The value of this setting must be one of `require`, `include`, or `exclude`. Read more about these settings at {ref}/shard-allocation-filtering.html The default value for this setting is `require`. [[option_continue]] == continue_if_exception [IMPORTANT] .Using `ignore_empty_list` rather than `continue_if_exception` ==================================== Curator has two general classifications of exceptions: Empty list exceptions, and everything else. The empty list conditions are `curator.exception.NoIndices` and `curator.exception.NoSnapshots`. The `continue_if_exception` option _only_ catches conditions _other_ than empty list conditions. In most cases, you will want to use `ignore_empty_list` instead of `continue_if_exception`. So why are there two kinds of exceptions? When Curator 4 was released, the ability to continue in the event of any exception was covered by the `continue_if_exception` option. However, an empty list is a _benign_ condition. In fact, it's expected with brand new clusters, or when new index patterns are added. The decision was made to split the exceptions, and have a new option catch the empty lists. See <> for more information. ==================================== NOTE: This setting is available in all actions. [source,yaml] ------------- action: delete_indices description: "Delete selected indices" options: continue_if_exception: False filters: - filtertype: ... ------------- If `continue_if_exception` is set to `True`, Curator will attempt to continue on to the next action, if any, even if an exception is encountered. Curator will log but ignore the exception that was raised. 
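For example, a minimal sketch of an action that tolerates both empty lists and other exceptions by combining `continue_if_exception` with `ignore_empty_list` (the `delete_indices` action and the placeholder filter are only illustrative):

[source,yaml]
-------------
action: delete_indices
description: >-
  Delete selected indices, logging and ignoring empty lists and any
  other exceptions raised along the way
options:
  ignore_empty_list: True
  continue_if_exception: True
filters:
- filtertype: ...
-------------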
The default value for this setting is `False` [[option_count]] == count NOTE: This setting is required when using the <>. [source,yaml] ------------- action: replicas description: >- Set the number of replicas per shard for selected indices to 'count' options: count: ... filters: - filtertype: ... ------------- The value for this setting is the number of replicas to assign to matching indices. There is no default value. This setting must be set by the user or an exception will be raised, and execution will halt. [[option_delay]] == delay NOTE: This setting is only used by the <>, and is optional. [source,yaml] ------------- action: forcemerge description: >- Perform a forceMerge on selected indices to 'max_num_segments' per shard options: max_num_segments: 2 timeout_override: 21600 delay: 120 filters: - filtertype: ... ------------- The value for this setting is the number of seconds to delay between forceMerging indices, to allow the cluster to quiesce. There is no default value. [[option_delete_after]] == delete_after NOTE: This setting is only used by the <> action. [source,yaml] ------------- action: shrink description: >- Shrink selected indices on the node with the most available space. Delete source index after successful shrink. options: shrink_node: DETERMINISTIC delete_after: True filters: - filtertype: ... ------------- The default value of this setting is `True`. After an index has been successfully shrunk, the source index will be deleted or preserved based on the value of this setting. [[option_delete_aliases]] == delete_aliases NOTE: This setting is only used by the <>, and is optional. [source,yaml] ------------- action: close description: "Close selected indices" options: delete_aliases: False filters: - filtertype: ... ------------- The value for this setting determines whether aliases will be deleted from indices before closing. The default value is `False`. [[option_disable]] == disable_action NOTE: This setting is available in all actions. [source,yaml] ------------- action: delete_indices description: "Delete selected indices" options: disable_action: False filters: - filtertype: ... ------------- If `disable_action` is set to `True`, Curator will ignore the current action. This may be useful for temporarily disabling actions in a large configuration file. The default value for this setting is `False` [[option_extra_settings]] == extra_settings This setting should be nested YAML. The values beneath `extra_settings` will be used by whichever action uses the option. === <> [source,yaml] ------------- action: alias description: "Add/Remove selected indices to or from the specified alias" options: name: alias_name extra_settings: filter: term: user: kimchy add: filters: - filtertype: ... remove: filters: - filtertype: ... ------------- === <> [source,yaml] ------------- action: create_index description: "Create index as named" options: name: myindex # ... extra_settings: settings: number_of_shards: 1 number_of_replicas: 0 mappings: type1: properties: field1: type: string index: not_analyzed ------------- === <> See the {ref}/modules-snapshots.html#_changing_index_settings_during_restore[official Elasticsearch Documentation]. [source,yaml] ------------- actions: 1: action: restore description: >- Restore all indices in the most recent snapshot with state SUCCESS. Wait for the restore to complete before continuing. Do not skip the repository filesystem access check. Use the other options to define the index/shard settings for the restore. 
options: repository: # If name is blank, the most recent snapshot by age will be selected name: # If indices is blank, all indices in the snapshot will be restored indices: extra_settings: index_settings: number_of_replicas: 0 wait_for_completion: True max_wait: 3600 wait_interval: 10 filters: - filtertype: state state: SUCCESS exclude: - filtertype: ... ------------- === <> [source,yaml] ------------- action: rollover description: >- Rollover the index associated with index 'name', which should be in the form of prefix-000001 (or similar), or prefix-YYYY.MM.DD-1. options: name: aliasname conditions: max_age: 1d max_docs: 1000000 extra_settings: index.number_of_shards: 3 index.number_of_replicas: 1 timeout_override: continue_if_exception: False disable_action: False ------------- === <> NOTE: https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-shrink-index.html#_shrinking_an_index[Only `settings` and `aliases` are acceptable] when used in <>. [source,yaml] ------------- action: shrink description: >- Shrink selected indices on the node with the most available space. Delete source index after successful shrink, then reroute the shrunk index with the provided parameters. options: shrink_node: DETERMINISTIC extra_settings: settings: index.codec: best_compression aliases: my_alias: {} filters: - filtertype: ... ------------- There is no default value. [[option_ignore_empty]] == ignore_empty_list This setting must be either `True` or `False`. [source,yaml] ------------- action: delete_indices description: "Delete selected indices" options: ignore_empty_list: True filters: - filtertype: ... ------------- Depending on your indices, and how you've filtered them, an empty list may be presented to the action. This results in an error condition. When the ignore_empty_list option is set to `True`, the action will exit with an INFO level log message indicating such. If ignore_empty_list is set to `False`, an ERROR level message will be logged, and Curator will exit with code 1. The default value of this setting is `False` [[option_ignore]] == ignore_unavailable NOTE: This setting is used by the <>, <>, and <> actions. This setting must be either `True` or `False`. The default value of this setting is `False` === <> [source,yaml] ------------- actions: 1: action: restore description: Restore my_index from my_snapshot in my_repository options: repository: my_repository name: my_snapshot indices: my_index ignore_unavailable: True wait_for_completion: True max_wait: 3600 wait_interval: 10 filters: - filtertype: state state: SUCCESS exclude: - filtertype: ... ------------- When the `ignore_unavailable` option is `False` and an index is missing the restore request will fail. === <> [source,yaml] ------------- action: snapshot description: >- Snapshot selected indices to 'repository' with the snapshot name or name pattern in 'name'. Use all other options as assigned options: repository: my_repository name: my_snapshot ignore_unavailable: False wait_for_completion: True max_wait: 3600 wait_interval: 10 filters: - filtertype: ... ------------- When the `ignore_unavailable` option is `False` and an index is missing, the snapshot request will fail. This is not frequently a concern in Curator, as it should only ever find indices that exist. === <> [source,yaml] ------------- action: index_settings description: "Change settings for selected indices" options: index_settings: index: refresh_interval: 5s ignore_unavailable: False preserve_existing: False filters: - filtertype: ... 
-------------

When the `ignore_unavailable` option is `False` and an index is missing, or if the request is to apply a https://www.elastic.co/guide/en/elasticsearch/reference/5.4/index-modules.html#_static_index_settings[static] setting and the index is open, the index setting request will fail. The `ignore_unavailable` option allows these indices to be skipped when set to `True`.

NOTE: https://www.elastic.co/guide/en/elasticsearch/reference/5.4/index-modules.html#dynamic-index-settings[Dynamic] index settings can be applied to either open or closed indices.

[[option_include_aliases]]
== include_aliases

NOTE: This setting is only used by the <> action.

[source,yaml]
-------------
actions:
  1:
    action: restore
    description: Restore my_index from my_snapshot in my_repository
    options:
      repository: my_repository
      name: my_snapshot
      indices: my_index
      include_aliases: True
      wait_for_completion: True
      max_wait: 3600
      wait_interval: 10
    filters:
    - filtertype: state
      state: SUCCESS
      exclude:
    - filtertype: ...
-------------

This setting must be either `True` or `False`.

The value of this setting determines whether Elasticsearch should include index aliases when restoring the snapshot.

The default value of this setting is `False`

[[option_include_gs]]
== include_global_state

NOTE: This setting is used by the <> and <> actions.

This setting must be either `True` or `False`.

The value of this setting determines whether Elasticsearch should include the global cluster state with the snapshot or restore.

When performing a <>, the default value of this setting is `True`.

When performing a <>, the default value of this setting is `False`.

=== <>

[source,yaml]
-------------
actions:
  1:
    action: restore
    description: Restore my_index from my_snapshot in my_repository
    options:
      repository: my_repository
      name: my_snapshot
      indices: my_index
      include_global_state: False
      wait_for_completion: True
      max_wait: 3600
      wait_interval: 10
    filters:
    - filtertype: state
      state: SUCCESS
      exclude:
    - filtertype: ...
-------------

=== <>

[source,yaml]
-------------
action: snapshot
description: >-
  Snapshot selected indices to 'repository' with the snapshot name or
  name pattern in 'name'. Use all other options as assigned
options:
  repository: my_repository
  name: my_snapshot
  include_global_state: True
  wait_for_completion: True
  max_wait: 3600
  wait_interval: 10
filters:
- filtertype: ...
-------------

[[option_indices]]
== indices

NOTE: This setting is only used by the <> action.

=== <>

[source,yaml]
-------------
actions:
  1:
    action: restore
    description: Restore my_index from my_snapshot in my_repository
    options:
      repository: my_repository
      name: my_snapshot
      indices: my_index
      wait_for_completion: True
      max_wait: 3600
      wait_interval: 10
    filters:
    - filtertype: state
      state: SUCCESS
      exclude:
    - filtertype: ...
-------------

This setting must be a list of indices to restore. Any valid YAML list format is acceptable here. If `indices` is left empty, or unset, all indices in the snapshot will be restored.

The default value of this setting is empty.

[[option_key]]
== key

NOTE: This setting is required when using the <>.

[source,yaml]
-------------
action: allocation
description: "Apply shard allocation filtering rules to the specified indices"
options:
  key: ...
  value: ...
  allocation_type: ...
filters:
- filtertype: ...
-------------

The value of this setting should correspond to a node setting on one or more nodes in your cluster.
For example, you might have set [source,sh] ----------- node.tag: myvalue ----------- in your `elasticsearch.yml` file for one or more of your nodes. To match allocation in this case, set key to `tag`. These special attributes are also supported: [cols="2*", options="header"] |=== |attribute |description |`_name` |Match nodes by node name |`_host_ip` |Match nodes by host IP address (IP associated with hostname) |`_publish_ip` |Match nodes by publish IP address |`_ip` |Match either `_host_ip` or `_publish_ip` |`_host` |Match nodes by hostname |=== There is no default value. This setting must be set by the user or an exception will be raised, and execution will halt. [[option_max_age]] == max_age [source,yaml] ------------- action: rollover description: >- Rollover the index associated with index 'name', which should be in the form of prefix-000001 (or similar), or prefix-YYYY.MM.DD-1. options: name: aliasname conditions: max_age: 1d ------------- NOTE: Either <> or <>, or both are required as `conditions:` for the <> action. The maximum age that is allowed before triggering a rollover. _Must be nested under `conditions:`_ There is no default value. If this condition is specified, it must have a value, or Curator will generate an error. Ages such as `1d` for one day, or `30s` for 30 seconds can be used. [[option_max_docs]] == max_docs [source,yaml] ------------- action: rollover description: >- Rollover the index associated with index 'name', which should be in the form of prefix-000001 (or similar), or prefix-YYYY.MM.DD-1. options: name: aliasname conditions: max_docs: 1000000 ------------- NOTE: Either <> or <>, or both are required as `conditions:` for the <> action. The maximum number of documents allowed in an index before triggering a rollover. _Must be nested under `conditions:`_ There is no default value. If this condition is specified, it must have a value, or Curator will generate an error. [[option_mns]] == max_num_segments NOTE: This setting is required when using the <>. [source,yaml] ------------- action: forcemerge description: >- Perform a forceMerge on selected indices to 'max_num_segments' per shard options: max_num_segments: 2 timeout_override: 21600 filters: - filtertype: ... ------------- The value for this setting is the cutoff number of segments per shard. Indices which have more than this number of segments per shard will remain in the index list. There is no default value. This setting must be set by the user or an exception will be raised, and execution will halt. [[option_max_wait]] == max_wait NOTE: This setting is used by the <>, <>, <>, <>, <>, and <> actions. This setting must be a positive integer, or `-1`. This setting specifies how long in seconds to wait to see if the action has completed before giving up. This option is used in conjunction with <>, which is the number of seconds to wait between checking to see if the given action is complete. The default value for this setting is `-1`, meaning that Curator will wait indefinitely for the action to complete. === <> [source,yaml] ------------- action: allocation description: "Apply shard allocation filtering rules to the specified indices" options: key: ... value: ... allocation_type: ... wait_for_completion: True max_wait: 300 wait_interval: 10 filters: - filtertype: ... ------------- === <> [source,yaml] ------------- action: cluster_routing description: "Apply routing rules to the entire cluster" options: routing_type: value: ... 
  setting: enable
  wait_for_completion: True
  max_wait: 300
  wait_interval: 10
-------------

=== <>

[source,yaml]
-------------
actions:
  1:
    description: "Reindex index1 into index2"
    action: reindex
    options:
      wait_interval: 9
      max_wait: -1
      request_body:
        source:
          index: index1
        dest:
          index: index2
    filters:
    - filtertype: none
-------------

=== <>

[source,yaml]
-------------
action: replicas
description: >-
  Set the number of replicas per shard for selected indices to 'count'
options:
  count: ...
  wait_for_completion: True
  max_wait: 600
  wait_interval: 10
filters:
- filtertype: ...
-------------

=== <>

[source,yaml]
-------------
actions:
  1:
    action: restore
    description: Restore my_index from my_snapshot in my_repository
    options:
      repository: my_repository
      name: my_snapshot
      indices: my_index
      include_global_state: False
      wait_for_completion: True
      max_wait: 3600
      wait_interval: 10
    filters:
    - filtertype: state
      state: SUCCESS
      exclude:
    - filtertype: ...
-------------

=== <>

[source,yaml]
-------------
action: snapshot
description: >-
  Snapshot selected indices to 'repository' with the snapshot name or
  name pattern in 'name'. Use all other options as assigned
options:
  repository: my_repository
  name: my_snapshot
  include_global_state: True
  wait_for_completion: True
  max_wait: 3600
  wait_interval: 10
filters:
- filtertype: ...
-------------

[[option_migration_prefix]]
== migration_prefix

NOTE: This setting is used by the <> action.

If the destination index is set to `MIGRATION`, Curator will reindex all selected indices one by one until they have all been reindexed. By configuring `migration_prefix`, a value can be prepended to each of those index names. For example, if I were reindexing `index1`, `index2`, and `index3`, and `migration_prefix` were set to `new-`, then the reindexed indices would be named `new-index1`, `new-index2`, and `new-index3`:

[source,yaml]
-------------
actions:
  1:
    description: >-
      Reindex index1, index2, and index3 with a prefix of new-, resulting in
      indices named new-index1, new-index2, and new-index3
    action: reindex
    options:
      wait_interval: 9
      max_wait: -1
      migration_prefix: new-
      request_body:
        source:
          index: ["index1", "index2", "index3"]
        dest:
          index: MIGRATION
    filters:
    - filtertype: none
-------------

`migration_prefix` can be used in conjunction with <>.

[[option_migration_suffix]]
== migration_suffix

NOTE: This setting is used by the <> action.

If the destination index is set to `MIGRATION`, Curator will reindex all selected indices one by one until they have all been reindexed. By configuring `migration_suffix`, a value can be appended to each of those index names. For example, if I were reindexing `index1`, `index2`, and `index3`, and `migration_suffix` were set to `-new`, then the reindexed indices would be named `index1-new`, `index2-new`, and `index3-new`:

[source,yaml]
-------------
actions:
  1:
    description: >-
      Reindex index1, index2, and index3 with a suffix of -new, resulting in
      indices named index1-new, index2-new, and index3-new
    action: reindex
    options:
      wait_interval: 9
      max_wait: -1
      migration_suffix: -new
      request_body:
        source:
          index: ["index1", "index2", "index3"]
        dest:
          index: MIGRATION
    filters:
    - filtertype: none
-------------

`migration_suffix` can be used in conjunction with <>.

[[option_name]]
== name

NOTE: This setting is used by the <>, <>, and <> actions.

The value of this setting is the name of the alias, snapshot, or index, depending on which action makes use of `name`.

This setting may contain a valid Python strftime string.
Curator will extract the strftime identifiers and replace them with the corresponding values.

The identifiers that Curator currently recognizes include:

[width="50%", cols="m,",options="header"]
|===
|Unit |Value

|%Y |4 digit year
|%y |2 digit year
|%m |2 digit month
|%W |2 digit week of the year
|%d |2 digit day of the month
|%H |2 digit hour of the day, in 24 hour notation
|%M |2 digit minute
|%S |2 digit second
|%j |3 digit day of the year
|===

=== <>

[source,yaml]
-------------
action: alias
description: "Add/Remove selected indices to or from the specified alias"
options:
  name: alias_name
add:
  filters:
  - filtertype: ...
remove:
  filters:
  - filtertype: ...
-------------

This option is required by the <> action, and has no default setting in that context.

=== <>

For the <> action, there is no default setting, but you can use strftime:

[source,yaml]
-------------
action: create_index
description: "Create index as named"
options:
  name: 'myindex-%Y.%m'
  # ...
-------------

or use Elasticsearch {ref}/date-math-index-names.html[date math]

[source,yaml]
-------------
action: create_index
description: "Create index as named"
options:
  name: ''
  # ...
-------------

to name your indices. See more in the <> documentation.

=== <>

[source,yaml]
-------------
action: snapshot
description: >-
  Snapshot selected indices to 'repository' with the snapshot name or
  name pattern in 'name'. Use all other options as assigned
options:
  repository: my_repository
  name:
  include_global_state: True
  wait_for_completion: True
  max_wait: 3600
  wait_interval: 10
filters:
- filtertype: ...
-------------

For the <> action, the default value of this setting is `curator-%Y%m%d%H%M%S`

[[option_node_filters]]
== node_filters

NOTE: This setting is only used by the <> action.

[source,yaml]
-------------
action: shrink
description: >-
  Shrink selected indices on the node with the most available space.
  Allow master/data nodes to be potential shrink targets, but exclude
  'named_node' from potential selection.
options:
  shrink_node: DETERMINISTIC
  node_filters:
    permit_masters: True
    exclude_nodes: ['named_node']
filters:
- filtertype: ...
-------------

There is no default value for `node_filters`.

The current sub-options are as follows:

=== permit_masters

[IMPORTANT]
=====================================
The `permit_masters` sub-option has a default value of `False`. If you have a small cluster with only master/data nodes, you must set `permit_masters` to `True` in order to select one of those nodes as a potential <>.
=====================================

=== exclude_nodes

This option provides a means to exclude nodes from selection when using `DETERMINISTIC` as the value for <>. It should be noted that you _can_ use a named node for <> and then exclude it here, which will prevent a shrink from occurring.

[[option_number_of_replicas]]
== number_of_replicas

NOTE: This setting is only used by the <> action.

[source,yaml]
-------------
action: shrink
description: >-
  Shrink selected indices on the node with the most available space.
  Set the number of replicas to 0.
options:
  shrink_node: DETERMINISTIC
  number_of_replicas: 0
filters:
- filtertype: ...
-------------

The value of this setting determines the number of replica shards per primary shard in the target index. The default value is `1`.

[[option_number_of_shards]]
== number_of_shards

NOTE: This setting is only used by the <> action.

[source,yaml]
-------------
action: shrink
description: >-
  Shrink selected indices on the node with the most available space.
  Set the number of shards to 2.
options:
  shrink_node: DETERMINISTIC
  number_of_shards: 2
filters:
- filtertype: ...
-------------

The value of this setting determines the number of primary shards in the target index. The default value is `1`.
[IMPORTANT]
===========================
The value for `number_of_shards` must meet the following criteria:

* It must be lower than the number of primary shards in the source index.
* It must be a factor of the number of primary shards in the source index.

For example, an index with 8 primary shards can be shrunk to 4, 2, or 1 shards, but not to 3.
===========================

[[option_partial]]
== partial

NOTE: This setting is only used by the <> action.

[source,yaml]
-------------
action: snapshot
description: >-
  Snapshot selected indices to 'repository' with the snapshot name or
  name pattern in 'name'. Use all other options as assigned
options:
  repository: my_repository
  name: ...
  partial: False
  wait_for_completion: True
  max_wait: 3600
  wait_interval: 10
filters:
- filtertype: ...
-------------

This setting must be either `True` or `False`.

The entire snapshot will fail if one or more indices being added to the snapshot do not have all primary shards available. This behavior can be changed by setting partial to `True`.

The default value of this setting is `False`

[[option_post_allocation]]
== post_allocation

NOTE: This setting is only used by the <> action.

[source,yaml]
-------------
action: shrink
description: >-
  Shrink selected indices on the node with the most available space.
  Apply shard routing allocation to target indices.
options:
  shrink_node: DETERMINISTIC
  post_allocation:
    allocation_type: include
    key: node_tag
    value: cold
filters:
- filtertype: ...
-------------

The only permitted subkeys for `post_allocation` are the same options used by the <> action: <>, <>, and <>. If present, these values will be used to apply shard routing allocation to the target index after shrinking.

There is no default value for `post_allocation`.

[[option_preserve_existing]]
== preserve_existing

[source,yaml]
-------------
action: index_settings
description: "Change settings for selected indices"
options:
  index_settings:
    index:
      refresh_interval: 5s
  ignore_unavailable: False
  preserve_existing: False
filters:
- filtertype: ...
-------------

This setting must be either `True` or `False`.

If `preserve_existing` is set to `True`, and the configuration attempts to push a setting to an index that already has any value for that setting, the existing setting will be preserved, and remain unchanged.

The default value of this setting is `False`

[[option_refresh]]
== refresh

NOTE: This setting is only used by the <> action.

[source,yaml]
-------------
actions:
  1:
    description: "Reindex index1 into index2"
    action: reindex
    options:
      wait_interval: 9
      max_wait: -1
      refresh: True
      request_body:
        source:
          index: index1
        dest:
          index: index2
    filters:
    - filtertype: none
-------------

Setting `refresh` to `True` will cause all re-indexed indices to be refreshed. This differs from the Index API's refresh parameter, which causes just the _shard_ that received the new data to be refreshed.

Read more about this setting at {ref}/docs-reindex.html

The default value is `True`.

[[option_remote_aws_key]]
== remote_aws_key

NOTE: This option is only used by the <> when performing a remote reindex operation.

WARNING: This feature allows connection to AWS using IAM credentials, but <>.

WARNING: This setting will not work unless the `requests-aws4auth` Python module has been manually installed first.

This should be an AWS IAM access key, or left empty.
[source,yaml] ------------- actions: 1: description: "Reindex index1 into index2" action: reindex options: wait_interval: 9 max_wait: -1 remote_aws_key: AWS_KEY remote_aws_secret_key: AWS_SECRET_KEY remote_aws_region: us-east-1 request_body: source: remote: host: https://otherhost:9200 index: index1 dest: index: index2 filters: - filtertype: none ------------- IMPORTANT: You must set your <> to the proper hostname _with_ port. It may not work setting <> and <> to only a host name due to the different connection module used. [[option_remote_aws_region]] == remote_aws_region NOTE: This option is only used by the <> when performing a remote reindex operation. WARNING: This feature allows connection to AWS using IAM credentials, but <>. WARNING: This setting will not work unless the `requests-aws4auth` Python module has been manually installed first. This should be an AWS region, or left empty. [source,yaml] ------------- actions: 1: description: "Reindex index1 into index2" action: reindex options: wait_interval: 9 max_wait: -1 remote_aws_key: AWS_KEY remote_aws_secret_key: AWS_SECRET_KEY remote_aws_region: us-east-1 request_body: source: remote: host: https://otherhost:9200 index: index1 dest: index: index2 filters: - filtertype: none ------------- IMPORTANT: You must set your <> to the proper hostname _with_ port. It may not work setting <> and <> to only a host name due to the different connection module used. [[option_remote_aws_secret_key]] == remote_aws_secret_key NOTE: This option is only used by the <> when performing a remote reindex operation. WARNING: This feature allows connection to AWS using IAM credentials, but <>. WARNING: This setting will not work unless the `requests-aws4auth` Python module has been manually installed first. This should be an AWS IAM secret access key, or left empty. [source,yaml] ------------- actions: 1: description: "Reindex index1 into index2" action: reindex options: wait_interval: 9 max_wait: -1 remote_aws_key: AWS_KEY remote_aws_secret_key: AWS_SECRET_KEY remote_aws_region: us-east-1 request_body: source: remote: host: https://otherhost:9200 index: index1 dest: index: index2 filters: - filtertype: none ------------- IMPORTANT: You must set your <> to the proper hostname _with_ port. It may not work setting <> and <> to only a host name due to the different connection module used. [[option_remote_certificate]] == remote_certificate This should be a file path to a CA certificate, or left empty. [source,yaml] ------------- actions: 1: description: "Reindex index1 into index2" action: reindex options: wait_interval: 9 max_wait: -1 remote_certificate: /path/to/my/ca.cert remote_client_cert: /path/to/my/client.cert remote_client_key: /path/to/my/client.key request_body: source: remote: host: https://otherhost:9200 index: index1 dest: index: index2 filters: - filtertype: none ------------- NOTE: This option is only used by the <> when performing a remote reindex operation. This setting allows the use of a specified CA certificate file to validate the SSL certificate used by Elasticsearch. There is no default. [[option_remote_client_cert]] == remote_client_cert NOTE: This option is only used by the <> when performing a remote reindex operation. This should be a file path to a client certificate (public key), or left empty. 
[source,yaml] ------------- actions: 1: description: "Reindex index1 into index2" action: reindex options: wait_interval: 9 max_wait: -1 remote_certificate: /path/to/my/ca.cert remote_client_cert: /path/to/my/client.cert remote_client_key: /path/to/my/client.key request_body: source: remote: host: https://otherhost:9200 index: index1 dest: index: index2 filters: - filtertype: none ------------- Allows the use of a specified SSL client cert file to authenticate to Elasticsearch. The file may contain both an SSL client certificate and an SSL key, in which case <> is not used. If specifying `client_cert`, and the file specified does not also contain the key, use <> to specify the file containing the SSL key. The file must be in PEM format, and the key part, if used, must be an unencrypted key in PEM format as well. [[option_remote_client_key]] == remote_client_key NOTE: This option is only used by the <> when performing a remote reindex operation. This should be a file path to a client key (private key), or left empty. [source,yaml] ------------- actions: 1: description: "Reindex index1 into index2" action: reindex options: wait_interval: 9 max_wait: -1 remote_certificate: /path/to/my/ca.cert remote_client_cert: /path/to/my/client.cert remote_client_key: /path/to/my/client.key request_body: source: remote: host: https://otherhost:9200 index: index1 dest: index: index2 filters: - filtertype: none ------------- Allows the use of a specified SSL client key file to authenticate to Elasticsearch. If using <> and the file specified does not also contain the key, use `client_key` to specify the file containing the SSL key. The key file must be an unencrypted key in PEM format. [[option_remote_filters]] == remote_filters NOTE: This option is only used by the <> when performing a remote reindex operation. This is an array of <>, exactly as with regular index selection: [source,yaml] ------------- actions: 1: description: "Reindex matching indices into index2" action: reindex options: wait_interval: 9 max_wait: -1 request_body: source: remote: host: https://otherhost:9200 index: REINDEX_SELECTION dest: index: index2 remote_filters: - filtertype: *first* setting1: ... ... settingN: ... - filtertype: *second* setting1: ... ... settingN: ... - filtertype: *third* filters: - filtertype: none ------------- This feature will only work when the `source` `index` is set to `REINDEX_SELECTION`. It will select _remote_ indices matching the filters provided, and reindex them to the _local_ cluster as the name provided in the `dest` `index`. In this example, that is `index2`. [[option_remote_ssl_no_validate]] == remote_ssl_no_validate This should be `True`, `False` or left empty. [source,yaml] ------------- actions: 1: description: "Reindex index1 into index2" action: reindex options: wait_interval: 9 max_wait: -1 remote_ssl_no_validate: True request_body: source: remote: host: https://otherhost:9200 index: index1 dest: index: index2 filters: - filtertype: none ------------- If access to your Elasticsearch instance is protected by SSL encryption, you may set `ssl_no_validate` to `True` to disable SSL certificate verification. Valid use cases for doing so include the use of self-signed certificates that cannot be otherwise verified and would generate error messages. WARNING: Setting `ssl_no_validate` to `True` will likely result in a warning message that your SSL certificates are not trusted. This is expected behavior. The default value is `False`. 
[[option_remote_url_prefix]] == remote_url_prefix NOTE: This option is only used by the <> when performing a remote reindex operation. This should be a single value or left empty. [source,yaml] ------------- actions: 1: description: "Reindex index1 into index2" action: reindex options: wait_interval: 9 max_wait: -1 remote_url_prefix: my_prefix request_body: source: remote: host: https://otherhost:9200 index: index1 dest: index: index2 filters: - filtertype: none ------------- In some cases you may be obliged to connect to a remote Elasticsearch cluster through a proxy of some kind. There may be a URL prefix before the API URI items, e.g. http://example.com/elasticsearch/ as opposed to http://localhost:9200. In such a case, set the `remote_url_prefix` to the appropriate value, 'elasticsearch' in this example. The default is an empty string. [[option_rename_pattern]] == rename_pattern NOTE: This setting is only used by the <> action. [TIP] .from the Elasticsearch documentation ====================================== The <> and <> options can be also used to rename indices on restore using regular expression that supports referencing the original text as explained http://docs.oracle.com/javase/6/docs/api/java/util/regex/Matcher.html#appendReplacement(java.lang.StringBuffer,%20java.lang.String)[here]. ====================================== [source,yaml] ------------- actions: 1: action: restore description: >- Restore all indices in the most recent snapshot with state SUCCESS. Wait for the restore to complete before continuing. Do not skip the repository filesystem access check. Use the other options to define the index/shard settings for the restore. options: repository: # If name is blank, the most recent snapshot by age will be selected name: # If indices is blank, all indices in the snapshot will be restored indices: rename_pattern: 'index(.+)' rename_replacement: 'restored_index$1' wait_for_completion: True max_wait: 3600 wait_interval: 10 filters: - filtertype: state state: SUCCESS exclude: - filtertype: ... ------------- In this configuration, Elasticsearch will capture whatever appears after `index` and put it after `restored_index`. For example, if I was restoring `index-2017.03.01`, the resulting index would be renamed to `restored_index-2017.03.01`. Read more about this setting at {ref}/modules-snapshots.html#_restore There is no default value. [[option_rename_replacement]] == rename_replacement NOTE: This setting is only used by the <> action. [TIP] .From the Elasticsearch documentation ====================================== The <> and <> options can be also used to rename indices on restore using regular expression that supports referencing the original text as explained http://docs.oracle.com/javase/6/docs/api/java/util/regex/Matcher.html#appendReplacement(java.lang.StringBuffer,%20java.lang.String)[here]. ====================================== [source,yaml] ------------- actions: 1: action: restore description: >- Restore all indices in the most recent snapshot with state SUCCESS. Wait for the restore to complete before continuing. Do not skip the repository filesystem access check. Use the other options to define the index/shard settings for the restore. 
    options:
      repository:
      # If name is blank, the most recent snapshot by age will be selected
      name:
      # If indices is blank, all indices in the snapshot will be restored
      indices:
      rename_pattern: 'index(.+)'
      rename_replacement: 'restored_index$1'
      wait_for_completion: True
      max_wait: 3600
      wait_interval: 10
    filters:
    - filtertype: state
      state: SUCCESS
      exclude:
    - filtertype: ...
-------------

In this configuration, Elasticsearch will capture whatever appears after `index` and put it after `restored_index`. For example, if I was restoring `index-2017.03.01`, the resulting index would be renamed to `restored_index-2017.03.01`.

Read more about this setting at {ref}/modules-snapshots.html#_restore

There is no default value.

[[option_repository]]
== repository

NOTE: This setting is only used by the <> and <> actions.

There is no default value. This setting must be set by the user or an exception will be raised, and execution will halt.

=== <>

[source,yaml]
-------------
actions:
  1:
    action: restore
    description: Restore my_index from my_snapshot in my_repository
    options:
      repository: my_repository
      name: my_snapshot
      indices: my_index
      wait_for_completion: True
      max_wait: 3600
      wait_interval: 10
    filters:
    - filtertype: state
      state: SUCCESS
      exclude:
    - filtertype: ...
-------------

=== <>

[source,yaml]
-------------
action: snapshot
description: >-
  Snapshot selected indices to 'repository' with the snapshot name or
  name pattern in 'name'. Use all other options as assigned
options:
  repository: my_repository
  name: my_snapshot
  wait_for_completion: True
  max_wait: 3600
  wait_interval: 10
filters:
- filtertype: ...
-------------

[[option_requests_per_second]]
== requests_per_second

NOTE: This option is only used by the <> action.

[source,yaml]
-------------
actions:
  1:
    description: "Reindex index1 into index2"
    action: reindex
    options:
      wait_interval: 9
      max_wait: -1
      requests_per_second: -1
      request_body:
        source:
          index: index1
        dest:
          index: index2
    filters:
    - filtertype: none
-------------

`requests_per_second` can be set to any positive decimal number (1.4, 6, 1000, etc.) to throttle the number of requests per second that the reindex issues, or it can be set to `-1` to disable throttling.

The default value for this option is `-1`.

[[option_request_body]]
== request_body

NOTE: This setting is only used by the <> action.

=== Manual index selection

The `request_body` option is the heart of the reindex action. In here, using YAML syntax, you recreate the body sent to Elasticsearch as described in {ref}/docs-reindex.html[the official documentation]. You can manually select indices as with this example:

[source,yaml]
-------------
actions:
  1:
    description: "Reindex index1 into index2"
    action: reindex
    options:
      wait_interval: 9
      max_wait: -1
      request_body:
        source:
          index: index1
        dest:
          index: index2
    filters:
    - filtertype: none
-------------

You can also select multiple indices to reindex by making a list in acceptable YAML syntax:

[source,yaml]
-------------
actions:
  1:
    description: "Reindex index1,index2,index3 into new_index"
    action: reindex
    options:
      wait_interval: 9
      max_wait: -1
      request_body:
        source:
          index: ['index1', 'index2', 'index3']
        dest:
          index: new_index
    filters:
    - filtertype: none
-------------

IMPORTANT: You _must_ set at least a <> filter, or the action will fail. Do not worry. If you've manually specified your indices, those are the only ones which will be reindexed.
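Because the `request_body` is passed to the Reindex API, other body elements the API supports may be included as well. For instance, a minimal sketch that copies only documents matching a query by adding a `query` to `source` (the `user` field and its value are purely illustrative):

[source,yaml]
-------------
actions:
  1:
    description: "Reindex only matching documents from index1 into index2"
    action: reindex
    options:
      wait_interval: 9
      max_wait: -1
      request_body:
        source:
          index: index1
          query:
            match:
              user: kimchy
        dest:
          index: index2
    filters:
    - filtertype: none
-------------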
=== Filter-Selected Indices Curator allows you to use all indices found by the `filters` section by setting the `source` index to `REINDEX_SELECTION`, like this: [source,yaml] ------------- actions: 1: description: >- Reindex all daily logstash indices from March 2017 into logstash-2017.03 action: reindex options: wait_interval: 9 max_wait: -1 request_body: source: index: REINDEX_SELECTION dest: index: logstash-2017.03 filters: - filtertype: pattern kind: prefix value: logstash-2017.03. ------------- === Reindex From Remote You can also reindex from remote: [source,yaml] ------------- actions: 1: description: "Reindex remote index1 to local index1" action: reindex options: wait_interval: 9 max_wait: -1 request_body: source: remote: host: http://otherhost:9200 username: myuser password: mypass index: index1 dest: index: index1 filters: - filtertype: none ------------- IMPORTANT: You _must_ set at least a <> filter, or the action will fail. Do not worry. Only the indices you specified in `source` will be reindexed. Curator will create a connection to the host specified as the `host` key in the above example. It will determine which port to connect to, and whether to use SSL by parsing the URL entered there. Because this `host` is specifically used by Elasticsearch, and Curator is making a separate connection, it is important to ensure that both Curator _and_ your Elasticsearch cluster have access to the remote host. If you do not whitelist the remote cluster, you will not be able to reindex. This can be done by adding the following line to your `elasticsearch.yml` file: [source,yaml] ------------- reindex.remote.whitelist: remote_host_or_IP1:9200, remote_host_or_IP2:9200 ------------- or by adding this flag to the command-line when starting Elasticsearch: [source,sh] ------------- bin/elasticsearch -Edefault.reindex.remote.whitelist="remote_host_or_IP:9200" ------------- Of course, be sure to substitute the correct host, IP, or port. Other client connection arguments can also be supplied in the form of action options: * <> * <> * <> * <> * <> * <> * <> === Reindex From Remote With Filter-Selected Indices You can even reindex from remote with filter-selected indices on the remote side: [source,yaml] ------------- actions: 1: description: >- Reindex all remote daily logstash indices from March 2017 into local index logstash-2017.03 action: reindex options: wait_interval: 9 max_wait: -1 request_body: source: remote: host: http://otherhost:9200 username: myuser password: mypass index: REINDEX_SELECTION dest: index: logstash-2017.03 remote_filters: - filtertype: pattern kind: prefix value: logstash-2017.03. filters: - filtertype: none ------------- IMPORTANT: Even though you are reindexing from remote, you _must_ set at least a <> filter, or the action will fail. Do not worry. Only the indices specified in `source` will be reindexed. Curator will create a connection to the host specified as the `host` key in the above example. It will determine which port to connect to, and whether to use SSL by parsing the URL entered there. Because this `host` is specifically used by Elasticsearch, and Curator is making a separate connection, it is important to ensure that both Curator _and_ your Elasticsearch cluster have access to the remote host. If you do not whitelist the remote cluster, you will not be able to reindex. 
This can be done by adding the following line to your `elasticsearch.yml` file: [source,yaml] ------------- reindex.remote.whitelist: remote_host_or_IP1:9200, remote_host_or_IP2:9200 ------------- or by adding this flag to the command-line when starting Elasticsearch: [source,sh] ------------- bin/elasticsearch -Edefault.reindex.remote.whitelist="remote_host_or_IP:9200" ------------- Of course, be sure to substitute the correct host, IP, or port. Other client connection arguments can also be supplied in the form of action options: * <> * <> * <> * <> * <> * <> * <> === Reindex - Migration Curator allows reindexing, particularly from remote, as a migration path. This can be a very useful feature for migrating an older cluster (1.4+) to a new cluster, on different hardware. It can also be a useful tool for serially reindexing indices into newer mappings in an automatable way. Ordinarily, a reindex operation is from either one, or many indices, to a single, named index. Assigning the `dest` `index` to `MIGRATION` tells Curator to treat this reindex differently. [IMPORTANT] ============================= **If it is a _local_ reindex,** you _must_ set either <>, or <>, or both. This prevents collisions and other bad things from happening. By assigning a prefix or a suffix (or both), you can reindex any local indices to new versions of themselves, but named differently. It is true the Reindex API already has this functionality. Curator includes this same functionality for convenience. ============================= This example will reindex all of the remote indices matching `logstash-2017.03.` into the local cluster, but preserve the original index names, rather than merge all of the contents into a single index. Internal to Curator, this results in multiple reindex actions: one per index. All other available options and settings are available. [source,yaml] ------------- actions: 1: description: >- Reindex all remote daily logstash indices from March 2017 into local versions with the same index names. action: reindex options: wait_interval: 9 max_wait: -1 request_body: source: remote: host: http://otherhost:9200 username: myuser password: mypass index: REINDEX_SELECTION dest: index: MIGRATION remote_filters: - filtertype: pattern kind: prefix value: logstash-2017.03. filters: - filtertype: none ------------- IMPORTANT: Even though you are reindexing from remote, you _must_ set at least a <> filter, or the action will fail. Do not worry. Only the indices specified in `source` will be reindexed. Curator will create a connection to the host specified as the `host` key in the above example. It will determine which port to connect to, and whether to use SSL by parsing the URL entered there. Because this `host` is specifically used by Elasticsearch, and Curator is making a separate connection, it is important to ensure that both Curator _and_ your Elasticsearch cluster have access to the remote host. If you do not whitelist the remote cluster, you will not be able to reindex. This can be done by adding the following line to your `elasticsearch.yml` file: [source,yaml] ------------- reindex.remote.whitelist: remote_host_or_IP1:9200, remote_host_or_IP2:9200 ------------- or by adding this flag to the command-line when starting Elasticsearch: [source,sh] ------------- bin/elasticsearch -Edefault.reindex.remote.whitelist="remote_host_or_IP:9200" ------------- Of course, be sure to substitute the correct host, IP, or port. 
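The `MIGRATION` destination works for strictly local reindexing as well, provided <> or <> is set, as described above. A minimal local sketch (assuming indices beginning with `logstash-` exist; the `-v2` suffix is purely illustrative) that serially copies each matching index into a `-v2` suffixed version of itself:

[source,yaml]
-------------
actions:
  1:
    description: >-
      Serially reindex each local logstash index into a -v2 suffixed
      copy of itself
    action: reindex
    options:
      wait_interval: 9
      max_wait: -1
      migration_suffix: '-v2'
      request_body:
        source:
          index: REINDEX_SELECTION
        dest:
          index: MIGRATION
    filters:
    - filtertype: pattern
      kind: prefix
      value: logstash-
-------------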
Other client connection arguments can also be supplied in the form of action options: * <> * <> * <> * <> * <> * <> * <> * <> * <> === Other scenarios and options Nearly all scenarios supported by the reindex API are supported in the request_body, including (but not limited to): * Pipelines * Scripting * Queries * Conflict resolution * Limiting by count * Versioning * Reindexing operation type (for example, create-only) Read more about these, and more, at {ref}/docs-reindex.html Notable exceptions include: * You cannot manually specify slices. Instead, use the <> option for automated sliced reindexing. [[option_retry_count]] == retry_count NOTE: This setting is only used by the <>. [source,yaml] ------------- action: delete_snapshots description: "Delete selected snapshots from 'repository'" options: repository: ... retry_interval: 120 retry_count: 3 filters: - filtertype: ... ------------- The value of this setting is the number of times to retry deleting a snapshot. The default for this setting is `3`. [[option_retry_interval]] == retry_interval NOTE: This setting is only used by the <>. [source,yaml] ------------- action: delete_snapshots description: "Delete selected snapshots from 'repository'" options: repository: ... retry_interval: 120 retry_count: 3 filters: - filtertype: ... ------------- The value of this setting is the number of seconds to delay between retries. The default for this setting is `120`. [[option_routing_type]] == routing_type NOTE: This setting is only used by the <>. [source,yaml] ------------- action: cluster_routing description: "Apply routing rules to the entire cluster" options: routing_type: value: ... setting: enable wait_for_completion: True max_wait: 300 wait_interval: 10 ------------- The value of this setting must be either `allocation` or `rebalance` There is no default value. This setting must be set by the user or an exception will be raised, and execution will halt. [[option_setting]] == setting NOTE: This setting is only used by the <>. [source,yaml] ------------- action: cluster_routing description: "Apply routing rules to the entire cluster" options: routing_type: value: ... setting: enable wait_for_completion: True max_wait: 300 wait_interval: 10 ------------- The value of this must be `enable` at present. It is a placeholder for future expansion. There is no default value. This setting must be set by the user or an exception will be raised, and execution will halt. [[option_shrink_node]] == shrink_node NOTE: This setting is only used by the <> action. [source,yaml] ------------- action: shrink description: >- Shrink selected indices on the node with the most available space, excluding master nodes and the node named 'not_this_node' options: shrink_node: DETERMINISTIC node_filters: permit_masters: False exclude_nodes: ['not_this_node'] shrink_suffix: '-shrink' filters: - filtertype: ... ------------- This setting is required. There is no default value. The value of this setting must be the valid name of a node in your Elasticsearch cluster, or `DETERMINISTIC`. If the value is `DETERMINISTIC`, Curator will automatically select the data node with the most available free space and make that the target node. Curator will repeat this process for each successive index when the value is `DETERMINISTIC`. If <>, such as `exclude_nodes` are defined, those nodes will not be considered as potential target nodes. [[option_shrink_prefix]] == shrink_prefix NOTE: This setting is only used by the <> action. 
[source,yaml] ------------- action: shrink description: >- Shrink selected indices on the node with the most available space. Prepend target index names with 'foo-' and append a suffix of '-shrink' options: shrink_node: DETERMINISTIC shrink_prefix: 'foo-' shrink_suffix: '-shrink' filters: - filtertype: ... ------------- There is no default value for this setting. The value of this setting will be prepended to target index names. If the source index were `index`, and the `shrink_prefix` were `foo-`, and the `shrink_suffix` were `-shrink`, the resulting target index name would be `foo-index-shrink`. [[option_shrink_suffix]] == shrink_suffix NOTE: This setting is only used by the <> action. [source,yaml] ------------- action: shrink description: >- Shrink selected indices on the node with the most available space. Prepend target index names with 'foo-' and append a suffix of '-shrink' options: shrink_node: DETERMINISTIC shrink_prefix: 'foo-' shrink_suffix: '-shrink' filters: - filtertype: ... ------------- The default value for this setting is `-shrink`. The value of this setting will be appended to target index names. If the source index were `index`, and the `shrink_prefix` were `foo-`, and the `shrink_suffix` were `-shrink`, the resulting target index name would be `foo-index-shrink`. [[option_slices]] == slices NOTE: This setting is only used by the <> action. This setting can speed up reindexing operations by using {ref}/search-request-scroll.html#sliced-scroll[Sliced Scroll] to slice on the \_uid. [source,yaml] ------------- actions: 1: description: "Reindex index1,index2,index3 into new_index" action: reindex options: wait_interval: 9 max_wait: -1 slices: 3 request_body: source: index: ['index1', 'index2', 'index3'] dest: index: new_index filters: - filtertype: none ------------- === Picking the number of slices Here are a few recommendations around the number of `slices` to use: * Don’t use large numbers. `500` creates fairly massive CPU thrash, so Curator will not allow a number larger than this. * It is more efficient from a query performance standpoint to use some multiple of the number of shards in the source index. * Using exactly as many shards as are in the source index is the most efficient from a query performance standpoint. * Indexing performance should scale linearly across available resources with the number of slices. * Whether indexing or query performance dominates that process depends on lots of factors like the documents being reindexed and the cluster doing the reindexing. [[option_skip_fsck]] == skip_repo_fs_check NOTE: This setting is used by the <> and <> actions. This setting must be either `True` or `False`. The default value of this setting is `False` === <> Each master and data node in the cluster _must_ have write access to the shared filesystem used by the repository for a snapshot to be 100% valid. For the purposes of a <>, this is a lightweight attempt to ensure that all nodes are _still_ actively able to write to the repository, in hopes that snapshots were from all nodes. It is not otherwise necessary for the purposes of a restore. Some filesystems may take longer to respond to a check, which results in a false positive for the filesystem access check. 
For these cases, it is desirable to bypass this verification step by setting this to `True`.

[source,yaml]
-------------
actions:
  1:
    action: restore
    description: Restore my_index from my_snapshot in my_repository
    options:
      repository: my_repository
      name: my_snapshot
      indices: my_index
      skip_repo_fs_check: False
      wait_for_completion: True
      max_wait: 3600
      wait_interval: 10
    filters:
    - filtertype: state
      state: SUCCESS
      exclude:
    - filtertype: ...
-------------

=== <>

Each master and data node in the cluster _must_ have write access to the shared filesystem used by the repository for a snapshot to be 100% valid.

Some filesystems may take longer to respond to a check, which results in a false positive for the filesystem access check. For these cases, it is desirable to bypass this verification step by setting this to `True`.

[source,yaml]
-------------
action: snapshot
description: >-
  Snapshot selected indices to 'repository' with the snapshot name or
  name pattern in 'name'. Use all other options as assigned
options:
  repository: my_repository
  name: my_snapshot
  skip_repo_fs_check: False
  wait_for_completion: True
  max_wait: 3600
  wait_interval: 10
filters:
- filtertype: ...
-------------

[[option_timeout]]
== timeout

NOTE: This setting is only used by the <> action.

The `timeout` is the length in seconds each individual bulk request should wait for shards that are unavailable. The default value is `60`, meaning 60 seconds.

[source,yaml]
-------------
actions:
  1:
    description: "Reindex index1,index2,index3 into new_index"
    action: reindex
    options:
      wait_interval: 9
      max_wait: -1
      timeout: 90
      request_body:
        source:
          index: ['index1', 'index2', 'index3']
        dest:
          index: new_index
    filters:
    - filtertype: none
-------------

[[option_timeout_override]]
== timeout_override

NOTE: This setting is available in all actions.

[source,yaml]
-------------
action: forcemerge
description: >-
  Perform a forceMerge on selected indices to 'max_num_segments' per shard
options:
  max_num_segments: 2
  timeout_override: 21600
filters:
- filtertype: ...
-------------

Some actions have a default value for `timeout_override`, which is applied if this setting is left unset. The following table shows these default values:

[cols="m,", options="header"]
|===
|Action Name |Default `timeout_override` Value

|close |180
|forcemerge |21600
|restore |21600
|snapshot |21600
|===

All other actions have no default value for `timeout_override`.

This setting must be an integer number of seconds, or an error will result.

This setting is particularly useful for the <> action, as all other actions have a new polling behavior when using <> that should reduce or prevent client timeouts.

[[option_value]]
== value

NOTE: This setting is optional when using the <> and required when using the <>.

=== <>

For the <>, the value of this setting should correspond to a node setting on one or more nodes in your cluster.

For example, you might have set

[source,sh]
-----------
node.tag: myvalue
-----------

in your `elasticsearch.yml` file for one or more of your nodes. To match allocation in this case, set value to `myvalue`. Additionally, if you used one of the special attribute names `_ip`, `_name`, `_id`, or `_host` for <>, value can match one of the node IP addresses, names, ids, or host names, respectively.

NOTE: To remove a routing allocation, the value of this setting should be left empty, or the `value` setting not even included as an option.
For example, you might have set

[source,sh]
-----------
PUT test/_settings
{
  "index.routing.allocation.exclude.size": "small"
}
-----------

to keep index `test` from allocating shards on nodes that have
`node.size: small`. To remove this shard routing allocation setting, you might
use an action file similar to this:

[source,yaml]
-----------
---
actions:
  1:
    action: allocation
    description: >-
      Unset 'index.routing.allocation.exclude.size' for index 'test' by
      passing an empty value.
    options:
      key: size
      value:
      allocation_type: exclude
    filters:
    - filtertype: pattern
      kind: regex
      value: '^test$'
-----------

=== <>

For the <>, the acceptable values for this setting depend on the value of <>.

[source,yaml]
-------------
action: cluster_routing
description: "Apply routing rules to the entire cluster"
options:
  routing_type: ...
  value: ...
  setting: enable
  wait_for_completion: True
  max_wait: 300
  wait_interval: 10
-------------

Acceptable values when <> is either `allocation` or `rebalance` are `all`,
`primaries`, and `none` (string, not `NoneType`).

If `routing_type` is `allocation`, this can also be `new_primaries`.

If `routing_type` is `rebalance`, then the value can also be `replicas`.

There is no default value. This setting must be set by the user or an
exception will be raised, and execution will halt.

[[option_wait_for_active_shards]]
== wait_for_active_shards

NOTE: This setting is used by the <>, <>, and <> actions. Each uses it
similarly.

This setting determines the number of shard copies that must be active before
the client returns. The default value is 1, which implies only the primary
shards.

Set to `all` for all shard copies, otherwise set to any positive value less
than or equal to the total number of copies for the shard (number of replicas
+ 1).

Read {ref}/docs-index_.html#index-wait-for-active-shards[the Elasticsearch
documentation] for more information.

=== Reindex

[source,yaml]
-------------
actions:
  1:
    description: "Reindex index1,index2,index3 into new_index"
    action: reindex
    options:
      wait_interval: 9
      max_wait: -1
      wait_for_active_shards: 2
      request_body:
        source:
          index: ['index1', 'index2', 'index3']
        dest:
          index: new_index
    filters:
    - filtertype: none
-------------

=== Rollover

[source,yaml]
-------------
action: rollover
description: >-
  Rollover the index associated with index 'name', which should be in the
  form of prefix-000001 (or similar), or prefix-YYYY.MM.DD-1.
options:
  name: aliasname
  conditions:
    max_age: 1d
    max_docs: 1000000
  wait_for_active_shards: 1
  extra_settings:
    index.number_of_shards: 3
    index.number_of_replicas: 1
  timeout_override:
  continue_if_exception: False
  disable_action: False
-------------

=== Shrink

[source,yaml]
-------------
action: shrink
description: >-
  Shrink selected indices on the node with the most available space.
  Prepend target index names with 'foo-' and append a suffix of '-shrink'
options:
  shrink_node: DETERMINISTIC
  wait_for_active_shards: all
filters:
- filtertype: ...
-------------

[[option_wfc]]
== wait_for_completion

NOTE: This setting is used by the <>, <>, <>, <>, <>, and <> actions.

This setting must be either `True` or `False`.

This setting specifies whether the request should return immediately or wait
for the operation to complete before returning.

=== <>

[source,yaml]
-------------
action: allocation
description: "Apply shard allocation filtering rules to the specified indices"
options:
  key: ...
  value: ...
  allocation_type: ...
  wait_for_completion: False
  max_wait: 300
  wait_interval: 10
filters:
- filtertype: ...
-------------

The default value for the <> action is `False`.

=== <>

[source,yaml]
-------------
action: cluster_routing
description: "Apply routing rules to the entire cluster"
options:
  routing_type:
  value: ...
  setting: enable
  wait_for_completion: True
  max_wait: 300
  wait_interval: 10
-------------

The default value for the <> action is `False`.

=== <>

[source,yaml]
-------------
actions:
  1:
    description: "Reindex index1 into index2"
    action: reindex
    options:
      wait_interval: 9
      max_wait: -1
      request_body:
        source:
          index: index1
        dest:
          index: index2
    filters:
    - filtertype: none
-------------

The default value for the <> action is `False`.

=== <>

[source,yaml]
-------------
action: replicas
description: >-
  Set the number of replicas per shard for selected indices to 'count'
options:
  count: ...
  wait_for_completion: True
  max_wait: 600
  wait_interval: 10
filters:
- filtertype: ...
-------------

The default value for the <> action is `False`.

=== <>

[source,yaml]
-------------
actions:
  1:
    action: restore
    description: Restore my_index from my_snapshot in my_repository
    options:
      repository: my_repository
      name: my_snapshot
      indices: my_index
      wait_for_completion: True
      max_wait: 3600
      wait_interval: 10
    filters:
    - filtertype: state
      state: SUCCESS
      exclude:
    - filtertype: ...
-------------

The default value for the <> action is `True`.

=== <>

[source,yaml]
-------------
action: snapshot
description: >-
  Snapshot selected indices to 'repository' with the snapshot name or name
  pattern in 'name'. Use all other options as assigned
options:
  repository: my_repository
  name: my_snapshot
  wait_for_completion: True
  max_wait: 3600
  wait_interval: 10
filters:
- filtertype: ...
-------------

The default value for the <> action is `True`.

TIP: During snapshot initialization, information about all previous snapshots
is loaded into memory, which means that in large repositories it may take
several seconds (or even minutes) for this command to return even if the
`wait_for_completion` setting is set to `False`.

[[option_wait_interval]]
== wait_interval

NOTE: This setting is used by the <>, <>, <>, <>, <>, and <> actions.

This setting must be a positive integer between 1 and 30.

This setting specifies how long to wait between checks to see if the action
has completed or not. This number should not be larger than the client <> or
the <>. As the default client <> value is 30, this should be uncommon.

The default value for this setting is `9`, meaning 9 seconds between checks.

This option is generally used in conjunction with <>, which is the maximum
amount of time in seconds to wait for the given action to complete.

=== <>

[source,yaml]
-------------
action: allocation
description: "Apply shard allocation filtering rules to the specified indices"
options:
  key: ...
  value: ...
  allocation_type: ...
  wait_for_completion: False
  max_wait: 300
  wait_interval: 10
filters:
- filtertype: ...
-------------

=== <>

[source,yaml]
-------------
action: cluster_routing
description: "Apply routing rules to the entire cluster"
options:
  routing_type:
  value: ...
  setting: enable
  wait_for_completion: True
  max_wait: 300
  wait_interval: 10
-------------

=== <>

[source,yaml]
-------------
actions:
  1:
    description: "Reindex index1 into index2"
    action: reindex
    options:
      wait_interval: 9
      max_wait: -1
      request_body:
        source:
          index: index1
        dest:
          index: index2
    filters:
    - filtertype: none
-------------

=== <>

[source,yaml]
-------------
action: replicas
description: >-
  Set the number of replicas per shard for selected indices to 'count'
options:
  count: ...
  wait_for_completion: True
  max_wait: 600
  wait_interval: 10
filters:
- filtertype: ...
-------------

=== <>

[source,yaml]
-------------
actions:
  1:
    action: restore
    description: Restore my_index from my_snapshot in my_repository
    options:
      repository: my_repository
      name: my_snapshot
      indices: my_index
      wait_for_completion: True
      max_wait: 3600
      wait_interval: 10
    filters:
    - filtertype: state
      state: SUCCESS
      exclude:
    - filtertype: ...
-------------

=== <>

[source,yaml]
-------------
action: snapshot
description: >-
  Snapshot selected indices to 'repository' with the snapshot name or name
  pattern in 'name'. Use all other options as assigned
options:
  repository: my_repository
  name: my_snapshot
  wait_for_completion: True
  max_wait: 3600
  wait_interval: 10
filters:
- filtertype: ...
-------------

[[option_warn_if_no_indices]]
== warn_if_no_indices

NOTE: This setting is only used by the <> action.

This setting must be either `True` or `False`. The default value for this
setting is `False`.

[source,yaml]
-------------
action: alias
description: "Add/Remove selected indices to or from the specified alias"
options:
  name: alias_name
  warn_if_no_indices: False
add:
  filters:
  - filtertype: ...
remove:
  filters:
  - filtertype: ...
-------------

This setting specifies whether the alias action should continue with a
warning or return immediately in the event that the filters in the add or
remove section result in an empty index list.

[WARNING]
.Improper use of this setting can yield undesirable results
=====================================================================
*Ideal use case:* For example, you want to add the most recent seven days of
time-series indices into a _lastweek_ alias, and remove indices older than
seven days from this same alias (a sketch of this scenario follows this
warning). If you do not yet have any indices older than seven days, this will
result in an empty index list condition which will prevent the entire alias
action from completing successfully. If `warn_if_no_indices` were set to
`True`, however, it would avert that potential outcome.

*Potentially undesirable outcome:* A _non-beneficial_ case would be one where
`warn_if_no_indices` is set to `True`, and a misconfiguration results in
indices not being found, and therefore not being removed from the alias. As a
result, an alias that should only query one week now references multiple weeks
of data. If `warn_if_no_indices` were set to `False`, this scenario would have
been averted because the empty list condition would have resulted in an error.
=====================================================================
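Here is a sketch of the _ideal use case_ described in the warning above. The
alias name, index prefix, and seven-day period are illustrative, not
prescriptive:

[source,yaml]
-------------
action: alias
description: >-
  Keep only the most recent seven days of logstash- indices in 'lastweek'
options:
  name: lastweek
  warn_if_no_indices: True
add:
  filters:
  - filtertype: pattern
    kind: prefix
    value: logstash-
  - filtertype: age
    source: name
    direction: younger
    timestring: '%Y.%m.%d'
    unit: days
    unit_count: 7
remove:
  filters:
  - filtertype: pattern
    kind: prefix
    value: logstash-
  - filtertype: age
    source: name
    direction: older
    timestring: '%Y.%m.%d'
    unit: days
    unit_count: 7
-------------

With `warn_if_no_indices: True`, the action logs a warning and continues if
either filter list comes up empty, as described above.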
curator-5.2.0/docs/asciidoc/security.asciidoc000066400000000000000000000112561315226075300212630ustar00rootroot00000000000000
[[security]]
= Security

[partintro]
--
Please read the following sections for help with securing the connection
between Curator and Elasticsearch.

* <>
* <>
--

[[python-security]]
== Python and Secure Connectivity

Curator was written in Python, which allows it to be distributed as code which
can run across a wide variety of systems, including Linux, Windows, Mac OS,
and any other system or architecture for which a Python interpreter has been
written.

Curator was also written to be usable by the 4 most recent major release
branches of Python: 2.7, 3.4, 3.5, and 3.6. It may even run on other versions,
but those versions are not tested.

Unfortunately, this broad support comes at a cost. While Curator happily ran
on Python version 2.6, this version had its last update more than 3 years ago.
There have been many improvements to security, SSL/TLS, and the libraries that
support them since then. Not all of these have been back-ported, which results
in Curator not being able to communicate securely via SSL/TLS, or in some
cases even connect at all.

Because it is impossible to know if a given system has the correct Python
version, let alone the most recent libraries and modules, it becomes nearly
impossible to guarantee that Curator will be able to make a secure and
error-free connection to a secured Elasticsearch instance when installed via
`pip` or RPM/DEB packages. This has led to an increased amount of
troubleshooting and support work for Curator. The precompiled binary packages
were created to address this.

The precompiled binary packages (APT/YUM, Windows) have been compiled with
Python {pybuild_ver}, which has all of the up-to-date libraries needed for
secure transactions. These packages have been tested connecting to Security
(5.x X-Pack) with self-signed PKI certificates. Connectivity via SSL or TLS to
other open-source plugins may work, but is not guaranteed.

If you are encountering SSL/TLS errors in Curator, please see the list of
<>.

[[security-errors]]
== Common Security Error Messages

=== Elasticsearch ConnectionError

[source,sh]
-----------
Unable to create client connection to Elasticsearch.  Error:
ConnectionError(error return without exception set) caused by:
SystemError(error return without exception set)
-----------

This error can happen on non-secured connections as well. If it happens with a
secured instance, it will usually be accompanied by one or more of the
following messages.

=== SNIMissingWarning

[source,sh]
-----------
SNIMissingWarning: An HTTPS request has been made, but the SNI (Subject Name
Indication) extension to TLS is not available on this platform. This may
cause the server to present an incorrect TLS certificate, which can cause
validation failures. You can upgrade to a newer version of Python to solve
this. For more information, see
https://urllib3.readthedocs.io/en/latest/advanced-usage.html#ssl-warnings
-----------

This happens on Python 2 versions older than 2.7.9. These older versions lack
https://en.wikipedia.org/wiki/Server_Name_Indication[SNI] support. This can
cause servers to present a certificate that the client thinks is invalid.
Follow the
https://urllib3.readthedocs.io/en/latest/user-guide.html#ssl-py2[pyOpenSSL]
guide to resolve this warning.

=== InsecurePlatformWarning

[source,sh]
-----------
InsecurePlatformWarning: A true SSLContext object is not available. This
prevents urllib3 from configuring SSL appropriately and may cause certain
SSL connections to fail. You can upgrade to a newer version of Python to
solve this. For more information, see
https://urllib3.readthedocs.io/en/latest/advanced-usage.html#ssl-warnings
-----------

This happens on Python 2 platforms that have an outdated **ssl** module. These
older **ssl** modules can cause some insecure requests to succeed where they
should fail and secure requests to fail where they should succeed. Follow the
https://urllib3.readthedocs.io/en/latest/user-guide.html#ssl-py2[pyOpenSSL]
guide to resolve this warning.

=== InsecureRequestWarning

[source,sh]
-----------
InsecureRequestWarning: Unverified HTTPS request is being made. Adding
certificate verification is strongly advised. See:
https://urllib3.readthedocs.org/en/latest/security.html
-----------

This happens when a request is made to an HTTPS URL without certificate
verification enabled. In Curator, verification is controlled by the client
configuration, as sketched below.
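As a minimal sketch — the hostname and CA path here are hypothetical, and the
CA file must be the one that signed your Elasticsearch node certificates — the
relevant client settings in the Curator configuration file look like:

[source,yaml]
-------------
client:
  hosts:
    - es-node.example.com
  port: 9200
  use_ssl: True
  certificate: /path/to/cacert.pem
  ssl_no_validate: False
-------------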
Follow the https://urllib3.readthedocs.io/en/latest/user-guide.html#ssl[certificate verification] guide to resolve this warning. Related: [source,sh] ----------- SSLError: [Errno 1] _ssl.c:510: error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed ----------- curator-5.2.0/docs/asciidoc/versions.asciidoc000066400000000000000000000037671315226075300212740ustar00rootroot00000000000000[[versions]] = Versions [partintro] -- Elasticsearch Curator has been around for many different versions of Elasticsearch. The following document helps clarify which versions of Curator work with which versions of Elasticsearch. The current version of Curator is {curator_version} * <> -- [[version-compatibility]] == Version Compatibility   IMPORTANT: Each listed version of Elasticsearch Curator has been fully tested against unmodified release versions of Elasticsearch. **Modified versions of Elasticsearch may not be fully supported.** The current version of Curator is {curator_version} [cols="<,<,<,<",options="header",grid="cols"] |=== |Curator Version |ES 1.x |ES 2.x |ES 5.x |          3 |  ✅ footnoteref:[aws_ss,Curator is unable to make snapshots for modified versions of ES which do not allow access to the snapshot status API endpoint. As a result, Curator is unable to make snapshots in AWS ES.] |  ✅ footnoteref:[aws_ss] |  ❌ |          4 |  ❌ |  ✅ footnote:[AWS ES (which is different from installing Elasticsearch on your own EC2 instances) version 5.3 officially supports Curator. If using an older version of AWS ES, please see the FAQ question, <>] |  ✅ footnote:[Not all of the APIs available in Elasticsearch 5 are available in Curator 4.] |          5 |  ❌ |  ❌ |  ✅footnoteref:[aws_ss] |=== Learn more about the different versions at: * https://www.elastic.co/guide/en/elasticsearch/client/curator/3.5/index.html[Curator 3 Documentation] * https://www.elastic.co/guide/en/elasticsearch/client/curator/4.2/index.html[Curator 4 Documentation] * https://www.elastic.co/guide/en/elasticsearch/client/curator/current/index.html[Curator 5 Documentation] curator-5.2.0/docs/conf.py000066400000000000000000000213771315226075300154420ustar00rootroot00000000000000# -*- coding: utf-8 -*- # # Elasticsearch documentation build configuration file, created by # sphinx-quickstart on Mon May 6 15:38:41 2013. # # This file is execfile()d with the current directory set to its containing dir. # # Note that not all possible configuration values are present in this # autogenerated file. # # All configuration values have a default; values that are commented out # serve to show the default. import sys, os, re # Utility function to read from file. def fread(fname): return open(os.path.join(os.path.dirname(__file__), fname)).read() def get_version(): VERSIONFILE="../curator/_version.py" verstrline = fread(VERSIONFILE).strip() vsre = r"^__version__ = ['\"]([^'\"]*)['\"]" mo = re.search(vsre, verstrline, re.M) if mo: VERSION = mo.group(1) else: raise RuntimeError("Unable to find version string in %s." % (VERSIONFILE,)) build_number = os.environ.get('CURATOR_BUILD_NUMBER', None) if build_number: return VERSION + "b{}".format(build_number) return VERSION # If extensions (or modules to document with autodoc) are in another directory, # add these directories to sys.path here. If the directory is relative to the # documentation root, use os.path.abspath to make it absolute, like shown here. 
#sys.path.insert(0, os.path.abspath('.')) sys.path.insert(0, os.path.abspath('../')) # -- General configuration ----------------------------------------------------- # If your documentation needs a minimal Sphinx version, state it here. #needs_sphinx = '1.0' # Add any Sphinx extension module names here, as strings. They can be extensions # coming with Sphinx (named 'sphinx.ext.*') or your custom ones. extensions = ['sphinx.ext.autodoc', 'sphinx.ext.doctest', 'sphinx.ext.intersphinx'] intersphinx_mapping = { 'python': ('https://docs.python.org/3.6', None), 'elasticsearch': ('http://elasticsearch-py.readthedocs.io/en/5.4.0', None), } autoclass_content = "both" # Add any paths that contain templates here, relative to this directory. templates_path = ['_templates'] # The suffix of source filenames. source_suffix = '.rst' # The encoding of source files. #source_encoding = 'utf-8-sig' # The master toctree document. master_doc = 'index' # General information about the project. project = u'Elasticsearch Curator' copyright = u'2011-2017, Elasticsearch' # The version info for the project you're documenting, acts as replacement for # |version| and |release|, also used in various other places throughout the # built documents. # # The full version, including alpha/beta/rc tags. release = get_version() # The short X.Y version. version = '.'.join(release.split('.')[:2]) # The language for content autogenerated by Sphinx. Refer to documentation # for a list of supported languages. #language = None # There are two options for replacing |today|: either, you set today to some # non-false value, then it is used: #today = '' # Else, today_fmt is used as the format for a strftime call. #today_fmt = '%B %d, %Y' # List of patterns, relative to source directory, that match files and # directories to ignore when looking for source files. exclude_patterns = ['_build'] # The reST default role (used for this markup: `text`) to use for all documents. #default_role = None # If true, '()' will be appended to :func: etc. cross-reference text. #add_function_parentheses = True # If true, the current module name will be prepended to all description # unit titles (such as .. function::). #add_module_names = True # If true, sectionauthor and moduleauthor directives will be shown in the # output. They are ignored by default. #show_authors = False # The name of the Pygments (syntax highlighting) style to use. pygments_style = 'sphinx' # A list of ignored prefixes for module index sorting. #modindex_common_prefix = [] # If true, keep warnings as "system message" paragraphs in the built documents. #keep_warnings = False # -- Options for HTML output --------------------------------------------------- # The theme to use for HTML and HTML Help pages. See the documentation for # a list of builtin themes. html_theme = 'default' # Theme options are theme-specific and customize the look and feel of a theme # further. For a list of options available for each theme, see the # documentation. #html_theme_options = {} # Add any paths that contain custom themes here, relative to this directory. #html_theme_path = [] # The name for this set of Sphinx documents. If None, it defaults to # " v documentation". #html_title = None # A shorter title for the navigation bar. Default is the same as html_title. #html_short_title = None # The name of an image file (relative to this directory) to place at the top # of the sidebar. #html_logo = None # The name of an image file (within the static path) to use as favicon of the # docs. 
This file should be a Windows icon file (.ico) being 16x16 or 32x32 # pixels large. #html_favicon = None # Add any paths that contain custom static files (such as style sheets) here, # relative to this directory. They are copied after the builtin static files, # so a file named "default.css" will overwrite the builtin "default.css". html_static_path = ['_static'] # If not '', a 'Last updated on:' timestamp is inserted at every page bottom, # using the given strftime format. #html_last_updated_fmt = '%b %d, %Y' # If true, SmartyPants will be used to convert quotes and dashes to # typographically correct entities. #html_use_smartypants = True # Custom sidebar templates, maps document names to template names. #html_sidebars = {} # Additional templates that should be rendered to pages, maps page names to # template names. #html_additional_pages = {} # If false, no module index is generated. #html_domain_indices = True # If false, no index is generated. #html_use_index = True # If true, the index is split into individual pages for each letter. #html_split_index = False # If true, links to the reST sources are added to the pages. #html_show_sourcelink = True # If true, "Created using Sphinx" is shown in the HTML footer. Default is True. #html_show_sphinx = True # If true, "(C) Copyright ..." is shown in the HTML footer. Default is True. #html_show_copyright = True # If true, an OpenSearch description file will be output, and all pages will # contain a tag referring to it. The value of this option must be the # base URL from which the finished HTML is served. #html_use_opensearch = '' # This is the file name suffix for HTML files (e.g. ".xhtml"). #html_file_suffix = None # Output file base name for HTML help builder. htmlhelp_basename = 'Curatordoc' # -- Options for LaTeX output -------------------------------------------------- latex_elements = { # The paper size ('letterpaper' or 'a4paper'). #'papersize': 'letterpaper', # The font size ('10pt', '11pt' or '12pt'). #'pointsize': '10pt', # Additional stuff for the LaTeX preamble. #'preamble': '', } # Grouping the document tree into LaTeX files. List of tuples # (source start file, target name, title, author, documentclass [howto/manual]). latex_documents = [ ('index', 'ES_Curator.tex', u'Elasticsearch Curator Documentation', u'Aaron Mildenstein', 'manual'), ] # The name of an image file (relative to this directory) to place at the top of # the title page. #latex_logo = None # For "manual" documents, if this is true, then toplevel headings are parts, # not chapters. #latex_use_parts = False # If true, show page references after internal links. #latex_show_pagerefs = False # If true, show URL addresses after external links. #latex_show_urls = False # Documents to append as an appendix to all manuals. #latex_appendices = [] # If false, no module index is generated. #latex_domain_indices = True # -- Options for manual page output -------------------------------------------- # One entry per manual page. List of tuples # (source start file, name, description, authors, manual section). man_pages = [ ('index', 'curator', u'Elasticsearch Curator Documentation', [u'Aaron Mildenstein'], 1) ] # If true, show URL addresses after external links. #man_show_urls = False # -- Options for Texinfo output ------------------------------------------------ # Grouping the document tree into Texinfo files. 
List of tuples # (source start file, target name, title, author, # dir menu entry, description, category) texinfo_documents = [ ('index', 'Curator', u'Elasticsearch Curator Documentation', u'Aaron Mildenstein', 'Curator', 'One line description of project.', 'Miscellaneous'), ] # Documents to append as an appendix to all manuals. #texinfo_appendices = [] # If false, no module index is generated. #texinfo_domain_indices = True # How to display URL addresses: 'footnote', 'no', or 'inline'. #texinfo_show_urls = 'footnote' # If true, do not generate a @detailmenu in the "Top" node's menu. #texinfo_no_detailmenu = False curator-5.2.0/docs/examples.rst000066400000000000000000000043771315226075300165140ustar00rootroot00000000000000.. _examples: Examples ======== Each of these examples presupposes that the requisite modules have been imported and an instance of the Elasticsearch client object has been created: :: import elasticsearch import curator client = elasticsearch.Elasticsearch() Filter indices by prefix ++++++++++++++++++++++++ :: ilo = curator.IndexList(client) ilo.filter_by_regex(kind='prefix', value='logstash-') The contents of `ilo.indices` would then only be indices matching the `prefix`. Filter indices by suffix ++++++++++++++++++++++++ :: ilo = curator.IndexList(client) ilo.filter_by_regex(kind='suffix', value='-prod') The contents of `ilo.indices` would then only be indices matching the `suffix`. Filter indices by age (name) ++++++++++++++++++++++++++++ This example will match indices with the following criteria: * Have a date string of ``%Y.%m.%d`` * Use `days` as the unit of time measurement * Filter indices `older` than 5 `days` :: ilo = curator.IndexList(client) ilo.filter_by_age(source='name', direction='older', timestring='%Y.%m.%d', unit='days', unit_count=5 ) The contents of `ilo.indices` would then only be indices matching these criteria. Filter indices by age (creation_date) +++++++++++++++++++++++++++++++++++++ This example will match indices with the following criteria: * Use `months` as the unit of time measurement * Filter indices where the index creation date is `older` than 2 `months` from this moment. :: ilo = curator.IndexList(client) ilo.filter_by_age(source='creation_date', direction='older', unit='months', unit_count=2 ) The contents of `ilo.indices` would then only be indices matching these criteria. Filter indices by age (field_stats) +++++++++++++++++++++++++++++++++++ This example will match indices with the following criteria: * Use `days` as the unit of time measurement * Filter indices where the `timestamp` field's `min_value` is a date `older` than 3 `weeks` from this moment. :: ilo = curator.IndexList(client) ilo.filter_by_age(source='field_stats', direction='older', unit='weeks', unit_count=3, field='timestamp', stats_result='min_value' ) The contents of `ilo.indices` would then only be indices matching these criteria. curator-5.2.0/docs/filters.rst000066400000000000000000000026761315226075300163460ustar00rootroot00000000000000.. _filters: Filter Methods ============== * `IndexList`_ * `SnapshotList`_ IndexList --------- .. automethod:: curator.indexlist.IndexList.filter_allocated :noindex: .. automethod:: curator.indexlist.IndexList.filter_by_age :noindex: .. automethod:: curator.indexlist.IndexList.filter_by_regex :noindex: .. automethod:: curator.indexlist.IndexList.filter_by_space :noindex: .. automethod:: curator.indexlist.IndexList.filter_closed :noindex: .. automethod:: curator.indexlist.IndexList.filter_forceMerged :noindex: .. 
automethod:: curator.indexlist.IndexList.filter_kibana :noindex: .. automethod:: curator.indexlist.IndexList.filter_opened :noindex: .. automethod:: curator.indexlist.IndexList.filter_none :noindex: .. automethod:: curator.indexlist.IndexList.filter_by_alias :noindex: .. automethod:: curator.indexlist.IndexList.filter_by_count :noindex: .. automethod:: curator.indexlist.IndexList.filter_period :noindex: SnapshotList ------------ .. automethod:: curator.snapshotlist.SnapshotList.filter_by_age :noindex: .. automethod:: curator.snapshotlist.SnapshotList.filter_by_regex :noindex: .. automethod:: curator.snapshotlist.SnapshotList.filter_by_state :noindex: .. automethod:: curator.snapshotlist.SnapshotList.filter_none :noindex: .. automethod:: curator.snapshotlist.SnapshotList.filter_by_count :noindex: .. automethod:: curator.snapshotlist.SnapshotList.filter_period :noindex:curator-5.2.0/docs/index.rst000066400000000000000000000055001315226075300157720ustar00rootroot00000000000000Elasticsearch Curator Python API ================================ The Elasticsearch Curator Python API helps you manage your indices and snapshots. .. note:: This documentation is for the Elasticsearch Curator Python API. Documentation for the Elasticsearch Curator *CLI* -- which uses this API and is installed as an entry_point as part of the package -- is available in the `Elastic guide`_. .. _Elastic guide: http://www.elastic.co/guide/en/elasticsearch/client/curator/current/index.html Compatibility ------------- The Elasticsearch Curator Python API is compatible with the 5.x Elasticsearch versions, and supports Python versions 2.7 and later. Example Usage ------------- :: import elasticsearch import curator client = elasticsearch.Elasticsearch() ilo = curator.IndexList(client) ilo.filter_by_regex(kind='prefix', value='logstash-') ilo.filter_by_age(source='name', direction='older', timestring='%Y.%m.%d', unit='days', unit_count=30) delete_indices = curator.DeleteIndices(ilo) delete_indices.do_action() .. TIP:: See more examples in the :doc:`Examples ` page. Features -------- The API methods fall into the following categories: * :doc:`Object Classes ` build and filter index list or snapshot list objects. * :doc:`Action Classes ` act on object classes. * :doc:`Utilities ` are helper methods. Logging ~~~~~~~ The Elasticsearch Curator Python API uses the standard `logging library`_ from Python. It inherits two loggers from ``elasticsearch-py``: ``elasticsearch`` and ``elasticsearch.trace``. Clients use the ``elasticsearch`` logger to log standard activity, depending on the log level. The ``elasticsearch.trace`` logger logs requests to the server in JSON format as pretty-printed ``curl`` commands that you can execute from the command line. The ``elasticsearch.trace`` logger is not inherited from the base logger and must be activated separately. .. _logging library: http://docs.python.org/3.6/library/logging.html Contents -------- .. toctree:: :maxdepth: 2 objectclasses actionclasses filters utilities examples Changelog License ------- Copyright (c) 2012–2017 Elasticsearch Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
See the License for the specific language governing permissions and limitations under the License. Indices and tables ------------------ * :ref:`genindex` * :ref:`search` curator-5.2.0/docs/objectclasses.rst000066400000000000000000000003751315226075300175140ustar00rootroot00000000000000.. _objectclasses: Object Classes ============== * `IndexList`_ * `SnapshotList`_ IndexList --------- .. autoclass:: curator.indexlist.IndexList :members: SnapshotList ------------ .. autoclass:: curator.snapshotlist.SnapshotList :members: curator-5.2.0/docs/utilities.rst000066400000000000000000000002371315226075300167000ustar00rootroot00000000000000.. _utilities: Utility & Helper Methods ======================== .. automodule:: curator.utils :members: .. autoclass:: curator.SchemaCheck :members: curator-5.2.0/examples/000077500000000000000000000000001315226075300150175ustar00rootroot00000000000000curator-5.2.0/examples/actions/000077500000000000000000000000001315226075300164575ustar00rootroot00000000000000curator-5.2.0/examples/actions/alias.yml000066400000000000000000000024351315226075300202770ustar00rootroot00000000000000--- # Remember, leave a key empty if there is no value. None will be a string, # not a Python "NoneType" # # Also remember that all examples have 'disable_action' set to True. If you # want to use this action as a template, be sure to set this to False after # copying it. actions: 1: action: alias description: >- Alias indices older than 7 days but newer than 14 days, with a prefix of logstash- to 'last_week', remove indices older than 14 days. options: name: last_week extra_settings: timeout_override: continue_if_exception: False disable_action: True add: filters: - filtertype: pattern kind: prefix value: logstash- exclude: - filtertype: age source: name direction: older timestring: '%Y.%m.%d' unit: days unit_count: 7 exclude: - filtertype: age direction: younger timestring: '%Y.%m.%d' unit: days unit_count: 14 exclude: remove: filters: - filtertype: pattern kind: prefix value: logstash- exclude: - filtertype: age source: name direction: older timestring: '%Y.%m.%d' unit: days unit_count: 14 exclude: curator-5.2.0/examples/actions/allocation.yml000066400000000000000000000015741315226075300213360ustar00rootroot00000000000000--- # Remember, leave a key empty if there is no value. None will be a string, # not a Python "NoneType" # # Also remember that all examples have 'disable_action' set to True. If you # want to use this action as a template, be sure to set this to False after # copying it. actions: 1: action: allocation description: >- Apply shard allocation routing to 'require' 'tag=cold' for hot/cold node setup for logstash- indices older than 3 days, based on index_creation date options: key: tag value: cold allocation_type: require wait_for_completion: False continue_if_exception: False disable_action: True filters: - filtertype: pattern kind: prefix value: logstash- exclude: - filtertype: age source: creation_date direction: older unit: days unit_count: 3 exclude: curator-5.2.0/examples/actions/close.yml000066400000000000000000000014341315226075300203110ustar00rootroot00000000000000--- # Remember, leave a key empty if there is no value. None will be a string, # not a Python "NoneType" # # Also remember that all examples have 'disable_action' set to True. If you # want to use this action as a template, be sure to set this to False after # copying it. actions: 1: action: close description: >- Close indices older than 30 days (based on index name), for logstash- prefixed indices. 
options: delete_aliases: False timeout_override: continue_if_exception: False disable_action: True filters: - filtertype: pattern kind: prefix value: logstash- exclude: - filtertype: age source: name direction: older timestring: '%Y.%m.%d' unit: days unit_count: 30 exclude: curator-5.2.0/examples/actions/create_index.yml000066400000000000000000000011461315226075300216360ustar00rootroot00000000000000--- # Remember, leave a key empty if there is no value. None will be a string, # not a Python "NoneType" # # Also remember that all examples have 'disable_action' set to True. If you # want to use this action as a template, be sure to set this to False after # copying it. actions: 1: action: create_index description: Create the index as named, with the specified extra settings. options: name: myindex extra_settings: settings: number_of_shards: 2 number_of_replicas: 1 timeout_override: continue_if_exception: False disable_action: True curator-5.2.0/examples/actions/delete_indices.yml000066400000000000000000000016441315226075300221470ustar00rootroot00000000000000--- # Remember, leave a key empty if there is no value. None will be a string, # not a Python "NoneType" # # Also remember that all examples have 'disable_action' set to True. If you # want to use this action as a template, be sure to set this to False after # copying it. actions: 1: action: delete_indices description: >- Delete indices older than 45 days (based on index name), for logstash- prefixed indices. Ignore the error if the filter does not result in an actionable list of indices (ignore_empty_list) and exit cleanly. options: ignore_empty_list: True timeout_override: continue_if_exception: False disable_action: True filters: - filtertype: pattern kind: prefix value: logstash- exclude: - filtertype: age source: name direction: older timestring: '%Y.%m.%d' unit: days unit_count: 45 exclude: curator-5.2.0/examples/actions/delete_snapshots.yml000066400000000000000000000014561315226075300225540ustar00rootroot00000000000000--- # Remember, leave a key empty if there is no value. None will be a string, # not a Python "NoneType" # # Also remember that all examples have 'disable_action' set to True. If you # want to use this action as a template, be sure to set this to False after # copying it. actions: 1: action: delete_snapshots description: >- Delete snapshots from the selected repository older than 45 days (based on creation_date), for 'curator-' prefixed snapshots. options: repository: timeout_override: continue_if_exception: False disable_action: True filters: - filtertype: pattern kind: prefix value: curator- exclude: - filtertype: age source: creation_date direction: older unit: days unit_count: 45 exclude: curator-5.2.0/examples/actions/forcemerge.yml000066400000000000000000000020531315226075300213200ustar00rootroot00000000000000--- # Remember, leave a key empty if there is no value. None will be a string, # not a Python "NoneType" # # Also remember that all examples have 'disable_action' set to True. If you # want to use this action as a template, be sure to set this to False after # copying it. actions: 1: action: forcemerge description: >- forceMerge logstash- prefixed indices older than 2 days (based on index creation_date) to 2 segments per shard. Delay 120 seconds between each forceMerge operation to allow the cluster to quiesce. This action will ignore indices already forceMerged to the same or fewer number of segments per shard, so the 'forcemerged' filter is unneeded. 
options: max_num_segments: 2 delay: 120 timeout_override: continue_if_exception: False disable_action: True filters: - filtertype: pattern kind: prefix value: logstash- exclude: - filtertype: age source: creation_date direction: older unit: days unit_count: 2 exclude: curator-5.2.0/examples/actions/open.yml000066400000000000000000000016531315226075300201500ustar00rootroot00000000000000--- # Remember, leave a key empty if there is no value. None will be a string, # not a Python "NoneType" # # Also remember that all examples have 'disable_action' set to True. If you # want to use this action as a template, be sure to set this to False after # copying it. actions: 1: action: open description: >- Open indices older than 30 days but younger than 60 days (based on index name), for logstash- prefixed indices. options: timeout_override: continue_if_exception: False disable_action: True filters: - filtertype: pattern kind: prefix value: logstash- exclude: - filtertype: age source: name direction: older timestring: '%Y.%m.%d' unit: days unit_count: 30 exclude: - filtertype: age source: name direction: younger timestring: '%Y.%m.%d' unit: days unit_count: 60 exclude: curator-5.2.0/examples/actions/replicas.yml000066400000000000000000000014661315226075300210130ustar00rootroot00000000000000--- # Remember, leave a key empty if there is no value. None will be a string, # not a Python "NoneType" # # Also remember that all examples have 'disable_action' set to True. If you # want to use this action as a template, be sure to set this to False after # copying it. actions: 1: action: replicas description: >- Reduce the replica count to 0 for logstash- prefixed indices older than 10 days (based on index creation_date) options: count: 0 wait_for_completion: False timeout_override: continue_if_exception: False disable_action: True filters: - filtertype: pattern kind: prefix value: logstash- exclude: - filtertype: age source: creation_date direction: older unit: days unit_count: 10 exclude: curator-5.2.0/examples/actions/restore.yml000066400000000000000000000024601315226075300206670ustar00rootroot00000000000000--- # Remember, leave a key empty if there is no value. None will be a string, # not a Python "NoneType" # # Also remember that all examples have 'disable_action' set to True. If you # want to use this action as a template, be sure to set this to False after # copying it. actions: 1: action: restore description: >- Restore all indices in the most recent curator-* snapshot with state SUCCESS. Wait for the restore to complete before continuing. Do not skip the repository filesystem access check. Use the other options to define the index/shard settings for the restore. options: repository: # Leaving name blank will result in restoring the most recent snapshot by age name: # Leaving indices blank will result in restoring all indices in the snapshot indices: include_aliases: False ignore_unavailable: False include_global_state: True partial: False rename_pattern: rename_replacement: extra_settings: wait_for_completion: True skip_repo_fs_check: False timeout_override: continue_if_exception: False disable_action: False filters: - filtertype: pattern kind: prefix value: curator- exclude: - filtertype: state state: SUCCESS exclude: curator-5.2.0/examples/actions/snapshot.yml000066400000000000000000000023101315226075300210350ustar00rootroot00000000000000--- # Remember, leave a key empty if there is no value. 
None will be a string, # not a Python "NoneType" # # Also remember that all examples have 'disable_action' set to True. If you # want to use this action as a template, be sure to set this to False after # copying it. actions: 1: action: snapshot description: >- Snapshot logstash- prefixed indices older than 1 day (based on index creation_date) with the default snapshot name pattern of 'curator-%Y%m%d%H%M%S'. Wait for the snapshot to complete. Do not skip the repository filesystem access check. Use the other options to create the snapshot. options: repository: # Leaving name blank will result in the default 'curator-%Y%m%d%H%M%S' name: ignore_unavailable: False include_global_state: True partial: False wait_for_completion: True skip_repo_fs_check: False timeout_override: continue_if_exception: False disable_action: False filters: - filtertype: pattern kind: prefix value: logstash- exclude: - filtertype: age source: creation_date direction: older unit: days unit_count: 1 exclude: curator-5.2.0/examples/curator.yml000066400000000000000000000006741315226075300172300ustar00rootroot00000000000000--- # Remember, leave a key empty if there is no value. None will be a string, # not a Python "NoneType" client: hosts: - 127.0.0.1 port: 9200 url_prefix: use_ssl: False certificate: client_cert: client_key: aws_key: aws_secret_key: aws_region: ssl_no_validate: False http_auth: timeout: 30 master_only: False logging: loglevel: INFO logfile: logformat: default blacklist: ['elasticsearch', 'urllib3'] curator-5.2.0/requirements.txt000066400000000000000000000001501315226075300164610ustar00rootroot00000000000000urllib3>=1.20 elasticsearch>=5.4.0,<6.0.0 click>=6.7 pyyaml>=3.10 voluptuous>=0.9.3 certifi>=2017.7.27.1curator-5.2.0/run_curator.py000077500000000000000000000015761315226075300161320ustar00rootroot00000000000000#!/usr/bin/env python """Wrapper for running curator from source.""" from curator.cli import cli if __name__ == '__main__': try: cli() except Exception as e: if type(e) == type(RuntimeError()): if 'ASCII' in str(e): print('{0}'.format(e)) print( ''' When used with Python 3 (and the DEB and RPM packages of Curator are compiled and bundled with Python 3), Curator requires the locale to be unicode. Any of the above unicode definitions are acceptable. To set the locale to be unicode, try: $ export LC_ALL=en_US.utf8 $ curator [ARGS] Alternately, you should be able to specify the locale on the command-line: $ LC_ALL=en_US.utf8 curator [ARGS] Be sure to substitute your unicode variant for en_US.utf8 ''' ) else: import sys print('{0}'.format(e)) sys.exit(1) curator-5.2.0/run_es_repo_mgr.py000077500000000000000000000016431315226075300167470ustar00rootroot00000000000000#!/usr/bin/env python """Wrapper for running es_repo_mgr from source.""" from curator.repomgrcli import repo_mgr_cli if __name__ == '__main__': try: repo_mgr_cli() except Exception as e: if type(e) == type(RuntimeError()): if 'ASCII' in str(e): print('{0}'.format(e)) print( ''' When used with Python 3 (and the DEB and RPM packages of Curator are compiled and bundled with Python 3), Curator requires the locale to be unicode. Any of the above unicode definitions are acceptable. 
To set the locale to be unicode, try: $ export LC_ALL=en_US.utf8 $ es_repo_mgr [ARGS] Alternately, you should be able to specify the locale on the command-line: $ LC_ALL=en_US.utf8 es_repo_mgr [ARGS] Be sure to substitute your unicode variant for en_US.utf8 ''' ) else: import sys print('{0}'.format(e)) sys.exit(1) curator-5.2.0/run_singleton.py000077500000000000000000000016421315226075300164470ustar00rootroot00000000000000#!/usr/bin/env python """Wrapper for running singletons from source.""" import click from curator.singletons import cli if __name__ == '__main__': try: cli(obj={}) except Exception as e: if type(e) == type(RuntimeError()): if 'ASCII' in str(e): print('{0}'.format(e)) print( ''' When used with Python 3 (and the DEB and RPM packages of Curator are compiled and bundled with Python 3), Curator requires the locale to be unicode. Any of the above unicode definitions are acceptable. To set the locale to be unicode, try: $ export LC_ALL=en_US.utf8 $ curator_cli [ARGS] Alternately, you should be able to specify the locale on the command-line: $ LC_ALL=en_US.utf8 curator_cli [ARGS] Be sure to substitute your unicode variant for en_US.utf8 ''' ) else: import sys print('{0}'.format(e)) sys.exit(1) curator-5.2.0/setup.cfg000066400000000000000000000001031315226075300150140ustar00rootroot00000000000000[metadata] description-file = README.md [bdist_wheel] universal=1 curator-5.2.0/setup.py000066400000000000000000000125371315226075300147230ustar00rootroot00000000000000import os import re import sys from setuptools import setup # Utility function to read from file. def fread(fname): return open(os.path.join(os.path.dirname(__file__), fname)).read() def get_version(): VERSIONFILE="curator/_version.py" verstrline = fread(VERSIONFILE).strip() vsre = r"^__version__ = ['\"]([^'\"]*)['\"]" mo = re.search(vsre, verstrline, re.M) if mo: VERSION = mo.group(1) else: raise RuntimeError("Unable to find version string in %s." % (VERSIONFILE,)) build_number = os.environ.get('CURATOR_BUILD_NUMBER', None) if build_number: return VERSION + "b{}".format(build_number) return VERSION def get_install_requires(): res = ['elasticsearch>=5.4.0,<6.0.0' ] res.append('click>=6.7') res.append('pyyaml>=3.10') res.append('voluptuous>=0.9.3') res.append('certifi>=2017.4.17') return res try: ### cx_Freeze ### from cx_Freeze import setup, Executable try: import certifi cert_file = certifi.where() except ImportError: cert_file = '' # Dependencies are automatically detected, but it might need # fine tuning. 
buildOptions = dict( packages = [], excludes = [], include_files = [cert_file], ) base = 'Console' icon = None if os.path.exists('Elastic.ico'): icon = 'Elastic.ico' curator_exe = Executable( "run_curator.py", base=base, targetName = "curator", ) curator_cli_exe = Executable( "run_singleton.py", base=base, targetName = "curator_cli", ) repomgr_exe = Executable( "run_es_repo_mgr.py", base=base, targetName = "es_repo_mgr", ) if sys.platform == "win32": curator_exe = Executable( "run_curator.py", base=base, targetName = "curator.exe", icon = icon ) curator_cli_exe = Executable( "run_singleton.py", base=base, targetName = "curator_cli.exe", icon = icon ) repomgr_exe = Executable( "run_es_repo_mgr.py", base=base, targetName = "es_repo_mgr.exe", icon = icon ) setup( name = "elasticsearch-curator", version = get_version(), author = "Elastic", author_email = "info@elastic.co", description = "Tending your Elasticsearch indices", long_description=fread('README.rst'), url = "http://github.com/elastic/curator", download_url = "https://github.com/elastic/curator/tarball/v" + get_version(), license = "Apache License, Version 2.0", install_requires = get_install_requires(), keywords = "elasticsearch time-series indexed index-expiry", packages = ["curator"], include_package_data=True, entry_points = { "console_scripts" : [ "curator = curator.cli:cli", "curator_cli = curator.curator_cli:main", "es_repo_mgr = curator.repomgrcli:repo_mgr_cli", ] }, classifiers=[ "Intended Audience :: Developers", "Intended Audience :: System Administrators", "License :: OSI Approved :: Apache Software License", "Operating System :: OS Independent", "Programming Language :: Python", "Programming Language :: Python :: 2.7", "Programming Language :: Python :: 3.4", "Programming Language :: Python :: 3.5", "Programming Language :: Python :: 3.6", ], test_suite = "test.run_tests.run_all", tests_require = ["mock", "nose", "coverage", "nosexcover"], options = {"build_exe" : buildOptions}, executables = [curator_exe,curator_cli_exe,repomgr_exe] ) ### end cx_Freeze ### except ImportError: setup( name = "elasticsearch-curator", version = get_version(), author = "Elastic", author_email = "info@elastic.co", description = "Tending your Elasticsearch indices", long_description=fread('README.rst'), url = "http://github.com/elastic/curator", download_url = "https://github.com/elastic/curator/tarball/v" + get_version(), license = "Apache License, Version 2.0", install_requires = get_install_requires(), keywords = "elasticsearch time-series indexed index-expiry", packages = ["curator"], include_package_data=True, entry_points = { "console_scripts" : [ "curator = curator.cli:cli", "curator_cli = curator.curator_cli:main", "es_repo_mgr = curator.repomgrcli:repo_mgr_cli", ] }, classifiers=[ "Intended Audience :: Developers", "Intended Audience :: System Administrators", "License :: OSI Approved :: Apache Software License", "Operating System :: OS Independent", "Programming Language :: Python", "Programming Language :: Python :: 2.7", "Programming Language :: Python :: 3.4", "Programming Language :: Python :: 3.5", "Programming Language :: Python :: 3.6", ], test_suite = "test.run_tests.run_all", tests_require = ["mock", "nose", "coverage", "nosexcover"] ) 
curator-5.2.0/test/000077500000000000000000000000001315226075300141605ustar00rootroot00000000000000curator-5.2.0/test/__init__.py000066400000000000000000000000001315226075300162570ustar00rootroot00000000000000curator-5.2.0/test/integration/000077500000000000000000000000001315226075300165035ustar00rootroot00000000000000curator-5.2.0/test/integration/__init__.py000066400000000000000000000127371315226075300206260ustar00rootroot00000000000000import time import os import shutil import tempfile import random import string from datetime import timedelta, datetime, date from elasticsearch import Elasticsearch from elasticsearch.exceptions import ConnectionError from unittest import SkipTest, TestCase from mock import Mock client = None DATEMAP = { 'months': '%Y.%m', 'weeks': '%Y.%W', 'days': '%Y.%m.%d', 'hours': '%Y.%m.%d.%H', } host, port = os.environ.get('TEST_ES_SERVER', 'localhost:9200').split(':') port = int(port) if port else 9200 def random_directory(): dirname = ''.join(random.choice(string.ascii_uppercase + string.digits) for _ in range(8)) directory = tempfile.mkdtemp(suffix=dirname) if not os.path.exists(directory): os.makedirs(directory) return directory def get_client(): global client if client is not None: return client client = Elasticsearch([os.environ.get('TEST_ES_SERVER', {})], timeout=300) # wait for yellow status for _ in range(100): time.sleep(.1) try: client.cluster.health(wait_for_status='yellow') return client except ConnectionError: continue else: # timeout raise SkipTest("Elasticsearch failed to start.") def setup(): get_client() class Args(dict): def __getattr__(self, att_name): return self.get(att_name, None) class CuratorTestCase(TestCase): def setUp(self): super(CuratorTestCase, self).setUp() self.client = get_client() args = {} args['host'], args['port'] = host, port args['time_unit'] = 'days' args['prefix'] = 'logstash-' self.args = args # dirname = ''.join(random.choice(string.ascii_uppercase + string.digits) for _ in range(8)) # This will create a psuedo-random temporary directory on the machine # which runs the unit tests, but NOT on the machine where elasticsearch # is running. This means tests may fail if run against remote instances # unless you explicitly set `self.args['location']` to a proper spot # on the target machine. 
self.args['location'] = random_directory() self.args['configdir'] = random_directory() self.args['configfile'] = os.path.join(self.args['configdir'], 'curator.yml') self.args['actionfile'] = os.path.join(self.args['configdir'], 'actions.yml') self.args['repository'] = 'TEST_REPOSITORY' # if not os.path.exists(self.args['location']): # os.makedirs(self.args['location']) def tearDown(self): self.delete_repositories() self.client.indices.delete(index='*') self.client.indices.delete_template(name='*', ignore=404) for path_arg in ['location', 'configdir']: if os.path.exists(self.args[path_arg]): shutil.rmtree(self.args[path_arg]) def parse_args(self): return Args(self.args) def create_indices(self, count, unit=None): now = datetime.utcnow() unit = unit if unit else self.args['time_unit'] format = DATEMAP[unit] if not unit == 'months': step = timedelta(**{unit: 1}) for x in range(count): self.create_index(self.args['prefix'] + now.strftime(format), wait_for_yellow=False) now -= step else: # months now = date.today() d = date(now.year, now.month, 1) self.create_index(self.args['prefix'] + now.strftime(format), wait_for_yellow=False) for i in range(1, count): if d.month == 1: d = date(d.year-1, 12, 1) else: d = date(d.year, d.month-1, 1) self.create_index(self.args['prefix'] + datetime(d.year, d.month, 1).strftime(format), wait_for_yellow=False) self.client.cluster.health(wait_for_status='yellow') def create_index(self, name, shards=1, wait_for_yellow=True): self.client.indices.create( index=name, body={'settings': {'number_of_shards': shards, 'number_of_replicas': 0}} ) if wait_for_yellow: self.client.cluster.health(wait_for_status='yellow') def add_docs(self, idx): for i in ["1", "2", "3"]: self.client.create( index=idx, doc_type='log', id=i, body={"doc" + i :'TEST DOCUMENT'}, ) # This should force each doc to be in its own segment. self.client.indices.flush(index=idx, force=True) def create_snapshot(self, name, csv_indices): body = { "indices": csv_indices, "ignore_unavailable": False, "include_global_state": True, "partial": False, } self.create_repository() self.client.snapshot.create( repository=self.args['repository'], snapshot=name, body=body, wait_for_completion=True ) def create_repository(self): body = {'type':'fs', 'settings':{'location':self.args['location']}} self.client.snapshot.create_repository(repository=self.args['repository'], body=body) def delete_repositories(self): result = self.client.snapshot.get_repository(repository='_all') for repo in result: self.client.snapshot.delete_repository(repository=repo) def close_index(self, name): self.client.indices.close(index=name) def write_config(self, fname, data): with open(fname, 'w') as f: f.write(data) curator-5.2.0/test/integration/test_alias.py000066400000000000000000000261231315226075300212110ustar00rootroot00000000000000import elasticsearch import curator import os import json import string, random, tempfile import click from click import testing as clicktest from mock import patch, Mock from . import CuratorTestCase from . 
import testvars as testvars import logging logger = logging.getLogger(__name__) host, port = os.environ.get('TEST_ES_SERVER', 'localhost:9200').split(':') port = int(port) if port else 9200 class TestCLIAlias(CuratorTestCase): def test_add_only(self): alias = 'testalias' self.write_config( self.args['configfile'], testvars.client_config.format(host, port)) self.write_config(self.args['actionfile'], testvars.alias_add_only.format(alias)) self.create_index('my_index') self.create_index('dummy') self.client.indices.put_alias(index='dummy', name=alias) test = clicktest.CliRunner() result = test.invoke( curator.cli, [ '--config', self.args['configfile'], self.args['actionfile'] ], ) self.assertEquals(2, len(self.client.indices.get_alias(name=alias))) def test_add_only_with_extra_settings(self): alias = 'testalias' self.write_config( self.args['configfile'], testvars.client_config.format(host, port)) self.write_config(self.args['actionfile'], testvars.alias_add_only_with_extra_settings.format(alias)) self.create_index('my_index') test = clicktest.CliRunner() result = test.invoke( curator.cli, [ '--config', self.args['configfile'], self.args['actionfile'] ], ) self.assertEquals( { 'my_index': { 'aliases': { 'testalias': { 'filter': { 'term': { 'user': 'kimchy' } } } } } }, self.client.indices.get_alias(name=alias) ) def test_alias_remove_only(self): alias = 'testalias' self.write_config( self.args['configfile'], testvars.client_config.format(host, port)) self.write_config(self.args['actionfile'], testvars.alias_remove_only.format(alias)) self.create_index('my_index') self.create_index('dummy') self.client.indices.put_alias(index='dummy', name=alias) test = clicktest.CliRunner() result = test.invoke( curator.cli, [ '--config', self.args['configfile'], self.args['actionfile'] ], ) self.assertEqual( {'dummy': {'aliases': {}}, 'my_index': {'aliases': {}}}, self.client.indices.get_alias(index='dummy,my_index') ) def test_add_only_skip_closed(self): alias = 'testalias' self.write_config( self.args['configfile'], testvars.client_config.format(host, port)) self.write_config(self.args['actionfile'], testvars.alias_add_only.format(alias)) self.create_index('my_index') self.client.indices.close(index='my_index') self.create_index('dummy') self.client.indices.put_alias(index='dummy', name=alias) test = clicktest.CliRunner() result = test.invoke( curator.cli, [ '--config', self.args['configfile'], self.args['actionfile'] ], ) version = curator.get_version(self.client) if version > (3,0,0): self.assertEquals(2, len(self.client.indices.get_alias(name=alias))) else: self.assertEquals(1, len(self.client.indices.get_alias(name=alias))) def test_add_and_remove(self): alias = 'testalias' self.write_config( self.args['configfile'], testvars.client_config.format(host, port)) self.write_config(self.args['actionfile'], testvars.alias_add_remove.format(alias)) self.create_index('my_index') self.create_index('dummy') self.client.indices.put_alias(index='dummy', name=alias) test = clicktest.CliRunner() result = test.invoke( curator.cli, [ '--config', self.args['configfile'], self.args['actionfile'] ], ) self.assertEqual( {u'my_index': {u'aliases': {alias: {}}}}, self.client.indices.get_alias(name=alias) ) def test_add_with_empty_remove(self): alias = 'testalias' self.write_config( self.args['configfile'], testvars.client_config.format(host, port)) self.write_config(self.args['actionfile'], testvars.alias_add_with_empty_remove.format(alias)) self.create_index('my_index') self.create_index('dummy') 
    def test_add_with_empty_remove(self):
        alias = 'testalias'
        self.write_config(
            self.args['configfile'], testvars.client_config.format(host, port))
        self.write_config(self.args['actionfile'],
            testvars.alias_add_with_empty_remove.format(alias))
        self.create_index('my_index')
        self.create_index('dummy')
        self.client.indices.put_alias(index='dummy', name=alias)
        test = clicktest.CliRunner()
        result = test.invoke(
            curator.cli,
            ['--config', self.args['configfile'], self.args['actionfile']],
        )
        self.assertEqual(
            {'dummy': {'aliases': {alias: {}}}, 'my_index': {'aliases': {alias: {}}}},
            self.client.indices.get_alias()
        )

    def test_remove_with_empty_add(self):
        alias = 'testalias'
        self.write_config(
            self.args['configfile'], testvars.client_config.format(host, port))
        self.write_config(self.args['actionfile'],
            testvars.alias_remove_with_empty_add.format(alias))
        self.create_index('my_index')
        self.create_index('dummy')
        self.client.indices.put_alias(index='dummy,my_index', name=alias)
        test = clicktest.CliRunner()
        result = test.invoke(
            curator.cli,
            ['--config', self.args['configfile'], self.args['actionfile']],
        )
        self.assertEqual(
            {'dummy': {'aliases': {}}, 'my_index': {'aliases': {alias: {}}}},
            self.client.indices.get_alias()
        )

    def test_add_with_empty_list(self):
        alias = 'testalias'
        self.write_config(
            self.args['configfile'], testvars.client_config.format(host, port))
        self.write_config(self.args['actionfile'],
            testvars.alias_add_remove_empty.format(alias, 'du', 'rickroll'))
        self.create_index('my_index')
        self.create_index('dummy')
        self.client.indices.put_alias(index='dummy', name=alias)
        test = clicktest.CliRunner()
        result = test.invoke(
            curator.cli,
            ['--config', self.args['configfile'], self.args['actionfile']],
        )
        self.assertEqual(
            {'dummy': {'aliases': {alias: {}}}, 'my_index': {'aliases': {}}},
            self.client.indices.get_alias()
        )

    def test_remove_with_empty_list(self):
        alias = 'testalias'
        self.write_config(
            self.args['configfile'], testvars.client_config.format(host, port))
        self.write_config(self.args['actionfile'],
            testvars.alias_add_remove_empty.format(alias, 'rickroll', 'my'))
        self.create_index('my_index')
        self.create_index('dummy')
        self.client.indices.put_alias(index='dummy', name=alias)
        test = clicktest.CliRunner()
        result = test.invoke(
            curator.cli,
            ['--config', self.args['configfile'], self.args['actionfile']],
        )
        self.assertEqual(
            {'dummy': {'aliases': {alias: {}}}, 'my_index': {'aliases': {}}},
            self.client.indices.get_alias()
        )

    def test_remove_index_not_in_alias(self):
        alias = 'testalias'
        self.write_config(
            self.args['configfile'], testvars.client_config.format(host, port))
        self.write_config(self.args['actionfile'],
            testvars.alias_remove_index_not_there.format(alias, 'my'))
        self.create_index('my_index1')
        self.create_index('my_index2')
        self.client.indices.put_alias(index='my_index1', name=alias)
        test = clicktest.CliRunner()
        result = test.invoke(
            curator.cli,
            ['--config', self.args['configfile'], self.args['actionfile']],
        )
        self.assertEqual(
            {'my_index1': {'aliases': {}}, 'my_index2': {'aliases': {}}},
            self.client.indices.get_alias()
        )
        self.assertEqual(0, result.exit_code)

    def test_no_add_remove(self):
        alias = 'testalias'
        self.write_config(
            self.args['configfile'], testvars.client_config.format(host, port))
        self.write_config(self.args['actionfile'],
            testvars.alias_no_add_remove.format(alias))
        self.create_index('my_index')
        self.create_index('dummy')
        test = clicktest.CliRunner()
        result = test.invoke(
            curator.cli,
            ['--config', self.args['configfile'], self.args['actionfile']],
        )
        self.assertEqual(1, result.exit_code)
    def test_no_alias(self):
        somevar = 'testalias'
        self.write_config(
            self.args['configfile'], testvars.client_config.format(host, port))
        self.write_config(self.args['actionfile'], testvars.alias_no_alias)
        self.create_index('my_index')
        self.create_index('dummy')
        test = clicktest.CliRunner()
        result = test.invoke(
            curator.cli,
            ['--config', self.args['configfile'], self.args['actionfile']],
        )
        self.assertEqual(-1, result.exit_code)

    def test_extra_options(self):
        somevar = 'testalias'
        self.write_config(
            self.args['configfile'], testvars.client_config.format(host, port))
        self.write_config(self.args['actionfile'],
            testvars.bad_option_proto_test.format('alias'))
        test = clicktest.CliRunner()
        result = test.invoke(
            curator.cli,
            ['--config', self.args['configfile'], self.args['actionfile']],
        )
        self.assertEqual(-1, result.exit_code)

curator-5.2.0/test/integration/test_allocation.py

import elasticsearch
import curator
import os
import json
import string, random, tempfile
import click
from click import testing as clicktest
from mock import patch, Mock

from . import CuratorTestCase
from . import testvars as testvars

import logging
logger = logging.getLogger(__name__)

host, port = os.environ.get('TEST_ES_SERVER', 'localhost:9200').split(':')
port = int(port) if port else 9200

class TestCLIAllocation(CuratorTestCase):
    def test_include(self):
        key = 'tag'
        value = 'value'
        at = 'include'
        self.write_config(
            self.args['configfile'], testvars.client_config.format(host, port))
        self.write_config(self.args['actionfile'],
            testvars.allocation_test.format(key, value, at, False))
        self.create_index('my_index')
        self.create_index('not_my_index')
        test = clicktest.CliRunner()
        result = test.invoke(
            curator.cli,
            ['--config', self.args['configfile'], self.args['actionfile']],
        )
        self.assertEquals(value,
            self.client.indices.get_settings(index='my_index')['my_index']['settings']['index']['routing']['allocation'][at][key])
        self.assertNotIn('routing',
            self.client.indices.get_settings(index='not_my_index')['not_my_index']['settings']['index'])

    def test_require(self):
        key = 'tag'
        value = 'value'
        at = 'require'
        self.write_config(
            self.args['configfile'], testvars.client_config.format(host, port))
        self.write_config(self.args['actionfile'],
            testvars.allocation_test.format(key, value, at, False))
        self.create_index('my_index')
        self.create_index('not_my_index')
        test = clicktest.CliRunner()
        result = test.invoke(
            curator.cli,
            ['--config', self.args['configfile'], self.args['actionfile']],
        )
        self.assertEquals(value,
            self.client.indices.get_settings(index='my_index')['my_index']['settings']['index']['routing']['allocation'][at][key])
        self.assertNotIn('routing',
            self.client.indices.get_settings(index='not_my_index')['not_my_index']['settings']['index'])

    def test_exclude(self):
        key = 'tag'
        value = 'value'
        at = 'exclude'
        self.write_config(
            self.args['configfile'], testvars.client_config.format(host, port))
        self.write_config(self.args['actionfile'],
            testvars.allocation_test.format(key, value, at, False))
        self.create_index('my_index')
        self.create_index('not_my_index')
        test = clicktest.CliRunner()
        result = test.invoke(
            curator.cli,
            ['--config', self.args['configfile'], self.args['actionfile']],
        )
        self.assertEquals(value,
            self.client.indices.get_settings(index='my_index')['my_index']['settings']['index']['routing']['allocation'][at][key])
        self.assertNotIn('routing',
            self.client.indices.get_settings(index='not_my_index')['not_my_index']['settings']['index'])
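    # The allocation action applies an index-level shard routing setting of
    # the form 'index.routing.allocation.<type>.<key>: <value>', where <type>
    # is one of include, require, or exclude -- the same path the assertions
    # above read back through indices.get_settings().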
    def test_remove_exclude_with_none_value(self):
        key = 'tag'
        value = ''
        at = 'exclude'
        self.write_config(
            self.args['configfile'], testvars.client_config.format(host, port))
        self.write_config(self.args['actionfile'],
            testvars.allocation_test.format(key, value, at, False))
        self.create_index('my_index')
        self.create_index('not_my_index')
        # Put a setting in place before we start the test.
        self.client.indices.put_settings(
            index='my_index',
            body={'index.routing.allocation.{0}.{1}'.format(at, key): 'bar'}
        )
        # Ensure we _have_ it here first.
        self.assertEquals('bar',
            self.client.indices.get_settings(index='my_index')['my_index']['settings']['index']['routing']['allocation'][at][key])
        test = clicktest.CliRunner()
        result = test.invoke(
            curator.cli,
            ['--config', self.args['configfile'], self.args['actionfile']],
        )
        self.assertNotIn('routing',
            self.client.indices.get_settings(index='my_index')['my_index']['settings']['index'])
        self.assertNotIn('routing',
            self.client.indices.get_settings(index='not_my_index')['not_my_index']['settings']['index'])

    def test_invalid_allocation_type(self):
        key = 'tag'
        value = 'value'
        at = 'invalid'
        self.write_config(
            self.args['configfile'], testvars.client_config.format(host, port))
        self.write_config(self.args['actionfile'],
            testvars.allocation_test.format(key, value, at, False))
        self.create_index('my_index')
        self.create_index('not_my_index')
        test = clicktest.CliRunner()
        result = test.invoke(
            curator.cli,
            ['--config', self.args['configfile'], self.args['actionfile']],
        )
        self.assertEqual(-1, result.exit_code)

    def test_extra_option(self):
        key = 'tag'
        value = 'value'
        at = 'invalid'
        self.write_config(
            self.args['configfile'], testvars.client_config.format(host, port))
        self.write_config(self.args['actionfile'],
            testvars.bad_option_proto_test.format('allocation'))
        self.create_index('my_index')
        self.create_index('not_my_index')
        test = clicktest.CliRunner()
        result = test.invoke(
            curator.cli,
            ['--config', self.args['configfile'], self.args['actionfile']],
        )
        self.assertEqual(-1, result.exit_code)

    def test_skip_closed(self):
        key = 'tag'
        value = 'value'
        at = 'include'
        self.write_config(
            self.args['configfile'], testvars.client_config.format(host, port))
        self.write_config(self.args['actionfile'],
            testvars.allocation_test.format(key, value, at, False))
        self.create_index('my_index')
        self.client.indices.close(index='my_index')
        self.create_index('not_my_index')
        test = clicktest.CliRunner()
        result = test.invoke(
            curator.cli,
            ['--config', self.args['configfile'], self.args['actionfile']],
        )
        self.assertNotIn('routing',
            self.client.indices.get_settings(index='my_index')['my_index']['settings']['index'])
        self.assertNotIn('routing',
            self.client.indices.get_settings(index='not_my_index')['not_my_index']['settings']['index'])

    def test_wait_for_completion(self):
        key = 'tag'
        value = 'value'
        at = 'require'
        self.write_config(
            self.args['configfile'], testvars.client_config.format(host, port))
        self.write_config(self.args['actionfile'],
            testvars.allocation_test.format(key, value, at, True))
        self.create_index('my_index')
        self.create_index('not_my_index')
        test = clicktest.CliRunner()
        result = test.invoke(
            curator.cli,
            ['--config', self.args['configfile'], self.args['actionfile']],
        )
        self.assertEquals(value,
            self.client.indices.get_settings(index='my_index')['my_index']['settings']['index']['routing']['allocation'][at][key])
        self.assertNotIn('routing',
            self.client.indices.get_settings(index='not_my_index')['not_my_index']['settings']['index'])

curator-5.2.0/test/integration/test_cli.py

import elasticsearch
import curator
import os
import json
import string, random, tempfile
import click
from click import testing as clicktest
from mock import patch, Mock

from . import CuratorTestCase
from . import testvars as testvars
import logging
logger = logging.getLogger(__name__)

host, port = os.environ.get('TEST_ES_SERVER', 'localhost:9200').split(':')
port = int(port) if port else 9200

class TestCLIMethods(CuratorTestCase):
    def test_bad_client_config(self):
        self.create_indices(10)
        self.write_config(
            self.args['configfile'],
            testvars.bad_client_config.format(host, port)
        )
        self.write_config(self.args['actionfile'],
            testvars.disabled_proto.format('close', 'delete_indices'))
        test = clicktest.CliRunner()
        result = test.invoke(
            curator.cli,
            ['--config', self.args['configfile'], '--dry-run', self.args['actionfile']],
        )
        self.assertEqual(-1, result.exit_code)

    def test_no_config(self):
        # This test checks whether localhost:9200 is provided if no hosts or
        # port are in the configuration. But in testing, sometimes
        # TEST_ES_SERVER is set to something other than localhost:9200. In this
        # case, the test here would fail. The if statement at the end now
        # compensates. See https://github.com/elastic/curator/issues/843
        localtest = False
        if (host == 'localhost' or host == '127.0.0.1') and \
                port == 9200:
            localtest = True
        self.create_indices(10)
        self.write_config(self.args['configfile'], ' \n')
        self.write_config(self.args['actionfile'],
            testvars.disabled_proto.format('close', 'delete_indices'))
        test = clicktest.CliRunner()
        result = test.invoke(
            curator.cli,
            ['--config', self.args['configfile'], '--dry-run', self.args['actionfile']],
        )
        if localtest:
            self.assertEqual(0, result.exit_code)
        else:
            self.assertEqual(-1, result.exit_code)

    def test_no_logging_config(self):
        self.create_indices(10)
        self.write_config(
            self.args['configfile'],
            testvars.no_logging_config.format(host, port)
        )
        self.write_config(self.args['actionfile'],
            testvars.disabled_proto.format('close', 'delete_indices'))
        test = clicktest.CliRunner()
        result = test.invoke(
            curator.cli,
            ['--config', self.args['configfile'], '--dry-run', self.args['actionfile']],
        )
        self.assertEqual(0, result.exit_code)

    def test_logging_none(self):
        self.create_indices(10)
        self.write_config(
            self.args['configfile'],
            testvars.none_logging_config.format(host, port)
        )
        self.write_config(self.args['actionfile'],
            testvars.disabled_proto.format('close', 'delete_indices'))
        test = clicktest.CliRunner()
        result = test.invoke(
            curator.cli,
            ['--config', self.args['configfile'], '--dry-run', self.args['actionfile']],
        )
        self.assertEqual(0, result.exit_code)

    def test_invalid_action(self):
        self.write_config(
            self.args['configfile'], testvars.client_config.format(host, port))
        self.write_config(self.args['actionfile'],
            testvars.optionless_proto.format('invalid_action'))
        test = clicktest.CliRunner()
        result = test.invoke(
            curator.cli,
            ['--config', self.args['configfile'], self.args['actionfile']],
        )
        self.assertEqual(-1, result.exit_code)

    def test_action_is_None(self):
        self.write_config(
            self.args['configfile'], testvars.client_config.format(host, port))
        self.write_config(self.args['actionfile'],
            testvars.optionless_proto.format(' '))
        test = clicktest.CliRunner()
        result = test.invoke(
            curator.cli,
            ['--config', self.args['configfile'], self.args['actionfile']],
        )
        self.assertEqual(
            type(curator.ConfigurationError()), type(result.exception))

    def test_no_action(self):
        self.write_config(
            self.args['configfile'], testvars.client_config.format(host, port))
        self.write_config(self.args['actionfile'], testvars.actionless_proto)
        test = clicktest.CliRunner()
        result = test.invoke(
            curator.cli,
            ['--config', self.args['configfile'], self.args['actionfile']],
        )
        self.assertEqual(
            type(curator.ConfigurationError()), type(result.exception))
    def test_dry_run(self):
        self.create_indices(10)
        self.write_config(
            self.args['configfile'], testvars.client_config.format(host, port))
        self.write_config(self.args['actionfile'],
            testvars.delete_proto.format(
                'age', 'name', 'older', '\'%Y.%m.%d\'', 'days', 5, ' ', ' ', ' '
            )
        )
        test = clicktest.CliRunner()
        result = test.invoke(
            curator.cli,
            ['--config', self.args['configfile'], '--dry-run', self.args['actionfile']],
        )
        self.assertEquals(10, len(curator.get_indices(self.client)))

    def test_action_disabled(self):
        self.create_indices(10)
        self.write_config(
            self.args['configfile'], testvars.client_config.format(host, port))
        self.write_config(self.args['actionfile'],
            testvars.disabled_proto.format('close', 'delete_indices'))
        test = clicktest.CliRunner()
        result = test.invoke(
            curator.cli,
            ['--config', self.args['configfile'], self.args['actionfile']],
        )
        self.assertEquals(0, len(curator.get_indices(self.client)))
        self.assertEqual(0, result.exit_code)

    # I'll have to think up another way to create an exception.
    # The exception that using "alias" created, a missing argument,
    # is caught too early for this to actually run the test now :/
    #
    def test_continue_if_exception(self):
        name = 'log1'
        self.create_index(name)
        self.create_index('log2')
        self.write_config(
            self.args['configfile'], testvars.client_config.format(host, port))
        self.write_config(self.args['actionfile'],
            testvars.continue_proto.format(
                name, True, 'delete_indices', False
            )
        )
        test = clicktest.CliRunner()
        result = test.invoke(
            curator.cli,
            ['--config', self.args['configfile'], self.args['actionfile']],
        )
        self.assertEquals(0, len(curator.get_indices(self.client)))
        self.assertEqual(0, result.exit_code)

    def test_continue_if_exception_False(self):
        name = 'log1'
        self.create_index(name)
        self.create_index('log2')
        self.write_config(
            self.args['configfile'], testvars.client_config.format(host, port))
        self.write_config(self.args['actionfile'],
            testvars.continue_proto.format(
                name, False, 'delete_indices', False
            )
        )
        test = clicktest.CliRunner()
        result = test.invoke(
            curator.cli,
            ['--config', self.args['configfile'], self.args['actionfile']],
        )
        self.assertEquals(2, len(curator.get_indices(self.client)))
        self.assertEqual(1, result.exit_code)

    def test_no_options_in_action(self):
        self.create_indices(10)
        self.write_config(
            self.args['configfile'], testvars.client_config.format(host, port))
        self.write_config(self.args['actionfile'],
            testvars.no_options_proto.format('delete_indices'))
        test = clicktest.CliRunner()
        result = test.invoke(
            curator.cli,
            ['--config', self.args['configfile'], '--dry-run', self.args['actionfile']],
        )
        self.assertEqual(0, result.exit_code)
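# These CLI tests rely on the exit-code conventions exercised throughout this
# suite: 0 on success (including dry runs and ignored empty lists), 1 when an
# action fails or raises, and -1 when the configuration or action file cannot
# be parsed or validated.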
curator-5.2.0/test/integration/test_close.py

import elasticsearch
import curator
import os
import json
import string, random, tempfile
import click
from click import testing as clicktest
from mock import patch, Mock

from . import CuratorTestCase
from . import testvars as testvars

import logging
logger = logging.getLogger(__name__)

host, port = os.environ.get('TEST_ES_SERVER', 'localhost:9200').split(':')
port = int(port) if port else 9200

class TestCLIClose(CuratorTestCase):
    def test_close_opened(self):
        self.write_config(
            self.args['configfile'], testvars.client_config.format(host, port))
        self.write_config(self.args['actionfile'],
            testvars.optionless_proto.format('close'))
        self.create_index('my_index')
        self.create_index('dummy')
        test = clicktest.CliRunner()
        result = test.invoke(
            curator.cli,
            ['--config', self.args['configfile'], self.args['actionfile']],
        )
        self.assertEquals(
            'close',
            self.client.cluster.state(
                index='my_index', metric='metadata',
            )['metadata']['indices']['my_index']['state']
        )
        self.assertNotEqual(
            'close',
            self.client.cluster.state(
                index='dummy', metric='metadata',
            )['metadata']['indices']['dummy']['state']
        )

    def test_close_closed(self):
        self.write_config(
            self.args['configfile'], testvars.client_config.format(host, port))
        self.write_config(self.args['actionfile'],
            testvars.optionless_proto.format('close'))
        self.create_index('my_index')
        self.client.indices.close(index='my_index', ignore_unavailable=True)
        self.create_index('dummy')
        test = clicktest.CliRunner()
        result = test.invoke(
            curator.cli,
            ['--config', self.args['configfile'], self.args['actionfile']],
        )
        self.assertEquals(
            'close',
            self.client.cluster.state(
                index='my_index', metric='metadata',
            )['metadata']['indices']['my_index']['state']
        )
        self.assertNotEqual(
            'close',
            self.client.cluster.state(
                index='dummy', metric='metadata',
            )['metadata']['indices']['dummy']['state']
        )
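    # The next test exercises the close action's delete_aliases option, which
    # strips an index's aliases before closing it -- hence the later check
    # that 'testalias' remains only on the index that was not closed.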
    def test_close_delete_aliases(self):
        # Create aliases first
        alias = 'testalias'
        index = 'my_index'
        self.create_index(index)
        self.create_index('dummy')
        self.create_index('my_other')
        self.client.indices.put_alias(index='my_index,dummy', name=alias)
        self.assertEquals(
            {
                "dummy": {"aliases": {"testalias": {}}},
                "my_index": {"aliases": {"testalias": {}}}
            },
            self.client.indices.get_alias(name=alias)
        )
        # Now close `index` with delete_aliases=True (dummy stays open)
        self.write_config(
            self.args['configfile'], testvars.client_config.format(host, port))
        self.write_config(self.args['actionfile'], testvars.close_delete_aliases)
        test = clicktest.CliRunner()
        result = test.invoke(
            curator.cli,
            ['--config', self.args['configfile'], self.args['actionfile']],
        )
        self.assertEquals(
            'close',
            self.client.cluster.state(
                index=index, metric='metadata',
            )['metadata']['indices'][index]['state']
        )
        self.assertEquals(
            'close',
            self.client.cluster.state(
                index='my_other', metric='metadata',
            )['metadata']['indices']['my_other']['state']
        )
        # Now open the indices and verify that the alias is still gone.
        self.client.indices.open(index=index)
        self.assertEquals(
            {"dummy": {"aliases": {"testalias": {}}}},
            self.client.indices.get_alias(name=alias)
        )

    def test_extra_option(self):
        self.write_config(
            self.args['configfile'], testvars.client_config.format(host, port))
        self.write_config(self.args['actionfile'],
            testvars.bad_option_proto_test.format('close'))
        self.create_index('my_index')
        self.create_index('dummy')
        test = clicktest.CliRunner()
        result = test.invoke(
            curator.cli,
            ['--config', self.args['configfile'], self.args['actionfile']],
        )
        self.assertNotEqual(
            'close',
            self.client.cluster.state(
                index='my_index', metric='metadata',
            )['metadata']['indices']['my_index']['state']
        )
        self.assertNotEqual(
            'close',
            self.client.cluster.state(
                index='dummy', metric='metadata',
            )['metadata']['indices']['dummy']['state']
        )
        self.assertEqual(-1, result.exit_code)

curator-5.2.0/test/integration/test_clusterrouting.py

import elasticsearch
import curator
import os
import json
import string, random, tempfile
import click
from click import testing as clicktest
from mock import patch, Mock

from . import CuratorTestCase
from . import testvars as testvars

import logging
logger = logging.getLogger(__name__)

host, port = os.environ.get('TEST_ES_SERVER', 'localhost:9200').split(':')
port = int(port) if port else 9200

class TestCLIClusterRouting(CuratorTestCase):
    def test_allocation_all(self):
        routing_type = 'allocation'
        value = 'all'
        self.write_config(
            self.args['configfile'], testvars.client_config.format(host, port))
        self.write_config(self.args['actionfile'],
            testvars.cluster_routing_test.format(routing_type, value))
        self.create_index('my_index')
        self.create_index('not_my_index')
        test = clicktest.CliRunner()
        result = test.invoke(
            curator.cli,
            ['--config', self.args['configfile'], self.args['actionfile']],
        )
        self.assertEquals(testvars.CRA_all, self.client.cluster.get_settings())

    def test_extra_option(self):
        self.write_config(
            self.args['configfile'], testvars.client_config.format(host, port))
        self.write_config(self.args['actionfile'],
            testvars.bad_option_proto_test.format('cluster_routing'))
        self.create_index('my_index')
        self.create_index('not_my_index')
        test = clicktest.CliRunner()
        result = test.invoke(
            curator.cli,
            ['--config', self.args['configfile'], self.args['actionfile']],
        )
        self.assertEqual(-1, result.exit_code)
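# The cluster_routing action above updates a transient cluster setting. A
# sketch of the equivalent API call for routing_type 'allocation' and value
# 'all' (assumed shape, mirroring what testvars.CRA_all asserts):
#
#   PUT _cluster/settings
#   {"transient": {"cluster.routing.allocation.enable": "all"}}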
curator-5.2.0/test/integration/test_create_index.py

import elasticsearch
import curator
import os
import json
import string, random, tempfile
import click
from click import testing as clicktest
from mock import patch, Mock

from . import CuratorTestCase
from . import testvars as testvars

import logging
logger = logging.getLogger(__name__)

host, port = os.environ.get('TEST_ES_SERVER', 'localhost:9200').split(':')
port = int(port) if port else 9200

class TestCLICreateIndex(CuratorTestCase):
    def test_plain(self):
        self.write_config(
            self.args['configfile'], testvars.client_config.format(host, port))
        self.write_config(self.args['actionfile'],
            testvars.create_index.format('testing'))
        self.assertEqual([], curator.get_indices(self.client))
        test = clicktest.CliRunner()
        result = test.invoke(
            curator.cli,
            ['--config', self.args['configfile'], self.args['actionfile']],
        )
        self.assertEqual(['testing'], curator.get_indices(self.client))

    def test_with_extra_settings(self):
        self.write_config(
            self.args['configfile'], testvars.client_config.format(host, port))
        self.write_config(self.args['actionfile'],
            testvars.create_index_with_extra_settings.format('testing'))
        self.assertEqual([], curator.get_indices(self.client))
        test = clicktest.CliRunner()
        result = test.invoke(
            curator.cli,
            ['--config', self.args['configfile'], self.args['actionfile']],
        )
        ilo = curator.IndexList(self.client)
        self.assertEqual(['testing'], ilo.indices)
        self.assertEqual(ilo.index_info['testing']['number_of_shards'], '1')
        self.assertEqual(ilo.index_info['testing']['number_of_replicas'], '0')

    def test_with_strftime(self):
        self.write_config(
            self.args['configfile'], testvars.client_config.format(host, port))
        self.write_config(self.args['actionfile'],
            testvars.create_index.format('testing-%Y.%m.%d'))
        self.assertEqual([], curator.get_indices(self.client))
        name = curator.parse_date_pattern('testing-%Y.%m.%d')
        test = clicktest.CliRunner()
        result = test.invoke(
            curator.cli,
            ['--config', self.args['configfile'], self.args['actionfile']],
        )
        self.assertEqual([name], curator.get_indices(self.client))

    def test_with_date_math(self):
        self.write_config(
            self.args['configfile'], testvars.client_config.format(host, port))
        self.write_config(self.args['actionfile'],
            testvars.create_index.format('<testing-{now/d}>'))
        self.assertEqual([], curator.get_indices(self.client))
        name = curator.parse_date_pattern('testing-%Y.%m.%d')
        test = clicktest.CliRunner()
        result = test.invoke(
            curator.cli,
            ['--config', self.args['configfile'], self.args['actionfile']],
        )
        self.assertEqual([name], curator.get_indices(self.client))

    def test_extra_option(self):
        self.write_config(
            self.args['configfile'], testvars.client_config.format(host, port))
        self.write_config(self.args['actionfile'],
            testvars.bad_option_proto_test.format('create_index'))
        test = clicktest.CliRunner()
        result = test.invoke(
            curator.cli,
            ['--config', self.args['configfile'], self.args['actionfile']],
        )
        self.assertEqual([], curator.get_indices(self.client))
        self.assertEqual(-1, result.exit_code)
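# test_with_date_math relies on Elasticsearch date-math index names: a name
# such as '<testing-{now/d}>' (restored above; the angle-bracket string had
# been stripped from this copy) resolves server-side to the same name that
# curator.parse_date_pattern('testing-%Y.%m.%d') produces client-side, which
# is why the two can be compared directly.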
curator-5.2.0/test/integration/test_delete_indices.py

import elasticsearch
import curator
import os
import json
import string, random, tempfile
import time
from click import testing as clicktest
from mock import patch, Mock
import unittest

from . import CuratorTestCase
from . import testvars as testvars

import logging
logger = logging.getLogger(__name__)

host, port = os.environ.get('TEST_ES_SERVER', 'localhost:9200').split(':')
port = int(port) if port else 9200

# '  - filtertype: {0}\n'
# '    source: {1}\n'
# '    direction: {2}\n'
# '    timestring: {3}\n'
# '    unit: {4}\n'
# '    unit_count: {5}\n'
# '    field: {6}\n'
# '    stats_result: {7}\n'
# '    epoch: {8}\n')

global_client = elasticsearch.Elasticsearch(host=host, port=port)

class TestCLIDeleteIndices(CuratorTestCase):
    def test_retention_from_name_months(self):
        # Test extraction of unit_count from index name
        # Create indices for 10 months with retention time of 2 months in index name
        # Expected: 8 oldest indices are deleted, 2 remain
        self.args['prefix'] = 'logstash_2_'
        self.args['time_unit'] = 'months'
        self.create_indices(10)
        self.write_config(
            self.args['configfile'], testvars.client_config.format(host, port))
        self.write_config(self.args['actionfile'],
            testvars.delete_pattern_proto.format(
                'age', 'name', 'older', '\'%Y.%m\'', 'months', -1,
                '_([0-9]+)_', ' ', ' ', ' '
            )
        )
        test = clicktest.CliRunner()
        result = test.invoke(
            curator.cli,
            ['--config', self.args['configfile'], self.args['actionfile']],
        )
        self.assertEquals(2, len(curator.get_indices(self.client)))

    def test_retention_from_name_days(self):
        # Test extraction of unit_count from index name
        # Create indices for 10 days with retention time of 5 days in index name
        # Expected: 5 oldest indices are deleted, 5 remain
        self.args['prefix'] = 'logstash_5_'
        self.create_indices(10)
        self.write_config(
            self.args['configfile'], testvars.client_config.format(host, port))
        self.write_config(self.args['actionfile'],
            testvars.delete_pattern_proto.format(
                'age', 'name', 'older', '\'%Y.%m.%d\'', 'days', 30,
                '_([0-9]+)_', ' ', ' ', ' '
            )
        )
        test = clicktest.CliRunner()
        result = test.invoke(
            curator.cli,
            ['--config', self.args['configfile'], self.args['actionfile']],
        )
        self.assertEquals(5, len(curator.get_indices(self.client)))

    def test_retention_from_name_days_ignore_failed_match(self):
        # Test extraction of unit_count from index name
        # Create indices for 10 days with retention time of 5 days in index name
        # Create indices for 10 days with no retention time in index name
        # Expected: 5 oldest indices are deleted, 5 remain - 10 indices
        # without retention time are ignored and remain
        self.args['prefix'] = 'logstash_5_'
        self.create_indices(10)
        self.args['prefix'] = 'logstash_'
        self.create_indices(10)
        self.write_config(
            self.args['configfile'], testvars.client_config.format(host, port))
        self.write_config(self.args['actionfile'],
            testvars.delete_pattern_proto.format(
                'age', 'name', 'older', '\'%Y.%m.%d\'', 'days', 30,
                '_([0-9]+)_', ' ', ' ', ' '
            )
        )
        test = clicktest.CliRunner()
        result = test.invoke(
            curator.cli,
            ['--config', self.args['configfile'], self.args['actionfile']],
        )
        self.assertEquals(15, len(curator.get_indices(self.client)))
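    # The retention-from-name tests use a unit_count_pattern of '_([0-9]+)_'
    # so that an index such as 'logstash_5_2017.09.01' (example name built by
    # create_indices with prefix 'logstash_5_') carries its own retention --
    # 5 days -- in its name. A unit_count of -1 means there is no fallback
    # when the pattern fails to match, while a positive unit_count serves as
    # the fallback value.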
    def test_retention_from_name_days_failed_match_with_fallback(self):
        # Test extraction of unit_count from index name
        # Create indices for 10 days with retention time of 5 days in index name
        # Create indices for 10 days with no retention time in index name
        # but configure fallback value of 7
        # Expected: 5 oldest indices are deleted, 5 remain - 7 indices without
        # retention time are ignored and remain due to the fallback value
        self.args['prefix'] = 'logstash_5_'
        self.create_indices(10)
        self.args['prefix'] = 'logstash_'
        self.create_indices(10)
        self.write_config(
            self.args['configfile'], testvars.client_config.format(host, port))
        self.write_config(self.args['actionfile'],
            testvars.delete_pattern_proto.format(
                'age', 'name', 'older', '\'%Y.%m.%d\'', 'days', 7,
                '_([0-9]+)_', ' ', ' ', ' '
            )
        )
        test = clicktest.CliRunner()
        result = test.invoke(
            curator.cli,
            ['--config', self.args['configfile'], self.args['actionfile']],
        )
        self.assertEquals(12, len(curator.get_indices(self.client)))

    def test_retention_from_name_no_capture_group(self):
        # Test extraction of unit_count from index name when pattern contains
        # no capture group
        # Create indices for 10 months with retention time of 2 months in index name
        # Expected: all indices remain as the pattern cannot be used to
        # extract a retention time
        self.args['prefix'] = 'logstash_2_'
        self.args['time_unit'] = 'months'
        self.create_indices(10)
        self.write_config(
            self.args['configfile'], testvars.client_config.format(host, port))
        self.write_config(self.args['actionfile'],
            testvars.delete_pattern_proto.format(
                'age', 'name', 'older', '\'%Y.%m\'', 'months', -1,
                '_[0-9]+_', ' ', ' ', ' '
            )
        )
        test = clicktest.CliRunner()
        result = test.invoke(
            curator.cli,
            ['--config', self.args['configfile'], self.args['actionfile']],
        )
        self.assertEquals(10, len(curator.get_indices(self.client)))

    def test_retention_from_name_illegal_regex_no_fallback(self):
        # Test extraction of unit_count from index name when pattern contains
        # an illegal regular expression
        # Create indices for 10 months with retention time of 2 months in index name
        # Expected: all indices remain as the pattern cannot be used to
        # extract a retention time
        self.args['prefix'] = 'logstash_2_'
        self.args['time_unit'] = 'months'
        self.create_indices(10)
        self.write_config(
            self.args['configfile'], testvars.client_config.format(host, port))
        self.write_config(self.args['actionfile'],
            testvars.delete_pattern_proto.format(
                'age', 'name', 'older', '\'%Y.%m\'', 'months', -1,
                '_[0-9+_', ' ', ' ', ' '
            )
        )
        test = clicktest.CliRunner()
        result = test.invoke(
            curator.cli,
            ['--config', self.args['configfile'], self.args['actionfile']],
        )
        self.assertEquals(10, len(curator.get_indices(self.client)))

    def test_retention_from_name_illegal_regex_with_fallback(self):
        # Test extraction of unit_count from index name when pattern contains
        # an illegal regular expression
        # Create indices for 10 months with retention time of 2 months in index name
        # Expected: Fallback value of 3 is used and 3 most recent indices
        # remain in place
        self.args['prefix'] = 'logstash_2_'
        self.args['time_unit'] = 'months'
        self.create_indices(10)
        self.write_config(
            self.args['configfile'], testvars.client_config.format(host, port))
        self.write_config(self.args['actionfile'],
            testvars.delete_pattern_proto.format(
                'age', 'name', 'older', '\'%Y.%m\'', 'months', 3,
                '_[0-9+_', ' ', ' ', ' '
            )
        )
        test = clicktest.CliRunner()
        result = test.invoke(
            curator.cli,
            ['--config', self.args['configfile'], self.args['actionfile']],
        )
        self.assertEquals(3, len(curator.get_indices(self.client)))

    def test_name_older_than_now(self):
        self.create_indices(10)
        self.write_config(
            self.args['configfile'], testvars.client_config.format(host, port))
        self.write_config(self.args['actionfile'],
            testvars.delete_proto.format(
                'age', 'name', 'older', '\'%Y.%m.%d\'', 'days', 5, ' ', ' ', ' '
            )
        )
        test = clicktest.CliRunner()
        result = test.invoke(
            curator.cli,
            ['--config', self.args['configfile'], self.args['actionfile']],
        )
        self.assertEquals(5, len(curator.get_indices(self.client)))
    def test_creation_date_newer_than_epoch(self):
        self.create_indices(10)
        self.write_config(
            self.args['configfile'], testvars.client_config.format(host, port))
        self.write_config(self.args['actionfile'],
            testvars.delete_proto.format(
                'age', 'creation_date', 'younger', ' ', 'seconds', 0,
                ' ', ' ', int(time.time()) - 60
            )
        )
        test = clicktest.CliRunner()
        result = test.invoke(
            curator.cli,
            ['--config', self.args['configfile'], self.args['actionfile']],
        )
        self.assertEquals(0, len(curator.get_indices(self.client)))

    def test_delete_in_period(self):
        # filtertype: {0}
        # source: {1}
        # range_from: {2}
        # range_to: {3}
        # timestring: {4}
        # unit: {5}
        # field: {6}
        # stats_result: {7}
        # epoch: {8}
        # week_starts_on: {9}
        self.create_indices(10)
        self.write_config(
            self.args['configfile'], testvars.client_config.format(host, port))
        self.write_config(self.args['actionfile'],
            testvars.delete_period_proto.format(
                'period', 'name', '-5', '-1', "'%Y.%m.%d'", 'days',
                ' ', ' ', ' ', 'monday'
            )
        )
        test = clicktest.CliRunner()
        result = test.invoke(
            curator.cli,
            ['--config', self.args['configfile'], self.args['actionfile']],
        )
        self.assertEqual(0, result.exit_code)
        self.assertEquals(5, len(curator.get_indices(self.client)))

    def test_empty_list(self):
        self.create_indices(10)
        self.write_config(
            self.args['configfile'], testvars.client_config.format(host, port))
        self.write_config(self.args['actionfile'],
            testvars.delete_proto.format(
                'age', 'creation_date', 'older', ' ', 'days', 90,
                ' ', ' ', int(time.time())
            )
        )
        test = clicktest.CliRunner()
        result = test.invoke(
            curator.cli,
            ['--config', self.args['configfile'], self.args['actionfile']],
        )
        self.assertEquals(10, len(curator.get_indices(self.client)))
        self.assertEqual(1, result.exit_code)

    def test_ignore_empty_list(self):
        self.create_indices(10)
        self.write_config(
            self.args['configfile'], testvars.client_config.format(host, port))
        self.write_config(self.args['actionfile'],
            testvars.delete_ignore_proto.format(
                'age', 'creation_date', 'older', ' ', 'days', 90,
                ' ', ' ', int(time.time())
            )
        )
        test = clicktest.CliRunner()
        result = test.invoke(
            curator.cli,
            ['--config', self.args['configfile'], self.args['actionfile']],
        )
        self.assertEquals(10, len(curator.get_indices(self.client)))
        self.assertEqual(0, result.exit_code)

    def test_extra_options(self):
        self.write_config(
            self.args['configfile'], testvars.client_config.format(host, port))
        self.write_config(self.args['actionfile'],
            testvars.bad_option_proto_test.format('delete_indices'))
        test = clicktest.CliRunner()
        result = test.invoke(
            curator.cli,
            ['--config', self.args['configfile'], self.args['actionfile']],
        )
        self.assertEqual(-1, result.exit_code)

    def test_945(self):
        self.create_indices(10)
        self.write_config(
            self.args['configfile'], testvars.client_config.format(host, port))
        self.write_config(self.args['actionfile'], testvars.test_945)
        test = clicktest.CliRunner()
        result = test.invoke(
            curator.cli,
            ['--config', self.args['configfile'], self.args['actionfile']],
        )
        self.assertEqual(-1, result.exit_code)

    def test_name_epoch_zero(self):
        self.create_index('epoch_zero-1970.01.01')
        self.write_config(
            self.args['configfile'], testvars.client_config.format(host, port))
        self.write_config(self.args['actionfile'],
            testvars.delete_proto.format(
                'age', 'name', 'older', '\'%Y.%m.%d\'', 'days', 5, ' ', ' ', ' '
            )
        )
        test = clicktest.CliRunner()
        result = test.invoke(
            curator.cli,
            ['--config', self.args['configfile'], self.args['actionfile']],
        )
        self.assertEquals(0, len(curator.get_indices(self.client)))
    def test_name_negative_epoch(self):
        self.create_index('index-1969.12.31')
        self.write_config(
            self.args['configfile'], testvars.client_config.format(host, port))
        self.write_config(self.args['actionfile'],
            testvars.delete_proto.format(
                'age', 'name', 'older', '\'%Y.%m.%d\'', 'days', 5, ' ', ' ', ' '
            )
        )
        test = clicktest.CliRunner()
        result = test.invoke(
            curator.cli,
            ['--config', self.args['configfile'], self.args['actionfile']],
        )
        self.assertEquals(0, len(curator.get_indices(self.client)))

curator-5.2.0/test/integration/test_delete_snapshots.py

import elasticsearch
import curator
import os
import json
import string, random, tempfile
import time
from click import testing as clicktest
from mock import patch, Mock

from . import CuratorTestCase
from . import testvars as testvars

import logging
logger = logging.getLogger(__name__)

host, port = os.environ.get('TEST_ES_SERVER', 'localhost:9200').split(':')
port = int(port) if port else 9200

# '    repository: {0}\n'
# '  - filtertype: {1}\n'
# '    source: {2}\n'
# '    direction: {3}\n'
# '    timestring: {4}\n'
# '    unit: {5}\n'
# '    unit_count: {6}\n'
# '    epoch: {7}\n')

class TestCLIDeleteSnapshots(CuratorTestCase):
    def test_deletesnapshot(self):
        ### Create snapshots to delete and verify them
        self.create_repository()
        timestamps = []
        for i in range(1, 4):
            self.add_docs('my_index{0}'.format(i))
            ilo = curator.IndexList(self.client)
            snap = curator.Snapshot(ilo, repository=self.args['repository'],
                name='curator-%Y%m%d%H%M%S', wait_interval=0.5)
            snap.do_action()
            snapshot = curator.get_snapshot(
                self.client, self.args['repository'], '_all')
            self.assertEqual(i, len(snapshot['snapshots']))
            time.sleep(1.0)
            timestamps.append(int(time.time()))
            time.sleep(1.0)
        ### Setup the actual delete
        self.write_config(
            self.args['configfile'], testvars.client_config.format(host, port))
        self.write_config(self.args['actionfile'],
            testvars.delete_snap_proto.format(
                self.args['repository'], 'age', 'creation_date', 'older',
                ' ', 'seconds', '0', timestamps[0]
            )
        )
        test = clicktest.CliRunner()
        result = test.invoke(
            curator.cli,
            ['--config', self.args['configfile'], self.args['actionfile']],
        )
        snapshot = curator.get_snapshot(
            self.client, self.args['repository'], '_all')
        self.assertEqual(2, len(snapshot['snapshots']))

    def test_no_repository(self):
        self.write_config(
            self.args['configfile'], testvars.client_config.format(host, port))
        self.write_config(self.args['actionfile'],
            testvars.delete_snap_proto.format(
                ' ', 'age', 'creation_date', 'older', ' ', 'seconds', '0', ' '
            )
        )
        test = clicktest.CliRunner()
        result = test.invoke(
            curator.cli,
            ['--config', self.args['configfile'], self.args['actionfile']],
        )
        self.assertEqual(-1, result.exit_code)

    def test_extra_options(self):
        self.write_config(
            self.args['configfile'], testvars.client_config.format(host, port))
        self.write_config(self.args['actionfile'],
            testvars.bad_option_proto_test.format('delete_snapshots'))
        test = clicktest.CliRunner()
        result = test.invoke(
            curator.cli,
            ['--config', self.args['configfile'], self.args['actionfile']],
        )
        self.assertEqual(-1, result.exit_code)
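# test_deletesnapshot names snapshots with the pattern 'curator-%Y%m%d%H%M%S',
# so the one-second sleeps guarantee unique snapshot names; the recorded epoch
# timestamps are then fed to the age filter's 'epoch' parameter so that only
# snapshots older than the first capture are deleted.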
curator-5.2.0/test/integration/test_envvars.py

import elasticsearch
import curator
import os
import json
import string, random, tempfile
import click
from click import testing as clicktest
from mock import patch, Mock

from . import CuratorTestCase
from . import testvars as testvars

import logging
logger = logging.getLogger(__name__)

host, port = os.environ.get('TEST_ES_SERVER', 'localhost:9200').split(':')
port = int(port) if port else 9200

def random_envvar(size):
    return ''.join(
        random.SystemRandom().choice(
            string.ascii_uppercase + string.digits
        ) for _ in range(size)
    )

class TestEnvVars(CuratorTestCase):
    def test_present(self):
        evar = random_envvar(8)
        os.environ[evar] = "1234"
        dollar = '${' + evar + '}'
        self.write_config(
            self.args['configfile'],
            testvars.client_config_envvars.format(dollar, port, 30)
        )
        cfg = curator.get_yaml(self.args['configfile'])
        self.assertEqual(cfg['client']['hosts'], os.environ.get(evar))
        del os.environ[evar]

    def test_not_present(self):
        evar = random_envvar(8)
        dollar = '${' + evar + '}'
        self.write_config(
            self.args['configfile'],
            testvars.client_config_envvars.format(dollar, port, 30)
        )
        cfg = curator.get_yaml(self.args['configfile'])
        self.assertIsNone(cfg['client']['hosts'])

    def test_not_present_with_default(self):
        evar = random_envvar(8)
        default = random_envvar(8)
        dollar = '${' + evar + ':' + default + '}'
        self.write_config(
            self.args['configfile'],
            testvars.client_config_envvars.format(dollar, port, 30)
        )
        cfg = curator.get_yaml(self.args['configfile'])
        self.assertEqual(cfg['client']['hosts'], default)

    def test_do_something_with_int_value(self):
        self.create_indices(10)
        evar = random_envvar(8)
        os.environ[evar] = "1234"
        dollar = '${' + evar + '}'
        self.write_config(
            self.args['configfile'],
            testvars.client_config_envvars.format(host, port, dollar)
        )
        cfg = curator.get_yaml(self.args['configfile'])
        self.assertEqual(cfg['client']['timeout'], os.environ.get(evar))
        self.write_config(self.args['actionfile'],
            testvars.delete_proto.format(
                'age', 'name', 'older', '\'%Y.%m.%d\'', 'days', 5, ' ', ' ', ' '
            )
        )
        test = clicktest.CliRunner()
        result = test.invoke(
            curator.cli,
            ['--config', self.args['configfile'], self.args['actionfile']],
        )
        self.assertEquals(5, len(curator.get_indices(self.client)))
        del os.environ[evar]
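# Curator's YAML loader expands environment variables, which is what these
# tests exercise: '${VARNAME}' resolves to the variable's value (or None when
# unset), and '${VARNAME:default}' falls back to 'default' when the variable
# is not present in the environment.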
curator-5.2.0/test/integration/test_es_repo_mgr.py

import elasticsearch
import curator
import os
import json
import click
import string, random, tempfile
from click import testing as clicktest
from mock import patch, Mock, MagicMock

from . import CuratorTestCase
from . import testvars as testvars

import logging
logger = logging.getLogger(__name__)

host, port = os.environ.get('TEST_ES_SERVER', 'localhost:9200').split(':')
port = int(port) if port else 9200

class TestLoggingModules(CuratorTestCase):
    def test_logger_without_null_handler(self):
        mock = Mock()
        modules = {'logger': mock, 'logger.NullHandler': mock.module}
        self.write_config(
            self.args['configfile'],
            testvars.client_conf_logfile.format(host, port, os.devnull)
        )
        with patch.dict('sys.modules', modules):
            self.create_repository()
            test = clicktest.CliRunner()
            result = test.invoke(
                curator.repo_mgr_cli,
                ['--config', self.args['configfile'], 'show']
            )
        self.assertEqual(self.args['repository'], result.output.rstrip())

class TestCLIRepositoryCreate(CuratorTestCase):
    def test_create_fs_repository_success(self):
        self.write_config(
            self.args['configfile'],
            testvars.client_conf_logfile.format(host, port, os.devnull)
        )
        test = clicktest.CliRunner()
        result = test.invoke(
            curator.repo_mgr_cli,
            [
                '--config', self.args['configfile'], 'create', 'fs',
                '--repository', self.args['repository'],
                '--location', self.args['location']
            ]
        )
        self.assertTrue(1,
            len(self.client.snapshot.get_repository(repository=self.args['repository'])))
        self.assertEqual(0, result.exit_code)

    def test_create_fs_repository_fail(self):
        self.write_config(
            self.args['configfile'],
            testvars.client_conf_logfile.format(host, port, os.devnull)
        )
        test = clicktest.CliRunner()
        result = test.invoke(
            curator.repo_mgr_cli,
            [
                '--config', self.args['configfile'], 'create', 'fs',
                '--repository', self.args['repository'],
                '--location', os.devnull
            ]
        )
        self.assertEqual(1, result.exit_code)

    def test_create_s3_repository_fail(self):
        self.write_config(
            self.args['configfile'],
            testvars.client_conf_logfile.format(host, port, os.devnull)
        )
        test = clicktest.CliRunner()
        result = test.invoke(
            curator.repo_mgr_cli,
            [
                '--config', self.args['configfile'], 'create', 's3',
                '--bucket', 'mybucket',
                '--repository', self.args['repository'],
            ]
        )
        self.assertEqual(1, result.exit_code)

class TestCLIDeleteRepository(CuratorTestCase):
    def test_delete_repository_success(self):
        self.create_repository()
        self.write_config(
            self.args['configfile'],
            testvars.client_conf_logfile.format(host, port, os.devnull)
        )
        test = clicktest.CliRunner()
        result = test.invoke(
            curator.repo_mgr_cli,
            [
                '--config', self.args['configfile'], 'delete',
                '--yes',  # This ensures no prompting will happen
                '--repository', self.args['repository']
            ]
        )
        self.assertFalse(
            curator.repository_exists(self.client, self.args['repository']))

    def test_delete_repository_notfound(self):
        self.write_config(
            self.args['configfile'],
            testvars.client_conf_logfile.format(host, port, os.devnull)
        )
        test = clicktest.CliRunner()
        result = test.invoke(
            curator.repo_mgr_cli,
            [
                '--config', self.args['configfile'], 'delete',
                '--yes',  # This ensures no prompting will happen
                '--repository', self.args['repository']
            ]
        )
        self.assertEqual(1, result.exit_code)

class TestCLIShowRepositories(CuratorTestCase):
    def test_show_repository(self):
        self.create_repository()
        self.write_config(
            self.args['configfile'],
            testvars.client_conf_logfile.format(host, port, os.devnull)
        )
        test = clicktest.CliRunner()
        result = test.invoke(
            curator.repo_mgr_cli,
            ['--config', self.args['configfile'], 'show']
        )
        self.assertEqual(self.args['repository'], result.output.rstrip())
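# repo_mgr_cli backs curator's es_repo_mgr entry point; an equivalent shell
# invocation of the successful create test above would be roughly (paths are
# illustrative):
#
#   es_repo_mgr --config curator.yml create fs \
#       --repository TEST_REPOSITORY --location /path/to/repo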
curator-5.2.0/test/integration/test_forcemerge.py

import elasticsearch
import curator
import os
import json
import string, random, tempfile
import click
from click import testing as clicktest
from mock import patch, Mock

from . import CuratorTestCase
from . import testvars as testvars

import logging
logger = logging.getLogger(__name__)

host, port = os.environ.get('TEST_ES_SERVER', 'localhost:9200').split(':')
port = int(port) if port else 9200

class TestCLIforceMerge(CuratorTestCase):
    def test_merge(self):
        count = 1
        idx = 'my_index'
        self.create_index(idx)
        self.add_docs(idx)
        ilo1 = curator.IndexList(self.client)
        ilo1._get_segmentcounts()
        self.assertEqual(3, ilo1.index_info[idx]['segments'])
        self.write_config(
            self.args['configfile'], testvars.client_config.format(host, port))
        self.write_config(self.args['actionfile'],
            testvars.forcemerge_test.format(count, 0.20))
        test = clicktest.CliRunner()
        result = test.invoke(
            curator.cli,
            ['--config', self.args['configfile'], self.args['actionfile']],
        )
        ilo2 = curator.IndexList(self.client)
        ilo2._get_segmentcounts()
        self.assertEqual(count, ilo2.index_info[idx]['segments'])

    def test_extra_option(self):
        self.write_config(
            self.args['configfile'], testvars.client_config.format(host, port))
        self.write_config(self.args['actionfile'],
            testvars.bad_option_proto_test.format('forcemerge'))
        test = clicktest.CliRunner()
        result = test.invoke(
            curator.cli,
            ['--config', self.args['configfile'], self.args['actionfile']],
        )
        self.assertEqual(-1, result.exit_code)

curator-5.2.0/test/integration/test_integrations.py

import elasticsearch
import curator
import os
import json
import string, random, tempfile
import click
from click import testing as clicktest
from mock import patch, Mock

from . import CuratorTestCase
from . import testvars as testvars

import logging
logger = logging.getLogger(__name__)

host, port = os.environ.get('TEST_ES_SERVER', 'localhost:9200').split(':')
port = int(port) if port else 9200

class TestFilters(CuratorTestCase):
    def test_filter_by_alias(self):
        alias = 'testalias'
        self.write_config(
            self.args['configfile'], testvars.client_config.format(host, port))
        self.write_config(self.args['actionfile'],
            testvars.filter_by_alias.format('testalias', False))
        self.create_index('my_index')
        self.create_index('dummy')
        self.client.indices.put_alias(index='dummy', name=alias)
        test = clicktest.CliRunner()
        result = test.invoke(
            curator.cli,
            ['--config', self.args['configfile'], self.args['actionfile']],
        )
        self.assertEquals(1, len(curator.get_indices(self.client)))

    def test_filter_by_array_of_aliases(self):
        alias = 'testalias'
        self.write_config(
            self.args['configfile'], testvars.client_config.format(host, port))
        self.write_config(self.args['actionfile'],
            testvars.filter_by_alias.format(' [ testalias, foo ]', False))
        self.create_index('my_index')
        self.create_index('dummy')
        self.client.indices.put_alias(index='dummy', name=alias)
        test = clicktest.CliRunner()
        result = test.invoke(
            curator.cli,
            ['--config', self.args['configfile'], self.args['actionfile']],
        )
        ver = curator.get_version(self.client)
        if ver >= (5, 5, 0):
            self.assertEquals(2, len(curator.get_indices(self.client)))
        else:
            self.assertEquals(1, len(curator.get_indices(self.client)))
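    # The version check above reflects a behavior change in Elasticsearch
    # 5.5.0: when a list of aliases is requested and one of them ('foo' here)
    # does not exist, 5.5+ returns no matches at all, so the filter removes
    # nothing and both indices survive; on older versions the index actually
    # carrying 'testalias' still matches and is acted upon.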
    def test_filter_by_alias_bad_aliases(self):
        alias = 'testalias'
        self.write_config(
            self.args['configfile'], testvars.client_config.format(host, port))
        self.write_config(self.args['actionfile'],
            testvars.filter_by_alias.format('{"this":"isadict"}', False))
        self.create_index('my_index')
        self.create_index('dummy')
        self.client.indices.put_alias(index='dummy', name=alias)
        test = clicktest.CliRunner()
        result = test.invoke(
            curator.cli,
            ['--config', self.args['configfile'], self.args['actionfile']],
        )
        self.assertEquals(
            type(curator.ConfigurationError()), type(result.exception))
        self.assertEquals(2, len(curator.get_indices(self.client)))

curator-5.2.0/test/integration/test_open.py

import elasticsearch
import curator
import os
import json
import string, random, tempfile
import click
from click import testing as clicktest
from mock import patch, Mock

from . import CuratorTestCase
from . import testvars as testvars

import logging
logger = logging.getLogger(__name__)

host, port = os.environ.get('TEST_ES_SERVER', 'localhost:9200').split(':')
port = int(port) if port else 9200

class TestCLIOpenClosed(CuratorTestCase):
    def test_open_closed(self):
        self.write_config(
            self.args['configfile'], testvars.client_config.format(host, port))
        self.write_config(self.args['actionfile'],
            testvars.optionless_proto.format('open'))
        self.create_index('my_index')
        self.client.indices.close(index='my_index', ignore_unavailable=True)
        self.create_index('dummy')
        test = clicktest.CliRunner()
        result = test.invoke(
            curator.cli,
            ['--config', self.args['configfile'], self.args['actionfile']],
        )
        self.assertNotEqual(
            'close',
            self.client.cluster.state(
                index='my_index', metric='metadata',
            )['metadata']['indices']['my_index']['state']
        )
        self.assertNotEqual(
            'close',
            self.client.cluster.state(
                index='dummy', metric='metadata',
            )['metadata']['indices']['dummy']['state']
        )

    def test_open_opened(self):
        self.write_config(
            self.args['configfile'], testvars.client_config.format(host, port))
        self.write_config(self.args['actionfile'],
            testvars.optionless_proto.format('open'))
        self.create_index('my_index')
        self.create_index('dummy')
        test = clicktest.CliRunner()
        result = test.invoke(
            curator.cli,
            ['--config', self.args['configfile'], self.args['actionfile']],
        )
        self.assertNotEqual(
            'close',
            self.client.cluster.state(
                index='my_index', metric='metadata',
            )['metadata']['indices']['my_index']['state']
        )
        self.assertNotEqual(
            'close',
            self.client.cluster.state(
                index='dummy', metric='metadata',
            )['metadata']['indices']['dummy']['state']
        )

    def test_extra_option(self):
        self.write_config(
            self.args['configfile'], testvars.client_config.format(host, port))
        self.write_config(self.args['actionfile'],
            testvars.bad_option_proto_test.format('open'))
        self.create_index('my_index')
        self.client.indices.close(index='my_index', ignore_unavailable=True)
        self.create_index('dummy')
        test = clicktest.CliRunner()
        result = test.invoke(
            curator.cli,
            ['--config', self.args['configfile'], self.args['actionfile']],
        )
        self.assertEquals(
            'close',
            self.client.cluster.state(
                index='my_index', metric='metadata',
            )['metadata']['indices']['my_index']['state']
        )
        self.assertNotEqual(
            'close',
            self.client.cluster.state(
                index='dummy', metric='metadata',
            )['metadata']['indices']['dummy']['state']
        )
        self.assertEqual(-1, result.exit_code)

curator-5.2.0/test/integration/test_reindex.py

import elasticsearch
import curator
import os
import json
import string, random, tempfile
import click
from click import testing as clicktest
import time

from . import CuratorTestCase
from unittest.case import SkipTest
from . import testvars as testvars
import logging
logger = logging.getLogger(__name__)

host, port = os.environ.get('TEST_ES_SERVER', 'localhost:9200').split(':')
rhost, rport = os.environ.get('REMOTE_ES_SERVER', 'localhost:9201').split(':')
port = int(port) if port else 9200
rport = int(rport) if rport else 9201

class TestCLIReindex(CuratorTestCase):
    def test_reindex_manual(self):
        wait_interval = 1
        max_wait = 3
        source = 'my_source'
        dest = 'my_dest'
        expected = 3
        self.create_index(source)
        self.add_docs(source)
        self.write_config(
            self.args['configfile'], testvars.client_config.format(host, port))
        self.write_config(self.args['actionfile'],
            testvars.reindex.format(wait_interval, max_wait, source, dest))
        test = clicktest.CliRunner()
        result = test.invoke(
            curator.cli,
            ['--config', self.args['configfile'], self.args['actionfile']],
        )
        self.assertEqual(expected, self.client.count(index=dest)['count'])

    def test_reindex_selected(self):
        wait_interval = 1
        max_wait = 3
        source = 'my_source'
        dest = 'my_dest'
        expected = 3
        self.create_index(source)
        self.add_docs(source)
        self.write_config(
            self.args['configfile'], testvars.client_config.format(host, port))
        self.write_config(self.args['actionfile'],
            testvars.reindex.format(wait_interval, max_wait,
                'REINDEX_SELECTION', dest))
        test = clicktest.CliRunner()
        result = test.invoke(
            curator.cli,
            ['--config', self.args['configfile'], self.args['actionfile']],
        )
        self.assertEqual(expected, self.client.count(index=dest)['count'])

    def test_reindex_empty_list(self):
        wait_interval = 1
        max_wait = 3
        source = 'my_source'
        dest = 'my_dest'
        expected = '.tasks'
        self.write_config(
            self.args['configfile'], testvars.client_config.format(host, port))
        self.write_config(self.args['actionfile'],
            testvars.reindex.format(wait_interval, max_wait, source, dest))
        test = clicktest.CliRunner()
        result = test.invoke(
            curator.cli,
            ['--config', self.args['configfile'], self.args['actionfile']],
        )
        self.assertEqual(expected, curator.get_indices(self.client)[0])

    def test_reindex_selected_many_to_one(self):
        wait_interval = 1
        max_wait = 3
        source1 = 'my_source1'
        source2 = 'my_source2'
        dest = 'my_dest'
        expected = 6
        self.create_index(source1)
        self.add_docs(source1)
        self.create_index(source2)
        for i in ["4", "5", "6"]:
            self.client.create(
                index=source2, doc_type='log', id=i,
                body={"doc" + i: 'TEST DOCUMENT'},
            )
            self.client.indices.flush(index=source2, force=True)
        self.write_config(
            self.args['configfile'], testvars.client_config.format(host, port))
        self.write_config(self.args['actionfile'],
            testvars.reindex.format(wait_interval, max_wait,
                'REINDEX_SELECTION', dest))
        test = clicktest.CliRunner()
        result = test.invoke(
            curator.cli,
            ['--config', self.args['configfile'], self.args['actionfile']],
        )
        self.assertEqual(expected, self.client.count(index=dest)['count'])
    def test_reindex_from_remote(self):
        wait_interval = 1
        max_wait = 3
        source1 = 'my_source1'
        source2 = 'my_source2'
        prefix = 'my_'
        dest = 'my_dest'
        expected = 6
        # Build remote client
        try:
            rclient = curator.get_client(
                host=rhost, port=rport, skip_version_test=True)
            rclient.info()
        except:
            raise SkipTest(
                'Unable to connect to host at {0}:{1}'.format(rhost, rport))
        # Build indices remotely.
        counter = 0
        for rindex in [source1, source2]:
            rclient.indices.create(index=rindex)
            for i in range(0, 3):
                rclient.create(
                    index=rindex, doc_type='log', id=str(counter + 1),
                    body={"doc" + str(counter + i): 'TEST DOCUMENT'},
                )
                counter += 1
            rclient.indices.flush(index=rindex, force=True)
        self.write_config(
            self.args['configfile'], testvars.client_config.format(host, port))
        self.write_config(self.args['actionfile'],
            testvars.remote_reindex.format(
                wait_interval, max_wait,
                'http://{0}:{1}'.format(rhost, rport),
                'REINDEX_SELECTION', dest, prefix
            )
        )
        test = clicktest.CliRunner()
        result = test.invoke(
            curator.cli,
            ['--config', self.args['configfile'], self.args['actionfile']],
        )
        # Do our own cleanup here.
        rclient.indices.delete(index='{0},{1}'.format(source1, source2))
        self.assertEqual(expected, self.client.count(index=dest)['count'])

    def test_reindex_migrate_from_remote(self):
        wait_interval = 1
        max_wait = 3
        source1 = 'my_source1'
        source2 = 'my_source2'
        prefix = 'my_'
        dest = 'MIGRATION'
        expected = 3
        # Build remote client
        try:
            rclient = curator.get_client(
                host=rhost, port=rport, skip_version_test=True)
            rclient.info()
        except:
            raise SkipTest(
                'Unable to connect to host at {0}:{1}'.format(rhost, rport))
        # Build indices remotely.
        counter = 0
        for rindex in [source1, source2]:
            rclient.indices.create(index=rindex)
            for i in range(0, 3):
                rclient.create(
                    index=rindex, doc_type='log', id=str(counter + 1),
                    body={"doc" + str(counter + i): 'TEST DOCUMENT'},
                )
                counter += 1
            rclient.indices.flush(index=rindex, force=True)
        self.write_config(
            self.args['configfile'], testvars.client_config.format(host, port))
        self.write_config(self.args['actionfile'],
            testvars.remote_reindex.format(
                wait_interval, max_wait,
                'http://{0}:{1}'.format(rhost, rport),
                'REINDEX_SELECTION', dest, prefix
            )
        )
        test = clicktest.CliRunner()
        result = test.invoke(
            curator.cli,
            ['--config', self.args['configfile'], self.args['actionfile']],
        )
        # Do our own cleanup here.
        rclient.indices.delete(index='{0},{1}'.format(source1, source2))
        # And now the neat trick of verifying that the reindex worked to both
        # indices, and they preserved their names
        self.assertEqual(expected, self.client.count(index=source1)['count'])
        self.assertEqual(expected, self.client.count(index=source2)['count'])
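    # The remote reindex tests drive Elasticsearch's _reindex API with a
    # remote source. A sketch of the request body ultimately submitted (shape
    # per the ES reindex API; host and index names here are illustrative):
    #
    #   {
    #     "source": {"remote": {"host": "http://otherhost:9201"},
    #                "index": "my_source1"},
    #     "dest": {"index": "my_dest"}
    #   }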
rclient.indices.delete(index='{0},{1}'.format(source1, source2)) # And now the neat trick of verifying that the reindex worked to both # indices, and they preserved their names self.assertEqual(expected, self.client.count(index='{0}{1}{2}'.format(mpfx,source1,msfx))['count']) self.assertEqual(expected, self.client.count(index='{0}{1}{2}'.format(mpfx,source2,msfx))['count']) def test_reindex_from_remote_no_connection(self): wait_interval = 1 max_wait = 3 bad_port = 70000 dest = 'my_dest' expected = 1 self.write_config( self.args['configfile'], testvars.client_config.format(host, port)) self.write_config(self.args['actionfile'], testvars.remote_reindex.format( wait_interval, max_wait, 'http://{0}:{1}'.format(rhost, bad_port), 'REINDEX_SELECTION', dest, 'my_' ) ) test = clicktest.CliRunner() result = test.invoke( curator.cli, [ '--config', self.args['configfile'], self.args['actionfile'] ], ) self.assertEqual(expected, result.exit_code) def test_reindex_from_remote_no_indices(self): wait_interval = 1 max_wait = 3 source1 = 'wrong1' source2 = 'wrong2' prefix = 'my_' dest = 'my_dest' expected = 1 # Build remote client try: rclient = curator.get_client( host=rhost, port=rport, skip_version_test=True) rclient.info() except: raise SkipTest( 'Unable to connect to host at {0}:{1}'.format(rhost, rport)) # Build indices remotely. counter = 0 for rindex in [source1, source2]: rclient.indices.create(index=rindex) for i in range(0, 3): rclient.create( index=rindex, doc_type='log', id=str(counter+1), body={"doc" + str(counter+i) :'TEST DOCUMENT'}, ) counter += 1 rclient.indices.flush(index=rindex, force=True) self.write_config( self.args['configfile'], testvars.client_config.format(host, port)) self.write_config(self.args['actionfile'], testvars.remote_reindex.format( wait_interval, max_wait, 'http://{0}:{1}'.format(rhost, rport), 'REINDEX_SELECTION', dest, prefix ) ) test = clicktest.CliRunner() result = test.invoke( curator.cli, [ '--config', self.args['configfile'], self.args['actionfile'] ], ) # Do our own cleanup here.
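# Exit code 1 is expected here: the remote filter prefix 'my_' matches
# neither 'wrong1' nor 'wrong2', so the REINDEX_SELECTION list comes back
# empty and the action fails rather than reindexing nothing.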
rclient.indices.delete(index='{0},{1}'.format(source1, source2)) self.assertEqual(expected, result.exit_code) def test_reindex_into_alias(self): wait_interval = 1 max_wait = 3 source = 'my_source' dest = 'my_dest' expected = 3 alias_body = { 'aliases' : { dest : {} } } self.client.indices.create(index='dummy', body=alias_body) self.add_docs(source) self.write_config( self.args['configfile'], testvars.client_config.format(host, port)) self.write_config(self.args['actionfile'], testvars.reindex.format(wait_interval, max_wait, source, dest)) test = clicktest.CliRunner() result = test.invoke( curator.cli, [ '--config', self.args['configfile'], self.args['actionfile'] ], ) self.assertEqual(expected, self.client.count(index=dest)['count']) def test_reindex_manual_date_math(self): wait_interval = 1 max_wait = 3 source = '<source-{now/d}>' dest = '<mydest-{now/d}>' expected = 3 self.create_index(source) self.add_docs(source) self.write_config( self.args['configfile'], testvars.client_config.format(host, port)) self.write_config(self.args['actionfile'], testvars.reindex.format(wait_interval, max_wait, source, dest)) test = clicktest.CliRunner() result = test.invoke( curator.cli, [ '--config', self.args['configfile'], self.args['actionfile'] ], ) self.assertEqual(expected, self.client.count(index=dest)['count'])curator-5.2.0/test/integration/test_replicas.py000066400000000000000000000045611315226075300217240ustar00rootroot00000000000000import elasticsearch import curator import os import json import string, random, tempfile import click from click import testing as clicktest from mock import patch, Mock from . import CuratorTestCase from . import testvars as testvars import logging logger = logging.getLogger(__name__) host, port = os.environ.get('TEST_ES_SERVER', 'localhost:9200').split(':') port = int(port) if port else 9200 class TestCLIReplicas(CuratorTestCase): def test_increase_count(self): count = 2 idx = 'my_index' self.write_config( self.args['configfile'], testvars.client_config.format(host, port)) self.write_config(self.args['actionfile'], testvars.replicas_test.format(count)) self.create_index(idx) test = clicktest.CliRunner() result = test.invoke( curator.cli, [ '--config', self.args['configfile'], self.args['actionfile'] ], ) self.assertEqual( count, int(self.client.indices.get_settings( index=idx)[idx]['settings']['index']['number_of_replicas']) ) def test_no_count(self): self.create_index('foo') self.write_config( self.args['configfile'], testvars.client_config.format(host, port)) self.write_config(self.args['actionfile'], testvars.replicas_test.format(' ')) test = clicktest.CliRunner() result = test.invoke( curator.cli, [ '--config', self.args['configfile'], self.args['actionfile'] ], ) self.assertEqual(-1, result.exit_code) def test_extra_option(self): self.create_index('foo') self.write_config( self.args['configfile'], testvars.client_config.format(host, port)) self.write_config(self.args['actionfile'], testvars.bad_option_proto_test.format('replicas')) test = clicktest.CliRunner() result = test.invoke( curator.cli, [ '--config', self.args['configfile'], self.args['actionfile'] ], ) self.assertEqual(-1, result.exit_code) curator-5.2.0/test/integration/test_restore.py000066400000000000000000000145141315226075300216040ustar00rootroot00000000000000import elasticsearch import curator import os import time import json import string, random, tempfile from click import testing as clicktest from mock import patch, Mock from . import CuratorTestCase from . 
import testvars as testvars import logging logger = logging.getLogger(__name__) host, port = os.environ.get('TEST_ES_SERVER', 'localhost:9200').split(':') port = int(port) if port else 9200 # ' repository: {0}\n' # ' name: {1}\n' # ' indices: {2}\n' # ' include_aliases: {3}\n' # ' ignore_unavailable: {4}\n' # ' include_global_state: {5}\n' # ' partial: {6}\n' # ' rename_pattern: {7}\n' # ' rename_replacement: {8}\n' # ' extra_settings: {9}\n' # ' wait_for_completion: {10}\n' # ' skip_repo_fs_check: {11}\n' # ' timeout_override: {12}\n' # ' wait_interval: {13}\n' # ' max_wait: {14}\n' class TestCLIRestore(CuratorTestCase): def test_restore(self): indices = [] for i in range(1,4): self.add_docs('my_index{0}'.format(i)) indices.append('my_index{0}'.format(i)) snap_name = 'snapshot1' self.create_snapshot(snap_name, ','.join(indices)) snapshot = curator.get_snapshot( self.client, self.args['repository'], '_all' ) self.assertEqual(1, len(snapshot['snapshots'])) self.client.indices.delete(','.join(indices)) self.assertEqual([], curator.get_indices(self.client)) self.write_config( self.args['configfile'], testvars.client_config.format(host, port)) self.write_config(self.args['actionfile'], testvars.restore_snapshot_proto.format( self.args['repository'], snap_name, indices, False, False, True, False, ' ', ' ', ' ', True, False, 301, 1, 3 ) ) test = clicktest.CliRunner() result = test.invoke( curator.cli, [ '--config', self.args['configfile'], self.args['actionfile'] ], ) restored_indices = sorted(curator.get_indices(self.client)) self.assertEqual(indices, restored_indices) # The test runs so fast that it tries to execute the cleanup step # and delete the repository before Elasticsearch is actually ready time.sleep(0.5) def test_restore_with_rename(self): indices = [] for i in range(1,4): self.add_docs('my_index{0}'.format(i)) indices.append('my_index{0}'.format(i)) snap_name = 'snapshot1' self.create_snapshot(snap_name, ','.join(indices)) snapshot = curator.get_snapshot( self.client, self.args['repository'], '_all' ) time.sleep(1) self.assertEqual(1, len(snapshot['snapshots'])) self.client.indices.delete(','.join(indices)) self.assertEqual([], curator.get_indices(self.client)) self.write_config( self.args['configfile'], testvars.client_config.format(host, port)) self.write_config(self.args['actionfile'], testvars.restore_snapshot_proto.format( self.args['repository'], snap_name, indices, False, False, True, False, 'my_index(.+)', 'new_index$1', ' ', True, False, 301, 1, 3, ) ) test = clicktest.CliRunner() result = test.invoke( curator.cli, [ '--config', self.args['configfile'], self.args['actionfile'] ], ) time.sleep(1) restored_indices = sorted(curator.get_indices(self.client)) self.assertEqual( ['new_index1', 'new_index2', 'new_index3'], restored_indices ) # The test runs so fast that it tries to execute the cleanup step # and delete the repository before Elasticsearch is actually ready time.sleep(1) def test_restore_wildcard(self): indices = [] my_indices = [] wildcard = ['my_*'] for i in range(1,4): for prefix in ['my_', 'not_my_']: self.add_docs('{0}index{1}'.format(prefix, i)) indices.append('{0}index{1}'.format(prefix, i)) if prefix == 'my_': my_indices.append('{0}index{1}'.format(prefix, i)) snap_name = 'snapshot1' self.create_snapshot(snap_name, ','.join(indices)) snapshot = curator.get_snapshot( self.client, self.args['repository'], '_all' ) self.assertEqual(1, len(snapshot['snapshots'])) self.client.indices.delete(','.join(indices)) self.assertEqual([], curator.get_indices(self.client)) 
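# For reference, the write_config call below fills the numbered fields of
# testvars.restore_snapshot_proto (mapped in the comment block at the top
# of this file); here the key values are, roughly:
#   indices: ['my_*']          (the wildcard list built above)
#   wait_interval: 1, max_wait: 3
# so only the snapshotted indices matching my_* should be restored.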
self.write_config( self.args['configfile'], testvars.client_config.format(host, port)) self.write_config(self.args['actionfile'], testvars.restore_snapshot_proto.format( self.args['repository'], snap_name, wildcard, False, False, True, False, ' ', ' ', ' ', True, False, 301, 1, 3 ) ) test = clicktest.CliRunner() result = test.invoke( curator.cli, [ '--config', self.args['configfile'], self.args['actionfile'] ], ) restored_indices = sorted(curator.get_indices(self.client)) self.assertEqual(my_indices, restored_indices) # The test runs so fast that it tries to execute the cleanup step # and delete the repository before Elasticsearch is actually ready time.sleep(0.5)curator-5.2.0/test/integration/test_rollover.py000066400000000000000000000335201315226075300217630ustar00rootroot00000000000000import elasticsearch import curator import os import json import string, random, tempfile import click from click import testing as clicktest import time from . import CuratorTestCase from . import testvars as testvars import logging logger = logging.getLogger(__name__) host, port = os.environ.get('TEST_ES_SERVER', 'localhost:9200').split(':') port = int(port) if port else 9200 class TestCLIRollover(CuratorTestCase): def test_max_age_true(self): oldindex = 'rolltome-000001' newindex = 'rolltome-000002' alias = 'delamitri' condition = 'max_age' value = '1s' expected = {newindex: {u'aliases': {alias: {}}}} self.client.indices.create( index=oldindex, body={ 'aliases': { alias: {} } } ) time.sleep(1) self.write_config( self.args['configfile'], testvars.client_config.format(host, port)) self.write_config(self.args['actionfile'], testvars.rollover_one.format(alias, condition, value)) test = clicktest.CliRunner() result = test.invoke( curator.cli, [ '--config', self.args['configfile'], self.args['actionfile'] ], ) self.assertEqual(expected, self.client.indices.get_alias(name=alias)) def test_max_age_false(self): oldindex = 'rolltome-000001' newindex = 'rolltome-000002' alias = 'delamitri' condition = 'max_age' value = '10s' expected = {oldindex: {u'aliases': {alias: {}}}} self.client.indices.create( index=oldindex, body={ 'aliases': { alias: {} } } ) time.sleep(1) self.write_config( self.args['configfile'], testvars.client_config.format(host, port)) self.write_config(self.args['actionfile'], testvars.rollover_one.format(alias, condition, value)) test = clicktest.CliRunner() result = test.invoke( curator.cli, [ '--config', self.args['configfile'], self.args['actionfile'] ], ) self.assertEqual(expected, self.client.indices.get_alias(name=alias)) def test_max_docs_true(self): oldindex = 'rolltome-000001' newindex = 'rolltome-000002' alias = 'delamitri' condition = 'max_docs' value = '2' expected = {newindex: {u'aliases': {alias: {}}}} self.client.indices.create( index=oldindex, body={ 'aliases': { alias: {} } } ) self.add_docs(oldindex) self.write_config( self.args['configfile'], testvars.client_config.format(host, port)) self.write_config(self.args['actionfile'], testvars.rollover_one.format(alias, condition, value)) test = clicktest.CliRunner() result = test.invoke( curator.cli, [ '--config', self.args['configfile'], self.args['actionfile'] ], ) self.assertEqual(expected, self.client.indices.get_alias(name=alias)) def test_max_docs_false(self): oldindex = 'rolltome-000001' newindex = 'rolltome-000002' alias = 'delamitri' condition = 'max_docs' value = '5' expected = {oldindex: {u'aliases': {alias: {}}}} self.client.indices.create( index=oldindex, body={ 'aliases': { alias: {} } } ) self.add_docs(oldindex) 
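# add_docs indexes three documents, so with max_docs set to '5' the
# rollover condition is not met and the alias should remain on
# rolltome-000001, which is what 'expected' above encodes.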
self.write_config( self.args['configfile'], testvars.client_config.format(host, port)) self.write_config(self.args['actionfile'], testvars.rollover_one.format(alias, condition, value)) test = clicktest.CliRunner() result = test.invoke( curator.cli, [ '--config', self.args['configfile'], self.args['actionfile'] ], ) self.assertEqual(expected, self.client.indices.get_alias(name=alias)) def test_conditions_both_false(self): oldindex = 'rolltome-000001' newindex = 'rolltome-000002' alias = 'delamitri' max_age = '10s' max_docs = '5' expected = {oldindex: {u'aliases': {alias: {}}}} self.client.indices.create( index=oldindex, body={ 'aliases': { alias: {} } } ) self.add_docs(oldindex) self.write_config( self.args['configfile'], testvars.client_config.format(host, port)) self.write_config(self.args['actionfile'], testvars.rollover_both.format(alias, max_age, max_docs)) test = clicktest.CliRunner() result = test.invoke( curator.cli, [ '--config', self.args['configfile'], self.args['actionfile'] ], ) self.assertEqual(expected, self.client.indices.get_alias(name=alias)) def test_conditions_both_true(self): oldindex = 'rolltome-000001' newindex = 'rolltome-000002' alias = 'delamitri' max_age = '1s' max_docs = '2' expected = {newindex: {u'aliases': {alias: {}}}} self.client.indices.create( index=oldindex, body={ 'aliases': { alias: {} } } ) time.sleep(1) self.add_docs(oldindex) self.write_config( self.args['configfile'], testvars.client_config.format(host, port)) self.write_config(self.args['actionfile'], testvars.rollover_both.format(alias, max_age, max_docs)) test = clicktest.CliRunner() result = test.invoke( curator.cli, [ '--config', self.args['configfile'], self.args['actionfile'] ], ) self.assertEqual(expected, self.client.indices.get_alias(name=alias)) def test_conditions_one_false_one_true(self): oldindex = 'rolltome-000001' newindex = 'rolltome-000002' alias = 'delamitri' max_age = '10s' max_docs = '2' expected = {newindex: {u'aliases': {alias: {}}}} self.client.indices.create( index=oldindex, body={ 'aliases': { alias: {} } } ) self.add_docs(oldindex) self.write_config( self.args['configfile'], testvars.client_config.format(host, port)) self.write_config(self.args['actionfile'], testvars.rollover_both.format(alias, max_age, max_docs)) test = clicktest.CliRunner() result = test.invoke( curator.cli, [ '--config', self.args['configfile'], self.args['actionfile'] ], ) self.assertEqual(expected, self.client.indices.get_alias(name=alias)) def test_conditions_one_empty_one_true(self): oldindex = 'rolltome-000001' newindex = 'rolltome-000002' alias = 'delamitri' max_age = ' ' max_docs = '2' expected = {oldindex: {u'aliases': {alias: {}}}} self.client.indices.create( index=oldindex, body={ 'aliases': { alias: {} } } ) self.add_docs(oldindex) self.write_config( self.args['configfile'], testvars.client_config.format(host, port)) 
self.write_config(self.args['actionfile'], testvars.rollover_both.format(alias, max_age, max_docs)) test = clicktest.CliRunner() result = test.invoke( curator.cli, [ '--config', self.args['configfile'], self.args['actionfile'] ], ) self.assertEqual(expected, self.client.indices.get_alias(name=alias)) self.assertEqual(-1, result.exit_code) def test_bad_settings(self): oldindex = 'rolltome-000001' newindex = 'rolltome-000002' alias = 'delamitri' max_age = '10s' max_docs = '2' expected = {oldindex: {u'aliases': {alias: {}}}} self.client.indices.create( index=oldindex, body={ 'aliases': { alias: {} } } ) self.add_docs(oldindex) self.write_config( self.args['configfile'], testvars.client_config.format(host, port)) self.write_config(self.args['actionfile'], testvars.rollover_bad_settings.format(alias, max_age, max_docs)) test = clicktest.CliRunner() result = test.invoke( curator.cli, [ '--config', self.args['configfile'], self.args['actionfile'] ], ) self.assertEqual(expected, self.client.indices.get_alias(name=alias)) self.assertEqual(1, result.exit_code) def test_extra_option(self): self.write_config( self.args['configfile'], testvars.client_config.format(host, port)) self.write_config(self.args['actionfile'], testvars.bad_option_proto_test.format('rollover')) test = clicktest.CliRunner() result = test.invoke( curator.cli, [ '--config', self.args['configfile'], self.args['actionfile'] ], ) self.assertEqual([], curator.get_indices(self.client)) self.assertEqual(-1, result.exit_code) def test_max_age_with_new_name(self): oldindex = 'rolltome-000001' newindex = 'crazy_test' alias = 'delamitri' condition = 'max_age' value = '1s' expected = {newindex: {u'aliases': {alias: {}}}} self.client.indices.create( index=oldindex, body={ 'aliases': { alias: {} } } ) time.sleep(1) self.write_config( self.args['configfile'], testvars.client_config.format(host, port)) self.write_config(self.args['actionfile'], testvars.rollover_with_name.format(alias, condition, value, newindex)) test = clicktest.CliRunner() result = test.invoke( curator.cli, [ '--config', self.args['configfile'], self.args['actionfile'] ], ) self.assertEqual(expected, self.client.indices.get_alias(name=alias)) def test_max_age_with_new_name_with_date(self): oldindex = 'rolltome-000001' newindex = 'crazy_test-%Y.%m.%d' alias = 'delamitri' condition = 'max_age' value = '1s' expected = {curator.parse_date_pattern(newindex): {u'aliases': {alias: {}}}} self.client.indices.create( index=oldindex, body={ 'aliases': { alias: {} } } ) time.sleep(1) self.write_config( self.args['configfile'], testvars.client_config.format(host, port)) self.write_config(self.args['actionfile'], testvars.rollover_with_name.format(alias, condition, value, newindex)) test = clicktest.CliRunner() result = test.invoke( curator.cli, [ '--config', self.args['configfile'], self.args['actionfile'] ], ) self.assertEqual(expected, self.client.indices.get_alias(name=alias)) def test_max_age_old_index_with_date_with_new_index(self): oldindex = 'crazy_test-2017.01.01' newindex = 'crazy_test-%Y.%m.%d' alias = 'delamitri' condition = 'max_age' value = '1s' expected = {"%s" % curator.parse_date_pattern(newindex): {u'aliases': {alias: {}}}} self.client.indices.create( index=oldindex, body={ 'aliases': { alias: {} } } ) time.sleep(1) self.write_config( self.args['configfile'], testvars.client_config.format(host, port)) self.write_config(self.args['actionfile'], testvars.rollover_with_name.format(alias, condition, value, newindex)) test = clicktest.CliRunner() result = test.invoke( 
curator.cli, [ '--config', self.args['configfile'], self.args['actionfile'] ], ) self.assertEqual(expected, self.client.indices.get_alias(name=alias)) curator-5.2.0/test/integration/test_snapshot.py000066400000000000000000000104471315226075300217610ustar00rootroot00000000000000import elasticsearch import curator import os import json import string, random, tempfile from click import testing as clicktest from mock import patch, Mock from . import CuratorTestCase from . import testvars as testvars import logging logger = logging.getLogger(__name__) host, port = os.environ.get('TEST_ES_SERVER', 'localhost:9200').split(':') port = int(port) if port else 9200 class TestCLISnapshot(CuratorTestCase): def test_snapshot(self): self.create_indices(5) self.create_repository() snap_name = 'snapshot1' self.write_config( self.args['configfile'], testvars.client_config.format(host, port)) self.write_config(self.args['actionfile'], testvars.snapshot_test.format(self.args['repository'], snap_name, 1, 30)) test = clicktest.CliRunner() result = test.invoke( curator.cli, [ '--config', self.args['configfile'], self.args['actionfile'] ], ) snapshot = curator.get_snapshot( self.client, self.args['repository'], '_all' ) self.assertEqual(1, len(snapshot['snapshots'])) self.assertEqual(snap_name, snapshot['snapshots'][0]['snapshot']) def test_snapshot_ignore_empty_list(self): self.create_indices(5) self.create_repository() snap_name = 'snapshot1' self.write_config( self.args['configfile'], testvars.client_config.format(host, port)) self.write_config(self.args['actionfile'], testvars.test_682.format(self.args['repository'], snap_name, True, 1, 30)) test = clicktest.CliRunner() result = test.invoke( curator.cli, [ '--config', self.args['configfile'], self.args['actionfile'] ], ) snapshot = curator.get_snapshot( self.client, self.args['repository'], '_all' ) self.assertEqual(0, len(snapshot['snapshots'])) self.assertEqual(0, len(curator.get_indices(self.client))) def test_snapshot_do_not_ignore_empty_list(self): self.create_indices(5) self.create_repository() snap_name = 'snapshot1' self.write_config( self.args['configfile'], testvars.client_config.format(host, port)) self.write_config(self.args['actionfile'], testvars.test_682.format(self.args['repository'], snap_name, False, 1, 30)) test = clicktest.CliRunner() result = test.invoke( curator.cli, [ '--config', self.args['configfile'], self.args['actionfile'] ], ) snapshot = curator.get_snapshot( self.client, self.args['repository'], '_all' ) self.assertEqual(0, len(snapshot['snapshots'])) self.assertEqual(5, len(curator.get_indices(self.client))) def test_no_repository(self): self.create_indices(5) self.write_config( self.args['configfile'], testvars.client_config.format(host, port)) self.write_config(self.args['actionfile'], testvars.snapshot_test.format(' ', 'snap_name', 1, 30)) test = clicktest.CliRunner() result = test.invoke( curator.cli, [ '--config', self.args['configfile'], self.args['actionfile'] ], ) self.assertEqual(-1, result.exit_code) def test_extra_option(self): self.create_indices(5) self.write_config( self.args['configfile'], testvars.client_config.format(host, port)) self.write_config(self.args['actionfile'], testvars.bad_option_proto_test.format('snapshot')) test = clicktest.CliRunner() result = test.invoke( curator.cli, [ '--config', self.args['configfile'], self.args['actionfile'] ], ) self.assertEqual(-1, result.exit_code) 
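# A minimal sketch of the action file these tests write, once
# testvars.snapshot_test is formatted (the repository name comes from the
# test fixture, so it is shown as a placeholder here):
#
#   actions:
#     1:
#       action: snapshot
#       options:
#         repository: <fixture repository>
#         name: snapshot1
#         wait_interval: 1
#         max_wait: 30
#       filters:
#       - filtertype: none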
curator-5.2.0/test/integration/testvars.py000066400000000000000000000474161315226075300207440ustar00rootroot00000000000000import os, sys client_config = ('---\n' 'client:\n' ' hosts: {0}\n' ' port: {1}\n' ' url_prefix:\n' ' use_ssl: False\n' ' certificate:\n' ' client_cert:\n' ' client_key:\n' ' ssl_no_validate: False\n' ' http_auth: \n' ' timeout: 30\n' ' master_only: False\n' '\n' 'logging:\n' ' loglevel: DEBUG\n' ' logfile:\n' ' logformat: default\n' ' blacklist: []\n') client_conf_logfile = ('---\n' 'client:\n' ' hosts: {0}\n' ' port: {1}\n' '\n' 'logging:\n' ' loglevel: DEBUG\n' ' logfile: {2}\n') client_config_envvars = ('---\n' 'client:\n' ' hosts: {0}\n' ' port: {1}\n' ' timeout: {2}\n' ' master_only: False\n' '\n' 'logging:\n' ' loglevel: DEBUG\n' ' logfile:\n' ' logformat: default\n' ' blacklist: []\n') bad_client_config = ('---\n' 'misspelled:\n' ' hosts: {0}\n' ' port: {1}\n' ' url_prefix:\n' ' use_ssl: False\n' ' certificate:\n' ' client_cert:\n' ' client_key:\n' ' ssl_no_validate: False\n' ' http_auth: \n' ' timeout: 30\n' ' master_only: False\n') no_logging_config = ('---\n' 'client:\n' ' hosts: {0}\n' ' port: {1}\n' ' url_prefix:\n' ' use_ssl: False\n' ' certificate:\n' ' client_cert:\n' ' client_key:\n' ' ssl_no_validate: False\n' ' http_auth: \n' ' timeout: 30\n' ' master_only: False\n') none_logging_config = ('---\n' 'client:\n' ' hosts: {0}\n' ' port: {1}\n' ' url_prefix:\n' ' use_ssl: False\n' ' certificate:\n' ' client_cert:\n' ' client_key:\n' ' ssl_no_validate: False\n' ' http_auth: \n' ' timeout: 30\n' ' master_only: False\n' '\n' 'logging: \n' '\n') alias_add_only = ('---\n' 'actions:\n' ' 1:\n' ' description: "Add all indices to specified alias"\n' ' action: alias\n' ' options:\n' ' name: {0}\n' ' continue_if_exception: False\n' ' disable_action: False\n' ' add:\n' ' filters:\n' ' - filtertype: none\n') alias_add_only_with_extra_settings = ('---\n' 'actions:\n' ' 1:\n' ' description: "Add all indices to specified alias"\n' ' action: alias\n' ' options:\n' ' name: {0}\n' ' extra_settings:\n' ' filter:\n' ' term:\n' ' user: kimchy\n' ' continue_if_exception: False\n' ' disable_action: False\n' ' add:\n' ' filters:\n' ' - filtertype: none\n') alias_remove_only = ('---\n' 'actions:\n' ' 1:\n' ' description: "Remove all indices from specified alias"\n' ' action: alias\n' ' options:\n' ' name: {0}\n' ' continue_if_exception: False\n' ' disable_action: False\n' ' remove:\n' ' filters:\n' ' - filtertype: none\n') alias_add_remove = ('---\n' 'actions:\n' ' 1:\n' ' description: "Add/remove specified indices from designated alias"\n' ' action: alias\n' ' options:\n' ' name: {0}\n' ' continue_if_exception: False\n' ' disable_action: False\n' ' remove:\n' ' filters:\n' ' - filtertype: pattern\n' ' kind: prefix\n' ' value: du\n' ' add:\n' ' filters:\n' ' - filtertype: pattern\n' ' kind: prefix\n' ' value: my\n') alias_remove_index_not_there = ('---\n' 'actions:\n' ' 1:\n' ' description: "Add/remove specified indices from designated alias"\n' ' action: alias\n' ' options:\n' ' name: {0}\n' ' continue_if_exception: False\n' ' disable_action: False\n' ' remove:\n' ' filters:\n' ' - filtertype: pattern\n' ' kind: prefix\n' ' value: {1}\n') alias_add_with_empty_remove = ('---\n' 'actions:\n' ' 1:\n' ' description: "Add/remove specified indices from designated alias"\n' ' action: alias\n' ' options:\n' ' name: {0}\n' ' warn_if_no_indices: True\n' ' continue_if_exception: False\n' ' disable_action: False\n' ' remove:\n' ' filters:\n' ' - filtertype: pattern\n' ' kind: prefix\n' ' 
value: insertrickrollhere\n' ' add:\n' ' filters:\n' ' - filtertype: pattern\n' ' kind: prefix\n' ' value: my\n') alias_remove_with_empty_add = ('---\n' 'actions:\n' ' 1:\n' ' description: "Add/remove specified indices from designated alias"\n' ' action: alias\n' ' options:\n' ' name: {0}\n' ' warn_if_no_indices: True\n' ' continue_if_exception: False\n' ' disable_action: False\n' ' remove:\n' ' filters:\n' ' - filtertype: pattern\n' ' kind: prefix\n' ' value: du\n' ' add:\n' ' filters:\n' ' - filtertype: pattern\n' ' kind: prefix\n' ' value: insertrickrollhere\n') alias_add_remove_empty = ('---\n' 'actions:\n' ' 1:\n' ' description: "Add/remove specified indices from designated alias"\n' ' action: alias\n' ' options:\n' ' name: {0}\n' ' continue_if_exception: False\n' ' disable_action: False\n' ' remove:\n' ' filters:\n' ' - filtertype: pattern\n' ' kind: prefix\n' ' value: {1}\n' ' add:\n' ' filters:\n' ' - filtertype: pattern\n' ' kind: prefix\n' ' value: {2}\n') alias_no_add_remove = ('---\n' 'actions:\n' ' 1:\n' ' description: "No add or remove should raise an exception"\n' ' action: alias\n' ' options:\n' ' name: {0}\n' ' continue_if_exception: False\n' ' disable_action: False\n') alias_no_alias = ('---\n' 'actions:\n' ' 1:\n' ' description: "Removing alias from options should result in an exception"\n' ' action: alias\n' ' options:\n' ' continue_if_exception: False\n' ' disable_action: False\n' ' remove:\n' ' filters:\n' ' - filtertype: pattern\n' ' kind: prefix\n' ' value: du\n' ' add:\n' ' filters:\n' ' - filtertype: pattern\n' ' kind: prefix\n' ' value: my\n') allocation_test = ('---\n' 'actions:\n' ' 1:\n' ' description: "Allocate by key/value/allocation_type"\n' ' action: allocation\n' ' options:\n' ' key: {0}\n' ' value: {1}\n' ' allocation_type: {2}\n' ' wait_for_completion: {3}\n' ' wait_interval: 1\n' ' max_wait: -1\n' ' continue_if_exception: False\n' ' disable_action: False\n' ' filters:\n' ' - filtertype: pattern\n' ' kind: prefix\n' ' value: my\n') cluster_routing_test = ('---\n' 'actions:\n' ' 1:\n' ' description: "Alter cluster routing by routing_type/value"\n' ' action: cluster_routing\n' ' options:\n' ' routing_type: {0}\n' ' value: {1}\n' ' setting: enable\n') optionless_proto = ('---\n' 'actions:\n' ' 1:\n' ' description: "Act on indices as filtered"\n' ' action: {0}\n' ' options:\n' ' continue_if_exception: False\n' ' disable_action: False\n' ' filters:\n' ' - filtertype: pattern\n' ' kind: prefix\n' ' value: my\n') no_options_proto = ('---\n' 'actions:\n' ' 1:\n' ' description: "Act on indices as filtered"\n' ' action: {0}\n' ' filters:\n' ' - filtertype: pattern\n' ' kind: prefix\n' ' value: my\n') actionless_proto = ('---\n' 'actions:\n' ' 1:\n' ' options:\n' ' continue_if_exception: False\n' ' disable_action: False\n') disabled_proto = ('---\n' 'actions:\n' ' 1:\n' ' description: "Act on indices as filtered"\n' ' action: {0}\n' ' options:\n' ' continue_if_exception: False\n' ' ignore_empty_list: True\n' ' disable_action: True\n' ' filters:\n' ' - filtertype: pattern\n' ' kind: prefix\n' ' value: my\n' ' 2:\n' ' description: "Act on indices as filtered"\n' ' action: {1}\n' ' options:\n' ' continue_if_exception: False\n' ' ignore_empty_list: True\n' ' disable_action: False\n' ' filters:\n' ' - filtertype: pattern\n' ' kind: prefix\n' ' value: log\n') continue_proto = ('---\n' 'actions:\n' ' 1:\n' ' description: "Create named index"\n' ' action: create_index\n' ' options:\n' ' name: {0}\n' ' continue_if_exception: {1}\n' ' disable_action: False\n' ' 2:\n' ' 
description: "Act on indices as filtered"\n' ' action: {2}\n' ' options:\n' ' continue_if_exception: {3}\n' ' disable_action: False\n' ' filters:\n' ' - filtertype: pattern\n' ' kind: prefix\n' ' value: log\n') close_delete_aliases = ('---\n' 'actions:\n' ' 1:\n' ' description: "Close indices as filtered"\n' ' action: close\n' ' options:\n' ' delete_aliases: True\n' ' continue_if_exception: False\n' ' disable_action: False\n' ' filters:\n' ' - filtertype: pattern\n' ' kind: prefix\n' ' value: my\n') delete_proto = ('---\n' 'actions:\n' ' 1:\n' ' description: "Delete indices as filtered"\n' ' action: delete_indices\n' ' options:\n' ' continue_if_exception: False\n' ' disable_action: False\n' ' filters:\n' ' - filtertype: {0}\n' ' source: {1}\n' ' direction: {2}\n' ' timestring: {3}\n' ' unit: {4}\n' ' unit_count: {5}\n' ' field: {6}\n' ' stats_result: {7}\n' ' epoch: {8}\n') delete_pattern_proto = ('---\n' 'actions:\n' ' 1:\n' ' description: "Delete indices as filtered"\n' ' action: delete_indices\n' ' options:\n' ' continue_if_exception: False\n' ' disable_action: False\n' ' filters:\n' ' - filtertype: {0}\n' ' source: {1}\n' ' direction: {2}\n' ' timestring: {3}\n' ' unit: {4}\n' ' unit_count: {5}\n' ' unit_count_pattern: {6}\n') delete_period_proto = ('---\n' 'actions:\n' ' 1:\n' ' description: "Delete indices as filtered"\n' ' action: delete_indices\n' ' options:\n' ' continue_if_exception: False\n' ' disable_action: False\n' ' filters:\n' ' - filtertype: {0}\n' ' source: {1}\n' ' range_from: {2}\n' ' range_to: {3}\n' ' timestring: {4}\n' ' unit: {5}\n' ' field: {6}\n' ' stats_result: {7}\n' ' epoch: {8}\n' ' week_starts_on: {9}\n') delete_ignore_proto = ('---\n' 'actions:\n' ' 1:\n' ' description: "Delete indices as filtered"\n' ' action: delete_indices\n' ' options:\n' ' ignore_empty_list: True\n' ' continue_if_exception: False\n' ' disable_action: False\n' ' filters:\n' ' - filtertype: {0}\n' ' source: {1}\n' ' direction: {2}\n' ' timestring: {3}\n' ' unit: {4}\n' ' unit_count: {5}\n' ' field: {6}\n' ' stats_result: {7}\n' ' epoch: {8}\n') filter_by_alias = ('---\n' 'actions:\n' ' 1:\n' ' description: "Delete indices as filtered"\n' ' action: delete_indices\n' ' options:\n' ' ignore_empty_list: True\n' ' continue_if_exception: False\n' ' disable_action: False\n' ' filters:\n' ' - filtertype: alias\n' ' aliases: {0}\n' ' exclude: {1}\n') bad_option_proto_test = ('---\n' 'actions:\n' ' 1:\n' ' description: "Should raise exception due to extra option"\n' ' action: {0}\n' ' options:\n' ' invalid: this_should_not_be_here\n' ' continue_if_exception: False\n' ' disable_action: False\n' ' filters:\n' ' - filtertype: none\n') replicas_test = ('---\n' 'actions:\n' ' 1:\n' ' description: "Increase replica count to provided value"\n' ' action: replicas\n' ' options:\n' ' count: {0}\n' ' continue_if_exception: False\n' ' disable_action: False\n' ' filters:\n' ' - filtertype: pattern\n' ' kind: prefix\n' ' value: my\n') forcemerge_test = ('---\n' 'actions:\n' ' 1:\n' ' description: "forceMerge segment count per shard to provided value with optional delay"\n' ' action: forcemerge\n' ' options:\n' ' max_num_segments: {0}\n' ' delay: {1}\n' ' timeout_override: 300\n' ' continue_if_exception: False\n' ' disable_action: False\n' ' filters:\n' ' - filtertype: pattern\n' ' kind: prefix\n' ' value: my\n') snapshot_test = ('---\n' 'actions:\n' ' 1:\n' ' description: "Snapshot selected indices"\n' ' action: snapshot\n' ' options:\n' ' repository: {0}\n' ' name: {1}\n' ' wait_interval: {2}\n' ' max_wait: 
{3}\n' ' continue_if_exception: False\n' ' disable_action: False\n' ' filters:\n' ' - filtertype: none\n') delete_snap_proto = ('---\n' 'actions:\n' ' 1:\n' ' description: "Delete snapshots as filtered"\n' ' action: delete_snapshots\n' ' options:\n' ' repository: {0}\n' ' continue_if_exception: False\n' ' disable_action: False\n' ' filters:\n' ' - filtertype: {1}\n' ' source: {2}\n' ' direction: {3}\n' ' timestring: {4}\n' ' unit: {5}\n' ' unit_count: {6}\n' ' epoch: {7}\n') create_index = ('---\n' 'actions:\n' ' 1:\n' ' description: "Create index as named"\n' ' action: create_index\n' ' options:\n' ' name: {0}\n' ' continue_if_exception: False\n' ' disable_action: False\n') create_index_with_extra_settings = ('---\n' 'actions:\n' ' 1:\n' ' description: "Create index as named with extra settings"\n' ' action: create_index\n' ' options:\n' ' name: {0}\n' ' extra_settings:\n' ' number_of_shards: 1\n' ' number_of_replicas: 0\n' ' continue_if_exception: False\n' ' disable_action: False\n') restore_snapshot_proto = ('---\n' 'actions:\n' ' 1:\n' ' description: Restore snapshot as configured\n' ' action: restore\n' ' options:\n' ' repository: {0}\n' ' name: {1}\n' ' indices: {2}\n' ' include_aliases: {3}\n' ' ignore_unavailable: {4}\n' ' include_global_state: {5}\n' ' partial: {6}\n' ' rename_pattern: {7}\n' ' rename_replacement: {8}\n' ' extra_settings: {9}\n' ' wait_for_completion: {10}\n' ' skip_repo_fs_check: {11}\n' ' timeout_override: {12}\n' ' wait_interval: {13}\n' ' max_wait: {14}\n' ' continue_if_exception: False\n' ' disable_action: False\n' ' filters:\n' ' - filtertype: none\n') test_687 = ('---\n' 'actions:\n' ' 1:\n' ' action: snapshot\n' ' description: >-\n' ' Create a snapshot with the last week index.\n' ' options:\n' ' repository: {0}\n' ' name: {1}\n' ' ignore_unavailable: False\n' ' include_global_state: True\n' ' partial: False\n' ' wait_for_completion: True\n' ' skip_repo_fs_check: False\n' ' continue_if_exception: False\n' ' disable_action: False\n' ' filters:\n' ' - filtertype: pattern\n' ' kind: prefix\n' ' value: logstash-\n' ' exclude:\n' ' - filtertype: age\n' ' source: creation_date\n' ' direction: younger\n' ' epoch: 1467020729\n' ' unit: weeks\n' ' unit_count: 2\n' ' exclude:\n' ' - filtertype: age\n' ' source: creation_date\n' ' direction: younger\n' ' epoch: 1467020729\n' ' unit: weeks\n' ' unit_count: 1\n' ' exclude: True\n' ' 2:\n' ' action: delete_indices\n' ' description: >-\n' ' Remove indices starting with logstash- older than 5 weeks\n' ' options:\n' ' ignore_empty_list: True\n' ' continue_if_exception: False\n' ' disable_action: False\n' ' filters:\n' ' - filtertype: pattern\n' ' kind: prefix\n' ' value: logstash-\n' ' exclude:\n' ' - filtertype: age\n' ' source: creation_date\n' ' epoch: 1467020729\n' ' direction: older\n' ' unit: weeks\n' ' unit_count: 5\n' ' exclude:\n') test_682 = ('---\n' 'actions:\n' ' 1:\n' ' description: "Snapshot selected indices"\n' ' action: snapshot\n' ' options:\n' ' repository: {0}\n' ' name: {1}\n' ' ignore_empty_list: {2}\n' ' wait_interval: {3}\n' ' max_wait: {4}\n' ' continue_if_exception: False\n' ' disable_action: False\n' ' filters:\n' ' - filtertype: pattern\n' ' kind: prefix\n' ' value: notlogstash-\n' ' exclude:\n' ' 2:\n' ' description: "Delete selected indices"\n' ' action: delete_indices\n' ' options:\n' ' continue_if_exception: False\n' ' disable_action: False\n' ' filters:\n' ' - filtertype: pattern\n' ' kind: prefix\n' ' value: logstash-\n' ' exclude:\n') CRA_all = { u'persistent':{}, 
u'transient':{u'cluster':{u'routing':{u'allocation':{u'enable':u'all'}}}} } rollover_one = ('---\n' 'actions:\n' ' 1:\n' ' description: "Rollover selected alias/index"\n' ' action: rollover\n' ' options:\n' ' name: {0}\n' ' conditions: \n' ' {1}: {2}\n' ' extra_settings:\n' ' index.number_of_shards: 1\n' ' index.number_of_replicas: 0\n') rollover_both = ('---\n' 'actions:\n' ' 1:\n' ' description: "Rollover selected alias/index"\n' ' action: rollover\n' ' options:\n' ' name: {0}\n' ' conditions: \n' ' max_age: {1}\n' ' max_docs: {2}\n' ' extra_settings:\n' ' index.number_of_shards: 1\n' ' index.number_of_replicas: 0\n') rollover_bad_settings = ('---\n' 'actions:\n' ' 1:\n' ' description: "Rollover selected alias/index"\n' ' action: rollover\n' ' options:\n' ' name: {0}\n' ' conditions: \n' ' max_age: {1}\n' ' max_docs: {2}\n' ' extra_settings:\n' ' foo: 1\n' ' bar: 0\n') rollover_with_name = ('---\n' 'actions:\n' ' 1:\n' ' description: "Rollover selected alias/index"\n' ' action: rollover\n' ' options:\n' ' name: {0}\n' ' conditions: \n' ' {1}: {2}\n' ' new_index: {3}\n' ' extra_settings:\n' ' index.number_of_shards: 1\n' ' index.number_of_replicas: 0\n') reindex = ('---\n' 'actions:\n' ' 1:\n' ' description: "Reindex"\n' ' action: reindex\n' ' options:\n' ' wait_interval: {0}\n' ' max_wait: {1}\n' ' request_body:\n' ' source:\n' ' index: {2}\n' ' dest:\n' ' index: {3}\n' ' filters:\n' ' - filtertype: none\n') remote_reindex = ('---\n' 'actions:\n' ' 1:\n' ' description: "Reindex from remote"\n' ' action: reindex\n' ' options:\n' ' wait_interval: {0}\n' ' max_wait: {1}\n' ' request_body:\n' ' source:\n' ' remote:\n' ' host: {2}\n' ' index: {3}\n' ' dest:\n' ' index: {4}\n' ' remote_filters:\n' ' - filtertype: pattern\n' ' kind: prefix\n' ' value: {5}\n' ' filters:\n' ' - filtertype: none\n') migration_reindex = ('---\n' 'actions:\n' ' 1:\n' ' description: "Reindex from remote"\n' ' action: reindex\n' ' options:\n' ' wait_interval: {0}\n' ' max_wait: {1}\n' ' migration_prefix: {2}\n' ' migration_suffix: {3}\n' ' request_body:\n' ' source:\n' ' remote:\n' ' host: {4}\n' ' index: {5}\n' ' dest:\n' ' index: {6}\n' ' remote_filters:\n' ' - filtertype: pattern\n' ' kind: prefix\n' ' value: {7}\n' ' filters:\n' ' - filtertype: none\n') test_945 = ('---\n' 'actions:\n' ' 1:\n' ' action: delete_indices\n' ' description: >-\n' ' Delete indices older than 7 days\n' ' options:\n' ' continue_if_exception: False\n' ' disable_action: False\n' ' filters:\n' ' - filtertype: pattern\n' ' kind: prefix\n' ' value: logstash-\n' ' exclude:\n' ' - filtertype: age\n' ' source: name\n' ' direction: older\n' ' unit: days\n' ' unit_count: 7\n') index_settings = ('---\n' 'actions:\n' ' 1:\n' ' description: "Act on indices as filtered"\n' ' action: index_settings\n' ' options:\n' ' index_settings:\n' ' index:\n' ' {0}: {1}\n' ' ignore_unavailable: {2}\n' ' preserve_existing: {3}\n' ' filters:\n' ' - filtertype: pattern\n' ' kind: prefix\n' ' value: my\n')curator-5.2.0/test/run_tests.py000077500000000000000000000013051315226075300165620ustar00rootroot00000000000000#!/usr/bin/env python from __future__ import print_function import sys from os.path import dirname, abspath import nose def run_all(argv=None): sys.exitfunc = lambda: sys.stderr.write('Shutting down....\n') # always insert coverage when running tests through setup.py if argv is None: argv = [ 'nosetests', '--with-xunit', '--logging-format=%(levelname)s %(name)22s %(funcName)22s:%(lineno)-4d %(message)s', '--with-xcoverage', '--cover-package=curator', 
'--cover-erase', '--verbose', ] nose.run_exit( argv=argv, defaultTest=abspath(dirname(__file__)) ) if __name__ == '__main__': run_all(sys.argv) curator-5.2.0/test/unit/000077500000000000000000000000001315226075300151375ustar00rootroot00000000000000curator-5.2.0/test/unit/__init__.py000066400000000000000000000031401315226075300172460ustar00rootroot00000000000000import os import shutil import tempfile import random import string from unittest import SkipTest, TestCase from mock import Mock from .testvars import * class CLITestCase(TestCase): def setUp(self): super(CLITestCase, self).setUp() self.args = {} dirname = ''.join(random.choice(string.ascii_uppercase + string.digits) for _ in range(8)) ymlname = ''.join(random.choice(string.ascii_uppercase + string.digits) for _ in range(8)) badyaml = ''.join(random.choice(string.ascii_uppercase + string.digits) for _ in range(8)) # This will create a pseudo-random temporary directory on the machine # which runs the unit tests, but NOT on the machine where elasticsearch # is running. This means tests may fail if run against remote instances # unless you explicitly set `self.args['location']` to a proper spot # on the target machine. self.args['tmpdir'] = tempfile.mkdtemp(suffix=dirname) if not os.path.exists(self.args['tmpdir']): os.makedirs(self.args['tmpdir']) self.args['yamlfile'] = os.path.join(self.args['tmpdir'], ymlname) self.args['invalid_yaml'] = os.path.join(self.args['tmpdir'], badyaml) self.args['no_file_here'] = os.path.join(self.args['tmpdir'], 'not_created') with open(self.args['yamlfile'], 'w') as f: f.write(testvars.yamlconfig) with open(self.args['invalid_yaml'], 'w') as f: f.write('gobbledeygook: @failhere\n') def tearDown(self): if os.path.exists(self.args['tmpdir']): shutil.rmtree(self.args['tmpdir']) curator-5.2.0/test/unit/test_action_alias.py000066400000000000000000000143461315226075300212060ustar00rootroot00000000000000from unittest import TestCase from mock import Mock, patch import elasticsearch import curator # Get test variables and constants from a single source from . 
import testvars as testvars class TestActionAlias(TestCase): def test_init_raise(self): self.assertRaises(curator.MissingArgument, curator.Alias) def test_add_raises_on_missing_parameter(self): client = Mock() client.info.return_value = {'version': {'number': '5.0.0'} } client.indices.get_settings.return_value = testvars.settings_one client.cluster.state.return_value = testvars.clu_state_one client.indices.stats.return_value = testvars.stats_one ilo = curator.IndexList(client) ao = curator.Alias(name='alias') self.assertRaises(TypeError, ao.add) def test_add_raises_on_invalid_parameter(self): client = Mock() client.info.return_value = {'version': {'number': '5.0.0'} } client.indices.get_settings.return_value = testvars.settings_one client.cluster.state.return_value = testvars.clu_state_one client.indices.stats.return_value = testvars.stats_one ilo = curator.IndexList(client) ao = curator.Alias(name='alias') self.assertRaises(TypeError, ao.add, []) def test_add_single(self): client = Mock() client.info.return_value = {'version': {'number': '5.0.0'} } client.indices.get_settings.return_value = testvars.settings_one client.cluster.state.return_value = testvars.clu_state_one client.indices.stats.return_value = testvars.stats_one ilo = curator.IndexList(client) ao = curator.Alias(name='alias') ao.add(ilo) self.assertEqual(testvars.alias_one_add, ao.actions) def test_add_single_with_extra_settings(self): client = Mock() client.info.return_value = {'version': {'number': '5.0.0'} } client.indices.get_settings.return_value = testvars.settings_one client.cluster.state.return_value = testvars.clu_state_one client.indices.stats.return_value = testvars.stats_one ilo = curator.IndexList(client) esd = { 'filter' : { 'term' : { 'user' : 'kimchy' } } } ao = curator.Alias(name='alias', extra_settings=esd) ao.add(ilo) self.assertEqual(testvars.alias_one_add_with_extras, ao.actions) def test_remove_single(self): client = Mock() client.info.return_value = {'version': {'number': '5.0.0'} } client.indices.get_settings.return_value = testvars.settings_one client.cluster.state.return_value = testvars.clu_state_one client.indices.stats.return_value = testvars.stats_one client.indices.get_alias.return_value = testvars.settings_1_get_aliases ilo = curator.IndexList(client) ao = curator.Alias(name='my_alias') ao.remove(ilo) self.assertEqual(testvars.alias_one_rm, ao.actions) def test_add_multiple(self): client = Mock() client.info.return_value = {'version': {'number': '5.0.0'} } client.indices.get_settings.return_value = testvars.settings_two client.cluster.state.return_value = testvars.clu_state_two client.indices.stats.return_value = testvars.stats_two ilo = curator.IndexList(client) ao = curator.Alias(name='alias') ao.add(ilo) cmp = sorted(ao.actions, key=lambda k: k['add']['index']) self.assertEqual(testvars.alias_two_add, cmp) def test_remove_multiple(self): client = Mock() client.info.return_value = {'version': {'number': '5.0.0'} } client.indices.get_settings.return_value = testvars.settings_two client.cluster.state.return_value = testvars.clu_state_two client.indices.stats.return_value = testvars.stats_two client.indices.get_alias.return_value = testvars.settings_2_get_aliases ilo = curator.IndexList(client) ao = curator.Alias(name='my_alias') ao.remove(ilo) cmp = sorted(ao.actions, key=lambda k: k['remove']['index']) self.assertEqual(testvars.alias_two_rm, cmp) def test_raise_on_empty_body(self): client = Mock() client.info.return_value = {'version': {'number': '5.0.0'} } 
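# As throughout these unit tests, indices.get_settings, cluster.state and
# indices.stats are stubbed on the mock client because the IndexList
# constructor calls all three to build its working view of the indices.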
client.indices.get_settings.return_value = testvars.settings_one client.cluster.state.return_value = testvars.clu_state_one client.indices.stats.return_value = testvars.stats_one ilo = curator.IndexList(client) ao = curator.Alias(name='alias') self.assertRaises(curator.ActionError, ao.body) def test_do_dry_run(self): client = Mock() client.info.return_value = {'version': {'number': '5.0.0'} } client.indices.get_settings.return_value = testvars.settings_one client.cluster.state.return_value = testvars.clu_state_one client.indices.stats.return_value = testvars.stats_one client.indices.update_aliases.return_value = testvars.alias_success ilo = curator.IndexList(client) ao = curator.Alias(name='alias') ao.add(ilo) self.assertIsNone(ao.do_dry_run()) def test_do_action(self): client = Mock() client.info.return_value = {'version': {'number': '5.0.0'} } client.indices.get_settings.return_value = testvars.settings_one client.cluster.state.return_value = testvars.clu_state_one client.indices.stats.return_value = testvars.stats_one client.indices.update_aliases.return_value = testvars.alias_success ilo = curator.IndexList(client) ao = curator.Alias(name='alias') ao.add(ilo) self.assertIsNone(ao.do_action()) def test_do_action_raises_exception(self): client = Mock() client.info.return_value = {'version': {'number': '5.0.0'} } client.indices.get_settings.return_value = testvars.settings_one client.cluster.state.return_value = testvars.clu_state_one client.indices.stats.return_value = testvars.stats_one client.indices.update_aliases.return_value = testvars.alias_success client.indices.update_aliases.side_effect = testvars.four_oh_one ilo = curator.IndexList(client) ao = curator.Alias(name='alias') ao.add(ilo) self.assertRaises(curator.FailedExecution, ao.do_action) curator-5.2.0/test/unit/test_action_allocation.py000066400000000000000000000123271315226075300222370ustar00rootroot00000000000000from unittest import TestCase from mock import Mock, patch import elasticsearch import curator # Get test variables and constants from a single source from . 
import testvars as testvars class TestActionAllocation(TestCase): def test_init_raise(self): self.assertRaises(TypeError, curator.Allocation, 'invalid') def test_init(self): client = Mock() client.info.return_value = {'version': {'number': '5.0.0'} } client.indices.get_settings.return_value = testvars.settings_one client.cluster.state.return_value = testvars.clu_state_one client.indices.stats.return_value = testvars.stats_one ilo = curator.IndexList(client) ao = curator.Allocation(ilo, key='key', value='value') self.assertEqual(ilo, ao.index_list) self.assertEqual(client, ao.client) def test_create_body_no_key(self): client = Mock() client.info.return_value = {'version': {'number': '5.0.0'} } client.indices.get_settings.return_value = testvars.settings_one client.cluster.state.return_value = testvars.clu_state_one client.indices.stats.return_value = testvars.stats_one ilo = curator.IndexList(client) self.assertRaises(curator.MissingArgument, curator.Allocation, ilo) def test_create_body_invalid_allocation_type(self): client = Mock() client.info.return_value = {'version': {'number': '5.0.0'} } client.indices.get_settings.return_value = testvars.settings_one client.cluster.state.return_value = testvars.clu_state_one client.indices.stats.return_value = testvars.stats_one ilo = curator.IndexList(client) self.assertRaises( ValueError, curator.Allocation, ilo, key='key', value='value', allocation_type='invalid' ) def test_create_body_valid(self): client = Mock() client.info.return_value = {'version': {'number': '5.0.0'} } client.indices.get_settings.return_value = testvars.settings_one client.cluster.state.return_value = testvars.clu_state_one client.indices.stats.return_value = testvars.stats_one ilo = curator.IndexList(client) ao = curator.Allocation(ilo, key='key', value='value') self.assertEqual({'index.routing.allocation.require.key': 'value'}, ao.body) def test_do_action_raise_on_put_settings(self): client = Mock() client.info.return_value = {'version': {'number': '5.0.0'} } client.indices.get_settings.return_value = testvars.settings_one client.cluster.state.return_value = testvars.clu_state_one client.indices.stats.return_value = testvars.stats_one client.indices.put_settings.return_value = None client.indices.put_settings.side_effect = testvars.fake_fail ilo = curator.IndexList(client) ao = curator.Allocation(ilo, key='key', value='value') self.assertRaises(Exception, ao.do_action) def test_do_dry_run(self): client = Mock() client.info.return_value = {'version': {'number': '5.0.0'} } client.indices.get_settings.return_value = testvars.settings_one client.cluster.state.return_value = testvars.clu_state_one client.indices.stats.return_value = testvars.stats_one client.indices.put_settings.return_value = None ilo = curator.IndexList(client) ao = curator.Allocation(ilo, key='key', value='value') self.assertIsNone(ao.do_dry_run()) def test_do_action(self): client = Mock() client.info.return_value = {'version': {'number': '5.0.0'} } client.indices.get_settings.return_value = testvars.settings_one client.cluster.state.return_value = testvars.clu_state_one client.indices.stats.return_value = testvars.stats_one client.indices.put_settings.return_value = None ilo = curator.IndexList(client) ao = curator.Allocation(ilo, key='key', value='value') self.assertIsNone(ao.do_action()) def test_do_action_wait_v50(self): client = Mock() client.info.return_value = {'version': {'number': '5.0.0'} } client.indices.get_settings.return_value = testvars.settings_one client.cluster.state.return_value = 
testvars.clu_state_one client.indices.stats.return_value = testvars.stats_one client.indices.put_settings.return_value = None client.cluster.health.return_value = {'relocating_shards':0} ilo = curator.IndexList(client) ao = curator.Allocation( ilo, key='key', value='value', wait_for_completion=True) self.assertIsNone(ao.do_action()) def test_do_action_wait_v51(self): client = Mock() client.info.return_value = {'version': {'number': '5.1.1'} } client.indices.get_settings.return_value = testvars.settings_one client.cluster.state.return_value = testvars.clu_state_one client.indices.stats.return_value = testvars.stats_one client.indices.put_settings.return_value = None client.cluster.health.return_value = {'relocating_shards':0} ilo = curator.IndexList(client) ao = curator.Allocation( ilo, key='key', value='value', wait_for_completion=True) self.assertIsNone(ao.do_action()) curator-5.2.0/test/unit/test_action_close.py000066400000000000000000000074321315226075300212200ustar00rootroot00000000000000from unittest import TestCase from mock import Mock, patch import elasticsearch import curator # Get test variables and constants from a single source from . import testvars as testvars class TestActionClose(TestCase): def test_init_raise(self): self.assertRaises(TypeError, curator.Close, 'invalid') def test_init(self): client = Mock() client.info.return_value = {'version': {'number': '5.0.0'} } client.indices.get_settings.return_value = testvars.settings_one client.cluster.state.return_value = testvars.clu_state_one client.indices.stats.return_value = testvars.stats_one ilo = curator.IndexList(client) co = curator.Close(ilo) self.assertEqual(ilo, co.index_list) self.assertEqual(client, co.client) def test_do_dry_run(self): client = Mock() client.info.return_value = {'version': {'number': '5.0.0'} } client.indices.get_settings.return_value = testvars.settings_one client.cluster.state.return_value = testvars.clu_state_one client.indices.stats.return_value = testvars.stats_one client.indices.flush_synced.return_value = testvars.synced_pass client.indices.close.return_value = None ilo = curator.IndexList(client) co = curator.Close(ilo) self.assertIsNone(co.do_dry_run()) def test_do_action(self): client = Mock() client.info.return_value = {'version': {'number': '5.0.0'} } client.indices.get_settings.return_value = testvars.settings_one client.cluster.state.return_value = testvars.clu_state_one client.indices.stats.return_value = testvars.stats_one client.indices.flush_synced.return_value = testvars.synced_pass client.indices.close.return_value = None ilo = curator.IndexList(client) co = curator.Close(ilo) self.assertIsNone(co.do_action()) def test_do_action_with_delete_aliases(self): client = Mock() client.info.return_value = {'version': {'number': '5.0.0'} } client.indices.get_settings.return_value = testvars.settings_one client.cluster.state.return_value = testvars.clu_state_one client.indices.stats.return_value = testvars.stats_one client.indices.flush_synced.return_value = testvars.synced_pass client.indices.close.return_value = None ilo = curator.IndexList(client) co = curator.Close(ilo, delete_aliases=True) self.assertIsNone(co.do_action()) def test_do_action_raises_exception(self): client = Mock() client.info.return_value = {'version': {'number': '5.0.0'} } client.indices.get_settings.return_value = testvars.settings_one client.cluster.state.return_value = testvars.clu_state_one client.indices.stats.return_value = testvars.stats_one client.indices.flush_synced.return_value = testvars.synced_pass 
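# The Close action attempts a synced flush before closing, so flush_synced
# is stubbed to succeed and the failure is injected on indices.close itself
# via the side_effect set below.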
client.indices.close.return_value = None client.indices.close.side_effect = testvars.fake_fail ilo = curator.IndexList(client) co = curator.Close(ilo) self.assertRaises(curator.FailedExecution, co.do_action) def test_do_action_delete_aliases_with_exception(self): client = Mock() client.info.return_value = {'version': {'number': '5.0.0'} } client.indices.get_settings.return_value = testvars.settings_one client.cluster.state.return_value = testvars.clu_state_one client.indices.stats.return_value = testvars.stats_one client.indices.flush_synced.return_value = testvars.synced_pass client.indices.close.return_value = None ilo = curator.IndexList(client) client.indices.delete_alias.side_effect = testvars.fake_fail co = curator.Close(ilo, delete_aliases=True) self.assertIsNone(co.do_action()) curator-5.2.0/test/unit/test_action_clusterrouting.py000066400000000000000000000045441315226075300232050ustar00rootroot00000000000000from unittest import TestCase from mock import Mock, patch import elasticsearch import curator # Get test variables and constants from a single source from . import testvars as testvars class TestActionAllocation(TestCase): def test_bad_client(self): self.assertRaises(TypeError, curator.ClusterRouting, 'invalid') def test_bad_setting(self): client = Mock() self.assertRaises( ValueError, curator.ClusterRouting, client, setting='invalid' ) def test_bad_routing_type(self): client = Mock() self.assertRaises( ValueError, curator.ClusterRouting, client, routing_type='invalid', setting='enable' ) def test_bad_value_with_allocation(self): client = Mock() self.assertRaises( ValueError, curator.ClusterRouting, client, routing_type='allocation', setting='enable', value='invalid' ) def test_bad_value_with_rebalance(self): client = Mock() self.assertRaises( ValueError, curator.ClusterRouting, client, routing_type='rebalance', setting='enable', value='invalid' ) def test_do_dry_run(self): client = Mock() cro = curator.ClusterRouting( client, routing_type='allocation', setting='enable', value='all' ) self.assertIsNone(cro.do_dry_run()) def test_do_action_raise_on_put_settings(self): client = Mock() client.cluster.put_settings.return_value = None client.cluster.put_settings.side_effect = testvars.fake_fail cro = curator.ClusterRouting( client, routing_type='allocation', setting='enable', value='all' ) self.assertRaises(Exception, cro.do_action) def test_do_action_wait(self): client = Mock() client.cluster.put_settings.return_value = None client.cluster.health.return_value = {'relocating_shards':0} cro = curator.ClusterRouting( client, routing_type='allocation', setting='enable', value='all', wait_for_completion=True ) self.assertIsNone(cro.do_action()) curator-5.2.0/test/unit/test_action_create_index.py000066400000000000000000000024431315226075300225420ustar00rootroot00000000000000from unittest import TestCase from mock import Mock, patch import elasticsearch import curator # Get test variables and constants from a single source from . 
class TestActionCreate_index(TestCase):
    def test_init_raise(self):
        self.assertRaises(TypeError, curator.CreateIndex, 'invalid')

    def test_init_raise_no_name(self):
        client = Mock()
        self.assertRaises(curator.ConfigurationError,
            curator.CreateIndex, client, None)

    def test_init(self):
        client = Mock()
        co = curator.CreateIndex(client, 'name')
        self.assertEqual('name', co.name)
        self.assertEqual(client, co.client)

    def test_do_dry_run(self):
        client = Mock()
        co = curator.CreateIndex(client, 'name')
        self.assertIsNone(co.do_dry_run())

    def test_do_action(self):
        client = Mock()
        client.indices.create.return_value = None
        co = curator.CreateIndex(client, 'name')
        self.assertIsNone(co.do_action())

    def test_do_action_raises_exception(self):
        client = Mock()
        client.indices.create.return_value = None
        client.indices.create.side_effect = testvars.fake_fail
        co = curator.CreateIndex(client, 'name')
        self.assertRaises(curator.FailedExecution, co.do_action)


curator-5.2.0/test/unit/test_action_delete_indices.py

from unittest import TestCase
from mock import Mock, patch
import elasticsearch
import curator
# Get test variables and constants from a single source
from . import testvars as testvars

class TestActionDeleteIndices(TestCase):
    def test_init_raise(self):
        self.assertRaises(TypeError, curator.DeleteIndices, 'invalid')

    def test_init_raise_bad_master_timeout(self):
        client = Mock()
        client.info.return_value = {'version': {'number': '5.0.0'} }
        client.indices.get_settings.return_value = testvars.settings_one
        client.cluster.state.return_value = testvars.clu_state_one
        client.indices.stats.return_value = testvars.stats_one
        ilo = curator.IndexList(client)
        self.assertRaises(TypeError, curator.DeleteIndices, ilo, 'invalid')

    def test_init(self):
        client = Mock()
        client.info.return_value = {'version': {'number': '5.0.0'} }
        client.indices.get_settings.return_value = testvars.settings_one
        client.cluster.state.return_value = testvars.clu_state_one
        client.indices.stats.return_value = testvars.stats_one
        ilo = curator.IndexList(client)
        do = curator.DeleteIndices(ilo)
        self.assertEqual(ilo, do.index_list)
        self.assertEqual(client, do.client)

    def test_do_dry_run(self):
        client = Mock()
        client.info.return_value = {'version': {'number': '5.0.0'} }
        client.indices.get_settings.return_value = testvars.settings_four
        client.cluster.state.return_value = testvars.clu_state_four
        client.indices.stats.return_value = testvars.stats_four
        client.indices.delete.return_value = None
        ilo = curator.IndexList(client)
        do = curator.DeleteIndices(ilo)
        self.assertIsNone(do.do_dry_run())

    def test_do_action(self):
        client = Mock()
        client.info.return_value = {'version': {'number': '5.0.0'} }
        client.indices.get_settings.return_value = testvars.settings_four
        client.cluster.state.return_value = testvars.clu_state_four
        client.indices.stats.return_value = testvars.stats_four
        client.indices.delete.return_value = None
        ilo = curator.IndexList(client)
        do = curator.DeleteIndices(ilo)
        self.assertIsNone(do.do_action())

    def test_do_action_not_successful(self):
        client = Mock()
        client.info.return_value = {'version': {'number': '5.0.0'} }
        client.indices.get_settings.return_value = testvars.settings_four
        client.cluster.state.return_value = testvars.clu_state_four
        client.indices.stats.return_value = testvars.stats_four
        client.indices.delete.return_value = None
        ilo = curator.IndexList(client)
        do = curator.DeleteIndices(ilo)
        self.assertIsNone(do.do_action())
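    # The failure-path tests below attach testvars.fake_fail as a side_effect
    # so the mocked client raises, letting us assert Curator's exception
    # wrapping behavior.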
    def test_do_action_raises_exception(self):
        client = Mock()
        client.info.return_value = {'version': {'number': '5.0.0'} }
        client.indices.get_settings.return_value = testvars.settings_four
        client.cluster.state.return_value = testvars.clu_state_four
        client.indices.stats.return_value = testvars.stats_four
        client.indices.delete.return_value = None
        client.indices.delete.side_effect = testvars.fake_fail
        ilo = curator.IndexList(client)
        do = curator.DeleteIndices(ilo)
        self.assertRaises(curator.FailedExecution, do.do_action)

    def test_verify_result_positive(self):
        client = Mock()
        client.info.return_value = {'version': {'number': '5.0.0'} }
        client.indices.get_settings.return_value = testvars.settings_four
        client.cluster.state.return_value = testvars.clu_state_four
        client.indices.stats.return_value = testvars.stats_four
        client.indices.delete.return_value = None
        ilo = curator.IndexList(client)
        do = curator.DeleteIndices(ilo)
        self.assertTrue(do._verify_result([],2))


curator-5.2.0/test/unit/test_action_delete_snapshots.py

from unittest import TestCase
from mock import Mock, patch
import elasticsearch
import curator
# Get test variables and constants from a single source
from . import testvars as testvars

class TestActionDeleteSnapshots(TestCase):
    def test_init_raise(self):
        self.assertRaises(TypeError, curator.DeleteSnapshots, 'invalid')

    def test_init(self):
        client = Mock()
        client.snapshot.get.return_value = testvars.snapshots
        client.snapshot.get_repository.return_value = testvars.test_repo
        slo = curator.SnapshotList(client, repository=testvars.repo_name)
        do = curator.DeleteSnapshots(slo)
        self.assertEqual(slo, do.snapshot_list)
        self.assertEqual(client, do.client)

    def test_do_dry_run(self):
        client = Mock()
        client.snapshot.get.return_value = testvars.snapshots
        client.snapshot.get_repository.return_value = testvars.test_repo
        client.tasks.get.return_value = testvars.no_snap_tasks
        client.snapshot.delete.return_value = None
        slo = curator.SnapshotList(client, repository=testvars.repo_name)
        do = curator.DeleteSnapshots(slo)
        self.assertIsNone(do.do_dry_run())

    def test_do_action(self):
        client = Mock()
        client.snapshot.get.return_value = testvars.snapshots
        client.snapshot.get_repository.return_value = testvars.test_repo
        client.tasks.get.return_value = testvars.no_snap_tasks
        client.snapshot.delete.return_value = None
        slo = curator.SnapshotList(client, repository=testvars.repo_name)
        do = curator.DeleteSnapshots(slo)
        self.assertIsNone(do.do_action())

    def test_do_action_raises_exception(self):
        client = Mock()
        client.snapshot.get.return_value = testvars.snapshots
        client.snapshot.get_repository.return_value = testvars.test_repo
        client.snapshot.delete.return_value = None
        client.tasks.get.return_value = testvars.no_snap_tasks
        client.snapshot.delete.side_effect = testvars.fake_fail
        slo = curator.SnapshotList(client, repository=testvars.repo_name)
        do = curator.DeleteSnapshots(slo)
        self.assertRaises(curator.FailedExecution, do.do_action)

    def test_not_safe_to_snap_raises_exception(self):
        client = Mock()
        client.snapshot.get.return_value = testvars.inprogress
        client.snapshot.get_repository.return_value = testvars.test_repo
        client.tasks.get.return_value = testvars.no_snap_tasks
        slo = curator.SnapshotList(client, repository=testvars.repo_name)
        do = curator.DeleteSnapshots(slo, retry_interval=0, retry_count=1)
        self.assertRaises(curator.FailedExecution, do.do_action)
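
# For context, a minimal usage sketch of the pattern exercised above
# (illustrative only; 'my_repo' is a hypothetical repository name, not part
# of these tests):
#
#   client = elasticsearch.Elasticsearch()
#   slo = curator.SnapshotList(client, repository='my_repo')
#   curator.DeleteSnapshots(slo, retry_interval=120, retry_count=3).do_action()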

curator-5.2.0/test/unit/test_action_forcemerge.py

from unittest import TestCase
from mock import Mock, patch
import elasticsearch
import curator
# Get test variables and constants from a single source
from . import testvars as testvars

class TestActionForceMerge(TestCase):
    def test_init_raise_bad_client(self):
        self.assertRaises(
            TypeError, curator.ForceMerge, 'invalid', max_num_segments=2)

    def test_init_raise_no_segment_count(self):
        client = Mock()
        client.info.return_value = {'version': {'number': '5.0.0'} }
        client.indices.get_settings.return_value = testvars.settings_one
        client.cluster.state.return_value = testvars.clu_state_one
        client.indices.stats.return_value = testvars.stats_one
        client.indices.segments.return_value = testvars.shards
        ilo = curator.IndexList(client)
        self.assertRaises(
            curator.MissingArgument, curator.ForceMerge, ilo)

    def test_init(self):
        client = Mock()
        client.info.return_value = {'version': {'number': '5.0.0'} }
        client.indices.get_settings.return_value = testvars.settings_one
        client.cluster.state.return_value = testvars.clu_state_one
        client.indices.stats.return_value = testvars.stats_one
        client.indices.segments.return_value = testvars.shards
        ilo = curator.IndexList(client)
        fmo = curator.ForceMerge(ilo, max_num_segments=2)
        self.assertEqual(ilo, fmo.index_list)
        self.assertEqual(client, fmo.client)

    def test_do_dry_run(self):
        client = Mock()
        client.info.return_value = {'version': {'number': '5.0.0'} }
        client.indices.get_settings.return_value = testvars.settings_one
        client.cluster.state.return_value = testvars.clu_state_one
        client.indices.stats.return_value = testvars.stats_one
        client.indices.segments.return_value = testvars.shards
        client.indices.forcemerge.return_value = None
        client.indices.optimize.return_value = None
        ilo = curator.IndexList(client)
        fmo = curator.ForceMerge(ilo, max_num_segments=2)
        self.assertIsNone(fmo.do_dry_run())

    def test_do_action_pre5(self):
        client = Mock()
        client.info.return_value = {'version': {'number': '5.0.0'} }
        client.indices.get_settings.return_value = testvars.settings_one
        client.cluster.state.return_value = testvars.clu_state_one
        client.indices.stats.return_value = testvars.stats_one
        client.indices.segments.return_value = testvars.shards
        client.info.return_value = {'version': {'number': '2.3.2'} }
        client.indices.optimize.return_value = None
        ilo = curator.IndexList(client)
        fmo = curator.ForceMerge(ilo, max_num_segments=2)
        self.assertIsNone(fmo.do_action())

    def test_do_action(self):
        client = Mock()
        client.info.return_value = {'version': {'number': '5.0.0'} }
        client.indices.get_settings.return_value = testvars.settings_one
        client.cluster.state.return_value = testvars.clu_state_one
        client.indices.stats.return_value = testvars.stats_one
        client.indices.segments.return_value = testvars.shards
        client.info.return_value = {'version': {'number': '5.0.0'} }
        client.indices.forcemerge.return_value = None
        ilo = curator.IndexList(client)
        fmo = curator.ForceMerge(ilo, max_num_segments=2)
        self.assertIsNone(fmo.do_action())

    def test_do_action_with_delay(self):
        client = Mock()
        client.info.return_value = {'version': {'number': '5.0.0'} }
        client.indices.get_settings.return_value = testvars.settings_one
        client.cluster.state.return_value = testvars.clu_state_one
        client.indices.stats.return_value = testvars.stats_one
        client.indices.segments.return_value = testvars.shards
        client.info.return_value = {'version': {'number': '5.0.0'} }
        client.indices.forcemerge.return_value = None
        ilo = curator.IndexList(client)
        fmo = curator.ForceMerge(ilo, max_num_segments=2, delay=0.050)
        self.assertIsNone(fmo.do_action())

    def test_do_action_raises_exception(self):
        client = Mock()
        client.info.return_value = {'version': {'number': '5.0.0'} }
        client.indices.get_settings.return_value = testvars.settings_one
        client.cluster.state.return_value = testvars.clu_state_one
        client.indices.stats.return_value = testvars.stats_one
        client.indices.segments.return_value = testvars.shards
        client.indices.forcemerge.return_value = None
        client.indices.optimize.return_value = None
        client.indices.forcemerge.side_effect = testvars.fake_fail
        client.indices.optimize.side_effect = testvars.fake_fail
        ilo = curator.IndexList(client)
        fmo = curator.ForceMerge(ilo, max_num_segments=2)
        self.assertRaises(curator.FailedExecution, fmo.do_action)


curator-5.2.0/test/unit/test_action_open.py

from unittest import TestCase
from mock import Mock, patch
import elasticsearch
import curator
# Get test variables and constants from a single source
from . import testvars as testvars

class TestActionOpen(TestCase):
    def test_init_raise(self):
        self.assertRaises(TypeError, curator.Open, 'invalid')

    def test_init(self):
        client = Mock()
        client.info.return_value = {'version': {'number': '5.0.0'} }
        client.indices.get_settings.return_value = testvars.settings_one
        client.cluster.state.return_value = testvars.clu_state_one
        client.indices.stats.return_value = testvars.stats_one
        ilo = curator.IndexList(client)
        oo = curator.Open(ilo)
        self.assertEqual(ilo, oo.index_list)
        self.assertEqual(client, oo.client)

    def test_do_dry_run(self):
        client = Mock()
        client.info.return_value = {'version': {'number': '5.0.0'} }
        client.indices.get_settings.return_value = testvars.settings_four
        client.cluster.state.return_value = testvars.clu_state_four
        client.indices.stats.return_value = testvars.stats_four
        client.indices.open.return_value = None
        ilo = curator.IndexList(client)
        ilo.filter_opened()
        oo = curator.Open(ilo)
        self.assertEqual([u'c-2016.03.05'], oo.index_list.indices)
        self.assertIsNone(oo.do_dry_run())

    def test_do_action(self):
        client = Mock()
        client.info.return_value = {'version': {'number': '5.0.0'} }
        client.indices.get_settings.return_value = testvars.settings_four
        client.cluster.state.return_value = testvars.clu_state_four
        client.indices.stats.return_value = testvars.stats_four
        client.indices.open.return_value = None
        ilo = curator.IndexList(client)
        ilo.filter_opened()
        oo = curator.Open(ilo)
        self.assertEqual([u'c-2016.03.05'], oo.index_list.indices)
        self.assertIsNone(oo.do_action())

    def test_do_action_raises_exception(self):
        client = Mock()
        client.info.return_value = {'version': {'number': '5.0.0'} }
        client.indices.get_settings.return_value = testvars.settings_four
        client.cluster.state.return_value = testvars.clu_state_four
        client.indices.stats.return_value = testvars.stats_four
        client.indices.open.return_value = None
        client.indices.open.side_effect = testvars.fake_fail
        ilo = curator.IndexList(client)
        oo = curator.Open(ilo)
        self.assertRaises(curator.FailedExecution, oo.do_action)


curator-5.2.0/test/unit/test_action_reindex.py

from unittest import TestCase
from mock import Mock, patch
import elasticsearch
import curator
# Get test variables and constants from a single source
from . import testvars as testvars
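
# The request bodies used below (testvars.reindex_basic and friends) follow
# the standard _reindex shape. A minimal sketch, with hypothetical index
# names standing in for the testvars values:
#
#   request_body = {
#       'source': {'index': 'source_index'},
#       'dest': {'index': 'dest_index'},
#   }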
class TestActionReindex(TestCase):
    def test_init_bad_ilo(self):
        self.assertRaises(TypeError, curator.Reindex, 'foo', 'invalid')

    def test_init_raise_bad_request_body(self):
        client = Mock()
        client.info.return_value = {'version': {'number': '5.0.0'} }
        client.indices.get_settings.return_value = testvars.settings_one
        client.cluster.state.return_value = testvars.clu_state_one
        client.indices.stats.return_value = testvars.stats_one
        ilo = curator.IndexList(client)
        self.assertRaises(curator.ConfigurationError,
            curator.Reindex, ilo, 'invalid')

    def test_init_raise_local_migration_no_prefix_or_suffix(self):
        client = Mock()
        client.info.return_value = {'version': {'number': '5.0.0'} }
        client.indices.get_settings.return_value = testvars.settings_one
        client.cluster.state.return_value = testvars.clu_state_one
        client.indices.stats.return_value = testvars.stats_one
        ilo = curator.IndexList(client)
        self.assertRaises(curator.ConfigurationError,
            curator.Reindex, ilo, testvars.reindex_migration)

    def test_init(self):
        client = Mock()
        client.info.return_value = {'version': {'number': '5.0.0'} }
        client.indices.get_settings.return_value = testvars.settings_one
        client.cluster.state.return_value = testvars.clu_state_one
        client.indices.stats.return_value = testvars.stats_one
        ilo = curator.IndexList(client)
        ro = curator.Reindex(ilo, testvars.reindex_basic)
        self.assertEqual(ilo, ro.index_list)
        self.assertEqual(client, ro.client)

    def test_do_dry_run(self):
        client = Mock()
        client.info.return_value = {'version': {'number': '5.0.0'} }
        client.indices.get_settings.return_value = testvars.settings_four
        client.cluster.state.return_value = testvars.clu_state_four
        client.indices.stats.return_value = testvars.stats_four
        ilo = curator.IndexList(client)
        ro = curator.Reindex(ilo, testvars.reindex_basic)
        self.assertIsNone(ro.do_dry_run())

    def test_replace_index_list(self):
        client = Mock()
        client.info.return_value = {'version': {'number': '5.0.0'} }
        client.indices.get_settings.return_value = testvars.settings_four
        client.cluster.state.return_value = testvars.clu_state_four
        client.indices.stats.return_value = testvars.stats_four
        ilo = curator.IndexList(client)
        ro = curator.Reindex(ilo, testvars.reindex_replace)
        self.assertEqual(ro.index_list.indices, ro.body['source']['index'])

    def test_reindex_with_wait(self):
        client = Mock()
        client.info.return_value = {'version': {'number': '5.0.0'} }
        client.indices.get_settings.return_value = testvars.settings_four
        client.cluster.state.return_value = testvars.clu_state_four
        client.indices.stats.return_value = testvars.stats_four
        client.reindex.return_value = testvars.generic_task
        client.tasks.get.return_value = testvars.completed_task
        ilo = curator.IndexList(client)
        # After building ilo, we need a different return value
        client.indices.get_settings.return_value = {'other_index':{}}
        ro = curator.Reindex(ilo, testvars.reindex_basic)
        self.assertIsNone(ro.do_action())

    def test_reindex_without_wait(self):
        client = Mock()
        client.info.return_value = {'version': {'number': '5.0.0'} }
        client.indices.get_settings.return_value = testvars.settings_four
        client.cluster.state.return_value = testvars.clu_state_four
        client.indices.stats.return_value = testvars.stats_four
        client.reindex.return_value = testvars.generic_task
        client.tasks.get.return_value = testvars.completed_task
        ilo = curator.IndexList(client)
        ro = curator.Reindex(ilo, testvars.reindex_basic,
            wait_for_completion=False)
        self.assertIsNone(ro.do_action())

    def test_reindex_timedout(self):
        client = Mock()
        client.info.return_value = {'version': {'number': '5.0.0'} }
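        # testvars.incomplete_task never reports completion, so with max_wait=1
        # below, the wait loop should time out and raise FailedExecution.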
        client.indices.get_settings.return_value = testvars.settings_four
        client.cluster.state.return_value = testvars.clu_state_four
        client.indices.stats.return_value = testvars.stats_four
        client.reindex.return_value = testvars.generic_task
        client.tasks.get.return_value = testvars.incomplete_task
        ilo = curator.IndexList(client)
        ro = curator.Reindex(ilo, testvars.reindex_basic,
            max_wait=1, wait_interval=1)
        self.assertRaises(curator.FailedExecution, ro.do_action)

    def test_remote_with_no_host_key(self):
        client = Mock()
        client.info.return_value = {'version': {'number': '5.0.0'} }
        client.indices.get_settings.return_value = testvars.settings_four
        client.cluster.state.return_value = testvars.clu_state_four
        client.indices.stats.return_value = testvars.stats_four
        client.reindex.return_value = testvars.generic_task
        client.tasks.get.return_value = testvars.completed_task
        ilo = curator.IndexList(client)
        # After building ilo, we need a different return value
        client.indices.get_settings.return_value = {'other_index':{}}
        badval = {
            'source': {
                'index': 'irrelevant',
                'remote': {'wrong': 'invalid'}
            },
            'dest': { 'index': 'other_index' }
        }
        self.assertRaises(
            curator.ConfigurationError, curator.Reindex, ilo, badval)

    def test_remote_with_bad_host(self):
        client = Mock()
        client.info.return_value = {'version': {'number': '5.0.0'} }
        client.indices.get_settings.return_value = testvars.settings_four
        client.cluster.state.return_value = testvars.clu_state_four
        client.indices.stats.return_value = testvars.stats_four
        client.reindex.return_value = testvars.generic_task
        client.tasks.get.return_value = testvars.completed_task
        ilo = curator.IndexList(client)
        # After building ilo, we need a different return value
        client.indices.get_settings.return_value = {'other_index':{}}
        badval = {
            'source': {
                'index': 'irrelevant',
                'remote': {'host': 'invalid'}
            },
            'dest': { 'index': 'other_index' }
        }
        self.assertRaises(
            curator.ConfigurationError, curator.Reindex, ilo, badval)

    def test_remote_with_bad_url(self):
        client = Mock()
        client.info.return_value = {'version': {'number': '5.0.0'} }
        client.indices.get_settings.return_value = testvars.settings_four
        client.cluster.state.return_value = testvars.clu_state_four
        client.indices.stats.return_value = testvars.stats_four
        client.reindex.return_value = testvars.generic_task
        client.tasks.get.return_value = testvars.completed_task
        ilo = curator.IndexList(client)
        # After building ilo, we need a different return value
        client.indices.get_settings.return_value = {'other_index':{}}
        badval = {
            'source': {
                'index': 'irrelevant',
                'remote': {'host': 'asdf://hostname:1234'}
            },
            'dest': { 'index': 'other_index' }
        }
        self.assertRaises(
            curator.ConfigurationError, curator.Reindex, ilo, badval)

    def test_remote_with_bad_connection(self):
        client = Mock()
        client.info.return_value = {'version': {'number': '5.0.0'} }
        client.indices.get_settings.return_value = testvars.settings_four
        client.cluster.state.return_value = testvars.clu_state_four
        client.indices.stats.return_value = testvars.stats_four
        client.reindex.return_value = testvars.generic_task
        client.tasks.get.return_value = testvars.completed_task
        ilo = curator.IndexList(client)
        # After building ilo, we need a different return value
        client.indices.get_settings.return_value = {'other_index':{}}
        badval = {
            'source': {
                'index': 'REINDEX_SELECTION',
                'remote': {'host': 'https://example.org:XXXX'}
            },
            'dest': { 'index': 'other_index' }
        }
        urllib3 = Mock()
        urllib3.util.retry.side_effect = testvars.fake_fail
        self.assertRaises(Exception, curator.Reindex, ilo, badval)
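
# Taken together, the remote tests above imply that the 'remote' block must
# contain a 'host' key whose value is a full http(s) URL with an explicit
# port; anything else should be rejected as a ConfigurationError before any
# connection is attempted.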

curator-5.2.0/test/unit/test_action_replicas.py

from unittest import TestCase
from mock import Mock, patch
import elasticsearch
import curator
# Get test variables and constants from a single source
from . import testvars as testvars

class TestActionReplicas(TestCase):
    def test_init_raise_bad_client(self):
        self.assertRaises(
            TypeError, curator.Replicas, 'invalid', count=2)

    def test_init_raise_no_count(self):
        client = Mock()
        client.info.return_value = {'version': {'number': '5.0.0'} }
        client.indices.get_settings.return_value = testvars.settings_one
        client.cluster.state.return_value = testvars.clu_state_one
        client.indices.stats.return_value = testvars.stats_one
        ilo = curator.IndexList(client)
        self.assertRaises(
            curator.MissingArgument, curator.Replicas, ilo)

    def test_init(self):
        client = Mock()
        client.info.return_value = {'version': {'number': '5.0.0'} }
        client.indices.get_settings.return_value = testvars.settings_one
        client.cluster.state.return_value = testvars.clu_state_one
        client.indices.stats.return_value = testvars.stats_one
        client.indices.put_settings.return_value = None
        ilo = curator.IndexList(client)
        ro = curator.Replicas(ilo, count=2)
        self.assertEqual(ilo, ro.index_list)
        self.assertEqual(client, ro.client)

    def test_do_dry_run(self):
        client = Mock()
        client.info.return_value = {'version': {'number': '5.0.0'} }
        client.indices.get_settings.return_value = testvars.settings_one
        client.cluster.state.return_value = testvars.clu_state_one
        client.indices.stats.return_value = testvars.stats_one
        client.indices.put_settings.return_value = None
        ilo = curator.IndexList(client)
        ro = curator.Replicas(ilo, count=0)
        self.assertIsNone(ro.do_dry_run())

    def test_do_action(self):
        client = Mock()
        client.info.return_value = {'version': {'number': '5.0.0'} }
        client.indices.get_settings.return_value = testvars.settings_one
        client.cluster.state.return_value = testvars.clu_state_one
        client.indices.stats.return_value = testvars.stats_one
        client.indices.put_settings.return_value = None
        ilo = curator.IndexList(client)
        ro = curator.Replicas(ilo, count=0)
        self.assertIsNone(ro.do_action())

    def test_do_action_wait(self):
        client = Mock()
        client.info.return_value = {'version': {'number': '5.0.0'} }
        client.indices.get_settings.return_value = testvars.settings_one
        client.cluster.state.return_value = testvars.clu_state_one
        client.indices.stats.return_value = testvars.stats_one
        client.indices.put_settings.return_value = None
        client.cluster.health.return_value = {'status':'green'}
        ilo = curator.IndexList(client)
        ro = curator.Replicas(ilo, count=1, wait_for_completion=True)
        self.assertIsNone(ro.do_action())

    def test_do_action_raises_exception(self):
        client = Mock()
        client.info.return_value = {'version': {'number': '5.0.0'} }
        client.indices.get_settings.return_value = testvars.settings_one
        client.cluster.state.return_value = testvars.clu_state_one
        client.indices.stats.return_value = testvars.stats_one
        client.indices.segments.return_value = testvars.shards
        client.indices.put_settings.return_value = None
        client.indices.put_settings.side_effect = testvars.fake_fail
        ilo = curator.IndexList(client)
        ro = curator.Replicas(ilo, count=2)
        self.assertRaises(curator.FailedExecution, ro.do_action)


curator-5.2.0/test/unit/test_action_restore.py

from unittest import TestCase
from mock import Mock, patch
import elasticsearch
import curator
# Get test variables and constants from a single source
from . import testvars as testvars
class TestActionRestore(TestCase):
    def test_init_raise_bad_snapshot_list(self):
        self.assertRaises(TypeError, curator.Restore, 'invalid')

    def test_init_raise_unsuccessful_snapshot_list(self):
        client = Mock()
        client.snapshot.get.return_value = testvars.partial
        client.snapshot.get_repository.return_value = testvars.test_repo
        slo = curator.SnapshotList(client, repository=testvars.repo_name)
        self.assertRaises(curator.CuratorException, curator.Restore, slo)

    def test_snapshot_derived_name(self):
        client = Mock()
        client.snapshot.get.return_value = testvars.snapshots
        client.snapshot.get_repository.return_value = testvars.test_repo
        slo = curator.SnapshotList(client, repository=testvars.repo_name)
        ro = curator.Restore(slo)
        self.assertEqual('snapshot-2015.03.01', ro.name)

    def test_provided_name(self):
        client = Mock()
        client.snapshot.get.return_value = testvars.snapshots
        client.snapshot.get_repository.return_value = testvars.test_repo
        slo = curator.SnapshotList(client, repository=testvars.repo_name)
        ro = curator.Restore(slo, name=testvars.snap_name)
        self.assertEqual(testvars.snap_name, ro.name)

    def test_partial_snap(self):
        client = Mock()
        client.snapshot.get.return_value = testvars.partial
        client.snapshot.get_repository.return_value = testvars.test_repo
        slo = curator.SnapshotList(client, repository=testvars.repo_name)
        ro = curator.Restore(slo, partial=True)
        self.assertEqual(testvars.snap_name, ro.name)

    def test_provided_indices(self):
        client = Mock()
        client.snapshot.get.return_value = testvars.snapshots
        client.snapshot.get_repository.return_value = testvars.test_repo
        slo = curator.SnapshotList(client, repository=testvars.repo_name)
        ro = curator.Restore(slo, indices=testvars.named_indices)
        self.assertEqual('snapshot-2015.03.01', ro.name)

    def test_extra_settings(self):
        client = Mock()
        client.snapshot.get.return_value = testvars.snapshots
        client.snapshot.get_repository.return_value = testvars.test_repo
        slo = curator.SnapshotList(client, repository=testvars.repo_name)
        ro = curator.Restore(slo, extra_settings={'foo':'bar'})
        self.assertEqual(ro.body['foo'], 'bar')

    def test_bad_extra_settings(self):
        client = Mock()
        client.snapshot.get.return_value = testvars.snapshots
        client.snapshot.get_repository.return_value = testvars.test_repo
        slo = curator.SnapshotList(client, repository=testvars.repo_name)
        ro = curator.Restore(slo, extra_settings='invalid')
        self.assertEqual(ro.body,
            {
                'ignore_unavailable': False,
                'partial': False,
                'include_aliases': False,
                'rename_replacement': '',
                'rename_pattern': '',
                'indices': ['index-2015.01.01', 'index-2015.02.01'],
                'include_global_state': False
            }
        )

    def test_get_expected_output(self):
        client = Mock()
        client.snapshot.get.return_value = testvars.snapshots
        client.snapshot.get_repository.return_value = testvars.test_repo
        slo = curator.SnapshotList(client, repository=testvars.repo_name)
        ro = curator.Restore(
            slo, rename_pattern='(.+)', rename_replacement='new_$1')
        self.assertEqual(
            ro.expected_output,
            ['new_index-2015.01.01', 'new_index-2015.02.01']
        )

    def test_do_dry_run(self):
        client = Mock()
        client.snapshot.get.return_value = testvars.snapshots
        client.snapshot.get_repository.return_value = testvars.test_repo
        slo = curator.SnapshotList(client, repository=testvars.repo_name)
        ro = curator.Restore(slo)
        self.assertIsNone(ro.do_dry_run())

    def test_do_dry_run_with_renames(self):
        client = Mock()
        client.snapshot.get.return_value = testvars.snapshots
        client.snapshot.get_repository.return_value = testvars.test_repo
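        # As test_get_expected_output above shows, rename_pattern='(.+)' with
        # rename_replacement='new_$1' prefixes every restored index with 'new_'.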
        slo = curator.SnapshotList(client, repository=testvars.repo_name)
        ro = curator.Restore(
            slo, rename_pattern='(.+)', rename_replacement='new_$1')
        self.assertIsNone(ro.do_dry_run())

    def test_report_state_all(self):
        client = Mock()
        client.info.return_value = {'version': {'number': '5.0.0'} }
        client.snapshot.get.return_value = testvars.snapshot
        client.snapshot.get_repository.return_value = testvars.test_repo
        client.indices.get_settings.return_value = testvars.settings_named
        slo = curator.SnapshotList(client, repository=testvars.repo_name)
        ro = curator.Restore(slo)
        self.assertIsNone(ro.report_state())

    def test_report_state_not_all(self):
        client = Mock()
        client.info.return_value = {'version': {'number': '5.0.0'} }
        client.snapshot.get.return_value = testvars.snapshots
        client.snapshot.get_repository.return_value = testvars.test_repo
        client.indices.get_settings.return_value = testvars.settings_one
        slo = curator.SnapshotList(client, repository=testvars.repo_name)
        ro = curator.Restore(
            slo, rename_pattern='(.+)', rename_replacement='new_$1')
        self.assertIsNone(ro.report_state())

    def test_do_action_success(self):
        client = Mock()
        client.info.return_value = {'version': {'number': '5.0.0'} }
        client.snapshot.get.return_value = testvars.snapshots
        client.snapshot.get_repository.return_value = testvars.test_repo
        client.snapshot.status.return_value = testvars.nosnap_running
        client.snapshot.verify_repository.return_value = testvars.verified_nodes
        client.indices.get_settings.return_value = testvars.settings_named
        client.indices.recovery.return_value = testvars.recovery_output
        slo = curator.SnapshotList(client, repository=testvars.repo_name)
        ro = curator.Restore(slo, wait_interval=0.5, max_wait=1)
        self.assertIsNone(ro.do_action())

    def test_do_action_snap_in_progress(self):
        client = Mock()
        client.snapshot.get.return_value = testvars.snapshots
        client.snapshot.get_repository.return_value = testvars.test_repo
        client.snapshot.status.return_value = testvars.snap_running
        client.snapshot.verify_repository.return_value = testvars.verified_nodes
        client.indices.get_settings.return_value = testvars.settings_named
        slo = curator.SnapshotList(client, repository=testvars.repo_name)
        ro = curator.Restore(slo)
        self.assertRaises(curator.SnapshotInProgress, ro.do_action)

    def test_do_action_success_no_wfc(self):
        client = Mock()
        client.snapshot.get.return_value = testvars.snapshots
        client.snapshot.get_repository.return_value = testvars.test_repo
        client.snapshot.status.return_value = testvars.nosnap_running
        client.snapshot.verify_repository.return_value = testvars.verified_nodes
        client.indices.get_settings.return_value = testvars.settings_named
        slo = curator.SnapshotList(client, repository=testvars.repo_name)
        ro = curator.Restore(slo, wait_for_completion=False)
        self.assertIsNone(ro.do_action())

    def test_do_action_report_on_failure(self):
        client = Mock()
        client.snapshot.get.return_value = testvars.snapshots
        client.snapshot.get_repository.return_value = testvars.test_repo
        client.snapshot.status.return_value = testvars.nosnap_running
        client.snapshot.verify_repository.return_value = testvars.verified_nodes
        client.indices.get_settings.return_value = testvars.settings_named
        client.snapshot.restore.side_effect = testvars.fake_fail
        slo = curator.SnapshotList(client, repository=testvars.repo_name)
        ro = curator.Restore(slo)
        self.assertRaises(curator.FailedExecution, ro.do_action)


curator-5.2.0/test/unit/test_action_rollover.py

from unittest import TestCase
from mock import Mock, patch
import elasticsearch
import curator
# Get test variables and constants from a single source
from . import testvars as testvars

class TestActionRollover(TestCase):
    def test_init_raise_bad_client(self):
        self.assertRaises(
            TypeError, curator.Rollover, 'invalid', 'name', {})

    def test_init_raise_bad_conditions(self):
        client = Mock()
        client.info.return_value = {'version': {'number': '5.0.0'} }
        self.assertRaises(
            curator.ConfigurationError, curator.Rollover, client,
            'name', 'string')

    def test_init_raise_bad_extra_settings(self):
        client = Mock()
        client.info.return_value = {'version': {'number': '5.0.0'} }
        self.assertRaises(
            curator.ConfigurationError, curator.Rollover, client,
            'name', {'a':'b'}, None, 'string')

    def test_init_raise_non_rollable_index(self):
        client = Mock()
        client.info.return_value = {'version': {'number': '5.0.0'} }
        client.indices.get_alias.return_value = testvars.alias_retval
        self.assertRaises(
            ValueError, curator.Rollover, client, testvars.named_alias,
            {'a':'b'})

    def test_do_dry_run(self):
        client = Mock()
        client.info.return_value = {'version': {'number': '5.0.0'} }
        client.indices.get_alias.return_value = testvars.rollable_alias
        client.indices.rollover.return_value = testvars.dry_run_rollover
        ro = curator.Rollover(
            client, testvars.named_alias, testvars.rollover_conditions)
        self.assertIsNone(ro.do_dry_run())


curator-5.2.0/test/unit/test_action_snapshot.py

from unittest import TestCase
from mock import Mock, patch
import elasticsearch
import curator
# Get test variables and constants from a single source
from . import testvars as testvars

class TestActionSnapshot(TestCase):
    def test_init_raise_bad_index_list(self):
        self.assertRaises(TypeError, curator.Snapshot, 'invalid')

    def test_init_no_repo_arg_exception(self):
        client = Mock()
        client.info.return_value = {'version': {'number': '5.0.0'} }
        client.indices.get_settings.return_value = testvars.settings_one
        client.cluster.state.return_value = testvars.clu_state_one
        client.indices.stats.return_value = testvars.stats_one
        ilo = curator.IndexList(client)
        self.assertRaises(curator.MissingArgument, curator.Snapshot, ilo)

    def test_init_no_repo_exception(self):
        client = Mock()
        client.info.return_value = {'version': {'number': '5.0.0'} }
        client.indices.get_settings.return_value = testvars.settings_one
        client.cluster.state.return_value = testvars.clu_state_one
        client.indices.stats.return_value = testvars.stats_one
        client.snapshot.get_repository.return_value = {'repo':{'foo':'bar'}}
        ilo = curator.IndexList(client)
        self.assertRaises(
            curator.ActionError, curator.Snapshot, ilo, repository='notfound')

    def test_init_no_name_exception(self):
        client = Mock()
        client.info.return_value = {'version': {'number': '5.0.0'} }
        client.indices.get_settings.return_value = testvars.settings_one
        client.cluster.state.return_value = testvars.clu_state_one
        client.indices.stats.return_value = testvars.stats_one
        client.snapshot.get_repository.return_value = testvars.test_repo
        ilo = curator.IndexList(client)
        self.assertRaises(curator.MissingArgument, curator.Snapshot, ilo,
            repository=testvars.repo_name)

    def test_init_success(self):
        client = Mock()
        client.info.return_value = {'version': {'number': '5.0.0'} }
        client.indices.get_settings.return_value = testvars.settings_one
        client.cluster.state.return_value = testvars.clu_state_one
        client.indices.stats.return_value = testvars.stats_one
        client.snapshot.get_repository.return_value = testvars.test_repo
        ilo = curator.IndexList(client)
        so = curator.Snapshot(ilo, repository=testvars.repo_name,
            name=testvars.snap_name)
        self.assertEqual(testvars.repo_name, so.repository)
        self.assertIsNone(so.state)

    def test_get_state_success(self):
        client = Mock()
        client.info.return_value = {'version': {'number': '5.0.0'} }
        client.indices.get_settings.return_value = testvars.settings_one
        client.cluster.state.return_value = testvars.clu_state_one
        client.indices.stats.return_value = testvars.stats_one
        client.snapshot.get_repository.return_value = testvars.test_repo
        client.snapshot.get.return_value = testvars.snapshots
        client.tasks.get.return_value = testvars.no_snap_tasks
        ilo = curator.IndexList(client)
        so = curator.Snapshot(ilo, repository=testvars.repo_name,
            name=testvars.snap_name)
        so.get_state()
        self.assertEqual('SUCCESS', so.state)

    def test_get_state_fail(self):
        client = Mock()
        client.info.return_value = {'version': {'number': '5.0.0'} }
        client.indices.get_settings.return_value = testvars.settings_one
        client.cluster.state.return_value = testvars.clu_state_one
        client.indices.stats.return_value = testvars.stats_one
        client.snapshot.get_repository.return_value = testvars.test_repo
        client.snapshot.get.return_value = {'snapshots':[]}
        client.tasks.get.return_value = testvars.no_snap_tasks
        ilo = curator.IndexList(client)
        so = curator.Snapshot(ilo, repository=testvars.repo_name,
            name=testvars.snap_name)
        self.assertRaises(curator.CuratorException, so.get_state)

    def test_report_state_success(self):
        client = Mock()
        client.info.return_value = {'version': {'number': '5.0.0'} }
        client.indices.get_settings.return_value = testvars.settings_one
        client.cluster.state.return_value = testvars.clu_state_one
        client.indices.stats.return_value = testvars.stats_one
        client.snapshot.get_repository.return_value = testvars.test_repo
        client.snapshot.get.return_value = testvars.snapshots
        client.tasks.get.return_value = testvars.no_snap_tasks
        ilo = curator.IndexList(client)
        so = curator.Snapshot(ilo, repository=testvars.repo_name,
            name=testvars.snap_name)
        so.report_state()
        self.assertEqual('SUCCESS', so.state)

    def test_report_state_other(self):
        client = Mock()
        client.info.return_value = {'version': {'number': '5.0.0'} }
        client.indices.get_settings.return_value = testvars.settings_one
        client.cluster.state.return_value = testvars.clu_state_one
        client.indices.stats.return_value = testvars.stats_one
        client.snapshot.get_repository.return_value = testvars.test_repo
        client.snapshot.get.return_value = testvars.highly_unlikely
        client.tasks.get.return_value = testvars.no_snap_tasks
        ilo = curator.IndexList(client)
        so = curator.Snapshot(ilo, repository=testvars.repo_name,
            name=testvars.snap_name)
        so.report_state()
        self.assertEqual('IN_PROGRESS', so.state)

    def test_do_dry_run(self):
        client = Mock()
        client.info.return_value = {'version': {'number': '5.0.0'} }
        client.indices.get_settings.return_value = testvars.settings_one
        client.cluster.state.return_value = testvars.clu_state_one
        client.indices.stats.return_value = testvars.stats_one
        client.snapshot.get_repository.return_value = testvars.test_repo
        client.snapshot.get.return_value = testvars.snapshots
        client.tasks.get.return_value = testvars.no_snap_tasks
        client.snapshot.create.return_value = None
        client.snapshot.status.return_value = testvars.nosnap_running
        client.snapshot.verify_repository.return_value = testvars.verified_nodes
        ilo = curator.IndexList(client)
        so = curator.Snapshot(ilo, repository=testvars.repo_name,
            name=testvars.snap_name)
        self.assertIsNone(so.do_dry_run())
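    # A dry run should only report what would be snapshotted; the mocks above
    # exist to satisfy the pre-flight repository and running-task checks.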
    def test_do_action_success(self):
        client = Mock()
        client.info.return_value = {'version': {'number': '5.0.0'} }
        client.indices.get_settings.return_value = testvars.settings_one
        client.cluster.state.return_value = testvars.clu_state_one
        client.indices.stats.return_value = testvars.stats_one
        client.snapshot.get_repository.return_value = testvars.test_repo
        client.snapshot.get.return_value = testvars.snapshots
        client.tasks.get.return_value = testvars.no_snap_tasks
        client.snapshot.create.return_value = testvars.generic_task
        client.tasks.get.return_value = testvars.completed_task
        client.snapshot.status.return_value = testvars.nosnap_running
        client.snapshot.verify_repository.return_value = testvars.verified_nodes
        ilo = curator.IndexList(client)
        so = curator.Snapshot(ilo, repository=testvars.repo_name,
            name=testvars.snap_name)
        self.assertIsNone(so.do_action())

    def test_do_action_raise_snap_in_progress(self):
        client = Mock()
        client.info.return_value = {'version': {'number': '5.0.0'} }
        client.indices.get_settings.return_value = testvars.settings_one
        client.cluster.state.return_value = testvars.clu_state_one
        client.indices.stats.return_value = testvars.stats_one
        client.snapshot.get_repository.return_value = testvars.test_repo
        client.snapshot.get.return_value = testvars.snapshots
        client.tasks.get.return_value = testvars.no_snap_tasks
        client.snapshot.create.return_value = None
        client.snapshot.status.return_value = testvars.snap_running
        client.snapshot.verify_repository.return_value = testvars.verified_nodes
        ilo = curator.IndexList(client)
        so = curator.Snapshot(ilo, repository=testvars.repo_name,
            name=testvars.snap_name)
        self.assertRaises(curator.SnapshotInProgress, so.do_action)

    def test_do_action_no_wait_for_completion(self):
        client = Mock()
        client.info.return_value = {'version': {'number': '5.0.0'} }
        client.indices.get_settings.return_value = testvars.settings_one
        client.cluster.state.return_value = testvars.clu_state_one
        client.indices.stats.return_value = testvars.stats_one
        client.snapshot.get_repository.return_value = testvars.test_repo
        client.snapshot.get.return_value = testvars.snapshots
        client.tasks.get.return_value = testvars.no_snap_tasks
        client.snapshot.create.return_value = testvars.generic_task
        client.snapshot.status.return_value = testvars.nosnap_running
        client.snapshot.verify_repository.return_value = testvars.verified_nodes
        ilo = curator.IndexList(client)
        so = curator.Snapshot(ilo, repository=testvars.repo_name,
            name=testvars.snap_name, wait_for_completion=False)
        self.assertIsNone(so.do_action())

    def test_do_action_raise_on_failure(self):
        client = Mock()
        client.info.return_value = {'version': {'number': '5.0.0'} }
        client.indices.get_settings.return_value = testvars.settings_one
        client.cluster.state.return_value = testvars.clu_state_one
        client.indices.stats.return_value = testvars.stats_one
        client.snapshot.get_repository.return_value = testvars.test_repo
        client.snapshot.get.return_value = testvars.snapshots
        client.tasks.get.return_value = testvars.no_snap_tasks
        client.snapshot.create.return_value = None
        client.snapshot.create.side_effect = testvars.fake_fail
        client.snapshot.status.return_value = testvars.nosnap_running
        client.snapshot.verify_repository.return_value = testvars.verified_nodes
        ilo = curator.IndexList(client)
        so = curator.Snapshot(ilo, repository=testvars.repo_name,
            name=testvars.snap_name)
        self.assertRaises(curator.FailedExecution, so.do_action)


curator-5.2.0/test/unit/test_class_index_list.py
from unittest import TestCase
from mock import Mock, patch
import elasticsearch
import yaml
import curator
# Get test variables and constants from a single source
from . import testvars as testvars

class TestIndexListClientAndInit(TestCase):
    def test_init_bad_client(self):
        client = 'not a real client'
        self.assertRaises(TypeError, curator.IndexList, client)

    def test_init_get_indices_exception(self):
        client = Mock()
        client.info.return_value = {'version': {'number': '5.0.0'} }
        client.indices.get_settings.return_value = testvars.settings_two
        client.indices.get_settings.side_effect = testvars.fake_fail
        self.assertRaises(curator.FailedExecution, curator.IndexList, client)

    def test_init(self):
        client = Mock()
        client.info.return_value = {'version': {'number': '5.0.0'} }
        client.indices.get_settings.return_value = testvars.settings_two
        client.cluster.state.return_value = testvars.clu_state_two
        client.indices.stats.return_value = testvars.stats_two
        il = curator.IndexList(client)
        self.assertEqual(
            testvars.stats_two['indices']['index-2016.03.03']['total']['store']['size_in_bytes'],
            il.index_info['index-2016.03.03']['size_in_bytes']
        )
        self.assertEqual(
            testvars.clu_state_two['metadata']['indices']['index-2016.03.04']['state'],
            il.index_info['index-2016.03.04']['state']
        )
        self.assertEqual(['index-2016.03.03','index-2016.03.04'],
            sorted(il.indices))

    def test_for_closed_index(self):
        client = Mock()
        client.info.return_value = {'version': {'number': '5.0.0'} }
        client.indices.get_settings.return_value = testvars.settings_2_closed
        client.cluster.state.return_value = testvars.cs_two_closed
        client.indices.stats.return_value = testvars.stats_two
        il = curator.IndexList(client)
        self.assertEqual('close', il.index_info['index-2016.03.03']['state'])

    def test_skip_index_without_creation_date(self):
        client = Mock()
        client.info.return_value = {'version': {'number': '5.0.0'} }
        client.indices.get_settings.return_value = testvars.settings_two_no_cd
        client.cluster.state.return_value = testvars.clu_state_two_no_cd
        client.indices.stats.return_value = testvars.stats_two
        il = curator.IndexList(client)
        self.assertEqual(['index-2016.03.03'], sorted(il.indices))

class TestIndexListOtherMethods(TestCase):
    def test_empty_list(self):
        client = Mock()
        client.info.return_value = {'version': {'number': '5.0.0'} }
        client.indices.get_settings.return_value = testvars.settings_two
        client.cluster.state.return_value = testvars.clu_state_two
        client.indices.stats.return_value = testvars.stats_two
        il = curator.IndexList(client)
        self.assertEqual(2, len(il.indices))
        il.indices = []
        self.assertRaises(curator.NoIndices, il.empty_list_check)

    def test_get_segmentcount(self):
        client = Mock()
        client.info.return_value = {'version': {'number': '5.0.0'} }
        client.indices.get_settings.return_value = testvars.settings_one
        client.cluster.state.return_value = testvars.clu_state_one
        client.indices.stats.return_value = testvars.stats_one
        client.indices.segments.return_value = testvars.shards
        il = curator.IndexList(client)
        il._get_segmentcounts()
        self.assertEqual(71, il.index_info[testvars.named_index]['segments'])

class TestIndexListAgeFilterName(TestCase):
    def test_get_name_based_ages_match(self):
        client = Mock()
        client.info.return_value = {'version': {'number': '5.0.0'} }
        client.indices.get_settings.return_value = testvars.settings_two
        client.cluster.state.return_value = testvars.clu_state_two
        client.indices.stats.return_value = testvars.stats_two
        il = curator.IndexList(client)
        il._get_name_based_ages('%Y.%m.%d')
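        # 1456963200 is 2016-03-03 00:00:00 UTC, i.e. the epoch timestamp
        # derived from the '%Y.%m.%d' timestring embedded in the index name.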
        self.assertEqual(1456963200,
            il.index_info['index-2016.03.03']['age']['name'])

    def test_get_name_based_ages_no_match(self):
        client = Mock()
        client.info.return_value = {'version': {'number': '5.0.0'} }
        client.indices.get_settings.return_value = testvars.settings_two
        client.cluster.state.return_value = testvars.clu_state_two
        client.indices.stats.return_value = testvars.stats_two
        nomatch = curator.IndexList(client)
        nomatch._get_name_based_ages('%Y-%m-%d')
        self.assertEqual(
            curator.fix_epoch(
                testvars.settings_two['index-2016.03.03']['settings']['index']['creation_date']
            ),
            nomatch.index_info['index-2016.03.03']['age']['creation_date']
        )

class TestIndexListAgeFilterStatsAPI(TestCase):
    def test_get_field_stats_dates_success(self):
        client = Mock()
        client.info.return_value = {'version': {'number': '5.0.0'} }
        client.indices.get_settings.return_value = testvars.settings_two
        client.cluster.state.return_value = testvars.clu_state_two
        client.indices.stats.return_value = testvars.stats_two
        il = curator.IndexList(client)
        client.field_stats.return_value = testvars.fieldstats_two
        il._get_field_stats_dates(field='timestamp')
        self.assertEqual(
            curator.fix_epoch(
                testvars.fieldstats_two['indices']['index-2016.03.04']['fields']['timestamp']['min_value']
            ),
            il.index_info['index-2016.03.04']['age']['min_value']
        )

    def test_get_field_stats_dates_negative(self):
        client = Mock()
        client.info.return_value = {'version': {'number': '5.0.0'} }
        client.indices.get_settings.return_value = testvars.settings_two
        client.cluster.state.return_value = testvars.clu_state_two
        client.indices.stats.return_value = testvars.stats_two
        il = curator.IndexList(client)
        client.field_stats.return_value = testvars.fieldstats_two
        il._get_field_stats_dates(field='timestamp')
        self.assertNotIn('not_an_index_name', list(il.index_info.keys()))

    def test_get_field_stats_dates_field_not_found(self):
        client = Mock()
        client.info.return_value = {'version': {'number': '5.0.0'} }
        client.indices.get_settings.return_value = testvars.settings_two
        client.cluster.state.return_value = testvars.clu_state_two
        client.indices.stats.return_value = testvars.stats_two
        client.field_stats.return_value = testvars.fieldstats_two
        il = curator.IndexList(client)
        self.assertRaises(
            curator.ActionError, il._get_field_stats_dates,
            field='not_in_index')

class TestIndexListRegexFilters(TestCase):
    def test_filter_by_regex_prefix(self):
        client = Mock()
        client.info.return_value = {'version': {'number': '5.0.0'} }
        client.indices.get_settings.return_value = testvars.settings_two
        client.cluster.state.return_value = testvars.clu_state_two
        client.indices.stats.return_value = testvars.stats_two
        il = curator.IndexList(client)
        self.assertEqual(
            [u'index-2016.03.03', u'index-2016.03.04'], sorted(il.indices)
        )
        il.filter_by_regex(kind='prefix', value='ind')
        self.assertEqual(
            [u'index-2016.03.03', u'index-2016.03.04'], sorted(il.indices)
        )
        il.filter_by_regex(kind='prefix', value='ind', exclude=True)
        self.assertEqual([], il.indices)

    def test_filter_by_regex_timestring(self):
        client = Mock()
        client.info.return_value = {'version': {'number': '5.0.0'} }
        client.indices.get_settings.return_value = testvars.settings_two
        client.cluster.state.return_value = testvars.clu_state_two
        client.indices.stats.return_value = testvars.stats_two
        il = curator.IndexList(client)
        self.assertEqual(
            [u'index-2016.03.03', u'index-2016.03.04'], sorted(il.indices)
        )
        il.filter_by_regex(kind='timestring', value='%Y.%m.%d')
        self.assertEqual(
            [u'index-2016.03.03', u'index-2016.03.04'], sorted(il.indices)
        )
        il.filter_by_regex(kind='timestring', value='%Y.%m.%d', exclude=True)
        self.assertEqual([], il.indices)

    def test_filter_by_regex_no_match_exclude(self):
        client = Mock()
        client.info.return_value = {'version': {'number': '5.0.0'} }
        client.indices.get_settings.return_value = testvars.settings_two
        client.cluster.state.return_value = testvars.clu_state_two
        client.indices.stats.return_value = testvars.stats_two
        il = curator.IndexList(client)
        self.assertEqual(
            [u'index-2016.03.03', u'index-2016.03.04'], sorted(il.indices)
        )
        il.filter_by_regex(kind='prefix', value='invalid', exclude=True)
        # self.assertEqual([], il.indices)
        self.assertEqual(
            [u'index-2016.03.03', u'index-2016.03.04'], sorted(il.indices)
        )

    def test_filter_by_regex_no_value(self):
        client = Mock()
        client.info.return_value = {'version': {'number': '5.0.0'} }
        client.indices.get_settings.return_value = testvars.settings_two
        client.cluster.state.return_value = testvars.clu_state_two
        client.indices.stats.return_value = testvars.stats_two
        il = curator.IndexList(client)
        self.assertEqual(
            [u'index-2016.03.03', u'index-2016.03.04'], sorted(il.indices)
        )
        self.assertRaises(ValueError, il.filter_by_regex,
            kind='prefix', value=None)
        self.assertEqual(
            [u'index-2016.03.03', u'index-2016.03.04'], sorted(il.indices)
        )
        il.filter_by_regex(kind='prefix', value=0)
        self.assertEqual([], il.indices)

    def test_filter_by_regex_bad_kind(self):
        client = Mock()
        client.info.return_value = {'version': {'number': '5.0.0'} }
        client.indices.get_settings.return_value = testvars.settings_two
        client.cluster.state.return_value = testvars.clu_state_two
        client.indices.stats.return_value = testvars.stats_two
        il = curator.IndexList(client)
        self.assertEqual(
            [u'index-2016.03.03', u'index-2016.03.04'], sorted(il.indices)
        )
        self.assertRaises(ValueError, il.filter_by_regex,
            kind='invalid', value=None)

class TestIndexListFilterByAge(TestCase):
    def test_missing_direction(self):
        client = Mock()
        client.info.return_value = {'version': {'number': '5.0.0'} }
        client.indices.get_settings.return_value = testvars.settings_two
        client.cluster.state.return_value = testvars.clu_state_two
        client.indices.stats.return_value = testvars.stats_two
        il = curator.IndexList(client)
        self.assertRaises(curator.MissingArgument, il.filter_by_age,
            unit='days', unit_count=1
        )

    def test_bad_direction(self):
        client = Mock()
        client.info.return_value = {'version': {'number': '5.0.0'} }
        client.indices.get_settings.return_value = testvars.settings_two
        client.cluster.state.return_value = testvars.clu_state_two
        client.indices.stats.return_value = testvars.stats_two
        il = curator.IndexList(client)
        self.assertRaises(ValueError, il.filter_by_age, unit='days',
            unit_count=1, direction="invalid"
        )

    def test_name_no_timestring(self):
        client = Mock()
        client.info.return_value = {'version': {'number': '5.0.0'} }
        client.indices.get_settings.return_value = testvars.settings_two
        client.cluster.state.return_value = testvars.clu_state_two
        client.indices.stats.return_value = testvars.stats_two
        il = curator.IndexList(client)
        self.assertRaises(curator.MissingArgument, il.filter_by_age,
            source='name', unit='days', unit_count=1, direction='older'
        )

    def test_name_older_than_now(self):
        client = Mock()
        client.info.return_value = {'version': {'number': '5.0.0'} }
        client.indices.get_settings.return_value = testvars.settings_two
        client.cluster.state.return_value = testvars.clu_state_two
        client.indices.stats.return_value = testvars.stats_two
        il = curator.IndexList(client)
        il.filter_by_age(source='name', direction='older',
            timestring='%Y.%m.%d', unit='days', unit_count=1
        )
        self.assertEqual(
            ['index-2016.03.03','index-2016.03.04'], sorted(il.indices)
        )

    def test_name_older_than_now_exclude(self):
        client = Mock()
        client.info.return_value = {'version': {'number': '5.0.0'} }
        client.indices.get_settings.return_value = testvars.settings_two
        client.cluster.state.return_value = testvars.clu_state_two
        client.indices.stats.return_value = testvars.stats_two
        il = curator.IndexList(client)
        il.filter_by_age(source='name', direction='older',
            timestring='%Y.%m.%d', unit='days', unit_count=1, exclude=True
        )
        self.assertEqual([], sorted(il.indices))

    def test_name_younger_than_now(self):
        client = Mock()
        client.info.return_value = {'version': {'number': '5.0.0'} }
        client.indices.get_settings.return_value = testvars.settings_two
        client.cluster.state.return_value = testvars.clu_state_two
        client.indices.stats.return_value = testvars.stats_two
        il = curator.IndexList(client)
        il.filter_by_age(source='name', direction='younger',
            timestring='%Y.%m.%d', unit='days', unit_count=1
        )
        self.assertEqual([], sorted(il.indices))

    def test_name_younger_than_now_exclude(self):
        client = Mock()
        client.info.return_value = {'version': {'number': '5.0.0'} }
        client.indices.get_settings.return_value = testvars.settings_two
        client.cluster.state.return_value = testvars.clu_state_two
        client.indices.stats.return_value = testvars.stats_two
        il = curator.IndexList(client)
        il.filter_by_age(source='name', direction='younger',
            timestring='%Y.%m.%d', unit='days', unit_count=1, exclude=True
        )
        self.assertEqual(
            ['index-2016.03.03','index-2016.03.04'], sorted(il.indices))

    def test_name_younger_than_past_date(self):
        client = Mock()
        client.info.return_value = {'version': {'number': '5.0.0'} }
        client.indices.get_settings.return_value = testvars.settings_two
        client.cluster.state.return_value = testvars.clu_state_two
        client.indices.stats.return_value = testvars.stats_two
        il = curator.IndexList(client)
        il.filter_by_age(source='name', direction='younger',
            timestring='%Y.%m.%d', unit='seconds', unit_count=0,
            epoch=1457049599
        )
        self.assertEqual(['index-2016.03.04'], sorted(il.indices))

    def test_name_older_than_past_date(self):
        client = Mock()
        client.info.return_value = {'version': {'number': '5.0.0'} }
        client.indices.get_settings.return_value = testvars.settings_two
        client.cluster.state.return_value = testvars.clu_state_two
        client.indices.stats.return_value = testvars.stats_two
        il = curator.IndexList(client)
        il.filter_by_age(source='name', direction='older',
            timestring='%Y.%m.%d', unit='seconds', unit_count=0,
            epoch=1456963201
        )
        self.assertEqual(['index-2016.03.03'], sorted(il.indices))

    def test_creation_date_older_than_now(self):
        client = Mock()
        client.info.return_value = {'version': {'number': '5.0.0'} }
        client.indices.get_settings.return_value = testvars.settings_two
        client.cluster.state.return_value = testvars.clu_state_two
        client.indices.stats.return_value = testvars.stats_two
        il = curator.IndexList(client)
        il.filter_by_age(source='creation_date', direction='older',
            unit='days', unit_count=1
        )
        self.assertEqual(
            ['index-2016.03.03','index-2016.03.04'], sorted(il.indices)
        )

    def test_creation_date_older_than_now_raises(self):
        client = Mock()
        client.info.return_value = {'version': {'number': '5.0.0'} }
        client.indices.get_settings.return_value = testvars.settings_two
        client.cluster.state.return_value = testvars.clu_state_two
        client.indices.stats.return_value = testvars.stats_two
        il = curator.IndexList(client)
        il.index_info['index-2016.03.03']['age'].pop('creation_date')
        il.index_info['index-2016.03.04']['age'].pop('creation_date')
        il.filter_by_age(
            source='creation_date', direction='older', unit='days',
            unit_count=1
        )
        self.assertEqual([], il.indices)

    def test_creation_date_younger_than_now(self):
        client = Mock()
        client.info.return_value = {'version': {'number': '5.0.0'} }
        client.indices.get_settings.return_value = testvars.settings_two
        client.cluster.state.return_value = testvars.clu_state_two
        client.indices.stats.return_value = testvars.stats_two
        il = curator.IndexList(client)
        il.filter_by_age(source='creation_date', direction='younger',
            unit='days', unit_count=1
        )
        self.assertEqual([], sorted(il.indices))

    def test_creation_date_younger_than_now_raises(self):
        client = Mock()
        client.info.return_value = {'version': {'number': '5.0.0'} }
        client.indices.get_settings.return_value = testvars.settings_two
        client.cluster.state.return_value = testvars.clu_state_two
        client.indices.stats.return_value = testvars.stats_two
        il = curator.IndexList(client)
        il.index_info['index-2016.03.03']['age'].pop('creation_date')
        il.index_info['index-2016.03.04']['age'].pop('creation_date')
        il.filter_by_age(
            source='creation_date', direction='younger', unit='days',
            unit_count=1
        )
        self.assertEqual([], il.indices)

    def test_creation_date_younger_than_past_date(self):
        client = Mock()
        client.info.return_value = {'version': {'number': '5.0.0'} }
        client.indices.get_settings.return_value = testvars.settings_two
        client.cluster.state.return_value = testvars.clu_state_two
        client.indices.stats.return_value = testvars.stats_two
        il = curator.IndexList(client)
        il.filter_by_age(source='creation_date', direction='younger',
            unit='seconds', unit_count=0, epoch=1457049599
        )
        self.assertEqual(['index-2016.03.04'], sorted(il.indices))

    def test_creation_date_older_than_past_date(self):
        client = Mock()
        client.info.return_value = {'version': {'number': '5.0.0'} }
        client.indices.get_settings.return_value = testvars.settings_two
        client.cluster.state.return_value = testvars.clu_state_two
        client.indices.stats.return_value = testvars.stats_two
        il = curator.IndexList(client)
        il.filter_by_age(source='creation_date', direction='older',
            unit='seconds', unit_count=0, epoch=1456963201
        )
        self.assertEqual(['index-2016.03.03'], sorted(il.indices))

    def test_field_stats_missing_field(self):
        client = Mock()
        client.info.return_value = {'version': {'number': '5.0.0'} }
        client.indices.get_settings.return_value = testvars.settings_two
        client.cluster.state.return_value = testvars.clu_state_two
        client.indices.stats.return_value = testvars.stats_two
        il = curator.IndexList(client)
        self.assertRaises(curator.MissingArgument, il.filter_by_age,
            source='field_stats', direction='older', unit='days',
            unit_count=1
        )

    def test_field_stats_invalid_stats_result(self):
        client = Mock()
        client.info.return_value = {'version': {'number': '5.0.0'} }
        client.indices.get_settings.return_value = testvars.settings_two
        client.cluster.state.return_value = testvars.clu_state_two
        client.indices.stats.return_value = testvars.stats_two
        il = curator.IndexList(client)
        self.assertRaises(ValueError, il.filter_by_age, field='timestamp',
            source='field_stats', direction='older', unit='days',
            unit_count=1, stats_result='invalid'
        )

    def test_field_stats_invalid_source(self):
        client = Mock()
        client.info.return_value = {'version': {'number': '5.0.0'} }
        client.indices.get_settings.return_value = testvars.settings_two
        client.cluster.state.return_value = testvars.clu_state_two
        client.indices.stats.return_value = testvars.stats_two
        il = curator.IndexList(client)
self.assertRaises(ValueError, il.filter_by_age, source='invalid', direction='older', unit='days', unit_count=1 ) def test_field_stats_older_than_now(self): client = Mock() client.info.return_value = {'version': {'number': '5.0.0'} } client.indices.get_settings.return_value = testvars.settings_two client.cluster.state.return_value = testvars.clu_state_two client.indices.stats.return_value = testvars.stats_two client.field_stats.return_value = testvars.fieldstats_two il = curator.IndexList(client) il.filter_by_age(source='field_stats', direction='older', field='timestamp', unit='days', unit_count=1 ) self.assertEqual( ['index-2016.03.03','index-2016.03.04'], sorted(il.indices) ) def test_field_stats_younger_than_now(self): client = Mock() client.info.return_value = {'version': {'number': '5.0.0'} } client.indices.get_settings.return_value = testvars.settings_two client.cluster.state.return_value = testvars.clu_state_two client.indices.stats.return_value = testvars.stats_two client.field_stats.return_value = testvars.fieldstats_two il = curator.IndexList(client) il.filter_by_age(source='field_stats', direction='younger', field='timestamp', unit='days', unit_count=1 ) self.assertEqual([], sorted(il.indices)) def test_field_stats_younger_than_past_date(self): client = Mock() client.info.return_value = {'version': {'number': '5.0.0'} } client.indices.get_settings.return_value = testvars.settings_two client.cluster.state.return_value = testvars.clu_state_two client.indices.stats.return_value = testvars.stats_two client.field_stats.return_value = testvars.fieldstats_two il = curator.IndexList(client) il.filter_by_age(source='field_stats', direction='younger', field='timestamp', unit='seconds', unit_count=0, epoch=1457049599 ) self.assertEqual(['index-2016.03.04'], sorted(il.indices)) def test_field_stats_older_than_past_date(self): client = Mock() client.info.return_value = {'version': {'number': '5.0.0'} } client.indices.get_settings.return_value = testvars.settings_two client.cluster.state.return_value = testvars.clu_state_two client.indices.stats.return_value = testvars.stats_two client.field_stats.return_value = testvars.fieldstats_two il = curator.IndexList(client) il.filter_by_age(source='field_stats', direction='older', field='timestamp', unit='seconds', unit_count=0, epoch=1456963207 ) self.assertEqual(['index-2016.03.03'], sorted(il.indices)) def test_field_stats_older_than_now_max(self): client = Mock() client.info.return_value = {'version': {'number': '5.0.0'} } client.indices.get_settings.return_value = testvars.settings_two client.cluster.state.return_value = testvars.clu_state_two client.indices.stats.return_value = testvars.stats_two client.field_stats.return_value = testvars.fieldstats_two il = curator.IndexList(client) il.filter_by_age(source='field_stats', direction='older', field='timestamp', stats_result='max_value', unit='days', unit_count=0 ) self.assertEqual( ['index-2016.03.03','index-2016.03.04'], sorted(il.indices) ) def test_field_stats_younger_than_now_max(self): client = Mock() client.info.return_value = {'version': {'number': '5.0.0'} } client.indices.get_settings.return_value = testvars.settings_two client.cluster.state.return_value = testvars.clu_state_two client.indices.stats.return_value = testvars.stats_two client.field_stats.return_value = testvars.fieldstats_two il = curator.IndexList(client) il.filter_by_age(source='field_stats', direction='younger', field='timestamp', stats_result='max_value', unit='days', unit_count=0 ) self.assertEqual([], 
sorted(il.indices)) def test_field_stats_younger_than_past_date_max(self): client = Mock() client.info.return_value = {'version': {'number': '5.0.0'} } client.indices.get_settings.return_value = testvars.settings_two client.cluster.state.return_value = testvars.clu_state_two client.indices.stats.return_value = testvars.stats_two client.field_stats.return_value = testvars.fieldstats_two il = curator.IndexList(client) il.filter_by_age(source='field_stats', direction='younger', field='timestamp', stats_result='max_value', unit='seconds', unit_count=0, epoch=1457135998 ) self.assertEqual(['index-2016.03.04'], sorted(il.indices)) def test_field_stats_older_than_past_date_max(self): client = Mock() client.info.return_value = {'version': {'number': '5.0.0'} } client.indices.get_settings.return_value = testvars.settings_two client.cluster.state.return_value = testvars.clu_state_two client.indices.stats.return_value = testvars.stats_two client.field_stats.return_value = testvars.fieldstats_two il = curator.IndexList(client) il.filter_by_age(source='field_stats', direction='older', field='timestamp', stats_result='max_value', unit='seconds', unit_count=0, epoch=1457049600 ) self.assertEqual(['index-2016.03.03'], sorted(il.indices)) class TestIndexListFilterBySpace(TestCase): def test_missing_disk_space_value(self): client = Mock() client.info.return_value = {'version': {'number': '5.0.0'} } client.indices.get_settings.return_value = testvars.settings_two client.cluster.state.return_value = testvars.clu_state_two client.indices.stats.return_value = testvars.stats_two client.field_stats.return_value = testvars.fieldstats_two il = curator.IndexList(client) self.assertRaises(curator.MissingArgument, il.filter_by_space) def test_filter_result_by_name(self): client = Mock() client.info.return_value = {'version': {'number': '5.0.0'} } client.indices.get_settings.return_value = testvars.settings_two client.cluster.state.return_value = testvars.clu_state_two client.indices.stats.return_value = testvars.stats_two client.field_stats.return_value = testvars.fieldstats_two il = curator.IndexList(client) il.filter_by_space(disk_space=1.1) self.assertEqual(['index-2016.03.03'], il.indices) def test_filter_result_by_name_reverse_order(self): client = Mock() client.info.return_value = {'version': {'number': '5.0.0'} } client.indices.get_settings.return_value = testvars.settings_two client.cluster.state.return_value = testvars.clu_state_two client.indices.stats.return_value = testvars.stats_two client.field_stats.return_value = testvars.fieldstats_two il = curator.IndexList(client) il.filter_by_space(disk_space=1.1, reverse=False) self.assertEqual(['index-2016.03.04'], il.indices) def test_filter_result_by_name_exclude(self): client = Mock() client.info.return_value = {'version': {'number': '5.0.0'} } client.indices.get_settings.return_value = testvars.settings_two client.cluster.state.return_value = testvars.clu_state_two client.indices.stats.return_value = testvars.stats_two client.field_stats.return_value = testvars.fieldstats_two il = curator.IndexList(client) il.filter_by_space(disk_space=1.1, exclude=True) self.assertEqual(['index-2016.03.04'], il.indices) def test_filter_result_by_date_raise(self): client = Mock() client.info.return_value = {'version': {'number': '5.0.0'} } client.indices.get_settings.return_value = testvars.settings_four client.cluster.state.return_value = testvars.clu_state_four client.indices.stats.return_value = testvars.stats_four client.field_stats.return_value = testvars.fieldstats_four 
il = curator.IndexList(client) self.assertRaises(ValueError, il.filter_by_space, disk_space=2.1, use_age=True, source='invalid' ) def test_filter_result_by_date_timestring_raise(self): client = Mock() client.info.return_value = {'version': {'number': '5.0.0'} } client.indices.get_settings.return_value = testvars.settings_four client.cluster.state.return_value = testvars.clu_state_four client.indices.stats.return_value = testvars.stats_four client.field_stats.return_value = testvars.fieldstats_four il = curator.IndexList(client) self.assertRaises(curator.MissingArgument, il.filter_by_space, disk_space=2.1, use_age=True, source='name' ) def test_filter_result_by_date_timestring(self): client = Mock() client.info.return_value = {'version': {'number': '5.0.0'} } client.indices.get_settings.return_value = testvars.settings_four client.cluster.state.return_value = testvars.clu_state_four client.indices.stats.return_value = testvars.stats_four client.field_stats.return_value = testvars.fieldstats_four il = curator.IndexList(client) il.filter_by_space( disk_space=2.1, use_age=True, source='name', timestring='%Y.%m.%d' ) self.assertEqual(['a-2016.03.03'], sorted(il.indices)) def test_filter_result_by_date_non_matching_timestring(self): client = Mock() client.info.return_value = {'version': {'number': '5.0.0'} } client.indices.get_settings.return_value = testvars.settings_four client.cluster.state.return_value = testvars.clu_state_four client.indices.stats.return_value = testvars.stats_four client.field_stats.return_value = testvars.fieldstats_four il = curator.IndexList(client) il.filter_by_space( disk_space=2.1, use_age=True, source='name', timestring='%Y.%m.%d.%H' ) self.assertEqual([], sorted(il.indices)) def test_filter_result_by_date_field_stats_raise(self): client = Mock() client.info.return_value = {'version': {'number': '5.0.0'} } client.indices.get_settings.return_value = testvars.settings_four client.cluster.state.return_value = testvars.clu_state_four client.indices.stats.return_value = testvars.stats_four client.field_stats.return_value = testvars.fieldstats_four il = curator.IndexList(client) self.assertRaises(ValueError, il.filter_by_space, disk_space=2.1, use_age=True, source='min_value' ) def test_filter_result_by_date_no_field_raise(self): client = Mock() client.info.return_value = {'version': {'number': '5.0.0'} } client.indices.get_settings.return_value = testvars.settings_four client.cluster.state.return_value = testvars.clu_state_four client.indices.stats.return_value = testvars.stats_four client.field_stats.return_value = testvars.fieldstats_four il = curator.IndexList(client) self.assertRaises(curator.MissingArgument, il.filter_by_space, disk_space=2.1, use_age=True, source='field_stats' ) def test_filter_result_by_date_invalid_stats_result_raise(self): client = Mock() client.info.return_value = {'version': {'number': '5.0.0'} } client.indices.get_settings.return_value = testvars.settings_four client.cluster.state.return_value = testvars.clu_state_four client.indices.stats.return_value = testvars.stats_four client.field_stats.return_value = testvars.fieldstats_four il = curator.IndexList(client) self.assertRaises(ValueError, il.filter_by_space, disk_space=2.1, use_age=True, source='field_stats', field='timestamp', stats_result='invalid' ) def test_filter_result_by_date_field_stats(self): client = Mock() client.info.return_value = {'version': {'number': '5.0.0'} } client.indices.get_settings.return_value = testvars.settings_four client.cluster.state.return_value = 
testvars.clu_state_four client.indices.stats.return_value = testvars.stats_four client.field_stats.return_value = testvars.fieldstats_four il = curator.IndexList(client) il.filter_by_space( disk_space=2.1, use_age=True, source='field_stats', field='timestamp' ) self.assertEqual(['a-2016.03.03'], il.indices) def test_filter_result_by_creation_date(self): client = Mock() client.info.return_value = {'version': {'number': '5.0.0'} } client.indices.get_settings.return_value = testvars.settings_four client.cluster.state.return_value = testvars.clu_state_four client.indices.stats.return_value = testvars.stats_four client.field_stats.return_value = testvars.fieldstats_four il = curator.IndexList(client) il.filter_by_space(disk_space=2.1, use_age=True) self.assertEqual(['a-2016.03.03'], il.indices) class TestIndexListFilterKibana(TestCase): def test_filter_kibana_positive(self): client = Mock() client.info.return_value = {'version': {'number': '5.0.0'} } client.indices.get_settings.return_value = testvars.settings_two client.cluster.state.return_value = testvars.clu_state_two client.indices.stats.return_value = testvars.stats_two client.field_stats.return_value = testvars.fieldstats_two il = curator.IndexList(client) # Establish the object per requirements, then overwrite il.indices = ['.kibana', '.marvel-kibana', 'kibana-int', '.marvel-es-data', 'dummy'] il.filter_kibana() self.assertEqual(['dummy'], il.indices) def test_filter_kibana_positive_exclude(self): client = Mock() client.info.return_value = {'version': {'number': '5.0.0'} } client.indices.get_settings.return_value = testvars.settings_two client.cluster.state.return_value = testvars.clu_state_two client.indices.stats.return_value = testvars.stats_two client.field_stats.return_value = testvars.fieldstats_two il = curator.IndexList(client) # Establish the object per requirements, then overwrite kibana_indices = [ '.kibana', '.marvel-kibana', 'kibana-int', '.marvel-es-data'] il.indices = kibana_indices il.indices.append('dummy') il.filter_kibana(exclude=True) self.assertEqual(kibana_indices, il.indices) def test_filter_kibana_negative(self): client = Mock() client.info.return_value = {'version': {'number': '5.0.0'} } client.indices.get_settings.return_value = testvars.settings_two client.cluster.state.return_value = testvars.clu_state_two client.indices.stats.return_value = testvars.stats_two client.field_stats.return_value = testvars.fieldstats_two il = curator.IndexList(client) # Establish the object per requirements, then overwrite il.indices = ['kibana', 'marvel-kibana', 'cabana-int', 'marvel-es-data', 'dummy'] il.filter_kibana() self.assertEqual( ['kibana', 'marvel-kibana', 'cabana-int', 'marvel-es-data', 'dummy'], il.indices ) class TestIndexListFilterForceMerged(TestCase): def test_filter_forcemerge_raise(self): client = Mock() client.info.return_value = {'version': {'number': '5.0.0'} } client.indices.get_settings.return_value = testvars.settings_one client.cluster.state.return_value = testvars.clu_state_one client.indices.stats.return_value = testvars.stats_one client.indices.segments.return_value = testvars.shards il = curator.IndexList(client) self.assertRaises(curator.MissingArgument, il.filter_forceMerged) def test_filter_forcemerge_positive(self): client = Mock() client.info.return_value = {'version': {'number': '5.0.0'} } client.indices.get_settings.return_value = testvars.settings_one client.cluster.state.return_value = testvars.clu_state_one client.indices.stats.return_value = testvars.stats_one 
client.indices.segments.return_value = testvars.shards il = curator.IndexList(client) il.filter_forceMerged(max_num_segments=2) self.assertEqual([testvars.named_index], il.indices) def test_filter_forcemerge_negative(self): client = Mock() client.info.return_value = {'version': {'number': '5.0.0'} } client.indices.get_settings.return_value = testvars.settings_one client.cluster.state.return_value = testvars.clu_state_one client.indices.stats.return_value = testvars.stats_one client.indices.segments.return_value = testvars.fm_shards il = curator.IndexList(client) il.filter_forceMerged(max_num_segments=2) self.assertEqual([], il.indices) class TestIndexListFilterOpened(TestCase): def test_filter_opened(self): client = Mock() client.info.return_value = {'version': {'number': '5.0.0'} } client.indices.get_settings.return_value = testvars.settings_four client.cluster.state.return_value = testvars.clu_state_four client.indices.stats.return_value = testvars.stats_four client.field_stats.return_value = testvars.fieldstats_four il = curator.IndexList(client) il.filter_opened() self.assertEqual(['c-2016.03.05'], il.indices) class TestIndexListFilterAllocated(TestCase): def test_missing_key(self): client = Mock() client.info.return_value = {'version': {'number': '5.0.0'} } client.indices.get_settings.return_value = testvars.settings_two client.cluster.state.return_value = testvars.clu_state_two client.indices.stats.return_value = testvars.stats_two il = curator.IndexList(client) self.assertRaises( curator.MissingArgument, il.filter_allocated, value='foo', allocation_type='invalid' ) def test_missing_value(self): client = Mock() client.info.return_value = {'version': {'number': '5.0.0'} } client.indices.get_settings.return_value = testvars.settings_two client.cluster.state.return_value = testvars.clu_state_two client.indices.stats.return_value = testvars.stats_two il = curator.IndexList(client) self.assertRaises( curator.MissingArgument, il.filter_allocated, key='tag', allocation_type='invalid' ) def test_invalid_allocation_type(self): client = Mock() client.info.return_value = {'version': {'number': '5.0.0'} } client.indices.get_settings.return_value = testvars.settings_two client.cluster.state.return_value = testvars.clu_state_two client.indices.stats.return_value = testvars.stats_two il = curator.IndexList(client) self.assertRaises( ValueError, il.filter_allocated, key='tag', value='foo', allocation_type='invalid' ) def test_success(self): client = Mock() client.info.return_value = {'version': {'number': '5.0.0'} } client.indices.get_settings.return_value = testvars.settings_two client.cluster.state.return_value = testvars.clu_state_two client.indices.stats.return_value = testvars.stats_two il = curator.IndexList(client) il.filter_allocated(key='tag', value='foo', allocation_type='include') self.assertEqual(['index-2016.03.04'], il.indices) def test_invalid_tag(self): client = Mock() client.info.return_value = {'version': {'number': '5.0.0'} } client.indices.get_settings.return_value = testvars.settings_two client.cluster.state.return_value = testvars.clu_state_two client.indices.stats.return_value = testvars.stats_two il = curator.IndexList(client) il.filter_allocated( key='invalid', value='foo', allocation_type='include') self.assertEqual( ['index-2016.03.03','index-2016.03.04'], sorted(il.indices)) class TestIterateFiltersIndex(TestCase): def test_no_filters(self): client = Mock() client.info.return_value = {'version': {'number': '5.0.0'} } client.indices.get_settings.return_value = 
testvars.settings_four client.cluster.state.return_value = testvars.clu_state_four client.indices.stats.return_value = testvars.stats_four ilo = curator.IndexList(client) ilo.iterate_filters({}) self.assertEqual( ['a-2016.03.03', 'b-2016.03.04', 'c-2016.03.05', 'd-2016.03.06'], sorted(ilo.indices) ) def test_no_filtertype(self): client = Mock() client.info.return_value = {'version': {'number': '5.0.0'} } client.indices.get_settings.return_value = testvars.settings_four client.cluster.state.return_value = testvars.clu_state_four client.indices.stats.return_value = testvars.stats_four ilo = curator.IndexList(client) config = {'filters': [{'no_filtertype':'fail'}]} self.assertRaises( curator.ConfigurationError, ilo.iterate_filters, config) def test_invalid_filtertype(self): client = Mock() client.info.return_value = {'version': {'number': '5.0.0'} } client.indices.get_settings.return_value = testvars.settings_four client.cluster.state.return_value = testvars.clu_state_four client.indices.stats.return_value = testvars.stats_four ilo = curator.IndexList(client) config = {'filters': [{'filtertype':12345.6789}]} self.assertRaises( curator.ConfigurationError, ilo.iterate_filters, config) def test_pattern_filtertype(self): client = Mock() client.info.return_value = {'version': {'number': '5.0.0'} } client.indices.get_settings.return_value = testvars.settings_four client.cluster.state.return_value = testvars.clu_state_four client.indices.stats.return_value = testvars.stats_four ilo = curator.IndexList(client) config = yaml.load(testvars.pattern_ft)['actions'][1] ilo.iterate_filters(config) self.assertEqual(['a-2016.03.03'], ilo.indices) def test_age_filtertype(self): client = Mock() client.info.return_value = {'version': {'number': '5.0.0'} } client.indices.get_settings.return_value = testvars.settings_two client.cluster.state.return_value = testvars.clu_state_two client.indices.stats.return_value = testvars.stats_two ilo = curator.IndexList(client) config = yaml.load(testvars.age_ft)['actions'][1] ilo.iterate_filters(config) self.assertEqual(['index-2016.03.03'], ilo.indices) def test_space_filtertype(self): client = Mock() client.info.return_value = {'version': {'number': '5.0.0'} } client.indices.get_settings.return_value = testvars.settings_four client.cluster.state.return_value = testvars.clu_state_four client.indices.stats.return_value = testvars.stats_four client.field_stats.return_value = testvars.fieldstats_four ilo = curator.IndexList(client) config = yaml.load(testvars.space_ft)['actions'][1] ilo.iterate_filters(config) self.assertEqual(['a-2016.03.03'], ilo.indices) def test_forcemerge_filtertype(self): client = Mock() client.info.return_value = {'version': {'number': '5.0.0'} } client.indices.get_settings.return_value = testvars.settings_one client.cluster.state.return_value = testvars.clu_state_one client.indices.stats.return_value = testvars.stats_one client.indices.segments.return_value = testvars.shards ilo = curator.IndexList(client) config = yaml.load(testvars.forcemerge_ft)['actions'][1] ilo.iterate_filters(config) self.assertEqual([testvars.named_index], ilo.indices) def test_allocated_filtertype(self): client = Mock() client.info.return_value = {'version': {'number': '5.0.0'} } client.indices.get_settings.return_value = testvars.settings_two client.cluster.state.return_value = testvars.clu_state_two client.indices.stats.return_value = testvars.stats_two ilo = curator.IndexList(client) config = yaml.load(testvars.allocated_ft)['actions'][1] ilo.iterate_filters(config) 
self.assertEqual(['index-2016.03.04'], ilo.indices) def test_kibana_filtertype(self): client = Mock() client.info.return_value = {'version': {'number': '5.0.0'} } client.indices.get_settings.return_value = testvars.settings_two client.cluster.state.return_value = testvars.clu_state_two client.indices.stats.return_value = testvars.stats_two client.field_stats.return_value = testvars.fieldstats_two ilo = curator.IndexList(client) # Establish the object per requirements, then overwrite ilo.indices = [ '.kibana', '.marvel-kibana', 'kibana-int', '.marvel-es-data', 'dummy' ] config = yaml.load(testvars.kibana_ft)['actions'][1] ilo.iterate_filters(config) self.assertEqual(['dummy'], ilo.indices) def test_opened_filtertype(self): client = Mock() client.info.return_value = {'version': {'number': '5.0.0'} } client.indices.get_settings.return_value = testvars.settings_four client.cluster.state.return_value = testvars.clu_state_four client.indices.stats.return_value = testvars.stats_four client.field_stats.return_value = testvars.fieldstats_four ilo = curator.IndexList(client) config = yaml.load(testvars.opened_ft)['actions'][1] ilo.iterate_filters(config) self.assertEqual(['c-2016.03.05'], ilo.indices) def test_closed_filtertype(self): client = Mock() client.info.return_value = {'version': {'number': '5.0.0'} } client.indices.get_settings.return_value = testvars.settings_four client.cluster.state.return_value = testvars.clu_state_four client.indices.stats.return_value = testvars.stats_four client.field_stats.return_value = testvars.fieldstats_four ilo = curator.IndexList(client) config = yaml.load(testvars.closed_ft)['actions'][1] ilo.iterate_filters(config) self.assertEqual( ['a-2016.03.03','b-2016.03.04','d-2016.03.06'], sorted(ilo.indices)) def test_none_filtertype(self): client = Mock() client.info.return_value = {'version': {'number': '5.0.0'} } client.indices.get_settings.return_value = testvars.settings_two client.cluster.state.return_value = testvars.clu_state_two client.indices.stats.return_value = testvars.stats_two ilo = curator.IndexList(client) config = yaml.load(testvars.none_ft)['actions'][1] ilo.iterate_filters(config) self.assertEqual( ['index-2016.03.03', 'index-2016.03.04'], sorted(ilo.indices)) def test_unknown_filtertype_raises(self): client = Mock() client.info.return_value = {'version': {'number': '5.0.0'} } client.indices.get_settings.return_value = testvars.settings_two client.cluster.state.return_value = testvars.clu_state_two client.indices.stats.return_value = testvars.stats_two ilo = curator.IndexList(client) config = yaml.load(testvars.invalid_ft)['actions'][1] self.assertRaises( curator.ConfigurationError, ilo.iterate_filters, config ) class TestIndexListFilterAlias(TestCase): def test_raise(self): client = Mock() client.info.return_value = {'version': {'number': '5.0.0'} } client.indices.get_settings.return_value = testvars.settings_one client.cluster.state.return_value = testvars.clu_state_one client.indices.stats.return_value = testvars.stats_one il = curator.IndexList(client) self.assertRaises(curator.MissingArgument, il.filter_by_alias) def test_positive(self): client = Mock() client.info.return_value = {'version': {'number': '5.0.0'} } client.indices.get_settings.return_value = testvars.settings_two client.cluster.state.return_value = testvars.clu_state_two client.indices.stats.return_value = testvars.stats_two client.indices.get_alias.return_value = testvars.settings_2_get_aliases il = curator.IndexList(client) il.filter_by_alias(aliases=['my_alias']) 
self.assertEqual( sorted(list(testvars.settings_two.keys())), sorted(il.indices)) def test_negative(self): client = Mock() client.info.return_value = {'version': {'number': '5.0.0'} } client.indices.get_settings.return_value = testvars.settings_two client.cluster.state.return_value = testvars.clu_state_two client.indices.stats.return_value = testvars.stats_two client.indices.get_alias.return_value = {} il = curator.IndexList(client) il.filter_by_alias(aliases=['not_my_alias']) self.assertEqual( sorted([]), sorted(il.indices)) def test_get_alias_raises(self): client = Mock() client.info.return_value = {'version': {'number': '5.0.0'} } client.indices.get_settings.return_value = testvars.settings_two client.cluster.state.return_value = testvars.clu_state_two client.indices.stats.return_value = testvars.stats_two client.indices.get_alias.side_effect = testvars.get_alias_fail client.indices.get_alias.return_value = testvars.settings_2_get_aliases il = curator.IndexList(client) il.filter_by_alias(aliases=['my_alias']) self.assertEqual( sorted([]), sorted(il.indices)) class TestIndexListFilterCount(TestCase): def test_raise(self): client = Mock() client.info.return_value = {'version': {'number': '5.0.0'} } client.indices.get_settings.return_value = testvars.settings_one client.cluster.state.return_value = testvars.clu_state_one client.indices.stats.return_value = testvars.stats_one il = curator.IndexList(client) self.assertRaises(curator.MissingArgument, il.filter_by_count) def test_without_age(self): client = Mock() client.info.return_value = {'version': {'number': '5.0.0'} } client.indices.get_settings.return_value = testvars.settings_two client.cluster.state.return_value = testvars.clu_state_two client.indices.stats.return_value = testvars.stats_two client.indices.get_alias.return_value = testvars.settings_2_get_aliases il = curator.IndexList(client) il.filter_by_count(count=1) self.assertEqual([u'index-2016.03.03'], il.indices) def test_without_age_reversed(self): client = Mock() client.info.return_value = {'version': {'number': '5.0.0'} } client.indices.get_settings.return_value = testvars.settings_two client.cluster.state.return_value = testvars.clu_state_two client.indices.stats.return_value = testvars.stats_two client.indices.get_alias.return_value = testvars.settings_2_get_aliases il = curator.IndexList(client) il.filter_by_count(count=1, reverse=False) self.assertEqual([u'index-2016.03.04'], il.indices) def test_with_age(self): client = Mock() client.info.return_value = {'version': {'number': '5.0.0'} } client.indices.get_settings.return_value = testvars.settings_two client.cluster.state.return_value = testvars.clu_state_two client.indices.stats.return_value = testvars.stats_two client.indices.get_alias.return_value = testvars.settings_2_get_aliases il = curator.IndexList(client) il.filter_by_count( count=1, use_age=True, source='name', timestring='%Y.%m.%d' ) self.assertEqual([u'index-2016.03.03'], il.indices) def test_with_age_creation_date(self): client = Mock() client.info.return_value = {'version': {'number': '5.0.0'} } client.indices.get_settings.return_value = testvars.settings_two client.cluster.state.return_value = testvars.clu_state_two client.indices.stats.return_value = testvars.stats_two client.indices.get_alias.return_value = testvars.settings_2_get_aliases il = curator.IndexList(client) il.filter_by_count(count=1, use_age=True) self.assertEqual([u'index-2016.03.03'], il.indices) def test_with_age_reversed(self): client = Mock() client.info.return_value = {'version': 
{'number': '5.0.0'} } client.indices.get_settings.return_value = testvars.settings_two client.cluster.state.return_value = testvars.clu_state_two client.indices.stats.return_value = testvars.stats_two client.indices.get_alias.return_value = testvars.settings_2_get_aliases il = curator.IndexList(client) il.filter_by_count( count=1, use_age=True, source='name', timestring='%Y.%m.%d', reverse=False ) self.assertEqual([u'index-2016.03.04'], il.indices) class TestIndexListPeriodFilterName(TestCase): def test_get_name_based_age_in_range(self): unit = 'days' range_from = -1 range_to = 0 timestring = '%Y.%m.%d' epoch = 1456963201 expected = ['index-2016.03.03'] client = Mock() client.info.return_value = {'version': {'number': '5.0.0'} } client.indices.get_settings.return_value = testvars.settings_two client.cluster.state.return_value = testvars.clu_state_two client.indices.stats.return_value = testvars.stats_two il = curator.IndexList(client) il.filter_period(unit=unit, range_from=range_from, range_to=range_to, source='name', timestring=timestring, epoch=epoch) self.assertEqual(expected, il.indices) def test_get_name_based_age_not_in_range(self): unit = 'days' range_from = -3 range_to = -2 timestring = '%Y.%m.%d' epoch = 1456963201 expected = [] client = Mock() client.info.return_value = {'version': {'number': '5.0.0'} } client.indices.get_settings.return_value = testvars.settings_two client.cluster.state.return_value = testvars.clu_state_two client.indices.stats.return_value = testvars.stats_two il = curator.IndexList(client) il.filter_period(unit=unit, range_from=range_from, range_to=range_to, source='name', timestring=timestring, epoch=epoch) self.assertEqual(expected, il.indices) def test_bad_arguments(self): unit = 'days' range_from = -2 range_to = -3 timestring = '%Y.%m.%d' epoch = 1456963201 expected = [] client = Mock() client.info.return_value = {'version': {'number': '5.0.0'} } client.indices.get_settings.return_value = testvars.settings_two client.cluster.state.return_value = testvars.clu_state_two client.indices.stats.return_value = testvars.stats_two il = curator.IndexList(client) self.assertRaises(curator.FailedExecution, il.filter_period, unit=unit, range_from=range_from, range_to=range_to, source='name', timestring=timestring, epoch=epoch ) def test_missing_creation_date_raises(self): unit = 'days' range_from = -1 range_to = 0 epoch = 1456963201 expected = [] client = Mock() client.info.return_value = {'version': {'number': '5.0.0'} } client.indices.get_settings.return_value = testvars.settings_two client.cluster.state.return_value = testvars.clu_state_two client.indices.stats.return_value = testvars.stats_two il = curator.IndexList(client) il.index_info['index-2016.03.03']['age'].pop('creation_date') il.index_info['index-2016.03.04']['age'].pop('creation_date') il.filter_period(unit=unit, range_from=range_from, range_to=range_to, source='creation_date', epoch=epoch) self.assertEqual(expected, il.indices)curator-5.2.0/test/unit/test_class_snapshot_list.py000066400000000000000000000515261315226075300226400ustar00rootroot00000000000000from unittest import TestCase from mock import Mock, patch import elasticsearch import yaml import curator # Get test variables and constants from a single source from . 
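# --- Editor's note (illustrative sketch, not part of the curator source tree):
# every test in the file above rebuilds the same mocked Elasticsearch client
# by hand. A small factory helper along the following lines would remove most
# of that boilerplate; the helper name and defaults are hypothetical.

from mock import Mock

def mock_es_client(settings, state, stats, version='5.0.0'):
    # Satisfies the calls curator.IndexList makes at construction time:
    # info(), indices.get_settings(), cluster.state(), indices.stats().
    client = Mock()
    client.info.return_value = {'version': {'number': version}}
    client.indices.get_settings.return_value = settings
    client.cluster.state.return_value = state
    client.indices.stats.return_value = stats
    return client

# Hypothetical usage inside one of the tests above:
#   il = curator.IndexList(mock_es_client(
#       testvars.settings_two, testvars.clu_state_two, testvars.stats_two))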
curator-5.2.0/test/unit/test_class_snapshot_list.py000066400000000000000000000515261315226075300226340ustar00rootroot00000000000000from unittest import TestCase
from mock import Mock, patch
import elasticsearch
import yaml
import curator
# Get test variables and constants from a single source
from . import testvars as testvars

class TestSnapshotListClientAndInit(TestCase):
    def test_init_bad_client(self):
        client = 'not a real client'
        self.assertRaises(TypeError, curator.SnapshotList, client)
    def test_init_no_repo_exception(self):
        client = Mock()
        self.assertRaises(curator.MissingArgument, curator.SnapshotList, client)
    def test_init_get_snapshots_exception(self):
        client = Mock()
        client.snapshot.get.return_value = testvars.snapshots
        client.snapshot.get.side_effect = testvars.fake_fail
        client.snapshot.get_repository.return_value = {}
        self.assertRaises(
            curator.FailedExecution,
            curator.SnapshotList, client, repository=testvars.repo_name
        )
    def test_init(self):
        client = Mock()
        client.snapshot.get.return_value = testvars.snapshots
        client.snapshot.get_repository.return_value = testvars.test_repo
        sl = curator.SnapshotList(client, repository=testvars.repo_name)
        self.assertEqual(testvars.snapshots['snapshots'], sl.all_snapshots)
        self.assertEqual(
            ['snap_name','snapshot-2015.03.01'], sorted(sl.snapshots)
        )

class TestSnapshotListOtherMethods(TestCase):
    def test_empty_list(self):
        client = Mock()
        client.snapshot.get.return_value = testvars.snapshots
        client.snapshot.get_repository.return_value = testvars.test_repo
        sl = curator.SnapshotList(client, repository=testvars.repo_name)
        self.assertEqual(2, len(sl.snapshots))
        sl.snapshots = []
        self.assertRaises(curator.NoSnapshots, sl.empty_list_check)
    def test_working_list(self):
        client = Mock()
        client.snapshot.get.return_value = testvars.snapshots
        client.snapshot.get_repository.return_value = testvars.test_repo
        sl = curator.SnapshotList(client, repository=testvars.repo_name)
        self.assertEqual(['snap_name', 'snapshot-2015.03.01'], sl.working_list())

class TestSnapshotListAgeFilterName(TestCase):
    def test_get_name_based_ages_match(self):
        client = Mock()
        client.snapshot.get.return_value = testvars.snapshots
        client.snapshot.get_repository.return_value = testvars.test_repo
        sl = curator.SnapshotList(client, repository=testvars.repo_name)
        sl._get_name_based_ages('%Y.%m.%d')
        self.assertEqual(1425168000,
            sl.snapshot_info['snapshot-2015.03.01']['age_by_name']
        )
    def test_get_name_based_ages_no_match(self):
        client = Mock()
        client.snapshot.get.return_value = testvars.snapshots
        client.snapshot.get_repository.return_value = testvars.test_repo
        sl = curator.SnapshotList(client, repository=testvars.repo_name)
        sl._get_name_based_ages('%Y.%m.%d')
        self.assertIsNone(sl.snapshot_info['snap_name']['age_by_name'])

class TestSnapshotListStateFilter(TestCase):
    def test_success_inclusive(self):
        client = Mock()
        client.snapshot.get.return_value = testvars.snapshots
        client.snapshot.get_repository.return_value = testvars.test_repo
        sl = curator.SnapshotList(client, repository=testvars.repo_name)
        sl.filter_by_state(state='SUCCESS')
        self.assertEqual(
            [u'snap_name', u'snapshot-2015.03.01'], sorted(sl.snapshots)
        )
    def test_success_exclusive(self):
        client = Mock()
        client.snapshot.get.return_value = testvars.inprogress
        client.snapshot.get_repository.return_value = testvars.test_repo
        sl = curator.SnapshotList(client, repository=testvars.repo_name)
        sl.filter_by_state(state='SUCCESS', exclude=True)
        self.assertEqual([u'snapshot-2015.03.01'], sorted(sl.snapshots))
    def test_invalid_state(self):
        client = Mock()
        client.snapshot.get.return_value = testvars.snapshots
        client.snapshot.get_repository.return_value = testvars.test_repo
        sl = curator.SnapshotList(client, repository=testvars.repo_name)
        self.assertRaises(ValueError, sl.filter_by_state, state='invalid')

class TestSnapshotListRegexFilters(TestCase):
    def test_filter_by_regex_prefix(self):
        client = Mock()
        client.snapshot.get.return_value = testvars.snapshots
        client.snapshot.get_repository.return_value = testvars.test_repo
        sl = curator.SnapshotList(client, repository=testvars.repo_name)
        self.assertEqual(
            [u'snap_name', u'snapshot-2015.03.01'], sorted(sl.snapshots)
        )
        sl.filter_by_regex(kind='prefix', value='sna')
        self.assertEqual(
            [u'snap_name', u'snapshot-2015.03.01'], sorted(sl.snapshots)
        )
        sl.filter_by_regex(kind='prefix', value='sna', exclude=True)
        self.assertEqual([], sl.snapshots)
    def test_filter_by_regex_prefix_exclude(self):
        client = Mock()
        client.snapshot.get.return_value = testvars.snapshots
        client.snapshot.get_repository.return_value = testvars.test_repo
        sl = curator.SnapshotList(client, repository=testvars.repo_name)
        self.assertEqual(
            [u'snap_name', u'snapshot-2015.03.01'], sorted(sl.snapshots)
        )
        sl.filter_by_regex(kind='prefix', value='snap_', exclude=True)
        self.assertEqual([u'snapshot-2015.03.01'], sl.snapshots)
    def test_filter_by_regex_timestring(self):
        client = Mock()
        client.snapshot.get.return_value = testvars.snapshots
        client.snapshot.get_repository.return_value = testvars.test_repo
        sl = curator.SnapshotList(client, repository=testvars.repo_name)
        self.assertEqual(
            [u'snap_name', u'snapshot-2015.03.01'], sorted(sl.snapshots)
        )
        sl.filter_by_regex(kind='timestring', value='%Y.%m.%d')
        self.assertEqual([u'snapshot-2015.03.01'], sorted(sl.snapshots))
        sl.filter_by_regex(kind='timestring', value='%Y.%m.%d', exclude=True)
        self.assertEqual([], sl.snapshots)
    def test_filter_by_regex_no_value(self):
        client = Mock()
        client.snapshot.get.return_value = testvars.snapshots
        client.snapshot.get_repository.return_value = testvars.test_repo
        sl = curator.SnapshotList(client, repository=testvars.repo_name)
        self.assertEqual(
            [u'snap_name', u'snapshot-2015.03.01'], sorted(sl.snapshots)
        )
        self.assertRaises(ValueError, sl.filter_by_regex, kind='prefix', value=None)
        self.assertEqual(
            [u'snap_name', u'snapshot-2015.03.01'], sorted(sl.snapshots)
        )
        sl.filter_by_regex(kind='prefix', value=0)
        self.assertEqual([], sl.snapshots)
    def test_filter_by_regex_bad_kind(self):
        client = Mock()
        client.snapshot.get.return_value = testvars.snapshots
        client.snapshot.get_repository.return_value = testvars.test_repo
        sl = curator.SnapshotList(client, repository=testvars.repo_name)
        self.assertEqual(
            [u'snap_name', u'snapshot-2015.03.01'], sorted(sl.snapshots)
        )
        self.assertRaises(
            ValueError, sl.filter_by_regex, kind='invalid', value=None)

class TestSnapshotListFilterByAge(TestCase):
    def test_filter_by_age_missing_direction(self):
        client = Mock()
        client.snapshot.get.return_value = testvars.snapshots
        client.snapshot.get_repository.return_value = testvars.test_repo
        sl = curator.SnapshotList(client, repository=testvars.repo_name)
        self.assertRaises(curator.MissingArgument, sl.filter_by_age,
            unit='days', unit_count=1
        )
    def test_filter_by_age_bad_direction(self):
        client = Mock()
        client.snapshot.get.return_value = testvars.snapshots
        client.snapshot.get_repository.return_value = testvars.test_repo
        sl = curator.SnapshotList(client, repository=testvars.repo_name)
        self.assertRaises(ValueError, sl.filter_by_age,
            unit='days', unit_count=1, direction="invalid"
        )
    def test_filter_by_age_invalid_source(self):
        client = Mock()
        client.snapshot.get.return_value = testvars.snapshots
        client.snapshot.get_repository.return_value = testvars.test_repo
        sl = curator.SnapshotList(client, repository=testvars.repo_name)
        self.assertRaises(ValueError, sl.filter_by_age,
            unit='days', source='invalid', unit_count=1, direction="older"
        )
    def test_filter_by_age__name_no_timestring(self):
        client = Mock()
        client.snapshot.get.return_value = testvars.snapshots
        client.snapshot.get_repository.return_value = testvars.test_repo
        sl = curator.SnapshotList(client, repository=testvars.repo_name)
        self.assertRaises(curator.MissingArgument, sl.filter_by_age,
            source='name', unit='days', unit_count=1, direction='older'
        )
    def test_filter_by_age__name_older_than_now(self):
        client = Mock()
        client.snapshot.get.return_value = testvars.snapshots
        client.snapshot.get_repository.return_value = testvars.test_repo
        sl = curator.SnapshotList(client, repository=testvars.repo_name)
        sl.filter_by_age(source='name', direction='older',
            timestring='%Y.%m.%d', unit='days', unit_count=1
        )
        self.assertEqual(['snapshot-2015.03.01'], sl.snapshots)
    def test_filter_by_age__name_younger_than_now(self):
        client = Mock()
        client.snapshot.get.return_value = testvars.snapshots
        client.snapshot.get_repository.return_value = testvars.test_repo
        sl = curator.SnapshotList(client, repository=testvars.repo_name)
        sl.filter_by_age(source='name', direction='younger',
            timestring='%Y.%m.%d', unit='days', unit_count=1
        )
        self.assertEqual([], sl.snapshots)
    def test_filter_by_age__name_younger_than_past_date(self):
        client = Mock()
        client.snapshot.get.return_value = testvars.snapshots
        client.snapshot.get_repository.return_value = testvars.test_repo
        sl = curator.SnapshotList(client, repository=testvars.repo_name)
        sl.filter_by_age(source='name', direction='younger',
            timestring='%Y.%m.%d', unit='seconds', unit_count=0,
            epoch=1422748800
        )
        self.assertEqual(['snapshot-2015.03.01'], sl.snapshots)
    def test_filter_by_age__name_older_than_past_date(self):
        client = Mock()
        client.snapshot.get.return_value = testvars.snapshots
        client.snapshot.get_repository.return_value = testvars.test_repo
        sl = curator.SnapshotList(client, repository=testvars.repo_name)
        sl.filter_by_age(source='name', direction='older',
            timestring='%Y.%m.%d', unit='seconds', unit_count=0,
            epoch=1456963200
        )
        self.assertEqual(['snapshot-2015.03.01'], sl.snapshots)
    def test_filter_by_age__creation_date_older_than_now(self):
        client = Mock()
        client.snapshot.get.return_value = testvars.snapshots
        client.snapshot.get_repository.return_value = testvars.test_repo
        sl = curator.SnapshotList(client, repository=testvars.repo_name)
        sl.filter_by_age(direction='older', unit='days', unit_count=1)
        self.assertEqual(
            ['snap_name', 'snapshot-2015.03.01'], sorted(sl.snapshots))
    def test_filter_by_age__creation_date_younger_than_now(self):
        client = Mock()
        client.snapshot.get.return_value = testvars.snapshots
        client.snapshot.get_repository.return_value = testvars.test_repo
        sl = curator.SnapshotList(client, repository=testvars.repo_name)
        sl.filter_by_age(direction='younger', timestring='%Y.%m.%d',
            unit='days', unit_count=1
        )
        self.assertEqual([], sl.snapshots)
    def test_filter_by_age__creation_date_younger_than_past_date(self):
        client = Mock()
        client.snapshot.get.return_value = testvars.snapshots
        client.snapshot.get_repository.return_value = testvars.test_repo
        sl = curator.SnapshotList(client, repository=testvars.repo_name)
        sl.filter_by_age(direction='younger', timestring='%Y.%m.%d',
            unit='seconds', unit_count=0, epoch=1422748801
        )
        self.assertEqual(['snapshot-2015.03.01'], sl.snapshots)
    def test_filter_by_age__creation_date_older_than_past_date(self):
        client = Mock()
        client.snapshot.get.return_value = testvars.snapshots
        client.snapshot.get_repository.return_value = testvars.test_repo
        sl = curator.SnapshotList(client, repository=testvars.repo_name)
        sl.filter_by_age(direction='older', timestring='%Y.%m.%d',
            unit='seconds', unit_count=0, epoch=1425168001
        )
        self.assertEqual(['snap_name'], sl.snapshots)

class TestIterateFiltersSnaps(TestCase):
    def test_no_filters(self):
        client = Mock()
        client.snapshot.get.return_value = testvars.snapshots
        client.snapshot.get_repository.return_value = testvars.test_repo
        slo = curator.SnapshotList(client, repository=testvars.repo_name)
        slo.iterate_filters({})
        self.assertEqual(
            ['snap_name', 'snapshot-2015.03.01'], sorted(slo.snapshots)
        )
    def test_no_filtertype(self):
        client = Mock()
        client.snapshot.get.return_value = testvars.snapshots
        client.snapshot.get_repository.return_value = testvars.test_repo
        slo = curator.SnapshotList(client, repository=testvars.repo_name)
        config = {'filters': [{'no_filtertype':'fail'}]}
        self.assertRaises(
            curator.ConfigurationError, slo.iterate_filters, config)
    def test_invalid_filtertype_class(self):
        client = Mock()
        client.snapshot.get.return_value = testvars.snapshots
        client.snapshot.get_repository.return_value = testvars.test_repo
        slo = curator.SnapshotList(client, repository=testvars.repo_name)
        config = {'filters': [{'filtertype':12345.6789}]}
        self.assertRaises(
            curator.ConfigurationError, slo.iterate_filters, config)
    def test_invalid_filtertype(self):
        client = Mock()
        client.snapshot.get.return_value = testvars.snapshots
        client.snapshot.get_repository.return_value = testvars.test_repo
        slo = curator.SnapshotList(client, repository=testvars.repo_name)
        config = yaml.load(testvars.invalid_ft)['actions'][1]
        self.assertRaises(
            curator.ConfigurationError, slo.iterate_filters, config
        )
    def test_age_filtertype(self):
        client = Mock()
        client.snapshot.get.return_value = testvars.snapshots
        client.snapshot.get_repository.return_value = testvars.test_repo
        slo = curator.SnapshotList(client, repository=testvars.repo_name)
        config = yaml.load(testvars.snap_age_ft)['actions'][1]
        slo.iterate_filters(config)
        self.assertEqual(
            ['snap_name', 'snapshot-2015.03.01'], sorted(slo.snapshots))
    def test_pattern_filtertype(self):
        client = Mock()
        client.snapshot.get.return_value = testvars.snapshots
        client.snapshot.get_repository.return_value = testvars.test_repo
        slo = curator.SnapshotList(client, repository=testvars.repo_name)
        config = yaml.load(testvars.snap_pattern_ft)['actions'][1]
        slo.iterate_filters(config)
        self.assertEqual(
            ['snap_name', 'snapshot-2015.03.01'], sorted(slo.snapshots))
    def test_none_filtertype(self):
        client = Mock()
        client.snapshot.get.return_value = testvars.snapshots
        client.snapshot.get_repository.return_value = testvars.test_repo
        slo = curator.SnapshotList(client, repository=testvars.repo_name)
        config = yaml.load(testvars.snap_none_ft)['actions'][1]
        slo.iterate_filters(config)
        self.assertEqual(
            ['snap_name', 'snapshot-2015.03.01'], sorted(slo.snapshots))

class TestSnapshotListFilterCount(TestCase):
    def test_missing_count(self):
        client = Mock()
        client.snapshot.get.return_value = testvars.snapshots
        client.snapshot.get_repository.return_value = testvars.test_repo
        slo = curator.SnapshotList(client, repository=testvars.repo_name)
        self.assertRaises(curator.MissingArgument, slo.filter_by_count)
    def test_without_age(self):
        client = Mock()
        client.snapshot.get.return_value = testvars.snapshots
        client.snapshot.get_repository.return_value = testvars.test_repo
        slo = curator.SnapshotList(client, repository=testvars.repo_name)
        slo.filter_by_count(count=1)
        self.assertEqual(['snap_name'], slo.snapshots)
    def test_without_age_reversed(self):
        client = Mock()
        client.snapshot.get.return_value = testvars.snapshots
        client.snapshot.get_repository.return_value = testvars.test_repo
        slo = curator.SnapshotList(client, repository=testvars.repo_name)
        slo.filter_by_count(count=1, reverse=False)
        self.assertEqual(['snapshot-2015.03.01'], slo.snapshots)
    def test_with_age(self):
        client = Mock()
        client.snapshot.get.return_value = testvars.snapshots
        client.snapshot.get_repository.return_value = testvars.test_repo
        slo = curator.SnapshotList(client, repository=testvars.repo_name)
        slo.filter_by_count(count=1, source='creation_date', use_age=True)
        self.assertEqual(['snap_name'], slo.snapshots)
    def test_with_age_reversed(self):
        client = Mock()
        client.snapshot.get.return_value = testvars.snapshots
        client.snapshot.get_repository.return_value = testvars.test_repo
        slo = curator.SnapshotList(client, repository=testvars.repo_name)
        slo.filter_by_count(
            count=1, source='creation_date', use_age=True, reverse=False
        )
        self.assertEqual(['snapshot-2015.03.01'], slo.snapshots)
    def test_sort_by_age(self):
        client = Mock()
        client.snapshot.get.return_value = testvars.snapshots
        client.snapshot.get_repository.return_value = testvars.test_repo
        slo = curator.SnapshotList(client, repository=testvars.repo_name)
        slo._calculate_ages()
        slo.age_keyfield = 'invalid'
        snaps = slo.snapshots
        slo._sort_by_age(snaps)
        self.assertEqual(['snapshot-2015.03.01'], slo.snapshots)

class TestSnapshotListPeriodFilter(TestCase):
    def test_bad_args(self):
        unit = 'days'
        range_from = -1
        range_to = -2
        timestring = '%Y.%m.%d'
        epoch = 1456963201
        expected = curator.FailedExecution
        client = Mock()
        client.snapshot.get.return_value = testvars.snapshots
        client.snapshot.get_repository.return_value = testvars.test_repo
        sl = curator.SnapshotList(client, repository=testvars.repo_name)
        self.assertRaises(expected, sl.filter_period,
            unit=unit, range_from=range_from, range_to=range_to,
            source='name', timestring=timestring, epoch=epoch
        )
    def test_in_range(self):
        unit = 'days'
        range_from = -2
        range_to = 2
        timestring = '%Y.%m.%d'
        epoch = 1425168000
        expected = ['snapshot-2015.03.01']
        client = Mock()
        client.snapshot.get.return_value = testvars.snapshots
        client.snapshot.get_repository.return_value = testvars.test_repo
        sl = curator.SnapshotList(client, repository=testvars.repo_name)
        sl.filter_period(source='name', range_from=range_from, epoch=epoch,
            range_to=range_to, timestring='%Y.%m.%d', unit=unit,
        )
        self.assertEqual(expected, sl.snapshots)
    def test_not_in_range(self):
        unit = 'days'
        range_from = 2
        range_to = 4
        timestring = '%Y.%m.%d'
        epoch = 1425168000
        expected = []
        client = Mock()
        client.snapshot.get.return_value = testvars.snapshots
        client.snapshot.get_repository.return_value = testvars.test_repo
        sl = curator.SnapshotList(client, repository=testvars.repo_name)
        sl.filter_period(source='name', range_from=range_from, epoch=epoch,
            range_to=range_to, timestring='%Y.%m.%d', unit=unit,
        )
        self.assertEqual(expected, sl.snapshots)
    def test_no_creation_date(self):
        unit = 'days'
        range_from = -2
        range_to = 2
        epoch = 1456963201
        expected = []
        client = Mock()
        client.snapshot.get.return_value = testvars.snapshots
        client.snapshot.get_repository.return_value = testvars.test_repo
        sl = curator.SnapshotList(client, repository=testvars.repo_name)
        sl.snapshot_info['snap_name']['start_time_in_millis'] = None
        sl.snapshot_info['snapshot-2015.03.01']['start_time_in_millis'] = None
        sl.filter_period(source='creation_date', range_from=range_from,
            epoch=epoch, range_to=range_to, unit=unit,
        )
        self.assertEqual(expected, sl.snapshots)
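# --- Editor's note (illustrative sketch, not part of the curator source tree):
# as the tests above demonstrate, SnapshotList filter methods mutate the
# object's .snapshots list in place, so filters compose by being called in
# sequence. A minimal sketch of that pattern against a live cluster; the host
# and repository name here are placeholders, not taken from the tests.

import elasticsearch
import curator

client = elasticsearch.Elasticsearch(hosts=['localhost:9200'])
slo = curator.SnapshotList(client, repository='my_repo')
# Each call below narrows slo.snapshots to the snapshots that survive it.
slo.filter_by_state(state='SUCCESS')
slo.filter_by_age(source='creation_date', direction='older',
    unit='days', unit_count=30)
print(slo.snapshots)  # SUCCESS snapshots more than 30 days old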
curator-5.2.0/test/unit/test_cli_methods.py000066400000000000000000000050231315226075300210420ustar00rootroot00000000000000import sys
import logging
from unittest import TestCase
from mock import Mock, patch, mock_open
import elasticsearch
import curator
from curator import _version as __version__
from . import CLITestCase
# Get test variables and constants from a single source
from . import testvars as testvars

class TestCLI_A(TestCase):
    def test_read_file_no_file(self):
        self.assertRaises(TypeError, curator.read_file)
    def test_loginfo_defaults(self):
        loginfo = curator.LogInfo({})
        self.assertEqual(20, loginfo.numeric_log_level)
        self.assertEqual(testvars.default_format, loginfo.format_string)
    def test_loginfo_debug(self):
        loginfo = curator.LogInfo({"loglevel": "DEBUG"})
        self.assertEqual(10, loginfo.numeric_log_level)
        self.assertEqual(testvars.debug_format, loginfo.format_string)
    def test_loginfo_bad_level_raises(self):
        self.assertRaises(
            ValueError, curator.LogInfo, {"loglevel": "NOTALOGLEVEL"}
        )
    def test_loginfo_logstash_formatter(self):
        loginfo = curator.LogInfo({"logformat": "logstash"})
        logging.root.addHandler(loginfo.handler)
        logging.root.setLevel(loginfo.numeric_log_level)
        logger = logging.getLogger('testing')
        logger.info('testing')
        self.assertEqual(20, loginfo.numeric_log_level)
    def test_client_options_certificate(self):
        a = {'use_ssl':True, 'certificate':'invalid_path'}
        self.assertRaises(
            curator.FailedExecution, curator.test_client_options, a
        )
    def test_client_options_client_cert(self):
        a = {'use_ssl':True, 'client_cert':'invalid_path'}
        self.assertRaises(
            curator.FailedExecution, curator.test_client_options, a
        )
    def test_client_options_client_key(self):
        a = {'use_ssl':True, 'client_key':'invalid_path'}
        self.assertRaises(
            curator.FailedExecution, curator.test_client_options, a
        )

class TestCLI_B(CLITestCase):
    def test_read_file_pass(self):
        cfg = curator.get_yaml(self.args['yamlfile'])
        self.assertEqual('localhost', cfg['client']['hosts'])
        self.assertEqual(9200, cfg['client']['port'])
    def test_read_file_corrupt_fail(self):
        self.assertRaises(curator.ConfigurationError, curator.get_yaml,
            self.args['invalid_yaml'])
    def test_read_file_missing_fail(self):
        self.assertRaises(
            curator.FailedExecution, curator.read_file,
            self.args['no_file_here']
        )
curator-5.2.0/test/unit/test_utils.py000066400000000000000000001331571315226075300177150ustar00rootroot00000000000000from datetime import datetime, timedelta
from unittest import TestCase
from mock import Mock
import elasticsearch
import yaml
from . import testvars as testvars
import curator

class TestEnsureList(TestCase):
    def test_ensure_list_returns_lists(self):
        l = ["a", "b", "c", "d"]
        e = ["a", "b", "c", "d"]
        self.assertEqual(e, curator.ensure_list(l))
        l = "abcd"
        e = ["abcd"]
        self.assertEqual(e, curator.ensure_list(l))
        l = [["abcd","defg"], 1, 2, 3]
        e = [["abcd","defg"], 1, 2, 3]
        self.assertEqual(e, curator.ensure_list(l))
        l = {"a":"b", "c":"d"}
        e = [{"a":"b", "c":"d"}]
        self.assertEqual(e, curator.ensure_list(l))

class TestTo_CSV(TestCase):
    def test_to_csv_will_return_csv(self):
        l = ["a", "b", "c", "d"]
        c = "a,b,c,d"
        self.assertEqual(c, curator.to_csv(l))
    def test_to_csv_will_return_single(self):
        l = ["a"]
        c = "a"
        self.assertEqual(c, curator.to_csv(l))
    def test_to_csv_will_return_None(self):
        l = []
        self.assertIsNone(curator.to_csv(l))

class TestCheckCSV(TestCase):
    def test_check_csv_positive(self):
        c = "1,2,3"
        self.assertTrue(curator.check_csv(c))
    def test_check_csv_negative(self):
        c = "12345"
        self.assertFalse(curator.check_csv(c))
    def test_check_csv_list(self):
        l = ["1", "2", "3"]
        self.assertTrue(curator.check_csv(l))
    def test_check_csv_unicode(self):
        u = u'test'
        self.assertFalse(curator.check_csv(u))
    def test_check_csv_wrong_value(self):
        v = 123
        self.assertRaises(TypeError, curator.check_csv, v)

class TestGetVersion(TestCase):
    def test_positive(self):
        client = Mock()
        client.info.return_value = {'version': {'number': '9.9.9'} }
        version = curator.get_version(client)
        self.assertEqual(version, (9,9,9))
    def test_negative(self):
        client = Mock()
        client.info.return_value = {'version': {'number': '9.9.9'} }
        version = curator.get_version(client)
        self.assertNotEqual(version, (8,8,8))
    def test_dev_version_4_dots(self):
        client = Mock()
        client.info.return_value = {'version': {'number': '9.9.9.dev'} }
        version = curator.get_version(client)
        self.assertEqual(version, (9,9,9))
    def test_dev_version_with_dash(self):
        client = Mock()
        client.info.return_value = {'version': {'number': '9.9.9-dev'} }
        version = curator.get_version(client)
        self.assertEqual(version, (9,9,9))

class TestIsMasterNode(TestCase):
    def test_positive(self):
        client = Mock()
        client.nodes.info.return_value = { 'nodes': { "foo" : "bar"} }
        client.cluster.state.return_value = { "master_node" : "foo" }
        self.assertTrue(curator.is_master_node(client))
    def test_negative(self):
        client = Mock()
        client.nodes.info.return_value = { 'nodes': { "bad" : "mojo"} }
        client.cluster.state.return_value = { "master_node" : "foo" }
        self.assertFalse(curator.is_master_node(client))

class TestGetIndexTime(TestCase):
    def test_get_datetime(self):
        for text, datestring, dt in [
            ('2014.01.19', '%Y.%m.%d', datetime(2014, 1, 19)),
            ('14.01.19', '%y.%m.%d', datetime(2014, 1, 19)),
            ('2014-01-19', '%Y-%m-%d', datetime(2014, 1, 19)),
            ('2010-12-29', '%Y-%m-%d', datetime(2010, 12, 29)),
            ('2012-12', '%Y-%m', datetime(2012, 12, 1)),
            ('2011.01', '%Y.%m', datetime(2011, 1, 1)),
            ('2014-28', '%Y-%W', datetime(2014, 7, 14)),
            ('2014-28', '%Y-%U', datetime(2014, 7, 14)),
            ('2010.12.29.12', '%Y.%m.%d.%H', datetime(2010, 12, 29, 12)),
            ('2009101112136', '%Y%m%d%H%M%S', datetime(2009, 10, 11, 12, 13, 6)),
            ('2016-03-30t16', '%Y-%m-%dt%H', datetime(2016, 3, 30, 16, 0)),
            # ISO weeks
            # In 2014, ISO week numbers ran one week ahead of Gregorian week numbers
            ('2014-42', '%Y-%W', datetime(2014, 10, 20)),
            ('2014-42', '%G-%V', datetime(2014, 10, 13)),
            ('2014-43', '%G-%V', datetime(2014, 10, 20)),
            # ('2008-52', '%G-%V', datetime(2008, 12, 22)),
            ('2008-52', '%Y-%W', datetime(2008, 12, 29)),
            ('2009-01', '%Y-%W', datetime(2009, 1, 5)),
            ('2009-01', '%G-%V', datetime(2008, 12, 29)),
            # The case when both ISO and Gregorian give the same week number
            ('2017-16', '%Y-%W', datetime(2017, 4, 17)),
            ('2017-16', '%G-%V', datetime(2017, 4, 17)),
            # Weeks where a leading 0 is needed for the week number
            ('2017-02', '%Y-%W', datetime(2017, 1, 9)),
            ('2017-02', '%G-%V', datetime(2017, 1, 9)),
            ('2010-01', '%G-%V', datetime(2010, 1, 4)),
            ('2010-01', '%Y-%W', datetime(2010, 1, 4)),
            # In Gregorian numbering, week 53 of 2009 doesn't exist; it converts to week 1 of the next year.
            ('2009-53', '%Y-%W', datetime(2010, 1, 4)),
            ('2009-53', '%G-%V', datetime(2009, 12, 28)),
        ]:
            self.assertEqual(dt, curator.get_datetime(text, datestring))
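# --- Editor's note (illustrative sketch, not part of the curator source tree):
# the %G-%V rows in the table above use ISO-8601 week numbering, which can
# disagree with the Gregorian %Y-%W reading by a week; the tests show
# curator.get_datetime() resolving both forms without an explicit weekday.
# The same divergence can be reproduced with the standard library alone
# (Python 3.6+), with the caveat that strptime() accepts %G-%V only when a
# weekday directive accompanies it.

from datetime import datetime

# Gregorian reading: week 01 starts at the year's first Monday (%w: 1 == Monday).
greg = datetime.strptime('2014-42-1', '%Y-%W-%w')  # -> datetime(2014, 10, 20)
# ISO reading: week 01 is the week containing January 4th (%u: 1 == Monday).
iso = datetime.strptime('2014-42-1', '%G-%V-%u')   # -> datetime(2014, 10, 13)
assert (greg - iso).days == 7  # ISO week numbers run a week ahead in 2014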
class TestFixEpoch(TestCase):
    def test_fix_epoch(self):
        for long_epoch, epoch in [
            (1459287636999, 1459287636),
            (1459287636000000, 1459287636),
            (145928763600000000, 1459287636),
            (145928763600000001, 1459287636),
            (1459287636123456789, 1459287636),
            (1459287636999, 1459287636),
        ]:
            self.assertEqual(epoch, curator.fix_epoch(long_epoch))
    def test_fix_epoch_raise(self):
        self.assertRaises(ValueError, curator.fix_epoch, 12345678901)

class TestGetPointOfReference(TestCase):
    def test_get_point_of_reference(self):
        epoch = 1459288037
        for unit, result in [
            ('seconds', epoch-1),
            ('minutes', epoch-60),
            ('hours', epoch-3600),
            ('days', epoch-86400),
            ('weeks', epoch-(86400*7)),
            ('months', epoch-(86400*30)),
            ('years', epoch-(86400*365)),
        ]:
            self.assertEqual(result, curator.get_point_of_reference(unit, 1, epoch))
    def test_get_por_raise(self):
        self.assertRaises(ValueError, curator.get_point_of_reference, 'invalid', 1)

class TestByteSize(TestCase):
    def test_byte_size(self):
        size = 3*1024*1024*1024*1024*1024*1024*1024
        unit = ['Z','E','P','T','G','M','K','']
        for i in range(0,7):
            self.assertEqual('3.0{0}B'.format(unit[i]), curator.byte_size(size))
            size /= 1024
    def test_byte_size_yotta(self):
        size = 3*1024*1024*1024*1024*1024*1024*1024*1024
        self.assertEqual('3.0YB', curator.byte_size(size))
    def test_raise_invalid(self):
        self.assertRaises(TypeError, curator.byte_size, 'invalid')
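# Illustrative arithmetic, not from the original file: fix_epoch reduces
# oversized timestamps (milliseconds, microseconds, or nanoseconds since the
# epoch) to a 10-digit seconds value, and get_point_of_reference subtracts
# whole units from a seconds epoch, with a month approximated as 30 days and
# a year as 365 days in the assertions above:
#   1459287636999 ms        // 10**3 -> 1459287636 s
#   1459287636123456789 ns  // 10**9 -> 1459287636 s
#   one day back from 1459288037     -> 1459288037 - 86400 = 1459201637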
class TestChunkIndexList(TestCase):
    def test_big_list(self):
        indices = []
        for i in range(100,150):
            indices.append("superlongindexnamebyanystandardyouchoosethisissillyhowbigcanthisgetbeforeitbreaks" + str(i))
        self.assertEqual(2, len(curator.chunk_index_list(indices)))
    def test_small_list(self):
        self.assertEqual(1, len(curator.chunk_index_list(['short','list','of','indices'])))

class TestGetIndices(TestCase):
    def test_client_exception(self):
        client = Mock()
        client.info.return_value = {'version': {'number': '5.0.0'}}
        client.indices.get_settings.return_value = testvars.settings_two
        client.indices.get_settings.side_effect = testvars.fake_fail
        self.assertRaises(curator.FailedExecution, curator.get_indices, client)
    def test_positive(self):
        client = Mock()
        client.indices.get_settings.return_value = testvars.settings_two
        client.info.return_value = {'version': {'number': '5.0.0'}}
        self.assertEqual(
            ['index-2016.03.03', 'index-2016.03.04'],
            sorted(curator.get_indices(client))
        )
    def test_empty(self):
        client = Mock()
        client.info.return_value = {'version': {'number': '5.0.0'}}
        client.indices.get_settings.return_value = {}
        self.assertEqual([], curator.get_indices(client))

class TestCheckVersion(TestCase):
    def test_check_version_(self):
        client = Mock()
        client.info.return_value = {'version': {'number': '5.0.2'}}
        self.assertIsNone(curator.check_version(client))
    def test_check_version_less_than(self):
        client = Mock()
        client.info.return_value = {'version': {'number': '2.4.3'}}
        self.assertRaises(curator.CuratorException, curator.check_version, client)
    def test_check_version_greater_than(self):
        client = Mock()
        client.info.return_value = {'version': {'number': '6.0.1'}}
        self.assertRaises(curator.CuratorException, curator.check_version, client)

class TestCheckMaster(TestCase):
    def test_check_master_positive(self):
        client = Mock()
        client.nodes.info.return_value = {'nodes': {"foo": "bar"}}
        client.cluster.state.return_value = {"master_node": "foo"}
        self.assertIsNone(curator.check_master(client, master_only=True))
    def test_check_master_negative(self):
        client = Mock()
        client.nodes.info.return_value = {'nodes': {"bad": "mojo"}}
        client.cluster.state.return_value = {"master_node": "foo"}
        with self.assertRaises(SystemExit) as cm:
            curator.check_master(client, master_only=True)
        self.assertEqual(cm.exception.code, 0)
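# Note, inferred only from the assertions above: with master_only=True,
# check_master exits with status 0 (not an error) on a node that is not the
# elected master, so the same scheduled job can be installed on every node
# while only the current master actually proceeds.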
class TestGetClient(TestCase):
    # These unit test cases can't really get a client object, so it's more for
    # code coverage than anything
    def test_url_prefix_none(self):
        kwargs = {'url_prefix': None, 'use_ssl': True, 'ssl_no_validate': True}
        self.assertRaises(
            elasticsearch.ElasticsearchException,
            curator.get_client, **kwargs
        )
    def test_url_prefix_none_str(self):
        kwargs = {'url_prefix': 'None', 'use_ssl': True, 'ssl_no_validate': True}
        self.assertRaises(
            elasticsearch.ElasticsearchException,
            curator.get_client, **kwargs
        )
    def test_master_only_multiple_hosts(self):
        kwargs = {'url_prefix': '', 'master_only': True, 'hosts': ['127.0.0.1', '127.0.0.1']}
        self.assertRaises(
            curator.ConfigurationError,
            curator.get_client, **kwargs
        )
    def test_host_with_hosts(self):
        kwargs = {'url_prefix': '', 'host': '127.0.0.1', 'hosts': ['127.0.0.2']}
        self.assertRaises(
            curator.ConfigurationError,
            curator.get_client, **kwargs
        )
    def test_certificate_logic(self):
        kwargs = {'use_ssl': True, 'certificate': 'mycert.pem'}
        self.assertRaises(
            elasticsearch.ElasticsearchException,
            curator.get_client, **kwargs
        )
    def test_client_cert_logic(self):
        kwargs = {'use_ssl': True, 'client_cert': 'myclientcert.pem'}
        self.assertRaises(
            elasticsearch.ElasticsearchException,
            curator.get_client, **kwargs
        )
    def test_client_key_logic(self):
        kwargs = {'use_ssl': True, 'client_key': 'myclientkey.pem'}
        self.assertRaises(
            elasticsearch.ElasticsearchException,
            curator.get_client, **kwargs
        )
    def test_certificate_no_verify_logic(self):
        kwargs = {'use_ssl': True, 'ssl_no_validate': True}
        self.assertRaises(
            elasticsearch.ElasticsearchException,
            curator.get_client, **kwargs
        )

class TestShowDryRun(TestCase):
    # For now, since it's a pain to capture logging output, this is just a
    # simple code coverage run
    def test_index_list(self):
        client = Mock()
        client.info.return_value = {'version': {'number': '5.0.0'}}
        client.indices.get_settings.return_value = testvars.settings_two
        client.cluster.state.return_value = testvars.clu_state_two
        client.indices.stats.return_value = testvars.stats_two
        client.field_stats.return_value = testvars.fieldstats_two
        il = curator.IndexList(client)
        self.assertIsNone(curator.show_dry_run(il, 'test_action'))
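# A minimal sketch, assuming only the keyword arguments exercised above:
# get_client takes connection settings (hosts/port, url_prefix, the SSL
# options, master_only) and raises ConfigurationError for contradictory
# combinations such as 'host' together with 'hosts', or master_only with
# multiple hosts. A plausible well-formed call, with placeholder values:
#
#     client = curator.get_client(hosts=['127.0.0.1'], port=9200,
#                                 use_ssl=False, master_only=False)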
class TestGetRepository(TestCase):
    def test_get_repository_missing_arg(self):
        client = Mock()
        client.snapshot.get_repository.return_value = {}
        self.assertEqual({}, curator.get_repository(client))
    def test_get_repository_positive(self):
        client = Mock()
        client.snapshot.get_repository.return_value = testvars.test_repo
        self.assertEqual(testvars.test_repo, curator.get_repository(client, repository=testvars.repo_name))
    def test_get_repository_transporterror_negative(self):
        client = Mock()
        client.snapshot.get_repository.side_effect = elasticsearch.TransportError(503,'foo','bar')
        self.assertRaises(
            curator.CuratorException,
            curator.get_repository, client, repository=testvars.repo_name
        )
    def test_get_repository_notfounderror_negative(self):
        client = Mock()
        client.snapshot.get_repository.side_effect = elasticsearch.NotFoundError(404,'foo','bar')
        self.assertRaises(
            curator.CuratorException,
            curator.get_repository, client, repository=testvars.repo_name
        )
    def test_get_repository__all_positive(self):
        client = Mock()
        client.snapshot.get_repository.return_value = testvars.test_repos
        self.assertEqual(testvars.test_repos, curator.get_repository(client))

class TestGetSnapshot(TestCase):
    def test_get_snapshot_missing_repository_arg(self):
        client = Mock()
        self.assertRaises(
            curator.MissingArgument,
            curator.get_snapshot, client, snapshot=testvars.snap_name
        )
    def test_get_snapshot_positive(self):
        client = Mock()
        client.snapshot.get.return_value = testvars.snapshot
        self.assertEqual(testvars.snapshot, curator.get_snapshot(client, repository=testvars.repo_name, snapshot=testvars.snap_name))
    def test_get_snapshot_transporterror_negative(self):
        client = Mock()
        client.snapshot.get_repository.return_value = testvars.test_repo
        client.snapshot.get.side_effect = testvars.four_oh_one
        self.assertRaises(
            curator.FailedExecution,
            curator.get_snapshot, client,
            repository=testvars.repo_name, snapshot=testvars.snap_name
        )
    def test_get_snapshot_notfounderror_negative(self):
        client = Mock()
        client.snapshot.get_repository.return_value = testvars.test_repo
        client.snapshot.get.side_effect = elasticsearch.NotFoundError(404, 'Snapshot not found')
        self.assertRaises(
            curator.FailedExecution,
            curator.get_snapshot, client,
            repository=testvars.repo_name, snapshot=testvars.snap_name
        )

class TestGetSnapshotData(TestCase):
    def test_missing_repo_arg(self):
        client = Mock()
        self.assertRaises(curator.MissingArgument, curator.get_snapshot_data, client)
    def test_return_data(self):
        client = Mock()
        client.snapshot.get.return_value = testvars.snapshots
        client.snapshot.get_repository.return_value = testvars.test_repo
        self.assertEqual(
            testvars.snapshots['snapshots'],
            curator.get_snapshot_data(client, repository=testvars.repo_name)
        )
    def test_raises_exception_onfail(self):
        client = Mock()
        client.snapshot.get.return_value = testvars.snapshots
        client.snapshot.get.side_effect = testvars.four_oh_one
        client.snapshot.get_repository.return_value = testvars.test_repo
        self.assertRaises(
            curator.FailedExecution,
            curator.get_snapshot_data, client, repository=testvars.repo_name
        )

class TestSnapshotInProgress(TestCase):
    def test_all_snapshots_for_in_progress(self):
        client = Mock()
        client.snapshot.get.return_value = testvars.inprogress
        client.snapshot.get_repository.return_value = testvars.test_repo
        self.assertEqual(
            'snapshot-2015.03.01',
            curator.snapshot_in_progress(client, repository=testvars.repo_name)
        )
    def test_specified_snapshot_in_progress(self):
        client = Mock()
        client.snapshot.get.return_value = testvars.inprogress
        client.snapshot.get_repository.return_value = testvars.test_repo
        self.assertEqual(
            'snapshot-2015.03.01',
            curator.snapshot_in_progress(
                client, repository=testvars.repo_name, snapshot='snapshot-2015.03.01'
            )
        )
    def test_specified_snapshot_in_progress_negative(self):
        client = Mock()
        client.snapshot.get.return_value = testvars.inprogress
        client.snapshot.get_repository.return_value = testvars.test_repo
        self.assertFalse(
            curator.snapshot_in_progress(
                client, repository=testvars.repo_name, snapshot=testvars.snap_name
            )
        )
    def test_all_snapshots_for_in_progress_negative(self):
        client = Mock()
        client.snapshot.get.return_value = testvars.snapshots
        client.snapshot.get_repository.return_value = testvars.test_repo
        self.assertFalse(
            curator.snapshot_in_progress(client, repository=testvars.repo_name)
        )
    def test_for_multiple_in_progress(self):
        client = Mock()
        client.snapshot.get.return_value = testvars.highly_unlikely
        client.snapshot.get_repository.return_value = testvars.test_repo
        self.assertRaises(
            curator.CuratorException,
            curator.snapshot_in_progress, client, repository=testvars.repo_name
        )
class TestCreateSnapshotBody(TestCase):
    def test_create_snapshot_body_empty_arg(self):
        self.assertFalse(curator.create_snapshot_body([]))
    def test_create_snapshot_body__all_positive(self):
        self.assertEqual(testvars.snap_body_all, curator.create_snapshot_body('_all'))
    def test_create_snapshot_body_positive(self):
        self.assertEqual(testvars.snap_body, curator.create_snapshot_body(testvars.named_indices))

class TestCreateRepoBody(TestCase):
    def test_missing_repo_type(self):
        self.assertRaises(curator.MissingArgument, curator.create_repo_body)
    def test_s3(self):
        body = curator.create_repo_body(repo_type='s3')
        self.assertEqual(body['type'], 's3')

class TestCreateRepository(TestCase):
    def test_missing_arg(self):
        client = Mock()
        self.assertRaises(curator.MissingArgument, curator.create_repository, client)
    def test_empty_result_call(self):
        client = Mock()
        client.snapshot.get_repository.return_value = None
        self.assertTrue(curator.create_repository(client, repository="repo", repo_type="fs"))
    def test_repo_not_in_results(self):
        client = Mock()
        client.snapshot.get_repository.return_value = {'not_your_repo':{'foo':'bar'}}
        self.assertTrue(curator.create_repository(client, repository="repo", repo_type="fs"))
    def test_repo_already_in_results(self):
        client = Mock()
        client.snapshot.get_repository.return_value = {'repo':{'foo':'bar'}}
        self.assertRaises(
            curator.FailedExecution,
            curator.create_repository, client, repository="repo", repo_type="fs"
        )
    def test_raises_exception(self):
        client = Mock()
        client.snapshot.get_repository.return_value = {'not_your_repo':{'foo':'bar'}}
        client.snapshot.create_repository.side_effect = elasticsearch.TransportError(500, "Error message", {"message":"Error"})
        self.assertRaises(
            curator.FailedExecution,
            curator.create_repository, client, repository="repo", repo_type="fs"
        )

class TestRepositoryExists(TestCase):
    def test_missing_arg(self):
        client = Mock()
        self.assertRaises(curator.MissingArgument, curator.repository_exists, client)
    def test_repository_in_results(self):
        client = Mock()
        client.snapshot.get_repository.return_value = {'repo':{'foo':'bar'}}
        self.assertTrue(curator.repository_exists(client, repository="repo"))
    def test_repo_not_in_results(self):
        client = Mock()
        client.snapshot.get_repository.return_value = {'not_your_repo':{'foo':'bar'}}
        self.assertFalse(curator.repository_exists(client, repository="repo"))
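# A minimal sketch, assuming only the behavior asserted above: repository
# creation is guarded by an existence check, since create_repository raises
# FailedExecution when the repository is already present. The repository
# name and type here are placeholders.
def _example_ensure_repository(client):
    if not curator.repository_exists(client, repository="repo"):
        curator.create_repository(client, repository="repo", repo_type="fs")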
class TestRepositoryFs(TestCase):
    def test_passing(self):
        client = Mock()
        client.snapshot.verify_repository.return_value = testvars.verified_nodes
        self.assertIsNone(
            curator.test_repo_fs(client, repository=testvars.repo_name))
    def test_raises_404(self):
        client = Mock()
        client.snapshot.verify_repository.return_value = testvars.verified_nodes
        client.snapshot.verify_repository.side_effect = testvars.four_oh_four
        self.assertRaises(curator.ActionError, curator.test_repo_fs, client,
            repository=testvars.repo_name)
    def test_raises_401(self):
        client = Mock()
        client.snapshot.verify_repository.return_value = testvars.verified_nodes
        client.snapshot.verify_repository.side_effect = testvars.four_oh_one
        self.assertRaises(curator.ActionError, curator.test_repo_fs, client,
            repository=testvars.repo_name)
    def test_raises_other(self):
        client = Mock()
        client.snapshot.verify_repository.return_value = testvars.verified_nodes
        client.snapshot.verify_repository.side_effect = testvars.fake_fail
        self.assertRaises(curator.ActionError, curator.test_repo_fs, client,
            repository=testvars.repo_name)

class TestSafeToSnap(TestCase):
    def test_missing_arg(self):
        client = Mock()
        self.assertRaises(curator.MissingArgument, curator.safe_to_snap, client)
    def test_in_progress_fail(self):
        client = Mock()
        client.snapshot.get.return_value = testvars.inprogress
        client.snapshot.get_repository.return_value = testvars.test_repo
        client.tasks.get.return_value = testvars.no_snap_tasks
        self.assertFalse(
            curator.safe_to_snap(
                client, repository=testvars.repo_name,
                retry_interval=0, retry_count=1
            )
        )
    def test_ongoing_tasks_fail(self):
        client = Mock()
        client.snapshot.get.return_value = testvars.snapshots
        client.snapshot.get_repository.return_value = testvars.test_repo
        client.tasks.get.return_value = testvars.snap_task
        self.assertFalse(
            curator.safe_to_snap(
                client, repository=testvars.repo_name,
                retry_interval=0, retry_count=1
            )
        )
    def test_in_progress_pass(self):
        client = Mock()
        client.snapshot.get.return_value = testvars.snapshots
        client.snapshot.get_repository.return_value = testvars.test_repo
        client.tasks.get.return_value = testvars.no_snap_tasks
        self.assertTrue(
            curator.safe_to_snap(
                client, repository=testvars.repo_name,
                retry_interval=0, retry_count=1
            )
        )

class TestSnapshotRunning(TestCase):
    def test_true(self):
        client = Mock()
        client.snapshot.status.return_value = testvars.snap_running
        self.assertTrue(curator.snapshot_running(client))
    def test_false(self):
        client = Mock()
        client.snapshot.status.return_value = testvars.nosnap_running
        self.assertFalse(curator.snapshot_running(client))
    def test_raises_exception(self):
        client = Mock()
        client.snapshot.status.return_value = testvars.nosnap_running
        client.snapshot.status.side_effect = testvars.fake_fail
        self.assertRaises(
            curator.FailedExecution, curator.snapshot_running, client)
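# Note, inferred from the three cases above: safe_to_snap returns False while
# either a snapshot is IN_PROGRESS or a snapshot-related task is still
# running, retrying up to retry_count times with retry_interval seconds
# between attempts, and returns True only once both checks come back clean.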
class TestPruneNones(TestCase):
    def test_prune_nones_with(self):
        self.assertEqual({}, curator.prune_nones({'a':None}))
    def test_prune_nones_without(self):
        a = {'foo':'bar'}
        self.assertEqual(a, curator.prune_nones(a))

class TestValidateFilters(TestCase):
    def test_snapshot_with_index_filter(self):
        self.assertRaises(
            curator.ConfigurationError,
            curator.validate_filters,
            'delete_snapshots',
            [{'filtertype': 'kibana'}]
        )
    def test_index_with_snapshot_filter(self):
        self.assertRaises(
            curator.ConfigurationError,
            curator.validate_filters,
            'delete_indices',
            [{'filtertype': 'state', 'state': 'SUCCESS'}]
        )

class TestVerifyClientObject(TestCase):
    def test_is_client_object(self):
        test = elasticsearch.Elasticsearch()
        self.assertIsNone(curator.verify_client_object(test))
    def test_is_not_client_object(self):
        test = 'not a client object'
        self.assertRaises(TypeError, curator.verify_client_object, test)
    def test_is_a_subclass_client_object(self):
        class ElasticsearchSubClass(elasticsearch.Elasticsearch):
            pass
        test = ElasticsearchSubClass()
        self.assertIsNone(curator.verify_client_object(test))

class TestRollableAlias(TestCase):
    def test_return_false_if_no_alias(self):
        client = Mock()
        client.indices.get_alias.return_value = {}
        client.indices.get_alias.side_effect = elasticsearch.NotFoundError
        self.assertFalse(curator.rollable_alias(client, 'foo'))
    def test_return_false_too_many_indices(self):
        client = Mock()
        client.indices.get_alias.return_value = testvars.not_rollable_multiple
        self.assertFalse(curator.rollable_alias(client, 'foo'))
    def test_return_false_non_numeric(self):
        client = Mock()
        client.indices.get_alias.return_value = testvars.not_rollable_non_numeric
        self.assertFalse(curator.rollable_alias(client, 'foo'))
    def test_return_true_two_digits(self):
        client = Mock()
        client.indices.get_alias.return_value = testvars.is_rollable_2digits
        self.assertTrue(curator.rollable_alias(client, 'foo'))
    def test_return_true_hypenated(self):
        client = Mock()
        client.indices.get_alias.return_value = testvars.is_rollable_hypenated
        self.assertTrue(curator.rollable_alias(client, 'foo'))
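# Note, matching the fixtures referenced above: an alias is "rollable" only
# when it points at exactly one index whose name ends in a number, either
# plain ('index-000001') or hyphen-separated ('index-2017.03.07-1').
# Multiple backing indices or a non-numeric suffix both disqualify it.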
class TestHealthCheck(TestCase):
    def test_no_kwargs(self):
        client = Mock()
        self.assertRaises(
            curator.MissingArgument, curator.health_check, client
        )
    def test_key_value_match(self):
        client = Mock()
        client.cluster.health.return_value = testvars.cluster_health
        self.assertTrue(
            curator.health_check(client, status='green')
        )
    def test_key_value_no_match(self):
        client = Mock()
        client.cluster.health.return_value = testvars.cluster_health
        self.assertFalse(
            curator.health_check(client, status='red')
        )
    def test_key_not_found(self):
        client = Mock()
        client.cluster.health.return_value = testvars.cluster_health
        self.assertRaises(
            curator.ConfigurationError, curator.health_check, client, foo='bar'
        )

class TestSnapshotCheck(TestCase):
    def test_fail_to_get_snapshot(self):
        client = Mock()
        client.snapshot.get.side_effect = testvars.fake_fail
        self.assertRaises(
            curator.CuratorException, curator.snapshot_check, client
        )
    def test_in_progress(self):
        client = Mock()
        client.snapshot.get.return_value = testvars.oneinprogress
        self.assertFalse(
            curator.snapshot_check(client, repository='foo', snapshot=testvars.snap_name)
        )
    def test_success(self):
        client = Mock()
        client.snapshot.get.return_value = testvars.snapshot
        self.assertTrue(
            curator.snapshot_check(client, repository='foo', snapshot=testvars.snap_name)
        )
    def test_partial(self):
        client = Mock()
        client.snapshot.get.return_value = testvars.partial
        self.assertTrue(
            curator.snapshot_check(client, repository='foo', snapshot=testvars.snap_name)
        )
    def test_failed(self):
        client = Mock()
        client.snapshot.get.return_value = testvars.failed
        self.assertTrue(
            curator.snapshot_check(client, repository='foo', snapshot=testvars.snap_name)
        )
    def test_other(self):
        client = Mock()
        client.snapshot.get.return_value = testvars.othersnap
        self.assertTrue(
            curator.snapshot_check(client, repository='foo', snapshot=testvars.snap_name)
        )

class TestRestoreCheck(TestCase):
    def test_fail_to_get_recovery(self):
        client = Mock()
        client.indices.recovery.side_effect = testvars.fake_fail
        self.assertRaises(
            curator.CuratorException, curator.restore_check, client, []
        )
    def test_incomplete_recovery(self):
        client = Mock()
        client.indices.recovery.return_value = testvars.unrecovered_output
        self.assertFalse(
            curator.restore_check(client, testvars.named_indices)
        )
    def test_completed_recovery(self):
        client = Mock()
        client.indices.recovery.return_value = testvars.recovery_output
        self.assertTrue(
            curator.restore_check(client, testvars.named_indices)
        )
    def test_empty_recovery(self):
        client = Mock()
        client.indices.recovery.return_value = {}
        self.assertFalse(
            curator.restore_check(client, testvars.named_indices)
        )
    def test_fix_966(self):
        client = Mock()
        client.indices.recovery.return_value = testvars.recovery_966
        self.assertTrue(
            curator.restore_check(client, testvars.index_list_966)
        )

class TestTaskCheck(TestCase):
    def test_bad_task_id(self):
        client = Mock()
        client.tasks.get.side_effect = testvars.fake_fail
        self.assertRaises(
            curator.CuratorException, curator.task_check, client, 'foo'
        )
    def test_incomplete_task(self):
        client = Mock()
        client.tasks.get.return_value = testvars.incomplete_task
        self.assertFalse(
            curator.task_check(client, task_id=testvars.generic_task['task'])
        )
    def test_complete_task(self):
        client = Mock()
        client.tasks.get.return_value = testvars.completed_task
        self.assertTrue(
            curator.task_check(client, task_id=testvars.generic_task['task'])
        )

class TestWaitForIt(TestCase):
    def test_bad_action(self):
        client = Mock()
        self.assertRaises(
            curator.ConfigurationError, curator.wait_for_it, client, 'foo')
    def test_reindex_action_no_task_id(self):
        client = Mock()
        self.assertRaises(
            curator.MissingArgument, curator.wait_for_it, client, 'reindex')
    def test_snapshot_action_no_snapshot(self):
        client = Mock()
        self.assertRaises(
            curator.MissingArgument, curator.wait_for_it,
            client, 'snapshot', repository='foo')
    def test_snapshot_action_no_repository(self):
        client = Mock()
        self.assertRaises(
            curator.MissingArgument, curator.wait_for_it,
            client, 'snapshot', snapshot='foo')
    def test_restore_action_no_indexlist(self):
        client = Mock()
        self.assertRaises(
            curator.MissingArgument, curator.wait_for_it, client, 'restore')
    def test_reindex_action_bad_task_id(self):
        client = Mock()
        client.tasks.get.return_value = {'a':'b'}
        client.tasks.get.side_effect = testvars.fake_fail
        self.assertRaises(
            curator.CuratorException, curator.wait_for_it,
            client, 'reindex', task_id='foo')
    def test_reached_max_wait(self):
        client = Mock()
        client.cluster.health.return_value = {'status':'red'}
        self.assertRaises(curator.ActionTimeout, curator.wait_for_it,
            client, 'replicas', wait_interval=1, max_wait=1
        )
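# Note, from the failure modes asserted above: wait_for_it is a single polling
# entry point whose required keyword arguments depend on the action -- reindex
# needs task_id, snapshot needs both repository and snapshot, restore needs an
# index list -- and it raises ActionTimeout once max_wait seconds elapse,
# polling every wait_interval seconds.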
class TestDateRange(TestCase):
    def test_bad_unit(self):
        self.assertRaises(curator.ConfigurationError,
            curator.date_range, 'invalid', 1, 1
        )
    def test_bad_range(self):
        self.assertRaises(curator.ConfigurationError,
            curator.date_range, 'hours', 1, -1
        )
    def test_hours_single(self):
        unit = 'hours'
        range_from = -1
        range_to = -1
        epoch = curator.datetime_to_epoch(datetime(2017, 4, 3, 22, 50, 17))
        start = curator.datetime_to_epoch(datetime(2017, 4, 3, 21, 0, 0))
        end = curator.datetime_to_epoch(datetime(2017, 4, 3, 21, 59, 59))
        self.assertEqual((start,end), curator.date_range(unit, range_from, range_to, epoch=epoch))
    def test_hours_past_range(self):
        unit = 'hours'
        range_from = -3
        range_to = -1
        epoch = curator.datetime_to_epoch(datetime(2017, 4, 3, 22, 50, 17))
        start = curator.datetime_to_epoch(datetime(2017, 4, 3, 19, 0, 0))
        end = curator.datetime_to_epoch(datetime(2017, 4, 3, 21, 59, 59))
        self.assertEqual((start,end), curator.date_range(unit, range_from, range_to, epoch=epoch))
    def test_hours_future_range(self):
        unit = 'hours'
        range_from = 0
        range_to = 2
        epoch = curator.datetime_to_epoch(datetime(2017, 4, 3, 22, 50, 17))
        start = curator.datetime_to_epoch(datetime(2017, 4, 3, 22, 0, 0))
        end = curator.datetime_to_epoch(datetime(2017, 4, 4, 00, 59, 59))
        self.assertEqual((start,end), curator.date_range(unit, range_from, range_to, epoch=epoch))
    def test_hours_span_range(self):
        unit = 'hours'
        range_from = -1
        range_to = 2
        epoch = curator.datetime_to_epoch(datetime(2017, 4, 3, 22, 50, 17))
        start = curator.datetime_to_epoch(datetime(2017, 4, 3, 21, 0, 0))
        end = curator.datetime_to_epoch(datetime(2017, 4, 4, 00, 59, 59))
        self.assertEqual((start,end), curator.date_range(unit, range_from, range_to, epoch=epoch))
    def test_days_single(self):
        unit = 'days'
        range_from = -1
        range_to = -1
        epoch = curator.datetime_to_epoch(datetime(2017, 4, 3, 22, 50, 17))
        start = curator.datetime_to_epoch(datetime(2017, 4, 2, 0, 0, 0))
        end = curator.datetime_to_epoch(datetime(2017, 4, 2, 23, 59, 59))
        self.assertEqual((start,end), curator.date_range(unit, range_from, range_to, epoch=epoch))
    def test_days_past_range(self):
        unit = 'days'
        range_from = -3
        range_to = -1
        epoch = curator.datetime_to_epoch(datetime(2017, 4, 3, 22, 50, 17))
        start = curator.datetime_to_epoch(datetime(2017, 3, 31, 0, 0, 0))
        end = curator.datetime_to_epoch(datetime(2017, 4, 2, 23, 59, 59))
        self.assertEqual((start,end), curator.date_range(unit, range_from, range_to, epoch=epoch))
    def test_days_future_range(self):
        unit = 'days'
        range_from = 0
        range_to = 2
        epoch = curator.datetime_to_epoch(datetime(2017, 4, 3, 22, 50, 17))
        start = curator.datetime_to_epoch(datetime(2017, 4, 3, 0, 0, 0))
        end = curator.datetime_to_epoch(datetime(2017, 4, 5, 23, 59, 59))
        self.assertEqual((start,end), curator.date_range(unit, range_from, range_to, epoch=epoch))
    def test_days_span_range(self):
        unit = 'days'
        range_from = -1
        range_to = 2
        epoch = curator.datetime_to_epoch(datetime(2017, 4, 3, 22, 50, 17))
        start = curator.datetime_to_epoch(datetime(2017, 4, 2, 0, 0, 0))
        end = curator.datetime_to_epoch(datetime(2017, 4, 5, 23, 59, 59))
        self.assertEqual((start,end), curator.date_range(unit, range_from, range_to, epoch=epoch))
    def test_weeks_single(self):
        unit = 'weeks'
        range_from = -1
        range_to = -1
        epoch = curator.datetime_to_epoch(datetime(2017, 4, 3, 22, 50, 17))
        start = curator.datetime_to_epoch(datetime(2017, 3, 26, 0, 0, 0))
        end = curator.datetime_to_epoch(datetime(2017, 4, 1, 23, 59, 59))
        self.assertEqual((start,end), curator.date_range(unit, range_from, range_to, epoch=epoch))
    def test_weeks_past_range(self):
        unit = 'weeks'
        range_from = -3
        range_to = -1
        epoch = curator.datetime_to_epoch(datetime(2017, 4, 3, 22, 50, 17))
        start = curator.datetime_to_epoch(datetime(2017, 3, 12, 0, 0, 0))
        end = curator.datetime_to_epoch(datetime(2017, 4, 1, 23, 59, 59))
        self.assertEqual((start,end), curator.date_range(unit, range_from, range_to, epoch=epoch))
    def test_weeks_future_range(self):
        unit = 'weeks'
        range_from = 0
        range_to = 2
        epoch = curator.datetime_to_epoch(datetime(2017, 4, 3, 22, 50, 17))
        start = curator.datetime_to_epoch(datetime(2017, 4, 2, 00, 0, 0))
        end = curator.datetime_to_epoch(datetime(2017, 4, 22, 23, 59, 59))
        self.assertEqual((start,end), curator.date_range(unit, range_from, range_to, epoch=epoch))
    def test_weeks_span_range(self):
        unit = 'weeks'
        range_from = -1
        range_to = 2
        epoch = curator.datetime_to_epoch(datetime(2017, 4, 3, 22, 50, 17))
        start = curator.datetime_to_epoch(datetime(2017, 3, 26, 0, 0, 0))
        end = curator.datetime_to_epoch(datetime(2017, 4, 22, 23, 59, 59))
        self.assertEqual((start,end), curator.date_range(unit, range_from, range_to, epoch=epoch))
    def test_weeks_single_iso(self):
        unit = 'weeks'
        range_from = -1
        range_to = -1
        epoch = curator.datetime_to_epoch(datetime(2017, 4, 3, 22, 50, 17))
        start = curator.datetime_to_epoch(datetime(2017, 3, 27, 0, 0, 0))
        end = curator.datetime_to_epoch(datetime(2017, 4, 2, 23, 59, 59))
        self.assertEqual((start,end),
            curator.date_range(unit, range_from, range_to, epoch=epoch, week_starts_on='monday')
        )
    def test_weeks_past_range_iso(self):
        unit = 'weeks'
        range_from = -3
        range_to = -1
        epoch = curator.datetime_to_epoch(datetime(2017, 4, 3, 22, 50, 17))
        start = curator.datetime_to_epoch(datetime(2017, 3, 13, 0, 0, 0))
        end = curator.datetime_to_epoch(datetime(2017, 4, 2, 23, 59, 59))
        self.assertEqual((start,end),
            curator.date_range(unit, range_from, range_to, epoch=epoch, week_starts_on='monday')
        )
    def test_weeks_future_range_iso(self):
        unit = 'weeks'
        range_from = 0
        range_to = 2
        epoch = curator.datetime_to_epoch(datetime(2017, 4, 3, 22, 50, 17))
        start = curator.datetime_to_epoch(datetime(2017, 4, 3, 0, 0, 0))
        end = curator.datetime_to_epoch(datetime(2017, 4, 23, 23, 59, 59))
        self.assertEqual((start,end),
            curator.date_range(unit, range_from, range_to, epoch=epoch, week_starts_on='monday')
        )
    def test_weeks_span_range_iso(self):
        unit = 'weeks'
        range_from = -1
        range_to = 2
        epoch = curator.datetime_to_epoch(datetime(2017, 4, 3, 22, 50, 17))
        start = curator.datetime_to_epoch(datetime(2017, 3, 27, 0, 0, 0))
        end = curator.datetime_to_epoch(datetime(2017, 4, 23, 23, 59, 59))
        self.assertEqual((start,end),
            curator.date_range(unit, range_from, range_to, epoch=epoch, week_starts_on='monday')
        )
    def test_months_single(self):
        unit = 'months'
        range_from = -1
        range_to = -1
        epoch = curator.datetime_to_epoch(datetime(2017, 4, 3, 22, 50, 17))
        start = curator.datetime_to_epoch(datetime(2017, 3, 1, 0, 0, 0))
        end = curator.datetime_to_epoch(datetime(2017, 3, 31, 23, 59, 59))
        self.assertEqual((start,end), curator.date_range(unit, range_from, range_to, epoch=epoch))
    def test_months_past_range(self):
        unit = 'months'
        range_from = -4
        range_to = -1
        epoch = curator.datetime_to_epoch(datetime(2017, 4, 3, 22, 50, 17))
        start = curator.datetime_to_epoch(datetime(2016, 12, 1, 0, 0, 0))
        end = curator.datetime_to_epoch(datetime(2017, 3, 31, 23, 59, 59))
        self.assertEqual((start,end), curator.date_range(unit, range_from, range_to, epoch=epoch))
    def test_months_future_range(self):
        unit = 'months'
        range_from = 7
        range_to = 10
        epoch = curator.datetime_to_epoch(datetime(2017, 4, 3, 22, 50, 17))
        start = curator.datetime_to_epoch(datetime(2017, 11, 1, 0, 0, 0))
        end = curator.datetime_to_epoch(datetime(2018, 2, 28, 23, 59, 59))
        self.assertEqual((start,end), curator.date_range(unit, range_from, range_to, epoch=epoch))
    def test_months_super_future_range(self):
        unit = 'months'
        range_from = 9
        range_to = 10
        epoch = curator.datetime_to_epoch(datetime(2017, 4, 3, 22, 50, 17))
        start = curator.datetime_to_epoch(datetime(2018, 1, 1, 0, 0, 0))
        end = curator.datetime_to_epoch(datetime(2018, 2, 28, 23, 59, 59))
        self.assertEqual((start,end), curator.date_range(unit, range_from, range_to, epoch=epoch))
    def test_months_span_range(self):
        unit = 'months'
        range_from = -1
        range_to = 2
        epoch = curator.datetime_to_epoch(datetime(2017, 4, 3, 22, 50, 17))
        start = curator.datetime_to_epoch(datetime(2017, 3, 1, 0, 0, 0))
        end = curator.datetime_to_epoch(datetime(2017, 6, 30, 23, 59, 59))
        self.assertEqual((start,end), curator.date_range(unit, range_from, range_to, epoch=epoch))
    def test_years_single(self):
        unit = 'years'
        range_from = -1
        range_to = -1
        epoch = curator.datetime_to_epoch(datetime(2017, 4, 3, 22, 50, 17))
        start = curator.datetime_to_epoch(datetime(2016, 1, 1, 0, 0, 0))
        end = curator.datetime_to_epoch(datetime(2016, 12, 31, 23, 59, 59))
        self.assertEqual((start,end), curator.date_range(unit, range_from, range_to, epoch=epoch))
    def test_years_past_range(self):
        unit = 'years'
        range_from = -3
        range_to = -1
        epoch = curator.datetime_to_epoch(datetime(2017, 4, 3, 22, 50, 17))
        start = curator.datetime_to_epoch(datetime(2014, 1, 1, 0, 0, 0))
        end = curator.datetime_to_epoch(datetime(2016, 12, 31, 23, 59, 59))
        self.assertEqual((start,end), curator.date_range(unit, range_from, range_to, epoch=epoch))
    def test_years_future_range(self):
        unit = 'years'
        range_from = 0
        range_to = 2
        epoch = curator.datetime_to_epoch(datetime(2017, 4, 3, 22, 50, 17))
        start = curator.datetime_to_epoch(datetime(2017, 1, 1, 0, 0, 0))
        end = curator.datetime_to_epoch(datetime(2019, 12, 31, 23, 59, 59))
        self.assertEqual((start,end), curator.date_range(unit, range_from, range_to, epoch=epoch))
    def test_years_span_range(self):
        unit = 'years'
        range_from = -1
        range_to = 2
        epoch = curator.datetime_to_epoch(datetime(2017, 4, 3, 22, 50, 17))
        start = curator.datetime_to_epoch(datetime(2016, 1, 1, 0, 0, 0))
        end = curator.datetime_to_epoch(datetime(2019, 12, 31, 23, 59, 59))
        self.assertEqual((start,end), curator.date_range(unit, range_from, range_to, epoch=epoch))
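# Illustrative summary, not from the original file: date_range treats
# range_from/range_to as whole-unit offsets from the unit containing `epoch`
# (negative = past, 0 = the current unit) and returns an inclusive
# (start, end) pair of epoch seconds snapped to unit boundaries. For the
# reference time 2017-04-03 22:50:17 used throughout:
#   date_range('hours', -1, -1, epoch=...) -> 2017-04-03 21:00:00 .. 21:59:59
#   date_range('days',  -1,  2, epoch=...) -> 2017-04-02 00:00:00 .. 2017-04-05 23:59:59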
class TestNodeRoles(TestCase):
    def test_node_roles(self):
        node_id = u'my_node'
        expected = ['data']
        client = Mock()
        client.nodes.info.return_value = {u'nodes':{node_id:{u'roles':testvars.data_only_node_role}}}
        self.assertEqual(expected, curator.node_roles(client, node_id))

class TestSingleDataPath(TestCase):
    def test_single_data_path(self):
        node_id = 'my_node'
        client = Mock()
        client.nodes.stats.return_value = {u'nodes':{node_id:{u'fs':{u'data':[u'one']}}}}
        self.assertTrue(curator.single_data_path(client, node_id))
    def test_two_data_paths(self):
        node_id = 'my_node'
        client = Mock()
        client.nodes.stats.return_value = {u'nodes':{node_id:{u'fs':{u'data':[u'one',u'two']}}}}
        self.assertFalse(curator.single_data_path(client, node_id))

class TestNameToNodeId(TestCase):
    def test_positive(self):
        node_id = 'node_id'
        node_name = 'node_name'
        client = Mock()
        client.nodes.stats.return_value = {u'nodes':{node_id:{u'name':node_name}}}
        self.assertEqual(node_id, curator.name_to_node_id(client, node_name))
    def test_negative(self):
        node_id = 'node_id'
        node_name = 'node_name'
        client = Mock()
        client.nodes.stats.return_value = {u'nodes':{node_id:{u'name':node_name}}}
        self.assertIsNone(curator.name_to_node_id(client, 'wrong_name'))

class TestNodeIdToName(TestCase):
    def test_negative(self):
        client = Mock()
        client.nodes.stats.return_value = {u'nodes':{'my_node_id':{u'name':'my_node_name'}}}
        self.assertIsNone(curator.node_id_to_name(client, 'not_my_node_id'))
curator-5.2.0/test/unit/test_validators.py000066400000000000000000000151631315226075300207260ustar00rootroot00000000000000import sys
import logging
from unittest import TestCase
from mock import Mock, patch, mock_open
from voluptuous import *

import curator

def shared_result(config, action):
    return curator.validators.SchemaCheck(
        config,
        Schema(curator.validators.filters.Filters(action)),
        'filters',
        'testing'
    ).result()

class TestFilters(TestCase):
    def test_single_raises_configuration_error(self):
        data = {'max_num_segments': 1, 'exclude': True}
        self.assertRaises(
            curator.ConfigurationError,
            curator.validators.filters.single,
            'forcemerge', data
        )

class TestFilterTypes(TestCase):
    def test_alias(self):
        action = 'delete_indices'
        config = [
            {
                'filtertype' : 'alias',
                'aliases' : ['alias1', 'alias2'],
                'exclude' : False,
            }
        ]
        self.assertEqual(config, shared_result(config, action))
    def test_age(self):
        action = 'delete_indices'
        config = [
            {
                'filtertype' : 'age',
                'direction' : 'older',
                'unit' : 'days',
                'unit_count' : 1,
                'source' : 'field_stats',
                'field' : '@timestamp',
            }
        ]
        self.assertEqual(config, shared_result(config, action))
    def test_age_with_string_unit_count(self):
        action = 'delete_indices'
        config = [
            {
                'filtertype' : 'age',
                'direction' : 'older',
                'unit' : 'days',
                'unit_count' : "1",
                'source' : 'field_stats',
                'field' : '@timestamp',
            }
        ]
        result = shared_result(config, action)
        self.assertEqual(1, result[0]['unit_count'])
    def test_allocated(self):
        action = 'delete_indices'
        config = [
            {
                'filtertype' : 'allocated',
                'key' : 'foo',
                'value' : 'bar',
                'allocation_type' : 'require',
                'exclude' : False,
            }
        ]
        self.assertEqual(config, shared_result(config, action))
    def test_closed(self):
        action = 'delete_indices'
        config = [{'filtertype' : 'closed', 'exclude' : False}]
        self.assertEqual(config, shared_result(config, action))
    def test_count(self):
        action = 'delete_indices'
        config = [
            {
                'filtertype' : 'count',
                'count' : 1,
                'reverse' : True,
                'exclude' : False,
            }
        ]
        self.assertEqual(config, shared_result(config, action))
    def test_forcemerged(self):
        action = 'delete_indices'
        config = [
            {
                'filtertype' : 'forcemerged',
                'max_num_segments' : 1,
                'exclude' : False,
            }
        ]
        self.assertEqual(config, shared_result(config, action))
    def test_kibana(self):
        action = 'delete_indices'
        config = [{'filtertype' : 'kibana', 'exclude' : False}]
        self.assertEqual(config, shared_result(config, action))
    def test_opened(self):
        action = 'delete_indices'
        config = [{'filtertype' : 'opened', 'exclude' : False}]
        self.assertEqual(config, shared_result(config, action))
    def test_space_name_age(self):
        action = 'delete_indices'
        config = [
            {
                'filtertype' : 'space',
                'disk_space' : 1,
                'use_age' : True,
                'exclude' : False,
                'source' : 'name',
                'timestring' : '%Y.%m.%d',
            }
        ]
        self.assertEqual(config, shared_result(config, action))
    def test_space_name_age_string_float(self):
        action = 'delete_indices'
        config = [
            {
                'filtertype' : 'space',
                'disk_space' : "1.0",
                'use_age' : True,
                'exclude' : False,
                'source' : 'name',
                'timestring' : '%Y.%m.%d',
            }
        ]
        result = shared_result(config, action)
        self.assertEqual(1.0, result[0]['disk_space'])
    def test_space_name_age_no_ts(self):
        action = 'delete_indices'
        config = [
            {
                'filtertype' : 'space',
                'disk_space' : 1,
                'use_age' : True,
                'exclude' : False,
                'source' : 'name',
            }
        ]
        schema = curator.validators.SchemaCheck(
            config,
            Schema(curator.validators.filters.Filters(action)),
            'filters',
            'testing'
        )
        self.assertRaises(curator.ConfigurationError, schema.result)
    def test_space_field_stats_age(self):
        action = 'delete_indices'
        config = [
            {
                'filtertype' : 'space',
                'disk_space' : 1,
                'use_age' : True,
                'exclude' : False,
                'source' : 'field_stats',
                'field' : '@timestamp',
            }
        ]
        self.assertEqual(config, shared_result(config, action))
    def test_space_field_stats_age_no_field(self):
        action = 'delete_indices'
        config = [
            {
                'filtertype' : 'space',
                'disk_space' : 1,
                'use_age' : True,
                'exclude' : False,
                'source' : 'field_stats',
            }
        ]
        schema = curator.validators.SchemaCheck(
            config,
            Schema(curator.validators.filters.Filters(action)),
            'filters',
            'testing'
        )
        self.assertRaises(curator.ConfigurationError, schema.result)
    def test_space_creation_date_age(self):
        action = 'delete_indices'
        config = [
            {
                'filtertype' : 'space',
                'disk_space' : 1,
                'use_age' : True,
                'exclude' : False,
                'source' : 'creation_date',
            }
        ]
        self.assertEqual(config, shared_result(config, action))
    def test_state(self):
        action = 'delete_snapshots'
        config = [
            {
                'filtertype' : 'state',
                'state' : 'SUCCESS',
                'exclude' : False,
            }
        ]
        self.assertEqual(config, shared_result(config, action))
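# A minimal sketch, assuming only what shared_result() above exercises: the
# validator both checks and coerces filter configuration, so string-typed
# numbers come back as numbers. The filter dict below is illustrative.
def _example_validate_filter():
    config = [{'filtertype': 'count', 'count': 1, 'reverse': True, 'exclude': False}]
    # Returns the validated (and possibly coerced) filter list, or raises
    # curator.ConfigurationError for an invalid combination.
    return shared_result(config, 'delete_indices')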
curator-5.2.0/test/unit/testvars.py000066400000000000000000001272771315226075300173740ustar00rootroot00000000000000import elasticsearch
from voluptuous import *
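# Everything below is canned Elasticsearch API response data. Each structure
# is assigned to a Mock client's return_value (or side_effect) in the unit
# tests, so the shapes mirror the wire format of the corresponding 5.x API
# responses.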
fake_fail = Exception('Simulated Failure')
four_oh_one = elasticsearch.TransportError(401, "simulated error")
four_oh_four = elasticsearch.TransportError(404, "simulated error")
get_alias_fail = elasticsearch.NotFoundError(404, "simulated error")
named_index = 'index_name'
named_indices = [ "index-2015.01.01", "index-2015.02.01" ]
open_index = {'metadata': {'indices' : { named_index : {'state' : 'open'}}}}
closed_index = {'metadata': {'indices' : { named_index : {'state' : 'close'}}}}
cat_open_index = [{'status': 'open'}]
cat_closed_index = [{'status': 'close'}]
open_indices = { 'metadata': { 'indices' : { 'index1' : { 'state' : 'open' }, 'index2' : { 'state' : 'open' }}}}
closed_indices = { 'metadata': { 'indices' : { 'index1' : { 'state' : 'close' }, 'index2' : { 'state' : 'close' }}}}
named_alias = 'alias_name'
alias_retval = { "pre_aliased_index": { "aliases" : { named_alias : { }}}}
rollable_alias = { "index-000001": { "aliases" : { named_alias : { }}}}
rollover_conditions = { 'conditions': { 'max_age': '1s' } }
dry_run_rollover = {
    "acknowledged": True,
    "shards_acknowledged": True,
    "old_index": "index-000001",
    "new_index": "index-000002",
    "rolled_over": False,
    "dry_run": True,
    "conditions": { "max_age" : "1s" }
}
aliases_retval = {
    "index1": { "aliases" : { named_alias : { } } },
    "index2": { "aliases" : { named_alias : { } } },
}
alias_one_add = [{'add': {'alias': 'alias', 'index': 'index_name'}}]
alias_one_add_with_extras = [
    {
        'add': {
            'alias': 'alias', 'index': 'index_name',
            'filter' : { 'term' : { 'user' : 'kimchy' }}
        }
    }
]
alias_one_rm = [{'remove': {'alias': 'my_alias', 'index': named_index}}]
alias_one_body = {
    "actions" : [
        {'remove': {'alias': 'alias', 'index': 'index_name'}},
        {'add': {'alias': 'alias', 'index': 'index_name'}}
    ]
}
alias_two_add = [
    {'add': {'alias': 'alias', 'index': 'index-2016.03.03'}},
    {'add': {'alias': 'alias', 'index': 'index-2016.03.04'}},
]
alias_two_rm = [
    {'remove': {'alias': 'my_alias', 'index': 'index-2016.03.03'}},
    {'remove': {'alias': 'my_alias', 'index': 'index-2016.03.04'}},
]
alias_success = { "acknowledged": True }
allocation_in = {named_index: {'settings': {'index': {'routing': {'allocation': {'require': {'foo': 'bar'}}}}}}}
allocation_out = {named_index: {'settings': {'index': {'routing': {'allocation': {'require': {'not': 'foo'}}}}}}}
indices_space = { 'indices' : {
    'index1' : { 'index' : { 'primary_size_in_bytes': 1083741824 }},
    'index2' : { 'index' : { 'primary_size_in_bytes': 1083741824 }}}}
snap_name = 'snap_name'
repo_name = 'repo_name'
test_repo = {repo_name: {'type': 'fs', 'settings': {'compress': 'true', 'location': '/tmp/repos/repo_name'}}}
test_repos = {
    'TESTING': {'type': 'fs', 'settings': {'compress': 'true', 'location': '/tmp/repos/TESTING'}},
    repo_name: {'type': 'fs', 'settings': {'compress': 'true', 'location': '/rmp/repos/repo_name'}}}
snap_running = { 'snapshots': ['running'] }
nosnap_running = { 'snapshots': [] }
snapshot = { 'snapshots': [{
    'duration_in_millis': 60000, 'start_time': '2015-02-01T00:00:00.000Z',
    'shards': {'successful': 4, 'failed': 0, 'total': 4}, 'end_time_in_millis': 0,
    'state': 'SUCCESS', 'snapshot': snap_name, 'end_time': '2015-02-01T00:00:01.000Z',
    'indices': named_indices, 'failures': [], 'start_time_in_millis': 1422748800 }]}
oneinprogress = { 'snapshots': [{
    'duration_in_millis': 60000, 'start_time': '2015-03-01T00:00:02.000Z',
    'shards': {'successful': 4, 'failed': 0, 'total': 4}, 'end_time_in_millis': 0,
    'state': 'IN_PROGRESS', 'snapshot': snap_name, 'end_time': '2015-03-01T00:00:03.000Z',
    'indices': named_indices, 'failures': [], 'start_time_in_millis': 1425168002 }]}
partial = { 'snapshots': [{
    'duration_in_millis': 60000, 'start_time': '2015-02-01T00:00:00.000Z',
    'shards': {'successful': 4, 'failed': 0, 'total': 4}, 'end_time_in_millis': 0,
    'state': 'PARTIAL', 'snapshot': snap_name, 'end_time': '2015-02-01T00:00:01.000Z',
    'indices': named_indices, 'failures': [], 'start_time_in_millis': 1422748800 }]}
failed = { 'snapshots': [{
    'duration_in_millis': 60000, 'start_time': '2015-02-01T00:00:00.000Z',
    'shards': {'successful': 4, 'failed': 0, 'total': 4}, 'end_time_in_millis': 0,
    'state': 'FAILED', 'snapshot': snap_name, 'end_time': '2015-02-01T00:00:01.000Z',
    'indices': named_indices, 'failures': [], 'start_time_in_millis': 1422748800 }]}
othersnap = { 'snapshots': [{
    'duration_in_millis': 60000, 'start_time': '2015-02-01T00:00:00.000Z',
    'shards': {'successful': 4, 'failed': 0, 'total': 4}, 'end_time_in_millis': 0,
    'state': 'SOMETHINGELSE', 'snapshot': snap_name, 'end_time': '2015-02-01T00:00:01.000Z',
    'indices': named_indices, 'failures': [], 'start_time_in_millis': 1422748800 }]}
"index-2015.01.01,index-2015.02.01" } verified_nodes = {'nodes': {'nodeid1': {'name': 'node1'}, 'nodeid2': {'name': 'node2'}}} synced_pass = { "_shards":{"total":1,"successful":1,"failed":0}, "index_name":{ "total":1,"successful":1,"failed":0, "failures":[], } } synced_fail = { "_shards":{"total":1,"successful":0,"failed":1}, "index_name":{ "total":1,"successful":0,"failed":1, "failures":[ {"shard":0,"reason":"pending operations","routing":{"state":"STARTED","primary":True,"node":"nodeid1","relocating_node":None,"shard":0,"index":"index_name"}}, ] } } sync_conflict = elasticsearch.ConflictError(409, u'{"_shards":{"total":1,"successful":0,"failed":1},"index_name":{"total":1,"successful":0,"failed":1,"failures":[{"shard":0,"reason":"pending operations","routing":{"state":"STARTED","primary":true,"node":"nodeid1","relocating_node":null,"shard":0,"index":"index_name"}}]}})', synced_fail) synced_fails = { "_shards":{"total":2,"successful":1,"failed":1}, "index1":{ "total":1,"successful":0,"failed":1, "failures":[ {"shard":0,"reason":"pending operations","routing":{"state":"STARTED","primary":True,"node":"nodeid1","relocating_node":None,"shard":0,"index":"index_name"}}, ] }, "index2":{ "total":1,"successful":1,"failed":0, "failures":[] }, } settings_one = { named_index: { u'state': u'open', u'aliases': [u'my_alias'], u'mappings': {}, u'settings': { u'index': { u'number_of_replicas': u'1', u'uuid': u'random_uuid_string_here', u'number_of_shards': u'2', u'creation_date': u'1456963200172', u'routing': {u'allocation': {u'include': {u'tag': u'foo'}}}, u'version': {u'created': u'2020099'}, u'refresh_interval': u'5s' } } } } settings_1_get_aliases = { named_index: { "aliases" : { 'my_alias' : { } } } } settings_two = { u'index-2016.03.03': { u'state': u'open', u'aliases': [u'my_alias'], u'mappings': {}, u'settings': { u'index': { u'number_of_replicas': u'1', u'uuid': u'random_uuid_string_here', u'number_of_shards': u'5', u'creation_date': u'1456963200172', u'routing': {u'allocation': {u'include': {u'tag': u'foo'}}}, u'version': {u'created': u'2020099'}, u'refresh_interval': u'5s' } } }, u'index-2016.03.04': { u'state': u'open', u'aliases': [u'my_alias'], u'mappings': {}, u'settings': { u'index': { u'number_of_replicas': u'1', u'uuid': u'another_random_uuid_string', u'number_of_shards': u'5', u'creation_date': u'1457049600812', u'routing': {u'allocation': {u'include': {u'tag': u'bar'}}}, u'version': {u'created': u'2020099'}, u'refresh_interval': u'5s' } } } } settings_2_get_aliases = { "index-2016.03.03": { "aliases" : { 'my_alias' : { } } }, "index-2016.03.04": { "aliases" : { 'my_alias' : { } } }, } settings_2_closed = { u'index-2016.03.03': { u'state': u'close', u'aliases': [u'my_alias'], u'mappings': {}, u'settings': { u'index': { u'number_of_replicas': u'1', u'uuid': u'random_uuid_string_here', u'number_of_shards': u'5', u'creation_date': u'1456963200172', u'routing': {u'allocation': {u'include': {u'tag': u'foo'}}}, u'version': {u'created': u'2020099'}, u'refresh_interval': u'5s' } } }, u'index-2016.03.04': { u'state': u'open', u'aliases': [u'my_alias'], u'mappings': {}, u'settings': { u'index': { u'number_of_replicas': u'1', u'uuid': u'another_random_uuid_string', u'number_of_shards': u'5', u'creation_date': u'1457049600812', u'routing': {u'allocation': {u'include': {u'tag': u'bar'}}}, u'version': {u'created': u'2020099'}, u'refresh_interval': u'5s' } } } } settings_two_no_cd = { u'index-2016.03.03': { u'state': u'open', u'aliases': [u'my_alias'], u'mappings': {}, u'settings': { u'index': { 
settings_one = {
    named_index: {
        u'state': u'open', u'aliases': [u'my_alias'], u'mappings': {},
        u'settings': { u'index': {
            u'number_of_replicas': u'1', u'uuid': u'random_uuid_string_here',
            u'number_of_shards': u'2', u'creation_date': u'1456963200172',
            u'routing': {u'allocation': {u'include': {u'tag': u'foo'}}},
            u'version': {u'created': u'2020099'}, u'refresh_interval': u'5s'
        } }
    }
}
settings_1_get_aliases = { named_index: { "aliases" : { 'my_alias' : { } } } }
settings_two = {
    u'index-2016.03.03': {
        u'state': u'open', u'aliases': [u'my_alias'], u'mappings': {},
        u'settings': { u'index': {
            u'number_of_replicas': u'1', u'uuid': u'random_uuid_string_here',
            u'number_of_shards': u'5', u'creation_date': u'1456963200172',
            u'routing': {u'allocation': {u'include': {u'tag': u'foo'}}},
            u'version': {u'created': u'2020099'}, u'refresh_interval': u'5s'
        } }
    },
    u'index-2016.03.04': {
        u'state': u'open', u'aliases': [u'my_alias'], u'mappings': {},
        u'settings': { u'index': {
            u'number_of_replicas': u'1', u'uuid': u'another_random_uuid_string',
            u'number_of_shards': u'5', u'creation_date': u'1457049600812',
            u'routing': {u'allocation': {u'include': {u'tag': u'bar'}}},
            u'version': {u'created': u'2020099'}, u'refresh_interval': u'5s'
        } }
    }
}
settings_2_get_aliases = {
    "index-2016.03.03": { "aliases" : { 'my_alias' : { } } },
    "index-2016.03.04": { "aliases" : { 'my_alias' : { } } },
}
settings_2_closed = {
    u'index-2016.03.03': {
        u'state': u'close', u'aliases': [u'my_alias'], u'mappings': {},
        u'settings': { u'index': {
            u'number_of_replicas': u'1', u'uuid': u'random_uuid_string_here',
            u'number_of_shards': u'5', u'creation_date': u'1456963200172',
            u'routing': {u'allocation': {u'include': {u'tag': u'foo'}}},
            u'version': {u'created': u'2020099'}, u'refresh_interval': u'5s'
        } }
    },
    u'index-2016.03.04': {
        u'state': u'open', u'aliases': [u'my_alias'], u'mappings': {},
        u'settings': { u'index': {
            u'number_of_replicas': u'1', u'uuid': u'another_random_uuid_string',
            u'number_of_shards': u'5', u'creation_date': u'1457049600812',
            u'routing': {u'allocation': {u'include': {u'tag': u'bar'}}},
            u'version': {u'created': u'2020099'}, u'refresh_interval': u'5s'
        } }
    }
}
settings_two_no_cd = {
    u'index-2016.03.03': {
        u'state': u'open', u'aliases': [u'my_alias'], u'mappings': {},
        u'settings': { u'index': {
            u'number_of_replicas': u'1', u'uuid': u'random_uuid_string_here',
            u'number_of_shards': u'5', u'creation_date': u'1456963200172',
            u'routing': {u'allocation': {u'include': {u'tag': u'foo'}}},
            u'version': {u'created': u'2020099'}, u'refresh_interval': u'5s'
        } }
    },
    u'index-2016.03.04': {
        u'state': u'open', u'aliases': [u'my_alias'], u'mappings': {},
        u'settings': { u'index': {
            u'number_of_replicas': u'1', u'uuid': u'another_random_uuid_string',
            u'number_of_shards': u'5',
            u'routing': {u'allocation': {u'include': {u'tag': u'bar'}}},
            u'version': {u'created': u'2020099'}, u'refresh_interval': u'5s'
        } }
    }
}
settings_four = {
    u'a-2016.03.03': {
        u'state': u'open', u'aliases': [u'my_alias'], u'mappings': {},
        u'settings': { u'index': {
            u'number_of_replicas': u'1', u'uuid': u'random_uuid_string_here',
            u'number_of_shards': u'5', u'creation_date': u'1456963200172',
            u'routing': {u'allocation': {u'include': {u'tag': u'foo'}}},
            u'version': {u'created': u'2020099'}, u'refresh_interval': u'5s'
        } }
    },
    u'b-2016.03.04': {
        u'state': u'open', u'aliases': [u'my_alias'], u'mappings': {},
        u'settings': { u'index': {
            u'number_of_replicas': u'1', u'uuid': u'another_random_uuid_string',
            u'number_of_shards': u'5', u'creation_date': u'1457049600812',
            u'routing': {u'allocation': {u'include': {u'tag': u'bar'}}},
            u'version': {u'created': u'2020099'}, u'refresh_interval': u'5s'
        } }
    },
    u'c-2016.03.05': {
        u'state': u'close', u'aliases': [u'my_alias'], u'mappings': {},
        u'settings': { u'index': {
            u'number_of_replicas': u'1', u'uuid': u'random_uuid_string_here',
            u'number_of_shards': u'5', u'creation_date': u'1457136000933',
            u'routing': {u'allocation': {u'include': {u'tag': u'foo'}}},
            u'version': {u'created': u'2020099'}, u'refresh_interval': u'5s'
        } }
    },
    u'd-2016.03.06': {
        u'state': u'open', u'aliases': [u'my_alias'], u'mappings': {},
        u'settings': { u'index': {
            u'number_of_replicas': u'1', u'uuid': u'another_random_uuid_string',
            u'number_of_shards': u'5', u'creation_date': u'1457222400527',
            u'routing': {u'allocation': {u'include': {u'tag': u'bar'}}},
            u'version': {u'created': u'2020099'}, u'refresh_interval': u'5s'
        } }
    }
}
settings_named = {
    u'index-2015.01.01': {
        u'state': u'open', u'aliases': [u'my_alias'], u'mappings': {},
        u'settings': { u'index': {
            u'number_of_replicas': u'1', u'uuid': u'random_uuid_string_here',
            u'number_of_shards': u'5', u'creation_date': u'1456963200172',
            u'routing': {u'allocation': {u'include': {u'tag': u'foo'}}},
            u'version': {u'created': u'2020099'}, u'refresh_interval': u'5s'
        } }
    },
    u'index-2015.02.01': {
        u'state': u'open', u'aliases': [u'my_alias'], u'mappings': {},
        u'settings': { u'index': {
            u'number_of_replicas': u'1', u'uuid': u'another_random_uuid_string',
            u'number_of_shards': u'5', u'creation_date': u'1457049600812',
            u'routing': {u'allocation': {u'include': {u'tag': u'bar'}}},
            u'version': {u'created': u'2020099'}, u'refresh_interval': u'5s'
        } }
    }
}
clu_state_one = { u'metadata': { u'indices': settings_one } }
clu_state_two = { u'metadata': { u'indices': settings_two } }
cs_two_closed = { u'metadata': { u'indices': settings_2_closed } }
clu_state_two_no_cd = { u'metadata': { u'indices': settings_two_no_cd } }
clu_state_four = { u'metadata': { u'indices': settings_four } }
stats_one = { u'indices': {
    named_index : {
        u'total': { u'docs': {u'count': 6374962, u'deleted': 0},
            u'store': {u'size_in_bytes': 1115219663, u'throttle_time_in_millis': 0} },
        u'primaries': { u'docs': {u'count': 3187481, u'deleted': 0},
            u'store': {u'size_in_bytes': 557951789, u'throttle_time_in_millis': 0} }
    }
} }
stats_two = { u'indices': {
    u'index-2016.03.03': {
        u'total': { u'docs': {u'count': 6374962, u'deleted': 0},
            u'store': {u'size_in_bytes': 1115219663, u'throttle_time_in_millis': 0} },
        u'primaries': { u'docs': {u'count': 3187481, u'deleted': 0},
            u'store': {u'size_in_bytes': 557951789, u'throttle_time_in_millis': 0} }
    },
    u'index-2016.03.04': {
        u'total': { u'docs': {u'count': 6377544, u'deleted': 0},
            u'store': {u'size_in_bytes': 1120891046, u'throttle_time_in_millis': 0} },
        u'primaries': { u'docs': {u'count': 3188772, u'deleted': 0},
            u'store': {u'size_in_bytes': 560677114, u'throttle_time_in_millis': 0} }
    }
} }
stats_four = { u'indices': {
    u'a-2016.03.03': {
        u'total': { u'docs': {u'count': 6374962, u'deleted': 0},
            u'store': {u'size_in_bytes': 1115219663, u'throttle_time_in_millis': 0} },
        u'primaries': { u'docs': {u'count': 3187481, u'deleted': 0},
            u'store': {u'size_in_bytes': 557951789, u'throttle_time_in_millis': 0} }
    },
    u'b-2016.03.04': {
        u'total': { u'docs': {u'count': 6377544, u'deleted': 0},
            u'store': {u'size_in_bytes': 1120891046, u'throttle_time_in_millis': 0} },
        u'primaries': { u'docs': {u'count': 3188772, u'deleted': 0},
            u'store': {u'size_in_bytes': 560677114, u'throttle_time_in_millis': 0} }
    },
    # CLOSED, ergo, not present
    # u'c-2016.03.05': {
    #     u'total': {
    #         u'docs': {u'count': 6266434, u'deleted': 0},
    #         u'store': {u'size_in_bytes': 1120882166, u'throttle_time_in_millis': 0}
    #     },
    #     u'primaries': {
    #         u'docs': {u'count': 3133217, u'deleted': 0},
    #         u'store': {u'size_in_bytes': 560441083, u'throttle_time_in_millis': 0}
    #     }
    # },
    u'd-2016.03.06': {
        u'total': { u'docs': {u'count': 6266436, u'deleted': 0},
            u'store': {u'size_in_bytes': 1120882168, u'throttle_time_in_millis': 0} },
        u'primaries': { u'docs': {u'count': 3133218, u'deleted': 0},
            u'store': {u'size_in_bytes': 560441084, u'throttle_time_in_millis': 0} }
    }
} }
fieldstats_one = { u'indices': {
    named_index : { u'fields': { u'timestamp': {
        u'density': 100, u'min_value_as_string': u'2016-03-03T00:00:06.189Z',
        u'max_value': 1457049599152, u'max_doc': 415651, u'min_value': 1456963206189,
        u'doc_count': 415651, u'max_value_as_string': u'2016-03-03T23:59:59.152Z',
        u'sum_total_term_freq': -1, u'sum_doc_freq': 1662604}}}}
}
fieldstats_two = { u'indices': {
    u'index-2016.03.03': { u'fields': { u'timestamp': {
        u'density': 100, u'min_value_as_string': u'2016-03-03T00:00:06.189Z',
        u'max_value': 1457049599152, u'max_doc': 415651, u'min_value': 1456963206189,
        u'doc_count': 415651, u'max_value_as_string': u'2016-03-03T23:59:59.152Z',
        u'sum_total_term_freq': -1, u'sum_doc_freq': 1662604}}},
    u'index-2016.03.04': { u'fields': { u'timestamp': {
        u'density': 100, u'min_value_as_string': u'2016-03-04T00:00:00.812Z',
        u'max_value': 1457135999223, u'max_doc': 426762, u'min_value': 1457049600812,
        u'doc_count': 426762, u'max_value_as_string': u'2016-03-04T23:59:59.223Z',
        u'sum_total_term_freq': -1, u'sum_doc_freq': 1673715}}},
} }
fieldstats_four = { u'indices': {
    u'a-2016.03.03': { u'fields': { u'timestamp': {
        u'density': 100, u'min_value_as_string': u'2016-03-03T00:00:06.189Z',
        u'max_value': 1457049599152, u'max_doc': 415651, u'min_value': 1456963206189,
        u'doc_count': 415651, u'max_value_as_string': u'2016-03-03T23:59:59.152Z',
        u'sum_total_term_freq': -1, u'sum_doc_freq': 1662604}}},
    u'b-2016.03.04': { u'fields': { u'timestamp': {
        u'density': 100, u'min_value_as_string': u'2016-03-04T00:00:00.812Z',
        u'max_value': 1457135999223, u'max_doc': 426762, u'min_value': 1457049600812,
        u'doc_count': 426762, u'max_value_as_string': u'2016-03-04T23:59:59.223Z',
        u'sum_total_term_freq': -1, u'sum_doc_freq': 1673715}}},
    u'd-2016.03.06': { u'fields': { u'timestamp': {
        u'density': 100, u'min_value_as_string': u'2016-03-04T00:00:00.812Z',
        u'max_value': 1457308799223, u'max_doc': 426762, u'min_value': 1457222400567,
        u'doc_count': 426762, u'max_value_as_string': u'2016-03-04T23:59:59.223Z',
        u'sum_total_term_freq': -1, u'sum_doc_freq': 1673715}}},
} }
shards = { 'indices': { named_index: { 'shards': {
    '0': [ { 'num_search_segments' : 15 }, { 'num_search_segments' : 21 } ],
    '1': [ { 'num_search_segments' : 19 }, { 'num_search_segments' : 16 } ] }}}}
fm_shards = { 'indices': { named_index: { 'shards': {
    '0': [ { 'num_search_segments' : 1 }, { 'num_search_segments' : 1 } ],
    '1': [ { 'num_search_segments' : 1 }, { 'num_search_segments' : 1 } ] }}}}
loginfo = { "loglevel": "INFO", "logfile": None, "logformat": "default" }
default_format = '%(asctime)s %(levelname)-9s %(message)s'
debug_format = '%(asctime)s %(levelname)-9s %(name)22s %(funcName)22s:%(lineno)-4d %(message)s'
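# The triple-quoted strings below are YAML fixtures: a client configuration
# document first, then one single-action file per filtertype under test.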
yamlconfig = '''
---
# Remember, leave a key empty to use the default value.  None will be a string,
# not a Python "NoneType"
client:
  hosts: localhost
  port: 9200
  url_prefix:
  use_ssl: False
  certificate:
  client_cert:
  client_key:
  ssl_no_validate: False
  http_auth:
  timeout: 30
  master_only: False

options:
  dry_run: False
  loglevel: DEBUG
  logfile:
  logformat: default
  quiet: False
'''
pattern_ft = '''
---
actions:
  1:
    description: open all matching indices
    action: open
    options:
      continue_if_exception: False
      disable_action: False
    filters:
      - filtertype: pattern
        kind: prefix
        value: a
        exclude: False
'''
age_ft = '''
---
actions:
  1:
    description: open all matching indices
    action: open
    options:
      continue_if_exception: False
      disable_action: False
    filters:
      - filtertype: age
        source: name
        direction: older
        timestring: '%Y.%m.%d'
        unit: seconds
        unit_count: 0
        epoch: 1456963201
'''
space_ft = '''
---
actions:
  1:
    description: open all matching indices
    action: open
    options:
      continue_if_exception: False
      disable_action: False
    filters:
      - filtertype: space
        disk_space: 2.1
        source: name
        use_age: True
        timestring: '%Y.%m.%d'
'''
forcemerge_ft = '''
---
actions:
  1:
    description: open all matching indices
    action: open
    options:
      continue_if_exception: False
      disable_action: False
    filters:
      - filtertype: forcemerged
        max_num_segments: 2
'''
allocated_ft = '''
---
actions:
  1:
    description: open all matching indices
    action: open
    options:
      continue_if_exception: False
      disable_action: False
    filters:
      - filtertype: allocated
        key: tag
        value: foo
        allocation_type: include
'''
kibana_ft = '''
---
actions:
  1:
    description: open all matching indices
    action: open
    options:
      continue_if_exception: False
      disable_action: False
    filters:
      - filtertype: kibana
'''
opened_ft = '''
---
actions:
  1:
    description: open all matching indices
    action: open
    options:
      continue_if_exception: False
      disable_action: False
    filters:
      - filtertype: opened
'''
closed_ft = '''
---
actions:
  1:
    description: open all matching indices
    action: open
    options:
      continue_if_exception: False
      disable_action: False
    filters:
      - filtertype: closed
'''
none_ft = '''
---
actions:
  1:
    description: open all matching indices
    action: open
    options:
      continue_if_exception: False
      disable_action: False
    filters:
      - filtertype: none
'''
invalid_ft = '''
---
actions:
  1:
    description: open all matching indices
    action: open
    options:
      continue_if_exception: False
      disable_action: False
    filters:
      - filtertype: sir_not_appearing_in_this_film
'''
snap_age_ft = '''
---
actions:
  1:
    description: test
    action: delete_snapshots
    options:
      continue_if_exception: False
      disable_action: False
    filters:
      - filtertype: age
        direction: older
        unit: days
        unit_count: 1
'''
snap_pattern_ft = '''
---
actions:
  1:
    description: test
    action: delete_snapshots
    options:
      continue_if_exception: False
      disable_action: False
    filters:
      - filtertype: pattern
        kind: prefix
        value: sna
'''
snap_none_ft = '''
---
actions:
  1:
    description: test
    action: delete_snapshots
    options:
      continue_if_exception: False
      disable_action: False
    filters:
      - filtertype: none
'''
not_rollable_name = {'index': {u'aliases': {'foo': {}}}}
not_rollable_multiple = {u'index-a': {u'aliases': {u'foo': {}}}, u'index-b': {u'aliases': {u'foo': {}}}}
not_rollable_non_numeric = {u'index-a': {u'aliases': {u'foo': {}}}}
is_rollable_2digits = {u'index-00001': {u'aliases': {u'foo': {}}}}
is_rollable_hypenated = {u'index-2017.03.07-1': {u'aliases': {u'foo': {}}}}
generic_task = {u'task': u'I0ekFjMhSPCQz7FUs1zJOg:54510686'}
incomplete_task = {u'completed': False,
    u'task': {u'node': u'I0ekFjMhSPCQz7FUs1zJOg',
        u'status': {u'retries': {u'bulk': 0, u'search': 0}, u'updated': 0, u'batches': 3647,
            u'throttled_until_millis': 0, u'throttled_millis': 0, u'noops': 0, u'created': 3646581,
            u'deleted': 0, u'requests_per_second': -1.0, u'version_conflicts': 0, u'total': 3646581},
        u'description': u'UNIT TEST', u'running_time_in_nanos': 1637039537721, u'cancellable': True,
        u'action': u'indices:data/write/reindex', u'type': u'transport', u'id': 54510686,
        u'start_time_in_millis': 1489695981997},
    u'response': {u'retries': {u'bulk': 0, u'search': 0}, u'updated': 0, u'batches': 3647,
        u'throttled_until_millis': 0, u'throttled_millis': 0, u'noops': 0, u'created': 3646581,
        u'deleted': 0, u'took': 1636917, u'requests_per_second': -1.0, u'timed_out': False,
        u'failures': [], u'version_conflicts': 0, u'total': 3646581}}
completed_task = {u'completed': True,
    u'task': {u'node': u'I0ekFjMhSPCQz7FUs1zJOg',
        u'status': {u'retries': {u'bulk': 0, u'search': 0}, u'updated': 0, u'batches': 3647,
            u'throttled_until_millis': 0, u'throttled_millis': 0, u'noops': 0, u'created': 3646581,
            u'deleted': 0, u'requests_per_second': -1.0, u'version_conflicts': 0, u'total': 3646581},
        u'description': u'UNIT TEST', u'running_time_in_nanos': 1637039537721, u'cancellable': True,
        u'action': u'indices:data/write/reindex', u'type': u'transport', u'id': 54510686,
        u'start_time_in_millis': 1489695981997},
    u'response': {u'retries': {u'bulk': 0, u'search': 0}, u'updated': 0, u'batches': 3647,
        u'throttled_until_millis': 0, u'throttled_millis': 0, u'noops': 0, u'created': 3646581,
        u'deleted': 0, u'took': 1636917, u'requests_per_second': -1.0, u'timed_out': False,
        u'failures': [], u'version_conflicts': 0, u'total': 3646581}}
'source': { 'index': named_index }, 'dest': { 'index': 'MIGRATION' } } index_list_966 = ['indexv0.2_2017-02-12_536a9247f9fa4fc7a7942ad46ea14e0d'] recovery_966 = {u'indexv0.2_2017-02-12_536a9247f9fa4fc7a7942ad46ea14e0d': {u'shards': [{u'total_time': u'10.1m', u'index': {u'files': {u'reused': 0, u'total': 15, u'percent': u'100.0%', u'recovered': 15}, u'total_time': u'10.1m', u'target_throttle_time': u'-1', u'total_time_in_millis': 606577, u'source_throttle_time_in_millis': 0, u'source_throttle_time': u'-1', u'target_throttle_time_in_millis': 0, u'size': {u'recovered_in_bytes': 3171596177, u'reused': u'0b', u'total_in_bytes': 3171596177, u'percent': u'100.0%', u'reused_in_bytes': 0, u'total': u'2.9gb', u'recovered': u'2.9gb'}}, u'verify_index': {u'total_time': u'0s', u'total_time_in_millis': 0, u'check_index_time_in_millis': 0, u'check_index_time': u'0s'}, u'target': {u'ip': u'x.x.x.7', u'host': u'x.x.x.7', u'transport_address': u'x.x.x.7:9300', u'id': u'K4xQPaOFSWSPLwhb0P47aQ', u'name': u'staging-es5-forcem'}, u'source': {u'index': u'indexv0.2_2017-02-12_536a9247f9fa4fc7a7942ad46ea14e0d', u'version': u'5.1.1', u'snapshot': u'force-merge', u'repository': u'force-merge'}, u'translog': {u'total_time': u'45ms', u'percent': u'100.0%', u'total_time_in_millis': 45, u'total_on_start': 0, u'total': 0, u'recovered': 0}, u'start_time': u'2017-05-16T11:54:48.183Z', u'primary': True, u'total_time_in_millis': 606631, u'stop_time_in_millis': 1494936294815, u'stop_time': u'2017-05-16T12:04:54.815Z', u'stage': u'DONE', u'type': u'SNAPSHOT', u'id': 1, u'start_time_in_millis': 1494935688183}, {u'total_time': u'10m', u'index': {u'files': {u'reused': 0, u'total': 15, u'percent': u'100.0%', u'recovered': 15}, u'total_time': u'10m', u'target_throttle_time': u'-1', u'total_time_in_millis': 602302, u'source_throttle_time_in_millis': 0, u'source_throttle_time': u'-1', u'target_throttle_time_in_millis': 0, u'size': {u'recovered_in_bytes': 3162299781, u'reused': u'0b', u'total_in_bytes': 3162299781, u'percent': u'100.0%', u'reused_in_bytes': 0, u'total': u'2.9gb', u'recovered': u'2.9gb'}}, u'verify_index': {u'total_time': u'0s', u'total_time_in_millis': 0, u'check_index_time_in_millis': 0, u'check_index_time': u'0s'}, u'target': {u'ip': u'x.x.x.7', u'host': u'x.x.x.7', u'transport_address': u'x.x.x.7:9300', u'id': u'K4xQPaOFSWSPLwhb0P47aQ', u'name': u'staging-es5-forcem'}, u'source': {u'index': u'indexv0.2_2017-02-12_536a9247f9fa4fc7a7942ad46ea14e0d', u'version': u'5.1.1', u'snapshot': u'force-merge', u'repository': u'force-merge'}, u'translog': {u'total_time': u'389ms', u'percent': u'100.0%', u'total_time_in_millis': 389, u'total_on_start': 0, u'total': 0, u'recovered': 0}, u'start_time': u'2017-05-16T12:04:51.606Z', u'primary': True, u'total_time_in_millis': 602698, u'stop_time_in_millis': 1494936894305, u'stop_time': u'2017-05-16T12:14:54.305Z', u'stage': u'DONE', u'type': u'SNAPSHOT', u'id': 5, u'start_time_in_millis': 1494936291606}, {u'total_time': u'10.1m', u'index': {u'files': {u'reused': 0, u'total': 15, u'percent': u'100.0%', u'recovered': 15}, u'total_time': u'10.1m', u'target_throttle_time': u'-1', u'total_time_in_millis': 606692, u'source_throttle_time_in_millis': 0, u'source_throttle_time': u'-1', u'target_throttle_time_in_millis': 0, u'size': {u'recovered_in_bytes': 3156050994, u'reused': u'0b', u'total_in_bytes': 3156050994, u'percent': u'100.0%', u'reused_in_bytes': 0, u'total': u'2.9gb', u'recovered': u'2.9gb'}}, u'verify_index': {u'total_time': u'0s', u'total_time_in_millis': 0, 
u'check_index_time_in_millis': 0, u'check_index_time': u'0s'}, u'target': {u'ip': u'x.x.x.7', u'host': u'x.x.x.7', u'transport_address': u'x.x.x.7:9300', u'id': u'K4xQPaOFSWSPLwhb0P47aQ', u'name': u'staging-es5-forcem'}, u'source': {u'index': u'indexv0.2_2017-02-12_536a9247f9fa4fc7a7942ad46ea14e0d', u'version': u'5.1.1', u'snapshot': u'force-merge', u'repository': u'force-merge'}, u'translog': {u'total_time': u'38ms', u'percent': u'100.0%', u'total_time_in_millis': 38, u'total_on_start': 0, u'total': 0, u'recovered': 0}, u'start_time': u'2017-05-16T11:54:48.166Z', u'primary': True, u'total_time_in_millis': 606737, u'stop_time_in_millis': 1494936294904, u'stop_time': u'2017-05-16T12:04:54.904Z', u'stage': u'DONE', u'type': u'SNAPSHOT', u'id': 3, u'start_time_in_millis': 1494935688166}, {u'total_time': u'10m', u'index': {u'files': {u'reused': 0, u'total': 15, u'percent': u'100.0%', u'recovered': 15}, u'total_time': u'10m', u'target_throttle_time': u'-1', u'total_time_in_millis': 602010, u'source_throttle_time_in_millis': 0, u'source_throttle_time': u'-1', u'target_throttle_time_in_millis': 0, u'size': {u'recovered_in_bytes': 3153017440, u'reused': u'0b', u'total_in_bytes': 3153017440, u'percent': u'100.0%', u'reused_in_bytes': 0, u'total': u'2.9gb', u'recovered': u'2.9gb'}}, u'verify_index': {u'total_time': u'0s', u'total_time_in_millis': 0, u'check_index_time_in_millis': 0, u'check_index_time': u'0s'}, u'target': {u'ip': u'x.x.x.7', u'host': u'x.x.x.7', u'transport_address': u'x.x.x.7:9300', u'id': u'K4xQPaOFSWSPLwhb0P47aQ', u'name': u'staging-es5-forcem'}, u'source': {u'index': u'indexv0.2_2017-02-12_536a9247f9fa4fc7a7942ad46ea14e0d', u'version': u'5.1.1', u'snapshot': u'force-merge', u'repository': u'force-merge'}, u'translog': {u'total_time': u'558ms', u'percent': u'100.0%', u'total_time_in_millis': 558, u'total_on_start': 0, u'total': 0, u'recovered': 0}, u'start_time': u'2017-05-16T12:04:51.369Z', u'primary': True, u'total_time_in_millis': 602575, u'stop_time_in_millis': 1494936893944, u'stop_time': u'2017-05-16T12:14:53.944Z', u'stage': u'DONE', u'type': u'SNAPSHOT', u'id': 4, u'start_time_in_millis': 1494936291369}, {u'total_time': u'10m', u'index': {u'files': {u'reused': 0, u'total': 15, u'percent': u'100.0%', u'recovered': 15}, u'total_time': u'10m', u'target_throttle_time': u'-1', u'total_time_in_millis': 600492, u'source_throttle_time_in_millis': 0, u'source_throttle_time': u'-1', u'target_throttle_time_in_millis': 0, u'size': {u'recovered_in_bytes': 3153347402, u'reused': u'0b', u'total_in_bytes': 3153347402, u'percent': u'100.0%', u'reused_in_bytes': 0, u'total': u'2.9gb', u'recovered': u'2.9gb'}}, u'verify_index': {u'total_time': u'0s', u'total_time_in_millis': 0, u'check_index_time_in_millis': 0, u'check_index_time': u'0s'}, u'target': {u'ip': u'x.x.x.7', u'host': u'x.x.x.7', u'transport_address': u'x.x.x.7:9300', u'id': u'K4xQPaOFSWSPLwhb0P47aQ', u'name': u'staging-es5-forcem'}, u'source': {u'index': u'indexv0.2_2017-02-12_536a9247f9fa4fc7a7942ad46ea14e0d', u'version': u'5.1.1', u'snapshot': u'force-merge', u'repository': u'force-merge'}, u'translog': {u'total_time': u'445ms', u'percent': u'100.0%', u'total_time_in_millis': 445, u'total_on_start': 0, u'total': 0, u'recovered': 0}, u'start_time': u'2017-05-16T12:04:54.817Z', u'primary': True, u'total_time_in_millis': 600946, u'stop_time_in_millis': 1494936895764, u'stop_time': u'2017-05-16T12:14:55.764Z', u'stage': u'DONE', u'type': u'SNAPSHOT', u'id': 6, u'start_time_in_millis': 1494936294817}, {u'total_time': u'10m', 
u'index': {u'files': {u'reused': 0, u'total': 15, u'percent': u'100.0%', u'recovered': 15}, u'total_time': u'10m', u'target_throttle_time': u'-1', u'total_time_in_millis': 603194, u'source_throttle_time_in_millis': 0, u'source_throttle_time': u'-1', u'target_throttle_time_in_millis': 0, u'size': {u'recovered_in_bytes': 3148003580, u'reused': u'0b', u'total_in_bytes': 3148003580, u'percent': u'100.0%', u'reused_in_bytes': 0, u'total': u'2.9gb', u'recovered': u'2.9gb'}}, u'verify_index': {u'total_time': u'0s', u'total_time_in_millis': 0, u'check_index_time_in_millis': 0, u'check_index_time': u'0s'}, u'target': {u'ip': u'x.x.x.7', u'host': u'x.x.x.7', u'transport_address': u'x.x.x.7:9300', u'id': u'K4xQPaOFSWSPLwhb0P47aQ', u'name': u'staging-es5-forcem'}, u'source': {u'index': u'indexv0.2_2017-02-12_536a9247f9fa4fc7a7942ad46ea14e0d', u'version': u'5.1.1', u'snapshot': u'force-merge', u'repository': u'force-merge'}, u'translog': {u'total_time': u'225ms', u'percent': u'100.0%', u'total_time_in_millis': 225, u'total_on_start': 0, u'total': 0, u'recovered': 0}, u'start_time': u'2017-05-16T11:54:48.173Z', u'primary': True, u'total_time_in_millis': 603429, u'stop_time_in_millis': 1494936291602, u'stop_time': u'2017-05-16T12:04:51.602Z', u'stage': u'DONE', u'type': u'SNAPSHOT', u'id': 2, u'start_time_in_millis': 1494935688173}, {u'total_time': u'10m', u'index': {u'files': {u'reused': 0, u'total': 15, u'percent': u'100.0%', u'recovered': 15}, u'total_time': u'10m', u'target_throttle_time': u'-1', u'total_time_in_millis': 601453, u'source_throttle_time_in_millis': 0, u'source_throttle_time': u'-1', u'target_throttle_time_in_millis': 0, u'size': {u'recovered_in_bytes': 3168132171, u'reused': u'0b', u'total_in_bytes': 3168132171, u'percent': u'100.0%', u'reused_in_bytes': 0, u'total': u'2.9gb', u'recovered': u'2.9gb'}}, u'verify_index': {u'total_time': u'0s', u'total_time_in_millis': 0, u'check_index_time_in_millis': 0, u'check_index_time': u'0s'}, u'target': {u'ip': u'x.x.x.7', u'host': u'x.x.x.7', u'transport_address': u'x.x.x.7:9300', u'id': u'K4xQPaOFSWSPLwhb0P47aQ', u'name': u'staging-es5-forcem'}, u'source': {u'index': u'indexv0.2_2017-02-12_536a9247f9fa4fc7a7942ad46ea14e0d', u'version': u'5.1.1', u'snapshot': u'force-merge', u'repository': u'force-merge'}, u'translog': {u'total_time': u'43ms', u'percent': u'100.0%', u'total_time_in_millis': 43, u'total_on_start': 0, u'total': 0, u'recovered': 0}, u'start_time': u'2017-05-16T12:04:54.905Z', u'primary': True, u'total_time_in_millis': 601503, u'stop_time_in_millis': 1494936896408, u'stop_time': u'2017-05-16T12:14:56.408Z', u'stage': u'DONE', u'type': u'SNAPSHOT', u'id': 7, u'start_time_in_millis': 1494936294905}, {u'total_time': u'10m', u'index': {u'files': {u'reused': 0, u'total': 15, u'percent': u'100.0%', u'recovered': 15}, u'total_time': u'10m', u'target_throttle_time': u'-1', u'total_time_in_millis': 602897, u'source_throttle_time_in_millis': 0, u'source_throttle_time': u'-1', u'target_throttle_time_in_millis': 0, u'size': {u'recovered_in_bytes': 3153750393, u'reused': u'0b', u'total_in_bytes': 3153750393, u'percent': u'100.0%', u'reused_in_bytes': 0, u'total': u'2.9gb', u'recovered': u'2.9gb'}}, u'verify_index': {u'total_time': u'0s', u'total_time_in_millis': 0, u'check_index_time_in_millis': 0, u'check_index_time': u'0s'}, u'target': {u'ip': u'x.x.x.7', u'host': u'x.x.x.7', u'transport_address': u'x.x.x.7:9300', u'id': u'K4xQPaOFSWSPLwhb0P47aQ', u'name': u'staging-es5-forcem'}, u'source': {u'index': 
u'indexv0.2_2017-02-12_536a9247f9fa4fc7a7942ad46ea14e0d', u'version': u'5.1.1',
u'snapshot': u'force-merge', u'repository': u'force-merge'},
u'translog': {u'total_time': u'271ms', u'percent': u'100.0%', u'total_time_in_millis': 271,
u'total_on_start': 0, u'total': 0, u'recovered': 0}, u'start_time': u'2017-05-16T11:54:48.191Z',
u'primary': True, u'total_time_in_millis': 603174, u'stop_time_in_millis': 1494936291366,
u'stop_time': u'2017-05-16T12:04:51.366Z', u'stage': u'DONE', u'type': u'SNAPSHOT', u'id': 0,
u'start_time_in_millis': 1494935688191}]}}
no_snap_tasks = {u'nodes': {u'node1': {u'tasks': {u'task1': {u'action': u'cluster:monitor/tasks/lists[n]'}}}}}
snap_task = {u'nodes': {u'node1': {u'tasks': {u'task1': {u'action': u'cluster:admin/snapshot/delete'}}}}}
watermark_persistent = {u'persistent':{u'cluster':{u'routing':{u'allocation':{u'disk':{u'watermark':{u'low':u'11%',u'high':u'60gb'}}}}}}}
watermark_transient = {u'transient':{u'cluster':{u'routing':{u'allocation':{u'disk':{u'watermark':{u'low':u'9%',u'high':u'50gb'}}}}}}}
watermark_both = {
    u'persistent': {u'cluster':{u'routing':{u'allocation':{u'disk':{u'watermark':{u'low':u'11%',u'high':u'60gb'}}}}}},
    u'transient': {u'cluster':{u'routing':{u'allocation':{u'disk':{u'watermark':{u'low':u'9%',u'high':u'50gb'}}}}}},
}
empty_cluster_settings = {u'persistent':{},u'transient':{}}
data_only_node_role = ['data']
master_data_node_role = ['data','master']
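The canned API responses above are meant to be stubbed into a mocked client, so a unit test never needs a live cluster. A minimal sketch of that pattern follows; the test class, its assertion, and the use of unittest.mock are illustrative assumptions here, not tests copied from this suite (Python 2 runs would use the mock backport instead):

    from unittest import TestCase
    from unittest.mock import Mock

    class TestClusterHealthFixture(TestCase):  # hypothetical example test
        def test_status_is_green(self):
            # Stub an elasticsearch-py style client so that cluster.health()
            # returns the canned cluster_health dict defined above.
            client = Mock()
            client.cluster.health.return_value = cluster_health
            self.assertEqual('green', client.cluster.health()['status'])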
curator-5.2.0/travis-run.sh000077500000000000000000000046751315226075300156610ustar00rootroot00000000000000
#!/bin/bash

set -ex

# There's at least 1 expected, skipped test, only with 5.0.0-alpha4 right now
expected_skips=1

setup_es() {
  download_url=$1
  curl -sL $download_url > elasticsearch.tar.gz
  mkdir elasticsearch
  tar -xzf elasticsearch.tar.gz --strip-components=1 -C ./elasticsearch/.
}

start_es() {
  jhome=$1
  es_args=$2
  es_port=$3
  es_cluster=$4
  export JAVA_HOME=$jhome
  elasticsearch/bin/elasticsearch $es_args > /tmp/$es_cluster.log &
  sleep 20
  curl http://127.0.0.1:$es_port && echo "$es_cluster Elasticsearch is up!" || cat /tmp/$es_cluster.log ./elasticsearch/logs/$es_cluster.log
  # curl http://127.0.0.1:$es_port && echo "ES is up!" || cat /tmp/$es_cluster.log ./elasticsearch/logs/$es_cluster.log
}

setup_es https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-$ES_VERSION.tar.gz

java_home='/usr/lib/jvm/java-8-oracle'

### Build local cluster config (since 5.4 removed most flags)
LC=elasticsearch/localcluster
mkdir -p $LC
cp elasticsearch/config/log4j2.properties $LC
echo 'network.host: 127.0.0.1' > $LC/elasticsearch.yml
echo 'http.port: 9200' >> $LC/elasticsearch.yml
echo 'cluster.name: local' >> $LC/elasticsearch.yml
echo 'node.max_local_storage_nodes: 2' >> $LC/elasticsearch.yml
echo 'discovery.zen.ping.unicast.hosts: ["127.0.0.1:9200"]' >> $LC/elasticsearch.yml
echo 'path.repo: /' >> $LC/elasticsearch.yml
echo 'reindex.remote.whitelist: localhost:9201' >> $LC/elasticsearch.yml

### Build remote cluster config (since 5.4 removed most flags)
RC=elasticsearch/remotecluster
mkdir -p $RC
cp elasticsearch/config/log4j2.properties $RC
echo 'network.host: 127.0.0.1' > $RC/elasticsearch.yml
echo 'http.port: 9201' >> $RC/elasticsearch.yml
echo 'cluster.name: remote' >> $RC/elasticsearch.yml
echo 'node.max_local_storage_nodes: 2' >> $RC/elasticsearch.yml
echo 'discovery.zen.ping.unicast.hosts: ["127.0.0.1:9201"]' >> $RC/elasticsearch.yml

start_es $java_home "-d -Epath.conf=$LC" 9200 "local"
start_es $java_home "-d -Epath.conf=$RC" 9201 "remote"

python setup.py test
result=$(head -1 nosetests.xml | awk '{print $6 " " $7 " " $8}' | awk -F\> '{print $1}' | tr -d '"')
echo "Result = $result"
errors=$(echo $result | awk '{print $1}' | awk -F\= '{print $2}')
failures=$(echo $result | awk '{print $2}' | awk -F\= '{print $2}')
skips=$(echo $result | awk '{print $3}' | awk -F\= '{print $2}')
if [[ $errors -gt 0 ]]; then
  exit 1
elif [[ $failures -gt 0 ]]; then
  exit 1
elif [[ $skips -gt $expected_skips ]]; then
  exit 1
fi
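For readers untangling the awk pipeline above: the first line of nosetests.xml is the <testsuite ...> element, and whitespace fields 6 through 8 of that line are its errors=, failures=, and skip= attributes; the -F\> split and tr -d '"' then strip them down to bare key=value pairs. A rough Python equivalent of the same pass/fail gate, shown only for clarity against the xunit schema nose emits (this helper is not part of the repository):

    import sys
    import xml.etree.ElementTree as ET

    # Read the summary attributes from the <testsuite> root element.
    suite = ET.parse('nosetests.xml').getroot()
    errors = int(suite.get('errors', '0'))
    failures = int(suite.get('failures', '0'))
    skips = int(suite.get('skip', '0'))
    # Mirror the script: fail on any error or failure, or on more
    # skips than expected_skips (1).
    sys.exit(1 if errors > 0 or failures > 0 or skips > 1 else 0)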
curator-5.2.0/unix_packages/000077500000000000000000000000001315226075300160225ustar00rootroot00000000000000curator-5.2.0/unix_packages/build_official_package.sh000077500000000000000000000140651315226075300227730ustar00rootroot00000000000000
#!/bin/bash

BASEPATH=$(pwd)
PKG_TARGET=/curator_packages
WORKDIR=/tmp/curator
CX_VER="5.0.1"
CX_FILE="/curator_source/unix_packages/cx_freeze-${CX_VER}.dev.tar.gz"
CX_PATH="anthony_tuininga-cx_freeze-0e565139e6a3"
PYVER=3.6
MINOR=2
INPUT_TYPE=python
CATEGORY=python
VENDOR=Elastic
MAINTAINER="'Elastic Developers '"
C_POST_INSTALL=${WORKDIR}/es_curator_after_install.sh
C_PRE_REMOVE=${WORKDIR}/es_curator_before_removal.sh
C_POST_REMOVE=${WORKDIR}/es_curator_after_removal.sh
C_PRE_UPGRADE=${WORKDIR}/es_curator_before_upgrade.sh
C_POST_UPGRADE=${WORKDIR}/es_curator_after_upgrade.sh

# Build our own package pre/post scripts
sudo rm -rf ${WORKDIR} /opt/elasticsearch-curator
mkdir -p ${WORKDIR}
for file in ${C_POST_INSTALL} ${C_PRE_REMOVE} ${C_POST_REMOVE}; do
  echo '#!/bin/bash' > ${file}
  echo >> ${file}
  chmod +x ${file}
done

remove_python() {
  sudo rm -f /usr/local/lib/libpython${1}m.a
  sudo rm -f /usr/local/lib/pkgconfig/python-${1}.pc
  sudo rm -rf /usr/local/lib/python${1}
  sudo rm -f /usr/lib/libpython${1}.a
  sudo rm -rf /usr/local/include/python${1}m
  cd /usr/local/bin
  sudo rm -f 2to3-${1} easy_install-${1} idle${1} pip${1} pydoc${1} python${1} python${1}m python${1}m-config pyvenv-${1}
  cd -
}

build_python() {
  cd /tmp
  wget -c https://www.python.org/ftp/python/${1}/Python-${1}.tgz
  tar zxf Python-${1}.tgz
  cd Python-${1}
  ./configure --prefix=/usr/local
  sudo make altinstall
  sudo ln -s /usr/local/lib/libpython${1}m.a /usr/lib/libpython${1}.a
  cd -
}

echo "ln -s /opt/elasticsearch-curator/curator /usr/bin/curator" >> ${C_POST_INSTALL}
echo "ln -s /opt/elasticsearch-curator/curator_cli /usr/bin/curator_cli" >> ${C_POST_INSTALL}
echo "ln -s /opt/elasticsearch-curator/es_repo_mgr /usr/bin/es_repo_mgr" >> ${C_POST_INSTALL}
echo "ln -s /opt/elasticsearch-curator/curator /usr/bin/curator" >> ${C_POST_UPGRADE}
echo "ln -s /opt/elasticsearch-curator/curator_cli /usr/bin/curator_cli" >> ${C_POST_UPGRADE}
echo "ln -s /opt/elasticsearch-curator/es_repo_mgr /usr/bin/es_repo_mgr" >> ${C_POST_UPGRADE}
echo "rm -f /usr/bin/curator" >> ${C_PRE_REMOVE}
echo "rm -f /usr/bin/curator_cli" >> ${C_PRE_REMOVE}
echo "rm -f /usr/bin/es_repo_mgr" >> ${C_PRE_REMOVE}
echo "rm -f /usr/bin/curator" >> ${C_PRE_UPGRADE}
echo "rm -f /usr/bin/curator_cli" >> ${C_PRE_UPGRADE}
echo "rm -f /usr/bin/es_repo_mgr" >> ${C_PRE_UPGRADE}
echo 'if [ -d "/opt/elasticsearch-curator" ]; then' >> ${C_POST_REMOVE}
echo ' rm -rf /opt/elasticsearch-curator' >> ${C_POST_REMOVE}
echo 'fi' >> ${C_POST_REMOVE}

ID=$(grep ^ID\= /etc/*release | awk -F\= '{print $2}' | tr -d \")
VERSION_ID=$(grep ^VERSION_ID\= /etc/*release | awk -F\= '{print $2}' | tr -d \")
if [ "${ID}x" == "x" ]; then
  ID=$(cat /etc/*release | grep -v LSB | uniq | awk '{print $1}' | tr "[:upper:]" "[:lower:]" )
  VERSION_ID=$(cat /etc/*release | grep -v LSB | uniq | awk '{print $3}' | awk -F\. '{print $1}')
fi

# build
if [ "${1}x" == "x" ]; then
  echo "Must provide version number"
  exit 1
else
  FILE="v${1}.tar.gz"
  cd ${WORKDIR}
  wget -c https://github.com/elastic/curator/archive/${FILE}
fi

case "$ID" in
  ubuntu|debian)
    PKGTYPE=deb
    PLATFORM=debian
    case "$VERSION_ID" in
      1404|1604|8) PACKAGEDIR="${PKG_TARGET}/${1}/${PLATFORM}";;
      9) PACKAGEDIR="${PKG_TARGET}/${1}/${PLATFORM}${VERSION_ID}";;
      *) PACKAGEDIR="${PKG_TARGET}/${1}/${PLATFORM}";;
    esac
    sudo apt update -y
    sudo apt install -y openssl zlib1g zlib1g-dev libreadline-gplv2-dev libncursesw5-dev libssl-dev libsqlite3-dev tk-dev libgdbm-dev libc6-dev libbz2-dev dirmngr curl
    ;;
  centos|rhel)
    PKGTYPE=rpm
    PLATFORM=centos
    case "$VERSION_ID" in
      6|7)
        sudo rm -f /etc/yum.repos.d/puppetlabs-pc1.repo
        sudo yum -y update
        sudo yum install -y openssl
        ;;
      *) echo "unknown system version: ${VERSION_ID}"; exit 1;;
    esac
    PACKAGEDIR="${PKG_TARGET}/${1}/${PLATFORM}/${VERSION_ID}"
    ;;
  *) echo "unknown system type: ${ID}"; exit 1;;
esac

HAS_PY3=$(which python${PYVER})
if [ "${HAS_PY3}x" == "x" ]; then
  build_python ${PYVER}.${MINOR}
fi
FOUNDVER=$(python${PYVER} --version | awk '{print $2}')
if [ "${FOUNDVER}" != "${PYVER}.${MINOR}" ]; then
  remove_python $(echo ${FOUNDVER} | awk -F\. '{print $1"."$2}')
  build_python ${PYVER}.${MINOR}
fi

PIPBIN=/usr/local/bin/pip${PYVER}
PYBIN=/usr/local/bin/python${PYVER}

if [ -e "${HOME}/.rvm/scripts/rvm" ]; then
  source ${HOME}/.rvm/scripts/rvm
fi
HAS_FPM=$(which fpm)
if [ "${HAS_FPM}x" == "x" ]; then
  gpg --keyserver hkp://keys.gnupg.net --recv-keys 409B6B1796C275462A1703113804BB82D39DC0E3
  curl -sSL https://get.rvm.io | bash -s stable
  source ${HOME}/.rvm/scripts/rvm
  rvm install ruby
  gem install fpm
fi

tar zxf ${FILE}
${PIPBIN} install -U --user setuptools
${PIPBIN} install -U --user requests_aws4auth
if [ "${CX_VER}" != "$(${PIPBIN} list | grep cx | awk '{print $2}' | tr -d '()')" ]; then
  cd ${WORKDIR}
  rm -rf ${CX_PATH}
  tar zxf ${CX_FILE}
  cd ${CX_PATH}
  ${PIPBIN} install -U --user .
  cd ${WORKDIR}
fi

mkdir -p ${PACKAGEDIR}
cd curator-${1}
${PIPBIN} install -U --user -r requirements.txt
${PYBIN} setup.py build_exe
sudo mv build/exe.linux-x86_64-${PYVER} /opt/elasticsearch-curator
sudo chown -R root:root /opt/elasticsearch-curator
cd ..

fpm \
 -s dir \
 -t ${PKGTYPE} \
 -n elasticsearch-curator \
 -v ${1} \
 --vendor ${VENDOR} \
 --maintainer "${MAINTAINER}" \
 --license 'Apache-2.0' \
 --category tools \
 --description 'Have indices in Elasticsearch? This is the tool for you!\n\nLike a museum curator manages the exhibits and collections on display, \nElasticsearch Curator helps you curate, or manage your indices.' \
 --after-install ${C_POST_INSTALL} \
 --before-remove ${C_PRE_REMOVE} \
 --after-remove ${C_POST_REMOVE} \
 --before-upgrade ${C_PRE_UPGRADE} \
 --after-upgrade ${C_POST_UPGRADE} \
 --provides elasticsearch-curator \
 --conflicts python-elasticsearch-curator \
 --conflicts python3-elasticsearch-curator \
/opt/elasticsearch-curator

mv ${WORKDIR}/*.${PKGTYPE} ${PACKAGEDIR}
rm ${C_POST_INSTALL} ${C_PRE_REMOVE} ${C_POST_REMOVE} ${C_PRE_UPGRADE} ${C_POST_UPGRADE}

# go back to where we started
cd ${BASEPATH}
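The ${PYBIN} setup.py build_exe step above (and in the from-source variant that follows) works by handing off to the patched cx_Freeze installed from the bundled tarball, which requires a cx_Freeze-aware setup.py in the project. For orientation, the minimal shape such a setup.py takes looks roughly like this sketch of the cx_Freeze 5.x API; the entry-point script name and metadata below are placeholders, not the project's actual setup.py:

    # Sketch of a cx_Freeze-style setup.py: 'setup.py build_exe' freezes the
    # console script into a self-contained build/exe.linux-x86_64-<pyver> dir,
    # which the packaging script then moves to /opt/elasticsearch-curator.
    from cx_Freeze import setup, Executable

    setup(
        name='elasticsearch-curator',
        version='5.2.0',
        description='Tending your Elasticsearch indices',
        executables=[Executable('run_curator.py', targetName='curator')],
    )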
curator-5.2.0/unix_packages/build_package_from_source.sh000077500000000000000000000137541315226075300235470ustar00rootroot00000000000000
#!/bin/bash

BASEPATH=$(pwd)
PKG_TARGET=/curator_packages
WORKDIR=/tmp/curator
CX_VER="5.0.1"
CX_FILE="/curator_source/unix_packages/cx_freeze-${CX_VER}.dev.tar.gz"
CX_PATH="anthony_tuininga-cx_freeze-0e565139e6a3"
PYVER=3.6
MINOR=2
INPUT_TYPE=python
CATEGORY=python
VENDOR=Elastic
MAINTAINER="'Elastic Developers '"
C_POST_INSTALL=${WORKDIR}/es_curator_after_install.sh
C_PRE_REMOVE=${WORKDIR}/es_curator_before_removal.sh
C_POST_REMOVE=${WORKDIR}/es_curator_after_removal.sh
C_PRE_UPGRADE=${WORKDIR}/es_curator_before_upgrade.sh
C_POST_UPGRADE=${WORKDIR}/es_curator_after_upgrade.sh

# Build our own package pre/post scripts
sudo rm -rf ${WORKDIR} /opt/elasticsearch-curator
mkdir -p ${WORKDIR}
for file in ${C_POST_INSTALL} ${C_PRE_REMOVE} ${C_POST_REMOVE}; do
  echo '#!/bin/bash' > ${file}
  echo >> ${file}
  chmod +x ${file}
done

remove_python() {
  sudo rm -f /usr/local/lib/libpython${1}m.a
  sudo rm -f /usr/local/lib/pkgconfig/python-${1}.pc
  sudo rm -rf /usr/local/lib/python${1}
  sudo rm -f /usr/lib/libpython${1}.a
  sudo rm -rf /usr/local/include/python${1}m
  cd /usr/local/bin
  sudo rm -f 2to3-${1} easy_install-${1} idle${1} pip${1} pydoc${1} python${1} python${1}m python${1}m-config pyvenv-${1}
  cd -
}

build_python() {
  cd /tmp
  wget -c https://www.python.org/ftp/python/${1}/Python-${1}.tgz
  tar zxf Python-${1}.tgz
  cd Python-${1}
  ./configure --prefix=/usr/local
  sudo make altinstall
  sudo ln -s /usr/local/lib/libpython${1}m.a /usr/lib/libpython${1}.a
  cd -
}

echo "ln -s /opt/elasticsearch-curator/curator /usr/bin/curator" >> ${C_POST_INSTALL}
echo "ln -s /opt/elasticsearch-curator/curator_cli /usr/bin/curator_cli" >> ${C_POST_INSTALL}
echo "ln -s /opt/elasticsearch-curator/es_repo_mgr /usr/bin/es_repo_mgr" >> ${C_POST_INSTALL}
echo "ln -s /opt/elasticsearch-curator/curator /usr/bin/curator" >> ${C_POST_UPGRADE}
echo "ln -s /opt/elasticsearch-curator/curator_cli /usr/bin/curator_cli" >> ${C_POST_UPGRADE}
echo "ln -s /opt/elasticsearch-curator/es_repo_mgr /usr/bin/es_repo_mgr" >> ${C_POST_UPGRADE}
echo "rm -f /usr/bin/curator" >> ${C_PRE_REMOVE}
echo "rm -f /usr/bin/curator_cli" >> ${C_PRE_REMOVE}
echo "rm -f /usr/bin/es_repo_mgr" >> ${C_PRE_REMOVE}
echo "rm -f /usr/bin/curator" >> ${C_PRE_UPGRADE}
echo "rm -f /usr/bin/curator_cli" >> ${C_PRE_UPGRADE}
echo "rm -f /usr/bin/es_repo_mgr" >> ${C_PRE_UPGRADE}
echo 'if [ -d "/opt/elasticsearch-curator" ]; then' >> ${C_POST_REMOVE}
echo ' rm -rf /opt/elasticsearch-curator' >> ${C_POST_REMOVE}
echo 'fi' >> ${C_POST_REMOVE}

ID=$(grep ^ID\= /etc/*release | awk -F\= '{print $2}' | tr -d \")
VERSION_ID=$(grep ^VERSION_ID\= /etc/*release | awk -F\= '{print $2}' | tr -d \")
if [ "${ID}x" == "x" ]; then
  ID=$(cat /etc/*release | grep -v LSB | uniq | awk '{print $1}' | tr "[:upper:]" "[:lower:]" )
  VERSION_ID=$(cat /etc/*release | grep -v LSB | uniq | awk '{print $3}' | awk -F\. '{print $1}')
fi

# build
if [ "${1}x" == "x" ]; then
  echo "Must provide version number (can be arbitrary)"
  exit 1
else
  cd ..
  SOURCE_DIR=$(pwd)
fi

case "$ID" in
  ubuntu|debian)
    PKGTYPE=deb
    PLATFORM=debian
    case "$VERSION_ID" in
      1404|1604|8) PACKAGEDIR="${PKG_TARGET}/${1}/${PLATFORM}";;
      9) PACKAGEDIR="${PKG_TARGET}/${1}/${PLATFORM}${VERSION_ID}";;
      *) PACKAGEDIR="${PKG_TARGET}/${1}/${PLATFORM}";;
    esac
    sudo apt update -y
    sudo apt install -y openssl zlib1g zlib1g-dev libreadline-gplv2-dev libncursesw5-dev libssl-dev libsqlite3-dev tk-dev libgdbm-dev libc6-dev libbz2-dev dirmngr curl
    ;;
  centos|rhel)
    PKGTYPE=rpm
    PLATFORM=centos
    case "$VERSION_ID" in
      6|7)
        sudo rm -f /etc/yum.repos.d/puppetlabs-pc1.repo
        sudo yum -y update
        sudo yum install -y openssl
        ;;
      *) echo "unknown system version: ${VERSION_ID}"; exit 1;;
    esac
    PACKAGEDIR="${PKG_TARGET}/${1}/${PLATFORM}/${VERSION_ID}"
    ;;
  *) echo "unknown system type: ${ID}"; exit 1;;
esac

HAS_PY3=$(which python${PYVER})
if [ "${HAS_PY3}x" == "x" ]; then
  build_python ${PYVER}.${MINOR}
fi
FOUNDVER=$(python${PYVER} --version | awk '{print $2}')
if [ "${FOUNDVER}" != "${PYVER}.${MINOR}" ]; then
  remove_python $(echo ${FOUNDVER} | awk -F\. '{print $1"."$2}')
  build_python ${PYVER}.${MINOR}
fi

PIPBIN=/usr/local/bin/pip${PYVER}
PYBIN=/usr/local/bin/python${PYVER}

if [ -e "${HOME}/.rvm/scripts/rvm" ]; then
  source ${HOME}/.rvm/scripts/rvm
fi
HAS_FPM=$(which fpm)
if [ "${HAS_FPM}x" == "x" ]; then
  gpg --keyserver hkp://keys.gnupg.net --recv-keys 409B6B1796C275462A1703113804BB82D39DC0E3
  curl -sSL https://get.rvm.io | bash -s stable
  source ${HOME}/.rvm/scripts/rvm
  rvm install ruby
  gem install fpm
fi

${PIPBIN} install -U --user setuptools
${PIPBIN} install -U --user requests_aws4auth
if [ "${CX_VER}" != "$(${PIPBIN} list | grep cx | awk '{print $2}' | tr -d '()')" ]; then
  cd ${WORKDIR}
  rm -rf ${CX_PATH}
  tar zxf ${CX_FILE}
  cd ${CX_PATH}
  ${PIPBIN} install -U --user .
  cd ${WORKDIR}
fi

cd $SOURCE_DIR
mkdir -p ${PACKAGEDIR}
${PIPBIN} install -U --user -r requirements.txt
${PYBIN} setup.py build_exe
sudo mv build/exe.linux-x86_64-${PYVER} /opt/elasticsearch-curator
sudo chown -R root:root /opt/elasticsearch-curator
cd $WORKDIR

fpm \
 -s dir \
 -t ${PKGTYPE} \
 -n elasticsearch-curator \
 -v ${1} \
 --vendor ${VENDOR} \
 --maintainer "${MAINTAINER}" \
 --license 'Apache-2.0' \
 --category tools \
 --description 'Have indices in Elasticsearch? This is the tool for you!\n\nLike a museum curator manages the exhibits and collections on display, \nElasticsearch Curator helps you curate, or manage your indices.' \
 --after-install ${C_POST_INSTALL} \
 --before-remove ${C_PRE_REMOVE} \
 --after-remove ${C_POST_REMOVE} \
 --before-upgrade ${C_PRE_UPGRADE} \
 --after-upgrade ${C_POST_UPGRADE} \
 --provides elasticsearch-curator \
 --conflicts python-elasticsearch-curator \
 --conflicts python3-elasticsearch-curator \
/opt/elasticsearch-curator

mv ${WORKDIR}/*.${PKGTYPE} ${PACKAGEDIR}
rm ${C_POST_INSTALL} ${C_PRE_REMOVE} ${C_POST_REMOVE} ${C_PRE_UPGRADE} ${C_POST_UPGRADE}

# go back to where we started
cd ${BASEPATH}
curator-5.2.0/unix_packages/cx_freeze-5.0.1.dev.tar.gz000066400000000000000000002501661315226075300223510ustar00rootroot00000000000000
[binary content: gzip-compressed cx_Freeze 5.0.1.dev source archive; not representable as text and omitted here]