sahara-plugin-vanilla-10.0.0/.stestr.conf

[DEFAULT]
test_path=./sahara_plugin_vanilla/tests/unit
top_dir=./

sahara-plugin-vanilla-10.0.0/.zuul.yaml

- project:
    templates:
      - check-requirements
      - openstack-python3-zed-jobs
      - publish-openstack-docs-pti
      - release-notes-jobs-python3

sahara-plugin-vanilla-10.0.0/AUTHORS

Abhishek Chanda
Adrien Vergé
Alexander Ignatov
Alok Jani
Andreas Jaeger
Andreas Jaeger
Andrew Lazarev
Andrey Pavlov
Anh Tran
Artem Osadchiy
Bo Wang
Cao Xuan Hoang
ChangBo Guo(gcb)
Charles Short
Corey Bryant
Daniele Venzano
Davanum Srinivas
DennyZhang
Dexter Fryar
Dina Belova
Dirk Mueller
Dmitry Mescheryakov
Doug Hellmann
Ethan Gafford
Evgeny Sikachev
Fengqian Gao
Ghanshyam Mann
Guo Shan
Hervé Beraud
Hongbin Lu
Iwona Kotlarska
Jacob Bin Wang
James E. Blair
Javeme
Jaxon Wang
Jeremy Freudberg
Jeremy Stanley
Jesse Pretorius
Jon Maron
Joseph D Natoli
Julien Danjou
Kazuki OIKAWA
Kazuki Oikawa
Ken Chen
Li, Chen
Luigi Toscano
Luong Anh Tuan
Marianne Linhares Monteiro
Matthew Farrellee
Michael Ionkin
Michael Krotscheck
Michael Lelyakin
Michael McCune
Michael McCune
Mikhail Lelyakin
Nadya Privalova
Ngo Quoc Cuong
Nguyen Hai
Nikita Konovalov
Nikolay Starodubtsev
Ondřej Nový
OpenStack Release Bot
PanFengyun
PavlovAndrey
Ronald Bradford
Ruslan Kamaldinov
Sean McGinnis
Sergey Lukjanov
Sergey Lukjanov
Sergey Reshetnyak
Sergey Reshetnyak
Sergey Vilgelm
Shu Yingya
Shuquan Huang
Telles Nobrega
Telles Nobrega
Thierry Carrez
Thomas Bechtold
Tim Kelsey
Trevor McKay
Vadim Rovachev
Venkateswarlu Pallamala
Vitaly Gridnev
Vitaly Gridnev
Xi Yang
Yaroslav Lobankov
ZhongShengping
Zhuang Changkun
akhiljain23
anguoming
artemosadchiy
caoyue
chenpengzi <1523688226@qq.com>
chenxing
dmitryme
jiasen.lin
lcsong
luhuichun
mathspanda
nizam
pangliye
pawnesh.kumar
pengyuesheng
ricolin
xuhaigang
zhanghongtao

sahara-plugin-vanilla-10.0.0/CONTRIBUTING.rst

The source repository for this project can be found at:

https://opendev.org/openstack/sahara-plugin-vanilla

Pull requests submitted through GitHub are not monitored.
To start contributing to OpenStack, follow the steps in the contribution guide to set up and use Gerrit: https://docs.openstack.org/contributors/code-and-documentation/quick-start.html Bugs should be filed on Storyboard: https://storyboard.openstack.org/#!/project/openstack/sahara-plugin-vanilla For more specific information about contributing to this repository, see the sahara-plugin-vanilla contributor guide: https://docs.openstack.org/sahara-plugin-vanilla/latest/contributor/contributing.html ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1696419356.0 sahara-plugin-vanilla-10.0.0/ChangeLog0000664000175000017500000007320100000000000017547 0ustar00zuulzuul00000000000000CHANGES ======= 10.0.0 ------ * Dropping lower constraints testing and remove py36,py37 support 8.0.0 ----- * Update master for stable/yoga 7.0.0 ----- * Update master for stable/xena 6.0.0 ----- * Update master for stable/wallaby 5.0.0 ----- * Imported Translations from Zanata * Fix reqs (focal), remove linters from l-r * Add Python3 wallaby unit tests * Update master for stable/victoria 4.0.0 ----- * Use unittest.mock instead of mock * Switch to newer openstackdocstheme and reno versions * Fix hacking min version to 3.0.1 * Imported Translations from Zanata * Bump default tox env from py37 to py38 * Add py38 package metadata * Add Python3 victoria unit tests * Update master for stable/ussuri 3.0.0 ----- * Ussuri contributor docs community goal * Cleanup py27 support * Update hacking for Python3 * fix: typo in tox minversion option * [ussuri][goal] Drop python 2.7 support and testing * Switch to Ussuri jobs * Imported Translations from Zanata * Imported Translations from Zanata * Update master for stable/train 2.0.0.0rc1 ---------- * Imported Translations from Zanata * Fix string * Update the constraints url * Doc updates: bump theme to 1.20.0, add PDF build * Imported Translations from Zanata * Limit envlist to py37 for Python 3 Train goal * Update sphinx from current requirements * Update Python 3 test runtimes for Train * Replace git.openstack.org URLs with opendev.org URLs * OpenDev Migration Patch * Dropping the py35 testing * Update master for stable/stein 1.0.0 ----- * add python 3.7 unit test job * Reduce the dependencies, add more common Zuul jobs * Update mailinglist from dev to discuss * Migrate away from oslo\_i18n.enable\_lazy() * Fix translations: add the babel.cfg file * Post-import fixes: name, license, doc, translations * Updating plugin documentation and release notes * Add .gitreview and basic Zuul jobs * Plugins splitted from sahara core * Add framework for sahara-status upgrade check * doc: restructure the image building documentation * Cleanup tox.ini constraint handling * doc: update distro information and cloud-init users * Imported Translations from Zanata * Update reno for stable/rocky * Imported Translations from Zanata * S3 data source * Switch the coverage tox target to stestr * Switch ostestr to stestr * Bump Flask version according requirements * Remove any reference to pre-built images * fix tox python3 overrides * Add support to deploy hadoop 2.7.5 * Switch from sahara-file to tarballs.o.o for artifacts * doc: add the redirect for a file recently renamed * Remove the (now obsolete) pip-missing-reqs tox target * Remove step upload package to oozie/sharelib * uncap eventlet * Follow the new PTI for document build * Updated from global requirements * add lower-constraints job * Migration to Storyboard * Updated from global requirements * Updated 
from global requirements * Updated from global requirements * Updated from global requirements * Updated from global requirements * Imported Translations from Zanata * Fix Spark EDP job failed in vanilla 2.8.2 * Updated from global requirements * Updated from global requirements * Imported Translations from Zanata * Update reno for stable/queens * Replace chinese quotes * Enable hacking-extensions H204, H205 * Add support to deploy Hadoop 2.8.2 * Updated from global requirements * Updated from global requirements * Updated from global requirements * Updated from global requirements * Updated from global requirements * S3 job binary and binary retriever * Updated from global requirements * Updated from global requirements * Updated from global requirements * Updated from global requirements * Updated from global requirements * Updated from global requirements * Updated from global requirements * Remove setting of version/release from releasenotes * Updated from global requirements * Updated from global requirements * Add ZooKeeper support in Vanilla cluster * Incorrect indent Sahara Installation Guide in sahara * Updated from global requirements * Spark History Server in Vanilla auto sec group * Policy in code for Sahara * Updated from global requirements * Updated from global requirements * Add default configuration files to data\_files * Updated from global requirements * Updated from global requirements * Updated from global requirements * [ut] replace .testr.conf with .stestr.conf * Updated from global requirements * Updated from global requirements * Updated from global requirements * Updated from global requirements * doc: point to the main git repository and update links * Updated from global requirements * Updated from global requirements * Imported Translations from Zanata * Update reno for stable/pike * Updated from global requirements * Restructure the documentation according the new spec * Enable some off-by-default checks * Updated from global requirements * Updated from global requirements * Updated from global requirements * Updated from global requirements * Updated from global requirements * Updated from global requirements * Update the documentation link for doc migration * Update Documention link * Updated from global requirements * Enable warnings as errors for doc building * Enable H904 check * doc: update the configuration of the theme * Updated from global requirements * Fix direct patches of methods in test\_versionhandler.py * Add test to sahara/plugins/vanilla/hadoop2/scaling.py * Add test to sahara/plugins/vanilla/hadoop2/run\_scripts.py * doc: switch to openstackdocstheme and add metadata * Updated from global requirements * Fix wrong patch in unit tests * Updated from global requirements * Add test to sahara/plugins/vanilla/hadoop2/starting\_scripts.py * Add test to edp\_engine.py * Add test to sahara/plugins/vanilla/hadoop2/oozie\_helper.py * Add test to sahara/plugins/vanilla/hadoop2/config\_helper.py * Add test to sahara/plugins/vanilla/v2\_7\_1/config\_helper.py * Updated from global requirements * Updated from global requirements * Add test to sahara/plugins/vanilla/v2\_7\_1/versionhandler.py * Updated from global requirements * Updated from global requirements * Updated from global requirements * Updated from global requirements * Basic script for pack-based build image * Remove usage of parameter enforce\_type * Updated from global requirements * Updated from global requirements * Remove log translations * Updated from global requirements * Remove log 
translations * Remove log translations * Remove log translations * Updated from global requirements * Apply monkeypatching from eventlet before the tests starts * install saharaclient from pypi if not from source * Add ability to install with Apache in devstack * Support Job binary pluggability * Updated from global requirements * Updated from global requirements * Support Data Source pluggability * Indicating the location tests directory in oslo\_debug\_helper * Remove unused logging import * Fix api-ref build * Update validation unit test for all Vanilla processes * Updated from global requirements * [Fix gate]Update test requirement * Updated from global requirements * Updated from global requirements * Updated from global requirements * Add test\_get\_nodemanagers() * Updated from global requirements * Remove support for py34 * Update reno for stable/ocata * Updated from global requirements * Updated from global requirements * Updated from global requirements * Updated from global requirements * Updated from global requirements * Updated from global requirements * Updated from global requirements * Updated from global requirements * Updated from global requirements * Updated from global requirements * Problem about permission * Updated from global requirements * Updated from global requirements * Updated from global requirements * Updated from global requirements * Updated from global requirements * Updated from global requirements * Updated from global requirements * Remove enable\_notifications option * Updated from global requirements * Updated from global requirements * Replaces uuid.uuid4 with uuidutils.generate\_uuid() * Updated from global requirements * Updated from global requirements * Updated from global requirements * Updated from global requirements * Constraints are ready to be used for tox.ini * Enable release notes translation * Updated from global requirements * Updated from global requirements * Updated from global requirements * Updated from global requirements * Updated from global requirements * Updated from global requirements * Updated from global requirements * Update reno for stable/newton * fix docs env * Remove entry point of sahara tempest plugin * Updated from global requirements * Remove Tempest-like tests for clients (see sahara-tests) * standardize release note page ordering * Spark on Vanilla Clusters * Updated from global requirements * Updated from global requirements * Updating DOC on floating IPs change * Updated from global requirements * Updated from global requirements * Updated from global requirements * Updated from global requirements * Updated from global requirements * Remove hardcoded password from db schema * Updated from global requirements * Updated from global requirements * Updated from global requirements * Updated from global requirements * Updated from global requirements * Updated from global requirements * Designate integration * Updated from global requirements * Updated from global requirements * Updated from global requirements * Updated from global requirements * Remove hardcoded password for Oozie service * Updated from global requirements * Updated from global requirements * Add Python 3.5 classifier and venv * CLI for Plugin-Declared Image Declaration * Simplify tox hacking rule to match other projects * improvements on api for plugins * Updated from global requirements * Updated from global requirements * fix building api ref docs * Updated from global requirements * Updated from global requirements * Updated from global 
requirements * novaclient.v2.images to glanceclient migration * Updated from global requirements * Updated from global requirements * Updated from global requirements * Updated from global requirements * Moving WADL docs to Sahara repository * Updated from global requirements * Updated from global requirements * Updated from global requirements * Updated from global requirements * Updated from global requirements * Updated from global requirements * Updated from global requirements * Updated from global requirements * Remove hdp 2.0.6 plugin * Updated from global requirements * Updated from global requirements * Updated from global requirements * Updated from global requirements * Remove openstack/common related stuff * Updated from global requirements * Updated from global requirements * keystoneclient to keystoneauth migration * PrettyTable and rfc3986 are no longer used in tests * Updated from global requirements * Updated from global requirements * Updated from global requirements * Updated from global requirements * Updated from global requirements * Updated from global requirements * Updated from global requirements * Bandit password tests * Move bandit to pep8 * Revert "Remove PyMySQL and psycopg2 from test-requirements.txt" * Remove PyMySQL and psycopg2 from test-requirements.txt * Update reno for stable/mitaka * register the config generator default hook with the right name * Updated from global requirements * Moved CORS middleware configuration into oslo-config-generator * Updated from global requirements * Updated from global requirements * Use ostestr instead of the custom pretty\_tox.sh * Updated from global requirements * Updated from global requirements * Updated from global requirements * Remove vanilla 2.6.0 code * Updated from global requirements * Add support running Sahara as wsgi app * Don't use Mock.called\_once\_with that does not exist * Python3: Fix using dictionary keys() * Updated from global requirements * Distributed periodic tasks implementation * Updated from global requirements * Remove outdated pot files * Updated from global requirements * Updated from global requirements * Remove scenario tests and related files * Updated from global requirements * Updated from global requirements * Updated from global requirements * add debug testenv in tox * Updated from global requirements * add helper functions for key manager * Enable passwordless ssh beetween vanilla nodes * Updated from global requirements * Updated from global requirements * Ensure default arguments are not mutable * Clean the code in vanilla's utils * Initial key manager implementation * Updated from global requirements * Updated from global requirements * Replace assertEqual(None, \*) with assertIsNone in tests * Updated from global requirements * Deprecated tox -downloadcache option removed * Updated from global requirements * Updated from global requirements * Updated from global requirements * Updated from global requirements * test: make enforce\_type=True in CONF.set\_override * Remove version from setup.cfg * Force releasenotes warnings to be treated as errors * Updated from global requirements * Updated from global requirements * Updated from global requirements * Drop direct engine support * Remove old integration tests for sahara codebase * Updated from global requirements * Updated from global requirements * Add "unreleased" release notes page * Support reno for release notes management * Updated from global requirements * Fix doc8 check failures * Updated from global requirements * 
Run py34 first in default tox run * Updated from global requirements * Publish sample conf to docs * Move doc8 dependency to test-requirements.txt * Fix E005 bashate error * Updated from global requirements * Updated from global requirements * Updated from global requirements * Updated from global requirements * Updated from global requirements * Add testresources used by oslo.db fixture * code cleanup * Updated from global requirements * Updated from global requirements * Updated from global requirements * replace multiple if stmts with dict and for loop * use list comprehensions * Open Mitaka development * Updated from global requirements * Adapt python client tests to use Tempest plugin interface * Formatting and mounting methods changed for ironic * Updated from global requirements * Register SSL cert in Java keystore to access to swift via SSL * Drop Vanilla Hadoop 1 * Modify recommend\_configs arguments in vanilla 1 * Updated from global requirements * Updated from global requirements * Remove useless test dependency 'discover' * Disable autotune configs for scaling old clusters * Deprecate Vanilla 2.6.0 * Updated from global requirements * Updated from global requirements * Updated from global requirements * Updated from global requirements * New version of HDP plugin * Use "get\_instances" method from sahara.plugins.utils * Updated from global requirements * Updated from global requirements * Add script to report uncovered new lines * Updated from global requirements * Update vanilla plugin to the latest version * Make starting scripts module for vanilla 2 plugin * Small refactoring for vanilla 2 * Updated from global requirements * Drop support of deprecated 2.4.1 Vanilla plugin * Mount share API * Updated from global requirements * Updated from global requirements * Remove extra merge methods in plugins * Cleanup .gitignore * Ignore .eggs directory in git * Implement recommendations for vanilla 2.6.0 * Remove openstack.common package * updating documentation on devstack usage * Updated from global requirements * Updated from global requirements * Fix problem with using volumes for HDFS data in vanilla plugin * Add py34 to envlist * Add bashate check for devstack scripts * Updated from global requirements * Updated from global requirements * Updated from global requirements * pass environment variables of proxy to tox * Switch to oslo.service * Updated from global requirements * Updated from global requirements * Update version for Liberty * Refactor exception is Sahara * Updated from global requirements * Updated from global requirements * Updated from global requirements * Updated from global requirements * Use PyMySQL as MySQL DB driver for unit tests * Updated from global requirements * Improve compatible with python3 * Updated from global requirements * Remove sqlalchemy-migrate from test-requirements * Updated from global requirements * Adding basic bandit config * Use correct config\_helper in Vanilla 2.6 * Updated from global requirements * Fixing log messages to avoid information duplication * Adding cluster, instance, job\_execution ids to logs * Updated from global requirements * Restrict cluster to have at most one secondary namenode * Adding config hints for vanilla plugin * Fix strange check in code * Updated from global requirements * Open Liberty development * Migrate to oslo.policy lib instead of copy-pasted oslo-incubator * Add unit-tests for new integration tests * Leverage dict comprehension in PEP-0274 * Add a CLI tool for managing default templates * Add 
validation in new integration tests * Updated from global requirements * Add usages of plugin poll - part 1 * Remove the sahara.conf.sample file * Add usages of plugin poll - part 2 * Fix order of arguments in assertEqual - Part2 * Implement job-types endpoint support methods for Vanilla plugin * Replace empty list with scalable process in scaling * Rewrite log levels and messages * Adding barbican client and keymgr module * Updated from global requirements * Updated from global requirements * Apply event-log feature for Vanilla plugins * Fix indent miss caused by f4138a30c972fce334e5e2a0fc78570b0ddb288b * Add support of several scenario files in integration tests * Collect errors in new integration tests * Add support for oslo\_debug\_helper to tox.ini * Updated from global requirements * Updated from global requirements * Reorganized heat template generation code * New integration tests - base functional * Updated from global requirements * Refactor MapR plugin for Sahara * Updated from global requirements * Add ability to get cluster\_id directly from instance * [Vanilla2] Open ports for hive * Updated from global requirements * Using oslo\_\* instead of oslo.\* * Updated from global requirements * Remove log module from common modules * Drop cli/sahara-rootwrap * Make vanilla 2.4.1 plugin deprecated * Updated from global requirements * Migrate to oslo.log * Updated from global requirements * Updated from global requirements * Updated from global requirements * Add refactor to Vanilla 1.2.1 * Remove useless packages from requirements * Adding hive support for vanilla 2.6 * Use pretty-tox for better test output * Updated from global requirements * Move to hacking 0.10 * Added ability to listen HTTPS port * Updated from global requirements * Updated from global requirements * Updated from global requirements * Adding Storm entry point to setup.cfg * Cleaned up config generator settings * Extracted config check from pep8 to separate env * Fixed vanilla1/2 cluster not launched problem * Migrate to oslo.concurrency * Added validation on proxy domain for 'hiveserver' process * Updated from global requirements * Updated from global requirements * Migrate to oslo.context * Adding Hadoop 2.6.0 support to Vanilla plugin * Fixed configs generation for vanilla2 * Updated from global requirements * Updated from global requirements * Update plugin descriptions * Inherit Context from oslo * Remove py26 from tox * Added hive support for vanilla2 * Updated from global requirements * Updated from global requirements * Correcting small grammatical errors in logs * Updated from global requirements * Added ability to access a swift from vanilla-1 hive * Updated from global requirements * Remove oslo-incubator's gettextutils * Drop obsolete oslo-confing-generator * Fix vanilla test\_get\_configs() for i386 * Add missed translations * MapR plugin implementation * Small refactoring of get\_by\_id methods * Use oslo.middleware instead of copy-pasted * Updated from global requirements * Updated from global requirements * Updated from global requirements * Remove Vanilla 2.3 Hadoop * Add bashate checks * Use new style classes everywhere * Fix bashate errors * Updated from global requirements * Moved exceptions.py and utils.py up to plugins dir * Adding support for oslo.rootwrap to namespace access * Make versions list sorted for Vanilla and HDP * Support Cinder API version 2 * Open Kilo development * Updated from global requirements * [Vanilla] Increased security of temporary files for db * Add pip-missing-reqs 
tox env * Add genconfig tox env * Updated from global requirements * Updated from global requirements * Moved validate\_edp from plugin SPI to edp\_engine * Migrate to oslo.serialization * Add warn re sorting requirements * Add doc8 tox env * Removed comment about hashseed reset in unit tests * Moved get\_oozie\_server from plugin SPI to edp\_engine * Moved URI getters from plugin SPI to edp\_engine * Implemented get\_open\_ports method for vanilla hadoop2 * Added ability to create security group automatically * Make starting services in Vanilla 2.4.1 parallel * Updated from global requirements * Use auth\_token from keystonemiddleware * Updated from global requirements * Fix updating include files after scaling for vanilla 2 plugin * Make Vanilla 2.3.0 plugin deprecated * Added create\_hdfs\_dir method to oozie edp engine * Made EDP engine plugin specific * Do not rely on hash ordering in tests * Fix some of tests that rely on hash ordering * Correction of words decoMMiSSion-decoMMiSSioning * Updated from global requirements * Add translation support to plugin modules * Migration to oslo.utils * Update oslo.messaging to alpha/juno version * Update oslo.config to the alpha/juno version * Updated from global requirements * Removed a duplicate directive * Fixed a ValueError on provisioning cluster * Set python hash seed to 0 in tox.ini * Add CDH plugin to Sahara * Add rm from docs env to whitelist to avoid warn * Migration to oslo.db * Add translation support to upper level modules * Updated from global requirements * Bump Hadoop to 2.4.1 version * Fix creating cluster with Vanilla 2.4.0 plugin * Update oslo-incubator lockutils module * Fix scaling cluster Vanilla for Hadoop 2.3 * Updated from global requirements * Add vanilla plugin with Hadoop 2.4.0 * Fixed configuring instances for Vanilla 2.0 * Use oslo.i18n * Add oslo.i18n lib to requirements * Remove docutils pin * Fixed hadoop keys generation in case of existing extra * Switched Sahara unit tests base class to oslotest * Updated from global requirements * Fix formatting in readme for vanilla configs * Added validation check for number of datanodes * Corrected a number of pep8 errors * Updated from global requirements * Refactoring vanilla 2 plugin * Updated from global requirements * Fixed number of hacking errors * Updated from global requirements * Fixed H405 pep8 style check * Updated from global requirements * Migrated integration tests to testtools * Updated from global requirements * Fixed E265 pep8 * Added new hacking version to requirements * Updated from global requirements * Hided not found logger messages in unit tests * Migrated unit tests to testtools * Added jobhistory address config to vanilla 2 * Added secondary name node heap size param to vanilla plugin * Use in-memory sqlite DB for unit tests * Updated from global requirements * Sync the latest DB code from oslo-incubator * Added ability to run HDFS service only with Hadoop 2 * Removed versions from Vanilla plugin description * Updated from global requirements * Replaced RuntimeErrors with specific errors * Add Spark plugin to Sahara * Updated from global requirements * Extended plugin SPI with methods to communicate with EDP * Updated from global requirements * Updated from global requirements * Split sahara into sahara-api and sahara-engine * Add sahara-all binary * Fix eventlet monkey patch and threadlocal usage * Fix running EDP job on transient cluster * Add simple fake plugin for testing * Moved information about processes names to plugins * Updated from 
global requirements * Add secondarynamenode support to vanilla 2 plugin * Fixed wrong exceptions use for decommission errors * Remove IDH plugin from sahara * Add \*.log files to gitignore * Updated from global requirements * Fixed wrong use of SaharaException * Check that all po/pot files are valid * Added rackawareness to Hadoop 2 in vanilla plugin * Open Juno dev * Some configs updates for vanilla 2 plugin * Add EDP support for Vanilla 2 plugin * Updated from global requirements * Remove agent remote * Fixed incorrect use of RuntimeError * Move swift configs to core-site.xml * Updated from global requirements * Rename strings in plugins dir * Add Job History Server process to vanilla 2 plugin * Miscellaneous renaming string fixes * Move integration tests to python-saharaclient 0.6.0 * Change remaining savanna namespaces in setup.cfg * Renaming files with savanna words in its names * Keep python 3.X compatibility for xrange * Move the savanna subdir to sahara * Update i18n config due to the renaming * Updated from global requirements * Updated from global requirements * Make savanna able to be executed as sahara * Updated from global requirements * Add alias 'direct' for savanna/direct engine * Intial Agent remote implementation * Updated from global requirements * Updated from global requirements * Updated from global requirements * Updated from global requirements * Auto generate and check config sample * Switch over to oslosphinx * Further preparation for transition to guest agent * Sync with global requirements * Make remote pluggable * Sync with global-requirements * Remove kombu from requirements * Bump stevedore to >=0.14 * Updated from global requirements * Updated from global requirements * Add integration test for Oozie java action * Updated from global requirements * Add alembic migration tool to sqlalchemy * Add missed i18n configs to setup.cfg * Extract common part of instances.py and instances\_heat.py * Adding IDH plugin basic implementation * Removal of AUTHORS file from repo * Launch integration tests with testr * Provisioning via Heat * Migrating to testr * Sync requirements: pin Sphinx to <1.2 * Bump savanna client used for tests to >= 0.4.0 * Make infrastructure engine pluggable * Use stevedore for plugins loading * There is no sense to keep py33 in tox envs * Revert "Support building wheels (PEP-427)" * Bump version to 2014.1 * Support building wheels (PEP-427) * Hacking contains all needed requirements * Fix style errors and upgrade hacking * Enable network operations over neutron private nets * Sync with global-requirements * Use release version of python-savannaclient * Add lower bound for the six dep * Use python-savannaclient 0.3.rc4 * Use savanna client 0.3-rc3 * Added EDP testing * Move swift client to runtime requirements * Hide savanna-subprocess endpoint from end users * Added rack topology configuration for hadoop cluster * Bump savanna client version to 0.3-rc2 * Add missing package dependency for test\_requirements.txt * Sync with global requirements * Replace copy-pasted sphinx theme with oslo.sphinx * Migration to new integration tests * Revert bump of alembic version * Integration test refactoring * Bump oslo.config version to use Havana release * Add default sqlite db to .gitignore * Sync requirements with global requirements * Remove version pbr pins from setup\_requires * Floating ip assignement support * Add direct dependency on iso8601 * Wrapping ssh calls into subprocesses * Sync requirements with os/requirements * Use setup.py develop for tox 
install * Move Babel from test to runtime requirements * Sync requirements with global-requirements * Get rid of pycrypto dep * Install configs to share/savanna from etc/savanna * Migrate to pbr * First steps for i18n support * Limit requests version * Sync OpenStack commons with oslo-incubator * Migrate to Conductor * Sync with global requirements * Raise eventlet to 0.13.0 * Bump hacking to 0.7 * Made Ambari RPM location configurable * Bump version to 0.3 * Fix docs build * Fix requests version * Add check S361 for imports of savanna.db module * Update requirements to the latest versions * Improve coverage calculation * Created savanna-db-manage script for new DB * Workflow creator * Docs build fixed * Enforce hacking >=0.6.0 * Allow sqlalchemy 0.8.X * Move requirements files to the common place * Use console\_scripts instead of bin * Cluster scaling: deletion * Added config tests * Fix author/homepage in setup.py * Rollback sitepackages fix for tox.ini * Fix pep8 and pycrypto versions, fix tox.ini * Posargs has been added to the flake8 command * License hacking tests has been added * XML coverage report added (cobertura) * Add cover report to .gitignore * Heap Size can be applied for Hadoop services now * Vanilla plugin configs are more informative now * Helper for Swift integration was added * Implementation of Vanilla Plugin * Adding lintstack to support pylint testing * Enable all code style tests * Introduce py33 to tox.ini * AUTHORS added to the repo * Initial version of Savanna v0.2 * .gitignore updated * cscope.out has been added to .gitignore * bump version to 0.1.2 * Adds xml hadoop config generating * Implements integration tests * Re-add setuptools-git to setup.py * All tools modev to tox * bump version to 0.1.1 * Remove an invalid trove classifier * setup.py has been improved * setuptools-get has been removed from deps * AUTHORS and ChangeLog has been added to .gitignore * resources has been added to sdist tarball * savanna-manage added to the scripts section of setup.py * Several fixes in tools and docs * Tools has been improved * savanna-manage has been added; reset-db/gen-templates moved to it * Author email has been fixed * dev-conf is now supported * some confs cleanup, pyflakes added to tox * simple tox.ini has been added * eho -> savanna * Build docs is now implemented using setup.py * setup utils is now from oslo-incubator * setup.py has been added * using conf files instead of hardcoded values * pylint and pyflakes static analysis has been added * nosetests.xml added to .gitignore * \*.db added to .gitignore * tests, coverage added * bin added * Initial commit ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1696419316.0 sahara-plugin-vanilla-10.0.0/LICENSE0000664000175000017500000002363600000000000017011 0ustar00zuulzuul00000000000000 Apache License Version 2.0, January 2004 http://www.apache.org/licenses/ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION 1. Definitions. "License" shall mean the terms and conditions for use, reproduction, and distribution as defined by Sections 1 through 9 of this document. "Licensor" shall mean the copyright owner or entity authorized by the copyright owner that is granting the License. "Legal Entity" shall mean the union of the acting entity and all other entities that control, are controlled by, or are under common control with that entity. 
For the purposes of this definition, "control" means (i) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the outstanding shares, or (iii) beneficial ownership of such entity. "You" (or "Your") shall mean an individual or Legal Entity exercising permissions granted by this License. "Source" form shall mean the preferred form for making modifications, including but not limited to software source code, documentation source, and configuration files. "Object" form shall mean any form resulting from mechanical transformation or translation of a Source form, including but not limited to compiled object code, generated documentation, and conversions to other media types. "Work" shall mean the work of authorship, whether in Source or Object form, made available under the License, as indicated by a copyright notice that is included in or attached to the work (an example is provided in the Appendix below). "Derivative Works" shall mean any work, whether in Source or Object form, that is based on (or derived from) the Work and for which the editorial revisions, annotations, elaborations, or other modifications represent, as a whole, an original work of authorship. For the purposes of this License, Derivative Works shall not include works that remain separable from, or merely link (or bind by name) to the interfaces of, the Work and Derivative Works thereof. "Contribution" shall mean any work of authorship, including the original version of the Work and any modifications or additions to that Work or Derivative Works thereof, that is intentionally submitted to Licensor for inclusion in the Work by the copyright owner or by an individual or Legal Entity authorized to submit on behalf of the copyright owner. For the purposes of this definition, "submitted" means any form of electronic, verbal, or written communication sent to the Licensor or its representatives, including but not limited to communication on electronic mailing lists, source code control systems, and issue tracking systems that are managed by, or on behalf of, the Licensor for the purpose of discussing and improving the Work, but excluding communication that is conspicuously marked or otherwise designated in writing by the copyright owner as "Not a Contribution." "Contributor" shall mean Licensor and any individual or Legal Entity on behalf of whom a Contribution has been received by Licensor and subsequently incorporated within the Work. 2. Grant of Copyright License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare Derivative Works of, publicly display, publicly perform, sublicense, and distribute the Work and such Derivative Works in Source or Object form. 3. Grant of Patent License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Work, where such license applies only to those patent claims licensable by such Contributor that are necessarily infringed by their Contribution(s) alone or by combination of their Contribution(s) with the Work to which such Contribution(s) was submitted. 
If You institute patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Work or a Contribution incorporated within the Work constitutes direct or contributory patent infringement, then any patent licenses granted to You under this License for that Work shall terminate as of the date such litigation is filed. 4. Redistribution. You may reproduce and distribute copies of the Work or Derivative Works thereof in any medium, with or without modifications, and in Source or Object form, provided that You meet the following conditions: (a) You must give any other recipients of the Work or Derivative Works a copy of this License; and (b) You must cause any modified files to carry prominent notices stating that You changed the files; and (c) You must retain, in the Source form of any Derivative Works that You distribute, all copyright, patent, trademark, and attribution notices from the Source form of the Work, excluding those notices that do not pertain to any part of the Derivative Works; and (d) If the Work includes a "NOTICE" text file as part of its distribution, then any Derivative Works that You distribute must include a readable copy of the attribution notices contained within such NOTICE file, excluding those notices that do not pertain to any part of the Derivative Works, in at least one of the following places: within a NOTICE text file distributed as part of the Derivative Works; within the Source form or documentation, if provided along with the Derivative Works; or, within a display generated by the Derivative Works, if and wherever such third-party notices normally appear. The contents of the NOTICE file are for informational purposes only and do not modify the License. You may add Your own attribution notices within Derivative Works that You distribute, alongside or as an addendum to the NOTICE text from the Work, provided that such additional attribution notices cannot be construed as modifying the License. You may add Your own copyright statement to Your modifications and may provide additional or different license terms and conditions for use, reproduction, or distribution of Your modifications, or for any such Derivative Works as a whole, provided Your use, reproduction, and distribution of the Work otherwise complies with the conditions stated in this License. 5. Submission of Contributions. Unless You explicitly state otherwise, any Contribution intentionally submitted for inclusion in the Work by You to the Licensor shall be under the terms and conditions of this License, without any additional terms or conditions. Notwithstanding the above, nothing herein shall supersede or modify the terms of any separate license agreement you may have executed with Licensor regarding such Contributions. 6. Trademarks. This License does not grant permission to use the trade names, trademarks, service marks, or product names of the Licensor, except as required for reasonable and customary use in describing the origin of the Work and reproducing the content of the NOTICE file. 7. Disclaimer of Warranty. Unless required by applicable law or agreed to in writing, Licensor provides the Work (and each Contributor provides its Contributions) on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. 
You are solely responsible for determining the appropriateness of using or redistributing the Work and assume any risks associated with Your exercise of permissions under this License. 8. Limitation of Liability. In no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall any Contributor be liable to You for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising as a result of this License or out of the use or inability to use the Work (including but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses), even if such Contributor has been advised of the possibility of such damages. 9. Accepting Warranty or Additional Liability. While redistributing the Work or Derivative Works thereof, You may choose to offer, and charge a fee for, acceptance of support, warranty, indemnity, or other liability obligations and/or rights consistent with this License. However, in accepting such obligations, You may act only on Your own behalf and on Your sole responsibility, not on behalf of any other Contributor, and only if You agree to indemnify, defend, and hold each Contributor harmless for any liability incurred by, or claims asserted against, such Contributor by reason of your accepting any such warranty or additional liability. ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1696419356.9724793 sahara-plugin-vanilla-10.0.0/PKG-INFO0000664000175000017500000000467400000000000017102 0ustar00zuulzuul00000000000000Metadata-Version: 1.2 Name: sahara-plugin-vanilla Version: 10.0.0 Summary: Vanilla Plugin for Sahara Project Home-page: https://docs.openstack.org/sahara/latest/ Author: OpenStack Author-email: openstack-discuss@lists.openstack.org License: Apache Software License Description: ======================== Team and repository tags ======================== .. image:: https://governance.openstack.org/tc/badges/sahara.svg :target: https://governance.openstack.org/tc/reference/tags/index.html .. Change things from this point on OpenStack Data Processing ("Sahara") Vanilla Plugin ==================================================== OpenStack Sahara Vanilla Plugin provides the users the option to start Vanilla clusters on OpenStack Sahara. Check out OpenStack Sahara documentation to see how to deploy the Vanilla Plugin. 
Sahara at wiki.openstack.org: https://wiki.openstack.org/wiki/Sahara
Storyboard project: https://storyboard.openstack.org/#!/project/openstack/sahara-plugin-vanilla
Sahara docs site: https://docs.openstack.org/sahara/latest/
Quickstart guide: https://docs.openstack.org/sahara/latest/user/quickstart.html
How to participate: https://docs.openstack.org/sahara/latest/contributor/how-to-participate.html
Source: https://opendev.org/openstack/sahara-plugin-vanilla
Bugs and feature requests: https://storyboard.openstack.org/#!/project/openstack/sahara-plugin-vanilla
Release notes: https://docs.openstack.org/releasenotes/sahara-plugin-vanilla/

License
-------

Apache License Version 2.0
http://www.apache.org/licenses/LICENSE-2.0

Platform: UNKNOWN
Classifier: Programming Language :: Python
Classifier: Programming Language :: Python :: Implementation :: CPython
Classifier: Programming Language :: Python :: 3 :: Only
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.8
Classifier: Programming Language :: Python :: 3.9
Classifier: Environment :: OpenStack
Classifier: Intended Audience :: Information Technology
Classifier: Intended Audience :: System Administrators
Classifier: License :: OSI Approved :: Apache Software License
Classifier: Operating System :: POSIX :: Linux
Requires-Python: >=3.8

sahara-plugin-vanilla-10.0.0/README.rst

========================
Team and repository tags
========================

.. image:: https://governance.openstack.org/tc/badges/sahara.svg
    :target: https://governance.openstack.org/tc/reference/tags/index.html

.. Change things from this point on

OpenStack Data Processing ("Sahara") Vanilla Plugin
====================================================

OpenStack Sahara Vanilla Plugin provides users with the option to start
Vanilla clusters on OpenStack Sahara. Check out the OpenStack Sahara
documentation to see how to deploy the Vanilla Plugin.
Sahara at wiki.openstack.org: https://wiki.openstack.org/wiki/Sahara Storyboard project: https://storyboard.openstack.org/#!/project/openstack/sahara-plugin-vanilla Sahara docs site: https://docs.openstack.org/sahara/latest/ Quickstart guide: https://docs.openstack.org/sahara/latest/user/quickstart.html How to participate: https://docs.openstack.org/sahara/latest/contributor/how-to-participate.html Source: https://opendev.org/openstack/sahara-plugin-vanilla Bugs and feature requests: https://storyboard.openstack.org/#!/project/openstack/sahara-plugin-vanilla Release notes: https://docs.openstack.org/releasenotes/sahara-plugin-vanilla/ License ------- Apache License Version 2.0 http://www.apache.org/licenses/LICENSE-2.0 ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1696419316.0 sahara-plugin-vanilla-10.0.0/babel.cfg0000664000175000017500000000002000000000000017510 0ustar00zuulzuul00000000000000[python: **.py] ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1696419356.9444795 sahara-plugin-vanilla-10.0.0/doc/0000775000175000017500000000000000000000000016537 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1696419316.0 sahara-plugin-vanilla-10.0.0/doc/requirements.txt0000664000175000017500000000061700000000000022027 0ustar00zuulzuul00000000000000# The order of packages is significant, because pip processes them in the order # of appearance. Changing the order has an impact on the overall integration # process, which may cause wedges in the gate later. openstackdocstheme>=2.2.1 # Apache-2.0 os-api-ref>=1.4.0 # Apache-2.0 reno>=3.1.0 # Apache-2.0 sphinx>=2.0.0,!=2.1.0 # BSD sphinxcontrib-httpdomain>=1.3.0 # BSD whereto>=0.3.0 # Apache-2.0 ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1696419356.9444795 sahara-plugin-vanilla-10.0.0/doc/source/0000775000175000017500000000000000000000000020037 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1696419316.0 sahara-plugin-vanilla-10.0.0/doc/source/conf.py0000664000175000017500000001527400000000000021347 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # # sahara-plugin-vanilla documentation build configuration file. # # -- General configuration ----------------------------------------------------- # If your documentation needs a minimal Sphinx version, state it here. #needs_sphinx = '1.0' # Add any Sphinx extension module names here, as strings. They can be extensions # coming with Sphinx (named 'sphinx.ext.*') or your custom ones. extensions = [ 'reno.sphinxext', 'openstackdocstheme', ] # openstackdocstheme options openstackdocs_repo_name = 'openstack/sahara-plugin-vanilla' openstackdocs_pdf_link = True openstackdocs_use_storyboard = True openstackdocs_projects = [ 'sahara' ] # Add any paths that contain templates here, relative to this directory. templates_path = ['_templates'] # The suffix of source filenames. source_suffix = '.rst' # The encoding of source files. #source_encoding = 'utf-8-sig' # The master toctree document. master_doc = 'index' # General information about the project. copyright = u'2015, Sahara team' # The language for content autogenerated by Sphinx. Refer to documentation # for a list of supported languages. 
#language = None # There are two options for replacing |today|: either, you set today to some # non-false value, then it is used: #today = '' # Else, today_fmt is used as the format for a strftime call. #today_fmt = '%B %d, %Y' # List of patterns, relative to source directory, that match files and # directories to ignore when looking for source files. exclude_patterns = [] # The reST default role (used for this markup: `text`) to use for all documents. #default_role = None # If true, '()' will be appended to :func: etc. cross-reference text. #add_function_parentheses = True # If true, the current module name will be prepended to all description # unit titles (such as .. function::). #add_module_names = True # If true, sectionauthor and moduleauthor directives will be shown in the # output. They are ignored by default. #show_authors = False # The name of the Pygments (syntax highlighting) style to use. pygments_style = 'native' # A list of ignored prefixes for module index sorting. #modindex_common_prefix = [] # -- Options for HTML output --------------------------------------------------- # The theme to use for HTML and HTML Help pages. See the documentation for # a list of builtin themes. html_theme = 'openstackdocs' # Theme options are theme-specific and customize the look and feel of a theme # further. For a list of options available for each theme, see the # documentation. #html_theme_options = {} # Add any paths that contain custom themes here, relative to this directory. #html_theme_path = [] # The name for this set of Sphinx documents. If None, it defaults to # " v documentation". #html_title = None # A shorter title for the navigation bar. Default is the same as html_title. #html_short_title = None # The name of an image file (relative to this directory) to place at the top # of the sidebar. #html_logo = None # The name of an image file (within the static path) to use as favicon of the # docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32 # pixels large. #html_favicon = None # Add any paths that contain custom static files (such as style sheets) here, # relative to this directory. They are copied after the builtin static files, # so a file named "default.css" will overwrite the builtin "default.css". #html_static_path = ['_static'] # If not '', a 'Last updated on:' timestamp is inserted at every page bottom, # using the given strftime format. #html_last_updated_fmt = '%b %d, %Y' # Custom sidebar templates, maps document names to template names. #html_sidebars = {} # Additional templates that should be rendered to pages, maps page names to # template names. #html_additional_pages = {} # If false, no module index is generated. #html_domain_indices = True # If false, no index is generated. #html_use_index = True # If true, the index is split into individual pages for each letter. #html_split_index = False # If true, links to the reST sources are added to the pages. #html_show_sourcelink = True # If true, "Created using Sphinx" is shown in the HTML footer. Default is True. #html_show_sphinx = True # If true, "(C) Copyright ..." is shown in the HTML footer. Default is True. #html_show_copyright = True # If true, an OpenSearch description file will be output, and all pages will # contain a tag referring to it. The value of this option must be the # base URL from which the finished HTML is served. #html_use_opensearch = '' # This is the file name suffix for HTML files (e.g. ".xhtml"). #html_file_suffix = None # Output file base name for HTML help builder. 
htmlhelp_basename = 'saharavanillaplugin-testsdoc' # -- Options for LaTeX output -------------------------------------------------- # Grouping the document tree into LaTeX files. List of tuples # (source start file, target name, title, author, documentclass [howto/manual]). latex_documents = [ ('index', 'doc-sahara-plugin-vanilla.tex', u'Sahara Vanilla Plugin Documentation', u'Sahara team', 'manual'), ] # The name of an image file (relative to this directory) to place at the top of # the title page. #latex_logo = None # For "manual" documents, if this is true, then toplevel headings are parts, # not chapters. #latex_use_parts = False # If true, show page references after internal links. #latex_show_pagerefs = False # If true, show URL addresses after external links. #latex_show_urls = False # Documents to append as an appendix to all manuals. #latex_appendices = [] # If false, no module index is generated. #latex_domain_indices = True smartquotes_excludes = {'builders': ['latex']} # -- Options for manual page output -------------------------------------------- # One entry per manual page. List of tuples # (source start file, name, description, authors, manual section). man_pages = [ ('index', 'sahara-plugin-vanilla', u'sahara-plugin-vanilla Documentation', [u'Sahara team'], 1) ] # If true, show URL addresses after external links. #man_show_urls = False # -- Options for Texinfo output ------------------------------------------------ # Grouping the document tree into Texinfo files. List of tuples # (source start file, target name, title, author, # dir menu entry, description, category) texinfo_documents = [ ('index', 'sahara-plugin-vanilla', u'sahara-plugin-vanilla Documentation', u'Sahara team', 'sahara-plugin-vanilla', 'One line description of project.', 'Miscellaneous'), ] # Documents to append as an appendix to all manuals. #texinfo_appendices = [] # If false, no module index is generated. #texinfo_domain_indices = True # How to display URL addresses: 'footnote', 'no', or 'inline'. #texinfo_show_urls = 'footnote' ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1696419356.9444795 sahara-plugin-vanilla-10.0.0/doc/source/contributor/0000775000175000017500000000000000000000000022411 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1696419316.0 sahara-plugin-vanilla-10.0.0/doc/source/contributor/contributing.rst0000664000175000017500000000130300000000000025647 0ustar00zuulzuul00000000000000============================ So You Want to Contribute... ============================ For general information on contributing to OpenStack, please check out the `contributor guide `_ to get started. It covers all the basics that are common to all OpenStack projects: the accounts you need, the basics of interacting with our Gerrit review system, how we communicate as a community, etc. sahara-plugin-vanilla is maintained by the OpenStack Sahara project. To understand our development process and how you can contribute to it, please look at the Sahara project's general contributor's page: http://docs.openstack.org/sahara/latest/contributor/contributing.html ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1696419316.0 sahara-plugin-vanilla-10.0.0/doc/source/contributor/index.rst0000664000175000017500000000014500000000000024252 0ustar00zuulzuul00000000000000================= Contributor Guide ================= .. 
toctree:: :maxdepth: 2 contributing ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1696419316.0 sahara-plugin-vanilla-10.0.0/doc/source/index.rst0000664000175000017500000000016600000000000021703 0ustar00zuulzuul00000000000000Vanilla plugin for Sahara ========================= .. toctree:: :maxdepth: 2 user/index contributor/index ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1696419356.9484794 sahara-plugin-vanilla-10.0.0/doc/source/user/0000775000175000017500000000000000000000000021015 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1696419316.0 sahara-plugin-vanilla-10.0.0/doc/source/user/index.rst0000664000175000017500000000012300000000000022652 0ustar00zuulzuul00000000000000========== User Guide ========== .. toctree:: :maxdepth: 2 vanilla-plugin ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1696419316.0 sahara-plugin-vanilla-10.0.0/doc/source/user/vanilla-plugin.rst0000664000175000017500000000672600000000000024504 0ustar00zuulzuul00000000000000Vanilla Plugin ============== The Vanilla plugin is a reference implementation which allows users to operate a cluster with Apache Hadoop. Since the Newton release, Spark has been integrated into the Vanilla plugin, so you can launch Spark jobs on a Vanilla cluster. Images ------ For cluster provisioning, prepared images should be used. .. list-table:: Support matrix for the `vanilla` plugin :widths: 15 15 20 15 35 :header-rows: 1 * - Version (image tag) - Distribution - Build method - Version (build parameter) - Notes * - 2.8.2 - Ubuntu 16.04, CentOS 7 - sahara-image-create - 2.8.2 - Hive 2.3.2, Oozie 4.3.0 * - 2.7.5 - Ubuntu 16.04, CentOS 7 - sahara-image-create - 2.7.5 - Hive 2.3.2, Oozie 4.3.0 * - 2.7.1 - Ubuntu 16.04, CentOS 7 - sahara-image-create - 2.7.1 - Hive 0.11.0, Oozie 4.2.0 For more information about building images, refer to the :sahara-doc:`Sahara documentation `. The Vanilla plugin requires an image to be tagged in the Sahara Image Registry with two tags: 'vanilla' and '<hadoop version>' (e.g. '2.7.1'). The image requires a username. For more information, refer to the :sahara-doc:`registering image ` section of the Sahara documentation. Build settings ~~~~~~~~~~~~~~ When ``sahara-image-create`` is used, you can override a few settings by exporting the corresponding environment variables before starting the build command (a short usage sketch is shown below): * ``DIB_HADOOP_VERSION`` - version of Hadoop to install * ``HIVE_VERSION`` - version of Hive to install * ``OOZIE_DOWNLOAD_URL`` - download link for Oozie (we have built Oozie libs here: https://tarballs.openstack.org/sahara-extra/dist/oozie/) * ``SPARK_DOWNLOAD_URL`` - download link for Spark Vanilla Plugin Requirements --------------------------- The image building tools described in :sahara-doc:`Building guest images ` add the required software to the image, and their use is strongly recommended. Nevertheless, the following software should be pre-loaded on the guest image so that it can be used to create Vanilla clusters: * ssh-client installed * Java (version >= 7) * Apache Hadoop installed * 'hadoop' user created See :sahara-doc:`Swift Integration ` for information on using Swift with your sahara cluster (Swift integration is currently required for EDP support).
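As an illustration of the build settings listed above, here is a minimal sketch of preparing the environment before invoking ``sahara-image-create``. The version values are only examples taken from the 2.7.5 row of the support matrix, and the two URLs are placeholders to be replaced with real download locations (for Oozie, an archive published under https://tarballs.openstack.org/sahara-extra/dist/oozie/):

.. code-block:: console

   $ export DIB_HADOOP_VERSION=2.7.5
   $ export HIVE_VERSION=2.3.2
   $ export OOZIE_DOWNLOAD_URL=<URL of an Oozie archive matching the Hadoop version>
   $ export SPARK_DOWNLOAD_URL=<URL of a Spark archive>

After exporting the overrides, start the build as described in the image building documentation referenced earlier; ``sahara-image-create`` reads these variables from the environment.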
To support EDP, the following components must also be installed on the guest: * Oozie version 4 or higher * mysql/mariadb * hive Cluster Validation ------------------ When a user creates or scales a Hadoop cluster using the Vanilla plugin, the cluster topology requested by the user is verified for consistency. Currently, the following cluster topology limitations apply to the Vanilla plugin: For Vanilla Hadoop version 2.x.x: + Cluster must contain exactly one namenode + Cluster can contain at most one resourcemanager + Cluster can contain at most one secondary namenode + Cluster can contain at most one historyserver + Cluster can contain at most one oozie and this process is also required for EDP + Cluster can't contain oozie without a resourcemanager and a historyserver + Cluster can't have nodemanager nodes if it doesn't have a resourcemanager + Cluster can have at most one hiveserver node + Cluster can have at most one spark history server and this process is also required for Spark EDP (Spark is available since the Newton release). ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1696419316.0 sahara-plugin-vanilla-10.0.0/errors.txt0000664000175000017500000001173700000000000020060 0ustar00zuulzuul00000000000000py27 develop-inst-nodeps: /home/tenobreg/coding/upstream/sahara/sahara py27 installed: alabaster==0.7.11,alembic==1.0.0,amqp==2.3.2,appdirs==1.4.3,asn1crypto==0.24.0,astroid==1.3.8,Babel==2.6.0,bandit==1.5.0,bashate==0.6.0,bcrypt==3.1.4,botocore==1.10.62,cachetools==2.1.0,castellan==0.18.0,certifi==2018.4.16,cffi==1.11.5,chardet==3.0.4,click==6.7,cliff==2.13.0,cmd2==0.8.8,contextlib2==0.5.5,coverage==4.5.1,cryptography==2.3,debtcollector==1.20.0,decorator==4.3.0,deprecation==2.0.5,doc8==0.8.0,docutils==0.14,dogpile.cache==0.6.6,dulwich==0.19.5,enum-compat==0.0.2,enum34==1.1.6,eventlet==0.20.0,extras==1.0.0,fasteners==0.14.1,fixtures==3.0.0,flake8==2.5.5,Flask==1.0.2,funcsigs==1.0.2,functools32==3.2.3.post2,future==0.16.0,futures==3.2.0,futurist==1.7.0,gitdb2==2.0.4,GitPython==2.1.11,greenlet==0.4.13,hacking==0.12.0,idna==2.7,imagesize==1.0.0,ipaddress==1.0.22,iso8601==0.1.12,itsdangerous==0.24,Jinja2==2.10,jmespath==0.9.3,jsonpatch==1.23,jsonpointer==2.0,jsonschema==2.6.0,keystoneauth1==3.10.0,keystonemiddleware==5.2.0,kombu==4.2.1,linecache2==1.0.0,logilab-common==1.4.2,Mako==1.0.7,MarkupSafe==1.0,mccabe==0.2.1,mock==2.0.0,monotonic==1.5,mox3==0.26.0,msgpack==0.5.6,munch==2.3.2,netaddr==0.7.19,netifaces==0.10.7,openstackdocstheme==1.22.0,openstacksdk==0.17.2,os-api-ref==1.5.0,os-client-config==1.31.2,os-service-types==1.3.0,os-testr==1.0.0,osc-lib==1.11.1,oslo.cache==1.30.1,oslo.concurrency==3.27.0,oslo.config==6.4.0,oslo.context==2.21.0,oslo.db==4.40.0,oslo.i18n==3.21.0,oslo.log==3.39.0,oslo.messaging==8.1.0,oslo.middleware==3.36.0,oslo.policy==1.38.1,oslo.rootwrap==5.14.1,oslo.serialization==2.27.0,oslo.service==1.31.3,oslo.utils==3.36.4,oslotest==3.6.0,packaging==17.1,paramiko==2.4.1,Paste==2.0.3,PasteDeploy==1.5.2,pbr==4.2.0,pep8==1.5.7,prettytable==0.7.2,psycopg2==2.7.5,pyasn1==0.4.3,pycadf==2.8.0,pycparser==2.18,pyflakes==0.8.1,Pygments==2.2.0,pyinotify==0.9.6,pylint==1.4.5,PyMySQL==0.9.2,PyNaCl==1.2.1,pyOpenSSL==18.0.0,pyparsing==2.2.0,pyperclip==1.6.4,python-barbicanclient==4.7.0,python-cinderclient==4.0.1,python-dateutil==2.7.3,python-editor==1.0.3,python-glanceclient==2.12.1,python-heatclient==1.16.1,python-keystoneclient==3.17.0,python-manilaclient==1.24.1,python-mimeparse==1.6.0,python-neutronclient==6.9.0,python-novaclient==
11.0.0,python-openstackclient==3.16.0,python-saharaclient==2.0.0,python-subunit==1.3.0,python-swiftclient==3.6.0,pytz==2018.5,PyYAML==3.13,reno==2.9.2,repoze.lru==0.7,requests==2.19.1,requestsexceptions==1.4.0,restructuredtext-lint==1.1.3,rfc3986==1.1.0,Routes==2.4.1,-e git+https://github.com/openstack/sahara.git@efb05b3624044f307168d0b5da888132f51aebb7#egg=sahara,simplejson==3.16.0,six==1.11.0,smmap2==2.0.4,snowballstemmer==1.2.1,Sphinx==1.7.6,sphinxcontrib-httpdomain==1.7.0,sphinxcontrib-websupport==1.1.0,SQLAlchemy==1.2.10,sqlalchemy-migrate==0.11.0,sqlparse==0.2.4,statsd==3.2.2,stestr==2.1.0,stevedore==1.29.0,subprocess32==3.5.2,Tempita==0.5.2,tenacity==4.12.0,testresources==2.0.1,testscenarios==0.5.0,testtools==2.3.0,tooz==1.62.0,traceback2==1.4.0,typing==3.6.4,unicodecsv==0.14.1,unittest2==1.1.0,urllib3==1.23,vine==1.1.4,voluptuous==0.11.1,warlock==1.3.0,wcwidth==0.1.7,WebOb==1.8.2,Werkzeug==0.14.1,wrapt==1.10.11 py27 runtests: PYTHONHASHSEED='839100177' py27 runtests: commands[0] | ostestr ========================= Failures during discovery ========================= --- import errors --- Failed to import test module: sahara.tests.unit.service.edp.spark.test_shell Traceback (most recent call last): File "/home/tenobreg/coding/upstream/sahara/sahara/.tox/py27/lib/python2.7/site-packages/unittest2/loader.py", line 456, in _find_test_path module = self._get_module_from_name(name) File "/home/tenobreg/coding/upstream/sahara/sahara/.tox/py27/lib/python2.7/site-packages/unittest2/loader.py", line 395, in _get_module_from_name __import__(name) File "sahara/tests/unit/service/edp/spark/test_shell.py", line 18, in from sahara.plugins.spark import shell_engine ImportError: No module named spark Failed to import test module: sahara.tests.unit.service.edp.spark.test_spark Traceback (most recent call last): File "/home/tenobreg/coding/upstream/sahara/sahara/.tox/py27/lib/python2.7/site-packages/unittest2/loader.py", line 456, in _find_test_path module = self._get_module_from_name(name) File "/home/tenobreg/coding/upstream/sahara/sahara/.tox/py27/lib/python2.7/site-packages/unittest2/loader.py", line 395, in _get_module_from_name __import__(name) File "sahara/tests/unit/service/edp/spark/test_spark.py", line 17, in from sahara.plugins.spark import edp_engine as spark_edp ImportError: No module named spark ================================================================================ The above traceback was encountered during test discovery which imports all the found test modules in the specified test_path. ERROR: InvocationError: '/home/tenobreg/coding/upstream/sahara/sahara/.tox/py27/bin/ostestr' ___________________________________ summary ____________________________________ ERROR: py27: commands failed ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1696419356.9404793 sahara-plugin-vanilla-10.0.0/releasenotes/0000775000175000017500000000000000000000000020463 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1696419356.9484794 sahara-plugin-vanilla-10.0.0/releasenotes/notes/0000775000175000017500000000000000000000000021613 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1696419316.0 sahara-plugin-vanilla-10.0.0/releasenotes/notes/drop-py2-7-345ca486b838f0bb.yaml0000664000175000017500000000035200000000000026535 0ustar00zuulzuul00000000000000--- upgrade: - | Python 2.7 support has been dropped. 
Last release of sahara and its plugins to support python 2.7 is OpenStack Train. The minimum version of Python now supported by sahara and its plugins is Python 3.6. ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1696419356.9484794 sahara-plugin-vanilla-10.0.0/releasenotes/source/0000775000175000017500000000000000000000000021763 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1696419356.9484794 sahara-plugin-vanilla-10.0.0/releasenotes/source/_static/0000775000175000017500000000000000000000000023411 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1696419316.0 sahara-plugin-vanilla-10.0.0/releasenotes/source/_static/.placeholder0000664000175000017500000000000000000000000025662 0ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1696419356.9484794 sahara-plugin-vanilla-10.0.0/releasenotes/source/_templates/0000775000175000017500000000000000000000000024120 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1696419316.0 sahara-plugin-vanilla-10.0.0/releasenotes/source/_templates/.placeholder0000664000175000017500000000000000000000000026371 0ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1696419316.0 sahara-plugin-vanilla-10.0.0/releasenotes/source/conf.py0000664000175000017500000001526600000000000023274 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. # Sahara Release Notes documentation build configuration file extensions = [ 'reno.sphinxext', 'openstackdocstheme' ] # openstackdocstheme options openstackdocs_repo_name = 'openstack/sahara-plugin-vanilla' openstackdocs_use_storyboard = True # Add any paths that contain templates here, relative to this directory. templates_path = ['_templates'] # The suffix of source filenames. source_suffix = '.rst' # The master toctree document. master_doc = 'index' # General information about the project. copyright = u'2015, Sahara Developers' # Release do not need a version number in the title, they # cover multiple versions. # The full version, including alpha/beta/rc tags. release = '' # The short X.Y version. version = '' # List of patterns, relative to source directory, that match files and # directories to ignore when looking for source files. exclude_patterns = [] # The name of the Pygments (syntax highlighting) style to use. pygments_style = 'native' # -- Options for HTML output ---------------------------------------------- # The theme to use for HTML and HTML Help pages. See the documentation for # a list of builtin themes. html_theme = 'openstackdocs' # Theme options are theme-specific and customize the look and feel of a theme # further. 
For a list of options available for each theme, see the # documentation. # html_theme_options = {} # Add any paths that contain custom themes here, relative to this directory. # html_theme_path = [] # The name for this set of Sphinx documents. If None, it defaults to # " v documentation". # html_title = None # A shorter title for the navigation bar. Default is the same as html_title. # html_short_title = None # The name of an image file (relative to this directory) to place at the top # of the sidebar. # html_logo = None # The name of an image file (within the static path) to use as favicon of the # docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32 # pixels large. # html_favicon = None # Add any paths that contain custom static files (such as style sheets) here, # relative to this directory. They are copied after the builtin static files, # so a file named "default.css" will overwrite the builtin "default.css". html_static_path = ['_static'] # Add any extra paths that contain custom files (such as robots.txt or # .htaccess) here, relative to this directory. These files are copied # directly to the root of the documentation. # html_extra_path = [] # If not '', a 'Last updated on:' timestamp is inserted at every page bottom, # using the given strftime format. # html_last_updated_fmt = '%b %d, %Y' # If true, SmartyPants will be used to convert quotes and dashes to # typographically correct entities. # html_use_smartypants = True # Custom sidebar templates, maps document names to template names. # html_sidebars = {} # Additional templates that should be rendered to pages, maps page names to # template names. # html_additional_pages = {} # If false, no module index is generated. # html_domain_indices = True # If false, no index is generated. # html_use_index = True # If true, the index is split into individual pages for each letter. # html_split_index = False # If true, links to the reST sources are added to the pages. # html_show_sourcelink = True # If true, "Created using Sphinx" is shown in the HTML footer. Default is True. # html_show_sphinx = True # If true, "(C) Copyright ..." is shown in the HTML footer. Default is True. # html_show_copyright = True # If true, an OpenSearch description file will be output, and all pages will # contain a tag referring to it. The value of this option must be the # base URL from which the finished HTML is served. # html_use_opensearch = '' # This is the file name suffix for HTML files (e.g. ".xhtml"). # html_file_suffix = None # Output file base name for HTML help builder. htmlhelp_basename = 'SaharaVanillaReleaseNotesdoc' # -- Options for LaTeX output --------------------------------------------- # Grouping the document tree into LaTeX files. List of tuples # (source start file, target name, title, # author, documentclass [howto, manual, or own class]). latex_documents = [ ('index', 'SaharaVanillaReleaseNotes.tex', u'Sahara Vanilla Plugin Release Notes Documentation', u'Sahara Developers', 'manual'), ] # The name of an image file (relative to this directory) to place at the top of # the title page. # latex_logo = None # For "manual" documents, if this is true, then toplevel headings are parts, # not chapters. # latex_use_parts = False # If true, show page references after internal links. # latex_show_pagerefs = False # If true, show URL addresses after external links. # latex_show_urls = False # Documents to append as an appendix to all manuals. # latex_appendices = [] # If false, no module index is generated. 
# latex_domain_indices = True # -- Options for manual page output --------------------------------------- # One entry per manual page. List of tuples # (source start file, name, description, authors, manual section). man_pages = [ ('index', 'saharavanillareleasenotes', u'Sahara Vanilla Plugin Release Notes Documentation', [u'Sahara Developers'], 1) ] # If true, show URL addresses after external links. # man_show_urls = False # -- Options for Texinfo output ------------------------------------------- # Grouping the document tree into Texinfo files. List of tuples # (source start file, target name, title, author, # dir menu entry, description, category) texinfo_documents = [ ('index', 'SaharaVanillaReleaseNotes', u'Sahara Vanilla Plugin Release Notes Documentation', u'Sahara Developers', 'SaharaVanillaReleaseNotes', 'One line description of project.', 'Miscellaneous'), ] # Documents to append as an appendix to all manuals. # texinfo_appendices = [] # If false, no module index is generated. # texinfo_domain_indices = True # How to display URL addresses: 'footnote', 'no', or 'inline'. # texinfo_show_urls = 'footnote' # If true, do not generate a @detailmenu in the "Top" node's menu. # texinfo_no_detailmenu = False # -- Options for Internationalization output ------------------------------ locale_dirs = ['locale/'] ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1696419316.0 sahara-plugin-vanilla-10.0.0/releasenotes/source/index.rst0000664000175000017500000000034100000000000023622 0ustar00zuulzuul00000000000000===================================== Sahara Vanilla Plugin Release Notes ===================================== .. toctree:: :maxdepth: 1 unreleased yoga xena wallaby victoria ussuri train stein ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1696419356.9404793 sahara-plugin-vanilla-10.0.0/releasenotes/source/locale/0000775000175000017500000000000000000000000023222 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1696419356.9404793 sahara-plugin-vanilla-10.0.0/releasenotes/source/locale/de/0000775000175000017500000000000000000000000023612 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1696419356.9484794 sahara-plugin-vanilla-10.0.0/releasenotes/source/locale/de/LC_MESSAGES/0000775000175000017500000000000000000000000025377 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1696419316.0 sahara-plugin-vanilla-10.0.0/releasenotes/source/locale/de/LC_MESSAGES/releasenotes.po0000664000175000017500000000271300000000000030433 0ustar00zuulzuul00000000000000# Andreas Jaeger , 2019. #zanata # Andreas Jaeger , 2020. #zanata msgid "" msgstr "" "Project-Id-Version: sahara-plugin-vanilla\n" "Report-Msgid-Bugs-To: \n" "POT-Creation-Date: 2020-04-24 23:47+0000\n" "MIME-Version: 1.0\n" "Content-Type: text/plain; charset=UTF-8\n" "Content-Transfer-Encoding: 8bit\n" "PO-Revision-Date: 2020-04-25 10:38+0000\n" "Last-Translator: Andreas Jaeger \n" "Language-Team: German\n" "Language: de\n" "X-Generator: Zanata 4.3.3\n" "Plural-Forms: nplurals=2; plural=(n != 1)\n" msgid "Current Series Release Notes" msgstr "Aktuelle Serie Releasenotes" msgid "" "Python 2.7 support has been dropped. Last release of sahara and its plugins " "to support python 2.7 is OpenStack Train. 
The minimum version of Python now " "supported by sahara and its plugins is Python 3.6." msgstr "" "Python 2.7 Unterstützung wurde beendet. Der letzte Release von Sahara und " "seinen Plugins der Python 2.7 unterstützt ist OpenStack Train. Die minimal " "Python Version welche von Sahara und seinen Plugins unterstützt wird, ist " "Python 3.6." msgid "Sahara Vanilla Plugin Release Notes" msgstr "Sahara Vanilla Plugin Releasenotes" msgid "Stein Series Release Notes" msgstr "Stein Serie Releasenotes" msgid "Train Series Release Notes" msgstr "Train Serie Releasenotes" msgid "Upgrade Notes" msgstr "Aktualisierungsnotizen" msgid "Ussuri Series Release Notes" msgstr "Ussuri Serie Releasenotes" ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1696419356.9404793 sahara-plugin-vanilla-10.0.0/releasenotes/source/locale/en_GB/0000775000175000017500000000000000000000000024174 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1696419356.9484794 sahara-plugin-vanilla-10.0.0/releasenotes/source/locale/en_GB/LC_MESSAGES/0000775000175000017500000000000000000000000025761 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1696419316.0 sahara-plugin-vanilla-10.0.0/releasenotes/source/locale/en_GB/LC_MESSAGES/releasenotes.po0000664000175000017500000000274300000000000031020 0ustar00zuulzuul00000000000000# Andi Chandler , 2020. #zanata msgid "" msgstr "" "Project-Id-Version: sahara-plugin-vanilla\n" "Report-Msgid-Bugs-To: \n" "POT-Creation-Date: 2020-10-07 22:09+0000\n" "MIME-Version: 1.0\n" "Content-Type: text/plain; charset=UTF-8\n" "Content-Transfer-Encoding: 8bit\n" "PO-Revision-Date: 2020-11-03 10:20+0000\n" "Last-Translator: Andi Chandler \n" "Language-Team: English (United Kingdom)\n" "Language: en_GB\n" "X-Generator: Zanata 4.3.3\n" "Plural-Forms: nplurals=2; plural=(n != 1)\n" msgid "3.0.0" msgstr "3.0.0" msgid "Current Series Release Notes" msgstr "Current Series Release Notes" msgid "" "Python 2.7 support has been dropped. Last release of sahara and its plugins " "to support python 2.7 is OpenStack Train. The minimum version of Python now " "supported by sahara and its plugins is Python 3.6." msgstr "" "Python 2.7 support has been dropped. Last release of sahara and its plugins " "to support python 2.7 is OpenStack Train. The minimum version of Python now " "supported by Sahara and its plugins is Python 3.6." 
msgid "Sahara Vanilla Plugin Release Notes" msgstr "Sahara Vanilla Plugin Release Notes" msgid "Stein Series Release Notes" msgstr "Stein Series Release Notes" msgid "Train Series Release Notes" msgstr "Train Series Release Notes" msgid "Upgrade Notes" msgstr "Upgrade Notes" msgid "Ussuri Series Release Notes" msgstr "Ussuri Series Release Notes" msgid "Victoria Series Release Notes" msgstr "Victoria Series Release Notes" ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1696419356.9404793 sahara-plugin-vanilla-10.0.0/releasenotes/source/locale/ne/0000775000175000017500000000000000000000000023624 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1696419356.9484794 sahara-plugin-vanilla-10.0.0/releasenotes/source/locale/ne/LC_MESSAGES/0000775000175000017500000000000000000000000025411 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1696419316.0 sahara-plugin-vanilla-10.0.0/releasenotes/source/locale/ne/LC_MESSAGES/releasenotes.po0000664000175000017500000000145600000000000030450 0ustar00zuulzuul00000000000000# Surit Aryal , 2019. #zanata msgid "" msgstr "" "Project-Id-Version: sahara-plugin-vanilla\n" "Report-Msgid-Bugs-To: \n" "POT-Creation-Date: 2019-07-23 14:28+0000\n" "MIME-Version: 1.0\n" "Content-Type: text/plain; charset=UTF-8\n" "Content-Transfer-Encoding: 8bit\n" "PO-Revision-Date: 2019-08-02 08:43+0000\n" "Last-Translator: Surit Aryal \n" "Language-Team: Nepali\n" "Language: ne\n" "X-Generator: Zanata 4.3.3\n" "Plural-Forms: nplurals=2; plural=(n != 1)\n" msgid "Current Series Release Notes" msgstr "Current Series रिलीज नोट्स" msgid "Sahara Vanilla Plugin Release Notes" msgstr "Sahara Vanilla प्लगइन रिलीज नोट्स" msgid "Stein Series Release Notes" msgstr "Stein Series रिलीज नोट्स" ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1696419316.0 sahara-plugin-vanilla-10.0.0/releasenotes/source/stein.rst0000664000175000017500000000022100000000000023632 0ustar00zuulzuul00000000000000=================================== Stein Series Release Notes =================================== .. release-notes:: :branch: stable/stein ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1696419316.0 sahara-plugin-vanilla-10.0.0/releasenotes/source/train.rst0000664000175000017500000000017600000000000023636 0ustar00zuulzuul00000000000000========================== Train Series Release Notes ========================== .. release-notes:: :branch: stable/train ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1696419316.0 sahara-plugin-vanilla-10.0.0/releasenotes/source/unreleased.rst0000664000175000017500000000016000000000000024641 0ustar00zuulzuul00000000000000============================== Current Series Release Notes ============================== .. release-notes:: ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1696419316.0 sahara-plugin-vanilla-10.0.0/releasenotes/source/ussuri.rst0000664000175000017500000000020200000000000024041 0ustar00zuulzuul00000000000000=========================== Ussuri Series Release Notes =========================== .. 
release-notes:: :branch: stable/ussuri ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1696419316.0 sahara-plugin-vanilla-10.0.0/releasenotes/source/victoria.rst0000664000175000017500000000021200000000000024330 0ustar00zuulzuul00000000000000============================= Victoria Series Release Notes ============================= .. release-notes:: :branch: stable/victoria ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1696419316.0 sahara-plugin-vanilla-10.0.0/releasenotes/source/wallaby.rst0000664000175000017500000000020600000000000024146 0ustar00zuulzuul00000000000000============================ Wallaby Series Release Notes ============================ .. release-notes:: :branch: stable/wallaby ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1696419316.0 sahara-plugin-vanilla-10.0.0/releasenotes/source/xena.rst0000664000175000017500000000017200000000000023450 0ustar00zuulzuul00000000000000========================= Xena Series Release Notes ========================= .. release-notes:: :branch: stable/xena ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1696419316.0 sahara-plugin-vanilla-10.0.0/releasenotes/source/yoga.rst0000664000175000017500000000017200000000000023454 0ustar00zuulzuul00000000000000========================= Yoga Series Release Notes ========================= .. release-notes:: :branch: stable/yoga ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1696419316.0 sahara-plugin-vanilla-10.0.0/requirements.txt0000664000175000017500000000076700000000000021270 0ustar00zuulzuul00000000000000# The order of packages is significant, because pip processes them in the order # of appearance. Changing the order has an impact on the overall integration # process, which may cause wedges in the gate later. pbr!=2.1.0,>=2.0.0 # Apache-2.0 Babel!=2.4.0,>=2.3.4 # BSD eventlet>=0.26.0 # MIT oslo.i18n>=3.15.3 # Apache-2.0 oslo.log>=3.36.0 # Apache-2.0 oslo.serialization!=2.19.1,>=2.18.0 # Apache-2.0 oslo.utils>=3.33.0 # Apache-2.0 requests>=2.14.2 # Apache-2.0 sahara>=10.0.0.0b1 six>=1.10.0 # MIT ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1696419356.9484794 sahara-plugin-vanilla-10.0.0/sahara_plugin_vanilla/0000775000175000017500000000000000000000000022315 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1696419316.0 sahara-plugin-vanilla-10.0.0/sahara_plugin_vanilla/__init__.py0000664000175000017500000000000000000000000024414 0ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1696419316.0 sahara-plugin-vanilla-10.0.0/sahara_plugin_vanilla/i18n.py0000664000175000017500000000163100000000000023447 0ustar00zuulzuul00000000000000# Copyright (c) 2014 Mirantis Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. 
# See the License for the specific language governing permissions and # limitations under the License. # It's based on oslo.i18n usage in OpenStack Keystone project and # recommendations from https://docs.openstack.org/oslo.i18n/latest/ # user/usage.html import oslo_i18n _translators = oslo_i18n.TranslatorFactory(domain='sahara_plugin_vanilla') # The primary translation function using the well-known name "_" _ = _translators.primary ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1696419356.9404793 sahara-plugin-vanilla-10.0.0/sahara_plugin_vanilla/locale/0000775000175000017500000000000000000000000023554 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1696419356.9404793 sahara-plugin-vanilla-10.0.0/sahara_plugin_vanilla/locale/de/0000775000175000017500000000000000000000000024144 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1696419356.9524794 sahara-plugin-vanilla-10.0.0/sahara_plugin_vanilla/locale/de/LC_MESSAGES/0000775000175000017500000000000000000000000025731 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1696419316.0 sahara-plugin-vanilla-10.0.0/sahara_plugin_vanilla/locale/de/LC_MESSAGES/sahara_plugin_vanilla.po0000664000175000017500000000623600000000000032623 0ustar00zuulzuul00000000000000# Andreas Jaeger , 2019. #zanata msgid "" msgstr "" "Project-Id-Version: sahara-plugin-vanilla VERSION\n" "Report-Msgid-Bugs-To: https://bugs.launchpad.net/openstack-i18n/\n" "POT-Creation-Date: 2019-09-27 11:37+0000\n" "MIME-Version: 1.0\n" "Content-Type: text/plain; charset=UTF-8\n" "Content-Transfer-Encoding: 8bit\n" "PO-Revision-Date: 2019-09-28 07:59+0000\n" "Last-Translator: Andreas Jaeger \n" "Language-Team: German\n" "Language: de\n" "X-Generator: Zanata 4.3.3\n" "Plural-Forms: nplurals=2; plural=(n != 1)\n" msgid "0 or 1" msgstr "0 oder 1" #, python-format msgid "Await %s start up" msgstr "Warte auf %s starte" msgid "Configure instances" msgstr "Konfigurieren Sie Instanzen" msgid "Configure topology data" msgstr "Konfigurieren Sie Topologiedaten" #, python-format msgid "Decommission %s" msgstr "Außerkraftsetzung %s" msgid "Number of datanodes must be not less than dfs.replication." msgstr "Die Anzahl der Daten muss nicht kleiner als dfs.replication sein." msgid "Number of zookeeper nodes should be odd." msgstr "Anzahl der Zookeeper-Knoten sollte ungerade sein." #, python-format msgid "Refresh %s nodes" msgstr "Aktualisiere die Knoten %s" msgid "Spark {base} or higher required to run {type} jobs" msgstr "Spark {base} oder höher erforderlich, um {type} Jobs auszuführen" msgid "" "The Apache Vanilla plugin provides the ability to launch upstream Vanilla " "Apache Hadoop cluster without any management consoles. It can also deploy " "the Oozie component." msgstr "" "Das Apache Vanilla-Plugin bietet die Möglichkeit, Upstream-Vanilla-Apache-" "Hadoop-Cluster ohne Verwaltungskonsolen zu starten. Es kann auch die Oozie-" "Komponente bereitstellen." #, python-format msgid "Unable to get parameter '%(name)s' from service %(service)s" msgstr "" "Der Parameter '%(name)s' konnte nicht vom Service %(service)s abgerufen " "werden" msgid "Update include files" msgstr "Update-Include-Dateien" msgid "" "Vanilla plugin cannot scale cluster because it must keep zookeeper service " "in odd." 
msgstr "" "Das Vanilla-Plugin kann den Cluster nicht skalieren, da der Zoowäklerservice " "in Odd gehalten werden muss." msgid "" "Vanilla plugin cannot scale node group with processes which have no master-" "processes run in cluster" msgstr "" "Das Vanilla-Plugin kann Knotengruppen nicht mit Prozessen skalieren, für die " "keine Masterprozesse im Cluster ausgeführt werden" #, python-format msgid "Vanilla plugin cannot scale nodegroup with processes: %s" msgstr "" "Das Vanilla-Plugin kann Knotengruppen nicht mit Prozessen skalieren: %s" #, python-format msgid "" "Vanilla plugin cannot shrink cluster because it would be not enough nodes " "for replicas (replication factor is %s)" msgstr "" "Das Vanilla-Plugin kann den Cluster nicht verkleinern, da es nicht genug " "Knoten für Replikate geben würde (der Replikationsfaktor ist %s)" msgid "Wait for decommissioning" msgstr "Warten Sie auf die Außerbetriebnahme" #, python-format msgid "Waiting on %s datanodes to start up" msgstr "Warten auf %s Daten, um zu starten" msgid "Waiting on 1 datanodes to start up" msgstr "Warte auf 1 Daten Node zum Starten" msgid "odd" msgstr "ungerade" ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1696419356.9404793 sahara-plugin-vanilla-10.0.0/sahara_plugin_vanilla/locale/en_GB/0000775000175000017500000000000000000000000024526 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1696419356.9524794 sahara-plugin-vanilla-10.0.0/sahara_plugin_vanilla/locale/en_GB/LC_MESSAGES/0000775000175000017500000000000000000000000026313 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1696419316.0 sahara-plugin-vanilla-10.0.0/sahara_plugin_vanilla/locale/en_GB/LC_MESSAGES/sahara_plugin_vanilla.po0000664000175000017500000000572000000000000033202 0ustar00zuulzuul00000000000000# Andi Chandler , 2020. #zanata msgid "" msgstr "" "Project-Id-Version: sahara-plugin-vanilla VERSION\n" "Report-Msgid-Bugs-To: https://bugs.launchpad.net/openstack-i18n/\n" "POT-Creation-Date: 2020-10-07 22:09+0000\n" "MIME-Version: 1.0\n" "Content-Type: text/plain; charset=UTF-8\n" "Content-Transfer-Encoding: 8bit\n" "PO-Revision-Date: 2020-11-03 10:20+0000\n" "Last-Translator: Andi Chandler \n" "Language-Team: English (United Kingdom)\n" "Language: en_GB\n" "X-Generator: Zanata 4.3.3\n" "Plural-Forms: nplurals=2; plural=(n != 1)\n" msgid "0 or 1" msgstr "0 or 1" #, python-format msgid "Await %s start up" msgstr "Await %s start up" msgid "Configure instances" msgstr "Configure instances" msgid "Configure topology data" msgstr "Configure topology data" #, python-format msgid "Decommission %s" msgstr "Decommission %s" msgid "Number of datanodes must be not less than dfs.replication." msgstr "Number of datanodes must be not less than dfs.replication." msgid "Number of zookeeper nodes should be odd." msgstr "Number of Zookeeper nodes should be odd." #, python-format msgid "Refresh %s nodes" msgstr "Refresh %s nodes" msgid "Spark {base} or higher required to run {type} jobs" msgstr "Spark {base} or higher required to run {type} jobs" msgid "" "The Apache Vanilla plugin provides the ability to launch upstream Vanilla " "Apache Hadoop cluster without any management consoles. It can also deploy " "the Oozie component." msgstr "" "The Apache Vanilla plugin provides the ability to launch upstream Vanilla " "Apache Hadoop cluster without any management consoles. 
It can also deploy " "the Oozie component." #, python-format msgid "Unable to get parameter '%(name)s' from service %(service)s" msgstr "Unable to get parameter '%(name)s' from service %(service)s" msgid "Update include files" msgstr "Update include files" msgid "" "Vanilla plugin cannot scale cluster because it must keep zookeeper service " "in odd." msgstr "" "Vanilla plugin cannot scale cluster because it must keep Zookeeper service " "in odd." msgid "" "Vanilla plugin cannot scale node group with processes which have no master-" "processes run in cluster" msgstr "" "Vanilla plugin cannot scale node group with processes which have no master-" "processes run in cluster" #, python-format msgid "Vanilla plugin cannot scale nodegroup with processes: %s" msgstr "Vanilla plugin cannot scale nodegroup with processes: %s" #, python-format msgid "" "Vanilla plugin cannot shrink cluster because it would be not enough nodes " "for replicas (replication factor is %s)" msgstr "" "Vanilla plugin cannot shrink cluster because it would be not enough nodes " "for replicas (replication factor is %s)" msgid "Wait for decommissioning" msgstr "Wait for decommissioning" #, python-format msgid "Waiting on %s datanodes to start up" msgstr "Waiting on %s datanodes to start up" msgid "Waiting on 1 datanodes to start up" msgstr "Waiting on 1 datanodes to start up" msgid "odd" msgstr "odd" ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1696419356.9404793 sahara-plugin-vanilla-10.0.0/sahara_plugin_vanilla/locale/id/0000775000175000017500000000000000000000000024150 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1696419356.9524794 sahara-plugin-vanilla-10.0.0/sahara_plugin_vanilla/locale/id/LC_MESSAGES/0000775000175000017500000000000000000000000025735 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1696419316.0 sahara-plugin-vanilla-10.0.0/sahara_plugin_vanilla/locale/id/LC_MESSAGES/sahara_plugin_vanilla.po0000664000175000017500000000605300000000000032624 0ustar00zuulzuul00000000000000# suhartono , 2019. #zanata msgid "" msgstr "" "Project-Id-Version: sahara-plugin-vanilla VERSION\n" "Report-Msgid-Bugs-To: https://bugs.launchpad.net/openstack-i18n/\n" "POT-Creation-Date: 2019-09-30 09:36+0000\n" "MIME-Version: 1.0\n" "Content-Type: text/plain; charset=UTF-8\n" "Content-Transfer-Encoding: 8bit\n" "PO-Revision-Date: 2019-10-06 03:37+0000\n" "Last-Translator: suhartono \n" "Language-Team: Indonesian\n" "Language: id\n" "X-Generator: Zanata 4.3.3\n" "Plural-Forms: nplurals=1; plural=0\n" msgid "0 or 1" msgstr "0 atau 1" #, python-format msgid "Await %s start up" msgstr "Tunggu %s start up (mulai)" msgid "Configure instances" msgstr "Konfigurasikan instance" msgid "Configure topology data" msgstr "Konfigurasikan data topologi" #, python-format msgid "Decommission %s" msgstr "Decommission %s" msgid "Number of datanodes must be not less than dfs.replication." msgstr "Jumlah datanoda harus tidak kurang dari dfs.replication." msgid "Number of zookeeper nodes should be odd." msgstr "Jumlah node zookeeper harus ganjil." 
#, python-format msgid "Refresh %s nodes" msgstr "Refresh %s nodes" msgid "Spark {base} or higher required to run {type} jobs" msgstr "" "Spark {base} atau lebih tinggi diperlukan untuk menjalankan jobs {type}" msgid "" "The Apache Vanilla plugin provides the ability to launch upstream Vanilla " "Apache Hadoop cluster without any management consoles. It can also deploy " "the Oozie component." msgstr "" "Plugin Apache Vanilla menyediakan kemampuan untuk meluncurkan cluster " "Vanilla Apache Hadoop hulu tanpa konsol manajemen. Itu juga dapat " "menggunakan komponen Oozie." #, python-format msgid "Unable to get parameter '%(name)s' from service %(service)s" msgstr "Tidak dapat memperoleh parameter '%(name)s' dari layanan %(service)s" msgid "Update include files" msgstr "Perbarui menyertakan file" msgid "" "Vanilla plugin cannot scale cluster because it must keep zookeeper service " "in odd." msgstr "" "Plugin vanilla tidak dapat mengukur cluster karena harus menjaga layanan " "zookeeper dalam keadaan odd." msgid "" "Vanilla plugin cannot scale node group with processes which have no master-" "processes run in cluster" msgstr "" "Plugin vanilla tidak dapat menskala node group dengan proses yang tidak " "memiliki proses master berjalan di cluster" #, python-format msgid "Vanilla plugin cannot scale nodegroup with processes: %s" msgstr "Plugin vanilla tidak dapat menskala nodegroup dengan proses: %s" #, python-format msgid "" "Vanilla plugin cannot shrink cluster because it would be not enough nodes " "for replicas (replication factor is %s)" msgstr "" "Plugin vanilla tidak dapat mengecilkan cluster karena tidak akan cukup node " "untuk replika (faktor replikasi adalah %s)" msgid "Wait for decommissioning" msgstr "Tunggu pembongkaran (decommissioning)" #, python-format msgid "Waiting on %s datanodes to start up" msgstr "Menunggu datanodes %s untuk memulai" msgid "Waiting on 1 datanodes to start up" msgstr "Menunggu 1 datanode untuk memulai" msgid "odd" msgstr "ganjil" ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1696419356.9404793 sahara-plugin-vanilla-10.0.0/sahara_plugin_vanilla/locale/ne/0000775000175000017500000000000000000000000024156 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1696419356.9524794 sahara-plugin-vanilla-10.0.0/sahara_plugin_vanilla/locale/ne/LC_MESSAGES/0000775000175000017500000000000000000000000025743 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1696419316.0 sahara-plugin-vanilla-10.0.0/sahara_plugin_vanilla/locale/ne/LC_MESSAGES/sahara_plugin_vanilla.po0000664000175000017500000001004500000000000032626 0ustar00zuulzuul00000000000000# Surit Aryal , 2019. 
#zanata msgid "" msgstr "" "Project-Id-Version: sahara-plugin-vanilla VERSION\n" "Report-Msgid-Bugs-To: https://bugs.launchpad.net/openstack-i18n/\n" "POT-Creation-Date: 2019-09-27 11:37+0000\n" "MIME-Version: 1.0\n" "Content-Type: text/plain; charset=UTF-8\n" "Content-Transfer-Encoding: 8bit\n" "PO-Revision-Date: 2019-08-02 09:08+0000\n" "Last-Translator: Surit Aryal \n" "Language-Team: Nepali\n" "Language: ne\n" "X-Generator: Zanata 4.3.3\n" "Plural-Forms: nplurals=2; plural=(n != 1)\n" msgid "0 or 1" msgstr "० वा १" #, python-format msgid "Await %s start up" msgstr "पर्खिरहेका %s शुरू" msgid "Configure instances" msgstr "instances कन्फिगर गर्नुहोस्" msgid "Configure topology data" msgstr "टोपोलजी डाटा कन्फिगर गर्नुहोस्" #, python-format msgid "Decommission %s" msgstr "Decommission %s" msgid "Number of datanodes must be not less than dfs.replication." msgstr "datanodes संख्या dfs.replication भन्दा कम हुनुपर्छ।" #, python-format msgid "Refresh %s nodes" msgstr "%s नोडहरू ताजा गर्नुहोस्" msgid "Spark {base} or higher required to run {type} jobs" msgstr "Spark {base} or higher required to run {type} jobs" msgid "" "The Apache Vanilla plugin provides the ability to launch upstream Vanilla " "Apache Hadoop cluster without any management consoles. It can also deploy " "the Oozie component." msgstr "" "Apache Vanilla प्लगइनले कुनै व्यवस्थापन कन्सोल बिना अपस्ट्रिम Vanilla Apache Hadoop " "क्लस्टर सुरूवात गर्न क्षमता प्रदान गर्दछ। यसले Oozie कम्पोनेन्ट पनि डिप्लोय गर्न सक्दछ।" #, python-format msgid "Unable to get parameter '%(name)s' from service %(service)s" msgstr "सेवा %(service)s बाट प्यारामिटर '%(name)s' प्राप्त गर्न असमर्थ" msgid "Update include files" msgstr "अपडेटमा फाईलहरू समावेश छन्" msgid "" "Vanilla plugin cannot scale cluster because it must keep zookeeper service " "in odd." 
msgstr "" "Vanilla pluginले क्लस्टर मापन गर्न सक्दैन किनकि यसले जूटरकीपर सेवा बिजोर राख्नु पर्छ।" msgid "" "Vanilla plugin cannot scale node group with processes which have no master-" "processes run in cluster" msgstr "" "Vanilla pluginले नोड समूहलाई प्रक्रियाहरूसँग मापन गर्न सक्दैन जुन मास्टर-प्रक्रियाहरू " "क्लस्टरमा चल्दैनन्" #, python-format msgid "Vanilla plugin cannot scale nodegroup with processes: %s" msgstr "Vanilla plugin प्रक्रियासँग नोड ग्रुप मापन गर्न सक्दैन: %s" #, python-format msgid "" "Vanilla plugin cannot shrink cluster because it would be not enough nodes " "for replicas (replication factor is %s)" msgstr "" "Vanilla plugin क्लस्टर सिको गर्न सक्दैन किनकि यो प्रतिकृतिको लागि पर्याप्त नोडहरू हुनेछैन " "(प्रतिकृति कारक %s हो)" msgid "Wait for decommissioning" msgstr "डिकमिमिशनको लागि कुर्नुहोस्" #, python-format msgid "Waiting on %s datanodes to start up" msgstr "सुरू गर्न %s डाटाटोड पर्खँदै" msgid "Waiting on 1 datanodes to start up" msgstr "सुरू गर्न १ डेटानोडमा पर्खँदै" msgid "odd" msgstr "बिजोर" ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1696419356.9524794 sahara-plugin-vanilla-10.0.0/sahara_plugin_vanilla/plugins/0000775000175000017500000000000000000000000023776 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1696419316.0 sahara-plugin-vanilla-10.0.0/sahara_plugin_vanilla/plugins/__init__.py0000664000175000017500000000000000000000000026075 0ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1696419356.9524794 sahara-plugin-vanilla-10.0.0/sahara_plugin_vanilla/plugins/vanilla/0000775000175000017500000000000000000000000025424 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1696419316.0 sahara-plugin-vanilla-10.0.0/sahara_plugin_vanilla/plugins/vanilla/__init__.py0000664000175000017500000000000000000000000027523 0ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1696419316.0 sahara-plugin-vanilla-10.0.0/sahara_plugin_vanilla/plugins/vanilla/abstractversionhandler.py0000664000175000017500000000323500000000000032550 0ustar00zuulzuul00000000000000# Copyright (c) 2014 Mirantis, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. 
import abc import six @six.add_metaclass(abc.ABCMeta) class AbstractVersionHandler(object): @abc.abstractmethod def get_node_processes(self): return @abc.abstractmethod def get_plugin_configs(self): return @abc.abstractmethod def configure_cluster(self, cluster): return @abc.abstractmethod def start_cluster(self, cluster): return @abc.abstractmethod def validate(self, cluster): return @abc.abstractmethod def scale_cluster(self, cluster, instances): return @abc.abstractmethod def decommission_nodes(self, cluster, instances): return @abc.abstractmethod def validate_scaling(self, cluster, existing, additional): return @abc.abstractmethod def get_edp_engine(self, cluster, job_type): return def get_edp_job_types(self): return [] def get_edp_config_hints(self, job_type): return {} @abc.abstractmethod def get_open_ports(self, node_group): return def on_terminate_cluster(self, cluster): pass ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1696419316.0 sahara-plugin-vanilla-10.0.0/sahara_plugin_vanilla/plugins/vanilla/confighints_helper.py0000664000175000017500000000310400000000000031646 0ustar00zuulzuul00000000000000# Copyright (c) 2015 Red Hat, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. from sahara.plugins import edp from sahara.plugins import utils def get_possible_hive_config_from(file_name): '''Return the possible configs, args, params for a Hive job.''' config = { 'configs': utils.load_hadoop_xml_defaults(file_name, 'sahara_plugin_vanilla'), 'params': {} } return config def get_possible_mapreduce_config_from(file_name): '''Return the possible configs, args, params for a MapReduce job.''' config = { 'configs': get_possible_pig_config_from(file_name).get('configs') } config['configs'] += edp.get_possible_mapreduce_configs() return config def get_possible_pig_config_from(file_name): '''Return the possible configs, args, params for a Pig job.''' config = { 'configs': utils.load_hadoop_xml_defaults(file_name, 'sahara_plugin_vanilla'), 'args': [], 'params': {} } return config ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1696419316.0 sahara-plugin-vanilla-10.0.0/sahara_plugin_vanilla/plugins/vanilla/edp_engine.py0000664000175000017500000000264400000000000030101 0ustar00zuulzuul00000000000000# Copyright (c) 2014 Mirantis Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. 
from sahara.plugins import edp from sahara.plugins import exceptions as ex from sahara.plugins import utils as u from sahara_plugin_vanilla.plugins.vanilla import utils as vu class EdpOozieEngine(edp.PluginsOozieJobEngine): def get_hdfs_user(self): return 'hadoop' def get_name_node_uri(self, cluster): return cluster['info']['HDFS']['NameNode'] def get_oozie_server_uri(self, cluster): return cluster['info']['JobFlow']['Oozie'] + "/oozie/" def get_oozie_server(self, cluster): return vu.get_oozie(cluster) def validate_job_execution(self, cluster, job, data): oo_count = u.get_instances_count(cluster, 'oozie') if oo_count != 1: raise ex.InvalidComponentCountException('oozie', '1', oo_count) super(EdpOozieEngine, self).validate_job_execution(cluster, job, data) ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1696419356.9564793 sahara-plugin-vanilla-10.0.0/sahara_plugin_vanilla/plugins/vanilla/hadoop2/0000775000175000017500000000000000000000000026760 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1696419316.0 sahara-plugin-vanilla-10.0.0/sahara_plugin_vanilla/plugins/vanilla/hadoop2/__init__.py0000664000175000017500000000000000000000000031057 0ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1696419316.0 sahara-plugin-vanilla-10.0.0/sahara_plugin_vanilla/plugins/vanilla/hadoop2/config.py0000664000175000017500000004273400000000000030611 0ustar00zuulzuul00000000000000# Copyright (c) 2014 Mirantis Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. 
import os from oslo_config import cfg from oslo_log import log as logging import six from sahara.plugins import castellan_utils as key_manager from sahara.plugins import context from sahara.plugins import swift_helper as swift from sahara.plugins import topology_helper as th from sahara.plugins import utils from sahara_plugin_vanilla.i18n import _ from sahara_plugin_vanilla.plugins.vanilla.hadoop2 import config_helper from sahara_plugin_vanilla.plugins.vanilla.hadoop2 import oozie_helper from sahara_plugin_vanilla.plugins.vanilla.hadoop2 import utils as u from sahara_plugin_vanilla.plugins.vanilla import utils as vu CONF = cfg.CONF LOG = logging.getLogger(__name__) HADOOP_CONF_DIR = '/opt/hadoop/etc/hadoop' OOZIE_CONF_DIR = '/opt/oozie/conf' HIVE_CONF_DIR = '/opt/hive/conf' HADOOP_USER = 'hadoop' HADOOP_GROUP = 'hadoop' PORTS_MAP = { "namenode": [50070, 9000], "secondarynamenode": [50090], "resourcemanager": [8088, 8032], "historyserver": [19888], "datanode": [50010, 50075, 50020], "nodemanager": [8042], "oozie": [11000], "hiveserver": [9999, 10000], "spark history server": [18080], "zookeeper": [2181, 2888, 3888] } def configure_cluster(pctx, cluster): LOG.debug("Configuring cluster") if (CONF.use_identity_api_v3 and CONF.use_domain_for_proxy_users and vu.get_hiveserver(cluster) and config_helper.is_swift_enabled(pctx, cluster)): cluster = utils.create_proxy_user_for_cluster(cluster) instances = utils.get_instances(cluster) configure_instances(pctx, instances) configure_topology_data(pctx, cluster) configure_zookeeper(cluster) configure_spark(cluster) def configure_zookeeper(cluster, instances=None): zk_servers = vu.get_zk_servers(cluster) if zk_servers: zk_conf = config_helper.generate_zk_basic_config(cluster) zk_conf += _form_zk_servers_to_quorum(cluster, instances) _push_zk_configs_to_nodes(cluster, zk_conf, instances) def _form_zk_servers_to_quorum(cluster, to_delete_instances=None): quorum = [] instances = map(vu.get_instance_hostname, vu.get_zk_servers(cluster)) if to_delete_instances: delete_instances = map(vu.get_instance_hostname, to_delete_instances) reserve_instances = list(set(instances) - set(delete_instances)) # keep the original order of instances reserve_instances.sort(key=instances.index) else: reserve_instances = instances for index, instance in enumerate(reserve_instances): quorum.append("server.%s=%s:2888:3888" % (index, instance)) return '\n'.join(quorum) def _push_zk_configs_to_nodes(cluster, zk_conf, to_delete_instances=None): instances = vu.get_zk_servers(cluster) if to_delete_instances: for instance in to_delete_instances: if instance in instances: instances.remove(instance) for index, instance in enumerate(instances): with instance.remote() as r: r.write_file_to('/opt/zookeeper/conf/zoo.cfg', zk_conf, run_as_root=True) r.execute_command( 'sudo su - -c "echo %s > /var/zookeeper/myid" hadoop' % index) def configure_spark(cluster): spark_servers = vu.get_spark_history_server(cluster) if spark_servers: extra = _extract_spark_configs_to_extra(cluster) _push_spark_configs_to_node(cluster, extra) def _push_spark_configs_to_node(cluster, extra): spark_master = vu.get_spark_history_server(cluster) if spark_master: _push_spark_configs_to_existing_node(spark_master, cluster, extra) _push_cleanup_job(spark_master, extra) with spark_master.remote() as r: r.execute_command('sudo su - -c "mkdir /tmp/spark-events" hadoop') def _push_spark_configs_to_existing_node(spark_master, cluster, extra): sp_home = config_helper.get_spark_home(cluster) files = { os.path.join(sp_home, 
'conf/spark-env.sh'): extra['sp_master'], os.path.join( sp_home, 'conf/spark-defaults.conf'): extra['sp_defaults'] } with spark_master.remote() as r: r.write_files_to(files, run_as_root=True) def _push_cleanup_job(sp_master, extra): with sp_master.remote() as r: if extra['job_cleanup']['valid']: r.write_file_to('/opt/hadoop/tmp-cleanup.sh', extra['job_cleanup']['script'], run_as_root=True) r.execute_command("sudo chmod 755 /opt/hadoop/tmp-cleanup.sh") cmd = 'sudo sh -c \'echo "%s" > /etc/cron.d/spark-cleanup\'' r.execute_command(cmd % extra['job_cleanup']['cron']) else: r.execute_command("sudo rm -f /opt/hadoop/tmp-cleanup.sh") r.execute_command("sudo rm -f /etc/cron.d/spark-cleanup") def _extract_spark_configs_to_extra(cluster): sp_master = utils.get_instance(cluster, "spark history server") extra = dict() config_master = '' if sp_master is not None: config_master = config_helper.generate_spark_env_configs(cluster) # Any node that might be used to run spark-submit will need # these libs for swift integration config_defaults = config_helper.generate_spark_executor_classpath(cluster) extra['job_cleanup'] = config_helper.generate_job_cleanup_config(cluster) extra['sp_master'] = config_master extra['sp_defaults'] = config_defaults return extra def configure_instances(pctx, instances): if len(instances) == 0: return utils.add_provisioning_step( instances[0].cluster_id, _("Configure instances"), len(instances)) for instance in instances: with context.set_current_instance_id(instance.instance_id): _configure_instance(pctx, instance) @utils.event_wrapper(True) def _configure_instance(pctx, instance): _provisioning_configs(pctx, instance) _post_configuration(pctx, instance) def _provisioning_configs(pctx, instance): xmls, env = _generate_configs(pctx, instance) _push_xml_configs(instance, xmls) _push_env_configs(instance, env) def _generate_configs(pctx, instance): hadoop_xml_confs = _get_hadoop_configs(pctx, instance) user_xml_confs, user_env_confs = _get_user_configs( pctx, instance.node_group) xml_confs = utils.merge_configs(user_xml_confs, hadoop_xml_confs) env_confs = utils.merge_configs(pctx['env_confs'], user_env_confs) return xml_confs, env_confs def _get_hadoop_configs(pctx, instance): cluster = instance.node_group.cluster nn_hostname = vu.get_instance_hostname(vu.get_namenode(cluster)) dirs = _get_hadoop_dirs(instance) confs = { 'Hadoop': { 'fs.defaultFS': 'hdfs://%s:9000' % nn_hostname }, 'HDFS': { 'dfs.namenode.name.dir': ','.join(dirs['hadoop_name_dirs']), 'dfs.datanode.data.dir': ','.join(dirs['hadoop_data_dirs']), 'dfs.hosts': '%s/dn-include' % HADOOP_CONF_DIR, 'dfs.hosts.exclude': '%s/dn-exclude' % HADOOP_CONF_DIR } } res_hostname = vu.get_instance_hostname(vu.get_resourcemanager(cluster)) if res_hostname: confs['YARN'] = { 'yarn.nodemanager.aux-services': 'mapreduce_shuffle', 'yarn.resourcemanager.hostname': '%s' % res_hostname, 'yarn.resourcemanager.nodes.include-path': '%s/nm-include' % ( HADOOP_CONF_DIR), 'yarn.resourcemanager.nodes.exclude-path': '%s/nm-exclude' % ( HADOOP_CONF_DIR) } confs['MapReduce'] = { 'mapreduce.framework.name': 'yarn' } hs_hostname = vu.get_instance_hostname(vu.get_historyserver(cluster)) if hs_hostname: confs['MapReduce']['mapreduce.jobhistory.address'] = ( "%s:10020" % hs_hostname) oozie = vu.get_oozie(cluster) if oozie: hadoop_cfg = { 'hadoop.proxyuser.hadoop.hosts': '*', 'hadoop.proxyuser.hadoop.groups': 'hadoop' } confs['Hadoop'].update(hadoop_cfg) oozie_cfg = oozie_helper.get_oozie_required_xml_configs( HADOOP_CONF_DIR) if 
config_helper.is_mysql_enabled(pctx, cluster): oozie_cfg.update(oozie_helper.get_oozie_mysql_configs(cluster)) confs['JobFlow'] = oozie_cfg if config_helper.is_swift_enabled(pctx, cluster): swift_configs = {} for config in swift.get_swift_configs(): swift_configs[config['name']] = config['value'] confs['Hadoop'].update(swift_configs) if config_helper.is_data_locality_enabled(pctx, cluster): confs['Hadoop'].update(th.TOPOLOGY_CONFIG) confs['Hadoop'].update({"topology.script.file.name": HADOOP_CONF_DIR + "/topology.sh"}) hive_hostname = vu.get_instance_hostname(vu.get_hiveserver(cluster)) if hive_hostname: hive_pass = u.get_hive_password(cluster) hive_cfg = { 'hive.warehouse.subdir.inherit.perms': True, 'javax.jdo.option.ConnectionURL': 'jdbc:derby:;databaseName=/opt/hive/metastore_db;create=true' } if config_helper.is_mysql_enabled(pctx, cluster): hive_cfg.update({ 'javax.jdo.option.ConnectionURL': 'jdbc:mysql://%s/metastore' % hive_hostname, 'javax.jdo.option.ConnectionDriverName': 'com.mysql.jdbc.Driver', 'javax.jdo.option.ConnectionUserName': 'hive', 'javax.jdo.option.ConnectionPassword': hive_pass, 'datanucleus.autoCreateSchema': 'false', 'datanucleus.fixedDatastore': 'true', 'hive.metastore.uris': 'thrift://%s:9083' % hive_hostname, }) proxy_configs = cluster.cluster_configs.get('proxy_configs') if proxy_configs and config_helper.is_swift_enabled(pctx, cluster): hive_cfg.update({ swift.HADOOP_SWIFT_USERNAME: proxy_configs['proxy_username'], swift.HADOOP_SWIFT_PASSWORD: key_manager.get_secret( proxy_configs['proxy_password']), swift.HADOOP_SWIFT_TRUST_ID: proxy_configs['proxy_trust_id'], swift.HADOOP_SWIFT_DOMAIN_NAME: CONF.proxy_user_domain_name }) confs['Hive'] = hive_cfg return confs def _get_user_configs(pctx, node_group): ng_xml_confs, ng_env_confs = _separate_configs(node_group.node_configs, pctx['env_confs']) cl_xml_confs, cl_env_confs = _separate_configs( node_group.cluster.cluster_configs, pctx['env_confs']) xml_confs = utils.merge_configs(cl_xml_confs, ng_xml_confs) env_confs = utils.merge_configs(cl_env_confs, ng_env_confs) return xml_confs, env_confs def _separate_configs(configs, all_env_configs): xml_configs = {} env_configs = {} for service, params in six.iteritems(configs): for param, value in six.iteritems(params): if all_env_configs.get(service, {}).get(param): if not env_configs.get(service): env_configs[service] = {} env_configs[service][param] = value else: if not xml_configs.get(service): xml_configs[service] = {} xml_configs[service][param] = value return xml_configs, env_configs def _generate_xml(configs): xml_confs = {} for service, confs in six.iteritems(configs): xml_confs[service] = utils.create_hadoop_xml(confs) return xml_confs def _push_env_configs(instance, configs): nn_heap = configs['HDFS']['NameNode Heap Size'] snn_heap = configs['HDFS']['SecondaryNameNode Heap Size'] dn_heap = configs['HDFS']['DataNode Heap Size'] rm_heap = configs['YARN']['ResourceManager Heap Size'] nm_heap = configs['YARN']['NodeManager Heap Size'] hs_heap = configs['MapReduce']['JobHistoryServer Heap Size'] with instance.remote() as r: r.replace_remote_string( '%s/hadoop-env.sh' % HADOOP_CONF_DIR, 'export HADOOP_NAMENODE_OPTS=.*', 'export HADOOP_NAMENODE_OPTS="-Xmx%dm"' % nn_heap) r.replace_remote_string( '%s/hadoop-env.sh' % HADOOP_CONF_DIR, 'export HADOOP_SECONDARYNAMENODE_OPTS=.*', 'export HADOOP_SECONDARYNAMENODE_OPTS="-Xmx%dm"' % snn_heap) r.replace_remote_string( '%s/hadoop-env.sh' % HADOOP_CONF_DIR, 'export HADOOP_DATANODE_OPTS=.*', 'export HADOOP_DATANODE_OPTS="-Xmx%dm"' % 
dn_heap) r.replace_remote_string( '%s/yarn-env.sh' % HADOOP_CONF_DIR, '\\#export YARN_RESOURCEMANAGER_HEAPSIZE=.*', 'export YARN_RESOURCEMANAGER_HEAPSIZE=%d' % rm_heap) r.replace_remote_string( '%s/yarn-env.sh' % HADOOP_CONF_DIR, '\\#export YARN_NODEMANAGER_HEAPSIZE=.*', 'export YARN_NODEMANAGER_HEAPSIZE=%d' % nm_heap) r.replace_remote_string( '%s/mapred-env.sh' % HADOOP_CONF_DIR, 'export HADOOP_JOB_HISTORYSERVER_HEAPSIZE=.*', 'export HADOOP_JOB_HISTORYSERVER_HEAPSIZE=%d' % hs_heap) def _push_xml_configs(instance, configs): xmls = _generate_xml(configs) service_to_conf_map = { 'Hadoop': '%s/core-site.xml' % HADOOP_CONF_DIR, 'HDFS': '%s/hdfs-site.xml' % HADOOP_CONF_DIR, 'YARN': '%s/yarn-site.xml' % HADOOP_CONF_DIR, 'MapReduce': '%s/mapred-site.xml' % HADOOP_CONF_DIR, 'JobFlow': '%s/oozie-site.xml' % OOZIE_CONF_DIR, 'Hive': '%s/hive-site.xml' % HIVE_CONF_DIR } xml_confs = {} for service, confs in six.iteritems(xmls): if service not in service_to_conf_map.keys(): continue xml_confs[service_to_conf_map[service]] = confs _push_configs_to_instance(instance, xml_confs) def _push_configs_to_instance(instance, configs): LOG.debug("Push configs to instance {instance}".format( instance=instance.instance_name)) with instance.remote() as r: for fl, data in six.iteritems(configs): r.write_file_to(fl, data, run_as_root=True) def _post_configuration(pctx, instance): dirs = _get_hadoop_dirs(instance) args = { 'hadoop_user': HADOOP_USER, 'hadoop_group': HADOOP_GROUP, 'hadoop_conf_dir': HADOOP_CONF_DIR, 'oozie_conf_dir': OOZIE_CONF_DIR, 'hadoop_name_dirs': " ".join(dirs['hadoop_name_dirs']), 'hadoop_data_dirs': " ".join(dirs['hadoop_data_dirs']), 'hadoop_log_dir': dirs['hadoop_log_dir'], 'hadoop_secure_dn_log_dir': dirs['hadoop_secure_dn_log_dir'], 'yarn_log_dir': dirs['yarn_log_dir'] } post_conf_script = utils.get_file_text( 'plugins/vanilla/hadoop2/resources/post_conf.template', 'sahara_plugin_vanilla') post_conf_script = post_conf_script.format(**args) with instance.remote() as r: r.write_file_to('/tmp/post_conf.sh', post_conf_script) r.execute_command('chmod +x /tmp/post_conf.sh') r.execute_command('sudo /tmp/post_conf.sh') if config_helper.is_data_locality_enabled(pctx, instance.cluster): t_script = HADOOP_CONF_DIR + '/topology.sh' r.write_file_to(t_script, utils.get_file_text( 'plugins/vanilla/hadoop2/resources/topology.sh', 'sahara_plugin_vanilla'), run_as_root=True) r.execute_command('chmod +x ' + t_script, run_as_root=True) def _get_hadoop_dirs(instance): dirs = {} storage_paths = instance.storage_paths() dirs['hadoop_name_dirs'] = _make_hadoop_paths( storage_paths, '/hdfs/namenode') dirs['hadoop_data_dirs'] = _make_hadoop_paths( storage_paths, '/hdfs/datanode') dirs['hadoop_log_dir'] = _make_hadoop_paths( storage_paths, '/hadoop/logs')[0] dirs['hadoop_secure_dn_log_dir'] = _make_hadoop_paths( storage_paths, '/hadoop/logs/secure')[0] dirs['yarn_log_dir'] = _make_hadoop_paths( storage_paths, '/yarn/logs')[0] return dirs def _make_hadoop_paths(paths, hadoop_dir): return [path + hadoop_dir for path in paths] @utils.event_wrapper( True, step=_("Configure topology data"), param=('cluster', 1)) def configure_topology_data(pctx, cluster): if config_helper.is_data_locality_enabled(pctx, cluster): LOG.warning("Node group awareness is not implemented in YARN yet " "so enable_hypervisor_awareness set to False explicitly") tpl_map = th.generate_topology_map(cluster, is_node_awareness=False) topology_data = "\n".join( [k + " " + v for k, v in tpl_map.items()]) + "\n" for ng in cluster.node_groups: for i in 
ng.instances: i.remote().write_file_to(HADOOP_CONF_DIR + "/topology.data", topology_data, run_as_root=True) def get_open_ports(node_group): ports = [] for key in PORTS_MAP: if key in node_group.node_processes: ports += PORTS_MAP[key] return ports ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1696419316.0 sahara-plugin-vanilla-10.0.0/sahara_plugin_vanilla/plugins/vanilla/hadoop2/config_helper.py0000664000175000017500000002722200000000000032143 0ustar00zuulzuul00000000000000# Copyright (c) 2014 Mirantis, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. from oslo_config import cfg import six from sahara.plugins import exceptions as ex from sahara.plugins import provisioning as p from sahara.plugins import utils from sahara_plugin_vanilla.i18n import _ CONF = cfg.CONF CONF.import_opt("enable_data_locality", "sahara.topology.topology_helper") HIDDEN_CONFS = [ 'dfs.hosts', 'dfs.hosts.exclude', 'dfs.namenode.data.dir', 'dfs.namenode.name.dir', 'fs.default.name', 'fs.defaultFS', 'fs.swift.impl', 'hadoop.proxyuser.hadoop.groups', 'hadoop.proxyuser.hadoop.hosts', 'mapreduce.framework.name', 'mapreduce.jobhistory.address', 'mapreduce.jobhistory.done.dir', 'mapreduce.jobhistory.intermediate-done-dir', 'mapreduce.jobhistory.webapp.address', 'yarn.nodemanager.aux-services', 'yarn.resourcemanager.address', 'yarn.resourcemanager.admin.address', 'yarn.resourcemanager.hostname', 'yarn.resourcemanager.nodes.exclude-path', 'yarn.resourcemanager.nodes.include-path', 'yarn.resourcemanager.resource-tracker.address', 'yarn.resourcemanager.scheduler.address', 'yarn.resourcemanager.webapp.address' ] CLUSTER_WIDE_CONFS = [ 'dfs.blocksize', 'dfs.namenode.replication.min', 'dfs.permissions.enabled', 'dfs.replication', 'dfs.replication.max', 'io.compression.codecs', 'io.file.buffer.size', 'mapreduce.job.counters.max', 'mapreduce.map.output.compress.codec', 'mapreduce.output.fileoutputformat.compress.codec', 'mapreduce.output.fileoutputformat.compress.type', 'mapredude.map.output.compress', 'mapredude.output.fileoutputformat.compress' ] PRIORITY_1_CONFS = [ 'dfs.datanode.du.reserved', 'dfs.datanode.failed.volumes.tolerated', 'dfs.datanode.handler.count', 'dfs.datanode.max.transfer.threads', 'dfs.namenode.handler.count', 'mapred.child.java.opts', 'mapred.jobtracker.maxtasks.per.job', 'mapreduce.jobtracker.handler.count', 'mapreduce.map.java.opts', 'mapreduce.reduce.java.opts', 'mapreduce.task.io.sort.mb', 'mapreduce.tasktracker.map.tasks.maximum', 'mapreduce.tasktracker.reduce.tasks.maximum', 'yarn.nodemanager.resource.cpu-vcores', 'yarn.nodemanager.resource.memory-mb', 'yarn.scheduler.maximum-allocation-mb', 'yarn.scheduler.maximum-allocation-vcores', 'yarn.scheduler.minimum-allocation-mb', 'yarn.scheduler.minimum-allocation-vcores' ] SPARK_CONFS = { 'Spark': { "OPTIONS": [ { 'name': 'Spark home', 'description': 'The location of the spark installation' ' (default: /opt/spark)', 'default': '/opt/spark', 'priority': 2, }, { 'name': 'Minimum cleanup seconds', 'description': 'Job data 
will never be purged before this' ' amount of time elapses (default: 86400 = 1 day)', 'default': '86400', 'priority': 2, }, { 'name': 'Maximum cleanup seconds', 'description': 'Job data will always be purged after this' ' amount of time elapses (default: 1209600 = 14 days)', 'default': '1209600', 'priority': 2, }, { 'name': 'Minimum cleanup megabytes', 'description': 'No job data will be purged unless the total' ' job data exceeds this size (default: 4096 = 4GB)', 'default': '4096', 'priority': 2, }, ] } } ZOOKEEPER_CONFS = { "ZooKeeper": { "OPTIONS": [ { 'name': 'tickTime', 'description': 'The number of milliseconds of each tick', 'default': 2000, 'priority': 2, }, { 'name': 'initLimit', 'description': 'The number of ticks that the initial' ' synchronization phase can take', 'default': 10, 'priority': 2, }, { 'name': 'syncLimit', 'description': 'The number of ticks that can pass between' ' sending a request and getting an acknowledgement', 'default': 5, 'priority': 2, }, ] } } # for now we have not so many cluster-wide configs # lets consider all of them having high priority PRIORITY_1_CONFS += CLUSTER_WIDE_CONFS def init_xml_configs(xml_confs): configs = [] for service, config_lists in six.iteritems(xml_confs): for config_list in config_lists: for config in config_list: if config['name'] not in HIDDEN_CONFS: cfg = p.Config(config['name'], service, "node", is_optional=True, config_type="string", default_value=str(config['value']), description=config['description']) if cfg.default_value in ["true", "false"]: cfg.config_type = "bool" cfg.default_value = (cfg.default_value == 'true') elif utils.is_int(cfg.default_value): cfg.config_type = "int" cfg.default_value = int(cfg.default_value) if config['name'] in CLUSTER_WIDE_CONFS: cfg.scope = 'cluster' if config['name'] in PRIORITY_1_CONFS: cfg.priority = 1 configs.append(cfg) return configs ENABLE_SWIFT = p.Config('Enable Swift', 'general', 'cluster', config_type="bool", priority=1, default_value=True, is_optional=False) ENABLE_MYSQL = p.Config('Enable MySQL', 'general', 'cluster', config_type="bool", priority=1, default_value=True, is_optional=True) ENABLE_DATA_LOCALITY = p.Config('Enable Data Locality', 'general', 'cluster', config_type="bool", priority=1, default_value=True, is_optional=True) DATANODES_DECOMMISSIONING_TIMEOUT = p.Config( 'DataNodes decommissioning timeout', 'general', 'cluster', config_type='int', priority=1, default_value=3600 * 4, is_optional=True, description='Timeout for datanode decommissioning operation' ' during scaling, in seconds') NODEMANAGERS_DECOMMISSIONING_TIMEOUT = p.Config( 'NodeManagers decommissioning timeout', 'general', 'cluster', config_type='int', priority=1, default_value=300, is_optional=True, description='Timeout for NodeManager decommissioning operation' ' during scaling, in seconds') DATANODES_STARTUP_TIMEOUT = p.Config( 'DataNodes startup timeout', 'general', 'cluster', config_type='int', priority=1, default_value=10800, is_optional=True, description='Timeout for DataNodes startup, in seconds') def init_env_configs(env_confs): configs = [] for service, config_items in six.iteritems(env_confs): for name, value in six.iteritems(config_items): configs.append(p.Config(name, service, "node", default_value=value, priority=1, config_type="int")) return configs def _init_general_configs(): configs = [ENABLE_SWIFT, ENABLE_MYSQL, DATANODES_STARTUP_TIMEOUT, DATANODES_DECOMMISSIONING_TIMEOUT, NODEMANAGERS_DECOMMISSIONING_TIMEOUT] if CONF.enable_data_locality: configs.append(ENABLE_DATA_LOCALITY) return configs 
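# Illustrative example of the helpers above: given a hypothetical env config
# mapping such as
#
#     env_confs = {'YARN': {'NodeManager Heap Size': 1024}}
#
# init_env_configs(env_confs) yields a single provisioning Config with
# name='NodeManager Heap Size', applicable_target='YARN', scope='node',
# default_value=1024, config_type='int' and priority=1, while
# init_xml_configs() additionally skips HIDDEN_CONFS entries, promotes
# CLUSTER_WIDE_CONFS to cluster scope and infers bool/int config types
# from the default values.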
PLUGIN_GENERAL_CONFIGS = _init_general_configs() def get_config_value(pctx, service, name, cluster=None): if cluster: for ng in cluster.node_groups: cl_param = ng.configuration().get(service, {}).get(name) if cl_param is not None: return cl_param for c in pctx['all_confs']: if c.applicable_target == service and c.name == name: return c.default_value raise ex.PluginNotFoundException( {"name": name, "service": service}, _("Unable to get parameter '%(name)s' from service %(service)s")) def is_swift_enabled(pctx, cluster): return get_config_value(pctx, ENABLE_SWIFT.applicable_target, ENABLE_SWIFT.name, cluster) def is_mysql_enabled(pctx, cluster): return get_config_value( pctx, ENABLE_MYSQL.applicable_target, ENABLE_MYSQL.name, cluster) def is_data_locality_enabled(pctx, cluster): if not CONF.enable_data_locality: return False return get_config_value(pctx, ENABLE_DATA_LOCALITY.applicable_target, ENABLE_DATA_LOCALITY.name, cluster) def generate_spark_env_configs(cluster): configs = [] # point to the hadoop conf dir so that Spark can read things # like the swift configuration without having to copy core-site # to /opt/spark/conf HADOOP_CONF_DIR = '/opt/hadoop/etc/hadoop' configs.append('HADOOP_CONF_DIR=' + HADOOP_CONF_DIR) # Hadoop and YARN configs there are in one folder configs.append('YARN_CONF_DIR=' + HADOOP_CONF_DIR) return '\n'.join(configs) def generate_spark_executor_classpath(cluster): cp = utils.get_config_value_or_default( "Spark", "Executor extra classpath", cluster) if cp: return "spark.executor.extraClassPath " + cp return "\n" def generate_job_cleanup_config(cluster): args = { 'minimum_cleanup_megabytes': utils.get_config_value_or_default( "Spark", "Minimum cleanup megabytes", cluster), 'minimum_cleanup_seconds': utils.get_config_value_or_default( "Spark", "Minimum cleanup seconds", cluster), 'maximum_cleanup_seconds': utils.get_config_value_or_default( "Spark", "Maximum cleanup seconds", cluster) } job_conf = {'valid': (args['maximum_cleanup_seconds'] > 0 and (args['minimum_cleanup_megabytes'] > 0 and args['minimum_cleanup_seconds'] > 0))} if job_conf['valid']: job_conf['cron'] = utils.get_file_text( 'plugins/vanilla/hadoop2/resources/spark-cleanup.cron', 'sahara_plugin_vanilla'), job_cleanup_script = utils.get_file_text( 'plugins/vanilla/hadoop2/resources/tmp-cleanup.sh.template', 'sahara_plugin_vanilla') job_conf['script'] = job_cleanup_script.format(**args) return job_conf def get_spark_home(cluster): return utils.get_config_value_or_default("Spark", "Spark home", cluster) def generate_zk_basic_config(cluster): args = { 'ticktime': utils.get_config_value_or_default( "ZooKeeper", "tickTime", cluster), 'initlimit': utils.get_config_value_or_default( "ZooKeeper", "initLimit", cluster), 'synclimit': utils.get_config_value_or_default( "ZooKeeper", "syncLimit", cluster) } zoo_cfg = utils.get_file_text( 'plugins/vanilla/hadoop2/resources/zoo_sample.cfg', 'sahara_plugin_vanilla') return zoo_cfg.format(**args) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1696419316.0 sahara-plugin-vanilla-10.0.0/sahara_plugin_vanilla/plugins/vanilla/hadoop2/edp_engine.py0000664000175000017500000000167300000000000031436 0ustar00zuulzuul00000000000000# Copyright (c) 2014 Mirantis Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. from sahara.plugins import edp from sahara_plugin_vanilla.plugins.vanilla import edp_engine class EdpOozieEngine(edp_engine.EdpOozieEngine): def create_hdfs_dir(self, remote, dir_name): edp.create_dir_hadoop2(remote, dir_name, self.get_hdfs_user()) def get_resource_manager_uri(self, cluster): return cluster['info']['YARN']['ResourceManager'] ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1696419316.0 sahara-plugin-vanilla-10.0.0/sahara_plugin_vanilla/plugins/vanilla/hadoop2/keypairs.py0000664000175000017500000000622500000000000031166 0ustar00zuulzuul00000000000000# Copyright (c) 2016 Mirantis Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. from castellan.common.objects import passphrase from castellan import key_manager from oslo_log import log as logging from sahara.plugins import conductor from sahara.plugins import context from sahara.plugins import utils LOG = logging.getLogger(__name__) def _provision_key(instance, keypair): def append_to(remote, file, *args, **kwargs): kwargs['run_as_root'] = True path = "/home/hadoop/.ssh/%s" % file remote.append_to_file(path, *args, **kwargs) public, private = keypair['public'], keypair['private'] folder = '/home/hadoop/.ssh' with context.set_current_instance_id(instance_id=instance.instance_id): with instance.remote() as r: r.execute_command('sudo mkdir -p %s' % folder) append_to(r, 'authorized_keys', public) append_to(r, 'id_rsa', private) append_to(r, 'id_rsa.pub', public) r.execute_command('sudo chown -R hadoop %s' % folder) r.execute_command("sudo chmod 600 %s/id_rsa" % folder) LOG.debug("Passwordless ssh enabled") def _get_secret(secret): key = key_manager.API().get(context.current(), secret) return key.get_encoded() def _store_secret(secret): key = passphrase.Passphrase(secret) password = key_manager.API().store(context.current(), key) return password def _remove_secret(secret): key_manager.API().delete(context.current(), secret) def provision_keypairs(cluster, instances=None): extra = cluster.extra.to_dict() if cluster.extra else {} # use same keypair for scaling keypair = extra.get('vanilla_keypair') if not instances: instances = utils.get_instances(cluster) else: # scaling if not keypair: # cluster created before mitaka, skipping provisioning return if not keypair: private, public = utils.generate_key_pair() keypair = {'public': public, 'private': private} extra['vanilla_keypair'] = keypair extra['vanilla_keypair']['private'] = _store_secret( keypair['private']) conductor.cluster_update(context.ctx(), cluster, {'extra': extra}) else: keypair['private'] = _get_secret(keypair['private']) with 
context.PluginsThreadGroup() as tg: for instance in instances: tg.spawn( 'provision-key-%s' % instance.instance_name, _provision_key, instance, keypair) def drop_key(cluster): extra = cluster.extra.to_dict() if cluster.extra else {} keypair = extra.get('vanilla_keypair') if keypair: _remove_secret(keypair['private']) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1696419316.0 sahara-plugin-vanilla-10.0.0/sahara_plugin_vanilla/plugins/vanilla/hadoop2/oozie_helper.py0000664000175000017500000000422000000000000032014 0ustar00zuulzuul00000000000000# Copyright (c) 2014 Mirantis Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. from sahara_plugin_vanilla.plugins.vanilla.hadoop2 import utils as u def get_oozie_required_xml_configs(hadoop_conf_dir): """Following configs differ from default configs in oozie-default.xml.""" return { 'oozie.service.ActionService.executor.ext.classes': 'org.apache.oozie.action.email.EmailActionExecutor,' 'org.apache.oozie.action.hadoop.HiveActionExecutor,' 'org.apache.oozie.action.hadoop.ShellActionExecutor,' 'org.apache.oozie.action.hadoop.SqoopActionExecutor,' 'org.apache.oozie.action.hadoop.DistcpActionExecutor', 'oozie.service.SchemaService.wf.ext.schemas': 'shell-action-0.1.xsd,shell-action-0.2.xsd,shell-action-0.3.xsd,' 'email-action-0.1.xsd,hive-action-0.2.xsd,hive-action-0.3.xsd,' 'hive-action-0.4.xsd,hive-action-0.5.xsd,sqoop-action-0.2.xsd,' 'sqoop-action-0.3.xsd,sqoop-action-0.4.xsd,ssh-action-0.1.xsd,' 'ssh-action-0.2.xsd,distcp-action-0.1.xsd,distcp-action-0.2.xsd,' 'oozie-sla-0.1.xsd,oozie-sla-0.2.xsd', 'oozie.service.JPAService.create.db.schema': 'false', 'oozie.service.HadoopAccessorService.hadoop.configurations': '*=%s' % ( hadoop_conf_dir) } def get_oozie_mysql_configs(cluster): return { 'oozie.service.JPAService.jdbc.driver': 'com.mysql.jdbc.Driver', 'oozie.service.JPAService.jdbc.url': 'jdbc:mysql://localhost:3306/oozie', 'oozie.service.JPAService.jdbc.username': 'oozie', 'oozie.service.JPAService.jdbc.password': u.get_oozie_password( cluster) } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1696419316.0 sahara-plugin-vanilla-10.0.0/sahara_plugin_vanilla/plugins/vanilla/hadoop2/recommendations_utils.py0000664000175000017500000000331400000000000033742 0ustar00zuulzuul00000000000000# Copyright (c) 2015 Mirantis Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. 
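# Illustrative sketch of how this module is used: the version handler calls
#
#     recommend_configs(cluster, plugin_configs, scaling=False)
#
# which builds a mapping of tunable YARN/MapReduce node configs plus the
# cluster-wide 'dfs.replication' setting and hands it to
# HadoopAutoConfigsProvider, letting the provider fill in recommended values
# for those settings.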
from sahara.plugins import recommendations_utils as ru def recommend_configs(cluster, plugin_configs, scaling): yarn_configs = [ 'yarn.nodemanager.resource.memory-mb', 'yarn.scheduler.minimum-allocation-mb', 'yarn.scheduler.maximum-allocation-mb', 'yarn.nodemanager.vmem-check-enabled', ] mapred_configs = [ 'yarn.app.mapreduce.am.resource.mb', 'yarn.app.mapreduce.am.command-opts', 'mapreduce.map.memory.mb', 'mapreduce.reduce.memory.mb', 'mapreduce.map.java.opts', 'mapreduce.reduce.java.opts', 'mapreduce.task.io.sort.mb', ] configs_to_configure = { 'cluster_configs': { 'dfs.replication': ('HDFS', 'dfs.replication') }, 'node_configs': { } } for mapr in mapred_configs: configs_to_configure['node_configs'][mapr] = ("MapReduce", mapr) for yarn in yarn_configs: configs_to_configure['node_configs'][yarn] = ('YARN', yarn) provider = ru.HadoopAutoConfigsProvider( configs_to_configure, plugin_configs, cluster, scaling) provider.apply_recommended_configs() ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1696419356.9564793 sahara-plugin-vanilla-10.0.0/sahara_plugin_vanilla/plugins/vanilla/hadoop2/resources/0000775000175000017500000000000000000000000030772 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000021000000000000011446 xustar0000000000000000114 path=sahara-plugin-vanilla-10.0.0/sahara_plugin_vanilla/plugins/vanilla/hadoop2/resources/create_oozie_db.sql 22 mtime=1696419316.0 sahara-plugin-vanilla-10.0.0/sahara_plugin_vanilla/plugins/vanilla/hadoop2/resources/create_oozie_db0000664000175000017500000000031000000000000034024 0ustar00zuulzuul00000000000000create database oozie; grant all privileges on oozie.* to 'oozie'@'localhost' identified by 'password'; grant all privileges on oozie.* to 'oozie'@'%' identified by 'password'; flush privileges; exit ././@PaxHeader0000000000000000000000000000020700000000000011454 xustar0000000000000000113 path=sahara-plugin-vanilla-10.0.0/sahara_plugin_vanilla/plugins/vanilla/hadoop2/resources/post_conf.template 22 mtime=1696419316.0 sahara-plugin-vanilla-10.0.0/sahara_plugin_vanilla/plugins/vanilla/hadoop2/resources/post_conf.templ0000664000175000017500000000222300000000000034026 0ustar00zuulzuul00000000000000#!/bin/bash # change permission to config hadoop_configs=('core-site.xml' 'hdfs-site.xml' 'mapred-site.xml' 'yarn-site.xml') for conf in "${{hadoop_configs[@]}}" do chown -R {hadoop_group}:{hadoop_user} {hadoop_conf_dir}/$conf done chown -R {hadoop_group}:{hadoop_user} {oozie_conf_dir}/oozie-site.xml # create dirs for hdfs and mapreduce service dirs=({hadoop_name_dirs} {hadoop_data_dirs} {hadoop_log_dir} {hadoop_secure_dn_log_dir} {yarn_log_dir}) for dir in "${{dirs[@]}}" do mkdir -p $dir chown -R {hadoop_group}:{hadoop_user} $dir done # change hadoop log dir sed -i "s,\#export HADOOP_LOG_DIR=.*,export HADOOP_LOG_DIR={hadoop_log_dir}," {hadoop_conf_dir}/hadoop-env.sh sed -i "s,export HADOOP_SECURE_DN_LOG_DIR=.*,export HADOOP_SECURE_DN_LOG_DIR={hadoop_secure_dn_log_dir}," {hadoop_conf_dir}/hadoop-env.sh # change yarn log dir sed -i "s,YARN_LOG_DIR=.*,YARN_LOG_DIR={yarn_log_dir}," {hadoop_conf_dir}/yarn-env.sh # prepare scaling files sc_all_files=('dn-include' 'nm-include' 'dn-exclude' 'nm-exclude') for file in "${{sc_all_files[@]}}" do touch {hadoop_conf_dir}/$file chown {hadoop_group}:{hadoop_user} {hadoop_conf_dir}/$file done ././@PaxHeader0000000000000000000000000000020700000000000011454 xustar0000000000000000113 
path=sahara-plugin-vanilla-10.0.0/sahara_plugin_vanilla/plugins/vanilla/hadoop2/resources/spark-cleanup.cron 22 mtime=1696419316.0 sahara-plugin-vanilla-10.0.0/sahara_plugin_vanilla/plugins/vanilla/hadoop2/resources/spark-cleanup.c0000664000175000017500000000013600000000000033703 0ustar00zuulzuul00000000000000# Cleans up old Spark job directories once per hour. 0 * * * * root /etc/hadoop/tmp-cleanup.sh././@PaxHeader0000000000000000000000000000021400000000000011452 xustar0000000000000000118 path=sahara-plugin-vanilla-10.0.0/sahara_plugin_vanilla/plugins/vanilla/hadoop2/resources/tmp-cleanup.sh.template 22 mtime=1696419316.0 sahara-plugin-vanilla-10.0.0/sahara_plugin_vanilla/plugins/vanilla/hadoop2/resources/tmp-cleanup.sh.0000664000175000017500000000230500000000000033631 0ustar00zuulzuul00000000000000#!/bin/sh MINIMUM_CLEANUP_MEGABYTES={minimum_cleanup_megabytes} MINIMUM_CLEANUP_SECONDS={minimum_cleanup_seconds} MAXIMUM_CLEANUP_SECONDS={maximum_cleanup_seconds} CURRENT_TIMESTAMP=`date +%s` POSSIBLE_CLEANUP_THRESHOLD=$(($CURRENT_TIMESTAMP - $MINIMUM_CLEANUP_SECONDS)) DEFINITE_CLEANUP_THRESHOLD=$(($CURRENT_TIMESTAMP - $MAXIMUM_CLEANUP_SECONDS)) unset MAY_DELETE unset WILL_DELETE if [ ! -d /tmp/spark-edp ] then exit 0 fi cd /tmp/spark-edp for JOB in $(find . -maxdepth 1 -mindepth 1 -type d -printf '%f\n') do for EXECUTION in $(find $JOB -maxdepth 1 -mindepth 1 -type d -printf '%f\n') do TIMESTAMP=`stat $JOB/$EXECUTION --printf '%Y'` if [[ $TIMESTAMP -lt $DEFINITE_CLEANUP_THRESHOLD ]] then WILL_DELETE="$WILL_DELETE $JOB/$EXECUTION" else if [[ $TIMESTAMP -lt $POSSIBLE_CLEANUP_THRESHOLD ]] then MAY_DELETE="$MAY_DELETE $JOB/$EXECUTION" fi fi done done for EXECUTION in $WILL_DELETE do rm -Rf $EXECUTION done for EXECUTION in $(ls $MAY_DELETE -trd) do if [[ `du -s -BM | grep -o '[0-9]\+'` -le $MINIMUM_CLEANUP_MEGABYTES ]]; then break fi rm -Rf $EXECUTION done ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1696419316.0 sahara-plugin-vanilla-10.0.0/sahara_plugin_vanilla/plugins/vanilla/hadoop2/resources/topology.sh0000775000175000017500000000062500000000000033210 0ustar00zuulzuul00000000000000#!/bin/bash HADOOP_CONF=/opt/hadoop/etc/hadoop while [ $# -gt 0 ] ; do nodeArg=$1 exec< ${HADOOP_CONF}/topology.data result="" while read line ; do ar=( $line ) if [ "${ar[0]}" = "$nodeArg" ] ; then result="${ar[1]}" fi done shift if [ -z "$result" ] ; then echo -n "/default/rack " else echo -n "$result " fi done ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1696419316.0 sahara-plugin-vanilla-10.0.0/sahara_plugin_vanilla/plugins/vanilla/hadoop2/resources/zoo_sample.cfg0000664000175000017500000000166000000000000033626 0ustar00zuulzuul00000000000000# The number of milliseconds of each tick tickTime={ticktime} # The number of ticks that the initial # synchronization phase can take initLimit={initlimit} # The number of ticks that can pass between # sending a request and getting an acknowledgement syncLimit={synclimit} # the directory where the snapshot is stored. # do not use /tmp for storage, /tmp here is just # example sakes. dataDir=/var/zookeeper # the port at which the clients will connect clientPort=2181 # the maximum number of client connections. # increase this if you need to handle more clients #maxClientCnxns=60 # # Be sure to read the maintenance section of the # administrator guide before turning on autopurge. 
# # http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance # # The number of snapshots to retain in dataDir #autopurge.snapRetainCount=3 # Purge task interval in hours # Set to "0" to disable auto purge feature #autopurge.purgeInterval=1 ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1696419316.0 sahara-plugin-vanilla-10.0.0/sahara_plugin_vanilla/plugins/vanilla/hadoop2/run_scripts.py0000664000175000017500000002514600000000000031715 0ustar00zuulzuul00000000000000# Copyright (c) 2014 Mirantis Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. import os from oslo_log import log as logging from sahara.plugins import context from sahara.plugins import edp from sahara.plugins import utils from sahara_plugin_vanilla.i18n import _ from sahara_plugin_vanilla.plugins.vanilla.hadoop2 import config_helper from sahara_plugin_vanilla.plugins.vanilla.hadoop2 import oozie_helper from sahara_plugin_vanilla.plugins.vanilla.hadoop2 import utils as u from sahara_plugin_vanilla.plugins.vanilla import utils as vu LOG = logging.getLogger(__name__) def start_dn_nm_processes(instances): filternames = ['datanode', 'nodemanager'] instances = utils.instances_with_services(instances, filternames) if len(instances) == 0: return utils.add_provisioning_step( instances[0].cluster_id, utils.start_process_event_message("DataNodes, NodeManagers"), len(instances)) with context.PluginsThreadGroup() as tg: for instance in instances: with context.set_current_instance_id(instance.instance_id): processes = set(instance.node_group.node_processes) processes = processes.intersection(filternames) tg.spawn('vanilla-start-processes-%s' % instance.instance_name, _start_processes, instance, list(processes)) @utils.event_wrapper(True) def _start_processes(instance, processes): with instance.remote() as r: if 'datanode' in processes: r.execute_command( 'sudo su - -c "hadoop-daemon.sh start datanode" hadoop') if 'nodemanager' in processes: r.execute_command( 'sudo su - -c "yarn-daemon.sh start nodemanager" hadoop') def start_hadoop_process(instance, process): instance.remote().execute_command( 'sudo su - -c "hadoop-daemon.sh start %s" hadoop' % process) def start_yarn_process(instance, process): instance.remote().execute_command( 'sudo su - -c "yarn-daemon.sh start %s" hadoop' % process) @utils.event_wrapper( True, step=utils.start_process_event_message("HistoryServer")) def start_historyserver(instance): instance.remote().execute_command( 'sudo su - -c "mr-jobhistory-daemon.sh start historyserver" hadoop') @utils.event_wrapper(True, step=utils.start_process_event_message("Oozie")) def start_oozie_process(pctx, instance): with context.set_current_instance_id(instance.instance_id): with instance.remote() as r: if config_helper.is_mysql_enabled(pctx, instance.cluster): _start_mysql(r) LOG.debug("Creating Oozie DB Schema") sql_script = utils.get_file_text( 'plugins/vanilla/hadoop2/resources/create_oozie_db.sql', 'sahara_plugin_vanilla') password = oozie_helper.get_oozie_mysql_configs( 
instance.cluster)[ 'oozie.service.JPAService.jdbc.password'] sql_script = sql_script.replace("password", password) script_location = "create_oozie_db.sql" r.write_file_to(script_location, sql_script) r.execute_command('mysql -u root < %(script_location)s && ' 'rm %(script_location)s' % {"script_location": script_location}) _oozie_share_lib(r) _start_oozie(r) @utils.event_wrapper( True, step=utils.start_process_event_message("Spark History Server")) def start_spark_history_server(master): sp_home = config_helper.get_spark_home(master.cluster) with context.set_current_instance_id(master.instance_id): with master.remote() as r: r.execute_command('sudo su - -c "bash %s" hadoop' % os.path.join( sp_home, 'sbin/start-history-server.sh')) def start_zk_server(instances): utils.add_provisioning_step( instances[0].cluster_id, utils.start_process_event_message("ZooKeeper"), len(instances)) with context.PluginsThreadGroup() as tg: for instance in instances: with context.set_current_instance_id(instance.instance_id): tg.spawn('ZK-start-processes-%s' % instance.instance_name, _start_zk_processes, instance, 'start') def refresh_zk_servers(cluster, to_delete_instances=None): instances = vu.get_zk_servers(cluster) if to_delete_instances: for instance in to_delete_instances: if instance in instances: instances.remove(instance) utils.add_provisioning_step( cluster.id, utils.start_process_event_message("ZooKeeper"), len(instances)) with context.PluginsThreadGroup() as tg: for instance in instances: with context.set_current_instance_id(instance.instance_id): tg.spawn('ZK-restart-processes-%s' % instance.instance_name, _start_zk_processes, instance, 'restart') @utils.event_wrapper(True) def _start_zk_processes(instance, operation): with instance.remote() as r: r.execute_command( 'sudo su - -c "bash /opt/zookeeper/bin/zkServer.sh %s"' ' hadoop' % operation) def format_namenode(instance): instance.remote().execute_command( 'sudo su - -c "hdfs namenode -format" hadoop') @utils.event_wrapper( True, step=utils.start_process_event_message("Oozie"), param=('cluster', 0)) def refresh_hadoop_nodes(cluster): nn = vu.get_namenode(cluster) nn.remote().execute_command( 'sudo su - -c "hdfs dfsadmin -refreshNodes" hadoop') @utils.event_wrapper( True, step=_("Refresh %s nodes") % "YARN", param=('cluster', 0)) def refresh_yarn_nodes(cluster): rm = vu.get_resourcemanager(cluster) rm.remote().execute_command( 'sudo su - -c "yarn rmadmin -refreshNodes" hadoop') def _oozie_share_lib(remote): LOG.debug("Sharing Oozie libs") # remote.execute_command('sudo su - -c "/opt/oozie/bin/oozie-setup.sh ' # 'sharelib create -fs hdfs://%s:8020" hadoop' # % nn_hostname) # TODO(alazarev) return 'oozie-setup.sh sharelib create' back # when #1262023 is resolved remote.execute_command( 'sudo su - -c "mkdir /tmp/oozielib && ' 'tar zxf /opt/oozie/oozie-sharelib-*.tar.gz -C ' '/tmp/oozielib && ' 'hadoop fs -mkdir /user && ' 'hadoop fs -mkdir /user/hadoop && ' 'hadoop fs -put /tmp/oozielib/share /user/hadoop/ && ' 'rm -rf /tmp/oozielib" hadoop') LOG.debug("Creating sqlfile for Oozie") remote.execute_command('sudo su - -c "/opt/oozie/bin/ooziedb.sh ' 'create -sqlfile oozie.sql ' '-run Validate DB Connection" hadoop') def _start_mysql(remote): LOG.debug("Starting mysql") remote.execute_command('/opt/start-mysql.sh') def _start_oozie(remote): remote.execute_command( 'sudo su - -c "/opt/oozie/bin/oozied.sh start" hadoop') @utils.event_wrapper( True, step=_("Await %s start up") % "DataNodes", param=('cluster', 0)) def await_datanodes(cluster): datanodes_count 
= len(vu.get_datanodes(cluster)) if datanodes_count < 1: return l_message = _("Waiting on %s datanodes to start up") % datanodes_count with vu.get_namenode(cluster).remote() as r: utils.plugin_option_poll( cluster, _check_datanodes_count, config_helper.DATANODES_STARTUP_TIMEOUT, l_message, 1, { 'remote': r, 'count': datanodes_count}) def _check_datanodes_count(remote, count): if count < 1: return True LOG.debug("Checking datanode count") exit_code, stdout = remote.execute_command( 'sudo su -lc "hdfs dfsadmin -report" hadoop | ' r'grep \'Live datanodes\|Datanodes available:\' | ' r'grep -o \'[0-9]\+\' | head -n 1') LOG.debug("Datanode count='{count}'".format(count=stdout.rstrip())) return exit_code == 0 and stdout and int(stdout) == count def _hive_create_warehouse_dir(remote): LOG.debug("Creating Hive warehouse dir") remote.execute_command("sudo su - -c 'hadoop fs -mkdir -p " "/user/hive/warehouse' hadoop") def _hive_copy_shared_conf(remote, dest): LOG.debug("Copying shared Hive conf") dirname, filename = os.path.split(dest) remote.execute_command( "sudo su - -c 'hadoop fs -mkdir -p %s && " "hadoop fs -put /opt/hive/conf/hive-site.xml " "%s' hadoop" % (dirname, dest)) def _hive_create_db(remote): LOG.debug("Creating Hive metastore db") remote.execute_command("mysql -u root < /tmp/create_hive_db.sql") def _hive_metastore_start(remote): LOG.debug("Starting Hive Metastore Server") remote.execute_command("sudo su - -c 'nohup /opt/hive/bin/hive" " --service metastore > /dev/null &' hadoop") @utils.event_wrapper( True, step=utils.start_process_event_message("HiveServer")) def start_hiveserver_process(pctx, instance): with context.set_current_instance_id(instance.instance_id): with instance.remote() as r: _hive_create_warehouse_dir(r) _hive_copy_shared_conf( r, edp.get_hive_shared_conf_path('hadoop')) if config_helper.is_mysql_enabled(pctx, instance.cluster): oozie = vu.get_oozie(instance.node_group.cluster) if not oozie or instance.hostname() != oozie.hostname(): _start_mysql(r) version = instance.cluster.hadoop_version sql_script = utils.get_file_text( 'plugins/vanilla/v{}/resources/create_hive_db.sql'.format( version.replace('.', '_')), 'sahara_plugin_vanilla') sql_script = sql_script.replace( '{{password}}', u.get_hive_password(instance.cluster)) r.write_file_to('/tmp/create_hive_db.sql', sql_script) _hive_create_db(r) _hive_metastore_start(r) LOG.info("Hive Metastore server at {host} has been " "started".format(host=instance.hostname())) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1696419316.0 sahara-plugin-vanilla-10.0.0/sahara_plugin_vanilla/plugins/vanilla/hadoop2/scaling.py0000664000175000017500000001304000000000000030750 0ustar00zuulzuul00000000000000# Copyright (c) 2014 Mirantis Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. 
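# Illustrative sketch of the scaling flow implemented below: for a scale-up,
# scale_cluster(pctx, cluster, instances) roughly does
#
#     config.configure_instances(pctx, instances)   # push configs to new nodes
#     _update_include_files(cluster)                 # rewrite dn-/nm-include
#     run.refresh_hadoop_nodes(cluster)              # hdfs dfsadmin -refreshNodes
#     run.refresh_yarn_nodes(cluster)                # only if a resourcemanager exists
#     run.start_dn_nm_processes(instances)
#
# while decommission_nodes() reverses the process through the exclude files
# and waits for DataNodes/NodeManagers to reach the 'decommissioned' state.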
from sahara.plugins import swift_helper from sahara.plugins import utils from sahara_plugin_vanilla.i18n import _ from sahara_plugin_vanilla.plugins.vanilla.hadoop2 import config from sahara_plugin_vanilla.plugins.vanilla.hadoop2 import config_helper from sahara_plugin_vanilla.plugins.vanilla.hadoop2 import run_scripts as run from sahara_plugin_vanilla.plugins.vanilla.hadoop2 import utils as pu from sahara_plugin_vanilla.plugins.vanilla import utils as vu HADOOP_CONF_DIR = config.HADOOP_CONF_DIR def scale_cluster(pctx, cluster, instances): config.configure_instances(pctx, instances) _update_include_files(cluster) run.refresh_hadoop_nodes(cluster) rm = vu.get_resourcemanager(cluster) if rm: run.refresh_yarn_nodes(cluster) config.configure_topology_data(pctx, cluster) run.start_dn_nm_processes(instances) swift_helper.install_ssl_certs(instances) config.configure_zookeeper(cluster) run.refresh_zk_servers(cluster) def _get_instances_with_service(instances, service): return [instance for instance in instances if service in instance.node_group.node_processes] @utils.event_wrapper( True, step=_("Update include files"), param=('cluster', 0)) def _update_include_files(cluster, dec_instances=None): dec_instances = dec_instances or [] dec_instances_ids = [instance.id for instance in dec_instances] instances = utils.get_instances(cluster) inst_filter = lambda inst: inst.id not in dec_instances_ids datanodes = filter(inst_filter, vu.get_datanodes(cluster)) nodemanagers = filter(inst_filter, vu.get_nodemanagers(cluster)) dn_hosts = utils.generate_fqdn_host_names(datanodes) nm_hosts = utils.generate_fqdn_host_names(nodemanagers) for instance in instances: with instance.remote() as r: r.execute_command( 'sudo su - -c "echo \'%s\' > %s/dn-include" hadoop' % ( dn_hosts, HADOOP_CONF_DIR)) r.execute_command( 'sudo su - -c "echo \'%s\' > %s/nm-include" hadoop' % ( nm_hosts, HADOOP_CONF_DIR)) def decommission_nodes(pctx, cluster, instances): datanodes = _get_instances_with_service(instances, 'datanode') nodemanagers = _get_instances_with_service(instances, 'nodemanager') _update_exclude_files(cluster, instances) run.refresh_hadoop_nodes(cluster) rm = vu.get_resourcemanager(cluster) if rm: run.refresh_yarn_nodes(cluster) _check_nodemanagers_decommission(cluster, nodemanagers) _check_datanodes_decommission(cluster, datanodes) _update_include_files(cluster, instances) _clear_exclude_files(cluster) run.refresh_hadoop_nodes(cluster) config.configure_topology_data(pctx, cluster) config.configure_zookeeper(cluster, instances) # TODO(shuyingya):should invent a way to lastly restart the leader node run.refresh_zk_servers(cluster, instances) def _update_exclude_files(cluster, instances): datanodes = _get_instances_with_service(instances, 'datanode') nodemanagers = _get_instances_with_service(instances, 'nodemanager') dn_hosts = utils.generate_fqdn_host_names(datanodes) nm_hosts = utils.generate_fqdn_host_names(nodemanagers) for instance in utils.get_instances(cluster): with instance.remote() as r: r.execute_command( 'sudo su - -c "echo \'%s\' > %s/dn-exclude" hadoop' % ( dn_hosts, HADOOP_CONF_DIR)) r.execute_command( 'sudo su - -c "echo \'%s\' > %s/nm-exclude" hadoop' % ( nm_hosts, HADOOP_CONF_DIR)) def _clear_exclude_files(cluster): for instance in utils.get_instances(cluster): with instance.remote() as r: r.execute_command( 'sudo su - -c "echo > %s/dn-exclude" hadoop' % HADOOP_CONF_DIR) r.execute_command( 'sudo su - -c "echo > %s/nm-exclude" hadoop' % HADOOP_CONF_DIR) def is_decommissioned(cluster, check_func, 
instances): statuses = check_func(cluster) for instance in instances: if statuses[instance.fqdn()] != 'decommissioned': return False return True def _check_decommission(cluster, instances, check_func, option): utils.plugin_option_poll( cluster, is_decommissioned, option, _("Wait for decommissioning"), 5, {'cluster': cluster, 'check_func': check_func, 'instances': instances}) @utils.event_wrapper( True, step=_("Decommission %s") % "NodeManagers", param=('cluster', 0)) def _check_nodemanagers_decommission(cluster, instances): _check_decommission(cluster, instances, pu.get_nodemanagers_status, config_helper.NODEMANAGERS_DECOMMISSIONING_TIMEOUT) @utils.event_wrapper( True, step=_("Decommission %s") % "DataNodes", param=('cluster', 0)) def _check_datanodes_decommission(cluster, instances): _check_decommission(cluster, instances, pu.get_datanodes_status, config_helper.DATANODES_DECOMMISSIONING_TIMEOUT) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1696419316.0 sahara-plugin-vanilla-10.0.0/sahara_plugin_vanilla/plugins/vanilla/hadoop2/starting_scripts.py0000664000175000017500000000441700000000000032742 0ustar00zuulzuul00000000000000# Copyright (c) 2015 Mirantis Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. from sahara.plugins import utils from sahara_plugin_vanilla.plugins.vanilla.hadoop2 import run_scripts as run from sahara_plugin_vanilla.plugins.vanilla import utils as vu def start_namenode(cluster): nn = vu.get_namenode(cluster) _start_namenode(nn) @utils.event_wrapper( True, step=utils.start_process_event_message('NameNode')) def _start_namenode(nn): run.format_namenode(nn) run.start_hadoop_process(nn, 'namenode') def start_secondarynamenode(cluster): snn = vu.get_secondarynamenode(cluster) if snn: _start_secondarynamenode(snn) @utils.event_wrapper( True, step=utils.start_process_event_message("SecondaryNameNodes")) def _start_secondarynamenode(snn): run.start_hadoop_process(snn, 'secondarynamenode') def start_resourcemanager(cluster): rm = vu.get_resourcemanager(cluster) if rm: _start_resourcemanager(rm) @utils.event_wrapper( True, step=utils.start_process_event_message('ResourceManager')) def _start_resourcemanager(snn): run.start_yarn_process(snn, 'resourcemanager') def start_historyserver(cluster): hs = vu.get_historyserver(cluster) if hs: run.start_historyserver(hs) def start_oozie(pctx, cluster): oo = vu.get_oozie(cluster) if oo: run.start_oozie_process(pctx, oo) def start_hiveserver(pctx, cluster): hiveserver = vu.get_hiveserver(cluster) if hiveserver: run.start_hiveserver_process(pctx, hiveserver) def start_spark(cluster): spark = vu.get_spark_history_server(cluster) if spark: run.start_spark_history_server(spark) def start_zookeeper(cluster): zk_servers = vu.get_zk_servers(cluster) if zk_servers: run.start_zk_server(zk_servers) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1696419316.0 
sahara-plugin-vanilla-10.0.0/sahara_plugin_vanilla/plugins/vanilla/hadoop2/utils.py0000664000175000017500000000552700000000000030503 0ustar00zuulzuul00000000000000# Copyright (c) 2014 Mirantis Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. import re from oslo_log import log as logging from sahara.plugins import castellan_utils as castellan from sahara.plugins import conductor from sahara.plugins import context from sahara_plugin_vanilla.plugins.vanilla import utils as u LOG = logging.getLogger(__name__) def get_datanodes_status(cluster): statuses = {} namenode = u.get_namenode(cluster) status_regexp = r'^Hostname: (.*)\nDecommission Status : (.*)$' matcher = re.compile(status_regexp, re.MULTILINE) dfs_report = namenode.remote().execute_command( 'sudo su - -c "hdfs dfsadmin -report" hadoop')[1] for host, status in matcher.findall(dfs_report): statuses[host] = status.lower() return statuses def get_nodemanagers_status(cluster): statuses = {} resourcemanager = u.get_resourcemanager(cluster) status_regexp = r'^(\S+):\d+\s+(\w+)' matcher = re.compile(status_regexp, re.MULTILINE) yarn_report = resourcemanager.remote().execute_command( 'sudo su - -c "yarn node -all -list" hadoop')[1] for host, status in matcher.findall(yarn_report): statuses[host] = status.lower() return statuses def get_oozie_password(cluster): cluster = conductor.cluster_get(context.ctx(), cluster) extra = cluster.extra.to_dict() if 'oozie_pass_id' not in extra: extra['oozie_pass_id'] = u.generate_random_password() conductor.cluster_update(context.ctx(), cluster, {'extra': extra}) return castellan.get_secret(extra['oozie_pass_id']) def delete_oozie_password(cluster): extra = cluster.extra.to_dict() if 'oozie_pass_id' in extra: castellan.delete_secret(extra['oozie_pass_id']) else: LOG.warning("Cluster hasn't Oozie password") def get_hive_password(cluster): cluster = conductor.cluster_get(context.ctx(), cluster) extra = cluster.extra.to_dict() if 'hive_pass_id' not in extra: extra['hive_pass_id'] = u.generate_random_password() conductor.cluster_update(context.ctx(), cluster, {'extra': extra}) return castellan.get_secret(extra['hive_pass_id']) def delete_hive_password(cluster): extra = cluster.extra.to_dict() if 'hive_pass_id' in extra: castellan.delete_secret(extra['hive_pass_id']) else: LOG.warning("Cluster hasn't hive password") ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1696419316.0 sahara-plugin-vanilla-10.0.0/sahara_plugin_vanilla/plugins/vanilla/hadoop2/validation.py0000664000175000017500000001453300000000000031472 0ustar00zuulzuul00000000000000# Copyright (c) 2014 Mirantis Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. from sahara.plugins import exceptions as ex from sahara.plugins import utils as u from sahara_plugin_vanilla.i18n import _ from sahara_plugin_vanilla.plugins.vanilla.hadoop2 import config_helper as cu from sahara_plugin_vanilla.plugins.vanilla import utils as vu def validate_cluster_creating(pctx, cluster): nn_count = _get_inst_count(cluster, 'namenode') if nn_count != 1: raise ex.InvalidComponentCountException('namenode', 1, nn_count) snn_count = _get_inst_count(cluster, 'secondarynamenode') if snn_count > 1: raise ex.InvalidComponentCountException('secondarynamenode', _('0 or 1'), snn_count) rm_count = _get_inst_count(cluster, 'resourcemanager') if rm_count > 1: raise ex.InvalidComponentCountException('resourcemanager', _('0 or 1'), rm_count) hs_count = _get_inst_count(cluster, 'historyserver') if hs_count > 1: raise ex.InvalidComponentCountException('historyserver', _('0 or 1'), hs_count) nm_count = _get_inst_count(cluster, 'nodemanager') if rm_count == 0: if nm_count > 0: raise ex.RequiredServiceMissingException('resourcemanager', required_by='nodemanager') oo_count = _get_inst_count(cluster, 'oozie') dn_count = _get_inst_count(cluster, 'datanode') if oo_count > 1: raise ex.InvalidComponentCountException('oozie', _('0 or 1'), oo_count) if oo_count == 1: if dn_count < 1: raise ex.RequiredServiceMissingException('datanode', required_by='oozie') if nm_count < 1: raise ex.RequiredServiceMissingException('nodemanager', required_by='oozie') if hs_count != 1: raise ex.RequiredServiceMissingException('historyserver', required_by='oozie') spark_hist_count = _get_inst_count(cluster, 'spark history server') if spark_hist_count > 1: raise ex.InvalidComponentCountException('spark history server', _('0 or 1'), spark_hist_count) rep_factor = cu.get_config_value(pctx, 'HDFS', 'dfs.replication', cluster) if dn_count < rep_factor: raise ex.InvalidComponentCountException( 'datanode', rep_factor, dn_count, _('Number of datanodes must be ' 'not less than ' 'dfs.replication.')) hive_count = _get_inst_count(cluster, 'hiveserver') if hive_count > 1: raise ex.InvalidComponentCountException('hive', _('0 or 1'), hive_count) zk_count = _get_inst_count(cluster, 'zookeeper') if zk_count > 0 and (zk_count % 2) != 1: raise ex.InvalidComponentCountException( 'zookeeper', _('odd'), zk_count, _('Number of zookeeper nodes ' 'should be odd.')) def validate_additional_ng_scaling(cluster, additional): rm = vu.get_resourcemanager(cluster) scalable_processes = _get_scalable_processes() for ng_id in additional: ng = u.get_by_id(cluster.node_groups, ng_id) if not set(ng.node_processes).issubset(scalable_processes): msg = _("Vanilla plugin cannot scale nodegroup with processes: %s") raise ex.NodeGroupCannotBeScaled(ng.name, msg % ' '.join(ng.node_processes)) if not rm and 'nodemanager' in ng.node_processes: msg = _("Vanilla plugin cannot scale node group with processes " "which have no master-processes run in cluster") raise ex.NodeGroupCannotBeScaled(ng.name, msg) def validate_existing_ng_scaling(pctx, cluster, existing): scalable_processes = _get_scalable_processes() dn_to_delete = 0 for ng in cluster.node_groups: if ng.id in 
existing: if ng.count > existing[ng.id] and "datanode" in ng.node_processes: dn_to_delete += ng.count - existing[ng.id] if not set(ng.node_processes).issubset(scalable_processes): msg = _("Vanilla plugin cannot scale nodegroup " "with processes: %s") raise ex.NodeGroupCannotBeScaled( ng.name, msg % ' '.join(ng.node_processes)) dn_amount = len(vu.get_datanodes(cluster)) rep_factor = cu.get_config_value(pctx, 'HDFS', 'dfs.replication', cluster) if dn_to_delete > 0 and dn_amount - dn_to_delete < rep_factor: msg = _("Vanilla plugin cannot shrink cluster because it would be " "not enough nodes for replicas (replication factor is %s)") raise ex.ClusterCannotBeScaled( cluster.name, msg % rep_factor) def validate_zookeeper_node_count(zk_ng, existing, additional): zk_amount = 0 for ng in zk_ng: if ng.id in existing: zk_amount += existing[ng.id] else: zk_amount += ng.count for ng_id in additional: ng = u.get_by_id(zk_ng, ng_id) if "zookeeper" in ng.node_processes: zk_amount += ng.count if (zk_amount % 2) != 1: msg = _("Vanilla plugin cannot scale cluster because it must keep" " zookeeper service in odd.") raise ex.ClusterCannotBeScaled(zk_ng[0].cluster.name, msg) def _get_scalable_processes(): return ['datanode', 'nodemanager', 'zookeeper'] def _get_inst_count(cluster, process): return sum([ng.count for ng in u.get_node_groups(cluster, process)]) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1696419316.0 sahara-plugin-vanilla-10.0.0/sahara_plugin_vanilla/plugins/vanilla/plugin.py0000664000175000017500000001010700000000000027273 0ustar00zuulzuul00000000000000# Copyright (c) 2013 Mirantis Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. import copy from sahara.plugins import provisioning as p from sahara_plugin_vanilla.i18n import _ from sahara_plugin_vanilla.plugins.vanilla import versionfactory as vhf class VanillaProvider(p.ProvisioningPluginBase): def __init__(self): self.version_factory = vhf.VersionFactory.get_instance() def get_description(self): return _('The Apache Vanilla plugin provides the ability to launch ' 'upstream Vanilla Apache Hadoop cluster without any ' 'management consoles. 
It can also deploy the Oozie ' 'component.') def _get_version_handler(self, hadoop_version): return self.version_factory.get_version_handler(hadoop_version) def get_node_processes(self, hadoop_version): return self._get_version_handler(hadoop_version).get_node_processes() def get_labels(self): default = {'enabled': {'status': True}, 'stable': {'status': True}} result = {'plugin_labels': copy.deepcopy(default)} result['version_labels'] = { version: copy.deepcopy(default) for version in self.get_versions() } return result def get_versions(self): return self.version_factory.get_versions() def get_title(self): return "Vanilla Apache Hadoop" def get_configs(self, hadoop_version): return self._get_version_handler(hadoop_version).get_plugin_configs() def configure_cluster(self, cluster): return self._get_version_handler( cluster.hadoop_version).configure_cluster(cluster) def start_cluster(self, cluster): return self._get_version_handler( cluster.hadoop_version).start_cluster(cluster) def validate(self, cluster): return self._get_version_handler( cluster.hadoop_version).validate(cluster) def scale_cluster(self, cluster, instances): return self._get_version_handler( cluster.hadoop_version).scale_cluster(cluster, instances) def decommission_nodes(self, cluster, instances): return self._get_version_handler( cluster.hadoop_version).decommission_nodes(cluster, instances) def validate_scaling(self, cluster, existing, additional): return self._get_version_handler( cluster.hadoop_version).validate_scaling(cluster, existing, additional) def get_edp_engine(self, cluster, job_type): return self._get_version_handler( cluster.hadoop_version).get_edp_engine(cluster, job_type) def get_edp_job_types(self, versions=None): res = {} for vers in self.version_factory.get_versions(): if not versions or vers in versions: vh = self.version_factory.get_version_handler(vers) res[vers] = vh.get_edp_job_types() return res def get_edp_config_hints(self, job_type, version): version_handler = ( self.version_factory.get_version_handler(version)) return version_handler.get_edp_config_hints(job_type) def get_open_ports(self, node_group): return self._get_version_handler( node_group.cluster.hadoop_version).get_open_ports(node_group) def on_terminate_cluster(self, cluster): return self._get_version_handler( cluster.hadoop_version).on_terminate_cluster(cluster) def recommend_configs(self, cluster, scaling=False): return self._get_version_handler( cluster.hadoop_version).recommend_configs(cluster, scaling) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1696419316.0 sahara-plugin-vanilla-10.0.0/sahara_plugin_vanilla/plugins/vanilla/utils.py0000664000175000017500000000334100000000000027137 0ustar00zuulzuul00000000000000# Copyright (c) 2014 Mirantis Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. 
from oslo_utils import uuidutils from sahara.plugins import castellan_utils as castellan from sahara.plugins import utils as u def get_namenode(cluster): return u.get_instance(cluster, "namenode") def get_resourcemanager(cluster): return u.get_instance(cluster, 'resourcemanager') def get_nodemanagers(cluster): return u.get_instances(cluster, 'nodemanager') def get_oozie(cluster): return u.get_instance(cluster, "oozie") def get_spark_history_server(cluster): return u.get_instance(cluster, "spark history server") def get_hiveserver(cluster): return u.get_instance(cluster, "hiveserver") def get_datanodes(cluster): return u.get_instances(cluster, 'datanode') def get_secondarynamenode(cluster): return u.get_instance(cluster, 'secondarynamenode') def get_historyserver(cluster): return u.get_instance(cluster, 'historyserver') def get_instance_hostname(instance): return instance.hostname() if instance else None def get_zk_servers(cluster): return u.get_instances(cluster, 'zookeeper') def generate_random_password(): password = uuidutils.generate_uuid() return castellan.store_secret(password) ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1696419356.9564793 sahara-plugin-vanilla-10.0.0/sahara_plugin_vanilla/plugins/vanilla/v2_7_1/0000775000175000017500000000000000000000000026421 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1696419316.0 sahara-plugin-vanilla-10.0.0/sahara_plugin_vanilla/plugins/vanilla/v2_7_1/__init__.py0000664000175000017500000000000000000000000030520 0ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1696419316.0 sahara-plugin-vanilla-10.0.0/sahara_plugin_vanilla/plugins/vanilla/v2_7_1/config_helper.py0000664000175000017500000001120100000000000031572 0ustar00zuulzuul00000000000000# Copyright (c) 2015 Mirantis Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. 
import copy from oslo_config import cfg import six from sahara.plugins import provisioning as p from sahara.plugins import utils from sahara_plugin_vanilla.plugins.vanilla.hadoop2 import config_helper CONF = cfg.CONF CONF.import_opt("enable_data_locality", "sahara.topology.topology_helper") CORE_DEFAULT = utils.load_hadoop_xml_defaults( 'plugins/vanilla/v2_7_1/resources/core-default.xml', 'sahara_plugin_vanilla') HDFS_DEFAULT = utils.load_hadoop_xml_defaults( 'plugins/vanilla/v2_7_1/resources/hdfs-default.xml', 'sahara_plugin_vanilla') MAPRED_DEFAULT = utils.load_hadoop_xml_defaults( 'plugins/vanilla/v2_7_1/resources/mapred-default.xml', 'sahara_plugin_vanilla') YARN_DEFAULT = utils.load_hadoop_xml_defaults( 'plugins/vanilla/v2_7_1/resources/yarn-default.xml', 'sahara_plugin_vanilla') OOZIE_DEFAULT = utils.load_hadoop_xml_defaults( 'plugins/vanilla/v2_7_1/resources/oozie-default.xml', 'sahara_plugin_vanilla') HIVE_DEFAULT = utils.load_hadoop_xml_defaults( 'plugins/vanilla/v2_7_1/resources/hive-default.xml', 'sahara_plugin_vanilla') _default_executor_classpath = ":".join( ['/opt/hadoop/share/hadoop/tools/lib/hadoop-openstack-2.7.1.jar']) SPARK_CONFS = copy.deepcopy(config_helper.SPARK_CONFS) SPARK_CONFS['Spark']['OPTIONS'].append( { 'name': 'Executor extra classpath', 'description': 'Value for spark.executor.extraClassPath' ' in spark-defaults.conf' ' (default: %s)' % _default_executor_classpath, 'default': '%s' % _default_executor_classpath, 'priority': 2, } ) XML_CONFS = { "Hadoop": [CORE_DEFAULT], "HDFS": [HDFS_DEFAULT], "YARN": [YARN_DEFAULT], "MapReduce": [MAPRED_DEFAULT], "JobFlow": [OOZIE_DEFAULT], "Hive": [HIVE_DEFAULT] } ENV_CONFS = { "YARN": { 'ResourceManager Heap Size': 1024, 'NodeManager Heap Size': 1024 }, "HDFS": { 'NameNode Heap Size': 1024, 'SecondaryNameNode Heap Size': 1024, 'DataNode Heap Size': 1024 }, "MapReduce": { 'JobHistoryServer Heap Size': 1024 }, "JobFlow": { 'Oozie Heap Size': 1024 } } # Initialise plugin Hadoop configurations PLUGIN_XML_CONFIGS = config_helper.init_xml_configs(XML_CONFS) PLUGIN_ENV_CONFIGS = config_helper.init_env_configs(ENV_CONFS) def _init_all_configs(): configs = [] configs.extend(PLUGIN_XML_CONFIGS) configs.extend(PLUGIN_ENV_CONFIGS) configs.extend(config_helper.PLUGIN_GENERAL_CONFIGS) configs.extend(_get_spark_configs()) configs.extend(_get_zookeeper_configs()) return configs def _get_spark_opt_default(opt_name): for opt in SPARK_CONFS["Spark"]["OPTIONS"]: if opt_name == opt["name"]: return opt["default"] return None def _get_spark_configs(): spark_configs = [] for service, config_items in six.iteritems(SPARK_CONFS): for item in config_items['OPTIONS']: cfg = p.Config(name=item["name"], description=item["description"], default_value=item["default"], applicable_target=service, scope="cluster", is_optional=True, priority=item["priority"]) spark_configs.append(cfg) return spark_configs def _get_zookeeper_configs(): zk_configs = [] for service, config_items in six.iteritems(config_helper.ZOOKEEPER_CONFS): for item in config_items['OPTIONS']: cfg = p.Config(name=item["name"], description=item["description"], default_value=item["default"], applicable_target=service, scope="cluster", is_optional=True, priority=item["priority"]) zk_configs.append(cfg) return zk_configs PLUGIN_CONFIGS = _init_all_configs() def get_plugin_configs(): return PLUGIN_CONFIGS def get_xml_configs(): return PLUGIN_XML_CONFIGS def get_env_configs(): return ENV_CONFS ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1696419316.0 
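A minimal, self-contained sketch of the dict-to-config flattening performed by _get_spark_configs() and _get_zookeeper_configs() in the config_helper module above: each service's OPTIONS list becomes one provisioning config object per option. SimpleConfig and build_configs below are hypothetical stand-ins for sahara.plugins.provisioning.Config and the plugin's private helpers, shown only to make the pattern explicit; they are not part of the packaged sources.

from dataclasses import dataclass

@dataclass
class SimpleConfig:
    # Hypothetical stand-in for sahara.plugins.provisioning.Config.
    name: str
    description: str
    default_value: str
    applicable_target: str
    scope: str = "cluster"
    is_optional: bool = True
    priority: int = 2

# Shape mirrors SPARK_CONFS['Spark']['OPTIONS'] in config_helper.py above.
EXAMPLE_CONFS = {
    'Spark': {
        'OPTIONS': [
            {'name': 'Executor extra classpath',
             'description': 'Value for spark.executor.extraClassPath '
                            'in spark-defaults.conf',
             'default': '/opt/hadoop/share/hadoop/tools/lib/'
                        'hadoop-openstack-2.7.1.jar',
             'priority': 2},
        ]
    }
}

def build_configs(confs):
    # One config object per option, keyed to its service, as in
    # _get_spark_configs() / _get_zookeeper_configs().
    configs = []
    for service, items in confs.items():
        for item in items['OPTIONS']:
            configs.append(SimpleConfig(name=item['name'],
                                        description=item['description'],
                                        default_value=item['default'],
                                        applicable_target=service,
                                        priority=item['priority']))
    return configs

if __name__ == '__main__':
    for cfg in build_configs(EXAMPLE_CONFS):
        print(cfg.applicable_target, cfg.name, '->', cfg.default_value)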
sahara-plugin-vanilla-10.0.0/sahara_plugin_vanilla/plugins/vanilla/v2_7_1/edp_engine.py0000664000175000017500000000660400000000000031076 0ustar00zuulzuul00000000000000# Copyright (c) 2015 Red Hat, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. import os from sahara.plugins import edp from sahara.plugins import exceptions as ex from sahara.plugins import utils as plugin_utils from sahara_plugin_vanilla.i18n import _ from sahara_plugin_vanilla.plugins.vanilla import confighints_helper as chh from sahara_plugin_vanilla.plugins.vanilla.hadoop2 import edp_engine from sahara_plugin_vanilla.plugins.vanilla import utils as v_utils class EdpOozieEngine(edp_engine.EdpOozieEngine): @staticmethod def get_possible_job_config(job_type): if edp.compare_job_type(job_type, edp.JOB_TYPE_HIVE): return {'job_config': chh.get_possible_hive_config_from( 'plugins/vanilla/v2_7_1/resources/hive-default.xml')} if edp.compare_job_type(job_type, edp.JOB_TYPE_MAPREDUCE, edp.JOB_TYPE_MAPREDUCE_STREAMING): return {'job_config': chh.get_possible_mapreduce_config_from( 'plugins/vanilla/v2_7_1/resources/mapred-default.xml')} if edp.compare_job_type(job_type, edp.JOB_TYPE_PIG): return {'job_config': chh.get_possible_pig_config_from( 'plugins/vanilla/v2_7_1/resources/mapred-default.xml')} return edp_engine.EdpOozieEngine.get_possible_job_config(job_type) class EdpSparkEngine(edp.PluginsSparkJobEngine): edp_base_version = "2.7.1" def __init__(self, cluster): super(EdpSparkEngine, self).__init__(cluster) self.master = plugin_utils.get_instance(cluster, "spark history server") self.plugin_params["spark-user"] = "sudo -u hadoop " self.plugin_params["spark-submit"] = os.path.join( plugin_utils.get_config_value_or_default( "Spark", "Spark home", self.cluster), "bin/spark-submit") self.plugin_params["deploy-mode"] = "cluster" self.plugin_params["master"] = "yarn" driver_cp = plugin_utils.get_config_value_or_default( "Spark", "Executor extra classpath", self.cluster) self.plugin_params["driver-class-path"] = driver_cp @staticmethod def edp_supported(version): return version >= EdpSparkEngine.edp_base_version @staticmethod def job_type_supported(job_type): return (job_type in edp.PluginsSparkJobEngine.get_supported_job_types()) def validate_job_execution(self, cluster, job, data): if (not self.edp_supported(cluster.hadoop_version) or not v_utils.get_spark_history_server(cluster)): raise ex.PluginInvalidDataException( _('Spark {base} or higher required to run {type} jobs').format( base=EdpSparkEngine.edp_base_version, type=job.type)) super(EdpSparkEngine, self).validate_job_execution(cluster, job, data) ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1696419356.9604795 sahara-plugin-vanilla-10.0.0/sahara_plugin_vanilla/plugins/vanilla/v2_7_1/resources/0000775000175000017500000000000000000000000030433 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1696419316.0 
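A small, self-contained illustration (assumed names, not part of the packaged sources) of the guard applied by EdpSparkEngine.validate_job_execution above: a Spark EDP job is accepted only when the cluster's Hadoop version meets the engine's edp_base_version and a "spark history server" instance exists in the cluster.

EDP_BASE_VERSION = "2.7.1"  # mirrors EdpSparkEngine.edp_base_version

def edp_supported(version):
    # Same comparison as EdpSparkEngine.edp_supported.
    return version >= EDP_BASE_VERSION

def can_run_spark_job(hadoop_version, has_spark_history_server):
    # Both conditions must hold; otherwise the real engine raises
    # PluginInvalidDataException before delegating to the parent class.
    return edp_supported(hadoop_version) and has_spark_history_server

if __name__ == '__main__':
    print(can_run_spark_job("2.7.1", True))    # True
    print(can_run_spark_job("2.6.0", False))   # False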
sahara-plugin-vanilla-10.0.0/sahara_plugin_vanilla/plugins/vanilla/v2_7_1/resources/README.rst0000664000175000017500000000272000000000000032123 0ustar00zuulzuul00000000000000Apache Hadoop Configurations for Sahara ======================================= This directory contains default XML configuration files: * core-default.xml * hdfs-default.xml * mapred-default.xml * yarn-default.xml * oozie-default.xml * hive-default.xml These files are applied for Sahara's plugin of Apache Hadoop version 2.7.1 Files were taken from here: * `core-default.xml `_ * `hdfs-default.xml `_ * `yarn-default.xml `_ * `mapred-default.xml `_ * `oozie-default.xml `_ * `hive-default.xml `_ XML configs are used to expose default Hadoop configurations to the users through Sahara's REST API. It allows users to override some config values which will be pushed to the provisioned VMs running Hadoop services as part of appropriate xml config. ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1696419316.0 sahara-plugin-vanilla-10.0.0/sahara_plugin_vanilla/plugins/vanilla/v2_7_1/resources/core-default.xml0000664000175000017500000016735300000000000033546 0ustar00zuulzuul00000000000000 hadoop.common.configuration.version 0.23.0 version of this configuration file hadoop.tmp.dir /tmp/hadoop-${user.name} A base for other temporary directories. io.native.lib.available true Controls whether to use native libraries for bz2 and zlib compression codecs or not. The property does not control any other native libraries. hadoop.http.filter.initializers org.apache.hadoop.http.lib.StaticUserWebFilter A comma separated list of class names. Each class in the list must extend org.apache.hadoop.http.FilterInitializer. The corresponding Filter will be initialized. Then, the Filter will be applied to all user facing jsp and servlet web pages. The ordering of the list defines the ordering of the filters. hadoop.security.authorization false Is service-level authorization enabled? hadoop.security.instrumentation.requires.admin false Indicates if administrator ACLs are required to access instrumentation servlets (JMX, METRICS, CONF, STACKS). hadoop.security.authentication simple Possible values are simple (no authentication), and kerberos hadoop.security.group.mapping org.apache.hadoop.security.JniBasedUnixGroupsMappingWithFallback Class for user to group mapping (get groups for a given user) for ACL. The default implementation, org.apache.hadoop.security.JniBasedUnixGroupsMappingWithFallback, will determine if the Java Native Interface (JNI) is available. If JNI is available the implementation will use the API within hadoop to resolve a list of groups for a user. If JNI is not available then the shell implementation, ShellBasedUnixGroupsMapping, is used. This implementation shells out to the Linux/Unix environment with the bash -c groups command to resolve a list of groups for a user. hadoop.security.groups.cache.secs 300 This is the config controlling the validity of the entries in the cache containing the user->group mapping. When this duration has expired, then the implementation of the group mapping provider is invoked to get the groups of the user and then cached back. hadoop.security.groups.negative-cache.secs 30 Expiration time for entries in the the negative user-to-group mapping caching, in seconds. This is useful when invalid users are retrying frequently. 
It is suggested to set a small value for this expiration, since a transient error in group lookup could temporarily lock out a legitimate user. Set this to zero or negative value to disable negative user-to-group caching. hadoop.security.groups.cache.warn.after.ms 5000 If looking up a single user to group takes longer than this amount of milliseconds, we will log a warning message. hadoop.security.group.mapping.ldap.url The URL of the LDAP server to use for resolving user groups when using the LdapGroupsMapping user to group mapping. hadoop.security.group.mapping.ldap.ssl false Whether or not to use SSL when connecting to the LDAP server. hadoop.security.group.mapping.ldap.ssl.keystore File path to the SSL keystore that contains the SSL certificate required by the LDAP server. hadoop.security.group.mapping.ldap.ssl.keystore.password.file The path to a file containing the password of the LDAP SSL keystore. IMPORTANT: This file should be readable only by the Unix user running the daemons. hadoop.security.group.mapping.ldap.bind.user The distinguished name of the user to bind as when connecting to the LDAP server. This may be left blank if the LDAP server supports anonymous binds. hadoop.security.group.mapping.ldap.bind.password.file The path to a file containing the password of the bind user. IMPORTANT: This file should be readable only by the Unix user running the daemons. hadoop.security.group.mapping.ldap.base The search base for the LDAP connection. This is a distinguished name, and will typically be the root of the LDAP directory. hadoop.security.group.mapping.ldap.search.filter.user (&(objectClass=user)(sAMAccountName={0})) An additional filter to use when searching for LDAP users. The default will usually be appropriate for Active Directory installations. If connecting to an LDAP server with a non-AD schema, this should be replaced with (&(objectClass=inetOrgPerson)(uid={0}). {0} is a special string used to denote where the username fits into the filter. hadoop.security.group.mapping.ldap.search.filter.group (objectClass=group) An additional filter to use when searching for LDAP groups. This should be changed when resolving groups against a non-Active Directory installation. posixGroups are currently not a supported group class. hadoop.security.group.mapping.ldap.search.attr.member member The attribute of the group object that identifies the users that are members of the group. The default will usually be appropriate for any LDAP installation. hadoop.security.group.mapping.ldap.search.attr.group.name cn The attribute of the group object that identifies the group name. The default will usually be appropriate for all LDAP systems. hadoop.security.group.mapping.ldap.directory.search.timeout 10000 The attribute applied to the LDAP SearchControl properties to set a maximum time limit when searching and awaiting a result. Set to 0 if infinite wait period is desired. Default is 10 seconds. Units in milliseconds. hadoop.security.service.user.name.key For those cases where the same RPC protocol is implemented by multiple servers, this configuration is required for specifying the principal name to use for the service when the client wishes to make an RPC call. hadoop.security.uid.cache.secs 14400 This is the config controlling the validity of the entries in the cache containing the userId to userName and groupId to groupName used by NativeIO getFstat(). hadoop.rpc.protection authentication A comma-separated list of protection values for secured sasl connections. 
Possible values are authentication, integrity and privacy. authentication means authentication only and no integrity or privacy; integrity implies authentication and integrity are enabled; and privacy implies all of authentication, integrity and privacy are enabled. hadoop.security.saslproperties.resolver.class can be used to override the hadoop.rpc.protection for a connection at the server side. hadoop.security.saslproperties.resolver.class SaslPropertiesResolver used to resolve the QOP used for a connection. If not specified, the full set of values specified in hadoop.rpc.protection is used while determining the QOP used for the connection. If a class is specified, then the QOP values returned by the class will be used while determining the QOP used for the connection. hadoop.work.around.non.threadsafe.getpwuid false Some operating systems or authentication modules are known to have broken implementations of getpwuid_r and getpwgid_r, such that these calls are not thread-safe. Symptoms of this problem include JVM crashes with a stack trace inside these functions. If your system exhibits this issue, enable this configuration parameter to include a lock around the calls as a workaround. An incomplete list of some systems known to have this issue is available at http://wiki.apache.org/hadoop/KnownBrokenPwuidImplementations hadoop.kerberos.kinit.command kinit Used to periodically renew Kerberos credentials when provided to Hadoop. The default setting assumes that kinit is in the PATH of users running the Hadoop client. Change this to the absolute path to kinit if this is not the case. hadoop.security.auth_to_local Maps kerberos principals to local user names io.file.buffer.size 4096 The size of buffer for use in sequence files. The size of this buffer should probably be a multiple of hardware page size (4096 on Intel x86), and it determines how much data is buffered during read and write operations. io.bytes.per.checksum 512 The number of bytes per checksum. Must not be larger than io.file.buffer.size. io.skip.checksum.errors false If true, when a checksum error is encountered while reading a sequence file, entries are skipped, instead of throwing an exception. io.compression.codecs A comma-separated list of the compression codec classes that can be used for compression/decompression. In addition to any classes specified with this property (which take precedence), codec classes on the classpath are discovered using a Java ServiceLoader. io.compression.codec.bzip2.library system-native The native-code library to be used for compression and decompression by the bzip2 codec. This library could be specified either by by name or the full pathname. In the former case, the library is located by the dynamic linker, usually searching the directories specified in the environment variable LD_LIBRARY_PATH. The value of "system-native" indicates that the default system library should be used. To indicate that the algorithm should operate entirely in Java, specify "java-builtin". io.serializations org.apache.hadoop.io.serializer.WritableSerialization,org.apache.hadoop.io.serializer.avro.AvroSpecificSerialization,org.apache.hadoop.io.serializer.avro.AvroReflectSerialization A list of serialization classes that can be used for obtaining serializers and deserializers. io.seqfile.local.dir ${hadoop.tmp.dir}/io/local The local directory where sequence file stores intermediate data files during merge. May be a comma-separated list of directories on different devices in order to spread disk i/o. 
Directories that do not exist are ignored. io.map.index.skip 0 Number of index entries to skip between each entry. Zero by default. Setting this to values larger than zero can facilitate opening large MapFiles using less memory. io.map.index.interval 128 MapFile consist of two files - data file (tuples) and index file (keys). For every io.map.index.interval records written in the data file, an entry (record-key, data-file-position) is written in the index file. This is to allow for doing binary search later within the index file to look up records by their keys and get their closest positions in the data file. fs.defaultFS file:/// The name of the default file system. A URI whose scheme and authority determine the FileSystem implementation. The uri's scheme determines the config property (fs.SCHEME.impl) naming the FileSystem implementation class. The uri's authority is used to determine the host, port, etc. for a filesystem. fs.default.name file:/// Deprecated. Use (fs.defaultFS) property instead fs.trash.interval 0 Number of minutes after which the checkpoint gets deleted. If zero, the trash feature is disabled. This option may be configured both on the server and the client. If trash is disabled server side then the client side configuration is checked. If trash is enabled on the server side then the value configured on the server is used and the client configuration value is ignored. fs.trash.checkpoint.interval 0 Number of minutes between trash checkpoints. Should be smaller or equal to fs.trash.interval. If zero, the value is set to the value of fs.trash.interval. Every time the checkpointer runs it creates a new checkpoint out of current and removes checkpoints created more than fs.trash.interval minutes ago. fs.AbstractFileSystem.file.impl org.apache.hadoop.fs.local.LocalFs The AbstractFileSystem for file: uris. fs.AbstractFileSystem.har.impl org.apache.hadoop.fs.HarFs The AbstractFileSystem for har: uris. fs.AbstractFileSystem.hdfs.impl org.apache.hadoop.fs.Hdfs The FileSystem for hdfs: uris. fs.AbstractFileSystem.viewfs.impl org.apache.hadoop.fs.viewfs.ViewFs The AbstractFileSystem for view file system for viewfs: uris (ie client side mount table:). fs.AbstractFileSystem.ftp.impl org.apache.hadoop.fs.ftp.FtpFs The FileSystem for Ftp: uris. fs.ftp.host 0.0.0.0 FTP filesystem connects to this server fs.ftp.host.port 21 FTP filesystem connects to fs.ftp.host on this port fs.df.interval 60000 Disk usage statistics refresh interval in msec. fs.du.interval 600000 File space usage statistics refresh interval in msec. fs.s3.block.size 67108864 Block size to use when writing files to S3. fs.s3.buffer.dir ${hadoop.tmp.dir}/s3 Determines where on the local filesystem the S3 filesystem should store files before sending them to S3 (or after retrieving them from S3). fs.s3.maxRetries 4 The maximum number of retries for reading or writing files to S3, before we signal failure to the application. fs.s3.sleepTimeSeconds 10 The number of seconds to sleep between each S3 retry. fs.swift.impl org.apache.hadoop.fs.swift.snative.SwiftNativeFileSystem The implementation class of the OpenStack Swift Filesystem fs.automatic.close true By default, FileSystem instances are automatically closed at program exit using a JVM shutdown hook. Setting this property to false disables this behavior. This is an advanced option that should only be used by server applications requiring a more carefully orchestrated shutdown sequence. 
fs.s3n.block.size 67108864 Block size to use when reading files using the native S3 filesystem (s3n: URIs). fs.s3n.multipart.uploads.enabled false Setting this property to true enables multiple uploads to native S3 filesystem. When uploading a file, it is split into blocks if the size is larger than fs.s3n.multipart.uploads.block.size. fs.s3n.multipart.uploads.block.size 67108864 The block size for multipart uploads to native S3 filesystem. Default size is 64MB. fs.s3n.multipart.copy.block.size 5368709120 The block size for multipart copy in native S3 filesystem. Default size is 5GB. fs.s3n.server-side-encryption-algorithm Specify a server-side encryption algorithm for S3. The default is NULL, and the only other currently allowable value is AES256. fs.s3a.awsAccessKeyId AWS access key ID. Omit for Role-based authentication. fs.s3a.awsSecretAccessKey AWS secret key. Omit for Role-based authentication. fs.s3a.connection.maximum 15 Controls the maximum number of simultaneous connections to S3. fs.s3a.connection.ssl.enabled true Enables or disables SSL connections to S3. fs.s3a.endpoint AWS S3 endpoint to connect to. An up-to-date list is provided in the AWS Documentation: regions and endpoints. Without this property, the standard region (s3.amazonaws.com) is assumed. fs.s3a.proxy.host Hostname of the (optional) proxy server for S3 connections. fs.s3a.proxy.port Proxy server port. If this property is not set but fs.s3a.proxy.host is, port 80 or 443 is assumed (consistent with the value of fs.s3a.connection.ssl.enabled). fs.s3a.proxy.username Username for authenticating with proxy server. fs.s3a.proxy.password Password for authenticating with proxy server. fs.s3a.proxy.domain Domain for authenticating with proxy server. fs.s3a.proxy.workstation Workstation for authenticating with proxy server. fs.s3a.attempts.maximum 10 How many times we should retry commands on transient errors. fs.s3a.connection.establish.timeout 5000 Socket connection setup timeout in milliseconds. fs.s3a.connection.timeout 50000 Socket connection timeout in milliseconds. fs.s3a.paging.maximum 5000 How many keys to request from S3 when doing directory listings at a time. fs.s3a.threads.max 256 Maximum number of concurrent active (part)uploads, which each use a thread from the threadpool. fs.s3a.threads.core 15 Number of core threads in the threadpool. fs.s3a.threads.keepalivetime 60 Number of seconds a thread can be idle before being terminated. fs.s3a.max.total.tasks 1000 Number of (part)uploads allowed to the queue before blocking additional uploads. fs.s3a.multipart.size 104857600 How big (in bytes) to split upload or copy operations up into. fs.s3a.multipart.threshold 2147483647 Threshold before uploads or copies use parallel multipart operations. fs.s3a.acl.default Set a canned ACL for newly created and copied objects. Value may be private, public-read, public-read-write, authenticated-read, log-delivery-write, bucket-owner-read, or bucket-owner-full-control. fs.s3a.multipart.purge false True if you want to purge existing multipart uploads that may not have been completed/aborted correctly fs.s3a.multipart.purge.age 86400 Minimum age in seconds of multipart uploads to purge fs.s3a.buffer.dir ${hadoop.tmp.dir}/s3a Comma separated list of directories that will be used to buffer file uploads to. fs.s3a.fast.upload false Upload directly from memory instead of buffering to disk first. 
Memory usage and parallelism can be controlled as up to fs.s3a.multipart.size memory is consumed for each (part)upload actively uploading (fs.s3a.threads.max) or queueing (fs.s3a.max.total.tasks) fs.s3a.fast.buffer.size 1048576 Size of initial memory buffer in bytes allocated for an upload. No effect if fs.s3a.fast.upload is false. fs.s3a.impl org.apache.hadoop.fs.s3a.S3AFileSystem The implementation class of the S3A Filesystem io.seqfile.compress.blocksize 1000000 The minimum block size for compression in block compressed SequenceFiles. io.seqfile.lazydecompress true Should values of block-compressed SequenceFiles be decompressed only when necessary. io.seqfile.sorter.recordlimit 1000000 The limit on number of records to be kept in memory in a spill in SequenceFiles.Sorter io.mapfile.bloom.size 1048576 The size of BloomFilter-s used in BloomMapFile. Each time this many keys is appended the next BloomFilter will be created (inside a DynamicBloomFilter). Larger values minimize the number of filters, which slightly increases the performance, but may waste too much space if the total number of keys is usually much smaller than this number. io.mapfile.bloom.error.rate 0.005 The rate of false positives in BloomFilter-s used in BloomMapFile. As this value decreases, the size of BloomFilter-s increases exponentially. This value is the probability of encountering false positives (default is 0.5%). hadoop.util.hash.type murmur The default implementation of Hash. Currently this can take one of the two values: 'murmur' to select MurmurHash and 'jenkins' to select JenkinsHash. ipc.client.idlethreshold 4000 Defines the threshold number of connections after which connections will be inspected for idleness. ipc.client.kill.max 10 Defines the maximum number of clients to disconnect in one go. ipc.client.connection.maxidletime 10000 The maximum time in msec after which a client will bring down the connection to the server. ipc.client.connect.max.retries 10 Indicates the number of retries a client will make to establish a server connection. ipc.client.connect.retry.interval 1000 Indicates the number of milliseconds a client will wait for before retrying to establish a server connection. ipc.client.connect.timeout 20000 Indicates the number of milliseconds a client will wait for the socket to establish a server connection. ipc.client.connect.max.retries.on.timeouts 45 Indicates the number of retries a client will make on socket timeout to establish a server connection. ipc.server.listen.queue.size 128 Indicates the length of the listen queue for servers accepting client connections. hadoop.security.impersonation.provider.class A class which implements ImpersonationProvider interface, used to authorize whether one user can impersonate a specific user. If not specified, the DefaultImpersonationProvider will be used. If a class is specified, then that class will be used to determine the impersonation capability. hadoop.rpc.socket.factory.class.default org.apache.hadoop.net.StandardSocketFactory Default SocketFactory to use. This parameter is expected to be formatted as "package.FactoryClassName". hadoop.rpc.socket.factory.class.ClientProtocol SocketFactory to use to connect to a DFS. If null or empty, use hadoop.rpc.socket.class.default. This socket factory is also used by DFSClient to create sockets to DataNodes. hadoop.socks.server Address (host:port) of the SOCKS server to be used by the SocksSocketFactory. 
net.topology.node.switch.mapping.impl org.apache.hadoop.net.ScriptBasedMapping The default implementation of the DNSToSwitchMapping. It invokes a script specified in net.topology.script.file.name to resolve node names. If the value for net.topology.script.file.name is not set, the default value of DEFAULT_RACK is returned for all node names. net.topology.impl org.apache.hadoop.net.NetworkTopology The default implementation of NetworkTopology which is classic three layer one. net.topology.script.file.name The script name that should be invoked to resolve DNS names to NetworkTopology names. Example: the script would take host.foo.bar as an argument, and return /rack1 as the output. net.topology.script.number.args 100 The max number of args that the script configured with net.topology.script.file.name should be run with. Each arg is an IP address. net.topology.table.file.name The file name for a topology file, which is used when the net.topology.node.switch.mapping.impl property is set to org.apache.hadoop.net.TableMapping. The file format is a two column text file, with columns separated by whitespace. The first column is a DNS or IP address and the second column specifies the rack where the address maps. If no entry corresponding to a host in the cluster is found, then /default-rack is assumed. file.stream-buffer-size 4096 The size of buffer to stream files. The size of this buffer should probably be a multiple of hardware page size (4096 on Intel x86), and it determines how much data is buffered during read and write operations. file.bytes-per-checksum 512 The number of bytes per checksum. Must not be larger than file.stream-buffer-size file.client-write-packet-size 65536 Packet size for clients to write file.blocksize 67108864 Block size file.replication 1 Replication factor s3.stream-buffer-size 4096 The size of buffer to stream files. The size of this buffer should probably be a multiple of hardware page size (4096 on Intel x86), and it determines how much data is buffered during read and write operations. s3.bytes-per-checksum 512 The number of bytes per checksum. Must not be larger than s3.stream-buffer-size s3.client-write-packet-size 65536 Packet size for clients to write s3.blocksize 67108864 Block size s3.replication 3 Replication factor s3native.stream-buffer-size 4096 The size of buffer to stream files. The size of this buffer should probably be a multiple of hardware page size (4096 on Intel x86), and it determines how much data is buffered during read and write operations. s3native.bytes-per-checksum 512 The number of bytes per checksum. Must not be larger than s3native.stream-buffer-size s3native.client-write-packet-size 65536 Packet size for clients to write s3native.blocksize 67108864 Block size s3native.replication 3 Replication factor ftp.stream-buffer-size 4096 The size of buffer to stream files. The size of this buffer should probably be a multiple of hardware page size (4096 on Intel x86), and it determines how much data is buffered during read and write operations. ftp.bytes-per-checksum 512 The number of bytes per checksum. Must not be larger than ftp.stream-buffer-size ftp.client-write-packet-size 65536 Packet size for clients to write ftp.blocksize 67108864 Block size ftp.replication 3 Replication factor tfile.io.chunk.size 1048576 Value chunk size in bytes. Default to 1MB. Values of the length less than the chunk size is guaranteed to have known value length in read time (See also TFile.Reader.Scanner.Entry.isValueLengthKnown()). 
tfile.fs.output.buffer.size 262144 Buffer size used for FSDataOutputStream in bytes. tfile.fs.input.buffer.size 262144 Buffer size used for FSDataInputStream in bytes. hadoop.http.authentication.type simple Defines authentication used for Oozie HTTP endpoint. Supported values are: simple | kerberos | #AUTHENTICATION_HANDLER_CLASSNAME# hadoop.http.authentication.token.validity 36000 Indicates how long (in seconds) an authentication token is valid before it has to be renewed. hadoop.http.authentication.signature.secret.file ${user.home}/hadoop-http-auth-signature-secret The signature secret for signing the authentication tokens. The same secret should be used for JT/NN/DN/TT configurations. hadoop.http.authentication.cookie.domain The domain to use for the HTTP cookie that stores the authentication token. In order to authentiation to work correctly across all Hadoop nodes web-consoles the domain must be correctly set. IMPORTANT: when using IP addresses, browsers ignore cookies with domain settings. For this setting to work properly all nodes in the cluster must be configured to generate URLs with hostname.domain names on it. hadoop.http.authentication.simple.anonymous.allowed true Indicates if anonymous requests are allowed when using 'simple' authentication. hadoop.http.authentication.kerberos.principal HTTP/_HOST@LOCALHOST Indicates the Kerberos principal to be used for HTTP endpoint. The principal MUST start with 'HTTP/' as per Kerberos HTTP SPNEGO specification. hadoop.http.authentication.kerberos.keytab ${user.home}/hadoop.keytab Location of the keytab file with the credentials for the principal. Referring to the same keytab file Oozie uses for its Kerberos credentials for Hadoop. dfs.ha.fencing.methods List of fencing methods to use for service fencing. May contain builtin methods (eg shell and sshfence) or user-defined method. dfs.ha.fencing.ssh.connect-timeout 30000 SSH connection timeout, in milliseconds, to use with the builtin sshfence fencer. dfs.ha.fencing.ssh.private-key-files The SSH private key files to use with the builtin sshfence fencer. The user name to filter as, on static web filters while rendering content. An example use is the HDFS web UI (user to be used for browsing files). hadoop.http.staticuser.user dr.who ha.zookeeper.quorum A list of ZooKeeper server addresses, separated by commas, that are to be used by the ZKFailoverController in automatic failover. ha.zookeeper.session-timeout.ms 5000 The session timeout to use when the ZKFC connects to ZooKeeper. Setting this value to a lower value implies that server crashes will be detected more quickly, but risks triggering failover too aggressively in the case of a transient error or network blip. ha.zookeeper.parent-znode /hadoop-ha The ZooKeeper znode under which the ZK failover controller stores its information. Note that the nameservice ID is automatically appended to this znode, so it is not normally necessary to configure this, even in a federated environment. ha.zookeeper.acl world:anyone:rwcda A comma-separated list of ZooKeeper ACLs to apply to the znodes used by automatic failover. These ACLs are specified in the same format as used by the ZooKeeper CLI. If the ACL itself contains secrets, you may instead specify a path to a file, prefixed with the '@' symbol, and the value of this configuration will be loaded from within. ha.zookeeper.auth A comma-separated list of ZooKeeper authentications to add when connecting to ZooKeeper. 
These are specified in the same format as used by the "addauth" command in the ZK CLI. It is important that the authentications specified here are sufficient to access znodes with the ACL specified in ha.zookeeper.acl. If the auths contain secrets, you may instead specify a path to a file, prefixed with the '@' symbol, and the value of this configuration will be loaded from within. hadoop.ssl.keystores.factory.class org.apache.hadoop.security.ssl.FileBasedKeyStoresFactory The keystores factory to use for retrieving certificates. hadoop.ssl.require.client.cert false Whether client certificates are required hadoop.ssl.hostname.verifier DEFAULT The hostname verifier to provide for HttpsURLConnections. Valid values are: DEFAULT, STRICT, STRICT_I6, DEFAULT_AND_LOCALHOST and ALLOW_ALL hadoop.ssl.server.conf ssl-server.xml Resource file from which ssl server keystore information will be extracted. This file is looked up in the classpath, typically it should be in Hadoop conf/ directory. hadoop.ssl.client.conf ssl-client.xml Resource file from which ssl client keystore information will be extracted This file is looked up in the classpath, typically it should be in Hadoop conf/ directory. hadoop.ssl.enabled false Deprecated. Use dfs.http.policy and yarn.http.policy instead. hadoop.ssl.enabled.protocols TLSv1 Protocols supported by the ssl. hadoop.jetty.logs.serve.aliases true Enable/Disable aliases serving from jetty fs.permissions.umask-mode 022 The umask used when creating files and directories. Can be in octal or in symbolic. Examples are: "022" (octal for u=rwx,g=r-x,o=r-x in symbolic), or "u=rwx,g=rwx,o=" (symbolic for 007 in octal). ha.health-monitor.connect-retry-interval.ms 1000 How often to retry connecting to the service. ha.health-monitor.check-interval.ms 1000 How often to check the service. ha.health-monitor.sleep-after-disconnect.ms 1000 How long to sleep after an unexpected RPC error. ha.health-monitor.rpc-timeout.ms 45000 Timeout for the actual monitorHealth() calls. ha.failover-controller.new-active.rpc-timeout.ms 60000 Timeout that the FC waits for the new active to become active ha.failover-controller.graceful-fence.rpc-timeout.ms 5000 Timeout that the FC waits for the old active to go to standby ha.failover-controller.graceful-fence.connection.retries 1 FC connection retries for graceful fencing ha.failover-controller.cli-check.rpc-timeout.ms 20000 Timeout that the CLI (manual) FC waits for monitorHealth, getServiceState ipc.client.fallback-to-simple-auth-allowed false When a client is configured to attempt a secure connection, but attempts to connect to an insecure server, that server may instruct the client to switch to SASL SIMPLE (unsecure) authentication. This setting controls whether or not the client will accept this instruction from the server. When false (the default), the client will not allow the fallback to SIMPLE authentication, and will abort the connection. fs.client.resolve.remote.symlinks true Whether to resolve symlinks when accessing a remote Hadoop filesystem. Setting this to false causes an exception to be thrown upon encountering a symlink. This setting does not apply to local filesystems, which automatically resolve local symlinks. nfs.exports.allowed.hosts * rw By default, the export can be mounted by any client. The value string contains machine name and access privilege, separated by whitespace characters. The machine name format can be a single host, a Java regular expression, or an IPv4 address. 
The access privilege uses rw or ro to specify read/write or read-only access of the machines to exports. If the access privilege is not provided, the default is read-only. Entries are separated by ";". For example: "192.168.0.0/22 rw ; host.*\.example\.com ; host1.test.org ro;". Only the NFS gateway needs to restart after this property is updated. hadoop.user.group.static.mapping.overrides dr.who=; Static mapping of user to groups. This will override the groups if available in the system for the specified user. In otherwords, groups look-up will not happen for these users, instead groups mapped in this configuration will be used. Mapping should be in this format. user1=group1,group2;user2=;user3=group2; Default, "dr.who=;" will consider "dr.who" as user without groups. rpc.metrics.quantile.enable false Setting this property to true and rpc.metrics.percentiles.intervals to a comma-separated list of the granularity in seconds, the 50/75/90/95/99th percentile latency for rpc queue/processing time in milliseconds are added to rpc metrics. rpc.metrics.percentiles.intervals A comma-separated list of the granularity in seconds for the metrics which describe the 50/75/90/95/99th percentile latency for rpc queue/processing time. The metrics are outputted if rpc.metrics.quantile.enable is set to true. hadoop.security.crypto.codec.classes.EXAMPLECIPHERSUITE The prefix for a given crypto codec, contains a comma-separated list of implementation classes for a given crypto codec (eg EXAMPLECIPHERSUITE). The first implementation will be used if available, others are fallbacks. hadoop.security.crypto.codec.classes.aes.ctr.nopadding org.apache.hadoop.crypto.OpensslAesCtrCryptoCodec,org.apache.hadoop.crypto.JceAesCtrCryptoCodec Comma-separated list of crypto codec implementations for AES/CTR/NoPadding. The first implementation will be used if available, others are fallbacks. hadoop.security.crypto.cipher.suite AES/CTR/NoPadding Cipher suite for crypto codec. hadoop.security.crypto.jce.provider The JCE provider name used in CryptoCodec. hadoop.security.crypto.buffer.size 8192 The buffer size used by CryptoInputStream and CryptoOutputStream. hadoop.security.java.secure.random.algorithm SHA1PRNG The java secure random algorithm. hadoop.security.secure.random.impl Implementation of secure random. hadoop.security.random.device.file.path /dev/urandom OS security random device file path. fs.har.impl.disable.cache true Don't cache 'har' filesystem instances. hadoop.security.kms.client.authentication.retry-count 1 Number of time to retry connecting to KMS on authentication failure hadoop.security.kms.client.encrypted.key.cache.size 500 Size of the EncryptedKeyVersion cache Queue for each key hadoop.security.kms.client.encrypted.key.cache.low-watermark 0.3f If size of the EncryptedKeyVersion cache Queue falls below the low watermark, this cache queue will be scheduled for a refill hadoop.security.kms.client.encrypted.key.cache.num.refill.threads 2 Number of threads to use for refilling depleted EncryptedKeyVersion cache Queues hadoop.security.kms.client.encrypted.key.cache.expiry 43200000 Cache expiry time for a Key, after which the cache Queue for this key will be dropped. Default = 12hrs hadoop.htrace.spanreceiver.classes A comma separated list of the fully-qualified class name of classes implementing SpanReceiver. The tracing system works by collecting information in structs called 'Spans'. It is up to you to choose how you want to receive this information by implementing the SpanReceiver interface. 
ipc.server.max.connections 0 The maximum number of concurrent connections a server is allowed to accept. If this limit is exceeded, incoming connections will first fill the listen queue and then may go to an OS-specific listen overflow queue. The client may fail or timeout, but the server can avoid running out of file descriptors using this feature. 0 means no limit. Is the registry enabled in the YARN Resource Manager? If true, the YARN RM will, as needed. create the user and system paths, and purge service records when containers, application attempts and applications complete. If false, the paths must be created by other means, and no automatic cleanup of service records will take place. hadoop.registry.rm.enabled false The root zookeeper node for the registry hadoop.registry.zk.root /registry Zookeeper session timeout in milliseconds hadoop.registry.zk.session.timeout.ms 60000 Zookeeper connection timeout in milliseconds hadoop.registry.zk.connection.timeout.ms 15000 Zookeeper connection retry count before failing hadoop.registry.zk.retry.times 5 hadoop.registry.zk.retry.interval.ms 1000 Zookeeper retry limit in milliseconds, during exponential backoff. This places a limit even if the retry times and interval limit, combined with the backoff policy, result in a long retry period hadoop.registry.zk.retry.ceiling.ms 60000 List of hostname:port pairs defining the zookeeper quorum binding for the registry hadoop.registry.zk.quorum localhost:2181 Key to set if the registry is secure. Turning it on changes the permissions policy from "open access" to restrictions on kerberos with the option of a user adding one or more auth key pairs down their own tree. hadoop.registry.secure false A comma separated list of Zookeeper ACL identifiers with system access to the registry in a secure cluster. These are given full access to all entries. If there is an "@" at the end of a SASL entry it instructs the registry client to append the default kerberos domain. hadoop.registry.system.acls sasl:yarn@, sasl:mapred@, sasl:hdfs@ The kerberos realm: used to set the realm of system principals which do not declare their realm, and any other accounts that need the value. If empty, the default realm of the running process is used. If neither are known and the realm is needed, then the registry service/client will fail. hadoop.registry.kerberos.realm Key to define the JAAS context. 
Used in secure mode hadoop.registry.jaas.context Client ././@PaxHeader0000000000000000000000000000020600000000000011453 xustar0000000000000000112 path=sahara-plugin-vanilla-10.0.0/sahara_plugin_vanilla/plugins/vanilla/v2_7_1/resources/create_hive_db.sql 22 mtime=1696419316.0 sahara-plugin-vanilla-10.0.0/sahara_plugin_vanilla/plugins/vanilla/v2_7_1/resources/create_hive_db.s0000664000175000017500000000066200000000000033546 0ustar00zuulzuul00000000000000CREATE DATABASE metastore; USE metastore; SOURCE /opt/hive/scripts/metastore/upgrade/mysql/hive-schema-0.10.0.mysql.sql; CREATE USER 'hive'@'localhost' IDENTIFIED BY '{{password}}'; REVOKE ALL PRIVILEGES, GRANT OPTION FROM 'hive'@'localhost'; GRANT ALL PRIVILEGES ON metastore.* TO 'hive'@'localhost' IDENTIFIED BY '{{password}}'; GRANT ALL PRIVILEGES ON metastore.* TO 'hive'@'%' IDENTIFIED BY '{{password}}'; FLUSH PRIVILEGES; exit ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1696419316.0 sahara-plugin-vanilla-10.0.0/sahara_plugin_vanilla/plugins/vanilla/v2_7_1/resources/hdfs-default.xml0000664000175000017500000023710100000000000033527 0ustar00zuulzuul00000000000000 hadoop.hdfs.configuration.version 1 version of this configuration file dfs.namenode.rpc-address RPC address that handles all clients requests. In the case of HA/Federation where multiple namenodes exist, the name service id is added to the name e.g. dfs.namenode.rpc-address.ns1 dfs.namenode.rpc-address.EXAMPLENAMESERVICE The value of this property will take the form of nn-host1:rpc-port. dfs.namenode.rpc-bind-host The actual address the RPC server will bind to. If this optional address is set, it overrides only the hostname portion of dfs.namenode.rpc-address. It can also be specified per name node or name service for HA/Federation. This is useful for making the name node listen on all interfaces by setting it to 0.0.0.0. dfs.namenode.servicerpc-address RPC address for HDFS Services communication. BackupNode, Datanodes and all other services should be connecting to this address if it is configured. In the case of HA/Federation where multiple namenodes exist, the name service id is added to the name e.g. dfs.namenode.servicerpc-address.ns1 dfs.namenode.rpc-address.EXAMPLENAMESERVICE The value of this property will take the form of nn-host1:rpc-port. If the value of this property is unset the value of dfs.namenode.rpc-address will be used as the default. dfs.namenode.servicerpc-bind-host The actual address the service RPC server will bind to. If this optional address is set, it overrides only the hostname portion of dfs.namenode.servicerpc-address. It can also be specified per name node or name service for HA/Federation. This is useful for making the name node listen on all interfaces by setting it to 0.0.0.0. dfs.namenode.secondary.http-address 0.0.0.0:50090 The secondary namenode http server address and port. dfs.namenode.secondary.https-address 0.0.0.0:50091 The secondary namenode HTTPS server address and port. dfs.datanode.address 0.0.0.0:50010 The datanode server address and port for data transfer. dfs.datanode.http.address 0.0.0.0:50075 The datanode http server address and port. dfs.datanode.ipc.address 0.0.0.0:50020 The datanode ipc server address and port. dfs.datanode.handler.count 10 The number of server threads for the datanode. dfs.namenode.http-address 0.0.0.0:50070 The address and the base port where the dfs namenode web ui will listen on. dfs.namenode.http-bind-host The actual adress the HTTP server will bind to. 
If this optional address is set, it overrides only the hostname portion of dfs.namenode.http-address. It can also be specified per name node or name service for HA/Federation. This is useful for making the name node HTTP server listen on all interfaces by setting it to 0.0.0.0. dfs.namenode.heartbeat.recheck-interval 300000 This time decides the interval to check for expired datanodes. With this value and dfs.heartbeat.interval, the interval of deciding the datanode is stale or not is also calculated. The unit of this configuration is millisecond. dfs.http.policy HTTP_ONLY Decide if HTTPS(SSL) is supported on HDFS This configures the HTTP endpoint for HDFS daemons: The following values are supported: - HTTP_ONLY : Service is provided only on http - HTTPS_ONLY : Service is provided only on https - HTTP_AND_HTTPS : Service is provided both on http and https dfs.client.https.need-auth false Whether SSL client certificate authentication is required dfs.client.cached.conn.retry 3 The number of times the HDFS client will pull a socket from the cache. Once this number is exceeded, the client will try to create a new socket. dfs.https.server.keystore.resource ssl-server.xml Resource file from which ssl server keystore information will be extracted dfs.client.https.keystore.resource ssl-client.xml Resource file from which ssl client keystore information will be extracted dfs.datanode.https.address 0.0.0.0:50475 The datanode secure http server address and port. dfs.namenode.https-address 0.0.0.0:50470 The namenode secure http server address and port. dfs.namenode.https-bind-host The actual adress the HTTPS server will bind to. If this optional address is set, it overrides only the hostname portion of dfs.namenode.https-address. It can also be specified per name node or name service for HA/Federation. This is useful for making the name node HTTPS server listen on all interfaces by setting it to 0.0.0.0. dfs.datanode.dns.interface default The name of the Network Interface from which a data node should report its IP address. dfs.datanode.dns.nameserver default The host name or IP address of the name server (DNS) which a DataNode should use to determine the host name used by the NameNode for communication and display purposes. dfs.namenode.backup.address 0.0.0.0:50100 The backup node server address and port. If the port is 0 then the server will start on a free port. dfs.namenode.backup.http-address 0.0.0.0:50105 The backup node http server address and port. If the port is 0 then the server will start on a free port. dfs.namenode.replication.considerLoad true Decide if chooseTarget considers the target's load or not dfs.default.chunk.view.size 32768 The number of bytes to view for a file on the browser. dfs.datanode.du.reserved 0 Reserved space in bytes per volume. Always leave this much space free for non dfs use. dfs.namenode.name.dir file://${hadoop.tmp.dir}/dfs/name Determines where on the local filesystem the DFS name node should store the name table(fsimage). If this is a comma-delimited list of directories then the name table is replicated in all of the directories, for redundancy. dfs.namenode.name.dir.restore false Set to true to enable NameNode to attempt recovering a previously failed dfs.namenode.name.dir. When enabled, a recovery of any failed directory is attempted during checkpoint. dfs.namenode.fs-limits.max-component-length 255 Defines the maximum number of bytes in UTF-8 encoding in each component of a path. A value of 0 will disable the check. 
dfs.namenode.fs-limits.max-directory-items 1048576 Defines the maximum number of items that a directory may contain. Cannot set the property to a value less than 1 or more than 6400000. dfs.namenode.fs-limits.min-block-size 1048576 Minimum block size in bytes, enforced by the Namenode at create time. This prevents the accidental creation of files with tiny block sizes (and thus many blocks), which can degrade performance. dfs.namenode.fs-limits.max-blocks-per-file 1048576 Maximum number of blocks per file, enforced by the Namenode on write. This prevents the creation of extremely large files which can degrade performance. dfs.namenode.edits.dir ${dfs.namenode.name.dir} Determines where on the local filesystem the DFS name node should store the transaction (edits) file. If this is a comma-delimited list of directories then the transaction file is replicated in all of the directories, for redundancy. Default value is same as dfs.namenode.name.dir dfs.namenode.shared.edits.dir A directory on shared storage between the multiple namenodes in an HA cluster. This directory will be written by the active and read by the standby in order to keep the namespaces synchronized. This directory does not need to be listed in dfs.namenode.edits.dir above. It should be left empty in a non-HA cluster. dfs.namenode.edits.journal-plugin.qjournal org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager dfs.permissions.enabled true If "true", enable permission checking in HDFS. If "false", permission checking is turned off, but all other behavior is unchanged. Switching from one parameter value to the other does not change the mode, owner or group of files or directories. dfs.permissions.superusergroup supergroup The name of the group of super-users. dfs.namenode.acls.enabled false Set to true to enable support for HDFS ACLs (Access Control Lists). By default, ACLs are disabled. When ACLs are disabled, the NameNode rejects all RPCs related to setting or getting ACLs. dfs.namenode.lazypersist.file.scrub.interval.sec 300 The NameNode periodically scans the namespace for LazyPersist files with missing blocks and unlinks them from the namespace. This configuration key controls the interval between successive scans. Set it to a negative value to disable this behavior. dfs.block.access.token.enable false If "true", access tokens are used as capabilities for accessing datanodes. If "false", no access tokens are checked on accessing datanodes. dfs.block.access.key.update.interval 600 Interval in minutes at which namenode updates its access keys. dfs.block.access.token.lifetime 600 The lifetime of access tokens in minutes. dfs.datanode.data.dir file://${hadoop.tmp.dir}/dfs/data Determines where on the local filesystem an DFS data node should store its blocks. If this is a comma-delimited list of directories, then data will be stored in all named directories, typically on different devices. The directories should be tagged with corresponding storage types ([SSD]/[DISK]/[ARCHIVE]/[RAM_DISK]) for HDFS storage policies. The default storage type will be DISK if the directory does not have a storage type tagged explicitly. Directories that do not exist will be created if local filesystem permission allows. dfs.datanode.data.dir.perm 700 Permissions for the directories on on the local filesystem where the DFS data node store its blocks. The permissions can either be octal or symbolic. dfs.replication 3 Default block replication. The actual number of replications can be specified when the file is created. 
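A minimal sketch of how this replication default would be overridden in hdfs-site.xml (the value 2 is only an example, not a recommendation):

<property>
  <name>dfs.replication</name>
  <value>2</value>
  <!-- per-file replication can still be chosen at create time -->
</property>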
The default is used if replication is not specified in create time. dfs.replication.max 512 Maximal block replication. dfs.namenode.replication.min 1 Minimal block replication. dfs.blocksize 134217728 The default block size for new files, in bytes. You can use the following suffix (case insensitive): k(kilo), m(mega), g(giga), t(tera), p(peta), e(exa) to specify the size (such as 128k, 512m, 1g, etc.), Or provide complete size in bytes (such as 134217728 for 128 MB). dfs.client.block.write.retries 3 The number of retries for writing blocks to the data nodes, before we signal failure to the application. dfs.client.block.write.replace-datanode-on-failure.enable true If there is a datanode/network failure in the write pipeline, DFSClient will try to remove the failed datanode from the pipeline and then continue writing with the remaining datanodes. As a result, the number of datanodes in the pipeline is decreased. The feature is to add new datanodes to the pipeline. This is a site-wide property to enable/disable the feature. When the cluster size is extremely small, e.g. 3 nodes or less, cluster administrators may want to set the policy to NEVER in the default configuration file or disable this feature. Otherwise, users may experience an unusually high rate of pipeline failures since it is impossible to find new datanodes for replacement. See also dfs.client.block.write.replace-datanode-on-failure.policy dfs.client.block.write.replace-datanode-on-failure.policy DEFAULT This property is used only if the value of dfs.client.block.write.replace-datanode-on-failure.enable is true. ALWAYS: always add a new datanode when an existing datanode is removed. NEVER: never add a new datanode. DEFAULT: Let r be the replication number. Let n be the number of existing datanodes. Add a new datanode only if r is greater than or equal to 3 and either (1) floor(r/2) is greater than or equal to n; or (2) r is greater than n and the block is hflushed/appended. dfs.client.block.write.replace-datanode-on-failure.best-effort false This property is used only if the value of dfs.client.block.write.replace-datanode-on-failure.enable is true. Best effort means that the client will try to replace a failed datanode in write pipeline (provided that the policy is satisfied), however, it continues the write operation in case that the datanode replacement also fails. Suppose the datanode replacement fails. false: An exception should be thrown so that the write will fail. true : The write should be resumed with the remaining datandoes. Note that setting this property to true allows writing to a pipeline with a smaller number of datanodes. As a result, it increases the probability of data loss. dfs.blockreport.intervalMsec 21600000 Determines block reporting interval in milliseconds. dfs.blockreport.initialDelay 0 Delay for first block report in seconds. dfs.blockreport.split.threshold 1000000 If the number of blocks on the DataNode is below this threshold then it will send block reports for all Storage Directories in a single message. If the number of blocks exceeds this threshold then the DataNode will send block reports for each Storage Directory in separate messages. Set to zero to always split. dfs.datanode.directoryscan.interval 21600 Interval in seconds for Datanode to scan data directories and reconcile the difference between blocks in memory and on the disk. dfs.datanode.directoryscan.threads 1 How many threads should the threadpool used to compile reports for volumes in parallel have. 
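As an aside, the suffix form mentioned for dfs.blocksize means the same default can be written with a unit; a sketch (128m is simply 134217728 bytes expressed with the mega suffix):

<property>
  <name>dfs.blocksize</name>
  <value>128m</value>
  <!-- equivalent to the default of 134217728 bytes -->
</property>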
dfs.heartbeat.interval 3 Determines datanode heartbeat interval in seconds. dfs.namenode.handler.count 10 The number of server threads for the namenode. dfs.namenode.safemode.threshold-pct 0.999f Specifies the percentage of blocks that should satisfy the minimal replication requirement defined by dfs.namenode.replication.min. Values less than or equal to 0 mean not to wait for any particular percentage of blocks before exiting safemode. Values greater than 1 will make safe mode permanent. dfs.namenode.safemode.min.datanodes 0 Specifies the number of datanodes that must be considered alive before the name node exits safemode. Values less than or equal to 0 mean not to take the number of live datanodes into account when deciding whether to remain in safe mode during startup. Values greater than the number of datanodes in the cluster will make safe mode permanent. dfs.namenode.safemode.extension 30000 Determines extension of safe mode in milliseconds after the threshold level is reached. dfs.namenode.resource.check.interval 5000 The interval in milliseconds at which the NameNode resource checker runs. The checker calculates the number of the NameNode storage volumes whose available spaces are more than dfs.namenode.resource.du.reserved, and enters safemode if the number becomes lower than the minimum value specified by dfs.namenode.resource.checked.volumes.minimum. dfs.namenode.resource.du.reserved 104857600 The amount of space to reserve/require for a NameNode storage directory in bytes. The default is 100MB. dfs.namenode.resource.checked.volumes A list of local directories for the NameNode resource checker to check in addition to the local edits directories. dfs.namenode.resource.checked.volumes.minimum 1 The minimum number of redundant NameNode storage volumes required. dfs.datanode.balance.bandwidthPerSec 1048576 Specifies the maximum amount of bandwidth that each datanode can utilize for the balancing purpose in term of the number of bytes per second. dfs.hosts Names a file that contains a list of hosts that are permitted to connect to the namenode. The full pathname of the file must be specified. If the value is empty, all hosts are permitted. dfs.hosts.exclude Names a file that contains a list of hosts that are not permitted to connect to the namenode. The full pathname of the file must be specified. If the value is empty, no hosts are excluded. dfs.namenode.max.objects 0 The maximum number of files, directories and blocks dfs supports. A value of zero indicates no limit to the number of objects that dfs supports. dfs.namenode.datanode.registration.ip-hostname-check true If true (the default), then the namenode requires that a connecting datanode's address must be resolved to a hostname. If necessary, a reverse DNS lookup is performed. All attempts to register a datanode from an unresolvable address are rejected. It is recommended that this setting be left on to prevent accidental registration of datanodes listed by hostname in the excludes file during a DNS outage. Only set this to false in environments where there is no infrastructure to support reverse DNS lookup. dfs.namenode.decommission.interval 30 Namenode periodicity in seconds to check if decommission is complete. dfs.namenode.decommission.blocks.per.interval 500000 The approximate number of blocks to process per decommission interval, as defined in dfs.namenode.decommission.interval. 
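For the host include/exclude pair above, a typical decommissioning setup points dfs.hosts.exclude at a plain-text file of hostnames; a sketch (the path is an assumption, any full pathname works):

<property>
  <name>dfs.hosts.exclude</name>
  <value>/etc/hadoop/dfs.exclude</value>
  <!-- hosts listed in this file are not permitted to connect to the namenode -->
</property>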
dfs.namenode.decommission.max.concurrent.tracked.nodes 100 The maximum number of decommission-in-progress datanodes nodes that will be tracked at one time by the namenode. Tracking a decommission-in-progress datanode consumes additional NN memory proportional to the number of blocks on the datnode. Having a conservative limit reduces the potential impact of decomissioning a large number of nodes at once. A value of 0 means no limit will be enforced. dfs.namenode.replication.interval 3 The periodicity in seconds with which the namenode computes replication work for datanodes. dfs.namenode.accesstime.precision 3600000 The access time for HDFS file is precise upto this value. The default value is 1 hour. Setting a value of 0 disables access times for HDFS. dfs.datanode.plugins Comma-separated list of datanode plug-ins to be activated. dfs.namenode.plugins Comma-separated list of namenode plug-ins to be activated. dfs.stream-buffer-size 4096 The size of buffer to stream files. The size of this buffer should probably be a multiple of hardware page size (4096 on Intel x86), and it determines how much data is buffered during read and write operations. dfs.bytes-per-checksum 512 The number of bytes per checksum. Must not be larger than dfs.stream-buffer-size dfs.client-write-packet-size 65536 Packet size for clients to write dfs.client.write.exclude.nodes.cache.expiry.interval.millis 600000 The maximum period to keep a DN in the excluded nodes list at a client. After this period, in milliseconds, the previously excluded node(s) will be removed automatically from the cache and will be considered good for block allocations again. Useful to lower or raise in situations where you keep a file open for very long periods (such as a Write-Ahead-Log (WAL) file) to make the writer tolerant to cluster maintenance restarts. Defaults to 10 minutes. dfs.namenode.checkpoint.dir file://${hadoop.tmp.dir}/dfs/namesecondary Determines where on the local filesystem the DFS secondary name node should store the temporary images to merge. If this is a comma-delimited list of directories then the image is replicated in all of the directories for redundancy. dfs.namenode.checkpoint.edits.dir ${dfs.namenode.checkpoint.dir} Determines where on the local filesystem the DFS secondary name node should store the temporary edits to merge. If this is a comma-delimited list of directories then the edits is replicated in all of the directories for redundancy. Default value is same as dfs.namenode.checkpoint.dir dfs.namenode.checkpoint.period 3600 The number of seconds between two periodic checkpoints. dfs.namenode.checkpoint.txns 1000000 The Secondary NameNode or CheckpointNode will create a checkpoint of the namespace every 'dfs.namenode.checkpoint.txns' transactions, regardless of whether 'dfs.namenode.checkpoint.period' has expired. dfs.namenode.checkpoint.check.period 60 The SecondaryNameNode and CheckpointNode will poll the NameNode every 'dfs.namenode.checkpoint.check.period' seconds to query the number of uncheckpointed transactions. dfs.namenode.checkpoint.max-retries 3 The SecondaryNameNode retries failed checkpointing. If the failure occurs while loading fsimage or replaying edits, the number of retries is limited by this variable. dfs.namenode.num.checkpoints.retained 2 The number of image checkpoint files (fsimage_*) that will be retained by the NameNode and Secondary NameNode in their storage directories. 
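Relating the checkpoint settings above, a checkpoint is taken when either the period or the transaction-count limit is reached, whichever comes first; a sketch showing the two defaults quoted above side by side, purely for illustration:

<property>
  <name>dfs.namenode.checkpoint.period</name>
  <value>3600</value>
</property>
<property>
  <name>dfs.namenode.checkpoint.txns</name>
  <value>1000000</value>
</property>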
All edit logs (stored on edits_* files) necessary to recover an up-to-date namespace from the oldest retained checkpoint will also be retained. dfs.namenode.num.extra.edits.retained 1000000 The number of extra transactions which should be retained beyond what is minimally necessary for a NN restart. It does not translate directly to file's age, or the number of files kept, but to the number of transactions (here "edits" means transactions). One edit file may contain several transactions (edits). During checkpoint, NameNode will identify the total number of edits to retain as extra by checking the latest checkpoint transaction value, subtracted by the value of this property. Then, it scans edits files to identify the older ones that don't include the computed range of retained transactions that are to be kept around, and purges them subsequently. The retainment can be useful for audit purposes or for an HA setup where a remote Standby Node may have been offline for some time and need to have a longer backlog of retained edits in order to start again. Typically each edit is on the order of a few hundred bytes, so the default of 1 million edits should be on the order of hundreds of MBs or low GBs. NOTE: Fewer extra edits may be retained than value specified for this setting if doing so would mean that more segments would be retained than the number configured by dfs.namenode.max.extra.edits.segments.retained. dfs.namenode.max.extra.edits.segments.retained 10000 The maximum number of extra edit log segments which should be retained beyond what is minimally necessary for a NN restart. When used in conjunction with dfs.namenode.num.extra.edits.retained, this configuration property serves to cap the number of extra edits files to a reasonable value. dfs.namenode.delegation.key.update-interval 86400000 The update interval for master key for delegation tokens in the namenode in milliseconds. dfs.namenode.delegation.token.max-lifetime 604800000 The maximum lifetime in milliseconds for which a delegation token is valid. dfs.namenode.delegation.token.renew-interval 86400000 The renewal interval for delegation token in milliseconds. dfs.datanode.failed.volumes.tolerated 0 The number of volumes that are allowed to fail before a datanode stops offering service. By default any volume failure will cause a datanode to shutdown. dfs.image.compress false Should the dfs image be compressed? dfs.image.compression.codec org.apache.hadoop.io.compress.DefaultCodec If the dfs image is compressed, how should they be compressed? This has to be a codec defined in io.compression.codecs. dfs.image.transfer.timeout 60000 Socket timeout for image transfer in milliseconds. This timeout and the related dfs.image.transfer.bandwidthPerSec parameter should be configured such that normal image transfer can complete successfully. This timeout prevents client hangs when the sender fails during image transfer. This is socket timeout during image transfer. dfs.image.transfer.bandwidthPerSec 0 Maximum bandwidth used for image transfer in bytes per second. This can help keep normal namenode operations responsive during checkpointing. The maximum bandwidth and timeout in dfs.image.transfer.timeout should be set such that normal image transfers can complete successfully. A default value of 0 indicates that throttling is disabled. dfs.image.transfer.chunksize 65536 Chunksize in bytes to upload the checkpoint. Chunked streaming is used to avoid internal buffering of contents of image file of huge size. 
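For the image-transfer knobs just described, throttling checkpoint uploads amounts to giving dfs.image.transfer.bandwidthPerSec a non-zero value; a sketch (10485760, roughly 10 MB/s, is an arbitrary example, and the timeout shown is the default from above):

<property>
  <name>dfs.image.transfer.bandwidthPerSec</name>
  <value>10485760</value>
  <!-- 0, the default, disables throttling -->
</property>
<property>
  <name>dfs.image.transfer.timeout</name>
  <value>60000</value>
</property>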
dfs.namenode.support.allow.format true Does HDFS namenode allow itself to be formatted? You may consider setting this to false for any production cluster, to avoid any possibility of formatting a running DFS. dfs.datanode.max.transfer.threads 4096 Specifies the maximum number of threads to use for transferring data in and out of the DN. dfs.datanode.scan.period.hours 504 If this is positive, the DataNode will not scan any individual block more than once in the specified scan period. If this is negative, the block scanner is disabled. If this is set to zero, then the default value of 504 hours or 3 weeks is used. Prior versions of HDFS incorrectly documented that setting this key to zero will disable the block scanner. dfs.block.scanner.volume.bytes.per.second 1048576 If this is 0, the DataNode's block scanner will be disabled. If this is positive, this is the number of bytes per second that the DataNode's block scanner will try to scan from each volume. dfs.datanode.readahead.bytes 4194304 While reading block files, if the Hadoop native libraries are available, the datanode can use the posix_fadvise system call to explicitly page data into the operating system buffer cache ahead of the current reader's position. This can improve performance especially when disks are highly contended. This configuration specifies the number of bytes ahead of the current read position which the datanode will attempt to read ahead. This feature may be disabled by configuring this property to 0. If the native libraries are not available, this configuration has no effect. dfs.datanode.drop.cache.behind.reads false In some workloads, the data read from HDFS is known to be significantly large enough that it is unlikely to be useful to cache it in the operating system buffer cache. In this case, the DataNode may be configured to automatically purge all data from the buffer cache after it is delivered to the client. This behavior is automatically disabled for workloads which read only short sections of a block (e.g HBase random-IO workloads). This may improve performance for some workloads by freeing buffer cache space usage for more cacheable data. If the Hadoop native libraries are not available, this configuration has no effect. dfs.datanode.drop.cache.behind.writes false In some workloads, the data written to HDFS is known to be significantly large enough that it is unlikely to be useful to cache it in the operating system buffer cache. In this case, the DataNode may be configured to automatically purge all data from the buffer cache after it is written to disk. This may improve performance for some workloads by freeing buffer cache space usage for more cacheable data. If the Hadoop native libraries are not available, this configuration has no effect. dfs.datanode.sync.behind.writes false If this configuration is enabled, the datanode will instruct the operating system to enqueue all written data to the disk immediately after it is written. This differs from the usual OS policy which may wait for up to 30 seconds before triggering writeback. This may improve performance for some workloads by smoothing the IO profile for data written to disk. If the Hadoop native libraries are not available, this configuration has no effect. dfs.client.failover.max.attempts 15 Expert only. The number of client failover attempts that should be made before the failover is considered failed. dfs.client.failover.sleep.base.millis 500 Expert only. 
The time to wait, in milliseconds, between failover attempts increases exponentially as a function of the number of attempts made so far, with a random factor of +/- 50%. This option specifies the base value used in the failover calculation. The first failover will retry immediately. The 2nd failover attempt will delay at least dfs.client.failover.sleep.base.millis milliseconds. And so on. dfs.client.failover.sleep.max.millis 15000 Expert only. The time to wait, in milliseconds, between failover attempts increases exponentially as a function of the number of attempts made so far, with a random factor of +/- 50%. This option specifies the maximum value to wait between failovers. Specifically, the time between two failover attempts will not exceed +/- 50% of dfs.client.failover.sleep.max.millis milliseconds. dfs.client.failover.connection.retries 0 Expert only. Indicates the number of retries a failover IPC client will make to establish a server connection. dfs.client.failover.connection.retries.on.timeouts 0 Expert only. The number of retry attempts a failover IPC client will make on socket timeout when establishing a server connection. dfs.client.datanode-restart.timeout 30 Expert only. The time to wait, in seconds, from reception of an datanode shutdown notification for quick restart, until declaring the datanode dead and invoking the normal recovery mechanisms. The notification is sent by a datanode when it is being shutdown using the shutdownDatanode admin command with the upgrade option. dfs.nameservices Comma-separated list of nameservices. dfs.nameservice.id The ID of this nameservice. If the nameservice ID is not configured or more than one nameservice is configured for dfs.nameservices it is determined automatically by matching the local node's address with the configured address. dfs.internal.nameservices Comma-separated list of nameservices that belong to this cluster. Datanode will report to all the nameservices in this list. By default this is set to the value of dfs.nameservices. dfs.ha.namenodes.EXAMPLENAMESERVICE The prefix for a given nameservice, contains a comma-separated list of namenodes for a given nameservice (eg EXAMPLENAMESERVICE). dfs.ha.namenode.id The ID of this namenode. If the namenode ID is not configured it is determined automatically by matching the local node's address with the configured address. dfs.ha.log-roll.period 120 How often, in seconds, the StandbyNode should ask the active to roll edit logs. Since the StandbyNode only reads from finalized log segments, the StandbyNode will only be as up-to-date as how often the logs are rolled. Note that failover triggers a log roll so the StandbyNode will be up to date before it becomes active. dfs.ha.tail-edits.period 60 How often, in seconds, the StandbyNode should check for new finalized log segments in the shared edits log. dfs.ha.automatic-failover.enabled false Whether automatic failover is enabled. See the HDFS High Availability documentation for details on automatic HA configuration. dfs.client.use.datanode.hostname false Whether clients should use datanode hostnames when connecting to datanodes. dfs.datanode.use.datanode.hostname false Whether datanodes should use datanode hostnames when connecting to other datanodes for data transfer. dfs.client.local.interfaces A comma separated list of network interface names to use for data transfer between the client and datanodes. 
When creating a connection to read from or write to a datanode, the client chooses one of the specified interfaces at random and binds its socket to the IP of that interface. Individual names may be specified as either an interface name (eg "eth0"), a subinterface name (eg "eth0:0"), or an IP address (which may be specified using CIDR notation to match a range of IPs). dfs.datanode.shared.file.descriptor.paths /dev/shm,/tmp A comma-separated list of paths to use when creating file descriptors that will be shared between the DataNode and the DFSClient. Typically we use /dev/shm, so that the file descriptors will not be written to disk. Systems that don't have /dev/shm will fall back to /tmp by default. dfs.short.circuit.shared.memory.watcher.interrupt.check.ms 60000 The length of time in milliseconds that the short-circuit shared memory watcher will go between checking for java interruptions sent from other threads. This is provided mainly for unit tests. dfs.namenode.kerberos.internal.spnego.principal ${dfs.web.authentication.kerberos.principal} dfs.secondary.namenode.kerberos.internal.spnego.principal ${dfs.web.authentication.kerberos.principal} dfs.namenode.kerberos.principal.pattern * A client-side RegEx that can be configured to control allowed realms to authenticate with (useful in cross-realm env.) dfs.namenode.avoid.read.stale.datanode false Indicate whether or not to avoid reading from "stale" datanodes whose heartbeat messages have not been received by the namenode for more than a specified time interval. Stale datanodes will be moved to the end of the node list returned for reading. See dfs.namenode.avoid.write.stale.datanode for a similar setting for writes. dfs.namenode.avoid.write.stale.datanode false Indicate whether or not to avoid writing to "stale" datanodes whose heartbeat messages have not been received by the namenode for more than a specified time interval. Writes will avoid using stale datanodes unless more than a configured ratio (dfs.namenode.write.stale.datanode.ratio) of datanodes are marked as stale. See dfs.namenode.avoid.read.stale.datanode for a similar setting for reads. dfs.namenode.stale.datanode.interval 30000 Default time interval for marking a datanode as "stale", i.e., if the namenode has not received heartbeat msg from a datanode for more than this time interval, the datanode will be marked and treated as "stale" by default. The stale interval cannot be too small since otherwise this may cause too frequent change of stale states. We thus set a minimum stale interval value (the default value is 3 times of heartbeat interval) and guarantee that the stale interval cannot be less than the minimum value. A stale data node is avoided during lease/block recovery. It can be conditionally avoided for reads (see dfs.namenode.avoid.read.stale.datanode) and for writes (see dfs.namenode.avoid.write.stale.datanode). dfs.namenode.write.stale.datanode.ratio 0.5f When the ratio of number stale datanodes to total datanodes marked is greater than this ratio, stop avoiding writing to stale nodes so as to prevent causing hotspots. dfs.namenode.invalidate.work.pct.per.iteration 0.32f *Note*: Advanced property. Change with caution. This determines the percentage amount of block invalidations (deletes) to do over a single DN heartbeat deletion command. The final deletion count is determined by applying this percentage to the number of live nodes in the system. 
The resultant number is the number of blocks from the deletion list chosen for proper invalidation over a single heartbeat of a single DN. Value should be a positive, non-zero percentage in float notation (X.Yf), with 1.0f meaning 100%. dfs.namenode.replication.work.multiplier.per.iteration 2 *Note*: Advanced property. Change with caution. This determines the total amount of block transfers to begin in parallel at a DN, for replication, when such a command list is being sent over a DN heartbeat by the NN. The actual number is obtained by multiplying this multiplier with the total number of live nodes in the cluster. The result number is the number of blocks to begin transfers immediately for, per DN heartbeat. This number can be any positive, non-zero integer. nfs.server.port 2049 Specify the port number used by Hadoop NFS. nfs.mountd.port 4242 Specify the port number used by Hadoop mount daemon. nfs.dump.dir /tmp/.hdfs-nfs This directory is used to temporarily save out-of-order writes before writing to HDFS. For each file, the out-of-order writes are dumped after they are accumulated to exceed certain threshold (e.g., 1MB) in memory. One needs to make sure the directory has enough space. nfs.rtmax 1048576 This is the maximum size in bytes of a READ request supported by the NFS gateway. If you change this, make sure you also update the nfs mount's rsize(add rsize= # of bytes to the mount directive). nfs.wtmax 1048576 This is the maximum size in bytes of a WRITE request supported by the NFS gateway. If you change this, make sure you also update the nfs mount's wsize(add wsize= # of bytes to the mount directive). nfs.keytab.file *Note*: Advanced property. Change with caution. This is the path to the keytab file for the hdfs-nfs gateway. This is required when the cluster is kerberized. nfs.kerberos.principal *Note*: Advanced property. Change with caution. This is the name of the kerberos principal. This is required when the cluster is kerberized.It must be of this format: nfs-gateway-user/nfs-gateway-host@kerberos-realm nfs.allow.insecure.ports true When set to false, client connections originating from unprivileged ports (those above 1023) will be rejected. This is to ensure that clients connecting to this NFS Gateway must have had root privilege on the machine where they're connecting from. dfs.webhdfs.enabled true Enable WebHDFS (REST API) in Namenodes and Datanodes. hadoop.fuse.connection.timeout 300 The minimum number of seconds that we'll cache libhdfs connection objects in fuse_dfs. Lower values will result in lower memory consumption; higher values may speed up access by avoiding the overhead of creating new connection objects. hadoop.fuse.timer.period 5 The number of seconds between cache expiry checks in fuse_dfs. Lower values will result in fuse_dfs noticing changes to Kerberos ticket caches more quickly. dfs.metrics.percentiles.intervals Comma-delimited set of integers denoting the desired rollover intervals (in seconds) for percentile latency metrics on the Namenode and Datanode. By default, percentile latency metrics are disabled. hadoop.user.group.metrics.percentiles.intervals A comma-separated list of the granularity in seconds for the metrics which describe the 50/75/90/95/99th percentile latency for group resolution in milliseconds. By default, percentile latency metrics are disabled. dfs.encrypt.data.transfer false Whether or not actual block data that is read/written from/to HDFS should be encrypted on the wire. 
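A minimal sketch of enabling this wire encryption (the value is the only change from the default false shown above):

<property>
  <name>dfs.encrypt.data.transfer</name>
  <value>true</value>
  <!-- set on the NameNode and DataNodes; clients deduce it automatically -->
</property>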
This only needs to be set on the NN and DNs, clients will deduce this automatically. It is possible to override this setting per connection by specifying custom logic via dfs.trustedchannel.resolver.class. dfs.encrypt.data.transfer.algorithm This value may be set to either "3des" or "rc4". If nothing is set, then the configured JCE default on the system is used (usually 3DES.) It is widely believed that 3DES is more cryptographically secure, but RC4 is substantially faster. Note that if AES is supported by both the client and server then this encryption algorithm will only be used to initially transfer keys for AES. (See dfs.encrypt.data.transfer.cipher.suites.) dfs.encrypt.data.transfer.cipher.suites This value may be either undefined or AES/CTR/NoPadding. If defined, then dfs.encrypt.data.transfer uses the specified cipher suite for data encryption. If not defined, then only the algorithm specified in dfs.encrypt.data.transfer.algorithm is used. By default, the property is not defined. dfs.encrypt.data.transfer.cipher.key.bitlength 128 The key bitlength negotiated by dfsclient and datanode for encryption. This value may be set to either 128, 192 or 256. dfs.trustedchannel.resolver.class TrustedChannelResolver is used to determine whether a channel is trusted for plain data transfer. The TrustedChannelResolver is invoked on both client and server side. If the resolver indicates that the channel is trusted, then the data transfer will not be encrypted even if dfs.encrypt.data.transfer is set to true. The default implementation returns false indicating that the channel is not trusted. dfs.data.transfer.protection A comma-separated list of SASL protection values used for secured connections to the DataNode when reading or writing block data. Possible values are authentication, integrity and privacy. authentication means authentication only and no integrity or privacy; integrity implies authentication and integrity are enabled; and privacy implies all of authentication, integrity and privacy are enabled. If dfs.encrypt.data.transfer is set to true, then it supersedes the setting for dfs.data.transfer.protection and enforces that all connections must use a specialized encrypted SASL handshake. This property is ignored for connections to a DataNode listening on a privileged port. In this case, it is assumed that the use of a privileged port establishes sufficient trust. dfs.data.transfer.saslproperties.resolver.class SaslPropertiesResolver used to resolve the QOP used for a connection to the DataNode when reading or writing block data. If not specified, the value of hadoop.security.saslproperties.resolver.class is used as the default value. dfs.datanode.hdfs-blocks-metadata.enabled false Boolean which enables backend datanode-side support for the experimental DistributedFileSystem#getFileVBlockStorageLocations API. dfs.client.file-block-storage-locations.num-threads 10 Number of threads used for making parallel RPCs in DistributedFileSystem#getFileBlockStorageLocations(). dfs.client.file-block-storage-locations.timeout.millis 1000 Timeout (in milliseconds) for the parallel RPCs made in DistributedFileSystem#getFileBlockStorageLocations(). dfs.journalnode.rpc-address 0.0.0.0:8485 The JournalNode RPC server address and port. dfs.journalnode.http-address 0.0.0.0:8480 The address and port the JournalNode HTTP server listens on. If the port is 0 then the server will start on a free port. dfs.journalnode.https-address 0.0.0.0:8481 The address and port the JournalNode HTTPS server listens on. 
If the port is 0 then the server will start on a free port. dfs.namenode.audit.loggers default List of classes implementing audit loggers that will receive audit events. These should be implementations of org.apache.hadoop.hdfs.server.namenode.AuditLogger. The special value "default" can be used to reference the default audit logger, which uses the configured log system. Installing custom audit loggers may affect the performance and stability of the NameNode. Refer to the custom logger's documentation for more details. dfs.datanode.available-space-volume-choosing-policy.balanced-space-threshold 10737418240 Only used when the dfs.datanode.fsdataset.volume.choosing.policy is set to org.apache.hadoop.hdfs.server.datanode.fsdataset.AvailableSpaceVolumeChoosingPolicy. This setting controls how much DN volumes are allowed to differ in terms of bytes of free disk space before they are considered imbalanced. If the free space of all the volumes are within this range of each other, the volumes will be considered balanced and block assignments will be done on a pure round robin basis. dfs.datanode.available-space-volume-choosing-policy.balanced-space-preference-fraction 0.75f Only used when the dfs.datanode.fsdataset.volume.choosing.policy is set to org.apache.hadoop.hdfs.server.datanode.fsdataset.AvailableSpaceVolumeChoosingPolicy. This setting controls what percentage of new block allocations will be sent to volumes with more available disk space than others. This setting should be in the range 0.0 - 1.0, though in practice 0.5 - 1.0, since there should be no reason to prefer that volumes with less available disk space receive more block allocations. dfs.namenode.edits.noeditlogchannelflush false Specifies whether to flush edit log file channel. When set, expensive FileChannel#force calls are skipped and synchronous disk writes are enabled instead by opening the edit log file with RandomAccessFile("rws") flags. This can significantly improve the performance of edit log writes on the Windows platform. Note that the behavior of the "rws" flags is platform and hardware specific and might not provide the same level of guarantees as FileChannel#force. For example, the write will skip the disk-cache on SAS and SCSI devices while it might not on SATA devices. This is an expert level setting, change with caution. dfs.client.cache.drop.behind.writes Just like dfs.datanode.drop.cache.behind.writes, this setting causes the page cache to be dropped behind HDFS writes, potentially freeing up more memory for other uses. Unlike dfs.datanode.drop.cache.behind.writes, this is a client-side setting rather than a setting for the entire datanode. If present, this setting will override the DataNode default. If the native libraries are not available to the DataNode, this configuration has no effect. dfs.client.cache.drop.behind.reads Just like dfs.datanode.drop.cache.behind.reads, this setting causes the page cache to be dropped behind HDFS reads, potentially freeing up more memory for other uses. Unlike dfs.datanode.drop.cache.behind.reads, this is a client-side setting rather than a setting for the entire datanode. If present, this setting will override the DataNode default. If the native libraries are not available to the DataNode, this configuration has no effect. dfs.client.cache.readahead When using remote reads, this setting causes the datanode to read ahead in the block file using posix_fadvise, potentially decreasing I/O wait times. 
Unlike dfs.datanode.readahead.bytes, this is a client-side setting rather than a setting for the entire datanode. If present, this setting will override the DataNode default. When using local reads, this setting determines how much readahead we do in BlockReaderLocal. If the native libraries are not available to the DataNode, this configuration has no effect. dfs.namenode.enable.retrycache true This enables the retry cache on the namenode. Namenode tracks for non-idempotent requests the corresponding response. If a client retries the request, the response from the retry cache is sent. Such operations are tagged with annotation @AtMostOnce in namenode protocols. It is recommended that this flag be set to true. Setting it to false, will result in clients getting failure responses to retried request. This flag must be enabled in HA setup for transparent fail-overs. The entries in the cache have expiration time configurable using dfs.namenode.retrycache.expirytime.millis. dfs.namenode.retrycache.expirytime.millis 600000 The time for which retry cache entries are retained. dfs.namenode.retrycache.heap.percent 0.03f This parameter configures the heap size allocated for retry cache (excluding the response cached). This corresponds to approximately 4096 entries for every 64MB of namenode process java heap size. Assuming retry cache entry expiration time (configured using dfs.namenode.retrycache.expirytime.millis) of 10 minutes, this enables retry cache to support 7 operations per second sustained for 10 minutes. As the heap size is increased, the operation rate linearly increases. dfs.client.mmap.enabled true If this is set to false, the client won't attempt to perform memory-mapped reads. dfs.client.mmap.cache.size 256 When zero-copy reads are used, the DFSClient keeps a cache of recently used memory mapped regions. This parameter controls the maximum number of entries that we will keep in that cache. The larger this number is, the more file descriptors we will potentially use for memory-mapped files. mmaped files also use virtual address space. You may need to increase your ulimit virtual address space limits before increasing the client mmap cache size. Note that you can still do zero-copy reads when this size is set to 0. dfs.client.mmap.cache.timeout.ms 3600000 The minimum length of time that we will keep an mmap entry in the cache between uses. If an entry is in the cache longer than this, and nobody uses it, it will be removed by a background thread. dfs.client.mmap.retry.timeout.ms 300000 The minimum amount of time that we will wait before retrying a failed mmap operation. dfs.client.short.circuit.replica.stale.threshold.ms 1800000 The maximum amount of time that we will consider a short-circuit replica to be valid, if there is no communication from the DataNode. After this time has elapsed, we will re-fetch the short-circuit replica even if it is in the cache. dfs.namenode.path.based.cache.block.map.allocation.percent 0.25 The percentage of the Java heap which we will allocate to the cached blocks map. The cached blocks map is a hash map which uses chained hashing. Smaller maps may be accessed more slowly if the number of cached blocks is large; larger maps will consume more memory. dfs.datanode.max.locked.memory 0 The amount of memory in bytes to use for caching of block replicas in memory on the datanode. The datanode's maximum locked memory soft ulimit (RLIMIT_MEMLOCK) must be set to at least this value, else the datanode will abort on startup. 
By default, this parameter is set to 0, which disables in-memory caching. If the native libraries are not available to the DataNode, this configuration has no effect. dfs.namenode.list.cache.directives.num.responses 100 This value controls the number of cache directives that the NameNode will send over the wire in response to a listDirectives RPC. dfs.namenode.list.cache.pools.num.responses 100 This value controls the number of cache pools that the NameNode will send over the wire in response to a listPools RPC. dfs.namenode.path.based.cache.refresh.interval.ms 30000 The amount of milliseconds between subsequent path cache rescans. Path cache rescans are when we calculate which blocks should be cached, and on what datanodes. By default, this parameter is set to 30 seconds. dfs.namenode.path.based.cache.retry.interval.ms 30000 When the NameNode needs to uncache something that is cached, or cache something that is not cached, it must direct the DataNodes to do so by sending a DNA_CACHE or DNA_UNCACHE command in response to a DataNode heartbeat. This parameter controls how frequently the NameNode will resend these commands. dfs.datanode.fsdatasetcache.max.threads.per.volume 4 The maximum number of threads per volume to use for caching new data on the datanode. These threads consume both I/O and CPU. This can affect normal datanode operations. dfs.cachereport.intervalMsec 10000 Determines cache reporting interval in milliseconds. After this amount of time, the DataNode sends a full report of its cache state to the NameNode. The NameNode uses the cache report to update its map of cached blocks to DataNode locations. This configuration has no effect if in-memory caching has been disabled by setting dfs.datanode.max.locked.memory to 0 (which is the default). If the native libraries are not available to the DataNode, this configuration has no effect. dfs.namenode.edit.log.autoroll.multiplier.threshold 2.0 Determines when an active namenode will roll its own edit log. The actual threshold (in number of edits) is determined by multiplying this value by dfs.namenode.checkpoint.txns. This prevents extremely large edit files from accumulating on the active namenode, which can cause timeouts during namenode startup and pose an administrative hassle. This behavior is intended as a failsafe for when the standby or secondary namenode fail to roll the edit log by the normal checkpoint threshold. dfs.namenode.edit.log.autoroll.check.interval.ms 300000 How often an active namenode will check if it needs to roll its edit log, in milliseconds. dfs.webhdfs.user.provider.user.pattern ^[A-Za-z_][A-Za-z0-9._-]*[$]?$ Valid pattern for user and group names for webhdfs, it must be a valid java regex. dfs.client.context default The name of the DFSClient context that we should use. Clients that share a context share a socket cache and short-circuit cache, among other things. You should only change this if you don't want to share with another set of threads. dfs.client.read.shortcircuit false This configuration parameter turns on short-circuit local reads. dfs.domain.socket.path Optional. This is a path to a UNIX domain socket that will be used for communication between the DataNode and local HDFS clients. If the string "_PORT" is present in this path, it will be replaced by the TCP port of the DataNode. dfs.client.read.shortcircuit.skip.checksum false If this configuration parameter is set, short-circuit local reads will skip checksums. This is normally not recommended, but it may be useful for special setups. 
You might consider using this if you are doing your own checksumming outside of HDFS. dfs.client.read.shortcircuit.streams.cache.size 256 The DFSClient maintains a cache of recently opened file descriptors. This parameter controls the size of that cache. Setting this higher will use more file descriptors, but potentially provide better performance on workloads involving lots of seeks. dfs.client.read.shortcircuit.streams.cache.expiry.ms 300000 This controls the minimum amount of time file descriptors need to sit in the client cache context before they can be closed for being inactive for too long. dfs.datanode.shared.file.descriptor.paths /dev/shm,/tmp Comma separated paths to the directory on which shared memory segments are created. The client and the DataNode exchange information via this shared memory segment. It tries paths in order until creation of shared memory segment succeeds. dfs.client.use.legacy.blockreader.local false Legacy short-circuit reader implementation based on HDFS-2246 is used if this configuration parameter is true. This is for the platforms other than Linux where the new implementation based on HDFS-347 is not available. dfs.block.local-path-access.user Comma separated list of the users allowd to open block files on legacy short-circuit local read. dfs.client.domain.socket.data.traffic false This control whether we will try to pass normal data traffic over UNIX domain socket rather than over TCP socket on node-local data transfer. This is currently experimental and turned off by default. dfs.namenode.reject-unresolved-dn-topology-mapping false If the value is set to true, then namenode will reject datanode registration if the topology mapping for a datanode is not resolved and NULL is returned (script defined by net.topology.script.file.name fails to execute). Otherwise, datanode will be registered and the default rack will be assigned as the topology path. Topology paths are important for data resiliency, since they define fault domains. Thus it may be unwanted behavior to allow datanode registration with the default rack if the resolving topology failed. dfs.client.slow.io.warning.threshold.ms 30000 The threshold in milliseconds at which we will log a slow io warning in a dfsclient. By default, this parameter is set to 30000 milliseconds (30 seconds). dfs.datanode.slow.io.warning.threshold.ms 300 The threshold in milliseconds at which we will log a slow io warning in a datanode. By default, this parameter is set to 300 milliseconds. dfs.namenode.xattrs.enabled true Whether support for extended attributes is enabled on the NameNode. dfs.namenode.fs-limits.max-xattrs-per-inode 32 Maximum number of extended attributes per inode. dfs.namenode.fs-limits.max-xattr-size 16384 The maximum combined size of the name and value of an extended attribute in bytes. dfs.namenode.startup.delay.block.deletion.sec 0 The delay in seconds at which we will pause the blocks deletion after Namenode startup. By default it's disabled. In the case a directory has large number of directories and files are deleted, suggested delay is one hour to give the administrator enough time to notice large number of pending deletion blocks and take corrective action. dfs.namenode.list.encryption.zones.num.responses 100 When listing encryption zones, the maximum number of zones that will be returned in a batch. Fetching the list incrementally in batches improves namenode performance. 
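Pulling together the short-circuit read settings described above, a configuration typically sets both the feature flag and a UNIX domain socket path; a sketch (the socket path is an assumption, any location the DataNode can create works):

<property>
  <name>dfs.client.read.shortcircuit</name>
  <value>true</value>
</property>
<property>
  <name>dfs.domain.socket.path</name>
  <value>/var/lib/hadoop-hdfs/dn_socket</value>
  <!-- shared between the DataNode and local HDFS clients -->
</property>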
dfs.namenode.inotify.max.events.per.rpc 1000 Maximum number of events that will be sent to an inotify client in a single RPC response. The default value attempts to amortize away the overhead for this RPC while avoiding huge memory requirements for the client and NameNode (1000 events should consume no more than 1 MB.) dfs.user.home.dir.prefix /user The directory to prepend to user name to get the user's home direcotry. dfs.datanode.cache.revocation.timeout.ms 900000 When the DFSClient reads from a block file which the DataNode is caching, the DFSClient can skip verifying checksums. The DataNode will keep the block file in cache until the client is done. If the client takes an unusually long time, though, the DataNode may need to evict the block file from the cache anyway. This value controls how long the DataNode will wait for the client to release a replica that it is reading without checksums. dfs.datanode.cache.revocation.polling.ms 500 How often the DataNode should poll to see if the clients have stopped using a replica that the DataNode wants to uncache. dfs.datanode.block.id.layout.upgrade.threads 12 The number of threads to use when creating hard links from current to previous blocks during upgrade of a DataNode to block ID-based block layout (see HDFS-6482 for details on the layout). dfs.encryption.key.provider.uri The KeyProvider to use when interacting with encryption keys used when reading and writing to an encryption zone. dfs.storage.policy.enabled true Allow users to change the storage policy on files and directories. dfs.namenode.legacy-oiv-image.dir Determines where to save the namespace in the old fsimage format during checkpointing by standby NameNode or SecondaryNameNode. Users can dump the contents of the old format fsimage by oiv_legacy command. If the value is not specified, old format fsimage will not be saved in checkpoint. dfs.namenode.top.enabled true Enable nntop: reporting top users on namenode dfs.namenode.top.window.num.buckets 10 Number of buckets in the rolling window implementation of nntop dfs.namenode.top.num.users 10 Number of top users returned by the top tool dfs.namenode.top.windows.minutes 1,5,25 comma separated list of nntop reporting periods in minutes dfs.namenode.blocks.per.postponedblocks.rescan 10000 Number of blocks to rescan for each iteration of postponedMisreplicatedBlocks. dfs.datanode.block-pinning.enabled false Whether pin blocks on favored DataNode. ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1696419316.0 sahara-plugin-vanilla-10.0.0/sahara_plugin_vanilla/plugins/vanilla/v2_7_1/resources/hive-default.xml0000664000175000017500000022237500000000000033545 0ustar00zuulzuul00000000000000 mapred.reduce.tasks -1 The default number of reduce tasks per job. Typically set to a prime close to the number of available hosts. Ignored when mapred.job.tracker is "local". Hadoop set this to 1 by default, whereas hive uses -1 as its default value. By setting this property to -1, Hive will automatically figure out what should be the number of reducers. hive.exec.reducers.bytes.per.reducer 1000000000 size per reducer.The default is 1G, i.e if the input size is 10G, it will use 10 reducers. hive.exec.reducers.max 999 max number of reducers will be used. If the one specified in the configuration parameter mapred.reduce.tasks is negative, hive will use this one as the max number of reducers when automatically determine number of reducers. 
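The same name/value convention applies to the Hive settings that follow; as a sketch, lowering hive.exec.reducers.bytes.per.reducer (256000000 here is an arbitrary example) makes Hive choose more reducers when mapred.reduce.tasks is left at -1:

<property>
  <name>hive.exec.reducers.bytes.per.reducer</name>
  <value>256000000</value>
  <!-- default is 1000000000, i.e. roughly 1G of input per reducer -->
</property>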
hive.cli.print.header false Whether to print the names of the columns in query output. hive.cli.print.current.db false Whether to include the current database in the hive prompt. hive.cli.prompt hive Command line prompt configuration value. Other hiveconf can be used in this configuration value. Variable substitution will only be invoked at the hive cli startup. hive.cli.pretty.output.num.cols -1 The number of columns to use when formatting output generated by the DESCRIBE PRETTY table_name command. If the value of this property is -1, then hive will use the auto-detected terminal width. hive.exec.scratchdir /tmp/hive-${user.name} Scratch space for Hive jobs hive.exec.local.scratchdir /tmp/${user.name} Local scratch space for Hive jobs hive.test.mode false whether hive is running in test mode. If yes, it turns on sampling and prefixes the output tablename hive.test.mode.prefix test_ if hive is running in test mode, prefixes the output table by this string hive.test.mode.samplefreq 32 if hive is running in test mode and table is not bucketed, sampling frequency hive.test.mode.nosamplelist if hive is running in test mode, dont sample the above comma seperated list of tables hive.metastore.uris Thrift uri for the remote metastore. Used by metastore client to connect to remote metastore. javax.jdo.option.ConnectionURL jdbc:derby:;databaseName=metastore_db;create=true JDBC connect string for a JDBC metastore javax.jdo.option.ConnectionDriverName org.apache.derby.jdbc.EmbeddedDriver Driver class name for a JDBC metastore javax.jdo.PersistenceManagerFactoryClass org.datanucleus.jdo.JDOPersistenceManagerFactory class implementing the jdo persistence javax.jdo.option.DetachAllOnCommit true detaches all objects from session so that they can be used after transaction is committed javax.jdo.option.NonTransactionalRead true reads outside of transactions javax.jdo.option.ConnectionUserName APP username to use against metastore database javax.jdo.option.ConnectionPassword mine password to use against metastore database javax.jdo.option.Multithreaded true Set this to true if multiple threads access metastore through JDO concurrently. datanucleus.connectionPoolingType DBCP Uses a DBCP connection pool for JDBC metastore datanucleus.validateTables false validates existing schema against code. turn this on if you want to verify existing schema datanucleus.validateColumns false validates existing schema against code. turn this on if you want to verify existing schema datanucleus.validateConstraints false validates existing schema against code. turn this on if you want to verify existing schema datanucleus.storeManagerType rdbms metadata store type datanucleus.autoCreateSchema true creates necessary schema on a startup if one doesn't exist. set this to false, after creating it once datanucleus.autoStartMechanismMode checked throw exception if metadata tables are incorrect datanucleus.transactionIsolation read-committed Default transaction isolation level for identity generation. datanucleus.cache.level2 false Use a level 2 cache. Turn this off if metadata is changed independently of hive metastore server datanucleus.cache.level2.type SOFT SOFT=soft reference based cache, WEAK=weak reference based cache. datanucleus.identifierFactory datanucleus Name of the identifier factory to use when generating table/column names etc. 
'datanucleus' is used for backward compatibility datanucleus.plugin.pluginRegistryBundleCheck LOG Defines what happens when plugin bundles are found and are duplicated [EXCEPTION|LOG|NONE] hive.metastore.warehouse.dir /user/hive/warehouse location of default database for the warehouse hive.metastore.execute.setugi false In unsecure mode, setting this property to true will cause the metastore to execute DFS operations using the client's reported user and group permissions. Note that this property must be set on both the client and server sides. Further note that its best effort. If client sets its to true and server sets it to false, client setting will be ignored. hive.metastore.event.listeners list of comma seperated listeners for metastore events. hive.metastore.partition.inherit.table.properties list of comma seperated keys occurring in table properties which will get inherited to newly created partitions. * implies all the keys will get inherited. hive.metadata.export.location When used in conjunction with the org.apache.hadoop.hive.ql.parse.MetaDataExportListener pre event listener, it is the location to which the metadata will be exported. The default is an empty string, which results in the metadata being exported to the current user's home directory on HDFS. hive.metadata.move.exported.metadata.to.trash When used in conjunction with the org.apache.hadoop.hive.ql.parse.MetaDataExportListener pre event listener, this setting determines if the metadata that is exported will subsequently be moved to the user's trash directory alongside the dropped table data. This ensures that the metadata will be cleaned up along with the dropped table data. hive.metastore.partition.name.whitelist.pattern Partition names will be checked against this regex pattern and rejected if not matched. hive.metastore.end.function.listeners list of comma separated listeners for the end of metastore functions. hive.metastore.event.expiry.duration 0 Duration after which events expire from events table (in seconds) hive.metastore.event.clean.freq 0 Frequency at which timer task runs to purge expired events in metastore(in seconds). hive.metastore.connect.retries 5 Number of retries while opening a connection to metastore hive.metastore.failure.retries 3 Number of retries upon failure of Thrift metastore calls hive.metastore.client.connect.retry.delay 1 Number of seconds for the client to wait between consecutive connection attempts hive.metastore.client.socket.timeout 20 MetaStore Client socket timeout in seconds hive.metastore.rawstore.impl org.apache.hadoop.hive.metastore.ObjectStore Name of the class that implements org.apache.hadoop.hive.metastore.rawstore interface. This class is used to store and retrieval of raw metadata objects such as table, database hive.metastore.batch.retrieve.max 300 Maximum number of objects (tables/partitions) can be retrieved from metastore in one batch. The higher the number, the less the number of round trips is needed to the Hive metastore server, but it may also cause higher memory requirement at the client side. hive.metastore.batch.retrieve.table.partition.max 1000 Maximum number of table partitions that metastore internally retrieves in one batch. hive.default.fileformat TextFile Default file format for CREATE TABLE statement. Options are TextFile and SequenceFile. Users can explicitly say CREATE TABLE ... 
STORED AS <TEXTFILE|SEQUENCEFILE> to override hive.fileformat.check true Whether to check file format or not when loading data files hive.map.aggr true Whether to use map-side aggregation in Hive Group By queries hive.groupby.skewindata false Whether there is skew in data to optimize group by queries hive.optimize.multigroupby.common.distincts true Whether to optimize a multi-groupby query with the same distinct. Consider a query like: from src insert overwrite table dest1 select col1, count(distinct colx) group by col1 insert overwrite table dest2 select col2, count(distinct colx) group by col2; With this parameter set to true, first we spray by the distinct value (colx), and then perform the 2 groups bys. This makes sense if map-side aggregation is turned off. However, with maps-side aggregation, it might be useful in some cases to treat the 2 inserts independently, thereby performing the query above in 2MR jobs instead of 3 (due to spraying by distinct key first). If this parameter is turned off, we dont consider the fact that the distinct key is the same across different MR jobs. hive.groupby.mapaggr.checkinterval 100000 Number of rows after which size of the grouping keys/aggregation classes is performed hive.mapred.local.mem 0 For local mode, memory of the mappers/reducers hive.mapjoin.followby.map.aggr.hash.percentmemory 0.3 Portion of total memory to be used by map-side grup aggregation hash table, when this group by is followed by map join hive.map.aggr.hash.force.flush.memory.threshold 0.9 The max memory to be used by map-side grup aggregation hash table, if the memory usage is higher than this number, force to flush data hive.map.aggr.hash.percentmemory 0.5 Portion of total memory to be used by map-side grup aggregation hash table hive.map.aggr.hash.min.reduction 0.5 Hash aggregation will be turned off if the ratio between hash table size and input rows is bigger than this number. Set to 1 to make sure hash aggregation is never turned off. hive.optimize.cp true Whether to enable column pruner hive.optimize.index.filter false Whether to enable automatic use of indexes hive.optimize.index.groupby false Whether to enable optimization of group-by queries using Aggregate indexes. hive.optimize.ppd true Whether to enable predicate pushdown hive.optimize.ppd.storage true Whether to push predicates down into storage handlers. Ignored when hive.optimize.ppd is false. hive.ppd.recognizetransivity true Whether to transitively replicate predicate filters over equijoin conditions. hive.optimize.groupby true Whether to enable the bucketed group by from bucketed partitions/tables. hive.optimize.skewjoin.compiletime false Whether to create a separate plan for skewed keys for the tables in the join. This is based on the skewed keys stored in the metadata. At compile time, the plan is broken into different joins: one for the skewed keys, and the other for the remaining keys. And then, a union is performed for the 2 joins generated above. So unless the same skewed key is present in both the joined tables, the join for the skewed key will be performed as a map-side join. The main difference between this paramater and hive.optimize.skewjoin is that this parameter uses the skew information stored in the metastore to optimize the plan at compile time itself. If there is no skew information in the metadata, this parameter will not have any affect. Both hive.optimize.skewjoin.compiletime and hive.optimize.skewjoin should be set to true. 
Ideally, hive.optimize.skewjoin should be renamed as hive.optimize.skewjoin.runtime, but not doing so for backward compatibility. If the skew information is correctly stored in the metadata, hive.optimize.skewjoin.compiletime would change the query plan to take care of it, and hive.optimize.skewjoin will be a no-op. hive.optimize.union.remove false Whether to remove the union and push the operators between union and the filesink above union. This avoids an extra scan of the output by union. This is independently useful for union queries, and specially useful when hive.optimize.skewjoin.compiletime is set to true, since an extra union is inserted. The merge is triggered if either of hive.merge.mapfiles or hive.merge.mapredfiles is set to true. If the user has set hive.merge.mapfiles to true and hive.merge.mapredfiles to false, the idea was the number of reducers are few, so the number of files anyway are small. However, with this optimization, we are increasing the number of files possibly by a big margin. So, we merge aggresively. hive.mapred.supports.subdirectories false Whether the version of hadoop which is running supports sub-directories for tables/partitions. Many hive optimizations can be applied if the hadoop version supports sub-directories for tables/partitions. It was added by MAPREDUCE-1501 hive.multigroupby.singlemr false Whether to optimize multi group by query to generate single M/R job plan. If the multi group by query has common group by keys, it will be optimized to generate single M/R job. hive.map.groupby.sorted false If the bucketing/sorting properties of the table exactly match the grouping key, whether to perform the group by in the mapper by using BucketizedHiveInputFormat. The only downside to this is that it limits the number of mappers to the number of files. hive.map.groupby.sorted.testmode false If the bucketing/sorting properties of the table exactly match the grouping key, whether to perform the group by in the mapper by using BucketizedHiveInputFormat. If the test mode is set, the plan is not converted, but a query property is set to denote the same. hive.new.job.grouping.set.cardinality 30 Whether a new map-reduce job should be launched for grouping sets/rollups/cubes. For a query like: select a, b, c, count(1) from T group by a, b, c with rollup; 4 rows are created per row: (a, b, c), (a, b, null), (a, null, null), (null, null, null). This can lead to explosion across map-reduce boundary if the cardinality of T is very high, and map-side aggregation does not do a very good job. This parameter decides if hive should add an additional map-reduce job. If the grouping set cardinality (4 in the example above), is more than this value, a new MR job is added under the assumption that the orginal group by will reduce the data size. hive.join.emit.interval 1000 How many rows in the right-most join operand Hive should buffer before emitting the join result. hive.join.cache.size 25000 How many rows in the joining tables (except the streaming table) should be cached in memory. hive.mapjoin.bucket.cache.size 100 How many values in each keys in the map-joined table should be cached in memory. hive.mapjoin.cache.numrows 25000 How many rows should be cached by jdbm for map join. hive.optimize.skewjoin false Whether to enable skew join optimization. The algorithm is as follows: At runtime, detect the keys with a large skew. Instead of processing those keys, store them temporarily in a hdfs directory. In a follow-up map-reduce job, process those skewed keys. 
The same key need not be skewed for all the tables, and so, the follow-up map-reduce job (for the skewed keys) would be much faster, since it would be a map-join. hive.skewjoin.key 100000 Determine if we get a skew key in join. If we see more than the specified number of rows with the same key in join operator, we think the key as a skew join key. hive.skewjoin.mapjoin.map.tasks 10000 Determine the number of map task used in the follow up map join job for a skew join. It should be used together with hive.skewjoin.mapjoin.min.split to perform a fine grained control. hive.skewjoin.mapjoin.min.split 33554432 Determine the number of map task at most used in the follow up map join job for a skew join by specifying the minimum split size. It should be used together with hive.skewjoin.mapjoin.map.tasks to perform a fine grained control. hive.mapred.mode nonstrict The mode in which the hive operations are being performed. In strict mode, some risky queries are not allowed to run. They include: Cartesian Product. No partition being picked up for a query. Comparing bigints and strings. Comparing bigints and doubles. Orderby without limit. hive.enforce.bucketmapjoin false If the user asked for bucketed map-side join, and it cannot be performed, should the query fail or not ? For eg, if the buckets in the tables being joined are not a multiple of each other, bucketed map-side join cannot be performed, and the query will fail if hive.enforce.bucketmapjoin is set to true. hive.exec.script.maxerrsize 100000 Maximum number of bytes a script is allowed to emit to standard error (per map-reduce task). This prevents runaway scripts from filling logs partitions to capacity hive.exec.script.allow.partial.consumption false When enabled, this option allows a user script to exit successfully without consuming all the data from the standard input. hive.script.operator.id.env.var HIVE_SCRIPT_OPERATOR_ID Name of the environment variable that holds the unique script operator ID in the user's transform function (the custom mapper/reducer that the user has specified in the query) hive.script.operator.truncate.env false Truncate each environment variable for external script in scripts operator to 20KB (to fit system limits) hive.exec.compress.output false This controls whether the final outputs of a query (to a local/hdfs file or a hive table) is compressed. The compression codec and other options are determined from hadoop config variables mapred.output.compress* hive.exec.compress.intermediate false This controls whether intermediate files produced by hive between multiple map-reduce jobs are compressed. The compression codec and other options are determined from hadoop config variables mapred.output.compress* hive.exec.parallel false Whether to execute jobs in parallel hive.exec.parallel.thread.number 8 How many jobs at most can be executed in parallel hive.exec.rowoffset false Whether to provide the row offset virtual column hive.task.progress false Whether Hive should periodically update task progress counters during execution. Enabling this allows task progress to be monitored more closely in the job tracker, but may impose a performance penalty. This flag is automatically set to true for jobs with hive.exec.dynamic.partition set to true. hive.hwi.war.file lib/hive-hwi-0.11.0.war This sets the path to the HWI war file, relative to ${HIVE_HOME}. 
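These execution-related entries (hive.exec.parallel, hive.exec.parallel.thread.number, hive.exec.compress.output) are ordinary Hadoop-style configuration properties. As a minimal sketch only, not a recommendation, an override in hive-site.xml could look like this:
<configuration>
  <!-- run independent stages of a query in parallel (default is false, see above) -->
  <property>
    <name>hive.exec.parallel</name>
    <value>true</value>
  </property>
  <!-- upper bound on parallel jobs; 8 is the default quoted above -->
  <property>
    <name>hive.exec.parallel.thread.number</name>
    <value>8</value>
  </property>
  <!-- compress final query output using the codec taken from the mapred.output.compress* settings -->
  <property>
    <name>hive.exec.compress.output</name>
    <value>true</value>
  </property>
</configuration>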
hive.hwi.listen.host 0.0.0.0 This is the host address the Hive Web Interface will listen on hive.hwi.listen.port 9999 This is the port the Hive Web Interface will listen on hive.exec.pre.hooks Comma-separated list of pre-execution hooks to be invoked for each statement. A pre-execution hook is specified as the name of a Java class which implements the org.apache.hadoop.hive.ql.hooks.ExecuteWithHookContext interface. hive.exec.post.hooks Comma-separated list of post-execution hooks to be invoked for each statement. A post-execution hook is specified as the name of a Java class which implements the org.apache.hadoop.hive.ql.hooks.ExecuteWithHookContext interface. hive.exec.failure.hooks Comma-separated list of on-failure hooks to be invoked for each statement. An on-failure hook is specified as the name of Java class which implements the org.apache.hadoop.hive.ql.hooks.ExecuteWithHookContext interface. hive.metastore.init.hooks A comma separated list of hooks to be invoked at the beginning of HMSHandler initialization. Aninit hook is specified as the name of Java class which extends org.apache.hadoop.hive.metastore.MetaStoreInitListener. hive.client.stats.publishers Comma-separated list of statistics publishers to be invoked on counters on each job. A client stats publisher is specified as the name of a Java class which implements the org.apache.hadoop.hive.ql.stats.ClientStatsPublisher interface. hive.client.stats.counters Subset of counters that should be of interest for hive.client.stats.publishers (when one wants to limit their publishing). Non-display names should be used hive.merge.mapfiles true Merge small files at the end of a map-only job hive.merge.mapredfiles false Merge small files at the end of a map-reduce job hive.heartbeat.interval 1000 Send a heartbeat after this interval - used by mapjoin and filter operators hive.merge.size.per.task 256000000 Size of merged files at the end of the job hive.merge.smallfiles.avgsize 16000000 When the average output file size of a job is less than this number, Hive will start an additional map-reduce job to merge the output files into bigger files. This is only done for map-only jobs if hive.merge.mapfiles is true, and for map-reduce jobs if hive.merge.mapredfiles is true. hive.mapjoin.smalltable.filesize 25000000 The threshold for the input file size of the small tables; if the file size is smaller than this threshold, it will try to convert the common join into map join hive.ignore.mapjoin.hint true Ignore the mapjoin hint hive.mapjoin.localtask.max.memory.usage 0.90 This number means how much memory the local task can take to hold the key/value into in-memory hash table; If the local task's memory usage is more than this number, the local task will be abort by themself. It means the data of small table is too large to be hold in the memory. hive.mapjoin.followby.gby.localtask.max.memory.usage 0.55 This number means how much memory the local task can take to hold the key/value into in-memory hash table when this map join followed by a group by; If the local task's memory usage is more than this number, the local task will be abort by themself. It means the data of small table is too large to be hold in the memory. 
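The small-file merge thresholds above combine as follows; this sketch simply reuses the default figures quoted in the descriptions (256000000 and 16000000 bytes) and is shown only to illustrate how the properties relate:
<configuration>
  <!-- also merge small files after map-reduce jobs, not only map-only jobs -->
  <property>
    <name>hive.merge.mapredfiles</name>
    <value>true</value>
  </property>
  <!-- target size of merged files at the end of the job -->
  <property>
    <name>hive.merge.size.per.task</name>
    <value>256000000</value>
  </property>
  <!-- trigger the extra merge job when the average output file is below this size -->
  <property>
    <name>hive.merge.smallfiles.avgsize</name>
    <value>16000000</value>
  </property>
</configuration>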
hive.mapjoin.check.memory.rows 100000 The number of rows to process before checking the memory usage. hive.auto.convert.join false Whether Hive enables the optimization of converting a common join into a mapjoin based on the input file size hive.auto.convert.join.noconditionaltask true Whether Hive enables the optimization of converting a common join into a mapjoin based on the input file size. If this parameter is on, and the sum of size for n-1 of the tables/partitions for a n-way join is smaller than the specified size, the join is directly converted to a mapjoin (there is no conditional task). hive.auto.convert.join.noconditionaltask.size 10000000 If hive.auto.convert.join.noconditionaltask is off, this parameter does not take effect. However, if it is on, and the sum of size for n-1 of the tables/partitions for a n-way join is smaller than this size, the join is directly converted to a mapjoin (there is no conditional task). The default is 10MB hive.optimize.mapjoin.mapreduce false If hive.auto.convert.join is off, this parameter does not take effect. If it is on, and if there are map-join jobs followed by a map-reduce job (for example a group by), each map-only job is merged with the following map-reduce job. hive.script.auto.progress false Whether the Hive Transform/Map/Reduce clause should automatically send progress information to TaskTracker to avoid the task getting killed because of inactivity. Hive sends progress information when the script is outputting to stderr. This option removes the need to periodically produce stderr messages, but users should be cautious because this may prevent infinite loops in the scripts from being killed by TaskTracker. hive.script.serde org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe The default serde for transmitting input data to and reading output data from the user scripts. hive.binary.record.max.length 1000 Read from a binary stream and treat each hive.binary.record.max.length bytes as a record. The last record before the end of stream can have less than hive.binary.record.max.length bytes. hive.script.recordreader org.apache.hadoop.hive.ql.exec.TextRecordReader The default record reader for reading data from the user scripts. hive.script.recordwriter org.apache.hadoop.hive.ql.exec.TextRecordWriter The default record writer for writing data to the user scripts. hive.input.format org.apache.hadoop.hive.ql.io.CombineHiveInputFormat The default input format. Set this to HiveInputFormat if you encounter problems with CombineHiveInputFormat. hive.udtf.auto.progress false Whether Hive should automatically send progress information to TaskTracker when using UDTFs to prevent the task getting killed because of inactivity. Users should be cautious because this may prevent TaskTracker from killing tasks with infinite loops. hive.mapred.reduce.tasks.speculative.execution true Whether speculative execution for reducers should be turned on. hive.exec.counters.pull.interval 1000 The interval with which to poll the JobTracker for the counters of the running job. The smaller it is, the more load there will be on the jobtracker; the higher it is, the less granular the collected counter data will be. hive.querylog.location /tmp/${user.name} Location of Hive run time structured log file hive.querylog.enable.plan.progress true Whether to log the plan's progress every time a job's progress is checked.
These logs are written to the location specified by hive.querylog.location hive.querylog.plan.progress.interval 60000 The interval to wait between logging the plan's progress in milliseconds. If there is a whole number percentage change in the progress of the mappers or the reducers, the progress is logged regardless of this value. The actual interval will be the ceiling of (this value divided by the value of hive.exec.counters.pull.interval) multiplied by the value of hive.exec.counters.pull.interval I.e. if it is not divide evenly by the value of hive.exec.counters.pull.interval it will be logged less frequently than specified. This only has an effect if hive.querylog.enable.plan.progress is set to true. hive.enforce.bucketing false Whether bucketing is enforced. If true, while inserting into the table, bucketing is enforced. hive.enforce.sorting false Whether sorting is enforced. If true, while inserting into the table, sorting is enforced. hive.optimize.bucketingsorting true If hive.enforce.bucketing or hive.enforce.sorting is true, dont create a reducer for enforcing bucketing/sorting for queries of the form: insert overwrite table T2 select * from T1; where T1 and T2 are bucketed/sorted by the same keys into the same number of buckets. hive.enforce.sortmergebucketmapjoin false If the user asked for sort-merge bucketed map-side join, and it cannot be performed, should the query fail or not ? hive.auto.convert.sortmerge.join false Will the join be automatically converted to a sort-merge join, if the joined tables pass the criteria for sort-merge join. hive.auto.convert.sortmerge.join.bigtable.selection.policy org.apache.hadoop.hive.ql.optimizer.AvgPartitionSizeBasedBigTableSelectorForAutoSMJ The policy to choose the big table for automatic conversion to sort-merge join. By default, the table with the largest partitions is assigned the big table. All policies are: . based on position of the table - the leftmost table is selected org.apache.hadoop.hive.ql.optimizer.LeftmostBigTableSMJ. . based on total size (all the partitions selected in the query) of the table org.apache.hadoop.hive.ql.optimizer.TableSizeBasedBigTableSelectorForAutoSMJ. . based on average size (all the partitions selected in the query) of the table org.apache.hadoop.hive.ql.optimizer.AvgPartitionSizeBasedBigTableSelectorForAutoSMJ. New policies can be added in future. hive.metastore.ds.connection.url.hook Name of the hook to use for retriving the JDO connection URL. If empty, the value in javax.jdo.option.ConnectionURL is used hive.metastore.ds.retry.attempts 1 The number of times to retry a metastore call if there were a connection error hive.metastore.ds.retry.interval 1000 The number of miliseconds between metastore retry attempts hive.metastore.server.min.threads 200 Minimum number of worker threads in the Thrift server's pool. hive.metastore.server.max.threads 100000 Maximum number of worker threads in the Thrift server's pool. hive.metastore.server.tcp.keepalive true Whether to enable TCP keepalive for the metastore server. Keepalive will prevent accumulation of half-open connections. hive.metastore.sasl.enabled false If true, the metastore thrift interface will be secured with SASL. Clients must authenticate with Kerberos. hive.metastore.thrift.framed.transport.enabled false If true, the metastore thrift interface will use TFramedTransport. When false (default) a standard TTransport is used. 
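For the metastore server settings above, a hive-site.xml sketch on the metastore host might look like the following; the values are simply the defaults quoted in the descriptions and are shown for illustration only:
<configuration>
  <!-- size of the Thrift worker thread pool -->
  <property>
    <name>hive.metastore.server.min.threads</name>
    <value>200</value>
  </property>
  <property>
    <name>hive.metastore.server.max.threads</name>
    <value>100000</value>
  </property>
  <!-- avoid accumulating half-open client connections -->
  <property>
    <name>hive.metastore.server.tcp.keepalive</name>
    <value>true</value>
  </property>
</configuration>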
hive.metastore.kerberos.keytab.file The path to the Kerberos Keytab file containing the metastore thrift server's service principal. hive.metastore.kerberos.principal hive-metastore/_HOST@EXAMPLE.COM The service principal for the metastore thrift server. The special string _HOST will be replaced automatically with the correct host name. hive.cluster.delegation.token.store.class org.apache.hadoop.hive.thrift.MemoryTokenStore The delegation token store implementation. Set to org.apache.hadoop.hive.thrift.ZooKeeperTokenStore for load-balanced cluster. hive.cluster.delegation.token.store.zookeeper.connectString localhost:2181 The ZooKeeper token store connect string. hive.cluster.delegation.token.store.zookeeper.znode /hive/cluster/delegation The root path for token store data. hive.cluster.delegation.token.store.zookeeper.acl sasl:hive/host1@EXAMPLE.COM:cdrwa,sasl:hive/host2@EXAMPLE.COM:cdrwa ACL for token store entries. List comma separated all server principals for the cluster. hive.metastore.cache.pinobjtypes Table,StorageDescriptor,SerDeInfo,Partition,Database,Type,FieldSchema,Order List of comma separated metastore object types that should be pinned in the cache hive.optimize.reducededuplication true Remove extra map-reduce jobs if the data is already clustered by the same key which needs to be used again. This should always be set to true. Since it is a new feature, it has been made configurable. hive.optimize.reducededuplication.min.reducer 4 Reduce deduplication merges two RSs by moving key/parts/reducer-num of the child RS to parent RS. That means if reducer-num of the child RS is fixed (order by or forced bucketing) and small, it can make very slow, single MR. The optimization will be disabled if number of reducers is less than specified value. hive.exec.dynamic.partition true Whether or not to allow dynamic partitions in DML/DDL. hive.exec.dynamic.partition.mode strict In strict mode, the user must specify at least one static partition in case the user accidentally overwrites all partitions. hive.exec.max.dynamic.partitions 1000 Maximum number of dynamic partitions allowed to be created in total. hive.exec.max.dynamic.partitions.pernode 100 Maximum number of dynamic partitions allowed to be created in each mapper/reducer node. hive.exec.max.created.files 100000 Maximum number of HDFS files created by all mappers/reducers in a MapReduce job. hive.exec.default.partition.name __HIVE_DEFAULT_PARTITION__ The default partition name in case the dynamic partition column value is null/empty string or anyother values that cannot be escaped. This value must not contain any special character used in HDFS URI (e.g., ':', '%', '/' etc). The user has to be aware that the dynamic partition value should not contain this value to avoid confusions. hive.stats.dbclass jdbc:derby The default database that stores temporary hive statistics. hive.stats.autogather true A flag to gather statistics automatically during the INSERT OVERWRITE command. hive.stats.jdbcdriver org.apache.derby.jdbc.EmbeddedDriver The JDBC driver for the database that stores temporary hive statistics. hive.stats.dbconnectionstring jdbc:derby:;databaseName=TempStatsStore;create=true The default connection string for the database that stores temporary hive statistics. hive.stats.default.publisher The Java class (implementing the StatsPublisher interface) that is used by default if hive.stats.dbclass is not JDBC or HBase. 
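Purely as an illustration of how the Kerberos-related metastore settings fit together, a secured metastore could carry entries such as these in hive-site.xml; the keytab path is a hypothetical placeholder and the principal is the default quoted above:
<configuration>
  <!-- require SASL/Kerberos authentication on the metastore thrift interface -->
  <property>
    <name>hive.metastore.sasl.enabled</name>
    <value>true</value>
  </property>
  <!-- hypothetical keytab location; adjust to the actual deployment -->
  <property>
    <name>hive.metastore.kerberos.keytab.file</name>
    <value>/etc/hive/conf/hive.keytab</value>
  </property>
  <!-- _HOST is substituted with the metastore host name -->
  <property>
    <name>hive.metastore.kerberos.principal</name>
    <value>hive-metastore/_HOST@EXAMPLE.COM</value>
  </property>
</configuration>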
hive.stats.default.aggregator The Java class (implementing the StatsAggregator interface) that is used by default if hive.stats.dbclass is not JDBC or HBase. hive.stats.jdbc.timeout 30 Timeout value (number of seconds) used by JDBC connection and statements. hive.stats.retries.max 0 Maximum number of retries when stats publisher/aggregator got an exception updating intermediate database. Default is no tries on failures. hive.stats.retries.wait 3000 The base waiting window (in milliseconds) before the next retry. The actual wait time is calculated by baseWindow * failues baseWindow * (failure 1) * (random number between [0.0,1.0]). hive.stats.reliable false Whether queries will fail because stats cannot be collected completely accurately. If this is set to true, reading/writing from/into a partition may fail becuase the stats could not be computed accurately. hive.stats.collect.tablekeys false Whether join and group by keys on tables are derived and maintained in the QueryPlan. This is useful to identify how tables are accessed and to determine if they should be bucketed. hive.stats.collect.scancols false Whether column accesses are tracked in the QueryPlan. This is useful to identify how tables are accessed and to determine if there are wasted columns that can be trimmed. hive.stats.ndv.error 20.0 Standard error expressed in percentage. Provides a tradeoff between accuracy and compute cost.A lower value for error indicates higher accuracy and a higher compute cost. hive.stats.key.prefix.max.length 200 Determines if when the prefix of the key used for intermediate stats collection exceeds a certain length, a hash of the key is used instead. If the value < 0 then hashing is never used, if the value >= 0 then hashing is used only when the key prefixes length exceeds that value. The key prefix is defined as everything preceding the task ID in the key. hive.support.concurrency false Whether hive supports concurrency or not. A zookeeper instance must be up and running for the default hive lock manager to support read-write locks. hive.lock.numretries 100 The number of times you want to try to get all the locks hive.unlock.numretries 10 The number of times you want to retry to do one unlock hive.lock.sleep.between.retries 60 The sleep time (in seconds) between various retries hive.zookeeper.quorum The list of zookeeper servers to talk to. This is only needed for read/write locks. hive.zookeeper.client.port 2181 The port of zookeeper servers to talk to. This is only needed for read/write locks. hive.zookeeper.session.timeout 600000 Zookeeper client's session timeout. The client is disconnected, and as a result, all locks released, if a heartbeat is not sent in the timeout. hive.zookeeper.namespace hive_zookeeper_namespace The parent node under which all zookeeper nodes are created. hive.zookeeper.clean.extra.nodes false Clean extra nodes at the end of the session. fs.har.impl org.apache.hadoop.hive.shims.HiveHarFileSystem The implementation for accessing Hadoop Archives. Note that this won't be applicable to Hadoop vers less than 0.20 hive.archive.enabled false Whether archiving operations are permitted hive.fetch.output.serde org.apache.hadoop.hive.serde2.DelimitedJSONSerDe The serde used by FetchTask to serialize the fetch output. 
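As noted under hive.support.concurrency, the default lock manager needs a running ZooKeeper ensemble. A sketch of the corresponding hive-site.xml entries is below; the host names are hypothetical and the port is the default quoted above:
<configuration>
  <property>
    <name>hive.support.concurrency</name>
    <value>true</value>
  </property>
  <!-- hypothetical ZooKeeper ensemble used for read/write locks -->
  <property>
    <name>hive.zookeeper.quorum</name>
    <value>zk1.example.com,zk2.example.com,zk3.example.com</value>
  </property>
  <property>
    <name>hive.zookeeper.client.port</name>
    <value>2181</value>
  </property>
</configuration>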
hive.exec.mode.local.auto false Let hive determine whether to run in local mode automatically hive.exec.drop.ignorenonexistent true Do not report an error if DROP TABLE/VIEW specifies a non-existent table/view hive.exec.show.job.failure.debug.info true If a job fails, whether to provide a link in the CLI to the task with the most failures, along with debugging hints if applicable. hive.auto.progress.timeout 0 How long to run autoprogressor for the script/UDTF operators (in seconds). Set to 0 for forever. hive.hbase.wal.enabled true Whether writes to HBase should be forced to the write-ahead log. Disabling this improves HBase write performance at the risk of lost writes in case of a crash. hive.table.parameters.default Default property values for newly created tables hive.entity.separator @ Separator used to construct names of tables and partitions. For example, dbname@tablename@partitionname hive.ddl.createtablelike.properties.whitelist Table Properties to copy over when executing a Create Table Like. hive.variable.substitute true This enables substitution using syntax like ${var} ${system:var} and ${env:var}. hive.variable.substitute.depth 40 The maximum replacements the substitution engine will do. hive.conf.validation true Eables type checking for registered hive configurations hive.security.authorization.enabled false enable or disable the hive client authorization hive.security.authorization.manager org.apache.hadoop.hive.ql.security.authorization.DefaultHiveAuthorizationProvider the hive client authorization manager class name. The user defined authorization class should implement interface org.apache.hadoop.hive.ql.security.authorization.HiveAuthorizationProvider. hive.security.metastore.authorization.manager org.apache.hadoop.hive.ql.security.authorization.DefaultHiveMetastoreAuthorizationProvider authorization manager class name to be used in the metastore for authorization. The user defined authorization class should implement interface org.apache.hadoop.hive.ql.security.authorization.HiveMetastoreAuthorizationProvider. hive.security.authenticator.manager org.apache.hadoop.hive.ql.security.HadoopDefaultAuthenticator hive client authenticator manager class name. The user defined authenticator should implement interface org.apache.hadoop.hive.ql.security.HiveAuthenticationProvider. hive.security.metastore.authenticator.manager org.apache.hadoop.hive.ql.security.HadoopDefaultMetastoreAuthenticator authenticator manager class name to be used in the metastore for authentication. The user defined authenticator should implement interface org.apache.hadoop.hive.ql.security.HiveAuthenticationProvider. hive.security.authorization.createtable.user.grants the privileges automatically granted to some users whenever a table gets created. An example like "userX,userY:select;userZ:create" will grant select privilege to userX and userY, and grant create privilege to userZ whenever a new table created. hive.security.authorization.createtable.group.grants the privileges automatically granted to some groups whenever a table gets created. An example like "groupX,groupY:select;groupZ:create" will grant select privilege to groupX and groupY, and grant create privilege to groupZ whenever a new table created. hive.security.authorization.createtable.role.grants the privileges automatically granted to some roles whenever a table gets created. An example like "roleX,roleY:select;roleZ:create" will grant select privilege to roleX and roleY, and grant create privilege to roleZ whenever a new table created. 
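The authorization switches above can be combined as in the sketch below; the grant string reuses the example syntax from the description, and userX/userY/userZ are placeholder names:
<configuration>
  <!-- enable client-side authorization checks -->
  <property>
    <name>hive.security.authorization.enabled</name>
    <value>true</value>
  </property>
  <!-- privileges granted automatically whenever a table is created -->
  <property>
    <name>hive.security.authorization.createtable.user.grants</name>
    <value>userX,userY:select;userZ:create</value>
  </property>
</configuration>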
hive.security.authorization.createtable.owner.grants the privileges automatically granted to the owner whenever a table gets created. An example like "select,drop" will grant select and drop privilege to the owner of the table hive.metastore.authorization.storage.checks false Should the metastore do authorization checks against the underlying storage for operations like drop-partition (disallow the drop-partition if the user in question doesn't have permissions to delete the corresponding directory on the storage). hive.error.on.empty.partition false Whether to throw an exception if dynamic partition insert generates empty results. hive.index.compact.file.ignore.hdfs false If true, the hdfs location stored in the index file will be ignored at runtime. If the data got moved or the name of the cluster got changed, the index data should still be usable. hive.optimize.index.filter.compact.minsize 5368709120 Minimum size (in bytes) of the inputs on which a compact index is automatically used. hive.optimize.index.filter.compact.maxsize -1 Maximum size (in bytes) of the inputs on which a compact index is automatically used. A negative number is equivalent to infinity. hive.index.compact.query.max.size 10737418240 The maximum number of bytes that a query using the compact index can read. A negative value is equivalent to infinity. hive.index.compact.query.max.entries 10000000 The maximum number of index entries to read during a query that uses the compact index. A negative value is equivalent to infinity. hive.index.compact.binary.search true Whether or not to use a binary search to find the entries in an index table that match the filter, where possible. hive.exim.uri.scheme.whitelist hdfs,pfile A comma separated list of acceptable URI schemes for import and export. hive.lock.mapred.only.operation false This parameter controls whether to acquire locks only for queries that need to execute at least one mapred job. hive.limit.row.max.size 100000 When trying a smaller subset of data for simple LIMIT, how much size we need to guarantee each row to have at least. hive.limit.optimize.limit.file 10 When trying a smaller subset of data for simple LIMIT, maximum number of files we can sample. hive.limit.optimize.enable false Whether to enable the optimization of trying a smaller subset of data for simple LIMIT first. hive.limit.optimize.fetch.max 50000 Maximum number of rows allowed for a smaller subset of data for simple LIMIT, if it is a fetch query. Insert queries are not restricted by this limit. hive.rework.mapredwork false Whether to rework the mapred work or not. This was first introduced by SymlinkTextInputFormat to replace symlink files with real paths at compile time. hive.exec.concatenate.check.index true If this is set to true, hive will throw an error when doing 'alter table tbl_name [partSpec] concatenate' on a table/partition that has indexes on it. The reason the user wants to set this to true is that it can help the user avoid handling all the index drop, recreation and rebuild work. This is very helpful for tables with thousands of partitions. hive.sample.seednumber 0 A number used for percentage sampling. By changing this number, the user will change the subsets of data sampled. hive.io.exception.handlers A list of io exception handler class names. This is used to construct a list of exception handlers to handle exceptions thrown by record readers. hive.autogen.columnalias.prefix.label _c String used as a prefix when auto generating column alias.
By default the prefix label will be appended with a column position number to form the column alias. Auto generation would happen if an aggregate function is used in a select clause without an explicit alias. hive.autogen.columnalias.prefix.includefuncname false Whether to include function name in the column alias auto generated by hive. hive.exec.perf.logger org.apache.hadoop.hive.ql.log.PerfLogger The class responsible logging client side performance metrics. Must be a subclass of org.apache.hadoop.hive.ql.log.PerfLogger hive.start.cleanup.scratchdir false To cleanup the hive scratchdir while starting the hive server hive.output.file.extension String used as a file extension for output files. If not set, defaults to the codec extension for text files (e.g. ".gz"), or no extension otherwise. hive.insert.into.multilevel.dirs false Where to insert into multilevel directories like "insert directory '/HIVEFT25686/chinna/' from table" hive.warehouse.subdir.inherit.perms false Set this to true if the the table directories should inherit the permission of the warehouse or database directory instead of being created with the permissions derived from dfs umask hive.exec.job.debug.capture.stacktraces true Whether or not stack traces parsed from the task logs of a sampled failed task for each failed job should be stored in the SessionState hive.exec.driver.run.hooks A comma separated list of hooks which implement HiveDriverRunHook and will be run at the beginning and end of Driver.run, these will be run in the order specified hive.ddl.output.format text The data format to use for DDL output. One of "text" (for human readable text) or "json" (for a json object). hive.transform.escape.input false This adds an option to escape special chars (newlines, carriage returns and tabs) when they are passed to the user script. This is useful if the hive tables can contain data that contains special characters. hive.exec.rcfile.use.explicit.header true If this is set the header for RC Files will simply be RCF. If this is not set the header will be that borrowed from sequence files, e.g. SEQ- followed by the input and output RC File formats. hive.multi.insert.move.tasks.share.dependencies false If this is set all move tasks for tables/partitions (not directories) at the end of a multi-insert query will only begin once the dependencies for all these move tasks have been met. Advantages: If concurrency is enabled, the locks will only be released once the query has finished, so with this config enabled, the time when the table/partition is generated will be much closer to when the lock on it is released. Disadvantages: If concurrency is not enabled, with this disabled, the tables/partitions which are produced by this query and finish earlier will be available for querying much earlier. Since the locks are only released once the query finishes, this does not apply if concurrency is enabled. hive.fetch.task.conversion minimal Some select queries can be converted to single FETCH task minimizing latency. Currently the query should be single sourced not having any subquery and should not have any aggregations or distincts (which incurrs RS), lateral views and joins. 1. minimal : SELECT STAR, FILTER on partition columns, LIMIT only 2. 
more : SELECT, FILTER, LIMIT only (TABLESAMPLE, virtual columns) hive.hmshandler.retry.attempts 1 The number of times to retry a HMSHandler call if there were a connection error hive.hmshandler.retry.interval 1000 The number of miliseconds between HMSHandler retry attempts hive.server.read.socket.timeout 10 Timeout for the HiveServer to close the connection if no response from the client in N seconds, defaults to 10 seconds. hive.server.tcp.keepalive true Whether to enable TCP keepalive for the Hive server. Keepalive will prevent accumulation of half-open connections. hive.decode.partition.name false Whether to show the unquoted partition names in query results. hive.log4j.file Hive log4j configuration file. If the property is not set, then logging will be initialized using hive-log4j.properties found on the classpath. If the property is set, the value must be a valid URI (java.net.URI, e.g. "file:///tmp/my-logging.properties"), which you can then extract a URL from and pass to PropertyConfigurator.configure(URL). hive.exec.log4j.file Hive log4j configuration file for execution mode(sub command). If the property is not set, then logging will be initialized using hive-exec-log4j.properties found on the classpath. If the property is set, the value must be a valid URI (java.net.URI, e.g. "file:///tmp/my-logging.properties"), which you can then extract a URL from and pass to PropertyConfigurator.configure(URL). hive.exec.infer.bucket.sort false If this is set, when writing partitions, the metadata will include the bucketing/sorting properties with which the data was written if any (this will not overwrite the metadata inherited from the table if the table is bucketed/sorted) hive.exec.infer.bucket.sort.num.buckets.power.two false If this is set, when setting the number of reducers for the map reduce task which writes the final output files, it will choose a number which is a power of two, unless the user specifies the number of reducers to use using mapred.reduce.tasks. The number of reducers may be set to a power of two, only to be followed by a merge task meaning preventing anything from being inferred. With hive.exec.infer.bucket.sort set to true: Advantages: If this is not set, the number of buckets for partitions will seem arbitrary, which means that the number of mappers used for optimized joins, for example, will be very low. With this set, since the number of buckets used for any partition is a power of two, the number of mappers used for optimized joins will be the least number of buckets used by any partition being joined. Disadvantages: This may mean a much larger or much smaller number of reducers being used in the final map reduce job, e.g. if a job was originally going to take 257 reducers, it will now take 512 reducers, similarly if the max number of reducers is 511, and a job was going to use this many, it will now use 256 reducers. hive.groupby.orderby.position.alias false Whether to enable using Column Position Alias in Group By or Order By hive.server2.thrift.min.worker.threads 5 Minimum number of Thrift worker threads hive.server2.thrift.max.worker.threads 100 Maximum number of Thrift worker threads hive.server2.thrift.port 10000 Port number of HiveServer2 Thrift interface. Can be overridden by setting $HIVE_SERVER2_THRIFT_PORT hive.server2.thrift.bind.host localhost Bind host on which to run the HiveServer2 Thrift interface. Can be overridden by setting $HIVE_SERVER2_THRIFT_BIND_HOST hive.server2.authentication NONE Client authentication types. 
NONE: no authentication check LDAP: LDAP/AD based authentication KERBEROS: Kerberos/GSSAPI authentication CUSTOM: Custom authentication provider (Use with property hive.server2.custom.authentication.class) hive.server2.custom.authentication.class Custom authentication class. Used when property 'hive.server2.authentication' is set to 'CUSTOM'. Provided class must be a proper implementation of the interface org.apache.hive.service.auth.PasswdAuthenticationProvider. HiveServer2 will call its Authenticate(user, passed) method to authenticate requests. The implementation may optionally extend Hadoop's org.apache.hadoop.conf.Configured class to grab Hive's Configuration object. hive.server2.authentication.kerberos.principal Kerberos server principal hive.server2.authentication.kerberos.keytab Kerberos keytab file for server principal hive.server2.authentication.ldap.url LDAP connection URL hive.server2.authentication.ldap.baseDN LDAP base DN hive.server2.enable.doAs true Setting this property to true will have hive server2 execute hive operations as the user making the calls to it.
path=sahara-plugin-vanilla-10.0.0/sahara_plugin_vanilla/plugins/vanilla/v2_7_1/resources/mapred-default.xml
mapreduce.jobtracker.jobhistory.location If the job tracker is static the history files are stored in this single well known place. If no value is set here, by default, it is in the local file system at ${hadoop.log.dir}/history. mapreduce.jobtracker.jobhistory.task.numberprogresssplits 12 Every task attempt progresses from 0.0 to 1.0 [unless it fails or is killed]. We record, for each task attempt, certain statistics over each twelfth of the progress range. You can change the number of intervals we divide the entire range of progress into by setting this property. Higher values give more precision to the recorded data, but cost more memory in the job tracker at runtime. Each increment in this attribute costs 16 bytes per running task. mapreduce.job.userhistorylocation User can specify a location to store the history files of a particular job. If nothing is specified, the logs are stored in the output directory. The files are stored in "_logs/history/" in the directory. User can stop logging by giving the value "none". mapreduce.jobtracker.jobhistory.completed.location The completed job history files are stored at this single well known location. If nothing is specified, the files are stored at ${mapreduce.jobtracker.jobhistory.location}/done. mapreduce.job.committer.setup.cleanup.needed true true, if job needs job-setup and job-cleanup. false, otherwise mapreduce.task.io.sort.factor 10 The number of streams to merge at once while sorting files. This determines the number of open file handles. mapreduce.task.io.sort.mb 100 The total amount of buffer memory to use while sorting files, in megabytes. By default, gives each merge stream 1MB, which should minimize seeks. mapreduce.map.sort.spill.percent 0.80 The soft limit in the serialization buffer. Once reached, a thread will begin to spill the contents to disk in the background.
Note that collection will not block if this threshold is exceeded while a spill is already in progress, so spills may be larger than this threshold when it is set to less than .5 mapreduce.jobtracker.address local The host and port that the MapReduce job tracker runs at. If "local", then jobs are run in-process as a single map and reduce task. mapreduce.local.clientfactory.class.name org.apache.hadoop.mapred.LocalClientFactory This the client factory that is responsible for creating local job runner client mapreduce.jobtracker.http.address 0.0.0.0:50030 The job tracker http server address and port the server will listen on. If the port is 0 then the server will start on a free port. mapreduce.jobtracker.handler.count 10 The number of server threads for the JobTracker. This should be roughly 4% of the number of tasktracker nodes. mapreduce.tasktracker.report.address 127.0.0.1:0 The interface and port that task tracker server listens on. Since it is only connected to by the tasks, it uses the local interface. EXPERT ONLY. Should only be changed if your host does not have the loopback interface. mapreduce.cluster.local.dir ${hadoop.tmp.dir}/mapred/local The local directory where MapReduce stores intermediate data files. May be a comma-separated list of directories on different devices in order to spread disk i/o. Directories that do not exist are ignored. mapreduce.jobtracker.system.dir ${hadoop.tmp.dir}/mapred/system The directory where MapReduce stores control files. mapreduce.jobtracker.staging.root.dir ${hadoop.tmp.dir}/mapred/staging The root of the staging area for users' job files In practice, this should be the directory where users' home directories are located (usually /user) mapreduce.cluster.temp.dir ${hadoop.tmp.dir}/mapred/temp A shared directory for temporary files. mapreduce.tasktracker.local.dir.minspacestart 0 If the space in mapreduce.cluster.local.dir drops under this, do not ask for more tasks. Value in bytes. mapreduce.tasktracker.local.dir.minspacekill 0 If the space in mapreduce.cluster.local.dir drops under this, do not ask more tasks until all the current ones have finished and cleaned up. Also, to save the rest of the tasks we have running, kill one of them, to clean up some space. Start with the reduce tasks, then go with the ones that have finished the least. Value in bytes. mapreduce.jobtracker.expire.trackers.interval 600000 Expert: The time-interval, in miliseconds, after which a tasktracker is declared 'lost' if it doesn't send heartbeats. mapreduce.tasktracker.instrumentation org.apache.hadoop.mapred.TaskTrackerMetricsInst Expert: The instrumentation class to associate with each TaskTracker. mapreduce.tasktracker.resourcecalculatorplugin Name of the class whose instance will be used to query resource information on the tasktracker. The class must be an instance of org.apache.hadoop.util.ResourceCalculatorPlugin. If the value is null, the tasktracker attempts to use a class appropriate to the platform. Currently, the only platform supported is Linux. mapreduce.tasktracker.taskmemorymanager.monitoringinterval 5000 The interval, in milliseconds, for which the tasktracker waits between two cycles of monitoring its tasks' memory usage. Used only if tasks' memory management is enabled via mapred.tasktracker.tasks.maxmemory. mapreduce.tasktracker.tasks.sleeptimebeforesigkill 5000 The time, in milliseconds, the tasktracker waits for sending a SIGKILL to a task, after it has been sent a SIGTERM. 
This is currently not used on WINDOWS where tasks are just sent a SIGTERM. mapreduce.job.maps 2 The default number of map tasks per job. Ignored when mapreduce.jobtracker.address is "local". mapreduce.job.reduces 1 The default number of reduce tasks per job. Typically set to 99% of the cluster's reduce capacity, so that if a node fails the reduces can still be executed in a single wave. Ignored when mapreduce.jobtracker.address is "local". mapreduce.jobtracker.restart.recover false "true" to enable (job) recovery upon restart, "false" to start afresh mapreduce.jobtracker.jobhistory.block.size 3145728 The block size of the job history file. Since the job recovery uses job history, its important to dump job history to disk as soon as possible. Note that this is an expert level parameter. The default value is set to 3 MB. mapreduce.jobtracker.taskscheduler org.apache.hadoop.mapred.JobQueueTaskScheduler The class responsible for scheduling the tasks. mapreduce.job.running.map.limit 0 The maximum number of simultaneous map tasks per job. There is no limit if this value is 0 or negative. mapreduce.job.running.reduce.limit 0 The maximum number of simultaneous reduce tasks per job. There is no limit if this value is 0 or negative. mapreduce.job.reducer.preempt.delay.sec 0 The threshold in terms of seconds after which an unsatisfied mapper request triggers reducer preemption to free space. Default 0 implies that the reduces should be preempted immediately after allocation if there is currently no room for newly allocated mappers. mapreduce.job.max.split.locations 10 The max number of block locations to store for each split for locality calculation. mapreduce.job.split.metainfo.maxsize 10000000 The maximum permissible size of the split metainfo file. The JobTracker won't attempt to read split metainfo files bigger than the configured value. No limits if set to -1. mapreduce.jobtracker.taskscheduler.maxrunningtasks.perjob The maximum number of running tasks for a job before it gets preempted. No limits if undefined. mapreduce.map.maxattempts 4 Expert: The maximum number of attempts per map task. In other words, framework will try to execute a map task these many number of times before giving up on it. mapreduce.reduce.maxattempts 4 Expert: The maximum number of attempts per reduce task. In other words, framework will try to execute a reduce task these many number of times before giving up on it. mapreduce.reduce.shuffle.fetch.retry.enabled ${yarn.nodemanager.recovery.enabled} Set to enable fetch retry during host restart. mapreduce.reduce.shuffle.fetch.retry.interval-ms 1000 Time of interval that fetcher retry to fetch again when some non-fatal failure happens because of some events like NM restart. mapreduce.reduce.shuffle.fetch.retry.timeout-ms 30000 Timeout value for fetcher to retry to fetch again when some non-fatal failure happens because of some events like NM restart. mapreduce.reduce.shuffle.retry-delay.max.ms 60000 The maximum number of ms the reducer will delay before retrying to download map data. mapreduce.reduce.shuffle.parallelcopies 5 The default number of parallel transfers run by reduce during the copy(shuffle) phase. mapreduce.reduce.shuffle.connect.timeout 180000 Expert: The maximum amount of time (in milli seconds) reduce task spends in trying to connect to a tasktracker for getting map output. 
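These shuffle-related properties are overridden in mapred-site.xml in the same Hadoop configuration format. The snippet below is only a sketch: the copy count of 10 is an illustrative value (the default quoted above is 5), and the timeout is the quoted default:
<configuration>
  <!-- number of parallel fetches a reduce runs during the copy (shuffle) phase -->
  <property>
    <name>mapreduce.reduce.shuffle.parallelcopies</name>
    <value>10</value>
  </property>
  <!-- maximum time (ms) a reduce task spends trying to connect for map output -->
  <property>
    <name>mapreduce.reduce.shuffle.connect.timeout</name>
    <value>180000</value>
  </property>
</configuration>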
mapreduce.reduce.shuffle.read.timeout 180000 Expert: The maximum amount of time (in milli seconds) reduce task waits for map output data to be available for reading after obtaining connection. mapreduce.shuffle.connection-keep-alive.enable false set to true to support keep-alive connections. mapreduce.shuffle.connection-keep-alive.timeout 5 The number of seconds a shuffle client attempts to retain http connection. Refer "Keep-Alive: timeout=" header in Http specification mapreduce.task.timeout 600000 The number of milliseconds before a task will be terminated if it neither reads an input, writes an output, nor updates its status string. A value of 0 disables the timeout. mapreduce.tasktracker.map.tasks.maximum 2 The maximum number of map tasks that will be run simultaneously by a task tracker. mapreduce.tasktracker.reduce.tasks.maximum 2 The maximum number of reduce tasks that will be run simultaneously by a task tracker. mapreduce.map.memory.mb 1024 The amount of memory to request from the scheduler for each map task. mapreduce.map.cpu.vcores 1 The number of virtual cores to request from the scheduler for each map task. mapreduce.reduce.memory.mb 1024 The amount of memory to request from the scheduler for each reduce task. mapreduce.reduce.cpu.vcores 1 The number of virtual cores to request from the scheduler for each reduce task. mapreduce.jobtracker.retiredjobs.cache.size 1000 The number of retired job status to keep in the cache. mapreduce.tasktracker.outofband.heartbeat false Expert: Set this to true to let the tasktracker send an out-of-band heartbeat on task-completion for better latency. mapreduce.jobtracker.jobhistory.lru.cache.size 5 The number of job history files loaded in memory. The jobs are loaded when they are first accessed. The cache is cleared based on LRU. mapreduce.jobtracker.instrumentation org.apache.hadoop.mapred.JobTrackerMetricsInst Expert: The instrumentation class to associate with each JobTracker. mapred.child.java.opts -Xmx200m Java opts for the task processes. The following symbol, if present, will be interpolated: @taskid@ is replaced by current TaskID. Any other occurrences of '@' will go unchanged. For example, to enable verbose gc logging to a file named for the taskid in /tmp and to set the heap maximum to be a gigabyte, pass a 'value' of: -Xmx1024m -verbose:gc -Xloggc:/tmp/@taskid@.gc Usage of -Djava.library.path can cause programs to no longer function if hadoop native libraries are used. These values should instead be set as part of LD_LIBRARY_PATH in the map / reduce JVM env using the mapreduce.map.env and mapreduce.reduce.env config settings. mapred.child.env User added environment variables for the task processes. Example : 1) A=foo This will set the env variable A to foo 2) B=$B:c This is inherit nodemanager's B env variable on Unix. 3) B=%B%;c This is inherit nodemanager's B env variable on Windows. mapreduce.admin.user.env Expert: Additional execution environment entries for map and reduce task processes. This is not an additive property. You must preserve the original value if you want your map and reduce tasks to have access to native libraries (compression, etc). When this value is empty, the command to set execution envrionment will be OS dependent: For linux, use LD_LIBRARY_PATH=$HADOOP_COMMON_HOME/lib/native. For windows, use PATH = %PATH%;%HADOOP_COMMON_HOME%\\bin. mapreduce.map.log.level INFO The logging level for the map task. The allowed levels are: OFF, FATAL, ERROR, WARN, INFO, DEBUG, TRACE and ALL. 
The setting here could be overridden if "mapreduce.job.log4j-properties-file" is set. mapreduce.reduce.log.level INFO The logging level for the reduce task. The allowed levels are: OFF, FATAL, ERROR, WARN, INFO, DEBUG, TRACE and ALL. The setting here could be overridden if "mapreduce.job.log4j-properties-file" is set. mapreduce.map.cpu.vcores 1 The number of virtual cores required for each map task. mapreduce.reduce.cpu.vcores 1 The number of virtual cores required for each reduce task. mapreduce.reduce.merge.inmem.threshold 1000 The threshold, in terms of the number of files for the in-memory merge process. When we accumulate threshold number of files we initiate the in-memory merge and spill to disk. A value of 0 or less than 0 indicates we want to DON'T have any threshold and instead depend only on the ramfs's memory consumption to trigger the merge. mapreduce.reduce.shuffle.merge.percent 0.66 The usage threshold at which an in-memory merge will be initiated, expressed as a percentage of the total memory allocated to storing in-memory map outputs, as defined by mapreduce.reduce.shuffle.input.buffer.percent. mapreduce.reduce.shuffle.input.buffer.percent 0.70 The percentage of memory to be allocated from the maximum heap size to storing map outputs during the shuffle. mapreduce.reduce.input.buffer.percent 0.0 The percentage of memory- relative to the maximum heap size- to retain map outputs during the reduce. When the shuffle is concluded, any remaining map outputs in memory must consume less than this threshold before the reduce can begin. mapreduce.reduce.shuffle.memory.limit.percent 0.25 Expert: Maximum percentage of the in-memory limit that a single shuffle can consume mapreduce.shuffle.ssl.enabled false Whether to use SSL for for the Shuffle HTTP endpoints. mapreduce.shuffle.ssl.file.buffer.size 65536 Buffer size for reading spills from file when using SSL. mapreduce.shuffle.max.connections 0 Max allowed connections for the shuffle. Set to 0 (zero) to indicate no limit on the number of connections. mapreduce.shuffle.max.threads 0 Max allowed threads for serving shuffle connections. Set to zero to indicate the default of 2 times the number of available processors (as reported by Runtime.availableProcessors()). Netty is used to serve requests, so a thread is not needed for each connection. mapreduce.shuffle.transferTo.allowed This option can enable/disable using nio transferTo method in the shuffle phase. NIO transferTo does not perform well on windows in the shuffle phase. Thus, with this configuration property it is possible to disable it, in which case custom transfer method will be used. Recommended value is false when running Hadoop on Windows. For Linux, it is recommended to set it to true. If nothing is set then the default value is false for Windows, and true for Linux. mapreduce.shuffle.transfer.buffer.size 131072 This property is used only if mapreduce.shuffle.transferTo.allowed is set to false. In that case, this property defines the size of the buffer used in the buffer copy code for the shuffle phase. The size of this buffer determines the size of the IO requests. mapreduce.reduce.markreset.buffer.percent 0.0 The percentage of memory -relative to the maximum heap size- to be used for caching values when using the mark-reset functionality. mapreduce.map.speculative true If true, then multiple instances of some map tasks may be executed in parallel. mapreduce.reduce.speculative true If true, then multiple instances of some reduce tasks may be executed in parallel. 
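For illustration only, a mapred-site.xml sketch that turns off speculative execution and raises the per-map memory request; the 2048 MB figure is an arbitrary example, not a recommendation:
<configuration>
  <property>
    <name>mapreduce.map.speculative</name>
    <value>false</value>
  </property>
  <property>
    <name>mapreduce.reduce.speculative</name>
    <value>false</value>
  </property>
  <!-- memory requested from the scheduler per map task (default quoted above is 1024) -->
  <property>
    <name>mapreduce.map.memory.mb</name>
    <value>2048</value>
  </property>
</configuration>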
mapreduce.job.speculative.speculative-cap-running-tasks 0.1 The max percent (0-1) of running tasks that can be speculatively re-executed at any time. mapreduce.job.speculative.speculative-cap-total-tasks 0.01 The max percent (0-1) of all tasks that can be speculatively re-executed at any time. mapreduce.job.speculative.minimum-allowed-tasks 10 The minimum allowed tasks that can be speculatively re-executed at any time. mapreduce.job.speculative.retry-after-no-speculate 1000 The waiting time(ms) to do next round of speculation if there is no task speculated in this round. mapreduce.job.speculative.retry-after-speculate 15000 The waiting time(ms) to do next round of speculation if there are tasks speculated in this round. mapreduce.job.map.output.collector.class org.apache.hadoop.mapred.MapTask$MapOutputBuffer The MapOutputCollector implementation(s) to use. This may be a comma-separated list of class names, in which case the map task will try to initialize each of the collectors in turn. The first to successfully initialize will be used. mapreduce.job.speculative.slowtaskthreshold 1.0 The number of standard deviations by which a task's ave progress-rates must be lower than the average of all running tasks' for the task to be considered too slow. mapreduce.job.jvm.numtasks 1 How many tasks to run per jvm. If set to -1, there is no limit. mapreduce.job.ubertask.enable false Whether to enable the small-jobs "ubertask" optimization, which runs "sufficiently small" jobs sequentially within a single JVM. "Small" is defined by the following maxmaps, maxreduces, and maxbytes settings. Note that configurations for application masters also affect the "Small" definition - yarn.app.mapreduce.am.resource.mb must be larger than both mapreduce.map.memory.mb and mapreduce.reduce.memory.mb, and yarn.app.mapreduce.am.resource.cpu-vcores must be larger than both mapreduce.map.cpu.vcores and mapreduce.reduce.cpu.vcores to enable ubertask. Users may override this value. mapreduce.job.ubertask.maxmaps 9 Threshold for number of maps, beyond which job is considered too big for the ubertasking optimization. Users may override this value, but only downward. mapreduce.job.ubertask.maxreduces 1 Threshold for number of reduces, beyond which job is considered too big for the ubertasking optimization. CURRENTLY THE CODE CANNOT SUPPORT MORE THAN ONE REDUCE and will ignore larger values. (Zero is a valid max, however.) Users may override this value, but only downward. mapreduce.job.ubertask.maxbytes Threshold for number of input bytes, beyond which job is considered too big for the ubertasking optimization. If no value is specified, dfs.block.size is used as a default. Be sure to specify a default value in mapred-site.xml if the underlying filesystem is not HDFS. Users may override this value, but only downward. mapreduce.job.emit-timeline-data false Specifies if the Application Master should emit timeline data to the timeline server. Individual jobs can override this value. mapreduce.input.fileinputformat.split.minsize 0 The minimum size chunk that map input should be split into. Note that some file formats may have minimum split sizes that take priority over this setting. mapreduce.input.fileinputformat.list-status.num-threads 1 The number of threads to use to list and fetch block locations for the specified input paths. Note: multiple threads should not be used if a custom non thread-safe path filter is used. mapreduce.jobtracker.maxtasks.perjob -1 The maximum number of tasks for a single job. 
A value of -1 indicates that there is no maximum. mapreduce.input.lineinputformat.linespermap 1 When using NLineInputFormat, the number of lines of input data to include in each split. mapreduce.client.submit.file.replication 10 The replication level for submitted job files. This should be around the square root of the number of nodes. mapreduce.tasktracker.dns.interface default The name of the Network Interface from which a task tracker should report its IP address. mapreduce.tasktracker.dns.nameserver default The host name or IP address of the name server (DNS) which a TaskTracker should use to determine the host name used by the JobTracker for communication and display purposes. mapreduce.tasktracker.http.threads 40 The number of worker threads that for the http server. This is used for map output fetching mapreduce.tasktracker.http.address 0.0.0.0:50060 The task tracker http server address and port. If the port is 0 then the server will start on a free port. mapreduce.task.files.preserve.failedtasks false Should the files for failed tasks be kept. This should only be used on jobs that are failing, because the storage is never reclaimed. It also prevents the map outputs from being erased from the reduce directory as they are consumed. mapreduce.output.fileoutputformat.compress false Should the job outputs be compressed? mapreduce.output.fileoutputformat.compress.type RECORD If the job outputs are to compressed as SequenceFiles, how should they be compressed? Should be one of NONE, RECORD or BLOCK. mapreduce.output.fileoutputformat.compress.codec org.apache.hadoop.io.compress.DefaultCodec If the job outputs are compressed, how should they be compressed? mapreduce.map.output.compress false Should the outputs of the maps be compressed before being sent across the network. Uses SequenceFile compression. mapreduce.map.output.compress.codec org.apache.hadoop.io.compress.DefaultCodec If the map outputs are compressed, how should they be compressed? map.sort.class org.apache.hadoop.util.QuickSort The default sort class for sorting keys. mapreduce.task.userlog.limit.kb 0 The maximum size of user-logs of each task in KB. 0 disables the cap. yarn.app.mapreduce.am.container.log.limit.kb 0 The maximum size of the MRAppMaster attempt container logs in KB. 0 disables the cap. yarn.app.mapreduce.task.container.log.backups 0 Number of backup files for task logs when using ContainerRollingLogAppender (CRLA). See org.apache.log4j.RollingFileAppender.maxBackupIndex. By default, ContainerLogAppender (CLA) is used, and container logs are not rolled. CRLA is enabled for tasks when both mapreduce.task.userlog.limit.kb and yarn.app.mapreduce.task.container.log.backups are greater than zero. yarn.app.mapreduce.am.container.log.backups 0 Number of backup files for the ApplicationMaster logs when using ContainerRollingLogAppender (CRLA). See org.apache.log4j.RollingFileAppender.maxBackupIndex. By default, ContainerLogAppender (CLA) is used, and container logs are not rolled. CRLA is enabled for the ApplicationMaster when both mapreduce.task.userlog.limit.kb and yarn.app.mapreduce.am.container.log.backups are greater than zero. yarn.app.mapreduce.shuffle.log.separate true If enabled ('true') logging generated by the client-side shuffle classes in a reducer will be written in a dedicated log file 'syslog.shuffle' instead of 'syslog'. yarn.app.mapreduce.shuffle.log.limit.kb 0 Maximum size of the syslog.shuffle file in kilobytes (0 for no limit). 
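A short, illustrative mapred-site.xml fragment for the compression properties above follows. It compresses intermediate map output and the final job output; using SnappyCodec assumes the Snappy native libraries are installed on the cluster nodes, otherwise the DefaultCodec listed above can be kept.
<!-- Sketch only: compress intermediate map output and the final job output -->
<property>
  <name>mapreduce.map.output.compress</name>
  <value>true</value>
</property>
<property>
  <name>mapreduce.map.output.compress.codec</name>
  <value>org.apache.hadoop.io.compress.SnappyCodec</value>
</property>
<property>
  <name>mapreduce.output.fileoutputformat.compress</name>
  <value>true</value>
</property>
<property>
  <name>mapreduce.output.fileoutputformat.compress.type</name>
  <value>BLOCK</value>
</property>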
yarn.app.mapreduce.shuffle.log.backups 0 If yarn.app.mapreduce.shuffle.log.limit.kb and yarn.app.mapreduce.shuffle.log.backups are greater than zero then a ContainerRollngLogAppender is used instead of ContainerLogAppender for syslog.shuffle. See org.apache.log4j.RollingFileAppender.maxBackupIndex mapreduce.job.userlog.retain.hours 24 The maximum time, in hours, for which the user-logs are to be retained after the job completion. mapreduce.jobtracker.hosts.filename Names a file that contains the list of nodes that may connect to the jobtracker. If the value is empty, all hosts are permitted. mapreduce.jobtracker.hosts.exclude.filename Names a file that contains the list of hosts that should be excluded by the jobtracker. If the value is empty, no hosts are excluded. mapreduce.jobtracker.heartbeats.in.second 100 Expert: Approximate number of heart-beats that could arrive at JobTracker in a second. Assuming each RPC can be processed in 10msec, the default value is made 100 RPCs in a second. mapreduce.jobtracker.tasktracker.maxblacklists 4 The number of blacklists for a taskTracker by various jobs after which the task tracker could be blacklisted across all jobs. The tracker will be given a tasks later (after a day). The tracker will become a healthy tracker after a restart. mapreduce.job.maxtaskfailures.per.tracker 3 The number of task-failures on a tasktracker of a given job after which new tasks of that job aren't assigned to it. It MUST be less than mapreduce.map.maxattempts and mapreduce.reduce.maxattempts otherwise the failed task will never be tried on a different node. mapreduce.client.output.filter FAILED The filter for controlling the output of the task's userlogs sent to the console of the JobClient. The permissible options are: NONE, KILLED, FAILED, SUCCEEDED and ALL. mapreduce.client.completion.pollinterval 5000 The interval (in milliseconds) between which the JobClient polls the JobTracker for updates about job status. You may want to set this to a lower value to make tests run faster on a single node system. Adjusting this value in production may lead to unwanted client-server traffic. mapreduce.client.progressmonitor.pollinterval 1000 The interval (in milliseconds) between which the JobClient reports status to the console and checks for job completion. You may want to set this to a lower value to make tests run faster on a single node system. Adjusting this value in production may lead to unwanted client-server traffic. mapreduce.jobtracker.persist.jobstatus.active true Indicates if persistency of job status information is active or not. mapreduce.jobtracker.persist.jobstatus.hours 1 The number of hours job status information is persisted in DFS. The job status information will be available after it drops of the memory queue and between jobtracker restarts. With a zero value the job status information is not persisted at all in DFS. mapreduce.jobtracker.persist.jobstatus.dir /jobtracker/jobsInfo The directory where the job status information is persisted in a file system to be available after it drops of the memory queue and between jobtracker restarts. mapreduce.task.profile false To set whether the system should collect profiler information for some of the tasks in this job? The information is stored in the user log directory. The value is "true" if task profiling is enabled. mapreduce.task.profile.maps 0-2 To set the ranges of map tasks to profile. mapreduce.task.profile has to be set to true for the value to be accounted. 
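As the descriptions of the two client poll intervals above note, lowering them can make tests run faster on a single-node system. A development-only sketch follows; the values are arbitrary and, as warned above, lowering them in production may cause unwanted client-server traffic.
<!-- Development/test sketch only: poll for job status and progress more frequently -->
<property>
  <name>mapreduce.client.completion.pollinterval</name>
  <value>500</value>
</property>
<property>
  <name>mapreduce.client.progressmonitor.pollinterval</name>
  <value>200</value>
</property>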
mapreduce.task.profile.reduces 0-2 To set the ranges of reduce tasks to profile. mapreduce.task.profile has to be set to true for the value to be accounted. mapreduce.task.profile.params -agentlib:hprof=cpu=samples,heap=sites,force=n,thread=y,verbose=n,file=%s JVM profiler parameters used to profile map and reduce task attempts. This string may contain a single format specifier %s that will be replaced by the path to profile.out in the task attempt log directory. To specify different profiling options for map tasks and reduce tasks, more specific parameters mapreduce.task.profile.map.params and mapreduce.task.profile.reduce.params should be used. mapreduce.task.profile.map.params ${mapreduce.task.profile.params} Map-task-specific JVM profiler parameters. See mapreduce.task.profile.params mapreduce.task.profile.reduce.params ${mapreduce.task.profile.params} Reduce-task-specific JVM profiler parameters. See mapreduce.task.profile.params mapreduce.task.skip.start.attempts 2 The number of Task attempts AFTER which skip mode will be kicked off. When skip mode is kicked off, the tasks reports the range of records which it will process next, to the TaskTracker. So that on failures, TT knows which ones are possibly the bad records. On further executions, those are skipped. mapreduce.map.skip.proc.count.autoincr true The flag which if set to true, SkipBadRecords.COUNTER_MAP_PROCESSED_RECORDS is incremented by MapRunner after invoking the map function. This value must be set to false for applications which process the records asynchronously or buffer the input records. For example streaming. In such cases applications should increment this counter on their own. mapreduce.reduce.skip.proc.count.autoincr true The flag which if set to true, SkipBadRecords.COUNTER_REDUCE_PROCESSED_GROUPS is incremented by framework after invoking the reduce function. This value must be set to false for applications which process the records asynchronously or buffer the input records. For example streaming. In such cases applications should increment this counter on their own. mapreduce.job.skip.outdir If no value is specified here, the skipped records are written to the output directory at _logs/skip. User can stop writing skipped records by giving the value "none". mapreduce.map.skip.maxrecords 0 The number of acceptable skip records surrounding the bad record PER bad record in mapper. The number includes the bad record as well. To turn the feature of detection/skipping of bad records off, set the value to 0. The framework tries to narrow down the skipped range by retrying until this threshold is met OR all attempts get exhausted for this task. Set the value to Long.MAX_VALUE to indicate that framework need not try to narrow down. Whatever records(depends on application) get skipped are acceptable. mapreduce.reduce.skip.maxgroups 0 The number of acceptable skip groups surrounding the bad group PER bad group in reducer. The number includes the bad group as well. To turn the feature of detection/skipping of bad groups off, set the value to 0. The framework tries to narrow down the skipped range by retrying until this threshold is met OR all attempts get exhausted for this task. Set the value to Long.MAX_VALUE to indicate that framework need not try to narrow down. Whatever groups(depends on application) get skipped are acceptable. mapreduce.ifile.readahead true Configuration key to enable/disable IFile readahead. mapreduce.ifile.readahead.bytes 4194304 Configuration key to set the IFile readahead length in bytes. 
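The profiling switches above can be combined as in the following sketch, which enables task profiling and narrows it to the first two map attempts; the range syntax mirrors the 0-2 default shown above and the profiler parameters are left at their default hprof string.
<!-- Sketch only: enable hprof profiling for a small range of map tasks -->
<property>
  <name>mapreduce.task.profile</name>
  <value>true</value>
</property>
<property>
  <name>mapreduce.task.profile.maps</name>
  <value>0-1</value>
</property>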
mapreduce.jobtracker.taskcache.levels 2 This is the max level of the task cache. For example, if the level is 2, the tasks cached are at the host level and at the rack level. mapreduce.job.queuename default Queue to which a job is submitted. This must match one of the queues defined in mapred-queues.xml for the system. Also, the ACL setup for the queue must allow the current user to submit a job to the queue. Before specifying a queue, ensure that the system is configured with the queue, and access is allowed for submitting jobs to the queue. mapreduce.job.tags Tags for the job that will be passed to YARN at submission time. Queries to YARN for applications can filter on these tags. mapreduce.cluster.acls.enabled false Specifies whether ACLs should be checked for authorization of users for doing various queue and job level operations. ACLs are disabled by default. If enabled, access control checks are made by JobTracker and TaskTracker when requests are made by users for queue operations like submit job to a queue and kill a job in the queue and job operations like viewing the job-details (See mapreduce.job.acl-view-job) or for modifying the job (See mapreduce.job.acl-modify-job) using Map/Reduce APIs, RPCs or via the console and web user interfaces. For enabling this flag(mapreduce.cluster.acls.enabled), this is to be set to true in mapred-site.xml on JobTracker node and on all TaskTracker nodes. mapreduce.job.acl-modify-job Job specific access-control list for 'modifying' the job. It is only used if authorization is enabled in Map/Reduce by setting the configuration property mapreduce.cluster.acls.enabled to true. This specifies the list of users and/or groups who can do modification operations on the job. For specifying a list of users and groups the format to use is "user1,user2 group1,group". If set to '*', it allows all users/groups to modify this job. If set to ' '(i.e. space), it allows none. This configuration is used to guard all the modifications with respect to this job and takes care of all the following operations: o killing this job o killing a task of this job, failing a task of this job o setting the priority of this job Each of these operations are also protected by the per-queue level ACL "acl-administer-jobs" configured via mapred-queues.xml. So a caller should have the authorization to satisfy either the queue-level ACL or the job-level ACL. Irrespective of this ACL configuration, (a) job-owner, (b) the user who started the cluster, (c) members of an admin configured supergroup configured via mapreduce.cluster.permissions.supergroup and (d) queue administrators of the queue to which this job was submitted to configured via acl-administer-jobs for the specific queue in mapred-queues.xml can do all the modification operations on a job. By default, nobody else besides job-owner, the user who started the cluster, members of supergroup and queue administrators can perform modification operations on a job. mapreduce.job.acl-view-job Job specific access-control list for 'viewing' the job. It is only used if authorization is enabled in Map/Reduce by setting the configuration property mapreduce.cluster.acls.enabled to true. This specifies the list of users and/or groups who can view private details about the job. For specifying a list of users and groups the format to use is "user1,user2 group1,group". If set to '*', it allows all users/groups to modify this job. If set to ' '(i.e. space), it allows none. 
This configuration is used to guard some of the job-views and at present only protects APIs that can return possibly sensitive information of the job-owner like o job-level counters o task-level counters o tasks' diagnostic information o task-logs displayed on the TaskTracker web-UI and o job.xml showed by the JobTracker's web-UI Every other piece of information of jobs is still accessible by any other user, for e.g., JobStatus, JobProfile, list of jobs in the queue, etc. Irrespective of this ACL configuration, (a) job-owner, (b) the user who started the cluster, (c) members of an admin configured supergroup configured via mapreduce.cluster.permissions.supergroup and (d) queue administrators of the queue to which this job was submitted to configured via acl-administer-jobs for the specific queue in mapred-queues.xml can do all the view operations on a job. By default, nobody else besides job-owner, the user who started the cluster, memebers of supergroup and queue administrators can perform view operations on a job. mapreduce.tasktracker.indexcache.mb 10 The maximum memory that a task tracker allows for the index cache that is used when serving map outputs to reducers. mapreduce.job.token.tracking.ids.enabled false Whether to write tracking ids of tokens to job-conf. When true, the configuration property "mapreduce.job.token.tracking.ids" is set to the token-tracking-ids of the job mapreduce.job.token.tracking.ids When mapreduce.job.token.tracking.ids.enabled is set to true, this is set by the framework to the token-tracking-ids used by the job. mapreduce.task.merge.progress.records 10000 The number of records to process during merge before sending a progress notification to the TaskTracker. mapreduce.task.combine.progress.records 10000 The number of records to process during combine output collection before sending a progress notification. mapreduce.job.reduce.slowstart.completedmaps 0.05 Fraction of the number of maps in the job which should be complete before reduces are scheduled for the job. mapreduce.job.complete.cancel.delegation.tokens true if false - do not unregister/cancel delegation tokens from renewal, because same tokens may be used by spawned jobs mapreduce.tasktracker.taskcontroller org.apache.hadoop.mapred.DefaultTaskController TaskController which is used to launch and manage task execution mapreduce.tasktracker.group Expert: Group to which TaskTracker belongs. If LinuxTaskController is configured via mapreduce.tasktracker.taskcontroller, the group owner of the task-controller binary should be same as this group. mapreduce.shuffle.port 13562 Default port that the ShuffleHandler will run on. ShuffleHandler is a service run at the NodeManager to facilitate transfers of intermediate Map outputs to requesting Reducers. mapreduce.job.reduce.shuffle.consumer.plugin.class org.apache.hadoop.mapreduce.task.reduce.Shuffle Name of the class whose instance will be used to send shuffle requests by reducetasks of this job. The class must be an instance of org.apache.hadoop.mapred.ShuffleConsumerPlugin. mapreduce.tasktracker.healthchecker.script.path Absolute path to the script which is periodicallyrun by the node health monitoring service to determine if the node is healthy or not. If the value of this key is empty or the file does not exist in the location configured here, the node health monitoring service is not started. 
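Tying together mapreduce.cluster.acls.enabled and the two job-level ACLs discussed above, a sketch follows. The user and group names are hypothetical placeholders; the "users groups" format (users comma-separated, then a space, then groups) is the one described above.
<!-- Sketch only: enable ACL checks cluster-wide (mapred-site.xml on JobTracker and all TaskTrackers) -->
<property>
  <name>mapreduce.cluster.acls.enabled</name>
  <value>true</value>
</property>
<!-- Per-job sketch: "alice", "bob" and "analysts" are hypothetical user/group names -->
<property>
  <name>mapreduce.job.acl-view-job</name>
  <value>alice,bob analysts</value>
</property>
<property>
  <name>mapreduce.job.acl-modify-job</name>
  <value>alice</value>
</property>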
mapreduce.tasktracker.healthchecker.interval 60000 Frequency of the node health script to be run, in milliseconds mapreduce.tasktracker.healthchecker.script.timeout 600000 Time after node health script should be killed if unresponsive and considered that the script has failed. mapreduce.tasktracker.healthchecker.script.args List of arguments which are to be passed to node health script when it is being launched comma seperated. mapreduce.job.counters.limit 120 Limit on the number of user counters allowed per job. mapreduce.framework.name local The runtime framework for executing MapReduce jobs. Can be one of local, classic or yarn. yarn.app.mapreduce.am.staging-dir /tmp/hadoop-yarn/staging The staging dir used while submitting jobs. mapreduce.am.max-attempts 2 The maximum number of application attempts. It is a application-specific setting. It should not be larger than the global number set by resourcemanager. Otherwise, it will be override. The default number is set to 2, to allow at least one retry for AM. mapreduce.job.end-notification.url Indicates url which will be called on completion of job to inform end status of job. User can give at most 2 variables with URI : $jobId and $jobStatus. If they are present in URI, then they will be replaced by their respective values. mapreduce.job.end-notification.retry.attempts 0 The number of times the submitter of the job wants to retry job end notification if it fails. This is capped by mapreduce.job.end-notification.max.attempts mapreduce.job.end-notification.retry.interval 1000 The number of milliseconds the submitter of the job wants to wait before job end notification is retried if it fails. This is capped by mapreduce.job.end-notification.max.retry.interval mapreduce.job.end-notification.max.attempts 5 true The maximum number of times a URL will be read for providing job end notification. Cluster administrators can set this to limit how long after end of a job, the Application Master waits before exiting. Must be marked as final to prevent users from overriding this. mapreduce.job.log4j-properties-file Used to override the default settings of log4j in container-log4j.properties for NodeManager. Like container-log4j.properties, it requires certain framework appenders properly defined in this overriden file. The file on the path will be added to distributed cache and classpath. If no-scheme is given in the path, it defaults to point to a log4j file on the local FS. mapreduce.job.end-notification.max.retry.interval 5000 true The maximum amount of time (in milliseconds) to wait before retrying job end notification. Cluster administrators can set this to limit how long the Application Master waits before exiting. Must be marked as final to prevent users from overriding this. yarn.app.mapreduce.am.env User added environment variables for the MR App Master processes. Example : 1) A=foo This will set the env variable A to foo 2) B=$B:c This is inherit tasktracker's B env variable. yarn.app.mapreduce.am.admin.user.env Environment variables for the MR App Master processes for admin purposes. These values are set first and can be overridden by the user env (yarn.app.mapreduce.am.env) Example : 1) A=foo This will set the env variable A to foo 2) B=$B:c This is inherit app master's B env variable. yarn.app.mapreduce.am.command-opts -Xmx1024m Java opts for the MR App Master processes. The following symbol, if present, will be interpolated: @taskid@ is replaced by current TaskID. Any other occurrences of '@' will go unchanged. 
For example, to enable verbose gc logging to a file named for the taskid in /tmp and to set the heap maximum to be a gigabyte, pass a 'value' of: -Xmx1024m -verbose:gc -Xloggc:/tmp/@taskid@.gc Usage of -Djava.library.path can cause programs to no longer function if hadoop native libraries are used. These values should instead be set as part of LD_LIBRARY_PATH in the map / reduce JVM env using the mapreduce.map.env and mapreduce.reduce.env config settings. yarn.app.mapreduce.am.admin-command-opts Java opts for the MR App Master processes for admin purposes. It will appears before the opts set by yarn.app.mapreduce.am.command-opts and thus its options can be overridden user. Usage of -Djava.library.path can cause programs to no longer function if hadoop native libraries are used. These values should instead be set as part of LD_LIBRARY_PATH in the map / reduce JVM env using the mapreduce.map.env and mapreduce.reduce.env config settings. yarn.app.mapreduce.am.job.task.listener.thread-count 30 The number of threads used to handle RPC calls in the MR AppMaster from remote tasks yarn.app.mapreduce.am.job.client.port-range Range of ports that the MapReduce AM can use when binding. Leave blank if you want all possible ports. For example 50000-50050,50100-50200 yarn.app.mapreduce.am.job.committer.cancel-timeout 60000 The amount of time in milliseconds to wait for the output committer to cancel an operation if the job is killed yarn.app.mapreduce.am.job.committer.commit-window 10000 Defines a time window in milliseconds for output commit operations. If contact with the RM has occurred within this window then commits are allowed, otherwise the AM will not allow output commits until contact with the RM has been re-established. mapreduce.fileoutputcommitter.algorithm.version 1 The file output committer algorithm version valid algorithm version number: 1 or 2 default to 1, which is the original algorithm In algorithm version 1, 1. commitTask will rename directory $joboutput/_temporary/$appAttemptID/_temporary/$taskAttemptID/ to $joboutput/_temporary/$appAttemptID/$taskID/ 2. recoverTask will also do a rename $joboutput/_temporary/$appAttemptID/$taskID/ to $joboutput/_temporary/($appAttemptID + 1)/$taskID/ 3. commitJob will merge every task output file in $joboutput/_temporary/$appAttemptID/$taskID/ to $joboutput/, then it will delete $joboutput/_temporary/ and write $joboutput/_SUCCESS It has a performance regression, which is discussed in MAPREDUCE-4815. If a job generates many files to commit then the commitJob method call at the end of the job can take minutes. the commit is single-threaded and waits until all tasks have completed before commencing. algorithm version 2 will change the behavior of commitTask, recoverTask, and commitJob. 1. commitTask will rename all files in $joboutput/_temporary/$appAttemptID/_temporary/$taskAttemptID/ to $joboutput/ 2. recoverTask actually doesn't require to do anything, but for upgrade from version 1 to version 2 case, it will check if there are any files in $joboutput/_temporary/($appAttemptID - 1)/$taskID/ and rename them to $joboutput/ 3. commitJob can simply delete $joboutput/_temporary and write $joboutput/_SUCCESS This algorithm will reduce the output commit time for large jobs by having the tasks commit directly to the final output directory as they were completing and commitJob had very little to do. 
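Given the comparison of the two commit algorithms above, the sketch below opts in to algorithm version 2, which the description says lets tasks commit directly to the final output directory and shortens commitJob for jobs that produce many files. Whether that trade-off is appropriate depends on the job and the underlying filesystem.
<!-- Sketch only: select the v2 file output committer algorithm described above -->
<property>
  <name>mapreduce.fileoutputcommitter.algorithm.version</name>
  <value>2</value>
</property>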
yarn.app.mapreduce.am.scheduler.heartbeat.interval-ms 1000 The interval in ms at which the MR AppMaster should send heartbeats to the ResourceManager yarn.app.mapreduce.client-am.ipc.max-retries 3 The number of client retries to the AM - before reconnecting to the RM to fetch Application Status. yarn.app.mapreduce.client-am.ipc.max-retries-on-timeouts 3 The number of client retries on socket timeouts to the AM - before reconnecting to the RM to fetch Application Status. yarn.app.mapreduce.client.max-retries 3 The number of client retries to the RM/HS before throwing exception. This is a layer above the ipc. yarn.app.mapreduce.am.resource.mb 1536 The amount of memory the MR AppMaster needs. yarn.app.mapreduce.am.resource.cpu-vcores 1 The number of virtual CPU cores the MR AppMaster needs. yarn.app.mapreduce.am.hard-kill-timeout-ms 10000 Number of milliseconds to wait before the job client kills the application. yarn.app.mapreduce.client.job.max-retries 0 The number of retries the client will make for getJob and dependent calls. The default is 0 as this is generally only needed for non-HDFS DFS where additional, high level retries are required to avoid spurious failures during the getJob call. 30 is a good value for WASB yarn.app.mapreduce.client.job.retry-interval 2000 The delay between getJob retries in ms for retries configured with yarn.app.mapreduce.client.job.max-retries. CLASSPATH for MR applications. A comma-separated list of CLASSPATH entries. If mapreduce.application.framework is set then this must specify the appropriate classpath for that archive, and the name of the archive must be present in the classpath. If mapreduce.app-submission.cross-platform is false, platform-specific environment vairable expansion syntax would be used to construct the default CLASSPATH entries. For Linux: $HADOOP_MAPRED_HOME/share/hadoop/mapreduce/*, $HADOOP_MAPRED_HOME/share/hadoop/mapreduce/lib/*. For Windows: %HADOOP_MAPRED_HOME%/share/hadoop/mapreduce/*, %HADOOP_MAPRED_HOME%/share/hadoop/mapreduce/lib/*. If mapreduce.app-submission.cross-platform is true, platform-agnostic default CLASSPATH for MR applications would be used: {{HADOOP_MAPRED_HOME}}/share/hadoop/mapreduce/*, {{HADOOP_MAPRED_HOME}}/share/hadoop/mapreduce/lib/* Parameter expansion marker will be replaced by NodeManager on container launch based on the underlying OS accordingly. mapreduce.application.classpath If enabled, user can submit an application cross-platform i.e. submit an application from a Windows client to a Linux/Unix server or vice versa. mapreduce.app-submission.cross-platform false Path to the MapReduce framework archive. If set, the framework archive will automatically be distributed along with the job, and this path would normally reside in a public location in an HDFS filesystem. As with distributed cache files, this can be a URL with a fragment specifying the alias to use for the archive name. For example, hdfs:/mapred/framework/hadoop-mapreduce-2.1.1.tar.gz#mrframework would alias the localized archive as "mrframework". Note that mapreduce.application.classpath must include the appropriate classpath for the specified framework. The base name of the archive, or alias of the archive if an alias is used, must appear in the specified classpath. mapreduce.application.framework.path mapreduce.job.classloader false Whether to use a separate (isolated) classloader for user classes in the task JVM. mapreduce.job.classloader.system.classes Used to override the default definition of the system classes for the job classloader. 
The system classes are a comma-separated list of patterns that indicate whether to load a class from the system classpath, instead from the user-supplied JARs, when mapreduce.job.classloader is enabled. A positive pattern is defined as: 1. A single class name 'C' that matches 'C' and transitively all nested classes 'C$*' defined in C; 2. A package name ending with a '.' (e.g., "com.example.") that matches all classes from that package. A negative pattern is defined by a '-' in front of a positive pattern (e.g., "-com.example."). A class is considered a system class if and only if it matches one of the positive patterns and none of the negative ones. More formally: A class is a member of the inclusion set I if it matches one of the positive patterns. A class is a member of the exclusion set E if it matches one of the negative patterns. The set of system classes S = I \ E. mapreduce.jobhistory.address 0.0.0.0:10020 MapReduce JobHistory Server IPC host:port mapreduce.jobhistory.webapp.address 0.0.0.0:19888 MapReduce JobHistory Server Web UI host:port mapreduce.jobhistory.keytab Location of the kerberos keytab file for the MapReduce JobHistory Server. /etc/security/keytab/jhs.service.keytab mapreduce.jobhistory.principal Kerberos principal name for the MapReduce JobHistory Server. jhs/_HOST@REALM.TLD mapreduce.jobhistory.intermediate-done-dir ${yarn.app.mapreduce.am.staging-dir}/history/done_intermediate mapreduce.jobhistory.done-dir ${yarn.app.mapreduce.am.staging-dir}/history/done mapreduce.jobhistory.cleaner.enable true mapreduce.jobhistory.cleaner.interval-ms 86400000 How often the job history cleaner checks for files to delete, in milliseconds. Defaults to 86400000 (one day). Files are only deleted if they are older than mapreduce.jobhistory.max-age-ms. mapreduce.jobhistory.max-age-ms 604800000 Job history files older than this many milliseconds will be deleted when the history cleaner runs. Defaults to 604800000 (1 week). mapreduce.jobhistory.client.thread-count 10 The number of threads to handle client API requests mapreduce.jobhistory.datestring.cache.size 200000 Size of the date string cache. Effects the number of directories which will be scanned to find a job. mapreduce.jobhistory.joblist.cache.size 20000 Size of the job list cache mapreduce.jobhistory.loadedjobs.cache.size 5 Size of the loaded job cache mapreduce.jobhistory.move.interval-ms 180000 Scan for history files to more from intermediate done dir to done dir at this frequency. mapreduce.jobhistory.move.thread-count 3 The number of threads used to move files. mapreduce.jobhistory.store.class The HistoryStorage class to use to cache history data. mapreduce.jobhistory.minicluster.fixed.ports false Whether to use fixed ports with the minicluster mapreduce.jobhistory.admin.address 0.0.0.0:10033 The address of the History server admin interface. mapreduce.jobhistory.admin.acl * ACL of who can be admin of the History server. mapreduce.jobhistory.recovery.enable false Enable the history server to store server state and recover server state upon startup. If enabled then mapreduce.jobhistory.recovery.store.class must be specified. mapreduce.jobhistory.recovery.store.class org.apache.hadoop.mapreduce.v2.hs.HistoryServerFileSystemStateStoreService The HistoryServerStateStoreService class to store history server state for recovery. 
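For the JobHistory Server retention settings above, a minimal sketch is shown below; 1209600000 ms is fourteen days (double the one-week default) and is purely illustrative, and the cleaner interval keeps the one-day default.
<!-- Sketch only: retain job history files for two weeks, cleaning up once a day -->
<property>
  <name>mapreduce.jobhistory.max-age-ms</name>
  <value>1209600000</value>
</property>
<property>
  <name>mapreduce.jobhistory.cleaner.interval-ms</name>
  <value>86400000</value>
</property>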
mapreduce.jobhistory.recovery.store.fs.uri ${hadoop.tmp.dir}/mapred/history/recoverystore The URI where history server state will be stored if HistoryServerFileSystemStateStoreService is configured as the recovery storage class. mapreduce.jobhistory.recovery.store.leveldb.path ${hadoop.tmp.dir}/mapred/history/recoverystore The URI where history server state will be stored if HistoryServerLeveldbSystemStateStoreService is configured as the recovery storage class. mapreduce.jobhistory.http.policy HTTP_ONLY This configures the HTTP endpoint for JobHistoryServer web UI. The following values are supported: - HTTP_ONLY : Service is provided only on http - HTTPS_ONLY : Service is provided only on https yarn.app.mapreduce.am.containerlauncher.threadpool-initial-size 10 The initial size of thread pool to launch containers in the app master. ././@PaxHeader0000000000000000000000000000020500000000000011452 xustar0000000000000000111 path=sahara-plugin-vanilla-10.0.0/sahara_plugin_vanilla/plugins/vanilla/v2_7_1/resources/oozie-default.xml 22 mtime=1696419316.0 sahara-plugin-vanilla-10.0.0/sahara_plugin_vanilla/plugins/vanilla/v2_7_1/resources/oozie-default.xm0000664000175000017500000031700200000000000033553 0ustar00zuulzuul00000000000000 oozie.output.compression.codec gz The name of the compression codec to use. The implementation class for the codec needs to be specified through another property oozie.compression.codecs. You can specify a comma separated list of 'Codec_name'='Codec_class' for oozie.compression.codecs where codec class implements the interface org.apache.oozie.compression.CompressionCodec. If oozie.compression.codecs is not specified, gz codec implementation is used by default. oozie.action.mapreduce.uber.jar.enable false If true, enables the oozie.mapreduce.uber.jar mapreduce workflow configuration property, which is used to specify an uber jar in HDFS. Submitting a workflow with an uber jar requires at least Hadoop 2.2.0 or 1.2.0. If false, workflows which specify the oozie.mapreduce.uber.jar configuration property will fail. oozie.processing.timezone UTC Oozie server timezone. Valid values are UTC and GMT(+/-)####, for example 'GMT+0530' would be India timezone. All dates parsed and genered dates by Oozie Coordinator/Bundle will be done in the specified timezone. The default value of 'UTC' should not be changed under normal circumtances. If for any reason is changed, note that GMT(+/-)#### timezones do not observe DST changes. oozie.base.url http://localhost:8080/oozie Base Oozie URL. oozie.system.id oozie-${user.name} The Oozie system ID. oozie.systemmode NORMAL System mode for Oozie at startup. oozie.delete.runtime.dir.on.shutdown true If the runtime directory should be kept after Oozie shutdowns down. 
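For the basic Oozie server settings just listed, a minimal oozie-site.xml sketch follows; oozie-server.example.com is a placeholder hostname and GMT+0530 merely reuses the timezone example from the oozie.processing.timezone description, which also advises keeping the UTC default under normal circumstances.
<!-- Sketch only: override the base URL and (if really needed) the server timezone -->
<property>
  <name>oozie.base.url</name>
  <value>http://oozie-server.example.com:8080/oozie</value>
</property>
<property>
  <name>oozie.processing.timezone</name>
  <value>GMT+0530</value>
</property>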
oozie.services org.apache.oozie.service.SchedulerService, org.apache.oozie.service.InstrumentationService, org.apache.oozie.service.MemoryLocksService, org.apache.oozie.service.UUIDService, org.apache.oozie.service.ELService, org.apache.oozie.service.AuthorizationService, org.apache.oozie.service.UserGroupInformationService, org.apache.oozie.service.HadoopAccessorService, org.apache.oozie.service.JobsConcurrencyService, org.apache.oozie.service.URIHandlerService, org.apache.oozie.service.DagXLogInfoService, org.apache.oozie.service.SchemaService, org.apache.oozie.service.LiteWorkflowAppService, org.apache.oozie.service.JPAService, org.apache.oozie.service.StoreService, org.apache.oozie.service.SLAStoreService, org.apache.oozie.service.DBLiteWorkflowStoreService, org.apache.oozie.service.CallbackService, org.apache.oozie.service.ActionService, org.apache.oozie.service.ShareLibService, org.apache.oozie.service.CallableQueueService, org.apache.oozie.service.ActionCheckerService, org.apache.oozie.service.RecoveryService, org.apache.oozie.service.PurgeService, org.apache.oozie.service.CoordinatorEngineService, org.apache.oozie.service.BundleEngineService, org.apache.oozie.service.DagEngineService, org.apache.oozie.service.CoordMaterializeTriggerService, org.apache.oozie.service.StatusTransitService, org.apache.oozie.service.PauseTransitService, org.apache.oozie.service.GroupsService, org.apache.oozie.service.ProxyUserService, org.apache.oozie.service.XLogStreamingService, org.apache.oozie.service.JvmPauseMonitorService, org.apache.oozie.service.SparkConfigurationService All services to be created and managed by Oozie Services singleton. Class names must be separated by commas. oozie.services.ext To add/replace services defined in 'oozie.services' with custom implementations. Class names must be separated by commas. oozie.service.XLogStreamingService.buffer.len 4096 4K buffer for streaming the logs progressively oozie.service.HCatAccessorService.jmsconnections default=java.naming.factory.initial#org.apache.activemq.jndi.ActiveMQInitialContextFactory;java.naming.provider.url#tcp://localhost:61616;connectionFactoryNames#ConnectionFactory Specify the map of endpoints to JMS configuration properties. In general, endpoint identifies the HCatalog server URL. "default" is used if no endpoint is mentioned in the query. If some JMS property is not defined, the system will use the property defined jndi.properties. jndi.properties files is retrieved from the application classpath. Mapping rules can also be provided for mapping Hcatalog servers to corresponding JMS providers. hcat://${1}.${2}.server.com:8020=java.naming.factory.initial#Dummy.Factory;java.naming.provider.url#tcp://broker.${2}:61616 oozie.service.JMSTopicService.topic.name default=${username} Topic options are ${username} or ${jobId} or a fixed string which can be specified as default or for a particular job type. For e.g To have a fixed string topic for workflows, coordinators and bundles, specify in the following comma-separated format: {jobtype1}={some_string1}, {jobtype2}={some_string2} where job type can be WORKFLOW, COORDINATOR or BUNDLE. e.g. 
Following defines topic for workflow job, workflow action, coordinator job, coordinator action, bundle job and bundle action WORKFLOW=workflow, COORDINATOR=coordinator, BUNDLE=bundle For jobs with no defined topic, default topic will be ${username} oozie.jms.producer.connection.properties java.naming.factory.initial#org.apache.activemq.jndi.ActiveMQInitialContextFactory;java.naming.provider.url#tcp://localhost:61616;connectionFactoryNames#ConnectionFactory oozie.service.JMSAccessorService.connectioncontext.impl org.apache.oozie.jms.DefaultConnectionContext Specifies the Connection Context implementation oozie.service.ConfigurationService.ignore.system.properties oozie.service.AuthorizationService.security.enabled Specifies "oozie.*" properties to cannot be overriden via Java system properties. Property names must be separted by commas. oozie.service.ConfigurationService.verify.available.properties true Specifies whether the available configurations check is enabled or not. oozie.service.SchedulerService.threads 10 The number of threads to be used by the SchedulerService to run deamon tasks. If maxed out, scheduled daemon tasks will be queued up and delayed until threads become available. oozie.service.AuthorizationService.authorization.enabled false Specifies whether security (user name/admin role) is enabled or not. If disabled any user can manage Oozie system and manage any job. oozie.service.AuthorizationService.default.group.as.acl false Enables old behavior where the User's default group is the job's ACL. oozie.service.InstrumentationService.logging.interval 60 Interval, in seconds, at which instrumentation should be logged by the InstrumentationService. If set to 0 it will not log instrumentation data. oozie.service.PurgeService.older.than 30 Completed workflow jobs older than this value, in days, will be purged by the PurgeService. oozie.service.PurgeService.coord.older.than 7 Completed coordinator jobs older than this value, in days, will be purged by the PurgeService. oozie.service.PurgeService.bundle.older.than 7 Completed bundle jobs older than this value, in days, will be purged by the PurgeService. oozie.service.PurgeService.purge.old.coord.action false Whether to purge completed workflows and their corresponding coordinator actions of long running coordinator jobs if the completed workflow jobs are older than the value specified in oozie.service.PurgeService.older.than. oozie.service.PurgeService.purge.limit 100 Completed Actions purge - limit each purge to this value oozie.service.PurgeService.purge.interval 3600 Interval at which the purge service will run, in seconds. oozie.service.RecoveryService.wf.actions.older.than 120 Age of the actions which are eligible to be queued for recovery, in seconds. oozie.service.RecoveryService.wf.actions.created.time.interval 7 Created time period of the actions which are eligible to be queued for recovery in days. oozie.service.RecoveryService.callable.batch.size 10 This value determines the number of callable which will be batched together to be executed by a single thread. oozie.service.RecoveryService.push.dependency.interval 200 This value determines the delay for push missing dependency command queueing in Recovery Service oozie.service.RecoveryService.interval 60 Interval at which the RecoverService will run, in seconds. oozie.service.RecoveryService.coord.older.than 600 Age of the Coordinator jobs or actions which are eligible to be queued for recovery, in seconds. 
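To make the PurgeService retention knobs above concrete, a sketch follows; retaining completed workflows for 14 days instead of the 30-day default is an arbitrary illustration, and the purge interval keeps the hourly default.
<!-- Sketch only: shorten completed-workflow retention, keep the hourly purge run -->
<property>
  <name>oozie.service.PurgeService.older.than</name>
  <value>14</value>
</property>
<property>
  <name>oozie.service.PurgeService.purge.interval</name>
  <value>3600</value>
</property>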
oozie.service.RecoveryService.bundle.older.than 600 Age of the Bundle jobs which are eligible to be queued for recovery, in seconds. oozie.service.CallableQueueService.queue.size 10000 Max callable queue size oozie.service.CallableQueueService.threads 10 Number of threads used for executing callables oozie.service.CallableQueueService.callable.concurrency 3 Maximum concurrency for a given callable type. Each command is a callable type (submit, start, run, signal, job, jobs, suspend,resume, etc). Each action type is a callable type (Map-Reduce, Pig, SSH, FS, sub-workflow, etc). All commands that use action executors (action-start, action-end, action-kill and action-check) use the action type as the callable type. oozie.service.CallableQueueService.callable.next.eligible true If true, when a callable in the queue has already reached max concurrency, Oozie continuously find next one which has not yet reach max concurrency. oozie.service.CallableQueueService.InterruptMapMaxSize 500 Maximum Size of the Interrupt Map, the interrupt element will not be inserted in the map if exceeded the size. oozie.service.CallableQueueService.InterruptTypes kill,resume,suspend,bundle_kill,bundle_resume,bundle_suspend,coord_kill,coord_change,coord_resume,coord_suspend Getting the types of XCommands that are considered to be of Interrupt type oozie.service.CoordMaterializeTriggerService.lookup.interval 300 Coordinator Job Lookup interval.(in seconds). oozie.service.CoordMaterializeTriggerService.materialization.window 3600 Coordinator Job Lookup command materialized each job for this next "window" duration oozie.service.CoordMaterializeTriggerService.callable.batch.size 10 This value determines the number of callable which will be batched together to be executed by a single thread. oozie.service.CoordMaterializeTriggerService.materialization.system.limit 50 This value determines the number of coordinator jobs to be materialized at a given time. oozie.service.coord.normal.default.timeout 120 Default timeout for a coordinator action input check (in minutes) for normal job. -1 means infinite timeout oozie.service.coord.default.max.timeout 86400 Default maximum timeout for a coordinator action input check (in minutes). 86400= 60days oozie.service.coord.input.check.requeue.interval 60000 Command re-queue interval for coordinator data input check (in millisecond). oozie.service.coord.push.check.requeue.interval 600000 Command re-queue interval for push dependencies (in millisecond). oozie.service.coord.default.concurrency 1 Default concurrency for a coordinator job to determine how many maximum action should be executed at the same time. -1 means infinite concurrency. oozie.service.coord.default.throttle 12 Default throttle for a coordinator job to determine how many maximum action should be in WAITING state at the same time. oozie.service.coord.materialization.throttling.factor 0.05 Determine how many maximum actions should be in WAITING state for a single job at any time. The value is calculated by this factor X the total queue size. oozie.service.coord.check.maximum.frequency true When true, Oozie will reject any coordinators with a frequency faster than 5 minutes. It is not recommended to disable this check or submit coordinators with frequencies faster than 5 minutes: doing so can cause unintended behavior and additional system stress. 
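The coordinator defaults above can be tuned per deployment; the sketch below halves the default input-check timeout and allows two concurrent actions per coordinator job, with values chosen only for illustration.
<!-- Sketch only: coordinator input-check timeout (minutes) and per-job concurrency -->
<property>
  <name>oozie.service.coord.normal.default.timeout</name>
  <value>60</value>
</property>
<property>
  <name>oozie.service.coord.default.concurrency</name>
  <value>2</value>
</property>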
oozie.service.ELService.groups job-submit,workflow,wf-sla-submit,coord-job-submit-freq,coord-job-submit-nofuncs,coord-job-submit-data,coord-job-submit-instances,coord-sla-submit,coord-action-create,coord-action-create-inst,coord-sla-create,coord-action-start,coord-job-wait-timeout List of groups for different ELServices oozie.service.ELService.constants.job-submit EL constant declarations, separated by commas, format is [PREFIX:]NAME=CLASS#CONSTANT. oozie.service.ELService.functions.job-submit EL functions declarations, separated by commas, format is [PREFIX:]NAME=CLASS#METHOD. oozie.service.ELService.ext.constants.job-submit EL constant declarations, separated by commas, format is [PREFIX:]NAME=CLASS#CONSTANT. This property is a convenience property to add extensions without having to include all the built in ones. oozie.service.ELService.ext.functions.job-submit EL functions declarations, separated by commas, format is [PREFIX:]NAME=CLASS#METHOD. This property is a convenience property to add extensions without having to include all the built in ones. oozie.service.ELService.constants.workflow KB=org.apache.oozie.util.ELConstantsFunctions#KB, MB=org.apache.oozie.util.ELConstantsFunctions#MB, GB=org.apache.oozie.util.ELConstantsFunctions#GB, TB=org.apache.oozie.util.ELConstantsFunctions#TB, PB=org.apache.oozie.util.ELConstantsFunctions#PB, RECORDS=org.apache.oozie.action.hadoop.HadoopELFunctions#RECORDS, MAP_IN=org.apache.oozie.action.hadoop.HadoopELFunctions#MAP_IN, MAP_OUT=org.apache.oozie.action.hadoop.HadoopELFunctions#MAP_OUT, REDUCE_IN=org.apache.oozie.action.hadoop.HadoopELFunctions#REDUCE_IN, REDUCE_OUT=org.apache.oozie.action.hadoop.HadoopELFunctions#REDUCE_OUT, GROUPS=org.apache.oozie.action.hadoop.HadoopELFunctions#GROUPS EL constant declarations, separated by commas, format is [PREFIX:]NAME=CLASS#CONSTANT. oozie.service.ELService.ext.constants.workflow EL constant declarations, separated by commas, format is [PREFIX:]NAME=CLASS#CONSTANT. This property is a convenience property to add extensions to the built in executors without having to include all the built in ones. 
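The ext.constants/ext.functions properties above exist so that extra EL declarations can be registered without repeating the built-in lists. The sketch below registers one hypothetical workflow EL function; com.example.oozie.MyELFunctions and the myext: prefix are invented for illustration, but the [PREFIX:]NAME=CLASS#METHOD format is the one documented above.
<!-- Sketch only: register a hypothetical extension EL function for the workflow group -->
<property>
  <name>oozie.service.ELService.ext.functions.workflow</name>
  <value>myext:today=com.example.oozie.MyELFunctions#today</value>
</property>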
oozie.service.ELService.functions.workflow firstNotNull=org.apache.oozie.util.ELConstantsFunctions#firstNotNull, concat=org.apache.oozie.util.ELConstantsFunctions#concat, replaceAll=org.apache.oozie.util.ELConstantsFunctions#replaceAll, appendAll=org.apache.oozie.util.ELConstantsFunctions#appendAll, trim=org.apache.oozie.util.ELConstantsFunctions#trim, timestamp=org.apache.oozie.util.ELConstantsFunctions#timestamp, urlEncode=org.apache.oozie.util.ELConstantsFunctions#urlEncode, toJsonStr=org.apache.oozie.util.ELConstantsFunctions#toJsonStr, toPropertiesStr=org.apache.oozie.util.ELConstantsFunctions#toPropertiesStr, toConfigurationStr=org.apache.oozie.util.ELConstantsFunctions#toConfigurationStr, wf:id=org.apache.oozie.DagELFunctions#wf_id, wf:name=org.apache.oozie.DagELFunctions#wf_name, wf:appPath=org.apache.oozie.DagELFunctions#wf_appPath, wf:conf=org.apache.oozie.DagELFunctions#wf_conf, wf:user=org.apache.oozie.DagELFunctions#wf_user, wf:group=org.apache.oozie.DagELFunctions#wf_group, wf:callback=org.apache.oozie.DagELFunctions#wf_callback, wf:transition=org.apache.oozie.DagELFunctions#wf_transition, wf:lastErrorNode=org.apache.oozie.DagELFunctions#wf_lastErrorNode, wf:errorCode=org.apache.oozie.DagELFunctions#wf_errorCode, wf:errorMessage=org.apache.oozie.DagELFunctions#wf_errorMessage, wf:run=org.apache.oozie.DagELFunctions#wf_run, wf:actionData=org.apache.oozie.DagELFunctions#wf_actionData, wf:actionExternalId=org.apache.oozie.DagELFunctions#wf_actionExternalId, wf:actionTrackerUri=org.apache.oozie.DagELFunctions#wf_actionTrackerUri, wf:actionExternalStatus=org.apache.oozie.DagELFunctions#wf_actionExternalStatus, hadoop:counters=org.apache.oozie.action.hadoop.HadoopELFunctions#hadoop_counters, hadoop:conf=org.apache.oozie.action.hadoop.HadoopELFunctions#hadoop_conf, fs:exists=org.apache.oozie.action.hadoop.FsELFunctions#fs_exists, fs:isDir=org.apache.oozie.action.hadoop.FsELFunctions#fs_isDir, fs:dirSize=org.apache.oozie.action.hadoop.FsELFunctions#fs_dirSize, fs:fileSize=org.apache.oozie.action.hadoop.FsELFunctions#fs_fileSize, fs:blockSize=org.apache.oozie.action.hadoop.FsELFunctions#fs_blockSize, hcat:exists=org.apache.oozie.coord.HCatELFunctions#hcat_exists EL functions declarations, separated by commas, format is [PREFIX:]NAME=CLASS#METHOD. oozie.service.WorkflowAppService.WorkflowDefinitionMaxLength 100000 The maximum length of the workflow definition in bytes An error will be reported if the length exceeds the given maximum oozie.service.ELService.ext.functions.workflow EL functions declarations, separated by commas, format is [PREFIX:]NAME=CLASS#METHOD. This property is a convenience property to add extensions to the built in executors without having to include all the built in ones. oozie.service.ELService.constants.wf-sla-submit MINUTES=org.apache.oozie.util.ELConstantsFunctions#SUBMIT_MINUTES, HOURS=org.apache.oozie.util.ELConstantsFunctions#SUBMIT_HOURS, DAYS=org.apache.oozie.util.ELConstantsFunctions#SUBMIT_DAYS EL constant declarations, separated by commas, format is [PREFIX:]NAME=CLASS#CONSTANT. oozie.service.ELService.ext.constants.wf-sla-submit EL constant declarations, separated by commas, format is [PREFIX:]NAME=CLASS#CONSTANT. This property is a convenience property to add extensions to the built in executors without having to include all the built in ones. oozie.service.ELService.functions.wf-sla-submit EL functions declarations, separated by commas, format is [PREFIX:]NAME=CLASS#METHOD. 
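As a usage sketch for two of the wf: functions registered above, a workflow.xml configuration fragment might pass the workflow id and submitting user to an action; the example.* property names are arbitrary placeholders.
<!-- Sketch only: referencing wf:id() and wf:user() from the list above in an action configuration -->
<property>
  <name>example.workflow.id</name>
  <value>${wf:id()}</value>
</property>
<property>
  <name>example.submitting.user</name>
  <value>${wf:user()}</value>
</property>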
oozie.service.ELService.ext.functions.wf-sla-submit EL functions declarations, separated by commas, format is [PREFIX:]NAME=CLASS#METHOD. This property is a convenience property to add extensions to the built in executors without having to include all the built in ones. l oozie.service.ELService.constants.coord-job-submit-freq EL constant declarations, separated by commas, format is [PREFIX:]NAME=CLASS#CONSTANT. oozie.service.ELService.ext.constants.coord-job-submit-freq EL constant declarations, separated by commas, format is [PREFIX:]NAME=CLASS#CONSTANT. This property is a convenience property to add extensions to the built in executors without having to include all the built in ones. oozie.service.ELService.functions.coord-job-submit-freq coord:days=org.apache.oozie.coord.CoordELFunctions#ph1_coord_days, coord:months=org.apache.oozie.coord.CoordELFunctions#ph1_coord_months, coord:hours=org.apache.oozie.coord.CoordELFunctions#ph1_coord_hours, coord:minutes=org.apache.oozie.coord.CoordELFunctions#ph1_coord_minutes, coord:endOfDays=org.apache.oozie.coord.CoordELFunctions#ph1_coord_endOfDays, coord:endOfMonths=org.apache.oozie.coord.CoordELFunctions#ph1_coord_endOfMonths, coord:conf=org.apache.oozie.coord.CoordELFunctions#coord_conf, coord:user=org.apache.oozie.coord.CoordELFunctions#coord_user, hadoop:conf=org.apache.oozie.action.hadoop.HadoopELFunctions#hadoop_conf EL functions declarations, separated by commas, format is [PREFIX:]NAME=CLASS#METHOD. oozie.service.ELService.ext.functions.coord-job-submit-freq EL functions declarations, separated by commas, format is [PREFIX:]NAME=CLASS#METHOD. This property is a convenience property to add extensions to the built in executors without having to include all the built in ones. oozie.service.ELService.constants.coord-job-wait-timeout EL functions declarations, separated by commas, format is [PREFIX:]NAME=CLASS#METHOD. oozie.service.ELService.ext.constants.coord-job-wait-timeout EL functions declarations, separated by commas, format is [PREFIX:]NAME=CLASS#METHOD. This property is a convenience property to add extensions without having to include all the built in ones. oozie.service.ELService.functions.coord-job-wait-timeout coord:days=org.apache.oozie.coord.CoordELFunctions#ph1_coord_days, coord:months=org.apache.oozie.coord.CoordELFunctions#ph1_coord_months, coord:hours=org.apache.oozie.coord.CoordELFunctions#ph1_coord_hours, coord:minutes=org.apache.oozie.coord.CoordELFunctions#ph1_coord_minutes, hadoop:conf=org.apache.oozie.action.hadoop.HadoopELFunctions#hadoop_conf EL functions declarations, separated by commas, format is [PREFIX:]NAME=CLASS#METHOD. oozie.service.ELService.ext.functions.coord-job-wait-timeout EL functions declarations, separated by commas, format is [PREFIX:]NAME=CLASS#METHOD. This property is a convenience property to add extensions without having to include all the built in ones. oozie.service.ELService.constants.coord-job-submit-nofuncs MINUTE=org.apache.oozie.coord.CoordELConstants#SUBMIT_MINUTE, HOUR=org.apache.oozie.coord.CoordELConstants#SUBMIT_HOUR, DAY=org.apache.oozie.coord.CoordELConstants#SUBMIT_DAY, MONTH=org.apache.oozie.coord.CoordELConstants#SUBMIT_MONTH, YEAR=org.apache.oozie.coord.CoordELConstants#SUBMIT_YEAR EL constant declarations, separated by commas, format is [PREFIX:]NAME=CLASS#CONSTANT. oozie.service.ELService.ext.constants.coord-job-submit-nofuncs EL constant declarations, separated by commas, format is [PREFIX:]NAME=CLASS#CONSTANT. 
This property is a convenience property to add extensions to the built in executors without having to include all the built in ones. oozie.service.ELService.functions.coord-job-submit-nofuncs coord:conf=org.apache.oozie.coord.CoordELFunctions#coord_conf, coord:user=org.apache.oozie.coord.CoordELFunctions#coord_user, hadoop:conf=org.apache.oozie.action.hadoop.HadoopELFunctions#hadoop_conf EL functions declarations, separated by commas, format is [PREFIX:]NAME=CLASS#METHOD. oozie.service.ELService.ext.functions.coord-job-submit-nofuncs EL functions declarations, separated by commas, format is [PREFIX:]NAME=CLASS#METHOD. This property is a convenience property to add extensions to the built in executors without having to include all the built in ones. oozie.service.ELService.constants.coord-job-submit-instances EL constant declarations, separated by commas, format is [PREFIX:]NAME=CLASS#CONSTANT. oozie.service.ELService.ext.constants.coord-job-submit-instances EL constant declarations, separated by commas, format is [PREFIX:]NAME=CLASS#CONSTANT. This property is a convenience property to add extensions to the built in executors without having to include all the built in ones. oozie.service.ELService.functions.coord-job-submit-instances coord:hoursInDay=org.apache.oozie.coord.CoordELFunctions#ph1_coord_hoursInDay_echo, coord:daysInMonth=org.apache.oozie.coord.CoordELFunctions#ph1_coord_daysInMonth_echo, coord:tzOffset=org.apache.oozie.coord.CoordELFunctions#ph1_coord_tzOffset_echo, coord:current=org.apache.oozie.coord.CoordELFunctions#ph1_coord_current_echo, coord:currentRange=org.apache.oozie.coord.CoordELFunctions#ph1_coord_currentRange_echo, coord:offset=org.apache.oozie.coord.CoordELFunctions#ph1_coord_offset_echo, coord:latest=org.apache.oozie.coord.CoordELFunctions#ph1_coord_latest_echo, coord:latestRange=org.apache.oozie.coord.CoordELFunctions#ph1_coord_latestRange_echo, coord:future=org.apache.oozie.coord.CoordELFunctions#ph1_coord_future_echo, coord:futureRange=org.apache.oozie.coord.CoordELFunctions#ph1_coord_futureRange_echo, coord:formatTime=org.apache.oozie.coord.CoordELFunctions#ph1_coord_formatTime_echo, coord:conf=org.apache.oozie.coord.CoordELFunctions#coord_conf, coord:user=org.apache.oozie.coord.CoordELFunctions#coord_user, coord:absolute=org.apache.oozie.coord.CoordELFunctions#ph1_coord_absolute_echo, hadoop:conf=org.apache.oozie.action.hadoop.HadoopELFunctions#hadoop_conf EL functions declarations, separated by commas, format is [PREFIX:]NAME=CLASS#METHOD. oozie.service.ELService.ext.functions.coord-job-submit-instances EL functions declarations, separated by commas, format is [PREFIX:]NAME=CLASS#METHOD. This property is a convenience property to add extensions to the built in executors without having to include all the built in ones. oozie.service.ELService.constants.coord-job-submit-data EL constant declarations, separated by commas, format is [PREFIX:]NAME=CLASS#CONSTANT. oozie.service.ELService.ext.constants.coord-job-submit-data EL constant declarations, separated by commas, format is [PREFIX:]NAME=CLASS#CONSTANT. This property is a convenience property to add extensions to the built in executors without having to include all the built in ones. 
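The coord-job-submit-instances group above covers functions such as coord:current and coord:latest that are resolved when a coordinator is submitted. A hedged coordinator.xml sketch using one of them follows; the dataset name "logs" and the data-in name "input" are placeholders.
<!-- Sketch only: select the current dataset instance for an input dependency -->
<data-in name="input" dataset="logs">
  <instance>${coord:current(0)}</instance>
</data-in>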
oozie.service.ELService.functions.coord-job-submit-data coord:dataIn=org.apache.oozie.coord.CoordELFunctions#ph1_coord_dataIn_echo, coord:dataOut=org.apache.oozie.coord.CoordELFunctions#ph1_coord_dataOut_echo, coord:nominalTime=org.apache.oozie.coord.CoordELFunctions#ph1_coord_nominalTime_echo_wrap, coord:actualTime=org.apache.oozie.coord.CoordELFunctions#ph1_coord_actualTime_echo_wrap, coord:dateOffset=org.apache.oozie.coord.CoordELFunctions#ph1_coord_dateOffset_echo, coord:dateTzOffset=org.apache.oozie.coord.CoordELFunctions#ph1_coord_dateTzOffset_echo, coord:formatTime=org.apache.oozie.coord.CoordELFunctions#ph1_coord_formatTime_echo, coord:actionId=org.apache.oozie.coord.CoordELFunctions#ph1_coord_actionId_echo, coord:name=org.apache.oozie.coord.CoordELFunctions#ph1_coord_name_echo, coord:conf=org.apache.oozie.coord.CoordELFunctions#coord_conf, coord:user=org.apache.oozie.coord.CoordELFunctions#coord_user, coord:databaseIn=org.apache.oozie.coord.HCatELFunctions#ph1_coord_databaseIn_echo, coord:databaseOut=org.apache.oozie.coord.HCatELFunctions#ph1_coord_databaseOut_echo, coord:tableIn=org.apache.oozie.coord.HCatELFunctions#ph1_coord_tableIn_echo, coord:tableOut=org.apache.oozie.coord.HCatELFunctions#ph1_coord_tableOut_echo, coord:dataInPartitionFilter=org.apache.oozie.coord.HCatELFunctions#ph1_coord_dataInPartitionFilter_echo, coord:dataInPartitionMin=org.apache.oozie.coord.HCatELFunctions#ph1_coord_dataInPartitionMin_echo, coord:dataInPartitionMax=org.apache.oozie.coord.HCatELFunctions#ph1_coord_dataInPartitionMax_echo, coord:dataInPartitions=org.apache.oozie.coord.HCatELFunctions#ph1_coord_dataInPartitions_echo, coord:dataOutPartitions=org.apache.oozie.coord.HCatELFunctions#ph1_coord_dataOutPartitions_echo, coord:dataOutPartitionValue=org.apache.oozie.coord.HCatELFunctions#ph1_coord_dataOutPartitionValue_echo, hadoop:conf=org.apache.oozie.action.hadoop.HadoopELFunctions#hadoop_conf EL functions declarations, separated by commas, format is [PREFIX:]NAME=CLASS#METHOD. oozie.service.ELService.ext.functions.coord-job-submit-data EL functions declarations, separated by commas, format is [PREFIX:]NAME=CLASS#METHOD. This property is a convenience property to add extensions to the built in executors without having to include all the built in ones. oozie.service.ELService.constants.coord-sla-submit MINUTES=org.apache.oozie.coord.CoordELConstants#SUBMIT_MINUTES, HOURS=org.apache.oozie.coord.CoordELConstants#SUBMIT_HOURS, DAYS=org.apache.oozie.coord.CoordELConstants#SUBMIT_DAYS EL constant declarations, separated by commas, format is [PREFIX:]NAME=CLASS#CONSTANT. oozie.service.ELService.ext.constants.coord-sla-submit EL constant declarations, separated by commas, format is [PREFIX:]NAME=CLASS#CONSTANT. This property is a convenience property to add extensions to the built in executors without having to include all the built in ones. 
oozie.service.ELService.functions.coord-sla-submit coord:dataOut=org.apache.oozie.coord.CoordELFunctions#ph1_coord_dataOut_echo, coord:nominalTime=org.apache.oozie.coord.CoordELFunctions#ph1_coord_nominalTime_echo_fixed, coord:actualTime=org.apache.oozie.coord.CoordELFunctions#ph1_coord_actualTime_echo_wrap, coord:dateOffset=org.apache.oozie.coord.CoordELFunctions#ph1_coord_dateOffset_echo, coord:dateTzOffset=org.apache.oozie.coord.CoordELFunctions#ph1_coord_dateTzOffset_echo, coord:formatTime=org.apache.oozie.coord.CoordELFunctions#ph1_coord_formatTime_echo, coord:actionId=org.apache.oozie.coord.CoordELFunctions#ph1_coord_actionId_echo, coord:name=org.apache.oozie.coord.CoordELFunctions#ph1_coord_name_echo, coord:conf=org.apache.oozie.coord.CoordELFunctions#coord_conf, coord:user=org.apache.oozie.coord.CoordELFunctions#coord_user, coord:databaseOut=org.apache.oozie.coord.HCatELFunctions#ph1_coord_databaseOut_echo, coord:tableOut=org.apache.oozie.coord.HCatELFunctions#ph1_coord_tableOut_echo, coord:dataOutPartitions=org.apache.oozie.coord.HCatELFunctions#ph1_coord_dataOutPartitions_echo, coord:dataOutPartitionValue=org.apache.oozie.coord.HCatELFunctions#ph1_coord_dataOutPartitionValue_echo, hadoop:conf=org.apache.oozie.action.hadoop.HadoopELFunctions#hadoop_conf EL functions declarations, separated by commas, format is [PREFIX:]NAME=CLASS#METHOD. oozie.service.ELService.ext.functions.coord-sla-submit EL functions declarations, separated by commas, format is [PREFIX:]NAME=CLASS#METHOD. This property is a convenience property to add extensions to the built in executors without having to include all the built in ones. oozie.service.ELService.constants.coord-action-create EL constant declarations, separated by commas, format is [PREFIX:]NAME=CLASS#CONSTANT. oozie.service.ELService.ext.constants.coord-action-create EL constant declarations, separated by commas, format is [PREFIX:]NAME=CLASS#CONSTANT. This property is a convenience property to add extensions to the built in executors without having to include all the built in ones. 
oozie.service.ELService.functions.coord-action-create coord:hoursInDay=org.apache.oozie.coord.CoordELFunctions#ph2_coord_hoursInDay, coord:daysInMonth=org.apache.oozie.coord.CoordELFunctions#ph2_coord_daysInMonth, coord:tzOffset=org.apache.oozie.coord.CoordELFunctions#ph2_coord_tzOffset, coord:current=org.apache.oozie.coord.CoordELFunctions#ph2_coord_current, coord:currentRange=org.apache.oozie.coord.CoordELFunctions#ph2_coord_currentRange, coord:offset=org.apache.oozie.coord.CoordELFunctions#ph2_coord_offset, coord:latest=org.apache.oozie.coord.CoordELFunctions#ph2_coord_latest_echo, coord:latestRange=org.apache.oozie.coord.CoordELFunctions#ph2_coord_latestRange_echo, coord:future=org.apache.oozie.coord.CoordELFunctions#ph2_coord_future_echo, coord:futureRange=org.apache.oozie.coord.CoordELFunctions#ph2_coord_futureRange_echo, coord:actionId=org.apache.oozie.coord.CoordELFunctions#ph2_coord_actionId, coord:name=org.apache.oozie.coord.CoordELFunctions#ph2_coord_name, coord:formatTime=org.apache.oozie.coord.CoordELFunctions#ph2_coord_formatTime, coord:conf=org.apache.oozie.coord.CoordELFunctions#coord_conf, coord:user=org.apache.oozie.coord.CoordELFunctions#coord_user, coord:absolute=org.apache.oozie.coord.CoordELFunctions#ph2_coord_absolute_echo, coord:absoluteRange=org.apache.oozie.coord.CoordELFunctions#ph2_coord_absolute_range, hadoop:conf=org.apache.oozie.action.hadoop.HadoopELFunctions#hadoop_conf EL functions declarations, separated by commas, format is [PREFIX:]NAME=CLASS#METHOD. oozie.service.ELService.ext.functions.coord-action-create EL functions declarations, separated by commas, format is [PREFIX:]NAME=CLASS#METHOD. This property is a convenience property to add extensions to the built in executors without having to include all the built in ones. oozie.service.ELService.constants.coord-action-create-inst EL constant declarations, separated by commas, format is [PREFIX:]NAME=CLASS#CONSTANT. oozie.service.ELService.ext.constants.coord-action-create-inst EL constant declarations, separated by commas, format is [PREFIX:]NAME=CLASS#CONSTANT. This property is a convenience property to add extensions to the built in executors without having to include all the built in ones. 
oozie.service.ELService.functions.coord-action-create-inst coord:hoursInDay=org.apache.oozie.coord.CoordELFunctions#ph2_coord_hoursInDay, coord:daysInMonth=org.apache.oozie.coord.CoordELFunctions#ph2_coord_daysInMonth, coord:tzOffset=org.apache.oozie.coord.CoordELFunctions#ph2_coord_tzOffset, coord:current=org.apache.oozie.coord.CoordELFunctions#ph2_coord_current_echo, coord:currentRange=org.apache.oozie.coord.CoordELFunctions#ph2_coord_currentRange_echo, coord:offset=org.apache.oozie.coord.CoordELFunctions#ph2_coord_offset_echo, coord:latest=org.apache.oozie.coord.CoordELFunctions#ph2_coord_latest_echo, coord:latestRange=org.apache.oozie.coord.CoordELFunctions#ph2_coord_latestRange_echo, coord:future=org.apache.oozie.coord.CoordELFunctions#ph2_coord_future_echo, coord:futureRange=org.apache.oozie.coord.CoordELFunctions#ph2_coord_futureRange_echo, coord:formatTime=org.apache.oozie.coord.CoordELFunctions#ph2_coord_formatTime, coord:conf=org.apache.oozie.coord.CoordELFunctions#coord_conf, coord:user=org.apache.oozie.coord.CoordELFunctions#coord_user, coord:absolute=org.apache.oozie.coord.CoordELFunctions#ph2_coord_absolute_echo, coord:absoluteRange=org.apache.oozie.coord.CoordELFunctions#ph2_coord_absolute_range, hadoop:conf=org.apache.oozie.action.hadoop.HadoopELFunctions#hadoop_conf EL functions declarations, separated by commas, format is [PREFIX:]NAME=CLASS#METHOD. oozie.service.ELService.ext.functions.coord-action-create-inst EL functions declarations, separated by commas, format is [PREFIX:]NAME=CLASS#METHOD. This property is a convenience property to add extensions to the built in executors without having to include all the built in ones. oozie.service.ELService.constants.coord-sla-create EL constant declarations, separated by commas, format is [PREFIX:]NAME=CLASS#CONSTANT. oozie.service.ELService.ext.constants.coord-sla-create MINUTES=org.apache.oozie.coord.CoordELConstants#SUBMIT_MINUTES, HOURS=org.apache.oozie.coord.CoordELConstants#SUBMIT_HOURS, DAYS=org.apache.oozie.coord.CoordELConstants#SUBMIT_DAYS EL constant declarations, separated by commas, format is [PREFIX:]NAME=CLASS#CONSTANT. This property is a convenience property to add extensions to the built in executors without having to include all the built in ones. 
oozie.service.ELService.functions.coord-sla-create coord:dataOut=org.apache.oozie.coord.CoordELFunctions#ph3_coord_dataOut, coord:nominalTime=org.apache.oozie.coord.CoordELFunctions#ph2_coord_nominalTime, coord:actualTime=org.apache.oozie.coord.CoordELFunctions#ph2_coord_actualTime, coord:dateOffset=org.apache.oozie.coord.CoordELFunctions#ph2_coord_dateOffset, coord:dateTzOffset=org.apache.oozie.coord.CoordELFunctions#ph2_coord_dateTzOffset, coord:formatTime=org.apache.oozie.coord.CoordELFunctions#ph2_coord_formatTime, coord:actionId=org.apache.oozie.coord.CoordELFunctions#ph2_coord_actionId, coord:name=org.apache.oozie.coord.CoordELFunctions#ph2_coord_name, coord:conf=org.apache.oozie.coord.CoordELFunctions#coord_conf, coord:user=org.apache.oozie.coord.CoordELFunctions#coord_user, coord:databaseOut=org.apache.oozie.coord.HCatELFunctions#ph3_coord_databaseOut, coord:tableOut=org.apache.oozie.coord.HCatELFunctions#ph3_coord_tableOut, coord:dataOutPartitions=org.apache.oozie.coord.HCatELFunctions#ph3_coord_dataOutPartitions, coord:dataOutPartitionValue=org.apache.oozie.coord.HCatELFunctions#ph3_coord_dataOutPartitionValue, hadoop:conf=org.apache.oozie.action.hadoop.HadoopELFunctions#hadoop_conf EL functions declarations, separated by commas, format is [PREFIX:]NAME=CLASS#METHOD. oozie.service.ELService.ext.functions.coord-sla-create EL functions declarations, separated by commas, format is [PREFIX:]NAME=CLASS#METHOD. This property is a convenience property to add extensions to the built in executors without having to include all the built in ones. oozie.service.ELService.constants.coord-action-start EL constant declarations, separated by commas, format is [PREFIX:]NAME=CLASS#CONSTANT. oozie.service.ELService.ext.constants.coord-action-start EL constant declarations, separated by commas, format is [PREFIX:]NAME=CLASS#CONSTANT. This property is a convenience property to add extensions to the built in executors without having to include all the built in ones. 
oozie.service.ELService.functions.coord-action-start coord:hoursInDay=org.apache.oozie.coord.CoordELFunctions#ph3_coord_hoursInDay, coord:daysInMonth=org.apache.oozie.coord.CoordELFunctions#ph3_coord_daysInMonth, coord:tzOffset=org.apache.oozie.coord.CoordELFunctions#ph3_coord_tzOffset, coord:latest=org.apache.oozie.coord.CoordELFunctions#ph3_coord_latest, coord:latestRange=org.apache.oozie.coord.CoordELFunctions#ph3_coord_latestRange, coord:future=org.apache.oozie.coord.CoordELFunctions#ph3_coord_future, coord:futureRange=org.apache.oozie.coord.CoordELFunctions#ph3_coord_futureRange, coord:dataIn=org.apache.oozie.coord.CoordELFunctions#ph3_coord_dataIn, coord:dataOut=org.apache.oozie.coord.CoordELFunctions#ph3_coord_dataOut, coord:nominalTime=org.apache.oozie.coord.CoordELFunctions#ph3_coord_nominalTime, coord:actualTime=org.apache.oozie.coord.CoordELFunctions#ph3_coord_actualTime, coord:dateOffset=org.apache.oozie.coord.CoordELFunctions#ph3_coord_dateOffset, coord:dateTzOffset=org.apache.oozie.coord.CoordELFunctions#ph3_coord_dateTzOffset, coord:formatTime=org.apache.oozie.coord.CoordELFunctions#ph3_coord_formatTime, coord:actionId=org.apache.oozie.coord.CoordELFunctions#ph3_coord_actionId, coord:name=org.apache.oozie.coord.CoordELFunctions#ph3_coord_name, coord:conf=org.apache.oozie.coord.CoordELFunctions#coord_conf, coord:user=org.apache.oozie.coord.CoordELFunctions#coord_user, coord:databaseIn=org.apache.oozie.coord.HCatELFunctions#ph3_coord_databaseIn, coord:databaseOut=org.apache.oozie.coord.HCatELFunctions#ph3_coord_databaseOut, coord:tableIn=org.apache.oozie.coord.HCatELFunctions#ph3_coord_tableIn, coord:tableOut=org.apache.oozie.coord.HCatELFunctions#ph3_coord_tableOut, coord:dataInPartitionFilter=org.apache.oozie.coord.HCatELFunctions#ph3_coord_dataInPartitionFilter, coord:dataInPartitionMin=org.apache.oozie.coord.HCatELFunctions#ph3_coord_dataInPartitionMin, coord:dataInPartitionMax=org.apache.oozie.coord.HCatELFunctions#ph3_coord_dataInPartitionMax, coord:dataInPartitions=org.apache.oozie.coord.HCatELFunctions#ph3_coord_dataInPartitions, coord:dataOutPartitions=org.apache.oozie.coord.HCatELFunctions#ph3_coord_dataOutPartitions, coord:dataOutPartitionValue=org.apache.oozie.coord.HCatELFunctions#ph3_coord_dataOutPartitionValue, hadoop:conf=org.apache.oozie.action.hadoop.HadoopELFunctions#hadoop_conf EL functions declarations, separated by commas, format is [PREFIX:]NAME=CLASS#METHOD. oozie.service.ELService.ext.functions.coord-action-start EL functions declarations, separated by commas, format is [PREFIX:]NAME=CLASS#METHOD. This property is a convenience property to add extensions to the built in executors without having to include all the built in ones. oozie.service.ELService.latest-el.use-current-time false Determine whether to use the current time to determine the latest dependency or the action creation time. This is for backward compatibility with older oozie behaviour. oozie.service.UUIDService.generator counter random : generated UUIDs will be random strings. counter: generated UUIDs generated will be a counter postfixed with the system startup time. oozie.service.DBLiteWorkflowStoreService.status.metrics.collection.interval 5 Workflow Status metrics collection interval in minutes. oozie.service.DBLiteWorkflowStoreService.status.metrics.window 3600 Workflow Status metrics collection window in seconds. Workflow status will be instrumented for the window. oozie.db.schema.name oozie Oozie DataBase Name oozie.service.JPAService.create.db.schema false Creates Oozie DB. 
If set to true, it creates the DB schema if it does not exist. If the DB schema exists, it is a NOP. If set to false, it does not create the DB schema. If the DB schema does not exist, start up fails. oozie.service.JPAService.validate.db.connection true Validates DB connections from the DB connection pool. If the 'oozie.service.JPAService.create.db.schema' property is set to true, this property is ignored. oozie.service.JPAService.validate.db.connection.eviction.interval 300000 Validates DB connections from the DB connection pool. When validate db connection 'TestWhileIdle' is true, the number of milliseconds to sleep between runs of the idle object evictor thread. oozie.service.JPAService.validate.db.connection.eviction.num 10 Validates DB connections from the DB connection pool. When validate db connection 'TestWhileIdle' is true, the number of objects to examine during each run of the idle object evictor thread. oozie.service.JPAService.connection.data.source org.apache.commons.dbcp.BasicDataSource DataSource to be used for connection pooling. oozie.service.JPAService.connection.properties DataSource connection properties. oozie.service.JPAService.jdbc.driver org.apache.derby.jdbc.EmbeddedDriver JDBC driver class. oozie.service.JPAService.jdbc.url jdbc:derby:${oozie.data.dir}/${oozie.db.schema.name}-db;create=true JDBC URL. oozie.service.JPAService.jdbc.username sa DB user name. oozie.service.JPAService.jdbc.password DB user password. IMPORTANT: if the password is empty, leave a 1-space string; the service trims the value, and if it is empty Configuration assumes it is NULL. IMPORTANT: if the StoreServicePasswordService is active, it will reset this value with the value given in the console. oozie.service.JPAService.pool.max.active.conn 10 Max number of connections. oozie.service.SchemaService.wf.schemas oozie-workflow-0.1.xsd,oozie-workflow-0.2.xsd,oozie-workflow-0.2.5.xsd,oozie-workflow-0.3.xsd,oozie-workflow-0.4.xsd, oozie-workflow-0.4.5.xsd,oozie-workflow-0.5.xsd, shell-action-0.1.xsd,shell-action-0.2.xsd,shell-action-0.3.xsd, email-action-0.1.xsd,email-action-0.2.xsd, hive-action-0.2.xsd,hive-action-0.3.xsd,hive-action-0.4.xsd,hive-action-0.5.xsd, sqoop-action-0.2.xsd,sqoop-action-0.3.xsd,sqoop-action-0.4.xsd, ssh-action-0.1.xsd,ssh-action-0.2.xsd, distcp-action-0.1.xsd,distcp-action-0.2.xsd, oozie-sla-0.1.xsd,oozie-sla-0.2.xsd, hive2-action-0.1.xsd, spark-action-0.1.xsd List of schemas for workflows (separated by commas). oozie.service.SchemaService.wf.ext.schemas List of additional schemas for workflows (separated by commas). oozie.service.SchemaService.coord.schemas oozie-coordinator-0.1.xsd,oozie-coordinator-0.2.xsd,oozie-coordinator-0.3.xsd,oozie-coordinator-0.4.xsd, oozie-sla-0.1.xsd,oozie-sla-0.2.xsd List of schemas for coordinators (separated by commas). oozie.service.SchemaService.coord.ext.schemas List of additional schemas for coordinators (separated by commas). oozie.service.SchemaService.bundle.schemas oozie-bundle-0.1.xsd,oozie-bundle-0.2.xsd List of schemas for bundles (separated by commas). oozie.service.SchemaService.bundle.ext.schemas List of additional schemas for bundles (separated by commas). oozie.service.SchemaService.sla.schemas gms-oozie-sla-0.1.xsd,oozie-sla-0.2.xsd List of schemas for semantic validation for GMS SLA (separated by commas). oozie.service.SchemaService.sla.ext.schemas List of additional schemas for semantic validation for GMS SLA (separated by commas). 
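The JPAService defaults above point at the embedded Derby database created under ${oozie.data.dir}. A common override is to swap the JDBC settings in oozie-site.xml for an external database; the sketch below assumes a MySQL server, and the host name, schema and credentials are placeholders rather than values shipped with this plugin:

  <property>
    <name>oozie.service.JPAService.jdbc.driver</name>
    <value>com.mysql.jdbc.Driver</value>
  </property>
  <property>
    <name>oozie.service.JPAService.jdbc.url</name>
    <value>jdbc:mysql://db.example.com:3306/${oozie.db.schema.name}</value>
  </property>
  <property>
    <name>oozie.service.JPAService.jdbc.username</name>
    <value>oozie</value>
  </property>
  <property>
    <name>oozie.service.JPAService.jdbc.password</name>
    <value>oozie-db-password</value>
  </property>

The matching JDBC driver jar would also need to be available on the Oozie server classpath for such an override to work.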
oozie.service.CallbackService.base.url ${oozie.base.url}/callback Base callback URL used by ActionExecutors. oozie.service.CallbackService.early.requeue.max.retries 5 If Oozie receives a callback too early (while the action is in PREP state), it will requeue the command this many times to give the action time to transition to RUNNING. oozie.servlet.CallbackServlet.max.data.len 2048 Max size in characters for the action completion data output. oozie.external.stats.max.size -1 Max size in bytes for action stats. -1 means infinite value. oozie.JobCommand.job.console.url ${oozie.base.url}?job= Base console URL for a workflow job. oozie.service.ActionService.executor.classes org.apache.oozie.action.decision.DecisionActionExecutor, org.apache.oozie.action.hadoop.JavaActionExecutor, org.apache.oozie.action.hadoop.FsActionExecutor, org.apache.oozie.action.hadoop.MapReduceActionExecutor, org.apache.oozie.action.hadoop.PigActionExecutor, org.apache.oozie.action.hadoop.HiveActionExecutor, org.apache.oozie.action.hadoop.ShellActionExecutor, org.apache.oozie.action.hadoop.SqoopActionExecutor, org.apache.oozie.action.hadoop.DistcpActionExecutor, org.apache.oozie.action.hadoop.Hive2ActionExecutor, org.apache.oozie.action.ssh.SshActionExecutor, org.apache.oozie.action.oozie.SubWorkflowActionExecutor, org.apache.oozie.action.email.EmailActionExecutor, org.apache.oozie.action.hadoop.SparkActionExecutor List of ActionExecutors classes (separated by commas). Only action types with associated executors can be used in workflows. oozie.service.ActionService.executor.ext.classes List of ActionExecutors extension classes (separated by commas). Only action types with associated executors can be used in workflows. This property is a convenience property to add extensions to the built in executors without having to include all the built in ones. oozie.service.ActionCheckerService.action.check.interval 60 The frequency at which the ActionCheckService will run. oozie.service.ActionCheckerService.action.check.delay 600 The time, in seconds, between an ActionCheck for the same action. oozie.service.ActionCheckerService.callable.batch.size 10 This value determines the number of actions which will be batched together to be executed by a single thread. oozie.service.StatusTransitService.statusTransit.interval 60 The frequency in seconds at which the StatusTransitService will run. oozie.service.StatusTransitService.backward.support.for.coord.status false true, if coordinator job submits using 'uri:oozie:coordinator:0.1' and wants to keep Oozie 2.x status transit. if set true, 1. SUCCEEDED state in coordinator job means materialization done. 2. No DONEWITHERROR state in coordinator job 3. No PAUSED or PREPPAUSED state in coordinator job 4. PREPSUSPENDED becomes SUSPENDED in coordinator job oozie.service.StatusTransitService.backward.support.for.states.without.error true true, if you want to keep Oozie 3.2 status transit. Change it to false for Oozie 4.x releases. if set true, No states like RUNNINGWITHERROR, SUSPENDEDWITHERROR and PAUSEDWITHERROR for coordinator and bundle oozie.service.PauseTransitService.PauseTransit.interval 60 The frequency in seconds at which the PauseTransitService will run. oozie.action.max.output.data 2048 Max size in characters for output data. oozie.action.fs.glob.max 1000 Maximum number of globbed files. oozie.action.launcher.mapreduce.job.ubertask.enable true Enables Uber Mode for the launcher job in YARN/Hadoop 2 (no effect in Hadoop 1) for all action types by default. 
This can be overridden on a per-action-type basis by setting oozie.action.#action-type#.launcher.mapreduce.job.ubertask.enable in oozie-site.xml (where #action-type# is the action type; for example, "pig"). And that can be overridden on a per-action basis by setting oozie.launcher.mapreduce.job.ubertask.enable in an action's configuration section in a workflow. In summary, the priority is this: 1. action's configuration section in a workflow 2. oozie.action.#action-type#.launcher.mapreduce.job.ubertask.enable in oozie-site 3. oozie.action.launcher.mapreduce.job.ubertask.enable in oozie-site oozie.action.shell.launcher.mapreduce.job.ubertask.enable false The Shell action may have issues with the $PATH environment when using Uber Mode, and so Uber Mode is disabled by default for it. See oozie.action.launcher.mapreduce.job.ubertask.enable oozie.action.launcher.yarn.timeline-service.enabled false Enables/disables getting delegation tokens for ATS for the launcher job in YARN/Hadoop 2.6 (no effect in Hadoop 1) for all action types by default if tez-site.xml is present in distributed cache. This can be overridden on a per-action basis by setting oozie.launcher.yarn.timeline-service.enabled in an action's configuration section in a workflow. oozie.action.retries.max 3 The number of retries for executing an action in case of failure oozie.action.retry.interval 10 The interval between retries of an action in case of failure oozie.action.retry.policy periodic Retry policy of an action in case of failure. Possible values are periodic/exponential oozie.action.ssh.delete.remote.tmp.dir true If set to true, it will delete temporary directory at the end of execution of ssh action. oozie.action.ssh.http.command curl Command to use for callback to oozie, normally is 'curl' or 'wget'. The command must available in PATH environment variable of the USER@HOST box shell. oozie.action.ssh.http.command.post.options --data-binary @#stdout --request POST --header "content-type:text/plain" The callback command POST options. Used when the ouptut of the ssh action is captured. oozie.action.ssh.allow.user.at.host true Specifies whether the user specified by the ssh action is allowed or is to be replaced by the Job user oozie.action.subworkflow.max.depth 50 The maximum depth for subworkflows. For example, if set to 3, then a workflow can start subwf1, which can start subwf2, which can start subwf3; but if subwf3 tries to start subwf4, then the action will fail. This is helpful in preventing errant workflows from starting infintely recursive subworkflows. oozie.service.HadoopAccessorService.kerberos.enabled false Indicates if Oozie is configured to use Kerberos. local.realm LOCALHOST Kerberos Realm used by Oozie and Hadoop. Using 'local.realm' to be aligned with Hadoop configuration oozie.service.HadoopAccessorService.keytab.file ${user.home}/oozie.keytab Location of the Oozie user keytab file. oozie.service.HadoopAccessorService.kerberos.principal ${user.name}/localhost@${local.realm} Kerberos principal for Oozie service. oozie.service.HadoopAccessorService.jobTracker.whitelist Whitelisted job tracker for Oozie service. oozie.service.HadoopAccessorService.nameNode.whitelist Whitelisted job tracker for Oozie service. oozie.service.HadoopAccessorService.hadoop.configurations *=hadoop-conf Comma separated AUTHORITY=HADOOP_CONF_DIR, where AUTHORITY is the HOST:PORT of the Hadoop service (JobTracker, YARN, HDFS). The wildcard '*' configuration is used when there is no exact match for an authority. 
The HADOOP_CONF_DIR contains the relevant Hadoop *-site.xml files. If the path is relative is looked within the Oozie configuration directory; though the path can be absolute (i.e. to point to Hadoop client conf/ directories in the local filesystem. oozie.service.HadoopAccessorService.action.configurations *=action-conf Comma separated AUTHORITY=ACTION_CONF_DIR, where AUTHORITY is the HOST:PORT of the Hadoop MapReduce service (JobTracker, YARN). The wildcard '*' configuration is used when there is no exact match for an authority. The ACTION_CONF_DIR may contain ACTION.xml files where ACTION is the action type ('java', 'map-reduce', 'pig', 'hive', 'sqoop', etc.). If the ACTION.xml file exists, its properties will be used as defaults properties for the action. If the path is relative is looked within the Oozie configuration directory; though the path can be absolute (i.e. to point to Hadoop client conf/ directories in the local filesystem. oozie.service.HadoopAccessorService.action.configurations.load.default.resources true true means that default and site xml files of hadoop (core-default, core-site, hdfs-default, hdfs-site, mapred-default, mapred-site, yarn-default, yarn-site) are parsed into actionConf on Oozie server. false means that site xml files are not loaded on server, instead loaded on launcher node. This is only done for pig and hive actions which handle loading those files automatically from the classpath on launcher task. It defaults to true. oozie.credentials.credentialclasses A list of credential class mapping for CredentialsProvider oozie.actions.main.classnames distcp=org.apache.hadoop.tools.DistCp A list of class name mapping for Action classes oozie.service.WorkflowAppService.system.libpath /user/${user.name}/share/lib System library path to use for workflow applications. This path is added to workflow application if their job properties sets the property 'oozie.use.system.libpath' to true. oozie.command.default.lock.timeout 5000 Default timeout (in milliseconds) for commands for acquiring an exclusive lock on an entity. oozie.command.default.requeue.delay 10000 Default time (in milliseconds) for commands that are requeued for delayed execution. oozie.service.LiteWorkflowStoreService.user.retry.max 3 Automatic retry max count for workflow action is 3 in default. oozie.service.LiteWorkflowStoreService.user.retry.inteval 10 Automatic retry interval for workflow action is in minutes and the default value is 10 minutes. oozie.service.LiteWorkflowStoreService.user.retry.error.code JA008,JA009,JA017,JA018,JA019,FS009,FS008,FS014 Automatic retry interval for workflow action is handled for these specified error code: FS009, FS008 is file exists error when using chmod in fs action. FS014 is permission error in fs action JA018 is output directory exists error in workflow map-reduce action. JA019 is error while executing distcp action. JA017 is job not exists error in action executor. JA008 is FileNotFoundException in action executor. JA009 is IOException in action executor. ALL is the any kind of error in action executor. oozie.service.LiteWorkflowStoreService.user.retry.error.code.ext Automatic retry interval for workflow action is handled for these specified extra error code: ALL is the any kind of error in action executor. oozie.service.LiteWorkflowStoreService.node.def.version _oozie_inst_v_1 NodeDef default version, _oozie_inst_v_0 or _oozie_inst_v_1 oozie.authentication.type simple Defines authentication used for Oozie HTTP endpoint. 
Supported values are: simple | kerberos | #AUTHENTICATION_HANDLER_CLASSNAME# oozie.server.authentication.type ${oozie.authentication.type} Defines the authentication used when an Oozie server communicates with another Oozie server over HTTP(S). Supported values are: simple | kerberos | #AUTHENTICATOR_CLASSNAME# oozie.authentication.token.validity 36000 Indicates how long (in seconds) an authentication token is valid before it has to be renewed. oozie.authentication.cookie.domain The domain to use for the HTTP cookie that stores the authentication token. In order for authentication to work correctly across multiple hosts the domain must be correctly set. oozie.authentication.simple.anonymous.allowed true Indicates if anonymous requests are allowed when using 'simple' authentication. oozie.authentication.kerberos.principal HTTP/localhost@${local.realm} Indicates the Kerberos principal to be used for HTTP endpoint. The principal MUST start with 'HTTP/' as per Kerberos HTTP SPNEGO specification. oozie.authentication.kerberos.keytab ${oozie.service.HadoopAccessorService.keytab.file} Location of the keytab file with the credentials for the principal. Referring to the same keytab file Oozie uses for its Kerberos credentials for Hadoop. oozie.authentication.kerberos.name.rules DEFAULT The kerberos name rules used to resolve kerberos principal names; refer to Hadoop's KerberosName for more details. oozie.coord.execution.none.tolerance 1 Default time tolerance in minutes after action nominal time for an action to be skipped when execution order is "NONE" oozie.coord.actions.default.length 1000 Default number of coordinator actions to be retrieved by the info command oozie.validate.ForkJoin true If true, fork and join should be validated at wf submission time. oozie.coord.action.get.all.attributes false Setting to true is not recommended as coord job/action info will bring all columns of the action in memory. Set it true only if backward compatibility for action/job info is required. oozie.service.HadoopAccessorService.supported.filesystems hdfs,hftp,webhdfs Enlist the different filesystems supported for federation. If wildcard "*" is specified, then ALL file schemes will be allowed. oozie.service.URIHandlerService.uri.handlers org.apache.oozie.dependency.FSURIHandler Enlist the different uri handlers supported for data availability checks. oozie.notification.url.connection.timeout 10000 Defines the timeout, in milliseconds, for Oozie HTTP notification callbacks. Oozie does HTTP notifications for workflow jobs which set the 'oozie.wf.action.notification.url', 'oozie.wf.worklfow.notification.url' and/or 'oozie.coord.action.notification.url' properties in their job.properties. Refer to section '5 Oozie Notifications' in the Workflow specification for details. oozie.hadoop-2.0.2-alpha.workaround.for.distributed.cache false Due to a bug in Hadoop 2.0.2-alpha, MAPREDUCE-4820, launcher jobs fail to set the distributed cache for the action job because the local JARs are implicitly included, triggering a duplicate check. This flag removes the distributed cache files for the action as they'll be included from the local JARs of the JobClient (MRApps) submitting the action job from the launcher. oozie.service.EventHandlerService.filter.app.types workflow_job, coordinator_action The app-types among workflow/coordinator/bundle job/action for which the events system is enabled. 
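The oozie.authentication.* defaults above use 'simple' authentication for the HTTP endpoint; switching it to Kerberos mainly involves the same small group of properties. A minimal oozie-site.xml sketch, where the realm and keytab path are placeholders:

  <property>
    <name>oozie.authentication.type</name>
    <value>kerberos</value>
  </property>
  <property>
    <name>oozie.authentication.kerberos.principal</name>
    <value>HTTP/oozie-host.example.com@EXAMPLE.COM</value>
  </property>
  <property>
    <name>oozie.authentication.kerberos.keytab</name>
    <value>/etc/security/keytabs/oozie.keytab</value>
  </property>

As the description above notes, the principal must start with 'HTTP/' for Kerberos HTTP SPNEGO to work.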
oozie.service.EventHandlerService.event.queue org.apache.oozie.event.MemoryEventQueue The implementation for EventQueue in use by the EventHandlerService. oozie.service.EventHandlerService.event.listeners org.apache.oozie.jms.JMSJobEventListener oozie.service.EventHandlerService.queue.size 10000 Maximum number of events to be contained in the event queue. oozie.service.EventHandlerService.worker.interval 30 The default interval (seconds) at which the worker threads will be scheduled to run and process events. oozie.service.EventHandlerService.batch.size 10 The batch size for batched draining per thread from the event queue. oozie.service.EventHandlerService.worker.threads 3 Number of worker threads to be scheduled to run and process events. oozie.sla.service.SLAService.capacity 5000 Maximum number of sla records to be contained in the memory structure. oozie.sla.service.SLAService.alert.events END_MISS Default types of SLA events for being alerted of. oozie.sla.service.SLAService.calculator.impl org.apache.oozie.sla.SLACalculatorMemory The implementation for SLACalculator in use by the SLAService. oozie.sla.service.SLAService.job.event.latency 90000 Time in milliseconds to account for the latency of getting the job status event to compare against and decide sla miss/met oozie.sla.service.SLAService.check.interval 30 Time interval, in seconds, at which SLA Worker will be scheduled to run oozie.sla.disable.alerts.older.than 48 Time threshold, in HOURS, for disabling SLA alerting for jobs whose nominal time is older than this. oozie.zookeeper.connection.string localhost:2181 Comma-separated values of host:port pairs of the ZooKeeper servers. oozie.zookeeper.namespace oozie The namespace to use. All of the Oozie Servers that are planning on talking to each other should have the same namespace. oozie.zookeeper.connection.timeout 180 Default ZK connection timeout (in sec). If the connection is lost for more than the timeout, then the Oozie server will shut itself down if oozie.zookeeper.server.shutdown.ontimeout is true. oozie.zookeeper.server.shutdown.ontimeout true If true, the Oozie server will shut itself down on ZK connection timeout. oozie.http.hostname localhost Oozie server host name. oozie.http.port 11000 Oozie server port. oozie.instance.id ${oozie.http.hostname} Each Oozie server should have its own unique instance id. The default is system property =${OOZIE_HTTP_HOSTNAME}= (i.e. the hostname). oozie.service.ShareLibService.mapping.file The sharelib mapping file contains a list of key=value pairs, where key is the sharelib name for the action and value is a comma separated list of DFS directories or jar files. Example. oozie.pig_10=hdfs:///share/lib/pig/pig-0.10.1/lib/ oozie.pig=hdfs:///share/lib/pig/pig-0.11.1/lib/ oozie.distcp=hdfs:///share/lib/hadoop-2.2.0/share/hadoop/tools/lib/hadoop-distcp-2.2.0.jar oozie.service.ShareLibService.fail.fast.on.startup false Fails server startup if sharelib initialization fails. oozie.service.ShareLibService.purge.interval 1 How often, in days, Oozie should check for old ShareLibs and LauncherLibs to purge from HDFS. oozie.service.ShareLibService.temp.sharelib.retention.days 7 ShareLib retention time in days. oozie.action.ship.launcher.jar false Specifies whether the launcher jar is shipped or not. oozie.action.jobinfo.enable false JobInfo will contain information of bundle, coordinator, workflow and actions. If enabled, the hadoop job will have a property (oozie.job.info) whose value is multiple key/value pairs separated by ",". 
This information can be used for analytics like how many oozie jobs are submitted for a particular period, what is the total number of failed pig jobs, etc. from mapreduce job history logs and configuration. Users can also add custom workflow properties to jobinfo by adding properties prefixed with "oozie.job.info." Eg. oozie.job.info="bundle.id=,bundle.name=,coord.name=,coord.nominal.time=,coord.name=,wf.id=, wf.name=,action.name=,action.type=,launcher=true" oozie.service.XLogStreamingService.max.log.scan.duration -1 Max log scan duration in hours. If log scan request end_date - start_date > value, then an exception is thrown to reduce the scan duration. -1 indicates no limit. oozie.service.XLogStreamingService.actionlist.max.log.scan.duration -1 Max log scan duration in hours for a coordinator job when a list of actions is specified. If log streaming request end_date - start_date > value, then an exception is thrown to reduce the scan duration. -1 indicates no limit. This setting is separate from max.log.scan.duration as we want to allow higher durations when actions are specified. oozie.service.JvmPauseMonitorService.warn-threshold.ms 10000 The JvmPauseMonitorService runs a thread that repeatedly tries to detect when the JVM pauses, which could indicate that the JVM or host machine is overloaded or other problems. This thread sleeps for 500ms; if it sleeps for significantly longer, then there is likely a problem. This property specifies the threshold for when Oozie should log a WARN level message; there is also a counter named "jvm.pause.warn-threshold". oozie.service.JvmPauseMonitorService.info-threshold.ms 1000 The JvmPauseMonitorService runs a thread that repeatedly tries to detect when the JVM pauses, which could indicate that the JVM or host machine is overloaded or other problems. This thread sleeps for 500ms; if it sleeps for significantly longer, then there is likely a problem. This property specifies the threshold for when Oozie should log an INFO level message; there is also a counter named "jvm.pause.info-threshold". oozie.service.ZKLocksService.locks.reaper.threshold 300 The frequency at which the ChildReaper will run. Duration should be in sec. Default is 5 min. oozie.service.ZKLocksService.locks.reaper.threads 2 Number of fixed threads used by ChildReaper to delete empty locks. oozie.service.AbandonedCoordCheckerService.check.interval 1440 Interval, in minutes, at which AbandonedCoordCheckerService should run. oozie.service.AbandonedCoordCheckerService.check.delay 60 Delay, in minutes, at which AbandonedCoordCheckerService should run. oozie.service.AbandonedCoordCheckerService.failure.limit 25 Failure limit. A job is considered to be abandoned/faulty if the total number of actions in failed/timedout/suspended >= "Failure limit" and there are no succeeded actions. oozie.service.AbandonedCoordCheckerService.kill.jobs false If true, AbandonedCoordCheckerService will kill abandoned coords. oozie.service.AbandonedCoordCheckerService.job.older.than 2880 In minutes, a job will be considered abandoned/faulty if it is older than this value. oozie.notification.proxy System level proxy setting for job notifications. oozie.wf.rerun.disablechild false By setting this option, workflow rerun will be disabled if a parent workflow or coordinator exists, and it will only rerun through the parent. oozie.use.system.libpath false Default value of oozie.use.system.libpath. 
If user haven't specified =oozie.use.system.libpath= in the job.properties and this value is true and Oozie will include sharelib jars for workflow. oozie.service.PauseTransitService.callable.batch.size 10 This value determines the number of callable which will be batched together to be executed by a single thread. oozie.configuration.substitute.depth 20 This value determines the depth of substitution in configurations. If set -1, No limitation on substitution. oozie.service.SparkConfigurationService.spark.configurations *=spark-conf Comma separated AUTHORITY=SPARK_CONF_DIR, where AUTHORITY is the HOST:PORT of the ResourceManager of a YARN cluster. The wildcard '*' configuration is used when there is no exact match for an authority. The SPARK_CONF_DIR contains the relevant spark-defaults.conf properties file. If the path is relative is looked within the Oozie configuration directory; though the path can be absolute. This is only used when the Spark master is set to either "yarn-client" or "yarn-cluster". oozie.email.attachment.enabled true This value determines whether to support email attachment of a file on HDFS. Set it false if there is any security concern. ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1696419316.0 sahara-plugin-vanilla-10.0.0/sahara_plugin_vanilla/plugins/vanilla/v2_7_1/resources/yarn-default.xml0000664000175000017500000017324600000000000033565 0ustar00zuulzuul00000000000000 Factory to create client IPC classes. yarn.ipc.client.factory.class Factory to create server IPC classes. yarn.ipc.server.factory.class Factory to create serializeable records. yarn.ipc.record.factory.class RPC class implementation yarn.ipc.rpc.class org.apache.hadoop.yarn.ipc.HadoopYarnProtoRPC The hostname of the RM. yarn.resourcemanager.hostname 0.0.0.0 The address of the applications manager interface in the RM. yarn.resourcemanager.address ${yarn.resourcemanager.hostname}:8032 The actual address the server will bind to. If this optional address is set, the RPC and webapp servers will bind to this address and the port specified in yarn.resourcemanager.address and yarn.resourcemanager.webapp.address, respectively. This is most useful for making RM listen to all interfaces by setting to 0.0.0.0. yarn.resourcemanager.bind-host The number of threads used to handle applications manager requests. yarn.resourcemanager.client.thread-count 50 Number of threads used to launch/cleanup AM. yarn.resourcemanager.amlauncher.thread-count 50 Retry times to connect with NM. yarn.resourcemanager.nodemanager-connect-retries 10 The expiry interval for application master reporting. yarn.am.liveness-monitor.expiry-interval-ms 600000 The Kerberos principal for the resource manager. yarn.resourcemanager.principal The address of the scheduler interface. yarn.resourcemanager.scheduler.address ${yarn.resourcemanager.hostname}:8030 Number of threads to handle scheduler interface. yarn.resourcemanager.scheduler.client.thread-count 50 This configures the HTTP endpoint for Yarn Daemons.The following values are supported: - HTTP_ONLY : Service is provided only on http - HTTPS_ONLY : Service is provided only on https yarn.http.policy HTTP_ONLY The http address of the RM web application. yarn.resourcemanager.webapp.address ${yarn.resourcemanager.hostname}:8088 The https adddress of the RM web application. 
yarn.resourcemanager.webapp.https.address ${yarn.resourcemanager.hostname}:8090 yarn.resourcemanager.resource-tracker.address ${yarn.resourcemanager.hostname}:8031 Are acls enabled. yarn.acl.enable false ACL of who can be admin of the YARN cluster. yarn.admin.acl * The address of the RM admin interface. yarn.resourcemanager.admin.address ${yarn.resourcemanager.hostname}:8033 Number of threads used to handle RM admin interface. yarn.resourcemanager.admin.client.thread-count 1 Maximum time to wait to establish connection to ResourceManager. yarn.resourcemanager.connect.max-wait.ms 900000 How often to try connecting to the ResourceManager. yarn.resourcemanager.connect.retry-interval.ms 30000 The maximum number of application attempts. It's a global setting for all application masters. Each application master can specify its individual maximum number of application attempts via the API, but the individual number cannot be more than the global upper bound. If it is, the resourcemanager will override it. The default number is set to 2, to allow at least one retry for AM. yarn.resourcemanager.am.max-attempts 2 How often to check that containers are still alive. yarn.resourcemanager.container.liveness-monitor.interval-ms 600000 The keytab for the resource manager. yarn.resourcemanager.keytab /etc/krb5.keytab Flag to enable override of the default kerberos authentication filter with the RM authentication filter to allow authentication using delegation tokens(fallback to kerberos if the tokens are missing). Only applicable when the http authentication type is kerberos. yarn.resourcemanager.webapp.delegation-token-auth-filter.enabled true How long to wait until a node manager is considered dead. yarn.nm.liveness-monitor.expiry-interval-ms 600000 Path to file with nodes to include. yarn.resourcemanager.nodes.include-path Path to file with nodes to exclude. yarn.resourcemanager.nodes.exclude-path Number of threads to handle resource tracker calls. yarn.resourcemanager.resource-tracker.client.thread-count 50 The class to use as the resource scheduler. yarn.resourcemanager.scheduler.class org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler The minimum allocation for every container request at the RM, in MBs. Memory requests lower than this will throw a InvalidResourceRequestException. yarn.scheduler.minimum-allocation-mb 1024 The maximum allocation for every container request at the RM, in MBs. Memory requests higher than this will throw a InvalidResourceRequestException. yarn.scheduler.maximum-allocation-mb 8192 The minimum allocation for every container request at the RM, in terms of virtual CPU cores. Requests lower than this will throw a InvalidResourceRequestException. yarn.scheduler.minimum-allocation-vcores 1 The maximum allocation for every container request at the RM, in terms of virtual CPU cores. Requests higher than this will throw a InvalidResourceRequestException. yarn.scheduler.maximum-allocation-vcores 32 Enable RM to recover state after starting. If true, then yarn.resourcemanager.store.class must be specified. yarn.resourcemanager.recovery.enabled false Enable RM work preserving recovery. This configuration is private to YARN for experimenting the feature. yarn.resourcemanager.work-preserving-recovery.enabled true Set the amount of time RM waits before allocating new containers on work-preserving-recovery. Such wait period gives RM a chance to settle down resyncing with NMs in the cluster on recovery, before assigning new containers to applications. 
yarn.resourcemanager.work-preserving-recovery.scheduling-wait-ms 10000 The class to use as the persistent store. If org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore is used, the store is implicitly fenced; meaning a single ResourceManager is able to use the store at any point in time. More details on this implicit fencing, along with setting up appropriate ACLs is discussed under yarn.resourcemanager.zk-state-store.root-node.acl. yarn.resourcemanager.store.class org.apache.hadoop.yarn.server.resourcemanager.recovery.FileSystemRMStateStore The maximum number of completed applications RM state store keeps, less than or equals to ${yarn.resourcemanager.max-completed-applications}. By default, it equals to ${yarn.resourcemanager.max-completed-applications}. This ensures that the applications kept in the state store are consistent with the applications remembered in RM memory. Any values larger than ${yarn.resourcemanager.max-completed-applications} will be reset to ${yarn.resourcemanager.max-completed-applications}. Note that this value impacts the RM recovery performance.Typically, a smaller value indicates better performance on RM recovery. yarn.resourcemanager.state-store.max-completed-applications ${yarn.resourcemanager.max-completed-applications} Host:Port of the ZooKeeper server to be used by the RM. This must be supplied when using the ZooKeeper based implementation of the RM state store and/or embedded automatic failover in a HA setting. yarn.resourcemanager.zk-address Number of times RM tries to connect to ZooKeeper. yarn.resourcemanager.zk-num-retries 1000 Retry interval in milliseconds when connecting to ZooKeeper. When HA is enabled, the value here is NOT used. It is generated automatically from yarn.resourcemanager.zk-timeout-ms and yarn.resourcemanager.zk-num-retries. yarn.resourcemanager.zk-retry-interval-ms 1000 Full path of the ZooKeeper znode where RM state will be stored. This must be supplied when using org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore as the value for yarn.resourcemanager.store.class yarn.resourcemanager.zk-state-store.parent-path /rmstore ZooKeeper session timeout in milliseconds. Session expiration is managed by the ZooKeeper cluster itself, not by the client. This value is used by the cluster to determine when the client's session expires. Expirations happens when the cluster does not hear from the client within the specified session timeout period (i.e. no heartbeat). yarn.resourcemanager.zk-timeout-ms 10000 ACL's to be used for ZooKeeper znodes. yarn.resourcemanager.zk-acl world:anyone:rwcda ACLs to be used for the root znode when using ZKRMStateStore in a HA scenario for fencing. ZKRMStateStore supports implicit fencing to allow a single ResourceManager write-access to the store. For fencing, the ResourceManagers in the cluster share read-write-admin privileges on the root node, but the Active ResourceManager claims exclusive create-delete permissions. By default, when this property is not set, we use the ACLs from yarn.resourcemanager.zk-acl for shared admin access and rm-address:random-number for username-based exclusive create-delete access. This property allows users to set ACLs of their choice instead of using the default mechanism. For fencing to work, the ACLs should be carefully set differently on each ResourceManger such that all the ResourceManagers have shared admin access and the Active ResourceManger takes over (exclusively) the create-delete access. 
yarn.resourcemanager.zk-state-store.root-node.acl Specify the auths to be used for the ACL's specified in both the yarn.resourcemanager.zk-acl and yarn.resourcemanager.zk-state-store.root-node.acl properties. This takes a comma-separated list of authentication mechanisms, each of the form 'scheme:auth' (the same syntax used for the 'addAuth' command in the ZK CLI). yarn.resourcemanager.zk-auth URI pointing to the location of the FileSystem path where RM state will be stored. This must be supplied when using org.apache.hadoop.yarn.server.resourcemanager.recovery.FileSystemRMStateStore as the value for yarn.resourcemanager.store.class yarn.resourcemanager.fs.state-store.uri ${hadoop.tmp.dir}/yarn/system/rmstore hdfs client retry policy specification. hdfs client retry is always enabled. Specified in pairs of sleep-time and number-of-retries and (t0, n0), (t1, n1), ..., the first n0 retries sleep t0 milliseconds on average, the following n1 retries sleep t1 milliseconds on average, and so on. yarn.resourcemanager.fs.state-store.retry-policy-spec 2000, 500 the number of retries to recover from IOException in FileSystemRMStateStore. yarn.resourcemanager.fs.state-store.num-retries 0 Retry interval in milliseconds in FileSystemRMStateStore. yarn.resourcemanager.fs.state-store.retry-interval-ms 1000 Local path where the RM state will be stored when using org.apache.hadoop.yarn.server.resourcemanager.recovery.LeveldbRMStateStore as the value for yarn.resourcemanager.store.class yarn.resourcemanager.leveldb-state-store.path ${hadoop.tmp.dir}/yarn/system/rmstore Enable RM high-availability. When enabled, (1) The RM starts in the Standby mode by default, and transitions to the Active mode when prompted to. (2) The nodes in the RM ensemble are listed in yarn.resourcemanager.ha.rm-ids (3) The id of each RM either comes from yarn.resourcemanager.ha.id if yarn.resourcemanager.ha.id is explicitly specified or can be figured out by matching yarn.resourcemanager.address.{id} with local address (4) The actual physical addresses come from the configs of the pattern - {rpc-config}.{id} yarn.resourcemanager.ha.enabled false Enable automatic failover. By default, it is enabled only when HA is enabled yarn.resourcemanager.ha.automatic-failover.enabled true Enable embedded automatic failover. By default, it is enabled only when HA is enabled. The embedded elector relies on the RM state store to handle fencing, and is primarily intended to be used in conjunction with ZKRMStateStore. yarn.resourcemanager.ha.automatic-failover.embedded true The base znode path to use for storing leader information, when using ZooKeeper based leader election. yarn.resourcemanager.ha.automatic-failover.zk-base-path /yarn-leader-election Name of the cluster. In a HA setting, this is used to ensure the RM participates in leader election for this cluster and ensures it does not affect other clusters yarn.resourcemanager.cluster-id The list of RM nodes in the cluster when HA is enabled. See description of yarn.resourcemanager.ha .enabled for full details on how this is used. yarn.resourcemanager.ha.rm-ids The id (string) of the current RM. When HA is enabled, this is an optional config. The id of current RM can be set by explicitly specifying yarn.resourcemanager.ha.id or figured out by matching yarn.resourcemanager.address.{id} with local address See description of yarn.resourcemanager.ha.enabled for full details on how this is used. 
yarn.resourcemanager.ha.id When HA is enabled, the class to be used by Clients, AMs and NMs to failover to the Active RM. It should extend org.apache.hadoop.yarn.client.RMFailoverProxyProvider yarn.client.failover-proxy-provider org.apache.hadoop.yarn.client.ConfiguredRMFailoverProxyProvider When HA is enabled, the max number of times FailoverProxyProvider should attempt failover. When set, this overrides the yarn.resourcemanager.connect.max-wait.ms. When not set, this is inferred from yarn.resourcemanager.connect.max-wait.ms. yarn.client.failover-max-attempts When HA is enabled, the sleep base (in milliseconds) to be used for calculating the exponential delay between failovers. When set, this overrides the yarn.resourcemanager.connect.* settings. When not set, yarn.resourcemanager.connect.retry-interval.ms is used instead. yarn.client.failover-sleep-base-ms When HA is enabled, the maximum sleep time (in milliseconds) between failovers. When set, this overrides the yarn.resourcemanager.connect.* settings. When not set, yarn.resourcemanager.connect.retry-interval.ms is used instead. yarn.client.failover-sleep-max-ms When HA is enabled, the number of retries per attempt to connect to a ResourceManager. In other words, it is the ipc.client.connect.max.retries to be used during failover attempts yarn.client.failover-retries 0 When HA is enabled, the number of retries per attempt to connect to a ResourceManager on socket timeouts. In other words, it is the ipc.client.connect.max.retries.on.timeouts to be used during failover attempts yarn.client.failover-retries-on-socket-timeouts 0 The maximum number of completed applications RM keeps. yarn.resourcemanager.max-completed-applications 10000 Interval at which the delayed token removal thread runs yarn.resourcemanager.delayed.delegation-token.removal-interval-ms 30000 If true, ResourceManager will have proxy-user privileges. Use case: In a secure cluster, YARN requires the user hdfs delegation-tokens to do localization and log-aggregation on behalf of the user. If this is set to true, ResourceManager is able to request new hdfs delegation tokens on behalf of the user. This is needed by long-running-service, because the hdfs tokens will eventually expire and YARN requires new valid tokens to do localization and log-aggregation. Note that to enable this use case, the corresponding HDFS NameNode has to configure ResourceManager as the proxy-user so that ResourceManager can itself ask for new tokens on behalf of the user when tokens are past their max-life-time. yarn.resourcemanager.proxy-user-privileges.enabled false Interval for the roll over for the master key used to generate application tokens yarn.resourcemanager.am-rm-tokens.master-key-rolling-interval-secs 86400 Interval for the roll over for the master key used to generate container tokens. It is expected to be much greater than yarn.nm.liveness-monitor.expiry-interval-ms and yarn.resourcemanager.rm.container-allocation.expiry-interval-ms. Otherwise the behavior is undefined. yarn.resourcemanager.container-tokens.master-key-rolling-interval-secs 86400 The heart-beat interval in milliseconds for every NodeManager in the cluster. yarn.resourcemanager.nodemanagers.heartbeat-interval-ms 1000 The minimum allowed version of a connecting nodemanager. The valid values are NONE (no version checking), EqualToRM (the nodemanager's version is equal to or greater than the RM version), or a Version String. 
yarn.resourcemanager.nodemanager.minimum.version NONE Enable a set of periodic monitors (specified in yarn.resourcemanager.scheduler.monitor.policies) that affect the scheduler. yarn.resourcemanager.scheduler.monitor.enable false The list of SchedulingEditPolicy classes that interact with the scheduler. A particular module may be incompatible with the scheduler, other policies, or a configuration of either. yarn.resourcemanager.scheduler.monitor.policies org.apache.hadoop.yarn.server.resourcemanager.monitor.capacity.ProportionalCapacityPreemptionPolicy The class to use as the configuration provider. If org.apache.hadoop.yarn.LocalConfigurationProvider is used, the local configuration will be loaded. If org.apache.hadoop.yarn.FileSystemBasedConfigurationProvider is used, the configuration which will be loaded should be uploaded to remote File system first. yarn.resourcemanager.configuration.provider-class org.apache.hadoop.yarn.LocalConfigurationProvider The setting that controls whether yarn system metrics is published on the timeline server or not by RM. yarn.resourcemanager.system-metrics-publisher.enabled false Number of worker threads that send the yarn system metrics data. yarn.resourcemanager.system-metrics-publisher.dispatcher.pool-size 10 The hostname of the NM. yarn.nodemanager.hostname 0.0.0.0 The address of the container manager in the NM. yarn.nodemanager.address ${yarn.nodemanager.hostname}:0 The actual address the server will bind to. If this optional address is set, the RPC and webapp servers will bind to this address and the port specified in yarn.nodemanager.address and yarn.nodemanager.webapp.address, respectively. This is most useful for making NM listen to all interfaces by setting to 0.0.0.0. yarn.nodemanager.bind-host Environment variables that should be forwarded from the NodeManager's environment to the container's. yarn.nodemanager.admin-env MALLOC_ARENA_MAX=$MALLOC_ARENA_MAX Environment variables that containers may override rather than use NodeManager's default. yarn.nodemanager.env-whitelist JAVA_HOME,HADOOP_COMMON_HOME,HADOOP_HDFS_HOME,HADOOP_CONF_DIR,HADOOP_YARN_HOME who will execute(launch) the containers. yarn.nodemanager.container-executor.class org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor Number of threads container manager uses. yarn.nodemanager.container-manager.thread-count 20 Number of threads used in cleanup. yarn.nodemanager.delete.thread-count 4 Number of seconds after an application finishes before the nodemanager's DeletionService will delete the application's localized file directory and log directory. To diagnose Yarn application problems, set this property's value large enough (for example, to 600 = 10 minutes) to permit examination of these directories. After changing the property's value, you must restart the nodemanager in order for it to have an effect. The roots of Yarn applications' work directories is configurable with the yarn.nodemanager.local-dirs property (see below), and the roots of the Yarn applications' log directories is configurable with the yarn.nodemanager.log-dirs property (see also below). yarn.nodemanager.delete.debug-delay-sec 0 Keytab for NM. yarn.nodemanager.keytab /etc/krb5.keytab List of directories to store localized files in. An application's localized file directory will be found in: ${yarn.nodemanager.local-dirs}/usercache/${user}/appcache/application_${appid}. Individual containers' work directories, called container_${contid}, will be subdirectories of this. 
yarn.nodemanager.local-dirs ${hadoop.tmp.dir}/nm-local-dir It limits the maximum number of files which will be localized in a single local directory. If the limit is reached then sub-directories will be created and new files will be localized in them. If it is set to a value less than or equal to 36 [which are sub-directories (0-9 and then a-z)] then NodeManager will fail to start. For example; [for public cache] if this is configured with a value of 40 ( 4 files + 36 sub-directories) and the local-dir is "/tmp/local-dir1" then it will allow 4 files to be created directly inside "/tmp/local-dir1/filecache". For files that are localized further it will create a sub-directory "0" inside "/tmp/local-dir1/filecache" and will localize files inside it until it becomes full. If a file is removed from a sub-directory that is marked full, then that sub-directory will be used back again to localize files. yarn.nodemanager.local-cache.max-files-per-directory 8192 Address where the localizer IPC is. yarn.nodemanager.localizer.address ${yarn.nodemanager.hostname}:8040 Interval in between cache cleanups. yarn.nodemanager.localizer.cache.cleanup.interval-ms 600000 Target size of localizer cache in MB, per nodemanager. It is a target retention size that only includes resources with PUBLIC and PRIVATE visibility and excludes resources with APPLICATION visibility yarn.nodemanager.localizer.cache.target-size-mb 10240 Number of threads to handle localization requests. yarn.nodemanager.localizer.client.thread-count 5 Number of threads to use for localization fetching. yarn.nodemanager.localizer.fetch.thread-count 4 Where to store container logs. An application's localized log directory will be found in ${yarn.nodemanager.log-dirs}/application_${appid}. Individual containers' log directories will be below this, in directories named container_{$contid}. Each container directory will contain the files stderr, stdin, and syslog generated by that container. yarn.nodemanager.log-dirs ${yarn.log.dir}/userlogs Whether to enable log aggregation. Log aggregation collects each container's logs and moves these logs onto a file-system, for e.g. HDFS, after the application completes. Users can configure the "yarn.nodemanager.remote-app-log-dir" and "yarn.nodemanager.remote-app-log-dir-suffix" properties to determine where these logs are moved to. Users can access the logs via the Application Timeline Server. yarn.log-aggregation-enable false How long to keep aggregation logs before deleting them. -1 disables. Be careful set this too small and you will spam the name node. yarn.log-aggregation.retain-seconds -1 How long to wait between aggregated log retention checks. If set to 0 or a negative value then the value is computed as one-tenth of the aggregated log retention time. Be careful set this too small and you will spam the name node. yarn.log-aggregation.retain-check-interval-seconds -1 Time in seconds to retain user logs. Only applicable if log aggregation is disabled yarn.nodemanager.log.retain-seconds 10800 Where to aggregate logs to. yarn.nodemanager.remote-app-log-dir /tmp/logs The remote log dir will be created at {yarn.nodemanager.remote-app-log-dir}/${user}/{thisParam} yarn.nodemanager.remote-app-log-dir-suffix logs Amount of physical memory, in MB, that can be allocated for containers. yarn.nodemanager.resource.memory-mb 8192 Whether physical memory limits will be enforced for containers. yarn.nodemanager.pmem-check-enabled true Whether virtual memory limits will be enforced for containers. 
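The log-aggregation entries a few properties back describe a fallback rule: a zero or negative retention-check interval is replaced by one tenth of the retention time. A small sketch of that rule as described (the seven-day retention figure is only an example):

    def effective_retain_check_interval(retain_seconds, check_interval_seconds):
        # Rule described for yarn.log-aggregation.retain-check-interval-seconds:
        # non-positive values fall back to one tenth of the retention time.
        if check_interval_seconds <= 0:
            return retain_seconds // 10
        return check_interval_seconds

    # With yarn.log-aggregation.retain-seconds = 604800 (7 days) and the
    # default check interval of -1, checks would run every 60480 seconds.
    print(effective_retain_check_interval(604800, -1))  # -> 60480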
yarn.nodemanager.vmem-check-enabled true Ratio between virtual memory to physical memory when setting memory limits for containers. Container allocations are expressed in terms of physical memory, and virtual memory usage is allowed to exceed this allocation by this ratio. yarn.nodemanager.vmem-pmem-ratio 2.1 Number of vcores that can be allocated for containers. This is used by the RM scheduler when allocating resources for containers. This is not used to limit the number of physical cores used by YARN containers. yarn.nodemanager.resource.cpu-vcores 8 Percentage of CPU that can be allocated for containers. This setting allows users to limit the amount of CPU that YARN containers use. Currently functional only on Linux using cgroups. The default is to use 100% of CPU. yarn.nodemanager.resource.percentage-physical-cpu-limit 100 NM Webapp address. yarn.nodemanager.webapp.address ${yarn.nodemanager.hostname}:8042 How often to monitor containers. yarn.nodemanager.container-monitor.interval-ms 3000 Class that calculates containers current resource utilization. yarn.nodemanager.container-monitor.resource-calculator.class Frequency of running node health script. yarn.nodemanager.health-checker.interval-ms 600000 Script time out period. yarn.nodemanager.health-checker.script.timeout-ms 1200000 The health check script to run. yarn.nodemanager.health-checker.script.path The arguments to pass to the health check script. yarn.nodemanager.health-checker.script.opts Frequency of running disk health checker code. yarn.nodemanager.disk-health-checker.interval-ms 120000 The minimum fraction of number of disks to be healthy for the nodemanager to launch new containers. This correspond to both yarn-nodemanager.local-dirs and yarn.nodemanager.log-dirs. i.e. If there are less number of healthy local-dirs (or log-dirs) available, then new containers will not be launched on this node. yarn.nodemanager.disk-health-checker.min-healthy-disks 0.25 The maximum percentage of disk space utilization allowed after which a disk is marked as bad. Values can range from 0.0 to 100.0. If the value is greater than or equal to 100, the nodemanager will check for full disk. This applies to yarn-nodemanager.local-dirs and yarn.nodemanager.log-dirs. yarn.nodemanager.disk-health-checker.max-disk-utilization-per-disk-percentage 90.0 The minimum space that must be available on a disk for it to be used. This applies to yarn-nodemanager.local-dirs and yarn.nodemanager.log-dirs. yarn.nodemanager.disk-health-checker.min-free-space-per-disk-mb 0 The path to the Linux container executor. yarn.nodemanager.linux-container-executor.path The class which should help the LCE handle resources. yarn.nodemanager.linux-container-executor.resources-handler.class org.apache.hadoop.yarn.server.nodemanager.util.DefaultLCEResourcesHandler The cgroups hierarchy under which to place YARN proccesses (cannot contain commas). If yarn.nodemanager.linux-container-executor.cgroups.mount is false (that is, if cgroups have been pre-configured), then this cgroups hierarchy must already exist and be writable by the NodeManager user, otherwise the NodeManager may fail. Only used when the LCE resources handler is set to the CgroupsLCEResourcesHandler. yarn.nodemanager.linux-container-executor.cgroups.hierarchy /hadoop-yarn Whether the LCE should attempt to mount cgroups if not found. Only used when the LCE resources handler is set to the CgroupsLCEResourcesHandler. 
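To make the virtual-memory rule above concrete: container allocations are granted in physical memory, and when the virtual-memory check is enabled the ceiling is that allocation multiplied by yarn.nodemanager.vmem-pmem-ratio. A minimal sketch; the 2048 MB request is an arbitrary example:

    def vmem_limit_mb(container_pmem_mb, vmem_pmem_ratio=2.1):
        # Virtual-memory ceiling enforced when yarn.nodemanager.vmem-check-enabled
        # is true: the physical allocation multiplied by the configured ratio.
        return container_pmem_mb * vmem_pmem_ratio

    print(vmem_limit_mb(2048))  # -> 4300.8 with the default 2.1 ratio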
yarn.nodemanager.linux-container-executor.cgroups.mount false Where the LCE should attempt to mount cgroups if not found. Common locations include /sys/fs/cgroup and /cgroup; the default location can vary depending on the Linux distribution in use. This path must exist before the NodeManager is launched. Only used when the LCE resources handler is set to the CgroupsLCEResourcesHandler, and yarn.nodemanager.linux-container-executor.cgroups.mount is true. yarn.nodemanager.linux-container-executor.cgroups.mount-path This determines which of the two modes that LCE should use on a non-secure cluster. If this value is set to true, then all containers will be launched as the user specified in yarn.nodemanager.linux-container-executor.nonsecure-mode.local-user. If this value is set to false, then containers will run as the user who submitted the application. yarn.nodemanager.linux-container-executor.nonsecure-mode.limit-users true The UNIX user that containers will run as when Linux-container-executor is used in nonsecure mode (a use case for this is using cgroups) if the yarn.nodemanager.linux-container-executor.nonsecure-mode.limit-users is set to true. yarn.nodemanager.linux-container-executor.nonsecure-mode.local-user nobody The allowed pattern for UNIX user names enforced by Linux-container-executor when used in nonsecure mode (use case for this is using cgroups). The default value is taken from /usr/sbin/adduser yarn.nodemanager.linux-container-executor.nonsecure-mode.user-pattern ^[_.A-Za-z0-9][-@_.A-Za-z0-9]{0,255}?[$]?$ This flag determines whether apps should run with strict resource limits or be allowed to consume spare resources if they need them. For example, turning the flag on will restrict apps to use only their share of CPU, even if the node has spare CPU cycles. The default value is false i.e. use available resources. Please note that turning this flag on may reduce job throughput on the cluster. yarn.nodemanager.linux-container-executor.cgroups.strict-resource-usage false This flag determines whether memory limit will be set for the Windows Job Object of the containers launched by the default container executor. yarn.nodemanager.windows-container.memory-limit.enabled false This flag determines whether CPU limit will be set for the Windows Job Object of the containers launched by the default container executor. yarn.nodemanager.windows-container.cpu-limit.enabled false T-file compression types used to compress aggregated logs. yarn.nodemanager.log-aggregation.compression-type none The kerberos principal for the node manager. yarn.nodemanager.principal A comma separated list of services where service name should only contain a-zA-Z0-9_ and can not start with numbers yarn.nodemanager.aux-services No. of ms to wait between sending a SIGTERM and SIGKILL to a container yarn.nodemanager.sleep-delay-before-sigkill.ms 250 Max time to wait for a process to come up when trying to cleanup a container yarn.nodemanager.process-kill-wait.ms 2000 The minimum allowed version of a resourcemanager that a nodemanager will connect to. The valid values are NONE (no version checking), EqualToNM (the resourcemanager's version is equal to or greater than the NM version), or a Version String. 
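The aux-services description above constrains service names to the characters a-zA-Z0-9_ and forbids a leading digit. The regular expression below is one reading of that constraint (the file itself does not spell out a pattern), shown only to illustrate which names would pass:

    import re

    # One interpretation of the naming rule quoted above: letters, digits and
    # underscores, with no leading digit.
    AUX_SERVICE_NAME = re.compile(r"^[A-Za-z_][A-Za-z0-9_]*$")

    for name in ("mapreduce_shuffle", "spark_shuffle", "2fast2shuffle"):
        print(name, bool(AUX_SERVICE_NAME.match(name)))  # only the last one fails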
yarn.nodemanager.resourcemanager.minimum.version NONE Max number of threads in NMClientAsync to process container management events yarn.client.nodemanager-client-async.thread-pool-max-size 500 Max time to wait to establish a connection to NM yarn.client.nodemanager-connect.max-wait-ms 180000 Time interval between each attempt to connect to NM yarn.client.nodemanager-connect.retry-interval-ms 10000 Maximum number of proxy connections to cache for node managers. If set to a value greater than zero then the cache is enabled and the NMClient and MRAppMaster will cache the specified number of node manager proxies. There will be at max one proxy per node manager. Ex. configuring it to a value of 5 will make sure that client will at max have 5 proxies cached with 5 different node managers. These connections for these proxies will be timed out if idle for more than the system wide idle timeout period. Note that this could cause issues on large clusters as many connections could linger simultaneously and lead to a large number of connection threads. The token used for authentication will be used only at connection creation time. If a new token is received then the earlier connection should be closed in order to use the new token. This and (yarn.client.nodemanager-client-async.thread-pool-max-size) are related and should be in sync (no need for them to be equal). If the value of this property is zero then the connection cache is disabled and connections will use a zero idle timeout to prevent too many connection threads on large clusters. yarn.client.max-cached-nodemanagers-proxies 0 Enable the node manager to recover after starting yarn.nodemanager.recovery.enabled false The local filesystem directory in which the node manager will store state when recovery is enabled. yarn.nodemanager.recovery.dir ${hadoop.tmp.dir}/yarn-nm-recovery yarn.nodemanager.docker-container-executor.exec-name /usr/bin/docker Name or path to the Docker client. yarn.nodemanager.aux-services.mapreduce_shuffle.class org.apache.hadoop.mapred.ShuffleHandler mapreduce.job.jar mapreduce.job.hdfs-servers ${fs.defaultFS} The kerberos principal for the proxy, if the proxy is not running as part of the RM. yarn.web-proxy.principal Keytab for WebAppProxy, if the proxy is not running as part of the RM. yarn.web-proxy.keytab The address for the web proxy as HOST:PORT, if this is not given then the proxy will run as part of the RM yarn.web-proxy.address CLASSPATH for YARN applications. A comma-separated list of CLASSPATH entries. When this value is empty, the following default CLASSPATH for YARN applications would be used. For Linux: $HADOOP_CONF_DIR, $HADOOP_COMMON_HOME/share/hadoop/common/*, $HADOOP_COMMON_HOME/share/hadoop/common/lib/*, $HADOOP_HDFS_HOME/share/hadoop/hdfs/*, $HADOOP_HDFS_HOME/share/hadoop/hdfs/lib/*, $HADOOP_YARN_HOME/share/hadoop/yarn/*, $HADOOP_YARN_HOME/share/hadoop/yarn/lib/* For Windows: %HADOOP_CONF_DIR%, %HADOOP_COMMON_HOME%/share/hadoop/common/*, %HADOOP_COMMON_HOME%/share/hadoop/common/lib/*, %HADOOP_HDFS_HOME%/share/hadoop/hdfs/*, %HADOOP_HDFS_HOME%/share/hadoop/hdfs/lib/*, %HADOOP_YARN_HOME%/share/hadoop/yarn/*, %HADOOP_YARN_HOME%/share/hadoop/yarn/lib/* yarn.application.classpath Indicate to clients whether timeline service is enabled or not. If enabled, clients will put entities and events to the timeline server. yarn.timeline-service.enabled false The hostname of the timeline service web application. 
yarn.timeline-service.hostname 0.0.0.0 This is default address for the timeline server to start the RPC server. yarn.timeline-service.address ${yarn.timeline-service.hostname}:10200 The http address of the timeline service web application. yarn.timeline-service.webapp.address ${yarn.timeline-service.hostname}:8188 The https address of the timeline service web application. yarn.timeline-service.webapp.https.address ${yarn.timeline-service.hostname}:8190 The actual address the server will bind to. If this optional address is set, the RPC and webapp servers will bind to this address and the port specified in yarn.timeline-service.address and yarn.timeline-service.webapp.address, respectively. This is most useful for making the service listen to all interfaces by setting to 0.0.0.0. yarn.timeline-service.bind-host Store class name for timeline store. yarn.timeline-service.store-class org.apache.hadoop.yarn.server.timeline.LeveldbTimelineStore Enable age off of timeline store data. yarn.timeline-service.ttl-enable true Time to live for timeline store data in milliseconds. yarn.timeline-service.ttl-ms 604800000 Store file name for leveldb timeline store. yarn.timeline-service.leveldb-timeline-store.path ${hadoop.tmp.dir}/yarn/timeline Length of time to wait between deletion cycles of leveldb timeline store in milliseconds. yarn.timeline-service.leveldb-timeline-store.ttl-interval-ms 300000 Size of read cache for uncompressed blocks for leveldb timeline store in bytes. yarn.timeline-service.leveldb-timeline-store.read-cache-size 104857600 Size of cache for recently read entity start times for leveldb timeline store in number of entities. yarn.timeline-service.leveldb-timeline-store.start-time-read-cache-size 10000 Size of cache for recently written entity start times for leveldb timeline store in number of entities. yarn.timeline-service.leveldb-timeline-store.start-time-write-cache-size 10000 Handler thread count to serve the client RPC requests. yarn.timeline-service.handler-thread-count 10 yarn.timeline-service.http-authentication.type simple Defines authentication used for the timeline server HTTP endpoint. Supported values are: simple | kerberos | #AUTHENTICATION_HANDLER_CLASSNAME# yarn.timeline-service.http-authentication.simple.anonymous.allowed true Indicates if anonymous requests are allowed by the timeline server when using 'simple' authentication. The Kerberos principal for the timeline server. yarn.timeline-service.principal The Kerberos keytab for the timeline server. yarn.timeline-service.keytab /etc/krb5.keytab Default maximum number of retires for timeline servive client and value -1 means no limit. yarn.timeline-service.client.max-retries 30 Client policy for whether timeline operations are non-fatal yarn.timeline-service.client.best-effort false Default retry time interval for timeline servive client. yarn.timeline-service.client.retry-interval-ms 1000 Enable timeline server to recover state after starting. If true, then yarn.timeline-service.state-store-class must be specified. yarn.timeline-service.recovery.enabled false Store class name for timeline state store. yarn.timeline-service.state-store-class org.apache.hadoop.yarn.server.timeline.recovery.LeveldbTimelineStateStore Store file name for leveldb state store. 
yarn.timeline-service.leveldb-state-store.path ${hadoop.tmp.dir}/yarn/timeline Whether the shared cache is enabled yarn.sharedcache.enabled false The root directory for the shared cache yarn.sharedcache.root-dir /sharedcache The level of nested directories before getting to the checksum directories. It must be non-negative. yarn.sharedcache.nested-level 3 The implementation to be used for the SCM store yarn.sharedcache.store.class org.apache.hadoop.yarn.server.sharedcachemanager.store.InMemorySCMStore The implementation to be used for the SCM app-checker yarn.sharedcache.app-checker.class org.apache.hadoop.yarn.server.sharedcachemanager.RemoteAppChecker A resource in the in-memory store is considered stale if the time since the last reference exceeds the staleness period. This value is specified in minutes. yarn.sharedcache.store.in-memory.staleness-period-mins 10080 Initial delay before the in-memory store runs its first check to remove dead initial applications. Specified in minutes. yarn.sharedcache.store.in-memory.initial-delay-mins 10 The frequency at which the in-memory store checks to remove dead initial applications. Specified in minutes. yarn.sharedcache.store.in-memory.check-period-mins 720 The address of the admin interface in the SCM (shared cache manager) yarn.sharedcache.admin.address 0.0.0.0:8047 The number of threads used to handle SCM admin interface (1 by default) yarn.sharedcache.admin.thread-count 1 The address of the web application in the SCM (shared cache manager) yarn.sharedcache.webapp.address 0.0.0.0:8788 The frequency at which a cleaner task runs. Specified in minutes. yarn.sharedcache.cleaner.period-mins 1440 Initial delay before the first cleaner task is scheduled. Specified in minutes. yarn.sharedcache.cleaner.initial-delay-mins 10 The time to sleep between processing each shared cache resource. Specified in milliseconds. yarn.sharedcache.cleaner.resource-sleep-ms 0 The address of the node manager interface in the SCM (shared cache manager) yarn.sharedcache.uploader.server.address 0.0.0.0:8046 The number of threads used to handle shared cache manager requests from the node manager (50 by default) yarn.sharedcache.uploader.server.thread-count 50 The address of the client interface in the SCM (shared cache manager) yarn.sharedcache.client-server.address 0.0.0.0:8045 The number of threads used to handle shared cache manager requests from clients (50 by default) yarn.sharedcache.client-server.thread-count 50 The algorithm used to compute checksums of files (SHA-256 by default) yarn.sharedcache.checksum.algo.impl org.apache.hadoop.yarn.sharedcache.ChecksumSHA256Impl The replication factor for the node manager uploader for the shared cache (10 by default) yarn.sharedcache.nm.uploader.replication.factor 10 The number of threads used to upload files from a node manager instance (20 by default) yarn.sharedcache.nm.uploader.thread-count 20 The interval that the yarn client library uses to poll the completion status of the asynchronous API of application client protocol. yarn.client.application-client-protocol.poll-interval-ms 200 RSS usage of a process computed via /proc/pid/stat is not very accurate as it includes shared pages of a process. /proc/pid/smaps provides useful information like Private_Dirty, Private_Clean, Shared_Dirty, Shared_Clean which can be used for computing more accurate RSS. When this flag is enabled, RSS is computed as Min(Shared_Dirty, Pss) + Private_Clean + Private_Dirty. It excludes read-only shared mappings in RSS computation. 
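The smaps-based RSS rule quoted just above is a simple sum once the per-mapping counters are in hand. A sketch of the stated formula with hypothetical kB figures:

    def smaps_based_rss_kb(shared_dirty, pss, private_clean, private_dirty):
        # Formula as described: Min(Shared_Dirty, Pss) + Private_Clean + Private_Dirty,
        # all values in kB as reported by /proc/<pid>/smaps.
        return min(shared_dirty, pss) + private_clean + private_dirty

    print(smaps_based_rss_kb(shared_dirty=1200, pss=900,
                             private_clean=300, private_dirty=4500))  # -> 5700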
yarn.nodemanager.container-monitor.procfs-tree.smaps-based-rss.enabled false Defines how often NMs wake up to upload log files. The default value is -1. By default, the logs will be uploaded when the application is finished. By setting this configure, logs can be uploaded periodically when the application is running. The minimum rolling-interval-seconds can be set is 3600. yarn.nodemanager.log-aggregation.roll-monitoring-interval-seconds -1 ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1696419316.0 sahara-plugin-vanilla-10.0.0/sahara_plugin_vanilla/plugins/vanilla/v2_7_1/versionhandler.py0000664000175000017500000001435400000000000032025 0ustar00zuulzuul00000000000000# Copyright (c) 2015 Mirantis Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. from oslo_config import cfg from sahara.plugins import conductor from sahara.plugins import context from sahara.plugins import swift_helper from sahara.plugins import utils from sahara_plugin_vanilla.plugins.vanilla import abstractversionhandler as avm from sahara_plugin_vanilla.plugins.vanilla.hadoop2 import config as c from sahara_plugin_vanilla.plugins.vanilla.hadoop2 import keypairs from sahara_plugin_vanilla.plugins.vanilla.hadoop2 import recommendations_utils from sahara_plugin_vanilla.plugins.vanilla.hadoop2 import run_scripts as run from sahara_plugin_vanilla.plugins.vanilla.hadoop2 import scaling as sc from sahara_plugin_vanilla.plugins.vanilla.hadoop2 import starting_scripts from sahara_plugin_vanilla.plugins.vanilla.hadoop2 import utils as u from sahara_plugin_vanilla.plugins.vanilla.hadoop2 import validation as vl from sahara_plugin_vanilla.plugins.vanilla import utils as vu from sahara_plugin_vanilla.plugins.vanilla.v2_7_1 import config_helper from sahara_plugin_vanilla.plugins.vanilla.v2_7_1 import edp_engine CONF = cfg.CONF class VersionHandler(avm.AbstractVersionHandler): def __init__(self): self.pctx = { 'env_confs': config_helper.get_env_configs(), 'all_confs': config_helper.get_plugin_configs() } def get_plugin_configs(self): return self.pctx['all_confs'] def get_node_processes(self): return { "Hadoop": [], "MapReduce": ["historyserver"], "HDFS": ["namenode", "datanode", "secondarynamenode"], "YARN": ["resourcemanager", "nodemanager"], "JobFlow": ["oozie"], "Hive": ["hiveserver"], "Spark": ["spark history server"], "ZooKeeper": ["zookeeper"] } def validate(self, cluster): vl.validate_cluster_creating(self.pctx, cluster) def update_infra(self, cluster): pass def configure_cluster(self, cluster): c.configure_cluster(self.pctx, cluster) def start_cluster(self, cluster): keypairs.provision_keypairs(cluster) starting_scripts.start_namenode(cluster) starting_scripts.start_secondarynamenode(cluster) starting_scripts.start_resourcemanager(cluster) run.start_dn_nm_processes(utils.get_instances(cluster)) run.await_datanodes(cluster) starting_scripts.start_historyserver(cluster) starting_scripts.start_oozie(self.pctx, cluster) starting_scripts.start_hiveserver(self.pctx, cluster) 
starting_scripts.start_zookeeper(cluster) swift_helper.install_ssl_certs(utils.get_instances(cluster)) self._set_cluster_info(cluster) starting_scripts.start_spark(cluster) def decommission_nodes(self, cluster, instances): sc.decommission_nodes(self.pctx, cluster, instances) def validate_scaling(self, cluster, existing, additional): vl.validate_additional_ng_scaling(cluster, additional) vl.validate_existing_ng_scaling(self.pctx, cluster, existing) zk_ng = utils.get_node_groups(cluster, "zookeeper") if zk_ng: vl.validate_zookeeper_node_count(zk_ng, existing, additional) def scale_cluster(self, cluster, instances): keypairs.provision_keypairs(cluster, instances) sc.scale_cluster(self.pctx, cluster, instances) def _set_cluster_info(self, cluster): nn = vu.get_namenode(cluster) rm = vu.get_resourcemanager(cluster) hs = vu.get_historyserver(cluster) oo = vu.get_oozie(cluster) sp = vu.get_spark_history_server(cluster) info = {} if rm: info['YARN'] = { 'Web UI': 'http://%s:%s' % (rm.get_ip_or_dns_name(), '8088'), 'ResourceManager': 'http://%s:%s' % ( rm.get_ip_or_dns_name(), '8032') } if nn: info['HDFS'] = { 'Web UI': 'http://%s:%s' % (nn.get_ip_or_dns_name(), '50070'), 'NameNode': 'hdfs://%s:%s' % (nn.hostname(), '9000') } if oo: info['JobFlow'] = { 'Oozie': 'http://%s:%s' % (oo.get_ip_or_dns_name(), '11000') } if hs: info['MapReduce JobHistory Server'] = { 'Web UI': 'http://%s:%s' % (hs.get_ip_or_dns_name(), '19888') } if sp: info['Apache Spark'] = { 'Spark UI': 'http://%s:%s' % (sp.management_ip, '4040'), 'Spark History Server UI': 'http://%s:%s' % (sp.management_ip, '18080') } ctx = context.ctx() conductor.cluster_update(ctx, cluster, {'info': info}) def get_edp_engine(self, cluster, job_type): if job_type in edp_engine.EdpOozieEngine.get_supported_job_types(): return edp_engine.EdpOozieEngine(cluster) if job_type in edp_engine.EdpSparkEngine.get_supported_job_types(): return edp_engine.EdpSparkEngine(cluster) return None def get_edp_job_types(self): return (edp_engine.EdpOozieEngine.get_supported_job_types() + edp_engine.EdpSparkEngine.get_supported_job_types()) def get_edp_config_hints(self, job_type): return edp_engine.EdpOozieEngine.get_possible_job_config(job_type) def on_terminate_cluster(self, cluster): u.delete_oozie_password(cluster) keypairs.drop_key(cluster) def get_open_ports(self, node_group): return c.get_open_ports(node_group) def recommend_configs(self, cluster, scaling): recommendations_utils.recommend_configs(cluster, self.get_plugin_configs(), scaling) ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1696419356.9604795 sahara-plugin-vanilla-10.0.0/sahara_plugin_vanilla/plugins/vanilla/v2_7_5/0000775000175000017500000000000000000000000026425 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1696419316.0 sahara-plugin-vanilla-10.0.0/sahara_plugin_vanilla/plugins/vanilla/v2_7_5/__init__.py0000664000175000017500000000000000000000000030524 0ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1696419316.0 sahara-plugin-vanilla-10.0.0/sahara_plugin_vanilla/plugins/vanilla/v2_7_5/config_helper.py0000664000175000017500000001120100000000000031576 0ustar00zuulzuul00000000000000# Copyright (c) 2015 Mirantis Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. import copy from oslo_config import cfg import six from sahara.plugins import provisioning as p from sahara.plugins import utils from sahara_plugin_vanilla.plugins.vanilla.hadoop2 import config_helper CONF = cfg.CONF CONF.import_opt("enable_data_locality", "sahara.topology.topology_helper") CORE_DEFAULT = utils.load_hadoop_xml_defaults( 'plugins/vanilla/v2_7_5/resources/core-default.xml', 'sahara_plugin_vanilla') HDFS_DEFAULT = utils.load_hadoop_xml_defaults( 'plugins/vanilla/v2_7_5/resources/hdfs-default.xml', 'sahara_plugin_vanilla') MAPRED_DEFAULT = utils.load_hadoop_xml_defaults( 'plugins/vanilla/v2_7_5/resources/mapred-default.xml', 'sahara_plugin_vanilla') YARN_DEFAULT = utils.load_hadoop_xml_defaults( 'plugins/vanilla/v2_7_5/resources/yarn-default.xml', 'sahara_plugin_vanilla') OOZIE_DEFAULT = utils.load_hadoop_xml_defaults( 'plugins/vanilla/v2_7_5/resources/oozie-default.xml', 'sahara_plugin_vanilla') HIVE_DEFAULT = utils.load_hadoop_xml_defaults( 'plugins/vanilla/v2_7_5/resources/hive-default.xml', 'sahara_plugin_vanilla') _default_executor_classpath = ":".join( ['/opt/hadoop/share/hadoop/tools/lib/hadoop-openstack-2.7.5.jar']) SPARK_CONFS = copy.deepcopy(config_helper.SPARK_CONFS) SPARK_CONFS['Spark']['OPTIONS'].append( { 'name': 'Executor extra classpath', 'description': 'Value for spark.executor.extraClassPath' ' in spark-defaults.conf' ' (default: %s)' % _default_executor_classpath, 'default': '%s' % _default_executor_classpath, 'priority': 2, } ) XML_CONFS = { "Hadoop": [CORE_DEFAULT], "HDFS": [HDFS_DEFAULT], "YARN": [YARN_DEFAULT], "MapReduce": [MAPRED_DEFAULT], "JobFlow": [OOZIE_DEFAULT], "Hive": [HIVE_DEFAULT] } ENV_CONFS = { "YARN": { 'ResourceManager Heap Size': 1024, 'NodeManager Heap Size': 1024 }, "HDFS": { 'NameNode Heap Size': 1024, 'SecondaryNameNode Heap Size': 1024, 'DataNode Heap Size': 1024 }, "MapReduce": { 'JobHistoryServer Heap Size': 1024 }, "JobFlow": { 'Oozie Heap Size': 1024 } } # Initialise plugin Hadoop configurations PLUGIN_XML_CONFIGS = config_helper.init_xml_configs(XML_CONFS) PLUGIN_ENV_CONFIGS = config_helper.init_env_configs(ENV_CONFS) def _init_all_configs(): configs = [] configs.extend(PLUGIN_XML_CONFIGS) configs.extend(PLUGIN_ENV_CONFIGS) configs.extend(config_helper.PLUGIN_GENERAL_CONFIGS) configs.extend(_get_spark_configs()) configs.extend(_get_zookeeper_configs()) return configs def _get_spark_opt_default(opt_name): for opt in SPARK_CONFS["Spark"]["OPTIONS"]: if opt_name == opt["name"]: return opt["default"] return None def _get_spark_configs(): spark_configs = [] for service, config_items in six.iteritems(SPARK_CONFS): for item in config_items['OPTIONS']: cfg = p.Config(name=item["name"], description=item["description"], default_value=item["default"], applicable_target=service, scope="cluster", is_optional=True, priority=item["priority"]) spark_configs.append(cfg) return spark_configs def _get_zookeeper_configs(): zk_configs = [] for service, config_items in six.iteritems(config_helper.ZOOKEEPER_CONFS): for item in config_items['OPTIONS']: cfg = p.Config(name=item["name"], description=item["description"], default_value=item["default"], 
applicable_target=service, scope="cluster", is_optional=True, priority=item["priority"]) zk_configs.append(cfg) return zk_configs PLUGIN_CONFIGS = _init_all_configs() def get_plugin_configs(): return PLUGIN_CONFIGS def get_xml_configs(): return PLUGIN_XML_CONFIGS def get_env_configs(): return ENV_CONFS ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1696419316.0 sahara-plugin-vanilla-10.0.0/sahara_plugin_vanilla/plugins/vanilla/v2_7_5/edp_engine.py0000664000175000017500000000674700000000000031112 0ustar00zuulzuul00000000000000# Copyright (c) 2015 Red Hat, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. import os from sahara.plugins import edp from sahara.plugins import exceptions as ex from sahara.plugins import utils as plugin_utils from sahara_plugin_vanilla.i18n import _ from sahara_plugin_vanilla.plugins.vanilla import confighints_helper from sahara_plugin_vanilla.plugins.vanilla.hadoop2 import edp_engine from sahara_plugin_vanilla.plugins.vanilla import utils as v_utils class EdpOozieEngine(edp_engine.EdpOozieEngine): @staticmethod def get_possible_job_config(job_type): if edp.compare_job_type(job_type, edp.JOB_TYPE_HIVE): return { 'job_config': confighints_helper.get_possible_hive_config_from( 'plugins/vanilla/v2_7_5/resources/hive-default.xml')} if edp.compare_job_type(job_type, edp.JOB_TYPE_MAPREDUCE, edp.JOB_TYPE_MAPREDUCE_STREAMING): return { 'job_config': confighints_helper.get_possible_mapreduce_config_from( 'plugins/vanilla/v2_7_5/resources/mapred-default.xml')} if edp.compare_job_type(job_type, edp.JOB_TYPE_PIG): return { 'job_config': confighints_helper.get_possible_pig_config_from( 'plugins/vanilla/v2_7_5/resources/mapred-default.xml')} return edp_engine.EdpOozieEngine.get_possible_job_config(job_type) class EdpSparkEngine(edp.PluginsSparkJobEngine): edp_base_version = "2.7.5" def __init__(self, cluster): super(EdpSparkEngine, self).__init__(cluster) self.master = plugin_utils.get_instance(cluster, "spark history server") self.plugin_params["spark-user"] = "sudo -u hadoop " self.plugin_params["spark-submit"] = os.path.join( plugin_utils.get_config_value_or_default( "Spark", "Spark home", self.cluster), "bin/spark-submit") self.plugin_params["deploy-mode"] = "cluster" self.plugin_params["master"] = "yarn" driver_cp = plugin_utils.get_config_value_or_default( "Spark", "Executor extra classpath", self.cluster) self.plugin_params["driver-class-path"] = driver_cp @staticmethod def edp_supported(version): return version >= EdpSparkEngine.edp_base_version @staticmethod def job_type_supported(job_type): return (job_type in edp.PluginsSparkJobEngine.get_supported_job_types()) def validate_job_execution(self, cluster, job, data): if (not self.edp_supported(cluster.hadoop_version) or not v_utils.get_spark_history_server(cluster)): raise ex.InvalidDataException( _('Spark {base} or higher required to run {type} jobs').format( base=EdpSparkEngine.edp_base_version, type=job.type)) super(EdpSparkEngine, self).validate_job_execution(cluster, job, data) 
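EdpSparkEngine.edp_supported above gates Spark EDP jobs on the cluster's hadoop_version with a plain string comparison against edp_base_version. A standalone illustration of just that check (the stub class and version strings are examples, not how Sahara itself calls the engine):

    class EdpSparkEngineStub:
        edp_base_version = "2.7.5"

        @staticmethod
        def edp_supported(version):
            # Same comparison as the method above: lexicographic on strings.
            return version >= EdpSparkEngineStub.edp_base_version

    print(EdpSparkEngineStub.edp_supported("2.7.5"))  # True
    print(EdpSparkEngineStub.edp_supported("2.7.1"))  # False

Note that the comparison is lexicographic, which is adequate for the single-digit patch versions this plugin ships but would mis-order a hypothetical 2.7.10.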
././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1696419356.9644794 sahara-plugin-vanilla-10.0.0/sahara_plugin_vanilla/plugins/vanilla/v2_7_5/resources/0000775000175000017500000000000000000000000030437 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1696419316.0 sahara-plugin-vanilla-10.0.0/sahara_plugin_vanilla/plugins/vanilla/v2_7_5/resources/README.rst0000664000175000017500000000254200000000000032131 0ustar00zuulzuul00000000000000Apache Hadoop Configurations for Sahara ======================================= This directory contains default XML configuration files: * core-default.xml * hdfs-default.xml * mapred-default.xml * yarn-default.xml * oozie-default.xml * hive-default.xml These files are applied for Sahara's plugin of Apache Hadoop version 2.7.5 Files were taken from here: * `core-default.xml `_ * `hdfs-default.xml `_ * `yarn-default.xml `_ * `mapred-default.xml `_ * `oozie-default.xml `_ XML configs are used to expose default Hadoop configurations to the users through Sahara's REST API. It allows users to override some config values which will be pushed to the provisioned VMs running Hadoop services as part of appropriate xml config. ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1696419316.0 sahara-plugin-vanilla-10.0.0/sahara_plugin_vanilla/plugins/vanilla/v2_7_5/resources/core-default.xml0000664000175000017500000017632600000000000033552 0ustar00zuulzuul00000000000000 hadoop.common.configuration.version 0.23.0 version of this configuration file hadoop.tmp.dir /tmp/hadoop-${user.name} A base for other temporary directories. io.native.lib.available true Controls whether to use native libraries for bz2 and zlib compression codecs or not. The property does not control any other native libraries. hadoop.http.filter.initializers org.apache.hadoop.http.lib.StaticUserWebFilter A comma separated list of class names. Each class in the list must extend org.apache.hadoop.http.FilterInitializer. The corresponding Filter will be initialized. Then, the Filter will be applied to all user facing jsp and servlet web pages. The ordering of the list defines the ordering of the filters. hadoop.security.authorization false Is service-level authorization enabled? hadoop.security.instrumentation.requires.admin false Indicates if administrator ACLs are required to access instrumentation servlets (JMX, METRICS, CONF, STACKS). hadoop.security.authentication simple Possible values are simple (no authentication), and kerberos hadoop.security.group.mapping org.apache.hadoop.security.JniBasedUnixGroupsMappingWithFallback Class for user to group mapping (get groups for a given user) for ACL. The default implementation, org.apache.hadoop.security.JniBasedUnixGroupsMappingWithFallback, will determine if the Java Native Interface (JNI) is available. If JNI is available the implementation will use the API within hadoop to resolve a list of groups for a user. If JNI is not available then the shell implementation, ShellBasedUnixGroupsMapping, is used. This implementation shells out to the Linux/Unix environment with the bash -c groups command to resolve a list of groups for a user. hadoop.security.groups.cache.secs 300 This is the config controlling the validity of the entries in the cache containing the user->group mapping. 
When this duration has expired, then the implementation of the group mapping provider is invoked to get the groups of the user and then cached back. hadoop.security.groups.negative-cache.secs 30 Expiration time for entries in the the negative user-to-group mapping caching, in seconds. This is useful when invalid users are retrying frequently. It is suggested to set a small value for this expiration, since a transient error in group lookup could temporarily lock out a legitimate user. Set this to zero or negative value to disable negative user-to-group caching. hadoop.security.groups.cache.warn.after.ms 5000 If looking up a single user to group takes longer than this amount of milliseconds, we will log a warning message. hadoop.security.group.mapping.ldap.url The URL of the LDAP server to use for resolving user groups when using the LdapGroupsMapping user to group mapping. hadoop.security.group.mapping.ldap.ssl false Whether or not to use SSL when connecting to the LDAP server. hadoop.security.group.mapping.ldap.ssl.keystore File path to the SSL keystore that contains the SSL certificate required by the LDAP server. hadoop.security.group.mapping.ldap.ssl.keystore.password.file The path to a file containing the password of the LDAP SSL keystore. IMPORTANT: This file should be readable only by the Unix user running the daemons. hadoop.security.group.mapping.ldap.bind.user The distinguished name of the user to bind as when connecting to the LDAP server. This may be left blank if the LDAP server supports anonymous binds. hadoop.security.group.mapping.ldap.bind.password.file The path to a file containing the password of the bind user. IMPORTANT: This file should be readable only by the Unix user running the daemons. hadoop.security.group.mapping.ldap.base The search base for the LDAP connection. This is a distinguished name, and will typically be the root of the LDAP directory. hadoop.security.group.mapping.ldap.search.filter.user (&(objectClass=user)(sAMAccountName={0})) An additional filter to use when searching for LDAP users. The default will usually be appropriate for Active Directory installations. If connecting to an LDAP server with a non-AD schema, this should be replaced with (&(objectClass=inetOrgPerson)(uid={0}). {0} is a special string used to denote where the username fits into the filter. hadoop.security.group.mapping.ldap.search.filter.group (objectClass=group) An additional filter to use when searching for LDAP groups. This should be changed when resolving groups against a non-Active Directory installation. posixGroups are currently not a supported group class. hadoop.security.group.mapping.ldap.search.attr.member member The attribute of the group object that identifies the users that are members of the group. The default will usually be appropriate for any LDAP installation. hadoop.security.group.mapping.ldap.search.attr.group.name cn The attribute of the group object that identifies the group name. The default will usually be appropriate for all LDAP systems. hadoop.security.group.mapping.ldap.directory.search.timeout 10000 The attribute applied to the LDAP SearchControl properties to set a maximum time limit when searching and awaiting a result. Set to 0 if infinite wait period is desired. Default is 10 seconds. Units in milliseconds. 
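The group-mapping cache described above (hadoop.security.groups.cache.secs, 300 seconds by default) simply reuses an entry until its validity window elapses, after which the provider is consulted again. A minimal sketch of that expiry rule, not of Hadoop's actual cache implementation:

    import time

    def group_cache_entry_valid(cached_at, now=None, cache_secs=300):
        # An entry is reused until hadoop.security.groups.cache.secs has elapsed;
        # afterwards the group-mapping provider is invoked and the result re-cached.
        now = time.time() if now is None else now
        return (now - cached_at) < cache_secs

    print(group_cache_entry_valid(cached_at=1000.0, now=1250.0))  # True, 250 s old
    print(group_cache_entry_valid(cached_at=1000.0, now=1400.0))  # False, 400 s old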
hadoop.security.service.user.name.key For those cases where the same RPC protocol is implemented by multiple servers, this configuration is required for specifying the principal name to use for the service when the client wishes to make an RPC call. hadoop.security.uid.cache.secs 14400 This is the config controlling the validity of the entries in the cache containing the userId to userName and groupId to groupName used by NativeIO getFstat(). hadoop.rpc.protection authentication A comma-separated list of protection values for secured sasl connections. Possible values are authentication, integrity and privacy. authentication means authentication only and no integrity or privacy; integrity implies authentication and integrity are enabled; and privacy implies all of authentication, integrity and privacy are enabled. hadoop.security.saslproperties.resolver.class can be used to override the hadoop.rpc.protection for a connection at the server side. hadoop.security.saslproperties.resolver.class SaslPropertiesResolver used to resolve the QOP used for a connection. If not specified, the full set of values specified in hadoop.rpc.protection is used while determining the QOP used for the connection. If a class is specified, then the QOP values returned by the class will be used while determining the QOP used for the connection. hadoop.work.around.non.threadsafe.getpwuid false Some operating systems or authentication modules are known to have broken implementations of getpwuid_r and getpwgid_r, such that these calls are not thread-safe. Symptoms of this problem include JVM crashes with a stack trace inside these functions. If your system exhibits this issue, enable this configuration parameter to include a lock around the calls as a workaround. An incomplete list of some systems known to have this issue is available at http://wiki.apache.org/hadoop/KnownBrokenPwuidImplementations hadoop.kerberos.kinit.command kinit Used to periodically renew Kerberos credentials when provided to Hadoop. The default setting assumes that kinit is in the PATH of users running the Hadoop client. Change this to the absolute path to kinit if this is not the case. hadoop.security.auth_to_local Maps kerberos principals to local user names io.file.buffer.size 4096 The size of buffer for use in sequence files. The size of this buffer should probably be a multiple of hardware page size (4096 on Intel x86), and it determines how much data is buffered during read and write operations. io.bytes.per.checksum 512 The number of bytes per checksum. Must not be larger than io.file.buffer.size. io.skip.checksum.errors false If true, when a checksum error is encountered while reading a sequence file, entries are skipped, instead of throwing an exception. io.compression.codecs A comma-separated list of the compression codec classes that can be used for compression/decompression. In addition to any classes specified with this property (which take precedence), codec classes on the classpath are discovered using a Java ServiceLoader. io.compression.codec.bzip2.library system-native The native-code library to be used for compression and decompression by the bzip2 codec. This library could be specified either by by name or the full pathname. In the former case, the library is located by the dynamic linker, usually searching the directories specified in the environment variable LD_LIBRARY_PATH. The value of "system-native" indicates that the default system library should be used. 
To indicate that the algorithm should operate entirely in Java, specify "java-builtin". io.serializations org.apache.hadoop.io.serializer.WritableSerialization,org.apache.hadoop.io.serializer.avro.AvroSpecificSerialization,org.apache.hadoop.io.serializer.avro.AvroReflectSerialization A list of serialization classes that can be used for obtaining serializers and deserializers. io.seqfile.local.dir ${hadoop.tmp.dir}/io/local The local directory where sequence file stores intermediate data files during merge. May be a comma-separated list of directories on different devices in order to spread disk i/o. Directories that do not exist are ignored. io.map.index.skip 0 Number of index entries to skip between each entry. Zero by default. Setting this to values larger than zero can facilitate opening large MapFiles using less memory. io.map.index.interval 128 MapFile consist of two files - data file (tuples) and index file (keys). For every io.map.index.interval records written in the data file, an entry (record-key, data-file-position) is written in the index file. This is to allow for doing binary search later within the index file to look up records by their keys and get their closest positions in the data file. fs.defaultFS file:/// The name of the default file system. A URI whose scheme and authority determine the FileSystem implementation. The uri's scheme determines the config property (fs.SCHEME.impl) naming the FileSystem implementation class. The uri's authority is used to determine the host, port, etc. for a filesystem. fs.default.name file:/// Deprecated. Use (fs.defaultFS) property instead fs.trash.interval 0 Number of minutes after which the checkpoint gets deleted. If zero, the trash feature is disabled. This option may be configured both on the server and the client. If trash is disabled server side then the client side configuration is checked. If trash is enabled on the server side then the value configured on the server is used and the client configuration value is ignored. fs.trash.checkpoint.interval 0 Number of minutes between trash checkpoints. Should be smaller or equal to fs.trash.interval. If zero, the value is set to the value of fs.trash.interval. Every time the checkpointer runs it creates a new checkpoint out of current and removes checkpoints created more than fs.trash.interval minutes ago. fs.AbstractFileSystem.file.impl org.apache.hadoop.fs.local.LocalFs The AbstractFileSystem for file: uris. fs.AbstractFileSystem.har.impl org.apache.hadoop.fs.HarFs The AbstractFileSystem for har: uris. fs.AbstractFileSystem.hdfs.impl org.apache.hadoop.fs.Hdfs The FileSystem for hdfs: uris. fs.AbstractFileSystem.viewfs.impl org.apache.hadoop.fs.viewfs.ViewFs The AbstractFileSystem for view file system for viewfs: uris (ie client side mount table:). fs.AbstractFileSystem.ftp.impl org.apache.hadoop.fs.ftp.FtpFs The FileSystem for Ftp: uris. fs.ftp.host 0.0.0.0 FTP filesystem connects to this server fs.ftp.host.port 21 FTP filesystem connects to fs.ftp.host on this port fs.df.interval 60000 Disk usage statistics refresh interval in msec. fs.du.interval 600000 File space usage statistics refresh interval in msec. fs.s3.block.size 67108864 Block size to use when writing files to S3. fs.s3.buffer.dir ${hadoop.tmp.dir}/s3 Determines where on the local filesystem the S3 filesystem should store files before sending them to S3 (or after retrieving them from S3). fs.s3.maxRetries 4 The maximum number of retries for reading or writing files to S3, before we signal failure to the application. 
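The fs.trash.* settings a little earlier carry their own fallback: a checkpoint interval of zero means "use fs.trash.interval", and the checkpoint interval should stay at or below the trash interval. A sketch of that rule; the clamping with min() is one reading of the "smaller or equal" note rather than a statement about Hadoop's exact enforcement:

    def effective_trash_checkpoint_interval(trash_interval, checkpoint_interval):
        # fs.trash.checkpoint.interval: zero falls back to fs.trash.interval,
        # and larger values are clamped to fs.trash.interval in this sketch.
        if checkpoint_interval == 0:
            return trash_interval
        return min(checkpoint_interval, trash_interval)

    print(effective_trash_checkpoint_interval(trash_interval=1440,
                                              checkpoint_interval=0))  # -> 1440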
fs.s3.sleepTimeSeconds 10 The number of seconds to sleep between each S3 retry. fs.swift.impl org.apache.hadoop.fs.swift.snative.SwiftNativeFileSystem The implementation class of the OpenStack Swift Filesystem fs.automatic.close true By default, FileSystem instances are automatically closed at program exit using a JVM shutdown hook. Setting this property to false disables this behavior. This is an advanced option that should only be used by server applications requiring a more carefully orchestrated shutdown sequence. fs.s3n.block.size 67108864 Block size to use when reading files using the native S3 filesystem (s3n: URIs). fs.s3n.multipart.uploads.enabled false Setting this property to true enables multiple uploads to native S3 filesystem. When uploading a file, it is split into blocks if the size is larger than fs.s3n.multipart.uploads.block.size. fs.s3n.multipart.uploads.block.size 67108864 The block size for multipart uploads to native S3 filesystem. Default size is 64MB. fs.s3n.multipart.copy.block.size 5368709120 The block size for multipart copy in native S3 filesystem. Default size is 5GB. fs.s3n.server-side-encryption-algorithm Specify a server-side encryption algorithm for S3. The default is NULL, and the only other currently allowable value is AES256. fs.s3a.awsAccessKeyId AWS access key ID. Omit for Role-based authentication. fs.s3a.awsSecretAccessKey AWS secret key. Omit for Role-based authentication. fs.s3a.connection.maximum 15 Controls the maximum number of simultaneous connections to S3. fs.s3a.connection.ssl.enabled true Enables or disables SSL connections to S3. fs.s3a.endpoint AWS S3 endpoint to connect to. An up-to-date list is provided in the AWS Documentation: regions and endpoints. Without this property, the standard region (s3.amazonaws.com) is assumed. fs.s3a.proxy.host Hostname of the (optional) proxy server for S3 connections. fs.s3a.proxy.port Proxy server port. If this property is not set but fs.s3a.proxy.host is, port 80 or 443 is assumed (consistent with the value of fs.s3a.connection.ssl.enabled). fs.s3a.proxy.username Username for authenticating with proxy server. fs.s3a.proxy.password Password for authenticating with proxy server. fs.s3a.proxy.domain Domain for authenticating with proxy server. fs.s3a.proxy.workstation Workstation for authenticating with proxy server. fs.s3a.attempts.maximum 10 How many times we should retry commands on transient errors. fs.s3a.connection.establish.timeout 5000 Socket connection setup timeout in milliseconds. fs.s3a.connection.timeout 50000 Socket connection timeout in milliseconds. fs.s3a.paging.maximum 5000 How many keys to request from S3 when doing directory listings at a time. fs.s3a.threads.max 256 Maximum number of concurrent active (part)uploads, which each use a thread from the threadpool. fs.s3a.threads.core 15 Number of core threads in the threadpool. fs.s3a.threads.keepalivetime 60 Number of seconds a thread can be idle before being terminated. fs.s3a.max.total.tasks 1000 Number of (part)uploads allowed to the queue before blocking additional uploads. fs.s3a.multipart.size 104857600 How big (in bytes) to split upload or copy operations up into. fs.s3a.multipart.threshold 2147483647 Threshold before uploads or copies use parallel multipart operations. fs.s3a.acl.default Set a canned ACL for newly created and copied objects. Value may be private, public-read, public-read-write, authenticated-read, log-delivery-write, bucket-owner-read, or bucket-owner-full-control. 
fs.s3a.multipart.purge false True if you want to purge existing multipart uploads that may not have been completed/aborted correctly fs.s3a.multipart.purge.age 86400 Minimum age in seconds of multipart uploads to purge fs.s3a.buffer.dir ${hadoop.tmp.dir}/s3a Comma separated list of directories that will be used to buffer file uploads to. fs.s3a.fast.upload false Upload directly from memory instead of buffering to disk first. Memory usage and parallelism can be controlled as up to fs.s3a.multipart.size memory is consumed for each (part)upload actively uploading (fs.s3a.threads.max) or queueing (fs.s3a.max.total.tasks) fs.s3a.fast.buffer.size 1048576 Size of initial memory buffer in bytes allocated for an upload. No effect if fs.s3a.fast.upload is false. fs.s3a.impl org.apache.hadoop.fs.s3a.S3AFileSystem The implementation class of the S3A Filesystem io.seqfile.compress.blocksize 1000000 The minimum block size for compression in block compressed SequenceFiles. io.seqfile.lazydecompress true Should values of block-compressed SequenceFiles be decompressed only when necessary. io.seqfile.sorter.recordlimit 1000000 The limit on number of records to be kept in memory in a spill in SequenceFiles.Sorter io.mapfile.bloom.size 1048576 The size of BloomFilter-s used in BloomMapFile. Each time this many keys is appended the next BloomFilter will be created (inside a DynamicBloomFilter). Larger values minimize the number of filters, which slightly increases the performance, but may waste too much space if the total number of keys is usually much smaller than this number. io.mapfile.bloom.error.rate 0.005 The rate of false positives in BloomFilter-s used in BloomMapFile. As this value decreases, the size of BloomFilter-s increases exponentially. This value is the probability of encountering false positives (default is 0.5%). hadoop.util.hash.type murmur The default implementation of Hash. Currently this can take one of the two values: 'murmur' to select MurmurHash and 'jenkins' to select JenkinsHash. ipc.client.idlethreshold 4000 Defines the threshold number of connections after which connections will be inspected for idleness. ipc.client.kill.max 10 Defines the maximum number of clients to disconnect in one go. ipc.client.connection.maxidletime 10000 The maximum time in msec after which a client will bring down the connection to the server. ipc.client.connect.max.retries 10 Indicates the number of retries a client will make to establish a server connection. ipc.client.connect.retry.interval 1000 Indicates the number of milliseconds a client will wait for before retrying to establish a server connection. ipc.client.connect.timeout 20000 Indicates the number of milliseconds a client will wait for the socket to establish a server connection. ipc.client.connect.max.retries.on.timeouts 45 Indicates the number of retries a client will make on socket timeout to establish a server connection. ipc.client.ping true Send a ping to the server when timeout on reading the response, if set to true. If no failure is detected, the client retries until at least a byte is read. ipc.ping.interval 60000 Timeout on waiting response from server, in milliseconds. The client will send ping when the interval is passed without receiving bytes, if ipc.client.ping is set to true. ipc.client.rpc-timeout.ms 0 Timeout on waiting response from server, in milliseconds. Currently this timeout works only when ipc.client.ping is set to true because it uses the same facilities with IPC ping. 
The timeout overrides the ipc.ping.interval and client will throw exception instead of sending ping when the interval is passed. ipc.server.listen.queue.size 128 Indicates the length of the listen queue for servers accepting client connections. ipc.maximum.data.length 67108864 This indicates the maximum IPC message length (bytes) that can be accepted by the server. Messages larger than this value are rejected by server immediately. This setting should rarely need to be changed. It merits investigating whether the cause of long RPC messages can be fixed instead, e.g. by splitting into smaller messages. ipc.server.log.slow.rpc false This setting is useful to troubleshoot performance issues for various services. If this value is set to true then we log requests that fall into 99th percentile as well as increment RpcSlowCalls counter. hadoop.security.impersonation.provider.class A class which implements ImpersonationProvider interface, used to authorize whether one user can impersonate a specific user. If not specified, the DefaultImpersonationProvider will be used. If a class is specified, then that class will be used to determine the impersonation capability. hadoop.rpc.socket.factory.class.default org.apache.hadoop.net.StandardSocketFactory Default SocketFactory to use. This parameter is expected to be formatted as "package.FactoryClassName". hadoop.rpc.socket.factory.class.ClientProtocol SocketFactory to use to connect to a DFS. If null or empty, use hadoop.rpc.socket.class.default. This socket factory is also used by DFSClient to create sockets to DataNodes. hadoop.socks.server Address (host:port) of the SOCKS server to be used by the SocksSocketFactory. net.topology.node.switch.mapping.impl org.apache.hadoop.net.ScriptBasedMapping The default implementation of the DNSToSwitchMapping. It invokes a script specified in net.topology.script.file.name to resolve node names. If the value for net.topology.script.file.name is not set, the default value of DEFAULT_RACK is returned for all node names. net.topology.impl org.apache.hadoop.net.NetworkTopology The default implementation of NetworkTopology which is classic three layer one. net.topology.script.file.name The script name that should be invoked to resolve DNS names to NetworkTopology names. Example: the script would take host.foo.bar as an argument, and return /rack1 as the output. net.topology.script.number.args 100 The max number of args that the script configured with net.topology.script.file.name should be run with. Each arg is an IP address. net.topology.table.file.name The file name for a topology file, which is used when the net.topology.node.switch.mapping.impl property is set to org.apache.hadoop.net.TableMapping. The file format is a two column text file, with columns separated by whitespace. The first column is a DNS or IP address and the second column specifies the rack where the address maps. If no entry corresponding to a host in the cluster is found, then /default-rack is assumed. file.stream-buffer-size 4096 The size of buffer to stream files. The size of this buffer should probably be a multiple of hardware page size (4096 on Intel x86), and it determines how much data is buffered during read and write operations. file.bytes-per-checksum 512 The number of bytes per checksum. Must not be larger than file.stream-buffer-size file.client-write-packet-size 65536 Packet size for clients to write file.blocksize 67108864 Block size file.replication 1 Replication factor s3.stream-buffer-size 4096 The size of buffer to stream files. 
The size of this buffer should probably be a multiple of hardware page size (4096 on Intel x86), and it determines how much data is buffered during read and write operations. s3.bytes-per-checksum 512 The number of bytes per checksum. Must not be larger than s3.stream-buffer-size. s3.client-write-packet-size 65536 Packet size for clients to write. s3.blocksize 67108864 Block size. s3.replication 3 Replication factor. s3native.stream-buffer-size 4096 The size of buffer to stream files. The size of this buffer should probably be a multiple of hardware page size (4096 on Intel x86), and it determines how much data is buffered during read and write operations. s3native.bytes-per-checksum 512 The number of bytes per checksum. Must not be larger than s3native.stream-buffer-size. s3native.client-write-packet-size 65536 Packet size for clients to write. s3native.blocksize 67108864 Block size. s3native.replication 3 Replication factor. ftp.stream-buffer-size 4096 The size of buffer to stream files. The size of this buffer should probably be a multiple of hardware page size (4096 on Intel x86), and it determines how much data is buffered during read and write operations. ftp.bytes-per-checksum 512 The number of bytes per checksum. Must not be larger than ftp.stream-buffer-size. ftp.client-write-packet-size 65536 Packet size for clients to write. ftp.blocksize 67108864 Block size. ftp.replication 3 Replication factor. tfile.io.chunk.size 1048576 Value chunk size in bytes. Defaults to 1MB. Values shorter than the chunk size are guaranteed to have a known value length at read time (See also TFile.Reader.Scanner.Entry.isValueLengthKnown()). tfile.fs.output.buffer.size 262144 Buffer size used for FSDataOutputStream in bytes. tfile.fs.input.buffer.size 262144 Buffer size used for FSDataInputStream in bytes. hadoop.http.authentication.type simple Defines authentication used for Oozie HTTP endpoint. Supported values are: simple | kerberos | #AUTHENTICATION_HANDLER_CLASSNAME# hadoop.http.authentication.token.validity 36000 Indicates how long (in seconds) an authentication token is valid before it has to be renewed. hadoop.http.authentication.signature.secret.file ${user.home}/hadoop-http-auth-signature-secret The signature secret for signing the authentication tokens. The same secret should be used for JT/NN/DN/TT configurations. hadoop.http.authentication.cookie.domain The domain to use for the HTTP cookie that stores the authentication token. In order for authentication to work correctly across all Hadoop nodes' web-consoles the domain must be correctly set. IMPORTANT: when using IP addresses, browsers ignore cookies with domain settings. For this setting to work properly all nodes in the cluster must be configured to generate URLs with hostname.domain names in them. hadoop.http.authentication.simple.anonymous.allowed true Indicates if anonymous requests are allowed when using 'simple' authentication. hadoop.http.authentication.kerberos.principal HTTP/_HOST@LOCALHOST Indicates the Kerberos principal to be used for the HTTP endpoint. The principal MUST start with 'HTTP/' as per the Kerberos HTTP SPNEGO specification. hadoop.http.authentication.kerberos.keytab ${user.home}/hadoop.keytab Location of the keytab file with the credentials for the principal. Referring to the same keytab file Oozie uses for its Kerberos credentials for Hadoop. Enable/disable the cross-origin (CORS) filter. hadoop.http.cross-origin.enabled false Comma-separated list of origins that are allowed for web services needing cross-origin (CORS) support.
Wildcards (*) and patterns are allowed. hadoop.http.cross-origin.allowed-origins * Comma-separated list of methods that are allowed for web services needing cross-origin (CORS) support. hadoop.http.cross-origin.allowed-methods GET,POST,HEAD Comma-separated list of headers that are allowed for web services needing cross-origin (CORS) support. hadoop.http.cross-origin.allowed-headers X-Requested-With,Content-Type,Accept,Origin The number of seconds a pre-flighted request can be cached for web services needing cross-origin (CORS) support. hadoop.http.cross-origin.max-age 1800 dfs.ha.fencing.methods List of fencing methods to use for service fencing. May contain builtin methods (e.g. shell and sshfence) or a user-defined method. dfs.ha.fencing.ssh.connect-timeout 30000 SSH connection timeout, in milliseconds, to use with the builtin sshfence fencer. dfs.ha.fencing.ssh.private-key-files The SSH private key files to use with the builtin sshfence fencer. The user name to filter as, on static web filters while rendering content. An example use is the HDFS web UI (user to be used for browsing files). hadoop.http.staticuser.user dr.who ha.zookeeper.quorum A list of ZooKeeper server addresses, separated by commas, that are to be used by the ZKFailoverController in automatic failover. ha.zookeeper.session-timeout.ms 5000 The session timeout to use when the ZKFC connects to ZooKeeper. Setting this to a lower value implies that server crashes will be detected more quickly, but risks triggering failover too aggressively in the case of a transient error or network blip. ha.zookeeper.parent-znode /hadoop-ha The ZooKeeper znode under which the ZK failover controller stores its information. Note that the nameservice ID is automatically appended to this znode, so it is not normally necessary to configure this, even in a federated environment. ha.zookeeper.acl world:anyone:rwcda A comma-separated list of ZooKeeper ACLs to apply to the znodes used by automatic failover. These ACLs are specified in the same format as used by the ZooKeeper CLI. If the ACL itself contains secrets, you may instead specify a path to a file, prefixed with the '@' symbol, and the value of this configuration will be loaded from within. ha.zookeeper.auth A comma-separated list of ZooKeeper authentications to add when connecting to ZooKeeper. These are specified in the same format as used by the "addauth" command in the ZK CLI. It is important that the authentications specified here are sufficient to access znodes with the ACL specified in ha.zookeeper.acl. If the auths contain secrets, you may instead specify a path to a file, prefixed with the '@' symbol, and the value of this configuration will be loaded from within. hadoop.ssl.keystores.factory.class org.apache.hadoop.security.ssl.FileBasedKeyStoresFactory The keystores factory to use for retrieving certificates. hadoop.ssl.require.client.cert false Whether client certificates are required. hadoop.ssl.hostname.verifier DEFAULT The hostname verifier to provide for HttpsURLConnections. Valid values are: DEFAULT, STRICT, STRICT_IE6, DEFAULT_AND_LOCALHOST and ALLOW_ALL. hadoop.ssl.server.conf ssl-server.xml Resource file from which ssl server keystore information will be extracted. This file is looked up in the classpath; typically it should be in the Hadoop conf/ directory. hadoop.ssl.client.conf ssl-client.xml Resource file from which ssl client keystore information will be extracted. This file is looked up in the classpath; typically it should be in the Hadoop conf/ directory.
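As an illustration of the ZKFailoverController settings just described, an override in core-site.xml might look like the following sketch; the ZooKeeper host names and the tighter timeout are placeholders chosen for the example, not values shipped with this plugin:

<configuration>
  <!-- ZooKeeper ensemble used by the ZKFailoverController (placeholder hosts) -->
  <property>
    <name>ha.zookeeper.quorum</name>
    <value>zk1.example.com:2181,zk2.example.com:2181,zk3.example.com:2181</value>
  </property>
  <!-- Detect ZKFC session loss faster than the 5000 ms default; per the
       description above, lower values also risk failover on a network blip -->
  <property>
    <name>ha.zookeeper.session-timeout.ms</name>
    <value>3000</value>
  </property>
</configuration>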
hadoop.ssl.enabled false Deprecated. Use dfs.http.policy and yarn.http.policy instead. hadoop.ssl.enabled.protocols TLSv1 Protocols supported by SSL. hadoop.jetty.logs.serve.aliases true Enable/disable serving aliases from jetty. fs.permissions.umask-mode 022 The umask used when creating files and directories. Can be in octal or symbolic form. Examples are: "022" (octal for u=rwx,g=r-x,o=r-x in symbolic), or "u=rwx,g=rwx,o=" (symbolic for 007 in octal). ha.health-monitor.connect-retry-interval.ms 1000 How often to retry connecting to the service. ha.health-monitor.check-interval.ms 1000 How often to check the service. ha.health-monitor.sleep-after-disconnect.ms 1000 How long to sleep after an unexpected RPC error. ha.health-monitor.rpc-timeout.ms 45000 Timeout for the actual monitorHealth() calls. ha.failover-controller.new-active.rpc-timeout.ms 60000 Timeout that the FC waits for the new active to become active. ha.failover-controller.graceful-fence.rpc-timeout.ms 5000 Timeout that the FC waits for the old active to go to standby. ha.failover-controller.graceful-fence.connection.retries 1 FC connection retries for graceful fencing. ha.failover-controller.cli-check.rpc-timeout.ms 20000 Timeout that the CLI (manual) FC waits for monitorHealth, getServiceState. ipc.client.fallback-to-simple-auth-allowed false When a client is configured to attempt a secure connection, but attempts to connect to an insecure server, that server may instruct the client to switch to SASL SIMPLE (insecure) authentication. This setting controls whether or not the client will accept this instruction from the server. When false (the default), the client will not allow the fallback to SIMPLE authentication, and will abort the connection. fs.client.resolve.remote.symlinks true Whether to resolve symlinks when accessing a remote Hadoop filesystem. Setting this to false causes an exception to be thrown upon encountering a symlink. This setting does not apply to local filesystems, which automatically resolve local symlinks. nfs.exports.allowed.hosts * rw By default, the export can be mounted by any client. The value string contains machine name and access privilege, separated by whitespace characters. The machine name format can be a single host, a Java regular expression, or an IPv4 address. The access privilege uses rw or ro to specify read/write or read-only access of the machines to exports. If the access privilege is not provided, the default is read-only. Entries are separated by ";". For example: "192.168.0.0/22 rw ; host.*\.example\.com ; host1.test.org ro;". Only the NFS gateway needs to restart after this property is updated. hadoop.user.group.static.mapping.overrides dr.who=; Static mapping of user to groups. This will override the groups if available in the system for the specified user. In other words, group look-up will not happen for these users; instead, the groups mapped in this configuration will be used. The mapping should be in this format: user1=group1,group2;user2=;user3=group2; The default, "dr.who=;", considers "dr.who" a user without groups. rpc.metrics.quantile.enable false If this property is set to true and rpc.metrics.percentiles.intervals is set to a comma-separated list of granularities in seconds, the 50/75/90/95/99th percentile latency for rpc queue/processing time in milliseconds is added to the rpc metrics. rpc.metrics.percentiles.intervals A comma-separated list of the granularity in seconds for the metrics which describe the 50/75/90/95/99th percentile latency for rpc queue/processing time.
The metrics are output if rpc.metrics.quantile.enable is set to true. hadoop.security.crypto.codec.classes.EXAMPLECIPHERSUITE The prefix for a given crypto codec; contains a comma-separated list of implementation classes for a given crypto codec (e.g. EXAMPLECIPHERSUITE). The first implementation will be used if available, others are fallbacks. hadoop.security.crypto.codec.classes.aes.ctr.nopadding org.apache.hadoop.crypto.OpensslAesCtrCryptoCodec,org.apache.hadoop.crypto.JceAesCtrCryptoCodec Comma-separated list of crypto codec implementations for AES/CTR/NoPadding. The first implementation will be used if available, others are fallbacks. hadoop.security.crypto.cipher.suite AES/CTR/NoPadding Cipher suite for crypto codec. hadoop.security.crypto.jce.provider The JCE provider name used in CryptoCodec. hadoop.security.crypto.buffer.size 8192 The buffer size used by CryptoInputStream and CryptoOutputStream. hadoop.security.java.secure.random.algorithm SHA1PRNG The java secure random algorithm. hadoop.security.secure.random.impl Implementation of secure random. hadoop.security.random.device.file.path /dev/urandom OS security random device file path. fs.har.impl.disable.cache true Don't cache 'har' filesystem instances. hadoop.security.kms.client.authentication.retry-count 1 Number of times to retry connecting to the KMS on authentication failure. hadoop.security.kms.client.encrypted.key.cache.size 500 Size of the EncryptedKeyVersion cache Queue for each key. hadoop.security.kms.client.encrypted.key.cache.low-watermark 0.3f If the size of the EncryptedKeyVersion cache Queue falls below the low watermark, this cache queue will be scheduled for a refill. hadoop.security.kms.client.encrypted.key.cache.num.refill.threads 2 Number of threads to use for refilling depleted EncryptedKeyVersion cache Queues. hadoop.security.kms.client.encrypted.key.cache.expiry 43200000 Cache expiry time for a Key, after which the cache Queue for this key will be dropped. Default = 12hrs. hadoop.htrace.spanreceiver.classes A comma-separated list of the fully-qualified class names of classes implementing SpanReceiver. The tracing system works by collecting information in structs called 'Spans'. It is up to you to choose how you want to receive this information by implementing the SpanReceiver interface. ipc.server.max.connections 0 The maximum number of concurrent connections a server is allowed to accept. If this limit is exceeded, incoming connections will first fill the listen queue and then may go to an OS-specific listen overflow queue. The client may fail or time out, but the server can avoid running out of file descriptors using this feature. 0 means no limit. Is the registry enabled in the YARN Resource Manager? If true, the YARN RM will, as needed, create the user and system paths, and purge service records when containers, application attempts and applications complete. If false, the paths must be created by other means, and no automatic cleanup of service records will take place. hadoop.registry.rm.enabled false The root zookeeper node for the registry hadoop.registry.zk.root /registry Zookeeper session timeout in milliseconds hadoop.registry.zk.session.timeout.ms 60000 Zookeeper connection timeout in milliseconds hadoop.registry.zk.connection.timeout.ms 15000 Zookeeper connection retry count before failing hadoop.registry.zk.retry.times 5 hadoop.registry.zk.retry.interval.ms 1000 Zookeeper retry limit in milliseconds, during exponential backoff.
This places a limit even if the retry times and interval limit, combined with the backoff policy, result in a long retry period hadoop.registry.zk.retry.ceiling.ms 60000 List of hostname:port pairs defining the zookeeper quorum binding for the registry hadoop.registry.zk.quorum localhost:2181 Key to set if the registry is secure. Turning it on changes the permissions policy from "open access" to restrictions on kerberos with the option of a user adding one or more auth key pairs down their own tree. hadoop.registry.secure false A comma separated list of Zookeeper ACL identifiers with system access to the registry in a secure cluster. These are given full access to all entries. If there is an "@" at the end of a SASL entry it instructs the registry client to append the default kerberos domain. hadoop.registry.system.acls sasl:yarn@, sasl:mapred@, sasl:hdfs@ The kerberos realm: used to set the realm of system principals which do not declare their realm, and any other accounts that need the value. If empty, the default realm of the running process is used. If neither are known and the realm is needed, then the registry service/client will fail. hadoop.registry.kerberos.realm Key to define the JAAS context. Used in secure mode hadoop.registry.jaas.context Client hadoop.security.sensitive-config-keys password$,fs.s3.*[Ss]ecret.?[Kk]ey,fs.azure.account.key.*,dfs.webhdfs.oauth2.[a-z]+.token,hadoop.security.sensitive-config-keys A comma-separated list of regular expressions to match against configuration keys that should be redacted where appropriate, for example, when logging modified properties during a reconfiguration, private credentials should not be logged. ././@PaxHeader0000000000000000000000000000020600000000000011453 xustar0000000000000000112 path=sahara-plugin-vanilla-10.0.0/sahara_plugin_vanilla/plugins/vanilla/v2_7_5/resources/create_hive_db.sql 22 mtime=1696419316.0 sahara-plugin-vanilla-10.0.0/sahara_plugin_vanilla/plugins/vanilla/v2_7_5/resources/create_hive_db.s0000664000175000017500000000066100000000000033551 0ustar00zuulzuul00000000000000CREATE DATABASE metastore; USE metastore; SOURCE /opt/hive/scripts/metastore/upgrade/mysql/hive-schema-2.3.0.mysql.sql; CREATE USER 'hive'@'localhost' IDENTIFIED BY '{{password}}'; REVOKE ALL PRIVILEGES, GRANT OPTION FROM 'hive'@'localhost'; GRANT ALL PRIVILEGES ON metastore.* TO 'hive'@'localhost' IDENTIFIED BY '{{password}}'; GRANT ALL PRIVILEGES ON metastore.* TO 'hive'@'%' IDENTIFIED BY '{{password}}'; FLUSH PRIVILEGES; exit ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1696419316.0 sahara-plugin-vanilla-10.0.0/sahara_plugin_vanilla/plugins/vanilla/v2_7_5/resources/hdfs-default.xml0000664000175000017500000027122500000000000033540 0ustar00zuulzuul00000000000000 hadoop.hdfs.configuration.version 1 version of this configuration file dfs.namenode.rpc-address RPC address that handles all clients requests. In the case of HA/Federation where multiple namenodes exist, the name service id is added to the name e.g. dfs.namenode.rpc-address.ns1 dfs.namenode.rpc-address.EXAMPLENAMESERVICE The value of this property will take the form of nn-host1:rpc-port. dfs.namenode.rpc-bind-host The actual address the RPC server will bind to. If this optional address is set, it overrides only the hostname portion of dfs.namenode.rpc-address. It can also be specified per name node or name service for HA/Federation. This is useful for making the name node listen on all interfaces by setting it to 0.0.0.0. 
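A minimal hdfs-site.xml sketch of the rpc-address/rpc-bind-host pattern just described; the host name and port are illustrative placeholders, and binding to 0.0.0.0 follows the "listen on all interfaces" note above:

<configuration>
  <!-- Advertised NameNode RPC endpoint (placeholder host and port) -->
  <property>
    <name>dfs.namenode.rpc-address</name>
    <value>namenode1.example.com:8020</value>
  </property>
  <!-- Bind the RPC server to all interfaces while keeping the advertised
       hostname above unchanged -->
  <property>
    <name>dfs.namenode.rpc-bind-host</name>
    <value>0.0.0.0</value>
  </property>
</configuration>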
dfs.namenode.servicerpc-address RPC address for HDFS Services communication. BackupNode, Datanodes and all other services should connect to this address if it is configured. In the case of HA/Federation where multiple namenodes exist, the name service id is added to the name e.g. dfs.namenode.servicerpc-address.ns1 dfs.namenode.rpc-address.EXAMPLENAMESERVICE The value of this property will take the form of nn-host1:rpc-port. If the value of this property is unset the value of dfs.namenode.rpc-address will be used as the default. dfs.namenode.servicerpc-bind-host The actual address the service RPC server will bind to. If this optional address is set, it overrides only the hostname portion of dfs.namenode.servicerpc-address. It can also be specified per name node or name service for HA/Federation. This is useful for making the name node listen on all interfaces by setting it to 0.0.0.0. dfs.namenode.secondary.http-address 0.0.0.0:50090 The secondary namenode http server address and port. dfs.namenode.secondary.https-address 0.0.0.0:50091 The secondary namenode HTTPS server address and port. dfs.datanode.address 0.0.0.0:50010 The datanode server address and port for data transfer. dfs.datanode.http.address 0.0.0.0:50075 The datanode http server address and port. dfs.datanode.ipc.address 0.0.0.0:50020 The datanode ipc server address and port. dfs.datanode.handler.count 10 The number of server threads for the datanode. dfs.namenode.http-address 0.0.0.0:50070 The address and the base port on which the dfs namenode web ui will listen. dfs.namenode.http-bind-host The actual address the HTTP server will bind to. If this optional address is set, it overrides only the hostname portion of dfs.namenode.http-address. It can also be specified per name node or name service for HA/Federation. This is useful for making the name node HTTP server listen on all interfaces by setting it to 0.0.0.0. dfs.namenode.heartbeat.recheck-interval 300000 This time decides the interval to check for expired datanodes. Together with dfs.heartbeat.interval, it also determines the interval for deciding whether a datanode is stale. The unit of this configuration is milliseconds. dfs.http.policy HTTP_ONLY Decide if HTTPS (SSL) is supported on HDFS. This configures the HTTP endpoint for HDFS daemons. The following values are supported: - HTTP_ONLY : Service is provided only on http - HTTPS_ONLY : Service is provided only on https - HTTP_AND_HTTPS : Service is provided both on http and https dfs.client.https.need-auth false Whether SSL client certificate authentication is required. dfs.client.cached.conn.retry 3 The number of times the HDFS client will pull a socket from the cache. Once this number is exceeded, the client will try to create a new socket. dfs.https.server.keystore.resource ssl-server.xml Resource file from which ssl server keystore information will be extracted. dfs.client.https.keystore.resource ssl-client.xml Resource file from which ssl client keystore information will be extracted. dfs.datanode.https.address 0.0.0.0:50475 The datanode secure http server address and port. dfs.namenode.https-address 0.0.0.0:50470 The namenode secure http server address and port. dfs.namenode.https-bind-host The actual address the HTTPS server will bind to. If this optional address is set, it overrides only the hostname portion of dfs.namenode.https-address. It can also be specified per name node or name service for HA/Federation.
This is useful for making the name node HTTPS server listen on all interfaces by setting it to 0.0.0.0. dfs.datanode.dns.interface default The name of the Network Interface from which a data node should report its IP address. dfs.datanode.dns.nameserver default The host name or IP address of the name server (DNS) which a DataNode should use to determine the host name used by the NameNode for communication and display purposes. dfs.namenode.backup.address 0.0.0.0:50100 The backup node server address and port. If the port is 0 then the server will start on a free port. dfs.namenode.backup.http-address 0.0.0.0:50105 The backup node http server address and port. If the port is 0 then the server will start on a free port. dfs.namenode.replication.considerLoad true Decide if chooseTarget considers the target's load or not. dfs.default.chunk.view.size 32768 The number of bytes to view for a file on the browser. dfs.datanode.du.reserved 0 Reserved space in bytes per volume. Always leave this much space free for non-dfs use. Specific storage type based reservation is also supported. The property can be followed with corresponding storage types ([ssd]/[disk]/[archive]/[ram_disk]) for cluster with heterogeneous storage. For example, reserved space for RAM_DISK storage can be configured using property 'dfs.datanode.du.reserved.ram_disk'. If specific storage type reservation is not configured then dfs.datanode.du.reserved will be used. dfs.namenode.name.dir file://${hadoop.tmp.dir}/dfs/name Determines where on the local filesystem the DFS name node should store the name table (fsimage). If this is a comma-delimited list of directories then the name table is replicated in all of the directories, for redundancy. dfs.namenode.name.dir.restore false Set to true to enable the NameNode to attempt recovering a previously failed dfs.namenode.name.dir. When enabled, a recovery of any failed directory is attempted during checkpoint. dfs.namenode.fs-limits.max-component-length 255 Defines the maximum number of bytes in UTF-8 encoding in each component of a path. A value of 0 will disable the check. dfs.namenode.fs-limits.max-directory-items 1048576 Defines the maximum number of items that a directory may contain. Cannot set the property to a value less than 1 or more than 6400000. dfs.namenode.fs-limits.min-block-size 1048576 Minimum block size in bytes, enforced by the Namenode at create time. This prevents the accidental creation of files with tiny block sizes (and thus many blocks), which can degrade performance. dfs.namenode.fs-limits.max-blocks-per-file 1048576 Maximum number of blocks per file, enforced by the Namenode on write. This prevents the creation of extremely large files which can degrade performance. dfs.namenode.edits.dir ${dfs.namenode.name.dir} Determines where on the local filesystem the DFS name node should store the transaction (edits) file. If this is a comma-delimited list of directories then the transaction file is replicated in all of the directories, for redundancy. The default value is the same as dfs.namenode.name.dir. dfs.namenode.shared.edits.dir A directory on shared storage between the multiple namenodes in an HA cluster. This directory will be written by the active and read by the standby in order to keep the namespaces synchronized. This directory does not need to be listed in dfs.namenode.edits.dir above. It should be left empty in a non-HA cluster.
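To illustrate the comma-delimited redundancy described for dfs.namenode.name.dir, a hypothetical hdfs-site.xml override might look like this; the mount points are placeholders:

<configuration>
  <!-- Replicate the name table (fsimage) across two local directories,
       ideally on different devices, for redundancy -->
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>file:///data/1/dfs/nn,file:///data/2/dfs/nn</value>
  </property>
</configuration>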
dfs.namenode.edits.journal-plugin.qjournal org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager dfs.permissions.enabled true If "true", enable permission checking in HDFS. If "false", permission checking is turned off, but all other behavior is unchanged. Switching from one parameter value to the other does not change the mode, owner or group of files or directories. dfs.permissions.superusergroup supergroup The name of the group of super-users. The value should be a single group name. dfs.cluster.administrators ACL for the admins; this configuration is used to control who can access the default servlets in the namenode, etc. The value should be a comma separated list of users and groups. The user list comes first and is separated by a space followed by the group list, e.g. "user1,user2 group1,group2". Both users and groups are optional, so "user1", " group1", "", "user1 group1", "user1,user2 group1,group2" are all valid (note the leading space in " group1"). '*' grants access to all users and groups, e.g. '*', '* ' and ' *' are all valid. dfs.namenode.acls.enabled false Set to true to enable support for HDFS ACLs (Access Control Lists). By default, ACLs are disabled. When ACLs are disabled, the NameNode rejects all RPCs related to setting or getting ACLs. dfs.namenode.lazypersist.file.scrub.interval.sec 300 The NameNode periodically scans the namespace for LazyPersist files with missing blocks and unlinks them from the namespace. This configuration key controls the interval between successive scans. Set it to a negative value to disable this behavior. dfs.block.access.token.enable false If "true", access tokens are used as capabilities for accessing datanodes. If "false", no access tokens are checked on accessing datanodes. dfs.block.access.key.update.interval 600 Interval in minutes at which namenode updates its access keys. dfs.block.access.token.lifetime 600 The lifetime of access tokens in minutes. dfs.datanode.data.dir file://${hadoop.tmp.dir}/dfs/data Determines where on the local filesystem a DFS data node should store its blocks. If this is a comma-delimited list of directories, then data will be stored in all named directories, typically on different devices. The directories should be tagged with corresponding storage types ([SSD]/[DISK]/[ARCHIVE]/[RAM_DISK]) for HDFS storage policies. The default storage type will be DISK if the directory does not have a storage type tagged explicitly. Directories that do not exist will be created if local filesystem permission allows. dfs.datanode.data.dir.perm 700 Permissions for the directories on the local filesystem where the DFS data node stores its blocks. The permissions can either be octal or symbolic. dfs.replication 3 Default block replication. The actual number of replications can be specified when the file is created. The default is used if replication is not specified at create time. dfs.replication.max 512 Maximal block replication. dfs.namenode.replication.min 1 Minimal block replication. dfs.blocksize 134217728 The default block size for new files, in bytes. You can use the following suffix (case insensitive): k(kilo), m(mega), g(giga), t(tera), p(peta), e(exa) to specify the size (such as 128k, 512m, 1g, etc.), or provide the complete size in bytes (such as 134217728 for 128 MB). dfs.client.block.write.retries 3 The number of retries for writing blocks to the data nodes, before we signal failure to the application.
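A short, hypothetical hdfs-site.xml override illustrating the dfs.replication and dfs.blocksize settings above, using the case-insensitive suffix form the description permits; the values are examples only:

<configuration>
  <!-- Keep two replicas instead of the default three -->
  <property>
    <name>dfs.replication</name>
    <value>2</value>
  </property>
  <!-- 256 MB blocks, written with the suffix form (equivalent to 268435456) -->
  <property>
    <name>dfs.blocksize</name>
    <value>256m</value>
  </property>
</configuration>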
dfs.client.block.write.replace-datanode-on-failure.enable true If there is a datanode/network failure in the write pipeline, DFSClient will try to remove the failed datanode from the pipeline and then continue writing with the remaining datanodes. As a result, the number of datanodes in the pipeline is decreased. The feature is to add new datanodes to the pipeline. This is a site-wide property to enable/disable the feature. When the cluster size is extremely small, e.g. 3 nodes or less, cluster administrators may want to set the policy to NEVER in the default configuration file or disable this feature. Otherwise, users may experience an unusually high rate of pipeline failures since it is impossible to find new datanodes for replacement. See also dfs.client.block.write.replace-datanode-on-failure.policy. dfs.client.block.write.replace-datanode-on-failure.policy DEFAULT This property is used only if the value of dfs.client.block.write.replace-datanode-on-failure.enable is true. ALWAYS: always add a new datanode when an existing datanode is removed. NEVER: never add a new datanode. DEFAULT: Let r be the replication number. Let n be the number of existing datanodes. Add a new datanode only if r is greater than or equal to 3 and either (1) floor(r/2) is greater than or equal to n; or (2) r is greater than n and the block is hflushed/appended. dfs.client.block.write.replace-datanode-on-failure.best-effort false This property is used only if the value of dfs.client.block.write.replace-datanode-on-failure.enable is true. Best effort means that the client will try to replace a failed datanode in the write pipeline (provided that the policy is satisfied), however, it continues the write operation even if the datanode replacement fails. Suppose the datanode replacement fails. false: An exception should be thrown so that the write will fail. true: The write should be resumed with the remaining datanodes. Note that setting this property to true allows writing to a pipeline with a smaller number of datanodes. As a result, it increases the probability of data loss. dfs.blockreport.intervalMsec 21600000 Determines block reporting interval in milliseconds. dfs.blockreport.initialDelay 0 Delay for first block report in seconds. dfs.blockreport.split.threshold 1000000 If the number of blocks on the DataNode is below this threshold then it will send block reports for all Storage Directories in a single message. If the number of blocks exceeds this threshold then the DataNode will send block reports for each Storage Directory in separate messages. Set to zero to always split. dfs.datanode.directoryscan.interval 21600 Interval in seconds for Datanode to scan data directories and reconcile the difference between blocks in memory and on the disk. dfs.datanode.directoryscan.threads 1 The number of threads in the threadpool used to compile reports for volumes in parallel. dfs.datanode.directoryscan.throttle.limit.ms.per.sec 0 The report compilation threads are limited to only running for a given number of milliseconds per second, as configured by the property. The limit is taken per thread, not in aggregate, e.g. setting a limit of 100ms for 4 compiler threads will result in each thread being limited to 100ms, not 25ms. Note that the throttle does not interrupt the report compiler threads, so the actual running time of the threads per second will typically be somewhat higher than the throttle limit, usually by no more than 20%. Setting this limit to 1000 disables compiler thread throttling.
Only values between 1 and 1000 are valid. Setting an invalid value will result in the throttle being disabled and an error message being logged. 1000 is the default setting. dfs.heartbeat.interval 3 Determines datanode heartbeat interval in seconds. dfs.namenode.handler.count 10 The number of server threads for the namenode. dfs.namenode.safemode.threshold-pct 0.999f Specifies the percentage of blocks that should satisfy the minimal replication requirement defined by dfs.namenode.replication.min. Values less than or equal to 0 mean not to wait for any particular percentage of blocks before exiting safemode. Values greater than 1 will make safe mode permanent. dfs.namenode.safemode.min.datanodes 0 Specifies the number of datanodes that must be considered alive before the name node exits safemode. Values less than or equal to 0 mean not to take the number of live datanodes into account when deciding whether to remain in safe mode during startup. Values greater than the number of datanodes in the cluster will make safe mode permanent. dfs.namenode.safemode.extension 30000 Determines the extension of safe mode in milliseconds after the threshold level is reached. dfs.namenode.resource.check.interval 5000 The interval in milliseconds at which the NameNode resource checker runs. The checker calculates the number of the NameNode storage volumes whose available space is more than dfs.namenode.resource.du.reserved, and enters safemode if the number becomes lower than the minimum value specified by dfs.namenode.resource.checked.volumes.minimum. dfs.namenode.resource.du.reserved 104857600 The amount of space to reserve/require for a NameNode storage directory in bytes. The default is 100MB. dfs.namenode.resource.checked.volumes A list of local directories for the NameNode resource checker to check in addition to the local edits directories. dfs.namenode.resource.checked.volumes.minimum 1 The minimum number of redundant NameNode storage volumes required. dfs.datanode.balance.bandwidthPerSec 1048576 Specifies the maximum amount of bandwidth that each datanode can utilize for balancing purposes in terms of the number of bytes per second. dfs.mover.max-no-move-interval 60000 If this specified amount of time has elapsed and no block has been moved out of a source DataNode, no more effort will be made to move blocks out of this DataNode in the current Mover iteration. dfs.hosts Names a file that contains a list of hosts that are permitted to connect to the namenode. The full pathname of the file must be specified. If the value is empty, all hosts are permitted. dfs.hosts.exclude Names a file that contains a list of hosts that are not permitted to connect to the namenode. The full pathname of the file must be specified. If the value is empty, no hosts are excluded. dfs.namenode.max.objects 0 The maximum number of files, directories and blocks dfs supports. A value of zero indicates no limit to the number of objects that dfs supports. dfs.namenode.datanode.registration.ip-hostname-check true If true (the default), then the namenode requires that a connecting datanode's address must be resolved to a hostname. If necessary, a reverse DNS lookup is performed. All attempts to register a datanode from an unresolvable address are rejected. It is recommended that this setting be left on to prevent accidental registration of datanodes listed by hostname in the excludes file during a DNS outage. Only set this to false in environments where there is no infrastructure to support reverse DNS lookup.
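To illustrate the dfs.hosts and dfs.hosts.exclude settings above, a sketch with placeholder paths; per the descriptions, the full pathname of each file is required:

<configuration>
  <!-- File listing hosts permitted to connect to the namenode (placeholder path) -->
  <property>
    <name>dfs.hosts</name>
    <value>/etc/hadoop/conf/dfs.include</value>
  </property>
  <!-- File listing hosts that are not permitted to connect (placeholder path) -->
  <property>
    <name>dfs.hosts.exclude</name>
    <value>/etc/hadoop/conf/dfs.exclude</value>
  </property>
</configuration>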
dfs.namenode.decommission.interval 30 Namenode periodicity in seconds to check if decommission is complete. dfs.namenode.decommission.blocks.per.interval 500000 The approximate number of blocks to process per decommission interval, as defined in dfs.namenode.decommission.interval. dfs.namenode.decommission.max.concurrent.tracked.nodes 100 The maximum number of decommission-in-progress datanodes that will be tracked at one time by the namenode. Tracking a decommission-in-progress datanode consumes additional NN memory proportional to the number of blocks on the datanode. Having a conservative limit reduces the potential impact of decommissioning a large number of nodes at once. A value of 0 means no limit will be enforced. dfs.namenode.replication.interval 3 The periodicity in seconds with which the namenode computes replication work for datanodes. dfs.namenode.accesstime.precision 3600000 The access time for an HDFS file is precise up to this value. The default value is 1 hour. Setting a value of 0 disables access times for HDFS. dfs.datanode.plugins Comma-separated list of datanode plug-ins to be activated. dfs.namenode.plugins Comma-separated list of namenode plug-ins to be activated. dfs.namenode.block-placement-policy.default.prefer-local-node true Controls how the default block placement policy places the first replica of a block. When true, it will prefer the node where the client is running. When false, it will prefer a node in the same rack as the client. Setting to false avoids situations where entire copies of large files end up on a single node, thus creating hotspots. dfs.stream-buffer-size 4096 The size of buffer to stream files. The size of this buffer should probably be a multiple of hardware page size (4096 on Intel x86), and it determines how much data is buffered during read and write operations. dfs.bytes-per-checksum 512 The number of bytes per checksum. Must not be larger than dfs.stream-buffer-size. dfs.client-write-packet-size 65536 Packet size for clients to write. dfs.client.write.exclude.nodes.cache.expiry.interval.millis 600000 The maximum period to keep a DN in the excluded nodes list at a client. After this period, in milliseconds, the previously excluded node(s) will be removed automatically from the cache and will be considered good for block allocations again. Useful to lower or raise in situations where you keep a file open for very long periods (such as a Write-Ahead-Log (WAL) file) to make the writer tolerant to cluster maintenance restarts. Defaults to 10 minutes. dfs.namenode.checkpoint.dir file://${hadoop.tmp.dir}/dfs/namesecondary Determines where on the local filesystem the DFS secondary name node should store the temporary images to merge. If this is a comma-delimited list of directories then the image is replicated in all of the directories for redundancy. dfs.namenode.checkpoint.edits.dir ${dfs.namenode.checkpoint.dir} Determines where on the local filesystem the DFS secondary name node should store the temporary edits to merge. If this is a comma-delimited list of directories then the edits are replicated in all of the directories for redundancy. The default value is the same as dfs.namenode.checkpoint.dir. dfs.namenode.checkpoint.period 3600 The number of seconds between two periodic checkpoints. dfs.namenode.checkpoint.txns 1000000 The Secondary NameNode or CheckpointNode will create a checkpoint of the namespace every 'dfs.namenode.checkpoint.txns' transactions, regardless of whether 'dfs.namenode.checkpoint.period' has expired.
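An illustrative tightening of the checkpoint schedule just described; the values are examples for the sketch, not recommendations:

<configuration>
  <!-- Checkpoint every 30 minutes instead of the default hour... -->
  <property>
    <name>dfs.namenode.checkpoint.period</name>
    <value>1800</value>
  </property>
  <!-- ...or sooner, once 500000 uncheckpointed transactions accumulate
       (whichever limit is reached first triggers the checkpoint) -->
  <property>
    <name>dfs.namenode.checkpoint.txns</name>
    <value>500000</value>
  </property>
</configuration>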
dfs.namenode.checkpoint.check.period 60 The SecondaryNameNode and CheckpointNode will poll the NameNode every 'dfs.namenode.checkpoint.check.period' seconds to query the number of uncheckpointed transactions. dfs.namenode.checkpoint.max-retries 3 The SecondaryNameNode retries failed checkpointing. If the failure occurs while loading fsimage or replaying edits, the number of retries is limited by this variable. dfs.namenode.num.checkpoints.retained 2 The number of image checkpoint files (fsimage_*) that will be retained by the NameNode and Secondary NameNode in their storage directories. All edit logs (stored on edits_* files) necessary to recover an up-to-date namespace from the oldest retained checkpoint will also be retained. dfs.namenode.num.extra.edits.retained 1000000 The number of extra transactions which should be retained beyond what is minimally necessary for a NN restart. It does not translate directly to a file's age, or the number of files kept, but to the number of transactions (here "edits" means transactions). One edit file may contain several transactions (edits). During checkpoint, NameNode will identify the total number of edits to retain as extra by checking the latest checkpoint transaction value, subtracted by the value of this property. Then, it scans edits files to identify the older ones that don't include the computed range of retained transactions that are to be kept around, and purges them subsequently. This retention can be useful for audit purposes or for an HA setup where a remote Standby Node may have been offline for some time and may need a longer backlog of retained edits in order to start again. Typically each edit is on the order of a few hundred bytes, so the default of 1 million edits should be on the order of hundreds of MBs or low GBs. NOTE: Fewer extra edits may be retained than the value specified for this setting if doing so would mean that more segments would be retained than the number configured by dfs.namenode.max.extra.edits.segments.retained. dfs.namenode.max.extra.edits.segments.retained 10000 The maximum number of extra edit log segments which should be retained beyond what is minimally necessary for a NN restart. When used in conjunction with dfs.namenode.num.extra.edits.retained, this configuration property serves to cap the number of extra edits files to a reasonable value. dfs.namenode.delegation.key.update-interval 86400000 The update interval for the master key for delegation tokens in the namenode in milliseconds. dfs.namenode.delegation.token.max-lifetime 604800000 The maximum lifetime in milliseconds for which a delegation token is valid. dfs.namenode.delegation.token.renew-interval 86400000 The renewal interval for delegation tokens in milliseconds. dfs.datanode.failed.volumes.tolerated 0 The number of volumes that are allowed to fail before a datanode stops offering service. By default any volume failure will cause a datanode to shut down. dfs.image.compress false Should the dfs image be compressed? dfs.image.compression.codec org.apache.hadoop.io.compress.DefaultCodec If the dfs image is compressed, how should it be compressed? This has to be a codec defined in io.compression.codecs. dfs.image.transfer.timeout 60000 Socket timeout for image transfer in milliseconds. This timeout and the related dfs.image.transfer.bandwidthPerSec parameter should be configured such that normal image transfer can complete successfully. This timeout prevents client hangs when the sender fails during image transfer.
This is the socket timeout during image transfer. dfs.image.transfer.bandwidthPerSec 0 Maximum bandwidth used for image transfer in bytes per second. This can help keep normal namenode operations responsive during checkpointing. The maximum bandwidth and timeout in dfs.image.transfer.timeout should be set such that normal image transfers can complete successfully. A default value of 0 indicates that throttling is disabled. dfs.image.transfer.chunksize 65536 Chunksize in bytes to upload the checkpoint. Chunked streaming is used to avoid internal buffering of the contents of very large image files. dfs.namenode.support.allow.format true Does the HDFS namenode allow itself to be formatted? You may consider setting this to false for any production cluster, to avoid any possibility of formatting a running DFS. dfs.datanode.max.transfer.threads 4096 Specifies the maximum number of threads to use for transferring data in and out of the DN. dfs.datanode.scan.period.hours 504 If this is positive, the DataNode will not scan any individual block more than once in the specified scan period. If this is negative, the block scanner is disabled. If this is set to zero, then the default value of 504 hours or 3 weeks is used. Prior versions of HDFS incorrectly documented that setting this key to zero will disable the block scanner. dfs.block.scanner.volume.bytes.per.second 1048576 If this is 0, the DataNode's block scanner will be disabled. If this is positive, this is the number of bytes per second that the DataNode's block scanner will try to scan from each volume. dfs.datanode.readahead.bytes 4194304 While reading block files, if the Hadoop native libraries are available, the datanode can use the posix_fadvise system call to explicitly page data into the operating system buffer cache ahead of the current reader's position. This can improve performance especially when disks are highly contended. This configuration specifies the number of bytes ahead of the current read position which the datanode will attempt to read ahead. This feature may be disabled by configuring this property to 0. If the native libraries are not available, this configuration has no effect. dfs.datanode.drop.cache.behind.reads false In some workloads, the data read from HDFS is known to be significantly large enough that it is unlikely to be useful to cache it in the operating system buffer cache. In this case, the DataNode may be configured to automatically purge all data from the buffer cache after it is delivered to the client. This behavior is automatically disabled for workloads which read only short sections of a block (e.g. HBase random-IO workloads). This may improve performance for some workloads by freeing buffer cache space usage for more cacheable data. If the Hadoop native libraries are not available, this configuration has no effect. dfs.datanode.drop.cache.behind.writes false In some workloads, the data written to HDFS is known to be significantly large enough that it is unlikely to be useful to cache it in the operating system buffer cache. In this case, the DataNode may be configured to automatically purge all data from the buffer cache after it is written to disk. This may improve performance for some workloads by freeing buffer cache space usage for more cacheable data. If the Hadoop native libraries are not available, this configuration has no effect.
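Following the production advice for dfs.namenode.support.allow.format above, a one-property hdfs-site.xml sketch:

<configuration>
  <!-- Refuse namenode formatting requests, preventing accidental
       formatting of a running DFS on a production cluster -->
  <property>
    <name>dfs.namenode.support.allow.format</name>
    <value>false</value>
  </property>
</configuration>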
dfs.datanode.sync.behind.writes false If this configuration is enabled, the datanode will instruct the operating system to enqueue all written data to the disk immediately after it is written. This differs from the usual OS policy which may wait for up to 30 seconds before triggering writeback. This may improve performance for some workloads by smoothing the IO profile for data written to disk. If the Hadoop native libraries are not available, this configuration has no effect. dfs.client.failover.max.attempts 15 Expert only. The number of client failover attempts that should be made before the failover is considered failed. dfs.client.failover.sleep.base.millis 500 Expert only. The time to wait, in milliseconds, between failover attempts increases exponentially as a function of the number of attempts made so far, with a random factor of +/- 50%. This option specifies the base value used in the failover calculation. The first failover will retry immediately. The 2nd failover attempt will delay at least dfs.client.failover.sleep.base.millis milliseconds. And so on. dfs.client.failover.sleep.max.millis 15000 Expert only. The time to wait, in milliseconds, between failover attempts increases exponentially as a function of the number of attempts made so far, with a random factor of +/- 50%. This option specifies the maximum value to wait between failovers. Specifically, the time between two failover attempts will not exceed +/- 50% of dfs.client.failover.sleep.max.millis milliseconds. dfs.client.failover.connection.retries 0 Expert only. Indicates the number of retries a failover IPC client will make to establish a server connection. dfs.client.failover.connection.retries.on.timeouts 0 Expert only. The number of retry attempts a failover IPC client will make on socket timeout when establishing a server connection. dfs.client.datanode-restart.timeout 30 Expert only. The time to wait, in seconds, from reception of a datanode shutdown notification for quick restart, until declaring the datanode dead and invoking the normal recovery mechanisms. The notification is sent by a datanode when it is being shut down using the shutdownDatanode admin command with the upgrade option. dfs.nameservices Comma-separated list of nameservices. dfs.nameservice.id The ID of this nameservice. If the nameservice ID is not configured or more than one nameservice is configured for dfs.nameservices it is determined automatically by matching the local node's address with the configured address. dfs.internal.nameservices Comma-separated list of nameservices that belong to this cluster. Datanodes will report to all the nameservices in this list. By default this is set to the value of dfs.nameservices. dfs.ha.namenodes.EXAMPLENAMESERVICE The prefix for a given nameservice; contains a comma-separated list of namenodes for a given nameservice (e.g. EXAMPLENAMESERVICE). dfs.ha.namenode.id The ID of this namenode. If the namenode ID is not configured it is determined automatically by matching the local node's address with the configured address. dfs.ha.log-roll.period 120 How often, in seconds, the StandbyNode should ask the active to roll edit logs. Since the StandbyNode only reads from finalized log segments, the StandbyNode will only be as up-to-date as how often the logs are rolled. Note that failover triggers a log roll so the StandbyNode will be up to date before it becomes active. dfs.ha.tail-edits.period 60 How often, in seconds, the StandbyNode should check for new finalized log segments in the shared edits log.
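Tying together dfs.nameservices, the dfs.ha.namenodes.* prefix and the per-namenode rpc-address keys mentioned earlier, a skeletal HA sketch; the nameservice ID, namenode IDs and hosts are placeholders, and the per-namenode key names follow the suffixing convention described above rather than values from this plugin:

<configuration>
  <property>
    <name>dfs.nameservices</name>
    <value>ns1</value>
  </property>
  <!-- Logical namenode IDs within the ns1 nameservice -->
  <property>
    <name>dfs.ha.namenodes.ns1</name>
    <value>nn1,nn2</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.ns1.nn1</name>
    <value>namenode1.example.com:8020</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.ns1.nn2</name>
    <value>namenode2.example.com:8020</value>
  </property>
</configuration>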
dfs.ha.automatic-failover.enabled false Whether automatic failover is enabled. See the HDFS High Availability documentation for details on automatic HA configuration. dfs.client.use.datanode.hostname false Whether clients should use datanode hostnames when connecting to datanodes. dfs.datanode.use.datanode.hostname false Whether datanodes should use datanode hostnames when connecting to other datanodes for data transfer. dfs.client.local.interfaces A comma-separated list of network interface names to use for data transfer between the client and datanodes. When creating a connection to read from or write to a datanode, the client chooses one of the specified interfaces at random and binds its socket to the IP of that interface. Individual names may be specified as either an interface name (e.g. "eth0"), a subinterface name (e.g. "eth0:0"), or an IP address (which may be specified using CIDR notation to match a range of IPs). dfs.datanode.shared.file.descriptor.paths /dev/shm,/tmp A comma-separated list of paths to use when creating file descriptors that will be shared between the DataNode and the DFSClient. Typically we use /dev/shm, so that the file descriptors will not be written to disk. Systems that don't have /dev/shm will fall back to /tmp by default. dfs.short.circuit.shared.memory.watcher.interrupt.check.ms 60000 The length of time in milliseconds that the short-circuit shared memory watcher will go between checking for java interruptions sent from other threads. This is provided mainly for unit tests. dfs.namenode.kerberos.principal The NameNode service principal. This is typically set to nn/_HOST@REALM.TLD. Each NameNode will substitute _HOST with its own fully qualified hostname at startup. The _HOST placeholder allows using the same configuration setting on both NameNodes in an HA setup. dfs.namenode.keytab.file The keytab file used by each NameNode daemon to log in as its service principal. The principal name is configured with dfs.namenode.kerberos.principal. dfs.datanode.kerberos.principal The DataNode service principal. This is typically set to dn/_HOST@REALM.TLD. Each DataNode will substitute _HOST with its own fully qualified hostname at startup. The _HOST placeholder allows using the same configuration setting on all DataNodes. dfs.datanode.keytab.file The keytab file used by each DataNode daemon to log in as its service principal. The principal name is configured with dfs.datanode.kerberos.principal. dfs.journalnode.kerberos.principal The JournalNode service principal. This is typically set to jn/_HOST@REALM.TLD. Each JournalNode will substitute _HOST with its own fully qualified hostname at startup. The _HOST placeholder allows using the same configuration setting on all JournalNodes. dfs.journalnode.keytab.file The keytab file used by each JournalNode daemon to log in as its service principal. The principal name is configured with dfs.journalnode.kerberos.principal. dfs.namenode.kerberos.internal.spnego.principal ${dfs.web.authentication.kerberos.principal} The server principal used by the NameNode for web UI SPNEGO authentication when Kerberos security is enabled. This is typically set to HTTP/_HOST@REALM.TLD. The SPNEGO server principal begins with the prefix HTTP/ by convention. If the value is '*', the web server will attempt to log in with every principal specified in the keytab file dfs.web.authentication.kerberos.keytab.
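A hypothetical principal/keytab pairing for the NameNode, following the _HOST convention described above; the realm and keytab path are placeholders:

<configuration>
  <!-- _HOST is expanded to each NameNode's fully qualified hostname at startup,
       so the same setting works on both NameNodes in an HA pair -->
  <property>
    <name>dfs.namenode.kerberos.principal</name>
    <value>nn/_HOST@EXAMPLE.COM</value>
  </property>
  <property>
    <name>dfs.namenode.keytab.file</name>
    <value>/etc/security/keytabs/nn.service.keytab</value>
  </property>
</configuration>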
dfs.journalnode.kerberos.internal.spnego.principal The server principal used by the JournalNode HTTP Server for SPNEGO authentication when Kerberos security is enabled. This is typically set to HTTP/_HOST@REALM.TLD. The SPNEGO server principal begins with the prefix HTTP/ by convention. If the value is '*', the web server will attempt to log in with every principal specified in the keytab file dfs.web.authentication.kerberos.keytab. For most deployments this can be set to ${dfs.web.authentication.kerberos.principal}, i.e. use the value of dfs.web.authentication.kerberos.principal. dfs.secondary.namenode.kerberos.internal.spnego.principal ${dfs.web.authentication.kerberos.principal} The server principal used by the Secondary NameNode for web UI SPNEGO authentication when Kerberos security is enabled. Like all other Secondary NameNode settings, it is ignored in an HA setup. If the value is '*', the web server will attempt to log in with every principal specified in the keytab file dfs.web.authentication.kerberos.keytab. dfs.web.authentication.kerberos.principal The server principal used by the NameNode for WebHDFS SPNEGO authentication. Required when WebHDFS and security are enabled. In most secure clusters this setting is also used to specify the values for dfs.namenode.kerberos.internal.spnego.principal and dfs.journalnode.kerberos.internal.spnego.principal. dfs.web.authentication.kerberos.keytab The keytab file for the principal corresponding to dfs.web.authentication.kerberos.principal. dfs.namenode.kerberos.principal.pattern * A client-side RegEx that can be configured to control allowed realms to authenticate with (useful in cross-realm environments). dfs.namenode.avoid.read.stale.datanode false Indicates whether or not to avoid reading from "stale" datanodes whose heartbeat messages have not been received by the namenode for more than a specified time interval. Stale datanodes will be moved to the end of the node list returned for reading. See dfs.namenode.avoid.write.stale.datanode for a similar setting for writes. dfs.namenode.avoid.write.stale.datanode false Indicates whether or not to avoid writing to "stale" datanodes whose heartbeat messages have not been received by the namenode for more than a specified time interval. Writes will avoid using stale datanodes unless more than a configured ratio (dfs.namenode.write.stale.datanode.ratio) of datanodes are marked as stale. See dfs.namenode.avoid.read.stale.datanode for a similar setting for reads. dfs.namenode.stale.datanode.interval 30000 Default time interval for marking a datanode as "stale", i.e., if the namenode has not received a heartbeat message from a datanode for more than this time interval, the datanode will be marked and treated as "stale" by default. The stale interval cannot be too small, since otherwise the stale state may change too frequently. We thus set a minimum stale interval value (the default value is 3 times the heartbeat interval) and guarantee that the stale interval cannot be less than the minimum value. A stale data node is avoided during lease/block recovery. It can be conditionally avoided for reads (see dfs.namenode.avoid.read.stale.datanode) and for writes (see dfs.namenode.avoid.write.stale.datanode). dfs.namenode.write.stale.datanode.ratio 0.5f When the ratio of stale datanodes to total datanodes is greater than this ratio, stop avoiding writing to stale nodes so as to prevent causing hotspots. dfs.namenode.invalidate.work.pct.per.iteration 0.32f *Note*: Advanced property. Change with caution.
This determines the percentage amount of block invalidations (deletes) to do over a single DN heartbeat deletion command. The final deletion count is determined by applying this percentage to the number of live nodes in the system. The resultant number is the number of blocks from the deletion list chosen for proper invalidation over a single heartbeat of a single DN. Value should be a positive, non-zero percentage in float notation (X.Yf), with 1.0f meaning 100%. dfs.namenode.replication.work.multiplier.per.iteration 2 *Note*: Advanced property. Change with caution. This determines the total amount of block transfers to begin in parallel at a DN, for replication, when such a command list is being sent over a DN heartbeat by the NN. The actual number is obtained by multiplying this multiplier with the total number of live nodes in the cluster. The resulting number is the number of blocks to begin transfers immediately for, per DN heartbeat. This number can be any positive, non-zero integer. nfs.server.port 2049 Specify the port number used by Hadoop NFS. nfs.mountd.port 4242 Specify the port number used by the Hadoop mount daemon. nfs.dump.dir /tmp/.hdfs-nfs This directory is used to temporarily save out-of-order writes before writing to HDFS. For each file, the out-of-order writes are dumped after they are accumulated to exceed a certain threshold (e.g., 1MB) in memory. One needs to make sure the directory has enough space. nfs.rtmax 1048576 This is the maximum size in bytes of a READ request supported by the NFS gateway. If you change this, make sure you also update the nfs mount's rsize (add rsize= # of bytes to the mount directive). nfs.wtmax 1048576 This is the maximum size in bytes of a WRITE request supported by the NFS gateway. If you change this, make sure you also update the nfs mount's wsize (add wsize= # of bytes to the mount directive). nfs.keytab.file *Note*: Advanced property. Change with caution. This is the path to the keytab file for the hdfs-nfs gateway. This is required when the cluster is kerberized. nfs.kerberos.principal *Note*: Advanced property. Change with caution. This is the name of the kerberos principal. This is required when the cluster is kerberized. It must be of this format: nfs-gateway-user/nfs-gateway-host@kerberos-realm nfs.allow.insecure.ports true When set to false, client connections originating from unprivileged ports (those above 1023) will be rejected. This is to ensure that clients connecting to this NFS Gateway must have had root privilege on the machine where they're connecting from. dfs.webhdfs.enabled true Enable WebHDFS (REST API) in Namenodes and Datanodes. hadoop.fuse.connection.timeout 300 The minimum number of seconds that we'll cache libhdfs connection objects in fuse_dfs. Lower values will result in lower memory consumption; higher values may speed up access by avoiding the overhead of creating new connection objects. hadoop.fuse.timer.period 5 The number of seconds between cache expiry checks in fuse_dfs. Lower values will result in fuse_dfs noticing changes to Kerberos ticket caches more quickly. dfs.metrics.percentiles.intervals Comma-delimited set of integers denoting the desired rollover intervals (in seconds) for percentile latency metrics on the Namenode and Datanode. By default, percentile latency metrics are disabled. hadoop.user.group.metrics.percentiles.intervals A comma-separated list of the granularity in seconds for the metrics which describe the 50/75/90/95/99th percentile latency for group resolution in milliseconds.
By default, percentile latency metrics are disabled. dfs.encrypt.data.transfer false Whether or not actual block data that is read/written from/to HDFS should be encrypted on the wire. This only needs to be set on the NN and DNs, clients will deduce this automatically. It is possible to override this setting per connection by specifying custom logic via dfs.trustedchannel.resolver.class. dfs.encrypt.data.transfer.algorithm This value may be set to either "3des" or "rc4". If nothing is set, then the configured JCE default on the system is used (usually 3DES.) It is widely believed that 3DES is more cryptographically secure, but RC4 is substantially faster. Note that if AES is supported by both the client and server then this encryption algorithm will only be used to initially transfer keys for AES. (See dfs.encrypt.data.transfer.cipher.suites.) dfs.encrypt.data.transfer.cipher.suites This value may be either undefined or AES/CTR/NoPadding. If defined, then dfs.encrypt.data.transfer uses the specified cipher suite for data encryption. If not defined, then only the algorithm specified in dfs.encrypt.data.transfer.algorithm is used. By default, the property is not defined. dfs.encrypt.data.transfer.cipher.key.bitlength 128 The key bitlength negotiated by dfsclient and datanode for encryption. This value may be set to either 128, 192 or 256. dfs.trustedchannel.resolver.class TrustedChannelResolver is used to determine whether a channel is trusted for plain data transfer. The TrustedChannelResolver is invoked on both client and server side. If the resolver indicates that the channel is trusted, then the data transfer will not be encrypted even if dfs.encrypt.data.transfer is set to true. The default implementation returns false indicating that the channel is not trusted. dfs.data.transfer.protection A comma-separated list of SASL protection values used for secured connections to the DataNode when reading or writing block data. Possible values are authentication, integrity and privacy. authentication means authentication only and no integrity or privacy; integrity implies authentication and integrity are enabled; and privacy implies all of authentication, integrity and privacy are enabled. If dfs.encrypt.data.transfer is set to true, then it supersedes the setting for dfs.data.transfer.protection and enforces that all connections must use a specialized encrypted SASL handshake. This property is ignored for connections to a DataNode listening on a privileged port. In this case, it is assumed that the use of a privileged port establishes sufficient trust. dfs.data.transfer.saslproperties.resolver.class SaslPropertiesResolver used to resolve the QOP used for a connection to the DataNode when reading or writing block data. If not specified, the value of hadoop.security.saslproperties.resolver.class is used as the default value. dfs.datanode.hdfs-blocks-metadata.enabled false Boolean which enables backend datanode-side support for the experimental DistributedFileSystem#getFileVBlockStorageLocations API. dfs.client.file-block-storage-locations.num-threads 10 Number of threads used for making parallel RPCs in DistributedFileSystem#getFileBlockStorageLocations(). dfs.client.file-block-storage-locations.timeout.millis 1000 Timeout (in milliseconds) for the parallel RPCs made in DistributedFileSystem#getFileBlockStorageLocations(). dfs.journalnode.rpc-address 0.0.0.0:8485 The JournalNode RPC server address and port. 
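As a rough illustration of how the wire-encryption settings above fit together, the fragment below sketches an hdfs-site.xml override that enables encrypted block transfer and negotiates AES for the data stream. The property names come from the descriptions above; the chosen values (AES/CTR/NoPadding, 256-bit keys) are illustrative assumptions, not defaults shipped with this plugin.
<configuration>
  <!-- Illustrative override: encrypt block data on the wire; values are examples only -->
  <property>
    <name>dfs.encrypt.data.transfer</name>
    <value>true</value>
  </property>
  <!-- When both sides support AES, the 3des/rc4 algorithm is then only used for key exchange -->
  <property>
    <name>dfs.encrypt.data.transfer.cipher.suites</name>
    <value>AES/CTR/NoPadding</value>
  </property>
  <property>
    <name>dfs.encrypt.data.transfer.cipher.key.bitlength</name>
    <value>256</value>
  </property>
</configuration>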
dfs.journalnode.http-address 0.0.0.0:8480 The address and port the JournalNode HTTP server listens on. If the port is 0 then the server will start on a free port. dfs.journalnode.https-address 0.0.0.0:8481 The address and port the JournalNode HTTPS server listens on. If the port is 0 then the server will start on a free port. dfs.namenode.audit.loggers default List of classes implementing audit loggers that will receive audit events. These should be implementations of org.apache.hadoop.hdfs.server.namenode.AuditLogger. The special value "default" can be used to reference the default audit logger, which uses the configured log system. Installing custom audit loggers may affect the performance and stability of the NameNode. Refer to the custom logger's documentation for more details. dfs.datanode.available-space-volume-choosing-policy.balanced-space-threshold 10737418240 Only used when the dfs.datanode.fsdataset.volume.choosing.policy is set to org.apache.hadoop.hdfs.server.datanode.fsdataset.AvailableSpaceVolumeChoosingPolicy. This setting controls how much DN volumes are allowed to differ in terms of bytes of free disk space before they are considered imbalanced. If the free space of all the volumes are within this range of each other, the volumes will be considered balanced and block assignments will be done on a pure round robin basis. dfs.datanode.available-space-volume-choosing-policy.balanced-space-preference-fraction 0.75f Only used when the dfs.datanode.fsdataset.volume.choosing.policy is set to org.apache.hadoop.hdfs.server.datanode.fsdataset.AvailableSpaceVolumeChoosingPolicy. This setting controls what percentage of new block allocations will be sent to volumes with more available disk space than others. This setting should be in the range 0.0 - 1.0, though in practice 0.5 - 1.0, since there should be no reason to prefer that volumes with less available disk space receive more block allocations. dfs.namenode.edits.noeditlogchannelflush false Specifies whether to flush edit log file channel. When set, expensive FileChannel#force calls are skipped and synchronous disk writes are enabled instead by opening the edit log file with RandomAccessFile("rws") flags. This can significantly improve the performance of edit log writes on the Windows platform. Note that the behavior of the "rws" flags is platform and hardware specific and might not provide the same level of guarantees as FileChannel#force. For example, the write will skip the disk-cache on SAS and SCSI devices while it might not on SATA devices. This is an expert level setting, change with caution. dfs.client.cache.drop.behind.writes Just like dfs.datanode.drop.cache.behind.writes, this setting causes the page cache to be dropped behind HDFS writes, potentially freeing up more memory for other uses. Unlike dfs.datanode.drop.cache.behind.writes, this is a client-side setting rather than a setting for the entire datanode. If present, this setting will override the DataNode default. If the native libraries are not available to the DataNode, this configuration has no effect. dfs.client.cache.drop.behind.reads Just like dfs.datanode.drop.cache.behind.reads, this setting causes the page cache to be dropped behind HDFS reads, potentially freeing up more memory for other uses. Unlike dfs.datanode.drop.cache.behind.reads, this is a client-side setting rather than a setting for the entire datanode. If present, this setting will override the DataNode default. 
If the native libraries are not available to the DataNode, this configuration has no effect. dfs.client.cache.readahead When using remote reads, this setting causes the datanode to read ahead in the block file using posix_fadvise, potentially decreasing I/O wait times. Unlike dfs.datanode.readahead.bytes, this is a client-side setting rather than a setting for the entire datanode. If present, this setting will override the DataNode default. When using local reads, this setting determines how much readahead we do in BlockReaderLocal. If the native libraries are not available to the DataNode, this configuration has no effect. dfs.namenode.enable.retrycache true This enables the retry cache on the namenode. Namenode tracks for non-idempotent requests the corresponding response. If a client retries the request, the response from the retry cache is sent. Such operations are tagged with annotation @AtMostOnce in namenode protocols. It is recommended that this flag be set to true. Setting it to false, will result in clients getting failure responses to retried request. This flag must be enabled in HA setup for transparent fail-overs. The entries in the cache have expiration time configurable using dfs.namenode.retrycache.expirytime.millis. dfs.namenode.retrycache.expirytime.millis 600000 The time for which retry cache entries are retained. dfs.namenode.retrycache.heap.percent 0.03f This parameter configures the heap size allocated for retry cache (excluding the response cached). This corresponds to approximately 4096 entries for every 64MB of namenode process java heap size. Assuming retry cache entry expiration time (configured using dfs.namenode.retrycache.expirytime.millis) of 10 minutes, this enables retry cache to support 7 operations per second sustained for 10 minutes. As the heap size is increased, the operation rate linearly increases. dfs.client.mmap.enabled true If this is set to false, the client won't attempt to perform memory-mapped reads. dfs.client.mmap.cache.size 256 When zero-copy reads are used, the DFSClient keeps a cache of recently used memory mapped regions. This parameter controls the maximum number of entries that we will keep in that cache. The larger this number is, the more file descriptors we will potentially use for memory-mapped files. mmaped files also use virtual address space. You may need to increase your ulimit virtual address space limits before increasing the client mmap cache size. Note that you can still do zero-copy reads when this size is set to 0. dfs.client.mmap.cache.timeout.ms 3600000 The minimum length of time that we will keep an mmap entry in the cache between uses. If an entry is in the cache longer than this, and nobody uses it, it will be removed by a background thread. dfs.client.mmap.retry.timeout.ms 300000 The minimum amount of time that we will wait before retrying a failed mmap operation. dfs.client.short.circuit.replica.stale.threshold.ms 1800000 The maximum amount of time that we will consider a short-circuit replica to be valid, if there is no communication from the DataNode. After this time has elapsed, we will re-fetch the short-circuit replica even if it is in the cache. dfs.namenode.path.based.cache.block.map.allocation.percent 0.25 The percentage of the Java heap which we will allocate to the cached blocks map. The cached blocks map is a hash map which uses chained hashing. Smaller maps may be accessed more slowly if the number of cached blocks is large; larger maps will consume more memory. 
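A minimal sketch of how the client-side caching knobs described above might be overridden in hdfs-site.xml; whether these help depends entirely on the workload, and the readahead figure below is an assumed example (taken to be a byte count), not a recommendation from this plugin.
<configuration>
  <!-- Illustrative client-side cache overrides; values are examples only -->
  <!-- Assumed to be a byte count: read ahead roughly 4 MB on remote reads -->
  <property>
    <name>dfs.client.cache.readahead</name>
    <value>4194304</value>
  </property>
  <!-- Drop the page cache behind streaming reads on the client side -->
  <property>
    <name>dfs.client.cache.drop.behind.reads</name>
    <value>true</value>
  </property>
</configuration>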
dfs.datanode.max.locked.memory 0 The amount of memory in bytes to use for caching of block replicas in memory on the datanode. The datanode's maximum locked memory soft ulimit (RLIMIT_MEMLOCK) must be set to at least this value, else the datanode will abort on startup. By default, this parameter is set to 0, which disables in-memory caching. If the native libraries are not available to the DataNode, this configuration has no effect. dfs.namenode.list.cache.directives.num.responses 100 This value controls the number of cache directives that the NameNode will send over the wire in response to a listDirectives RPC. dfs.namenode.list.cache.pools.num.responses 100 This value controls the number of cache pools that the NameNode will send over the wire in response to a listPools RPC. dfs.namenode.path.based.cache.refresh.interval.ms 30000 The amount of milliseconds between subsequent path cache rescans. Path cache rescans are when we calculate which blocks should be cached, and on what datanodes. By default, this parameter is set to 30 seconds. dfs.namenode.path.based.cache.retry.interval.ms 30000 When the NameNode needs to uncache something that is cached, or cache something that is not cached, it must direct the DataNodes to do so by sending a DNA_CACHE or DNA_UNCACHE command in response to a DataNode heartbeat. This parameter controls how frequently the NameNode will resend these commands. dfs.datanode.fsdatasetcache.max.threads.per.volume 4 The maximum number of threads per volume to use for caching new data on the datanode. These threads consume both I/O and CPU. This can affect normal datanode operations. dfs.cachereport.intervalMsec 10000 Determines cache reporting interval in milliseconds. After this amount of time, the DataNode sends a full report of its cache state to the NameNode. The NameNode uses the cache report to update its map of cached blocks to DataNode locations. This configuration has no effect if in-memory caching has been disabled by setting dfs.datanode.max.locked.memory to 0 (which is the default). If the native libraries are not available to the DataNode, this configuration has no effect. dfs.namenode.edit.log.autoroll.multiplier.threshold 2.0 Determines when an active namenode will roll its own edit log. The actual threshold (in number of edits) is determined by multiplying this value by dfs.namenode.checkpoint.txns. This prevents extremely large edit files from accumulating on the active namenode, which can cause timeouts during namenode startup and pose an administrative hassle. This behavior is intended as a failsafe for when the standby or secondary namenode fail to roll the edit log by the normal checkpoint threshold. dfs.namenode.edit.log.autoroll.check.interval.ms 300000 How often an active namenode will check if it needs to roll its edit log, in milliseconds. dfs.webhdfs.user.provider.user.pattern ^[A-Za-z_][A-Za-z0-9._-]*[$]?$ Valid pattern for user and group names for webhdfs, it must be a valid java regex. dfs.client.context default The name of the DFSClient context that we should use. Clients that share a context share a socket cache and short-circuit cache, among other things. You should only change this if you don't want to share with another set of threads. dfs.client.read.shortcircuit false This configuration parameter turns on short-circuit local reads. dfs.client.socket.send.buffer.size 131072 Socket send buffer size for a write pipeline in DFSClient side. This may affect TCP connection throughput. 
If it is set to zero or negative value, no buffer size will be set explicitly, thus enable tcp auto-tuning on some system. dfs.domain.socket.path Optional. This is a path to a UNIX domain socket that will be used for communication between the DataNode and local HDFS clients. If the string "_PORT" is present in this path, it will be replaced by the TCP port of the DataNode. dfs.client.read.shortcircuit.skip.checksum false If this configuration parameter is set, short-circuit local reads will skip checksums. This is normally not recommended, but it may be useful for special setups. You might consider using this if you are doing your own checksumming outside of HDFS. dfs.client.read.shortcircuit.streams.cache.size 256 The DFSClient maintains a cache of recently opened file descriptors. This parameter controls the size of that cache. Setting this higher will use more file descriptors, but potentially provide better performance on workloads involving lots of seeks. dfs.client.read.shortcircuit.streams.cache.expiry.ms 300000 This controls the minimum amount of time file descriptors need to sit in the client cache context before they can be closed for being inactive for too long. dfs.datanode.shared.file.descriptor.paths /dev/shm,/tmp Comma separated paths to the directories in which shared memory segments are created. The client and the DataNode exchange information via this shared memory segment. It tries paths in order until creation of a shared memory segment succeeds. dfs.client.use.legacy.blockreader.local false Legacy short-circuit reader implementation based on HDFS-2246 is used if this configuration parameter is true. This is for platforms other than Linux where the new implementation based on HDFS-347 is not available. dfs.block.local-path-access.user Comma separated list of the users allowed to open block files on legacy short-circuit local read. dfs.client.domain.socket.data.traffic false This controls whether we will try to pass normal data traffic over UNIX domain socket rather than over TCP socket on node-local data transfer. This is currently experimental and turned off by default. dfs.namenode.reject-unresolved-dn-topology-mapping false If the value is set to true, then the namenode will reject datanode registration if the topology mapping for a datanode is not resolved and NULL is returned (script defined by net.topology.script.file.name fails to execute). Otherwise, the datanode will be registered and the default rack will be assigned as the topology path. Topology paths are important for data resiliency, since they define fault domains. Thus it may be unwanted behavior to allow datanode registration with the default rack if the resolving topology failed. dfs.client.slow.io.warning.threshold.ms 30000 The threshold in milliseconds at which we will log a slow io warning in a dfsclient. By default, this parameter is set to 30000 milliseconds (30 seconds). dfs.datanode.slow.io.warning.threshold.ms 300 The threshold in milliseconds at which we will log a slow io warning in a datanode. By default, this parameter is set to 300 milliseconds. dfs.namenode.xattrs.enabled true Whether support for extended attributes is enabled on the NameNode. dfs.namenode.fs-limits.max-xattrs-per-inode 32 Maximum number of extended attributes per inode. dfs.namenode.fs-limits.max-xattr-size 16384 The maximum combined size of the name and value of an extended attribute in bytes.
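To make the short-circuit read settings above concrete, here is a hedged hdfs-site.xml sketch that turns on short-circuit local reads together with the required UNIX domain socket; the socket path is a commonly used convention but an assumption here, not something this plugin configures.
<configuration>
  <!-- Illustrative short-circuit local read setup; the socket path is an assumed example -->
  <property>
    <name>dfs.client.read.shortcircuit</name>
    <value>true</value>
  </property>
  <property>
    <name>dfs.domain.socket.path</name>
    <value>/var/lib/hadoop-hdfs/dn_socket</value>
  </property>
</configuration>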
dfs.namenode.write-lock-reporting-threshold-ms 5000 When a write lock is held on the namenode for a long time, this will be logged as the lock is released. This sets how long the lock must be held for logging to occur. dfs.namenode.read-lock-reporting-threshold-ms 5000 When a read lock is held on the namenode for a long time, this will be logged as the lock is released. This sets how long the lock must be held for logging to occur. dfs.namenode.lock.detailed-metrics.enabled false If true, the namenode will keep track of how long various operations hold the Namesystem lock for and emit this as metrics. These metrics have names of the form FSN(Read|Write)LockNanosOperationName, where OperationName denotes the name of the operation that initiated the lock hold (this will be OTHER for certain uncategorized operations) and they export the hold time values in nanoseconds. dfs.namenode.fslock.fair true If this is true, the FS Namesystem lock will be used in Fair mode, which will help to prevent writer threads from being starved, but can provide lower lock throughput. See java.util.concurrent.locks.ReentrantReadWriteLock for more information on fair/non-fair locks. dfs.namenode.startup.delay.block.deletion.sec 0 The delay in seconds at which we will pause the blocks deletion after Namenode startup. By default it's disabled. In the case a directory has large number of directories and files are deleted, suggested delay is one hour to give the administrator enough time to notice large number of pending deletion blocks and take corrective action. dfs.namenode.list.encryption.zones.num.responses 100 When listing encryption zones, the maximum number of zones that will be returned in a batch. Fetching the list incrementally in batches improves namenode performance. dfs.namenode.inotify.max.events.per.rpc 1000 Maximum number of events that will be sent to an inotify client in a single RPC response. The default value attempts to amortize away the overhead for this RPC while avoiding huge memory requirements for the client and NameNode (1000 events should consume no more than 1 MB.) dfs.user.home.dir.prefix /user The directory to prepend to user name to get the user's home direcotry. dfs.datanode.cache.revocation.timeout.ms 900000 When the DFSClient reads from a block file which the DataNode is caching, the DFSClient can skip verifying checksums. The DataNode will keep the block file in cache until the client is done. If the client takes an unusually long time, though, the DataNode may need to evict the block file from the cache anyway. This value controls how long the DataNode will wait for the client to release a replica that it is reading without checksums. dfs.datanode.cache.revocation.polling.ms 500 How often the DataNode should poll to see if the clients have stopped using a replica that the DataNode wants to uncache. dfs.datanode.block.id.layout.upgrade.threads 12 The number of threads to use when creating hard links from current to previous blocks during upgrade of a DataNode to block ID-based block layout (see HDFS-6482 for details on the layout). dfs.encryption.key.provider.uri The KeyProvider to use when interacting with encryption keys used when reading and writing to an encryption zone. dfs.storage.policy.enabled true Allow users to change the storage policy on files and directories. dfs.namenode.legacy-oiv-image.dir Determines where to save the namespace in the old fsimage format during checkpointing by standby NameNode or SecondaryNameNode. 
Users can dump the contents of the old format fsimage by the oiv_legacy command. If the value is not specified, old format fsimage will not be saved in checkpoint. dfs.namenode.top.enabled true Enable nntop: reporting top users on namenode dfs.namenode.top.window.num.buckets 10 Number of buckets in the rolling window implementation of nntop dfs.namenode.top.num.users 10 Number of top users returned by the top tool dfs.namenode.top.windows.minutes 1,5,25 Comma separated list of nntop reporting periods in minutes dfs.namenode.blocks.per.postponedblocks.rescan 10000 Number of blocks to rescan for each iteration of postponedMisreplicatedBlocks. dfs.datanode.block-pinning.enabled false Whether to pin blocks on the favored DataNode. dfs.datanode.bp-ready.timeout 20 The maximum wait time for the datanode to be ready before failing the received request. Setting this to 0 fails requests right away if the datanode is not yet registered with the namenode. This wait time reduces initial request failures after datanode restart. dfs.balancer.keytab.enabled false Set to true to enable login using a keytab for Kerberized Hadoop. dfs.balancer.address 0.0.0.0:0 The hostname used for a keytab based Kerberos login. Keytab based login can be enabled with dfs.balancer.keytab.enabled. dfs.balancer.keytab.file The keytab file used by the Balancer to login as its service principal. The principal name is configured with dfs.balancer.kerberos.principal. Keytab based login can be enabled with dfs.balancer.keytab.enabled. dfs.balancer.kerberos.principal The Balancer principal. This is typically set to balancer/_HOST@REALM.TLD. The Balancer will substitute _HOST with its own fully qualified hostname at startup. The _HOST placeholder allows using the same configuration setting on different servers. Keytab based login can be enabled with dfs.balancer.keytab.enabled. dfs.balancer.block-move.timeout 0 Maximum amount of time in milliseconds for a block to move. If this is set greater than 0, the Balancer will stop waiting for a block move to complete after this time. In typical clusters, a 3 to 5 minute timeout is reasonable. If timeouts happen to a large proportion of block moves, this needs to be increased. It could also be that too much work is dispatched and many nodes are constantly exceeding the bandwidth limit as a result. In that case, other balancer parameters might need to be adjusted. It is disabled (0) by default. dfs.balancer.max-no-move-interval 60000 If this specified amount of time has elapsed and no block has been moved out of a source DataNode, no more effort will be made to move blocks out of this DataNode in the current Balancer iteration. dfs.lock.suppress.warning.interval 10s Instrumentation reporting long critical sections will suppress consecutive warnings within this interval. dfs.namenode.quota.init-threads 4 The number of concurrent threads to be used in quota initialization. The speed of quota initialization also affects the namenode fail-over latency. If the size of the name space is big, try increasing this. dfs.reformat.disabled false Disable reformat of the NameNode. If its value is set to "true" and metadata directories already exist, then an attempt to format the NameNode will throw a NameNodeFormatException. dfs.datanode.transfer.socket.send.buffer.size 131072 Socket send buffer size for DataXceiver (mirroring packets to downstream in pipeline). This may affect TCP connection throughput. If it is set to zero or negative value, no buffer size will be set explicitly, thus enable tcp auto-tuning on some system.
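The Balancer keytab settings above work together; a hedged hdfs-site.xml sketch follows, in which the keytab path and the EXAMPLE.COM realm are placeholders rather than values used by this plugin.
<configuration>
  <!-- Illustrative keytab-based Balancer login; path and realm are placeholders -->
  <property>
    <name>dfs.balancer.keytab.enabled</name>
    <value>true</value>
  </property>
  <property>
    <name>dfs.balancer.keytab.file</name>
    <value>/etc/security/keytabs/balancer.keytab</value>
  </property>
  <!-- _HOST is replaced by the Balancer's fully qualified hostname at startup -->
  <property>
    <name>dfs.balancer.kerberos.principal</name>
    <value>balancer/_HOST@EXAMPLE.COM</value>
  </property>
</configuration>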
dfs.datanode.transfer.socket.recv.buffer.size 131072 Socket receive buffer size for DataXceiver (receiving packets from client during block writing). This may affect TCP connection throughput. If it is set to zero or negative value, no buffer size will be set explicitly, thus enable tcp auto-tuning on some system. ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1696419316.0 sahara-plugin-vanilla-10.0.0/sahara_plugin_vanilla/plugins/vanilla/v2_7_5/resources/hive-default.xml0000664000175000017500000076704500000000000033561 0ustar00zuulzuul00000000000000 hive.exec.script.wrapper hive.exec.plan hive.exec.stagingdir .hive-staging Directory name that will be created inside table locations in order to support HDFS encryption. This is replaces ${hive.exec.scratchdir} for query results with the exception of read-only tables. In all cases ${hive.exec.scratchdir} is still used for other temporary files, such as job plans. hive.exec.scratchdir /tmp/hive HDFS root scratch dir for Hive jobs which gets created with write all (733) permission. For each connecting user, an HDFS scratch dir: ${hive.exec.scratchdir}/<username> is created, with ${hive.scratch.dir.permission}. hive.repl.rootdir /user/hive/repl/ HDFS root dir for all replication dumps. hive.repl.cm.enabled false Turn on ChangeManager, so delete files will go to cmrootdir. hive.repl.cmrootdir /user/hive/cmroot/ Root dir for ChangeManager, used for deleted files. hive.repl.cm.retain 24h Expects a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec), which is hour if not specified. Time to retain removed files in cmrootdir. hive.repl.cm.interval 3600s Expects a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec), which is sec if not specified. Inteval for cmroot cleanup thread. hive.exec.local.scratchdir ${system:java.io.tmpdir}/${system:user.name} Local scratch space for Hive jobs hive.downloaded.resources.dir ${system:java.io.tmpdir}/${hive.session.id}_resources Temporary local directory for added resources in the remote file system. hive.scratch.dir.permission 700 The permission for the user specific scratch directories that get created. hive.exec.submitviachild false hive.exec.submit.local.task.via.child true Determines whether local tasks (typically mapjoin hashtable generation phase) runs in separate JVM (true recommended) or not. Avoids the overhead of spawning new JVM, but can lead to out-of-memory issues. hive.exec.script.maxerrsize 100000 Maximum number of bytes a script is allowed to emit to standard error (per map-reduce task). This prevents runaway scripts from filling logs partitions to capacity hive.exec.script.allow.partial.consumption false When enabled, this option allows a user script to exit successfully without consuming all the data from the standard input. stream.stderr.reporter.prefix reporter: Streaming jobs that log to standard error with this prefix can log counter or status information. stream.stderr.reporter.enabled true Enable consumption of status and counter messages for streaming jobs. hive.exec.compress.output false This controls whether the final outputs of a query (to a local/HDFS file or a Hive table) is compressed. The compression codec and other options are determined from Hadoop config variables mapred.output.compress* hive.exec.compress.intermediate false This controls whether intermediate files produced by Hive between multiple map-reduce jobs are compressed. 
The compression codec and other options are determined from Hadoop config variables mapred.output.compress* hive.intermediate.compression.codec hive.intermediate.compression.type hive.exec.reducers.bytes.per.reducer 256000000 size per reducer.The default is 256Mb, i.e if the input size is 1G, it will use 4 reducers. hive.exec.reducers.max 1009 max number of reducers will be used. If the one specified in the configuration parameter mapred.reduce.tasks is negative, Hive will use this one as the max number of reducers when automatically determine number of reducers. hive.exec.pre.hooks Comma-separated list of pre-execution hooks to be invoked for each statement. A pre-execution hook is specified as the name of a Java class which implements the org.apache.hadoop.hive.ql.hooks.ExecuteWithHookContext interface. hive.exec.post.hooks Comma-separated list of post-execution hooks to be invoked for each statement. A post-execution hook is specified as the name of a Java class which implements the org.apache.hadoop.hive.ql.hooks.ExecuteWithHookContext interface. hive.exec.failure.hooks Comma-separated list of on-failure hooks to be invoked for each statement. An on-failure hook is specified as the name of Java class which implements the org.apache.hadoop.hive.ql.hooks.ExecuteWithHookContext interface. hive.exec.query.redactor.hooks Comma-separated list of hooks to be invoked for each query which can tranform the query before it's placed in the job.xml file. Must be a Java class which extends from the org.apache.hadoop.hive.ql.hooks.Redactor abstract class. hive.client.stats.publishers Comma-separated list of statistics publishers to be invoked on counters on each job. A client stats publisher is specified as the name of a Java class which implements the org.apache.hadoop.hive.ql.stats.ClientStatsPublisher interface. hive.ats.hook.queue.capacity 64 Queue size for the ATS Hook executor. If the number of outstanding submissions to the ATS executor exceed this amount, the Hive ATS Hook will not try to log queries to ATS. hive.exec.parallel false Whether to execute jobs in parallel hive.exec.parallel.thread.number 8 How many jobs at most can be executed in parallel hive.mapred.reduce.tasks.speculative.execution true Whether speculative execution for reducers should be turned on. hive.exec.counters.pull.interval 1000 The interval with which to poll the JobTracker for the counters the running job. The smaller it is the more load there will be on the jobtracker, the higher it is the less granular the caught will be. hive.exec.dynamic.partition true Whether or not to allow dynamic partitions in DML/DDL. hive.exec.dynamic.partition.mode strict In strict mode, the user must specify at least one static partition in case the user accidentally overwrites all partitions. In nonstrict mode all partitions are allowed to be dynamic. hive.exec.max.dynamic.partitions 1000 Maximum number of dynamic partitions allowed to be created in total. hive.exec.max.dynamic.partitions.pernode 100 Maximum number of dynamic partitions allowed to be created in each mapper/reducer node. hive.exec.max.created.files 100000 Maximum number of HDFS files created by all mappers/reducers in a MapReduce job. hive.exec.default.partition.name __HIVE_DEFAULT_PARTITION__ The default partition name in case the dynamic partition column value is null/empty string or any other values that cannot be escaped. This value must not contain any special character used in HDFS URI (e.g., ':', '%', '/' etc). 
The user has to be aware that the dynamic partition value should not contain this value to avoid confusions. hive.lockmgr.zookeeper.default.partition.name __HIVE_DEFAULT_ZOOKEEPER_PARTITION__ hive.exec.show.job.failure.debug.info true If a job fails, whether to provide a link in the CLI to the task with the most failures, along with debugging hints if applicable. hive.exec.job.debug.capture.stacktraces true Whether or not stack traces parsed from the task logs of a sampled failed task for each failed job should be stored in the SessionState hive.exec.job.debug.timeout 30000 hive.exec.tasklog.debug.timeout 20000 hive.output.file.extension String used as a file extension for output files. If not set, defaults to the codec extension for text files (e.g. ".gz"), or no extension otherwise. hive.exec.mode.local.auto false Let Hive determine whether to run in local mode automatically hive.exec.mode.local.auto.inputbytes.max 134217728 When hive.exec.mode.local.auto is true, input bytes should less than this for local mode. hive.exec.mode.local.auto.input.files.max 4 When hive.exec.mode.local.auto is true, the number of tasks should less than this for local mode. hive.exec.drop.ignorenonexistent true Do not report an error if DROP TABLE/VIEW/Index/Function specifies a non-existent table/view/index/function hive.ignore.mapjoin.hint true Ignore the mapjoin hint hive.file.max.footer 100 maximum number of lines for footer user can define for a table file hive.resultset.use.unique.column.names true Make column names unique in the result set by qualifying column names with table alias if needed. Table alias will be added to column names for queries of type "select *" or if query explicitly uses table alias "select r1.x..". fs.har.impl org.apache.hadoop.hive.shims.HiveHarFileSystem The implementation for accessing Hadoop Archives. Note that this won't be applicable to Hadoop versions less than 0.20 hive.metastore.warehouse.dir /user/hive/warehouse location of default database for the warehouse hive.metastore.uris Thrift URI for the remote metastore. Used by metastore client to connect to remote metastore. hive.metastore.client.capability.check true Whether to check client capabilities for potentially breaking API usage. hive.metastore.fastpath false Used to avoid all of the proxies and object copies in the metastore. Note, if this is set, you MUST use a local metastore (hive.metastore.uris must be empty) otherwise undefined and most likely undesired behavior will result hive.metastore.fshandler.threads 15 Number of threads to be allocated for metastore handler for fs operations. hive.metastore.hbase.catalog.cache.size 50000 Maximum number of objects we will place in the hbase metastore catalog cache. The objects will be divided up by types that we need to cache. hive.metastore.hbase.aggregate.stats.cache.size 10000 Maximum number of aggregate stats nodes that we will place in the hbase metastore aggregate stats cache. hive.metastore.hbase.aggregate.stats.max.partitions 10000 Maximum number of partitions that are aggregated per cache node. hive.metastore.hbase.aggregate.stats.false.positive.probability 0.01 Maximum false positive probability for the Bloom Filter used in each aggregate stats cache node (default 1%). hive.metastore.hbase.aggregate.stats.max.variance 0.1 Maximum tolerable variance in number of partitions between a cached node and our request (default 10%). 
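As a hedged illustration of the automatic local-mode switches described above, the fragment below enables local execution for small inputs in hive-site.xml; the thresholds shown are example choices, not defaults of this plugin.
<configuration>
  <!-- Illustrative local-mode auto settings; thresholds are examples only -->
  <property>
    <name>hive.exec.mode.local.auto</name>
    <value>true</value>
  </property>
  <!-- Allow local mode for inputs up to roughly 256 MB -->
  <property>
    <name>hive.exec.mode.local.auto.inputbytes.max</name>
    <value>268435456</value>
  </property>
  <property>
    <name>hive.exec.mode.local.auto.input.files.max</name>
    <value>8</value>
  </property>
</configuration>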
hive.metastore.hbase.cache.ttl 600s Expects a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec), which is sec if not specified. Number of seconds for a cached node to be active in the cache before they become stale. hive.metastore.hbase.cache.max.writer.wait 5000ms Expects a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec), which is msec if not specified. Number of milliseconds a writer will wait to acquire the writelock before giving up. hive.metastore.hbase.cache.max.reader.wait 1000ms Expects a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec), which is msec if not specified. Number of milliseconds a reader will wait to acquire the readlock before giving up. hive.metastore.hbase.cache.max.full 0.9 Maximum cache full % after which the cache cleaner thread kicks in. hive.metastore.hbase.cache.clean.until 0.8 The cleaner thread cleans until cache reaches this % full size. hive.metastore.hbase.connection.class org.apache.hadoop.hive.metastore.hbase.VanillaHBaseConnection Class used to connection to HBase hive.metastore.hbase.aggr.stats.cache.entries 10000 How many in stats objects to cache in memory hive.metastore.hbase.aggr.stats.memory.ttl 60s Expects a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec), which is sec if not specified. Number of seconds stats objects live in memory after they are read from HBase. hive.metastore.hbase.aggr.stats.invalidator.frequency 5s Expects a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec), which is sec if not specified. How often the stats cache scans its HBase entries and looks for expired entries hive.metastore.hbase.aggr.stats.hbase.ttl 604800s Expects a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec), which is sec if not specified. Number of seconds stats entries live in HBase cache after they are created. They may be invalided by updates or partition drops before this. Default is one week. hive.metastore.hbase.file.metadata.threads 1 Number of threads to use to read file metadata in background to cache it. hive.metastore.connect.retries 3 Number of retries while opening a connection to metastore hive.metastore.failure.retries 1 Number of retries upon failure of Thrift metastore calls hive.metastore.port 9083 Hive metastore listener port hive.metastore.client.connect.retry.delay 1s Expects a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec), which is sec if not specified. Number of seconds for the client to wait between consecutive connection attempts hive.metastore.client.socket.timeout 600s Expects a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec), which is sec if not specified. MetaStore Client socket timeout in seconds hive.metastore.client.socket.lifetime 0s Expects a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec), which is sec if not specified. MetaStore Client socket lifetime in seconds. After this time is exceeded, client reconnects on the next MetaStore operation. A value of 0s means the connection has an infinite lifetime. javax.jdo.option.ConnectionPassword mine password to use against metastore database hive.metastore.ds.connection.url.hook Name of the hook to use for retrieving the JDO connection URL. If empty, the value in javax.jdo.option.ConnectionURL is used javax.jdo.option.Multithreaded true Set this to true if multiple threads access metastore through JDO concurrently. 
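Tying hive.metastore.uris together with the client retry and socket timeout settings above, here is a hedged hive-site.xml sketch for clients of a remote metastore; the host name and the chosen retry/timeout values are placeholders.
<configuration>
  <!-- Illustrative remote-metastore client settings; host and values are placeholders -->
  <property>
    <name>hive.metastore.uris</name>
    <value>thrift://metastore.example.org:9083</value>
  </property>
  <property>
    <name>hive.metastore.connect.retries</name>
    <value>5</value>
  </property>
  <property>
    <name>hive.metastore.client.socket.timeout</name>
    <value>900s</value>
  </property>
</configuration>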
javax.jdo.option.ConnectionURL jdbc:derby:;databaseName=metastore_db;create=true JDBC connect string for a JDBC metastore. To use SSL to encrypt/authenticate the connection, provide database-specific SSL flag in the connection URL. For example, jdbc:postgresql://myhost/db?ssl=true for postgres database. hive.metastore.dbaccess.ssl.properties Comma-separated SSL properties for metastore to access database when JDO connection URL enables SSL access. e.g. javax.net.ssl.trustStore=/tmp/truststore,javax.net.ssl.trustStorePassword=pwd. hive.hmshandler.retry.attempts 10 The number of times to retry a HMSHandler call if there were a connection error. hive.hmshandler.retry.interval 2000ms Expects a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec), which is msec if not specified. The time between HMSHandler retry attempts on failure. hive.hmshandler.force.reload.conf false Whether to force reloading of the HMSHandler configuration (including the connection URL, before the next metastore query that accesses the datastore. Once reloaded, this value is reset to false. Used for testing only. hive.metastore.server.max.message.size 104857600 Maximum message size in bytes a HMS will accept. hive.metastore.server.min.threads 200 Minimum number of worker threads in the Thrift server's pool. hive.metastore.server.max.threads 1000 Maximum number of worker threads in the Thrift server's pool. hive.metastore.server.tcp.keepalive true Whether to enable TCP keepalive for the metastore server. Keepalive will prevent accumulation of half-open connections. hive.metastore.archive.intermediate.original _INTERMEDIATE_ORIGINAL Intermediate dir suffixes used for archiving. Not important what they are, as long as collisions are avoided hive.metastore.archive.intermediate.archived _INTERMEDIATE_ARCHIVED hive.metastore.archive.intermediate.extracted _INTERMEDIATE_EXTRACTED hive.metastore.kerberos.keytab.file The path to the Kerberos Keytab file containing the metastore Thrift server's service principal. hive.metastore.kerberos.principal hive-metastore/_HOST@EXAMPLE.COM The service principal for the metastore Thrift server. The special string _HOST will be replaced automatically with the correct host name. hive.metastore.sasl.enabled false If true, the metastore Thrift interface will be secured with SASL. Clients must authenticate with Kerberos. hive.metastore.thrift.framed.transport.enabled false If true, the metastore Thrift interface will use TFramedTransport. When false (default) a standard TTransport is used. hive.metastore.thrift.compact.protocol.enabled false If true, the metastore Thrift interface will use TCompactProtocol. When false (default) TBinaryProtocol will be used. Setting it to true will break compatibility with older clients running TBinaryProtocol. hive.metastore.token.signature The delegation token service name to match when selecting a token from the current user's tokens. hive.cluster.delegation.token.store.class org.apache.hadoop.hive.thrift.MemoryTokenStore The delegation token store implementation. Set to org.apache.hadoop.hive.thrift.ZooKeeperTokenStore for load-balanced cluster. hive.cluster.delegation.token.store.zookeeper.connectString The ZooKeeper token store connect string. You can re-use the configuration value set in hive.zookeeper.quorum, by leaving this parameter unset. hive.cluster.delegation.token.store.zookeeper.znode /hivedelegation The root path for token store data. Note that this is used by both HiveServer2 and MetaStore to store delegation Token. 
One directory gets created for each of them. The final directory names would have the servername appended to it (HIVESERVER2, METASTORE). hive.cluster.delegation.token.store.zookeeper.acl ACL for token store entries. Comma separated list of ACL entries. For example: sasl:hive/host1@MY.DOMAIN:cdrwa,sasl:hive/host2@MY.DOMAIN:cdrwa Defaults to all permissions for the hiveserver2/metastore process user. hive.metastore.cache.pinobjtypes Table,StorageDescriptor,SerDeInfo,Partition,Database,Type,FieldSchema,Order List of comma separated metastore object types that should be pinned in the cache datanucleus.connectionPoolingType BONECP Expects one of [bonecp, dbcp, hikaricp, none]. Specify connection pool library for datanucleus datanucleus.connectionPool.maxPoolSize 10 Specify the maximum number of connections in the connection pool. Note: The configured size will be used by 2 connection pools (TxnHandler and ObjectStore). When configuring the max connection pool size, it is recommended to take into account the number of metastore instances and the number of HiveServer2 instances configured with embedded metastore. To get optimal performance, set config to meet the following condition (2 * pool_size * metastore_instances + 2 * pool_size * HS2_instances_with_embedded_metastore) = (2 * physical_core_count + hard_disk_count). datanucleus.rdbms.initializeColumnInfo NONE initializeColumnInfo setting for DataNucleus; set to NONE at least on Postgres. datanucleus.schema.validateTables false validates existing schema against code. turn this on if you want to verify existing schema datanucleus.schema.validateColumns false validates existing schema against code. turn this on if you want to verify existing schema datanucleus.schema.validateConstraints false validates existing schema against code. turn this on if you want to verify existing schema datanucleus.storeManagerType rdbms metadata store type datanucleus.schema.autoCreateAll false Auto creates necessary schema on a startup if one doesn't exist. Set this to false, after creating it once.To enable auto create also set hive.metastore.schema.verification=false. Auto creation is not recommended for production use cases, run schematool command instead. hive.metastore.schema.verification true Enforce metastore schema version consistency. True: Verify that version information stored in is compatible with one from Hive jars. Also disable automatic schema migration attempt. Users are required to manually migrate schema after Hive upgrade which ensures proper metastore schema migration. (Default) False: Warn if the version information stored in metastore doesn't match with one from in Hive jars. hive.metastore.schema.verification.record.version false When true the current MS version is recorded in the VERSION table. If this is disabled and verification is enabled the MS will be unusable. datanucleus.transactionIsolation read-committed Default transaction isolation level for identity generation. datanucleus.cache.level2 false Use a level 2 cache. Turn this off if metadata is changed independently of Hive metastore server datanucleus.cache.level2.type none datanucleus.identifierFactory datanucleus1 Name of the identifier factory to use when generating table/column names etc. 
'datanucleus1' is used for backward compatibility with DataNucleus v1 datanucleus.rdbms.useLegacyNativeValueStrategy true datanucleus.plugin.pluginRegistryBundleCheck LOG Defines what happens when plugin bundles are found and are duplicated [EXCEPTION|LOG|NONE] hive.metastore.batch.retrieve.max 300 Maximum number of objects (tables/partitions) can be retrieved from metastore in one batch. The higher the number, the less the number of round trips is needed to the Hive metastore server, but it may also cause higher memory requirement at the client side. hive.metastore.batch.retrieve.table.partition.max 1000 Maximum number of objects that metastore internally retrieves in one batch. hive.metastore.init.hooks A comma separated list of hooks to be invoked at the beginning of HMSHandler initialization. An init hook is specified as the name of Java class which extends org.apache.hadoop.hive.metastore.MetaStoreInitListener. hive.metastore.pre.event.listeners List of comma separated listeners for metastore events. hive.metastore.event.listeners A comma separated list of Java classes that implement the org.apache.hadoop.hive.metastore.MetaStoreEventListener interface. The metastore event and corresponding listener method will be invoked in separate JDO transactions. Alternatively, configure hive.metastore.transactional.event.listeners to ensure both are invoked in same JDO transaction. hive.metastore.transactional.event.listeners A comma separated list of Java classes that implement the org.apache.hadoop.hive.metastore.MetaStoreEventListener interface. Both the metastore event and corresponding listener method will be invoked in the same JDO transaction. hive.metastore.event.db.listener.timetolive 86400s Expects a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec), which is sec if not specified. time after which events will be removed from the database listener queue hive.metastore.authorization.storage.checks false Should the metastore do authorization checks against the underlying storage (usually hdfs) for operations like drop-partition (disallow the drop-partition if the user in question doesn't have permissions to delete the corresponding directory on the storage). hive.metastore.authorization.storage.check.externaltable.drop true Should StorageBasedAuthorization check permission of the storage before dropping external table. StorageBasedAuthorization already does this check for managed table. For external table however, anyone who has read permission of the directory could drop external table, which is surprising. The flag is set to false by default to maintain backward compatibility. hive.metastore.event.clean.freq 0s Expects a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec), which is sec if not specified. Frequency at which timer task runs to purge expired events in metastore. hive.metastore.event.expiry.duration 0s Expects a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec), which is sec if not specified. Duration after which events expire from events table hive.metastore.event.message.factory org.apache.hadoop.hive.metastore.messaging.json.JSONMessageFactory Factory class for making encoding and decoding messages in the events generated. hive.metastore.execute.setugi true In unsecure mode, setting this property to true will cause the metastore to execute DFS operations using the client's reported user and group permissions. Note that this property must be set on both the client and server sides. 
Further note that its best effort. If client sets its to true and server sets it to false, client setting will be ignored. hive.metastore.partition.name.whitelist.pattern Partition names will be checked against this regex pattern and rejected if not matched. hive.metastore.integral.jdo.pushdown false Allow JDO query pushdown for integral partition columns in metastore. Off by default. This improves metastore perf for integral columns, especially if there's a large number of partitions. However, it doesn't work correctly with integral values that are not normalized (e.g. have leading zeroes, like 0012). If metastore direct SQL is enabled and works, this optimization is also irrelevant. hive.metastore.try.direct.sql true Whether the Hive metastore should try to use direct SQL queries instead of the DataNucleus for certain read paths. This can improve metastore performance when fetching many partitions or column statistics by orders of magnitude; however, it is not guaranteed to work on all RDBMS-es and all versions. In case of SQL failures, the metastore will fall back to the DataNucleus, so it's safe even if SQL doesn't work for all queries on your datastore. If all SQL queries fail (for example, your metastore is backed by MongoDB), you might want to disable this to save the try-and-fall-back cost. hive.metastore.direct.sql.batch.size 0 Batch size for partition and other object retrieval from the underlying DB in direct SQL. For some DBs like Oracle and MSSQL, there are hardcoded or perf-based limitations that necessitate this. For DBs that can handle the queries, this isn't necessary and may impede performance. -1 means no batching, 0 means automatic batching. hive.metastore.try.direct.sql.ddl true Same as hive.metastore.try.direct.sql, for read statements within a transaction that modifies metastore data. Due to non-standard behavior in Postgres, if a direct SQL select query has incorrect syntax or something similar inside a transaction, the entire transaction will fail and fall-back to DataNucleus will not be possible. You should disable the usage of direct SQL inside transactions if that happens in your case. hive.direct.sql.max.query.length 100 The maximum size of a query string (in KB). hive.direct.sql.max.elements.in.clause 1000 The maximum number of values in a IN clause. Once exceeded, it will be broken into multiple OR separated IN clauses. hive.direct.sql.max.elements.values.clause 1000 The maximum number of values in a VALUES clause for INSERT statement. hive.metastore.orm.retrieveMapNullsAsEmptyStrings false Thrift does not support nulls in maps, so any nulls present in maps retrieved from ORM must either be pruned or converted to empty strings. Some backing dbs such as Oracle persist empty strings as nulls, so we should set this parameter if we wish to reverse that behaviour. For others, pruning is the correct behaviour hive.metastore.disallow.incompatible.col.type.changes true If true (default is false), ALTER TABLE operations which change the type of a column (say STRING) to an incompatible type (say MAP) are disallowed. RCFile default SerDe (ColumnarSerDe) serializes the values in such a way that the datatypes can be converted from string to any type. The map is also serialized as a string, which can be read as a string as well. However, with any binary serialization, this is not true. Blocking the ALTER TABLE prevents ClassCastExceptions when subsequently trying to access old partitions. 
Primitive types like INT, STRING, BIGINT, etc., are compatible with each other and are not blocked. See HIVE-4409 for more details. hive.metastore.limit.partition.request -1 This limits the number of partitions that can be requested from the metastore for a given table. The default value "-1" means no limit. hive.table.parameters.default Default property values for newly created tables hive.ddl.createtablelike.properties.whitelist Table Properties to copy over when executing a Create Table Like. hive.metastore.rawstore.impl org.apache.hadoop.hive.metastore.ObjectStore Name of the class that implements org.apache.hadoop.hive.metastore.rawstore interface. This class is used to store and retrieval of raw metadata objects such as table, database hive.metastore.txn.store.impl org.apache.hadoop.hive.metastore.txn.CompactionTxnHandler Name of class that implements org.apache.hadoop.hive.metastore.txn.TxnStore. This class is used to store and retrieve transactions and locks javax.jdo.option.ConnectionDriverName org.apache.derby.jdbc.EmbeddedDriver Driver class name for a JDBC metastore javax.jdo.PersistenceManagerFactoryClass org.datanucleus.api.jdo.JDOPersistenceManagerFactory class implementing the jdo persistence hive.metastore.expression.proxy org.apache.hadoop.hive.ql.optimizer.ppr.PartitionExpressionForMetastore javax.jdo.option.DetachAllOnCommit true Detaches all objects from session so that they can be used after transaction is committed javax.jdo.option.NonTransactionalRead true Reads outside of transactions javax.jdo.option.ConnectionUserName APP Username to use against metastore database hive.metastore.end.function.listeners List of comma separated listeners for the end of metastore functions. hive.metastore.partition.inherit.table.properties List of comma separated keys occurring in table properties which will get inherited to newly created partitions. * implies all the keys will get inherited. hive.metastore.filter.hook org.apache.hadoop.hive.metastore.DefaultMetaStoreFilterHookImpl Metastore hook class for filtering the metadata read results. If hive.security.authorization.manageris set to instance of HiveAuthorizerFactory, then this value is ignored. hive.metastore.dml.events false If true, the metastore will be asked to fire events for DML operations hive.metastore.client.drop.partitions.using.expressions true Choose whether dropping partitions with HCatClient pushes the partition-predicate to the metastore, or drops partitions iteratively hive.metastore.aggregate.stats.cache.enabled true Whether aggregate stats caching is enabled or not. hive.metastore.aggregate.stats.cache.size 10000 Maximum number of aggregate stats nodes that we will place in the metastore aggregate stats cache. hive.metastore.aggregate.stats.cache.max.partitions 10000 Maximum number of partitions that are aggregated per cache node. hive.metastore.aggregate.stats.cache.fpp 0.01 Maximum false positive probability for the Bloom Filter used in each aggregate stats cache node (default 1%). hive.metastore.aggregate.stats.cache.max.variance 0.01 Maximum tolerable variance in number of partitions between a cached node and our request (default 1%). hive.metastore.aggregate.stats.cache.ttl 600s Expects a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec), which is sec if not specified. Number of seconds for a cached node to be active in the cache before they become stale. 
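The JDO connection properties above (URL, driver class, user name, password) are what point the metastore at its backing database; below is a hedged PostgreSQL example, where the host, database name and credentials are placeholders and the embedded Derby setup shown earlier remains the documented default.
<configuration>
  <!-- Illustrative PostgreSQL-backed metastore; URL and credentials are placeholders -->
  <property>
    <name>javax.jdo.option.ConnectionURL</name>
    <value>jdbc:postgresql://db.example.org/metastore?ssl=true</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionDriverName</name>
    <value>org.postgresql.Driver</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionUserName</name>
    <value>hive</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionPassword</name>
    <value>change-me</value>
  </property>
</configuration>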
hive.metastore.aggregate.stats.cache.max.writer.wait 5000ms Expects a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec), which is msec if not specified. Number of milliseconds a writer will wait to acquire the writelock before giving up. hive.metastore.aggregate.stats.cache.max.reader.wait 1000ms Expects a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec), which is msec if not specified. Number of milliseconds a reader will wait to acquire the readlock before giving up. hive.metastore.aggregate.stats.cache.max.full 0.9 Maximum cache full % after which the cache cleaner thread kicks in. hive.metastore.aggregate.stats.cache.clean.until 0.8 The cleaner thread cleans until cache reaches this % full size. hive.metastore.metrics.enabled false Enable metrics on the metastore. hive.metastore.initial.metadata.count.enabled true Enable a metadata count at metastore startup for metrics. hive.metastore.use.SSL false Set this to true for using SSL encryption in HMS server. hive.metastore.keystore.path Metastore SSL certificate keystore location. hive.metastore.keystore.password Metastore SSL certificate keystore password. hive.metastore.truststore.path Metastore SSL certificate truststore location. hive.metastore.truststore.password Metastore SSL certificate truststore password. hive.metadata.export.location When used in conjunction with the org.apache.hadoop.hive.ql.parse.MetaDataExportListener pre event listener, it is the location to which the metadata will be exported. The default is an empty string, which results in the metadata being exported to the current user's home directory on HDFS. hive.metadata.move.exported.metadata.to.trash true When used in conjunction with the org.apache.hadoop.hive.ql.parse.MetaDataExportListener pre event listener, this setting determines if the metadata that is exported will subsequently be moved to the user's trash directory alongside the dropped table data. This ensures that the metadata will be cleaned up along with the dropped table data. hive.cli.errors.ignore false hive.cli.print.current.db false Whether to include the current database in the Hive prompt. hive.cli.prompt hive Command line prompt configuration value. Other hiveconf can be used in this configuration value. Variable substitution will only be invoked at the Hive CLI startup. hive.cli.pretty.output.num.cols -1 The number of columns to use when formatting output generated by the DESCRIBE PRETTY table_name command. If the value of this property is -1, then Hive will use the auto-detected terminal width. hive.metastore.fs.handler.class org.apache.hadoop.hive.metastore.HiveMetaStoreFsImpl hive.session.id hive.session.silent false hive.session.history.enabled false Whether to log Hive query, query plan, runtime statistics etc. hive.query.string Query being executed (might be multiple per a session) hive.query.id ID for query being executed (might be multiple per a session) hive.jobname.length 50 max jobname length hive.jar.path The location of hive_cli.jar that is used when submitting jobs in a separate jvm. hive.aux.jars.path The location of the plugin jars that contain implementations of user defined functions and serdes. hive.reloadable.aux.jars.path The locations of the plugin jars, which can be a comma-separated folders or jars. Jars can be renewed by executing reload command. And these jars can be used as the auxiliary classes like creating a UDF or SerDe. hive.added.files.path This an internal parameter. 
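A hedged sketch of the metastore SSL settings described above; the keystore and truststore paths and passwords are placeholders, not values managed by this plugin.
<configuration>
  <!-- Illustrative HMS SSL settings; paths and passwords are placeholders -->
  <property>
    <name>hive.metastore.use.SSL</name>
    <value>true</value>
  </property>
  <property>
    <name>hive.metastore.keystore.path</name>
    <value>/etc/hive/conf/hms-keystore.jks</value>
  </property>
  <property>
    <name>hive.metastore.keystore.password</name>
    <value>change-me</value>
  </property>
  <property>
    <name>hive.metastore.truststore.path</name>
    <value>/etc/hive/conf/truststore.jks</value>
  </property>
</configuration>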
hive.added.jars.path This is an internal parameter. hive.added.archives.path This is an internal parameter. hive.auto.progress.timeout 0s Expects a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec), which is sec if not specified. How long to run the autoprogressor for the script/UDTF operators. Set to 0 for forever. hive.script.auto.progress false Whether Hive Transform/Map/Reduce Clause should automatically send progress information to TaskTracker to avoid the task getting killed because of inactivity. Hive sends progress information when the script is outputting to stderr. This option removes the need to periodically produce stderr messages, but users should be cautious because it may prevent the TaskTracker from killing scripts stuck in infinite loops. hive.script.operator.id.env.var HIVE_SCRIPT_OPERATOR_ID Name of the environment variable that holds the unique script operator ID in the user's transform function (the custom mapper/reducer that the user has specified in the query) hive.script.operator.truncate.env false Truncate each environment variable for external script in scripts operator to 20KB (to fit system limits) hive.script.operator.env.blacklist hive.txn.valid.txns,hive.script.operator.env.blacklist Comma separated list of keys from the configuration file not to convert to environment variables when invoking the script operator hive.strict.checks.large.query false Enabling strict large query checks disallows the following: Order by without limit. No partition being picked up for a query against a partitioned table. Note that these checks currently do not consider data size, only the query pattern. hive.strict.checks.type.safety true Enabling strict type safety checks disallows the following: Comparing bigints and strings. Comparing bigints and doubles. hive.strict.checks.cartesian.product true Enabling strict Cartesian join checks disallows the following: Cartesian product (cross join). hive.strict.checks.bucketing true Enabling strict bucketing checks disallows the following: Load into bucketed tables. hive.mapred.mode Deprecated; use hive.strict.checks.* settings instead. hive.alias hive.map.aggr true Whether to use map-side aggregation in Hive Group By queries hive.groupby.skewindata false Whether there is skew in data to optimize group by queries hive.join.emit.interval 1000 How many rows in the right-most join operand Hive should buffer before emitting the join result. hive.join.cache.size 25000 How many rows in the joining tables (except the streaming table) should be cached in memory. hive.cbo.enable true Flag to control enabling Cost Based Optimizations using the Calcite framework. hive.cbo.cnf.maxnodes -1 When converting to conjunctive normal form (CNF), fail if the expression exceeds this threshold; the threshold is expressed in terms of the number of nodes (leaves and interior nodes). -1 to not set up a threshold. hive.cbo.returnpath.hiveop false Flag to control Calcite plan to Hive operator conversion hive.cbo.costmodel.extended false Flag to control enabling the extended cost model based on CPU, IO and cardinality. Otherwise, the cost model is based on cardinality.
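As a minimal sketch of the strict-check family just described, enabling the large-query check (which the text above documents as defaulting to false):

<property>
  <!-- Disallows ORDER BY without LIMIT and queries that pick up no partition. -->
  <name>hive.strict.checks.large.query</name>
  <value>true</value>
</property>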
hive.cbo.costmodel.cpu 0.000001 Default cost of a comparison hive.cbo.costmodel.network 150.0 Default cost of transferring a byte over the network; expressed as a multiple of CPU cost hive.cbo.costmodel.local.fs.write 4.0 Default cost of writing a byte to local FS; expressed as a multiple of NETWORK cost hive.cbo.costmodel.local.fs.read 4.0 Default cost of reading a byte from local FS; expressed as a multiple of NETWORK cost hive.cbo.costmodel.hdfs.write 10.0 Default cost of writing a byte to HDFS; expressed as a multiple of Local FS write cost hive.cbo.costmodel.hdfs.read 1.5 Default cost of reading a byte from HDFS; expressed as a multiple of Local FS read cost hive.cbo.show.warnings true Toggle display of CBO warnings like missing column stats hive.transpose.aggr.join false Push aggregates through joins hive.optimize.semijoin.conversion true Convert a group by followed by an inner equi-join into a semijoin hive.order.columnalignment true Flag to control whether we want to try to align columns in operators such as Aggregate or Join so that we try to reduce the number of shuffling stages hive.materializedview.rewriting false Whether to try to rewrite queries using the materialized views enabled for rewriting hive.materializedview.fileformat ORC Expects one of [none, textfile, sequencefile, rcfile, orc]. Default file format for CREATE MATERIALIZED VIEW statement hive.materializedview.serde org.apache.hadoop.hive.ql.io.orc.OrcSerde Default SerDe used for materialized views hive.mapjoin.bucket.cache.size 100 hive.mapjoin.optimized.hashtable true Whether Hive should use a memory-optimized hash table for MapJoin. Only works on Tez and Spark, because the memory-optimized hashtable cannot be serialized. hive.mapjoin.optimized.hashtable.probe.percent 0.5 Probing space percentage of the optimized hashtable hive.mapjoin.hybridgrace.hashtable true Whether to use hybrid grace hash join as the join method for mapjoin. Tez only. hive.mapjoin.hybridgrace.memcheckfrequency 1024 For hybrid grace hash join, how often (how many rows apart) we check if memory is full. This number should be a power of 2. hive.mapjoin.hybridgrace.minwbsize 524288 For hybrid grace hash join, the minimum write buffer size used by the optimized hashtable. Default is 512 KB. hive.mapjoin.hybridgrace.minnumpartitions 16 For hybrid grace hash join, the minimum number of partitions to create. hive.mapjoin.optimized.hashtable.wbsize 8388608 Optimized hashtable (see hive.mapjoin.optimized.hashtable) uses a chain of buffers to store data. This is one buffer size. HT may be slightly faster if this is larger, but for small joins unnecessary memory will be allocated and then trimmed. hive.mapjoin.hybridgrace.bloomfilter true Whether to use BloomFilter in hybrid grace hash join to minimize unnecessary spilling. hive.smbjoin.cache.rows 10000 How many rows with the same key value should be cached in memory per smb joined table. hive.groupby.mapaggr.checkinterval 100000 Number of rows after which the size of the grouping keys/aggregation classes is checked hive.map.aggr.hash.percentmemory 0.5 Portion of total memory to be used by the map-side group aggregation hash table hive.mapjoin.followby.map.aggr.hash.percentmemory 0.3 Portion of total memory to be used by the map-side group aggregation hash table, when this group by is followed by a map join hive.map.aggr.hash.force.flush.memory.threshold 0.9 The max memory to be used by the map-side group aggregation hash table.
If the memory usage is higher than this number, a flush of the data is forced hive.map.aggr.hash.min.reduction 0.5 Hash aggregation will be turned off if the ratio between hash table size and input rows is bigger than this number. Set to 1 to make sure hash aggregation is never turned off. hive.multigroupby.singlereducer true Whether to optimize a multi group by query to generate a single M/R job plan. If the multi group by query has common group by keys, it will be optimized to generate a single M/R job. hive.map.groupby.sorted true If the bucketing/sorting properties of the table exactly match the grouping key, whether to perform the group by in the mapper by using BucketizedHiveInputFormat. The only downside to this is that it limits the number of mappers to the number of files. hive.groupby.position.alias false Whether to enable using Column Position Alias in Group By hive.orderby.position.alias true Whether to enable using Column Position Alias in Order By hive.groupby.orderby.position.alias false Whether to enable using Column Position Alias in Group By or Order By (deprecated). Use hive.orderby.position.alias or hive.groupby.position.alias instead hive.new.job.grouping.set.cardinality 30 Whether a new map-reduce job should be launched for grouping sets/rollups/cubes. For a query like: select a, b, c, count(1) from T group by a, b, c with rollup; 4 rows are created per row: (a, b, c), (a, b, null), (a, null, null), (null, null, null). This can lead to explosion across the map-reduce boundary if the cardinality of T is very high, and map-side aggregation does not do a very good job. This parameter decides if Hive should add an additional map-reduce job. If the grouping set cardinality (4 in the example above) is more than this value, a new MR job is added under the assumption that the original group by will reduce the data size. hive.groupby.limit.extrastep true This parameter decides if Hive should create a new MR job for sorting the final output hive.exec.copyfile.maxnumfiles 1 Maximum number of files Hive uses to do sequential HDFS copies between directories. Distributed copies (distcp) will be used instead for larger numbers of files so that copies can be done faster. hive.exec.copyfile.maxsize 33554432 Maximum file size (in bytes) that Hive uses to do single HDFS copies between directories. Distributed copies (distcp) will be used instead for bigger files so that copies can be done faster. hive.udtf.auto.progress false Whether Hive should automatically send progress information to TaskTracker when using UDTFs to prevent the task from getting killed because of inactivity. Users should be cautious because this may prevent TaskTracker from killing tasks with infinite loops. hive.default.fileformat TextFile Expects one of [textfile, sequencefile, rcfile, orc, parquet]. Default file format for CREATE TABLE statement. Users can explicitly override it by CREATE TABLE ... STORED AS [FORMAT] hive.default.fileformat.managed none Expects one of [none, textfile, sequencefile, rcfile, orc, parquet]. Default file format for CREATE TABLE statement applied to managed tables only. External tables will be created with the format specified by hive.default.fileformat. Leaving this null will result in using hive.default.fileformat for all tables. hive.query.result.fileformat SequenceFile Expects one of [textfile, sequencefile, rcfile, llap]. Default file format for storing the result of the query.
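The accepted values for hive.default.fileformat are listed in its description above; a hedged sketch that switches the default for new tables to ORC (an illustrative choice, not a recommendation from this file):

<property>
  <!-- One of: textfile, sequencefile, rcfile, orc, parquet. -->
  <name>hive.default.fileformat</name>
  <value>ORC</value>
</property>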
hive.fileformat.check true Whether to check the file format or not when loading data files hive.default.rcfile.serde org.apache.hadoop.hive.serde2.columnar.LazyBinaryColumnarSerDe The default SerDe Hive will use for the RCFile format hive.default.serde org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe The default SerDe Hive will use for storage formats that do not specify a SerDe. hive.serdes.using.metastore.for.schema org.apache.hadoop.hive.ql.io.orc.OrcSerde,org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe,org.apache.hadoop.hive.serde2.columnar.ColumnarSerDe,org.apache.hadoop.hive.serde2.dynamic_type.DynamicSerDe,org.apache.hadoop.hive.serde2.MetadataTypedColumnsetSerDe,org.apache.hadoop.hive.serde2.columnar.LazyBinaryColumnarSerDe,org.apache.hadoop.hive.ql.io.parquet.serde.ParquetHiveSerDe,org.apache.hadoop.hive.serde2.lazybinary.LazyBinarySerDe SerDes retrieving schema from the metastore. This is an internal parameter. hive.querylog.location ${system:java.io.tmpdir}/${system:user.name} Location of the Hive run time structured log file hive.querylog.enable.plan.progress true Whether to log the plan's progress every time a job's progress is checked. These logs are written to the location specified by hive.querylog.location hive.querylog.plan.progress.interval 60000ms Expects a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec), which is msec if not specified. The interval to wait between logging the plan's progress. If there is a whole number percentage change in the progress of the mappers or the reducers, the progress is logged regardless of this value. The actual interval will be the ceiling of (this value divided by the value of hive.exec.counters.pull.interval) multiplied by the value of hive.exec.counters.pull.interval. I.e., if it does not divide evenly by the value of hive.exec.counters.pull.interval, it will be logged less frequently than specified. This only has an effect if hive.querylog.enable.plan.progress is set to true. hive.script.serde org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe The default SerDe for transmitting input data to and reading output data from the user scripts. hive.script.recordreader org.apache.hadoop.hive.ql.exec.TextRecordReader The default record reader for reading data from the user scripts. hive.script.recordwriter org.apache.hadoop.hive.ql.exec.TextRecordWriter The default record writer for writing data to the user scripts. hive.transform.escape.input false This adds an option to escape special chars (newlines, carriage returns and tabs) when they are passed to the user script. This is useful if the Hive tables can contain data that contains special characters. hive.binary.record.max.length 1000 Read from a binary stream and treat each hive.binary.record.max.length bytes as a record. The last record before the end of stream can have less than hive.binary.record.max.length bytes hive.mapred.local.mem 0 Mapper/reducer memory in local mode hive.mapjoin.smalltable.filesize 25000000 The threshold for the input file size of the small tables; if the file size is smaller than this threshold, it will try to convert the common join into a map join hive.exec.schema.evolution true Use schema evolution to convert self-describing file format's data to the schema desired by the reader. hive.transactional.events.mem 10000000 Vectorized ACID readers can often load all the delete events from all the delete deltas into memory to optimize for performance.
To prevent out-of-memory errors, this is a rough heuristic that limits the total number of delete events that can be loaded into memory at once. Roughly it has been set to 10 million delete events per bucket (~160 MB). hive.sample.seednumber 0 A number used for percentage sampling. By changing this number, the user will change the subsets of data sampled. hive.test.mode false Whether Hive is running in test mode. If yes, it turns on sampling and prefixes the output table name. hive.test.mode.prefix test_ In test mode, specifies prefixes for the output table hive.test.mode.samplefreq 32 In test mode, specifies the sampling frequency for a table which is not bucketed. For example, the following query: INSERT OVERWRITE TABLE dest SELECT col1 from src would be converted to INSERT OVERWRITE TABLE test_dest SELECT col1 from src TABLESAMPLE (BUCKET 1 out of 32 on rand(1)) hive.test.mode.nosamplelist In test mode, specifies comma separated table names which would not apply sampling hive.test.dummystats.aggregator internal variable for test hive.test.dummystats.publisher internal variable for test hive.test.currenttimestamp current timestamp for test hive.test.rollbacktxn false For testing only. Will mark every ACID transaction aborted. hive.test.fail.compaction false For testing only. Will cause CompactorMR to fail. hive.test.fail.heartbeater false For testing only. Will cause Heartbeater to fail. hive.merge.mapfiles true Merge small files at the end of a map-only job hive.merge.mapredfiles false Merge small files at the end of a map-reduce job hive.merge.tezfiles false Merge small files at the end of a Tez DAG hive.merge.sparkfiles false Merge small files at the end of a Spark DAG Transformation hive.merge.size.per.task 256000000 Size of merged files at the end of the job hive.merge.smallfiles.avgsize 16000000 When the average output file size of a job is less than this number, Hive will start an additional map-reduce job to merge the output files into bigger files. This is only done for map-only jobs if hive.merge.mapfiles is true, and for map-reduce jobs if hive.merge.mapredfiles is true. hive.merge.rcfile.block.level true hive.merge.orcfile.stripe.level true When hive.merge.mapfiles, hive.merge.mapredfiles or hive.merge.tezfiles is enabled while writing a table with ORC file format, enabling this config will do stripe-level fast merge for small ORC files. Note that enabling this config will not honor the padding tolerance config (hive.exec.orc.block.padding.tolerance). hive.exec.rcfile.use.explicit.header true If this is set, the header for RCFiles will simply be RCF. If this is not set, the header will be the one borrowed from sequence files, e.g. SEQ- followed by the input and output RCFile formats. hive.exec.rcfile.use.sync.cache true hive.io.rcfile.record.interval 2147483647 hive.io.rcfile.column.number.conf 0 hive.io.rcfile.tolerate.corruptions false hive.io.rcfile.record.buffer.size 4194304 parquet.memory.pool.ratio 0.5 Maximum fraction of heap that can be used by Parquet file writers in one task. It is for avoiding OutOfMemory errors in tasks. Works with Parquet 1.6.0 and above. This config parameter is defined in Parquet, so that it does not start with 'hive.'. hive.parquet.timestamp.skip.conversion true The current Hive implementation of parquet stores timestamps in UTC; this flag allows skipping of the conversion on reading parquet files from other tools hive.int.timestamp.conversion.in.seconds false Boolean/tinyint/smallint/int/bigint value is interpreted as milliseconds during the timestamp conversion.
Set this flag to true to interpret the value as seconds to be consistent with float/double. hive.exec.orc.base.delta.ratio 8 The ratio of base writer and delta writer in terms of STRIPE_SIZE and BUFFER_SIZE. hive.exec.orc.split.strategy HYBRID Expects one of [hybrid, bi, etl]. This is not a user level config. BI strategy is used when the requirement is to spend less time in split generation as opposed to query execution (split generation does not read or cache file footers). ETL strategy is used when spending a little more time in split generation is acceptable (split generation reads and caches file footers). HYBRID chooses between the above strategies based on heuristics. hive.orc.splits.ms.footer.cache.enabled false Whether to enable using the file metadata cache in the metastore for ORC file footers. hive.orc.splits.ms.footer.cache.ppd.enabled true Whether to enable file footer cache PPD (hive.orc.splits.ms.footer.cache.enabled must also be set to true for this to work). hive.orc.splits.include.file.footer false If turned on, splits generated by ORC will include metadata about the stripes in the file. This data is read remotely (from the client or HS2 machine) and sent to all the tasks. hive.orc.splits.directory.batch.ms 0 How long, in ms, to wait to batch input directories for processing during ORC split generation. 0 means process directories individually. This can increase the number of metastore calls if the metastore metadata cache is used. hive.orc.splits.include.fileid true Include file ID in splits on file systems that support it. hive.orc.splits.allow.synthetic.fileid true Allow synthetic file ID in splits on file systems that don't have a native one. hive.orc.cache.stripe.details.mem.size 256Mb Expects a byte size value with unit (blank for bytes, kb, mb, gb, tb, pb). Maximum size of ORC splits cached in the client. hive.orc.compute.splits.num.threads 10 How many threads ORC should use to create splits in parallel. hive.orc.cache.use.soft.references false By default, the cache that the ORC input format uses to store ORC file footers uses hard references for the cached objects. Setting this to true can help avoid out of memory issues under memory pressure (in some cases) at the cost of slight unpredictability in overall query performance. hive.io.sarg.cache.max.weight.mb 10 The max weight allowed for the SearchArgument Cache. By default, the cache allows a max-weight of 10MB, after which entries will be evicted. hive.lazysimple.extended_boolean_literal false LazySimpleSerde uses this property to determine if it treats 'T', 't', 'F', 'f', '1', and '0' as extended, legal boolean literals, in addition to 'TRUE' and 'FALSE'. The default is false, which means only 'TRUE' and 'FALSE' are treated as legal boolean literals. hive.optimize.skewjoin false Whether to enable skew join optimization. The algorithm is as follows: At runtime, detect the keys with a large skew. Instead of processing those keys, store them temporarily in an HDFS directory. In a follow-up map-reduce job, process those skewed keys. The same key need not be skewed for all the tables, and so, the follow-up map-reduce job (for the skewed keys) would be much faster, since it would be a map-join. hive.optimize.dynamic.partition.hashjoin false Whether to enable dynamically partitioned hash join optimization.
This setting is also dependent on enabling hive.auto.convert.join hive.auto.convert.join true Whether Hive enables the optimization of converting a common join into a mapjoin based on the input file size hive.auto.convert.join.noconditionaltask true Whether Hive enables the optimization of converting a common join into a mapjoin based on the input file size. If this parameter is on, and the sum of sizes for n-1 of the tables/partitions for an n-way join is smaller than the specified size, the join is directly converted to a mapjoin (there is no conditional task). hive.auto.convert.join.noconditionaltask.size 10000000 If hive.auto.convert.join.noconditionaltask is off, this parameter does not take effect. However, if it is on, and the sum of sizes for n-1 of the tables/partitions for an n-way join is smaller than this size, the join is directly converted to a mapjoin (there is no conditional task). The default is 10MB hive.auto.convert.join.use.nonstaged false For conditional joins, if the input stream from a small alias can be directly applied to the join operator without filtering or projection, the alias need not be pre-staged in the distributed cache via a mapred local task. Currently, this is not working with vectorization or the tez execution engine. hive.skewjoin.key 100000 Determine if we get a skew key in a join. If we see more than the specified number of rows with the same key in the join operator, we treat the key as a skew join key. hive.skewjoin.mapjoin.map.tasks 10000 Determine the number of map tasks used in the follow-up map join job for a skew join. It should be used together with hive.skewjoin.mapjoin.min.split to perform fine-grained control. hive.skewjoin.mapjoin.min.split 33554432 Determine the maximum number of map tasks used in the follow-up map join job for a skew join by specifying the minimum split size. It should be used together with hive.skewjoin.mapjoin.map.tasks to perform fine-grained control. hive.heartbeat.interval 1000 Send a heartbeat after this interval - used by mapjoin and filter operators hive.limit.row.max.size 100000 When trying a smaller subset of data for simple LIMIT, the minimum size each row is guaranteed to have. hive.limit.optimize.limit.file 10 When trying a smaller subset of data for simple LIMIT, the maximum number of files we can sample. hive.limit.optimize.enable false Whether to enable the optimization of trying a smaller subset of data for simple LIMIT first. hive.limit.optimize.fetch.max 50000 Maximum number of rows allowed for a smaller subset of data for simple LIMIT, if it is a fetch query. Insert queries are not restricted by this limit. hive.limit.pushdown.memory.usage 0.1 Expects value between 0.0f and 1.0f. The fraction of available memory to be used for buffering rows in the Reducesink operator for the limit pushdown optimization. hive.limit.query.max.table.partition -1 This controls how many partitions can be scanned for each partitioned table. The default value "-1" means no limit. (DEPRECATED: Please use hive.metastore.limit.partition.request in the metastore instead.) hive.auto.convert.join.hashtable.max.entries 40000000 If hive.auto.convert.join.noconditionaltask is off, this parameter does not take effect. However, if it is on, and the predicted number of entries in the hashtable for a given join input is larger than this number, the join will not be converted to a mapjoin. The value "-1" means no limit.
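Drawing on the map-join conversion entries above, a sketch that raises the no-conditional-task threshold (the 64 MB value is illustrative; the documented default is 10000000, i.e. 10 MB):

<property>
  <name>hive.auto.convert.join</name>
  <value>true</value>
</property>
<property>
  <!-- Illustrative 64 MB threshold; documented default is 10000000 (10 MB). -->
  <name>hive.auto.convert.join.noconditionaltask.size</name>
  <value>67108864</value>
</property>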
hive.hashtable.key.count.adjustment 1.0 Adjustment to mapjoin hashtable size derived from table and column statistics; the estimate of the number of keys is divided by this value. If the value is 0, statistics are not used and hive.hashtable.initialCapacity is used instead. hive.hashtable.initialCapacity 100000 Initial capacity of the mapjoin hashtable if statistics are absent, or if hive.hashtable.key.count.adjustment is set to 0 hive.hashtable.loadfactor 0.75 hive.mapjoin.followby.gby.localtask.max.memory.usage 0.55 This number means how much memory the local task can use to hold the key/value pairs in an in-memory hash table when this map join is followed by a group by. If the local task's memory usage is more than this number, the local task will abort by itself. It means the data of the small table is too large to be held in memory. hive.mapjoin.localtask.max.memory.usage 0.9 This number means how much memory the local task can use to hold the key/value pairs in an in-memory hash table. If the local task's memory usage is more than this number, the local task will abort by itself. It means the data of the small table is too large to be held in memory. hive.mapjoin.check.memory.rows 100000 The number of rows processed after which memory usage is checked hive.debug.localtask false hive.input.format org.apache.hadoop.hive.ql.io.CombineHiveInputFormat The default input format. Set this to HiveInputFormat if you encounter problems with CombineHiveInputFormat. hive.tez.input.format org.apache.hadoop.hive.ql.io.HiveInputFormat The default input format for tez. Tez groups splits in the AM. hive.tez.container.size -1 By default Tez will spawn containers of the size of a mapper. This can be used to override it. hive.tez.cpu.vcores -1 By default Tez will ask for however many CPUs map-reduce is configured to use per container. This can be used to override it. hive.tez.java.opts By default Tez will use the Java options from map tasks. This can be used to override them. hive.tez.log.level INFO The log level to use for tasks executing as part of the DAG. Used only if hive.tez.java.opts is used to configure Java options. hive.tez.hs2.user.access true Whether to grant access to the hs2/hive user for queries hive.query.name This name is used by Tez to set the DAG name. This name in turn will appear on the Tez UI representing the work that was done. hive.optimize.bucketingsorting true Don't create a reducer for enforcing bucketing/sorting for queries of the form: insert overwrite table T2 select * from T1; where T1 and T2 are bucketed/sorted by the same keys into the same number of buckets. hive.mapred.partitioner org.apache.hadoop.hive.ql.io.DefaultHivePartitioner hive.enforce.sortmergebucketmapjoin false If the user asked for a sort-merge bucketed map-side join, and it cannot be performed, should the query fail or not? hive.enforce.bucketmapjoin false If the user asked for a bucketed map-side join, and it cannot be performed, should the query fail or not? For example, if the buckets in the tables being joined are not a multiple of each other, a bucketed map-side join cannot be performed, and the query will fail if hive.enforce.bucketmapjoin is set to true. hive.auto.convert.sortmerge.join false Whether the join will be automatically converted to a sort-merge join, if the joined tables pass the criteria for a sort-merge join. hive.auto.convert.sortmerge.join.reduce.side true Whether hive.auto.convert.sortmerge.join (if enabled) should be applied to the reduce side.
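hive.tez.container.size above defaults to -1, meaning Tez inherits the mapper container size; a sketch overriding it (the value 4096 is hypothetical, and the assumption that the figure is interpreted as megabytes is ours, since the description above does not state units):

<property>
  <!-- Hypothetical override; -1 (the default) inherits the mapper container size. -->
  <name>hive.tez.container.size</name>
  <value>4096</value>
</property>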
hive.auto.convert.sortmerge.join.bigtable.selection.policy org.apache.hadoop.hive.ql.optimizer.AvgPartitionSizeBasedBigTableSelectorForAutoSMJ The policy to choose the big table for automatic conversion to sort-merge join. By default, the table with the largest partitions is selected as the big table. The available policies are: based on position of the table (the leftmost table is selected): org.apache.hadoop.hive.ql.optimizer.LeftmostBigTableSMJ; based on total size (all the partitions selected in the query) of the table: org.apache.hadoop.hive.ql.optimizer.TableSizeBasedBigTableSelectorForAutoSMJ; based on average size (all the partitions selected in the query) of the table: org.apache.hadoop.hive.ql.optimizer.AvgPartitionSizeBasedBigTableSelectorForAutoSMJ. New policies can be added in the future. hive.auto.convert.sortmerge.join.to.mapjoin false If hive.auto.convert.sortmerge.join is set to true, and a join was converted to a sort-merge join, this parameter decides whether each table should be tried as the big table, and effectively a map-join should be tried. That would create a conditional task with n+1 children for an n-way join (1 child for each table as the big table), and the backup task will be the sort-merge join. In some cases, a map-join would be faster than a sort-merge join, if there is no advantage of having the output bucketed and sorted. For example, if a very big sorted and bucketed table with few files (say 10 files) is being joined with a very small sorted and bucketed table with few files (10 files), the sort-merge join will only use 10 mappers, and a simple map-only join might be faster if the complete small table can fit in memory, and a map-join can be performed. hive.exec.script.trust false hive.exec.rowoffset false Whether to provide the row offset virtual column hive.optimize.index.filter false Whether to enable automatic use of indexes hive.optimize.index.autoupdate false Whether to update stale indexes automatically hive.optimize.ppd true Whether to enable predicate pushdown hive.optimize.ppd.windowing true Whether to enable predicate pushdown through windowing hive.ppd.recognizetransivity true Whether to transitively replicate predicate filters over equijoin conditions. hive.ppd.remove.duplicatefilters true During query optimization, filters may be pushed down in the operator tree. If this config is true, only pushed down filters remain in the operator tree, and the original filter is removed. If this config is false, the original filter is also left in the operator tree at the original place. hive.optimize.point.lookup true Whether to transform OR clauses in Filter operators into IN clauses hive.optimize.point.lookup.min 31 Minimum number of OR clauses needed to transform into IN clauses hive.optimize.partition.columns.separate true Extract partition columns from IN clauses hive.optimize.constant.propagation true Whether to enable the constant propagation optimizer hive.optimize.remove.identity.project true Removes identity projects from the operator tree hive.optimize.metadataonly false Whether to eliminate scans of the tables from which no columns are selected. Note that, when selecting from empty tables with data files, this can produce incorrect results, so it's disabled by default. It works correctly for normal tables.
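The three selection-policy classes named above can be swapped in directly; for example, choosing the total-size-based policy (the class name is taken verbatim from the description above):

<property>
  <name>hive.auto.convert.sortmerge.join.bigtable.selection.policy</name>
  <value>org.apache.hadoop.hive.ql.optimizer.TableSizeBasedBigTableSelectorForAutoSMJ</value>
</property>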
hive.optimize.null.scan true Don't scan relations that are guaranteed not to generate any rows hive.optimize.ppd.storage true Whether to push predicates down to storage handlers hive.optimize.groupby true Whether to enable the bucketed group by from bucketed partitions/tables. hive.optimize.bucketmapjoin false Whether to try bucket mapjoin hive.optimize.bucketmapjoin.sortedmerge false Whether to try sorted bucket merge map join hive.optimize.reducededuplication true Remove extra map-reduce jobs if the data is already clustered by the same key which needs to be used again. This should always be set to true. Since it is a new feature, it has been made configurable. hive.optimize.reducededuplication.min.reducer 4 Reduce deduplication merges two RSs by moving key/parts/reducer-num of the child RS to the parent RS. That means if the reducer-num of the child RS is fixed (order by or forced bucketing) and small, it can result in a very slow, single MR job. The optimization will be automatically disabled if the number of reducers would be less than the specified value. hive.optimize.sort.dynamic.partition false When enabled, the dynamic partitioning column will be globally sorted. This way we can keep only one record writer open for each partition value in the reducer, thereby reducing the memory pressure on reducers. hive.optimize.sampling.orderby false Uses sampling on the order-by clause for parallel execution. hive.optimize.sampling.orderby.number 1000 Total number of samples to be obtained. hive.optimize.sampling.orderby.percent 0.1 Expects value between 0.0f and 1.0f. Probability with which a row will be chosen. hive.optimize.distinct.rewrite true When applicable, this optimization rewrites distinct aggregates from a single stage to multi-stage aggregation. This may not be optimal in all cases. Ideally, whether to trigger it or not should be a cost-based decision. Until Hive formalizes a cost model for this, it is config driven. hive.optimize.union.remove false Whether to remove the union and push the operators between union and the filesink above union. This avoids an extra scan of the output by union. This is independently useful for union queries, and especially useful when hive.optimize.skewjoin.compiletime is set to true, since an extra union is inserted. The merge is triggered if either of hive.merge.mapfiles or hive.merge.mapredfiles is set to true. If the user has set hive.merge.mapfiles to true and hive.merge.mapredfiles to false, the assumption was that the number of reducers is small, so the number of files is small anyway. However, with this optimization, we are increasing the number of files possibly by a big margin. So, we merge aggressively. hive.optimize.correlation false Exploit intra-query correlations. hive.optimize.limittranspose false Whether to push a limit through a left/right outer join or union. If the value is true and the size of the outer input is reduced enough (as specified in hive.optimize.limittranspose.reduction), the limit is pushed to the outer input or union; to remain semantically correct, the limit is kept on top of the join or the union too. hive.optimize.limittranspose.reductionpercentage 1.0 When hive.optimize.limittranspose is true, this variable specifies the minimal reduction of the size of the outer input of the join or input of the union that we should get in order to apply the rule.
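As a sketch of one of the optimizer toggles above, enabling globally sorted dynamic-partition writes to keep a single record writer open per partition value in the reducer:

<property>
  <!-- Documented default is false; enabling reduces reducer memory pressure. -->
  <name>hive.optimize.sort.dynamic.partition</name>
  <value>true</value>
</property>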
hive.optimize.limittranspose.reductiontuples 0 When hive.optimize.limittranspose is true, this variable specifies the minimal reduction in the number of tuples of the outer input of the join or the input of the union that you should get in order to apply the rule. hive.optimize.filter.stats.reduction false Whether to simplify comparison expressions in filter operators using column stats hive.optimize.skewjoin.compiletime false Whether to create a separate plan for skewed keys for the tables in the join. This is based on the skewed keys stored in the metadata. At compile time, the plan is broken into different joins: one for the skewed keys, and the other for the remaining keys. And then, a union is performed for the 2 joins generated above. So unless the same skewed key is present in both the joined tables, the join for the skewed key will be performed as a map-side join. The main difference between this parameter and hive.optimize.skewjoin is that this parameter uses the skew information stored in the metastore to optimize the plan at compile time itself. If there is no skew information in the metadata, this parameter will not have any effect. Both hive.optimize.skewjoin.compiletime and hive.optimize.skewjoin should be set to true. Ideally, hive.optimize.skewjoin should be renamed as hive.optimize.skewjoin.runtime, but this is not done for backward compatibility. If the skew information is correctly stored in the metadata, hive.optimize.skewjoin.compiletime would change the query plan to take care of it, and hive.optimize.skewjoin will be a no-op. hive.optimize.cte.materialize.threshold -1 If the number of references to a CTE clause exceeds this threshold, Hive will materialize it before executing the main query block. -1 will disable this feature. hive.optimize.index.filter.compact.minsize 5368709120 Minimum size (in bytes) of the inputs on which a compact index is automatically used. hive.optimize.index.filter.compact.maxsize -1 Maximum size (in bytes) of the inputs on which a compact index is automatically used. A negative number is equivalent to infinity. hive.index.compact.query.max.entries 10000000 The maximum number of index entries to read during a query that uses the compact index. A negative value is equivalent to infinity. hive.index.compact.query.max.size 10737418240 The maximum number of bytes that a query using the compact index can read. A negative value is equivalent to infinity. hive.index.compact.binary.search true Whether or not to use a binary search to find the entries in an index table that match the filter, where possible hive.stats.autogather true A flag to gather statistics (only basic) automatically during the INSERT OVERWRITE command. hive.stats.column.autogather false A flag to gather column statistics automatically. hive.stats.dbclass fs Expects one of the patterns in [custom, fs]. The storage that stores temporary Hive statistics. In filesystem based statistics collection ('fs'), each task writes statistics it has collected in a file on the filesystem, which will be aggregated after the job has finished. Supported values are fs (filesystem) and custom as defined in StatsSetupConst.java. hive.stats.default.publisher The Java class (implementing the StatsPublisher interface) that is used by default if hive.stats.dbclass is of custom type. hive.stats.default.aggregator The Java class (implementing the StatsAggregator interface) that is used by default if hive.stats.dbclass is of custom type.
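The hive.optimize.skewjoin.compiletime description above states that both skew-join flags should be set to true together; expressed as property elements:

<property>
  <name>hive.optimize.skewjoin</name>
  <value>true</value>
</property>
<property>
  <!-- Uses skew information stored in the metastore at compile time. -->
  <name>hive.optimize.skewjoin.compiletime</name>
  <value>true</value>
</property>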
hive.stats.atomic false Whether to update metastore stats only if all stats are available hive.client.stats.counters Subset of counters that should be of interest for hive.client.stats.publishers (when one wants to limit their publishing). Non-display names should be used hive.stats.reliable false Whether queries will fail because stats cannot be collected completely accurately. If this is set to true, reading/writing from/into a partition may fail because the stats could not be computed accurately. hive.analyze.stmt.collect.partlevel.stats true analyze table T compute statistics for columns. Queries like these should compute partition-level stats for partitioned tables even when no part spec is specified. hive.stats.gather.num.threads 10 Number of threads used by the partialscan/noscan analyze command for partitioned tables. This is applicable only for file formats that implement StatsProvidingRecordReader (like ORC). hive.stats.collect.tablekeys false Whether join and group by keys on tables are derived and maintained in the QueryPlan. This is useful to identify how tables are accessed and to determine if they should be bucketed. hive.stats.collect.scancols false Whether column accesses are tracked in the QueryPlan. This is useful to identify how tables are accessed and to determine if there are wasted columns that can be trimmed. hive.stats.ndv.error 20.0 Standard error expressed in percentage. Provides a tradeoff between accuracy and compute cost. A lower value for error indicates higher accuracy and a higher compute cost. hive.metastore.stats.ndv.tuner 0.0 Provides a tunable parameter between the lower bound and the higher bound of ndv for aggregate ndv across all the partitions. The lower bound is equal to the maximum of ndv of all the partitions. The higher bound is equal to the sum of ndv of all the partitions. Its value should be between 0.0 (i.e., choose lower bound) and 1.0 (i.e., choose higher bound) hive.metastore.stats.ndv.densityfunction false Whether to use a density function to estimate the NDV for the whole table based on the NDV of partitions hive.stats.max.variable.length 100 To estimate the size of data flowing through operators in Hive/Tez (for reducer estimation etc.), average row size is multiplied with the total number of rows coming out of each operator. Average row size is computed from the average column size of all columns in the row. In the absence of column statistics, for variable length columns (like string, bytes etc.), this value will be used. For fixed length columns their corresponding Java equivalent sizes are used (float - 4 bytes, double - 8 bytes etc.). hive.stats.list.num.entries 10 To estimate the size of data flowing through operators in Hive/Tez (for reducer estimation etc.), average row size is multiplied with the total number of rows coming out of each operator. Average row size is computed from the average column size of all columns in the row. In the absence of column statistics and for variable length complex columns like list, the average number of entries/values can be specified using this config. hive.stats.map.num.entries 10 To estimate the size of data flowing through operators in Hive/Tez (for reducer estimation etc.), average row size is multiplied with the total number of rows coming out of each operator. Average row size is computed from the average column size of all columns in the row. In the absence of column statistics and for variable length complex columns like map, the average number of entries/values can be specified using this config.
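Tying together the statistics entries above, a sketch that turns on automatic column statistics gathering and trades compute for NDV accuracy (the 5.0 figure is illustrative; the documented default is 20.0):

<property>
  <name>hive.stats.column.autogather</name>
  <value>true</value>
</property>
<property>
  <!-- Illustrative: a lower standard error means higher accuracy and compute cost. -->
  <name>hive.stats.ndv.error</name>
  <value>5.0</value>
</property>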
hive.stats.fetch.partition.stats true Annotation of the operator tree with statistics information requires partition level basic statistics like number of rows, data size and file size. Partition statistics are fetched from the metastore. Fetching partition statistics for each needed partition can be expensive when the number of partitions is high. This flag can be used to disable fetching of partition statistics from the metastore. When this flag is disabled, Hive will make calls to the filesystem to get file sizes and will estimate the number of rows from the row schema. hive.stats.fetch.column.stats false Annotation of the operator tree with statistics information requires column statistics. Column statistics are fetched from the metastore. Fetching column statistics for each needed column can be expensive when the number of columns is high. This flag can be used to disable fetching of column statistics from the metastore. hive.stats.join.factor 1.1 The Hive/Tez optimizer estimates the data size flowing through each of the operators. The JOIN operator uses column statistics to estimate the number of rows flowing out of it and hence the data size. In the absence of column statistics, this factor determines the number of rows that flow out of the JOIN operator. hive.stats.deserialization.factor 1.0 The Hive/Tez optimizer estimates the data size flowing through each of the operators. In the absence of basic statistics like number of rows and data size, file size is used to estimate the number of rows and data size. Since files in tables/partitions are serialized (and optionally compressed) the estimates of number of rows and data size cannot be reliably determined. This factor is multiplied with the file size to account for serialization and compression. hive.stats.filter.in.factor 1.0 Currently column distribution is assumed to be uniform. This can lead to overestimation/underestimation in the number of rows filtered by a certain operator, which in turn might lead to overprovisioning or underprovisioning of resources. This factor is applied to the cardinality estimation of IN clauses in filter operators. hive.support.concurrency false Whether Hive supports concurrency control or not. A ZooKeeper instance must be up and running when using the ZooKeeper Hive lock manager hive.lock.manager org.apache.hadoop.hive.ql.lockmgr.zookeeper.ZooKeeperHiveLockManager hive.lock.numretries 100 The number of times you want to try to get all the locks hive.unlock.numretries 10 The number of times you want to retry a single unlock hive.lock.sleep.between.retries 60s Expects a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec), which is sec if not specified. The time should be in between 0 sec (exclusive) and 9223372036854775807 sec (exclusive). The maximum sleep time between various retries hive.lock.mapred.only.operation false This parameter controls whether locks are acquired only on queries that need to execute at least one mapred job. hive.zookeeper.quorum List of ZooKeeper servers to talk to. This is needed for: 1. Read/write locks, when hive.lock.manager is set to org.apache.hadoop.hive.ql.lockmgr.zookeeper.ZooKeeperHiveLockManager; 2. When HiveServer2 supports service discovery via Zookeeper; 3. Delegation token storage, if the zookeeper store is used and hive.cluster.delegation.token.store.zookeeper.connectString is not set; 4. The LLAP daemon registry service hive.zookeeper.client.port 2181 The port of ZooKeeper servers to talk to.
If the list of Zookeeper servers specified in hive.zookeeper.quorum does not contain port numbers, this value is used. hive.zookeeper.session.timeout 1200000ms Expects a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec), which is msec if not specified. ZooKeeper client's session timeout (in milliseconds). The client is disconnected, and as a result, all locks are released, if a heartbeat is not sent within the timeout. hive.zookeeper.namespace hive_zookeeper_namespace The parent node under which all ZooKeeper nodes are created. hive.zookeeper.clean.extra.nodes false Clean extra nodes at the end of the session. hive.zookeeper.connection.max.retries 3 Max number of times to retry when connecting to the ZooKeeper server. hive.zookeeper.connection.basesleeptime 1000ms Expects a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec), which is msec if not specified. Initial amount of time (in milliseconds) to wait between retries when connecting to the ZooKeeper server when using the ExponentialBackoffRetry policy. hive.txn.manager org.apache.hadoop.hive.ql.lockmgr.DummyTxnManager Set to org.apache.hadoop.hive.ql.lockmgr.DbTxnManager as part of turning on Hive transactions, which also requires appropriate settings for hive.compactor.initiator.on, hive.compactor.worker.threads, hive.support.concurrency (true), and hive.exec.dynamic.partition.mode (nonstrict). The default DummyTxnManager replicates pre-Hive-0.13 behavior and provides no transactions. hive.txn.strict.locking.mode true In strict mode, non-ACID resources use standard R/W lock semantics, e.g. INSERT will acquire an exclusive lock. In nonstrict mode, for non-ACID resources, INSERT will only acquire a shared lock, which allows two concurrent writes to the same partition but still lets the lock manager prevent DROP TABLE etc. when the table is being written to hive.txn.timeout 300s Expects a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec), which is sec if not specified. Time after which transactions are declared aborted if the client has not sent a heartbeat. hive.txn.heartbeat.threadpool.size 5 The number of threads to use for heartbeating. For Hive CLI, 1 is enough. For HiveServer2, we need a few hive.txn.manager.dump.lock.state.on.acquire.timeout false Set this to true so that when an attempt to acquire a lock on a resource times out, the current state of the lock manager is dumped to the log file. This is for debugging. See also hive.lock.numretries and hive.lock.sleep.between.retries. hive.txn.operational.properties 0 Sets the operational properties that control the appropriate behavior for various versions of the Hive ACID subsystem. Setting it to zero will turn on the legacy mode for ACID, while setting it to one will enable a split-update feature found in the newer version of the Hive ACID subsystem. Mostly it is intended to be used as an internal property for future versions of ACID. (See HIVE-14035 for details.) hive.max.open.txns 100000 Maximum number of open transactions. If the current open transactions reach this limit, future open transaction requests will be rejected, until this number goes below the limit. hive.count.open.txns.interval 1s Expects a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec), which is sec if not specified. Time in seconds between checks to count open transactions. hive.txn.max.open.batch 1000 Maximum number of transactions that can be fetched in one call to open_txns().
This controls how many transactions streaming agents such as Flume or Storm open simultaneously. The streaming agent then writes that number of entries into a single file (per Flume agent or Storm bolt). Thus increasing this value decreases the number of delta files created by streaming agents. But it also increases the number of open transactions that Hive has to track at any given time, which may negatively affect read performance. hive.txn.retryable.sqlex.regex Comma separated list of regular expression patterns for SQL state, error code, and error message of retryable SQLExceptions, that's suitable for the metastore DB. For example: Can't serialize.*,40001$,^Deadlock,.*ORA-08176.* The string that the regex will be matched against is of the following form, where ex is a SQLException: ex.getMessage() + " (SQLState=" + ex.getSQLState() + ", ErrorCode=" + ex.getErrorCode() + ")" hive.compactor.initiator.on false Whether to run the initiator and cleaner threads on this metastore instance or not. Set this to true on one instance of the Thrift metastore service as part of turning on Hive transactions. For a complete list of parameters required for turning on transactions, see hive.txn.manager. hive.compactor.worker.threads 0 How many compactor worker threads to run on this metastore instance. Set this to a positive number on one or more instances of the Thrift metastore service as part of turning on Hive transactions. For a complete list of parameters required for turning on transactions, see hive.txn.manager. Worker threads spawn MapReduce jobs to do compactions. They do not do the compactions themselves. Increasing the number of worker threads will decrease the time it takes for tables or partitions to be compacted once they are determined to need compaction. It will also increase the background load on the Hadoop cluster as more MapReduce jobs will be running in the background. hive.compactor.worker.timeout 86400s Expects a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec), which is sec if not specified. Time in seconds after which a compaction job will be declared failed and the compaction re-queued. hive.compactor.check.interval 300s Expects a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec), which is sec if not specified. Time in seconds between checks to see if any tables or partitions need to be compacted. This should be kept high because each check for compaction requires many calls against the NameNode. Decreasing this value will reduce the time it takes for compaction to be started for a table or partition that requires compaction. However, checking if compaction is needed requires several calls to the NameNode for each table or partition that has had a transaction done on it since the last major compaction. So decreasing this value will increase the load on the NameNode. hive.compactor.delta.num.threshold 10 Number of delta directories in a table or partition that will trigger a minor compaction. hive.compactor.delta.pct.threshold 0.1 Percentage (fractional) size of the delta files relative to the base that will trigger a major compaction. (1.0 = 100%, so the default 0.1 = 10%.) hive.compactor.max.num.delta 500 Maximum number of delta files that the compactor will attempt to handle in a single job. hive.compactor.abortedtxn.threshold 1000 Number of aborted transactions involving a given table or partition that will trigger a major compaction. hive.compactor.initiator.failed.compacts.threshold 2 Expects value between 1 and 20.
Number of consecutive compaction failures (per table/partition) after which automatic compactions will not be scheduled any more. Note that this must be less than hive.compactor.history.retention.failed. hive.compactor.cleaner.run.interval 5000ms Expects a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec), which is msec if not specified. Time between runs of the cleaner thread hive.compactor.job.queue Used to specify the name of the Hadoop queue to which Compaction jobs will be submitted. Set to an empty string to let Hadoop choose the queue. hive.compactor.history.retention.succeeded 3 Expects value between 0 and 100. Determines how many successful compaction records will be retained in compaction history for a given table/partition. hive.compactor.history.retention.failed 3 Expects value between 0 and 100. Determines how many failed compaction records will be retained in compaction history for a given table/partition. hive.compactor.history.retention.attempted 2 Expects value between 0 and 100. Determines how many attempted compaction records will be retained in compaction history for a given table/partition. hive.compactor.history.reaper.interval 2m Expects a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec), which is msec if not specified. Determines how often the compaction history reaper runs hive.timedout.txn.reaper.start 100s Expects a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec), which is msec if not specified. Time delay of the first reaper run after metastore start hive.timedout.txn.reaper.interval 180s Expects a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec), which is msec if not specified. Time interval describing how often the reaper runs hive.writeset.reaper.interval 60s Expects a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec), which is msec if not specified. Frequency of WriteSet reaper runs hive.merge.cardinality.check true Set to true so that each SQL Merge statement ensures that for each row in the target table there is at most 1 matching row in the source table per the SQL Specification. hive.druid.indexer.segments.granularity DAY Expects one of the patterns in [YEAR, MONTH, WEEK, DAY, HOUR, MINUTE, SECOND]. Granularity for the segments created by the Druid storage handler hive.druid.indexer.partition.size.max 5000000 Maximum number of records per segment partition hive.druid.indexer.memory.rownum.max 75000 Maximum number of records in memory while storing data in Druid hive.druid.broker.address.default localhost:8082 Address of the Druid broker. If we are querying Druid from Hive, this address needs to be declared hive.druid.coordinator.address.default localhost:8081 Address of the Druid coordinator. It is used to check the load status of newly created segments hive.druid.select.distribute true If it is set to true, we distribute the execution of Druid Select queries. Concretely, we retrieve the results for Select queries directly from the Druid nodes containing the segments data. In particular, first we contact the Druid broker node to obtain the nodes containing the segments for the given query, and then we contact those nodes to retrieve the results for the query. If it is set to false, we do not execute the Select queries in a distributed fashion. Instead, results for those queries are returned by the Druid broker node. hive.druid.select.threshold 10000 Takes effect only when hive.druid.select.distribute is set to false.
When we can split a Select query, this is the maximum number of rows that we try to retrieve per query. In order to do that, we obtain the estimated size for the complete result. If the number of records of the query results is larger than this threshold, we split the query into total number of rows/threshold parts across the time dimension. Note that we assume the records to be split uniformly across the time dimension. hive.druid.http.numConnection 20 Number of connections used by the HTTP client. hive.druid.http.read.timeout PT1M Read timeout period for the HTTP client in ISO8601 format (for example P2W, P3M, PT1H30M, PT0.750S); the default is a period of 1 minute. hive.druid.sleep.time PT10S Sleep time between retries in ISO8601 format (for example P2W, P3M, PT1H30M, PT0.750S); the default is a period of 10 seconds. hive.druid.basePersistDirectory Local temporary directory used to persist intermediate indexing state; will default to the JVM system property java.io.tmpdir. hive.druid.storage.storageDirectory /druid/segments Druid deep storage location. hive.druid.metadata.base druid Default prefix for metadata tables hive.druid.metadata.db.type mysql Expects one of the patterns in [mysql, postgresql]. Type of the metadata database. hive.druid.metadata.username Username to connect to the metadata DB. hive.druid.metadata.password Password to connect to the metadata DB. hive.druid.metadata.uri URI to connect to the database (for example jdbc:mysql://hostname:port/DBName). hive.druid.working.directory /tmp/workingDirectory Default HDFS working directory used to store some intermediate metadata hive.druid.maxTries 5 Maximum number of retries before giving up hive.druid.passiveWaitTimeMs 30000 Wait time in ms; defaults to 30 seconds. hive.hbase.wal.enabled true Whether writes to HBase should be forced to the write-ahead log. Disabling this improves HBase write performance at the risk of lost writes in case of a crash. hive.hbase.generatehfiles false True when HBaseStorageHandler should generate hfiles instead of operating against the online table. hive.hbase.snapshot.name The HBase table snapshot name to use. hive.hbase.snapshot.restoredir /tmp The directory in which to restore the HBase table snapshot. hive.archive.enabled false Whether archiving operations are permitted hive.optimize.index.groupby false Whether to enable optimization of group-by queries using Aggregate indexes. hive.fetch.task.conversion more Expects one of [none, minimal, more]. Some select queries can be converted to a single FETCH task, minimizing latency. Currently the query should be single sourced, not have any subquery, and should not have any aggregations or distincts (which incur RS), lateral views or joins. 0. none : disable hive.fetch.task.conversion 1. minimal : SELECT STAR, FILTER on partition columns, LIMIT only 2. more : SELECT, FILTER, LIMIT only (support TABLESAMPLE and virtual columns) hive.fetch.task.conversion.threshold 1073741824 Input threshold for applying hive.fetch.task.conversion. If the target table is native, the input length is calculated by summation of file lengths. If it's not native, the storage handler for the table can optionally implement the org.apache.hadoop.hive.ql.metadata.InputEstimator interface. hive.fetch.task.aggr false Aggregation queries with no group-by clause (for example, select count(*) from src) execute final aggregations in a single reduce task. If this is set to true, Hive delegates the final aggregation stage to a fetch task, possibly decreasing the query time.
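The hive.txn.manager description earlier in this file enumerates everything required to turn on Hive transactions; gathered into one hedged sketch (the worker-thread count of 1 is an illustrative minimum, to be set on at least one metastore instance):

<property>
  <name>hive.txn.manager</name>
  <value>org.apache.hadoop.hive.ql.lockmgr.DbTxnManager</value>
</property>
<property>
  <name>hive.support.concurrency</name>
  <value>true</value>
</property>
<property>
  <name>hive.compactor.initiator.on</name>
  <value>true</value>
</property>
<property>
  <!-- Illustrative: at least one worker thread on some metastore instance. -->
  <name>hive.compactor.worker.threads</name>
  <value>1</value>
</property>
<property>
  <name>hive.exec.dynamic.partition.mode</name>
  <value>nonstrict</value>
</property>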
hive.compute.query.using.stats true When set to true, Hive will answer a few queries like count(1) purely using stats stored in the metastore. For basic stats collection, turn on the config hive.stats.autogather to true. For more advanced stats collection, you need to run analyze table queries. hive.fetch.output.serde org.apache.hadoop.hive.serde2.DelimitedJSONSerDe The SerDe used by FetchTask to serialize the fetch output. hive.cache.expr.evaluation true If true, the evaluation result of a deterministic expression referenced twice or more will be cached. For example, in a filter condition like '.. where key + 10 = 100 or key + 10 = 0' the expression 'key + 10' will be evaluated/cached once and reused for the following expression ('key + 10 = 0'). Currently, this is applied only to expressions in select or filter operators. hive.variable.substitute true This enables substitution using syntax like ${var} ${system:var} and ${env:var}. hive.variable.substitute.depth 40 The maximum replacements the substitution engine will do. hive.conf.validation true Enables type checking for registered Hive configurations hive.semantic.analyzer.hook hive.security.authorization.enabled false Enable or disable the Hive client authorization hive.security.authorization.manager org.apache.hadoop.hive.ql.security.authorization.plugin.sqlstd.SQLStdHiveAuthorizerFactory The Hive client authorization manager class name. The user defined authorization class should implement interface org.apache.hadoop.hive.ql.security.authorization.HiveAuthorizationProvider. hive.security.authenticator.manager org.apache.hadoop.hive.ql.security.HadoopDefaultAuthenticator The Hive client authenticator manager class name. The user defined authenticator should implement interface org.apache.hadoop.hive.ql.security.HiveAuthenticationProvider. hive.security.metastore.authorization.manager org.apache.hadoop.hive.ql.security.authorization.DefaultHiveMetastoreAuthorizationProvider Names of authorization manager classes (comma separated) to be used in the metastore for authorization. The user defined authorization class should implement interface org.apache.hadoop.hive.ql.security.authorization.HiveMetastoreAuthorizationProvider. All authorization manager classes have to successfully authorize the metastore API call for the command execution to be allowed. hive.security.metastore.authorization.auth.reads true If this is true, the metastore authorizer authorizes read actions on databases and tables hive.security.metastore.authenticator.manager org.apache.hadoop.hive.ql.security.HadoopDefaultMetastoreAuthenticator Authenticator manager class name to be used in the metastore for authentication. The user defined authenticator should implement interface org.apache.hadoop.hive.ql.security.HiveAuthenticationProvider. hive.security.authorization.createtable.user.grants The privileges automatically granted to some users whenever a table gets created. An example like "userX,userY:select;userZ:create" will grant select privilege to userX and userY, and grant create privilege to userZ whenever a new table is created. hive.security.authorization.createtable.group.grants The privileges automatically granted to some groups whenever a table gets created. An example like "groupX,groupY:select;groupZ:create" will grant select privilege to groupX and groupY, and grant create privilege to groupZ whenever a new table is created. hive.security.authorization.createtable.role.grants The privileges automatically granted to some roles whenever a table gets created.
An example like "roleX,roleY:select;roleZ:create" will grant select privilege to roleX and roleY, and grant create privilege to roleZ whenever a new table is created. hive.security.authorization.createtable.owner.grants The privileges automatically granted to the owner whenever a table gets created. An example like "select,drop" will grant select and drop privilege to the owner of the table. Note that the default gives the creator of a table no access to the table (but see HIVE-8067). hive.security.authorization.task.factory org.apache.hadoop.hive.ql.parse.authorization.HiveAuthorizationTaskFactoryImpl Authorization DDL task factory implementation hive.security.authorization.sqlstd.confwhitelist List of comma separated Java regexes. Configuration parameters that match these regexes can be modified by the user when SQL standard authorization is enabled. To get the default value, use the 'set <param>' command. Note that the hive.conf.restricted.list checks are still enforced after the white list check hive.security.authorization.sqlstd.confwhitelist.append List of comma separated Java regexes, to be appended to the list set in hive.security.authorization.sqlstd.confwhitelist. Using this list instead of updating the original list means that you can append to the defaults set by SQL standard authorization instead of replacing it entirely. hive.cli.print.header false Whether to print the names of the columns in query output. hive.cli.tez.session.async true Whether to start the Tez session in background when running CLI with Tez, allowing CLI to be available earlier. hive.error.on.empty.partition false Whether to throw an exception if dynamic partition insert generates empty results. hive.index.compact.file internal variable hive.index.blockfilter.file internal variable hive.index.compact.file.ignore.hdfs false When true, the HDFS location stored in the index file will be ignored at runtime. If the data got moved or the name of the cluster got changed, the index data should still be usable. hive.exim.uri.scheme.whitelist hdfs,pfile,file,s3,s3a A comma separated list of acceptable URI schemes for import and export. hive.exim.strict.repl.tables true Parameter that determines if 'regular' (non-replication) export dumps can be imported on to tables that are the target of replication. If this parameter is set, regular imports will check if the destination table (if it exists) has a 'repl.last.id' set on it. If so, it will fail. hive.repl.task.factory org.apache.hive.hcatalog.api.repl.exim.EximReplicationTaskFactory Parameter that can be used to override which ReplicationTaskFactory will be used to instantiate ReplicationTask events. Override for third party repl plugins hive.mapper.cannot.span.multiple.partitions false hive.rework.mapredwork false Should rework the mapred work or not. This was first introduced by SymlinkTextInputFormat to replace symlink files with real paths at compile time. hive.exec.concatenate.check.index true If this is set to true, Hive will throw an error when doing 'alter table tbl_name [partSpec] concatenate' on a table/partition that has indexes on it. The reason the user would want to set this to true is that it can help the user avoid handling all the index drop, recreation and rebuild work. This is very helpful for tables with thousands of partitions. hive.io.exception.handlers A list of io exception handler class names. This is used to construct a list of exception handlers to handle exceptions thrown by record readers hive.log4j.file Hive log4j configuration file.
If the property is not set, then logging will be initialized using hive-log4j2.properties found on the classpath. If the property is set, the value must be a valid URI (java.net.URI, e.g. "file:///tmp/my-logging.xml"), which you can then extract a URL from and pass to PropertyConfigurator.configure(URL). hive.exec.log4j.file Hive log4j configuration file for execution mode (sub command). If the property is not set, then logging will be initialized using hive-exec-log4j2.properties found on the classpath. If the property is set, the value must be a valid URI (java.net.URI, e.g. "file:///tmp/my-logging.xml"), which you can then extract a URL from and pass to PropertyConfigurator.configure(URL). hive.async.log.enabled true Whether to enable Log4j2's asynchronous logging. Asynchronous logging can give significant performance improvement as logging will be handled in a separate thread that uses the LMAX disruptor queue for buffering log messages. Refer https://logging.apache.org/log4j/2.x/manual/async.html for benefits and drawbacks. hive.log.explain.output false Whether to log explain output for every query. When enabled, will log EXPLAIN EXTENDED output for the query at INFO log4j log level. hive.explain.user true Whether to show explain result at user level. When enabled, will log EXPLAIN output for the query at user level. hive.autogen.columnalias.prefix.label _c String used as a prefix when auto generating column alias. By default the prefix label will be appended with a column position number to form the column alias. Auto generation would happen if an aggregate function is used in a select clause without an explicit alias. hive.autogen.columnalias.prefix.includefuncname false Whether to include function name in the column alias auto generated by Hive. hive.service.metrics.class org.apache.hadoop.hive.common.metrics.metrics2.CodahaleMetrics Expects one of [org.apache.hadoop.hive.common.metrics.metrics2.codahalemetrics, org.apache.hadoop.hive.common.metrics.legacymetrics]. Hive metrics subsystem implementation class. hive.service.metrics.reporter JSON_FILE, JMX Reporter type for metric class org.apache.hadoop.hive.common.metrics.metrics2.CodahaleMetrics, comma separated list of JMX, CONSOLE, JSON_FILE, HADOOP2 hive.service.metrics.file.location /tmp/report.json For metric class org.apache.hadoop.hive.common.metrics.metrics2.CodahaleMetrics JSON_FILE reporter, the location of the local JSON metrics file. This file will get overwritten at every interval. hive.service.metrics.file.frequency 5s Expects a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec), which is msec if not specified. For metric class org.apache.hadoop.hive.common.metrics.metrics2.CodahaleMetrics JSON_FILE reporter, the frequency of updating the JSON metrics file. hive.service.metrics.hadoop2.frequency 30s Expects a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec), which is sec if not specified. For metric class org.apache.hadoop.hive.common.metrics.metrics2.CodahaleMetrics HADOOP2 reporter, the frequency of updating the HADOOP2 metrics system. hive.service.metrics.hadoop2.component hive Component name to provide to Hadoop2 Metrics system. Ideally 'hivemetastore' for the MetaStore and 'hiveserver2' for HiveServer2. hive.exec.perf.logger org.apache.hadoop.hive.ql.log.PerfLogger The class responsible for logging client side performance metrics.
Must be a subclass of org.apache.hadoop.hive.ql.log.PerfLogger hive.start.cleanup.scratchdir false To clean up the Hive scratchdir when starting the Hive Server hive.scratchdir.lock false To hold a lock file in scratchdir to prevent it from being removed by cleardanglingscratchdir hive.insert.into.multilevel.dirs false Whether to insert into multilevel directories like "insert directory '/HIVEFT25686/chinna/' from table" hive.warehouse.subdir.inherit.perms true Set this to false if the table directories should be created with the permissions derived from dfs umask instead of inheriting the permission of the warehouse or database directory. hive.insert.into.external.tables true Whether insert into external tables is allowed hive.exec.temporary.table.storage default Expects one of [memory, ssd, default]. Define the storage policy for temporary tables. Choices are between memory, ssd and default hive.query.lifetime.hooks A comma separated list of hooks which implement QueryLifeTimeHook. These will be triggered before/after query compilation and before/after query execution, in the order specified hive.exec.driver.run.hooks A comma separated list of hooks which implement HiveDriverRunHook. Will be run at the beginning and end of Driver.run, these will be run in the order specified. hive.ddl.output.format The data format to use for DDL output. One of "text" (for human readable text) or "json" (for a json object). hive.entity.separator @ Separator used to construct names of tables and partitions. For example, dbname@tablename@partitionname hive.entity.capture.transform false Compiler to capture transform URI referred to in the query hive.display.partition.cols.separately true In older Hive versions (0.10 and earlier) no distinction was made between partition columns and non-partition columns while displaying columns in describe table. From 0.12 onwards, they are displayed separately. This flag will let you get the old behavior, if desired. See the test case in the patch for HIVE-6689. hive.ssl.protocol.blacklist SSLv2,SSLv3 SSL Versions to disable for all Hive Servers hive.server2.clear.dangling.scratchdir false Clear dangling scratch dir periodically in HS2 hive.server2.clear.dangling.scratchdir.interval 1800s Expects a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec), which is sec if not specified. Interval to clear dangling scratch dir periodically in HS2 hive.server2.sleep.interval.between.start.attempts 60s Expects a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec), which is msec if not specified. The time should be in between 0 msec (inclusive) and 9223372036854775807 msec (inclusive). Amount of time to sleep between HiveServer2 start attempts. Primarily meant for tests hive.server2.max.start.attempts 30 Expects value bigger than 0. Number of times HiveServer2 will attempt to start before exiting. The sleep interval between retries is determined by hive.server2.sleep.interval.between.start.attempts. The default of 30 will keep trying for 30 minutes. hive.server2.support.dynamic.service.discovery false Whether HiveServer2 supports dynamic service discovery for its clients. To support this, each instance of HiveServer2 currently uses ZooKeeper to register itself, when it is brought up. JDBC/ODBC clients should use the ZooKeeper ensemble: hive.zookeeper.quorum in their connection string. hive.server2.zookeeper.namespace hiveserver2 The parent node in ZooKeeper used by HiveServer2 when supporting dynamic service discovery.
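For illustration, enabling client-side authorization with the SQL-standard authorizer and authenticator classes listed earlier could be sketched as the following hive-site.xml fragment (all class names are the defaults quoted in the descriptions; whether to enable this at all depends on the deployment):
    <property>
      <name>hive.security.authorization.enabled</name>
      <value>true</value> <!-- default is false -->
    </property>
    <property>
      <name>hive.security.authorization.manager</name>
      <value>org.apache.hadoop.hive.ql.security.authorization.plugin.sqlstd.SQLStdHiveAuthorizerFactory</value>
    </property>
    <property>
      <name>hive.security.authenticator.manager</name>
      <value>org.apache.hadoop.hive.ql.security.HadoopDefaultAuthenticator</value>
    </property>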
hive.server2.zookeeper.publish.configs true Whether we should publish HiveServer2's configs to ZooKeeper. hive.server2.global.init.file.location ${env:HIVE_CONF_DIR} Either the location of a HS2 global init file or a directory containing a .hiverc file. If the property is set, the value must be a valid path to an init file or directory where the init file is located. hive.server2.transport.mode binary Expects one of [binary, http]. Transport mode of HiveServer2. hive.server2.thrift.bind.host Bind host on which to run the HiveServer2 Thrift service. hive.driver.parallel.compilation false Whether to enable parallel compilation of the queries between sessions and within the same session on HiveServer2. The default is false. hive.server2.compile.lock.timeout 0s Expects a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec), which is sec if not specified. Number of seconds a request will wait to acquire the compile lock before giving up. Setting it to 0s disables the timeout. hive.server2.parallel.ops.in.session true Whether to allow several parallel operations (such as SQL statements) in one session. hive.server2.webui.host 0.0.0.0 The host address the HiveServer2 WebUI will listen on hive.server2.webui.port 10002 The port the HiveServer2 WebUI will listen on. This can be set to 0 or a negative integer to disable the web UI hive.server2.webui.max.threads 50 The max HiveServer2 WebUI threads hive.server2.webui.use.ssl false Set this to true for using SSL encryption for HiveServer2 WebUI. hive.server2.webui.keystore.path SSL certificate keystore location for HiveServer2 WebUI. hive.server2.webui.keystore.password SSL certificate keystore password for HiveServer2 WebUI. hive.server2.webui.use.spnego false If true, the HiveServer2 WebUI will be secured with SPNEGO. Clients must authenticate with Kerberos. hive.server2.webui.spnego.keytab The path to the Kerberos Keytab file containing the HiveServer2 WebUI SPNEGO service principal. hive.server2.webui.spnego.principal HTTP/_HOST@EXAMPLE.COM The HiveServer2 WebUI SPNEGO service principal. The special string _HOST will be replaced automatically with the value of hive.server2.webui.host or the correct host name. hive.server2.webui.max.historic.queries 25 The maximum number of past queries to show in the HiveServer2 WebUI. hive.server2.tez.default.queues A list of comma separated values corresponding to YARN queues of the same name. When HiveServer2 is launched in Tez mode, this configuration needs to be set for multiple Tez sessions to run in parallel on the cluster. hive.server2.tez.sessions.per.default.queue 1 A positive integer that determines the number of Tez sessions that should be launched on each of the queues specified by "hive.server2.tez.default.queues". Determines the parallelism on each queue. hive.server2.tez.initialize.default.sessions false This flag is used in HiveServer2 to enable a user to use HiveServer2 without turning on Tez for HiveServer2. The user could potentially want to run queries over Tez without the pool of sessions. hive.server2.tez.session.lifetime 162h Expects a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec), which is hour if not specified. The lifetime of the Tez sessions launched by HS2 when default sessions are enabled. Set to 0 to disable session expiration. hive.server2.tez.session.lifetime.jitter 3h Expects a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec), which is hour if not specified.
The jitter for Tez session lifetime; prevents all the sessions from restarting at once. hive.server2.tez.sessions.init.threads 16 If hive.server2.tez.initialize.default.sessions is enabled, the maximum number of threads to use to initialize the default sessions. hive.server2.tez.sessions.restricted.configs The configuration settings that cannot be set when submitting jobs to HiveServer2. If any of these are set to values different from those in the server configuration, an exception will be thrown. hive.server2.tez.sessions.custom.queue.allowed true Expects one of [true, false, ignore]. Whether Tez session pool should allow submitting queries to custom queues. The options are true, false (error out), ignore (accept the query but ignore the queue setting). hive.server2.logging.operation.enabled true When true, HS2 will save operation logs and make them available for clients hive.server2.logging.operation.log.location ${system:java.io.tmpdir}/${system:user.name}/operation_logs Top level directory where operation logs are stored if logging functionality is enabled hive.server2.logging.operation.level EXECUTION Expects one of [none, execution, performance, verbose]. HS2 operation logging mode available to clients to be set at session level. For this to work, hive.server2.logging.operation.enabled should be set to true. NONE: Ignore any logging EXECUTION: Log completion of tasks PERFORMANCE: Execution + Performance logs VERBOSE: All logs hive.server2.metrics.enabled false Enable metrics on the HiveServer2. hive.server2.thrift.http.port 10001 Port number of HiveServer2 Thrift interface when hive.server2.transport.mode is 'http'. hive.server2.thrift.http.path cliservice Path component of URL endpoint when in HTTP mode. hive.server2.thrift.max.message.size 104857600 Maximum message size in bytes a HS2 server will accept. hive.server2.thrift.http.max.idle.time 1800s Expects a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec), which is msec if not specified. Maximum idle time for a connection on the server when in HTTP mode. hive.server2.thrift.http.worker.keepalive.time 60s Expects a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec), which is sec if not specified. Keepalive time for an idle http worker thread. When the number of workers exceeds min workers, excessive threads are killed after this time interval. hive.server2.thrift.http.request.header.size 6144 Request header size in bytes, when using HTTP transport mode. Jetty defaults used. hive.server2.thrift.http.response.header.size 6144 Response header size in bytes, when using HTTP transport mode. Jetty defaults used. hive.server2.thrift.http.cookie.auth.enabled true When true, HiveServer2 in HTTP transport mode, will use cookie based authentication mechanism. hive.server2.thrift.http.cookie.max.age 86400s Expects a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec), which is sec if not specified. Maximum age in seconds for server side cookie used by HS2 in HTTP mode. hive.server2.thrift.http.cookie.domain Domain for the HS2 generated cookies hive.server2.thrift.http.cookie.path Path for the HS2 generated cookies hive.server2.thrift.http.cookie.is.secure true Deprecated: Secure attribute of the HS2 generated cookie (this is automatically enabled for SSL enabled HiveServer2). hive.server2.thrift.http.cookie.is.httponly true HttpOnly attribute of the HS2 generated cookie. 
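Putting the HTTP-mode settings above together, a minimal sketch for switching HiveServer2 from binary to HTTP transport (using only the defaults quoted in the descriptions) would be:
    <property>
      <name>hive.server2.transport.mode</name>
      <value>http</value> <!-- default is binary -->
    </property>
    <property>
      <name>hive.server2.thrift.http.port</name>
      <value>10001</value>
    </property>
    <property>
      <name>hive.server2.thrift.http.path</name>
      <value>cliservice</value>
    </property>
Clients would then typically connect with a JDBC URL along the lines of jdbc:hive2://<host>:10001/default;transportMode=http;httpPath=cliservice, where the host name is a placeholder.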
hive.server2.thrift.port 10000 Port number of HiveServer2 Thrift interface when hive.server2.transport.mode is 'binary'. hive.server2.thrift.sasl.qop auth Expects one of [auth, auth-int, auth-conf]. Sasl QOP value; set it to one of the following values to enable higher levels of protection for HiveServer2 communication with clients. Setting hadoop.rpc.protection to a higher level than HiveServer2 does not make sense in most situations. HiveServer2 ignores hadoop.rpc.protection in favor of hive.server2.thrift.sasl.qop. "auth" - authentication only (default) "auth-int" - authentication plus integrity protection "auth-conf" - authentication plus integrity and confidentiality protection This is applicable only if HiveServer2 is configured to use Kerberos authentication. hive.server2.thrift.min.worker.threads 5 Minimum number of Thrift worker threads hive.server2.thrift.max.worker.threads 500 Maximum number of Thrift worker threads hive.server2.thrift.exponential.backoff.slot.length 100ms Expects a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec), which is msec if not specified. Binary exponential backoff slot time for Thrift clients during login to HiveServer2, for retries until hitting Thrift client timeout hive.server2.thrift.login.timeout 20s Expects a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec), which is sec if not specified. Timeout for Thrift clients during login to HiveServer2 hive.server2.thrift.worker.keepalive.time 60s Expects a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec), which is sec if not specified. Keepalive time (in seconds) for an idle worker thread. When the number of workers exceeds min workers, excessive threads are killed after this time interval. hive.server2.async.exec.threads 100 Number of threads in the async thread pool for HiveServer2 hive.server2.async.exec.shutdown.timeout 10s Expects a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec), which is sec if not specified. How long HiveServer2 shutdown will wait for async threads to terminate. hive.server2.async.exec.wait.queue.size 100 Size of the wait queue for async thread pool in HiveServer2. After hitting this limit, the async thread pool will reject new requests. hive.server2.async.exec.keepalive.time 10s Expects a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec), which is sec if not specified. Time that an idle HiveServer2 async thread (from the thread pool) will wait for a new task to arrive before terminating hive.server2.async.exec.async.compile false Whether to enable compiling async query asynchronously. If enabled, it is unknown if the query will have any resultset before compilation has completed. hive.server2.long.polling.timeout 5000ms Expects a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec), which is msec if not specified. Time that HiveServer2 will wait before responding to asynchronous calls that use long polling hive.session.impl.classname Classname for custom implementation of Hive session hive.session.impl.withugi.classname Classname for custom implementation of Hive session with UGI hive.server2.authentication NONE Expects one of [nosasl, none, ldap, kerberos, pam, custom]. Client authentication types.
NONE: no authentication check LDAP: LDAP/AD based authentication KERBEROS: Kerberos/GSSAPI authentication CUSTOM: Custom authentication provider (Use with property hive.server2.custom.authentication.class) PAM: Pluggable authentication module NOSASL: Raw transport hive.server2.allow.user.substitution true Allow alternate user to be specified as part of HiveServer2 open connection request. hive.server2.authentication.kerberos.keytab Kerberos keytab file for server principal hive.server2.authentication.kerberos.principal Kerberos server principal hive.server2.authentication.spnego.keytab Keytab file for SPNego principal, optional, typical value would look like /etc/security/keytabs/spnego.service.keytab. This keytab would be used by HiveServer2 when Kerberos security is enabled and HTTP transport mode is used. This needs to be set only if SPNEGO is to be used in authentication. SPNego authentication would be honored only if valid hive.server2.authentication.spnego.principal and hive.server2.authentication.spnego.keytab are specified. hive.server2.authentication.spnego.principal SPNego service principal, optional, typical value would look like HTTP/_HOST@EXAMPLE.COM. The SPNego service principal would be used by HiveServer2 when Kerberos security is enabled and HTTP transport mode is used. This needs to be set only if SPNEGO is to be used in authentication. hive.server2.authentication.ldap.url LDAP connection URL(s), this value could contain URLs to multiple LDAP server instances for HA, each LDAP URL is separated by a SPACE character. URLs are used in the order specified until a connection is successful. hive.server2.authentication.ldap.baseDN LDAP base DN hive.server2.authentication.ldap.Domain hive.server2.authentication.ldap.groupDNPattern COLON-separated list of patterns to use to find DNs for group entities in this directory. Use %s where the actual group name is to be substituted for. For example: CN=%s,CN=Groups,DC=subdomain,DC=domain,DC=com. hive.server2.authentication.ldap.groupFilter COMMA-separated list of LDAP Group names (short name not full DNs). For example: HiveAdmins,HadoopAdmins,Administrators hive.server2.authentication.ldap.userDNPattern COLON-separated list of patterns to use to find DNs for users in this directory. Use %s where the actual user name is to be substituted for. For example: CN=%s,CN=Users,DC=subdomain,DC=domain,DC=com. hive.server2.authentication.ldap.userFilter COMMA-separated list of LDAP usernames (just short names, not full DNs). For example: hiveuser,impalauser,hiveadmin,hadoopadmin hive.server2.authentication.ldap.guidKey uid LDAP attribute name whose values are unique in this LDAP server. For example: uid or CN. hive.server2.authentication.ldap.groupMembershipKey member LDAP attribute name on the group object that contains the list of distinguished names for the user, group, and contact objects that are members of the group. For example: member, uniqueMember or memberUid hive.server2.authentication.ldap.userMembershipKey LDAP attribute name on the user object that contains groups of which the user is a direct member, except for the primary group, which is represented by the primaryGroupId. For example: memberOf hive.server2.authentication.ldap.groupClassKey groupOfNames LDAP attribute name on the group entry that is to be used in LDAP group searches. For example: group, groupOfNames or groupOfUniqueNames. hive.server2.authentication.ldap.customLDAPQuery A full LDAP query that the LDAP Atn provider uses to execute against the LDAP Server.
If this query returns a null resultset, the LDAP provider fails the authentication request, and succeeds if the user is part of the resultset. For example: (&(objectClass=group)(objectClass=top)(instanceType=4)(cn=Domain*)) (&(objectClass=person)(|(sAMAccountName=admin)(|(memberOf=CN=Domain Admins,CN=Users,DC=domain,DC=com)(memberOf=CN=Administrators,CN=Builtin,DC=domain,DC=com)))) hive.server2.custom.authentication.class Custom authentication class. Used when property 'hive.server2.authentication' is set to 'CUSTOM'. Provided class must be a proper implementation of the interface org.apache.hive.service.auth.PasswdAuthenticationProvider. HiveServer2 will call its Authenticate(user, password) method to authenticate requests. The implementation may optionally implement Hadoop's org.apache.hadoop.conf.Configurable class to grab Hive's Configuration object. hive.server2.authentication.pam.services List of the underlying pam services that should be used when auth type is PAM. A file with the same name must exist in /etc/pam.d. hive.server2.enable.doAs true Setting this property to true will have HiveServer2 execute Hive operations as the user making the calls to it. hive.server2.table.type.mapping CLASSIC Expects one of [classic, hive]. This setting reflects how HiveServer2 will report the table types for JDBC and other client implementations that retrieve the available tables and supported table types HIVE : Exposes Hive's native table types like MANAGED_TABLE, EXTERNAL_TABLE, VIRTUAL_VIEW CLASSIC : More generic types like TABLE and VIEW hive.server2.session.hook hive.server2.use.SSL false Set this to true for using SSL encryption in HiveServer2. hive.server2.keystore.path SSL certificate keystore location. hive.server2.keystore.password SSL certificate keystore password. hive.server2.map.fair.scheduler.queue true If the YARN fair scheduler is configured and HiveServer2 is running in non-impersonation mode, this setting determines the user for fair scheduler queue mapping. If set to true (default), the logged-in user determines the fair scheduler queue for submitted jobs, so that map reduce resource usage can be tracked by user. If set to false, all Hive jobs go to the 'hive' user's queue. hive.server2.builtin.udf.whitelist Comma separated list of builtin udf names allowed in queries. An empty whitelist allows all builtin udfs to be executed. The udf black list takes precedence over the udf white list hive.server2.builtin.udf.blacklist Comma separated list of udf names. These udfs will not be allowed in queries. The udf black list takes precedence over the udf white list hive.allow.udf.load.on.demand false Whether to enable loading UDFs from the metastore on demand; this is mostly relevant for HS2 and was the default behavior before Hive 1.2. Off by default. hive.server2.session.check.interval 6h Expects a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec), which is msec if not specified. The time should be bigger than or equal to 3000 msec. The check interval for session/operation timeout, which can be disabled by setting to zero or negative value. hive.server2.close.session.on.disconnect true Session will be closed when connection is closed. Set this to false to have session outlive its parent connection. hive.server2.idle.session.timeout 7d Expects a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec), which is msec if not specified. Session will be closed when it's not accessed for this duration, which can be disabled by setting to zero or negative value.
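As an example of wiring the LDAP options above together, a hive-site.xml sketch might look like the following; the server URL and DN pattern are hypothetical placeholders, not values from this document:
    <property>
      <name>hive.server2.authentication</name>
      <value>LDAP</value>
    </property>
    <property>
      <name>hive.server2.authentication.ldap.url</name>
      <value>ldap://ldap.example.com</value> <!-- hypothetical LDAP server -->
    </property>
    <property>
      <name>hive.server2.authentication.ldap.userDNPattern</name>
      <value>CN=%s,CN=Users,DC=example,DC=com</value> <!-- hypothetical pattern; %s is replaced by the user name -->
    </property>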
hive.server2.idle.operation.timeout 5d Expects a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec), which is msec if not specified. Operation will be closed when it's not accessed for this duration of time, which can be disabled by setting to zero value. With positive value, it's checked for operations in terminal state only (FINISHED, CANCELED, CLOSED, ERROR). With negative value, it's checked for all of the operations regardless of state. hive.server2.idle.session.check.operation true Session will be considered to be idle only if there is no activity, and there is no pending operation. This setting takes effect only if session idle timeout (hive.server2.idle.session.timeout) and checking (hive.server2.session.check.interval) are enabled. hive.server2.thrift.client.retry.limit 1 Number of retries upon failure of Thrift HiveServer2 calls hive.server2.thrift.client.connect.retry.limit 1 Number of retries while opening a connection to HiveServer2 hive.server2.thrift.client.retry.delay.seconds 1s Expects a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec), which is sec if not specified. Number of seconds for the HiveServer2 thrift client to wait between consecutive connection attempts. Also specifies the time to wait between retrying thrift calls upon failures hive.server2.thrift.client.user anonymous Username to use against thrift client hive.server2.thrift.client.password anonymous Password to use against thrift client hive.server2.thrift.resultset.serialize.in.tasks false Whether we should serialize the Thrift structures used in JDBC ResultSet RPC in task nodes. We use SequenceFile and ThriftJDBCBinarySerDe to read and write the final results if this is true. hive.server2.thrift.resultset.max.fetch.size 10000 Max number of rows sent in one Fetch RPC call by the server to the client. hive.server2.thrift.resultset.default.fetch.size 1000 The number of rows sent in one Fetch RPC call by the server to the client, if not specified by the client. hive.server2.xsrf.filter.enabled false If enabled, HiveServer2 will block any requests made to it over http if an X-XSRF-HEADER header is not present hive.security.command.whitelist set,reset,dfs,add,list,delete,reload,compile Comma separated list of non-SQL Hive commands users are authorized to execute hive.server2.job.credential.provider.path If set, this configuration property should provide a comma-separated list of URLs that indicates the type and location of providers to be used by the hadoop credential provider API. It provides HiveServer2 the ability to provide job-specific credential providers for jobs run using MR and Spark execution engines. This functionality has not been tested against Tez. hive.mv.files.thread 15 Expects a byte size value with unit (blank for bytes, kb, mb, gb, tb, pb). The size should be in between 0Pb (inclusive) and 1Kb (inclusive). Number of threads used to move files in move task. Set it to 0 to disable multi-threaded file moves. This parameter is also used by MSCK to check tables. hive.load.dynamic.partitions.thread 15 Expects a byte size value with unit (blank for bytes, kb, mb, gb, tb, pb). The size should be in between 1 bytes (inclusive) and 1Kb (inclusive). Number of threads used to load dynamic partitions. hive.multi.insert.move.tasks.share.dependencies false If this is set, all move tasks for tables/partitions (not directories) at the end of a multi-insert query will only begin once the dependencies for all these move tasks have been met.
Advantages: If concurrency is enabled, the locks will only be released once the query has finished, so with this config enabled, the time when the table/partition is generated will be much closer to when the lock on it is released. Disadvantages: If concurrency is not enabled, with this disabled, the tables/partitions which are produced by this query and finish earlier will be available for querying much earlier. Since the locks are only released once the query finishes, this does not apply if concurrency is enabled. hive.exec.infer.bucket.sort false If this is set, when writing partitions, the metadata will include the bucketing/sorting properties with which the data was written if any (this will not overwrite the metadata inherited from the table if the table is bucketed/sorted) hive.exec.infer.bucket.sort.num.buckets.power.two false If this is set, when setting the number of reducers for the map reduce task which writes the final output files, it will choose a number which is a power of two, unless the user specifies the number of reducers to use using mapred.reduce.tasks. The number of reducers may be set to a power of two, only to be followed by a merge task, preventing anything from being inferred. With hive.exec.infer.bucket.sort set to true: Advantages: If this is not set, the number of buckets for partitions will seem arbitrary, which means that the number of mappers used for optimized joins, for example, will be very low. With this set, since the number of buckets used for any partition is a power of two, the number of mappers used for optimized joins will be the least number of buckets used by any partition being joined. Disadvantages: This may mean a much larger or much smaller number of reducers being used in the final map reduce job, e.g. if a job was originally going to take 257 reducers, it will now take 512 reducers, similarly if the max number of reducers is 511, and a job was going to use this many, it will now use 256 reducers. hive.optimize.listbucketing false Enable list bucketing optimizer. Default value is false so that we disable it by default. hive.server.read.socket.timeout 10s Expects a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec), which is sec if not specified. Timeout for the HiveServer to close the connection if no response from the client. By default, 10 seconds. hive.server.tcp.keepalive true Whether to enable TCP keepalive for the Hive Server. Keepalive will prevent accumulation of half-open connections. hive.decode.partition.name false Whether to show the unquoted partition names in query results. hive.execution.engine mr Expects one of [mr, tez, spark]. Chooses execution engine. Options are: mr (Map reduce, default), tez, spark. While MR remains the default engine for historical reasons, it is itself a historical engine and is deprecated in the Hive 2 line. It may be removed without further warning. hive.execution.mode container Expects one of [container, llap]. Chooses whether query fragments will run in container or in llap hive.jar.directory This is the location Hive in Tez mode will look in to find a site-wide installed Hive instance. hive.user.install.directory /user/ If hive (in tez mode only) cannot find a usable hive jar in "hive.jar.directory", it will upload the hive jar to "hive.user.install.directory/user.name" and use it to run queries. hive.vectorized.execution.enabled false This flag should be set to true to enable vectorized mode of query execution. The default value is false.
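To make the engine choice above concrete, a minimal hive-site.xml sketch selecting Tez in container mode would be the following; both values come from the option lists above, and mr remains the deprecated default:
    <property>
      <name>hive.execution.engine</name>
      <value>tez</value> <!-- mr (deprecated default), tez, or spark -->
    </property>
    <property>
      <name>hive.execution.mode</name>
      <value>container</value> <!-- container (default) or llap -->
    </property>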
hive.vectorized.execution.reduce.enabled true This flag should be set to true to enable vectorized mode of the reduce-side of query execution. The default value is true. hive.vectorized.execution.reduce.groupby.enabled true This flag should be set to true to enable vectorized mode of the reduce-side GROUP BY query execution. The default value is true. hive.vectorized.execution.mapjoin.native.enabled true This flag should be set to true to enable native (i.e. non-pass through) vectorization of queries using MapJoin. The default value is true. hive.vectorized.execution.mapjoin.native.multikey.only.enabled false This flag should be set to true to restrict use of native vector map join hash tables to the MultiKey in queries using MapJoin. The default value is false. hive.vectorized.execution.mapjoin.minmax.enabled false This flag should be set to true to enable vector map join hash tables to use min / max filtering for integer join queries using MapJoin. The default value is false. hive.vectorized.execution.mapjoin.overflow.repeated.threshold -1 The number of small table rows for a match in vector map join hash tables where we use the repeated field optimization in overflow vectorized row batch for join queries using MapJoin. A value of -1 means do use the join result optimization. Otherwise, threshold value can be 0 to maximum integer. hive.vectorized.execution.mapjoin.native.fast.hashtable.enabled false This flag should be set to true to enable use of native fast vector map join hash tables in queries using MapJoin. The default value is false. hive.vectorized.groupby.checkinterval 100000 Number of entries added to the group by aggregation hash before a recomputation of average entry size is performed. hive.vectorized.groupby.maxentries 1000000 Max number of entries in the vector group by aggregation hashtables. Exceeding this will trigger a flush regardless of the memory pressure condition. hive.vectorized.groupby.flush.percent 0.1 Percent of entries in the group by aggregation hash flushed when the memory threshold is exceeded. hive.vectorized.execution.reducesink.new.enabled true This flag should be set to true to enable the new vectorization of queries using ReduceSink. The default value is true. hive.vectorized.use.vectorized.input.format true This flag should be set to true to enable vectorizing with vectorized input file format capable SerDe. The default value is true. hive.vectorized.use.vector.serde.deserialize true This flag should be set to true to enable vectorizing rows using vector deserialize. The default value is true. hive.vectorized.use.row.serde.deserialize false This flag should be set to true to enable vectorizing using row deserialize. The default value is false. hive.vectorized.adaptor.usage.mode all Expects one of [none, chosen, all]. Specifies the extent to which the VectorUDFAdaptor will be used for UDFs that do not have a corresponding vectorized class. 0. none : disable any usage of VectorUDFAdaptor 1. chosen : use VectorUDFAdaptor for a small set of UDFs that were chosen for good performance 2. all : use VectorUDFAdaptor for all UDFs hive.typecheck.on.insert true This property has been extended to control whether to check, convert, and normalize partition value to conform to its column type in partition operations including but not limited to insert, such as alter, describe etc. hive.hadoop.classpath For Windows OS, we need to pass the HIVE_HADOOP_CLASSPATH Java parameter while starting HiveServer2 using "-hiveconf hive.hadoop.classpath=%HIVE_LIB%".
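For example, turning on vectorized query execution per the flags above could be sketched as:
    <property>
      <name>hive.vectorized.execution.enabled</name>
      <value>true</value> <!-- default is false -->
    </property>
    <property>
      <name>hive.vectorized.execution.reduce.enabled</name>
      <value>true</value> <!-- already the default; shown for completeness -->
    </property>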
hive.rpc.query.plan false Whether to send the query plan via local resource or RPC hive.compute.splits.in.am true Whether to generate the splits locally or in the AM (tez only) hive.tez.input.generate.consistent.splits true Whether to generate consistent split locations when generating splits in the AM hive.prewarm.enabled false Enables container prewarm for Tez/Spark (Hadoop 2 only) hive.prewarm.numcontainers 10 Controls the number of containers to prewarm for Tez/Spark (Hadoop 2 only) hive.stageid.rearrange none Expects one of [none, idonly, traverse, execution]. hive.explain.dependency.append.tasktype false hive.counters.group.name HIVE The name of counter group for internal Hive variables (CREATED_FILE, FATAL_ERROR, etc.) hive.support.quoted.identifiers column Expects one of [none, column]. Whether to use quoted identifier. 'none' or 'column' can be used. none: default(past) behavior. Implies only alphaNumeric and underscore are valid characters in identifiers. column: implies column names can contain any character. hive.support.special.characters.tablename true This flag should be set to true to enable support for special characters in table names. When it is set to false, only [a-zA-Z_0-9]+ are supported. The only supported special character right now is '/'. This flag applies only to quoted table names. The default value is true. hive.users.in.admin.role Comma separated list of users who are in admin role for bootstrapping. More users can be added in ADMIN role later. hive.compat 0.12 Enable (configurable) deprecated behaviors by setting desired level of backward compatibility. Setting to 0.12: Maintains division behavior: int / int = double hive.convert.join.bucket.mapjoin.tez false Whether joins can be automatically converted to bucket map joins in hive when tez is used as the execution engine. hive.exec.check.crossproducts true Check if a plan contains a Cross Product. If there is one, output a warning to the Session's console. hive.localize.resource.wait.interval 5000ms Expects a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec), which is msec if not specified. Time to wait for another thread to localize the same resource for hive-tez. hive.localize.resource.num.wait.attempts 5 The number of attempts waiting for localizing a resource in hive-tez. hive.tez.auto.reducer.parallelism false Turn on Tez' auto reducer parallelism feature. When enabled, Hive will still estimate data sizes and set parallelism estimates. Tez will sample source vertices' output sizes and adjust the estimates at runtime as necessary. hive.tez.max.partition.factor 2.0 When auto reducer parallelism is enabled this factor will be used to over-partition data in shuffle edges. hive.tez.min.partition.factor 0.25 When auto reducer parallelism is enabled this factor will be used to put a lower limit to the number of reducers that tez specifies. hive.tez.bucket.pruning false When pruning is enabled, filters on bucket columns will be processed by filtering the splits against a bitset of included buckets. This needs predicates produced by hive.optimize.ppd and hive.optimize.index.filters. hive.tez.bucket.pruning.compat true When pruning is enabled, handle possibly broken inserts due to negative hashcodes. This occasionally doubles the data scan cost, but is default enabled for safety hive.tez.dynamic.partition.pruning true When dynamic pruning is enabled, joins on partition keys will be processed by sending events from the processing vertices to the Tez application master. 
These events will be used to prune unnecessary partitions. hive.tez.dynamic.partition.pruning.max.event.size 1048576 Maximum size of events sent by processors in dynamic pruning. If this size is crossed no pruning will take place. hive.tez.dynamic.partition.pruning.max.data.size 104857600 Maximum total data size of events in dynamic pruning. hive.tez.dynamic.semijoin.reduction true When dynamic semijoin is enabled, shuffle joins will perform a leaky semijoin before shuffle. This requires hive.tez.dynamic.partition.pruning to be enabled. hive.tez.min.bloom.filter.entries 1000000 Bloom filter should be of at least a certain size to be effective hive.tez.max.bloom.filter.entries 100000000 Bloom filter should be of at most a certain size to be effective hive.tez.bloom.filter.factor 2.0 Bloom filter should be a multiple of this factor with nDV hive.tez.bigtable.minsize.semijoin.reduction 1000000 Big table for runtime filtering should be of at least this size hive.tez.dynamic.semijoin.reduction.threshold 0.5 Only perform semijoin optimization if the estimated benefit is at or above this fraction of the target table hive.tez.smb.number.waves 0.5 The number of waves in which to run the SMB join. Account for cluster being occupied. Ideally should be 1 wave. hive.tez.exec.print.summary false Display breakdown of execution steps, for every query executed by the shell. hive.tez.exec.inplace.progress true Updates tez job execution progress in-place in the terminal when hive-cli is used. hive.server2.in.place.progress true Allows hive server 2 to send progress bar update information. This is currently available only if the execution engine is tez. hive.spark.exec.inplace.progress true Updates spark job execution progress in-place in the terminal. hive.tez.container.max.java.heap.fraction 0.8 This is to override the tez setting with the same name hive.tez.task.scale.memory.reserve-fraction.min 0.3 This is to override the tez setting tez.task.scale.memory.reserve-fraction hive.tez.task.scale.memory.reserve.fraction.max 0.5 The maximum fraction of JVM memory which Tez will reserve for the processor hive.tez.task.scale.memory.reserve.fraction -1.0 The customized fraction of JVM memory which Tez will reserve for the processor hive.llap.io.enabled Whether the LLAP IO layer is enabled. hive.llap.io.nonvector.wrapper.enabled true Whether the LLAP IO layer is enabled for non-vectorized queries that read inputs that can be vectorized hive.llap.io.memory.mode cache Expects one of [cache, none]. LLAP IO memory usage; 'cache' (the default) uses data and metadata cache with a custom off-heap allocator, 'none' doesn't use either (this mode may result in significant performance degradation) hive.llap.io.allocator.alloc.min 256Kb Expects a byte size value with unit (blank for bytes, kb, mb, gb, tb, pb). Minimum allocation possible from LLAP buddy allocator. Allocations below that are padded to minimum allocation. For ORC, should generally be the same as the expected compression buffer size, or next lowest power of 2. Must be a power of 2. hive.llap.io.allocator.alloc.max 16Mb Expects a byte size value with unit (blank for bytes, kb, mb, gb, tb, pb). Maximum allocation possible from LLAP buddy allocator. For ORC, should be as large as the largest expected ORC compression buffer size. Must be a power of 2. hive.llap.io.metadata.fraction 0.1 Temporary setting for on-heap metadata cache fraction of xmx, set to avoid potential heap problems on very large datasets when on-heap metadata cache takes over everything.
-1 manages metadata and data together (which is more flexible). This setting will be removed (in effect become -1) once ORC metadata cache is moved off-heap. hive.llap.io.allocator.arena.count 8 Arena count for LLAP low-level cache; cache will be allocated in the steps of (size/arena_count) bytes. This size must be <= 1Gb and >= max allocation; if it is not the case, an adjusted size will be used. Using powers of 2 is recommended. hive.llap.io.memory.size 1Gb Expects a byte size value with unit (blank for bytes, kb, mb, gb, tb, pb). Maximum size for IO allocator or ORC low-level cache. hive.llap.io.allocator.direct true Whether ORC low-level cache should use direct allocation. hive.llap.io.allocator.mmap false Whether ORC low-level cache should use memory mapped allocation (direct I/O). This is recommended to be used alongside NVDIMM (DAX) or NVMe flash storage. hive.llap.io.allocator.mmap.path /tmp Expects a writable directory on the local filesystem. The directory location for mapping NVDIMM/NVMe flash storage into the ORC low-level cache. hive.llap.io.use.lrfu true Whether ORC low-level cache should use LRFU cache policy instead of default (FIFO). hive.llap.io.lrfu.lambda 0.01 Lambda for ORC low-level cache LRFU cache policy. Must be in [0, 1]. 0 makes LRFU behave like LFU, 1 makes it behave like LRU, values in between balance accordingly. hive.llap.cache.allow.synthetic.fileid false Whether LLAP cache should use synthetic file ID if real one is not available. Systems like HDFS, Isilon, etc. provide a unique file/inode ID. On other FSes (e.g. local FS), the cache would not work by default because LLAP is unable to uniquely track the files; enabling this setting allows LLAP to generate file ID from the path, size and modification time, which is almost certain to identify file uniquely. However, if you use a FS without file IDs and rewrite files a lot (or are paranoid), you might want to avoid this setting. hive.llap.orc.gap.cache true Whether LLAP cache for ORC should remember gaps in ORC compression buffer read estimates, to avoid re-reading the data that was read once and discarded because it is unneeded. This is only necessary for ORC files written before HIVE-9660. hive.llap.io.use.fileid.path true Whether LLAP should use fileId (inode)-based path to ensure better consistency for the cases of file overwrites. This is supported on HDFS. hive.llap.io.encode.enabled true Whether LLAP should try to re-encode and cache data for non-ORC formats. This is used on LLAP Server side to determine if the infrastructure for that is initialized. hive.llap.io.encode.formats org.apache.hadoop.mapred.TextInputFormat, The table input formats for which LLAP IO should re-encode and cache data. Comma-separated list. hive.llap.io.encode.alloc.size 256Kb Expects a byte size value with unit (blank for bytes, kb, mb, gb, tb, pb). Allocation size for the buffers used to cache encoded data from non-ORC files. Must be a power of two between hive.llap.io.allocator.alloc.min and hive.llap.io.allocator.alloc.max. hive.llap.io.encode.vector.serde.enabled true Whether LLAP should use vectorized SerDe reader to read text data when re-encoding. hive.llap.io.encode.vector.serde.async.enabled true Whether LLAP should use async mode in vectorized SerDe reader to read text data. hive.llap.io.encode.slice.row.count 100000 Row count to use to separate cache slices when reading encoded data from row-based inputs into LLAP cache, if this feature is enabled.
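A sketch of an LLAP IO cache configuration using only the defaults quoted above; whether these sizes are appropriate depends entirely on the daemon's memory budget:
    <property>
      <name>hive.llap.io.enabled</name>
      <value>true</value>
    </property>
    <property>
      <name>hive.llap.io.memory.size</name>
      <value>1Gb</value>
    </property>
    <property>
      <name>hive.llap.io.allocator.alloc.min</name>
      <value>256Kb</value> <!-- must be a power of 2 -->
    </property>
    <property>
      <name>hive.llap.io.allocator.alloc.max</name>
      <value>16Mb</value> <!-- must be a power of 2 -->
    </property>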
hive.llap.io.encode.slice.lrr true Whether to separate cache slices when reading encoded data from text inputs via the MR LineRecordReader into LLAP cache, if this feature is enabled. Safety flag. hive.llap.io.orc.time.counters true Whether to enable time counters for LLAP IO layer (time spent in HDFS, etc.) hive.llap.auto.allow.uber false Whether or not to allow the planner to run vertices in the AM. hive.llap.auto.enforce.tree true Enforce that all parents are in llap, before considering vertex hive.llap.auto.enforce.vectorized true Enforce that inputs are vectorized, before considering vertex hive.llap.auto.enforce.stats true Enforce that col stats are available, before considering vertex hive.llap.auto.max.input.size 10737418240 Check input size, before considering vertex (-1 disables check) hive.llap.auto.max.output.size 1073741824 Check output size, before considering vertex (-1 disables check) hive.llap.skip.compile.udf.check false Whether to skip the compile-time check for non-built-in UDFs when deciding whether to execute tasks in LLAP. Skipping the check allows executing UDFs from pre-localized jars in LLAP; if the jars are not pre-localized, the UDFs will simply fail to load. hive.llap.allow.permanent.fns true Whether LLAP decider should allow permanent UDFs. hive.llap.execution.mode none Expects one of [auto, none, all, map, only]. Chooses whether query fragments will run in container or in llap hive.llap.object.cache.enabled true Cache objects (plans, hashtables, etc) in llap hive.llap.io.decoding.metrics.percentiles.intervals 30 Comma-delimited set of integers denoting the desired rollover intervals (in seconds) for percentile latency metrics on the LLAP daemon IO decoding time. hive.llap.queue.metrics.percentiles.intervals hive.llap.io.threadpool.size 10 Specify the number of threads to use for the low-level IO thread pool. hive.llap.daemon.service.principal The name of the LLAP daemon's service principal. hive.llap.daemon.keytab.file The path to the Kerberos Keytab file containing the LLAP daemon's service principal. hive.llap.zk.sm.principal The name of the principal to use to talk to ZooKeeper for ZooKeeper SecretManager. hive.llap.zk.sm.keytab.file The path to the Kerberos Keytab file containing the principal to use to talk to ZooKeeper for ZooKeeper SecretManager. hive.llap.webui.spnego.keytab The path to the Kerberos Keytab file containing the LLAP WebUI SPNEGO principal. Typical value would look like /etc/security/keytabs/spnego.service.keytab. hive.llap.webui.spnego.principal The LLAP WebUI SPNEGO service principal. Configured similarly to hive.server2.webui.spnego.principal hive.llap.task.principal The name of the principal to use to run tasks. By default, the clients are required to provide tokens to access HDFS/etc. hive.llap.task.keytab.file The path to the Kerberos Keytab file containing the principal to use to run tasks. By default, the clients are required to provide tokens to access HDFS/etc. hive.llap.zk.sm.connectionString ZooKeeper connection string for ZooKeeper SecretManager. hive.llap.zk.registry.user In the LLAP ZooKeeper-based registry, specifies the username in the Zookeeper path. This should be the hive user or whichever user is running the LLAP daemon. hive.llap.zk.registry.namespace In the LLAP ZooKeeper-based registry, overrides the ZK path namespace. Note that using this makes the path management (e.g. setting correct ACLs) your responsibility. hive.llap.daemon.acl * The ACL for LLAP daemon. hive.llap.daemon.acl.blocked The deny ACL for LLAP daemon.
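For a secured deployment, the principal, keytab and ACL settings above might be combined as in the following sketch; the principal and keytab path are hypothetical examples, not values from this document:
    <property>
      <name>hive.llap.daemon.service.principal</name>
      <value>llap/_HOST@EXAMPLE.COM</value> <!-- hypothetical principal -->
    </property>
    <property>
      <name>hive.llap.daemon.keytab.file</name>
      <value>/etc/security/keytabs/llap.service.keytab</value> <!-- hypothetical path -->
    </property>
    <property>
      <name>hive.llap.daemon.acl</name>
      <value>hive</value> <!-- the default * allows everyone -->
    </property>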
hive.llap.management.acl * The ACL for LLAP daemon management. hive.llap.management.acl.blocked The deny ACL for LLAP daemon management. hive.llap.remote.token.requires.signing true Expects one of [false, except_llap_owner, true]. Whether the token returned from LLAP management API should require fragment signing. True by default; can be disabled to allow CLI to get tokens from LLAP in a secure cluster by setting it to false or 'except_llap_owner' (the latter returns such tokens to everyone except the user LLAP cluster is authenticating under). hive.llap.daemon.delegation.token.lifetime 14d Expects a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec), which is sec if not specified. LLAP delegation token lifetime, in seconds if specified without a unit. hive.llap.management.rpc.port 15004 RPC port for LLAP daemon management service. hive.llap.auto.auth false Whether or not to set Hadoop configs to enable auth in LLAP web app. hive.llap.daemon.rpc.num.handlers 5 Number of RPC handlers for LLAP daemon. hive.llap.daemon.work.dirs Working directories for the daemon. This should not be set if running as a YARN application via Slider. It must be set when not running via Slider on YARN. If the value is set when running as a Slider YARN application, the specified value will be used. hive.llap.daemon.yarn.shuffle.port 15551 YARN shuffle port for LLAP-daemon-hosted shuffle. hive.llap.daemon.yarn.container.mb -1 llap server yarn container size in MB. Used in LlapServiceDriver and package.py hive.llap.daemon.queue.name Queue name within which the llap slider application will run. Used in LlapServiceDriver and package.py hive.llap.daemon.container.id ContainerId of a running LlapDaemon. Used to publish to the registry hive.llap.daemon.nm.address NM Address host:rpcPort for the NodeManager on which the instance of the daemon is running. Published to the llap registry. Should never be set by users hive.llap.daemon.shuffle.dir.watcher.enabled false TODO doc hive.llap.daemon.am.liveness.heartbeat.interval.ms 10000ms Expects a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec), which is msec if not specified. Tez AM-LLAP heartbeat interval (milliseconds). This needs to be below the task timeout interval, but otherwise as high as possible to avoid unnecessary traffic. hive.llap.am.liveness.connection.timeout.ms 10000ms Expects a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec), which is msec if not specified. Amount of time to wait on connection failures to the AM from an LLAP daemon before considering the AM to be dead. hive.llap.am.use.fqdn false Whether to use FQDN of the AM machine when submitting work to LLAP. hive.llap.am.liveness.connection.sleep.between.retries.ms 2000ms Expects a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec), which is msec if not specified. Sleep duration while waiting to retry connection failures to the AM from the daemon for the general keep-alive thread (milliseconds). hive.llap.task.scheduler.timeout.seconds 60s Expects a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec), which is sec if not specified. Amount of time to wait before failing the query when there are no llap daemons running (alive) in the cluster. hive.llap.daemon.num.executors 4 Number of executors to use in LLAP daemon; essentially, the number of tasks that can be executed in parallel.
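The AM liveness settings above interact: the heartbeat interval must stay below the task timeout, and the scheduler timeout bounds how long a query waits for live daemons. A sketch restating the defaults quoted above:
    <property>
      <name>hive.llap.daemon.am.liveness.heartbeat.interval.ms</name>
      <value>10000ms</value> <!-- keep below the task timeout interval -->
    </property>
    <property>
      <name>hive.llap.task.scheduler.timeout.seconds</name>
      <value>60s</value> <!-- fail the query if no llap daemons are alive for this long -->
    </property>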
hive.llap.daemon.am-reporter.max.threads 4 Maximum number of threads to be used for AM reporter. If this is lower than number of executors in llap daemon, it would be set to number of executors at runtime. hive.llap.daemon.rpc.port 0 The LLAP daemon RPC port. hive.llap.daemon.memory.per.instance.mb 4096 The total amount of memory to use for the executors inside LLAP (in megabytes). hive.llap.daemon.xmx.headroom 5% The total amount of heap memory set aside by LLAP and not used by the executors. Can be specified as size (e.g. '512Mb'), or percentage (e.g. '5%'). Note that the latter is derived from the total daemon XMX, which can be different from the total executor memory if the cache is on-heap; although that's not the default configuration. hive.llap.daemon.vcpus.per.instance 4 The total number of vcpus to use for the executors inside LLAP. hive.llap.daemon.num.file.cleaner.threads 1 Number of file cleaner threads in LLAP. hive.llap.file.cleanup.delay.seconds 300s Expects a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec), which is sec if not specified. How long to delay before cleaning up query files in LLAP (in seconds, for debugging). hive.llap.daemon.service.hosts Explicitly specified hosts to use for LLAP scheduling. Useful for testing. By default, YARN registry is used. hive.llap.daemon.service.refresh.interval.sec 60s Expects a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec), which is sec if not specified. LLAP YARN registry service list refresh delay, in seconds. hive.llap.daemon.communicator.num.threads 10 Number of threads to use in LLAP task communicator in Tez AM. hive.llap.daemon.download.permanent.fns false Whether LLAP daemon should localize the resources for permanent UDFs. hive.llap.task.scheduler.node.reenable.min.timeout.ms 200ms Expects a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec), which is msec if not specified. Minimum time after which a previously disabled node will be re-enabled for scheduling, in milliseconds. This may be modified by an exponential back-off if failures persist. hive.llap.task.scheduler.node.reenable.max.timeout.ms 10000ms Expects a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec), which is msec if not specified. Maximum time after which a previously disabled node will be re-enabled for scheduling, in milliseconds. This may be modified by an exponential back-off if failures persist. hive.llap.task.scheduler.node.disable.backoff.factor 1.5 Backoff factor on successive blacklists of a node due to some failures. Blacklist times start at the min timeout and go up to the max timeout based on this backoff factor. hive.llap.task.scheduler.num.schedulable.tasks.per.node 0 The number of tasks the AM TaskScheduler will try allocating per node. 0 indicates that this should be picked up from the Registry. -1 indicates unlimited capacity; positive values indicate a specific bound. hive.llap.task.scheduler.locality.delay 0ms Expects a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec), which is msec if not specified. The time should be in between -1 msec (inclusive) and 9223372036854775807 msec (inclusive). Amount of time to wait before allocating a request which contains location information, to a location other than the ones requested. Set to -1 for an infinite delay, 0for no delay. 
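As an illustration of how the per-daemon resource properties above fit together, the executor memory, vcpus and service host list might be overridden as follows (the values and the @llap0 application name are assumptions, not defaults):

<property>
  <name>hive.llap.daemon.memory.per.instance.mb</name>
  <value>8192</value>
</property>
<property>
  <name>hive.llap.daemon.vcpus.per.instance</name>
  <value>4</value>
</property>
<property>
  <name>hive.llap.daemon.service.hosts</name>
  <value>@llap0</value>
</property>
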
hive.llap.daemon.task.preemption.metrics.intervals 30,60,300 Comma-delimited set of integers denoting the desired rollover intervals (in seconds) for percentile latency metrics. Used by LLAP daemon task scheduler metrics for time taken to kill task (due to pre-emption) and useful time wasted by the task that is about to be preempted. hive.llap.daemon.task.scheduler.wait.queue.size 10 LLAP scheduler maximum queue size. hive.llap.daemon.wait.queue.comparator.class.name org.apache.hadoop.hive.llap.daemon.impl.comparator.ShortestJobFirstComparator The priority comparator to use for LLAP scheduler prioroty queue. The built-in options are org.apache.hadoop.hive.llap.daemon.impl.comparator.ShortestJobFirstComparator and .....FirstInFirstOutComparator hive.llap.daemon.task.scheduler.enable.preemption true Whether non-finishable running tasks (e.g. a reducer waiting for inputs) should be preempted by finishable tasks inside LLAP scheduler. hive.llap.task.communicator.connection.timeout.ms 16000ms Expects a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec), which is msec if not specified. Connection timeout (in milliseconds) before a failure to an LLAP daemon from Tez AM. hive.llap.task.communicator.listener.thread-count 30 The number of task communicator listener threads. hive.llap.task.communicator.connection.sleep.between.retries.ms 2000ms Expects a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec), which is msec if not specified. Sleep duration (in milliseconds) to wait before retrying on error when obtaining a connection to LLAP daemon from Tez AM. hive.llap.daemon.web.port 15002 LLAP daemon web UI port. hive.llap.daemon.web.ssl false Whether LLAP daemon web UI should use SSL. hive.llap.client.consistent.splits false Whether to setup split locations to match nodes on which llap daemons are running, instead of using the locations provided by the split itself. If there is no llap daemon running, fall back to locations provided by the split. This is effective only if hive.execution.mode is llap hive.llap.validate.acls true Whether LLAP should reject permissive ACLs in some cases (e.g. its own management protocol or ZK paths), similar to how ssh refuses a key with bad access permissions. hive.llap.daemon.output.service.port 15003 LLAP daemon output service port hive.llap.daemon.output.stream.timeout 120s Expects a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec), which is sec if not specified. The timeout for the client to connect to LLAP output service and start the fragment output after sending the fragment. The fragment will fail if its output is not claimed. hive.llap.daemon.output.service.send.buffer.size 131072 Send buffer size to be used by LLAP daemon output service hive.llap.daemon.output.service.max.pending.writes 8 Maximum number of queued writes allowed per connection when sending data via the LLAP output service to external clients. hive.llap.enable.grace.join.in.llap false Override if grace join should be allowed to run in llap. hive.llap.hs2.coordinator.enabled true Whether to create the LLAP coordinator; since execution engine and container vs llap settings are both coming from job configs, we don't know at start whether this should be created. Default true. hive.llap.daemon.logger query-routing Expects one of [query-routing, rfa, console]. logger used for llap-daemons. hive.spark.use.op.stats true Whether to use operator stats to determine reducer parallelism for Hive on Spark. 
If this is false, Hive will use source table stats to determine reducer parallelism for all first level reduce tasks, and the maximum reducer parallelism from all parents for all the rest (second level and onward) reducer tasks. hive.spark.use.file.size.for.mapjoin false If this is set to true, mapjoin optimization in Hive/Spark will use source file sizes associated with TableScan operator on the root of operator tree, instead of using operator statistics. hive.spark.client.future.timeout 60s Expects a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec), which is sec if not specified. Timeout for requests from Hive client to remote Spark driver. hive.spark.job.monitor.timeout 60s Expects a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec), which is sec if not specified. Timeout for job monitor to get Spark job state. hive.spark.client.connect.timeout 1000ms Expects a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec), which is msec if not specified. Timeout for remote Spark driver in connecting back to Hive client. hive.spark.client.server.connect.timeout 90000ms Expects a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec), which is msec if not specified. Timeout for handshake between Hive client and remote Spark driver. Checked by both processes. hive.spark.client.secret.bits 256 Number of bits of randomness in the generated secret for communication between Hive client and remote Spark driver. Rounded down to the nearest multiple of 8. hive.spark.client.rpc.threads 8 Maximum number of threads for remote Spark driver's RPC event loop. hive.spark.client.rpc.max.size 52428800 Maximum message size in bytes for communication between Hive client and remote Spark driver. Default is 50MB. hive.spark.client.channel.log.level Channel logging level for remote Spark driver. One of {DEBUG, ERROR, INFO, TRACE, WARN}. hive.spark.client.rpc.sasl.mechanisms DIGEST-MD5 Name of the SASL mechanism to use for authentication. hive.spark.client.rpc.server.address The server address of HiveServer2 host to be used for communication between Hive client and remote Spark driver. Default is empty, which means the address will be determined in the same way as for hive.server2.thrift.bind.host. This is only necessary if the host has multiple network addresses and if a different network address other than hive.server2.thrift.bind.host is to be used. hive.spark.client.rpc.server.port A list of port ranges which can be used by RPC server with the format of 49152-49222,49228 and a random one is selected from the list. Default is empty, which randomly selects one port from all available ones. hive.spark.dynamic.partition.pruning false When dynamic pruning is enabled, joins on partition keys will be processed by writing to a temporary HDFS file, and read later for removing unnecessary partitions. hive.spark.dynamic.partition.pruning.max.data.size 104857600 Maximum total data size in dynamic pruning. hive.spark.use.groupby.shuffle true Spark groupByKey transformation has better performance but uses unbounded memory. Turn this off when there is a memory issue. hive.reorder.nway.joins true Runs reordering of tables within single n-way join (i.e.: picks streamtable) hive.merge.nway.joins true Merge adjacent joins into a single n-way join hive.log.every.n.records 0 Expects value bigger than 0. If value is greater than 0 logs in fixed intervals of size n rather than exponentially. 
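The Hive-on-Spark client settings above follow the same hive-site.xml pattern; for example, a sketch that lengthens the driver connection timeouts (the values are arbitrary examples, not tuned defaults):

<property>
  <name>hive.spark.client.connect.timeout</name>
  <value>5000ms</value>
</property>
<property>
  <name>hive.spark.client.server.connect.timeout</name>
  <value>120000ms</value>
</property>
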
hive.msck.path.validation throw Expects one of [throw, skip, ignore]. The approach msck should take with HDFS directories that are partition-like but contain unsupported characters. 'throw' (an exception) is the default; 'skip' will skip the invalid directories and still repair the others; 'ignore' will skip the validation (legacy behavior, causes bugs in many cases) hive.msck.repair.batch.size 0 Batch size for the msck repair command. If the value is greater than zero, it will execute batch wise with the configured batch size. The default value is zero. Zero means it will execute directly (Not batch wise) hive.server2.llap.concurrent.queries -1 The number of queries allowed in parallel via llap. Negative number implies 'infinite'. hive.tez.enable.memory.manager true Enable memory manager for tez hive.hash.table.inflation.factor 2.0 Expected inflation factor between disk/in memory representation of hash tables hive.log.trace.id Log tracing id that can be used by upstream clients for tracking respective logs. Truncated to 64 characters. Defaults to use auto-generated session id. hive.conf.restricted.list hive.security.authenticator.manager,hive.security.authorization.manager,hive.security.metastore.authorization.manager,hive.security.metastore.authenticator.manager,hive.users.in.admin.role,hive.server2.xsrf.filter.enabled,hive.security.authorization.enabled,hive.server2.authentication.ldap.baseDN,hive.server2.authentication.ldap.url,hive.server2.authentication.ldap.Domain,hive.server2.authentication.ldap.groupDNPattern,hive.server2.authentication.ldap.groupFilter,hive.server2.authentication.ldap.userDNPattern,hive.server2.authentication.ldap.userFilter,hive.server2.authentication.ldap.groupMembershipKey,hive.server2.authentication.ldap.userMembershipKey,hive.server2.authentication.ldap.groupClassKey,hive.server2.authentication.ldap.customLDAPQuery Comma separated list of configuration options which are immutable at runtime hive.conf.hidden.list javax.jdo.option.ConnectionPassword,hive.server2.keystore.password,fs.s3.awsAccessKeyId,fs.s3.awsSecretAccessKey,fs.s3n.awsAccessKeyId,fs.s3n.awsSecretAccessKey,fs.s3a.access.key,fs.s3a.secret.key,fs.s3a.proxy.password Comma separated list of configuration options which should not be read by normal user like passwords hive.conf.internal.variable.list hive.added.files.path,hive.added.jars.path,hive.added.archives.path Comma separated list of variables which are used internally and should not be configurable. hive.query.timeout.seconds 0s Expects a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec), which is sec if not specified. Timeout for Running Query in seconds. A nonpositive value means infinite. If the query timeout is also set by thrift API call, the smaller one will be taken. hive.exec.input.listing.max.threads 0 Expects a byte size value with unit (blank for bytes, kb, mb, gb, tb, pb). The size should be in between 0Pb (inclusive) and 1Kb (inclusive). Maximum number of threads that Hive uses to list file information from file systems (recommended > 1 for blobstore). hive.blobstore.supported.schemes s3,s3a,s3n Comma-separated list of supported blobstore schemes. hive.blobstore.use.blobstore.as.scratchdir false Enable the use of scratch directories directly on blob storage systems (it may cause performance penalties). 
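For instance, a deployment that wants msck to tolerate unsupported partition directory names and to cap query runtime could override the two properties just described (illustrative values only):

<property>
  <name>hive.msck.path.validation</name>
  <value>skip</value>
</property>
<property>
  <name>hive.query.timeout.seconds</name>
  <value>3600s</value>
</property>
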
hive.blobstore.optimizations.enabled true This parameter enables a number of optimizations when running on blobstores: (1) If hive.blobstore.use.blobstore.as.scratchdir is false, force the last Hive job to write to the blobstore. This is a performance optimization that forces the final FileSinkOperator to write to the blobstore. See HIVE-15121 for details. ././@PaxHeader0000000000000000000000000000020600000000000011453 xustar0000000000000000112 path=sahara-plugin-vanilla-10.0.0/sahara_plugin_vanilla/plugins/vanilla/v2_7_5/resources/mapred-default.xml 22 mtime=1696419316.0 sahara-plugin-vanilla-10.0.0/sahara_plugin_vanilla/plugins/vanilla/v2_7_5/resources/mapred-default.x0000664000175000017500000023350600000000000033533 0ustar00zuulzuul00000000000000 mapreduce.jobtracker.jobhistory.location If job tracker is static the history files are stored in this single well known place. If No value is set here, by default, it is in the local file system at ${hadoop.log.dir}/history. mapreduce.jobtracker.jobhistory.task.numberprogresssplits 12 Every task attempt progresses from 0.0 to 1.0 [unless it fails or is killed]. We record, for each task attempt, certain statistics over each twelfth of the progress range. You can change the number of intervals we divide the entire range of progress into by setting this property. Higher values give more precision to the recorded data, but costs more memory in the job tracker at runtime. Each increment in this attribute costs 16 bytes per running task. mapreduce.job.userhistorylocation User can specify a location to store the history files of a particular job. If nothing is specified, the logs are stored in output directory. The files are stored in "_logs/history/" in the directory. User can stop logging by giving the value "none". mapreduce.jobtracker.jobhistory.completed.location The completed job history files are stored at this single well known location. If nothing is specified, the files are stored at ${mapreduce.jobtracker.jobhistory.location}/done. mapreduce.job.committer.setup.cleanup.needed true true, if job needs job-setup and job-cleanup. false, otherwise mapreduce.task.io.sort.factor 10 The number of streams to merge at once while sorting files. This determines the number of open file handles. mapreduce.task.io.sort.mb 100 The total amount of buffer memory to use while sorting files, in megabytes. By default, gives each merge stream 1MB, which should minimize seeks. mapreduce.map.sort.spill.percent 0.80 The soft limit in the serialization buffer. Once reached, a thread will begin to spill the contents to disk in the background. Note that collection will not block if this threshold is exceeded while a spill is already in progress, so spills may be larger than this threshold when it is set to less than .5 mapreduce.jobtracker.address local The host and port that the MapReduce job tracker runs at. If "local", then jobs are run in-process as a single map and reduce task. mapreduce.local.clientfactory.class.name org.apache.hadoop.mapred.LocalClientFactory This the client factory that is responsible for creating local job runner client mapreduce.jobtracker.http.address 0.0.0.0:50030 The job tracker http server address and port the server will listen on. If the port is 0 then the server will start on a free port. mapreduce.jobtracker.handler.count 10 The number of server threads for the JobTracker. This should be roughly 4% of the number of tasktracker nodes. 
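The MapReduce properties in this mapred-default.xml section are overridden in mapred-site.xml in the same way. A minimal sketch raising the sort buffer and merge factor described above (the sizes are illustrative, not plugin defaults):

<property>
  <name>mapreduce.task.io.sort.mb</name>
  <value>256</value>
</property>
<property>
  <name>mapreduce.task.io.sort.factor</name>
  <value>64</value>
</property>
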
mapreduce.tasktracker.report.address 127.0.0.1:0 The interface and port that task tracker server listens on. Since it is only connected to by the tasks, it uses the local interface. EXPERT ONLY. Should only be changed if your host does not have the loopback interface. mapreduce.cluster.local.dir ${hadoop.tmp.dir}/mapred/local The local directory where MapReduce stores intermediate data files. May be a comma-separated list of directories on different devices in order to spread disk i/o. Directories that do not exist are ignored. mapreduce.jobtracker.system.dir ${hadoop.tmp.dir}/mapred/system The directory where MapReduce stores control files. mapreduce.jobtracker.staging.root.dir ${hadoop.tmp.dir}/mapred/staging The root of the staging area for users' job files In practice, this should be the directory where users' home directories are located (usually /user) mapreduce.cluster.temp.dir ${hadoop.tmp.dir}/mapred/temp A shared directory for temporary files. mapreduce.tasktracker.local.dir.minspacestart 0 If the space in mapreduce.cluster.local.dir drops under this, do not ask for more tasks. Value in bytes. mapreduce.tasktracker.local.dir.minspacekill 0 If the space in mapreduce.cluster.local.dir drops under this, do not ask more tasks until all the current ones have finished and cleaned up. Also, to save the rest of the tasks we have running, kill one of them, to clean up some space. Start with the reduce tasks, then go with the ones that have finished the least. Value in bytes. mapreduce.jobtracker.expire.trackers.interval 600000 Expert: The time-interval, in miliseconds, after which a tasktracker is declared 'lost' if it doesn't send heartbeats. mapreduce.tasktracker.instrumentation org.apache.hadoop.mapred.TaskTrackerMetricsInst Expert: The instrumentation class to associate with each TaskTracker. mapreduce.tasktracker.resourcecalculatorplugin Name of the class whose instance will be used to query resource information on the tasktracker. The class must be an instance of org.apache.hadoop.util.ResourceCalculatorPlugin. If the value is null, the tasktracker attempts to use a class appropriate to the platform. Currently, the only platform supported is Linux. mapreduce.tasktracker.taskmemorymanager.monitoringinterval 5000 The interval, in milliseconds, for which the tasktracker waits between two cycles of monitoring its tasks' memory usage. Used only if tasks' memory management is enabled via mapred.tasktracker.tasks.maxmemory. mapreduce.tasktracker.tasks.sleeptimebeforesigkill 5000 The time, in milliseconds, the tasktracker waits for sending a SIGKILL to a task, after it has been sent a SIGTERM. This is currently not used on WINDOWS where tasks are just sent a SIGTERM. mapreduce.job.maps 2 The default number of map tasks per job. Ignored when mapreduce.jobtracker.address is "local". mapreduce.job.reduces 1 The default number of reduce tasks per job. Typically set to 99% of the cluster's reduce capacity, so that if a node fails the reduces can still be executed in a single wave. Ignored when mapreduce.jobtracker.address is "local". mapreduce.jobtracker.restart.recover false "true" to enable (job) recovery upon restart, "false" to start afresh mapreduce.jobtracker.jobhistory.block.size 3145728 The block size of the job history file. Since the job recovery uses job history, its important to dump job history to disk as soon as possible. Note that this is an expert level parameter. The default value is set to 3 MB. 
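For example, the default task counts above can be overridden cluster-wide or per job; a sketch with assumed values:

<property>
  <name>mapreduce.job.maps</name>
  <value>10</value>
</property>
<property>
  <name>mapreduce.job.reduces</name>
  <value>4</value>
</property>
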
mapreduce.jobtracker.taskscheduler org.apache.hadoop.mapred.JobQueueTaskScheduler The class responsible for scheduling the tasks. mapreduce.job.running.map.limit 0 The maximum number of simultaneous map tasks per job. There is no limit if this value is 0 or negative. mapreduce.job.running.reduce.limit 0 The maximum number of simultaneous reduce tasks per job. There is no limit if this value is 0 or negative. mapreduce.job.reducer.preempt.delay.sec 0 The threshold (in seconds) after which an unsatisfied mapper request triggers reducer preemption when there is no anticipated headroom. If set to 0 or a negative value, the reducer is preempted as soon as lack of headroom is detected. Default is 0. mapreduce.job.reducer.unconditional-preempt.delay.sec 300 The threshold (in seconds) after which an unsatisfied mapper request triggers a forced reducer preemption irrespective of the anticipated headroom. By default, it is set to 5 mins. Setting it to 0 leads to immediate reducer preemption. Setting to -1 disables this preemption altogether. mapreduce.job.max.split.locations 10 The max number of block locations to store for each split for locality calculation. mapreduce.job.split.metainfo.maxsize 10000000 The maximum permissible size of the split metainfo file. The JobTracker won't attempt to read split metainfo files bigger than the configured value. No limits if set to -1. mapreduce.jobtracker.taskscheduler.maxrunningtasks.perjob The maximum number of running tasks for a job before it gets preempted. No limits if undefined. mapreduce.map.maxattempts 4 Expert: The maximum number of attempts per map task. In other words, framework will try to execute a map task these many number of times before giving up on it. mapreduce.reduce.maxattempts 4 Expert: The maximum number of attempts per reduce task. In other words, framework will try to execute a reduce task these many number of times before giving up on it. mapreduce.reduce.shuffle.fetch.retry.enabled ${yarn.nodemanager.recovery.enabled} Set to enable fetch retry during host restart. mapreduce.reduce.shuffle.fetch.retry.interval-ms 1000 Time of interval that fetcher retry to fetch again when some non-fatal failure happens because of some events like NM restart. mapreduce.reduce.shuffle.fetch.retry.timeout-ms 30000 Timeout value for fetcher to retry to fetch again when some non-fatal failure happens because of some events like NM restart. mapreduce.reduce.shuffle.retry-delay.max.ms 60000 The maximum number of ms the reducer will delay before retrying to download map data. mapreduce.reduce.shuffle.parallelcopies 5 The default number of parallel transfers run by reduce during the copy(shuffle) phase. mapreduce.reduce.shuffle.connect.timeout 180000 Expert: The maximum amount of time (in milli seconds) reduce task spends in trying to connect to a tasktracker for getting map output. mapreduce.reduce.shuffle.read.timeout 180000 Expert: The maximum amount of time (in milli seconds) reduce task waits for map output data to be available for reading after obtaining connection. mapreduce.shuffle.listen.queue.size 128 The length of the shuffle server listen queue. mapreduce.shuffle.connection-keep-alive.enable false set to true to support keep-alive connections. mapreduce.shuffle.connection-keep-alive.timeout 5 The number of seconds a shuffle client attempts to retain http connection. 
Refer "Keep-Alive: timeout=" header in Http specification mapreduce.task.timeout 600000 The number of milliseconds before a task will be terminated if it neither reads an input, writes an output, nor updates its status string. A value of 0 disables the timeout. mapreduce.tasktracker.map.tasks.maximum 2 The maximum number of map tasks that will be run simultaneously by a task tracker. mapreduce.tasktracker.reduce.tasks.maximum 2 The maximum number of reduce tasks that will be run simultaneously by a task tracker. mapreduce.map.memory.mb 1024 The amount of memory to request from the scheduler for each map task. mapreduce.map.cpu.vcores 1 The number of virtual cores to request from the scheduler for each map task. mapreduce.reduce.memory.mb 1024 The amount of memory to request from the scheduler for each reduce task. mapreduce.reduce.cpu.vcores 1 The number of virtual cores to request from the scheduler for each reduce task. mapreduce.jobtracker.retiredjobs.cache.size 1000 The number of retired job status to keep in the cache. mapreduce.tasktracker.outofband.heartbeat false Expert: Set this to true to let the tasktracker send an out-of-band heartbeat on task-completion for better latency. mapreduce.jobtracker.jobhistory.lru.cache.size 5 The number of job history files loaded in memory. The jobs are loaded when they are first accessed. The cache is cleared based on LRU. mapreduce.jobtracker.instrumentation org.apache.hadoop.mapred.JobTrackerMetricsInst Expert: The instrumentation class to associate with each JobTracker. mapred.child.java.opts -Xmx200m Java opts for the task processes. The following symbol, if present, will be interpolated: @taskid@ is replaced by current TaskID. Any other occurrences of '@' will go unchanged. For example, to enable verbose gc logging to a file named for the taskid in /tmp and to set the heap maximum to be a gigabyte, pass a 'value' of: -Xmx1024m -verbose:gc -Xloggc:/tmp/@taskid@.gc Usage of -Djava.library.path can cause programs to no longer function if hadoop native libraries are used. These values should instead be set as part of LD_LIBRARY_PATH in the map / reduce JVM env using the mapreduce.map.env and mapreduce.reduce.env config settings. mapred.child.env User added environment variables for the task processes. Example : 1) A=foo This will set the env variable A to foo 2) B=$B:c This is inherit nodemanager's B env variable on Unix. 3) B=%B%;c This is inherit nodemanager's B env variable on Windows. mapreduce.admin.user.env Expert: Additional execution environment entries for map and reduce task processes. This is not an additive property. You must preserve the original value if you want your map and reduce tasks to have access to native libraries (compression, etc). When this value is empty, the command to set execution envrionment will be OS dependent: For linux, use LD_LIBRARY_PATH=$HADOOP_COMMON_HOME/lib/native. For windows, use PATH = %PATH%;%HADOOP_COMMON_HOME%\\bin. mapreduce.map.log.level INFO The logging level for the map task. The allowed levels are: OFF, FATAL, ERROR, WARN, INFO, DEBUG, TRACE and ALL. The setting here could be overridden if "mapreduce.job.log4j-properties-file" is set. mapreduce.reduce.log.level INFO The logging level for the reduce task. The allowed levels are: OFF, FATAL, ERROR, WARN, INFO, DEBUG, TRACE and ALL. The setting here could be overridden if "mapreduce.job.log4j-properties-file" is set. mapreduce.map.cpu.vcores 1 The number of virtual cores required for each map task. 
mapreduce.reduce.cpu.vcores 1 The number of virtual cores required for each reduce task. mapreduce.reduce.merge.inmem.threshold 1000 The threshold, in terms of the number of files for the in-memory merge process. When we accumulate threshold number of files we initiate the in-memory merge and spill to disk. A value of 0 or less than 0 indicates we want to DON'T have any threshold and instead depend only on the ramfs's memory consumption to trigger the merge. mapreduce.reduce.shuffle.merge.percent 0.66 The usage threshold at which an in-memory merge will be initiated, expressed as a percentage of the total memory allocated to storing in-memory map outputs, as defined by mapreduce.reduce.shuffle.input.buffer.percent. mapreduce.reduce.shuffle.input.buffer.percent 0.70 The percentage of memory to be allocated from the maximum heap size to storing map outputs during the shuffle. mapreduce.reduce.input.buffer.percent 0.0 The percentage of memory- relative to the maximum heap size- to retain map outputs during the reduce. When the shuffle is concluded, any remaining map outputs in memory must consume less than this threshold before the reduce can begin. mapreduce.reduce.shuffle.memory.limit.percent 0.25 Expert: Maximum percentage of the in-memory limit that a single shuffle can consume mapreduce.shuffle.ssl.enabled false Whether to use SSL for for the Shuffle HTTP endpoints. mapreduce.shuffle.ssl.file.buffer.size 65536 Buffer size for reading spills from file when using SSL. mapreduce.shuffle.max.connections 0 Max allowed connections for the shuffle. Set to 0 (zero) to indicate no limit on the number of connections. mapreduce.shuffle.max.threads 0 Max allowed threads for serving shuffle connections. Set to zero to indicate the default of 2 times the number of available processors (as reported by Runtime.availableProcessors()). Netty is used to serve requests, so a thread is not needed for each connection. mapreduce.shuffle.transferTo.allowed This option can enable/disable using nio transferTo method in the shuffle phase. NIO transferTo does not perform well on windows in the shuffle phase. Thus, with this configuration property it is possible to disable it, in which case custom transfer method will be used. Recommended value is false when running Hadoop on Windows. For Linux, it is recommended to set it to true. If nothing is set then the default value is false for Windows, and true for Linux. mapreduce.shuffle.transfer.buffer.size 131072 This property is used only if mapreduce.shuffle.transferTo.allowed is set to false. In that case, this property defines the size of the buffer used in the buffer copy code for the shuffle phase. The size of this buffer determines the size of the IO requests. mapreduce.reduce.markreset.buffer.percent 0.0 The percentage of memory -relative to the maximum heap size- to be used for caching values when using the mark-reset functionality. mapreduce.map.speculative true If true, then multiple instances of some map tasks may be executed in parallel. mapreduce.reduce.speculative true If true, then multiple instances of some reduce tasks may be executed in parallel. mapreduce.job.speculative.speculative-cap-running-tasks 0.1 The max percent (0-1) of running tasks that can be speculatively re-executed at any time. mapreduce.job.speculative.speculative-cap-total-tasks 0.01 The max percent (0-1) of all tasks that can be speculatively re-executed at any time. 
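As an example of the speculative-execution switches above, both can be turned off in mapred-site.xml when duplicate task attempts are undesirable (a sketch, not a recommendation):

<property>
  <name>mapreduce.map.speculative</name>
  <value>false</value>
</property>
<property>
  <name>mapreduce.reduce.speculative</name>
  <value>false</value>
</property>
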
mapreduce.job.speculative.minimum-allowed-tasks 10 The minimum allowed tasks that can be speculatively re-executed at any time. mapreduce.job.speculative.retry-after-no-speculate 1000 The waiting time(ms) to do next round of speculation if there is no task speculated in this round. mapreduce.job.speculative.retry-after-speculate 15000 The waiting time(ms) to do next round of speculation if there are tasks speculated in this round. mapreduce.job.map.output.collector.class org.apache.hadoop.mapred.MapTask$MapOutputBuffer The MapOutputCollector implementation(s) to use. This may be a comma-separated list of class names, in which case the map task will try to initialize each of the collectors in turn. The first to successfully initialize will be used. mapreduce.job.speculative.slowtaskthreshold 1.0 The number of standard deviations by which a task's ave progress-rates must be lower than the average of all running tasks' for the task to be considered too slow. mapreduce.job.jvm.numtasks 1 How many tasks to run per jvm. If set to -1, there is no limit. mapreduce.job.ubertask.enable false Whether to enable the small-jobs "ubertask" optimization, which runs "sufficiently small" jobs sequentially within a single JVM. "Small" is defined by the following maxmaps, maxreduces, and maxbytes settings. Note that configurations for application masters also affect the "Small" definition - yarn.app.mapreduce.am.resource.mb must be larger than both mapreduce.map.memory.mb and mapreduce.reduce.memory.mb, and yarn.app.mapreduce.am.resource.cpu-vcores must be larger than both mapreduce.map.cpu.vcores and mapreduce.reduce.cpu.vcores to enable ubertask. Users may override this value. mapreduce.job.ubertask.maxmaps 9 Threshold for number of maps, beyond which job is considered too big for the ubertasking optimization. Users may override this value, but only downward. mapreduce.job.ubertask.maxreduces 1 Threshold for number of reduces, beyond which job is considered too big for the ubertasking optimization. CURRENTLY THE CODE CANNOT SUPPORT MORE THAN ONE REDUCE and will ignore larger values. (Zero is a valid max, however.) Users may override this value, but only downward. mapreduce.job.ubertask.maxbytes Threshold for number of input bytes, beyond which job is considered too big for the ubertasking optimization. If no value is specified, dfs.block.size is used as a default. Be sure to specify a default value in mapred-site.xml if the underlying filesystem is not HDFS. Users may override this value, but only downward. mapreduce.job.emit-timeline-data false Specifies if the Application Master should emit timeline data to the timeline server. Individual jobs can override this value. mapreduce.input.fileinputformat.split.minsize 0 The minimum size chunk that map input should be split into. Note that some file formats may have minimum split sizes that take priority over this setting. mapreduce.input.fileinputformat.list-status.num-threads 1 The number of threads to use to list and fetch block locations for the specified input paths. Note: multiple threads should not be used if a custom non thread-safe path filter is used. mapreduce.jobtracker.maxtasks.perjob -1 The maximum number of tasks for a single job. A value of -1 indicates that there is no maximum. mapreduce.input.lineinputformat.linespermap 1 When using NLineInputFormat, the number of lines of input data to include in each split. mapreduce.client.submit.file.replication 10 The replication level for submitted job files. 
This should be around the square root of the number of nodes. mapreduce.tasktracker.dns.interface default The name of the Network Interface from which a task tracker should report its IP address. mapreduce.tasktracker.dns.nameserver default The host name or IP address of the name server (DNS) which a TaskTracker should use to determine the host name used by the JobTracker for communication and display purposes. mapreduce.tasktracker.http.threads 40 The number of worker threads for the http server. This is used for map output fetching mapreduce.tasktracker.http.address 0.0.0.0:50060 The task tracker http server address and port. If the port is 0 then the server will start on a free port. mapreduce.task.files.preserve.failedtasks false Should the files for failed tasks be kept. This should only be used on jobs that are failing, because the storage is never reclaimed. It also prevents the map outputs from being erased from the reduce directory as they are consumed. mapreduce.output.fileoutputformat.compress false Should the job outputs be compressed? mapreduce.output.fileoutputformat.compress.type RECORD If the job outputs are to be compressed as SequenceFiles, how should they be compressed? Should be one of NONE, RECORD or BLOCK. mapreduce.output.fileoutputformat.compress.codec org.apache.hadoop.io.compress.DefaultCodec If the job outputs are compressed, how should they be compressed? mapreduce.map.output.compress false Should the outputs of the maps be compressed before being sent across the network. Uses SequenceFile compression. mapreduce.map.output.compress.codec org.apache.hadoop.io.compress.DefaultCodec If the map outputs are compressed, how should they be compressed? map.sort.class org.apache.hadoop.util.QuickSort The default sort class for sorting keys. mapreduce.task.userlog.limit.kb 0 The maximum size of user-logs of each task in KB. 0 disables the cap. yarn.app.mapreduce.am.container.log.limit.kb 0 The maximum size of the MRAppMaster attempt container logs in KB. 0 disables the cap. yarn.app.mapreduce.task.container.log.backups 0 Number of backup files for task logs when using ContainerRollingLogAppender (CRLA). See org.apache.log4j.RollingFileAppender.maxBackupIndex. By default, ContainerLogAppender (CLA) is used, and container logs are not rolled. CRLA is enabled for tasks when both mapreduce.task.userlog.limit.kb and yarn.app.mapreduce.task.container.log.backups are greater than zero. yarn.app.mapreduce.am.container.log.backups 0 Number of backup files for the ApplicationMaster logs when using ContainerRollingLogAppender (CRLA). See org.apache.log4j.RollingFileAppender.maxBackupIndex. By default, ContainerLogAppender (CLA) is used, and container logs are not rolled. CRLA is enabled for the ApplicationMaster when both mapreduce.task.userlog.limit.kb and yarn.app.mapreduce.am.container.log.backups are greater than zero. yarn.app.mapreduce.shuffle.log.separate true If enabled ('true') logging generated by the client-side shuffle classes in a reducer will be written in a dedicated log file 'syslog.shuffle' instead of 'syslog'. yarn.app.mapreduce.shuffle.log.limit.kb 0 Maximum size of the syslog.shuffle file in kilobytes (0 for no limit). yarn.app.mapreduce.shuffle.log.backups 0 If yarn.app.mapreduce.shuffle.log.limit.kb and yarn.app.mapreduce.shuffle.log.backups are greater than zero then a ContainerRollingLogAppender is used instead of ContainerLogAppender for syslog.shuffle. 
See org.apache.log4j.RollingFileAppender.maxBackupIndex mapreduce.job.userlog.retain.hours 24 The maximum time, in hours, for which the user-logs are to be retained after the job completion. mapreduce.jobtracker.hosts.filename Names a file that contains the list of nodes that may connect to the jobtracker. If the value is empty, all hosts are permitted. mapreduce.jobtracker.hosts.exclude.filename Names a file that contains the list of hosts that should be excluded by the jobtracker. If the value is empty, no hosts are excluded. mapreduce.jobtracker.heartbeats.in.second 100 Expert: Approximate number of heart-beats that could arrive at JobTracker in a second. Assuming each RPC can be processed in 10msec, the default value is made 100 RPCs in a second. mapreduce.jobtracker.tasktracker.maxblacklists 4 The number of blacklists for a taskTracker by various jobs after which the task tracker could be blacklisted across all jobs. The tracker will be given tasks later (after a day). The tracker will become a healthy tracker after a restart. mapreduce.job.maxtaskfailures.per.tracker 3 The number of task-failures on a tasktracker of a given job after which new tasks of that job aren't assigned to it. It MUST be less than mapreduce.map.maxattempts and mapreduce.reduce.maxattempts otherwise the failed task will never be tried on a different node. mapreduce.client.output.filter FAILED The filter for controlling the output of the task's userlogs sent to the console of the JobClient. The permissible options are: NONE, KILLED, FAILED, SUCCEEDED and ALL. mapreduce.client.completion.pollinterval 5000 The interval (in milliseconds) between which the JobClient polls the JobTracker for updates about job status. You may want to set this to a lower value to make tests run faster on a single node system. Adjusting this value in production may lead to unwanted client-server traffic. mapreduce.client.progressmonitor.pollinterval 1000 The interval (in milliseconds) between which the JobClient reports status to the console and checks for job completion. You may want to set this to a lower value to make tests run faster on a single node system. Adjusting this value in production may lead to unwanted client-server traffic. mapreduce.jobtracker.persist.jobstatus.active true Indicates if persistency of job status information is active or not. mapreduce.jobtracker.persist.jobstatus.hours 1 The number of hours job status information is persisted in DFS. The job status information will be available after it drops off the memory queue and between jobtracker restarts. With a zero value the job status information is not persisted at all in DFS. mapreduce.jobtracker.persist.jobstatus.dir /jobtracker/jobsInfo The directory where the job status information is persisted in a file system to be available after it drops off the memory queue and between jobtracker restarts. mapreduce.task.profile false To set whether the system should collect profiler information for some of the tasks in this job? The information is stored in the user log directory. The value is "true" if task profiling is enabled. mapreduce.task.profile.maps 0-2 To set the ranges of map tasks to profile. mapreduce.task.profile has to be set to true for the value to be accounted. mapreduce.task.profile.reduces 0-2 To set the ranges of reduce tasks to profile. mapreduce.task.profile has to be set to true for the value to be accounted. 
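Task profiling as described above is usually enabled per job; an illustrative override that profiles only the first three map attempts:

<property>
  <name>mapreduce.task.profile</name>
  <value>true</value>
</property>
<property>
  <name>mapreduce.task.profile.maps</name>
  <value>0-2</value>
</property>
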
mapreduce.task.profile.params -agentlib:hprof=cpu=samples,heap=sites,force=n,thread=y,verbose=n,file=%s JVM profiler parameters used to profile map and reduce task attempts. This string may contain a single format specifier %s that will be replaced by the path to profile.out in the task attempt log directory. To specify different profiling options for map tasks and reduce tasks, more specific parameters mapreduce.task.profile.map.params and mapreduce.task.profile.reduce.params should be used. mapreduce.task.profile.map.params ${mapreduce.task.profile.params} Map-task-specific JVM profiler parameters. See mapreduce.task.profile.params mapreduce.task.profile.reduce.params ${mapreduce.task.profile.params} Reduce-task-specific JVM profiler parameters. See mapreduce.task.profile.params mapreduce.task.skip.start.attempts 2 The number of Task attempts AFTER which skip mode will be kicked off. When skip mode is kicked off, the tasks reports the range of records which it will process next, to the TaskTracker. So that on failures, TT knows which ones are possibly the bad records. On further executions, those are skipped. mapreduce.map.skip.proc.count.autoincr true The flag which if set to true, SkipBadRecords.COUNTER_MAP_PROCESSED_RECORDS is incremented by MapRunner after invoking the map function. This value must be set to false for applications which process the records asynchronously or buffer the input records. For example streaming. In such cases applications should increment this counter on their own. mapreduce.reduce.skip.proc.count.autoincr true The flag which if set to true, SkipBadRecords.COUNTER_REDUCE_PROCESSED_GROUPS is incremented by framework after invoking the reduce function. This value must be set to false for applications which process the records asynchronously or buffer the input records. For example streaming. In such cases applications should increment this counter on their own. mapreduce.job.skip.outdir If no value is specified here, the skipped records are written to the output directory at _logs/skip. User can stop writing skipped records by giving the value "none". mapreduce.map.skip.maxrecords 0 The number of acceptable skip records surrounding the bad record PER bad record in mapper. The number includes the bad record as well. To turn the feature of detection/skipping of bad records off, set the value to 0. The framework tries to narrow down the skipped range by retrying until this threshold is met OR all attempts get exhausted for this task. Set the value to Long.MAX_VALUE to indicate that framework need not try to narrow down. Whatever records(depends on application) get skipped are acceptable. mapreduce.reduce.skip.maxgroups 0 The number of acceptable skip groups surrounding the bad group PER bad group in reducer. The number includes the bad group as well. To turn the feature of detection/skipping of bad groups off, set the value to 0. The framework tries to narrow down the skipped range by retrying until this threshold is met OR all attempts get exhausted for this task. Set the value to Long.MAX_VALUE to indicate that framework need not try to narrow down. Whatever groups(depends on application) get skipped are acceptable. mapreduce.ifile.readahead true Configuration key to enable/disable IFile readahead. mapreduce.ifile.readahead.bytes 4194304 Configuration key to set the IFile readahead length in bytes. mapreduce.jobtracker.taskcache.levels 2 This is the max level of the task cache. 
For example, if the level is 2, the tasks cached are at the host level and at the rack level. mapreduce.job.queuename default Queue to which a job is submitted. This must match one of the queues defined in mapred-queues.xml for the system. Also, the ACL setup for the queue must allow the current user to submit a job to the queue. Before specifying a queue, ensure that the system is configured with the queue, and access is allowed for submitting jobs to the queue. mapreduce.job.tags Tags for the job that will be passed to YARN at submission time. Queries to YARN for applications can filter on these tags. mapreduce.cluster.acls.enabled false Specifies whether ACLs should be checked for authorization of users for doing various queue and job level operations. ACLs are disabled by default. If enabled, access control checks are made by JobTracker and TaskTracker when requests are made by users for queue operations like submit job to a queue and kill a job in the queue and job operations like viewing the job-details (See mapreduce.job.acl-view-job) or for modifying the job (See mapreduce.job.acl-modify-job) using Map/Reduce APIs, RPCs or via the console and web user interfaces. For enabling this flag(mapreduce.cluster.acls.enabled), this is to be set to true in mapred-site.xml on JobTracker node and on all TaskTracker nodes. mapreduce.job.acl-modify-job Job specific access-control list for 'modifying' the job. It is only used if authorization is enabled in Map/Reduce by setting the configuration property mapreduce.cluster.acls.enabled to true. This specifies the list of users and/or groups who can do modification operations on the job. For specifying a list of users and groups the format to use is "user1,user2 group1,group". If set to '*', it allows all users/groups to modify this job. If set to ' '(i.e. space), it allows none. This configuration is used to guard all the modifications with respect to this job and takes care of all the following operations: o killing this job o killing a task of this job, failing a task of this job o setting the priority of this job Each of these operations are also protected by the per-queue level ACL "acl-administer-jobs" configured via mapred-queues.xml. So a caller should have the authorization to satisfy either the queue-level ACL or the job-level ACL. Irrespective of this ACL configuration, (a) job-owner, (b) the user who started the cluster, (c) members of an admin configured supergroup configured via mapreduce.cluster.permissions.supergroup and (d) queue administrators of the queue to which this job was submitted to configured via acl-administer-jobs for the specific queue in mapred-queues.xml can do all the modification operations on a job. By default, nobody else besides job-owner, the user who started the cluster, members of supergroup and queue administrators can perform modification operations on a job. mapreduce.job.acl-view-job Job specific access-control list for 'viewing' the job. It is only used if authorization is enabled in Map/Reduce by setting the configuration property mapreduce.cluster.acls.enabled to true. This specifies the list of users and/or groups who can view private details about the job. For specifying a list of users and groups the format to use is "user1,user2 group1,group". If set to '*', it allows all users/groups to modify this job. If set to ' '(i.e. space), it allows none. 
This configuration is used to guard some of the job-views and at present only protects APIs that can return possibly sensitive information of the job-owner like o job-level counters o task-level counters o tasks' diagnostic information o task-logs displayed on the TaskTracker web-UI and o job.xml showed by the JobTracker's web-UI Every other piece of information of jobs is still accessible by any other user, for e.g., JobStatus, JobProfile, list of jobs in the queue, etc. Irrespective of this ACL configuration, (a) job-owner, (b) the user who started the cluster, (c) members of an admin configured supergroup configured via mapreduce.cluster.permissions.supergroup and (d) queue administrators of the queue to which this job was submitted to configured via acl-administer-jobs for the specific queue in mapred-queues.xml can do all the view operations on a job. By default, nobody else besides job-owner, the user who started the cluster, members of supergroup and queue administrators can perform view operations on a job. mapreduce.tasktracker.indexcache.mb 10 The maximum memory that a task tracker allows for the index cache that is used when serving map outputs to reducers. mapreduce.job.finish-when-all-reducers-done false Specifies whether the job should complete once all reducers have finished, regardless of whether there are still running mappers. mapreduce.job.token.tracking.ids.enabled false Whether to write tracking ids of tokens to job-conf. When true, the configuration property "mapreduce.job.token.tracking.ids" is set to the token-tracking-ids of the job mapreduce.job.token.tracking.ids When mapreduce.job.token.tracking.ids.enabled is set to true, this is set by the framework to the token-tracking-ids used by the job. mapreduce.task.merge.progress.records 10000 The number of records to process during merge before sending a progress notification to the TaskTracker. mapreduce.task.combine.progress.records 10000 The number of records to process during combine output collection before sending a progress notification. mapreduce.job.reduce.slowstart.completedmaps 0.05 Fraction of the number of maps in the job which should be complete before reduces are scheduled for the job. mapreduce.job.complete.cancel.delegation.tokens true if false - do not unregister/cancel delegation tokens from renewal, because same tokens may be used by spawned jobs mapreduce.tasktracker.taskcontroller org.apache.hadoop.mapred.DefaultTaskController TaskController which is used to launch and manage task execution mapreduce.tasktracker.group Expert: Group to which TaskTracker belongs. If LinuxTaskController is configured via mapreduce.tasktracker.taskcontroller, the group owner of the task-controller binary should be same as this group. mapreduce.shuffle.port 13562 Default port that the ShuffleHandler will run on. ShuffleHandler is a service run at the NodeManager to facilitate transfers of intermediate Map outputs to requesting Reducers. mapreduce.job.reduce.shuffle.consumer.plugin.class org.apache.hadoop.mapreduce.task.reduce.Shuffle Name of the class whose instance will be used to send shuffle requests by reducetasks of this job. The class must be an instance of org.apache.hadoop.mapred.ShuffleConsumerPlugin. mapreduce.tasktracker.healthchecker.script.path Absolute path to the script which is periodically run by the node health monitoring service to determine if the node is healthy or not. 
If the value of this key is empty or the file does not exist in the location configured here, the node health monitoring service is not started. mapreduce.tasktracker.healthchecker.interval 60000 Frequency of the node health script to be run, in milliseconds mapreduce.tasktracker.healthchecker.script.timeout 600000 Time after node health script should be killed if unresponsive and considered that the script has failed. mapreduce.tasktracker.healthchecker.script.args List of arguments which are to be passed to node health script when it is being launched, comma separated. mapreduce.job.node-label-expression All the containers of the Map Reduce job will be run with this node label expression. If the node-label-expression for job is not set, then it will use queue's default-node-label-expression for all job's containers. mapreduce.job.am.node-label-expression This is node-label configuration for Map Reduce Application Master container. If not configured it will make use of mapreduce.job.node-label-expression and if job's node-label expression is not configured then it will use queue's default-node-label-expression. mapreduce.map.node-label-expression This is node-label configuration for Map task containers. If not configured it will use mapreduce.job.node-label-expression and if job's node-label expression is not configured then it will use queue's default-node-label-expression. mapreduce.reduce.node-label-expression This is node-label configuration for Reduce task containers. If not configured it will use mapreduce.job.node-label-expression and if job's node-label expression is not configured then it will use queue's default-node-label-expression. mapreduce.job.counters.limit 120 Limit on the number of user counters allowed per job. mapreduce.framework.name local The runtime framework for executing MapReduce jobs. Can be one of local, classic or yarn. yarn.app.mapreduce.am.staging-dir /tmp/hadoop-yarn/staging The staging dir used while submitting jobs. mapreduce.am.max-attempts 2 The maximum number of application attempts. It is an application-specific setting. It should not be larger than the global number set by resourcemanager. Otherwise, it will be overridden. The default number is set to 2, to allow at least one retry for AM. mapreduce.job.end-notification.url Indicates url which will be called on completion of job to inform end status of job. User can give at most 2 variables with URI : $jobId and $jobStatus. If they are present in URI, then they will be replaced by their respective values. mapreduce.job.end-notification.retry.attempts 0 The number of times the submitter of the job wants to retry job end notification if it fails. This is capped by mapreduce.job.end-notification.max.attempts mapreduce.job.end-notification.retry.interval 1000 The number of milliseconds the submitter of the job wants to wait before job end notification is retried if it fails. This is capped by mapreduce.job.end-notification.max.retry.interval mapreduce.job.end-notification.max.attempts 5 true The maximum number of times a URL will be read for providing job end notification. Cluster administrators can set this to limit how long after end of a job, the Application Master waits before exiting. Must be marked as final to prevent users from overriding this. mapreduce.job.log4j-properties-file Used to override the default settings of log4j in container-log4j.properties for NodeManager. Like container-log4j.properties, it requires certain framework appenders properly defined in this overridden file. 
The file on the path will be added to distributed cache and classpath. If no-scheme is given in the path, it defaults to point to a log4j file on the local FS. mapreduce.job.end-notification.max.retry.interval 5000 true The maximum amount of time (in milliseconds) to wait before retrying job end notification. Cluster administrators can set this to limit how long the Application Master waits before exiting. Must be marked as final to prevent users from overriding this. yarn.app.mapreduce.am.env User added environment variables for the MR App Master processes. Example : 1) A=foo This will set the env variable A to foo 2) B=$B:c This inherits the tasktracker's B env variable. yarn.app.mapreduce.am.admin.user.env Environment variables for the MR App Master processes for admin purposes. These values are set first and can be overridden by the user env (yarn.app.mapreduce.am.env) Example : 1) A=foo This will set the env variable A to foo 2) B=$B:c This inherits the app master's B env variable. yarn.app.mapreduce.am.command-opts -Xmx1024m Java opts for the MR App Master processes. The following symbol, if present, will be interpolated: @taskid@ is replaced by current TaskID. Any other occurrences of '@' will go unchanged. For example, to enable verbose gc logging to a file named for the taskid in /tmp and to set the heap maximum to be a gigabyte, pass a 'value' of: -Xmx1024m -verbose:gc -Xloggc:/tmp/@taskid@.gc Usage of -Djava.library.path can cause programs to no longer function if hadoop native libraries are used. These values should instead be set as part of LD_LIBRARY_PATH in the map / reduce JVM env using the mapreduce.map.env and mapreduce.reduce.env config settings. yarn.app.mapreduce.am.admin-command-opts Java opts for the MR App Master processes for admin purposes. It will appear before the opts set by yarn.app.mapreduce.am.command-opts and thus its options can be overridden by the user. Usage of -Djava.library.path can cause programs to no longer function if hadoop native libraries are used. These values should instead be set as part of LD_LIBRARY_PATH in the map / reduce JVM env using the mapreduce.map.env and mapreduce.reduce.env config settings. yarn.app.mapreduce.am.job.task.listener.thread-count 30 The number of threads used to handle RPC calls in the MR AppMaster from remote tasks yarn.app.mapreduce.am.job.client.port-range Range of ports that the MapReduce AM can use when binding. Leave blank if you want all possible ports. For example 50000-50050,50100-50200 yarn.app.mapreduce.am.job.committer.cancel-timeout 60000 The amount of time in milliseconds to wait for the output committer to cancel an operation if the job is killed yarn.app.mapreduce.am.job.committer.commit-window 10000 Defines a time window in milliseconds for output commit operations. If contact with the RM has occurred within this window then commits are allowed, otherwise the AM will not allow output commits until contact with the RM has been re-established. mapreduce.fileoutputcommitter.algorithm.version 1 The file output committer algorithm version valid algorithm version number: 1 or 2 default to 1, which is the original algorithm In algorithm version 1, 1. commitTask will rename directory $joboutput/_temporary/$appAttemptID/_temporary/$taskAttemptID/ to $joboutput/_temporary/$appAttemptID/$taskID/ 2. recoverTask will also do a rename $joboutput/_temporary/$appAttemptID/$taskID/ to $joboutput/_temporary/($appAttemptID + 1)/$taskID/ 3. 
commitJob will merge every task output file in $joboutput/_temporary/$appAttemptID/$taskID/ to $joboutput/, then it will delete $joboutput/_temporary/ and write $joboutput/_SUCCESS. It has a performance regression, which is discussed in MAPREDUCE-4815. If a job generates many files to commit then the commitJob method call at the end of the job can take minutes. The commit is single-threaded and waits until all tasks have completed before commencing. Algorithm version 2 will change the behavior of commitTask, recoverTask, and commitJob. 1. commitTask will rename all files in $joboutput/_temporary/$appAttemptID/_temporary/$taskAttemptID/ to $joboutput/ 2. recoverTask actually doesn't need to do anything, but for the upgrade from version 1 to version 2 case, it will check if there are any files in $joboutput/_temporary/($appAttemptID - 1)/$taskID/ and rename them to $joboutput/ 3. commitJob can simply delete $joboutput/_temporary and write $joboutput/_SUCCESS. This algorithm reduces the output commit time for large jobs by having the tasks commit directly to the final output directory as they complete, leaving commitJob very little to do. yarn.app.mapreduce.am.scheduler.heartbeat.interval-ms 1000 The interval in ms at which the MR AppMaster should send heartbeats to the ResourceManager yarn.app.mapreduce.client-am.ipc.max-retries 3 The number of client retries to the AM - before reconnecting to the RM to fetch Application Status. yarn.app.mapreduce.client-am.ipc.max-retries-on-timeouts 3 The number of client retries on socket timeouts to the AM - before reconnecting to the RM to fetch Application Status. yarn.app.mapreduce.client.max-retries 3 The number of client retries to the RM/HS before throwing an exception. This is a layer above the ipc. yarn.app.mapreduce.am.resource.mb 1536 The amount of memory the MR AppMaster needs. yarn.app.mapreduce.am.resource.cpu-vcores 1 The number of virtual CPU cores the MR AppMaster needs. yarn.app.mapreduce.am.hard-kill-timeout-ms 10000 Number of milliseconds to wait before the job client kills the application. yarn.app.mapreduce.client.job.max-retries 0 The number of retries the client will make for getJob and dependent calls. The default is 0 as this is generally only needed for non-HDFS DFS where additional, high level retries are required to avoid spurious failures during the getJob call. 30 is a good value for WASB yarn.app.mapreduce.client.job.retry-interval 2000 The delay between getJob retries in ms for retries configured with yarn.app.mapreduce.client.job.max-retries. CLASSPATH for MR applications. A comma-separated list of CLASSPATH entries. If mapreduce.application.framework is set then this must specify the appropriate classpath for that archive, and the name of the archive must be present in the classpath. If mapreduce.app-submission.cross-platform is false, platform-specific environment variable expansion syntax would be used to construct the default CLASSPATH entries. For Linux: $HADOOP_MAPRED_HOME/share/hadoop/mapreduce/*, $HADOOP_MAPRED_HOME/share/hadoop/mapreduce/lib/*. For Windows: %HADOOP_MAPRED_HOME%/share/hadoop/mapreduce/*, %HADOOP_MAPRED_HOME%/share/hadoop/mapreduce/lib/*. If mapreduce.app-submission.cross-platform is true, platform-agnostic default CLASSPATH for MR applications would be used: {{HADOOP_MAPRED_HOME}}/share/hadoop/mapreduce/*, {{HADOOP_MAPRED_HOME}}/share/hadoop/mapreduce/lib/* Parameter expansion marker will be replaced by NodeManager on container launch based on the underlying OS accordingly.
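For jobs that commit many output files, the version 2 algorithm described above can be enabled cluster-wide. A minimal mapred-site.xml sketch (choosing 2 here is an illustrative assumption, not a recommendation made by this file):
  <property>
    <name>mapreduce.fileoutputcommitter.algorithm.version</name>
    <!-- Tasks commit directly to the final output directory; avoids the MAPREDUCE-4815 regression -->
    <value>2</value>
  </property>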
mapreduce.application.classpath If enabled, users can submit an application cross-platform, i.e. submit an application from a Windows client to a Linux/Unix server or vice versa. mapreduce.app-submission.cross-platform false Path to the MapReduce framework archive. If set, the framework archive will automatically be distributed along with the job, and this path would normally reside in a public location in an HDFS filesystem. As with distributed cache files, this can be a URL with a fragment specifying the alias to use for the archive name. For example, hdfs:/mapred/framework/hadoop-mapreduce-2.1.1.tar.gz#mrframework would alias the localized archive as "mrframework". Note that mapreduce.application.classpath must include the appropriate classpath for the specified framework. The base name of the archive, or alias of the archive if an alias is used, must appear in the specified classpath. mapreduce.application.framework.path mapreduce.job.classloader false Whether to use a separate (isolated) classloader for user classes in the task JVM. mapreduce.job.classloader.system.classes Used to override the default definition of the system classes for the job classloader. The system classes are a comma-separated list of patterns that indicate whether to load a class from the system classpath, instead of from the user-supplied JARs, when mapreduce.job.classloader is enabled. A positive pattern is defined as: 1. A single class name 'C' that matches 'C' and transitively all nested classes 'C$*' defined in C; 2. A package name ending with a '.' (e.g., "com.example.") that matches all classes from that package. A negative pattern is defined by a '-' in front of a positive pattern (e.g., "-com.example."). A class is considered a system class if and only if it matches one of the positive patterns and none of the negative ones. More formally: A class is a member of the inclusion set I if it matches one of the positive patterns. A class is a member of the exclusion set E if it matches one of the negative patterns. The set of system classes S = I \ E. mapreduce.jobhistory.address 0.0.0.0:10020 MapReduce JobHistory Server IPC host:port mapreduce.jobhistory.webapp.address 0.0.0.0:19888 MapReduce JobHistory Server Web UI host:port mapreduce.jobhistory.keytab Location of the kerberos keytab file for the MapReduce JobHistory Server. /etc/security/keytab/jhs.service.keytab mapreduce.jobhistory.principal Kerberos principal name for the MapReduce JobHistory Server. jhs/_HOST@REALM.TLD mapreduce.jobhistory.intermediate-done-dir ${yarn.app.mapreduce.am.staging-dir}/history/done_intermediate mapreduce.jobhistory.done-dir ${yarn.app.mapreduce.am.staging-dir}/history/done mapreduce.jobhistory.cleaner.enable true mapreduce.jobhistory.cleaner.interval-ms 86400000 How often the job history cleaner checks for files to delete, in milliseconds. Defaults to 86400000 (one day). Files are only deleted if they are older than mapreduce.jobhistory.max-age-ms. mapreduce.jobhistory.max-age-ms 604800000 Job history files older than this many milliseconds will be deleted when the history cleaner runs. Defaults to 604800000 (1 week). mapreduce.jobhistory.client.thread-count 10 The number of threads to handle client API requests mapreduce.jobhistory.datestring.cache.size 200000 Size of the date string cache. Affects the number of directories which will be scanned to find a job. mapreduce.jobhistory.joblist.cache.size 20000 Size of the job list cache mapreduce.jobhistory.loadedjobs.cache.size 5 Size of the loaded job cache.
This property is ignored if the property mapreduce.jobhistory.loadedtasks.cache.size is set to a positive value. mapreduce.jobhistory.loadedtasks.cache.size Change the job history cache limit to be set in terms of total task count. If the total number of tasks loaded exceeds this value, then the job cache will be shrunk down until it is under this limit (minimum 1 job in cache). If this value is empty or nonpositive then the cache reverts to using the property mapreduce.jobhistory.loadedjobs.cache.size as a job cache size. Two recommendations for the mapreduce.jobhistory.loadedtasks.cache.size property: 1) For every 100k of cache size, set the heap size of the Job History Server to 1.2GB. For example, mapreduce.jobhistory.loadedtasks.cache.size=500000, heap size=6GB. 2) Make sure that the cache size is larger than the number of tasks required for the largest job run on the cluster. It might be a good idea to set the value slightly higher (say, 20%) in order to allow for job size growth. mapreduce.jobhistory.move.interval-ms 180000 Scan for history files to move from the intermediate done dir to the done dir at this frequency. mapreduce.jobhistory.move.thread-count 3 The number of threads used to move files. mapreduce.jobhistory.store.class The HistoryStorage class to use to cache history data. mapreduce.jobhistory.minicluster.fixed.ports false Whether to use fixed ports with the minicluster mapreduce.jobhistory.admin.address 0.0.0.0:10033 The address of the History server admin interface. mapreduce.jobhistory.admin.acl * ACL of who can be admin of the History server. mapreduce.jobhistory.recovery.enable false Enable the history server to store server state and recover server state upon startup. If enabled then mapreduce.jobhistory.recovery.store.class must be specified. mapreduce.jobhistory.recovery.store.class org.apache.hadoop.mapreduce.v2.hs.HistoryServerFileSystemStateStoreService The HistoryServerStateStoreService class to store history server state for recovery. mapreduce.jobhistory.recovery.store.fs.uri ${hadoop.tmp.dir}/mapred/history/recoverystore The URI where history server state will be stored if HistoryServerFileSystemStateStoreService is configured as the recovery storage class. mapreduce.jobhistory.recovery.store.leveldb.path ${hadoop.tmp.dir}/mapred/history/recoverystore The URI where history server state will be stored if HistoryServerLeveldbSystemStateStoreService is configured as the recovery storage class. mapreduce.jobhistory.http.policy HTTP_ONLY This configures the HTTP endpoint for JobHistoryServer web UI. The following values are supported: - HTTP_ONLY : Service is provided only on http - HTTPS_ONLY : Service is provided only on https yarn.app.mapreduce.am.containerlauncher.threadpool-initial-size 10 The initial size of thread pool to launch containers in the app master. The list of job configuration properties whose value will be redacted. mapreduce.job.redacted-properties ././@PaxHeader0000000000000000000000000000020500000000000011452 xustar0000000000000000111 path=sahara-plugin-vanilla-10.0.0/sahara_plugin_vanilla/plugins/vanilla/v2_7_5/resources/oozie-default.xml 22 mtime=1696419316.0 sahara-plugin-vanilla-10.0.0/sahara_plugin_vanilla/plugins/vanilla/v2_7_5/resources/oozie-default.xm0000664000175000017500000035122100000000000033560 0ustar00zuulzuul00000000000000 oozie.output.compression.codec gz The name of the compression codec to use. The implementation class for the codec needs to be specified through another property oozie.compression.codecs.
You can specify a comma separated list of 'Codec_name'='Codec_class' for oozie.compression.codecs where the codec class implements the interface org.apache.oozie.compression.CompressionCodec. If oozie.compression.codecs is not specified, the gz codec implementation is used by default. oozie.external_monitoring.enable false If the Oozie functional metrics need to be exposed to the metrics-server backend, set it to true. If set to true, the following properties have to be specified: oozie.metrics.server.name, oozie.metrics.host, oozie.metrics.prefix, oozie.metrics.report.interval.sec, oozie.metrics.port oozie.external_monitoring.type graphite The name of the server to which we want to send the metrics, would be graphite or ganglia. oozie.external_monitoring.address http://localhost:2020 oozie.external_monitoring.metricPrefix oozie oozie.external_monitoring.reporterIntervalSecs 60 oozie.jmx_monitoring.enable false If the Oozie functional metrics need to be exposed via the JMX interface, set it to true. oozie.action.mapreduce.uber.jar.enable false If true, enables the oozie.mapreduce.uber.jar mapreduce workflow configuration property, which is used to specify an uber jar in HDFS. Submitting a workflow with an uber jar requires at least Hadoop 2.2.0 or 1.2.0. If false, workflows which specify the oozie.mapreduce.uber.jar configuration property will fail. oozie.processing.timezone UTC Oozie server timezone. Valid values are UTC and GMT(+/-)####, for example 'GMT+0530' would be India timezone. All dates parsed and generated by Oozie Coordinator/Bundle will be in the specified timezone. The default value of 'UTC' should not be changed under normal circumstances. If for any reason it is changed, note that GMT(+/-)#### timezones do not observe DST changes. oozie.base.url http://localhost:8080/oozie Base Oozie URL. oozie.system.id oozie-${user.name} The Oozie system ID. oozie.systemmode NORMAL System mode for Oozie at startup. oozie.delete.runtime.dir.on.shutdown true Whether the runtime directory should be deleted when Oozie shuts down.
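As a sketch of the 'Codec_name'='Codec_class' format described above, an oozie-site.xml fragment registering an additional codec (com.example.LzoCodec is a hypothetical placeholder class, and the gz mapping assumes Oozie's built-in gzip implementation):
  <property>
    <name>oozie.compression.codecs</name>
    <value>gz=org.apache.oozie.compression.GzipCompressionCodec,lzo=com.example.LzoCodec</value>
  </property>
  <property>
    <!-- Keep gz as the codec actually used for output -->
    <name>oozie.output.compression.codec</name>
    <value>gz</value>
  </property>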
oozie.services org.apache.oozie.service.SchedulerService, org.apache.oozie.service.InstrumentationService, org.apache.oozie.service.MemoryLocksService, org.apache.oozie.service.UUIDService, org.apache.oozie.service.ELService, org.apache.oozie.service.AuthorizationService, org.apache.oozie.service.UserGroupInformationService, org.apache.oozie.service.HadoopAccessorService, org.apache.oozie.service.JobsConcurrencyService, org.apache.oozie.service.URIHandlerService, org.apache.oozie.service.DagXLogInfoService, org.apache.oozie.service.SchemaService, org.apache.oozie.service.LiteWorkflowAppService, org.apache.oozie.service.JPAService, org.apache.oozie.service.StoreService, org.apache.oozie.service.SLAStoreService, org.apache.oozie.service.DBLiteWorkflowStoreService, org.apache.oozie.service.CallbackService, org.apache.oozie.service.ActionService, org.apache.oozie.service.ShareLibService, org.apache.oozie.service.CallableQueueService, org.apache.oozie.service.ActionCheckerService, org.apache.oozie.service.RecoveryService, org.apache.oozie.service.PurgeService, org.apache.oozie.service.CoordinatorEngineService, org.apache.oozie.service.BundleEngineService, org.apache.oozie.service.DagEngineService, org.apache.oozie.service.CoordMaterializeTriggerService, org.apache.oozie.service.StatusTransitService, org.apache.oozie.service.PauseTransitService, org.apache.oozie.service.GroupsService, org.apache.oozie.service.ProxyUserService, org.apache.oozie.service.XLogStreamingService, org.apache.oozie.service.JvmPauseMonitorService, org.apache.oozie.service.SparkConfigurationService All services to be created and managed by the Oozie Services singleton. Class names must be separated by commas. oozie.services.ext To add/replace services defined in 'oozie.services' with custom implementations. Class names must be separated by commas. oozie.service.XLogStreamingService.buffer.len 4096 4K buffer for streaming the logs progressively oozie.service.HCatAccessorService.jmsconnections default=java.naming.factory.initial#org.apache.activemq.jndi.ActiveMQInitialContextFactory;java.naming.provider.url#tcp://localhost:61616;connectionFactoryNames#ConnectionFactory Specify the map of endpoints to JMS configuration properties. In general, the endpoint identifies the HCatalog server URL. "default" is used if no endpoint is mentioned in the query. If some JMS property is not defined, the system will use the property defined in jndi.properties. The jndi.properties file is retrieved from the application classpath. Mapping rules can also be provided for mapping HCatalog servers to corresponding JMS providers. hcat://${1}.${2}.server.com:8020=java.naming.factory.initial#Dummy.Factory;java.naming.provider.url#tcp://broker.${2}:61616 oozie.service.JMSTopicService.topic.name default=${username} Topic options are ${username} or ${jobId} or a fixed string which can be specified as default or for a particular job type. For example, to have a fixed string topic for workflows, coordinators and bundles, specify it in the following comma-separated format: {jobtype1}={some_string1}, {jobtype2}={some_string2} where the job type can be WORKFLOW, COORDINATOR or BUNDLE.
For example, the following defines topics for workflow job, workflow action, coordinator job, coordinator action, bundle job and bundle action: WORKFLOW=workflow, COORDINATOR=coordinator, BUNDLE=bundle For jobs with no defined topic, the default topic will be ${username} oozie.jms.producer.connection.properties java.naming.factory.initial#org.apache.activemq.jndi.ActiveMQInitialContextFactory;java.naming.provider.url#tcp://localhost:61616;connectionFactoryNames#ConnectionFactory oozie.service.JMSAccessorService.connectioncontext.impl org.apache.oozie.jms.DefaultConnectionContext Specifies the Connection Context implementation oozie.service.ConfigurationService.ignore.system.properties oozie.service.AuthorizationService.security.enabled Specifies "oozie.*" properties that cannot be overridden via Java system properties. Property names must be separated by commas. oozie.service.ConfigurationService.verify.available.properties true Specifies whether the available configurations check is enabled or not. oozie.service.SchedulerService.threads 10 The number of threads to be used by the SchedulerService to run daemon tasks. If maxed out, scheduled daemon tasks will be queued up and delayed until threads become available. oozie.service.AuthorizationService.authorization.enabled false Specifies whether security (user name/admin role) is enabled or not. If disabled, any user can manage the Oozie system and manage any job. oozie.service.AuthorizationService.default.group.as.acl false Enables the old behavior where the user's default group is the job's ACL. oozie.service.InstrumentationService.logging.interval 60 Interval, in seconds, at which instrumentation should be logged by the InstrumentationService. If set to 0 it will not log instrumentation data. oozie.service.PurgeService.older.than 30 Completed workflow jobs older than this value, in days, will be purged by the PurgeService. oozie.service.PurgeService.coord.older.than 7 Completed coordinator jobs older than this value, in days, will be purged by the PurgeService. oozie.service.PurgeService.bundle.older.than 7 Completed bundle jobs older than this value, in days, will be purged by the PurgeService. oozie.service.PurgeService.purge.old.coord.action false Whether to purge completed workflows and their corresponding coordinator actions of long running coordinator jobs if the completed workflow jobs are older than the value specified in oozie.service.PurgeService.older.than. oozie.service.PurgeService.purge.limit 100 Completed Actions purge - limit each purge to this value oozie.service.PurgeService.purge.interval 3600 Interval at which the purge service will run, in seconds. oozie.service.RecoveryService.wf.actions.older.than 120 Age of the actions which are eligible to be queued for recovery, in seconds. oozie.service.RecoveryService.wf.actions.created.time.interval 7 Created time period of the actions which are eligible to be queued for recovery, in days. oozie.service.RecoveryService.callable.batch.size 10 This value determines the number of callables which will be batched together to be executed by a single thread. oozie.service.RecoveryService.push.dependency.interval 200 This value determines the delay for push missing dependency command queueing in the Recovery Service oozie.service.RecoveryService.interval 60 Interval at which the RecoveryService will run, in seconds. oozie.service.RecoveryService.coord.older.than 600 Age of the Coordinator jobs or actions which are eligible to be queued for recovery, in seconds.
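A hedged oozie-site.xml sketch tightening the purge windows described above (the 14/3/3-day values are illustrative assumptions for a busy cluster, not values taken from this file):
  <property>
    <name>oozie.service.PurgeService.older.than</name>
    <value>14</value>
  </property>
  <property>
    <name>oozie.service.PurgeService.coord.older.than</name>
    <value>3</value>
  </property>
  <property>
    <name>oozie.service.PurgeService.bundle.older.than</name>
    <value>3</value>
  </property>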
oozie.service.RecoveryService.bundle.older.than 600 Age of the Bundle jobs which are eligible to be queued for recovery, in seconds. oozie.service.CallableQueueService.queue.size 10000 Max callable queue size oozie.service.CallableQueueService.threads 10 Number of threads used for executing callables oozie.service.CallableQueueService.callable.concurrency 3 Maximum concurrency for a given callable type. Each command is a callable type (submit, start, run, signal, job, jobs, suspend, resume, etc). Each action type is a callable type (Map-Reduce, Pig, SSH, FS, sub-workflow, etc). All commands that use action executors (action-start, action-end, action-kill and action-check) use the action type as the callable type. oozie.service.CallableQueueService.callable.next.eligible true If true, when a callable in the queue has already reached max concurrency, Oozie continuously finds the next one which has not yet reached max concurrency. oozie.service.CallableQueueService.InterruptMapMaxSize 500 Maximum size of the Interrupt Map; the interrupt element will not be inserted in the map if the size is exceeded. oozie.service.CallableQueueService.InterruptTypes kill,resume,suspend,bundle_kill,bundle_resume,bundle_suspend,coord_kill,coord_change,coord_resume,coord_suspend The types of XCommands that are considered to be of Interrupt type oozie.service.CoordMaterializeTriggerService.lookup.interval 300 Coordinator Job Lookup interval (in seconds). oozie.service.CoordMaterializeTriggerService.materialization.window 3600 The Coordinator Job Lookup command materializes each job for this next "window" duration oozie.service.CoordMaterializeTriggerService.callable.batch.size 10 This value determines the number of callables which will be batched together to be executed by a single thread. oozie.service.CoordMaterializeTriggerService.materialization.system.limit 50 This value determines the number of coordinator jobs to be materialized at a given time. oozie.service.coord.normal.default.timeout 120 Default timeout for a coordinator action input check (in minutes) for a normal job. -1 means infinite timeout oozie.service.coord.default.max.timeout 86400 Default maximum timeout for a coordinator action input check (in minutes). 86400 = 60 days oozie.service.coord.input.check.requeue.interval 60000 Command re-queue interval for coordinator data input check (in milliseconds). oozie.service.coord.input.check.requeue.interval.additional.delay 0 This value (in seconds) will be added into oozie.service.coord.input.check.requeue.interval and the resulting value will be the requeue interval for the actions which are waiting for a long time without any input. oozie.service.coord.push.check.requeue.interval 600000 Command re-queue interval for push dependencies (in milliseconds). oozie.service.coord.default.concurrency 1 Default concurrency for a coordinator job to determine the maximum number of actions that should be executed at the same time. -1 means infinite concurrency. oozie.service.coord.default.throttle 12 Default throttle for a coordinator job to determine the maximum number of actions that should be in WAITING state at the same time. oozie.service.coord.materialization.throttling.factor 0.05 Determines the maximum number of actions that should be in WAITING state for a single job at any time. The value is calculated by this factor X the total queue size. oozie.service.coord.check.maximum.frequency true When true, Oozie will reject any coordinators with a frequency faster than 5 minutes.
It is not recommended to disable this check or submit coordinators with frequencies faster than 5 minutes: doing so can cause unintended behavior and additional system stress. oozie.service.ELService.groups job-submit,workflow,wf-sla-submit,coord-job-submit-freq,coord-job-submit-nofuncs,coord-job-submit-data,coord-job-submit-instances,coord-sla-submit,coord-action-create,coord-action-create-inst,coord-sla-create,coord-action-start,coord-job-wait-timeout,bundle-submit,coord-job-submit-initial-instance List of groups for different ELServices oozie.service.ELService.constants.job-submit EL constant declarations, separated by commas, format is [PREFIX:]NAME=CLASS#CONSTANT. oozie.service.ELService.functions.job-submit EL functions declarations, separated by commas, format is [PREFIX:]NAME=CLASS#METHOD. oozie.service.ELService.ext.constants.job-submit EL constant declarations, separated by commas, format is [PREFIX:]NAME=CLASS#CONSTANT. This property is a convenience property to add extensions without having to include all the built in ones. oozie.service.ELService.ext.functions.job-submit EL functions declarations, separated by commas, format is [PREFIX:]NAME=CLASS#METHOD. This property is a convenience property to add extensions without having to include all the built in ones. oozie.service.ELService.constants.workflow KB=org.apache.oozie.util.ELConstantsFunctions#KB, MB=org.apache.oozie.util.ELConstantsFunctions#MB, GB=org.apache.oozie.util.ELConstantsFunctions#GB, TB=org.apache.oozie.util.ELConstantsFunctions#TB, PB=org.apache.oozie.util.ELConstantsFunctions#PB, RECORDS=org.apache.oozie.action.hadoop.HadoopELFunctions#RECORDS, MAP_IN=org.apache.oozie.action.hadoop.HadoopELFunctions#MAP_IN, MAP_OUT=org.apache.oozie.action.hadoop.HadoopELFunctions#MAP_OUT, REDUCE_IN=org.apache.oozie.action.hadoop.HadoopELFunctions#REDUCE_IN, REDUCE_OUT=org.apache.oozie.action.hadoop.HadoopELFunctions#REDUCE_OUT, GROUPS=org.apache.oozie.action.hadoop.HadoopELFunctions#GROUPS EL constant declarations, separated by commas, format is [PREFIX:]NAME=CLASS#CONSTANT. oozie.service.ELService.ext.constants.workflow EL constant declarations, separated by commas, format is [PREFIX:]NAME=CLASS#CONSTANT. This property is a convenience property to add extensions to the built in executors without having to include all the built in ones. 
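Building on the coordinator defaults above, a minimal oozie-site.xml sketch raising per-job concurrency and the WAITING-state throttle (the values 2 and 24 are illustrative assumptions, not recommendations from this file):
  <property>
    <name>oozie.service.coord.default.concurrency</name>
    <value>2</value>
  </property>
  <property>
    <name>oozie.service.coord.default.throttle</name>
    <value>24</value>
  </property>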
oozie.service.ELService.functions.workflow firstNotNull=org.apache.oozie.util.ELConstantsFunctions#firstNotNull, concat=org.apache.oozie.util.ELConstantsFunctions#concat, replaceAll=org.apache.oozie.util.ELConstantsFunctions#replaceAll, appendAll=org.apache.oozie.util.ELConstantsFunctions#appendAll, trim=org.apache.oozie.util.ELConstantsFunctions#trim, timestamp=org.apache.oozie.util.ELConstantsFunctions#timestamp, urlEncode=org.apache.oozie.util.ELConstantsFunctions#urlEncode, toJsonStr=org.apache.oozie.util.ELConstantsFunctions#toJsonStr, toPropertiesStr=org.apache.oozie.util.ELConstantsFunctions#toPropertiesStr, toConfigurationStr=org.apache.oozie.util.ELConstantsFunctions#toConfigurationStr, wf:id=org.apache.oozie.DagELFunctions#wf_id, wf:name=org.apache.oozie.DagELFunctions#wf_name, wf:appPath=org.apache.oozie.DagELFunctions#wf_appPath, wf:conf=org.apache.oozie.DagELFunctions#wf_conf, wf:user=org.apache.oozie.DagELFunctions#wf_user, wf:group=org.apache.oozie.DagELFunctions#wf_group, wf:callback=org.apache.oozie.DagELFunctions#wf_callback, wf:transition=org.apache.oozie.DagELFunctions#wf_transition, wf:lastErrorNode=org.apache.oozie.DagELFunctions#wf_lastErrorNode, wf:errorCode=org.apache.oozie.DagELFunctions#wf_errorCode, wf:errorMessage=org.apache.oozie.DagELFunctions#wf_errorMessage, wf:run=org.apache.oozie.DagELFunctions#wf_run, wf:actionData=org.apache.oozie.DagELFunctions#wf_actionData, wf:actionExternalId=org.apache.oozie.DagELFunctions#wf_actionExternalId, wf:actionTrackerUri=org.apache.oozie.DagELFunctions#wf_actionTrackerUri, wf:actionExternalStatus=org.apache.oozie.DagELFunctions#wf_actionExternalStatus, hadoop:counters=org.apache.oozie.action.hadoop.HadoopELFunctions#hadoop_counters, hadoop:conf=org.apache.oozie.action.hadoop.HadoopELFunctions#hadoop_conf, fs:exists=org.apache.oozie.action.hadoop.FsELFunctions#fs_exists, fs:isDir=org.apache.oozie.action.hadoop.FsELFunctions#fs_isDir, fs:dirSize=org.apache.oozie.action.hadoop.FsELFunctions#fs_dirSize, fs:fileSize=org.apache.oozie.action.hadoop.FsELFunctions#fs_fileSize, fs:blockSize=org.apache.oozie.action.hadoop.FsELFunctions#fs_blockSize, hcat:exists=org.apache.oozie.coord.HCatELFunctions#hcat_exists EL functions declarations, separated by commas, format is [PREFIX:]NAME=CLASS#METHOD. oozie.service.WorkflowAppService.WorkflowDefinitionMaxLength 100000 The maximum length of the workflow definition in bytes. An error will be reported if the length exceeds the given maximum. oozie.service.ELService.ext.functions.workflow EL functions declarations, separated by commas, format is [PREFIX:]NAME=CLASS#METHOD. This property is a convenience property to add extensions to the built in executors without having to include all the built in ones. oozie.service.ELService.constants.wf-sla-submit MINUTES=org.apache.oozie.util.ELConstantsFunctions#SUBMIT_MINUTES, HOURS=org.apache.oozie.util.ELConstantsFunctions#SUBMIT_HOURS, DAYS=org.apache.oozie.util.ELConstantsFunctions#SUBMIT_DAYS EL constant declarations, separated by commas, format is [PREFIX:]NAME=CLASS#CONSTANT. oozie.service.ELService.ext.constants.wf-sla-submit EL constant declarations, separated by commas, format is [PREFIX:]NAME=CLASS#CONSTANT. This property is a convenience property to add extensions to the built in executors without having to include all the built in ones. oozie.service.ELService.functions.wf-sla-submit EL functions declarations, separated by commas, format is [PREFIX:]NAME=CLASS#METHOD.
oozie.service.ELService.ext.functions.wf-sla-submit EL functions declarations, separated by commas, format is [PREFIX:]NAME=CLASS#METHOD. This property is a convenience property to add extensions to the built in executors without having to include all the built in ones. oozie.service.ELService.constants.coord-job-submit-freq EL constant declarations, separated by commas, format is [PREFIX:]NAME=CLASS#CONSTANT. oozie.service.ELService.ext.constants.coord-job-submit-freq EL constant declarations, separated by commas, format is [PREFIX:]NAME=CLASS#CONSTANT. This property is a convenience property to add extensions to the built in executors without having to include all the built in ones. oozie.service.ELService.functions.coord-job-submit-freq coord:days=org.apache.oozie.coord.CoordELFunctions#ph1_coord_days, coord:months=org.apache.oozie.coord.CoordELFunctions#ph1_coord_months, coord:hours=org.apache.oozie.coord.CoordELFunctions#ph1_coord_hours, coord:minutes=org.apache.oozie.coord.CoordELFunctions#ph1_coord_minutes, coord:endOfDays=org.apache.oozie.coord.CoordELFunctions#ph1_coord_endOfDays, coord:endOfMonths=org.apache.oozie.coord.CoordELFunctions#ph1_coord_endOfMonths, coord:conf=org.apache.oozie.coord.CoordELFunctions#coord_conf, coord:user=org.apache.oozie.coord.CoordELFunctions#coord_user, hadoop:conf=org.apache.oozie.action.hadoop.HadoopELFunctions#hadoop_conf EL functions declarations, separated by commas, format is [PREFIX:]NAME=CLASS#METHOD. oozie.service.ELService.functions.coord-job-submit-initial-instance ${oozie.service.ELService.functions.coord-job-submit-nofuncs}, coord:dateOffset=org.apache.oozie.coord.CoordELFunctions#ph2_coord_dateOffset, coord:dateTzOffset=org.apache.oozie.coord.CoordELFunctions#ph2_coord_dateTzOffset EL functions for coord job submit initial instance, separated by commas, format is [PREFIX:]NAME=CLASS#METHOD. oozie.service.ELService.ext.functions.coord-job-submit-freq EL functions declarations, separated by commas, format is [PREFIX:]NAME=CLASS#METHOD. This property is a convenience property to add extensions to the built in executors without having to include all the built in ones. oozie.service.ELService.constants.coord-job-wait-timeout EL constant declarations, separated by commas, format is [PREFIX:]NAME=CLASS#CONSTANT. oozie.service.ELService.ext.constants.coord-job-wait-timeout EL constant declarations, separated by commas, format is [PREFIX:]NAME=CLASS#CONSTANT. This property is a convenience property to add extensions without having to include all the built in ones. oozie.service.ELService.functions.coord-job-wait-timeout coord:days=org.apache.oozie.coord.CoordELFunctions#ph1_coord_days, coord:months=org.apache.oozie.coord.CoordELFunctions#ph1_coord_months, coord:hours=org.apache.oozie.coord.CoordELFunctions#ph1_coord_hours, coord:minutes=org.apache.oozie.coord.CoordELFunctions#ph1_coord_minutes, hadoop:conf=org.apache.oozie.action.hadoop.HadoopELFunctions#hadoop_conf EL functions declarations, separated by commas, format is [PREFIX:]NAME=CLASS#METHOD. oozie.service.ELService.ext.functions.coord-job-wait-timeout EL functions declarations, separated by commas, format is [PREFIX:]NAME=CLASS#METHOD. This property is a convenience property to add extensions without having to include all the built in ones.
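The ext.constants/ext.functions properties above exist so deployments can register custom EL helpers without restating the built-in lists. A sketch, assuming a hypothetical class com.example.MyELFunctions with a public static method today():
  <property>
    <name>oozie.service.ELService.ext.functions.workflow</name>
    <!-- myel:today and com.example.MyELFunctions are invented for illustration -->
    <value>myel:today=com.example.MyELFunctions#today</value>
  </property>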
oozie.service.ELService.constants.coord-job-submit-nofuncs MINUTE=org.apache.oozie.coord.CoordELConstants#SUBMIT_MINUTE, HOUR=org.apache.oozie.coord.CoordELConstants#SUBMIT_HOUR, DAY=org.apache.oozie.coord.CoordELConstants#SUBMIT_DAY, MONTH=org.apache.oozie.coord.CoordELConstants#SUBMIT_MONTH, YEAR=org.apache.oozie.coord.CoordELConstants#SUBMIT_YEAR EL constant declarations, separated by commas, format is [PREFIX:]NAME=CLASS#CONSTANT. oozie.service.ELService.ext.constants.coord-job-submit-nofuncs EL constant declarations, separated by commas, format is [PREFIX:]NAME=CLASS#CONSTANT. This property is a convenience property to add extensions to the built in executors without having to include all the built in ones. oozie.service.ELService.functions.coord-job-submit-nofuncs coord:conf=org.apache.oozie.coord.CoordELFunctions#coord_conf, coord:user=org.apache.oozie.coord.CoordELFunctions#coord_user, hadoop:conf=org.apache.oozie.action.hadoop.HadoopELFunctions#hadoop_conf EL functions declarations, separated by commas, format is [PREFIX:]NAME=CLASS#METHOD. oozie.service.ELService.ext.functions.coord-job-submit-nofuncs EL functions declarations, separated by commas, format is [PREFIX:]NAME=CLASS#METHOD. This property is a convenience property to add extensions to the built in executors without having to include all the built in ones. oozie.service.ELService.constants.coord-job-submit-instances EL constant declarations, separated by commas, format is [PREFIX:]NAME=CLASS#CONSTANT. oozie.service.ELService.ext.constants.coord-job-submit-instances EL constant declarations, separated by commas, format is [PREFIX:]NAME=CLASS#CONSTANT. This property is a convenience property to add extensions to the built in executors without having to include all the built in ones. oozie.service.ELService.functions.coord-job-submit-instances coord:hoursInDay=org.apache.oozie.coord.CoordELFunctions#ph1_coord_hoursInDay_echo, coord:daysInMonth=org.apache.oozie.coord.CoordELFunctions#ph1_coord_daysInMonth_echo, coord:tzOffset=org.apache.oozie.coord.CoordELFunctions#ph1_coord_tzOffset_echo, coord:current=org.apache.oozie.coord.CoordELFunctions#ph1_coord_current_echo, coord:currentRange=org.apache.oozie.coord.CoordELFunctions#ph1_coord_currentRange_echo, coord:offset=org.apache.oozie.coord.CoordELFunctions#ph1_coord_offset_echo, coord:latest=org.apache.oozie.coord.CoordELFunctions#ph1_coord_latest_echo, coord:latestRange=org.apache.oozie.coord.CoordELFunctions#ph1_coord_latestRange_echo, coord:future=org.apache.oozie.coord.CoordELFunctions#ph1_coord_future_echo, coord:futureRange=org.apache.oozie.coord.CoordELFunctions#ph1_coord_futureRange_echo, coord:formatTime=org.apache.oozie.coord.CoordELFunctions#ph1_coord_formatTime_echo, coord:epochTime=org.apache.oozie.coord.CoordELFunctions#ph1_coord_epochTime_echo, coord:conf=org.apache.oozie.coord.CoordELFunctions#coord_conf, coord:user=org.apache.oozie.coord.CoordELFunctions#coord_user, coord:absolute=org.apache.oozie.coord.CoordELFunctions#ph1_coord_absolute_echo, hadoop:conf=org.apache.oozie.action.hadoop.HadoopELFunctions#hadoop_conf, coord:dateOffset=org.apache.oozie.coord.CoordELFunctions#ph1_coord_dateOffset_echo, coord:dateTzOffset=org.apache.oozie.coord.CoordELFunctions#ph1_coord_dateTzOffset_echo EL functions declarations, separated by commas, format is [PREFIX:]NAME=CLASS#METHOD. oozie.service.ELService.ext.functions.coord-job-submit-instances EL functions declarations, separated by commas, format is [PREFIX:]NAME=CLASS#METHOD. 
This property is a convenience property to add extensions to the built in executors without having to include all the built in ones. oozie.service.ELService.constants.coord-job-submit-data EL constant declarations, separated by commas, format is [PREFIX:]NAME=CLASS#CONSTANT. oozie.service.ELService.ext.constants.coord-job-submit-data EL constant declarations, separated by commas, format is [PREFIX:]NAME=CLASS#CONSTANT. This property is a convenience property to add extensions to the built in executors without having to include all the built in ones. oozie.service.ELService.functions.coord-job-submit-data coord:dataIn=org.apache.oozie.coord.CoordELFunctions#ph1_coord_dataIn_echo, coord:dataOut=org.apache.oozie.coord.CoordELFunctions#ph1_coord_dataOut_echo, coord:nominalTime=org.apache.oozie.coord.CoordELFunctions#ph1_coord_nominalTime_echo_wrap, coord:actualTime=org.apache.oozie.coord.CoordELFunctions#ph1_coord_actualTime_echo_wrap, coord:dateOffset=org.apache.oozie.coord.CoordELFunctions#ph1_coord_dateOffset_echo, coord:dateTzOffset=org.apache.oozie.coord.CoordELFunctions#ph1_coord_dateTzOffset_echo, coord:formatTime=org.apache.oozie.coord.CoordELFunctions#ph1_coord_formatTime_echo, coord:epochTime=org.apache.oozie.coord.CoordELFunctions#ph1_coord_epochTime_echo, coord:actionId=org.apache.oozie.coord.CoordELFunctions#ph1_coord_actionId_echo, coord:name=org.apache.oozie.coord.CoordELFunctions#ph1_coord_name_echo, coord:conf=org.apache.oozie.coord.CoordELFunctions#coord_conf, coord:user=org.apache.oozie.coord.CoordELFunctions#coord_user, coord:databaseIn=org.apache.oozie.coord.HCatELFunctions#ph1_coord_databaseIn_echo, coord:databaseOut=org.apache.oozie.coord.HCatELFunctions#ph1_coord_databaseOut_echo, coord:tableIn=org.apache.oozie.coord.HCatELFunctions#ph1_coord_tableIn_echo, coord:tableOut=org.apache.oozie.coord.HCatELFunctions#ph1_coord_tableOut_echo, coord:dataInPartitionFilter=org.apache.oozie.coord.HCatELFunctions#ph1_coord_dataInPartitionFilter_echo, coord:dataInPartitionMin=org.apache.oozie.coord.HCatELFunctions#ph1_coord_dataInPartitionMin_echo, coord:dataInPartitionMax=org.apache.oozie.coord.HCatELFunctions#ph1_coord_dataInPartitionMax_echo, coord:dataInPartitions=org.apache.oozie.coord.HCatELFunctions#ph1_coord_dataInPartitions_echo, coord:dataOutPartitions=org.apache.oozie.coord.HCatELFunctions#ph1_coord_dataOutPartitions_echo, coord:dataOutPartitionValue=org.apache.oozie.coord.HCatELFunctions#ph1_coord_dataOutPartitionValue_echo, hadoop:conf=org.apache.oozie.action.hadoop.HadoopELFunctions#hadoop_conf EL functions declarations, separated by commas, format is [PREFIX:]NAME=CLASS#METHOD. oozie.service.ELService.ext.functions.coord-job-submit-data EL functions declarations, separated by commas, format is [PREFIX:]NAME=CLASS#METHOD. This property is a convenience property to add extensions to the built in executors without having to include all the built in ones. oozie.service.ELService.constants.coord-sla-submit MINUTES=org.apache.oozie.coord.CoordELConstants#SUBMIT_MINUTES, HOURS=org.apache.oozie.coord.CoordELConstants#SUBMIT_HOURS, DAYS=org.apache.oozie.coord.CoordELConstants#SUBMIT_DAYS EL constant declarations, separated by commas, format is [PREFIX:]NAME=CLASS#CONSTANT. oozie.service.ELService.ext.constants.coord-sla-submit EL constant declarations, separated by commas, format is [PREFIX:]NAME=CLASS#CONSTANT. This property is a convenience property to add extensions to the built in executors without having to include all the built in ones. 
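The phase-1 echo functions above are placeholders that Oozie resolves in later phases; in a coordinator application they are simply referenced by name. A minimal fragment sketch from a coordinator action's workflow configuration (the 'input' dataset and property names are invented for illustration):
  <property>
    <name>inputDir</name>
    <value>${coord:dataIn('input')}</value>
  </property>
  <property>
    <name>runDate</name>
    <value>${coord:nominalTime()}</value>
  </property>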
oozie.service.ELService.functions.bundle-submit bundle:conf=org.apache.oozie.bundle.BundleELFunctions#bundle_conf oozie.service.ELService.functions.coord-sla-submit coord:dataOut=org.apache.oozie.coord.CoordELFunctions#ph1_coord_dataOut_echo, coord:nominalTime=org.apache.oozie.coord.CoordELFunctions#ph1_coord_nominalTime_echo_fixed, coord:actualTime=org.apache.oozie.coord.CoordELFunctions#ph1_coord_actualTime_echo_wrap, coord:dateOffset=org.apache.oozie.coord.CoordELFunctions#ph1_coord_dateOffset_echo, coord:dateTzOffset=org.apache.oozie.coord.CoordELFunctions#ph1_coord_dateTzOffset_echo, coord:formatTime=org.apache.oozie.coord.CoordELFunctions#ph1_coord_formatTime_echo, coord:epochTime=org.apache.oozie.coord.CoordELFunctions#ph1_coord_epochTime_echo, coord:actionId=org.apache.oozie.coord.CoordELFunctions#ph1_coord_actionId_echo, coord:name=org.apache.oozie.coord.CoordELFunctions#ph1_coord_name_echo, coord:conf=org.apache.oozie.coord.CoordELFunctions#coord_conf, coord:user=org.apache.oozie.coord.CoordELFunctions#coord_user, coord:databaseOut=org.apache.oozie.coord.HCatELFunctions#ph1_coord_databaseOut_echo, coord:tableOut=org.apache.oozie.coord.HCatELFunctions#ph1_coord_tableOut_echo, coord:dataOutPartitions=org.apache.oozie.coord.HCatELFunctions#ph1_coord_dataOutPartitions_echo, coord:dataOutPartitionValue=org.apache.oozie.coord.HCatELFunctions#ph1_coord_dataOutPartitionValue_echo, hadoop:conf=org.apache.oozie.action.hadoop.HadoopELFunctions#hadoop_conf EL functions declarations, separated by commas, format is [PREFIX:]NAME=CLASS#METHOD. oozie.service.ELService.ext.functions.coord-sla-submit EL functions declarations, separated by commas, format is [PREFIX:]NAME=CLASS#METHOD. This property is a convenience property to add extensions to the built in executors without having to include all the built in ones. oozie.service.ELService.constants.coord-action-create EL constant declarations, separated by commas, format is [PREFIX:]NAME=CLASS#CONSTANT. oozie.service.ELService.ext.constants.coord-action-create EL constant declarations, separated by commas, format is [PREFIX:]NAME=CLASS#CONSTANT. This property is a convenience property to add extensions to the built in executors without having to include all the built in ones. 
oozie.service.ELService.functions.coord-action-create coord:hoursInDay=org.apache.oozie.coord.CoordELFunctions#ph2_coord_hoursInDay, coord:daysInMonth=org.apache.oozie.coord.CoordELFunctions#ph2_coord_daysInMonth, coord:tzOffset=org.apache.oozie.coord.CoordELFunctions#ph2_coord_tzOffset, coord:current=org.apache.oozie.coord.CoordELFunctions#ph2_coord_current, coord:currentRange=org.apache.oozie.coord.CoordELFunctions#ph2_coord_currentRange, coord:offset=org.apache.oozie.coord.CoordELFunctions#ph2_coord_offset, coord:latest=org.apache.oozie.coord.CoordELFunctions#ph2_coord_latest_echo, coord:latestRange=org.apache.oozie.coord.CoordELFunctions#ph2_coord_latestRange_echo, coord:future=org.apache.oozie.coord.CoordELFunctions#ph2_coord_future_echo, coord:futureRange=org.apache.oozie.coord.CoordELFunctions#ph2_coord_futureRange_echo, coord:actionId=org.apache.oozie.coord.CoordELFunctions#ph2_coord_actionId, coord:name=org.apache.oozie.coord.CoordELFunctions#ph2_coord_name, coord:formatTime=org.apache.oozie.coord.CoordELFunctions#ph2_coord_formatTime, coord:epochTime=org.apache.oozie.coord.CoordELFunctions#ph2_coord_epochTime, coord:conf=org.apache.oozie.coord.CoordELFunctions#coord_conf, coord:user=org.apache.oozie.coord.CoordELFunctions#coord_user, coord:absolute=org.apache.oozie.coord.CoordELFunctions#ph2_coord_absolute_echo, coord:absoluteRange=org.apache.oozie.coord.CoordELFunctions#ph2_coord_absolute_range, hadoop:conf=org.apache.oozie.action.hadoop.HadoopELFunctions#hadoop_conf EL functions declarations, separated by commas, format is [PREFIX:]NAME=CLASS#METHOD. oozie.service.ELService.ext.functions.coord-action-create EL functions declarations, separated by commas, format is [PREFIX:]NAME=CLASS#METHOD. This property is a convenience property to add extensions to the built in executors without having to include all the built in ones. oozie.service.ELService.constants.coord-action-create-inst EL constant declarations, separated by commas, format is [PREFIX:]NAME=CLASS#CONSTANT. oozie.service.ELService.ext.constants.coord-action-create-inst EL constant declarations, separated by commas, format is [PREFIX:]NAME=CLASS#CONSTANT. This property is a convenience property to add extensions to the built in executors without having to include all the built in ones. 
oozie.service.ELService.functions.coord-action-create-inst coord:hoursInDay=org.apache.oozie.coord.CoordELFunctions#ph2_coord_hoursInDay, coord:daysInMonth=org.apache.oozie.coord.CoordELFunctions#ph2_coord_daysInMonth, coord:tzOffset=org.apache.oozie.coord.CoordELFunctions#ph2_coord_tzOffset, coord:current=org.apache.oozie.coord.CoordELFunctions#ph2_coord_current_echo, coord:currentRange=org.apache.oozie.coord.CoordELFunctions#ph2_coord_currentRange_echo, coord:offset=org.apache.oozie.coord.CoordELFunctions#ph2_coord_offset_echo, coord:latest=org.apache.oozie.coord.CoordELFunctions#ph2_coord_latest_echo, coord:latestRange=org.apache.oozie.coord.CoordELFunctions#ph2_coord_latestRange_echo, coord:future=org.apache.oozie.coord.CoordELFunctions#ph2_coord_future_echo, coord:futureRange=org.apache.oozie.coord.CoordELFunctions#ph2_coord_futureRange_echo, coord:formatTime=org.apache.oozie.coord.CoordELFunctions#ph2_coord_formatTime, coord:epochTime=org.apache.oozie.coord.CoordELFunctions#ph2_coord_epochTime, coord:conf=org.apache.oozie.coord.CoordELFunctions#coord_conf, coord:user=org.apache.oozie.coord.CoordELFunctions#coord_user, coord:absolute=org.apache.oozie.coord.CoordELFunctions#ph2_coord_absolute_echo, coord:absoluteRange=org.apache.oozie.coord.CoordELFunctions#ph2_coord_absolute_range, hadoop:conf=org.apache.oozie.action.hadoop.HadoopELFunctions#hadoop_conf, coord:dateOffset=org.apache.oozie.coord.CoordELFunctions#ph2_coord_dateOffset, coord:dateTzOffset=org.apache.oozie.coord.CoordELFunctions#ph2_coord_dateTzOffset EL functions declarations, separated by commas, format is [PREFIX:]NAME=CLASS#METHOD. oozie.service.ELService.ext.functions.coord-action-create-inst EL functions declarations, separated by commas, format is [PREFIX:]NAME=CLASS#METHOD. This property is a convenience property to add extensions to the built in executors without having to include all the built in ones. oozie.service.ELService.constants.coord-sla-create EL constant declarations, separated by commas, format is [PREFIX:]NAME=CLASS#CONSTANT. oozie.service.ELService.ext.constants.coord-sla-create MINUTES=org.apache.oozie.coord.CoordELConstants#SUBMIT_MINUTES, HOURS=org.apache.oozie.coord.CoordELConstants#SUBMIT_HOURS, DAYS=org.apache.oozie.coord.CoordELConstants#SUBMIT_DAYS EL constant declarations, separated by commas, format is [PREFIX:]NAME=CLASS#CONSTANT. This property is a convenience property to add extensions to the built in executors without having to include all the built in ones. 
oozie.service.ELService.functions.coord-sla-create coord:dataOut=org.apache.oozie.coord.CoordELFunctions#ph3_coord_dataOut, coord:nominalTime=org.apache.oozie.coord.CoordELFunctions#ph2_coord_nominalTime, coord:actualTime=org.apache.oozie.coord.CoordELFunctions#ph2_coord_actualTime, coord:dateOffset=org.apache.oozie.coord.CoordELFunctions#ph2_coord_dateOffset, coord:dateTzOffset=org.apache.oozie.coord.CoordELFunctions#ph2_coord_dateTzOffset, coord:formatTime=org.apache.oozie.coord.CoordELFunctions#ph2_coord_formatTime, coord:epochTime=org.apache.oozie.coord.CoordELFunctions#ph2_coord_epochTime, coord:actionId=org.apache.oozie.coord.CoordELFunctions#ph2_coord_actionId, coord:name=org.apache.oozie.coord.CoordELFunctions#ph2_coord_name, coord:conf=org.apache.oozie.coord.CoordELFunctions#coord_conf, coord:user=org.apache.oozie.coord.CoordELFunctions#coord_user, coord:databaseOut=org.apache.oozie.coord.HCatELFunctions#ph3_coord_databaseOut, coord:tableOut=org.apache.oozie.coord.HCatELFunctions#ph3_coord_tableOut, coord:dataOutPartitions=org.apache.oozie.coord.HCatELFunctions#ph3_coord_dataOutPartitions, coord:dataOutPartitionValue=org.apache.oozie.coord.HCatELFunctions#ph3_coord_dataOutPartitionValue, hadoop:conf=org.apache.oozie.action.hadoop.HadoopELFunctions#hadoop_conf EL functions declarations, separated by commas, format is [PREFIX:]NAME=CLASS#METHOD. oozie.service.ELService.ext.functions.coord-sla-create EL functions declarations, separated by commas, format is [PREFIX:]NAME=CLASS#METHOD. This property is a convenience property to add extensions to the built in executors without having to include all the built in ones. oozie.service.ELService.constants.coord-action-start EL constant declarations, separated by commas, format is [PREFIX:]NAME=CLASS#CONSTANT. oozie.service.ELService.ext.constants.coord-action-start EL constant declarations, separated by commas, format is [PREFIX:]NAME=CLASS#CONSTANT. This property is a convenience property to add extensions to the built in executors without having to include all the built in ones. 
oozie.service.ELService.functions.coord-action-start coord:hoursInDay=org.apache.oozie.coord.CoordELFunctions#ph3_coord_hoursInDay, coord:daysInMonth=org.apache.oozie.coord.CoordELFunctions#ph3_coord_daysInMonth, coord:tzOffset=org.apache.oozie.coord.CoordELFunctions#ph3_coord_tzOffset, coord:latest=org.apache.oozie.coord.CoordELFunctions#ph3_coord_latest, coord:latestRange=org.apache.oozie.coord.CoordELFunctions#ph3_coord_latestRange, coord:future=org.apache.oozie.coord.CoordELFunctions#ph3_coord_future, coord:futureRange=org.apache.oozie.coord.CoordELFunctions#ph3_coord_futureRange, coord:dataIn=org.apache.oozie.coord.CoordELFunctions#ph3_coord_dataIn, coord:dataOut=org.apache.oozie.coord.CoordELFunctions#ph3_coord_dataOut, coord:nominalTime=org.apache.oozie.coord.CoordELFunctions#ph3_coord_nominalTime, coord:actualTime=org.apache.oozie.coord.CoordELFunctions#ph3_coord_actualTime, coord:dateOffset=org.apache.oozie.coord.CoordELFunctions#ph3_coord_dateOffset, coord:dateTzOffset=org.apache.oozie.coord.CoordELFunctions#ph3_coord_dateTzOffset, coord:formatTime=org.apache.oozie.coord.CoordELFunctions#ph3_coord_formatTime, coord:epochTime=org.apache.oozie.coord.CoordELFunctions#ph3_coord_epochTime, coord:actionId=org.apache.oozie.coord.CoordELFunctions#ph3_coord_actionId, coord:name=org.apache.oozie.coord.CoordELFunctions#ph3_coord_name, coord:conf=org.apache.oozie.coord.CoordELFunctions#coord_conf, coord:user=org.apache.oozie.coord.CoordELFunctions#coord_user, coord:databaseIn=org.apache.oozie.coord.HCatELFunctions#ph3_coord_databaseIn, coord:databaseOut=org.apache.oozie.coord.HCatELFunctions#ph3_coord_databaseOut, coord:tableIn=org.apache.oozie.coord.HCatELFunctions#ph3_coord_tableIn, coord:tableOut=org.apache.oozie.coord.HCatELFunctions#ph3_coord_tableOut, coord:dataInPartitionFilter=org.apache.oozie.coord.HCatELFunctions#ph3_coord_dataInPartitionFilter, coord:dataInPartitionMin=org.apache.oozie.coord.HCatELFunctions#ph3_coord_dataInPartitionMin, coord:dataInPartitionMax=org.apache.oozie.coord.HCatELFunctions#ph3_coord_dataInPartitionMax, coord:dataInPartitions=org.apache.oozie.coord.HCatELFunctions#ph3_coord_dataInPartitions, coord:dataOutPartitions=org.apache.oozie.coord.HCatELFunctions#ph3_coord_dataOutPartitions, coord:dataOutPartitionValue=org.apache.oozie.coord.HCatELFunctions#ph3_coord_dataOutPartitionValue, hadoop:conf=org.apache.oozie.action.hadoop.HadoopELFunctions#hadoop_conf EL functions declarations, separated by commas, format is [PREFIX:]NAME=CLASS#METHOD. oozie.service.ELService.ext.functions.coord-action-start EL functions declarations, separated by commas, format is [PREFIX:]NAME=CLASS#METHOD. This property is a convenience property to add extensions to the built in executors without having to include all the built in ones. oozie.service.ELService.latest-el.use-current-time false Determines whether to use the current time to determine the latest dependency or the action creation time. This is for backward compatibility with older oozie behaviour. oozie.service.UUIDService.generator counter random: generated UUIDs will be random strings. counter: generated UUIDs will be a counter postfixed with the system startup time. oozie.service.DBLiteWorkflowStoreService.status.metrics.collection.interval 5 Workflow Status metrics collection interval in minutes. oozie.service.DBLiteWorkflowStoreService.status.metrics.window 3600 Workflow Status metrics collection window in seconds. Workflow status will be instrumented for the window.
oozie.db.schema.name oozie Oozie database name oozie.service.JPAService.create.db.schema false Creates the Oozie DB. If set to true, it creates the DB schema if it does not exist. If the DB schema exists, it is a NOP. If set to false, it does not create the DB schema. If the DB schema does not exist, start up fails. oozie.service.JPAService.validate.db.connection true Validates DB connections from the DB connection pool. If the 'oozie.service.JPAService.create.db.schema' property is set to true, this property is ignored. oozie.service.JPAService.validate.db.connection.eviction.interval 300000 Validates DB connections from the DB connection pool. When validate db connection 'TestWhileIdle' is true, the number of milliseconds to sleep between runs of the idle object evictor thread. oozie.service.JPAService.validate.db.connection.eviction.num 10 Validates DB connections from the DB connection pool. When validate db connection 'TestWhileIdle' is true, the number of objects to examine during each run of the idle object evictor thread. oozie.service.JPAService.connection.data.source org.apache.commons.dbcp.BasicDataSource DataSource to be used for connection pooling. oozie.service.JPAService.connection.properties DataSource connection properties. oozie.service.JPAService.jdbc.driver org.apache.derby.jdbc.EmbeddedDriver JDBC driver class. oozie.service.JPAService.jdbc.url jdbc:derby:${oozie.data.dir}/${oozie.db.schema.name}-db;create=true JDBC URL. oozie.service.JPAService.jdbc.username sa DB user name. oozie.service.JPAService.jdbc.password DB user password. IMPORTANT: if the password is empty, leave a 1-space string; the service trims the value, and if empty, Configuration assumes it is NULL. IMPORTANT: if the StoreServicePasswordService is active, it will reset this value with the value given in the console. oozie.service.JPAService.pool.max.active.conn 10 Max number of connections. oozie.service.JPAService.openjpa.BrokerImpl non-finalizing The default OpenJPAEntityManager implementation automatically closes itself during instance finalization. This guards against accidental resource leaks that may occur if a developer fails to explicitly close EntityManagers when finished with them, but it also incurs a scalability bottleneck, since the JVM must perform synchronization during instance creation, and since the finalizer thread will have more instances to monitor. To avoid this overhead, set the openjpa.BrokerImpl configuration property to non-finalizing. To use the default implementation, set it to an empty space. oozie.service.SchemaService.wf.schemas oozie-workflow-0.1.xsd,oozie-workflow-0.2.xsd,oozie-workflow-0.2.5.xsd,oozie-workflow-0.3.xsd,oozie-workflow-0.4.xsd, oozie-workflow-0.4.5.xsd,oozie-workflow-0.5.xsd, shell-action-0.1.xsd,shell-action-0.2.xsd,shell-action-0.3.xsd, email-action-0.1.xsd,email-action-0.2.xsd, hive-action-0.2.xsd,hive-action-0.3.xsd,hive-action-0.4.xsd,hive-action-0.5.xsd,hive-action-0.6.xsd, sqoop-action-0.2.xsd,sqoop-action-0.3.xsd,sqoop-action-0.4.xsd, ssh-action-0.1.xsd,ssh-action-0.2.xsd, distcp-action-0.1.xsd,distcp-action-0.2.xsd, oozie-sla-0.1.xsd,oozie-sla-0.2.xsd, hive2-action-0.1.xsd, hive2-action-0.2.xsd, spark-action-0.1.xsd,spark-action-0.2.xsd List of schemas for workflows (separated by commas). oozie.service.SchemaService.wf.ext.schemas List of additional schemas for workflows (separated by commas).
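The Derby defaults above suit only an embedded setup; a common override sketch pointing JPAService at an external MySQL database (host, schema name and credentials are placeholders, and the JDBC driver jar must be on Oozie's classpath):
  <property>
    <name>oozie.service.JPAService.jdbc.driver</name>
    <value>com.mysql.jdbc.Driver</value>
  </property>
  <property>
    <name>oozie.service.JPAService.jdbc.url</name>
    <value>jdbc:mysql://db.example.com:3306/oozie</value>
  </property>
  <property>
    <name>oozie.service.JPAService.jdbc.username</name>
    <value>oozie</value>
  </property>
  <property>
    <!-- Placeholder; per the note above, use a 1-space string if the password must be empty -->
    <name>oozie.service.JPAService.jdbc.password</name>
    <value>oozie-password</value>
  </property>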
oozie.service.SchemaService.coord.schemas oozie-coordinator-0.1.xsd,oozie-coordinator-0.2.xsd,oozie-coordinator-0.3.xsd,oozie-coordinator-0.4.xsd, oozie-coordinator-0.5.xsd,oozie-sla-0.1.xsd,oozie-sla-0.2.xsd List of schemas for coordinators (separated by commas). oozie.service.SchemaService.coord.ext.schemas List of additional schemas for coordinators (separated by commas). oozie.service.SchemaService.bundle.schemas oozie-bundle-0.1.xsd,oozie-bundle-0.2.xsd List of schemas for bundles (separated by commas). oozie.service.SchemaService.bundle.ext.schemas List of additional schemas for bundles (separated by commas). oozie.service.SchemaService.sla.schemas gms-oozie-sla-0.1.xsd,oozie-sla-0.2.xsd List of schemas for semantic validation for GMS SLA (separated by commas). oozie.service.SchemaService.sla.ext.schemas List of additional schemas for semantic validation for GMS SLA (separated by commas). oozie.service.CallbackService.base.url ${oozie.base.url}/callback Base callback URL used by ActionExecutors. oozie.service.CallbackService.early.requeue.max.retries 5 If Oozie receives a callback too early (while the action is in PREP state), it will requeue the command this many times to give the action time to transition to RUNNING. oozie.servlet.CallbackServlet.max.data.len 2048 Max size in characters for the action completion data output. oozie.external.stats.max.size -1 Max size in bytes for action stats. -1 means infinite value. oozie.JobCommand.job.console.url ${oozie.base.url}?job= Base console URL for a workflow job. oozie.service.ActionService.executor.classes org.apache.oozie.action.decision.DecisionActionExecutor, org.apache.oozie.action.hadoop.JavaActionExecutor, org.apache.oozie.action.hadoop.FsActionExecutor, org.apache.oozie.action.hadoop.MapReduceActionExecutor, org.apache.oozie.action.hadoop.PigActionExecutor, org.apache.oozie.action.hadoop.HiveActionExecutor, org.apache.oozie.action.hadoop.ShellActionExecutor, org.apache.oozie.action.hadoop.SqoopActionExecutor, org.apache.oozie.action.hadoop.DistcpActionExecutor, org.apache.oozie.action.hadoop.Hive2ActionExecutor, org.apache.oozie.action.ssh.SshActionExecutor, org.apache.oozie.action.oozie.SubWorkflowActionExecutor, org.apache.oozie.action.email.EmailActionExecutor, org.apache.oozie.action.hadoop.SparkActionExecutor List of ActionExecutors classes (separated by commas). Only action types with associated executors can be used in workflows. oozie.service.ActionService.executor.ext.classes List of ActionExecutors extension classes (separated by commas). Only action types with associated executors can be used in workflows. This property is a convenience property to add extensions to the built in executors without having to include all the built in ones. oozie.service.ActionCheckerService.action.check.interval 60 The frequency at which the ActionCheckerService will run. oozie.service.ActionCheckerService.action.check.delay 600 The time, in seconds, between an ActionCheck for the same action. oozie.service.ActionCheckerService.callable.batch.size 10 This value determines the number of actions which will be batched together to be executed by a single thread. oozie.service.StatusTransitService.statusTransit.interval 60 The frequency in seconds at which the StatusTransitService will run. oozie.service.StatusTransitService.backward.support.for.coord.status false Set to true if a coordinator job is submitted using 'uri:oozie:coordinator:0.1' and needs to keep the Oozie 2.x status transit. If set to true: 1.
SUCCEEDED state in coordinator job means materialization done. 2. No DONEWITHERROR state in coordinator job 3. No PAUSED or PREPPAUSED state in coordinator job 4. PREPSUSPENDED becomes SUSPENDED in coordinator job oozie.service.StatusTransitService.backward.support.for.states.without.error true true, if you want to keep Oozie 3.2 status transit. Change it to false for Oozie 4.x releases. If set true, no states like RUNNINGWITHERROR, SUSPENDEDWITHERROR and PAUSEDWITHERROR for coordinator and bundle oozie.service.PauseTransitService.PauseTransit.interval 60 The frequency in seconds at which the PauseTransitService will run. oozie.action.max.output.data 2048 Max size in characters for output data. oozie.action.fs.glob.max 50000 Maximum number of globbed files. oozie.action.launcher.am.restart.kill.childjobs true Multiple instances of launcher jobs can happen due to RM non-work preserving recovery on RM restart, AM recovery due to crashes or AM network connectivity loss. This could also lead to orphaned child jobs of the old AM attempts leading to conflicting runs. This kills child jobs of previous attempts using YARN application tags. oozie.action.launcher.mapreduce.job.ubertask.enable true Enables Uber Mode for the launcher job in YARN/Hadoop 2 (no effect in Hadoop 1) for all action types by default. This can be overridden on a per-action-type basis by setting oozie.action.#action-type#.launcher.mapreduce.job.ubertask.enable in oozie-site.xml (where #action-type# is the action type; for example, "pig"). And that can be overridden on a per-action basis by setting oozie.launcher.mapreduce.job.ubertask.enable in an action's configuration section in a workflow. In summary, the priority is this: 1. action's configuration section in a workflow 2. oozie.action.#action-type#.launcher.mapreduce.job.ubertask.enable in oozie-site 3. oozie.action.launcher.mapreduce.job.ubertask.enable in oozie-site oozie.action.shell.launcher.mapreduce.job.ubertask.enable false The Shell action may have issues with the $PATH environment when using Uber Mode, and so Uber Mode is disabled by default for it. See oozie.action.launcher.mapreduce.job.ubertask.enable oozie.action.spark.setup.hadoop.conf.dir false Oozie action.xml (oozie.action.conf.xml) contains all the hadoop configuration and user provided configurations. This property will allow users to copy Oozie action.xml as hadoop *-site configuration files. The advantage is that users need not manage these files in the spark sharelib. If users want to manage the hadoop configurations themselves, they should disable it. oozie.action.shell.setup.hadoop.conf.dir false The Shell action is commonly used to run programs that rely on HADOOP_CONF_DIR (e.g. hive, beeline, sqoop, etc). With YARN, HADOOP_CONF_DIR is set to the NodeManager's copies of Hadoop's *-site.xml files, which can be problematic because (a) they are meant for the NM, not necessarily clients, and (b) they won't have any of the configs that Oozie, or the user through Oozie, sets. When this property is set to true, the Shell action will prepare the *-site.xml files based on the correct config and set HADOOP_CONF_DIR to point to it. Setting it to false will make Oozie leave HADOOP_CONF_DIR alone. This can also be set at the Action level by putting it in the Shell Action's configuration section, which also has priority. That all said, it's recommended to use the appropriate action type when possible.
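To make the Uber Mode override order described above concrete, a sketch of the two override levels follows; the action name is hypothetical. The per-action-type property goes into oozie-site.xml, while the per-action property goes into the action's configuration section in the workflow and takes precedence over it:

  <!-- oozie-site.xml: disable Uber Mode for all pig actions (priority 2) -->
  <property>
    <name>oozie.action.pig.launcher.mapreduce.job.ubertask.enable</name>
    <value>false</value>
  </property>

  <!-- workflow.xml: re-enable it for one specific action (priority 1) -->
  <action name="example-pig-step">
    <pig>
      <!-- job-tracker, name-node, script, etc. omitted for brevity -->
      <configuration>
        <property>
          <name>oozie.launcher.mapreduce.job.ubertask.enable</name>
          <value>true</value>
        </property>
      </configuration>
    </pig>
    <ok to="end"/>
    <error to="fail"/>
  </action>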
oozie.action.shell.setup.hadoop.conf.dir.write.log4j.properties true Toggle to control if a log4j.properties file should be written into the configuration directory prepared when oozie.action.shell.setup.hadoop.conf.dir is enabled. This is used to control logging behavior of log4j using commands run within the shell action script, and to ensure logging does not impact output data capture if leaked to stdout. Content of the written file is determined by the value of oozie.action.shell.setup.hadoop.conf.dir.log4j.content. oozie.action.shell.setup.hadoop.conf.dir.log4j.content log4j.rootLogger=${hadoop.root.logger} hadoop.root.logger=INFO,console log4j.appender.console=org.apache.log4j.ConsoleAppender log4j.appender.console.target=System.err log4j.appender.console.layout=org.apache.log4j.PatternLayout log4j.appender.console.layout.ConversionPattern=%d{yy/MM/dd HH:mm:ss} %p %c{2}: %m%n The value to write into a log4j.properties file under the config directory created when oozie.action.shell.setup.hadoop.conf.dir and oozie.action.shell.setup.hadoop.conf.dir.write.log4j.properties properties are both enabled. The values must be properly newline separated and in format expected by Log4J. Trailing and preceding whitespaces will be trimmed when reading this property. This is used to control logging behavior of log4j using commands run within the shell action script. oozie.action.launcher.yarn.timeline-service.enabled false Enables/disables getting delegation tokens for ATS for the launcher job in YARN/Hadoop 2.6 (no effect in Hadoop 1) for all action types by default if tez-site.xml is present in distributed cache. This can be overridden on a per-action basis by setting oozie.launcher.yarn.timeline-service.enabled in an action's configuration section in a workflow. oozie.action.rootlogger.log.level INFO Logging level for root logger oozie.action.retries.max 3 The number of retries for executing an action in case of failure oozie.action.retry.interval 10 The interval between retries of an action in case of failure oozie.action.retry.policy periodic Retry policy of an action in case of failure. Possible values are periodic/exponential oozie.action.ssh.delete.remote.tmp.dir true If set to true, it will delete temporary directory at the end of execution of ssh action. oozie.action.ssh.http.command curl Command to use for callback to oozie, normally is 'curl' or 'wget'. The command must available in PATH environment variable of the USER@HOST box shell. oozie.action.ssh.http.command.post.options --data-binary @#stdout --request POST --header "content-type:text/plain" The callback command POST options. Used when the ouptut of the ssh action is captured. oozie.action.ssh.allow.user.at.host true Specifies whether the user specified by the ssh action is allowed or is to be replaced by the Job user oozie.action.subworkflow.max.depth 50 The maximum depth for subworkflows. For example, if set to 3, then a workflow can start subwf1, which can start subwf2, which can start subwf3; but if subwf3 tries to start subwf4, then the action will fail. This is helpful in preventing errant workflows from starting infintely recursive subworkflows. oozie.service.HadoopAccessorService.kerberos.enabled false Indicates if Oozie is configured to use Kerberos. local.realm LOCALHOST Kerberos Realm used by Oozie and Hadoop. Using 'local.realm' to be aligned with Hadoop configuration oozie.service.HadoopAccessorService.keytab.file ${user.home}/oozie.keytab Location of the Oozie user keytab file. 
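Based on the retry properties above, a deployment that wants more aggressive automatic retries could override them in oozie-site.xml; the chosen values are illustrative only:

  <property>
    <name>oozie.action.retries.max</name>
    <value>5</value> <!-- default is 3 -->
  </property>
  <property>
    <name>oozie.action.retry.interval</name>
    <value>5</value> <!-- default is 10 -->
  </property>
  <property>
    <name>oozie.action.retry.policy</name>
    <value>exponential</value> <!-- periodic (the default) or exponential, per the description above -->
  </property>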
oozie.service.HadoopAccessorService.kerberos.principal ${user.name}/localhost@${local.realm} Kerberos principal for Oozie service. oozie.service.HadoopAccessorService.jobTracker.whitelist Whitelisted job tracker for Oozie service. oozie.service.HadoopAccessorService.nameNode.whitelist Whitelisted job tracker for Oozie service. oozie.service.HadoopAccessorService.hadoop.configurations *=hadoop-conf Comma separated AUTHORITY=HADOOP_CONF_DIR, where AUTHORITY is the HOST:PORT of the Hadoop service (JobTracker, YARN, HDFS). The wildcard '*' configuration is used when there is no exact match for an authority. The HADOOP_CONF_DIR contains the relevant Hadoop *-site.xml files. If the path is relative is looked within the Oozie configuration directory; though the path can be absolute (i.e. to point to Hadoop client conf/ directories in the local filesystem. oozie.service.HadoopAccessorService.action.configurations *=action-conf Comma separated AUTHORITY=ACTION_CONF_DIR, where AUTHORITY is the HOST:PORT of the Hadoop MapReduce service (JobTracker, YARN). The wildcard '*' configuration is used when there is no exact match for an authority. The ACTION_CONF_DIR may contain ACTION.xml files where ACTION is the action type ('java', 'map-reduce', 'pig', 'hive', 'sqoop', etc.). If the ACTION.xml file exists, its properties will be used as defaults properties for the action. If the path is relative is looked within the Oozie configuration directory; though the path can be absolute (i.e. to point to Hadoop client conf/ directories in the local filesystem. oozie.service.HadoopAccessorService.action.configurations.load.default.resources true true means that default and site xml files of hadoop (core-default, core-site, hdfs-default, hdfs-site, mapred-default, mapred-site, yarn-default, yarn-site) are parsed into actionConf on Oozie server. false means that site xml files are not loaded on server, instead loaded on launcher node. This is only done for pig and hive actions which handle loading those files automatically from the classpath on launcher task. It defaults to true. oozie.credentials.credentialclasses A list of credential class mapping for CredentialsProvider oozie.credentials.skip false This determines if Oozie should skip getting credentials from the credential providers. This can be overwritten at a job-level or action-level. oozie.actions.main.classnames distcp=org.apache.hadoop.tools.DistCp A list of class name mapping for Action classes oozie.service.WorkflowAppService.system.libpath /user/${user.name}/share/lib System library path to use for workflow applications. This path is added to workflow application if their job properties sets the property 'oozie.use.system.libpath' to true. oozie.command.default.lock.timeout 5000 Default timeout (in milliseconds) for commands for acquiring an exclusive lock on an entity. oozie.command.default.requeue.delay 10000 Default time (in milliseconds) for commands that are requeued for delayed execution. oozie.service.LiteWorkflowStoreService.user.retry.max 3 Automatic retry max count for workflow action is 3 in default. oozie.service.LiteWorkflowStoreService.user.retry.inteval 10 Automatic retry interval for workflow action is in minutes and the default value is 10 minutes. oozie.service.LiteWorkflowStoreService.user.retry.policy periodic Automatic retry policy for workflow action. Possible values are periodic or exponential, periodic being the default. 
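The AUTHORITY=HADOOP_CONF_DIR mappings described above take a comma-separated list with an optional '*' wildcard fallback. A sketch, where the host name and the extra configuration directory are assumptions:

  <property>
    <name>oozie.service.HadoopAccessorService.hadoop.configurations</name>
    <!-- wildcard fallback plus an exact match for one cluster's NameNode authority -->
    <value>*=hadoop-conf,nn1.example.com:8020=hadoop-conf-prod</value>
  </property>
  <property>
    <name>oozie.service.HadoopAccessorService.action.configurations</name>
    <value>*=action-conf</value>
  </property>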
oozie.service.LiteWorkflowStoreService.user.retry.error.code JA008,JA009,JA017,JA018,JA019,FS009,FS008,FS014 Automatic retry interval for workflow action is handled for these specified error code: FS009, FS008 is file exists error when using chmod in fs action. FS014 is permission error in fs action JA018 is output directory exists error in workflow map-reduce action. JA019 is error while executing distcp action. JA017 is job not exists error in action executor. JA008 is FileNotFoundException in action executor. JA009 is IOException in action executor. ALL is the any kind of error in action executor. oozie.service.LiteWorkflowStoreService.user.retry.error.code.ext Automatic retry interval for workflow action is handled for these specified extra error code: ALL is the any kind of error in action executor. oozie.service.LiteWorkflowStoreService.node.def.version _oozie_inst_v_2 NodeDef default version, _oozie_inst_v_0, _oozie_inst_v_1 or _oozie_inst_v_2 oozie.authentication.type simple Defines authentication used for Oozie HTTP endpoint. Supported values are: simple | kerberos | #AUTHENTICATION_HANDLER_CLASSNAME# oozie.server.authentication.type ${oozie.authentication.type} Defines authentication used for Oozie server communicating to other Oozie server over HTTP(s). Supported values are: simple | kerberos | #AUTHENTICATOR_CLASSNAME# oozie.authentication.token.validity 36000 Indicates how long (in seconds) an authentication token is valid before it has to be renewed. oozie.authentication.cookie.domain The domain to use for the HTTP cookie that stores the authentication token. In order to authentiation to work correctly across multiple hosts the domain must be correctly set. oozie.authentication.simple.anonymous.allowed true Indicates if anonymous requests are allowed when using 'simple' authentication. oozie.authentication.kerberos.principal HTTP/localhost@${local.realm} Indicates the Kerberos principal to be used for HTTP endpoint. The principal MUST start with 'HTTP/' as per Kerberos HTTP SPNEGO specification. oozie.authentication.kerberos.keytab ${oozie.service.HadoopAccessorService.keytab.file} Location of the keytab file with the credentials for the principal. Referring to the same keytab file Oozie uses for its Kerberos credentials for Hadoop. oozie.authentication.kerberos.name.rules DEFAULT The kerberos names rules is to resolve kerberos principal names, refer to Hadoop's KerberosName for more details. oozie.coord.execution.none.tolerance 1 Default time tolerance in minutes after action nominal time for an action to be skipped when execution order is "NONE" oozie.coord.actions.default.length 1000 Default number of coordinator actions to be retrieved by the info command oozie.validate.ForkJoin true If true, fork and join should be validated at wf submission time. oozie.workflow.parallel.fork.action.start true Determines how Oozie processes starting of forked actions. If true, forked actions and their job submissions are done in parallel which is best for performance. If false, they are submitted sequentially. oozie.coord.action.get.all.attributes false Setting to true is not recommended as coord job/action info will bring all columns of the action in memory. Set it true only if backward compatibility for action/job info is required. oozie.service.HadoopAccessorService.supported.filesystems hdfs,hftp,webhdfs Enlist the different filesystems supported for federation. If wildcard "*" is specified, then ALL file schemes will be allowed. 
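Putting the authentication properties above together, a sketch of a Kerberos-enabled Oozie HTTP endpoint follows; the realm, host name and keytab path are assumptions:

  <property>
    <name>oozie.authentication.type</name>
    <value>kerberos</value>
  </property>
  <property>
    <!-- must start with 'HTTP/' per the SPNEGO requirement noted above -->
    <name>oozie.authentication.kerberos.principal</name>
    <value>HTTP/oozie.example.com@EXAMPLE.COM</value>
  </property>
  <property>
    <name>oozie.authentication.kerberos.keytab</name>
    <value>/etc/security/keytabs/oozie.keytab</value>
  </property>
  <property>
    <!-- also switch the Hadoop accessor layer to Kerberos -->
    <name>oozie.service.HadoopAccessorService.kerberos.enabled</name>
    <value>true</value>
  </property>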
oozie.service.URIHandlerService.uri.handlers org.apache.oozie.dependency.FSURIHandler Enlist the different uri handlers supported for data availability checks. oozie.notification.url.connection.timeout 10000 Defines the timeout, in milliseconds, for Oozie HTTP notification callbacks. Oozie does HTTP notifications for workflow jobs which set the 'oozie.wf.action.notification.url', 'oozie.wf.worklfow.notification.url' and/or 'oozie.coord.action.notification.url' properties in their job.properties. Refer to section '5 Oozie Notifications' in the Workflow specification for details. oozie.hadoop-2.0.2-alpha.workaround.for.distributed.cache false Due to a bug in Hadoop 2.0.2-alpha, MAPREDUCE-4820, launcher jobs fail to set the distributed cache for the action job because the local JARs are implicitly included triggering a duplicate check. This flag removes the distributed cache files for the action as they'll be included from the local JARs of the JobClient (MRApps) submitting the action job from the launcher. oozie.service.EventHandlerService.filter.app.types workflow_job, coordinator_action The app-types among workflow/coordinator/bundle job/action for which for which events system is enabled. oozie.service.EventHandlerService.event.queue org.apache.oozie.event.MemoryEventQueue The implementation for EventQueue in use by the EventHandlerService. oozie.service.EventHandlerService.event.listeners org.apache.oozie.jms.JMSJobEventListener oozie.service.EventHandlerService.queue.size 10000 Maximum number of events to be contained in the event queue. oozie.service.EventHandlerService.worker.interval 30 The default interval (seconds) at which the worker threads will be scheduled to run and process events. oozie.service.EventHandlerService.batch.size 10 The batch size for batched draining per thread from the event queue. oozie.service.EventHandlerService.worker.threads 3 Number of worker threads to be scheduled to run and process events. oozie.sla.service.SLAService.capacity 5000 Maximum number of sla records to be contained in the memory structure. oozie.sla.service.SLAService.alert.events END_MISS Default types of SLA events for being alerted of. oozie.sla.service.SLAService.calculator.impl org.apache.oozie.sla.SLACalculatorMemory The implementation for SLACalculator in use by the SLAService. oozie.sla.service.SLAService.job.event.latency 90000 Time in milliseconds to account of latency of getting the job status event to compare against and decide sla miss/met oozie.sla.service.SLAService.check.interval 30 Time interval, in seconds, at which SLA Worker will be scheduled to run oozie.sla.disable.alerts.older.than 48 Time threshold, in HOURS, for disabling SLA alerting for jobs whose nominal time is older than this. oozie.zookeeper.connection.string localhost:2181 Comma-separated values of host:port pairs of the ZooKeeper servers. oozie.zookeeper.namespace oozie The namespace to use. All of the Oozie Servers that are planning on talking to each other should have the same namespace. oozie.zookeeper.connection.timeout 180 Default ZK connection timeout (in sec). oozie.zookeeper.session.timeout 300 Default ZK session timeout (in sec). If connection is lost even after retry, then Oozie server will shutdown itself if oozie.zookeeper.server.shutdown.ontimeout is true. oozie.zookeeper.max.retries 10 Maximum number of times to retry. oozie.zookeeper.server.shutdown.ontimeout true If true, Oozie server will shutdown itself on ZK connection timeout. oozie.http.hostname localhost Oozie server host name. 
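For a multi-server Oozie deployment, the ZooKeeper properties above would point at a real ensemble rather than localhost; a sketch with assumed host names:

  <property>
    <name>oozie.zookeeper.connection.string</name>
    <value>zk1.example.com:2181,zk2.example.com:2181,zk3.example.com:2181</value>
  </property>
  <property>
    <!-- all Oozie servers that should coordinate with each other must share this namespace -->
    <name>oozie.zookeeper.namespace</name>
    <value>oozie</value>
  </property>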
oozie.http.port 11000 Oozie server port. oozie.https.port 11443 Oozie ssl server port. oozie.instance.id ${oozie.http.hostname} Each Oozie server should have its own unique instance id. The default is system property =${OOZIE_HTTP_HOSTNAME}= (i.e. the hostname). oozie.service.ShareLibService.mapping.file Sharelib mapping files contains list of key=value, where key will be the sharelib name for the action and value is a comma separated list of DFS directories or jar files. Example. oozie.pig_10=hdfs:///share/lib/pig/pig-0.10.1/lib/ oozie.pig=hdfs:///share/lib/pig/pig-0.11.1/lib/ oozie.distcp=hdfs:///share/lib/hadoop-2.2.0/share/hadoop/tools/lib/hadoop-distcp-2.2.0.jar oozie.service.ShareLibService.fail.fast.on.startup false Fails server starup if sharelib initilzation fails. oozie.service.ShareLibService.purge.interval 1 How often, in days, Oozie should check for old ShareLibs and LauncherLibs to purge from HDFS. oozie.service.ShareLibService.temp.sharelib.retention.days 7 ShareLib retention time in days. oozie.action.ship.launcher.jar false Specifies whether launcher jar is shipped or not. oozie.action.jobinfo.enable false JobInfo will contain information of bundle, coordinator, workflow and actions. If enabled, hadoop job will have property(oozie.job.info) which value is multiple key/value pair separated by ",". This information can be used for analytics like how many oozie jobs are submitted for a particular period, what is the total number of failed pig jobs, etc from mapreduce job history logs and configuration. User can also add custom workflow property to jobinfo by adding property which prefix with "oozie.job.info." Eg. oozie.job.info="bundle.id=,bundle.name=,coord.name=,coord.nominal.time=,coord.name=,wf.id=, wf.name=,action.name=,action.type=,launcher=true" oozie.service.XLogStreamingService.max.log.scan.duration -1 Max log scan duration in hours. If log scan request end_date - start_date > value, then exception is thrown to reduce the scan duration. -1 indicate no limit. oozie.service.XLogStreamingService.actionlist.max.log.scan.duration -1 Max log scan duration in hours for coordinator job when list of actions are specified. If log streaming request end_date - start_date > value, then exception is thrown to reduce the scan duration. -1 indicate no limit. This setting is separate from max.log.scan.duration as we want to allow higher durations when actions are specified. oozie.service.JvmPauseMonitorService.warn-threshold.ms 10000 The JvmPauseMonitorService runs a thread that repeatedly tries to detect when the JVM pauses, which could indicate that the JVM or host machine is overloaded or other problems. This thread sleeps for 500ms; if it sleeps for significantly longer, then there is likely a problem. This property specifies the threadshold for when Oozie should log a WARN level message; there is also a counter named "jvm.pause.warn-threshold". oozie.service.JvmPauseMonitorService.info-threshold.ms 1000 The JvmPauseMonitorService runs a thread that repeatedly tries to detect when the JVM pauses, which could indicate that the JVM or host machine is overloaded or other problems. This thread sleeps for 500ms; if it sleeps for significantly longer, then there is likely a problem. This property specifies the threadshold for when Oozie should log an INFO level message; there is also a counter named "jvm.pause.info-threshold". oozie.service.ZKLocksService.locks.reaper.threshold 300 The frequency at which the ChildReaper will run. Duration should be in sec. Default is 5 min. 
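The sharelib mapping file referenced by oozie.service.ShareLibService.mapping.file is a plain key=value list; the entries below are taken from the example in the description above, while the HDFS location of the mapping file itself is an assumption:

  <property>
    <name>oozie.service.ShareLibService.mapping.file</name>
    <value>hdfs:///user/oozie/sharelib-mapping.properties</value>
  </property>

with the mapping file containing, for example:

  oozie.pig_10=hdfs:///share/lib/pig/pig-0.10.1/lib/
  oozie.pig=hdfs:///share/lib/pig/pig-0.11.1/lib/
  oozie.distcp=hdfs:///share/lib/hadoop-2.2.0/share/hadoop/tools/lib/hadoop-distcp-2.2.0.jar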
oozie.service.ZKLocksService.locks.reaper.threads 2 Number of fixed threads used by ChildReaper to delete empty locks. oozie.service.AbandonedCoordCheckerService.check.interval 1440 Interval, in minutes, at which AbandonedCoordCheckerService should run. oozie.service.AbandonedCoordCheckerService.check.delay 60 Delay, in minutes, at which AbandonedCoordCheckerService should run. oozie.service.AbandonedCoordCheckerService.failure.limit 25 Failure limit. A job is considered to be abandoned/faulty if total number of actions in failed/timedout/suspended >= "Failure limit" and there are no succeeded action. oozie.service.AbandonedCoordCheckerService.kill.jobs false If true, AbandonedCoordCheckerService will kill abandoned coords. oozie.service.AbandonedCoordCheckerService.job.older.than 2880 In minutes, job will be considered as abandoned/faulty if job is older than this value. oozie.notification.proxy System level proxy setting for job notifications. oozie.wf.rerun.disablechild false By setting this option, workflow rerun will be disabled if parent workflow or coordinator exist and it will only rerun through parent. oozie.use.system.libpath false Default value of oozie.use.system.libpath. If user haven't specified =oozie.use.system.libpath= in the job.properties and this value is true and Oozie will include sharelib jars for workflow. oozie.service.PauseTransitService.callable.batch.size 10 This value determines the number of callable which will be batched together to be executed by a single thread. oozie.configuration.substitute.depth 20 This value determines the depth of substitution in configurations. If set -1, No limitation on substitution. oozie.service.SparkConfigurationService.spark.configurations *=spark-conf Comma separated AUTHORITY=SPARK_CONF_DIR, where AUTHORITY is the HOST:PORT of the ResourceManager of a YARN cluster. The wildcard '*' configuration is used when there is no exact match for an authority. The SPARK_CONF_DIR contains the relevant spark-defaults.conf properties file. If the path is relative is looked within the Oozie configuration directory; though the path can be absolute. This is only used when the Spark master is set to either "yarn-client" or "yarn-cluster". oozie.service.SparkConfigurationService.spark.configurations.ignore.spark.yarn.jar true If true, Oozie will ignore the "spark.yarn.jar" property from any Spark configurations specified in oozie.service.SparkConfigurationService.spark.configurations. If false, Oozie will not ignore it. It is recommended to leave this as true because it can interfere with the jars in the Spark sharelib. oozie.email.attachment.enabled true This value determines whether to support email attachment of a file on HDFS. Set it false if there is any security concern. oozie.email.smtp.host localhost The host where the email action may find the SMTP server. oozie.email.smtp.port 25 The port to connect to for the SMTP server, for email actions. oozie.email.smtp.auth false Boolean property that toggles if authentication is to be done or not when using email actions. oozie.email.smtp.username If authentication is enabled for email actions, the username to login as (to the SMTP server). oozie.email.smtp.password If authentication is enabled for email actions, the password to login with (to the SMTP server). oozie.email.from.address oozie@localhost The from address to be used for mailing all emails done via the email action. 
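A sketch of pointing the email action at a real SMTP relay using the properties above; the host, port, credentials and from-address are assumptions:

  <property>
    <name>oozie.email.smtp.host</name>
    <value>smtp.example.com</value>
  </property>
  <property>
    <name>oozie.email.smtp.port</name>
    <value>587</value> <!-- default is 25 -->
  </property>
  <property>
    <name>oozie.email.smtp.auth</name>
    <value>true</value>
  </property>
  <property>
    <name>oozie.email.smtp.username</name>
    <value>oozie-mailer</value>
  </property>
  <property>
    <name>oozie.email.smtp.password</name>
    <value>change-me</value>
  </property>
  <property>
    <name>oozie.email.from.address</name>
    <value>oozie@example.com</value>
  </property>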
oozie.email.smtp.socket.timeout.ms 10000 The timeout to apply over all SMTP server socket operations done during the email action. oozie.actions.default.name-node The default value to use for the <name-node> element in applicable action types. This value will be used when neither the action itself nor the global section specifies a <name-node>. As expected, it should be of the form "hdfs://HOST:PORT". oozie.actions.default.job-tracker The default value to use for the <job-tracker> element in applicable action types. This value will be used when neither the action itself nor the global section specifies a <job-tracker>. As expected, it should be of the form "HOST:PORT". ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1696419316.0 sahara-plugin-vanilla-10.0.0/sahara_plugin_vanilla/plugins/vanilla/v2_7_5/resources/yarn-default.xml0000664000175000017500000020200000000000000033546 0ustar00zuulzuul00000000000000 Factory to create client IPC classes. yarn.ipc.client.factory.class Factory to create server IPC classes. yarn.ipc.server.factory.class Factory to create serializeable records. yarn.ipc.record.factory.class RPC class implementation yarn.ipc.rpc.class org.apache.hadoop.yarn.ipc.HadoopYarnProtoRPC The hostname of the RM. yarn.resourcemanager.hostname 0.0.0.0 The address of the applications manager interface in the RM. yarn.resourcemanager.address ${yarn.resourcemanager.hostname}:8032 The actual address the server will bind to. If this optional address is set, the RPC and webapp servers will bind to this address and the port specified in yarn.resourcemanager.address and yarn.resourcemanager.webapp.address, respectively. This is most useful for making RM listen to all interfaces by setting to 0.0.0.0. yarn.resourcemanager.bind-host The number of threads used to handle applications manager requests. yarn.resourcemanager.client.thread-count 50 Number of threads used to launch/cleanup AM. yarn.resourcemanager.amlauncher.thread-count 50 Retry times to connect with NM. yarn.resourcemanager.nodemanager-connect-retries 10 Timeout in milliseconds when YARN dispatcher tries to drain the events. Typically, this happens when service is stopping. e.g. RM drains the ATS events dispatcher when stopping. yarn.dispatcher.drain-events.timeout 300000 The expiry interval for application master reporting. yarn.am.liveness-monitor.expiry-interval-ms 600000 The Kerberos principal for the resource manager. yarn.resourcemanager.principal The address of the scheduler interface. yarn.resourcemanager.scheduler.address ${yarn.resourcemanager.hostname}:8030 Number of threads to handle scheduler interface. yarn.resourcemanager.scheduler.client.thread-count 50 This configures the HTTP endpoint for Yarn Daemons.The following values are supported: - HTTP_ONLY : Service is provided only on http - HTTPS_ONLY : Service is provided only on https yarn.http.policy HTTP_ONLY The http address of the RM web application. yarn.resourcemanager.webapp.address ${yarn.resourcemanager.hostname}:8088 The https adddress of the RM web application. yarn.resourcemanager.webapp.https.address ${yarn.resourcemanager.hostname}:8090 yarn.resourcemanager.resource-tracker.address ${yarn.resourcemanager.hostname}:8031 Are acls enabled. yarn.acl.enable false ACL of who can be admin of the YARN cluster. yarn.admin.acl * The address of the RM admin interface. yarn.resourcemanager.admin.address ${yarn.resourcemanager.hostname}:8033 Number of threads used to handle RM admin interface. 
yarn.resourcemanager.admin.client.thread-count 1 Maximum time to wait to establish connection to ResourceManager. yarn.resourcemanager.connect.max-wait.ms 900000 How often to try connecting to the ResourceManager. yarn.resourcemanager.connect.retry-interval.ms 30000 The maximum number of application attempts. It's a global setting for all application masters. Each application master can specify its individual maximum number of application attempts via the API, but the individual number cannot be more than the global upper bound. If it is, the resourcemanager will override it. The default number is set to 2, to allow at least one retry for AM. yarn.resourcemanager.am.max-attempts 2 How often to check that containers are still alive. yarn.resourcemanager.container.liveness-monitor.interval-ms 600000 The keytab for the resource manager. yarn.resourcemanager.keytab /etc/krb5.keytab Flag to enable override of the default kerberos authentication filter with the RM authentication filter to allow authentication using delegation tokens(fallback to kerberos if the tokens are missing). Only applicable when the http authentication type is kerberos. yarn.resourcemanager.webapp.delegation-token-auth-filter.enabled true Flag to enable cross-origin (CORS) support in the RM. This flag requires the CORS filter initializer to be added to the filter initializers list in core-site.xml. yarn.resourcemanager.webapp.cross-origin.enabled false How long to wait until a node manager is considered dead. yarn.nm.liveness-monitor.expiry-interval-ms 600000 Path to file with nodes to include. yarn.resourcemanager.nodes.include-path Path to file with nodes to exclude. yarn.resourcemanager.nodes.exclude-path Number of threads to handle resource tracker calls. yarn.resourcemanager.resource-tracker.client.thread-count 50 The class to use as the resource scheduler. yarn.resourcemanager.scheduler.class org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler The minimum allocation for every container request at the RM, in MBs. Memory requests lower than this will throw a InvalidResourceRequestException. yarn.scheduler.minimum-allocation-mb 1024 The maximum allocation for every container request at the RM, in MBs. Memory requests higher than this will throw a InvalidResourceRequestException. yarn.scheduler.maximum-allocation-mb 8192 The minimum allocation for every container request at the RM, in terms of virtual CPU cores. Requests lower than this will throw a InvalidResourceRequestException. yarn.scheduler.minimum-allocation-vcores 1 The maximum allocation for every container request at the RM, in terms of virtual CPU cores. Requests higher than this will throw a InvalidResourceRequestException. yarn.scheduler.maximum-allocation-vcores 32 Enable RM to recover state after starting. If true, then yarn.resourcemanager.store.class must be specified. yarn.resourcemanager.recovery.enabled false Should RM fail fast if it encounters any errors. By defalt, it points to ${yarn.fail-fast}. Errors include: 1) exceptions when state-store write/read operations fails. yarn.resourcemanager.fail-fast ${yarn.fail-fast} Should YARN fail fast if it encounters any errors. This is a global config for all other components including RM,NM etc. If no value is set for component-specific config (e.g yarn.resourcemanager.fail-fast), this value will be the default. yarn.fail-fast false Enable RM work preserving recovery. This configuration is private to YARN for experimenting the feature. 
yarn.resourcemanager.work-preserving-recovery.enabled true Set the amount of time RM waits before allocating new containers on work-preserving-recovery. Such wait period gives RM a chance to settle down resyncing with NMs in the cluster on recovery, before assigning new containers to applications. yarn.resourcemanager.work-preserving-recovery.scheduling-wait-ms 10000 The class to use as the persistent store. If org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore is used, the store is implicitly fenced; meaning a single ResourceManager is able to use the store at any point in time. More details on this implicit fencing, along with setting up appropriate ACLs is discussed under yarn.resourcemanager.zk-state-store.root-node.acl. yarn.resourcemanager.store.class org.apache.hadoop.yarn.server.resourcemanager.recovery.FileSystemRMStateStore The maximum number of completed applications RM state store keeps, less than or equals to ${yarn.resourcemanager.max-completed-applications}. By default, it equals to ${yarn.resourcemanager.max-completed-applications}. This ensures that the applications kept in the state store are consistent with the applications remembered in RM memory. Any values larger than ${yarn.resourcemanager.max-completed-applications} will be reset to ${yarn.resourcemanager.max-completed-applications}. Note that this value impacts the RM recovery performance.Typically, a smaller value indicates better performance on RM recovery. yarn.resourcemanager.state-store.max-completed-applications ${yarn.resourcemanager.max-completed-applications} Host:Port of the ZooKeeper server to be used by the RM. This must be supplied when using the ZooKeeper based implementation of the RM state store and/or embedded automatic failover in a HA setting. yarn.resourcemanager.zk-address Number of times RM tries to connect to ZooKeeper. yarn.resourcemanager.zk-num-retries 1000 Retry interval in milliseconds when connecting to ZooKeeper. When HA is enabled, the value here is NOT used. It is generated automatically from yarn.resourcemanager.zk-timeout-ms and yarn.resourcemanager.zk-num-retries. yarn.resourcemanager.zk-retry-interval-ms 1000 Full path of the ZooKeeper znode where RM state will be stored. This must be supplied when using org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore as the value for yarn.resourcemanager.store.class yarn.resourcemanager.zk-state-store.parent-path /rmstore ZooKeeper session timeout in milliseconds. Session expiration is managed by the ZooKeeper cluster itself, not by the client. This value is used by the cluster to determine when the client's session expires. Expirations happens when the cluster does not hear from the client within the specified session timeout period (i.e. no heartbeat). yarn.resourcemanager.zk-timeout-ms 10000 ACL's to be used for ZooKeeper znodes. yarn.resourcemanager.zk-acl world:anyone:rwcda ACLs to be used for the root znode when using ZKRMStateStore in a HA scenario for fencing. ZKRMStateStore supports implicit fencing to allow a single ResourceManager write-access to the store. For fencing, the ResourceManagers in the cluster share read-write-admin privileges on the root node, but the Active ResourceManager claims exclusive create-delete permissions. By default, when this property is not set, we use the ACLs from yarn.resourcemanager.zk-acl for shared admin access and rm-address:random-number for username-based exclusive create-delete access. 
This property allows users to set ACLs of their choice instead of using the default mechanism. For fencing to work, the ACLs should be carefully set differently on each ResourceManger such that all the ResourceManagers have shared admin access and the Active ResourceManger takes over (exclusively) the create-delete access. yarn.resourcemanager.zk-state-store.root-node.acl Specify the auths to be used for the ACL's specified in both the yarn.resourcemanager.zk-acl and yarn.resourcemanager.zk-state-store.root-node.acl properties. This takes a comma-separated list of authentication mechanisms, each of the form 'scheme:auth' (the same syntax used for the 'addAuth' command in the ZK CLI). yarn.resourcemanager.zk-auth URI pointing to the location of the FileSystem path where RM state will be stored. This must be supplied when using org.apache.hadoop.yarn.server.resourcemanager.recovery.FileSystemRMStateStore as the value for yarn.resourcemanager.store.class yarn.resourcemanager.fs.state-store.uri ${hadoop.tmp.dir}/yarn/system/rmstore hdfs client retry policy specification. hdfs client retry is always enabled. Specified in pairs of sleep-time and number-of-retries and (t0, n0), (t1, n1), ..., the first n0 retries sleep t0 milliseconds on average, the following n1 retries sleep t1 milliseconds on average, and so on. yarn.resourcemanager.fs.state-store.retry-policy-spec 2000, 500 the number of retries to recover from IOException in FileSystemRMStateStore. yarn.resourcemanager.fs.state-store.num-retries 0 Retry interval in milliseconds in FileSystemRMStateStore. yarn.resourcemanager.fs.state-store.retry-interval-ms 1000 Local path where the RM state will be stored when using org.apache.hadoop.yarn.server.resourcemanager.recovery.LeveldbRMStateStore as the value for yarn.resourcemanager.store.class yarn.resourcemanager.leveldb-state-store.path ${hadoop.tmp.dir}/yarn/system/rmstore The time in seconds between full compactions of the leveldb database. Setting the interval to zero disables the full compaction cycles. yarn.resourcemanager.leveldb-state-store.compaction-interval-secs 3600 Enable RM high-availability. When enabled, (1) The RM starts in the Standby mode by default, and transitions to the Active mode when prompted to. (2) The nodes in the RM ensemble are listed in yarn.resourcemanager.ha.rm-ids (3) The id of each RM either comes from yarn.resourcemanager.ha.id if yarn.resourcemanager.ha.id is explicitly specified or can be figured out by matching yarn.resourcemanager.address.{id} with local address (4) The actual physical addresses come from the configs of the pattern - {rpc-config}.{id} yarn.resourcemanager.ha.enabled false Enable automatic failover. By default, it is enabled only when HA is enabled yarn.resourcemanager.ha.automatic-failover.enabled true Enable embedded automatic failover. By default, it is enabled only when HA is enabled. The embedded elector relies on the RM state store to handle fencing, and is primarily intended to be used in conjunction with ZKRMStateStore. yarn.resourcemanager.ha.automatic-failover.embedded true The base znode path to use for storing leader information, when using ZooKeeper based leader election. yarn.resourcemanager.ha.automatic-failover.zk-base-path /yarn-leader-election Name of the cluster. In a HA setting, this is used to ensure the RM participates in leader election for this cluster and ensures it does not affect other clusters yarn.resourcemanager.cluster-id The list of RM nodes in the cluster when HA is enabled. 
See description of yarn.resourcemanager.ha .enabled for full details on how this is used. yarn.resourcemanager.ha.rm-ids The id (string) of the current RM. When HA is enabled, this is an optional config. The id of current RM can be set by explicitly specifying yarn.resourcemanager.ha.id or figured out by matching yarn.resourcemanager.address.{id} with local address See description of yarn.resourcemanager.ha.enabled for full details on how this is used. yarn.resourcemanager.ha.id When HA is enabled, the class to be used by Clients, AMs and NMs to failover to the Active RM. It should extend org.apache.hadoop.yarn.client.RMFailoverProxyProvider yarn.client.failover-proxy-provider org.apache.hadoop.yarn.client.ConfiguredRMFailoverProxyProvider When HA is enabled, the max number of times FailoverProxyProvider should attempt failover. When set, this overrides the yarn.resourcemanager.connect.max-wait.ms. When not set, this is inferred from yarn.resourcemanager.connect.max-wait.ms. yarn.client.failover-max-attempts When HA is enabled, the sleep base (in milliseconds) to be used for calculating the exponential delay between failovers. When set, this overrides the yarn.resourcemanager.connect.* settings. When not set, yarn.resourcemanager.connect.retry-interval.ms is used instead. yarn.client.failover-sleep-base-ms When HA is enabled, the maximum sleep time (in milliseconds) between failovers. When set, this overrides the yarn.resourcemanager.connect.* settings. When not set, yarn.resourcemanager.connect.retry-interval.ms is used instead. yarn.client.failover-sleep-max-ms When HA is enabled, the number of retries per attempt to connect to a ResourceManager. In other words, it is the ipc.client.connect.max.retries to be used during failover attempts yarn.client.failover-retries 0 When HA is enabled, the number of retries per attempt to connect to a ResourceManager on socket timeouts. In other words, it is the ipc.client.connect.max.retries.on.timeouts to be used during failover attempts yarn.client.failover-retries-on-socket-timeouts 0 The maximum number of completed applications RM keeps. yarn.resourcemanager.max-completed-applications 10000 Interval at which the delayed token removal thread runs yarn.resourcemanager.delayed.delegation-token.removal-interval-ms 30000 If true, ResourceManager will have proxy-user privileges. Use case: In a secure cluster, YARN requires the user hdfs delegation-tokens to do localization and log-aggregation on behalf of the user. If this is set to true, ResourceManager is able to request new hdfs delegation tokens on behalf of the user. This is needed by long-running-service, because the hdfs tokens will eventually expire and YARN requires new valid tokens to do localization and log-aggregation. Note that to enable this use case, the corresponding HDFS NameNode has to configure ResourceManager as the proxy-user so that ResourceManager can itself ask for new tokens on behalf of the user when tokens are past their max-life-time. yarn.resourcemanager.proxy-user-privileges.enabled false Interval for the roll over for the master key used to generate application tokens yarn.resourcemanager.am-rm-tokens.master-key-rolling-interval-secs 86400 Interval for the roll over for the master key used to generate container tokens. It is expected to be much greater than yarn.nm.liveness-monitor.expiry-interval-ms and yarn.resourcemanager.rm.container-allocation.expiry-interval-ms. Otherwise the behavior is undefined. 
yarn.resourcemanager.container-tokens.master-key-rolling-interval-secs 86400 The heart-beat interval in milliseconds for every NodeManager in the cluster. yarn.resourcemanager.nodemanagers.heartbeat-interval-ms 1000 The minimum allowed version of a connecting nodemanager. The valid values are NONE (no version checking), EqualToRM (the nodemanager's version is equal to or greater than the RM version), or a Version String. yarn.resourcemanager.nodemanager.minimum.version NONE Enable a set of periodic monitors (specified in yarn.resourcemanager.scheduler.monitor.policies) that affect the scheduler. yarn.resourcemanager.scheduler.monitor.enable false The list of SchedulingEditPolicy classes that interact with the scheduler. A particular module may be incompatible with the scheduler, other policies, or a configuration of either. yarn.resourcemanager.scheduler.monitor.policies org.apache.hadoop.yarn.server.resourcemanager.monitor.capacity.ProportionalCapacityPreemptionPolicy The class to use as the configuration provider. If org.apache.hadoop.yarn.LocalConfigurationProvider is used, the local configuration will be loaded. If org.apache.hadoop.yarn.FileSystemBasedConfigurationProvider is used, the configuration which will be loaded should be uploaded to remote File system first. yarn.resourcemanager.configuration.provider-class org.apache.hadoop.yarn.LocalConfigurationProvider The setting that controls whether yarn system metrics is published on the timeline server or not by RM. yarn.resourcemanager.system-metrics-publisher.enabled false Number of worker threads that send the yarn system metrics data. yarn.resourcemanager.system-metrics-publisher.dispatcher.pool-size 10 The hostname of the NM. yarn.nodemanager.hostname 0.0.0.0 The address of the container manager in the NM. yarn.nodemanager.address ${yarn.nodemanager.hostname}:0 The actual address the server will bind to. If this optional address is set, the RPC and webapp servers will bind to this address and the port specified in yarn.nodemanager.address and yarn.nodemanager.webapp.address, respectively. This is most useful for making NM listen to all interfaces by setting to 0.0.0.0. yarn.nodemanager.bind-host Environment variables that should be forwarded from the NodeManager's environment to the container's. yarn.nodemanager.admin-env MALLOC_ARENA_MAX=$MALLOC_ARENA_MAX Environment variables that containers may override rather than use NodeManager's default. yarn.nodemanager.env-whitelist JAVA_HOME,HADOOP_COMMON_HOME,HADOOP_HDFS_HOME,HADOOP_CONF_DIR,HADOOP_YARN_HOME who will execute(launch) the containers. yarn.nodemanager.container-executor.class org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor Number of threads container manager uses. yarn.nodemanager.container-manager.thread-count 20 Number of threads used in cleanup. yarn.nodemanager.delete.thread-count 4 Number of seconds after an application finishes before the nodemanager's DeletionService will delete the application's localized file directory and log directory. To diagnose Yarn application problems, set this property's value large enough (for example, to 600 = 10 minutes) to permit examination of these directories. After changing the property's value, you must restart the nodemanager in order for it to have an effect. 
The roots of Yarn applications' work directories is configurable with the yarn.nodemanager.local-dirs property (see below), and the roots of the Yarn applications' log directories is configurable with the yarn.nodemanager.log-dirs property (see also below). yarn.nodemanager.delete.debug-delay-sec 0 Keytab for NM. yarn.nodemanager.keytab /etc/krb5.keytab List of directories to store localized files in. An application's localized file directory will be found in: ${yarn.nodemanager.local-dirs}/usercache/${user}/appcache/application_${appid}. Individual containers' work directories, called container_${contid}, will be subdirectories of this. yarn.nodemanager.local-dirs ${hadoop.tmp.dir}/nm-local-dir It limits the maximum number of files which will be localized in a single local directory. If the limit is reached then sub-directories will be created and new files will be localized in them. If it is set to a value less than or equal to 36 [which are sub-directories (0-9 and then a-z)] then NodeManager will fail to start. For example; [for public cache] if this is configured with a value of 40 ( 4 files + 36 sub-directories) and the local-dir is "/tmp/local-dir1" then it will allow 4 files to be created directly inside "/tmp/local-dir1/filecache". For files that are localized further it will create a sub-directory "0" inside "/tmp/local-dir1/filecache" and will localize files inside it until it becomes full. If a file is removed from a sub-directory that is marked full, then that sub-directory will be used back again to localize files. yarn.nodemanager.local-cache.max-files-per-directory 8192 Address where the localizer IPC is. yarn.nodemanager.localizer.address ${yarn.nodemanager.hostname}:8040 Interval in between cache cleanups. yarn.nodemanager.localizer.cache.cleanup.interval-ms 600000 Target size of localizer cache in MB, per nodemanager. It is a target retention size that only includes resources with PUBLIC and PRIVATE visibility and excludes resources with APPLICATION visibility yarn.nodemanager.localizer.cache.target-size-mb 10240 Number of threads to handle localization requests. yarn.nodemanager.localizer.client.thread-count 5 Number of threads to use for localization fetching. yarn.nodemanager.localizer.fetch.thread-count 4 Where to store container logs. An application's localized log directory will be found in ${yarn.nodemanager.log-dirs}/application_${appid}. Individual containers' log directories will be below this, in directories named container_{$contid}. Each container directory will contain the files stderr, stdin, and syslog generated by that container. yarn.nodemanager.log-dirs ${yarn.log.dir}/userlogs Whether to enable log aggregation. Log aggregation collects each container's logs and moves these logs onto a file-system, for e.g. HDFS, after the application completes. Users can configure the "yarn.nodemanager.remote-app-log-dir" and "yarn.nodemanager.remote-app-log-dir-suffix" properties to determine where these logs are moved to. Users can access the logs via the Application Timeline Server. yarn.log-aggregation-enable false How long to keep aggregation logs before deleting them. -1 disables. Be careful set this too small and you will spam the name node. yarn.log-aggregation.retain-seconds -1 How long to wait between aggregated log retention checks. If set to 0 or a negative value then the value is computed as one-tenth of the aggregated log retention time. Be careful set this too small and you will spam the name node. 
yarn.log-aggregation.retain-check-interval-seconds -1 Time in seconds to retain user logs. Only applicable if log aggregation is disabled yarn.nodemanager.log.retain-seconds 10800 Where to aggregate logs to. yarn.nodemanager.remote-app-log-dir /tmp/logs The remote log dir will be created at {yarn.nodemanager.remote-app-log-dir}/${user}/{thisParam} yarn.nodemanager.remote-app-log-dir-suffix logs Amount of physical memory, in MB, that can be allocated for containers. yarn.nodemanager.resource.memory-mb 8192 Whether physical memory limits will be enforced for containers. yarn.nodemanager.pmem-check-enabled true Whether virtual memory limits will be enforced for containers. yarn.nodemanager.vmem-check-enabled true Ratio between virtual memory to physical memory when setting memory limits for containers. Container allocations are expressed in terms of physical memory, and virtual memory usage is allowed to exceed this allocation by this ratio. yarn.nodemanager.vmem-pmem-ratio 2.1 Number of vcores that can be allocated for containers. This is used by the RM scheduler when allocating resources for containers. This is not used to limit the number of physical cores used by YARN containers. yarn.nodemanager.resource.cpu-vcores 8 Percentage of CPU that can be allocated for containers. This setting allows users to limit the amount of CPU that YARN containers use. Currently functional only on Linux using cgroups. The default is to use 100% of CPU. yarn.nodemanager.resource.percentage-physical-cpu-limit 100 NM Webapp address. yarn.nodemanager.webapp.address ${yarn.nodemanager.hostname}:8042 How often to monitor containers. yarn.nodemanager.container-monitor.interval-ms 3000 Class that calculates containers current resource utilization. yarn.nodemanager.container-monitor.resource-calculator.class Frequency of running node health script. yarn.nodemanager.health-checker.interval-ms 600000 Script time out period. yarn.nodemanager.health-checker.script.timeout-ms 1200000 The health check script to run. yarn.nodemanager.health-checker.script.path The arguments to pass to the health check script. yarn.nodemanager.health-checker.script.opts Frequency of running disk health checker code. yarn.nodemanager.disk-health-checker.interval-ms 120000 The minimum fraction of number of disks to be healthy for the nodemanager to launch new containers. This correspond to both yarn.nodemanager.local-dirs and yarn.nodemanager.log-dirs. i.e. If there are less number of healthy local-dirs (or log-dirs) available, then new containers will not be launched on this node. yarn.nodemanager.disk-health-checker.min-healthy-disks 0.25 The maximum percentage of disk space utilization allowed after which a disk is marked as bad. Values can range from 0.0 to 100.0. If the value is greater than or equal to 100, the nodemanager will check for full disk. This applies to yarn.nodemanager.local-dirs and yarn.nodemanager.log-dirs. yarn.nodemanager.disk-health-checker.max-disk-utilization-per-disk-percentage 90.0 The minimum space that must be available on a disk for it to be used. This applies to yarn.nodemanager.local-dirs and yarn.nodemanager.log-dirs. yarn.nodemanager.disk-health-checker.min-free-space-per-disk-mb 0 The path to the Linux container executor. yarn.nodemanager.linux-container-executor.path The class which should help the LCE handle resources. 
yarn.nodemanager.linux-container-executor.resources-handler.class org.apache.hadoop.yarn.server.nodemanager.util.DefaultLCEResourcesHandler The cgroups hierarchy under which to place YARN proccesses (cannot contain commas). If yarn.nodemanager.linux-container-executor.cgroups.mount is false (that is, if cgroups have been pre-configured), then this cgroups hierarchy must already exist and be writable by the NodeManager user, otherwise the NodeManager may fail. Only used when the LCE resources handler is set to the CgroupsLCEResourcesHandler. yarn.nodemanager.linux-container-executor.cgroups.hierarchy /hadoop-yarn Whether the LCE should attempt to mount cgroups if not found. Only used when the LCE resources handler is set to the CgroupsLCEResourcesHandler. yarn.nodemanager.linux-container-executor.cgroups.mount false Where the LCE should attempt to mount cgroups if not found. Common locations include /sys/fs/cgroup and /cgroup; the default location can vary depending on the Linux distribution in use. This path must exist before the NodeManager is launched. Only used when the LCE resources handler is set to the CgroupsLCEResourcesHandler, and yarn.nodemanager.linux-container-executor.cgroups.mount is true. yarn.nodemanager.linux-container-executor.cgroups.mount-path This determines which of the two modes that LCE should use on a non-secure cluster. If this value is set to true, then all containers will be launched as the user specified in yarn.nodemanager.linux-container-executor.nonsecure-mode.local-user. If this value is set to false, then containers will run as the user who submitted the application. yarn.nodemanager.linux-container-executor.nonsecure-mode.limit-users true The UNIX user that containers will run as when Linux-container-executor is used in nonsecure mode (a use case for this is using cgroups) if the yarn.nodemanager.linux-container-executor.nonsecure-mode.limit-users is set to true. yarn.nodemanager.linux-container-executor.nonsecure-mode.local-user nobody The allowed pattern for UNIX user names enforced by Linux-container-executor when used in nonsecure mode (use case for this is using cgroups). The default value is taken from /usr/sbin/adduser yarn.nodemanager.linux-container-executor.nonsecure-mode.user-pattern ^[_.A-Za-z0-9][-@_.A-Za-z0-9]{0,255}?[$]?$ This flag determines whether apps should run with strict resource limits or be allowed to consume spare resources if they need them. For example, turning the flag on will restrict apps to use only their share of CPU, even if the node has spare CPU cycles. The default value is false i.e. use available resources. Please note that turning this flag on may reduce job throughput on the cluster. yarn.nodemanager.linux-container-executor.cgroups.strict-resource-usage false This flag determines whether memory limit will be set for the Windows Job Object of the containers launched by the default container executor. yarn.nodemanager.windows-container.memory-limit.enabled false This flag determines whether CPU limit will be set for the Windows Job Object of the containers launched by the default container executor. yarn.nodemanager.windows-container.cpu-limit.enabled false T-file compression types used to compress aggregated logs. yarn.nodemanager.log-aggregation.compression-type none The kerberos principal for the node manager. yarn.nodemanager.principal A comma separated list of services where service name should only contain a-zA-Z0-9_ and can not start with numbers yarn.nodemanager.aux-services No. 
of ms to wait between sending a SIGTERM and SIGKILL to a container yarn.nodemanager.sleep-delay-before-sigkill.ms 250 Max time to wait for a process to come up when trying to cleanup a container yarn.nodemanager.process-kill-wait.ms 2000 The minimum allowed version of a resourcemanager that a nodemanager will connect to. The valid values are NONE (no version checking), EqualToNM (the resourcemanager's version is equal to or greater than the NM version), or a Version String. yarn.nodemanager.resourcemanager.minimum.version NONE Max number of threads in NMClientAsync to process container management events yarn.client.nodemanager-client-async.thread-pool-max-size 500 Max time to wait to establish a connection to NM yarn.client.nodemanager-connect.max-wait-ms 180000 Time interval between each attempt to connect to NM yarn.client.nodemanager-connect.retry-interval-ms 10000 Maximum number of proxy connections to cache for node managers. If set to a value greater than zero then the cache is enabled and the NMClient and MRAppMaster will cache the specified number of node manager proxies. There will be at max one proxy per node manager. Ex. configuring it to a value of 5 will make sure that client will at max have 5 proxies cached with 5 different node managers. These connections for these proxies will be timed out if idle for more than the system wide idle timeout period. Note that this could cause issues on large clusters as many connections could linger simultaneously and lead to a large number of connection threads. The token used for authentication will be used only at connection creation time. If a new token is received then the earlier connection should be closed in order to use the new token. This and (yarn.client.nodemanager-client-async.thread-pool-max-size) are related and should be in sync (no need for them to be equal). If the value of this property is zero then the connection cache is disabled and connections will use a zero idle timeout to prevent too many connection threads on large clusters. yarn.client.max-cached-nodemanagers-proxies 0 Enable the node manager to recover after starting yarn.nodemanager.recovery.enabled false The local filesystem directory in which the node manager will store state when recovery is enabled. yarn.nodemanager.recovery.dir ${hadoop.tmp.dir}/yarn-nm-recovery The time in seconds between full compactions of the NM state database. Setting the interval to zero disables the full compaction cycles. yarn.nodemanager.recovery.compaction-interval-secs 3600 The delay time ms to unregister container metrics after completion. yarn.nodemanager.container-metrics.unregister-delay-ms 10000 yarn.nodemanager.docker-container-executor.exec-name /usr/bin/docker Name or path to the Docker client. yarn.nodemanager.aux-services.mapreduce_shuffle.class org.apache.hadoop.mapred.ShuffleHandler mapreduce.job.jar mapreduce.job.hdfs-servers ${fs.defaultFS} The kerberos principal for the proxy, if the proxy is not running as part of the RM. yarn.web-proxy.principal Keytab for WebAppProxy, if the proxy is not running as part of the RM. yarn.web-proxy.keytab The address for the web proxy as HOST:PORT, if this is not given then the proxy will run as part of the RM yarn.web-proxy.address CLASSPATH for YARN applications. A comma-separated list of CLASSPATH entries. When this value is empty, the following default CLASSPATH for YARN applications would be used. 
For Linux: $HADOOP_CONF_DIR, $HADOOP_COMMON_HOME/share/hadoop/common/*, $HADOOP_COMMON_HOME/share/hadoop/common/lib/*, $HADOOP_HDFS_HOME/share/hadoop/hdfs/*, $HADOOP_HDFS_HOME/share/hadoop/hdfs/lib/*, $HADOOP_YARN_HOME/share/hadoop/yarn/*, $HADOOP_YARN_HOME/share/hadoop/yarn/lib/* For Windows: %HADOOP_CONF_DIR%, %HADOOP_COMMON_HOME%/share/hadoop/common/*, %HADOOP_COMMON_HOME%/share/hadoop/common/lib/*, %HADOOP_HDFS_HOME%/share/hadoop/hdfs/*, %HADOOP_HDFS_HOME%/share/hadoop/hdfs/lib/*, %HADOOP_YARN_HOME%/share/hadoop/yarn/*, %HADOOP_YARN_HOME%/share/hadoop/yarn/lib/* yarn.application.classpath In the server side it indicates whether timeline service is enabled or not. And in the client side, users can enable it to indicate whether client wants to use timeline service. If it's enabled in the client side along with security, then yarn client tries to fetch the delegation tokens for the timeline server. yarn.timeline-service.enabled false The hostname of the timeline service web application. yarn.timeline-service.hostname 0.0.0.0 This is default address for the timeline server to start the RPC server. yarn.timeline-service.address ${yarn.timeline-service.hostname}:10200 The http address of the timeline service web application. yarn.timeline-service.webapp.address ${yarn.timeline-service.hostname}:8188 The https address of the timeline service web application. yarn.timeline-service.webapp.https.address ${yarn.timeline-service.hostname}:8190 The actual address the server will bind to. If this optional address is set, the RPC and webapp servers will bind to this address and the port specified in yarn.timeline-service.address and yarn.timeline-service.webapp.address, respectively. This is most useful for making the service listen to all interfaces by setting to 0.0.0.0. yarn.timeline-service.bind-host Defines the max number of applications could be fetched using REST API or application history protocol and shown in timeline server web ui. yarn.timeline-service.generic-application-history.max-applications 10000 Store class name for timeline store. yarn.timeline-service.store-class org.apache.hadoop.yarn.server.timeline.LeveldbTimelineStore Enable age off of timeline store data. yarn.timeline-service.ttl-enable true Time to live for timeline store data in milliseconds. yarn.timeline-service.ttl-ms 604800000 Store file name for leveldb timeline store. yarn.timeline-service.leveldb-timeline-store.path ${hadoop.tmp.dir}/yarn/timeline Length of time to wait between deletion cycles of leveldb timeline store in milliseconds. yarn.timeline-service.leveldb-timeline-store.ttl-interval-ms 300000 Size of read cache for uncompressed blocks for leveldb timeline store in bytes. yarn.timeline-service.leveldb-timeline-store.read-cache-size 104857600 Size of cache for recently read entity start times for leveldb timeline store in number of entities. yarn.timeline-service.leveldb-timeline-store.start-time-read-cache-size 10000 Size of cache for recently written entity start times for leveldb timeline store in number of entities. yarn.timeline-service.leveldb-timeline-store.start-time-write-cache-size 10000 Handler thread count to serve the client RPC requests. yarn.timeline-service.handler-thread-count 10 yarn.timeline-service.http-authentication.type simple Defines authentication used for the timeline server HTTP endpoint. 
Supported values are: simple | kerberos | #AUTHENTICATION_HANDLER_CLASSNAME# yarn.timeline-service.http-authentication.simple.anonymous.allowed true Indicates if anonymous requests are allowed by the timeline server when using 'simple' authentication. The Kerberos principal for the timeline server. yarn.timeline-service.principal The Kerberos keytab for the timeline server. yarn.timeline-service.keytab /etc/krb5.keytab Comma separated list of UIs that will be hosted yarn.timeline-service.ui-names Default maximum number of retires for timeline servive client and value -1 means no limit. yarn.timeline-service.client.max-retries 30 Client policy for whether timeline operations are non-fatal. Should the failure to obtain a delegation token be considered an application failure (option = false), or should the client attempt to continue to publish information without it (option=true) yarn.timeline-service.client.best-effort false Default retry time interval for timeline servive client. yarn.timeline-service.client.retry-interval-ms 1000 Enable timeline server to recover state after starting. If true, then yarn.timeline-service.state-store-class must be specified. yarn.timeline-service.recovery.enabled false Store class name for timeline state store. yarn.timeline-service.state-store-class org.apache.hadoop.yarn.server.timeline.recovery.LeveldbTimelineStateStore Store file name for leveldb state store. yarn.timeline-service.leveldb-state-store.path ${hadoop.tmp.dir}/yarn/timeline Whether the shared cache is enabled yarn.sharedcache.enabled false The root directory for the shared cache yarn.sharedcache.root-dir /sharedcache The level of nested directories before getting to the checksum directories. It must be non-negative. yarn.sharedcache.nested-level 3 The implementation to be used for the SCM store yarn.sharedcache.store.class org.apache.hadoop.yarn.server.sharedcachemanager.store.InMemorySCMStore The implementation to be used for the SCM app-checker yarn.sharedcache.app-checker.class org.apache.hadoop.yarn.server.sharedcachemanager.RemoteAppChecker A resource in the in-memory store is considered stale if the time since the last reference exceeds the staleness period. This value is specified in minutes. yarn.sharedcache.store.in-memory.staleness-period-mins 10080 Initial delay before the in-memory store runs its first check to remove dead initial applications. Specified in minutes. yarn.sharedcache.store.in-memory.initial-delay-mins 10 The frequency at which the in-memory store checks to remove dead initial applications. Specified in minutes. yarn.sharedcache.store.in-memory.check-period-mins 720 The address of the admin interface in the SCM (shared cache manager) yarn.sharedcache.admin.address 0.0.0.0:8047 The number of threads used to handle SCM admin interface (1 by default) yarn.sharedcache.admin.thread-count 1 The address of the web application in the SCM (shared cache manager) yarn.sharedcache.webapp.address 0.0.0.0:8788 The frequency at which a cleaner task runs. Specified in minutes. yarn.sharedcache.cleaner.period-mins 1440 Initial delay before the first cleaner task is scheduled. Specified in minutes. yarn.sharedcache.cleaner.initial-delay-mins 10 The time to sleep between processing each shared cache resource. Specified in milliseconds. 
yarn.sharedcache.cleaner.resource-sleep-ms 0 The address of the node manager interface in the SCM (shared cache manager) yarn.sharedcache.uploader.server.address 0.0.0.0:8046 The number of threads used to handle shared cache manager requests from the node manager (50 by default) yarn.sharedcache.uploader.server.thread-count 50 The address of the client interface in the SCM (shared cache manager) yarn.sharedcache.client-server.address 0.0.0.0:8045 The number of threads used to handle shared cache manager requests from clients (50 by default) yarn.sharedcache.client-server.thread-count 50 The algorithm used to compute checksums of files (SHA-256 by default) yarn.sharedcache.checksum.algo.impl org.apache.hadoop.yarn.sharedcache.ChecksumSHA256Impl The replication factor for the node manager uploader for the shared cache (10 by default) yarn.sharedcache.nm.uploader.replication.factor 10 The number of threads used to upload files from a node manager instance (20 by default) yarn.sharedcache.nm.uploader.thread-count 20 The interval that the yarn client library uses to poll the completion status of the asynchronous API of application client protocol. yarn.client.application-client-protocol.poll-interval-ms 200 RSS usage of a process computed via /proc/pid/stat is not very accurate as it includes shared pages of a process. /proc/pid/smaps provides useful information like Private_Dirty, Private_Clean, Shared_Dirty, Shared_Clean which can be used for computing more accurate RSS. When this flag is enabled, RSS is computed as Min(Shared_Dirty, Pss) + Private_Clean + Private_Dirty. It excludes read-only shared mappings in RSS computation. yarn.nodemanager.container-monitor.procfs-tree.smaps-based-rss.enabled false Defines how often NMs wake up to upload log files. The default value is -1. By default, the logs will be uploaded when the application is finished. By setting this configure, logs can be uploaded periodically when the application is running. The minimum rolling-interval-seconds can be set is 3600. yarn.nodemanager.log-aggregation.roll-monitoring-interval-seconds -1 Flag to enable cross-origin (CORS) support in the NM. This flag requires the CORS filter initializer to be added to the filter initializers list in core-site.xml. yarn.nodemanager.webapp.cross-origin.enabled false ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1696419316.0 sahara-plugin-vanilla-10.0.0/sahara_plugin_vanilla/plugins/vanilla/v2_7_5/versionhandler.py0000664000175000017500000001435400000000000032031 0ustar00zuulzuul00000000000000# Copyright (c) 2015 Mirantis Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. 
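# Version handler for Hadoop 2.7.5 in the vanilla plugin: it wires cluster
# validation, configuration, start-up, scaling and EDP engine selection on
# top of the shared hadoop2 helpers imported below.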
from oslo_config import cfg from sahara.plugins import conductor from sahara.plugins import context from sahara.plugins import swift_helper from sahara.plugins import utils from sahara_plugin_vanilla.plugins.vanilla import abstractversionhandler as avm from sahara_plugin_vanilla.plugins.vanilla.hadoop2 import config as c from sahara_plugin_vanilla.plugins.vanilla.hadoop2 import keypairs from sahara_plugin_vanilla.plugins.vanilla.hadoop2 import recommendations_utils from sahara_plugin_vanilla.plugins.vanilla.hadoop2 import run_scripts as run from sahara_plugin_vanilla.plugins.vanilla.hadoop2 import scaling as sc from sahara_plugin_vanilla.plugins.vanilla.hadoop2 import starting_scripts from sahara_plugin_vanilla.plugins.vanilla.hadoop2 import utils as u from sahara_plugin_vanilla.plugins.vanilla.hadoop2 import validation as vl from sahara_plugin_vanilla.plugins.vanilla import utils as vu from sahara_plugin_vanilla.plugins.vanilla.v2_7_5 import config_helper from sahara_plugin_vanilla.plugins.vanilla.v2_7_5 import edp_engine CONF = cfg.CONF class VersionHandler(avm.AbstractVersionHandler): def __init__(self): self.pctx = { 'env_confs': config_helper.get_env_configs(), 'all_confs': config_helper.get_plugin_configs() } def get_plugin_configs(self): return self.pctx['all_confs'] def get_node_processes(self): return { "Hadoop": [], "MapReduce": ["historyserver"], "HDFS": ["namenode", "datanode", "secondarynamenode"], "YARN": ["resourcemanager", "nodemanager"], "JobFlow": ["oozie"], "Hive": ["hiveserver"], "Spark": ["spark history server"], "ZooKeeper": ["zookeeper"] } def validate(self, cluster): vl.validate_cluster_creating(self.pctx, cluster) def update_infra(self, cluster): pass def configure_cluster(self, cluster): c.configure_cluster(self.pctx, cluster) def start_cluster(self, cluster): keypairs.provision_keypairs(cluster) starting_scripts.start_namenode(cluster) starting_scripts.start_secondarynamenode(cluster) starting_scripts.start_resourcemanager(cluster) run.start_dn_nm_processes(utils.get_instances(cluster)) run.await_datanodes(cluster) starting_scripts.start_historyserver(cluster) starting_scripts.start_oozie(self.pctx, cluster) starting_scripts.start_hiveserver(self.pctx, cluster) starting_scripts.start_zookeeper(cluster) swift_helper.install_ssl_certs(utils.get_instances(cluster)) self._set_cluster_info(cluster) starting_scripts.start_spark(cluster) def decommission_nodes(self, cluster, instances): sc.decommission_nodes(self.pctx, cluster, instances) def validate_scaling(self, cluster, existing, additional): vl.validate_additional_ng_scaling(cluster, additional) vl.validate_existing_ng_scaling(self.pctx, cluster, existing) zk_ng = utils.get_node_groups(cluster, "zookeeper") if zk_ng: vl.validate_zookeeper_node_count(zk_ng, existing, additional) def scale_cluster(self, cluster, instances): keypairs.provision_keypairs(cluster, instances) sc.scale_cluster(self.pctx, cluster, instances) def _set_cluster_info(self, cluster): nn = vu.get_namenode(cluster) rm = vu.get_resourcemanager(cluster) hs = vu.get_historyserver(cluster) oo = vu.get_oozie(cluster) sp = vu.get_spark_history_server(cluster) info = {} if rm: info['YARN'] = { 'Web UI': 'http://%s:%s' % (rm.get_ip_or_dns_name(), '8088'), 'ResourceManager': 'http://%s:%s' % ( rm.get_ip_or_dns_name(), '8032') } if nn: info['HDFS'] = { 'Web UI': 'http://%s:%s' % (nn.get_ip_or_dns_name(), '50070'), 'NameNode': 'hdfs://%s:%s' % (nn.hostname(), '9000') } if oo: info['JobFlow'] = { 'Oozie': 'http://%s:%s' % (oo.get_ip_or_dns_name(), '11000') } 
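        # The MapReduce history server and Spark UIs are added below; the
        # collected endpoints are then persisted on the cluster via
        # conductor.cluster_update().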
if hs: info['MapReduce JobHistory Server'] = { 'Web UI': 'http://%s:%s' % (hs.get_ip_or_dns_name(), '19888') } if sp: info['Apache Spark'] = { 'Spark UI': 'http://%s:%s' % (sp.management_ip, '4040'), 'Spark History Server UI': 'http://%s:%s' % (sp.management_ip, '18080') } ctx = context.ctx() conductor.cluster_update(ctx, cluster, {'info': info}) def get_edp_engine(self, cluster, job_type): if job_type in edp_engine.EdpOozieEngine.get_supported_job_types(): return edp_engine.EdpOozieEngine(cluster) if job_type in edp_engine.EdpSparkEngine.get_supported_job_types(): return edp_engine.EdpSparkEngine(cluster) return None def get_edp_job_types(self): return (edp_engine.EdpOozieEngine.get_supported_job_types() + edp_engine.EdpSparkEngine.get_supported_job_types()) def get_edp_config_hints(self, job_type): return edp_engine.EdpOozieEngine.get_possible_job_config(job_type) def on_terminate_cluster(self, cluster): u.delete_oozie_password(cluster) keypairs.drop_key(cluster) def get_open_ports(self, node_group): return c.get_open_ports(node_group) def recommend_configs(self, cluster, scaling): recommendations_utils.recommend_configs(cluster, self.get_plugin_configs(), scaling) ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1696419356.9644794 sahara-plugin-vanilla-10.0.0/sahara_plugin_vanilla/plugins/vanilla/v2_8_2/0000775000175000017500000000000000000000000026423 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1696419316.0 sahara-plugin-vanilla-10.0.0/sahara_plugin_vanilla/plugins/vanilla/v2_8_2/__init__.py0000664000175000017500000000000000000000000030522 0ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1696419316.0 sahara-plugin-vanilla-10.0.0/sahara_plugin_vanilla/plugins/vanilla/v2_8_2/config_helper.py0000664000175000017500000001120100000000000031574 0ustar00zuulzuul00000000000000# Copyright (c) 2015 Mirantis Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. 
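# Assembles the plugin configurations exposed for Hadoop 2.8.2: the XML
# defaults bundled under resources/, per-service heap-size (env) settings,
# and the shared Spark and ZooKeeper options.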
import copy from oslo_config import cfg import six from sahara.plugins import provisioning as p from sahara.plugins import utils from sahara_plugin_vanilla.plugins.vanilla.hadoop2 import config_helper CONF = cfg.CONF CONF.import_opt("enable_data_locality", "sahara.topology.topology_helper") CORE_DEFAULT = utils.load_hadoop_xml_defaults( 'plugins/vanilla/v2_8_2/resources/core-default.xml', 'sahara_plugin_vanilla') HDFS_DEFAULT = utils.load_hadoop_xml_defaults( 'plugins/vanilla/v2_8_2/resources/hdfs-default.xml', 'sahara_plugin_vanilla') MAPRED_DEFAULT = utils.load_hadoop_xml_defaults( 'plugins/vanilla/v2_8_2/resources/mapred-default.xml', 'sahara_plugin_vanilla') YARN_DEFAULT = utils.load_hadoop_xml_defaults( 'plugins/vanilla/v2_8_2/resources/yarn-default.xml', 'sahara_plugin_vanilla') OOZIE_DEFAULT = utils.load_hadoop_xml_defaults( 'plugins/vanilla/v2_8_2/resources/oozie-default.xml', 'sahara_plugin_vanilla') HIVE_DEFAULT = utils.load_hadoop_xml_defaults( 'plugins/vanilla/v2_8_2/resources/hive-default.xml', 'sahara_plugin_vanilla') _default_executor_classpath = ":".join( ['/opt/hadoop/share/hadoop/tools/lib/hadoop-openstack-2.8.2.jar']) SPARK_CONFS = copy.deepcopy(config_helper.SPARK_CONFS) SPARK_CONFS['Spark']['OPTIONS'].append( { 'name': 'Executor extra classpath', 'description': 'Value for spark.executor.extraClassPath' ' in spark-defaults.conf' ' (default: %s)' % _default_executor_classpath, 'default': '%s' % _default_executor_classpath, 'priority': 2, } ) XML_CONFS = { "Hadoop": [CORE_DEFAULT], "HDFS": [HDFS_DEFAULT], "YARN": [YARN_DEFAULT], "MapReduce": [MAPRED_DEFAULT], "JobFlow": [OOZIE_DEFAULT], "Hive": [HIVE_DEFAULT] } ENV_CONFS = { "YARN": { 'ResourceManager Heap Size': 1024, 'NodeManager Heap Size': 1024 }, "HDFS": { 'NameNode Heap Size': 1024, 'SecondaryNameNode Heap Size': 1024, 'DataNode Heap Size': 1024 }, "MapReduce": { 'JobHistoryServer Heap Size': 1024 }, "JobFlow": { 'Oozie Heap Size': 1024 } } # Initialise plugin Hadoop configurations PLUGIN_XML_CONFIGS = config_helper.init_xml_configs(XML_CONFS) PLUGIN_ENV_CONFIGS = config_helper.init_env_configs(ENV_CONFS) def _init_all_configs(): configs = [] configs.extend(PLUGIN_XML_CONFIGS) configs.extend(PLUGIN_ENV_CONFIGS) configs.extend(config_helper.PLUGIN_GENERAL_CONFIGS) configs.extend(_get_spark_configs()) configs.extend(_get_zookeeper_configs()) return configs def _get_spark_opt_default(opt_name): for opt in SPARK_CONFS["Spark"]["OPTIONS"]: if opt_name == opt["name"]: return opt["default"] return None def _get_spark_configs(): spark_configs = [] for service, config_items in six.iteritems(SPARK_CONFS): for item in config_items['OPTIONS']: cfg = p.Config(name=item["name"], description=item["description"], default_value=item["default"], applicable_target=service, scope="cluster", is_optional=True, priority=item["priority"]) spark_configs.append(cfg) return spark_configs def _get_zookeeper_configs(): zk_configs = [] for service, config_items in six.iteritems(config_helper.ZOOKEEPER_CONFS): for item in config_items['OPTIONS']: cfg = p.Config(name=item["name"], description=item["description"], default_value=item["default"], applicable_target=service, scope="cluster", is_optional=True, priority=item["priority"]) zk_configs.append(cfg) return zk_configs PLUGIN_CONFIGS = _init_all_configs() def get_plugin_configs(): return PLUGIN_CONFIGS def get_xml_configs(): return PLUGIN_XML_CONFIGS def get_env_configs(): return ENV_CONFS ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1696419316.0 
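The Config objects assembled above are what Sahara exposes through its API; a deployer overrides them per cluster instead of editing the bundled XML defaults. A minimal sketch under that assumption, with illustrative values only (the target/name pairs mirror the applicable_target and name fields of the Config objects defined in this module):

    cluster_configs = {
        # XML-backed targets take the raw Hadoop property names
        'YARN': {'yarn.nodemanager.recovery.enabled': 'true'},
        # Spark options use the friendly names from SPARK_CONFS
        'Spark': {'Executor extra classpath':
                  '/opt/hadoop/share/hadoop/tools/lib/hadoop-openstack-2.8.2.jar'},
    }
    # Such a dict would be passed as the cluster_configs field of a cluster template.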
sahara-plugin-vanilla-10.0.0/sahara_plugin_vanilla/plugins/vanilla/v2_8_2/edp_engine.py0000664000175000017500000000675500000000000031107 0ustar00zuulzuul00000000000000# Copyright (c) 2015 Red Hat, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. import os from sahara.plugins import edp from sahara.plugins import exceptions as ex from sahara.plugins import utils as plugin_utils from sahara_plugin_vanilla.i18n import _ from sahara_plugin_vanilla.plugins.vanilla import confighints_helper from sahara_plugin_vanilla.plugins.vanilla.hadoop2 import edp_engine from sahara_plugin_vanilla.plugins.vanilla import utils as v_utils class EdpOozieEngine(edp_engine.EdpOozieEngine): @staticmethod def get_possible_job_config(job_type): if edp.compare_job_type(job_type, edp.JOB_TYPE_HIVE): return { 'job_config': confighints_helper.get_possible_hive_config_from( 'plugins/vanilla/v2_8_2/resources/hive-default.xml')} if edp.compare_job_type(job_type, edp.JOB_TYPE_MAPREDUCE, edp.JOB_TYPE_MAPREDUCE_STREAMING): return { 'job_config': confighints_helper.get_possible_mapreduce_config_from( 'plugins/vanilla/v2_8_2/resources/mapred-default.xml')} if edp.compare_job_type(job_type, edp.JOB_TYPE_PIG): return { 'job_config': confighints_helper.get_possible_pig_config_from( 'plugins/vanilla/v2_8_2/resources/mapred-default.xml')} return edp_engine.EdpOozieEngine.get_possible_job_config(job_type) class EdpSparkEngine(edp.PluginsSparkJobEngine): edp_base_version = "2.8.2" def __init__(self, cluster): super(EdpSparkEngine, self).__init__(cluster) self.master = plugin_utils.get_instance(cluster, "spark history server") self.plugin_params["spark-user"] = "sudo -u hadoop " self.plugin_params["spark-submit"] = os.path.join( plugin_utils.get_config_value_or_default( "Spark", "Spark home", self.cluster), "bin/spark-submit") self.plugin_params["deploy-mode"] = "cluster" self.plugin_params["master"] = "yarn" driver_cp = plugin_utils.get_config_value_or_default( "Spark", "Executor extra classpath", self.cluster) self.plugin_params["driver-class-path"] = driver_cp @staticmethod def edp_supported(version): return version >= EdpSparkEngine.edp_base_version @staticmethod def job_type_supported(job_type): return (job_type in edp.PluginsSparkJobEngine.get_supported_job_types()) def validate_job_execution(self, cluster, job, data): if (not self.edp_supported(cluster.hadoop_version) or not v_utils.get_spark_history_server(cluster)): raise ex.PluginInvalidDataException( _('Spark {base} or higher required to run {type} jobs').format( base=EdpSparkEngine.edp_base_version, type=job.type)) super(EdpSparkEngine, self).validate_job_execution(cluster, job, data) ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1696419356.9684794 sahara-plugin-vanilla-10.0.0/sahara_plugin_vanilla/plugins/vanilla/v2_8_2/resources/0000775000175000017500000000000000000000000030435 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1696419316.0 
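One caveat about EdpSparkEngine.edp_supported above: hadoop_version is compared against edp_base_version as a plain string, so the check is lexicographic rather than numeric. For the 2.7.x/2.8.x versions handled by this plugin that is fine, but a hypothetical two-digit minor release would not pass; a quick illustration in plain Python:

    assert '2.8.2' >= '2.8.2'           # equal versions pass, as intended
    assert not ('2.10.0' >= '2.8.2')    # lexicographic: '1' < '8', so this would be rejected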
sahara-plugin-vanilla-10.0.0/sahara_plugin_vanilla/plugins/vanilla/v2_8_2/resources/README.rst0000664000175000017500000000254200000000000032127 0ustar00zuulzuul00000000000000Apache Hadoop Configurations for Sahara ======================================= This directory contains default XML configuration files: * core-default.xml * hdfs-default.xml * mapred-default.xml * yarn-default.xml * oozie-default.xml * hive-default.xml These files are applied for Sahara's plugin of Apache Hadoop version 2.8.2 Files were taken from here: * `core-default.xml `_ * `hdfs-default.xml `_ * `yarn-default.xml `_ * `mapred-default.xml `_ * `oozie-default.xml `_ XML configs are used to expose default Hadoop configurations to the users through Sahara's REST API. It allows users to override some config values which will be pushed to the provisioned VMs running Hadoop services as part of appropriate xml config. ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1696419316.0 sahara-plugin-vanilla-10.0.0/sahara_plugin_vanilla/plugins/vanilla/v2_8_2/resources/core-default.xml0000664000175000017500000023172100000000000033537 0ustar00zuulzuul00000000000000 hadoop.common.configuration.version 0.23.0 version of this configuration file hadoop.tmp.dir /tmp/hadoop-${user.name} A base for other temporary directories. io.native.lib.available true Controls whether to use native libraries for bz2 and zlib compression codecs or not. The property does not control any other native libraries. hadoop.http.filter.initializers org.apache.hadoop.http.lib.StaticUserWebFilter A comma separated list of class names. Each class in the list must extend org.apache.hadoop.http.FilterInitializer. The corresponding Filter will be initialized. Then, the Filter will be applied to all user facing jsp and servlet web pages. The ordering of the list defines the ordering of the filters. hadoop.security.authorization false Is service-level authorization enabled? hadoop.security.instrumentation.requires.admin false Indicates if administrator ACLs are required to access instrumentation servlets (JMX, METRICS, CONF, STACKS). hadoop.security.authentication simple Possible values are simple (no authentication), and kerberos hadoop.security.group.mapping org.apache.hadoop.security.JniBasedUnixGroupsMappingWithFallback Class for user to group mapping (get groups for a given user) for ACL. The default implementation, org.apache.hadoop.security.JniBasedUnixGroupsMappingWithFallback, will determine if the Java Native Interface (JNI) is available. If JNI is available the implementation will use the API within hadoop to resolve a list of groups for a user. If JNI is not available then the shell implementation, ShellBasedUnixGroupsMapping, is used. This implementation shells out to the Linux/Unix environment with the bash -c groups command to resolve a list of groups for a user. hadoop.security.dns.interface The name of the Network Interface from which the service should determine its host name for Kerberos login. e.g. eth2. In a multi-homed environment, the setting can be used to affect the _HOST subsitution in the service Kerberos principal. If this configuration value is not set, the service will use its default hostname as returned by InetAddress.getLocalHost().getCanonicalHostName(). Most clusters will not require this setting. hadoop.security.dns.nameserver The host name or IP address of the name server (DNS) which a service Node should use to determine its own host name for Kerberos Login. 
Requires hadoop.security.dns.interface. Most clusters will not require this setting. hadoop.security.dns.log-slow-lookups.enabled false Time name lookups (via SecurityUtil) and log them if they exceed the configured threshold. hadoop.security.dns.log-slow-lookups.threshold.ms 1000 If slow lookup logging is enabled, this threshold is used to decide if a lookup is considered slow enough to be logged. hadoop.security.groups.cache.secs 300 This is the config controlling the validity of the entries in the cache containing the user->group mapping. When this duration has expired, then the implementation of the group mapping provider is invoked to get the groups of the user and then cached back. hadoop.security.groups.negative-cache.secs 30 Expiration time for entries in the the negative user-to-group mapping caching, in seconds. This is useful when invalid users are retrying frequently. It is suggested to set a small value for this expiration, since a transient error in group lookup could temporarily lock out a legitimate user. Set this to zero or negative value to disable negative user-to-group caching. hadoop.security.groups.cache.warn.after.ms 5000 If looking up a single user to group takes longer than this amount of milliseconds, we will log a warning message. hadoop.security.groups.cache.background.reload false Whether to reload expired user->group mappings using a background thread pool. If set to true, a pool of hadoop.security.groups.cache.background.reload.threads is created to update the cache in the background. hadoop.security.groups.cache.background.reload.threads 3 Only relevant if hadoop.security.groups.cache.background.reload is true. Controls the number of concurrent background user->group cache entry refreshes. Pending refresh requests beyond this value are queued and processed when a thread is free. hadoop.security.group.mapping.ldap.connection.timeout.ms 60000 This property is the connection timeout (in milliseconds) for LDAP operations. If the LDAP provider doesn't establish a connection within the specified period, it will abort the connect attempt. Non-positive value means no LDAP connection timeout is specified in which case it waits for the connection to establish until the underlying network times out. hadoop.security.group.mapping.ldap.read.timeout.ms 60000 This property is the read timeout (in milliseconds) for LDAP operations. If the LDAP provider doesn't get a LDAP response within the specified period, it will abort the read attempt. Non-positive value means no read timeout is specified in which case it waits for the response infinitely. hadoop.security.group.mapping.ldap.url The URL of the LDAP server to use for resolving user groups when using the LdapGroupsMapping user to group mapping. hadoop.security.group.mapping.ldap.ssl false Whether or not to use SSL when connecting to the LDAP server. hadoop.security.group.mapping.ldap.ssl.keystore File path to the SSL keystore that contains the SSL certificate required by the LDAP server. hadoop.security.group.mapping.ldap.ssl.keystore.password.file The path to a file containing the password of the LDAP SSL keystore. IMPORTANT: This file should be readable only by the Unix user running the daemons. hadoop.security.group.mapping.ldap.bind.user The distinguished name of the user to bind as when connecting to the LDAP server. This may be left blank if the LDAP server supports anonymous binds. hadoop.security.group.mapping.ldap.bind.password.file The path to a file containing the password of the bind user. 
IMPORTANT: This file should be readable only by the Unix user running the daemons. hadoop.security.group.mapping.ldap.base The search base for the LDAP connection. This is a distinguished name, and will typically be the root of the LDAP directory. hadoop.security.group.mapping.ldap.search.filter.user (&(objectClass=user)(sAMAccountName={0})) An additional filter to use when searching for LDAP users. The default will usually be appropriate for Active Directory installations. If connecting to an LDAP server with a non-AD schema, this should be replaced with (&(objectClass=inetOrgPerson)(uid={0}). {0} is a special string used to denote where the username fits into the filter. If the LDAP server supports posixGroups, Hadoop can enable the feature by setting the value of this property to "posixAccount" and the value of the hadoop.security.group.mapping.ldap.search.filter.group property to "posixGroup". hadoop.security.group.mapping.ldap.search.filter.group (objectClass=group) An additional filter to use when searching for LDAP groups. This should be changed when resolving groups against a non-Active Directory installation. See the description of hadoop.security.group.mapping.ldap.search.filter.user to enable posixGroups support. hadoop.security.group.mapping.ldap.search.attr.member member The attribute of the group object that identifies the users that are members of the group. The default will usually be appropriate for any LDAP installation. hadoop.security.group.mapping.ldap.search.attr.group.name cn The attribute of the group object that identifies the group name. The default will usually be appropriate for all LDAP systems. hadoop.security.group.mapping.ldap.posix.attr.uid.name uidNumber The attribute of posixAccount to use when groups for membership. Mostly useful for schemas wherein groups have memberUids that use an attribute other than uidNumber. hadoop.security.group.mapping.ldap.posix.attr.gid.name gidNumber The attribute of posixAccount indicating the group id. hadoop.security.group.mapping.ldap.directory.search.timeout 10000 The attribute applied to the LDAP SearchControl properties to set a maximum time limit when searching and awaiting a result. Set to 0 if infinite wait period is desired. Default is 10 seconds. Units in milliseconds. hadoop.security.group.mapping.providers Comma separated of names of other providers to provide user to group mapping. Used by CompositeGroupsMapping. hadoop.security.group.mapping.providers.combined true true or false to indicate whether groups from the providers are combined or not. The default value is true. If true, then all the providers will be tried to get groups and all the groups are combined to return as the final results. Otherwise, providers are tried one by one in the configured list order, and if any groups are retrieved from any provider, then the groups will be returned without trying the left ones. hadoop.security.service.user.name.key For those cases where the same RPC protocol is implemented by multiple servers, this configuration is required for specifying the principal name to use for the service when the client wishes to make an RPC call. hadoop.security.uid.cache.secs 14400 This is the config controlling the validity of the entries in the cache containing the userId to userName and groupId to groupName used by NativeIO getFstat(). hadoop.rpc.protection authentication A comma-separated list of protection values for secured sasl connections. Possible values are authentication, integrity and privacy. 
authentication means authentication only and no integrity or privacy; integrity implies authentication and integrity are enabled; and privacy implies all of authentication, integrity and privacy are enabled. hadoop.security.saslproperties.resolver.class can be used to override the hadoop.rpc.protection for a connection at the server side. hadoop.security.saslproperties.resolver.class SaslPropertiesResolver used to resolve the QOP used for a connection. If not specified, the full set of values specified in hadoop.rpc.protection is used while determining the QOP used for the connection. If a class is specified, then the QOP values returned by the class will be used while determining the QOP used for the connection. hadoop.security.sensitive-config-keys secret$ password$ ssl.keystore.pass$ fs.s3.*[Ss]ecret.?[Kk]ey fs.azure.account.key.* credential$ oauth.*token$ hadoop.security.sensitive-config-keys A comma-separated or multi-line list of regular expressions to match configuration keys that should be redacted where appropriate, for example, when logging modified properties during a reconfiguration, private credentials should not be logged. hadoop.workaround.non.threadsafe.getpwuid true Some operating systems or authentication modules are known to have broken implementations of getpwuid_r and getpwgid_r, such that these calls are not thread-safe. Symptoms of this problem include JVM crashes with a stack trace inside these functions. If your system exhibits this issue, enable this configuration parameter to include a lock around the calls as a workaround. An incomplete list of some systems known to have this issue is available at http://wiki.apache.org/hadoop/KnownBrokenPwuidImplementations hadoop.kerberos.kinit.command kinit Used to periodically renew Kerberos credentials when provided to Hadoop. The default setting assumes that kinit is in the PATH of users running the Hadoop client. Change this to the absolute path to kinit if this is not the case. hadoop.kerberos.min.seconds.before.relogin 60 The minimum time between relogin attempts for Kerberos, in seconds. hadoop.security.auth_to_local Maps kerberos principals to local user names io.file.buffer.size 4096 The size of buffer for use in sequence files. The size of this buffer should probably be a multiple of hardware page size (4096 on Intel x86), and it determines how much data is buffered during read and write operations. io.bytes.per.checksum 512 The number of bytes per checksum. Must not be larger than io.file.buffer.size. io.skip.checksum.errors false If true, when a checksum error is encountered while reading a sequence file, entries are skipped, instead of throwing an exception. io.compression.codecs A comma-separated list of the compression codec classes that can be used for compression/decompression. In addition to any classes specified with this property (which take precedence), codec classes on the classpath are discovered using a Java ServiceLoader. io.compression.codec.bzip2.library system-native The native-code library to be used for compression and decompression by the bzip2 codec. This library could be specified either by by name or the full pathname. In the former case, the library is located by the dynamic linker, usually searching the directories specified in the environment variable LD_LIBRARY_PATH. The value of "system-native" indicates that the default system library should be used. To indicate that the algorithm should operate entirely in Java, specify "java-builtin". 
io.serializations org.apache.hadoop.io.serializer.WritableSerialization, org.apache.hadoop.io.serializer.avro.AvroSpecificSerialization, org.apache.hadoop.io.serializer.avro.AvroReflectSerialization A list of serialization classes that can be used for obtaining serializers and deserializers. io.seqfile.local.dir ${hadoop.tmp.dir}/io/local The local directory where sequence file stores intermediate data files during merge. May be a comma-separated list of directories on different devices in order to spread disk i/o. Directories that do not exist are ignored. io.map.index.skip 0 Number of index entries to skip between each entry. Zero by default. Setting this to values larger than zero can facilitate opening large MapFiles using less memory. io.map.index.interval 128 MapFile consist of two files - data file (tuples) and index file (keys). For every io.map.index.interval records written in the data file, an entry (record-key, data-file-position) is written in the index file. This is to allow for doing binary search later within the index file to look up records by their keys and get their closest positions in the data file. fs.defaultFS file:/// The name of the default file system. A URI whose scheme and authority determine the FileSystem implementation. The uri's scheme determines the config property (fs.SCHEME.impl) naming the FileSystem implementation class. The uri's authority is used to determine the host, port, etc. for a filesystem. fs.default.name file:/// Deprecated. Use (fs.defaultFS) property instead fs.trash.interval 0 Number of minutes after which the checkpoint gets deleted. If zero, the trash feature is disabled. This option may be configured both on the server and the client. If trash is disabled server side then the client side configuration is checked. If trash is enabled on the server side then the value configured on the server is used and the client configuration value is ignored. fs.trash.checkpoint.interval 0 Number of minutes between trash checkpoints. Should be smaller or equal to fs.trash.interval. If zero, the value is set to the value of fs.trash.interval. Every time the checkpointer runs it creates a new checkpoint out of current and removes checkpoints created more than fs.trash.interval minutes ago. fs.protected.directories A comma-separated list of directories which cannot be deleted even by the superuser unless they are empty. This setting can be used to guard important system directories against accidental deletion due to administrator error. fs.AbstractFileSystem.file.impl org.apache.hadoop.fs.local.LocalFs The AbstractFileSystem for file: uris. fs.AbstractFileSystem.har.impl org.apache.hadoop.fs.HarFs The AbstractFileSystem for har: uris. fs.AbstractFileSystem.hdfs.impl org.apache.hadoop.fs.Hdfs The FileSystem for hdfs: uris. fs.AbstractFileSystem.viewfs.impl org.apache.hadoop.fs.viewfs.ViewFs The AbstractFileSystem for view file system for viewfs: uris (ie client side mount table:). fs.AbstractFileSystem.ftp.impl org.apache.hadoop.fs.ftp.FtpFs The FileSystem for Ftp: uris. fs.AbstractFileSystem.webhdfs.impl org.apache.hadoop.fs.WebHdfs The FileSystem for webhdfs: uris. fs.AbstractFileSystem.swebhdfs.impl org.apache.hadoop.fs.SWebHdfs The FileSystem for swebhdfs: uris. fs.ftp.host 0.0.0.0 FTP filesystem connects to this server fs.ftp.host.port 21 FTP filesystem connects to fs.ftp.host on this port fs.df.interval 60000 Disk usage statistics refresh interval in msec. fs.du.interval 600000 File space usage statistics refresh interval in msec. 
fs.s3.block.size 67108864 Block size to use when writing files to S3. fs.s3.buffer.dir ${hadoop.tmp.dir}/s3 Determines where on the local filesystem the s3:/s3n: filesystem should store files before sending them to S3 (or after retrieving them from S3). fs.s3.maxRetries 4 The maximum number of retries for reading or writing files to S3, before we signal failure to the application. fs.s3.sleepTimeSeconds 10 The number of seconds to sleep between each S3 retry. fs.swift.impl org.apache.hadoop.fs.swift.snative.SwiftNativeFileSystem The implementation class of the OpenStack Swift Filesystem fs.automatic.close true By default, FileSystem instances are automatically closed at program exit using a JVM shutdown hook. Setting this property to false disables this behavior. This is an advanced option that should only be used by server applications requiring a more carefully orchestrated shutdown sequence. fs.s3n.block.size 67108864 Block size to use when reading files using the native S3 filesystem (s3n: URIs). fs.s3n.multipart.uploads.enabled false Setting this property to true enables multiple uploads to native S3 filesystem. When uploading a file, it is split into blocks if the size is larger than fs.s3n.multipart.uploads.block.size. fs.s3n.multipart.uploads.block.size 67108864 The block size for multipart uploads to native S3 filesystem. Default size is 64MB. fs.s3n.multipart.copy.block.size 5368709120 The block size for multipart copy in native S3 filesystem. Default size is 5GB. fs.s3n.server-side-encryption-algorithm Specify a server-side encryption algorithm for S3. Unset by default, and the only other currently allowable value is AES256. fs.s3a.access.key AWS access key ID used by S3A file system. Omit for IAM role-based or provider-based authentication. fs.s3a.secret.key AWS secret key used by S3A file system. Omit for IAM role-based or provider-based authentication. fs.s3a.aws.credentials.provider Comma-separated class names of credential provider classes which implement com.amazonaws.auth.AWSCredentialsProvider. These are loaded and queried in sequence for a valid set of credentials. Each listed class must implement one of the following means of construction, which are attempted in order: 1. a public constructor accepting java.net.URI and org.apache.hadoop.conf.Configuration, 2. a public static method named getInstance that accepts no arguments and returns an instance of com.amazonaws.auth.AWSCredentialsProvider, or 3. a public default constructor. Specifying org.apache.hadoop.fs.s3a.AnonymousAWSCredentialsProvider allows anonymous access to a publicly accessible S3 bucket without any credentials. Please note that allowing anonymous access to an S3 bucket compromises security and therefore is unsuitable for most use cases. It can be useful for accessing public data sets without requiring AWS credentials. If unspecified, then the default list of credential provider classes, queried in sequence, is: 1. org.apache.hadoop.fs.s3a.BasicAWSCredentialsProvider: supports static configuration of AWS access key ID and secret access key. See also fs.s3a.access.key and fs.s3a.secret.key. 2. com.amazonaws.auth.EnvironmentVariableCredentialsProvider: supports configuration of AWS access key ID and secret access key in environment variables named AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY, as documented in the AWS SDK. 3. 
org.apache.hadoop.fs.s3a.SharedInstanceProfileCredentialsProvider: a shared instance of com.amazonaws.auth.InstanceProfileCredentialsProvider from the AWS SDK, which supports use of instance profile credentials if running in an EC2 VM. Using this shared instance potentially reduces load on the EC2 instance metadata service for multi-threaded applications. fs.s3a.session.token Session token, when using org.apache.hadoop.fs.s3a.TemporaryAWSCredentialsProvider as one of the providers. fs.s3a.security.credential.provider.path Optional comma separated list of credential providers, a list which is prepended to that set in hadoop.security.credential.provider.path fs.s3a.connection.maximum 15 Controls the maximum number of simultaneous connections to S3. fs.s3a.connection.ssl.enabled true Enables or disables SSL connections to S3. fs.s3a.endpoint AWS S3 endpoint to connect to. An up-to-date list is provided in the AWS Documentation: regions and endpoints. Without this property, the standard region (s3.amazonaws.com) is assumed. fs.s3a.path.style.access false Enable S3 path style access ie disabling the default virtual hosting behaviour. Useful for S3A-compliant storage providers as it removes the need to set up DNS for virtual hosting. fs.s3a.proxy.host Hostname of the (optional) proxy server for S3 connections. fs.s3a.proxy.port Proxy server port. If this property is not set but fs.s3a.proxy.host is, port 80 or 443 is assumed (consistent with the value of fs.s3a.connection.ssl.enabled). fs.s3a.proxy.username Username for authenticating with proxy server. fs.s3a.proxy.password Password for authenticating with proxy server. fs.s3a.proxy.domain Domain for authenticating with proxy server. fs.s3a.proxy.workstation Workstation for authenticating with proxy server. fs.s3a.attempts.maximum 20 How many times we should retry commands on transient errors. fs.s3a.connection.establish.timeout 5000 Socket connection setup timeout in milliseconds. fs.s3a.connection.timeout 200000 Socket connection timeout in milliseconds. fs.s3a.socket.send.buffer 8192 Socket send buffer hint to amazon connector. Represented in bytes. fs.s3a.socket.recv.buffer 8192 Socket receive buffer hint to amazon connector. Represented in bytes. fs.s3a.paging.maximum 5000 How many keys to request from S3 when doing directory listings at a time. fs.s3a.threads.max 10 The total number of threads available in the filesystem for data uploads *or any other queued filesystem operation*. fs.s3a.threads.keepalivetime 60 Number of seconds a thread can be idle before being terminated. fs.s3a.max.total.tasks 5 The number of operations which can be queued for execution fs.s3a.multipart.size 100M How big (in bytes) to split upload or copy operations up into. A suffix from the set {K,M,G,T,P} may be used to scale the numeric value. fs.s3a.multipart.threshold 2147483647 How big (in bytes) to split upload or copy operations up into. This also controls the partition size in renamed files, as rename() involves copying the source file(s). A suffix from the set {K,M,G,T,P} may be used to scale the numeric value. fs.s3a.multiobjectdelete.enable true When enabled, multiple single-object delete requests are replaced by a single 'delete multiple objects'-request, reducing the number of requests. Beware: legacy S3-compatible object stores might not support this request. fs.s3a.acl.default Set a canned ACL for newly created and copied objects. 
Value may be Private, PublicRead, PublicReadWrite, AuthenticatedRead, LogDeliveryWrite, BucketOwnerRead, or BucketOwnerFullControl. fs.s3a.multipart.purge false True if you want to purge existing multipart uploads that may not have been completed/aborted correctly. The corresponding purge age is defined in fs.s3a.multipart.purge.age. If set, when the filesystem is instantiated then all outstanding uploads older than the purge age will be terminated -across the entire bucket. This will impact multipart uploads by other applications and users. so should be used sparingly, with an age value chosen to stop failed uploads, without breaking ongoing operations. fs.s3a.multipart.purge.age 86400 Minimum age in seconds of multipart uploads to purge. fs.s3a.server-side-encryption-algorithm Specify a server-side encryption algorithm for s3a: file system. Unset by default, and the only other currently allowable value is AES256. fs.s3a.signing-algorithm Override the default signing algorithm so legacy implementations can still be used fs.s3a.block.size 32M Block size to use when reading files using s3a: file system. A suffix from the set {K,M,G,T,P} may be used to scale the numeric value. fs.s3a.buffer.dir ${hadoop.tmp.dir}/s3a Comma separated list of directories that will be used to buffer file uploads to. fs.s3a.fast.upload false Use the incremental block-based fast upload mechanism with the buffering mechanism set in fs.s3a.fast.upload.buffer. fs.s3a.fast.upload.buffer disk The buffering mechanism to use when using S3A fast upload (fs.s3a.fast.upload=true). Values: disk, array, bytebuffer. This configuration option has no effect if fs.s3a.fast.upload is false. "disk" will use the directories listed in fs.s3a.buffer.dir as the location(s) to save data prior to being uploaded. "array" uses arrays in the JVM heap "bytebuffer" uses off-heap memory within the JVM. Both "array" and "bytebuffer" will consume memory in a single stream up to the number of blocks set by: fs.s3a.multipart.size * fs.s3a.fast.upload.active.blocks. If using either of these mechanisms, keep this value low The total number of threads performing work across all threads is set by fs.s3a.threads.max, with fs.s3a.max.total.tasks values setting the number of queued work items. fs.s3a.fast.upload.active.blocks 4 Maximum Number of blocks a single output stream can have active (uploading, or queued to the central FileSystem instance's pool of queued operations. This stops a single stream overloading the shared thread pool. fs.s3a.readahead.range 64K Bytes to read ahead during a seek() before closing and re-opening the S3 HTTP connection. This option will be overridden if any call to setReadahead() is made to an open stream. A suffix from the set {K,M,G,T,P} may be used to scale the numeric value. fs.s3a.user.agent.prefix Sets a custom value that will be prepended to the User-Agent header sent in HTTP requests to the S3 back-end by S3AFileSystem. The User-Agent header always includes the Hadoop version number followed by a string generated by the AWS SDK. An example is "User-Agent: Hadoop 2.8.0, aws-sdk-java/1.10.6". If this optional property is set, then its value is prepended to create a customized User-Agent. For example, if this configuration property was set to "MyApp", then an example of the resulting User-Agent would be "User-Agent: MyApp, Hadoop 2.8.0, aws-sdk-java/1.10.6". 
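The fs.s3a.* properties above configure access to S3-compatible object storage; through this plugin they are exposed under the "Hadoop" target, since core-default.xml is mapped to "Hadoop" in the XML_CONFS table of config_helper.py. A minimal sketch of the overrides a cluster template might carry, with placeholder credentials and an assumed endpoint:

    cluster_configs = {
        'Hadoop': {
            'fs.s3a.access.key': '<access-key>',               # placeholder
            'fs.s3a.secret.key': '<secret-key>',               # placeholder
            'fs.s3a.endpoint': 'object-store.example.com',     # assumed S3-compatible endpoint
            'fs.s3a.path.style.access': 'true',
        },
    }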
fs.s3a.impl org.apache.hadoop.fs.s3a.S3AFileSystem The implementation class of the S3A Filesystem fs.AbstractFileSystem.s3a.impl org.apache.hadoop.fs.s3a.S3A The implementation class of the S3A AbstractFileSystem. fs.adl.impl org.apache.hadoop.fs.adl.AdlFileSystem fs.AbstractFileSystem.adl.impl org.apache.hadoop.fs.adl.Adl io.seqfile.compress.blocksize 1000000 The minimum block size for compression in block compressed SequenceFiles. io.mapfile.bloom.size 1048576 The size of BloomFilter-s used in BloomMapFile. Each time this many keys is appended the next BloomFilter will be created (inside a DynamicBloomFilter). Larger values minimize the number of filters, which slightly increases the performance, but may waste too much space if the total number of keys is usually much smaller than this number. io.mapfile.bloom.error.rate 0.005 The rate of false positives in BloomFilter-s used in BloomMapFile. As this value decreases, the size of BloomFilter-s increases exponentially. This value is the probability of encountering false positives (default is 0.5%). hadoop.util.hash.type murmur The default implementation of Hash. Currently this can take one of the two values: 'murmur' to select MurmurHash and 'jenkins' to select JenkinsHash. ipc.client.idlethreshold 4000 Defines the threshold number of connections after which connections will be inspected for idleness. ipc.client.kill.max 10 Defines the maximum number of clients to disconnect in one go. ipc.client.connection.maxidletime 10000 The maximum time in msec after which a client will bring down the connection to the server. ipc.client.connect.max.retries 10 Indicates the number of retries a client will make to establish a server connection. ipc.client.connect.retry.interval 1000 Indicates the number of milliseconds a client will wait for before retrying to establish a server connection. ipc.client.connect.timeout 20000 Indicates the number of milliseconds a client will wait for the socket to establish a server connection. ipc.client.connect.max.retries.on.timeouts 45 Indicates the number of retries a client will make on socket timeout to establish a server connection. ipc.client.tcpnodelay true Use TCP_NODELAY flag to bypass Nagle's algorithm transmission delays. ipc.client.low-latency false Use low-latency QoS markers for IPC connections. ipc.client.ping true Send a ping to the server when timeout on reading the response, if set to true. If no failure is detected, the client retries until at least a byte is read or the time given by ipc.client.rpc-timeout.ms is passed. ipc.ping.interval 60000 Timeout on waiting response from server, in milliseconds. The client will send ping when the interval is passed without receiving bytes, if ipc.client.ping is set to true. ipc.client.rpc-timeout.ms 0 Timeout on waiting response from server, in milliseconds. If ipc.client.ping is set to true and this rpc-timeout is greater than the value of ipc.ping.interval, the effective value of the rpc-timeout is rounded up to multiple of ipc.ping.interval. ipc.server.listen.queue.size 128 Indicates the length of the listen queue for servers accepting client connections. ipc.server.log.slow.rpc false This setting is useful to troubleshoot performance issues for various services. If this value is set to true then we log requests that fall into 99th percentile as well as increment RpcSlowCalls counter. ipc.maximum.data.length 67108864 This indicates the maximum IPC message length (bytes) that can be accepted by the server. 
Messages larger than this value are rejected by the immediately to avoid possible OOMs. This setting should rarely need to be changed. ipc.maximum.response.length 134217728 This indicates the maximum IPC message length (bytes) that can be accepted by the client. Messages larger than this value are rejected immediately to avoid possible OOMs. This setting should rarely need to be changed. Set to 0 to disable. hadoop.security.impersonation.provider.class A class which implements ImpersonationProvider interface, used to authorize whether one user can impersonate a specific user. If not specified, the DefaultImpersonationProvider will be used. If a class is specified, then that class will be used to determine the impersonation capability. hadoop.rpc.socket.factory.class.default org.apache.hadoop.net.StandardSocketFactory Default SocketFactory to use. This parameter is expected to be formatted as "package.FactoryClassName". hadoop.rpc.socket.factory.class.ClientProtocol SocketFactory to use to connect to a DFS. If null or empty, use hadoop.rpc.socket.class.default. This socket factory is also used by DFSClient to create sockets to DataNodes. hadoop.socks.server Address (host:port) of the SOCKS server to be used by the SocksSocketFactory. net.topology.node.switch.mapping.impl org.apache.hadoop.net.ScriptBasedMapping The default implementation of the DNSToSwitchMapping. It invokes a script specified in net.topology.script.file.name to resolve node names. If the value for net.topology.script.file.name is not set, the default value of DEFAULT_RACK is returned for all node names. net.topology.impl org.apache.hadoop.net.NetworkTopology The default implementation of NetworkTopology which is classic three layer one. net.topology.script.file.name The script name that should be invoked to resolve DNS names to NetworkTopology names. Example: the script would take host.foo.bar as an argument, and return /rack1 as the output. net.topology.script.number.args 100 The max number of args that the script configured with net.topology.script.file.name should be run with. Each arg is an IP address. net.topology.table.file.name The file name for a topology file, which is used when the net.topology.node.switch.mapping.impl property is set to org.apache.hadoop.net.TableMapping. The file format is a two column text file, with columns separated by whitespace. The first column is a DNS or IP address and the second column specifies the rack where the address maps. If no entry corresponding to a host in the cluster is found, then /default-rack is assumed. file.stream-buffer-size 4096 The size of buffer to stream files. The size of this buffer should probably be a multiple of hardware page size (4096 on Intel x86), and it determines how much data is buffered during read and write operations. file.bytes-per-checksum 512 The number of bytes per checksum. Must not be larger than file.stream-buffer-size file.client-write-packet-size 65536 Packet size for clients to write file.blocksize 67108864 Block size file.replication 1 Replication factor s3.stream-buffer-size 4096 The size of buffer to stream files. The size of this buffer should probably be a multiple of hardware page size (4096 on Intel x86), and it determines how much data is buffered during read and write operations. s3.bytes-per-checksum 512 The number of bytes per checksum. 
Must not be larger than s3.stream-buffer-size s3.client-write-packet-size 65536 Packet size for clients to write s3.blocksize 67108864 Block size s3.replication 3 Replication factor s3native.stream-buffer-size 4096 The size of buffer to stream files. The size of this buffer should probably be a multiple of hardware page size (4096 on Intel x86), and it determines how much data is buffered during read and write operations. s3native.bytes-per-checksum 512 The number of bytes per checksum. Must not be larger than s3native.stream-buffer-size s3native.client-write-packet-size 65536 Packet size for clients to write s3native.blocksize 67108864 Block size s3native.replication 3 Replication factor ftp.stream-buffer-size 4096 The size of buffer to stream files. The size of this buffer should probably be a multiple of hardware page size (4096 on Intel x86), and it determines how much data is buffered during read and write operations. ftp.bytes-per-checksum 512 The number of bytes per checksum. Must not be larger than ftp.stream-buffer-size ftp.client-write-packet-size 65536 Packet size for clients to write ftp.blocksize 67108864 Block size ftp.replication 3 Replication factor tfile.io.chunk.size 1048576 Value chunk size in bytes. Default to 1MB. Values of the length less than the chunk size is guaranteed to have known value length in read time (See also TFile.Reader.Scanner.Entry.isValueLengthKnown()). tfile.fs.output.buffer.size 262144 Buffer size used for FSDataOutputStream in bytes. tfile.fs.input.buffer.size 262144 Buffer size used for FSDataInputStream in bytes. hadoop.http.authentication.type simple Defines authentication used for Oozie HTTP endpoint. Supported values are: simple | kerberos | #AUTHENTICATION_HANDLER_CLASSNAME# hadoop.http.authentication.token.validity 36000 Indicates how long (in seconds) an authentication token is valid before it has to be renewed. hadoop.http.authentication.signature.secret.file ${user.home}/hadoop-http-auth-signature-secret The signature secret for signing the authentication tokens. The same secret should be used for JT/NN/DN/TT configurations. hadoop.http.authentication.cookie.domain The domain to use for the HTTP cookie that stores the authentication token. In order to authentiation to work correctly across all Hadoop nodes web-consoles the domain must be correctly set. IMPORTANT: when using IP addresses, browsers ignore cookies with domain settings. For this setting to work properly all nodes in the cluster must be configured to generate URLs with hostname.domain names on it. hadoop.http.authentication.simple.anonymous.allowed true Indicates if anonymous requests are allowed when using 'simple' authentication. hadoop.http.authentication.kerberos.principal HTTP/_HOST@LOCALHOST Indicates the Kerberos principal to be used for HTTP endpoint. The principal MUST start with 'HTTP/' as per Kerberos HTTP SPNEGO specification. hadoop.http.authentication.kerberos.keytab ${user.home}/hadoop.keytab Location of the keytab file with the credentials for the principal. Referring to the same keytab file Oozie uses for its Kerberos credentials for Hadoop. Enable/disable the cross-origin (CORS) filter. hadoop.http.cross-origin.enabled false Comma separated list of origins that are allowed for web services needing cross-origin (CORS) support. Wildcards (*) and patterns allowed hadoop.http.cross-origin.allowed-origins * Comma separated list of methods that are allowed for web services needing cross-origin (CORS) support. 
hadoop.http.cross-origin.allowed-methods GET,POST,HEAD Comma separated list of headers that are allowed for web services needing cross-origin (CORS) support. hadoop.http.cross-origin.allowed-headers X-Requested-With,Content-Type,Accept,Origin The number of seconds a pre-flighted request can be cached for web services needing cross-origin (CORS) support. hadoop.http.cross-origin.max-age 1800 dfs.ha.fencing.methods List of fencing methods to use for service fencing. May contain builtin methods (eg shell and sshfence) or user-defined method. dfs.ha.fencing.ssh.connect-timeout 30000 SSH connection timeout, in milliseconds, to use with the builtin sshfence fencer. dfs.ha.fencing.ssh.private-key-files The SSH private key files to use with the builtin sshfence fencer. The user name to filter as, on static web filters while rendering content. An example use is the HDFS web UI (user to be used for browsing files). hadoop.http.staticuser.user dr.who ha.zookeeper.quorum A list of ZooKeeper server addresses, separated by commas, that are to be used by the ZKFailoverController in automatic failover. ha.zookeeper.session-timeout.ms 5000 The session timeout to use when the ZKFC connects to ZooKeeper. Setting this value to a lower value implies that server crashes will be detected more quickly, but risks triggering failover too aggressively in the case of a transient error or network blip. ha.zookeeper.parent-znode /hadoop-ha The ZooKeeper znode under which the ZK failover controller stores its information. Note that the nameservice ID is automatically appended to this znode, so it is not normally necessary to configure this, even in a federated environment. ha.zookeeper.acl world:anyone:rwcda A comma-separated list of ZooKeeper ACLs to apply to the znodes used by automatic failover. These ACLs are specified in the same format as used by the ZooKeeper CLI. If the ACL itself contains secrets, you may instead specify a path to a file, prefixed with the '@' symbol, and the value of this configuration will be loaded from within. ha.zookeeper.auth A comma-separated list of ZooKeeper authentications to add when connecting to ZooKeeper. These are specified in the same format as used by the "addauth" command in the ZK CLI. It is important that the authentications specified here are sufficient to access znodes with the ACL specified in ha.zookeeper.acl. If the auths contain secrets, you may instead specify a path to a file, prefixed with the '@' symbol, and the value of this configuration will be loaded from within. hadoop.ssl.keystores.factory.class org.apache.hadoop.security.ssl.FileBasedKeyStoresFactory The keystores factory to use for retrieving certificates. hadoop.ssl.require.client.cert false Whether client certificates are required hadoop.ssl.hostname.verifier DEFAULT The hostname verifier to provide for HttpsURLConnections. Valid values are: DEFAULT, STRICT, STRICT_I6, DEFAULT_AND_LOCALHOST and ALLOW_ALL hadoop.ssl.server.conf ssl-server.xml Resource file from which ssl server keystore information will be extracted. This file is looked up in the classpath, typically it should be in Hadoop conf/ directory. hadoop.ssl.client.conf ssl-client.xml Resource file from which ssl client keystore information will be extracted This file is looked up in the classpath, typically it should be in Hadoop conf/ directory. hadoop.ssl.enabled false Deprecated. Use dfs.http.policy and yarn.http.policy instead. hadoop.ssl.enabled.protocols TLSv1 Protocols supported by the ssl. 
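As an illustration only (the values and hostname below are hypothetical, not defaults shipped with these resource files), a core-site.xml override that enables the CORS filter for a single trusted origin might look like:

<configuration>
  <!-- Hypothetical example: switch on the cross-origin filter described above
       and restrict it to one origin instead of the wildcard default. -->
  <property>
    <name>hadoop.http.cross-origin.enabled</name>
    <value>true</value>
  </property>
  <property>
    <name>hadoop.http.cross-origin.allowed-origins</name>
    <value>https://dashboard.example.com</value>
  </property>
  <property>
    <name>hadoop.http.cross-origin.max-age</name>
    <value>1800</value>
  </property>
</configuration>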
hadoop.jetty.logs.serve.aliases true Enable/Disable aliases serving from jetty fs.permissions.umask-mode 022 The umask used when creating files and directories. Can be in octal or in symbolic. Examples are: "022" (octal for u=rwx,g=r-x,o=r-x in symbolic), or "u=rwx,g=rwx,o=" (symbolic for 007 in octal). ha.health-monitor.connect-retry-interval.ms 1000 How often to retry connecting to the service. ha.health-monitor.check-interval.ms 1000 How often to check the service. ha.health-monitor.sleep-after-disconnect.ms 1000 How long to sleep after an unexpected RPC error. ha.health-monitor.rpc-timeout.ms 45000 Timeout for the actual monitorHealth() calls. ha.failover-controller.new-active.rpc-timeout.ms 60000 Timeout that the FC waits for the new active to become active ha.failover-controller.graceful-fence.rpc-timeout.ms 5000 Timeout that the FC waits for the old active to go to standby ha.failover-controller.graceful-fence.connection.retries 1 FC connection retries for graceful fencing ha.failover-controller.cli-check.rpc-timeout.ms 20000 Timeout that the CLI (manual) FC waits for monitorHealth, getServiceState ipc.client.fallback-to-simple-auth-allowed false When a client is configured to attempt a secure connection, but attempts to connect to an insecure server, that server may instruct the client to switch to SASL SIMPLE (unsecure) authentication. This setting controls whether or not the client will accept this instruction from the server. When false (the default), the client will not allow the fallback to SIMPLE authentication, and will abort the connection. fs.client.resolve.remote.symlinks true Whether to resolve symlinks when accessing a remote Hadoop filesystem. Setting this to false causes an exception to be thrown upon encountering a symlink. This setting does not apply to local filesystems, which automatically resolve local symlinks. nfs.exports.allowed.hosts * rw By default, the export can be mounted by any client. The value string contains machine name and access privilege, separated by whitespace characters. The machine name format can be a single host, a Java regular expression, or an IPv4 address. The access privilege uses rw or ro to specify read/write or read-only access of the machines to exports. If the access privilege is not provided, the default is read-only. Entries are separated by ";". For example: "192.168.0.0/22 rw ; host.*\.example\.com ; host1.test.org ro;". Only the NFS gateway needs to restart after this property is updated. hadoop.user.group.static.mapping.overrides dr.who=; Static mapping of user to groups. This will override the groups if available in the system for the specified user. In otherwords, groups look-up will not happen for these users, instead groups mapped in this configuration will be used. Mapping should be in this format. user1=group1,group2;user2=;user3=group2; Default, "dr.who=;" will consider "dr.who" as user without groups. rpc.metrics.quantile.enable false Setting this property to true and rpc.metrics.percentiles.intervals to a comma-separated list of the granularity in seconds, the 50/75/90/95/99th percentile latency for rpc queue/processing time in milliseconds are added to rpc metrics. rpc.metrics.percentiles.intervals A comma-separated list of the granularity in seconds for the metrics which describe the 50/75/90/95/99th percentile latency for rpc queue/processing time. The metrics are outputted if rpc.metrics.quantile.enable is set to true. 
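A minimal sketch of the static user-to-group mapping syntax described above (the user and group names are placeholders, not values used by the plugin):

<configuration>
  <!-- Hypothetical core-site.xml fragment: "alice" is forced into two groups,
       "bob" into none, and "dr.who" keeps its default empty-group mapping. -->
  <property>
    <name>hadoop.user.group.static.mapping.overrides</name>
    <value>dr.who=;alice=analytics,admins;bob=;</value>
  </property>
</configuration>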
hadoop.security.crypto.codec.classes.EXAMPLECIPHERSUITE The prefix for a given crypto codec, contains a comma-separated list of implementation classes for a given crypto codec (eg EXAMPLECIPHERSUITE). The first implementation will be used if available, others are fallbacks. hadoop.security.crypto.codec.classes.aes.ctr.nopadding org.apache.hadoop.crypto.OpensslAesCtrCryptoCodec, org.apache.hadoop.crypto.JceAesCtrCryptoCodec Comma-separated list of crypto codec implementations for AES/CTR/NoPadding. The first implementation will be used if available, others are fallbacks. hadoop.security.crypto.cipher.suite AES/CTR/NoPadding Cipher suite for crypto codec. hadoop.security.crypto.jce.provider The JCE provider name used in CryptoCodec. hadoop.security.crypto.buffer.size 8192 The buffer size used by CryptoInputStream and CryptoOutputStream. hadoop.security.java.secure.random.algorithm SHA1PRNG The java secure random algorithm. hadoop.security.secure.random.impl Implementation of secure random. hadoop.security.random.device.file.path /dev/urandom OS security random device file path. hadoop.security.key.provider.path The KeyProvider to use when managing zone keys, and interacting with encryption keys when reading and writing to an encryption zone. For hdfs clients, the provider path will be same as namenode's provider path. fs.har.impl.disable.cache true Don't cache 'har' filesystem instances. hadoop.security.kms.client.authentication.retry-count 1 Number of time to retry connecting to KMS on authentication failure hadoop.security.kms.client.encrypted.key.cache.size 500 Size of the EncryptedKeyVersion cache Queue for each key hadoop.security.kms.client.encrypted.key.cache.low-watermark 0.3f If size of the EncryptedKeyVersion cache Queue falls below the low watermark, this cache queue will be scheduled for a refill hadoop.security.kms.client.encrypted.key.cache.num.refill.threads 2 Number of threads to use for refilling depleted EncryptedKeyVersion cache Queues hadoop.security.kms.client.encrypted.key.cache.expiry 43200000 Cache expiry time for a Key, after which the cache Queue for this key will be dropped. Default = 12hrs ipc.server.max.connections 0 The maximum number of concurrent connections a server is allowed to accept. If this limit is exceeded, incoming connections will first fill the listen queue and then may go to an OS-specific listen overflow queue. The client may fail or timeout, but the server can avoid running out of file descriptors using this feature. 0 means no limit. Is the registry enabled in the YARN Resource Manager? If true, the YARN RM will, as needed. create the user and system paths, and purge service records when containers, application attempts and applications complete. If false, the paths must be created by other means, and no automatic cleanup of service records will take place. hadoop.registry.rm.enabled false The root zookeeper node for the registry hadoop.registry.zk.root /registry Zookeeper session timeout in milliseconds hadoop.registry.zk.session.timeout.ms 60000 Zookeeper connection timeout in milliseconds hadoop.registry.zk.connection.timeout.ms 15000 Zookeeper connection retry count before failing hadoop.registry.zk.retry.times 5 hadoop.registry.zk.retry.interval.ms 1000 Zookeeper retry limit in milliseconds, during exponential backoff. 
This places a limit even if the retry times and interval limit, combined with the backoff policy, result in a long retry period hadoop.registry.zk.retry.ceiling.ms 60000 List of hostname:port pairs defining the zookeeper quorum binding for the registry hadoop.registry.zk.quorum localhost:2181 Key to set if the registry is secure. Turning it on changes the permissions policy from "open access" to restrictions on kerberos with the option of a user adding one or more auth key pairs down their own tree. hadoop.registry.secure false A comma separated list of Zookeeper ACL identifiers with system access to the registry in a secure cluster. These are given full access to all entries. If there is an "@" at the end of a SASL entry it instructs the registry client to append the default kerberos domain. hadoop.registry.system.acls sasl:yarn@, sasl:mapred@, sasl:hdfs@ The kerberos realm: used to set the realm of system principals which do not declare their realm, and any other accounts that need the value. If empty, the default realm of the running process is used. If neither are known and the realm is needed, then the registry service/client will fail. hadoop.registry.kerberos.realm Key to define the JAAS context. Used in secure mode hadoop.registry.jaas.context Client Enable hdfs shell commands to display warnings if (fs.defaultFS) property is not set. hadoop.shell.missing.defaultFs.warning false hadoop.shell.safely.delete.limit.num.files 100 Used by -safely option of hadoop fs shell -rm command to avoid accidental deletion of large directories. When enabled, the -rm command requires confirmation if the number of files to be deleted is greater than this limit. The default limit is 100 files. The warning is disabled if the limit is 0 or the -safely is not specified in -rm command. fs.client.htrace.sampler.classes The class names of the HTrace Samplers to use for Hadoop filesystem clients. hadoop.htrace.span.receiver.classes The class names of the Span Receivers to use for Hadoop. fs.adl.impl org.apache.hadoop.fs.adl.AdlFileSystem fs.AbstractFileSystem.adl.impl org.apache.hadoop.fs.adl.Adl adl.feature.ownerandgroup.enableupn false When true : User and Group in FileStatus/AclStatus response is represented as user friendly name as per Azure AD profile. When false (default) : User and Group in FileStatus/AclStatus response is represented by the unique identifier from Azure AD profile (Object ID as GUID). For optimal performance, false is recommended. fs.adl.oauth2.access.token.provider.type ClientCredential Defines Azure Active Directory OAuth2 access token provider type. Supported types are ClientCredential, RefreshToken, and Custom. The ClientCredential type requires property fs.adl.oauth2.client.id, fs.adl.oauth2.credential, and fs.adl.oauth2.refresh.url. The RefreshToken type requires property fs.adl.oauth2.client.id and fs.adl.oauth2.refresh.token. The Custom type requires property fs.adl.oauth2.access.token.provider. fs.adl.oauth2.client.id The OAuth2 client id. fs.adl.oauth2.credential The OAuth2 access key. fs.adl.oauth2.refresh.url The OAuth2 token endpoint. fs.adl.oauth2.refresh.token The OAuth2 refresh token. fs.adl.oauth2.access.token.provider The class name of the OAuth2 access token provider. hadoop.caller.context.enabled false When the feature is enabled, additional fields are written into name-node audit log records for auditing coarse granularity operations. hadoop.caller.context.max.size 128 The maximum bytes a caller context string can have. 
If the passed caller context is longer than this maximum bytes, client will truncate it before sending to server. Note that the server may have a different maximum size, and will truncate the caller context to the maximum size it allows. hadoop.caller.context.signature.max.size 40 The caller's signature (optional) is for offline validation. If the signature exceeds the maximum allowed bytes in server, the caller context will be abandoned, in which case the caller context will not be recorded in audit logs. ././@PaxHeader0000000000000000000000000000020600000000000011453 xustar0000000000000000112 path=sahara-plugin-vanilla-10.0.0/sahara_plugin_vanilla/plugins/vanilla/v2_8_2/resources/create_hive_db.sql 22 mtime=1696419316.0 sahara-plugin-vanilla-10.0.0/sahara_plugin_vanilla/plugins/vanilla/v2_8_2/resources/create_hive_db.s0000664000175000017500000000066100000000000033547 0ustar00zuulzuul00000000000000CREATE DATABASE metastore; USE metastore; SOURCE /opt/hive/scripts/metastore/upgrade/mysql/hive-schema-2.3.0.mysql.sql; CREATE USER 'hive'@'localhost' IDENTIFIED BY '{{password}}'; REVOKE ALL PRIVILEGES, GRANT OPTION FROM 'hive'@'localhost'; GRANT ALL PRIVILEGES ON metastore.* TO 'hive'@'localhost' IDENTIFIED BY '{{password}}'; GRANT ALL PRIVILEGES ON metastore.* TO 'hive'@'%' IDENTIFIED BY '{{password}}'; FLUSH PRIVILEGES; exit ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1696419316.0 sahara-plugin-vanilla-10.0.0/sahara_plugin_vanilla/plugins/vanilla/v2_8_2/resources/hdfs-default.xml0000664000175000017500000033475600000000000033547 0ustar00zuulzuul00000000000000 hadoop.hdfs.configuration.version 1 version of this configuration file dfs.namenode.rpc-address RPC address that handles all clients requests. In the case of HA/Federation where multiple namenodes exist, the name service id is added to the name e.g. dfs.namenode.rpc-address.ns1 dfs.namenode.rpc-address.EXAMPLENAMESERVICE The value of this property will take the form of nn-host1:rpc-port. dfs.namenode.rpc-bind-host The actual address the RPC server will bind to. If this optional address is set, it overrides only the hostname portion of dfs.namenode.rpc-address. It can also be specified per name node or name service for HA/Federation. This is useful for making the name node listen on all interfaces by setting it to 0.0.0.0. dfs.namenode.servicerpc-address RPC address for HDFS Services communication. BackupNode, Datanodes and all other services should be connecting to this address if it is configured. In the case of HA/Federation where multiple namenodes exist, the name service id is added to the name e.g. dfs.namenode.servicerpc-address.ns1 dfs.namenode.rpc-address.EXAMPLENAMESERVICE The value of this property will take the form of nn-host1:rpc-port. If the value of this property is unset the value of dfs.namenode.rpc-address will be used as the default. dfs.namenode.servicerpc-bind-host The actual address the service RPC server will bind to. If this optional address is set, it overrides only the hostname portion of dfs.namenode.servicerpc-address. It can also be specified per name node or name service for HA/Federation. This is useful for making the name node listen on all interfaces by setting it to 0.0.0.0. dfs.namenode.lifeline.rpc-address NameNode RPC lifeline address. This is an optional separate RPC address that can be used to isolate health checks and liveness to protect against resource exhaustion in the main RPC handler pool. 
In the case of HA/Federation where multiple NameNodes exist, the name service ID is added to the name e.g. dfs.namenode.lifeline.rpc-address.ns1. The value of this property will take the form of nn-host1:rpc-port. If this property is not defined, then the NameNode will not start a lifeline RPC server. By default, the property is not defined. dfs.namenode.lifeline.rpc-bind-host The actual address the lifeline RPC server will bind to. If this optional address is set, it overrides only the hostname portion of dfs.namenode.lifeline.rpc-address. It can also be specified per name node or name service for HA/Federation. This is useful for making the name node listen on all interfaces by setting it to 0.0.0.0. dfs.namenode.secondary.http-address 0.0.0.0:50090 The secondary namenode http server address and port. dfs.namenode.secondary.https-address 0.0.0.0:50091 The secondary namenode HTTPS server address and port. dfs.datanode.address 0.0.0.0:50010 The datanode server address and port for data transfer. dfs.datanode.http.address 0.0.0.0:50075 The datanode http server address and port. dfs.datanode.ipc.address 0.0.0.0:50020 The datanode ipc server address and port. dfs.datanode.handler.count 10 The number of server threads for the datanode. dfs.namenode.http-address 0.0.0.0:50070 The address and the base port where the dfs namenode web ui will listen on. dfs.namenode.http-bind-host The actual adress the HTTP server will bind to. If this optional address is set, it overrides only the hostname portion of dfs.namenode.http-address. It can also be specified per name node or name service for HA/Federation. This is useful for making the name node HTTP server listen on all interfaces by setting it to 0.0.0.0. dfs.namenode.heartbeat.recheck-interval 300000 This time decides the interval to check for expired datanodes. With this value and dfs.heartbeat.interval, the interval of deciding the datanode is stale or not is also calculated. The unit of this configuration is millisecond. dfs.http.policy HTTP_ONLY Decide if HTTPS(SSL) is supported on HDFS This configures the HTTP endpoint for HDFS daemons: The following values are supported: - HTTP_ONLY : Service is provided only on http - HTTPS_ONLY : Service is provided only on https - HTTP_AND_HTTPS : Service is provided both on http and https dfs.client.https.need-auth false Whether SSL client certificate authentication is required dfs.client.cached.conn.retry 3 The number of times the HDFS client will pull a socket from the cache. Once this number is exceeded, the client will try to create a new socket. dfs.https.server.keystore.resource ssl-server.xml Resource file from which ssl server keystore information will be extracted dfs.client.https.keystore.resource ssl-client.xml Resource file from which ssl client keystore information will be extracted dfs.datanode.https.address 0.0.0.0:50475 The datanode secure http server address and port. dfs.namenode.https-address 0.0.0.0:50470 The namenode secure http server address and port. dfs.namenode.https-bind-host The actual adress the HTTPS server will bind to. If this optional address is set, it overrides only the hostname portion of dfs.namenode.https-address. It can also be specified per name node or name service for HA/Federation. This is useful for making the name node HTTPS server listen on all interfaces by setting it to 0.0.0.0. dfs.datanode.dns.interface default The name of the Network Interface from which a data node should report its IP address. e.g. eth2. 
This setting may be required for some multi-homed nodes where the DataNodes are assigned multiple hostnames and it is desirable for the DataNodes to use a non-default hostname. Prefer using hadoop.security.dns.interface over dfs.datanode.dns.interface. dfs.datanode.dns.nameserver default The host name or IP address of the name server (DNS) which a DataNode should use to determine its own host name. Prefer using hadoop.security.dns.nameserver over dfs.datanode.dns.nameserver. dfs.namenode.backup.address 0.0.0.0:50100 The backup node server address and port. If the port is 0 then the server will start on a free port. dfs.namenode.backup.http-address 0.0.0.0:50105 The backup node http server address and port. If the port is 0 then the server will start on a free port. dfs.namenode.replication.considerLoad true Decide if chooseTarget considers the target's load or not dfs.namenode.replication.considerLoad.factor 2.0 The factor by which a node's load can exceed the average before being rejected for writes, only if considerLoad is true. dfs.default.chunk.view.size 32768 The number of bytes to view for a file on the browser. dfs.datanode.du.reserved 0 Reserved space in bytes per volume. Always leave this much space free for non dfs use. Specific storage type based reservation is also supported. The property can be followed with corresponding storage types ([ssd]/[disk]/[archive]/[ram_disk]) for cluster with heterogeneous storage. For example, reserved space for RAM_DISK storage can be configured using property 'dfs.datanode.du.reserved.ram_disk'. If specific storage type reservation is not configured then dfs.datanode.du.reserved will be used. dfs.namenode.name.dir file://${hadoop.tmp.dir}/dfs/name Determines where on the local filesystem the DFS name node should store the name table(fsimage). If this is a comma-delimited list of directories then the name table is replicated in all of the directories, for redundancy. dfs.namenode.name.dir.restore false Set to true to enable NameNode to attempt recovering a previously failed dfs.namenode.name.dir. When enabled, a recovery of any failed directory is attempted during checkpoint. dfs.namenode.fs-limits.max-component-length 255 Defines the maximum number of bytes in UTF-8 encoding in each component of a path. A value of 0 will disable the check. dfs.namenode.fs-limits.max-directory-items 1048576 Defines the maximum number of items that a directory may contain. Cannot set the property to a value less than 1 or more than 6400000. dfs.namenode.fs-limits.min-block-size 1048576 Minimum block size in bytes, enforced by the Namenode at create time. This prevents the accidental creation of files with tiny block sizes (and thus many blocks), which can degrade performance. dfs.namenode.fs-limits.max-blocks-per-file 1048576 Maximum number of blocks per file, enforced by the Namenode on write. This prevents the creation of extremely large files which can degrade performance. dfs.namenode.edits.dir ${dfs.namenode.name.dir} Determines where on the local filesystem the DFS name node should store the transaction (edits) file. If this is a comma-delimited list of directories then the transaction file is replicated in all of the directories, for redundancy. Default value is same as dfs.namenode.name.dir dfs.namenode.edits.dir.required This should be a subset of dfs.namenode.edits.dir, to ensure that the transaction (edits) file in these places is always up-to-date. 
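For example (paths are illustrative only), the comma-delimited form of dfs.namenode.name.dir described above could be used to keep a redundant copy of the fsimage on a second local disk:

<configuration>
  <!-- Hypothetical hdfs-site.xml fragment: the name table is replicated in
       both directories, as explained in the dfs.namenode.name.dir description. -->
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>file:///data/1/dfs/name,file:///data/2/dfs/name</value>
  </property>
</configuration>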
dfs.namenode.shared.edits.dir A directory on shared storage between the multiple namenodes in an HA cluster. This directory will be written by the active and read by the standby in order to keep the namespaces synchronized. This directory does not need to be listed in dfs.namenode.edits.dir above. It should be left empty in a non-HA cluster. dfs.namenode.edits.journal-plugin.qjournal org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager dfs.permissions.enabled true If "true", enable permission checking in HDFS. If "false", permission checking is turned off, but all other behavior is unchanged. Switching from one parameter value to the other does not change the mode, owner or group of files or directories. dfs.permissions.superusergroup supergroup The name of the group of super-users. The value should be a single group name. dfs.cluster.administrators ACL for the admins, this configuration is used to control who can access the default servlets in the namenode, etc. The value should be a comma separated list of users and groups. The user list comes first and is separated by a space followed by the group list, e.g. "user1,user2 group1,group2". Both users and groups are optional, so "user1", " group1", "", "user1 group1", "user1,user2 group1,group2" are all valid (note the leading space in " group1"). '*' grants access to all users and groups, e.g. '*', '* ' and ' *' are all valid. dfs.namenode.acls.enabled false Set to true to enable support for HDFS ACLs (Access Control Lists). By default, ACLs are disabled. When ACLs are disabled, the NameNode rejects all RPCs related to setting or getting ACLs. dfs.namenode.lazypersist.file.scrub.interval.sec 300 The NameNode periodically scans the namespace for LazyPersist files with missing blocks and unlinks them from the namespace. This configuration key controls the interval between successive scans. Set it to a negative value to disable this behavior. dfs.block.access.token.enable false If "true", access tokens are used as capabilities for accessing datanodes. If "false", no access tokens are checked on accessing datanodes. dfs.block.access.key.update.interval 600 Interval in minutes at which namenode updates its access keys. dfs.block.access.token.lifetime 600 The lifetime of access tokens in minutes. dfs.datanode.data.dir file://${hadoop.tmp.dir}/dfs/data Determines where on the local filesystem an DFS data node should store its blocks. If this is a comma-delimited list of directories, then data will be stored in all named directories, typically on different devices. The directories should be tagged with corresponding storage types ([SSD]/[DISK]/[ARCHIVE]/[RAM_DISK]) for HDFS storage policies. The default storage type will be DISK if the directory does not have a storage type tagged explicitly. Directories that do not exist will be created if local filesystem permission allows. dfs.datanode.data.dir.perm 700 Permissions for the directories on on the local filesystem where the DFS data node store its blocks. The permissions can either be octal or symbolic. dfs.replication 3 Default block replication. The actual number of replications can be specified when the file is created. The default is used if replication is not specified in create time. dfs.replication.max 512 Maximal block replication. dfs.namenode.replication.min 1 Minimal block replication. dfs.namenode.safemode.replication.min a separate minimum replication factor for calculating safe block count. This is an expert level setting. 
Setting this lower than the dfs.namenode.replication.min is not recommended and can be dangerous for production setups. When it is not set, it takes its value from dfs.namenode.replication.min dfs.blocksize 134217728 The default block size for new files, in bytes. You can use the following suffix (case insensitive): k(kilo), m(mega), g(giga), t(tera), p(peta), e(exa) to specify the size (such as 128k, 512m, 1g, etc.), or provide the complete size in bytes (such as 134217728 for 128 MB). dfs.client.block.write.retries 3 The number of retries for writing blocks to the data nodes, before we signal failure to the application. dfs.client.block.write.replace-datanode-on-failure.enable true If there is a datanode/network failure in the write pipeline, DFSClient will try to remove the failed datanode from the pipeline and then continue writing with the remaining datanodes. As a result, the number of datanodes in the pipeline is decreased. The feature is to add new datanodes to the pipeline. This is a site-wide property to enable/disable the feature. When the cluster size is extremely small, e.g. 3 nodes or less, cluster administrators may want to set the policy to NEVER in the default configuration file or disable this feature. Otherwise, users may experience an unusually high rate of pipeline failures since it is impossible to find new datanodes for replacement. See also dfs.client.block.write.replace-datanode-on-failure.policy dfs.client.block.write.replace-datanode-on-failure.policy DEFAULT This property is used only if the value of dfs.client.block.write.replace-datanode-on-failure.enable is true. ALWAYS: always add a new datanode when an existing datanode is removed. NEVER: never add a new datanode. DEFAULT: Let r be the replication number. Let n be the number of existing datanodes. Add a new datanode only if r is greater than or equal to 3 and either (1) floor(r/2) is greater than or equal to n; or (2) r is greater than n and the block is hflushed/appended. dfs.client.block.write.replace-datanode-on-failure.best-effort false This property is used only if the value of dfs.client.block.write.replace-datanode-on-failure.enable is true. Best effort means that the client will try to replace a failed datanode in the write pipeline (provided that the policy is satisfied), however, it continues the write operation in case the datanode replacement also fails. Suppose the datanode replacement fails. false: An exception should be thrown so that the write will fail. true: The write should be resumed with the remaining datanodes. Note that setting this property to true allows writing to a pipeline with a smaller number of datanodes. As a result, it increases the probability of data loss. dfs.blockreport.intervalMsec 21600000 Determines block reporting interval in milliseconds. dfs.blockreport.initialDelay 0 Delay for first block report in seconds. dfs.blockreport.split.threshold 1000000 If the number of blocks on the DataNode is below this threshold then it will send block reports for all Storage Directories in a single message. If the number of blocks exceeds this threshold then the DataNode will send block reports for each Storage Directory in separate messages. Set to zero to always split. dfs.namenode.max.full.block.report.leases 6 The maximum number of leases for full block reports that the NameNode will issue at any given time. This prevents the NameNode from being flooded with full block reports that use up all the RPC handler threads.
This number should never be more than the number of RPC handler threads or less than 1. dfs.namenode.full.block.report.lease.length.ms 300000 The number of milliseconds that the NameNode will wait before invalidating a full block report lease. This prevents a crashed DataNode from permanently using up a full block report lease. dfs.datanode.directoryscan.interval 21600 Interval in seconds for Datanode to scan data directories and reconcile the difference between blocks in memory and on the disk. dfs.datanode.directoryscan.threads 1 How many threads should the threadpool used to compile reports for volumes in parallel have. dfs.datanode.directoryscan.throttle.limit.ms.per.sec 1000 The report compilation threads are limited to only running for a given number of milliseconds per second, as configured by the property. The limit is taken per thread, not in aggregate, e.g. setting a limit of 100ms for 4 compiler threads will result in each thread being limited to 100ms, not 25ms. Note that the throttle does not interrupt the report compiler threads, so the actual running time of the threads per second will typically be somewhat higher than the throttle limit, usually by no more than 20%. Setting this limit to 1000 disables compiler thread throttling. Only values between 1 and 1000 are valid. Setting an invalid value will result in the throttle being disbled and an error message being logged. 1000 is the default setting. dfs.heartbeat.interval 3 Determines datanode heartbeat interval in seconds. dfs.datanode.lifeline.interval.seconds Sets the interval in seconds between sending DataNode Lifeline Protocol messages from the DataNode to the NameNode. The value must be greater than the value of dfs.heartbeat.interval. If this property is not defined, then the default behavior is to calculate the interval as 3x the value of dfs.heartbeat.interval. Note that normal heartbeat processing may cause the DataNode to postpone sending lifeline messages if they are not required. Under normal operations with speedy heartbeat processing, it is possible that no lifeline messages will need to be sent at all. This property has no effect if dfs.namenode.lifeline.rpc-address is not defined. dfs.namenode.handler.count 10 The number of Namenode RPC server threads that listen to requests from clients. If dfs.namenode.servicerpc-address is not configured then Namenode RPC server threads listen to requests from all nodes. dfs.namenode.service.handler.count 10 The number of Namenode RPC server threads that listen to requests from DataNodes and from all other non-client nodes. dfs.namenode.service.handler.count will be valid only if dfs.namenode.servicerpc-address is configured. dfs.namenode.lifeline.handler.ratio 0.10 A ratio applied to the value of dfs.namenode.handler.count, which then provides the number of RPC server threads the NameNode runs for handling the lifeline RPC server. For example, if dfs.namenode.handler.count is 100, and dfs.namenode.lifeline.handler.factor is 0.10, then the NameNode starts 100 * 0.10 = 10 threads for handling the lifeline RPC server. It is common to tune the value of dfs.namenode.handler.count as a function of the number of DataNodes in a cluster. Using this property allows for the lifeline RPC server handler threads to be tuned automatically without needing to touch a separate property. Lifeline message processing is lightweight, so it is expected to require many fewer threads than the main NameNode RPC server. 
This property is not used if dfs.namenode.lifeline.handler.count is defined, which sets an absolute thread count. This property has no effect if dfs.namenode.lifeline.rpc-address is not defined. dfs.namenode.lifeline.handler.count Sets an absolute number of RPC server threads the NameNode runs for handling the DataNode Lifeline Protocol and HA health check requests from ZKFC. If this property is defined, then it overrides the behavior of dfs.namenode.lifeline.handler.ratio. By default, it is not defined. This property has no effect if dfs.namenode.lifeline.rpc-address is not defined. dfs.namenode.safemode.threshold-pct 0.999f Specifies the percentage of blocks that should satisfy the minimal replication requirement defined by dfs.namenode.replication.min. Values less than or equal to 0 mean not to wait for any particular percentage of blocks before exiting safemode. Values greater than 1 will make safe mode permanent. dfs.namenode.safemode.min.datanodes 0 Specifies the number of datanodes that must be considered alive before the name node exits safemode. Values less than or equal to 0 mean not to take the number of live datanodes into account when deciding whether to remain in safe mode during startup. Values greater than the number of datanodes in the cluster will make safe mode permanent. dfs.namenode.safemode.extension 30000 Determines extension of safe mode in milliseconds after the threshold level is reached. dfs.namenode.resource.check.interval 5000 The interval in milliseconds at which the NameNode resource checker runs. The checker calculates the number of the NameNode storage volumes whose available space is more than dfs.namenode.resource.du.reserved, and enters safemode if the number becomes lower than the minimum value specified by dfs.namenode.resource.checked.volumes.minimum. dfs.namenode.resource.du.reserved 104857600 The amount of space to reserve/require for a NameNode storage directory in bytes. The default is 100MB. dfs.namenode.resource.checked.volumes A list of local directories for the NameNode resource checker to check in addition to the local edits directories. dfs.namenode.resource.checked.volumes.minimum 1 The minimum number of redundant NameNode storage volumes required. dfs.datanode.balance.bandwidthPerSec 10m Specifies the maximum amount of bandwidth that each datanode can utilize for balancing purposes in terms of the number of bytes per second. You can use the following suffix (case insensitive): k(kilo), m(mega), g(giga), t(tera), p(peta), e(exa) to specify the size (such as 128k, 512m, 1g, etc.), or provide the complete size in bytes (such as 134217728 for 128 MB). dfs.mover.max-no-move-interval 60000 If this specified amount of time has elapsed and no block has been moved out of a source DataNode, no more effort will be made to move blocks out of this DataNode in the current Mover iteration. dfs.hosts Names a file that contains a list of hosts that are permitted to connect to the namenode. The full pathname of the file must be specified. If the value is empty, all hosts are permitted. dfs.hosts.exclude Names a file that contains a list of hosts that are not permitted to connect to the namenode. The full pathname of the file must be specified. If the value is empty, no hosts are excluded. dfs.namenode.max.objects 0 The maximum number of files, directories and blocks dfs supports. A value of zero indicates no limit to the number of objects that dfs supports.
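To illustrate the size-suffix notation accepted by dfs.datanode.balance.bandwidthPerSec (the value below is an example, not a recommendation):

<configuration>
  <!-- Hypothetical hdfs-site.xml fragment: raise the balancer bandwidth cap
       from the 10m default to 100 MB/s using the suffix form described above. -->
  <property>
    <name>dfs.datanode.balance.bandwidthPerSec</name>
    <value>100m</value>
  </property>
</configuration>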
dfs.namenode.datanode.registration.ip-hostname-check true If true (the default), then the namenode requires that a connecting datanode's address be resolved to a hostname. If necessary, a reverse DNS lookup is performed. All attempts to register a datanode from an unresolvable address are rejected. It is recommended that this setting be left on to prevent accidental registration of datanodes listed by hostname in the excludes file during a DNS outage. Only set this to false in environments where there is no infrastructure to support reverse DNS lookup. dfs.namenode.decommission.interval 30 Namenode periodicity in seconds to check if decommission is complete. dfs.namenode.decommission.blocks.per.interval 500000 The approximate number of blocks to process per decommission interval, as defined in dfs.namenode.decommission.interval. dfs.namenode.decommission.max.concurrent.tracked.nodes 100 The maximum number of decommission-in-progress datanodes that will be tracked at one time by the namenode. Tracking a decommission-in-progress datanode consumes additional NN memory proportional to the number of blocks on the datanode. Having a conservative limit reduces the potential impact of decommissioning a large number of nodes at once. A value of 0 means no limit will be enforced. dfs.namenode.replication.interval 3 The periodicity in seconds with which the namenode computes replication work for datanodes. dfs.namenode.accesstime.precision 3600000 The access time for an HDFS file is precise up to this value. The default value is 1 hour. Setting a value of 0 disables access times for HDFS. dfs.datanode.plugins Comma-separated list of datanode plug-ins to be activated. dfs.namenode.plugins Comma-separated list of namenode plug-ins to be activated. dfs.namenode.block-placement-policy.default.prefer-local-node true Controls how the default block placement policy places the first replica of a block. When true, it will prefer the node where the client is running. When false, it will prefer a node in the same rack as the client. Setting to false avoids situations where entire copies of large files end up on a single node, thus creating hotspots. dfs.stream-buffer-size 4096 The size of buffer to stream files. The size of this buffer should probably be a multiple of hardware page size (4096 on Intel x86), and it determines how much data is buffered during read and write operations. dfs.bytes-per-checksum 512 The number of bytes per checksum. Must not be larger than dfs.stream-buffer-size dfs.client-write-packet-size 65536 Packet size for clients to write dfs.client.write.exclude.nodes.cache.expiry.interval.millis 600000 The maximum period to keep a DN in the excluded nodes list at a client. After this period, in milliseconds, the previously excluded node(s) will be removed automatically from the cache and will be considered good for block allocations again. Useful to lower or raise in situations where you keep a file open for very long periods (such as a Write-Ahead-Log (WAL) file) to make the writer tolerant to cluster maintenance restarts. Defaults to 10 minutes. dfs.namenode.checkpoint.dir file://${hadoop.tmp.dir}/dfs/namesecondary Determines where on the local filesystem the DFS secondary name node should store the temporary images to merge. If this is a comma-delimited list of directories then the image is replicated in all of the directories for redundancy.
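A minimal sketch of the access-time behaviour described above, using the documented option of disabling access-time tracking entirely (the choice of 0 is the illustrative part):

<configuration>
  <!-- Hypothetical hdfs-site.xml fragment: a value of 0 disables HDFS access
       times, per the dfs.namenode.accesstime.precision description. -->
  <property>
    <name>dfs.namenode.accesstime.precision</name>
    <value>0</value>
  </property>
</configuration>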
dfs.namenode.checkpoint.edits.dir ${dfs.namenode.checkpoint.dir} Determines where on the local filesystem the DFS secondary name node should store the temporary edits to merge. If this is a comma-delimited list of directories then the edits is replicated in all of the directories for redundancy. Default value is same as dfs.namenode.checkpoint.dir dfs.namenode.checkpoint.period 3600 The number of seconds between two periodic checkpoints. dfs.namenode.checkpoint.txns 1000000 The Secondary NameNode or CheckpointNode will create a checkpoint of the namespace every 'dfs.namenode.checkpoint.txns' transactions, regardless of whether 'dfs.namenode.checkpoint.period' has expired. dfs.namenode.checkpoint.check.period 60 The SecondaryNameNode and CheckpointNode will poll the NameNode every 'dfs.namenode.checkpoint.check.period' seconds to query the number of uncheckpointed transactions. dfs.namenode.checkpoint.max-retries 3 The SecondaryNameNode retries failed checkpointing. If the failure occurs while loading fsimage or replaying edits, the number of retries is limited by this variable. dfs.namenode.num.checkpoints.retained 2 The number of image checkpoint files (fsimage_*) that will be retained by the NameNode and Secondary NameNode in their storage directories. All edit logs (stored on edits_* files) necessary to recover an up-to-date namespace from the oldest retained checkpoint will also be retained. dfs.namenode.num.extra.edits.retained 1000000 The number of extra transactions which should be retained beyond what is minimally necessary for a NN restart. It does not translate directly to file's age, or the number of files kept, but to the number of transactions (here "edits" means transactions). One edit file may contain several transactions (edits). During checkpoint, NameNode will identify the total number of edits to retain as extra by checking the latest checkpoint transaction value, subtracted by the value of this property. Then, it scans edits files to identify the older ones that don't include the computed range of retained transactions that are to be kept around, and purges them subsequently. The retainment can be useful for audit purposes or for an HA setup where a remote Standby Node may have been offline for some time and need to have a longer backlog of retained edits in order to start again. Typically each edit is on the order of a few hundred bytes, so the default of 1 million edits should be on the order of hundreds of MBs or low GBs. NOTE: Fewer extra edits may be retained than value specified for this setting if doing so would mean that more segments would be retained than the number configured by dfs.namenode.max.extra.edits.segments.retained. dfs.namenode.max.extra.edits.segments.retained 10000 The maximum number of extra edit log segments which should be retained beyond what is minimally necessary for a NN restart. When used in conjunction with dfs.namenode.num.extra.edits.retained, this configuration property serves to cap the number of extra edits files to a reasonable value. dfs.namenode.delegation.key.update-interval 86400000 The update interval for master key for delegation tokens in the namenode in milliseconds. dfs.namenode.delegation.token.max-lifetime 604800000 The maximum lifetime in milliseconds for which a delegation token is valid. dfs.namenode.delegation.token.renew-interval 86400000 The renewal interval for delegation token in milliseconds. 
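For example (values are illustrative, not defaults), the two checkpoint triggers described above can be tuned together; a checkpoint is taken when either threshold is reached first:

<configuration>
  <!-- Hypothetical hdfs-site.xml fragment: checkpoint every 30 minutes or
       every 500000 transactions, whichever comes first. -->
  <property>
    <name>dfs.namenode.checkpoint.period</name>
    <value>1800</value>
  </property>
  <property>
    <name>dfs.namenode.checkpoint.txns</name>
    <value>500000</value>
  </property>
</configuration>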
dfs.datanode.failed.volumes.tolerated 0 The number of volumes that are allowed to fail before a datanode stops offering service. By default any volume failure will cause a datanode to shutdown. dfs.image.compress false Should the dfs image be compressed? dfs.image.compression.codec org.apache.hadoop.io.compress.DefaultCodec If the dfs image is compressed, how should they be compressed? This has to be a codec defined in io.compression.codecs. dfs.image.transfer.timeout 60000 Socket timeout for image transfer in milliseconds. This timeout and the related dfs.image.transfer.bandwidthPerSec parameter should be configured such that normal image transfer can complete successfully. This timeout prevents client hangs when the sender fails during image transfer. This is socket timeout during image transfer. dfs.image.transfer.bandwidthPerSec 0 Maximum bandwidth used for regular image transfers (instead of bootstrapping the standby namenode), in bytes per second. This can help keep normal namenode operations responsive during checkpointing. The maximum bandwidth and timeout in dfs.image.transfer.timeout should be set such that normal image transfers can complete successfully. A default value of 0 indicates that throttling is disabled. The maximum bandwidth used for bootstrapping standby namenode is configured with dfs.image.transfer-bootstrap-standby.bandwidthPerSec. dfs.image.transfer-bootstrap-standby.bandwidthPerSec 0 Maximum bandwidth used for transferring image to bootstrap standby namenode, in bytes per second. A default value of 0 indicates that throttling is disabled. This default value should be used in most cases, to ensure timely HA operations. The maximum bandwidth used for regular image transfers is configured with dfs.image.transfer.bandwidthPerSec. dfs.image.transfer.chunksize 65536 Chunksize in bytes to upload the checkpoint. Chunked streaming is used to avoid internal buffering of contents of image file of huge size. dfs.namenode.support.allow.format true Does HDFS namenode allow itself to be formatted? You may consider setting this to false for any production cluster, to avoid any possibility of formatting a running DFS. dfs.datanode.max.transfer.threads 4096 Specifies the maximum number of threads to use for transferring data in and out of the DN. dfs.datanode.scan.period.hours 504 If this is positive, the DataNode will not scan any individual block more than once in the specified scan period. If this is negative, the block scanner is disabled. If this is set to zero, then the default value of 504 hours or 3 weeks is used. Prior versions of HDFS incorrectly documented that setting this key to zero will disable the block scanner. dfs.block.scanner.volume.bytes.per.second 1048576 If this is 0, the DataNode's block scanner will be disabled. If this is positive, this is the number of bytes per second that the DataNode's block scanner will try to scan from each volume. dfs.datanode.readahead.bytes 4194304 While reading block files, if the Hadoop native libraries are available, the datanode can use the posix_fadvise system call to explicitly page data into the operating system buffer cache ahead of the current reader's position. This can improve performance especially when disks are highly contended. This configuration specifies the number of bytes ahead of the current read position which the datanode will attempt to read ahead. This feature may be disabled by configuring this property to 0. If the native libraries are not available, this configuration has no effect. 
dfs.datanode.drop.cache.behind.reads false In some workloads, the data read from HDFS is known to be significantly large enough that it is unlikely to be useful to cache it in the operating system buffer cache. In this case, the DataNode may be configured to automatically purge all data from the buffer cache after it is delivered to the client. This behavior is automatically disabled for workloads which read only short sections of a block (e.g HBase random-IO workloads). This may improve performance for some workloads by freeing buffer cache space usage for more cacheable data. If the Hadoop native libraries are not available, this configuration has no effect. dfs.datanode.drop.cache.behind.writes false In some workloads, the data written to HDFS is known to be significantly large enough that it is unlikely to be useful to cache it in the operating system buffer cache. In this case, the DataNode may be configured to automatically purge all data from the buffer cache after it is written to disk. This may improve performance for some workloads by freeing buffer cache space usage for more cacheable data. If the Hadoop native libraries are not available, this configuration has no effect. dfs.datanode.sync.behind.writes false If this configuration is enabled, the datanode will instruct the operating system to enqueue all written data to the disk immediately after it is written. This differs from the usual OS policy which may wait for up to 30 seconds before triggering writeback. This may improve performance for some workloads by smoothing the IO profile for data written to disk. If the Hadoop native libraries are not available, this configuration has no effect. dfs.client.failover.max.attempts 15 Expert only. The number of client failover attempts that should be made before the failover is considered failed. dfs.client.failover.sleep.base.millis 500 Expert only. The time to wait, in milliseconds, between failover attempts increases exponentially as a function of the number of attempts made so far, with a random factor of +/- 50%. This option specifies the base value used in the failover calculation. The first failover will retry immediately. The 2nd failover attempt will delay at least dfs.client.failover.sleep.base.millis milliseconds. And so on. dfs.client.failover.sleep.max.millis 15000 Expert only. The time to wait, in milliseconds, between failover attempts increases exponentially as a function of the number of attempts made so far, with a random factor of +/- 50%. This option specifies the maximum value to wait between failovers. Specifically, the time between two failover attempts will not exceed +/- 50% of dfs.client.failover.sleep.max.millis milliseconds. dfs.client.failover.connection.retries 0 Expert only. Indicates the number of retries a failover IPC client will make to establish a server connection. dfs.client.failover.connection.retries.on.timeouts 0 Expert only. The number of retry attempts a failover IPC client will make on socket timeout when establishing a server connection. dfs.client.datanode-restart.timeout 30 Expert only. The time to wait, in seconds, from reception of an datanode shutdown notification for quick restart, until declaring the datanode dead and invoking the normal recovery mechanisms. The notification is sent by a datanode when it is being shutdown using the shutdownDatanode admin command with the upgrade option. dfs.nameservices Comma-separated list of nameservices. dfs.nameservice.id The ID of this nameservice. 
If the nameservice ID is not configured or more than one nameservice is configured for dfs.nameservices it is determined automatically by matching the local node's address with the configured address. dfs.internal.nameservices Comma-separated list of nameservices that belong to this cluster. Datanode will report to all the nameservices in this list. By default this is set to the value of dfs.nameservices. dfs.ha.namenodes.EXAMPLENAMESERVICE The prefix for a given nameservice, contains a comma-separated list of namenodes for a given nameservice (eg EXAMPLENAMESERVICE). dfs.ha.namenode.id The ID of this namenode. If the namenode ID is not configured it is determined automatically by matching the local node's address with the configured address. dfs.ha.log-roll.period 120 How often, in seconds, the StandbyNode should ask the active to roll edit logs. Since the StandbyNode only reads from finalized log segments, the StandbyNode will only be as up-to-date as how often the logs are rolled. Note that failover triggers a log roll so the StandbyNode will be up to date before it becomes active. dfs.ha.tail-edits.period 60 How often, in seconds, the StandbyNode should check for new finalized log segments in the shared edits log. dfs.ha.automatic-failover.enabled false Whether automatic failover is enabled. See the HDFS High Availability documentation for details on automatic HA configuration. dfs.client.use.datanode.hostname false Whether clients should use datanode hostnames when connecting to datanodes. dfs.datanode.use.datanode.hostname false Whether datanodes should use datanode hostnames when connecting to other datanodes for data transfer. dfs.client.local.interfaces A comma separated list of network interface names to use for data transfer between the client and datanodes. When creating a connection to read from or write to a datanode, the client chooses one of the specified interfaces at random and binds its socket to the IP of that interface. Individual names may be specified as either an interface name (eg "eth0"), a subinterface name (eg "eth0:0"), or an IP address (which may be specified using CIDR notation to match a range of IPs). dfs.datanode.shared.file.descriptor.paths /dev/shm,/tmp A comma-separated list of paths to use when creating file descriptors that will be shared between the DataNode and the DFSClient. Typically we use /dev/shm, so that the file descriptors will not be written to disk. Systems that don't have /dev/shm will fall back to /tmp by default. dfs.short.circuit.shared.memory.watcher.interrupt.check.ms 60000 The length of time in milliseconds that the short-circuit shared memory watcher will go between checking for java interruptions sent from other threads. This is provided mainly for unit tests. dfs.namenode.kerberos.principal The NameNode service principal. This is typically set to nn/_HOST@REALM.TLD. Each NameNode will substitute _HOST with its own fully qualified hostname at startup. The _HOST placeholder allows using the same configuration setting on both NameNodes in an HA setup. dfs.namenode.keytab.file The keytab file used by each NameNode daemon to login as its service principal. The principal name is configured with dfs.namenode.kerberos.principal. dfs.datanode.kerberos.principal The DataNode service principal. This is typically set to dn/_HOST@REALM.TLD. Each DataNode will substitute _HOST with its own fully qualified hostname at startup. The _HOST placeholder allows using the same configuration setting on all DataNodes. 
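As a sketch of the naming pattern described above for an HA pair (the nameservice ID, namenode IDs and hostnames are placeholders; the per-namenode RPC addresses follow the nameservice/namenode suffix convention used by dfs.namenode.rpc-address):

<configuration>
  <!-- Hypothetical hdfs-site.xml fragment for a two-NameNode HA setup. -->
  <property>
    <name>dfs.ha.namenodes.ns1</name>
    <value>nn1,nn2</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.ns1.nn1</name>
    <value>namenode1.example.com:8020</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.ns1.nn2</name>
    <value>namenode2.example.com:8020</value>
  </property>
</configuration>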
dfs.datanode.keytab.file The keytab file used by each DataNode daemon to login as its service principal. The principal name is configured with dfs.datanode.kerberos.principal. dfs.journalnode.kerberos.principal The JournalNode service principal. This is typically set to jn/_HOST@REALM.TLD. Each JournalNode will substitute _HOST with its own fully qualified hostname at startup. The _HOST placeholder allows using the same configuration setting on all JournalNodes. dfs.journalnode.keytab.file The keytab file used by each JournalNode daemon to login as its service principal. The principal name is configured with dfs.journalnode.kerberos.principal. dfs.namenode.kerberos.internal.spnego.principal ${dfs.web.authentication.kerberos.principal} The server principal used by the NameNode for web UI SPNEGO authentication when Kerberos security is enabled. This is typically set to HTTP/_HOST@REALM.TLD The SPNEGO server principal begins with the prefix HTTP/ by convention. If the value is '*', the web server will attempt to login with every principal specified in the keytab file dfs.web.authentication.kerberos.keytab. dfs.journalnode.kerberos.internal.spnego.principal The server principal used by the JournalNode HTTP Server for SPNEGO authentication when Kerberos security is enabled. This is typically set to HTTP/_HOST@REALM.TLD. The SPNEGO server principal begins with the prefix HTTP/ by convention. If the value is '*', the web server will attempt to login with every principal specified in the keytab file dfs.web.authentication.kerberos.keytab. For most deployments this can be set to ${dfs.web.authentication.kerberos.principal} i.e use the value of dfs.web.authentication.kerberos.principal. dfs.secondary.namenode.kerberos.internal.spnego.principal ${dfs.web.authentication.kerberos.principal} The server principal used by the Secondary NameNode for web UI SPNEGO authentication when Kerberos security is enabled. Like all other Secondary NameNode settings, it is ignored in an HA setup. If the value is '*', the web server will attempt to login with every principal specified in the keytab file dfs.web.authentication.kerberos.keytab. dfs.web.authentication.kerberos.principal The server principal used by the NameNode for WebHDFS SPNEGO authentication. Required when WebHDFS and security are enabled. In most secure clusters this setting is also used to specify the values for dfs.namenode.kerberos.internal.spnego.principal and dfs.journalnode.kerberos.internal.spnego.principal. dfs.web.authentication.kerberos.keytab The keytab file for the principal corresponding to dfs.web.authentication.kerberos.principal. dfs.namenode.kerberos.principal.pattern * A client-side RegEx that can be configured to control allowed realms to authenticate with (useful in cross-realm env.) dfs.namenode.avoid.read.stale.datanode false Indicate whether or not to avoid reading from "stale" datanodes whose heartbeat messages have not been received by the namenode for more than a specified time interval. Stale datanodes will be moved to the end of the node list returned for reading. See dfs.namenode.avoid.write.stale.datanode for a similar setting for writes. dfs.namenode.avoid.write.stale.datanode false Indicate whether or not to avoid writing to "stale" datanodes whose heartbeat messages have not been received by the namenode for more than a specified time interval. Writes will avoid using stale datanodes unless more than a configured ratio (dfs.namenode.write.stale.datanode.ratio) of datanodes are marked as stale. 
See dfs.namenode.avoid.read.stale.datanode for a similar setting for reads. dfs.namenode.stale.datanode.interval 30000 Default time interval in milliseconds for marking a datanode as "stale", i.e., if the namenode has not received heartbeat msg from a datanode for more than this time interval, the datanode will be marked and treated as "stale" by default. The stale interval cannot be too small since otherwise this may cause too frequent change of stale states. We thus set a minimum stale interval value (the default value is 3 times of heartbeat interval) and guarantee that the stale interval cannot be less than the minimum value. A stale data node is avoided during lease/block recovery. It can be conditionally avoided for reads (see dfs.namenode.avoid.read.stale.datanode) and for writes (see dfs.namenode.avoid.write.stale.datanode). dfs.namenode.write.stale.datanode.ratio 0.5f When the ratio of number stale datanodes to total datanodes marked is greater than this ratio, stop avoiding writing to stale nodes so as to prevent causing hotspots. dfs.namenode.invalidate.work.pct.per.iteration 0.32f *Note*: Advanced property. Change with caution. This determines the percentage amount of block invalidations (deletes) to do over a single DN heartbeat deletion command. The final deletion count is determined by applying this percentage to the number of live nodes in the system. The resultant number is the number of blocks from the deletion list chosen for proper invalidation over a single heartbeat of a single DN. Value should be a positive, non-zero percentage in float notation (X.Yf), with 1.0f meaning 100%. dfs.namenode.replication.work.multiplier.per.iteration 2 *Note*: Advanced property. Change with caution. This determines the total amount of block transfers to begin in parallel at a DN, for replication, when such a command list is being sent over a DN heartbeat by the NN. The actual number is obtained by multiplying this multiplier with the total number of live nodes in the cluster. The result number is the number of blocks to begin transfers immediately for, per DN heartbeat. This number can be any positive, non-zero integer. nfs.server.port 2049 Specify the port number used by Hadoop NFS. nfs.mountd.port 4242 Specify the port number used by Hadoop mount daemon. nfs.dump.dir /tmp/.hdfs-nfs This directory is used to temporarily save out-of-order writes before writing to HDFS. For each file, the out-of-order writes are dumped after they are accumulated to exceed certain threshold (e.g., 1MB) in memory. One needs to make sure the directory has enough space. nfs.rtmax 1048576 This is the maximum size in bytes of a READ request supported by the NFS gateway. If you change this, make sure you also update the nfs mount's rsize(add rsize= # of bytes to the mount directive). nfs.wtmax 1048576 This is the maximum size in bytes of a WRITE request supported by the NFS gateway. If you change this, make sure you also update the nfs mount's wsize(add wsize= # of bytes to the mount directive). nfs.keytab.file *Note*: Advanced property. Change with caution. This is the path to the keytab file for the hdfs-nfs gateway. This is required when the cluster is kerberized. nfs.kerberos.principal *Note*: Advanced property. Change with caution. This is the name of the kerberos principal. 
This is required when the cluster is kerberized.It must be of this format: nfs-gateway-user/nfs-gateway-host@kerberos-realm nfs.allow.insecure.ports true When set to false, client connections originating from unprivileged ports (those above 1023) will be rejected. This is to ensure that clients connecting to this NFS Gateway must have had root privilege on the machine where they're connecting from. dfs.webhdfs.enabled true Enable WebHDFS (REST API) in Namenodes and Datanodes. hadoop.fuse.connection.timeout 300 The minimum number of seconds that we'll cache libhdfs connection objects in fuse_dfs. Lower values will result in lower memory consumption; higher values may speed up access by avoiding the overhead of creating new connection objects. hadoop.fuse.timer.period 5 The number of seconds between cache expiry checks in fuse_dfs. Lower values will result in fuse_dfs noticing changes to Kerberos ticket caches more quickly. dfs.namenode.metrics.logger.period.seconds 600 This setting controls how frequently the NameNode logs its metrics. The logging configuration must also define one or more appenders for NameNodeMetricsLog for the metrics to be logged. NameNode metrics logging is disabled if this value is set to zero or less than zero. dfs.datanode.metrics.logger.period.seconds 600 This setting controls how frequently the DataNode logs its metrics. The logging configuration must also define one or more appenders for DataNodeMetricsLog for the metrics to be logged. DataNode metrics logging is disabled if this value is set to zero or less than zero. dfs.metrics.percentiles.intervals Comma-delimited set of integers denoting the desired rollover intervals (in seconds) for percentile latency metrics on the Namenode and Datanode. By default, percentile latency metrics are disabled. hadoop.user.group.metrics.percentiles.intervals A comma-separated list of the granularity in seconds for the metrics which describe the 50/75/90/95/99th percentile latency for group resolution in milliseconds. By default, percentile latency metrics are disabled. dfs.encrypt.data.transfer false Whether or not actual block data that is read/written from/to HDFS should be encrypted on the wire. This only needs to be set on the NN and DNs, clients will deduce this automatically. It is possible to override this setting per connection by specifying custom logic via dfs.trustedchannel.resolver.class. dfs.encrypt.data.transfer.algorithm This value may be set to either "3des" or "rc4". If nothing is set, then the configured JCE default on the system is used (usually 3DES.) It is widely believed that 3DES is more cryptographically secure, but RC4 is substantially faster. Note that if AES is supported by both the client and server then this encryption algorithm will only be used to initially transfer keys for AES. (See dfs.encrypt.data.transfer.cipher.suites.) dfs.encrypt.data.transfer.cipher.suites This value may be either undefined or AES/CTR/NoPadding. If defined, then dfs.encrypt.data.transfer uses the specified cipher suite for data encryption. If not defined, then only the algorithm specified in dfs.encrypt.data.transfer.algorithm is used. By default, the property is not defined. dfs.encrypt.data.transfer.cipher.key.bitlength 128 The key bitlength negotiated by dfsclient and datanode for encryption. This value may be set to either 128, 192 or 256. dfs.trustedchannel.resolver.class TrustedChannelResolver is used to determine whether a channel is trusted for plain data transfer. 
The TrustedChannelResolver is invoked on both client and server side. If the resolver indicates that the channel is trusted, then the data transfer will not be encrypted even if dfs.encrypt.data.transfer is set to true. The default implementation returns false indicating that the channel is not trusted. dfs.data.transfer.protection A comma-separated list of SASL protection values used for secured connections to the DataNode when reading or writing block data. Possible values are authentication, integrity and privacy. authentication means authentication only and no integrity or privacy; integrity implies authentication and integrity are enabled; and privacy implies all of authentication, integrity and privacy are enabled. If dfs.encrypt.data.transfer is set to true, then it supersedes the setting for dfs.data.transfer.protection and enforces that all connections must use a specialized encrypted SASL handshake. This property is ignored for connections to a DataNode listening on a privileged port. In this case, it is assumed that the use of a privileged port establishes sufficient trust. dfs.data.transfer.saslproperties.resolver.class SaslPropertiesResolver used to resolve the QOP used for a connection to the DataNode when reading or writing block data. If not specified, the value of hadoop.security.saslproperties.resolver.class is used as the default value. dfs.datanode.hdfs-blocks-metadata.enabled false Boolean which enables backend datanode-side support for the experimental DistributedFileSystem#getFileVBlockStorageLocations API. dfs.client.file-block-storage-locations.num-threads 10 Number of threads used for making parallel RPCs in DistributedFileSystem#getFileBlockStorageLocations(). dfs.client.file-block-storage-locations.timeout.millis 1000 Timeout (in milliseconds) for the parallel RPCs made in DistributedFileSystem#getFileBlockStorageLocations(). dfs.journalnode.rpc-address 0.0.0.0:8485 The JournalNode RPC server address and port. dfs.journalnode.http-address 0.0.0.0:8480 The address and port the JournalNode HTTP server listens on. If the port is 0 then the server will start on a free port. dfs.journalnode.https-address 0.0.0.0:8481 The address and port the JournalNode HTTPS server listens on. If the port is 0 then the server will start on a free port. dfs.namenode.audit.loggers default List of classes implementing audit loggers that will receive audit events. These should be implementations of org.apache.hadoop.hdfs.server.namenode.AuditLogger. The special value "default" can be used to reference the default audit logger, which uses the configured log system. Installing custom audit loggers may affect the performance and stability of the NameNode. Refer to the custom logger's documentation for more details. dfs.datanode.available-space-volume-choosing-policy.balanced-space-threshold 10737418240 Only used when the dfs.datanode.fsdataset.volume.choosing.policy is set to org.apache.hadoop.hdfs.server.datanode.fsdataset.AvailableSpaceVolumeChoosingPolicy. This setting controls how much DN volumes are allowed to differ in terms of bytes of free disk space before they are considered imbalanced. If the free space of all the volumes are within this range of each other, the volumes will be considered balanced and block assignments will be done on a pure round robin basis. 
dfs.datanode.available-space-volume-choosing-policy.balanced-space-preference-fraction 0.75f Only used when the dfs.datanode.fsdataset.volume.choosing.policy is set to org.apache.hadoop.hdfs.server.datanode.fsdataset.AvailableSpaceVolumeChoosingPolicy. This setting controls what percentage of new block allocations will be sent to volumes with more available disk space than others. This setting should be in the range 0.0 - 1.0, though in practice 0.5 - 1.0, since there should be no reason to prefer that volumes with less available disk space receive more block allocations. dfs.namenode.edits.noeditlogchannelflush false Specifies whether to flush edit log file channel. When set, expensive FileChannel#force calls are skipped and synchronous disk writes are enabled instead by opening the edit log file with RandomAccessFile("rws") flags. This can significantly improve the performance of edit log writes on the Windows platform. Note that the behavior of the "rws" flags is platform and hardware specific and might not provide the same level of guarantees as FileChannel#force. For example, the write will skip the disk-cache on SAS and SCSI devices while it might not on SATA devices. This is an expert level setting, change with caution. dfs.client.cache.drop.behind.writes Just like dfs.datanode.drop.cache.behind.writes, this setting causes the page cache to be dropped behind HDFS writes, potentially freeing up more memory for other uses. Unlike dfs.datanode.drop.cache.behind.writes, this is a client-side setting rather than a setting for the entire datanode. If present, this setting will override the DataNode default. If the native libraries are not available to the DataNode, this configuration has no effect. dfs.client.cache.drop.behind.reads Just like dfs.datanode.drop.cache.behind.reads, this setting causes the page cache to be dropped behind HDFS reads, potentially freeing up more memory for other uses. Unlike dfs.datanode.drop.cache.behind.reads, this is a client-side setting rather than a setting for the entire datanode. If present, this setting will override the DataNode default. If the native libraries are not available to the DataNode, this configuration has no effect. dfs.client.cache.readahead When using remote reads, this setting causes the datanode to read ahead in the block file using posix_fadvise, potentially decreasing I/O wait times. Unlike dfs.datanode.readahead.bytes, this is a client-side setting rather than a setting for the entire datanode. If present, this setting will override the DataNode default. When using local reads, this setting determines how much readahead we do in BlockReaderLocal. If the native libraries are not available to the DataNode, this configuration has no effect. dfs.namenode.enable.retrycache true This enables the retry cache on the namenode. Namenode tracks for non-idempotent requests the corresponding response. If a client retries the request, the response from the retry cache is sent. Such operations are tagged with annotation @AtMostOnce in namenode protocols. It is recommended that this flag be set to true. Setting it to false, will result in clients getting failure responses to retried request. This flag must be enabled in HA setup for transparent fail-overs. The entries in the cache have expiration time configurable using dfs.namenode.retrycache.expirytime.millis. dfs.namenode.retrycache.expirytime.millis 600000 The time for which retry cache entries are retained. 
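A rough sketch of the volume-choosing behaviour described for dfs.datanode.available-space-volume-choosing-policy.balanced-space-threshold and balanced-space-preference-fraction follows. The function name, the average-based split, and the random picks are simplifying assumptions for illustration, not the AvailableSpaceVolumeChoosingPolicy source code.

    import random

    BALANCED_SPACE_THRESHOLD = 10737418240   # default threshold: 10 GB
    PREFERENCE_FRACTION = 0.75                # default preference fraction

    def choose_volume(free_bytes):
        # free_bytes: available space per volume, in bytes.
        if max(free_bytes) - min(free_bytes) <= BALANCED_SPACE_THRESHOLD:
            # Volumes are considered balanced: fall back to a round-robin
            # style pick (shown here as a random choice for brevity).
            return random.randrange(len(free_bytes))
        # Imbalanced: with probability PREFERENCE_FRACTION prefer volumes
        # that have more free space than the average.
        avg = sum(free_bytes) / len(free_bytes)
        roomy = [i for i, b in enumerate(free_bytes) if b > avg]
        tight = [i for i, b in enumerate(free_bytes) if b <= avg]
        pool = roomy if random.random() < PREFERENCE_FRACTION else tight
        return random.choice(pool or roomy)

    print(choose_volume([5 * 2**30, 40 * 2**30, 60 * 2**30]))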
dfs.namenode.retrycache.heap.percent 0.03f This parameter configures the heap size allocated for retry cache (excluding the response cached). This corresponds to approximately 4096 entries for every 64MB of namenode process java heap size. Assuming retry cache entry expiration time (configured using dfs.namenode.retrycache.expirytime.millis) of 10 minutes, this enables retry cache to support 7 operations per second sustained for 10 minutes. As the heap size is increased, the operation rate linearly increases. dfs.client.mmap.enabled true If this is set to false, the client won't attempt to perform memory-mapped reads. dfs.client.mmap.cache.size 256 When zero-copy reads are used, the DFSClient keeps a cache of recently used memory mapped regions. This parameter controls the maximum number of entries that we will keep in that cache. The larger this number is, the more file descriptors we will potentially use for memory-mapped files. mmaped files also use virtual address space. You may need to increase your ulimit virtual address space limits before increasing the client mmap cache size. Note that you can still do zero-copy reads when this size is set to 0. dfs.client.mmap.cache.timeout.ms 3600000 The minimum length of time that we will keep an mmap entry in the cache between uses. If an entry is in the cache longer than this, and nobody uses it, it will be removed by a background thread. dfs.client.mmap.retry.timeout.ms 300000 The minimum amount of time that we will wait before retrying a failed mmap operation. dfs.client.short.circuit.replica.stale.threshold.ms 1800000 The maximum amount of time that we will consider a short-circuit replica to be valid, if there is no communication from the DataNode. After this time has elapsed, we will re-fetch the short-circuit replica even if it is in the cache. dfs.namenode.path.based.cache.block.map.allocation.percent 0.25 The percentage of the Java heap which we will allocate to the cached blocks map. The cached blocks map is a hash map which uses chained hashing. Smaller maps may be accessed more slowly if the number of cached blocks is large; larger maps will consume more memory. dfs.datanode.max.locked.memory 0 The amount of memory in bytes to use for caching of block replicas in memory on the datanode. The datanode's maximum locked memory soft ulimit (RLIMIT_MEMLOCK) must be set to at least this value, else the datanode will abort on startup. By default, this parameter is set to 0, which disables in-memory caching. If the native libraries are not available to the DataNode, this configuration has no effect. dfs.namenode.list.cache.directives.num.responses 100 This value controls the number of cache directives that the NameNode will send over the wire in response to a listDirectives RPC. dfs.namenode.list.cache.pools.num.responses 100 This value controls the number of cache pools that the NameNode will send over the wire in response to a listPools RPC. dfs.namenode.path.based.cache.refresh.interval.ms 30000 The amount of milliseconds between subsequent path cache rescans. Path cache rescans are when we calculate which blocks should be cached, and on what datanodes. By default, this parameter is set to 30 seconds. dfs.namenode.path.based.cache.retry.interval.ms 30000 When the NameNode needs to uncache something that is cached, or cache something that is not cached, it must direct the DataNodes to do so by sending a DNA_CACHE or DNA_UNCACHE command in response to a DataNode heartbeat. 
This parameter controls how frequently the NameNode will resend these commands. dfs.datanode.fsdatasetcache.max.threads.per.volume 4 The maximum number of threads per volume to use for caching new data on the datanode. These threads consume both I/O and CPU. This can affect normal datanode operations. dfs.cachereport.intervalMsec 10000 Determines cache reporting interval in milliseconds. After this amount of time, the DataNode sends a full report of its cache state to the NameNode. The NameNode uses the cache report to update its map of cached blocks to DataNode locations. This configuration has no effect if in-memory caching has been disabled by setting dfs.datanode.max.locked.memory to 0 (which is the default). If the native libraries are not available to the DataNode, this configuration has no effect. dfs.namenode.edit.log.autoroll.multiplier.threshold 2.0 Determines when an active namenode will roll its own edit log. The actual threshold (in number of edits) is determined by multiplying this value by dfs.namenode.checkpoint.txns. This prevents extremely large edit files from accumulating on the active namenode, which can cause timeouts during namenode startup and pose an administrative hassle. This behavior is intended as a failsafe for when the standby or secondary namenode fail to roll the edit log by the normal checkpoint threshold. dfs.namenode.edit.log.autoroll.check.interval.ms 300000 How often an active namenode will check if it needs to roll its edit log, in milliseconds. dfs.webhdfs.user.provider.user.pattern ^[A-Za-z_][A-Za-z0-9._-]*[$]?$ Valid pattern for user and group names for webhdfs, it must be a valid java regex. dfs.webhdfs.socket.connect-timeout 60s Socket timeout for connecting to WebHDFS servers. This prevents a WebHDFS client from hanging if the server hostname is misconfigured, or the server does not response before the timeout expires. Value is followed by a unit specifier: ns, us, ms, s, m, h, d for nanoseconds, microseconds, milliseconds, seconds, minutes, hours, days respectively. Values should provide units, but milliseconds are assumed. dfs.webhdfs.socket.read-timeout 60s Socket timeout for reading data from WebHDFS servers. This prevents a WebHDFS client from hanging if the server stops sending data. Value is followed by a unit specifier: ns, us, ms, s, m, h, d for nanoseconds, microseconds, milliseconds, seconds, minutes, hours, days respectively. Values should provide units, but milliseconds are assumed. dfs.client.context default The name of the DFSClient context that we should use. Clients that share a context share a socket cache and short-circuit cache, among other things. You should only change this if you don't want to share with another set of threads. dfs.client.read.shortcircuit false This configuration parameter turns on short-circuit local reads. dfs.client.socket.send.buffer.size 0 Socket send buffer size for a write pipeline in DFSClient side. This may affect TCP connection throughput. If it is set to zero or negative value, no buffer size will be set explicitly, thus enable tcp auto-tuning on some system. The default value is 0. dfs.domain.socket.path Optional. This is a path to a UNIX domain socket that will be used for communication between the DataNode and local HDFS clients. If the string "_PORT" is present in this path, it will be replaced by the TCP port of the DataNode. dfs.client.read.shortcircuit.skip.checksum false If this configuration parameter is set, short-circuit local reads will skip checksums. 
This is normally not recommended, but it may be useful for special setups. You might consider using this if you are doing your own checksumming outside of HDFS. dfs.client.read.shortcircuit.streams.cache.size 256 The DFSClient maintains a cache of recently opened file descriptors. This parameter controls the maximum number of file descriptors in the cache. Setting this higher will use more file descriptors, but potentially provide better performance on workloads involving lots of seeks. dfs.client.read.shortcircuit.streams.cache.expiry.ms 300000 This controls the minimum amount of time file descriptors need to sit in the client cache context before they can be closed for being inactive for too long. dfs.datanode.shared.file.descriptor.paths /dev/shm,/tmp Comma separated paths to the directory on which shared memory segments are created. The client and the DataNode exchange information via this shared memory segment. It tries paths in order until creation of shared memory segment succeeds. dfs.namenode.audit.log.debug.cmdlist A comma separated list of NameNode commands that are written to the HDFS namenode audit log only if the audit log level is debug. dfs.client.use.legacy.blockreader.local false Legacy short-circuit reader implementation based on HDFS-2246 is used if this configuration parameter is true. This is for the platforms other than Linux where the new implementation based on HDFS-347 is not available. dfs.block.local-path-access.user Comma separated list of the users allowd to open block files on legacy short-circuit local read. dfs.client.domain.socket.data.traffic false This control whether we will try to pass normal data traffic over UNIX domain socket rather than over TCP socket on node-local data transfer. This is currently experimental and turned off by default. dfs.namenode.reject-unresolved-dn-topology-mapping false If the value is set to true, then namenode will reject datanode registration if the topology mapping for a datanode is not resolved and NULL is returned (script defined by net.topology.script.file.name fails to execute). Otherwise, datanode will be registered and the default rack will be assigned as the topology path. Topology paths are important for data resiliency, since they define fault domains. Thus it may be unwanted behavior to allow datanode registration with the default rack if the resolving topology failed. dfs.client.slow.io.warning.threshold.ms 30000 The threshold in milliseconds at which we will log a slow io warning in a dfsclient. By default, this parameter is set to 30000 milliseconds (30 seconds). dfs.datanode.slow.io.warning.threshold.ms 300 The threshold in milliseconds at which we will log a slow io warning in a datanode. By default, this parameter is set to 300 milliseconds. dfs.namenode.xattrs.enabled true Whether support for extended attributes is enabled on the NameNode. dfs.namenode.fs-limits.max-xattrs-per-inode 32 Maximum number of extended attributes per inode. dfs.namenode.fs-limits.max-xattr-size 16384 The maximum combined size of the name and value of an extended attribute in bytes. It should be larger than 0, and less than or equal to maximum size hard limit which is 32768. dfs.namenode.lease-recheck-interval-ms 2000 During the release of lease a lock is hold that make any operations on the namenode stuck. In order to not block them during a too long duration we stop releasing lease after this max lock limit. 
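Going back to dfs.namenode.retrycache.heap.percent described earlier in this file, the figures quoted there (roughly 4096 entries per 64 MB of heap, supporting about 7 operations per second over the default 10 minute expiry) are a simple back-of-the-envelope calculation. The sketch below restates that arithmetic; the entries-per-64MB constant is taken directly from the description and is only an approximation.

    ENTRIES_PER_64MB = 4096          # approximation quoted in the description

    def sustained_ops_per_second(heap_mb, expiry_ms=600000):
        # Entries the configured heap slice can hold, divided by how long
        # each entry is retained, gives the sustained request rate the
        # retry cache can absorb.
        entries = ENTRIES_PER_64MB * (heap_mb / 64)
        return entries / (expiry_ms / 1000)

    # 64 MB of cache with the default 10 minute expiry: ~6.8 ops/sec,
    # which matches the "7 operations per second" figure in the text.
    print(round(sustained_ops_per_second(64), 1))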
dfs.namenode.max-lock-hold-to-release-lease-ms 25 During the release of lease a lock is hold that make any operations on the namenode stuck. In order to not block them during a too long duration we stop releasing lease after this max lock limit. dfs.namenode.write-lock-reporting-threshold-ms 5000 When a write lock is held on the namenode for a long time, this will be logged as the lock is released. This sets how long the lock must be held for logging to occur. dfs.namenode.read-lock-reporting-threshold-ms 5000 When a read lock is held on the namenode for a long time, this will be logged as the lock is released. This sets how long the lock must be held for logging to occur. dfs.namenode.lock.detailed-metrics.enabled false If true, the namenode will keep track of how long various operations hold the Namesystem lock for and emit this as metrics. These metrics have names of the form FSN(Read|Write)LockNanosOperationName, where OperationName denotes the name of the operation that initiated the lock hold (this will be OTHER for certain uncategorized operations) and they export the hold time values in nanoseconds. dfs.namenode.fslock.fair true If this is true, the FS Namesystem lock will be used in Fair mode, which will help to prevent writer threads from being starved, but can provide lower lock throughput. See java.util.concurrent.locks.ReentrantReadWriteLock for more information on fair/non-fair locks. dfs.namenode.startup.delay.block.deletion.sec 0 The delay in seconds at which we will pause the blocks deletion after Namenode startup. By default it's disabled. In the case a directory has large number of directories and files are deleted, suggested delay is one hour to give the administrator enough time to notice large number of pending deletion blocks and take corrective action. dfs.namenode.list.encryption.zones.num.responses 100 When listing encryption zones, the maximum number of zones that will be returned in a batch. Fetching the list incrementally in batches improves namenode performance. dfs.namenode.edekcacheloader.interval.ms 1000 When KeyProvider is configured, the interval time of warming up edek cache on NN starts up / becomes active. All edeks will be loaded from KMS into provider cache. The edek cache loader will try to warm up the cache until succeed or NN leaves active state. dfs.namenode.edekcacheloader.initial.delay.ms 3000 When KeyProvider is configured, the time delayed until the first attempt to warm up edek cache on NN start up / become active. dfs.namenode.inotify.max.events.per.rpc 1000 Maximum number of events that will be sent to an inotify client in a single RPC response. The default value attempts to amortize away the overhead for this RPC while avoiding huge memory requirements for the client and NameNode (1000 events should consume no more than 1 MB.) dfs.user.home.dir.prefix /user The directory to prepend to user name to get the user's home direcotry. dfs.datanode.cache.revocation.timeout.ms 900000 When the DFSClient reads from a block file which the DataNode is caching, the DFSClient can skip verifying checksums. The DataNode will keep the block file in cache until the client is done. If the client takes an unusually long time, though, the DataNode may need to evict the block file from the cache anyway. This value controls how long the DataNode will wait for the client to release a replica that it is reading without checksums. 
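The dfs.user.home.dir.prefix behaviour above is plain string concatenation; a one-line sketch (the function name is illustrative):

    def hdfs_home_dir(user, prefix="/user"):
        # The home directory is the configured prefix with the user name
        # appended, e.g. /user/alice for user "alice".
        return "%s/%s" % (prefix, user)

    print(hdfs_home_dir("alice"))   # -> /user/alice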
dfs.datanode.cache.revocation.polling.ms 500 How often the DataNode should poll to see if the clients have stopped using a replica that the DataNode wants to uncache. dfs.datanode.block.id.layout.upgrade.threads 12 The number of threads to use when creating hard links from current to previous blocks during upgrade of a DataNode to block ID-based block layout (see HDFS-6482 for details on the layout). dfs.storage.policy.enabled true Allow users to change the storage policy on files and directories. dfs.namenode.legacy-oiv-image.dir Determines where to save the namespace in the old fsimage format during checkpointing by standby NameNode or SecondaryNameNode. Users can dump the contents of the old format fsimage by oiv_legacy command. If the value is not specified, old format fsimage will not be saved in checkpoint. dfs.namenode.top.enabled true Enable nntop: reporting top users on namenode dfs.namenode.top.window.num.buckets 10 Number of buckets in the rolling window implementation of nntop dfs.namenode.top.num.users 10 Number of top users returned by the top tool dfs.namenode.top.windows.minutes 1,5,25 comma separated list of nntop reporting periods in minutes dfs.webhdfs.ugi.expire.after.access 600000 How long in milliseconds after the last access the cached UGI will expire. With 0, never expire. dfs.namenode.blocks.per.postponedblocks.rescan 10000 Number of blocks to rescan for each iteration of postponedMisreplicatedBlocks. dfs.datanode.block-pinning.enabled false Whether pin blocks on favored DataNode. dfs.client.block.write.locateFollowingBlock.initial.delay.ms 400 The initial delay (unit is ms) for locateFollowingBlock, the delay time will increase exponentially(double) for each retry. dfs.ha.zkfc.nn.http.timeout.ms 20000 The HTTP connection and read timeout value (unit is ms ) when DFS ZKFC tries to get local NN thread dump after local NN becomes SERVICE_NOT_RESPONDING or SERVICE_UNHEALTHY. If it is set to zero, DFS ZKFC won't get local NN thread dump. dfs.namenode.quota.init-threads 4 The number of concurrent threads to be used in quota initialization. The speed of quota initialization also affects the namenode fail-over latency. If the size of name space is big, try increasing this. dfs.datanode.transfer.socket.send.buffer.size 0 Socket send buffer size for DataXceiver (mirroring packets to downstream in pipeline). This may affect TCP connection throughput. If it is set to zero or negative value, no buffer size will be set explicitly, thus enable tcp auto-tuning on some system. The default value is 0. dfs.datanode.transfer.socket.recv.buffer.size 0 Socket receive buffer size for DataXceiver (receiving packets from client during block writing). This may affect TCP connection throughput. If it is set to zero or negative value, no buffer size will be set explicitly, thus enable tcp auto-tuning on some system. The default value is 0. dfs.namenode.upgrade.domain.factor ${dfs.replication} This is valid only when block placement policy is set to BlockPlacementPolicyWithUpgradeDomain. It defines the number of unique upgrade domains any block's replicas should have. When the number of replicas is less or equal to this value, the policy ensures each replica has an unique upgrade domain. When the number of replicas is greater than this value, the policy ensures the number of unique domains is at least this value. dfs.datanode.bp-ready.timeout 20 The maximum wait time for datanode to be ready before failing the received request. 
Setting this to 0 fails requests right away if the datanode is not yet registered with the namenode. This wait time reduces initial request failures after datanode restart. dfs.webhdfs.rest-csrf.enabled false If true, then enables WebHDFS protection against cross-site request forgery (CSRF). The WebHDFS client also uses this property to determine whether or not it needs to send the custom CSRF prevention header in its HTTP requests. dfs.webhdfs.rest-csrf.custom-header X-XSRF-HEADER The name of a custom header that HTTP requests must send when protection against cross-site request forgery (CSRF) is enabled for WebHDFS by setting dfs.webhdfs.rest-csrf.enabled to true. The WebHDFS client also uses this property to determine whether or not it needs to send the custom CSRF prevention header in its HTTP requests. dfs.webhdfs.rest-csrf.methods-to-ignore GET,OPTIONS,HEAD,TRACE A comma-separated list of HTTP methods that do not require HTTP requests to include a custom header when protection against cross-site request forgery (CSRF) is enabled for WebHDFS by setting dfs.webhdfs.rest-csrf.enabled to true. The WebHDFS client also uses this property to determine whether or not it needs to send the custom CSRF prevention header in its HTTP requests. dfs.webhdfs.rest-csrf.browser-useragents-regex ^Mozilla.*,^Opera.* A comma-separated list of regular expressions used to match against an HTTP request's User-Agent header when protection against cross-site request forgery (CSRF) is enabled for WebHDFS by setting dfs.webhdfs.rest-csrf.enabled to true. If the incoming User-Agent matches any of these regular expressions, then the request is considered to be sent by a browser, and therefore CSRF prevention is enforced. If the request's User-Agent does not match any of these regular expressions, then the request is considered to be sent by something other than a browser, such as scripted automation. In this case, CSRF is not a potential attack vector, so the prevention is not enforced. This helps achieve backwards-compatibility with existing automation that has not been updated to send the CSRF prevention header. dfs.xframe.enabled true If true, then enables protection against clickjacking by returning the X-FRAME-OPTIONS header value set to SAMEORIGIN. Clickjacking protection prevents an attacker from using transparent or opaque layers to trick a user into clicking on a button or link on another page. dfs.xframe.value SAMEORIGIN This configuration value allows the user to specify the value for the X-FRAME-OPTIONS header. The possible values for this field are DENY, SAMEORIGIN and ALLOW-FROM. Any other value will throw an exception when namenode and datanodes are starting up. dfs.http.client.retry.policy.enabled false If "true", enable the retry policy of WebHDFS client. If "false", retry policy is turned off. Enabling the retry policy can be quite useful while using WebHDFS to copy large files between clusters that could time out, or copy files between HA clusters that could fail over during the copy. dfs.http.client.retry.policy.spec 10000,6,60000,10 Specify a policy of multiple linear random retry for WebHDFS client, e.g. given pairs of number of retries and sleep time (n0, t0), (n1, t1), ..., the first n0 retries sleep t0 milliseconds on average, the following n1 retries sleep t1 milliseconds on average, and so on. dfs.http.client.failover.max.attempts 15 Specify the max number of failover attempts for WebHDFS client in case of network exception. 
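The browser detection described above for dfs.webhdfs.rest-csrf.browser-useragents-regex and dfs.webhdfs.rest-csrf.methods-to-ignore can be sketched as follows. Treat it as an illustration of the matching rule only, not the server's actual code path; the function name is an assumption.

    import re

    BROWSER_PATTERNS = ["^Mozilla.*", "^Opera.*"]        # default regex list
    METHODS_TO_IGNORE = {"GET", "OPTIONS", "HEAD", "TRACE"}

    def csrf_header_required(method, user_agent):
        # Only browser-like User-Agents on non-ignored methods are required
        # to carry the custom CSRF prevention header.
        if method in METHODS_TO_IGNORE:
            return False
        return any(re.match(p, user_agent) for p in BROWSER_PATTERNS)

    print(csrf_header_required("PUT", "Mozilla/5.0"))   # True
    print(csrf_header_required("PUT", "curl/7.61.1"))   # False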
dfs.http.client.retry.max.attempts 10 Specify the max number of retry attempts for the WebHDFS client. If the difference between retried attempts and failed-over attempts is larger than the max number of retry attempts, there will be no more retries. dfs.http.client.failover.sleep.base.millis 500 Specify the base amount of time in milliseconds upon which the exponentially increased sleep time between retries or failovers is calculated for the WebHDFS client. dfs.http.client.failover.sleep.max.millis 15000 Specify the upper bound of sleep time in milliseconds between retries or failovers for the WebHDFS client. dfs.balancer.keytab.enabled false Set to true to enable login using a keytab for Kerberized Hadoop. dfs.balancer.address 0.0.0.0:0 The hostname used for a keytab based Kerberos login. Keytab based login can be enabled with dfs.balancer.keytab.enabled. dfs.balancer.keytab.file The keytab file used by the Balancer to login as its service principal. The principal name is configured with dfs.balancer.kerberos.principal. Keytab based login can be enabled with dfs.balancer.keytab.enabled. dfs.balancer.kerberos.principal The Balancer principal. This is typically set to balancer/_HOST@REALM.TLD. The Balancer will substitute _HOST with its own fully qualified hostname at startup. The _HOST placeholder allows using the same configuration setting on different servers. Keytab based login can be enabled with dfs.balancer.keytab.enabled. dfs.balancer.block-move.timeout 0 Maximum amount of time in milliseconds for a block to move. If this is set greater than 0, the Balancer will stop waiting for a block move completion after this time. In typical clusters, a 3 to 5 minute timeout is reasonable. If timeouts happen for a large proportion of block moves, this needs to be increased. It could also be that too much work is dispatched and many nodes are constantly exceeding the bandwidth limit as a result. In that case, other balancer parameters might need to be adjusted. It is disabled (0) by default. dfs.balancer.max-no-move-interval 60000 If this specified amount of time has elapsed and no block has been moved out of a source DataNode, no more effort will be made to move blocks out of this DataNode in the current Balancer iteration. dfs.lock.suppress.warning.interval 10s Instrumentation reporting long critical sections will suppress consecutive warnings within this interval. dfs.webhdfs.use.ipc.callq true Enables routing of webhdfs calls through the rpc call queue. httpfs.buffer.size 4096 The buffer size to be used when creating or opening httpfs filesystem IO streams. dfs.namenode.hosts.provider.classname org.apache.hadoop.hdfs.server.blockmanagement.HostFileManager The class that provides access for host files. org.apache.hadoop.hdfs.server.blockmanagement.HostFileManager is used by default which loads files specified by dfs.hosts and dfs.hosts.exclude. If org.apache.hadoop.hdfs.server.blockmanagement.CombinedHostFileManager is used, it will load the JSON file defined in dfs.hosts. To change the class name, a namenode restart is required. "dfsadmin -refreshNodes" only refreshes the configuration files used by the class. 
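The exponential sleep between retries and failovers governed by dfs.http.client.failover.sleep.base.millis and dfs.http.client.failover.sleep.max.millis above (and by the analogous dfs.client.failover.* properties earlier in this file) can be approximated with a short sketch. The exact doubling formula and the uniform jitter are assumptions based on the wording "increases exponentially ... with a random factor of +/- 50%", not the literal Hadoop code.

    import random

    def failover_sleep_ms(attempt, base_ms=500, max_ms=15000):
        # Attempt 1 retries immediately; later attempts back off
        # exponentially from base_ms, capped at max_ms, with +/- 50% jitter.
        if attempt <= 1:
            return 0
        capped = min(base_ms * 2 ** (attempt - 2), max_ms)
        return capped * random.uniform(0.5, 1.5)

    for attempt in range(1, 7):
        print(attempt, round(failover_sleep_ms(attempt)))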
././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1696419316.0 sahara-plugin-vanilla-10.0.0/sahara_plugin_vanilla/plugins/vanilla/v2_8_2/resources/hive-default.xml0000664000175000017500000076704500000000000033557 0ustar00zuulzuul00000000000000 hive.exec.script.wrapper hive.exec.plan hive.exec.stagingdir .hive-staging Directory name that will be created inside table locations in order to support HDFS encryption. This is replaces ${hive.exec.scratchdir} for query results with the exception of read-only tables. In all cases ${hive.exec.scratchdir} is still used for other temporary files, such as job plans. hive.exec.scratchdir /tmp/hive HDFS root scratch dir for Hive jobs which gets created with write all (733) permission. For each connecting user, an HDFS scratch dir: ${hive.exec.scratchdir}/<username> is created, with ${hive.scratch.dir.permission}. hive.repl.rootdir /user/hive/repl/ HDFS root dir for all replication dumps. hive.repl.cm.enabled false Turn on ChangeManager, so delete files will go to cmrootdir. hive.repl.cmrootdir /user/hive/cmroot/ Root dir for ChangeManager, used for deleted files. hive.repl.cm.retain 24h Expects a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec), which is hour if not specified. Time to retain removed files in cmrootdir. hive.repl.cm.interval 3600s Expects a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec), which is sec if not specified. Inteval for cmroot cleanup thread. hive.exec.local.scratchdir ${system:java.io.tmpdir}/${system:user.name} Local scratch space for Hive jobs hive.downloaded.resources.dir ${system:java.io.tmpdir}/${hive.session.id}_resources Temporary local directory for added resources in the remote file system. hive.scratch.dir.permission 700 The permission for the user specific scratch directories that get created. hive.exec.submitviachild false hive.exec.submit.local.task.via.child true Determines whether local tasks (typically mapjoin hashtable generation phase) runs in separate JVM (true recommended) or not. Avoids the overhead of spawning new JVM, but can lead to out-of-memory issues. hive.exec.script.maxerrsize 100000 Maximum number of bytes a script is allowed to emit to standard error (per map-reduce task). This prevents runaway scripts from filling logs partitions to capacity hive.exec.script.allow.partial.consumption false When enabled, this option allows a user script to exit successfully without consuming all the data from the standard input. stream.stderr.reporter.prefix reporter: Streaming jobs that log to standard error with this prefix can log counter or status information. stream.stderr.reporter.enabled true Enable consumption of status and counter messages for streaming jobs. hive.exec.compress.output false This controls whether the final outputs of a query (to a local/HDFS file or a Hive table) is compressed. The compression codec and other options are determined from Hadoop config variables mapred.output.compress* hive.exec.compress.intermediate false This controls whether intermediate files produced by Hive between multiple map-reduce jobs are compressed. The compression codec and other options are determined from Hadoop config variables mapred.output.compress* hive.intermediate.compression.codec hive.intermediate.compression.type hive.exec.reducers.bytes.per.reducer 256000000 size per reducer.The default is 256Mb, i.e if the input size is 1G, it will use 4 reducers. 
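Many of the Hive properties in this file, such as hive.repl.cm.retain above, expect "a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec)" and fall back to a per-property default unit when none is given. The sketch below shows one way such values can be normalised to milliseconds; the parser is illustrative and only handles the unit suffixes listed in the descriptions.

    import re

    _UNIT_MS = {"d": 86400000, "day": 86400000, "h": 3600000, "hour": 3600000,
                "m": 60000, "min": 60000, "s": 1000, "sec": 1000,
                "ms": 1, "msec": 1, "us": 0.001, "usec": 0.001,
                "ns": 0.000001, "nsec": 0.000001}

    def time_value_ms(value, default_unit="s"):
        # e.g. "24h" -> 86400000, "600s" -> 600000, "5000ms" -> 5000
        match = re.fullmatch(r"(\d+)\s*([a-z]*)", value.strip().lower())
        number, unit = match.groups()
        return int(number) * _UNIT_MS[unit or default_unit]

    print(time_value_ms("24h"), time_value_ms("600s"), time_value_ms("5000ms"))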
hive.exec.reducers.max 1009 max number of reducers will be used. If the one specified in the configuration parameter mapred.reduce.tasks is negative, Hive will use this one as the max number of reducers when automatically determine number of reducers. hive.exec.pre.hooks Comma-separated list of pre-execution hooks to be invoked for each statement. A pre-execution hook is specified as the name of a Java class which implements the org.apache.hadoop.hive.ql.hooks.ExecuteWithHookContext interface. hive.exec.post.hooks Comma-separated list of post-execution hooks to be invoked for each statement. A post-execution hook is specified as the name of a Java class which implements the org.apache.hadoop.hive.ql.hooks.ExecuteWithHookContext interface. hive.exec.failure.hooks Comma-separated list of on-failure hooks to be invoked for each statement. An on-failure hook is specified as the name of Java class which implements the org.apache.hadoop.hive.ql.hooks.ExecuteWithHookContext interface. hive.exec.query.redactor.hooks Comma-separated list of hooks to be invoked for each query which can tranform the query before it's placed in the job.xml file. Must be a Java class which extends from the org.apache.hadoop.hive.ql.hooks.Redactor abstract class. hive.client.stats.publishers Comma-separated list of statistics publishers to be invoked on counters on each job. A client stats publisher is specified as the name of a Java class which implements the org.apache.hadoop.hive.ql.stats.ClientStatsPublisher interface. hive.ats.hook.queue.capacity 64 Queue size for the ATS Hook executor. If the number of outstanding submissions to the ATS executor exceed this amount, the Hive ATS Hook will not try to log queries to ATS. hive.exec.parallel false Whether to execute jobs in parallel hive.exec.parallel.thread.number 8 How many jobs at most can be executed in parallel hive.mapred.reduce.tasks.speculative.execution true Whether speculative execution for reducers should be turned on. hive.exec.counters.pull.interval 1000 The interval with which to poll the JobTracker for the counters the running job. The smaller it is the more load there will be on the jobtracker, the higher it is the less granular the caught will be. hive.exec.dynamic.partition true Whether or not to allow dynamic partitions in DML/DDL. hive.exec.dynamic.partition.mode strict In strict mode, the user must specify at least one static partition in case the user accidentally overwrites all partitions. In nonstrict mode all partitions are allowed to be dynamic. hive.exec.max.dynamic.partitions 1000 Maximum number of dynamic partitions allowed to be created in total. hive.exec.max.dynamic.partitions.pernode 100 Maximum number of dynamic partitions allowed to be created in each mapper/reducer node. hive.exec.max.created.files 100000 Maximum number of HDFS files created by all mappers/reducers in a MapReduce job. hive.exec.default.partition.name __HIVE_DEFAULT_PARTITION__ The default partition name in case the dynamic partition column value is null/empty string or any other values that cannot be escaped. This value must not contain any special character used in HDFS URI (e.g., ':', '%', '/' etc). The user has to be aware that the dynamic partition value should not contain this value to avoid confusions. hive.lockmgr.zookeeper.default.partition.name __HIVE_DEFAULT_ZOOKEEPER_PARTITION__ hive.exec.show.job.failure.debug.info true If a job fails, whether to provide a link in the CLI to the task with the most failures, along with debugging hints if applicable. 
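The reducer estimate mentioned earlier for hive.exec.reducers.bytes.per.reducer ("if the input size is 1G, it will use 4 reducers"), together with the hive.exec.reducers.max cap above, amounts to a simple division. A sketch of that arithmetic (the function name is illustrative):

    import math

    def estimated_reducers(input_bytes,
                           bytes_per_reducer=256000000,   # hive.exec.reducers.bytes.per.reducer
                           max_reducers=1009):            # hive.exec.reducers.max
        # One reducer per bytes_per_reducer of input, capped at max_reducers.
        return min(max(1, math.ceil(input_bytes / bytes_per_reducer)), max_reducers)

    print(estimated_reducers(1_000_000_000))   # 1 GB of input -> 4 reducers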
hive.exec.job.debug.capture.stacktraces true Whether or not stack traces parsed from the task logs of a sampled failed task for each failed job should be stored in the SessionState hive.exec.job.debug.timeout 30000 hive.exec.tasklog.debug.timeout 20000 hive.output.file.extension String used as a file extension for output files. If not set, defaults to the codec extension for text files (e.g. ".gz"), or no extension otherwise. hive.exec.mode.local.auto false Let Hive determine whether to run in local mode automatically hive.exec.mode.local.auto.inputbytes.max 134217728 When hive.exec.mode.local.auto is true, input bytes should less than this for local mode. hive.exec.mode.local.auto.input.files.max 4 When hive.exec.mode.local.auto is true, the number of tasks should less than this for local mode. hive.exec.drop.ignorenonexistent true Do not report an error if DROP TABLE/VIEW/Index/Function specifies a non-existent table/view/index/function hive.ignore.mapjoin.hint true Ignore the mapjoin hint hive.file.max.footer 100 maximum number of lines for footer user can define for a table file hive.resultset.use.unique.column.names true Make column names unique in the result set by qualifying column names with table alias if needed. Table alias will be added to column names for queries of type "select *" or if query explicitly uses table alias "select r1.x..". fs.har.impl org.apache.hadoop.hive.shims.HiveHarFileSystem The implementation for accessing Hadoop Archives. Note that this won't be applicable to Hadoop versions less than 0.20 hive.metastore.warehouse.dir /user/hive/warehouse location of default database for the warehouse hive.metastore.uris Thrift URI for the remote metastore. Used by metastore client to connect to remote metastore. hive.metastore.client.capability.check true Whether to check client capabilities for potentially breaking API usage. hive.metastore.fastpath false Used to avoid all of the proxies and object copies in the metastore. Note, if this is set, you MUST use a local metastore (hive.metastore.uris must be empty) otherwise undefined and most likely undesired behavior will result hive.metastore.fshandler.threads 15 Number of threads to be allocated for metastore handler for fs operations. hive.metastore.hbase.catalog.cache.size 50000 Maximum number of objects we will place in the hbase metastore catalog cache. The objects will be divided up by types that we need to cache. hive.metastore.hbase.aggregate.stats.cache.size 10000 Maximum number of aggregate stats nodes that we will place in the hbase metastore aggregate stats cache. hive.metastore.hbase.aggregate.stats.max.partitions 10000 Maximum number of partitions that are aggregated per cache node. hive.metastore.hbase.aggregate.stats.false.positive.probability 0.01 Maximum false positive probability for the Bloom Filter used in each aggregate stats cache node (default 1%). hive.metastore.hbase.aggregate.stats.max.variance 0.1 Maximum tolerable variance in number of partitions between a cached node and our request (default 10%). hive.metastore.hbase.cache.ttl 600s Expects a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec), which is sec if not specified. Number of seconds for a cached node to be active in the cache before they become stale. hive.metastore.hbase.cache.max.writer.wait 5000ms Expects a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec), which is msec if not specified. Number of milliseconds a writer will wait to acquire the writelock before giving up. 
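One plausible reading of hive.metastore.hbase.aggregate.stats.max.variance above (maximum tolerable variance in number of partitions between a cached node and the request, default 10%) is a relative-difference check like the following. The formula and function name are assumptions for illustration, not taken from the Hive source.

    def cached_stats_usable(cached_partitions, requested_partitions,
                            max_variance=0.1):
        # Accept the cached aggregate only if its partition count is within
        # max_variance (10% by default) of the requested partition count.
        diff = abs(cached_partitions - requested_partitions)
        return diff <= max_variance * requested_partitions

    print(cached_stats_usable(95, 100))    # True  (5% off)
    print(cached_stats_usable(80, 100))    # False (20% off)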
hive.metastore.hbase.cache.max.reader.wait 1000ms Expects a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec), which is msec if not specified. Number of milliseconds a reader will wait to acquire the readlock before giving up. hive.metastore.hbase.cache.max.full 0.9 Maximum cache full % after which the cache cleaner thread kicks in. hive.metastore.hbase.cache.clean.until 0.8 The cleaner thread cleans until cache reaches this % full size. hive.metastore.hbase.connection.class org.apache.hadoop.hive.metastore.hbase.VanillaHBaseConnection Class used to connection to HBase hive.metastore.hbase.aggr.stats.cache.entries 10000 How many in stats objects to cache in memory hive.metastore.hbase.aggr.stats.memory.ttl 60s Expects a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec), which is sec if not specified. Number of seconds stats objects live in memory after they are read from HBase. hive.metastore.hbase.aggr.stats.invalidator.frequency 5s Expects a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec), which is sec if not specified. How often the stats cache scans its HBase entries and looks for expired entries hive.metastore.hbase.aggr.stats.hbase.ttl 604800s Expects a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec), which is sec if not specified. Number of seconds stats entries live in HBase cache after they are created. They may be invalided by updates or partition drops before this. Default is one week. hive.metastore.hbase.file.metadata.threads 1 Number of threads to use to read file metadata in background to cache it. hive.metastore.connect.retries 3 Number of retries while opening a connection to metastore hive.metastore.failure.retries 1 Number of retries upon failure of Thrift metastore calls hive.metastore.port 9083 Hive metastore listener port hive.metastore.client.connect.retry.delay 1s Expects a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec), which is sec if not specified. Number of seconds for the client to wait between consecutive connection attempts hive.metastore.client.socket.timeout 600s Expects a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec), which is sec if not specified. MetaStore Client socket timeout in seconds hive.metastore.client.socket.lifetime 0s Expects a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec), which is sec if not specified. MetaStore Client socket lifetime in seconds. After this time is exceeded, client reconnects on the next MetaStore operation. A value of 0s means the connection has an infinite lifetime. javax.jdo.option.ConnectionPassword mine password to use against metastore database hive.metastore.ds.connection.url.hook Name of the hook to use for retrieving the JDO connection URL. If empty, the value in javax.jdo.option.ConnectionURL is used javax.jdo.option.Multithreaded true Set this to true if multiple threads access metastore through JDO concurrently. javax.jdo.option.ConnectionURL jdbc:derby:;databaseName=metastore_db;create=true JDBC connect string for a JDBC metastore. To use SSL to encrypt/authenticate the connection, provide database-specific SSL flag in the connection URL. For example, jdbc:postgresql://myhost/db?ssl=true for postgres database. hive.metastore.dbaccess.ssl.properties Comma-separated SSL properties for metastore to access database when JDO connection URL enables SSL access. e.g. 
javax.net.ssl.trustStore=/tmp/truststore,javax.net.ssl.trustStorePassword=pwd. hive.hmshandler.retry.attempts 10 The number of times to retry a HMSHandler call if there were a connection error. hive.hmshandler.retry.interval 2000ms Expects a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec), which is msec if not specified. The time between HMSHandler retry attempts on failure. hive.hmshandler.force.reload.conf false Whether to force reloading of the HMSHandler configuration (including the connection URL, before the next metastore query that accesses the datastore. Once reloaded, this value is reset to false. Used for testing only. hive.metastore.server.max.message.size 104857600 Maximum message size in bytes a HMS will accept. hive.metastore.server.min.threads 200 Minimum number of worker threads in the Thrift server's pool. hive.metastore.server.max.threads 1000 Maximum number of worker threads in the Thrift server's pool. hive.metastore.server.tcp.keepalive true Whether to enable TCP keepalive for the metastore server. Keepalive will prevent accumulation of half-open connections. hive.metastore.archive.intermediate.original _INTERMEDIATE_ORIGINAL Intermediate dir suffixes used for archiving. Not important what they are, as long as collisions are avoided hive.metastore.archive.intermediate.archived _INTERMEDIATE_ARCHIVED hive.metastore.archive.intermediate.extracted _INTERMEDIATE_EXTRACTED hive.metastore.kerberos.keytab.file The path to the Kerberos Keytab file containing the metastore Thrift server's service principal. hive.metastore.kerberos.principal hive-metastore/_HOST@EXAMPLE.COM The service principal for the metastore Thrift server. The special string _HOST will be replaced automatically with the correct host name. hive.metastore.sasl.enabled false If true, the metastore Thrift interface will be secured with SASL. Clients must authenticate with Kerberos. hive.metastore.thrift.framed.transport.enabled false If true, the metastore Thrift interface will use TFramedTransport. When false (default) a standard TTransport is used. hive.metastore.thrift.compact.protocol.enabled false If true, the metastore Thrift interface will use TCompactProtocol. When false (default) TBinaryProtocol will be used. Setting it to true will break compatibility with older clients running TBinaryProtocol. hive.metastore.token.signature The delegation token service name to match when selecting a token from the current user's tokens. hive.cluster.delegation.token.store.class org.apache.hadoop.hive.thrift.MemoryTokenStore The delegation token store implementation. Set to org.apache.hadoop.hive.thrift.ZooKeeperTokenStore for load-balanced cluster. hive.cluster.delegation.token.store.zookeeper.connectString The ZooKeeper token store connect string. You can re-use the configuration value set in hive.zookeeper.quorum, by leaving this parameter unset. hive.cluster.delegation.token.store.zookeeper.znode /hivedelegation The root path for token store data. Note that this is used by both HiveServer2 and MetaStore to store delegation Token. One directory gets created for each of them. The final directory names would have the servername appended to it (HIVESERVER2, METASTORE). hive.cluster.delegation.token.store.zookeeper.acl ACL for token store entries. Comma separated list of ACL entries. For example: sasl:hive/host1@MY.DOMAIN:cdrwa,sasl:hive/host2@MY.DOMAIN:cdrwa Defaults to all permissions for the hiveserver2/metastore process user. 
hive.metastore.cache.pinobjtypes (Table,StorageDescriptor,SerDeInfo,Partition,Database,Type,FieldSchema,Order): Comma-separated list of metastore object types that should be pinned in the cache.
datanucleus.connectionPoolingType (BONECP): Expects one of [bonecp, dbcp, hikaricp, none]. Specifies the connection pool library for DataNucleus.
datanucleus.connectionPool.maxPoolSize (10): Specifies the maximum number of connections in the connection pool. Note: the configured size will be used by two connection pools (TxnHandler and ObjectStore). When configuring the max pool size, take into account the number of metastore instances and the number of HiveServer2 instances configured with an embedded metastore. For optimal performance, set the config to satisfy: (2 * pool_size * metastore_instances + 2 * pool_size * HS2_instances_with_embedded_metastore) = (2 * physical_core_count + hard_disk_count).
datanucleus.rdbms.initializeColumnInfo (NONE): initializeColumnInfo setting for DataNucleus; set to NONE at least on PostgreSQL.
datanucleus.schema.validateTables (false): Validates the existing schema against the code. Turn this on if you want to verify the existing schema.
datanucleus.schema.validateColumns (false): Validates the existing schema against the code. Turn this on if you want to verify the existing schema.
datanucleus.schema.validateConstraints (false): Validates the existing schema against the code. Turn this on if you want to verify the existing schema.
datanucleus.storeManagerType (rdbms): Metadata store type.
datanucleus.schema.autoCreateAll (false): Auto-creates the necessary schema on startup if one doesn't exist; set this to false after creating it once. To enable auto-creation, also set hive.metastore.schema.verification=false. Auto-creation is not recommended for production use; run the schematool command instead.
hive.metastore.schema.verification (true): Enforce metastore schema version consistency. True: verify that the version information stored in the metastore is compatible with the version from the Hive jars, and disable automatic schema migration; users are required to manually migrate the schema after a Hive upgrade, which ensures proper metastore schema migration. (Default) False: warn if the version information stored in the metastore doesn't match the version in the Hive jars.
hive.metastore.schema.verification.record.version (false): When true, the current MS version is recorded in the VERSION table. If this is disabled and verification is enabled, the MS will be unusable.
datanucleus.transactionIsolation (read-committed): Default transaction isolation level for identity generation.
datanucleus.cache.level2 (false): Use a level 2 cache. Turn this off if metadata is changed independently of the Hive metastore server.
datanucleus.cache.level2.type (none)
datanucleus.identifierFactory (datanucleus1): Name of the identifier factory to use when generating table/column names etc. 'datanucleus1' is used for backward compatibility with DataNucleus v1.
datanucleus.rdbms.useLegacyNativeValueStrategy (true)
datanucleus.plugin.pluginRegistryBundleCheck (LOG): Defines what happens when duplicate plugin bundles are found [EXCEPTION|LOG|NONE].
hive.metastore.batch.retrieve.max (300): Maximum number of objects (tables/partitions) that can be retrieved from the metastore in one batch. The higher the number, the fewer round trips are needed to the Hive metastore server, but it may also cause higher memory requirements on the client side.
hive.metastore.batch.retrieve.table.partition.max (1000): Maximum number of objects that the metastore internally retrieves in one batch.
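The sizing condition for datanucleus.connectionPool.maxPoolSize is easiest to read as arithmetic. A hypothetical example: with 2 metastore instances, no HiveServer2 instances running an embedded metastore, and a database host with 16 physical cores and 8 disks, the condition becomes 2 * pool_size * 2 = 2 * 16 + 8, i.e. 4 * pool_size = 40, giving pool_size = 10 (which happens to be the default):

    <!-- Hypothetical sizing: 2 metastore instances, 16 cores, 8 disks => 4 * pool_size = 40 -->
    <property>
      <name>datanucleus.connectionPool.maxPoolSize</name>
      <value>10</value>
    </property>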
hive.metastore.init.hooks (empty): A comma-separated list of hooks to be invoked at the beginning of HMSHandler initialization. An init hook is specified as the name of a Java class which extends org.apache.hadoop.hive.metastore.MetaStoreInitListener.
hive.metastore.pre.event.listeners (empty): Comma-separated list of listeners for metastore events.
hive.metastore.event.listeners (empty): A comma-separated list of Java classes that implement the org.apache.hadoop.hive.metastore.MetaStoreEventListener interface. The metastore event and the corresponding listener method will be invoked in separate JDO transactions. Alternatively, configure hive.metastore.transactional.event.listeners to ensure both are invoked in the same JDO transaction.
hive.metastore.transactional.event.listeners (empty): A comma-separated list of Java classes that implement the org.apache.hadoop.hive.metastore.MetaStoreEventListener interface. Both the metastore event and the corresponding listener method will be invoked in the same JDO transaction.
hive.metastore.event.db.listener.timetolive (86400s): Expects a time value (sec if unspecified). Time after which events will be removed from the database listener queue.
hive.metastore.authorization.storage.checks (false): Should the metastore do authorization checks against the underlying storage (usually HDFS) for operations like drop-partition (disallow the drop-partition if the user in question doesn't have permission to delete the corresponding directory on the storage).
hive.metastore.authorization.storage.check.externaltable.drop (true): Should StorageBasedAuthorization check permission on the storage before dropping an external table. StorageBasedAuthorization already does this check for managed tables. For external tables, however, anyone who has read permission on the directory could drop the table, which is surprising. The flag is set to false by default to maintain backward compatibility.
hive.metastore.event.clean.freq (0s): Expects a time value (sec if unspecified). Frequency at which the timer task runs to purge expired events in the metastore.
hive.metastore.event.expiry.duration (0s): Expects a time value (sec if unspecified). Duration after which events expire from the events table.
hive.metastore.event.message.factory (org.apache.hadoop.hive.metastore.messaging.json.JSONMessageFactory): Factory class for encoding and decoding messages for the generated events.
hive.metastore.execute.setugi (true): In unsecure mode, setting this property to true causes the metastore to execute DFS operations using the client's reported user and group permissions. Note that this property must be set on both the client and server sides. Further note that it is best-effort: if the client sets it to true and the server sets it to false, the client setting is ignored.
hive.metastore.partition.name.whitelist.pattern (empty): Partition names will be checked against this regex pattern and rejected if not matched.
hive.metastore.integral.jdo.pushdown (false): Allow JDO query pushdown for integral partition columns in the metastore. Off by default. This improves metastore performance for integral columns, especially if there is a large number of partitions. However, it doesn't work correctly with integral values that are not normalized (e.g. have leading zeroes, like 0012). If metastore direct SQL is enabled and works, this optimization is also irrelevant.
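As an illustration of the listener plumbing, replication tooling commonly registers a notification listener through the transactional hook so that the event and its record are committed atomically. A sketch, assuming the HCatalog DbNotificationListener class is available on the metastore classpath:

    <!-- Assumes hive-hcatalog-server-extensions is on the classpath -->
    <property>
      <name>hive.metastore.transactional.event.listeners</name>
      <value>org.apache.hive.hcatalog.listener.DbNotificationListener</value>
    </property>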
hive.metastore.try.direct.sql (true): Whether the Hive metastore should try to use direct SQL queries instead of DataNucleus for certain read paths. This can improve metastore performance when fetching many partitions or column statistics by orders of magnitude; however, it is not guaranteed to work on all RDBMSes and all versions. In case of SQL failures, the metastore will fall back to DataNucleus, so it is safe even if SQL doesn't work for all queries on your datastore. If all SQL queries fail (for example, your metastore is backed by MongoDB), you might want to disable this to save the try-and-fall-back cost.
hive.metastore.direct.sql.batch.size (0): Batch size for partition and other object retrieval from the underlying DB in direct SQL. For some DBs like Oracle and MSSQL, there are hardcoded or performance-based limitations that necessitate this. For DBs that can handle the queries, this isn't necessary and may impede performance. -1 means no batching, 0 means automatic batching.
hive.metastore.try.direct.sql.ddl (true): Same as hive.metastore.try.direct.sql, for read statements within a transaction that modifies metastore data. Due to non-standard behavior in PostgreSQL, if a direct SQL select query has incorrect syntax or something similar inside a transaction, the entire transaction will fail and falling back to DataNucleus will not be possible. You should disable the usage of direct SQL inside transactions if that happens in your case.
hive.direct.sql.max.query.length (100): The maximum size of a query string (in KB).
hive.direct.sql.max.elements.in.clause (1000): The maximum number of values in an IN clause. Once exceeded, it will be broken into multiple OR-separated IN clauses.
hive.direct.sql.max.elements.values.clause (1000): The maximum number of values in a VALUES clause for an INSERT statement.
hive.metastore.orm.retrieveMapNullsAsEmptyStrings (false): Thrift does not support nulls in maps, so any nulls present in maps retrieved from ORM must either be pruned or converted to empty strings. Some backing DBs, such as Oracle, persist empty strings as nulls, so set this parameter if you wish to reverse that behaviour. For others, pruning is the correct behaviour.
hive.metastore.disallow.incompatible.col.type.changes (true): If true, ALTER TABLE operations which change the type of a column (say STRING) to an incompatible type (say MAP) are disallowed. The RCFile default SerDe (ColumnarSerDe) serializes the values in such a way that the data types can be converted from string to any type. The map is also serialized as a string, which can be read as a string as well. However, with any binary serialization this is not true. Blocking the ALTER TABLE prevents ClassCastExceptions when subsequently trying to access old partitions. Primitive types like INT, STRING, BIGINT, etc. are compatible with each other and are not blocked. See HIVE-4409 for more details.
hive.metastore.limit.partition.request (-1): This limits the number of partitions that can be requested from the metastore for a given table. The default value "-1" means no limit.
hive.table.parameters.default (empty): Default property values for newly created tables.
hive.ddl.createtablelike.properties.whitelist (empty): Table properties to copy over when executing a CREATE TABLE LIKE.
hive.metastore.rawstore.impl (org.apache.hadoop.hive.metastore.ObjectStore): Name of the class that implements the org.apache.hadoop.hive.metastore.rawstore interface. This class is used for storage and retrieval of raw metadata objects such as tables and databases.
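For example, on a PostgreSQL-backed metastore that hits the failed-transaction behaviour described under hive.metastore.try.direct.sql.ddl, the usual mitigation is to keep direct SQL for plain reads but disable it inside modifying transactions:

    <!-- Keep direct SQL for reads, avoid it inside modifying transactions -->
    <property>
      <name>hive.metastore.try.direct.sql</name>
      <value>true</value>
    </property>
    <property>
      <name>hive.metastore.try.direct.sql.ddl</name>
      <value>false</value>
    </property>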
hive.metastore.txn.store.impl (org.apache.hadoop.hive.metastore.txn.CompactionTxnHandler): Name of the class that implements org.apache.hadoop.hive.metastore.txn.TxnStore. This class is used to store and retrieve transactions and locks.
javax.jdo.option.ConnectionDriverName (org.apache.derby.jdbc.EmbeddedDriver): Driver class name for a JDBC metastore.
javax.jdo.PersistenceManagerFactoryClass (org.datanucleus.api.jdo.JDOPersistenceManagerFactory): Class implementing the JDO PersistenceManagerFactory.
hive.metastore.expression.proxy (org.apache.hadoop.hive.ql.optimizer.ppr.PartitionExpressionForMetastore)
javax.jdo.option.DetachAllOnCommit (true): Detaches all objects from the session so that they can be used after the transaction is committed.
javax.jdo.option.NonTransactionalRead (true): Reads outside of transactions.
javax.jdo.option.ConnectionUserName (APP): Username to use against the metastore database.
hive.metastore.end.function.listeners (empty): Comma-separated list of listeners for the end of metastore functions.
hive.metastore.partition.inherit.table.properties (empty): Comma-separated list of keys occurring in table properties which will be inherited by newly created partitions. * implies all keys will be inherited.
hive.metastore.filter.hook (org.apache.hadoop.hive.metastore.DefaultMetaStoreFilterHookImpl): Metastore hook class for filtering the metadata read results. If hive.security.authorization.manager is set to an instance of HiveAuthorizerFactory, this value is ignored.
hive.metastore.dml.events (false): If true, the metastore will be asked to fire events for DML operations.
hive.metastore.client.drop.partitions.using.expressions (true): Chooses whether dropping partitions with HCatClient pushes the partition predicate to the metastore, or drops partitions iteratively.
hive.metastore.aggregate.stats.cache.enabled (true): Whether aggregate stats caching is enabled or not.
hive.metastore.aggregate.stats.cache.size (10000): Maximum number of aggregate stats nodes that we will place in the metastore aggregate stats cache.
hive.metastore.aggregate.stats.cache.max.partitions (10000): Maximum number of partitions that are aggregated per cache node.
hive.metastore.aggregate.stats.cache.fpp (0.01): Maximum false positive probability for the Bloom filter used in each aggregate stats cache node (default 1%).
hive.metastore.aggregate.stats.cache.max.variance (0.01): Maximum tolerable variance in the number of partitions between a cached node and our request (default 1%).
hive.metastore.aggregate.stats.cache.ttl (600s): Expects a time value (sec if unspecified). Number of seconds a cached node remains active in the cache before it becomes stale.
hive.metastore.aggregate.stats.cache.max.writer.wait (5000ms): Expects a time value (msec if unspecified). Number of milliseconds a writer will wait to acquire the write lock before giving up.
hive.metastore.aggregate.stats.cache.max.reader.wait (1000ms): Expects a time value (msec if unspecified). Number of milliseconds a reader will wait to acquire the read lock before giving up.
hive.metastore.aggregate.stats.cache.max.full (0.9): Maximum cache full % after which the cache cleaner thread kicks in.
hive.metastore.aggregate.stats.cache.clean.until (0.8): The cleaner thread cleans until the cache reaches this % of full size.
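The four javax.jdo.option.* connection properties travel together when pointing the metastore at an external database instead of the embedded Derby default. A hedged sketch for a MySQL backend; the host, database name and credentials are placeholders:

    <!-- Placeholder host/db/credentials; requires the MySQL JDBC driver on the classpath -->
    <property>
      <name>javax.jdo.option.ConnectionURL</name>
      <value>jdbc:mysql://db.example.com/metastore</value>
    </property>
    <property>
      <name>javax.jdo.option.ConnectionDriverName</name>
      <value>com.mysql.jdbc.Driver</value>
    </property>
    <property>
      <name>javax.jdo.option.ConnectionUserName</name>
      <value>hive</value>
    </property>
    <property>
      <name>javax.jdo.option.ConnectionPassword</name>
      <value>secret</value>
    </property>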
hive.metastore.metrics.enabled (false): Enable metrics on the metastore.
hive.metastore.initial.metadata.count.enabled (true): Enable a metadata count at metastore startup for metrics.
hive.metastore.use.SSL (false): Set this to true to use SSL encryption in the HMS server.
hive.metastore.keystore.path (empty): Metastore SSL certificate keystore location.
hive.metastore.keystore.password (empty): Metastore SSL certificate keystore password.
hive.metastore.truststore.path (empty): Metastore SSL certificate truststore location.
hive.metastore.truststore.password (empty): Metastore SSL certificate truststore password.
hive.metadata.export.location (empty): When used in conjunction with the org.apache.hadoop.hive.ql.parse.MetaDataExportListener pre-event listener, this is the location to which the metadata will be exported. The default is an empty string, which results in the metadata being exported to the current user's home directory on HDFS.
hive.metadata.move.exported.metadata.to.trash (true): When used in conjunction with the org.apache.hadoop.hive.ql.parse.MetaDataExportListener pre-event listener, this setting determines whether the exported metadata will subsequently be moved to the user's trash directory alongside the dropped table data. This ensures that the metadata is cleaned up along with the dropped table data.
hive.cli.errors.ignore (false)
hive.cli.print.current.db (false): Whether to include the current database in the Hive prompt.
hive.cli.prompt (hive): Command-line prompt configuration value. Other hiveconf values can be used in this configuration value. Variable substitution will only be invoked at Hive CLI startup.
hive.cli.pretty.output.num.cols (-1): The number of columns to use when formatting output generated by the DESCRIBE PRETTY table_name command. If the value of this property is -1, Hive will use the auto-detected terminal width.
hive.metastore.fs.handler.class (org.apache.hadoop.hive.metastore.HiveMetaStoreFsImpl)
hive.session.id (empty)
hive.session.silent (false)
hive.session.history.enabled (false): Whether to log the Hive query, query plan, runtime statistics, etc.
hive.query.string (empty): Query being executed (there might be multiple per session).
hive.query.id (empty): ID for the query being executed (there might be multiple per session).
hive.jobname.length (50): Maximum job name length.
hive.jar.path (empty): The location of hive_cli.jar that is used when submitting jobs in a separate JVM.
hive.aux.jars.path (empty): The location of the plugin jars that contain implementations of user-defined functions and SerDes.
hive.reloadable.aux.jars.path (empty): The locations of the plugin jars, which can be comma-separated folders or jars. Jars can be renewed by executing the reload command, and these jars can be used as auxiliary classes, e.g. for creating a UDF or SerDe.
hive.added.files.path (empty): This is an internal parameter.
hive.added.jars.path (empty): This is an internal parameter.
hive.added.archives.path (empty): This is an internal parameter.
hive.auto.progress.timeout (0s): Expects a time value (sec if unspecified). How long to run the autoprogressor for the script/UDTF operators. Set to 0 for forever.
hive.script.auto.progress (false): Whether the Hive Transform/Map/Reduce clause should automatically send progress information to the TaskTracker to avoid the task getting killed because of inactivity. Hive sends progress information when the script is outputting to stderr. This option removes the need to periodically produce stderr messages, but users should be cautious because this may prevent infinite loops in the scripts from being killed by the TaskTracker.
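The SSL properties above are only consulted once hive.metastore.use.SSL is flipped on: keystore on the server side, truststore on clients. A sketch with placeholder paths and passwords:

    <!-- Placeholder keystore path and password -->
    <property>
      <name>hive.metastore.use.SSL</name>
      <value>true</value>
    </property>
    <property>
      <name>hive.metastore.keystore.path</name>
      <value>/etc/hive/conf/hms.jks</value>
    </property>
    <property>
      <name>hive.metastore.keystore.password</name>
      <value>changeit</value>
    </property>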
hive.script.operator.id.env.var (HIVE_SCRIPT_OPERATOR_ID): Name of the environment variable that holds the unique script operator ID in the user's transform function (the custom mapper/reducer that the user has specified in the query).
hive.script.operator.truncate.env (false): Truncate each environment variable for an external script in the script operator to 20KB (to fit system limits).
hive.script.operator.env.blacklist (hive.txn.valid.txns,hive.script.operator.env.blacklist): Comma-separated list of keys from the configuration file not to convert to environment variables when invoking the script operator.
hive.strict.checks.large.query (false): Enabling strict large-query checks disallows the following: ORDER BY without LIMIT; no partition being picked up for a query against a partitioned table. Note that these checks currently do not consider data size, only the query pattern.
hive.strict.checks.type.safety (true): Enabling strict type-safety checks disallows the following: comparing bigints and strings; comparing bigints and doubles.
hive.strict.checks.cartesian.product (true): Enabling strict Cartesian-join checks disallows the following: Cartesian product (cross join).
hive.strict.checks.bucketing (true): Enabling strict bucketing checks disallows the following: load into bucketed tables.
hive.mapred.mode (empty): Deprecated; use the hive.strict.checks.* settings instead.
hive.alias (empty)
hive.map.aggr (true): Whether to use map-side aggregation in Hive GROUP BY queries.
hive.groupby.skewindata (false): Whether there is skew in the data, in order to optimize GROUP BY queries.
hive.join.emit.interval (1000): How many rows in the right-most join operand Hive should buffer before emitting the join result.
hive.join.cache.size (25000): How many rows in the joining tables (except the streaming table) should be cached in memory.
hive.cbo.enable (true): Flag to control enabling cost-based optimizations using the Calcite framework.
hive.cbo.cnf.maxnodes (-1): When converting to conjunctive normal form (CNF), fail if the expression exceeds this threshold; the threshold is expressed in terms of the number of nodes (leaves and interior nodes). -1 to not set a threshold.
hive.cbo.returnpath.hiveop (false): Flag to control Calcite plan to Hive operator conversion.
hive.cbo.costmodel.extended (false): Flag to control enabling the extended cost model based on CPU, IO and cardinality. Otherwise, the cost model is based on cardinality.
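The hive.strict.checks.* family replaces the old monolithic hive.mapred.mode=strict switch with per-check toggles. For instance, to also reject unbounded ORDER BY and full scans of partitioned tables while keeping the other defaults:

    <!-- Enable the one strict check that defaults to false -->
    <property>
      <name>hive.strict.checks.large.query</name>
      <value>true</value>
    </property>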
hive.cbo.costmodel.cpu (0.000001): Default cost of a comparison.
hive.cbo.costmodel.network (150.0): Default cost of transferring a byte over the network; expressed as a multiple of CPU cost.
hive.cbo.costmodel.local.fs.write (4.0): Default cost of writing a byte to local FS; expressed as a multiple of NETWORK cost.
hive.cbo.costmodel.local.fs.read (4.0): Default cost of reading a byte from local FS; expressed as a multiple of NETWORK cost.
hive.cbo.costmodel.hdfs.write (10.0): Default cost of writing a byte to HDFS; expressed as a multiple of the local FS write cost.
hive.cbo.costmodel.hdfs.read (1.5): Default cost of reading a byte from HDFS; expressed as a multiple of the local FS read cost.
hive.cbo.show.warnings (true): Toggle display of CBO warnings, like missing column stats.
hive.transpose.aggr.join (false): Push aggregates through join.
hive.optimize.semijoin.conversion (true): Convert GROUP BY followed by inner equi-join into semijoin.
hive.order.columnalignment (true): Flag to control whether we want to try to align columns in operators such as Aggregate or Join, so that we try to reduce the number of shuffling stages.
hive.materializedview.rewriting (false): Whether to try to rewrite queries using the materialized views enabled for rewriting.
hive.materializedview.fileformat (ORC): Expects one of [none, textfile, sequencefile, rcfile, orc]. Default file format for the CREATE MATERIALIZED VIEW statement.
hive.materializedview.serde (org.apache.hadoop.hive.ql.io.orc.OrcSerde): Default SerDe used for materialized views.
hive.mapjoin.bucket.cache.size (100)
hive.mapjoin.optimized.hashtable (true): Whether Hive should use a memory-optimized hash table for MapJoin. Only works on Tez and Spark, because the memory-optimized hash table cannot be serialized.
hive.mapjoin.optimized.hashtable.probe.percent (0.5): Probing space percentage of the optimized hash table.
hive.mapjoin.hybridgrace.hashtable (true): Whether to use hybrid grace hash join as the join method for mapjoin. Tez only.
hive.mapjoin.hybridgrace.memcheckfrequency (1024): For hybrid grace hash join, how often (how many rows apart) we check if memory is full. This number should be a power of 2.
hive.mapjoin.hybridgrace.minwbsize (524288): For hybrid grace hash join, the minimum write buffer size used by the optimized hash table. The default is 512 KB.
hive.mapjoin.hybridgrace.minnumpartitions (16): For hybrid grace hash join, the minimum number of partitions to create.
hive.mapjoin.optimized.hashtable.wbsize (8388608): The optimized hash table (see hive.mapjoin.optimized.hashtable) uses a chain of buffers to store data. This is one buffer size. The hash table may be slightly faster if this is larger, but for small joins unnecessary memory will be allocated and then trimmed.
hive.mapjoin.hybridgrace.bloomfilter (true): Whether to use a Bloom filter in hybrid grace hash join to minimize unnecessary spilling.
hive.smbjoin.cache.rows (10000): How many rows with the same key value should be cached in memory per SMB-joined table.
hive.groupby.mapaggr.checkinterval (100000): Number of rows after which the size of the grouping keys/aggregation classes is checked.
hive.map.aggr.hash.percentmemory (0.5): Portion of total memory to be used by the map-side group aggregation hash table.
hive.mapjoin.followby.map.aggr.hash.percentmemory (0.3): Portion of total memory to be used by the map-side group aggregation hash table, when this group by is followed by a map join.
hive.map.aggr.hash.force.flush.memory.threshold (0.9): The maximum memory to be used by the map-side group aggregation hash table. If memory usage is higher than this number, data is force-flushed.
hive.map.aggr.hash.min.reduction (0.5): Hash aggregation will be turned off if the ratio between hash table size and input rows is bigger than this number. Set to 1 to make sure hash aggregation is never turned off.
hive.multigroupby.singlereducer (true): Whether to optimize a multi-GROUP BY query to generate a single M/R job plan. If the multi-GROUP BY query has common group-by keys, it will be optimized to generate a single M/R job.
hive.map.groupby.sorted (true): If the bucketing/sorting properties of the table exactly match the grouping key, whether to perform the group by in the mapper by using BucketizedHiveInputFormat. The only downside to this is that it limits the number of mappers to the number of files.
hive.groupby.position.alias (false): Whether to enable using column position aliases in GROUP BY.
hive.orderby.position.alias (true): Whether to enable using column position aliases in ORDER BY.
hive.groupby.orderby.position.alias (false): Whether to enable using column position aliases in GROUP BY or ORDER BY (deprecated). Use hive.orderby.position.alias or hive.groupby.position.alias instead.
hive.new.job.grouping.set.cardinality (30): Whether a new map-reduce job should be launched for grouping sets/rollups/cubes. For a query like "select a, b, c, count(1) from T group by a, b, c with rollup;" four rows are created per input row: (a, b, c), (a, b, null), (a, null, null), (null, null, null). This can lead to explosion across the map-reduce boundary if the cardinality of T is very high and map-side aggregation does not do a very good job. This parameter decides if Hive should add an additional map-reduce job. If the grouping-set cardinality (4 in the example above) is more than this value, a new MR job is added under the assumption that the original group by will reduce the data size.
hive.groupby.limit.extrastep (true): This parameter decides whether Hive should create a new MR job for sorting the final output.
hive.exec.copyfile.maxnumfiles (1): Maximum number of files Hive uses to do sequential HDFS copies between directories. Distributed copies (distcp) will be used instead for larger numbers of files, so that copies can be done faster.
hive.exec.copyfile.maxsize (33554432): Maximum file size (in bytes) that Hive uses to do single HDFS copies between directories. Distributed copies (distcp) will be used instead for bigger files, so that copies can be done faster.
hive.udtf.auto.progress (false): Whether Hive should automatically send progress information to the TaskTracker when using UDTFs, to prevent the task getting killed because of inactivity. Users should be cautious because this may prevent the TaskTracker from killing tasks with infinite loops.
hive.default.fileformat (TextFile): Expects one of [textfile, sequencefile, rcfile, orc, parquet]. Default file format for CREATE TABLE statements. Users can explicitly override it with CREATE TABLE ... STORED AS [FORMAT].
hive.default.fileformat.managed (none): Expects one of [none, textfile, sequencefile, rcfile, orc, parquet]. Default file format for CREATE TABLE statements applied to managed tables only. External tables will be created with the format specified by hive.default.fileformat. Leaving this null will result in using hive.default.fileformat for all tables.
hive.query.result.fileformat (SequenceFile): Expects one of [textfile, sequencefile, rcfile, llap]. Default file format for storing the result of the query.
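A common override of the two file-format defaults is to leave ad-hoc external tables as text while creating managed tables as ORC, which the managed-only property exists for:

    <!-- Managed tables default to ORC; external tables keep hive.default.fileformat -->
    <property>
      <name>hive.default.fileformat.managed</name>
      <value>ORC</value>
    </property>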
hive.fileformat.check (true): Whether to check the file format or not when loading data files.
hive.default.rcfile.serde (org.apache.hadoop.hive.serde2.columnar.LazyBinaryColumnarSerDe): The default SerDe Hive will use for the RCFile format.
hive.default.serde (org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe): The default SerDe Hive will use for storage formats that do not specify a SerDe.
hive.serdes.using.metastore.for.schema (org.apache.hadoop.hive.ql.io.orc.OrcSerde,org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe,org.apache.hadoop.hive.serde2.columnar.ColumnarSerDe,org.apache.hadoop.hive.serde2.dynamic_type.DynamicSerDe,org.apache.hadoop.hive.serde2.MetadataTypedColumnsetSerDe,org.apache.hadoop.hive.serde2.columnar.LazyBinaryColumnarSerDe,org.apache.hadoop.hive.ql.io.parquet.serde.ParquetHiveSerDe,org.apache.hadoop.hive.serde2.lazybinary.LazyBinarySerDe): SerDes retrieving schema from the metastore. This is an internal parameter.
hive.querylog.location (${system:java.io.tmpdir}/${system:user.name}): Location of the Hive run-time structured log file.
hive.querylog.enable.plan.progress (true): Whether to log the plan's progress every time a job's progress is checked. These logs are written to the location specified by hive.querylog.location.
hive.querylog.plan.progress.interval (60000ms): Expects a time value (msec if unspecified). The interval to wait between logging the plan's progress. If there is a whole-number percentage change in the progress of the mappers or the reducers, the progress is logged regardless of this value. The actual interval will be the ceiling of (this value divided by the value of hive.exec.counters.pull.interval) multiplied by the value of hive.exec.counters.pull.interval; i.e. if it is not divided evenly by hive.exec.counters.pull.interval, it will be logged less frequently than specified. This only has an effect if hive.querylog.enable.plan.progress is set to true.
hive.script.serde (org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe): The default SerDe for transmitting input data to and reading output data from the user scripts.
hive.script.recordreader (org.apache.hadoop.hive.ql.exec.TextRecordReader): The default record reader for reading data from the user scripts.
hive.script.recordwriter (org.apache.hadoop.hive.ql.exec.TextRecordWriter): The default record writer for writing data to the user scripts.
hive.transform.escape.input (false): This adds an option to escape special characters (newlines, carriage returns and tabs) when they are passed to the user script. This is useful if the Hive tables can contain data with special characters.
hive.binary.record.max.length (1000): Read from a binary stream and treat each hive.binary.record.max.length bytes as a record. The last record before the end of the stream can have fewer than hive.binary.record.max.length bytes.
hive.mapred.local.mem (0): Mapper/reducer memory in local mode.
hive.mapjoin.smalltable.filesize (25000000): The threshold for the input file size of the small tables; if the file size is smaller than this threshold, Hive will try to convert the common join into a map join.
hive.exec.schema.evolution (true): Use schema evolution to convert a self-describing file format's data to the schema desired by the reader.
hive.transactional.events.mem (10000000): Vectorized ACID readers can often load all the delete events from all the delete deltas into memory to optimize for performance. To prevent out-of-memory errors, this is a rough heuristic that limits the total number of delete events that can be loaded into memory at once. Roughly, it has been set to 10 million delete events per bucket (~160 MB).
hive.sample.seednumber (0): A number used in percentage sampling. By changing this number, the user will change the subsets of data sampled.
hive.test.mode (false): Whether Hive is running in test mode. If yes, it turns on sampling and prefixes the output table name.
hive.test.mode.prefix (test_): In test mode, specifies the prefix for the output table.
hive.test.mode.samplefreq (32): In test mode, specifies the sampling frequency for a table which is not bucketed. For example, the query "INSERT OVERWRITE TABLE dest SELECT col1 FROM src" would be converted to "INSERT OVERWRITE TABLE test_dest SELECT col1 FROM src TABLESAMPLE (BUCKET 1 OUT OF 32 ON rand(1))".
hive.test.mode.nosamplelist (empty): In test mode, specifies comma-separated table names to which sampling would not be applied.
hive.test.dummystats.aggregator (empty): Internal variable for testing.
hive.test.dummystats.publisher (empty): Internal variable for testing.
hive.test.currenttimestamp (empty): Current timestamp for testing.
hive.test.rollbacktxn (false): For testing only. Will mark every ACID transaction aborted.
hive.test.fail.compaction (false): For testing only. Will cause CompactorMR to fail.
hive.test.fail.heartbeater (false): For testing only. Will cause the Heartbeater to fail.
hive.merge.mapfiles (true): Merge small files at the end of a map-only job.
hive.merge.mapredfiles (false): Merge small files at the end of a map-reduce job.
hive.merge.tezfiles (false): Merge small files at the end of a Tez DAG.
hive.merge.sparkfiles (false): Merge small files at the end of a Spark DAG transformation.
hive.merge.size.per.task (256000000): Size of merged files at the end of the job.
hive.merge.smallfiles.avgsize (16000000): When the average output file size of a job is less than this number, Hive will start an additional map-reduce job to merge the output files into bigger files. This is only done for map-only jobs if hive.merge.mapfiles is true, and for map-reduce jobs if hive.merge.mapredfiles is true.
hive.merge.rcfile.block.level (true)
hive.merge.orcfile.stripe.level (true): When hive.merge.mapfiles, hive.merge.mapredfiles or hive.merge.tezfiles is enabled while writing a table with the ORC file format, enabling this config will do a stripe-level fast merge for small ORC files. Note that enabling this config will not honor the padding tolerance config (hive.exec.orc.block.padding.tolerance).
hive.exec.rcfile.use.explicit.header (true): If this is set, the header for RCFiles will simply be RCF. If this is not set, the header will be the one borrowed from sequence files, e.g. SEQ- followed by the input and output RCFile formats.
hive.exec.rcfile.use.sync.cache (true)
hive.io.rcfile.record.interval (2147483647)
hive.io.rcfile.column.number.conf (0)
hive.io.rcfile.tolerate.corruptions (false)
hive.io.rcfile.record.buffer.size (4194304)
parquet.memory.pool.ratio (0.5): Maximum fraction of heap that can be used by Parquet file writers in one task. It is for avoiding OutOfMemory errors in tasks. Works with Parquet 1.6.0 and above. This config parameter is defined in Parquet, so it does not start with 'hive.'.
hive.parquet.timestamp.skip.conversion (true): The current Hive implementation of Parquet stores timestamps in UTC; this flag allows skipping the conversion when reading Parquet files written by other tools.
hive.int.timestamp.conversion.in.seconds (false): A boolean/tinyint/smallint/int/bigint value is interpreted as milliseconds during the timestamp conversion. Set this flag to true to interpret the value as seconds, to be consistent with float/double.
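The hive.merge.* switches combine with the two size thresholds above: merging kicks in when the average output file is below hive.merge.smallfiles.avgsize and produces files of roughly hive.merge.size.per.task. A sketch that also merges map-reduce output, with the default thresholds restated for clarity:

    <!-- Merge small output files from both map-only and map-reduce jobs -->
    <property>
      <name>hive.merge.mapredfiles</name>
      <value>true</value>
    </property>
    <property>
      <name>hive.merge.smallfiles.avgsize</name>
      <value>16000000</value>
    </property>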
hive.exec.orc.base.delta.ratio (8): The ratio of the base writer and the delta writer in terms of STRIPE_SIZE and BUFFER_SIZE.
hive.exec.orc.split.strategy (HYBRID): Expects one of [hybrid, bi, etl]. This is not a user-level config. The BI strategy is used when the requirement is to spend less time in split generation as opposed to query execution (split generation does not read or cache file footers). The ETL strategy is used when spending a little more time in split generation is acceptable (split generation reads and caches file footers). HYBRID chooses between the above strategies based on heuristics.
hive.orc.splits.ms.footer.cache.enabled (false): Whether to enable using the file metadata cache in the metastore for ORC file footers.
hive.orc.splits.ms.footer.cache.ppd.enabled (true): Whether to enable file footer cache PPD (hive.orc.splits.ms.footer.cache.enabled must also be set to true for this to work).
hive.orc.splits.include.file.footer (false): If turned on, splits generated by ORC will include metadata about the stripes in the file. This data is read remotely (from the client or HS2 machine) and sent to all the tasks.
hive.orc.splits.directory.batch.ms (0): How long, in ms, to wait to batch input directories for processing during ORC split generation. 0 means process directories individually. This can increase the number of metastore calls if the metastore metadata cache is used.
hive.orc.splits.include.fileid (true): Include file IDs in splits on file systems that support them.
hive.orc.splits.allow.synthetic.fileid (true): Allow synthetic file IDs in splits on file systems that don't have a native one.
hive.orc.cache.stripe.details.mem.size (256Mb): Expects a byte size value with unit (blank for bytes, kb, mb, gb, tb, pb). Maximum size of ORC splits cached in the client.
hive.orc.compute.splits.num.threads (10): How many threads ORC should use to create splits in parallel.
hive.orc.cache.use.soft.references (false): By default, the cache that the ORC input format uses to store ORC file footers uses hard references for the cached objects. Setting this to true can help avoid out-of-memory issues under memory pressure (in some cases), at the cost of slight unpredictability in overall query performance.
hive.io.sarg.cache.max.weight.mb (10): The maximum weight allowed for the SearchArgument cache. By default, the cache allows a max weight of 10MB, after which entries will be evicted.
hive.lazysimple.extended_boolean_literal (false): LazySimpleSerDe uses this property to determine whether it treats 'T', 't', 'F', 'f', '1', and '0' as extended, legal boolean literals, in addition to 'TRUE' and 'FALSE'. The default is false, which means only 'TRUE' and 'FALSE' are treated as legal boolean literals.
hive.optimize.skewjoin (false): Whether to enable skew join optimization. The algorithm is as follows: at runtime, detect the keys with a large skew. Instead of processing those keys, store them temporarily in an HDFS directory. In a follow-up map-reduce job, process those skewed keys. The same key need not be skewed for all the tables, and so the follow-up map-reduce job (for the skewed keys) would be much faster, since it would be a map-join.
hive.optimize.dynamic.partition.hashjoin (false): Whether to enable the dynamically partitioned hash join optimization. This setting also depends on enabling hive.auto.convert.join.
hive.auto.convert.join (true): Whether Hive enables the optimization of converting a common join into a mapjoin based on the input file size.
hive.auto.convert.join.noconditionaltask (true): Whether Hive enables the optimization of converting a common join into a mapjoin based on the input file size. If this parameter is on, and the sum of sizes for n-1 of the tables/partitions of an n-way join is smaller than the specified size, the join is directly converted to a mapjoin (there is no conditional task).
hive.auto.convert.join.noconditionaltask.size (10000000): If hive.auto.convert.join.noconditionaltask is off, this parameter does not take effect. However, if it is on, and the sum of sizes for n-1 of the tables/partitions of an n-way join is smaller than this size, the join is directly converted to a mapjoin (there is no conditional task). The default is 10MB.
hive.auto.convert.join.use.nonstaged (false): For conditional joins, if the input stream from a small alias can be directly applied to the join operator without filtering or projection, the alias need not be pre-staged in the distributed cache via a mapred local task. Currently, this does not work with vectorization or the Tez execution engine.
hive.skewjoin.key (100000): Determines whether we have a skew key in a join. If we see more than the specified number of rows with the same key in the join operator, we consider the key a skew join key.
hive.skewjoin.mapjoin.map.tasks (10000): Determines the number of map tasks used in the follow-up map join job for a skew join. It should be used together with hive.skewjoin.mapjoin.min.split for fine-grained control.
hive.skewjoin.mapjoin.min.split (33554432): Determines the maximum number of map tasks used in the follow-up map join job for a skew join, by specifying the minimum split size. It should be used together with hive.skewjoin.mapjoin.map.tasks for fine-grained control.
hive.heartbeat.interval (1000): Send a heartbeat after this interval; used by the mapjoin and filter operators.
hive.limit.row.max.size (100000): When trying a smaller subset of data for simple LIMIT, how much size we need to guarantee each row to have at least.
hive.limit.optimize.limit.file (10): When trying a smaller subset of data for simple LIMIT, the maximum number of files we can sample.
hive.limit.optimize.enable (false): Whether to enable the optimization of trying a smaller subset of data for simple LIMIT first.
hive.limit.optimize.fetch.max (50000): Maximum number of rows allowed for a smaller subset of data for simple LIMIT, if it is a fetch query. Insert queries are not restricted by this limit.
hive.limit.pushdown.memory.usage (0.1): Expects a value between 0.0f and 1.0f. The fraction of available memory to be used for buffering rows in the ReduceSink operator for the limit pushdown optimization.
hive.limit.query.max.table.partition (-1): This controls how many partitions can be scanned for each partitioned table. The default value "-1" means no limit. (DEPRECATED: please use hive.metastore.limit.partition.request in the metastore instead.)
hive.auto.convert.join.hashtable.max.entries (40000000): If hive.auto.convert.join.noconditionaltask is off, this parameter does not take effect. However, if it is on, and the predicted number of entries in the hashtable for a given join input is larger than this number, the join will not be converted to a mapjoin. The value "-1" means no limit.
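The mapjoin auto-conversion settings read together: conversion happens when the n-1 smaller inputs fit under the byte threshold, and is vetoed when the predicted hashtable entry count exceeds the entry limit. An illustrative bump of the size threshold to 100MB (the value is an example, not a recommendation):

    <!-- Illustrative: allow larger small-table sides to be map-joined -->
    <property>
      <name>hive.auto.convert.join.noconditionaltask</name>
      <value>true</value>
    </property>
    <property>
      <name>hive.auto.convert.join.noconditionaltask.size</name>
      <value>100000000</value>
    </property>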
hive.hashtable.key.count.adjustment (1.0): Adjustment to the mapjoin hashtable size derived from table and column statistics; the estimate of the number of keys is divided by this value. If the value is 0, statistics are not used and hive.hashtable.initialCapacity is used instead.
hive.hashtable.initialCapacity (100000): Initial capacity of the mapjoin hashtable if statistics are absent, or if hive.hashtable.key.count.adjustment is set to 0.
hive.hashtable.loadfactor (0.75)
hive.mapjoin.followby.gby.localtask.max.memory.usage (0.55): How much memory the local task can use to hold the key/value pairs in an in-memory hash table when this map join is followed by a group by. If the local task's memory usage exceeds this number, the local task will abort by itself; it means the data of the small table is too large to be held in memory.
hive.mapjoin.localtask.max.memory.usage (0.9): How much memory the local task can use to hold the key/value pairs in an in-memory hash table. If the local task's memory usage exceeds this number, the local task will abort by itself; it means the data of the small table is too large to be held in memory.
hive.mapjoin.check.memory.rows (100000): The number of rows processed after which memory usage is checked.
hive.debug.localtask (false)
hive.input.format (org.apache.hadoop.hive.ql.io.CombineHiveInputFormat): The default input format. Set this to HiveInputFormat if you encounter problems with CombineHiveInputFormat.
hive.tez.input.format (org.apache.hadoop.hive.ql.io.HiveInputFormat): The default input format for Tez. Tez groups splits in the AM.
hive.tez.container.size (-1): By default Tez will spawn containers of the size of a mapper. This can be used to override that.
hive.tez.cpu.vcores (-1): By default Tez will ask for however many CPUs map-reduce is configured to use per container. This can be used to override that.
hive.tez.java.opts (empty): By default Tez will use the Java options from map tasks. This can be used to override them.
hive.tez.log.level (INFO): The log level to use for tasks executing as part of the DAG. Used only if hive.tez.java.opts is used to configure Java options.
hive.tez.hs2.user.access (true): Whether to grant access to the hs2/hive user for queries.
hive.query.name (empty): This name is used by Tez to set the DAG name. This name in turn will appear on the Tez UI, representing the work that was done.
hive.optimize.bucketingsorting (true): Don't create a reducer for enforcing bucketing/sorting for queries of the form "insert overwrite table T2 select * from T1;" where T1 and T2 are bucketed/sorted by the same keys into the same number of buckets.
hive.mapred.partitioner (org.apache.hadoop.hive.ql.io.DefaultHivePartitioner)
hive.enforce.sortmergebucketmapjoin (false): If the user asked for a sort-merge bucketed map-side join and it cannot be performed, should the query fail or not?
hive.enforce.bucketmapjoin (false): If the user asked for a bucketed map-side join and it cannot be performed, should the query fail or not? For example, if the buckets in the tables being joined are not a multiple of each other, a bucketed map-side join cannot be performed, and the query will fail if hive.enforce.bucketmapjoin is set to true.
hive.auto.convert.sortmerge.join (false): Whether the join will be automatically converted to a sort-merge join, if the joined tables pass the criteria for sort-merge join.
hive.auto.convert.sortmerge.join.reduce.side (true): Whether hive.auto.convert.sortmerge.join (if enabled) should be applied to the reduce side.
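When the mapper-derived defaults don't fit, the three Tez settings above are usually overridden together: container size in MB, with the task heap passed through hive.tez.java.opts. The values here are placeholders:

    <!-- Placeholder sizing: 4 GB containers with a ~3.2 GB heap -->
    <property>
      <name>hive.tez.container.size</name>
      <value>4096</value>
    </property>
    <property>
      <name>hive.tez.java.opts</name>
      <value>-Xmx3276m</value>
    </property>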
hive.auto.convert.sortmerge.join.bigtable.selection.policy (org.apache.hadoop.hive.ql.optimizer.AvgPartitionSizeBasedBigTableSelectorForAutoSMJ): The policy to choose the big table for automatic conversion to sort-merge join. By default, the table with the largest partitions is assigned as the big table. All policies are:
- based on position of the table: the leftmost table is selected (org.apache.hadoop.hive.ql.optimizer.LeftmostBigTableSMJ);
- based on total size (all the partitions selected in the query) of the table (org.apache.hadoop.hive.ql.optimizer.TableSizeBasedBigTableSelectorForAutoSMJ);
- based on average size (all the partitions selected in the query) of the table (org.apache.hadoop.hive.ql.optimizer.AvgPartitionSizeBasedBigTableSelectorForAutoSMJ).
New policies can be added in the future.
hive.auto.convert.sortmerge.join.to.mapjoin (false): If hive.auto.convert.sortmerge.join is set to true, and a join was converted to a sort-merge join, this parameter decides whether each table should be tried as the big table, effectively trying a map-join. That would create a conditional task with n+1 children for an n-way join (one child for each table as the big table), with the sort-merge join as the backup task. In some cases a map-join would be faster than a sort-merge join, if there is no advantage to having the output bucketed and sorted. For example, if a very big sorted and bucketed table with few files (say 10 files) is being joined with a very small sorted and bucketed table with few files (10 files), the sort-merge join will only use 10 mappers, and a simple map-only join might be faster if the complete small table can fit in memory and a map-join can be performed.
hive.exec.script.trust (false)
hive.exec.rowoffset (false): Whether to provide the row offset virtual column.
hive.optimize.index.filter (false): Whether to enable automatic use of indexes.
hive.optimize.index.autoupdate (false): Whether to update stale indexes automatically.
hive.optimize.ppd (true): Whether to enable predicate pushdown.
hive.optimize.ppd.windowing (true): Whether to enable predicate pushdown through windowing.
hive.ppd.recognizetransivity (true): Whether to transitively replicate predicate filters over equijoin conditions.
hive.ppd.remove.duplicatefilters (true): During query optimization, filters may be pushed down in the operator tree. If this config is true, only the pushed-down filters remain in the operator tree and the original filter is removed. If this config is false, the original filter is also left in the operator tree at its original place.
hive.optimize.point.lookup (true): Whether to transform OR clauses in Filter operators into IN clauses.
hive.optimize.point.lookup.min (31): Minimum number of OR clauses needed to transform into IN clauses.
hive.optimize.partition.columns.separate (true): Extract partition columns from IN clauses.
hive.optimize.constant.propagation (true): Whether to enable the constant propagation optimizer.
hive.optimize.remove.identity.project (true): Removes identity projects from the operator tree.
hive.optimize.metadataonly (false): Whether to eliminate scans of the tables from which no columns are selected. Note that, when selecting from empty tables with data files, this can produce incorrect results, so it is disabled by default. It works correctly for normal tables.
hive.optimize.null.scan (true): Don't scan relations which are guaranteed to not generate any rows.
hive.optimize.ppd.storage (true): Whether to push predicates down to storage handlers.
hive.optimize.groupby (true): Whether to enable the bucketed group by from bucketed partitions/tables.
hive.optimize.bucketmapjoin (false): Whether to try bucket mapjoin.
hive.optimize.bucketmapjoin.sortedmerge (false): Whether to try sorted bucket merge map join.
hive.optimize.reducededuplication (true): Remove extra map-reduce jobs if the data is already clustered by the same key which needs to be used again. This should always be set to true. Since it is a new feature, it has been made configurable.
hive.optimize.reducededuplication.min.reducer (4): Reduce deduplication merges two RSs by moving the key/parts/reducer-num of the child RS to the parent RS. That means that if the reducer-num of the child RS is fixed (order by or forced bucketing) and small, it can produce a very slow single-MR job. The optimization will be automatically disabled if the number of reducers would be less than the specified value.
hive.optimize.sort.dynamic.partition (false): When enabled, the dynamic partitioning column will be globally sorted. This way we can keep only one record writer open for each partition value in the reducer, thereby reducing the memory pressure on reducers.
hive.optimize.sampling.orderby (false): Uses sampling on the order-by clause for parallel execution.
hive.optimize.sampling.orderby.number (1000): Total number of samples to be obtained.
hive.optimize.sampling.orderby.percent (0.1): Expects a value between 0.0f and 1.0f. Probability with which a row will be chosen.
hive.optimize.distinct.rewrite (true): When applicable, this optimization rewrites distinct aggregates from a single stage to multi-stage aggregation. This may not be optimal in all cases. Ideally, whether to trigger it or not should be a cost-based decision. Until Hive formalizes the cost model for this, it is config driven.
hive.optimize.union.remove (false): Whether to remove the union and push the operators between the union and the filesink above the union. This avoids an extra scan of the output by the union. This is independently useful for union queries, and especially useful when hive.optimize.skewjoin.compiletime is set to true, since an extra union is inserted. The merge is triggered if either hive.merge.mapfiles or hive.merge.mapredfiles is set to true. If the user has set hive.merge.mapfiles to true and hive.merge.mapredfiles to false, the idea was that the number of reducers is small, so the number of files is anyway small. However, with this optimization we may be increasing the number of files by a big margin, so we merge aggressively.
hive.optimize.correlation (false): Exploit intra-query correlations.
hive.optimize.limittranspose (false): Whether to push a limit through a left/right outer join or union. If the value is true and the size of the outer input is reduced enough (as specified in hive.optimize.limittranspose.reduction), the limit is pushed to the outer input or union; to remain semantically correct, the limit is kept on top of the join or the union too.
hive.optimize.limittranspose.reductionpercentage (1.0): When hive.optimize.limittranspose is true, this variable specifies the minimal reduction in the size of the outer input of the join or input of the union that we should get in order to apply the rule.
hive.optimize.limittranspose.reductiontuples (0): When hive.optimize.limittranspose is true, this variable specifies the minimal reduction in the number of tuples of the outer input of the join or the input of the union that you should get in order to apply the rule.
hive.optimize.filter.stats.reduction (false): Whether to simplify comparison expressions in filter operators using column stats.
hive.optimize.skewjoin.compiletime (false): Whether to create a separate plan for skewed keys for the tables in the join. This is based on the skewed keys stored in the metadata. At compile time, the plan is broken into different joins: one for the skewed keys, and the other for the remaining keys. A union is then performed for the two joins generated above. So unless the same skewed key is present in both joined tables, the join for the skewed key will be performed as a map-side join. The main difference between this parameter and hive.optimize.skewjoin is that this parameter uses the skew information stored in the metastore to optimize the plan at compile time itself. If there is no skew information in the metadata, this parameter will not have any effect. Both hive.optimize.skewjoin.compiletime and hive.optimize.skewjoin should be set to true. Ideally, hive.optimize.skewjoin should be renamed hive.optimize.skewjoin.runtime, but that is not being done for backward compatibility. If the skew information is correctly stored in the metadata, hive.optimize.skewjoin.compiletime will change the query plan to take care of it, and hive.optimize.skewjoin will be a no-op.
hive.optimize.cte.materialize.threshold (-1): If the number of references to a CTE clause exceeds this threshold, Hive will materialize it before executing the main query block. -1 disables this feature.
hive.optimize.index.filter.compact.minsize (5368709120): Minimum size (in bytes) of the inputs on which a compact index is automatically used.
hive.optimize.index.filter.compact.maxsize (-1): Maximum size (in bytes) of the inputs on which a compact index is automatically used. A negative number is equivalent to infinity.
hive.index.compact.query.max.entries (10000000): The maximum number of index entries to read during a query that uses the compact index. A negative value is equivalent to infinity.
hive.index.compact.query.max.size (10737418240): The maximum number of bytes that a query using the compact index can read. A negative value is equivalent to infinity.
hive.index.compact.binary.search (true): Whether or not to use a binary search to find the entries in an index table that match the filter, where possible.
hive.stats.autogather (true): A flag to gather statistics (only basic) automatically during the INSERT OVERWRITE command.
hive.stats.column.autogather (false): A flag to gather column statistics automatically.
hive.stats.dbclass (fs): Expects one of the patterns in [custom, fs]. The storage that stores temporary Hive statistics. In filesystem-based statistics collection ('fs'), each task writes the statistics it has collected to a file on the filesystem, which will be aggregated after the job has finished. Supported values are fs (filesystem) and custom, as defined in StatsSetupConst.java.
hive.stats.default.publisher (empty): The Java class (implementing the StatsPublisher interface) that is used by default if hive.stats.dbclass is of custom type.
hive.stats.default.aggregator (empty): The Java class (implementing the StatsAggregator interface) that is used by default if hive.stats.dbclass is of custom type.
hive.stats.atomic (false): Whether to update metastore stats only if all stats are available.
hive.client.stats.counters (empty): Subset of counters that should be of interest for hive.client.stats.publishers (when one wants to limit their publishing). Non-display names should be used.
hive.stats.reliable (false): Whether queries will fail because stats cannot be collected completely accurately. If this is set to true, reading/writing from/into a partition may fail because the stats could not be computed accurately.
hive.analyze.stmt.collect.partlevel.stats (true): Queries like "analyze table T compute statistics for columns" should compute partition-level stats for a partitioned table even when no partition spec is specified.
hive.stats.gather.num.threads (10): Number of threads used by the partialscan/noscan analyze command for partitioned tables. This is applicable only for file formats that implement StatsProvidingRecordReader (like ORC).
hive.stats.collect.tablekeys (false): Whether join and group-by keys on tables are derived and maintained in the QueryPlan. This is useful to identify how tables are accessed and to determine if they should be bucketed.
hive.stats.collect.scancols (false): Whether column accesses are tracked in the QueryPlan. This is useful to identify how tables are accessed and to determine if there are wasted columns that can be trimmed.
hive.stats.ndv.error (20.0): Standard error expressed as a percentage. Provides a tradeoff between accuracy and compute cost. A lower value for the error indicates higher accuracy and a higher compute cost.
hive.metastore.stats.ndv.tuner (0.0): Provides a tunable parameter between the lower bound and the higher bound of NDV for the aggregate NDV across all partitions. The lower bound is equal to the maximum NDV of all the partitions. The higher bound is equal to the sum of the NDVs of all the partitions. Its value should be between 0.0 (i.e., choose the lower bound) and 1.0 (i.e., choose the higher bound).
hive.metastore.stats.ndv.densityfunction (false): Whether to use a density function to estimate the NDV for the whole table based on the NDV of partitions.
hive.stats.max.variable.length (100): To estimate the size of data flowing through operators in Hive/Tez (for reducer estimation etc.), the average row size is multiplied by the total number of rows coming out of each operator. The average row size is computed from the average column size of all columns in the row. In the absence of column statistics, this value is used for variable-length columns (like string, bytes, etc.). For fixed-length columns their corresponding Java equivalent sizes are used (float - 4 bytes, double - 8 bytes, etc.).
hive.stats.list.num.entries (10): To estimate the size of data flowing through operators in Hive/Tez (for reducer estimation etc.), the average row size is multiplied by the total number of rows coming out of each operator. The average row size is computed from the average column size of all columns in the row. In the absence of column statistics, for variable-length complex columns like list, the average number of entries/values can be specified using this config.
hive.stats.map.num.entries (10): To estimate the size of data flowing through operators in Hive/Tez (for reducer estimation etc.), the average row size is multiplied by the total number of rows coming out of each operator. The average row size is computed from the average column size of all columns in the row. In the absence of column statistics, for variable-length complex columns like map, the average number of entries/values can be specified using this config.
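Column statistics only help the optimizer if they are both gathered and fetched. A sketch pairing the autogather flag above with the fetch flag described below (hive.stats.fetch.column.stats):

    <!-- Gather column stats on insert and let the optimizer fetch them -->
    <property>
      <name>hive.stats.column.autogather</name>
      <value>true</value>
    </property>
    <property>
      <name>hive.stats.fetch.column.stats</name>
      <value>true</value>
    </property>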
hive.stats.fetch.partition.stats true Annotation of operator tree with statistics information requires partition level basic statistics like number of rows, data size and file size. Partition statistics are fetched from metastore. Fetching partition statistics for each needed partition can be expensive when the number of partitions is high. This flag can be used to disable fetching of partition statistics from metastore. When this flag is disabled, Hive will make calls to filesystem to get file sizes and will estimate the number of rows from row schema. hive.stats.fetch.column.stats false Annotation of operator tree with statistics information requires column statistics. Column statistics are fetched from metastore. Fetching column statistics for each needed column can be expensive when the number of columns is high. This flag can be used to disable fetching of column statistics from metastore. hive.stats.join.factor 1.1 Hive/Tez optimizer estimates the data size flowing through each of the operators. JOIN operator uses column statistics to estimate the number of rows flowing out of it and hence the data size. In the absence of column statistics, this factor determines the amount of rows that flows out of JOIN operator. hive.stats.deserialization.factor 1.0 Hive/Tez optimizer estimates the data size flowing through each of the operators. In the absence of basic statistics like number of rows and data size, file size is used to estimate the number of rows and data size. Since files in tables/partitions are serialized (and optionally compressed) the estimates of number of rows and data size cannot be reliably determined. This factor is multiplied with the file size to account for serialization and compression. hive.stats.filter.in.factor 1.0 Currently column distribution is assumed to be uniform. This can lead to overestimation/underestimation in the number of rows filtered by a certain operator, which in turn might lead to overprovision or underprovision of resources. This factor is applied to the cardinality estimation of IN clauses in filter operators. hive.support.concurrency false Whether Hive supports concurrency control or not. A ZooKeeper instance must be up and running when using zookeeper Hive lock manager hive.lock.manager org.apache.hadoop.hive.ql.lockmgr.zookeeper.ZooKeeperHiveLockManager hive.lock.numretries 100 The number of times you want to try to get all the locks hive.unlock.numretries 10 The number of times you want to retry to do one unlock hive.lock.sleep.between.retries 60s Expects a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec), which is sec if not specified. The time should be in between 0 sec (exclusive) and 9223372036854775807 sec (exclusive). The maximum sleep time between various retries hive.lock.mapred.only.operation false This param is to control whether or not only do lock on queries that need to execute at least one mapred job. hive.zookeeper.quorum List of ZooKeeper servers to talk to. This is needed for: 1. Read/write locks - when hive.lock.manager is set to org.apache.hadoop.hive.ql.lockmgr.zookeeper.ZooKeeperHiveLockManager, 2. When HiveServer2 supports service discovery via Zookeeper. 3. For delegation token storage if zookeeper store is used, if hive.cluster.delegation.token.store.zookeeper.connectString is not set 4. LLAP daemon registry service hive.zookeeper.client.port 2181 The port of ZooKeeper servers to talk to. 
If the list of Zookeeper servers specified in hive.zookeeper.quorum does not contain port numbers, this value is used. hive.zookeeper.session.timeout 1200000ms Expects a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec), which is msec if not specified. ZooKeeper client's session timeout (in milliseconds). The client is disconnected, and as a result, all locks released, if a heartbeat is not sent in the timeout. hive.zookeeper.namespace hive_zookeeper_namespace The parent node under which all ZooKeeper nodes are created. hive.zookeeper.clean.extra.nodes false Clean extra nodes at the end of the session. hive.zookeeper.connection.max.retries 3 Max number of times to retry when connecting to the ZooKeeper server. hive.zookeeper.connection.basesleeptime 1000ms Expects a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec), which is msec if not specified. Initial amount of time (in milliseconds) to wait between retries when connecting to the ZooKeeper server when using ExponentialBackoffRetry policy. hive.txn.manager org.apache.hadoop.hive.ql.lockmgr.DummyTxnManager Set to org.apache.hadoop.hive.ql.lockmgr.DbTxnManager as part of turning on Hive transactions, which also requires appropriate settings for hive.compactor.initiator.on, hive.compactor.worker.threads, hive.support.concurrency (true), and hive.exec.dynamic.partition.mode (nonstrict). The default DummyTxnManager replicates pre-Hive-0.13 behavior and provides no transactions. hive.txn.strict.locking.mode true In strict mode non-ACID resources use standard R/W lock semantics, e.g. INSERT will acquire exclusive lock. In nonstrict mode, for non-ACID resources, INSERT will only acquire shared lock, which allows two concurrent writes to the same partition but still lets lock manager prevent DROP TABLE etc. when the table is being written to hive.txn.timeout 300s Expects a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec), which is sec if not specified. time after which transactions are declared aborted if the client has not sent a heartbeat. hive.txn.heartbeat.threadpool.size 5 The number of threads to use for heartbeating. For Hive CLI, 1 is enough. For HiveServer2, we need a few hive.txn.manager.dump.lock.state.on.acquire.timeout false Set this to true so that when attempt to acquire a lock on resource times out, the current state of the lock manager is dumped to log file. This is for debugging. See also hive.lock.numretries and hive.lock.sleep.between.retries. hive.txn.operational.properties 0 Sets the operational properties that control the appropriate behavior for various versions of the Hive ACID subsystem. Setting it to zero will turn on the legacy mode for ACID, while setting it to one will enable a split-update feature found in the newer version of Hive ACID subsystem. Mostly it is intended to be used as an internal property for future versions of ACID. (See HIVE-14035 for details.) hive.max.open.txns 100000 Maximum number of open transactions. If current open transactions reach this limit, future open transaction requests will be rejected, until this number goes below the limit. hive.count.open.txns.interval 1s Expects a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec), which is sec if not specified. Time in seconds between checks to count open transactions. hive.txn.max.open.batch 1000 Maximum number of transactions that can be fetched in one call to open_txns(). 
This controls how many transactions streaming agents such as Flume or Storm open simultaneously. The streaming agent then writes that number of entries into a single file (per Flume agent or Storm bolt). Thus increasing this value decreases the number of delta files created by streaming agents. But it also increases the number of open transactions that Hive has to track at any given time, which may negatively affect read performance. hive.txn.retryable.sqlex.regex Comma separated list of regular expression patterns for SQL state, error code, and error message of retryable SQLExceptions, that's suitable for the metastore DB. For example: Can't serialize.*,40001$,^Deadlock,.*ORA-08176.* The string that the regex will be matched against is of the following form, where ex is a SQLException: ex.getMessage() + " (SQLState=" + ex.getSQLState() + ", ErrorCode=" + ex.getErrorCode() + ")" hive.compactor.initiator.on false Whether to run the initiator and cleaner threads on this metastore instance or not. Set this to true on one instance of the Thrift metastore service as part of turning on Hive transactions. For a complete list of parameters required for turning on transactions, see hive.txn.manager. hive.compactor.worker.threads 0 How many compactor worker threads to run on this metastore instance. Set this to a positive number on one or more instances of the Thrift metastore service as part of turning on Hive transactions. For a complete list of parameters required for turning on transactions, see hive.txn.manager. Worker threads spawn MapReduce jobs to do compactions. They do not do the compactions themselves. Increasing the number of worker threads will decrease the time it takes tables or partitions to be compacted once they are determined to need compaction. It will also increase the background load on the Hadoop cluster as more MapReduce jobs will be running in the background. hive.compactor.worker.timeout 86400s Expects a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec), which is sec if not specified. Time in seconds after which a compaction job will be declared failed and the compaction re-queued. hive.compactor.check.interval 300s Expects a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec), which is sec if not specified. Time in seconds between checks to see if any tables or partitions need to be compacted. This should be kept high because each check for compaction requires many calls against the NameNode. Decreasing this value will reduce the time it takes for compaction to be started for a table or partition that requires compaction. However, checking if compaction is needed requires several calls to the NameNode for each table or partition that has had a transaction done on it since the last major compaction. So decreasing this value will increase the load on the NameNode. hive.compactor.delta.num.threshold 10 Number of delta directories in a table or partition that will trigger a minor compaction. hive.compactor.delta.pct.threshold 0.1 Percentage (fractional) size of the delta files relative to the base that will trigger a major compaction. (1.0 = 100%, so the default 0.1 = 10%.) hive.compactor.max.num.delta 500 Maximum number of delta files that the compactor will attempt to handle in a single job. hive.compactor.abortedtxn.threshold 1000 Number of aborted transactions involving a given table or partition that will trigger a major compaction. hive.compactor.initiator.failed.compacts.threshold 2 Expects value between 1 and 20. 
Number of consecutive compaction failures (per table/partition) after which automatic compactions will not be scheduled any more. Note that this must be less than hive.compactor.history.retention.failed. hive.compactor.cleaner.run.interval 5000ms Expects a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec), which is msec if not specified. Time between runs of the cleaner thread hive.compactor.job.queue Used to specify name of Hadoop queue to which Compaction jobs will be submitted. Set to empty string to let Hadoop choose the queue. hive.compactor.history.retention.succeeded 3 Expects value between 0 and 100. Determines how many successful compaction records will be retained in compaction history for a given table/partition. hive.compactor.history.retention.failed 3 Expects value between 0 and 100. Determines how many failed compaction records will be retained in compaction history for a given table/partition. hive.compactor.history.retention.attempted 2 Expects value between 0 and 100. Determines how many attempted compaction records will be retained in compaction history for a given table/partition. hive.compactor.history.reaper.interval 2m Expects a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec), which is msec if not specified. Determines how often compaction history reaper runs hive.timedout.txn.reaper.start 100s Expects a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec), which is msec if not specified. Time delay of 1st reaper run after metastore start hive.timedout.txn.reaper.interval 180s Expects a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec), which is msec if not specified. Time interval describing how often the reaper runs hive.writeset.reaper.interval 60s Expects a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec), which is msec if not specified. Frequency of WriteSet reaper runs hive.merge.cardinality.check true Set to true to ensure that each SQL Merge statement ensures that for each row in the target table there is at most 1 matching row in the source table per SQL Specification. hive.druid.indexer.segments.granularity DAY Expects one of the pattern in [YEAR, MONTH, WEEK, DAY, HOUR, MINUTE, SECOND]. Granularity for the segments created by the Druid storage handler hive.druid.indexer.partition.size.max 5000000 Maximum number of records per segment partition hive.druid.indexer.memory.rownum.max 75000 Maximum number of records in memory while storing data in Druid hive.druid.broker.address.default localhost:8082 Address of the Druid broker. If we are querying Druid from Hive, this address needs to be declared hive.druid.coordinator.address.default localhost:8081 Address of the Druid coordinator. It is used to check the load status of newly created segments hive.druid.select.distribute true If it is set to true, we distribute the execution of Druid Select queries. Concretely, we retrieve the result for Select queries directly from the Druid nodes containing the segments data. In particular, first we contact the Druid broker node to obtain the nodes containing the segments for the given query, and then we contact those nodes to retrieve the results for the query. If it is set to false, we do not execute the Select queries in a distributed fashion. Instead, results for those queries are returned by the Druid broker node. hive.druid.select.threshold 10000 Takes only effect when hive.druid.select.distribute is set to false. 
When we can split a Select query, this is the maximum number of rows that we try to retrieve per query. In order to do that, we obtain the estimated size for the complete result. If the number of records of the query results is larger than this threshold, we split the query in total number of rows/threshold parts across the time dimension. Note that we assume the records to be split uniformly across the time dimension. hive.druid.http.numConnection 20 Number of connections used by the HTTP client. hive.druid.http.read.timeout PT1M Read timeout period for the HTTP client in ISO8601 format (for example P2W, P3M, PT1H30M, PT0.750S), default is period of 1 minute. hive.druid.sleep.time PT10S Sleep time between retries in ISO8601 format (for example P2W, P3M, PT1H30M, PT0.750S), default is period of 10 seconds. hive.druid.basePersistDirectory Local temporary directory used to persist intermediate indexing state, will default to JVM system property java.io.tmpdir. hive.druid.storage.storageDirectory /druid/segments druid deep storage location. hive.druid.metadata.base druid Default prefix for metadata tables hive.druid.metadata.db.type mysql Expects one of the pattern in [mysql, postgresql]. Type of the metadata database. hive.druid.metadata.username Username to connect to Type of the metadata DB. hive.druid.metadata.password Password to connect to Type of the metadata DB. hive.druid.metadata.uri URI to connect to the database (for example jdbc:mysql://hostname:port/DBName). hive.druid.working.directory /tmp/workingDirectory Default hdfs working directory used to store some intermediate metadata hive.druid.maxTries 5 Maximum number of retries before giving up hive.druid.passiveWaitTimeMs 30000 Wait time in ms default to 30 seconds. hive.hbase.wal.enabled true Whether writes to HBase should be forced to the write-ahead log. Disabling this improves HBase write performance at the risk of lost writes in case of a crash. hive.hbase.generatehfiles false True when HBaseStorageHandler should generate hfiles instead of operate against the online table. hive.hbase.snapshot.name The HBase table snapshot name to use. hive.hbase.snapshot.restoredir /tmp The directory in which to restore the HBase table snapshot. hive.archive.enabled false Whether archiving operations are permitted hive.optimize.index.groupby false Whether to enable optimization of group-by queries using Aggregate indexes. hive.fetch.task.conversion more Expects one of [none, minimal, more]. Some select queries can be converted to single FETCH task minimizing latency. Currently the query should be single sourced not having any subquery and should not have any aggregations or distincts (which incurs RS), lateral views and joins. 0. none : disable hive.fetch.task.conversion 1. minimal : SELECT STAR, FILTER on partition columns, LIMIT only 2. more : SELECT, FILTER, LIMIT only (support TABLESAMPLE and virtual columns) hive.fetch.task.conversion.threshold 1073741824 Input threshold for applying hive.fetch.task.conversion. If target table is native, input length is calculated by summation of file lengths. If it's not native, storage handler for the table can optionally implement org.apache.hadoop.hive.ql.metadata.InputEstimator interface. hive.fetch.task.aggr false Aggregation queries with no group-by clause (for example, select count(*) from src) execute final aggregations in single reduce task. If this is set true, Hive delegates final aggregation stage to fetch task, possibly decreasing the query time. 
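As a hedged example of hive.fetch.task.conversion (table and column names are illustrative): with the default value 'more', a simple projection with a LIMIT and no aggregation, join, or subquery is served by a single FETCH task instead of launching a MapReduce/Tez job:

    SET hive.fetch.task.conversion=more;
    -- Served directly from the table files, no job is submitted.
    SELECT id, name FROM users LIMIT 10;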
hive.compute.query.using.stats true When set to true Hive will answer a few queries like count(1) purely using stats stored in metastore. For basic stats collection turn on the config hive.stats.autogather to true. For more advanced stats collection need to run analyze table queries. hive.fetch.output.serde org.apache.hadoop.hive.serde2.DelimitedJSONSerDe The SerDe used by FetchTask to serialize the fetch output. hive.cache.expr.evaluation true If true, the evaluation result of a deterministic expression referenced twice or more will be cached. For example, in a filter condition like '.. where key + 10 = 100 or key + 10 = 0' the expression 'key + 10' will be evaluated/cached once and reused for the following expression ('key + 10 = 0'). Currently, this is applied only to expressions in select or filter operators. hive.variable.substitute true This enables substitution using syntax like ${var} ${system:var} and ${env:var}. hive.variable.substitute.depth 40 The maximum replacements the substitution engine will do. hive.conf.validation true Enables type checking for registered Hive configurations hive.semantic.analyzer.hook hive.security.authorization.enabled false enable or disable the Hive client authorization hive.security.authorization.manager org.apache.hadoop.hive.ql.security.authorization.plugin.sqlstd.SQLStdHiveAuthorizerFactory The Hive client authorization manager class name. The user defined authorization class should implement interface org.apache.hadoop.hive.ql.security.authorization.HiveAuthorizationProvider. hive.security.authenticator.manager org.apache.hadoop.hive.ql.security.HadoopDefaultAuthenticator hive client authenticator manager class name. The user defined authenticator should implement interface org.apache.hadoop.hive.ql.security.HiveAuthenticationProvider. hive.security.metastore.authorization.manager org.apache.hadoop.hive.ql.security.authorization.DefaultHiveMetastoreAuthorizationProvider Names of authorization manager classes (comma separated) to be used in the metastore for authorization. The user defined authorization class should implement interface org.apache.hadoop.hive.ql.security.authorization.HiveMetastoreAuthorizationProvider. All authorization manager classes have to successfully authorize the metastore API call for the command execution to be allowed. hive.security.metastore.authorization.auth.reads true If this is true, metastore authorizer authorizes read actions on database, table hive.security.metastore.authenticator.manager org.apache.hadoop.hive.ql.security.HadoopDefaultMetastoreAuthenticator authenticator manager class name to be used in the metastore for authentication. The user defined authenticator should implement interface org.apache.hadoop.hive.ql.security.HiveAuthenticationProvider. hive.security.authorization.createtable.user.grants the privileges automatically granted to some users whenever a table gets created. An example like "userX,userY:select;userZ:create" will grant select privilege to userX and userY, and grant create privilege to userZ whenever a new table created. hive.security.authorization.createtable.group.grants the privileges automatically granted to some groups whenever a table gets created. An example like "groupX,groupY:select;groupZ:create" will grant select privilege to groupX and groupY, and grant create privilege to groupZ whenever a new table created. hive.security.authorization.createtable.role.grants the privileges automatically granted to some roles whenever a table gets created. 
An example like "roleX,roleY:select;roleZ:create" will grant select privilege to roleX and roleY, and grant create privilege to roleZ whenever a new table created. hive.security.authorization.createtable.owner.grants The privileges automatically granted to the owner whenever a table gets created. An example like "select,drop" will grant select and drop privilege to the owner of the table. Note that the default gives the creator of a table no access to the table (but see HIVE-8067). hive.security.authorization.task.factory org.apache.hadoop.hive.ql.parse.authorization.HiveAuthorizationTaskFactoryImpl Authorization DDL task factory implementation hive.security.authorization.sqlstd.confwhitelist List of comma separated Java regexes. Configurations parameters that match these regexes can be modified by user when SQL standard authorization is enabled. To get the default value, use the 'set <param>' command. Note that the hive.conf.restricted.list checks are still enforced after the white list check hive.security.authorization.sqlstd.confwhitelist.append List of comma separated Java regexes, to be appended to list set in hive.security.authorization.sqlstd.confwhitelist. Using this list instead of updating the original list means that you can append to the defaults set by SQL standard authorization instead of replacing it entirely. hive.cli.print.header false Whether to print the names of the columns in query output. hive.cli.tez.session.async true Whether to start Tez session in background when running CLI with Tez, allowing CLI to be available earlier. hive.error.on.empty.partition false Whether to throw an exception if dynamic partition insert generates empty results. hive.index.compact.file internal variable hive.index.blockfilter.file internal variable hive.index.compact.file.ignore.hdfs false When true the HDFS location stored in the index file will be ignored at runtime. If the data got moved or the name of the cluster got changed, the index data should still be usable. hive.exim.uri.scheme.whitelist hdfs,pfile,file,s3,s3a A comma separated list of acceptable URI schemes for import and export. hive.exim.strict.repl.tables true Parameter that determines if 'regular' (non-replication) export dumps can be imported on to tables that are the target of replication. If this parameter is set, regular imports will check if the destination table(if it exists) has a 'repl.last.id' set on it. If so, it will fail. hive.repl.task.factory org.apache.hive.hcatalog.api.repl.exim.EximReplicationTaskFactory Parameter that can be used to override which ReplicationTaskFactory will be used to instantiate ReplicationTask events. Override for third party repl plugins hive.mapper.cannot.span.multiple.partitions false hive.rework.mapredwork false should rework the mapred work or not. This is first introduced by SymlinkTextInputFormat to replace symlink files with real paths at compile time. hive.exec.concatenate.check.index true If this is set to true, Hive will throw error when doing 'alter table tbl_name [partSpec] concatenate' on a table/partition that has indexes on it. The reason the user want to set this to true is because it can help user to avoid handling all index drop, recreation, rebuild work. This is very helpful for tables with thousands of partitions. hive.io.exception.handlers A list of io exception handler class names. This is used to construct a list exception handlers to handle exceptions thrown by record readers hive.log4j.file Hive log4j configuration file. 
If the property is not set, then logging will be initialized using hive-log4j2.properties found on the classpath. If the property is set, the value must be a valid URI (java.net.URI, e.g. "file:///tmp/my-logging.xml"), which you can then extract a URL from and pass to PropertyConfigurator.configure(URL). hive.exec.log4j.file Hive log4j configuration file for execution mode(sub command). If the property is not set, then logging will be initialized using hive-exec-log4j2.properties found on the classpath. If the property is set, the value must be a valid URI (java.net.URI, e.g. "file:///tmp/my-logging.xml"), which you can then extract a URL from and pass to PropertyConfigurator.configure(URL). hive.async.log.enabled true Whether to enable Log4j2's asynchronous logging. Asynchronous logging can give significant performance improvement as logging will be handled in separate thread that uses LMAX disruptor queue for buffering log messages. Refer https://logging.apache.org/log4j/2.x/manual/async.html for benefits and drawbacks. hive.log.explain.output false Whether to log explain output for every query. When enabled, will log EXPLAIN EXTENDED output for the query at INFO log4j log level. hive.explain.user true Whether to show explain result at user level. When enabled, will log EXPLAIN output for the query at user level. hive.autogen.columnalias.prefix.label _c String used as a prefix when auto generating column alias. By default the prefix label will be appended with a column position number to form the column alias. Auto generation would happen if an aggregate function is used in a select clause without an explicit alias. hive.autogen.columnalias.prefix.includefuncname false Whether to include function name in the column alias auto generated by Hive. hive.service.metrics.class org.apache.hadoop.hive.common.metrics.metrics2.CodahaleMetrics Expects one of [org.apache.hadoop.hive.common.metrics.metrics2.codahalemetrics, org.apache.hadoop.hive.common.metrics.legacymetrics]. Hive metrics subsystem implementation class. hive.service.metrics.reporter JSON_FILE, JMX Reporter type for metric class org.apache.hadoop.hive.common.metrics.metrics2.CodahaleMetrics, comma separated list of JMX, CONSOLE, JSON_FILE, HADOOP2 hive.service.metrics.file.location /tmp/report.json For metric class org.apache.hadoop.hive.common.metrics.metrics2.CodahaleMetrics JSON_FILE reporter, the location of local JSON metrics file. This file will get overwritten at every interval. hive.service.metrics.file.frequency 5s Expects a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec), which is msec if not specified. For metric class org.apache.hadoop.hive.common.metrics.metrics2.CodahaleMetrics JSON_FILE reporter, the frequency of updating JSON metrics file. hive.service.metrics.hadoop2.frequency 30s Expects a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec), which is sec if not specified. For metric class org.apache.hadoop.hive.common.metrics.metrics2.CodahaleMetrics HADOOP2 reporter, the frequency of updating the HADOOP2 metrics system. hive.service.metrics.hadoop2.component hive Component name to provide to Hadoop2 Metrics system. Ideally 'hivemetastore' for the MetaStore and and 'hiveserver2' for HiveServer2. hive.exec.perf.logger org.apache.hadoop.hive.ql.log.PerfLogger The class responsible for logging client side performance metrics. 
Must be a subclass of org.apache.hadoop.hive.ql.log.PerfLogger hive.start.cleanup.scratchdir false To clean up the Hive scratchdir when starting the Hive Server hive.scratchdir.lock false To hold a lock file in scratchdir to prevent it from being removed by cleardanglingscratchdir hive.insert.into.multilevel.dirs false Where to insert into multilevel directories like "insert directory '/HIVEFT25686/chinna/' from table" hive.warehouse.subdir.inherit.perms true Set this to false if the table directories should be created with the permissions derived from dfs umask instead of inheriting the permission of the warehouse or database directory. hive.insert.into.external.tables true whether insert into external tables is allowed hive.exec.temporary.table.storage default Expects one of [memory, ssd, default]. Define the storage policy for temporary tables. Choices between memory, ssd and default hive.query.lifetime.hooks A comma separated list of hooks which implement QueryLifeTimeHook. These will be triggered before/after query compilation and before/after query execution, in the order specified hive.exec.driver.run.hooks A comma separated list of hooks which implement HiveDriverRunHook. Will be run at the beginning and end of Driver.run, these will be run in the order specified. hive.ddl.output.format The data format to use for DDL output. One of "text" (for human readable text) or "json" (for a json object). hive.entity.separator @ Separator used to construct names of tables and partitions. For example, dbname@tablename@partitionname hive.entity.capture.transform false Compiler to capture transform URI referred in the query hive.display.partition.cols.separately true In older Hive versions (0.10 and earlier) no distinction was made between partition columns or non-partition columns while displaying columns in describe table. From 0.12 onwards, they are displayed separately. This flag will let you get old behavior, if desired. See the test case in the patch for HIVE-6689. hive.ssl.protocol.blacklist SSLv2,SSLv3 SSL Versions to disable for all Hive Servers hive.server2.clear.dangling.scratchdir false Clear dangling scratch dir periodically in HS2 hive.server2.clear.dangling.scratchdir.interval 1800s Expects a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec), which is sec if not specified. Interval to clear dangling scratch dir periodically in HS2 hive.server2.sleep.interval.between.start.attempts 60s Expects a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec), which is msec if not specified. The time should be in between 0 msec (inclusive) and 9223372036854775807 msec (inclusive). Amount of time to sleep between HiveServer2 start attempts. Primarily meant for tests hive.server2.max.start.attempts 30 Expects value bigger than 0. Number of times HiveServer2 will attempt to start before exiting. The sleep interval between retries is determined by hive.server2.sleep.interval.between.start.attempts The default of 30 will keep trying for 30 minutes. hive.server2.support.dynamic.service.discovery false Whether HiveServer2 supports dynamic service discovery for its clients. To support this, each instance of HiveServer2 currently uses ZooKeeper to register itself, when it is brought up. JDBC/ODBC clients should use the ZooKeeper ensemble: hive.zookeeper.quorum in their connection string. hive.server2.zookeeper.namespace hiveserver2 The parent node in ZooKeeper used by HiveServer2 when supporting dynamic service discovery. 
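To illustrate the dynamic service discovery just described, a client points its connection string at the ZooKeeper ensemble rather than at a single HiveServer2 host; the host names below are illustrative and the namespace matches the hive.server2.zookeeper.namespace default:

    jdbc:hive2://zk1:2181,zk2:2181,zk3:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2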
hive.server2.zookeeper.publish.configs true Whether we should publish HiveServer2's configs to ZooKeeper. hive.server2.global.init.file.location ${env:HIVE_CONF_DIR} Either the location of a HS2 global init file or a directory containing a .hiverc file. If the property is set, the value must be a valid path to an init file or directory where the init file is located. hive.server2.transport.mode binary Expects one of [binary, http]. Transport mode of HiveServer2. hive.server2.thrift.bind.host Bind host on which to run the HiveServer2 Thrift service. hive.driver.parallel.compilation false Whether to enable parallel compilation of the queries between sessions and within the same session on HiveServer2. The default is false. hive.server2.compile.lock.timeout 0s Expects a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec), which is sec if not specified. Number of seconds a request will wait to acquire the compile lock before giving up. Setting it to 0s disables the timeout. hive.server2.parallel.ops.in.session true Whether to allow several parallel operations (such as SQL statements) in one session. hive.server2.webui.host 0.0.0.0 The host address the HiveServer2 WebUI will listen on hive.server2.webui.port 10002 The port the HiveServer2 WebUI will listen on. This can be set to 0 or a negative integer to disable the web UI hive.server2.webui.max.threads 50 The max HiveServer2 WebUI threads hive.server2.webui.use.ssl false Set this to true for using SSL encryption for HiveServer2 WebUI. hive.server2.webui.keystore.path SSL certificate keystore location for HiveServer2 WebUI. hive.server2.webui.keystore.password SSL certificate keystore password for HiveServer2 WebUI. hive.server2.webui.use.spnego false If true, the HiveServer2 WebUI will be secured with SPNEGO. Clients must authenticate with Kerberos. hive.server2.webui.spnego.keytab The path to the Kerberos Keytab file containing the HiveServer2 WebUI SPNEGO service principal. hive.server2.webui.spnego.principal HTTP/_HOST@EXAMPLE.COM The HiveServer2 WebUI SPNEGO service principal. The special string _HOST will be replaced automatically with the value of hive.server2.webui.host or the correct host name. hive.server2.webui.max.historic.queries 25 The maximum number of past queries to show in HiveServer2 WebUI. hive.server2.tez.default.queues A list of comma separated values corresponding to YARN queues of the same name. When HiveServer2 is launched in Tez mode, this configuration needs to be set for multiple Tez sessions to run in parallel on the cluster. hive.server2.tez.sessions.per.default.queue 1 A positive integer that determines the number of Tez sessions that should be launched on each of the queues specified by "hive.server2.tez.default.queues". Determines the parallelism on each queue. hive.server2.tez.initialize.default.sessions false This flag is used in HiveServer2 to enable a user to use HiveServer2 without turning on Tez for HiveServer2. The user could potentially want to run queries over Tez without the pool of sessions. hive.server2.tez.session.lifetime 162h Expects a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec), which is hour if not specified. The lifetime of the Tez sessions launched by HS2 when default sessions are enabled. Set to 0 to disable session expiration. hive.server2.tez.session.lifetime.jitter 3h Expects a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec), which is hour if not specified. 
The jitter for Tez session lifetime; prevents all the sessions from restarting at once. hive.server2.tez.sessions.init.threads 16 If hive.server2.tez.initialize.default.sessions is enabled, the maximum number of threads to use to initialize the default sessions. hive.server2.tez.sessions.restricted.configs The configuration settings that cannot be set when submitting jobs to HiveServer2. If any of these are set to values different from those in the server configuration, an exception will be thrown. hive.server2.tez.sessions.custom.queue.allowed true Expects one of [true, false, ignore]. Whether Tez session pool should allow submitting queries to custom queues. The options are true, false (error out), ignore (accept the query but ignore the queue setting). hive.server2.logging.operation.enabled true When true, HS2 will save operation logs and make them available for clients hive.server2.logging.operation.log.location ${system:java.io.tmpdir}/${system:user.name}/operation_logs Top level directory where operation logs are stored if logging functionality is enabled hive.server2.logging.operation.level EXECUTION Expects one of [none, execution, performance, verbose]. HS2 operation logging mode available to clients to be set at session level. For this to work, hive.server2.logging.operation.enabled should be set to true. NONE: Ignore any logging EXECUTION: Log completion of tasks PERFORMANCE: Execution + Performance logs VERBOSE: All logs hive.server2.metrics.enabled false Enable metrics on the HiveServer2. hive.server2.thrift.http.port 10001 Port number of HiveServer2 Thrift interface when hive.server2.transport.mode is 'http'. hive.server2.thrift.http.path cliservice Path component of URL endpoint when in HTTP mode. hive.server2.thrift.max.message.size 104857600 Maximum message size in bytes a HS2 server will accept. hive.server2.thrift.http.max.idle.time 1800s Expects a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec), which is msec if not specified. Maximum idle time for a connection on the server when in HTTP mode. hive.server2.thrift.http.worker.keepalive.time 60s Expects a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec), which is sec if not specified. Keepalive time for an idle http worker thread. When the number of workers exceeds min workers, excessive threads are killed after this time interval. hive.server2.thrift.http.request.header.size 6144 Request header size in bytes, when using HTTP transport mode. Jetty defaults used. hive.server2.thrift.http.response.header.size 6144 Response header size in bytes, when using HTTP transport mode. Jetty defaults used. hive.server2.thrift.http.cookie.auth.enabled true When true, HiveServer2 in HTTP transport mode, will use cookie based authentication mechanism. hive.server2.thrift.http.cookie.max.age 86400s Expects a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec), which is sec if not specified. Maximum age in seconds for server side cookie used by HS2 in HTTP mode. hive.server2.thrift.http.cookie.domain Domain for the HS2 generated cookies hive.server2.thrift.http.cookie.path Path for the HS2 generated cookies hive.server2.thrift.http.cookie.is.secure true Deprecated: Secure attribute of the HS2 generated cookie (this is automatically enabled for SSL enabled HiveServer2). hive.server2.thrift.http.cookie.is.httponly true HttpOnly attribute of the HS2 generated cookie. 
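Since hive.server2.logging.operation.level is client-settable at the session level (as noted above), a session that wants fuller operation logs can simply raise it; a hedged sketch:

    -- Requires hive.server2.logging.operation.enabled=true on the server side.
    SET hive.server2.logging.operation.level=VERBOSE;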
hive.server2.thrift.port 10000 Port number of HiveServer2 Thrift interface when hive.server2.transport.mode is 'binary'. hive.server2.thrift.sasl.qop auth Expects one of [auth, auth-int, auth-conf]. Sasl QOP value; set it to one of following values to enable higher levels of protection for HiveServer2 communication with clients. Setting hadoop.rpc.protection to a higher level than HiveServer2 does not make sense in most situations. HiveServer2 ignores hadoop.rpc.protection in favor of hive.server2.thrift.sasl.qop. "auth" - authentication only (default) "auth-int" - authentication plus integrity protection "auth-conf" - authentication plus integrity and confidentiality protection This is applicable only if HiveServer2 is configured to use Kerberos authentication. hive.server2.thrift.min.worker.threads 5 Minimum number of Thrift worker threads hive.server2.thrift.max.worker.threads 500 Maximum number of Thrift worker threads hive.server2.thrift.exponential.backoff.slot.length 100ms Expects a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec), which is msec if not specified. Binary exponential backoff slot time for Thrift clients during login to HiveServer2, for retries until hitting Thrift client timeout hive.server2.thrift.login.timeout 20s Expects a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec), which is sec if not specified. Timeout for Thrift clients during login to HiveServer2 hive.server2.thrift.worker.keepalive.time 60s Expects a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec), which is sec if not specified. Keepalive time (in seconds) for an idle worker thread. When the number of workers exceeds min workers, excessive threads are killed after this time interval. hive.server2.async.exec.threads 100 Number of threads in the async thread pool for HiveServer2 hive.server2.async.exec.shutdown.timeout 10s Expects a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec), which is sec if not specified. How long HiveServer2 shutdown will wait for async threads to terminate. hive.server2.async.exec.wait.queue.size 100 Size of the wait queue for async thread pool in HiveServer2. After hitting this limit, the async thread pool will reject new requests. hive.server2.async.exec.keepalive.time 10s Expects a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec), which is sec if not specified. Time that an idle HiveServer2 async thread (from the thread pool) will wait for a new task to arrive before terminating hive.server2.async.exec.async.compile false Whether to enable compiling async query asynchronously. If enabled, it is unknown if the query will have any resultset before compilation completed. hive.server2.long.polling.timeout 5000ms Expects a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec), which is msec if not specified. Time that HiveServer2 will wait before responding to asynchronous calls that use long polling hive.session.impl.classname Classname for custom implementation of hive session hive.session.impl.withugi.classname Classname for custom implementation of hive session with UGI hive.server2.authentication NONE Expects one of [nosasl, none, ldap, kerberos, pam, custom]. Client authentication types. 
NONE: no authentication check LDAP: LDAP/AD based authentication KERBEROS: Kerberos/GSSAPI authentication CUSTOM: Custom authentication provider (Use with property hive.server2.custom.authentication.class) PAM: Pluggable authentication module NOSASL: Raw transport hive.server2.allow.user.substitution true Allow alternate user to be specified as part of HiveServer2 open connection request. hive.server2.authentication.kerberos.keytab Kerberos keytab file for server principal hive.server2.authentication.kerberos.principal Kerberos server principal hive.server2.authentication.spnego.keytab keytab file for SPNego principal, optional, typical value would look like /etc/security/keytabs/spnego.service.keytab. This keytab would be used by HiveServer2 when Kerberos security is enabled and HTTP transport mode is used. This needs to be set only if SPNEGO is to be used in authentication. SPNego authentication would be honored only if valid hive.server2.authentication.spnego.principal and hive.server2.authentication.spnego.keytab are specified. hive.server2.authentication.spnego.principal SPNego service principal, optional, typical value would look like HTTP/_HOST@EXAMPLE.COM. SPNego service principal would be used by HiveServer2 when Kerberos security is enabled and HTTP transport mode is used. This needs to be set only if SPNEGO is to be used in authentication. hive.server2.authentication.ldap.url LDAP connection URL(s), this value could contain URLs to multiple LDAP server instances for HA, each LDAP URL is separated by a SPACE character. URLs are used in the order specified until a connection is successful. hive.server2.authentication.ldap.baseDN LDAP base DN hive.server2.authentication.ldap.Domain hive.server2.authentication.ldap.groupDNPattern COLON-separated list of patterns to use to find DNs for group entities in this directory. Use %s where the actual group name is to be substituted for. For example: CN=%s,CN=Groups,DC=subdomain,DC=domain,DC=com. hive.server2.authentication.ldap.groupFilter COMMA-separated list of LDAP Group names (short name not full DNs). For example: HiveAdmins,HadoopAdmins,Administrators hive.server2.authentication.ldap.userDNPattern COLON-separated list of patterns to use to find DNs for users in this directory. Use %s where the actual user name is to be substituted for. For example: CN=%s,CN=Users,DC=subdomain,DC=domain,DC=com. hive.server2.authentication.ldap.userFilter COMMA-separated list of LDAP usernames (just short names, not full DNs). For example: hiveuser,impalauser,hiveadmin,hadoopadmin hive.server2.authentication.ldap.guidKey uid LDAP attribute name whose values are unique in this LDAP server. For example: uid or CN. hive.server2.authentication.ldap.groupMembershipKey member LDAP attribute name on the group object that contains the list of distinguished names for the user, group, and contact objects that are members of the group. For example: member, uniqueMember or memberUid hive.server2.authentication.ldap.userMembershipKey LDAP attribute name on the user object that contains groups of which the user is a direct member, except for the primary group, which is represented by the primaryGroupId. For example: memberOf hive.server2.authentication.ldap.groupClassKey groupOfNames LDAP attribute name on the group entry that is to be used in LDAP group searches. For example: group, groupOfNames or groupOfUniqueNames. hive.server2.authentication.ldap.customLDAPQuery A full LDAP query that LDAP Atn provider uses to execute against LDAP Server. 
If this query returns a null resultset, the LDAP Provider fails the Authentication request, succeeds if the user is part of the resultset.For example: (&(objectClass=group)(objectClass=top)(instanceType=4)(cn=Domain*)) (&(objectClass=person)(|(sAMAccountName=admin)(|(memberOf=CN=Domain Admins,CN=Users,DC=domain,DC=com)(memberOf=CN=Administrators,CN=Builtin,DC=domain,DC=com)))) hive.server2.custom.authentication.class Custom authentication class. Used when property 'hive.server2.authentication' is set to 'CUSTOM'. Provided class must be a proper implementation of the interface org.apache.hive.service.auth.PasswdAuthenticationProvider. HiveServer2 will call its Authenticate(user, passed) method to authenticate requests. The implementation may optionally implement Hadoop's org.apache.hadoop.conf.Configurable class to grab Hive's Configuration object. hive.server2.authentication.pam.services List of the underlying pam services that should be used when auth type is PAM A file with the same name must exist in /etc/pam.d hive.server2.enable.doAs true Setting this property to true will have HiveServer2 execute Hive operations as the user making the calls to it. hive.server2.table.type.mapping CLASSIC Expects one of [classic, hive]. This setting reflects how HiveServer2 will report the table types for JDBC and other client implementations that retrieve the available tables and supported table types HIVE : Exposes Hive's native table types like MANAGED_TABLE, EXTERNAL_TABLE, VIRTUAL_VIEW CLASSIC : More generic types like TABLE and VIEW hive.server2.session.hook hive.server2.use.SSL false Set this to true for using SSL encryption in HiveServer2. hive.server2.keystore.path SSL certificate keystore location. hive.server2.keystore.password SSL certificate keystore password. hive.server2.map.fair.scheduler.queue true If the YARN fair scheduler is configured and HiveServer2 is running in non-impersonation mode, this setting determines the user for fair scheduler queue mapping. If set to true (default), the logged-in user determines the fair scheduler queue for submitted jobs, so that map reduce resource usage can be tracked by user. If set to false, all Hive jobs go to the 'hive' user's queue. hive.server2.builtin.udf.whitelist Comma separated list of builtin udf names allowed in queries. An empty whitelist allows all builtin udfs to be executed. The udf black list takes precedence over udf white list hive.server2.builtin.udf.blacklist Comma separated list of udfs names. These udfs will not be allowed in queries. The udf black list takes precedence over udf white list hive.allow.udf.load.on.demand false Whether enable loading UDFs from metastore on demand; this is mostly relevant for HS2 and was the default behavior before Hive 1.2. Off by default. hive.server2.session.check.interval 6h Expects a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec), which is msec if not specified. The time should be bigger than or equal to 3000 msec. The check interval for session/operation timeout, which can be disabled by setting to zero or negative value. hive.server2.close.session.on.disconnect true Session will be closed when connection is closed. Set this to false to have session outlive its parent connection. hive.server2.idle.session.timeout 7d Expects a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec), which is msec if not specified. Session will be closed when it's not accessed for this duration, which can be disabled by setting to zero or negative value. 
hive.server2.idle.operation.timeout 5d Expects a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec), which is msec if not specified. Operation will be closed when it's not accessed for this duration of time, which can be disabled by setting to zero value. With positive value, it's checked for operations in terminal state only (FINISHED, CANCELED, CLOSED, ERROR). With negative value, it's checked for all of the operations regardless of state. hive.server2.idle.session.check.operation true Session will be considered to be idle only if there is no activity, and there is no pending operation. This setting takes effect only if session idle timeout (hive.server2.idle.session.timeout) and checking (hive.server2.session.check.interval) are enabled. hive.server2.thrift.client.retry.limit 1 Number of retries upon failure of Thrift HiveServer2 calls hive.server2.thrift.client.connect.retry.limit 1 Number of retries while opening a connection to HiveServe2 hive.server2.thrift.client.retry.delay.seconds 1s Expects a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec), which is sec if not specified. Number of seconds for the HiveServer2 thrift client to wait between consecutive connection attempts. Also specifies the time to wait between retrying thrift calls upon failures hive.server2.thrift.client.user anonymous Username to use against thrift client hive.server2.thrift.client.password anonymous Password to use against thrift client hive.server2.thrift.resultset.serialize.in.tasks false Whether we should serialize the Thrift structures used in JDBC ResultSet RPC in task nodes. We use SequenceFile and ThriftJDBCBinarySerDe to read and write the final results if this is true. hive.server2.thrift.resultset.max.fetch.size 10000 Max number of rows sent in one Fetch RPC call by the server to the client. hive.server2.thrift.resultset.default.fetch.size 1000 The number of rows sent in one Fetch RPC call by the server to the client, if not specified by the client. hive.server2.xsrf.filter.enabled false If enabled, HiveServer2 will block any requests made to it over http if an X-XSRF-HEADER header is not present hive.security.command.whitelist set,reset,dfs,add,list,delete,reload,compile Comma separated list of non-SQL Hive commands users are authorized to execute hive.server2.job.credential.provider.path If set, this configuration property should provide a comma-separated list of URLs that indicates the type and location of providers to be used by hadoop credential provider API. It provides HiveServer2 the ability to provide job-specific credential providers for jobs run using MR and Spark execution engines. This functionality has not been tested against Tez. hive.mv.files.thread 15 Expects a byte size value with unit (blank for bytes, kb, mb, gb, tb, pb). The size should be in between 0Pb (inclusive) and 1Kb (inclusive). Number of threads used to move files in move task. Set it to 0 to disable multi-threaded file moves. This parameter is also used by MSCK to check tables. hive.load.dynamic.partitions.thread 15 Expects a byte size value with unit (blank for bytes, kb, mb, gb, tb, pb). The size should be in between 1 bytes (inclusive) and 1Kb (inclusive). Number of threads used to load dynamic partitions. hive.multi.insert.move.tasks.share.dependencies false If this is set all move tasks for tables/partitions (not directories) at the end of a multi-insert query will only begin once the dependencies for all these move tasks have been met. 
Advantages: If concurrency is enabled, the locks will only be released once the query has finished, so with this config enabled, the time when the table/partition is generated will be much closer to when the lock on it is released. Disadvantages: If concurrency is not enabled, with this disabled, the tables/partitions which are produced by this query and finish earlier will be available for querying much earlier. Since the locks are only released once the query finishes, this does not apply if concurrency is enabled. hive.exec.infer.bucket.sort false If this is set, when writing partitions, the metadata will include the bucketing/sorting properties with which the data was written if any (this will not overwrite the metadata inherited from the table if the table is bucketed/sorted) hive.exec.infer.bucket.sort.num.buckets.power.two false If this is set, when setting the number of reducers for the map reduce task which writes the final output files, it will choose a number which is a power of two, unless the user specifies the number of reducers to use using mapred.reduce.tasks. The number of reducers may be set to a power of two, only to be followed by a merge task meaning preventing anything from being inferred. With hive.exec.infer.bucket.sort set to true: Advantages: If this is not set, the number of buckets for partitions will seem arbitrary, which means that the number of mappers used for optimized joins, for example, will be very low. With this set, since the number of buckets used for any partition is a power of two, the number of mappers used for optimized joins will be the least number of buckets used by any partition being joined. Disadvantages: This may mean a much larger or much smaller number of reducers being used in the final map reduce job, e.g. if a job was originally going to take 257 reducers, it will now take 512 reducers, similarly if the max number of reducers is 511, and a job was going to use this many, it will now use 256 reducers. hive.optimize.listbucketing false Enable list bucketing optimizer. Default value is false so that we disable it by default. hive.server.read.socket.timeout 10s Expects a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec), which is sec if not specified. Timeout for the HiveServer to close the connection if no response from the client. By default, 10 seconds. hive.server.tcp.keepalive true Whether to enable TCP keepalive for the Hive Server. Keepalive will prevent accumulation of half-open connections. hive.decode.partition.name false Whether to show the unquoted partition names in query results. hive.execution.engine mr Expects one of [mr, tez, spark]. Chooses execution engine. Options are: mr (Map reduce, default), tez, spark. While MR remains the default engine for historical reasons, it is itself a historical engine and is deprecated in Hive 2 line. It may be removed without further warning. hive.execution.mode container Expects one of [container, llap]. Chooses whether query fragments will run in container or in llap hive.jar.directory This is the location hive in tez mode will look for to find a site wide installed hive instance. hive.user.install.directory /user/ If hive (in tez mode only) cannot find a usable hive jar in "hive.jar.directory", it will upload the hive jar to "hive.user.install.directory/user.name" and use it to run queries. hive.vectorized.execution.enabled false This flag should be set to true to enable vectorized mode of query execution. The default value is false. 
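A hedged sketch of turning on the vectorized execution path described above; the ORC table and column names are illustrative:

    SET hive.vectorized.execution.enabled=true;
    SET hive.vectorized.execution.reduce.enabled=true;
    -- Vectorization mainly benefits scans of vectorizable formats such as ORC.
    SELECT region, COUNT(*) FROM sales_orc GROUP BY region;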
hive.vectorized.execution.reduce.enabled true This flag should be set to true to enable vectorized mode of the reduce-side of query execution. The default value is true. hive.vectorized.execution.reduce.groupby.enabled true This flag should be set to true to enable vectorized mode of the reduce-side GROUP BY query execution. The default value is true. hive.vectorized.execution.mapjoin.native.enabled true This flag should be set to true to enable native (i.e. non-pass through) vectorization of queries using MapJoin. The default value is true. hive.vectorized.execution.mapjoin.native.multikey.only.enabled false This flag should be set to true to restrict use of native vector map join hash tables to the MultiKey in queries using MapJoin. The default value is false. hive.vectorized.execution.mapjoin.minmax.enabled false This flag should be set to true to enable vector map join hash tables to use min / max filtering for integer join queries using MapJoin. The default value is false. hive.vectorized.execution.mapjoin.overflow.repeated.threshold -1 The number of small table rows for a match in vector map join hash tables where we use the repeated field optimization in overflow vectorized row batch for join queries using MapJoin. A value of -1 means do use the join result optimization. Otherwise, threshold value can be 0 to maximum integer. hive.vectorized.execution.mapjoin.native.fast.hashtable.enabled false This flag should be set to true to enable use of native fast vector map join hash tables in queries using MapJoin. The default value is false. hive.vectorized.groupby.checkinterval 100000 Number of entries added to the group by aggregation hash before a recomputation of average entry size is performed. hive.vectorized.groupby.maxentries 1000000 Max number of entries in the vector group by aggregation hashtables. Exceeding this will trigger a flush irrespective of memory pressure condition. hive.vectorized.groupby.flush.percent 0.1 Percent of entries in the group by aggregation hash flushed when the memory threshold is exceeded. hive.vectorized.execution.reducesink.new.enabled true This flag should be set to true to enable the new vectorization of queries using ReduceSink. The default value is true. hive.vectorized.use.vectorized.input.format true This flag should be set to true to enable vectorizing with vectorized input file format capable SerDe. The default value is true. hive.vectorized.use.vector.serde.deserialize true This flag should be set to true to enable vectorizing rows using vector deserialize. The default value is true. hive.vectorized.use.row.serde.deserialize false This flag should be set to true to enable vectorizing using row deserialize. The default value is false. hive.vectorized.adaptor.usage.mode all Expects one of [none, chosen, all]. Specifies the extent to which the VectorUDFAdaptor will be used for UDFs that do not have a corresponding vectorized class. 0. none : disable any usage of VectorUDFAdaptor 1. chosen : use VectorUDFAdaptor for a small set of UDFs that were chosen for good performance 2. all : use VectorUDFAdaptor for all UDFs hive.typecheck.on.insert true This property has been extended to control whether to check, convert, and normalize partition value to conform to its column type in partition operations including but not limited to insert, such as alter, describe etc. hive.hadoop.classpath For Windows OS, we need to pass HIVE_HADOOP_CLASSPATH Java parameter while starting HiveServer2 using "-hiveconf hive.hadoop.classpath=%HIVE_LIB%". 
hive.rpc.query.plan false Whether to send the query plan via local resource or RPC hive.compute.splits.in.am true Whether to generate the splits locally or in the AM (tez only) hive.tez.input.generate.consistent.splits true Whether to generate consistent split locations when generating splits in the AM hive.prewarm.enabled false Enables container prewarm for Tez/Spark (Hadoop 2 only) hive.prewarm.numcontainers 10 Controls the number of containers to prewarm for Tez/Spark (Hadoop 2 only) hive.stageid.rearrange none Expects one of [none, idonly, traverse, execution]. hive.explain.dependency.append.tasktype false hive.counters.group.name HIVE The name of counter group for internal Hive variables (CREATED_FILE, FATAL_ERROR, etc.) hive.support.quoted.identifiers column Expects one of [none, column]. Whether to use quoted identifier. 'none' or 'column' can be used. none: default(past) behavior. Implies only alphaNumeric and underscore are valid characters in identifiers. column: implies column names can contain any character. hive.support.special.characters.tablename true This flag should be set to true to enable support for special characters in table names. When it is set to false, only [a-zA-Z_0-9]+ are supported. The only supported special character right now is '/'. This flag applies only to quoted table names. The default value is true. hive.users.in.admin.role Comma separated list of users who are in admin role for bootstrapping. More users can be added in ADMIN role later. hive.compat 0.12 Enable (configurable) deprecated behaviors by setting desired level of backward compatibility. Setting to 0.12: Maintains division behavior: int / int = double hive.convert.join.bucket.mapjoin.tez false Whether joins can be automatically converted to bucket map joins in hive when tez is used as the execution engine. hive.exec.check.crossproducts true Check if a plan contains a Cross Product. If there is one, output a warning to the Session's console. hive.localize.resource.wait.interval 5000ms Expects a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec), which is msec if not specified. Time to wait for another thread to localize the same resource for hive-tez. hive.localize.resource.num.wait.attempts 5 The number of attempts waiting for localizing a resource in hive-tez. hive.tez.auto.reducer.parallelism false Turn on Tez' auto reducer parallelism feature. When enabled, Hive will still estimate data sizes and set parallelism estimates. Tez will sample source vertices' output sizes and adjust the estimates at runtime as necessary. hive.tez.max.partition.factor 2.0 When auto reducer parallelism is enabled this factor will be used to over-partition data in shuffle edges. hive.tez.min.partition.factor 0.25 When auto reducer parallelism is enabled this factor will be used to put a lower limit to the number of reducers that tez specifies. hive.tez.bucket.pruning false When pruning is enabled, filters on bucket columns will be processed by filtering the splits against a bitset of included buckets. This needs predicates produced by hive.optimize.ppd and hive.optimize.index.filters. hive.tez.bucket.pruning.compat true When pruning is enabled, handle possibly broken inserts due to negative hashcodes. This occasionally doubles the data scan cost, but is default enabled for safety hive.tez.dynamic.partition.pruning true When dynamic pruning is enabled, joins on partition keys will be processed by sending events from the processing vertices to the Tez application master. 
These events will be used to prune unnecessary partitions. hive.tez.dynamic.partition.pruning.max.event.size 1048576 Maximum size of events sent by processors in dynamic pruning. If this size is crossed no pruning will take place. hive.tez.dynamic.partition.pruning.max.data.size 104857600 Maximum total data size of events in dynamic pruning. hive.tez.dynamic.semijoin.reduction true When dynamic semijoin is enabled, shuffle joins will perform a leaky semijoin before shuffle. This requires hive.tez.dynamic.partition.pruning to be enabled. hive.tez.min.bloom.filter.entries 1000000 The Bloom filter should be at least this size to be effective hive.tez.max.bloom.filter.entries 100000000 The Bloom filter should be at most this size to be effective hive.tez.bloom.filter.factor 2.0 The Bloom filter should be a multiple of this factor with the nDV hive.tez.bigtable.minsize.semijoin.reduction 1000000 The big table for runtime filtering should be at least this size hive.tez.dynamic.semijoin.reduction.threshold 0.5 Only perform semijoin optimization if the estimated benefit is at or above this fraction of the target table hive.tez.smb.number.waves 0.5 The number of waves in which to run the SMB join. Accounts for the cluster being occupied. Ideally it should be 1 wave. hive.tez.exec.print.summary false Display breakdown of execution steps, for every query executed by the shell. hive.tez.exec.inplace.progress true Updates tez job execution progress in-place in the terminal when hive-cli is used. hive.server2.in.place.progress true Allows hive server 2 to send progress bar update information. This is currently available only if the execution engine is tez. hive.spark.exec.inplace.progress true Updates spark job execution progress in-place in the terminal. hive.tez.container.max.java.heap.fraction 0.8 This is to override the tez setting with the same name hive.tez.task.scale.memory.reserve-fraction.min 0.3 This is to override the tez setting tez.task.scale.memory.reserve-fraction hive.tez.task.scale.memory.reserve.fraction.max 0.5 The maximum fraction of JVM memory which Tez will reserve for the processor hive.tez.task.scale.memory.reserve.fraction -1.0 The customized fraction of JVM memory which Tez will reserve for the processor hive.llap.io.enabled Whether the LLAP IO layer is enabled. hive.llap.io.nonvector.wrapper.enabled true Whether the LLAP IO layer is enabled for non-vectorized queries that read inputs that can be vectorized hive.llap.io.memory.mode cache Expects one of [cache, none]. LLAP IO memory usage; 'cache' (the default) uses data and metadata cache with a custom off-heap allocator, 'none' doesn't use either (this mode may result in significant performance degradation) hive.llap.io.allocator.alloc.min 256Kb Expects a byte size value with unit (blank for bytes, kb, mb, gb, tb, pb). Minimum allocation possible from LLAP buddy allocator. Allocations below that are padded to minimum allocation. For ORC, should generally be the same as the expected compression buffer size, or next lowest power of 2. Must be a power of 2. hive.llap.io.allocator.alloc.max 16Mb Expects a byte size value with unit (blank for bytes, kb, mb, gb, tb, pb). Maximum allocation possible from LLAP buddy allocator. For ORC, should be as large as the largest expected ORC compression buffer size. Must be a power of 2. hive.llap.io.metadata.fraction 0.1 Temporary setting for on-heap metadata cache fraction of xmx, set to avoid potential heap problems on very large datasets when on-heap metadata cache takes over everything.
-1 manages metadata and data together (which is more flexible). This setting will be removed (in effect become -1) once ORC metadata cache is moved off-heap. hive.llap.io.allocator.arena.count 8 Arena count for LLAP low-level cache; cache will be allocated in the steps of (size/arena_count) bytes. This size must be <= 1Gb and >= max allocation; if it is not the case, an adjusted size will be used. Using powers of 2 is recommended. hive.llap.io.memory.size 1Gb Expects a byte size value with unit (blank for bytes, kb, mb, gb, tb, pb). Maximum size for IO allocator or ORC low-level cache. hive.llap.io.allocator.direct true Whether ORC low-level cache should use direct allocation. hive.llap.io.allocator.mmap false Whether ORC low-level cache should use memory mapped allocation (direct I/O). This is recommended to be used alongside NVDIMM (DAX) or NVMe flash storage. hive.llap.io.allocator.mmap.path /tmp Expects a writable directory on the local filesystem. The directory location for mapping NVDIMM/NVMe flash storage into the ORC low-level cache. hive.llap.io.use.lrfu true Whether ORC low-level cache should use LRFU cache policy instead of default (FIFO). hive.llap.io.lrfu.lambda 0.01 Lambda for ORC low-level cache LRFU cache policy. Must be in [0, 1]. 0 makes LRFU behave like LFU, 1 makes it behave like LRU, values in between balance accordingly. hive.llap.cache.allow.synthetic.fileid false Whether LLAP cache should use synthetic file ID if real one is not available. Systems like HDFS, Isilon, etc. provide a unique file/inode ID. On other FSes (e.g. local FS), the cache would not work by default because LLAP is unable to uniquely track the files; enabling this setting allows LLAP to generate file ID from the path, size and modification time, which is almost certain to identify file uniquely. However, if you use a FS without file IDs and rewrite files a lot (or are paranoid), you might want to avoid this setting. hive.llap.orc.gap.cache true Whether LLAP cache for ORC should remember gaps in ORC compression buffer read estimates, to avoid re-reading the data that was read once and discarded because it is unneeded. This is only necessary for ORC files written before HIVE-9660. hive.llap.io.use.fileid.path true Whether LLAP should use fileId (inode)-based path to ensure better consistency for the cases of file overwrites. This is supported on HDFS. hive.llap.io.encode.enabled true Whether LLAP should try to re-encode and cache data for non-ORC formats. This is used on LLAP Server side to determine if the infrastructure for that is initialized. hive.llap.io.encode.formats org.apache.hadoop.mapred.TextInputFormat, The table input formats for which LLAP IO should re-encode and cache data. Comma-separated list. hive.llap.io.encode.alloc.size 256Kb Expects a byte size value with unit (blank for bytes, kb, mb, gb, tb, pb). Allocation size for the buffers used to cache encoded data from non-ORC files. Must be a power of two between hive.llap.io.allocator.alloc.min and hive.llap.io.allocator.alloc.max. hive.llap.io.encode.vector.serde.enabled true Whether LLAP should use vectorized SerDe reader to read text data when re-encoding. hive.llap.io.encode.vector.serde.async.enabled true Whether LLAP should use async mode in vectorized SerDe reader to read text data. hive.llap.io.encode.slice.row.count 100000 Row count to use to separate cache slices when reading encoded data from row-based inputs into LLAP cache, if this feature is enabled.
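To make the power-of-two constraints above concrete, a hive-site.xml sketch sizing the LLAP cache could read as follows; the values are simply the documented defaults quoted above, repeated only for illustration:

<property>
  <name>hive.llap.io.allocator.alloc.min</name>
  <value>256Kb</value>
  <!-- must be a power of 2; match the expected ORC compression buffer size -->
</property>
<property>
  <name>hive.llap.io.allocator.alloc.max</name>
  <value>16Mb</value>
  <!-- must be a power of 2; at least the largest expected ORC compression buffer -->
</property>
<property>
  <name>hive.llap.io.memory.size</name>
  <value>1Gb</value>
  <!-- maximum size of the IO allocator / ORC low-level cache -->
</property>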
hive.llap.io.encode.slice.lrr true Whether to separate cache slices when reading encoded data from text inputs via an MR LineRecordReader into LLAP cache, if this feature is enabled. Safety flag. hive.llap.io.orc.time.counters true Whether to enable time counters for LLAP IO layer (time spent in HDFS, etc.) hive.llap.auto.allow.uber false Whether or not to allow the planner to run vertices in the AM. hive.llap.auto.enforce.tree true Enforce that all parents are in llap, before considering vertex hive.llap.auto.enforce.vectorized true Enforce that inputs are vectorized, before considering vertex hive.llap.auto.enforce.stats true Enforce that col stats are available, before considering vertex hive.llap.auto.max.input.size 10737418240 Check input size, before considering vertex (-1 disables check) hive.llap.auto.max.output.size 1073741824 Check output size, before considering vertex (-1 disables check) hive.llap.skip.compile.udf.check false Whether to skip the compile-time check for non-built-in UDFs when deciding whether to execute tasks in LLAP. Skipping the check allows executing UDFs from pre-localized jars in LLAP; if the jars are not pre-localized, the UDFs will simply fail to load. hive.llap.allow.permanent.fns true Whether LLAP decider should allow permanent UDFs. hive.llap.execution.mode none Expects one of [auto, none, all, map, only]. Chooses whether query fragments will run in container or in llap hive.llap.object.cache.enabled true Cache objects (plans, hashtables, etc) in llap hive.llap.io.decoding.metrics.percentiles.intervals 30 Comma-delimited set of integers denoting the desired rollover intervals (in seconds) for percentile latency metrics on the LLAP daemon IO decoding time. hive.llap.queue.metrics.percentiles.intervals hive.llap.io.threadpool.size 10 Specify the number of threads to use for low-level IO thread pool. hive.llap.daemon.service.principal The name of the LLAP daemon's service principal. hive.llap.daemon.keytab.file The path to the Kerberos Keytab file containing the LLAP daemon's service principal. hive.llap.zk.sm.principal The name of the principal to use to talk to ZooKeeper for ZooKeeper SecretManager. hive.llap.zk.sm.keytab.file The path to the Kerberos Keytab file containing the principal to use to talk to ZooKeeper for ZooKeeper SecretManager. hive.llap.webui.spnego.keytab The path to the Kerberos Keytab file containing the LLAP WebUI SPNEGO principal. A typical value would look like /etc/security/keytabs/spnego.service.keytab. hive.llap.webui.spnego.principal The LLAP WebUI SPNEGO service principal. Configured similarly to hive.server2.webui.spnego.principal hive.llap.task.principal The name of the principal to use to run tasks. By default, the clients are required to provide tokens to access HDFS/etc. hive.llap.task.keytab.file The path to the Kerberos Keytab file containing the principal to use to run tasks. By default, the clients are required to provide tokens to access HDFS/etc. hive.llap.zk.sm.connectionString ZooKeeper connection string for ZooKeeper SecretManager. hive.llap.zk.registry.user In the LLAP ZooKeeper-based registry, specifies the username in the Zookeeper path. This should be the hive user or whichever user is running the LLAP daemon. hive.llap.zk.registry.namespace In the LLAP ZooKeeper-based registry, overrides the ZK path namespace. Note that using this makes the path management (e.g. setting correct ACLs) your responsibility. hive.llap.daemon.acl * The ACL for LLAP daemon. hive.llap.daemon.acl.blocked The deny ACL for LLAP daemon.
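The two daemon ACL properties just listed take the usual Hadoop ACL syntax; as an illustrative (not prescriptive) hardening sketch, restricting the daemon to a single service user might look like this, where the user names are hypothetical:

<property>
  <name>hive.llap.daemon.acl</name>
  <value>hive</value>
  <!-- illustrative: the default '*' allows everyone -->
</property>
<property>
  <name>hive.llap.daemon.acl.blocked</name>
  <value>guest</value>
  <!-- illustrative deny entry; empty by default -->
</property>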
hive.llap.management.acl * The ACL for LLAP daemon management. hive.llap.management.acl.blocked The deny ACL for LLAP daemon management. hive.llap.remote.token.requires.signing true Expects one of [false, except_llap_owner, true]. Whether the token returned from LLAP management API should require fragment signing. True by default; can be disabled to allow CLI to get tokens from LLAP in a secure cluster by setting it to false or 'except_llap_owner' (the latter returns such tokens to everyone except the user LLAP cluster is authenticating under). hive.llap.daemon.delegation.token.lifetime 14d Expects a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec), which is sec if not specified. LLAP delegation token lifetime, in seconds if specified without a unit. hive.llap.management.rpc.port 15004 RPC port for LLAP daemon management service. hive.llap.auto.auth false Whether or not to set Hadoop configs to enable auth in LLAP web app. hive.llap.daemon.rpc.num.handlers 5 Number of RPC handlers for LLAP daemon. hive.llap.daemon.work.dirs Working directories for the daemon. This should not be set if running as a YARN application via Slider. It must be set when not running via Slider on YARN. If the value is set when running as a Slider YARN application, the specified value will be used. hive.llap.daemon.yarn.shuffle.port 15551 YARN shuffle port for LLAP-daemon-hosted shuffle. hive.llap.daemon.yarn.container.mb -1 LLAP server YARN container size in MB. Used in LlapServiceDriver and package.py hive.llap.daemon.queue.name Queue name within which the llap slider application will run. Used in LlapServiceDriver and package.py hive.llap.daemon.container.id ContainerId of a running LlapDaemon. Used to publish to the registry hive.llap.daemon.nm.address NM Address host:rpcPort for the NodeManager on which the instance of the daemon is running. Published to the llap registry. Should never be set by users hive.llap.daemon.shuffle.dir.watcher.enabled false TODO doc hive.llap.daemon.am.liveness.heartbeat.interval.ms 10000ms Expects a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec), which is msec if not specified. Tez AM-LLAP heartbeat interval (milliseconds). This needs to be below the task timeout interval, but otherwise as high as possible to avoid unnecessary traffic. hive.llap.am.liveness.connection.timeout.ms 10000ms Expects a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec), which is msec if not specified. Amount of time to wait on connection failures to the AM from an LLAP daemon before considering the AM to be dead. hive.llap.am.use.fqdn false Whether to use FQDN of the AM machine when submitting work to LLAP. hive.llap.am.liveness.connection.sleep.between.retries.ms 2000ms Expects a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec), which is msec if not specified. Sleep duration while waiting to retry connection failures to the AM from the daemon for the general keep-alive thread (milliseconds). hive.llap.task.scheduler.timeout.seconds 60s Expects a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec), which is sec if not specified. Amount of time to wait before failing the query when there are no llap daemons running (alive) in the cluster. hive.llap.daemon.num.executors 4 Number of executors to use in LLAP daemon; essentially, the number of tasks that can be executed in parallel.
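Putting the daemon placement settings together, a hypothetical LLAP sizing fragment might look like the following sketch; the queue name and container size are made-up values for illustration, while the executor count is the documented default:

<property>
  <name>hive.llap.daemon.num.executors</name>
  <value>4</value>
  <!-- documented default: tasks that can run in parallel per daemon -->
</property>
<property>
  <name>hive.llap.daemon.queue.name</name>
  <value>llap</value>
  <!-- hypothetical queue name; consumed by LlapServiceDriver and package.py -->
</property>
<property>
  <name>hive.llap.daemon.yarn.container.mb</name>
  <value>8192</value>
  <!-- hypothetical size; the default is -1 -->
</property>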
hive.llap.daemon.am-reporter.max.threads 4 Maximum number of threads to be used for AM reporter. If this is lower than number of executors in llap daemon, it would be set to number of executors at runtime. hive.llap.daemon.rpc.port 0 The LLAP daemon RPC port. hive.llap.daemon.memory.per.instance.mb 4096 The total amount of memory to use for the executors inside LLAP (in megabytes). hive.llap.daemon.xmx.headroom 5% The total amount of heap memory set aside by LLAP and not used by the executors. Can be specified as size (e.g. '512Mb'), or percentage (e.g. '5%'). Note that the latter is derived from the total daemon XMX, which can be different from the total executor memory if the cache is on-heap; although that's not the default configuration. hive.llap.daemon.vcpus.per.instance 4 The total number of vcpus to use for the executors inside LLAP. hive.llap.daemon.num.file.cleaner.threads 1 Number of file cleaner threads in LLAP. hive.llap.file.cleanup.delay.seconds 300s Expects a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec), which is sec if not specified. How long to delay before cleaning up query files in LLAP (in seconds, for debugging). hive.llap.daemon.service.hosts Explicitly specified hosts to use for LLAP scheduling. Useful for testing. By default, YARN registry is used. hive.llap.daemon.service.refresh.interval.sec 60s Expects a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec), which is sec if not specified. LLAP YARN registry service list refresh delay, in seconds. hive.llap.daemon.communicator.num.threads 10 Number of threads to use in LLAP task communicator in Tez AM. hive.llap.daemon.download.permanent.fns false Whether LLAP daemon should localize the resources for permanent UDFs. hive.llap.task.scheduler.node.reenable.min.timeout.ms 200ms Expects a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec), which is msec if not specified. Minimum time after which a previously disabled node will be re-enabled for scheduling, in milliseconds. This may be modified by an exponential back-off if failures persist. hive.llap.task.scheduler.node.reenable.max.timeout.ms 10000ms Expects a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec), which is msec if not specified. Maximum time after which a previously disabled node will be re-enabled for scheduling, in milliseconds. This may be modified by an exponential back-off if failures persist. hive.llap.task.scheduler.node.disable.backoff.factor 1.5 Backoff factor on successive blacklists of a node due to some failures. Blacklist times start at the min timeout and go up to the max timeout based on this backoff factor. hive.llap.task.scheduler.num.schedulable.tasks.per.node 0 The number of tasks the AM TaskScheduler will try allocating per node. 0 indicates that this should be picked up from the Registry. -1 indicates unlimited capacity; positive values indicate a specific bound. hive.llap.task.scheduler.locality.delay 0ms Expects a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec), which is msec if not specified. The time should be in between -1 msec (inclusive) and 9223372036854775807 msec (inclusive). Amount of time to wait before allocating a request which contains location information, to a location other than the ones requested. Set to -1 for an infinite delay, 0 for no delay.
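The daemon memory settings above interact: the executor memory plus the xmx headroom (and the cache, if it is on-heap) must fit within the daemon's Xmx. A sketch restating the documented defaults for one daemon:

<property>
  <name>hive.llap.daemon.memory.per.instance.mb</name>
  <value>4096</value>
  <!-- documented default: total executor memory inside LLAP, in MB -->
</property>
<property>
  <name>hive.llap.daemon.xmx.headroom</name>
  <value>5%</value>
  <!-- documented default: heap set aside, not used by executors; a size such as '512Mb' also works -->
</property>
<property>
  <name>hive.llap.daemon.vcpus.per.instance</name>
  <value>4</value>
  <!-- documented default -->
</property>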
hive.llap.daemon.task.preemption.metrics.intervals 30,60,300 Comma-delimited set of integers denoting the desired rollover intervals (in seconds) for percentile latency metrics. Used by LLAP daemon task scheduler metrics for time taken to kill task (due to pre-emption) and useful time wasted by the task that is about to be preempted. hive.llap.daemon.task.scheduler.wait.queue.size 10 LLAP scheduler maximum queue size. hive.llap.daemon.wait.queue.comparator.class.name org.apache.hadoop.hive.llap.daemon.impl.comparator.ShortestJobFirstComparator The priority comparator to use for LLAP scheduler priority queue. The built-in options are org.apache.hadoop.hive.llap.daemon.impl.comparator.ShortestJobFirstComparator and .....FirstInFirstOutComparator hive.llap.daemon.task.scheduler.enable.preemption true Whether non-finishable running tasks (e.g. a reducer waiting for inputs) should be preempted by finishable tasks inside LLAP scheduler. hive.llap.task.communicator.connection.timeout.ms 16000ms Expects a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec), which is msec if not specified. Connection timeout (in milliseconds) before a failure to an LLAP daemon from Tez AM. hive.llap.task.communicator.listener.thread-count 30 The number of task communicator listener threads. hive.llap.task.communicator.connection.sleep.between.retries.ms 2000ms Expects a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec), which is msec if not specified. Sleep duration (in milliseconds) to wait before retrying on error when obtaining a connection to LLAP daemon from Tez AM. hive.llap.daemon.web.port 15002 LLAP daemon web UI port. hive.llap.daemon.web.ssl false Whether LLAP daemon web UI should use SSL. hive.llap.client.consistent.splits false Whether to setup split locations to match nodes on which llap daemons are running, instead of using the locations provided by the split itself. If there is no llap daemon running, fall back to locations provided by the split. This is effective only if hive.execution.mode is llap hive.llap.validate.acls true Whether LLAP should reject permissive ACLs in some cases (e.g. its own management protocol or ZK paths), similar to how ssh refuses a key with bad access permissions. hive.llap.daemon.output.service.port 15003 LLAP daemon output service port hive.llap.daemon.output.stream.timeout 120s Expects a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec), which is sec if not specified. The timeout for the client to connect to LLAP output service and start the fragment output after sending the fragment. The fragment will fail if its output is not claimed. hive.llap.daemon.output.service.send.buffer.size 131072 Send buffer size to be used by LLAP daemon output service hive.llap.daemon.output.service.max.pending.writes 8 Maximum number of queued writes allowed per connection when sending data via the LLAP output service to external clients. hive.llap.enable.grace.join.in.llap false Override if grace join should be allowed to run in llap. hive.llap.hs2.coordinator.enabled true Whether to create the LLAP coordinator; since execution engine and container vs llap settings are both coming from job configs, we don't know at start whether this should be created. Default true. hive.llap.daemon.logger query-routing Expects one of [query-routing, rfa, console]. logger used for llap-daemons. hive.spark.use.op.stats true Whether to use operator stats to determine reducer parallelism for Hive on Spark.
If this is false, Hive will use source table stats to determine reducer parallelism for all first level reduce tasks, and the maximum reducer parallelism from all parents for all the rest (second level and onward) reducer tasks. hive.spark.use.file.size.for.mapjoin false If this is set to true, mapjoin optimization in Hive/Spark will use source file sizes associated with the TableScan operator on the root of the operator tree, instead of using operator statistics. hive.spark.client.future.timeout 60s Expects a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec), which is sec if not specified. Timeout for requests from Hive client to remote Spark driver. hive.spark.job.monitor.timeout 60s Expects a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec), which is sec if not specified. Timeout for job monitor to get Spark job state. hive.spark.client.connect.timeout 1000ms Expects a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec), which is msec if not specified. Timeout for remote Spark driver in connecting back to Hive client. hive.spark.client.server.connect.timeout 90000ms Expects a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec), which is msec if not specified. Timeout for handshake between Hive client and remote Spark driver. Checked by both processes. hive.spark.client.secret.bits 256 Number of bits of randomness in the generated secret for communication between Hive client and remote Spark driver. Rounded down to the nearest multiple of 8. hive.spark.client.rpc.threads 8 Maximum number of threads for remote Spark driver's RPC event loop. hive.spark.client.rpc.max.size 52428800 Maximum message size in bytes for communication between Hive client and remote Spark driver. Default is 50MB. hive.spark.client.channel.log.level Channel logging level for remote Spark driver. One of {DEBUG, ERROR, INFO, TRACE, WARN}. hive.spark.client.rpc.sasl.mechanisms DIGEST-MD5 Name of the SASL mechanism to use for authentication. hive.spark.client.rpc.server.address The server address of the HiveServer2 host to be used for communication between Hive client and remote Spark driver. Default is empty, which means the address will be determined in the same way as for hive.server2.thrift.bind.host. This is only necessary if the host has multiple network addresses and if a different network address other than hive.server2.thrift.bind.host is to be used. hive.spark.client.rpc.server.port A list of port ranges which can be used by RPC server with the format of 49152-49222,49228 and a random one is selected from the list. Default is empty, which randomly selects one port from all available ones. hive.spark.dynamic.partition.pruning false When dynamic pruning is enabled, joins on partition keys will be processed by writing to a temporary HDFS file, and read later for removing unnecessary partitions. hive.spark.dynamic.partition.pruning.max.data.size 104857600 Maximum total data size in dynamic pruning. hive.spark.use.groupby.shuffle true Spark groupByKey transformation has better performance but uses unbounded memory. Turn this off when there is a memory issue. hive.reorder.nway.joins true Runs reordering of tables within single n-way join (i.e.: picks streamtable) hive.merge.nway.joins true Merge adjacent joins into a single n-way join hive.log.every.n.records 0 Expects value bigger than 0. If the value is greater than 0, logs are emitted at fixed intervals of size n rather than exponentially.
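For the Hive-on-Spark timeouts just described, a hive-site.xml sketch that simply restates the documented defaults (to be tightened or loosened per deployment) might be:

<property>
  <name>hive.spark.client.connect.timeout</name>
  <value>1000ms</value>
  <!-- remote Spark driver connecting back to the Hive client -->
</property>
<property>
  <name>hive.spark.client.server.connect.timeout</name>
  <value>90000ms</value>
  <!-- handshake between Hive client and remote Spark driver; checked by both processes -->
</property>
<property>
  <name>hive.spark.client.rpc.max.size</name>
  <value>52428800</value>
  <!-- 50MB maximum RPC message size -->
</property>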
hive.msck.path.validation throw Expects one of [throw, skip, ignore]. The approach msck should take with HDFS directories that are partition-like but contain unsupported characters. 'throw' (an exception) is the default; 'skip' will skip the invalid directories and still repair the others; 'ignore' will skip the validation (legacy behavior, causes bugs in many cases) hive.msck.repair.batch.size 0 Batch size for the msck repair command. If the value is greater than zero, it will execute batch wise with the configured batch size. The default value is zero. Zero means it will execute directly (Not batch wise) hive.server2.llap.concurrent.queries -1 The number of queries allowed in parallel via llap. Negative number implies 'infinite'. hive.tez.enable.memory.manager true Enable memory manager for tez hive.hash.table.inflation.factor 2.0 Expected inflation factor between disk/in memory representation of hash tables hive.log.trace.id Log tracing id that can be used by upstream clients for tracking respective logs. Truncated to 64 characters. Defaults to use auto-generated session id. hive.conf.restricted.list hive.security.authenticator.manager,hive.security.authorization.manager,hive.security.metastore.authorization.manager,hive.security.metastore.authenticator.manager,hive.users.in.admin.role,hive.server2.xsrf.filter.enabled,hive.security.authorization.enabled,hive.server2.authentication.ldap.baseDN,hive.server2.authentication.ldap.url,hive.server2.authentication.ldap.Domain,hive.server2.authentication.ldap.groupDNPattern,hive.server2.authentication.ldap.groupFilter,hive.server2.authentication.ldap.userDNPattern,hive.server2.authentication.ldap.userFilter,hive.server2.authentication.ldap.groupMembershipKey,hive.server2.authentication.ldap.userMembershipKey,hive.server2.authentication.ldap.groupClassKey,hive.server2.authentication.ldap.customLDAPQuery Comma separated list of configuration options which are immutable at runtime hive.conf.hidden.list javax.jdo.option.ConnectionPassword,hive.server2.keystore.password,fs.s3.awsAccessKeyId,fs.s3.awsSecretAccessKey,fs.s3n.awsAccessKeyId,fs.s3n.awsSecretAccessKey,fs.s3a.access.key,fs.s3a.secret.key,fs.s3a.proxy.password Comma separated list of configuration options which should not be read by normal user like passwords hive.conf.internal.variable.list hive.added.files.path,hive.added.jars.path,hive.added.archives.path Comma separated list of variables which are used internally and should not be configurable. hive.query.timeout.seconds 0s Expects a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec), which is sec if not specified. Timeout for Running Query in seconds. A nonpositive value means infinite. If the query timeout is also set by thrift API call, the smaller one will be taken. hive.exec.input.listing.max.threads 0 Expects a byte size value with unit (blank for bytes, kb, mb, gb, tb, pb). The size should be in between 0Pb (inclusive) and 1Kb (inclusive). Maximum number of threads that Hive uses to list file information from file systems (recommended > 1 for blobstore). hive.blobstore.supported.schemes s3,s3a,s3n Comma-separated list of supported blobstore schemes. hive.blobstore.use.blobstore.as.scratchdir false Enable the use of scratch directories directly on blob storage systems (it may cause performance penalties). 
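As a hedged example of the guard-rail settings above, the fragment below caps query runtime and keeps scratch directories off blob storage; the one-hour timeout is an arbitrary illustrative choice, not a recommendation:

<property>
  <name>hive.query.timeout.seconds</name>
  <value>3600s</value>
  <!-- illustrative; the default 0s means no timeout -->
</property>
<property>
  <name>hive.blobstore.use.blobstore.as.scratchdir</name>
  <value>false</value>
  <!-- documented default; enabling it may cause performance penalties -->
</property>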
hive.blobstore.optimizations.enabled true This parameter enables a number of optimizations when running on blobstores: (1) If hive.blobstore.use.blobstore.as.scratchdir is false, force the last Hive job to write to the blobstore. This is a performance optimization that forces the final FileSinkOperator to write to the blobstore. See HIVE-15121 for details.
path=sahara-plugin-vanilla-10.0.0/sahara_plugin_vanilla/plugins/vanilla/v2_8_2/resources/mapred-default.xml
mapreduce.job.committer.setup.cleanup.needed true true, if job needs job-setup and job-cleanup. false, otherwise mapreduce.task.io.sort.factor 10 The number of streams to merge at once while sorting files. This determines the number of open file handles. mapreduce.task.io.sort.mb 100 The total amount of buffer memory to use while sorting files, in megabytes. By default, gives each merge stream 1MB, which should minimize seeks. mapreduce.map.sort.spill.percent 0.80 The soft limit in the serialization buffer. Once reached, a thread will begin to spill the contents to disk in the background. Note that collection will not block if this threshold is exceeded while a spill is already in progress, so spills may be larger than this threshold when it is set to less than .5 mapreduce.jobtracker.address local The host and port that the MapReduce job tracker runs at. If "local", then jobs are run in-process as a single map and reduce task. mapreduce.local.clientfactory.class.name org.apache.hadoop.mapred.LocalClientFactory This is the client factory that is responsible for creating the local job runner client mapreduce.jobtracker.system.dir ${hadoop.tmp.dir}/mapred/system The directory where MapReduce stores control files. mapreduce.jobtracker.staging.root.dir ${hadoop.tmp.dir}/mapred/staging The root of the staging area for users' job files. In practice, this should be the directory where users' home directories are located (usually /user) mapreduce.cluster.temp.dir ${hadoop.tmp.dir}/mapred/temp A shared directory for temporary files. mapreduce.job.maps 2 The default number of map tasks per job. Ignored when mapreduce.framework.name is "local". mapreduce.job.reduces 1 The default number of reduce tasks per job. Typically set to 99% of the cluster's reduce capacity, so that if a node fails the reduces can still be executed in a single wave. Ignored when mapreduce.framework.name is "local". mapreduce.job.running.map.limit 0 The maximum number of simultaneous map tasks per job. There is no limit if this value is 0 or negative. mapreduce.job.running.reduce.limit 0 The maximum number of simultaneous reduce tasks per job. There is no limit if this value is 0 or negative. mapreduce.job.reducer.preempt.delay.sec 0 The threshold (in seconds) after which an unsatisfied mapper request triggers reducer preemption when there is no anticipated headroom. If set to 0 or a negative value, the reducer is preempted as soon as lack of headroom is detected. Default is 0. mapreduce.job.reducer.unconditional-preempt.delay.sec 300 The threshold (in seconds) after which an unsatisfied mapper request triggers a forced reducer preemption irrespective of the anticipated headroom. By default, it is set to 5 mins. Setting it to 0 leads to immediate reducer preemption.
Setting to -1 disables this preemption altogether. mapreduce.job.max.split.locations 10 The max number of block locations to store for each split for locality calculation. mapreduce.job.split.metainfo.maxsize 10000000 The maximum permissible size of the split metainfo file. The MapReduce ApplicationMaster won't attempt to read submitted split metainfo files bigger than this configured value. No limits if set to -1. mapreduce.map.maxattempts 4 Expert: The maximum number of attempts per map task. In other words, the framework will try to execute a map task this many times before giving up on it. mapreduce.reduce.maxattempts 4 Expert: The maximum number of attempts per reduce task. In other words, the framework will try to execute a reduce task this many times before giving up on it. mapreduce.reduce.shuffle.fetch.retry.enabled ${yarn.nodemanager.recovery.enabled} Set to enable fetch retry during host restart. mapreduce.reduce.shuffle.fetch.retry.interval-ms 1000 The interval at which the fetcher retries the fetch when a non-fatal failure happens because of events like an NM restart. mapreduce.reduce.shuffle.fetch.retry.timeout-ms 30000 Timeout value for the fetcher to retry the fetch when a non-fatal failure happens because of events like an NM restart. mapreduce.reduce.shuffle.retry-delay.max.ms 60000 The maximum number of ms the reducer will delay before retrying to download map data. mapreduce.reduce.shuffle.parallelcopies 5 The default number of parallel transfers run by reduce during the copy(shuffle) phase. mapreduce.reduce.shuffle.connect.timeout 180000 Expert: The maximum amount of time (in milliseconds) a reduce task spends in trying to connect to a remote node for getting map output. mapreduce.reduce.shuffle.read.timeout 180000 Expert: The maximum amount of time (in milliseconds) a reduce task waits for map output data to be available for reading after obtaining connection. mapreduce.shuffle.listen.queue.size 128 The length of the shuffle server listen queue. mapreduce.shuffle.connection-keep-alive.enable false Set to true to support keep-alive connections. mapreduce.shuffle.connection-keep-alive.timeout 5 The number of seconds a shuffle client attempts to retain an HTTP connection. Refer to the "Keep-Alive: timeout=" header in the HTTP specification. mapreduce.task.timeout 600000 The number of milliseconds before a task will be terminated if it neither reads an input, writes an output, nor updates its status string. A value of 0 disables the timeout. mapreduce.map.memory.mb 1024 The amount of memory to request from the scheduler for each map task. mapreduce.map.cpu.vcores 1 The number of virtual cores to request from the scheduler for each map task. mapreduce.reduce.memory.mb 1024 The amount of memory to request from the scheduler for each reduce task. mapreduce.reduce.cpu.vcores 1 The number of virtual cores to request from the scheduler for each reduce task. mapred.child.java.opts -Xmx200m Java opts for the task processes. The following symbol, if present, will be interpolated: @taskid@ is replaced by current TaskID. Any other occurrences of '@' will go unchanged. For example, to enable verbose gc logging to a file named for the taskid in /tmp and to set the heap maximum to be a gigabyte, pass a 'value' of: -Xmx1024m -verbose:gc -Xloggc:/tmp/@taskid@.gc Usage of -Djava.library.path can cause programs to no longer function if hadoop native libraries are used.
These values should instead be set as part of LD_LIBRARY_PATH in the map / reduce JVM env using the mapreduce.map.env and mapreduce.reduce.env config settings. mapred.child.env User added environment variables for the task processes. Example : 1) A=foo This will set the env variable A to foo 2) B=$B:c This inherits the nodemanager's B env variable on Unix. 3) B=%B%;c This inherits the nodemanager's B env variable on Windows. mapreduce.admin.user.env Expert: Additional execution environment entries for map and reduce task processes. This is not an additive property. You must preserve the original value if you want your map and reduce tasks to have access to native libraries (compression, etc). When this value is empty, the command to set the execution environment will be OS dependent: For linux, use LD_LIBRARY_PATH=$HADOOP_COMMON_HOME/lib/native. For windows, use PATH = %PATH%;%HADOOP_COMMON_HOME%\\bin. yarn.app.mapreduce.am.log.level INFO The logging level for the MR ApplicationMaster. The allowed levels are: OFF, FATAL, ERROR, WARN, INFO, DEBUG, TRACE and ALL. The setting here could be overridden if "mapreduce.job.log4j-properties-file" is set. mapreduce.map.log.level INFO The logging level for the map task. The allowed levels are: OFF, FATAL, ERROR, WARN, INFO, DEBUG, TRACE and ALL. The setting here could be overridden if "mapreduce.job.log4j-properties-file" is set. mapreduce.reduce.log.level INFO The logging level for the reduce task. The allowed levels are: OFF, FATAL, ERROR, WARN, INFO, DEBUG, TRACE and ALL. The setting here could be overridden if "mapreduce.job.log4j-properties-file" is set. mapreduce.map.cpu.vcores 1 The number of virtual cores required for each map task. mapreduce.reduce.cpu.vcores 1 The number of virtual cores required for each reduce task. mapreduce.reduce.merge.inmem.threshold 1000 The threshold, in terms of the number of files, for the in-memory merge process. When we accumulate threshold number of files we initiate the in-memory merge and spill to disk. A value of 0 or less indicates that we DON'T want any threshold and instead depend only on the ramfs's memory consumption to trigger the merge. mapreduce.reduce.shuffle.merge.percent 0.66 The usage threshold at which an in-memory merge will be initiated, expressed as a percentage of the total memory allocated to storing in-memory map outputs, as defined by mapreduce.reduce.shuffle.input.buffer.percent. mapreduce.reduce.shuffle.input.buffer.percent 0.70 The percentage of memory to be allocated from the maximum heap size to storing map outputs during the shuffle. mapreduce.reduce.input.buffer.percent 0.0 The percentage of memory - relative to the maximum heap size - to retain map outputs during the reduce. When the shuffle is concluded, any remaining map outputs in memory must consume less than this threshold before the reduce can begin. mapreduce.reduce.shuffle.memory.limit.percent 0.25 Expert: Maximum percentage of the in-memory limit that a single shuffle can consume mapreduce.shuffle.ssl.enabled false Whether to use SSL for the Shuffle HTTP endpoints. mapreduce.shuffle.ssl.file.buffer.size 65536 Buffer size for reading spills from file when using SSL. mapreduce.shuffle.max.connections 0 Max allowed connections for the shuffle. Set to 0 (zero) to indicate no limit on the number of connections. mapreduce.shuffle.max.threads 0 Max allowed threads for serving shuffle connections. Set to zero to indicate the default of 2 times the number of available processors (as reported by Runtime.availableProcessors()).
Netty is used to serve requests, so a thread is not needed for each connection. mapreduce.shuffle.transferTo.allowed This option can enable/disable using the nio transferTo method in the shuffle phase. NIO transferTo does not perform well on windows in the shuffle phase. Thus, with this configuration property it is possible to disable it, in which case a custom transfer method will be used. Recommended value is false when running Hadoop on Windows. For Linux, it is recommended to set it to true. If nothing is set then the default value is false for Windows, and true for Linux. mapreduce.shuffle.transfer.buffer.size 131072 This property is used only if mapreduce.shuffle.transferTo.allowed is set to false. In that case, this property defines the size of the buffer used in the buffer copy code for the shuffle phase. The size of this buffer determines the size of the IO requests. mapreduce.reduce.markreset.buffer.percent 0.0 The percentage of memory - relative to the maximum heap size - to be used for caching values when using the mark-reset functionality. mapreduce.map.speculative true If true, then multiple instances of some map tasks may be executed in parallel. mapreduce.reduce.speculative true If true, then multiple instances of some reduce tasks may be executed in parallel. mapreduce.job.speculative.speculative-cap-running-tasks 0.1 The max percent (0-1) of running tasks that can be speculatively re-executed at any time. mapreduce.job.speculative.speculative-cap-total-tasks 0.01 The max percent (0-1) of all tasks that can be speculatively re-executed at any time. mapreduce.job.speculative.minimum-allowed-tasks 10 The minimum allowed tasks that can be speculatively re-executed at any time. mapreduce.job.speculative.retry-after-no-speculate 1000 The waiting time (ms) before the next round of speculation if no task was speculated in this round. mapreduce.job.speculative.retry-after-speculate 15000 The waiting time (ms) before the next round of speculation if there are tasks speculated in this round. mapreduce.job.map.output.collector.class org.apache.hadoop.mapred.MapTask$MapOutputBuffer The MapOutputCollector implementation(s) to use. This may be a comma-separated list of class names, in which case the map task will try to initialize each of the collectors in turn. The first to successfully initialize will be used. mapreduce.job.speculative.slowtaskthreshold 1.0 The number of standard deviations by which a task's average progress rate must be lower than the average of all running tasks' for the task to be considered too slow. mapreduce.job.ubertask.enable false Whether to enable the small-jobs "ubertask" optimization, which runs "sufficiently small" jobs sequentially within a single JVM. "Small" is defined by the following maxmaps, maxreduces, and maxbytes settings. Note that configurations for application masters also affect the "Small" definition - yarn.app.mapreduce.am.resource.mb must be larger than both mapreduce.map.memory.mb and mapreduce.reduce.memory.mb, and yarn.app.mapreduce.am.resource.cpu-vcores must be larger than both mapreduce.map.cpu.vcores and mapreduce.reduce.cpu.vcores to enable ubertask. Users may override this value. mapreduce.job.ubertask.maxmaps 9 Threshold for number of maps, beyond which job is considered too big for the ubertasking optimization. Users may override this value, but only downward. mapreduce.job.ubertask.maxreduces 1 Threshold for number of reduces, beyond which job is considered too big for the ubertasking optimization.
CURRENTLY THE CODE CANNOT SUPPORT MORE THAN ONE REDUCE and will ignore larger values. (Zero is a valid max, however.) Users may override this value, but only downward. mapreduce.job.ubertask.maxbytes Threshold for number of input bytes, beyond which job is considered too big for the ubertasking optimization. If no value is specified, dfs.block.size is used as a default. Be sure to specify a default value in mapred-site.xml if the underlying filesystem is not HDFS. Users may override this value, but only downward. mapreduce.job.emit-timeline-data false Specifies if the Application Master should emit timeline data to the timeline server. Individual jobs can override this value. mapreduce.input.fileinputformat.split.minsize 0 The minimum size chunk that map input should be split into. Note that some file formats may have minimum split sizes that take priority over this setting. mapreduce.input.fileinputformat.list-status.num-threads 1 The number of threads to use to list and fetch block locations for the specified input paths. Note: multiple threads should not be used if a custom non thread-safe path filter is used. mapreduce.input.lineinputformat.linespermap 1 When using NLineInputFormat, the number of lines of input data to include in each split. mapreduce.client.submit.file.replication 10 The replication level for submitted job files. This should be around the square root of the number of nodes. mapreduce.task.files.preserve.failedtasks false Should the files for failed tasks be kept. This should only be used on jobs that are failing, because the storage is never reclaimed. It also prevents the map outputs from being erased from the reduce directory as they are consumed. mapreduce.output.fileoutputformat.compress false Should the job outputs be compressed? mapreduce.output.fileoutputformat.compress.type RECORD If the job outputs are to be compressed as SequenceFiles, how should they be compressed? Should be one of NONE, RECORD or BLOCK. mapreduce.output.fileoutputformat.compress.codec org.apache.hadoop.io.compress.DefaultCodec If the job outputs are compressed, how should they be compressed? mapreduce.map.output.compress false Should the outputs of the maps be compressed before being sent across the network. Uses SequenceFile compression. mapreduce.map.output.compress.codec org.apache.hadoop.io.compress.DefaultCodec If the map outputs are compressed, how should they be compressed? map.sort.class org.apache.hadoop.util.QuickSort The default sort class for sorting keys. mapreduce.task.userlog.limit.kb 0 The maximum size of user-logs of each task in KB. 0 disables the cap. yarn.app.mapreduce.am.container.log.limit.kb 0 The maximum size of the MRAppMaster attempt container logs in KB. 0 disables the cap. yarn.app.mapreduce.task.container.log.backups 0 Number of backup files for task logs when using ContainerRollingLogAppender (CRLA). See org.apache.log4j.RollingFileAppender.maxBackupIndex. By default, ContainerLogAppender (CLA) is used, and container logs are not rolled. CRLA is enabled for tasks when both mapreduce.task.userlog.limit.kb and yarn.app.mapreduce.task.container.log.backups are greater than zero. yarn.app.mapreduce.am.container.log.backups 0 Number of backup files for the ApplicationMaster logs when using ContainerRollingLogAppender (CRLA). See org.apache.log4j.RollingFileAppender.maxBackupIndex. By default, ContainerLogAppender (CLA) is used, and container logs are not rolled.
CRLA is enabled for the ApplicationMaster when both yarn.app.mapreduce.am.container.log.limit.kb and yarn.app.mapreduce.am.container.log.backups are greater than zero. yarn.app.mapreduce.shuffle.log.separate true If enabled ('true'), logging generated by the client-side shuffle classes in a reducer will be written to a dedicated log file 'syslog.shuffle' instead of 'syslog'. yarn.app.mapreduce.shuffle.log.limit.kb 0 Maximum size of the syslog.shuffle file in kilobytes (0 for no limit). yarn.app.mapreduce.shuffle.log.backups 0 If yarn.app.mapreduce.shuffle.log.limit.kb and yarn.app.mapreduce.shuffle.log.backups are greater than zero then a ContainerRollingLogAppender is used instead of ContainerLogAppender for syslog.shuffle. See org.apache.log4j.RollingFileAppender.maxBackupIndex mapreduce.job.maxtaskfailures.per.tracker 3 The number of task-failures on a node manager of a given job after which new tasks of that job aren't assigned to it. It MUST be less than mapreduce.map.maxattempts and mapreduce.reduce.maxattempts, otherwise the failed task will never be tried on a different node. mapreduce.client.output.filter FAILED The filter for controlling the output of the task's userlogs sent to the console of the JobClient. The permissible options are: NONE, KILLED, FAILED, SUCCEEDED and ALL. mapreduce.client.completion.pollinterval 5000 The interval (in milliseconds) between which the JobClient polls the MapReduce ApplicationMaster for updates about job status. You may want to set this to a lower value to make tests run faster on a single node system. Adjusting this value in production may lead to unwanted client-server traffic. mapreduce.client.progressmonitor.pollinterval 1000 The interval (in milliseconds) between which the JobClient reports status to the console and checks for job completion. You may want to set this to a lower value to make tests run faster on a single node system. Adjusting this value in production may lead to unwanted client-server traffic. mapreduce.task.profile false Whether the system should collect profiler information for some of the tasks in this job. The information is stored in the user log directory. The value is "true" if task profiling is enabled. mapreduce.task.profile.maps 0-2 To set the ranges of map tasks to profile. mapreduce.task.profile has to be set to true for the value to be accounted. mapreduce.task.profile.reduces 0-2 To set the ranges of reduce tasks to profile. mapreduce.task.profile has to be set to true for the value to be accounted. mapreduce.task.profile.params -agentlib:hprof=cpu=samples,heap=sites,force=n,thread=y,verbose=n,file=%s JVM profiler parameters used to profile map and reduce task attempts. This string may contain a single format specifier %s that will be replaced by the path to profile.out in the task attempt log directory. To specify different profiling options for map tasks and reduce tasks, the more specific parameters mapreduce.task.profile.map.params and mapreduce.task.profile.reduce.params should be used. mapreduce.task.profile.map.params ${mapreduce.task.profile.params} Map-task-specific JVM profiler parameters. See mapreduce.task.profile.params mapreduce.task.profile.reduce.params ${mapreduce.task.profile.params} Reduce-task-specific JVM profiler parameters. See mapreduce.task.profile.params mapreduce.task.skip.start.attempts 2 The number of Task attempts AFTER which skip mode will be kicked off. When skip mode is kicked off, the task reports the range of records which it will process next, to the MR ApplicationMaster.
So that on failures, the MR AM knows which ones are possibly the bad records. On further executions, those are skipped. mapreduce.map.skip.proc-count.auto-incr true The flag which if set to true, SkipBadRecords.COUNTER_MAP_PROCESSED_RECORDS is incremented by MapRunner after invoking the map function. This value must be set to false for applications which process the records asynchronously or buffer the input records. For example, streaming. In such cases applications should increment this counter on their own. mapreduce.reduce.skip.proc-count.auto-incr true The flag which if set to true, SkipBadRecords.COUNTER_REDUCE_PROCESSED_GROUPS is incremented by framework after invoking the reduce function. This value must be set to false for applications which process the records asynchronously or buffer the input records. For example, streaming. In such cases applications should increment this counter on their own. mapreduce.job.skip.outdir If no value is specified here, the skipped records are written to the output directory at _logs/skip. The user can stop writing skipped records by giving the value "none". mapreduce.map.skip.maxrecords 0 The number of acceptable skip records surrounding the bad record PER bad record in mapper. The number includes the bad record as well. To turn the feature of detection/skipping of bad records off, set the value to 0. The framework tries to narrow down the skipped range by retrying until this threshold is met OR all attempts get exhausted for this task. Set the value to Long.MAX_VALUE to indicate that the framework need not try to narrow down. Whatever records (depending on the application) get skipped are acceptable. mapreduce.reduce.skip.maxgroups 0 The number of acceptable skip groups surrounding the bad group PER bad group in reducer. The number includes the bad group as well. To turn the feature of detection/skipping of bad groups off, set the value to 0. The framework tries to narrow down the skipped range by retrying until this threshold is met OR all attempts get exhausted for this task. Set the value to Long.MAX_VALUE to indicate that the framework need not try to narrow down. Whatever groups (depending on the application) get skipped are acceptable. mapreduce.ifile.readahead true Configuration key to enable/disable IFile readahead. mapreduce.ifile.readahead.bytes 4194304 Configuration key to set the IFile readahead length in bytes. mapreduce.job.queuename default Queue to which a job is submitted. This must match one of the queues defined in mapred-queues.xml for the system. Also, the ACL setup for the queue must allow the current user to submit a job to the queue. Before specifying a queue, ensure that the system is configured with the queue, and access is allowed for submitting jobs to the queue. mapreduce.job.tags Tags for the job that will be passed to YARN at submission time. Queries to YARN for applications can filter on these tags. mapreduce.cluster.local.dir ${hadoop.tmp.dir}/mapred/local The local directory where MapReduce stores intermediate data files. May be a comma-separated list of directories on different devices in order to spread disk i/o. Directories that do not exist are ignored. mapreduce.cluster.acls.enabled false Specifies whether ACLs should be checked for authorization of users for doing various queue and job level operations. ACLs are disabled by default.
If enabled, access control checks are made by MapReduce ApplicationMaster when requests are made by users for queue operations like submit job to a queue and kill a job in the queue and job operations like viewing the job-details (See mapreduce.job.acl-view-job) or for modifying the job (See mapreduce.job.acl-modify-job) using Map/Reduce APIs, RPCs or via the console and web user interfaces. To enable this flag, set it to true in the mapred-site.xml file of all MapReduce clients (MR job submitting nodes). mapreduce.job.acl-modify-job Job specific access-control list for 'modifying' the job. It is only used if authorization is enabled in Map/Reduce by setting the configuration property mapreduce.cluster.acls.enabled to true. This specifies the list of users and/or groups who can do modification operations on the job. For specifying a list of users and groups the format to use is "user1,user2 group1,group2". If set to '*', it allows all users/groups to modify this job. If set to ' '(i.e. space), it allows none. This configuration is used to guard all the modifications with respect to this job and takes care of all the following operations: o killing this job o killing a task of this job, failing a task of this job o setting the priority of this job Each of these operations is also protected by the per-queue level ACL "acl-administer-jobs" configured via mapred-queues.xml. So a caller should have the authorization to satisfy either the queue-level ACL or the job-level ACL. Irrespective of this ACL configuration, (a) job-owner, (b) the user who started the cluster, (c) members of an admin configured supergroup configured via mapreduce.cluster.permissions.supergroup and (d) queue administrators of the queue to which this job was submitted to configured via acl-administer-jobs for the specific queue in mapred-queues.xml can do all the modification operations on a job. By default, nobody else besides job-owner, the user who started the cluster, members of supergroup and queue administrators can perform modification operations on a job. mapreduce.job.acl-view-job Job specific access-control list for 'viewing' the job. It is only used if authorization is enabled in Map/Reduce by setting the configuration property mapreduce.cluster.acls.enabled to true. This specifies the list of users and/or groups who can view private details about the job. For specifying a list of users and groups the format to use is "user1,user2 group1,group2". If set to '*', it allows all users/groups to view this job. If set to ' '(i.e. space), it allows none. This configuration is used to guard some of the job-views and at present only protects APIs that can return possibly sensitive information of the job-owner like o job-level counters o task-level counters o tasks' diagnostic information o task-logs displayed on the HistoryServer's web-UI and o job.xml shown by the HistoryServer's web-UI. Every other piece of information of jobs is still accessible by any other user, e.g. JobStatus, JobProfile, list of jobs in the queue, etc. Irrespective of this ACL configuration, (a) job-owner, (b) the user who started the cluster, (c) members of an admin configured supergroup configured via mapreduce.cluster.permissions.supergroup and (d) queue administrators of the queue to which this job was submitted to configured via acl-administer-jobs for the specific queue in mapred-queues.xml can do all the view operations on a job.
By default, nobody else besides job-owner, the user who started the cluster, members of supergroup and queue administrators can perform view operations on a job. mapreduce.job.finish-when-all-reducers-done false Specifies whether the job should complete once all reducers have finished, regardless of whether there are still running mappers. mapreduce.job.token.tracking.ids.enabled false Whether to write tracking ids of tokens to job-conf. When true, the configuration property "mapreduce.job.token.tracking.ids" is set to the token-tracking-ids of the job. mapreduce.job.token.tracking.ids When mapreduce.job.token.tracking.ids.enabled is set to true, this is set by the framework to the token-tracking-ids used by the job. mapreduce.task.merge.progress.records 10000 The number of records to process during merge before sending a progress notification to the MR ApplicationMaster. mapreduce.task.combine.progress.records 10000 The number of records to process during combine output collection before sending a progress notification. mapreduce.job.reduce.slowstart.completedmaps 0.05 Fraction of the number of maps in the job which should be complete before reduces are scheduled for the job. mapreduce.job.complete.cancel.delegation.tokens true If false, do not unregister/cancel delegation tokens from renewal, because the same tokens may be used by spawned jobs. mapreduce.shuffle.port 13562 Default port that the ShuffleHandler will run on. ShuffleHandler is a service run at the NodeManager to facilitate transfers of intermediate Map outputs to requesting Reducers. mapreduce.job.reduce.shuffle.consumer.plugin.class org.apache.hadoop.mapreduce.task.reduce.Shuffle Name of the class whose instance will be used to send shuffle requests by reduce tasks of this job. The class must be an instance of org.apache.hadoop.mapred.ShuffleConsumerPlugin. mapreduce.job.node-label-expression All the containers of the Map Reduce job will be run with this node label expression. If the node-label-expression for the job is not set, then it will use the queue's default-node-label-expression for all of the job's containers. mapreduce.job.am.node-label-expression This is the node-label configuration for the Map Reduce Application Master container. If not configured it will make use of mapreduce.job.node-label-expression and if the job's node-label expression is not configured then it will use the queue's default-node-label-expression. mapreduce.map.node-label-expression This is the node-label configuration for Map task containers. If not configured it will use mapreduce.job.node-label-expression and if the job's node-label expression is not configured then it will use the queue's default-node-label-expression. mapreduce.reduce.node-label-expression This is the node-label configuration for Reduce task containers. If not configured it will use mapreduce.job.node-label-expression and if the job's node-label expression is not configured then it will use the queue's default-node-label-expression. mapreduce.job.counters.limit 120 Limit on the number of user counters allowed per job. mapreduce.framework.name local The runtime framework for executing MapReduce jobs. Can be one of local, classic or yarn. yarn.app.mapreduce.am.staging-dir /tmp/hadoop-yarn/staging The staging dir used while submitting jobs. mapreduce.am.max-attempts 2 The maximum number of application attempts. It is an application-specific setting. It should not be larger than the global number set by the resourcemanager. Otherwise, it will be overridden. The default number is set to 2, to allow at least one retry for the AM. 
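As an illustrative sketch only (the user and group names below are placeholders, not values taken from this file), the ACL properties described above would typically be overridden in mapred-site.xml along these lines, using the documented "user1,user2 group1,group2" format:

<configuration>
  <!-- Enable ACL checks for queue and job level operations (disabled by default). -->
  <property>
    <name>mapreduce.cluster.acls.enabled</name>
    <value>true</value>
  </property>
  <!-- Placeholder users/groups allowed to view job details. -->
  <property>
    <name>mapreduce.job.acl-view-job</name>
    <value>alice,bob analysts</value>
  </property>
  <!-- Placeholder users/groups allowed to modify (kill, reprioritize) the job. -->
  <property>
    <name>mapreduce.job.acl-modify-job</name>
    <value>alice ops</value>
  </property>
</configuration>

Per the descriptions above, the job owner, the user who started the cluster, supergroup members and queue administrators retain access regardless of these ACLs.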
mapreduce.job.end-notification.url Indicates url which will be called on completion of job to inform end status of job. User can give at most 2 variables with URI : $jobId and $jobStatus. If they are present in URI, then they will be replaced by their respective values. mapreduce.job.end-notification.retry.attempts 0 The number of times the submitter of the job wants to retry job end notification if it fails. This is capped by mapreduce.job.end-notification.max.attempts mapreduce.job.end-notification.retry.interval 1000 The number of milliseconds the submitter of the job wants to wait before job end notification is retried if it fails. This is capped by mapreduce.job.end-notification.max.retry.interval mapreduce.job.end-notification.max.attempts 5 true The maximum number of times a URL will be read for providing job end notification. Cluster administrators can set this to limit how long after end of a job, the Application Master waits before exiting. Must be marked as final to prevent users from overriding this. mapreduce.job.log4j-properties-file Used to override the default settings of log4j in container-log4j.properties for NodeManager. Like container-log4j.properties, it requires certain framework appenders properly defined in this overriden file. The file on the path will be added to distributed cache and classpath. If no-scheme is given in the path, it defaults to point to a log4j file on the local FS. mapreduce.job.end-notification.max.retry.interval 5000 true The maximum amount of time (in milliseconds) to wait before retrying job end notification. Cluster administrators can set this to limit how long the Application Master waits before exiting. Must be marked as final to prevent users from overriding this. yarn.app.mapreduce.am.env User added environment variables for the MR App Master processes. Example : 1) A=foo This will set the env variable A to foo 2) B=$B:c This is inherit tasktracker's B env variable. yarn.app.mapreduce.am.admin.user.env Environment variables for the MR App Master processes for admin purposes. These values are set first and can be overridden by the user env (yarn.app.mapreduce.am.env) Example : 1) A=foo This will set the env variable A to foo 2) B=$B:c This is inherit app master's B env variable. yarn.app.mapreduce.am.command-opts -Xmx1024m Java opts for the MR App Master processes. The following symbol, if present, will be interpolated: @taskid@ is replaced by current TaskID. Any other occurrences of '@' will go unchanged. For example, to enable verbose gc logging to a file named for the taskid in /tmp and to set the heap maximum to be a gigabyte, pass a 'value' of: -Xmx1024m -verbose:gc -Xloggc:/tmp/@taskid@.gc Usage of -Djava.library.path can cause programs to no longer function if hadoop native libraries are used. These values should instead be set as part of LD_LIBRARY_PATH in the map / reduce JVM env using the mapreduce.map.env and mapreduce.reduce.env config settings. yarn.app.mapreduce.am.admin-command-opts Java opts for the MR App Master processes for admin purposes. It will appears before the opts set by yarn.app.mapreduce.am.command-opts and thus its options can be overridden user. Usage of -Djava.library.path can cause programs to no longer function if hadoop native libraries are used. These values should instead be set as part of LD_LIBRARY_PATH in the map / reduce JVM env using the mapreduce.map.env and mapreduce.reduce.env config settings. 
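A minimal sketch of the job end-notification settings described above; the callback URL is a placeholder, while $jobId and $jobStatus are the two variables the framework substitutes as documented (note that '&' must be escaped as '&amp;' inside an XML value):

<configuration>
  <property>
    <name>mapreduce.job.end-notification.url</name>
    <value>http://notify.example.com/mapreduce?id=$jobId&amp;status=$jobStatus</value>
  </property>
  <!-- Retry the notification up to 3 times, 1 second apart (capped by the max.* properties). -->
  <property>
    <name>mapreduce.job.end-notification.retry.attempts</name>
    <value>3</value>
  </property>
  <property>
    <name>mapreduce.job.end-notification.retry.interval</name>
    <value>1000</value>
  </property>
</configuration>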
yarn.app.mapreduce.am.job.task.listener.thread-count 30 The number of threads used to handle RPC calls in the MR AppMaster from remote tasks yarn.app.mapreduce.am.job.client.port-range Range of ports that the MapReduce AM can use when binding. Leave blank if you want all possible ports. For example 50000-50050,50100-50200 yarn.app.mapreduce.am.job.committer.cancel-timeout 60000 The amount of time in milliseconds to wait for the output committer to cancel an operation if the job is killed yarn.app.mapreduce.am.job.committer.commit-window 10000 Defines a time window in milliseconds for output commit operations. If contact with the RM has occurred within this window then commits are allowed, otherwise the AM will not allow output commits until contact with the RM has been re-established. mapreduce.fileoutputcommitter.algorithm.version 1 The file output committer algorithm version valid algorithm version number: 1 or 2 default to 1, which is the original algorithm In algorithm version 1, 1. commitTask will rename directory $joboutput/_temporary/$appAttemptID/_temporary/$taskAttemptID/ to $joboutput/_temporary/$appAttemptID/$taskID/ 2. recoverTask will also do a rename $joboutput/_temporary/$appAttemptID/$taskID/ to $joboutput/_temporary/($appAttemptID + 1)/$taskID/ 3. commitJob will merge every task output file in $joboutput/_temporary/$appAttemptID/$taskID/ to $joboutput/, then it will delete $joboutput/_temporary/ and write $joboutput/_SUCCESS It has a performance regression, which is discussed in MAPREDUCE-4815. If a job generates many files to commit then the commitJob method call at the end of the job can take minutes. the commit is single-threaded and waits until all tasks have completed before commencing. algorithm version 2 will change the behavior of commitTask, recoverTask, and commitJob. 1. commitTask will rename all files in $joboutput/_temporary/$appAttemptID/_temporary/$taskAttemptID/ to $joboutput/ 2. recoverTask actually doesn't require to do anything, but for upgrade from version 1 to version 2 case, it will check if there are any files in $joboutput/_temporary/($appAttemptID - 1)/$taskID/ and rename them to $joboutput/ 3. commitJob can simply delete $joboutput/_temporary and write $joboutput/_SUCCESS This algorithm will reduce the output commit time for large jobs by having the tasks commit directly to the final output directory as they were completing and commitJob had very little to do. yarn.app.mapreduce.am.scheduler.heartbeat.interval-ms 1000 The interval in ms at which the MR AppMaster should send heartbeats to the ResourceManager yarn.app.mapreduce.client-am.ipc.max-retries 3 The number of client retries to the AM - before reconnecting to the RM to fetch Application Status. yarn.app.mapreduce.client-am.ipc.max-retries-on-timeouts 3 The number of client retries on socket timeouts to the AM - before reconnecting to the RM to fetch Application Status. yarn.app.mapreduce.client.max-retries 3 The number of client retries to the RM/HS before throwing exception. This is a layer above the ipc. yarn.app.mapreduce.am.resource.mb 1536 The amount of memory the MR AppMaster needs. yarn.app.mapreduce.am.resource.cpu-vcores 1 The number of virtual CPU cores the MR AppMaster needs. yarn.app.mapreduce.am.hard-kill-timeout-ms 10000 Number of milliseconds to wait before the job client kills the application. yarn.app.mapreduce.client.job.max-retries 0 The number of retries the client will make for getJob and dependent calls. 
The default is 0 as this is generally only needed for non-HDFS DFS where additional, high level retries are required to avoid spurious failures during the getJob call. 30 is a good value for WASB yarn.app.mapreduce.client.job.retry-interval 2000 The delay between getJob retries in ms for retries configured with yarn.app.mapreduce.client.job.max-retries. CLASSPATH for MR applications. A comma-separated list of CLASSPATH entries. If mapreduce.application.framework is set then this must specify the appropriate classpath for that archive, and the name of the archive must be present in the classpath. If mapreduce.app-submission.cross-platform is false, platform-specific environment vairable expansion syntax would be used to construct the default CLASSPATH entries. For Linux: $HADOOP_MAPRED_HOME/share/hadoop/mapreduce/*, $HADOOP_MAPRED_HOME/share/hadoop/mapreduce/lib/*. For Windows: %HADOOP_MAPRED_HOME%/share/hadoop/mapreduce/*, %HADOOP_MAPRED_HOME%/share/hadoop/mapreduce/lib/*. If mapreduce.app-submission.cross-platform is true, platform-agnostic default CLASSPATH for MR applications would be used: {{HADOOP_MAPRED_HOME}}/share/hadoop/mapreduce/*, {{HADOOP_MAPRED_HOME}}/share/hadoop/mapreduce/lib/* Parameter expansion marker will be replaced by NodeManager on container launch based on the underlying OS accordingly. mapreduce.application.classpath If enabled, user can submit an application cross-platform i.e. submit an application from a Windows client to a Linux/Unix server or vice versa. mapreduce.app-submission.cross-platform false Path to the MapReduce framework archive. If set, the framework archive will automatically be distributed along with the job, and this path would normally reside in a public location in an HDFS filesystem. As with distributed cache files, this can be a URL with a fragment specifying the alias to use for the archive name. For example, hdfs:/mapred/framework/hadoop-mapreduce-2.1.1.tar.gz#mrframework would alias the localized archive as "mrframework". Note that mapreduce.application.classpath must include the appropriate classpath for the specified framework. The base name of the archive, or alias of the archive if an alias is used, must appear in the specified classpath. mapreduce.application.framework.path mapreduce.job.classloader false Whether to use a separate (isolated) classloader for user classes in the task JVM. mapreduce.job.classloader.system.classes Used to override the default definition of the system classes for the job classloader. The system classes are a comma-separated list of patterns that indicate whether to load a class from the system classpath, instead from the user-supplied JARs, when mapreduce.job.classloader is enabled. A positive pattern is defined as: 1. A single class name 'C' that matches 'C' and transitively all nested classes 'C$*' defined in C; 2. A package name ending with a '.' (e.g., "com.example.") that matches all classes from that package. A negative pattern is defined by a '-' in front of a positive pattern (e.g., "-com.example."). A class is considered a system class if and only if it matches one of the positive patterns and none of the negative ones. More formally: A class is a member of the inclusion set I if it matches one of the positive patterns. A class is a member of the exclusion set E if it matches one of the negative patterns. The set of system classes S = I \ E. 
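Tying the framework-archive properties together, a hedged sketch of a configuration that distributes the MapReduce framework from HDFS; the archive path reuses the example given in the description above, and the classpath entries are illustrative, since the exact sub-paths depend on how your archive is laid out:

<configuration>
  <property>
    <name>mapreduce.application.framework.path</name>
    <value>hdfs:/mapred/framework/hadoop-mapreduce-2.1.1.tar.gz#mrframework</value>
  </property>
  <!-- The alias "mrframework" from the fragment above must appear in the classpath;
       adjust the directories to match the structure inside your archive. -->
  <property>
    <name>mapreduce.application.classpath</name>
    <value>mrframework/share/hadoop/mapreduce/*,mrframework/share/hadoop/mapreduce/lib/*</value>
  </property>
</configuration>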
mapreduce.jvm.system-properties-to-log os.name,os.version,java.home,java.runtime.version,java.vendor,java.version,java.vm.name,java.class.path,java.io.tmpdir,user.dir,user.name Comma-delimited list of system properties to log on mapreduce JVM start mapreduce.jobhistory.address 0.0.0.0:10020 MapReduce JobHistory Server IPC host:port mapreduce.jobhistory.webapp.address 0.0.0.0:19888 MapReduce JobHistory Server Web UI host:port mapreduce.jobhistory.webapp.https.address 0.0.0.0:19890 The https address the MapReduce JobHistory Server WebApp is on. mapreduce.jobhistory.keytab Location of the kerberos keytab file for the MapReduce JobHistory Server. /etc/security/keytab/jhs.service.keytab mapreduce.jobhistory.principal Kerberos principal name for the MapReduce JobHistory Server. jhs/_HOST@REALM.TLD mapreduce.jobhistory.intermediate-done-dir ${yarn.app.mapreduce.am.staging-dir}/history/done_intermediate mapreduce.jobhistory.done-dir ${yarn.app.mapreduce.am.staging-dir}/history/done mapreduce.jobhistory.cleaner.enable true mapreduce.jobhistory.cleaner.interval-ms 86400000 How often the job history cleaner checks for files to delete, in milliseconds. Defaults to 86400000 (one day). Files are only deleted if they are older than mapreduce.jobhistory.max-age-ms. mapreduce.jobhistory.max-age-ms 604800000 Job history files older than this many milliseconds will be deleted when the history cleaner runs. Defaults to 604800000 (1 week). mapreduce.jobhistory.client.thread-count 10 The number of threads to handle client API requests mapreduce.jobhistory.datestring.cache.size 200000 Size of the date string cache. Effects the number of directories which will be scanned to find a job. mapreduce.jobhistory.joblist.cache.size 20000 Size of the job list cache mapreduce.jobhistory.loadedjobs.cache.size 5 Size of the loaded job cache. This property is ignored if the property mapreduce.jobhistory.loadedtasks.cache.size is set to a positive value. mapreduce.jobhistory.loadedtasks.cache.size Change the job history cache limit to be set in terms of total task count. If the total number of tasks loaded exceeds this value, then the job cache will be shrunk down until it is under this limit (minimum 1 job in cache). If this value is empty or nonpositive then the cache reverts to using the property mapreduce.jobhistory.loadedjobs.cache.size as a job cache size. Two recommendations for the mapreduce.jobhistory.loadedtasks.cache.size property: 1) For every 100k of cache size, set the heap size of the Job History Server to 1.2GB. For example, mapreduce.jobhistory.loadedtasks.cache.size=500000, heap size=6GB. 2) Make sure that the cache size is larger than the number of tasks required for the largest job run on the cluster. It might be a good idea to set the value slightly higher (say, 20%) in order to allow for job size growth. mapreduce.jobhistory.move.interval-ms 180000 Scan for history files to more from intermediate done dir to done dir at this frequency. mapreduce.jobhistory.move.thread-count 3 The number of threads used to move files. mapreduce.jobhistory.store.class The HistoryStorage class to use to cache history data. mapreduce.jobhistory.minicluster.fixed.ports false Whether to use fixed ports with the minicluster mapreduce.jobhistory.admin.address 0.0.0.0:10033 The address of the History server admin interface. mapreduce.jobhistory.admin.acl * ACL of who can be admin of the History server. mapreduce.jobhistory.recovery.enable false Enable the history server to store server state and recover server state upon startup. 
If enabled then mapreduce.jobhistory.recovery.store.class must be specified. mapreduce.jobhistory.recovery.store.class org.apache.hadoop.mapreduce.v2.hs.HistoryServerFileSystemStateStoreService The HistoryServerStateStoreService class to store history server state for recovery. mapreduce.jobhistory.recovery.store.fs.uri ${hadoop.tmp.dir}/mapred/history/recoverystore The URI where history server state will be stored if HistoryServerFileSystemStateStoreService is configured as the recovery storage class. mapreduce.jobhistory.recovery.store.leveldb.path ${hadoop.tmp.dir}/mapred/history/recoverystore The URI where history server state will be stored if HistoryServerLeveldbSystemStateStoreService is configured as the recovery storage class. mapreduce.jobhistory.http.policy HTTP_ONLY This configures the HTTP endpoint for JobHistoryServer web UI. The following values are supported: - HTTP_ONLY : Service is provided only on http - HTTPS_ONLY : Service is provided only on https mapreduce.jobhistory.jobname.limit 50 Number of characters allowed for job name in Job History Server web page. File format the AM will use when generating the .jhist file. Valid values are "json" for text output and "binary" for faster parsing. mapreduce.jobhistory.jhist.format json yarn.app.mapreduce.am.containerlauncher.threadpool-initial-size 10 The initial size of thread pool to launch containers in the app master. mapreduce.task.exit.timeout 60000 The number of milliseconds before a task will be terminated if it stays in finishing state for too long. After a task attempt completes from TaskUmbilicalProtocol's point of view, it will be transitioned to finishing state. That will give a chance for the task to exit by itself. mapreduce.task.exit.timeout.check-interval-ms 20000 The interval in milliseconds between which the MR framework checks if task attempts stay in finishing state for too long. mapreduce.task.local-fs.write-limit.bytes -1 Limit on the byte written to the local file system by each task. This limit only applies to writes that go through the Hadoop filesystem APIs within the task process (i.e.: writes that will update the local filesystem's BYTES_WRITTEN counter). It does not cover other writes such as logging, sideband writes from subprocesses (e.g.: streaming jobs), etc. Negative values disable the limit. default is -1 The list of job configuration properties whose value will be redacted. mapreduce.job.redacted-properties ././@PaxHeader0000000000000000000000000000020500000000000011452 xustar0000000000000000111 path=sahara-plugin-vanilla-10.0.0/sahara_plugin_vanilla/plugins/vanilla/v2_8_2/resources/oozie-default.xml 22 mtime=1696419316.0 sahara-plugin-vanilla-10.0.0/sahara_plugin_vanilla/plugins/vanilla/v2_8_2/resources/oozie-default.xm0000664000175000017500000035122100000000000033556 0ustar00zuulzuul00000000000000 oozie.output.compression.codec gz The name of the compression codec to use. The implementation class for the codec needs to be specified through another property oozie.compression.codecs. You can specify a comma separated list of 'Codec_name'='Codec_class' for oozie.compression.codecs where codec class implements the interface org.apache.oozie.compression.CompressionCodec. If oozie.compression.codecs is not specified, gz codec implementation is used by default. 
oozie.external_monitoring.enable false If the oozie functional metrics needs to be exposed to the metrics-server backend, set it to true If set to true, the following properties has to be specified : oozie.metrics.server.name, oozie.metrics.host, oozie.metrics.prefix, oozie.metrics.report.interval.sec, oozie.metrics.port oozie.external_monitoring.type graphite The name of the server to which we want to send the metrics, would be graphite or ganglia. oozie.external_monitoring.address http://localhost:2020 oozie.external_monitoring.metricPrefix oozie oozie.external_monitoring.reporterIntervalSecs 60 oozie.jmx_monitoring.enable false If the oozie functional metrics needs to be exposed via JMX interface, set it to true. oozie.action.mapreduce.uber.jar.enable false If true, enables the oozie.mapreduce.uber.jar mapreduce workflow configuration property, which is used to specify an uber jar in HDFS. Submitting a workflow with an uber jar requires at least Hadoop 2.2.0 or 1.2.0. If false, workflows which specify the oozie.mapreduce.uber.jar configuration property will fail. oozie.processing.timezone UTC Oozie server timezone. Valid values are UTC and GMT(+/-)####, for example 'GMT+0530' would be India timezone. All dates parsed and genered dates by Oozie Coordinator/Bundle will be done in the specified timezone. The default value of 'UTC' should not be changed under normal circumtances. If for any reason is changed, note that GMT(+/-)#### timezones do not observe DST changes. oozie.base.url http://localhost:8080/oozie Base Oozie URL. oozie.system.id oozie-${user.name} The Oozie system ID. oozie.systemmode NORMAL System mode for Oozie at startup. oozie.delete.runtime.dir.on.shutdown true If the runtime directory should be kept after Oozie shutdowns down. oozie.services org.apache.oozie.service.SchedulerService, org.apache.oozie.service.InstrumentationService, org.apache.oozie.service.MemoryLocksService, org.apache.oozie.service.UUIDService, org.apache.oozie.service.ELService, org.apache.oozie.service.AuthorizationService, org.apache.oozie.service.UserGroupInformationService, org.apache.oozie.service.HadoopAccessorService, org.apache.oozie.service.JobsConcurrencyService, org.apache.oozie.service.URIHandlerService, org.apache.oozie.service.DagXLogInfoService, org.apache.oozie.service.SchemaService, org.apache.oozie.service.LiteWorkflowAppService, org.apache.oozie.service.JPAService, org.apache.oozie.service.StoreService, org.apache.oozie.service.SLAStoreService, org.apache.oozie.service.DBLiteWorkflowStoreService, org.apache.oozie.service.CallbackService, org.apache.oozie.service.ActionService, org.apache.oozie.service.ShareLibService, org.apache.oozie.service.CallableQueueService, org.apache.oozie.service.ActionCheckerService, org.apache.oozie.service.RecoveryService, org.apache.oozie.service.PurgeService, org.apache.oozie.service.CoordinatorEngineService, org.apache.oozie.service.BundleEngineService, org.apache.oozie.service.DagEngineService, org.apache.oozie.service.CoordMaterializeTriggerService, org.apache.oozie.service.StatusTransitService, org.apache.oozie.service.PauseTransitService, org.apache.oozie.service.GroupsService, org.apache.oozie.service.ProxyUserService, org.apache.oozie.service.XLogStreamingService, org.apache.oozie.service.JvmPauseMonitorService, org.apache.oozie.service.SparkConfigurationService All services to be created and managed by Oozie Services singleton. Class names must be separated by commas. 
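A sketch of an oozie-site.xml override enabling the external monitoring hooks described above with the graphite backend; all values below are the defaults quoted in this file, and the endpoint is only illustrative:

<configuration>
  <property>
    <name>oozie.external_monitoring.enable</name>
    <value>true</value>
  </property>
  <property>
    <name>oozie.external_monitoring.type</name>
    <value>graphite</value>
  </property>
  <property>
    <name>oozie.external_monitoring.address</name>
    <value>http://localhost:2020</value>
  </property>
  <property>
    <name>oozie.external_monitoring.metricPrefix</name>
    <value>oozie</value>
  </property>
  <property>
    <name>oozie.external_monitoring.reporterIntervalSecs</name>
    <value>60</value>
  </property>
</configuration>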
oozie.services.ext To add/replace services defined in 'oozie.services' with custom implementations. Class names must be separated by commas. oozie.service.XLogStreamingService.buffer.len 4096 4K buffer for streaming the logs progressively oozie.service.HCatAccessorService.jmsconnections default=java.naming.factory.initial#org.apache.activemq.jndi.ActiveMQInitialContextFactory;java.naming.provider.url#tcp://localhost:61616;connectionFactoryNames#ConnectionFactory Specify the map of endpoints to JMS configuration properties. In general, endpoint identifies the HCatalog server URL. "default" is used if no endpoint is mentioned in the query. If some JMS property is not defined, the system will use the property defined jndi.properties. jndi.properties files is retrieved from the application classpath. Mapping rules can also be provided for mapping Hcatalog servers to corresponding JMS providers. hcat://${1}.${2}.server.com:8020=java.naming.factory.initial#Dummy.Factory;java.naming.provider.url#tcp://broker.${2}:61616 oozie.service.JMSTopicService.topic.name default=${username} Topic options are ${username} or ${jobId} or a fixed string which can be specified as default or for a particular job type. For e.g To have a fixed string topic for workflows, coordinators and bundles, specify in the following comma-separated format: {jobtype1}={some_string1}, {jobtype2}={some_string2} where job type can be WORKFLOW, COORDINATOR or BUNDLE. e.g. Following defines topic for workflow job, workflow action, coordinator job, coordinator action, bundle job and bundle action WORKFLOW=workflow, COORDINATOR=coordinator, BUNDLE=bundle For jobs with no defined topic, default topic will be ${username} oozie.jms.producer.connection.properties java.naming.factory.initial#org.apache.activemq.jndi.ActiveMQInitialContextFactory;java.naming.provider.url#tcp://localhost:61616;connectionFactoryNames#ConnectionFactory oozie.service.JMSAccessorService.connectioncontext.impl org.apache.oozie.jms.DefaultConnectionContext Specifies the Connection Context implementation oozie.service.ConfigurationService.ignore.system.properties oozie.service.AuthorizationService.security.enabled Specifies "oozie.*" properties to cannot be overriden via Java system properties. Property names must be separted by commas. oozie.service.ConfigurationService.verify.available.properties true Specifies whether the available configurations check is enabled or not. oozie.service.SchedulerService.threads 10 The number of threads to be used by the SchedulerService to run deamon tasks. If maxed out, scheduled daemon tasks will be queued up and delayed until threads become available. oozie.service.AuthorizationService.authorization.enabled false Specifies whether security (user name/admin role) is enabled or not. If disabled any user can manage Oozie system and manage any job. oozie.service.AuthorizationService.default.group.as.acl false Enables old behavior where the User's default group is the job's ACL. oozie.service.InstrumentationService.logging.interval 60 Interval, in seconds, at which instrumentation should be logged by the InstrumentationService. If set to 0 it will not log instrumentation data. oozie.service.PurgeService.older.than 30 Completed workflow jobs older than this value, in days, will be purged by the PurgeService. oozie.service.PurgeService.coord.older.than 7 Completed coordinator jobs older than this value, in days, will be purged by the PurgeService. 
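For the authorization and purge services above, a minimal oozie-site.xml sketch; the retention value is an arbitrary example, not a recommendation:

<configuration>
  <!-- Turn on user/admin-role security checks (disabled by default). -->
  <property>
    <name>oozie.service.AuthorizationService.authorization.enabled</name>
    <value>true</value>
  </property>
  <!-- Keep completed workflow jobs for 14 days instead of the default 30. -->
  <property>
    <name>oozie.service.PurgeService.older.than</name>
    <value>14</value>
  </property>
</configuration>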
oozie.service.PurgeService.bundle.older.than 7 Completed bundle jobs older than this value, in days, will be purged by the PurgeService. oozie.service.PurgeService.purge.old.coord.action false Whether to purge completed workflows and their corresponding coordinator actions of long running coordinator jobs if the completed workflow jobs are older than the value specified in oozie.service.PurgeService.older.than. oozie.service.PurgeService.purge.limit 100 Completed Actions purge - limit each purge to this value oozie.service.PurgeService.purge.interval 3600 Interval at which the purge service will run, in seconds. oozie.service.RecoveryService.wf.actions.older.than 120 Age of the actions which are eligible to be queued for recovery, in seconds. oozie.service.RecoveryService.wf.actions.created.time.interval 7 Created time period of the actions which are eligible to be queued for recovery in days. oozie.service.RecoveryService.callable.batch.size 10 This value determines the number of callable which will be batched together to be executed by a single thread. oozie.service.RecoveryService.push.dependency.interval 200 This value determines the delay for push missing dependency command queueing in Recovery Service oozie.service.RecoveryService.interval 60 Interval at which the RecoverService will run, in seconds. oozie.service.RecoveryService.coord.older.than 600 Age of the Coordinator jobs or actions which are eligible to be queued for recovery, in seconds. oozie.service.RecoveryService.bundle.older.than 600 Age of the Bundle jobs which are eligible to be queued for recovery, in seconds. oozie.service.CallableQueueService.queue.size 10000 Max callable queue size oozie.service.CallableQueueService.threads 10 Number of threads used for executing callables oozie.service.CallableQueueService.callable.concurrency 3 Maximum concurrency for a given callable type. Each command is a callable type (submit, start, run, signal, job, jobs, suspend,resume, etc). Each action type is a callable type (Map-Reduce, Pig, SSH, FS, sub-workflow, etc). All commands that use action executors (action-start, action-end, action-kill and action-check) use the action type as the callable type. oozie.service.CallableQueueService.callable.next.eligible true If true, when a callable in the queue has already reached max concurrency, Oozie continuously find next one which has not yet reach max concurrency. oozie.service.CallableQueueService.InterruptMapMaxSize 500 Maximum Size of the Interrupt Map, the interrupt element will not be inserted in the map if exceeded the size. oozie.service.CallableQueueService.InterruptTypes kill,resume,suspend,bundle_kill,bundle_resume,bundle_suspend,coord_kill,coord_change,coord_resume,coord_suspend Getting the types of XCommands that are considered to be of Interrupt type oozie.service.CoordMaterializeTriggerService.lookup.interval 300 Coordinator Job Lookup interval.(in seconds). oozie.service.CoordMaterializeTriggerService.materialization.window 3600 Coordinator Job Lookup command materialized each job for this next "window" duration oozie.service.CoordMaterializeTriggerService.callable.batch.size 10 This value determines the number of callable which will be batched together to be executed by a single thread. oozie.service.CoordMaterializeTriggerService.materialization.system.limit 50 This value determines the number of coordinator jobs to be materialized at a given time. 
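An illustrative tuning sketch for the callable queue described above; the numbers are arbitrary examples chosen only to show which knobs interact (queue depth, worker threads, per-type concurrency):

<configuration>
  <property>
    <name>oozie.service.CallableQueueService.queue.size</name>
    <value>20000</value>
  </property>
  <property>
    <name>oozie.service.CallableQueueService.threads</name>
    <value>20</value>
  </property>
  <property>
    <name>oozie.service.CallableQueueService.callable.concurrency</name>
    <value>5</value>
  </property>
</configuration>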
oozie.service.coord.normal.default.timeout 120 Default timeout for a coordinator action input check (in minutes) for normal job. -1 means infinite timeout oozie.service.coord.default.max.timeout 86400 Default maximum timeout for a coordinator action input check (in minutes). 86400= 60days oozie.service.coord.input.check.requeue.interval 60000 Command re-queue interval for coordinator data input check (in millisecond). oozie.service.coord.input.check.requeue.interval.additional.delay 0 This value (in seconds) will be added into oozie.service.coord.input.check.requeue.interval and resulting value will be the requeue interval for the actions which are waiting for a long time without any input. oozie.service.coord.push.check.requeue.interval 600000 Command re-queue interval for push dependencies (in millisecond). oozie.service.coord.default.concurrency 1 Default concurrency for a coordinator job to determine how many maximum action should be executed at the same time. -1 means infinite concurrency. oozie.service.coord.default.throttle 12 Default throttle for a coordinator job to determine how many maximum action should be in WAITING state at the same time. oozie.service.coord.materialization.throttling.factor 0.05 Determine how many maximum actions should be in WAITING state for a single job at any time. The value is calculated by this factor X the total queue size. oozie.service.coord.check.maximum.frequency true When true, Oozie will reject any coordinators with a frequency faster than 5 minutes. It is not recommended to disable this check or submit coordinators with frequencies faster than 5 minutes: doing so can cause unintended behavior and additional system stress. oozie.service.ELService.groups job-submit,workflow,wf-sla-submit,coord-job-submit-freq,coord-job-submit-nofuncs,coord-job-submit-data,coord-job-submit-instances,coord-sla-submit,coord-action-create,coord-action-create-inst,coord-sla-create,coord-action-start,coord-job-wait-timeout,bundle-submit,coord-job-submit-initial-instance List of groups for different ELServices oozie.service.ELService.constants.job-submit EL constant declarations, separated by commas, format is [PREFIX:]NAME=CLASS#CONSTANT. oozie.service.ELService.functions.job-submit EL functions declarations, separated by commas, format is [PREFIX:]NAME=CLASS#METHOD. oozie.service.ELService.ext.constants.job-submit EL constant declarations, separated by commas, format is [PREFIX:]NAME=CLASS#CONSTANT. This property is a convenience property to add extensions without having to include all the built in ones. oozie.service.ELService.ext.functions.job-submit EL functions declarations, separated by commas, format is [PREFIX:]NAME=CLASS#METHOD. This property is a convenience property to add extensions without having to include all the built in ones. 
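A sketch of overriding the coordinator defaults described above in oozie-site.xml; the values are placeholders:

<configuration>
  <!-- Raise the default input-check timeout from 120 minutes to 4 hours. -->
  <property>
    <name>oozie.service.coord.normal.default.timeout</name>
    <value>240</value>
  </property>
  <!-- Allow two actions of a coordinator job to run concurrently by default. -->
  <property>
    <name>oozie.service.coord.default.concurrency</name>
    <value>2</value>
  </property>
</configuration>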
oozie.service.ELService.constants.workflow KB=org.apache.oozie.util.ELConstantsFunctions#KB, MB=org.apache.oozie.util.ELConstantsFunctions#MB, GB=org.apache.oozie.util.ELConstantsFunctions#GB, TB=org.apache.oozie.util.ELConstantsFunctions#TB, PB=org.apache.oozie.util.ELConstantsFunctions#PB, RECORDS=org.apache.oozie.action.hadoop.HadoopELFunctions#RECORDS, MAP_IN=org.apache.oozie.action.hadoop.HadoopELFunctions#MAP_IN, MAP_OUT=org.apache.oozie.action.hadoop.HadoopELFunctions#MAP_OUT, REDUCE_IN=org.apache.oozie.action.hadoop.HadoopELFunctions#REDUCE_IN, REDUCE_OUT=org.apache.oozie.action.hadoop.HadoopELFunctions#REDUCE_OUT, GROUPS=org.apache.oozie.action.hadoop.HadoopELFunctions#GROUPS EL constant declarations, separated by commas, format is [PREFIX:]NAME=CLASS#CONSTANT. oozie.service.ELService.ext.constants.workflow EL constant declarations, separated by commas, format is [PREFIX:]NAME=CLASS#CONSTANT. This property is a convenience property to add extensions to the built in executors without having to include all the built in ones. oozie.service.ELService.functions.workflow firstNotNull=org.apache.oozie.util.ELConstantsFunctions#firstNotNull, concat=org.apache.oozie.util.ELConstantsFunctions#concat, replaceAll=org.apache.oozie.util.ELConstantsFunctions#replaceAll, appendAll=org.apache.oozie.util.ELConstantsFunctions#appendAll, trim=org.apache.oozie.util.ELConstantsFunctions#trim, timestamp=org.apache.oozie.util.ELConstantsFunctions#timestamp, urlEncode=org.apache.oozie.util.ELConstantsFunctions#urlEncode, toJsonStr=org.apache.oozie.util.ELConstantsFunctions#toJsonStr, toPropertiesStr=org.apache.oozie.util.ELConstantsFunctions#toPropertiesStr, toConfigurationStr=org.apache.oozie.util.ELConstantsFunctions#toConfigurationStr, wf:id=org.apache.oozie.DagELFunctions#wf_id, wf:name=org.apache.oozie.DagELFunctions#wf_name, wf:appPath=org.apache.oozie.DagELFunctions#wf_appPath, wf:conf=org.apache.oozie.DagELFunctions#wf_conf, wf:user=org.apache.oozie.DagELFunctions#wf_user, wf:group=org.apache.oozie.DagELFunctions#wf_group, wf:callback=org.apache.oozie.DagELFunctions#wf_callback, wf:transition=org.apache.oozie.DagELFunctions#wf_transition, wf:lastErrorNode=org.apache.oozie.DagELFunctions#wf_lastErrorNode, wf:errorCode=org.apache.oozie.DagELFunctions#wf_errorCode, wf:errorMessage=org.apache.oozie.DagELFunctions#wf_errorMessage, wf:run=org.apache.oozie.DagELFunctions#wf_run, wf:actionData=org.apache.oozie.DagELFunctions#wf_actionData, wf:actionExternalId=org.apache.oozie.DagELFunctions#wf_actionExternalId, wf:actionTrackerUri=org.apache.oozie.DagELFunctions#wf_actionTrackerUri, wf:actionExternalStatus=org.apache.oozie.DagELFunctions#wf_actionExternalStatus, hadoop:counters=org.apache.oozie.action.hadoop.HadoopELFunctions#hadoop_counters, hadoop:conf=org.apache.oozie.action.hadoop.HadoopELFunctions#hadoop_conf, fs:exists=org.apache.oozie.action.hadoop.FsELFunctions#fs_exists, fs:isDir=org.apache.oozie.action.hadoop.FsELFunctions#fs_isDir, fs:dirSize=org.apache.oozie.action.hadoop.FsELFunctions#fs_dirSize, fs:fileSize=org.apache.oozie.action.hadoop.FsELFunctions#fs_fileSize, fs:blockSize=org.apache.oozie.action.hadoop.FsELFunctions#fs_blockSize, hcat:exists=org.apache.oozie.coord.HCatELFunctions#hcat_exists EL functions declarations, separated by commas, format is [PREFIX:]NAME=CLASS#METHOD. 
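The ext.* companions of the workflow EL groups above exist so a deployment can register additional EL constants or functions without repeating the built-in lists. A purely hypothetical example (com.example.MyELFunctions and its reverse method are invented for illustration) using the documented [PREFIX:]NAME=CLASS#METHOD format:

<configuration>
  <property>
    <name>oozie.service.ELService.ext.functions.workflow</name>
    <!-- Placeholder class/method; the class must be on the Oozie server classpath. -->
    <value>myprefix:reverse=com.example.MyELFunctions#reverse</value>
  </property>
</configuration>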
oozie.service.WorkflowAppService.WorkflowDefinitionMaxLength 100000 The maximum length of the workflow definition in bytes An error will be reported if the length exceeds the given maximum oozie.service.ELService.ext.functions.workflow EL functions declarations, separated by commas, format is [PREFIX:]NAME=CLASS#METHOD. This property is a convenience property to add extensions to the built in executors without having to include all the built in ones. oozie.service.ELService.constants.wf-sla-submit MINUTES=org.apache.oozie.util.ELConstantsFunctions#SUBMIT_MINUTES, HOURS=org.apache.oozie.util.ELConstantsFunctions#SUBMIT_HOURS, DAYS=org.apache.oozie.util.ELConstantsFunctions#SUBMIT_DAYS EL constant declarations, separated by commas, format is [PREFIX:]NAME=CLASS#CONSTANT. oozie.service.ELService.ext.constants.wf-sla-submit EL constant declarations, separated by commas, format is [PREFIX:]NAME=CLASS#CONSTANT. This property is a convenience property to add extensions to the built in executors without having to include all the built in ones. oozie.service.ELService.functions.wf-sla-submit EL functions declarations, separated by commas, format is [PREFIX:]NAME=CLASS#METHOD. oozie.service.ELService.ext.functions.wf-sla-submit EL functions declarations, separated by commas, format is [PREFIX:]NAME=CLASS#METHOD. This property is a convenience property to add extensions to the built in executors without having to include all the built in ones. l oozie.service.ELService.constants.coord-job-submit-freq EL constant declarations, separated by commas, format is [PREFIX:]NAME=CLASS#CONSTANT. oozie.service.ELService.ext.constants.coord-job-submit-freq EL constant declarations, separated by commas, format is [PREFIX:]NAME=CLASS#CONSTANT. This property is a convenience property to add extensions to the built in executors without having to include all the built in ones. oozie.service.ELService.functions.coord-job-submit-freq coord:days=org.apache.oozie.coord.CoordELFunctions#ph1_coord_days, coord:months=org.apache.oozie.coord.CoordELFunctions#ph1_coord_months, coord:hours=org.apache.oozie.coord.CoordELFunctions#ph1_coord_hours, coord:minutes=org.apache.oozie.coord.CoordELFunctions#ph1_coord_minutes, coord:endOfDays=org.apache.oozie.coord.CoordELFunctions#ph1_coord_endOfDays, coord:endOfMonths=org.apache.oozie.coord.CoordELFunctions#ph1_coord_endOfMonths, coord:conf=org.apache.oozie.coord.CoordELFunctions#coord_conf, coord:user=org.apache.oozie.coord.CoordELFunctions#coord_user, hadoop:conf=org.apache.oozie.action.hadoop.HadoopELFunctions#hadoop_conf EL functions declarations, separated by commas, format is [PREFIX:]NAME=CLASS#METHOD. oozie.service.ELService.functions.coord-job-submit-initial-instance ${oozie.service.ELService.functions.coord-job-submit-nofuncs}, coord:dateOffset=org.apache.oozie.coord.CoordELFunctions#ph2_coord_dateOffset, coord:dateTzOffset=org.apache.oozie.coord.CoordELFunctions#ph2_coord_dateTzOffset EL functions for coord job submit initial instance, separated by commas, format is [PREFIX:]NAME=CLASS#METHOD. oozie.service.ELService.ext.functions.coord-job-submit-freq EL functions declarations, separated by commas, format is [PREFIX:]NAME=CLASS#METHOD. This property is a convenience property to add extensions to the built in executors without having to include all the built in ones. oozie.service.ELService.constants.coord-job-wait-timeout EL functions declarations, separated by commas, format is [PREFIX:]NAME=CLASS#METHOD. 
oozie.service.ELService.ext.constants.coord-job-wait-timeout EL functions declarations, separated by commas, format is [PREFIX:]NAME=CLASS#METHOD. This property is a convenience property to add extensions without having to include all the built in ones. oozie.service.ELService.functions.coord-job-wait-timeout coord:days=org.apache.oozie.coord.CoordELFunctions#ph1_coord_days, coord:months=org.apache.oozie.coord.CoordELFunctions#ph1_coord_months, coord:hours=org.apache.oozie.coord.CoordELFunctions#ph1_coord_hours, coord:minutes=org.apache.oozie.coord.CoordELFunctions#ph1_coord_minutes, hadoop:conf=org.apache.oozie.action.hadoop.HadoopELFunctions#hadoop_conf EL functions declarations, separated by commas, format is [PREFIX:]NAME=CLASS#METHOD. oozie.service.ELService.ext.functions.coord-job-wait-timeout EL functions declarations, separated by commas, format is [PREFIX:]NAME=CLASS#METHOD. This property is a convenience property to add extensions without having to include all the built in ones. oozie.service.ELService.constants.coord-job-submit-nofuncs MINUTE=org.apache.oozie.coord.CoordELConstants#SUBMIT_MINUTE, HOUR=org.apache.oozie.coord.CoordELConstants#SUBMIT_HOUR, DAY=org.apache.oozie.coord.CoordELConstants#SUBMIT_DAY, MONTH=org.apache.oozie.coord.CoordELConstants#SUBMIT_MONTH, YEAR=org.apache.oozie.coord.CoordELConstants#SUBMIT_YEAR EL constant declarations, separated by commas, format is [PREFIX:]NAME=CLASS#CONSTANT. oozie.service.ELService.ext.constants.coord-job-submit-nofuncs EL constant declarations, separated by commas, format is [PREFIX:]NAME=CLASS#CONSTANT. This property is a convenience property to add extensions to the built in executors without having to include all the built in ones. oozie.service.ELService.functions.coord-job-submit-nofuncs coord:conf=org.apache.oozie.coord.CoordELFunctions#coord_conf, coord:user=org.apache.oozie.coord.CoordELFunctions#coord_user, hadoop:conf=org.apache.oozie.action.hadoop.HadoopELFunctions#hadoop_conf EL functions declarations, separated by commas, format is [PREFIX:]NAME=CLASS#METHOD. oozie.service.ELService.ext.functions.coord-job-submit-nofuncs EL functions declarations, separated by commas, format is [PREFIX:]NAME=CLASS#METHOD. This property is a convenience property to add extensions to the built in executors without having to include all the built in ones. oozie.service.ELService.constants.coord-job-submit-instances EL constant declarations, separated by commas, format is [PREFIX:]NAME=CLASS#CONSTANT. oozie.service.ELService.ext.constants.coord-job-submit-instances EL constant declarations, separated by commas, format is [PREFIX:]NAME=CLASS#CONSTANT. This property is a convenience property to add extensions to the built in executors without having to include all the built in ones. 
oozie.service.ELService.functions.coord-job-submit-instances coord:hoursInDay=org.apache.oozie.coord.CoordELFunctions#ph1_coord_hoursInDay_echo, coord:daysInMonth=org.apache.oozie.coord.CoordELFunctions#ph1_coord_daysInMonth_echo, coord:tzOffset=org.apache.oozie.coord.CoordELFunctions#ph1_coord_tzOffset_echo, coord:current=org.apache.oozie.coord.CoordELFunctions#ph1_coord_current_echo, coord:currentRange=org.apache.oozie.coord.CoordELFunctions#ph1_coord_currentRange_echo, coord:offset=org.apache.oozie.coord.CoordELFunctions#ph1_coord_offset_echo, coord:latest=org.apache.oozie.coord.CoordELFunctions#ph1_coord_latest_echo, coord:latestRange=org.apache.oozie.coord.CoordELFunctions#ph1_coord_latestRange_echo, coord:future=org.apache.oozie.coord.CoordELFunctions#ph1_coord_future_echo, coord:futureRange=org.apache.oozie.coord.CoordELFunctions#ph1_coord_futureRange_echo, coord:formatTime=org.apache.oozie.coord.CoordELFunctions#ph1_coord_formatTime_echo, coord:epochTime=org.apache.oozie.coord.CoordELFunctions#ph1_coord_epochTime_echo, coord:conf=org.apache.oozie.coord.CoordELFunctions#coord_conf, coord:user=org.apache.oozie.coord.CoordELFunctions#coord_user, coord:absolute=org.apache.oozie.coord.CoordELFunctions#ph1_coord_absolute_echo, hadoop:conf=org.apache.oozie.action.hadoop.HadoopELFunctions#hadoop_conf, coord:dateOffset=org.apache.oozie.coord.CoordELFunctions#ph1_coord_dateOffset_echo, coord:dateTzOffset=org.apache.oozie.coord.CoordELFunctions#ph1_coord_dateTzOffset_echo EL functions declarations, separated by commas, format is [PREFIX:]NAME=CLASS#METHOD. oozie.service.ELService.ext.functions.coord-job-submit-instances EL functions declarations, separated by commas, format is [PREFIX:]NAME=CLASS#METHOD. This property is a convenience property to add extensions to the built in executors without having to include all the built in ones. oozie.service.ELService.constants.coord-job-submit-data EL constant declarations, separated by commas, format is [PREFIX:]NAME=CLASS#CONSTANT. oozie.service.ELService.ext.constants.coord-job-submit-data EL constant declarations, separated by commas, format is [PREFIX:]NAME=CLASS#CONSTANT. This property is a convenience property to add extensions to the built in executors without having to include all the built in ones. 
oozie.service.ELService.functions.coord-job-submit-data coord:dataIn=org.apache.oozie.coord.CoordELFunctions#ph1_coord_dataIn_echo, coord:dataOut=org.apache.oozie.coord.CoordELFunctions#ph1_coord_dataOut_echo, coord:nominalTime=org.apache.oozie.coord.CoordELFunctions#ph1_coord_nominalTime_echo_wrap, coord:actualTime=org.apache.oozie.coord.CoordELFunctions#ph1_coord_actualTime_echo_wrap, coord:dateOffset=org.apache.oozie.coord.CoordELFunctions#ph1_coord_dateOffset_echo, coord:dateTzOffset=org.apache.oozie.coord.CoordELFunctions#ph1_coord_dateTzOffset_echo, coord:formatTime=org.apache.oozie.coord.CoordELFunctions#ph1_coord_formatTime_echo, coord:epochTime=org.apache.oozie.coord.CoordELFunctions#ph1_coord_epochTime_echo, coord:actionId=org.apache.oozie.coord.CoordELFunctions#ph1_coord_actionId_echo, coord:name=org.apache.oozie.coord.CoordELFunctions#ph1_coord_name_echo, coord:conf=org.apache.oozie.coord.CoordELFunctions#coord_conf, coord:user=org.apache.oozie.coord.CoordELFunctions#coord_user, coord:databaseIn=org.apache.oozie.coord.HCatELFunctions#ph1_coord_databaseIn_echo, coord:databaseOut=org.apache.oozie.coord.HCatELFunctions#ph1_coord_databaseOut_echo, coord:tableIn=org.apache.oozie.coord.HCatELFunctions#ph1_coord_tableIn_echo, coord:tableOut=org.apache.oozie.coord.HCatELFunctions#ph1_coord_tableOut_echo, coord:dataInPartitionFilter=org.apache.oozie.coord.HCatELFunctions#ph1_coord_dataInPartitionFilter_echo, coord:dataInPartitionMin=org.apache.oozie.coord.HCatELFunctions#ph1_coord_dataInPartitionMin_echo, coord:dataInPartitionMax=org.apache.oozie.coord.HCatELFunctions#ph1_coord_dataInPartitionMax_echo, coord:dataInPartitions=org.apache.oozie.coord.HCatELFunctions#ph1_coord_dataInPartitions_echo, coord:dataOutPartitions=org.apache.oozie.coord.HCatELFunctions#ph1_coord_dataOutPartitions_echo, coord:dataOutPartitionValue=org.apache.oozie.coord.HCatELFunctions#ph1_coord_dataOutPartitionValue_echo, hadoop:conf=org.apache.oozie.action.hadoop.HadoopELFunctions#hadoop_conf EL functions declarations, separated by commas, format is [PREFIX:]NAME=CLASS#METHOD. oozie.service.ELService.ext.functions.coord-job-submit-data EL functions declarations, separated by commas, format is [PREFIX:]NAME=CLASS#METHOD. This property is a convenience property to add extensions to the built in executors without having to include all the built in ones. oozie.service.ELService.constants.coord-sla-submit MINUTES=org.apache.oozie.coord.CoordELConstants#SUBMIT_MINUTES, HOURS=org.apache.oozie.coord.CoordELConstants#SUBMIT_HOURS, DAYS=org.apache.oozie.coord.CoordELConstants#SUBMIT_DAYS EL constant declarations, separated by commas, format is [PREFIX:]NAME=CLASS#CONSTANT. oozie.service.ELService.ext.constants.coord-sla-submit EL constant declarations, separated by commas, format is [PREFIX:]NAME=CLASS#CONSTANT. This property is a convenience property to add extensions to the built in executors without having to include all the built in ones. 
oozie.service.ELService.functions.bundle-submit bundle:conf=org.apache.oozie.bundle.BundleELFunctions#bundle_conf oozie.service.ELService.functions.coord-sla-submit coord:dataOut=org.apache.oozie.coord.CoordELFunctions#ph1_coord_dataOut_echo, coord:nominalTime=org.apache.oozie.coord.CoordELFunctions#ph1_coord_nominalTime_echo_fixed, coord:actualTime=org.apache.oozie.coord.CoordELFunctions#ph1_coord_actualTime_echo_wrap, coord:dateOffset=org.apache.oozie.coord.CoordELFunctions#ph1_coord_dateOffset_echo, coord:dateTzOffset=org.apache.oozie.coord.CoordELFunctions#ph1_coord_dateTzOffset_echo, coord:formatTime=org.apache.oozie.coord.CoordELFunctions#ph1_coord_formatTime_echo, coord:epochTime=org.apache.oozie.coord.CoordELFunctions#ph1_coord_epochTime_echo, coord:actionId=org.apache.oozie.coord.CoordELFunctions#ph1_coord_actionId_echo, coord:name=org.apache.oozie.coord.CoordELFunctions#ph1_coord_name_echo, coord:conf=org.apache.oozie.coord.CoordELFunctions#coord_conf, coord:user=org.apache.oozie.coord.CoordELFunctions#coord_user, coord:databaseOut=org.apache.oozie.coord.HCatELFunctions#ph1_coord_databaseOut_echo, coord:tableOut=org.apache.oozie.coord.HCatELFunctions#ph1_coord_tableOut_echo, coord:dataOutPartitions=org.apache.oozie.coord.HCatELFunctions#ph1_coord_dataOutPartitions_echo, coord:dataOutPartitionValue=org.apache.oozie.coord.HCatELFunctions#ph1_coord_dataOutPartitionValue_echo, hadoop:conf=org.apache.oozie.action.hadoop.HadoopELFunctions#hadoop_conf EL functions declarations, separated by commas, format is [PREFIX:]NAME=CLASS#METHOD. oozie.service.ELService.ext.functions.coord-sla-submit EL functions declarations, separated by commas, format is [PREFIX:]NAME=CLASS#METHOD. This property is a convenience property to add extensions to the built in executors without having to include all the built in ones. oozie.service.ELService.constants.coord-action-create EL constant declarations, separated by commas, format is [PREFIX:]NAME=CLASS#CONSTANT. oozie.service.ELService.ext.constants.coord-action-create EL constant declarations, separated by commas, format is [PREFIX:]NAME=CLASS#CONSTANT. This property is a convenience property to add extensions to the built in executors without having to include all the built in ones. 
oozie.service.ELService.functions.coord-action-create coord:hoursInDay=org.apache.oozie.coord.CoordELFunctions#ph2_coord_hoursInDay, coord:daysInMonth=org.apache.oozie.coord.CoordELFunctions#ph2_coord_daysInMonth, coord:tzOffset=org.apache.oozie.coord.CoordELFunctions#ph2_coord_tzOffset, coord:current=org.apache.oozie.coord.CoordELFunctions#ph2_coord_current, coord:currentRange=org.apache.oozie.coord.CoordELFunctions#ph2_coord_currentRange, coord:offset=org.apache.oozie.coord.CoordELFunctions#ph2_coord_offset, coord:latest=org.apache.oozie.coord.CoordELFunctions#ph2_coord_latest_echo, coord:latestRange=org.apache.oozie.coord.CoordELFunctions#ph2_coord_latestRange_echo, coord:future=org.apache.oozie.coord.CoordELFunctions#ph2_coord_future_echo, coord:futureRange=org.apache.oozie.coord.CoordELFunctions#ph2_coord_futureRange_echo, coord:actionId=org.apache.oozie.coord.CoordELFunctions#ph2_coord_actionId, coord:name=org.apache.oozie.coord.CoordELFunctions#ph2_coord_name, coord:formatTime=org.apache.oozie.coord.CoordELFunctions#ph2_coord_formatTime, coord:epochTime=org.apache.oozie.coord.CoordELFunctions#ph2_coord_epochTime, coord:conf=org.apache.oozie.coord.CoordELFunctions#coord_conf, coord:user=org.apache.oozie.coord.CoordELFunctions#coord_user, coord:absolute=org.apache.oozie.coord.CoordELFunctions#ph2_coord_absolute_echo, coord:absoluteRange=org.apache.oozie.coord.CoordELFunctions#ph2_coord_absolute_range, hadoop:conf=org.apache.oozie.action.hadoop.HadoopELFunctions#hadoop_conf EL functions declarations, separated by commas, format is [PREFIX:]NAME=CLASS#METHOD. oozie.service.ELService.ext.functions.coord-action-create EL functions declarations, separated by commas, format is [PREFIX:]NAME=CLASS#METHOD. This property is a convenience property to add extensions to the built in executors without having to include all the built in ones. oozie.service.ELService.constants.coord-action-create-inst EL constant declarations, separated by commas, format is [PREFIX:]NAME=CLASS#CONSTANT. oozie.service.ELService.ext.constants.coord-action-create-inst EL constant declarations, separated by commas, format is [PREFIX:]NAME=CLASS#CONSTANT. This property is a convenience property to add extensions to the built in executors without having to include all the built in ones. 
oozie.service.ELService.functions.coord-action-create-inst coord:hoursInDay=org.apache.oozie.coord.CoordELFunctions#ph2_coord_hoursInDay, coord:daysInMonth=org.apache.oozie.coord.CoordELFunctions#ph2_coord_daysInMonth, coord:tzOffset=org.apache.oozie.coord.CoordELFunctions#ph2_coord_tzOffset, coord:current=org.apache.oozie.coord.CoordELFunctions#ph2_coord_current_echo, coord:currentRange=org.apache.oozie.coord.CoordELFunctions#ph2_coord_currentRange_echo, coord:offset=org.apache.oozie.coord.CoordELFunctions#ph2_coord_offset_echo, coord:latest=org.apache.oozie.coord.CoordELFunctions#ph2_coord_latest_echo, coord:latestRange=org.apache.oozie.coord.CoordELFunctions#ph2_coord_latestRange_echo, coord:future=org.apache.oozie.coord.CoordELFunctions#ph2_coord_future_echo, coord:futureRange=org.apache.oozie.coord.CoordELFunctions#ph2_coord_futureRange_echo, coord:formatTime=org.apache.oozie.coord.CoordELFunctions#ph2_coord_formatTime, coord:epochTime=org.apache.oozie.coord.CoordELFunctions#ph2_coord_epochTime, coord:conf=org.apache.oozie.coord.CoordELFunctions#coord_conf, coord:user=org.apache.oozie.coord.CoordELFunctions#coord_user, coord:absolute=org.apache.oozie.coord.CoordELFunctions#ph2_coord_absolute_echo, coord:absoluteRange=org.apache.oozie.coord.CoordELFunctions#ph2_coord_absolute_range, hadoop:conf=org.apache.oozie.action.hadoop.HadoopELFunctions#hadoop_conf, coord:dateOffset=org.apache.oozie.coord.CoordELFunctions#ph2_coord_dateOffset, coord:dateTzOffset=org.apache.oozie.coord.CoordELFunctions#ph2_coord_dateTzOffset EL functions declarations, separated by commas, format is [PREFIX:]NAME=CLASS#METHOD. oozie.service.ELService.ext.functions.coord-action-create-inst EL functions declarations, separated by commas, format is [PREFIX:]NAME=CLASS#METHOD. This property is a convenience property to add extensions to the built in executors without having to include all the built in ones. oozie.service.ELService.constants.coord-sla-create EL constant declarations, separated by commas, format is [PREFIX:]NAME=CLASS#CONSTANT. oozie.service.ELService.ext.constants.coord-sla-create MINUTES=org.apache.oozie.coord.CoordELConstants#SUBMIT_MINUTES, HOURS=org.apache.oozie.coord.CoordELConstants#SUBMIT_HOURS, DAYS=org.apache.oozie.coord.CoordELConstants#SUBMIT_DAYS EL constant declarations, separated by commas, format is [PREFIX:]NAME=CLASS#CONSTANT. This property is a convenience property to add extensions to the built in executors without having to include all the built in ones. 
oozie.service.ELService.functions.coord-sla-create coord:dataOut=org.apache.oozie.coord.CoordELFunctions#ph3_coord_dataOut, coord:nominalTime=org.apache.oozie.coord.CoordELFunctions#ph2_coord_nominalTime, coord:actualTime=org.apache.oozie.coord.CoordELFunctions#ph2_coord_actualTime, coord:dateOffset=org.apache.oozie.coord.CoordELFunctions#ph2_coord_dateOffset, coord:dateTzOffset=org.apache.oozie.coord.CoordELFunctions#ph2_coord_dateTzOffset, coord:formatTime=org.apache.oozie.coord.CoordELFunctions#ph2_coord_formatTime, coord:epochTime=org.apache.oozie.coord.CoordELFunctions#ph2_coord_epochTime, coord:actionId=org.apache.oozie.coord.CoordELFunctions#ph2_coord_actionId, coord:name=org.apache.oozie.coord.CoordELFunctions#ph2_coord_name, coord:conf=org.apache.oozie.coord.CoordELFunctions#coord_conf, coord:user=org.apache.oozie.coord.CoordELFunctions#coord_user, coord:databaseOut=org.apache.oozie.coord.HCatELFunctions#ph3_coord_databaseOut, coord:tableOut=org.apache.oozie.coord.HCatELFunctions#ph3_coord_tableOut, coord:dataOutPartitions=org.apache.oozie.coord.HCatELFunctions#ph3_coord_dataOutPartitions, coord:dataOutPartitionValue=org.apache.oozie.coord.HCatELFunctions#ph3_coord_dataOutPartitionValue, hadoop:conf=org.apache.oozie.action.hadoop.HadoopELFunctions#hadoop_conf EL functions declarations, separated by commas, format is [PREFIX:]NAME=CLASS#METHOD. oozie.service.ELService.ext.functions.coord-sla-create EL functions declarations, separated by commas, format is [PREFIX:]NAME=CLASS#METHOD. This property is a convenience property to add extensions to the built in executors without having to include all the built in ones. oozie.service.ELService.constants.coord-action-start EL constant declarations, separated by commas, format is [PREFIX:]NAME=CLASS#CONSTANT. oozie.service.ELService.ext.constants.coord-action-start EL constant declarations, separated by commas, format is [PREFIX:]NAME=CLASS#CONSTANT. This property is a convenience property to add extensions to the built in executors without having to include all the built in ones. 
oozie.service.ELService.functions.coord-action-start coord:hoursInDay=org.apache.oozie.coord.CoordELFunctions#ph3_coord_hoursInDay, coord:daysInMonth=org.apache.oozie.coord.CoordELFunctions#ph3_coord_daysInMonth, coord:tzOffset=org.apache.oozie.coord.CoordELFunctions#ph3_coord_tzOffset, coord:latest=org.apache.oozie.coord.CoordELFunctions#ph3_coord_latest, coord:latestRange=org.apache.oozie.coord.CoordELFunctions#ph3_coord_latestRange, coord:future=org.apache.oozie.coord.CoordELFunctions#ph3_coord_future, coord:futureRange=org.apache.oozie.coord.CoordELFunctions#ph3_coord_futureRange, coord:dataIn=org.apache.oozie.coord.CoordELFunctions#ph3_coord_dataIn, coord:dataOut=org.apache.oozie.coord.CoordELFunctions#ph3_coord_dataOut, coord:nominalTime=org.apache.oozie.coord.CoordELFunctions#ph3_coord_nominalTime, coord:actualTime=org.apache.oozie.coord.CoordELFunctions#ph3_coord_actualTime, coord:dateOffset=org.apache.oozie.coord.CoordELFunctions#ph3_coord_dateOffset, coord:dateTzOffset=org.apache.oozie.coord.CoordELFunctions#ph3_coord_dateTzOffset, coord:formatTime=org.apache.oozie.coord.CoordELFunctions#ph3_coord_formatTime, coord:epochTime=org.apache.oozie.coord.CoordELFunctions#ph3_coord_epochTime, coord:actionId=org.apache.oozie.coord.CoordELFunctions#ph3_coord_actionId, coord:name=org.apache.oozie.coord.CoordELFunctions#ph3_coord_name, coord:conf=org.apache.oozie.coord.CoordELFunctions#coord_conf, coord:user=org.apache.oozie.coord.CoordELFunctions#coord_user, coord:databaseIn=org.apache.oozie.coord.HCatELFunctions#ph3_coord_databaseIn, coord:databaseOut=org.apache.oozie.coord.HCatELFunctions#ph3_coord_databaseOut, coord:tableIn=org.apache.oozie.coord.HCatELFunctions#ph3_coord_tableIn, coord:tableOut=org.apache.oozie.coord.HCatELFunctions#ph3_coord_tableOut, coord:dataInPartitionFilter=org.apache.oozie.coord.HCatELFunctions#ph3_coord_dataInPartitionFilter, coord:dataInPartitionMin=org.apache.oozie.coord.HCatELFunctions#ph3_coord_dataInPartitionMin, coord:dataInPartitionMax=org.apache.oozie.coord.HCatELFunctions#ph3_coord_dataInPartitionMax, coord:dataInPartitions=org.apache.oozie.coord.HCatELFunctions#ph3_coord_dataInPartitions, coord:dataOutPartitions=org.apache.oozie.coord.HCatELFunctions#ph3_coord_dataOutPartitions, coord:dataOutPartitionValue=org.apache.oozie.coord.HCatELFunctions#ph3_coord_dataOutPartitionValue, hadoop:conf=org.apache.oozie.action.hadoop.HadoopELFunctions#hadoop_conf EL functions declarations, separated by commas, format is [PREFIX:]NAME=CLASS#METHOD. oozie.service.ELService.ext.functions.coord-action-start EL functions declarations, separated by commas, format is [PREFIX:]NAME=CLASS#METHOD. This property is a convenience property to add extensions to the built in executors without having to include all the built in ones. oozie.service.ELService.latest-el.use-current-time false Determine whether to use the current time to determine the latest dependency or the action creation time. This is for backward compatibility with older oozie behaviour. oozie.service.UUIDService.generator counter random : generated UUIDs will be random strings. counter: generated UUIDs generated will be a counter postfixed with the system startup time. oozie.service.DBLiteWorkflowStoreService.status.metrics.collection.interval 5 Workflow Status metrics collection interval in minutes. oozie.service.DBLiteWorkflowStoreService.status.metrics.window 3600 Workflow Status metrics collection window in seconds. Workflow status will be instrumented for the window. 
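The oozie.service.ELService.ext.functions.* properties above are the extension point for custom EL functions. As a minimal sketch (the class and method names here are hypothetical, not part of Oozie), an oozie-site.xml entry registering one extra function for the coord-action-start phase could look like:

<property>
  <name>oozie.service.ELService.ext.functions.coord-action-start</name>
  <!-- Hypothetical example; format is [PREFIX:]NAME=CLASS#METHOD, comma separated -->
  <value>coord:myFunc=com.example.oozie.CustomELFunctions#my_func</value>
</property>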
oozie.db.schema.name oozie Oozie database name. oozie.service.JPAService.create.db.schema false Creates Oozie DB. If set to true, it creates the DB schema if it does not exist. If the DB schema exists, it is a NOP. If set to false, it does not create the DB schema. If the DB schema does not exist, it fails start up. oozie.service.JPAService.validate.db.connection true Validates DB connections from the DB connection pool. If the 'oozie.service.JPAService.create.db.schema' property is set to true, this property is ignored. oozie.service.JPAService.validate.db.connection.eviction.interval 300000 Validates DB connections from the DB connection pool. When validate db connection 'TestWhileIdle' is true, the number of milliseconds to sleep between runs of the idle object evictor thread. oozie.service.JPAService.validate.db.connection.eviction.num 10 Validates DB connections from the DB connection pool. When validate db connection 'TestWhileIdle' is true, the number of objects to examine during each run of the idle object evictor thread. oozie.service.JPAService.connection.data.source org.apache.commons.dbcp.BasicDataSource DataSource to be used for connection pooling. oozie.service.JPAService.connection.properties DataSource connection properties. oozie.service.JPAService.jdbc.driver org.apache.derby.jdbc.EmbeddedDriver JDBC driver class. oozie.service.JPAService.jdbc.url jdbc:derby:${oozie.data.dir}/${oozie.db.schema.name}-db;create=true JDBC URL. oozie.service.JPAService.jdbc.username sa DB user name. oozie.service.JPAService.jdbc.password DB user password. IMPORTANT: if the password is empty, leave a 1-space string; the service trims the value, and if empty, Configuration assumes it is NULL. IMPORTANT: if the StoreServicePasswordService is active, it will reset this value with the value given in the console. oozie.service.JPAService.pool.max.active.conn 10 Max number of connections. oozie.service.JPAService.openjpa.BrokerImpl non-finalizing The default OpenJPAEntityManager implementation automatically closes itself during instance finalization. This guards against accidental resource leaks that may occur if a developer fails to explicitly close EntityManagers when finished with them, but it also incurs a scalability bottleneck, since the JVM must perform synchronization during instance creation, and since the finalizer thread will have more instances to monitor. To avoid this overhead, set the openjpa.BrokerImpl configuration property to non-finalizing. To use the default implementation, set it to an empty space. oozie.service.SchemaService.wf.schemas oozie-workflow-0.1.xsd,oozie-workflow-0.2.xsd,oozie-workflow-0.2.5.xsd,oozie-workflow-0.3.xsd,oozie-workflow-0.4.xsd, oozie-workflow-0.4.5.xsd,oozie-workflow-0.5.xsd, shell-action-0.1.xsd,shell-action-0.2.xsd,shell-action-0.3.xsd, email-action-0.1.xsd,email-action-0.2.xsd, hive-action-0.2.xsd,hive-action-0.3.xsd,hive-action-0.4.xsd,hive-action-0.5.xsd,hive-action-0.6.xsd, sqoop-action-0.2.xsd,sqoop-action-0.3.xsd,sqoop-action-0.4.xsd, ssh-action-0.1.xsd,ssh-action-0.2.xsd, distcp-action-0.1.xsd,distcp-action-0.2.xsd, oozie-sla-0.1.xsd,oozie-sla-0.2.xsd, hive2-action-0.1.xsd, hive2-action-0.2.xsd, spark-action-0.1.xsd,spark-action-0.2.xsd List of schemas for workflows (separated by commas). oozie.service.SchemaService.wf.ext.schemas List of additional schemas for workflows (separated by commas).
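The oozie.service.JPAService.jdbc.* properties above default to the embedded Derby database. A minimal sketch of an oozie-site.xml override pointing Oozie at an external MySQL server instead (the host name, port, and credentials below are placeholder assumptions, not defaults):

<property>
  <name>oozie.service.JPAService.jdbc.driver</name>
  <value>com.mysql.jdbc.Driver</value>
</property>
<property>
  <name>oozie.service.JPAService.jdbc.url</name>
  <!-- Assumed host; reuses the oozie.db.schema.name property for the schema -->
  <value>jdbc:mysql://db.example.com:3306/${oozie.db.schema.name}</value>
</property>
<property>
  <name>oozie.service.JPAService.jdbc.username</name>
  <value>oozie</value>
</property>
<property>
  <name>oozie.service.JPAService.jdbc.password</name>
  <value>oozie-password</value>
</property>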
oozie.service.SchemaService.coord.schemas oozie-coordinator-0.1.xsd,oozie-coordinator-0.2.xsd,oozie-coordinator-0.3.xsd,oozie-coordinator-0.4.xsd, oozie-coordinator-0.5.xsd,oozie-sla-0.1.xsd,oozie-sla-0.2.xsd List of schemas for coordinators (separated by commas). oozie.service.SchemaService.coord.ext.schemas List of additional schemas for coordinators (separated by commas). oozie.service.SchemaService.bundle.schemas oozie-bundle-0.1.xsd,oozie-bundle-0.2.xsd List of schemas for bundles (separated by commas). oozie.service.SchemaService.bundle.ext.schemas List of additional schemas for bundles (separated by commas). oozie.service.SchemaService.sla.schemas gms-oozie-sla-0.1.xsd,oozie-sla-0.2.xsd List of schemas for semantic validation for GMS SLA (separated by commas). oozie.service.SchemaService.sla.ext.schemas List of additional schemas for semantic validation for GMS SLA (separated by commas). oozie.service.CallbackService.base.url ${oozie.base.url}/callback Base callback URL used by ActionExecutors. oozie.service.CallbackService.early.requeue.max.retries 5 If Oozie receives a callback too early (while the action is in PREP state), it will requeue the command this many times to give the action time to transition to RUNNING. oozie.servlet.CallbackServlet.max.data.len 2048 Max size in characters for the action completion data output. oozie.external.stats.max.size -1 Max size in bytes for action stats. -1 means infinite value. oozie.JobCommand.job.console.url ${oozie.base.url}?job= Base console URL for a workflow job. oozie.service.ActionService.executor.classes org.apache.oozie.action.decision.DecisionActionExecutor, org.apache.oozie.action.hadoop.JavaActionExecutor, org.apache.oozie.action.hadoop.FsActionExecutor, org.apache.oozie.action.hadoop.MapReduceActionExecutor, org.apache.oozie.action.hadoop.PigActionExecutor, org.apache.oozie.action.hadoop.HiveActionExecutor, org.apache.oozie.action.hadoop.ShellActionExecutor, org.apache.oozie.action.hadoop.SqoopActionExecutor, org.apache.oozie.action.hadoop.DistcpActionExecutor, org.apache.oozie.action.hadoop.Hive2ActionExecutor, org.apache.oozie.action.ssh.SshActionExecutor, org.apache.oozie.action.oozie.SubWorkflowActionExecutor, org.apache.oozie.action.email.EmailActionExecutor, org.apache.oozie.action.hadoop.SparkActionExecutor List of ActionExecutors classes (separated by commas). Only action types with associated executors can be used in workflows. oozie.service.ActionService.executor.ext.classes List of ActionExecutors extension classes (separated by commas). Only action types with associated executors can be used in workflows. This property is a convenience property to add extensions to the built in executors without having to include all the built in ones. oozie.service.ActionCheckerService.action.check.interval 60 The frequency at which the ActionCheckService will run. oozie.service.ActionCheckerService.action.check.delay 600 The time, in seconds, between an ActionCheck for the same action. oozie.service.ActionCheckerService.callable.batch.size 10 This value determines the number of actions which will be batched together to be executed by a single thread. oozie.service.StatusTransitService.statusTransit.interval 60 The frequency in seconds at which the StatusTransitService will run. oozie.service.StatusTransitService.backward.support.for.coord.status false true, if coordinator job submits using 'uri:oozie:coordinator:0.1' and wants to keep Oozie 2.x status transit. if set true, 1. 
SUCCEEDED state in coordinator job means materialization done. 2. No DONEWITHERROR state in coordinator job 3. No PAUSED or PREPPAUSED state in coordinator job 4. PREPSUSPENDED becomes SUSPENDED in coordinator job oozie.service.StatusTransitService.backward.support.for.states.without.error true true, if you want to keep Oozie 3.2 status transit. Change it to false for Oozie 4.x releases. if set true, No states like RUNNINGWITHERROR, SUSPENDEDWITHERROR and PAUSEDWITHERROR for coordinator and bundle oozie.service.PauseTransitService.PauseTransit.interval 60 The frequency in seconds at which the PauseTransitService will run. oozie.action.max.output.data 2048 Max size in characters for output data. oozie.action.fs.glob.max 50000 Maximum number of globbed files. oozie.action.launcher.am.restart.kill.childjobs true Multiple instances of launcher jobs can happen due to RM non-work preserving recovery on RM restart, AM recovery due to crashes or AM network connectivity loss. This could also lead to orphaned child jobs of the old AM attempts leading to conflicting runs. This kills child jobs of previous attempts using YARN application tags. oozie.action.launcher.mapreduce.job.ubertask.enable true Enables Uber Mode for the launcher job in YARN/Hadoop 2 (no effect in Hadoop 1) for all action types by default. This can be overridden on a per-action-type basis by setting oozie.action.#action-type#.launcher.mapreduce.job.ubertask.enable in oozie-site.xml (where #action-type# is the action type; for example, "pig"). And that can be overridden on a per-action basis by setting oozie.launcher.mapreduce.job.ubertask.enable in an action's configuration section in a workflow. In summary, the priority is this: 1. action's configuration section in a workflow 2. oozie.action.#action-type#.launcher.mapreduce.job.ubertask.enable in oozie-site 3. oozie.action.launcher.mapreduce.job.ubertask.enable in oozie-site oozie.action.shell.launcher.mapreduce.job.ubertask.enable false The Shell action may have issues with the $PATH environment when using Uber Mode, and so Uber Mode is disabled by default for it. See oozie.action.launcher.mapreduce.job.ubertask.enable oozie.action.spark.setup.hadoop.conf.dir false Oozie action.xml (oozie.action.conf.xml) contains all the hadoop configuration and user provided configurations. This property will allow users to copy Oozie action.xml as hadoop *-site configurations files. The advantage is that users need not manage these files in the Spark sharelib. If users want to manage the hadoop configurations themselves, they should disable it. oozie.action.shell.setup.hadoop.conf.dir false The Shell action is commonly used to run programs that rely on HADOOP_CONF_DIR (e.g. hive, beeline, sqoop, etc). With YARN, HADOOP_CONF_DIR is set to the NodeManager's copies of Hadoop's *-site.xml files, which can be problematic because (a) they are meant for the NM, not necessarily clients, and (b) they won't have any of the configs that Oozie, or the user through Oozie, sets. When this property is set to true, the Shell action will prepare the *-site.xml files based on the correct config and set HADOOP_CONF_DIR to point to it. Setting it to false will make Oozie leave HADOOP_CONF_DIR alone. This can also be set at the Action level by putting it in the Shell Action's configuration section, which also takes priority. That all said, it's recommended to use the appropriate action type when possible.
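Following the priority order listed above, a per-action-type override for Uber Mode goes into oozie-site.xml by substituting the action type into the property name. For example, to disable it for pig actions only (the pattern is taken directly from the description above):

<property>
  <name>oozie.action.pig.launcher.mapreduce.job.ubertask.enable</name>
  <value>false</value>
</property>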
oozie.action.shell.setup.hadoop.conf.dir.write.log4j.properties true Toggle to control if a log4j.properties file should be written into the configuration directory prepared when oozie.action.shell.setup.hadoop.conf.dir is enabled. This is used to control logging behavior of log4j using commands run within the shell action script, and to ensure logging does not impact output data capture if leaked to stdout. Content of the written file is determined by the value of oozie.action.shell.setup.hadoop.conf.dir.log4j.content. oozie.action.shell.setup.hadoop.conf.dir.log4j.content log4j.rootLogger=${hadoop.root.logger} hadoop.root.logger=INFO,console log4j.appender.console=org.apache.log4j.ConsoleAppender log4j.appender.console.target=System.err log4j.appender.console.layout=org.apache.log4j.PatternLayout log4j.appender.console.layout.ConversionPattern=%d{yy/MM/dd HH:mm:ss} %p %c{2}: %m%n The value to write into a log4j.properties file under the config directory created when oozie.action.shell.setup.hadoop.conf.dir and oozie.action.shell.setup.hadoop.conf.dir.write.log4j.properties properties are both enabled. The values must be properly newline separated and in format expected by Log4J. Trailing and preceding whitespaces will be trimmed when reading this property. This is used to control logging behavior of log4j using commands run within the shell action script. oozie.action.launcher.yarn.timeline-service.enabled false Enables/disables getting delegation tokens for ATS for the launcher job in YARN/Hadoop 2.6 (no effect in Hadoop 1) for all action types by default if tez-site.xml is present in distributed cache. This can be overridden on a per-action basis by setting oozie.launcher.yarn.timeline-service.enabled in an action's configuration section in a workflow. oozie.action.rootlogger.log.level INFO Logging level for root logger oozie.action.retries.max 3 The number of retries for executing an action in case of failure oozie.action.retry.interval 10 The interval between retries of an action in case of failure oozie.action.retry.policy periodic Retry policy of an action in case of failure. Possible values are periodic/exponential oozie.action.ssh.delete.remote.tmp.dir true If set to true, it will delete the temporary directory at the end of execution of the ssh action. oozie.action.ssh.http.command curl Command to use for callback to oozie, normally 'curl' or 'wget'. The command must be available in the PATH environment variable of the USER@HOST box shell. oozie.action.ssh.http.command.post.options --data-binary @#stdout --request POST --header "content-type:text/plain" The callback command POST options. Used when the output of the ssh action is captured. oozie.action.ssh.allow.user.at.host true Specifies whether the user specified by the ssh action is allowed or is to be replaced by the Job user oozie.action.subworkflow.max.depth 50 The maximum depth for subworkflows. For example, if set to 3, then a workflow can start subwf1, which can start subwf2, which can start subwf3; but if subwf3 tries to start subwf4, then the action will fail. This is helpful in preventing errant workflows from starting infinitely recursive subworkflows. oozie.service.HadoopAccessorService.kerberos.enabled false Indicates if Oozie is configured to use Kerberos. local.realm LOCALHOST Kerberos Realm used by Oozie and Hadoop. Using 'local.realm' to be aligned with Hadoop configuration oozie.service.HadoopAccessorService.keytab.file ${user.home}/oozie.keytab Location of the Oozie user keytab file.
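A sketch of tuning the action retry behavior described above via oozie-site.xml; the values are illustrative assumptions, not recommendations:

<property>
  <name>oozie.action.retries.max</name>
  <value>5</value>
</property>
<property>
  <name>oozie.action.retry.interval</name>
  <value>30</value>
</property>
<property>
  <name>oozie.action.retry.policy</name>
  <!-- 'periodic' or 'exponential', per the description above -->
  <value>exponential</value>
</property>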
oozie.service.HadoopAccessorService.kerberos.principal ${user.name}/localhost@${local.realm} Kerberos principal for Oozie service. oozie.service.HadoopAccessorService.jobTracker.whitelist Whitelisted job tracker for Oozie service. oozie.service.HadoopAccessorService.nameNode.whitelist Whitelisted NameNode for Oozie service. oozie.service.HadoopAccessorService.hadoop.configurations *=hadoop-conf Comma separated AUTHORITY=HADOOP_CONF_DIR, where AUTHORITY is the HOST:PORT of the Hadoop service (JobTracker, YARN, HDFS). The wildcard '*' configuration is used when there is no exact match for an authority. The HADOOP_CONF_DIR contains the relevant Hadoop *-site.xml files. If the path is relative, it is looked up within the Oozie configuration directory, though the path can be absolute (i.e. pointing to Hadoop client conf/ directories in the local filesystem). oozie.service.HadoopAccessorService.action.configurations *=action-conf Comma separated AUTHORITY=ACTION_CONF_DIR, where AUTHORITY is the HOST:PORT of the Hadoop MapReduce service (JobTracker, YARN). The wildcard '*' configuration is used when there is no exact match for an authority. The ACTION_CONF_DIR may contain ACTION.xml files where ACTION is the action type ('java', 'map-reduce', 'pig', 'hive', 'sqoop', etc.). If the ACTION.xml file exists, its properties will be used as default properties for the action. If the path is relative, it is looked up within the Oozie configuration directory, though the path can be absolute (i.e. pointing to Hadoop client conf/ directories in the local filesystem). oozie.service.HadoopAccessorService.action.configurations.load.default.resources true true means that default and site xml files of hadoop (core-default, core-site, hdfs-default, hdfs-site, mapred-default, mapred-site, yarn-default, yarn-site) are parsed into actionConf on Oozie server. false means that site xml files are not loaded on server, instead loaded on launcher node. This is only done for pig and hive actions which handle loading those files automatically from the classpath on launcher task. It defaults to true. oozie.credentials.credentialclasses A list of credential class mapping for CredentialsProvider oozie.credentials.skip false This determines if Oozie should skip getting credentials from the credential providers. This can be overwritten at a job-level or action-level. oozie.actions.main.classnames distcp=org.apache.hadoop.tools.DistCp A list of class name mapping for Action classes oozie.service.WorkflowAppService.system.libpath /user/${user.name}/share/lib System library path to use for workflow applications. This path is added to workflow application if their job properties sets the property 'oozie.use.system.libpath' to true. oozie.command.default.lock.timeout 5000 Default timeout (in milliseconds) for commands for acquiring an exclusive lock on an entity. oozie.command.default.requeue.delay 10000 Default time (in milliseconds) for commands that are requeued for delayed execution. oozie.service.LiteWorkflowStoreService.user.retry.max 3 Automatic retry max count for workflow action; defaults to 3. oozie.service.LiteWorkflowStoreService.user.retry.inteval 10 Automatic retry interval for workflow action is in minutes and the default value is 10 minutes. oozie.service.LiteWorkflowStoreService.user.retry.policy periodic Automatic retry policy for workflow action. Possible values are periodic or exponential, periodic being the default.
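The AUTHORITY=HADOOP_CONF_DIR format described above allows per-cluster configuration directories. A sketch with assumed host names and directory names, keeping the wildcard entry as the fallback:

<property>
  <name>oozie.service.HadoopAccessorService.hadoop.configurations</name>
  <!-- Assumed authorities and conf dirs; '*' remains the fallback -->
  <value>*=hadoop-conf,nn1.example.com:8020=hadoop-conf-prod,rm1.example.com:8032=hadoop-conf-prod</value>
</property>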
oozie.service.LiteWorkflowStoreService.user.retry.error.code JA008,JA009,JA017,JA018,JA019,FS009,FS008,FS014 Automatic retry for workflow actions is handled for these specified error codes: FS009 and FS008 are file-exists errors when using chmod in the fs action. FS014 is a permission error in the fs action. JA018 is an output-directory-exists error in the workflow map-reduce action. JA019 is an error while executing the distcp action. JA017 is a job-not-exists error in the action executor. JA008 is a FileNotFoundException in the action executor. JA009 is an IOException in the action executor. ALL means any kind of error in the action executor. oozie.service.LiteWorkflowStoreService.user.retry.error.code.ext Automatic retry for workflow actions is handled for these specified extra error codes: ALL means any kind of error in the action executor. oozie.service.LiteWorkflowStoreService.node.def.version _oozie_inst_v_2 NodeDef default version, _oozie_inst_v_0, _oozie_inst_v_1 or _oozie_inst_v_2 oozie.authentication.type simple Defines authentication used for Oozie HTTP endpoint. Supported values are: simple | kerberos | #AUTHENTICATION_HANDLER_CLASSNAME# oozie.server.authentication.type ${oozie.authentication.type} Defines authentication used for Oozie server communicating to other Oozie server over HTTP(s). Supported values are: simple | kerberos | #AUTHENTICATOR_CLASSNAME# oozie.authentication.token.validity 36000 Indicates how long (in seconds) an authentication token is valid before it has to be renewed. oozie.authentication.cookie.domain The domain to use for the HTTP cookie that stores the authentication token. In order for authentication to work correctly across multiple hosts, the domain must be correctly set. oozie.authentication.simple.anonymous.allowed true Indicates if anonymous requests are allowed when using 'simple' authentication. oozie.authentication.kerberos.principal HTTP/localhost@${local.realm} Indicates the Kerberos principal to be used for HTTP endpoint. The principal MUST start with 'HTTP/' as per Kerberos HTTP SPNEGO specification. oozie.authentication.kerberos.keytab ${oozie.service.HadoopAccessorService.keytab.file} Location of the keytab file with the credentials for the principal. Referring to the same keytab file Oozie uses for its Kerberos credentials for Hadoop. oozie.authentication.kerberos.name.rules DEFAULT The kerberos name rules are used to resolve kerberos principal names; refer to Hadoop's KerberosName for more details. oozie.coord.execution.none.tolerance 1 Default time tolerance in minutes after action nominal time for an action to be skipped when execution order is "NONE" oozie.coord.actions.default.length 1000 Default number of coordinator actions to be retrieved by the info command oozie.validate.ForkJoin true If true, fork and join should be validated at wf submission time. oozie.workflow.parallel.fork.action.start true Determines how Oozie processes starting of forked actions. If true, forked actions and their job submissions are done in parallel, which is best for performance. If false, they are submitted sequentially. oozie.coord.action.get.all.attributes false Setting to true is not recommended as coord job/action info will bring all columns of the action into memory. Set it to true only if backward compatibility for action/job info is required. oozie.service.HadoopAccessorService.supported.filesystems hdfs,hftp,webhdfs Enlist the different filesystems supported for federation. If wildcard "*" is specified, then ALL file schemes will be allowed.
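Switching the HTTP endpoint from 'simple' to 'kerberos' authentication, per the oozie.authentication.* properties above, could look like the following sketch (the principal host and realm are placeholder assumptions):

<property>
  <name>oozie.authentication.type</name>
  <value>kerberos</value>
</property>
<property>
  <name>oozie.authentication.kerberos.principal</name>
  <!-- Must start with 'HTTP/' per the Kerberos HTTP SPNEGO specification -->
  <value>HTTP/oozie.example.com@EXAMPLE.COM</value>
</property>
<property>
  <name>oozie.authentication.kerberos.keytab</name>
  <value>${oozie.service.HadoopAccessorService.keytab.file}</value>
</property>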
oozie.service.URIHandlerService.uri.handlers org.apache.oozie.dependency.FSURIHandler Enlist the different uri handlers supported for data availability checks. oozie.notification.url.connection.timeout 10000 Defines the timeout, in milliseconds, for Oozie HTTP notification callbacks. Oozie does HTTP notifications for workflow jobs which set the 'oozie.wf.action.notification.url', 'oozie.wf.workflow.notification.url' and/or 'oozie.coord.action.notification.url' properties in their job.properties. Refer to section '5 Oozie Notifications' in the Workflow specification for details. oozie.hadoop-2.0.2-alpha.workaround.for.distributed.cache false Due to a bug in Hadoop 2.0.2-alpha, MAPREDUCE-4820, launcher jobs fail to set the distributed cache for the action job because the local JARs are implicitly included triggering a duplicate check. This flag removes the distributed cache files for the action as they'll be included from the local JARs of the JobClient (MRApps) submitting the action job from the launcher. oozie.service.EventHandlerService.filter.app.types workflow_job, coordinator_action The app-types among workflow/coordinator/bundle job/action for which the events system is enabled. oozie.service.EventHandlerService.event.queue org.apache.oozie.event.MemoryEventQueue The implementation for EventQueue in use by the EventHandlerService. oozie.service.EventHandlerService.event.listeners org.apache.oozie.jms.JMSJobEventListener oozie.service.EventHandlerService.queue.size 10000 Maximum number of events to be contained in the event queue. oozie.service.EventHandlerService.worker.interval 30 The default interval (seconds) at which the worker threads will be scheduled to run and process events. oozie.service.EventHandlerService.batch.size 10 The batch size for batched draining per thread from the event queue. oozie.service.EventHandlerService.worker.threads 3 Number of worker threads to be scheduled to run and process events. oozie.sla.service.SLAService.capacity 5000 Maximum number of sla records to be contained in the memory structure. oozie.sla.service.SLAService.alert.events END_MISS Default types of SLA events to be alerted of. oozie.sla.service.SLAService.calculator.impl org.apache.oozie.sla.SLACalculatorMemory The implementation for SLACalculator in use by the SLAService. oozie.sla.service.SLAService.job.event.latency 90000 Time in milliseconds to account for latency of getting the job status event to compare against and decide sla miss/met oozie.sla.service.SLAService.check.interval 30 Time interval, in seconds, at which SLA Worker will be scheduled to run oozie.sla.disable.alerts.older.than 48 Time threshold, in HOURS, for disabling SLA alerting for jobs whose nominal time is older than this. oozie.zookeeper.connection.string localhost:2181 Comma-separated values of host:port pairs of the ZooKeeper servers. oozie.zookeeper.namespace oozie The namespace to use. All of the Oozie Servers that are planning on talking to each other should have the same namespace. oozie.zookeeper.connection.timeout 180 Default ZK connection timeout (in sec). oozie.zookeeper.session.timeout 300 Default ZK session timeout (in sec). If connection is lost even after retry, then Oozie server will shut down itself if oozie.zookeeper.server.shutdown.ontimeout is true. oozie.zookeeper.max.retries 10 Maximum number of times to retry. oozie.zookeeper.server.shutdown.ontimeout true If true, Oozie server will shut down itself on ZK connection timeout. oozie.http.hostname localhost Oozie server host name.
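A sketch of pointing the oozie.zookeeper.* properties above at a three-node ZooKeeper ensemble (the host names are placeholder assumptions):

<property>
  <name>oozie.zookeeper.connection.string</name>
  <value>zk1.example.com:2181,zk2.example.com:2181,zk3.example.com:2181</value>
</property>
<property>
  <name>oozie.zookeeper.namespace</name>
  <!-- All Oozie servers that coordinate with each other must share this namespace -->
  <value>oozie</value>
</property>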
oozie.http.port 11000 Oozie server port. oozie.https.port 11443 Oozie ssl server port. oozie.instance.id ${oozie.http.hostname} Each Oozie server should have its own unique instance id. The default is the system property ${OOZIE_HTTP_HOSTNAME} (i.e. the hostname). oozie.service.ShareLibService.mapping.file The sharelib mapping file contains a list of key=value entries, where the key is the sharelib name for the action and the value is a comma separated list of DFS directories or jar files. Example. oozie.pig_10=hdfs:///share/lib/pig/pig-0.10.1/lib/ oozie.pig=hdfs:///share/lib/pig/pig-0.11.1/lib/ oozie.distcp=hdfs:///share/lib/hadoop-2.2.0/share/hadoop/tools/lib/hadoop-distcp-2.2.0.jar oozie.service.ShareLibService.fail.fast.on.startup false Fails server startup if sharelib initialization fails. oozie.service.ShareLibService.purge.interval 1 How often, in days, Oozie should check for old ShareLibs and LauncherLibs to purge from HDFS. oozie.service.ShareLibService.temp.sharelib.retention.days 7 ShareLib retention time in days. oozie.action.ship.launcher.jar false Specifies whether launcher jar is shipped or not. oozie.action.jobinfo.enable false JobInfo will contain information of bundle, coordinator, workflow and actions. If enabled, the hadoop job will have a property (oozie.job.info) whose value is multiple key/value pairs separated by ",". This information can be used for analytics like how many oozie jobs are submitted for a particular period, what is the total number of failed pig jobs, etc from mapreduce job history logs and configuration. User can also add a custom workflow property to jobinfo by adding a property prefixed with "oozie.job.info." Eg. oozie.job.info="bundle.id=,bundle.name=,coord.name=,coord.nominal.time=,coord.name=,wf.id=, wf.name=,action.name=,action.type=,launcher=true" oozie.service.XLogStreamingService.max.log.scan.duration -1 Max log scan duration in hours. If log scan request end_date - start_date > value, then an exception is thrown to reduce the scan duration. -1 indicates no limit. oozie.service.XLogStreamingService.actionlist.max.log.scan.duration -1 Max log scan duration in hours for coordinator job when a list of actions is specified. If log streaming request end_date - start_date > value, then an exception is thrown to reduce the scan duration. -1 indicates no limit. This setting is separate from max.log.scan.duration as we want to allow higher durations when actions are specified. oozie.service.JvmPauseMonitorService.warn-threshold.ms 10000 The JvmPauseMonitorService runs a thread that repeatedly tries to detect when the JVM pauses, which could indicate that the JVM or host machine is overloaded or has other problems. This thread sleeps for 500ms; if it sleeps for significantly longer, then there is likely a problem. This property specifies the threshold for when Oozie should log a WARN level message; there is also a counter named "jvm.pause.warn-threshold". oozie.service.JvmPauseMonitorService.info-threshold.ms 1000 The JvmPauseMonitorService runs a thread that repeatedly tries to detect when the JVM pauses, which could indicate that the JVM or host machine is overloaded or has other problems. This thread sleeps for 500ms; if it sleeps for significantly longer, then there is likely a problem. This property specifies the threshold for when Oozie should log an INFO level message; there is also a counter named "jvm.pause.info-threshold". oozie.service.ZKLocksService.locks.reaper.threshold 300 The frequency at which the ChildReaper will run. Duration should be in sec. Default is 5 min.
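A sketch of wiring up the sharelib mapping file described above; the HDFS path is an assumption, and the file itself would contain key=value lines such as the oozie.pig examples given in the description:

<property>
  <name>oozie.service.ShareLibService.mapping.file</name>
  <!-- Assumed location of a file holding lines like oozie.pig=hdfs:///share/lib/pig/... -->
  <value>hdfs:///user/oozie/sharelib-mapping.properties</value>
</property>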
oozie.service.ZKLocksService.locks.reaper.threads 2 Number of fixed threads used by ChildReaper to delete empty locks. oozie.service.AbandonedCoordCheckerService.check.interval 1440 Interval, in minutes, at which AbandonedCoordCheckerService should run. oozie.service.AbandonedCoordCheckerService.check.delay 60 Delay, in minutes, at which AbandonedCoordCheckerService should run. oozie.service.AbandonedCoordCheckerService.failure.limit 25 Failure limit. A job is considered to be abandoned/faulty if the total number of actions in failed/timedout/suspended >= "Failure limit" and there is no succeeded action. oozie.service.AbandonedCoordCheckerService.kill.jobs false If true, AbandonedCoordCheckerService will kill abandoned coords. oozie.service.AbandonedCoordCheckerService.job.older.than 2880 In minutes, a job will be considered abandoned/faulty if it is older than this value. oozie.notification.proxy System level proxy setting for job notifications. oozie.wf.rerun.disablechild false By setting this option, workflow rerun will be disabled if a parent workflow or coordinator exists, and it will only rerun through the parent. oozie.use.system.libpath false Default value of oozie.use.system.libpath. If the user hasn't specified oozie.use.system.libpath in the job.properties and this value is true, Oozie will include sharelib jars for the workflow. oozie.service.PauseTransitService.callable.batch.size 10 This value determines the number of callables which will be batched together to be executed by a single thread. oozie.configuration.substitute.depth 20 This value determines the depth of substitution in configurations. If set to -1, there is no limit on substitution. oozie.service.SparkConfigurationService.spark.configurations *=spark-conf Comma separated AUTHORITY=SPARK_CONF_DIR, where AUTHORITY is the HOST:PORT of the ResourceManager of a YARN cluster. The wildcard '*' configuration is used when there is no exact match for an authority. The SPARK_CONF_DIR contains the relevant spark-defaults.conf properties file. If the path is relative, it is looked up within the Oozie configuration directory, though the path can be absolute. This is only used when the Spark master is set to either "yarn-client" or "yarn-cluster". oozie.service.SparkConfigurationService.spark.configurations.ignore.spark.yarn.jar true If true, Oozie will ignore the "spark.yarn.jar" property from any Spark configurations specified in oozie.service.SparkConfigurationService.spark.configurations. If false, Oozie will not ignore it. It is recommended to leave this as true because it can interfere with the jars in the Spark sharelib. oozie.email.attachment.enabled true This value determines whether to support email attachment of a file on HDFS. Set it to false if there is any security concern. oozie.email.smtp.host localhost The host where the email action may find the SMTP server. oozie.email.smtp.port 25 The port to connect to for the SMTP server, for email actions. oozie.email.smtp.auth false Boolean property that toggles if authentication is to be done or not when using email actions. oozie.email.smtp.username If authentication is enabled for email actions, the username to login as (to the SMTP server). oozie.email.smtp.password If authentication is enabled for email actions, the password to login with (to the SMTP server). oozie.email.from.address oozie@localhost The from address to be used for mailing all emails done via the email action.
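A sketch of configuring the email action SMTP properties above for an authenticated external mail server (the host and credentials are placeholder assumptions):

<property>
  <name>oozie.email.smtp.host</name>
  <value>smtp.example.com</value>
</property>
<property>
  <name>oozie.email.smtp.port</name>
  <value>587</value>
</property>
<property>
  <name>oozie.email.smtp.auth</name>
  <value>true</value>
</property>
<property>
  <name>oozie.email.smtp.username</name>
  <value>oozie-mailer</value>
</property>
<property>
  <name>oozie.email.smtp.password</name>
  <value>mailer-password</value>
</property>
<property>
  <name>oozie.email.from.address</name>
  <value>oozie@example.com</value>
</property>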
oozie.email.smtp.socket.timeout.ms 10000 The timeout to apply over all SMTP server socket operations done during the email action. oozie.actions.default.name-node The default value to use for the <name-node> element in applicable action types. This value will be used when neither the action itself nor the global section specifies a <name-node>. As expected, it should be of the form "hdfs://HOST:PORT". oozie.actions.default.job-tracker The default value to use for the <job-tracker> element in applicable action types. This value will be used when neither the action itself nor the global section specifies a <job-tracker>. As expected, it should be of the form "HOST:PORT".
sahara-plugin-vanilla-10.0.0/sahara_plugin_vanilla/plugins/vanilla/v2_8_2/resources/yarn-default.xml
Factory to create client IPC classes. yarn.ipc.client.factory.class Factory to create server IPC classes. yarn.ipc.server.factory.class Factory to create serializable records. yarn.ipc.record.factory.class RPC class implementation yarn.ipc.rpc.class org.apache.hadoop.yarn.ipc.HadoopYarnProtoRPC The hostname of the RM. yarn.resourcemanager.hostname 0.0.0.0 The address of the applications manager interface in the RM. yarn.resourcemanager.address ${yarn.resourcemanager.hostname}:8032 The actual address the server will bind to. If this optional address is set, the RPC and webapp servers will bind to this address and the port specified in yarn.resourcemanager.address and yarn.resourcemanager.webapp.address, respectively. This is most useful for making RM listen to all interfaces by setting to 0.0.0.0. yarn.resourcemanager.bind-host The number of threads used to handle applications manager requests. yarn.resourcemanager.client.thread-count 50 Number of threads used to launch/cleanup AM. yarn.resourcemanager.amlauncher.thread-count 50 Retry times to connect with NM. yarn.resourcemanager.nodemanager-connect-retries 10 Timeout in milliseconds when YARN dispatcher tries to drain the events. Typically, this happens when service is stopping. e.g. RM drains the ATS events dispatcher when stopping. yarn.dispatcher.drain-events.timeout 300000 The expiry interval for application master reporting. yarn.am.liveness-monitor.expiry-interval-ms 600000 The Kerberos principal for the resource manager. yarn.resourcemanager.principal The address of the scheduler interface. yarn.resourcemanager.scheduler.address ${yarn.resourcemanager.hostname}:8030 Number of threads to handle scheduler interface. yarn.resourcemanager.scheduler.client.thread-count 50 This configures the HTTP endpoint for Yarn Daemons.The following values are supported: - HTTP_ONLY : Service is provided only on http - HTTPS_ONLY : Service is provided only on https yarn.http.policy HTTP_ONLY The http address of the RM web application. If only a host is provided as the value, the webapp will be served on a random port. yarn.resourcemanager.webapp.address ${yarn.resourcemanager.hostname}:8088 The https address of the RM web application. If only a host is provided as the value, the webapp will be served on a random port. yarn.resourcemanager.webapp.https.address ${yarn.resourcemanager.hostname}:8090 The Kerberos keytab file to be used for spnego filter for the RM web interface. yarn.resourcemanager.webapp.spnego-keytab-file The Kerberos principal to be used for spnego filter for the RM web interface.
yarn.resourcemanager.webapp.spnego-principal Add button to kill application in the RM Application view. yarn.resourcemanager.webapp.ui-actions.enabled true yarn.resourcemanager.resource-tracker.address ${yarn.resourcemanager.hostname}:8031 Are acls enabled. yarn.acl.enable false Are reservation acls enabled. yarn.acl.reservation-enable false ACL of who can be admin of the YARN cluster. yarn.admin.acl * The address of the RM admin interface. yarn.resourcemanager.admin.address ${yarn.resourcemanager.hostname}:8033 Number of threads used to handle RM admin interface. yarn.resourcemanager.admin.client.thread-count 1 Maximum time to wait to establish connection to ResourceManager. yarn.resourcemanager.connect.max-wait.ms 900000 How often to try connecting to the ResourceManager. yarn.resourcemanager.connect.retry-interval.ms 30000 The maximum number of application attempts. It's a global setting for all application masters. Each application master can specify its individual maximum number of application attempts via the API, but the individual number cannot be more than the global upper bound. If it is, the resourcemanager will override it. The default number is set to 2, to allow at least one retry for AM. yarn.resourcemanager.am.max-attempts 2 How often to check that containers are still alive. yarn.resourcemanager.container.liveness-monitor.interval-ms 600000 The keytab for the resource manager. yarn.resourcemanager.keytab /etc/krb5.keytab Flag to enable override of the default kerberos authentication filter with the RM authentication filter to allow authentication using delegation tokens(fallback to kerberos if the tokens are missing). Only applicable when the http authentication type is kerberos. yarn.resourcemanager.webapp.delegation-token-auth-filter.enabled true Flag to enable cross-origin (CORS) support in the RM. This flag requires the CORS filter initializer to be added to the filter initializers list in core-site.xml. yarn.resourcemanager.webapp.cross-origin.enabled false How long to wait until a node manager is considered dead. yarn.nm.liveness-monitor.expiry-interval-ms 600000 Path to file with nodes to include. yarn.resourcemanager.nodes.include-path Path to file with nodes to exclude. yarn.resourcemanager.nodes.exclude-path The expiry interval for node IP caching. -1 disables the caching yarn.resourcemanager.node-ip-cache.expiry-interval-secs -1 Number of threads to handle resource tracker calls. yarn.resourcemanager.resource-tracker.client.thread-count 50 The class to use as the resource scheduler. yarn.resourcemanager.scheduler.class org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler The minimum allocation for every container request at the RM, in MBs. Memory requests lower than this will throw a InvalidResourceRequestException. yarn.scheduler.minimum-allocation-mb 1024 The maximum allocation for every container request at the RM, in MBs. Memory requests higher than this will throw a InvalidResourceRequestException. yarn.scheduler.maximum-allocation-mb 8192 The minimum allocation for every container request at the RM, in terms of virtual CPU cores. Requests lower than this will throw a InvalidResourceRequestException. yarn.scheduler.minimum-allocation-vcores 1 The maximum allocation for every container request at the RM, in terms of virtual CPU cores. Requests higher than this will throw a InvalidResourceRequestException. yarn.scheduler.maximum-allocation-vcores 4 Used by node labels. If set to true, the port should be included in the node name. 
Only usable if your scheduler supports node labels. yarn.scheduler.include-port-in-node-name false Enable RM to recover state after starting. If true, then yarn.resourcemanager.store.class must be specified. yarn.resourcemanager.recovery.enabled false Should RM fail fast if it encounters any errors. By default, it points to ${yarn.fail-fast}. Errors include: 1) exceptions when state-store write/read operations fail. yarn.resourcemanager.fail-fast ${yarn.fail-fast} Should YARN fail fast if it encounters any errors. This is a global config for all other components including RM,NM etc. If no value is set for component-specific config (e.g. yarn.resourcemanager.fail-fast), this value will be the default. yarn.fail-fast false Enable RM work preserving recovery. This configuration is private to YARN for experimenting with the feature. yarn.resourcemanager.work-preserving-recovery.enabled true Set the amount of time RM waits before allocating new containers on work-preserving-recovery. Such wait period gives RM a chance to settle down resyncing with NMs in the cluster on recovery, before assigning new containers to applications. yarn.resourcemanager.work-preserving-recovery.scheduling-wait-ms 10000 The class to use as the persistent store. If org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore is used, the store is implicitly fenced; meaning a single ResourceManager is able to use the store at any point in time. More details on this implicit fencing, along with setting up appropriate ACLs, is discussed under yarn.resourcemanager.zk-state-store.root-node.acl. yarn.resourcemanager.store.class org.apache.hadoop.yarn.server.resourcemanager.recovery.FileSystemRMStateStore When automatic failover is enabled, number of zookeeper operation retry times in ActiveStandbyElector yarn.resourcemanager.ha.failover-controller.active-standby-elector.zk.retries The maximum number of completed applications RM state store keeps, less than or equal to ${yarn.resourcemanager.max-completed-applications}. By default, it equals to ${yarn.resourcemanager.max-completed-applications}. This ensures that the applications kept in the state store are consistent with the applications remembered in RM memory. Any values larger than ${yarn.resourcemanager.max-completed-applications} will be reset to ${yarn.resourcemanager.max-completed-applications}. Note that this value impacts the RM recovery performance. Typically, a smaller value indicates better performance on RM recovery. yarn.resourcemanager.state-store.max-completed-applications ${yarn.resourcemanager.max-completed-applications} Host:Port of the ZooKeeper server to be used by the RM. This must be supplied when using the ZooKeeper based implementation of the RM state store and/or embedded automatic failover in a HA setting. yarn.resourcemanager.zk-address Number of times RM tries to connect to ZooKeeper. yarn.resourcemanager.zk-num-retries 1000 Retry interval in milliseconds when connecting to ZooKeeper. When HA is enabled, the value here is NOT used. It is generated automatically from yarn.resourcemanager.zk-timeout-ms and yarn.resourcemanager.zk-num-retries. yarn.resourcemanager.zk-retry-interval-ms 1000 Full path of the ZooKeeper znode where RM state will be stored. This must be supplied when using org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore as the value for yarn.resourcemanager.store.class yarn.resourcemanager.zk-state-store.parent-path /rmstore ZooKeeper session timeout in milliseconds.
Session expiration is managed by the ZooKeeper cluster itself, not by the client. This value is used by the cluster to determine when the client's session expires. Expiration happens when the cluster does not hear from the client within the specified session timeout period (i.e. no heartbeat). yarn.resourcemanager.zk-timeout-ms 10000 ACL's to be used for ZooKeeper znodes. yarn.resourcemanager.zk-acl world:anyone:rwcda ACLs to be used for the root znode when using ZKRMStateStore in a HA scenario for fencing. ZKRMStateStore supports implicit fencing to allow a single ResourceManager write-access to the store. For fencing, the ResourceManagers in the cluster share read-write-admin privileges on the root node, but the Active ResourceManager claims exclusive create-delete permissions. By default, when this property is not set, we use the ACLs from yarn.resourcemanager.zk-acl for shared admin access and rm-address:random-number for username-based exclusive create-delete access. This property allows users to set ACLs of their choice instead of using the default mechanism. For fencing to work, the ACLs should be carefully set differently on each ResourceManager such that all the ResourceManagers have shared admin access and the Active ResourceManager takes over (exclusively) the create-delete access. yarn.resourcemanager.zk-state-store.root-node.acl Specify the auths to be used for the ACL's specified in both the yarn.resourcemanager.zk-acl and yarn.resourcemanager.zk-state-store.root-node.acl properties. This takes a comma-separated list of authentication mechanisms, each of the form 'scheme:auth' (the same syntax used for the 'addAuth' command in the ZK CLI). yarn.resourcemanager.zk-auth URI pointing to the location of the FileSystem path where RM state will be stored. This must be supplied when using org.apache.hadoop.yarn.server.resourcemanager.recovery.FileSystemRMStateStore as the value for yarn.resourcemanager.store.class yarn.resourcemanager.fs.state-store.uri ${hadoop.tmp.dir}/yarn/system/rmstore hdfs client retry policy specification. hdfs client retry is always enabled. Specified in pairs of sleep-time and number-of-retries and (t0, n0), (t1, n1), ..., the first n0 retries sleep t0 milliseconds on average, the following n1 retries sleep t1 milliseconds on average, and so on. yarn.resourcemanager.fs.state-store.retry-policy-spec 2000, 500 the number of retries to recover from IOException in FileSystemRMStateStore. yarn.resourcemanager.fs.state-store.num-retries 0 Retry interval in milliseconds in FileSystemRMStateStore. yarn.resourcemanager.fs.state-store.retry-interval-ms 1000 Local path where the RM state will be stored when using org.apache.hadoop.yarn.server.resourcemanager.recovery.LeveldbRMStateStore as the value for yarn.resourcemanager.store.class yarn.resourcemanager.leveldb-state-store.path ${hadoop.tmp.dir}/yarn/system/rmstore The time in seconds between full compactions of the leveldb database. Setting the interval to zero disables the full compaction cycles. yarn.resourcemanager.leveldb-state-store.compaction-interval-secs 3600 Enable RM high-availability. When enabled, (1) The RM starts in the Standby mode by default, and transitions to the Active mode when prompted to.
(2) The nodes in the RM ensemble are listed in yarn.resourcemanager.ha.rm-ids (3) The id of each RM either comes from yarn.resourcemanager.ha.id if yarn.resourcemanager.ha.id is explicitly specified or can be figured out by matching yarn.resourcemanager.address.{id} with local address (4) The actual physical addresses come from the configs of the pattern - {rpc-config}.{id} yarn.resourcemanager.ha.enabled false Enable automatic failover. By default, it is enabled only when HA is enabled yarn.resourcemanager.ha.automatic-failover.enabled true Enable embedded automatic failover. By default, it is enabled only when HA is enabled. The embedded elector relies on the RM state store to handle fencing, and is primarily intended to be used in conjunction with ZKRMStateStore. yarn.resourcemanager.ha.automatic-failover.embedded true The base znode path to use for storing leader information, when using ZooKeeper based leader election. yarn.resourcemanager.ha.automatic-failover.zk-base-path /yarn-leader-election Name of the cluster. In a HA setting, this is used to ensure the RM participates in leader election for this cluster and ensures it does not affect other clusters yarn.resourcemanager.cluster-id The list of RM nodes in the cluster when HA is enabled. See description of yarn.resourcemanager.ha.enabled for full details on how this is used. yarn.resourcemanager.ha.rm-ids The id (string) of the current RM. When HA is enabled, this is an optional config. The id of current RM can be set by explicitly specifying yarn.resourcemanager.ha.id or figured out by matching yarn.resourcemanager.address.{id} with local address See description of yarn.resourcemanager.ha.enabled for full details on how this is used. yarn.resourcemanager.ha.id When HA is enabled, the class to be used by Clients, AMs and NMs to failover to the Active RM. It should extend org.apache.hadoop.yarn.client.RMFailoverProxyProvider yarn.client.failover-proxy-provider org.apache.hadoop.yarn.client.ConfiguredRMFailoverProxyProvider When HA is enabled, the max number of times FailoverProxyProvider should attempt failover. When set, this overrides the yarn.resourcemanager.connect.max-wait.ms. When not set, this is inferred from yarn.resourcemanager.connect.max-wait.ms. yarn.client.failover-max-attempts When HA is enabled, the sleep base (in milliseconds) to be used for calculating the exponential delay between failovers. When set, this overrides the yarn.resourcemanager.connect.* settings. When not set, yarn.resourcemanager.connect.retry-interval.ms is used instead. yarn.client.failover-sleep-base-ms When HA is enabled, the maximum sleep time (in milliseconds) between failovers. When set, this overrides the yarn.resourcemanager.connect.* settings. When not set, yarn.resourcemanager.connect.retry-interval.ms is used instead. yarn.client.failover-sleep-max-ms When HA is enabled, the number of retries per attempt to connect to a ResourceManager. In other words, it is the ipc.client.connect.max.retries to be used during failover attempts yarn.client.failover-retries 0 When HA is enabled, the number of retries per attempt to connect to a ResourceManager on socket timeouts. In other words, it is the ipc.client.connect.max.retries.on.timeouts to be used during failover attempts yarn.client.failover-retries-on-socket-timeouts 0 The maximum number of completed applications RM keeps.
yarn.resourcemanager.max-completed-applications 10000 Interval at which the delayed token removal thread runs yarn.resourcemanager.delayed.delegation-token.removal-interval-ms 30000 If true, ResourceManager will have proxy-user privileges. Use case: In a secure cluster, YARN requires the user hdfs delegation-tokens to do localization and log-aggregation on behalf of the user. If this is set to true, ResourceManager is able to request new hdfs delegation tokens on behalf of the user. This is needed by long-running services, because the hdfs tokens will eventually expire and YARN requires new valid tokens to do localization and log-aggregation. Note that to enable this use case, the corresponding HDFS NameNode has to configure ResourceManager as the proxy-user so that ResourceManager can itself ask for new tokens on behalf of the user when tokens are past their max-life-time. yarn.resourcemanager.proxy-user-privileges.enabled false Interval for the roll over for the master key used to generate application tokens yarn.resourcemanager.am-rm-tokens.master-key-rolling-interval-secs 86400 Interval for the roll over for the master key used to generate container tokens. It is expected to be much greater than yarn.nm.liveness-monitor.expiry-interval-ms and yarn.resourcemanager.rm.container-allocation.expiry-interval-ms. Otherwise the behavior is undefined. yarn.resourcemanager.container-tokens.master-key-rolling-interval-secs 86400 The heart-beat interval in milliseconds for every NodeManager in the cluster. yarn.resourcemanager.nodemanagers.heartbeat-interval-ms 1000 The minimum allowed version of a connecting nodemanager. The valid values are NONE (no version checking), EqualToRM (the nodemanager's version is equal to or greater than the RM version), or a Version String. yarn.resourcemanager.nodemanager.minimum.version NONE Enable a set of periodic monitors (specified in yarn.resourcemanager.scheduler.monitor.policies) that affect the scheduler. yarn.resourcemanager.scheduler.monitor.enable false The list of SchedulingEditPolicy classes that interact with the scheduler. A particular module may be incompatible with the scheduler, other policies, or a configuration of either. yarn.resourcemanager.scheduler.monitor.policies org.apache.hadoop.yarn.server.resourcemanager.monitor.capacity.ProportionalCapacityPreemptionPolicy The class to use as the configuration provider. If org.apache.hadoop.yarn.LocalConfigurationProvider is used, the local configuration will be loaded. If org.apache.hadoop.yarn.FileSystemBasedConfigurationProvider is used, the configuration which will be loaded should be uploaded to remote File system first. yarn.resourcemanager.configuration.provider-class org.apache.hadoop.yarn.LocalConfigurationProvider The value specifies the file system (e.g. HDFS) path where ResourceManager loads configuration if yarn.resourcemanager.configuration.provider-class is set to org.apache.hadoop.yarn.FileSystemBasedConfigurationProvider. yarn.resourcemanager.configuration.file-system-based-store /yarn/conf The setting that controls whether yarn system metrics is published on the timeline server or not by RM. yarn.resourcemanager.system-metrics-publisher.enabled false Number of worker threads that send the yarn system metrics data. yarn.resourcemanager.system-metrics-publisher.dispatcher.pool-size 10 Number of diagnostics/failure messages that can be saved in RM for log aggregation. It also defines the number of diagnostics/failure messages that can be shown in the log aggregation web ui.
yarn.resourcemanager.max-log-aggregation-diagnostics-in-memory 10 RM DelegationTokenRenewer thread count yarn.resourcemanager.delegation-token-renewer.thread-count 50 RM secret key update interval in ms yarn.resourcemanager.delegation.key.update-interval 86400000 RM delegation token maximum lifetime in ms yarn.resourcemanager.delegation.token.max-lifetime 604800000 RM delegation token update interval in ms yarn.resourcemanager.delegation.token.renew-interval 86400000 Thread pool size for RMApplicationHistoryWriter. yarn.resourcemanager.history-writer.multi-threaded-dispatcher.pool-size 10 Comma-separated list of values (in minutes) for schedule queue related metrics. yarn.resourcemanager.metrics.runtime.buckets 60,300,1440 Interval for the roll over for the master key used to generate NodeManager tokens. It is expected to be set to a value much larger than yarn.nm.liveness-monitor.expiry-interval-ms. yarn.resourcemanager.nm-tokens.master-key-rolling-interval-secs 86400 Flag to enable the ResourceManager reservation system. yarn.resourcemanager.reservation-system.enable false The Java class to use as the ResourceManager reservation system. By default, it is set to org.apache.hadoop.yarn.server.resourcemanager.reservation.CapacityReservationSystem when using CapacityScheduler, and to org.apache.hadoop.yarn.server.resourcemanager.reservation.FairReservationSystem when using FairScheduler. yarn.resourcemanager.reservation-system.class The plan follower policy class name to use for the ResourceManager reservation system. By default, org.apache.hadoop.yarn.server.resourcemanager.reservation.CapacitySchedulerPlanFollower is used when using CapacityScheduler, and org.apache.hadoop.yarn.server.resourcemanager.reservation.FairSchedulerPlanFollower when using FairScheduler. yarn.resourcemanager.reservation-system.plan.follower Step size of the reservation system in ms yarn.resourcemanager.reservation-system.planfollower.time-step 1000 The expiry interval for a container yarn.resourcemanager.rm.container-allocation.expiry-interval-ms 600000 The hostname of the NM. yarn.nodemanager.hostname 0.0.0.0 The address of the container manager in the NM. yarn.nodemanager.address ${yarn.nodemanager.hostname}:0 The actual address the server will bind to. If this optional address is set, the RPC and webapp servers will bind to this address and the port specified in yarn.nodemanager.address and yarn.nodemanager.webapp.address, respectively. This is most useful for making NM listen to all interfaces by setting to 0.0.0.0. yarn.nodemanager.bind-host Environment variables that should be forwarded from the NodeManager's environment to the container's. yarn.nodemanager.admin-env MALLOC_ARENA_MAX=$MALLOC_ARENA_MAX Environment variables that containers may override rather than use NodeManager's default. yarn.nodemanager.env-whitelist JAVA_HOME,HADOOP_COMMON_HOME,HADOOP_HDFS_HOME,HADOOP_CONF_DIR,CLASSPATH_PREPEND_DISTCACHE,HADOOP_YARN_HOME Who will execute (launch) the containers. yarn.nodemanager.container-executor.class org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor Number of threads container manager uses. yarn.nodemanager.container-manager.thread-count 20 Number of threads used in cleanup. yarn.nodemanager.delete.thread-count 4 Number of seconds after an application finishes before the nodemanager's DeletionService will delete the application's localized file directory and log directory.
To diagnose Yarn application problems, set this property's value large enough (for example, to 600 = 10 minutes) to permit examination of these directories. After changing the property's value, you must restart the nodemanager in order for it to have an effect. The roots of Yarn applications' work directories is configurable with the yarn.nodemanager.local-dirs property (see below), and the roots of the Yarn applications' log directories is configurable with the yarn.nodemanager.log-dirs property (see also below). yarn.nodemanager.delete.debug-delay-sec 0 Keytab for NM. yarn.nodemanager.keytab /etc/krb5.keytab List of directories to store localized files in. An application's localized file directory will be found in: ${yarn.nodemanager.local-dirs}/usercache/${user}/appcache/application_${appid}. Individual containers' work directories, called container_${contid}, will be subdirectories of this. yarn.nodemanager.local-dirs ${hadoop.tmp.dir}/nm-local-dir It limits the maximum number of files which will be localized in a single local directory. If the limit is reached then sub-directories will be created and new files will be localized in them. If it is set to a value less than or equal to 36 [which are sub-directories (0-9 and then a-z)] then NodeManager will fail to start. For example; [for public cache] if this is configured with a value of 40 ( 4 files + 36 sub-directories) and the local-dir is "/tmp/local-dir1" then it will allow 4 files to be created directly inside "/tmp/local-dir1/filecache". For files that are localized further it will create a sub-directory "0" inside "/tmp/local-dir1/filecache" and will localize files inside it until it becomes full. If a file is removed from a sub-directory that is marked full, then that sub-directory will be used back again to localize files. yarn.nodemanager.local-cache.max-files-per-directory 8192 Address where the localizer IPC is. yarn.nodemanager.localizer.address ${yarn.nodemanager.hostname}:8040 Interval in between cache cleanups. yarn.nodemanager.localizer.cache.cleanup.interval-ms 600000 Target size of localizer cache in MB, per nodemanager. It is a target retention size that only includes resources with PUBLIC and PRIVATE visibility and excludes resources with APPLICATION visibility yarn.nodemanager.localizer.cache.target-size-mb 10240 Number of threads to handle localization requests. yarn.nodemanager.localizer.client.thread-count 5 Number of threads to use for localization fetching. yarn.nodemanager.localizer.fetch.thread-count 4 yarn.nodemanager.container-localizer.java.opts -Xmx256m Where to store container logs. An application's localized log directory will be found in ${yarn.nodemanager.log-dirs}/application_${appid}. Individual containers' log directories will be below this, in directories named container_{$contid}. Each container directory will contain the files stderr, stdin, and syslog generated by that container. yarn.nodemanager.log-dirs ${yarn.log.dir}/userlogs Whether to enable log aggregation. Log aggregation collects each container's logs and moves these logs onto a file-system, for e.g. HDFS, after the application completes. Users can configure the "yarn.nodemanager.remote-app-log-dir" and "yarn.nodemanager.remote-app-log-dir-suffix" properties to determine where these logs are moved to. Users can access the logs via the Application Timeline Server. yarn.log-aggregation-enable false How long to keep aggregation logs before deleting them. -1 disables. Be careful set this too small and you will spam the name node. 
yarn.log-aggregation.retain-seconds -1 How long to wait between aggregated log retention checks. If set to 0 or a negative value then the value is computed as one-tenth of the aggregated log retention time. Be careful: setting this too small will spam the name node. yarn.log-aggregation.retain-check-interval-seconds -1 How long for ResourceManager to wait for NodeManager to report its log aggregation status. If the time within which the log aggregation status is reported from NodeManager exceeds the configured value, RM will report log aggregation status for this NodeManager as TIME_OUT. yarn.log-aggregation-status.time-out.ms 600000 Time in seconds to retain user logs. Only applicable if log aggregation is disabled. yarn.nodemanager.log.retain-seconds 10800 Where to aggregate logs to. yarn.nodemanager.remote-app-log-dir /tmp/logs The remote log dir will be created at {yarn.nodemanager.remote-app-log-dir}/${user}/{thisParam} yarn.nodemanager.remote-app-log-dir-suffix logs Generate additional logs about container launches. Currently, this creates a copy of the launch script and lists the directory contents of the container work dir. When listing directory contents, we follow symlinks to a max-depth of 5 (including symlinks which point outside the container work dir), which may lead to slowness in launching containers. yarn.nodemanager.log-container-debug-info.enabled false Amount of physical memory, in MB, that can be allocated for containers. If set to -1 and yarn.nodemanager.resource.detect-hardware-capabilities is true, it is automatically calculated (in the case of Windows and Linux). In other cases, the default is 8192MB. yarn.nodemanager.resource.memory-mb -1 Amount of physical memory, in MB, that is reserved for non-YARN processes. This configuration is only used if yarn.nodemanager.resource.detect-hardware-capabilities is set to true and yarn.nodemanager.resource.memory-mb is -1. If set to -1, this amount is calculated as 20% of (system memory - 2*HADOOP_HEAPSIZE). yarn.nodemanager.resource.system-reserved-memory-mb -1 Whether physical memory limits will be enforced for containers. yarn.nodemanager.pmem-check-enabled true Whether virtual memory limits will be enforced for containers. yarn.nodemanager.vmem-check-enabled true Ratio of virtual memory to physical memory when setting memory limits for containers. Container allocations are expressed in terms of physical memory, and virtual memory usage is allowed to exceed this allocation by this ratio. yarn.nodemanager.vmem-pmem-ratio 2.1 Number of vcores that can be allocated for containers. This is used by the RM scheduler when allocating resources for containers. This is not used to limit the number of CPUs used by YARN containers. If it is set to -1 and yarn.nodemanager.resource.detect-hardware-capabilities is true, it is automatically determined from the hardware in the case of Windows and Linux. In other cases, the number of vcores is 8 by default. yarn.nodemanager.resource.cpu-vcores -1 Flag to determine if logical processors (such as hyperthreads) should be counted as cores. Only applicable on Linux when yarn.nodemanager.resource.cpu-vcores is set to -1 and yarn.nodemanager.resource.detect-hardware-capabilities is true. yarn.nodemanager.resource.count-logical-processors-as-cores false Multiplier to determine how to convert physical cores to vcores. This value is used if yarn.nodemanager.resource.cpu-vcores is set to -1 (which implies auto-calculate vcores) and yarn.nodemanager.resource.detect-hardware-capabilities is set to true. 
The number of vcores will be calculated as number of CPUs * multiplier. yarn.nodemanager.resource.pcores-vcores-multiplier 1.0 Percentage of CPU that can be allocated for containers. This setting allows users to limit the amount of CPU that YARN containers use. Currently functional only on Linux using cgroups. The default is to use 100% of CPU. yarn.nodemanager.resource.percentage-physical-cpu-limit 100 Enable auto-detection of node capabilities such as memory and CPU. yarn.nodemanager.resource.detect-hardware-capabilities false NM Webapp address. yarn.nodemanager.webapp.address ${yarn.nodemanager.hostname}:8042 The https address of the NM web application. yarn.nodemanager.webapp.https.address 0.0.0.0:8044 The Kerberos keytab file to be used for spnego filter for the NM web interface. yarn.nodemanager.webapp.spnego-keytab-file The Kerberos principal to be used for spnego filter for the NM web interface. yarn.nodemanager.webapp.spnego-principal How often to monitor the node and the containers. yarn.nodemanager.resource-monitor.interval-ms 3000 Class that calculates current resource utilization. yarn.nodemanager.resource-calculator.class How often to monitor containers. If not set, the value for yarn.nodemanager.resource-monitor.interval-ms will be used. yarn.nodemanager.container-monitor.interval-ms Class that calculates containers' current resource utilization. If not set, the value for yarn.nodemanager.resource-calculator.class will be used. yarn.nodemanager.container-monitor.resource-calculator.class Frequency of running the node health script. yarn.nodemanager.health-checker.interval-ms 600000 Script timeout period. yarn.nodemanager.health-checker.script.timeout-ms 1200000 The health check script to run. yarn.nodemanager.health-checker.script.path The arguments to pass to the health check script. yarn.nodemanager.health-checker.script.opts Frequency of running disk health checker code. yarn.nodemanager.disk-health-checker.interval-ms 120000 The minimum fraction of the number of disks that must be healthy for the nodemanager to launch new containers. This corresponds to both yarn.nodemanager.local-dirs and yarn.nodemanager.log-dirs, i.e., if fewer healthy local-dirs (or log-dirs) are available, then new containers will not be launched on this node. yarn.nodemanager.disk-health-checker.min-healthy-disks 0.25 The maximum percentage of disk space utilization allowed after which a disk is marked as bad. Values can range from 0.0 to 100.0. If the value is greater than or equal to 100, the nodemanager will check for full disk. This applies to yarn.nodemanager.local-dirs and yarn.nodemanager.log-dirs. yarn.nodemanager.disk-health-checker.max-disk-utilization-per-disk-percentage 90.0 The low threshold percentage of disk space used when a bad disk is marked as good. Values can range from 0.0 to 100.0. This applies to yarn.nodemanager.local-dirs and yarn.nodemanager.log-dirs. Note that if its value is more than yarn.nodemanager.disk-health-checker.max-disk-utilization-per-disk-percentage or not set, it will be set to the same value as yarn.nodemanager.disk-health-checker.max-disk-utilization-per-disk-percentage. yarn.nodemanager.disk-health-checker.disk-utilization-watermark-low-per-disk-percentage The minimum space that must be available on a disk for it to be used. This applies to yarn.nodemanager.local-dirs and yarn.nodemanager.log-dirs. yarn.nodemanager.disk-health-checker.min-free-space-per-disk-mb 0 The path to the Linux container executor. 
yarn.nodemanager.linux-container-executor.path The class which should help the LCE handle resources. yarn.nodemanager.linux-container-executor.resources-handler.class org.apache.hadoop.yarn.server.nodemanager.util.DefaultLCEResourcesHandler The cgroups hierarchy under which to place YARN processes (cannot contain commas). If yarn.nodemanager.linux-container-executor.cgroups.mount is false (that is, if cgroups have been pre-configured), then this cgroups hierarchy must already exist and be writable by the NodeManager user, otherwise the NodeManager may fail. Only used when the LCE resources handler is set to the CgroupsLCEResourcesHandler. yarn.nodemanager.linux-container-executor.cgroups.hierarchy /hadoop-yarn Whether the LCE should attempt to mount cgroups if not found. Only used when the LCE resources handler is set to the CgroupsLCEResourcesHandler. yarn.nodemanager.linux-container-executor.cgroups.mount false Where the LCE should attempt to mount cgroups if not found. Common locations include /sys/fs/cgroup and /cgroup; the default location can vary depending on the Linux distribution in use. This path must exist before the NodeManager is launched. Only used when the LCE resources handler is set to the CgroupsLCEResourcesHandler, and yarn.nodemanager.linux-container-executor.cgroups.mount is true. yarn.nodemanager.linux-container-executor.cgroups.mount-path Delay in ms between attempts to remove a linux cgroup yarn.nodemanager.linux-container-executor.cgroups.delete-delay-ms 20 This determines which of the two modes the LCE should use on a non-secure cluster. If this value is set to true, then all containers will be launched as the user specified in yarn.nodemanager.linux-container-executor.nonsecure-mode.local-user. If this value is set to false, then containers will run as the user who submitted the application. yarn.nodemanager.linux-container-executor.nonsecure-mode.limit-users true The UNIX user that containers will run as when Linux-container-executor is used in nonsecure mode (a use case for this is using cgroups) if yarn.nodemanager.linux-container-executor.nonsecure-mode.limit-users is set to true. yarn.nodemanager.linux-container-executor.nonsecure-mode.local-user nobody The allowed pattern for UNIX user names enforced by Linux-container-executor when used in nonsecure mode (a use case for this is using cgroups). The default value is taken from /usr/sbin/adduser. yarn.nodemanager.linux-container-executor.nonsecure-mode.user-pattern ^[_.A-Za-z0-9][-@_.A-Za-z0-9]{0,255}?[$]?$ This flag determines whether apps should run with strict resource limits or be allowed to consume spare resources if they need them. For example, turning the flag on will restrict apps to use only their share of CPU, even if the node has spare CPU cycles. The default value is false, i.e., use available resources. Please note that turning this flag on may reduce job throughput on the cluster. yarn.nodemanager.linux-container-executor.cgroups.strict-resource-usage false Comma-separated list of runtimes that are allowed when using LinuxContainerExecutor. The allowed values are default and docker. yarn.nodemanager.runtime.linux.allowed-runtimes default This configuration setting determines the capabilities assigned to docker containers when they are launched. While these may not be case-sensitive from a docker perspective, it is best to keep these uppercase. 
yarn.nodemanager.runtime.linux.docker.capabilities CHOWN,DAC_OVERRIDE,FSETID,FOWNER,MKNOD,NET_RAW,SETGID,SETUID,SETFCAP,SETPCAP,NET_BIND_SERVICE,SYS_CHROOT,KILL,AUDIT_WRITE This configuration setting determines if privileged docker containers are allowed on this cluster. Use with extreme care. yarn.nodemanager.runtime.linux.docker.privileged-containers.allowed false This configuration setting determines who is allowed to run privileged docker containers on this cluster. Use with extreme care. yarn.nodemanager.runtime.linux.docker.privileged-containers.acl This flag determines whether memory limit will be set for the Windows Job Object of the containers launched by the default container executor. yarn.nodemanager.windows-container.memory-limit.enabled false This flag determines whether CPU limit will be set for the Windows Job Object of the containers launched by the default container executor. yarn.nodemanager.windows-container.cpu-limit.enabled false Interval of time the linux container executor should try cleaning up cgroups entry when cleaning up a container. yarn.nodemanager.linux-container-executor.cgroups.delete-timeout-ms 1000 The UNIX group that the linux-container-executor should run as. yarn.nodemanager.linux-container-executor.group T-file compression types used to compress aggregated logs. yarn.nodemanager.log-aggregation.compression-type none The kerberos principal for the node manager. yarn.nodemanager.principal A comma separated list of services where service name should only contain a-zA-Z0-9_ and can not start with numbers yarn.nodemanager.aux-services No. of ms to wait between sending a SIGTERM and SIGKILL to a container yarn.nodemanager.sleep-delay-before-sigkill.ms 250 Max time to wait for a process to come up when trying to cleanup a container yarn.nodemanager.process-kill-wait.ms 2000 The minimum allowed version of a resourcemanager that a nodemanager will connect to. The valid values are NONE (no version checking), EqualToNM (the resourcemanager's version is equal to or greater than the NM version), or a Version String. yarn.nodemanager.resourcemanager.minimum.version NONE Max number of threads in NMClientAsync to process container management events yarn.client.nodemanager-client-async.thread-pool-max-size 500 Max time to wait to establish a connection to NM yarn.client.nodemanager-connect.max-wait-ms 180000 Time interval between each attempt to connect to NM yarn.client.nodemanager-connect.retry-interval-ms 10000 Max time to wait for NM to connect to RM. When not set, proxy will fall back to use value of yarn.resourcemanager.connect.max-wait.ms. yarn.nodemanager.resourcemanager.connect.max-wait.ms Time interval between each NM attempt to connect to RM. When not set, proxy will fall back to use value of yarn.resourcemanager.connect.retry-interval.ms. yarn.nodemanager.resourcemanager.connect.retry-interval.ms Maximum number of proxy connections to cache for node managers. If set to a value greater than zero then the cache is enabled and the NMClient and MRAppMaster will cache the specified number of node manager proxies. There will be at max one proxy per node manager. Ex. configuring it to a value of 5 will make sure that client will at max have 5 proxies cached with 5 different node managers. These connections for these proxies will be timed out if idle for more than the system wide idle timeout period. Note that this could cause issues on large clusters as many connections could linger simultaneously and lead to a large number of connection threads. 
The token used for authentication will be used only at connection creation time. If a new token is received then the earlier connection should be closed in order to use the new token. This and (yarn.client.nodemanager-client-async.thread-pool-max-size) are related and should be in sync (no need for them to be equal). If the value of this property is zero then the connection cache is disabled and connections will use a zero idle timeout to prevent too many connection threads on large clusters. yarn.client.max-cached-nodemanagers-proxies 0 Enable the node manager to recover after starting yarn.nodemanager.recovery.enabled false The local filesystem directory in which the node manager will store state when recovery is enabled. yarn.nodemanager.recovery.dir ${hadoop.tmp.dir}/yarn-nm-recovery The time in seconds between full compactions of the NM state database. Setting the interval to zero disables the full compaction cycles. yarn.nodemanager.recovery.compaction-interval-secs 3600 Whether the nodemanager is running under supervision. A nodemanager that supports recovery and is running under supervision will not try to cleanup containers as it exits with the assumption it will be immediately be restarted and recover containers. yarn.nodemanager.recovery.supervised false Adjustment to the container OS scheduling priority. In Linux, passed directly to the nice command. yarn.nodemanager.container-executor.os.sched.priority.adjustment 0 Flag to enable container metrics yarn.nodemanager.container-metrics.enable true Container metrics flush period in ms. Set to -1 for flush on completion. yarn.nodemanager.container-metrics.period-ms -1 The delay time ms to unregister container metrics after completion. yarn.nodemanager.container-metrics.unregister-delay-ms 10000 Class used to calculate current container resource utilization. yarn.nodemanager.container-monitor.process-tree.class Flag to enable NodeManager disk health checker yarn.nodemanager.disk-health-checker.enable true Number of threads to use in NM log cleanup. Used when log aggregation is disabled. yarn.nodemanager.log.deletion-threads-count 4 The Windows group that the windows-container-executor should run as. yarn.nodemanager.windows-secure-container-executor.group yarn.nodemanager.docker-container-executor.exec-name /usr/bin/docker Name or path to the Docker client. The Docker image name to use for DockerContainerExecutor yarn.nodemanager.docker-container-executor.image-name mapreduce.job.hdfs-servers ${fs.defaultFS} yarn.nodemanager.aux-services.mapreduce_shuffle.class org.apache.hadoop.mapred.ShuffleHandler The kerberos principal for the proxy, if the proxy is not running as part of the RM. yarn.web-proxy.principal Keytab for WebAppProxy, if the proxy is not running as part of the RM. yarn.web-proxy.keytab The address for the web proxy as HOST:PORT, if this is not given then the proxy will run as part of the RM yarn.web-proxy.address CLASSPATH for YARN applications. A comma-separated list of CLASSPATH entries. When this value is empty, the following default CLASSPATH for YARN applications would be used. 
For Linux: $HADOOP_CONF_DIR, $HADOOP_COMMON_HOME/share/hadoop/common/*, $HADOOP_COMMON_HOME/share/hadoop/common/lib/*, $HADOOP_HDFS_HOME/share/hadoop/hdfs/*, $HADOOP_HDFS_HOME/share/hadoop/hdfs/lib/*, $HADOOP_YARN_HOME/share/hadoop/yarn/*, $HADOOP_YARN_HOME/share/hadoop/yarn/lib/* For Windows: %HADOOP_CONF_DIR%, %HADOOP_COMMON_HOME%/share/hadoop/common/*, %HADOOP_COMMON_HOME%/share/hadoop/common/lib/*, %HADOOP_HDFS_HOME%/share/hadoop/hdfs/*, %HADOOP_HDFS_HOME%/share/hadoop/hdfs/lib/*, %HADOOP_YARN_HOME%/share/hadoop/yarn/*, %HADOOP_YARN_HOME%/share/hadoop/yarn/lib/* yarn.application.classpath Indicate what is the current version of the running timeline service. For example, if "yarn.timeline-service.version" is 1.5, and "yarn.timeline-service.enabled" is true, it means the cluster will and should bring up the timeline service v.1.5. On the client side, if the client uses the same version of timeline service, it should succeed. If the client chooses to use a smaller version in spite of this, then depending on how robust the compatibility story is between versions, the results may vary. yarn.timeline-service.version 1.0f In the server side it indicates whether timeline service is enabled or not. And in the client side, users can enable it to indicate whether client wants to use timeline service. If it's enabled in the client side along with security, then yarn client tries to fetch the delegation tokens for the timeline server. yarn.timeline-service.enabled false The hostname of the timeline service web application. yarn.timeline-service.hostname 0.0.0.0 This is default address for the timeline server to start the RPC server. yarn.timeline-service.address ${yarn.timeline-service.hostname}:10200 The http address of the timeline service web application. yarn.timeline-service.webapp.address ${yarn.timeline-service.hostname}:8188 The https address of the timeline service web application. yarn.timeline-service.webapp.https.address ${yarn.timeline-service.hostname}:8190 The actual address the server will bind to. If this optional address is set, the RPC and webapp servers will bind to this address and the port specified in yarn.timeline-service.address and yarn.timeline-service.webapp.address, respectively. This is most useful for making the service listen to all interfaces by setting to 0.0.0.0. yarn.timeline-service.bind-host Defines the max number of applications could be fetched using REST API or application history protocol and shown in timeline server web ui. yarn.timeline-service.generic-application-history.max-applications 10000 Store class name for timeline store. yarn.timeline-service.store-class org.apache.hadoop.yarn.server.timeline.LeveldbTimelineStore Enable age off of timeline store data. yarn.timeline-service.ttl-enable true Time to live for timeline store data in milliseconds. yarn.timeline-service.ttl-ms 604800000 Store file name for leveldb timeline store. yarn.timeline-service.leveldb-timeline-store.path ${hadoop.tmp.dir}/yarn/timeline Length of time to wait between deletion cycles of leveldb timeline store in milliseconds. yarn.timeline-service.leveldb-timeline-store.ttl-interval-ms 300000 Size of read cache for uncompressed blocks for leveldb timeline store in bytes. yarn.timeline-service.leveldb-timeline-store.read-cache-size 104857600 Size of cache for recently read entity start times for leveldb timeline store in number of entities. 
yarn.timeline-service.leveldb-timeline-store.start-time-read-cache-size 10000 Size of cache for recently written entity start times for leveldb timeline store in number of entities. yarn.timeline-service.leveldb-timeline-store.start-time-write-cache-size 10000 Handler thread count to serve the client RPC requests. yarn.timeline-service.handler-thread-count 10 yarn.timeline-service.http-authentication.type simple Defines authentication used for the timeline server HTTP endpoint. Supported values are: simple | kerberos | #AUTHENTICATION_HANDLER_CLASSNAME# yarn.timeline-service.http-authentication.simple.anonymous.allowed true Indicates if anonymous requests are allowed by the timeline server when using 'simple' authentication. The Kerberos principal for the timeline server. yarn.timeline-service.principal The Kerberos keytab for the timeline server. yarn.timeline-service.keytab /etc/krb5.keytab Comma-separated list of UIs that will be hosted. yarn.timeline-service.ui-names Default maximum number of retries for the timeline service client; a value of -1 means no limit. yarn.timeline-service.client.max-retries 30 Client policy for whether timeline operations are non-fatal. Should the failure to obtain a delegation token be considered an application failure (option = false), or should the client attempt to continue to publish information without it (option = true)? yarn.timeline-service.client.best-effort false Default retry time interval for the timeline service client. yarn.timeline-service.client.retry-interval-ms 1000 Enable timeline server to recover state after starting. If true, then yarn.timeline-service.state-store-class must be specified. yarn.timeline-service.recovery.enabled false Store class name for timeline state store. yarn.timeline-service.state-store-class org.apache.hadoop.yarn.server.timeline.recovery.LeveldbTimelineStateStore Store file name for leveldb state store. yarn.timeline-service.leveldb-state-store.path ${hadoop.tmp.dir}/yarn/timeline yarn.timeline-service.entity-group-fs-store.active-dir /tmp/entity-file-history/active HDFS path to store active application's timeline data. yarn.timeline-service.entity-group-fs-store.done-dir /tmp/entity-file-history/done/ HDFS path to store done application's timeline data. yarn.timeline-service.entity-group-fs-store.group-id-plugin-classes Plugins that can translate a timeline entity read request into a list of timeline entity group ids, separated by commas. yarn.timeline-service.entity-group-fs-store.summary-store Summary storage for ATS v1.5 org.apache.hadoop.yarn.server.timeline.LeveldbTimelineStore yarn.timeline-service.entity-group-fs-store.scan-interval-seconds Scan interval for ATS v1.5 entity group file system storage reader. This value controls how frequently the reader will scan the HDFS active directory for application status. 60 yarn.timeline-service.entity-group-fs-store.cleaner-interval-seconds Scan interval for ATS v1.5 entity group file system storage cleaner. This value controls how frequently the reader will scan the HDFS done directory for stale application data. 3600 yarn.timeline-service.entity-group-fs-store.retain-seconds How long the ATS v1.5 entity group file system storage will keep an application's data in the done directory. 604800 yarn.timeline-service.entity-group-fs-store.leveldb-cache-read-cache-size Read cache size for the leveldb cache storage in ATS v1.5 plugin storage. 10485760 yarn.timeline-service.entity-group-fs-store.app-cache-size Size of the reader cache for the ATS v1.5 reader. 
This value controls how many entity groups the ATS v1.5 server should cache. If the number of active read entity groups is greater than the number of cached items, some reads may return empty data. This value must be greater than 0. 10 yarn.timeline-service.client.fd-flush-interval-secs Flush interval for the ATS v1.5 writer. This value controls how frequently the writer will flush the HDFS FSStream for the entity/domain. 10 yarn.timeline-service.client.fd-clean-interval-secs Scan interval for the ATS v1.5 writer. This value controls how frequently the writer will scan the HDFS FSStream for the entity/domain. If the FSStream is stale for a long time, this FSStream will be closed. 60 yarn.timeline-service.client.fd-retain-secs How long the ATS v1.5 writer will keep an FSStream open. If this FSStream does not write anything for this configured time, it will be closed. 300 yarn.timeline-service.client.internal-timers-ttl-secs How long the internal Timer Tasks can be alive in the writer. If there is no write operation for this configured time, the internal timer tasks will be closed. 420 Whether the shared cache is enabled. yarn.sharedcache.enabled false The root directory for the shared cache. yarn.sharedcache.root-dir /sharedcache The level of nested directories before getting to the checksum directories. It must be non-negative. yarn.sharedcache.nested-level 3 The implementation to be used for the SCM store. yarn.sharedcache.store.class org.apache.hadoop.yarn.server.sharedcachemanager.store.InMemorySCMStore The implementation to be used for the SCM app-checker. yarn.sharedcache.app-checker.class org.apache.hadoop.yarn.server.sharedcachemanager.RemoteAppChecker A resource in the in-memory store is considered stale if the time since the last reference exceeds the staleness period. This value is specified in minutes. yarn.sharedcache.store.in-memory.staleness-period-mins 10080 Initial delay before the in-memory store runs its first check to remove dead initial applications. Specified in minutes. yarn.sharedcache.store.in-memory.initial-delay-mins 10 The frequency at which the in-memory store checks to remove dead initial applications. Specified in minutes. yarn.sharedcache.store.in-memory.check-period-mins 720 The address of the admin interface in the SCM (shared cache manager). yarn.sharedcache.admin.address 0.0.0.0:8047 The number of threads used to handle the SCM admin interface (1 by default). yarn.sharedcache.admin.thread-count 1 The address of the web application in the SCM (shared cache manager). yarn.sharedcache.webapp.address 0.0.0.0:8788 The frequency at which a cleaner task runs. Specified in minutes. yarn.sharedcache.cleaner.period-mins 1440 Initial delay before the first cleaner task is scheduled. Specified in minutes. yarn.sharedcache.cleaner.initial-delay-mins 10 The time to sleep between processing each shared cache resource. Specified in milliseconds. 
yarn.sharedcache.cleaner.resource-sleep-ms 0 The address of the node manager interface in the SCM (shared cache manager) yarn.sharedcache.uploader.server.address 0.0.0.0:8046 The number of threads used to handle shared cache manager requests from the node manager (50 by default) yarn.sharedcache.uploader.server.thread-count 50 The address of the client interface in the SCM (shared cache manager) yarn.sharedcache.client-server.address 0.0.0.0:8045 The number of threads used to handle shared cache manager requests from clients (50 by default) yarn.sharedcache.client-server.thread-count 50 The algorithm used to compute checksums of files (SHA-256 by default) yarn.sharedcache.checksum.algo.impl org.apache.hadoop.yarn.sharedcache.ChecksumSHA256Impl The replication factor for the node manager uploader for the shared cache (10 by default) yarn.sharedcache.nm.uploader.replication.factor 10 The number of threads used to upload files from a node manager instance (20 by default) yarn.sharedcache.nm.uploader.thread-count 20 ACL protocol for use in the Timeline server. security.applicationhistory.protocol.acl Set to true for MiniYARNCluster unit tests yarn.is.minicluster false Set for MiniYARNCluster unit tests to control resource monitoring yarn.minicluster.control-resource-monitoring false Set to false in order to allow MiniYARNCluster to run tests without port conflicts. yarn.minicluster.fixed.ports false Set to false in order to allow the NodeManager in MiniYARNCluster to use RPC to talk to the RM. yarn.minicluster.use-rpc false As yarn.nodemanager.resource.memory-mb property but for the NodeManager in a MiniYARNCluster. yarn.minicluster.yarn.nodemanager.resource.memory-mb 4096 Enable node labels feature yarn.node-labels.enabled false Retry policy used for FileSystem node label store. The policy is specified by N pairs of sleep-time in milliseconds and number-of-retries "s1,n1,s2,n2,...". yarn.node-labels.fs-store.retry-policy-spec 2000, 500 URI for NodeLabelManager. The default value is /tmp/hadoop-yarn-${user}/node-labels/ in the local filesystem. yarn.node-labels.fs-store.root-dir Set configuration type for node labels. Administrators can specify "centralized", "delegated-centralized" or "distributed". yarn.node-labels.configuration-type centralized When "yarn.node-labels.configuration-type" is configured with "distributed" in RM, Administrators can configure in NM the provider for the node labels by configuring this parameter. Administrators can configure "config", "script" or the class name of the provider. Configured class needs to extend org.apache.hadoop.yarn.server.nodemanager.nodelabels.NodeLabelsProvider. If "config" is configured, then "ConfigurationNodeLabelsProvider" and if "script" is configured, then "ScriptNodeLabelsProvider" will be used. yarn.nodemanager.node-labels.provider When "yarn.nodemanager.node-labels.provider" is configured with "config", "Script" or the configured class extends AbstractNodeLabelsProvider, then periodically node labels are retrieved from the node labels provider. This configuration is to define the interval period. If -1 is configured then node labels are retrieved from provider only during initialization. Defaults to 10 mins. yarn.nodemanager.node-labels.provider.fetch-interval-ms 600000 Interval at which NM syncs its node labels with RM. NM will send its loaded labels every x intervals configured, along with heartbeat to RM. 
yarn.nodemanager.node-labels.resync-interval-ms 120000 When "yarn.nodemanager.node-labels.provider" is configured with "config" then ConfigurationNodeLabelsProvider fetches the partition label from this parameter. yarn.nodemanager.node-labels.provider.configured-node-partition When "yarn.nodemanager.node-labels.provider" is configured with "Script" then this configuration provides the timeout period after which it will interrupt the script which queries the Node labels. Defaults to 20 mins. yarn.nodemanager.node-labels.provider.fetch-timeout-ms 1200000 When node labels "yarn.node-labels.configuration-type" is of type "delegated-centralized", administrators should configure the class for fetching node labels by ResourceManager. Configured class needs to extend org.apache.hadoop.yarn.server.resourcemanager.nodelabels. RMNodeLabelsMappingProvider. yarn.resourcemanager.node-labels.provider When "yarn.node-labels.configuration-type" is configured with "delegated-centralized", then periodically node labels are retrieved from the node labels provider. This configuration is to define the interval. If -1 is configured then node labels are retrieved from provider only once for each node after it registers. Defaults to 30 mins. yarn.resourcemanager.node-labels.provider.fetch-interval-ms 1800000 The Node Label script to run. Script output Line starting with "NODE_PARTITION:" will be considered as Node Label Partition. In case of multiple lines have this pattern, then last one will be considered yarn.nodemanager.node-labels.provider.script.path The arguments to pass to the Node label script. yarn.nodemanager.node-labels.provider.script.opts The interval that the yarn client library uses to poll the completion status of the asynchronous API of application client protocol. yarn.client.application-client-protocol.poll-interval-ms 200 The duration (in ms) the YARN client waits for an expected state change to occur. -1 means unlimited wait time. yarn.client.application-client-protocol.poll-timeout-ms -1 RSS usage of a process computed via /proc/pid/stat is not very accurate as it includes shared pages of a process. /proc/pid/smaps provides useful information like Private_Dirty, Private_Clean, Shared_Dirty, Shared_Clean which can be used for computing more accurate RSS. When this flag is enabled, RSS is computed as Min(Shared_Dirty, Pss) + Private_Clean + Private_Dirty. It excludes read-only shared mappings in RSS computation. yarn.nodemanager.container-monitor.procfs-tree.smaps-based-rss.enabled false URL for log aggregation server yarn.log.server.url RM Application Tracking URL yarn.tracking.url.generator Class to be used for YarnAuthorizationProvider yarn.authorization-provider Defines how often NMs wake up to upload log files. The default value is -1. By default, the logs will be uploaded when the application is finished. By setting this configure, logs can be uploaded periodically when the application is running. The minimum rolling-interval-seconds can be set is 3600. yarn.nodemanager.log-aggregation.roll-monitoring-interval-seconds -1 Enable/disable intermediate-data encryption at YARN level. For now, this only is used by the FileSystemRMStateStore to setup right file-system security attributes. yarn.intermediate-data-encryption.enable false Flag to enable cross-origin (CORS) support in the NM. This flag requires the CORS filter initializer to be added to the filter initializers list in core-site.xml. yarn.nodemanager.webapp.cross-origin.enabled false Defines maximum application priority in a cluster. 
If an application is submitted with a priority higher than this value, it will be reset to this maximum value. yarn.cluster.max-application-priority 0 The default log aggregation policy class. Applications can override it via LogAggregationContext. This configuration can provide some cluster-side default behavior so that if the application doesn't specify any policy via LogAggregationContext administrators of the cluster can adjust the policy globally. yarn.nodemanager.log-aggregation.policy.class org.apache.hadoop.yarn.server.nodemanager.containermanager.logaggregation.AllContainerLogAggregationPolicy The default parameters for the log aggregation policy. Applications can override it via LogAggregationContext. This configuration can provide some cluster-side default behavior so that if the application doesn't specify any policy via LogAggregationContext administrators of the cluster can adjust the policy globally. yarn.nodemanager.log-aggregation.policy.parameters Enable/Disable AMRMProxyService in the node manager. This service is used to intercept calls from the application masters to the resource manager. yarn.nodemanager.amrmproxy.enable false The address of the AMRMProxyService listener. yarn.nodemanager.amrmproxy.address 0.0.0.0:8048 The number of threads used to handle requests by the AMRMProxyService. yarn.nodemanager.amrmproxy.client.thread-count 25 The comma separated list of class names that implement the RequestInterceptor interface. This is used by the AMRMProxyService to create the request processing pipeline for applications. yarn.nodemanager.amrmproxy.interceptor-class.pipeline org.apache.hadoop.yarn.server.nodemanager.amrmproxy.DefaultRequestInterceptor Choose different implementation of node label's storage yarn.node-labels.fs-store.impl.class org.apache.hadoop.yarn.nodelabels.FileSystemNodeLabelsStore The least amount of time(msec.) an inactive (decommissioned or shutdown) node can stay in the nodes list of the resourcemanager after being declared untracked. A node is marked untracked if and only if it is absent from both include and exclude nodemanager lists on the RM. All inactive nodes are checked twice per timeout interval or every 10 minutes, whichever is lesser, and marked appropriately. The same is done when refreshNodes command (graceful or otherwise) is invoked. yarn.resourcemanager.node-removal-untracked.timeout-ms 60000 ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1696419316.0 sahara-plugin-vanilla-10.0.0/sahara_plugin_vanilla/plugins/vanilla/v2_8_2/versionhandler.py0000664000175000017500000001435400000000000032027 0ustar00zuulzuul00000000000000# Copyright (c) 2015 Mirantis Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. 
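# This version handler wires together the pieces the Vanilla plugin needs for
# a Hadoop 2.8.2 cluster: the plugin configs, the node processes exposed per
# service, cluster validation, the start/scale/decommission flows, and the
# EDP engines (Oozie and Spark) used to run jobs on the cluster.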
from oslo_config import cfg from sahara.plugins import conductor from sahara.plugins import context from sahara.plugins import swift_helper from sahara.plugins import utils from sahara_plugin_vanilla.plugins.vanilla import abstractversionhandler as avm from sahara_plugin_vanilla.plugins.vanilla.hadoop2 import config as c from sahara_plugin_vanilla.plugins.vanilla.hadoop2 import keypairs from sahara_plugin_vanilla.plugins.vanilla.hadoop2 import recommendations_utils from sahara_plugin_vanilla.plugins.vanilla.hadoop2 import run_scripts as run from sahara_plugin_vanilla.plugins.vanilla.hadoop2 import scaling as sc from sahara_plugin_vanilla.plugins.vanilla.hadoop2 import starting_scripts from sahara_plugin_vanilla.plugins.vanilla.hadoop2 import utils as u from sahara_plugin_vanilla.plugins.vanilla.hadoop2 import validation as vl from sahara_plugin_vanilla.plugins.vanilla import utils as vu from sahara_plugin_vanilla.plugins.vanilla.v2_8_2 import config_helper from sahara_plugin_vanilla.plugins.vanilla.v2_8_2 import edp_engine CONF = cfg.CONF class VersionHandler(avm.AbstractVersionHandler): def __init__(self): self.pctx = { 'env_confs': config_helper.get_env_configs(), 'all_confs': config_helper.get_plugin_configs() } def get_plugin_configs(self): return self.pctx['all_confs'] def get_node_processes(self): return { "Hadoop": [], "MapReduce": ["historyserver"], "HDFS": ["namenode", "datanode", "secondarynamenode"], "YARN": ["resourcemanager", "nodemanager"], "JobFlow": ["oozie"], "Hive": ["hiveserver"], "Spark": ["spark history server"], "ZooKeeper": ["zookeeper"] } def validate(self, cluster): vl.validate_cluster_creating(self.pctx, cluster) def update_infra(self, cluster): pass def configure_cluster(self, cluster): c.configure_cluster(self.pctx, cluster) def start_cluster(self, cluster): keypairs.provision_keypairs(cluster) starting_scripts.start_namenode(cluster) starting_scripts.start_secondarynamenode(cluster) starting_scripts.start_resourcemanager(cluster) run.start_dn_nm_processes(utils.get_instances(cluster)) run.await_datanodes(cluster) starting_scripts.start_historyserver(cluster) starting_scripts.start_oozie(self.pctx, cluster) starting_scripts.start_hiveserver(self.pctx, cluster) starting_scripts.start_zookeeper(cluster) swift_helper.install_ssl_certs(utils.get_instances(cluster)) self._set_cluster_info(cluster) starting_scripts.start_spark(cluster) def decommission_nodes(self, cluster, instances): sc.decommission_nodes(self.pctx, cluster, instances) def validate_scaling(self, cluster, existing, additional): vl.validate_additional_ng_scaling(cluster, additional) vl.validate_existing_ng_scaling(self.pctx, cluster, existing) zk_ng = utils.get_node_groups(cluster, "zookeeper") if zk_ng: vl.validate_zookeeper_node_count(zk_ng, existing, additional) def scale_cluster(self, cluster, instances): keypairs.provision_keypairs(cluster, instances) sc.scale_cluster(self.pctx, cluster, instances) def _set_cluster_info(self, cluster): nn = vu.get_namenode(cluster) rm = vu.get_resourcemanager(cluster) hs = vu.get_historyserver(cluster) oo = vu.get_oozie(cluster) sp = vu.get_spark_history_server(cluster) info = {} if rm: info['YARN'] = { 'Web UI': 'http://%s:%s' % (rm.get_ip_or_dns_name(), '8088'), 'ResourceManager': 'http://%s:%s' % ( rm.get_ip_or_dns_name(), '8032') } if nn: info['HDFS'] = { 'Web UI': 'http://%s:%s' % (nn.get_ip_or_dns_name(), '50070'), 'NameNode': 'hdfs://%s:%s' % (nn.hostname(), '9000') } if oo: info['JobFlow'] = { 'Oozie': 'http://%s:%s' % (oo.get_ip_or_dns_name(), '11000') } 
if hs: info['MapReduce JobHistory Server'] = { 'Web UI': 'http://%s:%s' % (hs.get_ip_or_dns_name(), '19888') } if sp: info['Apache Spark'] = { 'Spark UI': 'http://%s:%s' % (sp.management_ip, '4040'), 'Spark History Server UI': 'http://%s:%s' % (sp.management_ip, '18080') } ctx = context.ctx() conductor.cluster_update(ctx, cluster, {'info': info}) def get_edp_engine(self, cluster, job_type): if job_type in edp_engine.EdpOozieEngine.get_supported_job_types(): return edp_engine.EdpOozieEngine(cluster) if job_type in edp_engine.EdpSparkEngine.get_supported_job_types(): return edp_engine.EdpSparkEngine(cluster) return None def get_edp_job_types(self): return (edp_engine.EdpOozieEngine.get_supported_job_types() + edp_engine.EdpSparkEngine.get_supported_job_types()) def get_edp_config_hints(self, job_type): return edp_engine.EdpOozieEngine.get_possible_job_config(job_type) def on_terminate_cluster(self, cluster): u.delete_oozie_password(cluster) keypairs.drop_key(cluster) def get_open_ports(self, node_group): return c.get_open_ports(node_group) def recommend_configs(self, cluster, scaling): recommendations_utils.recommend_configs(cluster, self.get_plugin_configs(), scaling) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1696419316.0 sahara-plugin-vanilla-10.0.0/sahara_plugin_vanilla/plugins/vanilla/versionfactory.py0000664000175000017500000000364700000000000031065 0ustar00zuulzuul00000000000000# Copyright (c) 2014 Mirantis, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. import os import re from sahara.plugins import utils class VersionFactory(object): versions = None modules = None initialized = False @staticmethod def get_instance(): if not VersionFactory.initialized: src_dir = os.path.join(os.path.dirname(__file__), '') versions = ( [name[1:].replace('_', '.') for name in os.listdir(src_dir) if (os.path.isdir(os.path.join(src_dir, name)) and re.match(r'^v\d+_\d+_\d+$', name))]) versions.sort(key=utils.natural_sort_key) VersionFactory.versions = versions VersionFactory.modules = {} for version in VersionFactory.versions: module_name = ('sahara_plugin_vanilla.plugins.vanilla.v%s.' 
'versionhandler' % (version.replace('.', '_'))) module_class = getattr( __import__(module_name, fromlist=['sahara']), 'VersionHandler') module = module_class() VersionFactory.modules[version] = module VersionFactory.initialized = True return VersionFactory() def get_versions(self): return VersionFactory.versions def get_version_handler(self, version): return VersionFactory.modules[version] ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1696419356.9684794 sahara-plugin-vanilla-10.0.0/sahara_plugin_vanilla/tests/0000775000175000017500000000000000000000000023457 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1696419316.0 sahara-plugin-vanilla-10.0.0/sahara_plugin_vanilla/tests/__init__.py0000664000175000017500000000121300000000000025565 0ustar00zuulzuul00000000000000# Copyright (c) 2014 Mirantis Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. from sahara_plugin_vanilla.utils import patches patches.patch_all() ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1696419356.9684794 sahara-plugin-vanilla-10.0.0/sahara_plugin_vanilla/tests/unit/0000775000175000017500000000000000000000000024436 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1696419316.0 sahara-plugin-vanilla-10.0.0/sahara_plugin_vanilla/tests/unit/__init__.py0000664000175000017500000000000000000000000026535 0ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1696419316.0 sahara-plugin-vanilla-10.0.0/sahara_plugin_vanilla/tests/unit/base.py0000664000175000017500000000354400000000000025730 0ustar00zuulzuul00000000000000# Copyright (c) 2013 Mirantis Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. 
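# Shared unit-test base classes: SaharaTestCase sets up a plugins context and
# RPC for each test, while SaharaWithDbTestCase additionally points the
# database connection at an in-memory sqlite store and creates/drops the
# schema around each test.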
from oslotest import base from sahara.plugins import context from sahara.plugins import db as db_api from sahara.plugins import main from sahara.plugins import utils class SaharaTestCase(base.BaseTestCase): def setUp(self): super(SaharaTestCase, self).setUp() self.setup_context() utils.rpc_setup('all-in-one') def setup_context(self, username="test_user", tenant_id="tenant_1", auth_token="test_auth_token", tenant_name='test_tenant', service_catalog=None, **kwargs): self.addCleanup(context.set_ctx, context.ctx() if context.has_ctx() else None) context.set_ctx(context.PluginsContext( username=username, tenant_id=tenant_id, auth_token=auth_token, service_catalog=service_catalog or {}, tenant_name=tenant_name, **kwargs)) def override_config(self, name, override, group=None): main.set_override(name, override, group) self.addCleanup(main.clear_override, name, group) class SaharaWithDbTestCase(SaharaTestCase): def setUp(self): super(SaharaWithDbTestCase, self).setUp() self.override_config('connection', "sqlite://", group='database') db_api.setup_db() self.addCleanup(db_api.drop_db) ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1696419356.9684794 sahara-plugin-vanilla-10.0.0/sahara_plugin_vanilla/tests/unit/plugins/0000775000175000017500000000000000000000000026117 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1696419316.0 sahara-plugin-vanilla-10.0.0/sahara_plugin_vanilla/tests/unit/plugins/__init__.py0000664000175000017500000000000000000000000030216 0ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1696419356.9684794 sahara-plugin-vanilla-10.0.0/sahara_plugin_vanilla/tests/unit/plugins/vanilla/0000775000175000017500000000000000000000000027545 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1696419316.0 sahara-plugin-vanilla-10.0.0/sahara_plugin_vanilla/tests/unit/plugins/vanilla/__init__.py0000664000175000017500000000000000000000000031644 0ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1696419356.9724793 sahara-plugin-vanilla-10.0.0/sahara_plugin_vanilla/tests/unit/plugins/vanilla/hadoop2/0000775000175000017500000000000000000000000031101 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1696419316.0 sahara-plugin-vanilla-10.0.0/sahara_plugin_vanilla/tests/unit/plugins/vanilla/hadoop2/__init__.py0000664000175000017500000000000000000000000033200 0ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1696419356.9724793 sahara-plugin-vanilla-10.0.0/sahara_plugin_vanilla/tests/unit/plugins/vanilla/hadoop2/resources/0000775000175000017500000000000000000000000033113 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000021600000000000011454 xustar0000000000000000120 path=sahara-plugin-vanilla-10.0.0/sahara_plugin_vanilla/tests/unit/plugins/vanilla/hadoop2/resources/dfs-report.txt 22 mtime=1696419316.0 sahara-plugin-vanilla-10.0.0/sahara_plugin_vanilla/tests/unit/plugins/vanilla/hadoop2/resources/dfs-0000664000175000017500000000332700000000000033674 0ustar00zuulzuul00000000000000Configured Capacity: 60249329664 (56.11 GB) Present Capacity: 50438139904 (46.97 GB) DFS Remaining: 50438041600 (46.97 
GB) DFS Used: 98304 (96 KB) DFS Used%: 0.00% Under replicated blocks: 0 Blocks with corrupt replicas: 0 Missing blocks: 0 ------------------------------------------------- Datanodes available: 4 (4 total, 0 dead) Live datanodes: Name: 10.50.0.22:50010 (cluster-worker-001.novalocal) Hostname: cluster-worker-001.novalocal Decommission Status : Normal Configured Capacity: 20083101696 (18.70 GB) DFS Used: 24576 (24 KB) Non DFS Used: 3270406144 (3.05 GB) DFS Remaining: 16812670976 (15.66 GB) DFS Used%: 0.00% DFS Remaining%: 83.72% Last contact: Mon Feb 24 13:41:13 UTC 2014 Name: 10.50.0.36:50010 (cluster-worker-003.novalocal) Hostname: cluster-worker-003.novalocal Decommission Status : Normal Configured Capacity: 20083101696 (18.70 GB) DFS Used: 24576 (24 KB) Non DFS Used: 3270393856 (3.05 GB) DFS Remaining: 16812683264 (15.66 GB) DFS Used%: 0.00% DFS Remaining%: 83.72% Last contact: Mon Feb 24 13:41:11 UTC 2014 Name: 10.50.0.25:50010 (cluster-worker-002.novalocal) Hostname: cluster-worker-002.novalocal Decommission Status : Normal Configured Capacity: 20083101696 (18.70 GB) DFS Used: 24576 (24 KB) Non DFS Used: 3270389760 (3.05 GB) DFS Remaining: 16812687360 (15.66 GB) DFS Used%: 0.00% DFS Remaining%: 83.72% Last contact: Mon Feb 24 13:41:12 UTC 2014 Name: 10.50.0.60:50010 (cluster-worker-004.novalocal) Hostname: cluster-worker-004.novalocal Decommission Status : Decommissioned Configured Capacity: 20083101696 (18.70 GB) DFS Used: 24576 (24 KB) Non DFS Used: 3270316032 (3.05 GB) DFS Remaining: 16812761088 (15.66 GB) DFS Used%: 0.00% DFS Remaining%: 83.72% Last contact: Mon Feb 24 13:33:33 UTC 2014 ././@PaxHeader0000000000000000000000000000021700000000000011455 xustar0000000000000000121 path=sahara-plugin-vanilla-10.0.0/sahara_plugin_vanilla/tests/unit/plugins/vanilla/hadoop2/resources/yarn-report.txt 22 mtime=1696419316.0 sahara-plugin-vanilla-10.0.0/sahara_plugin_vanilla/tests/unit/plugins/vanilla/hadoop2/resources/yarn0000664000175000017500000000106700000000000034013 0ustar00zuulzuul00000000000000Total Nodes:4 Node-Id Node-State Node-Http-Address Number-of-Running-Containers cluster-worker-001.novalocal:54746 RUNNING cluster-worker-001.novalocal:8042 0 cluster-worker-002.novalocal:53509 RUNNING cluster-worker-002.novalocal:8042 0 cluster-worker-003.novalocal:60418 RUNNING cluster-worker-003.novalocal:8042 0 cluster-worker-004.novalocal:33876 DECOMMISSIONED cluster-worker-004.novalocal:8042 0 ././@PaxHeader0000000000000000000000000000021300000000000011451 xustar0000000000000000117 path=sahara-plugin-vanilla-10.0.0/sahara_plugin_vanilla/tests/unit/plugins/vanilla/hadoop2/test_config_helper.py 22 mtime=1696419316.0 sahara-plugin-vanilla-10.0.0/sahara_plugin_vanilla/tests/unit/plugins/vanilla/hadoop2/test_config_he0000664000175000017500000002142300000000000034006 0ustar00zuulzuul00000000000000# Copyright (c) 2017 EasyStack Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. 
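# Unit tests for the hadoop2 config_helper module; cluster objects and lookups
# such as get_config_value_or_default and get_file_text are replaced with
# mocks so the helpers can be exercised without provisioning a real cluster.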
from unittest import mock from oslo_config import cfg from sahara.plugins import exceptions as ex from sahara.plugins import provisioning as p from sahara_plugin_vanilla.plugins.vanilla.hadoop2 import config_helper from sahara_plugin_vanilla.tests.unit import base class TestConfigHelper(base.SaharaTestCase): plugin_path = 'sahara_plugin_vanilla.plugins.vanilla.hadoop2.' def setUp(self): super(TestConfigHelper, self).setUp() self.pctx = mock.Mock() self.applicable_target = mock.Mock() self.name = mock.Mock() self.cluster = mock.Mock() self.CONF = cfg.CONF self.CONF.import_opt("enable_data_locality", "sahara.topology.topology_helper") def test_init_env_configs(self): ENV_CONFS = { "YARN": { 'ResourceManager Heap Size': 1024, 'NodeManager Heap Size': 1024 }, "HDFS": { 'NameNode Heap Size': 1024, 'SecondaryNameNode Heap Size': 1024, 'DataNode Heap Size': 1024 }, "MapReduce": { 'JobHistoryServer Heap Size': 1024 }, "JobFlow": { 'Oozie Heap Size': 1024 } } configs = config_helper.init_env_configs(ENV_CONFS) for config in configs: self.assertIsInstance(config, p.Config) def test_init_general_configs(self): sample_configs = [config_helper.ENABLE_SWIFT, config_helper.ENABLE_MYSQL, config_helper.DATANODES_STARTUP_TIMEOUT, config_helper.DATANODES_DECOMMISSIONING_TIMEOUT, config_helper.NODEMANAGERS_DECOMMISSIONING_TIMEOUT] self.CONF.enable_data_locality = False self.assertEqual(config_helper._init_general_configs(), sample_configs) sample_configs.append(config_helper.ENABLE_DATA_LOCALITY) self.CONF.enable_data_locality = True self.assertEqual(config_helper._init_general_configs(), sample_configs) def test_get_config_value(self): cluster = mock.Mock() ng = mock.Mock() ng.configuration.return_value = mock.Mock() ng.configuration.return_value.get.return_value = mock.Mock() cl = 'test' ng.configuration.return_value.get.return_value.get.return_value = cl cluster.node_groups = [ng] cl_param = config_helper.get_config_value('pctx', 'service', 'name', cluster) self.assertEqual(cl, cl_param) all_confs = mock.Mock() all_confs.applicable_target = 'service' all_confs.name = 'name' all_confs.default_value = 'default' pctx = {'all_confs': [all_confs]} value = config_helper.get_config_value(pctx, 'service', 'name') self.assertEqual(value, 'default') pctx = {'all_confs': []} self.assertRaises(ex.PluginNotFoundException, config_helper.get_config_value, pctx, 'service', 'name') @mock.patch(plugin_path + 'config_helper.get_config_value') def test_is_swift_enabled(self, get_config_value): target = config_helper.ENABLE_SWIFT.applicable_target name = config_helper.ENABLE_SWIFT.name config_helper.is_swift_enabled(self.pctx, self.cluster) get_config_value.assert_called_once_with(self.pctx, target, name, self.cluster) @mock.patch(plugin_path + 'config_helper.get_config_value') def test_is_mysql_enabled(self, get_config_value): target = config_helper.ENABLE_MYSQL.applicable_target name = config_helper.ENABLE_MYSQL.name config_helper.is_mysql_enabled(self.pctx, self.cluster) get_config_value.assert_called_once_with(self.pctx, target, name, self.cluster) @mock.patch(plugin_path + 'config_helper.get_config_value') def test_is_data_locality_enabled(self, get_config_value): self.CONF.enable_data_locality = False enabled = config_helper.is_data_locality_enabled(self.pctx, self.cluster) self.assertEqual(enabled, False) self.CONF.enable_data_locality = True target = config_helper.ENABLE_DATA_LOCALITY.applicable_target name = config_helper.ENABLE_DATA_LOCALITY.name config_helper.is_data_locality_enabled(self.pctx, self.cluster) 
get_config_value.assert_called_once_with(self.pctx, target, name, self.cluster) def test_generate_spark_env_configs(self): configs = 'HADOOP_CONF_DIR=/opt/hadoop/etc/hadoop\n' \ 'YARN_CONF_DIR=/opt/hadoop/etc/hadoop' ret = config_helper.generate_spark_env_configs(self.cluster) self.assertEqual(ret, configs) @mock.patch('sahara.plugins.utils.get_config_value_or_default') def test_generate_spark_executor_classpath(self, get_config_value_or_default): get_config_value_or_default.return_value = None path = 'Executor extra classpath' ret = config_helper.generate_spark_executor_classpath(self.cluster) get_config_value_or_default.assert_called_once_with('Spark', path, self.cluster) self.assertEqual(ret, '\n') get_config_value_or_default.return_value = 'test' ret = config_helper.generate_spark_executor_classpath(self.cluster) self.assertEqual(ret, 'spark.executor.extraClassPath test') @mock.patch('sahara.plugins.utils.get_file_text') @mock.patch('sahara.plugins.utils.get_config_value_or_default') def test_generate_job_cleanup_config(self, get_config_value_or_default, get_file_text): cron = 'MINIMUM_CLEANUP_MEGABYTES={minimum_cleanup_megabytes};' + \ 'MINIMUM_CLEANUP_SECONDS={minimum_cleanup_seconds};' + \ 'MAXIMUM_CLEANUP_SECONDS={maximum_cleanup_seconds};' script = 'MINIMUM_CLEANUP_MEGABYTES=1;' + \ 'MINIMUM_CLEANUP_SECONDS=1;' + \ 'MAXIMUM_CLEANUP_SECONDS=1;' job_conf = {'valid': True, 'cron': (cron,), 'script': script} get_file_text.return_value = cron get_config_value_or_default.return_value = 1 ret = config_helper.generate_job_cleanup_config(self.cluster) self.assertEqual(get_config_value_or_default.call_count, 3) self.assertEqual(get_file_text.call_count, 2) self.assertEqual(ret, job_conf) job_conf = {'valid': False} get_config_value_or_default.return_value = 0 ret = config_helper.generate_job_cleanup_config(self.cluster) self.assertEqual(get_config_value_or_default.call_count, 6) self.assertEqual(ret, job_conf) @mock.patch('sahara.plugins.utils.get_config_value_or_default') def test_get_spark_home(self, get_config_value_or_default): get_config_value_or_default.return_value = 1 self.assertEqual(config_helper.get_spark_home(self.cluster), 1) get_config_value_or_default.assert_called_once_with('Spark', 'Spark home', self.cluster) @mock.patch('sahara.plugins.utils.get_file_text') @mock.patch('sahara.plugins.utils.get_config_value_or_default') def test_generate_zk_basic_config(self, get_config_value_or_default, get_file_text): key = ('tickTime={ticktime}\n' + 'initLimit={initlimit}\n' + 'syncLimit={synclimit}\n') actual = 'tickTime=5\ninitLimit=5\nsyncLimit=5\n' get_config_value_or_default.return_value = 5 get_file_text.return_value = key ret = config_helper.generate_zk_basic_config(self.cluster) self.assertEqual(get_config_value_or_default.call_count, 3) self.assertEqual(ret, actual) ././@PaxHeader0000000000000000000000000000020500000000000011452 xustar0000000000000000111 path=sahara-plugin-vanilla-10.0.0/sahara_plugin_vanilla/tests/unit/plugins/vanilla/hadoop2/test_configs.py 22 mtime=1696419316.0 sahara-plugin-vanilla-10.0.0/sahara_plugin_vanilla/tests/unit/plugins/vanilla/hadoop2/test_configs.p0000664000175000017500000000274400000000000033760 0ustar00zuulzuul00000000000000# Copyright (c) 2014 Mirantis Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. from sahara_plugin_vanilla.plugins.vanilla.hadoop2 import config as c from sahara_plugin_vanilla.tests.unit import base class VanillaTwoConfigTestCase(base.SaharaTestCase): def test_get_hadoop_dirs(self): ng = FakeNG(storage_paths=['/vol1', '/vol2']) dirs = c._get_hadoop_dirs(ng) expected = { 'hadoop_name_dirs': ['/vol1/hdfs/namenode', '/vol2/hdfs/namenode'], 'hadoop_data_dirs': ['/vol1/hdfs/datanode', '/vol2/hdfs/datanode'], 'hadoop_log_dir': '/vol1/hadoop/logs', 'hadoop_secure_dn_log_dir': '/vol1/hadoop/logs/secure', 'yarn_log_dir': '/vol1/yarn/logs' } self.assertEqual(expected, dirs) class FakeNG(object): def __init__(self, storage_paths=None): self.paths = storage_paths def storage_paths(self): return self.paths ././@PaxHeader0000000000000000000000000000021000000000000011446 xustar0000000000000000114 path=sahara-plugin-vanilla-10.0.0/sahara_plugin_vanilla/tests/unit/plugins/vanilla/hadoop2/test_edp_engine.py 22 mtime=1696419316.0 sahara-plugin-vanilla-10.0.0/sahara_plugin_vanilla/tests/unit/plugins/vanilla/hadoop2/test_edp_engin0000664000175000017500000000712400000000000034017 0ustar00zuulzuul00000000000000# Copyright (c) 2017 EasyStack Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. from unittest import mock from sahara.plugins import base as pb from sahara.plugins import exceptions as ex from sahara_plugin_vanilla.plugins.vanilla.hadoop2 import edp_engine from sahara_plugin_vanilla.tests.unit import base as sahara_base class EdpOozieEngineTest(sahara_base.SaharaTestCase): engine_path = 'sahara.plugins.edp.' 
def setUp(self): super(EdpOozieEngineTest, self).setUp() self.cluster = mock.Mock() self.override_config("plugins", ["vanilla"]) pb.setup_plugins() with mock.patch('sahara.plugins.edp.get_plugin', return_value='test_plugins'): self.engine = edp_engine.EdpOozieEngine(self.cluster) def test_get_hdfs_user(self): self.assertEqual(self.engine.get_hdfs_user(), 'hadoop') def test_get_name_node_uri(self): cluster = {'info': { 'HDFS': { 'NameNode': 'test_url'}}} ret = self.engine.get_name_node_uri(cluster) self.assertEqual(ret, 'test_url') def test_get_oozie_server_uri(self): cluster = {'info': { 'JobFlow': { 'Oozie': 'test_url'}}} ret = self.engine.get_oozie_server_uri(cluster) self.assertEqual(ret, 'test_url/oozie/') @mock.patch('sahara_plugin_vanilla.plugins.vanilla.utils.get_oozie') def test_get_oozie_server(self, get_oozie): get_oozie.return_value = 'bingo' ret = self.engine.get_oozie_server(self.cluster) get_oozie.assert_called_once_with(self.cluster) self.assertEqual(ret, 'bingo') @mock.patch(engine_path + 'PluginsOozieJobEngine.validate_job_execution') @mock.patch('sahara.plugins.utils.get_instances_count') def test_validate_job_execution(self, get_instances_count, validate_job_execution): job = mock.Mock() data = mock.Mock() get_instances_count.return_value = 0 self.assertRaises(ex.InvalidComponentCountException, self.engine.validate_job_execution, self.cluster, job, data) get_instances_count.return_value = 1 self.engine.validate_job_execution(self.cluster, job, data) validate_job_execution.assert_called_once_with(self.cluster, job, data) @mock.patch('sahara.plugins.edp.create_dir_hadoop2') def test_create_hdfs_dir(self, create_dir_hadoop2): self.engine.get_hdfs_user = mock.Mock(return_value='test_user') remote = mock.Mock() dir_name = mock.Mock() self.engine.create_hdfs_dir(remote, dir_name) create_dir_hadoop2.assert_called_once_with(remote, dir_name, 'test_user') def test_get_resource_manager_uri(self): cluster = {'info': { 'YARN': { 'ResourceManager': 'test_url'}}} ret = self.engine.get_resource_manager_uri(cluster) self.assertEqual(ret, 'test_url') ././@PaxHeader0000000000000000000000000000021200000000000011450 xustar0000000000000000116 path=sahara-plugin-vanilla-10.0.0/sahara_plugin_vanilla/tests/unit/plugins/vanilla/hadoop2/test_oozie_helper.py 22 mtime=1696419316.0 sahara-plugin-vanilla-10.0.0/sahara_plugin_vanilla/tests/unit/plugins/vanilla/hadoop2/test_oozie_hel0000664000175000017500000000547700000000000034055 0ustar00zuulzuul00000000000000# Copyright (c) 2017 EasyStack Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. 
from unittest import mock from sahara_plugin_vanilla.plugins.vanilla.hadoop2 import oozie_helper from sahara_plugin_vanilla.tests.unit import base class TestOozieHelper(base.SaharaTestCase): def setUp(self): super(TestOozieHelper, self).setUp() def test_get_oozie_required_xml_configs(self): hadoop_conf_dir = '/root' configs = { 'oozie.service.ActionService.executor.ext.classes': 'org.apache.oozie.action.email.EmailActionExecutor,' 'org.apache.oozie.action.hadoop.HiveActionExecutor,' 'org.apache.oozie.action.hadoop.ShellActionExecutor,' 'org.apache.oozie.action.hadoop.SqoopActionExecutor,' 'org.apache.oozie.action.hadoop.DistcpActionExecutor', 'oozie.service.SchemaService.wf.ext.schemas': 'shell-action-0.1.xsd,shell-action-0.2.xsd,shell-action-0.3.xsd,' 'email-action-0.1.xsd,hive-action-0.2.xsd,hive-action-0.3.xsd,' 'hive-action-0.4.xsd,hive-action-0.5.xsd,sqoop-action-0.2.xsd,' 'sqoop-action-0.3.xsd,sqoop-action-0.4.xsd,ssh-action-0.1.xsd,' 'ssh-action-0.2.xsd,distcp-action-0.1.xsd,distcp-action-0.2.xsd,' 'oozie-sla-0.1.xsd,oozie-sla-0.2.xsd', 'oozie.service.JPAService.create.db.schema': 'false', 'oozie.service.HadoopAccessorService.hadoop.configurations': '*=/root' } ret = oozie_helper.get_oozie_required_xml_configs(hadoop_conf_dir) self.assertEqual(ret, configs) @mock.patch('sahara_plugin_vanilla.plugins.vanilla.hadoop2.' 'utils.get_oozie_password') def test_get_oozie_mysql_configs(self, get_oozie_password): get_oozie_password.return_value = '123' configs = { 'oozie.service.JPAService.jdbc.driver': 'com.mysql.jdbc.Driver', 'oozie.service.JPAService.jdbc.url': 'jdbc:mysql://localhost:3306/oozie', 'oozie.service.JPAService.jdbc.username': 'oozie', 'oozie.service.JPAService.jdbc.password': '123' } cluster = mock.Mock() ret = oozie_helper.get_oozie_mysql_configs(cluster) get_oozie_password.assert_called_once_with(cluster) self.assertEqual(ret, configs) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1696419316.0 sahara-plugin-vanilla-10.0.0/sahara_plugin_vanilla/tests/unit/plugins/vanilla/hadoop2/test_plugin.py0000664000175000017500000000327500000000000034017 0ustar00zuulzuul00000000000000# Copyright (c) 2014 Mirantis Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. 
from unittest import mock from sahara.plugins import base as pb from sahara.plugins import conductor from sahara.plugins import context from sahara.plugins import edp from sahara_plugin_vanilla.tests.unit import base class VanillaPluginTest(base.SaharaWithDbTestCase): def setUp(self): super(VanillaPluginTest, self).setUp() self.override_config("plugins", ["vanilla"]) pb.setup_plugins() @mock.patch('sahara.plugins.edp.create_dir_hadoop2') def test_edp_calls_hadoop2_create_dir(self, create_dir): for version in ['2.7.1']: cluster_dict = { 'name': 'cluster' + version.replace('.', '_'), 'plugin_name': 'vanilla', 'hadoop_version': version, 'default_image_id': 'image'} cluster = conductor.cluster_create(context.ctx(), cluster_dict) plugin = pb.PLUGINS.get_plugin(cluster.plugin_name) create_dir.reset_mock() plugin.get_edp_engine(cluster, edp.JOB_TYPE_PIG).create_hdfs_dir( mock.Mock(), '/tmp') self.assertEqual(1, create_dir.call_count) ././@PaxHeader0000000000000000000000000000022200000000000011451 xustar0000000000000000124 path=sahara-plugin-vanilla-10.0.0/sahara_plugin_vanilla/tests/unit/plugins/vanilla/hadoop2/test_recommendation_utils.py 22 mtime=1696419316.0 sahara-plugin-vanilla-10.0.0/sahara_plugin_vanilla/tests/unit/plugins/vanilla/hadoop2/test_recommend0000664000175000017500000000461400000000000034041 0ustar00zuulzuul00000000000000# Copyright (c) 2015 Mirantis Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. from unittest import mock import testtools from sahara_plugin_vanilla.plugins.vanilla.hadoop2 import recommendations_utils CONFIGURATION_SCHEMA = { 'cluster_configs': { 'dfs.replication': ('HDFS', 'dfs.replication') }, 'node_configs': { 'yarn.app.mapreduce.am.command-opts': ( 'MapReduce', 'yarn.app.mapreduce.am.command-opts'), 'yarn.scheduler.maximum-allocation-mb': ( 'YARN', 'yarn.scheduler.maximum-allocation-mb'), 'yarn.app.mapreduce.am.resource.mb': ( 'MapReduce', 'yarn.app.mapreduce.am.resource.mb'), 'yarn.scheduler.minimum-allocation-mb': ( 'YARN', 'yarn.scheduler.minimum-allocation-mb'), 'yarn.nodemanager.vmem-check-enabled': ( 'YARN', 'yarn.nodemanager.vmem-check-enabled'), 'mapreduce.map.java.opts': ( 'MapReduce', 'mapreduce.map.java.opts'), 'mapreduce.reduce.memory.mb': ( 'MapReduce', 'mapreduce.reduce.memory.mb'), 'yarn.nodemanager.resource.memory-mb': ( 'YARN', 'yarn.nodemanager.resource.memory-mb'), 'mapreduce.reduce.java.opts': ( 'MapReduce', 'mapreduce.reduce.java.opts'), 'mapreduce.map.memory.mb': ( 'MapReduce', 'mapreduce.map.memory.mb'), 'mapreduce.task.io.sort.mb': ( 'MapReduce', 'mapreduce.task.io.sort.mb') } } class TestVersionHandler(testtools.TestCase): @mock.patch('sahara.plugins.recommendations_utils.' 
'HadoopAutoConfigsProvider') def test_recommend_configs(self, provider): f_cluster, f_configs = mock.Mock(), mock.Mock() recommendations_utils.recommend_configs(f_cluster, f_configs, False) self.assertEqual([ mock.call(CONFIGURATION_SCHEMA, f_configs, f_cluster, False) ], provider.call_args_list) ././@PaxHeader0000000000000000000000000000021100000000000011447 xustar0000000000000000115 path=sahara-plugin-vanilla-10.0.0/sahara_plugin_vanilla/tests/unit/plugins/vanilla/hadoop2/test_run_scripts.py 22 mtime=1696419316.0 sahara-plugin-vanilla-10.0.0/sahara_plugin_vanilla/tests/unit/plugins/vanilla/hadoop2/test_run_scrip0000664000175000017500000003676600000000000034111 0ustar00zuulzuul00000000000000# Copyright (c) 2017 EasyStack Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. from functools import wraps from unittest import mock def mock_event_wrapper(*args, **kwargs): def decorator(f): @wraps(f) def decorated_function(*args, **kwargs): return f(*args, **kwargs) return decorated_function return decorator from sahara.plugins import edp from sahara.plugins import utils as pu mock.patch('sahara.plugins.utils.event_wrapper', mock_event_wrapper).start() from sahara_plugin_vanilla.i18n import _ from sahara_plugin_vanilla.plugins.vanilla.hadoop2 import config_helper from sahara_plugin_vanilla.plugins.vanilla.hadoop2 import run_scripts as rs from sahara_plugin_vanilla.tests.unit import base class RunScriptsTest(base.SaharaTestCase): PLUGINS_PATH = 'sahara_plugin_vanilla.plugins.vanilla.hadoop2.' 
def setUp(self): super(RunScriptsTest, self).setUp() self.instance = mock.Mock() self.r = mock.Mock() self.remote = mock.Mock(return_value=self.r) self.remote.__enter__ = self.remote self.remote.__exit__ = mock.Mock() self.instance.remote.return_value = self.remote pu.event_wrapper = mock_event_wrapper @mock.patch(PLUGINS_PATH + 'run_scripts._start_processes') @mock.patch('sahara.plugins.context.set_current_instance_id') @mock.patch('sahara.plugins.utils.add_provisioning_step') @mock.patch('sahara.plugins.utils.instances_with_services') def test_start_dn_nm_processes(self, instances_with_services, add_provisioning_step, set_current_instance_id, _start_processes): ins = mock.Mock() ins.cluster_id = '111' ins.instance_id = '123' ins.instance_name = 'ins_1' instances = [ins] instances_with_services.return_value = instances mess = pu.start_process_event_message('DataNodes, NodeManagers') ins.node_group.node_processes = ['datanode', 'test'] rs.start_dn_nm_processes(instances) instances_with_services.assert_called_once_with( instances, ['datanode', 'nodemanager']) add_provisioning_step.assert_called_once_with('111', mess, 1) set_current_instance_id.assert_called_once_with('123') _start_processes.assert_called_once_with(ins, ['datanode']) @mock.patch('sahara.plugins.utils.check_cluster_exists') def test_start_processes_datanode(self, check_cluster_exists): processes = ['datanode'] rs._start_processes(self.instance, processes) self.r.execute_command.assert_called_once_with( 'sudo su - -c "hadoop-daemon.sh start datanode" hadoop') @mock.patch('sahara.plugins.utils.check_cluster_exists') def test_start_processes_nodemanager(self, check_cluster_exists): processes = ['nodemanager'] rs._start_processes(self.instance, processes) self.r.execute_command.assert_called_once_with( 'sudo su - -c "yarn-daemon.sh start nodemanager" hadoop') @mock.patch('sahara.plugins.utils.check_cluster_exists') def test_start_processes_both(self, check_cluster_exists): processes = ['datanode', 'nodemanager'] rs._start_processes(self.instance, processes) cmd_1 = 'sudo su - -c "hadoop-daemon.sh start datanode" hadoop' cmd_2 = 'sudo su - -c "yarn-daemon.sh start nodemanager" hadoop' calls = [mock.call(cmd_1), mock.call(cmd_2)] self.r.execute_command.assert_has_calls(calls, any_order=True) def test_start_hadoop_process(self): process = 'test' rs.start_hadoop_process(self.instance, process) self.remote.execute_command.assert_called_once_with( 'sudo su - -c "hadoop-daemon.sh start %s" hadoop' % process) def test_start_yarn_process(self): process = 'test' rs.start_yarn_process(self.instance, process) self.remote.execute_command.assert_called_once_with( 'sudo su - -c "yarn-daemon.sh start %s" hadoop' % process) @mock.patch('sahara.plugins.utils.check_cluster_exists') @mock.patch('sahara.plugins.utils.add_provisioning_step') def test_start_historyserver(self, add_provisioning_step, check_cluster_exists): rs.start_historyserver(self.instance) self.remote.execute_command.assert_called_once_with( 'sudo su - -c "mr-jobhistory-daemon.sh start historyserver" ' + 'hadoop') @mock.patch(PLUGINS_PATH + 'run_scripts._start_oozie') @mock.patch(PLUGINS_PATH + 'run_scripts._oozie_share_lib') @mock.patch(PLUGINS_PATH + 'run_scripts._start_mysql') @mock.patch(PLUGINS_PATH + 'config_helper.is_mysql_enabled') @mock.patch(PLUGINS_PATH + 'utils.get_oozie_password') @mock.patch('sahara.plugins.context.set_current_instance_id') @mock.patch('sahara.plugins.utils.check_cluster_exists') @mock.patch('sahara.plugins.utils.add_provisioning_step') def 
test_start_oozie_process(self, add_provisioning_step, check_cluster_exists, set_current_instance_id, get_oozie_password, is_mysql_enabled, _start_mysql, _oozie_share_lib, _start_oozie): self.instance.instance_id = '112233' pctx = mock.Mock() is_mysql_enabled.return_value = True sql_script = pu.get_file_text( 'plugins/vanilla/hadoop2/resources/create_oozie_db.sql', 'sahara_plugin_vanilla') get_oozie_password.return_value = '123' pwd_script = sql_script.replace('password', '123') rs.start_oozie_process(pctx, self.instance) set_current_instance_id.assert_called_once_with('112233') is_mysql_enabled.assert_called_once_with(pctx, self.instance.cluster) _start_mysql.assert_called_once_with(self.r) self.r.write_file_to.assert_called_once_with('create_oozie_db.sql', pwd_script) self.r.execute_command.assert_called_once_with( 'mysql -u root < create_oozie_db.sql && ' 'rm create_oozie_db.sql') _oozie_share_lib.assert_called_once_with(self.r) _start_oozie.assert_called_once_with(self.r) @mock.patch(PLUGINS_PATH + 'config_helper.get_spark_home') @mock.patch('sahara.plugins.context.set_current_instance_id') @mock.patch('sahara.plugins.utils.check_cluster_exists') @mock.patch('sahara.plugins.utils.add_provisioning_step') def test_start_spark_history_server(self, add_provisioning_step, check_cluster_exists, set_current_instance_id, get_spark_home): get_spark_home.return_value = '/spark' rs.start_spark_history_server(self.instance) get_spark_home.assert_called_once_with(self.instance.cluster) set_current_instance_id.assert_called_once_with( self.instance.instance_id) self.r.execute_command.assert_called_once_with( 'sudo su - -c "bash /spark/sbin/start-history-server.sh" hadoop') def test_format_namenode(self): rs.format_namenode(self.instance) self.remote.execute_command.assert_called_once_with( 'sudo su - -c "hdfs namenode -format" hadoop') @mock.patch('sahara_plugin_vanilla.plugins.vanilla.utils.get_namenode') @mock.patch('sahara.plugins.utils.check_cluster_exists') @mock.patch('sahara.plugins.utils.add_provisioning_step') def test_refresh_hadoop_nodes(self, add_provisioning_step, check_cluster_exists, get_namenode): cluster = mock.Mock() get_namenode.return_value = self.instance rs.refresh_hadoop_nodes(cluster) get_namenode.assert_called_once_with(cluster) self.remote.execute_command.assert_called_once_with( 'sudo su - -c "hdfs dfsadmin -refreshNodes" hadoop') @mock.patch('sahara_plugin_vanilla.plugins.vanilla.utils.' 
'get_resourcemanager') @mock.patch('sahara.plugins.utils.check_cluster_exists') @mock.patch('sahara.plugins.utils.add_provisioning_step') def test_refresh_yarn_nodes(self, add_provisioning_step, check_cluster_exists, get_resourcemanager): cluster = mock.Mock() get_resourcemanager.return_value = self.instance rs.refresh_yarn_nodes(cluster) get_resourcemanager.assert_called_once_with(cluster) self.remote.execute_command.assert_called_once_with( 'sudo su - -c "yarn rmadmin -refreshNodes" hadoop') def test_oozie_share_lib(self): cmd_1 = 'sudo su - -c "mkdir /tmp/oozielib && ' \ 'tar zxf /opt/oozie/oozie-sharelib-*.tar.gz -C ' \ '/tmp/oozielib && ' \ 'hadoop fs -mkdir /user && ' \ 'hadoop fs -mkdir /user/hadoop && ' \ 'hadoop fs -put /tmp/oozielib/share /user/hadoop/ && ' \ 'rm -rf /tmp/oozielib" hadoop' cmd_2 = 'sudo su - -c "/opt/oozie/bin/ooziedb.sh ' \ 'create -sqlfile oozie.sql ' \ '-run Validate DB Connection" hadoop' command = [mock.call(cmd_1), mock.call(cmd_2)] rs._oozie_share_lib(self.r) self.r.execute_command.assert_has_calls(command, any_order=True) def test_start_mysql(self): rs._start_mysql(self.r) self.r.execute_command.assert_called_once_with('/opt/start-mysql.sh') def test_start_oozie(self): rs._start_oozie(self.r) self.r.execute_command.assert_called_once_with( 'sudo su - -c "/opt/oozie/bin/oozied.sh start" hadoop') @mock.patch('sahara_plugin_vanilla.plugins.vanilla.utils.get_namenode') @mock.patch('sahara_plugin_vanilla.plugins.vanilla.utils.get_datanodes') @mock.patch('sahara.plugins.utils.check_cluster_exists') @mock.patch('sahara.plugins.utils.add_provisioning_step') @mock.patch('sahara.plugins.utils.plugin_option_poll') def test_await_datanodes(self, plugin_option_poll, add_provisioning_step, check_cluster_exists, get_datanodes, get_namenode): cluster = mock.Mock() get_datanodes.return_value = ['node1'] r = mock.Mock() remote = mock.Mock(return_value=r) remote.__enter__ = remote remote.__exit__ = mock.Mock() namenode = mock.Mock() namenode.remote.return_value = remote get_namenode.return_value = namenode mess = _('Waiting on 1 datanodes to start up') test_data = {'remote': r, 'count': 1} timeout = config_helper.DATANODES_STARTUP_TIMEOUT rs.await_datanodes(cluster) get_datanodes.assert_called_once_with(cluster) get_namenode.assert_called_once_with(cluster) plugin_option_poll.assert_called_once_with(cluster, rs._check_datanodes_count, timeout, mess, 1, test_data) def test_check_datanodes_count(self): self.r.execute_command = mock.Mock(return_value=(0, '1')) self.assertEqual(rs._check_datanodes_count(self.r, 0), True) self.assertEqual(rs._check_datanodes_count(self.r, 1), True) self.r.execute_command.assert_called_once_with( 'sudo su -lc "hdfs dfsadmin -report" hadoop | ' r'grep \'Live datanodes\|Datanodes available:\' | ' r'grep -o \'[0-9]\+\' | head -n 1') def test_hive_create_warehouse_dir(self): rs._hive_create_warehouse_dir(self.r) self.r.execute_command.assert_called_once_with( "sudo su - -c 'hadoop fs -mkdir -p " "/user/hive/warehouse' hadoop") def test_hive_copy_shared_conf(self): dest = '/root/test.xml' rs._hive_copy_shared_conf(self.r, dest) self.r.execute_command.assert_called_once_with( "sudo su - -c 'hadoop fs -mkdir -p /root && " "hadoop fs -put /opt/hive/conf/hive-site.xml " "/root/test.xml' hadoop") def test_hive_create_db(self): rs._hive_create_db(self.r) self.r.execute_command.assert_called_once_with( 'mysql -u root < /tmp/create_hive_db.sql') def test_hive_metastore_start(self): rs._hive_metastore_start(self.r) 
self.r.execute_command.assert_called_once_with( "sudo su - -c 'nohup /opt/hive/bin/hive" " --service metastore > /dev/null &' hadoop") @mock.patch(PLUGINS_PATH + 'utils.get_hive_password') @mock.patch(PLUGINS_PATH + 'config_helper.is_mysql_enabled') @mock.patch(PLUGINS_PATH + 'run_scripts._hive_metastore_start') @mock.patch(PLUGINS_PATH + 'run_scripts._hive_create_db') @mock.patch(PLUGINS_PATH + 'run_scripts._start_mysql') @mock.patch(PLUGINS_PATH + 'run_scripts._hive_copy_shared_conf') @mock.patch(PLUGINS_PATH + 'run_scripts._hive_create_warehouse_dir') @mock.patch('sahara_plugin_vanilla.plugins.vanilla.utils.get_oozie') @mock.patch('sahara.plugins.context.set_current_instance_id') @mock.patch('sahara.plugins.utils.check_cluster_exists') @mock.patch('sahara.plugins.utils.add_provisioning_step') def test_start_hiveserver_process(self, add_provisioning_step, check_cluster_exists, set_current_instance_id, get_oozie, _hive_create_warehouse_dir, _hive_copy_shared_conf, _start_mysql, _hive_create_db, _hive_metastore_start, is_mysql_enabled, get_hive_password): pctx = mock.Mock() path = edp.get_hive_shared_conf_path('hadoop') is_mysql_enabled.return_value = True cluster = self.instance.cluster self.instance.cluster.hadoop_version = '2.7.1' ng_cluster = self.instance.node_group.cluster get_oozie.return_value = None sql_script = pu.get_file_text( 'plugins/vanilla/v2_7_1/resources/create_hive_db.sql', 'sahara_plugin_vanilla') get_hive_password.return_value = '123' pwd_script = sql_script.replace('{{password}}', '123') rs.start_hiveserver_process(pctx, self.instance) set_current_instance_id.assert_called_once_with( self.instance.instance_id) _hive_create_warehouse_dir.assert_called_once_with(self.r) _hive_copy_shared_conf.assert_called_once_with(self.r, path) is_mysql_enabled.assert_called_once_with(pctx, cluster) get_oozie.assert_called_once_with(ng_cluster) _start_mysql.assert_called_once_with(self.r) get_hive_password.assert_called_once_with(cluster) self.r.write_file_to.assert_called_once_with( '/tmp/create_hive_db.sql', pwd_script) _hive_create_db.assert_called_once_with(self.r) _hive_metastore_start.assert_called_once_with(self.r) ././@PaxHeader0000000000000000000000000000020500000000000011452 xustar0000000000000000111 path=sahara-plugin-vanilla-10.0.0/sahara_plugin_vanilla/tests/unit/plugins/vanilla/hadoop2/test_scaling.py 22 mtime=1696419316.0 sahara-plugin-vanilla-10.0.0/sahara_plugin_vanilla/tests/unit/plugins/vanilla/hadoop2/test_scaling.p0000664000175000017500000003171500000000000033750 0ustar00zuulzuul00000000000000# Copyright (c) 2017 EasyStack Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. 
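# --- Illustrative sketch (hypothetical helper, not the upstream scaling code) ---
# The scaling tests in this module assert shell commands of the shape
#   sudo su - -c "echo '<hosts>' > <HADOOP_CONF_DIR>/dn-include" hadoop
# for the dn-/nm-include and dn-/nm-exclude files. A minimal builder for such a
# command could look like the following (joining hostnames with newlines is an
# assumption; the tests stub generate_fqdn_host_names with a single string):
def build_hosts_file_command(hostnames, conf_dir, filename):
    hosts = '\n'.join(hostnames)
    return 'sudo su - -c "echo \'%s\' > %s/%s" hadoop' % (hosts, conf_dir, filename)
# e.g. build_hosts_file_command(['worker-1.example'], '/opt/hadoop/etc/hadoop', 'dn-include')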
from unittest import mock from sahara_plugin_vanilla.i18n import _ from sahara_plugin_vanilla.plugins.vanilla.hadoop2 import config_helper from sahara_plugin_vanilla.plugins.vanilla.hadoop2 import scaling from sahara_plugin_vanilla.plugins.vanilla.hadoop2 import utils as pu from sahara_plugin_vanilla.tests.unit import base class ScalingTest(base.SaharaTestCase): PLUGINS_PATH = 'sahara_plugin_vanilla.plugins.vanilla.hadoop2.' def setUp(self): super(ScalingTest, self).setUp() self.cluster = mock.Mock() self.instances = mock.Mock() self.r = mock.Mock() self.r.execute_command = mock.Mock() self.instance = mock.Mock() self.instance.remote.return_value.__enter__ = mock.Mock( return_value=self.r) self.instance.remote.return_value.__exit__ = mock.Mock() @mock.patch('sahara.plugins.swift_helper.install_ssl_certs') @mock.patch('sahara_plugin_vanilla.plugins.vanilla.utils.' 'get_resourcemanager') @mock.patch(PLUGINS_PATH + 'run_scripts.refresh_zk_servers') @mock.patch(PLUGINS_PATH + 'config.configure_zookeeper') @mock.patch(PLUGINS_PATH + 'run_scripts.start_dn_nm_processes') @mock.patch(PLUGINS_PATH + 'run_scripts.refresh_yarn_nodes') @mock.patch(PLUGINS_PATH + 'run_scripts.refresh_hadoop_nodes') @mock.patch(PLUGINS_PATH + 'scaling._update_include_files') @mock.patch(PLUGINS_PATH + 'config.configure_topology_data') @mock.patch(PLUGINS_PATH + 'config.configure_instances') def test_scale_cluster(self, configure_instances, configure_topology_data, _update_include_files, refresh_hadoop_nodes, refresh_yarn_nodes, start_dn_nm_processes, configure_zk, refresh_zk, get_resourcemanager, install_ssl_certs): get_resourcemanager.return_value = 'node1' pctx = mock.Mock() scaling.scale_cluster(pctx, self.cluster, self.instances) configure_instances.assert_called_once_with(pctx, self.instances) _update_include_files.assert_called_once_with(self.cluster) refresh_hadoop_nodes.assert_called_once_with(self.cluster) get_resourcemanager.assert_called_once_with(self.cluster) refresh_yarn_nodes.assert_called_once_with(self.cluster) configure_topology_data.assert_called_once_with(pctx, self.cluster) start_dn_nm_processes.assert_called_once_with(self.instances) install_ssl_certs.assert_called_once_with(self.instances) configure_topology_data.assert_called_once_with(pctx, self.cluster) configure_zk.assert_called_once_with(self.cluster) refresh_zk.assert_called_once_with(self.cluster) def test_get_instances_with_service(self): ins_1 = mock.Mock() ins_1.node_group.node_processes = ['nodename'] ins_2 = mock.Mock() ins_2.node_group.node_processes = ['nodedata'] instances = [ins_1, ins_2] service = 'nodename' ret = scaling._get_instances_with_service(instances, service) self.assertEqual(ret, [ins_1]) @mock.patch('sahara_plugin_vanilla.plugins.vanilla.utils.get_nodemanagers') @mock.patch('sahara_plugin_vanilla.plugins.vanilla.utils.get_datanodes') @mock.patch('sahara.plugins.utils.check_cluster_exists') @mock.patch('sahara.plugins.utils.add_provisioning_step') @mock.patch('sahara.plugins.utils.generate_fqdn_host_names') @mock.patch('sahara.plugins.utils.get_instances') def test_update_include_files(self, get_instances, generate_fqdn_host_names, add_provisioning_step, check_cluster_exists, get_datanodes, get_nodemanagers): DIR = scaling.HADOOP_CONF_DIR host = '1.2.3.4' ins_1 = mock.Mock() ins_1.id = 'instance_1' ins_2 = mock.Mock() ins_2.id = 'instance_2' ins_3 = mock.Mock() ins_3.id = 'instance_3' ins_4 = mock.Mock() ins_4.id = 'instance_4' dec_instances = [ins_1, ins_2] get_instances.return_value = [self.instance] 
get_datanodes.return_value = [ins_3] get_nodemanagers.return_value = [ins_4] generate_fqdn_host_names.return_value = host scaling._update_include_files(self.cluster, dec_instances) get_instances.assert_called_once_with(self.cluster) get_datanodes.assert_called_once_with(self.cluster) get_nodemanagers.assert_called_once_with(self.cluster) count = generate_fqdn_host_names.call_count self.assertEqual(count, 2) command_calls = [mock.call( 'sudo su - -c "echo \'%s\' > %s/dn-include" hadoop' % ( host, DIR)), mock.call( 'sudo su - -c "echo \'%s\' > %s/nm-include" hadoop' % ( host, DIR))] self.r.execute_command.assert_has_calls(command_calls, any_order=True) @mock.patch('sahara_plugin_vanilla.plugins.vanilla.utils.' 'get_resourcemanager') @mock.patch(PLUGINS_PATH + 'run_scripts.refresh_zk_servers') @mock.patch(PLUGINS_PATH + 'config.configure_zookeeper') @mock.patch(PLUGINS_PATH + 'config.configure_topology_data') @mock.patch(PLUGINS_PATH + 'run_scripts.refresh_yarn_nodes') @mock.patch(PLUGINS_PATH + 'run_scripts.refresh_hadoop_nodes') @mock.patch(PLUGINS_PATH + 'scaling._update_exclude_files') @mock.patch(PLUGINS_PATH + 'scaling._clear_exclude_files') @mock.patch(PLUGINS_PATH + 'scaling._update_include_files') @mock.patch(PLUGINS_PATH + 'scaling._check_datanodes_decommission') @mock.patch(PLUGINS_PATH + 'scaling._check_nodemanagers_decommission') @mock.patch(PLUGINS_PATH + 'scaling._get_instances_with_service') def test_decommission_nodes(self, _get_instances_with_service, _check_nodemanagers_decommission, _check_datanodes_decommission, _update_include_files, _clear_exclude_files, _update_exclude_files, refresh_hadoop_nodes, refresh_yarn_nodes, configure_topology_data, configure_zk, refresh_zk, get_resourcemanager): data = 'test_data' _get_instances_with_service.return_value = data get_resourcemanager.return_value = 'node1' pctx = mock.Mock() scaling.decommission_nodes(pctx, self.cluster, self.instances) get_instances_count = _get_instances_with_service.call_count self.assertEqual(get_instances_count, 2) _update_exclude_files.assert_called_once_with(self.cluster, self.instances) refresh_count = refresh_hadoop_nodes.call_count self.assertEqual(refresh_count, 2) get_resourcemanager.assert_called_once_with(self.cluster) refresh_yarn_nodes.assert_called_once_with(self.cluster) _check_nodemanagers_decommission.assert_called_once_with( self.cluster, data) _check_datanodes_decommission.assert_called_once_with( self.cluster, data) _update_include_files.assert_called_once_with(self.cluster, self.instances) _clear_exclude_files.assert_called_once_with(self.cluster) configure_topology_data.assert_called_once_with(pctx, self.cluster) configure_zk.assert_called_once_with(self.cluster, self.instances) refresh_zk.assert_called_once_with(self.cluster, self.instances) @mock.patch(PLUGINS_PATH + 'scaling._get_instances_with_service') @mock.patch('sahara.plugins.utils.generate_fqdn_host_names') @mock.patch('sahara.plugins.utils.get_instances') def test_update_exclude_files(self, get_instances, generate_fqdn_host_names, get_instances_with_service): node = mock.Mock() get_instances_with_service.return_value = node host = '1.2.3.4' generate_fqdn_host_names.return_value = host get_instances.return_value = [self.instance] scaling._update_exclude_files(self.cluster, self.instances) service_calls = [mock.call(self.instances, 'datanode'), mock.call(self.instances, 'nodemanager')] get_instances_with_service.assert_has_calls(service_calls, any_order=True) self.assertEqual(generate_fqdn_host_names.call_count, 2) 
get_instances.assert_called_once_with(self.cluster) DIR = scaling.HADOOP_CONF_DIR command_calls = [mock.call( 'sudo su - -c "echo \'%s\' > %s/dn-exclude" hadoop' % ( host, DIR)), mock.call( 'sudo su - -c "echo \'%s\' > %s/nm-exclude" hadoop' % ( host, DIR))] self.r.execute_command.assert_has_calls(command_calls, any_order=True) @mock.patch('sahara.plugins.utils.get_instances') def test_clear_exclude_files(self, get_instances): get_instances.return_value = [self.instance] scaling._clear_exclude_files(self.cluster) get_instances.assert_called_once_with(self.cluster) DIR = scaling.HADOOP_CONF_DIR calls = [mock.call('sudo su - -c "echo > %s/dn-exclude" hadoop' % DIR), mock.call('sudo su - -c "echo > %s/nm-exclude" hadoop' % DIR)] self.r.execute_command.assert_has_calls(calls, any_order=True) def test_is_decommissioned(self): def check_func(cluster): statuses = {'status': cluster} return statuses ins = mock.Mock() ins.fqdn.return_value = 'status' instances = [ins] cluster = 'decommissioned' ret = scaling.is_decommissioned(cluster, check_func, instances) self.assertEqual(ret, True) cluster = 'active' ret = scaling.is_decommissioned(cluster, check_func, instances) self.assertEqual(ret, False) @mock.patch('sahara.plugins.utils.plugin_option_poll') def test_check_decommission(self, plugin_option_poll): check_func = mock.Mock() option = mock.Mock() is_dec = scaling.is_decommissioned mess = _("Wait for decommissioning") sample_dict = {'cluster': self.cluster, 'check_func': check_func, 'instances': self.instances} scaling._check_decommission(self.cluster, self.instances, check_func, option) plugin_option_poll.assert_called_once_with(self.cluster, is_dec, option, mess, 5, sample_dict) @mock.patch(PLUGINS_PATH + 'scaling._check_decommission') @mock.patch('sahara.plugins.utils.check_cluster_exists') @mock.patch('sahara.plugins.utils.add_provisioning_step') def test_check_nodemanagers_decommission(self, add_provisioning_step, check_cluster_exists, _check_decommission): timeout = config_helper.NODEMANAGERS_DECOMMISSIONING_TIMEOUT status = pu.get_nodemanagers_status scaling._check_nodemanagers_decommission(self.cluster, self.instances) _check_decommission.assert_called_once_with(self.cluster, self.instances, status, timeout) @mock.patch(PLUGINS_PATH + 'scaling._check_decommission') @mock.patch('sahara.plugins.utils.check_cluster_exists') @mock.patch('sahara.plugins.utils.add_provisioning_step') def test_check_datanodes_decommission(self, add_provisioning_step, check_cluster_exists, _check_decommission): timeout = config_helper.DATANODES_DECOMMISSIONING_TIMEOUT status = pu.get_datanodes_status scaling._check_datanodes_decommission(self.cluster, self.instances) _check_decommission.assert_called_once_with(self.cluster, self.instances, status, timeout) ././@PaxHeader0000000000000000000000000000021600000000000011454 xustar0000000000000000120 path=sahara-plugin-vanilla-10.0.0/sahara_plugin_vanilla/tests/unit/plugins/vanilla/hadoop2/test_starting_scripts.py 22 mtime=1696419316.0 sahara-plugin-vanilla-10.0.0/sahara_plugin_vanilla/tests/unit/plugins/vanilla/hadoop2/test_starting_0000664000175000017500000001546000000000000034063 0ustar00zuulzuul00000000000000# Copyright (c) 2017 EasyStack Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. from unittest import mock from sahara_plugin_vanilla.plugins.vanilla.hadoop2 import starting_scripts from sahara_plugin_vanilla.tests.unit import base class StartingScriptsTest(base.SaharaTestCase): plugins_path = 'sahara_plugin_vanilla.plugins.vanilla.' def setUp(self): super(StartingScriptsTest, self).setUp() self.cluster = mock.Mock() @mock.patch(plugins_path + 'utils.get_namenode') @mock.patch(plugins_path + 'hadoop2.starting_scripts._start_namenode') def test_start_namenode(self, _start_namenode, get_namenode): namenode = mock.Mock() get_namenode.return_value = namenode starting_scripts.start_namenode(self.cluster) get_namenode.assert_called_once_with(self.cluster) _start_namenode.assert_called_once_with(namenode) @mock.patch('sahara.plugins.utils.check_cluster_exists') @mock.patch(plugins_path + 'hadoop2.run_scripts.start_hadoop_process') @mock.patch(plugins_path + 'hadoop2.run_scripts.format_namenode') def test__start_namenode(self, format_namenode, start_hadoop_process, check_cluster_exists): check_cluster_exists.return_value = None nn = mock.Mock() starting_scripts._start_namenode(nn) format_namenode.assert_called_once_with(nn) start_hadoop_process.assert_called_once_with(nn, 'namenode') @mock.patch(plugins_path + 'hadoop2.starting_scripts._start_secondarynamenode') @mock.patch(plugins_path + 'utils.get_secondarynamenode') def test_start_secondarynamenode(self, get_secondarynamenode, _start_secondarynamenode): get_secondarynamenode.return_value = 0 starting_scripts.start_secondarynamenode(self.cluster) get_secondarynamenode.assert_called_once_with(self.cluster) get_secondarynamenode.return_value = 1 starting_scripts.start_secondarynamenode(self.cluster) _start_secondarynamenode.assert_called_once_with(1) self.assertEqual(get_secondarynamenode.call_count, 2) @mock.patch('sahara.plugins.utils.check_cluster_exists') @mock.patch(plugins_path + 'hadoop2.run_scripts.start_hadoop_process') def test__start_secondarynamenode(self, start_hadoop_process, check_cluster_exists): check_cluster_exists.return_value = None snn = mock.Mock() starting_scripts._start_secondarynamenode(snn) start_hadoop_process.assert_called_once_with(snn, 'secondarynamenode') @mock.patch(plugins_path + 'hadoop2.starting_scripts._start_resourcemanager') @mock.patch(plugins_path + 'utils.get_resourcemanager') def test_start_resourcemanager(self, get_resourcemanager, _start_resourcemanager): get_resourcemanager.return_value = 0 starting_scripts.start_resourcemanager(self.cluster) get_resourcemanager.assert_called_once_with(self.cluster) get_resourcemanager.return_value = 1 starting_scripts.start_resourcemanager(self.cluster) _start_resourcemanager.assert_called_once_with(1) self.assertEqual(get_resourcemanager.call_count, 2) @mock.patch('sahara.plugins.utils.check_cluster_exists') @mock.patch(plugins_path + 'hadoop2.run_scripts.start_yarn_process') def test__start_resourcemanager(self, start_yarn_process, check_cluster_exists): check_cluster_exists.return_value = None snn = mock.Mock() starting_scripts._start_resourcemanager(snn) start_yarn_process.assert_called_once_with(snn, 'resourcemanager') 
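# --- Illustrative sketch (hypothetical, standalone; not an upstream method) ---
# The start_*/_start_* pairs exercised in this class follow one pattern: look
# up the instance that carries a role and invoke the starter only when the
# lookup returns an instance (the tests model "no such node" as a falsy 0).
# For the oozie and hiveserver variants the starter also receives a pctx first.
def start_if_present(lookup, starter, cluster):
    instance = lookup(cluster)
    if instance:
        starter(instance)
    return instance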
@mock.patch(plugins_path + 'hadoop2.run_scripts.start_historyserver') @mock.patch(plugins_path + 'utils.get_historyserver') def test_start_historyserver(self, get_historyserver, start_historyserver): get_historyserver.return_value = 0 starting_scripts.start_historyserver(self.cluster) get_historyserver.assert_called_once_with(self.cluster) get_historyserver.return_value = 1 starting_scripts.start_historyserver(self.cluster) start_historyserver.assert_called_once_with(1) self.assertEqual(get_historyserver.call_count, 2) @mock.patch(plugins_path + 'hadoop2.run_scripts.start_oozie_process') @mock.patch(plugins_path + 'utils.get_oozie') def test_start_oozie(self, get_oozie, start_oozie_process): pctx = mock.Mock() get_oozie.return_value = 0 starting_scripts.start_oozie(pctx, self.cluster) get_oozie.assert_called_once_with(self.cluster) get_oozie.return_value = 1 starting_scripts.start_oozie(pctx, self.cluster) start_oozie_process.assert_called_once_with(pctx, 1) self.assertEqual(get_oozie.call_count, 2) @mock.patch(plugins_path + 'hadoop2.run_scripts.start_hiveserver_process') @mock.patch(plugins_path + 'utils.get_hiveserver') def test_start_hiveserver(self, get_hiveserver, start_hiveserver_process): pctx = mock.Mock() get_hiveserver.return_value = 0 starting_scripts.start_hiveserver(pctx, self.cluster) get_hiveserver.assert_called_once_with(self.cluster) get_hiveserver.return_value = 1 starting_scripts.start_hiveserver(pctx, self.cluster) start_hiveserver_process.assert_called_once_with(pctx, 1) self.assertEqual(get_hiveserver.call_count, 2) @mock.patch(plugins_path + 'hadoop2.run_scripts.start_spark_history_server') @mock.patch(plugins_path + 'utils.get_spark_history_server') def test_start_spark(self, get_spark_history_server, start_spark_history_server): get_spark_history_server.return_value = 0 starting_scripts.start_spark(self.cluster) get_spark_history_server.assert_called_once_with(self.cluster) get_spark_history_server.return_value = 1 starting_scripts.start_spark(self.cluster) start_spark_history_server.assert_called_once_with(1) self.assertEqual(get_spark_history_server.call_count, 2) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1696419316.0 sahara-plugin-vanilla-10.0.0/sahara_plugin_vanilla/tests/unit/plugins/vanilla/hadoop2/test_utils.py0000664000175000017500000001244000000000000033653 0ustar00zuulzuul00000000000000# Copyright (c) 2014 Mirantis Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. 
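# --- Illustrative sketch (hypothetical parser, not the plugin's implementation) ---
# The tests below feed the dfs-report.txt and yarn-report.txt fixtures through
# get_datanodes_status/get_nodemanagers_status and expect a hostname -> lowercase
# status mapping. For the YARN node report, whose data rows look like
#   cluster-worker-001.novalocal:54746 RUNNING cluster-worker-001.novalocal:8042 0
# one simple way to derive that mapping is:
def parse_yarn_node_report(report_text):
    statuses = {}
    for line in report_text.splitlines():
        fields = line.split()
        # data rows carry "<host>:<port> <STATE> <http-address> <running-containers>"
        if len(fields) == 4 and ':' in fields[0]:
            hostname = fields[0].split(':')[0]
            statuses[hostname] = fields[1].lower()
    return statuses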
from unittest import mock from sahara.plugins import utils from sahara_plugin_vanilla.plugins.vanilla.hadoop2 import utils as u from sahara_plugin_vanilla.tests.unit import base class UtilsTestCase(base.SaharaTestCase): @mock.patch('sahara_plugin_vanilla.plugins.vanilla.utils.get_namenode') def test_datanodes_status(self, nn): report = utils.get_file_text( 'tests/unit/plugins/vanilla/hadoop2/resources/dfs-report.txt', 'sahara_plugin_vanilla') nn.return_value = self._get_instance(report) statuses = u.get_datanodes_status(None) expected = { 'cluster-worker-001.novalocal': 'normal', 'cluster-worker-002.novalocal': 'normal', 'cluster-worker-003.novalocal': 'normal', 'cluster-worker-004.novalocal': 'decommissioned' } self.assertEqual(expected, statuses) @mock.patch('sahara_plugin_vanilla.plugins.vanilla.utils.' 'get_resourcemanager') def test_nodemanagers_status(self, rm): report = utils.get_file_text( 'tests/unit/plugins/vanilla/hadoop2/resources/yarn-report.txt', 'sahara_plugin_vanilla') rm.return_value = self._get_instance(report) statuses = u.get_nodemanagers_status(None) expected = { 'cluster-worker-001.novalocal': 'running', 'cluster-worker-002.novalocal': 'running', 'cluster-worker-003.novalocal': 'running', 'cluster-worker-004.novalocal': 'decommissioned' } self.assertEqual(expected, statuses) def _get_instance(self, out): inst_remote = mock.MagicMock() inst_remote.execute_command.return_value = 0, out inst_remote.__enter__.return_value = inst_remote inst = mock.MagicMock() inst.remote.return_value = inst_remote return inst @mock.patch('sahara.plugins.conductor.cluster_get') @mock.patch('sahara.plugins.castellan_utils.get_secret') @mock.patch('sahara.plugins.castellan_utils.store_secret') @mock.patch('sahara_plugin_vanilla.plugins.vanilla.utils') @mock.patch('sahara.plugins.conductor.cluster_update') def test_oozie_password(self, cluster_update, vu, store_secret, get_secret, conductor): cluster = mock.MagicMock() cluster.extra = mock.MagicMock() cluster.extra.to_dict.return_value = {"oozie_pass_id": "31415926"} conductor.return_value = cluster get_secret.return_value = "oozie_pass" result = u.get_oozie_password(cluster) get_secret.assert_called_once_with("31415926") vu.generate_random_password.assert_not_called() self.assertEqual('oozie_pass', result) cluster.extra.to_dict.return_value = {} store_secret.return_value = 'oozie_pass' result = u.get_oozie_password(cluster) self.assertEqual('oozie_pass', result) @mock.patch('sahara.plugins.castellan_utils.delete_secret') def test_delete_oozie_password(self, delete_secret): cluster = mock.MagicMock() cluster.extra.to_dict = mock.MagicMock() cluster.extra.to_dict.return_value = {} u.delete_oozie_password(cluster) delete_secret.assert_not_called() cluster.extra.to_dict.return_value = {"oozie_pass_id": "31415926"} u.delete_oozie_password(cluster) delete_secret.assert_called_once_with("31415926") @mock.patch('sahara.plugins.conductor.cluster_get') @mock.patch('sahara.plugins.castellan_utils.get_secret') @mock.patch('sahara.plugins.castellan_utils.store_secret') @mock.patch('sahara.plugins.conductor.cluster_update') def test_get_hive_password(self, cluster_update, store_secret, get_secret, conductor): cluster = mock.MagicMock() cluster.extra.to_dict.return_value = {"hive_pass_id": "31415926"} conductor.return_value = cluster get_secret.return_value = "hive_pass" result = u.get_hive_password(cluster) get_secret.assert_called_once_with("31415926") self.assertEqual('hive_pass', result) cluster.extra.to_dict.return_value = {} store_secret.return_value 
= 'hive_pass' result = u.get_hive_password(cluster) self.assertEqual('hive_pass', result) @mock.patch('sahara.plugins.castellan_utils.delete_secret') def test_delete_hive_password(self, delete_secret): cluster = mock.MagicMock() cluster.extra.to_dict.return_value = {} u.delete_hive_password(cluster) delete_secret.assert_not_called() cluster.extra.to_dict.return_value = {"hive_pass_id": "31415926"} u.delete_hive_password(cluster) delete_secret.assert_called_once_with("31415926") ././@PaxHeader0000000000000000000000000000021000000000000011446 xustar0000000000000000114 path=sahara-plugin-vanilla-10.0.0/sahara_plugin_vanilla/tests/unit/plugins/vanilla/hadoop2/test_validation.py 22 mtime=1696419316.0 sahara-plugin-vanilla-10.0.0/sahara_plugin_vanilla/tests/unit/plugins/vanilla/hadoop2/test_validatio0000664000175000017500000001127200000000000034042 0ustar00zuulzuul00000000000000# Copyright (c) 2014 Mirantis Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. import testtools from sahara.plugins import exceptions as ex from sahara.plugins import testutils as tu from sahara_plugin_vanilla.plugins.vanilla import plugin as p from sahara_plugin_vanilla.tests.unit import base class ValidationTest(base.SaharaTestCase): def setUp(self): super(ValidationTest, self).setUp() self.pl = p.VanillaProvider() def test_validate(self): self.ng = [] self.ng.append(tu.make_ng_dict("nn", "f1", ["namenode"], 0)) self.ng.append(tu.make_ng_dict("sn", "f1", ["secondarynamenode"], 0)) self.ng.append(tu.make_ng_dict("jt", "f1", ["resourcemanager"], 0)) self.ng.append(tu.make_ng_dict("tt", "f1", ["nodemanager"], 0)) self.ng.append(tu.make_ng_dict("dn", "f1", ["datanode"], 0)) self.ng.append(tu.make_ng_dict("hs", "f1", ["historyserver"], 0)) self.ng.append(tu.make_ng_dict("oo", "f1", ["oozie"], 0)) self._validate_case(1, 1, 1, 10, 10, 0, 0) self._validate_case(1, 0, 1, 1, 4, 0, 0) self._validate_case(1, 1, 1, 0, 3, 0, 0) self._validate_case(1, 0, 1, 0, 3, 0, 0) self._validate_case(1, 1, 0, 0, 3, 0, 0) self._validate_case(1, 0, 1, 1, 3, 1, 1) self._validate_case(1, 1, 1, 1, 3, 1, 0) with testtools.ExpectedException(ex.InvalidComponentCountException): self._validate_case(0, 0, 1, 10, 3, 0, 0) with testtools.ExpectedException(ex.InvalidComponentCountException): self._validate_case(2, 0, 1, 10, 3, 0, 0) with testtools.ExpectedException(ex.InvalidComponentCountException): self._validate_case(1, 2, 1, 1, 3, 1, 1) with testtools.ExpectedException(ex.RequiredServiceMissingException): self._validate_case(1, 0, 0, 10, 3, 0, 0) with testtools.ExpectedException(ex.InvalidComponentCountException): self._validate_case(1, 0, 2, 10, 3, 0, 0) with testtools.ExpectedException(ex.InvalidComponentCountException): self._validate_case(1, 0, 1, 1, 3, 2, 1) with testtools.ExpectedException(ex.InvalidComponentCountException): self._validate_case(1, 0, 1, 1, 3, 1, 2) with testtools.ExpectedException(ex.InvalidComponentCountException): self._validate_case(1, 1, 1, 0, 2, 0, 0) with testtools.ExpectedException(ex.RequiredServiceMissingException): 
self._validate_case(1, 0, 1, 1, 3, 0, 1) with testtools.ExpectedException(ex.RequiredServiceMissingException): self._validate_case(1, 0, 1, 0, 3, 1, 1) with testtools.ExpectedException(ex.RequiredServiceMissingException): self._validate_case(1, 0, 1, 1, 0, 1, 1) cl = self._create_cluster( 1, 1, 1, 0, 3, 0, 0, cluster_configs={'HDFS': {'dfs.replication': 4}}) with testtools.ExpectedException(ex.InvalidComponentCountException): self.pl.validate(cl) self.ng.append(tu.make_ng_dict("hi", "f1", ["hiveserver"], 0)) self.ng.append(tu.make_ng_dict("sh", "f1", ["spark history server"], 0)) self._validate_case(1, 1, 0, 0, 3, 0, 0, 1, 0) self._validate_case(1, 1, 0, 0, 3, 0, 0, 0, 1) with testtools.ExpectedException(ex.InvalidComponentCountException): self._validate_case(1, 1, 0, 0, 3, 0, 0, 2, 0) with testtools.ExpectedException(ex.InvalidComponentCountException): self._validate_case(1, 1, 0, 0, 3, 0, 0, 0, 2) self.ng.append(tu.make_ng_dict("zk", "f1", ["zookeeper"], 0)) self._validate_case(1, 1, 0, 0, 3, 0, 0, 1, 1, 3) with testtools.ExpectedException(ex.InvalidComponentCountException): self._validate_case(1, 1, 0, 0, 3, 0, 0, 1, 1, 2) def _create_cluster(self, *args, **kwargs): lst = [] for i in range(0, len(args)): self.ng[i]['count'] = args[i] lst.append(self.ng[i]) return tu.create_cluster("cluster1", "tenant1", "vanilla", "2.7.1", lst, **kwargs) def _validate_case(self, *args): cl = self._create_cluster(*args) self.pl.validate(cl) ././@PaxHeader0000000000000000000000000000021000000000000011446 xustar0000000000000000114 path=sahara-plugin-vanilla-10.0.0/sahara_plugin_vanilla/tests/unit/plugins/vanilla/test_confighints_helper.py 22 mtime=1696419316.0 sahara-plugin-vanilla-10.0.0/sahara_plugin_vanilla/tests/unit/plugins/vanilla/test_confighints_helpe0000664000175000017500000000511000000000000034214 0ustar00zuulzuul00000000000000# Copyright (c) 2015 Red Hat, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. 
from unittest import mock from sahara_plugin_vanilla.plugins.vanilla import confighints_helper from sahara_plugin_vanilla.tests.unit import base as sahara_base class ConfigHintsHelperTest(sahara_base.SaharaTestCase): @mock.patch('sahara.plugins.utils.load_hadoop_xml_defaults', return_value=[]) def test_get_possible_hive_config_from(self, load_hadoop_xml_defaults): expected_config = { 'configs': [], 'params': {} } actual_config = confighints_helper.get_possible_hive_config_from( 'sample-config.xml') load_hadoop_xml_defaults.assert_called_once_with( 'sample-config.xml', 'sahara_plugin_vanilla') self.assertEqual(expected_config, actual_config) @mock.patch('sahara.plugins.utils.load_hadoop_xml_defaults', return_value=[]) @mock.patch('sahara.plugins.edp.get_possible_mapreduce_configs', return_value=[]) def test_get_possible_mapreduce_config_from( self, get_possible_mapreduce_configs, load_hadoop_xml_defaults): expected_config = { 'configs': [], } actual_config = confighints_helper.get_possible_mapreduce_config_from( 'sample-config.xml') load_hadoop_xml_defaults.assert_any_call('sample-config.xml', 'sahara_plugin_vanilla') self.assertEqual(expected_config, actual_config) @mock.patch('sahara.plugins.utils.load_hadoop_xml_defaults', return_value=[]) def test_get_possible_pig_config_from( self, load_hadoop_xml_defaults): expected_config = { 'configs': [], 'args': [], 'params': {} } actual_config = confighints_helper.get_possible_pig_config_from( 'sample-config.xml') load_hadoop_xml_defaults.assert_called_once_with( 'sample-config.xml', 'sahara_plugin_vanilla') self.assertEqual(expected_config, actual_config) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1696419316.0 sahara-plugin-vanilla-10.0.0/sahara_plugin_vanilla/tests/unit/plugins/vanilla/test_utils.py0000664000175000017500000001174200000000000032323 0ustar00zuulzuul00000000000000# Copyright (c) 2014 Mirantis Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. 
sahara-plugin-vanilla-10.0.0/sahara_plugin_vanilla/tests/unit/plugins/vanilla/test_utils.py

# Copyright (c) 2014 Mirantis Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from sahara.plugins import testutils as tu

from sahara_plugin_vanilla.plugins.vanilla import plugin as p
from sahara_plugin_vanilla.plugins.vanilla import utils as u
from sahara_plugin_vanilla.tests.unit import base


class TestUtils(base.SaharaWithDbTestCase):
    def setUp(self):
        super(TestUtils, self).setUp()
        self.plugin = p.VanillaProvider()

        self.ng_manager = tu.make_ng_dict(
            'mng', 'f1', ['manager'], 1,
            [tu.make_inst_dict('mng1', 'manager')])
        self.ng_namenode = tu.make_ng_dict(
            'nn', 'f1', ['namenode'], 1,
            [tu.make_inst_dict('nn1', 'namenode')])
        self.ng_resourcemanager = tu.make_ng_dict(
            'jt', 'f1', ['resourcemanager'], 1,
            [tu.make_inst_dict('jt1', 'resourcemanager')])
        self.ng_datanode = tu.make_ng_dict(
            'dn', 'f1', ['datanode'], 2,
            [tu.make_inst_dict('dn1', 'datanode-1'),
             tu.make_inst_dict('dn2', 'datanode-2')])
        self.ng_nodemanager = tu.make_ng_dict(
            'tt', 'f1', ['nodemanager'], 2,
            [tu.make_inst_dict('tt1', 'nodemanager-1'),
             tu.make_inst_dict('tt2', 'nodemanager-2')])
        self.ng_oozie = tu.make_ng_dict(
            'ooz1', 'f1', ['oozie'], 1,
            [tu.make_inst_dict('ooz1', 'oozie')])
        self.ng_hiveserver = tu.make_ng_dict(
            'hs', 'f1', ['hiveserver'], 1,
            [tu.make_inst_dict('hs1', 'hiveserver')])
        self.ng_secondarynamenode = tu.make_ng_dict(
            'snn', 'f1', ['secondarynamenode'], 1,
            [tu.make_inst_dict('snn1', 'secondarynamenode')])

    def test_get_namenode(self):
        cl = tu.create_cluster('cl1', 't1', 'vanilla', '2.7.1',
                               [self.ng_manager, self.ng_namenode])
        self.assertEqual('nn1', u.get_namenode(cl).instance_id)

        cl = tu.create_cluster('cl1', 't1', 'vanilla', '2.7.1',
                               [self.ng_manager])
        self.assertIsNone(u.get_namenode(cl))

    def test_get_nodemanagers(self):
        cl = tu.create_cluster('cl1', 't1', 'vanilla', '2.7.1',
                               [self.ng_manager, self.ng_nodemanager])
        nodemanagers = u.get_nodemanagers(cl)
        self.assertEqual(2, len(nodemanagers))
        self.assertEqual(set(['tt1', 'tt2']),
                         set([nodemanagers[0].instance_id,
                              nodemanagers[1].instance_id]))

        cl = tu.create_cluster('cl1', 't1', 'vanilla', '2.7.1',
                               [self.ng_namenode])
        self.assertEqual([], u.get_nodemanagers(cl))

    def test_get_oozie(self):
        cl = tu.create_cluster('cl1', 't1', 'vanilla', '2.7.1',
                               [self.ng_manager, self.ng_oozie])
        self.assertEqual('ooz1', u.get_oozie(cl).instance_id)

        cl = tu.create_cluster('cl1', 't1', 'vanilla', '2.7.1',
                               [self.ng_manager])
        self.assertIsNone(u.get_oozie(cl))

    def test_get_hiveserver(self):
        cl = tu.create_cluster('cl1', 't1', 'vanilla', '2.7.1',
                               [self.ng_manager, self.ng_hiveserver])
        self.assertEqual('hs1', u.get_hiveserver(cl).instance_id)

        cl = tu.create_cluster('cl1', 't1', 'vanilla', '2.7.1',
                               [self.ng_manager])
        self.assertIsNone(u.get_hiveserver(cl))

    def test_get_datanodes(self):
        cl = tu.create_cluster('cl1', 't1', 'vanilla', '2.7.1',
                               [self.ng_manager, self.ng_namenode,
                                self.ng_datanode])
        datanodes = u.get_datanodes(cl)
        self.assertEqual(2, len(datanodes))
        self.assertEqual(set(['dn1', 'dn2']),
                         set([datanodes[0].instance_id,
                              datanodes[1].instance_id]))

        cl = tu.create_cluster('cl1', 't1', 'vanilla', '2.7.1',
                               [self.ng_manager])
        self.assertEqual([], u.get_datanodes(cl))

    def test_get_secondarynamenodes(self):
        cl = tu.create_cluster('cl1', 't1', 'vanilla', '2.7.1',
                               [self.ng_manager, self.ng_namenode,
                                self.ng_secondarynamenode])
        self.assertEqual('snn1', u.get_secondarynamenode(cl).instance_id)

        cl = tu.create_cluster('cl1', 't1', 'vanilla', '2.7.1',
                               [self.ng_manager])
        self.assertIsNone(u.get_secondarynamenode(cl))
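The helpers exercised above are thin lookups over a cluster's node groups:
one instance (or None) for singleton processes, a list for scalable ones.
The module under test is not part of this excerpt, so the bodies below are
assumptions; plain dicts stand in for Sahara's resource objects:

# Hedged sketch of the get_namenode/get_datanodes lookup pattern.
def _instances(cluster, process):
    return [inst
            for ng in cluster['node_groups'] if process in ng['processes']
            for inst in ng['instances']]

def get_namenode_sketch(cluster):
    found = _instances(cluster, 'namenode')
    return found[0] if found else None      # singleton process, or None

def get_datanodes_sketch(cluster):
    return _instances(cluster, 'datanode')  # scalable process: a list

cluster = {'node_groups': [
    {'processes': ['namenode'], 'instances': [{'instance_id': 'nn1'}]},
    {'processes': ['datanode'], 'instances': [{'instance_id': 'dn1'},
                                              {'instance_id': 'dn2'}]}]}
assert get_namenode_sketch(cluster)['instance_id'] == 'nn1'
assert len(get_datanodes_sketch(cluster)) == 2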
sahara-plugin-vanilla-10.0.0/sahara_plugin_vanilla/tests/unit/plugins/vanilla/v2_7_1/__init__.py

sahara-plugin-vanilla-10.0.0/sahara_plugin_vanilla/tests/unit/plugins/vanilla/v2_7_1/test_config_helper.py

# Copyright (c) 2017 EasyStack Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from unittest import mock

from sahara.plugins import provisioning as p

from sahara_plugin_vanilla.plugins.vanilla.v2_7_1 import config_helper
from sahara_plugin_vanilla.tests.unit import base


class TestConfigHelper(base.SaharaTestCase):
    plugin_path = 'sahara_plugin_vanilla.plugins.vanilla.v2_7_1.'
    plugin_hadoop_path = 'sahara_plugin_vanilla.plugins.vanilla.hadoop2.'
    def setUp(self):
        super(TestConfigHelper, self).setUp()

    @mock.patch(plugin_hadoop_path + 'config_helper.PLUGIN_GENERAL_CONFIGS')
    @mock.patch(plugin_path + 'config_helper.PLUGIN_ENV_CONFIGS')
    @mock.patch(plugin_path + 'config_helper.PLUGIN_XML_CONFIGS')
    @mock.patch(plugin_path + 'config_helper._get_spark_configs')
    @mock.patch(plugin_path + 'config_helper._get_zookeeper_configs')
    def test_init_all_configs(self,
                              _get_zk_configs,
                              _get_spark_configs,
                              PLUGIN_XML_CONFIGS,
                              PLUGIN_ENV_CONFIGS,
                              PLUGIN_GENERAL_CONFIGS):
        configs = []
        configs.extend(PLUGIN_XML_CONFIGS)
        configs.extend(PLUGIN_ENV_CONFIGS)
        configs.extend(PLUGIN_GENERAL_CONFIGS)
        configs.extend(_get_spark_configs())
        configs.extend(_get_zk_configs())
        init_configs = config_helper._init_all_configs()
        self.assertEqual(init_configs, configs)

    def test_get_spark_opt_default(self):
        opt_name = 'Executor extra classpath'
        _default_executor_classpath = ":".join(
            ['/opt/hadoop/share/hadoop/tools/lib/hadoop-openstack-2.7.1.jar'])
        default = config_helper._get_spark_opt_default(opt_name)
        self.assertEqual(default, _default_executor_classpath)

    def test_get_spark_configs(self):
        spark_configs = config_helper._get_spark_configs()
        for i in spark_configs:
            self.assertIsInstance(i, p.Config)

    def test_get_plugin_configs(self):
        self.assertEqual(config_helper.get_plugin_configs(),
                         config_helper.PLUGIN_CONFIGS)

    def test_get_xml_configs(self):
        self.assertEqual(config_helper.get_xml_configs(),
                         config_helper.PLUGIN_XML_CONFIGS)

    def test_get_env_configs(self):
        self.assertEqual(config_helper.get_env_configs(),
                         config_helper.ENV_CONFS)

sahara-plugin-vanilla-10.0.0/sahara_plugin_vanilla/tests/unit/plugins/vanilla/v2_7_1/test_edp_engine.py

# Copyright (c) 2015 Red Hat, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from unittest import mock

from sahara.plugins import edp

from sahara_plugin_vanilla.plugins.vanilla.v2_7_1 import edp_engine
from sahara_plugin_vanilla.tests.unit import base as sahara_base


class Vanilla2ConfigHintsTest(sahara_base.SaharaTestCase):
    @mock.patch(
        'sahara_plugin_vanilla.plugins.vanilla.confighints_helper.'
        'get_possible_hive_config_from',
        return_value={})
    def test_get_possible_job_config_hive(
            self, get_possible_hive_config_from):
        expected_config = {'job_config': {}}
        actual_config = edp_engine.EdpOozieEngine.get_possible_job_config(
            edp.JOB_TYPE_HIVE)
        get_possible_hive_config_from.assert_called_once_with(
            'plugins/vanilla/v2_7_1/resources/hive-default.xml')
        self.assertEqual(expected_config, actual_config)

    @mock.patch('sahara_plugin_vanilla.plugins.vanilla.hadoop2.edp_engine.'
                'EdpOozieEngine')
    def test_get_possible_job_config_java(self, BaseVanillaEdpOozieEngine):
        expected_config = {'job_config': {}}
        BaseVanillaEdpOozieEngine.get_possible_job_config.return_value = (
            expected_config)
        actual_config = edp_engine.EdpOozieEngine.get_possible_job_config(
            edp.JOB_TYPE_JAVA)
        (BaseVanillaEdpOozieEngine.get_possible_job_config.
            assert_called_once_with(edp.JOB_TYPE_JAVA))
        self.assertEqual(expected_config, actual_config)

    @mock.patch(
        'sahara_plugin_vanilla.plugins.vanilla.confighints_helper.'
        'get_possible_mapreduce_config_from',
        return_value={})
    def test_get_possible_job_config_mapreduce(
            self, get_possible_mapreduce_config_from):
        expected_config = {'job_config': {}}
        actual_config = edp_engine.EdpOozieEngine.get_possible_job_config(
            edp.JOB_TYPE_MAPREDUCE)
        get_possible_mapreduce_config_from.assert_called_once_with(
            'plugins/vanilla/v2_7_1/resources/mapred-default.xml')
        self.assertEqual(expected_config, actual_config)

    @mock.patch(
        'sahara_plugin_vanilla.plugins.vanilla.confighints_helper.'
        'get_possible_mapreduce_config_from',
        return_value={})
    def test_get_possible_job_config_mapreduce_streaming(
            self, get_possible_mapreduce_config_from):
        expected_config = {'job_config': {}}
        actual_config = edp_engine.EdpOozieEngine.get_possible_job_config(
            edp.JOB_TYPE_MAPREDUCE_STREAMING)
        get_possible_mapreduce_config_from.assert_called_once_with(
            'plugins/vanilla/v2_7_1/resources/mapred-default.xml')
        self.assertEqual(expected_config, actual_config)

    @mock.patch(
        'sahara_plugin_vanilla.plugins.vanilla.confighints_helper.'
        'get_possible_pig_config_from',
        return_value={})
    def test_get_possible_job_config_pig(
            self, get_possible_pig_config_from):
        expected_config = {'job_config': {}}
        actual_config = edp_engine.EdpOozieEngine.get_possible_job_config(
            edp.JOB_TYPE_PIG)
        get_possible_pig_config_from.assert_called_once_with(
            'plugins/vanilla/v2_7_1/resources/mapred-default.xml')
        self.assertEqual(expected_config, actual_config)

    @mock.patch('sahara_plugin_vanilla.plugins.vanilla.hadoop2.edp_engine.'
                'EdpOozieEngine')
    def test_get_possible_job_config_shell(self, BaseVanillaEdpOozieEngine):
        expected_config = {'job_config': {}}
        BaseVanillaEdpOozieEngine.get_possible_job_config.return_value = (
            expected_config)
        actual_config = edp_engine.EdpOozieEngine.get_possible_job_config(
            edp.JOB_TYPE_SHELL)
        (BaseVanillaEdpOozieEngine.get_possible_job_config.
            assert_called_once_with(edp.JOB_TYPE_SHELL))
        self.assertEqual(expected_config, actual_config)
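The contract these tests fix is a simple dispatch: each job type resolves to
a confighints helper fed a version-specific resource path, while Java and
Shell defer to the hadoop2 base engine. A minimal sketch of that dispatch
under those assumptions; the names below are illustrative, not the engine's
actual code:

# Hedged sketch of the per-job-type config-hint dispatch described above.
RESOURCES = 'plugins/vanilla/v2_7_1/resources/'  # version-specific prefix

def possible_job_config_sketch(job_type, hive_cfg, mapred_cfg, pig_cfg):
    # hive_cfg/mapred_cfg/pig_cfg stand in for the confighints helpers.
    if job_type == 'Hive':
        return {'job_config': hive_cfg(RESOURCES + 'hive-default.xml')}
    if job_type in ('MapReduce', 'MapReduce.Streaming'):
        return {'job_config': mapred_cfg(RESOURCES + 'mapred-default.xml')}
    if job_type == 'Pig':
        return {'job_config': pig_cfg(RESOURCES + 'mapred-default.xml')}
    return {'job_config': {}}  # Java/Shell: hadoop2 base engine's answer

print(possible_job_config_sketch('Pig', lambda p: {}, lambda p: {},
                                 lambda p: {}))  # {'job_config': {}}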
sahara-plugin-vanilla-10.0.0/sahara_plugin_vanilla/tests/unit/plugins/vanilla/v2_7_1/test_versionhandler.py

# Copyright (c) 2017 EasyStack Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from unittest import mock

import six
import testtools

from sahara.plugins import base as pb
from sahara.plugins import exceptions as ex
from sahara.plugins import resource as r
from sahara.plugins import testutils

from sahara_plugin_vanilla.plugins.vanilla.v2_7_1.edp_engine import \
    EdpOozieEngine
from sahara_plugin_vanilla.plugins.vanilla.v2_7_1.edp_engine import \
    EdpSparkEngine
from sahara_plugin_vanilla.plugins.vanilla.v2_7_1 import versionhandler as v_h
from sahara_plugin_vanilla.tests.unit import base


class TestConfig(object):
    def __init__(self, applicable_target, name, default_value):
        self.applicable_target = applicable_target
        self.name = name
        self.default_value = default_value


class VersionHandlerTest(base.SaharaTestCase):

    plugin_path = 'sahara_plugin_vanilla.plugins.vanilla.'
    plugin_hadoop2_path = 'sahara_plugin_vanilla.plugins.vanilla.hadoop2.'

    def setUp(self):
        super(VersionHandlerTest, self).setUp()
        self.cluster = mock.Mock()
        self.vh = v_h.VersionHandler()
        self.override_config("plugins", ["vanilla"])
        pb.setup_plugins()

    def test_get_plugin_configs(self):
        self.vh.pctx['all_confs'] = 'haha'
        conf = self.vh.get_plugin_configs()
        self.assertEqual(conf, 'haha')

    def test_get_node_processes(self):
        processes = self.vh.get_node_processes()
        for k, v in six.iteritems(processes):
            for p in v:
                self.assertIsInstance(p, str)

    @mock.patch(plugin_hadoop2_path + 'validation.validate_cluster_creating')
    def test_validate(self, validate_create):
        self.vh.pctx = mock.Mock()
        self.vh.validate(self.cluster)
        validate_create.assert_called_once_with(self.vh.pctx, self.cluster)

    @mock.patch(plugin_path +
                'v2_7_1.versionhandler.VersionHandler.update_infra')
    def test_update_infra(self, update_infra):
        self.vh.update_infra(self.cluster)
        update_infra.assert_called_once_with(self.cluster)

    @mock.patch(plugin_hadoop2_path + 'config.configure_cluster')
    def test_configure_cluster(self, configure_cluster):
        self.vh.pctx = mock.Mock()
        self.vh.configure_cluster(self.cluster)
        configure_cluster.assert_called_once_with(self.vh.pctx, self.cluster)

    @mock.patch(plugin_path + 'v2_7_1.versionhandler.run')
    @mock.patch(plugin_path + 'v2_7_1.versionhandler.starting_scripts')
    @mock.patch('sahara.plugins.swift_helper.install_ssl_certs')
    @mock.patch(plugin_hadoop2_path + 'keypairs.provision_keypairs')
    @mock.patch('sahara.plugins.utils.get_instances')
    @mock.patch('sahara.plugins.utils.cluster_get_instances')
    def test_start_cluster(self, c_get_instances, u_get_instances,
                           provision_keypairs, install_ssl_certs,
                           s_scripts, run):
        self.vh.pctx = mock.Mock()
        instances = mock.Mock()
        c_get_instances.return_value = instances
        u_get_instances.return_value = instances
        self.vh._set_cluster_info = mock.Mock()
        self.vh.start_cluster(self.cluster)
        provision_keypairs.assert_called_once_with(self.cluster)
        s_scripts.start_namenode.assert_called_once_with(self.cluster)
        s_scripts.start_secondarynamenode.assert_called_once_with(
            self.cluster)
        s_scripts.start_resourcemanager.assert_called_once_with(self.cluster)
        s_scripts.start_historyserver.assert_called_once_with(self.cluster)
        s_scripts.start_oozie.assert_called_once_with(self.vh.pctx,
                                                      self.cluster)
        s_scripts.start_hiveserver.assert_called_once_with(self.vh.pctx,
                                                           self.cluster)
        s_scripts.start_spark.assert_called_once_with(self.cluster)
        run.start_dn_nm_processes.assert_called_once_with(instances)
        run.await_datanodes.assert_called_once_with(self.cluster)
        install_ssl_certs.assert_called_once_with(instances)
        self.vh._set_cluster_info.assert_called_once_with(self.cluster)

    @mock.patch(plugin_hadoop2_path + 'scaling.decommission_nodes')
    def test_decommission_nodes(self, decommission_nodes):
        self.vh.pctx = mock.Mock()
        cluster = mock.Mock()
        instances = mock.Mock()
        self.vh.decommission_nodes(cluster, instances)
        decommission_nodes.assert_called_once_with(self.vh.pctx,
                                                   cluster, instances)

    @mock.patch('sahara.plugins.utils.general.get_by_id')
    @mock.patch(plugin_hadoop2_path +
                'validation.validate_additional_ng_scaling')
    @mock.patch(plugin_hadoop2_path +
                'validation.validate_existing_ng_scaling')
    def test_validate_scaling(self, vls, vla, get_by_id):
        self.vh.pctx['all_confs'] = [TestConfig('HDFS', 'dfs.replication',
                                                -1)]
        ng1 = testutils.make_ng_dict('ng1', '40', ['namenode'], 1)
        ng2 = testutils.make_ng_dict('ng2', '41', ['datanode'], 2)
        ng3 = testutils.make_ng_dict('ng3', '42', ['datanode'], 3)
        additional = [ng2['id'], ng3['id']]
        existing = {ng2['id']: 1}
        cluster = testutils.create_cluster('test-cluster', 'tenant1', 'fake',
                                           '0.1', [ng1, ng2, ng3])
        self.vh.validate_scaling(cluster, existing, additional)
        vla.assert_called_once_with(cluster, additional)
        vls.assert_called_once_with(self.vh.pctx, cluster, existing)

        ng4 = testutils.make_ng_dict('ng4', '43', ['datanode', 'zookeeper'],
                                     3)
        ng5 = testutils.make_ng_dict('ng5', '44', ['datanode', 'zookeeper'],
                                     1)
        existing = {ng4['id']: 2}
        additional = {ng5['id']}
        cluster = testutils.create_cluster('test-cluster', 'tenant1', 'fake',
                                           '0.1', [ng1, ng4])

        with testtools.ExpectedException(ex.ClusterCannotBeScaled):
            self.vh.validate_scaling(cluster, existing, {})

        get_by_id.return_value = r.create_node_group_resource(ng5)

        with testtools.ExpectedException(ex.ClusterCannotBeScaled):
            self.vh.validate_scaling(cluster, {}, additional)

    @mock.patch(plugin_hadoop2_path + 'scaling.scale_cluster')
    @mock.patch(plugin_hadoop2_path + 'keypairs.provision_keypairs')
    def test_scale_cluster(self, provision_keypairs, scale_cluster):
        self.vh.pctx = mock.Mock()
        instances = mock.Mock()
        self.vh.scale_cluster(self.cluster, instances)
        provision_keypairs.assert_called_once_with(self.cluster, instances)
        scale_cluster.assert_called_once_with(self.vh.pctx, self.cluster,
                                              instances)

    @mock.patch("sahara.plugins.conductor.cluster_update")
    @mock.patch("sahara.plugins.context.ctx")
    @mock.patch(plugin_path + 'utils.get_namenode')
    @mock.patch(plugin_path + 'utils.get_resourcemanager')
    @mock.patch(plugin_path + 'utils.get_historyserver')
    @mock.patch(plugin_path + 'utils.get_oozie')
    @mock.patch(plugin_path + 'utils.get_spark_history_server')
    def test_set_cluster_info(self, get_spark_history_server, get_oozie,
                              get_historyserver, get_resourcemanager,
                              get_namenode, ctx, cluster_update):
        get_spark_history_server.return_value.management_ip = '1.2.3.0'
        get_oozie.return_value.get_ip_or_dns_name = mock.Mock(
            return_value='1.2.3.1')
        get_historyserver.return_value.get_ip_or_dns_name = mock.Mock(
            return_value='1.2.3.2')
        get_resourcemanager.return_value.get_ip_or_dns_name = mock.Mock(
            return_value='1.2.3.3')
        get_namenode.return_value.get_ip_or_dns_name = mock.Mock(
            return_value='1.2.3.4')
        get_namenode.return_value.hostname = mock.Mock(
            return_value='testnode')
        self.vh._set_cluster_info(self.cluster)
        info = {
            'YARN': {
                'Web UI': 'http://1.2.3.3:8088',
                'ResourceManager': 'http://1.2.3.3:8032'
            },
            'HDFS': {
                'Web UI': 'http://1.2.3.4:50070',
                'NameNode': 'hdfs://testnode:9000'
            },
            'JobFlow': {
                'Oozie': 'http://1.2.3.1:11000'
            },
            'MapReduce JobHistory Server': {
                'Web UI': 'http://1.2.3.2:19888'
            },
            'Apache Spark': {
                'Spark UI': 'http://1.2.3.0:4040',
                'Spark History Server UI': 'http://1.2.3.0:18080'
            }
        }
        cluster_update.assert_called_once_with(ctx(), self.cluster,
                                               {'info': info})

    @mock.patch("sahara.plugins.edp.get_plugin")
    @mock.patch('sahara.plugins.utils.get_instance')
    @mock.patch('os.path.join')
    def test_get_edp_engine(self, join, get_instance, get_plugin):
        job_type = ''
        ret = self.vh.get_edp_engine(self.cluster, job_type)
        self.assertIsNone(ret)

        job_type = 'Java'
        ret = self.vh.get_edp_engine(self.cluster, job_type)
        self.assertIsInstance(ret, EdpOozieEngine)

        job_type = 'Spark'
        ret = self.vh.get_edp_engine(self.cluster, job_type)
        self.assertIsInstance(ret, EdpSparkEngine)

    def test_get_edp_job_types(self):
        job_types = ['Hive', 'Java', 'MapReduce', 'MapReduce.Streaming',
                     'Pig', 'Shell', 'Spark']
        self.assertEqual(self.vh.get_edp_job_types(), job_types)

    def test_get_edp_config_hints(self):
        job_type = 'Java'
        ret = {'job_config': {'args': [], 'configs': []}}
        self.assertEqual(self.vh.get_edp_config_hints(job_type), ret)

    @mock.patch(plugin_hadoop2_path + 'utils.delete_oozie_password')
    @mock.patch(plugin_hadoop2_path + 'keypairs.drop_key')
    def test_on_terminate_cluster(self, delete_oozie_password, drop_key):
        self.vh.on_terminate_cluster(self.cluster)
        delete_oozie_password.assert_called_once_with(self.cluster)
        drop_key.assert_called_once_with(self.cluster)

    @mock.patch(plugin_hadoop2_path + 'config.get_open_ports')
    def test_get_open_ports(self, get_open_ports):
        node_group = mock.Mock()
        self.vh.get_open_ports(node_group)
        get_open_ports.assert_called_once_with(node_group)

    @mock.patch(plugin_hadoop2_path +
                'recommendations_utils.recommend_configs')
    def test_recommend_configs(self, recommend_configs):
        scaling = mock.Mock()
        configs = mock.Mock()
        self.vh.pctx['all_confs'] = configs
        self.vh.recommend_configs(self.cluster, scaling)
        recommend_configs.assert_called_once_with(self.cluster,
                                                  configs, scaling)
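The expected `info` dictionary in test_set_cluster_info doubles as a port
map for a Vanilla cluster (YARN 8088/8032, HDFS 50070/9000, Oozie 11000,
JobHistory 19888, Spark 4040/18080). A minimal sketch of how such a map is
assembled from resolved addresses; this illustrates the shape the test pins
down, not the version handler's actual `_set_cluster_info` body:

# Hedged sketch: build the cluster 'info' block from resolved hosts.
def build_cluster_info_sketch(rm, nn_ip, nn_host, oozie, history, spark):
    return {
        'YARN': {'Web UI': 'http://%s:8088' % rm,
                 'ResourceManager': 'http://%s:8032' % rm},
        'HDFS': {'Web UI': 'http://%s:50070' % nn_ip,
                 'NameNode': 'hdfs://%s:9000' % nn_host},
        'JobFlow': {'Oozie': 'http://%s:11000' % oozie},
        'MapReduce JobHistory Server': {
            'Web UI': 'http://%s:19888' % history},
        'Apache Spark': {
            'Spark UI': 'http://%s:4040' % spark,
            'Spark History Server UI': 'http://%s:18080' % spark},
    }

print(build_cluster_info_sketch('1.2.3.3', '1.2.3.4', 'testnode',
                                '1.2.3.1', '1.2.3.2', '1.2.3.0')['YARN'])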
sahara-plugin-vanilla-10.0.0/sahara_plugin_vanilla/tests/unit/plugins/vanilla/v2_7_5/__init__.py

sahara-plugin-vanilla-10.0.0/sahara_plugin_vanilla/tests/unit/plugins/vanilla/v2_7_5/test_config_helper.py

# Copyright (c) 2017 EasyStack Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from unittest import mock

from sahara.plugins import provisioning as p

from sahara_plugin_vanilla.plugins.vanilla.v2_7_5 import config_helper
from sahara_plugin_vanilla.tests.unit import base


class TestConfigHelper(base.SaharaTestCase):
    plugin_path = 'sahara_plugin_vanilla.plugins.vanilla.v2_7_5.'
    plugin_hadoop_path = 'sahara_plugin_vanilla.plugins.vanilla.hadoop2.'

    def setUp(self):
        super(TestConfigHelper, self).setUp()

    @mock.patch(plugin_hadoop_path + 'config_helper.PLUGIN_GENERAL_CONFIGS')
    @mock.patch(plugin_path + 'config_helper.PLUGIN_ENV_CONFIGS')
    @mock.patch(plugin_path + 'config_helper.PLUGIN_XML_CONFIGS')
    @mock.patch(plugin_path + 'config_helper._get_spark_configs')
    @mock.patch(plugin_path + 'config_helper._get_zookeeper_configs')
    def test_init_all_configs(self,
                              _get_zk_configs,
                              _get_spark_configs,
                              PLUGIN_XML_CONFIGS,
                              PLUGIN_ENV_CONFIGS,
                              PLUGIN_GENERAL_CONFIGS):
        configs = []
        configs.extend(PLUGIN_XML_CONFIGS)
        configs.extend(PLUGIN_ENV_CONFIGS)
        configs.extend(PLUGIN_GENERAL_CONFIGS)
        configs.extend(_get_spark_configs())
        configs.extend(_get_zk_configs())
        init_configs = config_helper._init_all_configs()
        self.assertEqual(init_configs, configs)

    def test_get_spark_opt_default(self):
        opt_name = 'Executor extra classpath'
        _default_executor_classpath = ":".join(
            ['/opt/hadoop/share/hadoop/tools/lib/hadoop-openstack-2.7.5.jar'])
        default = config_helper._get_spark_opt_default(opt_name)
        self.assertEqual(default, _default_executor_classpath)

    def test_get_spark_configs(self):
        spark_configs = config_helper._get_spark_configs()
        for i in spark_configs:
            self.assertIsInstance(i, p.Config)

    def test_get_plugin_configs(self):
        self.assertEqual(config_helper.get_plugin_configs(),
                         config_helper.PLUGIN_CONFIGS)

    def test_get_xml_configs(self):
        self.assertEqual(config_helper.get_xml_configs(),
                         config_helper.PLUGIN_XML_CONFIGS)

    def test_get_env_configs(self):
        self.assertEqual(config_helper.get_env_configs(),
                         config_helper.ENV_CONFS)

sahara-plugin-vanilla-10.0.0/sahara_plugin_vanilla/tests/unit/plugins/vanilla/v2_7_5/test_edp_engine.py

# Copyright (c) 2015 Red Hat, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from unittest import mock

from sahara.plugins import edp

from sahara_plugin_vanilla.plugins.vanilla.v2_7_5 import edp_engine
from sahara_plugin_vanilla.tests.unit import base as sahara_base


class Vanilla2ConfigHintsTest(sahara_base.SaharaTestCase):
    @mock.patch(
        'sahara_plugin_vanilla.plugins.vanilla.confighints_helper.'
        'get_possible_hive_config_from',
        return_value={})
    def test_get_possible_job_config_hive(
            self, get_possible_hive_config_from):
        expected_config = {'job_config': {}}
        actual_config = edp_engine.EdpOozieEngine.get_possible_job_config(
            edp.JOB_TYPE_HIVE)
        get_possible_hive_config_from.assert_called_once_with(
            'plugins/vanilla/v2_7_5/resources/hive-default.xml')
        self.assertEqual(expected_config, actual_config)

    @mock.patch('sahara_plugin_vanilla.plugins.vanilla.hadoop2.edp_engine.'
                'EdpOozieEngine')
    def test_get_possible_job_config_java(self, BaseVanillaEdpOozieEngine):
        expected_config = {'job_config': {}}
        BaseVanillaEdpOozieEngine.get_possible_job_config.return_value = (
            expected_config)
        actual_config = edp_engine.EdpOozieEngine.get_possible_job_config(
            edp.JOB_TYPE_JAVA)
        (BaseVanillaEdpOozieEngine.get_possible_job_config.
            assert_called_once_with(edp.JOB_TYPE_JAVA))
        self.assertEqual(expected_config, actual_config)

    @mock.patch(
        'sahara_plugin_vanilla.plugins.vanilla.confighints_helper.'
        'get_possible_mapreduce_config_from',
        return_value={})
    def test_get_possible_job_config_mapreduce(
            self, get_possible_mapreduce_config_from):
        expected_config = {'job_config': {}}
        actual_config = edp_engine.EdpOozieEngine.get_possible_job_config(
            edp.JOB_TYPE_MAPREDUCE)
        get_possible_mapreduce_config_from.assert_called_once_with(
            'plugins/vanilla/v2_7_5/resources/mapred-default.xml')
        self.assertEqual(expected_config, actual_config)

    @mock.patch(
        'sahara_plugin_vanilla.plugins.vanilla.confighints_helper.'
        'get_possible_mapreduce_config_from',
        return_value={})
    def test_get_possible_job_config_mapreduce_streaming(
            self, get_possible_mapreduce_config_from):
        expected_config = {'job_config': {}}
        actual_config = edp_engine.EdpOozieEngine.get_possible_job_config(
            edp.JOB_TYPE_MAPREDUCE_STREAMING)
        get_possible_mapreduce_config_from.assert_called_once_with(
            'plugins/vanilla/v2_7_5/resources/mapred-default.xml')
        self.assertEqual(expected_config, actual_config)

    @mock.patch(
        'sahara_plugin_vanilla.plugins.vanilla.confighints_helper.'
        'get_possible_pig_config_from',
        return_value={})
    def test_get_possible_job_config_pig(
            self, get_possible_pig_config_from):
        expected_config = {'job_config': {}}
        actual_config = edp_engine.EdpOozieEngine.get_possible_job_config(
            edp.JOB_TYPE_PIG)
        get_possible_pig_config_from.assert_called_once_with(
            'plugins/vanilla/v2_7_5/resources/mapred-default.xml')
        self.assertEqual(expected_config, actual_config)

    @mock.patch('sahara_plugin_vanilla.plugins.vanilla.hadoop2.edp_engine.'
                'EdpOozieEngine')
    def test_get_possible_job_config_shell(self, BaseVanillaEdpOozieEngine):
        expected_config = {'job_config': {}}
        BaseVanillaEdpOozieEngine.get_possible_job_config.return_value = (
            expected_config)
        actual_config = edp_engine.EdpOozieEngine.get_possible_job_config(
            edp.JOB_TYPE_SHELL)
        (BaseVanillaEdpOozieEngine.get_possible_job_config.
            assert_called_once_with(edp.JOB_TYPE_SHELL))
        self.assertEqual(expected_config, actual_config)
sahara-plugin-vanilla-10.0.0/sahara_plugin_vanilla/tests/unit/plugins/vanilla/v2_7_5/test_versionhandler.py

# Copyright (c) 2017 EasyStack Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from unittest import mock

import six
import testtools

from sahara.plugins import base as pb
from sahara.plugins import exceptions as ex
from sahara.plugins import resource as r
from sahara.plugins import testutils

from sahara_plugin_vanilla.plugins.vanilla.v2_7_5.edp_engine import \
    EdpOozieEngine
from sahara_plugin_vanilla.plugins.vanilla.v2_7_5.edp_engine import \
    EdpSparkEngine
from sahara_plugin_vanilla.plugins.vanilla.v2_7_5 import versionhandler as v_h
from sahara_plugin_vanilla.tests.unit import base


class TestConfig(object):
    def __init__(self, applicable_target, name, default_value):
        self.applicable_target = applicable_target
        self.name = name
        self.default_value = default_value


class VersionHandlerTest(base.SaharaTestCase):

    plugin_path = 'sahara_plugin_vanilla.plugins.vanilla.'
    plugin_hadoop2_path = 'sahara_plugin_vanilla.plugins.vanilla.hadoop2.'

    def setUp(self):
        super(VersionHandlerTest, self).setUp()
        self.cluster = mock.Mock()
        self.vh = v_h.VersionHandler()
        self.override_config("plugins", ["vanilla"])
        pb.setup_plugins()

    def test_get_plugin_configs(self):
        self.vh.pctx['all_confs'] = 'haha'
        conf = self.vh.get_plugin_configs()
        self.assertEqual(conf, 'haha')

    def test_get_node_processes(self):
        processes = self.vh.get_node_processes()
        for k, v in six.iteritems(processes):
            for p in v:
                self.assertIsInstance(p, str)

    @mock.patch(plugin_hadoop2_path + 'validation.validate_cluster_creating')
    def test_validate(self, validate_create):
        self.vh.pctx = mock.Mock()
        self.vh.validate(self.cluster)
        validate_create.assert_called_once_with(self.vh.pctx, self.cluster)

    @mock.patch(plugin_path +
                'v2_7_5.versionhandler.VersionHandler.update_infra')
    def test_update_infra(self, update_infra):
        self.vh.update_infra(self.cluster)
        update_infra.assert_called_once_with(self.cluster)

    @mock.patch(plugin_hadoop2_path + 'config.configure_cluster')
    def test_configure_cluster(self, configure_cluster):
        self.vh.pctx = mock.Mock()
        self.vh.configure_cluster(self.cluster)
        configure_cluster.assert_called_once_with(self.vh.pctx, self.cluster)

    @mock.patch(plugin_path + 'v2_7_5.versionhandler.run')
    @mock.patch(plugin_path + 'v2_7_5.versionhandler.starting_scripts')
    @mock.patch('sahara.plugins.swift_helper.install_ssl_certs')
    @mock.patch(plugin_hadoop2_path + 'keypairs.provision_keypairs')
    @mock.patch('sahara.plugins.utils.get_instances')
    @mock.patch('sahara.plugins.utils.cluster_get_instances')
    def test_start_cluster(self, c_get_instances, u_get_instances,
                           provision_keypairs, install_ssl_certs,
                           s_scripts, run):
        self.vh.pctx = mock.Mock()
        instances = mock.Mock()
        c_get_instances.return_value = instances
        u_get_instances.return_value = instances
        self.vh._set_cluster_info = mock.Mock()
        self.vh.start_cluster(self.cluster)
        provision_keypairs.assert_called_once_with(self.cluster)
        s_scripts.start_namenode.assert_called_once_with(self.cluster)
        s_scripts.start_secondarynamenode.assert_called_once_with(
            self.cluster)
        s_scripts.start_resourcemanager.assert_called_once_with(self.cluster)
        s_scripts.start_historyserver.assert_called_once_with(self.cluster)
        s_scripts.start_oozie.assert_called_once_with(self.vh.pctx,
                                                      self.cluster)
        s_scripts.start_hiveserver.assert_called_once_with(self.vh.pctx,
                                                           self.cluster)
        s_scripts.start_spark.assert_called_once_with(self.cluster)
        run.start_dn_nm_processes.assert_called_once_with(instances)
        run.await_datanodes.assert_called_once_with(self.cluster)
        install_ssl_certs.assert_called_once_with(instances)
        self.vh._set_cluster_info.assert_called_once_with(self.cluster)

    @mock.patch(plugin_hadoop2_path + 'scaling.decommission_nodes')
    def test_decommission_nodes(self, decommission_nodes):
        self.vh.pctx = mock.Mock()
        cluster = mock.Mock()
        instances = mock.Mock()
        self.vh.decommission_nodes(cluster, instances)
        decommission_nodes.assert_called_once_with(self.vh.pctx,
                                                   cluster, instances)

    @mock.patch('sahara.plugins.utils.general.get_by_id')
    @mock.patch(plugin_hadoop2_path +
                'validation.validate_additional_ng_scaling')
    @mock.patch(plugin_hadoop2_path +
                'validation.validate_existing_ng_scaling')
    def test_validate_scaling(self, vls, vla, get_by_id):
        self.vh.pctx['all_confs'] = [TestConfig('HDFS', 'dfs.replication',
                                                -1)]
        ng1 = testutils.make_ng_dict('ng1', '40', ['namenode'], 1)
        ng2 = testutils.make_ng_dict('ng2', '41', ['datanode'], 2)
        ng3 = testutils.make_ng_dict('ng3', '42', ['datanode'], 3)
        additional = [ng2['id'], ng3['id']]
        existing = {ng2['id']: 1}
        cluster = testutils.create_cluster('test-cluster', 'tenant1', 'fake',
                                           '0.1', [ng1, ng2, ng3])
        self.vh.validate_scaling(cluster, existing, additional)
        vla.assert_called_once_with(cluster, additional)
        vls.assert_called_once_with(self.vh.pctx, cluster, existing)

        ng4 = testutils.make_ng_dict('ng4', '43', ['datanode', 'zookeeper'],
                                     3)
        ng5 = testutils.make_ng_dict('ng5', '44', ['datanode', 'zookeeper'],
                                     1)
        existing = {ng4['id']: 2}
        additional = {ng5['id']}
        cluster = testutils.create_cluster('test-cluster', 'tenant1', 'fake',
                                           '0.1', [ng1, ng4])

        with testtools.ExpectedException(ex.ClusterCannotBeScaled):
            self.vh.validate_scaling(cluster, existing, {})

        get_by_id.return_value = r.create_node_group_resource(ng5)

        with testtools.ExpectedException(ex.ClusterCannotBeScaled):
            self.vh.validate_scaling(cluster, {}, additional)

    @mock.patch(plugin_hadoop2_path + 'scaling.scale_cluster')
    @mock.patch(plugin_hadoop2_path + 'keypairs.provision_keypairs')
    def test_scale_cluster(self, provision_keypairs, scale_cluster):
        self.vh.pctx = mock.Mock()
        instances = mock.Mock()
        self.vh.scale_cluster(self.cluster, instances)
        provision_keypairs.assert_called_once_with(self.cluster, instances)
        scale_cluster.assert_called_once_with(self.vh.pctx, self.cluster,
                                              instances)

    @mock.patch("sahara.plugins.conductor.cluster_update")
    @mock.patch("sahara.plugins.context.ctx")
    @mock.patch(plugin_path + 'utils.get_namenode')
    @mock.patch(plugin_path + 'utils.get_resourcemanager')
    @mock.patch(plugin_path + 'utils.get_historyserver')
    @mock.patch(plugin_path + 'utils.get_oozie')
    @mock.patch(plugin_path + 'utils.get_spark_history_server')
    def test_set_cluster_info(self, get_spark_history_server, get_oozie,
                              get_historyserver, get_resourcemanager,
                              get_namenode, ctx, cluster_update):
        get_spark_history_server.return_value.management_ip = '1.2.3.0'
        get_oozie.return_value.get_ip_or_dns_name = mock.Mock(
            return_value='1.2.3.1')
        get_historyserver.return_value.get_ip_or_dns_name = mock.Mock(
            return_value='1.2.3.2')
        get_resourcemanager.return_value.get_ip_or_dns_name = mock.Mock(
            return_value='1.2.3.3')
        get_namenode.return_value.get_ip_or_dns_name = mock.Mock(
            return_value='1.2.3.4')
        get_namenode.return_value.hostname = mock.Mock(
            return_value='testnode')
        self.vh._set_cluster_info(self.cluster)
        info = {
            'YARN': {
                'Web UI': 'http://1.2.3.3:8088',
                'ResourceManager': 'http://1.2.3.3:8032'
            },
            'HDFS': {
                'Web UI': 'http://1.2.3.4:50070',
                'NameNode': 'hdfs://testnode:9000'
            },
            'JobFlow': {
                'Oozie': 'http://1.2.3.1:11000'
            },
            'MapReduce JobHistory Server': {
                'Web UI': 'http://1.2.3.2:19888'
            },
            'Apache Spark': {
                'Spark UI': 'http://1.2.3.0:4040',
                'Spark History Server UI': 'http://1.2.3.0:18080'
            }
        }
        cluster_update.assert_called_once_with(ctx(), self.cluster,
                                               {'info': info})

    @mock.patch("sahara.plugins.edp.get_plugin")
    @mock.patch('sahara.plugins.utils.get_instance')
    @mock.patch('os.path.join')
    def test_get_edp_engine(self, join, get_instance, get_plugin):
        job_type = ''
        ret = self.vh.get_edp_engine(self.cluster, job_type)
        self.assertIsNone(ret)

        job_type = 'Java'
        ret = self.vh.get_edp_engine(self.cluster, job_type)
        self.assertIsInstance(ret, EdpOozieEngine)

        job_type = 'Spark'
        ret = self.vh.get_edp_engine(self.cluster, job_type)
        self.assertIsInstance(ret, EdpSparkEngine)

    def test_get_edp_job_types(self):
        job_types = ['Hive', 'Java', 'MapReduce', 'MapReduce.Streaming',
                     'Pig', 'Shell', 'Spark']
        self.assertEqual(self.vh.get_edp_job_types(), job_types)

    def test_get_edp_config_hints(self):
        job_type = 'Java'
        ret = {'job_config': {'args': [], 'configs': []}}
        self.assertEqual(self.vh.get_edp_config_hints(job_type), ret)

    @mock.patch(plugin_hadoop2_path + 'utils.delete_oozie_password')
    @mock.patch(plugin_hadoop2_path + 'keypairs.drop_key')
    def test_on_terminate_cluster(self, delete_oozie_password, drop_key):
        self.vh.on_terminate_cluster(self.cluster)
        delete_oozie_password.assert_called_once_with(self.cluster)
        drop_key.assert_called_once_with(self.cluster)

    @mock.patch(plugin_hadoop2_path + 'config.get_open_ports')
    def test_get_open_ports(self, get_open_ports):
        node_group = mock.Mock()
        self.vh.get_open_ports(node_group)
        get_open_ports.assert_called_once_with(node_group)

    @mock.patch(plugin_hadoop2_path +
                'recommendations_utils.recommend_configs')
    def test_recommend_configs(self, recommend_configs):
        scaling = mock.Mock()
        configs = mock.Mock()
        self.vh.pctx['all_confs'] = configs
        self.vh.recommend_configs(self.cluster, scaling)
        recommend_configs.assert_called_once_with(self.cluster,
                                                  configs, scaling)

sahara-plugin-vanilla-10.0.0/sahara_plugin_vanilla/tests/unit/plugins/vanilla/v2_8_2/__init__.py

sahara-plugin-vanilla-10.0.0/sahara_plugin_vanilla/tests/unit/plugins/vanilla/v2_8_2/test_config_helper.py

# Copyright (c) 2017 EasyStack Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from unittest import mock

from sahara.plugins import provisioning as p

from sahara_plugin_vanilla.plugins.vanilla.v2_8_2 import config_helper
from sahara_plugin_vanilla.tests.unit import base


class TestConfigHelper(base.SaharaTestCase):
    plugin_path = 'sahara_plugin_vanilla.plugins.vanilla.v2_8_2.'
    plugin_hadoop_path = 'sahara_plugin_vanilla.plugins.vanilla.hadoop2.'

    def setUp(self):
        super(TestConfigHelper, self).setUp()

    @mock.patch(plugin_hadoop_path + 'config_helper.PLUGIN_GENERAL_CONFIGS')
    @mock.patch(plugin_path + 'config_helper.PLUGIN_ENV_CONFIGS')
    @mock.patch(plugin_path + 'config_helper.PLUGIN_XML_CONFIGS')
    @mock.patch(plugin_path + 'config_helper._get_spark_configs')
    @mock.patch(plugin_path + 'config_helper._get_zookeeper_configs')
    def test_init_all_configs(self,
                              _get_zk_configs,
                              _get_spark_configs,
                              PLUGIN_XML_CONFIGS,
                              PLUGIN_ENV_CONFIGS,
                              PLUGIN_GENERAL_CONFIGS):
        configs = []
        configs.extend(PLUGIN_XML_CONFIGS)
        configs.extend(PLUGIN_ENV_CONFIGS)
        configs.extend(PLUGIN_GENERAL_CONFIGS)
        configs.extend(_get_spark_configs())
        configs.extend(_get_zk_configs())
        init_configs = config_helper._init_all_configs()
        self.assertEqual(init_configs, configs)

    def test_get_spark_opt_default(self):
        opt_name = 'Executor extra classpath'
        _default_executor_classpath = ":".join(
            ['/opt/hadoop/share/hadoop/tools/lib/hadoop-openstack-2.8.2.jar'])
        default = config_helper._get_spark_opt_default(opt_name)
        self.assertEqual(default, _default_executor_classpath)

    def test_get_spark_configs(self):
        spark_configs = config_helper._get_spark_configs()
        for i in spark_configs:
            self.assertIsInstance(i, p.Config)

    def test_get_plugin_configs(self):
        self.assertEqual(config_helper.get_plugin_configs(),
                         config_helper.PLUGIN_CONFIGS)

    def test_get_xml_configs(self):
        self.assertEqual(config_helper.get_xml_configs(),
                         config_helper.PLUGIN_XML_CONFIGS)

    def test_get_env_configs(self):
        self.assertEqual(config_helper.get_env_configs(),
                         config_helper.ENV_CONFS)
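Across the three copies of this test (v2_7_1, v2_7_5, v2_8_2) the only
moving part is the Hadoop version baked into the default Spark classpath.
A minimal sketch of a version-parameterized `_get_spark_opt_default`-style
lookup; the constant and function names are assumptions for illustration,
not the plugin's real `config_helper` code:

# Hedged sketch mirroring what test_get_spark_opt_default asserts.
HADOOP_VERSION = '2.8.2'  # hypothetical per-version module constant

SPARK_OPT_DEFAULTS = {
    'Executor extra classpath': ':'.join(
        ['/opt/hadoop/share/hadoop/tools/lib/'
         'hadoop-openstack-%s.jar' % HADOOP_VERSION]),
}

def get_spark_opt_default_sketch(opt_name):
    return SPARK_OPT_DEFAULTS.get(opt_name)

assert get_spark_opt_default_sketch('Executor extra classpath').endswith(
    'hadoop-openstack-2.8.2.jar')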
sahara-plugin-vanilla-10.0.0/sahara_plugin_vanilla/tests/unit/plugins/vanilla/v2_8_2/test_edp_engine.py

# Copyright (c) 2015 Red Hat, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from unittest import mock

from sahara.plugins import edp

from sahara_plugin_vanilla.plugins.vanilla.v2_8_2 import edp_engine
# Note: the base test class lives in this plugin's own tree, matching the
# v2_7_1 and v2_7_5 variants of this file (the archive read
# "sahara.tests.unit", which is not importable from the split-out plugin).
from sahara_plugin_vanilla.tests.unit import base as sahara_base


class Vanilla2ConfigHintsTest(sahara_base.SaharaTestCase):
    @mock.patch(
        'sahara_plugin_vanilla.plugins.vanilla.confighints_helper.'
        'get_possible_hive_config_from',
        return_value={})
    def test_get_possible_job_config_hive(
            self, get_possible_hive_config_from):
        expected_config = {'job_config': {}}
        actual_config = edp_engine.EdpOozieEngine.get_possible_job_config(
            edp.JOB_TYPE_HIVE)
        get_possible_hive_config_from.assert_called_once_with(
            'plugins/vanilla/v2_8_2/resources/hive-default.xml')
        self.assertEqual(expected_config, actual_config)

    @mock.patch('sahara_plugin_vanilla.plugins.vanilla.hadoop2.edp_engine.'
                'EdpOozieEngine')
    def test_get_possible_job_config_java(self, BaseVanillaEdpOozieEngine):
        expected_config = {'job_config': {}}
        BaseVanillaEdpOozieEngine.get_possible_job_config.return_value = (
            expected_config)
        actual_config = edp_engine.EdpOozieEngine.get_possible_job_config(
            edp.JOB_TYPE_JAVA)
        (BaseVanillaEdpOozieEngine.get_possible_job_config.
            assert_called_once_with(edp.JOB_TYPE_JAVA))
        self.assertEqual(expected_config, actual_config)

    @mock.patch(
        'sahara_plugin_vanilla.plugins.vanilla.confighints_helper.'
        'get_possible_mapreduce_config_from',
        return_value={})
    def test_get_possible_job_config_mapreduce(
            self, get_possible_mapreduce_config_from):
        expected_config = {'job_config': {}}
        actual_config = edp_engine.EdpOozieEngine.get_possible_job_config(
            edp.JOB_TYPE_MAPREDUCE)
        get_possible_mapreduce_config_from.assert_called_once_with(
            'plugins/vanilla/v2_8_2/resources/mapred-default.xml')
        self.assertEqual(expected_config, actual_config)

    @mock.patch(
        'sahara_plugin_vanilla.plugins.vanilla.confighints_helper.'
        'get_possible_mapreduce_config_from',
        return_value={})
    def test_get_possible_job_config_mapreduce_streaming(
            self, get_possible_mapreduce_config_from):
        expected_config = {'job_config': {}}
        actual_config = edp_engine.EdpOozieEngine.get_possible_job_config(
            edp.JOB_TYPE_MAPREDUCE_STREAMING)
        get_possible_mapreduce_config_from.assert_called_once_with(
            'plugins/vanilla/v2_8_2/resources/mapred-default.xml')
        self.assertEqual(expected_config, actual_config)

    @mock.patch(
        'sahara_plugin_vanilla.plugins.vanilla.confighints_helper.'
        'get_possible_pig_config_from',
        return_value={})
    def test_get_possible_job_config_pig(
            self, get_possible_pig_config_from):
        expected_config = {'job_config': {}}
        actual_config = edp_engine.EdpOozieEngine.get_possible_job_config(
            edp.JOB_TYPE_PIG)
        get_possible_pig_config_from.assert_called_once_with(
            'plugins/vanilla/v2_8_2/resources/mapred-default.xml')
        self.assertEqual(expected_config, actual_config)

    @mock.patch('sahara_plugin_vanilla.plugins.vanilla.hadoop2.edp_engine.'
                'EdpOozieEngine')
    def test_get_possible_job_config_shell(self, BaseVanillaEdpOozieEngine):
        expected_config = {'job_config': {}}
        BaseVanillaEdpOozieEngine.get_possible_job_config.return_value = (
            expected_config)
        actual_config = edp_engine.EdpOozieEngine.get_possible_job_config(
            edp.JOB_TYPE_SHELL)
        (BaseVanillaEdpOozieEngine.get_possible_job_config.
            assert_called_once_with(edp.JOB_TYPE_SHELL))
        self.assertEqual(expected_config, actual_config)
sahara-plugin-vanilla-10.0.0/sahara_plugin_vanilla/tests/unit/plugins/vanilla/v2_8_2/test_versionhandler.py

# Copyright (c) 2017 EasyStack Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from unittest import mock

import six
import testtools

from sahara.plugins import exceptions as ex
from sahara.plugins import resource as r
from sahara.plugins import testutils

from sahara_plugin_vanilla.plugins.vanilla.v2_8_2.edp_engine import \
    EdpOozieEngine
from sahara_plugin_vanilla.plugins.vanilla.v2_8_2.edp_engine import \
    EdpSparkEngine
from sahara_plugin_vanilla.plugins.vanilla.v2_8_2 import versionhandler as v_h
from sahara_plugin_vanilla.tests.unit import base


class TestConfig(object):
    def __init__(self, applicable_target, name, default_value):
        self.applicable_target = applicable_target
        self.name = name
        self.default_value = default_value


class VersionHandlerTest(base.SaharaTestCase):

    plugin_path = 'sahara_plugin_vanilla.plugins.vanilla.'
    plugin_hadoop2_path = 'sahara_plugin_vanilla.plugins.vanilla.hadoop2.'

    def setUp(self):
        super(VersionHandlerTest, self).setUp()
        self.cluster = mock.Mock()
        self.vh = v_h.VersionHandler()

    def test_get_plugin_configs(self):
        self.vh.pctx['all_confs'] = 'haha'
        conf = self.vh.get_plugin_configs()
        self.assertEqual(conf, 'haha')

    def test_get_node_processes(self):
        processes = self.vh.get_node_processes()
        for k, v in six.iteritems(processes):
            for p in v:
                self.assertIsInstance(p, str)

    @mock.patch(plugin_hadoop2_path + 'validation.validate_cluster_creating')
    def test_validate(self, validate_create):
        self.vh.pctx = mock.Mock()
        self.vh.validate(self.cluster)
        validate_create.assert_called_once_with(self.vh.pctx, self.cluster)

    @mock.patch(plugin_path +
                'v2_8_2.versionhandler.VersionHandler.update_infra')
    def test_update_infra(self, update_infra):
        self.vh.update_infra(self.cluster)
        update_infra.assert_called_once_with(self.cluster)

    @mock.patch(plugin_hadoop2_path + 'config.configure_cluster')
    def test_configure_cluster(self, configure_cluster):
        self.vh.pctx = mock.Mock()
        self.vh.configure_cluster(self.cluster)
        configure_cluster.assert_called_once_with(self.vh.pctx, self.cluster)

    @mock.patch(plugin_path + 'v2_8_2.versionhandler.run')
    @mock.patch(plugin_path + 'v2_8_2.versionhandler.starting_scripts')
    @mock.patch('sahara.plugins.swift_helper.install_ssl_certs')
    @mock.patch(plugin_hadoop2_path + 'keypairs.provision_keypairs')
    @mock.patch('sahara.plugins.utils.get_instances')
    @mock.patch('sahara.plugins.utils.cluster_get_instances')
    def test_start_cluster(self, c_get_instances, u_get_instances,
                           provision_keypairs, install_ssl_certs,
                           s_scripts, run):
        self.vh.pctx = mock.Mock()
        instances = mock.Mock()
        c_get_instances.return_value = instances
        u_get_instances.return_value = instances
        self.vh._set_cluster_info = mock.Mock()
        self.vh.start_cluster(self.cluster)
        provision_keypairs.assert_called_once_with(self.cluster)
        s_scripts.start_namenode.assert_called_once_with(self.cluster)
        s_scripts.start_secondarynamenode.assert_called_once_with(
            self.cluster)
        s_scripts.start_resourcemanager.assert_called_once_with(self.cluster)
        s_scripts.start_historyserver.assert_called_once_with(self.cluster)
        s_scripts.start_oozie.assert_called_once_with(self.vh.pctx,
                                                      self.cluster)
        s_scripts.start_hiveserver.assert_called_once_with(self.vh.pctx,
                                                           self.cluster)
        s_scripts.start_spark.assert_called_once_with(self.cluster)
        run.start_dn_nm_processes.assert_called_once_with(instances)
        run.await_datanodes.assert_called_once_with(self.cluster)
        install_ssl_certs.assert_called_once_with(instances)
        self.vh._set_cluster_info.assert_called_once_with(self.cluster)

    @mock.patch(plugin_hadoop2_path + 'scaling.decommission_nodes')
    def test_decommission_nodes(self, decommission_nodes):
        self.vh.pctx = mock.Mock()
        cluster = mock.Mock()
        instances = mock.Mock()
        self.vh.decommission_nodes(cluster, instances)
        decommission_nodes.assert_called_once_with(self.vh.pctx,
                                                   cluster, instances)

    @mock.patch('sahara.plugins.utils.general.get_by_id')
    @mock.patch(plugin_hadoop2_path +
                'validation.validate_additional_ng_scaling')
    @mock.patch(plugin_hadoop2_path +
                'validation.validate_existing_ng_scaling')
    def test_validate_scaling(self, vls, vla, get_by_id):
        self.vh.pctx['all_confs'] = [TestConfig('HDFS', 'dfs.replication',
                                                -1)]
        ng1 = testutils.make_ng_dict('ng1', '40', ['namenode'], 1)
        ng2 = testutils.make_ng_dict('ng2', '41', ['datanode'], 2)
        ng3 = testutils.make_ng_dict('ng3', '42', ['datanode'], 3)
        additional = [ng2['id'], ng3['id']]
        existing = {ng2['id']: 1}
        cluster = testutils.create_cluster('test-cluster', 'tenant1', 'fake',
                                           '0.1', [ng1, ng2, ng3])
        self.vh.validate_scaling(cluster, existing, additional)
        vla.assert_called_once_with(cluster, additional)
        vls.assert_called_once_with(self.vh.pctx, cluster, existing)

        ng4 = testutils.make_ng_dict('ng4', '43', ['datanode', 'zookeeper'],
                                     3)
        ng5 = testutils.make_ng_dict('ng5', '44', ['datanode', 'zookeeper'],
                                     1)
        existing = {ng4['id']: 2}
        additional = {ng5['id']}
        cluster = testutils.create_cluster('test-cluster', 'tenant1', 'fake',
                                           '0.1', [ng1, ng4])

        with testtools.ExpectedException(ex.ClusterCannotBeScaled):
            self.vh.validate_scaling(cluster, existing, {})

        get_by_id.return_value = r.create_node_group_resource(ng5)

        with testtools.ExpectedException(ex.ClusterCannotBeScaled):
            self.vh.validate_scaling(cluster, {}, additional)

    @mock.patch(plugin_hadoop2_path + 'scaling.scale_cluster')
    @mock.patch(plugin_hadoop2_path + 'keypairs.provision_keypairs')
    def test_scale_cluster(self, provision_keypairs, scale_cluster):
        self.vh.pctx = mock.Mock()
        instances = mock.Mock()
        self.vh.scale_cluster(self.cluster, instances)
        provision_keypairs.assert_called_once_with(self.cluster, instances)
        scale_cluster.assert_called_once_with(self.vh.pctx, self.cluster,
                                              instances)

    @mock.patch("sahara.plugins.conductor.cluster_update")
    @mock.patch("sahara.plugins.context.ctx")
    @mock.patch(plugin_path + 'utils.get_namenode')
    @mock.patch(plugin_path + 'utils.get_resourcemanager')
    @mock.patch(plugin_path + 'utils.get_historyserver')
    @mock.patch(plugin_path + 'utils.get_oozie')
    @mock.patch(plugin_path + 'utils.get_spark_history_server')
    def test_set_cluster_info(self, get_spark_history_server, get_oozie,
                              get_historyserver, get_resourcemanager,
                              get_namenode, ctx, cluster_update):
        get_spark_history_server.return_value.management_ip = '1.2.3.0'
        get_oozie.return_value.get_ip_or_dns_name = mock.Mock(
            return_value='1.2.3.1')
        get_historyserver.return_value.get_ip_or_dns_name = mock.Mock(
            return_value='1.2.3.2')
        get_resourcemanager.return_value.get_ip_or_dns_name = mock.Mock(
            return_value='1.2.3.3')
        get_namenode.return_value.get_ip_or_dns_name = mock.Mock(
            return_value='1.2.3.4')
        get_namenode.return_value.hostname = mock.Mock(
            return_value='testnode')
        self.vh._set_cluster_info(self.cluster)
        info = {
            'YARN': {
                'Web UI': 'http://1.2.3.3:8088',
                'ResourceManager': 'http://1.2.3.3:8032'
            },
            'HDFS': {
                'Web UI': 'http://1.2.3.4:50070',
                'NameNode': 'hdfs://testnode:9000'
            },
            'JobFlow': {
                'Oozie': 'http://1.2.3.1:11000'
            },
            'MapReduce JobHistory Server': {
                'Web UI': 'http://1.2.3.2:19888'
            },
            'Apache Spark': {
                'Spark UI': 'http://1.2.3.0:4040',
                'Spark History Server UI': 'http://1.2.3.0:18080'
            }
        }
        cluster_update.assert_called_once_with(ctx(), self.cluster,
                                               {'info': info})

    @mock.patch("sahara.plugins.edp.get_plugin")
    @mock.patch('sahara.plugins.utils.get_instance')
    @mock.patch('os.path.join')
    def test_get_edp_engine(self, join, get_instance, get_plugin):
        job_type = ''
        ret = self.vh.get_edp_engine(self.cluster, job_type)
        self.assertIsNone(ret)

        job_type = 'Java'
        ret = self.vh.get_edp_engine(self.cluster, job_type)
        self.assertIsInstance(ret, EdpOozieEngine)

        job_type = 'Spark'
        ret = self.vh.get_edp_engine(self.cluster, job_type)
        self.assertIsInstance(ret, EdpSparkEngine)

    def test_get_edp_job_types(self):
        job_types = ['Hive', 'Java', 'MapReduce', 'MapReduce.Streaming',
                     'Pig', 'Shell', 'Spark']
        self.assertEqual(self.vh.get_edp_job_types(), job_types)

    def test_get_edp_config_hints(self):
        job_type = 'Java'
        ret = {'job_config': {'args': [], 'configs': []}}
        self.assertEqual(self.vh.get_edp_config_hints(job_type), ret)

    @mock.patch(plugin_hadoop2_path + 'utils.delete_oozie_password')
    @mock.patch(plugin_hadoop2_path + 'keypairs.drop_key')
    def test_on_terminate_cluster(self, delete_oozie_password, drop_key):
        self.vh.on_terminate_cluster(self.cluster)
        delete_oozie_password.assert_called_once_with(self.cluster)
        drop_key.assert_called_once_with(self.cluster)

    @mock.patch(plugin_hadoop2_path + 'config.get_open_ports')
    def test_get_open_ports(self, get_open_ports):
        node_group = mock.Mock()
        self.vh.get_open_ports(node_group)
        get_open_ports.assert_called_once_with(node_group)

    @mock.patch(plugin_hadoop2_path +
                'recommendations_utils.recommend_configs')
    def test_recommend_configs(self, recommend_configs):
        scaling = mock.Mock()
        configs = mock.Mock()
        self.vh.pctx['all_confs'] = configs
        self.vh.recommend_configs(self.cluster, scaling)
        recommend_configs.assert_called_once_with(self.cluster,
                                                  configs, scaling)

sahara-plugin-vanilla-10.0.0/sahara_plugin_vanilla/utils/__init__.py

sahara-plugin-vanilla-10.0.0/sahara_plugin_vanilla/utils/patches.py

# Copyright (c) 2013 Mirantis Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import eventlet


EVENTLET_MONKEY_PATCH_MODULES = dict(os=True,
                                     select=True,
                                     socket=True,
                                     thread=True,
                                     time=True)


def patch_all():
    """Apply all patches.

    List of patches:

    * eventlet's monkey patch for all cases;
    * minidom's writexml patch for py < 2.7.3 only.
    """
    eventlet_monkey_patch()
    patch_minidom_writexml()


def eventlet_monkey_patch():
    """Apply eventlet's monkey patch.

    This call should be the first call in the application. It's safe to
    call monkey_patch multiple times.
    """
    eventlet.monkey_patch(**EVENTLET_MONKEY_PATCH_MODULES)


def eventlet_import_monkey_patched(module):
    """Returns module monkey patched by eventlet.

    It's needed for some tests, for example, the context test.
    """
    return eventlet.import_patched(module, **EVENTLET_MONKEY_PATCH_MODULES)


def patch_minidom_writexml():
    """Patch for xml.dom.minidom toprettyxml bug with whitespace around text.

    We apply the patch to avoid excess whitespace in generated xml
    configuration files that breaks Hadoop.
    (This patch will be applied for all Python versions < 2.7.3)

    Issue: http://bugs.python.org/issue4147
    Patch: http://hg.python.org/cpython/rev/cb6614e3438b/
    Description: http://ronrothman.com/public/leftbraned/xml-dom-minidom-\
    toprettyxml-and-silly-whitespace/#best-solution
    """
    import sys
    if sys.version_info >= (2, 7, 3):
        return

    import xml.dom.minidom as md

    def element_writexml(self, writer, indent="", addindent="", newl=""):
        # indent = current indentation
        # addindent = indentation to add to higher levels
        # newl = newline string
        writer.write(indent + "<" + self.tagName)

        attrs = self._get_attributes()
        a_names = list(attrs.keys())
        a_names.sort()

        for a_name in a_names:
            writer.write(" %s=\"" % a_name)
            md._write_data(writer, attrs[a_name].value)
            writer.write("\"")
        if self.childNodes:
            writer.write(">")
            if (len(self.childNodes) == 1 and
                    self.childNodes[0].nodeType == md.Node.TEXT_NODE):
                # a single text child is written inline, without the extra
                # indentation that the stock toprettyxml would insert
                self.childNodes[0].writexml(writer, '', '', '')
            else:
                writer.write(newl)
                for node in self.childNodes:
                    node.writexml(writer, indent + addindent, addindent, newl)
                writer.write(indent)
            writer.write("</%s>%s" % (self.tagName, newl))
        else:
            writer.write("/>%s" % (newl))

    md.Element.writexml = element_writexml

    def text_writexml(self, writer, indent="", addindent="", newl=""):
        md._write_data(writer, "%s%s%s" % (indent, self.data, newl))

    md.Text.writexml = text_writexml
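The module above only defines the patch helpers; nothing runs at import time. A minimal sketch of how a consumer might apply them, assuming this package and eventlet are installed (the launcher itself is hypothetical, not part of this repository):

# Hypothetical launcher: apply the patches before importing anything that
# creates sockets or threads, as eventlet_monkey_patch's docstring requires.
from sahara_plugin_vanilla.utils import patches

patches.patch_all()

import socket  # now resolves to the eventlet-patched (green) implementation


def main():
    # socket operations made after patch_all() cooperate with eventlet's
    # event loop instead of blocking the whole process
    sock = socket.socket()
    sock.close()


if __name__ == '__main__':
    main()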
sahara-plugin-vanilla-10.0.0/sahara_plugin_vanilla.egg-info/
sahara-plugin-vanilla-10.0.0/sahara_plugin_vanilla.egg-info/PKG-INFO
Metadata-Version: 1.2
Name: sahara-plugin-vanilla
Version: 10.0.0
Summary: Vanilla Plugin for Sahara Project
Home-page: https://docs.openstack.org/sahara/latest/
Author: OpenStack
Author-email: openstack-discuss@lists.openstack.org
License: Apache Software License
Description: ========================
        Team and repository tags
        ========================

        .. image:: https://governance.openstack.org/tc/badges/sahara.svg
            :target: https://governance.openstack.org/tc/reference/tags/index.html

        .. Change things from this point on

        OpenStack Data Processing ("Sahara") Vanilla Plugin
        ====================================================

        The OpenStack Sahara Vanilla plugin provides users with the option to
        start Vanilla clusters on OpenStack Sahara. Check out the OpenStack
        Sahara documentation to see how to deploy the Vanilla plugin.

        Sahara at wiki.openstack.org: https://wiki.openstack.org/wiki/Sahara
        Storyboard project: https://storyboard.openstack.org/#!/project/openstack/sahara-plugin-vanilla
        Sahara docs site: https://docs.openstack.org/sahara/latest/
        Quickstart guide: https://docs.openstack.org/sahara/latest/user/quickstart.html
        How to participate: https://docs.openstack.org/sahara/latest/contributor/how-to-participate.html
        Source: https://opendev.org/openstack/sahara-plugin-vanilla
        Bugs and feature requests: https://storyboard.openstack.org/#!/project/openstack/sahara-plugin-vanilla
        Release notes: https://docs.openstack.org/releasenotes/sahara-plugin-vanilla/

        License
        -------

        Apache License Version 2.0 http://www.apache.org/licenses/LICENSE-2.0
Platform: UNKNOWN
Classifier: Programming Language :: Python
Classifier: Programming Language :: Python :: Implementation :: CPython
Classifier: Programming Language :: Python :: 3 :: Only
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.8
Classifier: Programming Language :: Python :: 3.9
Classifier: Environment :: OpenStack
Classifier: Intended Audience :: Information Technology
Classifier: Intended Audience :: System Administrators
Classifier: License :: OSI Approved :: Apache Software License
Classifier: Operating System :: POSIX :: Linux
Requires-Python: >=3.8

sahara-plugin-vanilla-10.0.0/sahara_plugin_vanilla.egg-info/SOURCES.txt
.stestr.conf
.zuul.yaml
AUTHORS
CONTRIBUTING.rst
ChangeLog
LICENSE
README.rst
babel.cfg
errors.txt
requirements.txt
setup.cfg
setup.py
test-requirements.txt
tox.ini
doc/requirements.txt
doc/source/conf.py
doc/source/index.rst
doc/source/contributor/contributing.rst
doc/source/contributor/index.rst
doc/source/user/index.rst
doc/source/user/vanilla-plugin.rst
releasenotes/notes/drop-py2-7-345ca486b838f0bb.yaml
releasenotes/source/conf.py
releasenotes/source/index.rst
releasenotes/source/stein.rst
releasenotes/source/train.rst
releasenotes/source/unreleased.rst
releasenotes/source/ussuri.rst
releasenotes/source/victoria.rst
releasenotes/source/wallaby.rst
releasenotes/source/xena.rst
releasenotes/source/yoga.rst
releasenotes/source/_static/.placeholder
releasenotes/source/_templates/.placeholder
releasenotes/source/locale/de/LC_MESSAGES/releasenotes.po
releasenotes/source/locale/en_GB/LC_MESSAGES/releasenotes.po
releasenotes/source/locale/ne/LC_MESSAGES/releasenotes.po
sahara_plugin_vanilla/__init__.py
sahara_plugin_vanilla/i18n.py
sahara_plugin_vanilla.egg-info/PKG-INFO
sahara_plugin_vanilla.egg-info/SOURCES.txt
sahara_plugin_vanilla.egg-info/dependency_links.txt
sahara_plugin_vanilla.egg-info/entry_points.txt
sahara_plugin_vanilla.egg-info/not-zip-safe
sahara_plugin_vanilla.egg-info/pbr.json
sahara_plugin_vanilla.egg-info/requires.txt
sahara_plugin_vanilla.egg-info/top_level.txt
sahara_plugin_vanilla/locale/de/LC_MESSAGES/sahara_plugin_vanilla.po
sahara_plugin_vanilla/locale/en_GB/LC_MESSAGES/sahara_plugin_vanilla.po
sahara_plugin_vanilla/locale/id/LC_MESSAGES/sahara_plugin_vanilla.po
sahara_plugin_vanilla/locale/ne/LC_MESSAGES/sahara_plugin_vanilla.po
sahara_plugin_vanilla/plugins/__init__.py
sahara_plugin_vanilla/plugins/vanilla/__init__.py
sahara_plugin_vanilla/plugins/vanilla/abstractversionhandler.py
sahara_plugin_vanilla/plugins/vanilla/confighints_helper.py
sahara_plugin_vanilla/plugins/vanilla/edp_engine.py
sahara_plugin_vanilla/plugins/vanilla/plugin.py
sahara_plugin_vanilla/plugins/vanilla/utils.py
sahara_plugin_vanilla/plugins/vanilla/versionfactory.py
sahara_plugin_vanilla/plugins/vanilla/hadoop2/__init__.py
sahara_plugin_vanilla/plugins/vanilla/hadoop2/config.py
sahara_plugin_vanilla/plugins/vanilla/hadoop2/config_helper.py
sahara_plugin_vanilla/plugins/vanilla/hadoop2/edp_engine.py
sahara_plugin_vanilla/plugins/vanilla/hadoop2/keypairs.py
sahara_plugin_vanilla/plugins/vanilla/hadoop2/oozie_helper.py
sahara_plugin_vanilla/plugins/vanilla/hadoop2/recommendations_utils.py
sahara_plugin_vanilla/plugins/vanilla/hadoop2/run_scripts.py
sahara_plugin_vanilla/plugins/vanilla/hadoop2/scaling.py
sahara_plugin_vanilla/plugins/vanilla/hadoop2/starting_scripts.py
sahara_plugin_vanilla/plugins/vanilla/hadoop2/utils.py
sahara_plugin_vanilla/plugins/vanilla/hadoop2/validation.py
sahara_plugin_vanilla/plugins/vanilla/hadoop2/resources/create_oozie_db.sql
sahara_plugin_vanilla/plugins/vanilla/hadoop2/resources/post_conf.template
sahara_plugin_vanilla/plugins/vanilla/hadoop2/resources/spark-cleanup.cron
sahara_plugin_vanilla/plugins/vanilla/hadoop2/resources/tmp-cleanup.sh.template
sahara_plugin_vanilla/plugins/vanilla/hadoop2/resources/topology.sh
sahara_plugin_vanilla/plugins/vanilla/hadoop2/resources/zoo_sample.cfg
sahara_plugin_vanilla/plugins/vanilla/v2_7_1/__init__.py
sahara_plugin_vanilla/plugins/vanilla/v2_7_1/config_helper.py
sahara_plugin_vanilla/plugins/vanilla/v2_7_1/edp_engine.py
sahara_plugin_vanilla/plugins/vanilla/v2_7_1/versionhandler.py
sahara_plugin_vanilla/plugins/vanilla/v2_7_1/resources/README.rst
sahara_plugin_vanilla/plugins/vanilla/v2_7_1/resources/core-default.xml
sahara_plugin_vanilla/plugins/vanilla/v2_7_1/resources/create_hive_db.sql
sahara_plugin_vanilla/plugins/vanilla/v2_7_1/resources/hdfs-default.xml
sahara_plugin_vanilla/plugins/vanilla/v2_7_1/resources/hive-default.xml
sahara_plugin_vanilla/plugins/vanilla/v2_7_1/resources/mapred-default.xml
sahara_plugin_vanilla/plugins/vanilla/v2_7_1/resources/oozie-default.xml
sahara_plugin_vanilla/plugins/vanilla/v2_7_1/resources/yarn-default.xml
sahara_plugin_vanilla/plugins/vanilla/v2_7_5/__init__.py
sahara_plugin_vanilla/plugins/vanilla/v2_7_5/config_helper.py
sahara_plugin_vanilla/plugins/vanilla/v2_7_5/edp_engine.py
sahara_plugin_vanilla/plugins/vanilla/v2_7_5/versionhandler.py
sahara_plugin_vanilla/plugins/vanilla/v2_7_5/resources/README.rst
sahara_plugin_vanilla/plugins/vanilla/v2_7_5/resources/core-default.xml
sahara_plugin_vanilla/plugins/vanilla/v2_7_5/resources/create_hive_db.sql
sahara_plugin_vanilla/plugins/vanilla/v2_7_5/resources/hdfs-default.xml
sahara_plugin_vanilla/plugins/vanilla/v2_7_5/resources/hive-default.xml
sahara_plugin_vanilla/plugins/vanilla/v2_7_5/resources/mapred-default.xml
sahara_plugin_vanilla/plugins/vanilla/v2_7_5/resources/oozie-default.xml
sahara_plugin_vanilla/plugins/vanilla/v2_7_5/resources/yarn-default.xml
sahara_plugin_vanilla/plugins/vanilla/v2_8_2/__init__.py
sahara_plugin_vanilla/plugins/vanilla/v2_8_2/config_helper.py
sahara_plugin_vanilla/plugins/vanilla/v2_8_2/edp_engine.py
sahara_plugin_vanilla/plugins/vanilla/v2_8_2/versionhandler.py
sahara_plugin_vanilla/plugins/vanilla/v2_8_2/resources/README.rst
sahara_plugin_vanilla/plugins/vanilla/v2_8_2/resources/core-default.xml
sahara_plugin_vanilla/plugins/vanilla/v2_8_2/resources/create_hive_db.sql
sahara_plugin_vanilla/plugins/vanilla/v2_8_2/resources/hdfs-default.xml
sahara_plugin_vanilla/plugins/vanilla/v2_8_2/resources/hive-default.xml
sahara_plugin_vanilla/plugins/vanilla/v2_8_2/resources/mapred-default.xml
sahara_plugin_vanilla/plugins/vanilla/v2_8_2/resources/oozie-default.xml
sahara_plugin_vanilla/plugins/vanilla/v2_8_2/resources/yarn-default.xml
sahara_plugin_vanilla/tests/__init__.py
sahara_plugin_vanilla/tests/unit/__init__.py
sahara_plugin_vanilla/tests/unit/base.py
sahara_plugin_vanilla/tests/unit/plugins/__init__.py
sahara_plugin_vanilla/tests/unit/plugins/vanilla/__init__.py
sahara_plugin_vanilla/tests/unit/plugins/vanilla/test_confighints_helper.py
sahara_plugin_vanilla/tests/unit/plugins/vanilla/test_utils.py
sahara_plugin_vanilla/tests/unit/plugins/vanilla/hadoop2/__init__.py
sahara_plugin_vanilla/tests/unit/plugins/vanilla/hadoop2/test_config_helper.py
sahara_plugin_vanilla/tests/unit/plugins/vanilla/hadoop2/test_configs.py
sahara_plugin_vanilla/tests/unit/plugins/vanilla/hadoop2/test_edp_engine.py
sahara_plugin_vanilla/tests/unit/plugins/vanilla/hadoop2/test_oozie_helper.py
sahara_plugin_vanilla/tests/unit/plugins/vanilla/hadoop2/test_plugin.py
sahara_plugin_vanilla/tests/unit/plugins/vanilla/hadoop2/test_recommendation_utils.py
sahara_plugin_vanilla/tests/unit/plugins/vanilla/hadoop2/test_run_scripts.py
sahara_plugin_vanilla/tests/unit/plugins/vanilla/hadoop2/test_scaling.py
sahara_plugin_vanilla/tests/unit/plugins/vanilla/hadoop2/test_starting_scripts.py
sahara_plugin_vanilla/tests/unit/plugins/vanilla/hadoop2/test_utils.py
sahara_plugin_vanilla/tests/unit/plugins/vanilla/hadoop2/test_validation.py
sahara_plugin_vanilla/tests/unit/plugins/vanilla/hadoop2/resources/dfs-report.txt
sahara_plugin_vanilla/tests/unit/plugins/vanilla/hadoop2/resources/yarn-report.txt
sahara_plugin_vanilla/tests/unit/plugins/vanilla/v2_7_1/__init__.py
sahara_plugin_vanilla/tests/unit/plugins/vanilla/v2_7_1/test_config_helper.py
sahara_plugin_vanilla/tests/unit/plugins/vanilla/v2_7_1/test_edp_engine.py
sahara_plugin_vanilla/tests/unit/plugins/vanilla/v2_7_1/test_versionhandler.py
sahara_plugin_vanilla/tests/unit/plugins/vanilla/v2_7_5/__init__.py
sahara_plugin_vanilla/tests/unit/plugins/vanilla/v2_7_5/test_config_helper.py
sahara_plugin_vanilla/tests/unit/plugins/vanilla/v2_7_5/test_edp_engine.py
sahara_plugin_vanilla/tests/unit/plugins/vanilla/v2_7_5/test_versionhandler.py
sahara_plugin_vanilla/tests/unit/plugins/vanilla/v2_8_2/__init__.py
sahara_plugin_vanilla/tests/unit/plugins/vanilla/v2_8_2/test_config_helper.py
sahara_plugin_vanilla/tests/unit/plugins/vanilla/v2_8_2/test_edp_engine.py
sahara_plugin_vanilla/tests/unit/plugins/vanilla/v2_8_2/test_versionhandler.py
sahara_plugin_vanilla/utils/__init__.py
sahara_plugin_vanilla/utils/patches.py

sahara-plugin-vanilla-10.0.0/sahara_plugin_vanilla.egg-info/dependency_links.txt

sahara-plugin-vanilla-10.0.0/sahara_plugin_vanilla.egg-info/entry_points.txt
[sahara.cluster.plugins]
vanilla = sahara_plugin_vanilla.plugins.vanilla.plugin:VanillaProvider

sahara-plugin-vanilla-10.0.0/sahara_plugin_vanilla.egg-info/not-zip-safe
sahara-plugin-vanilla-10.0.0/sahara_plugin_vanilla.egg-info/pbr.json
{"git_version": "d93365f", "is_release": true}

sahara-plugin-vanilla-10.0.0/sahara_plugin_vanilla.egg-info/requires.txt
Babel!=2.4.0,>=2.3.4
eventlet>=0.26.0
oslo.i18n>=3.15.3
oslo.log>=3.36.0
oslo.serialization!=2.19.1,>=2.18.0
oslo.utils>=3.33.0
pbr!=2.1.0,>=2.0.0
requests>=2.14.2
sahara>=10.0.0.0b1
six>=1.10.0

sahara-plugin-vanilla-10.0.0/sahara_plugin_vanilla.egg-info/top_level.txt
sahara_plugin_vanilla

sahara-plugin-vanilla-10.0.0/setup.cfg
[metadata]
name = sahara-plugin-vanilla
summary = Vanilla Plugin for Sahara Project
description-file = README.rst
license = Apache Software License
python-requires = >=3.8
classifiers =
    Programming Language :: Python
    Programming Language :: Python :: Implementation :: CPython
    Programming Language :: Python :: 3 :: Only
    Programming Language :: Python :: 3
    Programming Language :: Python :: 3.8
    Programming Language :: Python :: 3.9
    Environment :: OpenStack
    Intended Audience :: Information Technology
    Intended Audience :: System Administrators
    License :: OSI Approved :: Apache Software License
    Operating System :: POSIX :: Linux
author = OpenStack
author-email = openstack-discuss@lists.openstack.org
home-page = https://docs.openstack.org/sahara/latest/

[files]
packages =
    sahara_plugin_vanilla

[entry_points]
sahara.cluster.plugins =
    vanilla = sahara_plugin_vanilla.plugins.vanilla.plugin:VanillaProvider

[compile_catalog]
directory = sahara_plugin_vanilla/locale
domain = sahara_plugin_vanilla

[update_catalog]
domain = sahara_plugin_vanilla
output_dir = sahara_plugin_vanilla/locale
input_file = sahara_plugin_vanilla/locale/sahara_plugin_vanilla.pot

[extract_messages]
keywords = _ gettext ngettext l_ lazy_gettext
mapping_file = babel.cfg
output_file = sahara_plugin_vanilla/locale/sahara_plugin_vanilla.pot

[egg_info]
tag_build =
tag_date = 0
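The [entry_points] section above is what makes the plugin visible to Sahara: the sahara.cluster.plugins group maps the name "vanilla" to the VanillaProvider class. Purely as an illustration (this is not code from this repository), a minimal sketch of resolving such an entry point with stevedore, the library OpenStack projects commonly use for plugin loading, assuming stevedore and this package are installed:

# Illustrative only: resolve the 'vanilla' plugin from the entry-point
# namespace; Sahara's own loading code may differ in detail.
from stevedore import driver

mgr = driver.DriverManager(
    namespace='sahara.cluster.plugins',  # group from entry_points.txt
    name='vanilla',                      # entry-point name
    invoke_on_load=False)                # keep the class, do not instantiate

vanilla_provider_cls = mgr.driver
print(vanilla_provider_cls)  # the VanillaProvider class object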
sahara-plugin-vanilla-10.0.0/setup.py
# Copyright (c) 2013 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import setuptools

setuptools.setup(
    setup_requires=['pbr>=2.0.0'],
    pbr=True)

sahara-plugin-vanilla-10.0.0/test-requirements.txt
# The order of packages is significant, because pip processes them in the
# order of appearance. Changing the order has an impact on the overall
# integration process, which may cause wedges in the gate later.
hacking>=3.0.1,<3.1.0 # Apache-2.0
bandit>=1.1.0 # Apache-2.0
bashate>=0.5.1 # Apache-2.0
coverage!=4.4,>=4.0 # Apache-2.0
doc8>=0.6.0 # Apache-2.0
fixtures>=3.0.0 # Apache-2.0/BSD
oslotest>=3.2.0 # Apache-2.0
stestr>=1.0.0 # Apache-2.0
pylint==1.4.5 # GPLv2
testscenarios>=0.4 # Apache-2.0/BSD
testtools>=2.4.0 # MIT

sahara-plugin-vanilla-10.0.0/tox.ini
[tox]
envlist = py38,pep8
minversion = 3.1.1
skipsdist = True
# this allows tox to infer the base python from the environment name
# and override any basepython configured in this file
ignore_basepython_conflict = true

[testenv]
basepython = python3
usedevelop = True
install_command = pip install {opts} {packages}
setenv =
    VIRTUAL_ENV={envdir}
    DISCOVER_DIRECTORY=sahara_plugin_vanilla/tests/unit
deps =
    -c{env:UPPER_CONSTRAINTS_FILE:https://releases.openstack.org/constraints/upper/master}
    -r{toxinidir}/requirements.txt
    -r{toxinidir}/test-requirements.txt
commands = stestr run {posargs}
passenv = http_proxy HTTP_PROXY https_proxy HTTPS_PROXY no_proxy NO_PROXY

[testenv:debug-py36]
basepython = python3.6
commands = oslo_debug_helper -t sahara_plugin_vanilla/tests/unit {posargs}

[testenv:debug-py37]
basepython = python3.7
commands = oslo_debug_helper -t sahara_plugin_vanilla/tests/unit {posargs}

[testenv:pep8]
deps =
    -c{env:UPPER_CONSTRAINTS_FILE:https://releases.openstack.org/constraints/upper/master}
    -r{toxinidir}/requirements.txt
    -r{toxinidir}/test-requirements.txt
    -r{toxinidir}/doc/requirements.txt
commands =
    flake8 {posargs}
    doc8 doc/source
    # Run bashate checks
    bash -c "find sahara_plugin_vanilla -iname '*.sh' -print0 | xargs -0 bashate -v"

[testenv:venv]
commands = {posargs}

[testenv:docs]
deps =
    -c{env:UPPER_CONSTRAINTS_FILE:https://releases.openstack.org/constraints/upper/master}
    -r{toxinidir}/doc/requirements.txt
commands =
    rm -rf doc/build/html
    sphinx-build -W -b html doc/source doc/build/html
whitelist_externals =
    rm

[testenv:pdf-docs]
deps = {[testenv:docs]deps}
commands =
    rm -rf doc/build/pdf
    sphinx-build -W -b latex doc/source doc/build/pdf
    make -C doc/build/pdf
whitelist_externals =
    make
    rm

[testenv:releasenotes]
deps =
    -c{env:UPPER_CONSTRAINTS_FILE:https://releases.openstack.org/constraints/upper/master}
    -r{toxinidir}/doc/requirements.txt
commands =
    rm -rf releasenotes/build releasenotes/html
    sphinx-build -a -E -W -d releasenotes/build/doctrees -b html releasenotes/source releasenotes/build/html
whitelist_externals = rm

[testenv:debug]
# It runs tests from the specified dir (default is sahara_plugin_vanilla/tests)
# in interactive mode, so you could use pdb for test debugging.
# Example usage: tox -e debug -- -t sahara_plugin_vanilla/tests/unit some.test.path
# https://docs.openstack.org/oslotest/latest/features.html#debugging-with-oslo-debug-helper
commands = oslo_debug_helper -t sahara_plugin_vanilla/tests/unit {posargs}

[flake8]
show-source = true
builtins = _
exclude=.venv,.git,.tox,dist,doc,*lib/python*,*egg,tools
# [H904] Delay string interpolations at logging calls
# [H106] Don't put vim configuration in source files
# [H203] Use assertIs(Not)None to check for None
# [H204] Use assert(Not)Equal to check for equality
# [H205] Use assert(Greater|Less)(Equal) for comparison
enable-extensions=H904,H106,H203,H204,H205
# [E123] Closing bracket does not match indentation of opening bracket's line
# [E226] Missing whitespace around arithmetic operator
# [E402] Module level import not at top of file
# [E731] Do not assign a lambda expression, use a def
# [W503] Line break occurred before a binary operator
# [W504] Line break after binary operator
ignore=E123,E226,E402,E731,W503,W504
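To make one of the enabled lint checks concrete, here is an illustrative snippet (not from this repository) of the logging pattern that H904 enforces:

import logging

LOG = logging.getLogger(__name__)


def start_node(node):
    # H904 violation: the message is interpolated eagerly, even when DEBUG
    # logging is disabled
    LOG.debug("starting datanode on %s" % node)

    # preferred form: the logging framework interpolates lazily, only if
    # the record is actually emitted
    LOG.debug("starting datanode on %s", node)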