cloudkitty-21.0.0/.stestr.conf

[DEFAULT]
test_path=./cloudkitty/tests
top_dir=./
group_regex=gabbi\.(suitemaker|driver)\.(test_[^_]+_[^_]+)

cloudkitty-21.0.0/.zuul.yaml

- job:
    name: base-cloudkitty-tempest-job
    parent: devstack-tempest
    description: |
      Job testing cloudkitty installation and running tempest tests
    required-projects: &base_required_projects
      - name: openstack/cloudkitty
      - name: openstack/cloudkitty-tempest-plugin
      - name: openstack/python-cloudkittyclient
    roles: &base_roles
      - zuul: openstack-infra/devstack
    timeout: 5400
    irrelevant-files: &base_irrelevant_files
      - ^.*\.rst$
      - ^doc/.*$
      - ^releasenotes/.*$
    vars: &base_vars
      devstack_plugins:
        cloudkitty: https://opendev.org/openstack/cloudkitty
        cloudkitty-tempest-plugin: https://opendev.org/openstack/cloudkitty-tempest-plugin
      devstack_services:
        ck-api: true
        ck-proc: true
        horizon: false
        tempest: true
      tempest_concurrency: 1
      tempest_test_regex: cloudkitty_tempest_plugin.*
      tox_envlist: all
      devstack_localrc:
        CLOUDKITTY_FETCHER: keystone
        USE_PYTHON3: True
        TEMPEST_PLUGINS: /opt/stack/cloudkitty-tempest-plugin

- job:
    name: cloudkitty-grenade-job
    parent: grenade
    description: |
      Grenade job to test release upgrades
    required-projects:
      - opendev.org/openstack/grenade
      - opendev.org/openstack/cloudkitty
      - opendev.org/openstack/cloudkitty-tempest-plugin
      - opendev.org/openstack/python-cloudkittyclient
    irrelevant-files: *base_irrelevant_files
    vars:
      devstack_plugins:
        cloudkitty: https://opendev.org/openstack/cloudkitty.git
        cloudkitty-tempest-plugin: https://opendev.org/openstack/cloudkitty-tempest-plugin.git
      devstack_services:
        ck-api: true
        ck-proc: true
      tempest_concurrency: 1
      tempest_plugins:
        - cloudkitty-tempest-plugin
      tempest_test_regex: cloudkitty_tempest_plugin.*
      tox_envlist: all
      grenade_devstack_localrc:
        shared:
          CLOUDKITTY_USE_MOD_WSGI: false
          CLOUDKITTY_FETCHER: keystone

- job:
    name: base-cloudkitty-v1-api-tempest-job
    parent: base-cloudkitty-tempest-job
    description: |
      Job running tempest tests on devstack with the v1 API only
      and the v1 storage driver
    vars:
      tempest_test_regex: cloudkitty_tempest_plugin.tests.api.v1.*

- job:
    name: base-cloudkitty-v2-api-tempest-job
    parent: base-cloudkitty-tempest-job
    description: |
      Job running tempest tests on devstack with the v2 API
      and a v2 storage driver
    vars:
      tempest_test_regex: cloudkitty_tempest_plugin.*

- job:
    name: cloudkitty-tempest-full-ipv6-only
    parent: devstack-tempest-ipv6
    description: |
      Job testing cloudkitty installation on devstack on IPv6
      and running tempest tests
    required-projects: *base_required_projects
    roles: *base_roles
    timeout: 5400
    irrelevant-files: *base_irrelevant_files
    vars: *base_vars

- job:
    name: cloudkitty-tempest-full-v1-storage-sqlalchemy
    parent: base-cloudkitty-v1-api-tempest-job
    description: |
      Job testing cloudkitty installation on devstack with python 3
      and the SQLAlchemy v1 storage driver and running tempest tests
    vars:
      devstack_localrc:
        CLOUDKITTY_STORAGE_BACKEND: sqlalchemy
        CLOUDKITTY_STORAGE_VERSION: 1
- job:
    name: cloudkitty-tempest-full-v2-storage-influxdb
    parent: base-cloudkitty-v2-api-tempest-job
    description: |
      Job testing cloudkitty installation on devstack with python 3,
      InfluxDB v1 and the InfluxDB v2 storage driver and running tempest tests
    vars:
      devstack_localrc:
        CLOUDKITTY_STORAGE_BACKEND: influxdb
        CLOUDKITTY_STORAGE_VERSION: 2
        CLOUDKITTY_INFLUX_VERSION: 1

- job:
    name: cloudkitty-tempest-full-v2-storage-influxdb-v2
    parent: base-cloudkitty-v2-api-tempest-job
    description: |
      Job testing cloudkitty installation on devstack with python 3,
      InfluxDB v2 and the InfluxDB v2 storage driver and running tempest tests
    vars:
      devstack_localrc:
        CLOUDKITTY_STORAGE_BACKEND: influxdb
        CLOUDKITTY_STORAGE_VERSION: 2
        CLOUDKITTY_INFLUX_VERSION: 2

- job:
    name: cloudkitty-tempest-full-v2-storage-elasticsearch
    parent: base-cloudkitty-v2-api-tempest-job
    description: |
      Job testing cloudkitty installation on devstack with python 3
      and the Elasticsearch v2 storage driver and running tempest tests
    vars:
      devstack_localrc:
        CLOUDKITTY_STORAGE_BACKEND: elasticsearch
        CLOUDKITTY_STORAGE_VERSION: 2

- job:
    name: cloudkitty-tempest-full-v2-storage-opensearch
    parent: base-cloudkitty-v2-api-tempest-job
    description: |
      Job testing cloudkitty installation on devstack with python 3
      and the OpenSearch v2 storage driver and running tempest tests
    vars:
      devstack_localrc:
        CLOUDKITTY_STORAGE_BACKEND: opensearch
        CLOUDKITTY_STORAGE_VERSION: 2

- job:
    name: cloudkitty-tox-bandit
    parent: openstack-tox
    timeout: 2400
    vars:
      tox_envlist: bandit
    required-projects:
      - openstack/requirements
    irrelevant-files:
      - ^.*\.rst$
      - ^.*\.txt$
      - ^api-ref/.*$
      - ^apidocs/.*$
      - ^contrib/.*$
      - ^doc/.*$
      - ^etc/.*$
      - ^releasenotes/.*$
      - ^setup.cfg$
      - ^tools/.*$
      - ^cloudkitty/hacking/.*$
      - ^cloudkitty/tests/scenario/.*$
      - ^cloudkitty/tests/unittests/.*$

- project:
    queue: cloudkitty
    templates:
      - check-requirements
      - openstack-cover-jobs
      - openstack-python3-jobs
      - publish-openstack-docs-pti
      - release-notes-jobs-python3
    check:
      jobs:
        - cloudkitty-tempest-full-v2-storage-influxdb
        - cloudkitty-tempest-full-v2-storage-influxdb-v2
        - cloudkitty-tempest-full-v2-storage-elasticsearch
        - cloudkitty-tempest-full-v2-storage-opensearch
        - cloudkitty-tempest-full-v1-storage-sqlalchemy
        - cloudkitty-tempest-full-ipv6-only
        - cloudkitty-tox-bandit:
            voting: false
        - cloudkitty-grenade-job
    gate:
      jobs:
        - cloudkitty-tempest-full-v2-storage-influxdb
        - cloudkitty-tempest-full-v2-storage-influxdb-v2
        - cloudkitty-tempest-full-v2-storage-elasticsearch
        - cloudkitty-tempest-full-v2-storage-opensearch
        - cloudkitty-tempest-full-v1-storage-sqlalchemy
        - cloudkitty-tempest-full-ipv6-only
        - cloudkitty-grenade-job

cloudkitty-21.0.0/AUTHORS

Aaron-DH <344677472@qq.com> Aaron-DH Abraham Arce Adam Adam Aquesbi Andreas Jaeger Arundhati Surpur Bertrand Lallau Cedric Brandily Chaozhe.Chen Chris Dent Christian Berendt Christian Berendt Christophe Sauthier Claudiu Belu Dao Cong Tien David Höppner <0xffea@gmail.com> Dawud Deepak Doug Hellmann Endre Karlson Flavio Percoco François Magimel François Magimel Gauvain Pocentek Ghanshyam Mann Guillaume Espanel Hervé Beraud Huachao Mao
James E. Blair Jeremy Liu Jeremy Stanley Jiang Qin Joel Capitao Jonathan Herlin Justin Ferrieu Lars Wiegman Lei Zhang Longgeek Luka Peschke Luka Peschke Luong Anh Tuan MC Mariusz Karpiarz Mariusz Karpiarz Martin CAMEY Matt Crees Max Gautier Maxime Cottret Michael Krotscheck Michael Rice Michael Sambol Michal Arbet Ngo Quoc Cuong Nguyen Hung Phuong Nguyen Van Trung Ning Yao Olivier Chaze Ondřej Nový OpenStack Release Bot Pedro Henrique Pierre Riteau Quentin Anglade Rafael Weingärtner Robin Naundorf Ronald Bradford Sam Morrison Sean McGinnis Seunghun Lee Shilla Saebi Steve Kowalik Stéphane Albert Stéphane Albert Swapnil Kulkarni (coolsvap) Takashi Kajinami Tatsuro Makita Thomas Goirand Tobias Urdin Van Hung Pham Venkateswarlu Pallamala Vu Cong Tuan XieYingYun Yu Zhiguo Zachary Sais adriant aimbot31 avnish baiwenteng bhujay caoyuan chenghuiyu davidzhubo deepakmourya elnino09 fpxie gengchc2 ghanshyam guotao kangyufei liangcui lioplhp liu-sheng liujiong liyingjun liyou01 lvdongbing melissaml pawnesh.kumar pedro pengyuesheng qin.jiang qinchunhua rajat29 ricolin ritesh.arya scolinas shangxiaobj shashi.kant shubhendu songwenping sun cheng tianmaofu venkatamahesh wangqi wangzihao wu.chunyang wu.shiming wuchunyang xiangjun Li xiangjun li xuanyandong yuanrunsen zhang.lei zhangchun zhangguoqing zhangyanxian zhouxinyong zhu.boxiang zhulingjie

cloudkitty-21.0.0/CONTRIBUTING.rst

The source repository for this project can be found at:

https://opendev.org/openstack/cloudkitty

Pull requests submitted through GitHub are not monitored.

To start contributing to OpenStack, follow the steps in the contribution guide
to set up and use Gerrit:

https://docs.openstack.org/contributors/code-and-documentation/quick-start.html

Bugs should be filed on Storyboard:

https://storyboard.openstack.org/#!/project/890

For more specific information about contributing to this repository, see the
Cloudkitty contributor guide:

https://docs.openstack.org/cloudkitty/latest/contributor/contributing.html

cloudkitty-21.0.0/ChangeLog

CHANGES ======= 21.0.0 ------ * Bump hacking minimum version to 6.1.0 * Fix v1 summary/total with ES/OS storage backend * CI: Remove grenade branch variables * reno: Update master for unmaintained/zed * Fix API report requests when using opensearch * Remove get\_state function and its references * Add rating modules PUT endpoint to v2 API * Remove remnants of RTD documentation * Update README content * Update master for stable/2024.1 * Update example pyscript at documentation 20.0.0 ------ * Set Elastic/OpenSearch jobs as voting in Zuul * Add grenade tests * reno: Update master for unmaintained/xena * reno: Update master for unmaintained/wallaby * reno: Update master for unmaintained/victoria * Replace usage of LegacyEngineFacade * Add support to InfluxDB v2 as storage backend * Add requests to requirements * Removal of Monasca fetcher and collector * Fix devstack runprocess for cloudkitty api * reno: Update master for unmaintained/yoga * Update python classifier in setup.cfg * Add description option to a rating metric definition * Add OpenSearch as a v2 storage backend * Add groupby options by different timeframes * Fix retrieval of
reprocessing tasks * Optimize CloudKitty reprocessing process * Patch for \`use\_all\_resource\_revisions\` option * Use correct metadata for metrics gathered from gnocchi * Remove the manual patch for WSME * Fix a concurrency issue when locking reprocessing tasks * Clean up release note * Remove cloudkitty.conf.sample from docs * Update master for stable/2023.2 19.0.0 ------ * Fix docs jobs in the CI that were broken due to Sphinx upgrade * Use DevStack VENV path * Fix random unit test failures * Create concepts definition page for CloudKitty * Remove \`state\` field from API * Improve reprocessing task documentation * Optimize Gnocchi fetcher processing time * Fix formatting in release note * Add detail about reprocessing all scope IDs in reprocessing API * Optimizing SQL queries that filter on a time range * Update master for stable/2023.1 * Fix spacing in log message * Allows multiple rating types for same metric for gnocchi 18.0.0 ------ * Add missing "." for api-ref * Optimize Gnocchi fetcher * Use new get\_rpc\_client API from oslo.messaging * Fix PyScripts processing * Make tox.ini tox 4.0.0 compatible * Add missing api-ref envlist to tox * Fix typo in docs command sample * Move API docs to \`api-ref/source\` * CI: deploy OpenSearch 1.x instead of Elasticsearch * Create indexes to allow SQL rewrites and optimizations * Add deprecation notice for Monasca * Announce future deprecation of Elasticsearch * Validates the period compatibility of reprocessing dates * [docs] Install cloudkitty requirements using constraints * Switch to 2023.1 Python3 unit tests and generic template name * Update master for stable/zed * Fix compatibility with oslo.db 12.1.0 17.0.0.0rc1 ----------- * Add MAP mutator * Fix response format 'total' for v2 dataframes API * Replace deprecated assertRaisesRegexp * Allow rating rules that have 12 digits in the integer part of the number * Add API to create scopes * Drop lower-constraints.txt and its testing * [CI] Move queue setting to project level * Fix incorrect use of voluptuous.Coerce * Add rating modules GET endpoints to v2 API * Include /v2 in public routes for keystone auth * Add Python3 zed unit tests * Update master for stable/yoga 16.0.0 ------ * Fix v2 API dataframes get policy check * Introduce reprocessing API * Raise CollectError when Prometheus query returns an error * Avoid DivByZero if there is no metrics to collect * Fix description of orchestrator parameters * Update sample configuration file * Support customising Prometheus queries * Updating python testing classifier as per Yoga testing runtime * Fix quote API * Add support for multiple value filters * Add missing whitespace in log message * Introduce "response\_format" for the V2 summary API * Adding two options in fetcher\_keystone * Add active status fields in the storage state table * Add Python3 yoga unit tests * Update master for stable/xena 15.0.0 ------ * Set force\_granularity: 300 in metrics.yml * Custom query Gnocchi collector * Replace deprecated import of ABCs from collections * Changed minversion in tox to 3.18.0 * Fix cloudkitty exception handling from gnocchiclient * Deprecate \`state\` field and propose \`last\_processed\_timestamp\` field * SQLalchemy not creating constraint for Enum on version 1.4.0+ * setup.cfg: Replace dashes with underscores * Fix tests cases broken by flask >=2.0.1 * docs: Update Freenode to OFTC * Fixed PyScripts.start\_script method to return the updated data object * [ussuri][goal] Update contributor documentation * Use py3 as the default runtime for 
tox * Add release note for admin\_or\_owner policy fix * Fix typo in policy rule description * Fix default admin\_or\_owner policy expression * Add Python3 xena unit tests * Update master for stable/wallaby * Add the NOTNUMBOOL mutator 14.0.0 ------ * Use recreate='auto' in storage\_states migration * remove unicode from code * Remove six * Add doc/requirements * [goal] Deprecate the JSON formatted policy file * Update TOX\_CONSTRAINTS\_FILE * Update lower-constraints * Replace deprecated UPPER\_CONSTRAINTS\_FILE variable * Create 'use\_all\_resource\_revisions' for Gnocchi collector * Increase cost fields to 28 digits precision * Update sample configuration and policy files * Add Python3 wallaby unit tests * Update master for stable/victoria * Log the number of tenants loaded by the fetcher 13.0.0.0rc1 ----------- * bump py37 to py38 in tox.ini * Bump hacking min version to 3.0.1 * Replace assertItemsEqual with assertCountEqual * Make Gnocchi connection pool configurable * Add a Monasca fetcher * Add a /healthcheck URL * Fix empty metadata exception in Prometheus collector * Add quantity mutation to Prometheus collector * Custom fields in summary get API * Switch to newer openstackdocstheme and reno versions * Add Python3 victoria unit tests * Add py38 package metadata * Adjust hacking tests to fix py38 support * Replace tz.UTC with dateutil.tz.tzutc() * Update hacking for Python3 * Stop to use the \_\_future\_\_ module * Use unittest.mock instead of third party mock * Update master for stable/ussuri * Fix docs build error due to duplicate references * [devstack] Collector Variable 12.0.0 ------ * [ussuri][goal] Cleanup drop python 2.7 support * Use Keystone v3 API by default in fetcher * Fix processing of "rate:xxx" re-aggregation configuration * Standardize aggregation methods and granularities for Gnocchi collector * Add support for the range\_function field to the Prometheus collector * Add support for the query\_function field to the Prometheus collector * tox: Keeping going with docs * Update oslo.policy and oslo.context usage * Allow reaggregation method to be specified in the gnocchi collector * Switch import's behaviour to absolute imports for json module * Check for duplicates in "groupby" and "metadata" for each metric * Raise an exception in case of an invalid configuration file * Fix 500 errors in the API when request context bears no project\_id * Introduce cloudkitty.utils * Build docs with -W * Load drivers only once in the v2 API * [ussuri][goal] Drop python 2.7 support and testing * Make cloudkitty build reproducible * Replace ModuleNotFoundError with ImportError * Support grouping by timestamp in GET /v2/summary * Add developer documentation about fetcher implementation * Change werkzeug import paths * Support cross-tenant metrics in monasca collector * Fix filtering on dataframe endpoints * Switch to Ussuri jobs * Update .zuul.yaml to run separate tempest tests for each API version * Use "interface" option for monasca endpoint discovery * Update master for stable/train 11.0.0 ------ * Add Elasticsearch v2 storage driver configuration documentation * Allow missing '/' in api.v2.utils.do\_init() * Register keystone auth options with keystoneauth1 helper functions * Remove deprecated config section names * Replace deprecated devstack authtoken function * Add an API migration status table to the roadmap * Change logging for generic Exceptions * Add support for Elasticsearch to devstack plugin * Add support for PDF doc generation * Add an ElasticSearch v2 storage driver * 
Add a v2 API endpoint to retrieve DataFrame objects * Use cloudkitty.tzutils.diff\_seconds() in prometheus collector * Use tzutils functions in gnocchi collector * Replace eventlet with futurist * Add a "force\_granularity" option to gnocchi collector's extra\_args * Convert timestamps to strs before passing them to gnocchiclient * Store collect period in InfluxDB driver datapoints * Remove transformers from the codebase * Pass 'type' as metric\_types in /v2/summary endpoints * Fix validation of begin/end in GET /v2/summary endpoint * Update tempest jobs * Fix malformed InfluxDB query (LIMIT and OFFSET inverted) * Added a roadmap to the developer documentation * Add lower-constraints job * Update bandit version * Add a v2 API endpoint to push DataFrame objects * Fix GET /v1/dataframes endpoint * Fix RST markup in v2 API developer documentation * Add DataPoint/DataFrame objects * Add support for empty or missing "extra\_args" in metrics config file * Introduce validation utils * Updated the documentation's rst markup * Fix StateManager.set\_state() logic * Fix call to storage.delete() in ScopeEndpoint RPC endpoint * Use isoformat() instead of isotime() in InfluxDB storage driver * Define new 'cloudkitty-tempest-full-ipv6-only' job in gate * Make cloudkitty timezone-aware * Remove usage of unix timestamps * Removing author identification in all files 10.0.0 ------ * Do not re-instantiate a StateManager for each request in /v2/scope * Update PUT /v2/scope API reference * Fix v1 hybrid storage python2 python3 string type comparison in memory * Add a v2 API endpoint to reset the state of different scopes * Add an option for the metric project to the Monasca collector * Add a "delete" method to the v2 storage interface * Add quotes to InfluxDb queries * Add Monasca collector auth options to the sample config * Bump openstackdocstheme to 1.30.0 * Add a v2 summary endpoint * Update the "admin/configuration" section of the documentation * Modify the url of upper\_constraints\_file * Blacklist sphinx 2.1.0 (autodoc bug) * Add python 3.7 classifier to setup.cfg * Use openstack-python3-train-jobs for python3 test runtime * Update the "architecture" section of the documentation * Fix sqlalchemy grouping on v1 storage * Fix Prometheus fetcher error * Update sphinx dependency * Use a hash for lock names * Add a v2 API endpoint to get scope state * Fix bandit job * Add 'rate:xxx' to gnocchi collector aggregation methods * Remove "group\_filters" parameter from v2 storage interface * Addition of catch block to avoid hiding Gnocchi schema validation errors * Updated tooz lock name * Updated constraints on storage state * Add Fetcher documentation * Fix section name in config file generation * Add a base resource for v2 API endpoints * Add i18n support for error message * Replace git.openstack.org URLs with opendev.org URLs * OpenDev Migration Patch * Retrieve metrics in eventlet greenthreads * Fix rounding in v2 storage unit tests * Add missing import to cloudkitty/common/config * Implement Prometheus fetcher * Fix InfluxDB storage's "\_point\_to\_dataframe\_entry" method * Make cloudkitty-processor run several workers * Fix requirements.txt * Dropping the py35 testing * Update tox to 2.0 * Update the default policy rule for /v1/storage/dataframes * Add storage backend documentation * Bootstrap the v2 API * [devstack] Setting [collect]/wait\_periods to 0 by default * Fix HashMap field mapping comparison * Add bandit for security static analysis and fix potential security issues * Update the default 
metrics.yml file * Update admin documentation for Prometheus collector * Update master for stable/stein 9.0.0.0rc1 ---------- * Change configuration schema and query process for Prometheus collector * Add HTTPS and auth support to Prometheus collector * Skip a cycle instead of retrying if a metric is not found in gnocchi * Update the devstack plugin * Fix InfluxDB storage total() method pagination * Added details to storage state * Fix gnocchi collector metadata collection * Add a custom JSONEncoder * Update the hashmap module documentation * Moved Rating module introduction to Rating index * Add collector documentation * Add some developer documentation for collectors * Change the default storage backend to v2/influxdb * Add a tempest job running against python3 devstack * Support upgrade checks * Delete v2 gnocchi storage * Remove the fake and meta collectors * Make devstack-tempest job voting * Added help to gnocchi collector options * Remove the fake fetcher * Remove the gnocchi transformer * Add support for cafile option in the gnocchi fetcher * Add an introduction to the documentation * Change log message when loading v2 storage * Made metric conf validation more generic * Changed influxdb v2 storage option 'cacert' to 'cafile' * Change the documentation layout * Add missing help kwarg to oslo option in influx storage * Adding an InfluxDB storage backend * Changed author email * Make tenant\_id a string in hashmap models * Convert legacy zuul jobs to new format * Don't raise NoDataCollected in case of collect error * Don't update the state of a scope when no data was collected * Remove repetitions in hashmap.rst * Don't heartbeat manually in the cloudkitty orchestrator * Remove oslo\_i18n.enable\_lazy() * Enable python3.7 testing jobs * Fixed Mapping Update API returned status 409 with duplicate project * Adds doc8 check to pep8 * Update fetcher options * Support pagination in v2 storage total() * Use global-requirements for requirements * Add scope\_id to orchestrator log * Change configuration section names * Don't quote {posargs} in tox.ini * Handle the scope id as a regular groupby attribute in storage * Use python3 for documentation builds * [Docs] Change the cli to fit the latest client * Add support for cafile option in the gnocchi collector * Hard to read README from github because of wrong format * Use openstack-tox-cover template * Update the formulation on OpenStack release for RDO in the documentation * Import legacy-cloudkitty-dsvm-install * Add a gnocchi fetcher * add python 3.6 unit test job * switch documentation job to new PTI * import zuul job settings from project-config * Update reno for stable/rocky 8.0.0 ----- * Adding a v2 storage backend * Add Prometheus Collector * Bump the openstackdocstheme extension to 1.20 * Add an option to configure monasca endpoint type * Ensure resource\_key is in groupby in Monasca collector * Remove tail\_log in the devstack plugin script * Handle pagination of gnocchi's resource search API * Switch to stestr * Remove help message about ZeroMQ driver * Add multi-region support for gnocchi collector * Add storage configuration option to devstack plugin * Force python2 for documentation generation * Improve metrics configuration * add release notes to README.rst * Collector cleanup * Allow gnocchi collector to be used outside of OpenStack * Switch to oslo\_messaging.ConfFixture.transport\_url * fix tox python3 overrides * remove use of unicode type for python 2/3 compatibility * fix "identity\_uri" in install document * Remove 
collector from storage * Fix 400 on /v1/storage/dataframes * Removes unnecessary utf-8 encoding * Add some unit tests for cloudkitty/api/v1 * Add no\_group parameter to hashmap "list\_\*" calls * Allow Cloudkitty to collect non-OpenStack metrics * Replace usage of 'user' by 'user\_id' * Update Devstack documentation and README * Replace usage of 'tenant' by 'project\_id' * fix a typo in documentation * Update mysql connection in doc * Deprecate collector mappings * Deprecate /v1/report/total endpoint * Remove Ceilometer collector and transformer * Remove gnocchi and gnocchihybrid storage * Support connecting gnocchi on internal endpoint * Update reno for stable/queens 7.0.0 ----- * Secure convert\_unit() function * Add Apache License Content in index.rst * Replaces yaml.load() with yaml.safe\_load() * Use metric dimensions as metadata in monasca collector * Make build reproducible * Fix YAML configuration usage in monasca collector * Pass project\_id in dimensions rather than query parameter * Fix the typo and update the url links in doc files of cloudkitty * Create state entry for tenant\_id in \_dispatch for hybrid storage * Update default configuration for cors * Utils: fix usage of iso8601\_from\_timestamp * fix custom configuration file path * Fix two mistakes of method description * Zuul: Remove project name * Refactor the storage backend * Deprecate the ceilometer collector * Manage metrics units in yaml configuration * Remove use of unsupported TEMPEST\_SERVICES variable * Use CK\_DBUSER for the mysql user in the documentation * Use RABBIT\_USER for the rabbit user in the documentation * Add Ceph Object Storage Usage service * Minor documentation improvements * Update fields of CSV reports * Fix UsageEnd in CSV reports * Don't run non-voting jobs in gate * Policy in code * Use metrics.yml in get\_time\_frame() * Ensure compatibility with all versions of gnocchiclient * Remove deprecated oslo\_messaging.get\_transport * Fix metric name in etc/metrics.yml * Remove deprecated APIs and method in cloudkitty * Split metrology configuration from CK config file * Fix devstack for gnocchi collector * Fix wrong data in jsonfile generated by osrf writer * Add app.wsgi to target of pep8 * Add a tempest plugin gate job * Remove setting of version/release from releasenotes * Fix Devstack plugin * Update and replace http with https for doc links in cloudkitty * Replace launchpad with storyboard in README * Add a collector for Monasca * Add rm to whitelist\_externals * Update devstack/README.rst * Update usage of gabbi in Cloudkitty test * Allow authentification method to be chosen * Update reno for stable/pike 6.0.0 ----- * Update the documentation layout and content * Update URLs in documents according to document migration * Add WSGI support for \`cloudkitty-api' * Update log translation hacking rule * Switch from oslosphinx to openstackdocstheme * Update the documentation part about configuring cloudkitty * Fix devstack: replace deprecated screen functions * Remove usage of parameter enforce\_type * Set access\_policy for messaging's dispatcher * Refactor to use get\_month\_start\_timestamp directly * Updates the installation part of the documentation * Fix the gnocchi and gnocchihybrid storage * Add 'rm -f .testrepository/times.dbm' command in testenv * Add the missing configuration when generating cloudkitty.conf * Remove log translations * Improve the qty digit in sqlalchemy storage * Fixing the gate * Fix gnocchi metric collection * Change the cloudkitty logo to use the official project 
mascot * Assign the resource\_type when search resource by gnocchiclient * Remove unnecessary setUp function in testcase * Improve and simplify the gnocchi collector * Fix incorrect rating for network.floating * [Fix gate]Update test requirement * Delete unused testenv:checkconfig in tox * Fix usage of period configuration value * Fix some mistake and format in docs * Remove support for py34 * Modify the rule name in policy file * Update reno for stable/ocata 5.0.0 ----- * Trivial: fix warnings when build\_sphinx * Modify policy of get total/summary * Rename yaml file to keep consistent format * Refact Orchestrator * Fix JSON serialization error with sqlachemy storage backend * Fix compute service collection with ceilometer * Trivial: add the missing period * Add oslo\_debug\_helper to tox.ini * Fix wrong option names and missed options in cloudkitty.conf.sample * Get total price group by res\_type and tenant\_id * Fix pep8 check error * Improve User Experience by adding an info REST entrypoint * Added release note for cloudkitty * Add a note to indicate the change of default port * Generate the needed configuration files for devstack * [docs] Add rating module introduction * The qty's type should be more precision in storage tables * Replace oslo\_utils.timeutils.isotime * Update the documentation to propose the usage of keystone v3 * Introduce hacking check to Cloudkitty * Bring the begin/end checking before the storage module * Ensure the exist of writer path * devstack: support the gnocchi collector * Don't include openstack/common in flake8 exclude list * Add .idea and vim temp/swap types to .gitignore * Delete the magic number * Use keystone v3 instead of keystone v2 in cloudkitty's devstack plugin * Remove discover from test-requirements * Pin kombu to < 4.0.0 to fix gate error * Add wrapper for decimal.Decimal objects in json.dumps() * Replace six.iteritems() with .items() * Show team and repo badges on README * Add PyScript module documentation * Fix devstack plugin compatibility * Add Apache 2.0 license header to the alembic/script.py.mako * Update the install docs * Modify variable's using method in Log Messages * Rename the gabbi filename to avoid GabbiSyntaxWarning * Replaces uuid.uuid4 with uuidutils.generate\_uuid() * Make begin and end optional when get dataframes * Don't include \*/alembic/version/\* in flake8 exclude list * Upgrade oslo.messaging to 5.2.0 * Enable DeprecationWarning in test environments * Remove mox3 in test-requirement.txt * Add http\_proxy\_to\_wsgi to api\_paste * Enable code coverage report in console output * Remove html\_static\_path from doc 0.6.1 ----- * Update outputs in "Hashmap rating module" * Fix consistency on gnocchi storage commit * Fix typo in the file * ceilometer network.\* collector are not JSON serializable * Update hashmap documentation * Fix typos in docstrings * Add volume\_type attribute to volume when gnocchi collector * Change git url to openstack git in devstack doc * Fix the image size unit from 'image' to 'MB' * Specifies gabbi version in test-requirements.txt * Fix state tracking in gnocchi storage * modify the home-page info with the developer documentation 0.6.0 ----- * Added native gnocchi storage driver * Add network.floating to Gnocchi collector * Add fields to csv reports * Fix network.bw.\* qty matching the unit by gnocchi collector * Avoid error when iamge\_ref is None * Add infos about fields in hashmap documentation * Add Python 3.5 classifier and venv for cloudkitty * Add CSV support to cloudkitty-writer * Create 
DBCommand object after parsing * ceilometer image collector is not JSON serializable * Fix db api with hash rating * Improve the rpc module * Use local.conf instead of localrc in devstack doc * Remove downgrade migrations * Change LOG.warn to LOG.warning * Remove rating SQL Schema Downgrades * Remove db SQL Schema Downgrades * Remove storage SQL Schema Downgrades * Use international logging message * doc: fix cmd for creating hashmap group * Delete python bytecode before every test run * fix the typo * RootController: Use an index method instead of get * Fix port hardcoded on APILink sample * Changes default port from 8888 to 8889 * Fix loosing resource metadata in Gnocchi * Correct concurrency of gabbi tests for gabbi 1.22.0 * Refactor gnocchi transformer * Refactor ceilometer transformer * Remove spec file since cloudkitty is in RDO * Ensure module list is up to date in API tests * Refactor transformer base * Fix gnocchi support * Add per tenant hashmap rules * Add API check to verify PyScripts is loaded * Added gabbi tests for hashmap module * Clean constraints in hashmap fields table * Fix issues with alembic constraint naming * Added hashmap module documentation * Rename hashmap mapping table to hashmap\_mappings * Code cleanup of hashmap constraint migration * Refactor storage-init command * Refactor database models and migrations * Fix missing requirement alembic * Refactor writer command * Refactor dbsync command * Fix gnocchi UUID length in storage * Fix the path of the logo for the README file * Add a logo to the frontpage * Fix devstack cleanup of data dir * Replace subclassed RequestContext with base class * Define context.roles with base class 0.5.0 ----- * Added support for an hybrid gnocchi storage * Added gnocchi collector * Migrate from keystoneclient to keystoneauth * policy: fix the roles setup in the requests context * Fixed devstack not creating folder for tooz locks * Added distributed lock fixing horizontal scaling * Fixed meta collector not applying mappings * Added CORS support to CloudKitty * Use IPOpt and PortOpt * Updated from global requirements * Improve default error handling on collector * Cleanup unused conf variables * Refactor keystone tenant fetcher * Removes unused posix\_ipc requirement * Test: make enforce\_type=True in CONF.set\_override * Replace deprecated LOG.warn with LOG.warning * Load wsgi app(api) with paste.deploy * remove rating no match case * remove setting for option verbose * Modify noop module code in arch.rst * drop py33 and py26 env test * remove unused method in orchestrator * Remove iso8601 dependency * Deprecated tox -downloadcache option removed * Fixed random failures in storage tests * Loading scripts in memory when load pyscripts rating model * Remove unnecessary parameter * Put py34 first in the env order of tox * Added unit tests for storage drivers * Fixed \_NO\_DATA\_ insertion even when data was present * The result of tenant list may be unpredictable * Fixed Horizon static file compression in devstack * Change not found HTTP status code from 400 to 404 * Move global variables to settings file * Fix error when using keystone v3 * fixes error when get quote price from rpc * modify api of report total * Add .DS\_Store to .gitignore * Add \*.swp to .gitignore * Fixes sample rabbitmq config in doc * Tenant fetcher sometimes return wrong result * Added AuthPlugin support in devstack * Remove useless LOG definitions * Delegate log formatting to logging package * devstack: enable cloudkitty services by default * Added more API 
calls for HashMap rating module * Removed version information from setup.cfg * Fixed files to support liberty dashboard * Updated files to the new namespace * Update .gitreview for new namespace * Added new rating module PyScripts * Fix the sphinx build path in .gitignore file * Added gabbi API tests * UnconfigurableController returns 409 on all methods * Removed default values on rating modules * Fixed None value returned by report/total * Added support for dynamic rating module reloading * Fix the README file of the DevStack integration * readthedocs doesn't allow multiple files anymore * doc: document how to install from packages * install doc: install policy.json in /etc/cloudkitty * Added support for Keystone AuthPlugins * Moving to Liberty cycle (0.5) 0.4.1 ----- * Preparing 0.4.1 release 0.4.0 ----- * Improved documentation * Replace assertEqual(None, \*) with assertIsNone * Use six.text\_type() in exceptions * Update README.rst file * Improve setup.cfg a bit * Collector cleanup and six string types * Fixed output option basepath not applied to writer * Fixed mock tests failing due to deprecation * Added dashboard integration's documentation * Unused functions removal in writer * \_\_commit\_data() raising exception on empty data * Updated requirements to stable/kilo * Bumped version of WSME * Sync oslo modules * Refactored meta collector API * Fixed flag setting error after data commit to storage * Fixed errors in ceilometer volume data collection * Fixed hashmap migrations for postgresql * Fix the Context hook * Make keystone optional * doc: enable ceilometer and horizon in devstack * Enforce a default policy * Changed network bandwidth from B to MB * Fixed error when no field mapping was created * Added a fake fetcher and collector * Fixed tenant fetcher using Keystone objects * Fixed empty dataset handling on CloudKitty storage * Fixed dashboard installation with devstack * Added support for dashboard in devstack * Fixed collector unhandled exception management * Added more resources to ceilometer collector * Added implicit cast to decimal in rating total calculation * Fixed API bug with parents UUID * Added collection error catching * Added new mapping type: threshold * Fixed tenant list filtering with keystone fetcher * Fix the config sample generator method * Fix the broken options for tenant\_fetcher * Update the devstack scripts * Add ACL support on the API * Added support for pluggable tenant fetcher * Correct the authtoken options sample of installation * Fixed problem with the NO\_DATA type in sqlalchemy * Change the API storage path * Fixed potentially uninitialized session in sqlalchemy storage * Fix wrong order of arguments * Fixed bug in sqlalchemy storage filtering * Add option values log for cloudkitty-api * Fixed regression in processors handling * Fixed regression in RatingProcessorBase: nodata missing * Fixed a bug with uninitialized usage interval in nodata * Added CloudKitty client install to devstack * Added support for rating module priority * get\_rpc\_client has been renamed to rpc.get\_client() * Make sure the RPC use str's * Remove explicit oslo.messaging version requirement * Fix the public access on / * Fixed regression in the RPC module * Renaming billing to rating * devstack: configure keystone middleware * Added handling of empty collections * Added quote calls to rating processors * New HashMap rating module version * Add filtering on resource type to the storage API * Split the api controllers and resources * Use keystone middleware for 
authentication * Generate the sample with oslo-config-generator * Support both oslo.messaging and oslo_messaging * Ceilo transformer: handle multiple metadata keys * Move extra files to the contrib/ dir * storage API: handle NoTimeFrame exception * Update the devstack lib and documentation * Provide an installation documentation * Implement a storage API * Added multi-tenancy support * Moved keystone settings in the ceilometer collector * Pinned oslo.messaging version * Drop some unneeded deps from .spec * Fixed extra/missing init files from migrations * Adding image collection to the collector * Insert empty frame if no data * Add missing \_\_init\_\_.py's * Fixed a bug while enabling rating module by RPC * Removed configuration check from pep8 tests * small changes to index file * Modified utils to implement all time calculations * Using UTC datetimes in StateManager tests * Repository general files update * rpm: install the new binaries in the common rpm * Fixed OSRF format writer to support Decimal * Fixed output format for sqlalchemy storage backend * Separated writing and rating processing * Added support for total on API * Implemented new storage drivers * Provide rpm packaging configurations * Added instance\_id in compute metadata * Fixed bug while loading empty metadata * Added time calculations code in utils * Removed i18n from flake checks * Added support for Multiple Collectors * Add tests for the DB state manager * Rename get\_migrate method for consistency * Move the tests in the cloudkitty package * Added pluggable transformer support * Fixed regression in file writing * Fixed writers bugs when switching to RPC * Added basepath support for writers * Fixed ZipFile not supporting backends * Implemented RPC messaging * Work toward Python 3.4 support and testing * Improved architecture section in documentation * Remove docutils from test-requirements * Add Devstack support for the api * Fix accuracy of the States.state column * Fix the output basepath for WriteOrchestrator * Removed docbook requirements from documentation * Updated sphinx version in test-requirements * Modified path in documentation config file * Update the pbr version * Fix the get/set state method * Fixed host port detection in root controller * Fixed bug with module name detection * Added hashmap types query via API * Improved documentation (docstrings and sphinx) * Updated openstack-common code to latest version * Added cloudkitty-dbsync tool * Implemented HashMap API * Added API service * Added module state management DB * Added alembic support for database migrations * Fixed missing config generator.rc file * Transitioned from file state to DB state * Added i18n support * Transitioned from importutils to stevedore * Moved base billing code * Moved base backend code * Fixed wrong datetime usage * Moved base collector code * Fixed state recovery in osrf writer * Moved base writer code * Fixed wrong path for cloudkitty config sample * Added config tools * Setup and dist modifications * Added ropeproject to gitignore * Bump hacking to version 0.9.2 * Fixed typo in README * Added more informations in the README * Moved to importutils from oslo-incubator * PEP8 and hacking compliance * Added more files to gitignore * Set copyright/license information in .py files * Modified README to use rst format * Pushing initial work * Added .gitreview
cloudkitty-21.0.0/HACKING.rst

Cloudkitty Style Commandments
=============================

- Step 1: Read the OpenStack Style Commandments https://docs.openstack.org/hacking/latest
- Step 2: Read on

Cloudkitty Specific Commandments
--------------------------------

- [C310] Check for improper use of logging format arguments.
- [C311] Use assertIsNone(...) instead of assertEqual(None, ...).
- [C312] Use assertTrue(...) rather than assertEqual(True, ...).
- [C313] Validate that logs are not translated.
- [C314] str() and unicode() cannot be used on an exception.
- [C315] Translated messages cannot be concatenated. String should be included in translated message.
- [C317] `oslo_` should be used instead of `oslo.`
- [C318] Must use a dict comprehension instead of a dict constructor with a sequence of key-value pairs.
- [C319] Ensure to not use xrange().
- [C320] Do not use LOG.warn as it's deprecated.
- [C321] Ensure that the _() function is explicitly imported to ensure proper translations.
- [C322] Check for usage of deprecated assertRaisesRegexp

LOG Translations
----------------

LOG.debug messages will not get translated. Use ``_LI()`` for ``LOG.info``, ``_LW()`` for ``LOG.warning``, ``_LE()`` for ``LOG.error`` and ``LOG.exception``, and ``_LC()`` for ``LOG.critical``.

``_()`` is preferred for any user facing message, even if it is also going to a log file. This ensures that the translated version of the message will be available to the user.

The log marker functions (``_LI()``, ``_LW()``, ``_LE()``, and ``_LC()``) must only be used when the message is only sent directly to the log. Anytime that the message will be passed outside of the current context (for example as part of an exception) the ``_()`` marker function must be used.

A common pattern is to define a single message object and use it more than once, for the log call and the exception. In that case, ``_()`` must be used because the message is going to appear in an exception that may be presented to the user.

For more details about translations, see https://docs.openstack.org/oslo.i18n/latest/

Creating Unit Tests
-------------------

For every new feature, unit tests should be created that both test and (implicitly) document the usage of said feature. If submitting a patch for a bug that had no unit test, a new passing unit test should be added. If a submitted bug fix does have a unit test, be sure to add a new one that fails without the patch and passes with the patch.
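As an illustration, the sketch below shows what a small test honouring the commandments above might look like. The helper ``build_price_index`` and the test class are hypothetical names used only to demonstrate the assertIsNone (C311), assertTrue (C312) and dict-comprehension (C318) rules; they are not part of the CloudKitty code base.

.. code-block:: python

    import unittest


    def build_price_index(pairs):
        """Map service names to prices (hypothetical helper)."""
        # C318: build the dict with a comprehension, not dict() over
        # a sequence of key-value pairs.
        return {name: price for name, price in pairs}


    class TestBuildPriceIndex(unittest.TestCase):
        def test_known_services_are_indexed(self):
            index = build_price_index([("compute", 1.5), ("image", 0.2)])
            self.assertEqual(1.5, index["compute"])
            # C312: assertTrue(...) rather than assertEqual(True, ...).
            self.assertTrue("image" in index)

        def test_missing_service_has_no_price(self):
            index = build_price_index([])
            # C311: assertIsNone(...) instead of assertEqual(None, ...).
            self.assertIsNone(index.get("compute"))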
Running Tests
-------------

The testing system is based on a combination of tox and testr. If you just want to run the whole suite, run `tox` and all will be fine. However, if you'd like to dig in a bit more, you might want to learn some things about testr itself. A basic walkthrough for OpenStack can be found at https://wiki.openstack.org/wiki/Testr

OpenStack Trademark
-------------------

OpenStack is a registered trademark of OpenStack, LLC, and uses the following capitalization: OpenStack

Commit Messages
---------------

Using a common format for commit messages will help keep our git history readable. Follow these guidelines:

First, provide a brief summary (it is recommended to keep the commit title under 50 chars). The first line of the commit message should provide an accurate description of the change, not just a reference to a bug or blueprint. It must be followed by a single blank line.

Following your brief summary, provide a more detailed description of the patch, manually wrapping the text at 72 characters. This description should provide enough detail that one does not have to refer to external resources to determine its high-level functionality.

Once you use 'git review', two lines will be appended to the commit message: a blank line followed by a 'Change-Id'. This is important to correlate this commit with a specific review in Gerrit, and it should not be modified.

For further information on constructing high quality commit messages, and how to split up commits into a series of changes, consult the project wiki: https://wiki.openstack.org/wiki/GitCommitMessages

cloudkitty-21.0.0/LICENSE

Apache License Version 2.0, January 2004 http://www.apache.org/licenses/ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION 1. Definitions. "License" shall mean the terms and conditions for use, reproduction, and distribution as defined by Sections 1 through 9 of this document. "Licensor" shall mean the copyright owner or entity authorized by the copyright owner that is granting the License. "Legal Entity" shall mean the union of the acting entity and all other entities that control, are controlled by, or are under common control with that entity. For the purposes of this definition, "control" means (i) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the outstanding shares, or (iii) beneficial ownership of such entity. "You" (or "Your") shall mean an individual or Legal Entity exercising permissions granted by this License. "Source" form shall mean the preferred form for making modifications, including but not limited to software source code, documentation source, and configuration files. "Object" form shall mean any form resulting from mechanical transformation or translation of a Source form, including but not limited to compiled object code, generated documentation, and conversions to other media types. "Work" shall mean the work of authorship, whether in Source or Object form, made available under the License, as indicated by a copyright notice that is included in or attached to the work (an example is provided in the Appendix below). "Derivative Works" shall mean any work, whether in Source or Object form, that is based on (or derived from) the Work and for which the editorial revisions, annotations, elaborations, or other modifications represent, as a whole, an original work of authorship. For the purposes of this License, Derivative Works shall not include works that remain separable from, or merely link (or bind by name) to the interfaces of, the Work and Derivative Works thereof. "Contribution" shall mean any work of authorship, including the original version of the Work and any modifications or additions to that Work or Derivative Works thereof, that is intentionally submitted to Licensor for inclusion in the Work by the copyright owner or by an individual or Legal Entity authorized to submit on behalf of the copyright owner.
For the purposes of this definition, "submitted" means any form of electronic, verbal, or written communication sent to the Licensor or its representatives, including but not limited to communication on electronic mailing lists, source code control systems, and issue tracking systems that are managed by, or on behalf of, the Licensor for the purpose of discussing and improving the Work, but excluding communication that is conspicuously marked or otherwise designated in writing by the copyright owner as "Not a Contribution." "Contributor" shall mean Licensor and any individual or Legal Entity on behalf of whom a Contribution has been received by Licensor and subsequently incorporated within the Work. 2. Grant of Copyright License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare Derivative Works of, publicly display, publicly perform, sublicense, and distribute the Work and such Derivative Works in Source or Object form. 3. Grant of Patent License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Work, where such license applies only to those patent claims licensable by such Contributor that are necessarily infringed by their Contribution(s) alone or by combination of their Contribution(s) with the Work to which such Contribution(s) was submitted. If You institute patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Work or a Contribution incorporated within the Work constitutes direct or contributory patent infringement, then any patent licenses granted to You under this License for that Work shall terminate as of the date such litigation is filed. 4. Redistribution. You may reproduce and distribute copies of the Work or Derivative Works thereof in any medium, with or without modifications, and in Source or Object form, provided that You meet the following conditions: (a) You must give any other recipients of the Work or Derivative Works a copy of this License; and (b) You must cause any modified files to carry prominent notices stating that You changed the files; and (c) You must retain, in the Source form of any Derivative Works that You distribute, all copyright, patent, trademark, and attribution notices from the Source form of the Work, excluding those notices that do not pertain to any part of the Derivative Works; and (d) If the Work includes a "NOTICE" text file as part of its distribution, then any Derivative Works that You distribute must include a readable copy of the attribution notices contained within such NOTICE file, excluding those notices that do not pertain to any part of the Derivative Works, in at least one of the following places: within a NOTICE text file distributed as part of the Derivative Works; within the Source form or documentation, if provided along with the Derivative Works; or, within a display generated by the Derivative Works, if and wherever such third-party notices normally appear. The contents of the NOTICE file are for informational purposes only and do not modify the License. 
You may add Your own attribution notices within Derivative Works that You distribute, alongside or as an addendum to the NOTICE text from the Work, provided that such additional attribution notices cannot be construed as modifying the License. You may add Your own copyright statement to Your modifications and may provide additional or different license terms and conditions for use, reproduction, or distribution of Your modifications, or for any such Derivative Works as a whole, provided Your use, reproduction, and distribution of the Work otherwise complies with the conditions stated in this License. 5. Submission of Contributions. Unless You explicitly state otherwise, any Contribution intentionally submitted for inclusion in the Work by You to the Licensor shall be under the terms and conditions of this License, without any additional terms or conditions. Notwithstanding the above, nothing herein shall supersede or modify the terms of any separate license agreement you may have executed with Licensor regarding such Contributions. 6. Trademarks. This License does not grant permission to use the trade names, trademarks, service marks, or product names of the Licensor, except as required for reasonable and customary use in describing the origin of the Work and reproducing the content of the NOTICE file. 7. Disclaimer of Warranty. Unless required by applicable law or agreed to in writing, Licensor provides the Work (and each Contributor provides its Contributions) on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are solely responsible for determining the appropriateness of using or redistributing the Work and assume any risks associated with Your exercise of permissions under this License. 8. Limitation of Liability. In no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall any Contributor be liable to You for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising as a result of this License or out of the use or inability to use the Work (including but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses), even if such Contributor has been advised of the possibility of such damages. 9. Accepting Warranty or Additional Liability. While redistributing the Work or Derivative Works thereof, You may choose to offer, and charge a fee for, acceptance of support, warranty, indemnity, or other liability obligations and/or rights consistent with this License. However, in accepting such obligations, You may act only on Your own behalf and on Your sole responsibility, not on behalf of any other Contributor, and only if You agree to indemnify, defend, and hold each Contributor harmless for any liability incurred by, or claims asserted against, such Contributor by reason of your accepting any such warranty or additional liability. 
cloudkitty-21.0.0/PKG-INFO

Metadata-Version: 1.2
Name: cloudkitty
Version: 21.0.0
Summary: Rating as a Service component for OpenStack
Home-page: https://docs.openstack.org/cloudkitty/latest
Author: OpenStack
Author-email: openstack-discuss@lists.openstack.org
License: UNKNOWN
_contributing: https://docs.openstack.org/cloudkitty/latest/contributor/contributing.html Platform: UNKNOWN Classifier: Environment :: OpenStack Classifier: Intended Audience :: Information Technology Classifier: Intended Audience :: System Administrators Classifier: License :: OSI Approved :: Apache Software License Classifier: Operating System :: POSIX :: Linux Classifier: Programming Language :: Python Classifier: Programming Language :: Python :: 3 Classifier: Programming Language :: Python :: 3.8 Classifier: Programming Language :: Python :: 3.9 Classifier: Programming Language :: Python :: 3.10 Classifier: Programming Language :: Python :: 3.11 Requires-Python: >=3.8 ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866579.0 cloudkitty-21.0.0/README.rst0000664000175000017500000000621100000000000015477 0ustar00zuulzuul00000000000000======================== Team and repository tags ======================== .. image:: https://governance.openstack.org/tc/badges/cloudkitty.svg :target: https://governance.openstack.org/tc/reference/tags/index.html .. Change things from this point on ========== CloudKitty ========== .. image:: doc/source/images/cloudkitty-logo.png :alt: cloudkitty :align: center Rating as a Service component +++++++++++++++++++++++++++++ Goal ---- CloudKitty aims at filling the gap between metrics collection systems like ceilometer and a billing system. All metrics are collected, aggregated, and processed through different rating modules. You can then query CloudKitty's storage to retrieve processed data and easily generate reports. Most parts of CloudKitty are modular so you can easily extend the base code to address your particular use case. You can find more information on its architecture in the documentation, `architecture section`_. Status ------ CloudKitty has been successfully deployed in production on different OpenStack systems. You can find the latest documentation on documentation_. Contributing ------------ We welcome new contributors: if you have new ideas or suggestions, or want to contribute, contact us. You can reach us through IRC (#cloudkitty @ oftc.net), or on the official OpenStack mailing list openstack-discuss@lists.openstack.org. A storyboard_ is available if you need to report bugs. Additional components --------------------- We provide an OpenStack dashboard (Horizon) integration; you can find the files in the cloudkitty-dashboard_ repository. A CLI is also available in the python-cloudkittyclient_ repository. Trying it --------- CloudKitty can be deployed with DevStack; more information can be found in the `devstack section`_ of the documentation. Deploying it in production -------------------------- CloudKitty can be deployed in production on OpenStack environments; for more information, check the `installation section`_ of the documentation. Getting release notes --------------------- Release notes can be found in the `release notes section`_ of the documentation. Contributing to CloudKitty -------------------------- For information on how to contribute to CloudKitty, please see the contents of the CONTRIBUTING.rst. Any new code must follow the development guidelines detailed in the HACKING.rst file, and pass all unit tests. .. Global references and images .. _documentation: https://docs.openstack.org/cloudkitty/latest/ .. _storyboard: https://storyboard.openstack.org/#!/project/890 .. _python-cloudkittyclient: https://opendev.org/openstack/python-cloudkittyclient .. 
_cloudkitty-dashboard: https://opendev.org/openstack/cloudkitty-dashboard .. _architecture section: https://docs.openstack.org/cloudkitty/latest/admin/architecture.html .. _devstack section: https://docs.openstack.org/cloudkitty/latest/admin/devstack.html .. _installation section: https://docs.openstack.org/cloudkitty/latest/admin/install/index.html .. _release notes section: https://docs.openstack.org/releasenotes/cloudkitty/ .. _contributing: https://docs.openstack.org/cloudkitty/latest/contributor/contributing.html ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1727866639.2114859 cloudkitty-21.0.0/api-ref/0000775000175000017500000000000000000000000015333 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000003300000000000011451 xustar000000000000000027 mtime=1727866639.223486 cloudkitty-21.0.0/api-ref/source/0000775000175000017500000000000000000000000016633 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866579.0 cloudkitty-21.0.0/api-ref/source/conf.py0000664000175000017500000002022200000000000020130 0ustar00zuulzuul00000000000000# Copyright (c) 2010 OpenStack Foundation. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. # # This file is execfile()'d with the current directory set to its containing # dir. # # Note that not all possible configuration values are present in this # autogenerated file. # # All configuration values have a default; values that are commented out # serve to show the default. import os import sys # If extensions (or modules to document with autodoc) are in another directory, # add these directories to sys.path here. If the directory is relative to the # documentation root, use os.path.abspath to make it absolute, like shown here. sys.path = [ os.path.abspath('../..'), os.path.abspath('../../bin') ] + sys.path # -- General configuration --------------------------------------------------- # Add any Sphinx extension module names here, as strings. They can be # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom ones. extensions = ['sphinx.ext.coverage', 'sphinx.ext.ifconfig', 'sphinx.ext.graphviz', 'stevedore.sphinxext', 'oslo_config.sphinxext', 'sphinx.ext.viewcode', 'oslo_config.sphinxconfiggen', 'sphinx.ext.mathjax', 'wsmeext.sphinxext', 'sphinx.ext.autodoc', 'sphinxcontrib.pecanwsme.rest', 'sphinxcontrib.httpdomain', 'os_api_ref', 'openstackdocstheme', 'oslo_policy.sphinxext', 'oslo_policy.sphinxpolicygen', ] # Ignore the following warning: WARNING: while setting up extension # wsmeext.sphinxext: directive 'autoattribute' is already registered, # it will be overridden. 
suppress_warnings = ['app.add_directive'] # openstackdocstheme options openstackdocs_repo_name = 'openstack/cloudkitty' openstackdocs_pdf_link = True openstackdocs_use_storyboard = True config_generator_config_file = '../../etc/oslo-config-generator/cloudkitty.conf' policy_generator_config_file = '../../etc/oslo-policy-generator/cloudkitty.conf' sample_policy_basename = sample_config_basename = '_static/cloudkitty' # Add any paths that contain templates here, relative to this directory. # templates_path = [] # The suffix of source filenames. source_suffix = '.rst' # The encoding of source files. #source_encoding = 'utf-8' # The master toctree document. master_doc = 'index' # General information about the project. copyright = '2014-present, OpenStack Foundation.' # The language for content autogenerated by Sphinx. Refer to documentation # for a list of supported languages. #language = None # There are two options for replacing |today|: either, you set today to some # non-false value, then it is used: #today = '' # Else, today_fmt is used as the format for a strftime call. #today_fmt = '%B %d, %Y' # List of documents that shouldn't be included in the build. #unused_docs = [] # List of directories, relative to source directory, that shouldn't be searched # for source files. #exclude_trees = ['api'] exclude_patterns = [] # The reST default role (for this markup: `text`) to use for all documents. #default_role = None # If true, '()' will be appended to :func: etc. cross-reference text. add_function_parentheses = True # If true, the current module name will be prepended to all description # unit titles (such as .. function::). add_module_names = True # If true, sectionauthor and moduleauthor directives will be shown in the # output. They are ignored by default. show_authors = True # The name of the Pygments (syntax highlighting) style to use. pygments_style = 'native' # A list of ignored prefixes for module index sorting. modindex_common_prefix = ['cloudkitty.'] # -- Options for man page output -------------------------------------------- # Grouping the document tree for man pages. # List of tuples 'sourcefile', 'target', u'title', u'Authors name', 'manual' man_pages = [ ('index', 'cloudkitty', 'cloudkitty Documentation', ['Objectif Libre'], 1) ] # -- Options for HTML output ------------------------------------------------- # The theme to use for HTML and HTML Help pages. Major themes that come with # Sphinx are currently 'default' and 'sphinxdoc'. # html_theme_path = ["."] # html_theme = '_theme' html_theme = 'openstackdocs' # Theme options are theme-specific and customize the look and feel of a theme # further. For a list of options available for each theme, see the # documentation. html_theme_options = { "show_other_versions": "True", } # Add any paths that contain custom themes here, relative to this directory. #html_theme_path = ['_theme'] #html_theme_path = [openstackdocstheme.get_html_theme_path()] # The name for this set of Sphinx documents. If None, it defaults to # " v documentation". #html_title = None # A shorter title for the navigation bar. Default is the same as html_title. #html_short_title = None # The name of an image file (relative to this directory) to place at the top # of the sidebar. #html_logo = None # The name of an image file (within the static path) to use as favicon of the # docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32 # pixels large. 
#html_favicon = None # Add any paths that contain custom static files (such as style sheets) here, # relative to this directory. They are copied after the builtin static files, # so a file named "default.css" will overwrite the builtin "default.css". html_static_path = ['_static'] # If true, SmartyPants will be used to convert quotes and dashes to # typographically correct entities. #html_use_smartypants = True # Custom sidebar templates, maps document names to template names. #html_sidebars = {} # Additional templates that should be rendered to pages, maps page names to # template names. #html_additional_pages = {} # If false, no module index is generated. html_use_modindex = True # If false, no index is generated. html_use_index = True # If true, the index is split into individual pages for each letter. #html_split_index = False # If true, links to the reST sources are added to the pages. #html_show_sourcelink = True # If true, an OpenSearch description file will be output, and all pages will # contain a tag referring to it. The value of this option must be the # base URL from which the finished HTML is served. #html_use_opensearch = '' # If nonempty, this is the file name suffix for HTML files (e.g. ".xhtml"). #html_file_suffix = '' # Output file base name for HTML help builder. htmlhelp_basename = 'cloudkittydoc' # -- Options for LaTeX output ------------------------------------------------ # The paper size ('letter' or 'a4'). #latex_paper_size = 'letter' # The font size ('10pt', '11pt' or '12pt'). #latex_font_size = '10pt' # Grouping the document tree into LaTeX files. List of tuples # (source start file, target name, title, author, # documentclass [howto/manual]). latex_documents = [ ('pdf-index', 'doc-cloudkitty.tex', 'Cloudkitty Documentation', 'Cloudkitty Team', 'manual'), ] # The name of an image file (relative to this directory) to place at the top of # the title page. #latex_logo = None # For "manual" documents, if this is true, then toplevel headings are parts, # not chapters. #latex_use_parts = False # Additional stuff for the LaTeX preamble. #latex_preamble = '' # Documents to append as an appendix to all manuals. #latex_appendices = [] # If false, no module index is generated. #latex_use_modindex = True # If false, no module index is generated. latex_domain_indices = False latex_elements = { 'makeindex': '', 'printindex': '', 'preamble': r'\setcounter{tocdepth}{3}', } # Disable usage of xindy https://bugzilla.redhat.com/show_bug.cgi?id=1643664 latex_use_xindy = False ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866579.0 cloudkitty-21.0.0/api-ref/source/index.rst0000664000175000017500000000037700000000000020503 0ustar00zuulzuul00000000000000############# API Reference ############# This is a complete reference of Cloudkitty's API. API v1 ====== .. toctree:: :glob: v1/* v1/rating/* .. only:: html API v2 ====== .. 
toctree:: :maxdepth: 2 :glob: v2/* ././@PaxHeader0000000000000000000000000000003300000000000011451 xustar000000000000000027 mtime=1727866639.223486 cloudkitty-21.0.0/api-ref/source/v1/0000775000175000017500000000000000000000000017161 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000003300000000000011451 xustar000000000000000027 mtime=1727866639.223486 cloudkitty-21.0.0/api-ref/source/v1/rating/0000775000175000017500000000000000000000000020445 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866579.0 cloudkitty-21.0.0/api-ref/source/v1/rating/hashmap.rst0000664000175000017500000000311000000000000022613 0ustar00zuulzuul00000000000000======================= HashMap Module REST API ======================= .. rest-controller:: cloudkitty.rating.hash.controllers.root:HashMapConfigController :webprefix: /v1/rating/module_config/hashmap .. rest-controller:: cloudkitty.rating.hash.controllers.service:HashMapServicesController :webprefix: /v1/rating/module_config/hashmap/services .. autotype:: cloudkitty.rating.hash.datamodels.service.Service :members: .. autotype:: cloudkitty.rating.hash.datamodels.service.ServiceCollection :members: .. rest-controller:: cloudkitty.rating.hash.controllers.field:HashMapFieldsController :webprefix: /v1/rating/module_config/hashmap/fields .. autotype:: cloudkitty.rating.hash.datamodels.field.Field :members: .. autotype:: cloudkitty.rating.hash.datamodels.field.FieldCollection :members: .. rest-controller:: cloudkitty.rating.hash.controllers.mapping:HashMapMappingsController :webprefix: /v1/rating/module_config/hashmap/mappings .. autotype:: cloudkitty.rating.hash.datamodels.mapping.Mapping :members: .. autotype:: cloudkitty.rating.hash.datamodels.mapping.MappingCollection :members: .. autotype:: cloudkitty.rating.hash.datamodels.threshold.Threshold :members: .. autotype:: cloudkitty.rating.hash.datamodels.threshold.ThresholdCollection :members: .. rest-controller:: cloudkitty.rating.hash.controllers.group:HashMapGroupsController :webprefix: /v1/rating/module_config/hashmap/groups .. autotype:: cloudkitty.rating.hash.datamodels.group.Group :members: .. autotype:: cloudkitty.rating.hash.datamodels.group.GroupCollection :members: ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866579.0 cloudkitty-21.0.0/api-ref/source/v1/rating/pyscripts.rst0000664000175000017500000000104300000000000023235 0ustar00zuulzuul00000000000000========================= PyScripts Module REST API ========================= .. rest-controller:: cloudkitty.rating.pyscripts.controllers.root:PyScriptsConfigController :webprefix: /v1/rating/module_config/pyscripts .. rest-controller:: cloudkitty.rating.pyscripts.controllers.script:PyScriptsScriptsController :webprefix: /v1/rating/module_config/pyscripts/scripts .. autotype:: cloudkitty.rating.pyscripts.datamodels.script.Script :members: .. autotype:: cloudkitty.rating.pyscripts.datamodels.script.ScriptCollection :members: ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866579.0 cloudkitty-21.0.0/api-ref/source/v1/v1.rst0000664000175000017500000000473000000000000020245 0ustar00zuulzuul00000000000000======================== CloudKitty REST API (v1) ======================== Collector ========= .. rest-controller:: cloudkitty.api.v1.controllers.collector:CollectorController :webprefix: /v1/collector .. 
rest-controller:: cloudkitty.api.v1.controllers.collector:MappingController :webprefix: /v1/collector/mappings .. rest-controller:: cloudkitty.api.v1.controllers.collector:CollectorStateController :webprefix: /v1/collector/states .. autotype:: cloudkitty.api.v1.datamodels.collector.CollectorInfos :members: .. autotype:: cloudkitty.api.v1.datamodels.collector.ServiceToCollectorMapping :members: .. autotype:: cloudkitty.api.v1.datamodels.collector.ServiceToCollectorMappingCollection :members: Info ==== .. rest-controller:: cloudkitty.api.v1.controllers.info:InfoController :webprefix: /v1/info .. rest-controller:: cloudkitty.api.v1.controllers.info:MetricInfoController :webprefix: /v1/info/metric .. autotype:: cloudkitty.api.v1.datamodels.info.CloudkittyMetricInfo :members: .. autotype:: cloudkitty.api.v1.datamodels.info.CloudkittyMetricInfoCollection :members: .. rest-controller:: cloudkitty.api.v1.controllers.info:ServiceInfoController :webprefix: /v1/info/service Rating ====== .. rest-controller:: cloudkitty.api.v1.controllers.rating:ModulesController :webprefix: /v1/rating/modules .. rest-controller:: cloudkitty.api.v1.controllers.rating:ModulesExposer :webprefix: /v1/rating/module_config .. rest-controller:: cloudkitty.api.v1.controllers.rating:RatingController :webprefix: /v1/rating .. autotype:: cloudkitty.api.v1.datamodels.rating.CloudkittyModule :members: .. autotype:: cloudkitty.api.v1.datamodels.rating.CloudkittyModuleCollection :members: .. autotype:: cloudkitty.api.v1.datamodels.rating.CloudkittyResource :members: .. autotype:: cloudkitty.api.v1.datamodels.rating.CloudkittyResourceCollection :members: Report ====== .. rest-controller:: cloudkitty.api.v1.controllers.report:ReportController :webprefix: /v1/report Storage ======= .. rest-controller:: cloudkitty.api.v1.controllers.storage:StorageController :webprefix: /v1/storage .. rest-controller:: cloudkitty.api.v1.controllers.storage:DataFramesController :webprefix: /v1/storage/dataframes .. autotype:: cloudkitty.api.v1.datamodels.storage.RatedResource :members: .. autotype:: cloudkitty.api.v1.datamodels.storage.DataFrame :members: .. 
autotype:: cloudkitty.api.v1.datamodels.storage.DataFrameCollection :members: ././@PaxHeader0000000000000000000000000000003300000000000011451 xustar000000000000000027 mtime=1727866639.223486 cloudkitty-21.0.0/api-ref/source/v2/0000775000175000017500000000000000000000000017162 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1727866639.2114859 cloudkitty-21.0.0/api-ref/source/v2/api_samples/0000775000175000017500000000000000000000000021457 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000003300000000000011451 xustar000000000000000027 mtime=1727866639.223486 cloudkitty-21.0.0/api-ref/source/v2/api_samples/dataframes/0000775000175000017500000000000000000000000023566 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866579.0 cloudkitty-21.0.0/api-ref/source/v2/api_samples/dataframes/dataframes_get.json0000664000175000017500000000471100000000000027432 0ustar00zuulzuul00000000000000{ "total": 3, "dataframes": [ { "usage": { "metric_one": [ { "vol": { "unit": "GiB", "qty": 1.2 }, "rating": { "price": 0.04 }, "groupby": { "group_one": "one", "group_two": "two" }, "metadata": { "attr_one": "one", "attr_two": "two" } } ], "metric_two": [ { "vol": { "unit": "GiB", "qty": 1.2 }, "rating": { "price": 0.04 }, "groupby": { "group_one": "one", "group_two": "two" }, "metadata": { "attr_one": "one", "attr_two": "two" } } ] }, "period": { "begin": "2019-07-23T12:28:10+00:00", "end": "2019-07-23T13:28:10+00:00" } }, { "usage": { "volume.size": [ { "vol": { "unit": "GiB", "qty": 1.9 }, "rating": { "price": 3.8 }, "groupby": { "project_id": "8ace6f139a1742548e09f1e446bc9737", "user_id": "b28fd3f448c34c17bf70e32886900eed", "id": "be966c6d-78a0-42cf-bab9-e833ed996dee" }, "metadata": { "volume_type": "" } } ] }, "period": { "begin": "2019-08-01T01:00:00+00:00", "end": "2019-08-01T02:00:00+00:00" } } ] } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866579.0 cloudkitty-21.0.0/api-ref/source/v2/api_samples/dataframes/dataframes_post.json0000664000175000017500000000572500000000000027646 0ustar00zuulzuul00000000000000{ "dataframes": [ { "period": { "begin": "20190723T122810Z", "end": "20190723T132810Z" }, "usage": { "metric_one": [ { "vol": { "unit": "GiB", "qty": 1.2 }, "rating": { "price": 0.04 }, "groupby": { "group_one": "one", "group_two": "two" }, "metadata": { "attr_one": "one", "attr_two": "two" } } ], "metric_two": [ { "vol": { "unit": "MB", "qty": 200.4 }, "rating": { "price": 0.06 }, "groupby": { "group_one": "one", "group_two": "two" }, "metadata": { "attr_one": "one", "attr_two": "two" } } ] } }, { "period": { "begin": "20190823T122810Z", "end": "20190823T132810Z" }, "usage": { "metric_one": [ { "vol": { "unit": "GiB", "qty": 2.4 }, "rating": { "price": 0.08 }, "groupby": { "group_one": "one", "group_two": "two" }, "metadata": { "attr_one": "one", "attr_two": "two" } } ], "metric_two": [ { "vol": { "unit": "MB", "qty": 400.8 }, "rating": { "price": 0.12 }, "groupby": { "group_one": "one", "group_two": "two" }, "metadata": { "attr_one": "one", "attr_two": "two" } } ] } } ] } ././@PaxHeader0000000000000000000000000000003300000000000011451 xustar000000000000000027 mtime=1727866639.223486 cloudkitty-21.0.0/api-ref/source/v2/api_samples/rating/0000775000175000017500000000000000000000000022743 
5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866579.0 cloudkitty-21.0.0/api-ref/source/v2/api_samples/rating/module_get.json0000664000175000017500000000017700000000000025767 0ustar00zuulzuul00000000000000{ "module_id": "sample_id", "description": "Sample extension", "enabled": true, "hot-config": false, "priority": 2 } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866579.0 cloudkitty-21.0.0/api-ref/source/v2/api_samples/rating/modules_list_get.json0000664000175000017500000000056400000000000027205 0ustar00zuulzuul00000000000000 { "modules": [ { "module_id": "noop", "description": "Dummy test module.", "enabled": true, "hot-config": false, "priority": 1 }, { "module_id": "hashmap", "description": "HashMap rating module.", "enabled": false, "hot-config": true, "priority": 2 } ] } ././@PaxHeader0000000000000000000000000000003300000000000011451 xustar000000000000000027 mtime=1727866639.223486 cloudkitty-21.0.0/api-ref/source/v2/api_samples/scope/0000775000175000017500000000000000000000000022570 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866579.0 cloudkitty-21.0.0/api-ref/source/v2/api_samples/scope/scope_get.json0000664000175000017500000000154500000000000025440 0ustar00zuulzuul00000000000000{ "results": [ { "collector": "gnocchi", "fetcher": "keystone", "scope_id": "7a7e5183264644a7a79530eb56e59941", "scope_key": "project_id", "last_processed_timestamp": "2019-05-09 10:00:00", "active": true }, { "collector": "gnocchi", "fetcher": "keystone", "scope_id": "9084fadcbd46481788e0ad7405dcbf12", "scope_key": "project_id", "last_processed_timestamp": "2019-05-08 03:00:00", "active": true }, { "collector": "gnocchi", "fetcher": "keystone", "scope_id": "1f41d183fca5490ebda5c63fbaca026a", "scope_key": "project_id", "last_processed_timestamp": "2019-05-06 22:00:00", "active": true } ] } ././@PaxHeader0000000000000000000000000000003300000000000011451 xustar000000000000000027 mtime=1727866639.223486 cloudkitty-21.0.0/api-ref/source/v2/api_samples/summary/0000775000175000017500000000000000000000000023154 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866579.0 cloudkitty-21.0.0/api-ref/source/v2/api_samples/summary/summary_get.json0000664000175000017500000000175000000000000026406 0ustar00zuulzuul00000000000000{ "columns": [ "begin", "end", "qty", "rate", "project_id", "type" ], "results": [ [ "2019-06-01T00:00:00Z", "2019-07-01T00:00:00Z", 2590.421676635742, 1295.210838317871, "fe9c35372db6420089883805b37a34af", "image.size" ], [ "2019-06-01T00:00:00Z", "2019-07-01T00:00:00Z", 1354, 3625, "fe9c35372db6420089883805b37a34af", "instance" ], [ "2019-06-01T00:00:00Z", "2019-07-01T00:00:00Z", 502, 502, "fe9c35372db6420089883805b37a34af", "ip.floating" ], [ "2019-06-01T00:00:00Z", "2019-07-01T00:00:00Z", 175.9, 351.8, "fe9c35372db6420089883805b37a34af", "volume.size" ] ], "total": 4 } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866579.0 cloudkitty-21.0.0/api-ref/source/v2/api_samples/summary/summary_get_groupby_time.json0000664000175000017500000000121200000000000031164 0ustar00zuulzuul00000000000000{ "total": 232, "columns": [ "begin", "end", "qty", "rate", "project_id" ], "results": [ [ "2019-10-01T06:00:00+02:00", "2019-10-01T07:00:00+02:00", 3.5533905029296875, 
1.7766952514648438, "84631866b2d84db49b29828052bdc287" ], [ "2019-10-01T07:00:00+02:00", "2019-10-01T08:00:00+02:00", 3.5533905029296875, 1.7766952514648438, "84631866b2d84db49b29828052bdc287" ], [ "2019-10-01T08:00:00+02:00", "2019-10-01T09:00:00+02:00", 3.5533905029296875, 1.7766952514648438, "84631866b2d84db49b29828052bdc287" ] ] } ././@PaxHeader0000000000000000000000000000003300000000000011451 xustar000000000000000027 mtime=1727866639.223486 cloudkitty-21.0.0/api-ref/source/v2/dataframes/0000775000175000017500000000000000000000000021271 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866579.0 cloudkitty-21.0.0/api-ref/source/v2/dataframes/dataframes.inc0000664000175000017500000000261000000000000024072 0ustar00zuulzuul00000000000000=================== Dataframes endpoint =================== Add dataframes into the storage backend ======================================= Add dataframes into the storage backend. .. rest_method:: POST /v2/dataframes .. rest_parameters:: dataframes/dataframes_parameters.yml - dataframes: dataframes_body Request Example --------------- In the body: .. literalinclude:: ./api_samples/dataframes/dataframes_post.json :language: javascript Status codes ------------ .. rest_status_code:: success http_status.yml - 204 .. rest_status_code:: error http_status.yml - 400 - 401 - 403 - 405 Response -------- No content is to be returned. Get dataframes from the storage backend ============================================ Get dataframes from the storage backend. .. rest_method:: GET /v2/dataframes .. rest_parameters:: dataframes/dataframes_parameters.yml - limit: limit - offset: offset - begin: begin - end: end - filters: filters Status codes ------------ .. rest_status_code:: success http_status.yml - 200 .. rest_status_code:: error http_status.yml - 400 - 401 - 403 - 405 Response -------- .. rest_parameters:: dataframes/dataframes_parameters.yml - total: total_resp - dataframes: dataframes_resp Response Example ---------------- .. literalinclude:: ./api_samples/dataframes/dataframes_get.json :language: javascript ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866579.0 cloudkitty-21.0.0/api-ref/source/v2/dataframes/dataframes_parameters.yml0000664000175000017500000000176100000000000026353 0ustar00zuulzuul00000000000000begin: in: query description: | Begin of the period for which the dataframes are required. type: iso8601 timestamp required: false end: in: query description: | End of the period for which the dataframes are required. type: iso8601 timestamp required: false filters: in: query description: | Optional filters. type: dict required: false limit: in: query description: | For pagination. The maximum number of results to return. type: int required: false offset: in: query description: | For pagination. The index of the first element that should be returned. type: int required: false dataframes_body: in: body description: | List of dataframes to add. type: list required: true dataframes_resp: in: body description: | List of dataframes matching the query parameters. type: list required: true total_resp: in: body description: | Total of datapoints matching the query parameters. 
type: int required: true ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866579.0 cloudkitty-21.0.0/api-ref/source/v2/dataframes/http_status.yml0000664000175000017500000000073500000000000024403 0ustar00zuulzuul00000000000000200: default: Request was successful. 201: default: Resource was successfully created. 202: default: Request has been accepted for asynchronous processing. 204: default: Request was successful even though no content is to be returned. 400: default: Invalid request. 401: default: Unauthenticated user. 403: default: Forbidden operation for the authentified user. 404: default: Not found. 405: default: The method is not allowed for the requested URL. ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866579.0 cloudkitty-21.0.0/api-ref/source/v2/http_status.yml0000664000175000017500000000073500000000000022274 0ustar00zuulzuul00000000000000200: default: Request was successful. 201: default: Resource was successfully created. 202: default: Request has been accepted for asynchronous processing. 204: default: Request was successful even though no content is to be returned. 400: default: Invalid request. 401: default: Unauthenticated user. 403: default: Forbidden operation for the authentified user. 404: default: Not found. 405: default: The method is not allowed for the requested URL. ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866579.0 cloudkitty-21.0.0/api-ref/source/v2/index.rst0000664000175000017500000000027600000000000021030 0ustar00zuulzuul00000000000000.. rest_expand_all:: .. include:: dataframes/dataframes.inc .. include:: scope/scope.inc .. include:: summary/summary.inc .. include:: task/reprocessing.inc .. include:: rating/modules.inc ././@PaxHeader0000000000000000000000000000003300000000000011451 xustar000000000000000027 mtime=1727866639.223486 cloudkitty-21.0.0/api-ref/source/v2/rating/0000775000175000017500000000000000000000000020446 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866579.0 cloudkitty-21.0.0/api-ref/source/v2/rating/http_status.yml0000664000175000017500000000073500000000000023560 0ustar00zuulzuul00000000000000200: default: Request was successful. 201: default: Resource was successfully created. 202: default: Request has been accepted for asynchronous processing. 204: default: Request was successful even though no content is to be returned. 400: default: Invalid request. 401: default: Unauthenticated user. 403: default: Forbidden operation for the authentified user. 404: default: Not found. 405: default: The method is not allowed for the requested URL. ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866579.0 cloudkitty-21.0.0/api-ref/source/v2/rating/modules.inc0000664000175000017500000000350500000000000022614 0ustar00zuulzuul00000000000000======================= Rating modules endpoint ======================= Get the list of modules ======================= Returns the list of all rating modules loaded. This method does not require any parameter. .. rest_method:: GET /v2/rating/modules Status codes ------------ .. rest_status_code:: success http_status.yml - 200 .. rest_status_code:: error http_status.yml - 400 - 401 - 403 Response -------- .. rest_parameters:: rating/modules_parameters.yml - modules: modules_list Response Example ---------------- .. 
literalinclude:: ./api_samples/rating/modules_list_get.json :language: javascript Get one module ============== Returns the details of one specific module. This method does not require any parameter. .. rest_method:: GET /v2/rating/modules/ Status codes ------------ .. rest_status_code:: success http_status.yml - 200 .. rest_status_code:: error http_status.yml - 400 - 401 - 403 - 404 Response -------- .. rest_parameters:: rating/modules_parameters.yml - module_id: module_id - description: description - enabled: enabled - hot_config: hot_config - priority: priority Response Example ---------------- .. literalinclude:: ./api_samples/rating/module_get.json :language: javascript Update one module ================= .. rest_method:: PUT /v2/rating/modules/(module_id) .. rest_parameters:: rating/modules_parameters.yml - enabled: enabled_opt - priority: priority_opt Status codes ------------ .. rest_status_code:: success http_status.yml - 204 .. rest_status_code:: error http_status.yml - 400 - 401 - 403 - 404 Response -------- .. rest_parameters:: rating/modules_parameters.yml - module_id: module_id - description: description - enabled: enabled - hot_config: hot_config - priority: priority ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866579.0 cloudkitty-21.0.0/api-ref/source/v2/rating/modules_parameters.yml0000664000175000017500000000142300000000000025064 0ustar00zuulzuul00000000000000description: in: body description: | A quick description of the module type: string required: true enabled: &enabled in: body description: | Boolean representing if the module is enabled type: bool required: true enabled_opt: <<: *enabled required: false hot_config: in: body description: | Boolean representing if the module supports hot-config type: bool required: true module_id: in: body description: | The id of the module type: string required: true modules_list: in: body description: | List of modules. type: list required: true priority: &priority in: body description: | Priority of the module, relative to other modules type: int required: true priority_opt: <<: *priority required: false ././@PaxHeader0000000000000000000000000000003300000000000011451 xustar000000000000000027 mtime=1727866639.223486 cloudkitty-21.0.0/api-ref/source/v2/scope/0000775000175000017500000000000000000000000020273 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866579.0 cloudkitty-21.0.0/api-ref/source/v2/scope/http_status.yml0000664000175000017500000000073500000000000023405 0ustar00zuulzuul00000000000000200: default: Request was successful. 201: default: Resource was successfully created. 202: default: Request has been accepted for asynchronous processing. 204: default: Request was successful even though no content is to be returned. 400: default: Invalid request. 401: default: Unauthenticated user. 403: default: Forbidden operation for the authentified user. 404: default: Not found. 405: default: The method is not allowed for the requested URL. ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866579.0 cloudkitty-21.0.0/api-ref/source/v2/scope/scope.inc0000664000175000017500000000632500000000000022105 0ustar00zuulzuul00000000000000==================== Scope state endpoint ==================== Get the status of several scopes ================================ Returns the status of several scopes. .. rest_method:: GET /v2/scope .. 
rest_parameters:: scope/scope_parameters.yml - collector: collector - fetcher: fetcher - limit: limit - offset: offset - scope_id: scope_id - scope_key: scope_key Status codes ------------ .. rest_status_code:: success http_status.yml - 200 .. rest_status_code:: error http_status.yml - 400 - 403 - 404 - 405 Response -------- .. rest_parameters:: scope/scope_parameters.yml - collector: collector_resp - fetcher: fetcher_resp - state: state - last_processed_timestamp: last_processed_timestamp - scope_id: scope_id_resp - scope_key: scope_key_resp - active: active_key_resp Response Example ---------------- .. literalinclude:: ./api_samples/scope/scope_get.json :language: javascript Reset the status of several scopes ================================== Reset the status of several scopes. .. rest_method:: PUT /v2/scope .. rest_parameters:: scope/scope_parameters.yml - state: state - last_processed_timestamp: last_processed_timestamp - collector: collector_body - fetcher: fetcher_body - scope_id: scope_id_body - scope_key: scope_key_body - all_scopes: all_scopes Status codes ------------ .. rest_status_code:: success http_status.yml - 202 .. rest_status_code:: error http_status.yml - 400 - 403 - 404 - 405 Patch a scope ================================ Patches/updates a scope. .. rest_method:: PATCH /v2/scope .. rest_parameters:: scope/scope_parameters.yml - collector: collector - fetcher: fetcher - limit: limit - offset: offset - scope_id: scope_id - scope_key: scope_key - active: active_body Status codes ------------ .. rest_status_code:: success http_status.yml - 200 .. rest_status_code:: error http_status.yml - 400 - 403 - 404 - 405 Response -------- .. rest_parameters:: scope/scope_parameters.yml - collector: collector_resp - fetcher: fetcher_resp - state: state - scope_id: scope_id_resp - scope_key: scope_key_resp - active: active_key_resp Response Example ---------------- .. literalinclude:: ./api_samples/scope/scope_get.json :language: javascript Create a scope ================================ Create a scope. .. rest_method:: POST /v2/scope .. rest_parameters:: scope/scope_parameters.yml - collector: collector - fetcher: fetcher - scope_id: scope_id - scope_key: scope_key - active: active_body Status codes ------------ .. rest_status_code:: success http_status.yml - 200 .. rest_status_code:: error http_status.yml - 400 - 403 - 404 - 405 Response -------- .. rest_parameters:: scope/scope_parameters.yml - scope_id: scope_id_resp - scope_key: scope_key_resp - fetcher: fetcher_resp - collector: collector_resp - state: state - last_processed_timestamp: last_processed_timestamp - active: active_key_resp - scope_activation_toggle_date: scope_activation_toggle_date Response Example ---------------- .. literalinclude:: ./api_samples/scope/scope_get.json :language: javascript ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866579.0 cloudkitty-21.0.0/api-ref/source/v2/scope/scope_parameters.yml0000664000175000017500000000447400000000000024363 0ustar00zuulzuul00000000000000collector: &collector in: query description: | Filter on collector. type: string required: false fetcher: &fetcher in: query description: | Filter on fetcher. type: string required: false limit: in: query description: | For pagination. The maximum number of results to return. type: int required: false offset: &offset in: query description: | For pagination. The index of the first element that should be returned. 
type: int required: false scope_id: &scope_id in: query description: | Filter on scope. type: string required: false scope_key: &scope_key in: query description: | Filter on scope_key. type: string required: false active_anchor_query: &active_query in: body description: | Defines if a scope should be processed or not; `True` means that CloudKitty must process the scope. type: bool required: true active_body: <<: *active_query required: false active_key_resp: <<: *active_query all_scopes: &all_scopes in: body description: | Confirmation whether all scopes must be reset type: bool collector_body: <<: *collector in: body collector_resp: <<: *collector required: true description: Collector for the given scope in: body fetcher_body: <<: *fetcher in: body fetcher_resp: <<: *fetcher required: true description: Fetcher for the given scope in: body last_processed_timestamp: in: body description: | It represents the last processed timestamp for the storage state element. type: iso8601 timestamp required: true scope_activation_toggle_date: in: body description: | It represents the last time the scope was activated/deactivated via the PATCH API. type: iso8601 timestamp required: true scope_id_body: <<: *scope_id in: body scope_id_resp: <<: *scope_id required: true description: Scope in: body scope_key_body: <<: *scope_key in: body scope_key_resp: <<: *scope_key required: true description: Scope key for the given scope in: body state: in: body description: | State of the scope. This variable represents the last processed timestamp for the storage state element. It is DEPRECATED, and it will be removed in upcoming releases. The alternative is `last_processed_timestamp`. type: iso8601 timestamp required: true ././@PaxHeader0000000000000000000000000000003300000000000011451 xustar000000000000000027 mtime=1727866639.223486 cloudkitty-21.0.0/api-ref/source/v2/summary/0000775000175000017500000000000000000000000020657 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866579.0 cloudkitty-21.0.0/api-ref/source/v2/summary/summary.inc0000664000175000017500000000542200000000000023052 0ustar00zuulzuul00000000000000================ Summary endpoint ================ Get a rating summary ==================== Get a rating summary for one or several tenants. .. rest_method:: GET /v2/summary .. rest_parameters:: summary/summary_parameters.yml - limit: limit - offset: offset - begin: begin - end: end - groupby: groupby - filters: filters - custom_fields: custom_fields - response_format: response_format Status codes ------------ .. rest_status_code:: success http_status.yml - 200 .. rest_status_code:: error http_status.yml - 400 - 403 - 405 Response -------- The response has the following default format (response_format='table'): .. code-block:: javascript { "columns": [ "begin", "end", "qty", "rate", "group1", "group2", ], "results": [ [ "2019-06-01T00:00:00Z", "2019-07-01T00:00:00Z", 2590.421676635742, 1295.210838317871, "group1", "group2", ] ], "total": 4 } ``total`` is the total amount of found elements. ``columns`` contains the name of the columns for each element of ``results``. The columns are the four mandatory ones (``begin``, ``end``, ``qty``, ``rate``) along with each attribute the result is grouped by. ``format`` is the response format. It can be "table" or "object". The default response structure is "table", which is presented above. The object structure uses the following pattern. .. 
code-block:: javascript { "results": [ {"begin": "2019-06-01T00:00:00Z", "end": "2019-07-01T00:00:00Z", "qty": 2590.421676635742, "rate": 1295.210838317871, "group1": "group1", "group2": "group2", }, ], "total": 4 } .. note:: It is also possible to group data by time, in order to obtain timeseries. In order to do this, group by ``time``. No extra column will be added, but you'll get one entry per collect period in the queried timeframe. See examples below. .. rest_parameters:: summary/summary_parameters.yml - begin: begin_resp - end: end_resp - qty: qty_resp - rate: rate_resp Response Example ---------------- Grouping by time and project_id: .. code-block:: shell curl "http://cloudkitty-api:8889/v2/summary?groupby=time&groupby=project_id&limit=3" .. literalinclude:: ./api_samples/summary/summary_get_groupby_time.json :language: javascript .. code-block:: shell curl "http://cloudkitty-api:8889/v2/summary?filters=project_id%3Afe9c35372db6420089883805b37a34af&groupby=type&groupby=project_id" .. literalinclude:: ./api_samples/summary/summary_get.json :language: javascript ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866579.0 cloudkitty-21.0.0/api-ref/source/v2/summary/summary_parameters.yml0000664000175000017500000001174700000000000025334 0ustar00zuulzuul00000000000000begin: &begin in: query description: | Begin of the period for which the summary is required. type: iso8601 timestamp required: false custom_fields: in: query description: | Optional attributes to customize the summary GET API response. When using this parameter, users can create custom reports. The default behavior is to list the sum of the quantity and the sum of the price, which is projected as ``rate`` field. The default value for the ``custom_fields`` parameter is ``SUM(qty) AS qty, SUM(price) AS rate``. One can customize this field as they wish with InfluxDB queries. The following statements ``"select", "from", "drop", "delete", "create", "alter", "insert", "update"`` are not allowed though. For instance, if one wants to retrieve the quantity field as the last value of the quantity, and not the sum (this is quite interesting when generating reports for storage values), the user can send the parameter as ``last(qty) AS qty, SUM(price) AS rate``. To discover all possible fields that one can work with, the user can also use ``*`` as a parameter. ``Currently this feature only works for Influx storage backend.`` It (the feature) depends on the storage backend driver to support it. If the user tries to set this configuration while using other storage backends, it will be ignored. type: list of strings required: false end: &end in: query description: | End of the period for which the summary is required. type: iso8601 timestamp required: false filters: in: query description: | Optional filters. These filters accept multiple query parameters. To use this option with multiple parameters, repeat it as many time as desired in the query string. For instance, to restrict the result to only two projects, add to the query string ``filters=project_id:&filters=project_id:``. Bear in mind that this string must be URL escaped. Therefore, it becomes ``filters=project_id%3A&filters=project_id%3A``. type: dict required: false groupby: in: query description: | Optional attributes to group the summary by. The ``groupby`` elements are defined in the collector YML settings. Therefore, one can group the result using any of the ``groupby`` attributes defined in the collector settings of CloudKitty. 
Besides those attributes, by default, starting in the CloudKitty ``2024.1`` release, we will have the following new groupby options: (i) time: to group data hourly; (ii) time-d: to group data by day of the year; (iii) time-w: to group data by week of the year; (iv) time-m: to group data by month; and (v) time-y: to group data by year. If you have old data in CloudKitty and you wish to use these groupby options, you will need to reprocess the desired timeframe. The `groupby` options ``time-d``, ``time-w``, ``time-m``, ``time-y`` are the short versions of the following `groupby` options ``day_of_the_year``, ``week_of_the_year``, ``month``, and ``year`` respectively. type: list of strings required: false limit: in: query description: | For pagination. The maximum number of results to return. type: int required: false offset: &offset in: query description: | For pagination. The index of the first element that should be returned. type: int required: false response_format: in: query description: | Optional attribute to define the object structure used in the response. Both responses will be JSON objects. Possible values are ``table`` or ``object``. The default value is the ``table`` object structure, where one has the attributes `total`, which indicates the total number of entries in the response; `results`, which is a list of lists, where the nested list contains the values of each entry; and `columns`, which is the attribute that describes all of the available columns. Then, each index in this list (`columns`) corresponds to the metadata of the values in the `results` list. The structure for the `object` option uses a dictionary. The response still has the `total` attribute. However, in the `results` attribute, one will find a list of objects, instead of a list of lists of values that we see in the `table` option. This facilitates the processing of some use cases. type: string required: false begin_resp: <<: *begin required: true description: Begin of the period for the item. in: body end_resp: <<: *end required: true description: End of the period for the item. in: body qty: &qty in: body description: | Qty for the item. type: float required: true qty_resp: <<: *qty required: true description: Qty for the item in the specified period. in: body rate: &rate in: body description: | Rate for the item. type: float required: true rate_resp: <<: *rate required: true description: Rate for the item in the specified period. in: body ././@PaxHeader0000000000000000000000000000003300000000000011451 xustar000000000000000027 mtime=1727866639.223486 cloudkitty-21.0.0/api-ref/source/v2/task/0000775000175000017500000000000000000000000020124 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866579.0 cloudkitty-21.0.0/api-ref/source/v2/task/reprocessing.inc0000664000175000017500000001007100000000000023321 0ustar00zuulzuul00000000000000====================== Task schedule endpoint ====================== CloudKitty has a task endpoint `/v2/task/`, which allows operators to schedule administrative tasks, such as reprocessing. Currently, the only task available is the reprocessing one, which is available via the following endpoints: - POST `/v2/task/reprocesses` -- to create a reprocessing task. - GET `/v2/task/reprocesses/` -- to retrieve a reprocessing task. - GET `/v2/task/reprocesses` -- to retrieve all reprocessing tasks. Create a reprocessing task ========================== The endpoint used to schedule a reprocessing task. 
The scheduled tasks are loaded to execution once every processing cycle, as defined in the CloudKitty `period` configuration. .. rest_method:: POST `/v2/task/reprocesses` .. rest_parameters:: task/reprocessing_parameters.yml - scope_ids: scope_ids - start_reprocess_time: start_reprocess_time - end_reprocess_time: end_reprocess_time - reason: reason Status codes ------------ .. rest_status_code:: success http_status.yml - 200 .. rest_status_code:: error http_status.yml - 400 - 403 - 405 Response -------- We will return an empty object as the response in case of success: .. code-block:: javascript {} Example ------- .. code-block:: shell curl -s -X POST "https:///v2/task/reprocesses" -H "Accept: application/json" -H "User-Agent: python-keystoneclient" -H "X-Auth-Token: ${ACCESS_TOKEN_KEYSTONE}" -H "Content-Type: application/json" -d '{"reason": "Reprocessing test", "scope_ids": "", "start_reprocess_time": "2021-06-01 00:00:00+00:00", "end_reprocess_time": "2021-06-01 23:00:00+00:00"}' The scope IDs can be retrieved via "/v2/scope" API, which is the API that one can use to list all scopes, and their status. Retrieve a reprocessing task ============================ The endpoint used to retrieve a reprocessing task. By using this endpoint, one can for instance check the progress of the reprocessing tasks. .. rest_method:: GET `/v2/task/reprocesses/` .. rest_parameters:: task/reprocessing_parameters.yml - path_scope_id: path_scope_id Status codes ------------ .. rest_status_code:: success http_status.yml - 200 .. rest_status_code:: error http_status.yml - 400 - 403 - 405 Response -------- We will return the scope data in case of a valid scope ID: .. code-block:: javascript {"scope_id": "scope ID goes here", "reason": "The reason for this reprocessing for this scope", "start_reprocess_time": "2021-06-01 00:00:00+00:00", "end_reprocess_time": "2021-07-01 00:00:00+00:00", "current_reprocess_time": "2021-06-06 00:00:00+00:00"} Example ------- .. code-block:: shell curl -s -X GET "https:///v2/task/reprocesses/" -H "Accept: application/json" -H "User-Agent: python-keystoneclient" -H "X-Auth-Token: ${ACCESS_TOKEN_KEYSTONE}" Retrieve all reprocessing tasks =============================== The endpoint used to retrieve all reprocessing tasks. By using this endpoint, one can retrieve all reprocessing tasks scheduled for a scope. .. rest_method:: GET `/v2/task/reprocesses` .. rest_parameters:: task/reprocessing_parameters.yml - scope_ids: scope_ids_query - order: scope_ids_order_query Status codes ------------ .. rest_status_code:: success http_status.yml - 200 .. rest_status_code:: error http_status.yml - 400 - 403 - 405 Response -------- We will return the scope data in case of a valid scope ID: .. code-block:: javascript [{"scope_id": "scope ID goes here", "reason": "The reason for this reprocessing for this scope", "start_reprocess_time": "2021-06-01 00:00:00+00:00", "end_reprocess_time": "2021-07-01 00:00:00+00:00", "current_reprocess_time": "2021-06-06 00:00:00+00:00"}] Example ------- .. code-block:: shell curl -s -X GET "https:///v2/task/reprocesses" -H "Accept: application/json" -H "User-Agent: python-keystoneclient" -H "X-Auth-Token: ${ACCESS_TOKEN_KEYSTONE}"././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866579.0 cloudkitty-21.0.0/api-ref/source/v2/task/reprocessing_parameters.yml0000664000175000017500000000303000000000000025571 0ustar00zuulzuul00000000000000path_scope_id: in: path description: | The scope ID to retrieve. 
type: string required: true limit: in: query description: | For pagination. The maximum number of results to return. type: int required: false offset: in: query description: | For pagination. The index of the first element that should be returned. type: int required: false scope_ids_order_query: &scope_ids_order_query in: query description: | The order in which reprocessing tasks are returned. By default, we order reprocessing tasks ``desc`` by their insertion order in the database. The possible values are ``desc`` or ``asc``. required: false type: string scope_ids_query: &scope_ids_query in: query description: | The scope IDs one wants to retrieve the reprocessing tasks of. If not informed, all reprocessing tasks, for all scopes are retrieved. required: false type: string end_reprocess_time: in: body description: | The end date for the reprocessing task. type: iso8601 timestamp required: true reason: in: body description: | The reason for the reprocessing to take place. type: string required: true scope_ids: <<: *scope_ids_query in: body description: | The scope IDs to reprocess. Must be comma-separated to schedule more than one. One can use the keyword `ALL` to schedule all scopes to be reprocessed. required: true start_reprocess_time: in: body description: | The start date for the reprocessing task. type: iso8601 timestamp required: true ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1727866639.2274861 cloudkitty-21.0.0/cloudkitty/0000775000175000017500000000000000000000000016203 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866579.0 cloudkitty-21.0.0/cloudkitty/__init__.py0000664000175000017500000000123200000000000020312 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import pbr.version __version__ = pbr.version.VersionInfo( 'cloudkitty').version_string() ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1727866639.2274861 cloudkitty-21.0.0/cloudkitty/api/0000775000175000017500000000000000000000000016754 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866579.0 cloudkitty-21.0.0/cloudkitty/api/__init__.py0000664000175000017500000000000000000000000021053 0ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866579.0 cloudkitty-21.0.0/cloudkitty/api/app.py0000664000175000017500000000560600000000000020115 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright 2014 Objectif Libre # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # import os import flask import flask_restful from oslo_config import cfg from oslo_log import log from paste import deploy from werkzeug.middleware import dispatcher from cloudkitty.api import root as api_root from cloudkitty.api.v1 import get_api_app as get_v1_app from cloudkitty.api.v2 import get_api_app as get_v2_app from cloudkitty import service LOG = log.getLogger(__name__) auth_opts = [ cfg.StrOpt('api_paste_config', default="api_paste.ini", help="Configuration file for WSGI definition of API."), cfg.StrOpt('auth_strategy', choices=['noauth', 'keystone'], default='keystone', help=("The strategy to use for auth. Supports noauth and " "keystone")), ] api_opts = [ cfg.PortOpt('port', default=8889, help='The port for the cloudkitty API server.'), ] CONF = cfg.CONF CONF.import_opt('version', 'cloudkitty.storage', 'storage') CONF.register_opts(auth_opts) CONF.register_opts(api_opts, group='api') def setup_app(): root_app = flask.Flask('cloudkitty') root_api = flask_restful.Api(root_app) root_api.add_resource(api_root.CloudkittyAPIRoot, '/') dispatch_dict = { '/v1': get_v1_app(), '/v2': get_v2_app(), } # Disabling v2 api in case v1 storage is used if CONF.storage.version < 2: LOG.warning('v1 storage is used, disabling v2 API') dispatch_dict.pop('/v2') app = dispatcher.DispatcherMiddleware(root_app, dispatch_dict) return app def load_app(): cfg_file = None cfg_path = cfg.CONF.api_paste_config if not os.path.isabs(cfg_path): cfg_file = CONF.find_file(cfg_path) elif os.path.exists(cfg_path): cfg_file = cfg_path if not cfg_file: raise cfg.ConfigFilesNotFoundError([cfg.CONF.api_paste_config]) LOG.info("Full WSGI config used: %s", cfg_file) appname = "cloudkitty+{}".format(cfg.CONF.auth_strategy) LOG.info("Cloudkitty API with '%s' auth type will be loaded.", cfg.CONF.auth_strategy) return deploy.loadapp("config:" + cfg_file, name=appname) def build_wsgi_app(argv=None): service.prepare_service() return load_app() def app_factory(global_config, **local_conf): return setup_app() ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866579.0 cloudkitty-21.0.0/cloudkitty/api/app.wsgi0000664000175000017500000000153300000000000020431 0ustar00zuulzuul00000000000000# -*- mode: python -*- # # Copyright 2013 New Dream Network, LLC (DreamHost) # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Use this file for deploying the API under mod_wsgi. See http://pecan.readthedocs.org/en/latest/deployment.html for details. 
""" from cloudkitty.api import app application = app.build_wsgi_app(argv=[]) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866579.0 cloudkitty-21.0.0/cloudkitty/api/middleware.py0000664000175000017500000000322500000000000021445 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright 2014 Objectif Libre # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # from keystonemiddleware import auth_token class AuthTokenMiddleware(auth_token.AuthProtocol): """A subclass of keystone auth_token middleware. It avoids authentication on public routes. """ def __init__(self, app, conf, public_api_routes=[]): self._public_routes = public_api_routes super(AuthTokenMiddleware, self).__init__(app, conf) def __call__(self, env, start_response): # Strip the / from the URL if we're not dealing with '/' path = env.get('PATH_INFO').rstrip('/') or '/' if path in self._public_routes: return self._app(env, start_response) return super(AuthTokenMiddleware, self).__call__(env, start_response) @classmethod def factory(cls, global_config, **local_conf): public_routes = local_conf.get('acl_public_routes', '') public_api_routes = [path.strip() for path in public_routes.split(',')] def _factory(app): return cls(app, global_config, public_api_routes=public_api_routes) return _factory ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866579.0 cloudkitty-21.0.0/cloudkitty/api/root.py0000664000175000017500000000437400000000000020321 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright 2014 Objectif Libre # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
# from flask import request from oslo_config import cfg import voluptuous from cloudkitty.api.v2 import base from cloudkitty.api.v2 import utils as api_utils CONF = cfg.CONF CONF.import_opt('version', 'cloudkitty.storage', 'storage') API_VERSION_SCHEMA = voluptuous.Schema({ voluptuous.Required('id'): str, voluptuous.Required('links'): [ voluptuous.Schema({ voluptuous.Required('href'): str, voluptuous.Required('rel', default='self'): 'self', }), ], voluptuous.Required('status'): voluptuous.Any( 'CURRENT', 'SUPPORTED', 'EXPERIMENTAL', 'DEPRECATED', ), }) def get_api_versions(): """Returns a list of all existing API versions.""" apis = [ { 'id': 'v1', 'links': [{ 'href': '{scheme}://{host}/v1'.format( scheme=request.scheme, host=request.host, ), }], 'status': 'CURRENT', }, { 'id': 'v2', 'links': [{ 'href': '{scheme}://{host}/v2'.format( scheme=request.scheme, host=request.host, ), }], 'status': 'EXPERIMENTAL', }, ] # v2 api is disabled when using v1 storage if CONF.storage.version < 2: apis = apis[:1] return apis class CloudkittyAPIRoot(base.BaseResource): @api_utils.add_output_schema(voluptuous.Schema({ 'versions': [API_VERSION_SCHEMA], })) def get(self): return { 'versions': get_api_versions(), } ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1727866639.2274861 cloudkitty-21.0.0/cloudkitty/api/v1/0000775000175000017500000000000000000000000017302 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866579.0 cloudkitty-21.0.0/cloudkitty/api/v1/__init__.py0000664000175000017500000000323600000000000021417 0ustar00zuulzuul00000000000000# Copyright 2018 Objectif Libre # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # from oslo_config import cfg import pecan from cloudkitty.api.v1 import config as api_config from cloudkitty.api.v1 import hooks from cloudkitty import storage api_opts = [ cfg.BoolOpt('pecan_debug', default=False, help='Toggle Pecan Debug Middleware.'), ] CONF = cfg.CONF CONF.register_opts(api_opts, group='api') def get_pecan_config(): # Set up the pecan configuration filename = api_config.__file__.replace('.pyc', '.py') return pecan.configuration.conf_from_file(filename) def get_api_app(): app_conf = get_pecan_config() storage_backend = storage.get_storage() app_hooks = [ hooks.RPCHook(), hooks.StorageHook(storage_backend), hooks.ContextHook(), ] return pecan.make_app( app_conf.app.root, static_root=app_conf.app.static_root, template_path=app_conf.app.template_path, debug=CONF.api.pecan_debug, force_canonical=getattr(app_conf.app, 'force_canonical', True), hooks=app_hooks, guess_content_type_from_ext=False, ) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/api/v1/config.py0000664000175000017500000000163000000000000021121 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
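# For illustration, the version discovery document served at the API root
# (built by get_api_versions() above) looks roughly like this when both the
# v1 and v2 APIs are enabled; scheme and host are taken from the incoming
# request, so the values below are examples only.
EXAMPLE_VERSIONS_RESPONSE = {
    'versions': [
        {'id': 'v1',
         'links': [{'href': 'http://localhost:8889/v1', 'rel': 'self'}],
         'status': 'CURRENT'},
        {'id': 'v2',
         'links': [{'href': 'http://localhost:8889/v2', 'rel': 'self'}],
         'status': 'EXPERIMENTAL'},
    ],
}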
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from cloudkitty import config # noqa # Pecan Application Configurations app = { 'root': 'cloudkitty.api.v1.controllers.V1Controller', 'modules': ['cloudkitty.api'], 'static_root': '%(confdir)s/public', 'template_path': '%(confdir)s/templates', 'debug': False, 'enable_acl': True, 'acl_public_routes': ['/', '/v1'], } ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1727866639.2274861 cloudkitty-21.0.0/cloudkitty/api/v1/controllers/0000775000175000017500000000000000000000000021650 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/api/v1/controllers/__init__.py0000664000175000017500000000245600000000000023770 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright 2014 Objectif Libre # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # from pecan import rest from cloudkitty.api.v1.controllers import collector as collector_api from cloudkitty.api.v1.controllers import info as info_api from cloudkitty.api.v1.controllers import rating as rating_api from cloudkitty.api.v1.controllers import report as report_api from cloudkitty.api.v1.controllers import storage as storage_api class V1Controller(rest.RestController): """API version 1 controller. """ billing = rating_api.RatingController() collector = collector_api.CollectorController() rating = rating_api.RatingController() report = report_api.ReportController() storage = storage_api.StorageController() info = info_api.InfoController() ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/api/v1/controllers/collector.py0000664000175000017500000001326600000000000024220 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright 2014 Objectif Libre # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
# from oslo_log import log as logging import pecan from pecan import rest from wsme import types as wtypes import wsmeext.pecan as wsme_pecan from cloudkitty.api.v1.datamodels import collector as collector_models from cloudkitty.common import policy from cloudkitty.db import api as db_api LOG = logging.getLogger(__name__) class MappingController(rest.RestController): """REST Controller managing service to collector mappings.""" def __init__(self): self._db = db_api.get_instance().get_service_to_collector_mapping() @wsme_pecan.wsexpose(collector_models.ServiceToCollectorMapping, wtypes.text) def get_one(self, service): """Return a service to collector mapping. :param service: Name of the service to filter on. """ LOG.warning("Collector mappings are deprecated and shouldn't be used.") policy.authorize(pecan.request.context, 'collector:get_mapping', {}) try: mapping = self._db.get_mapping(service) return collector_models.ServiceToCollectorMapping( **mapping.as_dict()) except db_api.NoSuchMapping as e: pecan.abort(404, e.args[0]) @wsme_pecan.wsexpose(collector_models.ServiceToCollectorMappingCollection, wtypes.text) def get_all(self, collector=None): """Return the list of every services mapped to a collector. :param collector: Filter on the collector name. :return: Service to collector mappings collection. """ LOG.warning("Collector mappings are deprecated and shouldn't be used.") policy.authorize(pecan.request.context, 'collector:list_mappings', {}) mappings = [collector_models.ServiceToCollectorMapping( **mapping.as_dict()) for mapping in self._db.list_mappings(collector)] return collector_models.ServiceToCollectorMappingCollection( mappings=mappings) @wsme_pecan.wsexpose(collector_models.ServiceToCollectorMapping, wtypes.text, wtypes.text) def post(self, collector, service): """Create a service to collector mapping. :param collector: Name of the collector to apply mapping on. :param service: Name of the service to apply mapping on. """ LOG.warning("Collector mappings are deprecated and shouldn't be used.") policy.authorize(pecan.request.context, 'collector:manage_mapping', {}) new_mapping = self._db.set_mapping(service, collector) return collector_models.ServiceToCollectorMapping( service=new_mapping.service, collector=new_mapping.collector) @wsme_pecan.wsexpose(None, wtypes.text, status_code=204) def delete(self, service): """Delete a service to collector mapping. :param service: Name of the service to filter on. """ LOG.warning("Collector mappings are deprecated and shouldn't be used.") policy.authorize(pecan.request.context, 'collector:manage_mapping', {}) try: self._db.delete_mapping(service) except db_api.NoSuchMapping as e: pecan.abort(404, e.args[0]) class CollectorStateController(rest.RestController): """REST Controller managing collector states.""" def __init__(self): self._db = db_api.get_instance().get_module_info() @wsme_pecan.wsexpose(collector_models.CollectorInfos, wtypes.text) def get(self, name): """Query the enable state of a collector. :param name: Name of the collector. :return: State of the collector. """ policy.authorize(pecan.request.context, 'collector:get_state', {}) enabled = self._db.get_state('collector_{}'.format(name)) collector = collector_models.CollectorInfos(name=name, enabled=enabled) return collector @wsme_pecan.wsexpose(collector_models.CollectorInfos, wtypes.text, body=collector_models.CollectorInfos) def put(self, name, infos): """Set the enable state of a collector. :param name: Name of the collector. 
:param infos: New state informations of the collector. :return: State of the collector. """ policy.authorize(pecan.request.context, 'collector:update_state', {}) enabled = self._db.set_state('collector_{}'.format(name), infos.enabled) collector = collector_models.CollectorInfos(name=name, enabled=enabled) return collector class CollectorController(rest.RestController): """REST Controller managing collector modules.""" mappings = MappingController() states = CollectorStateController() # FIXME(sheeprine): Stub function used to pass requests to subcontrollers @wsme_pecan.wsexpose(None) def get(self): "Unused function, hack to let pecan route requests to subcontrollers." return ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/api/v1/controllers/info.py0000664000175000017500000001104100000000000023152 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright 2016 Objectif Libre # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # from oslo_config import cfg from oslo_log import log as logging import pecan from pecan import rest import voluptuous from wsme import types as wtypes import wsmeext.pecan as wsme_pecan from cloudkitty.api.v1.datamodels import info as info_models from cloudkitty.api.v1 import types as ck_types from cloudkitty import collector from cloudkitty.common import policy from cloudkitty import utils as ck_utils LOG = logging.getLogger(__name__) CONF = cfg.CONF def get_all_metrics(): try: metrics_conf = collector.validate_conf( ck_utils.load_conf(CONF.collect.metrics_conf)) except (voluptuous.Invalid, voluptuous.MultipleInvalid): msg = 'Invalid endpoint: no metrics in current configuration.' pecan.abort(405, msg) policy.authorize(pecan.request.context, 'info:list_metrics_info', {}) metrics_info_list = [] for metric_name, metric in metrics_conf.items(): info = metric.copy() info['metric_id'] = info['alt_name'] metrics_info_list.append( info_models.CloudkittyMetricInfo(**info)) return info_models.CloudkittyMetricInfoCollection( metrics=metrics_info_list) def _find_metric(name, conf): for metric_name, metric in conf.items(): if metric['alt_name'] == name: return metric def get_one_metric(metric_name): try: metrics_conf = collector.validate_conf( ck_utils.load_conf(CONF.collect.metrics_conf)) except (voluptuous.Invalid, voluptuous.MultipleInvalid): msg = 'Invalid endpoint: no metrics in current configuration.' pecan.abort(405, msg) policy.authorize(pecan.request.context, 'info:get_metric_info', {}) metric = _find_metric(metric_name, metrics_conf) if not metric: pecan.abort(404, str(metric_name)) info = metric.copy() info['metric_id'] = info['alt_name'] return info_models.CloudkittyMetricInfo(**info) class MetricInfoController(rest.RestController): """REST Controller managing collected metrics information independently of their services. If no metrics are defined in conf, return 405 for each endpoint. 
""" @wsme_pecan.wsexpose(info_models.CloudkittyMetricInfoCollection) def get_all(self): """Get the metric list. :return: List of every metrics. """ return get_all_metrics() @wsme_pecan.wsexpose(info_models.CloudkittyMetricInfo, wtypes.text) def get_one(self, metric_name): """Return a metric. :param metric_name: name of the metric. """ return get_one_metric(metric_name) class ServiceInfoController(rest.RestController): """REST Controller managing collected services information.""" @wsme_pecan.wsexpose(info_models.CloudkittyMetricInfoCollection) def get_all(self): """Get the service list (deprecated). :return: List of every services. """ LOG.warning("Services based endpoints are deprecated. " "Please use metrics based enpoints instead.") return get_all_metrics() @wsme_pecan.wsexpose(info_models.CloudkittyMetricInfo, wtypes.text) def get_one(self, service_name): """Return a service (deprecated). :param service_name: name of the service. """ LOG.warning("Services based endpoints are deprecated. " "Please use metrics based enpoints instead.") return get_one_metric(service_name) class InfoController(rest.RestController): """REST Controller managing Cloudkitty general information.""" services = ServiceInfoController() metrics = MetricInfoController() _custom_actions = {'config': ['GET']} @wsme_pecan.wsexpose({ str: ck_types.MultiType(wtypes.text, int, float, dict, list) }) def config(self): """Return current configuration.""" policy.authorize(pecan.request.context, 'info:get_config', {}) return ck_utils.load_conf(CONF.collect.metrics_conf) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/api/v1/controllers/rating.py0000664000175000017500000001714400000000000023515 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright 2014 Objectif Libre # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
# from oslo_concurrency import lockutils from oslo_log import log import pecan from pecan import rest from stevedore import extension from wsme import types as wtypes import wsmeext.pecan as wsme_pecan from cloudkitty.api.v1.datamodels import rating as rating_models from cloudkitty.common import policy from cloudkitty import utils as ck_utils PROCESSORS_NAMESPACE = 'cloudkitty.rating.processors' LOG = log.getLogger(__name__) class RatingModulesMixin(object): def reload_extensions(self): lock = lockutils.lock('rating-modules') with lock: ck_utils.refresh_stevedore(PROCESSORS_NAMESPACE) # FIXME(sheeprine): Implement RPC messages to trigger reload on # processors self.extensions = extension.ExtensionManager( PROCESSORS_NAMESPACE, # FIXME(sheeprine): don't want to load it here as we just need # the controller invoke_on_load=True) if not self._first_call: self.notify_reload() else: self._first_call = False def notify_reload(self): client = pecan.request.rpc_client.prepare(namespace='rating', version='1.1') client.cast({}, 'reload_modules') def __init__(self): self._first_call = True self.extensions = [] self.reload_extensions() class ModulesController(rest.RestController, RatingModulesMixin): """REST Controller managing rating modules.""" def route(self, *args): route = args[0] if route.startswith('/v1/module_config'): policy.authorize(pecan.request.context, 'rating:module_config', {}) super(ModulesController, self).route(*args) @wsme_pecan.wsexpose(rating_models.CloudkittyModuleCollection) def get_all(self): """return the list of loaded modules. :return: name of every loaded modules. """ policy.authorize(pecan.request.context, 'rating:list_modules', {}) modules_list = [] lock = lockutils.lock('rating-modules') with lock: for module in self.extensions: infos = module.obj.module_info.copy() infos['module_id'] = infos.pop('name') modules_list.append(rating_models.CloudkittyModule(**infos)) return rating_models.CloudkittyModuleCollection( modules=modules_list) @wsme_pecan.wsexpose(rating_models.CloudkittyModule, wtypes.text) def get_one(self, module_id): """return a module :return: CloudKittyModule """ policy.authorize(pecan.request.context, 'rating:get_module', {}) try: lock = lockutils.lock('rating-modules') with lock: module = self.extensions[module_id] except KeyError: pecan.abort(404, 'Module not found.') infos = module.obj.module_info.copy() infos['module_id'] = infos.pop('name') return rating_models.CloudkittyModule(**infos) @wsme_pecan.wsexpose(rating_models.CloudkittyModule, wtypes.text, body=rating_models.CloudkittyModule, status_code=302) def put(self, module_id, module): """Change the state and priority of a module. 
:param module_id: name of the module to modify :param module: CloudKittyModule object describing the new desired state """ policy.authorize(pecan.request.context, 'rating:update_module', {}) try: lock = lockutils.lock('rating-modules') with lock: ext = self.extensions[module_id].obj except KeyError: pecan.abort(404, 'Module not found.') if module.enabled != wtypes.Unset and ext.enabled != module.enabled: ext.set_state(module.enabled) if module.priority != wtypes.Unset and ext.priority != module.priority: ext.set_priority(module.priority) pecan.response.location = pecan.request.path class UnconfigurableController(rest.RestController): """This controller raises an error when requested.""" @wsme_pecan.wsexpose(None) def _default(self): self.abort() def abort(self): pecan.abort(409, "Module is not configurable") class ModulesExposer(rest.RestController, RatingModulesMixin): """REST Controller exposing rating modules. This is the controller that exposes the modules own configuration settings. """ def __init__(self): super(ModulesExposer, self).__init__() self._loaded_modules = [] self.expose_modules() def expose_modules(self): """Load rating modules to expose API controllers.""" lock = lockutils.lock('rating-modules') with lock: for ext in self.extensions: # FIXME(sheeprine): we should notify two modules with same name name = ext.name if not ext.obj.config_controller: ext.obj.config_controller = UnconfigurableController # Update extension reference setattr(self, name, ext.obj.config_controller()) if name in self._loaded_modules: self._loaded_modules.remove(name) # Clear removed modules for module in self._loaded_modules: delattr(self, module) self._loaded_modules = self.extensions.names() class RatingController(rest.RestController): """The RatingController is exposed by the API. The RatingControler connects the ModulesExposer, ModulesController and a quote action to the API. """ _custom_actions = { 'quote': ['POST'], 'reload_modules': ['GET'], } modules = ModulesController() module_config = ModulesExposer() @wsme_pecan.wsexpose(float, body=rating_models.CloudkittyResourceCollection) def quote(self, res_data): """Get an instant quote based on multiple resource descriptions. :param res_data: List of resource descriptions. :return: Total price for these descriptions. """ policy.authorize(pecan.request.context, 'rating:quote', {}) client = pecan.request.rpc_client.prepare(namespace='rating') res_dict = {} for res in res_data.resources: if res.service not in res_dict: res_dict[res.service] = [] json_data = res.to_json() res_dict[res.service].extend(json_data[res.service]) LOG.debug("Calling quote method with data: [%s].", res_dict) res = client.call({}, 'quote', res_data={'usage': res_dict}) return res @wsme_pecan.wsexpose(None) def reload_modules(self): """Trigger a rating module list reload. """ policy.authorize(pecan.request.context, 'rating:module_config', {}) self.modules.reload_extensions() self.module_config.reload_extensions() self.module_config.expose_modules() ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/api/v1/controllers/report.py0000664000175000017500000001320300000000000023534 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright 2014 Objectif Libre # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
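# A client-side sketch of the instant quote endpoint exposed above
# (POST /v1/rating/quote). The body layout assumes the WSME JSON
# representation of CloudkittyResourceCollection (field names as in the
# datamodels, volume serialized as a string); URL, token and resource
# description are placeholders.
import requests

CLOUDKITTY_URL = "http://localhost:8889"   # assumed endpoint
TOKEN = "<keystone token>"                 # assumed valid Keystone token

quote_body = {
    "resources": [
        {"service": "compute",
         "desc": {"flavor": "m1.tiny", "vcpus": "1"},
         "volume": "1.0"},
    ]
}

response = requests.post(
    CLOUDKITTY_URL + "/v1/rating/quote",
    headers={"X-Auth-Token": TOKEN},
    json=quote_body,
)
# The endpoint returns the total price for the described resources as a
# single float value.
print(response.json())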
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # import datetime import decimal from oslo_config import cfg from oslo_log import log as logging import pecan from pecan import rest from wsme import types as wtypes import wsmeext.pecan as wsme_pecan from cloudkitty.api.v1.datamodels import report as report_models from cloudkitty.common import policy from cloudkitty import utils as ck_utils from cloudkitty.utils import tz as tzutils LOG = logging.getLogger(__name__) CONF = cfg.CONF CONF.import_opt('scope_key', 'cloudkitty.collector', 'collect') class InvalidFilter(Exception): """Exception raised when a storage filter is invalid""" class ReportController(rest.RestController): """REST Controller managing the reporting. """ _custom_actions = { 'total': ['GET'], 'tenants': ['GET'], 'summary': ['GET'] } @wsme_pecan.wsexpose([wtypes.text], datetime.datetime, datetime.datetime) def tenants(self, begin=None, end=None): """Return the list of rated tenants. """ policy.authorize(pecan.request.context, 'report:list_tenants', {}) if not begin: begin = ck_utils.get_month_start() if not end: end = ck_utils.get_next_month() storage = pecan.request.storage_backend tenants = storage.get_tenants(begin, end) return tenants @wsme_pecan.wsexpose(decimal.Decimal, datetime.datetime, datetime.datetime, wtypes.text, wtypes.text, bool) def total(self, begin=None, end=None, tenant_id=None, service=None, all_tenants=False): """Return the amount to pay for a given period. """ LOG.warning('/v1/report/total is deprecated, please use ' '/v1/report/summary instead.') if not begin: begin = ck_utils.get_month_start() if not end: end = ck_utils.get_next_month() if all_tenants: tenant_id = None else: tenant_context = pecan.request.context.project_id tenant_id = tenant_context if not tenant_id else tenant_id policy.authorize(pecan.request.context, 'report:get_total', {"project_id": tenant_id}) storage = pecan.request.storage_backend # FIXME(sheeprine): We should filter on user id. # Use keystone token information by default but make it overridable and # enforce it by policy engine scope_key = CONF.collect.scope_key groupby = [scope_key] filters = {scope_key: tenant_id} if tenant_id else None result = storage.total( groupby=groupby, begin=begin, end=end, metric_types=service, filters=filters) if result['total'] < 1: return decimal.Decimal('0') return sum(total['rate'] for total in result['results']) @wsme_pecan.wsexpose(report_models.SummaryCollectionModel, datetime.datetime, datetime.datetime, wtypes.text, wtypes.text, wtypes.text, bool) def summary(self, begin=None, end=None, tenant_id=None, service=None, groupby=None, all_tenants=False): """Return the summary to pay for a given period. 
""" if not begin: begin = ck_utils.get_month_start() if not end: end = ck_utils.get_next_month() if all_tenants: tenant_id = None else: tenant_context = pecan.request.context.project_id tenant_id = tenant_context if not tenant_id else tenant_id policy.authorize(pecan.request.context, 'report:get_summary', {"project_id": tenant_id}) storage = pecan.request.storage_backend scope_key = CONF.collect.scope_key storage_groupby = [] if groupby is not None and 'tenant_id' in groupby: storage_groupby.append(scope_key) if groupby is not None and 'res_type' in groupby: storage_groupby.append('type') filters = {scope_key: tenant_id} if tenant_id else None result = storage.total( groupby=storage_groupby, begin=begin, end=end, metric_types=service, filters=filters) summarymodels = [] for res in result['results']: kwargs = { 'res_type': res.get('type') or res.get('res_type'), 'tenant_id': res.get(scope_key) or res.get('tenant_id'), 'begin': tzutils.local_to_utc(res['begin'], naive=True), 'end': tzutils.local_to_utc(res['end'], naive=True), 'rate': res['rate'], } summarymodel = report_models.SummaryModel(**kwargs) summarymodels.append(summarymodel) return report_models.SummaryCollectionModel(summary=summarymodels) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/api/v1/controllers/storage.py0000664000175000017500000001022200000000000023663 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright 2014 Objectif Libre # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # import datetime from oslo_config import cfg import pecan from pecan import rest from wsme import types as wtypes import wsmeext.pecan as wsme_pecan from cloudkitty.api.v1.datamodels import storage as storage_models from cloudkitty.common import policy from cloudkitty import storage from cloudkitty.utils import tz as tzutils CONF = cfg.CONF CONF.import_opt('scope_key', 'cloudkitty.collector', 'collect') class DataFramesController(rest.RestController): """REST Controller to access stored data frames.""" @wsme_pecan.wsexpose(storage_models.DataFrameCollection, datetime.datetime, datetime.datetime, wtypes.text, wtypes.text) def get_all(self, begin=None, end=None, tenant_id=None, resource_type=None): """Return a list of rated resources for a time period and a tenant. :param begin: Start of the period :param end: End of the period :param tenant_id: UUID of the tenant to filter on. :param resource_type: Type of the resource to filter on. :return: Collection of DataFrame objects. 
""" project_id = tenant_id or pecan.request.context.project_id policy.authorize(pecan.request.context, 'storage:list_data_frames', { 'project_id': project_id, }) scope_key = CONF.collect.scope_key backend = pecan.request.storage_backend dataframes = [] if pecan.request.context.is_admin: filters = {scope_key: tenant_id} if tenant_id else None else: # Unscoped non-admin user if project_id is None: return {'dataframes': []} filters = {scope_key: project_id} try: resp = backend.retrieve( begin, end, filters=filters, metric_types=resource_type, paginate=False) except storage.NoTimeFrame: return storage_models.DataFrameCollection(dataframes=[]) for frame in resp['dataframes']: frame_tenant = None for type_, points in frame.itertypes(): resources = [] for point in points: resource = storage_models.RatedResource( service=type_, desc=point.desc, volume=point.qty, rating=point.price) if frame_tenant is None: # NOTE(jferrieu): Since DataFrame/DataPoint # implementation patch we cannot guarantee # anymore that a DataFrame does contain a scope_id # therefore the __UNDEF__ default value has been # retained to maintain backward compatibility # if it would occur being absent frame_tenant = point.desc.get(scope_key, '__UNDEF__') resources.append(resource) dataframe = storage_models.DataFrame( begin=tzutils.local_to_utc(frame.start, naive=True), end=tzutils.local_to_utc(frame.end, naive=True), tenant_id=frame_tenant, resources=resources) dataframes.append(dataframe) return storage_models.DataFrameCollection(dataframes=dataframes) class StorageController(rest.RestController): """REST Controller to access stored data.""" dataframes = DataFramesController() ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1727866639.2314863 cloudkitty-21.0.0/cloudkitty/api/v1/datamodels/0000775000175000017500000000000000000000000021417 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/api/v1/datamodels/__init__.py0000664000175000017500000000000000000000000023516 0ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/api/v1/datamodels/collector.py0000664000175000017500000000432500000000000023763 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright 2014 Objectif Libre # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # from wsme import types as wtypes class CollectorInfos(wtypes.Base): """Type describing a collector module. """ name = wtypes.wsattr(wtypes.text, mandatory=False) """Name of the collector.""" enabled = wtypes.wsattr(bool, mandatory=True) """State of the collector.""" def to_json(self): res_dict = {'name': self.name, 'enabled': self.enabled} return res_dict @classmethod def sample(cls): sample = cls(name='gnocchi', enabled=True) return sample class ServiceToCollectorMapping(wtypes.Base): """Type describing a service to collector mapping. 
""" service = wtypes.text """Name of the service.""" collector = wtypes.text """Name of the collector.""" def to_json(self): res_dict = {'service': self.service, 'collector': self.collector} return res_dict @classmethod def sample(cls): sample = cls(service='compute', collector='gnocchi') return sample class ServiceToCollectorMappingCollection(wtypes.Base): """Type describing a service to collector mapping collection. """ mappings = [ServiceToCollectorMapping] """List of service to collector mappings.""" def to_json(self): res_dict = {'mappings': self.mappings} return res_dict @classmethod def sample(cls): mapping = ServiceToCollectorMapping(service='compute', collector='gnocchi') sample = cls(mappings=[mapping]) return sample ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/api/v1/datamodels/info.py0000664000175000017500000000320100000000000022720 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright 2014 Objectif Libre # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # from oslo_config import cfg from wsme import types as wtypes CONF = cfg.CONF class CloudkittyMetricInfo(wtypes.Base): """Type describing a metric info in CloudKitty.""" metric_id = wtypes.text """Name of the metric.""" metadata = [wtypes.text] """List of metric metadata""" unit = wtypes.text """Metric unit""" def to_json(self): res_dict = {} res_dict[self.metric_id] = [{ 'metadata': self.metadata, 'unit': self.unit }] return res_dict @classmethod def sample(cls): metadata = ['resource_id', 'project_id', 'qty', 'unit'] sample = cls(metric_id='image.size', metadata=metadata, unit='MiB') return sample class CloudkittyMetricInfoCollection(wtypes.Base): """A list of CloudKittyMetricInfo.""" metrics = [CloudkittyMetricInfo] @classmethod def sample(cls): sample = CloudkittyMetricInfo.sample() return cls(metrics=[sample]) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/api/v1/datamodels/rating.py0000664000175000017500000000557000000000000023264 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright 2014 Objectif Libre # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # import decimal from oslo_config import cfg from oslo_log import log from wsme import types as wtypes from cloudkitty.api.v1 import types as cktypes LOG = log.getLogger(__name__) CONF = cfg.CONF class CloudkittyResource(wtypes.Base): """Type describing a resource in CloudKitty. 
""" service = wtypes.text """Name of the service.""" # FIXME(sheeprine): values should be dynamic # Testing with ironic dynamic type desc = {wtypes.text: cktypes.MultiType(wtypes.text, int, float, dict, decimal.Decimal)} """Description of the resources parameters.""" volume = decimal.Decimal """Volume of resources.""" def to_json(self): res_dict = {} res_dict[self.service] = [{'desc': self.desc, 'vol': {'qty': str(self.volume), 'unit': 'undef'} }] return res_dict @classmethod def sample(cls): sample = cls(service='compute', desc={ 'image_id': 'a41fba37-2429-4f15-aa00-b5bc4bf557bf' }, volume=decimal.Decimal(1)) return sample class CloudkittyResourceCollection(wtypes.Base): """A list of CloudKittyResources.""" resources = [CloudkittyResource] class CloudkittyModule(wtypes.Base): """A rating extension summary """ module_id = wtypes.wsattr(wtypes.text, mandatory=True) """Name of the extension.""" description = wtypes.wsattr(wtypes.text, mandatory=False) """Short description of the extension.""" enabled = wtypes.wsattr(bool) """Extension status.""" hot_config = wtypes.wsattr(bool, default=False, name='hot-config') """On-the-fly configuration support.""" priority = wtypes.wsattr(int) """Priority of the extension.""" @classmethod def sample(cls): sample = cls(name='example', description='Sample extension.', enabled=True, hot_config=False, priority=2) return sample class CloudkittyModuleCollection(wtypes.Base): """A list of rating extensions.""" modules = [CloudkittyModule] ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/api/v1/datamodels/report.py0000664000175000017500000000432600000000000023311 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright 2014 Objectif Libre # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
# import datetime from wsme import types as wtypes class SummaryModel(wtypes.Base): """Type describing a report summary info.""" begin = datetime.datetime """Begin date for the sample.""" end = datetime.datetime """End date for the sample.""" tenant_id = wtypes.text """Tenant owner of the sample.""" res_type = wtypes.text """Resource type of the sample.""" rate = wtypes.text """summary rate of the sample""" def __init__(self, begin=None, end=None, tenant_id=None, res_type=None, rate=None): self.begin = begin self.end = end self.tenant_id = tenant_id if tenant_id else "ALL" self.res_type = res_type if res_type else "ALL" # TODO(Aaron): Need optimize, control precision with decimal self.rate = str(float('%0.5f' % rate)) if rate else "0" def to_json(self): return {'begin': self.begin, 'end': self.end, 'tenant_id': self.tenant_id, 'res_type': self.res_type, 'rate': self.rate} @classmethod def sample(cls): sample = cls(tenant_id='69d12143688f413cbf5c3cfe03ed0a12', begin=datetime.datetime(2015, 4, 22, 7), end=datetime.datetime(2015, 4, 22, 8), res_type='compute', rate="1") return sample class SummaryCollectionModel(wtypes.Base): """A list of report summary.""" summary = [SummaryModel] @classmethod def sample(cls): sample = SummaryModel.sample() return cls(summary=[sample]) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/api/v1/datamodels/storage.py0000664000175000017500000000454700000000000023447 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright 2014 Objectif Libre # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
# import datetime import decimal from wsme import types as wtypes from cloudkitty.api.v1.datamodels import rating as rating_resources class RatedResource(rating_resources.CloudkittyResource): """Represents a rated CloudKitty resource.""" rating = decimal.Decimal def to_json(self): res_dict = super(RatedResource, self).to_json() res_dict['rating'] = self.rating return res_dict @classmethod def sample(cls): sample = cls(volume=decimal.Decimal('1.0'), service='compute', rating=decimal.Decimal('1.0'), desc={'flavor': 'm1.tiny', 'vcpus': '1'}) return sample class DataFrame(wtypes.Base): """Type describing a stored data frame.""" begin = datetime.datetime """Begin date for the sample.""" end = datetime.datetime """End date for the sample.""" tenant_id = wtypes.text """Tenant owner of the sample.""" resources = [RatedResource] """A resource list.""" def to_json(self): return {'begin': self.begin, 'end': self.end, 'tenant_id': self.tenant_id, 'resources': self.resources} @classmethod def sample(cls): res_sample = RatedResource.sample() sample = cls(tenant_id='69d12143688f413cbf5c3cfe03ed0a12', begin=datetime.datetime(2015, 4, 22, 7), end=datetime.datetime(2015, 4, 22, 8), resources=[res_sample]) return sample class DataFrameCollection(wtypes.Base): """A list of stored data frames.""" dataframes = [DataFrame] @classmethod def sample(cls): sample = DataFrame.sample() return cls(dataframes=[sample]) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/api/v1/hooks.py0000664000175000017500000000243600000000000021004 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright 2014 Objectif Libre # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # from pecan import hooks from cloudkitty.common import context from cloudkitty import messaging class RPCHook(hooks.PecanHook): def __init__(self): self._rpc_client = messaging.get_client() def before(self, state): state.request.rpc_client = self._rpc_client class StorageHook(hooks.PecanHook): def __init__(self, storage_backend): self._storage_backend = storage_backend def before(self, state): state.request.storage_backend = self._storage_backend class ContextHook(hooks.PecanHook): def on_route(self, state): state.request.context = context.RequestContext.from_environ( state.request.environ) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/api/v1/types.py0000664000175000017500000000336300000000000021025 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright 2014 Objectif Libre # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # from oslo_utils import uuidutils from wsme import types as wtypes class UuidType(wtypes.UuidType): """A simple UUID type.""" basetype = wtypes.text name = 'uuid' @staticmethod def validate(value): if not uuidutils.is_uuid_like(value): raise ValueError("Invalid UUID, got '%s'" % value) return value # Code taken from ironic types class MultiType(wtypes.UserType): """A complex type that represents one or more types. Used for validating that a value is an instance of one of the types. :param *types: Variable-length list of types. """ def __init__(self, *types): self.types = types def __str__(self): return ' | '.join(map(str, self.types)) def validate(self, value): for t in self.types: if t is wtypes.text and isinstance(value, wtypes.bytes): value = value.decode() if isinstance(value, t): return value else: raise ValueError( "Wrong type. Expected '%(type)s', got '%(value)s'" % {'type': self.types, 'value': type(value)}) ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1727866639.2314863 cloudkitty-21.0.0/cloudkitty/api/v2/0000775000175000017500000000000000000000000017303 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/api/v2/__init__.py0000664000175000017500000000276600000000000021427 0ustar00zuulzuul00000000000000# Copyright 2018 Objectif Libre # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
# import importlib import flask import voluptuous from cloudkitty.common import context RESOURCE_SCHEMA = voluptuous.Schema({ # python module containing the resource voluptuous.Required('module'): str, # Name of the resource class voluptuous.Required('resource_class'): str, # Url suffix of this specific resource voluptuous.Required('url'): str, }) API_MODULES = [ 'cloudkitty.api.v2.scope', 'cloudkitty.api.v2.dataframes', 'cloudkitty.api.v2.summary', 'cloudkitty.api.v2.task', 'cloudkitty.api.v2.rating', ] def _extend_request_context(): flask.request.context = context.RequestContext.from_environ( flask.request.environ) def get_api_app(): app = flask.Flask(__name__) for module_name in API_MODULES: module = importlib.import_module(module_name) module.init(app) app.before_request(_extend_request_context) return app ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/api/v2/base.py0000664000175000017500000000334700000000000020576 0ustar00zuulzuul00000000000000# Copyright 2019 Objectif Libre # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # import flask_restful from werkzeug import exceptions as http_exceptions from cloudkitty.common import policy from cloudkitty import storage class BaseResource(flask_restful.Resource): """Base class for all cloudkitty v2 API resources. Returns a 403 Forbidden HTTP code in case a ``PolicyNotAuthorized`` exception is raised by the API method. """ def dispatch_request(self, *args, **kwargs): try: return super(BaseResource, self).dispatch_request(*args, **kwargs) except policy.PolicyNotAuthorized: raise http_exceptions.Forbidden( "You are not authorized to perform this action") @classmethod def reload(cls): """Reloads all required drivers""" cls._storage = storage.get_storage() @classmethod def load(cls): """Loads all required drivers. If the drivers are already loaded, does nothing. """ if not getattr(cls, '_loaded', False): cls.reload() cls._loaded = True def __init__(self, *args, **kwargs): super(BaseResource, self).__init__(*args, **kwargs) self.load() ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1727866639.2314863 cloudkitty-21.0.0/cloudkitty/api/v2/dataframes/0000775000175000017500000000000000000000000021412 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/api/v2/dataframes/__init__.py0000664000175000017500000000156300000000000023530 0ustar00zuulzuul00000000000000# Copyright 2019 Objectif Libre # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
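# A hypothetical sketch of how an additional v2 API module would plug into
# the loading mechanism above: every entry of API_MODULES must expose an
# init(app) callable that registers its resources, as the real modules
# below do through api_utils.do_init(). The module and resource names here
# are invented for illustration only.
from cloudkitty.api.v2 import utils as api_utils


def init(app):
    api_utils.do_init(app, 'example', [
        {
            'module': __name__ + '.' + 'example',
            'resource_class': 'ExampleResource',
            'url': '',
        },
    ])
    return app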
See the # License for the specific language governing permissions and limitations # under the License. # from cloudkitty.api.v2 import utils as api_utils def init(app): api_utils.do_init(app, 'dataframes', [ { 'module': __name__ + '.' + 'dataframes', 'resource_class': 'DataFrameList', 'url': '', }, ]) return app ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/api/v2/dataframes/dataframes.py0000664000175000017500000000670600000000000024104 0ustar00zuulzuul00000000000000# Copyright 2019 Objectif Libre # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # import flask from oslo_config import cfg import voluptuous from werkzeug import exceptions as http_exceptions from cloudkitty.api.v2 import base from cloudkitty.api.v2 import utils as api_utils from cloudkitty.common import policy from cloudkitty import dataframe from cloudkitty.utils import tz as tzutils CONF = cfg.CONF CONF.import_opt('scope_key', 'cloudkitty.collector', 'collect') class DataFrameList(base.BaseResource): @api_utils.add_input_schema('body', { voluptuous.Required('dataframes'): [dataframe.DataFrame.from_dict], }) def post(self, dataframes=[]): policy.authorize( flask.request.context, 'dataframes:add', {}, ) if not dataframes: raise http_exceptions.BadRequest( "Parameter dataframes must not be empty.") self._storage.push(dataframes) return {}, 204 @api_utils.paginated @api_utils.add_input_schema('query', { voluptuous.Optional('begin'): api_utils.SingleQueryParam(tzutils.dt_from_iso), voluptuous.Optional('end'): api_utils.SingleQueryParam(tzutils.dt_from_iso), voluptuous.Optional('filters'): api_utils.SingleDictQueryParam(str, str), }) @api_utils.add_output_schema({ voluptuous.Required('total'): int, voluptuous.Required('dataframes'): [dataframe.DataFrame.as_dict], }) def get(self, offset=0, limit=100, begin=None, end=None, filters=None): policy.authorize( flask.request.context, 'dataframes:get', {'project_id': flask.request.context.project_id}, ) begin = begin or tzutils.get_month_start() end = end or tzutils.get_next_month() if filters and 'type' in filters: metric_types = [filters.pop('type')] else: metric_types = None if not flask.request.context.is_admin: if flask.request.context.project_id is None: # Unscoped non-admin user return { 'total': 0, 'dataframes': [] } scope_key = CONF.collect.scope_key if filters: filters[scope_key] = flask.request.context.project_id else: filters = {scope_key: flask.request.context.project_id} results = self._storage.retrieve( begin=begin, end=end, filters=filters, metric_types=metric_types, offset=offset, limit=limit, ) if results['total'] < 1: raise http_exceptions.NotFound( "No resource found for provided filters.") return { 'total': results['total'], 'dataframes': results['dataframes'], } ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1727866639.2314863 cloudkitty-21.0.0/cloudkitty/api/v2/rating/0000775000175000017500000000000000000000000020567 
5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/api/v2/rating/__init__.py0000664000175000017500000000203500000000000022700 0ustar00zuulzuul00000000000000# Copyright 2019 Objectif Libre # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from cloudkitty.api.v2 import utils as api_utils def init(app): api_utils.do_init(app, 'rating', [ { 'module': __name__ + '.' + 'modules', 'resource_class': 'RatingModule', 'url': '/modules/<string:module_id>', }, { 'module': __name__ + '.' + 'modules', 'resource_class': 'RatingModuleList', 'url': '/modules', }, ]) return app ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/api/v2/rating/modules.py0000664000175000017500000000740600000000000022620 0ustar00zuulzuul00000000000000# Copyright 2019 Objectif Libre # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License.
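# NOTE: illustrative usage sketch, not part of the upstream module. The two
# resources defined below are registered by rating/__init__.py and are
# typically exposed under /v2/rating by the API. Assuming a CloudKitty API
# reachable at http://localhost:8889 and a valid token in $TOKEN (both
# assumptions), they could be exercised roughly as follows:
#
#     # List every rating module with its enabled state and priority:
#     # curl -H "X-Auth-Token: $TOKEN" http://localhost:8889/v2/rating/modules
#
#     # Enable the hashmap module and raise its priority:
#     # curl -X PUT http://localhost:8889/v2/rating/modules/hashmap \
#     #      -H "X-Auth-Token: $TOKEN" -H "Content-Type: application/json" \
#     #      -d '{"enabled": true, "priority": 2}'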
# import flask from oslo_concurrency import lockutils from stevedore import extension import voluptuous from werkzeug import exceptions as http_exceptions from cloudkitty.api.v2 import base from cloudkitty.api.v2 import utils as api_utils from cloudkitty.common import policy from cloudkitty import utils as ck_utils from cloudkitty.utils import validation as vutils PROCESSORS_NAMESPACE = 'cloudkitty.rating.processors' MODULE_SCHEMA = { voluptuous.Required( 'description', default=None, ): vutils.get_string_type(), voluptuous.Required( 'module_id', default=None, ): vutils.get_string_type(), voluptuous.Required( 'enabled', default=None, ): voluptuous.Boolean(), voluptuous.Required( 'hot_config', default=None, ): voluptuous.Boolean(), voluptuous.Required( 'priority', default=None, ): voluptuous.All(int, min=1), } class BaseRatingModule(base.BaseResource): @classmethod def reload(cls): super(BaseRatingModule, cls).reload() with lockutils.lock('rating-modules'): ck_utils.refresh_stevedore(PROCESSORS_NAMESPACE) cls.rating_modules = extension.ExtensionManager( PROCESSORS_NAMESPACE, invoke_on_load=True) class RatingModule(BaseRatingModule): @api_utils.add_output_schema(MODULE_SCHEMA) def get(self, module_id): policy.authorize(flask.request.context, 'v2_rating:get_module', {}) try: module = self.rating_modules[module_id] except KeyError: raise http_exceptions.NotFound( "Module '{}' not found".format(module_id)) infos = module.obj.module_info.copy() return { 'module_id': module_id, 'description': infos['description'], 'enabled': infos['enabled'], 'hot_config': infos['hot_config'], 'priority': infos['priority'], } @api_utils.add_input_schema('body', { voluptuous.Optional('enabled'): voluptuous.Boolean(), voluptuous.Optional('priority'): voluptuous.All(int, min=1), }) def put(self, module_id, enabled=None, priority=None): policy.authorize(flask.request.context, 'v2_rating:update_module', {}) try: module = self.rating_modules[module_id].obj except KeyError: raise http_exceptions.NotFound( "Module '{}' not found".format(module_id)) if enabled is not None: module.set_state(enabled) if priority is not None: module.set_priority(priority) return "", 204 class RatingModuleList(BaseRatingModule): @api_utils.add_output_schema({ 'modules': [MODULE_SCHEMA], }) def get(self): modules = [] for module in self.rating_modules: infos = module.obj.module_info.copy() modules.append({ 'module_id': infos['name'], 'description': infos['description'], 'enabled': infos['enabled'], 'hot_config': infos['hot_config'], 'priority': infos['priority'], }) return {'modules': modules} ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1727866639.2314863 cloudkitty-21.0.0/cloudkitty/api/v2/scope/0000775000175000017500000000000000000000000020414 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/api/v2/scope/__init__.py0000664000175000017500000000154600000000000022533 0ustar00zuulzuul00000000000000# Copyright 2019 Objectif Libre # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
See the # License for the specific language governing permissions and limitations # under the License. # from cloudkitty.api.v2 import utils as api_utils def init(app): api_utils.do_init(app, 'scope', [ { 'module': __name__ + '.' + 'state', 'resource_class': 'ScopeState', 'url': '', }, ]) return app ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/api/v2/scope/state.py0000664000175000017500000002666700000000000022127 0ustar00zuulzuul00000000000000# Copyright 2019 Objectif Libre # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # import flask import voluptuous from werkzeug import exceptions as http_exceptions from cloudkitty.api.v2 import base from cloudkitty.api.v2 import utils as api_utils from cloudkitty.common import policy from cloudkitty import messaging from cloudkitty import storage_state from cloudkitty.utils import tz as tzutils from cloudkitty.utils import validation as vutils from oslo_log import log LOG = log.getLogger(__name__) class ScopeState(base.BaseResource): @classmethod def reload(cls): super(ScopeState, cls).reload() cls._client = messaging.get_client() cls._storage_state = storage_state.StateManager() @api_utils.paginated @api_utils.add_input_schema('query', { voluptuous.Optional('scope_id', default=[]): api_utils.MultiQueryParam(str), voluptuous.Optional('scope_key', default=[]): api_utils.MultiQueryParam(str), voluptuous.Optional('fetcher', default=[]): api_utils.MultiQueryParam(str), voluptuous.Optional('collector', default=[]): api_utils.MultiQueryParam(str), voluptuous.Optional('active', default=[]): api_utils.MultiQueryParam(int), }) @api_utils.add_output_schema({'results': [{ voluptuous.Required('scope_id'): vutils.get_string_type(), voluptuous.Required('scope_key'): vutils.get_string_type(), voluptuous.Required('fetcher'): vutils.get_string_type(), voluptuous.Required('collector'): vutils.get_string_type(), voluptuous.Optional( 'last_processed_timestamp'): vutils.get_string_type(), voluptuous.Required('active'): bool, voluptuous.Optional('scope_activation_toggle_date'): vutils.get_string_type(), }]}) def get(self, offset=0, limit=100, scope_id=None, scope_key=None, fetcher=None, collector=None, active=None): policy.authorize( flask.request.context, 'scope:get_state', {'project_id': scope_id or flask.request.context.project_id} ) results = self._storage_state.get_all( identifier=scope_id, scope_key=scope_key, fetcher=fetcher, collector=collector, offset=offset, limit=limit, active=active) if len(results) < 1: raise http_exceptions.NotFound( "No resource found for provided filters.") return { 'results': [{ 'scope_id': r.identifier, 'scope_key': r.scope_key, 'fetcher': r.fetcher, 'collector': r.collector, 'last_processed_timestamp': r.last_processed_timestamp.isoformat(), 'active': r.active, 'scope_activation_toggle_date': r.scope_activation_toggle_date.isoformat() if r.scope_activation_toggle_date else None } for r in results] } @api_utils.add_input_schema('body', { 
voluptuous.Exclusive('all_scopes', 'scope_selector'): voluptuous.Boolean(), voluptuous.Exclusive('scope_id', 'scope_selector'): api_utils.MultiQueryParam(str), voluptuous.Optional('scope_key', default=[]): api_utils.MultiQueryParam(str), voluptuous.Optional('fetcher', default=[]): api_utils.MultiQueryParam(str), voluptuous.Optional('collector', default=[]): api_utils.MultiQueryParam(str), voluptuous.Optional('last_processed_timestamp'): voluptuous.Coerce(tzutils.dt_from_iso), }) def put(self, all_scopes=False, scope_id=None, scope_key=None, fetcher=None, collector=None, last_processed_timestamp=None): policy.authorize( flask.request.context, 'scope:reset_state', {'project_id': scope_id or flask.request.context.project_id} ) if not all_scopes and scope_id is None: raise http_exceptions.BadRequest( "Either all_scopes or a scope_id should be specified.") if not last_processed_timestamp: raise http_exceptions.BadRequest( "Variable 'last_processed_timestamp' cannot be empty/None.") results = self._storage_state.get_all( identifier=scope_id, scope_key=scope_key, fetcher=fetcher, collector=collector, ) if len(results) < 1: raise http_exceptions.NotFound( "No resource found for provided filters.") serialized_results = [{ 'scope_id': r.identifier, 'scope_key': r.scope_key, 'fetcher': r.fetcher, 'collector': r.collector, } for r in results] self._client.cast({}, 'reset_state', res_data={ 'scopes': serialized_results, 'last_processed_timestamp': last_processed_timestamp.isoformat() }) return {}, 202 @api_utils.add_input_schema('body', { voluptuous.Required('scope_id'): api_utils.SingleQueryParam(str), voluptuous.Optional('scope_key'): api_utils.SingleQueryParam(str), voluptuous.Optional('fetcher'): api_utils.SingleQueryParam(str), voluptuous.Optional('collector'): api_utils.SingleQueryParam(str), voluptuous.Optional('active'): api_utils.SingleQueryParam(bool), }) @api_utils.add_output_schema({ voluptuous.Required('scope_id'): vutils.get_string_type(), voluptuous.Required('scope_key'): vutils.get_string_type(), voluptuous.Required('fetcher'): vutils.get_string_type(), voluptuous.Required('collector'): vutils.get_string_type(), voluptuous.Optional('last_processed_timestamp'): voluptuous.Coerce(tzutils.dt_from_iso), voluptuous.Required('active'): bool, voluptuous.Required('scope_activation_toggle_date'): vutils.get_string_type() }) def patch(self, scope_id, scope_key=None, fetcher=None, collector=None, active=None): policy.authorize( flask.request.context, 'scope:patch_state', {'tenant_id': scope_id or flask.request.context.project_id} ) results = self._storage_state.get_all(identifier=scope_id, active=None) if len(results) < 1: raise http_exceptions.NotFound( "No resource found for provided filters.") if len(results) > 1: LOG.debug("Too many resources found with the same scope_id [%s], " "scopes found: [%s].", scope_id, results) raise http_exceptions.NotFound("Too many resources found with " "the same scope_id: %s." 
% scope_id) scope_to_update = results[0] LOG.debug("Executing update of storage scope: [%s].", scope_to_update) self._storage_state.update_storage_scope(scope_to_update, scope_key=scope_key, fetcher=fetcher, collector=collector, active=active) storage_scopes = self._storage_state.get_all( identifier=scope_id, active=active) update_storage_scope = storage_scopes[0] return { 'scope_id': update_storage_scope.identifier, 'scope_key': update_storage_scope.scope_key, 'fetcher': update_storage_scope.fetcher, 'collector': update_storage_scope.collector, 'last_processed_timestamp': update_storage_scope.last_processed_timestamp.isoformat(), 'active': update_storage_scope.active, 'scope_activation_toggle_date': update_storage_scope.scope_activation_toggle_date.isoformat() } @api_utils.add_input_schema('body', { voluptuous.Required('scope_id'): api_utils.SingleQueryParam(str), voluptuous.Optional('scope_key'): api_utils.SingleQueryParam(str), voluptuous.Optional('fetcher'): api_utils.SingleQueryParam(str), voluptuous.Optional('collector'): api_utils.SingleQueryParam(str), voluptuous.Optional('active'): api_utils.SingleQueryParam(bool), }) @api_utils.add_output_schema({ voluptuous.Required('scope_id'): vutils.get_string_type(), voluptuous.Required('scope_key'): vutils.get_string_type(), voluptuous.Required('fetcher'): vutils.get_string_type(), voluptuous.Required('collector'): vutils.get_string_type(), voluptuous.Optional('last_processed_timestamp'): voluptuous.Coerce(tzutils.dt_from_iso), voluptuous.Required('active'): bool, voluptuous.Required('scope_activation_toggle_date'): vutils.get_string_type() }) def post(self, scope_id, scope_key=None, fetcher=None, collector=None, active=None): policy.authorize( flask.request.context, 'scope:post_state', {'tenant_id': scope_id or flask.request.context.project_id} ) results = self._storage_state.get_all(identifier=scope_id) if len(results) >= 1: LOG.debug("There is already a scope with ID [%s], " "scopes found: [%s].", scope_id, results) raise http_exceptions.NotFound("Cannot create a scope with an " "already existing scope_id: %s." 
% scope_id) LOG.debug("Creating storage scope with data: [scope_id=%s, " "scope_key=%s, fetcher=%s, collector=%s, active=%s].", scope_id, scope_key, fetcher, collector, active) self._storage_state.create_scope(scope_id, None, fetcher=fetcher, collector=collector, scope_key=scope_key, active=active) storage_scopes = self._storage_state.get_all( identifier=scope_id) update_storage_scope = storage_scopes[0] last_processed_timestamp = None if update_storage_scope.last_processed_timestamp: last_processed_timestamp =\ update_storage_scope.last_processed_timestamp.isoformat() return { 'scope_id': update_storage_scope.identifier, 'scope_key': update_storage_scope.scope_key, 'fetcher': update_storage_scope.fetcher, 'collector': update_storage_scope.collector, 'last_processed_timestamp': last_processed_timestamp, 'active': update_storage_scope.active, 'scope_activation_toggle_date': update_storage_scope.scope_activation_toggle_date.isoformat() } ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1727866639.2314863 cloudkitty-21.0.0/cloudkitty/api/v2/summary/0000775000175000017500000000000000000000000021000 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/api/v2/summary/__init__.py0000664000175000017500000000154700000000000023120 0ustar00zuulzuul00000000000000# Copyright 2019 Objectif Libre # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # from cloudkitty.api.v2 import utils as api_utils def init(app): api_utils.do_init(app, 'summary', [ { 'module': __name__ + '.' + 'summary', 'resource_class': 'Summary', 'url': '', }, ]) return app ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/api/v2/summary/summary.py0000664000175000017500000000755300000000000023061 0ustar00zuulzuul00000000000000# Copyright 2019 Objectif Libre # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
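# NOTE: illustrative usage sketch, not part of the upstream module. Assuming
# a CloudKitty API reachable at http://localhost:8889 and a valid token in
# $TOKEN (both assumptions), the Summary resource defined below could be
# queried like this:
#
#     # Rating summary for the current month, grouped by project and metric
#     # type, returned as a list of objects instead of the default table:
#     # curl -G http://localhost:8889/v2/summary \
#     #      -H "X-Auth-Token: $TOKEN" \
#     #      --data-urlencode "groupby=project_id,type" \
#     #      --data-urlencode "response_format=object"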
# import flask import voluptuous from cloudkitty.api.v2 import base from cloudkitty.api.v2 import utils as api_utils from cloudkitty.common import policy from cloudkitty.utils import tz as tzutils TABLE_RESPONSE_FORMAT = "table" OBJECT_RESPONSE_FORMAT = "object" ALL_RESPONSE_FORMATS = [TABLE_RESPONSE_FORMAT, OBJECT_RESPONSE_FORMAT] class Summary(base.BaseResource): """Resource allowing to retrieve a rating summary.""" @api_utils.paginated @api_utils.add_input_schema('query', { voluptuous.Optional('response_format'): api_utils.SingleQueryParam(str), voluptuous.Optional('custom_fields'): api_utils.SingleQueryParam(str), voluptuous.Optional('groupby'): api_utils.MultiQueryParam(str), voluptuous.Optional('filters'): api_utils.MultiDictQueryParam(str, str), voluptuous.Optional('begin'): api_utils.SingleQueryParam( tzutils.dt_from_iso), voluptuous.Optional('end'): api_utils.SingleQueryParam( tzutils.dt_from_iso), }) def get(self, response_format=TABLE_RESPONSE_FORMAT, custom_fields=None, groupby=None, filters={}, begin=None, end=None, offset=0, limit=100): if response_format not in ALL_RESPONSE_FORMATS: raise voluptuous.Invalid("Invalid response format [%s]. Valid " "format are [%s]." % (response_format, ALL_RESPONSE_FORMATS)) policy.authorize( flask.request.context, 'summary:get_summary', {'project_id': flask.request.context.project_id}) begin = begin or tzutils.get_month_start() end = end or tzutils.get_next_month() if not flask.request.context.is_admin: if flask.request.context.project_id is None: # Unscoped non-admin user return { 'total': 0, 'columns': [], 'results': [], } filters['project_id'] = flask.request.context.project_id metric_types = filters.pop('type', []) if not isinstance(metric_types, list): metric_types = [metric_types] arguments = { 'begin': begin, 'end': end, 'groupby': groupby, 'filters': filters, 'metric_types': metric_types, 'offset': offset, 'limit': limit, 'paginate': True } if custom_fields: arguments['custom_fields'] = custom_fields total = self._storage.total(**arguments) return self.generate_response(response_format, total) @staticmethod def generate_response(response_format, total): response = {'total': total['total']} if response_format == TABLE_RESPONSE_FORMAT: columns = [] if len(total['results']) > 0: columns = list(total['results'][0].keys()) response['columns'] = columns response['results'] = [list(res.values()) for res in total['results']] elif response_format == OBJECT_RESPONSE_FORMAT: response['results'] = total['results'] response['format'] = response_format return response ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1727866639.2314863 cloudkitty-21.0.0/cloudkitty/api/v2/task/0000775000175000017500000000000000000000000020245 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/api/v2/task/__init__.py0000664000175000017500000000230200000000000022353 0ustar00zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
See the # License for the specific language governing permissions and limitations # under the License. # from cloudkitty.api.v2 import utils as api_utils def init(app): api_utils.do_init(app, 'task', [ { 'module': __name__ + '.' + 'reprocess', 'resource_class': 'ReprocessSchedulerPostApi', 'url': '/reprocesses', }, { 'module': __name__ + '.' + 'reprocess', 'resource_class': 'ReprocessSchedulerGetApi', 'url': '/reprocesses/<path_scope_id>', }, { 'module': __name__ + '.' + 'reprocess', 'resource_class': 'ReprocessesSchedulerGetApi', 'url': '/reprocesses', }, ]) return app ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/api/v2/task/reprocess.py0000664000175000017500000003220000000000000022621 0ustar00zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # from datetime import timedelta from datetimerange import DateTimeRange import flask from oslo_log import log import voluptuous from werkzeug import exceptions as http_exceptions from cloudkitty.api.v2 import base from cloudkitty.api.v2 import utils as api_utils from cloudkitty.common import policy from cloudkitty import storage_state from cloudkitty.storage_state.models import ReprocessingScheduler from cloudkitty.utils import tz as tzutils from cloudkitty.utils import validation as validation_utils from oslo_config import cfg LOG = log.getLogger(__name__) ALL_SCOPES_OPTION = 'ALL' def dt_from_iso_as_utc(date_string): return tzutils.dt_from_iso(date_string, as_utc=True) class ReprocessSchedulerPostApi(base.BaseResource): def __init__(self, *args, **kwargs): super(ReprocessSchedulerPostApi, self).__init__(*args, **kwargs) self.storage_state_manager = storage_state.StateManager() self.schedule_reprocessing_db = storage_state.ReprocessingSchedulerDb() @api_utils.add_input_schema('body', { voluptuous.Required('scope_ids'): api_utils.MultiQueryParam(str), voluptuous.Required('start_reprocess_time'): voluptuous.Coerce(dt_from_iso_as_utc), voluptuous.Required('end_reprocess_time'): voluptuous.Coerce(dt_from_iso_as_utc), voluptuous.Required('reason'): api_utils.SingleQueryParam(str), }) def post(self, scope_ids=[], start_reprocess_time=None, end_reprocess_time=None, reason=None): policy.authorize( flask.request.context, 'schedule:task_reprocesses', {'tenant_id': flask.request.context.project_id or scope_ids} ) ReprocessSchedulerPostApi.validate_inputs( end_reprocess_time, reason, scope_ids, start_reprocess_time) if ALL_SCOPES_OPTION in scope_ids: scope_ids = [] if not isinstance(scope_ids, list): scope_ids = [scope_ids] all_scopes_to_reprocess = self.storage_state_manager.get_all( identifier=scope_ids, offset=None, limit=None) ReprocessSchedulerPostApi.check_if_there_are_invalid_scopes( all_scopes_to_reprocess, end_reprocess_time, scope_ids, start_reprocess_time) ReprocessSchedulerPostApi.validate_start_end_for_reprocessing( all_scopes_to_reprocess, end_reprocess_time, start_reprocess_time) self.validate_reprocessing_schedules_overlaps( all_scopes_to_reprocess,
end_reprocess_time, start_reprocess_time) for scope in all_scopes_to_reprocess: schedule = ReprocessingScheduler( identifier=scope.identifier, reason=reason, start_reprocess_time=start_reprocess_time, end_reprocess_time=end_reprocess_time) LOG.debug("Persisting scope reprocessing schedule [%s].", schedule) self.schedule_reprocessing_db.persist(schedule) return {}, 202 @staticmethod def get_date_period_overflow(date): return int(date.timestamp() % cfg.CONF.collect.period) @staticmethod def get_valid_period_date(date): return date - timedelta( seconds=ReprocessSchedulerPostApi.get_date_period_overflow(date)) @staticmethod def get_overflow_from_dates(start, end): start_overflow = ReprocessSchedulerPostApi.get_date_period_overflow( start) end_overflow = ReprocessSchedulerPostApi.get_date_period_overflow(end) if start_overflow or end_overflow: valid_start = ReprocessSchedulerPostApi.get_valid_period_date( start) valid_end = ReprocessSchedulerPostApi.get_valid_period_date(end) if valid_start == valid_end: valid_end += timedelta(seconds=cfg.CONF.collect.period) return [str(valid_start), str(valid_end)] @staticmethod def validate_inputs( end_reprocess_time, reason, scope_ids, start_reprocess_time): ReprocessSchedulerPostApi.validate_scope_ids(scope_ids) if not reason.strip(): raise http_exceptions.BadRequest( "Empty or blank reason text is not allowed. Please, do " "inform/register the reason for the reprocessing of a " "previously processed timestamp.") if end_reprocess_time < start_reprocess_time: raise http_exceptions.BadRequest( "End reprocessing timestamp [%s] cannot be less than " "start reprocessing timestamp [%s]." % (start_reprocess_time, end_reprocess_time)) periods_overflows = ReprocessSchedulerPostApi.get_overflow_from_dates( start_reprocess_time, end_reprocess_time) if periods_overflows: raise http_exceptions.BadRequest( "The provided reprocess time window does not comply with " "the configured collector period. A valid time window " "near the provided one is %s" % periods_overflows) @staticmethod def validate_scope_ids(scope_ids): option_all_selected = False for s in scope_ids: if s == ALL_SCOPES_OPTION: option_all_selected = True continue if option_all_selected and len(scope_ids) != 1: raise http_exceptions.BadRequest( "Cannot use 'ALL' with scope ID [%s]. Either schedule a " "reprocessing for all active scopes using 'ALL' option, " "or inform only the scopes you desire to schedule a " "reprocessing." % scope_ids) @staticmethod def check_if_there_are_invalid_scopes( all_scopes_to_reprocess, end_reprocess_time, scope_ids, start_reprocess_time): invalid_scopes = [] for s in scope_ids: scope_exist_in_db = False for scope_to_reprocess in all_scopes_to_reprocess: if s == scope_to_reprocess.identifier: scope_exist_in_db = True break if not scope_exist_in_db: invalid_scopes.append(s) if invalid_scopes: raise http_exceptions.BadRequest( "Scopes %s scheduled to reprocess [start=%s, end=%s] " "do not exist." % (invalid_scopes, start_reprocess_time, end_reprocess_time)) @staticmethod def validate_start_end_for_reprocessing(all_scopes_to_reprocess, end_reprocess_time, start_reprocess_time): for scope in all_scopes_to_reprocess: last_processed_timestamp = scope.last_processed_timestamp if start_reprocess_time > last_processed_timestamp: raise http_exceptions.BadRequest( "Cannot execute a reprocessing [start=%s] for scope [%s] " "starting after the last possible timestamp [%s]." 
% (start_reprocess_time, scope, last_processed_timestamp)) if end_reprocess_time > scope.last_processed_timestamp: raise http_exceptions.BadRequest( "Cannot execute a reprocessing [end=%s] for scope [%s] " "ending after the last possible timestamp [%s]." % (end_reprocess_time, scope, last_processed_timestamp)) def validate_reprocessing_schedules_overlaps( self, all_scopes_to_reprocess, end_reprocess_time, start_reprocess_time): scheduling_range = DateTimeRange( start_reprocess_time, end_reprocess_time) for scope_to_reprocess in all_scopes_to_reprocess: all_reprocessing_schedules = self.schedule_reprocessing_db.get_all( identifier=[scope_to_reprocess.identifier]) LOG.debug("All schedules [%s] for reprocessing found for scope " "[%s]", all_reprocessing_schedules, scope_to_reprocess) if not all_reprocessing_schedules: LOG.debug( "No need to validate possible collision of reprocessing " "for scope [%s] because it does not have active " "reprocessing schedules." % scope_to_reprocess) continue for schedule in all_reprocessing_schedules: scheduled_range = DateTimeRange( tzutils.local_to_utc(schedule.start_reprocess_time), tzutils.local_to_utc(schedule.end_reprocess_time)) try: if scheduling_range.is_intersection(scheduled_range): raise http_exceptions.BadRequest( self.generate_overlap_error_message( scheduled_range, scheduling_range, scope_to_reprocess)) except ValueError as e: raise http_exceptions.BadRequest( self.generate_overlap_error_message( scheduled_range, scheduling_range, scope_to_reprocess) + "Error: [%s]." % e) @staticmethod def generate_overlap_error_message(scheduled_range, scheduling_range, scope_to_reprocess): return "Cannot schedule a reprocessing for scope [%s] for " \ "reprocessing time [%s], because it already has a schedule " \ "for a similar time range [%s]." % (scope_to_reprocess, scheduling_range, scheduled_range) ACCEPTED_GET_REPROCESSING_REQUEST_ORDERS = ['asc', 'desc'] class ReprocessSchedulerGetApi(base.BaseResource): def __init__(self, *args, **kwargs): super(ReprocessSchedulerGetApi, self).__init__(*args, **kwargs) self.schedule_reprocessing_db = storage_state.ReprocessingSchedulerDb() @api_utils.paginated @api_utils.add_input_schema('query', { voluptuous.Optional('scope_ids'): api_utils.MultiQueryParam(str), voluptuous.Optional('order', default="desc"): api_utils.SingleQueryParam(str) }) @api_utils.add_output_schema({'results': [{ voluptuous.Required('reason'): validation_utils.get_string_type(), voluptuous.Required('scope_id'): validation_utils.get_string_type(), voluptuous.Required('start_reprocess_time'): validation_utils.get_string_type(), voluptuous.Required('end_reprocess_time'): validation_utils.get_string_type(), voluptuous.Required('current_reprocess_time'): validation_utils.get_string_type(), }]}) def get(self, scope_ids=[], path_scope_id=None, offset=0, limit=100, order="desc"): if path_scope_id and scope_ids: LOG.warning("Filtering by scope IDs [%s] and path scope ID [%s] " "does not make sense. You should use only one of " "them. We will use only the path scope ID for this " "request.", scope_ids, path_scope_id) if path_scope_id: scope_ids = [path_scope_id] policy.authorize( flask.request.context, 'schedule:get_task_reprocesses', {'tenant_id': flask.request.context.project_id or scope_ids} ) if not isinstance(scope_ids, list): scope_ids = [scope_ids] # Some versions of python-cloudkittyclient can send the order in upper # case, e.g. "DESC". Convert it to lower case for compatibility. 
order = order.lower() if order not in ACCEPTED_GET_REPROCESSING_REQUEST_ORDERS: raise http_exceptions.BadRequest( "The order [%s] is not valid. Accepted values are %s." % (order, ACCEPTED_GET_REPROCESSING_REQUEST_ORDERS)) schedules = self.schedule_reprocessing_db.get_all( identifier=scope_ids, remove_finished=False, offset=offset, limit=limit, order=order) return { 'results': [{ 'scope_id': s.identifier, 'reason': s.reason, 'start_reprocess_time': s.start_reprocess_time.isoformat(), 'end_reprocess_time': s.end_reprocess_time.isoformat(), 'current_reprocess_time': s.current_reprocess_time.isoformat() if s.current_reprocess_time else "", } for s in schedules]} class ReprocessesSchedulerGetApi(ReprocessSchedulerGetApi): def __init__(self, *args, **kwargs): super(ReprocessesSchedulerGetApi, self).__init__(*args, **kwargs) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/api/v2/utils.py0000664000175000017500000002632700000000000021027 0ustar00zuulzuul00000000000000# Copyright 2018 Objectif Libre # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # import importlib import itertools import flask import flask_restful import voluptuous from werkzeug import exceptions from cloudkitty.api import v2 as v2_api from cloudkitty.utils import json class SingleQueryParam(object): """Voluptuous validator allowing to validate unique query parameters. This validator checks that a URL query parameter is provided only once, verifies its type and returns it directly, instead of returning a list containing a single element. Note that this validator uses ``voluptuous.Coerce`` internally and thus should not be used together with ``cloudkitty.utils.validation.get_string_type`` in python2. :param param_type: Type of the query parameter """ def __init__(self, param_type): self._validate = voluptuous.Coerce(param_type) def __call__(self, v): if not isinstance(v, list): v = [v] if len(v) != 1: raise voluptuous.LengthInvalid('length of value must be 1') output = v[0] return self._validate(output) class MultiQueryParam(object): """Voluptuous validator allowing to validate multiple query parameters. This validator splits comma-separated query parameter into lists, verifies their type and returns it directly, instead of returning a list containing a single element. Note that this validator uses ``voluptuous.Coerce`` internally and thus should not be used together with ``cloudkitty.utils.validation.get_string_type`` in python2. :param param_type: Type of the query parameter """ def __init__(self, param_type): self._validate = lambda x: list(map(voluptuous.Coerce(param_type), x)) def __call__(self, v): if not isinstance(v, list): v = [v] output = itertools.chain(*[elem.split(',') for elem in v]) return self._validate(output) class DictQueryParam(object): """Voluptuous helper to validate dict query params. This validator converts a dict query parameter to a python dict. 
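For instance (illustrative values), a query string such as
``?filters=availability_zone:nova,flavor_name:m1.small`` is parsed into
``{'availability_zone': 'nova', 'flavor_name': 'm1.small'}`` when
``unique_values`` is True; when it is False, repeated keys are collected
into lists of values instead.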
:param key_type: Type of the dict keys :param val_type: Type of the dict values :param unique_values: Defaults to True. Set to True if each key should contain only one value :type unique_values: bool """ def __init__(self, key_type, val_type, unique_values=True): self._kval = voluptuous.Coerce(key_type) self._unique_val = unique_values if self._unique_val: self._vval = voluptuous.Coerce(val_type) else: def __vval(values): return [voluptuous.Coerce(val_type)(v) for v in values] self._vval = __vval @staticmethod def _append(output, key, val): if key in output.keys(): output[key].append(val) else: output[key] = [val] return output def __call__(self, v): if not isinstance(v, list): v = [v] tokens = itertools.chain(*[elem.split(',') for elem in v]) output = {} for token in tokens: try: key, val = token.split(':') except ValueError: # Not enough or too many values to unpack raise voluptuous.DictInvalid( 'invalid key:value association {}'.format(token)) if key in output.keys(): if self._unique_val: raise voluptuous.DictInvalid( 'key {} already provided'.format(key)) if self._unique_val: output[key] = val else: output = self._append(output, key, val) return {self._kval(k): self._vval(v) for k, v in output.items()} class SingleDictQueryParam(DictQueryParam): def __init__(self, key_type, val_type): super(SingleDictQueryParam, self).__init__(key_type=key_type, val_type=val_type, unique_values=True) class MultiDictQueryParam(DictQueryParam): def __init__(self, key_type, val_type): super(MultiDictQueryParam, self).__init__(key_type=key_type, val_type=val_type, unique_values=False) def add_input_schema(location, schema): """Add a voluptuous schema validation on a method's input Takes a dict which can be converted to a voluptuous schema as parameter, and validates the parameters with this schema. The "location" parameter is used to specify the parameters' location. Note that for query parameters, a ``MultiDict`` is returned by Flask. Thus, each dict key will contain a list. In order to ease interaction with unique query parameters, the ``SingleQueryParam`` voluptuous validator can be used:: from cloudkitty.api.v2 import utils as api_utils @api_utils.add_input_schema('query', { voluptuous.Required('fruit'): api_utils.SingleQueryParam(str), }) def put(self, fruit=None): return fruit To accept a list of query parameters, a ``MultiQueryParam`` can be used:: from cloudkitty.api.v2 import utils as api_utils @api_utils.add_input_schema('query', { voluptuous.Required('fruit'): api_utils.MultiQueryParam(str), }) def put(self, fruit=[]): for f in fruit: # Do something with the fruit :param location: Location of the args. Must be one of ['body', 'query'] :type location: str :param schema: Schema to apply to the method's kwargs :type schema: dict """ def decorator(f): try: s = getattr(f, 'input_schema') s = s.extend(schema) # The previous schema must be deleted or it will be called... 
[1/2] delattr(f, 'input_schema') except AttributeError: s = voluptuous.Schema(schema) def wrap(self, **kwargs): if hasattr(wrap, 'input_schema'): if location == 'body': args = flask.request.get_json() elif location == 'query': # NOTE(lpeschke): issues with to_dict in python3.7, # see https://github.com/pallets/werkzeug/issues/1379 args = dict(flask.request.args.lists()) try: # ...here [2/2] kwargs.update(wrap.input_schema(args)) except voluptuous.Invalid as e: raise exceptions.BadRequest( "Invalid data '{a}' : {m} (path: '{p}')".format( a=args, m=e.msg, p=str(e.path))) return f(self, **kwargs) wrap.input_schema = s return wrap return decorator def paginated(func): """Helper function for pagination. Adds two parameters to the decorated function: * ``offset``: int >=0. Defaults to 0. * ``limit``: int >=1. Defaults to 100. Example usage:: class Example(base.BaseResource): @api_utils.paginated @api_utils.add_output_schema({ voluptuous.Required( 'message', default='This is an example endpoint', ): validation_utils.get_string_type(), }) def get(self, offset=0, limit=100): # [...] """ return add_input_schema('query', { voluptuous.Required('offset', default=0): voluptuous.All( SingleQueryParam(int), voluptuous.Range(min=0)), voluptuous.Required('limit', default=100): voluptuous.All( SingleQueryParam(int), voluptuous.Range(min=1)), })(func) def add_output_schema(schema): """Add a voluptuous schema validation on a method's output Example usage:: class Example(base.BaseResource): @api_utils.add_output_schema({ voluptuous.Required( 'message', default='This is an example endpoint', ): validation_utils.get_string_type(), }) def get(self): return {} :param schema: Schema to apply to the method's output :type schema: dict """ schema = voluptuous.Schema(schema) def decorator(f): def wrap(*args, **kwargs): resp = f(*args, **kwargs) return schema(resp) return wrap return decorator class ResourceNotFound(Exception): """Exception raised when a resource is not found""" def __init__(self, module, resource_class): msg = 'Resource {r} was not found in module {m}'.format( r=resource_class, m=module, ) super(ResourceNotFound, self).__init__(msg) def _load_resource(module, resource_class): try: module = importlib.import_module(module) except ImportError: raise ResourceNotFound(module, resource_class) resource = getattr(module, resource_class, None) if resource is None: raise ResourceNotFound(module, resource_class) return resource def output_json(data, code, headers=None): """Helper function for api endpoint json serialization""" resp = flask.make_response(json.dumps(data), code) resp.headers.extend(headers or {}) return resp def _get_blueprint_and_api(module_name): endpoint_name = module_name.split('.')[-1] blueprint = flask.Blueprint(endpoint_name, module_name) api = flask_restful.Api(blueprint) # Using cloudkitty.json instead of json for serialization api.representation('application/json')(output_json) return blueprint, api def do_init(app, blueprint_name, resources): """Registers a new Blueprint containing one or several resources to app. 
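Example usage (illustrative only, with a hypothetical module path)::

    do_init(app, 'example', [{
        'module': 'cloudkitty.api.v2.example.example',
        'resource_class': 'Example',
        'url': '',
    }])

This would register the ``Example`` resource under the ``/example`` URL
prefix of the v2 API app.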
:param app: Flask app in which the Blueprint should be registered :type app: flask.Flask :param blueprint_name: Name of the blueprint to create :type blueprint_name: str :param resources: Resources to add to the Blueprint's Api :type resources: list of dicts matching ``cloudkitty.api.v2.RESOURCE_SCHEMA`` """ blueprint, api = _get_blueprint_and_api(blueprint_name) schema = voluptuous.Schema([v2_api.RESOURCE_SCHEMA]) for resource_info in schema(resources): resource = _load_resource(resource_info['module'], resource_info['resource_class']) if resource_info['url'] and not resource_info['url'].startswith('/'): resource_info['url'] = '/' + resource_info['url'] api.add_resource(resource, resource_info['url']) if not blueprint_name.startswith('/'): blueprint_name = '/' + blueprint_name app.register_blueprint(blueprint, url_prefix=blueprint_name) ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1727866639.2314863 cloudkitty-21.0.0/cloudkitty/backend/0000775000175000017500000000000000000000000017572 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/backend/__init__.py0000664000175000017500000000316200000000000021705 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright 2014 Objectif Libre # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # import abc class BaseIOBackend(object, metaclass=abc.ABCMeta): def __init__(self, path): self.open(path) @abc.abstractmethod def open(self, path): """Open the connection/file on the backend. """ @abc.abstractmethod def tell(self): """Current position on the backend. """ @abc.abstractmethod def seek(self, offset, from_what=0): # 0 beg, 1 cur, 2 end """Change position in the backend. """ @abc.abstractmethod def flush(self): """Force write of information on the backend. """ @abc.abstractmethod def write(self, data): """Write data on the backend. :param data: Data to be written on the backend. """ @abc.abstractmethod def read(self): """Read data from the backend. :return str: Data read from the backend. """ @abc.abstractmethod def close(self): """Close the connection/file on the backend. """ ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/backend/file.py0000664000175000017500000000157400000000000021072 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright 2014 Objectif Libre # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License.
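# NOTE: illustrative sketch only, not the upstream implementation. The
# FileBackend class below inherits from the Python 2 ``file`` builtin, which
# does not exist on Python 3. A hypothetical Python 3 counterpart mirroring
# the same "open for append, fall back to create" behaviour could look like:
#
#     # import io
#     #
#     # class Py3FileBackend:
#     #     def __init__(self, path, mode='ab+'):
#     #         try:
#     #             self._fd = io.open(path, mode)
#     #         except IOError:
#     #             # File not found: create it instead
#     #             self._fd = io.open(path, 'wb+')
#     #
#     #     def __getattr__(self, name):
#     #         # Delegate read/write/seek/tell/flush/close to the file object
#     #         return getattr(self._fd, name)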
# class FileBackend(file): # noqa def __init__(self, path, mode='ab+'): try: super(FileBackend, self).__init__(path, mode) except IOError: # File not found super(FileBackend, self).__init__(path, 'wb+') ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1727866639.2314863 cloudkitty-21.0.0/cloudkitty/cli/0000775000175000017500000000000000000000000016752 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/cli/__init__.py0000664000175000017500000000000000000000000021051 0ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/cli/dbsync.py0000664000175000017500000001104100000000000020603 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright 2014 Objectif Libre # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # from oslo_config import cfg from stevedore import extension from cloudkitty import config # noqa from cloudkitty.db import api as db_api from cloudkitty import service CONF = cfg.CONF PROCESSORS_NAMESPACE = 'cloudkitty.rating.processors' class ModuleNotFound(Exception): def __init__(self, name): self.name = name super(ModuleNotFound, self).__init__( "Module %s not found" % name) class MultipleModulesRevisions(Exception): def __init__(self, revision): self.revision = revision super(MultipleModulesRevisions, self).__init__( "Can't apply revision %s to multiple modules." 
% revision) class DBCommand(object): def __init__(self): self.rating_models = {} self._load_rating_models() def _load_rating_models(self): extensions = extension.ExtensionManager( PROCESSORS_NAMESPACE) self.rating_models = {} for ext in extensions: if hasattr(ext.plugin, 'db_api'): self.rating_models[ext.name] = ext.plugin.db_api def get_module_migration(self, name): if name == 'cloudkitty': mod_migration = db_api.get_instance().get_migration() else: try: module = self.rating_models[name] mod_migration = module.get_migration() except KeyError: raise ModuleNotFound(name) return mod_migration def get_migrations(self, name=None): if not name: migrations = [] migrations.append(self.get_module_migration('cloudkitty')) for model in self.rating_models.values(): migrations.append(model.get_migration()) return migrations else: return [self.get_module_migration(name)] def check_revsion(self, revision): revision = revision or 'head' if revision not in ('base', 'head'): raise MultipleModulesRevisions(revision) def _version_change(self, cmd): revision = CONF.command.revision module = CONF.command.module if not module: self.check_revsion(revision) migrations = self.get_migrations(module) for migration in migrations: func = getattr(migration, cmd) func(revision) def upgrade(self): self._version_change('upgrade') def revision(self): migration = self.get_module_migration(CONF.command.module) migration.revision(CONF.command.message, CONF.command.autogenerate) def stamp(self): migration = self.get_module_migration(CONF.command.module) migration.stamp(CONF.command.revision) def version(self): migration = self.get_module_migration(CONF.command.module) migration.version() def add_command_parsers(subparsers): command_object = DBCommand() parser = subparsers.add_parser('upgrade') parser.set_defaults(func=command_object.upgrade) parser.add_argument('--revision', nargs='?') parser.add_argument('--module', nargs='?') parser = subparsers.add_parser('stamp') parser.set_defaults(func=command_object.stamp) parser.add_argument('--revision', nargs='?') parser.add_argument('--module', required=True) parser = subparsers.add_parser('revision') parser.set_defaults(func=command_object.revision) parser.add_argument('-m', '--message') parser.add_argument('--autogenerate', action='store_true') parser.add_argument('--module', required=True) parser = subparsers.add_parser('version') parser.set_defaults(func=command_object.version) parser.add_argument('--module', required=True) command_opt = cfg.SubCommandOpt('command', title='Command', help='Available commands', handler=add_command_parsers) CONF.register_cli_opt(command_opt) def main(): service.prepare_service() CONF.command.func() ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/cli/processor.py0000664000175000017500000000222300000000000021342 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright 2014 Objectif Libre # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
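# NOTE: illustrative usage, assuming the usual console-script wiring in
# setup.cfg (an assumption, not shown here). The main() function below is
# what the processor service runs, typically started as a long-lived daemon:
#
#     # cloudkitty-processor --config-file /etc/cloudkitty/cloudkitty.conf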
# from cloudkitty import service def main(): service.prepare_service() # NOTE(mc): This import is done here to ensure that the prepare_service() # function is called before any cfg option. By importing the orchestrator # file, the utils one is imported too, and then some cfg options are read # before the prepare_service(), making cfg.CONF returning default values # systematically. from cloudkitty import orchestrator orchestrator.CloudKittyServiceManager().run() if __name__ == '__main__': main() ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/cli/status.py0000664000175000017500000000322300000000000020647 0ustar00zuulzuul00000000000000# Copyright 2018 Objectif Libre # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # import sys from oslo_config import cfg from oslo_upgradecheck import common_checks from oslo_upgradecheck import upgradecheck from cloudkitty.i18n import _ # Import needed to register storage options from cloudkitty import storage # noqa CONF = cfg.CONF class CloudkittyUpgradeChecks(upgradecheck.UpgradeCommands): def _storage_version(self): if CONF.storage.version < 2: return upgradecheck.Result( upgradecheck.Code.WARNING, 'Storage version is inferior to 2. Support for v1 storage ' 'will be dropped in a future release.', ) return upgradecheck.Result(upgradecheck.Code.SUCCESS) _upgrade_checks = ( (_('Storage version'), _storage_version), (_("Policy File JSON to YAML Migration"), (common_checks.check_policy_json, {'conf': CONF})), ) def main(): return upgradecheck.main( conf=CONF, project='cloudkitty', upgrade_command=CloudkittyUpgradeChecks(), ) if __name__ == '__main__': sys.exit(main()) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/cli/storage.py0000664000175000017500000000170100000000000020767 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright 2014 Objectif Libre # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
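# NOTE: illustrative usage, assuming the usual console-script wiring in
# setup.cfg (an assumption, not shown here). This module backs the one-shot
# command used to initialize the rating storage backend and the scope state
# storage before the first processing run, typically:
#
#     # cloudkitty-storage-init --config-file /etc/cloudkitty/cloudkitty.conf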
# from cloudkitty import service from cloudkitty import storage from cloudkitty import storage_state def init_storage_backend(): backend = storage.get_storage() backend.init() state_manager = storage_state.StateManager() state_manager.init() def main(): service.prepare_service() init_storage_backend() ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/cli/writer.py0000664000175000017500000000735400000000000020651 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright 2014 Objectif Libre # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # from oslo_config import cfg from oslo_utils import importutils as i_utils from cloudkitty import config # noqa from cloudkitty import service from cloudkitty import storage from cloudkitty import utils as ck_utils from cloudkitty import write_orchestrator CONF = cfg.CONF CONF.import_opt('period', 'cloudkitty.collector', 'collect') CONF.import_opt('backend', 'cloudkitty.config', 'output') CONF.import_opt('basepath', 'cloudkitty.config', 'output') STORAGES_NAMESPACE = 'cloudkitty.storage.backends' class DBCommand(object): def __init__(self): self._storage = None self._output = None self._load_storage_backend() self._load_output_backend() def _load_storage_backend(self): self._storage = storage.get_storage() def _load_output_backend(self): backend = i_utils.import_class(CONF.output.backend) self._output = backend def generate(self): if not CONF.command.tenant: if not CONF.command.begin: CONF.command.begin = ck_utils.get_month_start() if not CONF.command.end: CONF.command.end = ck_utils.get_next_month() tenants = self._storage.get_tenants(CONF.command.begin, CONF.command.end) else: tenants = [CONF.command.tenant] for tenant in tenants: wo = write_orchestrator.WriteOrchestrator(self._output, tenant, self._storage, CONF.output.basepath) wo.init_writing_pipeline() if not CONF.command.begin: wo.restart_month() wo.process() def tenants_list(self): if not CONF.command.begin: CONF.command.begin = ck_utils.get_month_start() if not CONF.command.end: CONF.command.end = ck_utils.get_next_month() tenants = self._storage.get_tenants(CONF.command.begin, CONF.command.end) print('Tenant list:') for tenant in tenants: print(tenant) def call_generate(command_object): command_object.generate() def call_tenants_list(command_object): command_object.tenants_list() def add_command_parsers(subparsers): parser = subparsers.add_parser('generate') parser.set_defaults(func=call_generate) parser.add_argument('--tenant', nargs='?') parser.add_argument('--begin', nargs='?') parser.add_argument('--end', nargs='?') parser = subparsers.add_parser('tenants_list') parser.set_defaults(func=call_tenants_list) parser.add_argument('--begin', nargs='?') parser.add_argument('--end', nargs='?') command_opt = cfg.SubCommandOpt('command', title='Command', help='Available commands', handler=add_command_parsers) CONF.register_cli_opt(command_opt) def main(): service.prepare_service() command_object = DBCommand() 
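# Dispatch to the handler registered for the chosen sub-command
# (call_generate or call_tenants_list), passing it the DBCommand instance.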
CONF.command.func(command_object) ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1727866639.2354863 cloudkitty-21.0.0/cloudkitty/collector/0000775000175000017500000000000000000000000020171 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/collector/__init__.py0000664000175000017500000002543200000000000022310 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright 2014 Objectif Libre # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # import abc import datetime import fractions from oslo_config import cfg from oslo_log import log as logging from stevedore import driver from voluptuous import All from voluptuous import Any from voluptuous import Coerce from voluptuous import Error as VoluptuousError from voluptuous import In from voluptuous import Invalid from voluptuous import Length from voluptuous import Optional from voluptuous import Required from voluptuous import Schema from cloudkitty.dataframe import DataPoint from cloudkitty import utils as ck_utils LOG = logging.getLogger(__name__) collect_opts = [ cfg.StrOpt('collector', default='gnocchi', help='Data collector.'), cfg.IntOpt('period', default=3600, help='Rating period in seconds.'), cfg.IntOpt('wait_periods', default=2, help='Wait for N periods before collecting new data.'), cfg.StrOpt('metrics_conf', default='/etc/cloudkitty/metrics.yml', help='Metrology configuration file.'), cfg.StrOpt('scope_key', default='project_id', help='Key defining a scope. project_id or domain_id for ' 'OpenStack, but can be anything.'), ] CONF = cfg.CONF CONF.register_opts(collect_opts, 'collect') COLLECTORS_NAMESPACE = 'cloudkitty.collector.backends' def MetricDict(value): if isinstance(value, dict) and len(value.keys()) > 0: return value raise Invalid("Not a dict with at least one key") CONF_BASE_SCHEMA = {Required('metrics'): MetricDict} METRIC_BASE_SCHEMA = { # Human-readable description for the CloudKitty rating type Optional('description'): All(str, Length(min=1)), # Display unit Required('unit'): All(str, Length(min=1)), # Factor for unit converion Required('factor', default=1): Any(int, float, Coerce(fractions.Fraction)), # Offset for unit conversion Required('offset', default=0): # [int, float, fractions.Fraction], Any(int, float, Coerce(fractions.Fraction)), # Name to be used in dataframes, and used for service creation in hashmap # module. Defaults to the name of the metric Optional('alt_name'): All(str, Length(min=1)), # This is what metrics are grouped by on collection. Required('groupby', default=list): [ All(str, Length(min=1)) ], # Available in HashMap Required('metadata', default=list): [ All(str, Length(min=1)) ], # Mutate collected value. May be any of: # (NONE, NUMBOOL, NOTNUMBOOL, FLOOR, CEIL). 
# Defaults to NONE Required('mutate', default='NONE'): In(['NONE', 'NUMBOOL', 'NOTNUMBOOL', 'FLOOR', 'CEIL', 'MAP']), # Map dict used if mutate == 'MAP' Optional('mutate_map'): dict, # Collector-specific args. Should be overriden by schema provided for # the given collector Optional('extra_args'): dict, } def get_collector(): metrics_conf = ck_utils.load_conf(CONF.collect.metrics_conf) collector_args = { 'period': CONF.collect.period, 'conf': metrics_conf, } return driver.DriverManager( COLLECTORS_NAMESPACE, CONF.collect.collector, invoke_on_load=True, invoke_kwds=collector_args).driver def get_collector_without_invoke(): """Return the collector without invoke it.""" return driver.DriverManager( COLLECTORS_NAMESPACE, CONF.collect.collector, invoke_on_load=False ).driver def get_metrics_based_collector_metadata(): """Return dict of metadata. Results are based on enabled collector and metrics in CONF. """ metrics_conf = ck_utils.load_conf(CONF.collect.metrics_conf) collector = get_collector_without_invoke() metadata = {} if 'metrics' in metrics_conf: for metric_name, metric in metrics_conf.get('metrics', {}).items(): alt_name = metric.get('alt_name', metric_name) metadata[alt_name] = collector.get_metadata( metric_name, metrics_conf, ) return metadata class NoDataCollected(Exception): """Raised when the collection returned no data. """ def __init__(self, collector, resource): super(NoDataCollected, self).__init__( "Collector '%s' returned no data for resource '%s'" % ( collector, resource)) self.collector = collector self.resource = resource class BaseCollector(object, metaclass=abc.ABCMeta): collector_name = None def __init__(self, **kwargs): try: self.period = kwargs['period'] self.conf = self.check_configuration(kwargs['conf']) except KeyError as e: key_error_message = "Missing argument (%s)" % e LOG.error(key_error_message, e) raise ValueError(key_error_message) except VoluptuousError as v: LOG.error('Problem while checking configurations.', v) raise v @staticmethod def check_configuration(conf): """Checks and validates metric configuration. Collectors requiring extra parameters for metric collection should implement this method, call the method of the parent class, extend the ``extra_args`` key in ``METRIC_BASE_SCHEMA`` and validate the metric configuration against the new schema. """ conf = Schema(CONF_BASE_SCHEMA)(conf) metric_schema = Schema(METRIC_BASE_SCHEMA) scope_key = CONF.collect.scope_key output = {} for metric_name, metric in conf['metrics'].items(): output[metric_name] = metric_schema(metric) if scope_key not in output[metric_name]['groupby']: output[metric_name]['groupby'].append(scope_key) return output @classmethod def _res_to_func(cls, resource_name): trans_resource = 'get_' trans_resource += resource_name.replace('.', '_') return trans_resource @classmethod def get_metadata(cls, resource_name): """Return metadata about collected resource as a dict. Dict object should contain: - "metadata": available metadata list, - "unit": collected quantity unit """ return {"metadata": [], "unit": "undefined"} @abc.abstractmethod def fetch_all(self, metric_name, start, end, project_id=None, q_filter=None): """Fetches information about a specific metric for a given period. This method must respect the ``groupby`` and ``metadata`` arguments provided in the metric conf at initialization. (Available in ``self.conf['groupby']`` and ``self.conf['metadata']``). Returns a list of cloudkitty.dataframe.DataPoint objects. 
:param metric_name: Name of the metric to fetch :type metric_name: str :param start: start of the period :type start: datetime.datetime :param end: end of the period :type end: datetime.datetime :param project_id: ID of the scope for which data should be collected :type project_id: str :param q_filter: Optional filters :type q_filter: dict """ def retrieve(self, metric_name, start, end, project_id=None, q_filter=None): data = self.fetch_all( metric_name, start, end, project_id, q_filter=q_filter, ) name = self.conf[metric_name].get('alt_name', metric_name) if not data: raise NoDataCollected(self.collector_name, name) return name, data def _create_data_point(self, metric, qty, price, groupby, metadata, start): unit = metric['unit'] if not start: start = datetime.datetime.now() LOG.debug("Collector [%s]. No start datetime defined for " "datapoint[unit=%s, quantity=%s, price=%s, groupby=%s, " "metadata=%s]. Therefore, we use the current time as " "the start time for this datapoint.", self.collector_name, unit, qty, price, groupby, metadata) week_of_the_year = start.strftime("%U") day_of_the_year = start.strftime("%-j") month_of_the_year = start.strftime("%-m") year = start.strftime("%Y") if groupby is None: groupby = {} groupby['week_of_the_year'] = week_of_the_year groupby['day_of_the_year'] = day_of_the_year groupby['month'] = month_of_the_year groupby['year'] = year return DataPoint(unit, qty, price, groupby, metadata, metric.get('description')) class InvalidConfiguration(Exception): pass def check_duplicates(metric_name, metric): """Checks for duplicates in "groupby" and "metadata". :param metric: config dict for a metric to check :type metric: dict """ groupby = set(metric['groupby']) metadata = set(metric['metadata']) duplicates = groupby.intersection(metadata) if duplicates: raise InvalidConfiguration( 'Metric {} has duplicates in groupby and metadata: {}'.format( metric_name, metric)) metric['groupby'] = list(groupby) metric['metadata'] = list(metadata) return metric def validate_map_mutator(metric_name, metric): """Validates MAP mutator""" mutate = metric.get('mutate') mutate_map = metric.get('mutate_map') if mutate == 'MAP' and mutate_map is None: raise InvalidConfiguration( 'Metric {} uses MAP mutator but mutate_map is missing: {}'.format( metric_name, metric)) if mutate != 'MAP' and mutate_map is not None: raise InvalidConfiguration( 'Metric {} not using MAP mutator but mutate_map is present: ' '{}'.format(metric_name, metric)) def validate_conf(conf): """Validates the provided configuration.""" collector = get_collector_without_invoke() output = collector.check_configuration(conf) for metric_name, metric in output.items(): if 'alt_name' not in metric.keys(): metric['alt_name'] = metric_name check_duplicates(metric_name, metric) validate_map_mutator(metric_name, metric) return output ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/collector/exceptions.py0000664000175000017500000000126600000000000022731 0ustar00zuulzuul00000000000000# Copyright 2018 Objectif Libre # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
See the # License for the specific language governing permissions and limitations # under the License. # class CollectError(Exception): """Base exception for collect errors""" pass ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/collector/gnocchi.py0000664000175000017500000006641100000000000022165 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright 2015 Objectif Libre # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # import copy from datetime import timedelta import requests from gnocchiclient import auth as gauth from gnocchiclient import client as gclient from gnocchiclient import exceptions as gexceptions from keystoneauth1 import loading as ks_loading from oslo_config import cfg from oslo_log import log as logging from voluptuous import All from voluptuous import In from voluptuous import Invalid from voluptuous import Length from voluptuous import Range from voluptuous import Required from voluptuous import Schema from cloudkitty import collector from cloudkitty.common import custom_session from cloudkitty import utils as ck_utils from cloudkitty.utils import tz as tzutils LOG = logging.getLogger(__name__) COLLECTOR_GNOCCHI_OPTS = 'collector_gnocchi' def GnocchiMetricDict(value): if isinstance(value, dict) and len(value.keys()) > 0: return value if isinstance(value, list) and len(value) > 0: for v in value: if not (isinstance(v, dict) and len(v.keys()) > 0): raise Invalid("Not a dict with at least one key or a " "list with at least one dict with at " "least one key. Provided value: %s" % value) return value raise Invalid("Not a dict with at least one key or a " "list with at least one dict with at " "least one key. Provided value: %s" % value) GNOCCHI_CONF_SCHEMA = {Required('metrics'): GnocchiMetricDict} collector_gnocchi_opts = [ cfg.StrOpt( 'gnocchi_auth_type', default='keystone', choices=['keystone', 'basic'], help='Gnocchi auth type (keystone or basic). 
Keystone credentials ' 'can be specified through the "auth_section" parameter', ), cfg.StrOpt( 'gnocchi_user', default='', help='Gnocchi user (for basic auth only)', ), cfg.StrOpt( 'gnocchi_endpoint', default='', help='Gnocchi endpoint (for basic auth only)', ), cfg.StrOpt( 'interface', default='internalURL', help='Endpoint URL type (for keystone auth only)', ), cfg.StrOpt( 'region_name', default='RegionOne', help='Region Name', ), cfg.IntOpt( 'http_pool_maxsize', default=requests.adapters.DEFAULT_POOLSIZE, help='If the value is not defined, we use the value defined by ' 'requests.adapters.DEFAULT_POOLSIZE', ) ] ks_loading.register_session_conf_options(cfg.CONF, COLLECTOR_GNOCCHI_OPTS) ks_loading.register_auth_conf_options(cfg.CONF, COLLECTOR_GNOCCHI_OPTS) cfg.CONF.register_opts(collector_gnocchi_opts, COLLECTOR_GNOCCHI_OPTS) CONF = cfg.CONF # According to 'gnocchi/rest/aggregates/operations.py#AGG_MAP' and # 'gnocchi/rest/aggregates/operations.py#AGG_MAP' the following are the basic # aggregation methods that one can use when configuring an aggregation # method in the archive policy in Gnocchi or using the aggregation API. BASIC_AGGREGATION_METHODS = set(('mean', 'sum', 'last', 'max', 'min', 'std', 'median', 'first', 'count')) for agg in list(BASIC_AGGREGATION_METHODS): BASIC_AGGREGATION_METHODS.add("rate:%s" % agg) EXTRA_AGGREGATION_METHODS_FOR_ARCHIVE_POLICY = set( (str(i) + 'pct' for i in range(1, 100))) for agg in list(EXTRA_AGGREGATION_METHODS_FOR_ARCHIVE_POLICY): EXTRA_AGGREGATION_METHODS_FOR_ARCHIVE_POLICY.add("rate:%s" % agg) # The aggregation method that one can use to configure the archive # policies also supports the 'pct' (percentile) operation. Therefore, # we also expose this as a configuration. VALID_AGGREGATION_METHODS_FOR_METRICS = BASIC_AGGREGATION_METHODS.union( EXTRA_AGGREGATION_METHODS_FOR_ARCHIVE_POLICY) GNOCCHI_EXTRA_SCHEMA = { Required('extra_args'): { Required('resource_type'): All(str, Length(min=1)), # Due to Gnocchi model, metric are grouped by resource. # This parameter permits to adapt the key of the resource identifier Required('resource_key', default='id'): All(str, Length(min=1)), Required('aggregation_method', default='max'): In(VALID_AGGREGATION_METHODS_FOR_METRICS), Required('re_aggregation_method', default='max'): In(BASIC_AGGREGATION_METHODS), Required('force_granularity', default=3600): All(int, Range(min=0)), Required('use_all_resource_revisions', default=True): All(bool), # Provide means for operators to customize the aggregation query # executed against Gnocchi. By default we use the following: # # '(aggregate RE_AGGREGATION_METHOD # (metric METRIC_NAME AGGREGATION_METHOD))' # # Therefore, this option enables operators to take full advantage of # operations available in Gnocchi, such as any arithmetic operations, # logical operations and many others. # # When using a custom aggregation query, you can keep the placeholders # 'RE_AGGREGATION_METHOD', 'AGGREGATION_METHOD', and 'METRIC_NAME': # they will be replaced at runtime by values from the metric # configuration. 
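# As a purely illustrative example (not taken from the original source), a
# custom query doubling the collected measures could look like:
# '(* 2 (aggregate RE_AGGREGATION_METHOD (metric METRIC_NAME AGGREGATION_METHOD)))'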
Required('custom_query', default=''): All(str), }, } class AssociatedResourceNotFound(Exception): """Exception raised when no resource can be associated with a metric.""" def __init__(self, resource_key, resource_id): super(AssociatedResourceNotFound, self).__init__( 'Resource with {}={} could not be found'.format( resource_key, resource_id), ) class GnocchiCollector(collector.BaseCollector): collector_name = 'gnocchi' def __init__(self, **kwargs): super(GnocchiCollector, self).__init__(**kwargs) adapter_options = {'connect_retries': 3} if CONF.collector_gnocchi.gnocchi_auth_type == 'keystone': auth_plugin = ks_loading.load_auth_from_conf_options( CONF, COLLECTOR_GNOCCHI_OPTS, ) adapter_options['interface'] = CONF.collector_gnocchi.interface else: auth_plugin = gauth.GnocchiBasicPlugin( user=CONF.collector_gnocchi.gnocchi_user, endpoint=CONF.collector_gnocchi.gnocchi_endpoint, ) adapter_options['region_name'] = CONF.collector_gnocchi.region_name verify = True if CONF.collector_gnocchi.cafile: verify = CONF.collector_gnocchi.cafile elif CONF.collector_gnocchi.insecure: verify = False self._conn = gclient.Client( '1', session=custom_session.create_custom_session( {'auth': auth_plugin, 'verify': verify}, CONF.collector_gnocchi.http_pool_maxsize), adapter_options=adapter_options, ) @staticmethod def check_configuration(conf): """Check metrics configuration """ conf = Schema(GNOCCHI_CONF_SCHEMA)(conf) conf = copy.deepcopy(conf) scope_key = CONF.collect.scope_key metric_schema = Schema(collector.METRIC_BASE_SCHEMA).extend( GNOCCHI_EXTRA_SCHEMA) output = {} for metric_name, metric in conf['metrics'].items(): if not isinstance(metric, list): metric = [metric] for m in metric: met = metric_schema(m) m.update(met) if scope_key not in m['groupby']: m['groupby'].append(scope_key) if met['extra_args']['resource_key'] not in m['groupby']: m['groupby'].append(met['extra_args']['resource_key']) names = [metric_name] alt_name = met.get('alt_name') if alt_name is not None: names.append(alt_name) new_metric_name = "@#".join(names) output[new_metric_name] = m return output @classmethod def get_metadata(cls, resource_name, conf): info = super(GnocchiCollector, cls).get_metadata(resource_name) try: info["metadata"].extend( conf[resource_name]['groupby'] ).extend( conf[resource_name]['metadata'] ) info['unit'] = conf[resource_name]['unit'] except KeyError: pass return info @classmethod def gen_filter(cls, cop='=', lop='and', **kwargs): """Generate gnocchi filter from kwargs. :param cop: Comparison operator. :param lop: Logical operator in case of multiple filters. """ q_filter = [] for kwarg in sorted(kwargs): q_filter.append({cop: {kwarg: kwargs[kwarg]}}) if len(kwargs) > 1: return cls.extend_filter(q_filter, lop=lop) else: return q_filter[0] if len(kwargs) else {} @classmethod def extend_filter(cls, *args, **kwargs): """Extend an existing gnocchi filter with multiple operations. :param lop: Logical operator in case of multiple filters. """ lop = kwargs.get('lop', 'and') filter_list = [] for cur_filter in args: if isinstance(cur_filter, dict) and cur_filter: filter_list.append(cur_filter) elif isinstance(cur_filter, list): filter_list.extend(cur_filter) if len(filter_list) > 1: return {lop: filter_list} else: return filter_list[0] if len(filter_list) else {} def _generate_time_filter(self, start, end): """Generate timeframe filter. :param start: Start of the timeframe. :param end: End of the timeframe if needed. 
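Illustratively, for a given start/end pair this returns two filters of the form
        [{'or': [{'=': {'ended_at': None}}, {'>=': {'ended_at': start.isoformat()}}]},
        {'<=': {'started_at': end.isoformat()}}].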
""" time_filter = list() time_filter.append(self.extend_filter( self.gen_filter(ended_at=None), self.gen_filter(cop=">=", ended_at=start.isoformat()), lop='or')) time_filter.append( self.gen_filter(cop="<=", started_at=end.isoformat())) return time_filter def _fetch_resources(self, metric_name, start, end, project_id=None, q_filter=None): """Get resources during the timeframe. :type metric_name: str :param start: Start of the timeframe. :param end: End of the timeframe if needed. :param project_id: Filter on a specific tenant/project. :type project_id: str :param q_filter: Append a custom filter. :type q_filter: list """ # Get gnocchi specific conf extra_args = self.conf[metric_name]['extra_args'] resource_type = extra_args['resource_type'] scope_key = CONF.collect.scope_key # Build query # FIXME(peschk_l): In order not to miss any resource whose metrics may # contain measures after its destruction, we scan resources over three # collect periods. delta = timedelta(seconds=CONF.collect.period) start = tzutils.substract_delta(start, delta) end = tzutils.add_delta(end, delta) query_parameters = self._generate_time_filter(start, end) if project_id: kwargs = {scope_key: project_id} query_parameters.append(self.gen_filter(**kwargs)) if q_filter: query_parameters.append(q_filter) sorts = [extra_args['resource_key'] + ':asc'] resources = [] marker = None while True: resources_chunk = self._conn.resource.search( resource_type=resource_type, query=self.extend_filter(*query_parameters), sorts=sorts, marker=marker) if len(resources_chunk) < 1: break resources += resources_chunk marker = resources_chunk[-1][extra_args['resource_key']] return {res[extra_args['resource_key']]: res for res in resources} def _fetch_metric(self, metric_name, start, end, project_id=None, q_filter=None): """Get metric during the timeframe. :param metric_name: metric name to filter on. :param start: Start of the timeframe. :param end: End of the timeframe if needed. :param project_id: Filter on a specific tenant/project. :type project_id: str :param q_filter: Append a custom filter. 
:type q_filter: list """ agg_kwargs = self.get_aggregation_api_arguments(end, metric_name, project_id, q_filter, start) op = self.build_operation_command(metric_name) try: measurements = self._conn.aggregates.fetch(op, **agg_kwargs) LOG.debug("Measurements [%s] received with operation [%s] and " "arguments [%s].", measurements, op, agg_kwargs) return measurements except (gexceptions.MetricNotFound, gexceptions.BadRequest) as e: # FIXME(peschk_l): gnocchiclient seems to be raising a BadRequest # when it should be raising MetricNotFound if isinstance(e, gexceptions.BadRequest): if 'Metrics not found' not in e.message["cause"]: raise LOG.warning('[{scope}] Skipping this metric for the ' 'current cycle.'.format(scope=project_id)) return [] def get_aggregation_api_arguments(self, end, metric_name, project_id, q_filter, start): extra_args = self.conf[metric_name]['extra_args'] resource_type = extra_args['resource_type'] query_parameters = self.build_query_parameters(project_id, q_filter, resource_type) agg_kwargs = { 'resource_type': resource_type, 'start': start, 'stop': end, 'use_history': True, 'groupby': self.conf[metric_name]['groupby'], 'search': self.extend_filter(*query_parameters), } force_granularity = extra_args['force_granularity'] if force_granularity > 0: agg_kwargs['granularity'] = force_granularity re_aggregation_method = extra_args['re_aggregation_method'] if re_aggregation_method.startswith('rate:'): agg_kwargs['start'] = start - timedelta(seconds=force_granularity) LOG.debug("Re-aggregation method for metric [%s] configured as" " [%s]. Therefore, we need two data points. Start date" " modified from [%s] to [%s].", metric_name, re_aggregation_method, start, agg_kwargs['start']) return agg_kwargs def build_query_parameters(self, project_id, q_filter, resource_type): query_parameters = list() query_parameters.append( self.gen_filter(cop="=", type=resource_type)) if project_id: scope_key = CONF.collect.scope_key kwargs = {scope_key: project_id} query_parameters.append(self.gen_filter(**kwargs)) if q_filter: query_parameters.append(q_filter) return query_parameters def build_operation_command(self, metric_name): extra_args = self.conf[metric_name]['extra_args'] op = self.generate_aggregation_operation(extra_args, metric_name) LOG.debug("Aggregation operation [%s] used to retrieve metric [%s].", op, metric_name) return op @staticmethod def generate_aggregation_operation(extra_args, metric_name): metric_name = metric_name.split('@#')[0] aggregation_method = extra_args['aggregation_method'] re_aggregation_method = aggregation_method if 're_aggregation_method' in extra_args: re_aggregation_method = extra_args['re_aggregation_method'] op = ["aggregate", re_aggregation_method, ["metric", metric_name, aggregation_method]] custom_gnocchi_query = extra_args.get('custom_query') if custom_gnocchi_query: LOG.debug("Using custom Gnocchi query [%s] with metric [%s].", custom_gnocchi_query, metric_name) op = custom_gnocchi_query.replace( 'RE_AGGREGATION_METHOD', re_aggregation_method).replace( 'AGGREGATION_METHOD', aggregation_method).replace( 'METRIC_NAME', metric_name) return op def _format_data(self, metconf, data, resources_info=None): """Formats gnocchi data to CK data. 
Returns metadata, groupby and qty """ groupby = data['group'] # if resource info is provided, add additional # metadata as defined in the conf metadata = dict() if resources_info is not None: resource_key = metconf['extra_args']['resource_key'] resource_id = groupby[resource_key] try: resource = resources_info[resource_id] except KeyError: raise AssociatedResourceNotFound(resource_key, resource_id) for i in metconf['metadata']: metadata[i] = resource.get(i, '') qty = data['measures']['measures']['aggregated'][0][2] converted_qty = ck_utils.convert_unit( qty, metconf['factor'], metconf['offset']) mutate_map = metconf.get('mutate_map') mutated_qty = ck_utils.mutate(converted_qty, metconf['mutate'], mutate_map=mutate_map) return metadata, groupby, mutated_qty def fetch_all(self, metric_name, start, end, project_id=None, q_filter=None): met = self.conf[metric_name] data = self._fetch_metric( metric_name, start, end, project_id=project_id, q_filter=q_filter, ) data = GnocchiCollector.filter_unecessary_measurements( data, met, metric_name) resources_info = None if met['metadata']: resources_info = self._fetch_resources( metric_name, start, end, project_id=project_id, q_filter=q_filter ) formated_resources = list() for d in data: # Only if aggregates have been found LOG.debug("Processing entry [%s] for [%s] in timestamp [" "start=%s, end=%s] and project id [%s]", d, metric_name, start, end, project_id) if d['measures']['measures']['aggregated']: try: metadata, groupby, qty = self._format_data( met, d, resources_info) except AssociatedResourceNotFound as e: LOG.warning( '[{}] An error occured during data collection ' 'between {} and {}: {}'.format( project_id, start, end, e), ) continue point = self._create_data_point(met, qty, 0, groupby, metadata, start) formated_resources.append(point) return formated_resources @staticmethod def filter_unecessary_measurements(data, met, metric_name): """Filter unecessary measurements if not 'use_all_resource_revisions' The option 'use_all_resource_revisions' is useful when using Gnocchi with the patch introduced in https://github.com/gnocchixyz/gnocchi/pull/1059. That patch can cause queries to return more than one entry per granularity (timespan), according to the revisions a resource has. This can be problematic when using the 'mutate' option of Cloudkitty. Therefore, this option ('use_all_resource_revisions') allows operators to discard all datapoints returned from Gnocchi, but the last one in the granularity queried by CloudKitty. The default behavior is maintained, which means, CloudKitty always use all the data points returned. When the 'mutate' option is not used, we need to sum all the quantities, and use this value with the latest version of the attributes received. Otherwise, we will miss the complete accounting for the time frame where the revision happened. """ use_all_resource_revisions = met[ 'extra_args']['use_all_resource_revisions'] LOG.debug("Configuration use_all_resource_revisions set to [%s] for " "metric [%s]", use_all_resource_revisions, metric_name) if data and not use_all_resource_revisions: if "id" not in data[0].get('group', {}).keys(): LOG.debug("There is no ID id in the groupby section and we " "are trying to use 'use_all_resource_revisions'. " "However, without an ID there is not much we can do " "to identify the revisions for a resource.") return data original_data = copy.deepcopy(data) # Here we order the data in a way to maintain the latest revision # as the principal element to be used. 
We are assuming that there # is a revision_start attribute, which denotes when the revision # was created. If there is no revision start, we cannot do much. data.sort(key=lambda x: (x["group"]["id"], x["group"]["revision_start"]), reverse=False) # We just care about the oldest entry per resource in the # given time slice (configured granularity in Cloudkitty) regarding # the attributes. For the quantity, we still want to use all the # quantity elements summing up the value for all the revisions. map_id_entry = {d["group"]['id']: d for d in data} single_entries_per_id = list(map_id_entry.values()) GnocchiCollector.zero_quantity_values(single_entries_per_id) for element in original_data: LOG.debug("Processing entry [%s] for original data from " "Gnocchi to sum all of the revisions if needed for " "metric [%s].", element, metric_name) group_entry = element.get('group') if not group_entry: LOG.warning("No groupby section found for element [%s].", element) continue entry_id = group_entry.get('id') if not entry_id: LOG.warning("No ID attribute found for element [%s].", element) continue first_measure = element.get('measures') if first_measure: second_measure = first_measure.get('measures') if second_measure: aggregated_value = second_measure.get('aggregated', []) if len(aggregated_value) == 1: actual_aggregated_value = aggregated_value[0] if len(actual_aggregated_value) == 3: value_to_add = actual_aggregated_value[2] entry = map_id_entry[entry_id] old_value = list( entry['measures']['measures'][ 'aggregated'][0]) new_value = copy.deepcopy(old_value) new_value[2] += value_to_add entry['measures']['measures'][ 'aggregated'][0] = tuple(new_value) LOG.debug("Adding value [%s] to value [%s] " "in entry [%s] for metric [%s].", value_to_add, old_value, entry, metric_name) LOG.debug("Replaced list of data points [%s] with [%s] for " "metric [%s]", original_data, single_entries_per_id, metric_name) data = single_entries_per_id return data @staticmethod def zero_quantity_values(single_entries_per_id): """Cleans the quantity value of the entry for further processing.""" for single_entry in single_entries_per_id: first_measure = single_entry.get('measures') if first_measure: second_measure = first_measure.get('measures') if second_measure: aggregated_value = second_measure.get('aggregated', []) if len(aggregated_value) == 1: actual_aggregated_value = aggregated_value[0] # We need to convert the tuple to a list actual_aggregated_value = list(actual_aggregated_value) if len(actual_aggregated_value) == 3: LOG.debug("Zeroing aggregated value for single " "entry [%s].", single_entry) # We are going to zero this elements, as we # will be summing all of them later. actual_aggregated_value[2] = 0 # Convert back to tuple aggregated_value[0] = tuple( actual_aggregated_value) else: LOG.warning("We expect the actual aggregated " "value to be a list of 3 elements." " The first one is a timestamp, " "the second the granularity, and " "the last one the quantity " "measured. But we got a different " "structure: [%s]. for entry [%s].", actual_aggregated_value, single_entry) else: LOG.warning("Aggregated value return does not " "have the expected size. 
Expected 1, " "but got [%s].", len(aggregated_value)) else: LOG.debug('Second measure of the aggregates API for ' 'entry [%s] is empty.', single_entry) else: LOG.debug('First measure of the aggregates API for entry ' '[%s] is empty.', single_entry) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/collector/prometheus.py0000664000175000017500000001746100000000000022747 0ustar00zuulzuul00000000000000# Copyright 2018 Objectif Libre # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # from decimal import Decimal from decimal import localcontext from decimal import ROUND_HALF_UP from oslo_config import cfg from oslo_log import log from voluptuous import All from voluptuous import In from voluptuous import Optional from voluptuous import Required from voluptuous import Schema from cloudkitty import collector from cloudkitty.collector.exceptions import CollectError from cloudkitty.common.prometheus_client import PrometheusClient from cloudkitty.common.prometheus_client import PrometheusResponseError from cloudkitty import utils as ck_utils from cloudkitty.utils import tz as tzutils LOG = log.getLogger(__name__) PROMETHEUS_COLLECTOR_OPTS = 'collector_prometheus' collector_prometheus_opts = [ cfg.StrOpt( 'prometheus_url', default='', help='Prometheus service URL', ), cfg.StrOpt( 'prometheus_user', help='Prometheus user (for basic auth only)', ), cfg.StrOpt( 'prometheus_password', help='Prometheus user password (for basic auth only)', secret=True, ), cfg.StrOpt( 'cafile', help='Custom certificate authority file path', ), cfg.BoolOpt( 'insecure', default=False, help='Explicitly trust untrusted HTTPS responses', ), ] cfg.CONF.register_opts(collector_prometheus_opts, PROMETHEUS_COLLECTOR_OPTS) CONF = cfg.CONF PROMETHEUS_EXTRA_SCHEMA = { Required('extra_args', default={}): { Required('aggregation_method', default='max'): In([ 'avg', 'count', 'max', 'min', 'stddev', 'stdvar', 'sum' ]), Optional('query_function'): In([ 'abs', 'ceil', 'exp', 'floor', 'ln', 'log2', 'log10', 'round', 'sqrt' ]), Optional('range_function'): In([ 'changes', 'delta', 'deriv', 'idelta', 'irange', 'irate', 'rate' ]), Optional('query_prefix', default=''): All(str), Optional('query_suffix', default=''): All(str), } } class PrometheusCollector(collector.BaseCollector): collector_name = 'prometheus' def __init__(self, **kwargs): super(PrometheusCollector, self).__init__(**kwargs) url = CONF.collector_prometheus.prometheus_url user = CONF.collector_prometheus.prometheus_user password = CONF.collector_prometheus.prometheus_password verify = True if CONF.collector_prometheus.cafile: verify = CONF.collector_prometheus.cafile elif CONF.collector_prometheus.insecure: verify = False self._conn = PrometheusClient( url, auth=(user, password) if user and password else None, verify=verify, ) @staticmethod def check_configuration(conf): conf = collector.BaseCollector.check_configuration(conf) metric_schema = Schema(collector.METRIC_BASE_SCHEMA).extend( PROMETHEUS_EXTRA_SCHEMA, ) 
output = {} for metric_name, metric in conf.items(): output[metric_name] = metric_schema(metric) return output def _format_data(self, metric_name, scope_key, scope_id, start, end, data): """Formats Prometheus data format to Cloudkitty data format. Returns metadata, groupby, qty """ metadata = {} for meta in self.conf[metric_name]['metadata']: metadata[meta] = data['metric'].get(meta, '') groupby = {scope_key: scope_id} for meta in self.conf[metric_name]['groupby']: groupby[meta] = data['metric'].get(meta, '') with localcontext() as ctx: ctx.prec = 9 ctx.rounding = ROUND_HALF_UP qty = ck_utils.convert_unit( +Decimal(data['value'][1]), self.conf[metric_name]['factor'], self.conf[metric_name]['offset'], ) mutate_map = self.conf[metric_name].get('mutate_map') qty = ck_utils.mutate(qty, self.conf[metric_name]['mutate'], mutate_map=mutate_map) return metadata, groupby, qty def fetch_all(self, metric_name, start, end, scope_id, q_filter=None): """Returns metrics to be valorized.""" scope_key = CONF.collect.scope_key method = self.conf[metric_name]['extra_args']['aggregation_method'] query_function = self.conf[metric_name]['extra_args'].get( 'query_function') range_function = self.conf[metric_name]['extra_args'].get( 'range_function') groupby = self.conf[metric_name].get('groupby', []) metadata = self.conf[metric_name].get('metadata', []) query_prefix = self.conf[metric_name]['extra_args']['query_prefix'] query_suffix = self.conf[metric_name]['extra_args']['query_suffix'] period = tzutils.diff_seconds(end, start) time = end # The metric with the period query = '{0}{{{1}="{2}"}}[{3}s]'.format( metric_name, scope_key, scope_id, period ) # Applying the aggregation_method or the range_function on # a Range Vector if range_function is not None: query = "{0}({1})".format( range_function, query ) else: query = "{0}_over_time({1})".format( method, query ) # Applying the query_function if query_function is not None: query = "{0}({1})".format( query_function, query ) # Applying the aggregation_method on a Instant Vector query = "{0}({1})".format( method, query ) # Filter by groupby and metadata query = "{0} by ({1})".format( query, ', '.join(groupby + metadata) ) # Add custom query prefix if query_prefix: query = "{0} {1}".format(query_prefix, query) # Add custom query suffix if query_suffix: query = "{0} {1}".format(query, query_suffix) try: res = self._conn.get_instant( query, time.isoformat(), ) except PrometheusResponseError as e: raise CollectError(*e.args) if res['status'] == 'error': error_type = res['errorType'] error_msg = res['error'] raise CollectError("%s: %s" % (error_type, error_msg)) # If the query returns an empty dataset, # return an empty list if not res['data']['result']: return [] formatted_resources = [] for item in res['data']['result']: metadata, groupby, qty = self._format_data( metric_name, scope_key, scope_id, start, end, item, ) point = self._create_data_point(self.conf[metric_name], qty, 0, groupby, metadata, start) formatted_resources.append(point) return formatted_resources ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1727866639.2354863 cloudkitty-21.0.0/cloudkitty/common/0000775000175000017500000000000000000000000017473 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/common/__init__.py0000664000175000017500000000000000000000000021572 
0ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/common/config.py0000664000175000017500000000623300000000000021316 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright 2014 Objectif Libre # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # import copy import itertools import cloudkitty.api.app import cloudkitty.collector.gnocchi import cloudkitty.collector.prometheus import cloudkitty.config import cloudkitty.fetcher import cloudkitty.fetcher.gnocchi import cloudkitty.fetcher.keystone import cloudkitty.fetcher.prometheus import cloudkitty.fetcher.source import cloudkitty.orchestrator import cloudkitty.service import cloudkitty.storage import cloudkitty.storage.v1.hybrid.backends.gnocchi import cloudkitty.storage.v2.elasticsearch import cloudkitty.storage.v2.influx import cloudkitty.storage.v2.opensearch import cloudkitty.utils __all__ = ['list_opts'] _opts = [ ('api', list(itertools.chain( cloudkitty.api.app.api_opts,))), ('collect', list(itertools.chain( cloudkitty.collector.collect_opts))), ('collector_gnocchi', list(itertools.chain( cloudkitty.collector.gnocchi.collector_gnocchi_opts))), ('collector_prometheus', list(itertools.chain( cloudkitty.collector.prometheus.collector_prometheus_opts))), ('fetcher', list(itertools.chain( cloudkitty.fetcher.fetcher_opts))), ('fetcher_gnocchi', list(itertools.chain( cloudkitty.fetcher.gnocchi.gfetcher_opts, cloudkitty.fetcher.gnocchi.fetcher_gnocchi_opts))), ('fetcher_keystone', list(itertools.chain( cloudkitty.fetcher.keystone.fetcher_keystone_opts))), ('fetcher_prometheus', list(itertools.chain( cloudkitty.fetcher.prometheus.fetcher_prometheus_opts))), ('fetcher_source', list(itertools.chain( cloudkitty.fetcher.source.fetcher_source_opts))), ('orchestrator', list(itertools.chain( cloudkitty.orchestrator.orchestrator_opts))), ('output', list(itertools.chain( cloudkitty.config.output_opts))), ('state', list(itertools.chain( cloudkitty.config.state_opts))), ('storage', list(itertools.chain( cloudkitty.storage.storage_opts))), ('storage_influxdb', list(itertools.chain( cloudkitty.storage.v2.influx.influx_storage_opts))), ('storage_elasticsearch', list(itertools.chain( cloudkitty.storage.v2.elasticsearch.elasticsearch_storage_opts))), ('storage_opensearch', list(itertools.chain( cloudkitty.storage.v2.opensearch.opensearch_storage_opts))), ('storage_gnocchi', list(itertools.chain( cloudkitty.storage.v1.hybrid.backends.gnocchi.gnocchi_storage_opts))), (None, list(itertools.chain( cloudkitty.api.app.auth_opts, cloudkitty.service.service_opts))), ] def list_opts(): return [(g, copy.deepcopy(o)) for g, o in _opts] ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/common/context.py0000664000175000017500000000165100000000000021534 0ustar00zuulzuul00000000000000# Copyright 2019 Objectif Libre # All Rights Reserved. 
# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # from oslo_context import context from cloudkitty.common import policy class RequestContext(context.RequestContext): def __init__(self, is_admin=None, **kwargs): super(RequestContext, self).__init__(is_admin=is_admin, **kwargs) if self.is_admin is None: self.is_admin = policy.check_is_admin(self) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/common/custom_session.py0000664000175000017500000000216000000000000023121 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # import logging import requests from keystoneauth1 import session as ks_session LOG = logging.getLogger(__name__) def create_custom_session(session_options, pool_size): LOG.debug("Using custom connection pool size: %s", pool_size) session = requests.Session() session.adapters['http://'] = ks_session.TCPKeepAliveAdapter( pool_maxsize=pool_size) session.adapters['https://'] = ks_session.TCPKeepAliveAdapter( pool_maxsize=pool_size) return ks_session.Session(session=session, **session_options) ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1727866639.2354863 cloudkitty-21.0.0/cloudkitty/common/db/0000775000175000017500000000000000000000000020060 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/common/db/__init__.py0000664000175000017500000000000000000000000022157 0ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1727866639.2354863 cloudkitty-21.0.0/cloudkitty/common/db/alembic/0000775000175000017500000000000000000000000021454 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/common/db/alembic/__init__.py0000664000175000017500000000000000000000000023553 0ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/common/db/alembic/alembic.ini0000664000175000017500000000213700000000000023554 0ustar00zuulzuul00000000000000# A generic, single database configuration. 
[alembic] # path to migration scripts script_location = alembic # template used to generate migration files # file_template = %%(rev)s_%%(slug)s # max length of characters to apply to the # "slug" field #truncate_slug_length = 40 # set to 'true' to run the environment during # the 'revision' command, regardless of autogenerate # revision_environment = false # set to 'true' to allow .pyc and .pyo files without # a source .py file to be detected as revisions in the # versions/ directory # sourceless = false # sqlalchemy.url = driver://user:pass@localhost/dbname # Logging configuration [loggers] keys = root,sqlalchemy,alembic [handlers] keys = console [formatters] keys = generic [logger_root] level = WARN handlers = console qualname = [logger_sqlalchemy] level = WARN handlers = qualname = sqlalchemy.engine [logger_alembic] level = INFO handlers = qualname = alembic [handler_console] class = StreamHandler args = (sys.stderr,) level = NOTSET formatter = generic [formatter_generic] format = %(levelname)-5.5s [%(name)s] %(message)s datefmt = %H:%M:%S ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/common/db/alembic/env.py0000664000175000017500000000301300000000000022613 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright 2014 Objectif Libre # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # from logging import config as log_config from alembic import context from cloudkitty import db config = context.config log_config.fileConfig(config.config_file_name) def run_migrations_online(target_metadata, version_table): """Run migrations in 'online' mode. In this scenario we need to create an Engine and associate a connection with the context. :param target_metadata: Model's metadata used for autogenerate support. :param version_table: Override the default version table for alembic. """ with db.session_for_write() as session: engine = session.get_bind() with engine.connect() as connection: context.configure(connection=connection, target_metadata=target_metadata, version_table=version_table) with context.begin_transaction(): context.run_migrations() ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/common/db/alembic/migration.py0000664000175000017500000000405000000000000024016 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright 2014 Objectif Libre # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
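# Illustrative usage of the helpers defined below (the repository path is an
# assumption, not part of the original source):
#   config = load_alembic_config('/path/to/alembic/repo')
#   upgrade(config, None)  # a null version falls back to upgrading to 'head'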
# import os import alembic from alembic import config as alembic_config ALEMBIC_INI_PATH = os.path.join(os.path.dirname(__file__), 'alembic.ini') def load_alembic_config(repo_path, ini_path=ALEMBIC_INI_PATH): if not os.path.exists(repo_path): raise Exception('Repo path (%s) not found.' % repo_path) if not os.path.exists(ini_path): raise Exception('Ini path (%s) not found.' % ini_path) config = alembic_config.Config(ini_path) config.set_main_option('script_location', repo_path) return config def upgrade(config, version): return alembic.command.upgrade(config, version or 'head') def version(config): return alembic.command.current(config) def revision(config, message='', autogenerate=False): """Creates template for migration. :param message: Text that will be used for migration title :type message: string :param autogenerate: If True - generates diff based on current database state :type autogenerate: bool """ return alembic.command.revision(config, message=message, autogenerate=autogenerate) def stamp(config, revision): """Stamps database with provided revision. :param revision: Should match one from repository or head - to stamp database with most recent revision :type revision: string """ return alembic.command.stamp(config, revision=revision) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/common/db/models.py0000664000175000017500000000205100000000000021713 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright 2016 Objectif Libre # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # from sqlalchemy.ext import declarative NAMING_CONVENTION = { "ix": 'ix_%(column_0_label)s', "uq": "uq_%(table_name)s_%(column_0_name)s", "ck": "ck_%(table_name)s_%(constraint_name)s", "fk": "fk_%(table_name)s_%(column_0_name)s_%(referred_table_name)s", "pk": "pk_%(table_name)s"} def get_base(): base = declarative.declarative_base() base.metadata.naming_convention = NAMING_CONVENTION return base ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/common/defaults.py0000664000175000017500000000374400000000000021664 0ustar00zuulzuul00000000000000# Copyright 2016 Hewlett Packard Enterprise Development Corporation, LP # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
# from oslo_config import cfg from oslo_middleware import cors from oslo_policy import opts as policy_opts def set_config_defaults(): """This method updates all configuration default values.""" set_cors_middleware_defaults() # TODO(gmann): Remove setting the default value of config policy_file # once oslo_policy change the default value to 'policy.yaml'. # https://github.com/openstack/oslo.policy/blob/a626ad12fe5a3abd49d70e3e5b95589d279ab578/oslo_policy/opts.py#L49 DEFAULT_POLICY_FILE = 'policy.yaml' policy_opts.set_defaults(cfg.CONF, DEFAULT_POLICY_FILE) def set_cors_middleware_defaults(): """Update default configuration options for oslo.middleware.""" cors.set_defaults( allow_headers=['X-Auth-Token', 'X-Subject-Token', 'X-Roles', 'X-User-Id', 'X-Domain-Id', 'X-Project-Id', 'X-Tenant-Id', 'X-OpenStack-Request-ID'], expose_headers=['X-Auth-Token', 'X-Subject-Token', 'X-Service-Token', 'X-OpenStack-Request-ID'], allow_methods=['GET', 'PUT', 'POST', 'DELETE', 'PATCH']) ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1727866639.2354863 cloudkitty-21.0.0/cloudkitty/common/policies/0000775000175000017500000000000000000000000021302 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/common/policies/__init__.py0000664000175000017500000000324000000000000023412 0ustar00zuulzuul00000000000000# Copyright 2017 GohighSec. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import itertools from cloudkitty.common.policies import base from cloudkitty.common.policies.v1 import collector as v1_collector from cloudkitty.common.policies.v1 import info as v1_info from cloudkitty.common.policies.v1 import rating as v1_rating from cloudkitty.common.policies.v1 import report as v1_report from cloudkitty.common.policies.v1 import storage as v1_storage from cloudkitty.common.policies.v2 import dataframes as v2_dataframes from cloudkitty.common.policies.v2 import rating as v2_rating from cloudkitty.common.policies.v2 import scope as v2_scope from cloudkitty.common.policies.v2 import summary as v2_summary from cloudkitty.common.policies.v2 import tasks as v2_tasks def list_rules(): return itertools.chain( base.list_rules(), v1_collector.list_rules(), v1_info.list_rules(), v1_rating.list_rules(), v1_report.list_rules(), v1_storage.list_rules(), v2_dataframes.list_rules(), v2_rating.list_rules(), v2_scope.list_rules(), v2_summary.list_rules(), v2_tasks.list_rules() ) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/common/policies/base.py0000664000175000017500000000216500000000000022572 0ustar00zuulzuul00000000000000# Copyright 2017 GohighSec. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_policy import policy RULE_ADMIN_OR_OWNER = 'rule:admin_or_owner' ROLE_ADMIN = 'role:admin' UNPROTECTED = '' rules = [ policy.RuleDefault( name='context_is_admin', check_str='role:admin'), policy.RuleDefault( name='admin_or_owner', check_str='is_admin:True or ' '(role:admin and is_admin_project:True) or ' 'project_id:%(project_id)s'), policy.RuleDefault( name='default', check_str=UNPROTECTED) ] def list_rules(): return rules ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1727866639.2354863 cloudkitty-21.0.0/cloudkitty/common/policies/v1/0000775000175000017500000000000000000000000021630 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/common/policies/v1/__init__.py0000664000175000017500000000000000000000000023727 0ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/common/policies/v1/collector.py0000664000175000017500000000430600000000000024173 0ustar00zuulzuul00000000000000# Copyright 2017 GohighSec. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
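# Illustrative note: the collector policies below all default to base.ROLE_ADMIN,
# so only admins can read or manage service-to-collector mappings out of the box.
# An operator could relax a single rule via a policy.yaml override, for example
# (hypothetical): collector:get_mapping: 'role:admin or role:reader'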
from oslo_policy import policy from cloudkitty.common.policies import base collector_policies = [ policy.DocumentedRuleDefault( name='collector:list_mappings', check_str=base.ROLE_ADMIN, description='Return the list of every services mapped to a collector.', operations=[{'path': '/v1/collector/mappings', 'method': 'LIST'}]), policy.DocumentedRuleDefault( name='collector:get_mapping', check_str=base.ROLE_ADMIN, description='Return a service to collector mapping.', operations=[{'path': '/v1/collector/mappings/{service_id}', 'method': 'GET'}]), policy.DocumentedRuleDefault( name='collector:manage_mapping', check_str=base.ROLE_ADMIN, description='Manage a service to collector mapping.', operations=[{'path': '/v1/collector/mappings', 'method': 'POST'}, {'path': '/v1/collector/mappings/{service_id}', 'method': 'DELETE'}]), policy.DocumentedRuleDefault( name='collector:get_state', check_str=base.ROLE_ADMIN, description='Query the enable state of a collector.', operations=[{'path': '/v1/collector/states/{collector_id}', 'method': 'GET'}]), policy.DocumentedRuleDefault( name='collector:update_state', check_str=base.ROLE_ADMIN, description='Set the enable state of a collector.', operations=[{'path': '/v1/collector/states/{collector_id}', 'method': 'PUT'}]) ] def list_rules(): return collector_policies ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/common/policies/v1/info.py0000664000175000017500000000403700000000000023141 0ustar00zuulzuul00000000000000# Copyright 2017 GohighSec. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
from oslo_policy import policy from cloudkitty.common.policies import base info_policies = [ policy.DocumentedRuleDefault( name='info:list_services_info', check_str=base.UNPROTECTED, description='List available services information in Cloudkitty.', operations=[{'path': '/v1/info/services', 'method': 'LIST'}]), policy.DocumentedRuleDefault( name='info:get_service_info', check_str=base.UNPROTECTED, description='Get specified service information.', operations=[{'path': '/v1/info/services/{metric_id}', 'method': 'GET'}]), policy.DocumentedRuleDefault( name='info:list_metrics_info', check_str=base.UNPROTECTED, description='List available metrics information in Cloudkitty.', operations=[{'path': '/v1/info/metrics', 'method': 'LIST'}]), policy.DocumentedRuleDefault( name='info:get_metric_info', check_str=base.UNPROTECTED, description='Get specified metric information.', operations=[{'path': '/v1/info/metrics/{metric_id}', 'method': 'GET'}]), policy.DocumentedRuleDefault( name='info:get_config', check_str=base.UNPROTECTED, description='Get current configuration in Cloudkitty.', operations=[{'path': '/v1/info/config', 'method': 'GET'}]) ] def list_rules(): return info_policies ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/common/policies/v1/rating.py0000664000175000017500000000407100000000000023470 0ustar00zuulzuul00000000000000# Copyright 2017 GohighSec. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_policy import policy from cloudkitty.common.policies import base rating_policies = [ policy.DocumentedRuleDefault( name='rating:list_modules', check_str=base.ROLE_ADMIN, description='Return the list of loaded modules in Cloudkitty.', operations=[{'path': '/v1/rating/modules', 'method': 'LIST'}]), policy.DocumentedRuleDefault( name='rating:get_module', check_str=base.ROLE_ADMIN, description='Get specified module.', operations=[{'path': '/v1/rating/modules/{module_id}', 'method': 'GET'}]), policy.DocumentedRuleDefault( name='rating:update_module', check_str=base.ROLE_ADMIN, description='Change the state and priority of a module.', operations=[{'path': '/v1/rating/modules/{module_id}', 'method': 'PUT'}]), policy.DocumentedRuleDefault( name='rating:quote', check_str=base.UNPROTECTED, description='Get an instant quote based on multiple resource ' 'descriptions.', operations=[{'path': '/v1/rating/quote', 'method': 'POST'}]), policy.DocumentedRuleDefault( name='rating:module_config', check_str=base.ROLE_ADMIN, description='Trigger a rating module list reload.', operations=[{'path': '/v1/rating/reload_modules', 'method': 'GET'}]) ] def list_rules(): return rating_policies ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/common/policies/v1/report.py0000664000175000017500000000300500000000000023513 0ustar00zuulzuul00000000000000# Copyright 2017 GohighSec. # All Rights Reserved. 
# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_policy import policy from cloudkitty.common.policies import base report_policies = [ policy.DocumentedRuleDefault( name='report:list_tenants', check_str=base.ROLE_ADMIN, description='Return the list of rated tenants.', operations=[{'path': '/v1/report/tenants', 'method': 'GET'}]), policy.DocumentedRuleDefault( name='report:get_summary', check_str=base.RULE_ADMIN_OR_OWNER, description='Return the summary to pay for a given period.', operations=[{'path': '/v1/report/summary', 'method': 'GET'}]), policy.DocumentedRuleDefault( name='report:get_total', check_str=base.RULE_ADMIN_OR_OWNER, description='Return the amount to pay for a given period.', operations=[{'path': '/v1/report/total', 'method': 'GET'}]) ] def list_rules(): return report_policies ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/common/policies/v1/storage.py0000664000175000017500000000206000000000000023644 0ustar00zuulzuul00000000000000# Copyright 2017 GohighSec. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_policy import policy from cloudkitty.common.policies import base storage_policies = [ policy.DocumentedRuleDefault( name='storage:list_data_frames', check_str=base.RULE_ADMIN_OR_OWNER, description='Return a list of rated resources for a time period ' 'and a tenant.', operations=[{'path': '/v1/storage/dataframes', 'method': 'GET'}]) ] def list_rules(): return storage_policies ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1727866639.2354863 cloudkitty-21.0.0/cloudkitty/common/policies/v2/0000775000175000017500000000000000000000000021631 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/common/policies/v2/__init__.py0000664000175000017500000000000000000000000023730 0ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/common/policies/v2/dataframes.py0000664000175000017500000000233000000000000024310 0ustar00zuulzuul00000000000000# Copyright 2019 Objectif Libre # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # from oslo_policy import policy from cloudkitty.common.policies import base dataframes_policies = [ policy.DocumentedRuleDefault( name='dataframes:add', check_str=base.ROLE_ADMIN, description='Add one or several DataFrames', operations=[{'path': '/v2/dataframes', 'method': 'POST'}]), policy.DocumentedRuleDefault( name='dataframes:get', check_str=base.RULE_ADMIN_OR_OWNER, description='Get DataFrames', operations=[{'path': '/v2/dataframes', 'method': 'GET'}]), ] def list_rules(): return dataframes_policies ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/common/policies/v2/rating.py0000664000175000017500000000302400000000000023466 0ustar00zuulzuul00000000000000# Copyright 2019 Objectif Libre. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_policy import policy from cloudkitty.common.policies import base rating_policies = [ policy.DocumentedRuleDefault( name='v2_rating:list_modules', check_str=base.ROLE_ADMIN, description='Returns the list of loaded modules in Cloudkitty.', operations=[{'path': '/v2/rating/modules', 'method': 'GET'}]), policy.DocumentedRuleDefault( name='v2_rating:get_module', check_str=base.ROLE_ADMIN, description='Get specified module.', operations=[{'path': '/v2/rating/modules/{module_id}', 'method': 'GET'}]), policy.DocumentedRuleDefault( name='v2_rating:update_module', check_str=base.ROLE_ADMIN, description='Change the state and priority of a module.', operations=[{'path': '/v2/rating/modules/{module_id}', 'method': 'PUT'}]) ] def list_rules(): return rating_policies ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/common/policies/v2/scope.py0000664000175000017500000000333400000000000023317 0ustar00zuulzuul00000000000000# Copyright 2019 Objectif Libre # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
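# NOTE(editor): illustrative sketch only, not part of this module. The
# 'dataframes:add' rule above guards POST /v2/dataframes; the payload for
# that endpoint is built from DataFrame objects as defined in
# cloudkitty/dataframe.py (further down in this archive). A minimal frame,
# assuming the documented dict layout, could be built like this; the metric
# name, quantities and group-by values are placeholders, and the exact
# request envelope expected by the API may differ from the raw JSON shown
# here.
def _example_dataframe_payload():
    import datetime

    from cloudkitty import dataframe

    point = dataframe.DataPoint(
        unit='GiB', qty='1.2', price='0.04',
        groupby={'project_id': 'demo-project'},
        metadata={'flavor': 'm1.small'})
    frame = dataframe.DataFrame(
        start=datetime.datetime(2024, 1, 1),
        end=datetime.datetime(2024, 1, 1, 1),
        usage={'volume.size': [point]})
    # JSON document matching the schema validated by DATAFRAME_SCHEMA.
    return frame.json()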
# from oslo_policy import policy from cloudkitty.common.policies import base scope_policies = [ policy.DocumentedRuleDefault( name='scope:get_state', check_str=base.ROLE_ADMIN, description='Get the state of one or several scopes', operations=[{'path': '/v2/scope', 'method': 'GET'}]), policy.DocumentedRuleDefault( name='scope:reset_state', check_str=base.ROLE_ADMIN, description='Reset the state of one or several scopes', operations=[{'path': '/v2/scope', 'method': 'PUT'}]), policy.DocumentedRuleDefault( name='scope:patch_state', check_str=base.ROLE_ADMIN, description='Enables operators to patch a storage scope', operations=[{'path': '/v2/scope', 'method': 'PATCH'}]), policy.DocumentedRuleDefault( name='scope:post_state', check_str=base.ROLE_ADMIN, description='Enables operators to create a storage scope', operations=[{'path': '/v2/scope', 'method': 'POST'}]), ] def list_rules(): return scope_policies ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/common/policies/v2/summary.py0000664000175000017500000000174700000000000023711 0ustar00zuulzuul00000000000000# Copyright 2018 Objectif Libre # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # from oslo_policy import policy from cloudkitty.common.policies import base example_policies = [ policy.DocumentedRuleDefault( name='summary:get_summary', check_str=base.RULE_ADMIN_OR_OWNER, description='Get a rating summary', operations=[{'path': '/v2/summary', 'method': 'GET'}]), ] def list_rules(): return example_policies ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/common/policies/v2/tasks.py0000664000175000017500000000236200000000000023333 0ustar00zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
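# NOTE(editor): illustrative sketch only, not part of this module. The scope
# and summary rules above are each bound to a concrete v2 endpoint; a raw
# HTTP call against one of them could look like the following. The endpoint
# URL and token are hypothetical placeholders, and optional query parameters
# are omitted.
def _example_get_scope_state(cloudkitty_endpoint, token):
    import requests

    # 'scope:get_state' above restricts this call to the admin role by
    # default.
    resp = requests.get(
        '{}/v2/scope'.format(cloudkitty_endpoint),
        headers={'X-Auth-Token': token})
    resp.raise_for_status()
    return resp.json()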
# from oslo_policy import policy from cloudkitty.common.policies import base schedule_policies = [ policy.DocumentedRuleDefault( name='schedule:task_reprocesses', check_str=base.ROLE_ADMIN, description='Schedule a scope for reprocessing', operations=[{'path': '/v2/task/reprocesses', 'method': 'POST'}]), policy.DocumentedRuleDefault( name='schedule:get_task_reprocesses', check_str=base.ROLE_ADMIN, description='Get reprocessing schedule tasks for scopes.', operations=[{'path': '/v2/task/reprocesses', 'method': 'GET'}]), ] def list_rules(): return schedule_policies ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/common/policy.py0000664000175000017500000001263400000000000021352 0ustar00zuulzuul00000000000000# Copyright (c) 2011 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # Borrowed from cinder (cinder/policy.py) import copy import sys from oslo_config import cfg from oslo_log import log as logging from oslo_policy import opts as policy_opts from oslo_policy import policy from oslo_utils import excutils from cloudkitty.common import policies LOG = logging.getLogger(__name__) CONF = cfg.CONF # TODO(gmann): Remove setting the default value of config policy_file # once oslo_policy change the default value to 'policy.yaml'. # https://github.com/openstack/oslo.policy/blob/a626ad12fe5a3abd49d70e3e5b95589d279ab578/oslo_policy/opts.py#L49 DEFAULT_POLICY_FILE = 'policy.yaml' policy_opts.set_defaults(cfg.CONF, DEFAULT_POLICY_FILE) _ENFORCER = None # oslo_policy will read the policy configuration file again when the file # is changed in runtime so the old policy rules will be saved to # saved_file_rules and used to compare with new rules to determine the # rules whether were updated. saved_file_rules = [] # TODO(gpocentek): provide a proper parent class to handle such exceptions class PolicyNotAuthorized(Exception): message = "Policy doesn't allow %(action)s to be performed." code = 403 def __init__(self, **kwargs): self.msg = self.message % kwargs super(PolicyNotAuthorized, self).__init__(self.msg) def __unicode__(self): return str(self.msg) def reset(): global _ENFORCER if _ENFORCER: _ENFORCER.clear() _ENFORCER = None def init(): global _ENFORCER global saved_file_rules if not _ENFORCER: _ENFORCER = policy.Enforcer(CONF) register_rules(_ENFORCER) # Only the rules which are loaded from file may be changed. current_file_rules = _ENFORCER.file_rules current_file_rules = _serialize_rules(current_file_rules) # Checks whether the rules are updated in the runtime if saved_file_rules != current_file_rules: saved_file_rules = copy.deepcopy(current_file_rules) def _serialize_rules(rules): """Serialize all the Rule object as string.""" result = [(rule_name, str(rule)) for rule_name, rule in rules.items()] return sorted(result, key=lambda rule: rule[0]) def authorize(context, action, target): """Verifies that the action is valid on the target in this context. 
:param context: cloudkitty context :param action: string representing the action to be checked this should be colon separated for clarity. i.e. ``compute:create_instance``, ``compute:attach_volume``, ``volume:attach_volume`` :param object: dictionary representing the object of the action for object creation this should be a dictionary representing the location of the object e.g. ``{'project_id': context.project_id}`` :raises PolicyNotAuthorized: if verification fails. """ if CONF.auth_strategy != "keystone": return init() try: LOG.debug('Authenticating user with credentials %(credentials)s', {'credentials': context.to_dict()}) return _ENFORCER.authorize(action, target, context, do_raise=True, exc=PolicyNotAuthorized, action=action) except policy.PolicyNotRegistered: with excutils.save_and_reraise_exception(): LOG.exception('Policy not registered') except Exception: with excutils.save_and_reraise_exception(): LOG.error('Policy check for %(action)s failed with credentials ' '%(credentials)s', {'action': action, 'credentials': context.to_dict()}) def check_is_admin(context): """Whether or not roles contains 'admin' role according to policy setting. """ if CONF.auth_strategy != "keystone": return True init() target = { 'user_id': context.user_id, 'project_id': context.project_id, } credentials = context.to_policy_values() return _ENFORCER.authorize('context_is_admin', target, credentials) def register_rules(enforcer): enforcer.register_defaults(policies.list_rules()) def get_enforcer(): # This method is for use by oslopolicy CLI scripts. Those scripts need the # 'output-file' and 'namespace' options, but having those in sys.argv means # loading the Cloudkitty config options will fail as those are not expected # to be present. So we pass in an arg list with those stripped out. conf_args = [] # Start at 1 because cfg.CONF expects the equivalent of sys.argv[1:] i = 1 while i < len(sys.argv): if sys.argv[i].strip('-') in ['namespace', 'output-file']: i += 2 continue conf_args.append(sys.argv[i]) i += 1 cfg.CONF(conf_args, project='cloudkitty') init() return _ENFORCER ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/common/prometheus_client.py0000664000175000017500000000417100000000000023601 0ustar00zuulzuul00000000000000# Copyright 2019 Objectif Libre # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
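# NOTE(editor): illustrative sketch only, not part of this module. It shows
# the calling pattern for cloudkitty.common.policy.authorize() described
# above; the context argument stands for whatever request context object the
# API layer provides (hypothetical here), and the action string reuses one of
# the rules registered by the policy modules earlier in this archive.
def _example_authorize(context):
    from cloudkitty.common import policy

    try:
        policy.authorize(context, 'collector:list_mappings',
                         {'project_id': context.project_id})
    except policy.PolicyNotAuthorized:
        # The caller is expected to translate this into an HTTP 403.
        raise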
# import requests class PrometheusResponseError(Exception): pass class PrometheusClient(object): INSTANT_QUERY_ENDPOINT = 'query' RANGE_QUERY_ENDPOINT = 'query_range' def __init__(self, url, auth=None, verify=True): self.url = url self.auth = auth self.verify = verify def _get(self, endpoint, params): return requests.get( '{}/{}'.format(self.url, endpoint), params=params, auth=self.auth, verify=self.verify, ) def get_instant(self, query, time=None, timeout=None): res = self._get( self.INSTANT_QUERY_ENDPOINT, params={'query': query, 'time': time, 'timeout': timeout}, ) try: return res.json() except ValueError: raise PrometheusResponseError( 'Could not get a valid json response for ' '{} (response: {})'.format(res.url, res.text) ) def get_range(self, query, start, end, step, timeout=None): res = self._get( self.RANGE_QUERY_ENDPOINT, params={ 'query': query, 'start': start, 'end': end, 'step': step, 'timeout': timeout, }, ) try: return res.json() except ValueError: raise PrometheusResponseError( 'Could not get a valid json response for ' '{} (response: {})'.format(res.url, res.text) ) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/config.py0000664000175000017500000000324200000000000020023 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright 2014 Objectif Libre # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # from oslo_config import cfg from oslo_db import options as db_options # noqa from oslo_messaging import opts # noqa state_opts = [ cfg.StrOpt('backend', default='cloudkitty.backend.file.FileBackend', help='Backend for the state manager.'), cfg.StrOpt('basepath', default='/var/lib/cloudkitty/states/', help='Storage directory for the file state backend.'), ] output_opts = [ cfg.StrOpt('backend', default='cloudkitty.backend.file.FileBackend', help='Backend for the output manager.'), cfg.StrOpt('basepath', default='/var/lib/cloudkitty/states/', help='Storage directory for the file output backend.'), cfg.ListOpt('pipeline', default=['osrf'], help='Output pipeline'), ] cfg.CONF.register_opts(state_opts, 'state') cfg.CONF.register_opts(output_opts, 'output') # oslo.db defaults db_options.set_defaults( cfg.CONF, connection='sqlite:////var/lib/cloudkitty/cloudkitty.sqlite') ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/dataframe.py0000664000175000017500000002303700000000000020506 0ustar00zuulzuul00000000000000# Copyright 2019 Objectif Libre # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
See the # License for the specific language governing permissions and limitations # under the License. # import collections import datetime import decimal import functools import voluptuous from werkzeug import datastructures from cloudkitty.utils import json from cloudkitty.utils import tz as tzutils from cloudkitty.utils import validation as vutils # NOTE(peschk_l): qty and price are converted to strings to avoid # floating-point conversion issues: # Decimal(0.121) == Decimal('0.12099999999999999644728632119') # Decimal(str(0.121)) == Decimal('0.121') DATAPOINT_SCHEMA = voluptuous.Schema({ voluptuous.Required('vol'): { voluptuous.Required('unit'): vutils.get_string_type(), voluptuous.Required('qty'): voluptuous.Coerce(str), }, voluptuous.Required('rating', default={}): { voluptuous.Required('price', default=0): voluptuous.Coerce(str), }, voluptuous.Required('groupby'): voluptuous.Coerce(dict), voluptuous.Required('metadata'): voluptuous.Coerce(dict), }) _DataPointBase = collections.namedtuple( "DataPoint", field_names=("unit", "qty", "price", "groupby", "metadata", "description")) class DataPoint(_DataPointBase): def __new__(cls, unit, qty, price, groupby, metadata, description=None): return _DataPointBase.__new__( cls, unit or "undefined", # NOTE(peschk_l): avoids floating-point issues. decimal.Decimal(str(qty) if isinstance(qty, float) else qty), decimal.Decimal(str(price) if isinstance(price, float) else price), datastructures.ImmutableDict(groupby), datastructures.ImmutableDict(metadata), description ) def set_price(self, price): """Sets the price of the DataPoint and returns a new object.""" return self._replace(price=price) def as_dict(self, legacy=False, mutable=False): """Returns a dict representation of the object. The returned dict is immutable by default and has the following format:: { "vol": { "unit": "GiB", "qty": 1.2, }, "rating": { "price": 0.04, }, "groupby": { "group_one": "one", "group_two": "two", }, "metadata": { "attr_one": "one", "attr_two": "two", }, } The dict can also be returned in the legacy (v1 storage) format. In that case, `groupby` and `metadata` will be removed and merged together into the `desc` key. :param legacy: Defaults to False. If True, returned dict is in legacy format. :type legacy: bool :param mutable: Defaults to False. If True, returns a normal dict instead of an ImmutableDict. :type mutable: bool """ output = { "vol": { "unit": self.unit, "qty": self.qty, }, "rating": { "price": self.price, }, "groupby": dict(self.groupby) if mutable else self.groupby, "metadata": dict(self.metadata) if mutable else self.metadata, } if legacy: desc = output.pop("metadata") desc.update(output.pop("groupby")) output['desc'] = desc return output if mutable else datastructures.ImmutableDict(output) def json(self, legacy=False): """Returns a json representation of the dict returned by `as_dict`. :param legacy: Defaults to False. If True, returned dict is in legacy format. :type legacy: bool :rtype: str """ return json.dumps(self.as_dict(legacy=legacy, mutable=True)) @classmethod def from_dict(cls, dict_, legacy=False): """Returns a new DataPoint instance build from a dict. :param dict_: Dict to build the DataPoint from :type dict_: dict :param legacy: Set to true to convert the dict to a the new format before validating it. 
:rtype: DataPoint """ try: if legacy: dict_['groupby'] = dict_.pop('desc') dict_['metadata'] = {} valid = DATAPOINT_SCHEMA(dict_) return cls( unit=valid["vol"]["unit"], qty=valid["vol"]["qty"], price=valid["rating"]["price"], groupby=valid["groupby"], metadata=valid["metadata"], ) except (voluptuous.Invalid, KeyError) as e: raise ValueError("{} isn't a valid DataPoint: {}".format(dict_, e)) @property def desc(self): output = dict(self.metadata) output.update(self.groupby) return datastructures.ImmutableDict(output) DATAFRAME_SCHEMA = voluptuous.Schema({ voluptuous.Required('period'): { voluptuous.Required('begin'): voluptuous.Any( datetime.datetime, tzutils.dt_from_iso), voluptuous.Required('end'): voluptuous.Any( datetime.datetime, tzutils.dt_from_iso), }, voluptuous.Required('usage'): vutils.IterableValuesDict( str, DataPoint.from_dict), }) class DataFrame(object): __slots__ = ("start", "end", "_usage") def __init__(self, start, end, usage=None): if not isinstance(start, datetime.datetime): raise TypeError( '"start" must be of type datetime.datetime, not {}'.format( type(start))) if not isinstance(end, datetime.datetime): raise TypeError( '"end" must be of type datetime.datetime, not {}'.format( type(end))) if usage is not None and not isinstance(usage, dict): raise TypeError( '"usage" must be a dict, not {}'.format(type(usage))) self.start = start self.end = end self._usage = collections.OrderedDict() if usage: for key in sorted(usage.keys()): self.add_points(usage[key], key) def as_dict(self, legacy=False, mutable=False): output = { "period": {"begin": self.start, "end": self.end}, "usage": { key: [v.as_dict(legacy=legacy, mutable=mutable) for v in val] for key, val in self._usage.items() }, } return output if mutable else datastructures.ImmutableDict(output) def json(self, legacy=False): return json.dumps(self.as_dict(legacy=legacy, mutable=True)) @classmethod def from_dict(cls, dict_, legacy=False): try: schema = DATAFRAME_SCHEMA if legacy: validator = functools.partial(DataPoint.from_dict, legacy=True) # NOTE(peschk_l): __name__ is required for voluptuous exception # message formatting validator.__name__ = 'DataPoint.from_dict' # NOTE(peschk_l): In case the legacy format is required, we # create a new schema where DataPoint.from_dict is called with # legacy=True. The "extend" method does create a new objects, # and replaces existing keys with new ones. schema = DATAFRAME_SCHEMA.extend({ voluptuous.Required('usage'): vutils.IterableValuesDict( str, validator ), }) valid = schema(dict_) return cls( valid["period"]["begin"], valid["period"]["end"], usage=valid["usage"]) except (voluptuous.error.Invalid, KeyError) as e: raise ValueError("{} isn't a valid DataFrame: {}".format(dict_, e)) def add_points(self, points, type_): """Adds multiple points to the DataFrame :param points: DataPoints to add. :type point: list of DataPoints """ if type_ in self._usage: self._usage[type_] += points else: self._usage[type_] = points def add_point(self, point, type_): """Adds a single point to the DataFrame :param point: DataPoint to add. :type point: DataPoint """ if type_ in self._usage: self._usage[type_].append(point) else: self._usage[type_] = [point] def iterpoints(self): """Iterates over all datapoints of the dataframe. Yields (type, point) tuples. :rtype: (str, DataPoint) """ for type_, points in self._usage.items(): for point in points: yield type_, point def itertypes(self): """Iterates over all types of the dataframe. Yields (type, (point, )) tuples. 
:rtype: (str, (DataPoint, )) """ for type_, points in self._usage.items(): yield type_, points def __repr__(self): return 'DataFrame(metrics=[{}])'.format(','.join(self._usage.keys())) ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1727866639.2394865 cloudkitty-21.0.0/cloudkitty/db/0000775000175000017500000000000000000000000016570 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/db/__init__.py0000664000175000017500000000212600000000000020702 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright 2014 Objectif Libre # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # import threading from oslo_db.sqlalchemy import enginefacade _CONTEXT = threading.local() _FACADE = None def _create_facade_lazily(): global _FACADE if _FACADE is None: ctx = enginefacade.transaction_context() ctx.configure(sqlite_fk=True) _FACADE = ctx return _FACADE def session_for_read(): return _create_facade_lazily().reader.using(_CONTEXT) def session_for_write(): return _create_facade_lazily().writer.using(_CONTEXT) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/db/api.py0000664000175000017500000001004200000000000017710 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright 2014 Objectif Libre # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # import abc from oslo_config import cfg from oslo_db import api as db_api _BACKEND_MAPPING = {'sqlalchemy': 'cloudkitty.db.sqlalchemy.api'} IMPL = db_api.DBAPI.from_config(cfg.CONF, backend_mapping=_BACKEND_MAPPING, lazy=True) def get_instance(): """Return a DB API instance.""" return IMPL class State(object, metaclass=abc.ABCMeta): """Base class for state tracking.""" @abc.abstractmethod def get_state(self, name): """Retrieve the current state. :param name: Name of the state :return float: State value """ @abc.abstractmethod def set_state(self, name, state): """Store the state. :param name: Name of the state :param state: State value """ @abc.abstractmethod def get_metadata(self, name): """Retrieve state metadata :param name: Name of the state :return str: Return a json dict with all metadata attached to this state. """ @abc.abstractmethod def set_metadata(self, name, metadata): """Store the state metadata. 
:param name: Name of the state :param metadata: Metadata value """ class ModuleInfo(object, metaclass=abc.ABCMeta): """Base class for module info management.""" @abc.abstractmethod def get_priority(self, name): """Retrieve the module priority. :param name: Name of the module :return int: Priority of the module """ @abc.abstractmethod def set_priority(self, name, priority): """Set the module state. :param name: Name of the module :param priority: New priority of the module """ @abc.abstractmethod def get_state(self, name): """Retrieve the module state. :param name: Name of the module :return bool: State of the module """ @abc.abstractmethod def set_state(self, name, state): """Set the module state. :param name: Name of the module :param value: State of the module """ class NoSuchMapping(Exception): """Raised when the mapping doesn't exist.""" def __init__(self, service): super(NoSuchMapping, self).__init__( "No mapping for service: %s" % service) self.service = service class ServiceToCollectorMapping(object, metaclass=abc.ABCMeta): """Base class for service to collector mapping.""" @abc.abstractmethod def get_mapping(self, service): """Get a mapping. :return mapping: service to collector object. """ @abc.abstractmethod def set_mapping(self, service, collector): """Set a mapping. :param service: Service to work on. :param collector: Collector to prioritize. :return mapping: Service to Collector object. """ @abc.abstractmethod def list_services(self, collector=None): """Retrieve the list of every services mapped. :param collector: Filter on a collector name. :return list(str): List of services' name. """ @abc.abstractmethod def list_mappings(self, collector=None): """Retrieve the list of every mappings. :param collector: Filter on a collector's name. :return [tuple(str, str)]: List of mappings. """ @abc.abstractmethod def delete_mapping(self, service): """Remove a mapping. """ ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1727866639.2394865 cloudkitty-21.0.0/cloudkitty/db/sqlalchemy/0000775000175000017500000000000000000000000020732 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/db/sqlalchemy/__init__.py0000664000175000017500000000000000000000000023031 0ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1727866639.2394865 cloudkitty-21.0.0/cloudkitty/db/sqlalchemy/alembic/0000775000175000017500000000000000000000000022326 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/db/sqlalchemy/alembic/__init__.py0000664000175000017500000000000000000000000024425 0ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/db/sqlalchemy/alembic/env.py0000664000175000017500000000154200000000000023472 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright 2014 Objectif Libre # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # from cloudkitty.common.db.alembic import env # noqa from cloudkitty.db.sqlalchemy import models target_metadata = models.Base.metadata version_table = 'cloudkitty_alembic' env.run_migrations_online(target_metadata, version_table) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/db/sqlalchemy/alembic/script.py.mako0000664000175000017500000000172300000000000025135 0ustar00zuulzuul00000000000000# Copyright ${create_date.year} OpenStack Foundation # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # """${message} Revision ID: ${up_revision} Revises: ${down_revision} Create Date: ${create_date} """ # revision identifiers, used by Alembic. revision = ${repr(up_revision)} down_revision = ${repr(down_revision)} from alembic import op import sqlalchemy as sa ${imports if imports else ""} def upgrade(): ${upgrades if upgrades else "pass"} ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1727866639.2394865 cloudkitty-21.0.0/cloudkitty/db/sqlalchemy/alembic/versions/0000775000175000017500000000000000000000000024176 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000021400000000000011452 xustar0000000000000000118 path=cloudkitty-21.0.0/cloudkitty/db/sqlalchemy/alembic/versions/2ac2217dcbd9_added_support_for_meta_collector.py 22 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/db/sqlalchemy/alembic/versions/2ac2217dcbd9_added_support_for_meta_coll0000664000175000017500000000213400000000000033612 0ustar00zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Added support for meta collector Revision ID: 2ac2217dcbd9 Revises: 464e951dc3b8 Create Date: 2014-09-25 12:41:28.585333 """ # revision identifiers, used by Alembic. 
revision = '2ac2217dcbd9' down_revision = '464e951dc3b8' from alembic import op # noqa: E402 import sqlalchemy as sa # noqa: E402 def upgrade(): op.create_table( 'service_to_collector_mappings', sa.Column('service', sa.String(length=255), nullable=False), sa.Column('collector', sa.String(length=255), nullable=False), sa.PrimaryKeyConstraint('service')) ././@PaxHeader0000000000000000000000000000021300000000000011451 xustar0000000000000000117 path=cloudkitty-21.0.0/cloudkitty/db/sqlalchemy/alembic/versions/385e33fef139_added_priority_to_modules_state.py 22 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/db/sqlalchemy/alembic/versions/385e33fef139_added_priority_to_modules_s0000664000175000017500000000171600000000000033534 0ustar00zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Added priority to modules_state. Revision ID: 385e33fef139 Revises: 2ac2217dcbd9 Create Date: 2015-03-17 17:50:15.229896 """ # revision identifiers, used by Alembic. revision = '385e33fef139' down_revision = '2ac2217dcbd9' from alembic import op # noqa: E402 import sqlalchemy as sa # noqa: E402 def upgrade(): op.add_column( 'modules_state', sa.Column('priority', sa.Integer(), nullable=True)) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/db/sqlalchemy/alembic/versions/464e951dc3b8_initial_migration.py0000664000175000017500000000244200000000000032101 0ustar00zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Initial migration Revision ID: 464e951dc3b8 Revises: None Create Date: 2014-08-05 17:41:34.470183 """ # revision identifiers, used by Alembic. 
revision = '464e951dc3b8' down_revision = None from alembic import op # noqa: E402 import sqlalchemy as sa # noqa: E402 def upgrade(): op.create_table( 'states', sa.Column('name', sa.String(length=255), nullable=False), sa.Column('state', sa.BigInteger(), nullable=False), sa.Column('s_metadata', sa.Text(), nullable=True), sa.PrimaryKeyConstraint('name')) op.create_table( 'modules_state', sa.Column('name', sa.String(length=255), nullable=False), sa.Column('state', sa.Boolean(), nullable=False), sa.PrimaryKeyConstraint('name')) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/db/sqlalchemy/api.py0000664000175000017500000001720400000000000022061 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright 2014 Objectif Libre # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # from oslo_db.sqlalchemy import utils import sqlalchemy from cloudkitty import config # NOQA from cloudkitty import db from cloudkitty.db import api from cloudkitty.db.sqlalchemy import migration from cloudkitty.db.sqlalchemy import models def get_backend(): return DBAPIManager class State(api.State): def get_state(self, name): with db.session_for_read() as session: q = utils.model_query( models.StateInfo, session) q = q.filter(models.StateInfo.name == name) return q.value(models.StateInfo.state) def set_state(self, name, state): with db.session_for_write() as session: try: q = utils.model_query( models.StateInfo, session) q = q.filter(models.StateInfo.name == name) q = q.with_for_update() db_state = q.one() db_state.state = state except sqlalchemy.orm.exc.NoResultFound: db_state = models.StateInfo(name=name, state=state) session.add(db_state) return db_state.state def get_metadata(self, name): with db.session_for_read() as session: q = utils.model_query( models.StateInfo, session) q.filter(models.StateInfo.name == name) return q.value(models.StateInfo.s_metadata) def set_metadata(self, name, metadata): with db.session_for_write() as session: try: q = utils.model_query( models.StateInfo, session) q = q.filter(models.StateInfo.name == name) q = q.with_for_update() db_state = q.one() db_state.s_metadata = metadata except sqlalchemy.orm.exc.NoResultFound: db_state = models.StateInfo(name=name, s_metadata=metadata) session.add(db_state) class ModuleInfo(api.ModuleInfo): """Base class for module info management.""" def get_priority(self, name): with db.session_for_read() as session: q = utils.model_query( models.ModuleStateInfo, session) q = q.filter(models.ModuleStateInfo.name == name) res = q.value(models.ModuleStateInfo.priority) if res: return int(res) else: return 1 def set_priority(self, name, priority): with db.session_for_write() as session: try: q = utils.model_query( models.ModuleStateInfo, session) q = q.filter( models.ModuleStateInfo.name == name) q = q.with_for_update() db_state = q.one() db_state.priority = priority except sqlalchemy.orm.exc.NoResultFound: db_state = models.ModuleStateInfo(name=name, priority=priority) 
session.add(db_state) return int(db_state.priority) def get_state(self, name): with db.session_for_read() as session: try: q = utils.model_query( models.ModuleStateInfo, session) q = q.filter(models.ModuleStateInfo.name == name) res = q.value(models.ModuleStateInfo.state) return bool(res) except sqlalchemy.orm.exc.NoResultFound: return None def set_state(self, name, state): with db.session_for_write() as session: try: q = utils.model_query( models.ModuleStateInfo, session) q = q.filter(models.ModuleStateInfo.name == name) q = q.with_for_update() db_state = q.one() db_state.state = state except sqlalchemy.orm.exc.NoResultFound: db_state = models.ModuleStateInfo(name=name, state=state) session.add(db_state) return bool(db_state.state) class ServiceToCollectorMapping(object): """Base class for service to collector mapping.""" def get_mapping(self, service): with db.session_for_read() as session: try: q = utils.model_query( models.ServiceToCollectorMapping, session) q = q.filter( models.ServiceToCollectorMapping.service == service) return q.one() except sqlalchemy.orm.exc.NoResultFound: raise api.NoSuchMapping(service) def set_mapping(self, service, collector): with db.session_for_write() as session: try: q = utils.model_query( models.ServiceToCollectorMapping, session) q = q.filter( models.ServiceToCollectorMapping.service == service) q = q.with_for_update() db_mapping = q.one() db_mapping.collector = collector except sqlalchemy.orm.exc.NoResultFound: model = models.ServiceToCollectorMapping db_mapping = model(service=service, collector=collector) session.add(db_mapping) return db_mapping def list_services(self, collector=None): with db.session_for_read() as session: q = utils.model_query( models.ServiceToCollectorMapping, session) if collector: q = q.filter( models.ServiceToCollectorMapping.collector == collector) res = q.distinct().values( models.ServiceToCollectorMapping.service) return res def list_mappings(self, collector=None): with db.session_for_read() as session: q = utils.model_query( models.ServiceToCollectorMapping, session) if collector: q = q.filter( models.ServiceToCollectorMapping.collector == collector) res = q.all() return res def delete_mapping(self, service): with db.session_for_write() as session: q = utils.model_query( models.ServiceToCollectorMapping, session) q = q.filter(models.ServiceToCollectorMapping.service == service) r = q.delete() if not r: raise api.NoSuchMapping(service) class DBAPIManager(object): @staticmethod def get_state(): return State() @staticmethod def get_module_info(): return ModuleInfo() @staticmethod def get_service_to_collector_mapping(): return ServiceToCollectorMapping() @staticmethod def get_migration(): return migration ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/db/sqlalchemy/migration.py0000664000175000017500000000240300000000000023274 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright 2014 Objectif Libre # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
See the # License for the specific language governing permissions and limitations # under the License. # import os from cloudkitty.common.db.alembic import migration ALEMBIC_REPO = os.path.join(os.path.dirname(__file__), 'alembic') def upgrade(revision): config = migration.load_alembic_config(ALEMBIC_REPO) return migration.upgrade(config, revision) def version(): config = migration.load_alembic_config(ALEMBIC_REPO) return migration.version(config) def revision(message, autogenerate): config = migration.load_alembic_config(ALEMBIC_REPO) return migration.revision(config, message, autogenerate) def stamp(revision): config = migration.load_alembic_config(ALEMBIC_REPO) return migration.stamp(config, revision) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/db/sqlalchemy/models.py0000664000175000017500000000553100000000000022573 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright 2014 Objectif Libre # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # from oslo_db.sqlalchemy import models import sqlalchemy from sqlalchemy.ext import declarative Base = declarative.declarative_base() class StateInfo(Base, models.ModelBase): """State """ __tablename__ = 'states' name = sqlalchemy.Column(sqlalchemy.String(255), primary_key=True) state = sqlalchemy.Column( sqlalchemy.BigInteger(), nullable=False) s_metadata = sqlalchemy.Column(sqlalchemy.Text(), nullable=True, default='') def __repr__(self): return ('').format( name=self.name, state=self.state, metadata=self.s_metadata) class ModuleStateInfo(Base, models.ModelBase): """Module state info. """ __tablename__ = 'modules_state' name = sqlalchemy.Column(sqlalchemy.String(255), primary_key=True) state = sqlalchemy.Column( sqlalchemy.Boolean(), nullable=False, default=False) priority = sqlalchemy.Column( sqlalchemy.Integer(), default=1) def __repr__(self): return ('').format( name=self.name, state=self.state) def as_dict(self): d = {} for c in self.__table__.columns: d[c.name] = self[c.name] return d class ServiceToCollectorMapping(Base, models.ModelBase): """Collector module state. """ __tablename__ = 'service_to_collector_mappings' service = sqlalchemy.Column(sqlalchemy.String(255), primary_key=True) collector = sqlalchemy.Column(sqlalchemy.String(255), nullable=False) def __repr__(self): return ('').format( service=self.service, collector=self.collector) def as_dict(self): d = {} for c in self.__table__.columns: d[c.name] = self[c.name] return d ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/extension_manager.py0000664000175000017500000000246300000000000022270 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright 2014 Objectif Libre # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # from stevedore import enabled class EnabledExtensionManager(enabled.EnabledExtensionManager): """CloudKitty Rating processor manager Override default EnabledExtensionManager to check for an internal object property in the extension. """ def __init__(self, namespace, invoke_args=(), invoke_kwds={}): def check_enabled(ext): """Check if extension is enabled. """ return ext.obj.enabled super(EnabledExtensionManager, self).__init__( namespace=namespace, check_func=check_enabled, invoke_on_load=True, invoke_args=invoke_args, invoke_kwds=invoke_kwds, ) ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1727866639.2394865 cloudkitty-21.0.0/cloudkitty/fetcher/0000775000175000017500000000000000000000000017623 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/fetcher/__init__.py0000664000175000017500000000222600000000000021736 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright 2015 Objectif Libre # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # import abc from oslo_config import cfg FETCHER_OPTS = 'fetcher' fetcher_opts = [ cfg.StrOpt('backend', default='keystone', help='Driver used to fetch the list of scopes to rate.'), ] cfg.CONF.register_opts(fetcher_opts, 'fetcher') class BaseFetcher(object, metaclass=abc.ABCMeta): """CloudKitty tenants fetcher. Provides Cloudkitty integration with a backend announcing ratable scopes. """ @abc.abstractmethod def get_tenants(self): """Retrieve a list of scopes to rate.""" ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/fetcher/gnocchi.py0000664000175000017500000001351100000000000021610 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright 2018 Objectif Libre # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
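# NOTE(editor): illustrative sketch only, not part of this module. The
# BaseFetcher interface defined above only requires get_tenants(); a minimal
# custom fetcher announcing a fixed list of scopes could look like this. The
# scope ids are hypothetical placeholders, and the concrete fetchers further
# down in this archive (gnocchi, keystone, prometheus, source) follow the
# same pattern against real backends.
from cloudkitty import fetcher as _example_fetcher_base


class _ExampleStaticFetcher(_example_fetcher_base.BaseFetcher):
    """Fetcher returning a hard-coded list of ratable scopes."""

    name = 'example_static'

    def get_tenants(self):
        # A real fetcher would discover these from an external service
        # instead of returning constants.
        return ['scope-0001', 'scope-0002']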
# import requests from gnocchiclient import auth as gauth from gnocchiclient import client as gclient from keystoneauth1 import loading as ks_loading from oslo_config import cfg from oslo_log import log from cloudkitty.common import custom_session from cloudkitty import fetcher LOG = log.getLogger(__name__) FETCHER_GNOCCHI_OPTS = 'fetcher_gnocchi' fetcher_gnocchi_opts = ks_loading.get_auth_common_conf_options() gfetcher_opts = [ cfg.StrOpt('scope_attribute', default='project_id', help='Attribute from which scope_ids should be collected.'), cfg.ListOpt('resource_types', default=['generic'], help='List of gnocchi resource types. All if left blank'), cfg.StrOpt( 'gnocchi_auth_type', default='keystone', choices=['keystone', 'basic'], help='Gnocchi auth type (keystone or basic). Keystone credentials ' 'can be specified through the "auth_section" parameter', ), cfg.StrOpt( 'gnocchi_user', default='', help='Gnocchi user (for basic auth only)', ), cfg.StrOpt( 'gnocchi_endpoint', default='', help='Gnocchi endpoint (for basic auth only)', ), cfg.StrOpt( 'interface', default='internalURL', help='Endpoint URL type (for keystone auth only)', ), cfg.StrOpt( 'region_name', default='RegionOne', help='Region Name', ), cfg.IntOpt( 'http_pool_maxsize', default=requests.adapters.DEFAULT_POOLSIZE, help='If the value is not defined, we use the value defined by ' 'requests.adapters.DEFAULT_POOLSIZE', ) ] cfg.CONF.register_opts(fetcher_gnocchi_opts, FETCHER_GNOCCHI_OPTS) cfg.CONF.register_opts(gfetcher_opts, FETCHER_GNOCCHI_OPTS) ks_loading.register_session_conf_options( cfg.CONF, FETCHER_GNOCCHI_OPTS) ks_loading.register_auth_conf_options( cfg.CONF, FETCHER_GNOCCHI_OPTS) CONF = cfg.CONF class GnocchiFetcher(fetcher.BaseFetcher): """Gnocchi scope_id fetcher.""" name = 'gnocchi' def __init__(self): super(GnocchiFetcher, self).__init__() adapter_options = {'connect_retries': 3} if CONF.fetcher_gnocchi.gnocchi_auth_type == 'keystone': auth_plugin = ks_loading.load_auth_from_conf_options( CONF, FETCHER_GNOCCHI_OPTS, ) adapter_options['interface'] = CONF.fetcher_gnocchi.interface else: auth_plugin = gauth.GnocchiBasicPlugin( user=CONF.fetcher_gnocchi.gnocchi_user, endpoint=CONF.fetcher_gnocchi.gnocchi_endpoint, ) adapter_options['region_name'] = CONF.fetcher_gnocchi.region_name verify = True if CONF.fetcher_gnocchi.cafile: verify = CONF.fetcher_gnocchi.cafile elif CONF.fetcher_gnocchi.insecure: verify = False self._conn = gclient.Client( '1', session=custom_session.create_custom_session( {'auth': auth_plugin, 'verify': verify}, CONF.fetcher_gnocchi.http_pool_maxsize), adapter_options=adapter_options, ) def get_tenants(self): unique_scope_ids = set() total_resources_navigated = 0 scope_attribute = CONF.fetcher_gnocchi.scope_attribute resource_types = CONF.fetcher_gnocchi.resource_types for resource_type in resource_types: while True: search_scopes_query = None if unique_scope_ids: unique_scope_ids_list = list(unique_scope_ids) unique_scope_ids_list.sort() search_scopes_query = {"not": { "in": {scope_attribute: unique_scope_ids_list}} } resources_chunk = self._conn.resource.search( resource_type=resource_type, details=True, query=search_scopes_query ) chunk_len = len(resources_chunk) if chunk_len < 1: LOG.debug("Scopes IDs [%s] loaded. The total number of " "unique scope IDs loaded is [%s]. 
Total number " "of resources navigated [%s].", unique_scope_ids, len(unique_scope_ids), total_resources_navigated) break total_resources_navigated += chunk_len scope_ids = [ resource.get(scope_attribute) for resource in resources_chunk if resource.get(scope_attribute)] unique_scope_ids.update(set(scope_ids)) LOG.debug("Scopes IDs [%s] loaded. The total number of unique " "scopes IDs loaded so far is [%s]. Next chunk will " "be loaded filtering by resources not in scope " "attributes [%s] with values [%s]. Total number of " "resources navigated [%s].", scope_ids, len(scope_ids), scope_attribute, unique_scope_ids, total_resources_navigated) return list(unique_scope_ids) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/fetcher/keystone.py0000664000175000017500000000765200000000000022050 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # !/usr/bin/env python # Copyright 2015 Objectif Libre # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # from keystoneauth1 import loading as ks_loading from keystoneclient import client as kclient from keystoneclient import discover from keystoneclient import exceptions from oslo_config import cfg from oslo_log import log as logging from cloudkitty import fetcher FETCHER_KEYSTONE_OPTS = 'fetcher_keystone' fetcher_keystone_opts = [ cfg.StrOpt( 'keystone_version', default='3', help='Keystone version to use.', ), cfg.BoolOpt( 'ignore_rating_role', default=False, help='Skip rating role check for cloudkitty user', ), cfg.BoolOpt( 'ignore_disabled_tenants', default=False, help='Stop rating disabled tenants', ), ] ks_loading.register_session_conf_options(cfg.CONF, FETCHER_KEYSTONE_OPTS) ks_loading.register_auth_conf_options(cfg.CONF, FETCHER_KEYSTONE_OPTS) cfg.CONF.register_opts(fetcher_keystone_opts, FETCHER_KEYSTONE_OPTS) CONF = cfg.CONF LOG = logging.getLogger(__name__) class KeystoneFetcher(fetcher.BaseFetcher): """Keystone tenants fetcher.""" name = 'keystone' def __init__(self): self.auth = ks_loading.load_auth_from_conf_options( CONF, FETCHER_KEYSTONE_OPTS) self.session = ks_loading.load_session_from_conf_options( CONF, FETCHER_KEYSTONE_OPTS, auth=self.auth) self.admin_ks = kclient.Client( version=CONF.fetcher_keystone.keystone_version, session=self.session, auth_url=self.auth.auth_url) def get_tenants(self): keystone_version = discover.normalize_version_number( CONF.fetcher_keystone.keystone_version) auth_dispatch = {(3,): ('project', 'projects', 'list'), (2,): ('tenant', 'tenants', 'roles_for_user')} for auth_version, auth_version_mapping in auth_dispatch.items(): if discover.version_match(auth_version, keystone_version): return self._do_get_tenants(auth_version_mapping) msg = "Keystone version you've specified is not supported" raise exceptions.VersionNotAvailable(msg) def _do_get_tenants(self, auth_version_mapping): tenant_attr, tenants_attr, role_func = auth_version_mapping tenant_list = getattr(self.admin_ks, tenants_attr).list() my_user_id = self.session.get_user_id() 
ignore_rating_role = CONF.fetcher_keystone.ignore_rating_role ignore_disabled_tenants = CONF.fetcher_keystone.ignore_disabled_tenants LOG.debug('Total number of tenants : %s', len(tenant_list)) for tenant in tenant_list[:]: if ignore_disabled_tenants: if not tenant.enabled: tenant_list.remove(tenant) LOG.debug('Disabled tenant name %s with id %s skipped.', tenant.name, tenant.id) continue if not ignore_rating_role: roles = getattr(self.admin_ks.roles, role_func)( **{'user': my_user_id, tenant_attr: tenant}) if 'rating' not in [role.name for role in roles]: tenant_list.remove(tenant) LOG.debug('Number of tenants to rate : %s', len(tenant_list)) return [tenant.id for tenant in tenant_list] ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/fetcher/prometheus.py0000664000175000017500000000776500000000000022407 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright 2015 Objectif Libre # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # from oslo_config import cfg from cloudkitty.common.prometheus_client import PrometheusClient from cloudkitty.common.prometheus_client import PrometheusResponseError from cloudkitty import fetcher class PrometheusFetcherError(Exception): pass FETCHER_PROMETHEUS_OPTS = 'fetcher_prometheus' fetcher_prometheus_opts = [ cfg.StrOpt( 'metric', help='Metric from which scope_ids should be requested', ), cfg.StrOpt( 'scope_attribute', default='project_id', help='Attribute from which scope_ids should be collected', ), cfg.StrOpt( 'prometheus_url', help='Prometheus service URL', ), cfg.StrOpt( 'prometheus_user', default='', help='Prometheus user (for basic auth only)', ), cfg.StrOpt( 'prometheus_password', default='', help='Prometheus user (for basic auth only)', ), cfg.StrOpt( 'cafile', help='Custom certificate authority file path', ), cfg.BoolOpt( 'insecure', default=False, help='Explicitly trust untrusted HTTPS responses', ), cfg.DictOpt( 'filters', default=dict(), help='Metadata to filter out the scope_ids discovery request response', ), ] cfg.CONF.register_opts(fetcher_prometheus_opts, FETCHER_PROMETHEUS_OPTS) CONF = cfg.CONF class PrometheusFetcher(fetcher.BaseFetcher): """Prometheus scope_id fetcher""" name = 'prometheus' def __init__(self): super(PrometheusFetcher, self).__init__() url = CONF.fetcher_prometheus.prometheus_url user = CONF.fetcher_prometheus.prometheus_user password = CONF.fetcher_prometheus.prometheus_password verify = True if CONF.fetcher_prometheus.cafile: verify = CONF.fetcher_prometheus.cafile elif CONF.fetcher_prometheus.insecure: verify = False self._conn = PrometheusClient( url, auth=(user, password) if user and password else None, verify=verify, ) def get_tenants(self): metric = CONF.fetcher_prometheus.metric scope_attribute = CONF.fetcher_prometheus.scope_attribute filters = CONF.fetcher_prometheus.filters metadata = '' # Preformatting filters as {label1="value1", label2="value2"} if filters: metadata = '{{{}}}'.format(', '.join([ '{}="{}"'.format(k, v) for k, v 
in filters.items() ])) # Formatting PromQL query query = 'max({}{}) by ({})'.format( metric, metadata, scope_attribute, ) try: res = self._conn.get_instant(query) except PrometheusResponseError as e: raise PrometheusFetcherError(*e.args) try: result = res['data']['result'] if not result: return [] scope_ids = [ item['metric'][scope_attribute] for item in result if scope_attribute in item['metric'].keys() ] except KeyError: msg = ( 'Unexpected Prometheus server response ' '"{}" for "{}"' ).format( res, query, ) raise PrometheusFetcherError(msg) # Returning unique ids return list(set(scope_ids)) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/fetcher/source.py0000664000175000017500000000214300000000000021475 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright 2015 Objectif Libre # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # from oslo_config import cfg from cloudkitty import fetcher FETCHER_SOURCE_OPTS = 'fetcher_source' fetcher_source_opts = [ cfg.ListOpt( 'sources', default=list(), help='list of source identifiers', ), ] cfg.CONF.register_opts(fetcher_source_opts, FETCHER_SOURCE_OPTS) CONF = cfg.CONF class SourceFetcher(fetcher.BaseFetcher): """Source projects fetcher.""" name = 'source' def get_tenants(self): return CONF.fetcher_source.sources ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1727866639.2394865 cloudkitty-21.0.0/cloudkitty/hacking/0000775000175000017500000000000000000000000017607 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/hacking/__init__.py0000664000175000017500000000000000000000000021706 0ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/hacking/checks.py0000664000175000017500000003014200000000000021421 0ustar00zuulzuul00000000000000# Copyright (c) 2016, GohighSec # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import ast import re from hacking import core """ Guidelines for writing new hacking checks - Use only for Cloudkitty specific tests. OpenStack general tests should be submitted to the common 'hacking' module. - Pick numbers in the range C3xx. Find the current test with the highest allocated number and then pick the next value. 
- Keep the test method code in the source file ordered based on the C3xx value. - List the new rule in the top level HACKING.rst file - Add test cases for each new rule to cloudkitty/tests/test_hacking.py """ UNDERSCORE_IMPORT_FILES = [] _all_log_levels = {'debug', 'error', 'info', 'warning', 'critical', 'exception'} # Since _Lx have been removed, we just need to check _() translated_logs = re.compile( r"(.)*LOG\.(%(level)s)\(\s*_\(" % {'level': '|'.join(_all_log_levels)}) string_translation = re.compile(r"[^_]*_\(\s*('|\")") underscore_import_check = re.compile(r"(.)*import _$") underscore_import_check_multi = re.compile(r"(.)*import (.)*_, (.)*") # We need this for cases where they have created their own _ function. custom_underscore_check = re.compile(r"(.)*_\s*=\s*(.)*") oslo_namespace_imports = re.compile(r"from[\s]*oslo[.](.*)") dict_constructor_with_list_copy_re = re.compile(r".*\bdict\((\[)?(\(|\[)") assert_no_xrange_re = re.compile(r"\s*xrange\s*\(") assert_True = re.compile(r".*assertEqual\(True, .*\)") assert_None = re.compile(r".*assertEqual\(None, .*\)") no_log_warn = re.compile(r".*LOG.warn\(.*\)") asse_raises_regexp = re.compile(r"assertRaisesRegexp\(") class BaseASTChecker(ast.NodeVisitor): """Provides a simple framework for writing AST-based checks. Subclasses should implement visit_* methods like any other AST visitor implementation. When they detect an error for a particular node the method should call ``self.add_error(offending_node)``. Details about where in the code the error occurred will be pulled from the node object. Subclasses should also provide a class variable named CHECK_DESC to be used for the human readable error message. """ CHECK_DESC = 'No check message specified' def __init__(self, tree, filename): """This object is created automatically by pep8. :param tree: an AST tree :param filename: name of the file being analyzed (ignored by our checks) """ self._tree = tree self._errors = [] def run(self): """Called automatically by pep8.""" self.visit(self._tree) return self._errors def add_error(self, node, message=None): """Add an error caused by a node to the list of errors for pep8.""" message = message or self.CHECK_DESC error = (node.lineno, node.col_offset, message, self.__class__) self._errors.append(error) def _check_call_names(self, call_node, names): if isinstance(call_node, ast.Call): if isinstance(call_node.func, ast.Name): if call_node.func.id in names: return True return False @core.flake8ext def no_translate_logs(logical_line, filename): """Check for 'LOG.*(_(' Starting with the Pike series, OpenStack no longer supports log translation. * This check assumes that 'LOG' is a logger. * Use filename so we can start enforcing this in specific folders instead of needing to do so all at once. C313 """ if translated_logs.match(logical_line): yield (0, "C313 Don't translate logs") class CheckLoggingFormatArgs(BaseASTChecker): """Check for improper use of logging format arguments. LOG.debug("Volume %s caught fire and is at %d degrees C and climbing.", ('volume1', 500)) The format arguments should not be a tuple as it is easy to miss. """ name = "check_logging_format_args" version = "1.0" CHECK_DESC = 'C310 Log method arguments should not be a tuple.' 
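# LOG_METHODS below enumerates the logger helpers whose positional arguments
# are inspected; passing the values as separate arguments instead of a tuple
# satisfies the check, e.g. (illustrative rewrite of the docstring example):
#
#     LOG.debug("Volume %s caught fire and is at %d degrees C and climbing.",
#               'volume1', 500)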
LOG_METHODS = [ 'debug', 'info', 'warn', 'warning', 'error', 'exception', 'critical', 'fatal', 'trace', 'log' ] def _find_name(self, node): """Return the fully qualified name or a Name or Attribute.""" if isinstance(node, ast.Name): return node.id elif (isinstance(node, ast.Attribute) and isinstance(node.value, (ast.Name, ast.Attribute))): method_name = node.attr obj_name = self._find_name(node.value) if obj_name is None: return None return obj_name + '.' + method_name elif isinstance(node, str): return node else: # could be Subscript, Call or many more return None def visit_Call(self, node): """Look for the 'LOG.*' calls.""" # extract the obj_name and method_name if isinstance(node.func, ast.Attribute): obj_name = self._find_name(node.func.value) if isinstance(node.func.value, ast.Name): method_name = node.func.attr elif isinstance(node.func.value, ast.Attribute): obj_name = self._find_name(node.func.value) method_name = node.func.attr else: # could be Subscript, Call or many more return super(CheckLoggingFormatArgs, self).generic_visit(node) # obj must be a logger instance and method must be a log helper if (obj_name != 'LOG' or method_name not in self.LOG_METHODS): return super(CheckLoggingFormatArgs, self).generic_visit(node) # the call must have arguments if not len(node.args): return super(CheckLoggingFormatArgs, self).generic_visit(node) # any argument should not be a tuple for arg in node.args: if isinstance(arg, ast.Tuple): self.add_error(arg) return super(CheckLoggingFormatArgs, self).generic_visit(node) @core.flake8ext def check_explicit_underscore_import(logical_line, filename): """Check for explicit import of the _ function We need to ensure that any files that are using the _() function to translate logs are explicitly importing the _ function. We can't trust unit test to catch whether the import has been added so we need to check for it here. """ # Build a list of the files that have _ imported. No further # checking needed once it is found. if filename in UNDERSCORE_IMPORT_FILES: pass elif (underscore_import_check.match(logical_line) or underscore_import_check_multi.match(logical_line) or custom_underscore_check.match(logical_line)): UNDERSCORE_IMPORT_FILES.append(filename) elif string_translation.match(logical_line): yield (0, "C321: Found use of _() without explicit import of _ !") class CheckForStrUnicodeExc(BaseASTChecker): """Checks for the use of str() or unicode() on an exception. This currently only handles the case where str() or unicode() is used in the scope of an exception handler. If the exception is passed into a function, returned from an assertRaises, or used on an exception created in the same scope, this does not catch it. """ name = "check_for_str_unicode_exc" version = "1.0" CHECK_DESC = ('C314 str() and unicode() cannot be used on an ' 'exception. 
Remove it.') def __init__(self, tree, filename): super(CheckForStrUnicodeExc, self).__init__(tree, filename) self.name = [] self.already_checked = [] # Python 2 def visit_TryExcept(self, node): for handler in node.handlers: if handler.name: self.name.append(handler.name.id) super(CheckForStrUnicodeExc, self).generic_visit(node) self.name = self.name[:-1] else: super(CheckForStrUnicodeExc, self).generic_visit(node) # Python 3 def visit_ExceptHandler(self, node): if node.name: self.name.append(node.name) super(CheckForStrUnicodeExc, self).generic_visit(node) self.name = self.name[:-1] else: super(CheckForStrUnicodeExc, self).generic_visit(node) def visit_Call(self, node): if self._check_call_names(node, ['str', 'unicode']): if node not in self.already_checked: self.already_checked.append(node) if isinstance(node.args[0], ast.Name): if node.args[0].id in self.name: self.add_error(node.args[0]) super(CheckForStrUnicodeExc, self).generic_visit(node) class CheckForTransAdd(BaseASTChecker): """Checks for the use of concatenation on a translated string. Translations should not be concatenated with other strings, but should instead include the string being added to the translated string to give the translators the most information. """ name = "check_for_trans_add" version = "1.0" CHECK_DESC = ('C315 Translated messages cannot be concatenated. ' 'String should be included in translated message.') TRANS_FUNC = ['_'] def visit_BinOp(self, node): if isinstance(node.op, ast.Add): if self._check_call_names(node.left, self.TRANS_FUNC): self.add_error(node.left) elif self._check_call_names(node.right, self.TRANS_FUNC): self.add_error(node.right) super(CheckForTransAdd, self).generic_visit(node) @core.flake8ext def check_oslo_namespace_imports(logical_line, noqa): """'oslo_' should be used instead of 'oslo.' C317 """ if noqa: return if re.match(oslo_namespace_imports, logical_line): msg = ("C317: '%s' must be used instead of '%s'.") % ( logical_line.replace('oslo.', 'oslo_'), logical_line) yield (0, msg) @core.flake8ext def dict_constructor_with_list_copy(logical_line): """Use a dict comprehension instead of a dict constructor C318 """ msg = ("C318: Must use a dict comprehension instead of a dict constructor" " with a sequence of key-value pairs." 
) if dict_constructor_with_list_copy_re.match(logical_line): yield (0, msg) @core.flake8ext def no_xrange(logical_line): """Ensure to not use xrange() C319 """ if assert_no_xrange_re.match(logical_line): yield (0, "C319: Do not use xrange().") @core.flake8ext def validate_assertTrue(logical_line): """Use assertTrue instead of assertEqual C312 """ if re.match(assert_True, logical_line): msg = ("C312: Unit tests should use assertTrue(value) instead" " of using assertEqual(True, value).") yield (0, msg) @core.flake8ext def validate_assertIsNone(logical_line): """Use assertIsNone instead of assertEqual C311 """ if re.match(assert_None, logical_line): msg = ("C311: Unit tests should use assertIsNone(value) instead" " of using assertEqual(None, value).") yield (0, msg) @core.flake8ext def no_log_warn_check(logical_line): """Disallow 'LOG.warn' C320 """ msg = ("C320: LOG.warn is deprecated, please use LOG.warning!") if re.match(no_log_warn, logical_line): yield (0, msg) @core.flake8ext def assert_raises_regexp(logical_line): """Check for usage of deprecated assertRaisesRegexp C322 """ res = asse_raises_regexp.search(logical_line) if res: yield (0, "C322: assertRaisesRegex must be used instead " "of assertRaisesRegexp") ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/i18n.py0000664000175000017500000000135700000000000017342 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright 2014 Objectif Libre # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # import oslo_i18n as i18n # noqa _translators = i18n.TranslatorFactory(domain='cloudkitty') _ = _translators.primary ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/messaging.py0000664000175000017500000000477600000000000020550 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright 2016 99Cloud zhangguoqing # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
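# A minimal usage sketch of the helpers defined in this module, assuming an
# RPC transport is already configured for oslo.messaging via transport_url:
#
#     from cloudkitty import messaging
#
#     messaging.setup()
#     client = messaging.get_client().prepare(namespace='rating', fanout=True)
#     client.cast({}, 'reload_module', name='hashmap')
#
# The rating processors (see cloudkitty/rating/__init__.py) rely on this same
# pattern to broadcast module reload and enable/disable commands to workers.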
from oslo_config import cfg import oslo_messaging from oslo_messaging.rpc import dispatcher DEFAULT_URL = "__default__" RPC_TARGET = None TRANSPORTS = {} def setup(): oslo_messaging.set_transport_defaults('cloudkitty') def get_transport(url=None, optional=False, cache=True): """Initialise the oslo_messaging layer.""" global TRANSPORTS, DEFAULT_URL cache_key = url or DEFAULT_URL transport = TRANSPORTS.get(cache_key) if not transport or not cache: try: transport = oslo_messaging.get_rpc_transport(cfg.CONF, url) except (oslo_messaging.InvalidTransportURL, oslo_messaging.DriverLoadFailure): if not optional or url: # NOTE(sileht): oslo_messaging is configured but unloadable # so reraise the exception raise return None else: if cache: TRANSPORTS[cache_key] = transport return transport def get_target(): global RPC_TARGET if RPC_TARGET is None: RPC_TARGET = oslo_messaging.Target(topic='cloudkitty', version='1.0') return RPC_TARGET def get_client(version_cap=None): transport = get_transport() target = get_target() return oslo_messaging.get_rpc_client( transport, target, version_cap=version_cap) def get_server(target=None, endpoints=None): access_policy = dispatcher.DefaultRPCAccessPolicy transport = get_transport() if not target: target = get_target() return oslo_messaging.get_rpc_server(transport, target, endpoints, executor='threading', access_policy=access_policy) def cleanup(): """Cleanup the oslo_messaging layer.""" global TRANSPORTS, NOTIFIERS NOTIFIERS = {} for url in TRANSPORTS: TRANSPORTS[url].cleanup() del TRANSPORTS[url] ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/orchestrator.py0000664000175000017500000007006500000000000021304 0ustar00zuulzuul00000000000000# Copyright 2014 Objectif Libre # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # import copy from datetime import timedelta import decimal import functools import hashlib import multiprocessing import random import sys import time import cotyledon import futurist from futurist import waiters from oslo_concurrency import lockutils from oslo_config import cfg from oslo_log import log as logging import oslo_messaging from oslo_utils import uuidutils from stevedore import driver from tooz import coordination from cloudkitty import collector from cloudkitty import config # noqa from cloudkitty import dataframe from cloudkitty import extension_manager from cloudkitty import messaging from cloudkitty import storage from cloudkitty import storage_state as state from cloudkitty import utils as ck_utils from cloudkitty.utils import tz as tzutils LOG = logging.getLogger(__name__) CONF = cfg.CONF orchestrator_opts = [ cfg.StrOpt( 'coordination_url', secret=True, help='Coordination backend URL', default='file:///var/lib/cloudkitty/locks'), cfg.IntOpt( 'max_workers', default=multiprocessing.cpu_count(), sample_default=4, min=0, help='Max number of workers to execute the rating process. 
Defaults ' 'to the number of available CPU cores.'), cfg.IntOpt( 'max_workers_reprocessing', default=multiprocessing.cpu_count(), min=0, help='Max number of workers to execute the reprocessing. Defaults to ' 'the number of available CPU cores.'), cfg.IntOpt('max_threads', # NOTE(peschk_l): This is the futurist default default=multiprocessing.cpu_count() * 5, sample_default=20, min=1, deprecated_name='max_greenthreads', advanced=True, help='Maximal number of threads to use per worker. Defaults to ' '5 times the number of available CPUs'), ] CONF.register_opts(orchestrator_opts, group='orchestrator') CONF.import_opt('backend', 'cloudkitty.fetcher', 'fetcher') FETCHERS_NAMESPACE = 'cloudkitty.fetchers' PROCESSORS_NAMESPACE = 'cloudkitty.rating.processors' COLLECTORS_NAMESPACE = 'cloudkitty.collector.backends' STORAGES_NAMESPACE = 'cloudkitty.storage.backends' def get_lock(coord, tenant_id): name = hashlib.sha256( ("cloudkitty-" + str(tenant_id + '-') + str(CONF.collect.collector + '-') + str(CONF.fetcher.backend + '-') + str(CONF.collect.scope_key)).encode('ascii')).hexdigest() return name, coord.get_lock(name.encode('ascii')) class RatingEndpoint(object): target = oslo_messaging.Target(namespace='rating', version='1.0') def __init__(self, orchestrator): self._global_reload = False self._pending_reload = [] self._module_state = {} self._orchestrator = orchestrator def get_reload_list(self): lock = lockutils.lock('module-reload') with lock: reload_list = self._pending_reload self._pending_reload = [] return reload_list def get_module_state(self): lock = lockutils.lock('module-state') with lock: module_list = self._module_state self._module_state = {} return module_list def quote(self, ctxt, res_data): LOG.debug('Received quote request [%s] from RPC.', res_data) worker = APIWorker() start = tzutils.localized_now() end = tzutils.add_delta(start, timedelta(seconds=CONF.collect.period)) # Need to prepare data to support the V2 processing format usage = {} for k in res_data['usage']: all_data_points_for_metric = [] all_quote_data_entries = res_data['usage'][k] for p in all_quote_data_entries: vol = p['vol'] desc = p.get('desc', {}) data_point = dataframe.DataPoint( vol['unit'], vol['qty'], 0, desc.get('groupby', []), desc.get('metadata', []), ) all_data_points_for_metric.append(data_point) usage[k] = all_data_points_for_metric frame = dataframe.DataFrame( start=start, end=end, usage=usage, ) quote_result = worker.quote(frame) LOG.debug("Quote result [%s] for input data [%s].", quote_result, res_data) return str(quote_result) def reload_modules(self, ctxt): LOG.info('Received reload modules command.') lock = lockutils.lock('module-reload') with lock: self._global_reload = True def reload_module(self, ctxt, name): LOG.info('Received reload command for module %s.', name) lock = lockutils.lock('module-reload') with lock: if name not in self._pending_reload: self._pending_reload.append(name) def enable_module(self, ctxt, name): LOG.info('Received enable command for module %s.', name) lock = lockutils.lock('module-state') with lock: self._module_state[name] = True def disable_module(self, ctxt, name): LOG.info('Received disable command for module %s.', name) lock = lockutils.lock('module-state') with lock: self._module_state[name] = False if name in self._pending_reload: self._pending_reload.remove(name) class ScopeEndpoint(object): target = oslo_messaging.Target(version='1.0') def __init__(self): self._coord = coordination.get_coordinator( CONF.orchestrator.coordination_url, 
uuidutils.generate_uuid().encode('ascii')) self._state = state.StateManager() self._storage = storage.get_storage() self._coord.start(start_heart=True) def reset_state(self, ctxt, res_data): LOG.info('Received state reset command. {}'.format(res_data)) random.shuffle(res_data['scopes']) for scope in res_data['scopes']: lock_name, lock = get_lock(self._coord, scope['scope_id']) LOG.debug( '[ScopeEndpoint] Trying to acquire lock "{}" ...'.format( lock_name, ) ) if lock.acquire(blocking=True): LOG.debug( '[ScopeEndpoint] Acquired lock "{}".'.format( lock_name, ) ) last_processed_timestamp = tzutils.dt_from_iso( res_data['last_processed_timestamp']) try: self._storage.delete( begin=last_processed_timestamp, end=None, filters={ scope['scope_key']: scope['scope_id']}) self._state.set_last_processed_timestamp( scope['scope_id'], last_processed_timestamp, fetcher=scope['fetcher'], collector=scope['collector'], scope_key=scope['scope_key'], ) finally: lock.release() LOG.debug( '[ScopeEndpoint] Released lock "{}" .'.format( lock_name, ) ) class BaseWorker(object): def __init__(self, tenant_id=None): self._tenant_id = tenant_id # Rating processors self._processors = [] self._load_rating_processors() def _load_rating_processors(self): self._processors = [] processors = extension_manager.EnabledExtensionManager( PROCESSORS_NAMESPACE, invoke_kwds={'tenant_id': self._tenant_id}) for processor in processors: self._processors.append(processor) self._processors.sort(key=lambda x: x.obj.priority, reverse=True) class APIWorker(BaseWorker): def __init__(self, tenant_id=None): super(APIWorker, self).__init__(tenant_id) def quote(self, res_data): quote_result = res_data for processor in self._processors: quote_result = processor.obj.quote(quote_result) price = decimal.Decimal(0) for _, point in quote_result.iterpoints(): price += point.price return price def _check_state(obj, period, tenant_id): timestamp = obj._state.get_last_processed_timestamp(tenant_id) return ck_utils.check_time_state(timestamp, period, CONF.collect.wait_periods) class Worker(BaseWorker): def __init__(self, collector, storage, tenant_id, worker_id): super(Worker, self).__init__(tenant_id) self._collector = collector self._storage = storage self._period = CONF.collect.period self._wait_time = CONF.collect.wait_periods * self._period self._worker_id = worker_id self._log_prefix = '[scope: {scope}, worker: {worker}] '.format( scope=self._tenant_id, worker=self._worker_id) self._conf = ck_utils.load_conf(CONF.collect.metrics_conf) self._state = state.StateManager() self.next_timestamp_to_process = functools.partial( _check_state, self, self._period, self._tenant_id) super(Worker, self).__init__(self._tenant_id) def _collect(self, metric, start_timestamp): next_timestamp = tzutils.add_delta( start_timestamp, timedelta(seconds=self._period)) name, data = self._collector.retrieve( metric, start_timestamp, next_timestamp, self._tenant_id ) if not data: raise collector.NoDataCollected(self._collector, metric) return name, data def _do_collection(self, metrics, timestamp): def _get_result(metric): try: return self._collect(metric, timestamp) except collector.NoDataCollected: LOG.info( self._log_prefix + 'No data collected ' 'for metric {metric} at timestamp {ts}'.format( metric=metric, ts=timestamp)) return metric, None except Exception as e: LOG.exception( self._log_prefix + 'Error while collecting' ' metric {metric} at timestamp {ts}: {e}. 
Exiting.'.format( metric=metric, ts=timestamp, e=e)) # FIXME(peschk_l): here we just exit, and the # collection will be retried during the next collect # cycle. In the future, we should implement a retrying # system in workers sys.exit(1) return self._do_execute_collection(_get_result, metrics) def _do_execute_collection(self, _get_result, metrics): """Execute the metric measurement collection When executing this method a ZeroDivisionError might be raised. This happens when no executions have happened in the `futurist.ThreadPoolExecutor`; then, when calling the `average_runtime`, the exception is thrown. In such a case, there is no need for further actions, and we can ignore the error. :param _get_result: the method to execute and get the metrics :param metrics: the list of metrics to be collected :return: the metrics measurements """ results = [] try: with futurist.ThreadPoolExecutor( max_workers=CONF.orchestrator.max_threads) as tpool: futs = [tpool.submit(_get_result, metric) for metric in metrics] LOG.debug(self._log_prefix + 'Collecting [{}] metrics.'.format(metrics)) results = [r.result() for r in waiters.wait_for_all(futs).done] log_message = self._log_prefix + \ "Collecting {} metrics took {}s total, with {}s average" LOG.debug(log_message.format(tpool.statistics.executed, tpool.statistics.runtime, tpool.statistics.average_runtime)) except ZeroDivisionError as zeroDivisionError: LOG.debug("Ignoring ZeroDivisionError for metrics [%s]: [%s].", metrics, zeroDivisionError) return dict(filter(lambda x: x[1] is not None, results)) def run(self): should_continue_processing = self.execute_worker_processing() while should_continue_processing: should_continue_processing = self.execute_worker_processing() def execute_worker_processing(self): timestamp = self.next_timestamp_to_process() LOG.debug("Processing timestamp [%s] for storage scope [%s].", timestamp, self._tenant_id) if not timestamp: LOG.debug("Worker [%s] finished processing storage scope [%s].", self._worker_id, self._tenant_id) return False if self._state.get_last_processed_timestamp(self._tenant_id): if not self._state.is_storage_scope_active(self._tenant_id): LOG.debug("Skipping processing for storage scope [%s] " "because it is marked as inactive.", self._tenant_id) return False else: LOG.debug("No need to check if [%s] is de-activated. " "We have never processed it before.") self.do_execute_scope_processing(timestamp) return True def do_execute_scope_processing(self, timestamp): metrics = list(self._collector.conf.keys()) # Collection metrics = sorted(metrics) usage_data = self._do_collection(metrics, timestamp) LOG.debug("Usage data [%s] found for storage scope [%s] in " "timestamp [%s].", usage_data, self._tenant_id, timestamp) start_time = timestamp end_time = tzutils.add_delta(timestamp, timedelta(seconds=self._period)) # No usage records found in if not usage_data: LOG.warning("No usage data for storage scope [%s] on " "timestamp [%s]. 
You might want to consider " "de-activating it.", self._tenant_id, timestamp) else: frame = self.execute_measurements_rating(end_time, start_time, usage_data) self.persist_rating_data(end_time, frame, start_time) self.update_scope_processing_state_db(timestamp) def persist_rating_data(self, end_time, frame, start_time): LOG.debug("Persisting processed frames [%s] for scope [%s] and time " "[start=%s,end=%s]", frame, self._tenant_id, start_time, end_time) self._storage.push([frame], self._tenant_id) def execute_measurements_rating(self, end_time, start_time, usage_data): frame = dataframe.DataFrame( start=start_time, end=end_time, usage=usage_data, ) for processor in self._processors: original_data = copy.deepcopy(frame) frame = processor.obj.process(frame) LOG.debug("Results [%s] for processing [%s] of data points [%s].", frame, processor.obj.process, original_data) return frame def update_scope_processing_state_db(self, timestamp): self._state.set_state(self._tenant_id, timestamp) class ReprocessingWorker(Worker): def __init__(self, collector, storage, tenant_id, worker_id): self.scope = tenant_id self.scope_key = None super(ReprocessingWorker, self).__init__( collector, storage, self.scope.identifier, worker_id) self.reprocessing_scheduler_db = state.ReprocessingSchedulerDb() self.next_timestamp_to_process = self._next_timestamp_to_process self.load_scope_key() def load_scope_key(self): scope_from_db = self._state.get_all(self._tenant_id) if len(scope_from_db) < 1: raise Exception("Scope [%s] scheduled for reprocessing does not " "seem to exist anymore." % self.scope) if len(scope_from_db) > 1: raise Exception("Unexpected number of storage state entries found " "for scope [%s]." % self.scope) self.scope_key = scope_from_db[0].scope_key def _next_timestamp_to_process(self): db_item = self.reprocessing_scheduler_db.get_from_db( identifier=self.scope.identifier, start_reprocess_time=self.scope.start_reprocess_time, end_reprocess_time=self.scope.end_reprocess_time) if not db_item: LOG.info("It seems that the processing for schedule [%s] was " "finished by other worker.", self.scope) return None return ReprocessingWorker.generate_next_timestamp( db_item, self._period) @staticmethod def generate_next_timestamp(db_item, processing_period_interval): new_timestamp = db_item.start_reprocess_time if db_item.current_reprocess_time: period_delta = timedelta(seconds=processing_period_interval) new_timestamp = db_item.current_reprocess_time + period_delta LOG.debug("Current reprocessed time is [%s], therefore, the next " "one to process is [%s] based on the processing " "interval [%s].", db_item.start_reprocess_time, new_timestamp, processing_period_interval) else: LOG.debug("There is no reprocessing for the schedule [%s]. " "Therefore, we use the start time [%s] as the first " "time to process.", db_item, new_timestamp) if new_timestamp <= db_item.end_reprocess_time: return tzutils.local_to_utc(new_timestamp) else: LOG.debug("No need to keep reprocessing schedule [%s] as we " "processed all requested timestamps.", db_item) return None def do_execute_scope_processing(self, timestamp): end_of_this_processing = timestamp + timedelta(seconds=self._period) end_of_this_processing = tzutils.local_to_utc(end_of_this_processing) # If the start_reprocess_time of the reprocessing task equals to # the current reprocessing time, it means that we have just started # executing it. Therefore, we can clean/erase the old data in the # reprocessing task time frame. 
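# Timestamps produced by generate_next_timestamp() are already converted to
# UTC, so the comparison below converts start_reprocess_time with
# tzutils.local_to_utc() before checking whether this is the very first
# iteration of the reprocessing task; only in that case is the previously
# rated data for the whole reprocessing window deleted from the backend.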
if tzutils.local_to_utc(self.scope.start_reprocess_time) == timestamp: LOG.info( "Cleaning backend [%s] data for reprocessing scope [%s] for " "timeframe[start=%s, end=%s].", self._storage, self.scope, self.scope.start_reprocess_time, self.scope.end_reprocess_time) self._storage.delete( begin=self.scope.start_reprocess_time, end=self.scope.end_reprocess_time, filters={self.scope_key: self._tenant_id}) else: LOG.debug("No need to clean backend [%s] data for reprocessing " "scope [%s] for timeframe[start=%s, end=%s]. We are " "past the very first timestamp; therefore, the cleaning " "for the reprocessing task period has already been " "executed.", self._storage, self.scope, self.scope.start_reprocess_time, self.scope.end_reprocess_time) LOG.debug("Executing the reprocessing of scope [%s] for " "timeframe[start=%s, end=%s].", self.scope, timestamp, end_of_this_processing) super(ReprocessingWorker, self).do_execute_scope_processing(timestamp) def update_scope_processing_state_db(self, timestamp): LOG.debug("After data is persisted in the storage backend [%s], we " "will update the scope [%s] current processing time to " "[%s].", self._storage, self.scope, timestamp) self.reprocessing_scheduler_db.update_reprocessing_time( identifier=self.scope.identifier, start_reprocess_time=self.scope.start_reprocess_time, end_reprocess_time=self.scope.end_reprocess_time, new_current_time_stamp=timestamp) class CloudKittyProcessor(cotyledon.Service): def __init__(self, worker_id): self._worker_id = worker_id super(CloudKittyProcessor, self).__init__(self._worker_id) self.tenants = [] self.fetcher = driver.DriverManager( FETCHERS_NAMESPACE, CONF.fetcher.backend, invoke_on_load=True, ).driver self.collector = collector.get_collector() self.storage = storage.get_storage() self._state = state.StateManager() # RPC self.server = None self._rating_endpoint = RatingEndpoint(self) self._scope_endpoint = ScopeEndpoint() self._init_messaging() # DLM self.coord = coordination.get_coordinator( CONF.orchestrator.coordination_url, uuidutils.generate_uuid().encode('ascii')) self.coord.start(start_heart=True) self.next_timestamp_to_process = functools.partial( _check_state, self, CONF.collect.period) self.worker_class = Worker self.log_worker_initiated() def log_worker_initiated(self): LOG.info("Processor worker ID [%s] is initiated as CloudKitty " "rating processor.", self._worker_id) def _init_messaging(self): target = oslo_messaging.Target(topic='cloudkitty', server=CONF.host, version='1.0') endpoints = [ self._rating_endpoint, self._scope_endpoint, ] self.server = messaging.get_server(target, endpoints) self.server.start() def process_messages(self): # TODO(sheeprine): Code kept to handle threading and asynchronous # reloading # pending_reload = self._rating_endpoint.get_reload_list() # pending_states = self._rating_endpoint.get_module_state() pass def run(self): LOG.debug('Started worker {}.'.format(self._worker_id)) while True: self.internal_run() def terminate(self): LOG.debug('Terminating worker {}.'.format(self._worker_id)) self.coord.stop() LOG.debug('Terminated worker {}.'.format(self._worker_id)) def internal_run(self): self.load_scopes_to_process() for tenant_id in self.tenants: lock_name, lock = get_lock( self.coord, self.generate_lock_base_name(tenant_id)) LOG.debug('[Worker: {w}] Trying to acquire lock "{lock_name}" for ' 'scope ID {scope_id}.'.format(w=self._worker_id, lock_name=lock_name, scope_id=tenant_id)) lock_acquired = lock.acquire(blocking=False) if lock_acquired: LOG.debug('[Worker: {w}] Acquired 
lock "{lock_name}" for ' 'scope ID {scope_id}.'.format(w=self._worker_id, lock_name=lock_name, scope_id=tenant_id)) try: self.process_scope(tenant_id) finally: lock.release() LOG.debug("Finished processing scope [%s].", tenant_id) else: LOG.debug("Could not acquire lock [%s] for processing " "scope [%s] with worker [%s].", lock_name, tenant_id, self.worker_class) LOG.debug("Finished processing all storage scopes with worker " "[worker_id=%s, class=%s].", self._worker_id, self.worker_class) # FIXME(sheeprine): We may cause a drift here time.sleep(CONF.collect.period) def process_scope(self, scope_to_process): timestamp = self.next_timestamp_to_process(scope_to_process) LOG.debug("Next timestamp [%s] found for processing for " "storage scope [%s].", state, scope_to_process) if not timestamp: LOG.debug("There is no next timestamp to process for scope [%s]", scope_to_process) return worker = self.worker_class( self.collector, self.storage, scope_to_process, self._worker_id, ) worker.run() def generate_lock_base_name(self, tenant_id): return tenant_id def load_scopes_to_process(self): self.tenants = self.fetcher.get_tenants() random.shuffle(self.tenants) LOG.info('[Worker: {w}] Tenants loaded for fetcher {f}'.format( w=self._worker_id, f=self.fetcher.name)) class CloudKittyReprocessor(CloudKittyProcessor): def __init__(self, worker_id): super(CloudKittyReprocessor, self).__init__(worker_id) self.next_timestamp_to_process = self._next_timestamp_to_process self.worker_class = ReprocessingWorker self.reprocessing_scheduler_db = state.ReprocessingSchedulerDb() def log_worker_initiated(self): LOG.info("Processor worker ID [%s] is initiated as CloudKitty " "rating reprocessor.", self._worker_id) def _next_timestamp_to_process(self, scope): scope_db = self.reprocessing_scheduler_db.get_from_db( identifier=scope.identifier, start_reprocess_time=scope.start_reprocess_time, end_reprocess_time=scope.end_reprocess_time) if scope_db: return ReprocessingWorker.generate_next_timestamp( scope_db, CONF.collect.period) else: LOG.debug("It seems that the processing for schedule [%s] was " "finished by other CloudKitty reprocessor.", scope) return None def load_scopes_to_process(self): self.tenants = self.reprocessing_scheduler_db.get_all() random.shuffle(self.tenants) LOG.info('Reprocessing worker [%s] loaded [%s] schedules to process.', self._worker_id, len(self.tenants)) def generate_lock_base_name(self, scope): return "%s-id=%s-start=%s-end=%s" % (self.worker_class, scope.identifier, scope.start_reprocess_time, scope.end_reprocess_time) class CloudKittyServiceManager(cotyledon.ServiceManager): def __init__(self): super(CloudKittyServiceManager, self).__init__() if CONF.orchestrator.max_workers: self.cloudkitty_processor_service_id = self.add( CloudKittyProcessor, workers=CONF.orchestrator.max_workers) else: LOG.info("No worker configured for CloudKitty processing.") if CONF.orchestrator.max_workers_reprocessing: self.cloudkitty_reprocessor_service_id = self.add( CloudKittyReprocessor, workers=CONF.orchestrator.max_workers_reprocessing) else: LOG.info("No worker configured for CloudKitty reprocessing.") ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1727866639.2394865 cloudkitty-21.0.0/cloudkitty/rating/0000775000175000017500000000000000000000000017467 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 
cloudkitty-21.0.0/cloudkitty/rating/__init__.py0000664000175000017500000001053500000000000021604 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright 2014 Objectif Libre # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # import abc import pecan from pecan import rest from cloudkitty.common import policy from cloudkitty.db import api as db_api from cloudkitty import messaging class RatingProcessorBase(object, metaclass=abc.ABCMeta): """Provides the Cloudkitty integration code to the rating processors. Every rating processor should subclass this and override at least module_name, description. config_controller can be left at None to use the default one. """ module_name = None description = None config_controller = None hot_config = False @property def module_info(self): return { 'name': self.module_name, 'description': self.description, 'hot_config': self.hot_config, 'enabled': self.enabled, 'priority': self.priority} def __init__(self, tenant_id=None): self._tenant_id = tenant_id @property def enabled(self): """Check if the module is enabled :returns: bool if module is enabled """ api = db_api.get_instance() module_db = api.get_module_info() return module_db.get_state(self.module_name) or False @property def priority(self): """Get the priority of the module. """ api = db_api.get_instance() module_db = api.get_module_info() return module_db.get_priority(self.module_name) def set_priority(self, priority): """Set the priority of the module. :param priority: (int) The new priority, the higher the number, the higher the priority. """ api = db_api.get_instance() module_db = api.get_module_info() self.notify_reload() return module_db.set_priority(self.module_name, priority) def set_state(self, enabled): """Enable or disable a module. :param enabled: (bool) The state to put the module in. :return: bool """ api = db_api.get_instance() module_db = api.get_module_info() client = messaging.get_client().prepare(namespace='rating', fanout=True) if enabled: operation = 'enable_module' else: operation = 'disable_module' client.cast({}, operation, name=self.module_name) return module_db.set_state(self.module_name, enabled) def quote(self, data): """Compute rating informations from data. :param data: An internal CloudKitty dictionary used to describe resources. :type data: dict(str:?) """ return self.process(data) def nodata(self, begin, end): """Handle billing processing when no data has been collected. :param begin: Begin of the period. :param end: End of the period. """ pass @abc.abstractmethod def process(self, data): """Add rating informations to data :param data: An internal CloudKitty dictionary used to describe resources. :type data: dict(str:?) 
""" @abc.abstractmethod def reload_config(self): """Trigger configuration reload """ def notify_reload(self): client = messaging.get_client().prepare(namespace='rating', fanout=True) client.cast({}, 'reload_module', name=self.module_name) class RatingRestControllerBase(rest.RestController): @pecan.expose() def _route(self, args, request): try: policy.authorize(request.context, 'rating:module_config', {}) except policy.PolicyNotAuthorized as e: pecan.abort(403, e.args[0]) return super(RatingRestControllerBase, self)._route(args, request) ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1727866639.2434864 cloudkitty-21.0.0/cloudkitty/rating/hash/0000775000175000017500000000000000000000000020412 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/rating/hash/__init__.py0000664000175000017500000002470600000000000022534 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright 2014 Objectif Libre # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # import decimal from cloudkitty import dataframe from cloudkitty import rating from cloudkitty.rating.hash.controllers import root as root_api from cloudkitty.rating.hash.db import api as hash_db_api class HashMap(rating.RatingProcessorBase): """HashMap rating module. HashMap can be used to map arbitrary fields of a resource to different costs. """ module_name = 'hashmap' description = 'HashMap rating module.' hot_config = True config_controller = root_api.HashMapConfigController db_api = hash_db_api.get_instance() def __init__(self, tenant_id=None): super(HashMap, self).__init__(tenant_id) self._entries = {} self._res = {} self._load_rates() def reload_config(self): """Reload the module's configuration. 
""" self._load_rates() def _load_mappings(self, mappings_uuid_list): hashmap = hash_db_api.get_instance() mappings = {} for mapping_uuid in mappings_uuid_list: mapping_db = hashmap.get_mapping(uuid=mapping_uuid) if mapping_db.group_id: group_name = mapping_db.group.name else: group_name = '_DEFAULT_' if group_name not in mappings: mappings[group_name] = {} current_scope = mappings[group_name] mapping_value = mapping_db.value if mapping_value: current_scope[mapping_value] = {} current_scope = current_scope[mapping_value] current_scope['type'] = mapping_db.map_type current_scope['cost'] = mapping_db.cost return mappings def _load_thresholds(self, thresholds_uuid_list): hashmap = hash_db_api.get_instance() thresholds = {} for threshold_uuid in thresholds_uuid_list: threshold_db = hashmap.get_threshold(uuid=threshold_uuid) if threshold_db.group_id: group_name = threshold_db.group.name else: group_name = '_DEFAULT_' if group_name not in thresholds: thresholds[group_name] = {} current_scope = thresholds[group_name] threshold_level = threshold_db.level current_scope[threshold_level] = {} current_scope = current_scope[threshold_level] current_scope['type'] = threshold_db.map_type current_scope['cost'] = threshold_db.cost return thresholds def _update_entries(self, entry_type, root, service_uuid=None, field_uuid=None, tenant_uuid=None): hashmap = hash_db_api.get_instance() list_func = getattr(hashmap, 'list_{}'.format(entry_type)) entries_uuid_list = list_func( service_uuid=service_uuid, field_uuid=field_uuid, tenant_uuid=tenant_uuid) load_func = getattr(self, '_load_{}'.format(entry_type)) entries = load_func(entries_uuid_list) if entry_type in root: res = root[entry_type] for group, values in entries.items(): if group in res: res[group].update(values) else: res[group] = values else: root[entry_type] = entries def _load_service_entries(self, service_name, service_uuid): self._entries[service_name] = dict() for entry_type in ('mappings', 'thresholds'): for tenant in (None, self._tenant_id): self._update_entries( entry_type, self._entries[service_name], service_uuid=service_uuid, tenant_uuid=tenant) def _load_field_entries(self, service_name, field_name, field_uuid): if service_name not in self._entries: self._entries[service_name] = {} if 'fields' not in self._entries[service_name]: self._entries[service_name]['fields'] = {} scope = self._entries[service_name]['fields'][field_name] = {} for entry_type in ('mappings', 'thresholds'): for tenant in (None, self._tenant_id): self._update_entries( entry_type, scope, field_uuid=field_uuid, tenant_uuid=tenant) def _load_rates(self): self._entries = {} hashmap = hash_db_api.get_instance() services_uuid_list = hashmap.list_services() for service_uuid in services_uuid_list: service_db = hashmap.get_service(uuid=service_uuid) service_name = service_db.name self._load_service_entries(service_name, service_uuid) fields_uuid_list = hashmap.list_fields(service_uuid) for field_uuid in fields_uuid_list: field_db = hashmap.get_field(uuid=field_uuid) field_name = field_db.name self._load_field_entries(service_name, field_name, field_uuid) def add_rating_informations(self, point): for entry in self._res.values(): rate = entry['rate'] flat = entry['flat'] if entry['threshold']['scope'] == 'field': if entry['threshold']['type'] == 'flat': flat += entry['threshold']['cost'] else: rate *= entry['threshold']['cost'] res = rate * flat res *= point.qty if entry['threshold']['scope'] == 'service': if entry['threshold']['type'] == 'flat': res += entry['threshold']['cost'] else: 
res *= entry['threshold']['cost'] point = point.set_price(point.price + res) return point def update_result(self, group, map_type, cost, level=0, is_threshold=False, threshold_scope='field'): if group not in self._res: self._res[group] = {'flat': 0, 'rate': 1, 'threshold': { 'level': -1, 'cost': 0, 'type': 'flat', 'scope': 'field'}} if is_threshold: best = self._res[group]['threshold']['level'] if level > best: self._res[group]['threshold']['level'] = level self._res[group]['threshold']['cost'] = cost self._res[group]['threshold']['type'] = map_type self._res[group]['threshold']['scope'] = threshold_scope else: if map_type == 'rate': self._res[group]['rate'] *= cost elif map_type == 'flat': new_flat = cost cur_flat = self._res[group]['flat'] if new_flat > cur_flat: self._res[group]['flat'] = new_flat def process_mappings(self, mapping_groups, cmp_value): for group_name, mappings in mapping_groups.items(): for mapping_value, mapping in mappings.items(): if str(cmp_value) == mapping_value: self.update_result( group_name, mapping['type'], mapping['cost']) def process_thresholds(self, threshold_groups, cmp_level, threshold_type): for group_name, thresholds in threshold_groups.items(): for threshold_level, threshold in thresholds.items(): if cmp_level >= threshold_level: self.update_result( group_name, threshold['type'], threshold['cost'], threshold_level, True, threshold_type) def process_services(self, service_name, point): if service_name not in self._entries: return service_mappings = self._entries[service_name]['mappings'] for group_name, mapping in service_mappings.items(): self.update_result(group_name, mapping['type'], mapping['cost']) service_thresholds = self._entries[service_name]['thresholds'] self.process_thresholds(service_thresholds, point.qty, 'service') def process_fields(self, service_name, point): if service_name not in self._entries: return if 'fields' not in self._entries[service_name]: return desc_data = point.desc field_mappings = self._entries[service_name]['fields'] for field_name, group_mappings in field_mappings.items(): if field_name not in desc_data: continue cmp_value = desc_data[field_name] self.process_mappings(group_mappings['mappings'], cmp_value) if group_mappings['thresholds']: self.process_thresholds(group_mappings['thresholds'], decimal.Decimal(cmp_value), 'field') def process(self, data): output = dataframe.DataFrame(start=data.start, end=data.end) for service_name, point in data.iterpoints(): self._res = {} self.process_services(service_name, point) self.process_fields(service_name, point) output.add_point(self.add_rating_informations(point), service_name) return output ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1727866639.2434864 cloudkitty-21.0.0/cloudkitty/rating/hash/controllers/0000775000175000017500000000000000000000000022760 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/rating/hash/controllers/__init__.py0000664000175000017500000000000000000000000025057 0ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/rating/hash/controllers/field.py0000664000175000017500000000704200000000000024420 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright 2015 Objectif Libre # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file 
except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # import pecan import wsmeext.pecan as wsme_pecan from cloudkitty.api.v1 import types as ck_types from cloudkitty import rating from cloudkitty.rating.hash.datamodels import field as field_models from cloudkitty.rating.hash.db import api as db_api class HashMapFieldsController(rating.RatingRestControllerBase): """Controller responsible of fields management. """ @wsme_pecan.wsexpose(field_models.FieldCollection, ck_types.UuidType(), status_code=200) def get_all(self, service_id): """Get the field list. :param service_id: Service's UUID to filter on. :return: List of every fields. """ hashmap = db_api.get_instance() field_list = [] fields_uuid_list = hashmap.list_fields(service_id) for field_uuid in fields_uuid_list: field_db = hashmap.get_field(field_uuid) field_list.append(field_models.Field( **field_db.export_model())) res = field_models.FieldCollection(fields=field_list) return res @wsme_pecan.wsexpose(field_models.Field, ck_types.UuidType(), status_code=200) def get_one(self, field_id): """Return a field. :param field_id: UUID of the field to filter on. """ hashmap = db_api.get_instance() try: field_db = hashmap.get_field(uuid=field_id) return field_models.Field(**field_db.export_model()) except db_api.NoSuchField as e: pecan.abort(404, e.args[0]) @wsme_pecan.wsexpose(field_models.Field, body=field_models.Field, status_code=201) def post(self, field_data): """Create a field. :param field_data: Informations about the field to create. """ hashmap = db_api.get_instance() try: field_db = hashmap.create_field( field_data.service_id, field_data.name) pecan.response.location = pecan.request.path_url if pecan.response.location[-1] != '/': pecan.response.location += '/' pecan.response.location += field_db.field_id return field_models.Field( **field_db.export_model()) except db_api.FieldAlreadyExists as e: pecan.abort(409, e.args[0]) except db_api.ClientHashMapError as e: pecan.abort(400, e.args[0]) @wsme_pecan.wsexpose(None, ck_types.UuidType(), status_code=204) def delete(self, field_id): """Delete the field and all the sub keys recursively. :param field_id: UUID of the field to delete. """ hashmap = db_api.get_instance() try: hashmap.delete_field(uuid=field_id) except db_api.NoSuchField as e: pecan.abort(404, e.args[0]) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/rating/hash/controllers/group.py0000664000175000017500000001200000000000000024457 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright 2015 Objectif Libre # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
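# The controller defined below exposes hashmap group management: listing,
# fetching, creating and deleting groups, plus two custom GET actions
# ('mappings' and 'thresholds') returning the entries attached to a group.
# DELETE accepts a 'recursive' flag to also remove the attached mappings.
# A small illustrative sketch using the hashmap DB API directly (the group
# name here is hypothetical):
#
#     from cloudkitty.rating.hash.db import api as db_api
#
#     hashmap = db_api.get_instance()
#     group = hashmap.create_group('instance_uptime')
#     hashmap.delete_group(uuid=group.group_id, recurse=True)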
# import pecan import wsmeext.pecan as wsme_pecan from cloudkitty.api.v1 import types as ck_types from cloudkitty import rating from cloudkitty.rating.hash.datamodels import group as group_models from cloudkitty.rating.hash.datamodels import mapping as mapping_models from cloudkitty.rating.hash.datamodels import threshold as threshold_models from cloudkitty.rating.hash.db import api as db_api class HashMapGroupsController(rating.RatingRestControllerBase): """Controller responsible of groups management. """ _custom_actions = { 'mappings': ['GET'], 'thresholds': ['GET']} @wsme_pecan.wsexpose(mapping_models.MappingCollection, ck_types.UuidType()) def mappings(self, group_id): """Get the mappings attached to the group. :param group_id: UUID of the group to filter on. """ hashmap = db_api.get_instance() mapping_list = [] mappings_uuid_list = hashmap.list_mappings(group_uuid=group_id) for mapping_uuid in mappings_uuid_list: mapping_db = hashmap.get_mapping(uuid=mapping_uuid) mapping_list.append(mapping_models.Mapping( **mapping_db.export_model())) res = mapping_models.MappingCollection(mappings=mapping_list) return res @wsme_pecan.wsexpose(threshold_models.ThresholdCollection, ck_types.UuidType()) def thresholds(self, group_id): """Get the thresholds attached to the group. :param group_id: UUID of the group to filter on. """ hashmap = db_api.get_instance() threshold_list = [] thresholds_uuid_list = hashmap.list_thresholds(group_uuid=group_id) for threshold_uuid in thresholds_uuid_list: threshold_db = hashmap.get_threshold(uuid=threshold_uuid) threshold_list.append(threshold_models.Threshold( **threshold_db.export_model())) res = threshold_models.ThresholdCollection(thresholds=threshold_list) return res @wsme_pecan.wsexpose(group_models.GroupCollection) def get_all(self): """Get the group list :return: List of every group. """ hashmap = db_api.get_instance() group_list = [] groups_uuid_list = hashmap.list_groups() for group_uuid in groups_uuid_list: group_db = hashmap.get_group(uuid=group_uuid) group_list.append(group_models.Group( **group_db.export_model())) res = group_models.GroupCollection(groups=group_list) return res @wsme_pecan.wsexpose(group_models.Group, ck_types.UuidType()) def get_one(self, group_id): """Return a group. :param group_id: UUID of the group to filter on. """ hashmap = db_api.get_instance() try: group_db = hashmap.get_group(uuid=group_id) return group_models.Group(**group_db.export_model()) except db_api.NoSuchGroup as e: pecan.abort(404, e.args[0]) @wsme_pecan.wsexpose(group_models.Group, body=group_models.Group, status_code=201) def post(self, group_data): """Create a group. :param group_data: Informations about the group to create. """ hashmap = db_api.get_instance() try: group_db = hashmap.create_group(group_data.name) pecan.response.location = pecan.request.path_url if pecan.response.location[-1] != '/': pecan.response.location += '/' pecan.response.location += group_db.group_id return group_models.Group( **group_db.export_model()) except db_api.GroupAlreadyExists as e: pecan.abort(409, e.args[0]) except db_api.ClientHashMapError as e: pecan.abort(400, e.args[0]) @wsme_pecan.wsexpose(None, ck_types.UuidType(), bool, status_code=204) def delete(self, group_id, recursive=False): """Delete a group. :param group_id: UUID of the group to delete. :param recursive: Delete mappings recursively. 
""" hashmap = db_api.get_instance() try: hashmap.delete_group(uuid=group_id, recurse=recursive) except db_api.NoSuchGroup as e: pecan.abort(404, e.args[0]) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/rating/hash/controllers/mapping.py0000664000175000017500000001506000000000000024767 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright 2015 Objectif Libre # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # import pecan import wsmeext.pecan as wsme_pecan from cloudkitty.api.v1 import types as ck_types from cloudkitty import rating from cloudkitty.rating.hash.datamodels import group as group_models from cloudkitty.rating.hash.datamodels import mapping as mapping_models from cloudkitty.rating.hash.db import api as db_api class HashMapMappingsController(rating.RatingRestControllerBase): """Controller responsible of mappings management. """ _custom_actions = { 'group': ['GET']} @wsme_pecan.wsexpose(group_models.Group, ck_types.UuidType()) def group(self, mapping_id): """Get the group attached to the mapping. :param mapping_id: UUID of the mapping to filter on. """ hashmap = db_api.get_instance() try: group_db = hashmap.get_group_from_mapping( uuid=mapping_id) return group_models.Group(**group_db.export_model()) except db_api.MappingHasNoGroup as e: pecan.abort(404, e.args[0]) @wsme_pecan.wsexpose(mapping_models.MappingCollection, ck_types.UuidType(), ck_types.UuidType(), ck_types.UuidType(), bool, ck_types.UuidType(), bool, status_code=200) def get_all(self, service_id=None, field_id=None, group_id=None, no_group=False, tenant_id=None, filter_tenant=False): """Get the mapping list :param service_id: Service UUID to filter on. :param field_id: Field UUID to filter on. :param group_id: Group UUID to filter on. :param no_group: Filter on orphaned mappings. :param tenant_id: Tenant UUID to filter on. :param filter_tenant: Explicitly filter on tenant (default is to not filter on tenant). Useful if you want to filter on tenant being None. :return: List of every mappings. """ hashmap = db_api.get_instance() mapping_list = [] search_opts = dict() if filter_tenant: search_opts['tenant_uuid'] = tenant_id mappings_uuid_list = hashmap.list_mappings( service_uuid=service_id, field_uuid=field_id, group_uuid=group_id, no_group=no_group, **search_opts) for mapping_uuid in mappings_uuid_list: mapping_db = hashmap.get_mapping(uuid=mapping_uuid) mapping_list.append(mapping_models.Mapping( **mapping_db.export_model())) res = mapping_models.MappingCollection(mappings=mapping_list) return res @wsme_pecan.wsexpose(mapping_models.Mapping, ck_types.UuidType()) def get_one(self, mapping_id): """Return a mapping. :param mapping_id: UUID of the mapping to filter on. 
""" hashmap = db_api.get_instance() try: mapping_db = hashmap.get_mapping(uuid=mapping_id) return mapping_models.Mapping( **mapping_db.export_model()) except db_api.NoSuchMapping as e: pecan.abort(404, e.args[0]) @wsme_pecan.wsexpose(mapping_models.Mapping, body=mapping_models.Mapping, status_code=201) def post(self, mapping_data): """Create a mapping. :param mapping_data: Informations about the mapping to create. """ hashmap = db_api.get_instance() try: mapping_db = hashmap.create_mapping( value=mapping_data.value, map_type=mapping_data.map_type, cost=mapping_data.cost, field_id=mapping_data.field_id, group_id=mapping_data.group_id, service_id=mapping_data.service_id, tenant_id=mapping_data.tenant_id) pecan.response.location = pecan.request.path_url if pecan.response.location[-1] != '/': pecan.response.location += '/' pecan.response.location += mapping_db.mapping_id return mapping_models.Mapping( **mapping_db.export_model()) except db_api.MappingAlreadyExists as e: pecan.abort(409, e.args[0]) except db_api.ClientHashMapError as e: pecan.abort(400, e.args[0]) @wsme_pecan.wsexpose(None, ck_types.UuidType(), body=mapping_models.Mapping, status_code=302) def put(self, mapping_id, mapping): """Update a mapping. :param mapping_id: UUID of the mapping to update. :param mapping: Mapping data to insert. """ hashmap = db_api.get_instance() try: hashmap.update_mapping( mapping_id, mapping_id=mapping.mapping_id, value=mapping.value, cost=mapping.cost, map_type=mapping.map_type, group_id=mapping.group_id, tenant_id=mapping.tenant_id) pecan.response.headers['Location'] = pecan.request.path except db_api.MappingAlreadyExists as e: pecan.abort(409, e.args[0]) except db_api.NoSuchMapping as e: pecan.abort(404, e.args[0]) except db_api.ClientHashMapError as e: pecan.abort(400, e.args[0]) @wsme_pecan.wsexpose(None, ck_types.UuidType(), status_code=204) def delete(self, mapping_id): """Delete a mapping. :param mapping_id: UUID of the mapping to delete. """ hashmap = db_api.get_instance() try: hashmap.delete_mapping(uuid=mapping_id) except db_api.NoSuchMapping as e: pecan.abort(404, e.args[0]) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/rating/hash/controllers/root.py0000664000175000017500000000337400000000000024324 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright 2015 Objectif Libre # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
# from wsme import types as wtypes import wsmeext.pecan as wsme_pecan from cloudkitty import rating from cloudkitty.rating.hash.controllers import field as field_api from cloudkitty.rating.hash.controllers import group as group_api from cloudkitty.rating.hash.controllers import mapping as mapping_api from cloudkitty.rating.hash.controllers import service as service_api from cloudkitty.rating.hash.controllers import threshold as threshold_api from cloudkitty.rating.hash.datamodels import mapping as mapping_models class HashMapConfigController(rating.RatingRestControllerBase): """Controller exposing all management sub controllers.""" _custom_actions = { 'types': ['GET'] } services = service_api.HashMapServicesController() fields = field_api.HashMapFieldsController() groups = group_api.HashMapGroupsController() mappings = mapping_api.HashMapMappingsController() thresholds = threshold_api.HashMapThresholdsController() @wsme_pecan.wsexpose([wtypes.text]) def get_types(self): """Return the list of every mapping type available. """ return mapping_models.MAP_TYPE.values ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/rating/hash/controllers/service.py0000664000175000017500000000675500000000000025007 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright 2015 Objectif Libre # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # import pecan import wsmeext.pecan as wsme_pecan from cloudkitty.api.v1 import types as ck_types from cloudkitty import rating from cloudkitty.rating.hash.controllers import field as field_api from cloudkitty.rating.hash.controllers import mapping as mapping_api from cloudkitty.rating.hash.datamodels import service as service_models from cloudkitty.rating.hash.db import api as db_api class HashMapServicesController(rating.RatingRestControllerBase): """Controller responsible of services management. """ fields = field_api.HashMapFieldsController() mappings = mapping_api.HashMapMappingsController() @wsme_pecan.wsexpose(service_models.ServiceCollection) def get_all(self): """Get the service list :return: List of every services. """ hashmap = db_api.get_instance() service_list = [] services_uuid_list = hashmap.list_services() for service_uuid in services_uuid_list: service_db = hashmap.get_service(uuid=service_uuid) service_list.append(service_models.Service( **service_db.export_model())) res = service_models.ServiceCollection(services=service_list) return res @wsme_pecan.wsexpose(service_models.Service, ck_types.UuidType()) def get_one(self, service_id): """Return a service. :param service_id: UUID of the service to filter on. """ hashmap = db_api.get_instance() try: service_db = hashmap.get_service(uuid=service_id) return service_models.Service(**service_db.export_model()) except db_api.NoSuchService as e: pecan.abort(404, e.args[0]) @wsme_pecan.wsexpose(service_models.Service, body=service_models.Service, status_code=201) def post(self, service_data): """Create hashmap service. 
:param service_data: Informations about the service to create. """ hashmap = db_api.get_instance() try: service_db = hashmap.create_service(service_data.name) pecan.response.location = pecan.request.path_url if pecan.response.location[-1] != '/': pecan.response.location += '/' pecan.response.location += service_db.service_id return service_models.Service( **service_db.export_model()) except db_api.ServiceAlreadyExists as e: pecan.abort(409, e.args[0]) @wsme_pecan.wsexpose(None, ck_types.UuidType(), status_code=204) def delete(self, service_id): """Delete the service and all the sub keys recursively. :param service_id: UUID of the service to delete. """ hashmap = db_api.get_instance() try: hashmap.delete_service(uuid=service_id) except db_api.NoSuchService as e: pecan.abort(404, e.args[0]) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/rating/hash/controllers/threshold.py0000664000175000017500000001527400000000000025337 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright 2015 Objectif Libre # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # import pecan import wsmeext.pecan as wsme_pecan from cloudkitty.api.v1 import types as ck_types from cloudkitty import rating from cloudkitty.rating.hash.datamodels import group as group_models from cloudkitty.rating.hash.datamodels import threshold as threshold_models from cloudkitty.rating.hash.db import api as db_api class HashMapThresholdsController(rating.RatingRestControllerBase): """Controller responsible of thresholds management. """ _custom_actions = { 'group': ['GET']} @wsme_pecan.wsexpose(group_models.Group, ck_types.UuidType()) def group(self, threshold_id): """Get the group attached to the threshold. :param threshold_id: UUID of the threshold to filter on. """ hashmap = db_api.get_instance() try: group_db = hashmap.get_group_from_threshold( uuid=threshold_id) return group_models.Group(**group_db.export_model()) except db_api.ThresholdHasNoGroup as e: pecan.abort(404, e.args[0]) @wsme_pecan.wsexpose(threshold_models.ThresholdCollection, ck_types.UuidType(), ck_types.UuidType(), ck_types.UuidType(), bool, ck_types.UuidType(), bool, status_code=200) def get_all(self, service_id=None, field_id=None, group_id=None, no_group=False, tenant_id=None, filter_tenant=False): """Get the threshold list :param service_id: Service UUID to filter on. :param field_id: Field UUID to filter on. :param group_id: Group UUID to filter on. :param no_group: Filter on orphaned thresholds. :param tenant_id: Tenant UUID to filter on. :param filter_tenant: Explicitly filter on tenant (default is to not filter on tenant). Useful if you want to filter on tenant being None. :return: List of every thresholds. 
""" hashmap = db_api.get_instance() threshold_list = [] search_opts = dict() if filter_tenant: search_opts['tenant_uuid'] = tenant_id thresholds_uuid_list = hashmap.list_thresholds( service_uuid=service_id, field_uuid=field_id, group_uuid=group_id, no_group=no_group, **search_opts) for threshold_uuid in thresholds_uuid_list: threshold_db = hashmap.get_threshold(uuid=threshold_uuid) threshold_list.append(threshold_models.Threshold( **threshold_db.export_model())) res = threshold_models.ThresholdCollection(thresholds=threshold_list) return res @wsme_pecan.wsexpose(threshold_models.Threshold, ck_types.UuidType()) def get_one(self, threshold_id): """Return a threshold. :param threshold_id: UUID of the threshold to filter on. """ hashmap = db_api.get_instance() try: threshold_db = hashmap.get_threshold(uuid=threshold_id) return threshold_models.Threshold( **threshold_db.export_model()) except db_api.NoSuchThreshold as e: pecan.abort(404, e.args[0]) @wsme_pecan.wsexpose(threshold_models.Threshold, body=threshold_models.Threshold, status_code=201) def post(self, threshold_data): """Create a threshold. :param threshold_data: Informations about the threshold to create. """ hashmap = db_api.get_instance() try: threshold_db = hashmap.create_threshold( level=threshold_data.level, map_type=threshold_data.map_type, cost=threshold_data.cost, field_id=threshold_data.field_id, group_id=threshold_data.group_id, service_id=threshold_data.service_id, tenant_id=threshold_data.tenant_id) pecan.response.location = pecan.request.path_url if pecan.response.location[-1] != '/': pecan.response.location += '/' pecan.response.location += threshold_db.threshold_id return threshold_models.Threshold( **threshold_db.export_model()) except db_api.ThresholdAlreadyExists as e: pecan.abort(409, e.args[0]) except db_api.ClientHashMapError as e: pecan.abort(400, e.args[0]) @wsme_pecan.wsexpose(None, ck_types.UuidType(), body=threshold_models.Threshold, status_code=302) def put(self, threshold_id, threshold): """Update a threshold. :param threshold_id: UUID of the threshold to update. :param threshold: Threshold data to insert. """ hashmap = db_api.get_instance() try: hashmap.update_threshold( threshold_id, threshold_id=threshold.threshold_id, level=threshold.level, cost=threshold.cost, map_type=threshold.map_type, group_id=threshold.group_id, tenant_id=threshold.tenant_id) pecan.response.headers['Location'] = pecan.request.path except db_api.ThresholdAlreadyExists as e: pecan.abort(409, e.args[0]) except db_api.NoSuchThreshold as e: pecan.abort(404, e.args[0]) except db_api.ClientHashMapError as e: pecan.abort(400, e.args[0]) @wsme_pecan.wsexpose(None, ck_types.UuidType(), status_code=204) def delete(self, threshold_id): """Delete a threshold. :param threshold_id: UUID of the threshold to delete. 
""" hashmap = db_api.get_instance() try: hashmap.delete_threshold(uuid=threshold_id) except db_api.NoSuchThreshold as e: pecan.abort(404, e.args[0]) ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1727866639.2434864 cloudkitty-21.0.0/cloudkitty/rating/hash/datamodels/0000775000175000017500000000000000000000000022527 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/rating/hash/datamodels/__init__.py0000664000175000017500000000000000000000000024626 0ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/rating/hash/datamodels/field.py0000664000175000017500000000321200000000000024162 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright 2015 Objectif Libre # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # from wsme import types as wtypes from cloudkitty.api.v1 import types as ck_types class Field(wtypes.Base): """Type describing a field. A field is mapping a value of the 'desc' dict of the CloudKitty data. It's used to map the name of a metadata. """ field_id = wtypes.wsattr(ck_types.UuidType(), mandatory=False) """UUID of the field.""" name = wtypes.wsattr(wtypes.text, mandatory=True) """Name of the field.""" service_id = wtypes.wsattr(ck_types.UuidType(), mandatory=True) """UUID of the parent service.""" @classmethod def sample(cls): sample = cls(field_id='ac55b000-a05b-4832-b2ff-265a034886ab', name='image_id', service_id='a733d0e1-1ec9-4800-8df8-671e4affd017') return sample class FieldCollection(wtypes.Base): """Type describing a list of fields. """ fields = [Field] """List of fields.""" @classmethod def sample(cls): sample = Field.sample() return cls(fields=[sample]) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/rating/hash/datamodels/group.py0000664000175000017500000000312300000000000024234 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright 2015 Objectif Libre # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # from wsme import types as wtypes from cloudkitty.api.v1 import types as ck_types class Group(wtypes.Base): """Type describing a group. A group is used to divide calculations. It can be used to create a group for the instance rating (flavor) and one if we have premium images (image_id). 
So you can take into account multiple parameters during the rating. """ group_id = wtypes.wsattr(ck_types.UuidType(), mandatory=False) """UUID of the group.""" name = wtypes.wsattr(wtypes.text, mandatory=True) """Name of the group.""" @classmethod def sample(cls): sample = cls(group_id='afe898cb-86d8-4557-ad67-f4f01891bbee', name='instance_rating') return sample class GroupCollection(wtypes.Base): """Type describing a list of groups. """ groups = [Group] """List of groups.""" @classmethod def sample(cls): sample = Group.sample() return cls(groups=[sample]) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/rating/hash/datamodels/mapping.py0000664000175000017500000000506100000000000024536 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright 2015 Objectif Libre # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # import decimal from wsme import types as wtypes from cloudkitty.api.v1 import types as ck_types MAP_TYPE = wtypes.Enum(wtypes.text, 'flat', 'rate') class Mapping(wtypes.Base): """Type describing a Mapping. A mapping is used to apply rating rules based on a value, if the parent is a field then it's check the value of a metadata. If it's a service then it directly apply the rate to the volume. """ mapping_id = wtypes.wsattr(ck_types.UuidType(), mandatory=False) """UUID of the mapping.""" value = wtypes.wsattr(wtypes.text, mandatory=False, default='') """Key of the mapping.""" map_type = wtypes.wsattr(MAP_TYPE, default='flat', name='type') """Type of the mapping.""" cost = wtypes.wsattr(decimal.Decimal, mandatory=True) """Value of the mapping.""" service_id = wtypes.wsattr(ck_types.UuidType(), mandatory=False) """UUID of the service.""" field_id = wtypes.wsattr(ck_types.UuidType(), mandatory=False) """UUID of the field.""" group_id = wtypes.wsattr(ck_types.UuidType(), mandatory=False) """UUID of the hashmap group.""" tenant_id = wtypes.wsattr(wtypes.text, mandatory=False, default=None) """ID of the hashmap tenant.""" @classmethod def sample(cls): sample = cls(mapping_id='39dbd39d-f663-4444-a795-fb19d81af136', field_id='ac55b000-a05b-4832-b2ff-265a034886ab', value='m1.micro', map_type='flat', cost=decimal.Decimal('4.2'), tenant_id='7977999e-2e25-11e6-a8b2-df30b233ffcb') return sample class MappingCollection(wtypes.Base): """Type describing a list of mappings. """ mappings = [Mapping] """List of mappings.""" @classmethod def sample(cls): sample = Mapping.sample() return cls(mappings=[sample]) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/rating/hash/datamodels/service.py0000664000175000017500000000267500000000000024553 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright 2015 Objectif Libre # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # from wsme import types as wtypes from cloudkitty.api.v1 import types as ck_types class Service(wtypes.Base): """Type describing a service. A service is directly mapped to the usage key, the collected service. """ service_id = wtypes.wsattr(ck_types.UuidType(), mandatory=False) """UUID of the service.""" name = wtypes.wsattr(wtypes.text, mandatory=True) """Name of the service.""" @classmethod def sample(cls): sample = cls(service_id='a733d0e1-1ec9-4800-8df8-671e4affd017', name='compute') return sample class ServiceCollection(wtypes.Base): """Type describing a list of services.""" services = [Service] """List of services.""" @classmethod def sample(cls): sample = Service.sample() return cls(services=[sample]) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/rating/hash/datamodels/threshold.py0000664000175000017500000000542100000000000025077 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright 2015 Objectif Libre # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # import decimal from wsme import types as wtypes from cloudkitty.api.v1 import types as ck_types from cloudkitty.rating.hash.datamodels import mapping as mapping_models class Threshold(wtypes.Base): """Type describing a Threshold. A threshold is used to apply rating rules based on a level, if the parent is a field then the level is checked against a metadata. If it's a service then it's the quantity of the resource that is checked. 
""" threshold_id = wtypes.wsattr(ck_types.UuidType(), mandatory=False) """UUID of the threshold.""" level = wtypes.wsattr(decimal.Decimal, mandatory=True, default=decimal.Decimal('0')) """Level of the threshold.""" map_type = wtypes.wsattr(mapping_models.MAP_TYPE, default='flat', name='type') """Type of the threshold.""" cost = wtypes.wsattr(decimal.Decimal, mandatory=True) """Value of the threshold.""" service_id = wtypes.wsattr(ck_types.UuidType(), mandatory=False) """UUID of the service.""" field_id = wtypes.wsattr(ck_types.UuidType(), mandatory=False) """UUID of the field.""" group_id = wtypes.wsattr(ck_types.UuidType(), mandatory=False) """UUID of the hashmap group.""" tenant_id = wtypes.wsattr(wtypes.text, mandatory=False, default=None) """ID of the hashmap tenant.""" @classmethod def sample(cls): sample = cls(threshold_id='39dbd39d-f663-4444-a795-fb19d81af136', field_id='ac55b000-a05b-4832-b2ff-265a034886ab', level=decimal.Decimal('1024'), map_type='flat', cost=decimal.Decimal('4.2'), tenant_id='7977999e-2e25-11e6-a8b2-df30b233ffcb') return sample class ThresholdCollection(wtypes.Base): """Type describing a list of mappings. """ thresholds = [Threshold] """List of thresholds.""" @classmethod def sample(cls): sample = Threshold.sample() return cls(thresholds=[sample]) ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1727866639.2434864 cloudkitty-21.0.0/cloudkitty/rating/hash/db/0000775000175000017500000000000000000000000020777 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/rating/hash/db/__init__.py0000664000175000017500000000000000000000000023076 0ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/rating/hash/db/api.py0000664000175000017500000003144700000000000022133 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright 2014 Objectif Libre # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
# import abc from oslo_config import cfg from oslo_db import api as db_api from cloudkitty.i18n import _ _BACKEND_MAPPING = {'sqlalchemy': 'cloudkitty.rating.hash.db.sqlalchemy.api'} IMPL = db_api.DBAPI.from_config(cfg.CONF, backend_mapping=_BACKEND_MAPPING, lazy=True) def get_instance(): """Return a DB API instance.""" return IMPL class BaseHashMapError(Exception): """Base class for HashMap errors.""" class ClientHashMapError(BaseHashMapError): """Base class for client side errors.""" class NoSuchService(ClientHashMapError): """Raised when the service doesn't exist.""" def __init__(self, name=None, uuid=None): super(NoSuchService, self).__init__( _("No such service: %(name)s (UUID: %(uuid)s)") % {'name': name, 'uuid': uuid}) self.name = name self.uuid = uuid class NoSuchField(ClientHashMapError): """Raised when the field doesn't exist for the service.""" def __init__(self, uuid): super(NoSuchField, self).__init__( _("No such field: %s") % uuid) self.uuid = uuid class NoSuchGroup(ClientHashMapError): """Raised when the group doesn't exist.""" def __init__(self, name=None, uuid=None): super(NoSuchGroup, self).__init__( _("No such group: %(name)s (UUID: %(uuid)s)") % {'name': name, 'uuid': uuid}) self.name = name self.uuid = uuid class NoSuchMapping(ClientHashMapError): """Raised when the mapping doesn't exist.""" def __init__(self, uuid): msg = (_("No such mapping: %s") % uuid) super(NoSuchMapping, self).__init__(msg) self.uuid = uuid class NoSuchThreshold(ClientHashMapError): """Raised when the threshold doesn't exist.""" def __init__(self, uuid): msg = (_("No such threshold: %s") % uuid) super(NoSuchThreshold, self).__init__(msg) self.uuid = uuid class NoSuchType(ClientHashMapError): """Raised when a mapping type is not handled.""" def __init__(self, map_type): msg = (_("No mapping type: %s") % map_type) super(NoSuchType, self).__init__(msg) self.map_type = map_type class ServiceAlreadyExists(ClientHashMapError): """Raised when the service already exists.""" def __init__(self, name, uuid): super(ServiceAlreadyExists, self).__init__( _("Service %(name)s already exists (UUID: %(uuid)s)") % {'name': name, 'uuid': uuid}) self.name = name self.uuid = uuid class FieldAlreadyExists(ClientHashMapError): """Raised when the field already exists.""" def __init__(self, field, uuid): super(FieldAlreadyExists, self).__init__( _("Field %(field)s already exists (UUID: %(uuid)s)") % {'field': field, 'uuid': uuid}) self.field = field self.uuid = uuid class GroupAlreadyExists(ClientHashMapError): """Raised when the group already exists.""" def __init__(self, name, uuid): super(GroupAlreadyExists, self).__init__( _("Group %(name)s already exists (UUID: %(uuid)s)") % {'name': name, 'uuid': uuid}) self.name = name self.uuid = uuid class MappingAlreadyExists(ClientHashMapError): """Raised when the mapping already exists.""" def __init__(self, mapping, parent_id=None, parent_type=None, uuid=None, tenant_id=None): # TODO(sheeprine): UUID is deprecated parent_id = parent_id if parent_id else uuid super(MappingAlreadyExists, self).__init__( _("Mapping '%(mapping)s' already exists for %(p_type)s '%(p_id)s'," " tenant: '%(t_id)s'") % {'mapping': mapping, 'p_type': parent_type, 'p_id': parent_id, 't_id': tenant_id}) self.mapping = mapping self.uuid = parent_id self.parent_id = parent_id self.parent_type = parent_type self.tenant_id = tenant_id class ThresholdAlreadyExists(ClientHashMapError): """Raised when the threshold already exists.""" def __init__(self, threshold, parent_id=None, parent_type=None, uuid=None, 
tenant_id=None): # TODO(sheeprine): UUID is deprecated parent_id = parent_id if parent_id else uuid super(ThresholdAlreadyExists, self).__init__( _("Threshold '%(threshold)s' already exists for %(p_type)s " "'%(p_id)s', tenant: '%(t_id)s'") % {'threshold': threshold, 'p_type': parent_type, 'p_id': parent_id, 't_id': tenant_id}) self.threshold = threshold self.uuid = parent_id self.parent_id = parent_id self.parent_type = parent_type self.tenant_id = tenant_id class MappingHasNoGroup(ClientHashMapError): """Raised when the mapping is not attached to a group.""" def __init__(self, uuid): super(MappingHasNoGroup, self).__init__( _("Mapping has no group (UUID: %s)") % uuid) self.uuid = uuid class ThresholdHasNoGroup(ClientHashMapError): """Raised when the threshold is not attached to a group.""" def __init__(self, uuid): super(ThresholdHasNoGroup, self).__init__( _("Threshold has no group (UUID: %s)") % uuid) self.uuid = uuid class HashMap(object, metaclass=abc.ABCMeta): """Base class for hashmap configuration.""" @abc.abstractmethod def get_migration(self): """Return a migrate manager. """ @abc.abstractmethod def get_service(self, name=None, uuid=None): """Return a service object. :param name: Filter on a service name. :param uuid: The uuid of the service to get. """ @abc.abstractmethod def get_field(self, uuid=None, service_uuid=None, name=None): """Return a field object. :param uuid: UUID of the field to get. :param service_uuid: UUID of the service to filter on. (Used with name) :param name: Name of the field to filter on. (Used with service_uuid) """ @abc.abstractmethod def get_group(self, uuid=None, name=None): """Return a group object. :param uuid: UUID of the group to get. :param name: Name of the group to get. """ @abc.abstractmethod def get_mapping(self, uuid): """Return a mapping object. :param uuid: UUID of the mapping to get. """ @abc.abstractmethod def get_threshold(self, uuid): """Return a threshold object. :param uuid: UUID of the threshold to get. """ @abc.abstractmethod def list_services(self): """Return an UUID list of every service. """ @abc.abstractmethod def list_fields(self, service_uuid): """Return an UUID list of every field in a service. :param service_uuid: The service UUID to filter on. """ @abc.abstractmethod def list_groups(self): """Return an UUID list of every group. """ @abc.abstractmethod def list_mappings(self, service_uuid=None, field_uuid=None, group_uuid=None, no_group=False, **kwargs): """Return an UUID list of every mapping. :param service_uuid: The service to filter on. :param field_uuid: The field to filter on. :param group_uuid: The group to filter on. :param no_group: Filter on mappings without a group. :param tenant_uuid: The tenant to filter on. :return list(str): List of mappings' UUID. """ @abc.abstractmethod def list_thresholds(self, service_uuid=None, field_uuid=None, group_uuid=None, no_group=False, **kwargs): """Return an UUID list of every threshold. :param service_uuid: The service to filter on. :param field_uuid: The field to filter on. :param group_uuid: The group to filter on. :param no_group: Filter on thresholds without a group. :param tenant_uuid: The tenant to filter on. :return list(str): List of thresholds' UUID. """ @abc.abstractmethod def create_service(self, name): """Create a new service. :param name: Name of the service to create. """ @abc.abstractmethod def create_field(self, service_uuid, name): """Create a new field. :param service_uuid: UUID of the parent service. :param name: Name of the field. 
""" @abc.abstractmethod def create_group(self, name): """Create a new group. :param name: The name of the group. """ @abc.abstractmethod def create_mapping(self, cost, map_type='rate', value=None, service_id=None, field_id=None, group_id=None, tenant_id=None): """Create a new service/field mapping. :param cost: Rating value to apply to this mapping. :param map_type: The type of rating rule. :param value: Value of the field this mapping is applying to. :param service_id: Service the mapping is applying to. :param field_id: Field the mapping is applying to. :param group_id: The group of calculations to apply. :param tenant_id: The tenant to apply calculations to. """ @abc.abstractmethod def create_threshold(self, cost, map_type='rate', level=None, service_id=None, field_id=None, group_id=None, tenant_id=None): """Create a new service/field threshold. :param cost: Rating value to apply to this threshold. :param map_type: The type of rating rule. :param level: Level of the field this threshold is applying to. :param service_id: Service the threshold is applying to. :param field_id: Field the threshold is applying to. :param group_id: The group of calculations to apply. :param tenant_id: The tenant to apply calculations to. """ @abc.abstractmethod def update_mapping(self, uuid, **kwargs): """Update a mapping. :param uuid UUID of the mapping to modify. :param cost: Rating value to apply to this mapping. :param map_type: The type of rating rule. :param value: Value of the field this mapping is applying to. :param group_id: The group of calculations to apply. """ @abc.abstractmethod def update_threshold(self, uuid, **kwargs): """Update a mapping. :param uuid UUID of the threshold to modify. :param cost: Rating value to apply to this threshold. :param map_type: The type of rating rule. :param level: Level of the field this threshold is applying to. :param group_id: The group of calculations to apply. """ @abc.abstractmethod def delete_service(self, name=None, uuid=None): """Delete a service recursively. :param name: Name of the service to delete. :param uuid: UUID of the service to delete. """ @abc.abstractmethod def delete_field(self, uuid): """Delete a field recursively. :param uuid UUID of the field to delete. """ def delete_group(self, uuid, recurse=True): """Delete a group and all mappings recursively. :param uuid: UUID of the group to delete. :param recurse: Delete attached mappings recursively. """ @abc.abstractmethod def delete_mapping(self, uuid): """Delete a mapping :param uuid: UUID of the mapping to delete. """ @abc.abstractmethod def delete_threshold(self, uuid): """Delete a threshold :param uuid: UUID of the threshold to delete. 
""" ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1727866639.2474866 cloudkitty-21.0.0/cloudkitty/rating/hash/db/sqlalchemy/0000775000175000017500000000000000000000000023141 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/rating/hash/db/sqlalchemy/__init__.py0000664000175000017500000000000000000000000025240 0ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1727866639.2474866 cloudkitty-21.0.0/cloudkitty/rating/hash/db/sqlalchemy/alembic/0000775000175000017500000000000000000000000024535 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/rating/hash/db/sqlalchemy/alembic/__init__.py0000664000175000017500000000000000000000000026634 0ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/rating/hash/db/sqlalchemy/alembic/env.py0000664000175000017500000000155300000000000025703 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright 2014 Objectif Libre # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # from cloudkitty.common.db.alembic import env # noqa from cloudkitty.rating.hash.db.sqlalchemy import models target_metadata = models.Base.metadata version_table = 'hashmap_alembic' env.run_migrations_online(target_metadata, version_table) ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1727866639.2474866 cloudkitty-21.0.0/cloudkitty/rating/hash/db/sqlalchemy/alembic/models/0000775000175000017500000000000000000000000026020 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/rating/hash/db/sqlalchemy/alembic/models/__init__.py0000664000175000017500000000000000000000000030117 0ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000021500000000000011453 xustar0000000000000000119 path=cloudkitty-21.0.0/cloudkitty/rating/hash/db/sqlalchemy/alembic/models/f8c799db4aa0_fix_unnamed_constraints.py 22 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/rating/hash/db/sqlalchemy/alembic/models/f8c799db4aa0_fix_unnamed_const0000664000175000017500000002307300000000000033440 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright 2014 Objectif Libre # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # from oslo_db.sqlalchemy import models import sqlalchemy from sqlalchemy.ext import declarative from sqlalchemy import orm from sqlalchemy import schema from cloudkitty.common.db import models as ck_models Base = ck_models.get_base() class HashMapBase(models.ModelBase): __table_args__ = {'mysql_charset': "utf8", 'mysql_engine': "InnoDB"} fk_to_resolve = {} def save(self): from cloudkitty import db with db.session_for_write() as session: super(HashMapBase, self).save(session=session) def as_dict(self): d = {} for c in self.__table__.columns: if c.name == 'id': continue d[c.name] = self[c.name] return d def _recursive_resolve(self, path): obj = self for attr in path.split('.'): if hasattr(obj, attr): obj = getattr(obj, attr) else: return None return obj def export_model(self): res = self.as_dict() for fk, mapping in self.fk_to_resolve.items(): res[fk] = self._recursive_resolve(mapping) return res class HashMapService(Base, HashMapBase): """A hashmap service. Used to describe a CloudKitty service such as compute or volume. """ __tablename__ = 'hashmap_services' id = sqlalchemy.Column( sqlalchemy.Integer, primary_key=True) service_id = sqlalchemy.Column( sqlalchemy.String(36), nullable=False, unique=True) name = sqlalchemy.Column( sqlalchemy.String(255), nullable=False, unique=True) fields = orm.relationship( 'HashMapField', backref=orm.backref( 'service', lazy='immediate')) mappings = orm.relationship( 'HashMapMapping', backref=orm.backref( 'service', lazy='immediate')) thresholds = orm.relationship( 'HashMapThreshold', backref=orm.backref( 'service', lazy='immediate')) def __repr__(self): return ('').format( uuid=self.service_id, service=self.name) class HashMapField(Base, HashMapBase): """A hashmap field. Used to describe a service metadata such as flavor_id or image_id for compute. """ __tablename__ = 'hashmap_fields' fk_to_resolve = { 'service_id': 'service.service_id'} @declarative.declared_attr def __table_args__(cls): args = ( schema.UniqueConstraint( 'field_id', 'name', name='uniq_field'), schema.UniqueConstraint( 'service_id', 'name', name='uniq_map_service_field'), HashMapBase.__table_args__,) return args id = sqlalchemy.Column( sqlalchemy.Integer, primary_key=True) field_id = sqlalchemy.Column( sqlalchemy.String(36), nullable=False, unique=True) name = sqlalchemy.Column( sqlalchemy.String(255), nullable=False) service_id = sqlalchemy.Column( sqlalchemy.Integer, sqlalchemy.ForeignKey( 'hashmap_services.id', ondelete='CASCADE'), nullable=False) mappings = orm.relationship( 'HashMapMapping', backref=orm.backref( 'field', lazy='immediate')) thresholds = orm.relationship( 'HashMapThreshold', backref=orm.backref( 'field', lazy='immediate')) def __repr__(self): return ('').format( uuid=self.field_id, field=self.name) class HashMapGroup(Base, HashMapBase): """A grouping of hashmap calculations. Used to group multiple mappings or thresholds into a single calculation. 
""" __tablename__ = 'hashmap_groups' id = sqlalchemy.Column( sqlalchemy.Integer, primary_key=True) group_id = sqlalchemy.Column( sqlalchemy.String(36), nullable=False, unique=True) name = sqlalchemy.Column( sqlalchemy.String(255), nullable=False, unique=True) mappings = orm.relationship( 'HashMapMapping', backref=orm.backref( 'group', lazy='immediate')) thresholds = orm.relationship( 'HashMapThreshold', backref=orm.backref( 'group', lazy='immediate')) def __repr__(self): return ('').format( uuid=self.group_id, name=self.name) class HashMapMapping(Base, HashMapBase): """A mapping between a field or service, a value and a type. Used to model final equation. """ __tablename__ = 'hashmap_mappings' fk_to_resolve = { 'service_id': 'service.service_id', 'field_id': 'field.field_id', 'group_id': 'group.group_id'} @declarative.declared_attr def __table_args__(cls): args = ( schema.UniqueConstraint( 'value', 'field_id', name='uniq_field_mapping'), schema.UniqueConstraint( 'value', 'service_id', name='uniq_service_mapping'), HashMapBase.__table_args__,) return args id = sqlalchemy.Column( sqlalchemy.Integer, primary_key=True) mapping_id = sqlalchemy.Column( sqlalchemy.String(36), nullable=False, unique=True) value = sqlalchemy.Column( sqlalchemy.String(255), nullable=True) cost = sqlalchemy.Column( sqlalchemy.Numeric(20, 8), nullable=False) map_type = sqlalchemy.Column( sqlalchemy.Enum( 'flat', 'rate', name='enum_map_type', create_constraint=True), nullable=False) service_id = sqlalchemy.Column( sqlalchemy.Integer, sqlalchemy.ForeignKey( 'hashmap_services.id', ondelete='CASCADE'), nullable=True) field_id = sqlalchemy.Column( sqlalchemy.Integer, sqlalchemy.ForeignKey( 'hashmap_fields.id', ondelete='CASCADE'), nullable=True) group_id = sqlalchemy.Column( sqlalchemy.Integer, sqlalchemy.ForeignKey( 'hashmap_groups.id', ondelete='SET NULL'), nullable=True) def __repr__(self): return ('').format( uuid=self.mapping_id, map_type=self.map_type, value=self.value, cost=self.cost) class HashMapThreshold(Base, HashMapBase): """A threshold matching a service or a field with a level and a type. Used to model final equation. 
""" __tablename__ = 'hashmap_thresholds' fk_to_resolve = { 'service_id': 'service.service_id', 'field_id': 'field.field_id', 'group_id': 'group.group_id'} @declarative.declared_attr def __table_args__(cls): args = ( schema.UniqueConstraint( 'level', 'field_id', name='uniq_field_threshold'), schema.UniqueConstraint( 'level', 'service_id', name='uniq_service_threshold'), HashMapBase.__table_args__,) return args id = sqlalchemy.Column( sqlalchemy.Integer, primary_key=True) threshold_id = sqlalchemy.Column( sqlalchemy.String(36), nullable=False, unique=True) level = sqlalchemy.Column( sqlalchemy.Numeric(20, 8), nullable=True) cost = sqlalchemy.Column( sqlalchemy.Numeric(20, 8), nullable=False) map_type = sqlalchemy.Column( sqlalchemy.Enum( 'flat', 'rate', name='enum_hashmap_type', create_constraint=True), nullable=False) service_id = sqlalchemy.Column( sqlalchemy.Integer, sqlalchemy.ForeignKey( 'hashmap_services.id', ondelete='CASCADE'), nullable=True) field_id = sqlalchemy.Column( sqlalchemy.Integer, sqlalchemy.ForeignKey( 'hashmap_fields.id', ondelete='CASCADE'), nullable=True) group_id = sqlalchemy.Column( sqlalchemy.Integer, sqlalchemy.ForeignKey( 'hashmap_groups.id', ondelete='SET NULL'), nullable=True) def __repr__(self): return ('').format( uuid=self.threshold_id, map_type=self.map_type, level=self.level, cost=self.cost) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/rating/hash/db/sqlalchemy/alembic/script.py.mako0000664000175000017500000000172300000000000027344 0ustar00zuulzuul00000000000000# Copyright ${create_date.year} OpenStack Foundation # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # """${message} Revision ID: ${up_revision} Revises: ${down_revision} Create Date: ${create_date} """ # revision identifiers, used by Alembic. revision = ${repr(up_revision)} down_revision = ${repr(down_revision)} from alembic import op import sqlalchemy as sa ${imports if imports else ""} def upgrade(): ${upgrades if upgrades else "pass"} ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1727866639.2474866 cloudkitty-21.0.0/cloudkitty/rating/hash/db/sqlalchemy/alembic/versions/0000775000175000017500000000000000000000000026405 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000024000000000000011451 xustar0000000000000000138 path=cloudkitty-21.0.0/cloudkitty/rating/hash/db/sqlalchemy/alembic/versions/10d2738b67df_rename_mapping_table_to_hashmap_mappings.py 22 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/rating/hash/db/sqlalchemy/alembic/versions/10d2738b67df_rename_mapping_0000664000175000017500000000160100000000000033270 0ustar00zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Rename mapping table to hashmap_mappings. Revision ID: 10d2738b67df Revises: 54cc17accf2c Create Date: 2016-05-24 18:37:25.305430 """ # revision identifiers, used by Alembic. revision = '10d2738b67df' down_revision = '54cc17accf2c' from alembic import op # noqa: E402 def upgrade(): op.rename_table('hashmap_maps', 'hashmap_mappings') ././@PaxHeader0000000000000000000000000000021100000000000011447 xustar0000000000000000115 path=cloudkitty-21.0.0/cloudkitty/rating/hash/db/sqlalchemy/alembic/versions/3dd7e13527f3_initial_migration.py 22 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/rating/hash/db/sqlalchemy/alembic/versions/3dd7e13527f3_initial_migrati0000664000175000017500000000737500000000000033332 0ustar00zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Initial migration Revision ID: 3dd7e13527f3 Revises: None Create Date: 2015-03-10 13:06:41.067563 """ # revision identifiers, used by Alembic. 
revision = '3dd7e13527f3' down_revision = None from alembic import op # noqa: E402 import sqlalchemy as sa # noqa: E402 def upgrade(): op.create_table( 'hashmap_services', sa.Column('id', sa.Integer(), nullable=False), sa.Column('service_id', sa.String(length=36), nullable=False), sa.Column('name', sa.String(length=255), nullable=False), sa.PrimaryKeyConstraint('id'), sa.UniqueConstraint('name'), sa.UniqueConstraint('service_id'), mysql_charset='utf8', mysql_engine='InnoDB') op.create_table( 'hashmap_fields', sa.Column('id', sa.Integer(), nullable=False), sa.Column('field_id', sa.String(length=36), nullable=False), sa.Column('name', sa.String(length=255), nullable=False), sa.Column('service_id', sa.Integer(), nullable=False), sa.ForeignKeyConstraint( ['service_id'], ['hashmap_services.id'], ondelete='CASCADE'), sa.PrimaryKeyConstraint('id'), sa.UniqueConstraint('field_id'), sa.UniqueConstraint('field_id', 'name', name='uniq_field'), sa.UniqueConstraint( 'service_id', 'name', name='uniq_map_service_field'), mysql_charset='utf8', mysql_engine='InnoDB') op.create_table( 'hashmap_groups', sa.Column('id', sa.Integer(), nullable=False), sa.Column('group_id', sa.String(length=36), nullable=False), sa.Column('name', sa.String(length=255), nullable=False), sa.PrimaryKeyConstraint('id'), sa.UniqueConstraint('group_id'), sa.UniqueConstraint('name'), mysql_charset='utf8', mysql_engine='InnoDB') op.create_table( 'hashmap_maps', sa.Column('id', sa.Integer(), nullable=False), sa.Column('mapping_id', sa.String(length=36), nullable=False), sa.Column('value', sa.String(length=255), nullable=True), sa.Column('cost', sa.Numeric(20, 8), nullable=False), sa.Column( 'map_type', sa.Enum('flat', 'rate', name='enum_map_type', create_constraint=True), nullable=False), sa.Column('service_id', sa.Integer(), nullable=True), sa.Column('field_id', sa.Integer(), nullable=True), sa.Column('group_id', sa.Integer(), nullable=True), sa.ForeignKeyConstraint( ['field_id'], ['hashmap_fields.id'], ondelete='CASCADE'), sa.ForeignKeyConstraint( ['group_id'], ['hashmap_groups.id'], ondelete='SET NULL'), sa.ForeignKeyConstraint( ['service_id'], ['hashmap_services.id'], ondelete='CASCADE'), sa.PrimaryKeyConstraint('id'), sa.UniqueConstraint('mapping_id'), sa.UniqueConstraint( 'value', 'field_id', name='uniq_field_mapping'), sa.UniqueConstraint( 'value', 'service_id', name='uniq_service_mapping'), mysql_charset='utf8', mysql_engine='InnoDB') ././@PaxHeader0000000000000000000000000000022600000000000011455 xustar0000000000000000128 path=cloudkitty-21.0.0/cloudkitty/rating/hash/db/sqlalchemy/alembic/versions/4da82e1c11c8_add_per_tenant_hashmap_support.py 22 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/rating/hash/db/sqlalchemy/alembic/versions/4da82e1c11c8_add_per_tenant_0000664000175000017500000000610700000000000033332 0ustar00zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
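# --- Illustrative sketch only; not part of the upstream tree. ---
# Every file in this 'versions' directory follows the script.py.mako
# template shown earlier: a docstring, the revision identifiers, then an
# upgrade() built on alembic 'op' calls.  The hypothetical revision below
# (both identifiers and the column are made up for illustration) shows the
# minimal shape such a migration takes.
"""Example revision layout (illustration only).

Revision ID: exampleonly1
Revises: exampleonly0
Create Date: 2024-01-01 00:00:00.000000
"""

# revision identifiers, used by Alembic.
revision = 'exampleonly1'
down_revision = 'exampleonly0'

from alembic import op  # noqa: E402
import sqlalchemy as sa  # noqa: E402


def upgrade():
    # Purely illustrative: add a nullable description column to the
    # hashmap_groups table.
    op.add_column('hashmap_groups',
                  sa.Column('description', sa.String(length=255),
                            nullable=True))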
"""Add per tenant hashmap support Revision ID: 4da82e1c11c8 Revises: c88a06b1cfce Create Date: 2016-05-31 12:27:30.821497 """ # revision identifiers, used by Alembic. revision = '4da82e1c11c8' down_revision = 'c88a06b1cfce' from alembic import op # noqa: E402 import sqlalchemy as sa # noqa: E402 CONSTRAINT_MAP = { 'hashmap_mappings': { 'uniq_field_mapping': ( ['value', 'field_id', 'tenant_id'], ['value', 'field_id']), 'uniq_service_mapping': ( ['value', 'service_id', 'tenant_id'], ['value', 'service_id'])}, 'hashmap_thresholds': { 'uniq_field_threshold': ( ['level', 'field_id', 'tenant_id'], ['level', 'field_id']), 'uniq_service_threshold': ( ['level', 'service_id', 'tenant_id'], ['level', 'service_id'])}} def get_reflect(table): reflect_args = [ sa.Column( 'service_id', sa.Integer, sa.ForeignKey( 'hashmap_services.id', ondelete='CASCADE', name='fk_{}_service_id_hashmap_services'.format(table)), nullable=True), sa.Column( 'field_id', sa.Integer, sa.ForeignKey( 'hashmap_fields.id', ondelete='CASCADE', name='fk_{}_field_id_hashmap_fields'.format(table)), nullable=True), sa.Column( 'group_id', sa.Integer, sa.ForeignKey( 'hashmap_groups.id', ondelete='SET NULL', name='fk_{}_group_id_hashmap_groups'.format(table)), nullable=True), sa.Column( 'map_type', sa.Enum( 'flat', 'rate', name='enum_{}map_type'.format( 'hash' if table == 'hashmap_thresholds' else ''), create_constraint=True), nullable=False)] return reflect_args def upgrade(): for table in ('hashmap_mappings', 'hashmap_thresholds'): with op.batch_alter_table( table, reflect_args=get_reflect(table) ) as batch_op: batch_op.add_column( sa.Column( 'tenant_id', sa.String(length=36), nullable=True)) for name, columns in CONSTRAINT_MAP[table].items(): batch_op.drop_constraint(name, type_='unique') batch_op.create_unique_constraint(name, columns[0]) ././@PaxHeader0000000000000000000000000000022600000000000011455 xustar0000000000000000128 path=cloudkitty-21.0.0/cloudkitty/rating/hash/db/sqlalchemy/alembic/versions/4e0232ce_increase_precision_for_cost_fields.py 22 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/rating/hash/db/sqlalchemy/alembic/versions/4e0232ce_increase_precision_0000664000175000017500000000264100000000000033545 0ustar00zuulzuul00000000000000# Copyright 2018 OpenStack Foundation # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # """Increase cost fields to 30 digits Revision ID: 4e0232ce Revises: Ifbf5b2515c7 Create Date: 2022-04-06 08:00:00.000000 """ from alembic import op import importlib import sqlalchemy as sa # revision identifiers, used by Alembic. revision = '4e0232ce' down_revision = 'Ifbf5b2515c7' def upgrade(): down_version_module = importlib.import_module( "cloudkitty.rating.hash.db.sqlalchemy.alembic.versions." 
"644faa4491fd_update_tenant_id_type_from_uuid_to_text") for table_name in ('hashmap_mappings', 'hashmap_thresholds'): with op.batch_alter_table( table_name, reflect_args=down_version_module.get_reflect( table_name)) as batch_op: batch_op.alter_column('cost', type_=sa.Numeric(precision=40, scale=28)) ././@PaxHeader0000000000000000000000000000021700000000000011455 xustar0000000000000000121 path=cloudkitty-21.0.0/cloudkitty/rating/hash/db/sqlalchemy/alembic/versions/4fa888fd7eda_added_threshold_support.py 22 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/rating/hash/db/sqlalchemy/alembic/versions/4fa888fd7eda_added_threshold0000664000175000017500000000472300000000000033530 0ustar00zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Added threshold support. Revision ID: 4fa888fd7eda Revises: 3dd7e13527f3 Create Date: 2015-05-05 14:39:24.562388 """ # revision identifiers, used by Alembic. revision = '4fa888fd7eda' down_revision = '3dd7e13527f3' from alembic import op # noqa: E402 import sqlalchemy as sa # noqa: E402 def upgrade(): # NOTE(sheeprine): Hack to let the migrations pass for postgresql dialect = op.get_context().dialect.name if dialect == 'postgresql': constraints = ['uniq_field_threshold', 'uniq_service_threshold'] else: constraints = ['uniq_field_mapping', 'uniq_service_mapping'] op.create_table( 'hashmap_thresholds', sa.Column('id', sa.Integer(), nullable=False), sa.Column('threshold_id', sa.String(length=36), nullable=False), sa.Column('level', sa.Numeric(precision=20, scale=8), nullable=True), sa.Column('cost', sa.Numeric(precision=20, scale=8), nullable=False), sa.Column( 'map_type', sa.Enum('flat', 'rate', name='enum_map_type', create_constraint=True), nullable=False), sa.Column('service_id', sa.Integer(), nullable=True), sa.Column('field_id', sa.Integer(), nullable=True), sa.Column('group_id', sa.Integer(), nullable=True), sa.ForeignKeyConstraint( ['field_id'], ['hashmap_fields.id'], ondelete='CASCADE'), sa.ForeignKeyConstraint( ['group_id'], ['hashmap_groups.id'], ondelete='SET NULL'), sa.ForeignKeyConstraint( ['service_id'], ['hashmap_services.id'], ondelete='CASCADE'), sa.PrimaryKeyConstraint('id'), sa.UniqueConstraint('threshold_id'), sa.UniqueConstraint('level', 'field_id', name=constraints[0]), sa.UniqueConstraint('level', 'service_id', name=constraints[1]), mysql_charset='utf8', mysql_engine='InnoDB') ././@PaxHeader0000000000000000000000000000021500000000000011453 xustar0000000000000000119 path=cloudkitty-21.0.0/cloudkitty/rating/hash/db/sqlalchemy/alembic/versions/54cc17accf2c_fixed_constraint_name.py 22 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/rating/hash/db/sqlalchemy/alembic/versions/54cc17accf2c_fixed_constrain0000664000175000017500000001020400000000000033534 0ustar00zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Fixed constraint name. Revision ID: 54cc17accf2c Revises: 4fa888fd7eda Create Date: 2015-05-28 16:44:32.936076 """ # revision identifiers, used by Alembic. revision = '54cc17accf2c' down_revision = '4fa888fd7eda' from alembic import op # noqa: E402 import sqlalchemy as sa # noqa: E402 def create_table(is_old=False): if is_old: constraints = ['uniq_field_mapping', 'uniq_service_mapping'] else: constraints = ['uniq_field_threshold', 'uniq_service_threshold'] table = op.create_table( 'tmig_hashmap_thresholds', sa.Column('id', sa.Integer(), nullable=False), sa.Column('threshold_id', sa.String(length=36), nullable=False), sa.Column('level', sa.Numeric(precision=20, scale=8), nullable=True), sa.Column('cost', sa.Numeric(precision=20, scale=8), nullable=False), sa.Column( 'map_type', sa.Enum('flat', 'rate', name='enum_map_type', create_constraint=True), nullable=False), sa.Column('service_id', sa.Integer(), nullable=True), sa.Column('field_id', sa.Integer(), nullable=True), sa.Column('group_id', sa.Integer(), nullable=True), sa.ForeignKeyConstraint( ['field_id'], ['hashmap_fields.id'], ondelete='CASCADE'), sa.ForeignKeyConstraint( ['group_id'], ['hashmap_groups.id'], ondelete='SET NULL'), sa.ForeignKeyConstraint( ['service_id'], ['hashmap_services.id'], ondelete='CASCADE'), sa.PrimaryKeyConstraint('id'), sa.UniqueConstraint('threshold_id'), sa.UniqueConstraint('level', 'field_id', name=constraints[0]), sa.UniqueConstraint('level', 'service_id', name=constraints[1])) return table def upgrade(): dialect = op.get_context().dialect.name try: # Needs sqlalchemy 0.8 if dialect != 'postgresql': with op.batch_alter_table('hashmap_thresholds') as batch_op: batch_op.drop_constraint( 'uniq_field_mapping', type_='unique') batch_op.drop_constraint( 'uniq_service_mapping', type_='unique') batch_op.create_unique_constraint( 'uniq_field_threshold', ['level', 'field_id']) batch_op.create_unique_constraint( 'uniq_service_threshold', ['level', 'service_id']) except AttributeError: # No support for batch operations if dialect == 'sqlite': new_table = create_table() sel = sa.sql.expression.select(new_table.columns.keys()) op.execute( new_table.insert().from_select( new_table.columns.keys(), sel.select_from('hashmap_thresholds'))) op.drop_table('hashmap_thresholds') op.rename_table('tmig_hashmap_thresholds', 'hashmap_thresholds') else: op.drop_constraint( 'uniq_field_mapping', 'hashmap_thresholds', type_='unique') op.drop_constraint( 'uniq_service_mapping', 'hashmap_thresholds', type_='unique') op.create_unique_constraint( 'uniq_field_threshold', 'hashmap_thresholds', ['level', 'field_id']) op.create_unique_constraint( 'uniq_service_threshold', 'hashmap_thresholds', ['level', 'service_id']) ././@PaxHeader0000000000000000000000000000023700000000000011457 xustar0000000000000000137 path=cloudkitty-21.0.0/cloudkitty/rating/hash/db/sqlalchemy/alembic/versions/644faa4491fd_update_tenant_id_type_from_uuid_to_text.py 22 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/rating/hash/db/sqlalchemy/alembic/versions/644faa4491fd_update_tenant_i0000664000175000017500000000622100000000000033375 0ustar00zuulzuul00000000000000# 
Copyright 2018 OpenStack Foundation # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # """Update tenant_id type from uuid to text Revision ID: 644faa4491fd Revises: 4da82e1c11c8 Create Date: 2018-10-29 17:25:37.901136 """ # revision identifiers, used by Alembic. revision = '644faa4491fd' down_revision = '4da82e1c11c8' from alembic import op # noqa: E402 import sqlalchemy as sa # noqa: E402 CONSTRAINT_MAP = { 'hashmap_mappings': { 'uniq_field_mapping': ( ['value', 'field_id', 'tenant_id'], ['value', 'field_id']), 'uniq_service_mapping': ( ['value', 'service_id', 'tenant_id'], ['value', 'service_id'])}, 'hashmap_thresholds': { 'uniq_field_threshold': ( ['level', 'field_id', 'tenant_id'], ['level', 'field_id']), 'uniq_service_threshold': ( ['level', 'service_id', 'tenant_id'], ['level', 'service_id'])}} def get_reflect(table): reflect_args = [ sa.Column( 'service_id', sa.Integer, sa.ForeignKey( 'hashmap_services.id', ondelete='CASCADE', name='fk_{}_service_id_hashmap_services'.format(table)), nullable=True), sa.Column( 'field_id', sa.Integer, sa.ForeignKey( 'hashmap_fields.id', ondelete='CASCADE', name='fk_{}_field_id_hashmap_fields'.format(table)), nullable=True), sa.Column( 'group_id', sa.Integer, sa.ForeignKey( 'hashmap_groups.id', ondelete='SET NULL', name='fk_{}_group_id_hashmap_groups'.format(table)), nullable=True), sa.Column( 'map_type', sa.Enum( 'flat', 'rate', name='enum_{}map_type'.format( 'hash' if table == 'hashmap_thresholds' else ''), create_constraint=True), nullable=False)] return reflect_args def upgrade(): for table in ('hashmap_mappings', 'hashmap_thresholds'): with op.batch_alter_table( table, reflect_args=get_reflect(table) ) as batch_op: batch_op.alter_column('tenant_id', type_=sa.String(length=255), existing_nullable=True) for name, columns in CONSTRAINT_MAP[table].items(): batch_op.drop_constraint(name, type_='unique') batch_op.create_unique_constraint(name, columns[0]) ././@PaxHeader0000000000000000000000000000023200000000000011452 xustar0000000000000000132 path=cloudkitty-21.0.0/cloudkitty/rating/hash/db/sqlalchemy/alembic/versions/Ifbf5b2515c7_increase_precision_for_cost_fields.py 22 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/rating/hash/db/sqlalchemy/alembic/versions/Ifbf5b2515c7_increase_precis0000664000175000017500000000265100000000000033416 0ustar00zuulzuul00000000000000# Copyright 2018 OpenStack Foundation # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
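# NOTE: the ``get_reflect()`` helper defined in the revision above exists
# because SQLite reflection does not recover ON DELETE clauses (see the
# NOTE(sheeprine) comment in the c88a06b1cfce revision), so batch mode is
# handed explicit Column/ForeignKey definitions when the tables are rebuilt.
# Later revisions reuse it through ``importlib`` because the module name
# starts with a digit and therefore cannot appear in a normal ``import``
# statement. Sketch of that pattern, as used by this and the 4e0232ce
# revisions:
#
#     mod = importlib.import_module(
#         "cloudkitty.rating.hash.db.sqlalchemy.alembic.versions."
#         "644faa4491fd_update_tenant_id_type_from_uuid_to_text")
#     with op.batch_alter_table(
#             'hashmap_mappings',
#             reflect_args=mod.get_reflect('hashmap_mappings')) as batch_op:
#         batch_op.alter_column('cost', type_=sa.Numeric(30, 28))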
# """Increase cost fields to 30 digits Revision ID: Ifbf5b2515c7 Revises: 644faa4491fd Create Date: 2020-09-29 14:22:00.000000 """ from alembic import op import importlib import sqlalchemy as sa # revision identifiers, used by Alembic. revision = 'Ifbf5b2515c7' down_revision = '644faa4491fd' def upgrade(): down_version_module = importlib.import_module( "cloudkitty.rating.hash.db.sqlalchemy.alembic.versions." "644faa4491fd_update_tenant_id_type_from_uuid_to_text") for table_name in ('hashmap_mappings', 'hashmap_thresholds'): with op.batch_alter_table( table_name, reflect_args=down_version_module.get_reflect( table_name)) as batch_op: batch_op.alter_column('cost', type_=sa.Numeric(precision=30, scale=28)) ././@PaxHeader0000000000000000000000000000023000000000000011450 xustar0000000000000000130 path=cloudkitty-21.0.0/cloudkitty/rating/hash/db/sqlalchemy/alembic/versions/c88a06b1cfce_clean_hashmap_fields_constraints.py 22 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/rating/hash/db/sqlalchemy/alembic/versions/c88a06b1cfce_clean_hashmap_f0000664000175000017500000000326600000000000033464 0ustar00zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Clean hashmap fields constraints. Revision ID: c88a06b1cfce Revises: f8c799db4aa0 Create Date: 2016-05-19 18:06:43.315066 """ # revision identifiers, used by Alembic. revision = 'c88a06b1cfce' down_revision = 'f8c799db4aa0' from alembic import op # noqa: E402 import sqlalchemy as sa # noqa: E402 def upgrade(): with op.batch_alter_table( 'hashmap_fields', # NOTE(sheeprine): Forced reflection is needed because of SQLAlchemy's # SQLite backend limitation reflecting ON DELETE clauses. reflect_args=[ sa.Column( 'service_id', sa.Integer, sa.ForeignKey( 'hashmap_services.id', ondelete='CASCADE', name='fk_hashmap_fields_service_id_hashmap_services'), nullable=False)]) as batch_op: batch_op.drop_constraint( 'uniq_field', type_='unique') batch_op.create_unique_constraint( 'uniq_field_per_service', ['service_id', 'name']) batch_op.drop_constraint( 'uniq_map_service_field', type_='unique') ././@PaxHeader0000000000000000000000000000021700000000000011455 xustar0000000000000000121 path=cloudkitty-21.0.0/cloudkitty/rating/hash/db/sqlalchemy/alembic/versions/f8c799db4aa0_fix_unnamed_constraints.py 22 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/rating/hash/db/sqlalchemy/alembic/versions/f8c799db4aa0_fix_unnamed_con0000664000175000017500000002111100000000000033445 0ustar00zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Fix unnamed constraints. 
Revision ID: f8c799db4aa0 Revises: 10d2738b67df Create Date: 2016-05-18 18:08:19.331412 """ # revision identifiers, used by Alembic. revision = 'f8c799db4aa0' down_revision = '10d2738b67df' import copy # noqa: E402 from alembic import op # noqa: E402 from cloudkitty.rating.hash.db.sqlalchemy.alembic.models import ( # noqa: E402 f8c799db4aa0_fix_unnamed_constraints as models) OPS = { 'foreignkey': { 'hashmap_fields': [ ('hashmap_fields_service_id_fkey', 'fk_hashmap_fields_service_id_hashmap_services', { 'args': [ 'hashmap_services', ['service_id'], ['id']], 'kwargs': {'ondelete': 'CASCADE'}})], 'hashmap_thresholds': [ ('hashmap_thresholds_field_id_fkey', 'fk_hashmap_thresholds_field_id_hashmap_fields', { 'args': [ 'hashmap_fields', ['field_id'], ['id']], 'kwargs': {'ondelete': 'CASCADE'}}), ('hashmap_thresholds_group_id_fkey', 'fk_hashmap_thresholds_group_id_hashmap_groups', { 'args': [ 'hashmap_groups', ['group_id'], ['id']], 'kwargs': {'ondelete': 'SET NULL'}}), ('hashmap_thresholds_service_id_fkey', 'fk_hashmap_thresholds_service_id_hashmap_services', { 'args': [ 'hashmap_services', ['service_id'], ['id']], 'kwargs': {'ondelete': 'CASCADE'}})], 'hashmap_mappings': [ ('hashmap_maps_field_id_fkey', 'fk_hashmap_maps_field_id_hashmap_fields', { 'args': [ 'hashmap_fields', ['field_id'], ['id']], 'kwargs': {'ondelete': 'CASCADE'}}), ('hashmap_maps_group_id_fkey', 'fk_hashmap_maps_group_id_hashmap_groups', { 'args': [ 'hashmap_groups', ['group_id'], ['id']], 'kwargs': {'ondelete': 'SET NULL'}}), ('hashmap_maps_service_id_fkey', 'fk_hashmap_maps_service_id_hashmap_services', { 'args': [ 'hashmap_fields', ['field_id'], ['id']], 'kwargs': {'ondelete': 'CASCADE'}})] }, 'primary': { 'hashmap_services': [ ('hashmap_services_pkey', 'pk_hashmap_services', {'args': [['id']]})], 'hashmap_fields': [ ('hashmap_fields_pkey', 'pk_hashmap_fields', {'args': [['id']]})], 'hashmap_groups': [ ('hashmap_groups_pkey', 'pk_hashmap_groups', {'args': [['id']]})], 'hashmap_mappings': [ ('hashmap_maps_pkey', 'pk_hashmap_maps', {'args': [['id']]})], 'hashmap_thresholds': [ ('hashmap_thresholds_pkey', 'pk_hashmap_thresholds', {'args': [['id']]})] }, 'unique': { 'hashmap_services': [ ('hashmap_services_name_key', 'uq_hashmap_services_name', {'args': [['name']]}), ('hashmap_services_service_id_key', 'uq_hashmap_services_service_id', {'args': [['service_id']]})], 'hashmap_fields': [ ('hashmap_fields_field_id_key', 'uq_hashmap_fields_field_id', {'args': [['field_id']]})], 'hashmap_groups': [ ('hashmap_groups_group_id_key', 'uq_hashmap_groups_group_id', {'args': [['group_id']]}), ('hashmap_groups_name_key', 'uq_hashmap_groups_name', {'args': [['name']]})], 'hashmap_mappings': [ ('hashmap_maps_mapping_id_key', 'uq_hashmap_maps_mapping_id', {'args': [['mapping_id']]})], 'hashmap_thresholds': [ ('hashmap_thresholds_threshold_id_key', 'uq_hashmap_thresholds_threshold_id', {'args': [['threshold_id']]})]}} POST_OPS = { 'primary': { 'hashmap_mappings': [ ('pk_hashmap_maps', 'pk_hashmap_mappings', {'args': [['id']]})] }} def upgrade_sqlite(): # NOTE(sheeprine): Batch automatically recreates tables, # use it as a lazy way to recreate tables and transfer data automagically. for name, table in models.Base.metadata.tables.items(): with op.batch_alter_table(name, copy_from=table) as batch_op: # NOTE(sheeprine): Dummy operation to force recreate. # Easier than delete and create. 
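            # ``copy_from=table`` tells Alembic to rebuild the table from the
            # model definition instead of reflecting it from SQLite, and the
            # no-op ``alter_column('id')`` that follows only exists to give
            # the batch context something to apply; the resulting
            # copy-and-rename is what re-creates the constraints with their
            # proper names.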
batch_op.alter_column('id') def upgrade_mysql(): op.execute('SET FOREIGN_KEY_CHECKS=0;') tables = copy.deepcopy(models.Base.metadata.tables) # Copy first without constraints tables['hashmap_fields'].constraints = set() tables['hashmap_mappings'].constraints = set() tables['hashmap_thresholds'].constraints = set() for name, table in tables.items(): with op.batch_alter_table(name, copy_from=table, recreate='always') as batch_op: batch_op.alter_column('id') # Final copy with constraints for name, table in models.Base.metadata.tables.items(): with op.batch_alter_table(name, copy_from=table, recreate='always') as batch_op: batch_op.alter_column('id') op.execute('SET FOREIGN_KEY_CHECKS=1;') def translate_op(op_, constraint_type, name, table, *args, **kwargs): if op_ == 'drop': op.drop_constraint(name, table, type_=constraint_type) else: if constraint_type == 'primary': func = op.create_primary_key elif constraint_type == 'unique': func = op.create_unique_constraint elif constraint_type == 'foreignkey': func = op.create_foreign_key func(name, table, *args, **kwargs) def upgrade_postgresql(): # NOTE(sheeprine): No automagic stuff here. # Check if tables need additional work conn = op.get_bind() res = conn.execute( "SELECT * FROM INFORMATION_SCHEMA.REFERENTIAL_CONSTRAINTS" " WHERE CONSTRAINT_NAME = 'hashmap_thresholds_field_id_fkey';") if res.rowcount: ops_list = [OPS, POST_OPS] else: ops_list = [POST_OPS] for cur_ops in ops_list: for constraint_type in ('foreignkey', 'unique', 'primary'): for table_name, constraints in cur_ops.get(constraint_type, dict()).items(): for constraint in constraints: old_name = constraint[0] translate_op( 'drop', constraint_type, old_name, table_name) for constraint_type in ('primary', 'unique', 'foreignkey'): for table_name, constraints in cur_ops.get(constraint_type, dict()).items(): for constraint in constraints: new_name = constraint[1] params = constraint[2] translate_op( 'create', constraint_type, new_name, table_name, *params.get('args', []), **params.get('kwargs', {})) def upgrade(): dialect = op.get_context().dialect if dialect.name == 'sqlite': upgrade_sqlite() elif dialect.name == 'mysql': upgrade_mysql() elif dialect.name == 'postgresql': upgrade_postgresql() ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/rating/hash/db/sqlalchemy/api.py0000664000175000017500000005226100000000000024272 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright 2014 Objectif Libre # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
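# NOTE: this module is the SQLAlchemy implementation of the hashmap rating
# database API. A rough usage sketch (values illustrative, error handling
# omitted):
#
#     from cloudkitty.rating.hash.db.sqlalchemy import api as hash_api
#
#     backend = hash_api.get_backend()
#     service = backend.create_service('compute')
#     field = backend.create_field(service.service_id, 'flavor_id')
#     backend.create_mapping(cost='0.5', value='m1.small',
#                            field_id=field.field_id)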
# from oslo_db import exception from oslo_db.sqlalchemy import utils from oslo_utils import uuidutils import sqlalchemy from cloudkitty import db from cloudkitty.rating.hash.db import api from cloudkitty.rating.hash.db.sqlalchemy import migration from cloudkitty.rating.hash.db.sqlalchemy import models def get_backend(): return HashMap() class HashMap(api.HashMap): def get_migration(self): return migration def get_service(self, name=None, uuid=None): with db.session_for_read() as session: try: q = session.query(models.HashMapService) if name: q = q.filter( models.HashMapService.name == name) elif uuid: q = q.filter( models.HashMapService.service_id == uuid) else: raise api.ClientHashMapError( 'You must specify either name or uuid.') res = q.one() return res except sqlalchemy.orm.exc.NoResultFound: raise api.NoSuchService(name=name, uuid=uuid) def get_field(self, uuid=None, service_uuid=None, name=None): with db.session_for_read() as session: try: q = session.query(models.HashMapField) if uuid: q = q.filter( models.HashMapField.field_id == uuid) elif service_uuid and name: q = q.join( models.HashMapField.service) q = q.filter( models.HashMapService.service_id == service_uuid, models.HashMapField.name == name) else: raise api.ClientHashMapError( 'You must specify either a uuid' ' or a service_uuid and a name.') res = q.one() return res except sqlalchemy.orm.exc.NoResultFound: raise api.NoSuchField(uuid) def get_group(self, uuid=None, name=None): with db.session_for_read() as session: try: q = session.query(models.HashMapGroup) if uuid: q = q.filter( models.HashMapGroup.group_id == uuid) if name: q = q.filter( models.HashMapGroup.name == name) res = q.one() return res except sqlalchemy.orm.exc.NoResultFound: raise api.NoSuchGroup(name, uuid) def get_mapping(self, uuid): with db.session_for_read() as session: try: q = session.query(models.HashMapMapping) q = q.filter( models.HashMapMapping.mapping_id == uuid) res = q.one() return res except sqlalchemy.orm.exc.NoResultFound: raise api.NoSuchMapping(uuid) def get_threshold(self, uuid): with db.session_for_read() as session: try: q = session.query(models.HashMapThreshold) q = q.filter( models.HashMapThreshold.threshold_id == uuid) res = q.one() return res except sqlalchemy.orm.exc.NoResultFound: raise api.NoSuchThreshold(uuid) def get_group_from_mapping(self, uuid): with db.session_for_read() as session: try: q = session.query(models.HashMapGroup) q = q.join( models.HashMapGroup.mappings) q = q.filter( models.HashMapMapping.mapping_id == uuid) res = q.one() return res except sqlalchemy.orm.exc.NoResultFound: raise api.MappingHasNoGroup(uuid=uuid) def get_group_from_threshold(self, uuid): with db.session_for_read() as session: try: q = session.query(models.HashMapGroup) q = q.join( models.HashMapGroup.thresholds) q = q.filter( models.HashMapThreshold.threshold_id == uuid) res = q.one() return res except sqlalchemy.orm.exc.NoResultFound: raise api.ThresholdHasNoGroup(uuid=uuid) def list_services(self): with db.session_for_read() as session: q = session.query(models.HashMapService) res = q.values( models.HashMapService.service_id) return [uuid[0] for uuid in res] def list_fields(self, service_uuid): with db.session_for_read() as session: q = session.query(models.HashMapField) q = q.join( models.HashMapField.service) q = q.filter( models.HashMapService.service_id == service_uuid) res = q.values(models.HashMapField.field_id) return [uuid[0] for uuid in res] def list_groups(self): with db.session_for_read() as session: q = 
session.query(models.HashMapGroup) res = q.values( models.HashMapGroup.group_id) return [uuid[0] for uuid in res] def list_mappings(self, service_uuid=None, field_uuid=None, group_uuid=None, no_group=False, **kwargs): with db.session_for_read() as session: q = session.query(models.HashMapMapping) if service_uuid: q = q.join( models.HashMapMapping.service) q = q.filter( models.HashMapService.service_id == service_uuid) elif field_uuid: q = q.join( models.HashMapMapping.field) q = q.filter(models.HashMapField.field_id == field_uuid) elif not service_uuid and not field_uuid and not group_uuid: raise api.ClientHashMapError( 'You must specify either service_uuid,' ' field_uuid or group_uuid.') if 'tenant_uuid' in kwargs: q = q.filter( models.HashMapMapping.tenant_id == kwargs.get( 'tenant_uuid')) if group_uuid: q = q.join( models.HashMapMapping.group) q = q.filter(models.HashMapGroup.group_id == group_uuid) elif no_group: q = q.filter(models.HashMapMapping.group_id == None) # noqa res = q.values( models.HashMapMapping.mapping_id) return [uuid[0] for uuid in res] def list_thresholds(self, service_uuid=None, field_uuid=None, group_uuid=None, no_group=False, **kwargs): with db.session_for_read() as session: q = session.query(models.HashMapThreshold) if service_uuid: q = q.join( models.HashMapThreshold.service) q = q.filter( models.HashMapService.service_id == service_uuid) elif field_uuid: q = q.join( models.HashMapThreshold.field) q = q.filter(models.HashMapField.field_id == field_uuid) elif not service_uuid and not field_uuid and not group_uuid: raise api.ClientHashMapError( 'You must specify either service_uuid,' ' field_uuid or group_uuid.') if 'tenant_uuid' in kwargs: q = q.filter( models.HashMapThreshold.tenant_id == kwargs.get( 'tenant_uuid')) if group_uuid: q = q.join( models.HashMapThreshold.group) q = q.filter(models.HashMapGroup.group_id == group_uuid) elif no_group: q = q.filter(models.HashMapThreshold.group_id == None) # noqa res = q.values( models.HashMapThreshold.threshold_id) return [uuid[0] for uuid in res] def create_service(self, name): try: with db.session_for_write() as session: service_db = models.HashMapService(name=name) service_db.service_id = uuidutils.generate_uuid() session.add(service_db) return service_db except exception.DBDuplicateEntry: service_db = self.get_service(name=name) raise api.ServiceAlreadyExists( service_db.name, service_db.service_id) def create_field(self, service_uuid, name): service_db = self.get_service(uuid=service_uuid) try: with db.session_for_write() as session: field_db = models.HashMapField( service_id=service_db.id, name=name, field_id=uuidutils.generate_uuid()) session.add(field_db) # FIXME(sheeprine): backref are not populated as they used to be. # Querying the item again to get backref. 
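            # (The extra read below is apparently needed because the object
            #  added inside the now-closed ``session_for_write`` block is
            #  detached, so relationship attributes such as ``service`` are
            #  not loaded; get_field() returns a fully populated row.)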
field_db = self.get_field(service_uuid=service_uuid, name=name) except exception.DBDuplicateEntry: field_db = self.get_field(service_uuid=service_uuid, name=name) raise api.FieldAlreadyExists(field_db.name, field_db.field_id) else: return field_db def create_group(self, name): try: with db.session_for_write() as session: group_db = models.HashMapGroup( name=name, group_id=uuidutils.generate_uuid()) session.add(group_db) return group_db except exception.DBDuplicateEntry: group_db = self.get_group(name=name) raise api.GroupAlreadyExists(name, group_db.group_id) def create_mapping(self, cost, map_type='rate', value=None, service_id=None, field_id=None, group_id=None, tenant_id=None): if field_id and service_id: raise api.ClientHashMapError('You can only specify one parent.') elif not service_id and not field_id: raise api.ClientHashMapError('You must specify one parent.') elif value and service_id: raise api.ClientHashMapError( 'You can\'t specify a value' ' and a service_id.') elif not value and field_id: raise api.ClientHashMapError( 'You must specify a value' ' for a field mapping.') field_fk = None if field_id: field_db = self.get_field(uuid=field_id) field_fk = field_db.id service_fk = None if service_id: service_db = self.get_service(uuid=service_id) service_fk = service_db.id group_fk = None if group_id: group_db = self.get_group(uuid=group_id) group_fk = group_db.id try: with db.session_for_write() as session: field_map = models.HashMapMapping( mapping_id=uuidutils.generate_uuid(), value=value, cost=cost, field_id=field_fk, service_id=service_fk, map_type=map_type, tenant_id=tenant_id) if group_fk: field_map.group_id = group_fk session.add(field_map) except exception.DBDuplicateEntry: if field_id: puuid = field_id ptype = 'field' else: puuid = service_id ptype = 'service' raise api.MappingAlreadyExists( value, puuid, ptype, tenant_id=tenant_id) except exception.DBError: raise api.NoSuchType(map_type) # FIXME(sheeprine): backref are not populated as they used to be. # Querying the item again to get backref. field_map = self.get_mapping(field_map.mapping_id) return field_map def create_threshold(self, level, cost, map_type='rate', service_id=None, field_id=None, group_id=None, tenant_id=None): if field_id and service_id: raise api.ClientHashMapError('You can only specify one parent.') elif not service_id and not field_id: raise api.ClientHashMapError('You must specify one parent.') field_fk = None if field_id: field_db = self.get_field(uuid=field_id) field_fk = field_db.id service_fk = None if service_id: service_db = self.get_service(uuid=service_id) service_fk = service_db.id group_fk = None if group_id: group_db = self.get_group(uuid=group_id) group_fk = group_db.id try: with db.session_for_write() as session: threshold_db = models.HashMapThreshold( threshold_id=uuidutils.generate_uuid(), level=level, cost=cost, field_id=field_fk, service_id=service_fk, map_type=map_type, tenant_id=tenant_id) if group_fk: threshold_db.group_id = group_fk session.add(threshold_db) except exception.DBDuplicateEntry: if field_id: puuid = field_id ptype = 'field' else: puuid = service_id ptype = 'service' raise api.ThresholdAlreadyExists(level, puuid, ptype) except exception.DBError: raise api.NoSuchType(map_type) # FIXME(sheeprine): backref are not populated as they used to be. # Querying the item again to get backref. 
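        # Same detached-instance workaround as in create_field() and
        # create_mapping() above. An illustrative call for this method
        # (values hypothetical):
        #
        #     backend.create_threshold(level=10, cost='0.1',
        #                              service_id=service.service_id)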
threshold_db = self.get_threshold(threshold_db.threshold_id) return threshold_db def update_mapping(self, uuid, **kwargs): try: with db.session_for_write() as session: q = session.query(models.HashMapMapping) q = q.filter( models.HashMapMapping.mapping_id == uuid) mapping_db = q.with_for_update().one() if kwargs: # NOTE(sheeprine): We want to check that value is not set # to a None value. if mapping_db.field_id and not kwargs.get('value', 'GOOD'): raise api.ClientHashMapError( 'You must specify a value' ' for a field mapping.') # Resolve FK if 'group_id' in kwargs: group_id = kwargs.pop('group_id') if group_id: group_db = self.get_group(group_id) mapping_db.group_id = group_db.id # Service and Field shouldn't be updated excluded_cols = ['mapping_id', 'service_id', 'field_id'] for col in excluded_cols: if col in kwargs: kwargs.pop(col) for attribute, value in kwargs.items(): if hasattr(mapping_db, attribute): setattr(mapping_db, attribute, value) else: raise api.ClientHashMapError( 'No such attribute: {}'.format( attribute)) else: raise api.ClientHashMapError('No attribute to update.') return mapping_db except exception.DBDuplicateEntry: puuid = uuid ptype = 'Mapping_id' raise api.MappingAlreadyExists( value, puuid, ptype, tenant_id=kwargs.get('tenant_id')) except sqlalchemy.orm.exc.NoResultFound: raise api.NoSuchMapping(uuid) def update_threshold(self, uuid, **kwargs): try: with db.session_for_write() as session: q = session.query(models.HashMapThreshold) q = q.filter( models.HashMapThreshold.threshold_id == uuid) threshold_db = q.with_for_update().one() if kwargs: # Resolve FK if 'group_id' in kwargs: group_id = kwargs.pop('group_id') if group_id: group_db = self.get_group(group_id) threshold_db.group_id = group_db.id # Service and Field shouldn't be updated excluded_cols = ['threshold_id', 'service_id', 'field_id'] for col in excluded_cols: if col in kwargs: kwargs.pop(col) for attribute, value in kwargs.items(): if hasattr(threshold_db, attribute): setattr(threshold_db, attribute, value) else: raise api.ClientHashMapError( 'No such attribute: {}'.format( attribute)) else: raise api.ClientHashMapError('No attribute to update.') return threshold_db except exception.DBDuplicateEntry: puuid = uuid ptype = 'Threshold_id' raise api.ThresholdAlreadyExists( value, puuid, ptype, tenant_id=kwargs.get('tenant_id')) except sqlalchemy.orm.exc.NoResultFound: raise api.NoSuchThreshold(uuid) def delete_service(self, name=None, uuid=None): with db.session_for_write() as session: q = utils.model_query( models.HashMapService, session) if name: q = q.filter(models.HashMapService.name == name) elif uuid: q = q.filter(models.HashMapService.service_id == uuid) else: raise api.ClientHashMapError( 'You must specify either name or uuid.') r = q.delete() if not r: raise api.NoSuchService(name, uuid) def delete_field(self, uuid): with db.session_for_write() as session: q = utils.model_query( models.HashMapField, session) q = q.filter(models.HashMapField.field_id == uuid) r = q.delete() if not r: raise api.NoSuchField(uuid) def delete_group(self, uuid, recurse=True): with db.session_for_write() as session: q = utils.model_query( models.HashMapGroup, session) q = q.filter(models.HashMapGroup.group_id == uuid) try: r = q.with_for_update().one() except sqlalchemy.orm.exc.NoResultFound: raise api.NoSuchGroup(uuid=uuid) if recurse: for mapping in r.mappings: session.delete(mapping) for threshold in r.thresholds: session.delete(threshold) q.delete() def delete_mapping(self, uuid): with db.session_for_write() as session: 
q = utils.model_query( models.HashMapMapping, session) q = q.filter(models.HashMapMapping.mapping_id == uuid) r = q.delete() if not r: raise api.NoSuchMapping(uuid) def delete_threshold(self, uuid): with db.session_for_write() as session: q = utils.model_query( models.HashMapThreshold, session) q = q.filter(models.HashMapThreshold.threshold_id == uuid) r = q.delete() if not r: raise api.NoSuchThreshold(uuid) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/rating/hash/db/sqlalchemy/migration.py0000664000175000017500000000240300000000000025503 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright 2014 Objectif Libre # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # import os from cloudkitty.common.db.alembic import migration ALEMBIC_REPO = os.path.join(os.path.dirname(__file__), 'alembic') def upgrade(revision): config = migration.load_alembic_config(ALEMBIC_REPO) return migration.upgrade(config, revision) def version(): config = migration.load_alembic_config(ALEMBIC_REPO) return migration.version(config) def revision(message, autogenerate): config = migration.load_alembic_config(ALEMBIC_REPO) return migration.revision(config, message, autogenerate) def stamp(revision): config = migration.load_alembic_config(ALEMBIC_REPO) return migration.stamp(config, revision) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/rating/hash/db/sqlalchemy/models.py0000664000175000017500000002360600000000000025005 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright 2014 Objectif Libre # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
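# NOTE: the migration helper module above is what HashMap.get_migration()
# returns; applying the whole hashmap revision chain programmatically is a
# short exercise (sketch):
#
#     from cloudkitty.rating.hash.db.sqlalchemy import migration
#
#     migration.upgrade('head')   # run every revision up to the current head
#     print(migration.version())  # show the revision the database is at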
# from oslo_db.sqlalchemy import models import sqlalchemy from sqlalchemy.ext import declarative from sqlalchemy import orm from sqlalchemy import schema from cloudkitty.common.db import models as ck_models Base = ck_models.get_base() class HashMapBase(models.ModelBase): __table_args__ = {'mysql_charset': "utf8", 'mysql_engine': "InnoDB"} fk_to_resolve = {} def save(self): from cloudkitty import db with db.session_for_write() as session: super(HashMapBase, self).save(session=session) def as_dict(self): d = {} for c in self.__table__.columns: if c.name == 'id': continue d[c.name] = self[c.name] return d def _recursive_resolve(self, path): obj = self for attr in path.split('.'): if hasattr(obj, attr): obj = getattr(obj, attr) else: return None return obj def export_model(self): res = self.as_dict() for fk, mapping in self.fk_to_resolve.items(): res[fk] = self._recursive_resolve(mapping) return res class HashMapService(Base, HashMapBase): """A hashmap service. Used to describe a CloudKitty service such as compute or volume. """ __tablename__ = 'hashmap_services' id = sqlalchemy.Column( sqlalchemy.Integer, primary_key=True) service_id = sqlalchemy.Column( sqlalchemy.String(36), nullable=False, unique=True) name = sqlalchemy.Column( sqlalchemy.String(255), nullable=False, unique=True) fields = orm.relationship( 'HashMapField', backref=orm.backref( 'service', lazy='immediate')) mappings = orm.relationship( 'HashMapMapping', backref=orm.backref( 'service', lazy='immediate')) thresholds = orm.relationship( 'HashMapThreshold', backref=orm.backref( 'service', lazy='immediate')) def __repr__(self): return ('').format( uuid=self.service_id, service=self.name) class HashMapField(Base, HashMapBase): """A hashmap field. Used to describe a service metadata such as flavor_id or image_id for compute. """ __tablename__ = 'hashmap_fields' fk_to_resolve = { 'service_id': 'service.service_id'} @declarative.declared_attr def __table_args__(cls): args = ( schema.UniqueConstraint( 'service_id', 'name', name='uniq_field_per_service'), HashMapBase.__table_args__,) return args id = sqlalchemy.Column( sqlalchemy.Integer, primary_key=True) field_id = sqlalchemy.Column( sqlalchemy.String(36), nullable=False, unique=True) name = sqlalchemy.Column( sqlalchemy.String(255), nullable=False) service_id = sqlalchemy.Column( sqlalchemy.Integer, sqlalchemy.ForeignKey( 'hashmap_services.id', ondelete='CASCADE'), nullable=False) mappings = orm.relationship( 'HashMapMapping', backref=orm.backref( 'field', lazy='immediate')) thresholds = orm.relationship( 'HashMapThreshold', backref=orm.backref( 'field', lazy='immediate')) def __repr__(self): return ('').format( uuid=self.field_id, field=self.name) class HashMapGroup(Base, HashMapBase): """A grouping of hashmap calculations. Used to group multiple mappings or thresholds into a single calculation. """ __tablename__ = 'hashmap_groups' id = sqlalchemy.Column( sqlalchemy.Integer, primary_key=True) group_id = sqlalchemy.Column( sqlalchemy.String(36), nullable=False, unique=True) name = sqlalchemy.Column( sqlalchemy.String(255), nullable=False, unique=True) mappings = orm.relationship( 'HashMapMapping', backref=orm.backref( 'group', lazy='immediate')) thresholds = orm.relationship( 'HashMapThreshold', backref=orm.backref( 'group', lazy='immediate')) def __repr__(self): return ('').format( uuid=self.group_id, name=self.name) class HashMapMapping(Base, HashMapBase): """A mapping between a field or service, a value and a type. Used to model final equation. 
""" __tablename__ = 'hashmap_mappings' fk_to_resolve = { 'service_id': 'service.service_id', 'field_id': 'field.field_id', 'group_id': 'group.group_id'} @declarative.declared_attr def __table_args__(cls): args = ( schema.UniqueConstraint( 'value', 'field_id', 'tenant_id', name='uniq_field_mapping'), schema.UniqueConstraint( 'value', 'service_id', 'tenant_id', name='uniq_service_mapping'), HashMapBase.__table_args__,) return args id = sqlalchemy.Column( sqlalchemy.Integer, primary_key=True) mapping_id = sqlalchemy.Column( sqlalchemy.String(36), nullable=False, unique=True) value = sqlalchemy.Column( sqlalchemy.String(255), nullable=True) cost = sqlalchemy.Column( sqlalchemy.Numeric(40, 28), nullable=False) map_type = sqlalchemy.Column( sqlalchemy.Enum( 'flat', 'rate', name='enum_map_type', create_constraint=True), nullable=False) service_id = sqlalchemy.Column( sqlalchemy.Integer, sqlalchemy.ForeignKey( 'hashmap_services.id', ondelete='CASCADE'), nullable=True) field_id = sqlalchemy.Column( sqlalchemy.Integer, sqlalchemy.ForeignKey( 'hashmap_fields.id', ondelete='CASCADE'), nullable=True) group_id = sqlalchemy.Column( sqlalchemy.Integer, sqlalchemy.ForeignKey( 'hashmap_groups.id', ondelete='SET NULL'), nullable=True) tenant_id = sqlalchemy.Column( sqlalchemy.String(255), nullable=True) def __repr__(self): return ('').format( uuid=self.mapping_id, map_type=self.map_type, value=self.value, cost=self.cost, tenant=self.tenant_id) class HashMapThreshold(Base, HashMapBase): """A threshold matching a service or a field with a level and a type. Used to model final equation. """ __tablename__ = 'hashmap_thresholds' fk_to_resolve = { 'service_id': 'service.service_id', 'field_id': 'field.field_id', 'group_id': 'group.group_id'} @declarative.declared_attr def __table_args__(cls): args = ( schema.UniqueConstraint( 'level', 'field_id', 'tenant_id', name='uniq_field_threshold'), schema.UniqueConstraint( 'level', 'service_id', 'tenant_id', name='uniq_service_threshold'), HashMapBase.__table_args__,) return args id = sqlalchemy.Column( sqlalchemy.Integer, primary_key=True) threshold_id = sqlalchemy.Column( sqlalchemy.String(36), nullable=False, unique=True) level = sqlalchemy.Column( sqlalchemy.Numeric(20, 8), nullable=True) cost = sqlalchemy.Column( sqlalchemy.Numeric(40, 28), nullable=False) map_type = sqlalchemy.Column( sqlalchemy.Enum( 'flat', 'rate', name='enum_hashmap_type', create_constraint=True), nullable=False) service_id = sqlalchemy.Column( sqlalchemy.Integer, sqlalchemy.ForeignKey( 'hashmap_services.id', ondelete='CASCADE'), nullable=True) field_id = sqlalchemy.Column( sqlalchemy.Integer, sqlalchemy.ForeignKey( 'hashmap_fields.id', ondelete='CASCADE'), nullable=True) group_id = sqlalchemy.Column( sqlalchemy.Integer, sqlalchemy.ForeignKey( 'hashmap_groups.id', ondelete='SET NULL'), nullable=True) tenant_id = sqlalchemy.Column( sqlalchemy.String(255), nullable=True) def __repr__(self): return ('').format( uuid=self.threshold_id, map_type=self.map_type, level=self.level, cost=self.cost, tenant=self.tenant_id) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/rating/noop.py0000664000175000017500000000206300000000000021015 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright 2014 Objectif Libre # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # from cloudkitty import rating class Noop(rating.RatingProcessorBase): module_name = "noop" description = 'Dummy test module.' @property def enabled(self): """Check if the module is enabled :returns: bool if module is enabled """ return True @property def priority(self): return 1 def reload_config(self): pass def process(self, data): return data ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1727866639.2474866 cloudkitty-21.0.0/cloudkitty/rating/pyscripts/0000775000175000017500000000000000000000000021527 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/rating/pyscripts/__init__.py0000664000175000017500000001011500000000000023636 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright 2015 Objectif Libre # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # from cloudkitty import dataframe from cloudkitty import rating from cloudkitty.rating.pyscripts.controllers import root as root_api from cloudkitty.rating.pyscripts.db import api as pyscripts_db_api from oslo_log import log as logging LOG = logging.getLogger(__name__) class PyScripts(rating.RatingProcessorBase): """PyScripts rating module. PyScripts is a module made to execute custom made python scripts to create rating policies. """ module_name = 'pyscripts' description = 'PyScripts rating module.' 
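    # NOTE: like the Noop module above, this class implements the
    # RatingProcessorBase interface, but instead of returning the data
    # untouched it runs every stored script against each dataframe. Scripts
    # are executed with the frame bound to the name ``data`` (a mutable dict,
    # see start_script() below) and whatever they leave in ``data`` becomes
    # the rated output. A hypothetical user script, assuming the layout
    # produced by dataframe.DataFrame.as_dict(mutable=True):
    #
    #     for points in data['usage'].values():
    #         for point in points:
    #             point['rating'] = {'price': float(point['vol']['qty']) * 0.1}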
hot_config = True config_controller = root_api.PyScriptsConfigController db_api = pyscripts_db_api.get_instance() def __init__(self, tenant_id=None): # current scripts loaded to memory self._scripts = {} self.load_scripts_in_memory() super(PyScripts, self).__init__(tenant_id) def load_scripts_in_memory(self): db = pyscripts_db_api.get_instance() scripts_uuid_list = db.list_scripts() self.purge_removed_scripts(scripts_uuid_list) # Load or update script for script_uuid in scripts_uuid_list: script_db = db.get_script(uuid=script_uuid) name = script_db.name checksum = script_db.checksum if name not in self._scripts: self._scripts[script_uuid] = {} script = self._scripts[script_uuid] # NOTE(sheeprine): We're doing this the easy way, we might want to # store the context and call functions in future if script.get(checksum, '') != checksum: code = compile( script_db.data, ''.format(name=name), 'exec') script.update({ 'name': name, 'code': code, 'checksum': checksum}) def purge_removed_scripts(self, scripts_uuid_list): scripts_to_purge = self.get_all_script_to_remove(scripts_uuid_list) self.remove_purged_scripts(scripts_to_purge) def get_all_script_to_remove(self, new_scripts_uuid_list): scripts_to_purge = [] for script_uuid in self._scripts.keys(): if script_uuid not in new_scripts_uuid_list: scripts_to_purge.append(script_uuid) return scripts_to_purge def remove_purged_scripts(self, scripts_to_purge): for script_uuid in scripts_to_purge: LOG.info("Removing script [%s] from the script list to execute.", self._scripts[script_uuid]) del self._scripts[script_uuid] def reload_config(self): """Reload the module's configuration. """ LOG.debug("Executing the reload of configurations.") self.load_scripts_in_memory() LOG.debug("Configurations reloaded.") def start_script(self, code, data): context = {'data': data} exec(code, context) # nosec return context['data'] def process(self, data): for script in self._scripts.values(): data_dict = data.as_dict(mutable=True) LOG.debug("Executing pyscript [%s] with data [%s].", script, data_dict) data_output = self.start_script(script['code'], data_dict) LOG.debug("Result [%s] for processing with pyscript [%s] with " "data [%s].", data_output, script, data_dict) data = dataframe.DataFrame.from_dict(data_output) return data ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1727866639.2474866 cloudkitty-21.0.0/cloudkitty/rating/pyscripts/controllers/0000775000175000017500000000000000000000000024075 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/rating/pyscripts/controllers/__init__.py0000664000175000017500000000000000000000000026174 0ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/rating/pyscripts/controllers/root.py0000664000175000017500000000163600000000000025440 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright 2015 Objectif Libre # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
See the # License for the specific language governing permissions and limitations # under the License. # from cloudkitty import rating from cloudkitty.rating.pyscripts.controllers import script as script_api class PyScriptsConfigController(rating.RatingRestControllerBase): """Controller exposing all management sub controllers. """ scripts = script_api.PyScriptsScriptsController() ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/rating/pyscripts/controllers/script.py0000664000175000017500000001146700000000000025764 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright 2015 Objectif Libre # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # import pecan from wsme import types as wtypes import wsmeext.pecan as wsme_pecan from cloudkitty.api.v1 import types as ck_types from cloudkitty import rating from cloudkitty.rating.pyscripts.datamodels import script as script_models from cloudkitty.rating.pyscripts.db import api as db_api class PyScriptsScriptsController(rating.RatingRestControllerBase): """Controller responsible of scripts management. """ def normalize_data(self, data): """Translate data to binary format if needed. :param data: Data to convert to binary type. """ if data == wtypes.Unset: return '' if not isinstance(data, bytes): data = data.encode('utf-8') return data @wsme_pecan.wsexpose(script_models.ScriptCollection, bool) def get_all(self, no_data=False): """Get the script list :param no_data: Set to True to remove script data from output. :return: List of every scripts. """ pyscripts = db_api.get_instance() script_list = [] script_uuid_list = pyscripts.list_scripts() for script_uuid in script_uuid_list: script_db = pyscripts.get_script(uuid=script_uuid) script = script_db.export_model() if no_data: del script['data'] script_list.append(script_models.Script( **script)) res = script_models.ScriptCollection(scripts=script_list) return res @wsme_pecan.wsexpose(script_models.Script, ck_types.UuidType()) def get_one(self, script_id): """Return a script. :param script_id: UUID of the script to filter on. """ pyscripts = db_api.get_instance() try: script_db = pyscripts.get_script(uuid=script_id) return script_models.Script(**script_db.export_model()) except db_api.NoSuchScript as e: pecan.abort(404, e.args[0]) @wsme_pecan.wsexpose(script_models.Script, body=script_models.Script, status_code=201) def post(self, script_data): """Create pyscripts script. :param script_data: Informations about the script to create. 
""" pyscripts = db_api.get_instance() try: data = self.normalize_data(script_data.data) script_db = pyscripts.create_script(script_data.name, data) pecan.response.location = pecan.request.path_url if pecan.response.location[-1] != '/': pecan.response.location += '/' pecan.response.location += script_db.script_id return script_models.Script( **script_db.export_model()) except db_api.ScriptAlreadyExists as e: pecan.abort(409, e.args[0]) @wsme_pecan.wsexpose(script_models.Script, ck_types.UuidType(), body=script_models.Script, status_code=201) def put(self, script_id, script_data): """Update pyscripts script. :param script_id: UUID of the script to update. :param script_data: Script data to update. """ pyscripts = db_api.get_instance() try: data = self.normalize_data(script_data.data) script_db = pyscripts.update_script(script_id, name=script_data.name, data=data) pecan.response.location = pecan.request.path_url if pecan.response.location[-1] != '/': pecan.response.location += '/' pecan.response.location += script_db.script_id return script_models.Script( **script_db.export_model()) except db_api.NoSuchScript as e: pecan.abort(404, e.args[0]) @wsme_pecan.wsexpose(None, ck_types.UuidType(), status_code=204) def delete(self, script_id): """Delete the script. :param script_id: UUID of the script to delete. """ pyscripts = db_api.get_instance() try: pyscripts.delete_script(uuid=script_id) except db_api.NoSuchScript as e: pecan.abort(404, e.args[0]) ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1727866639.2474866 cloudkitty-21.0.0/cloudkitty/rating/pyscripts/datamodels/0000775000175000017500000000000000000000000023644 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/rating/pyscripts/datamodels/__init__.py0000664000175000017500000000000000000000000025743 0ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/rating/pyscripts/datamodels/script.py0000664000175000017500000000347600000000000025534 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright 2015 Objectif Libre # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # from wsme import types as wtypes from cloudkitty.api.v1 import types as ck_types class Script(wtypes.Base): """Type describing a script. 
""" script_id = wtypes.wsattr(ck_types.UuidType(), mandatory=False) """UUID of the script.""" name = wtypes.wsattr(wtypes.text, mandatory=True) """Name of the script.""" data = wtypes.wsattr(wtypes.text, mandatory=False) """Data of the script.""" checksum = wtypes.wsattr(wtypes.text, mandatory=False, readonly=True) """Checksum of the script data.""" @classmethod def sample(cls): sample = cls(script_id='bc05108d-f515-4984-8077-de319cbf35aa', name='policy1', data='return 0', checksum='cf83e1357eefb8bdf1542850d66d8007d620e4050b5715d' 'c83f4a921d36ce9ce47d0d13c5d85f2b0ff8318d2877eec' '2f63b931bd47417a81a538327af927da3e') return sample class ScriptCollection(wtypes.Base): """Type describing a list of scripts. """ scripts = [Script] """List of scripts.""" @classmethod def sample(cls): sample = Script.sample() return cls(scripts=[sample]) ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1727866639.2474866 cloudkitty-21.0.0/cloudkitty/rating/pyscripts/db/0000775000175000017500000000000000000000000022114 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/rating/pyscripts/db/__init__.py0000664000175000017500000000000000000000000024213 0ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/rating/pyscripts/db/api.py0000664000175000017500000000556600000000000023253 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright 2015 Objectif Libre # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # import abc from oslo_config import cfg from oslo_db import api as db_api from cloudkitty.i18n import _ _BACKEND_MAPPING = { 'sqlalchemy': 'cloudkitty.rating.pyscripts.db.sqlalchemy.api'} IMPL = db_api.DBAPI.from_config(cfg.CONF, backend_mapping=_BACKEND_MAPPING, lazy=True) def get_instance(): """Return a DB API instance.""" return IMPL class NoSuchScript(Exception): """Raised when the script doesn't exist.""" def __init__(self, name=None, uuid=None): super(NoSuchScript, self).__init__( _("No such script: %(name)s (UUID: %(uuid)s)") % {'name': name, 'uuid': uuid}) self.name = name self.uuid = uuid class ScriptAlreadyExists(Exception): """Raised when the script already exists.""" def __init__(self, name, uuid): super(ScriptAlreadyExists, self).__init__( _("Script %(name)s already exists (UUID: %(uuid)s)") % {'name': name, 'uuid': uuid}) self.name = name self.uuid = uuid class PyScripts(object, metaclass=abc.ABCMeta): """Base class for pyscripts configuration.""" @abc.abstractmethod def get_migration(self): """Return a migrate manager. """ @abc.abstractmethod def get_script(self, name=None, uuid=None): """Return a script object. :param name: Filter on a script name. :param uuid: The uuid of the script to get. """ @abc.abstractmethod def list_scripts(self): """Return a UUID list of every scripts available. 
""" @abc.abstractmethod def create_script(self, name, data): """Create a new script. :param name: Name of the script to create. :param data: Content of the python script. """ @abc.abstractmethod def update_script(self, uuid, **kwargs): """Update a script. :param uuid UUID of the script to modify. :param data: Script data. """ @abc.abstractmethod def delete_script(self, name=None, uuid=None): """Delete a list. :param name: Name of the script to delete. :param uuid: UUID of the script to delete. """ ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1727866639.2514865 cloudkitty-21.0.0/cloudkitty/rating/pyscripts/db/sqlalchemy/0000775000175000017500000000000000000000000024256 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/rating/pyscripts/db/sqlalchemy/__init__.py0000664000175000017500000000000000000000000026355 0ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1727866639.2514865 cloudkitty-21.0.0/cloudkitty/rating/pyscripts/db/sqlalchemy/alembic/0000775000175000017500000000000000000000000025652 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/rating/pyscripts/db/sqlalchemy/alembic/__init__.py0000664000175000017500000000000000000000000027751 0ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/rating/pyscripts/db/sqlalchemy/alembic/env.py0000664000175000017500000000156200000000000027020 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright 2014 Objectif Libre # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # from cloudkitty.common.db.alembic import env # noqa from cloudkitty.rating.pyscripts.db.sqlalchemy import models target_metadata = models.Base.metadata version_table = 'pyscripts_alembic' env.run_migrations_online(target_metadata, version_table) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/rating/pyscripts/db/sqlalchemy/alembic/script.py.mako0000664000175000017500000000172300000000000030461 0ustar00zuulzuul00000000000000# Copyright ${create_date.year} OpenStack Foundation # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
# """${message} Revision ID: ${up_revision} Revises: ${down_revision} Create Date: ${create_date} """ # revision identifiers, used by Alembic. revision = ${repr(up_revision)} down_revision = ${repr(down_revision)} from alembic import op import sqlalchemy as sa ${imports if imports else ""} def upgrade(): ${upgrades if upgrades else "pass"} ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1727866639.2514865 cloudkitty-21.0.0/cloudkitty/rating/pyscripts/db/sqlalchemy/alembic/versions/0000775000175000017500000000000000000000000027522 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000021600000000000011454 xustar0000000000000000120 path=cloudkitty-21.0.0/cloudkitty/rating/pyscripts/db/sqlalchemy/alembic/versions/4f9efa4601c0_initial_migration.py 22 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/rating/pyscripts/db/sqlalchemy/alembic/versions/4f9efa4601c0_initial_mi0000664000175000017500000000255400000000000033465 0ustar00zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Initial migration. Revision ID: 4f9efa4601c0 Revises: None Create Date: 2015-07-30 12:46:32.998770 """ # revision identifiers, used by Alembic. revision = '4f9efa4601c0' down_revision = None from alembic import op # noqa: E402 import sqlalchemy as sa # noqa: E402 def upgrade(): op.create_table( 'pyscripts_scripts', sa.Column('id', sa.Integer(), nullable=False), sa.Column('script_id', sa.String(length=36), nullable=False), sa.Column('name', sa.String(length=255), nullable=False), sa.Column('data', sa.LargeBinary(), nullable=False), sa.Column('checksum', sa.String(length=40), nullable=False), sa.PrimaryKeyConstraint('id'), sa.UniqueConstraint('name'), sa.UniqueConstraint('script_id'), mysql_charset='utf8', mysql_engine='InnoDB') ././@PaxHeader0000000000000000000000000000022500000000000011454 xustar0000000000000000127 path=cloudkitty-21.0.0/cloudkitty/rating/pyscripts/db/sqlalchemy/alembic/versions/75c205f6f1a2_move_from_sha1_to_sha512.py 22 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/rating/pyscripts/db/sqlalchemy/alembic/versions/75c205f6f1a2_move_from_0000664000175000017500000000260400000000000033412 0ustar00zuulzuul00000000000000# Copyright 2019 OpenStack Foundation # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # """move from sha1 to sha512 Revision ID: 75c205f6f1a2 Revises: 4f9efa4601c0 Create Date: 2019-03-25 13:53:23.398755 """ # revision identifiers, used by Alembic. 
revision = '75c205f6f1a2' down_revision = '4f9efa4601c0' from alembic import op # noqa: E402 import sqlalchemy as sa # noqa: E402 def upgrade(): with op.batch_alter_table('pyscripts_scripts') as batch_op: batch_op.alter_column('checksum', existing_type=sa.VARCHAR(length=40), type_=sa.String(length=128)) def downgrade(): with op.batch_alter_table('pyscripts_scripts') as batch_op: batch_op.alter_column('checksum', existing_type=sa.String(length=128), type_=sa.VARCHAR(length=40)) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/rating/pyscripts/db/sqlalchemy/api.py0000664000175000017500000001020600000000000025400 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright 2015 Objectif Libre # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # from oslo_db import exception from oslo_db.sqlalchemy import utils from oslo_utils import uuidutils import sqlalchemy from cloudkitty import db from cloudkitty.rating.pyscripts.db import api from cloudkitty.rating.pyscripts.db.sqlalchemy import migration from cloudkitty.rating.pyscripts.db.sqlalchemy import models def get_backend(): return PyScripts() class PyScripts(api.PyScripts): def get_migration(self): return migration def get_script(self, name=None, uuid=None): with db.session_for_read() as session: try: q = session.query(models.PyScriptsScript) if name: q = q.filter( models.PyScriptsScript.name == name) elif uuid: q = q.filter( models.PyScriptsScript.script_id == uuid) else: raise ValueError('You must specify either name or uuid.') res = q.one() return res except sqlalchemy.orm.exc.NoResultFound: raise api.NoSuchScript(name=name, uuid=uuid) def list_scripts(self): with db.session_for_read() as session: q = session.query(models.PyScriptsScript) res = q.values( models.PyScriptsScript.script_id) return [uuid[0] for uuid in res] def create_script(self, name, data): try: with db.session_for_write() as session: script_db = models.PyScriptsScript(name=name) script_db.data = data script_db.script_id = uuidutils.generate_uuid() session.add(script_db) return script_db except exception.DBDuplicateEntry: script_db = self.get_script(name=name) raise api.ScriptAlreadyExists( script_db.name, script_db.script_id) def update_script(self, uuid, **kwargs): try: with db.session_for_write() as session: q = session.query(models.PyScriptsScript) q = q.filter( models.PyScriptsScript.script_id == uuid ) script_db = q.with_for_update().one() if kwargs: excluded_cols = ['script_id'] for col in excluded_cols: if col in kwargs: kwargs.pop(col) for attribute, value in kwargs.items(): if hasattr(script_db, attribute): setattr(script_db, attribute, value) else: raise ValueError('No such attribute: {}'.format( attribute)) else: raise ValueError('No attribute to update.') return script_db except sqlalchemy.orm.exc.NoResultFound: raise api.NoSuchScript(uuid=uuid) def delete_script(self, name=None, uuid=None): with db.session_for_write() as session: q = utils.model_query( models.PyScriptsScript, session) if 
name: q = q.filter(models.PyScriptsScript.name == name) elif uuid: q = q.filter(models.PyScriptsScript.script_id == uuid) else: raise ValueError('You must specify either name or uuid.') r = q.delete() if not r: raise api.NoSuchScript(uuid=uuid) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/rating/pyscripts/db/sqlalchemy/migration.py0000664000175000017500000000240300000000000026620 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright 2014 Objectif Libre # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # import os from cloudkitty.common.db.alembic import migration ALEMBIC_REPO = os.path.join(os.path.dirname(__file__), 'alembic') def upgrade(revision): config = migration.load_alembic_config(ALEMBIC_REPO) return migration.upgrade(config, revision) def version(): config = migration.load_alembic_config(ALEMBIC_REPO) return migration.version(config) def revision(message, autogenerate): config = migration.load_alembic_config(ALEMBIC_REPO) return migration.revision(config, message, autogenerate) def stamp(revision): config = migration.load_alembic_config(ALEMBIC_REPO) return migration.stamp(config, revision) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/rating/pyscripts/db/sqlalchemy/models.py0000664000175000017500000000606400000000000026121 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright 2015 Objectif Libre # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # import hashlib import zlib from oslo_db.sqlalchemy import models import sqlalchemy from sqlalchemy.ext import declarative from sqlalchemy.ext import hybrid Base = declarative.declarative_base() class PyScriptsBase(models.ModelBase): __table_args__ = {'mysql_charset': "utf8", 'mysql_engine': "InnoDB"} fk_to_resolve = {} def save(self): from cloudkitty import db with db.session_for_write() as session: super(PyScriptsBase, self).save(session=session) def as_dict(self): d = {} for c in self.__table__.columns: if c.name == 'id': continue d[c.name] = self[c.name] return d def _recursive_resolve(self, path): obj = self for attr in path.split('.'): if hasattr(obj, attr): obj = getattr(obj, attr) else: return None return obj def export_model(self): res = self.as_dict() for fk, mapping in self.fk_to_resolve.items(): res[fk] = self._recursive_resolve(mapping) return res class PyScriptsScript(Base, PyScriptsBase): """A PyScripts entry. 
""" __tablename__ = 'pyscripts_scripts' id = sqlalchemy.Column(sqlalchemy.Integer, primary_key=True) script_id = sqlalchemy.Column(sqlalchemy.String(36), nullable=False, unique=True) name = sqlalchemy.Column( sqlalchemy.String(255), nullable=False, unique=True) _data = sqlalchemy.Column('data', sqlalchemy.LargeBinary(), nullable=False) _checksum = sqlalchemy.Column('checksum', sqlalchemy.String(128), nullable=False) @hybrid.hybrid_property def data(self): udata = zlib.decompress(self._data) return udata @data.setter def data(self, value): sha_check = hashlib.sha512() sha_check.update(value) self._checksum = sha_check.hexdigest() self._data = zlib.compress(value) @hybrid.hybrid_property def checksum(self): return self._checksum def __repr__(self): return ('').format( uuid=self.script_id, name=self.name) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/service.py0000664000175000017500000000326300000000000020221 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright 2014 Objectif Libre # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # import socket import sys from oslo_config import cfg import oslo_i18n from oslo_log import log from cloudkitty.common import defaults from cloudkitty import messaging from cloudkitty import version service_opts = [ cfg.StrOpt('host', default=socket.getfqdn(), sample_default='', help='Name of this node. This can be an opaque identifier. ' 'It is not necessarily a hostname, FQDN, or IP address. ' 'However, the node name must be valid within an AMQP key.') ] cfg.CONF.register_opts(service_opts) def prepare_service(argv=None, config_files=None): oslo_i18n.enable_lazy() log.register_options(cfg.CONF) log.set_defaults() defaults.set_cors_middleware_defaults() if argv is None: argv = sys.argv cfg.CONF(argv[1:], project='cloudkitty', validate_default_values=True, version=version.version_info.version_string(), default_config_files=config_files) log.setup(cfg.CONF, 'cloudkitty') messaging.setup() ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/state.py0000664000175000017500000000731300000000000017701 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright 2014 Objectif Libre # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
# from cloudkitty.db import api from cloudkitty.utils import json class StateManager(object): def __init__(self, state_backend, state_basepath, user_id, report_type, distributed=False): self._backend = state_backend self._basepath = state_basepath self._uid = user_id self._type = report_type self._distributed = distributed # States self._ts = None self._metadata = {} # Load states self._load() def _gen_filename(self): # FIXME(sheeprine): Basepath can't be enforced at the moment filename = '{0}_{1}.state'.format(self._type, self._uid) return filename def _open(self, mode='rb'): filename = self._gen_filename() state_file = self._backend(filename, mode) return state_file def _load(self): try: state_file = self._open() raw_data = state_file.read() if raw_data: state_data = json.loads(raw_data) self._ts = state_data['timestamp'] self._metadata = state_data['metadata'] state_file.close() except IOError: pass def _update(self): state_file = self._open('wb') state_data = {'timestamp': self._ts, 'metadata': self._metadata} state_file.write(json.dumps(state_data)) state_file.close() def set_state(self, timestamp): """Set the current state's timestamp.""" if self._distributed: self._load() self._ts = timestamp self._update() def get_state(self): """Get the state timestamp.""" if self._distributed: self._load() return self._ts def set_metadata(self, metadata): """Set metadata attached to the state.""" if self._distributed: self._load() self._metadata = metadata self._update() def get_metadata(self): """Get metadata attached to the state.""" if self._distributed: self._load() return self._metadata class DBStateManager(object): def __init__(self, user_id, report_type, distributed=False): self._state_name = self._gen_name(report_type, user_id) self._distributed = distributed self._db = api.get_instance().get_state() def _gen_name(self, state_type, uid): name = '{0}_{1}'.format(state_type, uid) return name def get_state(self): """Get the state timestamp.""" return self._db.get_state(self._state_name) def set_state(self, timestamp): """Set the current state's timestamp.""" self._db.set_state(self._state_name, timestamp) def get_metadata(self): """Get metadata attached to the state.""" data = self._db.get_metadata(self._state_name) if data: return json.loads(data) def set_metadata(self, metadata): """Set metadata attached to the state.""" self._db.set_metadata(self._state_name, json.dumps(metadata)) ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1727866639.2514865 cloudkitty-21.0.0/cloudkitty/storage/0000775000175000017500000000000000000000000017647 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/storage/__init__.py0000664000175000017500000001555400000000000021772 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright 2018 Objectif Libre # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
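# ---------------------------------------------------------------------------
# Illustrative sketch (not part of the original tree): the on-disk layout
# used by StateManager above. _gen_filename() builds
# '<report_type>_<uid>.state' and _update() serializes a small JSON document
# holding the timestamp and the attached metadata. The report type, user id
# and values below are hypothetical.
# ---------------------------------------------------------------------------
import json

filename = '{0}_{1}.state'.format('billing', 'tenant-uuid')
state_data = {'timestamp': '2015-07-30T12:46:32',
              'metadata': {'scope': 'tenant-uuid'}}
payload = json.dumps(state_data)      # written by _update()
restored = json.loads(payload)        # read back by _load()
assert restored['metadata'] == state_data['metadata']
# ---------------------------------------------------------------------------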
# import functools from oslo_config import cfg from oslo_log import log as logging from stevedore import driver from cloudkitty import dataframe from cloudkitty.storage import v2 as storage_v2 from cloudkitty.utils import tz as tzutils LOG = logging.getLogger(__name__) storage_opts = [ cfg.StrOpt('backend', default='influxdb', help='Name of the storage backend driver.'), cfg.IntOpt('version', min=1, max=2, default=2, help='Storage version to use.'), ] CONF = cfg.CONF CONF.import_opt('period', 'cloudkitty.collector', 'collect') CONF.register_opts(storage_opts, 'storage') class NoTimeFrame(Exception): """Raised when there is no time frame available.""" def __init__(self): super(NoTimeFrame, self).__init__( "No time frame available") def _get_storage_instance(storage_args, storage_namespace, backend=None): backend = backend or cfg.CONF.storage.backend return driver.DriverManager( storage_namespace, backend, invoke_on_load=True, invoke_kwds=storage_args ).driver class V1StorageAdapter(storage_v2.BaseStorage): def __init__(self, storage_args, storage_namespace, backend=None): self.storage = _get_storage_instance( storage_args, storage_namespace, backend=backend) self._localize_dataframes = functools.partial( self.__update_frames_timestamps, tzutils.utc_to_local) self._make_dataframes_naive = functools.partial( self.__update_frames_timestamps, tzutils.local_to_utc, naive=True) def init(self): return self.storage.init() @staticmethod def __update_frames_timestamps(func, frames, **kwargs): for frame in frames: start = frame.start end = frame.end if start: frame.start = func(start, **kwargs) if end: frame.end = func(end, **kwargs) def push(self, dataframes, scope_id=None): if dataframes: self._make_dataframes_naive(dataframes) self.storage.append( [d.as_dict(mutable=True, legacy=True) for d in dataframes], scope_id) self.storage.commit(scope_id) @staticmethod def _check_metric_types(metric_types): if isinstance(metric_types, list): return metric_types[0] return metric_types def retrieve(self, begin=None, end=None, filters=None, metric_types=None, offset=0, limit=100, paginate=True): tenant_id = filters.get('project_id') if filters else None metric_types = self._check_metric_types(metric_types) frames = self.storage.get_time_frame( tzutils.local_to_utc(begin, naive=True) if begin else None, tzutils.local_to_utc(end, naive=True) if end else None, res_type=metric_types, tenant_id=tenant_id) frames = [dataframe.DataFrame.from_dict(frame, legacy=True) for frame in frames] self._localize_dataframes(frames) return { 'total': len(frames), 'dataframes': frames, } @staticmethod def _localize_total(iterable): for elem in iterable: begin = elem['begin'] end = elem['end'] if begin: elem['begin'] = tzutils.utc_to_local(begin) if end: elem['end'] = tzutils.utc_to_local(end) def total(self, **arguments): filters = arguments.pop('filters', None) if filters: tenant_id = filters.get('project_id') arguments['tenant_id'] = tenant_id else: tenant_id = None groupby = arguments.get('groupby') storage_gby = self.get_storage_groupby(groupby) metric_types = arguments.pop('metric_types', None) if metric_types: metric_types = self._check_metric_types(metric_types) arguments['service'] = metric_types arguments['begin'] = tzutils.local_to_utc( arguments['begin'], naive=True) arguments['end'] = tzutils.local_to_utc( arguments['end'], naive=True) arguments['groupby'] = storage_gby total = self.storage.get_total(**arguments) for t in total: if t.get('tenant_id') is None: t['tenant_id'] = tenant_id if t.get('rate') is None: 
t['rate'] = float(0) if groupby and 'type' in groupby: t['type'] = t.get('res_type') else: t['type'] = None self._localize_total(total) return { 'total': len(total), 'results': total, } @staticmethod def get_storage_groupby(groupby): storage_gby = [] if groupby: for elem in set(groupby): if elem == 'type': storage_gby.append('res_type') elif elem == 'project_id': storage_gby.append('tenant_id') else: LOG.warning("The groupby [%s] is not supported by MySQL " "storage backend.", elem) return ','.join(storage_gby) if storage_gby else None def get_tenants(self, begin, end): return self.storage.get_tenants(begin, end) def get_state(self, tenant_id=None): return self.storage.get_state(tenant_id) def delete(self, begin=None, end=None, filters=None): LOG.warning('Calling unsupported "delete" method on v1 storage.') def get_storage(**kwargs): storage_args = { 'period': CONF.collect.period, } backend = kwargs.pop('backend', None) storage_args.update(kwargs) version = kwargs.pop('version', None) or cfg.CONF.storage.version if int(version) > 1: LOG.warning('V2 Storage is in beta. Its API is considered stable but ' 'its implementation may still evolve.') storage_namespace = 'cloudkitty.storage.v{}.backends'.format(version) if version == 1: return V1StorageAdapter( storage_args, storage_namespace, backend=backend) return _get_storage_instance( storage_args, storage_namespace, backend=backend) ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1727866639.2514865 cloudkitty-21.0.0/cloudkitty/storage/v1/0000775000175000017500000000000000000000000020175 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/storage/v1/__init__.py0000664000175000017500000001517700000000000022321 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright 2014 Objectif Libre # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # import abc from datetime import timedelta from oslo_config import cfg from oslo_log import log as logging LOG = logging.getLogger(__name__) CONF = cfg.CONF class BaseStorage(object, metaclass=abc.ABCMeta): """Base Storage class: Handle incoming data from the global orchestrator, and store them. """ def __init__(self, **kwargs): self._period = kwargs.get('period') self._collector = kwargs.get('collector') # State vars self.usage_start_dt = {} self.usage_end_dt = {} self._has_data = {} @staticmethod def init(): """Initialize storage backend. Can be used to create DB schema on first start. """ pass def _filter_period(self, json_data): """Detect the best usage period to extract. Removes the usage from the json data and returns it. :param json_data: Data to filter. 
""" candidate = None candidate_idx = 0 for idx, usage in enumerate(json_data): usage_ts = usage['period']['begin'] if candidate is None or usage_ts < candidate: candidate = usage_ts candidate_idx = idx if candidate: return candidate, json_data.pop(candidate_idx)['usage'] def _pre_commit(self, tenant_id): """Called before every commit. :param tenant_id: tenant_id which information must be committed. """ @abc.abstractmethod def _commit(self, tenant_id): """Push data to the storage backend. :param tenant_id: tenant_id which information must be committed. """ def _post_commit(self, tenant_id): """Called after every commit. :param tenant_id: tenant_id which information must be committed. """ if tenant_id in self._has_data: del self._has_data[tenant_id] self._clear_usage_info(tenant_id) @abc.abstractmethod def _dispatch(self, data, tenant_id): """Process rated data. :param data: The rated data frames. :param tenant_id: tenant_id which data must be dispatched to. """ def _update_start(self, begin, tenant_id): """Update usage_start with a new timestamp. :param begin: New usage beginning timestamp. :param tenant_id: tenant_id to update. """ self.usage_start_dt[tenant_id] = begin def _update_end(self, end, tenant_id): """Update usage_end with a new timestamp. :param end: New usage end timestamp. :param tenant_id: tenant_id to update. """ self.usage_end_dt[tenant_id] = end def _clear_usage_info(self, tenant_id): """Clear usage information timestamps. :param tenant_id: tenant_id which information needs to be removed. """ self.usage_start_dt.pop(tenant_id, None) self.usage_end_dt.pop(tenant_id, None) def _check_commit(self, usage_start, tenant_id): """Check if the period for a given tenant must be committed. :param usage_start: Start of the period. :param tenant_id: tenant_id to check for. """ usage_end = self.usage_end_dt.get(tenant_id) if usage_end is not None and usage_start >= usage_end: self.commit(tenant_id) if self.usage_start_dt.get(tenant_id) is None: self._update_start(usage_start, tenant_id) self._update_end( usage_start + timedelta(seconds=self._period), tenant_id) @abc.abstractmethod def get_state(self, tenant_id=None): """Return the last written frame's timestamp. :param tenant_id: tenant_id to filter on. """ @abc.abstractmethod def get_total(self, begin=None, end=None, tenant_id=None, service=None, groupby=None): """Return the current total. :param begin: When to start filtering. :type begin: datetime.datetime :param end: When to stop filtering. :type end: datetime.datetime :param tenant_id: Filter on the tenant_id. :type res_type: str :param service: Filter on the resource type. :type service: str :param groupby: Fields to group by, separated by commas if multiple. :type groupby: str """ @abc.abstractmethod def get_tenants(self, begin, end): """Return the list of rated tenants. :param begin: When to start filtering. :type begin: datetime.datetime :param end: When to stop filtering. :type end: datetime.datetime """ @abc.abstractmethod def get_time_frame(self, begin, end, **filters): """Request a time frame from the storage backend. :param begin: When to start filtering. :type begin: datetime.datetime :param end: When to stop filtering. :type end: datetime.datetime :param res_type: (Optional) Filter on the resource type. :type res_type: str :param tenant_id: (Optional) Filter on the tenant_id. :type res_type: str """ def append(self, raw_data, tenant_id): """Append rated data before committing them to the backend. :param raw_data: The rated data frames. 
:param tenant_id: Tenant the frame is belonging to. """ while raw_data: usage_start, data = self._filter_period(raw_data) self._check_commit(usage_start, tenant_id) self._dispatch(data, tenant_id) def nodata(self, begin, end, tenant_id): """Append a no data frame to the storage backend. :param begin: Begin of the period with no data. :param end: End of the period with no data. :param tenant_id: Tenant to update with no data marker for the period. """ self._check_commit(begin, tenant_id) def commit(self, tenant_id): """Commit the changes to the backend. :param tenant_id: Tenant the changes belong to. """ self._pre_commit(tenant_id) self._commit(tenant_id) self._post_commit(tenant_id) ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1727866639.2514865 cloudkitty-21.0.0/cloudkitty/storage/v1/hybrid/0000775000175000017500000000000000000000000021456 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/storage/v1/hybrid/__init__.py0000664000175000017500000001004100000000000023563 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright 2017 Objectif Libre # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # from oslo_config import cfg from oslo_db.sqlalchemy import utils from stevedore import driver from cloudkitty import db from cloudkitty.storage.v1 import BaseStorage from cloudkitty.storage.v1.hybrid import migration from cloudkitty.storage.v1.hybrid import models STORAGE_HYBRID_OPTS = 'storage_hybrid' storage_opts = [ cfg.StrOpt( 'backend', default='gnocchi', help='Name of the storage backend that should be used ' 'by the hybrid storage', ) ] CONF = cfg.CONF CONF.register_opts(storage_opts, group=STORAGE_HYBRID_OPTS) HYBRID_BACKENDS_NAMESPACE = 'cloudkitty.storage.hybrid.backends' class HybridStorage(BaseStorage): """Hybrid Storage Backend. Stores dataframes in one of the available backends and other informations in a classical SQL database. 
""" state_model = models.TenantState def __init__(self, **kwargs): super(HybridStorage, self).__init__(**kwargs) self._hybrid_backend = driver.DriverManager( HYBRID_BACKENDS_NAMESPACE, cfg.CONF.storage_hybrid.backend, invoke_on_load=True).driver def init(self): migration.upgrade('head') self._hybrid_backend.init() def get_state(self, tenant_id=None): with db.session_for_read() as session: q = utils.model_query(self.state_model, session) if tenant_id: q = q.filter(self.state_model.tenant_id == tenant_id) q = q.order_by(self.state_model.state.desc()) r = q.first() return r.state if r else None def _set_state(self, tenant_id, state): with db.session_for_write() as session: q = utils.model_query(self.state_model, session) if tenant_id: q = q.filter(self.state_model.tenant_id == tenant_id) r = q.first() do_commit = False if r: if state > r.state: q.update({'state': state}) do_commit = True else: state = self.state_model(tenant_id=tenant_id, state=state) session.add(state) do_commit = True if do_commit: session.commit() def _commit(self, tenant_id): self._hybrid_backend.commit(tenant_id, self.get_state(tenant_id)) def _pre_commit(self, tenant_id): super(HybridStorage, self)._pre_commit(tenant_id) def _post_commit(self, tenant_id): self._set_state(tenant_id, self.usage_start_dt.get(tenant_id)) super(HybridStorage, self)._post_commit(tenant_id) def get_total(self, begin=None, end=None, tenant_id=None, service=None, groupby=None): return self._hybrid_backend.get_total( begin=begin, end=end, tenant_id=tenant_id, service=service, groupby=groupby) def _dispatch(self, data, tenant_id): if not self.get_state(tenant_id): self._set_state(tenant_id, self.usage_start_dt.get(tenant_id)) for service in data: for frame in data[service]: self._hybrid_backend.append_time_frame( service, frame, tenant_id) self._has_data[tenant_id] = True def get_tenants(self, begin, end): return self._hybrid_backend.get_tenants(begin, end) def get_time_frame(self, begin, end, **filters): return self._hybrid_backend.get_time_frame(begin, end, **filters) ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1727866639.2514865 cloudkitty-21.0.0/cloudkitty/storage/v1/hybrid/alembic/0000775000175000017500000000000000000000000023052 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/storage/v1/hybrid/alembic/env.py0000664000175000017500000000155200000000000024217 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright 2017 Objectif Libre # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
# from cloudkitty.common.db.alembic import env # noqa from cloudkitty.storage.v1.hybrid import models target_metadata = models.Base.metadata version_table = 'storage_hybrid_alembic' env.run_migrations_online(target_metadata, version_table) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/storage/v1/hybrid/alembic/script.py.mako0000664000175000017500000000075600000000000025666 0ustar00zuulzuul00000000000000"""${message} Revision ID: ${up_revision} Revises: ${down_revision | comma,n} Create Date: ${create_date} """ from alembic import op import sqlalchemy as sa ${imports if imports else ""} # revision identifiers, used by Alembic. revision = ${repr(up_revision)} down_revision = ${repr(down_revision)} branch_labels = ${repr(branch_labels)} depends_on = ${repr(depends_on)} def upgrade(): ${upgrades if upgrades else "pass"} def downgrade(): ${downgrades if downgrades else "pass"} ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1727866639.2514865 cloudkitty-21.0.0/cloudkitty/storage/v1/hybrid/alembic/versions/0000775000175000017500000000000000000000000024722 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/storage/v1/hybrid/alembic/versions/03da4bb002b9_initial_revision.py0000664000175000017500000000236500000000000032525 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright 2017 Objectif Libre # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # """initial revision Revision ID: 03da4bb002b9 Revises: None Create Date: 2017-11-21 15:59:26.776639 """ from alembic import op import sqlalchemy as sa # revision identifiers, used by Alembic. revision = '03da4bb002b9' down_revision = None branch_labels = None depends_on = None def upgrade(): op.create_table( 'hybrid_storage_states', sa.Column('id', sa.Integer(), nullable=False), sa.Column('tenant_id', sa.String(length=32), nullable=False), sa.Column('state', sa.DateTime(), nullable=False), sa.PrimaryKeyConstraint('id'), mysql_charset='utf8', mysql_engine='InnoDB') ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1727866639.2514865 cloudkitty-21.0.0/cloudkitty/storage/v1/hybrid/backends/0000775000175000017500000000000000000000000023230 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/storage/v1/hybrid/backends/__init__.py0000664000175000017500000000565500000000000025354 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright 2017 Objectif Libre # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # import abc class BaseHybridBackend(object, metaclass=abc.ABCMeta): """Base Backend class for the Hybrid Storage. This is the interface that all backends for the hybrid storage should implement. """ @abc.abstractmethod def commit(self, tenant_id, state): """Push data to the storage backend. :param tenant_id: id of the tenant which information must be committed. """ pass @abc.abstractmethod def init(self): """Initialize hybrid storage backend. Can be used to create DB scheme on first start """ pass @abc.abstractmethod def get_total(self, begin=None, end=None, tenant_id=None, service=None, groupby=None): """Return the current total. :param begin: When to start filtering. :type begin: datetime.datetime :param end: When to stop filtering. :type end: datetime.datetime :param tenant_id: Filter on the tenant_id. :type res_type: str :param service: Filter on the resource type. :type service: str :param groupby: Fields to group by, separated by commas if multiple. :type groupby: str """ pass @abc.abstractmethod def append_time_frame(self, res_type, frame, tenant_id): """Append a time frame to commit to the backend. :param res_type: The resource type of the dataframe. :param frame: The timeframe to append. :param tenant_id: Tenant the frame is belonging to. """ pass @abc.abstractmethod def get_tenants(self, begin, end): """Return the list of rated tenants. :param begin: When to start filtering. :type begin: datetime.datetime :param end: When to stop filtering. :type end: datetime.datetime """ @abc.abstractmethod def get_time_frame(self, begin, end, **filters): """Request a time frame from the storage backend. :param begin: When to start filtering. :type begin: datetime.datetime :param end: When to stop filtering. :type end: datetime.datetime :param res_type: (Optional) Filter on the resource type. :type res_type: str :param tenant_id: (Optional) Filter on the tenant_id. :type res_type: str """ ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/storage/v1/hybrid/backends/gnocchi.py0000664000175000017500000004347400000000000025230 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright 2017 Objectif Libre # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
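# ---------------------------------------------------------------------------
# Illustrative sketch (not part of the original tree): the minimal shape a
# custom hybrid-storage backend must take to satisfy the BaseHybridBackend
# interface defined above. The class below is a hypothetical no-op
# placeholder; a real backend would be registered under the
# 'cloudkitty.storage.hybrid.backends' entry point so that HybridStorage can
# load it through stevedore.
# ---------------------------------------------------------------------------
from cloudkitty.storage.v1.hybrid.backends import BaseHybridBackend


class NoopHybridBackend(BaseHybridBackend):
    """Hypothetical backend that discards every dataframe."""

    def init(self):
        pass

    def commit(self, tenant_id, state):
        pass

    def append_time_frame(self, res_type, frame, tenant_id):
        pass

    def get_total(self, begin=None, end=None, tenant_id=None,
                  service=None, groupby=None):
        return []

    def get_tenants(self, begin, end):
        return []

    def get_time_frame(self, begin, end, **filters):
        return []
# ---------------------------------------------------------------------------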
# import datetime import decimal from gnocchiclient import client as gclient from gnocchiclient import exceptions as gexceptions from keystoneauth1 import loading as ks_loading from oslo_config import cfg from oslo_log import log as logging from oslo_utils import uuidutils from cloudkitty.collector import validate_conf from cloudkitty.storage.v1.hybrid.backends import BaseHybridBackend import cloudkitty.utils as ck_utils from cloudkitty.utils import json LOG = logging.getLogger(__name__) CONF = cfg.CONF CONF.import_opt('period', 'cloudkitty.collector', 'collect') GNOCCHI_STORAGE_OPTS = 'storage_gnocchi' gnocchi_storage_opts = [ cfg.StrOpt('interface', default='internalURL', help='endpoint url type'), cfg.StrOpt('archive_policy_name', default='rating', help='Gnocchi storage archive policy name.'), # The archive policy definition MUST include the collect period granularity cfg.StrOpt('archive_policy_definition', default='[{"granularity": ' + str(CONF.collect.period) + ', "timespan": "90 days"}, ' '{"granularity": 86400, "timespan": "360 days"}, ' '{"granularity": 2592000, "timespan": "1800 days"}]', help='Gnocchi storage archive policy definition.'), ] CONF.register_opts(gnocchi_storage_opts, GNOCCHI_STORAGE_OPTS) ks_loading.register_session_conf_options( CONF, GNOCCHI_STORAGE_OPTS) ks_loading.register_auth_conf_options( CONF, GNOCCHI_STORAGE_OPTS) RESOURCE_TYPE_NAME_ROOT = 'rating_service_' METADATA_NAME_ROOT = 'ckmeta_' class UnknownResourceType(Exception): """Exception raised when an unknown resource type is encountered""" def __init__(self, resource_type): super(UnknownResourceType, self).__init__( 'Unknown resource type {}'.format(resource_type) ) class GnocchiStorage(BaseHybridBackend): """Gnocchi backend for hybrid storage. """ groupby_keys = ['res_type', 'tenant_id'] groupby_values = ['type', 'project_id'] def _init_resource_types(self): for metric_name, metric in self.conf.items(): metric_dict = dict() metric_dict['attributes'] = list() for attribute in metric.get('metadata', {}): metric_dict['attributes'].append( METADATA_NAME_ROOT + attribute) metric_dict['required_attributes'] = ['unit', 'resource_id'] for attribute in metric['groupby']: metric_dict['required_attributes'].append( METADATA_NAME_ROOT + attribute) metric_dict['name'] = RESOURCE_TYPE_NAME_ROOT + metric['alt_name'] if metric['mutate'] == 'NUMBOOL': metric_dict['qty_metric'] = 1 else: metric_dict['qty_metric'] = metric_name self._resource_type_data[metric['alt_name']] = metric_dict def _get_res_type_dict(self, res_type): res_type_data = self._resource_type_data.get(res_type, None) if not res_type_data: return None attribute_dict = dict() for attribute in res_type_data['attributes']: attribute_dict[attribute] = { 'required': False, 'type': 'string', } for attribute in res_type_data['required_attributes']: attribute_dict[attribute] = { 'required': True, 'type': 'string', } return { 'name': res_type_data['name'], 'attributes': attribute_dict, } def _create_resource(self, res_type, tenant_id, data): res_type_data = self._resource_type_data.get(res_type, None) if not res_type_data: raise UnknownResourceType( "Unknown resource type '{}'".format(res_type)) res_dict = { 'id': data['id'], 'resource_id': data['id'], 'project_id': tenant_id, 'user_id': 'cloudkitty', 'unit': data['unit'], } for key in ['attributes', 'required_attributes']: for attr in res_type_data[key]: if METADATA_NAME_ROOT in attr: res_dict[attr] = data.get( attr.replace(METADATA_NAME_ROOT, ''), None) or '' if isinstance(res_dict[attr], decimal.Decimal): 
res_dict[attr] = float(res_dict[attr]) created_metrics = [ self._conn.metric.create({ 'name': metric, 'archive_policy_name': CONF.storage_gnocchi.archive_policy_name, }) for metric in ['price', res_type] ] metrics_dict = dict() for metric in created_metrics: metrics_dict[metric['name']] = metric['id'] res_dict['metrics'] = metrics_dict try: return self._conn.resource.create(res_type_data['name'], res_dict) except gexceptions.ResourceAlreadyExists: res_dict['id'] = uuidutils.generate_uuid() return self._conn.resource.create(res_type_data['name'], res_dict) def _get_resource(self, resource_type, resource_id): try: resource_name = self._resource_type_data[resource_type]['name'] except KeyError: raise UnknownResourceType( "Unknown resource type '{}'".format(resource_type)) try: return self._conn.resource.get(resource_name, resource_id) except gexceptions.ResourceNotFound: return None def _find_resource(self, resource_type, resource_id): try: resource_type = self._resource_type_data[resource_type]['name'] except KeyError: raise UnknownResourceType( "Unknown resource type '{}'".format(resource_type)) query = { '=': { 'resource_id': resource_id, } } try: return self._conn.resource.search( resource_type=resource_type, query=query, limit=1)[0] except IndexError: return None def _create_resource_type(self, resource_type): res_type = self._resource_type_data.get(resource_type, None) if not res_type: return None res_type_dict = self._get_res_type_dict(resource_type) try: output = self._conn.resource_type.create(res_type_dict) except gexceptions.ResourceTypeAlreadyExists: output = None return output def _get_resource_type(self, resource_type): res_type = self._resource_type_data.get(resource_type, None) if not res_type: return None return self._conn.resource_type.get(res_type['name']) def __init__(self, **kwargs): super(GnocchiStorage, self).__init__(**kwargs) conf = kwargs.get('conf') or ck_utils.load_conf( CONF.collect.metrics_conf) self.conf = validate_conf(conf) self.auth = ks_loading.load_auth_from_conf_options( CONF, GNOCCHI_STORAGE_OPTS) self.session = ks_loading.load_session_from_conf_options( CONF, GNOCCHI_STORAGE_OPTS, auth=self.auth) self._conn = gclient.Client( '1', session=self.session, adapter_options={'connect_retries': 3, 'interface': CONF.storage_gnocchi.interface}) self._archive_policy_name = ( CONF.storage_gnocchi.archive_policy_name) self._archive_policy_definition = json.loads( CONF.storage_gnocchi.archive_policy_definition) self._period = kwargs.get('period') or CONF.collect.period self._measurements = dict() self._resource_type_data = dict() self._init_resource_types() def commit(self, tenant_id, state): if not self._measurements.get(tenant_id, None): return commitable_measurements = dict() for metrics in self._measurements[tenant_id].values(): for metric_id, measurements in metrics.items(): if measurements: measures = list() for measurement in measurements: measures.append( { 'timestamp': state, 'value': measurement, } ) commitable_measurements[metric_id] = measures if commitable_measurements: self._conn.metric.batch_metrics_measures(commitable_measurements) del self._measurements[tenant_id] def init(self): try: self._conn.archive_policy.get(self._archive_policy_name) except gexceptions.ArchivePolicyNotFound: ck_archive_policy = {} ck_archive_policy['name'] = self._archive_policy_name ck_archive_policy['back_window'] = 0 ck_archive_policy['aggregation_methods'] \ = ['std', 'count', 'min', 'max', 'sum', 'mean'] ck_archive_policy['definition'] = self._archive_policy_definition 
self._conn.archive_policy.create(ck_archive_policy) for service in self._resource_type_data.keys(): try: self._get_resource_type(service) except gexceptions.ResourceTypeNotFound: self._create_resource_type(service) def get_total(self, begin=None, end=None, tenant_id=None, service=None, groupby=None): # Query can't be None if we don't specify a resource_id query = {'and': [{ 'like': {'type': RESOURCE_TYPE_NAME_ROOT + '%'}, }]} if tenant_id: query['and'].append({'=': {'project_id': tenant_id}}) gb = [] if groupby: for elem in groupby.split(','): if elem in self.groupby_keys: gb.append(self.groupby_values[ self.groupby_keys.index(elem)]) # Setting gb to None instead of an empty list gb = gb if len(gb) > 0 else None # build aggregration operation op = ['aggregate', 'sum', ['metric', 'price', 'sum']] try: aggregates = self._conn.aggregates.fetch( op, start=begin, stop=end, groupby=gb, search=query) # No 'price' metric found except gexceptions.BadRequest: return [dict(begin=begin, end=end, rate=0)] # In case no group_by was specified if not isinstance(aggregates, list): aggregates = [aggregates] total_list = list() for aggregate in aggregates: if groupby: measures = aggregate['measures']['measures']['aggregated'] else: measures = aggregate['measures']['aggregated'] if len(measures) > 0: rate = sum(measure[2] for measure in measures if (measure[1] == self._period)) total = dict(begin=begin, end=end, rate=rate) if gb: for value in gb: key = self.groupby_keys[ self.groupby_values.index(value)] total[key] = aggregate['group'][value].replace( RESOURCE_TYPE_NAME_ROOT, '') total_list.append(total) return total_list def _append_measurements(self, resource, data, tenant_id): if not self._measurements.get(tenant_id, None): self._measurements[tenant_id] = {} measurements = self._measurements[tenant_id] if not measurements.get(resource['id'], None): measurements[resource['id']] = { key: list() for key in resource['metrics'].values() } for metric_name, metric_id in resource['metrics'].items(): measurement = data.get(metric_name, None) if measurement is not None: measurements[resource['id']][metric_id].append( float(measurement) if isinstance(measurement, decimal.Decimal) else measurement) def append_time_frame(self, res_type, frame, tenant_id): flat_frame = ck_utils.flat_dict(frame) resource = self._find_resource(res_type, flat_frame['id']) if not resource: resource = self._create_resource(res_type, tenant_id, flat_frame) self._append_measurements(resource, flat_frame, tenant_id) def get_tenants(self, begin, end): query = {'like': {'type': RESOURCE_TYPE_NAME_ROOT + '%'}} r = self._conn.metric.aggregation( metrics='price', query=query, start=begin, stop=end, aggregation='sum', granularity=self._period, needed_overlap=0, groupby='project_id') projects = list() for measures in r: projects.append(measures['group']['project_id']) return projects @staticmethod def _get_time_query(start, end, resource_type, tenant_id=None): query = {'and': [{ 'or': [ {'=': {'ended_at': None}}, {'<=': {'ended_at': end}} ] }, {'>=': {'started_at': start}}, {'=': {'type': resource_type}}, ] } if tenant_id: query['and'].append({'=': {'project_id': tenant_id}}) return query def _get_resources(self, resource_type, start, end, tenant_id=None): """Returns the resources of the given type in the given period""" return self._conn.resource.search( resource_type=resource_type, query=self._get_time_query(start, end, resource_type, tenant_id), details=True) def _format_frame(self, res_type, resource, desc, measure, tenant_id): res_type_info = 
self._resource_type_data.get(res_type, None) if not res_type_info: return dict() start = measure[0] stop = start + datetime.timedelta(seconds=self._period) # Getting price price = decimal.Decimal(measure[2]) price_dict = {'price': float(price)} # Getting vol if isinstance(res_type_info['qty_metric'], str): try: qty = self._conn.metric.get_measures( resource['metrics'][res_type_info['qty_metric']], aggregation='sum', start=start, stop=stop, refresh=True)[-1][2] except IndexError: qty = 0 else: qty = res_type_info['qty_metric'] vol_dict = {'qty': decimal.Decimal(qty), 'unit': resource['unit']} # Period period_dict = { 'begin': ck_utils.dt2iso(start), 'end': ck_utils.dt2iso(stop), } # Formatting res_dict = dict() res_dict['desc'] = desc res_dict['vol'] = vol_dict res_dict['rating'] = price_dict res_dict['tenant_id'] = tenant_id return { 'usage': {res_type: [res_dict]}, 'period': period_dict, } def resource_info(self, resource_type, start, end, tenant_id=None): """Returns a dataframe for the given resource type""" try: res_type_info = self._resource_type_data.get(resource_type, None) resource_name = res_type_info['name'] except (KeyError, AttributeError): raise UnknownResourceType(resource_type) attributes = res_type_info['attributes'] \ + res_type_info['required_attributes'] output = list() query = self._get_time_query(start, end, resource_name, tenant_id) measures = self._conn.metric.aggregation( metrics='price', resource_type=resource_name, query=query, start=start, stop=end, granularity=self._period, aggregation='sum', needed_overlap=0, groupby=['type', 'id'], ) for resource_measures in measures: resource_desc = None resource = None for measure in resource_measures['measures']: if not resource_desc: resource = self._get_resource( resource_type, resource_measures['group']['id']) if not resource: continue desc = {attr.replace(METADATA_NAME_ROOT, ''): resource.get(attr, None) for attr in attributes} formatted_frame = self._format_frame( resource_type, resource, desc, measure, tenant_id) output.append(formatted_frame) return output def get_time_frame(self, begin, end, **filters): tenant_id = filters.get('tenant_id', None) resource_types = [filters.get('res_type', None)] if not resource_types[0]: resource_types = self._resource_type_data.keys() output = list() for resource_type in resource_types: output += self.resource_info(resource_type, begin, end, tenant_id) return output ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/storage/v1/hybrid/migration.py0000664000175000017500000000240300000000000024020 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright 2014 Objectif Libre # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
# import os from cloudkitty.common.db.alembic import migration ALEMBIC_REPO = os.path.join(os.path.dirname(__file__), 'alembic') def upgrade(revision): config = migration.load_alembic_config(ALEMBIC_REPO) return migration.upgrade(config, revision) def version(): config = migration.load_alembic_config(ALEMBIC_REPO) return migration.version(config) def revision(message, autogenerate): config = migration.load_alembic_config(ALEMBIC_REPO) return migration.revision(config, message, autogenerate) def stamp(revision): config = migration.load_alembic_config(ALEMBIC_REPO) return migration.stamp(config, revision) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/storage/v1/hybrid/models.py0000664000175000017500000000237700000000000023324 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright 2017 Objectif Libre # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # from oslo_db.sqlalchemy import models import sqlalchemy from sqlalchemy.ext import declarative Base = declarative.declarative_base() class TenantState(Base, models.ModelBase): """A tenant state. """ __table_args__ = {'mysql_charset': "utf8", 'mysql_engine': "InnoDB"} __tablename__ = 'hybrid_storage_states' id = sqlalchemy.Column(sqlalchemy.Integer, primary_key=True) tenant_id = sqlalchemy.Column(sqlalchemy.String(32), nullable=False) state = sqlalchemy.Column(sqlalchemy.DateTime, nullable=False) ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1727866639.2514865 cloudkitty-21.0.0/cloudkitty/storage/v1/sqlalchemy/0000775000175000017500000000000000000000000022337 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/storage/v1/sqlalchemy/__init__.py0000664000175000017500000001650600000000000024460 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright 2014 Objectif Libre # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
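# NOTE: v1 SQLAlchemy storage backend. Each rated frame is persisted as a
# single row of the 'rated_data_frames' table (see models.RatedDataFrame),
# and the reports below are computed with plain SQL aggregation on that
# table.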
# import decimal from oslo_db.sqlalchemy import utils import sqlalchemy from cloudkitty import db from cloudkitty.storage import NoTimeFrame from cloudkitty.storage import v1 as storage from cloudkitty.storage.v1.sqlalchemy import migration from cloudkitty.storage.v1.sqlalchemy import models from cloudkitty import utils as ck_utils from cloudkitty.utils import json class SQLAlchemyStorage(storage.BaseStorage): """SQLAlchemy Storage Backend """ frame_model = models.RatedDataFrame def __init__(self, **kwargs): super(SQLAlchemyStorage, self).__init__(**kwargs) @staticmethod def init(): migration.upgrade('head') def _pre_commit(self, tenant_id): if not self._has_data.get(tenant_id): empty_frame = {'vol': {'qty': 0, 'unit': 'None'}, 'rating': {'price': 0}, 'desc': ''} self._append_time_frame('_NO_DATA_', empty_frame, tenant_id) def _commit(self, tenant_id): super(SQLAlchemyStorage, self)._commit(tenant_id) def _post_commit(self, tenant_id): super(SQLAlchemyStorage, self)._post_commit(tenant_id) def _dispatch(self, data, tenant_id): for service in data: for frame in data[service]: self._append_time_frame(service, frame, tenant_id) self._has_data[tenant_id] = True def get_state(self, tenant_id=None): with db.session_for_read() as session: q = utils.model_query( self.frame_model, session) if tenant_id: q = q.filter( self.frame_model.tenant_id == tenant_id) q = q.order_by( self.frame_model.begin.desc()) r = q.first() if r: return r.begin def get_total(self, begin=None, end=None, tenant_id=None, service=None, groupby=None): with db.session_for_read() as session: querymodels = [ sqlalchemy.func.sum(self.frame_model.rate).label('rate') ] if not begin: begin = ck_utils.get_month_start_timestamp() if not end: end = ck_utils.get_next_month_timestamp() # Boundary calculation if tenant_id: querymodels.append(self.frame_model.tenant_id) if service: querymodels.append(self.frame_model.res_type) if groupby: groupbyfields = groupby.split(",") for field in groupbyfields: field_obj = self.frame_model.__dict__.get(field, None) if field_obj and field_obj not in querymodels: querymodels.append(field_obj) q = session.query(*querymodels) if tenant_id: q = q.filter( self.frame_model.tenant_id == tenant_id) if service: q = q.filter( self.frame_model.res_type == service) # begin and end filters are both needed, do not remove one of them. q = q.filter( self.frame_model.begin.between(begin, end), self.frame_model.end.between(begin, end), self.frame_model.res_type != '_NO_DATA_') if groupby: q = q.group_by(sqlalchemy.sql.text(groupby)) # Order by sum(rate) q = q.order_by(sqlalchemy.func.sum(self.frame_model.rate)) results = q.all() totallist = [] for r in results: total = {model.name: value for model, value in zip(querymodels, r)} total["begin"] = begin total["end"] = end totallist.append(total) return totallist def get_tenants(self, begin, end): with db.session_for_read() as session: q = utils.model_query( self.frame_model, session) # begin and end filters are both needed, do not remove one of them. q = q.filter( self.frame_model.begin.between(begin, end), self.frame_model.end.between(begin, end)) tenants = q.distinct().values( self.frame_model.tenant_id) return [tenant.tenant_id for tenant in tenants] def get_time_frame(self, begin, end, **filters): if not begin: begin = ck_utils.get_month_start() if not end: end = ck_utils.get_next_month() with db.session_for_read() as session: q = utils.model_query( self.frame_model, session) # begin and end filters are both needed, do not remove one of them. 
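# i.e. a frame is selected only when both its 'begin' and its 'end' fall
# within [begin, end]; frames merely overlapping the boundaries are skipped.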
q = q.filter( self.frame_model.begin.between(begin, end), self.frame_model.end.between(begin, end)) for filter_name, filter_value in filters.items(): if filter_value: q = q.filter( getattr(self.frame_model, filter_name) == filter_value) if not filters.get('res_type'): q = q.filter(self.frame_model.res_type != '_NO_DATA_') count = q.count() if not count: raise NoTimeFrame() r = q.all() return [entry.to_cloudkitty(self._collector) for entry in r] def _append_time_frame(self, res_type, frame, tenant_id): vol_dict = frame['vol'] qty = vol_dict['qty'] unit = vol_dict['unit'] rating_dict = frame.get('rating', {}) rate = rating_dict.get('price') if not rate: rate = decimal.Decimal(0) desc = json.dumps(frame['desc']) self.add_time_frame(begin=self.usage_start_dt.get(tenant_id), end=self.usage_end_dt.get(tenant_id), tenant_id=tenant_id, unit=unit, qty=qty, res_type=res_type, rate=rate, desc=desc) def add_time_frame(self, **kwargs): """Create a new time frame. :param begin: Start of the dataframe. :param end: End of the dataframe. :param tenant_id: tenant_id of the dataframe owner. :param unit: Unit of the metric. :param qty: Quantity of the metric. :param res_type: Type of the resource. :param rate: Calculated rate for this dataframe. :param desc: Resource description (metadata). """ frame = self.frame_model(**kwargs) with db.session_for_write() as session: session.add(frame) ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1727866639.2514865 cloudkitty-21.0.0/cloudkitty/storage/v1/sqlalchemy/alembic/0000775000175000017500000000000000000000000023733 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/storage/v1/sqlalchemy/alembic/__init__.py0000664000175000017500000000000000000000000026032 0ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/storage/v1/sqlalchemy/alembic/env.py0000664000175000017500000000156200000000000025101 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright 2014 Objectif Libre # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # from cloudkitty.common.db.alembic import env # noqa from cloudkitty.storage.v1.sqlalchemy import models target_metadata = models.Base.metadata version_table = 'storage_sqlalchemy_alembic' env.run_migrations_online(target_metadata, version_table) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/storage/v1/sqlalchemy/alembic/script.py.mako0000664000175000017500000000172300000000000026542 0ustar00zuulzuul00000000000000# Copyright ${create_date.year} OpenStack Foundation # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # """${message} Revision ID: ${up_revision} Revises: ${down_revision} Create Date: ${create_date} """ # revision identifiers, used by Alembic. revision = ${repr(up_revision)} down_revision = ${repr(down_revision)} from alembic import op import sqlalchemy as sa ${imports if imports else ""} def upgrade(): ${upgrades if upgrades else "pass"} ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1727866639.2554867 cloudkitty-21.0.0/cloudkitty/storage/v1/sqlalchemy/alembic/versions/0000775000175000017500000000000000000000000025603 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000020500000000000011452 xustar0000000000000000111 path=cloudkitty-21.0.0/cloudkitty/storage/v1/sqlalchemy/alembic/versions/17fd1b237aa3_initial_migration.py 22 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/storage/v1/sqlalchemy/alembic/versions/17fd1b237aa3_initial_migration.p0000664000175000017500000000266300000000000033362 0ustar00zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Initial migration Revision ID: 17fd1b237aa3 Revises: None Create Date: 2014-10-10 11:28:08.645122 """ # revision identifiers, used by Alembic. revision = '17fd1b237aa3' down_revision = None from alembic import op # noqa: E402 import sqlalchemy as sa # noqa: E402 def upgrade(): op.create_table( 'rated_data_frames', sa.Column('id', sa.Integer(), nullable=False), sa.Column('begin', sa.DateTime(), nullable=False), sa.Column('end', sa.DateTime(), nullable=False), sa.Column('unit', sa.String(length=255), nullable=False), sa.Column('qty', sa.Numeric(), nullable=False), sa.Column('res_type', sa.String(length=255), nullable=False), sa.Column('rate', sa.Float(), nullable=False), sa.Column('desc', sa.Text(), nullable=False), sa.PrimaryKeyConstraint('id'), mysql_charset='utf8', mysql_engine='InnoDB') ././@PaxHeader0000000000000000000000000000021100000000000011447 xustar0000000000000000115 path=cloudkitty-21.0.0/cloudkitty/storage/v1/sqlalchemy/alembic/versions/307430ab38bc_improve_qty_precision.py 22 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/storage/v1/sqlalchemy/alembic/versions/307430ab38bc_improve_qty_precisi0000664000175000017500000000203100000000000033421 0ustar00zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """improve qty precision Revision ID: 307430ab38bc Revises: 792b438b663 Create Date: 2016-09-05 18:37:26.714065 """ # revision identifiers, used by Alembic. revision = '307430ab38bc' down_revision = '792b438b663' from alembic import op # noqa: E402 import sqlalchemy as sa # noqa: E402 def upgrade(): with op.batch_alter_table('rated_data_frames') as batch_op: batch_op.alter_column( 'qty', type_=sa.Numeric(10, 5), existing_type=sa.Numeric()) ././@PaxHeader0000000000000000000000000000021400000000000011452 xustar0000000000000000118 path=cloudkitty-21.0.0/cloudkitty/storage/v1/sqlalchemy/alembic/versions/792b438b663_added_tenant_informations.py 22 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/storage/v1/sqlalchemy/alembic/versions/792b438b663_added_tenant_informa0000664000175000017500000000172300000000000033261 0ustar00zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """added tenant informations Revision ID: 792b438b663 Revises: 17fd1b237aa3 Create Date: 2014-12-02 13:12:11.328534 """ # revision identifiers, used by Alembic. revision = '792b438b663' down_revision = '17fd1b237aa3' from alembic import op # noqa: E402 import sqlalchemy as sa # noqa: E402 def upgrade(): op.add_column('rated_data_frames', sa.Column('tenant_id', sa.String(length=32), nullable=True)) ././@PaxHeader0000000000000000000000000000020500000000000011452 xustar0000000000000000111 path=cloudkitty-21.0.0/cloudkitty/storage/v1/sqlalchemy/alembic/versions/c703a1bad612_improve_qty_digit.py 22 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/storage/v1/sqlalchemy/alembic/versions/c703a1bad612_improve_qty_digit.p0000664000175000017500000000213100000000000033375 0ustar00zuulzuul00000000000000# Copyright 2017 OpenStack Foundation # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # """improve_qty_digit Revision ID: c703a1bad612 Revises: 307430ab38bc Create Date: 2017-04-01 09:33:41.434750 """ # revision identifiers, used by Alembic. 
revision = 'c703a1bad612' down_revision = '307430ab38bc' from alembic import op # noqa: E402 import sqlalchemy as sa # noqa: E402 def upgrade(): with op.batch_alter_table('rated_data_frames') as batch_op: batch_op.alter_column( 'qty', type_=sa.Numeric(15, 5), existing_type=sa.Numeric()) ././@PaxHeader0000000000000000000000000000023300000000000011453 xustar0000000000000000133 path=cloudkitty-21.0.0/cloudkitty/storage/v1/sqlalchemy/alembic/versions/d875621d0384_create_index_idx_tenantid_begin_end_on_.py 22 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/storage/v1/sqlalchemy/alembic/versions/d875621d0384_create_index_idx_te0000664000175000017500000000234300000000000033201 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. """Creating indexes to allow SQL query optimizations Revision ID: d875621d0384 Revises: c703a1bad612 Create Date: 2022-11-23 15:36:05.331585 """ from alembic import op # revision identifiers, used by Alembic. revision = 'd875621d0384' down_revision = 'c703a1bad612' branch_labels = None depends_on = None def upgrade(): op.create_index('idx_rated_data_frames_date', 'rated_data_frames', ['begin', 'end']) op.create_index('idx_tenantid_begin_end', 'rated_data_frames', ['tenant_id', 'begin', 'end']) def downgrade(): op.drop_index('idx_tenantid_begin_end', 'rated_data_frames') op.drop_index('idx_rated_data_frames_date', 'rated_data_frames') ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/storage/v1/sqlalchemy/migration.py0000664000175000017500000000240300000000000024701 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright 2014 Objectif Libre # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
# import os from cloudkitty.common.db.alembic import migration ALEMBIC_REPO = os.path.join(os.path.dirname(__file__), 'alembic') def upgrade(revision): config = migration.load_alembic_config(ALEMBIC_REPO) return migration.upgrade(config, revision) def version(): config = migration.load_alembic_config(ALEMBIC_REPO) return migration.version(config) def revision(message, autogenerate): config = migration.load_alembic_config(ALEMBIC_REPO) return migration.revision(config, message, autogenerate) def stamp(revision): config = migration.load_alembic_config(ALEMBIC_REPO) return migration.stamp(config, revision) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/storage/v1/sqlalchemy/models.py0000664000175000017500000000545000000000000024200 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright 2014 Objectif Libre # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # from oslo_db.sqlalchemy import models import sqlalchemy from sqlalchemy.ext import declarative from cloudkitty.utils import json Base = declarative.declarative_base() class RatedDataFrame(Base, models.ModelBase): """A rated data frame. """ __table_args__ = {'mysql_charset': "utf8", 'mysql_engine': "InnoDB"} __tablename__ = 'rated_data_frames' id = sqlalchemy.Column(sqlalchemy.Integer, primary_key=True) tenant_id = sqlalchemy.Column(sqlalchemy.String(32), nullable=True) begin = sqlalchemy.Column(sqlalchemy.DateTime, nullable=False) end = sqlalchemy.Column(sqlalchemy.DateTime, nullable=False) unit = sqlalchemy.Column(sqlalchemy.String(255), nullable=False) qty = sqlalchemy.Column(sqlalchemy.Numeric(15, 5), nullable=False) res_type = sqlalchemy.Column(sqlalchemy.String(255), nullable=False) rate = sqlalchemy.Column(sqlalchemy.Float(), nullable=False) desc = sqlalchemy.Column(sqlalchemy.Text(), nullable=False) def to_cloudkitty(self, collector=None): # Rating information rating_dict = {} rating_dict['price'] = self.rate # Volume information vol_dict = {} vol_dict['qty'] = self.qty.normalize() vol_dict['unit'] = self.unit res_dict = {} # Encapsulate information in a resource dict res_dict['rating'] = rating_dict res_dict['desc'] = json.loads(self.desc) res_dict['vol'] = vol_dict res_dict['desc']['tenant_id'] = self.tenant_id # Add resource to the usage dict usage_dict = {} usage_dict[self.res_type] = [res_dict] # Time information period_dict = {} period_dict['begin'] = self.begin period_dict['end'] = self.end # Add period to the resource information ck_dict = {} ck_dict['period'] = period_dict ck_dict['usage'] = usage_dict return ck_dict ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1727866639.2554867 cloudkitty-21.0.0/cloudkitty/storage/v2/0000775000175000017500000000000000000000000020176 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0
cloudkitty-21.0.0/cloudkitty/storage/v2/__init__.py0000664000175000017500000001551200000000000022313 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright 2018 Objectif Libre # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # import abc import datetime from oslo_log import log as logging from oslo_config import cfg from cloudkitty import storage_state from werkzeug import exceptions as http_exceptions storage_opts = [ cfg.IntOpt( 'retention_period', default=2400, help='Duration after which data should be cleaned up/aggregated. ' 'Duration is given in hours. Defaults to 2400 (100 days)' ), ] CONF = cfg.CONF CONF.register_opts(storage_opts, 'storage') LOG = logging.getLogger(__name__) class BaseStorage(object, metaclass=abc.ABCMeta): """Abstract class for v2 storage objects.""" def __init__(self, *args, **kwargs): """Left empty so that child classes don't need to implement this.""" @abc.abstractmethod def init(self): """Called for storage backend initialization""" # NOTE(peschk_l): scope_id must not be used by any v2 storage backend. It # is only present for backward compatibility with the v1 storage. It will # be removed together with the v1 storage @abc.abstractmethod def push(self, dataframes, scope_id=None): """Pushes dataframes to the storage backend :param dataframes: List of dataframes :type dataframes: [cloudkitty.dataframe.DataFrame] """ @abc.abstractmethod def retrieve(self, begin=None, end=None, filters=None, metric_types=None, offset=0, limit=1000, paginate=True): """Returns the following dict:: { 'total': int, # total amount of measures found 'dataframes': list of dataframes, } :param begin: Start date :type begin: datetime :param end: End date :type end: datetime :param filters: Attributes to filter on. ex: {'flavor_id': '42'} :type filters: dict :param metric_types: Metric type to filter on. :type metric_types: str or list :param offset: Offset for pagination :type offset: int :param limit: Maximum amount of elements to return :type limit: int :param paginate: Defaults to True. If False, all found results will be returned. :type paginate: bool :rtype: dict """ @abc.abstractmethod def delete(self, begin=None, end=None, filters=None): """Deletes all data for the given period and filters. :param begin: Start date :type begin: datetime :param end: End date :type end: datetime :param filters: Attributes to filter on. ex: {'flavor_id': '42'} :type filters: dict """ @abc.abstractmethod def total(self, groupby=None, begin=None, end=None, metric_types=None, filters=None, custom_fields=None, offset=0, limit=1000, paginate=True): """Returns a grouped total for given groupby. :param groupby: Attributes on which to group by. These attributes must be part of the 'groupby' section for the given metric type in metrics.yml. In order to group by metric type, add 'type' to the groupby list. :type groupby: list of strings :param begin: Start date :type begin: datetime :param end: End date :type end: datetime :param metric_types: Metric type to filter on.
:type metric_types: str or list :param custom_fields: the custom fields that one desires to add in the summary reporting. Each driver must handle these values by themselves. :type: custom_fields: list of strings :param filters: Attributes to filter on. ex: {'flavor_id': '42'} :type filters: dict :param offset: Offset for pagination :type offset: int :param limit: Maximum amount of elements to return :type limit: int :param paginate: Defaults to True. If False, all found results will be returned. :type paginate: bool :rtype: dict Returns a dict with the following format:: { 'total': int, # total amount of results found 'results': list of results, } Each result has the following format:: { 'begin': XXX, 'end': XXX, 'rate': XXX, 'groupby1': XXX, 'groupby2': XXX } """ @staticmethod def get_retention(): """Returns the retention period defined in the configuration. :rtype: datetime.timedelta """ return datetime.timedelta(hours=CONF.storage.retention_period) # NOTE(lpeschke): This is only kept for v1 storage backward compatibility def get_tenants(self, begin=None, end=None): return storage_state.StateManager().get_tenants(begin, end) TIME_COMMANDS_MAP = {"d": "day_of_the_year", "w": "week_of_the_year", "m": "month", "y": "year"} def parse_groupby_syntax_to_groupby_elements(self, groupbys): if not groupbys: LOG.debug("No groupby to process syntax.") return groupbys groupbys_parsed = [] for elem in groupbys: if 'time' in elem: time_command = elem.split('-') number_of_parts = len(time_command) if number_of_parts == 2: g = self.TIME_COMMANDS_MAP.get(time_command[1]) if not g: raise http_exceptions.BadRequest( "Invalid groupby time option. There is no " "groupby processing for [%s]." % elem) LOG.debug("Replacing API groupby time command [%s] with " "internal groupby command [%s].", elem, g) elem = g elif number_of_parts > 2: LOG.warning("The groupby [%s] command is not expected for " "storage backend [%s]. Therefore, we leave it " "as is.", elem, self) groupbys_parsed.append(elem) return groupbys_parsed ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1727866639.2554867 cloudkitty-21.0.0/cloudkitty/storage/v2/elasticsearch/0000775000175000017500000000000000000000000023010 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/storage/v2/elasticsearch/__init__.py0000664000175000017500000001617200000000000025130 0ustar00zuulzuul00000000000000# Copyright 2019 Objectif Libre # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
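# NOTE: this v2 backend stores one Elasticsearch document per rated data
# point (see CLOUDKITTY_INDEX_MAPPING below); totals are computed
# server-side through sum and composite aggregations issued by the client
# module.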
# import datetime from oslo_config import cfg from oslo_log import log from cloudkitty import dataframe from cloudkitty.storage import v2 as v2_storage from cloudkitty.storage.v2.elasticsearch import client as es_client from cloudkitty.storage.v2.elasticsearch import exceptions from cloudkitty.utils import tz as tzutils LOG = log.getLogger(__name__) CONF = cfg.CONF ELASTICSEARCH_STORAGE_GROUP = 'storage_elasticsearch' elasticsearch_storage_opts = [ cfg.StrOpt( 'host', help='Elasticsearch host, along with port and protocol. ' 'Defaults to http://localhost:9200', default='http://localhost:9200'), cfg.StrOpt( 'index_name', help='Elasticsearch index to use. Defaults to "cloudkitty".', default='cloudkitty'), cfg.BoolOpt('insecure', help='Set to true to allow insecure HTTPS ' 'connections to Elasticsearch', default=False), cfg.StrOpt('cafile', help='Path of the CA certificate to trust for ' 'HTTPS connections.', default=None), cfg.IntOpt('scroll_duration', help="Duration (in seconds) for which the ES scroll contexts " "should be kept alive.", advanced=True, default=30, min=0, max=300), ] CONF.register_opts(elasticsearch_storage_opts, ELASTICSEARCH_STORAGE_GROUP) CLOUDKITTY_INDEX_MAPPING = { "dynamic_templates": [ { "strings_as_keywords": { "match_mapping_type": "string", "mapping": { "type": "keyword" } } } ], "dynamic": False, "properties": { "start": {"type": "date"}, "end": {"type": "date"}, "type": {"type": "keyword"}, "unit": {"type": "keyword"}, "qty": {"type": "double"}, "price": {"type": "double"}, "groupby": {"dynamic": True, "type": "object"}, "metadata": {"dynamic": True, "type": "object"} }, } class ElasticsearchStorage(v2_storage.BaseStorage): def __init__(self, *args, **kwargs): super(ElasticsearchStorage, self).__init__(*args, **kwargs) verify = not CONF.storage_elasticsearch.insecure if verify and CONF.storage_elasticsearch.cafile: verify = CONF.storage_elasticsearch.cafile self._conn = es_client.ElasticsearchClient( CONF.storage_elasticsearch.host, CONF.storage_elasticsearch.index_name, "_doc", verify=verify) def init(self): r = self._conn.get_index() if r.status_code != 200: raise exceptions.IndexDoesNotExist( CONF.storage_elasticsearch.index_name) LOG.info('Creating mapping "_doc" on index {}...'.format( CONF.storage_elasticsearch.index_name)) self._conn.put_mapping(CLOUDKITTY_INDEX_MAPPING) LOG.info('Mapping created.') def push(self, dataframes, scope_id=None): for frame in dataframes: for type_, point in frame.iterpoints(): start, end = self._local_to_utc(frame.start, frame.end) self._conn.add_point(point, type_, start, end) self._conn.commit() @staticmethod def _local_to_utc(*args): return [tzutils.local_to_utc(arg) for arg in args] @staticmethod def _doc_to_datapoint(doc): return dataframe.DataPoint( doc['unit'], doc['qty'], doc['price'], doc['groupby'], doc['metadata'], ) def _build_dataframes(self, docs): dataframes = {} nb_points = 0 for doc in docs: source = doc['_source'] start = tzutils.dt_from_iso(source['start']) end = tzutils.dt_from_iso(source['end']) key = (start, end) if key not in dataframes.keys(): dataframes[key] = dataframe.DataFrame(start=start, end=end) dataframes[key].add_point( self._doc_to_datapoint(source), source['type']) nb_points += 1 output = list(dataframes.values()) output.sort(key=lambda frame: (frame.start, frame.end)) return output def retrieve(self, begin=None, end=None, filters=None, metric_types=None, offset=0, limit=1000, paginate=True): begin, end = self._local_to_utc(begin or tzutils.get_month_start(), end or tzutils.get_next_month()) 
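# Missing bounds default to the current month, and both are converted to
# UTC before being sent to Elasticsearch.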
total, docs = self._conn.retrieve( begin, end, filters, metric_types, offset=offset, limit=limit, paginate=paginate) return { 'total': total, 'dataframes': self._build_dataframes(docs), } def delete(self, begin=None, end=None, filters=None): self._conn.delete_by_query(begin, end, filters) @staticmethod def _normalize_time(t): if isinstance(t, datetime.datetime): return tzutils.utc_to_local(t) return tzutils.dt_from_iso(t) def _doc_to_total_result(self, doc, start, end): output = { 'begin': self._normalize_time(doc.get('start', start)), 'end': self._normalize_time(doc.get('end', end)), 'qty': doc['sum_qty']['value'], 'rate': doc['sum_price']['value'], } # Means we had a composite aggregation if 'key' in doc.keys(): for key, value in doc['key'].items(): if key == 'begin' or key == 'end': # Elasticsearch returns ts in milliseconds value = tzutils.dt_from_ts(value // 1000) output[key] = value return output def total(self, groupby=None, begin=None, end=None, metric_types=None, filters=None, custom_fields=None, offset=0, limit=1000, paginate=True): begin, end = self._local_to_utc(begin or tzutils.get_month_start(), end or tzutils.get_next_month()) groupby = self.parse_groupby_syntax_to_groupby_elements(groupby) total, docs = self._conn.total(begin, end, metric_types, filters, groupby, custom_fields=custom_fields, offset=offset, limit=limit, paginate=paginate) return { 'total': total, 'results': [self._doc_to_total_result(doc, begin, end) for doc in docs], } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/storage/v2/elasticsearch/client.py0000664000175000017500000003523200000000000024645 0ustar00zuulzuul00000000000000# Copyright 2019 Objectif Libre # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # import itertools from oslo_log import log import requests from cloudkitty.storage.v2.elasticsearch import exceptions from cloudkitty.utils import json LOG = log.getLogger(__name__) class ElasticsearchClient(object): """Class used to ease interaction with Elasticsearch. :param autocommit: Defaults to True. Automatically push documents to Elasticsearch once chunk_size has been reached. :type autocommit: bool :param chunk_size: Maximal number of documents to commit/retrieve at once. :type chunk_size: int :param scroll_duration: Defaults to 60. 
Duration, in seconds, for which search contexts should be kept alive :type scroll_duration: int """ def __init__(self, url, index_name, mapping_name, verify=True, autocommit=True, chunk_size=5000, scroll_duration=60): self._url = url.strip('/') self._index_name = index_name.strip('/') self._mapping_name = mapping_name.strip('/') self._autocommit = autocommit self._chunk_size = chunk_size self._scroll_duration = str(scroll_duration) + 's' self._scroll_params = {'scroll': self._scroll_duration} self._docs = [] self._scroll_ids = set() self._sess = requests.Session() self._verify = self._sess.verify = verify self._sess.headers = {'Content-Type': 'application/json'} @staticmethod def _log_query(url, query, response): message = 'Query on {} with body "{}" took {}ms'.format( url, query, response['took']) if 'hits' in response.keys(): message += ' for {} hits'.format(response['hits']['total']) LOG.debug(message) @staticmethod def _build_must(start, end, metric_types, filters): must = [] if start: must.append({"range": {"start": {"gte": start.isoformat()}}}) if end: must.append({"range": {"end": {"lte": end.isoformat()}}}) if filters and 'type' in filters.keys(): must.append({'term': {'type': filters['type']}}) if metric_types: must.append({"terms": {"type": metric_types}}) return must @staticmethod def _build_should(filters): if not filters: return [] should = [] for k, v in filters.items(): if k != 'type': should += [{'term': {'groupby.' + k: v}}, {'term': {'metadata.' + k: v}}] return should def _build_composite(self, groupby): if not groupby: return [] sources = [] for elem in groupby: if elem == 'type': sources.append({'type': {'terms': {'field': 'type'}}}) elif elem == 'time': # Not doing a date_histogram aggregation because we don't know # the period sources.append({'begin': {'terms': {'field': 'start'}}}) sources.append({'end': {'terms': {'field': 'end'}}}) else: sources.append({elem: {'terms': {'field': 'groupby.' + elem}}}) return {"sources": sources} @staticmethod def _build_query(must, should, composite): query = {} if must or should: query["query"] = {"bool": {}} if must: query["query"]["bool"]["must"] = must if should: query["query"]["bool"]["should"] = should # We want each term to match exactly once, and each term introduces # two "term" aggregations: one for "groupby" and one for "metadata" query["query"]["bool"]["minimum_should_match"] = len(should) // 2 if composite: query["aggs"] = {"sum_and_price": { "composite": composite, "aggregations": { "sum_price": {"sum": {"field": "price"}}, "sum_qty": {"sum": {"field": "qty"}}, } }} return query def _req(self, method, url, data, params, deserialize=True): r = method(url, data=data, params=params) if r.status_code < 200 or r.status_code >= 300: raise exceptions.InvalidStatusCode( 200, r.status_code, r.text, data) if not deserialize: return r output = r.json() self._log_query(url, data, output) return output def put_mapping(self, mapping): """Does a PUT request against ES's mapping API. The PUT request will be done against `/<index_name>/_mapping/<mapping_name>` :param mapping: body of the request :type mapping: dict :rtype: requests.models.Response """ url = '/'.join( (self._url, self._index_name, '_mapping', self._mapping_name)) # NOTE(peschk_l): This is done for compatibility with # Elasticsearch 6 and 7. param = {"include_type_name": "true"} return self._req( self._sess.put, url, json.dumps(mapping), param, deserialize=False) def get_index(self): """Does a GET request against ES's index API.
The GET request will be done against `/<index_name>` :rtype: requests.models.Response """ url = '/'.join((self._url, self._index_name)) return self._req(self._sess.get, url, None, None, deserialize=False) def search(self, body, scroll=True): """Does a GET request against ES's search API. The GET request will be done against `/<index_name>/_search` :param body: body of the request :type body: dict :rtype: dict """ url = '/'.join((self._url, self._index_name, '_search')) params = self._scroll_params if scroll else None return self._req( self._sess.get, url, json.dumps(body), params) def scroll(self, body): """Does a GET request against ES's scroll API. The GET request will be done against `/_search/scroll` :param body: body of the request :type body: dict :rtype: dict """ url = '/'.join((self._url, '_search/scroll')) return self._req(self._sess.get, url, json.dumps(body), None) def close_scroll(self, body): """Does a DELETE request against ES's scroll API. The DELETE request will be done against `/_search/scroll` :param body: body of the request :type body: dict :rtype: dict """ url = '/'.join((self._url, '_search/scroll')) resp = self._req( self._sess.delete, url, json.dumps(body), None, deserialize=False) body = resp.json() LOG.debug('Freed {} scroll contexts'.format(body['num_freed'])) return body def close_scrolls(self): """Closes all scroll contexts opened by this client.""" ids = list(self._scroll_ids) LOG.debug('Closing {} scroll contexts: {}'.format(len(ids), ids)) self.close_scroll({'scroll_id': ids}) self._scroll_ids = set() def bulk_with_instruction(self, instruction, terms): """Does a POST request against ES's bulk API The POST request will be done against `/<index_name>/<mapping_name>/_bulk` The instruction will be appended before each term. For example, bulk_with_instruction('instr', ['one', 'two']) will produce:: instr one instr two :param instruction: instruction to execute for each term :type instruction: dict :param terms: list of terms for which instruction should be executed :type terms: collections.abc.Iterable :rtype: requests.models.Response """ instruction = json.dumps(instruction) data = '\n'.join(itertools.chain( *[(instruction, json.dumps(term)) for term in terms] )) + '\n' url = '/'.join( (self._url, self._index_name, self._mapping_name, '_bulk')) return self._req(self._sess.post, url, data, None, deserialize=False) def bulk_index(self, terms): """Indexes each of the documents in 'terms' :param terms: list of documents to index :type terms: collections.abc.Iterable """ LOG.debug("Indexing {} documents".format(len(terms))) return self.bulk_with_instruction({"index": {}}, terms) def commit(self): """Index all documents""" while self._docs: self.bulk_index(self._docs[:self._chunk_size]) self._docs = self._docs[self._chunk_size:] def add_point(self, point, type_, start, end): """Append a point to the client.
:param point: DataPoint to append :type point: cloudkitty.dataframe.DataPoint :param type_: type of the DataPoint :type type_: str """ self._docs.append({ 'start': start, 'end': end, 'type': type_, 'unit': point.unit, 'description': point.description, 'qty': point.qty, 'price': point.price, 'groupby': point.groupby, 'metadata': point.metadata, }) if self._autocommit and len(self._docs) >= self._chunk_size: self.commit() def _get_chunk_size(self, offset, limit, paginate): if paginate and offset + limit < self._chunk_size: return offset + limit return self._chunk_size def retrieve(self, begin, end, filters, metric_types, offset=0, limit=1000, paginate=True): """Retrieves a paginated list of documents from Elasticsearch.""" if not paginate: offset = 0 query = self._build_query( self._build_must(begin, end, metric_types, filters), self._build_should(filters), None) query['size'] = self._get_chunk_size(offset, limit, paginate) resp = self.search(query) scroll_id = resp['_scroll_id'] self._scroll_ids.add(scroll_id) total_hits = resp['hits']['total'] if isinstance(total_hits, dict): LOG.debug("Total hits [%s] is a dict. Therefore, we only extract " "the 'value' attribute as the total option.", total_hits) total_hits = total_hits.get("value") total = total_hits chunk = resp['hits']['hits'] output = chunk[offset:offset+limit if paginate else len(chunk)] offset = 0 if len(chunk) > offset else offset - len(chunk) while (not paginate) or len(output) < limit: resp = self.scroll({ 'scroll_id': scroll_id, 'scroll': self._scroll_duration, }) scroll_id, chunk = resp['_scroll_id'], resp['hits']['hits'] self._scroll_ids.add(scroll_id) # Means we've scrolled until the end if not chunk: break output += chunk[offset:offset+limit if paginate else len(chunk)] offset = 0 if len(chunk) > offset else offset - len(chunk) self.close_scrolls() return total, output def delete_by_query(self, begin=None, end=None, filters=None): """Does a POST request against ES's Delete By Query API. The POST request will be done against `/<index_name>/_delete_by_query` :param filters: Optional filters for documents to delete :type filters: list of dicts :rtype: requests.models.Response """ url = '/'.join((self._url, self._index_name, '_delete_by_query')) must = self._build_must(begin, end, None, filters) data = (json.dumps({"query": {"bool": {"must": must}}}) if must else None) return self._req(self._sess.post, url, data, None) def total(self, begin, end, metric_types, filters, groupby, custom_fields=None, offset=0, limit=1000, paginate=True): if custom_fields: LOG.warning("'custom_fields' are not implemented yet for " "ElasticSearch.
Therefore, the custom fields [%s] " "informed by the user will be ignored.", custom_fields) if not paginate: offset = 0 metric_types = [metric_types] if metric_types else None must = self._build_must(begin, end, metric_types, filters) should = self._build_should(filters) composite = self._build_composite(groupby) if groupby else None if composite: composite['size'] = self._chunk_size query = self._build_query(must, should, composite) if "aggs" not in query.keys(): query["aggs"] = { "sum_price": {"sum": {"field": "price"}}, "sum_qty": {"sum": {"field": "qty"}}, } query['size'] = 0 resp = self.search(query, scroll=False) # Means we didn't group, so length is 1 if not composite: return 1, [resp["aggregations"]] after = resp["aggregations"]["sum_and_price"].get("after_key") chunk = resp["aggregations"]["sum_and_price"]["buckets"] total = len(chunk) output = chunk[offset:offset+limit if paginate else len(chunk)] offset = 0 if len(chunk) > offset else offset - len(chunk) # FIXME(peschk_l): We have to iterate over ALL buckets in order to get # the total length. If there is a way for composite aggregations to get # the total amount of buckets, please fix this while after: composite_query = query["aggs"]["sum_and_price"]["composite"] composite_query["size"] = self._chunk_size composite_query["after"] = after resp = self.search(query, scroll=False) after = resp["aggregations"]["sum_and_price"].get("after_key") chunk = resp["aggregations"]["sum_and_price"]["buckets"] if not chunk: break output += chunk[offset:offset+limit if paginate else len(chunk)] offset = 0 if len(chunk) > offset else offset - len(chunk) total += len(chunk) if paginate: output = output[offset:offset+limit] return total, output ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/storage/v2/elasticsearch/exceptions.py0000664000175000017500000000231000000000000025537 0ustar00zuulzuul00000000000000# Copyright 2019 Objectif Libre # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # class BaseElasticsearchException(Exception): """Base exception raised by the Elasticsearch v2 storage driver""" class InvalidStatusCode(BaseElasticsearchException): def __init__(self, expected, actual, msg, query): super(InvalidStatusCode, self).__init__( "Expected {} status code, got {}: {}. Query was {}".format( expected, actual, msg, query)) class IndexDoesNotExist(BaseElasticsearchException): def __init__(self, index_name): super(IndexDoesNotExist, self).__init__( "Elasticsearch index {} does not exist".format(index_name) ) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/storage/v2/influx.py0000664000175000017500000007674300000000000022076 0ustar00zuulzuul00000000000000# Copyright 2018 Objectif Libre # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # import csv import datetime import influxdb import io import json import re from influxdb_client.client.write_api import SYNCHRONOUS from influxdb_client import InfluxDBClient from oslo_config import cfg from oslo_log import log import requests from cloudkitty import dataframe from cloudkitty.storage import v2 as v2_storage from cloudkitty.utils import tz as tzutils LOG = log.getLogger(__name__) CONF = cfg.CONF CONF.import_opt('period', 'cloudkitty.collector', 'collect') INFLUX_STORAGE_GROUP = 'storage_influxdb' influx_storage_opts = [ cfg.StrOpt('username', help='InfluxDB username'), cfg.StrOpt('password', help='InfluxDB password', secret=True), cfg.StrOpt('database', help='InfluxDB database'), cfg.StrOpt('retention_policy', default='autogen', help='Retention policy to use'), cfg.StrOpt('host', help='InfluxDB host', default='localhost'), cfg.IntOpt('port', help='InfluxDB port', default=8086), cfg.BoolOpt( 'use_ssl', help='Set to true to use ssl for influxDB connection. ' 'Defaults to False', default=False, ), cfg.BoolOpt( 'insecure', help='Set to true to authorize insecure HTTPS connections to ' 'influxDB. Defaults to False', default=False, ), cfg.StrOpt( 'cafile', help='Path of the CA certificate to trust for HTTPS connections', default=None ), cfg.IntOpt('version', help='InfluxDB version', default=1), cfg.IntOpt('query_timeout', help='Flux query timeout in milliseconds', default=3600000), cfg.StrOpt( 'token', help='InfluxDB API token for version 2 authentication', default=None ), cfg.StrOpt( 'org', help='InfluxDB 2 org', default="openstack" ), cfg.StrOpt( 'bucket', help='InfluxDB 2 bucket', default="cloudkitty" ), cfg.StrOpt( 'url', help='InfluxDB 2 URL', default=None ) ] CONF.register_opts(influx_storage_opts, INFLUX_STORAGE_GROUP) PERIOD_FIELD_NAME = '__ck_collect_period' def _sanitized_groupby(groupby): forbidden = ('time',) return [g for g in groupby if g not in forbidden] if groupby else [] class InfluxClient(object): """Class used to ease interaction with InfluxDB""" def __init__(self, chunk_size=500, autocommit=True, default_period=3600): """Creates an InfluxClient object. :param chunk_size: Size after which points should be pushed. :param autocommit: Set to false to disable autocommit :param default_period: Placeholder for the period in case it can't be determined.
""" self._conn = self._get_influx_client() self._chunk_size = chunk_size self._autocommit = autocommit self._retention_policy = CONF.storage_influxdb.retention_policy self._default_period = default_period self._points = [] @staticmethod def _get_influx_client(): verify = CONF.storage_influxdb.use_ssl and not \ CONF.storage_influxdb.insecure if verify and CONF.storage_influxdb.cafile: verify = CONF.storage_influxdb.cafile return influxdb.InfluxDBClient( username=CONF.storage_influxdb.username, password=CONF.storage_influxdb.password, host=CONF.storage_influxdb.host, port=CONF.storage_influxdb.port, database=CONF.storage_influxdb.database, ssl=CONF.storage_influxdb.use_ssl, verify_ssl=verify, ) def retention_policy_exists(self, database, policy): policies = self._conn.get_list_retention_policies(database) return policy in [pol['name'] for pol in policies] def commit(self): total_points = len(self._points) if len(self._points) < 1: return LOG.debug('Pushing {} points to InfluxDB'.format(total_points)) self._conn.write_points(self._points, retention_policy=self._retention_policy) self._points = [] def append_point(self, metric_type, start, period, point): """Adds a point to commit to InfluxDB. :param metric_type: Name of the metric type :type metric_type: str :param start: Start of the period the point applies to :type start: datetime.datetime :param period: length of the period the point applies to (in seconds) :type period: int :param point: Point to push :type point: dataframe.DataPoint """ measurement_fields = dict(point.metadata) measurement_fields['qty'] = float(point.qty) measurement_fields['price'] = float(point.price) measurement_fields['unit'] = point.unit measurement_fields['description'] = point.description # Unfortunately, this seems to be the fastest way: Having several # measurements would imply a high client-side workload, and this allows # us to filter out unrequired keys measurement_fields['groupby'] = '|'.join(point.groupby.keys()) measurement_fields['metadata'] = '|'.join(point.metadata.keys()) measurement_fields[PERIOD_FIELD_NAME] = period measurement_tags = dict(point.groupby) measurement_tags['type'] = metric_type self._points.append({ 'measurement': 'dataframes', 'tags': measurement_tags, 'fields': measurement_fields, 'time': start, }) if self._autocommit and len(self._points) >= self._chunk_size: self.commit() @staticmethod def _get_filter(key, value): if isinstance(value, list): if len(value) == 1: return InfluxClient._get_filter(key, value[0]) return "(" + " OR ".join('"{}"=\'{}\''.format(key, v) for v in value) + ")" format_string = '' if isinstance(value, str): format_string = """"{}"='{}'""" elif isinstance(value, (int, float)): format_string = """"{}"={}""" return format_string.format(key, value) @staticmethod def _get_time_query(begin, end): return " WHERE time >= '{}' AND time < '{}'".format( begin.isoformat(), end.isoformat()) def _get_filter_query(self, filters): if not filters: return '' return ' AND ' + ' AND '.join( self._get_filter(k, v) for k, v in filters.items()) @staticmethod def _get_type_query(types): if not types: return '' return " AND " + InfluxClient._get_filter("type", types) def get_total(self, types, begin, end, custom_fields, groupby=None, filters=None, limit=None): self.validate_custom_fields(custom_fields) # We validate the SQL statements. Therefore, we can ignore this # bandit warning here. 
query = 'SELECT %s FROM "dataframes"' % custom_fields # nosec query += self._get_time_query(begin, end) query += self._get_filter_query(filters) query += self._get_type_query(types) if groupby: groupby_query = '' if 'time' in groupby: groupby_query += 'time(' + str(self._default_period) + 's)' groupby_query += ',' if groupby else '' if groupby: groupby_query += '"' + '","'.join( _sanitized_groupby(groupby)) + '"' query += ' GROUP BY ' + groupby_query query += ';' LOG.debug("Executing query [%s].", query) total = self._conn.query(query) LOG.debug( "Data [%s] received when executing query [%s].", total, query) return total @staticmethod def validate_custom_fields(custom_fields): forbidden_clauses = ["select", "from", "drop", "delete", "create", "alter", "insert", "update"] for field in custom_fields.split(","): if field.lower() in forbidden_clauses: raise RuntimeError("Clause [%s] is not allowed in the custom" " fields of a summary GET report. The following" " clauses are not allowed: [%s]." % (field, forbidden_clauses)) def retrieve(self, types, filters, begin, end, offset=0, limit=1000, paginate=True): query = 'SELECT * FROM "dataframes"' query += self._get_time_query(begin, end) query += self._get_filter_query(filters) query += self._get_type_query(types) if paginate: query += ' LIMIT {} OFFSET {}'.format(limit, offset) query += ';' total_query = 'SELECT COUNT(groupby) FROM "dataframes"' total_query += self._get_time_query(begin, end) total_query += self._get_filter_query(filters) total_query += self._get_type_query(types) total_query += ';' total, result = self._conn.query(total_query + query) total = sum(point['count'] for point in total.get_points()) return total, result @staticmethod def _get_time_query_delete(begin, end): output = "" if begin: output += " WHERE time >= '{}'".format(begin.isoformat()) if end: output += " AND " if output else " WHERE " output += "time < '{}'".format(end.isoformat()) return output def delete(self, begin, end, filters): query = 'DELETE FROM "dataframes"' query += self._get_time_query_delete(begin, end) filter_query = self._get_filter_query(filters) if 'WHERE' not in query and filter_query: query += " WHERE " + filter_query[5:] else: query += filter_query query += ';' LOG.debug("InfluxDB query to delete elements filtering by [%s] and " "with [begin=%s, end=%s]: [%s].", filters, begin, end, query) self._conn.query(query) def _get_total_elem(self, begin, end, groupby, series_groupby, point): if groupby and 'time' in groupby: begin = tzutils.dt_from_iso(point['time']) period = point.get(PERIOD_FIELD_NAME) or self._default_period end = tzutils.add_delta(begin, datetime.timedelta(seconds=period)) output = { 'begin': begin, 'end': end, } for key in point.keys(): if "time" != key: output[key] = point[key] if groupby: for group in _sanitized_groupby(groupby): output[group] = series_groupby.get(group, '') return output def process_total(self, total, begin, end, groupby, *args): output = [] for (series_name, series_groupby), points in total.items(): for point in points: # NOTE(peschk_l): InfluxDB returns all timestamps for a given # period and interval, even those with no data. This filters # out periods with no data. # NOTE (rafaelweingartner): the summary GET API allows # users to customize the report. Therefore, we only ignore # data points if all of their entries have None values. # Otherwise, they are presented to the user.
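                # NOTE: For reference, ``total`` is an influxdb ResultSet;
                # ``series_groupby`` is the dict of tags of the current series
                # (e.g. {'project_id': 'abc'}) and each ``point`` is a dict
                # such as
                #   {'time': '2024-01-01T00:00:00Z', 'qty': 42.0, 'rate': 13.37}
                # The concrete keys depend on the requested custom fields; the
                # values above are only illustrative. Points whose fields are
                # all None come from empty time buckets and are dropped by the
                # check below.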
if [k for k in point.keys() if point[k]]: output.append(self._get_total_elem( tzutils.utc_to_local(begin), tzutils.utc_to_local(end), groupby, series_groupby, point)) return output class InfluxClientV2(InfluxClient): """Class used to facilitate interaction with InfluxDB v2 custom_fields_rgx: Regex to parse the input custom_fields and retrieve the field name, the desired alias and the aggregation function to use. It allows us to keep the same custom_fields representation for both InfluxQL and Flux queries. """ custom_fields_rgx = r'([\w_\\"]+)\(([\w_\\"]+)\) (AS|as) ' \ r'\\?"?([\w_ \\]+)"?,? ?' class FluxResponseHandler(object): """Class used to process the response of Flux queries As the Flux response splits its result set by the requested fields, we need to merge them into a single one based on their groups (tags). Using this approach we keep the response data compatible with the InfluxQL result set, where we already have the multiple result set for each field merged into a single one. """ def __init__(self, response, groupby, fields, begin, end, field_filters): self.data = response self.field_filters = field_filters self.response = {} self.begin = begin self.end = end self.groupby = groupby self.fields = fields self.process() def process(self): """This method merges all the Flux result sets into a single one. To make sure the fields filtering comply with the user's request, we need to remove the merged entries that have None value for filtered fields, we need to do that because working with fields one by one in Flux queries is more performant than working with all the fields together, but it brings some problems when we want to filter some data. E.g: We want the fields A and B, grouped by C and D, and the field A must be 2. Imagine this query for the following dataset: A : C : D B : C : D 1 : 1 : 1 5 : 1 : 1 2 : 2 : 2 6 : 2 : 2 2 : 3 : 3 7 : 3 : 3 2 : 4 : 4 The result set is going to be like: A : C : D B : C : D 2 : 2 : 2 5 : 1 : 1 2 : 3 : 3 6 : 2 : 2 2 : 4 : 4 7 : 3 : 3 And the merged value is going to be like: A : B : C : D None : 5 : 1 : 1 2 : 6 : 2 : 2 2 : 7 : 3 : 3 2 : None : 4 : 4 So, we need to remove the first undesired entry to get the correct result: A : B : C : D 2 : 6 : 2 : 2 2 : 7 : 3 : 3 2 : None : 4 : 4 """ LOG.debug("Using fields %s to process InfluxDB V2 response.", self.fields) LOG.debug("Start processing data [%s] of InfluxDB V2 API.", self.data) if self.fields == ["*"] and not self.groupby: self.process_data_wildcard() else: self.process_data_with_fields() LOG.debug("Data processed by the InfluxDB V2 backend with " "result [%s].", self.response) LOG.debug("Start sanitizing the response of Influx V2 API.") self.sanitize_filtered_entries() LOG.debug("Response sanitized [%s] for InfluxDB V2 API.", self.response) def process_data_wildcard(self): LOG.debug("Processing wildcard response for InfluxDB V2 API.") found_fields = set() for r in self.data: if self.is_header_entry(r): LOG.debug("Skipping header entry: [%s].", r) continue r_key = ''.join(sorted(r.values())) found_fields.add(r['_field']) r_value = r r_value['begin'] = self.begin r_value['end'] = self.end self.response.setdefault( r_key, r_value)[r['result']] = float(r['_value']) def process_data_with_fields(self): for r in self.data: if self.is_header_entry(r): LOG.debug("Skipping header entry: [%s].", r) continue r_key = '' r_value = {f: None for f in self.fields} r_value['begin'] = self.begin r_value['end'] = self.end for g in (self.groupby or []): val = r.get(g) r_key += val or '' r_value[g] = val 
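                # The merge key concatenates the groupby values, so rows
                # coming from the different per-field result sets that share
                # the same group are merged into a single entry; r['result']
                # holds the yield name (the field alias set in the Flux
                # query), and its value fills that column of the merged row.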
self.response.setdefault( r_key, r_value)[r['result']] = float(r['_value']) @staticmethod def is_header_entry(entry): """Check header entries. As the response contains multiple resultsets, each entry in the response CSV has its own header, which is the same for all the result sets, but the CSV parser does not ignore it and processes all headers except the first as a dict entry, so for these cases, each dict's value is going to be the same as the dict's key, so we are picking one and if it is this case, we skip it. """ return entry.get('_start') == '_start' def sanitize_filtered_entries(self): """Removes entries where filtered fields have None as value.""" for d in self.field_filters or []: for k in list(self.response.keys()): if self.response[k][d] is None: self.response.pop(k, None) def __init__(self, default_period=None): super().__init__(default_period=default_period) self.client = InfluxDBClient( url=CONF.storage_influxdb.url, timeout=CONF.storage_influxdb.query_timeout, token=CONF.storage_influxdb.token, org=CONF.storage_influxdb.org) self._conn = self.client def retrieve(self, types, filters, begin, end, offset=0, limit=1000, paginate=True): query = self.get_query(begin, end, '*', filters=filters) response = self.query(query) output = self.process_total( response, begin, end, None, '*', filters) LOG.debug("Retrieved output %s", output) results = {'results': output[ offset:offset + limit] if paginate else output} return len(output), results def delete(self, begin, end, filters): predicate = '_measurement="dataframes"' f = self.get_group_filters_query( filters, fmt=lambda x: '"' + str(x) + '"') if f: f = f.replace('==', '=').replace('and', 'AND') predicate += f'{f}' LOG.debug("InfluxDB v2 deleting elements filtering by [%s] and " "with [begin=%s, end=%s].", predicate, begin, end) delete_api = self.client.delete_api() delete_api.delete(begin, end, bucket=CONF.storage_influxdb.bucket, predicate=predicate) def process_total(self, total, begin, end, groupby, custom_fields, filters): cf = self.get_custom_fields(custom_fields) fields = list(map(lambda f: f[2], cf)) c_fields = {f[1]: f[2] for f in cf} field_filters = [c_fields[f] for f in filters if f in c_fields] handler = self.FluxResponseHandler(total, groupby, fields, begin, end, field_filters) return list(handler.response.values()) def commit(self): total_points = len(self._points) if len(self._points) < 1: return LOG.debug('Pushing {} points to InfluxDB'.format(total_points)) self.write_points(self._points, retention_policy=self._retention_policy) self._points = [] def write_points(self, points, retention_policy=None): write_api = self.client.write_api(write_options=SYNCHRONOUS) [write_api.write( bucket=CONF.storage_influxdb.bucket, record=p) for p in points] def _get_filter_query(self, filters): if not filters: return '' return ' and ' + ' and '.join( self._get_filter(k, v) for k, v in filters.items()) def get_custom_fields(self, custom_fields): if not custom_fields: return [] if custom_fields.strip() == '*': return [('*', '*', '*')] groups = [list(i.groups()) for i in re.finditer( self.custom_fields_rgx, custom_fields)] # Remove the "As|as" group that is useless. 
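        # For illustration, an input such as
        #   'SUM(qty) AS qty,SUM(price) AS rate'
        # is expected to be matched into
        #   [['SUM', 'qty', 'AS', 'qty'], ['SUM', 'price', 'AS', 'rate']]
        # and, once the third element is dropped below, to yield
        # (operation, field, alias) triples:
        #   [['SUM', 'qty', 'qty'], ['SUM', 'price', 'rate']]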
if groups: for g in groups: del g[2] return groups def get_group_filters_query(self, group_filters, fmt=lambda x: f'r.{x}'): if not group_filters: return '' get_val = (lambda x: x if isinstance(v, (int, float)) or x.isnumeric() else f'"{x}"') f = '' for k, v in group_filters.items(): if isinstance(v, (list, tuple)): if len(v) == 1: f += f' and {fmt(k)}=={get_val(v[0])}' continue f += ' and (%s)' % ' or '.join([f'{fmt(k)}=={get_val(val)}' for val in v]) continue f += f' and {fmt(k)}=={get_val(v)}' return f def get_field_filters_query(self, field_filters, fmt=lambda x: 'r["_value"]'): return self.get_group_filters_query(field_filters, fmt) def get_custom_fields_query(self, custom_fields, query, field_filters, group_filters, limit=None, groupby=None): if not groupby: groupby = [] if not custom_fields: custom_fields = 'sum(price) AS price,sum(qty) AS qty' columns_to_keep = ', '.join(map(lambda g: f'"{g}"', groupby)) columns_to_keep += ', "_field", "_value", "_start", "_stop"' new_query = '' LOG.debug("Custom fields: %s", custom_fields) LOG.debug("Custom fields processed: %s", self.get_custom_fields(custom_fields)) for operation, field, alias in self.get_custom_fields(custom_fields): LOG.debug("Generating query with operation [%s]," " field [%s] and alias [%s]", operation, field, alias) field_filter = {} if field_filters and field in field_filters: field_filter = {field: field_filters[field]} if field == '*': group_filter = self.get_group_filters_query( group_filters).replace(" and ", "", 1) filter_to_replace = f'|> filter(fn: (r) => {group_filter})' new_query += query.replace( '', filter_to_replace).replace( '', f'''|> drop(columns: ["_time"]) {'|> limit(n: ' + str(limit) + ')' if limit else ''} |> yield(name: "result")''') continue new_query += query.replace( '', f'|> filter(fn: (r) => r["_field"] == ' f'"{field}" {self.get_group_filters_query(group_filters)} ' f'{self.get_field_filters_query(field_filter)})' ).replace( '', f'''|> {operation.lower()}() |> keep(columns: [{columns_to_keep}]) |> set(key: "_field", value: "{alias}") |> yield(name: "{alias}")''') return new_query def get_groupby(self, groupby): if not groupby: return "|> group()" return f'''|> group(columns: [{','.join([f'"{g}"' for g in groupby])}])''' def get_time_range(self, begin, end): if not begin or not end: return '' return f'|> range(start: {begin.isoformat()}, stop: {end.isoformat()})' def get_query(self, begin, end, custom_fields, groupby=None, filters=None, limit=None): custom_fields_processed = list( map(lambda x: x[1], self.get_custom_fields(custom_fields))) field_filters = dict(filter( lambda f: f[0] in custom_fields_processed, filters.items())) group_filters = dict(filter( lambda f: f[0] not in field_filters, filters.items())) query = f''' from(bucket:"{CONF.storage_influxdb.bucket}") {self.get_time_range(begin, end)} |> filter(fn: (r) => r["_measurement"] == "dataframes") {self.get_groupby(groupby)} ''' LOG.debug("Field Filters: %s", field_filters) LOG.debug("Group Filters: %s", group_filters) query = self.get_custom_fields_query(custom_fields, query, field_filters, group_filters, limit, groupby) return query def query(self, query): url_base = CONF.storage_influxdb.url org = CONF.storage_influxdb.org url = f'{url_base}/api/v2/query?org={org}' response = requests.post( url=url, headers={ 'Content-type': 'application/json', 'Authorization': f'Token {CONF.storage_influxdb.token}'}, data=json.dumps({ 'query': query})) response_text = response.text LOG.debug("Raw Response: [%s].", response_text) handled_response = [] 
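        # NOTE: For illustration only, the raw Flux response is annotated CSV
        # in which every yielded result carries its own header, e.g.:
        #   ,result,table,_start,_stop,_field,_value,project_id
        #   ,qty,0,2024-01-01T00:00:00Z,2024-02-01T00:00:00Z,qty,42,abc
        # Splitting on ',result,table,' and re-prefixing each chunk below lets
        # csv.DictReader turn every result set into a list of dicts that
        # FluxResponseHandler can merge later on.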
for csv_tables in response_text.split(',result,table,'): csv_tables = ',result,table,' + csv_tables LOG.debug("Processing CSV [%s].", csv_tables) processed = list(csv.DictReader(io.StringIO(csv_tables))) LOG.debug("Processed CSV in dict [%s]", processed) handled_response.extend(processed) return handled_response def get_total(self, types, begin, end, custom_fields, groupby=None, filters=None, limit=None): LOG.debug("Query types: %s", types) if types: if not filters: filters = {} filters['type'] = types LOG.debug("Query filters: %s", filters) query = self.get_query(begin, end, custom_fields, groupby, filters, limit) LOG.debug("Executing the Flux query [%s].", query) return self.query(query) class InfluxStorage(v2_storage.BaseStorage): def __init__(self, *args, **kwargs): super(InfluxStorage, self).__init__(*args, **kwargs) self._default_period = kwargs.get('period') or CONF.collect.period if CONF.storage_influxdb.version == 2: self._conn = InfluxClientV2(default_period=self._default_period) else: self._conn = InfluxClient(default_period=self._default_period) def init(self): if CONF.storage_influxdb.version == 2: return policy = CONF.storage_influxdb.retention_policy database = CONF.storage_influxdb.database if not self._conn.retention_policy_exists(database, policy): LOG.error( 'Archive policy "{}" does not exist in database "{}"'.format( policy, database) ) def push(self, dataframes, scope_id=None): for frame in dataframes: period = tzutils.diff_seconds(frame.end, frame.start) for type_, point in frame.iterpoints(): self._conn.append_point(type_, frame.start, period, point) self._conn.commit() @staticmethod def _check_begin_end(begin, end): if not begin: begin = tzutils.get_month_start() if not end: end = tzutils.get_next_month() return tzutils.local_to_utc(begin), tzutils.local_to_utc(end) @staticmethod def _point_to_dataframe_entry(point): groupby = filter(bool, (point.pop('groupby', None) or '').split('|')) metadata = filter(bool, (point.pop('metadata', None) or '').split('|')) return dataframe.DataPoint( point['unit'], point['qty'], point['price'], {key: point.get(key, '') for key in groupby}, {key: point.get(key, '') for key in metadata}, ) def _build_dataframes(self, points): dataframes = {} for point in points: point_type = point['type'] time = tzutils.dt_from_iso(point['time']) period = point.get(PERIOD_FIELD_NAME) or self._default_period timekey = ( time, tzutils.add_delta(time, datetime.timedelta(seconds=period))) if timekey not in dataframes.keys(): dataframes[timekey] = dataframe.DataFrame( start=timekey[0], end=timekey[1]) dataframes[timekey].add_point( self._point_to_dataframe_entry(point), point_type) output = list(dataframes.values()) output.sort(key=lambda frame: (frame.start, frame.end)) return output def retrieve(self, begin=None, end=None, filters=None, metric_types=None, offset=0, limit=1000, paginate=True): begin, end = self._check_begin_end(begin, end) total, resp = self._conn.retrieve( metric_types, filters, begin, end, offset, limit, paginate) # Unfortunately, a ResultSet has no values() method, so we need to # get them manually points = [] for _, item in resp.items(): points += list(item) return { 'total': total, 'dataframes': self._build_dataframes(points) } def delete(self, begin=None, end=None, filters=None): self._conn.delete(begin, end, filters) def total(self, groupby=None, begin=None, end=None, metric_types=None, filters=None, offset=0, limit=1000, paginate=True, custom_fields="SUM(qty) AS qty, SUM(price) AS rate"): begin, end = self._check_begin_end(begin, 
end) groupby = self.parse_groupby_syntax_to_groupby_elements(groupby) total = self._conn.get_total(metric_types, begin, end, custom_fields, groupby, filters, limit) output = self._conn.process_total( total, begin, end, groupby, custom_fields, filters) groupby = _sanitized_groupby(groupby) if groupby: output.sort(key=lambda x: [x[group] or "" for group in groupby]) return { 'total': len(output), 'results': output[offset:offset + limit] if paginate else output, } ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1727866639.2554867 cloudkitty-21.0.0/cloudkitty/storage/v2/opensearch/0000775000175000017500000000000000000000000022325 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/storage/v2/opensearch/__init__.py0000664000175000017500000001576600000000000024455 0ustar00zuulzuul00000000000000# Copyright 2019 Objectif Libre # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # import datetime from oslo_config import cfg from oslo_log import log from cloudkitty import dataframe from cloudkitty.storage import v2 as v2_storage from cloudkitty.storage.v2.opensearch import client as os_client from cloudkitty.storage.v2.opensearch import exceptions from cloudkitty.utils import tz as tzutils LOG = log.getLogger(__name__) CONF = cfg.CONF OPENSEARCH_STORAGE_GROUP = 'storage_opensearch' opensearch_storage_opts = [ cfg.StrOpt( 'host', help='OpenSearch host, along with port and protocol. ' 'Defaults to http://localhost:9200', default='http://localhost:9200'), cfg.StrOpt( 'index_name', help='OpenSearch index to use. 
Defaults to "cloudkitty".', default='cloudkitty'), cfg.BoolOpt('insecure', help='Set to true to allow insecure HTTPS ' 'connections to OpenSearch', default=False), cfg.StrOpt('cafile', help='Path of the CA certificate to trust for ' 'HTTPS connections.', default=None), cfg.IntOpt('scroll_duration', help="Duration (in seconds) for which the OpenSearch scroll " "contexts should be kept alive.", advanced=True, default=30, min=0, max=300), ] CONF.register_opts(opensearch_storage_opts, OPENSEARCH_STORAGE_GROUP) CLOUDKITTY_INDEX_MAPPING = { "dynamic_templates": [ { "strings_as_keywords": { "match_mapping_type": "string", "mapping": { "type": "keyword" } } } ], "dynamic": False, "properties": { "start": {"type": "date"}, "end": {"type": "date"}, "type": {"type": "keyword"}, "unit": {"type": "keyword"}, "qty": {"type": "double"}, "price": {"type": "double"}, "groupby": {"dynamic": True, "type": "object"}, "metadata": {"dynamic": True, "type": "object"} }, } class OpenSearchStorage(v2_storage.BaseStorage): def __init__(self, *args, **kwargs): super(OpenSearchStorage, self).__init__(*args, **kwargs) verify = not CONF.storage_opensearch.insecure if verify and CONF.storage_opensearch.cafile: verify = CONF.storage_opensearch.cafile self._conn = os_client.OpenSearchClient( CONF.storage_opensearch.host, CONF.storage_opensearch.index_name, "_doc", verify=verify) def init(self): r = self._conn.get_index() if r.status_code != 200: raise exceptions.IndexDoesNotExist( CONF.storage_opensearch.index_name) LOG.info('Creating mapping "_doc" on index {}...'.format( CONF.storage_opensearch.index_name)) self._conn.post_mapping(CLOUDKITTY_INDEX_MAPPING) LOG.info('Mapping created.') def push(self, dataframes, scope_id=None): for frame in dataframes: for type_, point in frame.iterpoints(): start, end = self._local_to_utc(frame.start, frame.end) self._conn.add_point(point, type_, start, end) self._conn.commit() @staticmethod def _local_to_utc(*args): return [tzutils.local_to_utc(arg) for arg in args] @staticmethod def _doc_to_datapoint(doc): return dataframe.DataPoint( doc['unit'], doc['qty'], doc['price'], doc['groupby'], doc['metadata'], ) def _build_dataframes(self, docs): dataframes = {} nb_points = 0 for doc in docs: source = doc['_source'] start = tzutils.dt_from_iso(source['start']) end = tzutils.dt_from_iso(source['end']) key = (start, end) if key not in dataframes.keys(): dataframes[key] = dataframe.DataFrame(start=start, end=end) dataframes[key].add_point( self._doc_to_datapoint(source), source['type']) nb_points += 1 output = list(dataframes.values()) output.sort(key=lambda frame: (frame.start, frame.end)) return output def retrieve(self, begin=None, end=None, filters=None, metric_types=None, offset=0, limit=1000, paginate=True): begin, end = self._local_to_utc(begin or tzutils.get_month_start(), end or tzutils.get_next_month()) total, docs = self._conn.retrieve( begin, end, filters, metric_types, offset=offset, limit=limit, paginate=paginate) return { 'total': total, 'dataframes': self._build_dataframes(docs), } def delete(self, begin=None, end=None, filters=None): self._conn.delete_by_query(begin, end, filters) @staticmethod def _normalize_time(t): if isinstance(t, datetime.datetime): return tzutils.utc_to_local(t) return tzutils.dt_from_iso(t) def _doc_to_total_result(self, doc, start, end): output = { 'begin': self._normalize_time(doc.get('start', start)), 'end': self._normalize_time(doc.get('end', end)), 'qty': doc['sum_qty']['value'], 'rate': doc['sum_price']['value'], } # Means we had a composite 
aggregation if 'key' in doc.keys(): for key, value in doc['key'].items(): if key == 'begin' or key == 'end': # OpenSearch returns ts in milliseconds value = tzutils.dt_from_ts(value // 1000) output[key] = value return output def total(self, groupby=None, begin=None, end=None, metric_types=None, filters=None, custom_fields=None, offset=0, limit=1000, paginate=True): begin, end = self._local_to_utc(begin or tzutils.get_month_start(), end or tzutils.get_next_month()) total, docs = self._conn.total(begin, end, metric_types, filters, groupby, custom_fields=custom_fields, offset=offset, limit=limit, paginate=paginate) return { 'total': total, 'results': [self._doc_to_total_result(doc, begin, end) for doc in docs], } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/storage/v2/opensearch/client.py0000664000175000017500000003477100000000000024171 0ustar00zuulzuul00000000000000# Copyright 2019 Objectif Libre # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # import itertools from oslo_log import log import requests from cloudkitty.storage.v2.opensearch import exceptions from cloudkitty.utils import json LOG = log.getLogger(__name__) class OpenSearchClient(object): """Class used to ease interaction with OpenSearch. :param autocommit: Defaults to True. Automatically push documents to OpenSearch once chunk_size has been reached. :type autocommit: bool :param chunk_size: Maximal number of documents to commit/retrieve at once. :type chunk_size: int :param scroll_duration: Defaults to 60. Duration, in seconds, for which search contexts should be kept alive :type scroll_duration: int """ def __init__(self, url, index_name, mapping_name, verify=True, autocommit=True, chunk_size=5000, scroll_duration=60): self._url = url.strip('/') self._index_name = index_name.strip('/') self._mapping_name = mapping_name.strip('/') self._autocommit = autocommit self._chunk_size = chunk_size self._scroll_duration = str(scroll_duration) + 's' self._scroll_params = {'scroll': self._scroll_duration} self._docs = [] self._scroll_ids = set() self._sess = requests.Session() self._verify = self._sess.verify = verify self._sess.headers = {'Content-Type': 'application/json'} @staticmethod def _log_query(url, query, response): message = 'Query on {} with body "{}" took {}ms'.format( url, query, response['took']) if 'hits' in response.keys(): message += ' for {} hits'.format(response['hits']['total']) LOG.debug(message) @staticmethod def _build_must(start, end, metric_types, filters): must = [] if start: must.append({"range": {"start": {"gte": start.isoformat()}}}) if end: must.append({"range": {"end": {"lte": end.isoformat()}}}) if filters and 'type' in filters.keys(): must.append({'term': {'type': filters['type']}}) if metric_types: must.append({"terms": {"type": metric_types}}) return must @staticmethod def _build_should(filters): if not filters: return [] should = [] for k, v in filters.items(): if k != 'type': should += [{'term': {'groupby.' 
+ k: v}}, {'term': {'metadata.' + k: v}}] return should def _build_composite(self, groupby): if not groupby: return [] sources = [] for elem in groupby: if elem == 'type': sources.append({'type': {'terms': {'field': 'type.keyword'}}}) elif elem == 'time': # Not doing a date_histogram aggregation because we don't know # the period sources.append({'begin': {'terms': {'field': 'start'}}}) sources.append({'end': {'terms': {'field': 'end'}}}) else: field = 'groupby.' + elem + '.keyword' sources.append({elem: {'terms': {'field': field}}}) return {"sources": sources} @staticmethod def _build_query(must, should, composite): query = {} if must or should: query["query"] = {"bool": {}} if must: query["query"]["bool"]["must"] = must if should: query["query"]["bool"]["should"] = should # We want each term to match exactly once, and each term introduces # two "term" aggregations: one for "groupby" and one for "metadata" query["query"]["bool"]["minimum_should_match"] = len(should) // 2 if composite: query["aggs"] = {"sum_and_price": { "composite": composite, "aggregations": { "sum_price": {"sum": {"field": "price"}}, "sum_qty": {"sum": {"field": "qty"}}, } }} return query def _req(self, method, url, data, params, deserialize=True): r = method(url, data=data, params=params) if r.status_code < 200 or r.status_code >= 300: raise exceptions.InvalidStatusCode( 200, r.status_code, r.text, data) if not deserialize: return r output = r.json() self._log_query(url, data, output) return output def post_mapping(self, mapping): """Does a POST request against OpenSearch's mapping API. The POST request will be done against `//` :mapping: body of the request :type mapping: dict :rtype: requests.models.Response """ url = '/'.join( (self._url, self._index_name, self._mapping_name)) return self._req( self._sess.post, url, json.dumps(mapping), {}, deserialize=False) def get_index(self): """Does a GET request against OpenSearch's index API. The GET request will be done against `/` :rtype: requests.models.Response """ url = '/'.join((self._url, self._index_name)) return self._req(self._sess.get, url, None, None, deserialize=False) def search(self, body, scroll=True): """Does a GET request against OpenSearch's search API. The GET request will be done against `//_search` :param body: body of the request :type body: dict :rtype: dict """ url = '/'.join((self._url, self._index_name, '_search')) params = self._scroll_params if scroll else None return self._req( self._sess.get, url, json.dumps(body), params) def scroll(self, body): """Does a GET request against OpenSearch's scroll API. The GET request will be done against `/_search/scroll` :param body: body of the request :type body: dict :rtype: dict """ url = '/'.join((self._url, '_search/scroll')) return self._req(self._sess.get, url, json.dumps(body), None) def close_scroll(self, body): """Does a DELETE request against OpenSearch's scroll API. 
The DELETE request will be done against `/_search/scroll` :param body: body of the request :type body: dict :rtype: dict """ url = '/'.join((self._url, '_search/scroll')) resp = self._req( self._sess.delete, url, json.dumps(body), None, deserialize=False) body = resp.json() LOG.debug('Freed {} scrolls contexts'.format(body['num_freed'])) return body def close_scrolls(self): """Closes all scroll contexts opened by this client.""" ids = list(self._scroll_ids) LOG.debug('Closing {} scroll contexts: {}'.format(len(ids), ids)) self.close_scroll({'scroll_id': ids}) self._scroll_ids = set() def bulk_with_instruction(self, instruction, terms): """Does a POST request against OpenSearch's bulk API The POST request will be done against `//_bulk` The instruction will be appended before each term. For example, bulk_with_instruction('instr', ['one', 'two']) will produce:: instr one instr two :param instruction: instruction to execute for each term :type instruction: dict :param terms: list of terms for which instruction should be executed :type terms: collections.abc.Iterable :rtype: requests.models.Response """ instruction = json.dumps(instruction) data = '\n'.join(itertools.chain( *[(instruction, json.dumps(term)) for term in terms] )) + '\n' url = '/'.join( (self._url, self._index_name, '_bulk')) return self._req(self._sess.post, url, data, None, deserialize=False) def bulk_index(self, terms): """Indexes each of the documents in 'terms' :param terms: list of documents to index :type terms: collections.abc.Iterable """ LOG.debug("Indexing {} documents".format(len(terms))) return self.bulk_with_instruction({"index": {}}, terms) def commit(self): """Index all documents""" while self._docs: self.bulk_index(self._docs[:self._chunk_size]) self._docs = self._docs[self._chunk_size:] def add_point(self, point, type_, start, end): """Append a point to the client. :param point: DataPoint to append :type point: cloudkitty.dataframe.DataPoint :param type_: type of the DataPoint :type type_: str """ self._docs.append({ 'start': start, 'end': end, 'type': type_, 'unit': point.unit, 'qty': point.qty, 'price': point.price, 'groupby': point.groupby, 'metadata': point.metadata, }) if self._autocommit and len(self._docs) >= self._chunk_size: self.commit() def _get_chunk_size(self, offset, limit, paginate): if paginate and offset + limit < self._chunk_size: return offset + limit return self._chunk_size def retrieve(self, begin, end, filters, metric_types, offset=0, limit=1000, paginate=True): """Retrieves a paginated list of documents from OpenSearch.""" if not paginate: offset = 0 query = self._build_query( self._build_must(begin, end, metric_types, filters), self._build_should(filters), None) query['size'] = self._get_chunk_size(offset, limit, paginate) resp = self.search(query) scroll_id = resp['_scroll_id'] self._scroll_ids.add(scroll_id) total_hits = resp['hits']['total'] if isinstance(total_hits, dict): LOG.debug("Total hits [%s] is a dict. 
Therefore, we only extract " "the 'value' attribute as the total option.", total_hits) total_hits = total_hits.get("value") total = total_hits chunk = resp['hits']['hits'] output = chunk[offset:offset+limit if paginate else len(chunk)] offset = 0 if len(chunk) > offset else offset - len(chunk) while (not paginate) or len(output) < limit: resp = self.scroll({ 'scroll_id': scroll_id, 'scroll': self._scroll_duration, }) scroll_id, chunk = resp['_scroll_id'], resp['hits']['hits'] self._scroll_ids.add(scroll_id) # Means we've scrolled until the end if not chunk: break output += chunk[offset:offset+limit if paginate else len(chunk)] offset = 0 if len(chunk) > offset else offset - len(chunk) self.close_scrolls() return total, output def delete_by_query(self, begin=None, end=None, filters=None): """Does a POST request against ES's Delete By Query API. The POST request will be done against `//_delete_by_query` :param filters: Optional filters for documents to delete :type filters: list of dicts :rtype: requests.models.Response """ url = '/'.join((self._url, self._index_name, '_delete_by_query')) must = self._build_must(begin, end, None, filters) data = (json.dumps({"query": {"bool": {"must": must}}}) if must else None) return self._req(self._sess.post, url, data, None) def total(self, begin, end, metric_types, filters, groupby, custom_fields=None, offset=0, limit=1000, paginate=True): if custom_fields: LOG.warning("'custom_fields' are not implemented yet for " "OpenSearch. Therefore, the custom fields [%s] " "informed by the user will be ignored.", custom_fields) if not paginate: offset = 0 metric_types = [metric_types] if metric_types else None must = self._build_must(begin, end, metric_types, filters) should = self._build_should(filters) composite = self._build_composite(groupby) if groupby else None if composite: composite['size'] = self._chunk_size query = self._build_query(must, should, composite) if "aggs" not in query.keys(): query["aggs"] = { "sum_price": {"sum": {"field": "price"}}, "sum_qty": {"sum": {"field": "qty"}}, } query['size'] = 0 resp = self.search(query, scroll=False) # Means we didn't group, so length is 1 if not composite: return 1, [resp["aggregations"]] after = resp["aggregations"]["sum_and_price"].get("after_key") chunk = resp["aggregations"]["sum_and_price"]["buckets"] total = len(chunk) output = chunk[offset:offset+limit if paginate else len(chunk)] offset = 0 if len(chunk) > offset else offset - len(chunk) # FIXME(peschk_l): We have to iterate over ALL buckets in order to get # the total length. 
If there is a way for composite aggregations to get # the total amount of buckets, please fix this while after: composite_query = query["aggs"]["sum_and_price"]["composite"] composite_query["size"] = self._chunk_size composite_query["after"] = after resp = self.search(query, scroll=False) after = resp["aggregations"]["sum_and_price"].get("after_key") chunk = resp["aggregations"]["sum_and_price"]["buckets"] if not chunk: break output += chunk[offset:offset+limit if paginate else len(chunk)] offset = 0 if len(chunk) > offset else offset - len(chunk) total += len(chunk) if paginate: output = output[offset:offset+limit] return total, output ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/storage/v2/opensearch/exceptions.py0000664000175000017500000000227100000000000025062 0ustar00zuulzuul00000000000000# Copyright 2019 Objectif Libre # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # class BaseOpenSearchException(Exception): """Base exception raised by the OpenSearch v2 storage driver""" class InvalidStatusCode(BaseOpenSearchException): def __init__(self, expected, actual, msg, query): super(InvalidStatusCode, self).__init__( "Expected {} status code, got {}: {}. Query was {}".format( expected, actual, msg, query)) class IndexDoesNotExist(BaseOpenSearchException): def __init__(self, index_name): super(IndexDoesNotExist, self).__init__( "OpenSearch index {} does not exist".format(index_name) ) ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1727866639.2554867 cloudkitty-21.0.0/cloudkitty/storage_state/0000775000175000017500000000000000000000000021047 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/storage_state/__init__.py0000664000175000017500000004210200000000000023157 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright 2018 Objectif Libre # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
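# NOTE: A minimal usage sketch of the StateManager defined below
# (illustrative only, the scope identifier is an example):
#   state = StateManager()
#   last = state.get_last_processed_timestamp('a-scope-id')
#   state.set_last_processed_timestamp('a-scope-id', tzutils.localized_now())
# get_last_processed_timestamp() returns None for unknown scopes, while
# set_last_processed_timestamp() creates the scope if it does not exist yet.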
# from oslo_config import cfg from oslo_db.sqlalchemy import utils from oslo_log import log from sqlalchemy import or_ as or_operation from sqlalchemy import sql from cloudkitty import db from cloudkitty.storage_state import migration from cloudkitty.storage_state import models from cloudkitty.utils import tz as tzutils LOG = log.getLogger(__name__) CONF = cfg.CONF # NOTE(peschk_l): Required for defaults CONF.import_opt('backend', 'cloudkitty.fetcher', 'fetcher') CONF.import_opt('collector', 'cloudkitty.collector', 'collect') CONF.import_opt('scope_key', 'cloudkitty.collector', 'collect') def to_list_if_needed(value): if not isinstance(value, list): value = [value] return value def apply_offset_and_limit(limit, offset, q): if offset: q = q.offset(offset) if limit: q = q.limit(limit) return q class StateManager(object): """Class allowing state management in CloudKitty""" model = models.IdentifierState def get_all(self, identifier=None, fetcher=None, collector=None, scope_key=None, active=1, limit=100, offset=0): """Returns the state of all scopes. This function returns the state of all scopes with support for optional filters. :param identifier: optional scope identifiers to filter on :type identifier: list :param fetcher: optional scope fetchers to filter on :type fetcher: list :param collector: optional collectors to filter on :type collector: list :param fetcher: optional fetchers to filter on :type fetcher: list :param scope_key: optional scope_keys to filter on :type scope_key: list :param active: optional active to filter scopes by status (active/deactivated) :type active: int :param limit: optional to restrict the projection :type limit: int :param offset: optional to shift the projection :type offset: int """ with db.session_for_read() as session: q = utils.model_query(self.model, session) if identifier: q = q.filter( self.model.identifier.in_(to_list_if_needed(identifier))) if fetcher: q = q.filter( self.model.fetcher.in_(to_list_if_needed(fetcher))) if collector: q = q.filter( self.model.collector.in_(to_list_if_needed(collector))) if scope_key: q = q.filter( self.model.scope_key.in_(to_list_if_needed(scope_key))) if active is not None and active != []: q = q.filter(self.model.active.in_(to_list_if_needed(active))) q = apply_offset_and_limit(limit, offset, q) r = q.all() for item in r: item.last_processed_timestamp = tzutils.utc_to_local( item.last_processed_timestamp) item.scope_activation_toggle_date = tzutils.utc_to_local( item.scope_activation_toggle_date) return r def _get_db_item(self, session, identifier, fetcher=None, collector=None, scope_key=None): fetcher = fetcher or CONF.fetcher.backend collector = collector or CONF.collect.collector scope_key = scope_key or CONF.collect.scope_key q = utils.model_query(self.model, session) r = q.filter(self.model.identifier == identifier). \ filter(self.model.scope_key == scope_key). \ filter(self.model.fetcher == fetcher). \ filter(self.model.collector == collector). \ first() # In case the identifier exists with empty columns, update them if not r: # NOTE(peschk_l): We must use == instead of 'is' because sqlalchemy # overloads this operator r = q.filter(self.model.identifier == identifier). \ filter(self.model.scope_key == None). \ filter(self.model.fetcher == None). \ filter(self.model.collector == None). 
\ first() # noqa if r: r.scope_key = scope_key r.collector = collector r.fetcher = fetcher LOG.info('Updating identifier "{i}" with scope_key "{sk}", ' 'collector "{c}" and fetcher "{f}"'.format( i=identifier, sk=scope_key, c=collector, f=fetcher)) session.commit() return r def set_state(self, identifier, state, fetcher=None, collector=None, scope_key=None): """Set the last processed timestamp of a scope. This method is deprecated, consider using "set_last_processed_timestamp". """ LOG.warning("The method 'set_state' is deprecated. " "Consider using the new method " "'set_last_processed_timestamp'.") self.set_last_processed_timestamp( identifier, state, fetcher, collector, scope_key) def set_last_processed_timestamp( self, identifier, last_processed_timestamp, fetcher=None, collector=None, scope_key=None): """Set the last processed timestamp of a scope. If the scope does not exist yet in the database, it will create it. :param identifier: Identifier of the scope :type identifier: str :param last_processed_timestamp: last processed timestamp of the scope :type last_processed_timestamp: datetime.datetime :param fetcher: Fetcher associated to the scope :type fetcher: str :param collector: Collector associated to the scope :type collector: str :param scope_key: scope_key associated to the scope :type scope_key: str """ last_processed_timestamp = tzutils.local_to_utc( last_processed_timestamp, naive=True) with db.session_for_write() as session: r = self._get_db_item( session, identifier, fetcher, collector, scope_key) if r: if r.last_processed_timestamp != last_processed_timestamp: r.last_processed_timestamp = last_processed_timestamp session.commit() else: self.create_scope(identifier, last_processed_timestamp, fetcher=fetcher, collector=collector, scope_key=scope_key) def create_scope(self, identifier, last_processed_timestamp, fetcher=None, collector=None, scope_key=None, active=True, session=None): """Creates a scope in the database. :param identifier: Identifier of the scope :type identifier: str :param last_processed_timestamp: last processed timestamp of the scope :type last_processed_timestamp: datetime.datetime :param fetcher: Fetcher associated to the scope :type fetcher: str :param collector: Collector associated to the scope :type collector: str :param scope_key: scope_key associated to the scope :type scope_key: str :param active: indicates if the scope is active :type active: bool :param session: the current database session to be reused :type session: object """ with db.session_for_write() as session: state_object = self.model( identifier=identifier, last_processed_timestamp=last_processed_timestamp, fetcher=fetcher, collector=collector, scope_key=scope_key, active=active ) session.add(state_object) session.commit() def get_last_processed_timestamp(self, identifier, fetcher=None, collector=None, scope_key=None): """Get the last processed timestamp of a scope. 
:param identifier: Identifier of the scope :type identifier: str :param fetcher: Fetcher associated to the scope :type fetcher: str :param collector: Collector associated to the scope :type collector: str :param scope_key: scope_key associated to the scope :type scope_key: str :rtype: datetime.datetime """ with db.session_for_read() as session: r = self._get_db_item( session, identifier, fetcher, collector, scope_key) return tzutils.utc_to_local(r.last_processed_timestamp) if r else None def init(self): migration.upgrade('head') # This is made in order to stay compatible with legacy behavior but # shouldn't be used def get_tenants(self, begin=None, end=None): with db.session_for_read() as session: q = utils.model_query(self.model, session) return [tenant.identifier for tenant in q] def update_storage_scope(self, storage_scope_to_update, scope_key=None, fetcher=None, collector=None, active=None): """Update storage scope data. :param storage_scope_to_update: The storage scope to update in the DB :type storage_scope_to_update: object :param fetcher: Fetcher associated to the scope :type fetcher: str :param collector: Collector associated to the scope :type collector: str :param scope_key: scope_key associated to the scope :type scope_key: str :param active: indicates if the storage scope is active for processing :type active: bool """ with db.session_for_write() as session: db_scope = self._get_db_item(session, storage_scope_to_update.identifier, storage_scope_to_update.fetcher, storage_scope_to_update.collector, storage_scope_to_update.scope_key) if scope_key: db_scope.scope_key = scope_key if fetcher: db_scope.fetcher = fetcher if collector: db_scope.collector = collector if active is not None and active != db_scope.active: db_scope.active = active now = tzutils.localized_now() db_scope.scope_activation_toggle_date = tzutils.local_to_utc( now, naive=True) session.commit() def is_storage_scope_active(self, identifier, fetcher=None, collector=None, scope_key=None): """Checks if a storage scope is active :param identifier: Identifier of the scope :type identifier: str :param fetcher: Fetcher associated to the scope :type fetcher: str :param collector: Collector associated to the scope :type collector: str :param scope_key: scope_key associated to the scope :type scope_key: str :rtype: datetime.datetime """ with db.session_for_read() as session: r = self._get_db_item( session, identifier, fetcher, collector, scope_key) return r.active class ReprocessingSchedulerDb(object): """Class to access and operator the reprocessing scheduler in the DB""" model = models.ReprocessingScheduler def get_all(self, identifier=None, remove_finished=True, limit=100, offset=0, order="desc"): """Returns all schedules for reprocessing for a given resource :param identifier: Identifiers of the scopes :type identifier: str :param remove_finished: option to remove from the projection the reprocessing scheduled that already finished. :type remove_finished: bool :param limit: optional to restrict the projection :type limit: int :param offset: optional to shift the projection :type offset: int :param order: optional parameter to indicate the order of the projection. The ordering field will be the `id`. 
:type order: str """ with db.session_for_read() as session: query = utils.model_query(self.model, session) if identifier: query = query.filter(self.model.identifier.in_(identifier)) if remove_finished: query = self.remove_finished_processing_schedules(query) if order: query = query.order_by(sql.text("id %s" % order)) query = apply_offset_and_limit(limit, offset, query) result_set = query.all() return result_set def remove_finished_processing_schedules(self, query): return query.filter(or_operation( self.model.current_reprocess_time.is_(None), self.model.current_reprocess_time < self.model.end_reprocess_time )) def persist(self, reprocessing_scheduler): """Persists the reprocessing_schedule :param reprocessing_scheduler: reprocessing schedule that we want to persist in the database. :type reprocessing_scheduler: models.ReprocessingScheduler """ with db.session_for_write() as session: session.add(reprocessing_scheduler) session.commit() def get_from_db(self, identifier=None, start_reprocess_time=None, end_reprocess_time=None): """Get the reprocessing schedule from DB :param identifier: Identifier of the scope :type identifier: str :param start_reprocess_time: the start time used in the reprocessing schedule :type start_reprocess_time: datetime.datetime :param end_reprocess_time: the end time used in the reprocessing schedule :type end_reprocess_time: datetime.datetime """ with db.session_for_read() as session: result_set = self._get_db_item( end_reprocess_time, identifier, session, start_reprocess_time) return result_set def _get_db_item(self, end_reprocess_time, identifier, session, start_reprocess_time): query = utils.model_query(self.model, session) query = query.filter(self.model.identifier == identifier) query = query.filter( self.model.start_reprocess_time == start_reprocess_time) query = query.filter( self.model.end_reprocess_time == end_reprocess_time) query = self.remove_finished_processing_schedules(query) return query.first() def update_reprocessing_time(self, identifier=None, start_reprocess_time=None, end_reprocess_time=None, new_current_time_stamp=None): """Update current processing time for a reprocessing schedule :param identifier: Identifier of the scope :type identifier: str :param start_reprocess_time: the start time used in the reprocessing schedule :type start_reprocess_time: datetime.datetime :param end_reprocess_time: the end time used in the reprocessing schedule :type end_reprocess_time: datetime.datetime :param new_current_time_stamp: the new current timestamp to set :type new_current_time_stamp: datetime.datetime """ with db.session_for_write() as session: result_set = self._get_db_item( end_reprocess_time, identifier, session, start_reprocess_time) if not result_set: LOG.warning("Trying to update current time to [%s] for " "identifier [%s] and reprocessing range [start=%, " "end=%s], but we could not find a this task in the" " database.", new_current_time_stamp, identifier, start_reprocess_time, end_reprocess_time) return new_current_time_stamp = tzutils.local_to_utc( new_current_time_stamp, naive=True) result_set.current_reprocess_time = new_current_time_stamp session.commit() ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1727866639.2554867 cloudkitty-21.0.0/cloudkitty/storage_state/alembic/0000775000175000017500000000000000000000000022443 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 
cloudkitty-21.0.0/cloudkitty/storage_state/alembic/env.py0000664000175000017500000000154600000000000023613 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright 2018 Objectif Libre # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # from cloudkitty.common.db.alembic import env # noqa from cloudkitty.storage_state import models target_metadata = models.Base.metadata version_table = 'storage_states_alembic' env.run_migrations_online(target_metadata, version_table) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/storage_state/alembic/script.py.mako0000664000175000017500000000075600000000000025257 0ustar00zuulzuul00000000000000"""${message} Revision ID: ${up_revision} Revises: ${down_revision | comma,n} Create Date: ${create_date} """ from alembic import op import sqlalchemy as sa ${imports if imports else ""} # revision identifiers, used by Alembic. revision = ${repr(up_revision)} down_revision = ${repr(down_revision)} branch_labels = ${repr(branch_labels)} depends_on = ${repr(depends_on)} def upgrade(): ${upgrades if upgrades else "pass"} def downgrade(): ${downgrades if downgrades else "pass"} ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1727866639.2554867 cloudkitty-21.0.0/cloudkitty/storage_state/alembic/versions/0000775000175000017500000000000000000000000024313 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000020600000000000011453 xustar0000000000000000112 path=cloudkitty-21.0.0/cloudkitty/storage_state/alembic/versions/4d69395f_add_storage_scope_state_fields.py 22 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/storage_state/alembic/versions/4d69395f_add_storage_scope_state_fields.0000664000175000017500000000335200000000000033667 0ustar00zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Update storage state constraint Revision ID: 4d69395f Revises: 750d3050cf71 Create Date: 2019-05-15 17:02:56.595274 """ import importlib import sqlalchemy from alembic import op # revision identifiers, used by Alembic. revision = '4d69395f' down_revision = '750d3050cf71' def upgrade(): down_version_module = importlib.import_module( "cloudkitty.storage_state.alembic.versions." 
"750d3050_create_last_processed_timestamp_column") for name, table in down_version_module.Base.metadata.tables.items(): if name == 'cloudkitty_storage_states': with op.batch_alter_table(name, copy_from=table, recreate='always') as batch_op: batch_op.add_column( sqlalchemy.Column('scope_activation_toggle_date', sqlalchemy.DateTime, nullable=False, server_default=sqlalchemy.sql.func.now()) ) batch_op.add_column( sqlalchemy.Column('active', sqlalchemy.Boolean, nullable=False, default=True)) break ././@PaxHeader0000000000000000000000000000021600000000000011454 xustar0000000000000000120 path=cloudkitty-21.0.0/cloudkitty/storage_state/alembic/versions/750d3050_create_last_processed_timestamp_column.py 22 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/storage_state/alembic/versions/750d3050_create_last_processed_timestamp0000664000175000017500000000547500000000000033740 0ustar00zuulzuul00000000000000# Copyright 2019 Objectif Libre # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Create last processed timestamp column Revision ID: 750d3050cf71 Revises: d9d103dd4dcf Create Date: 2021-02-08 17:00:00.000 """ from alembic import op import sqlalchemy from sqlalchemy.ext import declarative from oslo_db.sqlalchemy import models from cloudkitty.storage_state.alembic.versions import \ c50ed2c19204_update_storage_state_constraint as down_version_module # revision identifiers, used by Alembic. 
revision = '750d3050cf71' down_revision = 'c50ed2c19204' branch_labels = None depends_on = None Base = declarative.declarative_base() def upgrade(): for name, table in down_version_module.Base.metadata.tables.items(): if name == 'cloudkitty_storage_states': with op.batch_alter_table(name, copy_from=table, recreate='always') as batch_op: batch_op.alter_column( 'state', new_column_name='last_processed_timestamp') break class IdentifierTableForThisDataBaseModelChangeSet(Base, models.ModelBase): """Represents the state of a given identifier.""" @declarative.declared_attr def __table_args__(cls): return ( sqlalchemy.schema.UniqueConstraint( 'identifier', 'scope_key', 'collector', 'fetcher', name='uq_cloudkitty_storage_states_identifier'), ) __tablename__ = 'cloudkitty_storage_states' id = sqlalchemy.Column(sqlalchemy.Integer, primary_key=True) identifier = sqlalchemy.Column(sqlalchemy.String(256), nullable=False, unique=False) scope_key = sqlalchemy.Column(sqlalchemy.String(40), nullable=True, unique=False) fetcher = sqlalchemy.Column(sqlalchemy.String(40), nullable=True, unique=False) collector = sqlalchemy.Column(sqlalchemy.String(40), nullable=True, unique=False) last_processed_timestamp = sqlalchemy.Column( sqlalchemy.DateTime, nullable=False) ././@PaxHeader0000000000000000000000000000020500000000000011452 xustar0000000000000000111 path=cloudkitty-21.0.0/cloudkitty/storage_state/alembic/versions/9feccd32_create_reprocessing_scheduler.py 22 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/storage_state/alembic/versions/9feccd32_create_reprocessing_scheduler.p0000664000175000017500000000310600000000000034142 0ustar00zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Create reprocessing scheduler table Revision ID: 9feccd32 Revises: 4d69395f Create Date: 2021-06-04 16:27:00.595274 """ from alembic import op import sqlalchemy # revision identifiers, used by Alembic. revision = '9feccd32' down_revision = '4d69395f' def upgrade(): op.create_table( 'storage_scope_reprocessing_schedule', sqlalchemy.Column('id', sqlalchemy.Integer, primary_key=True), sqlalchemy.Column('identifier', sqlalchemy.String(length=256), nullable=False, unique=False), sqlalchemy.Column('start_reprocess_time', sqlalchemy.DateTime, nullable=False), sqlalchemy.Column('end_reprocess_time', sqlalchemy.DateTime, nullable=False), sqlalchemy.Column('current_reprocess_time', sqlalchemy.DateTime, nullable=True), sqlalchemy.Column('reason', sqlalchemy.Text, nullable=False), sqlalchemy.PrimaryKeyConstraint('id'), mysql_charset='utf8', mysql_engine='InnoDB' ) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/storage_state/alembic/versions/c14eea9d3cc1_initial.py0000664000175000017500000000256600000000000030360 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright 2018 Objectif Libre # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # """Initial Revision ID: c14eea9d3cc1 Revises: Create Date: 2018-04-20 14:27:11.434366 """ from alembic import op import sqlalchemy as sa # revision identifiers, used by Alembic. revision = 'c14eea9d3cc1' down_revision = None branch_labels = None depends_on = None def upgrade(): op.create_table( 'cloudkitty_storage_states', sa.Column('id', sa.Integer(), nullable=False), sa.Column('identifier', sa.String(length=40), nullable=False, unique=True), sa.Column('state', sa.DateTime(), nullable=False), sa.PrimaryKeyConstraint('id'), mysql_charset='utf8', mysql_engine='InnoDB' ) def downgrade(): op.drop_table('cloudkitty_storage_states') ././@PaxHeader0000000000000000000000000000021300000000000011451 xustar0000000000000000117 path=cloudkitty-21.0.0/cloudkitty/storage_state/alembic/versions/c50ed2c19204_update_storage_state_constraint.py 22 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/storage_state/alembic/versions/c50ed2c19204_update_storage_state_constr0000664000175000017500000000540600000000000033646 0ustar00zuulzuul00000000000000# Copyright 2019 Objectif Libre # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Update storage state constraint Revision ID: c50ed2c19204 Revises: d9d103dd4dcf Create Date: 2019-05-15 17:02:56.595274 """ from alembic import op from oslo_db.sqlalchemy import models import sqlalchemy from sqlalchemy.ext import declarative # revision identifiers, used by Alembic. 
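# NOTE: the throwaway declarative model defined at the end of this module
# (IdentifierTableForThisDataBaseModelChangeSet) exists only so that
# op.batch_alter_table(copy_from=...) below has a complete table definition
# to rebuild cloudkitty_storage_states from; SQLite cannot add constraints
# through ALTER TABLE, so batch mode recreates the table instead. On
# backends that do support it, the same change could be expressed directly
# (illustrative sketch only, not what this migration actually runs):
#
#     op.create_unique_constraint(
#         'uq_cloudkitty_storage_states_identifier',
#         'cloudkitty_storage_states',
#         ['identifier', 'scope_key', 'collector', 'fetcher'])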
revision = 'c50ed2c19204' down_revision = 'd9d103dd4dcf' branch_labels = None depends_on = None Base = declarative.declarative_base() def upgrade(): for name, table in Base.metadata.tables.items(): if name == 'cloudkitty_storage_states': with op.batch_alter_table(name, copy_from=table, recreate='auto') as batch_op: batch_op.alter_column('identifier') batch_op.create_unique_constraint( 'uq_cloudkitty_storage_states_identifier', ['identifier', 'scope_key', 'collector', 'fetcher']) break class IdentifierTableForThisDataBaseModelChangeSet(Base, models.ModelBase): """Represents the state of a given identifier.""" @declarative.declared_attr def __table_args__(cls): return ( sqlalchemy.schema.UniqueConstraint( 'identifier', 'scope_key', 'collector', 'fetcher', name='uq_cloudkitty_storage_states_identifier'), ) __tablename__ = 'cloudkitty_storage_states' id = sqlalchemy.Column(sqlalchemy.Integer, primary_key=True) identifier = sqlalchemy.Column(sqlalchemy.String(256), nullable=False, unique=False) scope_key = sqlalchemy.Column(sqlalchemy.String(40), nullable=True, unique=False) fetcher = sqlalchemy.Column(sqlalchemy.String(40), nullable=True, unique=False) collector = sqlalchemy.Column(sqlalchemy.String(40), nullable=True, unique=False) state = sqlalchemy.Column(sqlalchemy.DateTime, nullable=False) ././@PaxHeader0000000000000000000000000000021000000000000011446 xustar0000000000000000114 path=cloudkitty-21.0.0/cloudkitty/storage_state/alembic/versions/d9d103dd4dcf_add_state_management_columns.py 22 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/storage_state/alembic/versions/d9d103dd4dcf_add_state_management_column0000664000175000017500000000212700000000000033776 0ustar00zuulzuul00000000000000# Copyright 2019 Objectif Libre # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Add details to state management Revision ID: d9d103dd4dcf Revises: c14eea9d3cc1 Create Date: 2019-02-07 13:59:39.294277 """ from alembic import op import sqlalchemy as sa # revision identifiers, used by Alembic. revision = 'd9d103dd4dcf' down_revision = 'c14eea9d3cc1' branch_labels = None depends_on = None def upgrade(): for column_name in ('scope_key', 'collector', 'fetcher'): op.add_column( 'cloudkitty_storage_states', sa.Column(column_name, sa.String(length=40), nullable=True)) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/storage_state/migration.py0000664000175000017500000000240300000000000023411 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright 2014 Objectif Libre # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
See the # License for the specific language governing permissions and limitations # under the License. # import os from cloudkitty.common.db.alembic import migration ALEMBIC_REPO = os.path.join(os.path.dirname(__file__), 'alembic') def upgrade(revision): config = migration.load_alembic_config(ALEMBIC_REPO) return migration.upgrade(config, revision) def version(): config = migration.load_alembic_config(ALEMBIC_REPO) return migration.version(config) def revision(message, autogenerate): config = migration.load_alembic_config(ALEMBIC_REPO) return migration.revision(config, message, autogenerate) def stamp(revision): config = migration.load_alembic_config(ALEMBIC_REPO) return migration.stamp(config, revision) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/storage_state/models.py0000664000175000017500000000777300000000000022722 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright 2018 Objectif Libre # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # from oslo_db.sqlalchemy import models import sqlalchemy from sqlalchemy.ext import declarative Base = declarative.declarative_base() def to_string_selected_fields(object_to_print, fields=[]): object_to_return = {} if object_to_print: object_to_return = { a: y for a, y in object_to_print.items() if a in fields} return str(object_to_return) class IdentifierState(Base, models.ModelBase): """Represents the state of a given identifier.""" @declarative.declared_attr def __table_args__(cls): return ( sqlalchemy.schema.UniqueConstraint( 'identifier', 'scope_key', 'collector', 'fetcher', name='uq_cloudkitty_storage_states_identifier'), ) __tablename__ = 'cloudkitty_storage_states' id = sqlalchemy.Column(sqlalchemy.Integer, primary_key=True) identifier = sqlalchemy.Column(sqlalchemy.String(256), nullable=False, unique=False) scope_key = sqlalchemy.Column(sqlalchemy.String(40), nullable=True, unique=False) fetcher = sqlalchemy.Column(sqlalchemy.String(40), nullable=True, unique=False) collector = sqlalchemy.Column(sqlalchemy.String(40), nullable=True, unique=False) last_processed_timestamp = sqlalchemy.Column( sqlalchemy.DateTime, nullable=False) scope_activation_toggle_date = sqlalchemy.Column( 'scope_activation_toggle_date', sqlalchemy.DateTime, nullable=False, server_default=sqlalchemy.sql.func.now()) active = sqlalchemy.Column('active', sqlalchemy.Boolean, nullable=False, default=True) def __str__(self): return to_string_selected_fields( self, ['id', 'identifier', 'state', 'active']) class ReprocessingScheduler(Base, models.ModelBase): """Represents the reprocessing scheduler table.""" @declarative.declared_attr def __table_args__(cls): return ( sqlalchemy.schema.PrimaryKeyConstraint('id'), ) __tablename__ = 'storage_scope_reprocessing_schedule' id = sqlalchemy.Column("id", sqlalchemy.Integer, primary_key=True) reason = sqlalchemy.Column("reason", sqlalchemy.Text, nullable=False) identifier = sqlalchemy.Column("identifier", sqlalchemy.String(256), nullable=False, 
unique=False) start_reprocess_time = sqlalchemy.Column("start_reprocess_time", sqlalchemy.DateTime, nullable=False) end_reprocess_time = sqlalchemy.Column("end_reprocess_time", sqlalchemy.DateTime, nullable=False) current_reprocess_time = sqlalchemy.Column("current_reprocess_time", sqlalchemy.DateTime, nullable=True) def __str__(self): return to_string_selected_fields( self, ['id', 'identifier', 'start_reprocess_time', 'end_reprocess_time', 'current_reprocess_time']) ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1727866639.2594867 cloudkitty-21.0.0/cloudkitty/tests/0000775000175000017500000000000000000000000017345 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/tests/__init__.py0000664000175000017500000000610400000000000021457 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright 2014 Objectif Libre # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # import decimal from unittest import mock import flask from keystoneauth1 import session as ks_sess from oslo_config import fixture as config_fixture from oslotest import base import testscenarios from cloudkitty import collector from cloudkitty import db from cloudkitty.db import api as ck_db_api from cloudkitty import rating class FakeCollectorModule(collector.BaseCollector): collector_name = 'test_fake' dependencies = tuple() def __init__(self): super(FakeCollectorModule, self).__init__([], period=3600) class FakeRatingModule(rating.RatingProcessorBase): module_name = 'fake' description = 'fake rating module' def __init__(self, tenant_id=None): super(FakeRatingModule, self).__init__() def quote(self, data): self.process(data) def process(self, data): for cur_data in data: cur_usage = cur_data['usage'] for service in cur_usage: for entry in cur_usage[service]: if 'rating' not in entry: entry['rating'] = {'price': decimal.Decimal(0)} return data def reload_config(self): pass def notify_reload(self): pass class TestCase(testscenarios.TestWithScenarios, base.BaseTestCase): scenarios = [ ('sqlite', dict(db_url='sqlite:///')) ] def setUp(self): super(TestCase, self).setUp() self._conf_fixture = self.useFixture(config_fixture.Config()) self.conf = self._conf_fixture.conf self.conf.set_override('connection', self.db_url, 'database') self.conn = ck_db_api.get_instance() migration = self.conn.get_migration() migration.upgrade('head') auth = mock.patch( 'keystoneauth1.loading.load_auth_from_conf_options', return_value=dict()) auth.start() self.auth = auth session = mock.patch( 'keystoneauth1.loading.load_session_from_conf_options', return_value=ks_sess.Session()) session.start() self.session = session self.app = flask.Flask('cloudkitty') self.app_context = self.app.test_request_context() self.app_context.push() def tearDown(self): with db.session_for_write() as session: engine = session.get_bind() engine.dispose() self.auth.stop() self.session.stop() self.app_context.pop() super(TestCase, 
self).tearDown() ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1727866639.2594867 cloudkitty-21.0.0/cloudkitty/tests/api/0000775000175000017500000000000000000000000020116 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/tests/api/__init__.py0000664000175000017500000000000000000000000022215 0ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1727866639.2594867 cloudkitty-21.0.0/cloudkitty/tests/api/v1/0000775000175000017500000000000000000000000020444 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/tests/api/v1/__init__.py0000664000175000017500000000000000000000000022543 0ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/tests/api/v1/test_summary.py0000664000175000017500000000241100000000000023550 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright 2017 Objectif Libre # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # """Test SummaryModel objects.""" from oslotest import base from cloudkitty.api.v1.datamodels import report class TestSummary(base.BaseTestCase): def setUp(self): super(TestSummary, self).setUp() def test_nulls(self): s = report.SummaryModel(begin=None, end=None, tenant_id=None, res_type=None, rate=None) self.assertIsNone(s.begin) self.assertIsNone(s.end) self.assertEqual(s.tenant_id, "ALL") self.assertEqual(s.res_type, "ALL") self.assertEqual(s.rate, "0") ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/tests/api/v1/test_types.py0000664000175000017500000000372400000000000023227 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright 2017 Objectif Libre # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
# """Test cloudkitty/api/v1/types.""" from oslotest import base from wsme import types as wtypes from cloudkitty.api.v1 import types class TestTypes(base.BaseTestCase): def setUp(self): super(TestTypes, self).setUp() self.uuidtype = types.UuidType self.multitype = types.MultiType(wtypes.text, int, float, dict) def test_valid_uuid_values(self): valid_values = ['7977999e-2e25-11e6-a8b2-df30b233ffcb', 'ac55b000-a05b-4832-b2ff-265a034886ab', '39dbd39d-f663-4444-a795-fb19d81af136'] for valid_value in valid_values: self.uuidtype.validate(valid_value) def test_invalid_uuid_values(self): invalid_values = ['dxwegycw', '1234567', '#@%^&$*!'] for invalid_value in invalid_values: self.assertRaises(ValueError, self.uuidtype.validate, invalid_value) def test_valid_multi_values(self): valid_values = ['string_value', 123, 23.4, {'key': 'value'}] for valid_value in valid_values: self.multitype.validate(valid_value) def test_invalid_multi_values(self): invalid_values = [[1, 2, 3], ('a', 'b', 'c')] for invalid_value in invalid_values: self.assertRaises(ValueError, self.multitype.validate, invalid_value) ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1727866639.2594867 cloudkitty-21.0.0/cloudkitty/tests/api/v2/0000775000175000017500000000000000000000000020445 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/tests/api/v2/__init__.py0000664000175000017500000000000000000000000022544 0ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1727866639.2594867 cloudkitty-21.0.0/cloudkitty/tests/api/v2/dataframes/0000775000175000017500000000000000000000000022554 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/tests/api/v2/dataframes/__init__.py0000664000175000017500000000000000000000000024653 0ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/tests/api/v2/dataframes/test_dataframes.py0000664000175000017500000000352600000000000026302 0ustar00zuulzuul00000000000000# Copyright 2019 Objectif Libre # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
# import flask from unittest import mock from cloudkitty.api.v2.dataframes import dataframes from cloudkitty import tests from cloudkitty.utils import tz as tzutils class TestDataframeListEndpoint(tests.TestCase): def setUp(self): super(TestDataframeListEndpoint, self).setUp() self.endpoint = dataframes.DataFrameList() def test_non_admin_request_is_filtered_on_project_id(self): policy_mock = mock.patch('cloudkitty.common.policy.authorize') flask.request.context = mock.Mock() flask.request.context.project_id = 'test-project' flask.request.context.is_admin = False with mock.patch.object(self.endpoint._storage, 'retrieve') as ret_mock: with policy_mock, mock.patch('flask.request.args.lists') as fmock: ret_mock.return_value = {'total': 42, 'dataframes': []} fmock.return_value = [] self.endpoint.get() ret_mock.assert_called_once_with( begin=tzutils.get_month_start(), end=tzutils.get_next_month(), metric_types=None, filters={'project_id': 'test-project'}, offset=0, limit=100, ) ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1727866639.2594867 cloudkitty-21.0.0/cloudkitty/tests/api/v2/summary/0000775000175000017500000000000000000000000022142 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/tests/api/v2/summary/__init__.py0000664000175000017500000000000000000000000024241 0ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/tests/api/v2/summary/test_summary.py0000664000175000017500000000701600000000000025254 0ustar00zuulzuul00000000000000# Copyright 2019 Objectif Libre # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
# import flask import uuid from unittest import mock import voluptuous from cloudkitty.api.v2.summary import summary from cloudkitty import tests from cloudkitty.utils import tz as tzutils class TestSummaryEndpoint(tests.TestCase): def setUp(self): super(TestSummaryEndpoint, self).setUp() self.endpoint = summary.Summary() def test_type_filter_is_passed_separately(self): policy_mock = mock.patch('cloudkitty.common.policy.authorize') flask.request.context = mock.Mock() flask.request.context.project_id = str(uuid.uuid4()) flask.request.context.is_admin = True with mock.patch.object(self.endpoint._storage, 'total') as total_mock: with policy_mock, mock.patch('flask.request.args.lists') as fmock: total_mock.return_value = {'total': 0, 'results': []} fmock.return_value = [ ('filters', 'a:b,type:awesome')] self.endpoint.get() total_mock.assert_called_once_with( begin=tzutils.get_month_start(), end=tzutils.get_next_month(), groupby=None, filters={'a': ['b']}, metric_types=['awesome'], offset=0, limit=100, paginate=True, ) def test_invalid_response_type(self): self.assertRaises(voluptuous.Invalid, self.endpoint.get, response_format="INVALID_RESPONSE_TYPE") def test_generate_response_table_response_type(self): objects = [{"a1": "obj1", "a2": "value1"}, {"a1": "obj2", "a2": "value2"}] total = {'total': len(objects), 'results': objects} response = self.endpoint.generate_response( summary.TABLE_RESPONSE_FORMAT, total) self.assertIn('total', response) self.assertIn('results', response) self.assertIn('columns', response) self.assertEqual(len(objects), response['total']) self.assertEqual(list(objects[0].keys()), response['columns']) self.assertEqual( [list(res.values()) for res in objects], response['results']) self.assertEqual(summary.TABLE_RESPONSE_FORMAT, response['format']) def test_generate_response_object_response_type(self): objects = [{"a1": "obj1", "a2": "value1"}, {"a1": "obj2", "a2": "value2"}] total = {'total': len(objects), 'results': objects} response = self.endpoint.generate_response( summary.OBJECT_RESPONSE_FORMAT, total) self.assertIn('total', response) self.assertIn('results', response) self.assertNotIn('columns', response) self.assertEqual(len(objects), response['total']) self.assertEqual(objects, response['results']) self.assertEqual(summary.OBJECT_RESPONSE_FORMAT, response['format']) ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1727866639.2594867 cloudkitty-21.0.0/cloudkitty/tests/api/v2/task/0000775000175000017500000000000000000000000021407 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/tests/api/v2/task/__init__.py0000664000175000017500000000000000000000000023506 0ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/tests/api/v2/task/test_reprocess.py0000664000175000017500000004515100000000000025033 0ustar00zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
See the # License for the specific language governing permissions and limitations # under the License. # import datetime import re from unittest import mock from datetimerange import DateTimeRange from werkzeug import exceptions as http_exceptions from cloudkitty.api.v2.task import reprocess from cloudkitty import tests from cloudkitty.utils import tz as tzutils class TestReprocessSchedulerPostApi(tests.TestCase): def setUp(self): super(TestReprocessSchedulerPostApi, self).setUp() self.endpoint = reprocess.ReprocessSchedulerPostApi() self.scope_ids = ["some-other-scope-id", "5e56cb64-4980-4466-9fce-d0133c0c221e"] self.start_reprocess_time = self.endpoint.get_valid_period_date( tzutils.localized_now()) self.end_reprocess_time = self.endpoint.get_valid_period_date( self.start_reprocess_time + datetime.timedelta(hours=1)) self.reason = "We are testing the reprocess API." def test_validate_scope_ids_all_option_with_scope_ids(self): self.scope_ids.append('ALL') expected_message = \ "400 Bad Request: Cannot use 'ALL' with scope ID [['some-other-" \ "scope-id', '5e56cb64-4980-4466-9fce-d0133c0c221e', 'ALL']]. " \ "Either schedule a reprocessing for all active scopes using " \ "'ALL' option, or inform only the scopes you desire to schedule " \ "a reprocessing." expected_message = re.escape(expected_message) self.assertRaisesRegex(http_exceptions.BadRequest, expected_message, self.endpoint.validate_scope_ids, self.scope_ids) self.scope_ids.remove('ALL') self.endpoint.validate_scope_ids(self.scope_ids) def test_validate_inputs_blank_reason(self): expected_message = \ "400 Bad Request: Empty or blank reason text is not allowed. " \ "Please, do inform/register the reason for the reprocessing of " \ "a previously processed timestamp." expected_message = re.escape(expected_message) self.assertRaisesRegex(http_exceptions.BadRequest, expected_message, self.endpoint.validate_inputs, self.end_reprocess_time, "", self.scope_ids, self.start_reprocess_time) self.assertRaisesRegex( http_exceptions.BadRequest, expected_message, self.endpoint.validate_inputs, self.end_reprocess_time, " ", self.scope_ids, self.start_reprocess_time) self.endpoint.validate_inputs( self.end_reprocess_time, self.reason, self.scope_ids, self.start_reprocess_time) def test_validate_inputs_end_date_less_than_start_date(self): original_end_reprocess_time = self.end_reprocess_time self.end_reprocess_time =\ self.start_reprocess_time - datetime.timedelta(hours=1) expected_message = \ "400 Bad Request: End reprocessing timestamp [%s] cannot be " \ "less than start reprocessing timestamp [%s]." % ( self.start_reprocess_time, self.end_reprocess_time) expected_message = re.escape(expected_message) self.assertRaisesRegex(http_exceptions.BadRequest, expected_message, self.endpoint.validate_inputs, self.end_reprocess_time, self.reason, self.scope_ids, self.start_reprocess_time) self.end_reprocess_time = original_end_reprocess_time self.endpoint.validate_inputs( self.end_reprocess_time, self.reason, self.scope_ids, self.start_reprocess_time) def test_validate_inputs_different_from_configured_period(self): original_end_reprocess_time = self.end_reprocess_time self.end_reprocess_time += datetime.timedelta(seconds=1) expected_message = "400 Bad Request: The provided reprocess time " \ "window does not comply with the configured" \ " collector period. 
A valid time window near " \ "the provided one is ['%s', '%s']" % ( self.start_reprocess_time, original_end_reprocess_time) expected_message = re.escape(expected_message) self.assertRaisesRegex(http_exceptions.BadRequest, expected_message, self.endpoint.validate_inputs, self.end_reprocess_time, self.reason, self.scope_ids, self.start_reprocess_time) self.end_reprocess_time = original_end_reprocess_time self.endpoint.validate_inputs( self.end_reprocess_time, self.reason, self.scope_ids, self.start_reprocess_time) def test_validate_time_window_smaller_than_configured_period(self): start = datetime.datetime(year=2022, day=22, month=2, hour=10, minute=10, tzinfo=tzutils._LOCAL_TZ) end = datetime.datetime(year=2022, day=22, month=2, hour=10, minute=20, tzinfo=tzutils._LOCAL_TZ) expected_start = datetime.datetime(year=2022, day=22, month=2, hour=10, tzinfo=tzutils._LOCAL_TZ) expected_end = datetime.datetime(year=2022, day=22, month=2, hour=11, tzinfo=tzutils._LOCAL_TZ) expected_message = "400 Bad Request: The provided reprocess time " \ "window does not comply with the configured" \ " collector period. A valid time window near " \ "the provided one is ['%s', '%s']" % ( expected_start, expected_end) expected_message = re.escape(expected_message) self.assertRaisesRegex(http_exceptions.BadRequest, expected_message, self.endpoint.validate_inputs, end, self.reason, self.scope_ids, start) def test_check_if_there_are_invalid_scopes(self): all_scopes = self.generate_all_scopes_object() element_removed = all_scopes.pop(0) expected_message = \ "400 Bad Request: Scopes [\'%s\'] scheduled to reprocess "\ "[start=%s, end=%s] do not exist."\ % (element_removed.identifier, self.start_reprocess_time, self.end_reprocess_time) expected_message = re.escape(expected_message) self.assertRaisesRegex( http_exceptions.BadRequest, expected_message, self.endpoint.check_if_there_are_invalid_scopes, all_scopes, self.end_reprocess_time, self.scope_ids, self.start_reprocess_time) all_scopes.append(element_removed) self.endpoint.check_if_there_are_invalid_scopes( all_scopes, self.end_reprocess_time, self.scope_ids, self.start_reprocess_time) def generate_all_scopes_object(self, last_processed_time=None): all_scopes = [] def mock_to_string(self): return "toStringMock" for s in self.scope_ids: scope = mock.Mock() scope.identifier = s scope.last_processed_timestamp = last_processed_time scope.__str__ = mock_to_string all_scopes.append(scope) return all_scopes @mock.patch("cloudkitty.storage_state.ReprocessingSchedulerDb.get_all") def test_validate_reprocessing_schedules_overlaps( self, schedule_get_all_mock): self.configure_and_execute_overlap_test(schedule_get_all_mock, self.start_reprocess_time, self.end_reprocess_time) self.configure_and_execute_overlap_test(schedule_get_all_mock, self.end_reprocess_time, self.start_reprocess_time) end_reprocess_time =\ self.end_reprocess_time + datetime.timedelta(hours=5) self.configure_and_execute_overlap_test(schedule_get_all_mock, self.start_reprocess_time, end_reprocess_time) start_reprocess_time =\ self.start_reprocess_time + datetime.timedelta(hours=1) self.configure_and_execute_overlap_test(schedule_get_all_mock, start_reprocess_time, end_reprocess_time) start_reprocess_time =\ self.start_reprocess_time - datetime.timedelta(hours=1) self.configure_and_execute_overlap_test(schedule_get_all_mock, start_reprocess_time, end_reprocess_time) start_reprocess_time =\ self.end_reprocess_time + datetime.timedelta(hours=1) self.configure_schedules_mock(schedule_get_all_mock, 
start_reprocess_time, end_reprocess_time) self.endpoint.validate_reprocessing_schedules_overlaps( self.generate_all_scopes_object(), self.end_reprocess_time, self.start_reprocess_time) schedule_get_all_mock.assert_has_calls([ mock.call(identifier=[self.scope_ids[0]]), mock.call(identifier=[self.scope_ids[1]])]) def configure_and_execute_overlap_test(self, schedule_get_all_mock, start_reprocess_time, end_reprocess_time): self.configure_schedules_mock( schedule_get_all_mock, start_reprocess_time, end_reprocess_time) scheduling_range = DateTimeRange( tzutils.utc_to_local(self.start_reprocess_time), tzutils.utc_to_local(self.end_reprocess_time)) scheduled_range = DateTimeRange( tzutils.local_to_utc(start_reprocess_time), tzutils.local_to_utc(end_reprocess_time)) expected_message = \ "400 Bad Request: Cannot schedule a reprocessing for scope " \ "[toStringMock] for reprocessing time [%s], because it already " \ "has a schedule for a similar time range [%s]." \ % (scheduling_range, scheduled_range) expected_message = re.escape(expected_message) self.assertRaisesRegex( http_exceptions.BadRequest, expected_message, self.endpoint.validate_reprocessing_schedules_overlaps, self.generate_all_scopes_object(), self.end_reprocess_time, self.start_reprocess_time) schedule_get_all_mock.assert_called_with( identifier=[self.scope_ids[0]]) def configure_schedules_mock(self, schedule_get_all_mock, start_reprocess_time, end_reprocess_time): schedules = [] schedule_get_all_mock.return_value = schedules all_scopes = self.generate_all_scopes_object() for s in all_scopes: schedule_mock = mock.Mock() schedule_mock.identifier = s.identifier schedule_mock.start_reprocess_time = start_reprocess_time schedule_mock.end_reprocess_time = end_reprocess_time schedules.append(schedule_mock) def test_validate_start_end_for_reprocessing(self): all_scopes = self.generate_all_scopes_object( last_processed_time=self.start_reprocess_time) base_error_message = "400 Bad Request: Cannot execute a " \ "reprocessing [%s=%s] for scope [toStringMock] " \ "%s after the last possible timestamp [%s]." 
start_reprocess_time =\ self.start_reprocess_time + datetime.timedelta(hours=1) expected_message = base_error_message % ("start", start_reprocess_time, "starting", self.start_reprocess_time) expected_message = re.escape(expected_message) self.assertRaisesRegex( http_exceptions.BadRequest, expected_message, self.endpoint.validate_start_end_for_reprocessing, all_scopes, self.end_reprocess_time, start_reprocess_time) all_scopes = self.generate_all_scopes_object( last_processed_time=self.end_reprocess_time) end_processing_time =\ self.end_reprocess_time + datetime.timedelta(hours=1) expected_message = base_error_message % ("end", end_processing_time, "ending", self.end_reprocess_time) expected_message = re.escape(expected_message) self.assertRaisesRegex( http_exceptions.BadRequest, expected_message, self.endpoint.validate_start_end_for_reprocessing, all_scopes, end_processing_time, self.start_reprocess_time) self.endpoint.validate_start_end_for_reprocessing( all_scopes, self.end_reprocess_time, self.start_reprocess_time) all_scopes = self.generate_all_scopes_object( last_processed_time=self.start_reprocess_time) self.endpoint.validate_start_end_for_reprocessing( all_scopes, self.start_reprocess_time, self.start_reprocess_time) @mock.patch("flask.request") @mock.patch("cloudkitty.common.policy.authorize") @mock.patch("cloudkitty.api.v2.task.reprocess" ".ReprocessSchedulerPostApi.validate_inputs") @mock.patch("cloudkitty.api.v2.task.reprocess" ".ReprocessSchedulerPostApi" ".check_if_there_are_invalid_scopes") @mock.patch("cloudkitty.api.v2.task.reprocess." "ReprocessSchedulerPostApi." "validate_start_end_for_reprocessing") @mock.patch("cloudkitty.api.v2.task.reprocess" ".ReprocessSchedulerPostApi" ".validate_reprocessing_schedules_overlaps") @mock.patch("cloudkitty.storage_state.StateManager.get_all") @mock.patch("cloudkitty.storage_state.ReprocessingSchedulerDb.persist") def test_post(self, reprocessing_scheduler_db_persist_mock, state_manager_get_all_mock, validate_reprocessing_schedules_overlaps_mock, validate_start_end_for_reprocessing_mock, check_if_there_are_invalid_scopes_mock, validate_inputs_mock, policy_mock, request_mock): state_manager_get_all_mock.return_value =\ self.generate_all_scopes_object() request_mock.context = mock.Mock() request_mock.context.project_id = "project_id_mock" def get_json_mock(): return {"scope_ids": self.scope_ids[0], "start_reprocess_time": str(self.start_reprocess_time), "end_reprocess_time": str(self.end_reprocess_time), "reason": self.reason} request_mock.get_json = get_json_mock self.endpoint.post() self.assertEqual(reprocessing_scheduler_db_persist_mock.call_count, 2) state_manager_get_all_mock.assert_called_once() validate_reprocessing_schedules_overlaps_mock.assert_called_once() validate_start_end_for_reprocessing_mock.assert_called_once() check_if_there_are_invalid_scopes_mock.assert_called_once() validate_inputs_mock.assert_called_once() policy_mock.assert_called_once() class TestReprocessingSchedulerGetApi(tests.TestCase): def setUp(self): super(TestReprocessingSchedulerGetApi, self).setUp() self.endpoint = reprocess.ReprocessSchedulerGetApi() @mock.patch("flask.request") @mock.patch("cloudkitty.common.policy.authorize") @mock.patch("cloudkitty.storage_state.ReprocessingSchedulerDb.get_all") def test_get(self, reprocessing_db_get_all_mock, policy_mock, request_mock): time_now = tzutils.localized_now() schedule_mock = mock.Mock() schedule_mock.id = 1 schedule_mock.identifier = "scope_identifier" schedule_mock.reason = "reason to process" 
schedule_mock.current_reprocess_time = time_now schedule_mock.start_reprocess_time =\ time_now - datetime.timedelta(hours=10) schedule_mock.end_reprocess_time =\ time_now + datetime.timedelta(hours=10) reprocessing_db_get_all_mock.return_value = [schedule_mock] request_mock.context = mock.Mock() request_mock.args = mock.Mock() request_mock.args.lists = mock.Mock() request_mock.args.lists.return_value = [] list_all_return = self.endpoint.get() self.assertIn("results", list_all_return) self.assertNotIn("id", list_all_return['results'][0]) self.assertIn("scope_id", list_all_return['results'][0]) self.assertIn("reason", list_all_return['results'][0]) self.assertIn( "current_reprocess_time", list_all_return['results'][0]) self.assertIn( "start_reprocess_time", list_all_return['results'][0]) self.assertIn( "end_reprocess_time", list_all_return['results'][0]) self.assertEqual("scope_identifier", list_all_return['results'][0]['scope_id']) self.assertEqual("reason to process", list_all_return['results'][0]['reason']) self.assertEqual(time_now.isoformat(), list_all_return['results'][0][ 'current_reprocess_time']) self.assertEqual((time_now - datetime.timedelta(hours=10)).isoformat(), list_all_return['results'][0]['start_reprocess_time']) self.assertEqual((time_now + datetime.timedelta(hours=10)).isoformat(), list_all_return['results'][0]['end_reprocess_time']) reprocessing_db_get_all_mock.assert_called_once() policy_mock.assert_called_once() ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/tests/api/v2/test_utils.py0000664000175000017500000002323600000000000023224 0ustar00zuulzuul00000000000000# Copyright 2018 Objectif Libre # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
# from unittest import mock import flask import voluptuous from werkzeug.datastructures import MultiDict from werkzeug.exceptions import BadRequest from cloudkitty.api.v2.scope import state from cloudkitty.api.v2 import utils as api_utils from cloudkitty import tests class ApiUtilsDoInitTest(tests.TestCase): def test_do_init_valid_app_and_resources(self): app = flask.Flask('cloudkitty') resources = [ { 'module': 'cloudkitty.api.v2.scope.state', 'resource_class': 'ScopeState', 'url': '/scope', }, ] api_utils.do_init(app, 'example', resources) def test_do_init_suffix_without_heading_slash(self): app = flask.Flask('cloudkitty') resources = [ { 'module': 'cloudkitty.api.v2.scope.state', 'resource_class': 'ScopeState', 'url': 'suffix', }, ] with mock.patch.object(api_utils, '_get_blueprint_and_api') as fmock: blueprint_mock, api_mock = mock.MagicMock(), mock.MagicMock() fmock.return_value = (blueprint_mock, api_mock) api_utils.do_init(app, 'prefix', resources) api_mock.add_resource.assert_called_once_with( state.ScopeState, '/suffix') def test_do_init_suffix_without_heading_slash_no_prefix(self): app = flask.Flask('cloudkitty') resources = [ { 'module': 'cloudkitty.api.v2.scope.state', 'resource_class': 'ScopeState', 'url': 'suffix', }, ] with mock.patch.object(api_utils, '_get_blueprint_and_api') as fmock: blueprint_mock, api_mock = mock.MagicMock(), mock.MagicMock() fmock.return_value = (blueprint_mock, api_mock) api_utils.do_init(app, '', resources) api_mock.add_resource.assert_called_once_with( state.ScopeState, '/suffix') def test_do_init_invalid_resource(self): app = flask.Flask('cloudkitty') resources = [ { 'module': 'cloudkitty.api.v2.invalid', 'resource_class': 'Invalid', 'url': '/invalid', }, ] self.assertRaises( api_utils.ResourceNotFound, api_utils.do_init, app, 'invalid', resources, ) class SingleQueryParamTest(tests.TestCase): def test_single_int_to_int(self): self.assertEqual(api_utils.SingleQueryParam(int)(42), 42) def test_single_str_to_int(self): self.assertEqual(api_utils.SingleQueryParam(str)(42), '42') def test_int_list_to_int(self): self.assertEqual(api_utils.SingleQueryParam(int)([42]), 42) def test_str_list_to_int(self): self.assertEqual(api_utils.SingleQueryParam(str)([42]), '42') def test_raises_length_invalid_empty_list(self): validator = api_utils.SingleQueryParam(int) self.assertRaises( voluptuous.LengthInvalid, validator, [], ) def test_raises_length_invalid_long_list(self): validator = api_utils.SingleQueryParam(int) self.assertRaises( voluptuous.LengthInvalid, validator, [0, 1], ) class DictQueryParamTest(tests.TestCase): validator_class = api_utils.DictQueryParam def test_empty_list_str_str(self): validator = self.validator_class(str, str) input_ = [] self.assertEqual(validator(input_), {}) def test_list_invalid_elem_missing_key_str_str(self): validator = self.validator_class(str, str) input_ = ['a:b', 'c'] self.assertRaises(voluptuous.DictInvalid, validator, input_) def test_list_invalid_elem_too_many_columns_str_str(self): validator = self.validator_class(str, str) input_ = ['a:b', 'c:d:e'] self.assertRaises(voluptuous.DictInvalid, validator, input_) class SingleDictQueryParamTest(DictQueryParamTest): validator_class = api_utils.SingleDictQueryParam def test_single_valid_elem_str_int(self): validator = self.validator_class(str, int) input_ = 'life:42' self.assertEqual(validator(input_), {'life': 42}) def test_list_one_valid_elem_str_int(self): validator = self.validator_class(str, int) input_ = ['life:42'] self.assertEqual(validator(input_), {'life': 42}) def 
test_list_several_valid_elems_str_int(self): validator = self.validator_class(str, int) input_ = ['life:42', 'one:1', 'two:2'] self.assertEqual(validator(input_), {'life': 42, 'one': 1, 'two': 2}) class MultiDictQueryParamTest(DictQueryParamTest): validator_class = api_utils.MultiDictQueryParam def test_single_valid_elem_str_int(self): validator = self.validator_class(str, int) input_ = 'life:42' self.assertEqual(validator(input_), {'life': [42]}) def test_list_one_valid_elem_str_int(self): validator = self.validator_class(str, int) input_ = ['life:42'] self.assertEqual(validator(input_), {'life': [42]}) def test_list_several_valid_elems_str_int(self): validator = self.validator_class(str, int) input_ = ['life:42', 'one:1', 'two:2'] self.assertEqual(validator(input_), {'life': [42], 'one': [1], 'two': [2]}) def test_list_several_valid_elems_shared_keys_str_int(self): validator = self.validator_class(str, int) input_ = ['even:0', 'uneven:1', 'even:2', 'uneven:3', 'even:4'] self.assertEqual(validator(input_), {'even': [0, 2, 4], 'uneven': [1, 3]}) class AddInputSchemaTest(tests.TestCase): def test_paginated(self): @api_utils.paginated def test_func(self, offset=None, limit=None): self.assertEqual(offset, 0) self.assertEqual(limit, 100) self.assertIn('offset', test_func.input_schema.schema.keys()) self.assertIn('limit', test_func.input_schema.schema.keys()) self.assertEqual(2, len(test_func.input_schema.schema.keys())) with mock.patch('flask.request') as m: m.args = MultiDict({}) test_func(self) m.args = MultiDict({'offset': 0, 'limit': 100}) test_func(self) m.args = MultiDict({'offset': 1}) self.assertRaises(AssertionError, test_func, self) m.args = MultiDict({'limit': 99}) self.assertRaises(AssertionError, test_func, self) m.args = MultiDict({'offset': -1}) self.assertRaises(BadRequest, test_func, self) m.args = MultiDict({'limit': 0}) self.assertRaises(BadRequest, test_func, self) def test_simple_add_input_schema_query(self): @api_utils.add_input_schema('query', { voluptuous.Required( 'arg_one', default='one'): api_utils.SingleQueryParam(str), }) def test_func(self, arg_one=None): self.assertEqual(arg_one, 'one') self.assertEqual(len(test_func.input_schema.schema.keys()), 1) self.assertEqual( list(test_func.input_schema.schema.keys())[0], 'arg_one') with mock.patch('flask.request') as m: m.args = MultiDict({}) test_func(self) m.args = MultiDict({'arg_one': 'one'}) test_func(self) def test_simple_add_input_schema_body(self): @api_utils.add_input_schema('body', { voluptuous.Required( 'arg_one', default='one'): api_utils.SingleQueryParam(str), }) def test_func(self, arg_one=None): self.assertEqual(arg_one, 'one') self.assertEqual(len(test_func.input_schema.schema.keys()), 1) self.assertEqual( list(test_func.input_schema.schema.keys())[0], 'arg_one') with mock.patch('flask.request.get_json') as m: m.return_value = {} test_func(self) with mock.patch('flask.request.get_json') as m: m.return_value = {'arg_one': 'one'} test_func(self) def _test_multiple_add_input_schema_x(self, location): @api_utils.add_input_schema(location, { voluptuous.Required( 'arg_one', default='one'): api_utils.SingleQueryParam(str) if location == 'query' else str, }) @api_utils.add_input_schema(location, { voluptuous.Required( 'arg_two', default='two'): api_utils.SingleQueryParam(str) if location == 'query' else str, }) def test_func(self, arg_one=None, arg_two=None): self.assertEqual(arg_one, 'one') self.assertEqual(arg_two, 'two') self.assertEqual(len(test_func.input_schema.schema.keys()), 2) self.assertEqual( 
sorted(list(test_func.input_schema.schema.keys())), ['arg_one', 'arg_two'], ) def test_multiple_add_input_schema_query(self): self._test_multiple_add_input_schema_x('query') def test_multiple_add_input_schema_body(self): self._test_multiple_add_input_schema_x('body') ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1727866639.2634869 cloudkitty-21.0.0/cloudkitty/tests/cli/0000775000175000017500000000000000000000000020114 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/tests/cli/__init__.py0000664000175000017500000000000000000000000022213 0ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/tests/cli/test_status.py0000664000175000017500000000252500000000000023054 0ustar00zuulzuul00000000000000# Copyright 2018 Objectif Libre # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # from oslo_upgradecheck import upgradecheck from cloudkitty.cli import status from cloudkitty import tests class CloudKittyStatusCheckUpgradeTest(tests.TestCase): def setUp(self): super(CloudKittyStatusCheckUpgradeTest, self).setUp() self._checks = status.CloudkittyUpgradeChecks() def test_storage_version_with_v1(self): self.conf.set_override('version', 1, 'storage') self.assertEqual( upgradecheck.Code.WARNING, self._checks._storage_version().code, ) def test_storage_version_with_v2(self): self.conf.set_override('version', 2, 'storage') self.assertEqual( upgradecheck.Code.SUCCESS, self._checks._storage_version().code, ) ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1727866639.2634869 cloudkitty-21.0.0/cloudkitty/tests/collectors/0000775000175000017500000000000000000000000021516 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/tests/collectors/__init__.py0000664000175000017500000000000000000000000023615 0ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/tests/collectors/test_gnocchi.py0000664000175000017500000002773200000000000024554 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright 2015 Objectif Libre # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
# import datetime from unittest import mock from dateutil import tz from cloudkitty.collector import gnocchi from cloudkitty import tests from cloudkitty.tests import samples class GnocchiCollectorTest(tests.TestCase): def setUp(self): super(GnocchiCollectorTest, self).setUp() self._tenant_id = samples.TENANT self.conf.set_override('collector', 'gnocchi', 'collect') self.conf.set_override( 'gnocchi_auth_type', 'basic', 'collector_gnocchi') self.collector = gnocchi.GnocchiCollector( period=3600, conf=samples.DEFAULT_METRICS_CONF, ) def test_format_data_raises_exception(self): metconf = {'extra_args': {'resource_key': 'id'}} data = {'group': {'id': '281b9dc6-5d02-4610-af2d-10d0d6887f48'}} self.assertRaises( gnocchi.AssociatedResourceNotFound, self.collector._format_data, metconf, data, resources_info={}, ) # Filter generation def test_generate_one_field_filter(self): actual = self.collector.gen_filter(value1=2) expected = { '=': { 'value1': 2 }} self.assertEqual(expected, actual) def test_generate_two_fields_filter(self): actual = self.collector.gen_filter(value1=2, value2=3) expected = {'and': [{ '=': { 'value1': 2 }}, { '=': { 'value2': 3 }}]} self.assertEqual(expected, actual) def test_collector_retrieve_metrics(self): expected_data = {"group": {"id": "id-1", "revision_start": datetime.datetime( 2020, 1, 1, 1, 10, 0, tzinfo=tz.tzutc()) }} data = [ {"group": {"id": "id-1", "revision_start": datetime.datetime( 2020, 1, 1, tzinfo=tz.tzutc())}}, expected_data ] no_response = mock.patch( 'cloudkitty.collector.gnocchi.GnocchiCollector.fetch_all', return_value=data, ) for c in self.collector.conf: with no_response: actual_name, actual_data = self.collector.retrieve( metric_name=c, start=samples.FIRST_PERIOD_BEGIN, end=samples.FIRST_PERIOD_END, project_id=samples.TENANT, q_filter=None, ) def test_generate_two_fields_filter_different_operations(self): actual = self.collector.gen_filter( cop='>=', lop='or', value1=2, value2=3) expected = {'or': [{ '>=': { 'value1': 2 }}, { '>=': { 'value2': 3 }}]} self.assertEqual(expected, actual) def test_generate_two_filters_and_add_logical(self): filter1 = self.collector.gen_filter(value1=2) filter2 = self.collector.gen_filter(cop='>', value2=3) actual = self.collector.extend_filter(filter1, filter2, lop='or') expected = {'or': [{ '=': { 'value1': 2 }}, { '>': { 'value2': 3 }}]} self.assertEqual(expected, actual) def test_noop_on_single_filter(self): filter1 = self.collector.gen_filter(value1=2) actual = self.collector.extend_filter(filter1, lop='or') self.assertEqual(filter1, actual) def test_try_extend_empty_filter(self): actual = self.collector.extend_filter() self.assertEqual({}, actual) actual = self.collector.extend_filter(actual, actual) self.assertEqual({}, actual) def test_try_extend_filter_with_none(self): filter1 = self.collector.gen_filter(value1=2) actual = self.collector.extend_filter(filter1, None) self.assertEqual(filter1, actual) def test_generate_two_logical_ops(self): filter1 = self.collector.gen_filter(value1=2, value2=3) filter2 = self.collector.gen_filter(cop='<=', value3=1) actual = self.collector.extend_filter(filter1, filter2, lop='or') expected = {'or': [{ 'and': [{ '=': { 'value1': 2 }}, { '=': { 'value2': 3 }}]}, { '<=': { 'value3': 1 }}]} self.assertEqual(expected, actual) def test_gen_filter_parameters(self): actual = self.collector.gen_filter( cop='>', lop='or', value1=2, value2=3) expected = {'or': [{ '>': { 'value1': 2 }}, { '>': { 'value2': 3 }}]} self.assertEqual(expected, actual) def test_extend_filter_parameters(self): 
actual = self.collector.extend_filter( ['dummy1'], ['dummy2'], lop='or') expected = {'or': ['dummy1', 'dummy2']} self.assertEqual(expected, actual) class GnocchiCollectorAggregationOperationTest(tests.TestCase): def setUp(self): super(GnocchiCollectorAggregationOperationTest, self).setUp() self.conf.set_override('collector', 'gnocchi', 'collect') self.start = datetime.datetime(2019, 1, 1, tzinfo=tz.tzutc()) self.end = datetime.datetime(2019, 1, 1, 1, tzinfo=tz.tzutc()) def do_test(self, expected_op, extra_args=None, conf=None): conf = conf or { 'metrics': { 'metric_one': { 'unit': 'GiB', 'groupby': ['project_id'], 'extra_args': extra_args if extra_args else {}, } } } coll = gnocchi.GnocchiCollector(period=3600, conf=conf) for c in coll.conf: with mock.patch.object(coll._conn.aggregates, 'fetch') as fetch_mock: coll._fetch_metric(c, self.start, self.end) fetch_mock.assert_called_once_with( expected_op, groupby=['project_id', 'id'], resource_type='resource_x', search={'=': {'type': 'resource_x'}}, start=self.start, stop=self.end, granularity=3600, use_history=True ) def test_multiple_confs(self): conf = { 'metrics': { 'metric_one': [{ 'alt_name': 'foo', 'unit': 'GiB', 'groupby': ['project_id'], 'extra_args': {'resource_type': 'resource_x'}, }, { 'alt_name': 'bar', 'unit': 'GiB', 'groupby': ['project_id'], 'extra_args': {'resource_type': 'resource_x'}, }] } } expected_op = ["aggregate", "max", ["metric", "metric_one", "max"]] self.do_test(expected_op, conf=conf) def test_no_agg_no_re_agg(self): extra_args = {'resource_type': 'resource_x'} expected_op = ["aggregate", "max", ["metric", "metric_one", "max"]] self.do_test(expected_op, extra_args=extra_args) def test_custom_agg_no_re_agg(self): extra_args = { 'resource_type': 'resource_x', 'aggregation_method': 'mean', } expected_op = ["aggregate", "max", ["metric", "metric_one", "mean"]] self.do_test(expected_op, extra_args=extra_args) def test_no_agg_custom_re_agg(self): extra_args = { 'resource_type': 'resource_x', 're_aggregation_method': 'sum', } expected_op = ["aggregate", "sum", ["metric", "metric_one", "max"]] self.do_test(expected_op, extra_args=extra_args) def test_custom_agg_custom_re_agg(self): extra_args = { 'resource_type': 'resource_x', 'aggregation_method': 'rate:mean', 're_aggregation_method': 'sum', } expected_op = [ "aggregate", "sum", ["metric", "metric_one", "rate:mean"], ] self.do_test(expected_op, extra_args=extra_args) def test_filter_unecessary_measurements_use_all_datapoints(self): data = [ {"group": { "id": "id-1", "revision_start": datetime.datetime( 2020, 1, 1, tzinfo=tz.tzutc())}}, {"group": {"id": "id-1", "revision_start": datetime.datetime( 2020, 1, 1, 1, 10, 0, tzinfo=tz.tzutc())}} ] expected_data = data.copy() metric_name = 'test_metric' metric = { 'name': metric_name, 'extra_args': {'use_all_resource_revisions': True}} data_filtered = gnocchi.GnocchiCollector.\ filter_unecessary_measurements(data, metric, metric_name) self.assertEqual(expected_data, data_filtered) def test_filter_unecessary_measurements_use_only_last_datapoint(self): expected_data = {"group": {"id": "id-1", "revision_start": datetime.datetime( 2020, 1, 1, 1, 10, 0, tzinfo=tz.tzutc()) }} data = [ {"group": {"id": "id-1", "revision_start": datetime.datetime( 2020, 1, 1, tzinfo=tz.tzutc())}}, expected_data ] metric_name = 'test_metric' metric = {'name': metric_name, 'extra_args': { 'use_all_resource_revisions': False}} data_filtered = gnocchi.GnocchiCollector.\ filter_unecessary_measurements(data, metric, metric_name) data_filtered = 
list(data_filtered) self.assertEqual(1, len(data_filtered)) self.assertEqual(expected_data, data_filtered[0]) def test_generate_aggregation_operation_same_reaggregation(self): metric_name = "test" extra_args = {"aggregation_method": 'mean'} expected_op = ["aggregate", 'mean', ["metric", "test", 'mean']] op = gnocchi.GnocchiCollector.generate_aggregation_operation( extra_args, metric_name) self.assertEqual(expected_op, op) def test_generate_aggregation_operation_different_reaggregation(self): metric_name = "test" extra_args = {"aggregation_method": 'mean', "re_aggregation_method": 'max'} expected_op = ["aggregate", 'max', ["metric", "test", 'mean']] op = gnocchi.GnocchiCollector.generate_aggregation_operation( extra_args, metric_name) self.assertEqual(expected_op, op) def test_generate_aggregation_operation_custom_query(self): metric_name = "test" extra_args = {"aggregation_method": 'mean', "re_aggregation_method": 'max', "custom_query": "(* (aggregate RE_AGGREGATION_METHOD (metric " "METRIC_NAME AGGREGATION_METHOD)) -1)"} expected_op = "(* (aggregate max (metric test mean)) -1)" op = gnocchi.GnocchiCollector.generate_aggregation_operation( extra_args, metric_name) self.assertEqual(expected_op, op) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/tests/collectors/test_prometheus.py0000664000175000017500000002613100000000000025325 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright 2018 Objectif Libre # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
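# NOTE: Rough summary of the Prometheus collector behaviour exercised below,
# inferred from the asserted query strings rather than from the collector
# implementation itself. The instant queries are expected to look like
#     <aggregation_method>(<query_function>(<range_function>(
#         <metric>{project_id="<scope>"}[<period>s]))) by (<groupby + metadata>)
# where query_function and range_function are optional extra_args and
# avg_over_time() is used when no range_function is configured.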
# from decimal import Decimal from unittest import mock from cloudkitty import collector from cloudkitty.collector import exceptions from cloudkitty.collector import prometheus from cloudkitty.common.prometheus_client import PrometheusResponseError from cloudkitty import dataframe from cloudkitty import tests from cloudkitty.tests import samples class PrometheusCollectorTest(tests.TestCase): def setUp(self): super(PrometheusCollectorTest, self).setUp() self._tenant_id = samples.TENANT args = { 'period': 3600, 'scope_key': 'namespace', 'conf': { 'metrics': { 'http_requests_total': { 'unit': 'instance', 'groupby': [ 'foo', 'bar', ], 'metadata': [ 'code', 'instance', ], 'extra_args': { 'aggregation_method': 'avg' }, }, } } } args_range_function = { 'period': 3600, 'scope_key': 'namespace', 'conf': { 'metrics': { 'http_requests_total': { 'unit': 'instance', 'groupby': [ 'foo', 'bar', ], 'metadata': [ 'code', 'instance', ], 'extra_args': { 'aggregation_method': 'avg', 'query_function': 'abs', }, }, } } } args_query_function = { 'period': 3600, 'scope_key': 'namespace', 'conf': { 'metrics': { 'http_requests_total': { 'unit': 'instance', 'groupby': [ 'foo', 'bar', ], 'metadata': [ 'code', 'instance', ], 'extra_args': { 'aggregation_method': 'avg', 'range_function': 'delta', }, }, } } } args_all = { 'period': 3600, 'scope_key': 'namespace', 'conf': { 'metrics': { 'http_requests_total': { 'unit': 'instance', 'groupby': [ 'foo', 'bar', ], 'metadata': [ 'code', 'instance', ], 'extra_args': { 'aggregation_method': 'avg', 'range_function': 'delta', 'query_function': 'abs', }, }, } } } self.collector_mandatory = prometheus.PrometheusCollector(**args) self.collector_without_range_function = prometheus.PrometheusCollector( **args_range_function) self.collector_without_query_function = prometheus.PrometheusCollector( **args_query_function) self.collector_all = prometheus.PrometheusCollector(**args_all) def test_fetch_all_build_query_only_mandatory(self): query = ( 'avg(avg_over_time(http_requests_total' '{project_id="f266f30b11f246b589fd266f85eeec39"}[3600s]' ')) by (foo, bar, project_id, code, instance)' ) with mock.patch.object( prometheus.PrometheusClient, 'get_instant', ) as mock_get: self.collector_mandatory.fetch_all( 'http_requests_total', samples.FIRST_PERIOD_BEGIN, samples.FIRST_PERIOD_END, self._tenant_id, ) mock_get.assert_called_once_with( query, samples.FIRST_PERIOD_END.isoformat(), ) def test_fetch_all_build_query_without_range_function(self): query = ( 'avg(abs(avg_over_time(http_requests_total' '{project_id="f266f30b11f246b589fd266f85eeec39"}[3600s]' '))) by (foo, bar, project_id, code, instance)' ) with mock.patch.object( prometheus.PrometheusClient, 'get_instant', ) as mock_get: self.collector_without_range_function.fetch_all( 'http_requests_total', samples.FIRST_PERIOD_BEGIN, samples.FIRST_PERIOD_END, self._tenant_id, ) mock_get.assert_called_once_with( query, samples.FIRST_PERIOD_END.isoformat(), ) def test_fetch_all_build_query_without_query_function(self): query = ( 'avg(delta(http_requests_total' '{project_id="f266f30b11f246b589fd266f85eeec39"}[3600s]' ')) by (foo, bar, project_id, code, instance)' ) with mock.patch.object( prometheus.PrometheusClient, 'get_instant', ) as mock_get: self.collector_without_query_function.fetch_all( 'http_requests_total', samples.FIRST_PERIOD_BEGIN, samples.FIRST_PERIOD_END, self._tenant_id, ) mock_get.assert_called_once_with( query, samples.FIRST_PERIOD_END.isoformat(), ) def test_fetch_all_build_query_all(self): query = ( 
'avg(abs(delta(http_requests_total' '{project_id="f266f30b11f246b589fd266f85eeec39"}[3600s]' '))) by (foo, bar, project_id, code, instance)' ) with mock.patch.object( prometheus.PrometheusClient, 'get_instant', ) as mock_get: self.collector_all.fetch_all( 'http_requests_total', samples.FIRST_PERIOD_BEGIN, samples.FIRST_PERIOD_END, self._tenant_id, ) mock_get.assert_called_once_with( query, samples.FIRST_PERIOD_END.isoformat(), ) def test_format_data_instant_query(self): expected = ({ 'code': '200', 'instance': 'localhost:9090', }, { 'bar': '', 'foo': '', 'project_id': '' }, Decimal('7')) params = { 'metric_name': 'http_requests_total', 'scope_key': 'project_id', 'scope_id': self._tenant_id, 'start': samples.FIRST_PERIOD_BEGIN, 'end': samples.FIRST_PERIOD_END, 'data': samples.PROMETHEUS_RESP_INSTANT_QUERY['data']['result'][0], } actual = self.collector_mandatory._format_data(**params) self.assertEqual(expected, actual) def test_format_data_instant_query_2(self): expected = ({ 'code': '200', 'instance': 'localhost:9090', }, { 'bar': '', 'foo': '', 'project_id': '' }, Decimal('42')) params = { 'metric_name': 'http_requests_total', 'scope_key': 'project_id', 'scope_id': self._tenant_id, 'start': samples.FIRST_PERIOD_BEGIN, 'end': samples.FIRST_PERIOD_END, 'data': samples.PROMETHEUS_RESP_INSTANT_QUERY['data']['result'][1], } actual = self.collector_mandatory._format_data(**params) self.assertEqual(expected, actual) def test_format_retrieve(self): expected_name = 'http_requests_total' group_by = {'bar': '', 'foo': '', 'project_id': '', 'week_of_the_year': '00', 'day_of_the_year': '1', 'month': '1', 'year': '2015'} expected_data = [ dataframe.DataPoint( 'instance', '7', '0', group_by, {'code': '200', 'instance': 'localhost:9090'}), dataframe.DataPoint( 'instance', '42', '0', group_by, {'code': '200', 'instance': 'localhost:9090'}), ] no_response = mock.patch( 'cloudkitty.common.prometheus_client.PrometheusClient.get_instant', return_value=samples.PROMETHEUS_RESP_INSTANT_QUERY, ) with no_response: actual_name, actual_data = self.collector_mandatory.retrieve( metric_name='http_requests_total', start=samples.FIRST_PERIOD_BEGIN, end=samples.FIRST_PERIOD_END, project_id=samples.TENANT, q_filter=None, ) self.assertEqual(expected_name, actual_name) self.assertEqual(expected_data, actual_data) def test_format_retrieve_raise_NoDataCollected(self): no_response = mock.patch( 'cloudkitty.common.prometheus_client.PrometheusClient.get_instant', return_value=samples.PROMETHEUS_EMPTY_RESP_INSTANT_QUERY, ) with no_response: self.assertRaises( collector.NoDataCollected, self.collector_mandatory.retrieve, metric_name='http_requests_total', start=samples.FIRST_PERIOD_BEGIN, end=samples.FIRST_PERIOD_END, project_id=samples.TENANT, q_filter=None, ) def test_format_retrieve_all_raises_exception(self): invalid_response = mock.patch( 'cloudkitty.common.prometheus_client.PrometheusClient.get_instant', side_effect=PrometheusResponseError, ) with invalid_response: self.assertRaises( exceptions.CollectError, self.collector_mandatory.retrieve, metric_name='http_requests_total', start=samples.FIRST_PERIOD_BEGIN, end=samples.FIRST_PERIOD_END, project_id=samples.TENANT, q_filter=None, ) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/tests/collectors/test_validation.py0000664000175000017500000001452100000000000025264 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright 2014 Objectif Libre # # Licensed under the Apache License, 
Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # import copy from voluptuous import error as verror from cloudkitty import collector from cloudkitty import tests class MetricConfigValidationTest(tests.TestCase): base_data = { 'metrics': { 'metric_one': { 'groupby': ['one'], 'metadata': ['two'], 'unit': 'u', } } } base_output = { 'metric_one': { 'groupby': ['one'], 'metadata': ['two'], 'unit': 'u', 'factor': 1, 'offset': 0, 'mutate': 'NONE', } } def test_base_minimal_config(self): data = copy.deepcopy(self.base_data) expected_output = copy.deepcopy(self.base_output) expected_output['metric_one']['groupby'].append('project_id') self.assertEqual( collector.BaseCollector.check_configuration(data), expected_output, ) def test_gnocchi_minimal_config_no_extra_args(self): data = copy.deepcopy(self.base_data) self.assertRaises( verror.MultipleInvalid, collector.gnocchi.GnocchiCollector.check_configuration, data, ) def test_gnocchi_minimal_config_minimal_extra_args(self): data = copy.deepcopy(self.base_data) data['metrics']['metric_one']['extra_args'] = {'resource_type': 'res'} expected_output = copy.deepcopy(self.base_output) expected_output['metric_one']['groupby'] += ['project_id', 'id'] expected_output['metric_one']['extra_args'] = { 'aggregation_method': 'max', 're_aggregation_method': 'max', 'force_granularity': 3600, 'resource_type': 'res', 'resource_key': 'id', 'use_all_resource_revisions': True, 'custom_query': ''} self.assertEqual( collector.gnocchi.GnocchiCollector.check_configuration(data), expected_output, ) def test_gnocchi_minimal_config_negative_forced_aggregation(self): data = copy.deepcopy(self.base_data) data['metrics']['metric_one']['extra_args'] = { 'resource_type': 'res', 'force_aggregation': -42, } self.assertRaises( verror.MultipleInvalid, collector.gnocchi.GnocchiCollector.check_configuration, data, ) def test_prometheus_minimal_config_empty_extra_args(self): data = copy.deepcopy(self.base_data) data['metrics']['metric_one']['extra_args'] = {} expected_output = copy.deepcopy(self.base_output) expected_output['metric_one']['groupby'].append('project_id') expected_output['metric_one']['extra_args'] = { 'aggregation_method': 'max', 'query_prefix': '', 'query_suffix': '', } self.assertEqual( collector.prometheus.PrometheusCollector.check_configuration(data), expected_output, ) def test_prometheus_minimal_config_no_extra_args(self): data = copy.deepcopy(self.base_data) expected_output = copy.deepcopy(self.base_output) expected_output['metric_one']['groupby'].append('project_id') expected_output['metric_one']['extra_args'] = { 'aggregation_method': 'max', 'query_prefix': '', 'query_suffix': '', } self.assertEqual( collector.prometheus.PrometheusCollector.check_configuration(data), expected_output, ) def test_prometheus_minimal_config_minimal_extra_args(self): data = copy.deepcopy(self.base_data) data['metrics']['metric_one']['extra_args'] = { 'aggregation_method': 'max', 'query_function': 'abs', 'query_prefix': 'custom_prefix', 'query_suffix': 'custom_suffix', 'range_function': 'delta', } expected_output = 
copy.deepcopy(self.base_output) expected_output['metric_one']['groupby'].append('project_id') expected_output['metric_one']['extra_args'] = { 'aggregation_method': 'max', 'query_function': 'abs', 'query_prefix': 'custom_prefix', 'query_suffix': 'custom_suffix', 'range_function': 'delta', } self.assertEqual( collector.prometheus.PrometheusCollector.check_configuration(data), expected_output, ) def test_check_duplicates(self): data = copy.deepcopy(self.base_data) for metric_name, metric in data['metrics'].items(): metric['metadata'].append('one') self.assertRaises( collector.InvalidConfiguration, collector.check_duplicates, metric_name, metric) def test_validate_map_mutator(self): data = copy.deepcopy(self.base_data) # Check that validation succeeds when MAP mutator is not used for metric_name, metric in data['metrics'].items(): collector.validate_map_mutator(metric_name, metric) # Check that validation raises an exception when mutate_map is missing for metric_name, metric in data['metrics'].items(): metric['mutate'] = 'MAP' self.assertRaises( collector.InvalidConfiguration, collector.validate_map_mutator, metric_name, metric) data = copy.deepcopy(self.base_data) # Check that validation raises an exception when mutate_map is present # but MAP mutator is not used for metric_name, metric in data['metrics'].items(): metric['mutate_map'] = {} self.assertRaises( collector.InvalidConfiguration, collector.validate_map_mutator, metric_name, metric) ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1727866639.2634869 cloudkitty-21.0.0/cloudkitty/tests/common/0000775000175000017500000000000000000000000020635 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/tests/common/test_prometheus_client.py0000664000175000017500000001242300000000000026001 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright 2019 Objectif Libre # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
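# NOTE: The client tests below stub the HTTP layer with a small FakeResponse
# object standing in for ``requests`` responses. Based on the assertions,
# they check that query parameters, basic-auth credentials and the TLS
# ``verify`` option are passed through to ``requests.get``, that
# get_instant()/get_range() return the decoded JSON body, and that a
# malformed JSON body raises PrometheusResponseError.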
# from unittest import mock from cloudkitty.collector import prometheus from cloudkitty import tests from cloudkitty.tests import samples from cloudkitty.utils import json class PrometheusClientTest(tests.TestCase): class FakeResponse(object): """Mimics an HTTP ``requests`` response""" def __init__(self, url, text, status_code): self.url = url self.text = text self.status_code = status_code def json(self): return json.loads(self.text) @staticmethod def _mock_requests_get(text): """Factory to build FakeResponse with desired response body text""" return lambda *args, **kwargs: PrometheusClientTest.FakeResponse( args[0], text, 200, ) def setUp(self): super(PrometheusClientTest, self).setUp() self.client = prometheus.PrometheusClient( 'http://localhost:9090/api/v1', ) def test_get_with_no_options(self): with mock.patch('requests.get') as mock_get: self.client._get( 'query_range', params={ 'query': 'max(http_requests_total) by (project_id)', 'start': samples.FIRST_PERIOD_BEGIN, 'end': samples.FIRST_PERIOD_END, 'step': 10, }, ) mock_get.assert_called_once_with( 'http://localhost:9090/api/v1/query_range', params={ 'query': 'max(http_requests_total) by (project_id)', 'start': samples.FIRST_PERIOD_BEGIN, 'end': samples.FIRST_PERIOD_END, 'step': 10, }, auth=None, verify=True, ) def test_get_with_options(self): client = prometheus.PrometheusClient( 'http://localhost:9090/api/v1', auth=('foo', 'bar'), verify='/some/random/path', ) with mock.patch('requests.get') as mock_get: client._get( 'query_range', params={ 'query': 'max(http_requests_total) by (project_id)', 'start': samples.FIRST_PERIOD_BEGIN, 'end': samples.FIRST_PERIOD_END, 'step': 10, }, ) mock_get.assert_called_once_with( 'http://localhost:9090/api/v1/query_range', params={ 'query': 'max(http_requests_total) by (project_id)', 'start': samples.FIRST_PERIOD_BEGIN, 'end': samples.FIRST_PERIOD_END, 'step': 10, }, auth=('foo', 'bar'), verify='/some/random/path', ) def test_get_instant(self): mock_get = mock.patch( 'requests.get', side_effect=self._mock_requests_get('{"foo": "bar"}'), ) with mock_get: res = self.client.get_instant( 'max(http_requests_total) by (project_id)', ) self.assertEqual(res, {'foo': 'bar'}) def test_get_range(self): mock_get = mock.patch( 'requests.get', side_effect=self._mock_requests_get('{"foo": "bar"}'), ) with mock_get: res = self.client.get_range( 'max(http_requests_total) by (project_id)', samples.FIRST_PERIOD_BEGIN, samples.FIRST_PERIOD_END, 10, ) self.assertEqual(res, {'foo': 'bar'}) def test_get_instant_raises_error_on_bad_json(self): # Simulating malformed JSON response from HTTP+PromQL instant request mock_get = mock.patch( 'requests.get', side_effect=self._mock_requests_get('{"foo": "bar"'), ) with mock_get: self.assertRaises( prometheus.PrometheusResponseError, self.client.get_instant, 'max(http_requests_total) by (project_id)', ) def test_get_range_raises_error_on_bad_json(self): # Simulating malformed JSON response from HTTP+PromQL range request mock_get = mock.patch( 'requests.get', side_effect=self._mock_requests_get('{"foo": "bar"'), ) with mock_get: self.assertRaises( prometheus.PrometheusResponseError, self.client.get_range, 'max(http_requests_total) by (project_id)', samples.FIRST_PERIOD_BEGIN, samples.FIRST_PERIOD_END, 10, ) ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1727866639.2634869 cloudkitty-21.0.0/cloudkitty/tests/fetchers/0000775000175000017500000000000000000000000021150 
5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/tests/fetchers/__init__.py0000664000175000017500000000000000000000000023247 0ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/tests/fetchers/test_gnocchi.py0000664000175000017500000001405700000000000024202 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # # from unittest import mock from cloudkitty.fetcher import gnocchi from cloudkitty import tests class GnocchiFetcherTest(tests.TestCase): def setUp(self): super(GnocchiFetcherTest, self).setUp() self.fetcher = gnocchi.GnocchiFetcher() self.resource_list = [{'id': "some_id", 'project_id': 'some_other_project_id'}, {'id': "some_id2", 'project_id': 'some_other_project_id2'}, {'id': "some_id3", 'project_id': 'some_other_project_id3'}, {'id': "some_replicated_id", 'project_id': 'some_replicated_id_project'}, {'id': "some_replicated_id", 'project_id': 'some_replicated_id_project'} ] self.unique_scope_ids = ["some_other_project_id", "some_other_project_id2", "some_other_project_id3", "some_replicated_id_project"] self.unique_scope_ids.sort() def test_get_tenants_marker_list_resource_last_call(self): with mock.patch.object( self.fetcher._conn.resource, 'search') as resource_list: resource_list.side_effect = [ self.resource_list, [{'id': "some_replicated_id", 'project_id': 'some_replicated_id_project'}], []] all_scope_ids = self.fetcher.get_tenants() all_scope_ids.sort() self.assertEqual(self.unique_scope_ids, all_scope_ids) resource_list.assert_has_calls([ mock.call(resource_type='generic', details=True, query=None), mock.call(resource_type='generic', details=True, query={'not': {'in': {'project_id': [ 'some_other_project_id', 'some_other_project_id2', 'some_other_project_id3', 'some_replicated_id_project']}}}), mock.call(resource_type='generic', details=True, query={'not': {'in': {'project_id': [ 'some_other_project_id', 'some_other_project_id2', 'some_other_project_id3', 'some_replicated_id_project']}}}) ]) def test_get_tenants_empty_list_resource_last_call(self): with mock.patch.object( self.fetcher._conn.resource, 'search') as resource_list: resource_list.side_effect = [ self.resource_list, self.resource_list, []] all_scope_ids = self.fetcher.get_tenants() all_scope_ids.sort() self.assertEqual(self.unique_scope_ids, all_scope_ids) resource_list.assert_has_calls([ mock.call(resource_type='generic', details=True, query=None), mock.call(resource_type='generic', details=True, query={'not': {'in': {'project_id': [ 'some_other_project_id', 'some_other_project_id2', 'some_other_project_id3', 'some_replicated_id_project']}}}), mock.call(resource_type='generic', details=True, query={'not': {'in': {'project_id': [ 'some_other_project_id', 'some_other_project_id2', 'some_other_project_id3', 'some_replicated_id_project']}}})], 
any_order=False) def test_get_tenants_scope_id_as_none(self): with mock.patch.object( self.fetcher._conn.resource, 'search') as resource_list: resource_list.side_effect = [ self.resource_list, self.resource_list, [{"id": "test", "project_id": None}], []] all_scope_ids = self.fetcher.get_tenants() all_scope_ids.sort() self.assertEqual(self.unique_scope_ids, all_scope_ids) resource_list.assert_has_calls([ mock.call(resource_type='generic', details=True, query=None), mock.call(resource_type='generic', details=True, query={'not': {'in': {'project_id': [ 'some_other_project_id', 'some_other_project_id2', 'some_other_project_id3', 'some_replicated_id_project']}}}), mock.call(resource_type='generic', details=True, query={'not': {'in': {'project_id': [ 'some_other_project_id', 'some_other_project_id2', 'some_other_project_id3', 'some_replicated_id_project']}}}), mock.call(resource_type='generic', details=True, query={'not': {'in': {'project_id': [ 'some_other_project_id', 'some_other_project_id2', 'some_other_project_id3', 'some_replicated_id_project']}}}) ], any_order=False) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/tests/fetchers/test_prometheus.py0000664000175000017500000000753700000000000024770 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright 2019 Objectif Libre # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
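# NOTE: Rough summary of the fetcher behaviour exercised below, inferred from
# the asserted queries. PrometheusFetcher.get_tenants() is expected to issue
#     max(<metric>{<filters>}) by (<scope_attribute>)
# e.g. max(http_requests_total{label1="foo"}) by (namespace), and to read
# scope ids from the scope_attribute label of each result, skipping results
# where that label is missing; an empty or invalid response raises
# PrometheusFetcherError.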
# # from unittest import mock from cloudkitty.common.prometheus_client import PrometheusClient from cloudkitty.common.prometheus_client import PrometheusResponseError from cloudkitty.fetcher import prometheus from cloudkitty import tests class PrometheusFetcherTest(tests.TestCase): def setUp(self): super(PrometheusFetcherTest, self).setUp() self.conf.set_override( 'metric', 'http_requests_total', 'fetcher_prometheus', ) self.conf.set_override( 'scope_attribute', 'namespace', 'fetcher_prometheus', ) self.fetcher = prometheus.PrometheusFetcher() def test_get_tenants_build_query(self): query = ( 'max(http_requests_total) by (namespace)' ) with mock.patch.object( PrometheusClient, 'get_instant', ) as mock_get: self.fetcher.get_tenants() mock_get.assert_called_once_with(query) def test_get_tenants_build_query_with_filter(self): query = ( 'max(http_requests_total{label1="foo"})' ' by (namespace)' ) self.conf.set_override( 'filters', 'label1:foo', 'fetcher_prometheus', ) with mock.patch.object( PrometheusClient, 'get_instant', ) as mock_get: self.fetcher.get_tenants() mock_get.assert_called_once_with(query) def test_get_tenants(self): response = mock.patch( 'cloudkitty.common.prometheus_client.PrometheusClient.get_instant', return_value={ 'data': { 'result': [ { 'metric': {}, 'value': [42, 1337], }, { 'metric': {'namespace': 'scope_id1'}, 'value': [42, 1337], }, { 'metric': {'namespace': 'scope_id2'}, 'value': [42, 1337], }, { 'metric': {'namespace': 'scope_id3'}, 'value': [42, 1337], }, ] } }, ) with response: scopes = self.fetcher.get_tenants() self.assertCountEqual(scopes, [ 'scope_id1', 'scope_id2', 'scope_id3', ]) def test_get_tenants_raises_exception(self): no_response = mock.patch( 'cloudkitty.common.prometheus_client.PrometheusClient.get_instant', return_value={}, ) with no_response: self.assertRaises( prometheus.PrometheusFetcherError, self.fetcher.get_tenants, ) def test_get_tenants_raises_exception2(self): invalid_response = mock.patch( 'cloudkitty.common.prometheus_client.PrometheusClient.get_instant', side_effect=PrometheusResponseError, ) with invalid_response: self.assertRaises( prometheus.PrometheusFetcherError, self.fetcher.get_tenants, ) ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1727866639.2634869 cloudkitty-21.0.0/cloudkitty/tests/gabbi/0000775000175000017500000000000000000000000020411 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/tests/gabbi/__init__.py0000664000175000017500000000000000000000000022510 0ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/tests/gabbi/fixtures.py0000664000175000017500000004210700000000000022640 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright 2015 Objectif Libre # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
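# NOTE: Orientation for the gabbi fixtures defined below. Each fixture
# implements the gabbi start_fixture()/stop_fixture() pair; most of them
# patch a single dependency (stevedore collector/rating extensions,
# oslo.config and policy options, the RPC server, the storage backend, CORS
# configuration or the local timezone) or pre-load sample rated dataframes,
# so the YAML scenarios under gabbits/ can run against an in-memory
# CloudKitty API.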
# import abc import collections import datetime import decimal import os from unittest import mock from dateutil import tz from gabbi import fixture from oslo_config import cfg from oslo_config import fixture as conf_fixture from oslo_db.sqlalchemy import utils import oslo_messaging from oslo_messaging import conffixture from oslo_policy import opts as policy_opts from stevedore import driver from stevedore import extension import webob.dec from wsme import types as wtypes import wsmeext.pecan as wsme_pecan from cloudkitty.api import app from cloudkitty.api import middleware from cloudkitty.api.v2.dataframes import dataframes as v2_api_dataframes from cloudkitty.api.v2.summary import summary as v2_api_summary from cloudkitty import dataframe from cloudkitty import db from cloudkitty.db import api as ck_db_api from cloudkitty import messaging from cloudkitty import rating from cloudkitty import storage from cloudkitty.storage.v1.sqlalchemy import models from cloudkitty import storage_state from cloudkitty import tests from cloudkitty.tests.storage.v2 import influx_utils from cloudkitty.tests import utils as test_utils from cloudkitty import utils as ck_utils from cloudkitty.utils import tz as tzutils INITIAL_DT = datetime.datetime(2015, 1, 1, tzinfo=tz.tzutc()) class UUIDFixture(fixture.GabbiFixture): def start_fixture(self): FAKE_UUID = '6c1b8a30-797f-4b7e-ad66-9879b79059fb' patcher = mock.patch( 'oslo_utils.uuidutils.generate_uuid', return_value=FAKE_UUID) patcher.start() self.patcher = patcher def stop_fixture(self): self.patcher.stop() class BaseExtensionFixture(fixture.GabbiFixture, metaclass=abc.ABCMeta): klass = None namespace = None stevedore_mgr = None assert_args = {} @abc.abstractmethod def setup_fake_modules(self): pass def start_fixture(self): fake_extensions = self.setup_fake_modules() self.mock = mock.patch(self.klass) fake_mgr = self.stevedore_mgr.make_test_instance( fake_extensions, self.namespace) self.patch = self.mock.start() self.patch.return_value = fake_mgr def stop_fixture(self): self.patch.assert_called_with( self.namespace, **self.assert_args) self.mock.stop() class CollectorExtensionsFixture(BaseExtensionFixture): klass = 'stevedore.driver.DriverManager' namespace = 'cloudkitty.collector.backends' stevedore_mgr = driver.DriverManager assert_args = { 'invoke_kwds': {'period': 3600}, 'invoke_on_load': True} def setup_fake_modules(self): def fake_metric(start, end=None, project_id=None, q_filter=None): return None fake_module1 = tests.FakeCollectorModule() fake_module1.collector_name = 'fake1' fake_module1.get_compute = fake_metric fake_module2 = tests.FakeCollectorModule() fake_module2.collector_name = 'fake2' fake_module2.get_volume = fake_metric fake_module3 = tests.FakeCollectorModule() fake_module3.collector_name = 'fake3' fake_module3.get_compute = fake_metric fake_extensions = [ extension.Extension( 'fake1', 'cloudkitty.tests.FakeCollectorModule1', None, fake_module1), extension.Extension( 'fake2', 'cloudkitty.tests.FakeCollectorModule2', None, fake_module2), extension.Extension( 'fake3', 'cloudkitty.tests.FakeCollectorModule3', None, fake_module3)] return fake_extensions[0] class RatingModulesFixture(BaseExtensionFixture): klass = 'stevedore.extension.ExtensionManager' namespace = 'cloudkitty.rating.processors' stevedore_mgr = extension.ExtensionManager assert_args = { 'invoke_on_load': True} def setup_fake_modules(self): class FakeConfigController(rating.RatingRestControllerBase): _custom_actions = { 'test': ['GET'] } @wsme_pecan.wsexpose(wtypes.text) def 
get_test(self): """Return the list of every mapping type available. """ return 'OK' fake_module1 = tests.FakeRatingModule() fake_module1.module_name = 'fake1' fake_module1.set_priority(3) fake_module2 = tests.FakeRatingModule() fake_module2.module_name = 'fake2' fake_module2.config_controller = FakeConfigController fake_module2.set_priority(1) fake_module3 = tests.FakeRatingModule() fake_module3.module_name = 'fake3' fake_module3.set_priority(2) fake_extensions = [ extension.Extension( 'fake1', 'cloudkitty.tests.FakeRatingModule1', None, fake_module1), extension.Extension( 'fake2', 'cloudkitty.tests.FakeRatingModule2', None, fake_module2), extension.Extension( 'fake3', 'cloudkitty.tests.FakeRatingModule3', None, fake_module3)] return fake_extensions class ConfigFixture(fixture.GabbiFixture): auth_strategy = 'noauth' def start_fixture(self): self.conf = None conf = conf_fixture.Config().conf policy_opts.set_defaults(conf) msg_conf = conffixture.ConfFixture(conf) msg_conf.transport_url = 'fake:/' conf.import_group('api', 'cloudkitty.api.app') conf.set_override('auth_strategy', self.auth_strategy) conf.set_override('connection', 'sqlite:///', 'database') conf.set_override('policy_file', os.path.abspath('etc/cloudkitty/policy.yaml'), group='oslo_policy') conf.set_override('api_paste_config', os.path.abspath( 'cloudkitty/tests/gabbi/gabbi_paste.ini') ) conf.import_group('storage', 'cloudkitty.storage') conf.set_override('backend', 'sqlalchemy', 'storage') conf.set_override('version', '1', 'storage') self.conf = conf self.conn = ck_db_api.get_instance() migration = self.conn.get_migration() migration.upgrade('head') def stop_fixture(self): if self.conf: self.conf.reset() with db.session_for_write() as session: engine = session.get_bind() engine.dispose() class ConfigFixtureStorageV2(ConfigFixture): def start_fixture(self): super(ConfigFixtureStorageV2, self).start_fixture() self.conf.set_override('backend', 'influxdb', 'storage') self.conf.set_override('version', '2', 'storage') class ConfigFixtureKeystoneAuth(ConfigFixture): auth_strategy = 'keystone' def start_fixture(self): # Mocking the middleware process_request which check for credentials # here, the only check done is that the hardcoded token is the one # send by the query. If not, 401, else 200. 
def _mock_proc_request(self, request): token = 'c93e3e31342e4e32ba201fd3d70878b5' http_code = 401 if 'X-Auth-Token' in request.headers and \ request.headers['X-Auth-Token'] == token: http_code = 200 return webob.Response( status_code=http_code, content_type='application/json' ) self._orig_func = middleware.auth_token.AuthProtocol.process_request middleware.auth_token.AuthProtocol.process_request = _mock_proc_request super(ConfigFixtureKeystoneAuth, self).start_fixture() def stop_fixture(self): super(ConfigFixtureKeystoneAuth, self).stop_fixture() middleware.auth_token.AuthProtocol.process_request = self._orig_func class BaseFakeRPC(fixture.GabbiFixture): endpoint = None def start_fixture(self): messaging.setup() target = oslo_messaging.Target(topic='cloudkitty', server=cfg.CONF.host, version='1.0') endpoints = [ self.endpoint() ] self.server = messaging.get_server(target, endpoints) self.server.start() def stop_fixture(self): self.server.stop() class ScopeStateResetFakeRPC(BaseFakeRPC): class FakeRPCEndpoint(object): target = oslo_messaging.Target(version='1.0') def reset_state(self, ctxt, res_data): pass endpoint = FakeRPCEndpoint class QuoteFakeRPC(BaseFakeRPC): class FakeRPCEndpoint(object): target = oslo_messaging.Target(namespace='rating', version='1.0') def quote(self, ctxt, res_data): return str(1.0) endpoint = FakeRPCEndpoint class BaseStorageDataFixture(fixture.GabbiFixture): def create_fake_data(self, begin, end, project_id): cpu_point = dataframe.DataPoint( unit="nothing", qty=1, groupby={"fake_meta": 1.0, "project_id": project_id}, metadata={"dummy": True}, price=decimal.Decimal('1.337'), ) image_point = dataframe.DataPoint( unit="nothing", qty=1, groupby={"fake_meta": 1.0, "project_id": project_id}, metadata={"dummy": True}, price=decimal.Decimal('0.121'), ) data = [ dataframe.DataFrame( start=begin, end=end, usage=collections.OrderedDict({"cpu": [cpu_point, cpu_point]}), ), dataframe.DataFrame( start=begin, end=end, usage=collections.OrderedDict( {"image.size": [image_point, image_point]}), ), ] return data def start_fixture(self): auth = mock.patch( 'keystoneauth1.loading.load_auth_from_conf_options', return_value=dict()) session = mock.patch( 'keystoneauth1.loading.load_session_from_conf_options', return_value=dict()) with auth: with session: self.storage = storage.get_storage(conf=test_utils.load_conf()) self.storage.init() self.initialize_data() def stop_fixture(self): model = models.RatedDataFrame with db.session_for_write() as session: q = utils.model_query( model, session) q.delete() class StorageDataFixture(BaseStorageDataFixture): def initialize_data(self): nodata_duration = (24 * 3 + 12) * 3600 hour_delta = datetime.timedelta(seconds=3600) tenant_list = ['8f82cc70-e50c-466e-8624-24bdea811375', '7606a24a-b8ad-4ae0-be6c-3d7a41334a2e'] data_dt = INITIAL_DT + datetime.timedelta( seconds=nodata_duration + 3600) data_duration = datetime.timedelta(seconds=(24 * 2 + 8) * 3600) iter_dt = data_dt while iter_dt < data_dt + data_duration: data = self.create_fake_data( iter_dt, iter_dt + hour_delta, tenant_list[0]) self.storage.push(data, tenant_list[0]) iter_dt += hour_delta iter_dt = data_dt while iter_dt < data_dt + data_duration / 2: data = self.create_fake_data( iter_dt, iter_dt + hour_delta, tenant_list[1]) self.storage.push(data, tenant_list[1]) iter_dt += hour_delta class NowStorageDataFixture(BaseStorageDataFixture): def initialize_data(self): dt = tzutils.get_month_start(naive=True).replace(tzinfo=tz.tzutc()) hour_delta = datetime.timedelta(seconds=3600) limit = dt + 
hour_delta * 12 while dt < limit: project_id = '3d9a1b33-482f-42fd-aef9-b575a3da9369' data = self.create_fake_data(dt, dt + hour_delta, project_id) self.storage.push(data, project_id) dt += hour_delta class ScopeStateFixture(fixture.GabbiFixture): def start_fixture(self): self.sm = storage_state.StateManager() self.sm.init() data = [ ('aaaa', datetime.datetime(2019, 1, 1), 'fet1', 'col1', 'key1'), ('bbbb', datetime.datetime(2019, 2, 2), 'fet1', 'col1', 'key2'), ('cccc', datetime.datetime(2019, 3, 3), 'fet1', 'col2', 'key1'), ('dddd', datetime.datetime(2019, 4, 4), 'fet1', 'col2', 'key2'), ('eeee', datetime.datetime(2019, 5, 5), 'fet2', 'col1', 'key1'), ('ffff', datetime.datetime(2019, 6, 6), 'fet2', 'col1', 'key2'), ('gggg', datetime.datetime(2019, 6, 6), 'fet2', 'col2', 'key1'), ('hhhh', datetime.datetime(2019, 6, 6), 'fet2', 'col2', 'key2'), ] for d in data: self.sm.set_state( d[0], d[1], fetcher=d[2], collector=d[3], scope_key=d[4]) def stop_fixture(self): with db.session_for_write() as session: q = utils.model_query( self.sm.model, session) q.delete() class CORSConfigFixture(fixture.GabbiFixture): """Inject mock configuration for the CORS middleware.""" def start_fixture(self): # Here we monkeypatch GroupAttr.__getattr__, necessary because the # paste.ini method of initializing this middleware creates its own # ConfigOpts instance, bypassing the regular config fixture. def _mock_getattr(instance, key): if key != 'allowed_origin': return self._original_call_method(instance, key) return "http://valid.example.com" self._original_call_method = cfg.ConfigOpts.GroupAttr.__getattr__ cfg.ConfigOpts.GroupAttr.__getattr__ = _mock_getattr def stop_fixture(self): """Remove the monkeypatch.""" cfg.ConfigOpts.GroupAttr.__getattr__ = self._original_call_method class MetricsConfFixture(fixture.GabbiFixture): """Inject Metrics configuration mock to the get_metrics_conf() function""" def start_fixture(self): self._original_function = ck_utils.load_conf ck_utils.load_conf = mock.Mock( return_value=tests.samples.METRICS_CONF, ) def stop_fixture(self): """Remove the get_metrics_conf() monkeypatch.""" ck_utils.load_conf = self._original_function class NowInfluxStorageDataFixture(NowStorageDataFixture): def start_fixture(self): cli = influx_utils.FakeInfluxClient() st = storage.get_storage() st._conn = cli self._get_storage_patch = mock.patch( 'cloudkitty.storage.get_storage', new=lambda **kw: st, ) self._get_storage_patch.start() v2_api_summary.Summary.reload() v2_api_dataframes.DataFrameList.reload() super(NowInfluxStorageDataFixture, self).start_fixture() def initialize_data(self): data = test_utils.generate_v2_storage_data( start=tzutils.get_month_start(), end=tzutils.localized_now().replace(hour=0), ) self.storage.push([data]) def stop_fixture(self): self._get_storage_patch.stop() class InfluxStorageDataFixture(StorageDataFixture): def start_fixture(self): cli = influx_utils.FakeInfluxClient() st = storage.get_storage() st._conn = cli self._get_storage_patch = mock.patch( 'cloudkitty.storage.get_storage', new=lambda **kw: st, ) self._get_storage_patch.start() v2_api_summary.Summary.reload() v2_api_dataframes.DataFrameList.reload() super(InfluxStorageDataFixture, self).start_fixture() def stop_fixture(self): self._get_storage_patch.stop() class UTCFixture(fixture.GabbiFixture): """Set the local timezone to UTC""" def start_fixture(self): self._tzmock = mock.patch('cloudkitty.utils.tz._LOCAL_TZ', tz.tzutc()) self._tzmock.start() def stop_fixture(self): self._tzmock.stop() def setup_app(): 
messaging.setup() # FIXME(sheeprine): Extension fixtures are interacting with transformers # loading, since collectors are not needed here we shunt them no_collector = mock.patch( 'cloudkitty.collector.get_collector', return_value=None) with no_collector: return app.load_app() ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/tests/gabbi/gabbi_paste.ini0000664000175000017500000000130100000000000023345 0ustar00zuulzuul00000000000000[pipeline:cloudkitty+noauth] pipeline = cors http_proxy_to_wsgi request_id ck_api_v1 [pipeline:cloudkitty+keystone] pipeline = cors http_proxy_to_wsgi request_id authtoken ck_api_v1 [app:ck_api_v1] paste.app_factory = cloudkitty.api.app:app_factory [filter:authtoken] acl_public_routes = /, /v1 paste.filter_factory = cloudkitty.api.middleware:AuthTokenMiddleware.factory [filter:request_id] paste.filter_factory = oslo_middleware:RequestId.factory [filter:cors] paste.filter_factory = oslo_middleware.cors:filter_factory oslo_config_project = cloudkitty [filter:http_proxy_to_wsgi] paste.filter_factory = oslo_middleware.http_proxy_to_wsgi:HTTPProxyToWSGI.factory oslo_config_project = cloudkitty ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1727866639.2634869 cloudkitty-21.0.0/cloudkitty/tests/gabbi/gabbits/0000775000175000017500000000000000000000000022024 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/tests/gabbi/gabbits/ks_middleware_auth.yaml0000664000175000017500000000105300000000000026542 0ustar00zuulzuul00000000000000fixtures: - ConfigFixtureKeystoneAuth - StorageDataFixture - NowStorageDataFixture tests: - name: Can't query api without token GET: /v1/storage/dataframes status: 401 - name: Can't query api with non valid token GET: /v1/storage/dataframes status: 401 request_headers: X-Auth-Token: bf08a02dbd52406b94fcc2447bb3e266 - name: Can query api with valid token GET: /v1/storage/dataframes status: 200 request_headers: X-Auth-Token: c93e3e31342e4e32ba201fd3d70878b5 ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/tests/gabbi/gabbits/ks_middleware_cors.yaml0000664000175000017500000000202500000000000026547 0ustar00zuulzuul00000000000000fixtures: - ConfigFixture - CORSConfigFixture tests: - name: valid cors options OPTIONS: / status: 200 request_headers: origin: http://valid.example.com access-control-request-method: GET response_headers: access-control-allow-origin: http://valid.example.com - name: invalid cors options OPTIONS: / status: 200 request_headers: origin: http://invalid.example.com access-control-request-method: GET response_forbidden_headers: - access-control-allow-origin - name: valid cors get GET: / status: 200 request_headers: origin: http://valid.example.com access-control-request-method: GET response_headers: access-control-allow-origin: http://valid.example.com - name: invalid cors get GET: / status: 200 request_headers: origin: http://invalid.example.com response_forbidden_headers: - access-control-allow-origin ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/tests/gabbi/gabbits/no_auth.yaml0000664000175000017500000000026600000000000024351 0ustar00zuulzuul00000000000000fixtures: - ConfigFixture - StorageDataFixture - 
NowStorageDataFixture tests: - name: Can query API without auth url: /v1/storage/dataframes status: 200 ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/tests/gabbi/gabbits/root-v1-storage.yaml0000664000175000017500000000057100000000000025664 0ustar00zuulzuul00000000000000fixtures: - ConfigFixture tests: - name: test if / is publicly available url: / status: 200 - name: test if HEAD / is available url: / status: 200 method: HEAD - name: test that only one APIs is available url: / status: 200 response_json_paths: $.versions.`len`: 1 $.versions[0].id: v1 $.versions[0].status: CURRENT ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/tests/gabbi/gabbits/root-v2-storage.yaml0000664000175000017500000000070100000000000025660 0ustar00zuulzuul00000000000000fixtures: - ConfigFixtureStorageV2 tests: - name: test if / is publicly available url: / status: 200 - name: test if HEAD / is available url: / status: 200 method: HEAD - name: test if both APIs are available url: / status: 200 response_json_paths: $.versions.`len`: 2 $.versions[0].id: v1 $.versions[1].id: v2 $.versions[0].status: CURRENT $.versions[1].status: EXPERIMENTAL ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/tests/gabbi/gabbits/v1-collector.yaml0000664000175000017500000000646400000000000025234 0ustar00zuulzuul00000000000000fixtures: - ConfigFixture tests: # States - name: check collector is disabled by default url: /v1/collector/fake1/states status: 200 response_json_paths: $.enabled: false $.name: "fake1" - name: enable collector url: /v1/collector/fake1/states method: PUT request_headers: content-type: application/json x-roles: admin data: name: "fake1" enabled: true status: 200 response_json_paths: $.enabled: true $.name: "fake1" - name: check collector state isolation url: /v1/collector/fake2/states status: 200 response_json_paths: $.enabled: false $.name: "fake2" - name: disable collector url: /v1/collector/fake1/states method: PUT request_headers: content-type: application/json x-roles: admin data: name: "fake1" enabled: false status: 200 response_json_paths: $.enabled: false $.name: "fake1" # Mappings - name: get all mappings (empty) url: /v1/collector/mappings status: 200 response_json_paths: $.mappings: [] - name: try to get an unknown mapping url: /v1/collector/mappings/notfound status: 404 response_strings: - "No mapping for service: notfound" - name: try to delete an unknown mapping url: /v1/collector/mappings/notfound method: DELETE status: 404 response_strings: - "No mapping for service: notfound" - name: create mapping url: /v1/collector/mappings/fake1/metric1 method: POST request_headers: content-type: application/json x-roles: admin status: 200 response_json_paths: $.collector: "fake1" $.service: "metric1" - name: get all mappings url: /v1/collector/mappings status: 200 response_json_paths: $.mappings[0].collector: "fake1" $.mappings[0].service: "metric1" - name: create second mapping url: /v1/collector/mappings/fake2/metric8 method: POST request_headers: content-type: application/json x-roles: admin status: 200 response_json_paths: $.collector: "fake2" $.service: "metric8" - name: get all mappings filtering on collector fake1 url: /v1/collector/mappings?collector=fake1 status: 200 response_json_paths: $.mappings[0].collector: "fake1" $.mappings[0].service: 
"metric1" - name: get all mappings filtering on collector fake2 url: /v1/collector/mappings?collector=fake2 status: 200 response_json_paths: $.mappings[0].collector: "fake2" $.mappings[0].service: "metric8" - name: get all mappings with no filtering url: /v1/collector/mappings status: 200 response_json_paths: $.mappings.`len`: 2 $.mappings[0].collector: "fake1" $.mappings[0].service: "metric1" $.mappings[1].collector: "fake2" $.mappings[1].service: "metric8" - name: get a mapping filtering on service metric8 url: /v1/collector/mappings/metric8 status: 200 response_json_paths: $.collector: "fake2" $.service: "metric8" - name: delete a mapping url: /v1/collector/mappings/metric1 method: DELETE status: 204 - name: check the mapping got deleted url: /v1/collector/mappings/metric1 status: 404 response_strings: - "No mapping for service: metric1" ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/tests/gabbi/gabbits/v1-info.yaml0000664000175000017500000000265600000000000024200 0ustar00zuulzuul00000000000000fixtures: - ConfigFixture - MetricsConfFixture tests: - name: get config url: /v1/info/config status: 200 response_json_paths: $.metrics.`len`: 7 $.metrics['cpu'].unit: instance $.metrics['image.size'].unit: MiB $.metrics['volume.size'].unit: GiB $.metrics['network.incoming.bytes'].unit: MB $.metrics['network.outgoing.bytes'].unit: MB $.metrics['ip.floating'].unit: ip $.metrics['radosgw.objects.size'].unit: GiB - name: get metrics info url: /v1/info/metrics status: 200 response_json_paths: $.metrics.`len`: 7 $.metrics[/metric_id][0].metric_id: image.size $.metrics[/metric_id][0].unit: MiB $.metrics[/metric_id][1].metric_id: instance $.metrics[/metric_id][1].unit: instance $.metrics[/metric_id][2].metric_id: ip.floating $.metrics[/metric_id][2].unit: ip $.metrics[/metric_id][3].metric_id: network.incoming.bytes $.metrics[/metric_id][3].unit: MB $.metrics[/metric_id][4].metric_id: network.outgoing.bytes $.metrics[/metric_id][4].unit: MB $.metrics[/metric_id][5].metric_id: radosgw.objects.size $.metrics[/metric_id][5].unit: GiB $.metrics[/metric_id][6].metric_id: volume.size $.metrics[/metric_id][6].unit: GiB - name: get cpu metric info url: /v1/info/metrics/instance status: 200 response_json_paths: $.metric_id: instance $.unit: instance ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/tests/gabbi/gabbits/v1-rating.yaml0000664000175000017500000001007000000000000024516 0ustar00zuulzuul00000000000000fixtures: - ConfigFixture - RatingModulesFixture - QuoteFakeRPC tests: - name: reload list of modules available url: /v1/rating/reload_modules status: 204 - name: list all modules available url: /v1/rating/modules status: 200 response_json_paths: $.modules.`len`: 3 $.modules[0].priority: 3 $.modules[0].module_id: "fake1" $.modules[0].enabled: false $.modules[0].description: "fake rating module" $.modules[0].hot-config: false $.modules[1].priority: 1 $.modules[1].module_id: "fake2" $.modules[1].enabled: false $.modules[1].description: "fake rating module" $.modules[1].hot-config: false $.modules[2].priority: 2 $.modules[2].module_id: "fake3" $.modules[2].enabled: false $.modules[2].description: "fake rating module" $.modules[2].hot-config: false - name: get information of one module url: /v1/rating/modules/fake2 status: 200 response_json_paths: $.priority: 1 $.module_id: "fake2" $.enabled: false $.description: "fake rating module" 
$.hot-config: false - name: get information of a unknown module url: /v1/rating/modules/fakb status: 404 response_strings: - "Module not found." - name: change priority of a module url: /v1/rating/modules/fake3 method: PUT request_headers: content-type: application/json x-roles: admin data: module_id: "fake3" priority: 5 status: 302 response_headers: location: "$SCHEME://$NETLOC/v1/rating/modules/fake3" - name: get information of the modified module (priority) url: $LOCATION status: 200 response_json_paths: $.priority: 5 $.module_id: "fake3" $.enabled: false $.description: "fake rating module" $.hot-config: false - name: change enabled status of a module url: /v1/rating/modules/fake3 method: PUT request_headers: content-type: application/json x-roles: admin data: module_id: "fake3" enabled: true status: 302 response_headers: location: "$SCHEME://$NETLOC/v1/rating/modules/fake3" - name: get information of the modified module (status) url: $LOCATION status: 200 response_json_paths: $.priority: 5 $.module_id: "fake3" $.enabled: true $.description: "fake rating module" $.hot-config: false - name: change status and priority of a module url: /v1/rating/modules/fake3 method: PUT request_headers: content-type: application/json x-roles: admin data: module_id: "fake3" priority: 3 enabled: false status: 302 response_headers: location: "$SCHEME://$NETLOC/v1/rating/modules/fake3" - name: get information of the modified module (both) url: $LOCATION status: 200 response_json_paths: $.priority: 3 $.module_id: "fake3" $.enabled: false $.description: "fake rating module" $.hot-config: false - name: get a quote for a resource description url: /v1/rating/quote method: POST request_headers: content-type: application/json x-roles: admin data: resources: - service: "cpu" volume: "1.0" desc: test: 1 status: 200 response_strings: - "1.0" - name: module without custom API should use notconfigurable controller (GET) url: /v1/rating/module_config/fake1 status: 409 response_strings: - "Module is not configurable" - name: module without custom API should use notconfigurable controller (POST) url: /v1/rating/module_config/fake1 method: POST status: 409 response_strings: - "Module is not configurable" - name: module without custom API should use notconfigurable controller (PUT) url: /v1/rating/module_config/fake1 method: PUT status: 409 response_strings: - "Module is not configurable" - name: verify module exposes its custom API url: /v1/rating/module_config/fake2/test status: 200 response_strings: - "OK" ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/tests/gabbi/gabbits/v1-report.yaml0000664000175000017500000001620000000000000024546 0ustar00zuulzuul00000000000000fixtures: - ConfigFixture - StorageDataFixture - NowStorageDataFixture tests: - name: get period with two tenants url: /v1/report/tenants query_parameters: begin: "2015-01-04T00:00:00" end: "2015-01-05T00:00:00" status: 200 response_strings: - "8f82cc70-e50c-466e-8624-24bdea811375" - "7606a24a-b8ad-4ae0-be6c-3d7a41334a2e" - name: by default give tenants for the current month url: /v1/report/tenants status: 200 response_strings: - "3d9a1b33-482f-42fd-aef9-b575a3da9369" - name: get period with no tenants url: /v1/report/tenants query_parameters: begin: "2015-02-01T00:00:00" end: "2015-02-02T00:00:00" status: 200 response_strings: - "[]" - name: get total when begin time bigger than end time url: /v1/report/total query_parameters: begin: "2015-02-04T00:00:00" end: 
"2015-01-01T00:00:00" tenant_id: "8f82cc70-e50c-466e-8624-24bdea811375" status: 200 response_strings: - "0" - name: get total for a period url: /v1/report/total query_parameters: begin: "2015-01-01T00:00:00" end: "2015-02-04T00:00:00" status: 200 response_strings: - "244.944" - name: get total for a period filtering on first tenant url: /v1/report/total query_parameters: begin: "2015-01-01T00:00:00" end: "2015-02-04T00:00:00" tenant_id: "8f82cc70-e50c-466e-8624-24bdea811375" status: 200 response_strings: - "163.296" - name: get total for a period filtering on second tenant url: /v1/report/total query_parameters: begin: "2015-01-01T00:00:00" end: "2015-02-04T00:00:00" tenant_id: "7606a24a-b8ad-4ae0-be6c-3d7a41334a2e" status: 200 response_strings: - "81.648" - name: get total for a period filtering on compute service url: /v1/report/total query_parameters: begin: "2015-01-01T00:00:00" end: "2015-02-04T00:00:00" service: "cpu" status: 200 response_strings: - "224.616" - name: get total for a period filtering on image service url: /v1/report/total query_parameters: begin: "2015-01-01T00:00:00" end: "2015-02-04T00:00:00" service: "image.size" status: 200 response_strings: - "20.328" - name: get total for a period filtering on compute service and tenant url: /v1/report/total query_parameters: begin: "2015-01-01T00:00:00" end: "2015-02-04T00:00:00" tenant_id: "7606a24a-b8ad-4ae0-be6c-3d7a41334a2e" service: "cpu" status: 200 response_strings: - "74.872" - name: get total for a period with no data url: /v1/report/total query_parameters: begin: "2015-02-01T00:00:00" end: "2015-02-02T00:00:00" status: 200 response_strings: - "0" - name: get summary for a period of each tenant url: /v1/report/summary query_parameters: begin: "2015-01-01T00:00:00" end: "2015-02-04T00:00:00" groupby: "tenant_id" status: 200 response_json_paths: $.summary.`len`: 2 $.summary[0].rate: "81.648" $.summary[0].res_type: "ALL" $.summary[0].tenant_id: "7606a24a-b8ad-4ae0-be6c-3d7a41334a2e" $.summary[0].begin: "2015-01-01T00:00:00" $.summary[0].end: "2015-02-04T00:00:00" $.summary[1].rate: "163.296" $.summary[1].res_type: "ALL" $.summary[1].tenant_id: "8f82cc70-e50c-466e-8624-24bdea811375" $.summary[1].begin: "2015-01-01T00:00:00" $.summary[1].end: "2015-02-04T00:00:00" - name: get summary for a period of each tenant filtering on compute service url: /v1/report/summary query_parameters: begin: "2015-01-01T00:00:00" end: "2015-02-04T00:00:00" service: "cpu" groupby: "tenant_id" status: 200 response_json_paths: $.summary.`len`: 2 $.summary[0].rate: "74.872" $.summary[0].tenant_id: "7606a24a-b8ad-4ae0-be6c-3d7a41334a2e" $.summary[0].res_type: "cpu" $.summary[0].begin: "2015-01-01T00:00:00" $.summary[0].end: "2015-02-04T00:00:00" $.summary[1].rate: "149.744" $.summary[1].tenant_id: "8f82cc70-e50c-466e-8624-24bdea811375" $.summary[1].res_type: "cpu" $.summary[1].begin: "2015-01-01T00:00:00" $.summary[1].end: "2015-02-04T00:00:00" - name: get summary for a period of each service url: /v1/report/summary query_parameters: begin: "2015-01-01T00:00:00" end: "2015-02-04T00:00:00" groupby: "res_type" status: 200 response_json_paths: $.summary.`len`: 2 $.summary[/res_type][0].rate: "224.616" $.summary[/res_type][0].res_type: "cpu" $.summary[/res_type][0].tenant_id: "ALL" $.summary[/res_type][0].begin: "2015-01-01T00:00:00" $.summary[/res_type][0].end: "2015-02-04T00:00:00" $.summary[/res_type][1].rate: "20.328" $.summary[/res_type][1].res_type: "image.size" $.summary[/res_type][1].tenant_id: "ALL" $.summary[/res_type][1].begin: 
"2015-01-01T00:00:00" $.summary[/res_type][1].end: "2015-02-04T00:00:00" - name: get summary for a period of each service filtering on first tenant url: /v1/report/summary query_parameters: begin: "2015-01-01T00:00:00" end: "2015-02-04T00:00:00" tenant_id: "8f82cc70-e50c-466e-8624-24bdea811375" groupby: "res_type" status: 200 response_json_paths: $.summary.`len`: 2 $.summary[/res_type][0].rate: "149.744" $.summary[/res_type][0].tenant_id: "8f82cc70-e50c-466e-8624-24bdea811375" $.summary[/res_type][0].res_type: "cpu" $.summary[/res_type][0].begin: "2015-01-01T00:00:00" $.summary[/res_type][0].end: "2015-02-04T00:00:00" $.summary[/res_type][1].rate: "13.552" $.summary[/res_type][1].tenant_id: "8f82cc70-e50c-466e-8624-24bdea811375" $.summary[/res_type][1].res_type: "image.size" $.summary[/res_type][1].begin: "2015-01-01T00:00:00" $.summary[/res_type][1].end: "2015-02-04T00:00:00" - name: get summary for a period of each service and tenant url: /v1/report/summary query_parameters: begin: "2015-01-01T00:00:00" end: "2015-02-04T00:00:00" groupby: "res_type,tenant_id" status: 200 response_json_paths: $.summary.`len`: 4 $.summary[0].rate: "6.776" $.summary[0].res_type: "image.size" $.summary[0].tenant_id: "7606a24a-b8ad-4ae0-be6c-3d7a41334a2e" $.summary[0].begin: "2015-01-01T00:00:00" $.summary[0].end: "2015-02-04T00:00:00" $.summary[1].rate: "13.552" $.summary[1].res_type: "image.size" $.summary[1].tenant_id: "8f82cc70-e50c-466e-8624-24bdea811375" $.summary[1].begin: "2015-01-01T00:00:00" $.summary[1].end: "2015-02-04T00:00:00" $.summary[2].rate: "74.872" $.summary[2].res_type: "cpu" $.summary[2].tenant_id: "7606a24a-b8ad-4ae0-be6c-3d7a41334a2e" $.summary[2].begin: "2015-01-01T00:00:00" $.summary[2].end: "2015-02-04T00:00:00" $.summary[3].rate: "149.744" $.summary[3].res_type: "cpu" $.summary[3].tenant_id: "8f82cc70-e50c-466e-8624-24bdea811375" $.summary[3].begin: "2015-01-01T00:00:00" $.summary[3].end: "2015-02-04T00:00:00" ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/tests/gabbi/gabbits/v1-storage.yaml0000664000175000017500000003132100000000000024700 0ustar00zuulzuul00000000000000fixtures: - ConfigFixture - StorageDataFixture - NowStorageDataFixture tests: - name: fetch period with no data url: /v1/storage/dataframes query_parameters: begin: "2015-01-01T00:00:00" end: "2015-01-04T00:00:00" status: 200 response_json_paths: $.dataframes.`len`: 0 - name: fetch period with no data filtering on tenant_id url: /v1/storage/dataframes query_parameters: begin: "2015-01-01T00:00:00" end: "2015-01-04T00:00:00" tenant_id: "8f82cc70-e50c-466e-8624-24bdea811375" status: 200 response_json_paths: $.dataframes.`len`: 0 - name: fetch data for the first tenant without begin time url: /v1/storage/dataframes query_parameters: end: "2015-01-04T00:00:00" tenant_id: "8f82cc70-e50c-466e-8624-24bdea811375" status: 200 response_json_paths: $.dataframes.`len`: 0 - name: fetch data for the first tenant without end time url: /v1/storage/dataframes query_parameters: begin: "2015-01-04T00:00:00" tenant_id: "8f82cc70-e50c-466e-8624-24bdea811375" status: 200 response_json_paths: $.dataframes.`len`: 224 - name: fetch data for the first tenant without begin and end time url: /v1/storage/dataframes query_parameters: tenant_id: "3d9a1b33-482f-42fd-aef9-b575a3da9369" status: 200 response_json_paths: $.dataframes.`len`: 48 - name: fetch data for the first tenant when begin time bigger than end time url: /v1/storage/dataframes query_parameters: 
begin: "2015-01-04T14:00:00" end: "2015-01-04T13:00:00" tenant_id: "8f82cc70-e50c-466e-8624-24bdea811375" status: 200 response_json_paths: $.dataframes.`len`: 0 - name: fetch data for the first tenant url: /v1/storage/dataframes query_parameters: begin: "2015-01-04T13:00:00" end: "2015-01-04T14:00:00" tenant_id: "8f82cc70-e50c-466e-8624-24bdea811375" status: 200 response_json_paths: $.dataframes.`len`: 4 $.dataframes[0].tenant_id: "8f82cc70-e50c-466e-8624-24bdea811375" $.dataframes[0].begin: "2015-01-04T13:00:00" $.dataframes[0].end: "2015-01-04T14:00:00" $.dataframes[0].resources.`len`: 1 $.dataframes[0].resources[0].volume: "1" $.dataframes[0].resources[0].rating: "1.337" $.dataframes[0].resources[0].service: "cpu" $.dataframes[0].resources[0].desc.dummy: True $.dataframes[0].resources[0].desc.fake_meta: 1.0 $.dataframes[1].tenant_id: "8f82cc70-e50c-466e-8624-24bdea811375" $.dataframes[1].begin: "2015-01-04T13:00:00" $.dataframes[1].end: "2015-01-04T14:00:00" $.dataframes[1].resources.`len`: 1 $.dataframes[1].resources[0].volume: "1" $.dataframes[1].resources[0].rating: "1.337" $.dataframes[1].resources[0].service: "cpu" $.dataframes[1].resources[0].desc.dummy: True $.dataframes[1].resources[0].desc.fake_meta: 1.0 $.dataframes[2].tenant_id: "8f82cc70-e50c-466e-8624-24bdea811375" $.dataframes[2].begin: "2015-01-04T13:00:00" $.dataframes[2].end: "2015-01-04T14:00:00" $.dataframes[2].resources.`len`: 1 $.dataframes[2].resources[0].volume: "1" $.dataframes[2].resources[0].rating: "0.121" $.dataframes[2].resources[0].service: "image.size" $.dataframes[2].resources[0].desc.dummy: True $.dataframes[2].resources[0].desc.fake_meta: 1.0 $.dataframes[3].tenant_id: "8f82cc70-e50c-466e-8624-24bdea811375" $.dataframes[3].begin: "2015-01-04T13:00:00" $.dataframes[3].end: "2015-01-04T14:00:00" $.dataframes[3].resources.`len`: 1 $.dataframes[3].resources[0].volume: "1" $.dataframes[3].resources[0].rating: "0.121" $.dataframes[3].resources[0].service: "image.size" $.dataframes[3].resources[0].desc.dummy: True $.dataframes[3].resources[0].desc.fake_meta: 1.0 - name: fetch data for the second tenant url: /v1/storage/dataframes query_parameters: begin: "2015-01-04T13:00:00" end: "2015-01-04T14:00:00" tenant_id: "7606a24a-b8ad-4ae0-be6c-3d7a41334a2e" status: 200 response_json_paths: $.dataframes.`len`: 4 $.dataframes[0].tenant_id: "7606a24a-b8ad-4ae0-be6c-3d7a41334a2e" $.dataframes[0].begin: "2015-01-04T13:00:00" $.dataframes[0].end: "2015-01-04T14:00:00" $.dataframes[0].resources.`len`: 1 $.dataframes[0].resources[0].volume: "1" $.dataframes[0].resources[0].rating: "1.337" $.dataframes[0].resources[0].service: "cpu" $.dataframes[0].resources[0].desc.dummy: True $.dataframes[0].resources[0].desc.fake_meta: 1.0 $.dataframes[1].tenant_id: "7606a24a-b8ad-4ae0-be6c-3d7a41334a2e" $.dataframes[1].begin: "2015-01-04T13:00:00" $.dataframes[1].end: "2015-01-04T14:00:00" $.dataframes[1].resources.`len`: 1 $.dataframes[1].resources[0].volume: "1" $.dataframes[1].resources[0].rating: "1.337" $.dataframes[1].resources[0].service: "cpu" $.dataframes[1].resources[0].desc.dummy: True $.dataframes[1].resources[0].desc.fake_meta: 1.0 $.dataframes[2].tenant_id: "7606a24a-b8ad-4ae0-be6c-3d7a41334a2e" $.dataframes[2].begin: "2015-01-04T13:00:00" $.dataframes[2].end: "2015-01-04T14:00:00" $.dataframes[2].resources.`len`: 1 $.dataframes[2].resources[0].volume: "1" $.dataframes[2].resources[0].rating: "0.121" $.dataframes[2].resources[0].service: "image.size" $.dataframes[2].resources[0].desc.dummy: True 
$.dataframes[2].resources[0].desc.fake_meta: 1.0 $.dataframes[3].tenant_id: "7606a24a-b8ad-4ae0-be6c-3d7a41334a2e" $.dataframes[3].begin: "2015-01-04T13:00:00" $.dataframes[3].end: "2015-01-04T14:00:00" $.dataframes[3].resources.`len`: 1 $.dataframes[3].resources[0].volume: "1" $.dataframes[3].resources[0].rating: "0.121" $.dataframes[3].resources[0].service: "image.size" $.dataframes[3].resources[0].desc.dummy: True $.dataframes[3].resources[0].desc.fake_meta: 1.0 - name: fetch data for multiple tenants url: /v1/storage/dataframes query_parameters: begin: "2015-01-04T13:00:00" end: "2015-01-04T14:00:00" status: 200 response_json_paths: $.dataframes.`len`: 8 $.dataframes[0].tenant_id: "7606a24a-b8ad-4ae0-be6c-3d7a41334a2e" $.dataframes[0].begin: "2015-01-04T13:00:00" $.dataframes[0].end: "2015-01-04T14:00:00" $.dataframes[0].resources.`len`: 1 $.dataframes[0].resources[0].volume: "1" $.dataframes[0].resources[0].rating: "1.337" $.dataframes[0].resources[0].service: "cpu" $.dataframes[0].resources[0].desc.dummy: True $.dataframes[0].resources[0].desc.fake_meta: 1.0 $.dataframes[1].tenant_id: "7606a24a-b8ad-4ae0-be6c-3d7a41334a2e" $.dataframes[1].begin: "2015-01-04T13:00:00" $.dataframes[1].end: "2015-01-04T14:00:00" $.dataframes[1].resources.`len`: 1 $.dataframes[1].resources[0].volume: "1" $.dataframes[1].resources[0].rating: "1.337" $.dataframes[1].resources[0].service: "cpu" $.dataframes[1].resources[0].desc.dummy: True $.dataframes[1].resources[0].desc.fake_meta: 1.0 $.dataframes[2].tenant_id: "7606a24a-b8ad-4ae0-be6c-3d7a41334a2e" $.dataframes[2].begin: "2015-01-04T13:00:00" $.dataframes[2].end: "2015-01-04T14:00:00" $.dataframes[2].resources.`len`: 1 $.dataframes[2].resources[0].volume: "1" $.dataframes[2].resources[0].rating: "0.121" $.dataframes[2].resources[0].service: "image.size" $.dataframes[2].resources[0].desc.dummy: True $.dataframes[2].resources[0].desc.fake_meta: 1.0 $.dataframes[3].tenant_id: "7606a24a-b8ad-4ae0-be6c-3d7a41334a2e" $.dataframes[3].begin: "2015-01-04T13:00:00" $.dataframes[3].end: "2015-01-04T14:00:00" $.dataframes[3].resources.`len`: 1 $.dataframes[3].resources[0].volume: "1" $.dataframes[3].resources[0].rating: "0.121" $.dataframes[3].resources[0].service: "image.size" $.dataframes[3].resources[0].desc.dummy: True $.dataframes[3].resources[0].desc.fake_meta: 1.0 $.dataframes[0].tenant_id: "8f82cc70-e50c-466e-8624-24bdea811375" $.dataframes[0].begin: "2015-01-04T13:00:00" $.dataframes[0].end: "2015-01-04T14:00:00" $.dataframes[0].resources.`len`: 1 $.dataframes[0].resources[0].volume: "1" $.dataframes[0].resources[0].rating: "1.337" $.dataframes[0].resources[0].service: "cpu" $.dataframes[0].resources[0].desc.dummy: True $.dataframes[0].resources[0].desc.fake_meta: 1.0 $.dataframes[1].tenant_id: "8f82cc70-e50c-466e-8624-24bdea811375" $.dataframes[1].begin: "2015-01-04T13:00:00" $.dataframes[1].end: "2015-01-04T14:00:00" $.dataframes[1].resources.`len`: 1 $.dataframes[1].resources[0].volume: "1" $.dataframes[1].resources[0].rating: "1.337" $.dataframes[1].resources[0].service: "cpu" $.dataframes[1].resources[0].desc.dummy: True $.dataframes[1].resources[0].desc.fake_meta: 1.0 $.dataframes[2].tenant_id: "8f82cc70-e50c-466e-8624-24bdea811375" $.dataframes[2].begin: "2015-01-04T13:00:00" $.dataframes[2].end: "2015-01-04T14:00:00" $.dataframes[2].resources.`len`: 1 $.dataframes[2].resources[0].volume: "1" $.dataframes[2].resources[0].rating: "0.121" $.dataframes[2].resources[0].service: "image.size" $.dataframes[2].resources[0].desc.dummy: True 
$.dataframes[2].resources[0].desc.fake_meta: 1.0 $.dataframes[3].tenant_id: "8f82cc70-e50c-466e-8624-24bdea811375" $.dataframes[3].begin: "2015-01-04T13:00:00" $.dataframes[3].end: "2015-01-04T14:00:00" $.dataframes[3].resources.`len`: 1 $.dataframes[3].resources[0].volume: "1" $.dataframes[3].resources[0].rating: "0.121" $.dataframes[3].resources[0].service: "image.size" $.dataframes[3].resources[0].desc.dummy: True $.dataframes[3].resources[0].desc.fake_meta: 1.0 - name: fetch data filtering on cpu service and tenant url: /v1/storage/dataframes query_parameters: begin: "2015-01-04T13:00:00" end: "2015-01-04T14:00:00" resource_type: "cpu" tenant_id: "7606a24a-b8ad-4ae0-be6c-3d7a41334a2e" status: 200 response_json_paths: $.dataframes.`len`: 2 $.dataframes[0].tenant_id: "7606a24a-b8ad-4ae0-be6c-3d7a41334a2e" $.dataframes[0].begin: "2015-01-04T13:00:00" $.dataframes[0].end: "2015-01-04T14:00:00" $.dataframes[0].resources.`len`: 1 $.dataframes[0].resources[0].volume: "1" $.dataframes[0].resources[0].rating: "1.337" $.dataframes[0].resources[0].service: "cpu" $.dataframes[0].resources[0].desc.dummy: True $.dataframes[0].resources[0].desc.fake_meta: 1.0 $.dataframes[1].tenant_id: "7606a24a-b8ad-4ae0-be6c-3d7a41334a2e" $.dataframes[1].begin: "2015-01-04T13:00:00" $.dataframes[1].end: "2015-01-04T14:00:00" $.dataframes[1].resources.`len`: 1 $.dataframes[1].resources[0].volume: "1" $.dataframes[1].resources[0].rating: "1.337" $.dataframes[1].resources[0].service: "cpu" $.dataframes[1].resources[0].desc.dummy: True $.dataframes[1].resources[0].desc.fake_meta: 1.0 - name: fetch data filtering on image service and tenant url: /v1/storage/dataframes query_parameters: begin: "2015-01-04T13:00:00" end: "2015-01-04T14:00:00" resource_type: "image.size" tenant_id: "7606a24a-b8ad-4ae0-be6c-3d7a41334a2e" status: 200 response_json_paths: $.dataframes.`len`: 2 $.dataframes[0].tenant_id: "7606a24a-b8ad-4ae0-be6c-3d7a41334a2e" $.dataframes[0].begin: "2015-01-04T13:00:00" $.dataframes[0].end: "2015-01-04T14:00:00" $.dataframes[0].resources.`len`: 1 $.dataframes[0].resources[0].volume: "1" $.dataframes[0].resources[0].rating: "0.121" $.dataframes[0].resources[0].service: "image.size" $.dataframes[0].resources[0].desc.dummy: True $.dataframes[0].resources[0].desc.fake_meta: 1.0 $.dataframes[1].tenant_id: "7606a24a-b8ad-4ae0-be6c-3d7a41334a2e" $.dataframes[1].begin: "2015-01-04T13:00:00" $.dataframes[1].end: "2015-01-04T14:00:00" $.dataframes[1].resources.`len`: 1 $.dataframes[1].resources[0].volume: "1" $.dataframes[1].resources[0].rating: "0.121" $.dataframes[1].resources[0].service: "image.size" $.dataframes[1].resources[0].desc.dummy: True $.dataframes[1].resources[0].desc.fake_meta: 1.0 - name: fetch data filtering on service with no data and tenant url: /v1/storage/dataframes query_parameters: begin: "2015-01-04T13:00:00" end: "2015-01-04T14:00:00" resource_type: "volume" tenant_id: "7606a24a-b8ad-4ae0-be6c-3d7a41334a2e" status: 200 response_json_paths: $.dataframes.`len`: 0 ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/tests/gabbi/gabbits/v2-dataframes.yaml0000664000175000017500000004061600000000000025353 0ustar00zuulzuul00000000000000fixtures: - ConfigFixtureStorageV2 - InfluxStorageDataFixture - UTCFixture tests: - name: Push dataframes url: /v2/dataframes method: POST status: 204 request_headers: content-type: application/json data: dataframes: - period: begin: 20190723T122810Z end: 20190723T132810Z usage: 
metric_one: - vol: unit: GiB qty: 1.2 rating: price: 0.04 groupby: group_one: one group_two: two metadata: attr_one: one attr_two: two metric_two: - vol: unit: GiB qty: 1.2 rating: price: 0.04 groupby: group_one: one group_two: two metadata: attr_one: one attr_two: two - period: begin: 20190723T122810Z end: 20190723T132810Z usage: metric_one: - vol: unit: GiB qty: 1.2 rating: price: 0.04 groupby: group_one: one group_two: two metadata: attr_one: one attr_two: two metric_two: - vol: unit: GiB qty: 1.2 rating: price: 0.04 groupby: group_one: one group_two: two metadata: attr_one: one attr_two: two - name: Push dataframes with empty dataframes url: /v2/dataframes method: POST status: 400 request_headers: content-type: application/json data: dataframes: [] response_strings: - "Parameter dataframes must not be empty." - name: Push dataframes with missing key url: /v2/dataframes method: POST status: 400 request_headers: content-type: application/json data: dataframes: - period: begin: 20190723T122810Z end: 20190723T132810Z usage: metric_one: - vol: unit: GiB qty: 1.2 rating: price: 0.04 groupby: group_one: one group_two: two metadata: attr_one: one attr_two: two metric_two: - vol: unit: GiB qty: 1.2 rating: price: 0.04 groupby: group_one: one group_two: two metadata: attr_one: one attr_two: two - period: begin: 20190723T122810Z end: 20190723T132810Z - name: Push dataframe with malformed datapoint url: /v2/dataframes method: POST status: 400 request_headers: content-type: application/json data: dataframes: - period: begin: 20190723T122810Z end: 20190723T132810Z usage: metric_one: - vol: unit: GiB qty: 1.2 metric_two: - vol: unit: GiB qty: 1.2 rating: price: 0.04 groupby: group_one: one group_two: two metadata: attr_one: one attr_two: two - name: Push dataframe with malformed datetimes url: /v2/dataframes method: POST status: 400 request_headers: content-type: application/json data: dataframes: - period: begin: 20190723TZ end: 20190723TZ usage: metric_one: - vol: unit: GiB qty: 1.2 rating: price: 0.04 groupby: group_one: one group_two: two metadata: attr_one: one attr_two: two metric_two: - vol: unit: GiB qty: 1.2 rating: price: 0.04 groupby: group_one: one group_two: two metadata: attr_one: one attr_two: two - name: fetch period with no data url: /v2/dataframes query_parameters: begin: "2014-01-01T00:00:00" end: "2015-01-04T00:00:00" status: 404 response_strings: - "No resource found for provided filters." - name: fetch period with no data filtering on tenant_id url: /v2/dataframes query_parameters: begin: "2015-01-01T00:00:00" end: "2015-01-04T00:00:00" filters: "project_id:8f82cc70-e50c-466e-8624-24bdea811375" status: 404 response_strings: - "No resource found for provided filters." - name: fetch data for the first tenant without begin time url: /v2/dataframes query_parameters: end: "2015-01-04T00:00:00" filters: "project_id:8f82cc70-e50c-466e-8624-24bdea811375" status: 404 response_strings: - "No resource found for provided filters." - name: fetch data for the first tenant without end time url: /v2/dataframes query_parameters: begin: "2015-01-04T00:00:00" filters: "project_id:8f82cc70-e50c-466e-8624-24bdea811375" status: 200 response_json_paths: $.dataframes.`len`: 56 - name: fetch data for the first tenant without begin and end time url: /v2/dataframes query_parameters: filters: "project_id:3d9a1b33-482f-42fd-aef9-b575a3da9369" status: 404 response_strings: - "No resource found for provided filters." 
- name: fetch data for the first tenant when begin time bigger than end time url: /v2/dataframes query_parameters: begin: "2015-01-04T14:00:00" end: "2015-01-04T13:00:00" filters: "project_id:8f82cc70-e50c-466e-8624-24bdea811375" status: 404 response_strings: - "No resource found for provided filters." - name: fetch data for the first tenant url: /v2/dataframes query_parameters: begin: "2015-01-04T13:00:00" end: "2015-01-04T14:00:00" filters: "project_id:8f82cc70-e50c-466e-8624-24bdea811375" status: 200 response_json_paths: $: total: 4 dataframes: - usage: image.size: - vol: unit: nothing qty: 1 rating: price: 0.121 groupby: project_id: 8f82cc70-e50c-466e-8624-24bdea811375 fake_meta: 1 metadata: dummy: true - vol: unit: nothing qty: 1 rating: price: 0.121 groupby: project_id: 8f82cc70-e50c-466e-8624-24bdea811375 fake_meta: 1 metadata: dummy: true cpu: - vol: unit: nothing qty: 1 rating: price: 1.337 groupby: project_id: 8f82cc70-e50c-466e-8624-24bdea811375 fake_meta: 1 metadata: dummy: true - vol: unit: nothing qty: 1 rating: price: 1.337 groupby: project_id: 8f82cc70-e50c-466e-8624-24bdea811375 fake_meta: 1 metadata: dummy: true period: begin: '2015-01-04T13:00:00+00:00' end: '2015-01-04T14:00:00+00:00' - name: fetch data for the second tenant url: /v2/dataframes query_parameters: begin: "2015-01-04T13:00:00" end: "2015-01-04T14:00:00" filters: "project_id:7606a24a-b8ad-4ae0-be6c-3d7a41334a2e" status: 200 response_json_paths: $: total: 4 dataframes: - usage: image.size: - vol: unit: nothing qty: 1 rating: price: 0.121 groupby: project_id: 7606a24a-b8ad-4ae0-be6c-3d7a41334a2e fake_meta: 1 metadata: dummy: true - vol: unit: nothing qty: 1 rating: price: 0.121 groupby: project_id: 7606a24a-b8ad-4ae0-be6c-3d7a41334a2e fake_meta: 1 metadata: dummy: true cpu: - vol: unit: nothing qty: 1 rating: price: 1.337 groupby: project_id: 7606a24a-b8ad-4ae0-be6c-3d7a41334a2e fake_meta: 1 metadata: dummy: true - vol: unit: nothing qty: 1 rating: price: 1.337 groupby: project_id: 7606a24a-b8ad-4ae0-be6c-3d7a41334a2e fake_meta: 1 metadata: dummy: true period: begin: '2015-01-04T13:00:00+00:00' end: '2015-01-04T14:00:00+00:00' - name: fetch data for multiple tenants url: /v2/dataframes query_parameters: begin: "2015-01-04T13:00:00" end: "2015-01-04T14:00:00" status: 200 response_json_paths: $: total: 8 dataframes: - usage: image.size: - vol: unit: nothing qty: 1 rating: price: 0.121 groupby: project_id: 8f82cc70-e50c-466e-8624-24bdea811375 fake_meta: 1 metadata: dummy: true - vol: unit: nothing qty: 1 rating: price: 0.121 groupby: project_id: 8f82cc70-e50c-466e-8624-24bdea811375 fake_meta: 1 metadata: dummy: true - vol: unit: nothing qty: 1 rating: price: 0.121 groupby: project_id: 7606a24a-b8ad-4ae0-be6c-3d7a41334a2e fake_meta: 1 metadata: dummy: true - vol: unit: nothing qty: 1 rating: price: 0.121 groupby: project_id: 7606a24a-b8ad-4ae0-be6c-3d7a41334a2e fake_meta: 1 metadata: dummy: true cpu: - vol: unit: nothing qty: 1 rating: price: 1.337 groupby: project_id: 8f82cc70-e50c-466e-8624-24bdea811375 fake_meta: 1 metadata: dummy: true - vol: unit: nothing qty: 1 rating: price: 1.337 groupby: project_id: 8f82cc70-e50c-466e-8624-24bdea811375 fake_meta: 1 metadata: dummy: true - vol: unit: nothing qty: 1 rating: price: 1.337 groupby: project_id: 7606a24a-b8ad-4ae0-be6c-3d7a41334a2e fake_meta: 1 metadata: dummy: true - vol: unit: nothing qty: 1 rating: price: 1.337 groupby: project_id: 7606a24a-b8ad-4ae0-be6c-3d7a41334a2e fake_meta: 1 metadata: dummy: true period: begin: '2015-01-04T13:00:00+00:00' end: 
'2015-01-04T14:00:00+00:00' - name: fetch data filtering on cpu service and tenant url: /v2/dataframes query_parameters: begin: "2015-01-04T13:00:00" end: "2015-01-04T14:00:00" filters: "type:cpu" filters: "project_id:7606a24a-b8ad-4ae0-be6c-3d7a41334a2e" status: 200 response_json_paths: $: total: 4 dataframes: - usage: image.size: - vol: unit: nothing qty: 1 rating: price: 0.121 groupby: project_id: 7606a24a-b8ad-4ae0-be6c-3d7a41334a2e fake_meta: 1 metadata: dummy: true - vol: unit: nothing qty: 1 rating: price: 0.121 groupby: project_id: 7606a24a-b8ad-4ae0-be6c-3d7a41334a2e fake_meta: 1 metadata: dummy: true cpu: - vol: unit: nothing qty: 1 rating: price: 1.337 groupby: project_id: 7606a24a-b8ad-4ae0-be6c-3d7a41334a2e fake_meta: 1 metadata: dummy: true - vol: unit: nothing qty: 1 rating: price: 1.337 groupby: project_id: 7606a24a-b8ad-4ae0-be6c-3d7a41334a2e fake_meta: 1 metadata: dummy: true period: begin: '2015-01-04T13:00:00+00:00' end: '2015-01-04T14:00:00+00:00' - name: fetch data filtering on image service and tenant url: /v2/dataframes query_parameters: begin: "2015-01-04T13:00:00" end: "2015-01-04T14:00:00" filters: "type:image.size" filters: "project_id:7606a24a-b8ad-4ae0-be6c-3d7a41334a2e" status: 200 response_json_paths: $: total: 4 dataframes: - usage: image.size: - vol: unit: nothing qty: 1 rating: price: 0.121 groupby: project_id: 7606a24a-b8ad-4ae0-be6c-3d7a41334a2e fake_meta: 1 metadata: dummy: true - vol: unit: nothing qty: 1 rating: price: 0.121 groupby: project_id: 7606a24a-b8ad-4ae0-be6c-3d7a41334a2e fake_meta: 1 metadata: dummy: true cpu: - vol: unit: nothing qty: 1 rating: price: 1.337 groupby: project_id: 7606a24a-b8ad-4ae0-be6c-3d7a41334a2e fake_meta: 1 metadata: dummy: true - vol: unit: nothing qty: 1 rating: price: 1.337 groupby: project_id: 7606a24a-b8ad-4ae0-be6c-3d7a41334a2e fake_meta: 1 metadata: dummy: true period: begin: '2015-01-04T13:00:00+00:00' end: '2015-01-04T14:00:00+00:00' - name: fetch data filtering on service with no data and tenant url: /v2/dataframes query_parameters: begin: "2015-01-04T13:00:00" end: "2015-01-04T14:00:00" filters: "type:volume" filters: "project_id:7606a24a-b8ad-4ae0-be6c-3d7a41334a2e" status: 200 response_json_paths: $.dataframes.`len`: 1 ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/tests/gabbi/gabbits/v2-rating-modules.yaml0000664000175000017500000000522200000000000026170 0ustar00zuulzuul00000000000000fixtures: - ConfigFixtureStorageV2 - RatingModulesFixture - QuoteFakeRPC tests: - name: list all modules available url: /v2/rating/modules status: 200 response_json_paths: $.modules.`len`: 3 $.modules[0].priority: 3 $.modules[0].module_id: "fake1" $.modules[0].enabled: false $.modules[0].description: "fake rating module" $.modules[0].hot_config: false $.modules[1].priority: 1 $.modules[1].module_id: "fake2" $.modules[1].enabled: false $.modules[1].description: "fake rating module" $.modules[1].hot_config: false $.modules[2].priority: 2 $.modules[2].module_id: "fake3" $.modules[2].enabled: false $.modules[2].description: "fake rating module" $.modules[2].hot_config: false - name: get information of one module url: /v2/rating/modules/fake2 status: 200 response_json_paths: $.priority: 1 $.module_id: "fake2" $.enabled: false $.description: "fake rating module" $.hot_config: false - name: get information of a unknown module url: /v2/rating/modules/fakb status: 404 response_json_paths: $.message: "Module 'fakb' not found" - name: change 
priority of a module url: /v2/rating/modules/fake3 method: PUT request_headers: content-type: application/json x-roles: admin data: priority: 5 status: 204 - name: get information of the modified module (priority) url: /v2/rating/modules/fake3 status: 200 response_json_paths: $.priority: 5 $.module_id: "fake3" $.enabled: false $.description: "fake rating module" $.hot_config: false - name: change enabled status of a module url: /v2/rating/modules/fake3 method: PUT request_headers: content-type: application/json x-roles: admin data: enabled: true status: 204 - name: get information of the modified module (status) url: /v2/rating/modules/fake3 status: 200 response_json_paths: $.priority: 5 $.module_id: "fake3" $.enabled: true $.description: "fake rating module" $.hot_config: false - name: change status and priority of a module url: /v2/rating/modules/fake3 method: PUT request_headers: content-type: application/json x-roles: admin data: priority: 3 enabled: false status: 204 - name: get information of the modified module (both) url: /v2/rating/modules/fake3 status: 200 response_json_paths: $.priority: 3 $.module_id: "fake3" $.enabled: false $.description: "fake rating module" $.hot_config: false ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/tests/gabbi/gabbits/v2-scope-state.yaml0000664000175000017500000001021100000000000025457 0ustar00zuulzuul00000000000000fixtures: - ConfigFixtureStorageV2 - ScopeStateFixture tests: - name: Get all scopes url: /v2/scope status: 200 response_json_paths: $.results.`len`: 8 $.results.[0].scope_id: aaaa - name: Get all scopes with limit url: /v2/scope status: 200 query_parameters: limit: 2 response_json_paths: $.results.`len`: 2 $.results.[0].scope_id: aaaa $.results.[1].scope_id: bbbb $.results.[*].collector: [col1, col1] $.results.[*].fetcher: [fet1, fet1] - name: Get all scopes with limit and offset url: /v2/scope status: 200 query_parameters: limit: 2 offset: 2 response_json_paths: $.results.`len`: 2 $.results.[0].scope_id: cccc $.results.[1].scope_id: dddd $.results.[*].collector: [col2, col2] $.results.[*].fetcher: [fet1, fet1] - name: Get all scopes with offset off bounds url: /v2/scope status: 404 query_parameters: limit: 2 offset: 20 - name: Get all scopes filter on collector url: /v2/scope status: 200 query_parameters: collector: col2 response_json_paths: $.results.`len`: 4 $.results.[0].scope_id: cccc $.results.[1].scope_id: dddd $.results.[2].scope_id: gggg $.results.[3].scope_id: hhhh - name: Get all scopes filter on collector and fetcher url: /v2/scope status: 200 query_parameters: collector: col2 fetcher: fet2 response_json_paths: $.results.`len`: 2 $.results.[0].scope_id: gggg $.results.[1].scope_id: hhhh - name: Get all scopes filter on several collectors and one fetcher url: /v2/scope status: 200 query_parameters: collector: [col2, col1] fetcher: fet2 response_json_paths: $.results.`len`: 4 $.results.[2].scope_id: gggg $.results.[3].scope_id: hhhh - name: Get all scopes filter on several comma separated collectors and one fetcher url: /v2/scope status: 200 query_parameters: collector: "col2,col1" fetcher: fet2 response_json_paths: $.results.`len`: 4 $.results.[2].scope_id: gggg $.results.[3].scope_id: hhhh - name: Get all scopes filter on several collectors and several keys url: /v2/scope status: 200 query_parameters: collector: [col2, col1] scope_key: [key1, key2] response_json_paths: $.results.`len`: 8 $.results[0].scope_id: aaaa - name: Get all scopes 
filter on scope url: /v2/scope status: 200 query_parameters: scope_id: dddd response_json_paths: $.results.`len`: 1 $.results.[0].scope_id: dddd - name: Get all scopes nonexistent filter url: /v2/scopes status: 404 query_parameters: scope_key: nope - name: Reset states of all scopes url: /v2/scope method: PUT status: 202 request_headers: content-type: application/json data: last_processed_timestamp: 20190716T085501Z all_scopes: true - name: Reset one scope state url: /v2/scope method: PUT status: 202 request_headers: content-type: application/json data: last_processed_timestamp: 20190716T085501Z scope_id: aaaa - name: Reset several scope states url: /v2/scope method: PUT status: 202 request_headers: content-type: application/json data: last_processed_timestamp: 20190716T085501Z scope_id: aaaa scope_id: bbbb - name: Reset state with no scope_id or all_scopes url: /v2/scope method: PUT status: 400 request_headers: content-type: application/json data: scope_key: key1 last_processed_timestamp: 20190716T085501Z response_strings: - "Either all_scopes or a scope_id should be specified." - name: Reset state with no params url: /v2/scope method: PUT status: 400 request_headers: content-type: application/json - name: Reset state with no results for parameters url: /v2/scope method: PUT status: 404 request_headers: content-type: application/json data: last_processed_timestamp: 20190716T085501Z scope_id: foobar ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/tests/gabbi/gabbits/v2-summary.yaml0000664000175000017500000000501100000000000024727 0ustar00zuulzuul00000000000000fixtures: - ConfigFixtureStorageV2 - NowInfluxStorageDataFixture tests: - name: Get a summary url: /v2/summary status: 200 response_json_paths: $.results.`len`: 1 $.total: 1 - name: Get a summary by project id url: /v2/summary status: 200 query_parameters: groupby: project_id response_json_paths: $.results.`len`: 2 $.total: 2 - name: Get a summary by type url: /v2/summary status: 200 query_parameters: groupby: type response_json_paths: $.results.`len`: 7 $.total: 7 - name: Get a summary by type and project_id url: /v2/summary status: 200 query_parameters: groupby: [type, project_id] response_json_paths: $.results.`len`: 14 $.total: 14 - name: Get a summary by type and project_id limit 5 offset 0 url: /v2/summary status: 200 query_parameters: groupby: [type, project_id] limit: 5 offset: 0 response_json_paths: $.results.`len`: 5 $.total: 14 - name: Get a summary by type and project_id limit 5 offset 5 url: /v2/summary status: 200 query_parameters: groupby: [type, project_id] limit: 5 offset: 5 response_json_paths: $.results.`len`: 5 $.total: 14 - name: Get a summary with a start and end date url: /v2/summary status: 200 query_parameters: begin: "2017-01-01T00:00:00+00:00" end: "2017-01-02T00:00:00+00:00" response_json_paths: $.results.`len`: 0 $.total: 0 - name: Get a summary grouped by time url: /v2/summary status: 200 query_parameters: groupby: [time] response_json_paths: $.results.`len`: 1 $.total: 1 - name: Get a summary grouped by time and project_id url: /v2/summary status: 200 query_parameters: groupby: [time, project_id] response_json_paths: $.results.`len`: 2 $.total: 2 - name: Get a summary grouped by time-w and project_id url: /v2/summary status: 200 query_parameters: groupby: [time-w, project_id] response_json_paths: $.results.`len`: 4 $.total: 4 - name: Get a summary grouped by time-d url: /v2/summary status: 200 query_parameters: 
groupby: [time-d] response_json_paths: $.results.`len`: 2 $.total: 2 - name: Get a summary grouped by time-y url: /v2/summary status: 200 query_parameters: groupby: [time-y] response_json_paths: $.results.`len`: 3 $.total: 3 ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/tests/gabbi/handlers.py0000664000175000017500000000221500000000000022563 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright 2016 Objectif Libre # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # import os from gabbi.handlers import base class EnvironStoreHandler(base.ResponseHandler): """Hackish response handler used to store data in environment. Store data into an environment variable to implement some kind of variable use in gabbi. """ test_key_suffix = 'store_environ' test_key_value = {} def action(self, test, key, value=None): data = test.content_handlers[0].extract_json_path_value( test.response_data, value) os.environ[key] = test.replace_template(data) ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1727866639.2634869 cloudkitty-21.0.0/cloudkitty/tests/gabbi/rating/0000775000175000017500000000000000000000000021675 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/tests/gabbi/rating/__init__.py0000664000175000017500000000000000000000000023774 0ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1727866639.2674868 cloudkitty-21.0.0/cloudkitty/tests/gabbi/rating/hash/0000775000175000017500000000000000000000000022620 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/tests/gabbi/rating/hash/__init__.py0000664000175000017500000000000000000000000024717 0ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/tests/gabbi/rating/hash/fixtures.py0000664000175000017500000000176700000000000025056 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright 2015 Objectif Libre # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
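# NOTE: the HashMapConfigFixture defined below prepares the hashmap rating
# module's database (upgrading its DB migrations to "head") before the gabbi
# tests in the neighbouring gabbits/ directory run. As a small, illustrative
# sketch (not an addition to this module), a gabbi YAML file opts into the
# fixture through its top-level ``fixtures`` list, e.g.:
#
#     fixtures:
#       - HashMapConfigFixture
#
#     tests:
#       - name: list services (empty)
#         url: /v1/rating/module_config/hashmap/services
#         status: 200
#
# The hash.yaml, hash-empty.yaml, hash-errors.yaml and hash-location.yaml
# files in this tree reference the fixture through this mechanism.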
# from cloudkitty.tests.gabbi.fixtures import * # noqa from cloudkitty.rating.hash.db import api as hashmap_db class HashMapConfigFixture(ConfigFixture): # noqa: F405 def start_fixture(self): super(HashMapConfigFixture, self).start_fixture() self.conn = hashmap_db.get_instance() migration = self.conn.get_migration() migration.upgrade('head') ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1727866639.2674868 cloudkitty-21.0.0/cloudkitty/tests/gabbi/rating/hash/gabbits/0000775000175000017500000000000000000000000024233 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/tests/gabbi/rating/hash/gabbits/hash-empty.yaml0000664000175000017500000000141600000000000027200 0ustar00zuulzuul00000000000000fixtures: - HashMapConfigFixture tests: - name: list services (empty) url: /v1/rating/module_config/hashmap/services status: 200 response_strings: - "[]" - name: list fields from invalid service (empty) url: /v1/rating/module_config/hashmap/fields?service_id=d28490b2-fb3c-11e5-988b-eb9539c935dc status: 200 response_strings: - "[]" - name: list mappings from invalid service (empty) url: /v1/rating/module_config/hashmap/mappings?service_id=d28490b2-fb3c-11e5-988b-eb9539c935dc status: 200 response_strings: - "[]" - name: list mappings from invalid field (empty) url: /v1/rating/module_config/hashmap/mappings?field_id=d28490b2-fb3c-11e5-988b-eb9539c935dc status: 200 response_strings: - "[]" ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/tests/gabbi/rating/hash/gabbits/hash-errors.yaml0000664000175000017500000002224200000000000027356 0ustar00zuulzuul00000000000000fixtures: - HashMapConfigFixture tests: - name: get an invalid service url: /v1/rating/module_config/hashmap/services/d28490b2-fb3c-11e5-988b-eb9539c935dc status: 404 response_strings: - "No such service: None (UUID: d28490b2-fb3c-11e5-988b-eb9539c935dc)" - name: get an invalid field url: /v1/rating/module_config/hashmap/fields/d28490b2-fb3c-11e5-988b-eb9539c935dc status: 404 response_strings: - "No such field: d28490b2-fb3c-11e5-988b-eb9539c935dc" - name: get an invalid mapping url: /v1/rating/module_config/hashmap/mappings/d28490b2-fb3c-11e5-988b-eb9539c935dc status: 404 response_strings: - "No such mapping: d28490b2-fb3c-11e5-988b-eb9539c935dc" - name: get an invalid threshold url: /v1/rating/module_config/hashmap/thresholds/d28490b2-fb3c-11e5-988b-eb9539c935dc status: 404 response_strings: - "No such threshold: d28490b2-fb3c-11e5-988b-eb9539c935dc" - name: get an invalid group url: /v1/rating/module_config/hashmap/groups/d28490b2-fb3c-11e5-988b-eb9539c935dc status: 404 response_strings: - "No such group: None (UUID: d28490b2-fb3c-11e5-988b-eb9539c935dc)" - name: create a service url: /v1/rating/module_config/hashmap/services method: POST request_headers: content-type: application/json x-roles: admin data: name: "cpu" status: 201 response_json_paths: $.name: "cpu" response_store_environ: hash_error_service_id: $.service_id - name: create a duplicate service url: /v1/rating/module_config/hashmap/services method: POST request_headers: content-type: application/json x-roles: admin data: name: "cpu" status: 409 response_strings: - "Service cpu already exists (UUID: $RESPONSE['$.service_id'])" - name: create a service mapping with an invalid type url: /v1/rating/module_config/hashmap/mappings method: 
POST request_headers: content-type: application/json x-roles: admin data: service_id: "371bcd08-009f-11e6-91de-8745729038b2" type: "fail" cost: "0.2" status: 400 response_strings: - "Invalid input for field/attribute type. Value: 'fail'. Value should be one of: " - name: create a field url: /v1/rating/module_config/hashmap/fields method: POST request_headers: content-type: application/json x-roles: admin data: service_id: $ENVIRON['hash_error_service_id'] name: "flavor_id" status: 201 response_json_paths: $.service_id: $ENVIRON['hash_error_service_id'] $.name: "flavor_id" response_store_environ: hash_error_field_id: $.field_id - name: create a duplicate field url: /v1/rating/module_config/hashmap/fields method: POST request_headers: content-type: application/json x-roles: admin data: service_id: $RESPONSE['$.service_id'] name: "flavor_id" status: 409 response_strings: - "Field $RESPONSE['$.name'] already exists (UUID: $RESPONSE['$.field_id'])" - name: modify unknown mapping url: /v1/rating/module_config/hashmap/mappings/42 method: PUT request_headers: content-type: application/json x-roles: admin data: service_id: "cf1e7d1e-fcc4-11e5-9c93-b7775ce62e3c" type: "flat" cost: "0.10000000" status: 404 response_strings: - "No such mapping: 42" - name: create a field mapping to check updates url: /v1/rating/module_config/hashmap/mappings method: POST request_headers: content-type: application/json x-roles: admin data: field_id: $ENVIRON['hash_error_field_id'] type: "flat" cost: "0.2" value: "fail" status: 201 - name: remove the value of a field mapping url: /v1/rating/module_config/hashmap/mappings/$RESPONSE['$.mapping_id'] method: PUT request_headers: content-type: application/json x-roles: admin data: type: "rate" cost: "0.2" value: '' status: 400 response_strings: - "You must specify a value for a field mapping." 
- name: create a service mapping with an invalid service_id url: /v1/rating/module_config/hashmap/mappings method: POST request_headers: content-type: application/json x-roles: admin data: service_id: "de23e3fe-0097-11e6-a44d-2b09512e61d9" type: "flat" cost: "0.2" status: 400 response_strings: - "No such service: None (UUID: de23e3fe-0097-11e6-a44d-2b09512e61d9)" - name: create a field mapping with an invalid field_id url: /v1/rating/module_config/hashmap/mappings method: POST request_headers: content-type: application/json x-roles: admin data: field_id: "de23e3fe-0097-11e6-a44d-2b09512e61d9" type: "flat" cost: "0.2" value: "fail" status: 400 response_strings: - "No such field: de23e3fe-0097-11e6-a44d-2b09512e61d9" - name: create a service threshold with an invalid service_id url: /v1/rating/module_config/hashmap/thresholds method: POST request_headers: content-type: application/json x-roles: admin data: service_id: "de23e3fe-0097-11e6-a44d-2b09512e61d9" type: "flat" cost: "0.2" level: "1.0" status: 400 response_strings: - "No such service: None (UUID: de23e3fe-0097-11e6-a44d-2b09512e61d9)" - name: create a field threshold with an invalid field_id url: /v1/rating/module_config/hashmap/thresholds method: POST request_headers: content-type: application/json x-roles: admin data: field_id: "de23e3fe-0097-11e6-a44d-2b09512e61d9" type: "flat" cost: "0.2" level: "1.0" status: 400 response_strings: - "No such field: de23e3fe-0097-11e6-a44d-2b09512e61d9" - name: create a mapping with both parent id set url: /v1/rating/module_config/hashmap/mappings method: POST request_headers: content-type: application/json x-roles: admin data: service_id: "de23e3fe-0097-11e6-a44d-2b09512e61d9" field_id: "de23e3fe-0097-11e6-a44d-2b09512e61d9" type: "flat" cost: "0.2" status: 400 response_strings: - "You can only specify one parent." - name: create a mapping with a value and no parent url: /v1/rating/module_config/hashmap/mappings method: POST request_headers: content-type: application/json x-roles: admin data: type: "flat" cost: "0.2" value: "fail" status: 400 response_strings: - "You must specify one parent." - name: create a field mapping with a parent and no value url: /v1/rating/module_config/hashmap/mappings method: POST request_headers: content-type: application/json x-roles: admin data: field_id: $ENVIRON['hash_error_field_id'] type: "flat" cost: "0.2" status: 400 response_strings: - "You must specify a value for a field mapping." - name: create a threshold with both parent id set url: /v1/rating/module_config/hashmap/thresholds method: POST request_headers: content-type: application/json x-roles: admin data: service_id: "de23e3fe-0097-11e6-a44d-2b09512e61d9" field_id: "de23e3fe-0097-11e6-a44d-2b09512e61d9" type: "flat" cost: "0.2" level: "1.0" status: 400 response_strings: - "You can only specify one parent." - name: create a threshold with no parent url: /v1/rating/module_config/hashmap/thresholds method: POST request_headers: content-type: application/json x-roles: admin data: type: "flat" cost: "0.2" level: "1.0" status: 400 response_strings: - "You must specify one parent." - name: create a service threshold with a parent and no level url: /v1/rating/module_config/hashmap/thresholds method: POST request_headers: content-type: application/json x-roles: admin data: service_id: $ENVIRON['hash_error_service_id'] type: "flat" cost: "0.2" status: 400 response_strings: - "Invalid input for field/attribute level. Value: 'None'. Mandatory field missing." 
- name: delete unknown threshold url: /v1/rating/module_config/hashmap/thresholds/d28490b2-fb3c-11e5-988b-eb9539c935dc method: DELETE status: 404 response_strings: - "No such threshold: d28490b2-fb3c-11e5-988b-eb9539c935dc" - name: delete unknown mapping url: /v1/rating/module_config/hashmap/mappings/d28490b2-fb3c-11e5-988b-eb9539c935dc method: DELETE status: 404 response_strings: - "No such mapping: d28490b2-fb3c-11e5-988b-eb9539c935dc" - name: delete unknown field url: /v1/rating/module_config/hashmap/fields/d28490b2-fb3c-11e5-988b-eb9539c935dc method: DELETE status: 404 response_strings: - "No such field: d28490b2-fb3c-11e5-988b-eb9539c935dc" - name: delete unknown service url: /v1/rating/module_config/hashmap/services/d28490b2-fb3c-11e5-988b-eb9539c935dc method: DELETE status: 404 response_strings: - "No such service: None (UUID: d28490b2-fb3c-11e5-988b-eb9539c935dc)" ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/tests/gabbi/rating/hash/gabbits/hash-location.yaml0000664000175000017500000001142000000000000027646 0ustar00zuulzuul00000000000000fixtures: - HashMapConfigFixture - UUIDFixture tests: - name: check redirect on service creation url: /v1/rating/module_config/hashmap/services method: POST request_headers: content-type: application/json x-roles: admin data: name: "cpu" status: 201 response_json_paths: $.service_id: "6c1b8a30-797f-4b7e-ad66-9879b79059fb" $.name: "cpu" response_headers: location: $SCHEME://$NETLOC/v1/rating/module_config/hashmap/services/6c1b8a30-797f-4b7e-ad66-9879b79059fb - name: check redirect on service mapping creation url: /v1/rating/module_config/hashmap/mappings method: POST request_headers: content-type: application/json x-roles: admin data: service_id: "6c1b8a30-797f-4b7e-ad66-9879b79059fb" type: "flat" cost: "0.1000000000000000055511151231" status: 201 response_json_paths: $.mapping_id: "6c1b8a30-797f-4b7e-ad66-9879b79059fb" $.service_id: "6c1b8a30-797f-4b7e-ad66-9879b79059fb" $.type: "flat" $.cost: "0.1000000000000000055511151231" response_headers: location: '$SCHEME://$NETLOC/v1/rating/module_config/hashmap/mappings/6c1b8a30-797f-4b7e-ad66-9879b79059fb' - name: delete test mapping url: /v1/rating/module_config/hashmap/mappings/6c1b8a30-797f-4b7e-ad66-9879b79059fb method: DELETE status: 204 - name: check redirect on service threshold creation url: /v1/rating/module_config/hashmap/thresholds method: POST request_headers: content-type: application/json x-roles: admin data: service_id: "6c1b8a30-797f-4b7e-ad66-9879b79059fb" level: "2" type: "flat" cost: "0.10000000" status: 201 response_json_paths: $.threshold_id: "6c1b8a30-797f-4b7e-ad66-9879b79059fb" $.service_id: "6c1b8a30-797f-4b7e-ad66-9879b79059fb" $.level: "2.00000000" $.type: "flat" $.cost: "0.1000000000000000055511151231" response_headers: location: '$SCHEME://$NETLOC/v1/rating/module_config/hashmap/thresholds/6c1b8a30-797f-4b7e-ad66-9879b79059fb' - name: delete test threshold url: /v1/rating/module_config/hashmap/thresholds/6c1b8a30-797f-4b7e-ad66-9879b79059fb method: DELETE status: 204 - name: check redirect on field creation url: /v1/rating/module_config/hashmap/fields method: POST request_headers: content-type: application/json x-roles: admin data: service_id: "6c1b8a30-797f-4b7e-ad66-9879b79059fb" name: "flavor_id" status: 201 response_json_paths: $.service_id: "6c1b8a30-797f-4b7e-ad66-9879b79059fb" $.name: "flavor_id" $.field_id: "6c1b8a30-797f-4b7e-ad66-9879b79059fb" response_headers: location: 
'$SCHEME://$NETLOC/v1/rating/module_config/hashmap/fields/6c1b8a30-797f-4b7e-ad66-9879b79059fb' - name: check redirect on field mapping creation url: /v1/rating/module_config/hashmap/mappings method: POST request_headers: content-type: application/json x-roles: admin data: field_id: "6c1b8a30-797f-4b7e-ad66-9879b79059fb" value: "04774238-fcad-11e5-a90e-6391fd56aab2" type: "flat" cost: "0.10000000" status: 201 response_json_paths: $.mapping_id: "6c1b8a30-797f-4b7e-ad66-9879b79059fb" $.field_id: "6c1b8a30-797f-4b7e-ad66-9879b79059fb" $.value: "04774238-fcad-11e5-a90e-6391fd56aab2" $.type: "flat" $.cost: "0.1000000000000000055511151231" response_headers: location: '$SCHEME://$NETLOC/v1/rating/module_config/hashmap/mappings/6c1b8a30-797f-4b7e-ad66-9879b79059fb' - name: check redirect on field threshold creation url: /v1/rating/module_config/hashmap/thresholds method: POST request_headers: content-type: application/json x-roles: admin data: field_id: "6c1b8a30-797f-4b7e-ad66-9879b79059fb" level: "2" type: "flat" cost: "0.10000000" status: 201 response_json_paths: $.threshold_id: "6c1b8a30-797f-4b7e-ad66-9879b79059fb" $.field_id: "6c1b8a30-797f-4b7e-ad66-9879b79059fb" $.level: "2.00000000" $.type: "flat" $.cost: "0.1000000000000000055511151231" response_headers: location: '$SCHEME://$NETLOC/v1/rating/module_config/hashmap/thresholds/6c1b8a30-797f-4b7e-ad66-9879b79059fb' - name: check redirect on group creation url: /v1/rating/module_config/hashmap/groups method: POST request_headers: content-type: application/json x-roles: admin data: name: "compute_uptime" status: 201 response_json_paths: $.group_id: "6c1b8a30-797f-4b7e-ad66-9879b79059fb" $.name: "compute_uptime" response_headers: location: $SCHEME://$NETLOC/v1/rating/module_config/hashmap/groups/6c1b8a30-797f-4b7e-ad66-9879b79059fb ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/tests/gabbi/rating/hash/gabbits/hash.yaml0000664000175000017500000002373600000000000026055 0ustar00zuulzuul00000000000000fixtures: - HashMapConfigFixture tests: - name: reload list of modules available url: /v1/rating/reload_modules status: 204 - name: check hashmap module is loaded url: /v1/rating/modules response_strings: - '"module_id": "hashmap"' - '"description": "HashMap rating module."' - name: create a service url: /v1/rating/module_config/hashmap/services method: POST request_headers: content-type: application/json x-roles: admin data: name: "cpu" status: 201 response_json_paths: $.name: "cpu" response_store_environ: hash_service_id: $.service_id - name: get a service url: /v1/rating/module_config/hashmap/services/$RESPONSE['$.service_id'] status: 200 response_json_paths: $.service_id: $RESPONSE['$.service_id'] $.name: "cpu" - name: create a flat service mapping url: /v1/rating/module_config/hashmap/mappings method: POST request_headers: content-type: application/json x-roles: admin data: service_id: $RESPONSE['$.service_id'] type: "flat" cost: "0.10000000" status: 201 response_json_paths: $.service_id: $RESPONSE['$.service_id'] $.type: "flat" $.cost: "0.1000000000000000055511151231" - name: delete a flat service mapping url: /v1/rating/module_config/hashmap/mappings/$RESPONSE['$.mapping_id'] method: DELETE status: 204 - name: list services url: /v1/rating/module_config/hashmap/services status: 200 response_json_paths: $.services.`len`: 1 $.services[0].name: "cpu" - name: create a rate service mapping url: /v1/rating/module_config/hashmap/mappings method: POST 
request_headers: content-type: application/json x-roles: admin data: service_id: $RESPONSE['$.services[0].service_id'] type: "rate" cost: "0.2" status: 201 response_json_paths: $.service_id: $RESPONSE['$.services[0].service_id'] $.type: "rate" $.cost: "0.2000000000000000111022302463" - name: create a flat service mapping for a tenant url: /v1/rating/module_config/hashmap/mappings method: POST request_headers: content-type: application/json x-roles: admin data: service_id: $ENVIRON['hash_service_id'] type: "flat" cost: "0.2" tenant_id: "24a7fdae-27ff-11e6-8c4f-6b725a05bf50" status: 201 response_json_paths: $.service_id: $ENVIRON['hash_service_id'] $.type: "flat" $.cost: "0.2000000000000000111022302463" $.tenant_id: "24a7fdae-27ff-11e6-8c4f-6b725a05bf50" - name: list service mappings no tenant filtering url: /v1/rating/module_config/hashmap/mappings?service_id=$ENVIRON['hash_service_id'] status: 200 response_json_paths: $.mappings.`len`: 2 - name: list service mappings filtering on no tenant url: /v1/rating/module_config/hashmap/mappings?service_id=$ENVIRON['hash_service_id']&filter_tenant=true status: 200 response_json_paths: $.mappings.`len`: 1 - name: list service mappings filtering on tenant url: /v1/rating/module_config/hashmap/mappings?service_id=$ENVIRON['hash_service_id']&tenant_id=24a7fdae-27ff-11e6-8c4f-6b725a05bf50&filter_tenant=true status: 200 response_json_paths: $.mappings.`len`: 1 - name: create a flat service threshold for a tenant url: /v1/rating/module_config/hashmap/thresholds method: POST request_headers: content-type: application/json x-roles: admin data: service_id: $ENVIRON['hash_service_id'] level: 2 type: "flat" cost: "0.2" tenant_id: "24a7fdae-27ff-11e6-8c4f-6b725a05bf50" status: 201 response_json_paths: $.service_id: $ENVIRON['hash_service_id'] $.level: "2.00000000" $.type: "flat" $.cost: "0.2000000000000000111022302463" $.tenant_id: "24a7fdae-27ff-11e6-8c4f-6b725a05bf50" - name: list service thresholds no tenant filtering url: /v1/rating/module_config/hashmap/thresholds?service_id=$ENVIRON['hash_service_id'] status: 200 response_json_paths: $.thresholds.`len`: 1 - name: list service thresholds filtering on no tenant url: /v1/rating/module_config/hashmap/thresholds?service_id=$ENVIRON['hash_service_id']&filter_tenant=true status: 200 response_json_paths: $.thresholds.`len`: 0 - name: list service thresholds filtering on tenant url: /v1/rating/module_config/hashmap/thresholds?service_id=$ENVIRON['hash_service_id']&tenant_id=24a7fdae-27ff-11e6-8c4f-6b725a05bf50&filter_tenant=true status: 200 response_json_paths: $.thresholds.`len`: 1 - name: create a field url: /v1/rating/module_config/hashmap/fields method: POST request_headers: content-type: application/json x-roles: admin data: service_id: $ENVIRON['hash_service_id'] name: "flavor_id" status: 201 response_json_paths: $.service_id: $ENVIRON['hash_service_id'] $.name: "flavor_id" response_store_environ: hash_field_id: $.field_id - name: get a field url: /v1/rating/module_config/hashmap/fields/$RESPONSE['$.field_id'] status: 200 response_json_paths: $.service_id: $RESPONSE['$.service_id'] $.name: "flavor_id" $.field_id: $RESPONSE['$.field_id'] - name: create a flat field mapping url: /v1/rating/module_config/hashmap/mappings method: POST request_headers: content-type: application/json x-roles: admin data: field_id: $RESPONSE['$.field_id'] type: "rate" cost: "0.2" value: "e2083e22-0004-11e6-82bd-2f02489b068b" status: 201 response_json_paths: $.field_id: $RESPONSE['$.field_id'] $.type: "rate" $.cost: 
"0.2000000000000000111022302463" - name: delete a flat field mapping url: /v1/rating/module_config/hashmap/mappings/$RESPONSE['$.mapping_id'] method: DELETE status: 204 - name: list fields url: /v1/rating/module_config/hashmap/fields?service_id=$ENVIRON['hash_service_id'] status: 200 response_json_paths: $.fields.`len`: 1 $.fields[0].service_id: $ENVIRON['hash_service_id'] $.fields[0].field_id: $ENVIRON['hash_field_id'] $.fields[0].name: "flavor_id" - name: create a rate field mapping url: /v1/rating/module_config/hashmap/mappings method: POST request_headers: content-type: application/json x-roles: admin data: field_id: $RESPONSE['$.fields[0].field_id'] type: "rate" cost: "0.2" value: "f17a0674-0004-11e6-a16b-cf941f4668c4" status: 201 response_json_paths: $.field_id: $RESPONSE['$.fields[0].field_id'] $.type: "rate" $.cost: "0.2000000000000000111022302463" response_store_environ: hash_rate_mapping_id: $.mapping_id - name: change the cost of a mapping url: /v1/rating/module_config/hashmap/mappings/$RESPONSE['$.mapping_id'] method: PUT request_headers: content-type: application/json x-roles: admin data: type: "rate" cost: "0.3" value: "f17a0674-0004-11e6-a16b-cf941f4668c4" status: 302 - name: check updated mapping url: /v1/rating/module_config/hashmap/mappings/$ENVIRON['hash_rate_mapping_id'] status: 200 response_json_paths: $.mapping_id: $ENVIRON['hash_rate_mapping_id'] $.field_id: $ENVIRON['hash_field_id'] $.type: "rate" $.cost: "0.2999999999999999888977697537" $.value: "f17a0674-0004-11e6-a16b-cf941f4668c4" - name: delete a field url: /v1/rating/module_config/hashmap/fields/$ENVIRON['hash_field_id'] method: DELETE status: 204 - name: check field got deleted url: /v1/rating/module_config/hashmap/fields/$ENVIRON['hash_field_id'] method: DELETE status: 404 response_strings: - "No such field: $ENVIRON['hash_field_id']" - name: check child mappings got deleted url: /v1/rating/module_config/hashmap/mappings/?field_id=$ENVIRON['hash_field_id'] status: 200 response_json_paths: $.mappings.`len`: 0 - name: delete a service url: /v1/rating/module_config/hashmap/services/$ENVIRON['hash_service_id'] method: DELETE status: 204 - name: check service got deleted url: /v1/rating/module_config/hashmap/services/$ENVIRON['hash_service_id'] method: DELETE status: 404 response_strings: - "No such service: None (UUID: $ENVIRON['hash_service_id'])" - name: create a service for recursive delete url: /v1/rating/module_config/hashmap/services method: POST request_headers: content-type: application/json x-roles: admin data: name: "service" status: 201 response_store_environ: hash_service_id: $.service_id - name: create a field for recursive delete url: /v1/rating/module_config/hashmap/fields method: POST request_headers: content-type: application/json x-roles: admin data: service_id: $RESPONSE['$.service_id'] name: "flavor_id" status: 201 response_store_environ: hash_field_id: $.field_id - name: create a field mapping for recursive delete url: /v1/rating/module_config/hashmap/mappings method: POST request_headers: content-type: application/json x-roles: admin data: field_id: $RESPONSE['$.field_id'] value: "flavor_id" cost: "0.1" status: 201 response_store_environ: hash_mapping_id: $.mapping_id - name: delete a service with recursive url: /v1/rating/module_config/hashmap/services/$ENVIRON['hash_service_id'] method: DELETE status: 204 - name: check mapping got recursively deleted url: /v1/rating/module_config/hashmap/mappings/$ENVIRON['hash_mapping_id'] status: 404 response_strings: - "No such mapping: 
$ENVIRON['hash_mapping_id']" - name: check field got recursively deleted url: /v1/rating/module_config/hashmap/fields/$ENVIRON['hash_field_id'] status: 404 response_strings: - "No such field: $ENVIRON['hash_field_id']" ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/tests/gabbi/rating/hash/test_gabbi.py0000664000175000017500000000250500000000000025277 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright 2015 Objectif Libre # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # import os from gabbi import driver from cloudkitty.tests.gabbi import fixtures from cloudkitty.tests.gabbi import handlers as cloudkitty_handlers from cloudkitty.tests.gabbi.rating.hash import fixtures as hash_fixtures TESTS_DIR = 'gabbits' def load_tests(loader, tests, pattern): test_dir = os.path.join(os.path.dirname(__file__), TESTS_DIR) return driver.build_tests(test_dir, loader, host=None, intercept=fixtures.setup_app, fixture_module=hash_fixtures, response_handlers=[ cloudkitty_handlers.EnvironStoreHandler]) ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1727866639.2674868 cloudkitty-21.0.0/cloudkitty/tests/gabbi/rating/pyscripts/0000775000175000017500000000000000000000000023735 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/tests/gabbi/rating/pyscripts/__init__.py0000664000175000017500000000000000000000000026034 0ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/tests/gabbi/rating/pyscripts/fixtures.py0000664000175000017500000000200400000000000026154 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright 2015 Objectif Libre # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
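# ---------------------------------------------------------------------------
# Illustrative sketch (editor's addition, not part of the upstream fixture):
# the hashmap gabbi tests above repeatedly assert costs such as
# "0.2000000000000000111022302463" for an input of "0.2".  That value is what
# you get when the JSON float 0.2 is converted to decimal.Decimal and rounded
# to Python's default 28-digit decimal context, which is presumably what the
# rating API does internally:
import decimal

# unary "+" rounds the exact binary value of the float to the active context
assert str(+decimal.Decimal(0.2)) == '0.2000000000000000111022302463'
assert str(+decimal.Decimal(0.3)) == '0.2999999999999999888977697537'
# ---------------------------------------------------------------------------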
# from cloudkitty.tests.gabbi.fixtures import * # noqa from cloudkitty.rating.pyscripts.db import api as pyscripts_db class PyScriptsConfigFixture(ConfigFixture): # noqa: F405 def start_fixture(self): super(PyScriptsConfigFixture, self).start_fixture() self.conn = pyscripts_db.get_instance() migration = self.conn.get_migration() migration.upgrade('head') ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1727866639.2674868 cloudkitty-21.0.0/cloudkitty/tests/gabbi/rating/pyscripts/gabbits/0000775000175000017500000000000000000000000025350 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/tests/gabbi/rating/pyscripts/gabbits/pyscripts.yaml0000664000175000017500000001113600000000000030276 0ustar00zuulzuul00000000000000fixtures: - PyScriptsConfigFixture - UUIDFixture tests: - name: reload list of modules available url: /v1/rating/reload_modules status: 204 - name: check pyscripts module is loaded url: /v1/rating/modules response_strings: - '"module_id": "pyscripts"' - '"description": "PyScripts rating module."' - name: typo of script url: /v1/rating/module_config/pyscripts/script status: 405 - name: list scripts (empty) url: /v1/rating/module_config/pyscripts/scripts status: 200 response_strings: - "[]" - name: create policy script url: /v1/rating/module_config/pyscripts/scripts method: POST request_headers: content-type: application/json x-roles: admin data: name: "policy1" data: "a = 0" status: 201 response_json_paths: $.script_id: "6c1b8a30-797f-4b7e-ad66-9879b79059fb" $.name: "policy1" $.data: "a = 0" $.checksum: "4c612e33c0e40b7bf53cf95fad47dbfbeab9dd62f9bc181a9d1c6f40a087782223c23f793e747b0466b9e6998c6ea54f4edbd20febd13edb13b55074b5ee1a5a" response_headers: location: '$SCHEME://$NETLOC/v1/rating/module_config/pyscripts/scripts/6c1b8a30-797f-4b7e-ad66-9879b79059fb' - name: create duplicate policy script url: /v1/rating/module_config/pyscripts/scripts method: POST request_headers: content-type: application/json x-roles: admin data: name: "policy1" data: "a = 0" status: 409 response_strings: - "Script policy1 already exists (UUID: 6c1b8a30-797f-4b7e-ad66-9879b79059fb)" - name: list scripts url: /v1/rating/module_config/pyscripts/scripts status: 200 response_json_paths: $.scripts[0].script_id: "6c1b8a30-797f-4b7e-ad66-9879b79059fb" $.scripts[0].name: "policy1" $.scripts[0].data: "a = 0" $.scripts[0].checksum: "4c612e33c0e40b7bf53cf95fad47dbfbeab9dd62f9bc181a9d1c6f40a087782223c23f793e747b0466b9e6998c6ea54f4edbd20febd13edb13b55074b5ee1a5a" - name: list scripts excluding data url: /v1/rating/module_config/pyscripts/scripts?no_data=true status: 200 response_json_paths: $.scripts[0].script_id: "6c1b8a30-797f-4b7e-ad66-9879b79059fb" $.scripts[0].name: "policy1" $.scripts[0].checksum: "4c612e33c0e40b7bf53cf95fad47dbfbeab9dd62f9bc181a9d1c6f40a087782223c23f793e747b0466b9e6998c6ea54f4edbd20febd13edb13b55074b5ee1a5a" - name: get script url: /v1/rating/module_config/pyscripts/scripts/6c1b8a30-797f-4b7e-ad66-9879b79059fb status: 200 response_json_paths: $.script_id: "6c1b8a30-797f-4b7e-ad66-9879b79059fb" $.name: "policy1" $.data: "a = 0" $.checksum: "4c612e33c0e40b7bf53cf95fad47dbfbeab9dd62f9bc181a9d1c6f40a087782223c23f793e747b0466b9e6998c6ea54f4edbd20febd13edb13b55074b5ee1a5a" - name: modify script url: /v1/rating/module_config/pyscripts/scripts/6c1b8a30-797f-4b7e-ad66-9879b79059fb method: PUT request_headers: content-type: application/json 
x-roles: admin data: name: "policy1" data: "a = 1" status: 201 response_json_paths: $.script_id: "6c1b8a30-797f-4b7e-ad66-9879b79059fb" $.name: "policy1" $.data: "a = 1" $.checksum: "acb3095e24b13960484e75bce070e13e8a7728760517c31b34929a6f732841c652e9d2cc4d186bd02ef2e7495fab3c4850673bedc945cee7c74fea85eabd542c" - name: modify unknown script url: /v1/rating/module_config/pyscripts/scripts/42 method: PUT request_headers: content-type: application/json x-roles: admin data: name: "policy1" data: "a = 1" status: 404 response_strings: - "No such script: None (UUID: 42)" - name: check updated script url: /v1/rating/module_config/pyscripts/scripts/6c1b8a30-797f-4b7e-ad66-9879b79059fb request_headers: content-type: application/json x-roles: admin status: 200 response_json_paths: $.script_id: "6c1b8a30-797f-4b7e-ad66-9879b79059fb" $.name: "policy1" $.data: "a = 1" $.checksum: "acb3095e24b13960484e75bce070e13e8a7728760517c31b34929a6f732841c652e9d2cc4d186bd02ef2e7495fab3c4850673bedc945cee7c74fea85eabd542c" - name: delete script url: /v1/rating/module_config/pyscripts/scripts/6c1b8a30-797f-4b7e-ad66-9879b79059fb method: DELETE status: 204 - name: get unknown script url: /v1/rating/module_config/pyscripts/scripts/42 status: 404 response_strings: - "No such script: None (UUID: 42)" - name: delete unknown script url: /v1/rating/module_config/pyscripts/scripts/42 method: DELETE status: 404 response_strings: - "No such script: None (UUID: 42)" ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/tests/gabbi/rating/pyscripts/test_gabbi.py0000664000175000017500000000220500000000000026411 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright 2015 Objectif Libre # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # import os from gabbi import driver from cloudkitty.tests.gabbi import fixtures from cloudkitty.tests.gabbi.rating.pyscripts import fixtures as py_fixtures TESTS_DIR = 'gabbits' def load_tests(loader, tests, pattern): test_dir = os.path.join(os.path.dirname(__file__), TESTS_DIR) return driver.build_tests(test_dir, loader, host=None, intercept=fixtures.setup_app, fixture_module=py_fixtures) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/tests/gabbi/test_gabbi.py0000664000175000017500000000206700000000000023073 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright 2015 Objectif Libre # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
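# ---------------------------------------------------------------------------
# Illustrative sketch (editor's addition, not part of the upstream file): the
# pyscripts gabbi tests above assert a 128-character hexadecimal "checksum"
# for each stored script body.  128 hex characters is the length of a SHA-512
# digest, so a client could be expected to verify a script roughly like this
# (the use of SHA-512 is an assumption made for this sketch, not something
# the tests themselves state):
import hashlib


def script_checksum(data):
    """Return the hex digest a stored pyscript is assumed to carry."""
    return hashlib.sha512(data.encode('utf-8')).hexdigest()

# e.g. script_checksum("a = 0") would then have to match the "checksum" field
# returned by /v1/rating/module_config/pyscripts/scripts for that script.
# ---------------------------------------------------------------------------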
# import os from gabbi import driver from cloudkitty.tests.gabbi import fixtures TESTS_DIR = 'gabbits' def load_tests(loader, tests, pattern): test_dir = os.path.join(os.path.dirname(__file__), TESTS_DIR) return driver.build_tests(test_dir, loader, host=None, intercept=fixtures.setup_app, fixture_module=fixtures) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/tests/samples.py0000664000175000017500000002401100000000000021361 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright 2015 Objectif Libre # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # import copy import datetime import decimal from oslo_utils import uuidutils from cloudkitty import dataframe from cloudkitty import utils as ck_utils # These have a different format in order to check that both forms are supported TENANT = 'f266f30b11f246b589fd266f85eeec39' OTHER_TENANT = '8d3ae500-89ea-4142-9c6e-1269db6a0b64' INITIAL_TIMESTAMP = 1420070400 FIRST_PERIOD_BEGIN = ck_utils.ts2dt(INITIAL_TIMESTAMP) FIRST_PERIOD_BEGIN_ISO = ck_utils.dt2iso(FIRST_PERIOD_BEGIN) FIRST_PERIOD_END = FIRST_PERIOD_BEGIN + datetime.timedelta(seconds=3600) FIRST_PERIOD_END_ISO = ck_utils.dt2iso(FIRST_PERIOD_END) SECOND_PERIOD_BEGIN = FIRST_PERIOD_END SECOND_PERIOD_BEGIN_ISO = ck_utils.dt2iso(SECOND_PERIOD_BEGIN) SECOND_PERIOD_END = SECOND_PERIOD_BEGIN + datetime.timedelta(seconds=3600) SECOND_PERIOD_END_ISO = ck_utils.dt2iso(SECOND_PERIOD_END) COMPUTE_METADATA = { 'availability_zone': 'nova', 'flavor': 'm1.nano', 'image_id': 'f5600101-8fa2-4864-899e-ebcb7ed6b568', 'instance_id': '26c084e1-b8f1-4cbc-a7ec-e8b356788a17', 'resource_id': '1558f911-b55a-4fd2-9173-c8f1f23e5639', 'memory': '64', 'metadata': { 'farm': 'prod' }, 'name': 'prod1', 'vcpus': '1' } COMPUTE_GROUPBY = { 'id': '1558f911-b55a-4fd2-9173-c8f1f23e5639', 'project_id': 'f266f30b11f246b589fd266f85eeec39', 'user_id': '55b3379b949243009ee96972fbf51ed1', } IMAGE_METADATA = { 'checksum': '836c69cbcd1dc4f225daedbab6edc7c7', 'resource_id': '7b5b73f2-9181-4307-a710-b1aa6472526d', 'container_format': 'aki', 'created_at': '2014-06-04T16:26:01', 'deleted': 'False', 'deleted_at': 'None', 'disk_format': 'aki', 'is_public': 'True', 'min_disk': '0', 'min_ram': '0', 'name': 'cirros-0.3.2-x86_64-uec-kernel', 'protected': 'False', 'size': '4969360', 'status': 'active', 'updated_at': '2014-06-04T16:26:02', } IMAGE_GROUPBY = { 'id': '7b5b73f2-9181-4307-a710-b1aa6472526d', } FIRST_PERIOD = { 'begin': FIRST_PERIOD_BEGIN, 'end': FIRST_PERIOD_END, } SECOND_PERIOD = { 'begin': SECOND_PERIOD_BEGIN, 'end': SECOND_PERIOD_END, } COLLECTED_DATA = [ dataframe.DataFrame(start=FIRST_PERIOD["begin"], end=FIRST_PERIOD["end"]), dataframe.DataFrame(start=SECOND_PERIOD["begin"], end=SECOND_PERIOD["end"]), ] _INSTANCE_POINT = dataframe.DataPoint( 'instance', '1.0', '0.42', COMPUTE_GROUPBY, COMPUTE_METADATA) _IMAGE_SIZE_POINT = dataframe.DataPoint( 'image', '1.0', '0.1337', IMAGE_GROUPBY, IMAGE_METADATA) COLLECTED_DATA[0].add_point(_INSTANCE_POINT, 
'instance') COLLECTED_DATA[0].add_point(_IMAGE_SIZE_POINT, 'image.size') COLLECTED_DATA[1].add_point(_INSTANCE_POINT, 'instance') RATED_DATA = copy.deepcopy(COLLECTED_DATA) DEFAULT_METRICS_CONF = { "metrics": { "cpu": { "unit": "instance", "alt_name": "instance", "groupby": [ "id", "project_id" ], "metadata": [ "flavor", "flavor_id", "vcpus" ], "mutate": "NUMBOOL", "extra_args": { "aggregation_method": "max", "resource_type": "instance" } }, "image.size": { "unit": "MiB", "factor": "1/1048576", "groupby": [ "id", "project_id" ], "metadata": [ "container_format", "disk_format" ], "extra_args": { "aggregation_method": "max", "resource_type": "image" } }, "volume.size": { "unit": "GiB", "groupby": [ "id", "project_id" ], "metadata": [ "volume_type" ], "extra_args": { "aggregation_method": "max", "resource_type": "volume" } }, "network.outgoing.bytes": { "unit": "MB", "groupby": [ "id", "project_id" ], "factor": "1/1000000", "metadata": [ "instance_id" ], "extra_args": { "aggregation_method": "max", "resource_type": "instance_network_interface" } }, "network.incoming.bytes": { "unit": "MB", "groupby": [ "id", "project_id" ], "factor": "1/1000000", "metadata": [ "instance_id" ], "extra_args": { "aggregation_method": "max", "resource_type": "instance_network_interface" } }, "ip.floating": { "unit": "ip", "groupby": [ "id", "project_id" ], "metadata": [ "state" ], "mutate": "NUMBOOL", "extra_args": { "aggregation_method": "max", "resource_type": "network" } }, "radosgw.objects.size": { "unit": "GiB", "groupby": [ "id", "project_id" ], "factor": "1/1073741824", "extra_args": { "aggregation_method": "max", "resource_type": "ceph_account" } } } } METRICS_CONF = DEFAULT_METRICS_CONF PROMETHEUS_RESP_INSTANT_QUERY = { "status": "success", "data": { "resultType": "vector", "result": [ { "metric": { "code": "200", "method": "get", "group": "prometheus_group", "instance": "localhost:9090", "job": "prometheus", }, "value": [ FIRST_PERIOD_END, "7", ] }, { "metric": { "code": "200", "method": "post", "group": "prometheus_group", "instance": "localhost:9090", "job": "prometheus", }, "value": [ FIRST_PERIOD_END, "42", ] }, ] } } PROMETHEUS_EMPTY_RESP_INSTANT_QUERY = { "status": "success", "data": { "resultType": "vector", "result": [], } } V2_STORAGE_SAMPLE = { "instance": { "vol": { "unit": "instance", "qty": 1.0, }, "rating": { "price": decimal.Decimal(2.5), }, "groupby": { "id": uuidutils.generate_uuid(), "project_id": COMPUTE_GROUPBY['project_id'], }, "metadata": { "flavor": "m1.nano", "flavor_id": "42", }, }, "image.size": { "vol": { "unit": "MiB", "qty": 152.0, }, "rating": { "price": decimal.Decimal(0.152), }, "groupby": { "id": uuidutils.generate_uuid(), "project_id": COMPUTE_GROUPBY['project_id'], }, "metadata": { "disk_format": "qcow2", }, }, "volume.size": { "vol": { "unit": "GiB", "qty": 20.0, }, "rating": { "price": decimal.Decimal(1.2), }, "groupby": { "id": uuidutils.generate_uuid(), "project_id": COMPUTE_GROUPBY['project_id'], }, "metadata": { "volume_type": "ceph-region1" }, }, "network.outgoing.bytes": { "vol": { "unit": "MB", "qty": 12345.6, }, "rating": { "price": decimal.Decimal(0.00123456), }, "groupby": { "id": uuidutils.generate_uuid(), "project_id": COMPUTE_GROUPBY['project_id'], }, "metadata": { "instance_id": uuidutils.generate_uuid(), }, }, "network.incoming.bytes": { "vol": { "unit": "MB", "qty": 34567.8, }, "rating": { "price": decimal.Decimal(0.00345678), }, "groupby": { "id": uuidutils.generate_uuid(), "project_id": COMPUTE_GROUPBY['project_id'], }, "metadata": { "instance_id": 
uuidutils.generate_uuid(), }, }, "ip.floating": { "vol": { "unit": "ip", "qty": 1.0, }, "rating": { "price": decimal.Decimal(0.01), }, "groupby": { "id": uuidutils.generate_uuid(), "project_id": COMPUTE_GROUPBY['project_id'], }, "metadata": { "state": "attached", }, }, "radosgw.objects.size": { "vol": { "unit": "GiB", "qty": 3.0, }, "rating": { "price": decimal.Decimal(0.30), }, "groupby": { "id": uuidutils.generate_uuid(), "project_id": COMPUTE_GROUPBY['project_id'], }, "metadata": { "object_id": uuidutils.generate_uuid(), }, } } ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1727866639.2674868 cloudkitty-21.0.0/cloudkitty/tests/storage/0000775000175000017500000000000000000000000021011 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/tests/storage/__init__.py0000664000175000017500000000000000000000000023110 0ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1727866639.2674868 cloudkitty-21.0.0/cloudkitty/tests/storage/v1/0000775000175000017500000000000000000000000021337 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/tests/storage/v1/__init__.py0000664000175000017500000000000000000000000023436 0ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/tests/storage/v1/test_hybrid_storage.py0000664000175000017500000001100200000000000025747 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright 2017 Objectif Libre # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # from unittest import mock from gnocchiclient import exceptions as gexc from cloudkitty import storage from cloudkitty import tests from cloudkitty.tests import utils as test_utils class BaseHybridStorageTest(tests.TestCase): @mock.patch('cloudkitty.utils.load_conf', new=test_utils.load_conf) def setUp(self): super(BaseHybridStorageTest, self).setUp() self.conf.set_override('backend', 'hybrid', 'storage') self.conf.set_override('version', '1', 'storage') self.storage = storage.get_storage(conf=test_utils.load_conf()) with mock.patch.object( self.storage.storage._hybrid_backend, 'init'): self.storage.init() class PermissiveDict(object): """Allows to check a single key of a dict in an assertion. 
Example: >>> mydict = {'a': 'A', 'b': 'B'} >>> checker = PermissiveDict('A', key='a') >>> checker == mydict True """ def __init__(self, value, key='name'): self.key = key self.value = value def __eq__(self, other): return self.value == other.get(self.key) class HybridStorageTestGnocchi(BaseHybridStorageTest): def setUp(self): super(HybridStorageTestGnocchi, self).setUp() def tearDown(self): super(HybridStorageTestGnocchi, self).tearDown() def _init_storage(self, archive_policy=False, res_type=False): with mock.patch.object(self.storage.storage._hybrid_backend._conn, 'archive_policy', spec=['get', 'create']) as pol_mock: if not archive_policy: pol_mock.get.side_effect = gexc.ArchivePolicyNotFound else: pol_mock.create.side_effect = gexc.ArchivePolicyAlreadyExists with mock.patch.object(self.storage.storage._hybrid_backend._conn, 'resource_type', spec=['get', 'create']) as rtype_mock: if not res_type: rtype_mock.get.side_effect = gexc.ResourceTypeNotFound else: rtype_mock.create.side_effect \ = gexc.ResourceTypeAlreadyExists self.storage.init() rtype_data = (self.storage.storage ._hybrid_backend._resource_type_data) rtype_calls = list() for val in rtype_data.values(): rtype_calls.append( mock.call(PermissiveDict(val['name'], key='name'))) if res_type: rtype_mock.create.assert_not_called() else: rtype_mock.create.assert_has_calls( rtype_calls, any_order=True) pol_mock.get.assert_called_once_with( self.storage.storage._hybrid_backend._archive_policy_name) if archive_policy: pol_mock.create.assert_not_called() else: apolicy = { 'name': (self.storage.storage ._hybrid_backend._archive_policy_name), 'back_window': 0, 'aggregation_methods': ['std', 'count', 'min', 'max', 'sum', 'mean'], } apolicy['definition'] = (self.storage.storage ._hybrid_backend ._archive_policy_definition) pol_mock.create.assert_called_once_with(apolicy) def test_init_no_res_type_no_policy(self): self._init_storage() def test_init_with_res_type_no_policy(self): self._init_storage(res_type=True) def test_init_no_res_type_with_policy(self): self._init_storage(archive_policy=True) def test_init_with_res_type_with_policy(self): self._init_storage(res_type=True, archive_policy=True) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/tests/storage/v1/test_storage.py0000664000175000017500000002533200000000000024421 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright 2015 Objectif Libre # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
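# ---------------------------------------------------------------------------
# Illustrative sketch (editor's addition, not part of the upstream file):
# PermissiveDict, defined in test_hybrid_storage.py above, compares equal to
# any dictionary whose single watched key holds the expected value.  That is
# what lets the hybrid-storage test match resource_type.create() calls on
# their 'name' field only, ignoring the rest of the payload.  A minimal,
# self-contained rendition of the idea (_NameOnly is a hypothetical stand-in
# for PermissiveDict):
from unittest import mock


class _NameOnly(object):
    def __init__(self, value, key='name'):
        self.key = key
        self.value = value

    def __eq__(self, other):
        # only the watched key is compared; every other key is ignored
        return self.value == other.get(self.key)


_m = mock.MagicMock()
_m.create({'name': 'instance', 'attributes': {'flavor': 'm1.nano'}})
# passes even though 'attributes' was never listed in the expectation
_m.create.assert_called_once_with(_NameOnly('instance'))
# ---------------------------------------------------------------------------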
# import copy import datetime from unittest import mock import testscenarios from cloudkitty import storage from cloudkitty import tests from cloudkitty.tests import samples from cloudkitty.tests import utils as test_utils from cloudkitty.utils import tz as tzutils class StorageTest(tests.TestCase): storage_scenarios = [ ('sqlalchemy', dict(storage_backend='sqlalchemy')), ('hybrid', dict(storage_backend='hybrid'))] @classmethod def generate_scenarios(cls): cls.scenarios = testscenarios.multiply_scenarios( cls.scenarios, cls.storage_scenarios) @mock.patch('cloudkitty.storage.v1.hybrid.backends.gnocchi.gclient') @mock.patch('cloudkitty.utils.load_conf', new=test_utils.load_conf) def setUp(self, gclient_mock): super(StorageTest, self).setUp() self._tenant_id = samples.TENANT self._other_tenant_id = '8d3ae50089ea4142-9c6e1269db6a0b64' self.conf.set_override('backend', self.storage_backend, 'storage') self.conf.set_override('version', '1', 'storage') self.storage = storage.get_storage(conf=test_utils.load_conf()) self.storage.init() def insert_data(self): working_data = copy.deepcopy(samples.RATED_DATA) self.storage.push(working_data, self._tenant_id) working_data = copy.deepcopy(samples.RATED_DATA) self.storage.push(working_data, self._other_tenant_id) def insert_different_data_two_tenants(self): working_data = copy.deepcopy(samples.RATED_DATA) del working_data[1] self.storage.push(working_data, self._tenant_id) working_data = copy.deepcopy(samples.RATED_DATA) del working_data[0] self.storage.push(working_data, self._other_tenant_id) class StorageDataframeTest(StorageTest): storage_scenarios = [ ('sqlalchemy', dict(storage_backend='sqlalchemy'))] # Queries # Data def test_get_no_frame_when_nothing_in_storage(self): self.assertRaises( storage.NoTimeFrame, self.storage.retrieve, begin=(samples.FIRST_PERIOD_BEGIN - datetime.timedelta(seconds=3600)), end=samples.FIRST_PERIOD_BEGIN) def test_get_frame_filter_outside_data(self): self.insert_different_data_two_tenants() self.assertRaises( storage.NoTimeFrame, self.storage.retrieve, begin=(samples.FIRST_PERIOD_BEGIN - datetime.timedelta(seconds=3600)), end=samples.FIRST_PERIOD_BEGIN) def test_get_frame_without_filter_but_timestamp(self): self.insert_different_data_two_tenants() data = self.storage.retrieve( begin=samples.FIRST_PERIOD_BEGIN, end=samples.SECOND_PERIOD_END)['dataframes'] self.assertEqual(3, len(data)) def test_get_frame_on_one_period(self): self.insert_different_data_two_tenants() data = self.storage.retrieve( begin=samples.FIRST_PERIOD_BEGIN, end=samples.FIRST_PERIOD_END)['dataframes'] self.assertEqual(2, len(data)) def test_get_frame_on_one_period_and_one_tenant(self): self.insert_different_data_two_tenants() filters = {'project_id': self._tenant_id} data = self.storage.retrieve( begin=samples.FIRST_PERIOD_BEGIN, end=samples.FIRST_PERIOD_END, filters=filters)['dataframes'] self.assertEqual(2, len(data)) def test_get_frame_on_one_period_and_one_tenant_outside_data(self): self.insert_different_data_two_tenants() filters = {'project_id': self._other_tenant_id} self.assertRaises( storage.NoTimeFrame, self.storage.retrieve, begin=samples.FIRST_PERIOD_BEGIN, end=samples.FIRST_PERIOD_END, filters=filters) def test_get_frame_on_two_periods(self): self.insert_different_data_two_tenants() data = self.storage.retrieve( begin=samples.FIRST_PERIOD_BEGIN, end=samples.SECOND_PERIOD_END)['dataframes'] self.assertEqual(3, len(data)) class StorageTotalTest(StorageTest): storage_scenarios = [ ('sqlalchemy', dict(storage_backend='sqlalchemy'))] # Total 
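# (editor's note, illustrative: the assertions below show the shape of the
#  value returned by self.storage.total() - a dict whose 'results' key holds
#  one entry per group, e.g.
#      {'results': [{'begin': ..., 'end': ..., 'rate': 1.1074,
#                    'tenant_id': ..., 'res_type': 'instance'}]}
#  these tests only inspect 'tenant_id' / 'res_type' when they filter or
#  group on project_id or on the metric type.)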
def test_get_empty_total(self): begin = tzutils.utc_to_local(samples.FIRST_PERIOD_BEGIN - datetime.timedelta(seconds=3600)) end = tzutils.utc_to_local(samples.FIRST_PERIOD_BEGIN) self.insert_data() total = self.storage.total( begin=begin, end=end)['results'] self.assertEqual(1, len(total)) self.assertEqual(total[0]["rate"], 0) self.assertEqual(begin, total[0]["begin"]) self.assertEqual(end, total[0]["end"]) def test_get_total_without_filter_but_timestamp(self): begin = tzutils.utc_to_local(samples.FIRST_PERIOD_BEGIN) end = tzutils.utc_to_local(samples.SECOND_PERIOD_END) self.insert_data() total = self.storage.total( begin=begin, end=end)['results'] # FIXME(sheeprine): floating point error (transition to decimal) self.assertEqual(1, len(total)) self.assertEqual(1.9473999999999998, total[0]["rate"]) self.assertEqual(begin, total[0]["begin"]) self.assertEqual(end, total[0]["end"]) def test_get_total_filtering_on_one_period(self): begin = tzutils.utc_to_local(samples.FIRST_PERIOD_BEGIN) end = tzutils.utc_to_local(samples.FIRST_PERIOD_END) self.insert_data() total = self.storage.total( begin=begin, end=end)['results'] self.assertEqual(1, len(total)) self.assertEqual(1.1074, total[0]["rate"]) self.assertEqual(begin, total[0]["begin"]) self.assertEqual(end, total[0]["end"]) def test_get_total_filtering_on_one_period_and_one_tenant(self): begin = tzutils.utc_to_local(samples.FIRST_PERIOD_BEGIN) end = tzutils.utc_to_local(samples.FIRST_PERIOD_END) self.insert_data() filters = {'project_id': self._tenant_id} total = self.storage.total( begin=begin, end=end, filters=filters)['results'] self.assertEqual(1, len(total)) self.assertEqual(0.5537, total[0]["rate"]) self.assertEqual(self._tenant_id, total[0]["tenant_id"]) self.assertEqual(begin, total[0]["begin"]) self.assertEqual(end, total[0]["end"]) def test_get_total_filtering_on_service(self): begin = tzutils.utc_to_local(samples.FIRST_PERIOD_BEGIN) end = tzutils.utc_to_local(samples.FIRST_PERIOD_END) self.insert_data() total = self.storage.total( begin=begin, end=end, metric_types='instance')['results'] self.assertEqual(1, len(total)) self.assertEqual(0.84, total[0]["rate"]) self.assertEqual('instance', total[0]["res_type"]) self.assertEqual(begin, total[0]["begin"]) self.assertEqual(end, total[0]["end"]) def test_get_total_groupby_tenant(self): begin = tzutils.utc_to_local(samples.FIRST_PERIOD_BEGIN) end = tzutils.utc_to_local(samples.SECOND_PERIOD_END) self.insert_data() total = self.storage.total( begin=begin, end=end, groupby=['project_id'])['results'] self.assertEqual(2, len(total)) self.assertEqual(0.9737, total[0]["rate"]) self.assertEqual(self._other_tenant_id, total[0]["tenant_id"]) self.assertEqual(begin, total[0]["begin"]) self.assertEqual(end, total[0]["end"]) self.assertEqual(0.9737, total[1]["rate"]) self.assertEqual(self._tenant_id, total[1]["tenant_id"]) self.assertEqual(begin, total[1]["begin"]) self.assertEqual(end, total[1]["end"]) def test_get_total_groupby_restype(self): begin = tzutils.utc_to_local(samples.FIRST_PERIOD_BEGIN) end = tzutils.utc_to_local(samples.SECOND_PERIOD_END) self.insert_data() total = self.storage.total( begin=begin, end=end, groupby=['type'])['results'] self.assertEqual(2, len(total)) self.assertEqual(0.2674, total[0]["rate"]) self.assertEqual('image.size', total[0]["res_type"]) self.assertEqual(begin, total[0]["begin"]) self.assertEqual(end, total[0]["end"]) self.assertEqual(1.68, total[1]["rate"]) self.assertEqual('instance', total[1]["res_type"]) self.assertEqual(begin, total[1]["begin"]) 
self.assertEqual(end, total[1]["end"]) def test_get_total_groupby_tenant_and_restype(self): begin = tzutils.utc_to_local(samples.FIRST_PERIOD_BEGIN) end = tzutils.utc_to_local(samples.SECOND_PERIOD_END) self.insert_data() total = self.storage.total( begin=begin, end=end, groupby=['project_id', 'type'])['results'] self.assertEqual(4, len(total)) self.assertEqual(0.1337, total[0]["rate"]) self.assertEqual(self._other_tenant_id, total[0]["tenant_id"]) self.assertEqual('image.size', total[0]["res_type"]) self.assertEqual(begin, total[0]["begin"]) self.assertEqual(end, total[0]["end"]) self.assertEqual(0.1337, total[1]["rate"]) self.assertEqual(self._tenant_id, total[1]["tenant_id"]) self.assertEqual('image.size', total[1]["res_type"]) self.assertEqual(begin, total[1]["begin"]) self.assertEqual(end, total[1]["end"]) self.assertEqual(0.84, total[2]["rate"]) self.assertEqual(self._other_tenant_id, total[2]["tenant_id"]) self.assertEqual('instance', total[2]["res_type"]) self.assertEqual(begin, total[2]["begin"]) self.assertEqual(end, total[2]["end"]) self.assertEqual(0.84, total[3]["rate"]) self.assertEqual(self._tenant_id, total[3]["tenant_id"]) self.assertEqual('instance', total[3]["res_type"]) self.assertEqual(begin, total[3]["begin"]) self.assertEqual(end, total[3]["end"]) StorageTest.generate_scenarios() StorageTotalTest.generate_scenarios() StorageDataframeTest.generate_scenarios() ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1727866639.2674868 cloudkitty-21.0.0/cloudkitty/tests/storage/v2/0000775000175000017500000000000000000000000021340 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/tests/storage/v2/__init__.py0000664000175000017500000000000000000000000023437 0ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1727866639.2674868 cloudkitty-21.0.0/cloudkitty/tests/storage/v2/elasticsearch/0000775000175000017500000000000000000000000024152 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/tests/storage/v2/elasticsearch/__init__.py0000664000175000017500000000000000000000000026251 0ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/tests/storage/v2/elasticsearch/test_client.py0000664000175000017500000004645400000000000027056 0ustar00zuulzuul00000000000000# Copyright 2019 Objectif Libre # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
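# ---------------------------------------------------------------------------
# Illustrative sketch (editor's addition, not part of the upstream file): the
# generate_scenarios() calls that close test_storage.py above rely on
# testscenarios to multiply every existing scenario with every storage
# backend, so each test method runs once per combination.  Roughly (the
# scenario lists below are made up for the example):
import testscenarios

base = [('mysql', {'sql_backend': 'mysql'})]
storage = [('sqlalchemy', {'storage_backend': 'sqlalchemy'}),
           ('hybrid', {'storage_backend': 'hybrid'})]

combined = testscenarios.multiply_scenarios(base, storage)
# -> [('mysql,sqlalchemy', {'sql_backend': 'mysql',
#                           'storage_backend': 'sqlalchemy'}),
#     ('mysql,hybrid', {'sql_backend': 'mysql',
#                       'storage_backend': 'hybrid'})]
# ---------------------------------------------------------------------------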
# import collections import datetime import unittest from unittest import mock from dateutil import tz from cloudkitty import dataframe from cloudkitty.storage.v2.elasticsearch import client from cloudkitty.storage.v2.elasticsearch import exceptions class TestElasticsearchClient(unittest.TestCase): def setUp(self): super(TestElasticsearchClient, self).setUp() self.client = client.ElasticsearchClient( 'http://elasticsearch:9200', 'index_name', 'test_mapping', autocommit=False) def test_build_must_no_params(self): self.assertEqual(self.client._build_must(None, None, None, None), []) def test_build_must_with_start_end(self): start = datetime.datetime(2019, 8, 30, tzinfo=tz.tzutc()) end = datetime.datetime(2019, 8, 31, tzinfo=tz.tzutc()) self.assertEqual( self.client._build_must(start, end, None, None), [{'range': {'start': {'gte': '2019-08-30T00:00:00+00:00'}}}, {'range': {'end': {'lte': '2019-08-31T00:00:00+00:00'}}}], ) def test_build_must_with_filters(self): filters = {'one': '1', 'two': '2', 'type': 'awesome'} self.assertEqual( self.client._build_must(None, None, None, filters), [{'term': {'type': 'awesome'}}], ) def test_build_must_with_metric_types(self): types = ['awesome', 'amazing'] self.assertEqual( self.client._build_must(None, None, types, None), [{'terms': {'type': ['awesome', 'amazing']}}], ) def test_build_should_no_filters(self): self.assertEqual( self.client._build_should(None), [], ) def test_build_should_with_filters(self): filters = collections.OrderedDict([ ('one', '1'), ('two', '2'), ('type', 'awesome')]) self.assertEqual( self.client._build_should(filters), [ {'term': {'groupby.one': '1'}}, {'term': {'metadata.one': '1'}}, {'term': {'groupby.two': '2'}}, {'term': {'metadata.two': '2'}}, ], ) def test_build_composite_no_groupby(self): self.assertEqual(self.client._build_composite(None), []) def test_build_composite(self): self.assertEqual( self.client._build_composite(['one', 'type', 'two']), {'sources': [ {'one': {'terms': {'field': 'groupby.one'}}}, {'type': {'terms': {'field': 'type'}}}, {'two': {'terms': {'field': 'groupby.two'}}}, ]}, ) def test_build_query_no_args(self): self.assertEqual(self.client._build_query(None, None, None), {}) def test_build_query(self): must = [{'range': {'start': {'gte': '2019-08-30T00:00:00+00:00'}}}, {'range': {'start': {'lt': '2019-08-31T00:00:00+00:00'}}}] should = [ {'term': {'groupby.one': '1'}}, {'term': {'metadata.one': '1'}}, {'term': {'groupby.two': '2'}}, {'term': {'metadata.two': '2'}}, ] composite = {'sources': [ {'one': {'terms': {'field': 'groupby.one'}}}, {'type': {'terms': {'field': 'type'}}}, {'two': {'terms': {'field': 'groupby.two'}}}, ]} expected = { 'query': { 'bool': { 'must': must, 'should': should, 'minimum_should_match': 2, }, }, 'aggs': { 'sum_and_price': { 'composite': composite, 'aggregations': { "sum_price": {"sum": {"field": "price"}}, "sum_qty": {"sum": {"field": "qty"}}, }, }, }, } self.assertEqual( self.client._build_query(must, should, composite), expected) def test_log_query_no_hits(self): url = '/endpoint' body = {'1': 'one'} response = {'took': 42} expected = """Query on /endpoint with body "{'1': 'one'}" took 42ms""" with mock.patch.object(client.LOG, 'debug') as debug_mock: self.client._log_query(url, body, response) debug_mock.assert_called_once_with(expected) def test_log_query_with_hits(self): url = '/endpoint' body = {'1': 'one'} response = {'took': 42, 'hits': {'total': 1337}} expected = """Query on /endpoint with body "{'1': 'one'}" took 42ms""" expected += " for 1337 hits" with 
mock.patch.object(client.LOG, 'debug') as debug_mock: self.client._log_query(url, body, response) debug_mock.assert_called_once_with(expected) def test_req_valid_status_code_no_deserialize(self): resp_mock = mock.MagicMock() resp_mock.status_code = 200 method_mock = mock.MagicMock() method_mock.return_value = resp_mock req_resp = self.client._req( method_mock, None, None, None, deserialize=False) method_mock.assert_called_once_with(None, data=None, params=None) self.assertEqual(req_resp, resp_mock) def test_req_valid_status_code_deserialize(self): resp_mock = mock.MagicMock() resp_mock.status_code = 200 resp_mock.json.return_value = 'output' method_mock = mock.MagicMock() method_mock.return_value = resp_mock with mock.patch.object(self.client, '_log_query') as log_mock: req_resp = self.client._req( method_mock, None, None, None, deserialize=True) method_mock.assert_called_once_with(None, data=None, params=None) self.assertEqual(req_resp, 'output') log_mock.assert_called_once_with(None, None, 'output') def test_req_invalid_status_code(self): resp_mock = mock.MagicMock() resp_mock.status_code = 400 method_mock = mock.MagicMock() method_mock.return_value = resp_mock self.assertRaises(exceptions.InvalidStatusCode, self.client._req, method_mock, None, None, None) def test_put_mapping(self): mapping = {'a': 'b'} with mock.patch.object(self.client, '_req') as rmock: self.client.put_mapping(mapping) rmock.assert_called_once_with( self.client._sess.put, 'http://elasticsearch:9200/index_name/_mapping/test_mapping', '{"a": "b"}', {'include_type_name': 'true'}, deserialize=False) def test_get_index(self): with mock.patch.object(self.client, '_req') as rmock: self.client.get_index() rmock.assert_called_once_with( self.client._sess.get, 'http://elasticsearch:9200/index_name', None, None, deserialize=False) def test_search_without_scroll(self): mapping = {'a': 'b'} with mock.patch.object(self.client, '_req') as rmock: self.client.search(mapping, scroll=False) rmock.assert_called_once_with( self.client._sess.get, 'http://elasticsearch:9200/index_name/_search', '{"a": "b"}', None) def test_search_with_scroll(self): mapping = {'a': 'b'} with mock.patch.object(self.client, '_req') as rmock: self.client.search(mapping, scroll=True) rmock.assert_called_once_with( self.client._sess.get, 'http://elasticsearch:9200/index_name/_search', '{"a": "b"}', {'scroll': '60s'}) def test_scroll(self): body = {'a': 'b'} with mock.patch.object(self.client, '_req') as rmock: self.client.scroll(body) rmock.assert_called_once_with( self.client._sess.get, 'http://elasticsearch:9200/_search/scroll', '{"a": "b"}', None) def test_close_scroll(self): body = {'a': 'b'} with mock.patch.object(self.client, '_req') as rmock: self.client.close_scroll(body) rmock.assert_called_once_with( self.client._sess.delete, 'http://elasticsearch:9200/_search/scroll', '{"a": "b"}', None, deserialize=False) def test_close_scrolls(self): with mock.patch.object(self.client, 'close_scroll') as func_mock: with mock.patch.object(self.client, '_scroll_ids', new=['a', 'b', 'c']): self.client.close_scrolls() func_mock.assert_called_once_with( {'scroll_id': ['a', 'b', 'c']}) self.assertSetEqual(set(), self.client._scroll_ids) def test_bulk_with_instruction(self): instruction = {'instruction': {}} terms = ('one', 'two', 'three') expected_data = ''.join([ '{"instruction": {}}\n' '"one"\n' '{"instruction": {}}\n' '"two"\n' '{"instruction": {}}\n' '"three"\n', ]) with mock.patch.object(self.client, '_req') as rmock: self.client.bulk_with_instruction(instruction, 
terms) rmock.assert_called_once_with( self.client._sess.post, 'http://elasticsearch:9200/index_name/test_mapping/_bulk', expected_data, None, deserialize=False) def test_bulk_index(self): terms = ('one', 'two', 'three') with mock.patch.object(self.client, 'bulk_with_instruction') as fmock: self.client.bulk_index(terms) fmock.assert_called_once_with({'index': {}}, terms) def test_commit(self): docs = ['one', 'two', 'three', 'four', 'five', 'six', 'seven'] size = 3 with mock.patch.object(self.client, 'bulk_index') as bulk_mock: with mock.patch.object(self.client, '_docs', new=docs): with mock.patch.object(self.client, '_chunk_size', new=size): self.client.commit() bulk_mock.assert_has_calls([ mock.call(['one', 'two', 'three']), mock.call(['four', 'five', 'six']), mock.call(['seven']), ]) def test_add_point_no_autocommit(self): point = dataframe.DataPoint( 'unit', '0.42', '0.1337', {}, {}) start = datetime.datetime(2019, 1, 1) end = datetime.datetime(2019, 1, 1, 1) with mock.patch.object(self.client, 'commit') as func_mock: with mock.patch.object(self.client, '_autocommit', new=False): with mock.patch.object(self.client, '_chunk_size', new=3): self.client._docs = [] for _ in range(5): self.client.add_point( point, 'awesome_type', start, end) func_mock.assert_not_called() self.assertEqual(self.client._docs, [{ 'start': start, 'end': end, 'type': 'awesome_type', 'unit': point.unit, 'description': point.description, 'qty': point.qty, 'price': point.price, 'groupby': point.groupby, 'metadata': point.metadata, } for _ in range(5)]) self.client._docs = [] def test_add_point_with_autocommit(self): point = dataframe.DataPoint( 'unit', '0.42', '0.1337', {}, {}) start = datetime.datetime(2019, 1, 1) end = datetime.datetime(2019, 1, 1, 1) commit_calls = {'count': 0} def commit(): # We can't re-assign nonlocal variables in python2 commit_calls['count'] += 1 self.client._docs = [] with mock.patch.object(self.client, 'commit', new=commit): with mock.patch.object(self.client, '_autocommit', new=True): with mock.patch.object(self.client, '_chunk_size', new=3): self.client._docs = [] for i in range(5): self.client.add_point( point, 'awesome_type', start, end) self.assertEqual(commit_calls['count'], 1) self.assertEqual(self.client._docs, [{ 'start': start, 'end': end, 'type': 'awesome_type', 'unit': point.unit, 'description': point.description, 'qty': point.qty, 'price': point.price, 'groupby': point.groupby, 'metadata': point.metadata, } for _ in range(2)]) # cleanup self.client._docs = [] def test_delete_by_query_with_must(self): with mock.patch.object(self.client, '_req') as rmock: with mock.patch.object(self.client, '_build_must') as func_mock: func_mock.return_value = {'a': 'b'} self.client.delete_by_query() rmock.assert_called_once_with( self.client._sess.post, 'http://elasticsearch:9200/index_name/_delete_by_query', '{"query": {"bool": {"must": {"a": "b"}}}}', None) def test_delete_by_query_no_must(self): with mock.patch.object(self.client, '_req') as rmock: with mock.patch.object(self.client, '_build_must') as func_mock: func_mock.return_value = {} self.client.delete_by_query() rmock.assert_called_once_with( self.client._sess.post, 'http://elasticsearch:9200/index_name/_delete_by_query', None, None) def test_retrieve_no_pagination(self): search_resp = { '_scroll_id': '000', 'hits': {'hits': ['one', 'two', 'three'], 'total': 12}, } scroll_resps = [{ '_scroll_id': str(i + 1) * 3, 'hits': {'hits': ['one', 'two', 'three']}, } for i in range(3)] scroll_resps.append({'_scroll_id': '444', 'hits': {'hits': 
[]}}) self.client._scroll_ids = set() with mock.patch.object(self.client, 'search') as search_mock: with mock.patch.object(self.client, 'scroll') as scroll_mock: with mock.patch.object(self.client, 'close_scrolls') as close: search_mock.return_value = search_resp scroll_mock.side_effect = scroll_resps total, resp = self.client.retrieve( None, None, None, None, paginate=False) search_mock.assert_called_once() scroll_mock.assert_has_calls([ mock.call({ 'scroll_id': str(i) * 3, 'scroll': '60s', }) for i in range(4) ]) self.assertEqual(total, 12) self.assertEqual(resp, ['one', 'two', 'three'] * 4) self.assertSetEqual(self.client._scroll_ids, set(str(i) * 3 for i in range(5))) close.assert_called_once() self.client._scroll_ids = set() def test_retrieve_with_pagination(self): search_resp = { '_scroll_id': '000', 'hits': {'hits': ['one', 'two', 'three'], 'total': 12}, } scroll_resps = [{ '_scroll_id': str(i + 1) * 3, 'hits': {'hits': ['one', 'two', 'three']}, } for i in range(3)] scroll_resps.append({'_scroll_id': '444', 'hits': {'hits': []}}) self.client._scroll_ids = set() with mock.patch.object(self.client, 'search') as search_mock: with mock.patch.object(self.client, 'scroll') as scroll_mock: with mock.patch.object(self.client, 'close_scrolls') as close: search_mock.return_value = search_resp scroll_mock.side_effect = scroll_resps total, resp = self.client.retrieve( None, None, None, None, offset=2, limit=4, paginate=True) search_mock.assert_called_once() scroll_mock.assert_called_once_with({ 'scroll_id': '000', 'scroll': '60s', }) self.assertEqual(total, 12) self.assertEqual(resp, ['three', 'one', 'two', 'three']) self.assertSetEqual(self.client._scroll_ids, set(str(i) * 3 for i in range(2))) close.assert_called_once() self.client._scroll_ids = set() def _do_test_total(self, groupby, paginate): with mock.patch.object(self.client, 'search') as search_mock: if groupby: search_resps = [{ 'aggregations': { 'sum_and_price': { 'buckets': ['one', 'two', 'three'], 'after_key': str(i), } } } for i in range(3)] last_resp_aggs = search_resps[2]['aggregations'] last_resp_aggs['sum_and_price'].pop('after_key') last_resp_aggs['sum_and_price']['buckets'] = [] search_mock.side_effect = search_resps else: search_mock.return_value = { 'aggregations': ['one', 'two', 'three'], } resp = self.client.total(None, None, None, None, groupby, offset=2, limit=4, paginate=paginate) if not groupby: search_mock.assert_called_once() return resp def test_total_no_groupby_no_pagination(self): total, aggs = self._do_test_total(None, False) self.assertEqual(total, 1) self.assertEqual(aggs, [['one', 'two', 'three']]) def test_total_no_groupby_with_pagination(self): total, aggs = self._do_test_total(None, True) self.assertEqual(total, 1) self.assertEqual(aggs, [['one', 'two', 'three']]) def test_total_with_groupby_no_pagination(self): total, aggs = self._do_test_total(['x'], False) self.assertEqual(total, 6) self.assertEqual(aggs, ['one', 'two', 'three'] * 2) def test_total_with_groupby_with_pagination(self): total, aggs = self._do_test_total(['x'], True) self.assertEqual(total, 6) self.assertEqual(aggs, ['three', 'one', 'two', 'three']) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/tests/storage/v2/es_utils.py0000664000175000017500000000707200000000000023547 0ustar00zuulzuul00000000000000# Copyright 2019 Objectif Libre # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with 
the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # import copy import functools import itertools import requests from cloudkitty.storage.v2.elasticsearch import client class FakeElasticsearchClient(client.ElasticsearchClient): def __init__(self, *args, **kwargs): kwargs["autocommit"] = False super(FakeElasticsearchClient, self).__init__(*args, **kwargs) for method in ('get_index', 'put_mapping'): setattr(self, method, self.__base_response) @staticmethod def __base_response(*args, **kwargs): r = requests.Response() r.status_code = 200 return r def commit(self): pass @staticmethod def __filter_func(begin, end, filters, mtypes, doc): type_filter = lambda doc: ( # noqa: E731 doc['type'] in mtypes if mtypes else True) time_filter = lambda doc: ( # noqa: E731 (doc['start'] >= begin if begin else True) and (doc['start'] < end if end else True)) def filter_(doc): return all((doc['groupby'].get(k) == v or (doc['metadata'].get(k) == v) for k, v in filters.items())) if filters else True return type_filter(doc) and time_filter(doc) and filter_(doc) def retrieve(self, begin, end, filters, metric_types, offset=0, limit=1000, paginate=True): filter_func = functools.partial( self.__filter_func, begin, end, filters, metric_types) output = list(filter(filter_func, self._docs))[offset:offset+limit] for doc in output: doc["start"] = doc["start"].isoformat() doc["end"] = doc["end"].isoformat() doc["_source"] = copy.deepcopy(doc) return len(output), output def total(self, begin, end, metric_types, filters, groupby, custom_fields=None, offset=0, limit=1000, paginate=True): filter_func = functools.partial( self.__filter_func, begin, end, filters, metric_types) docs = list(filter(filter_func, self._docs)) if not groupby: return 1, [{ 'sum_qty': {'value': sum(doc['qty'] for doc in docs)}, 'sum_price': {'value': sum(doc['price'] for doc in docs)}, 'begin': begin, 'end': end, }] output = [] key_func = lambda d: tuple( # noqa: E731 d['type'] if g == 'type' else d['groupby'][g] for g in groupby) docs.sort(key=key_func) for groups, values in itertools.groupby(docs, key_func): val_list = list(values) output.append({ 'begin': begin, 'end': end, 'sum_qty': {'value': sum(doc['qty'] for doc in val_list)}, 'sum_price': {'value': sum(doc['price'] for doc in val_list)}, 'key': dict(zip(groupby, groups)), }) return len(output), output[offset:offset+limit] def _req(self, method, url, data, params, deserialize=True): pass ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/tests/storage/v2/influx_utils.py0000664000175000017500000001235500000000000024445 0ustar00zuulzuul00000000000000# Copyright 2018 Objectif Libre # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
See the # License for the specific language governing permissions and limitations # under the License. # import copy import functools from influxdb import resultset from cloudkitty.storage.v2.influx import _sanitized_groupby from cloudkitty.storage.v2.influx import InfluxClient class FakeInfluxClient(InfluxClient): total_sample = { "statement_id": 0, "series": [] } total_series_sample = { "name": "dataframes", "tags": {}, "columns": ["time", "qty", "price"], "values": [], } def __init__(self, **kwargs): super(FakeInfluxClient, self).__init__(autocommit=False) def commit(self): pass @staticmethod def __filter_func(types, filters, begin, end, elem): if elem['time'] < begin or elem['time'] >= end: return False if types and elem['tags']['type'] not in types: return False if filters is None: return True for key in filters.keys(): if key not in elem['tags'].keys(): return False if elem['tags'][key] != filters[key]: return False return True def __get_target_serie(self, point, series, groupby): target_serie = None for serie in series: if not groupby: target_serie = serie break valid = True for tag in serie['tags'].keys(): if tag == 'time': if point['time'].isoformat() != serie['values'][0][0]: valid = False break else: continue if tag not in point['tags'].keys() or \ point['tags'][tag] != serie['tags'][tag]: valid = False break if valid: target_serie = serie break if target_serie is None: target_serie = copy.deepcopy(self.total_series_sample) if groupby: target_serie['tags'] = {k: point['tags'][k] for k in _sanitized_groupby(groupby)} else: target_serie['tags'] = {} target_serie['values'] = [[point['time'].isoformat(), 0, 0]] series.append(target_serie) return target_serie def get_total(self, types, begin, end, custom_fields, groupby=None, filters=None, limit=None): total = copy.deepcopy(self.total_sample) series = [] filter_func = functools.partial( self.__filter_func, types, filters, begin, end) points = filter(filter_func, self._points) for point in points: target_serie = self.__get_target_serie(point, series, groupby) target_serie['values'][0][1] += point['fields']['qty'] target_serie['values'][0][2] += point['fields']['price'] total['series'] = series return resultset.ResultSet(total) def retrieve(self, types, filters, begin, end, offset=0, limit=1000, paginate=True): output = copy.deepcopy(self.total_sample) filter_func = functools.partial( self.__filter_func, types, filters, begin, end) points = list(filter(filter_func, self._points)) columns = set() for point in points: columns.update(point['tags'].keys()) columns.update(point['fields'].keys()) columns.add('time') series = { 'name': 'dataframes', 'columns': list(columns), } values = [] def __get_tag_or_field(point, key): if key == 'time': return point['time'].isoformat() return point['tags'].get(key) or point['fields'].get(key) for point in points: values.append([__get_tag_or_field(point, key) for key in series['columns']]) series['values'] = values output['series'] = [series] return len(list(points)), resultset.ResultSet(output) def delete(self, begin, end, filters): def __filter_func(elem): def __time(elem): return ((begin and begin > elem['time']) or (end and end <= elem['time'])) def __filt(elem): return all( (elem['tags'].get(k, None) == v or elem['fields'].get(k, None) == v) for k, v in filters.items()) return __time(elem) and __filt(elem) self._points = list(filter(__filter_func, self._points)) def retention_policy_exists(self, database, policy): return True ././@PaxHeader0000000000000000000000000000003400000000000011452 
xustar000000000000000028 mtime=1727866639.2674868 cloudkitty-21.0.0/cloudkitty/tests/storage/v2/opensearch/0000775000175000017500000000000000000000000023467 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/tests/storage/v2/opensearch/__init__.py0000664000175000017500000000000000000000000025566 0ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/tests/storage/v2/opensearch/test_client.py0000664000175000017500000004616500000000000026372 0ustar00zuulzuul00000000000000# Copyright 2019 Objectif Libre # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # import collections import datetime import unittest from unittest import mock from dateutil import tz from cloudkitty import dataframe from cloudkitty.storage.v2.opensearch import client from cloudkitty.storage.v2.opensearch import exceptions class TestOpenSearchClient(unittest.TestCase): def setUp(self): super(TestOpenSearchClient, self).setUp() self.client = client.OpenSearchClient( 'http://opensearch:9200', 'index_name', 'test_mapping', autocommit=False) def test_build_must_no_params(self): self.assertEqual(self.client._build_must(None, None, None, None), []) def test_build_must_with_start_end(self): start = datetime.datetime(2019, 8, 30, tzinfo=tz.tzutc()) end = datetime.datetime(2019, 8, 31, tzinfo=tz.tzutc()) self.assertEqual( self.client._build_must(start, end, None, None), [{'range': {'start': {'gte': '2019-08-30T00:00:00+00:00'}}}, {'range': {'end': {'lte': '2019-08-31T00:00:00+00:00'}}}], ) def test_build_must_with_filters(self): filters = {'one': '1', 'two': '2', 'type': 'awesome'} self.assertEqual( self.client._build_must(None, None, None, filters), [{'term': {'type': 'awesome'}}], ) def test_build_must_with_metric_types(self): types = ['awesome', 'amazing'] self.assertEqual( self.client._build_must(None, None, types, None), [{'terms': {'type': ['awesome', 'amazing']}}], ) def test_build_should_no_filters(self): self.assertEqual( self.client._build_should(None), [], ) def test_build_should_with_filters(self): filters = collections.OrderedDict([ ('one', '1'), ('two', '2'), ('type', 'awesome')]) self.assertEqual( self.client._build_should(filters), [ {'term': {'groupby.one': '1'}}, {'term': {'metadata.one': '1'}}, {'term': {'groupby.two': '2'}}, {'term': {'metadata.two': '2'}}, ], ) def test_build_composite_no_groupby(self): self.assertEqual(self.client._build_composite(None), []) def test_build_composite(self): self.assertEqual( self.client._build_composite(['one', 'type', 'two']), {'sources': [ {'one': {'terms': {'field': 'groupby.one.keyword'}}}, {'type': {'terms': {'field': 'type.keyword'}}}, {'two': {'terms': {'field': 'groupby.two.keyword'}}}, ]}, ) def test_build_query_no_args(self): self.assertEqual(self.client._build_query(None, None, None), {}) def test_build_query(self): must = [{'range': {'start': {'gte': 
'2019-08-30T00:00:00+00:00'}}}, {'range': {'start': {'lt': '2019-08-31T00:00:00+00:00'}}}] should = [ {'term': {'groupby.one': '1'}}, {'term': {'metadata.one': '1'}}, {'term': {'groupby.two': '2'}}, {'term': {'metadata.two': '2'}}, ] composite = {'sources': [ {'one': {'terms': {'field': 'groupby.one'}}}, {'type': {'terms': {'field': 'type'}}}, {'two': {'terms': {'field': 'groupby.two'}}}, ]} expected = { 'query': { 'bool': { 'must': must, 'should': should, 'minimum_should_match': 2, }, }, 'aggs': { 'sum_and_price': { 'composite': composite, 'aggregations': { "sum_price": {"sum": {"field": "price"}}, "sum_qty": {"sum": {"field": "qty"}}, }, }, }, } self.assertEqual( self.client._build_query(must, should, composite), expected) def test_log_query_no_hits(self): url = '/endpoint' body = {'1': 'one'} response = {'took': 42} expected = """Query on /endpoint with body "{'1': 'one'}" took 42ms""" with mock.patch.object(client.LOG, 'debug') as debug_mock: self.client._log_query(url, body, response) debug_mock.assert_called_once_with(expected) def test_log_query_with_hits(self): url = '/endpoint' body = {'1': 'one'} response = {'took': 42, 'hits': {'total': 1337}} expected = """Query on /endpoint with body "{'1': 'one'}" took 42ms""" expected += " for 1337 hits" with mock.patch.object(client.LOG, 'debug') as debug_mock: self.client._log_query(url, body, response) debug_mock.assert_called_once_with(expected) def test_req_valid_status_code_no_deserialize(self): resp_mock = mock.MagicMock() resp_mock.status_code = 200 method_mock = mock.MagicMock() method_mock.return_value = resp_mock req_resp = self.client._req( method_mock, None, None, None, deserialize=False) method_mock.assert_called_once_with(None, data=None, params=None) self.assertEqual(req_resp, resp_mock) def test_req_valid_status_code_deserialize(self): resp_mock = mock.MagicMock() resp_mock.status_code = 200 resp_mock.json.return_value = 'output' method_mock = mock.MagicMock() method_mock.return_value = resp_mock with mock.patch.object(self.client, '_log_query') as log_mock: req_resp = self.client._req( method_mock, None, None, None, deserialize=True) method_mock.assert_called_once_with(None, data=None, params=None) self.assertEqual(req_resp, 'output') log_mock.assert_called_once_with(None, None, 'output') def test_req_invalid_status_code(self): resp_mock = mock.MagicMock() resp_mock.status_code = 400 method_mock = mock.MagicMock() method_mock.return_value = resp_mock self.assertRaises(exceptions.InvalidStatusCode, self.client._req, method_mock, None, None, None) def test_post_mapping(self): mapping = {'a': 'b'} with mock.patch.object(self.client, '_req') as rmock: self.client.post_mapping(mapping) rmock.assert_called_once_with( self.client._sess.post, 'http://opensearch:9200/index_name/test_mapping', '{"a": "b"}', {}, deserialize=False) def test_get_index(self): with mock.patch.object(self.client, '_req') as rmock: self.client.get_index() rmock.assert_called_once_with( self.client._sess.get, 'http://opensearch:9200/index_name', None, None, deserialize=False) def test_search_without_scroll(self): mapping = {'a': 'b'} with mock.patch.object(self.client, '_req') as rmock: self.client.search(mapping, scroll=False) rmock.assert_called_once_with( self.client._sess.get, 'http://opensearch:9200/index_name/_search', '{"a": "b"}', None) def test_search_with_scroll(self): mapping = {'a': 'b'} with mock.patch.object(self.client, '_req') as rmock: self.client.search(mapping, scroll=True) rmock.assert_called_once_with( self.client._sess.get, 
'http://opensearch:9200/index_name/_search', '{"a": "b"}', {'scroll': '60s'}) def test_scroll(self): body = {'a': 'b'} with mock.patch.object(self.client, '_req') as rmock: self.client.scroll(body) rmock.assert_called_once_with( self.client._sess.get, 'http://opensearch:9200/_search/scroll', '{"a": "b"}', None) def test_close_scroll(self): body = {'a': 'b'} with mock.patch.object(self.client, '_req') as rmock: self.client.close_scroll(body) rmock.assert_called_once_with( self.client._sess.delete, 'http://opensearch:9200/_search/scroll', '{"a": "b"}', None, deserialize=False) def test_close_scrolls(self): with mock.patch.object(self.client, 'close_scroll') as func_mock: with mock.patch.object(self.client, '_scroll_ids', new=['a', 'b', 'c']): self.client.close_scrolls() func_mock.assert_called_once_with( {'scroll_id': ['a', 'b', 'c']}) self.assertSetEqual(set(), self.client._scroll_ids) def test_bulk_with_instruction(self): instruction = {'instruction': {}} terms = ('one', 'two', 'three') expected_data = ''.join([ '{"instruction": {}}\n' '"one"\n' '{"instruction": {}}\n' '"two"\n' '{"instruction": {}}\n' '"three"\n', ]) with mock.patch.object(self.client, '_req') as rmock: self.client.bulk_with_instruction(instruction, terms) rmock.assert_called_once_with( self.client._sess.post, 'http://opensearch:9200/index_name/_bulk', expected_data, None, deserialize=False) def test_bulk_index(self): terms = ('one', 'two', 'three') with mock.patch.object(self.client, 'bulk_with_instruction') as fmock: self.client.bulk_index(terms) fmock.assert_called_once_with({'index': {}}, terms) def test_commit(self): docs = ['one', 'two', 'three', 'four', 'five', 'six', 'seven'] size = 3 with mock.patch.object(self.client, 'bulk_index') as bulk_mock: with mock.patch.object(self.client, '_docs', new=docs): with mock.patch.object(self.client, '_chunk_size', new=size): self.client.commit() bulk_mock.assert_has_calls([ mock.call(['one', 'two', 'three']), mock.call(['four', 'five', 'six']), mock.call(['seven']), ]) def test_add_point_no_autocommit(self): point = dataframe.DataPoint( 'unit', '0.42', '0.1337', {}, {}) start = datetime.datetime(2019, 1, 1) end = datetime.datetime(2019, 1, 1, 1) with mock.patch.object(self.client, 'commit') as func_mock: with mock.patch.object(self.client, '_autocommit', new=False): with mock.patch.object(self.client, '_chunk_size', new=3): self.client._docs = [] for _ in range(5): self.client.add_point( point, 'awesome_type', start, end) func_mock.assert_not_called() self.assertEqual(self.client._docs, [{ 'start': start, 'end': end, 'type': 'awesome_type', 'unit': point.unit, 'qty': point.qty, 'price': point.price, 'groupby': point.groupby, 'metadata': point.metadata, } for _ in range(5)]) self.client._docs = [] def test_add_point_with_autocommit(self): point = dataframe.DataPoint( 'unit', '0.42', '0.1337', {}, {}) start = datetime.datetime(2019, 1, 1) end = datetime.datetime(2019, 1, 1, 1) commit_calls = {'count': 0} def commit(): # We can't re-assign nonlocal variables in python2 commit_calls['count'] += 1 self.client._docs = [] with mock.patch.object(self.client, 'commit', new=commit): with mock.patch.object(self.client, '_autocommit', new=True): with mock.patch.object(self.client, '_chunk_size', new=3): self.client._docs = [] for i in range(5): self.client.add_point( point, 'awesome_type', start, end) self.assertEqual(commit_calls['count'], 1) self.assertEqual(self.client._docs, [{ 'start': start, 'end': end, 'type': 'awesome_type', 'unit': point.unit, 'qty': point.qty, 'price': 
point.price, 'groupby': point.groupby, 'metadata': point.metadata, } for _ in range(2)]) # cleanup self.client._docs = [] def test_delete_by_query_with_must(self): with mock.patch.object(self.client, '_req') as rmock: with mock.patch.object(self.client, '_build_must') as func_mock: func_mock.return_value = {'a': 'b'} self.client.delete_by_query() rmock.assert_called_once_with( self.client._sess.post, 'http://opensearch:9200/index_name/_delete_by_query', '{"query": {"bool": {"must": {"a": "b"}}}}', None) def test_delete_by_query_no_must(self): with mock.patch.object(self.client, '_req') as rmock: with mock.patch.object(self.client, '_build_must') as func_mock: func_mock.return_value = {} self.client.delete_by_query() rmock.assert_called_once_with( self.client._sess.post, 'http://opensearch:9200/index_name/_delete_by_query', None, None) def test_retrieve_no_pagination(self): search_resp = { '_scroll_id': '000', 'hits': {'hits': ['one', 'two', 'three'], 'total': 12}, } scroll_resps = [{ '_scroll_id': str(i + 1) * 3, 'hits': {'hits': ['one', 'two', 'three']}, } for i in range(3)] scroll_resps.append({'_scroll_id': '444', 'hits': {'hits': []}}) self.client._scroll_ids = set() with mock.patch.object(self.client, 'search') as search_mock: with mock.patch.object(self.client, 'scroll') as scroll_mock: with mock.patch.object(self.client, 'close_scrolls') as close: search_mock.return_value = search_resp scroll_mock.side_effect = scroll_resps total, resp = self.client.retrieve( None, None, None, None, paginate=False) search_mock.assert_called_once() scroll_mock.assert_has_calls([ mock.call({ 'scroll_id': str(i) * 3, 'scroll': '60s', }) for i in range(4) ]) self.assertEqual(total, 12) self.assertEqual(resp, ['one', 'two', 'three'] * 4) self.assertSetEqual(self.client._scroll_ids, set(str(i) * 3 for i in range(5))) close.assert_called_once() self.client._scroll_ids = set() def test_retrieve_with_pagination(self): search_resp = { '_scroll_id': '000', 'hits': {'hits': ['one', 'two', 'three'], 'total': 12}, } scroll_resps = [{ '_scroll_id': str(i + 1) * 3, 'hits': {'hits': ['one', 'two', 'three']}, } for i in range(3)] scroll_resps.append({'_scroll_id': '444', 'hits': {'hits': []}}) self.client._scroll_ids = set() with mock.patch.object(self.client, 'search') as search_mock: with mock.patch.object(self.client, 'scroll') as scroll_mock: with mock.patch.object(self.client, 'close_scrolls') as close: search_mock.return_value = search_resp scroll_mock.side_effect = scroll_resps total, resp = self.client.retrieve( None, None, None, None, offset=2, limit=4, paginate=True) search_mock.assert_called_once() scroll_mock.assert_called_once_with({ 'scroll_id': '000', 'scroll': '60s', }) self.assertEqual(total, 12) self.assertEqual(resp, ['three', 'one', 'two', 'three']) self.assertSetEqual(self.client._scroll_ids, set(str(i) * 3 for i in range(2))) close.assert_called_once() self.client._scroll_ids = set() def _do_test_total(self, groupby, paginate): with mock.patch.object(self.client, 'search') as search_mock: if groupby: search_resps = [{ 'aggregations': { 'sum_and_price': { 'buckets': ['one', 'two', 'three'], 'after_key': str(i), } } } for i in range(3)] last_resp_aggs = search_resps[2]['aggregations'] last_resp_aggs['sum_and_price'].pop('after_key') last_resp_aggs['sum_and_price']['buckets'] = [] search_mock.side_effect = search_resps else: search_mock.return_value = { 'aggregations': ['one', 'two', 'three'], } resp = self.client.total(None, None, None, None, groupby, offset=2, limit=4, paginate=paginate) if not 
groupby: search_mock.assert_called_once() return resp def test_total_no_groupby_no_pagination(self): total, aggs = self._do_test_total(None, False) self.assertEqual(total, 1) self.assertEqual(aggs, [['one', 'two', 'three']]) def test_total_no_groupby_with_pagination(self): total, aggs = self._do_test_total(None, True) self.assertEqual(total, 1) self.assertEqual(aggs, [['one', 'two', 'three']]) def test_total_with_groupby_no_pagination(self): total, aggs = self._do_test_total(['x'], False) self.assertEqual(total, 6) self.assertEqual(aggs, ['one', 'two', 'three'] * 2) def test_total_with_groupby_with_pagination(self): total, aggs = self._do_test_total(['x'], True) self.assertEqual(total, 6) self.assertEqual(aggs, ['three', 'one', 'two', 'three']) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/tests/storage/v2/opensearch_utils.py0000664000175000017500000000705700000000000025272 0ustar00zuulzuul00000000000000# Copyright 2019 Objectif Libre # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # import copy import functools import itertools import requests from cloudkitty.storage.v2.opensearch import client class FakeOpenSearchClient(client.OpenSearchClient): def __init__(self, *args, **kwargs): kwargs["autocommit"] = False super(FakeOpenSearchClient, self).__init__(*args, **kwargs) for method in ('get_index', 'post_mapping'): setattr(self, method, self.__base_response) @staticmethod def __base_response(*args, **kwargs): r = requests.Response() r.status_code = 200 return r def commit(self): pass @staticmethod def __filter_func(begin, end, filters, mtypes, doc): type_filter = lambda doc: ( # noqa: E731 doc['type'] in mtypes if mtypes else True) time_filter = lambda doc: ( # noqa: E731 (doc['start'] >= begin if begin else True) and (doc['start'] < end if end else True)) def filter_(doc): return all((doc['groupby'].get(k) == v or (doc['metadata'].get(k) == v) for k, v in filters.items())) if filters else True return type_filter(doc) and time_filter(doc) and filter_(doc) def retrieve(self, begin, end, filters, metric_types, offset=0, limit=1000, paginate=True): filter_func = functools.partial( self.__filter_func, begin, end, filters, metric_types) output = list(filter(filter_func, self._docs))[offset:offset+limit] for doc in output: doc["start"] = doc["start"].isoformat() doc["end"] = doc["end"].isoformat() doc["_source"] = copy.deepcopy(doc) return len(output), output def total(self, begin, end, metric_types, filters, groupby, custom_fields=None, offset=0, limit=1000, paginate=True): filter_func = functools.partial( self.__filter_func, begin, end, filters, metric_types) docs = list(filter(filter_func, self._docs)) if not groupby: return 1, [{ 'sum_qty': {'value': sum(doc['qty'] for doc in docs)}, 'sum_price': {'value': sum(doc['price'] for doc in docs)}, 'begin': begin, 'end': end, }] output = [] key_func = lambda d: tuple( # noqa: E731 d['type'] if g == 'type' else d['groupby'][g] for g in groupby) docs.sort(key=key_func) for 
groups, values in itertools.groupby(docs, key_func): val_list = list(values) output.append({ 'begin': begin, 'end': end, 'sum_qty': {'value': sum(doc['qty'] for doc in val_list)}, 'sum_price': {'value': sum(doc['price'] for doc in val_list)}, 'key': dict(zip(groupby, groups)), }) return len(output), output[offset:offset+limit] def _req(self, method, url, data, params, deserialize=True): pass ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/tests/storage/v2/test_influxdb.py0000664000175000017500000004243700000000000024576 0ustar00zuulzuul00000000000000# Copyright 2019 Objectif Libre # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # import collections import copy from datetime import datetime from datetime import timedelta import unittest from unittest import mock from dateutil import tz from cloudkitty import dataframe from cloudkitty.storage.v2 import influx from cloudkitty.tests import TestCase from cloudkitty.utils import tz as tzutils class TestInfluxDBStorage(TestCase): def setUp(self): super(TestInfluxDBStorage, self).setUp() self.point = { 'type': 'amazing_type', 'unit': 'banana', 'qty': 42, 'price': 1.0, 'groupby': 'one|two', 'metadata': '1|2', 'one': '1', 'two': '2', '1': 'one', '2': 'two', 'time': datetime(2019, 1, 1, tzinfo=tz.tzutc()).isoformat(), } def test_point_to_dataframe_entry_valid_point(self): self.assertEqual( influx.InfluxStorage._point_to_dataframe_entry(self.point), dataframe.DataPoint( 'banana', 42, 1, {'one': '1', 'two': '2'}, {'1': 'one', '2': 'two'}, ), ) def test_point_to_dataframe_entry_invalid_groupby_metadata(self): point = copy.deepcopy(self.point) point['groupby'] = 'a' point['metadata'] = None self.assertEqual( influx.InfluxStorage._point_to_dataframe_entry(point), dataframe.DataPoint( 'banana', 42, 1, {'a': ''}, {}, ), ) def test_build_dataframes_differenciates_periods(self): points = [copy.deepcopy(self.point) for _ in range(3)] for idx, point in enumerate(points): point[influx.PERIOD_FIELD_NAME] = 100 * (idx + 1) dataframes = influx.InfluxStorage()._build_dataframes(points) self.assertEqual(len(dataframes), 3) for idx, frame in enumerate(dataframes): self.assertEqual( frame.start, datetime(2019, 1, 1, tzinfo=tz.tzutc())) delta = timedelta(seconds=(idx + 1) * 100) self.assertEqual(frame.end, datetime(2019, 1, 1, tzinfo=tz.tzutc()) + delta) typelist = list(frame.itertypes()) self.assertEqual(len(typelist), 1) type_, points = typelist[0] self.assertEqual(len(points), 1) self.assertEqual(type_, 'amazing_type') class FakeResultSet(object): def __init__(self, points=[], items=[]): self._points = points self._items = items def get_points(self): return self._points def items(self): return self._items class TestInfluxClient(unittest.TestCase): def setUp(self): self.period_begin = tzutils.local_to_utc( tzutils.get_month_start()).isoformat() self.period_end = tzutils.local_to_utc( tzutils.get_next_month()).isoformat() self.client = influx.InfluxClient() self._storage = influx.InfluxStorage() 
def test_get_filter_query(self): filters = collections.OrderedDict( (('str_filter', 'one'), ('float_filter', 2.0))) self.assertEqual( self.client._get_filter_query(filters), """ AND "str_filter"='one' AND "float_filter"=2.0""" ) def test_get_filter_query_no_filters(self): self.assertEqual(self.client._get_filter_query({}), '') def test_retrieve_format_with_pagination(self): self._storage._conn._conn.query = m = mock.MagicMock() m.return_value = (FakeResultSet(), FakeResultSet()) self._storage.retrieve() m.assert_called_once_with( "SELECT COUNT(groupby) FROM \"dataframes\"" " WHERE time >= '{0}'" " AND time < '{1}';" "SELECT * FROM \"dataframes\"" " WHERE time >= '{0}'" " AND time < '{1}'" " LIMIT 1000 OFFSET 0;".format( self.period_begin, self.period_end, )) def test_retrieve_format_with_types(self): self._storage._conn._conn.query = m = mock.MagicMock() m.return_value = (FakeResultSet(), FakeResultSet()) self._storage.retrieve(metric_types=['foo', 'bar']) m.assert_called_once_with( "SELECT COUNT(groupby) FROM \"dataframes\"" " WHERE time >= '{0}'" " AND time < '{1}'" " AND (\"type\"='foo' OR \"type\"='bar');" "SELECT * FROM \"dataframes\"" " WHERE time >= '{0}'" " AND time < '{1}'" " AND (\"type\"='foo' OR \"type\"='bar')" " LIMIT 1000 OFFSET 0;".format( self.period_begin, self.period_end, )) def test_delete_no_parameters(self): self._storage._conn._conn.query = m = mock.MagicMock() self._storage.delete() m.assert_called_once_with('DELETE FROM "dataframes";') def test_delete_begin_end(self): self._storage._conn._conn.query = m = mock.MagicMock() self._storage.delete(begin=datetime(2019, 1, 1), end=datetime(2019, 1, 2)) m.assert_called_once_with( """DELETE FROM "dataframes" WHERE time >= '2019-01-01T00:00:00'""" """ AND time < '2019-01-02T00:00:00';""") def test_delete_begin_end_filters(self): self._storage._conn._conn.query = m = mock.MagicMock() self._storage.delete( begin=datetime(2019, 1, 1), end=datetime(2019, 1, 2), filters={'project_id': 'foobar'}) m.assert_called_once_with( """DELETE FROM "dataframes" WHERE time >= '2019-01-01T00:00:00'""" """ AND time < '2019-01-02T00:00:00' AND "project_id"='foobar';""" ) def test_delete_end_filters(self): self._storage._conn._conn.query = m = mock.MagicMock() self._storage.delete(end=datetime(2019, 1, 2), filters={'project_id': 'foobar'}) m.assert_called_once_with( """DELETE FROM "dataframes" WHERE time < '2019-01-02T00:00:00' """ """AND "project_id"='foobar';""") def test_delete_begin_filters(self): self._storage._conn._conn.query = m = mock.MagicMock() self._storage.delete(begin=datetime(2019, 1, 2), filters={'project_id': 'foobar'}) m.assert_called_once_with( """DELETE FROM "dataframes" WHERE time >= '2019-01-02T00:00:00'""" """ AND "project_id"='foobar';""") def test_delete_begin(self): self._storage._conn._conn.query = m = mock.MagicMock() self._storage.delete(begin=datetime(2019, 1, 2)) m.assert_called_once_with("""DELETE FROM "dataframes" WHERE """ """time >= '2019-01-02T00:00:00';""") def test_delete_end(self): self._storage._conn._conn.query = m = mock.MagicMock() self._storage.delete(end=datetime(2019, 1, 2)) m.assert_called_once_with("""DELETE FROM "dataframes" WHERE """ """time < '2019-01-02T00:00:00';""") def test_process_total(self): begin = datetime(2019, 1, 2, 10) end = datetime(2019, 1, 2, 11) groupby = ['valA', 'time'] points_1 = [ { 'qty': 42, 'price': 1.0, 'time': begin.isoformat() } ] series_groupby_1 = { 'valA': '1' } points_2 = [ { 'qty': 12, 'price': 2.0, 'time': begin.isoformat() } ] series_groupby_2 = { 'valA': '2' } 
points_3 = [ { 'qty': None, 'price': None, 'time': None } ] series_groupby_3 = { 'valA': None } series_name = 'dataframes' items = [((series_name, series_groupby_1), points_1), ((series_name, series_groupby_2), points_2), ((series_name, series_groupby_3), points_3)] total = FakeResultSet(items=items) result = self.client.process_total(total=total, begin=begin, end=end, groupby=groupby) expected = [{'begin': tzutils.utc_to_local(begin), 'end': tzutils.utc_to_local(end), 'qty': 42, 'price': 1.0, 'valA': '1'}, {'begin': tzutils.utc_to_local(begin), 'end': tzutils.utc_to_local(end), 'qty': 12, 'price': 2.0, 'valA': '2'} ] self.assertEqual(expected, result) class TestInfluxClientV2(unittest.TestCase): @mock.patch('cloudkitty.storage.v2.influx.InfluxDBClient') def setUp(self, client_mock): self.period_begin = tzutils.local_to_utc( tzutils.get_month_start()) self.period_end = tzutils.local_to_utc( tzutils.get_next_month()) self.client = influx.InfluxClientV2() @mock.patch('cloudkitty.storage.v2.influx.requests') def test_query(self, mock_request): static_vals = ['', 'result', 'table', '_start', '_value'] custom_fields = 'last(f1) AS f1, last(f2) AS f2, last(f3) AS f3' groups = ['g1', 'g2', 'g3'] data = [ static_vals + groups, ['', 'f1', 0, 1, 1, 1, 2, 3], ['', 'f2', 0, 1, 2, 1, 2, 3], ['', 'f3', 0, 1, 3, 1, 2, 3], static_vals + groups, ['', 'f1', 0, 1, 3, 3, 1, 2], ['', 'f2', 0, 1, 1, 3, 1, 2], ['', 'f3', 0, 1, 2, 3, 1, 2], static_vals + groups, ['', 'f1', 0, 1, 2, 2, 3, 1], ['', 'f2', 0, 1, 3, 2, 3, 1], ['', 'f3', 0, 1, 1, 2, 3, 1] ] expected_value = [ {'f1': 1.0, 'f2': 2.0, 'f3': 3.0, 'begin': self.period_begin, 'end': self.period_end, 'g1': '1', 'g2': '2', 'g3': '3'}, {'f1': 3.0, 'f2': 1.0, 'f3': 2.0, 'begin': self.period_begin, 'end': self.period_end, 'g1': '3', 'g2': '1', 'g3': '2'}, {'f1': 2.0, 'f2': 3.0, 'f3': 1.0, 'begin': self.period_begin, 'end': self.period_end, 'g1': '2', 'g2': '3', 'g3': '1'} ] data_csv = '\n'.join([','.join(map(str, d)) for d in data]) mock_request.post.return_value = mock.Mock(text=data_csv) response = self.client.get_total( None, self.period_begin, self.period_end, custom_fields, filters={}, groupby=groups) result = self.client.process_total( response, self.period_begin, self.period_end, groups, custom_fields, {}) self.assertEqual(result, expected_value) def test_query_build(self): custom_fields = 'last(field1) AS F1, sum(field2) AS F2' groupby = ['group1', 'group2', 'group3'] filters = { 'filter1': '10', 'filter2': 'filter2_filter' } beg = self.period_begin.isoformat() end = self.period_end.isoformat() expected = ('\n' ' from(bucket:"cloudkitty")\n' f' |> range(start: {beg}, stop: {end})\n' ' |> filter(fn: (r) => r["_measurement"] == ' '"dataframes")\n' ' |> filter(fn: (r) => r["_field"] == "field1"' ' and r.filter1==10 and r.filter2=="filter2_filter" )\n' ' |> group(columns: ["group1","group2",' '"group3"])\n' ' |> last()\n' ' |> keep(columns: ["group1", "group2",' ' "group3", "_field", "_value", "_start", "_stop"])\n' ' |> set(key: "_field", value: "F1")\n' ' |> yield(name: "F1")\n' ' \n' ' from(bucket:"cloudkitty")\n' f' |> range(start: {beg}, stop: {end})\n' ' |> filter(fn: (r) => r["_measurement"] == ' '"dataframes")\n' ' |> filter(fn: (r) => r["_field"] == "field2"' ' and r.filter1==10 and r.filter2=="filter2_filter" )\n' ' |> group(columns: ["group1","group2",' '"group3"])\n' ' |> sum()\n' ' |> keep(columns: ["group1", "group2", ' '"group3", "_field", "_value", "_start", "_stop"])\n' ' |> set(key: "_field", value: "F2")\n' ' |> yield(name: "F2")\n' ' ') 
query = self.client.get_query(begin=self.period_begin, end=self.period_end, custom_fields=custom_fields, filters=filters, groupby=groupby) self.assertEqual(query, expected) def test_query_build_no_custom_fields(self): custom_fields = None groupby = ['group1', 'group2', 'group3'] filters = { 'filter1': '10', 'filter2': 'filter2_filter' } beg = self.period_begin.isoformat() end = self.period_end.isoformat() self.maxDiff = None expected = ('\n' ' from(bucket:"cloudkitty")\n' f' |> range(start: {beg}, stop: {end})\n' ' |> filter(fn: (r) => r["_measurement"] == ' '"dataframes")\n' ' |> filter(fn: (r) => r["_field"] == "price"' ' and r.filter1==10 and r.filter2=="filter2_filter" )\n' ' |> group(columns: ["group1","group2",' '"group3"])\n' ' |> sum()\n' ' |> keep(columns: ["group1", "group2",' ' "group3", "_field", "_value", "_start", "_stop"])\n' ' |> set(key: "_field", value: "price")\n' ' |> yield(name: "price")\n' ' \n' ' from(bucket:"cloudkitty")\n' f' |> range(start: {beg}, stop: {end})\n' ' |> filter(fn: (r) => r["_measurement"] == ' '"dataframes")\n' ' |> filter(fn: (r) => r["_field"] == "qty"' ' and r.filter1==10 and r.filter2=="filter2_filter" )\n' ' |> group(columns: ["group1","group2",' '"group3"])\n' ' |> sum()\n' ' |> keep(columns: ["group1", "group2", ' '"group3", "_field", "_value", "_start", "_stop"])\n' ' |> set(key: "_field", value: "qty")\n' ' |> yield(name: "qty")\n' ' ') query = self.client.get_query(begin=self.period_begin, end=self.period_end, custom_fields=custom_fields, filters=filters, groupby=groupby) self.assertEqual(query, expected) def test_query_build_all_custom_fields(self): custom_fields = '*' groupby = ['group1', 'group2', 'group3'] filters = { 'filter1': '10', 'filter2': 'filter2_filter' } beg = self.period_begin.isoformat() end = self.period_end.isoformat() expected = (f''' from(bucket:"cloudkitty") |> range(start: {beg}, stop: {end}) |> filter(fn: (r) => r["_measurement"] == "dataframes") |> filter(fn: (r) => r.filter1==10 and r.filter2=="filter 2_filter") |> group(columns: ["group1","group2","group3"]) |> drop(columns: ["_time"]) |> yield(name: "result")'''.replace( ' ', '').replace('\n', '').replace('\t', '')) query = self.client.get_query(begin=self.period_begin, end=self.period_end, custom_fields=custom_fields, filters=filters, groupby=groupby).replace( ' ', '').replace('\n', '').replace('\t', '') self.assertEqual(query, expected) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/tests/storage/v2/test_storage_unit.py0000664000175000017500000003717000000000000025464 0ustar00zuulzuul00000000000000# Copyright 2018 Objectif Libre # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
# import datetime from unittest import mock import testscenarios from werkzeug import exceptions as http_exceptions from cloudkitty import storage from cloudkitty.tests import samples from cloudkitty.tests.storage.v2 import es_utils from cloudkitty.tests.storage.v2 import influx_utils from cloudkitty.tests.storage.v2 import opensearch_utils from cloudkitty.tests import TestCase from cloudkitty.tests import utils as test_utils from cloudkitty.utils import tz as tzutils _ES_CLIENT_PATH = ('cloudkitty.storage.v2.elasticsearch' '.client.ElasticsearchClient') _INFLUX_CLIENT_PATH = 'cloudkitty.storage.v2.influx.InfluxClient' _OS_CLIENT_PATH = ('cloudkitty.storage.v2.opensearch' '.client.OpenSearchClient') class StorageUnitTest(TestCase): storage_scenarios = [ ('influxdb', dict(storage_backend='influxdb')), ('elasticsearch', dict(storage_backend='elasticsearch')), ('opensearch', dict(storage_backend='opensearch'))] @classmethod def generate_scenarios(cls): cls.scenarios = testscenarios.multiply_scenarios( cls.scenarios, cls.storage_scenarios) @mock.patch(_ES_CLIENT_PATH, new=es_utils.FakeElasticsearchClient) @mock.patch(_INFLUX_CLIENT_PATH, new=influx_utils.FakeInfluxClient) @mock.patch(_OS_CLIENT_PATH, new=opensearch_utils.FakeOpenSearchClient) @mock.patch('cloudkitty.utils.load_conf', new=test_utils.load_conf) def setUp(self): super(StorageUnitTest, self).setUp() self._project_id = samples.TENANT self._other_project_id = samples.OTHER_TENANT self.conf.set_override('backend', self.storage_backend, 'storage') self.conf.set_override('version', '2', 'storage') self.storage = storage.get_storage(conf=test_utils.load_conf()) self.storage.init() self.data = [] self.init_data() def init_data(self): project_ids = [self._project_id, self._other_project_id] start_base = tzutils.utc_to_local(datetime.datetime(2018, 1, 1)) for i in range(3): start_delta = datetime.timedelta(seconds=3600 * i) end_delta = start_delta + datetime.timedelta(seconds=3600) start = tzutils.add_delta(start_base, start_delta) end = tzutils.add_delta(start_base, end_delta) data = test_utils.generate_v2_storage_data( project_ids=project_ids, start=start, end=end) self.data.append(data) self.storage.push([data]) @staticmethod def _expected_total_qty_len(data, project_id=None, types=None): total = 0 qty = 0 length = 0 for dataframe in data: for mtype, points in dataframe.itertypes(): if types is not None and mtype not in types: continue for point in points: if project_id is None or \ project_id == point.groupby['project_id']: total += point.price qty += point.qty length += 1 return round(float(total), 5), round(float(qty), 5), length def _compare_get_total_result_with_expected(self, expected_qty, expected_total, expected_total_len, total): self.assertEqual(len(total['results']), expected_total_len) self.assertEqual(total['total'], expected_total_len) returned_total = round( sum(r.get('rate', r.get('price')) for r in total['results']), 5) self.assertLessEqual( abs(expected_total - float(returned_total)), 0.0001) returned_qty = round(sum(r['qty'] for r in total['results']), 5) self.assertLessEqual( abs(expected_qty - float(returned_qty)), 0.0001) def test_get_total_all_scopes_all_periods(self): expected_total, expected_qty, _ = self._expected_total_qty_len( self.data) begin = datetime.datetime(2018, 1, 1) end = datetime.datetime(2018, 1, 1, 4) self._compare_get_total_result_with_expected( expected_qty, expected_total, 1, self.storage.total(begin=begin, end=end)) def test_get_total_one_scope_all_periods(self): expected_total, expected_qty, 
_ = self._expected_total_qty_len( self.data, self._project_id) begin = datetime.datetime(2018, 1, 1) end = datetime.datetime(2018, 1, 1, 4) filters = {'project_id': self._project_id} self._compare_get_total_result_with_expected( expected_qty, expected_total, 1, self.storage.total(begin=begin, end=end, filters=filters), ) def test_get_total_all_scopes_one_period(self): expected_total, expected_qty, _ = self._expected_total_qty_len( [self.data[0]]) begin = datetime.datetime(2018, 1, 1) end = datetime.datetime(2018, 1, 1, 1) self._compare_get_total_result_with_expected( expected_qty, expected_total, 1, self.storage.total(begin=begin, end=end)) def test_get_total_one_scope_one_period(self): expected_total, expected_qty, _ = self._expected_total_qty_len( [self.data[0]], self._project_id) begin = datetime.datetime(2018, 1, 1) end = datetime.datetime(2018, 1, 1, 1) filters = {'project_id': self._project_id} self._compare_get_total_result_with_expected( expected_qty, expected_total, 1, self.storage.total(begin=begin, end=end, filters=filters), ) def test_get_total_all_scopes_all_periods_groupby_project_id(self): expected_total_first, expected_qty_first, _ = \ self._expected_total_qty_len(self.data, self._project_id) expected_total_second, expected_qty_second, _ = \ self._expected_total_qty_len(self.data, self._other_project_id) begin = datetime.datetime(2018, 1, 1) end = datetime.datetime(2018, 1, 1, 4) total = self.storage.total(begin=begin, end=end, groupby=['project_id']) self.assertEqual(len(total['results']), 2) self.assertEqual(total['total'], 2) for t in total['results']: self.assertIn('project_id', t.keys()) total['results'].sort(key=lambda x: x['project_id'], reverse=True) first_element = total['results'][0] self.assertLessEqual( abs(round( float(first_element.get('rate', first_element.get('price'))) - expected_total_first, 5)), 0.0001, ) second_element = total['results'][1] self.assertLessEqual( abs(round( float(second_element.get('rate', second_element.get('price'))) - expected_total_second, 5)), 0.0001, ) self.assertLessEqual( abs(round(float(total['results'][0]['qty']) - expected_qty_first, 5)), 0.0001, ) self.assertLessEqual( abs(round(float(total['results'][1]['qty']) - expected_qty_second, 5)), 0.0001, ) def test_get_total_all_scopes_one_period_groupby_project_id(self): expected_total_first, expected_qty_first, _ = \ self._expected_total_qty_len([self.data[0]], self._project_id) expected_total_second, expected_qty_second, _ = \ self._expected_total_qty_len([self.data[0]], self._other_project_id) begin = datetime.datetime(2018, 1, 1) end = datetime.datetime(2018, 1, 1, 1) total = self.storage.total(begin=begin, end=end, groupby=['project_id']) self.assertEqual(len(total), 2) for t in total['results']: self.assertIn('project_id', t.keys()) total['results'].sort(key=lambda x: x['project_id'], reverse=True) first_entry = total['results'][0] second_entry = total['results'][1] self.assertLessEqual( abs(round(float(first_entry.get('rate', first_entry.get('price'))) - expected_total_first, 5)), 0.0001, ) self.assertLessEqual( abs(round( float(second_entry.get('rate', second_entry.get('price'))) - expected_total_second, 5)), 0.0001, ) self.assertLessEqual( abs(round(float(total['results'][0]['qty']) - expected_qty_first, 5)), 0.0001, ) self.assertLessEqual( abs(round(float(total['results'][1]['qty']) - expected_qty_second, 5)), 0.0001, ) def test_get_total_all_scopes_all_periods_groupby_type_paginate(self): expected_total, expected_qty, _ = \ self._expected_total_qty_len(self.data) begin = 
datetime.datetime(2018, 1, 1) end = datetime.datetime(2018, 1, 1, 4) total = {'total': 0, 'results': []} for offset in range(0, 7, 2): chunk = self.storage.total( begin=begin, end=end, offset=offset, limit=2, groupby=['type']) # there are seven metric types self.assertEqual(chunk['total'], 7) # last chunk, shorter if offset == 6: self.assertEqual(len(chunk['results']), 1) else: self.assertEqual(len(chunk['results']), 2) total['results'] += chunk['results'] total['total'] += len(chunk['results']) unpaginated_total = self.storage.total( begin=begin, end=end, groupby=['type']) self.assertEqual(total, unpaginated_total) self._compare_get_total_result_with_expected( expected_qty, expected_total, 7, total) def test_retrieve_all_scopes_all_types(self): expected_total, expected_qty, expected_length = \ self._expected_total_qty_len(self.data) begin = datetime.datetime(2018, 1, 1) end = datetime.datetime(2018, 1, 1, 4) frames = self.storage.retrieve(begin=begin, end=end) self.assertEqual(frames['total'], expected_length) retrieved_length = sum(len(list(frame.iterpoints())) for frame in frames['dataframes']) self.assertEqual(expected_length, retrieved_length) def test_retrieve_all_scopes_one_type(self): expected_total, expected_qty, expected_length = \ self._expected_total_qty_len(self.data, types=['image.size']) begin = datetime.datetime(2018, 1, 1) end = datetime.datetime(2018, 1, 1, 4) frames = self.storage.retrieve(begin=begin, end=end, metric_types=['image.size']) self.assertEqual(frames['total'], expected_length) retrieved_length = sum(len(list(frame.iterpoints())) for frame in frames['dataframes']) self.assertEqual(expected_length, retrieved_length) def test_retrieve_one_scope_two_types_one_period(self): expected_total, expected_qty, expected_length = \ self._expected_total_qty_len([self.data[0]], self._project_id, types=['image.size', 'instance']) begin = datetime.datetime(2018, 1, 1) end = datetime.datetime(2018, 1, 1, 1) filters = {'project_id': self._project_id} frames = self.storage.retrieve(begin=begin, end=end, filters=filters, metric_types=['image.size', 'instance']) self.assertEqual(frames['total'], expected_length) retrieved_length = sum(len(list(frame.iterpoints())) for frame in frames['dataframes']) self.assertEqual(expected_length, retrieved_length) def test_parse_groupby_syntax_to_groupby_elements_no_time_groupby(self): groupby = ["something"] out = self.storage.parse_groupby_syntax_to_groupby_elements(groupby) self.assertEqual(groupby, out) def test_parse_groupby_syntax_to_groupby_elements_time_groupby(self): groupby = ["something", "time"] out = self.storage.parse_groupby_syntax_to_groupby_elements(groupby) self.assertEqual(groupby, out) def test_parse_groupby_syntax_to_groupby_elements_odd_time(self): groupby = ["something", "time-odd-time-element"] with mock.patch.object(storage.v2.LOG, 'warning') as log_mock: out = self.storage.parse_groupby_syntax_to_groupby_elements( groupby) log_mock.assert_has_calls([ mock.call("The groupby [%s] command is not expected for " "storage backend [%s]. Therefore, we leave it as " "is.", "time-odd-time-element", self.storage)]) self.assertEqual(groupby, out) def test_parse_groupby_syntax_to_groupby_elements_wrong_time_frame(self): groupby = ["something", "time-u"] expected_message = r"400 Bad Request: Invalid groupby time option. " \ r"There is no groupby processing for \[time-u\]." 
self.assertRaisesRegex( http_exceptions.BadRequest, expected_message, self.storage.parse_groupby_syntax_to_groupby_elements, groupby) def test_parse_groupby_syntax_to_groupby_elements_all_time_options(self): groupby = ["something", "time", "time-d", "time-w", "time-m", "time-y"] expected_log_calls = [] for k, v in storage.v2.BaseStorage.TIME_COMMANDS_MAP.items(): expected_log_calls.append( mock.call("Replacing API groupby time command [%s] with " "internal groupby command [%s].", "time-%s" % k, v)) with mock.patch.object(storage.v2.LOG, 'debug') as log_debug_mock: out = self.storage.parse_groupby_syntax_to_groupby_elements( groupby) log_debug_mock.assert_has_calls(expected_log_calls) self.assertEqual(["something", "time", "day_of_the_year", "week_of_the_year", "month", "year"], out) def test_parse_groupby_syntax_to_groupby_elements_no_groupby(self): with mock.patch.object(storage.v2.LOG, 'debug') as log_debug_mock: out = self.storage.parse_groupby_syntax_to_groupby_elements(None) log_debug_mock.assert_has_calls([ mock.call("No groupby to process syntax.")]) self.assertIsNone(out) StorageUnitTest.generate_scenarios() ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/tests/test_config.py0000664000175000017500000000144400000000000022226 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright 2015 Objectif Libre # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # from cloudkitty.common import config as ck_config from cloudkitty import tests class ConfigTest(tests.TestCase): def test_config(self): ck_config.list_opts() ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/tests/test_dataframe.py0000664000175000017500000002453000000000000022706 0ustar00zuulzuul00000000000000# Copyright 2019 Objectif Libre # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
# import copy import datetime import decimal import unittest from dateutil import tz from werkzeug import datastructures from cloudkitty import dataframe from cloudkitty.utils import json class TestDataPoint(unittest.TestCase): default_params = { 'qty': 0, 'price': 0, 'unit': None, 'groupby': {}, 'metadata': {}, } def test_create_empty_datapoint(self): point = dataframe.DataPoint(**self.default_params) self.assertEqual(point.qty, decimal.Decimal(0)) self.assertEqual(point.price, decimal.Decimal(0)) self.assertEqual(point.unit, "undefined") self.assertEqual(point.groupby, {}) def test_readonly_attrs(self): point = dataframe.DataPoint(**self.default_params) for attr in ("qty", "price", "unit"): self.assertRaises(AttributeError, setattr, point, attr, 'x') def test_properties(self): params = copy.deepcopy(self.default_params) groupby = {"group_one": "one", "group_two": "two"} metadata = {"meta_one": "one", "meta_two": "two"} params.update({'groupby': groupby, 'metadata': metadata}) point = dataframe.DataPoint(**params) self.assertEqual(point.groupby, groupby) self.assertEqual(point.metadata, metadata) def test_as_dict_mutable_standard(self): self.assertEqual( dataframe.DataPoint( **self.default_params).as_dict(mutable=True), { "vol": {"unit": "undefined", "qty": decimal.Decimal(0)}, "rating": {"price": decimal.Decimal(0)}, "groupby": {}, "metadata": {}, } ) def test_as_dict_mutable_legacy(self): self.assertEqual( dataframe.DataPoint(**self.default_params).as_dict( legacy=True, mutable=True), { "vol": {"unit": "undefined", "qty": decimal.Decimal(0)}, "rating": {"price": decimal.Decimal(0)}, "desc": {}, } ) def test_as_dict_immutable(self): point_dict = dataframe.DataPoint(**self.default_params).as_dict() self.assertIsInstance(point_dict, datastructures.ImmutableDict) self.assertEqual(dict(point_dict), { "vol": {"unit": "undefined", "qty": decimal.Decimal(0)}, "rating": {"price": decimal.Decimal(0)}, "groupby": {}, "metadata": {}, }) def test_json_standard(self): self.assertEqual( json.loads(dataframe.DataPoint(**self.default_params).json()), { "vol": {"unit": "undefined", "qty": decimal.Decimal(0)}, "rating": {"price": decimal.Decimal(0)}, "groupby": {}, "metadata": {}, } ) def test_json_legacy(self): self.assertEqual( json.loads(dataframe.DataPoint( **self.default_params).json(legacy=True)), { "vol": {"unit": "undefined", "qty": decimal.Decimal(0)}, "rating": {"price": decimal.Decimal(0)}, "desc": {}, } ) def test_from_dict_valid_dict(self): self.assertEqual( dataframe.DataPoint( unit="amazing_unit", qty=3, price=0, groupby={"g_one": "one", "g_two": "two"}, metadata={"m_one": "one", "m_two": "two"}, ).as_dict(), dataframe.DataPoint.from_dict({ "vol": {"unit": "amazing_unit", "qty": 3}, "groupby": {"g_one": "one", "g_two": "two"}, "metadata": {"m_one": "one", "m_two": "two"}, }).as_dict(), ) def test_from_dict_invalid(self): invalid = { "vol": {}, "desc": {"a": "b"}, } self.assertRaises(ValueError, dataframe.DataPoint.from_dict, invalid) def test_set_price(self): point = dataframe.DataPoint(**self.default_params) self.assertEqual(point.price, decimal.Decimal(0)) self.assertEqual(point.set_price(42).price, decimal.Decimal(42)) self.assertEqual(point.set_price(1337).price, decimal.Decimal(1337)) def test_desc(self): params = copy.deepcopy(self.default_params) params['groupby'] = {'group_one': 'one', 'group_two': 'two'} params['metadata'] = {'meta_one': 'one', 'meta_two': 'two'} point = dataframe.DataPoint(**params) self.assertEqual(point.desc, { 'group_one': 'one', 'group_two': 'two', 'meta_one': 
'one', 'meta_two': 'two', }) class TestDataFrame(unittest.TestCase): def test_dataframe_add_points(self): start = datetime.datetime(2019, 3, 4, 1, tzinfo=tz.tzutc()) end = datetime.datetime(2019, 3, 4, 2, tzinfo=tz.tzutc()) df = dataframe.DataFrame(start=start, end=end) a_points = [dataframe.DataPoint(**TestDataPoint.default_params) for _ in range(2)] b_points = [dataframe.DataPoint(**TestDataPoint.default_params) for _ in range(4)] df.add_point(a_points[0], 'service_a') df.add_points(a_points[1:], 'service_a') df.add_points(b_points[:2], 'service_b') df.add_points(b_points[2:3], 'service_b') df.add_point(b_points[3], 'service_b') self.assertEqual(dict(df.as_dict()), { 'period': {'begin': start, 'end': end}, 'usage': { 'service_a': [ dataframe.DataPoint( **TestDataPoint.default_params).as_dict() for _ in range(2)], 'service_b': [ dataframe.DataPoint( **TestDataPoint.default_params).as_dict() for _ in range(4)], } }) def test_properties(self): start = datetime.datetime(2019, 6, 1, tzinfo=tz.tzutc()) end = datetime.datetime(2019, 6, 1, 1, tzinfo=tz.tzutc()) df = dataframe.DataFrame(start=start, end=end) self.assertEqual(df.start, start) self.assertEqual(df.end, end) def test_json(self): start = datetime.datetime(2019, 3, 4, 1, tzinfo=tz.tzutc()) end = datetime.datetime(2019, 3, 4, 2, tzinfo=tz.tzutc()) df = dataframe.DataFrame(start=start, end=end) a_points = [dataframe.DataPoint(**TestDataPoint.default_params) for _ in range(2)] b_points = [dataframe.DataPoint(**TestDataPoint.default_params) for _ in range(4)] df.add_points(a_points, 'service_a') df.add_points(b_points, 'service_b') self.maxDiff = None self.assertEqual(json.loads(df.json()), json.loads(json.dumps({ 'period': {'begin': start.isoformat(), 'end': end.isoformat()}, 'usage': { 'service_a': [ dataframe.DataPoint( **TestDataPoint.default_params).as_dict() for _ in range(2)], 'service_b': [ dataframe.DataPoint( **TestDataPoint.default_params).as_dict() for _ in range(4)], } }))) def test_from_dict_valid_dict(self): start = datetime.datetime(2019, 1, 2, 12, tzinfo=tz.tzutc()) end = datetime.datetime(2019, 1, 2, 13, tzinfo=tz.tzutc()) point = dataframe.DataPoint( 'unit', 0, 0, {'g_one': 'one'}, {'m_two': 'two'}) usage = {'metric_x': [point]} dict_usage = {'metric_x': [point.as_dict(mutable=True)]} self.assertEqual( dataframe.DataFrame(start, end, usage).as_dict(), dataframe.DataFrame.from_dict({ 'period': {'begin': start, 'end': end}, 'usage': dict_usage, }).as_dict(), ) def test_from_dict_valid_dict_date_as_str(self): start = datetime.datetime(2019, 1, 2, 12, tzinfo=tz.tzutc()) end = datetime.datetime(2019, 1, 2, 13, tzinfo=tz.tzutc()) point = dataframe.DataPoint( 'unit', 0, 0, {'g_one': 'one'}, {'m_two': 'two'}) usage = {'metric_x': [point]} dict_usage = {'metric_x': [point.as_dict(mutable=True)]} self.assertEqual( dataframe.DataFrame(start, end, usage).as_dict(), dataframe.DataFrame.from_dict({ 'period': {'begin': start.isoformat(), 'end': end.isoformat()}, 'usage': dict_usage, }).as_dict(), ) def test_from_dict_invalid_dict(self): self.assertRaises( ValueError, dataframe.DataFrame.from_dict, {'usage': None}) def test_repr(self): start = datetime.datetime(2019, 3, 4, 1, tzinfo=tz.tzutc()) end = datetime.datetime(2019, 3, 4, 2, tzinfo=tz.tzutc()) df = dataframe.DataFrame(start=start, end=end) points = [dataframe.DataPoint(**TestDataPoint.default_params) for _ in range(4)] df.add_points(points, 'metric_x') self.assertEqual(str(df), "DataFrame(metrics=[metric_x])") df.add_points(points, 'metric_y') self.assertEqual(str(df), 
"DataFrame(metrics=[metric_x,metric_y])") def test_iterpoints(self): start = datetime.datetime(2019, 3, 4, 1, tzinfo=tz.tzutc()) end = datetime.datetime(2019, 3, 4, 2, tzinfo=tz.tzutc()) df = dataframe.DataFrame(start=start, end=end) points = [dataframe.DataPoint(**TestDataPoint.default_params) for _ in range(4)] df.add_points(points, 'metric_x') expected = [ ('metric_x', dataframe.DataPoint(**TestDataPoint.default_params)) for _ in range(4)] self.assertEqual(list(df.iterpoints()), expected) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/tests/test_hacking.py0000664000175000017500000003156600000000000022375 0ustar00zuulzuul00000000000000# Copyright 2016 GohighSec # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import textwrap from unittest import mock import ddt import pycodestyle from cloudkitty.hacking import checks from cloudkitty import tests @ddt.ddt class HackingTestCase(tests.TestCase): """Hacking test cases This class tests the hacking checks in cloudkitty.hacking.checks by passing strings to the check methods like the pep8/flake8 parser would. The parser loops over each line in the file and then passes the parameters to the check method. The parameter names in the check method dictate what type of object is passed to the check method. The parameter types are:: logical_line: A processed line with the following modifications: - Multi-line statements converted to a single line. - Stripped left and right. - Contents of strings replaced with "xxx" of same length. - Comments removed. physical_line: Raw line of text from the input file. lines: a list of the raw lines from the input file tokens: the tokens that contribute to this logical line line_number: line number in the input file total_lines: number of lines in the input file blank_lines: blank lines before this one indent_char: indentation character in this file (" " or "\t") indent_level: indentation (with tabs expanded to multiples of 8) previous_indent_level: indentation on previous line previous_logical: previous logical line filename: Path of the file being run through pep8 When running a test on a check method the return will be False/None if there is no violation in the sample input. If there is an error a tuple is returned with a position in the line, and a message. So to check the result just assertTrue if the check is expected to fail and assertFalse if it should pass. 
""" def test_no_log_translations(self): for log in checks._all_log_levels: bad = 'LOG.%s(_("Bad"))' % log self.assertEqual(1, len(list(checks.no_translate_logs(bad, 'f')))) # Catch abuses when used with a variable and not a literal bad = 'LOG.%s(_(msg))' % log self.assertEqual(1, len(list(checks.no_translate_logs(bad, 'f')))) def test_check_explicit_underscore_import(self): self.assertEqual(1, len(list(checks.check_explicit_underscore_import( "LOG.info(_('My info message'))", "cloudkitty/tests/other_files.py")))) self.assertEqual(1, len(list(checks.check_explicit_underscore_import( "msg = _('My message')", "cloudkitty/tests/other_files.py")))) self.assertEqual(0, len(list(checks.check_explicit_underscore_import( "from cloudkitty.i18n import _", "cloudkitty/tests/other_files.py")))) self.assertEqual(0, len(list(checks.check_explicit_underscore_import( "LOG.info(_('My info message'))", "cloudkitty/tests/other_files.py")))) self.assertEqual(0, len(list(checks.check_explicit_underscore_import( "msg = _('My message')", "cloudkitty/tests/other_files.py")))) self.assertEqual(0, len(list(checks.check_explicit_underscore_import( "from cloudkitty.i18n import _", "cloudkitty/tests/other_files2.py")))) self.assertEqual(0, len(list(checks.check_explicit_underscore_import( "msg = _('My message')", "cloudkitty/tests/other_files2.py")))) self.assertEqual(0, len(list(checks.check_explicit_underscore_import( "_ = translations.ugettext", "cloudkitty/tests/other_files3.py")))) self.assertEqual(0, len(list(checks.check_explicit_underscore_import( "msg = _('My message')", "cloudkitty/tests/other_files3.py")))) # Complete code coverage by falling through all checks self.assertEqual(0, len(list(checks.check_explicit_underscore_import( "LOG.info('My info message')", "cloudkitty.tests.unit/other_files4.py")))) self.assertEqual(1, len(list(checks.check_explicit_underscore_import( "msg = _('My message')", "cloudkitty.tests.unit/other_files5.py")))) # We are patching pep8 so that only the check under test is actually # installed. 
@mock.patch('pycodestyle._checks', {'physical_line': {}, 'logical_line': {}, 'tree': {}}) def _run_check(self, code, checker, filename=None): pycodestyle.register_check(checker) lines = textwrap.dedent(code).strip().splitlines(True) checker = pycodestyle.Checker(filename=filename, lines=lines) checker.check_all() checker.report._deferred_print.sort() return checker.report._deferred_print def _assert_has_errors(self, code, checker, expected_errors=None, filename=None): actual_errors = [e[:3] for e in self._run_check(code, checker, filename)] self.assertEqual(expected_errors or [], actual_errors) def _assert_has_no_errors(self, code, checker, filename=None): self._assert_has_errors(code, checker, filename=filename) def test_logging_format_no_tuple_arguments(self): checker = checks.CheckLoggingFormatArgs code = """ import logging LOG = logging.getLogger() LOG.info("Message without a second argument.") LOG.critical("Message with %s arguments.", 'two') LOG.debug("Volume %s caught fire and is at %d degrees C and" " climbing.", 'volume1', 500) """ self._assert_has_no_errors(code, checker) @ddt.data(*checks.CheckLoggingFormatArgs.LOG_METHODS) def test_logging_with_tuple_argument(self, log_method): checker = checks.CheckLoggingFormatArgs code = """ import logging LOG = logging.getLogger() LOG.{0}("Volume %s caught fire and is at %d degrees C and " "climbing.", ('volume1', 500)) """ self._assert_has_errors(code.format(log_method), checker, expected_errors=[(4, mock.ANY, 'C310')]) def test_str_on_exception(self): checker = checks.CheckForStrUnicodeExc code = """ def f(a, b): try: p = str(a) + str(b) except ValueError as e: p = str(e) return p """ errors = [(5, mock.ANY, 'C314')] self._assert_has_errors(code, checker, expected_errors=errors) def test_no_str_unicode_on_exception(self): checker = checks.CheckForStrUnicodeExc code = """ def f(a, b): try: p = unicode(a) + str(b) except ValueError as e: p = e return p """ self._assert_has_no_errors(code, checker) def test_unicode_on_exception(self): checker = checks.CheckForStrUnicodeExc code = """ def f(a, b): try: p = str(a) + str(b) except ValueError as e: p = unicode(e) return p """ errors = [(5, mock.ANY, 'C314')] self._assert_has_errors(code, checker, expected_errors=errors) def test_str_on_multiple_exceptions(self): checker = checks.CheckForStrUnicodeExc code = """ def f(a, b): try: p = str(a) + str(b) except ValueError as e: try: p = unicode(a) + unicode(b) except ValueError as ve: p = str(e) + str(ve) p = e return p """ errors = [(8, mock.ANY, 'C314'), (8, mock.ANY, 'C314')] self._assert_has_errors(code, checker, expected_errors=errors) def test_str_unicode_on_multiple_exceptions(self): checker = checks.CheckForStrUnicodeExc code = """ def f(a, b): try: p = str(a) + str(b) except ValueError as e: try: p = unicode(a) + unicode(b) except ValueError as ve: p = str(e) + unicode(ve) p = str(e) return p """ errors = [(8, mock.ANY, 'C314'), (8, mock.ANY, 'C314'), (9, mock.ANY, 'C314')] self._assert_has_errors(code, checker, expected_errors=errors) def test_trans_add(self): checker = checks.CheckForTransAdd code = """ def fake_tran(msg): return msg _ = fake_tran def f(a, b): msg = _('test') + 'add me' msg = 'add to me' + _('test') return msg """ # We don't assert on specific column numbers since there is a small # change in calculation between =py38 errors = [(9, mock.ANY, 'C315'), (10, mock.ANY, 'C315')] self._assert_has_errors(code, checker, expected_errors=errors) code = """ def f(a, b): msg = 'test' + 'add me' return msg """ errors = [] 
self._assert_has_errors(code, checker, expected_errors=errors) def test_dict_constructor_with_list_copy(self): self.assertEqual(1, len(list(checks.dict_constructor_with_list_copy( " dict([(i, connect_info[i])")))) self.assertEqual(1, len(list(checks.dict_constructor_with_list_copy( " attrs = dict([(k, _from_json(v))")))) self.assertEqual(1, len(list(checks.dict_constructor_with_list_copy( " type_names = dict((value, key) for key, value in")))) self.assertEqual(1, len(list(checks.dict_constructor_with_list_copy( " dict((value, key) for key, value in")))) self.assertEqual(1, len(list(checks.dict_constructor_with_list_copy( "foo(param=dict((k, v) for k, v in bar.items()))")))) self.assertEqual(1, len(list(checks.dict_constructor_with_list_copy( " dict([[i,i] for i in range(3)])")))) self.assertEqual(1, len(list(checks.dict_constructor_with_list_copy( " dd = dict([i,i] for i in range(3))")))) self.assertEqual(0, len(list(checks.dict_constructor_with_list_copy( " create_kwargs = dict(snapshot=snapshot,")))) self.assertEqual(0, len(list(checks.dict_constructor_with_list_copy( " self._render_dict(xml, data_el, data.__dict__)")))) def test_no_xrange(self): self.assertEqual(1, len(list(checks.no_xrange("xrange(45)")))) self.assertEqual(0, len(list(checks.no_xrange("range(45)")))) def test_validate_assertTrue(self): test_value = True self.assertEqual(0, len(list(checks.validate_assertTrue( "assertTrue(True)")))) self.assertEqual(1, len(list(checks.validate_assertTrue( "assertEqual(True, %s)" % test_value)))) def test_validate_assertIsNone(self): test_value = None self.assertEqual(0, len(list(checks.validate_assertIsNone( "assertIsNone(None)")))) self.assertEqual(1, len(list(checks.validate_assertIsNone( "assertEqual(None, %s)" % test_value)))) def test_no_log_warn_check(self): self.assertEqual(0, len(list(checks.no_log_warn_check( "LOG.warning('This should not trigger LOG.warn" "hacking check.')")))) self.assertEqual(1, len(list(checks.no_log_warn_check( "LOG.warn('We should not use LOG.wan')")))) def test_oslo_assert_raises_regexp(self): code = """ self.assertRaisesRegexp(ValueError, "invalid literal for.*XYZ'$", int, 'XYZ') """ self._assert_has_errors(code, checks.assert_raises_regexp, expected_errors=[(1, 0, "C322")]) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/tests/test_hashmap.py0000664000175000017500000014244000000000000022404 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright 2015 Objectif Libre # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
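# Unit tests for the HashMap rating module: CRUD operations on groups,
# services, fields, mappings and thresholds through the hashmap DB API,
# plus rating of the sample compute dataframes defined below.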
# import copy import datetime import decimal from unittest import mock from oslo_utils import uuidutils from cloudkitty import dataframe from cloudkitty.rating import hash from cloudkitty.rating.hash.db import api from cloudkitty import tests TEST_TS = 1388577600 FAKE_UUID = '6c1b8a30-797f-4b7e-ad66-9879b79059fb' CK_RESOURCES_DATA = [dataframe.DataFrame.from_dict({ "period": { "begin": datetime.datetime(2014, 10, 1), "end": datetime.datetime(2014, 10, 1, 1)}, "usage": { "compute": [ { "desc": { "availability_zone": "nova", "flavor": "m1.nano", "image_id": "f5600101-8fa2-4864-899e-ebcb7ed6b568", "memory": "64", "metadata": { "farm": "prod"}, "name": "prod1", "project_id": "f266f30b11f246b589fd266f85eeec39", "user_id": "55b3379b949243009ee96972fbf51ed1", # Integer rather than a string on purpose "vcpus": 1}, "vol": { "qty": 1, "unit": "instance"} }, { "desc": { "availability_zone": "nova", "flavor": "m1.tiny", "image_id": "a41fba37-2429-4f15-aa00-b5bc4bf557bf", "memory": "512", "metadata": { "farm": "dev"}, "name": "dev1", "project_id": "f266f30b11f246b589fd266f85eeec39", "user_id": "55b3379b949243009ee96972fbf51ed1", "vcpus": "1"}, "vol": { "qty": 2, "unit": "instance"}}, { "desc": { "availability_zone": "nova", "flavor": "m1.xlarge", "image_id": "0052f884-461d-4e3a-9598-7e5391888209", "memory": "16384", "metadata": { "farm": "dev"}, "name": "dev1", "project_id": "f266f30b11f246b589fd266f85eeec39", "user_id": "55b3379b949243009ee96972fbf51ed1", # Integer rather than a string on purpose "vcpus": 8}, "vol": { "qty": 2, "unit": "instance"}}, { "desc": { "availability_zone": "nova", "flavor": "m1.nano", "image_id": "a41fba37-2429-4f15-aa00-b5bc4bf557bf", "memory": "64", "metadata": { "farm": "dev"}, "name": "dev2", "project_id": "f266f30b11f246b589fd266f85eeec39", "user_id": "55b3379b949243009ee96972fbf51ed1", "vcpus": "1"}, "vol": { "qty": 1, "unit": "instance"}}]}}, legacy=True)] class HashMapRatingTest(tests.TestCase): def setUp(self): super(HashMapRatingTest, self).setUp() self._tenant_id = 'f266f30b11f246b589fd266f85eeec39' self._db_api = hash.HashMap.db_api self._db_api.get_migration().upgrade('head') self._hash = hash.HashMap(self._tenant_id) # Group tests @mock.patch.object(uuidutils, 'generate_uuid', return_value=FAKE_UUID) def test_create_group(self, patch_generate_uuid): self._db_api.create_group('test_group') groups = self._db_api.list_groups() self.assertEqual([FAKE_UUID], groups) patch_generate_uuid.assert_called_once_with() def test_create_duplicate_group(self): self._db_api.create_group('test_group') self.assertRaises(api.GroupAlreadyExists, self._db_api.create_group, 'test_group') def test_delete_group(self): group_db = self._db_api.create_group('test_group') self._db_api.delete_group(group_db.group_id) groups = self._db_api.list_groups() self.assertEqual([], groups) def test_delete_unknown_group(self): self.assertRaises(api.NoSuchGroup, self._db_api.delete_group, uuidutils.generate_uuid()) def test_recursive_delete_group(self): service_db = self._db_api.create_service('compute') field_db = self._db_api.create_field(service_db.service_id, 'flavor') group_db = self._db_api.create_group('test_group') self._db_api.create_mapping( value='m1.tiny', cost='1.337', map_type='flat', field_id=field_db.field_id, group_id=group_db.group_id) self._db_api.delete_group(group_db.group_id) mappings = self._db_api.list_mappings(field_uuid=field_db.field_id) self.assertEqual([], mappings) groups = self._db_api.list_groups() self.assertEqual([], groups) def test_non_recursive_delete_group(self): 
service_db = self._db_api.create_service('compute') field_db = self._db_api.create_field(service_db.service_id, 'flavor') group_db = self._db_api.create_group('test_group') mapping_db = self._db_api.create_mapping( value='m1.tiny', cost='1.337', map_type='flat', field_id=field_db.field_id, group_id=group_db.group_id) self._db_api.delete_group(group_db.group_id, False) mappings = self._db_api.list_mappings(field_uuid=field_db.field_id) self.assertEqual([mapping_db.mapping_id], mappings) groups = self._db_api.list_groups() self.assertEqual([], groups) new_mapping_db = self._db_api.get_mapping(mapping_db.mapping_id) self.assertIsNone(new_mapping_db.group_id) def test_list_mappings_from_only_group(self): service_db = self._db_api.create_service('compute') group_db = self._db_api.create_group('test_group') mapping_tiny = self._db_api.create_mapping( cost='1.337', map_type='flat', service_id=service_db.service_id, group_id=group_db.group_id) self._db_api.create_mapping( cost='42', map_type='flat', service_id=service_db.service_id) mappings = self._db_api.list_mappings(group_uuid=group_db.group_id) self.assertEqual([mapping_tiny.mapping_id], mappings) def test_list_mappings_from_group(self): service_db = self._db_api.create_service('compute') field_db = self._db_api.create_field(service_db.service_id, 'flavor') group_db = self._db_api.create_group('test_group') mapping_tiny = self._db_api.create_mapping( value='m1.tiny', cost='1.337', map_type='flat', field_id=field_db.field_id, group_id=group_db.group_id) mapping_small = self._db_api.create_mapping( value='m1.small', cost='3.1337', map_type='flat', field_id=field_db.field_id, group_id=group_db.group_id) self._db_api.create_mapping( value='m1.large', cost='42', map_type='flat', field_id=field_db.field_id) mappings = self._db_api.list_mappings(field_uuid=field_db.field_id, group_uuid=group_db.group_id) self.assertEqual([mapping_tiny.mapping_id, mapping_small.mapping_id], mappings) def test_list_mappings_without_group(self): service_db = self._db_api.create_service('compute') field_db = self._db_api.create_field(service_db.service_id, 'flavor') group_db = self._db_api.create_group('test_group') self._db_api.create_mapping( value='m1.tiny', cost='1.337', map_type='flat', field_id=field_db.field_id, group_id=group_db.group_id) self._db_api.create_mapping( value='m1.small', cost='3.1337', map_type='flat', field_id=field_db.field_id, group_id=group_db.group_id) mapping_no_group = self._db_api.create_mapping( value='m1.large', cost='42', map_type='flat', field_id=field_db.field_id) mappings = self._db_api.list_mappings(field_uuid=field_db.field_id, no_group=True) self.assertEqual([mapping_no_group.mapping_id], mappings) # Service tests @mock.patch.object(uuidutils, 'generate_uuid', return_value=FAKE_UUID) def test_create_service(self, patch_generate_uuid): self._db_api.create_service('compute') services = self._db_api.list_services() self.assertEqual([FAKE_UUID], services) patch_generate_uuid.assert_called_once_with() def test_create_duplicate_service(self): self._db_api.create_service('compute') self.assertRaises(api.ServiceAlreadyExists, self._db_api.create_service, 'compute') def test_delete_service_by_name(self): self._db_api.create_service('compute') self._db_api.delete_service('compute') services = self._db_api.list_services() self.assertEqual([], services) def test_delete_service_by_uuid(self): service_db = self._db_api.create_service('compute') self._db_api.delete_service(uuid=service_db.service_id) services = self._db_api.list_services() 
self.assertEqual([], services) def test_delete_unknown_service_by_name(self): self.assertRaises(api.NoSuchService, self._db_api.delete_service, 'dummy') def test_delete_unknown_service_by_uuid(self): self.assertRaises( api.NoSuchService, self._db_api.delete_service, uuid='6e8de9fc-ee17-4b60-b81a-c9320e994e76') # Field tests def test_create_field_in_existing_service(self): service_db = self._db_api.create_service('compute') field_db = self._db_api.create_field(service_db.service_id, 'flavor') fields = self._db_api.list_fields(service_db.service_id) self.assertEqual([field_db.field_id], fields) def test_create_duplicate_field(self): service_db = self._db_api.create_service('compute') self._db_api.create_field(service_db.service_id, 'flavor') self.assertRaises(api.FieldAlreadyExists, self._db_api.create_field, service_db.service_id, 'flavor') def test_delete_field(self): service_db = self._db_api.create_service('compute') field_db = self._db_api.create_field(service_db.service_id, 'flavor') self._db_api.delete_field(field_db.field_id) services = self._db_api.list_services() self.assertEqual([service_db.service_id], services) fields = self._db_api.list_fields(service_db.service_id) self.assertEqual([], fields) def test_delete_unknown_field(self): self.assertRaises(api.NoSuchField, self._db_api.delete_field, uuidutils.generate_uuid()) def test_recursive_delete_field_from_service(self): service_db = self._db_api.create_service('compute') field_db = self._db_api.create_field(service_db.service_id, 'flavor') self._db_api.delete_service(uuid=service_db.service_id) self.assertRaises(api.NoSuchField, self._db_api.get_field, field_db.field_id) # Mapping tests def test_create_mapping(self): service_db = self._db_api.create_service('compute') field_db = self._db_api.create_field(service_db.service_id, 'flavor') mapping_db = self._db_api.create_mapping( value='m1.tiny', cost='1.337', map_type='flat', field_id=field_db.field_id) mappings = self._db_api.list_mappings(field_uuid=field_db.field_id) self.assertEqual([mapping_db.mapping_id], mappings) def test_get_mapping(self): service_db = self._db_api.create_service('compute') field_db = self._db_api.create_field(service_db.service_id, 'flavor') mapping_db = self._db_api.create_mapping( value='m1.tiny', cost='1.337', map_type='flat', field_id=field_db.field_id) mapping = self._db_api.get_mapping(mapping_db.mapping_id) self.assertEqual('flat', mapping.map_type) self.assertEqual('m1.tiny', mapping.value) self.assertEqual( decimal.Decimal('1.3369999999999999662492200514'), mapping.cost) self.assertEqual(field_db.id, mapping.field_id) def test_list_mappings_from_services(self): service_db = self._db_api.create_service('compute') mapping_db = self._db_api.create_mapping( cost='1.337', map_type='flat', service_id=service_db.service_id) mappings = self._db_api.list_mappings( service_uuid=service_db.service_id) self.assertEqual([mapping_db.mapping_id], mappings) def test_list_mappings_from_fields(self): service_db = self._db_api.create_service('compute') field_db = self._db_api.create_field(service_db.service_id, 'flavor') mapping_db = self._db_api.create_mapping( value='m1.tiny', cost='1.337', map_type='flat', field_id=field_db.field_id) mappings = self._db_api.list_mappings( field_uuid=field_db.field_id) self.assertEqual([mapping_db.mapping_id], mappings) def test_create_mapping_with_incorrect_type(self): service_db = self._db_api.create_service('compute') field_db = self._db_api.create_field(service_db.service_id, 'flavor') self.assertRaises(api.NoSuchType, 
self._db_api.create_mapping, value='m1.tiny', cost='1.337', map_type='invalid', field_id=field_db.field_id) def test_create_mapping_with_two_parents(self): service_db = self._db_api.create_service('compute') field_db = self._db_api.create_field(service_db.service_id, 'flavor') self.assertRaises(api.ClientHashMapError, self._db_api.create_mapping, value='m1.tiny', cost='1.337', map_type='flat', service_id=service_db.service_id, field_id=field_db.field_id) def test_update_mapping(self): service_db = self._db_api.create_service('compute') field_db = self._db_api.create_field(service_db.service_id, 'flavor') mapping_db = self._db_api.create_mapping( value='m1.tiny', cost='1.337', map_type='flat', field_id=field_db.field_id) new_mapping_db = self._db_api.update_mapping( uuid=mapping_db.mapping_id, value='42', map_type='rate') self.assertEqual('42', new_mapping_db.value) self.assertEqual('rate', new_mapping_db.map_type) def test_update_mapping_inside_group(self): service_db = self._db_api.create_service('compute') field_db = self._db_api.create_field(service_db.service_id, 'flavor') mapping_db = self._db_api.create_mapping( value='m1.tiny', cost='1.337', map_type='flat', field_id=field_db.field_id) group_db = self._db_api.create_group('test_group') new_mapping_db = self._db_api.update_mapping( mapping_db.mapping_id, value='42', map_type='rate', group_id=group_db.group_id) self.assertEqual('42', new_mapping_db.value) self.assertEqual('rate', new_mapping_db.map_type) self.assertEqual(group_db.id, new_mapping_db.group_id) def test_delete_mapping(self): service_db = self._db_api.create_service('compute') field_db = self._db_api.create_field(service_db.service_id, 'flavor') mapping_db = self._db_api.create_mapping( value='m1.tiny', cost='1.337', map_type='flat', field_id=field_db.field_id) self._db_api.delete_mapping(mapping_db.mapping_id) mappings = self._db_api.list_mappings(field_uuid=field_db.field_id) self.assertEqual([], mappings) def test_create_per_tenant_mapping(self): service_db = self._db_api.create_service('compute') field_db = self._db_api.create_field( service_db.service_id, 'flavor') mapping_db = self._db_api.create_mapping( value='m1.tiny', cost='1.337', map_type='flat', field_id=field_db.field_id, tenant_id=self._tenant_id) mappings = self._db_api.list_mappings(field_uuid=field_db.field_id) self.assertEqual( self._tenant_id, mapping_db.tenant_id) self.assertEqual([mapping_db.mapping_id], mappings) def test_list_mappings_filtering_on_tenant(self): service_db = self._db_api.create_service('compute') field_db = self._db_api.create_field( service_db.service_id, 'flavor') mapping_db = self._db_api.create_mapping( value='m1.tiny', cost='1.337', map_type='flat', field_id=field_db.field_id, tenant_id=self._tenant_id) self._db_api.create_mapping( value='m1.small', cost='1.337', map_type='flat', field_id=field_db.field_id) mappings = self._db_api.list_mappings( field_uuid=field_db.field_id, tenant_uuid=self._tenant_id) self.assertEqual([mapping_db.mapping_id], mappings) def test_list_mappings_filtering_on_no_tenant(self): service_db = self._db_api.create_service('compute') field_db = self._db_api.create_field( service_db.service_id, 'flavor') mapping_db = self._db_api.create_mapping( value='m1.tiny', cost='1.337', map_type='flat', field_id=field_db.field_id) self._db_api.create_mapping( value='m1.small', cost='1.337', map_type='flat', field_id=field_db.field_id, tenant_id=self._tenant_id) mappings = self._db_api.list_mappings( field_uuid=field_db.field_id, tenant_uuid=None) 
self.assertEqual([mapping_db.mapping_id], mappings) # Threshold tests def test_create_threshold(self): service_db = self._db_api.create_service('compute') field_db = self._db_api.create_field(service_db.service_id, 'memory') threshold_db = self._db_api.create_threshold( level='64', cost='0.1337', map_type='flat', field_id=field_db.field_id) thresholds = self._db_api.list_thresholds(field_uuid=field_db.field_id) self.assertEqual([threshold_db.threshold_id], thresholds) def test_get_threshold(self): service_db = self._db_api.create_service('compute') field_db = self._db_api.create_field(service_db.service_id, 'memory') threshold_db = self._db_api.create_threshold( level='64', cost='0.1337', map_type='rate', field_id=field_db.field_id) threshold = self._db_api.get_threshold(threshold_db.threshold_id) self.assertEqual('rate', threshold.map_type) self.assertEqual(decimal.Decimal('64'), threshold.level) self.assertEqual( decimal.Decimal('0.1337000000000000132782673745'), threshold.cost) self.assertEqual(field_db.id, threshold.field_id) def test_list_thresholds_from_only_group(self): service_db = self._db_api.create_service('compute') group_db = self._db_api.create_group('test_group') threshold_db = self._db_api.create_threshold( level=10, cost='1.337', map_type='flat', service_id=service_db.service_id, group_id=group_db.group_id) thresholds = self._db_api.list_thresholds( group_uuid=group_db.group_id) self.assertEqual([threshold_db.threshold_id], thresholds) def test_list_thresholds_from_services(self): service_db = self._db_api.create_service('compute') threshold_db = self._db_api.create_threshold( level=10, cost='1.337', map_type='flat', service_id=service_db.service_id) thresholds = self._db_api.list_thresholds( service_uuid=service_db.service_id) self.assertEqual([threshold_db.threshold_id], thresholds) def test_list_thresholds_from_fields(self): service_db = self._db_api.create_service('compute') field_db = self._db_api.create_field( service_db.service_id, 'memory') threshold_db = self._db_api.create_threshold( level='64', cost='0.1337', map_type='flat', field_id=field_db.field_id) thresholds = self._db_api.list_thresholds(field_uuid=field_db.field_id) self.assertEqual([threshold_db.threshold_id], thresholds) def test_create_threshold_with_incorrect_type(self): service_db = self._db_api.create_service('compute') field_db = self._db_api.create_field( service_db.service_id, 'memory') self.assertRaises( api.NoSuchType, self._db_api.create_threshold, level='64', cost='0.1337', map_type='invalid', field_id=field_db.field_id) def test_create_threshold_with_two_parents(self): service_db = self._db_api.create_service('compute') field_db = self._db_api.create_field( service_db.service_id, 'memory') self.assertRaises( api.ClientHashMapError, self._db_api.create_threshold, level='64', cost='0.1337', map_type='flat', service_id=service_db.service_id, field_id=field_db.field_id) def test_update_threshold(self): service_db = self._db_api.create_service('compute') field_db = self._db_api.create_field( service_db.service_id, 'memory') threshold_db = self._db_api.create_threshold( level='64', cost='0.1337', map_type='flat', field_id=field_db.field_id) new_threshold_db = self._db_api.update_threshold( uuid=threshold_db.threshold_id, level='128', map_type='rate') self.assertEqual('128', new_threshold_db.level) self.assertEqual('rate', new_threshold_db.map_type) def test_update_threshold_inside_group(self): service_db = self._db_api.create_service('compute') field_db = self._db_api.create_field( 
service_db.service_id, 'memory') threshold_db = self._db_api.create_threshold( level='64', cost='0.1337', map_type='flat', field_id=field_db.field_id) group_db = self._db_api.create_group('test_group') new_threshold_db = self._db_api.update_threshold( threshold_db.threshold_id, group_id=group_db.group_id) self.assertEqual(group_db.id, new_threshold_db.group_id) def test_delete_threshold(self): service_db = self._db_api.create_service('compute') field_db = self._db_api.create_field( service_db.service_id, 'memory') threshold_db = self._db_api.create_threshold( level='64', cost='0.1337', map_type='flat', field_id=field_db.field_id) self._db_api.delete_threshold(threshold_db.threshold_id) thresholds = self._db_api.list_thresholds(field_uuid=field_db.field_id) self.assertEqual([], thresholds) def test_create_per_tenant_threshold(self): service_db = self._db_api.create_service('compute') field_db = self._db_api.create_field( service_db.service_id, 'memory') threshold_db = self._db_api.create_threshold( level='64', cost='0.1337', map_type='flat', field_id=field_db.field_id, tenant_id=self._tenant_id) thresholds = self._db_api.list_thresholds(field_uuid=field_db.field_id) self.assertEqual( self._tenant_id, threshold_db.tenant_id) self.assertEqual([threshold_db.threshold_id], thresholds) def test_list_thresholds_filtering_on_tenant(self): service_db = self._db_api.create_service('compute') field_db = self._db_api.create_field( service_db.service_id, 'memory') threshold_db = self._db_api.create_threshold( level='64', cost='0.1337', map_type='flat', field_id=field_db.field_id, tenant_id=self._tenant_id) self._db_api.create_threshold( level='128', cost='0.2', map_type='flat', field_id=field_db.field_id) thresholds = self._db_api.list_thresholds( field_uuid=field_db.field_id, tenant_uuid=self._tenant_id) self.assertEqual([threshold_db.threshold_id], thresholds) def test_list_thresholds_filtering_on_no_tenant(self): service_db = self._db_api.create_service('compute') field_db = self._db_api.create_field( service_db.service_id, 'memory') threshold_db = self._db_api.create_threshold( level='64', cost='0.1337', map_type='flat', field_id=field_db.field_id) self._db_api.create_threshold( level='128', cost='0.2', map_type='flat', field_id=field_db.field_id, tenant_id=self._tenant_id) thresholds = self._db_api.list_thresholds( field_uuid=field_db.field_id, tenant_uuid=None) self.assertEqual([threshold_db.threshold_id], thresholds) # Processing tests def _generate_hashmap_rules(self): mapping_list = [] threshold_list = [] service_db = self._db_api.create_service('compute') flavor_field = self._db_api.create_field(service_db.service_id, 'flavor') memory_field = self._db_api.create_field(service_db.service_id, 'memory') group_db = self._db_api.create_group('test_group') mapping_list.append( self._db_api.create_mapping( cost='1.42', map_type='rate', service_id=service_db.service_id)) mapping_list.append( self._db_api.create_mapping( value='m1.tiny', cost='1.337', map_type='flat', field_id=flavor_field.field_id)) mapping_list.append( self._db_api.create_mapping( value='m1.large', cost='13.37', map_type='rate', field_id=flavor_field.field_id, group_id=group_db.group_id)) # Per tenant override mapping_list.append( self._db_api.create_mapping( value='m1.tiny', cost='2', map_type='flat', field_id=flavor_field.field_id, tenant_id=self._tenant_id)) threshold_list.append( self._db_api.create_threshold( level='64', cost='0.02', map_type='flat', field_id=memory_field.field_id, group_id=group_db.group_id)) 
threshold_list.append( self._db_api.create_threshold( level='128', cost='0.03', map_type='flat', field_id=memory_field.field_id, group_id=group_db.group_id)) threshold_list.append( self._db_api.create_threshold( level='64', cost='0.03', map_type='flat', field_id=memory_field.field_id, group_id=group_db.group_id, tenant_id=self._tenant_id)) return ([mapping.mapping_id for mapping in mapping_list], [threshold.threshold_id for threshold in threshold_list]) def test_load_rates(self): self._generate_hashmap_rules() self._hash.reload_config() expect = { 'compute': { 'fields': { 'flavor': { 'mappings': { '_DEFAULT_': { 'm1.tiny': { 'cost': decimal.Decimal( '2.0000000000000000000000000000'), 'type': 'flat'}}, 'test_group': { 'm1.large': { 'cost': decimal.Decimal( '13.3699999999999992184029906639'), 'type': 'rate'}}}, 'thresholds': {}}, 'memory': { 'mappings': {}, 'thresholds': { 'test_group': { 64: { 'cost': decimal.Decimal( '0.0299999999999999988897769754'), 'type': 'flat'}, 128: { 'cost': decimal.Decimal( '0.0299999999999999988897769754'), 'type': 'flat'}}}}}, 'mappings': { '_DEFAULT_': { 'cost': decimal.Decimal( '1.4199999999999999289457264240'), 'type': 'rate'}}, 'thresholds': {}}} self.assertEqual(expect, self._hash._entries) def test_load_mappings(self): mapping_list = [] service_db = self._db_api.create_service('compute') field_db = self._db_api.create_field(service_db.service_id, 'flavor') group_db = self._db_api.create_group('test_group') mapping_list.append( self._db_api.create_mapping( value='m1.tiny', cost='1.337', map_type='flat', field_id=field_db.field_id)) mapping_list.append( self._db_api.create_mapping( value='m1.large', cost='13.37', map_type='rate', field_id=field_db.field_id, group_id=group_db.group_id)) mappings_uuid = [mapping.mapping_id for mapping in mapping_list] result = self._hash._load_mappings(mappings_uuid) expected_result = { '_DEFAULT_': { 'm1.tiny': { 'cost': decimal.Decimal('1.3369999999999999662492200514'), 'type': 'flat'}}, 'test_group': { 'm1.large': { 'cost': decimal.Decimal('13.3699999999999992184029906639'), 'type': 'rate'}}} self.assertEqual(expected_result, result) def test_load_thresholds(self): threshold_list = [] service_db = self._db_api.create_service('compute') field_db = self._db_api.create_field(service_db.service_id, 'flavor') group_db = self._db_api.create_group('test_group') threshold_list.append( self._db_api.create_threshold( level='1000', cost='3.1337', map_type='flat', field_id=field_db.field_id, group_id=group_db.group_id)) thresholds_uuid = [threshold.threshold_id for threshold in threshold_list] result = self._hash._load_thresholds(thresholds_uuid) expected_result = { 'test_group': { 1000: { 'cost': decimal.Decimal('3.1337000000000001520561454527'), 'type': 'flat'}}} self.assertEqual(expected_result, result) def test_process_services(self): service_db = self._db_api.create_service('compute') group_db = self._db_api.create_group('test_group') self._db_api.create_mapping( cost='1.337', map_type='flat', service_id=service_db.service_id, group_id=group_db.group_id) self._db_api.create_mapping( cost='1.42', map_type='flat', service_id=service_db.service_id) self._hash.reload_config() expected_data = copy.deepcopy(CK_RESOURCES_DATA) actual_data = dataframe.DataFrame(start=expected_data[0].start, end=expected_data[0].end) for cur_data in expected_data: for service_name, point in cur_data.iterpoints(): self._hash._res = {} self._hash.process_services(service_name, point) actual_data.add_point( self._hash.add_rating_informations(point), 
service_name) actual_data = [actual_data] df_dicts = [d.as_dict(mutable=True) for d in expected_data] compute_list = df_dicts[0]['usage']['compute'] compute_list[0]['rating'] = {'price': decimal.Decimal( '2.756999999999999895194946475')} compute_list[1]['rating'] = {'price': decimal.Decimal( '5.513999999999999790389892950')} compute_list[2]['rating'] = {'price': decimal.Decimal( '5.513999999999999790389892950')} compute_list[3]['rating'] = {'price': decimal.Decimal( '2.756999999999999895194946475')} self.assertEqual(df_dicts, [d.as_dict(mutable=True) for d in actual_data]) def test_process_fields(self): service_db = self._db_api.create_service('compute') flavor_field = self._db_api.create_field(service_db.service_id, 'flavor') image_field = self._db_api.create_field(service_db.service_id, 'image_id') group_db = self._db_api.create_group('test_group') self._db_api.create_mapping( value='m1.nano', cost='1.337', map_type='flat', field_id=flavor_field.field_id, group_id=group_db.group_id) self._db_api.create_mapping( value='a41fba37-2429-4f15-aa00-b5bc4bf557bf', cost='1.10', map_type='rate', field_id=image_field.field_id, group_id=group_db.group_id) self._db_api.create_mapping( value='m1.tiny', cost='1.42', map_type='flat', field_id=flavor_field.field_id) self._hash.reload_config() expected_data = copy.deepcopy(CK_RESOURCES_DATA) actual_data = dataframe.DataFrame(start=expected_data[0].start, end=expected_data[0].end) for cur_data in expected_data: for service_name, point in cur_data.iterpoints(): self._hash._res = {} self._hash.process_fields(service_name, point) actual_data.add_point( self._hash.add_rating_informations(point), service_name) actual_data = [actual_data] df_dicts = [d.as_dict(mutable=True) for d in expected_data] compute_list = df_dicts[0]['usage']['compute'] compute_list[0]['rating'] = {'price': decimal.Decimal( '1.336999999999999966249220051')} compute_list[1]['rating'] = {'price': decimal.Decimal( '2.839999999999999857891452848')} compute_list[2]['rating'] = {'price': decimal.Decimal('0')} compute_list[3]['rating'] = {'price': decimal.Decimal( '1.470700000000000081623596770')} self.assertEqual(df_dicts, [d.as_dict(mutable=True) for d in actual_data]) def test_process_fields_no_match(self): service_db = self._db_api.create_service('compute') flavor_field = self._db_api.create_field(service_db.service_id, 'flavor') self._db_api.create_mapping( value='non-existent', cost='1.337', map_type='flat', field_id=flavor_field.field_id) self._hash.reload_config() expected_data = copy.deepcopy(CK_RESOURCES_DATA) actual_data = dataframe.DataFrame(start=expected_data[0].start, end=expected_data[0].end) for cur_data in expected_data: for service_name, point in cur_data.iterpoints(): self._hash._res = {} self._hash.process_fields(service_name, point) actual_data.add_point( self._hash.add_rating_informations(point), service_name) actual_data = [actual_data] df_dicts = [d.as_dict(mutable=True) for d in expected_data] self.assertEqual(df_dicts, [d.as_dict(mutable=True) for d in actual_data]) def test_process_field_threshold(self): service_db = self._db_api.create_service('compute') field_db = self._db_api.create_field(service_db.service_id, 'memory') self._db_api.create_threshold( level=64, cost='0.1337', map_type='flat', field_id=field_db.field_id) self._db_api.create_threshold( level=128, cost='0.2', map_type='flat', field_id=field_db.field_id) self._hash.reload_config() expected_data = copy.deepcopy(CK_RESOURCES_DATA) actual_data = dataframe.DataFrame(start=expected_data[0].start, 
end=expected_data[0].end) for cur_data in expected_data: for service_name, point in cur_data.iterpoints(): self._hash._res = {} self._hash.process_fields(service_name, point) actual_data.add_point( self._hash.add_rating_informations(point), service_name) actual_data = [actual_data] df_dicts = [d.as_dict(mutable=True) for d in expected_data] compute_list = df_dicts[0]['usage']['compute'] compute_list[0]['rating'] = {'price': decimal.Decimal( '0.1337000000000000132782673745')} compute_list[1]['rating'] = {'price': decimal.Decimal( '0.4000000000000000222044604926')} compute_list[2]['rating'] = {'price': decimal.Decimal( '0.4000000000000000222044604926')} compute_list[3]['rating'] = {'price': decimal.Decimal( '0.1337000000000000132782673745')} self.assertEqual(df_dicts, [d.as_dict(mutable=True) for d in actual_data]) def test_process_field_threshold_no_match(self): service_db = self._db_api.create_service('compute') field_db = self._db_api.create_field(service_db.service_id, 'memory') self._db_api.create_threshold( level=32768, cost='0.1337', map_type='flat', field_id=field_db.field_id) self._hash.reload_config() expected_data = copy.deepcopy(CK_RESOURCES_DATA) actual_data = dataframe.DataFrame(start=expected_data[0].start, end=expected_data[0].end) for cur_data in expected_data: for service_name, point in cur_data.iterpoints(): self._hash._res = {} self._hash.process_fields(service_name, point) actual_data.add_point( self._hash.add_rating_informations(point), service_name) actual_data = [actual_data] self.assertEqual([d.as_dict(mutable=True) for d in expected_data], [d.as_dict(mutable=True) for d in actual_data]) def test_process_service_threshold(self): service_db = self._db_api.create_service('compute') self._db_api.create_threshold( level=1, cost='0.1', map_type='flat', service_id=service_db.service_id) self._db_api.create_threshold( level=2, cost='0.15', map_type='flat', service_id=service_db.service_id) self._hash.reload_config() expected_data = copy.deepcopy(CK_RESOURCES_DATA) actual_data = dataframe.DataFrame(start=expected_data[0].start, end=expected_data[0].end) for cur_data in expected_data: for service_name, point in cur_data.iterpoints(): self._hash._res = {} self._hash.process_services(service_name, point) actual_data.add_point( self._hash.add_rating_informations(point), service_name) actual_data = [actual_data] df_dicts = [d.as_dict(mutable=True) for d in expected_data] compute_list = df_dicts[0]['usage']['compute'] compute_list[0]['rating'] = {'price': decimal.Decimal( '0.1000000000000000055511151231')} compute_list[1]['rating'] = {'price': decimal.Decimal( '0.1499999999999999944488848769')} compute_list[2]['rating'] = {'price': decimal.Decimal( '0.1499999999999999944488848769')} compute_list[3]['rating'] = {'price': decimal.Decimal( '0.1000000000000000055511151231')} self.assertEqual(df_dicts, [d.as_dict(mutable=True) for d in actual_data]) def test_update_result_flat(self): self._hash.update_result( 'test_group', 'flat', 1) self.assertEqual(1, self._hash._res['test_group']['flat']) self._hash.update_result( 'test_group', 'flat', 0.5) self.assertEqual(1, self._hash._res['test_group']['flat']) self._hash.update_result( 'test_group', 'flat', 1.5) self.assertEqual(1.5, self._hash._res['test_group']['flat']) def test_update_result_rate(self): self._hash.update_result( 'test_group', 'rate', 0.5) self.assertEqual(0.5, self._hash._res['test_group']['rate']) self._hash.update_result( 'test_group', 'rate', 0.5) self.assertEqual(0.25, self._hash._res['test_group']['rate']) 
self._hash.update_result( 'test_group', 'rate', 1) self.assertEqual(0.25, self._hash._res['test_group']['rate']) def test_update_result_threshold(self): self._hash.update_result( 'test_group', 'flat', 0.01, 0, True) self.assertEqual({'level': 0, 'cost': 0.01, 'scope': 'field', 'type': 'flat'}, self._hash._res['test_group']['threshold']) self._hash.update_result( 'test_group', 'flat', 1, 10, True) self.assertEqual({'level': 10, 'cost': 1, 'scope': 'field', 'type': 'flat'}, self._hash._res['test_group']['threshold']) self._hash.update_result( 'test_group', 'flat', 1.1, 15, True) self.assertEqual({'level': 15, 'cost': 1.1, 'scope': 'field', 'type': 'flat'}, self._hash._res['test_group']['threshold']) self._hash.update_result( 'test_group', 'threshold', 2.2, 10, True) self.assertEqual({'level': 15, 'cost': 1.1, 'scope': 'field', 'type': 'flat'}, self._hash._res['test_group']['threshold']) def test_process_rating(self): service_db = self._db_api.create_service('compute') flavor_db = self._db_api.create_field(service_db.service_id, 'flavor') vcpus_db = self._db_api.create_field(service_db.service_id, 'vcpus') group_db = self._db_api.create_group('test_group') second_group_db = self._db_api.create_group('second_test_group') self._db_api.create_mapping( cost='1.00', map_type='flat', service_id=service_db.service_id) self._db_api.create_mapping( value='m1.nano', cost='1.337', map_type='flat', field_id=flavor_db.field_id, group_id=group_db.group_id) self._db_api.create_mapping( value='m1.tiny', cost='1.42', map_type='flat', field_id=flavor_db.field_id, group_id=group_db.group_id) self._db_api.create_mapping( value='8', cost='16.0', map_type='flat', field_id=vcpus_db.field_id, group_id=second_group_db.group_id) image_db = self._db_api.create_field(service_db.service_id, 'image_id') self._db_api.create_mapping( value='a41fba37-2429-4f15-aa00-b5bc4bf557bf', cost='1.10', map_type='rate', field_id=image_db.field_id, group_id=group_db.group_id) memory_db = self._db_api.create_field(service_db.service_id, 'memory') self._db_api.create_threshold( level=64, cost='0.15', map_type='flat', field_id=memory_db.field_id, group_id=group_db.group_id) self._db_api.create_threshold( level=128, cost='0.2', map_type='flat', field_id=memory_db.field_id, group_id=group_db.group_id) self._hash.reload_config() expected_data = copy.deepcopy(CK_RESOURCES_DATA) actual_data = dataframe.DataFrame(start=expected_data[0].start, end=expected_data[0].end) df_dicts = [d.as_dict(mutable=True) for d in expected_data] compute_list = df_dicts[0]['usage']['compute'] compute_list[0]['rating'] = {'price': decimal.Decimal( '2.486999999999999960698104928')} compute_list[1]['rating'] = {'price': decimal.Decimal( '5.564000000000000155875312656')} # 8vcpu mapping * 2 + service_mapping * 1 + 128m ram threshold * 2 compute_list[2]['rating'] = {'price': decimal.Decimal( '34.40000000000000002220446049')} compute_list[3]['rating'] = {'price': decimal.Decimal( '2.635700000000000088840046430')} actual_data = [self._hash.process(d) for d in expected_data] self.assertEqual(df_dicts, [d.as_dict(mutable=True) for d in actual_data]) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/tests/test_keystone_fetcher.py0000664000175000017500000000552200000000000024323 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright 2015 Objectif Libre # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the 
License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # import unittest from unittest import mock from oslo_utils import uuidutils from cloudkitty.fetcher import keystone from cloudkitty import tests class FakeRole(object): def __init__(self, name, uuid=None): self.id = uuid if uuid else uuidutils.generate_uuid() self.name = name class FakeTenant(object): def __init__(self, id): self.id = id class FakeKeystoneClient(object): user_id = 'd89e3fee-2b92-4387-b564-63901d62e591' def __init__(self, **kwargs): pass class FakeTenants(object): @classmethod def list(cls): return [FakeTenant('f266f30b11f246b589fd266f85eeec39'), FakeTenant('4dfb25b0947c4f5481daf7b948c14187')] class FakeRoles(object): roles_mapping = { 'd89e3fee-2b92-4387-b564-63901d62e591': { 'f266f30b11f246b589fd266f85eeec39': [FakeRole('rating'), FakeRole('admin')], '4dfb25b0947c4f5481daf7b948c14187': [FakeRole('admin')]}} @classmethod def roles_for_user(cls, user_id, tenant, **kwargs): return cls.roles_mapping[user_id][tenant.id] roles = FakeRoles() tenants = FakeTenants() def Client(**kwargs): return FakeKeystoneClient(**kwargs) class KeystoneFetcherTest(tests.TestCase): def setUp(self): super(KeystoneFetcherTest, self).setUp() self.conf.set_override('backend', 'keystone', 'tenant_fetcher') self.conf.import_group('fetcher_keystone', 'cloudkitty.fetcher.keystone') @unittest.SkipTest def test_fetcher_keystone_filter_list(self): kclient = 'keystoneclient.client.Client' with mock.patch(kclient) as kclientmock: kclientmock.return_value = Client() fetcher = keystone.KeystoneFetcher() kclientmock.assert_called_once_with( auth_url='http://127.0.0.1:5000/v2.0', username='cloudkitty', password='cloudkitty', tenant_name='cloudkitty', region_name='RegionOne') tenants = fetcher.get_tenants() self.assertEqual(['f266f30b11f246b589fd266f85eeec39'], tenants) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/tests/test_orchestrator.py0000664000175000017500000012363300000000000023505 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright 2014 Objectif Libre # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
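# Unit tests for the orchestrator: scope state reset through the scope
# endpoint, processor/reprocessor service manager setup, and the per-scope
# collect -> rate -> persist workflow exercised by the workers.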
# import datetime import re from unittest import mock from oslo_messaging import conffixture from stevedore import extension from tooz import coordination from tooz.drivers import file from cloudkitty import collector from cloudkitty import orchestrator from cloudkitty.storage.v2 import influx from cloudkitty import storage_state from cloudkitty import tests from cloudkitty.utils import tz as tzutils class FakeKeystoneClient(object): class FakeTenants(object): def list(self): return ['f266f30b11f246b589fd266f85eeec39', '4dfb25b0947c4f5481daf7b948c14187'] tenants = FakeTenants() class ScopeEndpointTest(tests.TestCase): def setUp(self): super(ScopeEndpointTest, self).setUp() messaging_conf = self.useFixture(conffixture.ConfFixture(self.conf)) messaging_conf.transport_url = 'fake:/' self.conf.set_override('backend', 'influxdb', 'storage') def test_reset_state(self): coord_start_patch = mock.patch.object( coordination.CoordinationDriverWithExecutor, 'start') lock_acquire_patch = mock.patch.object( file.FileLock, 'acquire', return_value=True) storage_delete_patch = mock.patch.object( influx.InfluxStorage, 'delete') state_set_patch = mock.patch.object( storage_state.StateManager, 'set_last_processed_timestamp') with coord_start_patch, lock_acquire_patch, \ storage_delete_patch as sd, state_set_patch as ss: endpoint = orchestrator.ScopeEndpoint() endpoint.reset_state({}, { 'scopes': [ { 'scope_id': 'f266f30b11f246b589fd266f85eeec39', 'scope_key': 'project_id', 'collector': 'prometheus', 'fetcher': 'prometheus', }, { 'scope_id': '4dfb25b0947c4f5481daf7b948c14187', 'scope_key': 'project_id', 'collector': 'gnocchi', 'fetcher': 'gnocchi', }, ], 'last_processed_timestamp': '20190716T085501Z', }) sd.assert_has_calls([ mock.call( begin=tzutils.utc_to_local( datetime.datetime(2019, 7, 16, 8, 55, 1)), end=None, filters={'project_id': 'f266f30b11f246b589fd266f85eeec39'} ), mock.call( begin=tzutils.utc_to_local( datetime.datetime(2019, 7, 16, 8, 55, 1)), end=None, filters={'project_id': '4dfb25b0947c4f5481daf7b948c14187'}, ) ], any_order=True) ss.assert_has_calls([ mock.call( 'f266f30b11f246b589fd266f85eeec39', tzutils.utc_to_local( datetime.datetime(2019, 7, 16, 8, 55, 1)), scope_key='project_id', collector='prometheus', fetcher='prometheus'), mock.call( '4dfb25b0947c4f5481daf7b948c14187', tzutils.utc_to_local( datetime.datetime(2019, 7, 16, 8, 55, 1)), scope_key='project_id', collector='gnocchi', fetcher='gnocchi')], any_order=True) class OrchestratorTest(tests.TestCase): def setUp(self): super(OrchestratorTest, self).setUp() messaging_conf = self.useFixture(conffixture.ConfFixture(self.conf)) messaging_conf.transport_url = 'fake:/' self.conf.set_override('backend', 'keystone', 'fetcher') self.conf.import_group('fetcher_keystone', 'cloudkitty.fetcher.keystone') def setup_fake_modules(self): fake_module1 = tests.FakeRatingModule() fake_module1.module_name = 'fake1' fake_module1.set_priority(3) fake_module2 = tests.FakeRatingModule() fake_module2.module_name = 'fake2' fake_module2.set_priority(1) fake_module3 = tests.FakeRatingModule() fake_module3.module_name = 'fake3' fake_module3.set_priority(2) fake_extensions = [ extension.Extension( 'fake1', 'cloudkitty.tests.FakeRatingModule1', None, fake_module1), extension.Extension( 'fake2', 'cloudkitty.tests.FakeRatingModule2', None, fake_module2), extension.Extension( 'fake3', 'cloudkitty.tests.FakeRatingModule3', None, fake_module3)] return fake_extensions def test_processors_ordering_in_workers(self): fake_extensions = self.setup_fake_modules() ck_ext_mgr = 
'cloudkitty.extension_manager.EnabledExtensionManager' with mock.patch(ck_ext_mgr) as stevemock: fake_mgr = extension.ExtensionManager.make_test_instance( fake_extensions, 'cloudkitty.rating.processors') stevemock.return_value = fake_mgr worker = orchestrator.BaseWorker() stevemock.assert_called_once_with( 'cloudkitty.rating.processors', invoke_kwds={'tenant_id': None}) self.assertEqual('fake1', worker._processors[0].name) self.assertEqual(3, worker._processors[0].obj.priority) self.assertEqual('fake3', worker._processors[1].name) self.assertEqual(2, worker._processors[1].obj.priority) self.assertEqual('fake2', worker._processors[2].name) self.assertEqual(1, worker._processors[2].obj.priority) @mock.patch("cotyledon.ServiceManager.add") @mock.patch("cotyledon._service_manager.ServiceManager.__init__") def test_cloudkitty_service_manager_only_processing( self, service_manager_init_mock, cotyledon_add_mock): OrchestratorTest.execute_cloudkitty_service_manager_test( cotyledon_add_mock=cotyledon_add_mock, max_workers_reprocessing=0, max_workers=1) self.assertTrue(service_manager_init_mock.called) @mock.patch("cotyledon.ServiceManager.add") @mock.patch("cotyledon._service_manager.ServiceManager.__init__") def test_cloudkitty_service_manager_only_reprocessing( self, service_manager_init_mock, cotyledon_add_mock): OrchestratorTest.execute_cloudkitty_service_manager_test( cotyledon_add_mock=cotyledon_add_mock, max_workers_reprocessing=1, max_workers=0) self.assertTrue(service_manager_init_mock.called) @mock.patch("cotyledon.ServiceManager.add") @mock.patch("cotyledon._service_manager.ServiceManager.__init__") def test_cloudkitty_service_manager_both_processings( self, service_manager_init_mock, cotyledon_add_mock): OrchestratorTest.execute_cloudkitty_service_manager_test( cotyledon_add_mock=cotyledon_add_mock) self.assertTrue(service_manager_init_mock.called) @staticmethod def execute_cloudkitty_service_manager_test(cotyledon_add_mock=None, max_workers=1, max_workers_reprocessing=1): original_conf = orchestrator.CONF try: orchestrator.CONF = mock.Mock() orchestrator.CONF.orchestrator = mock.Mock() orchestrator.CONF.orchestrator.max_workers = max_workers orchestrator.CONF.orchestrator.max_workers_reprocessing = \ max_workers_reprocessing orchestrator.CloudKittyServiceManager() expected_calls = [] if max_workers: expected_calls.append( mock.call(orchestrator.CloudKittyProcessor, workers=max_workers)) if max_workers_reprocessing: expected_calls.append( mock.call(orchestrator.CloudKittyReprocessor, workers=max_workers_reprocessing)) cotyledon_add_mock.assert_has_calls(expected_calls) finally: orchestrator.CONF = original_conf class WorkerTest(tests.TestCase): def setUp(self): super(WorkerTest, self).setUp() patcher_state_manager_set_state = mock.patch( "cloudkitty.storage_state.StateManager.set_state") self.addCleanup(patcher_state_manager_set_state.stop) self.state_manager_set_state_mock = \ patcher_state_manager_set_state.start() self._tenant_id = 'a' self._worker_id = '0' self.collector_mock = mock.MagicMock() self.storage_mock = mock.MagicMock() self.collector_mock.__str__.return_value = "toString" load_conf_manager = mock.patch("cloudkitty.utils.load_conf") self.addCleanup(load_conf_manager.stop) self.load_conf_mock = load_conf_manager.start() self.worker = orchestrator.Worker(self.collector_mock, self.storage_mock, self._tenant_id, self._worker_id) def test_do_collection_all_valid(self): timestamp_now = tzutils.localized_now() metrics = ['metric{}'.format(i) for i in range(5)] side_effect = [( 
metrics[i], {'period': {'begin': 0, 'end': 3600}, 'usage': i}, ) for i in range(5)] self.collector_mock.retrieve.side_effect = side_effect output = sorted(self.worker._do_collection(metrics, timestamp_now).items(), key=lambda x: x[1]['usage']) self.assertEqual(side_effect, output) def test_do_collection_some_empty(self): timestamp_now = tzutils.localized_now() metrics = ['metric{}'.format(i) for i in range(7)] side_effect = [( metrics[i], {'period': {'begin': 0, 'end': 3600}, 'usage': i}, ) for i in range(5)] side_effect.insert(2, collector.NoDataCollected('a', 'b')) side_effect.insert(4, collector.NoDataCollected('a', 'b')) self.collector_mock.retrieve.side_effect = side_effect output = sorted(self.worker._do_collection( metrics, timestamp_now).items(), key=lambda x: x[1]['usage']) self.assertEqual([ i for i in side_effect if not isinstance(i, collector.NoDataCollected) ], output) def test_update_scope_processing_state_db(self): timestamp = tzutils.localized_now() self.worker.update_scope_processing_state_db(timestamp) self.state_manager_set_state_mock.assert_has_calls([ mock.call(self.worker._tenant_id, timestamp) ]) @mock.patch("cloudkitty.dataframe.DataFrame") def test_execute_measurements_rating(self, dataframe_mock): new_data_frame_mock = mock.Mock() dataframe_mock.return_value = new_data_frame_mock processor_mock_1 = mock.Mock() return_processor_1 = mock.Mock() processor_mock_1.obj.process.return_value = return_processor_1 processor_mock_2 = mock.Mock() return_processor_2 = mock.Mock() processor_mock_2.obj.process.return_value = return_processor_2 self.worker._processors = [processor_mock_1, processor_mock_2] start_time = tzutils.localized_now() end_time = start_time + datetime.timedelta(hours=1) return_of_method = self.worker.execute_measurements_rating( end_time, start_time, {}) self.assertEqual(return_processor_2, return_of_method) processor_mock_1.obj.process.assert_has_calls([ mock.call(new_data_frame_mock) ]) processor_mock_2.obj.process.assert_has_calls([ mock.call(return_processor_1) ]) dataframe_mock.assert_has_calls([ mock.call(start=start_time, end=end_time, usage={}) ]) def test_persist_rating_data(self): start_time = tzutils.localized_now() end_time = start_time + datetime.timedelta(hours=1) frame = {"id": "sd"} self.worker.persist_rating_data(end_time, frame, start_time) self.storage_mock.push.assert_has_calls([ mock.call([frame], self.worker._tenant_id) ]) @mock.patch("cloudkitty.orchestrator.Worker._do_collection") @mock.patch("cloudkitty.orchestrator.Worker.execute_measurements_rating") @mock.patch("cloudkitty.orchestrator.Worker.persist_rating_data") @mock.patch("cloudkitty.orchestrator.Worker" ".update_scope_processing_state_db") def test_do_execute_scope_processing_with_no_usage_data( self, update_scope_processing_state_db_mock, persist_rating_data_mock, execute_measurements_rating_mock, do_collection_mock): self.worker._collector = collector.gnocchi.GnocchiCollector( period=3600, conf=tests.samples.DEFAULT_METRICS_CONF, ) do_collection_mock.return_value = None timestamp_now = tzutils.localized_now() self.worker.do_execute_scope_processing(timestamp_now) do_collection_mock.assert_has_calls([ mock.call(['cpu@#instance', 'image.size', 'ip.floating', 'network.incoming.bytes', 'network.outgoing.bytes', 'radosgw.objects.size', 'volume.size'], timestamp_now) ]) self.assertFalse(execute_measurements_rating_mock.called) self.assertFalse(persist_rating_data_mock.called) self.assertTrue(update_scope_processing_state_db_mock.called) 
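    # When the collector does return usage data, do_execute_scope_processing
    # is expected to rate it over [timestamp, timestamp + period] and push
    # the rated frames to storage; the test below checks those calls.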
@mock.patch("cloudkitty.orchestrator.Worker._do_collection") @mock.patch("cloudkitty.orchestrator.Worker.execute_measurements_rating") @mock.patch("cloudkitty.orchestrator.Worker.persist_rating_data") @mock.patch("cloudkitty.orchestrator.Worker" ".update_scope_processing_state_db") def test_do_execute_scope_processing_with_usage_data( self, update_scope_processing_state_db_mock, persist_rating_data_mock, execute_measurements_rating_mock, do_collection_mock): self.worker._collector = collector.gnocchi.GnocchiCollector( period=3600, conf=tests.samples.DEFAULT_METRICS_CONF, ) usage_data_mock = {"some_usage_data": 2} do_collection_mock.return_value = usage_data_mock execute_measurements_rating_mock_return = mock.Mock() execute_measurements_rating_mock.return_value =\ execute_measurements_rating_mock_return timestamp_now = tzutils.localized_now() self.worker.do_execute_scope_processing(timestamp_now) do_collection_mock.assert_has_calls([ mock.call(['cpu@#instance', 'image.size', 'ip.floating', 'network.incoming.bytes', 'network.outgoing.bytes', 'radosgw.objects.size', 'volume.size'], timestamp_now) ]) end_time = tzutils.add_delta( timestamp_now, datetime.timedelta(seconds=self.worker._period)) execute_measurements_rating_mock.assert_has_calls([ mock.call(end_time, timestamp_now, usage_data_mock) ]) persist_rating_data_mock.assert_has_calls([ mock.call(end_time, execute_measurements_rating_mock_return, timestamp_now) ]) self.assertTrue(update_scope_processing_state_db_mock.called) @mock.patch("cloudkitty.storage_state.StateManager" ".get_last_processed_timestamp") @mock.patch("cloudkitty.storage_state.StateManager" ".is_storage_scope_active") @mock.patch("cloudkitty.orchestrator.Worker.do_execute_scope_processing") def test_execute_worker_processing_no_next_timestamp( self, do_execute_scope_processing_mock, state_manager_is_storage_scope_active_mock, state_manager_get_stage_mock): next_timestamp_to_process_mock = mock.Mock() next_timestamp_to_process_mock.return_value = None self.worker.next_timestamp_to_process = next_timestamp_to_process_mock return_method_value = self.worker.execute_worker_processing() self.assertFalse(return_method_value) self.assertFalse(state_manager_get_stage_mock.called) self.assertFalse(state_manager_is_storage_scope_active_mock.called) self.assertFalse(do_execute_scope_processing_mock.called) self.assertTrue(next_timestamp_to_process_mock.called) @mock.patch("cloudkitty.storage_state.StateManager" ".get_last_processed_timestamp") @mock.patch("cloudkitty.storage_state.StateManager" ".is_storage_scope_active") @mock.patch("cloudkitty.orchestrator.Worker.do_execute_scope_processing") def test_execute_worker_processing_scope_not_processed_yet( self, do_execute_scope_processing_mock, state_manager_is_storage_scope_active_mock, state_manager_get_stage_mock): timestamp_now = tzutils.localized_now() next_timestamp_to_process_mock = mock.Mock() next_timestamp_to_process_mock.return_value = timestamp_now self.worker.next_timestamp_to_process = next_timestamp_to_process_mock state_manager_get_stage_mock.return_value = None return_method_value = self.worker.execute_worker_processing() self.assertTrue(return_method_value) state_manager_get_stage_mock.assert_has_calls([ mock.call(self.worker._tenant_id) ]) do_execute_scope_processing_mock.assert_has_calls([ mock.call(timestamp_now) ]) self.assertFalse(state_manager_is_storage_scope_active_mock.called) self.assertTrue(next_timestamp_to_process_mock.called) @mock.patch("cloudkitty.storage_state.StateManager" 
".get_last_processed_timestamp") @mock.patch("cloudkitty.storage_state.StateManager" ".is_storage_scope_active") @mock.patch("cloudkitty.orchestrator.Worker.do_execute_scope_processing") def test_execute_worker_processing_scope_already_processed_active( self, do_execute_scope_processing_mock, state_manager_is_storage_scope_active_mock, state_manager_get_stage_mock): timestamp_now = tzutils.localized_now() next_timestamp_to_process_mock = mock.Mock() next_timestamp_to_process_mock.return_value = timestamp_now self.worker.next_timestamp_to_process = next_timestamp_to_process_mock state_manager_get_stage_mock.return_value = mock.Mock() state_manager_is_storage_scope_active_mock.return_value = True return_method_value = self.worker.execute_worker_processing() self.assertTrue(return_method_value) state_manager_get_stage_mock.assert_has_calls([ mock.call(self.worker._tenant_id) ]) do_execute_scope_processing_mock.assert_has_calls([ mock.call(timestamp_now) ]) state_manager_is_storage_scope_active_mock.assert_has_calls([ mock.call(self.worker._tenant_id) ]) self.assertTrue(next_timestamp_to_process_mock.called) @mock.patch("cloudkitty.storage_state.StateManager" ".get_last_processed_timestamp") @mock.patch("cloudkitty.storage_state.StateManager" ".is_storage_scope_active") @mock.patch("cloudkitty.orchestrator.Worker.do_execute_scope_processing") def test_execute_worker_processing_scope_already_processed_inactive( self, do_execute_scope_processing_mock, state_manager_is_storage_scope_active_mock, state_manager_get_stage_mock): timestamp_now = tzutils.localized_now() next_timestamp_to_process_mock = mock.Mock() next_timestamp_to_process_mock.return_value = timestamp_now self.worker.next_timestamp_to_process = next_timestamp_to_process_mock state_manager_get_stage_mock.return_value = mock.Mock() state_manager_is_storage_scope_active_mock.return_value = False return_method_value = self.worker.execute_worker_processing() self.assertFalse(return_method_value) state_manager_get_stage_mock.assert_has_calls([ mock.call(self.worker._tenant_id) ]) state_manager_is_storage_scope_active_mock.assert_has_calls([ mock.call(self.worker._tenant_id) ]) self.assertTrue(next_timestamp_to_process_mock.called) self.assertFalse(do_execute_scope_processing_mock.called) @mock.patch("cloudkitty.orchestrator.Worker.execute_worker_processing") def test_run(self, execute_worker_processing_mock): execute_worker_processing_mock.side_effect = [True, True, False, True] self.worker.run() self.assertEqual(execute_worker_processing_mock.call_count, 3) def test_collect_no_data(self): metric = "metric1" timestamp_now = tzutils.localized_now() self.collector_mock.retrieve.return_value = (metric, None) expected_message = "Collector 'toString' returned no data for " \ "resource 'metric1'" expected_message = re.escape(expected_message) self.assertRaisesRegex( collector.NoDataCollected, expected_message, self.worker._collect, metric, timestamp_now) next_timestamp = tzutils.add_delta( timestamp_now, datetime.timedelta(seconds=self.worker._period)) self.collector_mock.retrieve.assert_has_calls([ mock.call(metric, timestamp_now, next_timestamp, self.worker._tenant_id)]) def test_collect_with_data(self): metric = "metric1" timestamp_now = tzutils.localized_now() usage_data = {"some_usage_data": 3} self.collector_mock.retrieve.return_value = (metric, usage_data) return_of_method = self.worker._collect(metric, timestamp_now) next_timestamp = tzutils.add_delta( timestamp_now, datetime.timedelta(seconds=self.worker._period)) 
self.collector_mock.retrieve.assert_has_calls([ mock.call(metric, timestamp_now, next_timestamp, self.worker._tenant_id)]) self.assertEqual((metric, usage_data), return_of_method) @mock.patch("cloudkitty.utils.check_time_state") def test_check_state(self, check_time_state_mock): state_mock = mock.Mock() timestamp_now = tzutils.localized_now() state_mock._state.get_last_processed_timestamp.return_value = \ timestamp_now expected_time = timestamp_now + datetime.timedelta(hours=1) check_time_state_mock.return_value = \ expected_time return_of_method = orchestrator._check_state( state_mock, 3600, self._tenant_id) self.assertEqual(expected_time, return_of_method) state_mock._state.get_last_processed_timestamp.assert_has_calls([ mock.call(self._tenant_id)]) check_time_state_mock.assert_has_calls([ mock.call(timestamp_now, 3600, 2)]) class CloudKittyReprocessorTest(tests.TestCase): def setUp(self): super(CloudKittyReprocessorTest, self).setUp() @mock.patch("cloudkitty.orchestrator.CloudKittyProcessor.__init__") @mock.patch("cloudkitty.storage_state.ReprocessingSchedulerDb") def test_generate_lock_base_name(self, reprocessing_scheduler_db_mock, cloudkitty_processor_init_mock): scope_mock = mock.Mock() scope_mock.identifier = "scope_identifier" return_generate_lock_name = orchestrator.CloudKittyReprocessor( 1).generate_lock_base_name(scope_mock) expected_lock_name = "-id=scope_identifier-" \ "start=%s-end=%s" % ( scope_mock.start_reprocess_time, scope_mock.end_reprocess_time) self.assertEqual(expected_lock_name, return_generate_lock_name) cloudkitty_processor_init_mock.assert_called_once() reprocessing_scheduler_db_mock.assert_called_once() @mock.patch("cloudkitty.orchestrator.CloudKittyProcessor.__init__") @mock.patch("cloudkitty.storage_state.ReprocessingSchedulerDb.get_all") def test_load_scopes_to_process(self, scheduler_db_mock_get_all_mock, cloudkitty_processor_init_mock): scheduler_db_mock_get_all_mock.return_value = ["teste"] reprocessor = CloudKittyReprocessorTest.create_cloudkitty_reprocessor() reprocessor.load_scopes_to_process() self.assertEqual(["teste"], reprocessor.tenants) cloudkitty_processor_init_mock.assert_called_once() scheduler_db_mock_get_all_mock.assert_called_once() @mock.patch("cloudkitty.orchestrator.CloudKittyProcessor.__init__") @mock.patch("cloudkitty.storage_state.ReprocessingSchedulerDb.get_from_db") def test_next_timestamp_to_process_processing_finished( self, scheduler_db_mock_get_from_db_mock, cloudkitty_processor_init_mock): start_time = tzutils.localized_now() scope = CloudKittyReprocessorTest.create_scope_mock(start_time) scheduler_db_mock_get_from_db_mock.return_value = None reprocessor = CloudKittyReprocessorTest.create_cloudkitty_reprocessor() next_timestamp = reprocessor._next_timestamp_to_process(scope) expected_calls = [ mock.call(identifier=scope.identifier, start_reprocess_time=scope.start_reprocess_time, end_reprocess_time=scope.end_reprocess_time)] self.assertIsNone(next_timestamp) cloudkitty_processor_init_mock.assert_called_once() scheduler_db_mock_get_from_db_mock.assert_has_calls(expected_calls) @staticmethod def create_scope_mock(start_time): scope = mock.Mock() scope.identifier = "scope_identifier" scope.start_reprocess_time = start_time scope.current_reprocess_time = None scope.end_reprocess_time = start_time + datetime.timedelta(hours=1) return scope @staticmethod def create_cloudkitty_reprocessor(): reprocessor = orchestrator.CloudKittyReprocessor(1) reprocessor._worker_id = 1 return reprocessor 
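    # Illustrative sketch (not the orchestrator implementation, and not used
    # by any test): the reprocessing tests in this class and in
    # ReprocessingWorkerTest further down encode the scheduling rule
    # reproduced here for readability.  The helper name is hypothetical; the
    # ``schedule`` attribute names mirror the scheduler mocks built by these
    # tests.
    @staticmethod
    def _sketch_next_reprocess_timestamp(schedule, period_seconds):
        if schedule.current_reprocess_time:
            # Processing already started: advance by one collect period.
            next_ts = schedule.current_reprocess_time + datetime.timedelta(
                seconds=period_seconds)
        else:
            # Nothing reprocessed yet: start at the beginning of the window.
            next_ts = schedule.start_reprocess_time
        # Once the scheduled window has been fully covered, reprocessing is
        # done and no next timestamp is returned.
        return next_ts if next_ts <= schedule.end_reprocess_time else None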
@mock.patch("cloudkitty.orchestrator.CloudKittyProcessor.__init__") @mock.patch("cloudkitty.storage_state.ReprocessingSchedulerDb.get_from_db") def test_next_timestamp_to_process( self, scheduler_db_mock_get_from_db_mock, cloudkitty_processor_init_mock): start_time = tzutils.localized_now() scope = CloudKittyReprocessorTest.create_scope_mock(start_time) scheduler_db_mock_get_from_db_mock.return_value = scope reprocessor = CloudKittyReprocessorTest.create_cloudkitty_reprocessor() next_timestamp = reprocessor._next_timestamp_to_process(scope) expected_calls = [ mock.call(identifier=scope.identifier, start_reprocess_time=scope.start_reprocess_time, end_reprocess_time=scope.end_reprocess_time)] # There is no current timestamp in the mock object. # Therefore, the next to process is the start timestamp expected_next_timestamp = start_time self.assertEqual(expected_next_timestamp, next_timestamp) cloudkitty_processor_init_mock.assert_called_once() scheduler_db_mock_get_from_db_mock.assert_has_calls(expected_calls) class CloudKittyProcessorTest(tests.TestCase): def setUp(self): super(CloudKittyProcessorTest, self).setUp() patcher_oslo_messaging_target = mock.patch("oslo_messaging.Target") self.addCleanup(patcher_oslo_messaging_target.stop) self.oslo_messaging_target_mock = patcher_oslo_messaging_target.start() patcher_messaging_get_server = mock.patch( "cloudkitty.messaging.get_server") self.addCleanup(patcher_messaging_get_server.stop) self.messaging_get_server_mock = patcher_messaging_get_server.start() patcher_driver_manager = mock.patch("stevedore.driver.DriverManager") self.addCleanup(patcher_driver_manager.stop) self.driver_manager_mock = patcher_driver_manager.start() get_collector_manager = mock.patch( "cloudkitty.collector.get_collector") self.addCleanup(get_collector_manager.stop) self.get_collector_mock = get_collector_manager.start() self.worker_id = 1 self.cloudkitty_processor = orchestrator.CloudKittyProcessor( self.worker_id) def test_init_messaging(self): server_mock = mock.Mock() self.messaging_get_server_mock.return_value = server_mock target_object_mock = mock.Mock() self.oslo_messaging_target_mock.return_value = target_object_mock self.cloudkitty_processor._init_messaging() server_mock.start.assert_called_once() self.oslo_messaging_target_mock.assert_has_calls([ mock.call(topic='cloudkitty', server=orchestrator.CONF.host, version='1.0')]) self.messaging_get_server_mock.assert_has_calls([ mock.call(target_object_mock, [ self.cloudkitty_processor._rating_endpoint, self.cloudkitty_processor._scope_endpoint])]) @mock.patch("time.sleep") @mock.patch("cloudkitty.orchestrator.CloudKittyProcessor." "load_scopes_to_process") @mock.patch("cloudkitty.orchestrator.CloudKittyProcessor." 
"process_scope") @mock.patch("cloudkitty.orchestrator.get_lock") def test_internal_run(self, get_lock_mock, process_scope_mock, load_scopes_to_process_mock, sleep_mock): lock_mock = mock.Mock() lock_mock.acquire.return_value = True get_lock_mock.return_value = ("lock_name", lock_mock) self.cloudkitty_processor.tenants = ["tenant1"] self.cloudkitty_processor.internal_run() lock_mock.acquire.assert_has_calls([mock.call(blocking=False)]) lock_mock.release.assert_called_once() get_lock_mock.assert_has_calls( [mock.call(self.cloudkitty_processor.coord, "tenant1")]) sleep_mock.assert_called_once() process_scope_mock.assert_called_once() load_scopes_to_process_mock.assert_called_once() @mock.patch("cloudkitty.orchestrator.Worker") def test_process_scope_no_next_timestamp(self, worker_class_mock): original_next_timestamp_method = \ self.cloudkitty_processor.next_timestamp_to_process next_timestamp_mock_method = mock.Mock() try: self.cloudkitty_processor.next_timestamp_to_process =\ next_timestamp_mock_method scope_mock = mock.Mock() next_timestamp_mock_method.return_value = None self.cloudkitty_processor.process_scope(scope_mock) next_timestamp_mock_method.assert_has_calls( [mock.call(scope_mock)]) self.assertFalse(worker_class_mock.called) finally: self.cloudkitty_processor.next_timestamp_to_process =\ original_next_timestamp_method @mock.patch("cloudkitty.orchestrator.Worker") def test_process_scope(self, worker_class_mock): original_next_timestamp_method =\ self.cloudkitty_processor.next_timestamp_to_process next_timestamp_mock_method = mock.Mock() worker_mock = mock.Mock() worker_class_mock.return_value = worker_mock original_worker_class = self.cloudkitty_processor.worker_class self.cloudkitty_processor.worker_class = worker_class_mock try: self.cloudkitty_processor.next_timestamp_to_process =\ next_timestamp_mock_method scope_mock = mock.Mock() next_timestamp_mock_method.return_value = tzutils.localized_now() self.cloudkitty_processor.process_scope(scope_mock) next_timestamp_mock_method.assert_has_calls( [mock.call(scope_mock)]) worker_class_mock.assert_has_calls( [mock.call(self.cloudkitty_processor.collector, self.cloudkitty_processor.storage, scope_mock, self.cloudkitty_processor._worker_id)]) worker_mock.run.assert_called_once() finally: self.cloudkitty_processor.next_timestamp_to_process =\ original_next_timestamp_method self.cloudkitty_processor.worker_class = original_worker_class def test_generate_lock_base_name(self): generated_lock_name = self.cloudkitty_processor.\ generate_lock_base_name("scope_id") self.assertEqual("scope_id", generated_lock_name) def test_load_scopes_to_process(self): fetcher_mock = mock.Mock() self.cloudkitty_processor.fetcher = fetcher_mock fetcher_mock.get_tenants.return_value = ["scope_1"] self.cloudkitty_processor.load_scopes_to_process() fetcher_mock.get_tenants.assert_called_once() self.assertEqual(["scope_1"], self.cloudkitty_processor.tenants) def test_terminate(self): coordinator_mock = mock.Mock() self.cloudkitty_processor.coord = coordinator_mock self.cloudkitty_processor.terminate() coordinator_mock.stop.assert_called_once() class ReprocessingWorkerTest(tests.TestCase): def setUp(self): super(ReprocessingWorkerTest, self).setUp() patcher_reprocessing_scheduler_db_get_from_db = mock.patch( "cloudkitty.storage_state.ReprocessingSchedulerDb.get_from_db") self.addCleanup(patcher_reprocessing_scheduler_db_get_from_db.stop) self.reprocessing_scheduler_db_get_from_db_mock =\ patcher_reprocessing_scheduler_db_get_from_db.start() 
patcher_state_manager_get_all = mock.patch( "cloudkitty.storage_state.StateManager.get_all") self.addCleanup(patcher_state_manager_get_all.stop) self.state_manager_get_all_mock = patcher_state_manager_get_all.start() self.collector_mock = mock.Mock() self.storage_mock = mock.Mock() self.scope_key_mock = "key_mock" self.worker_id = 1 self.scope_id = "scope_id1" self.scope_mock = mock.Mock() self.scope_mock.identifier = self.scope_id load_conf_manager = mock.patch("cloudkitty.utils.load_conf") self.addCleanup(load_conf_manager.stop) self.load_conf_mock = load_conf_manager.start() def to_string_scope_mock(self): return "toStringMock" self.scope_mock.__str__ = to_string_scope_mock self.scope_mock.scope_key = self.scope_key_mock self.state_manager_get_all_mock.return_value = [self.scope_mock] self.reprocessing_worker = self.create_reprocessing_worker() self.mock_scheduler = mock.Mock() self.mock_scheduler.identifier = self.scope_id self.start_schedule_mock = tzutils.localized_now() self.mock_scheduler.start_reprocess_time = self.start_schedule_mock self.mock_scheduler.current_reprocess_time = None self.mock_scheduler.end_reprocess_time =\ self.start_schedule_mock + datetime.timedelta(hours=1) def create_reprocessing_worker(self): return orchestrator.ReprocessingWorker( self.collector_mock, self.storage_mock, self.scope_mock, self.worker_id) def test_load_scope_key_scope_not_found(self): self.state_manager_get_all_mock.return_value = [] expected_message = "Scope [toStringMock] scheduled for reprocessing " \ "does not seem to exist anymore." expected_message = re.escape(expected_message) self.assertRaisesRegex(Exception, expected_message, self.reprocessing_worker.load_scope_key) self.state_manager_get_all_mock.assert_has_calls([ mock.call(self.reprocessing_worker._tenant_id)]) def test_load_scope_key_more_than_one_scope_found(self): self.state_manager_get_all_mock.return_value = [ self.scope_mock, self.scope_mock] expected_message = "Unexpected number of storage state entries " \ "found for scope [toStringMock]." 
expected_message = re.escape(expected_message) self.assertRaisesRegex(Exception, expected_message, self.reprocessing_worker.load_scope_key) self.state_manager_get_all_mock.assert_has_calls([ mock.call(self.reprocessing_worker._tenant_id)]) def test_load_scope_key(self): self.reprocessing_worker.load_scope_key() self.state_manager_get_all_mock.assert_has_calls([ mock.call(self.reprocessing_worker._tenant_id)]) self.assertEqual(self.scope_key_mock, self.reprocessing_worker.scope_key) @mock.patch("cloudkitty.orchestrator.ReprocessingWorker" ".generate_next_timestamp") def test_next_timestamp_to_process_no_db_item( self, generate_next_timestamp_mock): self.reprocessing_scheduler_db_get_from_db_mock.return_value = [] self.reprocessing_worker._next_timestamp_to_process() self.reprocessing_scheduler_db_get_from_db_mock.assert_has_calls([ mock.call( identifier=self.scope_mock.identifier, start_reprocess_time=self.scope_mock.start_reprocess_time, end_reprocess_time=self.scope_mock.end_reprocess_time)]) self.assertFalse(generate_next_timestamp_mock.called) @mock.patch("cloudkitty.orchestrator.ReprocessingWorker" ".generate_next_timestamp") def test_next_timestamp_to_process(self, generate_next_timestamp_mock): self.reprocessing_scheduler_db_get_from_db_mock.\ return_value = self.scope_mock self.reprocessing_worker._next_timestamp_to_process() self.reprocessing_scheduler_db_get_from_db_mock.assert_has_calls([ mock.call( identifier=self.scope_mock.identifier, start_reprocess_time=self.scope_mock.start_reprocess_time, end_reprocess_time=self.scope_mock.end_reprocess_time)]) generate_next_timestamp_mock.assert_has_calls([ mock.call(self.scope_mock, self.reprocessing_worker._period)]) def test_generate_next_timestamp_no_current_processing(self): next_timestamp = self.reprocessing_worker.generate_next_timestamp( self.mock_scheduler, 300) self.assertEqual(self.start_schedule_mock, next_timestamp) self.mock_scheduler.start_reprocess_time += datetime.timedelta(hours=2) next_timestamp = self.reprocessing_worker.generate_next_timestamp( self.mock_scheduler, 300) self.assertIsNone(next_timestamp) def test_generate_next_timestamp_with_current_processing(self): period = 300 self.mock_scheduler.current_reprocess_time =\ self.start_schedule_mock + datetime.timedelta(seconds=period) expected_next_time_stamp =\ self.mock_scheduler.current_reprocess_time + datetime.timedelta( seconds=period) next_timestamp = self.reprocessing_worker.generate_next_timestamp( self.mock_scheduler, period) self.assertEqual(expected_next_time_stamp, next_timestamp) self.mock_scheduler.current_reprocess_time +=\ datetime.timedelta(hours=2) next_timestamp = self.reprocessing_worker.generate_next_timestamp( self.mock_scheduler, period) self.assertIsNone(next_timestamp) @mock.patch("cloudkitty.orchestrator.Worker.do_execute_scope_processing") def test_do_execute_scope_processing( self, do_execute_scope_processing_mock_from_worker): now_timestamp = tzutils.localized_now() self.reprocessing_worker.scope.start_reprocess_time = now_timestamp self.reprocessing_worker.do_execute_scope_processing(now_timestamp) self.storage_mock.delete.assert_has_calls([ mock.call( begin=self.reprocessing_worker.scope.start_reprocess_time, end=self.reprocessing_worker.scope.end_reprocess_time, filters={ self.reprocessing_worker.scope_key: self.reprocessing_worker._tenant_id})]) do_execute_scope_processing_mock_from_worker.assert_has_calls([ mock.call(now_timestamp)]) @mock.patch("cloudkitty.storage_state.ReprocessingSchedulerDb" ".update_reprocessing_time") def 
test_update_scope_processing_state_db( self, update_reprocessing_time_mock): timestamp_now = tzutils.localized_now() self.reprocessing_worker.update_scope_processing_state_db( timestamp_now) start_time = self.reprocessing_worker.scope.start_reprocess_time end_time = self.reprocessing_worker.scope.end_reprocess_time update_reprocessing_time_mock.assert_has_calls([ mock.call( identifier=self.reprocessing_worker.scope.identifier, start_reprocess_time=start_time, end_reprocess_time=end_time, new_current_time_stamp=timestamp_now)]) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/tests/test_policy.py0000664000175000017500000001204500000000000022257 0ustar00zuulzuul00000000000000# Copyright (c) 2017 GohighSec. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import os.path from oslo_config import cfg from oslo_config import fixture as config_fixture from oslo_policy import policy as oslo_policy from cloudkitty.common import context from cloudkitty.common import policy from cloudkitty import tests from cloudkitty import utils CONF = cfg.CONF class PolicyFileTestCase(tests.TestCase): def setUp(self): super(PolicyFileTestCase, self).setUp() self.fixture = self.useFixture(config_fixture.Config(CONF)) self.context = context.RequestContext( user_id='fake', project_id='fake', roles=['member'], is_admin=False) self.target = {} self.addCleanup(policy.reset) CONF(args=[], project='cloudkitty', default_config_files=[]) def test_modified_policy_reloads(self): with utils.tempdir() as tmpdir: tmpfilename = os.path.join(tmpdir, 'policy') self.fixture.config(policy_file=tmpfilename, group='oslo_policy') rule = oslo_policy.RuleDefault('example:test', "") policy.reset() policy.init() policy._ENFORCER.register_defaults([rule]) action = "example:test" with open(tmpfilename, "w") as policyfile: policyfile.write('{"example:test": ""}') policy.authorize(self.context, action, self.target) with open(tmpfilename, "w") as policyfile: policyfile.write('{"example:test": "!"}') policy._ENFORCER.load_rules(True) self.assertRaises(policy.PolicyNotAuthorized, policy.authorize, self.context, action, self.target) class PolicyTestCase(tests.TestCase): def setUp(self): super(PolicyTestCase, self).setUp() rules = [ oslo_policy.RuleDefault("true", '@'), oslo_policy.RuleDefault("test:allowed", '@'), oslo_policy.RuleDefault("test:denied", "!"), oslo_policy.RuleDefault("test:early_and_fail", "! and @"), oslo_policy.RuleDefault("test:early_or_success", "@ or !"), oslo_policy.RuleDefault("test:lowercase_admin", "role:admin"), oslo_policy.RuleDefault("test:uppercase_admin", "role:ADMIN"), ] CONF(args=[], project='cloudkitty', default_config_files=[]) # before a policy rule can be used, its default has to be registered. 
policy.reset() policy.init() policy._ENFORCER.register_defaults(rules) self.context = context.RequestContext(user_id='fake', project_id='fake', roles=['member']) self.target = {} self.addCleanup(policy.reset) def test_enforce_nonexistent_action_throws(self): action = "test:noexist" self.assertRaises(oslo_policy.PolicyNotRegistered, policy.authorize, self.context, action, self.target) def test_enforce_bad_action_throws(self): action = "test:denied" self.assertRaises(policy.PolicyNotAuthorized, policy.authorize, self.context, action, self.target) def test_enforce_bad_action_noraise(self): action = "test:denied" self.assertRaises(policy.PolicyNotAuthorized, policy.authorize, self.context, action, self.target) def test_enforce_good_action(self): action = "test:allowed" result = policy.authorize(self.context, action, self.target) self.assertTrue(result) def test_early_AND_authorization(self): action = "test:early_and_fail" self.assertRaises(policy.PolicyNotAuthorized, policy.authorize, self.context, action, self.target) def test_early_OR_authorization(self): action = "test:early_or_success" policy.authorize(self.context, action, self.target) def test_ignore_case_role_check(self): lowercase_action = "test:lowercase_admin" uppercase_action = "test:uppercase_admin" admin_context = context.RequestContext(user_id='admin', project_id='fake', roles=['AdMiN']) policy.authorize(admin_context, lowercase_action, self.target) policy.authorize(admin_context, uppercase_action, self.target) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/tests/test_pyscripts.py0000664000175000017500000004455200000000000023030 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright 2015 Objectif Libre # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
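# A minimal example of the script contract exercised by the fixtures below
# (illustrative only, not referenced by any test case): PyScripts executes a
# rating script with a ``data`` global holding the usage of one collect
# period, and the script is expected to attach a ``rating`` entry with a
# ``price`` to every usage item.  The flat 0.01 price per unit used here is
# arbitrary; the policies actually under test are COMPLEX_POLICY1 and
# DOCUMENTATION_RATING_POLICY defined further down.
MINIMAL_EXAMPLE_POLICY = """
import decimal

for service, items in data['usage'].items():
    for item in items:
        qty = decimal.Decimal(item['vol']['qty'])
        item['rating'] = {'price': qty * decimal.Decimal('0.01')}
""".encode('utf-8')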
# import copy import decimal import hashlib from unittest import mock import zlib from oslo_utils import uuidutils from cloudkitty import dataframe from cloudkitty.rating import pyscripts from cloudkitty.rating.pyscripts.db import api from cloudkitty import tests from dateutil import parser FAKE_UUID = '6c1b8a30-797f-4b7e-ad66-9879b79059fb' CK_RESOURCES_DATA = { "period": { "begin": "2014-10-01T00:00:00", "end": "2014-10-01T01:00:00"}, "usage": { "instance_status": [ dataframe.DataPoint( "instance", 1, 0, {"availability_zone": "nova", "flavor": "m1.ultra", "image_id": "f5600101-8fa2-4864-899e-ebcb7ed6b568", "memory": "64", "name": "prod1", "project_id": "f266f30b11f246b589fd266f85eeec39", "user_id": "55b3379b949243009ee96972fbf51ed1", "vcpus": "1" }, {"farm": "prod"}), dataframe.DataPoint( "instance", 1, 0, {"availability_zone": "nova", "flavor": "m1.not_so_ultra", "image_id": "f5600101-8fa2-4864-899e-ebcb7ed6b568", "memory": "64", "name": "prod1", "project_id": "f266f30b11f246b589fd266f85eeec39", "user_id": "55b3379b949243009ee96972fbf51ed1", "vcpus": "1" }, {"farm": "prod"})], "compute": [ dataframe.DataPoint( "instance", 1, 0, {"availability_zone": "nova", "flavor": "m1.nano", "image_id": "f5600101-8fa2-4864-899e-ebcb7ed6b568", "memory": "64", "name": "prod1", "project_id": "f266f30b11f246b589fd266f85eeec39", "user_id": "55b3379b949243009ee96972fbf51ed1", "vcpus": "1" }, {"farm": "prod"}), dataframe.DataPoint( "instance", 2, 0, {"availability_zone": "nova", "flavor": "m1.tiny", "image_id": "a41fba37-2429-4f15-aa00-b5bc4bf557bf", "memory": "512", "name": "dev1", "project_id": "f266f30b11f246b589fd266f85eeec39", "user_id": "55b3379b949243009ee96972fbf51ed1", "vcpus": "1" }, {"farm": "dev"}), dataframe.DataPoint( "instance", 1, 0, {"availability_zone": "nova", "flavor": "m1.nano", "image_id": "a41fba37-2429-4f15-aa00-b5bc4bf557bf", "memory": "64", "name": "dev2", "project_id": "f266f30b11f246b589fd266f85eeec39", "user_id": "55b3379b949243009ee96972fbf51ed1", "vcpus": "1" }, {"farm": "dev"}), ] } } TEST_CODE1 = 'a = 1'.encode('utf-8') TEST_CODE1_CHECKSUM = hashlib.sha512(TEST_CODE1).hexdigest() TEST_CODE2 = 'a = 0'.encode('utf-8') TEST_CODE2_CHECKSUM = hashlib.sha512(TEST_CODE2).hexdigest() TEST_CODE3 = 'if a == 1: raise Exception()'.encode('utf-8') TEST_CODE3_CHECKSUM = hashlib.sha512(TEST_CODE3).hexdigest() COMPLEX_POLICY1 = """ import decimal usage_data = data['usage'] for service in usage_data.keys(): if service == 'compute': all_points = usage_data.get(service, []) for resource in all_points: if resource['groupby'].get('flavor') == 'm1.nano': resource['rating'] = { 'price': decimal.Decimal(2.0)} if service == 'instance_status': all_points = usage_data.get(service, []) for resource in all_points: if resource['groupby'].get('flavor') == 'm1.ultra': resource['rating'] = { 'price': decimal.Decimal( resource['groupby'].get( 'memory')) * decimal.Decimal(1.5)} """.encode('utf-8') DOCUMENTATION_RATING_POLICY = """ import decimal # Price for each flavor. These are equivalent to hashmap field mappings. flavors = { 'm1.micro': decimal.Decimal(0.65), 'm1.nano': decimal.Decimal(0.35), 'm1.large': decimal.Decimal(2.67) } # Price per MB / GB for images and volumes. These are equivalent to # hashmap service mappings. image_mb_price = decimal.Decimal(0.002) volume_gb_price = decimal.Decimal(0.35) # These functions return the price of a service usage on a collect period. # The price is always equivalent to the price per unit multiplied by # the quantity. 
def get_compute_price(item): flavor_name = item['groupby']['flavor'] if not flavor_name in flavors: return 0 else: return (decimal.Decimal(item['vol']['qty']) * flavors[flavor_name]) def get_image_price(item): if not item['vol']['qty']: return 0 else: return decimal.Decimal(item['vol']['qty']) * image_mb_price def get_volume_price(item): if not item['vol']['qty']: return 0 else: return decimal.Decimal(item['vol']['qty']) * volume_gb_price # Mapping each service to its price calculation function services = { 'compute': get_compute_price, 'volume': get_volume_price, 'image': get_image_price } def process(data): # The 'data' is a dictionary with the usage entries for each service for # each given period. usage_data = data['usage'] for service_name, service_data in usage_data.items(): # Do not calculate the price if the service has no # price calculation function if service_name in services.keys(): # A service can have several items. For example, # each running instance is an item of the compute service for item in service_data: item['rating'] = {'price': services[service_name](item)} return data # 'data' is passed as a global variable. The script is supposed to set the # 'rating' element of each item in each service data = process(data) """.encode('utf-8') class PyScriptsRatingTest(tests.TestCase): def setUp(self): super(PyScriptsRatingTest, self).setUp() self._tenant_id = 'f266f30b11f246b589fd266f85eeec39' self._db_api = pyscripts.PyScripts.db_api self._db_api.get_migration().upgrade('head') self._pyscripts = pyscripts.PyScripts(self._tenant_id) self.dataframe_for_tests = dataframe.DataFrame( parser.parse(CK_RESOURCES_DATA['period']['begin']), parser.parse(CK_RESOURCES_DATA['period']['end']), CK_RESOURCES_DATA['usage']) # Scripts tests @mock.patch.object(uuidutils, 'generate_uuid', return_value=FAKE_UUID) def test_create_script(self, patch_generate_uuid): self._db_api.create_script('policy1', TEST_CODE1) scripts = self._db_api.list_scripts() self.assertEqual([FAKE_UUID], scripts) patch_generate_uuid.assert_called_once_with() def test_create_duplicate_script(self): self._db_api.create_script('policy1', TEST_CODE1) self.assertRaises(api.ScriptAlreadyExists, self._db_api.create_script, 'policy1', TEST_CODE1) def test_get_script_by_uuid(self): expected = self._db_api.create_script('policy1', TEST_CODE1) actual = self._db_api.get_script(uuid=expected.script_id) self.assertEqual(expected.data, actual.data) def test_get_script_by_name(self): expected = self._db_api.create_script('policy1', TEST_CODE1) actual = self._db_api.get_script(expected.name) self.assertEqual(expected.data, actual.data) def test_get_script_without_parameters(self): self._db_api.create_script('policy1', TEST_CODE1) self.assertRaises( ValueError, self._db_api.get_script) def test_delete_script_by_name(self): self._db_api.create_script('policy1', TEST_CODE1) self._db_api.delete_script('policy1') scripts = self._db_api.list_scripts() self.assertEqual([], scripts) def test_delete_script_by_uuid(self): script_db = self._db_api.create_script('policy1', TEST_CODE1) self._db_api.delete_script(uuid=script_db.script_id) scripts = self._db_api.list_scripts() self.assertEqual([], scripts) def test_delete_script_without_parameters(self): self._db_api.create_script('policy1', TEST_CODE1) self.assertRaises( ValueError, self._db_api.delete_script) def test_delete_unknown_script_by_name(self): self.assertRaises(api.NoSuchScript, self._db_api.delete_script, 'dummy') def test_delete_unknown_script_by_uuid(self): self.assertRaises( 
api.NoSuchScript, self._db_api.delete_script, uuid='6e8de9fc-ee17-4b60-b81a-c9320e994e76') def test_update_script(self): script_db = self._db_api.create_script('policy1', TEST_CODE1) self._db_api.update_script(script_db.script_id, data=TEST_CODE2) actual = self._db_api.get_script(uuid=script_db.script_id) self.assertEqual(TEST_CODE2, actual.data) def test_update_script_uuid_disabled(self): expected = self._db_api.create_script('policy1', TEST_CODE1) self._db_api.update_script(expected.script_id, data=TEST_CODE2, script_id='42') actual = self._db_api.get_script(uuid=expected.script_id) self.assertEqual(expected.script_id, actual.script_id) def test_update_script_unknown_attribute(self): expected = self._db_api.create_script('policy1', TEST_CODE1) self.assertRaises( ValueError, self._db_api.update_script, expected.script_id, nonexistent=1) def test_empty_script_update(self): expected = self._db_api.create_script('policy1', TEST_CODE1) self.assertRaises( ValueError, self._db_api.update_script, expected.script_id) # Storage tests def test_compressed_data(self): data = TEST_CODE1 self._db_api.create_script('policy1', data) script = self._db_api.get_script('policy1') expected = zlib.compress(data) self.assertEqual(expected, script._data) def test_on_the_fly_decompression(self): data = TEST_CODE1 self._db_api.create_script('policy1', data) script = self._db_api.get_script('policy1') self.assertEqual(data, script.data) def test_script_repr(self): script_db = self._db_api.create_script('policy1', TEST_CODE1) self.assertEqual( ''.format( uuid=script_db.script_id, name=script_db.name), str(script_db)) # Checksum tests def test_validate_checksum(self): self._db_api.create_script('policy1', TEST_CODE1) script = self._db_api.get_script('policy1') self.assertEqual(TEST_CODE1_CHECKSUM, script.checksum) def test_read_only_checksum(self): self._db_api.create_script('policy1', TEST_CODE1) script = self._db_api.get_script('policy1') self.assertRaises( AttributeError, setattr, script, 'checksum', 'cf83e1357eefb8bdf1542850d66d8007d620e4050b5715dc83f4a921d36ce9ce4' '7d0d13c5d85f2b0ff8318d2877eec2f63b931bd47417a81a538327af927da3e') def test_update_checksum(self): self._db_api.create_script('policy1', TEST_CODE1) script = self._db_api.get_script('policy1') script = self._db_api.update_script(script.script_id, data=TEST_CODE2) self.assertEqual(TEST_CODE2_CHECKSUM, script.checksum) # Code exec tests def test_load_scripts(self): policy1_db = self._db_api.create_script('policy1', TEST_CODE1) policy2_db = self._db_api.create_script('policy2', TEST_CODE2) self._pyscripts.load_scripts_in_memory() self.assertIn(policy1_db.script_id, self._pyscripts._scripts) self.assertIn(policy2_db.script_id, self._pyscripts._scripts) def test_purge_old_scripts(self): policy1_db = self._db_api.create_script('policy1', TEST_CODE1) policy2_db = self._db_api.create_script('policy2', TEST_CODE2) self._pyscripts.reload_config() self.assertIn(policy1_db.script_id, self._pyscripts._scripts) self.assertIn(policy2_db.script_id, self._pyscripts._scripts) self._db_api.delete_script(uuid=policy1_db.script_id) self._pyscripts.reload_config() self.assertNotIn(policy1_db.script_id, self._pyscripts._scripts) self.assertIn(policy2_db.script_id, self._pyscripts._scripts) @mock.patch.object(uuidutils, 'generate_uuid', return_value=FAKE_UUID) def test_valid_script_data_loaded(self, patch_generate_uuid): self._db_api.create_script('policy1', TEST_CODE1) self._pyscripts.load_scripts_in_memory() expected = { FAKE_UUID: { 'code': compile( TEST_CODE1, 
''.format(name='policy1'), 'exec'), 'checksum': TEST_CODE1_CHECKSUM, 'name': 'policy1' }} self.assertEqual(expected, self._pyscripts._scripts) context = {'a': 0} exec(self._pyscripts._scripts[FAKE_UUID]['code'], context) # nosec self.assertEqual(1, context['a']) def test_update_script_on_checksum_change(self): policy_db = self._db_api.create_script('policy1', TEST_CODE1) self._pyscripts.reload_config() self._db_api.update_script(policy_db.script_id, data=TEST_CODE2) self._pyscripts.reload_config() self.assertEqual( TEST_CODE2_CHECKSUM, self._pyscripts._scripts[policy_db.script_id]['checksum']) def test_exec_code_isolation(self): self._db_api.create_script('policy1', TEST_CODE1) self._db_api.create_script('policy2', TEST_CODE3) self._pyscripts.reload_config() self.assertEqual(2, len(self._pyscripts._scripts)) self.assertRaises(NameError, self._pyscripts.process, self.dataframe_for_tests) # Processing def test_process_rating(self): self._db_api.create_script('policy1', COMPLEX_POLICY1) self._pyscripts.reload_config() data_output = self._pyscripts.process(self.dataframe_for_tests) self.assertIsInstance(data_output, dataframe.DataFrame) dict_output = data_output.as_dict() for point in dict_output['usage']['compute']: if point['groupby'].get('flavor') == 'm1.nano': self.assertEqual( decimal.Decimal('2'), point['rating']['price']) else: self.assertEqual( decimal.Decimal('0'), point['rating']['price']) for point in dict_output['usage']['instance_status']: if point['groupby'].get('flavor') == 'm1.ultra': self.assertEqual( decimal.Decimal('96'), point['rating']['price']) else: self.assertEqual( decimal.Decimal('0'), point['rating']['price']) # Processing def test_process_rating_with_documentation_rules(self): self._db_api.create_script('policy1', DOCUMENTATION_RATING_POLICY) self._pyscripts.reload_config() dataframe_for_tests = copy.deepcopy(self.dataframe_for_tests) dataframe_for_tests.add_point( dataframe.DataPoint("GB", 5, 0, {"tag": "A"}, {}), "image") dataframe_for_tests.add_point( dataframe.DataPoint("GB", 15, 0, {"tag": "B"}, {}), "image") dataframe_for_tests.add_point( dataframe.DataPoint("GB", 500, 0, {"tag": "D"}, {}), "volume") dataframe_for_tests.add_point( dataframe.DataPoint("GB", 80, 0, {"tag": "E"}, {}), "volume") data_output = self._pyscripts.process(dataframe_for_tests) self.assertIsInstance(data_output, dataframe.DataFrame) dict_output = data_output.as_dict() for point in dict_output['usage']['compute']: if point['groupby'].get('flavor') == 'm1.nano': self.assertEqual( decimal.Decimal('0.3499999999999999777955395075'), point['rating']['price']) else: self.assertEqual( decimal.Decimal('0'), point['rating']['price']) for point in dict_output['usage']['instance_status']: if point['groupby'].get('flavor') == 'm1.ultra': self.assertEqual( decimal.Decimal('0'), point['rating']['price']) else: self.assertEqual( decimal.Decimal('0'), point['rating']['price']) for point in dict_output['usage']['image']: if point['groupby'].get('tag') == 'A': self.assertEqual( decimal.Decimal('0.01000000000000000020816681712'), point['rating']['price']) else: self.assertEqual( decimal.Decimal('0.03000000000000000062450045135'), point['rating']['price']) for point in dict_output['usage']['volume']: if point['groupby'].get('tag') == 'D': self.assertEqual( decimal.Decimal('174.9999999999999888977697537'), point['rating']['price']) else: self.assertEqual( decimal.Decimal('27.99999999999999822364316060'), point['rating']['price']) ././@PaxHeader0000000000000000000000000000002600000000000011453 
xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/tests/test_rating.py0000664000175000017500000001022100000000000022236 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright 2015 Objectif Libre # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # from unittest import mock from cloudkitty.db import api as ck_db_api from cloudkitty import tests class FakeRPCClient(object): def __init__(self, namespace=None, fanout=False): self._queue = [] self._namespace = namespace self._fanout = fanout def prepare(self, namespace=None, fanout=False): self._namespace = namespace self._fanout = fanout return self def cast(self, ctx, data, **kwargs): cast_data = {'ctx': ctx, 'data': data} cast_data.update(kwargs) self._queue.append(cast_data) class RatingTest(tests.TestCase): def setUp(self): super(RatingTest, self).setUp() self._tenant_id = 'f266f30b11f246b589fd266f85eeec39' self._module = tests.FakeRatingModule(self._tenant_id) self._fake_rpc = FakeRPCClient() def test_get_module_info(self): mod_infos = self._module.module_info expected_infos = {'name': 'fake', 'description': 'fake rating module', 'hot_config': False, 'enabled': False, 'priority': 1} self.assertEqual(expected_infos, mod_infos) def test_set_state_triggers_rpc(self): with mock.patch('cloudkitty.messaging.get_client') as rpcmock: rpcmock.return_value = self._fake_rpc self._module.set_state(True) self.assertTrue(self._fake_rpc._fanout) self.assertEqual('rating', self._fake_rpc._namespace) self.assertEqual(1, len(self._fake_rpc._queue)) rpc_data = self._fake_rpc._queue[0] expected_data = {'ctx': {}, 'data': 'enable_module', 'name': 'fake'} self.assertEqual(expected_data, rpc_data) self._module.set_state(False) self.assertEqual(2, len(self._fake_rpc._queue)) rpc_data = self._fake_rpc._queue[1] expected_data['data'] = 'disable_module' self.assertEqual(expected_data, rpc_data) def test_enable_module(self): with mock.patch('cloudkitty.messaging.get_client') as rpcmock: rpcmock.return_value = self._fake_rpc self._module.set_state(True) db_api = ck_db_api.get_instance() module_db = db_api.get_module_info() self.assertTrue(module_db.get_state('fake')) def test_disable_module(self): with mock.patch('cloudkitty.messaging.get_client') as rpcmock: rpcmock.return_value = self._fake_rpc self._module.set_state(False) db_api = ck_db_api.get_instance() module_db = db_api.get_module_info() self.assertFalse(module_db.get_state('fake')) def test_enabled_property(self): db_api = ck_db_api.get_instance() module_db = db_api.get_module_info() module_db.set_state('fake', True) self.assertTrue(self._module.enabled) module_db.set_state('fake', False) self.assertFalse(self._module.enabled) def test_get_default_priority(self): self.assertEqual(1, self._module.priority) def test_set_priority(self): self._module.set_priority(10) db_api = ck_db_api.get_instance() module_db = db_api.get_module_info() self.assertEqual(10, module_db.get_priority('fake')) def test_update_priority(self): old_prio = self._module.priority self._module.set_priority(10) new_prio = 
self._module.priority self.assertNotEqual(old_prio, new_prio) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/tests/test_state.py0000664000175000017500000000267000000000000022103 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright 2014 Objectif Libre # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # import datetime from cloudkitty import state from cloudkitty import tests class DBStateManagerTest(tests.TestCase): def setUp(self): super(DBStateManagerTest, self).setUp() self.sm = state.DBStateManager('testuser', 'osrtf') def test_gen_name(self): name = self.sm._gen_name('testuser', 'osrtf') self.assertEqual(name, 'testuser_osrtf') def test_state_access(self): now = datetime.datetime.utcnow() self.sm.set_state(now) result = self.sm.get_state() self.assertEqual(result, str(now)) def test_metadata_access(self): metadata = {'foo': 'bar'} now = datetime.datetime.utcnow() self.sm.set_state(now) self.sm.set_metadata(metadata) result = self.sm.get_metadata() self.assertEqual(result, metadata) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/tests/test_storage_state.py0000664000175000017500000001321400000000000023623 0ustar00zuulzuul00000000000000# Copyright 2019 Objectif Libre # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # from collections import abc from datetime import datetime import itertools from unittest import mock from cloudkitty import storage_state from cloudkitty import tests class StateManagerTest(tests.TestCase): class QueryMock(mock.Mock): """Mocks an SQLalchemy query. ``filter()`` can be called any number of times, followed by first(), which will cycle over the ``output`` parameter passed to the constructor. The ``first_called`` attribute tracks how many times first() is called. 
""" def __init__(self, output, *args, **kwargs): super(StateManagerTest.QueryMock, self).__init__(*args, **kwargs) self.first_called = 0 if not isinstance(output, abc.Iterable): output = (output, ) self.output = itertools.cycle(output) def filter(self, *args, **kwargs): return self def first(self): self.first_called += 1 return next(self.output) def setUp(self): super(StateManagerTest, self).setUp() self._state = storage_state.StateManager() self.conf.set_override('backend', 'fetcher1', 'fetcher') self.conf.set_override('collector', 'collector1', 'collect') self.conf.set_override('scope_key', 'scope_key', 'collect') def _get_query_mock(self, *args): output = self.QueryMock(args) return output, mock.Mock(return_value=output) @staticmethod def _get_r_mock(scope_key, collector, fetcher, last_processed_timestamp): r_mock = mock.Mock() r_mock.scope_key = scope_key r_mock.collector = collector r_mock.fetcher = fetcher r_mock.last_processed_timestamp = last_processed_timestamp return r_mock def _test_x_state_does_update_columns(self, func): r_mock = self._get_r_mock(None, None, None, datetime(2042, 1, 1)) output, query_mock = self._get_query_mock(None, r_mock) with mock.patch('oslo_db.sqlalchemy.utils.model_query', new=query_mock): func('fake_identifier') self.assertEqual(output.first_called, 2) self.assertEqual(r_mock.collector, 'collector1') self.assertEqual(r_mock.scope_key, 'scope_key') self.assertEqual(r_mock.fetcher, 'fetcher1') def test_get_last_processed_timestamp_does_update_columns(self): self._test_x_state_does_update_columns( self._state.get_last_processed_timestamp) def test_set_state_does_update_columns(self): with mock.patch('cloudkitty.db.session_for_write'): self._test_x_state_does_update_columns( lambda x: self._state.set_state(x, datetime(2042, 1, 1))) def _test_x_state_no_column_update(self, func): r_mock = self._get_r_mock( 'scope_key', 'collector1', 'fetcher1', datetime(2042, 1, 1)) output, query_mock = self._get_query_mock(r_mock) with mock.patch('oslo_db.sqlalchemy.utils.model_query', new=query_mock): func('fake_identifier') self.assertEqual(output.first_called, 1) self.assertEqual(r_mock.collector, 'collector1') self.assertEqual(r_mock.scope_key, 'scope_key') self.assertEqual(r_mock.fetcher, 'fetcher1') def test_get_last_processed_timestamp_no_column_update(self): self._test_x_state_no_column_update( self._state.get_last_processed_timestamp) def test_set_state_no_column_update(self): with mock.patch('cloudkitty.db.session_for_write'): self._test_x_state_no_column_update( lambda x: self._state.set_state(x, datetime(2042, 1, 1))) def test_set_state_does_not_duplicate_entries(self): state = datetime(2042, 1, 1) _, query_mock = self._get_query_mock( self._get_r_mock('a', 'b', 'c', state)) with mock.patch( 'oslo_db.sqlalchemy.utils.model_query', new=query_mock), mock.patch( 'cloudkitty.db.session_for_write') as sm: sm.return_value.__enter__.return_value = session_mock = \ mock.MagicMock() self._state.set_state('fake_identifier', state) session_mock.commit.assert_not_called() session_mock.add.assert_not_called() def test_set_state_does_update_state(self): r_mock = self._get_r_mock('a', 'b', 'c', datetime(2000, 1, 1)) _, query_mock = self._get_query_mock(r_mock) new_state = datetime(2042, 1, 1) with mock.patch( 'oslo_db.sqlalchemy.utils.model_query', new=query_mock), mock.patch( 'cloudkitty.db.session_for_write') as sm: sm.return_value.__enter__.return_value = session_mock = \ mock.MagicMock() self.assertNotEqual(r_mock.state, new_state) self._state.set_state('fake_identifier', 
new_state) self.assertEqual(r_mock.last_processed_timestamp, new_state) session_mock.commit.assert_called_once() session_mock.add.assert_not_called() ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/tests/utils.py0000664000175000017500000000512000000000000021055 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright 2018 Objectif Libre # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # import copy import random from oslo_utils import uuidutils from cloudkitty import dataframe from cloudkitty.tests import samples def generate_v2_storage_data(min_length=10, nb_projects=2, project_ids=None, start=None, end=None): if not project_ids: project_ids = [uuidutils.generate_uuid() for i in range(nb_projects)] elif not isinstance(project_ids, list): project_ids = [project_ids] df = dataframe.DataFrame(start=start, end=end) for metric_name, sample in samples.V2_STORAGE_SAMPLE.items(): datapoints = [] for project_id in project_ids: data = [copy.deepcopy(sample) for i in range(min_length + random.randint(1, 10))] first_group = data[:round(len(data)/2)] second_group = data[round(len(data)/2):] for elem in first_group: elem['groupby']['year'] = 2022 elem['groupby']['week_of_the_year'] = 1 elem['groupby']['day_of_the_year'] = 1 elem['groupby']['month'] = 10 for elem in second_group: elem['groupby']['year'] = 2023 elem['groupby']['week_of_the_year'] = 2 elem['groupby']['day_of_the_year'] = 2 elem['groupby']['month'] = 12 data[0]['groupby']['year'] = 2021 for elem in data: elem['groupby']['id'] = uuidutils.generate_uuid() elem['groupby']['project_id'] = project_id datapoints += [dataframe.DataPoint( elem['vol']['unit'], elem['vol']['qty'], elem['rating']['price'], elem['groupby'], elem['metadata'], ) for elem in data] df.add_points(datapoints, metric_name) return df def load_conf(*args): return samples.DEFAULT_METRICS_CONF ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1727866639.2674868 cloudkitty-21.0.0/cloudkitty/tests/utils_tests/0000775000175000017500000000000000000000000021727 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/tests/utils_tests/__init__.py0000664000175000017500000000000000000000000024026 0ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/tests/utils_tests/test_json.py0000664000175000017500000000214300000000000024311 0ustar00zuulzuul00000000000000# Copyright 2019 Objectif Libre # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # import datetime import decimal from dateutil import tz from cloudkitty import tests from cloudkitty.utils import json class JSONEncoderTest(tests.TestCase): def test_encode_decimal(self): obj = {'nb': decimal.Decimal(42)} self.assertEqual(json.dumps(obj), '{"nb": 42.0}') def test_encode_datetime(self): obj = {'date': datetime.datetime(2019, 1, 1, tzinfo=tz.tzutc())} self.assertEqual(json.dumps(obj), '{"date": "2019-01-01T00:00:00+00:00"}') ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/tests/utils_tests/test_tz.py0000664000175000017500000001354200000000000024002 0ustar00zuulzuul00000000000000# Copyright 2019 Objectif Libre # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # import datetime import unittest from unittest import mock from dateutil import tz from oslo_utils import timeutils from cloudkitty import utils from cloudkitty.utils import tz as tzutils class TestTZUtils(unittest.TestCase): def setUp(self): self.local_now = tzutils.localized_now() self.naive_now = utils.utcnow().replace(microsecond=0) def test_localized_now(self): self.assertEqual( self.local_now.astimezone(tz.tzutc()).replace(tzinfo=None), self.naive_now) self.assertIsNotNone(self.local_now.tzinfo) def test_local_to_utc_naive(self): naive_local = tzutils.local_to_utc(self.local_now, naive=True) naive_naive = tzutils.local_to_utc(self.naive_now, naive=True) self.assertIsNone(naive_local.tzinfo) self.assertIsNone(naive_naive.tzinfo) self.assertEqual(naive_local, naive_naive) def test_local_to_utc_not_naive(self): local = tzutils.local_to_utc(self.local_now) naive = tzutils.local_to_utc(self.naive_now) self.assertIsNotNone(local.tzinfo) self.assertIsNotNone(naive.tzinfo) self.assertEqual(local, naive) def test_utc_to_local(self): self.assertEqual(tzutils.utc_to_local(self.naive_now), self.local_now) def test_dt_from_iso(self): tester = '2019-06-06T16:30:54+02:00' tester_utc = '2019-06-06T14:30:54+00:00' dt = tzutils.dt_from_iso(tester) self.assertIsNotNone(dt.tzinfo) self.assertEqual(tzutils.dt_from_iso(tester, as_utc=True).isoformat(), tester_utc) def _test_add_substract_delta(self, obj, tzone): delta = datetime.timedelta(seconds=3600) naive = obj.astimezone(tz.tzutc()).replace(tzinfo=None) self.assertEqual( tzutils.add_delta(obj, delta).astimezone(tzone), (naive + delta).replace(tzinfo=tz.tzutc()).astimezone(tzone), ) self.assertEqual( tzutils.substract_delta(obj, delta).astimezone(tzone), (naive - delta).replace(tzinfo=tz.tzutc()).astimezone(tzone), ) def test_add_substract_delta_summertime(self): tzone = tz.gettz('Europe/Paris') obj = 
datetime.datetime(2019, 3, 31, 1, tzinfo=tzone) self._test_add_substract_delta(obj, tzone) def test_add_substract_delta(self): tzone = tz.gettz('Europe/Paris') obj = datetime.datetime(2019, 1, 1, tzinfo=tzone) self._test_add_substract_delta(obj, tzone) def test_get_month_start_no_arg(self): naive_utc_now = timeutils.utcnow() naive_month_start = datetime.datetime( naive_utc_now.year, naive_utc_now.month, 1) month_start = tzutils.get_month_start() self.assertIsNotNone(month_start.tzinfo) self.assertEqual( naive_month_start, month_start.replace(tzinfo=None)) def test_get_month_start_with_arg(self): param = datetime.datetime(2019, 1, 3, 4, 5) month_start = tzutils.get_month_start(param) self.assertIsNotNone(month_start.tzinfo) self.assertEqual(month_start.replace(tzinfo=None), datetime.datetime(2019, 1, 1)) def test_get_month_start_with_arg_naive(self): param = datetime.datetime(2019, 1, 3, 4, 5) month_start = tzutils.get_month_start(param, naive=True) self.assertIsNone(month_start.tzinfo) self.assertEqual(month_start, datetime.datetime(2019, 1, 1)) def test_diff_seconds_positive_arg_naive_objects(self): one = datetime.datetime(2019, 1, 1, 1, 1, 30) two = datetime.datetime(2019, 1, 1, 1, 1) self.assertEqual(tzutils.diff_seconds(one, two), 30) def test_diff_seconds_negative_arg_naive_objects(self): one = datetime.datetime(2019, 1, 1, 1, 1, 30) two = datetime.datetime(2019, 1, 1, 1, 1) self.assertEqual(tzutils.diff_seconds(two, one), 30) def test_diff_seconds_positive_arg_aware_objects(self): one = datetime.datetime(2019, 1, 1, 1, 1, 30, tzinfo=tz.tzutc()) two = datetime.datetime(2019, 1, 1, 1, 1, tzinfo=tz.tzutc()) self.assertEqual(tzutils.diff_seconds(one, two), 30) def test_diff_seconds_negative_arg_aware_objects(self): one = datetime.datetime(2019, 1, 1, 1, 1, 30, tzinfo=tz.tzutc()) two = datetime.datetime(2019, 1, 1, 1, 1, tzinfo=tz.tzutc()) self.assertEqual(tzutils.diff_seconds(two, one), 30) def test_diff_seconds_negative_arg_aware_objects_on_summer_change(self): one = datetime.datetime(2019, 3, 31, 1, tzinfo=tz.gettz('Europe/Paris')) two = datetime.datetime(2019, 3, 31, 3, tzinfo=tz.gettz('Europe/Paris')) self.assertEqual(tzutils.diff_seconds(two, one), 3600) def test_cloudkitty_dt_from_ts_as_utc(self): ts = 1569902400 dt = datetime.datetime(2019, 10, 1, 4, tzinfo=tz.tzutc()) self.assertEqual(dt, tzutils.dt_from_ts(ts, as_utc=True)) def test_cloudkitty_dt_from_ts_local_tz(self): ts = 1569902400 timezone = tz.gettz('Europe/Paris') dt = datetime.datetime(2019, 10, 1, 6, tzinfo=timezone) with mock.patch.object(tzutils, '_LOCAL_TZ', new=timezone): self.assertEqual(dt, tzutils.dt_from_ts(ts)) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/tests/utils_tests/test_utils.py0000664000175000017500000001563600000000000024513 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright 2014 Objectif Libre # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
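# Illustrative sketch of the conversion rule pinned down by ConvertUnitTest
# below (a standalone re-implementation for readability, not the cloudkitty
# helper itself): operands may be ints, floats, Decimals or fraction strings
# such as '2/3', and the converted value is ``qty * factor + offset``.  The
# function name is hypothetical and unused by the tests.
import decimal
import fractions


def _example_convert_unit(qty, factor=1, offset=0):
    def _to_decimal(value):
        if isinstance(value, str) and '/' in value:
            value = fractions.Fraction(value)
        if isinstance(value, fractions.Fraction):
            value = value.numerator / value.denominator
        if isinstance(value, decimal.Decimal):
            return value
        return decimal.Decimal(value)

    return _to_decimal(qty) * _to_decimal(factor) + _to_decimal(offset)


# For instance, mirroring test_str_str_str below:
#     _example_convert_unit('1/2', '1/2', '1/2')
#     == decimal.Decimal(0.5 * 0.5 + 0.5)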
# import datetime import decimal import fractions import itertools import unittest from unittest import mock from oslo_utils import timeutils from cloudkitty import utils as ck_utils def iso2dt(iso_str): return timeutils.parse_isotime(iso_str) class UtilsTimeCalculationsTest(unittest.TestCase): def setUp(self): self.date_ts = 1416219015 self.date_iso = '2014-11-17T10:10:15Z' self.date_params = {'year': 2014, 'month': 11, 'day': 17, 'hour': 10, 'minute': 10, 'second': 15} self.date_tz_params = {'year': 2014, 'month': 10, 'day': 26, 'hour': 2, 'minute': 00, 'second': 00} def test_dt2ts(self): date = datetime.datetime(**self.date_params) trans_ts = ck_utils.dt2ts(date) self.assertEqual(self.date_ts, trans_ts) def test_iso2dt(self): date = datetime.datetime(**self.date_params) trans_dt = ck_utils.iso2dt(self.date_iso) self.assertEqual(date, trans_dt) def test_ts2iso(self): trans_iso = ck_utils.ts2iso(self.date_ts) self.assertEqual(self.date_iso, trans_iso) def test_dt2iso(self): date = datetime.datetime(**self.date_params) trans_iso = ck_utils.dt2iso(date) self.assertEqual(self.date_iso, trans_iso) @mock.patch.object(ck_utils, 'utcnow', return_value=iso2dt('2014-01-31T00:00:00Z')) def test_month_start_without_dt(self, patch_utcnow_mock): date = datetime.datetime(2014, 1, 1) trans_dt = ck_utils.get_month_start() self.assertEqual(date, trans_dt) patch_utcnow_mock.assert_called_once_with() @mock.patch.object(ck_utils, 'utcnow', return_value=iso2dt('2014-01-15T00:00:00Z')) def test_month_end_without_dt(self, patch_utcnow_mock): date = datetime.datetime(2014, 1, 31) trans_dt = ck_utils.get_month_end() self.assertEqual(date, trans_dt) patch_utcnow_mock.assert_called_once_with() @mock.patch.object(ck_utils, 'utcnow', return_value=iso2dt('2014-01-31T00:00:00Z')) def test_get_last_month_without_dt(self, patch_utcnow_mock): date = datetime.datetime(2013, 12, 1) trans_dt = ck_utils.get_last_month() self.assertEqual(date, trans_dt) patch_utcnow_mock.assert_called_once_with() @mock.patch.object(ck_utils, 'utcnow', return_value=iso2dt('2014-01-31T00:00:00Z')) def test_get_next_month_without_dt(self, patch_utcnow_mock): date = datetime.datetime(2014, 2, 1) trans_dt = ck_utils.get_next_month() self.assertEqual(date, trans_dt) patch_utcnow_mock.assert_called_once_with() def test_get_last_month_leap(self): base_date = datetime.datetime(2016, 3, 31) date = datetime.datetime(2016, 2, 1) trans_dt = ck_utils.get_last_month(base_date) self.assertEqual(date, trans_dt) def test_get_next_month_leap(self): base_date = datetime.datetime(2016, 1, 31) date = datetime.datetime(2016, 2, 1) trans_dt = ck_utils.get_next_month(base_date) self.assertEqual(date, trans_dt) def test_add_month_leap(self): base_date = datetime.datetime(2016, 1, 31) date = datetime.datetime(2016, 3, 3) trans_dt = ck_utils.add_month(base_date, False) self.assertEqual(date, trans_dt) def test_add_month_keep_leap(self): base_date = datetime.datetime(2016, 1, 31) date = datetime.datetime(2016, 2, 29) trans_dt = ck_utils.add_month(base_date) self.assertEqual(date, trans_dt) def test_sub_month_leap(self): base_date = datetime.datetime(2016, 3, 31) date = datetime.datetime(2016, 3, 3) trans_dt = ck_utils.sub_month(base_date, False) self.assertEqual(date, trans_dt) def test_sub_month_keep_leap(self): base_date = datetime.datetime(2016, 3, 31) date = datetime.datetime(2016, 2, 29) trans_dt = ck_utils.sub_month(base_date) self.assertEqual(date, trans_dt) def test_load_timestamp(self): calc_dt = ck_utils.iso2dt(self.date_iso) check_dt = 
ck_utils.ts2dt(self.date_ts) self.assertEqual(calc_dt, check_dt) class ConvertUnitTest(unittest.TestCase): """Class testing the convert_unit and num2decimal function""" possible_args = [ None, # Use default arg '2/3', decimal.Decimal(1.23), '1.23', 2, '2', 2.3, ] def test_arg_types(self): """Test function with several arg combinations of different types""" for fac, off in itertools.product(self.possible_args, repeat=2): factor = fac if fac else 1 offset = off if off else 0 ck_utils.convert_unit(10, factor, offset) def test_str_str_str(self): result = ck_utils.convert_unit('1/2', '1/2', '1/2') self.assertEqual(result, decimal.Decimal(0.5 * 0.5 + 0.5)) def test_str_float_float(self): result = ck_utils.convert_unit('1/2', 0.5, 0.5) self.assertEqual(result, decimal.Decimal(0.5 * 0.5 + 0.5)) def test_convert_str_float(self): result = ck_utils.num2decimal('2.0') self.assertEqual(result, decimal.Decimal(2.0)) def test_convert_str_int(self): result = ck_utils.num2decimal('2') self.assertEqual(result, decimal.Decimal(2)) def test_convert_str_fraction(self): result = ck_utils.num2decimal('2/3') self.assertEqual(result, decimal.Decimal(2.0 / 3)) def test_convert_fraction(self): result = ck_utils.num2decimal(fractions.Fraction(1, 2)) self.assertEqual(result, decimal.Decimal(1.0 / 2)) def test_convert_float(self): result = ck_utils.num2decimal(0.5) self.assertEqual(result, decimal.Decimal(0.5)) def test_convert_int(self): result = ck_utils.num2decimal(2) self.assertEqual(result, decimal.Decimal(2)) def test_convert_decimal(self): result = ck_utils.num2decimal(decimal.Decimal(2)) self.assertEqual(result, decimal.Decimal(2)) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/tests/utils_tests/test_validation.py0000664000175000017500000000734500000000000025503 0ustar00zuulzuul00000000000000# Copyright 2019 Objectif Libre # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
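# NOTE: Hedged usage sketch (not part of this test module) of the helpers
# exercised above; ck_utils.num2decimal and ck_utils.convert_unit are defined
# in cloudkitty/utils/__init__.py, and the expected values follow directly
# from those definitions:
#
#     import decimal
#     from cloudkitty import utils as ck_utils
#
#     # Fraction strings, plain numbers and Decimals are all accepted:
#     assert ck_utils.num2decimal('2/3') == decimal.Decimal(2.0 / 3)
#     # convert_unit(value, factor, offset) == value * factor + offset,
#     # computed on decimal.Decimal operands:
#     assert ck_utils.convert_unit('1/2', 0.5, 0.5) == \
#         decimal.Decimal(0.5 * 0.5 + 0.5)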
# import unittest import voluptuous.error from cloudkitty.utils import validation as validation_utils class DictTypeValidatorTest(unittest.TestCase): def test_dictvalidator_valid_dict_with_cast(self): validator = validation_utils.DictTypeValidator(str, str) self.assertEqual(validator({'a': '1', 'b': 2}), {'a': '1', 'b': '2'}) def test_dictvalidator_valid_dict_without_cast(self): validator = validation_utils.DictTypeValidator(str, str, cast=False) self.assertEqual(validator({'a': '1', 'b': '2'}), {'a': '1', 'b': '2'}) def test_dictvalidator_invalid_dict_without_cast(self): validator = validation_utils.DictTypeValidator(str, str, cast=False) self.assertRaises( voluptuous.error.Invalid, validator, {'a': '1', 'b': 2}) def test_dictvalidator_invalid_dict_with_cast(self): validator = validation_utils.DictTypeValidator(str, int) self.assertRaises( voluptuous.error.Invalid, validator, {'a': '1', 'b': 'aa'}) def test_dictvalidator_invalid_type_tuple(self): validator = validation_utils.DictTypeValidator(str, int) self.assertRaises( voluptuous.error.Invalid, validator, ('a', '1')) def test_dictvalidator_invalid_type_str(self): validator = validation_utils.DictTypeValidator(str, int) self.assertRaises( voluptuous.error.Invalid, validator, 'aaaa') class IterableValuesDictTest(unittest.TestCase): def test_iterablevaluesdict_valid_list_and_tuple_with_cast(self): validator = validation_utils.IterableValuesDict(str, str) self.assertEqual( validator({'a': [1, '2'], 'b': ('3', 4)}), {'a': ['1', '2'], 'b': ('3', '4')}, ) def test_iterablevaluesdict_valid_list_and_tuple_without_cast(self): validator = validation_utils.IterableValuesDict(str, str) self.assertEqual( validator({'a': ['1', '2'], 'b': ('3', '4')}), {'a': ['1', '2'], 'b': ('3', '4')}, ) def test_iterablevaluesdict_invalid_dict_iterable_without_cast(self): validator = validation_utils.IterableValuesDict(str, str, cast=False) self.assertRaises( voluptuous.error.Invalid, validator, {'a': ['1'], 'b': (2, )}) def test_iterablevaluesdict_invalid_dict_iterable_with_cast(self): validator = validation_utils.IterableValuesDict(str, int, cast=False) self.assertRaises( voluptuous.error.Invalid, validator, {'a': ['1'], 'b': ('aa', )}) def test_iterablevaluesdict_invalid_iterable_with_cast(self): validator = validation_utils.IterableValuesDict(str, int) self.assertRaises( voluptuous.error.Invalid, validator, {'a': ['1'], 'b': 42, }) def test_iterablevaluesdict_invalid_type_tuple(self): validator = validation_utils.IterableValuesDict(str, int) self.assertRaises( voluptuous.error.Invalid, validator, ('a', '1')) def test_iterablevaluesdict_invalid_type_str(self): validator = validation_utils.IterableValuesDict(str, int) self.assertRaises( voluptuous.error.Invalid, validator, 'aaaa') ././@PaxHeader0000000000000000000000000000003300000000000011451 xustar000000000000000027 mtime=1727866639.271487 cloudkitty-21.0.0/cloudkitty/utils/0000775000175000017500000000000000000000000017343 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/utils/__init__.py0000664000175000017500000002261700000000000021464 0ustar00zuulzuul00000000000000# Copyright 2014 Objectif Libre # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # import calendar import contextlib import datetime import decimal import fractions import importlib import math import shutil from string import Template import sys import tempfile import yaml from oslo_log import log as logging from oslo_utils import timeutils from stevedore import extension from cloudkitty.utils import tz as tzutils _ISO8601_TIME_FORMAT_SUBSECOND = '%Y-%m-%dT%H:%M:%S.%f' _ISO8601_TIME_FORMAT = '%Y-%m-%dT%H:%M:%S' LOG = logging.getLogger(__name__) def isotime(at=None, subsecond=False): """Stringify time in ISO 8601 format.""" # Python provides a similar instance method for datetime.datetime objects # called isoformat(). The format of the strings generated by isoformat() # have a couple of problems: # 1) The strings generated by isotime are used in tokens and other public # APIs that we can't change without a deprecation period. The strings # generated by isoformat are not the same format, so we can't just # change to it. # 2) The strings generated by isoformat do not include the microseconds if # the value happens to be 0. This will likely show up as random failures # as parsers may be written to always expect microseconds, and it will # parse correctly most of the time. if not at: at = timeutils.utcnow() st = at.strftime(_ISO8601_TIME_FORMAT if not subsecond else _ISO8601_TIME_FORMAT_SUBSECOND) tz = at.tzinfo.tzname(None) if at.tzinfo else 'UTC' st += ('Z' if tz == 'UTC' else tz) return st def iso8601_from_timestamp(timestamp, microsecond=False): """Returns an iso8601 formatted date from timestamp""" # Python provides a similar instance method for datetime.datetime # objects called isoformat() and utcfromtimestamp(). The format # of the strings generated by isoformat() and utcfromtimestamp() # have a couple of problems: # 1) The method iso8601_from_timestamp in oslo_utils is realized # by isotime, the strings generated by isotime are used in # tokens and other public APIs that we can't change without a # deprecation period. The strings generated by isoformat are # not the same format, so we can't just change to it. # 2) The strings generated by isoformat() and utcfromtimestamp() # do not include the microseconds if the value happens to be 0. # This will likely show up as random failures as parsers may be # written to always expect microseconds, and it will parse # correctly most of the time. 
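# NOTE: Illustrative examples only (derived from the definitions above; the
# timestamp 1546300800 corresponds to 2019-01-01T00:00:00 UTC):
#     iso8601_from_timestamp(1546300800)
#         -> '2019-01-01T00:00:00Z'
#     iso8601_from_timestamp(1546300800, microsecond=True)
#         -> '2019-01-01T00:00:00.000000Z'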
return isotime(datetime.datetime.utcfromtimestamp(timestamp), microsecond) def dt2ts(orig_dt): """Translate a datetime into a timestamp.""" return calendar.timegm(orig_dt.timetuple()) def iso2dt(iso_date): """iso8601 format to datetime.""" iso_dt = timeutils.parse_isotime(iso_date) trans_dt = timeutils.normalize_time(iso_dt) return trans_dt def ts2dt(timestamp): """timestamp to datetime format.""" if not isinstance(timestamp, float): timestamp = float(timestamp) return datetime.datetime.utcfromtimestamp(timestamp) def ts2iso(timestamp): """timestamp to is8601 format.""" if not isinstance(timestamp, float): timestamp = float(timestamp) return iso8601_from_timestamp(timestamp) def dt2iso(orig_dt): """datetime to is8601 format.""" return isotime(orig_dt) def utcnow(): """Returns a datetime for the current utc time.""" return timeutils.utcnow() def get_month_days(dt): return calendar.monthrange(dt.year, dt.month)[1] def add_days(base_dt, days, stay_on_month=True): if stay_on_month: max_days = get_month_days(base_dt) if days > max_days: return get_month_end(base_dt) return base_dt + datetime.timedelta(days=days) def add_month(dt, stay_on_month=True): next_month = get_next_month(dt) return add_days(next_month, dt.day, stay_on_month) def sub_month(dt, stay_on_month=True): prev_month = get_last_month(dt) return add_days(prev_month, dt.day, stay_on_month) def get_month_start(dt=None): if not dt: dt = utcnow() month_start = datetime.datetime(dt.year, dt.month, 1) return month_start def get_month_start_timestamp(dt=None): return dt2ts(get_month_start(dt)) def get_month_end(dt=None): month_start = get_month_start(dt) days_of_month = get_month_days(month_start) month_end = month_start.replace(day=days_of_month) return month_end def get_last_month(dt=None): if not dt: dt = utcnow() month_end = get_month_start(dt) - datetime.timedelta(days=1) return get_month_start(month_end) def get_next_month(dt=None): month_end = get_month_end(dt) next_month = month_end + datetime.timedelta(days=1) return next_month def get_next_month_timestamp(dt=None): return dt2ts(get_next_month(dt)) def refresh_stevedore(namespace=None): """Trigger reload of entry points. Useful to have dynamic loading/unloading of stevedore modules. """ # NOTE(sheeprine): pkg_resources doesn't support reload on python3 due to # defining basestring which is still there on reload hence executing # python2 related code. try: del sys.modules['pkg_resources'].basestring except AttributeError: # python2, do nothing pass # Force working_set reload importlib.reload(sys.modules['pkg_resources']) # Clear stevedore cache cache = extension.ExtensionManager.ENTRY_POINT_CACHE if namespace: if namespace in cache: del cache[namespace] else: cache.clear() def check_time_state(timestamp=None, period=0, wait_periods=0): """Checks the state of a timestamp compared to the current time. Returns the next timestamp based on the current timestamp and the period if the next timestamp is inferior to the current time and the waiting period or None if not. :param timestamp: Current timestamp :type timestamp: datetime.datetime :param period: Period, in seconds :type period: int :param wait_periods: periods to wait before the current timestamp. 
:type wait_periods: int :rtype: datetime.datetime """ if not timestamp: return tzutils.get_month_start() period_delta = datetime.timedelta(seconds=period) next_timestamp = tzutils.add_delta(timestamp, period_delta) wait_time = wait_periods * period_delta if tzutils.add_delta(next_timestamp, wait_time) < tzutils.localized_now(): return next_timestamp return None def load_conf(conf_path): """Loads the metric collection configuration. :param conf_path: Path of the file to load :type conf_path: str :rtype: dict """ with open(conf_path) as conf: res = yaml.safe_load(conf) return res or {} @contextlib.contextmanager def tempdir(**kwargs): tmpdir = tempfile.mkdtemp(**kwargs) try: yield tmpdir finally: try: shutil.rmtree(tmpdir) except OSError as e: LOG.debug('Could not remove tmpdir: %s', e) def mutate(value, mode='NONE', mutate_map=None): """Mutate value according to provided mode.""" if mode == 'NUMBOOL': return float(value != 0.0) if mode == 'NOTNUMBOOL': return float(value == 0.0) if mode == 'FLOOR': return math.floor(value) if mode == 'CEIL': return math.ceil(value) if mode == 'MAP': ret = 0.0 if mutate_map is not None: ret = mutate_map.get(value, 0.0) return ret return value def num2decimal(num): """Converts a number into a decimal.Decimal. The number may be an str in float, int or fraction format; a fraction.Fraction, a decimal.Decimal, an int or a float. """ if isinstance(num, decimal.Decimal): return num if isinstance(num, str): if '/' in num: num = float(fractions.Fraction(num)) if isinstance(num, fractions.Fraction): num = float(num) return decimal.Decimal(num) def convert_unit(value, factor, offset): """Return converted value depending on the provided factor and offset.""" return num2decimal(value) * num2decimal(factor) + num2decimal(offset) def flat_dict(item, parent=None): """Returns a flat version of the nested dict item""" if not parent: parent = dict() for k, val in item.items(): if isinstance(val, dict): parent = flat_dict(val, parent) else: parent[k] = val return parent def template_str_substitute(string, replace_map): """Returns a string with subtituted patterns.""" try: tmp = Template(string) return tmp.substitute(replace_map) except (KeyError, ValueError) as e: LOG.error("Error when trying to substitute the string placeholders. \ Please, check your metrics configuration.", e) raise ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/utils/json.py0000664000175000017500000000211000000000000020660 0ustar00zuulzuul00000000000000# Copyright 2019 Objectif Libre # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
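# NOTE: Hedged sketch (not part of cloudkitty.utils.json) illustrating the
# generic helpers defined just above in cloudkitty/utils/__init__.py; the
# names and behaviour are taken from that module:
#
#     from cloudkitty import utils as ck_utils
#
#     # flat_dict() collapses nested dicts, keeping only leaf keys:
#     assert ck_utils.flat_dict({'a': 1, 'b': {'c': 2}}) == {'a': 1, 'c': 2}
#     # mutate() applies a named transformation to a collected value:
#     assert ck_utils.mutate(-3.2, 'FLOOR') == -4
#     assert ck_utils.mutate(12, 'NUMBOOL') == 1.0
#     assert ck_utils.mutate(0.0, 'NUMBOOL') == 0.0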
# import datetime import decimal import functools import json class CloudkittyJSONEncoder(json.JSONEncoder): """Cloudkitty custom json encoder.""" def default(self, obj): if isinstance(obj, decimal.Decimal): return float(obj) elif isinstance(obj, datetime.datetime): return obj.isoformat() return super(CloudkittyJSONEncoder, self).default(obj) dumps = functools.partial(json.dumps, cls=CloudkittyJSONEncoder) loads = json.loads ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/utils/tz.py0000664000175000017500000001325600000000000020361 0ustar00zuulzuul00000000000000# Copyright 2019 Objectif Libre # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # """ Timezone-related utilities """ import calendar import datetime from dateutil import tz from oslo_utils import timeutils _LOCAL_TZ = tz.tzlocal() def localized_now(): """Returns a datetime object with timezone information.""" return datetime.datetime.now().replace(tzinfo=_LOCAL_TZ, microsecond=0) def local_to_utc(dt, naive=False): """Converts a localized datetime object to UTC. If no tz info is provided, the object will be considered as being already in UTC, and the timezone will be set to UTC. :param dt: object to convert :type dt: datetime.datetime :param naive: If True, remove timezone information from the final object. Defaults to False. :type naive: bool :rtype: datetime.datetime """ # NOTE(peschk_l): In python2, astimezone() raises a ValueError if it is # applied to a naive datetime object. In python3 however, the naive object # is considered as being in the system's time. if dt.tzinfo is None: dt = dt.replace(tzinfo=tz.tzutc()) output = dt.astimezone(tz.tzutc()) if naive: output = output.replace(tzinfo=None) return output def utc_to_local(dt): """Converts an UTC datetime object to a localized datetime object. If no tz info is provided, the object will be considered as being UTC. :param dt: object to convert :type dt: datetime.datetime :rtype: datetime.datetime """ if dt.tzinfo is None: dt = dt.replace(tzinfo=tz.tzutc()) return dt.astimezone(_LOCAL_TZ) def dt_from_iso(time_str, as_utc=False): """Parses a timezone-aware datetime object from an iso8601 str. Returns the object as being from the local timezone. :param time_str: string to parse :type time_str: str :param as_utc: Return the datetime object as being from the UTC timezone :type as_utc: bool :rtype: datetime.datetime """ return timeutils.parse_isotime(time_str).astimezone( tz.tzutc() if as_utc else _LOCAL_TZ).replace(microsecond=0) def dt_from_ts(ts, as_utc=False): """Parses a timezone-aware datetime object from an epoch timestamp. Returns the object as being from the local timezone. """ return datetime.datetime.fromtimestamp( ts, tz.tzutc() if as_utc else _LOCAL_TZ) def add_delta(dt, delta): """Adds a timedelta to a datetime object. This is done by transforming the object to a naive UTC object, adding the timedelta and transforming it back to a localized object. 
This helps to avoid cases like this when transiting from winter to summertime: >>> dt, delta (datetime.datetime(2019, 3, 31, 0, 0, tzinfo=tzlocal()), datetime.timedelta(0, 3600)) >>> dt += delta >>> dt.isoformat() '2019-03-31T01:00:00+01:00' >>> dt += delta >>> dt.isoformat() '2019-03-31T02:00:00+02:00' # This is the same time as the previous one """ return utc_to_local(local_to_utc(dt, naive=True) + delta) def substract_delta(dt, delta): """Substracts a timedelta from a datetime object.""" return utc_to_local(local_to_utc(dt, naive=True) - delta) def get_month_start(dt=None, naive=False): """Returns the start of the month in the local timezone. If no parameter is provided, returns the start of the current month. If the provided parameter is naive, it will be considered as UTC and tzinfo will be added, except if naive is True. :param dt: Month to return the begin of. :type dt: datetime.datetime :param naive: If True, remove timezone information from the final object. Defaults to False. :type naive: bool :rtype: datetime.datetime """ if not dt: dt = localized_now() if not dt.tzinfo: dt = dt.replace(tzinfo=tz.tzutc()).astimezone(_LOCAL_TZ) if naive: dt = local_to_utc(dt, naive=True) return datetime.datetime(dt.year, dt.month, 1, tzinfo=dt.tzinfo) def get_next_month(dt=None, naive=False): """Returns the start of the next month in the local timezone. If no parameter is provided, returns the start of the next month. If the provided parameter is naive, it will be considered as UTC. :param dt: Datetime to return the next month of. :type dt: datetime.datetime :param naive: If True, remove timezone information from the final object. Defaults to False. :type naive: bool :rtype: datetime.datetime """ start = get_month_start(dt, naive=naive) month_days = calendar.monthrange(start.year, start.month)[1] return add_delta(start, datetime.timedelta(days=month_days)) def diff_seconds(one, two): """Returns the difference in seconds between two datetime objects. Objects will be converted to naive UTC objects before calculating the difference. The return value is the absolute value of the difference. :param one: First datetime object :type one: datetime.datetime :param two: datetime object to substract from the first one :type two: datetime.datetime :rtype: int """ return abs(int((local_to_utc(one, naive=True) - local_to_utc(two, naive=True)).total_seconds())) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/utils/validation.py0000664000175000017500000000670000000000000022052 0ustar00zuulzuul00000000000000# Copyright 2019 Objectif Libre # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # """Common utils for voluptuous schema validation""" try: from collections.abc import Iterable except ImportError: from collections import Iterable import functools import voluptuous class DictTypeValidator(object): """Voluptuous helper validating dict key and value types. When possible, keys and values will be converted to the required type. 
This behaviour can be disabled through the `cast` param. :param key_type: type of the dict keys :param value_type: type of the dict values :param cast: Set to False if you do not want to cast elements to the required type. :type cast: bool :rtype: dict """ def __init__(self, key_type, value_type, cast=True): if cast: self._kval = voluptuous.Coerce(key_type) self._vval = voluptuous.Coerce(value_type) else: def __type_validator(type_, elem): if not isinstance(elem, type_): raise voluptuous.Invalid( "{e} is not of type {t}".format(e=elem, t=type_)) return elem self._kval = functools.partial(__type_validator, key_type) self._vval = functools.partial(__type_validator, value_type) def __call__(self, item): try: return {self._kval(k): self._vval(v) for k, v in dict(item).items()} except (TypeError, ValueError): raise voluptuous.Invalid( "{} can't be converted to dict".format(item)) class IterableValuesDict(DictTypeValidator): """Voluptuous helper validating dicts with iterable values. When possible, keys and elements of values will be converted to the required type. This behaviour can be disabled through the `cast` param. :param key_type: type of the dict keys :param value_type: type of the dict values :param cast: Set to False if you do not want to convert elements to the required type. :type cast: bool :rtype: dict """ def __init__(self, key_type, value_type, cast=True): super(IterableValuesDict, self).__init__(key_type, value_type, cast) # NOTE(peschk_l): Using type(it) to return an iterable of the same # type as the passed argument. self.__vval = lambda it: type(it)(self._vval(i) for i in it) def __call__(self, item): try: for v in dict(item).values(): if not isinstance(v, Iterable): raise voluptuous.Invalid("{} is not iterable".format(v)) return {self._kval(k): self.__vval(v) for k, v in item.items()} except (TypeError, ValueError) as e: raise voluptuous.Invalid( "{} can't be converted to a dict: {}".format(item, e)) def get_string_type(): """Returns ``basestring`` in python2 and ``str`` in python3.""" return str ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/version.py0000664000175000017500000000121100000000000020235 0ustar00zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import pbr.version version_info = pbr.version.VersionInfo('cloudkitty') ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/write_orchestrator.py0000664000175000017500000001303100000000000022504 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright 2014 Objectif Libre # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # import copy from oslo_config import cfg from oslo_utils import fileutils from stevedore import named from cloudkitty import state from cloudkitty import storage from cloudkitty import storage_state from cloudkitty import utils as ck_utils CONF = cfg.CONF WRITERS_NAMESPACE = 'cloudkitty.output.writers' class WriteOrchestrator(object): """Write Orchestrator: Handle incoming data from the global orchestrator, and store them in an intermediary data format before final transformation. """ def __init__(self, backend, tenant_id, storage, basepath=None, period=3600): self._backend = backend self._tenant_id = tenant_id self._storage = storage self._storage_state = storage_state.StateManager() self._basepath = basepath if self._basepath: fileutils.ensure_tree(self._basepath) self._period = period self._sm = state.DBStateManager(self._tenant_id, 'writer_status') self._write_pipeline = [] # State vars self.usage_start = None self.usage_end = None # Current total self.total = 0 def init_writing_pipeline(self): CONF.import_opt('pipeline', 'cloudkitty.config', 'output') output_pipeline = named.NamedExtensionManager( WRITERS_NAMESPACE, CONF.output.pipeline) for writer in output_pipeline: self.add_writer(writer.plugin) def add_writer(self, writer_class): writer = writer_class(self, self._tenant_id, self._backend, self._basepath) self._write_pipeline.append(writer) def _update_state_manager_data(self): self._sm.set_state(self.usage_end) metadata = {'total': self.total} self._sm.set_metadata(metadata) def _load_state_manager_data(self): timeframe = self._sm.get_state() if timeframe: self.usage_start = timeframe self.usage_end = self.usage_start + self._period metadata = self._sm.get_metadata() if metadata: self.total = metadata.get('total', 0) def _dispatch(self, data): for service in data: # Update totals for entry in data[service]: self.total += entry['rating']['price'] # Dispatch data to writing pipeline for backend in self._write_pipeline: backend.append(data, self.usage_start, self.usage_end) def get_timeframe(self, timeframe, timeframe_end=None): if not timeframe_end: timeframe_end = timeframe + self._period try: filters = {'project_id': self._tenant_id} data = self._storage.retrieve(begin=timeframe, end=timeframe_end, filters=filters, paginate=False) for df in data['dataframes']: for service, resources in df['usage'].items(): for resource in resources: resource['desc'] = copy.deepcopy(resource['metadata']) resource['desc'].update(resource['groupby']) except storage.NoTimeFrame: return None return data def close(self): for writer in self._write_pipeline: writer.close() def _push_data(self): data = self.get_timeframe(self.usage_start, self.usage_end) if data and data['total'] > 0: for timeframe in data['dataframes']: self._dispatch(timeframe['usage']) return True else: return False def _commit_data(self): for backend in self._write_pipeline: backend.commit() def reset_state(self): self._load_state_manager_data() self.usage_end = self._storage_state.get_last_processed_timestamp() self._update_state_manager_data() def restart_month(self): self._load_state_manager_data() month_start = 
ck_utils.get_month_start() self.usage_end = ck_utils.dt2ts(month_start) self._update_state_manager_data() def process(self): self._load_state_manager_data() storage_state = self._storage_state.get_last_processed_timestamp( self._tenant_id) if not self.usage_start: self.usage_start = storage_state self.usage_end = self.usage_start + self._period while storage_state > self.usage_start: if self._push_data(): self._commit_data() self._update_state_manager_data() self._load_state_manager_data() storage_state = self._storage_state.get_last_processed_timestamp( self._tenant_id) self.close() ././@PaxHeader0000000000000000000000000000003300000000000011451 xustar000000000000000027 mtime=1727866639.271487 cloudkitty-21.0.0/cloudkitty/writer/0000775000175000017500000000000000000000000017517 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/writer/__init__.py0000664000175000017500000001067000000000000021634 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright 2014 Objectif Libre # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # import abc from cloudkitty import state from cloudkitty import utils as ck_utils class BaseReportWriter(object, metaclass=abc.ABCMeta): """Base report writer.""" report_type = None def __init__(self, write_orchestrator, tenant_id, backend, basepath=None): self._write_orchestrator = write_orchestrator self._backend = backend self._tenant_id = tenant_id self._sm = state.DBStateManager(self._tenant_id, self.report_type) self._report = None self._period = 3600 self._basepath = basepath # State vars self.checked_first_line = False self.usage_start = None self.usage_start_dt = None self.usage_end = None self.usage_end_dt = None # Current total self.total = 0 # Current usage period lines self._usage_data = {} @abc.abstractmethod def _gen_filename(self): """Filename generation """ def _open(self): filename = self._gen_filename() self._report = self._backend(filename, 'wb+') self._report.seek(0, 2) def _get_report_size(self): return self._report.tell() @abc.abstractmethod def _recover_state(self): """Recover state from a last run. 
""" def _update_state_manager(self): self._sm.set_state(self.usage_end) metadata = {'total': self.total} self._sm.set_metadata(metadata) def _get_state_manager_timeframe(self): timeframe = self._sm.get_state() self.usage_start = timeframe self.usage_start_dt = ck_utils.ts2dt(timeframe) self.usage_end = timeframe + self._period self.usage_end_dt = ck_utils.ts2dt(self.usage_end) metadata = self._sm.get_metadata() self.total = metadata.get('total', 0) def get_timeframe(self, timeframe): return self._write_orchestrator.get_timeframe(timeframe) @abc.abstractmethod def _write_header(self): """Write report headers """ @abc.abstractmethod def _write_total(self): """Write current total """ @abc.abstractmethod def _write(self): """Write report content """ def _pre_commit(self): if self._report is None: self._open() if not self.checked_first_line: if self._get_report_size() == 0: self._write_header() else: self._recover_state() self.checked_first_line = True else: self._recover_state() def _commit(self): self._pre_commit() self._write() self._update_state_manager() self._post_commit() def _post_commit(self): self._usage_data = {} self._write_total() def _update(self, data): for service in data: if service in self._usage_data: self._usage_data[service].extend(data[service]) else: self._usage_data[service] = data[service] # Update totals for entry in data[service]: self.total += entry['rating']['price'] def append(self, data, start, end): # FIXME we should use the real time values if self.usage_end is not None and start >= self.usage_end: self.usage_start = None if self.usage_start is None: self.usage_start = start self.usage_end = start + self._period self.usage_start_dt = ck_utils.ts2dt(self.usage_start) self.usage_end_dt = ck_utils.ts2dt(self.usage_end) self._update(data) def commit(self): self._commit() @abc.abstractmethod def _close_file(self): """Close report file """ def close(self): self._close_file() ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/writer/csv_base.py0000664000175000017500000002005000000000000021653 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright 2014 Objectif Libre # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
# import collections import csv import datetime import os from cloudkitty import utils as ck_utils from cloudkitty import writer class InconsistentHeaders(Exception): pass class BaseCSVBackend(writer.BaseReportWriter): """Report format writer: Generates report in csv format """ report_type = 'csv' def __init__(self, write_orchestrator, user_id, backend, basepath): super(BaseCSVBackend, self).__init__(write_orchestrator, user_id, backend, basepath) # Detailed transform OrderedDict self._field_map = collections.OrderedDict() self._headers = [] self._headers_len = 0 self._extra_headers = [] self._extra_headers_len = 0 # File vars self._csv_report = None # State variables self.cached_start = None self.cached_start_str = '' self.cached_end = None self.cached_end_str = '' self._crumpled = False # Current usage period lines self._usage_data = [] def _gen_filename(self, timeframe): filename = ('{}-{}-{:02d}.csv').format(self._tenant_id, timeframe.year, timeframe.month) if self._basepath: filename = os.path.join(self._basepath, filename) return filename def _open(self): filename = self._gen_filename(self.usage_start_dt) self._report = self._backend(filename, 'rb+') self._csv_report = csv.writer(self._report) self._report.seek(0, 2) def _close_file(self): if self._report is not None: self._report.close() def _get_state_manager_timeframe(self): if self.report_type is None: raise NotImplementedError() def _update_state_manager(self): if self.report_type is None: raise NotImplementedError() super(BaseCSVBackend, self)._update_state_manager() metadata = {'total': self.total} metadata['headers'] = self._extra_headers self._sm.set_metadata(metadata) def _init_headers(self): headers = self._field_map.keys() for header in headers: if ':*' in header: continue self._headers.append(header) self._headers_len = len(self._headers) def _write_header(self): self._csv_report.writerow(self._headers + self._extra_headers) def _write(self): self._csv_report.writerows(self._usage_data) def _post_commit(self): self._crumpled = False self._usage_data = [] self._write_total() def _update(self, data): """Dispatch report data with context awareness. 
""" if self._crumpled: return try: for service in data: for report_data in data[service]: self._process_data(service, report_data) self.total += report_data['rating']['price'] except InconsistentHeaders: self._crumple() self._crumpled = True def _recover_state(self): # Rewind 3 lines self._report.seek(0, 2) buf_size = self._report.tell() if buf_size > 2000: buf_size = 2000 elif buf_size == 0: return self._report.seek(-buf_size, 2) end_buf = self._report.read() last_line = buf_size for dummy in range(4): last_line = end_buf.rfind('\n', 0, last_line) if last_line > 0: last_line -= len(end_buf) - 1 else: raise RuntimeError('Unable to recover file state.') self._report.seek(last_line, 2) self._report.truncate() def _crumple(self): # Reset states self._usage_data = [] self.total = 0 # Recover state from file if self._report is not None: self._report.seek(0) reader = csv.reader(self._report) # Skip header for dummy in range(2): line = reader.next() self.usage_start_dt = datetime.datetime.strptime( line[0], '%Y/%m/%d %H:%M:%S') self.usage_start = ck_utils.dt2ts(self.usage_start_dt) self.usage_end_dt = datetime.datetime.strptime( line[1], '%Y/%m/%d %H:%M:%S') self.usage_end = ck_utils.dt2ts(self.usage_end_dt) # Reset file self._report.seek(0) self._report.truncate() self._write_header() timeframe = self._write_orchestrator.get_timeframe( self.usage_start) start = self.usage_start self.usage_start = None for data in timeframe: self.append(data['usage'], start, None) self.usage_start = self.usage_end def _update_extra_headers(self, new_head): self._extra_headers.append(new_head) self._extra_headers.sort() self._extra_headers_len += 1 def _allocate_extra(self, line): for dummy in range(self._extra_headers_len): line.append('') def _map_wildcard(self, base, report_data): wildcard_line = [] headers_changed = False self._allocate_extra(wildcard_line) base_section, dummy = base.split(':') if not report_data: return [] for field in report_data: col_name = base_section + ':' + field if col_name not in self._extra_headers: self._update_extra_headers(col_name) headers_changed = True else: idx = self._extra_headers.index(col_name) wildcard_line[idx] = report_data[field] if headers_changed: raise InconsistentHeaders('Headers value changed' ', need to rebuild.') return wildcard_line def _recurse_sections(self, sections, data): if not sections.count(':'): return data.get(sections, '') fields = sections.split(':') cur_data = data for field in fields: if field in cur_data: cur_data = cur_data[field] else: return None return cur_data def _process_data(self, context, report_data): """Transform the raw json data to the final CSV values. 
""" if not self._headers_len: self._init_headers() formated_data = [] for base, mapped in self._field_map.iteritems(): final_data = '' if isinstance(mapped, str): mapped_section, mapped_field = mapped.rsplit(':', 1) data = self._recurse_sections(mapped_section, report_data) if mapped_field == '*': extra_fields = self._map_wildcard(base, data) formated_data.extend(extra_fields) continue elif mapped_section in report_data: data = report_data[mapped_section] if mapped_field in data: final_data = data[mapped_field] elif mapped is not None: final_data = mapped(context, report_data) formated_data.append(final_data) self._usage_data.append(formated_data) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/writer/csv_map.py0000664000175000017500000001204200000000000021520 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright 2014 Objectif Libre # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # import collections import datetime from cloudkitty.writer import csv_base class CSVMapped(csv_base.BaseCSVBackend): report_type = 'csv' def __init__(self, write_orchestrator, user_id, backend, state_backend): super(CSVMapped, self).__init__(write_orchestrator, user_id, backend, state_backend) # Detailed transform dict self._field_map = collections.OrderedDict( [('UsageStart', self._trans_get_usage_start), ('UsageEnd', self._trans_get_usage_end), ('ResourceId', self._trans_res_id), ('Operation', self._trans_operation), ('UserId', 'desc:user_id'), ('ProjectId', 'desc:project_id'), ('ItemName', 'desc:name'), ('ItemFlavor', 'desc:flavor_name'), ('ItemFlavorId', 'desc:flavor_id'), ('AvailabilityZone', 'desc:availability_zone'), ('Service', self._trans_service), ('UsageQuantity', 'vol:qty'), ('RateValue', 'rating:price'), ('Cost', self._trans_calc_cost), ('user:*', 'desc:metadata:*')]) def _write_total(self): lines = [[''] * self._headers_len for i in range(3)] for i in range(len(lines)): lines[i][1] = self._tenant_id lines[1][2] = self._tenant_id lines[0][3] = 'InvoiceTotal' lines[1][3] = 'AccountTotal' lines[2][3] = 'StatementTotal' lines[0][5] = 'Total amount for invoice' lines[1][5] = 'Total for linked account# {}'.format(self._tenant_id) start_month = datetime.datetime( self.usage_start_dt.year, self.usage_start_dt.month, 1) lines[2][5] = ('Total statement amount for period ' '{} - {}').format(self._format_date(start_month), self._get_usage_end()) lines[0][8] = self.total lines[1][8] = self.total lines[2][8] = self.total self._csv_report.writerows(lines) @staticmethod def _format_date(raw_dt): return raw_dt.strftime('%Y/%m/%d %H:%M:%S') def _get_usage_start(self): """Get the start usage of this period. """ if self.cached_start == self.usage_start: return self.cached_start_str else: self.cached_start = self.usage_start self.cached_start_str = self._format_date(self.usage_start_dt) return self.cached_start_str def _get_usage_end(self): """Get the end usage of this period. 
""" if self.cached_start == self.usage_start and self.cached_end_str \ and self.cached_end > self.cached_start: return self.cached_end_str else: usage_end = self.usage_start_dt + datetime.timedelta( seconds=self._period) self.cached_end_str = self._format_date(usage_end) return self.cached_end_str def _trans_get_usage_start(self, _context, _report_data): """Dummy transformation function to comply with the standard. """ return self._get_usage_start() def _trans_get_usage_end(self, _context, _report_data): """Dummy transformation function to comply with the standard. """ return self._get_usage_end() def _trans_product_name(self, context, _report_data): """Context dependent product name translation. """ if context == 'compute' or context == 'instance': return 'Nova Computing' else: return context def _trans_operation(self, context, _report_data): """Context dependent operation translation. """ if context == 'compute' or context == 'instance': return 'RunInstances' def _trans_res_id(self, context, report_data): """Context dependent resource id transformation function. """ return report_data['desc'].get('resource_id') def _trans_calc_cost(self, context, report_data): """Cost calculation function. """ try: quantity = report_data['vol'].get('qty') rate = report_data['rating'].get('price') return str(float(quantity) * rate) except TypeError: pass def _trans_service(self, context, report_data): return context ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/cloudkitty/writer/osrf.py0000664000175000017500000000555300000000000021052 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright 2014 Objectif Libre # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
# import os from cloudkitty.utils import json from cloudkitty import writer class OSRFBackend(writer.BaseReportWriter): """OpenStack Report Format Writer: Generates report in native format (json) """ report_type = 'osrf' def _gen_filename(self, timeframe): filename = '{}-osrf-{}-{:02d}.json'.format(self._tenant_id, timeframe.year, timeframe.month) if self._basepath: filename = os.path.join(self._basepath, filename) return filename def _open(self): filename = self._gen_filename(self.usage_start_dt) self._report = self._backend(filename, 'rb+') self._report.seek(0, 2) if self._report.tell(): self._recover_state() else: self._report.seek(0) def _write_header(self): self._report.write('[') self._report.flush() def _write_total(self): total = {'total': self.total} self._report.write(json.dumps(total)) self._report.write(']') self._report.flush() def _recover_state(self): # Search for last comma self._report.seek(0, 2) max_idx = self._report.tell() if max_idx > 2000: max_idx = 2000 hay = '' for idx in range(10, max_idx, 10): self._report.seek(-idx, 2) hay = self._report.read() if hay.count(','): break last_comma = hay.rfind(',') if last_comma > -1: last_comma -= len(hay) else: raise RuntimeError('Unable to recover file state.') self._report.seek(last_comma, 2) self._report.write(', ') self._report.truncate() def _close_file(self): if self._report is not None: self._recover_state() self._write_total() self._report.close() def _write(self): data = {} data['period'] = {'begin': self.usage_start_dt.isoformat(), 'end': self.usage_end_dt.isoformat()} data['usage'] = self._usage_data self._report.write(json.dumps(data)) self._report.write(', ') self._report.flush() ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1727866639.2274861 cloudkitty-21.0.0/cloudkitty.egg-info/0000775000175000017500000000000000000000000017675 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866639.0 cloudkitty-21.0.0/cloudkitty.egg-info/PKG-INFO0000664000175000017500000001152300000000000020774 0ustar00zuulzuul00000000000000Metadata-Version: 1.2 Name: cloudkitty Version: 21.0.0 Summary: Rating as a Service component for OpenStack Home-page: https://docs.openstack.org/cloudkitty/latest Author: OpenStack Author-email: openstack-discuss@lists.openstack.org License: UNKNOWN Description: ======================== Team and repository tags ======================== .. image:: https://governance.openstack.org/tc/badges/cloudkitty.svg :target: https://governance.openstack.org/tc/reference/tags/index.html .. Change things from this point on ========== CloudKitty ========== .. image:: doc/source/images/cloudkitty-logo.png :alt: cloudkitty :align: center Rating as a Service component +++++++++++++++++++++++++++++ Goal ---- CloudKitty aims at filling the gap between metrics collection systems like ceilometer and a billing system. Every metrics are collected, aggregated and processed through different rating modules. You can then query CloudKitty's storage to retrieve processed data and easily generate reports. Most parts of CloudKitty are modular so you can easily extend the base code to address your particular use case. You can find more information on its architecture in the documentation, `architecture section`_. Status ------ CloudKitty has been successfully deployed in production on different OpenStack systems. You can find the latest documentation on documentation_. 
Contributing ------------ We welcome new contributors: if you have new ideas or suggestions, or want to contribute, contact us. You can reach us through IRC (#cloudkitty @ oftc.net), or on the official OpenStack mailing list openstack-discuss@lists.openstack.org. A storyboard_ is available if you need to report bugs. Additional components --------------------- We provide an OpenStack dashboard (Horizon) integration; you can find the files in the cloudkitty-dashboard_ repository. A CLI is also available, in the python-cloudkittyclient_ repository. Trying it --------- CloudKitty can be deployed with DevStack; more information can be found in the `devstack section`_ of the documentation. Deploying it in production -------------------------- CloudKitty can be deployed in production on OpenStack environments; for more information, check the `installation section`_ of the documentation. Getting release notes --------------------- Release notes can be found in the `release notes section`_ of the documentation. Contributing to CloudKitty -------------------------- For information on how to contribute to CloudKitty, please see the contents of CONTRIBUTING.rst. Any new code must follow the development guidelines detailed in the HACKING.rst file and pass all unit tests. .. Global references and images .. _documentation: https://docs.openstack.org/cloudkitty/latest/ .. _storyboard: https://storyboard.openstack.org/#!/project/890 .. _python-cloudkittyclient: https://opendev.org/openstack/python-cloudkittyclient .. _cloudkitty-dashboard: https://opendev.org/openstack/cloudkitty-dashboard .. _architecture section: https://docs.openstack.org/cloudkitty/latest/admin/architecture.html .. _devstack section: https://docs.openstack.org/cloudkitty/latest/admin/devstack.html .. _installation section: https://docs.openstack.org/cloudkitty/latest/admin/install/index.html .. _release notes section: https://docs.openstack.org/releasenotes/cloudkitty/ .. 
_contributing: https://docs.openstack.org/cloudkitty/latest/contributor/contributing.html Platform: UNKNOWN Classifier: Environment :: OpenStack Classifier: Intended Audience :: Information Technology Classifier: Intended Audience :: System Administrators Classifier: License :: OSI Approved :: Apache Software License Classifier: Operating System :: POSIX :: Linux Classifier: Programming Language :: Python Classifier: Programming Language :: Python :: 3 Classifier: Programming Language :: Python :: 3.8 Classifier: Programming Language :: Python :: 3.9 Classifier: Programming Language :: Python :: 3.10 Classifier: Programming Language :: Python :: 3.11 Requires-Python: >=3.8 ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866639.0 cloudkitty-21.0.0/cloudkitty.egg-info/SOURCES.txt0000664000175000017500000006131000000000000021562 0ustar00zuulzuul00000000000000.stestr.conf .zuul.yaml AUTHORS CONTRIBUTING.rst ChangeLog HACKING.rst LICENSE README.rst requirements.txt setup.cfg setup.py test-requirements.txt tox.ini api-ref/source/_static api-ref/source/conf.py api-ref/source/index.rst api-ref/source/v1/v1.rst api-ref/source/v1/rating/hashmap.rst api-ref/source/v1/rating/pyscripts.rst api-ref/source/v2/http_status.yml api-ref/source/v2/index.rst api-ref/source/v2/api_samples/dataframes/dataframes_get.json api-ref/source/v2/api_samples/dataframes/dataframes_post.json api-ref/source/v2/api_samples/rating/module_get.json api-ref/source/v2/api_samples/rating/modules_list_get.json api-ref/source/v2/api_samples/scope/scope_get.json api-ref/source/v2/api_samples/summary/summary_get.json api-ref/source/v2/api_samples/summary/summary_get_groupby_time.json api-ref/source/v2/dataframes/dataframes.inc api-ref/source/v2/dataframes/dataframes_parameters.yml api-ref/source/v2/dataframes/http_status.yml api-ref/source/v2/rating/http_status.yml api-ref/source/v2/rating/modules.inc api-ref/source/v2/rating/modules_parameters.yml api-ref/source/v2/scope/http_status.yml api-ref/source/v2/scope/scope.inc api-ref/source/v2/scope/scope_parameters.yml api-ref/source/v2/summary/summary.inc api-ref/source/v2/summary/summary_parameters.yml api-ref/source/v2/task/reprocessing.inc api-ref/source/v2/task/reprocessing_parameters.yml cloudkitty/__init__.py cloudkitty/config.py cloudkitty/dataframe.py cloudkitty/extension_manager.py cloudkitty/i18n.py cloudkitty/messaging.py cloudkitty/orchestrator.py cloudkitty/service.py cloudkitty/state.py cloudkitty/version.py cloudkitty/write_orchestrator.py cloudkitty.egg-info/PKG-INFO cloudkitty.egg-info/SOURCES.txt cloudkitty.egg-info/dependency_links.txt cloudkitty.egg-info/entry_points.txt cloudkitty.egg-info/not-zip-safe cloudkitty.egg-info/pbr.json cloudkitty.egg-info/requires.txt cloudkitty.egg-info/top_level.txt cloudkitty/api/__init__.py cloudkitty/api/app.py cloudkitty/api/app.wsgi cloudkitty/api/middleware.py cloudkitty/api/root.py cloudkitty/api/v1/__init__.py cloudkitty/api/v1/config.py cloudkitty/api/v1/hooks.py cloudkitty/api/v1/types.py cloudkitty/api/v1/controllers/__init__.py cloudkitty/api/v1/controllers/collector.py cloudkitty/api/v1/controllers/info.py cloudkitty/api/v1/controllers/rating.py cloudkitty/api/v1/controllers/report.py cloudkitty/api/v1/controllers/storage.py cloudkitty/api/v1/datamodels/__init__.py cloudkitty/api/v1/datamodels/collector.py cloudkitty/api/v1/datamodels/info.py cloudkitty/api/v1/datamodels/rating.py cloudkitty/api/v1/datamodels/report.py cloudkitty/api/v1/datamodels/storage.py 
cloudkitty/api/v2/__init__.py cloudkitty/api/v2/base.py cloudkitty/api/v2/utils.py cloudkitty/api/v2/dataframes/__init__.py cloudkitty/api/v2/dataframes/dataframes.py cloudkitty/api/v2/rating/__init__.py cloudkitty/api/v2/rating/modules.py cloudkitty/api/v2/scope/__init__.py cloudkitty/api/v2/scope/state.py cloudkitty/api/v2/summary/__init__.py cloudkitty/api/v2/summary/summary.py cloudkitty/api/v2/task/__init__.py cloudkitty/api/v2/task/reprocess.py cloudkitty/backend/__init__.py cloudkitty/backend/file.py cloudkitty/cli/__init__.py cloudkitty/cli/dbsync.py cloudkitty/cli/processor.py cloudkitty/cli/status.py cloudkitty/cli/storage.py cloudkitty/cli/writer.py cloudkitty/collector/__init__.py cloudkitty/collector/exceptions.py cloudkitty/collector/gnocchi.py cloudkitty/collector/prometheus.py cloudkitty/common/__init__.py cloudkitty/common/config.py cloudkitty/common/context.py cloudkitty/common/custom_session.py cloudkitty/common/defaults.py cloudkitty/common/policy.py cloudkitty/common/prometheus_client.py cloudkitty/common/db/__init__.py cloudkitty/common/db/models.py cloudkitty/common/db/alembic/__init__.py cloudkitty/common/db/alembic/alembic.ini cloudkitty/common/db/alembic/env.py cloudkitty/common/db/alembic/migration.py cloudkitty/common/policies/__init__.py cloudkitty/common/policies/base.py cloudkitty/common/policies/v1/__init__.py cloudkitty/common/policies/v1/collector.py cloudkitty/common/policies/v1/info.py cloudkitty/common/policies/v1/rating.py cloudkitty/common/policies/v1/report.py cloudkitty/common/policies/v1/storage.py cloudkitty/common/policies/v2/__init__.py cloudkitty/common/policies/v2/dataframes.py cloudkitty/common/policies/v2/rating.py cloudkitty/common/policies/v2/scope.py cloudkitty/common/policies/v2/summary.py cloudkitty/common/policies/v2/tasks.py cloudkitty/db/__init__.py cloudkitty/db/api.py cloudkitty/db/sqlalchemy/__init__.py cloudkitty/db/sqlalchemy/api.py cloudkitty/db/sqlalchemy/migration.py cloudkitty/db/sqlalchemy/models.py cloudkitty/db/sqlalchemy/alembic/__init__.py cloudkitty/db/sqlalchemy/alembic/env.py cloudkitty/db/sqlalchemy/alembic/script.py.mako cloudkitty/db/sqlalchemy/alembic/versions/2ac2217dcbd9_added_support_for_meta_collector.py cloudkitty/db/sqlalchemy/alembic/versions/385e33fef139_added_priority_to_modules_state.py cloudkitty/db/sqlalchemy/alembic/versions/464e951dc3b8_initial_migration.py cloudkitty/fetcher/__init__.py cloudkitty/fetcher/gnocchi.py cloudkitty/fetcher/keystone.py cloudkitty/fetcher/prometheus.py cloudkitty/fetcher/source.py cloudkitty/hacking/__init__.py cloudkitty/hacking/checks.py cloudkitty/rating/__init__.py cloudkitty/rating/noop.py cloudkitty/rating/hash/__init__.py cloudkitty/rating/hash/controllers/__init__.py cloudkitty/rating/hash/controllers/field.py cloudkitty/rating/hash/controllers/group.py cloudkitty/rating/hash/controllers/mapping.py cloudkitty/rating/hash/controllers/root.py cloudkitty/rating/hash/controllers/service.py cloudkitty/rating/hash/controllers/threshold.py cloudkitty/rating/hash/datamodels/__init__.py cloudkitty/rating/hash/datamodels/field.py cloudkitty/rating/hash/datamodels/group.py cloudkitty/rating/hash/datamodels/mapping.py cloudkitty/rating/hash/datamodels/service.py cloudkitty/rating/hash/datamodels/threshold.py cloudkitty/rating/hash/db/__init__.py cloudkitty/rating/hash/db/api.py cloudkitty/rating/hash/db/sqlalchemy/__init__.py cloudkitty/rating/hash/db/sqlalchemy/api.py cloudkitty/rating/hash/db/sqlalchemy/migration.py cloudkitty/rating/hash/db/sqlalchemy/models.py 
cloudkitty/rating/hash/db/sqlalchemy/alembic/__init__.py cloudkitty/rating/hash/db/sqlalchemy/alembic/env.py cloudkitty/rating/hash/db/sqlalchemy/alembic/script.py.mako cloudkitty/rating/hash/db/sqlalchemy/alembic/models/__init__.py cloudkitty/rating/hash/db/sqlalchemy/alembic/models/f8c799db4aa0_fix_unnamed_constraints.py cloudkitty/rating/hash/db/sqlalchemy/alembic/versions/10d2738b67df_rename_mapping_table_to_hashmap_mappings.py cloudkitty/rating/hash/db/sqlalchemy/alembic/versions/3dd7e13527f3_initial_migration.py cloudkitty/rating/hash/db/sqlalchemy/alembic/versions/4da82e1c11c8_add_per_tenant_hashmap_support.py cloudkitty/rating/hash/db/sqlalchemy/alembic/versions/4e0232ce_increase_precision_for_cost_fields.py cloudkitty/rating/hash/db/sqlalchemy/alembic/versions/4fa888fd7eda_added_threshold_support.py cloudkitty/rating/hash/db/sqlalchemy/alembic/versions/54cc17accf2c_fixed_constraint_name.py cloudkitty/rating/hash/db/sqlalchemy/alembic/versions/644faa4491fd_update_tenant_id_type_from_uuid_to_text.py cloudkitty/rating/hash/db/sqlalchemy/alembic/versions/Ifbf5b2515c7_increase_precision_for_cost_fields.py cloudkitty/rating/hash/db/sqlalchemy/alembic/versions/c88a06b1cfce_clean_hashmap_fields_constraints.py cloudkitty/rating/hash/db/sqlalchemy/alembic/versions/f8c799db4aa0_fix_unnamed_constraints.py cloudkitty/rating/pyscripts/__init__.py cloudkitty/rating/pyscripts/controllers/__init__.py cloudkitty/rating/pyscripts/controllers/root.py cloudkitty/rating/pyscripts/controllers/script.py cloudkitty/rating/pyscripts/datamodels/__init__.py cloudkitty/rating/pyscripts/datamodels/script.py cloudkitty/rating/pyscripts/db/__init__.py cloudkitty/rating/pyscripts/db/api.py cloudkitty/rating/pyscripts/db/sqlalchemy/__init__.py cloudkitty/rating/pyscripts/db/sqlalchemy/api.py cloudkitty/rating/pyscripts/db/sqlalchemy/migration.py cloudkitty/rating/pyscripts/db/sqlalchemy/models.py cloudkitty/rating/pyscripts/db/sqlalchemy/alembic/__init__.py cloudkitty/rating/pyscripts/db/sqlalchemy/alembic/env.py cloudkitty/rating/pyscripts/db/sqlalchemy/alembic/script.py.mako cloudkitty/rating/pyscripts/db/sqlalchemy/alembic/versions/4f9efa4601c0_initial_migration.py cloudkitty/rating/pyscripts/db/sqlalchemy/alembic/versions/75c205f6f1a2_move_from_sha1_to_sha512.py cloudkitty/storage/__init__.py cloudkitty/storage/v1/__init__.py cloudkitty/storage/v1/hybrid/__init__.py cloudkitty/storage/v1/hybrid/migration.py cloudkitty/storage/v1/hybrid/models.py cloudkitty/storage/v1/hybrid/alembic/env.py cloudkitty/storage/v1/hybrid/alembic/script.py.mako cloudkitty/storage/v1/hybrid/alembic/versions/03da4bb002b9_initial_revision.py cloudkitty/storage/v1/hybrid/backends/__init__.py cloudkitty/storage/v1/hybrid/backends/gnocchi.py cloudkitty/storage/v1/sqlalchemy/__init__.py cloudkitty/storage/v1/sqlalchemy/migration.py cloudkitty/storage/v1/sqlalchemy/models.py cloudkitty/storage/v1/sqlalchemy/alembic/__init__.py cloudkitty/storage/v1/sqlalchemy/alembic/env.py cloudkitty/storage/v1/sqlalchemy/alembic/script.py.mako cloudkitty/storage/v1/sqlalchemy/alembic/versions/17fd1b237aa3_initial_migration.py cloudkitty/storage/v1/sqlalchemy/alembic/versions/307430ab38bc_improve_qty_precision.py cloudkitty/storage/v1/sqlalchemy/alembic/versions/792b438b663_added_tenant_informations.py cloudkitty/storage/v1/sqlalchemy/alembic/versions/c703a1bad612_improve_qty_digit.py cloudkitty/storage/v1/sqlalchemy/alembic/versions/d875621d0384_create_index_idx_tenantid_begin_end_on_.py cloudkitty/storage/v2/__init__.py cloudkitty/storage/v2/influx.py 
cloudkitty/storage/v2/elasticsearch/__init__.py cloudkitty/storage/v2/elasticsearch/client.py cloudkitty/storage/v2/elasticsearch/exceptions.py cloudkitty/storage/v2/opensearch/__init__.py cloudkitty/storage/v2/opensearch/client.py cloudkitty/storage/v2/opensearch/exceptions.py cloudkitty/storage_state/__init__.py cloudkitty/storage_state/migration.py cloudkitty/storage_state/models.py cloudkitty/storage_state/alembic/env.py cloudkitty/storage_state/alembic/script.py.mako cloudkitty/storage_state/alembic/versions/4d69395f_add_storage_scope_state_fields.py cloudkitty/storage_state/alembic/versions/750d3050_create_last_processed_timestamp_column.py cloudkitty/storage_state/alembic/versions/9feccd32_create_reprocessing_scheduler.py cloudkitty/storage_state/alembic/versions/c14eea9d3cc1_initial.py cloudkitty/storage_state/alembic/versions/c50ed2c19204_update_storage_state_constraint.py cloudkitty/storage_state/alembic/versions/d9d103dd4dcf_add_state_management_columns.py cloudkitty/tests/__init__.py cloudkitty/tests/samples.py cloudkitty/tests/test_config.py cloudkitty/tests/test_dataframe.py cloudkitty/tests/test_hacking.py cloudkitty/tests/test_hashmap.py cloudkitty/tests/test_keystone_fetcher.py cloudkitty/tests/test_orchestrator.py cloudkitty/tests/test_policy.py cloudkitty/tests/test_pyscripts.py cloudkitty/tests/test_rating.py cloudkitty/tests/test_state.py cloudkitty/tests/test_storage_state.py cloudkitty/tests/utils.py cloudkitty/tests/api/__init__.py cloudkitty/tests/api/v1/__init__.py cloudkitty/tests/api/v1/test_summary.py cloudkitty/tests/api/v1/test_types.py cloudkitty/tests/api/v2/__init__.py cloudkitty/tests/api/v2/test_utils.py cloudkitty/tests/api/v2/dataframes/__init__.py cloudkitty/tests/api/v2/dataframes/test_dataframes.py cloudkitty/tests/api/v2/summary/__init__.py cloudkitty/tests/api/v2/summary/test_summary.py cloudkitty/tests/api/v2/task/__init__.py cloudkitty/tests/api/v2/task/test_reprocess.py cloudkitty/tests/cli/__init__.py cloudkitty/tests/cli/test_status.py cloudkitty/tests/collectors/__init__.py cloudkitty/tests/collectors/test_gnocchi.py cloudkitty/tests/collectors/test_prometheus.py cloudkitty/tests/collectors/test_validation.py cloudkitty/tests/common/test_prometheus_client.py cloudkitty/tests/fetchers/__init__.py cloudkitty/tests/fetchers/test_gnocchi.py cloudkitty/tests/fetchers/test_prometheus.py cloudkitty/tests/gabbi/__init__.py cloudkitty/tests/gabbi/fixtures.py cloudkitty/tests/gabbi/gabbi_paste.ini cloudkitty/tests/gabbi/handlers.py cloudkitty/tests/gabbi/test_gabbi.py cloudkitty/tests/gabbi/gabbits/ks_middleware_auth.yaml cloudkitty/tests/gabbi/gabbits/ks_middleware_cors.yaml cloudkitty/tests/gabbi/gabbits/no_auth.yaml cloudkitty/tests/gabbi/gabbits/root-v1-storage.yaml cloudkitty/tests/gabbi/gabbits/root-v2-storage.yaml cloudkitty/tests/gabbi/gabbits/v1-collector.yaml cloudkitty/tests/gabbi/gabbits/v1-info.yaml cloudkitty/tests/gabbi/gabbits/v1-rating.yaml cloudkitty/tests/gabbi/gabbits/v1-report.yaml cloudkitty/tests/gabbi/gabbits/v1-storage.yaml cloudkitty/tests/gabbi/gabbits/v2-dataframes.yaml cloudkitty/tests/gabbi/gabbits/v2-rating-modules.yaml cloudkitty/tests/gabbi/gabbits/v2-scope-state.yaml cloudkitty/tests/gabbi/gabbits/v2-summary.yaml cloudkitty/tests/gabbi/rating/__init__.py cloudkitty/tests/gabbi/rating/hash/__init__.py cloudkitty/tests/gabbi/rating/hash/fixtures.py cloudkitty/tests/gabbi/rating/hash/test_gabbi.py cloudkitty/tests/gabbi/rating/hash/gabbits/hash-empty.yaml cloudkitty/tests/gabbi/rating/hash/gabbits/hash-errors.yaml 
cloudkitty/tests/gabbi/rating/hash/gabbits/hash-location.yaml cloudkitty/tests/gabbi/rating/hash/gabbits/hash.yaml cloudkitty/tests/gabbi/rating/pyscripts/__init__.py cloudkitty/tests/gabbi/rating/pyscripts/fixtures.py cloudkitty/tests/gabbi/rating/pyscripts/test_gabbi.py cloudkitty/tests/gabbi/rating/pyscripts/gabbits/pyscripts.yaml cloudkitty/tests/storage/__init__.py cloudkitty/tests/storage/v1/__init__.py cloudkitty/tests/storage/v1/test_hybrid_storage.py cloudkitty/tests/storage/v1/test_storage.py cloudkitty/tests/storage/v2/__init__.py cloudkitty/tests/storage/v2/es_utils.py cloudkitty/tests/storage/v2/influx_utils.py cloudkitty/tests/storage/v2/opensearch_utils.py cloudkitty/tests/storage/v2/test_influxdb.py cloudkitty/tests/storage/v2/test_storage_unit.py cloudkitty/tests/storage/v2/elasticsearch/__init__.py cloudkitty/tests/storage/v2/elasticsearch/test_client.py cloudkitty/tests/storage/v2/opensearch/__init__.py cloudkitty/tests/storage/v2/opensearch/test_client.py cloudkitty/tests/utils_tests/__init__.py cloudkitty/tests/utils_tests/test_json.py cloudkitty/tests/utils_tests/test_tz.py cloudkitty/tests/utils_tests/test_utils.py cloudkitty/tests/utils_tests/test_validation.py cloudkitty/utils/__init__.py cloudkitty/utils/json.py cloudkitty/utils/tz.py cloudkitty/utils/validation.py cloudkitty/writer/__init__.py cloudkitty/writer/csv_base.py cloudkitty/writer/csv_map.py cloudkitty/writer/osrf.py contrib/cloudkitty.logrotate contrib/ci/csv_writer.py contrib/init/cloudkitty-api.service contrib/init/cloudkitty-processor.service devstack/README.rst devstack/apache-cloudkitty.template devstack/plugin.sh devstack/settings devstack/files/influxdb.conf devstack/upgrade/resources.sh devstack/upgrade/settings devstack/upgrade/shutdown.sh devstack/upgrade/upgrade.sh doc/.gitignore doc/Makefile doc/requirements.txt doc/source/api-reference doc/source/common-index.rst doc/source/conf.py doc/source/index.rst doc/source/pdf-index.rst doc/source/_static/cloudkitty.policy.yaml.sample doc/source/admin/architecture.rst doc/source/admin/devstack.rst doc/source/admin/index.rst doc/source/admin/cli/cloudkitty-status.rst doc/source/admin/cli/index.rst doc/source/admin/configuration/collector.rst doc/source/admin/configuration/configuration.rst doc/source/admin/configuration/fetcher.rst doc/source/admin/configuration/index.rst doc/source/admin/configuration/policy.rst doc/source/admin/configuration/storage.rst doc/source/admin/configuration/samples/cloudkitty-conf.rst doc/source/admin/configuration/samples/policy-yaml.rst doc/source/admin/install/index.rst doc/source/admin/install/install-rdo.rst doc/source/admin/install/install-source.rst doc/source/admin/install/install-ubuntu.rst doc/source/admin/install/mod_wsgi.rst doc/source/concepts/index.rst doc/source/contributor/contributing.rst doc/source/developer/collector.rst doc/source/developer/fetcher.rst doc/source/developer/index.rst doc/source/developer/roadmap.rst doc/source/developer/storage.rst doc/source/developer/api/index.rst doc/source/developer/api/tutorial.rst doc/source/developer/api/utils.rst doc/source/images/cloudkitty-logo.png doc/source/images/cloudkitty_architecture.png doc/source/images/cloudkitty_modules.png doc/source/user/index.rst doc/source/user/rating/hashmap.rst doc/source/user/rating/index.rst doc/source/user/rating/pyscripts.rst doc/source/user/rating/graph/hashmap.dot etc/apache2/cloudkitty etc/cloudkitty/api_paste.ini etc/cloudkitty/metrics.yml etc/oslo-config-generator/cloudkitty.conf 
etc/oslo-policy-generator/cloudkitty.conf releasenotes/notes/add-dataframe-datapoint-objects-a5a4ac3db5289cb6.yaml releasenotes/notes/add-dataframes-v2-api-endpoint-601825c344ba0e2d.yaml releasenotes/notes/add-description-option-to-rating-671430ac73c0315b.yaml releasenotes/notes/add-gnocchi-fetcher-b8a6e2ea49fcfec5.yaml releasenotes/notes/add-influx-storage-backend-3ace5b451e789e64.yaml releasenotes/notes/add-new-validation-to-not-allow-reprocessing-with-incompatible-timewindows-5a44802f20bce4f2.yaml releasenotes/notes/add-opensearch-as-v2-storage-backend-ff4080d6d32d8a2a.yaml releasenotes/notes/add-prometheus-fetcher-be6082f70f279f0e.yaml releasenotes/notes/add-re-aggregation-method-option-gnocchi-collector-249917a14c4fc721.yaml releasenotes/notes/add-scope-key-58135c2a5c6dae68.yaml releasenotes/notes/add-storage-state-v2-api-endpoint-45a29d0b44e177b8.yaml releasenotes/notes/add-storage-state-v2-api-endpoint-492d7092e85ed7b1.yaml releasenotes/notes/add-support-to-influxdb-v2-storage-backend-f94df79f9e5276a8.yaml releasenotes/notes/add-tempest-plugin-3584e1918f344fb2.yaml releasenotes/notes/add-v2-storage-driver-for-elasticsearch-ec41cbb7849e82d3.yaml releasenotes/notes/add_warning_regarding_gnocchi_version-99d5213c35950e39.yaml releasenotes/notes/added-forced-granularity-gnocchi-d52e988194197248.yaml releasenotes/notes/added-v2-api-1ef829355c2feea4.yaml releasenotes/notes/admin-or-owner-policy-c666346da4405d13.yaml releasenotes/notes/allow-multiple-ranting-types-for-same-metric-in-gnocchi-1011ba2d5d36c073.yaml releasenotes/notes/batch-delete-reprocessing-d46df15b078a42a5.yaml releasenotes/notes/change-metrology-organization-1e11900eb30780cc.yaml releasenotes/notes/check-duplicates-metadata-groupby-d5ee99941bb483fd.yaml releasenotes/notes/collector-monasca-f0871406513ff22c.yaml releasenotes/notes/create-use_all_entries_for_timespan-option-for-gnocchi-collector-39d29603b1f554e1.yaml releasenotes/notes/custom-gnocchi-query-a391f5e83d55d771.yaml releasenotes/notes/dataframes-get-v2-policy-check-6070fc047b2e1496.yaml releasenotes/notes/default-to-v2-storage-a5ecac7e73dafa6d.yaml releasenotes/notes/deprecate-ceilometer-collector-6d8f72c84b95662b.yaml releasenotes/notes/deprecate-collector-mappings-5a69b31c8037fc01.yaml releasenotes/notes/deprecate-elasticsearch-for-opensearch-a338965edff23509.yaml releasenotes/notes/deprecate-get-state-2932a4e6a74295ce.yaml releasenotes/notes/deprecate-info-services-endpoints-0c5018cb08a30d5f.yaml releasenotes/notes/deprecate-json-formatted-policy-file-01ceb65712fd0a39.yaml releasenotes/notes/deprecate-monasca-5526b823b227c6ef.yaml releasenotes/notes/deprecate-report-total-62544dce42bb19a6.yaml releasenotes/notes/deprecate_section_name-9f1ce1f84d09adf8.yaml releasenotes/notes/drop-py-2-7-fcf8c0613a7bffa8.yaml releasenotes/notes/fetch-metrics-concurrently-dffffe346bd4900e.yaml releasenotes/notes/fix-begin-end-validation-v2-summary-52401fb47ef9b5d6.yaml releasenotes/notes/fix-csv-usage-end-7bcf4cb5effc4461.yaml releasenotes/notes/fix-dataframe-filtering-282cae643457bb8b.yaml releasenotes/notes/fix-gnocchi-metadata-collection-74665e862483a383.yaml releasenotes/notes/fix-hashmap-mapping-value-match-56570510203ce3e5.yaml releasenotes/notes/fix-lock-release-74d112c8599c9a59.yaml releasenotes/notes/fix-opensearch-report-344508dd4e3d0ccc.yaml releasenotes/notes/fix-project-id-none-d40df33fc7b7db23.yaml releasenotes/notes/fix-quote-v1-api-7282f01b596f0f3b.yaml releasenotes/notes/fix-rating-rules-value-precision-40d1054f8ab494c3.yaml 
releasenotes/notes/fix-response-total-for-elastic-search-a3a9244380ed046f.yaml releasenotes/notes/fix-scope-state-reset-filters-0a1f5ea503bd32a1.yaml releasenotes/notes/fix-url-building-do-init-7c952afaf6d909cd.yaml releasenotes/notes/fix-v1-storage-groupby-e865d1315bd390cb.yaml releasenotes/notes/fix-v1-summary-and-total-with-es-os-backend-9540741b80819672.yaml releasenotes/notes/fix_py_scripts-fd9ab52c92263844.yaml releasenotes/notes/force-project-id-monasca-collector-cb30ed073d36d40e.yaml releasenotes/notes/get-dataframes-v2-api-endpoint-3a4625c6008a5fca.yaml releasenotes/notes/harden-dataframes-policy-7786286525e52dfb.yaml releasenotes/notes/ignore_disabled_tenants-and-ignore_rating_role-dfe542a0cafd412e.yaml releasenotes/notes/improve-metrics-configuration-271102366f8e6fe7.yaml releasenotes/notes/introduce-active-status-field-cdfecd27c2bb9a42.yaml releasenotes/notes/introduce-bandit-security-linter-592faa26f957a3dd.yaml releasenotes/notes/introduce-cloudkitty.utils-792b9080537405bf.yaml releasenotes/notes/introduce-reprocessing-api-822db3edc256507a.yaml releasenotes/notes/make-cloudkitty-timezone-aware-2b65edc42e913d6c.yaml releasenotes/notes/make-gnocchi-http-max-connections-pool-configurable-52c9f6617466ea30.yaml releasenotes/notes/make-processor-run-several-workers-02597b0f77687ef3.yaml releasenotes/notes/map-mutator-632b8629c0482e94.yaml releasenotes/notes/monasca-fetcher-2ea866f873ab5336.yaml releasenotes/notes/move-api-docs-to-api-ref-be71b864e557110e.yaml releasenotes/notes/multiple_values_filter_summary_get_v2_api-1110373a900fad0d.yaml releasenotes/notes/new-forcegranularity-default-b8aaf7d7823aef3b.yaml releasenotes/notes/notnumbool-mutator-ab056e86f2bc843d.yaml releasenotes/notes/optimize_gnochi-fetcher-41b502e7ca242cb1.yaml releasenotes/notes/optimize_gnochi-fetcher-runtime-3604026816.yaml releasenotes/notes/optimizing-sql-queries-939f48fff1805389.yaml releasenotes/notes/patch-use-all-revision-0325eeb0f7871c35.yaml releasenotes/notes/post-api-create-scope-739098144706a1cf.yaml releasenotes/notes/prometheus-collector-empty-meta-12402d8f0254c011.yaml releasenotes/notes/prometheus-collector-mutate-8da4748b4d1f0b59.yaml releasenotes/notes/prometheus-custom-query-ab2dc00e97b14be2.yaml releasenotes/notes/prometheus-error-8eab9f1793c2280c.yaml releasenotes/notes/raise-exception-on-invalid-config-0aece71caa0947fa.yaml releasenotes/notes/rating-modules-v2-7e4e7a3c5fa96331.yaml releasenotes/notes/refactor-storage-e5453296e477e594.yaml releasenotes/notes/register-keystone-opts-with-keystoneauth-functions-monasca-collector-1a539fc8c23e9dbc.yaml releasenotes/notes/remove-ceilometer-collector-b310bf6c5736c88a.yaml releasenotes/notes/remove-dateutil-tz-utc-usage-1350c00be3fadde7.yaml releasenotes/notes/remove-deprecated-api-endpoints-26606e322b8a225e.yaml releasenotes/notes/remove-deprecated-config-section-names-9a125b1af0932c08.yaml releasenotes/notes/remove-deprecated-storage-backends-158fbec099846ec7.yaml releasenotes/notes/remove-fake-fetcher-9c264520a3cec9d0.yaml releasenotes/notes/remove-fake-meta-collectors-5ed94ab1165e9661.yaml releasenotes/notes/remove-gnocchi-transformer-1dad750b9ba6c2e4.yaml releasenotes/notes/remove-monasca-429122691d0e5d52.yaml releasenotes/notes/remove-state-attribute-scope-28e48ae4ada5208d.yaml releasenotes/notes/remove-transformers-8d9949ed3088b055.yaml releasenotes/notes/remove-v2-gnocchi-storage-a83bd58008bfd92e.yaml releasenotes/notes/replace-eventlet-with-futurist-60f1fe6474a5efcf.yaml releasenotes/notes/reprocess-get-fix-f2bd1f2f9e2d640e.yaml 
releasenotes/notes/reprocessing-concurrency-issues-2a71f4d86a93c507.yaml releasenotes/notes/response_format-v2-summary-api-270facdb01d9202b.yaml releasenotes/notes/rework-prometheus-collector-02bd6351d447e4fe.yaml releasenotes/notes/rework-prometheus-collector-f9f34a3792888dad.yaml releasenotes/notes/skip-period-if-nonexistent-metric-ba56a671e68f5bf5.yaml releasenotes/notes/source-fetcher-43c4352508f7f944.yaml releasenotes/notes/status-upgrade-check-fdcf054643e071d8.yaml releasenotes/notes/support-cross-tenant-metric-submission-monasca-collector-508b495bc88910ca.yaml releasenotes/notes/support-group-by-timeframes-1247aa336916f3b6.yaml releasenotes/notes/support-groupby-time-v2-summary-48ff5ad671f8c7c5.yaml releasenotes/notes/use-interface-param-endpoint-discovery-monasca-collector-7477e86cd7e5acf4.yaml releasenotes/source/2023.1.rst releasenotes/source/2023.2.rst releasenotes/source/2024.1.rst releasenotes/source/conf.py releasenotes/source/index.rst releasenotes/source/ocata.rst releasenotes/source/pike.rst releasenotes/source/queens.rst releasenotes/source/rocky.rst releasenotes/source/stein.rst releasenotes/source/train.rst releasenotes/source/unreleased.rst releasenotes/source/ussuri.rst releasenotes/source/victoria.rst releasenotes/source/wallaby.rst releasenotes/source/xena.rst releasenotes/source/yoga.rst releasenotes/source/zed.rst releasenotes/source/_static/.placeholder releasenotes/source/_templates/.placeholder././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866639.0 cloudkitty-21.0.0/cloudkitty.egg-info/dependency_links.txt0000664000175000017500000000000100000000000023743 0ustar00zuulzuul00000000000000 ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866639.0 cloudkitty-21.0.0/cloudkitty.egg-info/entry_points.txt0000664000175000017500000000343200000000000023175 0ustar00zuulzuul00000000000000[cloudkitty.collector.backends] gnocchi = cloudkitty.collector.gnocchi:GnocchiCollector prometheus = cloudkitty.collector.prometheus:PrometheusCollector [cloudkitty.fetchers] gnocchi = cloudkitty.fetcher.gnocchi:GnocchiFetcher keystone = cloudkitty.fetcher.keystone:KeystoneFetcher prometheus = cloudkitty.fetcher.prometheus:PrometheusFetcher source = cloudkitty.fetcher.source:SourceFetcher [cloudkitty.output.writers] csv = cloudkitty.writer.csv_map:CSVMapped osrf = cloudkitty.writer.osrf:OSRFBackend [cloudkitty.rating.processors] hashmap = cloudkitty.rating.hash:HashMap noop = cloudkitty.rating.noop:Noop pyscripts = cloudkitty.rating.pyscripts:PyScripts [cloudkitty.storage.hybrid.backends] gnocchi = cloudkitty.storage.v1.hybrid.backends.gnocchi:GnocchiStorage [cloudkitty.storage.v1.backends] hybrid = cloudkitty.storage.v1.hybrid:HybridStorage sqlalchemy = cloudkitty.storage.v1.sqlalchemy:SQLAlchemyStorage [cloudkitty.storage.v2.backends] elasticsearch = cloudkitty.storage.v2.elasticsearch:ElasticsearchStorage influxdb = cloudkitty.storage.v2.influx:InfluxStorage opensearch = cloudkitty.storage.v2.opensearch:OpenSearchStorage [console_scripts] cloudkitty-dbsync = cloudkitty.cli.dbsync:main cloudkitty-processor = cloudkitty.cli.processor:main cloudkitty-status = cloudkitty.cli.status:main cloudkitty-storage-init = cloudkitty.cli.storage:main cloudkitty-writer = cloudkitty.cli.writer:main [oslo.config.opts] cloudkitty.common.config = cloudkitty.common.config:list_opts [oslo.config.opts.defaults] cloudkitty.common.config = cloudkitty.common.defaults:set_config_defaults [oslo.policy.enforcer] 
cloudkitty = cloudkitty.common.policy:get_enforcer [oslo.policy.policies] cloudkitty = cloudkitty.common.policies:list_rules [wsgi_scripts] cloudkitty-api = cloudkitty.api.app:build_wsgi_app ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866639.0 cloudkitty-21.0.0/cloudkitty.egg-info/not-zip-safe0000664000175000017500000000000100000000000022123 0ustar00zuulzuul00000000000000 ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866639.0 cloudkitty-21.0.0/cloudkitty.egg-info/pbr.json0000664000175000017500000000005600000000000021354 0ustar00zuulzuul00000000000000{"git_version": "76850ca", "is_release": true}././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866639.0 cloudkitty-21.0.0/cloudkitty.egg-info/requires.txt0000664000175000017500000000117600000000000022302 0ustar00zuulzuul00000000000000Flask-RESTful>=0.3.9 Flask>=2.0.0 PasteDeploy>=2.1.1 SQLAlchemy>=1.3.20 WSME>=0.10.0 alembic>=1.4.3 cotyledon>=1.7.3 datetimerange>=0.6.1 futurist>=2.3.0 gnocchiclient>=7.0.6 influxdb-client>=1.36.0 influxdb>=5.3.1 iso8601>=0.1.13 keystoneauth1>=4.2.1 keystonemiddleware>=9.1.0 oslo.concurrency>=4.3.1 oslo.config>=8.3.3 oslo.context>=3.1.1 oslo.db>=8.4.0 oslo.i18n>=5.0.1 oslo.log>=4.4.0 oslo.messaging>=14.1.0 oslo.middleware>=4.1.1 oslo.policy>=3.6.0 oslo.upgradecheck>=1.3.0 oslo.utils>=4.7.0 pbr>=5.5.1 pecan>=1.3.3 python-dateutil>=2.8.0 python-keystoneclient>=4.1.1 requests>=2.14.2 stevedore>=3.2.2 tooz>=2.7.1 voluptuous>=0.12.0 ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866639.0 cloudkitty-21.0.0/cloudkitty.egg-info/top_level.txt0000664000175000017500000000001300000000000022421 0ustar00zuulzuul00000000000000cloudkitty ././@PaxHeader0000000000000000000000000000003300000000000011451 xustar000000000000000027 mtime=1727866639.271487 cloudkitty-21.0.0/contrib/0000775000175000017500000000000000000000000015450 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000003300000000000011451 xustar000000000000000027 mtime=1727866639.271487 cloudkitty-21.0.0/contrib/ci/0000775000175000017500000000000000000000000016043 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/contrib/ci/csv_writer.py0000775000175000017500000004114200000000000020611 0ustar00zuulzuul00000000000000#!/usr/bin/env python # -*- coding: utf-8 -*- # Copyright 2015 Objectif Libre # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
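#
# CI helper script: it builds one month of fake hourly samples (compute
# instances, images, volumes, network bandwidth and floating IPs), applies
# weekday/hour usage variations, and writes them out as CSV rows with the
# columns begin, end, type, desc and vol. Illustrative invocation (the
# output path below is only an example; it defaults to generated.csv when
# no argument is given):
#
#     python contrib/ci/csv_writer.py /tmp/samples.csv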
# import calendar import copy import csv import datetime import random import sys import uuid from cloudkitty.utils import json COMPUTE = { "type": "compute", "desc": {}, "vol": { "qty": 1, "unit": "instance"}} COMPUTE_RESOURCE = { "availability_zone": "nova", "flavor": "m1.nano", "image_id": "f5600101-8fa2-4864-899e-ebcb7ed6b568", "memory": "64", "metadata": { "farm": "prod"}, "name": "prod1", "project_id": "f266f30b11f246b589fd266f85eeec39", "user_id": "55b3379b949243009ee96972fbf51ed1", "vcpus": "1"} IMAGE = { "type": "image", "desc": {}, "vol": { "qty": 214106112.0, "unit": "B"}} IMAGE_RESOURCE = { "name": "cirros-0.3.4-x86_64-uec-ramdisk", "checksum": "be575a2b939972276ef675752936977f", "disk_format": "ari", "protected": "False", "container_format": "ari", "min_disk": "0", "is_public": "True", "min_ram": "0", "project_id": "f1873b13951542268bf7eed7cf971e52", "resource_id": "08017fbc-b13a-4d8d-b002-4eb4eff54cd4", "source": "openstack", "user_id": "None", "size": "214106112"} VOLUME = { "type": "volume", "desc": {}, "vol": { "qty": 1, "unit": "GB"}} VOLUME_RESOURCE = { 'instance_uuid': 'None', 'status': 'available', 'display_name': 'test-vol', 'event_type': 'volume.create.end', 'availability_zone': 'nova', 'tenant_id': 'cd27b013b9db4f4099e273e4b9949023', 'created_at': '2015-04-28 13:34:25', 'snapshot_id': 'None', 'volume_type': '314150bc-221f-4676-a7cf-16f12850b217', 'host': 'volume.devstack@lvmdriver-1#lvmdriver-1', 'replication_driver_data': 'None', 'replication_status': 'disabled', 'volume_id': '2bed6a3d-468a-459b-802b-44930016c0a3', 'replication_extended_status': 'None', 'user_id': '2524d5a52ce64a569d131d7dc1dfb455', 'launched_at': '2015-04-28 13:34:26.869928', 'size': '1', "project_id": "f1873b13951542268bf7eed7cf971e52", "resource_id": "08017fbc-b13a-4d8d-b002-4eb4eff54cd4", "source": "openstack"} NETWORK_BW_IN = { "type": "network.bw.in", "desc": {}, "vol": { "qty": 4546.0, "unit": "B"}} NETWORK_BW_OUT = { "type": "network.bw.out", "desc": {}, "vol": { "qty": 50.0, "unit": "MB"}} NETWORK_BW_RESOURCE = { 'instance_id': 'eef9673d-5d24-43fd-89f5-2929acc7e193', 'instance_type': '42', 'mac': 'fa:16:3e:dd:b2:80', 'fref': 'None', 'name': 'tap12a7d4e1-fc', "project_id": "f1873b13951542268bf7eed7cf971e52", "resource_id": "08017fbc-b13a-4d8d-b002-4eb4eff54cd4", "source": "openstack", "user_id": "None"} FLOATING = { "type": "network.floating", "desc": {}, "vol": { "qty": 1.0, "unit": "ip"}} FLOATING_RESOURCE = { 'router_id': '3d9b2725-fc90-43d0-b119-630e80e1ec51', 'status': 'DOWN', 'event_type': 'floatingip.update.end', 'tenant_id': 'cd27b013b9db4f4099e273e4b9949023', 'floating_network_id': '14198fb4-dc96-45fb-9dde-546f6b0f892f', 'host': 'network.devstack', 'fixed_ip_address': '10.0.0.5', 'floating_ip_address': '172.24.4.3', 'port_id': '12a7d4e1-fcc3-4a3c-8e57-c4baf7787b57', "project_id": "cd27b013b9db4f4099e273e4b9949023", "resource_id": "ebf6485d-7f6f-4c67-97f7-7896324e12d4", "source": "openstack", "user_id": "7319b5d1269d4166a402868b570aad19", 'id': 'ebf6485d-7f6f-4c67-97f7-7896324e12d4'} class VariationMapper(object): day_map = { 0: 'mon', 1: 'tue', 2: 'wed', 3: 'thu', 4: 'fri', 5: 'sat', 6: 'sun'} var_map = { 'mon': {}, 'tue': {}, 'wed': {}, 'thu': {}, 'fri': {}, 'sat': {}, 'sun': {}} def __init__(self, default=1.0): self.default = default def get_var(self, dt): weekday = self.day_map[dt.weekday()] if weekday in self.var_map: wday_map = self.var_map[weekday] if dt.hour in wday_map: return wday_map[dt.hour] elif 'default' in wday_map: return wday_map['default'] elif 'default' in 
self.var_map: return self.var_map['default'] return self.default class VolumeVariationMapper(VariationMapper): def get_vol(self, dt): value = self.get_var(dt) var_value = value * 0.1 return random.gauss(value, var_value) class BaseGenerator(object): base_sample = None base_resource = None rand = True field_maps = {} def __init__(self, nb_res=1, var_map=None, vol_map=None): self.nb_res = nb_res self.var_map = var_map if var_map else VariationMapper() self.vol_map = vol_map self.init_mapper() self.resources = [] def init_mapper(self): pass def generate_resources(self): for i in range(self.nb_res): res = copy.deepcopy(self.base_resource) for field, mapping in self.field_maps.items(): if hasattr(self, mapping): mapping = getattr(self, mapping) if isinstance(mapping, dict): if self.rand: value = random.choice(mapping.keys()) else: value = mapping.keys()[i] res[field] = value for k, v in mapping[value].items(): res[k] = v elif isinstance(mapping, list): if self.rand: value = random.choice(mapping) else: value = mapping[i] res[field] = value elif callable(mapping): res[field] = mapping(i) else: res[field] = mapping self.resources.append(res) def generate_samples(self, dt): samples = [] res_var = int(self.var_map.get_var(dt)) for i in range(res_var): sample = copy.deepcopy(self.base_sample) sample['desc'] = self.resources[i] if self.vol_map: qty = self.vol_map.get_vol(dt) sample['vol']['qty'] = qty elif 'size' in sample['desc']: sample['vol']['qty'] = sample['desc']['size'] # Packing sample['desc'] = json.dumps(self.resources[i]) sample['vol'] = json.dumps(sample['vol']) samples.append(sample) return samples class ComputeVarMapper(VariationMapper): var_map = { 'mon': { 'default': 1.0, 12: 2.0, 13: 3.0, 14: 2.0, 18: 2.0, 19: 3.0, 20: 4.0, 21: 4.0, 22: 3.0, 23: 2.0, }, 'tue': { 'default': 1.0, 12: 2.0, 13: 3.0, 14: 2.0, 18: 2.0, 19: 3.0, 20: 4.0, 21: 4.0, 22: 3.0, 23: 2.0, }, 'wed': { 'default': 1.0, 12: 2.0, 13: 3.0, 14: 2.0, 18: 2.0, 19: 3.0, 20: 4.0, 21: 4.0, 22: 3.0, 23: 2.0, }, 'thu': { 'default': 1.0, 12: 2.0, 13: 3.0, 14: 2.0, 18: 2.0, 19: 3.0, 20: 4.0, 21: 4.0, 22: 3.0, 23: 2.0, }, 'fri': { 'default': 1.0, 12: 2.0, 13: 3.0, 14: 2.0, 18: 2.0, 19: 3.0, 20: 4.0, 21: 4.0, 22: 3.0, 23: 2.0, }, 'sat': { 'default': 2.0, 12: 3.0, 13: 4.0, 14: 3.0, 18: 3.0, 19: 4.0, 20: 4.0, 21: 4.0, 22: 4.0, 23: 3.0, }, 'sun': { 'default': 2.0, 12: 3.0, 13: 4.0, 14: 3.0, 18: 3.0, 19: 4.0, 20: 4.0, 21: 4.0, 22: 4.0, 23: 3.0, }} class ComputeGenerator(BaseGenerator): base_sample = COMPUTE base_resource = COMPUTE_RESOURCE field_maps = {'flavor': 'flavors', 'image': 'images', 'name': 'generate_name', 'resource_id': 'res_id'} def init_mapper(self): self.flavors = { 'm1.nano': { 'vcpus': '1', 'memory': '64'}, 'm1.micro': { 'vcpus': '1', 'memory': '128'}} self.images = [] self.res_id = [] for i in range(self.nb_res): self.res_id.append(str(uuid.uuid1())) def generate_name(self, *args): basename = 'instance{}' return basename.format(args[0]) class ImageGenerator(BaseGenerator): base_sample = IMAGE base_resource = IMAGE_RESOURCE field_maps = {'name': 'images'} rand = False def init_mapper(self): self.images = { 'cirros-0.3.4-x86_64-uec-kernel': { 'checksum': '836c69cbcd1dc4f225daedbab6edc7c7', 'disk_format': 'ari', 'container_format': 'ari', 'size': '4969360', 'resource_id': '5dd34048-6eeb-4b6c-aa51-62487733e5a1'}, 'cirros-0.3.4-x86_64-uec-ramdisk': { 'checksum': '68085af2609d03e51c7662395b5b6e4b', 'disk_format': 'aki', 'container_format': 'aki', 'size': '3723817', 'resource_id': 'e512a97d-1ed5-4b27-a55a-1b9e5087936a'}, 
'Fedora-x86_64-20-20131211.1-sda': { 'checksum': '51bc16b900bf0f814bb6c0c3dd8f0790', 'disk_format': 'qcow2', 'container_format': 'bare', 'size': '214106112', 'resource_id': '3ee99f3f-7ecf-47b2-9a40-6df2d66ef5ae'}} class VolumeGenerator(BaseGenerator): base_sample = VOLUME base_resource = VOLUME_RESOURCE field_maps = {'volume_id': 'volumes'} rand = False def init_mapper(self): self.volumes = { '2bed6a3d-468a-459b-802b-44930016c0a3': { 'size': '10'}, '4fd33321-6a5f-4351-94ca-db398cd708e9': { 'size': '20'}} def generate_name(self, *args): basename = 'volume{}' return basename.format(args[0]) class NetBWVolMapper(VolumeVariationMapper): var_map = { 'mon': { 'default': 1024, 12: 2048, 13: 3072, 14: 2048, 18: 2048, 19: 3072, 20: 4096, 21: 4096, 22: 3072, 23: 2048, }, 'tue': { 'default': 1024, 12: 2048, 13: 3072, 14: 2048, 18: 2048, 19: 3072, 20: 4096, 21: 4096, 22: 3072, 23: 2048, }, 'wed': { 'default': 1024, 12: 2048, 13: 3072, 14: 2048, 18: 2048, 19: 3072, 20: 4096, 21: 4096, 22: 3072, 23: 2048, }, 'thu': { 'default': 1024, 12: 2048, 13: 3072, 14: 2048, 18: 2048, 19: 3072, 20: 4096, 21: 4096, 22: 3072, 23: 2048, }, 'fri': { 'default': 1024, 12: 2048, 13: 3072, 14: 2048, 18: 2048, 19: 3072, 20: 4096, 21: 4096, 22: 3072, 23: 2048, }, 'sat': { 'default': 2048, 12: 3072, 13: 4096, 14: 3072, 18: 3072, 19: 4096, 20: 4096, 21: 4096, 22: 4096, 23: 3072, }, 'sun': { 'default': 2048, 12: 3072, 13: 4096, 14: 3072, 18: 3072, 19: 4096, 20: 4096, 21: 4096, 22: 4096, 23: 3072, }} class NetworkBWGenerator(BaseGenerator): base_sample = NETWORK_BW_IN base_resource = NETWORK_BW_RESOURCE field_maps = {'instance_id': 'instances', 'name': 'generate_name', 'mac': 'generate_mac', 'resource_id': 'res_id'} rand = False def init_mapper(self): self.instances = [] self.res_id = [] for i in range(self.nb_res): self.res_id.append(str(uuid.uuid1())) def generate_name(self, *args): basename = 'tap{}-fc' return basename.format(args[0]) def generate_mac(self, *args): basemac = 'fa:16:3e:{:0=2x}:{:0=2x}:{:0=2x}' return basemac.format( random.randint(1, 255), random.randint(1, 255), random.randint(1, 255)) def generate_samples(self, dt): samples = [] self.base_sample = NETWORK_BW_OUT samples.extend(super(NetworkBWGenerator, self).generate_samples(dt)) self.base_sample = NETWORK_BW_IN samples.extend(super(NetworkBWGenerator, self).generate_samples(dt)) return samples class FloatingGenerator(BaseGenerator): base_sample = FLOATING base_resource = FLOATING_RESOURCE field_maps = {'fixed_ip_address': 'generate_ip_addr', 'floating_ip_address': 'generate_floating_addr', 'port_id': 'generate_port_id', 'resource_id': 'res_id', 'id': 'res_id'} rand = False def init_mapper(self): self.res_id = [] for i in range(self.nb_res): self.res_id.append(str(uuid.uuid1())) def generate_name(self, *args): basename = 'volume{}' return basename.format(args[0]) def generate_port_id(self, *args): return str(uuid.uuid1()) def generate_ip_addr(self, *args): baseip = '10.0.0.{}' return baseip.format(random.randint(5, 250)) def generate_floating_addr(self, *args): baseip = '172.24.4.{}' return baseip.format(random.randint(5, 250)) def write_samples(writer, dt, samples): for sample in samples: ts = calendar.timegm(dt.timetuple()) sample['begin'] = ts sample['end'] = ts + 3600 writer.writerow(sample) def main(): # Generators compute_var = ComputeVarMapper() image = ImageGenerator(3) image.generate_resources() volume = VolumeGenerator(2) volume.generate_resources() floating = FloatingGenerator(4, compute_var) floating.generate_resources() compute = 
ComputeGenerator(4, compute_var) compute.images = [resource['resource_id'] for resource in image.resources] compute.generate_resources() net_bw = NetworkBWGenerator(4, compute_var, NetBWVolMapper()) net_bw.instances = [resource['resource_id'] for resource in compute.resources] net_bw.generate_resources() generators = [compute, image, volume, net_bw, floating] # Date now = datetime.datetime.utcnow() hour_delta = datetime.timedelta(hours=1) cur_date = now.replace(day=1, hour=0, minute=0, second=0, microsecond=0) cur_month = cur_date.month filename = sys.argv[1] if len(sys.argv) > 1 else 'generated.csv' with open(filename, 'wb') as csvfile: writer = csv.DictWriter(csvfile, ['begin', 'end', 'type', 'desc', 'vol']) writer.writeheader() while cur_date.month == cur_month: for generator in generators: samples = generator.generate_samples(cur_date) write_samples(writer, cur_date, samples) cur_date += hour_delta if __name__ == '__main__': main() ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/contrib/cloudkitty.logrotate0000664000175000017500000000014200000000000021562 0ustar00zuulzuul00000000000000/var/log/cloudkitty/*.log { weekly rotate 4 missingok compress minsize 100k } ././@PaxHeader0000000000000000000000000000003300000000000011451 xustar000000000000000027 mtime=1727866639.271487 cloudkitty-21.0.0/contrib/init/0000775000175000017500000000000000000000000016413 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/contrib/init/cloudkitty-api.service0000664000175000017500000000042000000000000022733 0ustar00zuulzuul00000000000000[Unit] Description=CloudKitty API Service After=syslog.target network.target [Service] Type=simple User=cloudkitty ExecStart=/usr/bin/cloudkitty-api --logfile /var/log/cloudkitty/api.log --config-file /etc/cloudkitty/cloudkitty.conf [Install] WantedBy=multi-user.target ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/contrib/init/cloudkitty-processor.service0000664000175000017500000000044200000000000024205 0ustar00zuulzuul00000000000000[Unit] Description=CloudKitty processor Service After=syslog.target network.target [Service] Type=simple User=cloudkitty ExecStart=/usr/bin/cloudkitty-processor --logfile /var/log/cloudkitty/processor.log --config-file /etc/cloudkitty/cloudkitty.conf [Install] WantedBy=multi-user.target ././@PaxHeader0000000000000000000000000000003300000000000011451 xustar000000000000000027 mtime=1727866639.271487 cloudkitty-21.0.0/devstack/0000775000175000017500000000000000000000000015614 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/devstack/README.rst0000664000175000017500000000233500000000000017306 0ustar00zuulzuul00000000000000==================================== Installing CloudKitty using DevStack ==================================== The ``devstack`` directory contains the required files to integrate CloudKitty with DevStack. Configure DevStack to run CloudKitty ==================================== .. code-block:: bash $ DEVSTACK_DIR=/path/to/devstack 1. Enable Ceilometer: .. code-block:: bash $ cd ${DEVSTACK_DIR} $ cat >> local.conf << EOF [[local|localrc]] # ceilometer enable_plugin ceilometer https://opendev.org/openstack/ceilometer.git master EOF 2. Enable CloudKitty: .. 
code-block:: bash $ cd ${DEVSTACK_DIR} $ cat >> local.conf << EOF # cloudkitty enable_plugin cloudkitty https://opendev.org/openstack/cloudkitty master enable_service ck-api ck-proc EOF 3. Set CloudKitty collector to gnocchi: .. code-block:: bash $ cd ${DEVSTACK_DIR} $ cat >> local.conf << EOF CLOUDKITTY_COLLECTOR=gnocchi EOF Run devstack as usual: .. code-block:: bash $ ./stack.sh See the documentation_ if you want more details about how to configure the devstack plugin. .. _documentation: https://docs.openstack.org/cloudkitty/latest/devstack.html ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/devstack/apache-cloudkitty.template0000664000175000017500000000077600000000000022775 0ustar00zuulzuul00000000000000Listen %PORT% <VirtualHost *:%PORT%> WSGIDaemonProcess cloudkitty-api processes=2 threads=10 user=%USER% display-name=%{GROUP} python-home=%VIRTUALENV% WSGIProcessGroup cloudkitty-api WSGIScriptAlias / %WSGIAPP% WSGIApplicationGroup %{GLOBAL} <IfVersion >= 2.4> ErrorLogFormat "%{cu}t %M" </IfVersion> ErrorLog /var/log/%APACHE_NAME%/cloudkitty.log CustomLog /var/log/%APACHE_NAME%/cloudkitty_access.log combined </VirtualHost> WSGISocketPrefix /var/run/%APACHE_NAME% ././@PaxHeader0000000000000000000000000000003300000000000011451 xustar000000000000000027 mtime=1727866639.271487 cloudkitty-21.0.0/devstack/files/0000775000175000017500000000000000000000000016716 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/devstack/files/influxdb.conf0000664000175000017500000004677500000000000021413 0ustar00zuulzuul00000000000000## Modified version of: https://github.com/influxdata/influxdb/blob/master/etc/config.sample.toml ### Welcome to the InfluxDB configuration file. # The values in this file override the default values used by the system if # a config option is not specified. The commented out lines are the configuration # field and the default value used. Uncommenting a line and changing the value # will change the value used at runtime when the process is restarted. # Once every 24 hours InfluxDB will report usage data to usage.influxdata.com # The data includes a random ID, os, arch, version, the number of series and other # usage data. No data from user databases is ever transmitted. # Change this option to true to disable reporting. # reporting-disabled = true # Bind address to use for the RPC service for backup and restore. # bind-address = "127.0.0.1:8088" ### ### [meta] ### ### Controls the parameters for the Raft consensus group that stores metadata ### about the InfluxDB cluster. ### [meta] # Where the metadata/raft database is stored dir = "/var/lib/influxdb/meta" # Automatically create a default retention policy when creating a database. # retention-autocreate = true # If log messages are printed for the meta service # logging-enabled = true ### ### [data] ### ### Controls where the actual shard data for InfluxDB lives and how it is ### flushed from the WAL. "dir" may need to be changed to a suitable place ### for your system, but the WAL settings are an advanced configuration. The ### defaults should work for most systems. ### [data] # The directory where the TSM storage engine stores TSM files. dir = "/var/lib/influxdb/data" # The directory where the TSM storage engine stores WAL files. wal-dir = "/var/lib/influxdb/wal" # The amount of time that a write will wait before fsyncing. A duration # greater than 0 can be used to batch up multiple fsync calls. 
This is useful for slower # disks or when WAL write contention is seen. A value of 0s fsyncs every write to the WAL. # Values in the range of 0-100ms are recommended for non-SSD disks. # wal-fsync-delay = "0s" # The type of shard index to use for new shards. The default is an in-memory index that is # recreated at startup. A value of "tsi1" will use a disk based index that supports higher # cardinality datasets. # index-version = "inmem" # Trace logging provides more verbose output around the tsm engine. Turning # this on can provide more useful output for debugging tsm engine issues. # trace-logging-enabled = false # Whether queries should be logged before execution. Very useful for troubleshooting, but will # log any sensitive data contained within a query. # query-log-enabled = true # Validates incoming writes to ensure keys only have valid unicode characters. # This setting will incur a small overhead because every key must be checked. # validate-keys = false # Settings for the TSM engine # CacheMaxMemorySize is the maximum size a shard's cache can # reach before it starts rejecting writes. # Valid size suffixes are k, m, or g (case insensitive, 1024 = 1k). # Values without a size suffix are in bytes. # cache-max-memory-size = "1g" # CacheSnapshotMemorySize is the size at which the engine will # snapshot the cache and write it to a TSM file, freeing up memory # Valid size suffixes are k, m, or g (case insensitive, 1024 = 1k). # Values without a size suffix are in bytes. # cache-snapshot-memory-size = "25m" # CacheSnapshotWriteColdDuration is the length of time at # which the engine will snapshot the cache and write it to # a new TSM file if the shard hasn't received writes or deletes # cache-snapshot-write-cold-duration = "10m" # CompactFullWriteColdDuration is the duration at which the engine # will compact all TSM files in a shard if it hasn't received a # write or delete # compact-full-write-cold-duration = "4h" # The maximum number of concurrent full and level compactions that can run at one time. A # value of 0 results in 50% of runtime.GOMAXPROCS(0) used at runtime. Any number greater # than 0 limits compactions to that value. This setting does not apply # to cache snapshotting. # max-concurrent-compactions = 0 # CompactThroughput is the rate limit in bytes per second that we # will allow TSM compactions to write to disk. Note that short bursts are allowed # to happen at a possibly larger value, set by CompactThroughputBurst # compact-throughput = "48m" # CompactThroughputBurst is the rate limit in bytes per second that we # will allow TSM compactions to write to disk. # compact-throughput-burst = "48m" # The threshold, in bytes, when an index write-ahead log file will compact # into an index file. Lower sizes will cause log files to be compacted more # quickly and result in lower heap usage at the expense of write throughput. # Higher sizes will be compacted less frequently, store more series in-memory, # and provide higher write throughput. # Valid size suffixes are k, m, or g (case insensitive, 1024 = 1k). # Values without a size suffix are in bytes. # max-index-log-file-size = "1m" # The maximum series allowed per database before writes are dropped. This limit can prevent # high cardinality issues at the database level. This limit can be disabled by setting it to # 0. # max-series-per-database = 1000000 # The maximum number of tag values per tag that are allowed before writes are dropped. 
This limit # can prevent high cardinality tag values from being written to a measurement. This limit can be # disabled by setting it to 0. # max-values-per-tag = 100000 # If true, then the mmap advise value MADV_WILLNEED will be provided to the kernel with respect to # TSM files. This setting has been found to be problematic on some kernels, and defaults to off. # It might help users who have slow disks in some cases. # tsm-use-madv-willneed = false ### ### [coordinator] ### ### Controls the clustering service configuration. ### [coordinator] # The default time a write request will wait until a "timeout" error is returned to the caller. # write-timeout = "10s" # The maximum number of concurrent queries allowed to be executing at one time. If a query is # executed and exceeds this limit, an error is returned to the caller. This limit can be disabled # by setting it to 0. # max-concurrent-queries = 0 # The maximum time a query is allowed to execute before being killed by the system. This limit # can help prevent runaway queries. Setting the value to 0 disables the limit. # query-timeout = "0s" # The time threshold when a query will be logged as a slow query. This limit can be set to help # discover slow or resource intensive queries. Setting the value to 0 disables the slow query logging. # log-queries-after = "0s" # The maximum number of points a SELECT can process. A value of 0 will make # the maximum point count unlimited. This will only be checked every second so queries will not # be aborted immediately when hitting the limit. # max-select-point = 0 # The maximum number of series a SELECT can run. A value of 0 will make the maximum series # count unlimited. # max-select-series = 0 # The maximum number of group by time buckets a SELECT can create. A value of zero will make the maximum # number of buckets unlimited. # max-select-buckets = 0 ### ### [retention] ### ### Controls the enforcement of retention policies for evicting old data. ### [retention] # Determines whether retention policy enforcement is enabled. # enabled = true # The interval of time when retention policy enforcement checks run. # check-interval = "30m" ### ### [shard-precreation] ### ### Controls the precreation of shards, so they are available before data arrives. ### Only shards that, after creation, will have both a start- and end-time in the ### future, will ever be created. Shards are never precreated that would be wholly ### or partially in the past. [shard-precreation] # Determines whether shard pre-creation service is enabled. # enabled = true # The interval of time when the check to pre-create new shards runs. # check-interval = "10m" # The default period ahead of the endtime of a shard group that its successor # group is created. # advance-period = "30m" ### ### Controls the system self-monitoring, statistics and diagnostics. ### ### The internal database for monitoring data is created automatically ### if it does not already exist. The target retention within this database ### is called 'monitor' and is also created with a retention period of 7 days ### and a replication factor of 1, if it does not exist. In all cases ### this retention policy is configured as the default for the database. [monitor] # Whether to record statistics internally. # store-enabled = true # The destination database for recorded statistics # store-database = "_internal" # The interval at which to record statistics # store-interval = "10s" ### ### [http] ### ### Controls how the HTTP endpoints are configured. 
These are the primary ### mechanism for getting data into and out of InfluxDB. ### [http] # Determines whether HTTP endpoint is enabled. enabled = true # The bind address used by the HTTP service. bind-address = ":8086" # Determines whether user authentication is enabled over HTTP/HTTPS. # auth-enabled = false # The default realm sent back when issuing a basic auth challenge. # realm = "InfluxDB" # Determines whether HTTP request logging is enabled. # log-enabled = true # Determines whether the HTTP write request logs should be suppressed when the log is enabled. # suppress-write-log = false # When HTTP request logging is enabled, this option specifies the path where # log entries should be written. If unspecified, the default is to write to stderr, which # intermingles HTTP logs with internal InfluxDB logging. # # If influxd is unable to access the specified path, it will log an error and fall back to writing # the request log to stderr. # access-log-path = "" # Determines whether detailed write logging is enabled. # write-tracing = false # Determines whether the pprof endpoint is enabled. This endpoint is used for # troubleshooting and monitoring. # pprof-enabled = true # Enables a pprof endpoint that binds to localhost:6060 immediately on startup. # This is only needed to debug startup issues. # debug-pprof-enabled = false # Determines whether HTTPS is enabled. # https-enabled = false # The SSL certificate to use when HTTPS is enabled. # https-certificate = "/etc/ssl/influxdb.pem" # Use a separate private key location. # https-private-key = "" # The JWT auth shared secret to validate requests using JSON web tokens. # shared-secret = "" # The default chunk size for result sets that should be chunked. # max-row-limit = 0 # The maximum number of HTTP connections that may be open at once. New connections that # would exceed this limit are dropped. Setting this value to 0 disables the limit. # max-connection-limit = 0 # Enable http service over unix domain socket # unix-socket-enabled = false # The path of the unix domain socket. # bind-socket = "/var/run/influxdb.sock" # The maximum size of a client request body, in bytes. Setting this value to 0 disables the limit. # max-body-size = 25000000 # The maximum number of writes processed concurrently. # Setting this to 0 disables the limit. # max-concurrent-write-limit = 0 # The maximum number of writes queued for processing. # Setting this to 0 disables the limit. # max-enqueued-write-limit = 0 # The maximum duration for a write to wait in the queue to be processed. # Setting this to 0 or setting max-concurrent-write-limit to 0 disables the limit. # enqueued-write-timeout = 0 ### ### [ifql] ### ### Configures the ifql RPC API. ### [ifql] # Determines whether the RPC service is enabled. # enabled = true # Determines whether additional logging is enabled. # log-enabled = true # The bind address used by the ifql RPC service. # bind-address = ":8082" ### ### [logging] ### ### Controls how the logger emits logs to the output. ### [logging] # Determines which log encoder to use for logs. Available options # are auto, logfmt, and json. auto will use a more a more user-friendly # output format if the output terminal is a TTY, but the format is not as # easily machine-readable. When the output is a non-TTY, auto will use # logfmt. # format = "auto" # Determines which level of logs will be emitted. The available levels # are error, warn, info, and debug. Logs that are equal to or above the # specified level will be emitted. 
# level = "info" # Suppresses the logo output that is printed when the program is started. # The logo is always suppressed if STDOUT is not a TTY. # suppress-logo = false ### ### [subscriber] ### ### Controls the subscriptions, which can be used to fork a copy of all data ### received by the InfluxDB host. ### [subscriber] # Determines whether the subscriber service is enabled. # enabled = true # The default timeout for HTTP writes to subscribers. # http-timeout = "30s" # Allows insecure HTTPS connections to subscribers. This is useful when testing with self- # signed certificates. # insecure-skip-verify = false # The path to the PEM encoded CA certs file. If the empty string, the default system certs will be used # ca-certs = "" # The number of writer goroutines processing the write channel. # write-concurrency = 40 # The number of in-flight writes buffered in the write channel. # write-buffer-size = 1000 ### ### [[graphite]] ### ### Controls one or many listeners for Graphite data. ### [[graphite]] # Determines whether the graphite endpoint is enabled. # enabled = false # database = "graphite" # retention-policy = "" # bind-address = ":2003" # protocol = "tcp" # consistency-level = "one" # These next lines control how batching works. You should have this enabled # otherwise you could get dropped metrics or poor performance. Batching # will buffer points in memory if you have many coming in. # Flush if this many points get buffered # batch-size = 5000 # number of batches that may be pending in memory # batch-pending = 10 # Flush at least this often even if we haven't hit buffer limit # batch-timeout = "1s" # UDP Read buffer size, 0 means OS default. UDP listener will fail if set above OS max. # udp-read-buffer = 0 ### This string joins multiple matching 'measurement' values providing more control over the final measurement name. # separator = "." ### Default tags that will be added to all metrics. These can be overridden at the template level ### or by tags extracted from metric # tags = ["region=us-east", "zone=1c"] ### Each template line requires a template pattern. It can have an optional ### filter before the template and separated by spaces. It can also have optional extra ### tags following the template. Multiple tags should be separated by commas and no spaces ### similar to the line protocol format. There can be only one default template. # templates = [ # "*.app env.service.resource.measurement", # # Default template # "server.*", # ] ### ### [collectd] ### ### Controls one or many listeners for collectd data. ### [[collectd]] # enabled = false # bind-address = ":25826" # database = "collectd" # retention-policy = "" # # The collectd service supports either scanning a directory for multiple types # db files, or specifying a single db file. # typesdb = "/usr/local/share/collectd" # # security-level = "none" # auth-file = "/etc/collectd/auth_file" # These next lines control how batching works. You should have this enabled # otherwise you could get dropped metrics or poor performance. Batching # will buffer points in memory if you have many coming in. # Flush if this many points get buffered # batch-size = 5000 # Number of batches that may be pending in memory # batch-pending = 10 # Flush at least this often even if we haven't hit buffer limit # batch-timeout = "10s" # UDP Read buffer size, 0 means OS default. UDP listener will fail if set above OS max. # read-buffer = 0 # Multi-value plugins can be handled two ways. 
# "split" will parse and store the multi-value plugin data into separate measurements # "join" will parse and store the multi-value plugin as a single multi-value measurement. # "split" is the default behavior for backward compatability with previous versions of influxdb. # parse-multivalue-plugin = "split" ### ### [opentsdb] ### ### Controls one or many listeners for OpenTSDB data. ### [[opentsdb]] # enabled = false # bind-address = ":4242" # database = "opentsdb" # retention-policy = "" # consistency-level = "one" # tls-enabled = false # certificate= "/etc/ssl/influxdb.pem" # Log an error for every malformed point. # log-point-errors = true # These next lines control how batching works. You should have this enabled # otherwise you could get dropped metrics or poor performance. Only points # metrics received over the telnet protocol undergo batching. # Flush if this many points get buffered # batch-size = 1000 # Number of batches that may be pending in memory # batch-pending = 5 # Flush at least this often even if we haven't hit buffer limit # batch-timeout = "1s" ### ### [[udp]] ### ### Controls the listeners for InfluxDB line protocol data via UDP. ### [[udp]] # enabled = false # bind-address = ":8089" # database = "udp" # retention-policy = "" # InfluxDB precision for timestamps on received points ("" or "n", "u", "ms", "s", "m", "h") # precision = "" # These next lines control how batching works. You should have this enabled # otherwise you could get dropped metrics or poor performance. Batching # will buffer points in memory if you have many coming in. # Flush if this many points get buffered # batch-size = 5000 # Number of batches that may be pending in memory # batch-pending = 10 # Will flush at least this often even if we haven't hit buffer limit # batch-timeout = "1s" # UDP Read buffer size, 0 means OS default. UDP listener will fail if set above OS max. # read-buffer = 0 ### ### [continuous_queries] ### ### Controls how continuous queries are run within InfluxDB. ### [continuous_queries] # Determines whether the continuous query service is enabled. # enabled = true # Controls whether queries are logged when executed by the CQ service. # log-enabled = true # Controls whether queries are logged to the self-monitoring data store. # query-stats-enabled = false # interval for how often continuous queries will be checked if they need to run # run-interval = "1s" ### ### [tls] ### ### Global configuration settings for TLS in InfluxDB. ### [tls] # Determines the available set of cipher suites. See https://golang.org/pkg/crypto/tls/#pkg-constants # for a list of available ciphers, which depends on the version of Go (use the query # SHOW DIAGNOSTICS to see the version of Go used to build InfluxDB). If not specified, uses # the default settings from Go's crypto/tls package. # ciphers = [ # "TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305", # "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256", # ] # Minimum version of the tls protocol that will be negotiated. If not specified, uses the # default settings from Go's crypto/tls package. # min-version = "tls1.2" # Maximum version of the tls protocol that will be negotiated. If not specified, uses the # default settings from Go's crypto/tls package. 
# max-version = "tls1.2" ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/devstack/plugin.sh0000775000175000017500000005030100000000000017450 0ustar00zuulzuul00000000000000# CloudKitty devstack plugin # Install and start **CloudKitty** service # To enable a minimal set of CloudKitty services: # - enable Ceilometer ; # - add the following to the [[local|localrc]] section in the local.conf file: # # enable_service ck-api ck-proc # # Dependencies: # - functions # - OS_AUTH_URL for auth in api # - DEST, DATA_DIR set to the destination directory # - SERVICE_PASSWORD, SERVICE_TENANT_NAME for auth in api # - IDENTITY_API_VERSION for the version of Keystone # - STACK_USER service user # - HORIZON_DIR for horizon integration # stack.sh # --------- # install_cloudkitty # configure_cloudkitty # init_cloudkitty # start_cloudkitty # stop_cloudkitty # cleanup_cloudkitty # install_python_cloudkittyclient # Save trace setting XTRACE=$(set +o | grep xtrace) set +o xtrace # Support potential entry-points console scripts if [[ -d $CLOUDKITTY_DIR/bin ]]; then CLOUDKITTY_BIN_DIR=$CLOUDKITTY_DIR/bin else CLOUDKITTY_BIN_DIR=$(get_python_exec_prefix) fi # Functions # --------- # create_cloudkitty_accounts() - Set up common required cloudkitty accounts # Tenant User Roles # ------------------------------------------------------------------ # service cloudkitty admin # if enabled function create_cloudkitty_accounts { create_service_user "cloudkitty" local cloudkitty_service=$(get_or_create_service "cloudkitty" \ "rating" "OpenStack Rating") get_or_create_endpoint $cloudkitty_service \ "$REGION_NAME" \ "$CLOUDKITTY_SERVICE_PROTOCOL://$CLOUDKITTY_SERVICE_HOSTPORT/" \ "$CLOUDKITTY_SERVICE_PROTOCOL://$CLOUDKITTY_SERVICE_HOSTPORT/" \ "$CLOUDKITTY_SERVICE_PROTOCOL://$CLOUDKITTY_SERVICE_HOSTPORT/" # Create the rating role get_or_create_role "rating" # Make cloudkitty an admin get_or_add_user_project_role admin cloudkitty service # Make CloudKitty monitor demo project for rating purposes get_or_add_user_project_role rating cloudkitty demo } # Test if any CloudKitty services are enabled # is_cloudkitty_enabled function is_cloudkitty_enabled { [[ ,${ENABLED_SERVICES} =~ ,"ck-" ]] && return 0 return 1 } # Remove WSGI files, disable and remove Apache vhost file function _cloudkitty_cleanup_apache_wsgi { if is_service_enabled ck-api && [ "$CLOUDKITTY_USE_MOD_WSGI" == "True" ]; then sudo rm -f "$CLOUDKITTY_WSGI_DIR"/* sudo rm -rf "$CLOUDKITTY_WSGI_DIR" sudo rm -f $(apache_site_config_for cloudkitty) fi } # cleanup_cloudkitty() - Remove residual data files, anything left over from previous # runs that a clean run would need to clean up function cleanup_cloudkitty { _cloudkitty_cleanup_apache_wsgi # Clean up dirs rm -rf $CLOUDKITTY_AUTH_CACHE_DIR/* rm -rf $CLOUDKITTY_CONF_DIR/* rm -rf $CLOUDKITTY_OUTPUT_BASEPATH/* for i in $(find $CLOUDKITTY_ENABLED_DIR -iname '_[0-9]*.py' -printf '%f\n'); do rm -f "${CLOUDKITTY_HORIZON_ENABLED_DIR}/$i" done } # Configure mod_wsgi function _cloudkitty_config_apache_wsgi { sudo mkdir -m 755 -p $CLOUDKITTY_WSGI_DIR local cloudkitty_apache_conf=$(apache_site_config_for cloudkitty) local venv_path="" # Copy proxy vhost and wsgi file sudo cp $CLOUDKITTY_DIR/cloudkitty/api/app.wsgi $CLOUDKITTY_WSGI_DIR/app.wsgi if [[ ${USE_VENV} = True ]]; then venv_path="python-path=${PROJECT_VENV["cloudkitty"]}/lib/$(python_version)/site-packages" fi sudo cp $CLOUDKITTY_DIR/devstack/apache-cloudkitty.template $cloudkitty_apache_conf sudo sed 
-e " s|%PORT%|$CLOUDKITTY_SERVICE_PORT|g; s|%APACHE_NAME%|$APACHE_NAME|g; s|%WSGIAPP%|$CLOUDKITTY_WSGI_DIR/app.wsgi|g; s|%USER%|$STACK_USER|g; s|%VIRTUALENV%|$DEVSTACK_VENV|g " -i $cloudkitty_apache_conf } # configure_cloudkitty() - Set config files, create data dirs, etc function configure_cloudkitty { setup_develop $CLOUDKITTY_DIR sudo mkdir -m 755 -p $CLOUDKITTY_CONF_DIR sudo chown $STACK_USER $CLOUDKITTY_CONF_DIR sudo mkdir -m 755 -p $CLOUDKITTY_API_LOG_DIR sudo chown $STACK_USER $CLOUDKITTY_API_LOG_DIR touch $CLOUDKITTY_CONF # generate policy sample file oslopolicy-sample-generator --config-file $CLOUDKITTY_DIR/etc/oslo-policy-generator/cloudkitty.conf --output-file $CLOUDKITTY_DIR/etc/cloudkitty/policy.yaml.sample cp $CLOUDKITTY_DIR/etc/cloudkitty/policy.yaml.sample "$CLOUDKITTY_CONF_DIR/policy.yaml" iniset $CLOUDKITTY_CONF oslo_policy policy_file 'policy.yaml' cp $CLOUDKITTY_DIR$CLOUDKITTY_CONF_DIR/api_paste.ini $CLOUDKITTY_CONF_DIR cp $CLOUDKITTY_DIR$CLOUDKITTY_CONF_DIR/metrics.yml $CLOUDKITTY_CONF_DIR iniset_rpc_backend cloudkitty $CLOUDKITTY_CONF DEFAULT iniset $CLOUDKITTY_CONF DEFAULT notification_topics 'notifications' iniset $CLOUDKITTY_CONF DEFAULT debug "$ENABLE_DEBUG_LOG_LEVEL" iniset $CLOUDKITTY_CONF DEFAULT auth_strategy $CLOUDKITTY_AUTH_STRATEGY # auth iniset $CLOUDKITTY_CONF authinfos auth_type v3password iniset $CLOUDKITTY_CONF authinfos auth_protocol http iniset $CLOUDKITTY_CONF authinfos auth_url "$KEYSTONE_SERVICE_URI/v3" iniset $CLOUDKITTY_CONF authinfos identity_uri "$KEYSTONE_SERVICE_URI/v3" iniset $CLOUDKITTY_CONF authinfos username cloudkitty iniset $CLOUDKITTY_CONF authinfos password $SERVICE_PASSWORD iniset $CLOUDKITTY_CONF authinfos project_name $SERVICE_TENANT_NAME iniset $CLOUDKITTY_CONF authinfos tenant_name $SERVICE_TENANT_NAME iniset $CLOUDKITTY_CONF authinfos region_name $REGION_NAME iniset $CLOUDKITTY_CONF authinfos user_domain_name default iniset $CLOUDKITTY_CONF authinfos project_domain_name default iniset $CLOUDKITTY_CONF authinfos debug "$ENABLE_DEBUG_LOG_LEVEL" iniset $CLOUDKITTY_CONF fetcher backend $CLOUDKITTY_FETCHER iniset $CLOUDKITTY_CONF "fetcher_$CLOUDKITTY_FETCHER" auth_section authinfos if [[ "$CLOUDKITTY_FETCHER" == "keystone" ]]; then iniset $CLOUDKITTY_CONF fetcher_keystone keystone_version 3 fi if [ "$CLOUDKITTY_STORAGE_BACKEND" == "influxdb" ] && [ "$CLOUDKITTY_INFLUX_VERSION" == 1 ]; then iniset $CLOUDKITTY_CONF storage_${CLOUDKITTY_STORAGE_BACKEND} user ${CLOUDKITTY_INFLUXDB_USER} iniset $CLOUDKITTY_CONF storage_${CLOUDKITTY_STORAGE_BACKEND} password ${CLOUDKITTY_INFLUXDB_PASSWORD} iniset $CLOUDKITTY_CONF storage_${CLOUDKITTY_STORAGE_BACKEND} database ${CLOUDKITTY_INFLUXDB_DATABASE} iniset $CLOUDKITTY_CONF storage_${CLOUDKITTY_STORAGE_BACKEND} host ${CLOUDKITTY_INFLUXDB_HOST} iniset $CLOUDKITTY_CONF storage_${CLOUDKITTY_STORAGE_BACKEND} port ${CLOUDKITTY_INFLUXDB_PORT} fi if [ "$CLOUDKITTY_STORAGE_BACKEND" == "influxdb" ] && [ "$CLOUDKITTY_INFLUX_VERSION" == 2 ]; then iniset $CLOUDKITTY_CONF storage_${CLOUDKITTY_STORAGE_BACKEND} host ${CLOUDKITTY_INFLUXDB_HOST} iniset $CLOUDKITTY_CONF storage_${CLOUDKITTY_STORAGE_BACKEND} port ${CLOUDKITTY_INFLUXDB_PORT} iniset $CLOUDKITTY_CONF storage_${CLOUDKITTY_STORAGE_BACKEND} url "http://${CLOUDKITTY_INFLUXDB_HOST}:${CLOUDKITTY_INFLUXDB_PORT}" iniset $CLOUDKITTY_CONF storage_${CLOUDKITTY_STORAGE_BACKEND} token ${CLOUDKITTY_INFLUXDB_PASSWORD} iniset $CLOUDKITTY_CONF storage_${CLOUDKITTY_STORAGE_BACKEND} version 2 fi if [ "$CLOUDKITTY_STORAGE_BACKEND" == "elasticsearch" ]; then iniset 
$CLOUDKITTY_CONF storage_${CLOUDKITTY_STORAGE_BACKEND} host ${CLOUDKITTY_ELASTICSEARCH_HOST} iniset $CLOUDKITTY_CONF storage_${CLOUDKITTY_STORAGE_BACKEND} index_name ${CLOUDKITTY_ELASTICSEARCH_INDEX} fi if [ "$CLOUDKITTY_STORAGE_BACKEND" == "opensearch" ]; then iniset $CLOUDKITTY_CONF storage_${CLOUDKITTY_STORAGE_BACKEND} host ${CLOUDKITTY_OPENSEARCH_HOST} iniset $CLOUDKITTY_CONF storage_${CLOUDKITTY_STORAGE_BACKEND} index_name ${CLOUDKITTY_OPENSEARCH_INDEX} fi # collect iniset $CLOUDKITTY_CONF collect collector $CLOUDKITTY_COLLECTOR iniset $CLOUDKITTY_CONF "collector_${CLOUDKITTY_COLLECTOR}" auth_section authinfos iniset $CLOUDKITTY_CONF collect metrics_conf $CLOUDKITTY_CONF_DIR/$CLOUDKITTY_METRICS_CONF # DO NOT DO THIS IN PRODUCTION! This is done in order to get data quicker # when starting a devstack installation, but is NOT a recommended setting iniset $CLOUDKITTY_CONF collect wait_periods 0 # output iniset $CLOUDKITTY_CONF output backend $CLOUDKITTY_OUTPUT_BACKEND iniset $CLOUDKITTY_CONF output basepath $CLOUDKITTY_OUTPUT_BASEPATH iniset $CLOUDKITTY_CONF output pipeline $CLOUDKITTY_OUTPUT_PIPELINE # storage iniset $CLOUDKITTY_CONF storage backend $CLOUDKITTY_STORAGE_BACKEND iniset $CLOUDKITTY_CONF storage version $CLOUDKITTY_STORAGE_VERSION # database local dburl=`database_connection_url cloudkitty` iniset $CLOUDKITTY_CONF database connection $dburl # keystone middleware configure_keystone_authtoken_middleware $CLOUDKITTY_CONF cloudkitty if is_service_enabled ck-api && [ "$CLOUDKITTY_USE_MOD_WSGI" == "True" ]; then _cloudkitty_config_apache_wsgi fi } function wait_for_gnocchi() { local gnocchi_url=$(openstack --os-cloud devstack-admin endpoint list --service metric --interface public -c URL -f value) if ! wait_for_service $SERVICE_TIMEOUT $gnocchi_url; then die $LINENO "Waited for gnocchi too long." 
fi } # create_cloudkitty_cache_dir() - Part of the init_cloudkitty() process function create_cloudkitty_cache_dir { # Create cache dir sudo mkdir -p $CLOUDKITTY_AUTH_CACHE_DIR/api sudo chown $STACK_USER $CLOUDKITTY_AUTH_CACHE_DIR/api rm -f $CLOUDKITTY_AUTH_CACHE_DIR/api/* sudo mkdir -p $CLOUDKITTY_AUTH_CACHE_DIR/registry sudo chown $STACK_USER $CLOUDKITTY_AUTH_CACHE_DIR/registry rm -f $CLOUDKITTY_AUTH_CACHE_DIR/registry/* } # create_cloudkitty_data_dir() - Part of the init_cloudkitty() process function create_cloudkitty_data_dir { # Create data dir sudo mkdir -p $CLOUDKITTY_DATA_DIR sudo chown $STACK_USER $CLOUDKITTY_DATA_DIR rm -rf $CLOUDKITTY_DATA_DIR/* # Create locks dir sudo mkdir -p $CLOUDKITTY_DATA_DIR/locks sudo chown $STACK_USER $CLOUDKITTY_DATA_DIR/locks } function create_influxdb_database { if [ "$CLOUDKITTY_STORAGE_BACKEND" == "influxdb" ] && [ "$CLOUDKITTY_INFLUX_VERSION" == 1 ]; then influx -execute "CREATE DATABASE ${CLOUDKITTY_INFLUXDB_DATABASE}" fi if [ "$CLOUDKITTY_STORAGE_BACKEND" == "influxdb" ] && [ "$CLOUDKITTY_INFLUX_VERSION" == 2 ]; then influx setup --username ${CLOUDKITTY_INFLUXDB_USER} --password ${CLOUDKITTY_INFLUXDB_PASSWORD} --token ${CLOUDKITTY_INFLUXDB_PASSWORD} --org openstack --bucket cloudkitty --force fi } function create_elasticsearch_index { if [ "$CLOUDKITTY_STORAGE_BACKEND" == "elasticsearch" ]; then curl -XPUT "${CLOUDKITTY_ELASTICSEARCH_HOST}/${CLOUDKITTY_ELASTICSEARCH_INDEX}" fi } function create_opensearch_index { if [ "$CLOUDKITTY_STORAGE_BACKEND" == "opensearch" ]; then curl -XPUT "${CLOUDKITTY_OPENSEARCH_HOST}/${CLOUDKITTY_OPENSEARCH_INDEX}" fi } # init_cloudkitty() - Initialize CloudKitty database function init_cloudkitty { # Delete existing cache sudo rm -rf $CLOUDKITTY_AUTH_CACHE_DIR sudo mkdir -p $CLOUDKITTY_AUTH_CACHE_DIR sudo chown $STACK_USER $CLOUDKITTY_AUTH_CACHE_DIR # Delete existing cache sudo rm -rf $CLOUDKITTY_OUTPUT_BASEPATH sudo mkdir -p $CLOUDKITTY_OUTPUT_BASEPATH sudo chown $STACK_USER $CLOUDKITTY_OUTPUT_BASEPATH # (Re)create cloudkitty database recreate_database cloudkitty utf8 create_influxdb_database create_elasticsearch_index create_opensearch_index # Migrate cloudkitty database upgrade_cloudkitty_database # Init the storage backend if [ $CLOUDKITTY_STORAGE_BACKEND == 'hybrid' ]; then wait_for_gnocchi fi $CLOUDKITTY_BIN_DIR/cloudkitty-storage-init create_cloudkitty_cache_dir create_cloudkitty_data_dir } function install_influx_ubuntu { local influxdb_file=$(get_extra_file https://dl.influxdata.com/influxdb/releases/influxdb_1.6.3_amd64.deb) sudo dpkg -i --skip-same-version ${influxdb_file} } function install_influx_v2_ubuntu { local influxdb_file=$(get_extra_file https://dl.influxdata.com/influxdb/releases/influxdb2_2.7.5-1_amd64.deb) sudo dpkg -i --skip-same-version ${influxdb_file} local influxcli_file=$(get_extra_file https://dl.influxdata.com/influxdb/releases/influxdb2-client-2.7.3-linux-amd64.tar.gz) tar xvzf ${influxcli_file} sudo cp ./influx /usr/local/bin/ } function install_influx_fedora { local influxdb_file=$(get_extra_file https://dl.influxdata.com/influxdb/releases/influxdb-1.6.3.x86_64.rpm) sudo yum localinstall -y ${influxdb_file} } function install_influx_v2_fedora { local influxdb_file=$(get_extra_file https://dl.influxdata.com/influxdb/releases/influxdb2-2.7.5-1.x86_64.rpm) sudo yum localinstall -y ${influxdb_file} local influxcli_file=$(get_extra_file https://dl.influxdata.com/influxdb/releases/influxdb2-client-2.7.3-linux-amd64.tar.gz) tar xvzf ${influxcli_file} sudo cp ./influx /usr/local/bin/ } 
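# NOTE: illustrative sketch only -- this helper is not part of the upstream
# plugin and is never called by it. It shows how the InfluxDB backend created
# by create_influxdb_database above could be checked by hand with the stock
# influx CLI clients installed by the install_influx* functions below. It
# assumes the devstack defaults from devstack/settings (CLOUDKITTY_INFLUXDB_*
# variables, org "openstack", bucket "cloudkitty").
function _check_cloudkitty_influxdb_backend {
    if [ "$CLOUDKITTY_INFLUX_VERSION" == 1 ]; then
        # InfluxDB 1.x CLI: confirm the cloudkitty database exists
        influx -execute "SHOW DATABASES" | grep -w "${CLOUDKITTY_INFLUXDB_DATABASE}"
    else
        # InfluxDB 2.x CLI: list the buckets of the org created by 'influx setup'
        influx bucket list --org openstack --token "${CLOUDKITTY_INFLUXDB_PASSWORD}"
    fi
}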
function install_influx { if is_ubuntu; then install_influx_ubuntu elif is_fedora; then install_influx_fedora else die $LINENO "Distribution must be Debian or Fedora-based" fi sudo cp -f "${CLOUDKITTY_DIR}"/devstack/files/influxdb.conf /etc/influxdb/influxdb.conf sudo systemctl start influxdb || sudo systemctl restart influxdb } function install_influx_v2 { if is_ubuntu; then install_influx_v2_ubuntu elif is_fedora; then install_influx_v2_fedora else die $LINENO "Distribution must be Debian or Fedora-based" fi sudo cp -f "${CLOUDKITTY_DIR}"/devstack/files/influxdb.conf /etc/influxdb/influxdb.conf sudo systemctl start influxdb || sudo systemctl restart influxdb } function install_elasticsearch_ubuntu { local opensearch_file=$(get_extra_file https://artifacts.opensearch.org/releases/bundle/opensearch/1.3.9/opensearch-1.3.9-linux-x64.deb) sudo dpkg -i --skip-same-version ${opensearch_file} } function install_elasticsearch_fedora { local opensearch_file=$(get_extra_file https://artifacts.opensearch.org/releases/bundle/opensearch/1.3.9/opensearch-1.3.9-linux-x64.rpm) sudo yum localinstall -y ${opensearch_file} } function install_elasticsearch { if is_ubuntu; then install_elasticsearch_ubuntu elif is_fedora; then install_elasticsearch_fedora else die $LINENO "Distribution must be Debian or Fedora-based" fi if ! sudo grep plugins.security.disabled /etc/opensearch/opensearch.yml >/dev/null; then echo "plugins.security.disabled: true" | sudo tee -a /etc/opensearch/opensearch.yml >/dev/null fi sudo systemctl enable opensearch sudo systemctl start opensearch || sudo systemctl restart opensearch } function install_opensearch_ubuntu { local opensearch_file=$(get_extra_file https://artifacts.opensearch.org/releases/bundle/opensearch/2.11.0/opensearch-2.11.0-linux-x64.deb) sudo dpkg -i --skip-same-version ${opensearch_file} } function install_opensearch_fedora { local opensearch_file=$(get_extra_file https://artifacts.opensearch.org/releases/bundle/opensearch/2.11.0/opensearch-2.11.0-linux-x64.rpm) sudo yum localinstall -y ${opensearch_file} } function install_opensearch { if is_ubuntu; then install_opensearch_ubuntu elif is_fedora; then install_opensearch_fedora else die $LINENO "Distribution must be Debian or Fedora-based" fi if ! 
sudo grep plugins.security.disabled /etc/opensearch/opensearch.yml >/dev/null; then echo "plugins.security.disabled: true" | sudo tee -a /etc/opensearch/opensearch.yml >/dev/null fi sudo systemctl enable opensearch sudo systemctl start opensearch || sudo systemctl restart opensearch } # install_cloudkitty() - Collect source and prepare function install_cloudkitty { git_clone $CLOUDKITTY_REPO $CLOUDKITTY_DIR $CLOUDKITTY_BRANCH setup_develop $CLOUDKITTY_DIR if [ $CLOUDKITTY_STORAGE_BACKEND == 'influxdb' ] && [ "$CLOUDKITTY_INFLUX_VERSION" == 1 ]; then install_influx elif [ $CLOUDKITTY_STORAGE_BACKEND == 'influxdb' ] && [ "$CLOUDKITTY_INFLUX_VERSION" == 2 ]; then install_influx_v2 elif [ $CLOUDKITTY_STORAGE_BACKEND == 'elasticsearch' ]; then install_elasticsearch elif [ $CLOUDKITTY_STORAGE_BACKEND == 'opensearch' ]; then install_opensearch fi } # start_cloudkitty() - Start running processes, including screen function start_cloudkitty { run_process ck-proc "$CLOUDKITTY_BIN_DIR/cloudkitty-processor --config-file=$CLOUDKITTY_CONF" if [[ "$CLOUDKITTY_USE_MOD_WSGI" == "False" ]]; then run_process ck-api "$CLOUDKITTY_BIN_DIR/cloudkitty-api --host $CLOUDKITTY_SERVICE_HOST --port $CLOUDKITTY_SERVICE_PORT" elif is_service_enabled ck-api; then enable_apache_site cloudkitty echo_summary "Waiting 15s for cloudkitty-processor to authenticate against keystone before apache is restarted." sleep 15s restart_apache_server fi echo "Waiting for ck-api ($CLOUDKITTY_SERVICE_HOST:$CLOUDKITTY_SERVICE_PORT) to start..." if ! wait_for_service $SERVICE_TIMEOUT $CLOUDKITTY_SERVICE_PROTOCOL://$CLOUDKITTY_SERVICE_HOST:$CLOUDKITTY_SERVICE_PORT; then die $LINENO "ck-api did not start" fi } # stop_cloudkitty() - Stop running processes function stop_cloudkitty { # Kill the cloudkitty screen windows if is_service_enabled ck-proc ; then stop_process ck-proc fi if is_service_enabled ck-api ; then if [ "$CLOUDKITTY_USE_MOD_WSGI" == "True" ]; then disable_apache_site cloudkitty restart_apache_server else # Kill the cloudkitty screen windows stop_process ck-api fi fi } # install_python_cloudkittyclient() - Collect source and prepare function install_python_cloudkittyclient { # Install from git since we don't have a release (yet) git_clone_by_name "python-cloudkittyclient" setup_dev_lib "python-cloudkittyclient" } # install_cloudkitty_dashboard() - Collect source and prepare function install_cloudkitty_dashboard { # Install from git since we don't have a release (yet) git_clone_by_name "cloudkitty-dashboard" setup_dev_lib "cloudkitty-dashboard" } # update_horizon_static() - Update Horizon static files with CloudKitty's one function update_horizon_static { # Code taken from Horizon lib # Setup alias for django-admin which could be different depending on distro local django_admin if type -p django-admin > /dev/null; then django_admin=django-admin else django_admin=django-admin.py fi DJANGO_SETTINGS_MODULE=openstack_dashboard.settings \ $django_admin collectstatic --noinput DJANGO_SETTINGS_MODULE=openstack_dashboard.settings \ $django_admin compress --force restart_apache_server } # Upgrade cloudkitty database function upgrade_cloudkitty_database { $CLOUDKITTY_BIN_DIR/cloudkitty-dbsync upgrade } # configure_cloudkitty_dashboard() - Set config files, create data dirs, etc function configure_cloudkitty_dashboard { sudo ln -s $CLOUDKITTY_ENABLED_DIR/_[0-9]*.py \ $CLOUDKITTY_HORIZON_ENABLED_DIR/ update_horizon_static } if is_service_enabled ck-api; then if [[ "$1" == "stack" && "$2" == "install" ]]; then echo_summary "Installing 
CloudKitty" install_cloudkitty install_python_cloudkittyclient if is_service_enabled horizon; then install_cloudkitty_dashboard fi cleanup_cloudkitty elif [[ "$1" == "stack" && "$2" == "post-config" ]]; then echo_summary "Configuring CloudKitty" configure_cloudkitty if is_service_enabled horizon; then configure_cloudkitty_dashboard fi if is_service_enabled key; then create_cloudkitty_accounts fi elif [[ "$1" == "stack" && "$2" == "extra" ]]; then # Initialize cloudkitty echo_summary "Initializing CloudKitty" init_cloudkitty # Start the CloudKitty API and CloudKitty processor components echo_summary "Starting CloudKitty" start_cloudkitty fi if [[ "$1" == "unstack" ]]; then stop_cloudkitty fi fi # Restore xtrace $XTRACE # Local variables: # mode: shell-script # End: ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/devstack/settings0000664000175000017500000000721600000000000017405 0ustar00zuulzuul00000000000000# turn on the CloudKitty services by default enable_service ck-api enable_service ck-proc # Defaults # Set up default directories # -------------------------- CLOUDKITTY_DIR=$DEST/cloudkitty CLOUDKITTY_CONF_DIR=/etc/cloudkitty CLOUDKITTY_CONF=$CLOUDKITTY_CONF_DIR/cloudkitty.conf CLOUDKITTY_API_LOG_DIR=/var/log/cloudkitty CLOUDKITTY_WSGI_DIR=${CLOUDKITTY_WSGI_DIR:-/var/www/cloudkitty} CLOUDKITTY_AUTH_CACHE_DIR=${CLOUDKITTY_AUTH_CACHE_DIR:-/var/cache/cloudkitty} CLOUDKITTY_DATA_DIR=${CLOUDKITTY_DATA_DIR:-/var/lib/cloudkitty} CLOUDKITTY_REPORTS_DIR=${DATA_DIR}/cloudkitty/reports CLOUDKITTY_AUTH_STRATEGY=keystone # Horizon enabled file CLOUDKITTY_DASHBOARD=$DEST/cloudkitty-dashboard/cloudkittydashboard CLOUDKITTY_ENABLED_DIR=${CLOUDKITTY_ENABLED_DIR:-${CLOUDKITTY_DASHBOARD}/enabled} CLOUDKITTY_HORIZON_ENABLED_DIR=${CLOUDKITTY_HORIZON_ENABLED_DIR:-$HORIZON_DIR/openstack_dashboard/enabled} # Set up database backend CLOUDKITTY_BACKEND=${CLOUDKITTY_BACKEND:-sqlite} # Set cloudkitty repository CLOUDKITTY_REPO=${CLOUDKITTY_REPO:-${GIT_BASE}/openstack/cloudkitty.git} CLOUDKITTY_BRANCH=${CLOUDKITTY_BRANCH:-master} # Set CloudKitty connection info CLOUDKITTY_SERVICE_HOST=${CLOUDKITTY_SERVICE_HOST:-$SERVICE_HOST} CLOUDKITTY_SERVICE_PORT=${CLOUDKITTY_SERVICE_PORT:-8889} CLOUDKITTY_SERVICE_HOSTPORT="$CLOUDKITTY_SERVICE_HOST:$CLOUDKITTY_SERVICE_PORT" CLOUDKITTY_SERVICE_PROTOCOL=${CLOUDKITTY_SERVICE_PROTOCOL:-$SERVICE_PROTOCOL} CLOUDKITTY_USE_MOD_WSGI=${CLOUDKITTY_USE_MOD_WSGI:-${ENABLE_HTTPD_MOD_WSGI_SERVICES}} # Set CloudKitty auth info CLOUDKITTY_PRICING_USER=${CLOUDKITTY_PRICING_USER:-"admin"} CLOUDKITTY_PRICING_PASSWORD=${CLOUDKITTY_PRICING_PASSWORD:-$ADMIN_PASSWORD} CLOUDKITTY_PRICING_TENANT=${CLOUDKITTY_PRICING_TENANT:-"demo"} # Set CloudKitty fetcher info CLOUDKITTY_FETCHER=${CLOUDKITTY_FETCHER:-gnocchi} # Set CloudKitty collect info CLOUDKITTY_COLLECTOR=${CLOUDKITTY_COLLECTOR:-gnocchi} CLOUDKITTY_METRICS_CONF=metrics.yml # Set CloudKitty storage info CLOUDKITTY_STORAGE_BACKEND=${CLOUDKITTY_STORAGE_BACKEND:-"influxdb"} CLOUDKITTY_STORAGE_VERSION=${CLOUDKITTY_STORAGE_VERSION:-"2"} CLOUDKITTY_INFLUX_VERSION=${CLOUDKITTY_INFLUX_VERSION:-1} # Set CloudKitty output info CLOUDKITTY_OUTPUT_BACKEND=${CLOUDKITTY_OUTPUT_BACKEND:-"cloudkitty.backend.file.FileBackend"} CLOUDKITTY_OUTPUT_BASEPATH=${CLOUDKITTY_OUTPUT_BASEPATH:-$CLOUDKITTY_REPORTS_DIR} CLOUDKITTY_OUTPUT_PIPELINE=${CLOUDKITTY_OUTPUT_PIPELINE:-"osrf"} # Set Cloudkitty client info 
GITREPO["python-cloudkittyclient"]=${CLOUDKITTYCLIENT_REPO:-${GIT_BASE}/openstack/python-cloudkittyclient.git} GITDIR["python-cloudkittyclient"]=$DEST/python-cloudkittyclient GITBRANCH["python-cloudkittyclient"]=${CLOUDKITTYCLIENT_BRANCH:-master} # Set CloudKitty dashboard info GITREPO["cloudkitty-dashboard"]=${CLOUDKITTYDASHBOARD_REPO:-${GIT_BASE}/openstack/cloudkitty-dashboard.git} GITDIR["cloudkitty-dashboard"]=$DEST/cloudkitty-dashboard GITBRANCH["cloudkitty-dashboard"]=${CLOUDKITTYDASHBOARD_BRANCH:-master} # Set influxdb info CLOUDKITTY_INFLUXDB_USER=${CLOUDKITTY_INFLUXDB_USER:-cloudkitty} CLOUDKITTY_INFLUXDB_PASSWORD=${CLOUDKITTY_INFLUXDB_PASSWORD:-cloudkitty} CLOUDKITTY_INFLUXDB_HOST=${CLOUDKITTY_INFLUXDB_HOST:-"localhost"} CLOUDKITTY_INFLUXDB_PORT=${CLOUDKITTY_INFLUXDB_PORT:-"8086"} CLOUDKITTY_INFLUXDB_DATABASE=${CLOUDKITTY_INFLUXDB_DATABASE:-"cloudkitty"} # Set elasticsearch info CLOUDKITTY_ELASTICSEARCH_HOST=${CLOUDKITTY_ELASTICSEARCH_HOST:-"http://localhost:9200"} CLOUDKITTY_ELASTICSEARCH_INDEX=${CLOUDKITTY_ELASTICSEARCH_INDEX:-"cloudkitty"} # Set opensearch info CLOUDKITTY_OPENSEARCH_HOST=${CLOUDKITTY_OPENSEARCH_HOST:-"http://localhost:9200"} CLOUDKITTY_OPENSEARCH_INDEX=${CLOUDKITTY_OPENSEARCH_INDEX:-"cloudkitty"} ././@PaxHeader0000000000000000000000000000003300000000000011451 xustar000000000000000027 mtime=1727866639.271487 cloudkitty-21.0.0/devstack/upgrade/0000775000175000017500000000000000000000000017243 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/devstack/upgrade/resources.sh0000775000175000017500000000437200000000000021622 0ustar00zuulzuul00000000000000#!/bin/bash set -o errexit source $GRENADE_DIR/grenaderc source $GRENADE_DIR/functions source $TOP_DIR/openrc admin set -o xtrace CLOUDKITTY_GRENADE_DIR=$(dirname $0) CK_SERVICE_NAME='test_service' CK_FIELD_NAME='test_field' CK_MAPPING_VALUE='test_value' function create { CK_SERVICE_ID=$(openstack rating hashmap service create $CK_SERVICE_NAME -c 'Service ID' -f value) CK_FIELD_ID=$(openstack rating hashmap field create $CK_SERVICE_ID $CK_FIELD_NAME -c 'Field ID' -f value) openstack rating hashmap mapping create --field-id $CK_FIELD_ID --value $CK_MAPPING_VALUE 3 echo "CloudKitty create: SUCCESS" } function verify { CK_SERVICE_NAME_VERIFY=$(openstack rating hashmap service list -c 'Name' -f value) if [ $CK_SERVICE_NAME_VERIFY != $CK_SERVICE_NAME ]; then echo "CloudKitty verify invalid service name. Expected $CK_SERVICE_NAME got $CK_SERVICE_NAME_VERIFY." errexit fi CK_SERVICE_ID=$(openstack rating hashmap service list -c 'Service ID' -f value) CK_FIELD_NAME_VERIFY=$(openstack rating hashmap field list $CK_SERVICE_ID -c 'Name' -f value) if [ $CK_FIELD_NAME_VERIFY != $CK_FIELD_NAME ]; then echo "CloudKitty verify invalid field name. Expected $CK_FIELD_NAME got $CK_FIELD_NAME_VERIFY." errexit fi CK_FIELD_ID=$(openstack rating hashmap field list $CK_SERVICE_ID -c 'Field ID' -f value) CK_MAPPING_VALUE_VERIFY=$(openstack rating hashmap mapping list --field-id $CK_FIELD_ID -c 'Value' -f value) if [ $CK_MAPPING_VALUE_VERIFY != $CK_MAPPING_VALUE ]; then echo "CloudKitty verify invalid mapping value. Expected $CK_MAPPING_VALUE got $CK_MAPPING_VALUE_VERIFY." 
errexit fi echo "CloudKitty verify: SUCCESS" } function verify_noapi { echo "CloudKitty verify_noapi: SUCCESS" } function destroy { CK_SERVICE_ID=$(openstack rating hashmap service list -c 'Service ID' -f value) openstack rating hashmap service delete $CK_SERVICE_ID echo "CloudKitty destroy: SUCCESS" } # Dispatcher case $1 in "create") create ;; "verify_noapi") verify_noapi ;; "verify") verify ;; "destroy") destroy ;; "force_destroy") set +o errexit destroy ;; esac ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/devstack/upgrade/settings0000664000175000017500000000010700000000000021024 0ustar00zuulzuul00000000000000register_project_for_upgrade cloudkitty register_db_to_save cloudkitty ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/devstack/upgrade/shutdown.sh0000775000175000017500000000104200000000000021452 0ustar00zuulzuul00000000000000#!/bin/bash # # set -o errexit source $GRENADE_DIR/grenaderc source $GRENADE_DIR/functions source $BASE_DEVSTACK_DIR/functions source $BASE_DEVSTACK_DIR/stackrc # needed for status directory source $BASE_DEVSTACK_DIR/lib/tls source $BASE_DEVSTACK_DIR/lib/apache # Locate the cloudkitty plugin and get its functions CLOUDKITTY_DEVSTACK_DIR=$(dirname $(dirname $0)) source $CLOUDKITTY_DEVSTACK_DIR/plugin.sh set -o xtrace stop_cloudkitty # ensure everything is stopped SERVICES_DOWN="ck-api ck-proc" ensure_services_stopped $SERVICES_DOWN ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/devstack/upgrade/upgrade.sh0000775000175000017500000000404200000000000021231 0ustar00zuulzuul00000000000000#!/usr/bin/env bash # ``upgrade-cloudkitty`` echo "*********************************************************************" echo "Begin $0" echo "*********************************************************************" # Clean up any resources that may be in use cleanup() { set +o errexit echo "********************************************************************" echo "ERROR: Abort $0" echo "********************************************************************" # Kill ourselves to signal any calling process trap 2; kill -2 $$ } trap cleanup SIGHUP SIGINT SIGTERM # Keep track of the grenade directory RUN_DIR=$(cd $(dirname "$0") && pwd) # Source params source $GRENADE_DIR/grenaderc # Import common functions source $GRENADE_DIR/functions # This script exits on an error so that errors don't compound and you see # only the first error that occurred. set -o errexit # Upgrade cloudkitty # ================== # Get functions from current DevStack source $TARGET_DEVSTACK_DIR/stackrc source $TARGET_DEVSTACK_DIR/lib/apache source $TARGET_DEVSTACK_DIR/lib/tls source $(dirname $(dirname $BASH_SOURCE))/settings source $(dirname $(dirname $BASH_SOURCE))/plugin.sh # Print the commands being run so that we can see the command that triggers # an error. It is also useful for following allowing as the install occurs. 
set -o xtrace # Save current config files for posterity [[ -d $SAVE_DIR/etc.cloudkitty ]] || cp -pr $CLOUDKITTY_CONF_DIR $SAVE_DIR/etc.cloudkitty # Install the target cloudkitty FILES=/tmp install_cloudkitty # calls upgrade-cloudkitty for specific release upgrade_project cloudkitty $RUN_DIR $BASE_DEVSTACK_BRANCH $TARGET_DEVSTACK_BRANCH # Migrate the database upgrade_cloudkitty_database || die $LINO "ERROR in database migration" start_cloudkitty # Don't succeed unless the services come up ensure_services_started ck-api ck-proc set +o xtrace echo "*********************************************************************" echo "SUCCESS: End $0" echo "*********************************************************************" ././@PaxHeader0000000000000000000000000000003300000000000011451 xustar000000000000000027 mtime=1727866639.271487 cloudkitty-21.0.0/doc/0000775000175000017500000000000000000000000014555 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/doc/.gitignore0000664000175000017500000000000600000000000016541 0ustar00zuulzuul00000000000000build ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/doc/Makefile0000664000175000017500000001520300000000000016216 0ustar00zuulzuul00000000000000# Makefile for Sphinx documentation # # You can set these variables from the command line. SPHINXOPTS = SPHINXBUILD = sphinx-build PAPER = BUILDDIR = build # User-friendly check for sphinx-build ifeq ($(shell which $(SPHINXBUILD) >/dev/null 2>&1; echo $$?), 1) $(error The '$(SPHINXBUILD)' command was not found. Make sure you have Sphinx installed, then set the SPHINXBUILD environment variable to point to the full path of the '$(SPHINXBUILD)' executable. Alternatively you can add the directory with the executable to your PATH. If you don't have Sphinx installed, grab it from http://sphinx-doc.org/) endif # Internal variables. 
PAPEROPT_a4 = -D latex_paper_size=a4 PAPEROPT_letter = -D latex_paper_size=letter ALLSPHINXOPTS = -d $(BUILDDIR)/doctrees $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) source # the i18n builder cannot share the environment and doctrees with the others I18NSPHINXOPTS = $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) source .PHONY: help clean html dirhtml singlehtml pickle json htmlhelp qthelp devhelp epub latex latexpdf text man changes linkcheck doctest gettext help: @echo "Please use \`make ' where is one of" @echo " html to make standalone HTML files" @echo " dirhtml to make HTML files named index.html in directories" @echo " singlehtml to make a single large HTML file" @echo " pickle to make pickle files" @echo " json to make JSON files" @echo " htmlhelp to make HTML files and a HTML help project" @echo " qthelp to make HTML files and a qthelp project" @echo " devhelp to make HTML files and a Devhelp project" @echo " epub to make an epub" @echo " latex to make LaTeX files, you can set PAPER=a4 or PAPER=letter" @echo " latexpdf to make LaTeX files and run them through pdflatex" @echo " latexpdfja to make LaTeX files and run them through platex/dvipdfmx" @echo " text to make text files" @echo " man to make manual pages" @echo " texinfo to make Texinfo files" @echo " info to make Texinfo files and run them through makeinfo" @echo " gettext to make PO message catalogs" @echo " changes to make an overview of all changed/added/deprecated items" @echo " xml to make Docutils-native XML files" @echo " pseudoxml to make pseudoxml-XML files for display purposes" @echo " linkcheck to check all external links for integrity" @echo " doctest to run all doctests embedded in the documentation (if enabled)" clean: rm -rf $(BUILDDIR)/* html: $(SPHINXBUILD) -b html $(ALLSPHINXOPTS) $(BUILDDIR)/html @echo @echo "Build finished. The HTML pages are in $(BUILDDIR)/html." dirhtml: $(SPHINXBUILD) -b dirhtml $(ALLSPHINXOPTS) $(BUILDDIR)/dirhtml @echo @echo "Build finished. The HTML pages are in $(BUILDDIR)/dirhtml." singlehtml: $(SPHINXBUILD) -b singlehtml $(ALLSPHINXOPTS) $(BUILDDIR)/singlehtml @echo @echo "Build finished. The HTML page is in $(BUILDDIR)/singlehtml." pickle: $(SPHINXBUILD) -b pickle $(ALLSPHINXOPTS) $(BUILDDIR)/pickle @echo @echo "Build finished; now you can process the pickle files." json: $(SPHINXBUILD) -b json $(ALLSPHINXOPTS) $(BUILDDIR)/json @echo @echo "Build finished; now you can process the JSON files." htmlhelp: $(SPHINXBUILD) -b htmlhelp $(ALLSPHINXOPTS) $(BUILDDIR)/htmlhelp @echo @echo "Build finished; now you can run HTML Help Workshop with the" \ ".hhp project file in $(BUILDDIR)/htmlhelp." qthelp: $(SPHINXBUILD) -b qthelp $(ALLSPHINXOPTS) $(BUILDDIR)/qthelp @echo @echo "Build finished; now you can run "qcollectiongenerator" with the" \ ".qhcp project file in $(BUILDDIR)/qthelp, like this:" @echo "# qcollectiongenerator $(BUILDDIR)/qthelp/cloudkitty.qhcp" @echo "To view the help file:" @echo "# assistant -collectionFile $(BUILDDIR)/qthelp/cloudkitty.qhc" devhelp: $(SPHINXBUILD) -b devhelp $(ALLSPHINXOPTS) $(BUILDDIR)/devhelp @echo @echo "Build finished." @echo "To view the help file:" @echo "# mkdir -p $$HOME/.local/share/devhelp/cloudkitty" @echo "# ln -s $(BUILDDIR)/devhelp $$HOME/.local/share/devhelp/cloudkitty" @echo "# devhelp" epub: $(SPHINXBUILD) -b epub $(ALLSPHINXOPTS) $(BUILDDIR)/epub @echo @echo "Build finished. The epub file is in $(BUILDDIR)/epub." latex: $(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex @echo @echo "Build finished; the LaTeX files are in $(BUILDDIR)/latex." 
@echo "Run \`make' in that directory to run these through (pdf)latex" \ "(use \`make latexpdf' here to do that automatically)." latexpdf: $(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex @echo "Running LaTeX files through pdflatex..." $(MAKE) -C $(BUILDDIR)/latex all-pdf @echo "pdflatex finished; the PDF files are in $(BUILDDIR)/latex." latexpdfja: $(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex @echo "Running LaTeX files through platex and dvipdfmx..." $(MAKE) -C $(BUILDDIR)/latex all-pdf-ja @echo "pdflatex finished; the PDF files are in $(BUILDDIR)/latex." text: $(SPHINXBUILD) -b text $(ALLSPHINXOPTS) $(BUILDDIR)/text @echo @echo "Build finished. The text files are in $(BUILDDIR)/text." man: $(SPHINXBUILD) -b man $(ALLSPHINXOPTS) $(BUILDDIR)/man @echo @echo "Build finished. The manual pages are in $(BUILDDIR)/man." texinfo: $(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo @echo @echo "Build finished. The Texinfo files are in $(BUILDDIR)/texinfo." @echo "Run \`make' in that directory to run these through makeinfo" \ "(use \`make info' here to do that automatically)." info: $(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo @echo "Running Texinfo files through makeinfo..." make -C $(BUILDDIR)/texinfo info @echo "makeinfo finished; the Info files are in $(BUILDDIR)/texinfo." gettext: $(SPHINXBUILD) -b gettext $(I18NSPHINXOPTS) $(BUILDDIR)/locale @echo @echo "Build finished. The message catalogs are in $(BUILDDIR)/locale." changes: $(SPHINXBUILD) -b changes $(ALLSPHINXOPTS) $(BUILDDIR)/changes @echo @echo "The overview file is in $(BUILDDIR)/changes." linkcheck: $(SPHINXBUILD) -b linkcheck $(ALLSPHINXOPTS) $(BUILDDIR)/linkcheck @echo @echo "Link check complete; look for any errors in the above output " \ "or in $(BUILDDIR)/linkcheck/output.txt." doctest: $(SPHINXBUILD) -b doctest $(ALLSPHINXOPTS) $(BUILDDIR)/doctest @echo "Testing of doctests in the sources finished, look at the " \ "results in $(BUILDDIR)/doctest/output.txt." xml: $(SPHINXBUILD) -b xml $(ALLSPHINXOPTS) $(BUILDDIR)/xml @echo @echo "Build finished. The XML files are in $(BUILDDIR)/xml." pseudoxml: $(SPHINXBUILD) -b pseudoxml $(ALLSPHINXOPTS) $(BUILDDIR)/pseudoxml @echo @echo "Build finished. The pseudo-XML files are in $(BUILDDIR)/pseudoxml." 
././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/doc/requirements.txt0000664000175000017500000000032000000000000020034 0ustar00zuulzuul00000000000000openstackdocstheme>=2.2.6 # Apache-2.0 sphinxcontrib-httpdomain>=1.7.0 # BSD sphinxcontrib-pecanwsme>=0.10.0 # Apache-2.0 reno>=3.2.0 # Apache-2.0 Pygments>=2.7.2 # BSD license os-api-ref>=2.1.0 # Apache-2.0 ././@PaxHeader0000000000000000000000000000003300000000000011451 xustar000000000000000027 mtime=1727866639.271487 cloudkitty-21.0.0/doc/source/0000775000175000017500000000000000000000000016055 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000003300000000000011451 xustar000000000000000027 mtime=1727866639.271487 cloudkitty-21.0.0/doc/source/_static/0000775000175000017500000000000000000000000017503 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/doc/source/_static/cloudkitty.policy.yaml.sample0000664000175000017500000000701300000000000025341 0ustar00zuulzuul00000000000000#"context_is_admin": "role:admin" #"admin_or_owner": "is_admin:True or (role:admin and is_admin_project:True) or project_id:%(project_id)s" #"default": "" # Return the list of every services mapped to a collector. # LIST /v1/collector/mappings #"collector:list_mappings": "role:admin" # Return a service to collector mapping. # GET /v1/collector/mappings/{service_id} #"collector:get_mapping": "role:admin" # Manage a service to collector mapping. # POST /v1/collector/mappings # DELETE /v1/collector/mappings/{service_id} #"collector:manage_mapping": "role:admin" # Query the enable state of a collector. # GET /v1/collector/states/{collector_id} #"collector:get_state": "role:admin" # Set the enable state of a collector. # PUT /v1/collector/states/{collector_id} #"collector:update_state": "role:admin" # List available services information in Cloudkitty. # LIST /v1/info/services #"info:list_services_info": "" # Get specified service information. # GET /v1/info/services/{metric_id} #"info:get_service_info": "" # List available metrics information in Cloudkitty. # LIST /v1/info/metrics #"info:list_metrics_info": "" # Get specified metric information. # GET /v1/info/metrics/{metric_id} #"info:get_metric_info": "" # Get current configuration in Cloudkitty. # GET /v1/info/config #"info:get_config": "" # Return the list of loaded modules in Cloudkitty. # LIST /v1/rating/modules #"rating:list_modules": "role:admin" # Get specified module. # GET /v1/rating/modules/{module_id} #"rating:get_module": "role:admin" # Change the state and priority of a module. # PUT /v1/rating/modules/{module_id} #"rating:update_module": "role:admin" # Get an instant quote based on multiple resource descriptions. # POST /v1/rating/quote #"rating:quote": "" # Trigger a rating module list reload. # GET /v1/rating/reload_modules #"rating:module_config": "role:admin" # Return the list of rated tenants. # GET /v1/report/tenants #"report:list_tenants": "role:admin" # Return the summary to pay for a given period. # GET /v1/report/summary #"report:get_summary": "rule:admin_or_owner" # Return the amount to pay for a given period. # GET /v1/report/total #"report:get_total": "rule:admin_or_owner" # Return a list of rated resources for a time period and a tenant. 
# GET /v1/storage/dataframes #"storage:list_data_frames": "rule:admin_or_owner" # Add one or several DataFrames # POST /v2/dataframes #"dataframes:add": "role:admin" # Get DataFrames # GET /v2/dataframes #"dataframes:get": "rule:admin_or_owner" # Returns the list of loaded modules in Cloudkitty. # GET /v2/rating/modules #"v2_rating:list_modules": "role:admin" # Get specified module. # GET /v2/rating/modules/{module_id} #"v2_rating:get_module": "role:admin" # Get the state of one or several scopes # GET /v2/scope #"scope:get_state": "role:admin" # Reset the state of one or several scopes # PUT /v2/scope #"scope:reset_state": "role:admin" # Enables operators to patch a storage scope # PATCH /v2/scope #"scope:patch_state": "role:admin" # Enables operators to create a storage scope # POST /v2/scope #"scope:post_state": "role:admin" # Get a rating summary # GET /v2/summary #"summary:get_summary": "rule:admin_or_owner" # Schedule a scope for reprocessing # POST /v2/task/reprocesses #"schedule:task_reprocesses": "role:admin" # Get reprocessing schedule tasks for scopes. # GET /v2/task/reprocesses #"schedule:get_task_reprocesses": "role:admin" # Change the state and priority of a module. # PUT /v2/rating/modules/{module_id} #"v2_rating:update_module": "role:admin" ././@PaxHeader0000000000000000000000000000003300000000000011451 xustar000000000000000027 mtime=1727866639.271487 cloudkitty-21.0.0/doc/source/admin/0000775000175000017500000000000000000000000017145 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/doc/source/admin/architecture.rst0000664000175000017500000001127600000000000022370 0ustar00zuulzuul00000000000000========================= CloudKitty's Architecture ========================= CloudKitty can be cut into four big parts: * Data retrieval (API) * Data collection (``cloudkitty-processor``) * Data rating * Data storage These parts are handled by two processes: ``cloudkitty-api`` and ``cloudkitty-processor``. The data retrieval part is handled by the ``cloudkitty-api`` process, the other ones are handled by ``cloudkitty-processor``. The following is an overview of CloudKitty's architecture: .. image:: ../images/cloudkitty_architecture.png :scale: 70% For details about the API, see the `api reference`_ The processor falls into the following parts: * The **fetcher** retrieves a list of **scopes** to rate. A scope distinguishes and isolates data. It also allows to split the workload between several cloudkitty-processor workers. It can be anything that makes sense in a given context, like an OpenStack project or a Kubernetes namespace. * The **collector** collects data from a source for a given scope and metric. * The collected data is then passed to the **rating modules** (several modules can be enabled at the same time). These will apply user-defined rating rules to the collected data. * Once the data has been rated, it is passed to the **storage driver**, which will store it in a given storage backend. This data will then be available through the API. .. _api reference: ../api-reference/index.html Module loading and extensions ============================= Nearly every part of CloudKitty makes use of stevedore_ to load extensions dynamically. The following schema shows the modular parts: .. image:: ../images/cloudkitty_modules.png :scale: 70% Every rating module is loaded at runtime and can be enabled/disabled directly via CloudKitty's API. 
Each module is responsible for its own API to ease the management of its configuration. Collectors, fetchers and the storage backend are loaded at runtime but must be configured in CloudKitty's configuration file. .. _stevedore: https://docs.openstack.org/stevedore/latest/ Fetcher ======= Four fetchers are available in CloudKitty: * The ``keystone`` fetcher retrieves a list of projects on which the cloudkitty user has the ``rating`` role from Keystone. * The ``gnocchi`` fetcher retrieves a list of attributes from `Gnocchi`_ for a given resource type. This is used for standalone Gnocchi deployments or to discover new projects from Gnocchi when it is used with OpenStack. * The ``prometheus`` fetcher works in a similar way to the Gnocchi fetcher, allowing scopes to be discovered from `Prometheus`_. * The ``source`` fetcher is the simplest one: it reads a list of scopes from the configuration file and provides it to the collector. Details about the configuration of each fetcher are available in the `fetcher configuration guide`_. .. _fetcher configuration guide: configuration/fetcher.html Collector ========= Two collectors are available in CloudKitty: * The ``gnocchi`` collector retrieves data from `Gnocchi`_. It can be used in an OpenStack context or with a standalone Gnocchi deployment. * The ``prometheus`` collector retrieves data from `Prometheus`_. Details about the configuration of each collector are available in the `collector configuration guide`_. For information about how to write a custom collector, see the `developer documentation`_. .. _developer documentation: ../developer/collector.html .. _collector configuration guide: configuration/collector.html .. _Gnocchi: https://gnocchi.xyz/ .. _Prometheus: https://prometheus.io/docs/introduction/overview/ Rating ====== Two rating modules are available in CloudKitty (``noop`` is not considered a real module, as it does nothing). Several rating modules can be enabled at the same time. Data will be passed to the enabled modules consecutively. The module priority can be set through the API, and it determines the order in which they will process the data (modules with the highest priority first). * The ``hashmap`` rating module is the most used one. It allows creating rating rules based on metric metadata. * The ``pyscripts`` rating module allows rating data with custom Python scripts. For information about the usage and configuration of rating modules, see the `rating modules documentation`_. .. _rating modules documentation: ../user/rating/index.html Storage ======= The storage module is responsible for storing and retrieving data from a backend. It implements two interfaces (v1 and v2), each providing one or more drivers. For more information about the storage backend, see the `configuration section`_. .. _configuration section: configuration/storage.html ././@PaxHeader0000000000000000000000000000003300000000000011451 xustar000000000000000027 mtime=1727866639.271487 cloudkitty-21.0.0/doc/source/admin/cli/0000775000175000017500000000000000000000000017714 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/doc/source/admin/cli/cloudkitty-status.rst0000664000175000017500000000410200000000000024157 0ustar00zuulzuul00000000000000===================== Cloudkitty Status CLI ===================== This chapter documents :command:`cloudkitty-status`.
For help on a specific :command:`cloudkitty-status` command, enter: .. code-block:: console $ cloudkitty-status COMMAND --help cloudkitty-status ================= :program:`cloudkitty-status` is a tool that provides routines for checking the status of a Cloudkitty deployment. The standard pattern for executing a :program:`cloudkitty-status` command is: .. code-block:: console cloudkitty-status [] Run without arguments to see a list of available command categories: .. code-block:: console cloudkitty-status Categories are: * ``upgrade`` Detailed descriptions are below. You can also run with a category argument such as ``upgrade`` to see a list of all commands in that category: .. code-block:: console cloudkitty-status upgrade The following sections describe the available categories and arguments for :program:`cloudkitty-status`. cloudkitty-status upgrade ========================= .. _cloudkitty-status-upgrade-check: cloudkitty-status upgrade check ------------------------------- ``cloudkitty-status upgrade check`` Performs a release-specific readiness check before restarting services with new code. This command expects to have complete configuration and access to the database. **Return Codes** .. list-table:: :widths: 20 80 :header-rows: 1 * - Return code - Description * - 0 - All upgrade readiness checks passed successfully and there is nothing to do. * - 1 - At least one check encountered an issue and requires further investigation. This is considered a warning but the upgrade may be OK. * - 2 - There was an upgrade status check failure that needs to be investigated. This should be considered something that stops an upgrade. * - 255 - An unexpected error occurred. **History of Checks** **9.0.0 (Stein)** * Checks that the storage interface version is 2 (which is default). ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/doc/source/admin/cli/index.rst0000664000175000017500000000035500000000000021560 0ustar00zuulzuul00000000000000Command-Line Interface Reference ================================ Information on the commands available through Cloudkitty's Command Line Interface (CLI) can be found in this section. .. toctree:: :maxdepth: 1 cloudkitty-status ././@PaxHeader0000000000000000000000000000003300000000000011451 xustar000000000000000027 mtime=1727866639.275487 cloudkitty-21.0.0/doc/source/admin/configuration/0000775000175000017500000000000000000000000022014 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/doc/source/admin/configuration/collector.rst0000664000175000017500000003561600000000000024547 0ustar00zuulzuul00000000000000 ========================= Collector configuration ========================= Common options ============== Options common to all collectors are specified in the ``[collect]`` section of the configuration file. The following options are available: * ``collector``: Defaults to ``gnocchi``. The name of the collector to load. Must be one of [``gnocchi``, ``prometheus``]. * ``period``: Default to 3600. Duration (in seconds) of the collect period. * ``wait_periods``: Defaults to 2. Periods to wait before the current timestamp. This is done to avoid missing some data that hasn't been retrieved by the data source yet. * ``metrics_conf``: Defaults to ``/etc/cloudkitty/metrics.yml``. Path of the metric collection configuration file. See "Metric collection" section below for details. 
* ``scope_key``: Defaults to ``project_id``. Key at which the scope can be found. The scope defines how data collection is split between the processors. Collector-specific options ========================== Collector-specific options must be specified in the ``collector_{collector_name}`` section of ``cloudkitty.conf``. Gnocchi ------- Section: ``collector_gnocchi``. * ``gnocchi_auth_type``: Defaults to ``keystone``. Defines what authentication method should be used by the gnocchi collector. Must be one of ``basic`` (for gnocchi basic authentication) or ``keystone`` (for classic keystone authentication). If ``keystone`` is chosen, credentials can be specified in a section pointed at by the ``auth_section`` parameter. * ``gnocchi_user``: For gnocchi basic authentication only. The gnocchi user. * ``gnocchi_endpoint``: For gnocchi basic authentication only. The gnocchi endpoint. * ``interface``: Defaults to ``internalURL``. For keystone authentication only. The interface to use for keystone URL discovery. * ``region_name``: Defaults to ``RegionOne``. For keystone authentication only. Region name. Prometheus ---------- Section ``collector_prometheus``. * ``prometheus_url``: Prometheus HTTP API URL. * ``prometheus_user``: For HTTP basic authentication. The username. * ``prometheus_password``: For HTTP basic authentication. The password. * ``cafile``: Option to allow custom certificate authority file. * ``insecure``: Option to explicitly allow untrusted HTTPS connections. Metric collection ================= Metric collection is highly configurable in cloudkitty. In order to keep the main configuration file as clean as possible, metric collection is configured in a yaml file. The path to this file defaults to ``/etc/cloudkitty/metrics.yml``, but can be configured: .. code-block:: ini [collect] metrics_conf = /my/custom/path.yml Minimal Configuration --------------------- This config file has the following format: .. code-block:: yaml metrics: # top-level key metric_one: # metric name unit: squirrel groupby: # attributes by which metrics should be grouped - id metadata: # additional attributes to retrieve - color At the top level of the file, a ``metrics`` key is required. It contains a dict of metrics to collect, each key of the dict being the name of a metric as it is called in the datasource (``volume.size`` or ``image.size`` for example). For each metric, the following attributes are required: * ``unit``: the unit in which the metric will be stored after conversion. This is just an indication for humans and has absolutely no impact on metric collection, conversion or rating. * ``groupby``: A list of attributes by which metrics should be grouped on collection. These will allow to re-group data when it is retrieved through the v2 API. A typical usecase would be to group data by ID, project ID, domain ID and user ID on collection, but only by user ID on retrieval. * ``metadata``: A list of additional attributes that should be collected for the given metric. These can be used for rating rules and will appear in monthly reports. However, it is not possible to group on these attributes. If you need to group on a ``metadata`` attribute, move it to the ``groupby`` list. .. note:: The ``scope_key`` is automatically added to ``groupby``. Optional parameters ------------------- Unit conversion ~~~~~~~~~~~~~~~ If you need to convert the collected qty (from MiB to GiB for example), it can be done with the ``factor`` and ``offset`` options. ``factor`` defaults to 1 and ``offset`` to 0. 
These options are used to calculate the final result with the following formula: ``qty = collected_qty * factor + offset``. .. note:: ``factor`` and ``offset`` can be floats, integers or fractions. Example from the default configuration file, conversion from B to MiB for the ``image.size`` metric: .. code-block:: yaml metrics: image.size: groupby: - id metadata: - disk_format unit: MiB # Final unit factor: 1/1048576 # Dividing by 1024 * 1024 .. note:: Here we don't add anything, so there is no need to specify ``offset``. Quantity mutation ~~~~~~~~~~~~~~~~~ It is also possible to mutate the collected qty with the ``mutate`` option. Five values are accepted for this parameter: * ``NONE``: This is the default. The collected data is not modifed. * ``CEIL``: The qty is rounded up to the closest integer. * ``FLOOR``: The qty is rounded down to the closest integer. * ``NUMBOOL``: If the collected qty equals 0, leave it at 0. Else, set it to 1. * ``NOTNUMBOOL``: If the collected qty equals 0, set it to 1. Else, set it to 0. * ``MAP``: Map arbritrary values to new values as defined through the ``mutate_map`` option (dictionary). If the value is not found in ``mutate_map``, set it to 0. If ``mutate_map`` is not defined or is empty, all values are set to 0. .. warning:: Quantity mutation is done **after** conversion. Example:: factor: 10 mutate: CEIL In consequence, the configuration above will convert 9.9 to 99 (9.9 -> 99 -> 99) and not to 100 (9.9 -> 10 -> 100) A typical usecase for the ``NUMBOOL`` conversion would be instance uptime collection with the gnocchi collector: In order to know if an instance is running or paused, you can use the ``cpu`` metric. This metric is at 0 when the instance is paused. Thus, the qty is mutated to a ``NUMBOOL`` because the ``cpu`` metric always represents one instance. Rating rules are then defined based on the instance metadata. Example: .. code-block:: yaml metrics: cpu: unit: instance mutate: NUMBOOL groupby: - id metadata: - flavor_id The ``NOTNUMBOOL`` mutator is useful for status-like metrics where 0 denotes the billable state. For example the following Prometheus metric has value of 0 when the instance is in ACTIVE state but 4 if the instance is in ERROR state: .. code-block:: yaml metrics: openstack_nova_server_status: unit: instance mutate: NOTNUMBOOL groupby: - id metadata: - flavor_id The ``MAP`` mutator is useful when multiple statuses should be billabled. For example, the following Prometheus metric has a value of 0 when the instance is in ACTIVE state, but operators may want to rate other non-zero states: .. code-block:: yaml metrics: openstack_nova_server_status: unit: instance mutate: MAP mutate_map: 0.0: 1.0 # ACTIVE 11.0: 1.0 # SHUTOFF 12.0: 1.0 # SUSPENDED 16.0: 1.0 # PAUSED groupby: - id metadata: - flavor_id Display name ~~~~~~~~~~~~ Sometimes, you'll want to use another name for a metric, either to shorten it a bit or to make it more explicit. For example, the ``cpu`` metric from the previous section could be called ``instance``. That's what the ``alt_name`` option does: .. code-block:: yaml metrics: cpu: unit: instance alt_name: instance mutate: NUMBOOL groupby: - id metadata: - flavor_id Metric description ~~~~~~~~~~~~~~~~~~ Sometimes, you will want to use a more descriptive attribute to show more details about the configured rating type. For instance, to provide more details about the rating of operating system licenses or other software licenses configured in the cloud. 
For that, we have the option called ``description``, which is a string field (up to 64 kB) that can be used to provide more information about the rating of a metric. When configured, this option is persisted as rating metadata and it is available through the summary GET API. .. code-block:: yaml metrics: instance-status: unit: license-hours alt_name: license-hours description: | Operating system licenses are charged as follows: (i) Linux distro will not be charged; (ii) All Windows up to version 8 are charged .01 every hour, and other versions .5; (iii) Any other operating systems will be charged .02 groupby: - id - operating_system_name - operating_system_distro - operating_system_version - flavor_id - flavor_name - cores - ram metadata: [] Collector-specific configuration -------------------------------- Some collectors require extra options. These must be specified through the ``extra_args`` option. Some options have defaults, others must always be specified. The extra args for each collector are detailed below. Gnocchi ~~~~~~~ Besides the common configuration, the Gnocchi collector also accepts a list of rating type definitions for each metric. Using a list of rating type definitions allows operators to rate different aspects of the same resource type collected through the same metric in Gnocchi; otherwise, operators would need to create multiple metrics in Gnocchi to create multiple rating types in CloudKitty. .. code-block:: yaml metrics: instance.metric: - unit: instance alt_name: flavor mutate: NUMBOOL groupby: - id metadata: - flavor_id - unit: instance alt_name: operating_system_license mutate: NUMBOOL groupby: - id metadata: - os_license .. note:: In order to retrieve metrics from Gnocchi, Cloudkitty uses the dynamic aggregates endpoint. It builds an operation of the following format: ``(aggregate RE_AGGREGATION_METHOD (metric METRIC_NAME AGGREGATION_METHOD))``. This means "retrieve all aggregates of type ``AGGREGATION_METHOD`` for the metric named ``METRIC_NAME`` and re-aggregate them using ``RE_AGGREGATION_METHOD``". The re-aggregation method defaults to the aggregation method. Setting the re-aggregation method to a different value than the aggregation method is useful when the granularity of the aggregates does not match CloudKitty's collect period, or when using ``rate:`` aggregation, as you probably don't want a rate of rates, but rather a sum or max of rates. * ``resource_type``: No default value. The resource type the current metric is bound to. * ``resource_key``: Defaults to ``id``. The attribute containing the unique resource identifier. This is an advanced option, do not modify it unless you know what you're doing. * ``aggregation_method``: Defaults to ``max``. The aggregation method to use when retrieving measures from gnocchi. Must be one of ``min``, ``max``, ``mean``, ``rate:min``, ``rate:max``, ``rate:mean``. * ``re_aggregation_method``: Defaults to ``aggregation_method``. The re-aggregation method to use when retrieving measures from gnocchi. * ``force_granularity``: Defaults to ``0``. If > 0, this granularity will be used for metric aggregations. Else, the lowest available granularity will be used (meaning the granularity covering the longest period). * ``use_all_resource_revisions``: Defaults to ``True``. This option is useful when using Gnocchi with the patch introduced via https://github.com/gnocchixyz/gnocchi/pull/1059.
That patch can cause queries to return more than one entry per granularity (timespan), according to the revisions a resource has. This can be problematic when using the 'mutate' option of Cloudkitty. This option allows operators to discard all data points returned from Gnocchi but the last one in the granularity queried by CloudKitty for a resource id. The default behavior is maintained, which means that CloudKitty always uses all of the data points returned. * ``custom_query``: Provides a way for operators to customize the aggregation query executed against Gnocchi. By default, the following query is used: ``(aggregate RE_AGGREGATION_METHOD (metric METRIC_NAME AGGREGATION_METHOD))``. Therefore, this option enables operators to take full advantage of operations available in Gnocchi, such as arithmetic operations, logical operations and many others. When using a custom aggregation query, you can keep the placeholders ``RE_AGGREGATION_METHOD``, ``AGGREGATION_METHOD``, and ``METRIC_NAME``: they will be replaced at runtime by values from the metric configuration. One example use case is metrics that are supposed to be always-growing values, such as RadosGW usage data. The usage data is affected by usage data trimming on RadosGW, which can lead to swaps (meaning that the right-side value of the series is smaller than the left-side value) in the data series in Gnocchi. Therefore, to handle this situation, one could, for instance, use the following custom query: ``(div (+ (aggregate RE_AGGREGATION_METHOD (metric METRIC_NAME AGGREGATION_METHOD)) (abs (aggregate RE_AGGREGATION_METHOD (metric METRIC_NAME AGGREGATION_METHOD)))) 2)``: this custom query would return ``0`` when the values of the series swap. Prometheus ~~~~~~~~~~ * ``aggregation_method``: Defaults to ``max``. The aggregation method to use when retrieving measures from Prometheus. Must be one of ``avg``, ``min``, ``max``, ``sum``, ``count``, ``stddev``, ``stdvar``. * ``query_function``: Optional argument. The function to apply to an instant vector after the ``aggregation_method`` or ``range_function`` has altered the data. Must be one of ``abs``, ``ceil``, ``exp``, ``floor``, ``ln``, ``log2``, ``log10``, ``round``, ``sqrt``. For more information on these functions, you can check `this page`_. * ``query_prefix``: Optional argument. An arbitrary prefix to add to the Prometheus query generated by CloudKitty, separated by a space. * ``query_suffix``: Optional argument. An arbitrary suffix to add to the Prometheus query generated by CloudKitty, separated by a space. * ``range_function``: Optional argument. The function to apply instead of the implicit ``{aggregation_method}_over_time``. Must be one of ``changes``, ``delta``, ``deriv``, ``idelta``, ``irange``, ``irate``, ``rate``. For more information on these functions, you can check `this page`_. .. _this page: https://prometheus.io/docs/prometheus/latest/querying/basics/ ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/doc/source/admin/configuration/configuration.rst0000664000175000017500000001217300000000000025421 0ustar00zuulzuul00000000000000================================ Step by step configuration guide ================================ .. note:: For a sample ``cloudkitty.conf`` file, see :doc:`samples/cloudkitty-conf`. Edit ``/etc/cloudkitty/cloudkitty.conf`` to configure cloudkitty. Common options -------------- Options supported by most OpenStack projects are also supported by cloudkitty: ..
code-block:: ini [DEFAULT] verbose = true debug = false log_dir = /var/log/cloudkitty transport_url = rabbit://RABBIT_USER:RABBIT_PASSWORD@RABBIT_HOST API authentication method ------------------------- The authentication method is defined through the ``auth_strategy`` option in the ``[DEFAULT]`` section. Standalone mode +++++++++++++++ If you're using CloudKitty in standalone mode, you'll have to use noauth: .. code-block:: ini [DEFAULT] auth_strategy = noauth Keystone integration ++++++++++++++++++++ If you're using CloudKitty with OpenStack, you'll want to use Keystone authentication: .. code-block:: ini [DEFAULT] auth_strategy = keystone When using Keystone, you'll have to provide the CloudKitty credentials for Keystone. These must be specified in the ``[keystone_authtoken]`` section. Since these credentials will be used in multiple places, it is convenient to use a common section: .. code-block:: ini [ks_auth] auth_type = v3password auth_protocol = http auth_url = http://KEYSTONE_HOST:5000/ identity_uri = http://KEYSTONE_HOST:5000/ username = cloudkitty password = CK_PASSWORD project_name = service user_domain_name = default project_domain_name = default [keystone_authtoken] auth_section = ks_auth .. note:: The ``service`` project may also be called ``services``. CloudKitty provides the ``rating`` OpenStack service. To integrate CloudKitty with Keystone, run the following commands (as an OpenStack administrator): .. code-block:: shell openstack user create cloudkitty --password CK_PASSWORD openstack role add --project service --user cloudkitty admin openstack service create rating --name cloudkitty \ --description "OpenStack Rating Service" openstack endpoint create rating --region RegionOne \ public http://localhost:8889 openstack endpoint create rating --region RegionOne \ admin http://localhost:8889 openstack endpoint create rating --region RegionOne \ internal http://localhost:8889 Storage ------- The next step is to configure the storage. Start with the SQL database and create the ``cloudkitty`` database and user: .. code-block:: shell mysql -uroot -p << EOF CREATE DATABASE cloudkitty; GRANT ALL PRIVILEGES ON cloudkitty.* TO 'CK_DBUSER'@'localhost' IDENTIFIED BY 'CK_DBPASSWORD'; EOF Specify the SQL credentials in the ``[database]`` section of the configuration file: .. code-block:: ini [database] connection = mysql+pymysql://CK_DBUSER:CK_DBPASSWORD@DB_HOST/cloudkitty Once you have set up the SQL database service, the storage backend for rated data can be configured. A complete configuration reference can be found in the `storage backend configuration guide`_. We'll use a v2 storage backend, which enables the v2 API. The storage version and driver to use must be specified in the ``[storage]`` section of the configuration file: .. code-block:: ini [storage] version = 2 backend = influxdb Driver-specific options are then specified in the ``[storage_{drivername}]`` section: .. code-block:: ini [storage_influxdb] username = cloudkitty password = cloudkitty database = cloudkitty host = influxdb Once you have configured the SQL and rated data storage backends, initialize the storage:: cloudkitty-storage-init Then, run the database migrations:: cloudkitty-dbsync upgrade .. _storage backend configuration guide: ./storage.html Fetcher ------- The fetcher retrieves the list of scopes to rate, which will then be passed to the collector. A complete configuration reference can be found in the `fetcher configuration guide`_.
For this example, we'll use the ``gnocchi`` fetcher, which will discover scopes (in this case OpenStack projects) to rate. The fetcher to use is specified through the ``backend`` option of the ``[fetcher]`` section: .. code-block:: ini [fetcher] backend = gnocchi Fetcher-specific options are then specified in the ``[fetcher_{fetchername}]`` section: .. code-block:: ini [fetcher_gnocchi] auth_section = ks_auth region_name = MyRegion .. _fetcher configuration guide: ./fetcher.html Collector --------- The collector will retrieve data for the scopes provided by the fetcher and pass them to the rating modules. The collector to use is specified in the ``[collect]`` section, and the collector-specific options are specified in the ``[collector_{collectorname}]`` section: .. code-block:: ini [collect] collector = gnocchi [collector_gnocchi] auth_section = ks_auth region_name = MyRegion Note that you'll also have to configure what metrics the collector should collect, and how they should be collected. Have a look at the `collector configuration guide`_ for this: .. _collector configuration guide: ./collector.html ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/doc/source/admin/configuration/fetcher.rst0000664000175000017500000000576000000000000024176 0ustar00zuulzuul00000000000000====================== Fetcher configuration ====================== Backend option ============== ``backend`` is a common option specified in the ``[fetcher]`` section of the configuration file. It defaults to ``keystone`` and specifies the driver to be used for fetching the list of scopes to rate. Fetcher-specific options ======================== Fetcher-specific options must be specified in the ``fetcher_{fetcher_name}`` section of ``cloudkitty.conf``. Gnocchi ------- Section ``fetcher_gnocchi``. * ``scope_attribute``: Defaults to ``project_id``. Attribute from which scope_ids should be collected. * ``resource_types``: Defaults to ``[generic]``. List of gnocchi resource types. All if left blank. * ``gnocchi_auth_type``: Defaults to ``keystone``. Defines what authentication method should be used by the gnocchi fetcher. Must be one of ``basic`` (for gnocchi basic authentication) or ``keystone`` (for classic keystone authentication). If ``keystone`` is chosen, credentials can be specified in a section pointed at by the ``auth_section`` parameter. * ``gnocchi_user``: For gnocchi basic authentication only. The gnocchi user. * ``gnocchi_endpoint``: For gnocchi basic authentication only. The gnocchi endpoint. * ``interface``: Defaults to ``internalURL``. For keystone authentication only. The interface to use for keystone URL discovery. * ``region_name``: Defaults to ``RegionOne``. For keystone authentication only. Region name. Keystone -------- Section ``fetcher_keystone``. * ``keystone_version``: Defaults to ``3``. Keystone version to use. * ``auth_section``: If the ``auth_section`` option is defined then all the options declared in the target section will be used in order to fetch scopes through Keystone service. If ``auth_section`` option is not defined then you can configure Keystone fetcher using regular Keystone authentication options as found here: :doc:`configuration`. * ``ignore_rating_role``: if set to true, the Keystone fetcher will not check if a project has the rating role; thus, CloudKitty will execute rating for every project it finds. Defaults to false. 
* ``ignore_disabled_tenants``: if set to true, Cloudkitty will not rate projects that are disabled in Keystone. Defaults to false. Prometheus ---------- Section ``fetcher_prometheus``. * ``metric``: Metric from which scope_ids should be requested. * ``scope_attribute``: Defaults to ``project_id``. Attribute from which scope_ids should be requested. * ``filters``: Optional key-value dictionary to use additional metadata to filter out some of the Prometheus service response. * ``prometheus_url``: Prometheus HTTP API URL. * ``prometheus_user``: For HTTP basic authentication. The username. * ``prometheus_password``: For HTTP basic authentication. The password. * ``cafile``: Option to allow custom certificate authority file. * ``insecure``: Option to explicitly allow untrusted HTTPS connections. Source ------ Section ``fetcher_source``. * ``sources``: Explicit list of scope_ids. ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/doc/source/admin/configuration/index.rst0000664000175000017500000000025600000000000023660 0ustar00zuulzuul00000000000000################### Configuration Guide ################### .. toctree:: :glob: configuration fetcher collector storage samples/cloudkitty-conf policy ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/doc/source/admin/configuration/policy.rst0000664000175000017500000000121400000000000024043 0ustar00zuulzuul00000000000000==================== Policy configuration ==================== Configuration ~~~~~~~~~~~~~ .. warning:: JSON formatted policy file is deprecated since Cloudkitty 14.0.0 (Wallaby). This `oslopolicy-convert-json-to-yaml`__ tool will migrate your existing JSON-formatted policy file to YAML in a backward-compatible way. .. __: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html The following is an overview of all available policies in Cloudkitty. For a sample configuration file, refer to :doc:`samples/policy-yaml`. .. show-policy:: :config-file: ../../etc/oslo-policy-generator/cloudkitty.conf ././@PaxHeader0000000000000000000000000000003300000000000011451 xustar000000000000000027 mtime=1727866639.275487 cloudkitty-21.0.0/doc/source/admin/configuration/samples/0000775000175000017500000000000000000000000023460 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/doc/source/admin/configuration/samples/cloudkitty-conf.rst0000664000175000017500000000021300000000000027324 0ustar00zuulzuul00000000000000========================= cloudkitty.conf reference ========================= .. literalinclude:: ../../../_static/cloudkitty.conf.sample ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/doc/source/admin/configuration/samples/policy-yaml.rst0000664000175000017500000000032700000000000026453 0ustar00zuulzuul00000000000000:orphan: =========== policy.yaml =========== Use the ``policy.yaml`` file to define additional access controls that apply to the Rating service: .. 
literalinclude:: ../../../_static/cloudkitty.policy.yaml.sample ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/doc/source/admin/configuration/storage.rst0000664000175000017500000000632200000000000024215 0ustar00zuulzuul00000000000000=============================== Storage backend configuration =============================== Common options ============== .. note:: Two storage backend interfaces are available: v1 and v2. Each supports one or several drivers. The v2 storage interface is required to use CloudKitty's v2 API. It is retrocompatible with the v1 API. However, it is not possible to use the v2 API with the v1 storage interface. The main storage backend options are specified in the ``[storage]`` section of the configuration file. The following options are available: * ``version``: Defaults to 2. Version of the storage interface to use (must be 1 or 2). * ``backend``: Defaults to ``influxdb``. Storage driver to use. Supported v1 drivers are: - ``sqlalchemy`` Supported v2 drivers are: - ``influxdb`` - ``elasticsearch`` - ``opensearch`` Driver-specific options ======================= SQLAlchemy (v1) --------------- This backend has no specific options. It uses the ``connection`` option of the ``database`` section. Example of value for this option: .. code-block:: ini [database] connection = mysql+pymysql://cloudkitty_user:cloudkitty_password@mariadb_host/cloudkitty_database InfluxDB (v2) ------------- Section: ``storage_influxdb``. * ``username``: InfluxDB username. * ``password``: InfluxDB password. * ``database``: InfluxDB database. * ``retention_policy``: Retention policy to use (defaults to ``autogen``) * ``host``: Defaults to ``localhost``. InfluxDB host. * ``port``: Default to 8086. InfluxDB port. * ``use_ssl``: Defaults to false. Set to true to use SSL for InfluxDB connections. * ``insecure``: Defaults to false. Set to true to authorize insecure HTTPS connections to InfluxDB. * ``cafile``: Path of the CA certificate to trust for HTTPS connections. .. note:: CloudKitty will push one point per collected metric per collect period to InfluxDB. Depending on the size of your infra and the capacities of your InfluxDB host / cluster, you might want to do regular exports of your data and create a custom retention policy on cloudkitty's database. Elasticsearch (v2) ------------------ Section ``storage_elasticsearch``: * ``host``: Defaults to ``http://localhost:9200``. Elasticsearch host, along with port and protocol. * ``index_name``: Defaults to ``cloudkitty``. Elasticsearch index to use. * ``insecure``: Defaults to ``false``. Set to true to allow insecure HTTPS connections to Elasticsearch. * ``cafile``: Path of the CA certificate to trust for HTTPS connections. * ``scroll_duration``: Defaults to 30. Duration (in seconds) for which the Elasticsearch scroll contexts should be kept alive. OpenSearch 2.x (v2) ------------------- Section ``storage_opensearch``: * ``host``: Defaults to ``http://localhost:9200``. OpenSearch 2.x host, along with port and protocol. * ``index_name``: Defaults to ``cloudkitty``. OpenSearch index to use. * ``insecure``: Defaults to ``false``. Set to true to allow insecure HTTPS connections to OpenSearch. * ``cafile``: Path of the CA certificate to trust for HTTPS connections. * ``scroll_duration``: Defaults to 30. Duration (in seconds) for which the OpenSearch scroll contexts should be kept alive. 
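As an illustration, the options above can be combined into a minimal v2 storage configuration. The following sketch uses the OpenSearch driver; the endpoint URL and CA file path are placeholders to adapt to your deployment, and the other values simply restate the documented defaults:

.. code-block:: ini

   [storage]
   version = 2
   backend = opensearch

   [storage_opensearch]
   # Placeholder endpoint; point this at your OpenSearch 2.x cluster.
   host = https://opensearch.example.org:9200
   index_name = cloudkitty
   insecure = false
   # Placeholder path of the CA certificate to trust for HTTPS connections.
   cafile = /etc/ssl/certs/opensearch-ca.pem
   scroll_duration = 30

The same layout applies to the other drivers: the ``[storage]`` section selects the storage interface version and driver, and the matching ``[storage_{drivername}]`` section holds the driver-specific options listed above.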
././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/doc/source/admin/devstack.rst0000664000175000017500000000146200000000000021506 0ustar00zuulzuul00000000000000DevStack installation ===================== Add the following lines in your ``local.conf`` file to enable CloudKitty, Ceilometer and Gnocchi. By default, the fetcher will be ``gnocchi`` (configurable via the ``CLOUDKITTY_FETCHER`` variable), the collector will be ``gnocchi`` (configurable via the ``CLOUDKITTY_COLLECTOR`` variable), and the storage backend will be ``influxdb`` (configurable via the ``CLOUDKITTY_STORAGE_BACKEND`` and ``CLOUDKITTY_STORAGE_VERSION`` variables). .. code-block:: ini [[local|localrc]] # ceilometer enable_plugin ceilometer https://opendev.org/openstack/ceilometer.git master # cloudkitty enable_plugin cloudkitty https://opendev.org/openstack/cloudkitty.git master enable_service ck-api,ck-proc Then start devstack: .. code-block:: console ./stack.sh ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/doc/source/admin/index.rst0000664000175000017500000000027100000000000021006 0ustar00zuulzuul00000000000000==================== Administration Guide ==================== .. toctree:: :glob: :maxdepth: 2 architecture devstack install/index configuration/index cli/index ././@PaxHeader0000000000000000000000000000003300000000000011451 xustar000000000000000027 mtime=1727866639.275487 cloudkitty-21.0.0/doc/source/admin/install/0000775000175000017500000000000000000000000020613 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/doc/source/admin/install/index.rst0000664000175000017500000000022100000000000022447 0ustar00zuulzuul00000000000000================== Installation Guide ================== .. toctree:: :glob: install-source install-ubuntu install-rdo mod_wsgi ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/doc/source/admin/install/install-rdo.rst0000664000175000017500000000070300000000000023575 0ustar00zuulzuul00000000000000Install from package (RDO For RHEL/CentOS 7) ============================================ Packages for RHEL/CentOS 7 are available starting from the Mitaka release. #. Install the RDO repositories for your release:: yum install centos-release-openstack-RELEASE # RELEASE can be any supported release name like rocky #. 
Install the packages:: yum install openstack-cloudkitty-api openstack-cloudkitty-processor openstack-cloudkitty-ui ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/doc/source/admin/install/install-source.rst0000664000175000017500000000376300000000000024322 0ustar00zuulzuul00000000000000Install from source =================== Install the services -------------------- Retrieve and install cloudkitty:: git clone https://opendev.org/openstack/cloudkitty.git cd cloudkitty python setup.py install This procedure installs the ``cloudkitty`` python library and the following executables: * ``cloudkitty-api``: API service * ``cloudkitty-processor``: Processing service (collecting and rating) * ``cloudkitty-dbsync``: Tool to create and upgrade the database schema * ``cloudkitty-storage-init``: Tool to initiate the storage backend * ``cloudkitty-writer``: Reporting tool Install sample configuration files:: mkdir /etc/cloudkitty tox -e genconfig cp etc/cloudkitty/cloudkitty.conf.sample /etc/cloudkitty/cloudkitty.conf cp etc/cloudkitty/policy.yaml /etc/cloudkitty cp etc/cloudkitty/api_paste.ini /etc/cloudkitty Create the log directory:: mkdir /var/log/cloudkitty/ Install the client ------------------ Retrieve and install cloudkitty client:: git clone https://opendev.org/openstack/python-cloudkittyclient.git cd python-cloudkittyclient python setup.py install Install the dashboard module ---------------------------- #. Retrieve and install cloudkitty's dashboard:: git clone https://opendev.org/openstack/cloudkitty-dashboard.git cd cloudkitty-dashboard python setup.py install #. Find where the python packages are installed:: PY_PACKAGES_PATH=`pip --version | cut -d' ' -f4` #. Add the enabled file to the horizon settings or installation. Depending on your setup, you might need to add it to ``/usr/share`` or directly in the horizon python package:: # If horizon is installed by packages: ln -sf $PY_PACKAGES_PATH/cloudkittydashboard/enabled/_[0-9]*.py \ /usr/share/openstack-dashboard/openstack_dashboard/enabled/ # Directly from sources: ln -sf $PY_PACKAGES_PATH/cloudkittydashboard/enabled/_[0-9]*.py \ $PY_PACKAGES_PATH/openstack_dashboard/enabled/ #. Restart the web server hosting Horizon. ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/doc/source/admin/install/install-ubuntu.rst0000664000175000017500000000110500000000000024330 0ustar00zuulzuul00000000000000Install from packages for Ubuntu (16.04) ======================================== Packages for Ubuntu 16.04 are available starting from the Newton release. #. Enable the OpenStack repository for the Newton or Ocata release:: apt install software-properties-common add-apt-repository ppa:objectif-libre/cloudkitty # Newton add-apt-repository ppa:objectif-libre/cloudkitty-ocata # Ocata #. Upgrade the packages on your host:: apt update && apt dist-upgrade #. Install the packages:: apt-get install cloudkitty-api cloudkitty-processor cloudkitty-dashboard ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/doc/source/admin/install/mod_wsgi.rst0000664000175000017500000000341400000000000023157 0ustar00zuulzuul00000000000000.. Copyright 2013 New Dream Network, LLC (DreamHost) Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. 
You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. .. _mod_wsgi: ================================== Installing the API behind mod_wsgi ================================== Cloudkitty comes with a few example files for configuring the API service to run behind Apache with ``mod_wsgi``. app.wsgi ======== The file ``cloudkitty/api/app.wsgi`` sets up the V1 API WSGI application. The file needs to be copied to ``/var/www/cloudkitty/``, and should not need to be modified. etc/apache2/cloudkitty ====================== The ``etc/apache2/cloudkitty`` file contains example settings that work with a copy of cloudkitty installed via devstack. .. literalinclude:: ../../../../etc/apache2/cloudkitty 1. On deb-based systems copy or symlink the file to ``/etc/apache2/sites-available``. For rpm-based systems the file will go in ``/etc/httpd/conf.d``. 2. Modify the ``WSGIDaemonProcess`` directive to set the ``user`` and ``group`` values to an appropriate user on your server. In many installations ``cloudkitty`` will be correct. 3. Enable the cloudkitty site. On deb-based systems:: # a2ensite cloudkitty # service apache2 reload On rpm-based systems:: # service httpd reload ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/doc/source/common-index.rst0000664000175000017500000000444600000000000021214 0ustar00zuulzuul00000000000000What is CloudKitty ? ==================== CloudKitty is a **Rating-as-a-Service** project for OpenStack and more. The project aims at being a **generic** solution for the chargeback and rating of a cloud. Historically, it was only possible to operate it inside of an OpenStack context, but it is now possible to run CloudKitty in standalone mode. CloudKitty allows to do metric-based rating: it polls endpoints in order to retrieve measures and metadata about specific metrics, applies rating rules to the collected data and pushes the rated data to its storage backend. More details about concepts, expressions, and jargons can be found in the `documentation of CloudKitty concepts`_. CloudKitty is highly modular, which makes it easy to add new features. .. only:: html .. note:: **We're looking for contributors!** If you want to contribute, please have a look at the `developer documentation`_ . .. _developer documentation: developer/index.html .. _documentation of CloudKitty concepts: concepts/index.html What can be done with CloudKitty ? What can't ? =============================================== **With Cloudkitty, it is possible to:** - Collect metrics from OpenStack (through Gnocchi) or from somewhere else (through Gnocchi in standalone mode and Prometheus). Metric collection is **highly customizable**. - Apply rating rules to the previous metrics through the `hashmap`_ module or `custom scripts`_. This is all done via CloudKitty's API. - Retrieve the rated information through the API, grouped by scope and/or by metric type. **However, it is not possible to:** - Limit resources in other OpenStack services once a certain limit has been reached. Ex: block instance creation in Nova above a certain price. Cloudkitty does **rating and only rating**. 
- Add taxes, convert between currencies, etc... This needs to be done by a billing software. CloudKitty associates a price to a metric for a given period, but the price's unit is what you decide it to be: euros, dollars, cents, squirrels... .. _custom scripts: user/rating/pyscripts.html .. _roadmap: developer/roadmap.html What changes/features are to expect ? ===================================== If you're interested in CloudKitty's evolution, see the project's `roadmap`_ . .. _hashmap: user/rating/hashmap.html ././@PaxHeader0000000000000000000000000000003300000000000011451 xustar000000000000000027 mtime=1727866639.275487 cloudkitty-21.0.0/doc/source/concepts/0000775000175000017500000000000000000000000017673 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/doc/source/concepts/index.rst0000664000175000017500000001172200000000000021537 0ustar00zuulzuul00000000000000CloudKitty Concepts ====================== This page provides the definitions for the concepts used in CloudKitty. It is recommended that you get familiar with them. Rating ------ It is the process of assigning a `value` to the consumption of computing resources. CloudKitty uses the concepts of services, which are rated. Therefore, one can configure services to be collected/monitored, and then through its processes, we can assign monetary values to the service consumption. The `value` assigned can be used to represent a monetary value in any currency. However, CloudKitty has no native module to execute conversions and apply any currency rate. The process to map/link a `value` to a real monetary charge is up to operators when configuring CloudKitty. Modules ------- Modules define the rating processes that are enabled. To get to know more about the rating modules, one should check `rating modules`_ . .. _rating modules: ../user/rating/index.html Services -------- Services define the metrics that are collected in a storage backend, and that are then rated by CloudKitty. Services need to be defined via API to be processed later by the rating modules, and configured in the collectors to be captured. Services are configured to be collected in the ``metrics.yml`` file. More information about service creation can be found at the `service configuration page`_. .. _service configuration page: ../admin/configuration/configuration.html Groups ------ Groups define sets of services that can be manipulated together. Groups are directly linked to rating rules, and not to services or fields. Therefore, if we want to group a set of rules to list them together or delete them, we can create a group and add them to the group, but in the end the resources are going to be charged based on the services, fields and rating rules. Fields ------ Fields define the attributes that are retrieved together with the service collection that can be used to activate a rating rule. PyScripts --------- It is an alternative method of writing rating rule. When writing a PyScript, one will be able to handle the complete processing of the rating. Therefore, there is no need to create services, fields, and groups in CloudKitty. The PyScript logic should take care of all that. Rating rules ------------ Rating rules are the expressions used to create a charge (assign a value to a computing resource consumption). Rating rules can be created with PyScripts or with the use of fields, services and groups with hashmap rating rules. 
If we have a hashmap mapping configuration for a service and another hashmap map configuration for a field that belongs to the same service, the user is going to be charged twice, one for service and another for the field that activated a rating rule that is linked to the service. Rating type ----------- Rating type is the expression used to determine a service definition in the collection backend. For instance, one can use the following syntax in the ``metrics.yml`` file. The entry ``dynamic_pollster.compute.services.instance.status`` is the definition for rating types. In the example shown here, there are two rating types being defined, one called ``instance-usage-hours`` and the other called ``instance-operating-system-license``. The rating types are configured in CloudKitty API as services. If they are not configured, they will not be rated by rating rules defined with hashmap. Therefore, they would be collected, and persisted with value (price) as zero. .. code-block:: yaml metrics: dynamic_pollster.compute.services.instance.status: - unit: instance alt_name: instance-usage-hours description: "compute" groupby: - id - display_name - flavor_id - flavor_name - user_id - project_id - revision_start - availability_zone metadata: - image_ref - flavor_vcpus - flavor_ram - operating_system_name - operating_system_distro - operating_system_type - operating_system_version - mssql_version extra_args: aggregation_method: max resource_type: instance use_all_resource_revisions: false - unit: license-hours alt_name: "instance-operating-system-license" description: "license" groupby: - id - display_name - flavor_id - flavor_name - user_id - project_id - revision_start - availability_zone - operating_system_distro - operating_system_name metadata: - image_ref - flavor_vcpus - flavor_ram - operating_system_type - operating_system_version extra_args: aggregation_method: max resource_type: instance use_all_resource_revisions: false ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/doc/source/conf.py0000664000175000017500000002022200000000000017352 0ustar00zuulzuul00000000000000# Copyright (c) 2010 OpenStack Foundation. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. # # This file is execfile()'d with the current directory set to its containing # dir. # # Note that not all possible configuration values are present in this # autogenerated file. # # All configuration values have a default; values that are commented out # serve to show the default. import os import sys # If extensions (or modules to document with autodoc) are in another directory, # add these directories to sys.path here. If the directory is relative to the # documentation root, use os.path.abspath to make it absolute, like shown here. sys.path = [ os.path.abspath('../..'), os.path.abspath('../../bin') ] + sys.path # -- General configuration --------------------------------------------------- # Add any Sphinx extension module names here, as strings. 
They can be # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom ones. extensions = ['sphinx.ext.coverage', 'sphinx.ext.ifconfig', 'sphinx.ext.graphviz', 'stevedore.sphinxext', 'oslo_config.sphinxext', 'sphinx.ext.viewcode', 'oslo_config.sphinxconfiggen', 'sphinx.ext.mathjax', 'wsmeext.sphinxext', 'sphinx.ext.autodoc', 'sphinxcontrib.pecanwsme.rest', 'sphinxcontrib.httpdomain', 'os_api_ref', 'openstackdocstheme', 'oslo_policy.sphinxext', 'oslo_policy.sphinxpolicygen', ] # Ignore the following warning: WARNING: while setting up extension # wsmeext.sphinxext: directive 'autoattribute' is already registered, # it will be overridden. suppress_warnings = ['app.add_directive'] # openstackdocstheme options openstackdocs_repo_name = 'openstack/cloudkitty' openstackdocs_pdf_link = True openstackdocs_use_storyboard = True config_generator_config_file = '../../etc/oslo-config-generator/cloudkitty.conf' policy_generator_config_file = '../../etc/oslo-policy-generator/cloudkitty.conf' sample_policy_basename = sample_config_basename = '_static/cloudkitty' # Add any paths that contain templates here, relative to this directory. # templates_path = [] # The suffix of source filenames. source_suffix = '.rst' # The encoding of source files. #source_encoding = 'utf-8' # The master toctree document. master_doc = 'index' # General information about the project. copyright = '2014-present, OpenStack Foundation.' # The language for content autogenerated by Sphinx. Refer to documentation # for a list of supported languages. #language = None # There are two options for replacing |today|: either, you set today to some # non-false value, then it is used: #today = '' # Else, today_fmt is used as the format for a strftime call. #today_fmt = '%B %d, %Y' # List of documents that shouldn't be included in the build. #unused_docs = [] # List of directories, relative to source directory, that shouldn't be searched # for source files. #exclude_trees = ['api'] exclude_patterns = [] # The reST default role (for this markup: `text`) to use for all documents. #default_role = None # If true, '()' will be appended to :func: etc. cross-reference text. add_function_parentheses = True # If true, the current module name will be prepended to all description # unit titles (such as .. function::). add_module_names = True # If true, sectionauthor and moduleauthor directives will be shown in the # output. They are ignored by default. show_authors = True # The name of the Pygments (syntax highlighting) style to use. pygments_style = 'native' # A list of ignored prefixes for module index sorting. modindex_common_prefix = ['cloudkitty.'] # -- Options for man page output -------------------------------------------- # Grouping the document tree for man pages. # List of tuples 'sourcefile', 'target', u'title', u'Authors name', 'manual' man_pages = [ ('index', 'cloudkitty', 'cloudkitty Documentation', ['Objectif Libre'], 1) ] # -- Options for HTML output ------------------------------------------------- # The theme to use for HTML and HTML Help pages. Major themes that come with # Sphinx are currently 'default' and 'sphinxdoc'. # html_theme_path = ["."] # html_theme = '_theme' html_theme = 'openstackdocs' # Theme options are theme-specific and customize the look and feel of a theme # further. For a list of options available for each theme, see the # documentation. html_theme_options = { "show_other_versions": "True", } # Add any paths that contain custom themes here, relative to this directory. 
#html_theme_path = ['_theme'] #html_theme_path = [openstackdocstheme.get_html_theme_path()] # The name for this set of Sphinx documents. If None, it defaults to # " v documentation". #html_title = None # A shorter title for the navigation bar. Default is the same as html_title. #html_short_title = None # The name of an image file (relative to this directory) to place at the top # of the sidebar. #html_logo = None # The name of an image file (within the static path) to use as favicon of the # docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32 # pixels large. #html_favicon = None # Add any paths that contain custom static files (such as style sheets) here, # relative to this directory. They are copied after the builtin static files, # so a file named "default.css" will overwrite the builtin "default.css". html_static_path = ['_static'] # If true, SmartyPants will be used to convert quotes and dashes to # typographically correct entities. #html_use_smartypants = True # Custom sidebar templates, maps document names to template names. #html_sidebars = {} # Additional templates that should be rendered to pages, maps page names to # template names. #html_additional_pages = {} # If false, no module index is generated. html_use_modindex = True # If false, no index is generated. html_use_index = True # If true, the index is split into individual pages for each letter. #html_split_index = False # If true, links to the reST sources are added to the pages. #html_show_sourcelink = True # If true, an OpenSearch description file will be output, and all pages will # contain a tag referring to it. The value of this option must be the # base URL from which the finished HTML is served. #html_use_opensearch = '' # If nonempty, this is the file name suffix for HTML files (e.g. ".xhtml"). #html_file_suffix = '' # Output file base name for HTML help builder. htmlhelp_basename = 'cloudkittydoc' # -- Options for LaTeX output ------------------------------------------------ # The paper size ('letter' or 'a4'). #latex_paper_size = 'letter' # The font size ('10pt', '11pt' or '12pt'). #latex_font_size = '10pt' # Grouping the document tree into LaTeX files. List of tuples # (source start file, target name, title, author, # documentclass [howto/manual]). latex_documents = [ ('pdf-index', 'doc-cloudkitty.tex', 'Cloudkitty Documentation', 'Cloudkitty Team', 'manual'), ] # The name of an image file (relative to this directory) to place at the top of # the title page. #latex_logo = None # For "manual" documents, if this is true, then toplevel headings are parts, # not chapters. #latex_use_parts = False # Additional stuff for the LaTeX preamble. #latex_preamble = '' # Documents to append as an appendix to all manuals. #latex_appendices = [] # If false, no module index is generated. #latex_use_modindex = True # If false, no module index is generated. 
latex_domain_indices = False latex_elements = { 'makeindex': '', 'printindex': '', 'preamble': r'\setcounter{tocdepth}{3}', } # Disable usage of xindy https://bugzilla.redhat.com/show_bug.cgi?id=1643664 latex_use_xindy = False ././@PaxHeader0000000000000000000000000000003300000000000011451 xustar000000000000000027 mtime=1727866639.275487 cloudkitty-21.0.0/doc/source/contributor/0000775000175000017500000000000000000000000020427 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/doc/source/contributor/contributing.rst0000664000175000017500000000357700000000000023704 0ustar00zuulzuul00000000000000============================ So You Want to Contribute... ============================ For general information on contributing to OpenStack, please check out the `contributor guide `_ to get started. It covers all the basics that are common to all OpenStack projects: the accounts you need, the basics of interacting with our Gerrit review system, how we communicate as a community, etc. Below will cover the more project specific information you need to get started with Cloudkitty. Communication ~~~~~~~~~~~~~ * IRC channel #cloudkitty at `OFTC `_ * Mailing list (prefix subjects with ``[cloudkitty]`` for faster responses) http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-discuss Contacting the Core Team ~~~~~~~~~~~~~~~~~~~~~~~~ Please refer the `Cloudkitty Core Team `_ contacts. New Feature Planning ~~~~~~~~~~~~~~~~~~~~ Cloudkitty features are tracked on `Storyboard `_. Task Tracking ~~~~~~~~~~~~~ We track our tasks in `Storyboard `_. If you're looking for some smaller, easier work item to pick up and get started on, search for the 'low-hanging-fruit' tag. Reporting a Bug ~~~~~~~~~~~~~~~ You found an issue and want to make sure we are aware of it? You can do so on `StoryBoard `_. Getting Your Patch Merged ~~~~~~~~~~~~~~~~~~~~~~~~~ All changes proposed to the Cloudkitty project require one or two +2 votes from Cloudkitty core reviewers before one of the core reviewers can approve patch by giving ``Workflow +1`` vote. Project Team Lead Duties ~~~~~~~~~~~~~~~~~~~~~~~~ All common PTL duties are enumerated in the `PTL guide `_. ././@PaxHeader0000000000000000000000000000003300000000000011451 xustar000000000000000027 mtime=1727866639.275487 cloudkitty-21.0.0/doc/source/developer/0000775000175000017500000000000000000000000020042 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000003300000000000011451 xustar000000000000000027 mtime=1727866639.275487 cloudkitty-21.0.0/doc/source/developer/api/0000775000175000017500000000000000000000000020613 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/doc/source/developer/api/index.rst0000664000175000017500000000010500000000000022450 0ustar00zuulzuul00000000000000===== API ===== .. toctree:: :maxdepth: 2 tutorial utils ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/doc/source/developer/api/tutorial.rst0000664000175000017500000002311600000000000023213 0ustar00zuulzuul00000000000000==================================== Tutorial: creating an API endpoint ==================================== This section of the document details how to create an endpoint for CloudKitty's v2 API. The v1 API is frozen, no endpoint should be added. 
Setting up the layout for a new resource ======================================== In this section, we will create an ``example`` endpoint. Create the following files and subdirectories in ``cloudkitty/api/v2/``: .. code-block:: console cloudkitty/api/v2/ └── example ├── example.py └── __init__.py Creating a custom resource ========================== Each v2 API endpoint is based on a Flask Blueprint and one Flask-RESTful resource per sub-endpoint. This allows to have a logical grouping of the resources. Let's take the ``/rating/hashmap`` route as an example. Each of the hashmap module's resources should be a Flask-RESTful resource (eg. ``/rating/hashmap/service``, ``/rating/hashmap/field`` etc...). .. note:: There should be a distinction between endpoints refering to a single resource and to several ones. For example, if you want an endpoint allowing to list resources of some kind, you should implement the following: * A ``MyResource`` resource with support for ``GET``, ``POST`` and ``PUT`` HTTP methods on the ``/myresource/`` route. * A ``MyResourceList`` resource with support for the ``GET`` HTTP method on the ``/myresource`` route. * A blueprint containing these resources. Basic resource -------------- We'll create an ``/example/`` endpoint, used to manipulate fruits. We'll create an ``Example`` resource, supporting ``GET`` and ``POST`` HTTP methods. First of all, we'll create a class with ``get`` and ``post`` methods in ``cloudkitty/api/v2/example/example.py``: .. code-block:: python from cloudkitty.api.v2 import base class Example(base.BaseResource): def get(self): pass def post(self): pass Validating a method's parameters and output ------------------------------------------- A ``GET`` request on our resource will simply return **{"message": "This is an example endpoint"}**. The ``add_output_schema`` decorator adds voluptuous validation to a method's output. This allows to set defaults. .. autofunction:: cloudkitty.api.v2.utils.add_output_schema :noindex: Let's update our ``get`` method in order to use this decorator: .. code-block:: python import voluptuous from cloudkitty.api.v2 import base from cloudkitty import validation_utils class Example(base.BaseResource): @api_utils.add_output_schema({ voluptuous.Required( 'message', default='This is an example endpoint', ): validation_utils.get_string_type(), }) def get(self): return {} .. note:: In this snippet, ``get_string_type`` returns ``basestring`` in python2 and ``str`` in python3. .. code-block:: console $ curl 'http://cloudkitty-api:8889/v2/example' {"message": "This is an example endpoint"} It is now time to implement the ``post`` method. This function will take a parameter. In order to validate it, we'll use the ``add_input_schema`` decorator: .. autofunction:: cloudkitty.api.v2.utils.add_input_schema :noindex: Arguments validated by the input schema are passed as named arguments to the decorated function. Let's implement the post method. We'll use Werkzeug exceptions for HTTP return codes. .. code-block:: python @api_utils.add_input_schema('body', { voluptuous.Required('fruit'): validation_utils.get_string_type(), }) def post(self, fruit=None): policy.authorize(flask.request.context, 'example:submit_fruit', {}) if not fruit: raise http_exceptions.BadRequest( 'You must submit a fruit', ) if fruit not in ['banana', 'strawberry']: raise http_exceptions.Forbidden( 'You submitted a forbidden fruit', ) return { 'message': 'Your fruit is a ' + fruit, } Here, ``fruit`` is expected to be found in the request body: .. 
code-block:: console $ curl -X POST -H 'Content-Type: application/json' 'http://cloudkitty-api:8889/v2/example' -d '{"fruit": "banana"}' {"message": "Your fruit is a banana"} In order to retrieve ``fruit`` from the query, the function should have been decorated like this: .. code-block:: python @api_utils.add_input_schema('query', { voluptuous.Required('fruit'): api_utils.SingleQueryParam(str), }) def post(self, fruit=None): Note that a ``SingleQueryParam`` is used here: given that query parameters can be specified several times (eg ``xxx?groupby=a&groupby=b``), Flask provides query parameters as lists. The ``SingleQueryParam`` helper checks that a parameter is provided only once, and returns it. .. autoclass:: cloudkitty.api.v2.utils.SingleQueryParam :noindex: .. warning:: ``SingleQueryParam`` uses ``voluptuous.Coerce`` internally for type checking. Thus, ``validation_utils.get_string_type`` cannot be used as ``basestring`` can't be instantiated. Authorising methods ------------------- The ``Example`` resource is still missing some authorisations. We'll create a policy per method, configurable via the ``policy.yaml`` file. Create a ``cloudkitty/common/policies/v2/example.py`` file with the following content: .. code-block:: python from oslo_policy import policy from cloudkitty.common.policies import base example_policies = [ policy.DocumentedRuleDefault( name='example:get_example', check_str=base.UNPROTECTED, description='Get an example message', operations=[{'path': '/v2/example', 'method': 'GET'}]), policy.DocumentedRuleDefault( name='example:submit_fruit', check_str=base.UNPROTECTED, description='Submit a fruit', operations=[{'path': '/v2/example', 'method': 'POST'}]), ] def list_rules(): return example_policies Add the following lines to ``cloudkitty/common/policies/__init__.py``: .. code-block:: python # [...] from cloudkitty.common.policies.v2 import example as v2_example def list_rules(): return itertools.chain( base.list_rules(), # [...] v2_example.list_rules(), ) This registers two documented policies, ``get_example`` and ``submit_fruit``. They are unprotected by default, which means that everybody can access them. However, they can be overriden in ``policy.yaml``. Call them the following way: .. code-block:: python # [...] import flask from cloudkitty.common import policy from cloudkitty.api.v2 import base class Example(base.BaseResource): # [...] def get(self): policy.authorize(flask.request.context, 'example:get_example', {}) return {} # [...] def post(self): policy.authorize(flask.request.context, 'example:submit_fruit', {}) # [...] Loading drivers --------------- Most of the time, resources need to load some drivers (storage, SQL...). As the instantiation of these drivers can take some time, this should be done only once. Some drivers (like the storage driver) are loaded in ``BaseResource`` and are thus available to all resources. Resources requiring some additional drivers should implement the ``reload`` function: .. code-block:: python class BaseResource(flask_restful.Resource): @classmethod def reload(cls): """Reloads all required drivers""" Here's an example taken from ``cloudkitty.api.v2.scope.state.ScopeState``: .. code-block:: python @classmethod def reload(cls): super(ScopeState, cls).reload() cls._client = messaging.get_client() cls._storage_state = storage_state.StateManager() Registering resources ===================== Each endpoint should provide an ``init`` method taking a Flask app as only parameter. This method should call ``do_init``: .. 
autofunction:: cloudkitty.api.v2.utils.do_init :noindex: Add the following to ``cloudkitty/api/v2/example/__init__.py``: .. code-block:: python from cloudkitty.api.v2 import utils as api_utils def init(app): api_utils.do_init(app, 'example', [ { 'module': __name__ + '.' + 'example', 'resource_class': 'Example', 'url': '', }, ]) return app Here, we call ``do_init`` with the flask app passed as parameter, a blueprint name, and a list of resources. The blueprint name will prefix the URLs of all resources. Each resource is represented by a dict with the following attributes: * ``module``: name of the python module containing the resource class * ``resource_class``: class of the resource * ``url``: url suffix In our case, the ``Example`` resource will be served at ``/example`` (blueprint name + URL suffix). .. note:: In case you need to add a resource to an existing endpoint, just add it to the list. .. warning:: If you created a new module, you'll have to add it to ``API_MODULES`` in ``cloudkitty/api/v2/__init__.py``: .. code-block:: python API_MODULES = [ 'cloudkitty.api.v2.example', ] Documenting your endpoint ========================= The v2 API is documented with `os_api_ref`_ . Each v2 API endpoint must be documented in ``doc/source/api-reference/v2//``. .. _os_api_ref: https://docs.openstack.org/os-api-ref/latest/ ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/doc/source/developer/api/utils.rst0000664000175000017500000000063600000000000022512 0ustar00zuulzuul00000000000000================= Utils reference ================= .. note:: This section of the documentation is a reference of the ``cloudkitty.api.v2.utils`` module. It is generated from the docstrings of the functions. Please report any documentation bug you encounter on this page .. automodule:: cloudkitty.api.v2.utils :members: SingleQueryParam, add_input_schema, paginated, add_output_schema, do_init ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/doc/source/developer/collector.rst0000664000175000017500000001234400000000000022566 0ustar00zuulzuul00000000000000========= Collector ========= Data format =========== Internally, CloudKitty's data format is a bit more detailled than what can be found in the `architecture documentation`_. The internal data format is the following: .. code-block:: json { "bananas": [ { "vol": { "unit": "banana", "qty": 1 }, "rating": { "price": 1 }, "groupby": { "xxx_id": "hello", "yyy_id": "bye", }, "metadata": { "flavor": "chocolate", "eaten_by": "gorilla", }, } ], } However, developers implementing a collector don't need to format the data themselves, as there are helper functions for these matters. Implementation ============== Each collector must implement the following class: .. autoclass:: cloudkitty.collector.BaseCollector :noindex: :members: fetch_all, check_configuration The ``retrieve`` method of the ``BaseCollector`` class is called by the orchestrator. This method calls the ``fetch_all`` method of the child class. To create a collector, you need to implement at least the ``fetch_all`` method. Data collection +++++++++++++++ Collectors must implement a ``fetch_all`` method. This method is called for each metric type, for each scope, for each collect period. It has the following prototype: .. 
Implementation ============== Each collector must implement the following class: .. autoclass:: cloudkitty.collector.BaseCollector :noindex: :members: fetch_all, check_configuration The ``retrieve`` method of the ``BaseCollector`` class is called by the orchestrator. This method calls the ``fetch_all`` method of the child class. To create a collector, you need to implement at least the ``fetch_all`` method. Data collection +++++++++++++++ Collectors must implement a ``fetch_all`` method. This method is called for each metric type, for each scope, for each collect period. It has the following prototype: .. autoclass:: cloudkitty.collector.BaseCollector :noindex: :members: fetch_all This method is supposed to return a list of ``cloudkitty.dataframe.DataPoint`` objects. Example code of a basic collector: .. code-block:: python from cloudkitty.collector import BaseCollector from cloudkitty import dataframe class MyCollector(BaseCollector): def __init__(self, **kwargs): super(MyCollector, self).__init__(**kwargs) def fetch_all(self, metric_name, start, end, project_id=None, q_filter=None): data = [] for CONDITION: # do stuff data.append(dataframe.DataPoint( unit, qty, # int, float, decimal.Decimal or str 0, # price groupby, # dict metadata, # dict )) return data ``project_id`` can be misleading, as it is a legacy name. It contains the ID of the current scope. The attribute corresponding to the scope is specified in the configuration, under ``[collect]/scope_key``. Thus, all queries should filter based on this attribute. Example: .. code-block:: python from oslo_config import cfg from cloudkitty.collector import BaseCollector CONF = cfg.CONF class MyCollector(BaseCollector): def __init__(self, **kwargs): super(MyCollector, self).__init__(**kwargs) def fetch_all(self, metric_name, start, end, project_id=None, q_filter=None): scope_key = CONF.collect.scope_key filters = {'start': start, 'stop': end, scope_key: project_id} data = self.client.query( filters=filters, groupby=self.conf[metric_name]['groupby']) # Format data etc return output Additional configuration ++++++++++++++++++++++++ If you need to extend the metric configuration (add parameters to the ``extra_args`` section of ``metrics.yml``), you can overload the ``check_configuration`` method of the base collector: .. autoclass:: cloudkitty.collector.BaseCollector :noindex: :members: check_configuration This method uses `voluptuous`_ for data validation. The base schema for each metric can be found in ``cloudkitty.collector.METRIC_BASE_SCHEMA``. This schema is meant to be extended by other collectors. Example taken from the gnocchi collector code: .. code-block:: python from voluptuous import All from voluptuous import In from voluptuous import Length from voluptuous import Required from voluptuous import Schema from cloudkitty import collector GNOCCHI_EXTRA_SCHEMA = { Required('extra_args'): { Required('resource_type'): All(str, Length(min=1)), # Due to the Gnocchi model, metrics are grouped by resource. # This parameter allows adapting the key of the resource identifier Required('resource_key', default='id'): All(str, Length(min=1)), Required('aggregation_method', default='max'): In(['max', 'mean', 'min']), }, } class GnocchiCollector(collector.BaseCollector): collector_name = 'gnocchi' @staticmethod def check_configuration(conf): conf = collector.BaseCollector.check_configuration(conf) metric_schema = Schema(collector.METRIC_BASE_SCHEMA).extend( GNOCCHI_EXTRA_SCHEMA) output = {} for metric_name, metric in conf.items(): met = output[metric_name] = metric_schema(metric) if met['extra_args']['resource_key'] not in met['groupby']: met['groupby'].append(met['extra_args']['resource_key']) return output If your collector does not need any ``extra_args``, it is not required to overload the ``check_configuration`` method.
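Once validated, the values from ``extra_args`` are available to the collector through ``self.conf``, next to base options such as ``groupby`` used in the ``fetch_all`` example above. The sketch below is hypothetical (the collector name and the way the backend would be queried are assumptions); it only illustrates how a collector can rely on the defaults guaranteed by its schema:

.. code-block:: python

    from cloudkitty import collector


    class HypotheticalCollector(collector.BaseCollector):
        """Illustrative collector consuming validated extra_args."""

        collector_name = 'hypothetical'

        def fetch_all(self, metric_name, start, end,
                      project_id=None, q_filter=None):
            # check_configuration() has already validated self.conf, so the
            # defaults declared in the schema (e.g. 'aggregation_method')
            # are guaranteed to be set at this point.
            extra_args = self.conf[metric_name]['extra_args']
            groupby_keys = self.conf[metric_name]['groupby']
            # Query your backend here using extra_args and groupby_keys,
            # and return a list of cloudkitty.dataframe.DataPoint objects.
            return []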
.. _architecture documentation: ../admin/architecture.html .. _voluptuous: https://github.com/alecthomas/voluptuous ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/doc/source/developer/fetcher.rst0000664000175000017500000000410600000000000022215 0ustar00zuulzuul00000000000000=============== Scope fetcher =============== The fetcher retrieves a list of scopes to rate. These scopes are then passed to the collector, in combination with each metric type. Implementation ============== Fetchers are extremely simple. A custom fetcher must implement the following class: .. autoclass:: cloudkitty.fetcher.BaseFetcher :members: get_tenants The ``get_tenants`` method takes no parameters and returns a list of unique scope_ids, represented as ``str``. The name of the new fetcher must be specified as a class attribute. Options for the new fetcher must be registered under the ``fetcher_`` config section. A new scope fetcher must be implemented in a new module, in ``cloudkitty.fetcher..py``. Its class must be called ``Fetcher``. An entrypoint must be registered for new fetchers. This is done in the ``setup.cfg`` file, located at the root of the repository: .. code-block:: ini cloudkitty.fetchers = keystone = cloudkitty.fetcher.keystone:KeystoneFetcher source = cloudkitty.fetcher.source:SourceFetcher # [...] custom = cloudkitty.fetcher.custom:CustomFetcher Example ======= The simplest scope fetcher is the ``SourceFetcher``. It simply returns a list of scopes read from the configuration file: .. code-block:: python # In cloudkitty/fetcher/source.py from oslo_config import cfg from cloudkitty import fetcher FETCHER_SOURCE_OPTS = 'fetcher_source' fetcher_source_opts = [ cfg.ListOpt( 'sources', default=list(), help='list of source identifiers', ), ] # Registering the 'sources' option in the 'fetcher_source' section cfg.CONF.register_opts(fetcher_source_opts, FETCHER_SOURCE_OPTS) CONF = cfg.CONF class SourceFetcher(fetcher.BaseFetcher): # Defining the name of the fetcher name = 'source' # Returning the list of scopes read from the configuration file def get_tenants(self): return CONF.fetcher_source.sources More complex examples can be found in the ``cloudkitty/fetcher`` directory. ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/doc/source/developer/index.rst0000664000175000017500000000023400000000000021702 0ustar00zuulzuul00000000000000======================= Developer Documentation ======================= .. toctree:: :glob: roadmap fetcher collector storage api/index ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/doc/source/developer/roadmap.rst0000664000175000017500000003136500000000000022227 0ustar00zuulzuul00000000000000.. raw:: html .. role:: roadmap-not-started .. role:: roadmap-started .. role:: roadmap-review .. role:: roadmap-done .. role:: roadmap-good-first-contribution ========= Roadmap ========= This is the roadmap for planned changes in CloudKitty. Changes are split into: * Continuous * Short-term (planned for the next release) * Mid-term (ideally for the next release, else for release R+2) * Long-term (for changes that will definitely not happen during the next release). .. note:: This document must be kept up-to-date. Any newly planned feature should be added. The statuses of the existing features should be updated regularly. At each release, it is the CloudKitty PTL's responsibility to remove the changes that have been merged during the previous release. How to edit this document ========================= The first two columns should not need to be modified. If there are several assignees to a change, you can either specify each person individually or write the word ``multiple`` in the ``Assignees`` column.
Status columns can be in four states: * :roadmap-not-started:`Not started` * :roadmap-started:`Started` * :roadmap-review:`Review` * :roadmap-done:`Done` See the source file of this document for highlighting syntax (``doc/source/developer/roadmap.rst``). Continuous effort ================= Some points deserve continuous effort. These are not tied to a specific release, but are some of the most important aspects of the project. Some of these can be good first contributions. * **Welcoming and mentoring new contributors.** Reviewers should be especially kind when reviewing a person's first contribution. Don't assume that they know the "developer workflow" document and OpenStack guidelines by heart, and point them to the right resources if needed. * **Improving the documentation.** This includes migrating documentation to the new format (adopt a user-profile and component-based layout), but also adding information you figured out by yourself and couldn't find in the existing documentation (for example: notes for specific configuration options, some examples, additional explanations on some notions that may be difficult to grasp for newcomers...). * **Improving the troubleshooting documentation.** The creation of this documentation is part of the mid-term effort (see below). It will be especially useful for new users. :roadmap-good-first-contribution:`Good first contribution` * **Adding tests.** There are *never* enough tests, so don't be shy and feel free to improve the current unit tests or add some scenarios to the tempest plugin. :roadmap-good-first-contribution:`Good first contribution` Short-term effort ================= .. list-table:: :header-rows: 1 * - Planned Change - Assignees - Spec status - Implementation status - Short summary * - Adding the v2 API - peschk_l - :roadmap-done:`Done` - :roadmap-done:`Done` - The new API of CloudKitty, to which all new endpoints will be added. * - Support local timezones - peschk_l - :roadmap-done:`Done` - :roadmap-done:`Done` - Currently, CloudKitty converts all dates to UTC and is not timezone-aware. This must be changed in order to get a better user experience. * - Add a Prometheus scope fetcher - jferrieu - :roadmap-done:`Done` - :roadmap-done:`Done` - A scope fetcher that will work in a similar way to the Gnocchi fetcher (retrieving all values for a given metadata field on a set of metrics). * - Add support for the v2 API to the client - peschk_l - :roadmap-done:`Done` - :roadmap-done:`Done` - Add the necessary base to the client to start supporting v2 API endpoints. * - Add a v2 API endpoint allowing to reset the state of a scope - jferrieu - :roadmap-done:`Done` - :roadmap-done:`Done` - This will allow to delete all the data for a specific scope after a given date, and reset the state of this scope to that date. * - Add a V2 API endpoint allowing to retrieve rating information - Multiple - :roadmap-done:`Done` - :roadmap-done:`Done` - This will be an improved version of the ``/summary`` endpoint available in the v1 API. It will allow grouping of data on any groupby attribute. * - Add a v2 API endpoint allowing to generate reports - jferrieu - :roadmap-started:`Started` - :roadmap-not-started:`Not started` - This will be a replacement for ``cloudkitty-writer``. Mid-term effort =============== .. 
list-table:: :header-rows: 1 * - Planned Change - Assignees - Spec status - Implementation status - Short summary * - Creating a new rating module - Multiple - :roadmap-started:`Started` - :roadmap-not-started:`Not started` - This module will add support for validity periods on rating rules, rulesets and will allow rule creation in a declarative way. * - Add a second v2 storage backend - peschk_l - :roadmap-review:`Review:` https://review.opendev.org/#/c/673461/ - :roadmap-not-started:`Not started` - An alternative to InfluxDB, with support for clustering. For now, Elasticsearch has been retained. * - Add a troubleshooting documentation - Multiple - :roadmap-not-started:`Not started` - :roadmap-not-started:`Not started` - A documentation providing responses, checklists and tutorials to the most frequently asked questions on the ``#cloudkitty`` IRC channel. Long-term effort ================ .. list-table:: :header-rows: 1 * - Planned Change - Assignees - Spec status - Implementation status - Short summary * - Complete migration of the v1 API into v2 - Multiple - :roadmap-started:`Started` - :roadmap-not-started:`Not started` - Making every (if not deprecated) endpoint of the v1 API available in the v2 API. * - Adding authentication middlewares to the API in case it is used without keystone. - Undefined - :roadmap-not-started:`Not started` - :roadmap-not-started:`Not started` - This would allow support for RBAC outside of an openstack context. API Migration status ==================== .. note:: v1 API endpoints which are not listed below will not be migrated. .. list-table:: :header-rows: 1 * - v1 endpoint - Spec - Endpoint - Client - Tempest tests * - ``GET /v1/info/config`` - :roadmap-not-started:`Not started` - :roadmap-not-started:`Not started` - :roadmap-not-started:`Not started` - :roadmap-not-started:`Not started` * - ``GET /v1/info/metric`` - :roadmap-not-started:`Not started` - :roadmap-not-started:`Not started` - :roadmap-not-started:`Not started` - :roadmap-not-started:`Not started` * - ``GET /v1/rating/modules`` - :roadmap-not-started:`Not started` - :roadmap-not-started:`Not started` - :roadmap-not-started:`Not started` - :roadmap-not-started:`Not started` * - ``PUT /v1/rating/modules`` - :roadmap-not-started:`Not started` - :roadmap-not-started:`Not started` - :roadmap-not-started:`Not started` - :roadmap-not-started:`Not started` * - ``POST /v1/rating/quote`` - :roadmap-not-started:`Not started` - :roadmap-not-started:`Not started` - :roadmap-not-started:`Not started` - :roadmap-not-started:`Not started` * - ``GET /v1/report/summary`` - :roadmap-done:`Done` - :roadmap-done:`Done` - :roadmap-done:`Done` - :roadmap-not-started:`Not started` * - ``GET /v1/storage/dataframes`` - :roadmap-done:`Done` - :roadmap-done:`Done` - :roadmap-review:`Review: https://review.opendev.org/#/c/681660/` - :roadmap-not-started:`Not started` * - ``GET /v1/rating/module_config/pyscripts/scripts`` - :roadmap-not-started:`Not started` - :roadmap-not-started:`Not started` - :roadmap-not-started:`Not started` - :roadmap-not-started:`Not started` * - ``POST /v1/rating/module_config/pyscripts/scripts`` - :roadmap-not-started:`Not started` - :roadmap-not-started:`Not started` - :roadmap-not-started:`Not started` - :roadmap-not-started:`Not started` * - ``PUT /v1/rating/module_config/pyscripts/scripts`` - :roadmap-not-started:`Not started` - :roadmap-not-started:`Not started` - :roadmap-not-started:`Not started` - :roadmap-not-started:`Not started` * - ``DELETE /v1/rating/module_config/pyscripts/scripts`` - 
:roadmap-not-started:`Not started` - :roadmap-not-started:`Not started` - :roadmap-not-started:`Not started` - :roadmap-not-started:`Not started` * - ``GET /v1/rating/module_config/hashmap/types`` - :roadmap-not-started:`Not started` - :roadmap-not-started:`Not started` - :roadmap-not-started:`Not started` - :roadmap-not-started:`Not started` * - ``GET /v1/rating/module_config/hashmap/services`` - :roadmap-not-started:`Not started` - :roadmap-not-started:`Not started` - :roadmap-not-started:`Not started` - :roadmap-not-started:`Not started` * - ``POST /v1/rating/module_config/hashmap/services`` - :roadmap-not-started:`Not started` - :roadmap-not-started:`Not started` - :roadmap-not-started:`Not started` - :roadmap-not-started:`Not started` * - ``DELETE /v1/rating/module_config/hashmap/services`` - :roadmap-not-started:`Not started` - :roadmap-not-started:`Not started` - :roadmap-not-started:`Not started` - :roadmap-not-started:`Not started` * - ``GET /v1/rating/module_config/hashmap/fields`` - :roadmap-not-started:`Not started` - :roadmap-not-started:`Not started` - :roadmap-not-started:`Not started` - :roadmap-not-started:`Not started` * - ``POST /v1/rating/module_config/hashmap/fields`` - :roadmap-not-started:`Not started` - :roadmap-not-started:`Not started` - :roadmap-not-started:`Not started` - :roadmap-not-started:`Not started` * - ``DELETE /v1/rating/module_config/hashmap/fields`` - :roadmap-not-started:`Not started` - :roadmap-not-started:`Not started` - :roadmap-not-started:`Not started` - :roadmap-not-started:`Not started` * - ``GET /v1/rating/module_config/hashmap/mappings`` - :roadmap-not-started:`Not started` - :roadmap-not-started:`Not started` - :roadmap-not-started:`Not started` - :roadmap-not-started:`Not started` * - ``POST /v1/rating/module_config/hashmap/mappings`` - :roadmap-not-started:`Not started` - :roadmap-not-started:`Not started` - :roadmap-not-started:`Not started` - :roadmap-not-started:`Not started` * - ``PUT /v1/rating/module_config/hashmap/mappings`` - :roadmap-not-started:`Not started` - :roadmap-not-started:`Not started` - :roadmap-not-started:`Not started` - :roadmap-not-started:`Not started` * - ``DELETE /v1/rating/module_config/hashmap/mappings`` - :roadmap-not-started:`Not started` - :roadmap-not-started:`Not started` - :roadmap-not-started:`Not started` - :roadmap-not-started:`Not started` * - ``GET /v1/rating/module_config/hashmap/mappings/group`` - :roadmap-not-started:`Not started` - :roadmap-not-started:`Not started` - :roadmap-not-started:`Not started` - :roadmap-not-started:`Not started` * - ``GET /v1/rating/module_config/hashmap/groups`` - :roadmap-not-started:`Not started` - :roadmap-not-started:`Not started` - :roadmap-not-started:`Not started` - :roadmap-not-started:`Not started` * - ``POST /v1/rating/module_config/hashmap/groups`` - :roadmap-not-started:`Not started` - :roadmap-not-started:`Not started` - :roadmap-not-started:`Not started` - :roadmap-not-started:`Not started` * - ``DELETE /v1/rating/module_config/hashmap/groups`` - :roadmap-not-started:`Not started` - :roadmap-not-started:`Not started` - :roadmap-not-started:`Not started` - :roadmap-not-started:`Not started` * - ``GET /v1/rating/module_config/hashmap/groups/mappings`` - :roadmap-not-started:`Not started` - :roadmap-not-started:`Not started` - :roadmap-not-started:`Not started` - :roadmap-not-started:`Not started` * - ``GET /v1/rating/module_config/hashmap/groups/thresholds`` - :roadmap-not-started:`Not started` - :roadmap-not-started:`Not started` - 
:roadmap-not-started:`Not started` - :roadmap-not-started:`Not started` ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/doc/source/developer/storage.rst0000664000175000017500000000077600000000000022252 0ustar00zuulzuul00000000000000==================== Storage backend (v2) ==================== .. warning:: This backend is considered unstable and should be used for upstream development only. In order to implement a storage backend for cloudkitty, you'll have to implement the following abstract class: .. autoclass:: cloudkitty.storage.v2.BaseStorage :members: You'll then need to register an entrypoint corresponding to your storage backend in the ``cloudkitty.storage.v2.backends`` section of the ``setup.cfg`` file.
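As a starting point, here is a minimal, hypothetical skeleton of such a backend. The method names and signatures below are assumptions based on the ``BaseStorage`` docstrings and may differ between releases, so check ``cloudkitty/storage/v2/__init__.py`` for the authoritative prototypes; the ``HypotheticalStorage`` name and module path are purely illustrative:

.. code-block:: python

    # In a hypothetical cloudkitty/storage/v2/hypothetical.py module
    from cloudkitty.storage import v2 as v2_storage


    class HypotheticalStorage(v2_storage.BaseStorage):
        """Illustrative v2 storage backend skeleton."""

        def init(self):
            # Called once at startup: create schemas, indexes, buckets...
            pass

        def push(self, dataframes, scope_id=None):
            # Store a list of cloudkitty.dataframe.DataFrame objects.
            pass

        def retrieve(self, begin=None, end=None, filters=None,
                     metric_types=None, offset=0, limit=1000, paginate=True):
            # Return the stored dataframes matching the given criteria
            # (see the BaseStorage docstrings for the exact return format).
            return {'total': 0, 'dataframes': []}

        def delete(self, begin=None, end=None, filters=None):
            # Delete the dataframes matching the given period and filters.
            pass

        def total(self, groupby=None, begin=None, end=None,
                  metric_types=None, filters=None,
                  offset=0, limit=1000, paginate=True):
            # Return rating totals, optionally grouped by the given keys
            # (see the BaseStorage docstrings for the exact return format).
            return {'total': 0, 'results': []}

The matching entrypoint in ``setup.cfg`` would then be something like ``hypothetical = cloudkitty.storage.v2.hypothetical:HypotheticalStorage``, registered under the ``cloudkitty.storage.v2.backends`` section in the same way as the fetcher entrypoints shown earlier.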
9*{ddj"ltcN-g[}E!V6qKxR.pEA@xs^׽t}>(2/l=MGUoU6gSWhc*)yHi $a˼DvrC2UjXw#}e͌.LF :׭$8ϋ$m^ 8['L6bM+cԴT!kj3-om2$? Xzp_y^?&GV(bAXZK< q#Q,Q)Rf0َwE'{b)?v߻.B>*1Xz |*yrHQǏXSeXLgֿ@h2׺u{{^׺u{{^׺u:;wUŚmóU7?뽳GNo{A>'09URdh咢8]^;n`ow9V+t&K=9'NwK.AwۀYcBVI$d(uQP+RY~[>Q jV]GoZmZnmH%&|ᦐqwn?.,?5LO]o3ysfw}/F鸊77xqs=aL:}/sdanýfbS6Oqnvo1Z:D 1D$iePdyn[;cu9gyCgVn֑mC:"Z,ԫ'=${^=)CSz:SEme3')}&jz ^{7۫0JKMdI$@?`rϴ\ V|-ݝ̲}'[Kw>ݱ ?xȸ_+Enoy  ϯ={ߺ_FGg?, b͊2}fSW2`rO]k_?=mOHu{{^׺u{{^׺u{{^׺u{{^׺u{{^׺u{{^׺u{{^׺u{{^׺u{{^׺u{{^׺u{{^޳??Z, poo7hoo1 W7w?o?ۇ@?/[=E>Zg j>o[˛%kM|/tu䒯ꩢq^>H>GǕXo֋+{{^]c$6+3AG8䮤#UKau}=vNyMs+c oCSΝczog=qYTk@=ۂ?4O`_u_O&N?Xlu?o}6?;~H}}{ju}ü3TxL\Q-K5TY U*Bm`H06x{lM-bpH#=yw}e5yAiXDq+9i_;x5^e/ ey8Mo<N>N8xlv-%?ȧ/|'¹^~[MܢCsP7+_I[5>K4• @IeYyiwYvYe$9.I$I>%UE |Ls3\;Iq#fbYԳ1$I$ԜMu~{ߺGa,'^{{+VKY{&UX3h1U /uha^å#;|5wnon1mʕv  ]}x[9Av| MC ;>-UO];$1৭ܻӱ^mA+@5{g~q?"3g>an0v_VbaX'yT}=6o_~}) ET ~FOC0i6XrS7OMϭ}l/7? X9xwͭ%0^tܝ/ zh}l_MEcr1}dz`h{6OdG]ꦜR?mWȞ/(WiS[㸩)ôxת7Ep Fz+ۑQE>68zuo?#L_"?7<Fvb5fU'pZw i.EJx_a_"XI!7h)Q)#b2:0 {o{{^׺59װzͩq9fB= 9c _ M2$Im4`ҺSreg\ZkAh(3KF:uy>CJ/~8]6E471 KV^ݳ}XjBf8%.D T/N`9{u럦ܣ*([ ym=|v߼~IHX ,Y2|9zo䍐/ڀ-;;bmǴ+G?2==}< R?Z>뻤qGY:ݻUN6.79 ٺ[*ˠs޺[2>m¶ss+<XwQ'Xn_E{ߺ^׽u~{ߺ^׽u~{ߺ^׽u~{ߺ^׽u~{ߺ^׽u~{ߺ^׽u~{ߺ^׽u~{ߺ^l1/dw/}n2(**zcQ;?~IbۻV**39))c(hczv(^cE†~mQMIԽ+2\LShӮ7xo))UOjQ<Eky"zvݙػĖfbIfbnIl>׺u{{^׺u{{^= ڝS/RW{N'7-|[o.޸J:2ݝ9*Vʧ2H-|d(㐽,U,U1,€SEmLuu״-~oǗNڊPMJVXj肆n][U>]0pD9bD3'Y>"i՞{szu{{^׺u{{^׺"*_Ե>ߙö rB5?fl%A%tkFI$uͿH4Ù?<'\C{la2hrHI3y-QuG'~o-{u7|jqS:>Cgd qHZ[W`@Lh‡"H*_{^Ef9=ŵrL jM,Pdx45Ti!^)R=KV#YmA ۛZL^SS[#0 [6S&VX#0AXF|y䎟I-gF׶o~[Stl]ɜٻngv:&۟_X >ffJz\/#EP%xHiHu{uUlWSQTDL(3%Tp{׺ut|.Y%|p|_vޡӿ/j~">]!GoE}{ߺ^׽u~{ߺ^׽u~{ߺ^׽u~{ߺ^׽u~{ߺ^׽u~{ߺ^׽u~{ߺ^׽u~/:Cڳy6Zm+1*bo]ᑖdK^F,f,ڏ5EzNstklaۏq 8̾o7TGb1Xj+Y,\C,Tߺ dqo ;^ma3o"J}u..%U;yC۽}$ˌ|2luR4j;brt[нduG?+~*):sK8tr,L;S(AU 8kUUUR/<1¨hqcqtIEO -,H(U@@w鮥{^׺ue 0"x ~;f?/Wi ՐIݽeI81l5s+pNǺV=]d>GZ0oI_N,No/]ԛڞ)awOXgaɯn"9YsԓKJ(LK"aEGF.>[{7'Wv~̬==5WXL9Qd(e d6RǣeqMGAun{{^׺u{:Pv \~zeHUr5s8&LzdQP#K<Ƭ`4z2c@:H'OwD/z }F!檡wҵ&?YJuzG)ɺ+!Zu|PtU=Kڸ?o[{ߺ^׽u~{ߺ^׽u~{ߺ^׽u~{ߺ^׽u~S7XJ?ҫfhhiG=/QhK[5 ƓG9xgjHA޽(>̫ߑ;5?$EF>Zݻ('6<[`n6?XUͪ z  hIWRtQ}ק:u{{^jH6g쿙'tb]ś_B4*3P}"?z{{^׺u{{^׺u{{^׺u{{^׺u{{^׺u{{^׺u{{^׺u}/f?;p*Yk#ۻqy8ɲz" 2ѝtS)p>:Y竞ZXe:zzzxi'8aդYd` b@׺[ ou> ECjQfgv <;[9F=tt:F/ H ͐_ !MOfj:G׽u~{ߺ^׽u~{ߺ^B8-ԟ+t}K`br5 OM崽Xػƪ'_T%f|l T2;R$'h[ٝm:s7TvfmӜ{lecX{nd'6 =e36xK:3#+4awn:{vf V6hS>[rnq,cD5Num~鎽{{^׺u{{^׺u{{^׺u{{^׺u{{^0t>_{n ^"hT iۧge&lJ"PK'N*dȴnV/__*?nt'ybٞ\U5BlدU->;v.arSŐSV]C,΍Pu*j_t^׽u%3/ xm7SHAOˤcu~{ߺ^׽u~{ߺ^׽u~{ߺ^׽u~{ߺ^׽u~{ߺ^׽u~{ߺ^׽u~{ߺ^9j*% i]cbRK,B$qĐ>׺|UȬL?7Ċ!z gac`6TT0zuS=ƺ'{WdLOOSwO`abOüFhkiN33.WF![n$7 ``Ѣǵx^JR=#ϯ׵u~{ߺ^׽u~{ߺ^׽u~{ߺZº~c::g~GC(撒!fd:qȢ^P ׍ZG>\qѝS2>δ{ߺ^ҏg lM6lgpn<|.CN=fG'c@Yp$qD*xBd}p|>>TcPk!#V6"kMq9??{{k?K~k}PdWq޳zWxj1{c%M2 Adxj~E#:&OJL[yϻ#![ף;Ld7w᧧g- e3}W0*2A`Nnd<(:XP_.?(2# 1_" t?Ʃ1Ӏz~jhuWKgY?oWݑ~RWQEvs`O:DRz0vLӨ2%,x`0C#r{H??tK>?;{/#z/6nmh"0ˑVd #%I>=! 
"=|ont^׽u~{ߺ^׽u~{ߺU[ÿ@ Goҿ}a**bPel\%``U(ELAڑ*=|xPI/;}OZwߙ,nϏM1=S F52dΟWrO RΕ\ 䱌|d:Rɚ>3>b^9mOx?Fϣ?XӭKx "=~}'FSumfc+gJ-t+~!as Cme jzgy’> |`~W|N<ܰcvWk{tegdf):L,PR(Us"t|i$y7[ {WG;[9;4Gipۃ;Z*ga})NT=~xbˬY +1AISOI#-Vc?w홶guvۓRLU#8+7& mb()V쵔xv=պuf8:xO}ᴓo'?u߷~}%~:'׺u{{^׺u{{^׺u{{^׺u{{^׺u{{^׺u{{^׺u{yUu_C+A[>,|TxjR<Õh#\ =XuިjM&[Ofm6Sd6ft}u=5e}{ߺ^׽u~{ߺ^׽u~{ߺ^ —VRS&Fv&9+ r$*lJm ':]^׽u'%_SC@ϼMYX;QRx-_m*ɖvc2:KC:]^ھ?׾~%[{VGhwtmP"4Oz+{^ CTC`| ]}0Q alkEN]]IUUT; :y~!]K6= Ee ez$($mTISX䩨Z?!YZYw'[RTN2RŖi#hiUMKz[C?V/@:W>`'NnL>g;C.t{lۻ[{<$UE/#]CIe{M|n}rNM{!GC6Nqu Lf:*WQ2)eQ:v OZٷ+i21׭:+lCG QpT~C7?u-B@B9BXɂl=?{f_n87\/X|v_gn&YL|[Wb_`xIbZw$zde PzHv)‰7v?g'O55"˝ut*#\OG< 9ڜ#ҋQY^]|=u׽tz­>fx㸓)}d `Q+-45`i FZfkp7,(˟{6wT==yڻveݵŎ퍳14hK0GjX0[$LPp舒ƧI/-{p}ο7?%C;e^`.x9\x'EU3;J'6xT+u_^~e| {N<=DTtӮ;nמht}{3QK\+"RaK[rKBiխnKoo^]ō뎡56~}7G _޴v#DY&@.y~]2/ &? nYbi3}`2;spcESK- tKMSzYd6V,TYXjSQm]o|vO]AnGSnV$.`jl8:<c#HWCS4/tڒP:(u*_cω_#oߍ#g`~oZls-UFWwԮ#|ܔx@>-X2}ܡF#ݺ^׽u~{ߺEWʍg||1CY텒<jwKO[DT;DcK. =|s;_ V~ٽsdgCs:!L4 J+$U AunX^莴}ہVظbh"h֫!QR&;VԴTsH {OUfTuzkb=뾞iB[]:&MkER~6L*>yisyiS:??_4[OWEQAҀ wحlQM SWISL/8FCF9Pע9׽u00ρ8c7޿XlmA,UuVɻVg,ں+j^P=缰['q]PwF4,=4f GI!7C=[Kǟ`IGF'LϖMNUgv'7iSVojVVE)?$dm,IBݻwm-V;nݯby񔹌&VȦ]{5D$hx~ֺu{{^׺u{{^׺u{{^׺u{{^׺u{{^׺u{{^׺u{ն[kMpy`@VsQArZJ_uhk緺??h7GT} FдY폹2[g/H4G(hx C(az}n}oڝ;nGJ{gQu;K~K^i IZ C 䪉:߫ދ:!;ݙJ>Pޞ/meL?+_G{ߺ_PK+wf{:؃%;W!n_ޏ:u?os_5{?j{2!>.o\zc0`?->;|'m؛^:yGc.ɥm-Khwndj!'Ͷ ץ)V2y u]~}?" )Ѻ:7_{g꺗+*`J^̜o2xʍ!6vCpjE5%z,#O] c3LS 8^ `v0Ak{C;ą`> bJz'b=gG/|B?<>c?][>vZ]ϸiYocW`H#J>VIbXy//M8˟.>>uŞ;#cx0o Xx̾^:wE7RKڻHavkcY ~fmzS,Y97Ҙ[9A| פvU|g=Q1YwObeYiwRKPP,mJ n<c:v_ IVT&xެKp_uv:>9g~˾Yؾٛ?qQv֛+$Y,kyX6Ij߸~;$̏dV ˺Ԥ4c7؛^@WomA4meir M4m(z}o?#Ϳ?}h(6AC۸VvC88,u-6R.Ffm2WKuUsjF^} c3۝1s'K=f;{AEb+R ^'Yn8iPEԲYC -ë#lpG_ߜW_}cx՜[7~ޢ͘>˵+i7iײsS~FrXQ.cq"c_#ߑ؟;fJm$)Fvm++㶵VEa T==GH=} 諭x?L )U. G*ڎ +:( F˥,_25LjnS-v.Tu*D? c|q׽u{#;5[&]6G°I0hv^ LG Oʟf:ytOx&$pl׷K׽u~{ߺ^׽u~{ߺ^׽u~{ߺ^׽u~{ߺ^׽u~{ߺ^׽u~{ߺ^׽u~{ߺ_2TGJߍg/ݘ\K ib*ǎ8ۘ$˒'7"@>iz5~4|u][#yzQUXqOA֫UCS4,teb0:u*_]Ӧ]/rabOYNǡ}ɱh# jqK4qTHDr9F=/o]|߹~?+};p3UE_v~9-KCfWVhAAA4€k3u{d/2uL[^#nJCمCs Iznl[}7G{췣ν{O&#o\w^۹|7ve^]3g$z|QVB^@ `yŏ|M_ ~Fwlu^$e1K9y hm\t@ss)(i4GTu :a`'~|폔]~/YUKW؛ b^I"ƭITG,_{Zպ j$kv,z;1]tΜ|bv|k;1e2uQQci\ush?,ߺ?>o ٽIq=4e$ 5<;! iװ |;w_'L}c f|q99͸s{/>;h0ႚ\;( &s@ZϷI׽u~{ߺ^׽uF_v螢>{'!$vR͋ϭ6Vk\E/B4r73HcP@E+3ƀm?o{U'Wߡտ +ݿ{`_ ^Rz~}{bonWGC?.Й ݟz'U8oM鯏7aolMNfC+e2ci'+4DKg @lH˂j:Oھu{{^׺uoO#S_"anl~=.#(N=<ի]%B}V9V7EO,#j]xKJW߻{sCcK ugkcyi)VlOqPux1vz[HmtA‡k{ qWˣO X_?~>5N]?םS{n]ݼݩZwW]n=m+6x䪳ÍOQ77Y*AuKhHV:{[_Lۏoawvm=Ɏmφm"+|.Sݽ:BK]EwPhwJj*Lu XZc_ -H|m񝇇K64q|uUK2\xNV!ZoCW׽u~{ߺ^׽u~{ߺ^׽u~{ߺ^׽u~{ߺ^׽u~{ߺ^׽u~{ߺ^׽u~{ߺU!?*KKCxs5oOJtPͳr3.pϊ"KK[ Eԩ%<~_MG8=|m1O4Hஆ(.TY:$>V=K372Z,5bCPQA1/۴3DI 4I [~ffDADtV׺]Pwc__M:_61طͽ:᳔uST㖧Uи 1oTG[7{Q>O{#?J=٘'j>Ѿ)'mvMji(FӖ:tG-F'%&.<HPGTRTk=1SU4=U\Q# ͯ0j?&CvtǷI6/{n? 
Mnks5M=,s̀+m$tV׫~ZA8|&ؕK_w i<,6c%H1%d!))FAb#T|fug^׽u~{ߺ^׽u~{ߺ^׽tJ?ݑ~!wR%^0cvvP9cZd5M)` C_-ݕz5{_zwn rSܛc)S3-"}hKd`Ia=H/z}{ߺ[KRo?O8M\3M71ѣɋS:>Fʱcu+$meo]׽u~{ߺ^׽u~{ߺ^׽u~|U%V5ۡ97\g16.*LӄaC&r5NB)"xkޜ|_ks|˭ Ax #G]׿u{{^׺1~\|cvݽ=ck2nz9SJvj)nINMH:+ +2ꎉ ⣫f_S_ @tBb6U?۟Q/3pz3c~_ߺo+mcaGjZ5r} )SQV!$5!vsV58S_{^׺u{{^gIVç sj5h'( ÔUc%+m*EHM!u{{^׺u{{^1龃M{϶:ۦ:d)q-{S|mkfŌⱍZy 0 Lm*lmBAY~]t{ QAҿ{am2ټs}>w<ܛpi0 'ӹ,k-_2SbXM-^C#]S" #,߄?o^1JJ>~׺¹t=d{iG[x lxf43?ۉ<>EmB,: @:w(Ҡ_=ŏ ߰ _ۼqwӟ1~/v%Tםwe{soc ("ye8ƀzюEe`>Ѳn׽u~{ߺ^׽u~h /pSn߀7ϐ[)P m|Sn ,VIL_h~ތ}Hi:0knlsym|˴؍ͷrԤ-N/;Sq9v 9PGg3|`Pi[s+zb2Qӿq5oPdXUOC:JMNI"(oQ*|Cu^{{^׺u{{^׺u{{^׺u{{^׺u{{^׺u{{^׺u{{^׺uQ_𡿎X/{h`ivt4IB[j!RJФδqīJB(mu!XF?uIG^׽u~{ߺ^׽u~{ߺ^׽u~{ߺ[~:"++=?|}$wVҚm7Aq{_U~ΐfo_Bkz+׺u{{^׺uAuvߟ0k]SCН;’'4WpN=[ ^ya!wW Q jذ>c=Yљ20"~});Wntڲy:ǹqPn̎Kަl9wޢ*R(T!TB;ocd7}55{kUk_7O"Hr8r -LQí,j+QtHq#1-Zn{Kkb'Qu p5y >&R(9K+WT*I_["8cD9B>};`ۑ?׺+'Roƾ靉<{+!kab͍ꢊNj,㪪(4TH*ՋD*ӫ{=ۋ_H݄VA,UˢN{{^׺u{{^LY%$/ kG{Z0t<57mM>zG [}`@h3׺MN~O-S3XG<iY[FfcUxͺ)bSuH(TiUTE7]^L?Ѿt{^׺u{{^׺u{{^׺u{{^׺u{{^׺u{{^׺u{{^׺u{c?LOzj~rB+%=E Egeu+LuluiMGt$NzuC I<Ȑ (K40H4 I# Iëi]T%Թ8r;#nnH)*INr~7K U$l SU5PLv]Zi_RrHLdvGן{{#m6y?Ÿ̛}nѵ?ߺo~׿m'{Sϭ6^}M:q!Hb,}rv*-YY^o{{#m&y= ?pTD7s!e5{/U|jRMyZxw]E:iGhՑt?&U__|14sͳM0nU~U|VB=EضHfr!t[?2J–хʧ{u}oS_*l&? m M9*>ԫ5rtDl-"B`l,~E#u7+u)]lGVٰ^!fqf, 5Uxt =g3D aqnP~Qtks޸Ψh٪hEtWcu~%gam݂۽be52%I<Մ[(*-$%ꆾQ׽u~{ߺ^׽u~&~>{hR=^=օQq+J1FPc7rhd`Du9M2<VF(׈=|z>dJ#7GtbdEOhpkԳTmmm;GWL2B]%D3Fln uz,u{{^׺u{{^׺u呛Лrmٰu)J}|B9sRVvj*иkbC+4j=+?mm?[`{S.{{^׺u{{^BXzn292d_qHY:<돿ug)3xOI_ڃxv?Z9-˲W1Ȏ@DᳱA@P鿳 K T`coR[t8ȚϯQ^׽u~{ߺ^׽u~|F˓g2x(w;4ZXkbs`˸׏M?ԁ1楇?#]ѷBH=a]k}~Uh#mL&fc#WUmݱKš;/wR^xL"ǧ37=ERbY ;#PÇDR!7ѧn׽u~{ߺ^׽u~T/mOԺ![:qĮíKT3{c㒖mיZjc*T]hFzj! O;Bʞ#y_ɎdQ^SMz䊧gojXuJg^6_h6CF,r?D[:s{^Mv^]х.{,f[?ܻ= 6 K[ˤ͸8z e :a}4KshEn{ #?-/6V 5-U#x4L/oSAFMH꘾p-{I~L&_nbjdݡJ{T;.ߴgewM f:SDZl,G64{ˈV湠FJJIcv-Q<$ӝ{ߺO;wqgͩѶ27!U`sjr8)C,ܦ29XeEu`{=hE{V-ݧ?|L5%=$sӳ;8"&vNlɪj- ]>24=$k($ui1 !,Դ!6̅t%fw3mvSLMAA{{p]:e_ϯW|?)v\ d5oMeҪ{1dgH1V,ѷ=j>Y*+i*ihki᪣:ZZZzjYឞx2:H>鎤{^׺u{{^׺u{{^׺u{{^׺u{{^׺u{{^׺u{{^׺Y‡~Kwܞ/RdӪŋNݝ*SS3MOK KVӦsk/0zOl;zp~.w=ƬK-t|KDܘZ}b 2( CJĦzfaO#y-=5iv6ڢPG-^&fSLqvjZwbFʓIJC5h$#= 鞽{{^׺u{{^NRVI﯏=3%&_O~e:sɛGejj1ݑ)ko{OcR !WS}ZOd zu 9 ̮cufOdR#Wջ2*,&(w2y8b5_ '0xU?_PMa=& 0ɵwVR0(e_G2I+A:( Cǥ/u{{^׺u{{^9v jsԭCۻܛs5D`4̍+4N T*o,g]fdY:s{޴Ƨٛq`p= ɂ#u:*[WJ:RvI<7Wx>5r9gdM܂@ruzo'>`~][τ>c%|e=6jG365JiJ帢FJ۪Gѧn{vm6[hoR=sk.7qm a3՘܍#4NR<:$=SWsvGt}sI%{{qE״15CǶ.[>yO 1e:8=Ok#vKidj=cun_ @{TtTem-AWZlJ GPU+kO n* yOѪn׽u~{ߺ^׽u~chy icRK,U#4RYs{w(c[g+Pkײ3ݓcmv];]Q 㨻AT!i(*P.z5Q?/=jY7Kzu{{^׺u{oҾw| Q&{NOW{o^#K䪊8mtJI[$J0 .:|'9~1_>K]_]Q_RdW5.>Mt!)XfjEUZZ: irA-.cg͵x˶\ p`2TY&k] TP9lt4}eWom#Tgz m ~Jٙ%ۦX-V}>w8aj d9O Sݷ 2)QY_Irѣm.(z0:u{{^׺u{{^׺u{{^׺u{{^׺u{{^׺;[ˣ:svqoͭ֝qe76Y<H"kdfX\3Eݬ ?/SI{{^׺u{iu޵sfn]! !n=f LT?$UabIdC|,9 *|J=|{7IbGZoMDzsjV%m&d q=< ԕ`HH)6ө:VxdYAAǺt^׽tk>!;Skk`cJ4{?"uɑ%T%+Hg *]Nzƒ .*:t©~._64_{vq o{@ԙyڻpSTLĘM./VGp]d;g{mFٛ~, sm??Xꁪ FgQWQL9Iaa'ڞ9:FAOֺu{{^׺u{{^׺u{{^׺u{{^׺UUMCMQ[[QUUM=5-5}4̋{!lVBzӻEjvQl*B喞fIboFFպ2A_>};׽u~{ߺ^׽u~}'o/;c͠ڛ#eo\]>{h)6+#n]MSʭOibA$+tqRz+YHߙ__Eo{Xd6ewau%mqי|HUQF9m){*K_~u. puN)-_AS0;PQg%P$Kn) JI)^'N<=z_r'}{ߺFk?=_9o핿6ܢm 4Mٛ抗r,ӠI9xjaxu/$k"qʇtK{,SlPuyllTn4$rO$CN* =M B?}={ߺ^׽u~{ߺ^׽u~{ߺ^׽u~{ߺ^׽u~{ߺ^׽u~{ߺ^V>tcK}{vZ*4TvVum =O\h]cDzyuA4uO:|~Qm? 
;OwruVh%8*{<%fݘl:rfzdtp[c >_~]}V69-dz{'/Cd)oM%B+0 Օ:GNQЏOS{^׺u{{^MQwyvGpl[l%&iT=qevN@UvLj#P JuJoy _~qq0811ϫ'5`~G{ ((wcCrڊi|bVӟHU,mo)VP`H A #۝5~׺u{{^׺u{{^׺u{{^e_>\"T^q#l=D@ȕqv@4Lk(rO48'¤w˟WKmh{wGǍڹSRvGyTP̷0newVVm#ZX֧̆?5&ClXcG>*hVO-=5fKn߲ F^+'-3`3{kRlq5DV{k!C$ULqٙ}푗u9>EKz^+ |I!߸\,wJ8SgY+=UnNݣ˅[E } ?.D<_կ{szn1;gx3CUT,VW]WcXȦQ ׸um}E~F-\ ް֡^jj:} mډWR QԤ޽ ;%~;s[7; ۻodwWa38hx*znO[<F*2{Gї4^'ݿ>Do=l^U_m;mkmǰ72GKmn*juWRG ]3UOjzQu'W~3vj-ǵ* punE.4uRCzz]î$NQoۦu{{^׺u{{^׺u{{^׺u{{^׺u{{^׺u{PP?ʿ'LϺOj>ud;d8fw稨B}-&l1ᣈ&RG:[g6?g=~m ҮGff4Ik߭`gk3g1рlTl?z s 臠7 u~{ߺ^׽u~ߗ˟ݝ[OI0Ńͼm1Ι-iP YcFCSL%D 7-- p##қK[#yuV.]3۪{. {u]ˋS1RRUST%㨧%BQԐL9IYc5F }ӧ:u{{^׺u{Tý: (iv#sLvz9S:H ~GMQĠ_?W֖8GWU`v5ޛ!9μU5\ʄ/t\8YBxTաu,"SROL$Vgr)556}{j}?P5.ؑkרa->:*ڨgzcgH{;s9[^&iqsQFaWѫٸ/Ou2 =6ZYgqQZ ݾ/^m'WP/WCEͽc7>Zßz F?(ZUH9rX4pG^9 ??Q~4~}<zd>s,em&^4^4z7')'4Ԕ'˙w TW"} tUGkP"OT{|J^ޛ .Giw|ukKꎳau[GfT!CаPHI#q'Q^嘱,ij1,ĒěI}ӧz{{^׺u{{^UfJWECCEUR$"q}uF[m _ݿOA!sb y2ۏ%KuȨVu~h<),|\6gٔ1k#Ǣ[|Y*>ã;>5v6+v`:T;[x{Y;7tGK"JpdfFu*|j71q2lgvNܛ3tcmmʼnjqyFT($,Y I"qe>1 0{[zGW>'\G1s_]Nmڲ9pRTRm".dcbn-[~}_'o}u~{ߺZ?_Qr٠|ٸcY6.*l~?U>P@ r!.6?O4:ƥk _6<>_]c2ʜ~K_E3VPQԤur"U h:6uj˜v%/ş9<&HkpvGp’6m_fVLTHEJhj*>1u+VVPC+AVR.#a7]^׺u{{^׺u{{^׺u{{^׺u{{^׺u{{^׺<UA55L1TSTE$ƓA<!Xf@VV06<{^a˛>Nrc .kBjh\c6WE&/-&db|z૨u<%AR ozӷs^׽u~{ߺ^׽u~{ߺ^׽u~{ߺ^׽u~{ߺ^׽u~{ߺ^׽u~{ߺ^׽uG%PTwpS0>L+;>^;am̠G64qՐSo!JK?g=}=误{^fu_{,͆oؕRRnM_;v9C&[0SN0)CEU*j7 TѰzZ{Z׺u(L)zيkVi07R;oO=3>jJJ%!W*#<2n>_Ji:0׺n6>a6as|UPf nydtzw#r ل(x[?`o{^׺u{{^׺u{{^׺u{{^׺u{{^׺u{{^׺v$__chﮋݕ6o:SVRGnX*j0V qE[O?.SξCi]>x\3pv۹h)ܛw#SarTꧯdd#s+r卾ܭJ4vLkEAn9y$rplʦHb䗣}s~*P8wiEXneIEz}B6׆Xf`OWZ? ( R-Obc̒mmնw$ }1WnH?Z ]UJ=}٦]2/oاE[j-/%{>l8,k҉Kkzn޸"܉p+KiSS? {s*^½OϬfSxrqVB;GRNe AWogRx<[eq5MBaEoygveUFVȞye9m4WҒ臯{^׺u{{^׺pry>+>bO[3\[ۮH՘(6zn>d?cV?ˡ_߻ dIs#nTO$DPG~Z=$oY}9m<"P]wnևˡm|\G&mmNmCUS%i L{9o)}/w/c1Jrhټ{˲j$wP gu$6-:iOCqROFyAƚհo\|0)5_ڝe#vw%;rjVٽjn~7kAr)~6td]+b2 :g<0$_o97Qyߚ]}t@kBFh^Noݲm{&Q*/5Vƾ15^#vFWiAS5f%*tԸ|Ei*`u $1k -ɲGIiZ|!*i^A[a}ܾ[i8#Zϫ@$R$ChPkؗ׺uߋGiDlLy>p U`sz JwܙՆHhbe{0y(d#9Ò6s{=$U˷>as(a{e7MZW.dҠL:OXuwKYu_Jmn}8-Q۔1= O[]\Jʩdgydwi"XI52P ѼSkr"0Gn;/v.{cLlϼ7o6Sj29^M"-$Sç$dfoX iҷwor[_ 6d,DLW %{0=-Ɏe4}hGڽoo.ә{te3[9*ǎᥥTG=4*D*djxf( Hz}{ߺBUwytVPKW&U_[`T,Vm\*y*cO{ YUgW)u;Ovmo]L`xjj du6q6V{[Vyn$^9'{8_?/u]V`R* 򆞖{!}ϔ(4}_x6vCFF;9>qjd|pn嶒,_Q{]zgTۛ;Gg U@x;rj1yK=-bAR^tKWE#D(xub0:~|/[>adb9wP n10I"KF}bˣHǘ{W61ͺjfrc)`bk=TKXX'%dPcf Gg}2qkO}KM׽u~{ߺ^׽u~{ߺ^׽u~{ߺ^׽u~{ߺ^׽u~{ߺ^׽u~{ߺ_;U~DlsZ>18ڝ?[wehiZ\#S3*EܥXѥ0t~c>dF8M\ƌduRSy)|RZ=,g}F84>/^׽u~힓rYV6JZ [lCFiOiMe>P]^<z3iaH<Xd0O TGF ( 1(H4#4P@a옒MO (8uLm>ziwͯ{>X FcvJT+-%mhK=<tY0Ot/u7MVXy0Xe8 A;+Eyi$B6E!@I`pH zѻg}Oz"?՛sB?oqQ`ܴHaC4 ZXo}nl[*{\oJ;czSm5HO_F%xy#fZ6lyڧ:VhSO:0tR'^QN&}%1ňEIP,7;[ "GC1X38'f2u"]$/6ZQǪaK?{=ט9;=Q.]q%֡\jM$#B/^{{^냢H"8*Xe`A*j0z*PѰ u^fY/fQ#oTe []?+tb,&}b9!c2GuB~g\p`U`=|p~QK1kMԟ 1t7??g3ן|Kb(/_HP}[Tl.K˨|y%oPTn='?O?ꅵU;?gө1|P :ꐑ7Lϝ[7<†(p>P0d8 y80?Oz2C[a>Z!-t ANu~+B@IcJN٘m0 ?K V%}wJ=2@om|qG-NWlLeN[x4[mmXC&- hhb8H"GT*쭝9%E[F@*(~γ{O~{ߺImvnߛ E\]t,[< =I t q9+^qvHVv"8cPdV%wjHډݔQ"`iBa*+*f$F$ vӨ/(lH5b[R|kJV|=߿w/9-;K(䧊-U 9s/=EHW=\2h %fڥ},Zmڹ;jmsnkGy()@}9# }_1RMi_OĪ?nHc (+Ϻsn&5el"4 kXԓs_5nɼY[:cArx%I=m ˛P|݇,6`1 ٛnQmmSܙj3RUK42ʙ7Bگ\FOEX% ~}fϷWofpoD'1?4MWb6Da2}bi#R?`oߴr%WDUF؞IZCө"U;׺u{{^׺C"KR$R̒G"0dtu!р p}uN(oCx䷧PICਨN:\~2Ry_j*˖sUEЪzv^\Z7[c2x7(،&OU_k!ZZ*YVH9#`H [giDnciAX檤/zhsR}1ySψTbZ9aenAjRt9_CW{^׺u{{^׺u{{^׺u{{^׺u{{^׺u{{^׺"̷^˓yVD۝qJ:DS4l.2tJEօzr) RU:gpymÍgW!b2=5~+1' )(ki)Q(A{,ᎏp޺[]kI63.^Y$%{MvUKob ?)̖ݯS_ u2J)5jI<#$rIqEueVcEB?97 4,e[={tU?,WԟH8\F8ldnZdI+`TQ9 Iff3vUe{Sލa~{Eя^֡*~Kwpmk7o{kp2&5{'UG}# KsS{GΣYG{C7r]Ji2x-SO4lUsOQ $J񲲃jw =ke hpuehoj/pe# A_~h| ?r[n~z1 䶱M6zMM0gCEJJ- o9WzmW:d#̡%?]fm|hE^E/ 9#ItģWUT[׺}ۛpm6wlf+yz6)U$BLᣑxe#iollv%n*£E= 9O97y4 UM0ԣk 
uaO&Zl?n9o3f0Qf%IӨ5>0-s˒Tqd9O?z|կcXv}KcTk|K]@DQ_zS1}  BѼԕ}bRE oͶ <_<]C}p߹t6[ʲ$WDqȨ棥'] zu{{^׺u{{^׺u{k\~J2Jz 8G6TJZnOַ7 {HYۂcS>[*r{kЊ=̩ H?$++Rp3o_8z\_ U{rMJ|<#?#, _ߟjA*@=ss#\IOf-}b XFxj>ˋp$[wc9չ`!66>"<ؒfͫgv[aiBEN,}YX>r3tݭq܍Bk4&X1{I(f}G{ߺF>S7kReMY*Y[ %%"\7Zg"evݛl|k7S@ksx.GP9ƾoru> :#3vCt\|sT;"9Wl˘'g8]d";G $ja/3Fͻn8DH|9,rē^SyշmMR" <5bIB;sv'llMfk5'E5 M,AzH HIeDSm{[m/'"($'T6ݺwl{<G1I֦_2>g儩ݹ=ggp]1SUMTrEG}s9Z4E>ro.T]Heh7wby.!GR]WfaW2]ػJQƔ'$ϱ?Bν{u nާ}yzݏume<\]uOion.:b[g.qt?G?ȿݧrW:Sdm.9Q棣oU9EG5/,>/~#6o¢>W *q'~hbK, p4[@z + ~]W_~?#ǵ;!jDwm:KJ[h{jn&S'Sm1-V#LE%zu/֟gZ{goaI'sx)<7R7-< ;vbm]凎fXʸOC0pT&ed4aCԑ$CQj^׽uW|7E2U}[v]\youDˈ-ޚJtzV6+o#G{~6~ C~۳ବٝly*XcܵttY-߸PB-P)hRx#ʌW' qANe{&g&Oڳ[=G_;VnI'F;h\u^3~T|ݟ1:;l,K+w]dwi)nB#8K?S@K6m$+o:Şx}۹9^;=|Jn ~OlAITo&Rl-6LbSiSx1TU*Ojm7ya9r>`~~Klvk= ۚy:G+h2%ϑ4y;r PfhUԺRPЦcܙmf1?cbO\s;rz +Z8C0cEtc{^#j#")gwbUUAffc`$ ։T'޽&3}YN0l-EdHY ze!*# P |nك휮Ru2##W,)?ٲڹPq##W|0xP[쭑zlbvfͯtS-5%4w-#/%EUDdyYVgر r$79^{ Yԟ@ u{on[=jO (PJhA 7ߑVն)zY¢yXɴ6N 4l8*%Pa9IBMW$o_ ?}ق8RAT޹9Dsshm8HITW%>{o'l}o|KPnպ(Αa;++12T6HR I`뛢 /))1T4TQQRC5%%%4k =--<*E{^׺u?fu&~HοF(Ll&/SS :q)$٥5j?R}W|?P=^׽u~{ߺ^׽u~{ߺ^׽u~{ߺ^׽u~{ߺ^׽u~{ߺ^׽u~j >Kujx{o5L|~U<*7+motKU z]br|ӬK-i mtoݍ3r[){TA"AܫԹ#]{^׺uq>^Qǥ' p{xb,z ?Tk ߤ龫{V%k4'QQӕaooGVo~y{V~pT߃0Ą~]"ٰɗ=푍 ^MOcL ٥ZD5;޽Ԧ\wqIwJz2#;I#ŝY,IfQ@:$I2ұ$Ih.II[{[k🷢~η*{ߺ^׽u~{ߺ^׽u~{ߺ^׽u~{ߺ^׽u~{ߺ^׽u~{ߺ^׽uRc_)m5"On٣m7^>ڨ.;fΎEB?lgS_$/e~}S6݋>2SLj2]sε[Vvܠy|`_cCkIΟn{{^׺umΥ<ݗjZnj7$61eV^Z.|ݡ8J O{jlt`{ߺ^׽u~>5 nvUw;![$S͊xije#}VɖfjC FR+{\`VH!(>}T/q'MWRk q۸)Jhe}t4O\x`7OՋ "7Y ˿xn_ 1e?E~S*η0莥3Իߣ{d 6tm|,T5-mLMn'7: Y!99cG 0ՕR0n+>ci}J5hn4PCL= nȩ&v&Td.#GN/#]J:u{$fM_"16geƣz:oDryڸh[QOϢG^׽u~{ߺ^׽u~{ߺ^׽u~{ߺ^׽u~{ߺ^׽u~{ߺ^׽u~R GǪA-6R2Đn.ܸXE>R>]Y 쫡^3%,M!fڟ1+Kk2=Oѹ!~T'o>o=m9GHu{{^׺֗l) vImM*#S}}a~c]Rwm *0?m|?{g/sJ-մR|?_>/C׽u~{ߺ^׽u~` ZjxixUxVpj4 :;HY Ѓ>2O]Q7Nnn덥ػa[;Oh)mٷr9EPsĬ4eH}|^`yۮdbK.jF* ҄6C{3V,o7mUD21w@KXSABHZP?Zu}{ߺJwe&ͩwܕm]goc`uJ3YA y`-n-#5̩ 1/ O$UEH$*H=oo=ol$hA$O'n{#`OQN0٘-$jnuCQUL :l{m= đ,= ךfh⌑Wd0 ]"{(GIKVO MDƌ^彔>׻)[Goym:+F 5 x#˛G(#&R+4?.׹ע>{[v^ҦSfl >`dwH7z>"F+b湪jJJwy9`>7JqgeA}5j^p9GDKhܮqguEҾ j & ]`018\}&+Ghha0xMsssy3\]1gbO͘3 \]^7I-b}K1$gol^׽u~{ߺ^"cOK{ūWݡ}?_Dk>-W m_-?Ro\W r<}jE;:{&ǿǸ'A읕ƾwYҊ[}F![D .?x>BGY^v`0cq]Y<{Oѻ/mIzgRݘ3;Qn FۂxLtyexhf57DiJoy4|ojoLڬHFJT#|a{ p o/3E?;w.}Aнo;4YHUtԆߛ3TjYހJaOϢ=_X׽u~{ߺ^׽u~{ߺUwGF{Ke$*_N3m }}7rDzrY iGRj[k S_)@Uoyv~o>lߌ_vgy_}ژuԱ˒ڝw1&YS͋IKʪ☺QhC+{ []˧{%P#uy(2$ܒʐ&or hpAnP9cQ=1FoV45:?|@ }ia޻yjpmg)V-5]4SBRFB*#)so2r'2Y(^O%,^7_ªD,S][_[=,-O?ՐFA_5 F_NݹnUC{jY*$Qda]"$q}SHc=АPW'nI XC,aj˳zm1+U FZ4?0z&羈/Sڟ. d)I؛Ox'ɤ6r_i7JV#-'Q5ؗMtGYC(?%$ixS u~{ߺ^׽u~{ߺE>֗y|Y '֛^F:u)H{:_y=/UztvFSY+9DtyCN撶9鋪M@efSU4=U\Q#O*_19}YItW䰻tU<{-p>)ay/o33|Dx@Ez^һ0g|:wwVz|[=-Osxc+[CU1 $dVz;ytSxѴ^t{^׺u{sYڟB(wYzDucbd1:fofLTQUJFï_5۟!8ܛG3YA (s " PÁܟlwELSR﮵QI#vvꂦoNF2VUD`ӯu~{ߺ^׽u~{ߺ^׽u~{ߺ^׽u~{ߺ^׽u~{ߺ^׽u~{ߺ^֣I|\|6KuJRQe' ㎿F߇l\A*\/uVB=|=Ӯ$$~Iط[b/u8[7p9=HcfoQ&) I~|nzcgxc@@Pi@tg%fGk{^׺u{T쾽j2M=~Kތ i!W s A֎QِjX*Bz sd[ݥEnT(S'D]7xFHّԫ*2}󔂦u$hpG\}u~{ߺ^׽u~{ߺUKh*.iV1㻯jUN?q joENt[ҁ7<}Dո(onSֳ׺P_6K/IW) t3 M*q F:PlK#@cgi7e-9pMU2Kߞ*⎭A٧hxOo#рtg׽u(xmk18Ii='{?ux{GAOHMJin{Gf;Ym>`s +_gMOXtK#VM^F::ˮwGELYXo=DE`{"Ghg /mF~8ģ=s{^׺u{{^׺YGK:*i]QTRA*j9br? rԨ銮v5OTZ $aU!M%oL\P0.:u{{^׺u{{^׺u{{^׺u{{^׺u{{^׺u{{^ u=J }ó6DꢓqQQBd7Tb#SE_pڤ4Z{c]Ys*u[P~7X3uSAQb I662r% 3;腛ΟϢ>{{^׺u{{^W_?!V:{(ϿvۤeiTw6"6D]"[*`+>UMJ(NA${w?!6;`wWUo-4 R_LQe&JI#G. r/3ra+i𧲸ɭ;U+P„n먼ki/Uu#O'g籝}}Ĵu4crq6ahv6TT08d^j/Sq3ÖvלϷ^׶KjReYfAP2 hb)\2[H;(20xPqeoNMڽ[w&[vnHUzfJʧIIIゞ jk9O`NV3חvH^ JA',ƚؖw,K1'n\JkA }'V.=SLB\UK! 
wFih2"PFM iiv"񋻀5dhC>}ijmb3T"l=hޝlzn{{^׺u{{^f7_bqtEn?٢$8s 52ֿ]K={ߺ^׽u~=6LFYL]!TrHi.-du$,ƁT .^Z֒__HY…*'u:CnjjVϹ7ZKw!W%pm߹lWCbRty3]б _@~{{^׺u{{^޿G[o}u=T_?v}AZ=O)H^M|=ww [[1[pG7E?PAQ<۸'# ;TIڢmW/wݯ !s4MQ29 FeJJIcFv@aCjX2|?k9;.Gqt7l jbJEW"Ɨ)ޘm01Se'^׽u~{ߺ^׽u~{ߺ^׽u~{ߺ^׽u~{ߺ^׽u~{ߺ^׽u*CX{9)iutٸ߰|bV> oVb*}lZ1 NfO*gF7طϗ78mCg q?h9gtx1^pxpda(@#XO6 UIіbM =:Zu+";S3y=KYm7PϬ,hp@sws-guB@4h3U j$#ܟԭ׽u~aԝ۞gA>.6> ٌ,nTUK e;rn#lrQEY* tOsv-wڏ69c([='Q<JU⶯GT4uY'H"b"^sno˨~98i)bN.;݋vsmO1 -0qX ,ts{^׺u{{^׺0~?no8!5,2X|klSW㒺zDLaFŴZUm)DQƒ͈*@9NwZSs? 6#ʤmvf뽥>E{_KPEϊ4&uu2y4ijz[.շ " =}I9brI9=g۷Yz@_RNXI9=*}wDG~a]~>|9MY=O]OM<2C77]^Iiji#j:+hRG,ttcNOov[gCS흛Jk $81xauNTIF=*̺}ӧz A#{^g_z ӽz;gbymρ~ZjYK,9=ݗRj}O2EWJlEpCQgo{gE{ߺDn_ChSII2QliK^AQ8PuM%^w)$$48jk|4GpQzvjo^N~ʾs̬]7vf9K)>0 Ac[*찒MOU 8u]o?*v?Օ^yC7N>[96 <"n :UV\:A|Q5?kz,׺u{{^׺u{{^׺u{{^׺u{{^׺u{{^׺ur;?W|id~isi-f\ۚX:zdB޵3,!j_AѭR2dGP}oFO|O>6ݕ4IH.#ۻF ϻ"`Hp8f CMqXN6%SGGCCK RR‘Tע=8{^׺u'h|ߔ;5ՔlNU4u|KW=T%#'!Z;\siYxD7[;$DSN֟oF".QeH)T3 QV"vy'KkKB}&xtigr.bp_>_iU׽u~#ݟwޘȳgs( U,/K_ASOO24u!{-6 lh#X΅]O 22ʰdt]mV;6չ t*")) jw57>W`+!wP& -U a)"|6e/fh%%>E'"T^OZѪLHp|rI.~}u{{^׺u{  AX28z&ݩ nTYgj\ĩٵ458i!*sFnI@=獅[זx$J)zFؽ_E^ 8 3 [:PPr@[l G;& аso'cx<ثIm6oPNq}Rm&xSe){VxnH>60ؑH`)})nֶO핥:'ܽ)n[GcL$[o٘(HuE"Ij#WY /,by[\kbƞz@ѽ&?Gg4&y@̶p߷(v6wҪ?'U, юյ_{[^fҪ?'U,N[R|UӶ>2u.Ř2{-=V"糋Q 9D #E -#,ҖoÑv5छid*B/bsy2˒e}%yizpa!If${t4׺u|`OZ=Y:7J!{T߻;*67#O2ԵHhJzFvQt|gӦg6)\\E?{BVGK<[c6Siż(Q c%F0 O1eլh~GȏqY^}1>,׺u{?0?z[oncb8p{^~jdF:`6҇nUIM=UsщĔ-<-;T$}0~`o'wgnƤc)Zѽ3qS]!s{M5*0=S<42UPUutm(*zu7G!yzjrT]BjCXfSS.TOU,,%d?,ڵ> 3cq4Q>MϢ)d/ud>鮽{{^׺u{{^׺u{{^׺u{{^׺u{{^׺u{Wyln# r5n>Jک𪤟~p:kg,ԟۻyy{N1QM)hBSS=ƺ'WkIJq/-M&|I4u+u#3p*mtO6?PwE]{ߺ^׽uP;~+b8?On܉?ӷOJ:xAYR L^ F/|lXh9{n:d+Q!b9(hdhePKJr#7@ABA:o{^׺6/Ȟl-I`6~8_k'/rOL%e s(<鳾Ѻ5K[|q=0TQO}t`|vu~c73+a7N I,k^@(6'7Lݛj҇&9tJá)ʜʻ(nCWD85׽u~{ߺ^׽u~{ߺ^׽u~{ߺ^׽u~{ߺ^ңelػ6VqnlZQbX+]IKIM+$0D$k7ͮ'f҈$2HU@mo{&Vc@HlŵNC$h]չ]>6MM*,Pfd8TXb:}Ǒ6}3s ,eߦQ䣏r'ȩ v}]I]{ߺ^(竪:zZXe8`i&YHIbx{=hN꫻Q\4>?o=M=䭒3m59)G5*hBN}AGg'_.&mC1z}_z{gOg:o0b^on%-ϱ<W]D[Gq-ʔ\l̑e1HXڇPᖞy6k)GR w x0}?{OҾ{{^&{sߛ KkSc2Y)q9)?\FOf3")>h9#zdr9 mt I5?𰟀낂+QXLR࡯( n(!SFظ&%{UIk^+OVX7nvG7PSA_sVvF \V/dL.R'#JɞꍵY[k~NKrOَYOīoݿ;}FvLSaVFGI 'vuQѣ2)ٙXPiAA[{^xoLb.[l2NJ *(NC_^n,{GUBcVaSHɴT-<6]$XʉߗEw$~{{^׺u{{^׺u{{^׺u{{^׺u{{^׺u{{^Vgɭ4ਟ+RKw3QNK ~+=ֲQ jf8Ek!oG]{ߺ_NKo0}M؛*/|[3:u x6Nfk鎾g52gnNh`c)%\eNvrw?Jc}89ϞotM3ԎA}AOC 4.vz䫚6HPtEq MtJ׺u{{^׺u{{^׺u{{^׺u| *ɗT )۠ʜ(#**` Q9R#}9_pF۲B]Dq)R=Qƃ,Ԣt!Wyp~wƷ85?ڔQ2A8e:1Q7ߕuhԒ+ io:#2LnvnC&$ZKpŒzgD`jĐ)6tn-%tFб$ ?rC޽{{^׺#%;jvV" _e& 8dSbȡ_"[j~?$ܮ/??{3蛯{^}z>_lkk`m_3O95=IfpWS;UE5O%l~MHPX}:GCtSˋôzOYm1Npmv6Y q=.5^SrZF$I>@AbX'C^׽u~VtB 5|bx:yX]U{dkl*nC@g}!;F̎ VFR )ru.KnJw $Hs`3 =ݧ|Hf?}FF{ߺ^׽t՜aw6##'db2pW IOWGTD:exlY"c-U_Wg+IՊl_&P#*ؕkC&.HjI?挌sCHpc:ɦm?攍pSVfe1!Tr4IWS qʛie A3X^OeE$2:`~`~_.nk; J?0h%׺u{{^׺u{{^׺uG$M"E 1#I,#8yI&Y"X2I>C0U4q'=#WI U1# $zyV_iea ~)s. #bAS? M󽔾v;YG>JƯD ׫.;hg\mvQ=3THl<I!U pe?-ml$kX;6bOYO{W/X۳Z/͝Y|Ѓn{{^׺+]Qm:zMS%.ާ9HML6m#KOgcb?xO{33;fbYffcrM$Ou{{^zbӕ[mjȵr q$>&YhmDsm ߄MqU@-F75Vc2t35E4NÎ=@"JA `gIܭoݐusVQo ruM,qmK5%<`0z[$}j[5yݿb'þ:w^ݭmESx`hĖⱲ*Lyx/?d0yZf V|~NH mæz}{ߺK؛|mᶶ&FwFQm5$벹 Z$qDD?6ioZͭϕ*چ[K3Fe Ü\vMZ8$G\h|Vyhv #}h =T7t)M@n!X~h\z CN19)X(UQꕏ>Ǚא;."_YThcc͏X~]ZVXqYicSǪ쮉.z~ӴdF$*/m;kLQ})/-Vk?ёkc'0lVjj1&臯{^׺u{{^׺RmmR,vf&0 l&X@{[an; Xѝbie-%ѝb>_xvev+v$[xM0ʡ2 5UM }<5ng^=LGZM4<:wfͽ2XZ#I%#tg t?.Kk,v9"0CW gNR˔8,uy,mTOY r)pyhThH'W@u׽u~{ߺ^GEڻ*OKI<ɯ!%f6E$~]+}>ϟ:!-Dsm&h2BO~xHGetWȿ%amcg^;z%MZ#G3vnPP#;2;) !F5 . 
{k']=텥JJ[v&B:̟\o<.rJ2!"`އR}F۽`ymym6fXxqH٨5: Ct׽u~{ߺ^׽u~{ߺ^׽u~{ߺ^׽u~{ߺ^׽u~{ߺ^׽u~{ߺD#ߐ[/sW]  d$栕=^(܂,XRF҅]; xy;{t )Q W_>vR}y&߲v{S6/"(g6unO{oܧ9-au}nO1ȣkwr䵍(EoǠG-՝}I:LPejVQW5C!_n>sMy#wA&uR<[ow[dgkqU3߾Wr#wq~(?w~ڣKf2Lb??vɏ(.:~/Op,uX۟)Y=@Gvmfq*#'!ёsyq_7GwݡQƑnj'짆NU)Y3<5l=4ݹPC0ǛWd9kGFڀgCl5̣gg 4n50Cm`6"" x1@ c)驕-}7%Xdn{a#EEyPCۡ|1An? h@(=_^׽u~{ߺL{r`6lRd{XiPUT8Lq;[}m(*RI#u@^Gl|nSr1Yexن>Uuo`c/D7{T??k,׺u{Yb{n/N8*׽%%JVVD,=?l%/=[B޽{{^׺u7UYf?qd׬1yLRu~HC`4Vg]QQ>8W'wG={ߺ_SMdOKry2.cGXXc̈́FJ9ڣ]>&׺u{{^׺u{{^׺u{{^׺u{{^׺u{{^׺u{7?`'~4H|y>,pojjim2lƌ䢿wnzx7fKtP/JAO.IIU_UMCCO=emmD4t<5UU,4D$+DPYu`_~)x)v tmv>R/dј|XoߩJ$OGݺo{^׺u{ϸ'qƢy'\D*e8w- quzo[ʴ5~%+3Ns؇^׽u~{ߺNlcn!U9) @$%KYGG+.^$0l wa㌖ ?&C ?zb?mB#ލ GgFnhCmh3.^X US69HĨ˞7pA9cuFӝ{ߺ^׽u~{ߺ^׽u~{ߺ^׽u~{ߺ^,K!Ӯ+aNsϽ* =UPjb+>&J?xԟRjl,8 ]*;=CfSAEslSz?ˢU޹nlVR!YX%5$lna%5$7ɹ٤q$KD$ͪRIWN>]{ߺ^׽u~ϊx:x6D B~]?a\Q?Չ%C׽u~{ߺ^S}쭹[yuǦ[ioVoKnnEf 7Ef2XᾞEE 3wݡ]e0uNܙSj,T+UNJIJI[ ,Tуaun ?//nk!g[bKfK#|FZ ;++zTUlCC[ը}zA|@=}}賯{^׺u{{^׺u{{^׺u{{^׺u{{^׺u{{^׺uY𥟑rw`5%cUퟏ;we|~ۄIxkce;?%#wf,NۅRT厎mL# Jz?W|i?6.FWx稜?5}MGc1)[I+@!m;I.,hԞdH2yIʏ#˾7~Gܲ쎎S‹9;3hMUJ*2 ;RZ~JuK7L!؋VÆ_ʣgt^]LQCO0:'cj1opRKP99*sup1ٔS8L8Q=ͱ~G~{ f[)!5Y &}-nE>ʬ4uevFԄtc6mS:ZFS,bG HK/NrS6RhWь[q./yY ݳXSf)ΌS,cT/c:pŹ[I%O㡿g .[ЋYT&Ҳ20 \*5z^׽u~{ߺ^׽u~{ߺX*jiz)iX5,s`h8@8{m ]^;ڏo!~*#y ipzI%\Zgsݟ-35bJmn̸WX$4W=nQM~C_]!P>g'X&7.s!K\B0+ k8@IF$~׺.OedA%;۾*&jbiD_.   {e䨽f)?/,㘟iSKb/^лwׯu//=[`y2?ۿ{iը?߯CoW³;q* o>MJ"0,4Ԫ20Ω[oigܤ`Y@uWbIAՑ//Td|;_;{7]wfcmnڸV={f;!\YUk+0?>5YӢ.{{^׺u{{^׺u{{^׺u{{^׺u{{^׺u{{^ MLtM=-,2L0A4M,BqFGjv$\pvd}k˫jw4Pߏe,jT/_>fP>=uO"Xޙ|ߴйWcmkrV}dEOϭ路{^׺u{*;7};}Ju{{^׺yۏp,;rmLgln,=JV3{)]b#M]CS2E"{Ԥi]JO|,銜NCIS#0R>GMF!4kߢ[]P-@4{]hg^{_ӗˈn?'7^oDoNۓNIG=ٷY]A"*f#+"5_}nmgJ?0|~{{^E-]] SEUQGQ訥Jy ,LO9lhz0ژ= I*`)QYhk$Qn=,통~*?,){ty zq+*WQy(ki*[g%:ԟl XJ #~0*_y( 3a/%my?glGPL?gN ?4o:>\ܷgtĿϭ[u_{d9Y(g~͇^;^H˦J)b(q^p,{}_gw$r-jW̟+,?2?ۋD>&c:iiª?i(<|휰dmZpn}lN*̮w7Ϛe2\'!꥕o* (|J;ؓ5ݺ^׽t#ҿz#]1iЖJyT6Wqg*Q ZjXuZG<x\:~kP)g̟!֠6쯋tFwfJܴlJ30o_$fXq--| V[?oB%#>]P>w?٬꤮g3̾c)[1ՙBjکO-$ߓ젒ƬjOGꪠ*y}~{ߺ^׽ueɻޥ{"~7طٟD}{ߺ^׽u~j] 7΋:tW~G+4z6/qm|q}zM/{S\P7mRBDG=$A pG +?n+ueHvǎko|-z\TIɴy>PP}GD.} T׺u{{^׺u{{^׺u{{^׺u{{^׺u{{^׺+87u`3˱~/wxجI:vfhn AOxdQ=|^}t׽u }0Z~RvNq@y1{f͡V=-~ 'Ž>oMfhj:G׽u~{ߺ^3rv#wgK|Z.꾱n~9=0ۊfbw'OW@%**)gE:9)ѥZXA_kgz_cÓOO/{޽k/wWa~'u_:53{|^ÓO4?Ŀt޿~uu>_b)Ѥ';ZFe%@7mzxGAXK1O斞) )&)$R^9#u!A>{{^fKݟ{;jh2Uox|RpuvIi䆡#ymĈѿwqU`zߏw1n-ۢ [R.ꚺ<gKYj""2~B|;.[..Go_?Xu~{ߺ^׽u~{ߺ^׽u~{ߺ^׽u~{ߺD{`swg>=epoO ;Gq[kgLM:8URjiKwwz,x_=OK,&Ï>&@_/{whvrglP;aIGx(NE4U;˾;b]i=ڟkRr?V<=T/@WY3&PzٳFحb..)'uO_:޽k/wWa~'u_:53{|^ÓO4?Ŀue]0'*wo}]:ϸcom=G:iavϝ|ah*uiL 婨USNW"@H W# <0蟯{^׺u{j ~O4RvTij:.xFvx# ?>ژV#ҋSI^]|=u׽uo'?NVii[GlشٴPU`(?̠5tKt);˫Iu{{^׺u{{^׺u{{^׺u{{^׺u{{^׺uO=?`SiȘS{Zi\I?o?/a;~=׺` ѲEo%Wa6*H{cs. 
~ڞ?Ϳ-n~ Izu{{^׺u{{^׺uɿF2WOTvUd L7VWlb&CݗN0fPP܁{ZY# iȿ&gsW㨒'[񷴲XS"cx$wOEU.U 2Y-~ΗEzl>ݻGulϞ;mgvvڹZڻ][596N l3#EP%hDab0z^"Iz}>Dn90iW,Wn,4싘ڻ)a'sԊaG:2|9]ן3 f]Kvd>=dXOdbh*UU&Qgmp0S=:᠓}GW:4^/^׽u~{ߺ^׽u~{ߺ^׽u~{ߺ@ߍ9{ôr¶__Z/<IUg7^ *8*Du0niR̲| \̰D*~߳jwͬxJખ v]$>i`E|4QT#ke<eė2_ө+(~g̟2ՁǴ* rxrI?{7&?\fLKc2XQW޽$vnOG 䳠ԐRhqT,Dvl/ijw7u߉?_k1?V|ezc~: ^슚@+Iոͧ[0Z(43{M'M>Xӣ"D$H8B$h*""Lߺ^׽u~{ߺ^׽u~{ߺU\|N@|M0.ja@?$9;ΝeL:{,{^S o9o^ [ީ?Nf6Ϣk&׺u{{^׺u{{^׺u{{^׺u{{^׺u{{^׺%̃m˼ν<+]!򎄅 ܭaCx$=9?ƣ_G{ߺ_MJ^s*^)Wߐ˳QH-O5o[2Gj}Y cX>/E'@eo{^׺u{{^׺u{{^׺u$.e}qvMw[#c;jOz^:bb32ާ 1Egz/,a 6 s>ξ_d[7m 7_jnۏ]>31QLZj}}3"){/ CǣA:K{[ʯ)[rO_#+;#?Tc`K3uA#hLg G ,)" gx|}+]]ZP~y1?>cŝu{{^׺u{{^׺u{{^׺)V ee>zMg94c|xh)Ȭj2]tT5]k[/~OC>^Bo~|\/dHII?$^# mޝۻ# O6;vtGrOmB%n}o%j4Uݯ[ GSAASM$?cV[|ϯn-WH:u{{^׺u{{^׺u{{^꨿^c)?Hvz`B/ʱSmi:zVuz{,{^ smFyrH,^8ظ`w?f0d?-٭~_^gO׽u~{ߺ^׽u~{ߺ^׽u~{ߺ^׽u~{ߺ^׽u~{ߺ^׽t[_y׈o9>nU] emTl,x>EE:45|GlmۺvNji|xHv^<[mc$2Lq -N [~*u}~{ߺ^׽u~{ߺ^׽u~{ߺ^׽uW+h))v+]%L Y|RJċ\"EHDE2s3KTP;Kэ-=Ǯq$2G,Rgr0܃TqLSʅEzQGH&BMS׽u~{ߺ^׽u~{ߺ^׽u~{ߺ^ֳ_烮hdAwWS@$y[:Ln"%gHEUOri=OJ얳Wuʫs'y< (.8$_O3(ϳH#ͪV?3~׺u{{^׺u{{^׺u{{^׺u{{^׺u{{^׺uHS} ;>̋4tPB7*IDʾf]2ݳkOʟO_t?TsuC~K7$i7n.eZxdMâdHfVz 8-kӈ_Rf=~{ߺ^׽u~{ߺ^׽u~{ߺ^׽t_>X||+4w;`{nÖ!08Bl Km$SJ#YC)SՑ8q.mff[?rPˌ[S;YlByi+d}cA:b?؟)?%J{/}l8<-0$7uQ%^.}I+A'Ae8JTF=l^׽u~{ߺ^׽u~{ߺ^׽uG՟ChԴ5] H1|Se;## *CTlZjylRXe{,r^&D'DaN]{ߺ_\/yNL~{2FC_UxSeڻrtckj&pIfLu~{ߺ^׽u~{ߺ^׽u~{ߺ^׽uz|W^ߎgޛ&}QOK Y7+i3}:4J!S:?)c@{Éwtw^uK>>wnf)ruSU⅘=F z-:OcX V6,v6AGN:zJ*8R F{5?~{ߺ^׽u~{ߺ^׽u~{ߺ^׽u~{ߺ^׽u~{ߺ^׽u~{ߺ^֋߉Eoo♩;# 6H*vSMRb<[ &--4zUˣGzNz}=.;W+WO9ܻs7A'W0Z)l|Ux,sF|28<:/cnCڎ ٔz( W/&mcf3#wj֑ tJPKBˣ7׽u~{ߺ^׽u~{ߺ^׽u~{ߺ^J.h<:$=) Sǿs￷QX"Y(>}k UcM^~zo㵕R~:W]טgak=(_g;!uO~ʵFS7[{t ׺u{{^׺u{{^׺uq7g_v1{?,C AǺ~N>{>kj_ľ Zbѯj6Kblz cqySk>8\YT[-_W@u{{^׺u{{^׺u{{^׺u{{^׺u{{^׺u{{^׺!3RqnﳥRBE]S묔Lo [6;+RJj}u4K"yt|"L׺'i~9?9kյxR?;c?6'BY=/ H(t;ҾQ +&,PCKxDR8i ŭcªG˯ A ~>ͺ߿u{{^׺u{{^׺u{{^׺W)qso->χ&ݓ~T?fݥZuMwyPnL/Z\Y㬕BK?х9?Z Gї^Wbˏ_v7~GGőA8zhUyjol Jji$ ԙz$chL=]IJiuz~7^׽u~{ߺ^׽u~{ߺ^׽u~{ߺ^׽u~{ߺ^׽u~{ߺ^׽u~{ߺ_8TܨZWwя!( |=!f-$Z|6=-p?miៈJIVRd1u40VQO--ee,=5]%L S̊Ȍ#ߺ_Q/0o)t1z/o;x);ogQ|^+0t55Tے]m0DˬPC{<&Կٟ`o{^׺u{{^׺u{{^׺{G?fw>\v}vVE4{ek]2O8BHBOK0QĞ+ع&{qȮ+緶\ulNB }S^  =|s_-wv|7l{ wZ*,r|F~/ەu-ꢧyyk⧧PdmdX??OϢ`{@|6*qS~=u{{^׺u{{^׺u{@E,z}A۸;fppd(vk!sR>z:cYQ ΑwܬjI?/V-|~XOd] uƟ3i#ݽ51eJ>Ȥ;iižC@{9HRtE2h~׺u{{^׺u{{^׺u3εW3|^m*jS7B,] Unj cA93r#ZtO}{7n.љ[teWfzZd!PDpkO}B;C (LuCYh+i# TjRW7G_!?{wݘޝuj³H:fg}mtij̔ަ0Q DзLu~{ߺ^׽u~{ߺ^׽u~j cީL|oٽ?êA])˭~\E݉ UQt2Qt'Qjo>޾zi׽u~>e?:~>EG֟%& H'bmZJHQEA;sb1TRE_k+ۨp9פ37%|m%5'juM$D؄ܝuS]Ku$kpaĢ_p*tW/-6bwCC>"ύô$`a8cI)$E? H~=M0( 9je" ] ]䉲YYx.9>CojwӾ@?J婏? 
Q ҵ4cֽ143WTcwdG1m s¸&ApRI\UGOK4̬ ?b7@U7h &H{o]W>۵/Bt2]M=8(:4nQ(#̊jz%u~n )b8tvnz::>{{6Cgu$;6{o)tIJclKMR힟.b,޽{{^׺u{{^׺u{ Ŀ_y\OEީ~gvRӬF˝˶(%ҽtΗ[du'Vm|^؝66G7E3$0eTJ'!ouf~鞽{{^׺u{{^׺u{{^׺u{{^׺u{{^׺u{{^׺u{{^W}?\+#UӛAMqːȾ\-..fӆ?2STW,Jm0xػˬ79Nw'7nT3{par(RQBG\\}A!GO޺B_Oitd>m׳66R,[W!&;/Z9H૤Ii+) `5z* !S;IzCEv2;+HH`#W+SuFo%Djh+|7EsٲwGkf()x'&h]ehPY$D`U qOH{{^׺u{{^׺z;[eV8;$y띉g5xlɲVJcrH#t5T&WUU1J$ c`Y>I3R?G*U$S-Ҩm^\'uk/>o&g?iv9vF݁7klYT<;MS,HոjkEzY`zᦦZcig 1FIeF &ߺ_GNKnͣk,+{'o<[diU~>vSʤvͧ(l*1x#-S|_>|Ծt{^׺u{{^׺u{{^׺u{{^׺u{{^׺u{{^׺u{{^׺u{r?G$wmVB6X`}4(6^Pe >B9z .Θ')rEuoTO^*zɂmճd Z/PC=  GQ 1cVђQA1][ͳx M˺Vg6dYj n# G-]~O'!ZIeuU=Z$Sïo|Gٝ'7ϼ>~ݸZLn'ja/ۧ?xsc*PZq gZ맿-dZaw/H/M__ ;s%n Oul8y$/#[A[)Iy/|`w%ut=V%GI(av->2S(.$EMR,:}m(zR5uQ=DW|۴4%Vmݹ^;XҐ kZ.8F7^ paz#uغʌ~N_I#CUC]M5%e4HE(#:w׺u{{^׺r1X<^G3mM NFAGG7*{GO#VnGK3A^cyV]<1oG ,yd Ci^,:n3N̖3O oMR:vI~=-=q;w]X=ᱷvm1O`k+uE+}" ֺu{ Ϻ}pGm3ĢMO0t5$2d{VQLVWd(,)_5t-&~#H:?7);Jll:?+ld_w&mJhqES `y)^w| iy=7Z33;;wbYݙܳ17$}Ү^gr_fkXF*#2PcqH櫮"F u}'?S_9-o~D:+-g}N|ԹیAwmM&[ {ߏEw7ZN?t{^׺u{{^׺u{{^׺u{{^׺u{{^׺u{{^׺u{{^׺u{{^׺/4m%4C>@TwjbgcI71v UM M!yԑ,?O;q屿id'}Ѵ7ԽBI#%16l>N*,9+x3FѴSG(y]޽{{^s]Mu_a~ruRxss!ťd> zz7^?8{S9MMƶ:mmM閗Z'0ˤ뭋t{m*WRn»&tߖh̵9PH}D.P|oEA /?wUZn=U֓@gclL"qiOW;(Lr4YNMK(^妀z qfn$UU@gAun{{^COo_iQu.>摩eia3߻łژH>*Y*]|4;$MeFsE"FuoP?66|7}|z0v5^5j:WMpzO6㭍2pGy]tPy9ni{F\z;%׺u{{^׺u{{^׺u{{^׺u{{^׺u{{^׺u{{^׺u{{^׺u{{^g}Ųw[vڝel3r %UԬrer*lhzJz~Ufy57O>NzwXM6H3dl2ti4p(Pc^EGM_w_ iMkP㷍^(f?8fwmDWLj KXBD:s{^׺u{{^׺wݱmGm|T;wBmACӻVZ<f'|lm(NJ8VD ;Ro:E-Ʈ;;z|mݟJ%1G ٌNV}WJ'*4$@T(Χ5=zu{{^׺u{{^׺u{{^׺u{{^׺u{{^׺u{{^׺u{{^׺u{{^׺u{{^;>Gon\6'q`2Qeyu_ZZL3! Em,z= zOKr#^S.ߛO.i`ذg2s!ۭ#7b8AyPPS'G?F=ۅ͙cIP{b:ufUm [k'+KU&iO2l^ѥ]MO.lDM;fCFfA:/a653=)Zߏu%c]#n&Wg*R }bVn>:hbvczbco߾_O:y#OG̽SI?|i3QŤúbvغWX8/12#@馾| |G-o5X8]|c:?sd: .p PCRgđtρ>_a6۸}vg=C3Nbi1E I`p1$=?Z׺u{{^׺u{{^׺u{{^׺u{{^׺uRSSQVSIYGW ut54+=<9WFYI}uD08NSruVG_bdo5r$Nhb/GڪCiee~])dj>}k޿ϑ:گ,zw1joe {fT$w5$k_OӢGzIQjS5n7DcK-޼ }?Wo_cR?eyjwqB*Pm*uӹ9oa*.OZz66  ^3%So$t{ ^ta*qj馿i>5dW鱹}[4U >o c);fQlfZ|?$ Yum^9?>|ͅ]{czb٘ⶖ8]qSXU@-J,I$Ԛ>ֺu{{^׺u{{^׺u{{^׺u{{^׺u{{^׺u{{^׺u{{^׺u{{^׺u{{^׺u{{^׺u{{^׺u{{^׺u{{^׺u{{^׺u{{^׺u{{^׺u{{^׺u{{^׺u{{^׺u{{^׺u{{^׺u{{^׺u{{^׺u{{^׺u{{^׺u{{^׺u{{^׺u{{^׺u{{^׺u{{^׺u{{^׺u{{^׺u{{^׺u{{^׺u{{^׺u{{^׺u{{^׺u{{^׺u{{^׺u{{^׺u{{^׺u{{^׺u{{^׺u{{^׺u{{^׺u{{^׺u{{^׺u{{^׺u{{^׺u{{^׺u{{^׺u{{^׺u{{^׺u{{^׺u{{^׺u{{^׺u{{^׺u{{^׺u{{^׺u{{^׺u{{^׺u{{^׺u{{^׺u{{^׺u{{^׺u{{^׺u{{^׺u{{^׺u{{^׺u{{^׺u{{^׺u{{^׺u{{^׺u{{^׺u{{^׺u{{^׺u{{^׺u{{^׺u{{^׺u{{^׺u{{^׺u{{^׺u{{^׺u{{^׺u{{^׺u{{^׺u{{^׺u{{^׺u{{^׺u{{^׺u{{^׺u{{^׺u{{^׺u{{^׺u{{^׺u{{^׺u{{^׺u{{^׺u{{^׺u{{^׺u{{^׺u{{^׺u{{^׺u{{^׺u{{^׺u{{^׺u{{^׺u{{^׺u{{^׺u{{^׺u{{^׺u{{^׺u{{^׺u{{^׺u{{^׺u{{^׺u{{^././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/doc/source/images/cloudkitty_architecture.png0000664000175000017500000033033100000000000024770 0ustar00zuulzuul00000000000000PNG  IHDRZ 54zTXtmxGraphModel5VǮ~#ً̰$!Ean|%U_WWQe1:~PA ܅E ո8ds|_V\_{뮋ֿ>02KȳqzN>_6Fly2;,$!ŕXKu[@d ʉ.ESȓغg22:ƿ  6I1"wdG4 j7ZQ͸"Qݰ+T*̻ҋ07N(هKeH5~Sa61<$`#"K^PCiMXiL*r 6p|J7gKR0سapbD;]Ȳx:7t]WF$lI;@l"NIFK:Rq[AU'SӫJ=Wjd_ qB%?H:4DAާ#T+Ҵ"EPNgI7ilf--tC!DmQ|Q >/U'gA:z@cOT/_4][.eU5x_4]R͗80 -`J:{͏Ey|DřIּ5-c;Ah(@ '7qp1#Ibi1{9-їi^»A5CMGUf1Hʚ^U1 Xչ>ԧt$q=,BKmN7DҜvN j*sod:UZ^W330[CA'yt^>#0t.&:R_WkJTҤx".k|xrlE8l||׌KU)1.Iu>Hi*Yt  &$^p>/O!h&o"U+O@S? 
YDKɸpE |yy ̵&'42` 2wN+1-A ZUj{vE=ϓy3"ΝFwn$zLiTH-Q;QzF]| ҬaP ӓ"I~ E0uzymAN}88Y5D$x7g'ӿLh*!BW Ƅr8 GOjo."`MQAޣPyen"+ng~DAf$x)Va8`NS ՝ E:" 7&)콈b.ĮtcȜzot[LD#}z_Fw6Ptfjϣ w$ל|Iƒ6*Ϭ'ERfԟQfpA0ýe6l\gyȥBf MG(sR^o*eڗD:\?9L!$༝XْYM5qS@v9U܃/K/5w-ȵϨqLr_N0 DR.!v /&w+昂t*k{( "$QhPma)bq Y c`wdE0) IDATxyxT(VqkVKmmVŶֶ0#3VmRY27"`Na>3IXBa K03am{km}Br2|uUș\=/>4L,>QLCBO}i/SʛVN}Iִ4f%hͮ QPQ>nvT|,Pjt_@]VB:C3׈i()\,IWz|o$Y4_XJѝȔabJj'Yٺzg2z$MCIe!$|^i%)|'$$}iBfQ@ l3K4 .w2w_]3|ve!YiKR*8>ORҽh7/1DԌ¤~\ta+Zo΄߷\veU[o-t?G^tgʮ͑s]$dڽZtf=EǷ/U֌GfEz:? +rrS˅d&EkSl iNMΪ7EkFFT]̼&Z6=W.WoK{oˡ;g@Wo;,CLCIavɾ3雥Y*RёrݒtKGyC}ci/S^zGZP퓃{(~B˒W={:5_EVOEknv_ ?O'GBns\oRo-d޸%ugnn|֔~TzUT**q^"'8QvPjnJ Ɨh/S֋{_qkzfknݤ}_묳$9C7kk䶛o\ᝪ|mqugbJ  8`Ĭj56 %Uk/MY ,snkˡ:9ͷoC_>djyu7O|%_OEJw^=[ivܢyYqR}($_Î2 ?C{Z?3|ބRnVsjњ="W\=hǞH#K• k[n1}_bp`CŖf+)s֊i(Y1Ô~WyǟWM5?^jV@-gTլHV]qO d\m3A48*cGq٭[7y|rQf +\״5Ԗkz{ VV(m^•wOV_|}"[XY V"V+boٳ!qWo;?QC|L)ZiѸɿ>31_ 7YRњ3fn*y߹_y>[vF7bJaףve}ӊ֪D-XY 8#kUk WVLx}rrjxů'rUW^%s[5xmƜ*>_бEfEkxʤ糲i($ub>4nl[]QZe/t}N܀{INн>7(7>p9)!$]m{=i(y:OrFj`˅Yt٥IeÔp?G^yyTBǝZ|?9li>MYGE[cާo'/ ݉QZg+7}A-I{>mD=oGٳiE|nO"=BRE?*ZU ^ d'ͧɓ|on=%7>!ƴp=~#_{Z7_u*&:mĜs#:Ⳮ=s|;Z˙X68*="7|'m)ZßƜ]s/ߏS{hE$ubuGWzfdFkHwrBR,S/ؒt[ ;j*tJKZOzIӯpOj/ yqܾUee%=Okr%J=:w{֩gio37ǾݩZԢu['~ޗHn$##C.f-'R's뮽]sZgU\{CbJ M칿)C 젴`mDe5+!$"ˎ+ݜAwBIZEkNk_wJr ~R8Hn9ЮVg'ʱEk[Kリv3hݵ9bwƖցGc^+jܳ7iIAIS+T$7(ZZ[J!_YɸghAn˙pK{B!qҞ5ݶ%_͘ѣue rх-ݻJ~7[߸!&ޢH4M3hmqT~eǝ:/]slK*urb`#m+kie+!EX"=z#t.O!$j)Xi(YӾ-r;k-ZuG[j5pWn\~Q,[${4xlѺx6|?9>{Dn:_k9|5W[*Zf?yQC4{u?r۷LʞM ME'_].2!'ٖ9 G%VZp t;rl=r&*XUaJjWk/DֆGer9|am:_sF*gԫ 7}Afָ |L.Zv[oM•rѭR:ef?E<+c/ ;14ԟa=&Tvpm!*gEzrtAY*=WY,_tcՑt/ ~t[Z\[rg˟{^g܀=܄~ḇNw֒s|.+C&uu=ل> nk&/PRܖjp[ smoOʪV"99g.'7o]`>yV&լS6MCyڋdEzcmm IϞbV~Z%!y!YO->322dЯz]c{#Nw֒W\S'H(Zn9 W_uuy*a[Ν9.uSM)PRUtfÏjNߖ•UIY*C_ٳ"=u]FA,2A>Y͚ EkLo6+{w|{ܯ6i /cМ>ϖ:K&k=[9K.u;ZKリW.mq[?:ֆGͱo7c)+._c/I}I}R2s)y돯hU$ZhʪV$KE2< IDAT[&6{>2rFPhiA@5kV뜺UjϞdueddSY1*FxW'5l?yVqzҢU皯̓Ϟ?{!7W֭,6헚dxSnfsz##Ϸ|I@܌yݛocRzm6_R~ y\/)w}կ\uu{Akd}13yq\p1_ۭ[7 eoPWst,W-$k~k+Z/a - @hi5k* &֩*[Sbeddȝ}-o;$߽{J={7"r\+c~>) ;W@{ι1w9 \wZ,O vSv5!^y1I=zmmzD㛾 េ }ωM?k_O_zɥ~OΘWrR`^tƄ5=ʓȵjN|Z'{>D2cUn >eHfMe$i()_XHfNhmqTF Ӭhoy "G2j7A׷/|߰OvFcV̶ݻNo;ё EkÎ]6k5u ;JmU]m[K^\Rt⼕)?WgONjV/`U@:y`//w5w>@7.٬úg>N^kLP2kbB2uCrnjV8B8ZwD;C~C+=z9ϑ/\Hٰzqeׯ88|W~̖'m]~N˳O7$~B9咋/Q?XZ#YֆGewmT&>Y~_FuuYҳg/˷oIxa̙>agW'_fg^b{hTu3S#dK;|9ss4L1+EjNTdHݳ@ty*=pwdz2PbJ{@XZyuv1%}8ٿCLĶ>;"XdH=O׵~8@R?X#E=tj9ˢ5ugbJLCi/!S.˲ ~慤;CCvX{щl;,ǯ CҌcuA @<9A.ˢ5, ӣ*LCIzE!DouGd1r9vxڰDLCɶ5mjj5~$@.ˢ ;PMCڋBHZyiGn}ҽ{w/_GA??%-~o%߃NڦuH'^,ZUOKh}-ݳ@G1=*4,_h wV/(d ttMh,Z=ʥs҉aYoēpo*Z_ o=tLP2ڋBHzg׆G.htTuGdLCI+t.@:v뭃p_ Z7 :RVBHgqK4l)ߩ}JmN1 %ۨҍWEZo|ONŪhYYlБ i()Qp wPR:gY:*UNjVy&KEXY/ݛ-g7wi(5fZ[LCɜbtTfY|֡[t@R͖E?}''hY߮{6hbJvD#i(ٯ}TgڽY=#? 
e-V_ {ezNݳ@G3 54.ܠx w4>KJ4f5THGYz뭃p_ j, :i(i()_x w64 j%Po9(KEj>}''શ^M,mJLCCBH&LV dϦ*{6|,ϣ1ust@:RjˢUE/UEkݳy_i(Y[Y{AI,X&̽YY|*ˢu~ ܀kU:.4gK2g}tU5VEkjēt,Zt:>/veٷL@LCghJOge달,Z}/VEk2w_ݳ.y_6Z !!es4,}H,΂Crcu_ ehU:{6rEkg]ji(Y^FB776ȔabJh'YٶzwS /}t+-Vēp/,Z{6)P%NɌJ1 %Ek%YY1 %W+t_ e"Ek @<9wتhp?{64lVB:CM)PtjYN7KW67UuEVEwON]bY @U^Y {ki/'!M@e׆yڛ]2ei=kt_^_IJhēpWzHly|Ŧdj!$S4y5gio7g(hUрer_ Zhu=tC}VVBUP2e_?zj/S3=ct_*ԲhU xrbˢ5 :U&i/*!%+4,xD,mM[%Mf/R3Z/܋-eug4Ԯ^VB7{65ȴbJ> l>Op-=O\x:?*/Xhu;tQd֘ BHzg͢MbJf@m9}ֲoA1c$9 WE?:8}'7ZhU斸g-dFQo4M^}ֲxլ}4  @<9A5'~Dl |A%Kk//!2u1 %ko>OK<4|g:h.KEXYON,ZtQ^Y ;k"K BHf]p) jyN5{d1Pbzu_ֲ|Qep_ }+Z={6OSvmʌ4f},Y( kV/uՄ̢y/jJM*^hB7bJ.}[ji2'JV%;6eY*:/]s,דgZbJ^lB7;EdwAl;m[eWyC }ė"snēp϶*Z?{6([ !%i()x+({k;~eڃi(?gNZUVEW5p_ fZAׯu]U晭]+Fi(3HvtعEdNNѱÿL5xUtuētM,Zgt]+[罺Dvt\A!'gڽ'1V;V[YG̗}Cu8/:ݲhE/\,g$?-d*n^B3{68)C |:IyuGdZ'CUSɺvBfQoc@xUteoētO:=t52zy|M[ {uԮ^B3KUrhKŮvzS^4L`@xUdJ hēp[ *P<=^Rt!_֗lًOזJUfٿϓ޿T.$- 닚[G/]yVEkN+;5٭M)xDVPjWֳڕaY]Af@NtNl({d4h,Tw=>\'sf XӣZ 8}Y*gUf( {eZ~Alpnz~+!_LO6 }m$G/:ٲhE/-֠wg3̢̈́ަyz ӣ*.GI'y}G<.[g~gg辎/h]6p_ &Xo<8e [G/]XhuytR+KEixrW V/V hētiY ViOnUњp =H-/U5p_ ^,ZgU Y*}'=zE{@jyUtuēpX%,ݳ9[G/[ ɶ,Z xr.Uњp=H,*ZTOn;g ja` @<:Nl *:ʲhUQ @W!}ei/ ;m^r6avfia<ƅ6oiv_ ?^9l_h@;Î~7pN6yGʺ!R~Mla,ty3:Bۃ!GD' XA"ݳha` ЊWBNۤӾ2 ;mLxi~y8zԽƠ-,^,(^OWˁye9YxU6!/=/y*2iGavؖIa rK[(ZtRk[X}222$#[~_ 9l?/uچ?WL=Ճ~#5FԎ͒/O q( S6sVǡ^ӧGڱYRcdAǜ?aiYn#tR.70%g5 h Tثiarڦ/Z\fMCJ>okg$3^#JPy6{T0X~N۽^9TcK\֜k@jyUC?ի"VA㖿-?#:Eٚ-{E׊/B:CZR.*SdknTg(+x,3aѐӶ$:]z,ր+{6Z^!5}oBU IDATӶ8ѲTzF6yG&ʑ E!]1UVȑ d{eS>l]:l 9mυot_7-3ݳΏ̲hE/貪\a{0찍 ;-=C:EZDf/ZV"$eŲ4jT3`vBa=+}MZ46~zE;{6Z^߁]Jamda 9wjqS9ْȬ哲 'BFMҐDf/[rzoo7#촗ewؠZ{6Z^@ hS+8>=9YX%r/[FDI8^# ;xIs-^mn|GHccCg7/tJھU ;lNwrf ύEˎߔ5xΫmO>t`N!䖸hU+uRkЪhRQ :?rڞ ;cV>lDfN*+Ag9]6zE$vaÐ\x~! .s,Zrݳmh@[۴Ql=R(uBs:9TdR+^>m3]Yhu=Hl߁p_Eaj5FDgϐTk/u!7*+Mt]q]uue^w$J%! 71aΡR5D~ 7ܕ!19g[B&'$e%!S3IH@~TS]]RU]3|g7]U} gNO-^:uʖ/(=ǁqFg_'W4,:w>Z)iݦ˧O#pY`at_e?rf,D˧׾fw`i;̥dѳ20&lv^oNqg47=@Ǐ{ugW^~Y֯}fg / Dݯާ7 V"KhX/ҳ⽑g_Nyy)ԫOaM2ZWmWpX)~M͞ A aښnݵF΍ʬhu-geMɨB s5zlceZ֋hֹ0!F^M;s嗱MS}Ӿ>vCM0߸[^SkjZ6mͭ%} ~ K}n5z׋)n- Z /0X)ύY=`yڥt4um?mzxN<|o֩=KgG"ijX熆VK {~H`uM[=լ@k@ЋiJsnxhV.]afP [շmzفMS7.r Gc pzd?ŋխ~fϓj\eV>\``-@yZWZ|8``]fX 2im"k zv|w$^>巃{ZKW FW;j`jV j-\Ib+V@}w)ڮvzaB=zzk| ZrZհSi_Y>b5+Ъ[ $+*sS\Y.b5+(}N9q7}n}M*Bp1cis&h*j- WYVҪq T{ VM_ֳi3S 4ZԲ5\Zլ@KS˪ֆjV5iMSUM)BXNnVPLͫZ#լ@iUk@^S!By+B+(&Vf jjV|iM)BXZAIjjV $YDahhu{C sO dlV5v2*V5H-3g! zv̜ 1lmVlY6vb[͚m.Њ !lErVPrZ Uf)eW2R&\…Kp)X VN Y!BdfϙE5_X 3jb*V/x[fNw7MXp{w]C pywc1piVPZqoV`ؽZ@}Cݣnw,,\Nxǂkݽ=B8 u)ym[ #T]j|0VZ@CuyEoݵfBmuwp`/GhU(w`fW2f?B?<%7MnW콷B>^y}ZWWEhUYqKUX r) ]F7.3 r :w0(7}8FŹce{RMsifP-z}p+WurV.^>uG@!޶uVw\4u;d+WuG^u;&ʭjjV`^X Ms>CZw$s⯒>V¤Yt|\o1U"КhV^t t4u{ճŋ bup"gQ_VPQVb5+0YAY92y"*}l$z\c Bk'=лݵ|]jxқ\gu7+̃nwқJ.c QDh5SӪVf%jjV^3ʯ\/P.x\]zے:6ֱZ]>v% KVWXp{孮=g7=A{v;\Rur au_jfjYՊլUX ʍw.LIy'$}pqHIB؂|aFvw-_̟W6z_d{qW۶{u캣T˭RVݵ|+Wu']EhհfV&{ZD=n=p` p3oNZ wR&{O>KLL˹48$Z:͠X21𶭅eVZqs17WZqsթEqMN nƜ "an..uvqё YqqzsլXu$;tQɪn6rϧܹ2ш"e(a򏄋'YJ-Ъ3S.~G|0aŪj7&3_P..S&.m$ fœ;{$;|>b5bi󄋝ɫۺӪk%g.VP.vS&Q&R.-o5Ҭo "L!LQIJdU=+/y (Ǫm2y\hmrO\>8{>ua%'\%xı4GtZǕ$d}^?2sz]\ ~=6ai06Lbz5Bdfϙ jjV`B_Պլ P.y tű|4:S\wRsmis cݡryOqKgůܘ׋#k_2k rN`?{~ ^toq3pfE2ZK.29۸x/x^hܱZ@ﺦfꏲrUw!py3Ζl3ŋ"_{{xh m[>m|{{xq\za/szѹky%WuOxZz׹;\[3n [Y ~fd6g+uwrs/~zCܼ{FRKȳÎrhq\)wn <%:7VgY3:e28&oZZʝ+=o z]w'S}Tf#6\v,Û^Ҥ?#mX^p 4KL7Xh56p|knT]səS2?w!L~<^てӊN<]” !3.p5!li3M {n=RØsO!L7|f0&&t|":Tq) m4 $;tQ`OiߎȗG~pqodghQm;N&\Kž{}nLru2˾䜐{C5ZC_/Wn:ŽYEX [:x{ճJ. %Eh ! 
!L&VEQՒrs鄉e[ d'ȹSqIa#\n {S rϾp/ĝ\7'ݝvUņ)3ߧL$ܡW%,k}opsg ]11\ o$\?x޷=ָf Asw-_VޞΥ7nWǙ{uΥ7\ nA}W !,CB3M zVdU+')?\\œs),]])VG/[ic+Ұ.Vud1iwVʥ?/߶{8|}\*OvǬ$,>}[{Zu>Y&6Gha B\|9*޶s.@Кc\o60"Z4q'Z]>v% KVWXp{孮=g7=A{v;\Rur au_jZ?dZ!)1Ъ(Bvi<( YML0*.v<& {{zZÃ!L ١P&a{3)O*岛0!\,%L/)sr  %?.VP>Jߋ١%e򇄋 .N"LPrL#c!0J9$/GLI3Iְ@#Wu/s+_կ^}`ク+cm[݃OvQVWZ̕"BKDh`AhLCZd`г⽔GKЪ(Bԙ~;%_V~SF~sV=V.VD| }5|'(BC6>HPc?iX21𶭅eVZqs17WZqsթEqMN nƜ !,CB3-ZEQ:C_!\>Y[T^NrP {m4ZS(U LR )Ьk{:e?+EEQ! ?񯚭==˟$ezBܳo!L<^;ԐD8f6=]7=xd`W]Y5W\zSъX\NE+Rw.UW]̟WzX|tݎ !,CB3-Ze$ er-aqs (ǚ7躓:Bd?R&._"LHdrup hh-/&>Jsb7Ea rM{~?'L@Y+^&\>IH~n=kg&OR&mXF:ara7}c?D|0玎5>XX+rMC8>rbGs(JЪ(-@M_S._,>+(# ^ڗpu/̟n9U.ІEZ ZXٰ̿4K 1p~z[ZMguw!zsT"!B+CB3Qs{ zqǂkZ?U݅;68[>( w~/*|aᡁmm{޶5Uepꅽd>Eή \=!i-]Xpma̼{H#BKDh`lv IDATAh B+`h=ȥM,, u𦇗4Hj,{>3( Rꍹ?#BKDh`Ah B+``h<л(m9ݵZ86<Zyw{w%ϫբJ _0h][}ZO<_׿"uO Dh&.B+Ahĸ B@1 J:o1Bk-zb+c ;0eW@i+h6{l`e>3 `,m$2szu&@hmm>(Ê"_KJ{O$9% ˾Vv\qB M[{~iX˦ܴS¶eتYuX: (qIg兄.uVx2O˹:a %?u0n߫x BSbU5q29c=VaZDh&nBQذcD֢jk,t^Ch>$Ȋ)jd輎 &" !LqW ̹;;W*|louw2ʫ(P& ΥɟR&\H?ƒȄC|⾸|I_37c=1Z!iB07osaVu(6`ZޢXڳ 6P`$e_U)wg),L.~,er29Prf:񽊗 B+0%=_eZ{VB+WSLIBں (Zp(k)u˄Krj4c´ !LܦVK[UZ՟Wzn_W=ZOQŰ;׵\h]W/5m=aXڋ5-a}:9q9o]}aiWaǖmNPt~ߴi/iiU;n;߰Ն0mi֞7,uiMKkjhVۗ [Ǵߚ'Vik[MK]p?kņlXRRwzİ [fX z~jXlV۰c4,27j+JcAo=ĸ4<r*zNQ9>EiPj>W:sO>w/.\"=CU$;|>b5bi󄋝ɫܪFoFdr >q¥M$ 7%sg.6T=f.(w">]D_3^Zc' R4y%5Q%;FBg1GsU~fAhĸ BkZ m0xN}.}(B޳!?[K'}@+ !Lm=Z7j_ 7omkQz;2lBR!l-gݗ$4\¸!ΰTaYX7lNkQƯ]m@/W/K},t/0lXz4К9.<@Y2᫟rZQU~SpֺOCzy0"U=+#}h,s鷐sq.!hwr.?ǻ=r@ }8~fAhĸ BkZCfQ&o(\[m?{ ϻ:975e B"B+0qZ]WdXꖊNRfX Ʀ6k 4;LK3̼F [V?/lr!}ai'|vҰe~~#y~nH6ϰ۾1r]m_xq}^yuz1l:߰L[妥T%k=lZ}v=sێvaC5`Xڏ Kbګaj8&6:qK3r\werKŘ ?#L@Sq^=5~"=CQ&O$.ʆ.I0y$ [^oa]9œy$+1Bx_on_wOL <~0nutdBQD%\$s) 5}˕K.Ok~^I64;4;$#\'L@˗cI dZ!)1*К8V='6As;C_)\} '~TE jJC̐ VaZDh&n3B(YoԺzn4N9yjZ hs]$^[}IʆC[ޙfv~3FM'n[s5996=0XQ)J=C-ޖxw+Ьxֳ}<_^A>c'k~w& V&R6fkKW6y%=ފRc ^7}P, ٽۨ! dZ!)1*К8ū6 O{eIym{ B"B+0qZ=:za7mduMp;VR]=$#1%p_2!.~(+s^u٦Ѣm2n #. J#}=(K_گ,Q{q' mxx-ھv^, nZ>$vޒ@hMmt|fg`g兄;&~\iXNeɼ^[x~SQs57A<52{O|hYg( :7u2gΜßYӞ=ZOVu޹5Α󹄯hunNI{%+9Zʝ+=o ώpNn2”|AhM$Bkl.{. Ch>Z!iq†Vo{Zt9o3ϘymakL[ecKѼVKhmy[KB=~(F54lP`w*,K*gijaW]2iY_G:LRմJJhkB/kdѕĮL*ݏm:К9.t{6qrMk5X7>W'\FCx( %ead wZit'k ;iZ=Ӌq}.a W_0ybn=#+.]u'|#ə˭JWBkzȌ!LDB+? ӝ+x._l*< | B"B+05_v4OeMk X^$ ;*Jp 񪲾YkG :Xb٦ [۰gBR=B֛ U^h7]%cXi1T}9К[=.>@o kBLѼ^ 7>Wk/H$\;k|ݣcF?mQv1}.J|עg|kJrкd<0n3!bw"ִQɴ@4("N\ kJ~3ic+⺒Ah ZD*0yk ܴS:oiXSx hYWޗd%R#|۔C\ILn;ߴԕ:^krE2KakoI K"ڹQ{gIP[ KghMmd|fg W5N˹ y?>EzM+ZcT_Vup1)+GU[Ln+wݲ&`g"7%, rz? nGooE|{>m:~xC"Q 0)㞯^R5H&*.v ~de⻄B~>sm ´BjuRK}[ŁG{h^%#}IJ–K%VEz{ӺCvi}%YڌX5r[߶cX IV'vR[$l?|3 K;15ϷMz\<)u'2ũ:SůEzM+ZcT_VE2w.#\![헆1(s tfo¶3rrQٷ$yU#Y n9P,yAКVaJ{Jj{I Ъg{W:oAhEFhֲհoh9LK=-f-+Z՟"ugմm Ĥ}Ieq\7=WwOS; ne~0-|Ҙւ퍦^M[I2*+x6 [_R7c\}C|8˔GULEMѼ^cx~SQ~Y([ τuwrz{jǚ}|ܼ{abaqUű}ņ/)< wV¤&o5=rw}7.OV;G)=V?% !LqWIm/$ZEQ:r}ˇj|=B+ !L- Ѭ%dU<ޒznF^Wh^ VR nki %+,֑|}I8EQ߳jؚ(nŮL[=R291Պ=!cf_| F^~i*WݟoǸ4<r4:sButd2y" T2u"\We[)&4/Y~Uoql0-0'1ݔCů>_Q\ʄ#\͓;.y'8qгUZj^2”|ёYXdhUEQ\wRS0GQK IQ.>-RVBZ[ AYKhgeS%NRfݦLiU_ܨӿMRBѥzIFcjꎐ=`jak5-ڣK5ą^[1,ma_dX5 S 7憋uʜEmܹDg87U!Q&fEk({,%?.VP>JsQ_>3|rM.&>Nj9Z dp/+=)t/oE|wIJä9#LRsUAhĸ竤|]ph´BZB(J~ҰQ%W|۔nԾZ5ܴ_c}{ [;X K{q=-)U{?W.//[+~j86TxU (0lX֑1mt|fg l蓔˄&^d^Yr}gUa(_KB`mVV"w03KVӵ1|.:zo}߇P4:^IAhĸ竤 ϋx B+0-"@4FЪ(#*w9S=7tVZhUE17}>T3춒riz[RUdyuzإ_bi?]}je}mo1m hթ%EN" IDATDjURVaNvaiOehmmt|fg`|&~S{\]VTϹS&ZR7UOhsO\V5(1K=;T23'(?~qo&T{ht14he9‡?E}qI|^Iw-ܳo!L<^!xn:SC[e?qWwT2”|4?jBZ[ A%*ȥS "RW6`Ӧ7,e֞6-צnZퟩumj;{de=i궰׹2̷M1lGRikҴ\~=h= qcg԰; [=W=lXڠakEd:m릭3l7a㈊? 
dZگ K{Ѵ?ڷtUEQ׼jZ=:=zIKEQ:z62qK330NpI$;tr%erp3˄)LT+Kȳ ]')F-u o\L|0n勄ÄAr٣=ru@ot(ѱf_߷8=#+T˄' )ͻgT<>Ъ(#?D\KG8N;kF>j4:޵ 4D5ő>_U{/!YU+Q BSbURKj X @k !L- ьZ(ZJB㞯^tլ&´BMVB+ B+0%=_%jYh]Z!i0VVaJ{Jj{˵VfuAhEFh" ZA\dZ!)1*;G X @k !L- Dh@ BSbURۋ:XBZ[ A 8׊ (z.l1tAhĸ竤W[o`jVZVaZDhmh!&B+h%2”|ͻVfAhEFh"V" !LqWIm|G`jVB+0-"@4FVaJ{Jj{yJ1ak!Dhmh!&B+h%2”|ʪl6P - DhDB㞯^CJDjVe V0 BSbUR HAIfBh%VBSa V0 BSbUR׆kf!&Bk D#a4Z@+Ahĸ竤g- !Dhmh!&Bkyso{ik_j?wuV`^u#S?7"|_i B+0%=_%rm]YqDd)SP - DhmV{a<`H/-(D¤Y @؊q3”|ͻqWeXB+0-"@4FqZ#eZh:q3”|j X @ !L- DhmCc ^&ZP6.Ve_g/~nB+&gURۋ:\BZ[ A8VkeKB+&gUR۫w>'bU+ B+0-"@4F~д?K{ִri48ke BroqR&(r.~n B+&gURkt^\BZ[ A9Bk=`y˜>ٰL[}İc6|.ھd=ִ?zĴM[jZ"YzGp_AR ,w+6\fR˴#=e6RoSaWkӆ=eZڪN{9ܴS^Mr.~se^->Zu~ߴi/iiU;|VWct7,uiMKk(c@Ьo "L!Lw/.\"=CUz7q!  ;OS&'\$\L^FרBsG_Ck@(Bsep4)ŪsSs38Ӭ3Y&բqx$3 !LqWImWj@hEFhNК KoкlwaբiG-uIVcܶ [0mu},N[Ckqj䵅\1l|S?|3MKP|ІKq|H#96,@C9eC#L>Z&OP&cNo Hrr1b(8𙄋 U!ʝHR=Z$Z9_.חy3zvxh0cZ$q~fZ!)1*2FZpŪVZVaZDhmh!D-7/v fZ5DB\fi{FXcۃsN7,*|_IXVRiiY`,i;0m]i { WX\>Vu;L;kcO Ae3m۰k'?/:K[V5ldžn3mdX0Lr*}3,8iX]Km2V??YGrc~n [gm/Q١s0 7P.9" WPR.^&LG&Ln U' sO\yaG-TJx\"=CQ&OBI] ] eS߸ƅ0ܐms摬b)r_票踌BkwxlQ)'Âac\8Mhm{Ahĸ竤gV=Wj@hEFhN:öw浙va_2? E .nӴEyVjOV2y2MK[Ut :մWyBS̷}6yҹwlmN|+J.DN(A WxO)w(w斄)3T;Il6:b0b5ʅ5vL>d.w9nyOl+7f;k\hV cԳo踌l#Z U:S Z+cZ$u~fZ!)1*UIU´B Z7{|:+ n\wɆe/s K/1z)u>85q}Ʉz_.]|,pҸϻ3Zq =I BSbUR۫4&\BZ[ A9Ba_Tk]W#ymiM[jZ !ЪUWQhگ,=fB3t^}FCwgP\Rx!j I&ܹ"<׆eBkWQŪ*lzCkq$3 !LqWIm/'Yq^լ^BZ[ A9Bk`lSنmX3!Q%Vw7#޼eBXYZmus hҰmܲo;V}B$̷;k{a]F;dy6aݔgBK{V W wJV*=7%+cdex9Z.uu\.Tӿ{\BVSQuz[e\BkwXcqBno Zi"=I BSbUR۫`< !L- ќ\TEYfi6lⰧn3ڂN[? ^j/al;mtgl8IhͷM;jm盖ҰTk [ݦ(@=CL<1( t6In+8BՍwģxoc\+x;Rw@d#ĸ˕ `r5KQƵq) _F|܆Ckq#3 !LqWImV B+0-"@4FsVsR >ǰZM[@ F_AvlC=_u_:j(J@TӺCvi}[-mF 9Y2&o(bYѪ ]sgR&OT r$;tQ EߤǥG~pqodihi^Ͻ̼邏)^%!c`-6:om̬`S=_?i Rzx%ԣhED&zuPuL 3<U,WժE5%MWh)/JOK^ؔ$<ܽ㼒O HEkFKxNL׽ڟ}X;]l/(E}DO׽dnh|کd dqh7YmϹ̵lvܬ8Wcqhgm{xꄼ%eJgy2ư_J@F+oV/7X l*Z `c< 4kAeqG+]We 1XYc$p^5u]TܝW}3HEkʰoN1/m̭`\fd1NVd@z$b_UҊ+yq#4OS P"A E+]GzZ:١׻e߿];٣hcCʬ43D.EBPC^]gkYZyQ),1K{l׮8;,/%ufn&]lٮix.Pugk{Z١KÔ6Ĉ0Xݿy$7 Vb2z`u׫` w!VQeJw!Đ1Z 6.h5&38`u1\z`cF7Q_4V1Roh20ܼKWB0Z &{mQ{:l 6a5o0ֻ5' V a:%(w~Q""} |$EZkS!9}+P"H5J#A%VB0u=27kԤhu {׳D;+Z sG b!EBP-n[}Dmgu]=o']\|!X 6f5gI*=^{\he~qLsS/6ؘ#E|ͯl=V -ZK9މ~%(Z.[/xV&5iւTi2 ׫@.?1֪*|w9ٴv7loؘ:QOP"DAQ8'>ο SRr%6.tch H]ZtMY*? 
@S֠xC7uu,vMC뜙P(2F񾅰y>f?-p6;-cvh_h7'}]{6;?Z-&ǰƦ,StVB0ٮiq2vUC{l}eq>0e+ EI{~lvh/]l7۵}={Њcf9}-6Cwl]5۵8t,vAC9 LkƔF;hcnݽm'40= bZ:r5B!~QP|Nhe1wVBPh7/Z`1 6/+s`c6zBQWe 1} Vf>f1W6 6%9h|<}U%W1ޗw72s ܯVfVf%\0{s1l2`c6]ilʢ` Ba2Q)i1FshenWF+iÍ_{cq|N6!ŽωB!+qqԳ{ ׫@W>O3Fxmڵْ9fP"DAQ8gܻc}`qFYҢ0UVEkhEՒh1^s#'K~Vbch_|hz26Tv-P,PʕfhFM-?+|/_Fc6͛S5}׮E~f!ڳQ@iiи}1Pz_,ZK.M㧍ةch衤쭤s~3)S ۊsͶ54h δܞ>_;h Yɿ.rO‚^ɗ#š}Qfn ) 6F/Vif uG)LJt24qlLj!ԳԤY Οʖ-G[K^!} }'Uzqz9jUNvRvx˖%S>F`Y1@U+U=#5km|ԬAIiֽVRƹsr ТpӼφuЮ9>ZN-sA\Q}kgQ!Pk69SոWTN_ϢRJsӫ z}7O/8UzwWQfzHcr}vҔ5(ͷص*ZDlw=YL-Mk񦍗lf;V-\!PVRv I^?5hԄ>|[AFU5ȇEkκ^Vނb˵%Ӄ*Uzs9JQ 3_?o,1&>.9R)dYGӧy6W3)&:گtʐ~'}׈uk:GD*yaV_H﷤ŦvDh*X5GݧR@*kj F᪗QZ{H^!؈} ľNHY9y7l>-׮ڦt|ߏҖ-۷n:jMS&{m }޶r֭q}-[\9e3}|wN]ayq4$1v,Iײ2ZV&}k'm}׽;EEEٹ#Gx͹Yxd:}+p0GG7tpX뵥Jիx%Ezq]߅p<r0ܲbcbXu[ѳ-[6U *ZW[fzU*VI8wt_l ~8W-F/O}zhS}ndQ-LFx@ҥ^ي5JpKRO *Й+ius΂]bkgIs;=h ~UYb5kͲhk(`^0k$R< |d 4ژ.Y ւ7D}sUk W /6oNj;ymf5/ܤ z͇)6.|_^/LN7ǟhC5EqqT\yH}ziԄ~Py(y{mׁm@WXR=D;PKlK<օ5vh|S=|G7y&Rc;E>RoR[Q(6.bbcrmHg-[]Sv|oľ^jb]@.Z=WRi޼/8K|GQ6mX۴jįs8-]+Wew\l,.'u  T&o\Ԥ^=6 ԡ*v<֗ڲWBݾ:JŊۓr^<|nd;;|ٲT/Dײ2}ns-+:?֫ε%;[@+̽L=QVtD(1u*{+wKj5cbcÂ-FI}ėXQ?bX%׫@•E*3{5^Lt4֧{z>=5{Eݯct谳ZwsZW_<}J/v',+vxh(v;7XN2L}Bӫk8۴kbtyⷹs8͠jVK[Th (ҫ/Neo63ڷmk.u(ZV+VRQZ,\2-Z7lsщ?oe=QOx?IH4zwavi-ߗKR/fĸw7ŴЯgIURR7hXMPи3C~`/R|JW"E ^kfOPu"uEz葚5i歹y.pΟ1h2WbmxNy)V.'=uQmۢEHUN:m* ~;\M)_>8Ux11{cGSzEkFCOIĽqF։V/Fuo{u(ZB Vf/>Z(\2-Zw(+beg9C1}EEEQMSQf-j5K&{?۩Q4cRrN~3}mȣѓTNJ"ۮlɂƫT E+l=RvВO7Ӡc9Wib=ތKohS9;~&ܛO㧘rY-SVXQhen̻HOOr/G붓RYg+Bv`/<Wڟ0wrfVQz1O~5Ǘn-kN<9.׫+hVz+E"I{5**$oY{͠iÇ{&v.YL?>˦MezsgXE{|\շEkFwgO^u޿(+ZAߢկYd֒zݮ]άm\>^F*WQ۟W:~fluh}w_y+b_?WyWRwg MYc3گj]t qB;'y^MxyHUz zu"ծOf@ϱ =X?N>֯ڇwNh0 <~+7z[#^3jH؜d(Z95Z[b5+ ]O᪗i:8߯}1{=d[xXO?׾0{~鵚p+5Uo6ϴ@_\Svאmݮ{BW}9URk)n}[67x-TIoۮBJtTAȍ%؟"*P|#uz#'˫{\A%QYcXFO%9ڷiޜOLwmu~5džX?a|h0=ݳr_uLS@ϱ>ϟo't*LAۣh (;gXtץflfN:(iygԬE3>:w1h ~%.X ,UH᪗azy֬}Mkۑ㧱+\EhNJ}+9rkÖü;PڶUg|3˺{IY~6.>|>{5^Ll,8mP>ŮmW6k)>-w\7| Ä} Jzh*UHdn3v _y֯O*W8qEEk˴]TA4Cрb&wss5/Q3eChz~x+> عx"3wXAs@Q8eȨ}^k'9z}pZ1h ~+Z(X |dU+U/â8gEKWvi׶QQQ=#O4}k 5u~_q `Yr{܋72O44pzZF >b׹kA:sU,6q5s/s4|?<WڟX/S[SiEns:y񏐄r*۹mْ׾[5mBQ#t; 7vaC+ׇJ㏉z~LSHB2uAs@QkgC5X'&gkDZ/}5NIwgO(M7ϮQ?V VmQYK^"H ck6;vyCokfvlǰVL6wSl]|a/Ob3H~=pˬh5碦[җk{ϒPPkޢ+U6FR3|}X6~{d7ehؘ}V枻w9^ϗσz={]߿r MttթY7lH-7fsP-ZTn/ y"7O䗨EQd妕z}SŎ÷RFxb"|oRf:ʀ|3sK/mO 6 AHJ1pՇpzW?=4-f(v=WxcLQvzz}4iւ)>_[s ׯisʌ+7 ݔ9/Y[6/uykעS4cR̻(qk`/'1#*P ̿@N.ZLժ|9o_zoէEޚUh 㻿0w6 HAW/,WV[VX/|fR{P)Zt{UI9֊:gʗ-oLQh (\ZknӨ %kgoh%>nODkתUfEd1|)mx|cIec>=)_u_ҸGZx&.>>^C؞R\|<絺p gԢZ 4 ׯZ|dbb`@Ȝ).A$bS0dL) USםehzgԲ3o{uGA#8_ŕ;'$q##O5#Sgi3p}/K?bGdBʢV^.UX#G|c^yvmCh ˙{I׫׿ >5(rk^|ڻn֠kL.%)Z=1Ԋֆu-w%GbEkF"7-y4]Ll p[FEEQgZӜ%s̕Ӣ;~1۵+LI& CS 4F?R9H`sgʭ&**<<_rǯ=/w؎_5?&YRBKҢ6z+]쾣ϛ?ú/K?bGU? lHYn-'R7qc^S=֣uم[r|B2u֘I]zMWX/K11}wabvߪOysϲ WWmGЯ>?+(!O=h?X`,r,bt4<3KϦ[gZHe,"2F 7z1X ? 
>2eһ9c/^CgR?xlbGb_QkÂ]k7|׊Tǡ~ʖ+o%J7 7i'VJ=j٩}nC7-6u]K},R10y,6,4!r[Ԕ)1@MOqw,X Ch\j,Dt21֥fGǗQԭu=Ѻv+a͇ٷh?EeϑdY1MIqq^?7jܜ>|; ry3b_?HUr=tY>_%smk@BmQ~sr|k(G~HD-3iL~#1;?۶_tةTRm֮=^tymcRy樑Z7{{jE7tbn ?ibm[lY-Bv]xWF҆?hkZ?yl*|`L)l1S6Z!x c1Y(X Q>#,X>5nnjK}nF%OHZa8gqP=w5gyWx>_V.ZoȧYcF{}tDϣ?0wvqݞXPФA::5kfRߥ}{XgVWs^|, mo݊+vcl.|ƿ9M-FNծQ= ԳK^WTYZ&걶س'iX#tsgims֭-vM)^Y}ng;yܯwhk[>x|tz걻nٸqgo2zt@׳z~{ҥ#=.Z"k>J_ۋn ] ;'QJ^=wodQQQs6[~ƽ; |m6HТ߅+V th7SrR X 6"n+U#?Y(ZEci!gmf}Ll,+Yث.7l,h} IDATOeeϑ>_ɬTF_\sHH5O=R|?(ZQ@`Yq޸o=^.y=0ql_) ѻWyvVN{neMy ^sU:ߏ;>>KS !BTbEj\gJq|y)Vfm|VlT:q=}rz~2m-[JcO˕cmӧ۫Chqlܘ>5͠ CT72W1Q#jT{孼\^ kRJQR.__l9/v.[/=} %0}yE=4-XҎO64ֹ??wx9 σz @hUkVHu~X)Z:OM3 Igҏ :w'jբs; ťRt|&ڷSE띓i^V擘hZe)xsf{|ӯ{w(ZHIUXu2Y2=}P+6Cm>X 21ӡ}h%`eR|.xjjRjV BzYibەy:3ݮ뉬mZyȲ0gݷMYzchcmSCt5^_-G]c3 Zߺ5acBj<)Η`Ċ׫@1)24]r\ʤOyƲ|^{gomnqiZ~Hߞ+F Srei1E5z^cEGG0.gk%Sx\i\.d;' 6H=^;'SUOoӆNmM{80WԮMf*1h Hyo<)c~ϴpkTӗOy.ԊPO$DZA&,@ 6^nd2X:D@YZ҂Ek@ٽs3{B_s_C갶ܵ\>ҥ˰}u*eRV7Ŏw\*[R wpgŨ<Ǿc:s6n6q( σXz 5::THO4iBzN-3y'i{ g痨NG1TjUSd'fsS ).6bcjըOO ïGhe&);uzjQҥTRT:>j>Xl֌4]^!^~8wD>tM4:?ۖժEʕ(j+:&,WrV^.m}N>J5J1T|yjXp^-.߸iaBTJXzfMѱ2dgme˚.K0׬C[Q݇reP|\}aj 2d0ڒE/kj_;h YɿvrEkb Wj R<`v8wI=?@2,S̵TŠnRjV Bzk4[5} x5|*ve[qzj^%zyG^UZҍɻiǞc+LMoSZ{RSl9:rveS}[ʦ)^+g %k]SNJ6m_ h(<}N\2}ES-x*_}Cr<)Η`Ĉ׫@O3WIVA`EkFrLg^ϯYqyrά}v5R㧍+̕Ӓ@,\d\j `c2}+լb(Z:IF~H}sf]߳bbhdBoŕ;ǚxuu|xR/(ZQ@`hED.AmU+sWh>\!kqtkP".Z qX 21{f/XBR*EmrŔz\Ek\vk(@-垻x]xLsrvxHGkLjqA?ǖ+_1+62^(Z 0P""h HiҼ1S'B }3^ VZȫpŪV CV + x_肵EkPmhvpjڼ%U^bbbBJT~#йk~-ژ)shiԦ TN=*[S:3ӈqν)NO=G~KXR=d[?h9h G3izTT|7^[Qb WJ=^0(Z 0P""h Aa vZ^U fv9sZ+@ l)ܷ e6J=7 VZH/Aъ Oľ^j~@Bъ \5J#AE%Mє5Teqh^Ѫ/hcp)ns0pW fZHADU8%(ZC4BDX.Zœ>'@ mvueMADU'uAEkF V~f;G\j}%`c2}:9MADU'uAEkF V~fe}`wR  V8Wj]n) Q""} P""h;b5RFV~LY#q:ء#heuK 7=VAdW;R_@z(ZKP@i hGъ%pӬJ=7oz$b_(Z @P""hVA ?I V[L}MAD(VA!P!",(ZC he|FsGъ L"JE+ r (hHI=7oz$b_(Z @P""h AaA V$"筃zn(ZIľ^Q@hED.A ‚?I VEƫM7=VAdWz (ZKP@i hE+D9y`sGъ L"JE+ r (hHb2sͥ7=VAdWz (ZKP@i hE+D9UlI=7oz$b_(Z @P""h AaA V$,gњ|sGъ L"JE+ r (hHbp?rKMADE+VA!P!",(ZC `s\ѺR ›E+ 2+=V%(ZC4BDXP"Ɯ,ZSR ›E+ 2+=V%(ZC4BDXP"ʜu&IMADE+VA!P!",(ZC hc9֔KOI=7oz$b_(Z @P""h AaA V$3Z/=- Q""}ңhAъ \5J#AE+(Z!l\g^n+ Q""}ңhAъ \5J#AE+(Z!Q['zn(ZIľ^Q@hED.A ‚?I603ZvR ›E+ 2+=V%+ZŎԅ ƒ?I V筃 zn(ZI WbG0%(ZC4BDXP">K/J=7oz$z (ZKdS E+7(ZC hup:zsGъ LG1 2AEkF VPB$1ؘ,uzsGъ LG 2(4*%+SAW~D*ua hE+DUL'MADd¥QmsiTTaE 0נh0BDxP",ZS :K=7oz$z .FEgfϔHA3L(Z.hHb2{9ol-xY@xӣhED&ѣhp$uJ޾I~>Aϙ{hO V #AE+(Z! 
΢uӥ.R ›E+ 2E+ȈKҨ(-ϓTA)̟rn4*rU뤾^JE+7(ZC huzsGъ LG 2Ҩ\}6g Ha̞I.\Wi=zZ) "ߠhE+DI,ZnR ›E+ 2E+LFRNK\ A"7˥3Saz;=I\dP"|?I 6VsGъ LG 2Qsj]̽-D^~ڗA9ҨȥVLOGIhEE+(Z!Lgњ\sGъ LG 2ԭK<Ҩ( .DN(J鉉.JE+7(ZC hsusGъ LG 2T+ʿ]0o*y HtdhriTԨP+}+0$hEE+(Z!l]+ZS.%H=7oz$z ]՞HT1j& A'W6Txqj2:R_B VoP",ZmnsGъ LG aęZ_X e鷃ْ4/짋Sз.*S*.$hEE+(Z!mv[0=7=VAd=VCjeZٽb$'sW*"Eznqe;F5ѥQmsU8VQFRO:J'諏ҵ-ْS"v mL_-_B'nepT%FyթQnq3ʶX 2KB!MQHE+Dik8-"Yf\=^P+95TZuRѾ}ȸtb?YN׶l_3%/$kV&]۲d9ȸѴ?*{WU1NrS8=*U=abe+"V$F{ R /JqsI=stNZyȥQUDFStqbF7]A/3>H7te.._BLggK\R.ŬrR"k5+V@C `-X˹$B oPnR Jƕ+1KR+ʽ.SbdDGƍӧҹs돗ѥ軝=\ɋ8DX.o}s;]ڸh)7OJG)k@^2U?Ϲ>ʕɑԽԟC(jVj"ʬhs˔Eܷg' p\I]8 ->V5ܥVtikU/jo|K}APʛ4Nߥ΢ K7Vec+N̖CdӏtcVl@߬ZA,΢w)o8:R3RͥV}Ҩ45.rfZ5ܩVvuZV~\͊UPB$1ژN)&B dtR6Cݣ+1+Q5VNqjK\6ZR+/-lo;3NҙY3܂ta`J7oi=t`Eb8ct`otэ[JJ2_C_.,-Kg,&:aJޙ@GƎoѾ.\j%ZyԥQm{n)'U\ 2=ftRVHY"V${3Z K=7syp$uJS(t&Y.SZéVrj_Ԫ~q72OYP:4JGGǍSԷa*91ӹ'U+`l@W7m[;fzEϤ[r%/??z~;MG?q4~m ]ݜB`v tqb:=:pm3g Tʟ6MGGƍCtt`<2O{kr8UtAZ9T'L{n({R+sjnF|kyd&Τ/ٳYVJ}^Z͊U̾ IDATPB$1Z!2R,zndT\S gǝ=HLhT+:=]~DՐ jKz'CR+9.rSR:JKҨ4SNKFYԨwiT$/>5[w,;7)FuĩQe4JcPm{L.rK44w\jޙrPt]3;{ ֿ"V${&3Z R ɩ?U_TqY]%M۝{ZZ hHbM+R fH=7PZz`uT(UzgUNPB$1 >~1=Se}`g2ĵ)}te׳l|X aE+D1ŴYD㹚`-4oܥ- Wjp"LŴ%b-y+s)\ZTZȟq-\V$+_Lϖzn|R/WC?J=7C7OZȔ5筃Z޷X*,\V${ 3Z J=7P(b PB$1Z &s?=O5u>pS@kQT,H(Z!m>V-zn|laRsEC[,@DB `e&XP$ E+DV}ոznUǭ,H(Z!he>zn|hI҆?ȹծ- "hHbqb`s˴Fu?r' h`C `-Vb׻jXzC@V5n E+D`4 D$ E+Dc 33ZI=7L>ps0fUuCg@DB `cFrbK=7P$fZTkq%8޽QoPr~{v]UY=g(W\ Hd+ $SEz 9㠉1U5dH0$2 C |3=鮩tfz^][՝NmƒVdkjAj=Z?{mkcp!CDkjAJx,P.#^1:KVL7z&?;(A+ 51&K+"_{m@-hqֻ^Z房z^)f]/Ž6.{mX{m:f꺿Ɉ% Ԃn_xOk\X)FkDŸԂn Z@d%|BuyԻ q ){m Û#T`!ht#zk\Fq PZ;߉^P %݊@X!q Pǚ_-F,v G`kjA+*@Z065"T`i;o?>Q{m@-hE?LLk\F??C~k1V8eɚ(4huq Χ4PVW?z2X;mEk3nx}=Zg^P OFܣ5P.V@,Z6!bt@kjAe(aá'^m[mGOǽ6,o@?{mYq?BF,vu_ycOq =#Ѻ*PuN16Cq |x@{GDGkWk1i|{h2b cW,bSq L;5q rbڽ*X0,Zp^wBӊg"Ѻ6P&Fpcpˈq =#>qV'P.V@,uX04s^P qIĨE/P.ce_puNouQ{m@-4';D[k\MQ~gkԱZF X0D}'ԫq rba:4TpF,v ~jC](aqcpn?D|'f^P z&{n{miQcƿ{ ,jk|ww^P Mh?Lo{m V8t/oo^P M("J½lL;^#Fo{m@-hvGƒVKk\X.K涆Zխq -}aDpk\ {m:F ,t%|tU%sh-{mXΈvǽ6@S>bL&#L;iAq ##>q rbahei'ݦ|*btڀZl{mK[п,q PZو5q -{n{mEΈPKpi8_Ľ6 -Gk\^)NC2;5`dqOĽ6ʥ_vŽ6@3݆wEto1b c _GڀZЬθ@ZX-We)/5\ZMWR%Ueϭ+ ,~[~~EJ Ogu@=Ka4^[=uvxЪ}U/J;jyfEP s0m3[5T;U<u\1ˆ%SyiWj:0޾ï1v@ ?p=QUđ`7yTEw*XPҮjoNU/ ?>nVvY'FڊwߣWu(j"ѦqQ(7E|'nx~3ul@@W榌QsWv3m;]_[GlY"h>&SevԖQ˴n6D"*>S #`iWtb2ljIj}?Lg -tbY '4q PZL]{.t;"vUo58ۘ勨.k#ˍ*%t7Ľ6@[wx)XPWj- K;Y h=VkE,!iQelqE3 i 4hpTF,+t}2ڙq#4^WĊnV䠭~_`q PZL'q nVdaX=縿#X7<4Pת$h]^QjJ7+` hlXPLWR͐t-q0Zt LLBGcw͊)]t&V@,ת KVZfTQͮVY``-A+(5\:4O}fj [l_j/u˧0hVhk1m@xˆ%u-"&CQ[>vf ktJ7+`2"hĢk`D%utՙv3>Rޫ53J7+` 12Cǽ6@#h0J7+̸]t V@,6CG; #ԽZfEnVS`Kl{m:F `K@EA+ݬj0v#{$X3CA#L iW5Zcs^30QޫnVB ꪌX0-g[AWz˻uv^w;:仲epg|&RYwƗsnjNM~-ukZswsnY7,ιɟT&~kr^͹_欗ښu;r^򱜗}MvdެyɁz.dq sgd~97;%˺Y/5%73f䚬YKevwM-˹=y{Zі5&sSs/gTÝN/Ѭ{:仺1tz}W5UJ7+`2^@]5.ǝ$/ȶXsgʹo d5>fdv(L> 52M>^2;ܬrn^jQKݔsg3m3>$/xܹy%VYau8Cq Pǚ9|tˆ%F]Fױgmmyˇ97upơД ;]%|<޿IxABzB_<ߑ]-{~.~@>+Vʣ-yrx>9EynwÝ塎Cߗ٥ʾ]&Ɛ=ۛd~Rl+|w*6̒{Vv*AmM~zɍY7+&ozn*і=ko筜VSu0QXNㅡʈ%JUe7}]gez3.ɵ͸>nzI+%ɹIQIX׹jwS[ʞ nw)~(~.9"Wgrgpȹ^(_><9}$Ф*ˇ7c9سZ>o+@W|z]P;n=;HuܻAvZsgf\f\vٻ&lKTȪn3A{>e3mQA3{|Sv]ٶɬ)%RyߕہW_Ol#"QvoC[~)y~ycmߒ{W6륺snWΚ% tKK=ZŦc1[^ 9{._M9/{ Ýڕڽ[oH=rQTX|a9 ynٷvٽ[ݰk8m}hK8P:ޛmMSB EzzQhG2b @]93P^Ɡ yW|ny?}yԷ QT5Qߖ_?Ñ޲ݯCVhK6d]q9X->1:xck1V˻u3.hK~+&z: yZ)o}n]-WȭFAEɺ{u{j}n&W׆Nιɵ9/us.W0}bi #L9u7fy%9/h})7ʧ~N=kY{EQgO߅>ҡ-||w!Gu׭9/-?0}"Fo{m:ԏv*0%]?˺d{K,~:_}"7erw<9=0Nle{2ͲsOɓY/HꞵW'khG%{m:F `*ʮK1dԦ`WݓM/wb(j:ձ.O>|C=^7dcv]qA+  ۔vF,t:G^򁬗|t])"<~|ñME/~XGvoEv]Y|oחrn^#qn/l{m:F `z}M}-rr&9q569E7Ԝ5gS[w^ 쨌XGyԳ`f&E'_:%CevP>R7g&D BW8U 5/5;]~ dO"|C"W,;_u2d-/륮8;!Ԣ[ݟl^Lr^ˆ+N!jW'ɃCvr(ppg|&#A Ekx`Wa3Fn9/)Cc(:suVWlmul~"Q/3|9/HK=>h=(*zV=>1>]B\*cԹ>WY7DK'by(^E%][L^X]ΆW弔R7cw(::NkԱtЎVGᗿjMC&Z9/%n?L>M+ hڸcs^J'_O﹫0>A+  
OvF,Eվo}..wV@V@,^å]Kj&-eGB>Om!t:)y\۵8;A+ J%-tt%~u@B8Ut.[Ⱦ]K`C"*ղoĖ9%6̒]-cާEe?{m:pʈ%53^Z\xpQLkdώ?&i ٥gٹQ?8ܹL;-V@V@,ҎoXP3u<ܹLvo_;^PU:v{[B;V ս}<ܹF4e3BG⁸cpAk{Wu v#땢&V7tiF V{WMx(X,tBG*#L`;Ɂo)E0vX `ST{Vˣ~f$P"L弔s徝MG9A+ n員-u,)3B;Z_9/uFKQ?#Skì1âNXުЙ SR V׆YRϕw)32xIV(Ow?{m:F pXGUrE}>ؒil)ɞ dv<ܹ\#tU\\.KeώRϓ]gYse;@W]%O˱\6.VhX4; Ueh;=z&*6cuŊyoٟ]:rՓSx#oגxݨ{N0uc.WGG+^CKocM%Al߮R-#6sp^h0;e$T#*L{Vˁ-xE#Uϕ}Si vI=ZZ0]5ʈ%5Sv- [K']'V+vw<ܹlT Ė9Eupo`zU/g ;ׅ{W%c[ Pw.yVٟ]:r 0_::c͞ryhG_5SLѝrw0k{[bytB`;Uh+=i+LV-լTϕv#{qp:ػ*癮-#I׆Y#瀠E+C;Zmaǽ6@#h⠵нVֵܱq۵dJ8;g UiP*t½L*\FWt֪o}}7V@V@,LG"tt2b @ҰeqI'Ɂ-1`p dU^>[}܇sղoEˈ/F^5ʕv"HzWQ u*?2<(QS.{Mdlyn;mƹvP'l]-##ǭZ"{VQUy }ݵaj>c|eSs/,fkn5_b|7 o+-b&[$ tW|ͯ[ЎVKŚH$Zb# K>ƉnA*`uf:/h-c5݂>V)kwHZ|_`o98)jbuH꾝fdڹ`uwՄK Tq?)f5K<@xp:al"hLY͞zUx8H$F􌵽iW9:3GS}-#hřR5XxrEiJ>74)Fأ~ TE?S9/%E\o)jQN TnBƫjFSvmiG O]u[U&CUuM?ܤy3]z'ypRϕ]fEtRV`j6̒}mҎ۴JsiWLƴm*=MW}t]iWYv,%ٝrۆ9bHlQKrlњYoy1]Go7,hW';\KtV9ػJtȃ[2m-E<ٳc.;tR Ýevٱ@yk>}>_}j#[c9݈E Ln-a˱obwq5J$ hFwn l,f?2Q[`[p'!2:[|QȆnun#ln-q]L4ۿO-wiX[+ e9=#>g%~Yf4lQ756~qfKtߣ[nOp2EA+` kvk"$:6=8Vg/|5dcㅃiWy/~bc3$hVOqGo|mQ׌{|weHh}OMG{^N'hW󏩫T膫Bب{+ɾ]Kdv=b{VOcN;| ]KF>ou:Աp\m&JѪiwF644wLSw`0o_Q׮7V5u;_5|7 CfGOyK$ f_'NXv-n 򥈀Du/Tʕ(zd!;^vG}/-rwk*{(c}Zٙrߛuxo+8/ޣu"_5? [ǪJؾ]h[vGlx`v`oHF@[_U{[B:^Gh+v-. R쒐{UgCBY7 ͢{OcE_80^WmCa`+&5;O1Z\j*xKFF}_4,mHEY]r_5f?[x-1H * ZӮBQUPzh4m0d[iH}~'stԻӮɡOyQN͞rytUtܔv;ϗ>a~FkU6Qg:ʆ`XN㨭~th:BQC&4hug։{u_?BԷkHڟ]::N ]e'9,sP{8+z95?#_ع.봸*42gTzs^GґV h@ʳtKlCQlqhI0&Gvn MĦ/LLT0~nEk;n|RDZs;-Aiլ״pe_WzMWY龔礽cn^(7BU#W}2K^ߦ|*J0'u{7!A66{Uipwëq^1~:ZS[!0GwUkìm=9UɊكV= 3Y{VS>Wi`?g']fVzn++Vu5&(/k::D;!YӭPȷUUZZQ3vЪ K;͆xYئ|P`t&B8Ut.ڷkY{HDQT5سZZ2j v׆Yre_V3lo5[0-k?Ue?;ilL$ƒIFnZLTH:^w`궿l녵)8OY{VN=ouLڐgDV_ ]%mPja ׏k{.rwL`]Ҟ:U>llQ תo61uxA6| (KMGɏw l2:"mԹ Zo0O?rX Z'cj/h-c~h!XQ vT궐Qa-^#7Lt[,W8ĦAO"H+ߩ[3鶘gX6_'N-hy5uL-$Q Du;=x5^rtѴEh3ECg]a_k:JsߙNӮҖv#gADW󏩣o;CYxRvxCQTuOm X-TpL Tg u<Q{n@LeuiyQ* <{iZ)Nkhu>Oe]W- %Ũk~7M$hy3 yf/k[{^[DVB[U([Vcm*.<7 ׳\? 
ݖ4nmMP466.*' Ӯ2*P40=7!-}fƿ)z7t+Ҏt`u * Zq^yu_"Qt`  ^ _4G:_'EMoTݷӌ U;_=~ ZlX ݢQtK0`.fY»]Qݴ-6u* &on} -^.Zov5ŝ!66W6ݿo+74)Fأ~ TE?S9/%E\o)jQN Tn_ChhjP% F]C+%!c]`تGoم+lTvc~,U%+y|5fK˺(hmslQrodćF_ޯD"-"^Ь|5f/lA Zu[mP6mw =^Yz,/ vWz4^h:>xζD"2qVQCMOz!dzfWyoDPUH;t`mX P֡.iԫ=rQtU6 [{SIZH$iG9.tת2qD߿jLm ZO3;h+63, a鄥 05x/`um%E\yp9cc Z0V8z,ך-Ь-t[ Z{\M֠n/6o 2Ioix0d]O6*+5˿Bt_zO9r:*@!薸Jt_y]܇ae]WtOvm]g]ϕ˃]-rw5,=4hM$ ^Dp\t4:t.}:XO5DZ}%N8F Ҟ:;"q"qZh}InȂ+ҮrAlxCU;xڴ=u^:ߚvɴh:jXA_\66,4LSZJ֪9d Z'Uc;hBب{+ɾ]Kdv=b{V@osT\Z2y5xU@uvϻ5uC\U[";^pH$Mt[}vh+zV 70LkסՐSZJ֧ߠY~ڦL~FcE_=oFjx`Io-5ѠUJ~en  *ʃm:ZȝU~dQMʀ;MG?ꞧD"alޚv>vCiWi[VB"H4-!o[p;]Q/lU.:;^*W]禫t}vyޖP80Y-uT>WZ\N1@ HyPR4?84Q-&+ㅇi\^4%6NʳlRf'uK jx~5_ h N'h-0,=n{t[ɪtSӭZQ4[\::?{m:vЎVG嗿jJEψ9ߏ{m:F EUZ(Xr}`F,V*-V@_ lŽ6@K sB;Z]_V*-V@V@,ZāJE7Ҹc~5tt_5CJQTE Yw7A+8RUibvBG; # A+EQA+ n=|tgk1U@RUihЎV;ݸc@JQTE B EU=tt0b)HU|^=jnqZr ^NZu;bS ɴ.~NMT0 :Ҏrs $]Ux9Ak5L6EUZʡ[@yRK,`x ZGQຕ9krj(|P5~3bltU/\>?^Nt?dCJQTE `־Ҏq}W_wMtגB)ڟ E+!$PVaqp9o(N[O.4Mh4M+8[J '[ڏ/N^ivwSn ';?ߦ'{B"O8Y %?Io(Py;O8.^c; E+!$P2h(A @VBHhd-NkupP6拸[G\ g(Z !ن7(,ZS^(Z@J6L(ZJu[Bk`_lC Ng֔W6݅V3lC ȄE+(Z !ن7YnJzT `c h_9CJ6L#i~ EM>Xq/MsӯLE+ VThͿh< 2tP᚟uW̔~t-Yhd ':8[6]~˭uw#dȐ!YsʗNYYs9Oe}WfaVhG(ZxB,'ZCɿ^(Z@њ?l%Γ7ˋ9Oȼ%7^?Z;nǜSe>;{ iOGɭkwRw9eg!MWohG(ZP-b @PGvl,}nۖZ":.V{=g]Lc9[{]d׺>l༳:}wR#Lԃ['R6",'Z#@PG} ȣ>RֿJmt:cxu˻Ӟ-񘏷/N^_4OkhG(ZPhEk~EՕ/СC;Gsd|}ݞ{5?ӹ-e.>OR#Lk­7trxs^hG(ZxC[NRU6QP5?rT@5.;뛧w:~'o}qE+ V>- Ek~7\٩Gٹ&w+ISN~1E+ O8^-'Z#.3v+?_z᜜k^\~뜗wzo~^hG(ZPhEk~~;vL`~=דѭh_Uw[7z<hd ':8C6[nl@POfO[iqrťߗ'Cbgxۯ:GyP3nkhG(ZxBG-'ZCɉ1zrUǝr52B9#, tNy؟ȴg j_v{Y뼙7{<hdB P usW^"Cp4M "iy{iZ:wYg^_4}c(ZV@&prDk8gkE+(ZFƫ䌯czȳ18{uz7ZR۹_-wP#L(ZJ#,6l g(Zfbw^-˲;#;d׏/ŃZ>n}sɴhG(ZxCXnJܮzmI; EOQd96ϞpAwvεo1E+ VThG_JSN| ]_=nccyP#L3nwo=Rш ≝Qe|:|e{^ϵ`#wiE+ O(Dk1DkŝrU}5sne?Y9cgOXtT/t;wܴ,1kVVחˮͯJjP2h(A @^Ϟp\S49_ nȿy O YI~zk(Zxɧ-'ZɛU `c~D>; Ek~ew[;ج wdZlL,]|~V:昣$YhdB P ݍo!~GN=$9cc9JNΐ뜗ˤo ηw2B93z,7>Hy$?P#L3[U `c>wV5Udڳrygw*6xdeGԇ E+@ V*PLg?mt畯E+ O(o˭nkؘ?~r; E+ɔONE:JP2ZNS.kؘOwrUwq/h%=~nS?|]Dm(ZPhE+)urT}27uhd %,Ʃ^'B IDATY˭u7wV[B6ժi\|w"ssiJټ>L(ZJ""ˉֈ R9CJ?z˲kzo>LdrTl /i<#h%; E+!$P2h(]aˉV)h%dV@&PrPbl lC J̴h-F6拸-@PB E+ VT0 &p'K7Ϟ%VݱQ x‰-1iu_9cΗM);ֿ!vԼ.Zo(9r5r^|EˉV͝r&4݇8EE+j^ӸfW|m{!o>L o6W?p%kؘ?eupdwȩ>eNY_~k^!_m_!5KYK7'V%o sXiNI'5!$IkambNT_7mup8ylJU%מhL)կԶar>m.Woj;\֕8^ yB9/U `c>=۪hOws/CڸȨx4_p][KE%!D]v&%t:GV;\ QhD!x4bN{#o)S^B]>`ԽyLwMNε:8yl̯Xnqq/C.n8.1 tR_>i\".{.M}bӌE|_`h(ዸZN HXci8Sk޼GvRyKvվ6pJp~3"CUO8/T `cU|ޕLwpNOVZ SiYEli4OJ(;jXO LPˉPU `c~8J=w3Hݛwˮڗ)WB=oIKjց^ӡ`uģsbUɵ' l%(Z $c~hLa*Mۦ ywkyP*Bn{MkoJUٯQj P ><<˭kT6t+[.KFK8#d[ʆeʖ!_+!$;VIk{dò?tXc{pΈ1l o81r5zmG,'Z# 34ϘsSǂ'ڶT}BZ^ )BlҺq4FT[p][ǿgLq,S'F PK>';o3 ܨ֥M]䓭 +BB>Z.-Y{$*j8L1'j5TO(YbuT `c>5r`ŝ<m/^/G GSׂ7( >"5ͺ5"DEښWGui& >,ɊR7݊Upl=p3m1q6D;`h(% R6KG Ǽ޽@ڟe*4ODXvo^#?{ٙKl|+5Kݖ YQmt '#['zmh0hCkN7KWEug =zpt}d{wJûu˲iҌmZ.u/)#RN^2.cj1 X^qUqL'-'ZCɟ^|Wr80h՗>&}p7 ?MY5)\/K;e;Viٱa|i>ς%9'[Medž5l~AIԔ% UD g;LϨt"c__2dE+]bGc3^p㝨Sq6Uqlz''d{sv|"|BUl~NlZTLv*SwSwck8F&x'74,B6揸t˭K] YYx GK8sbp<5Mi8RU-/EjW%+F)ٶ.(;6L]/ɇ%2\6-?(7͗]dm]Pͧd;Iʻd}-_c{Tw51fBp*漚W}AE+@ 6,'Z\@?/}LsƘ+17 QLݑKf]7[$Y1A޼[6IûKc iIs4iIΔֺWdw!{.}P^8ٺDv7Z$gJs4iIc ixaٴ'+&Ɋ Y^ݑ4u f阛Ƙ+bl`?O8Yjup8qlGUKR5cRpM{QpPe]EmUo)wˆJϒZW[ylZ}l~AixQٲ柲54ŋd)c 3Q,j_֍Gai|U6-mk^!{˞WㆅQ}DZ7,j_4^W?'M"{J4l~Aٴ>[yVU orTZ Ue94$fpgEQ9cz̄t5T8@ %XNj/S6uW; OUG\}Vp\bk?F }GTw>kkQځgtVccTwnp0&;c;?Zp.0 \SwNgcRTwFumf阱肫Ϫ*D՟솢/^`9qsA 6Wm\bq1qvTqqILtF15u-#V)pN1 gqp3ug;*ԝUQQ5M𐗠CplZSwV_ԝeQ14Ŧ5O;LyKTۨt1?Kߍ鎳kN33_'Xhu^(Z7Xd9N]zm뮅[(ZJP2O(Yfup(#kؘOw-:Xwq/ QGe\7\lupqT `cNj9N^zmE\-.uq/ QW-'Z 7 %XnJ^zmh dprDkq"kؘOw-:82;yE+` VE'\7J6揸Zn\H E+@ .*Z}x.HyJ,*Z C^(Z'Xn9Z\Ckؘ_w-*Z] {%|rˉx.HyNfup8kE+` h(ᏸ^:t %_:xFU `cMӴm؆vR< hgUbmTU@"Ю6گbE+l{̀ڤXP{ jăZ0Ц 3h6?J͠ jb"V hkcA- j͠Vkͱ 
hf@k53Zs,5AmjcAm},̀ h**̠ jcA4m^65ԂfP{&f@ǃdX2گb?QUh@;+NNOцM^|u˭ 7wRysvfdxP$М6.̀t,̠AmF3Zhb] hk͠<":3 x@sƃ%UAmy,*JSuӎ_HoHr nEI\6/4-KnenٺizT1Al_7i~+ͫ|c󝇥rzii]|v|dNb3dwU|%SY_"l0dobK-}˥m"EWK7d_rٗZ"{ Y_"{j^׽$ Ȝ&,k]?--=.-d;ˎJ*4+&VU*&HoenٲFi(+B6K $9ےx>EY78O9ڀ juv^,=oAs3piUT(՟O'|rp{1V8cA"3: hϚAM?PmϦ[7xY_||G6\* eH24'پNi~pҺ& Ŧ/7חdս^fK[7Y&{jJu/G Һ& -=.;yX*m+$Ұ)K.Ԝdݴ-fł{>3]>KUi|pyC3^|[BuveHC,=ty+ceZ͌%9\٤_.[(Mde7HyH7$b3cT1A,Q6KrιR3 j+̀b,=+hWVMѾzF;A E+HhN3cbAXPٗ%9[GAT1Av|d $(/H~|^v4ULʮ%?KH' joxP;>YsEW O8r^{x0%jGA3lg̠ j=? *K}Ҹ|4*5ʞWtdpdO+ҺҼPJI=OjB5qP5ZО>ٿgE+@ Zi9Ѫ QfP^0XPkYFM*5/I+ߗ}ҺCyFH_CZ[SJ+ߓЗ$ jk̀dN.Oц@4Jiup|kE+ʜ]hN j/&t߆g& ڭGew,%!YGekMa晙_2'k j?Tw <*ˉ֙^[mDj_134ڞ΅qlY[ٱUK/B"VYceonzpm?6ڊX@ jh(ݫ,'ZK 0`E'kA34ZknqmuGIN}Y ].,MQ](m5%bA-lo~؏7|r^(Z؁̠6 jM]/K}Ҽ^zr;9WKGKMT-ƃZ j,0_weu@ޫ hW(Ww,rk/RvMx"d0goUߥ-USujx{<c2'J:8zmh0И2ñƎ͆˖%7ʇ)<[|ͦUatly7]&]| j<{0xCɷ-'Zg&G^|j˭#@ސYaX@[ jq5e˒dO $BHSlY;qrgxP9[ ?z۪h}`8.H(' M֮M-l^Kpd%!|6(^%US) Q}_p˭^(Z䫪vE,0XP6\,;yT6ȃRmZ!;yT_,)j5TARK %+-'ZCo^|˭ 7wPjv~, !R_:Z&)/!>{-屠 PJˉV R-CSsm]5SyCQU3%9bͫ ?<Ļ[-kE+|P7K; jĂħѶ}=ʋBHeʻ%>制XP3Rv/z%|׻[ pHEXP[ j>tʾLB2L1<]ciUYn;G6F @%fh,%mj_S^B?{kI/[Z 'xruF٪1~r bAXPs7KgK,I<ݭ E+@ zrU).NOS>նf†2{lO9->{ 7\cuQ6QP jܿeBƒK$$UPNh R6#5[G bAXP+V^Bn+].TP/Zk9Ѻ R9e&{G5=5H,TQ˭g[6F @3I,ɞE !dfOMIhk5(ZJtWԪh}pqA b2pT1ahNVE=3T6QP%v`j0+/k!/a:L'jxɘDk8 kؘ_wVE?r3wȩt1BJ9t-Y)Z`h(ዸbVEka8.HTi^U^4.sɾڥ BHf_Ri\k5źaҺH~5E+ r%;nupx.HTǢ5N RA;mX^50yÉ*˭gnl*VE}5Ԥfɲb"+F IDAT!$aV1Qjf=P=c4Xn/N 'Xg9;S6]U[ 2Rh/Qҥk1ZZ*'۵nEҸ%kL8jJ뚀4.sI:=kz\V0QZ˭)KњΞi^U(uGuu̯Ikr[2@DZ*'Ike̯u?JWo7o(Ygu^(ZM1'zg:L_$\Һ&+!v H2ͿӳV;n \oLhNnhU%kؘOwYnl;]vm(+f٭I#a񵲭b (/cZd[DiX|掴,UO) ecZ)0QG-'ZK\ȩ*Zf_Ri][$3r^lzvw<"$;nPM]uZ55g4HڢLS '7Yn\\{l*bP2lIٮl@^^>S~KӎLT6i|,oԞҵdhOwo:X R9.FW$\l][$kyU4Hcy K8)ڞJ[ӵTZMӵ$<}nYϟWf_Ri\jwQPN4Xn$kE+UH10V1Qy;P'eԜ?C2d*{eJ_%h61ciqt NWULllyf{L E+@ `usA :4t*nKaUXZMvL2Ouy;Pӵy^f;ߍ^C 7rzm,'Z SVESW3dV1Uq.VLeU[춅@xXUL,åwC 'bU5kU `cTTvLKc3$a͌--(I.IRoTƂ挐z<7V~ݽŪh}-r/Ek:{jJyUm 3& IlJɞi$ 3n(i^UiVܼdU `cTɦhoI3gVM&u/e.i]`WTZqK_Y7~jP %ZN>s1_hup8S[vMzڵllyf']掔ʶҺ&|"Ďi]ma񵒚;ҲT?~4jjPF Pqoh]| r֮WTZIcy挰zt+/쎇'TӅjvVM!Һ('S0y&˭g}V6F @\VIMlX u-aҸ%-؊ m$\RoLM'5g4 Һ萭7V>du Sh͔=5%lz6Ù`$.QHMoPMV<6Pkؘ?fU>)ʇX9#cTFOi2t,,t,OS'QSsGfz{>=7o(r1V{[ElKcX^о%]dNv-g["gciq4ӞJԆX^ -}[(Z`ph(ዸ[nF.H@/ZֵEҺHWJcy4HQq)ۭ@:>]KLti:USCyZ7TyUaO+סhN4[8kE+UKњuywKOZ9#z~)2d*{eJ_%h61ciqtO"0yC3>zmfe@NQo:VSӱS]t-E;gqC E+@ a9j)hxW[Y[22V%goWTϛ=0yBVE_5S6QP_h?iU>I;E+!C ם V%7ܮzmEܫV>W@,ZrR'yCRzBM$>|xh93v;NUYh%"0x3 Ezk؜?.h-q_zmm;s'׭0},q%r/.V^7:(Z E+ ^Y[9D>Z:I_裎+^ӱmޒ?u˲ZCR?{8^`0[Pblq?`9qT Sy/СC;G3x~l:?o޷qn+#|i#a~ZN=W<^ޘA{nɓ<^=TQ.yr_{wUUqdgrys7,5kif7Q3MќJ+Msp4!gSqDhTEQ*fYV?9~YϳֲO>%:gOaxovȤQKR.# Iz βdzsZrd`Rze)VxxK䙮$dLuaA|ws޽-}"fJy_I6d;份z8h.ÔSBL IZΣ?qCzYF khkEԙ>Gde̢uRfUthTw6d~X~.O4mpFjJRtHJl sҨ^Mػ{'?ovnLNk̟.CN К};ZC^v_ .棃Ç ?kEm10vH鬢u!S6349֬zLOYF[~k@>oMl>g +)ݡUi9۶~<5kHi 9av\ݚUe&+1;ȴChBc]]]Ж`Cs+}uxxk&իT-Kf ARCRR997KR p1_Ox,1.W֌^yy;܍ -Q|_msi٤fL}o?lhͨ`e7+8W˅$^Oh:;2}^^GS^+Z?iM}Y~~w!V³M2,RsfݧvQs&-Xyz|4wTmƚq/pX_=E =hI:n-n[:'Y.Zf$giVR78{}=yh 3MpnZˢmggz&<Rh>ѴzM36uWmZYr}w/Et'yzR\9ɤklMGopOWWW͸v>.Zvм'c"tߟק6$V+Y&&\9OFۍusskzKJNų;y1eJo׺O Ν9gv1囝Sn]8{7[)X 5iCr)! a8{|(0?.5`ېǜFZJ??{wdc1H™䷸۝Ekn5;9A_kmI~vu뜚t׬mΤ75L.E *Ze:Ԩ9w{9Mff~jhtCl\C;[[F fD?ux455'#V.72S0;4609ԩ9iu30^ r뛚W^xZG8cFkwuT6[mS4Xt{1Zf`e7+8GKM Ir?¬KۇQOoz:{Gh}0?+떼'KWaڼqLYFx˓sLxU]lV//Ox,[_2oHLgS-.^6 )ZmKK=`?gh-SWڍ)rs-Z`e7+䬀 %$?%4O :{swFFϟ;0bH; ^ {ԊV$D9MFiX55֛Iy=Usg[x^. Q[Wv =Ek'[h,9: hh[^sgHYQ $>ΔVSCL M\<5$!sÇL zGoJ!g4ٝ:oHCɮu2:gt-Z{tn-CRX4Kܤ_iW4WS1z6I:mEKU;vf+Z3SN MԟBqN&ᗢ`!:!;{/6|fѳEKe!E:盙^,mhJk߲b~sSRև*Xf%ܓ) O Mz!F C P 0|h.c!f]Tn$WWWM!#dYV\^sy nͪs,5^׸(Z׮.OۭSkEN5zkih}iLȏJEkC>$QK_NqB<%4ij29wavC͈ !щKr FOf!C[܁LXšBlӲu΢]2L:.zpv. 
0\kVdh}w ͘w7B> o~Qs}T3Yft &ϔЄSBM M% IDAT 53¿݌CΈ$0bhX`3";!8(?k u'32ukg=t΢uњsWR]>ьpE5)ZW|8M3Q{evs:*ZC9oquxۢ5U \P,BHJHթ!SCSC&O I|;`}b,k,#.hkWIK,PYZf|-YvIvx5c_E.Z>׮vOYp8}5c\]]%囝طRfQ4_O:u9R\GRt-67 ~3a=ѹm[4Q}'T4cl =ovS3Ǻ%9u])%xE׼˕ьk۶®V:VIa<ScuvGlN58*ZSeE:%4q|.%Z*cvs4WSV|8Msvɡ-2zH)Z]ɉu͢@ vEuld-...eˊq}#;qϞ?,.kdev-e;2~ 3Ͻ)(CLC[eRze2gy3EkjN׊VIkdԨZ1To\\\d›fj?I ~4RJJL;$SVb99sxʉvron^Ҽq=iլݮ#vǧ<^ԩQE~;(RjE].Z%krrvv2[LʮVyhqChBKvO45ldԩQмujT1<. p{2uM9QJ?iem<)4c6 Zg89XbwVwDRbhޱ:b9T\^Bt+yآ5•])jњ֕ dkcfR-RHܤ.+ TF_dܫN^:Jħ)+M֑zvqCeGBv`湙t\/zKú5L)_qw/ ŊU*Hu=9UJJܽ-I_iwRtlLfO!?|3q󧍖zxzz_ҩms~ywj,-4~]RR9ئxg|w]p!RϮ9( gퟐ2|ݽ)(U**}:;TRbF1Y yRry)%E+]UOkg? oE놠9v["Oulヲ…|dwzwNm랳z }džLѺyiݽ~4WSIn14G/Xs\-^MԖ]k^jXbuy^yA~+ZsP=޵V+QLw!`6 S׺^ SsN5\FcVf̼# 9hqqq1|mŋʱm+tS>ERbӹ ]ryl9߷^);h [8nf|'8G!Zf_v.0I*_Z+gZ*i;fv(Q {yYt$禹ܙ9s0)ZdW:3|[Үf|k9Oi5 ;|P_'H&5cڶh,w/E;FՊq tWjݟ昫Ҹ~-/nnn>C_RRh#I3A꺊iyvD&EZkgjƵoTn]8mK>=:SOzai^<i兞]Jd=~n-3tqi0nլL\c7-v_~; +a7E+@.7{Ж`6v)W\9wg^.OJb2rpIZk|8Nen:'hDX//O~qQҳk;1h"?b[yxKrLs'iH-Z]\\$62L|ukVՌ]2 ?z]ŗޡk96צ(Zr^ܧ|joն\9S2|:5k[sc^y^s`lL K姢55ʗc^4W^xZ3#^fʏr侦W.yOL)_]\N/ ̣qʶysxzFichƎ25ciÂssso>tQnpNۢkhmT3 ݟ^:Ryh-^Te^^r<|ñ Q[WݝFhuݘkˊzM-l>n OEͤv+/+7;&zj)^H%_nRԫUMիGEU~;Ѭmњ^!w/E?TQ^9Y)9>b}4E+@.ߊVIд-A/汿Hq= Zњ֎cV}-ZSSZ*25OB>sY(gl~|josdZb9񛖽o<(ZXJJ Cs>=:*ڔh*)Ѻ{i ֐37//_i͟گg)os-+gdڠNuk?ah]ڵh k8nM̑j[гy?$c-y C9E9L_[N9~>~(F Xլֹp4zUEoavli8ov:w{ٍS~8V3fJwK&59sxqz֛IǥYZ6/.׽%iƗ-nh|fE#w+ؽgRz\?EI]Ư,bRf%tzQ'YQ|j.pϧ_ 5Ǡ;k]jݮƣ[뾶h.#79h;wdǿ~6ޯYhܯL_)w.px|NG."'˶Wyh-:0uG5&5jVn8ufhuѮЊl;vrx{y* LZ;BONqO4m,a]f/5ޢURbd2qź_a<_cuۭSkcrh{)ZʔՌ>y|1*?%VXU媤5巸woQh=%X4odWfumDc:ifWzf4H!!K3W&4cw{ec~|):֌){7URbwwm\\>G_!N-^3~|S]3nɢURbdvE{oqQJIed0]yjV.W% YɹJrBKb[GCdЗWRޯtL˕-ai;RJF~V{b#@-+ۭ}u]s'[ҷIW$_Z+ Mq*}_WNreK.{)Z|av;8 qʡ-r.iv*dЋ+_c^Y٧]kU!hԻ Ȅ7_ՌY;"%>\n֗ W^Y5N/oKZgfSfU9 绝%[5)דVC_Fpג:5oq2{K8#qw/ʴbhٷa]Y `9C'2ڽܧܺp\<9]>(^Tb#dsYVImw@vv*b5/f%gVN/qJVeJIo$}EݕCk6^^QoW}hKNq}r53Eȸv=oiZxksF*)1gbXkY|wHJ̟6Zg'pg.yzYU$dj/ LդLV`׻0+2'bhussŊH ~2n@Pn'Gho_;g#y[R|YOOWZzvm'>)W/oN̥_#wf%VZ.\r 5QVc5+ILݭSHcjr&UjV*rauӋBH/I"`I8mR=kY̪ŤN,7_OL{K{|XFMjV'L٠ܬ&Ŭ~H-\VxaW/v/Jas;XS&(Wg9g*OYbV{fu9rsK"Bܹ_~9_W߱I;Ay8\UUŬ3nw66ENK'^ Bȥe9L7>.`w&XjZrU3,ZX=Iz"w/n"Wȭ/ǹu~\,uE>YIYZ(g;`UjRc,fujV,sCIr5j8≐G97|.WOKۻK|&uF[U%goTaKa5-fu̦葸&om/W?O˝ N/ɋs!R<,W/IB;VjVG}.SRTAIjR&ubR7mˠ~œry[,qs~땐\:!mbfJ'ڲXbIDATXլ[=I;yvY$aTgIMUd;r)w^dc ׾~_wh\xJί^zjVq_l Ir`?֤X*jVKRN.li-?\=\?Tn=u~\?TW~O.li%!jvͨPp1[Tٯ 8 \ZUjRAjŤ[jŬ.d/vćTM-$eGO9rz"@TnoqzFGno'ʯ'C%eGOI\C*d;Bu`ZUٟG<,AIlkV,&bVQIJ]$~P.n,?(Wk_͓,VNt$wVN˲J}5O+? uU9|+SZLŤV[jblk RM@>D?cRcMդ޴lI}f1VJSYJIRD7 r-5E}5O\*Yr~r'KU'ʵըr[rAr))IįޱXM*bV,&լf[ԈXL5JZ;sv6HUUY DI}j1I}wWmV~rZ$o();#/Gߖ_o&;n-B+5eʢ3j{:{K\By9 y˔jl R]ceI5@Y}h5 I&I!Y}m5XY%[JV9ekIw7YMjլ,faY{^gLq2L] 7ұfUIՋ3VjgRݬ&bRAF[LjլK-s-&jRan5II[M*jR:5_&b1xY~oͫSKPYg1ɖ`5-&5jR}݃q&IՋ5*ͪt2Uٯ-4#IENDB`././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/doc/source/images/cloudkitty_modules.png0000664000175000017500000013236100000000000023761 0ustar00zuulzuul00000000000000PNG  IHDR VsBITO IDATx^Tםo `Tx@dHA14(ڪcITI 6b-81;~㐮0 F0Vp ]d g?0 =8s{?up}ꫯ$<@@@@R ^i@@@@!^    @  ޱ    !^    @  ޱ    %  ǀ.n5]%FdS'Fx~_Љ/# 8ţЎ;   744/]?yyJ"S0Pe>z՘ʇ[ jdrCSRRe@@ @ʘ=lc;:l|Tak@kWӽ&/UN]⃑3`Ai2"CٞH@@ĤlrC} ٨3k+*FD೦䆆MĨG [s4Z07^^E@@@@ܐ !"   ^ 7%XE@@@@ܐ !"   ^ 7%XE@@@@ܐ !"   ^ 7%XE@@@@ܐ !"   ^ 7%XE@@@@ܐ !"   ^ 7%XE@@@@ܐ !"   ^ 7%XE@@@@ܐ !"   ^ 7%XE@@@@ܐ !"   ^ 7%XE@@@@ܐ !"   ^ 7%XE@@@@ܐ !"   ^ 7%XE@@@@ܐ !"   ^ 7%XE@@@@ܐ !"   ^ 7%XE@@@@ܐ !"   ^ 7%XE@@@@ܐ !"   ^ 7%XE@@@@ܐ !"   ^ 7%XE@@@@ܐ !"   ^ 7%XE@@@@ܐ !"   ^ 7%XE@@@@ܐ !"   ^ 7%XE@@@@ܐ !"   ^ 7%XE@@@@ܐ !"   ^ 7%XE@@@@ܐ !"   ^ 7%XE@@@@k~#!" 
2S24b,.!)ITe} @͒jvuJDL9oY¹G&*ZE {Z>kG&0v݉}m^z5QU.]JqM>iSi/|[U\qO IL@SwؾR'9 ?E@: 6͑/^+7:{z$=]]9_td=sNםlΏw΂*pr@Te;Ӻನ^?]-.M44!!w_Ӊ<"3PGy_?j(6;=e໇>{A!הגYvqkg܌액FNt࣠'5R~]™ gSݲ񍉶Oԥ] tLLB9=~?];*/rC}p`T)C- ܐ&=#!^P$FۙTd(dѣB -KRf{muKφ,_s Iq9Ckq r*GSn :uJHJ0^f*I!nӖ]e̓%KR{&4ܗu3m34c 4nfbL޲GkEO)|FiWvjPX'N|jʷF'7'j:>O0 Ut7{㥧g~\gY~(gKmsJSP6$Mv7[SgxiDZKֻ*CYuOLIhp5;w  g/:m62}O{}FQ4rUCv%[ y7B~d\@ F%X>ݐ莑D\d{T[uܹ%.JTȣ]YW窔YE_ r3z5&jSga0֪SW#ʊeOt}Hk+ibqѱ}YsTqU"0sro c ξ}rss?0L2M(++vIO=33sÆ ?.VP[9oT?~ɏ TO _|M|]:ޑ']`Śk%9/f}\K_̻'7>a(eg7-jo[1n.9{|?6g,u@VM|w|w2ӽSm x%NJ go ?z8E(Q;x S g4 TI&|iꥨxI @wK^$zѐLH;1ԫVe~~we*œ3bb]_aQ<]yTyp6֑*ZD$$)M>zH/ 7~aZX}e%2o(. 0Zqqq27gL`+S2O%2 4TWܲBpHPg]oұ bRq4U:+tH'ohweJJm~Qf_*ZtEu֒ZVGqfFQcJ"^]G@ `eM2y 6(*m{YL^6eԹsSg'Ǎw/3yq{{SZF6aGh:']3{ Q,غ~Yu͟O}#T={vw_/_h2i_5gGw_ꔴ\=g-m; B&zW9Q8>lZ6;<ʎRçӰiKm~ ֊[瞿\*;σ]yxp6ґnq|cr飆'.J2,,kYju}ClZd4$lk`(@ t4Tksӓҋv&)n~eX` o BKꓟu峓&؉ζ&Cg1xBfl|.1d}Kc 8=b؊ibχCXTO>cUGfSvw } G ߘnj$6]a7siM< nsu鋶ٽQHgvz)FyCn`8@ 7}tslzV%e"/WL/Вg7(bZN 4.]$ZZth4J|]V)>Ke(yN*?io#V"3󊭿]# G\"h]0ZZZ~,Wglh-YQlclW݇0w*OfXLc 2(tSNZ[ќ^ì=GIO*>i#J5ƿ<9Vjĭz~ g |tE.\7i6Τey<%JJw3_nSj󡦳EW .mc+i3+XF_)1N[}qӟYzbkiFIy㳓MK87 >9Fr!+YO1O$_7mTts1=u/(giSUd ӧ$s/֮;=Ȗ'.ݞ3ڸA/ p) ]֪}$er҆[jH$Or-4Adg/hhhH ;k#**km 7!z.wKDPxF;5e犈 ytEYQ~L;ܱ @ycǎuLJ0x ?! 8G|)_uެ())(<^pŐwwjsGͻM a=D:iۤI)QRiv(jACj}7kos~457τnjmxԫ"75ms|N?\k#NKK ]2TkJ>,}W_7?v&Ɗ^*t67&7KcSm>zV.&tsGvM(%iqa_#-_QO:?[ˇ&t\=k_;n],>GGst"wey 1q+s2Um v}}?<`5ɚ2}]]0C"] T\<'0qÅܤWۻ-j1 "$IECTTlmTR2#Z]t5͵{=w^Ue,| ]ϩ A@ģS?gb̜(rg| gsW#v\R!lO/_c_J$R+c4aQegz}˟B=O!}z#ՊWwMDcMID.MU. 1^Pl7̇o*G;z%"޳ ;ɽtw o }cejREg.?u[_LqNM*@nȫT@ DdR&jmD}[r0RI:XGmmD!RNNR&E}Jc_hsj1NPfyQCNM$ley9 1'vN#@ bFHj 4CzjZ <SČ*ϗØ> CYvn/U\)Ϊ%;ܫgSb$ZOTyaQ܊^zY&cI9K~cL L[9cL-uXJvCY\svs9QFbYd=Y@(!%<Ao^v~yb0iX֋y?}bFf^Ca 744@ ,EO-evXI+)$.k3uXS&CnH4VRgNmuDhoiS -uj,dtDQ$enS&' "qɓ$ogW*]^Ra3:1ϒ/5r^1ʟnj W9$k˫:-tylc,丐{$M*Z%80PC8{G"zOxwxxhViW)4w&~CFMC@2cm3""CcYmJa͹M;%+cmH5J-#ieem9 q"O\k,'3_l6mhOO7Hφhs^>22ս/hӾ؃J\~&IO{"wuGeW7)>BIXO{WӔӌ篝Fi kO)16G9; ?|{8򉫅7;r^܅t뎉K S$d,k/)Yck{ '3rrss2岸$)99Jt-Q\,A+23Ľdn IDATqIC,Rf]%Ca/$:?[$Hѯ ;?yKkVNGXBTyzki>!Ӟ٨t Yqg—^,ش+U/7d 6fz.9:jo_dGxvkƯŜ"9?\etι^, 6-?b0r ]N+rQ]yxp6#U/lM+\/GEd D[e 7!S5e  (ɫLZSd7˚d':]MP SWVdw c3dzdj6i=eQEzK^YYc$n`_\Px4>;r2r&'GLS<\F/}?Wiy m^{vw؄ظ) ukm%3zvB%kY¸АiB%UޓtU䮘YXcqKV]tt[~ֳe"ʥ3~I/?~ɍfDHZkk+ΗwIBb&D67CoZ6(]3{[H&iVIE]>fnZTIJTҸMoo,]!2b=~ҜL޼fjl۶3e.`оK8&Cxi؞M5A<,4Ws <|OyC7&D T곛SƺtljM̩jmJWiT/k*y}cg>ZJ_dT\cdcS게{p}X[ C.4s_{N~y .-Ҟ-*9_qŒ I1_{#4i3 8[U{󆡹)T*WZg[{v6_^)Oo__{//إFMJlӳZa漿uD+ќ(Ж_zxy/.D/ǃ]yDxp6#ԥ_*|89[MMO,ҮoPb{?׾,  Jfu_;^)+--unXw\VHwYߌl/P+;v2P`}t5uQŞw7%}p;zdsAZjPԌQ35e}p@]#CǑY+< UU<=o&>ݼT!CbSl\P&=.l GPܐPC@|T_`:$UšrZݹn6Z5o('y޿@@K8w g !( _JJJuuO_p"~0A|8B`UJ=r׭QUSk^zꥏ|CaÒs7=w g'7M8E%@@_0}(/L |η7~rW[WJ8v/g ]<0% G\yC[Q@@&Gŧ06BFOzrkGˏ(-ؼ#A" h  ))$ݝMͭ¹"_ #x@@Q 4|#g A֔    0L䆆 f@@@@ 7䃃BH    0 &hA@@@|Pܐ !!   $@nhi@@@ArC>8(    a@@@@     0L䆆 f@@@@ 7䃃BH    0 &hA@@@|Pܐ !!   $@nhi@@@ArC>8(    a@@@@     0L䆆 f@@@@ 7䃃BH    0 &hA@@@|Pܐ !!   $@nhi@@@A`L#_Љ@NWp^# #%@nQ^$Z/peNW[{O܍ޭu±w-ز+N]]EF@@``MY4 0 ]]cb)C^&g=WiQYOte'4k6N ٟ߾//q-}}bdϭzt_}_ݷC (޾^a_^]oǵ3kmR[%ɲ E[ M/hJ2't+Uk's+]#(o˪ugUgŖ5n{joKSM>P&Wl\~ŻҲntWtͧϱmV0ԡ;+iem.}tu2ቿ][-ylGq<{SQf1j;CY}.cv9!  @  ҁ~.kQS-<~w97$2_jv_29  A+@n(h@HT,˲.i96$ ܈t:uĐDUY`Ú6_ic>7Nt)+(ݒ2&\;ۚTg["W8:C.1jFEO:qoϖ&&L얻|I/׾лkV_ҕ~y֒z]駶URh46烃\:}s'|W =Eǯϩh\|DM]Ƿw,3$B~c{^۞oLC#\۔6IA@@ 7L'O`T$ H]^rpwUne8+MTo\vfFʎ9DoYģϴիΎE^FmHML3vg-Wkr2>qhOwG=ܬtwv/fRr׮c֛I$cKOd:R&JZqA5-mRV>z2is{*[8E?]8L#nT)i.ܲW;z]4@+wi>jWmu]5kyc9} gю%x  x"@n% aʗU5m_]' MzYT2m{F_>cEvU?dO$SFK,7>*qJ6mnt<4<2Z,n?k-/O7~ԄG׋ݍ~rc/I=KZ]= `ӗg/8%qU.Z#  N䆜H8? Ĥ=*|QUx˃)!Kks]oo:fBYC=g.x客D"2C/ʊ>,+r,1-J[^Xa]5&n`_\tx1svӦI>sÍ$K(7$y4cu5PSԡ~ڍQ&uQ)Wcw=!  
@ 0o(G@X²-u2Vjh.V3EKTȕGUIs_2*>+X[qីzl,nDy"ͭ=m_R]BsTVr'qqdq3k'q2 @RZmk- ~8F6al\j|Q_C'hEO0lLu$UbYCgȨcEk'wOؖ].yԲ7PF)lX[_XVU6{o,W.]rڥS\!  ꫾5wر}vK٭wDM]Ⱥ@@ZjPԌQ3}?Pr%~ ߯_@@{)-5#   .@nG@@@@ -5#   .@nG@@@@ -5#   .@nG@@@@ -5#   .@nG@@@@ -5#   .@nG@@@@ -5#   .@nG@@@@ -5#   .@nG@@@@ -5#   .@nG@@@@ -5#   .@nG@@@@ -5#   .@nG@@@@ -5#   .@nG@@@@ -5#   .@nG@@@@ -5#   .@nG@@@@ -5#   .@nG@@@@ -5#   .@nG@@@@ -5#   .@nG@@@@ -5#   .@nG@@@@ -5#   .@nG@@@@ -5#   .@nG@@@@ -5#   .@nG@@@@ -5#   .@nG@@@@ -5#   .@nG@@@@ -5#   .@nG@@@@ -5#   .@nG@@@@ -5#   .5_pK[Ujf@:*L( 001k({\ /8MQ~w/ 3o "}@;v\x*@nS).Gnqp'x#K_0#V7_ o`wp0t>")))|> ȑS(++cΠ;;Nnqob@dg{s]1 0R`|{|H "#0 1aЀ|0{Q ۷OT*+=ĵSH 0#=m/#  znn)%$+|c7 w  #(@nhizXL2uhX}lI}F@trC>"`t`:ԡ{i06a|{|  YZ&IV:4LoH>m#  d(8L:00u(`^o ˎ0.Y8  P *I]L2uw_ F  ~%@nȯ`%rҁ& JԷ.b|}k<:wE@@GrC"<0>&z!qׯ~(  Cji>&fO_1 y׿Ǐ@@Dܐ aJIZ:4(ݑ1f7u@@!^ ,S:/Oð_(  } O@.]':!H71n`0 I7@@Aܐ?1JI:4(㑼I}zߘ@@0 t`"`_?AK@@AqO hҁ'LG5PAq   ڠB|V`L}S233G]#0!ˀ w "Nh}d ?{E夃Lv1SWnhh(++4u(;;+aQ I&jS#*"a;@c p ﮟhL]Dn M&7,#s#t:q>!1;v/ѣsx}/߿\AF`t: v5j K䆼K#&`?rH}C|8#0_/R- Q3fIxgM SE=P1uҁ )C$ʋDanX֯H`|GJ~xe|ǙV@@p 7K"ĤxϳBfD r 02v(  ^`M~xJ}$mUWWob|v@@7G Cb2T I0TbqzqzwԆ  ١!0@@@@䆼NL    ١!0@@@@䆼NL    ١!0@@@@䆼NL    ١!0@@@@䆼NL    ١!0@@@@䆼NL    ١!0@@@@䆼NL    ١!0@@@@䆼NL#`w#W{{ )  } ۇ%aODDD@ ߳ˁo   y|G??|_^B   @l@  3A&@n("  ᳦ 74C@ ZР@@ ـϬaMY@ -  1]D@@` 4͌k?׾6y@s=+sMSWBI@@p 7l‘~gXRRRhhh`^!{'v}vn!  0䆆[FJtJJHA 0h\AWȅ 'PӢ_3s]|ssٛJtʜ6 JQ'@ 7t?z\7bIYY5\rC~3r-볒?  VGWݞ4h*] @ nV1cƤGwرca'@`K4}i,9{m؂!F|l1 g;>jA,@nB TWW8qL뎶yz@ H=zC  @4DU=Gr+4P 䆂{]]]w׿~ES@D7}dɒ ̮+@Kqq2̐xr}4ĥ @` qnV^`=_Ҝ@;Je퐉@`c)| k b*ANY E=$ոdtۀw0']P===Vgkcbbv]k]Ǐ_~~G@O1s !7ZjkE.b§STQCbѣ8)ޔsR \7o~(t`K-"_4o͋i>itʳ-PZ\SpW"n:EΘKN5:yC+_3]ZWS2Xf!75;U3g̐%N]xtMQthO ?zmgD3f.YA` ?xj8E<.D#g^ Z>v9oesO; /2 j86K?}RקO zBDX|/'%%ԡC788n8q@ /Totɰlct+U'qwgl-AV__yczxnWnX4_;_Y3Yޑ[ 󣳹TriyZa͂5kvNWi(e;}SClؽKEG.w'ᶢ3\=/;oCOϽ+E+Qfkkjq _.(/:x Vdw|AyM}G(V "NIê+\S,J>ҼqCNvDrN|].9q`■|1|?%H]Tf6 HyC⢰ܻw/++}bH}3fA/%n?9 c-tn*Cjhnd7x+P4}o}$1FѦ Pd͢{{uܴr?;Ͻ՘ {l?ͩ)VE蛋~OgH5{^ɩ>w%Bgfg}_s0/s41٪ʁ$ۭ+3W=}EW ’{%[|w+FW\v?fBFL}Cѯ#U3cD}ٌ*CjN1DV{o;52&B}m[dH4lW>r_o'^?O( 3B}i>=tQ:O7|\/]TZZ9_Wk֬q>S)g?[v\.9s w 0Qs*.ee7gj-=[#S>%X%yYxCj gz OPmuD'י'4}(g$yjټ-_mPXGGIqNͤy-sG&%Ι3Mh^͕F84x &v͵>&}UCCKL5(f#׺JUn5 Pfm]f=&j} I'rYީ}sLF)Ke\,tcϜQaʽUGtVd]] NQ6j=yzxs@r^3L K}Qx+G?wra?^,yܜz; z֭[---_~KW_}+!E;G{{{MM2?|LJY@Xn]bbacy[\wwRdUcEXF"ۿۇz(,,LWt4::zĉ↌CQ=u ~/Pҕ%7Ȗ%,\2R,4$5\=jǥOfԒ2ƭ{cf=W \1T[p!3=Đ|{!}˖YgM C $~D䍺Z]m P ܟ"w7PMyΞߞI 0X2 W&Ef*0[l(XQibQXPZPO ?nC%њJv@)&HmIzp{v{C$! yQ߇@>>X%JGHx>Sdݯ~2A@Dl9([^]:shLI';L(4dB"4Y;9*jz ÿߣf: Q y2;CԖJe못=]ԂvL`&N >LiD|TQCt2qȪ \>R3JE%ӌQC^f>Z-)ۧ4.=O]NJj3{eZpNJ"MsPRȀ֐fUҌ9~'򤦏G;"߲#o߾__| 2w޲e PF ~C ^/lӟ>%'"t\vA L%Pj\9Tm*_Z (a"c{l7AHHt~[%a9)LDlHq|==13 u=]:GKg1p"^4M=5tn\`.e.-\$;w^" @`vH4LS(D3& 3" /A؂c( '>n$$pcCdڲH8᱒LPA~덃J:&o$3'yK%^y#G@V *]Z=IPV6v]Nzv3f Q| (P/Q=yo5L&BEDȆ4f9lg8Tܾ6a 2iD'_ÍkƆl2e;l4SC#.͒{ADҚLFļ]N9LN^p|.ؐ!L?*'᧟~zŊ[gX<$XL^O}\_׎˂cKJrCn]Ź@@ Df2Ay@Ytl/3kM:H4m1 +G#d ,$ۑG3mT?6L`,/rږq8)$/IӫlCiCm ׵'Z墘vʈ,f;lrxMFq>q KkurHc_*0D~_ y AXg!o2UڵkV @5fXVfP51'3+ϖF$ %:mv4zE30 %eɦj&{_)gtjS:MH4I604eCWB94 qtҲo1D.ͻh6  ;8mSs Kd^&1J"]O `\og ꫯȦΎ$KӝMA@ 1Uu2ӎ;LՅXVF?ICYxK*54DQyRf,j\;czX'!/{rU2u& -}[5cX*61Ѳ}f4i[isl\.piXoS͙Ȁ=OKX`ZN3vfe4t*Q3anb.g4we^iht&$H:Z!:; 7:΄bCqK XA(=J @ *dUR} Z(d@gdXY,lh:-U~ڪyGOTs.m兇Y/T),6(K~ۙwIS#uzØAv}CQ)*7xWԕM/-'oEIDE1Us.zGhxkӚLr'x.O!U*!ml߷%DlhKI} S҉)<$;Tw ݝZ|C얓O FQJ@RKq2_87 Y"@@T[V=@H<#L*M;C~rűʱz[ʼnlϝ-$u/*M?:‹p.]@Tdvo:ee"esy]-lg|rƁ,K/no+l6~TM;@1Ntii Lͯ؜9tBLvYRݹ;? 
z7կ^]b6\($>X}Ǟծ \oȟ[glSOP@/o{dh_(  F#0$jB?vQz%K@QD@W#_( @:37$%L6 >8U瑑?* @ `#ؐ sN(+2J@~*`d?㧥D @|(5} @ @C ;+ThVxH E߾}#T:) -1: ٚ`"pI9{[Y<)oT  8/~C[yLۏyiCOhGxR-!@@lȿr'KNB$'e1&=<)޳E @`1 6_Ǽ4_MDOOI I &D @O +1/: BsE՜x C #ؐߵKӐߵ +<)F>-'% @>@l'.fǼ4+N_lxR[>W @bCؖN~̋NCx(Yn< D!0ɜӐ머b IY*yAOP$ @X< i[1/: iˡX[oxRPn@ Ć|V&|̋NCn)'eq+ji<)Ez @` 6m9Ǽ4͆\Oɑa@ I fC!@O uVOʢmZṬxR<ʉ @@lȯǼ4m-PG''% % @>@l'c^t%.]xRsn9K@bCޖVӐ7ʷ@xRk0 D!0/yih~z IY yHO  @X< @[N}̋NCZ( IY8{HxRPV@ j._,$q ^s)Hj. {ѰXfRxR\d<)qOhq 6=)RoL7y!@R`5@ @"?޹ h-*G D{ޯbCxR @ @ ,7 @ @ (#k@ @ zC@p +6NVi^P%\*NʖGrg;#z$nWr#=$ @ĆPR@n|u_W6~?e{Pꤊ9;stP  @W*/ 8 %aQӘ^_Cu]̉>ѭ; -FZ!q @X -FE @ +tl $c;HΝ~`Mď;-$@ p \ ѫ7 >Si"ѡzm _,U @+u`' @`9B5/4='EA1 @<#ؐg  @09QfC'J^XǟG鱧.?SkuA[=ll SZ@'K%e4jgAߴOkd#CoӛyrI%q>E2ڸcAH:ڎ3cقm-@D73YolbgT@ S-!@ #Fs9$8S**Pm`Gu}Kjv狓=QYi}f*DI.뺼QzLfaER gH; @ #¸#W@\ p$RLtoN$#-V%tv 3u2C E.QκJr4Ln *EK U$0ċY&Iw&jD:Ŷݒ39錵#JROӥk޺v39 @ _ 6 e@pM`|rwE+3R+j1 qD/?oFmB@*&gnDttE Ot?E:٠JMqbȘ WsNJl:d(haCQRNߕۭ5ks gC xM!"a@,ԶR FݰV;8cqigrOpiYu"n|fMF0%yn2fBį_h(0Gh264x_NQŇ-Cdwu, . @bCDb 'h&ͬ~7r~+pfMՃaERKL1$hE@4=5UMlBZ7eCn\su @^:e^E y4 ˇύR͘GQ{T.:ϞbwZdbйp @oG 0CuNep_iozCI׺{SYKӫ 봃C}x$.@`wmjZ0?_I7EƓgUf @ 6@ `_ˏNu~yڟWYƎ IutFQ_]UXa$.huDpq/A`N2&<\Ĭ62, @IĆi  ,IUw4=TFeOh5 MKMNIHG/E H g<r&{43鐽LN= @ &Ђ#c@|Q d3y6O"Dאwn @ؐV}E"/FèzjQΚ9Pʻ*8ggN[x(p/bb>g*BE˲%5ȫaӚAiNDJ;+5F# Co7 r$)EC(Fz+C8Yb) Xu` [M'?ߚ IIt}=:Fiaz@B˜sC W cCM7UL熉⢅<.j5ã'&M˾w%/p|*"e;bC}zhEZNu$Mu|֬GpʃR 2&C GƶČDQ9TjheجwE5yC[zGKNչ(D~@w*)*agJ|ȃ/֖ jeTðՇv0k#6Ar$GȺD orμje?;C=ʅH:GOwrhG/}Y2 ؐ3t+9HNǵVU ?>ܺ-Yխ@D-HMQQx_U2:GZKe\;+>)TӨ) &A"Iپk*t/*I&gV IDAT\UwI`9(@ HO|% _!9 ɱōvrk>ɝy@jGv| }순:>2զ Zf|٢=_N TڋyK5pB.}q&lZ6]Ygq/`O264Q3CƸ&_PS|<8,N֧82i`q[-Zx[ς t]A.:^#q;1la!-KIɴ.F̺N@(d rjJ5olSKSLi77ɕ.@z 2_%GyK _wtEFES7JgEԡӏp0>uUNvL<5*g֝dLg)q).Qېlu: j\VS~J3̑^?e"#rv.ng&eCDDIҼTef³`@380@ɮgXMt5 2L \*@@al_b^?76'/qM_Dz2ōm}dɍ/76Y=OvV4gsj\%[=ccI]EzfЩL#fu75w:,zg^\tgPn^'9#iSGرs+ bVH~S\Q[lsܥN3Եk?^^v[$SE딯߷Bw3.Ƴn`gv~jo;{%I˗\;*:Y ?aW9Y( >'?˖`0]gU]?e`uh|j΢e> J I\׶7tcDZOjd%(*QNj|RCr$D)OӀUyrN$k2ɅZ}@{{̽8q y^߭al?| y\z,- i B/Љ̇c;W-;OI״<-~zZ %S6(%zN-6oFDGCC/Jtjټh-5$.JJ~%7\gK]g0Ԫ>Nlޡdy% nJ^*2[g!npTB >4U~78"7%| h@~$TUB.~u¦Zso[^% ,.1}dI3MvgϏ:9|5ݛ|95 Uߍ{n|RS0;ۢO#_]kZ˅cԷ>n1|5Gn\!_N>2;!~S7&Jd"rD6] l$kov߮H4~د`&fΧw>mS{cpr7I]#O֍6|3M}ܲKfdbK:>?patP>7=_3[?R _Sy[n߻QY5g,Cd7:4@OdVvs8UNYs20D R oe6*k=ogIy|%ݹy1Z4:F̲llfzͲ6ՎQ zϿ9!Ӯu%ЃOPKοt|guՍH8|wG߸rFqtKNI?(_>ǏMkPh30%<,S@/iӝH`(ߥ?|s7}4xuOm>l?"MzNӴh{[+yj; =]Gq)#_oԵʩWV\ӏ>}{=~So'tDO bC4=j+*m07[{K9-3PHtCͅ!/3I4)ɥä[d63%sd"QH5켦Ouu nL]] \HܻU~w{XR;gh⩙| Yϻx5 @?r81>::j~w UZ fߞ|oYk:CS+_;ce=LЕ{ݪ+cL[YXQi= IaթMezQĂ/˯kOǻ INsE'ϔ|~gyC}˻'8xyi.\Su$ևlaI|N$<TiӃ5Ϭ% >){M^7}E:;b͋i$۫Kl8c{,Y a9G?Z:3)ϲu{iWMŖU PANFhӯv4Me 0WJғبHOdNJҴپIP|q9!U]b78trǥme]*мK`:` E+rfJ}ݺnS?[f< !@I YFzOmOmVh2?sooYW-GC67LJyfsd[V%MLya΄#Cw_8fi>nz:6_\?NOv%eu2 @ E3ҏ19к4Hd;tYNz yuxu?掝Ɵ7r"s3.R!sL9yxu zTT/;[PjGnG:}λx5 CHїj_{ѠgaGHkiңd4YV՞W3ęI+A${4D=gYɘQeVI9`CeIV=\٭IoI-zjF)4mT \l);~ ]l !}~Lj a֦EϤ kgK+z'˳ymJav٤=hpn3QĆ FsPwCx,N\jG՝qw,8jA oYJJ;ZցY־ףd:}Uj_wʼnGԞ6T~,A ߻,Px2`Y\=93{m^?Rx{'𖘗Zf}syILE=TW136DKtM=Kmǖ=B]7HBWDr.iU[jjsT; AJOk+pGb+Ȅ&N\5"Wޢi#Ax4fp\y-u솋2ڎon[rZ%Mg-.BjGe`2ϞB I?EopYQ`Loehg?emp(2hˉؐ@茱Zr74?Ώa|i'w/qn-5]k'ӟ'./6D%Hf@IU$Je,bClCZÄ)zhP˦" wٸ($Q8z09$2tݸ՘v>Ӄ-{O:'WQ}=u;J߼߳e=/ @ 620kO_6#58=msWoĘs0aY݉KHg6B'#w94l3*s7pmI.-e[^@@ \0h9 k60GcWO4mJs%ۈu X'"Nzb9gn*-8}lrOw|bjg~ػxܺqxnw~yp @0_N>+WcG?MD6[}#[`x 1ݳs=su'/2K脉ٹ ];2Dž6]#n]gm˒]N[jXh n>⼉Nc&v{Rxx\ɉ3~[C'OAtXʾkB(Yg s%9+N^#;侲Q; O4Q Ѻy uW;B(~yp }ͽkqӤ_Z>ɴH r23{r](%$ݛg1]kl.uOJf}1jB8=28\_Wvހ?nfҦ~LmǭdvM&㇟\.dy.uy3L (cCdu}R~UT6d7udsuj6pY,geKel [ןq9=TTmIlLh+0Hk Um3Xݦɼ$E:U>nzAAolVnjXö %d'm-~6eE.]mibynwyev!/ @@` Y7ȏQY냝N.HTlIgÍ/CR(| n}{3JH7eQ'X~p\w˫z-jw]g_|H?nM!Yݼ]N[;~. 
|Cf3k69۔,dl {\ AŸzIA侶a}ߴo짞~;Om>4x1#y/MY }|ׅKUW}y%_}Ȓ~e+pILRf]yIr {߽@@/Y-ke16o^N[ |go럾9 t/o_w={of?;x#GG}ǞվQxR`B IY(c mVomōW;QZ-nϛ?_V}5rzxZ`>{6 nt^ Y_Q?zϊX;'wO,j5U1>?8A @Tn-wEl!b+#N՘W]\ @bCF> 1 Ҥd<^ft%e5tîB ` @F@ߴjeE7I;x 5 YՅ"c$7"fg S'd:͋Eǥfn^e:CITui2ۛs3hd=2A^bfVn, dʔo\Y٫g5Rٗ6wХhnٯcfEFES7J͂Gu4I>TY]S'=0IP"gKn]ԍ򡜢BrXE @"P4 @0uI b7aTzP:~nHKժP;W#yn+g.=#_*+JoPM-K&ݠJwDhhpvЏ'lƸMӣ>EJ^-rnmI -0$}3lcSKr7Nx} Fj4ˊ!@@lhn#@0 :Jڍ&)] 5-$HbҜ,Ԙ9I"JԢ&}v(ZYfe3JVmS0H$,6IɽӭV)Zwl}>6%[uI @rmIDAT1|}K3jPA_WNz-,6%Q'^[g#֌<(cf$V)TL'iEu&ml/Zjk&"ҤKV5Nь*;z,Sdǽ! f č=|WDC175VUwRQY/2mL/`c@#mTjІwC퇏$OϿC5nF@FusySQ́!(p4z:Ҥz/nDț2C'v!^\vٱcTw}Ĥk]83nc?Vﳇ8q^,Wmڴfu#eTPheVZ"۬5mڂi!@+$ޞ䕔( 0梞'@8 C^;CS!`Ddd'Gmy%Q$Ts20DvqsjKc9[Z}jjD8)!|wW33kHlL@5!˒g:IqD;MA⤖l2)2g,r8eQb ` @p^!p& Wc3qxJ\vd:1L"]N"ѐ7Zb6g*=dӶ7RN2%LN[Tl yiluP/>)Qe[Db㉻:?%^e4'n  @pF!gp @G Espʼn.N=le 0GB%IU#=x(&}U<)z;8iW)JNqCEkyQLI08)[c[/Y|ɍz^EN^ @`!;(@pG˵]Y&>'D6ɓ1L'CHHgpbփ2)vk/mMZ92rx͕[T[ ɄMM[ɌԱ iԔԤYHQ<&YZ%v9GAq @p@|)`ԓ&ɟF=(q6!9iIFLDMl#Rz;w_/ogu3:[+m}'_SR2Gl/&@ 9M!@ عHH"C1s3?  {p,KARd)X솋pr)p#dj}g~lyٞq}gǭvUGJݧ#U5 (*׷)X ce&9?C Ćp@|)@zZq84/%4>D)̻ܯbj47f߇t3Q\2q/qDsDqBt#׍{Zk:LGdg_͉ P< @\cʂ@!@|qy MLi;l%\`$J_/0ݩ^)2NvMz6q,Uu:eukٰmԛMgAcÉƆ:-BdzØM=e4143 @ Ć@)  DgJcL#oYJC'OsK'$qMsN`fR9[ DR~@nܔTfI]Sft銲}edax![x2U^uCuuYu65NytL#mrna$ Yg@ bC.`T@ 2I?3#B5ldpcNED"e21mNTgPyЌy[[Һf䱝FZK^j}di s8,cr?F̮lz(Z$dP^o췊7*kfHxڌMd̉Itv% @] o Vx[ٵeu$ I HK4IJX@fVe4xFq\{:ψU,ydFNBm 6ez] 1:ráMvJ⒃HTElZjrH@+5 s- QTD?1y-SN,ڐдOMSk$͒Q(k:ZU:6ĉJ~5Hks^ٱ @bC` =Or֖/|YdWyiF Hkn֢!FN&'l8{xrMl{{OV32ڛkۛ-rJmSι5!₋g8[kGM@kˀ7LWV/:XBVELod(q4*f&PeШZ5aZn!{ zwk @ K!@X'onhRzu#DE'ef$K^)DrJDU=t_?Js"DYE9T^ACegHT5aD/pɆCo_b&板76vti[d~աe̜H^v#K~O# u٢"Kd/oXoh2M۩bJJ&VX @S9p @@Io#PىoeR5_|[(_,+/+ppDnex>{ȱ :=%-Țzm];kH9N)p_7:tȗR9ܣ3=Yu;< @f\Գ( @` K=Ѣr~ wȻ؜<" @pR!'p @feFV둹ꐼYMN˕YNz: @ 6 q(CG\?{Ȁ2D*WB @@l @J7rM#Goied3oJ@  oۣ _ RYޯQ}4&!. @V ᖀ @g䊃1&u}{ 5*Fx+2##@ "o`T _!j5 ,28U  @,P @D rsCnŔA ``he @ @!. @ F௣]zoB._o@ @m@ , ,  w @X8x`e!V<rĆ@!@ `)ޔ.>2 0)q/@ @ @l(x5 @ @  @ @+P=j@ @=@ @Wm{ @  6{ @ bC9 @ @@l @ @^ĆQs@ @ @ @  oۣ @ @! @ @ x ޶G!@ @bC @ @@ 6mC @  #d9F_yB'e䑯[쁀??)x[ڵkC#pU) J2O_ĆFZ5ςr,z<)QAI# q.o=!F%aQ= 8/ܙI@Yn38Sf \ҳ "5@  @5KQz4,|͒=ADO n8#'%@!9B & @ @&P5 @ @<(ؐ1 @ @0ĆP\@ @ AĆ< @ @@ 6` B @  6AL$@ @Lk0 @ xP!b")@ @ ` X @ @ yIA @ @l( Ņ @ @@lȃH  @ bC`(. @ @bCDR @ @  Cq!@ @ɴ]V^^8  @`Wr @/ `G/yA @ Zrʁ, @@ ௬`kq ?%C @ @--!@ @ 6mA @ o 6ma@ @W!m  @ x[!o #}@ @  o۠d @ @ y[C @ @l% @ @@lH @ >}vuIENDB`././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/doc/source/index.rst0000664000175000017500000000171700000000000017724 0ustar00zuulzuul00000000000000====================================== Welcome to CloudKitty's documentation! ====================================== .. NOTE: This is the index for the rst version only. If you update this file please update pdf-index.rst accordingly .. include:: common-index.rst Documentation contents ====================== .. list-table:: :header-rows: 1 * - Documentation type - Table of contents * - **Concepts** - .. toctree:: :maxdepth: 3 concepts/index * - **End User** - .. toctree:: :maxdepth: 3 user/index * - **Admin / Operator** - .. toctree:: :maxdepth: 2 admin/index * - **Developer** - .. toctree:: :maxdepth: 2 developer/index * - **Contributors** - .. toctree:: :maxdepth: 2 contributor/contributing * - **API Reference** - .. 
toctree:: :maxdepth: 2 api-reference/index ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/doc/source/pdf-index.rst0000664000175000017500000000111600000000000020464 0ustar00zuulzuul00000000000000:orphan: ====================================== Welcome to CloudKitty's documentation! ====================================== .. include:: common-index.rst Documentation contents ====================== End User -------- .. toctree:: :maxdepth: 3 user/index Admin / Operator ---------------- .. toctree:: :maxdepth: 2 admin/index Developer --------- .. toctree:: :maxdepth: 2 developer/index Contributors ------------ .. toctree:: :maxdepth: 2 contributor/contributing API Reference ------------- .. toctree:: :maxdepth: 2 api-reference/index ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1727866639.2794871 cloudkitty-21.0.0/doc/source/user/0000775000175000017500000000000000000000000017033 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/doc/source/user/index.rst0000664000175000017500000000015000000000000020670 0ustar00zuulzuul00000000000000================== User documentation ================== .. toctree:: :maxdepth: 2 rating/index ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1727866639.2794871 cloudkitty-21.0.0/doc/source/user/rating/0000775000175000017500000000000000000000000020317 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1727866639.2794871 cloudkitty-21.0.0/doc/source/user/rating/graph/0000775000175000017500000000000000000000000021420 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/doc/source/user/rating/graph/hashmap.dot0000664000175000017500000000231500000000000023552 0ustar00zuulzuul00000000000000digraph "Hashmap's data structure" { label="HashMap module data structure"; compound=true; compute; network [label="network.floating_ip"]; volume; subgraph cluster_0 { label="services"; style=dashed; {rank=same; compute -> network -> volume [style=invis];} } compute -> flavor; subgraph cluster_1 { label="fields:\nAssociate to metadata"; style=dashed; flavor; } // Mappings micro [label="value=m1.micro\ntype=flat\ncost=0.1"]; tiny [label="value=m1.tiny\ntype=flat\ncost=0.2"]; small [label="value=m1.small\ntype=flat\ncost=0.4"]; floating [label="\ntype=flat\ncost=0.5"]; // Thresholds 1024 [label="level=1024\ntype=flat\ncost=0.1"]; 10240 [label="level=10240\ntype=flat\ncost=0.2"]; subgraph cluster_2 { label="mappings"; style=dashed; {rank=same; micro -> tiny -> small -> floating [style=invis];} } subgraph cluster_3 { label="thresholds"; style=dashed; {rank=same; 1024 -> 10240 [style=invis];} } flavor -> micro; flavor -> tiny; flavor -> small; network -> floating; volume -> 1024; volume -> 10240; } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/doc/source/user/rating/hashmap.rst0000664000175000017500000003747500000000000022512 0ustar00zuulzuul00000000000000===================== Hashmap rating module ===================== CloudKitty is shipped with core rating modules. Hashmap composition =================== HashMap is composed of different resources and groups. 
Group
-----

A group is a way to group calculations of mappings. For example, you might want to apply one set of rules to rate instance uptime and another set to rate block storage volumes. You don't want the two to be linked, so you'll create one group for each calculation. See **mappings** for information about how groups impact rating.

Service
-------

A service is a way to map a rule to the type of data collected. One hashmap service must be created for each metric type you want to rate. If the metric has an ``alt_name``, the name of the hashmap service must match the ``alt_name``. If no ``alt_name`` is provided, use the name of the metric.

Example with the default configuration:

.. code-block:: yaml

    metrics:
      cpu:
        unit: instance
        alt_name: instance
        # [...]
      image.size:
        unit: MiB
        # [...]

In this case, ``cpu`` has an alt_name and ``image.size`` does not. Thus, the hashmap service for the cpu metric must be called ``instance`` and the service for images must be called ``image.size``.

Field
-----

A field refers to a metadata field of a resource. For example, on an instance object (in the ``instance`` service), you can use the flavor to define specific rules.

Each ``groupby`` and ``metadata`` attribute specified in the configuration can be used for a field:

.. code-block:: yaml

    metrics:
      cpu:
        unit: instance
        alt_name: instance
        groupby:
          - id
          - project_id
        metadata:
          - flavor_id
        # [...]
      volume.size:
        unit: GiB
        groupby:
          - id
          - project_id
        metadata:
          - volume_type
        # [...]

With the configuration above, the ``instance`` service could have the following fields:

* id
* project_id
* flavor_id

The ``volume.size`` service could have the following fields:

* id
* project_id
* volume_type

In this case, ``flavor_id`` and ``volume_type`` can be used to apply different pricing based on the flavor of an instance or the type of a volume.

Mapping
-------

A mapping is the final object: it is what triggers the calculation, for example a specific value of flavor on an instance. There are two kinds of mappings: **field** and **service** mappings.
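To visualize where mappings and thresholds attach before going into each kind of mapping, here is a purely illustrative sketch (plain Python, not CloudKitty's API or internal representation) of the structure encoded in the ``hashmap.dot`` graph shipped with this documentation:

.. code-block:: python

    # Illustrative only: the services, field, mappings and thresholds from
    # hashmap.dot, written down as a nested literal. CloudKitty keeps these
    # objects in its own database; this is just a mental model.
    hashmap = {
        "compute": {                        # service
            "flavor": {                     # field, matches a metadata attribute
                "m1.micro": ("flat", 0.1),  # mappings: value -> (type, cost)
                "m1.tiny": ("flat", 0.2),
                "m1.small": ("flat", 0.4),
            },
        },
        "network.floating_ip": ("flat", 0.5),  # service mapping, no field
        "volume": [                         # thresholds (described below):
            (1024, "flat", 0.1),            # (level, type, cost)
            (10240, "flat", 0.2),
        ],
    }

Groups are orthogonal to this tree: every mapping or threshold can be placed in a group so that its calculation stays separate from the others.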
Scope +++++ It is possible to tie a mapping to a specific scope/tenant_id. Threshold --------- A threshold entry is used to apply rating rules only after a specific level. Apart from that, it works the same way as a mapping. As for mappings, a threshold can be tied to a specific scope/project. Cost ---- The cost option is the actual cost for the rating period. It has a precision of 28 decimal digits (on the right side of the decimal point), and 12 digits on the left side of the decimal point (the integer part of the number). Examples ======== Instance uptime --------------- Apply rating rules to rate instances based on their flavor_id and uptime: Create an ``instance_uptime_flavor_id`` group: .. code-block:: console $ cloudkitty hashmap group create instance_uptime_flavor_id +---------------------------+--------------------------------------+ | Name | Group ID | +---------------------------+--------------------------------------+ | instance_uptime_flavor_id | 9a2ff37d-be86-4642-8b7d-567bace61f06 | +---------------------------+--------------------------------------+ $ cloudkitty hashmap group list +---------------------------+--------------------------------------+ | Name | Group ID | +---------------------------+--------------------------------------+ | instance_uptime_flavor_id | 9a2ff37d-be86-4642-8b7d-567bace61f06 | +---------------------------+--------------------------------------+ Create the service matching rule: .. code-block:: console $ cloudkitty hashmap service create instance +----------+--------------------------------------+ | Name | Service ID | +----------+--------------------------------------+ | instance | b19d801d-e7d4-46f9-970b-3e6d60fc07b5 | +----------+--------------------------------------+ Create a field matching rule: .. code-block:: console $ cloudkitty hashmap field create b19d801d-e7d4-46f9-970b-3e6d60fc07b5 flavor_id +-----------+--------------------------------------+--------------------------------------+ | Name | Field ID | Service ID | +-----------+--------------------------------------+--------------------------------------+ | flavor_id | 18aa50b6-6da8-4c47-8a1f-43236b971625 | b19d801d-e7d4-46f9-970b-3e6d60fc07b5 | +-----------+--------------------------------------+--------------------------------------+ Create a mapping in the ``instance_uptime_flavor`` group that will map m1.tiny instance to a cost of 0.01: .. 
code-block:: console $ openstack flavor show m1.tiny +----------------------------+----------------------------------------+ | Field | Value | +----------------------------+----------------------------------------+ | OS-FLV-DISABLED:disabled | False | | OS-FLV-EXT-DATA:ephemeral | 0 | | access_project_ids | None | | disk | 20 | | id | 93195dd4-bbf3-4b13-929d-8293ae72e056 | | name | m1.tiny | | os-flavor-access:is_public | True | | properties | baremetal='false', flavor-type='small' | | ram | 512 | | rxtx_factor | 1.0 | | swap | | | vcpus | 1 | +----------------------------+----------------------------------------+ $ cloudkitty hashmap mapping create 0.01 \ --field-id 18aa50b6-6da8-4c47-8a1f-43236b971625 \ --value 93195dd4-bbf3-4b13-929d-8293ae72e056 \ -g 9a2ff37d-be86-4642-8b7d-567bace61f06 \ -t flat +--------------------------------------+--------------------------------------+------------+------+--------------------------------------+------------+--------------------------------------+------------+ | Mapping ID | Value | Cost | Type | Field ID | Service ID | Group ID | Project ID | +--------------------------------------+--------------------------------------+------------+------+--------------------------------------+------------+--------------------------------------+------------+ | 9c2418dc-99d3-44b6-8fdf-e9fa02f3ceb5 | 93195dd4-bbf3-4b13-929d-8293ae72e056 | 0.01000000 | flat | 18aa50b6-6da8-4c47-8a1f-43236b971625 | None | 9a2ff37d-be86-4642-8b7d-567bace61f06 | None | +--------------------------------------+--------------------------------------+------------+------+--------------------------------------+------------+--------------------------------------+------------+ In this example every machine in any project with the flavor m1.tiny will be rated 0.01 per collection period. Volume per GiB with discount ---------------------------- Now let's do some threshold based rating. Create a ``volume_thresholds`` group: .. code-block:: console $ cloudkitty hashmap group create volume_thresholds +-------------------+--------------------------------------+ | Name | Group ID | +-------------------+--------------------------------------+ | volume_thresholds | 9736bbc0-8888-4700-96fc-58db5fded493 | +-------------------+--------------------------------------+ $ cloudkitty hashmap group list +-------------------+--------------------------------------+ | Name | Group ID | +-------------------+--------------------------------------+ | volume_thresholds | 9736bbc0-8888-4700-96fc-58db5fded493 | +-------------------+--------------------------------------+ Create the service matching rule: .. code-block:: console $ cloudkitty hashmap service create volume.size +-------------+--------------------------------------+ | Name | Service ID | +-------------+--------------------------------------+ | volume.size | 74ad7e4e-9cae-45a8-884b-368a92803afe | +-------------+--------------------------------------+ Now let's setup the price per gigabyte: .. 
code-block:: console $ cloudkitty hashmap mapping create 0.001 \ -s 74ad7e4e-9cae-45a8-884b-368a92803afe \ -t flat -g 9736bbc0-8888-4700-96fc-58db5fded493 +--------------------------------------+-------+------------+------+----------+--------------------------------------+--------------------------------------+------------+ | Mapping ID | Value | Cost | Type | Field ID | Service ID | Group ID | Project ID | +--------------------------------------+-------+------------+------+----------+--------------------------------------+--------------------------------------+------------+ | 09e36b13-ce89-4bd0-bbf1-1b80577031e8 | None | 0.00100000 | flat | None | 74ad7e4e-9cae-45a8-884b-368a92803afe | 9736bbc0-8888-4700-96fc-58db5fded493 | None | +--------------------------------------+-------+------------+------+----------+--------------------------------------+--------------------------------------+------------+ We have the basic price per gigabyte be we now want to apply a discount on huge data volumes. Create the thresholds in the group *volume_thresholds* that will map different volume quantities to costs: Here we set a threshold when going past 50GiB, and apply a 2% discount (0.98): .. code-block:: console $ cloudkitty hashmap threshold create 50 0.98 \ -s 74ad7e4e-9cae-45a8-884b-368a92803afe \ -t rate -g 9736bbc0-8888-4700-96fc-58db5fded493 +--------------------------------------+-------------+------------+------+----------+--------------------------------------+--------------------------------------+------------+ | Threshold ID | Level | Cost | Type | Field ID | Service ID | Group ID | Project ID | +--------------------------------------+-------------+------------+------+----------+--------------------------------------+--------------------------------------+------------+ | ae02175d-beff-4b01-bb3a-00907b05fe66 | 50.00000000 | 0.98000000 | rate | None | 74ad7e4e-9cae-45a8-884b-368a92803afe | 9736bbc0-8888-4700-96fc-58db5fded493 | None | +--------------------------------------+-------------+------------+------+----------+--------------------------------------+--------------------------------------+------------+ Here we set the same threshold for project 2d5b39657dc542d4b2a14b685335304e but with a 3% discount (0.97): .. code-block:: console $ cloudkitty hashmap threshold create 50 0.97 \ -s 74ad7e4e-9cae-45a8-884b-368a92803afe \ -t rate -g 9736bbc0-8888-4700-96fc-58db5fded493 \ -p 2d5b39657dc542d4b2a14b685335304e +--------------------------------------+-------------+------------+------+----------+--------------------------------------+--------------------------------------+----------------------------------+ | Threshold ID | Level | Cost | Type | Field ID | Service ID | Group ID | Project ID | +--------------------------------------+-------------+------------+------+----------+--------------------------------------+--------------------------------------+----------------------------------+ | b20504bf-da34-434c-909d-46c2168c6166 | 50.00000000 | 0.97000000 | rate | None | 74ad7e4e-9cae-45a8-884b-368a92803afe | 9736bbc0-8888-4700-96fc-58db5fded493 | 2d5b39657dc542d4b2a14b685335304e | +--------------------------------------+-------------+------------+------+----------+--------------------------------------+--------------------------------------+----------------------------------+ Here we set a threshold when going past 200GiB, and apply a 5% discount (0.95): .. 
code-block:: console $ cloudkitty hashmap threshold create 200 0.95 \ -s 74ad7e4e-9cae-45a8-884b-368a92803afe \ -t rate -g 9736bbc0-8888-4700-96fc-58db5fded493 +--------------------------------------+--------------+------------+------+----------+--------------------------------------+--------------------------------------+------------+ | Threshold ID | Level | Cost | Type | Field ID | Service ID | Group ID | Project ID | +--------------------------------------+--------------+------------+------+----------+--------------------------------------+--------------------------------------+------------+ | ed9fd297-37d4-4d9c-8f65-9919d554617b | 200.00000000 | 0.95000000 | rate | None | 74ad7e4e-9cae-45a8-884b-368a92803afe | 9736bbc0-8888-4700-96fc-58db5fded493 | None | +--------------------------------------+--------------+------------+------+----------+--------------------------------------+--------------------------------------+------------+ In this example every volume is rated 0.001 per GiB but if the size goes past 50GiB you'll get a 2% discount, if you even go further you'll get 5% discount (only one level apply at a time). For project 2d5b39657dc542d4b2a14b685335304e only, you'll get a 3% discount instead of 2% when the size goes past 50GiB and the same %5 discount it goes further. :20GiB: 0.02 per collection period. :50GiB: 0.049 per collection period (0.0485 for project 2d5b39657dc542d4b2a14b685335304e). :80GiB: 0.0784 per collection period (0.0776 for project 2d5b39657dc542d4b2a14b685335304e). :250GiB: 0.2375 per collection period. ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/doc/source/user/rating/index.rst0000664000175000017500000000415500000000000022165 0ustar00zuulzuul00000000000000====== Rating ====== CloudKitty is shipped with three rating modules: * ``noop``: Rating module for testing purpose (enabled only). * ``hashmap``: Default rating module corresponding to usual CloudKitty use cases (disabled by default). * ``pyscripts``: Custom rating module allowing you to add your own python scripts (disabled by default). You can enable or disable each module independently and prioritize one over another at will. * ``Enabled`` state is represented by a boolean value (``True`` or ``False``). * ``Priority`` is represented by an integer value. .. note:: The module with the biggest priority value will process data first (descending order). List available modules ====================== List available rating modules: .. code-block:: console $ cloudkitty module list +-----------+---------+----------+ | Module | Enabled | Priority | +-----------+---------+----------+ | hashmap | False | 1 | | noop | True | 1 | | pyscripts | False | 1 | +-----------+---------+----------+ Enable or disable module ======================== Enable the hashmap rating module: .. code-block:: console $ cloudkitty module enable hashmap +---------+---------+----------+ | Module | Enabled | Priority | +---------+---------+----------+ | hashmap | True | 1 | +---------+---------+----------+ Disable the pyscripts rating module: .. code-block:: console $ cloudkitty module disable pyscripts +-----------+---------+----------+ | Module | Enabled | Priority | +-----------+---------+----------+ | pyscripts | False | 1 | +-----------+---------+----------+ Set priority ============ Set the hashmap rating module priority to 100: .. 
code-block:: console $ cloudkitty module set priority hashmap 100 +---------+---------+----------+ | Module | Enabled | Priority | +---------+---------+----------+ | hashmap | True | 100 | +---------+---------+----------+ More details ============ .. toctree:: :maxdepth: 2 :glob: hashmap.rst pyscripts.rst ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/doc/source/user/rating/pyscripts.rst0000664000175000017500000001237300000000000023117 0ustar00zuulzuul00000000000000======================= PyScripts rating module ======================= The PyScripts module allows you to create your own rating module. A script is supposed to process the given data and to set the different prices. CAUTION: If you add several PyScripts, the order in which they will be executed is not guaranteed. Custom module example ===================== Price definitions ----------------- .. code-block:: python import decimal # Price for each flavor. These are equivalent to hashmap field mappings. flavors = { 'm1.micro': decimal.Decimal(0.65), 'm1.nano': decimal.Decimal(0.35), 'm1.large': decimal.Decimal(2.67) } # Price per MB / GB for images and volumes. These are equivalent to # hashmap service mappings. image_mb_price = decimal.Decimal(0.002) volume_gb_price = decimal.Decimal(0.35) Price calculation functions --------------------------- .. code-block:: python # These functions return the price of a service usage on a collect period. # The price is always equivalent to the price per unit multiplied by # the quantity. def get_instance_price(item): if not item['metadata']['flavor_name'] in flavors: return 0 else: return (decimal.Decimal(item['vol']['qty']) * flavors[item['metadata']['flavor_name']]) def get_image_price(item): if not item['vol']['qty']: return 0 else: return decimal.Decimal(item['vol']['qty']) * image_mb_price def get_volume_price(item): if not item['vol']['qty']: return 0 else: return decimal.Decimal(item['vol']['qty']) * volume_gb_price # Mapping each service to its price calculation function services = { 'instance': get_instance_price, 'volume': get_volume_price, 'image': get_image_price } Processing the data ------------------- .. code-block:: python def process(data): # The 'data' is a dictionary with the usage entries for each service # in a given period. usage_data = data['usage'] for service_name, service_data in usage_data.items(): # Do not calculate the price if the service has no # price calculation function if service_name in services.keys(): # A service can have several items. For example, # each running instance is an item of the compute service for item in service_data: item['rating'] = {'price': services[service_name](item)} return data # 'data' is passed as a global variable. The script is supposed to set the # 'rating' element of each item in each service data = process(data) Using your Script for rating ============================ Enabling the PyScripts module ----------------------------- To use your script for rating, you will need to enable the pyscripts module .. code-block:: console $ cloudkitty module enable pyscripts +-----------+---------+----------+ | Module | Enabled | Priority | +-----------+---------+----------+ | pyscripts | True | 1 | +-----------+---------+----------+ Adding the script to CloudKitty ------------------------------- Create the script and specify its name. .. 
code-block:: console $ cloudkitty pyscript create my_awesome_script script.py +-------------------+--------------------------------------+------------------------------------------+---------------------------------------+ | Name | Script ID | Checksum | Data | +-------------------+--------------------------------------+------------------------------------------+---------------------------------------+ | my_awesome_script | 78e1955a-4e7e-47e3-843c-524d8e6ad4c4 | 49e889018eb86b2035437ebb69093c0b6379f18c | from __future__ import print_function | | | | | from cloudkitty import rating | | | | | | | | | | import decimal | | | | | | | | | | {...} | | | | | | | | | | data = process(data) | | | | | | +-------------------+--------------------------------------+------------------------------------------+---------------------------------------+ ././@PaxHeader0000000000000000000000000000003300000000000011451 xustar000000000000000027 mtime=1727866639.219486 cloudkitty-21.0.0/etc/0000775000175000017500000000000000000000000014563 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1727866639.2794871 cloudkitty-21.0.0/etc/apache2/0000775000175000017500000000000000000000000016066 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/etc/apache2/cloudkitty0000664000175000017500000000264700000000000020215 0ustar00zuulzuul00000000000000# Copyright (c) 2013 New Dream Network, LLC (DreamHost) # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # This is an example Apache2 configuration file for using the # cloudkitty API through mod_wsgi. # Note: If you are using a Debian-based system then the paths # "/var/log/httpd" and "/var/run/httpd" will use "apache2" instead # of "httpd". # # The number of processes and threads is an example only and should # be adjusted according to local requirements. 
Listen 8889 <VirtualHost *:8889> WSGIDaemonProcess cloudkitty-api processes=2 threads=10 user=SOMEUSER display-name=%{GROUP} WSGIProcessGroup cloudkitty-api WSGIScriptAlias / /var/www/cloudkitty/app.wsgi WSGIApplicationGroup %{GLOBAL} <IfVersion >= 2.4> ErrorLogFormat "%{cu}t %M" </IfVersion> ErrorLog /var/log/httpd/cloudkitty_error.log CustomLog /var/log/httpd/cloudkitty_access.log combined </VirtualHost> WSGISocketPrefix /var/run/httpd ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1727866639.2794871 cloudkitty-21.0.0/etc/cloudkitty/0000775000175000017500000000000000000000000016756 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/etc/cloudkitty/api_paste.ini0000664000175000017500000000161200000000000021424 0ustar00zuulzuul00000000000000[pipeline:cloudkitty+noauth] pipeline = cors healthcheck http_proxy_to_wsgi request_id ck_api [pipeline:cloudkitty+keystone] pipeline = cors healthcheck http_proxy_to_wsgi request_id authtoken ck_api [app:ck_api] paste.app_factory = cloudkitty.api.app:app_factory [filter:authtoken] acl_public_routes = /, /v1, /v2, /healthcheck paste.filter_factory = cloudkitty.api.middleware:AuthTokenMiddleware.factory [filter:request_id] paste.filter_factory = oslo_middleware:RequestId.factory [filter:cors] paste.filter_factory = oslo_middleware.cors:filter_factory oslo_config_project = cloudkitty [filter:healthcheck] paste.filter_factory = oslo_middleware:Healthcheck.factory backends = disable_by_file disable_by_file_path = /etc/cloudkitty/healthcheck_disable [filter:http_proxy_to_wsgi] paste.filter_factory = oslo_middleware.http_proxy_to_wsgi:HTTPProxyToWSGI.factory oslo_config_project = cloudkitty ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/etc/cloudkitty/metrics.yml0000664000175000017500000000335100000000000021151 0ustar00zuulzuul00000000000000metrics: cpu: unit: instance alt_name: instance groupby: - id - user_id - project_id metadata: - flavor_name - flavor_id - vcpus mutate: NUMBOOL extra_args: aggregation_method: mean resource_type: instance force_granularity: 300 image.size: unit: MiB factor: 1/1048576 groupby: - id - user_id - project_id metadata: - container_format - disk_format extra_args: aggregation_method: mean resource_type: image volume.size: unit: GiB groupby: - id - user_id - project_id metadata: - volume_type extra_args: aggregation_method: mean resource_type: volume force_granularity: 300 network.outgoing.bytes.rate: unit: MB groupby: - id - project_id - user_id # Converting B/s to MB/h factor: 3600/1000000 metadata: - instance_id extra_args: aggregation_method: mean resource_type: instance_network_interface network.incoming.bytes.rate: unit: MB groupby: - id - project_id - user_id # Converting B/s to MB/h factor: 3600/1000000 metadata: - instance_id extra_args: aggregation_method: mean resource_type: instance_network_interface ip.floating: unit: ip groupby: - id - user_id - project_id metadata: - state mutate: NUMBOOL extra_args: aggregation_method: mean resource_type: network radosgw.objects.size: unit: GiB groupby: - id - user_id - project_id factor: 1/1073741824 extra_args: aggregation_method: mean resource_type: ceph_account ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1727866639.2794871 cloudkitty-21.0.0/etc/oslo-config-generator/0000775000175000017500000000000000000000000020766
5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/etc/oslo-config-generator/cloudkitty.conf0000664000175000017500000000052500000000000024032 0ustar00zuulzuul00000000000000[DEFAULT] output_file = etc/cloudkitty/cloudkitty.conf.sample namespace = cloudkitty.common.config namespace = oslo.concurrency namespace = oslo.db namespace = oslo.log namespace = oslo.messaging namespace = oslo.middleware.http_proxy_to_wsgi namespace = oslo.middleware.cors namespace = oslo.policy namespace = keystonemiddleware.auth_token././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1727866639.2794871 cloudkitty-21.0.0/etc/oslo-policy-generator/0000775000175000017500000000000000000000000021020 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/etc/oslo-policy-generator/cloudkitty.conf0000664000175000017500000000012100000000000024054 0ustar00zuulzuul00000000000000[DEFAULT] output_file = etc/cloudkitty/policy.yaml.sample namespace = cloudkitty ././@PaxHeader0000000000000000000000000000003300000000000011451 xustar000000000000000027 mtime=1727866639.219486 cloudkitty-21.0.0/releasenotes/0000775000175000017500000000000000000000000016501 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1727866639.2954874 cloudkitty-21.0.0/releasenotes/notes/0000775000175000017500000000000000000000000017631 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/releasenotes/notes/add-dataframe-datapoint-objects-a5a4ac3db5289cb6.yaml0000664000175000017500000000015100000000000031144 0ustar00zuulzuul00000000000000--- other: - | Data frames/points are now internally represented as objects rather than dicts. ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/releasenotes/notes/add-dataframes-v2-api-endpoint-601825c344ba0e2d.yaml0000664000175000017500000000035600000000000030407 0ustar00zuulzuul00000000000000--- features: - | Added a v2 API endpoint allowing to push dataframes into the CloudKitty storage. This endpoint is available via a ``POST`` request on ``/v2/dataframes``. Admin privileges are required to use this endpoint. ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/releasenotes/notes/add-description-option-to-rating-671430ac73c0315b.yaml0000664000175000017500000000023100000000000031030 0ustar00zuulzuul00000000000000--- features: - | Add description option to a rating metric definition, which can be used to create custom reports in the ``summary`` GET API. ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/releasenotes/notes/add-gnocchi-fetcher-b8a6e2ea49fcfec5.yaml0000664000175000017500000000015400000000000027017 0ustar00zuulzuul00000000000000--- features: - | A fetcher for gnocchi has been added, allowing dynamic scope/project discovery. 
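The Gnocchi scope fetcher mentioned in the note above is enabled through ``cloudkitty.conf``. The following is a minimal sketch: the ``[fetcher] backend`` option is used as in the other fetchers, while the ``[fetcher_gnocchi]`` ``scope_attribute`` and ``resource_types`` options shown here are assumptions for illustration; check the fetcher documentation of your release for the authoritative option names.

.. code-block:: ini

   [fetcher]
   # Use the Gnocchi scope fetcher instead of the default keystone fetcher
   backend = gnocchi

   [fetcher_gnocchi]
   # Resource attribute used as the scope identifier (assumed default: project_id)
   scope_attribute = project_id
   # Gnocchi resource types inspected when discovering scopes (illustrative)
   resource_types = generic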
././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/releasenotes/notes/add-influx-storage-backend-3ace5b451e789e64.yaml0000664000175000017500000000052000000000000030022 0ustar00zuulzuul00000000000000--- features: - | An InfluxDB v2 storage backend has been added. It will become the default backend of the v2 storage interface. The v1 storage interface will be deprecated in a future release. At that point, documentation about how to upgrade the storage backend will be made available, along with some helpers. ././@PaxHeader0000000000000000000000000000024500000000000011456 xustar0000000000000000143 path=cloudkitty-21.0.0/releasenotes/notes/add-new-validation-to-not-allow-reprocessing-with-incompatible-timewindows-5a44802f20bce4f2.yaml 22 mtime=1727866580.0 cloudkitty-21.0.0/releasenotes/notes/add-new-validation-to-not-allow-reprocessing-with-incompatible-0000664000175000017500000000033600000000000033752 0ustar00zuulzuul00000000000000--- fixes: - | Add a validation to not allow users to schedule reprocesses via ``POST`` request on ``/v2/task/reprocesses`` using a time window not compatible with the configured ``period`` in the collector. ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/releasenotes/notes/add-opensearch-as-v2-storage-backend-ff4080d6d32d8a2a.yaml0000664000175000017500000000041300000000000031635 0ustar00zuulzuul00000000000000--- features: - | OpenSearch has been added as an alternative v2 storage backend. It is a duplicate of the ElasticSearch backend, with the naming changed where appropriate. This change is in support of the deprecation of ElasticSearch as a backend. ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/releasenotes/notes/add-prometheus-fetcher-be6082f70f279f0e.yaml0000664000175000017500000000067600000000000027311 0ustar00zuulzuul00000000000000--- features: - | A Prometheus scope fetcher has been added in order to dynamically discover scopes from a Prometheus service using a user defined metric and a scope attribute. It can also filter out the response from Prometheus using metadata filters to have a more fine-grained control over scope discovery. It features HTTP basic auth capabilities and HTTPS configuration options similar to Prometheus collector. ././@PaxHeader0000000000000000000000000000021500000000000011453 xustar0000000000000000119 path=cloudkitty-21.0.0/releasenotes/notes/add-re-aggregation-method-option-gnocchi-collector-249917a14c4fc721.yaml 22 mtime=1727866580.0 cloudkitty-21.0.0/releasenotes/notes/add-re-aggregation-method-option-gnocchi-collector-249917a14c4f0000664000175000017500000000042000000000000033031 0ustar00zuulzuul00000000000000--- features: - | It is now possible to differentiate the aggregation method from the aggregate type in the gnocchi collector, in case the retrieved aggregates need to be re-aggregated. This has been introduced with the ``re_aggregation_method`` option. 
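To illustrate how ``re_aggregation_method`` fits into a ``metrics.yml`` entry, here is a sketch based on the sample file shipped in ``etc/cloudkitty/metrics.yml``; placing the option under ``extra_args`` and the ``max`` value are assumptions for illustration only.

.. code-block:: yaml

   metrics:
     cpu:
       unit: instance
       extra_args:
         # aggregate type requested from Gnocchi
         aggregation_method: mean
         # hypothetical re-aggregation applied on top of the fetched aggregates
         re_aggregation_method: max
         resource_type: instance
         force_granularity: 300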
././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/releasenotes/notes/add-scope-key-58135c2a5c6dae68.yaml0000664000175000017500000000023600000000000025362 0ustar00zuulzuul00000000000000--- other: - | The "scope_key" option is now defined in cloudkitty.conf and has been removed from the cloudkitty and monasca collector's extra_args ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/releasenotes/notes/add-storage-state-v2-api-endpoint-45a29d0b44e177b8.yaml0000664000175000017500000000036600000000000031102 0ustar00zuulzuul00000000000000--- features: - | Added a v2 API endpoint allowing to retrieve the state of several scopes. This endpoint is available via a ``GET`` request on ``/v2/scope`` and supports filters. Admin privileges are required to use this endpoint. ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/releasenotes/notes/add-storage-state-v2-api-endpoint-492d7092e85ed7b1.yaml0000664000175000017500000000036300000000000031110 0ustar00zuulzuul00000000000000--- features: - | Added a v2 API endpoint allowing to reset the state of several scopes. This endpoint is available via a ``PUT`` request on ``/v2/scope`` and supports filters. Admin privileges are required to use this endpoint. ././@PaxHeader0000000000000000000000000000020500000000000011452 xustar0000000000000000111 path=cloudkitty-21.0.0/releasenotes/notes/add-support-to-influxdb-v2-storage-backend-f94df79f9e5276a8.yaml 22 mtime=1727866580.0 cloudkitty-21.0.0/releasenotes/notes/add-support-to-influxdb-v2-storage-backend-f94df79f9e5276a8.yam0000664000175000017500000000011600000000000032661 0ustar00zuulzuul00000000000000--- features: - | Add support to Influx v2 database as storage backend. ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/releasenotes/notes/add-tempest-plugin-3584e1918f344fb2.yaml0000664000175000017500000000015200000000000026302 0ustar00zuulzuul00000000000000--- other: - | A tempest plugin has been created for CloudKitty, and it is used for gate tests. ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/releasenotes/notes/add-v2-storage-driver-for-elasticsearch-ec41cbb7849e82d3.yaml0000664000175000017500000000017400000000000032426 0ustar00zuulzuul00000000000000--- features: - | A v2 storage driver for Elasticsearch has been added. It is marked as ``EXPERIMENTAL`` for now. ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/releasenotes/notes/add_warning_regarding_gnocchi_version-99d5213c35950e39.yaml0000664000175000017500000000130400000000000032261 0ustar00zuulzuul00000000000000--- fixes: - | CloudKitty will always use the correct metadata for the processing and reprocessing jobs. This means, we always use the metadata for the timestamp that we are collecting at Gnocchi backend.This is achieved with the use of ``use_history=true`` in Gnocchi, which was released under `version 4.5.0 `__. Before that release, the ``aggregates`` API would only return the latest metadata for the resource of the metric being handled. 
Therefore, for CloudKitty processing and reprocessing, we would always have the possibility of using the wrong attribute version to rate the computing resources.././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/releasenotes/notes/added-forced-granularity-gnocchi-d52e988194197248.yaml0000664000175000017500000000042200000000000030726 0ustar00zuulzuul00000000000000--- features: - | A ``force_granularity`` option has been added to the gnocchi collector's ``extra_args``. It allows to force a granularity to use when doing metric aggregations. If not specified or set to 0, the lowest available granularity will be used. ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/releasenotes/notes/added-v2-api-1ef829355c2feea4.yaml0000664000175000017500000000047300000000000025161 0ustar00zuulzuul00000000000000--- features: - | A v2 API has been bootstrapped. It is compatible with the v2 storage and will be the base for all upcoming API endpoints. It is marked as ``EXPERIMENTAL`` for now. upgrade: - | The v1 API is now marked as ``CURRENT``. The API root is now built with Flask instead of pecan ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/releasenotes/notes/admin-or-owner-policy-c666346da4405d13.yaml0000664000175000017500000000033500000000000026722 0ustar00zuulzuul00000000000000--- fixes: - | Fix the definition of the ``admin_or_owner`` policy expression, which was preventing even admins from using the ``get_summary`` endpoint with ``all_tenants=True`` or ``tenant_id`` parameters. ././@PaxHeader0000000000000000000000000000022200000000000011451 xustar0000000000000000124 path=cloudkitty-21.0.0/releasenotes/notes/allow-multiple-ranting-types-for-same-metric-in-gnocchi-1011ba2d5d36c073.yaml 22 mtime=1727866580.0 cloudkitty-21.0.0/releasenotes/notes/allow-multiple-ranting-types-for-same-metric-in-gnocchi-1011ba20000664000175000017500000000042200000000000033274 0ustar00zuulzuul00000000000000--- features: - | Extends the Gnocchi collector to allow operators to create multiple rating types for the same metric in Gnocchi. The Gnocchi collector is now accepting a list of configs for each metric entry, instead of accepting only one configuration. ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/releasenotes/notes/batch-delete-reprocessing-d46df15b078a42a5.yaml0000664000175000017500000000021000000000000027741 0ustar00zuulzuul00000000000000--- features: - | Optimized the reprocessing workflow to execute batch cleaning of data in the storage backend of CloudKitty. ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/releasenotes/notes/change-metrology-organization-1e11900eb30780cc.yaml0000664000175000017500000000026600000000000030604 0ustar00zuulzuul00000000000000--- other: - | Cloudkitty now bases itself on metrics and no more on services for data valorization. The metrics.yml file information has been reorganized accordingly. 
././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/releasenotes/notes/check-duplicates-metadata-groupby-d5ee99941bb483fd.yaml0000664000175000017500000000022700000000000031506 0ustar00zuulzuul00000000000000--- upgrade: - | When the config is loaded, there is now a verification for duplicates between ``groupby`` and ``metadata`` for each metric. ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/releasenotes/notes/collector-monasca-f0871406513ff22c.yaml0000664000175000017500000000023700000000000026175 0ustar00zuulzuul00000000000000--- features: - | A collector for Monasca has been added. It works with telemetry metrics published to Monasca by Ceilometer agent through Ceilosca. ././@PaxHeader0000000000000000000000000000023300000000000011453 xustar0000000000000000133 path=cloudkitty-21.0.0/releasenotes/notes/create-use_all_entries_for_timespan-option-for-gnocchi-collector-39d29603b1f554e1.yaml 22 mtime=1727866580.0 cloudkitty-21.0.0/releasenotes/notes/create-use_all_entries_for_timespan-option-for-gnocchi-collecto0000664000175000017500000000013600000000000034244 0ustar00zuulzuul00000000000000--- features: - | Create the option 'use_all_resource_revisions' for Gnocchi collector. ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/releasenotes/notes/custom-gnocchi-query-a391f5e83d55d771.yaml0000664000175000017500000000040600000000000026751 0ustar00zuulzuul00000000000000--- features: - | Enable using custom queries with the Gnocchi collector. This option enables operators to take full advantage of the operations that are available on Gnocchi such as any arithmetic operation, logical operation and many others. ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/releasenotes/notes/dataframes-get-v2-policy-check-6070fc047b2e1496.yaml0000664000175000017500000000036000000000000030350 0ustar00zuulzuul00000000000000--- fixes: - | Fixes policy check when getting dataframes using the v2 API, causing the operation to fail when run by a non-admin user. See story 2009879 `_ for more details. ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/releasenotes/notes/default-to-v2-storage-a5ecac7e73dafa6d.yaml0000664000175000017500000000075400000000000027351 0ustar00zuulzuul00000000000000--- upgrade: - | CloudKitty's storage interface defaults to v2 from now on. v1 will be deprecated in a future release. Documentation about how to upgrade the storage backend along with some tools will be available at that point. New deployments should use the v2 storage interface. The default v2 backend is ``influxdb``. In order to keep using ``sqlalchemy``, specify "version = 1" and "backend = sqlalchemy" in the ``[storage]`` section of the configuration. ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/releasenotes/notes/deprecate-ceilometer-collector-6d8f72c84b95662b.yaml0000664000175000017500000000026300000000000030741 0ustar00zuulzuul00000000000000--- deprecations: - | The ceilometer collector has been deprecated. Gnocchi should be used by default. The collector will be removed during the Rocky development cycle. 
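To make the ``default-to-v2-storage`` note above concrete, the two configurations it describes would look as follows in ``cloudkitty.conf``; this sketch only uses the options quoted in that note.

.. code-block:: ini

   # Default from this release onwards: v2 storage interface backed by InfluxDB
   [storage]
   version = 2
   backend = influxdb

   # To keep the previous behaviour, pin the v1 interface and SQLAlchemy instead:
   # [storage]
   # version = 1
   # backend = sqlalchemy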
././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/releasenotes/notes/deprecate-collector-mappings-5a69b31c8037fc01.yaml0000664000175000017500000000023100000000000030374 0ustar00zuulzuul00000000000000--- deprecations: - | Collector mappings have been deprecated and should not be used anymore. They will be removed during OpenStack's S cycle. ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/releasenotes/notes/deprecate-elasticsearch-for-opensearch-a338965edff23509.yaml0000664000175000017500000000041500000000000032342 0ustar00zuulzuul00000000000000--- deprecations: - | Support for using Elasticsearch as a storage backend is being deprecated in the Antelope release in favour of OpenSearch. We will try to keep CloudKitty compatible with both solutions. However, we will only test with OpenSearch. ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/releasenotes/notes/deprecate-get-state-2932a4e6a74295ce.yaml0000664000175000017500000000023200000000000026502 0ustar00zuulzuul00000000000000--- upgrade: - | The ``storage_state.get_state`` method has been removed in favor of the ``storage_state.get_last_processed_timestamp`` method. ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/releasenotes/notes/deprecate-info-services-endpoints-0c5018cb08a30d5f.yaml0000664000175000017500000000052600000000000031425 0ustar00zuulzuul00000000000000--- deprecations: - | The /v1/info/services and /v1/info/services/ endpoints have been deprecated. The /v1/info/metrics and /v1/info/metrics/ endpoints should be used instead. The whole /v1/info API part is currently being reworked, and some endpoints will also be deprecated and deleted in the future. ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/releasenotes/notes/deprecate-json-formatted-policy-file-01ceb65712fd0a39.yaml0000664000175000017500000000176000000000000032025 0ustar00zuulzuul00000000000000--- upgrade: - | The default value of ``[oslo_policy] policy_file`` config option has been changed from ``policy.json`` to ``policy.yaml``. Operators who are utilizing customized or previously generated static policy JSON files (which are not needed by default), should generate new policy files or convert them in YAML format. Use the `oslopolicy-convert-json-to-yaml `_ tool to convert a JSON to YAML formatted policy file in backward compatible way. deprecations: - | Use of JSON policy files was deprecated by the ``oslo.policy`` library during the Victoria development cycle. As a result, this deprecation is being noted in the Wallaby cycle with an anticipated future removal of support by ``oslo.policy``. As such operators will need to convert to YAML policy files. Please see the upgrade notes for details on migration of any custom policy files. ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/releasenotes/notes/deprecate-monasca-5526b823b227c6ef.yaml0000664000175000017500000000027700000000000026235 0ustar00zuulzuul00000000000000--- deprecations: - | Support for using Monasca as a fetcher and collector is being deprecated in the Antelope release. 
The complete removal is going to take place in B release.././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/releasenotes/notes/deprecate-report-total-62544dce42bb19a6.yaml0000664000175000017500000000017300000000000027315 0ustar00zuulzuul00000000000000--- deprecations: - | The /v1/report/total route has been deprecated. /v1/report/summary should be used instead. ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/releasenotes/notes/deprecate_section_name-9f1ce1f84d09adf8.yaml0000664000175000017500000000044300000000000027566 0ustar00zuulzuul00000000000000--- deprecations: - | The 'gnocchi_collector', 'hybrid_storage', 'keystone_fetcher' and 'source_fetcher' group names have been deprecated and will be removed in the future. Use the 'collector_gnocchi', 'storage_hybrid', 'fetcher_keystone' and 'fetcher_source' group names instead. ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/releasenotes/notes/drop-py-2-7-fcf8c0613a7bffa8.yaml0000664000175000017500000000032200000000000025047 0ustar00zuulzuul00000000000000--- upgrade: - | Python 2.7 support has been dropped. Last release of Cloudkitty to support python 2.7 is OpenStack Train. The minimum version of Python now supported by Cloudkitty is Python 3.6. ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/releasenotes/notes/fetch-metrics-concurrently-dffffe346bd4900e.yaml0000664000175000017500000000050100000000000030436 0ustar00zuulzuul00000000000000--- upgrade: - | Metrics are now fetched concurrently with ``eventlet`` instead of one after another by the orchestrator, leading to a consequent performance improvement. The maximum number of greenthreads to use can be specified through the ``max_greenthreads`` option of the ``orchestrator`` section. ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/releasenotes/notes/fix-begin-end-validation-v2-summary-52401fb47ef9b5d6.yaml0000664000175000017500000000031200000000000031516 0ustar00zuulzuul00000000000000--- fixes: - | A validation issue causing the ``GET /v2/summary`` endpoint to systematically return a 400 error if any of the ``begin`` or ``end`` parameters was specified has been fixed. ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/releasenotes/notes/fix-csv-usage-end-7bcf4cb5effc4461.yaml0000664000175000017500000000012400000000000026377 0ustar00zuulzuul00000000000000--- fixes: - | The value of the UsageEnd field in CSV reports has been fixed. ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/releasenotes/notes/fix-dataframe-filtering-282cae643457bb8b.yaml0000664000175000017500000000034300000000000027423 0ustar00zuulzuul00000000000000--- security: - | Data filtering on the ``GET /v1/dataframes`` and ``GET /v2/dataframes`` endpoints has been fixed. It was previously possible for users to retrieve data from other scopes through these endpoints. 
././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/releasenotes/notes/fix-gnocchi-metadata-collection-74665e862483a383.yaml0000664000175000017500000000041400000000000030555 0ustar00zuulzuul00000000000000--- fixes: - | Metadata collection failures in the gnocchi collector caused by resources having measures in periods where they are supposed to be deleted have been fixed. Metadata is now collected over a three-period window in the gnocchi collector. ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/releasenotes/notes/fix-hashmap-mapping-value-match-56570510203ce3e5.yaml0000664000175000017500000000047000000000000030544 0ustar00zuulzuul00000000000000--- fixes: - | HashMap module field mapping matching has been fixed: Field mapping values are always stored as strings. However, metadatas to match can be floats or integers (eg vcpus or ram). Given that mappings were matched with ``==`` until now, integers or float metadatas did never match. ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/releasenotes/notes/fix-lock-release-74d112c8599c9a59.yaml0000664000175000017500000000017000000000000025742 0ustar00zuulzuul00000000000000--- fixes: - | cloudkitty-processor crashes which happened when using distributed tooz locks have been fixed. ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/releasenotes/notes/fix-opensearch-report-344508dd4e3d0ccc.yaml0000664000175000017500000000030700000000000027227 0ustar00zuulzuul00000000000000--- fixes: - | Fix some API report requests that were returning HTTP 500 errors when using the ``opensearch`` storage backend. This fixes failures to load the Horizon ``Rating`` panel. ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/releasenotes/notes/fix-project-id-none-d40df33fc7b7db23.yaml0000664000175000017500000000026100000000000026646 0ustar00zuulzuul00000000000000--- fixes: - | The 500 errors happening on some API endpoints when keystone authentication is enabled and the request context bears no ``project_id`` have been fixed. ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/releasenotes/notes/fix-quote-v1-api-7282f01b596f0f3b.yaml0000664000175000017500000000021700000000000025667 0ustar00zuulzuul00000000000000--- fixes: - | Fixes the quote API method. See story 2009022 `_ for more details. ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/releasenotes/notes/fix-rating-rules-value-precision-40d1054f8ab494c3.yaml0000664000175000017500000000067300000000000031155 0ustar00zuulzuul00000000000000--- fixes: - | Allow rating rules that have more than 2 digits in integer part. Currently, CloudKitty only allows creating rating rules as ``99.999999999999999999999999``. Therefore, for prices equal to or higher than 100, we would not be able to use them. This patch will enable operators to use any value between ``0`` and ``999999999999`` (in the integer part of the number), which will provide more flexibility. 
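The ``fix-hashmap-mapping-value-match`` note above is easier to follow with a small illustration: field mapping values are stored as strings while collected metadata can be numeric, so a plain ``==`` comparison never matches. This snippet is illustrative only and is not CloudKitty code.

.. code-block:: python

   vcpus = 2            # numeric metadata coming from the collector
   mapping_value = '2'  # field mapping values are always stored as strings

   print(vcpus == mapping_value)       # False: an int never equals a str
   print(str(vcpus) == mapping_value)  # True: comparing both sides as strings matches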
././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/releasenotes/notes/fix-response-total-for-elastic-search-a3a9244380ed046f.yaml0000664000175000017500000000046500000000000032067 0ustar00zuulzuul00000000000000--- fixes: - | Fix response format ``total`` for v2 dataframes API. The response for v2 dataframes is ``{"total": 3}``. However, for Elasticsearch search response, the ``"hits.total"`` in the response body is ``{"value": 3, "relation": "eq"}``, which does not match the API response schema. ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/releasenotes/notes/fix-scope-state-reset-filters-0a1f5ea503bd32a1.yaml0000664000175000017500000000023100000000000030557 0ustar00zuulzuul00000000000000--- fixes: - | An issue causing data not to be deleted from the storage backend when resetting a scope's state through the API has been fixed. ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/releasenotes/notes/fix-url-building-do-init-7c952afaf6d909cd.yaml0000664000175000017500000000022100000000000027622 0ustar00zuulzuul00000000000000--- fixes: - | It is not required anymore to prefix the url of a resource with a ``/`` when using ``cloudkitty.api.v2.utils.do_init``. ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/releasenotes/notes/fix-v1-storage-groupby-e865d1315bd390cb.yaml0000664000175000017500000000037000000000000027170 0ustar00zuulzuul00000000000000--- fixes: - | ``CompileError: Can't resolve label reference for ORDER BY / GROUP BY.`` errors that were sometimes raised by SQLAlchemy when using the v1 storage backend and grouping on ``tenant_id`` and ``res_type`` have been fixed. ././@PaxHeader0000000000000000000000000000020600000000000011453 xustar0000000000000000112 path=cloudkitty-21.0.0/releasenotes/notes/fix-v1-summary-and-total-with-es-os-backend-9540741b80819672.yaml 22 mtime=1727866580.0 cloudkitty-21.0.0/releasenotes/notes/fix-v1-summary-and-total-with-es-os-backend-9540741b80819672.ya0000664000175000017500000000046300000000000032105 0ustar00zuulzuul00000000000000--- fixes: - | Fixed a bug where ``openstack rating summary get -s `` and ``openstack rating total get -s `` would fail when using the Elasticsearch or OpenSearch storage backends. See story 2011128 `_ for more details. ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/releasenotes/notes/fix_py_scripts-fd9ab52c92263844.yaml0000664000175000017500000000012600000000000025724 0ustar00zuulzuul00000000000000--- fixes: - | Fix failure to process rating using the PyScripts rating module. ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/releasenotes/notes/force-project-id-monasca-collector-cb30ed073d36d40e.yaml0000664000175000017500000000020700000000000031535 0ustar00zuulzuul00000000000000--- features: - | It is now possible to force a project_id to retrieve a specific metric from it with the monasca collector. 
././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/releasenotes/notes/get-dataframes-v2-api-endpoint-3a4625c6008a5fca.yaml0000664000175000017500000000043400000000000030517 0ustar00zuulzuul00000000000000--- features: - | Added a v2 API endpoint allowing to retrieve dataframes from the CloudKitty storage. This endpoint is available via a ``GET`` request on ``/v2/dataframes``. Being the owner of the scope or having admin privileges are required to use this endpoint. ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/releasenotes/notes/harden-dataframes-policy-7786286525e52dfb.yaml0000664000175000017500000000037300000000000027460 0ustar00zuulzuul00000000000000--- security: - | The default policy for the ``/v1/storage/dataframes`` endpoint has been changed from ``unprotected`` (accessible by any unauthenticated used) to ``admin_or_owner`` (accessible only by admins or members of the project). ././@PaxHeader0000000000000000000000000000021100000000000011447 xustar0000000000000000115 path=cloudkitty-21.0.0/releasenotes/notes/ignore_disabled_tenants-and-ignore_rating_role-dfe542a0cafd412e.yaml 22 mtime=1727866580.0 cloudkitty-21.0.0/releasenotes/notes/ignore_disabled_tenants-and-ignore_rating_role-dfe542a0cafd412e0000664000175000017500000000054300000000000033454 0ustar00zuulzuul00000000000000--- features: - | Two new options ``ignore_disabled_tenants`` and ``ignore_rating_role`` were added in the ``fetcher_keystone`` section. ``ignore_disabled_tenants`` skips disabled tenants when doing the rating. ``ignore_rating_role`` rates everyone, without reading the rating role for each project, which can be resource consuming. ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/releasenotes/notes/improve-metrics-configuration-271102366f8e6fe7.yaml0000664000175000017500000000016200000000000030572 0ustar00zuulzuul00000000000000features: - | The format of the 'metrics.yml' configuration file has been improved, and will be stable. ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/releasenotes/notes/introduce-active-status-field-cdfecd27c2bb9a42.yaml0000664000175000017500000000012400000000000031071 0ustar00zuulzuul00000000000000--- features: - | Add active status option in the storage state table and API.././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/releasenotes/notes/introduce-bandit-security-linter-592faa26f957a3dd.yaml0000664000175000017500000000037700000000000031423 0ustar00zuulzuul00000000000000--- security: - | Introduce bandit security checks and fix potential security issues detected by bandit linter. Remove unused option where host_ip was a binding to all interfaces. Using of insecure hash function, switch from sha1 to sha512. ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/releasenotes/notes/introduce-cloudkitty.utils-792b9080537405bf.yaml0000664000175000017500000000016600000000000030046 0ustar00zuulzuul00000000000000--- other: - | Cloudkitty's ``*_utils`` modules have been grouped into the new ``cloudkitty.utils`` module. 
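As a sketch, the two options introduced by the ``ignore_disabled_tenants`` / ``ignore_rating_role`` note above would be set like this in ``cloudkitty.conf`` (the values shown are illustrative, not defaults):

.. code-block:: ini

   [fetcher_keystone]
   # Skip disabled projects when building the list of scopes to rate
   ignore_disabled_tenants = true
   # When true, rate every project without looking up the rating role in Keystone
   ignore_rating_role = false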
././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/releasenotes/notes/introduce-reprocessing-api-822db3edc256507a.yaml0000664000175000017500000000024400000000000030171 0ustar00zuulzuul00000000000000--- features: - | Introduce the reprocessing schedule API, which allows operators to schedule reprocessing tasks to reprocess scopes in given timeframes. ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/releasenotes/notes/make-cloudkitty-timezone-aware-2b65edc42e913d6c.yaml0000664000175000017500000000016400000000000031046 0ustar00zuulzuul00000000000000--- upgrade: - | CloudKitty is now aware of timezones, and the API supports iso8601 formatted timestamps. ././@PaxHeader0000000000000000000000000000021600000000000011454 xustar0000000000000000120 path=cloudkitty-21.0.0/releasenotes/notes/make-gnocchi-http-max-connections-pool-configurable-52c9f6617466ea30.yaml 22 mtime=1727866580.0 cloudkitty-21.0.0/releasenotes/notes/make-gnocchi-http-max-connections-pool-configurable-52c9f6617460000664000175000017500000000047600000000000033121 0ustar00zuulzuul00000000000000--- features: - | Adds a new configuration ``http_pool_maxsize`` that defines the maximum size of Gnocchi's fetcher and collector HTTP connection pools. The default value of this new configuration is defined by the ``requests`` library in the ``requests.adapters.DEFAULT_POOLSIZE`` global variable. ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/releasenotes/notes/make-processor-run-several-workers-02597b0f77687ef3.yaml0000664000175000017500000000036000000000000031476 0ustar00zuulzuul00000000000000--- features: - | The processor is now able to run several parallel workers. By default, one worker is spawned for each available CPU. Workers can be limited through the ``max_workers`` option of the ``orchestrator`` section. ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/releasenotes/notes/map-mutator-632b8629c0482e94.yaml0000664000175000017500000000031000000000000024763 0ustar00zuulzuul00000000000000--- features: - | Adds a ``MAP`` mutator to map arbitrary values to new values. This is useful with metrics reporting resource status as their value, but multiple statuses are billable. ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/releasenotes/notes/monasca-fetcher-2ea866f873ab5336.yaml0000664000175000017500000000036200000000000025722 0ustar00zuulzuul00000000000000--- features: - | Adds a Monasca fetcher retrieving scopes from Monasca dimensions. See the `fetcher documentation `__ for more details. ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/releasenotes/notes/move-api-docs-to-api-ref-be71b864e557110e.yaml0000664000175000017500000000022300000000000027253 0ustar00zuulzuul00000000000000--- other: - | API reference/docs have been moved to ``api-ref/source/`` and the original path now contains a symlink to this directory. 
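The ``make-processor-run-several-workers`` note above maps to a single setting; a sketch with an illustrative value:

.. code-block:: ini

   [orchestrator]
   # Cap the number of parallel cloudkitty-processor workers (default: one per CPU)
   max_workers = 4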
././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/releasenotes/notes/multiple_values_filter_summary_get_v2_api-1110373a900fad0d.yaml0000664000175000017500000000013100000000000033251 0ustar00zuulzuul00000000000000--- features: - | Add support for multiple value filters in the summary GET V2 API.././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/releasenotes/notes/new-forcegranularity-default-b8aaf7d7823aef3b.yaml0000664000175000017500000000041400000000000030744 0ustar00zuulzuul00000000000000--- fixes: - | A new directive ``force_granularity: 300`` was added to the default ``metrics.yml`` file for ``cpu`` and ``volume.size``, to match the defaults of ceilometer and avoid logging errors in ``cloudkitty-processor`` with the default setup. ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/releasenotes/notes/notnumbool-mutator-ab056e86f2bc843d.yaml0000664000175000017500000000031000000000000026672 0ustar00zuulzuul00000000000000--- features: - | The new "NOTNUMBOOL" mutator has been added. This mutator is, essentially, an opposite of the "NUMBOOL" mutator as it returns 1.0 when quantity is 0 and 0.0 otherwise. ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/releasenotes/notes/optimize_gnochi-fetcher-41b502e7ca242cb1.yaml0000664000175000017500000000024700000000000027516 0ustar00zuulzuul00000000000000--- issues: - | Optimize Gnocchi fetcher to avoid consuming too much RAM when CloudKitty runs in cloud environments with hundreds of thousands of resources. ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/releasenotes/notes/optimize_gnochi-fetcher-runtime-3604026816.yaml0000664000175000017500000000027300000000000030066 0ustar00zuulzuul00000000000000--- issues: - | Optimize Gnocchi fetcher runtime to avoid taking too long to load scopes when CloudKitty runs in cloud environments with hundreds of thousands of resources. ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/releasenotes/notes/optimizing-sql-queries-939f48fff1805389.yaml0000664000175000017500000000012500000000000027264 0ustar00zuulzuul00000000000000--- features: - | Improve performance of SQL queries filtering on date fields. ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/releasenotes/notes/patch-use-all-revision-0325eeb0f7871c35.yaml0000664000175000017500000000021200000000000027136 0ustar00zuulzuul00000000000000--- fixes: - | Fixes accounting of quantity values when ``use_all_resource_revisions`` option is used in the Gnocchi collector. ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/releasenotes/notes/post-api-create-scope-739098144706a1cf.yaml0000664000175000017500000000044500000000000026626 0ustar00zuulzuul00000000000000--- features: - | Introduce an API to create scopes with a POST request. This is useful for operators to register scopes before they are created as resources in the collected backend and disable their processing without waiting for the scopes to be discovered by CloudKitty. 
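The scope-creation API from the ``post-api-create-scope`` note above can be exercised with any HTTP client. The sketch below is hedged: the ``/v2/scope`` path is inferred from the other scope endpoints mentioned in these notes, and the body fields (``scope_id``, ``active``) are assumptions, so check the API reference of your release before relying on them.

.. code-block:: console

   $ curl -s -X POST http://cloudkitty-api:8889/v2/scope \
         -H "X-Auth-Token: $OS_TOKEN" \
         -H "Content-Type: application/json" \
         -d '{"scope_id": "2d5b39657dc542d4b2a14b685335304e", "active": false}'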
././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/releasenotes/notes/prometheus-collector-empty-meta-12402d8f0254c011.yaml0000664000175000017500000000024100000000000030721 0ustar00zuulzuul00000000000000--- issues: - | Fixes exceptions in Prometheus collector when metadata defined in ``metrics.yml`` is not present on metrics retrieved from Prometheus. ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/releasenotes/notes/prometheus-collector-mutate-8da4748b4d1f0b59.yaml0000664000175000017500000000013100000000000030406 0ustar00zuulzuul00000000000000--- issues: - | The Prometheus collector now applies quantity mutation on metrics. ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/releasenotes/notes/prometheus-custom-query-ab2dc00e97b14be2.yaml0000664000175000017500000000016400000000000027722 0ustar00zuulzuul00000000000000--- features: - | Adds support for specifying optional prefix and/or suffix to add to Prometheus queries. ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/releasenotes/notes/prometheus-error-8eab9f1793c2280c.yaml0000664000175000017500000000020100000000000026252 0ustar00zuulzuul00000000000000--- fixes: - | Raises a ``CollectError`` exception with error details when a Prometheus query returns an error status. ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/releasenotes/notes/raise-exception-on-invalid-config-0aece71caa0947fa.yaml0000664000175000017500000000025200000000000031537 0ustar00zuulzuul00000000000000--- other: - | The ``cloudkitty.utils.load_conf`` function does now raise an exception in case the ``metrics.yml`` file can't be read or has an invalid format. ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/releasenotes/notes/rating-modules-v2-7e4e7a3c5fa96331.yaml0000664000175000017500000000010400000000000026211 0ustar00zuulzuul00000000000000--- features: - | Add rating modules GET endpoints to v2 API. ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/releasenotes/notes/refactor-storage-e5453296e477e594.yaml0000664000175000017500000000074200000000000026011 0ustar00zuulzuul00000000000000--- features: - | The storage system is being refactored. A hybrid storage backend has been added. This backend handles states via SQLAlchemy and pure storage via another storage backend. Once this new storage is considered stable, it will become the default storage. This will ease the creation of storage backends (no more state handling). deprecations: - | All storage backends except sqlalchemy and the new hybrid storage have been deprecated. 
././@PaxHeader0000000000000000000000000000023700000000000011457 xustar0000000000000000137 path=cloudkitty-21.0.0/releasenotes/notes/register-keystone-opts-with-keystoneauth-functions-monasca-collector-1a539fc8c23e9dbc.yaml 22 mtime=1727866580.0 cloudkitty-21.0.0/releasenotes/notes/register-keystone-opts-with-keystoneauth-functions-monasca-coll0000664000175000017500000000035200000000000034270 0ustar00zuulzuul00000000000000--- fixes: - | Keystone authentication options are now registered with ``keystoneauth1`` in the monasca collector helper functions, which allows to use the ``auth_section`` option even when using the ``source`` fetcher. ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/releasenotes/notes/remove-ceilometer-collector-b310bf6c5736c88a.yaml0000664000175000017500000000013000000000000030333 0ustar00zuulzuul00000000000000--- deprecations: - | The ceilometer collector and transformer have been removed. ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/releasenotes/notes/remove-dateutil-tz-utc-usage-1350c00be3fadde7.yaml0000664000175000017500000000024200000000000030505 0ustar00zuulzuul00000000000000--- fixes: - | The use of ``tz.UTC`` from the ``dateutil`` package was removed, bringing compatibility with the version available in RHEL and CentOS 8. ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/releasenotes/notes/remove-deprecated-api-endpoints-26606e322b8a225e.yaml0000664000175000017500000000021700000000000030726 0ustar00zuulzuul00000000000000--- other: - | The deprecated 'Billing' API endpoint has been removed, and its code has been deleted from the CloudKitty repository. ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/releasenotes/notes/remove-deprecated-config-section-names-9a125b1af0932c08.yaml0000664000175000017500000000036100000000000032240 0ustar00zuulzuul00000000000000--- upgrade: - | Section names that had been deprecated in cloudkitty 9.0.0 have been removed in 11.0.0. These include ``gnocchi_collector``, ``tenant_fetcher``, ``keystone_fetcher``, ``source_fetcher`` and ``hybrid_storage``. ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/releasenotes/notes/remove-deprecated-storage-backends-158fbec099846ec7.yaml0000664000175000017500000000013600000000000031560 0ustar00zuulzuul00000000000000--- deprecations: - | The gnocchi and gnocchihybrid storage backends have been removed. ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/releasenotes/notes/remove-fake-fetcher-9c264520a3cec9d0.yaml0000664000175000017500000000013200000000000026540 0ustar00zuulzuul00000000000000--- deprecations: - | The fake fetcher has been removed from CloudKitty's codebase. ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/releasenotes/notes/remove-fake-meta-collectors-5ed94ab1165e9661.yaml0000664000175000017500000000014700000000000030156 0ustar00zuulzuul00000000000000--- deprecations: - | The fake and meta collectors have been removed from CloudKitty's codebase. 
././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/releasenotes/notes/remove-gnocchi-transformer-1dad750b9ba6c2e4.yaml0000664000175000017500000000014100000000000030323 0ustar00zuulzuul00000000000000--- deprecations: - | The gnocchi transformer has been removed from CloudKitty's codebase. ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/releasenotes/notes/remove-monasca-429122691d0e5d52.yaml0000664000175000017500000000016300000000000025423 0ustar00zuulzuul00000000000000--- upgrade: - | The Monasca collector and fetcher are removed due to the unmaintained state of Monasca. ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/releasenotes/notes/remove-state-attribute-scope-28e48ae4ada5208d.yaml0000664000175000017500000000017000000000000030531 0ustar00zuulzuul00000000000000--- upgrade: - | The ``state`` field is removed from the scope API. Use ``last_processed_timestamp`` instead. ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/releasenotes/notes/remove-transformers-8d9949ed3088b055.yaml0000664000175000017500000000025400000000000026636 0ustar00zuulzuul00000000000000--- other: - | Since data frames are now represented as objects internally, transformers are not used anymore and have been completely removed from the codebase. ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/releasenotes/notes/remove-v2-gnocchi-storage-a83bd58008bfd92e.yaml0000664000175000017500000000023600000000000027712 0ustar00zuulzuul00000000000000--- deprecations: - | The gnocchi v2 storage backend has been removed. Users wanting to use the v2 storage interface must use the InfluxDB backend. ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/releasenotes/notes/replace-eventlet-with-futurist-60f1fe6474a5efcf.yaml0000664000175000017500000000044100000000000031173 0ustar00zuulzuul00000000000000--- deprecations: - | Since ``eventlet`` has been replaced with ``futurist``, the ``[orchestrator]/max_greenthreads`` option has been deprecated and replaced with ``[orchestrator]/max_threads``. other: - | The ``eventlet`` library has been replaced with ``futurist``. ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/releasenotes/notes/reprocess-get-fix-f2bd1f2f9e2d640e.yaml0000664000175000017500000000015400000000000026440 0ustar00zuulzuul00000000000000--- fixes: - | Fix retrieval of reprocessing tasks which was returning ``Internal Server Error``. ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/releasenotes/notes/reprocessing-concurrency-issues-2a71f4d86a93c507.yaml0000664000175000017500000000011100000000000031212 0ustar00zuulzuul00000000000000--- fixes: - | Fixed concurrency issues during reprocessing tasks. 
././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/releasenotes/notes/response_format-v2-summary-api-270facdb01d9202b.yaml0000664000175000017500000000020300000000000030757 0ustar00zuulzuul00000000000000--- features: - | Introduce ``response_format`` option for the V2 summary API, which can facilitate parsing the response.././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/releasenotes/notes/rework-prometheus-collector-02bd6351d447e4fe.yaml0000664000175000017500000000031000000000000030410 0ustar00zuulzuul00000000000000--- features: - | Prometheus collector now supports, under extra_args section, an aggregation_method option to decide which aggregation method is to be performed over collected metrics. ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/releasenotes/notes/rework-prometheus-collector-f9f34a3792888dad.yaml0000664000175000017500000000026100000000000030442 0ustar00zuulzuul00000000000000--- features: - | Prometheus collector now supports HTTPS with custom CA file, an insecure option to allow an untrusted certificate and basic HTTP authentication. ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/releasenotes/notes/skip-period-if-nonexistent-metric-ba56a671e68f5bf5.yaml0000664000175000017500000000034200000000000031500 0ustar00zuulzuul00000000000000--- fixes: - | The behaviour of the gnocchi collector has changed in case of a nonexistent metric. Given that a nonexistent metric is unlikely to exist in the next collection cycle, the metric is simply skipped. ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/releasenotes/notes/source-fetcher-43c4352508f7f944.yaml0000664000175000017500000000021100000000000025433 0ustar00zuulzuul00000000000000--- features: - | A Source Fetcher has been added, allowing to add new collectors to scrap metrics from non-OpenStack sources. ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/releasenotes/notes/status-upgrade-check-fdcf054643e071d8.yaml0000664000175000017500000000026100000000000026756 0ustar00zuulzuul00000000000000--- features: - | A new ``cloudkitty-status upgrade check`` command has been added. It can be used to validate a deployment before upgrading it from release N-1 to N. ././@PaxHeader0000000000000000000000000000022300000000000011452 xustar0000000000000000125 path=cloudkitty-21.0.0/releasenotes/notes/support-cross-tenant-metric-submission-monasca-collector-508b495bc88910ca.yaml 22 mtime=1727866580.0 cloudkitty-21.0.0/releasenotes/notes/support-cross-tenant-metric-submission-monasca-collector-508b490000664000175000017500000000036500000000000033620 0ustar00zuulzuul00000000000000--- features: - | Cross-tenant metric submission is now supported in the monasca collector. In order to a fetch metric from the scope that is currently being processed, the ``forced_project_id`` option must be set to ``SCOPE_ID``. 
././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/releasenotes/notes/support-group-by-timeframes-1247aa336916f3b6.yaml0000664000175000017500000000104400000000000030176 0ustar00zuulzuul00000000000000--- features: - | Introduce new default groupby options: (i) time: to group data hourly. The actual group by process will depend on the ``period`` parameter. The default value is ``3600``, which represents one hour; (ii) time-d: to group data by day of the year; (iii) time-w: to group data by week of the year; (iv) time-m: to group data by month; and, (v) time-y: to group data by year. If you have old data in CloudKitty and you wish to use these group by methods, you will need to reprocess the desired timeframe. ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/releasenotes/notes/support-groupby-time-v2-summary-48ff5ad671f8c7c5.yaml0000664000175000017500000000032600000000000031210 0ustar00zuulzuul00000000000000--- upgrade: - | It is now possible to group v2 summaries by timestamp. In order to do this, the ``time`` parameter must be specified in the ``groupby`` list: ``cloudkitty summary get -g time,type``. ././@PaxHeader0000000000000000000000000000022300000000000011452 xustar0000000000000000125 path=cloudkitty-21.0.0/releasenotes/notes/use-interface-param-endpoint-discovery-monasca-collector-7477e86cd7e5acf4.yaml 22 mtime=1727866580.0 cloudkitty-21.0.0/releasenotes/notes/use-interface-param-endpoint-discovery-monasca-collector-7477e80000664000175000017500000000022600000000000033475 0ustar00zuulzuul00000000000000--- fixes: - | The ``interface`` parameter of the ``collector_monasca`` section is now also used for the discovery of the monasca endpoint. ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1727866639.2994874 cloudkitty-21.0.0/releasenotes/source/0000775000175000017500000000000000000000000020001 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/releasenotes/source/2023.1.rst0000664000175000017500000000020200000000000021252 0ustar00zuulzuul00000000000000=========================== 2023.1 Series Release Notes =========================== .. release-notes:: :branch: stable/2023.1 ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/releasenotes/source/2023.2.rst0000664000175000017500000000020200000000000021253 0ustar00zuulzuul00000000000000=========================== 2023.2 Series Release Notes =========================== .. release-notes:: :branch: stable/2023.2 ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/releasenotes/source/2024.1.rst0000664000175000017500000000020200000000000021253 0ustar00zuulzuul00000000000000=========================== 2024.1 Series Release Notes =========================== .. 
release-notes:: :branch: stable/2024.1 ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1727866639.3034875 cloudkitty-21.0.0/releasenotes/source/_static/0000775000175000017500000000000000000000000021427 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/releasenotes/source/_static/.placeholder0000664000175000017500000000000000000000000023700 0ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1727866639.3034875 cloudkitty-21.0.0/releasenotes/source/_templates/0000775000175000017500000000000000000000000022136 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/releasenotes/source/_templates/.placeholder0000664000175000017500000000000000000000000024407 0ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/releasenotes/source/conf.py0000664000175000017500000002000300000000000021273 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # # Cloudkitty Release Notes documentation build configuration file. # # This file is execfile()d with the current directory set to its # containing dir. # # Note that not all possible configuration values are present in this # autogenerated file. # # All configuration values have a default; values that are commented out # serve to show the default. # If extensions (or modules to document with autodoc) are in another directory, # add these directories to sys.path here. If the directory is relative to the # documentation root, use os.path.abspath to make it absolute, like shown here. #sys.path.insert(0, os.path.abspath('.')) # -- General configuration ---------------------------------------------------- # If your documentation needs a minimal Sphinx version, state it here. #needs_sphinx = '1.0' # Add any Sphinx extension module names here, as strings. They can be # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom # ones. extensions = [ 'reno.sphinxext', 'openstackdocstheme' ] # Add any paths that contain templates here, relative to this directory. templates_path = ['_templates'] # The suffix of source filenames. source_suffix = '.rst' # The encoding of source files. #source_encoding = 'utf-8-sig' # The master toctree document. master_doc = 'index' # General information about the project. project = 'Cloudkitty Client Release Notes' copyright = '2016, Cloudkitty developers' # Release notes are version independent. # The short X.Y version. version = '' # The full version, including alpha/beta/rc tags. release = '' # The language for content autogenerated by Sphinx. Refer to documentation # for a list of supported languages. #language = None # There are two options for replacing |today|: either, you set today to some # non-false value, then it is used: #today = '' # Else, today_fmt is used as the format for a strftime call. #today_fmt = '%B %d, %Y' # List of patterns, relative to source directory, that match files and # directories to ignore when looking for source files. exclude_patterns = [] # The reST default role (used for this markup: `text`) to use for all # documents. #default_role = None # If true, '()' will be appended to :func: etc. cross-reference text. 
#add_function_parentheses = True # If true, the current module name will be prepended to all description # unit titles (such as .. function::). #add_module_names = True # If true, sectionauthor and moduleauthor directives will be shown in the # output. They are ignored by default. #show_authors = False # The name of the Pygments (syntax highlighting) style to use. pygments_style = 'native' # A list of ignored prefixes for module index sorting. #modindex_common_prefix = [] # If true, keep warnings as "system message" paragraphs in the built documents. #keep_warnings = False # -- Options for HTML output -------------------------------------------------- # The theme to use for HTML and HTML Help pages. See the documentation for # a list of builtin themes. html_theme = 'openstackdocs' # Theme options are theme-specific and customize the look and feel of a theme # further. For a list of options available for each theme, see the # documentation. #html_theme_options = {} # Add any paths that contain custom themes here, relative to this directory. # html_theme_path = ["."] # The name for this set of Sphinx documents. If None, it defaults to # " v documentation". #html_title = None # A shorter title for the navigation bar. Default is the same as html_title. #html_short_title = None # The name of an image file (relative to this directory) to place at the top # of the sidebar. #html_logo = None # The name of an image file (within the static path) to use as favicon of the # docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32 # pixels large. #html_favicon = None # Add any paths that contain custom static files (such as style sheets) here, # relative to this directory. They are copied after the builtin static files, # so a file named "default.css" will overwrite the builtin "default.css". html_static_path = ['_static'] # Add any extra paths that contain custom files (such as robots.txt or # .htaccess) here, relative to this directory. These files are copied # directly to the root of the documentation. #html_extra_path = [] # If true, SmartyPants will be used to convert quotes and dashes to # typographically correct entities. #html_use_smartypants = True # Custom sidebar templates, maps document names to template names. #html_sidebars = {} # Additional templates that should be rendered to pages, maps page names to # template names. #html_additional_pages = {} # If false, no module index is generated. #html_domain_indices = True # If false, no index is generated. #html_use_index = True # If true, the index is split into individual pages for each letter. #html_split_index = False # If true, links to the reST sources are added to the pages. #html_show_sourcelink = True # If true, "Created using Sphinx" is shown in the HTML footer. Default is True. #html_show_sphinx = True # If true, "(C) Copyright ..." is shown in the HTML footer. Default is True. #html_show_copyright = True # If true, an OpenSearch description file will be output, and all pages will # contain a tag referring to it. The value of this option must be the # base URL from which the finished HTML is served. #html_use_opensearch = '' # This is the file name suffix for HTML files (e.g. ".xhtml"). #html_file_suffix = None # Output file base name for HTML help builder. htmlhelp_basename = 'CloudkittyReleaseNotestdoc' # -- Options for LaTeX output ------------------------------------------------- latex_elements = { # The paper size ('letterpaper' or 'a4paper'). 
#'papersize': 'letterpaper', # The font size ('10pt', '11pt' or '12pt'). #'pointsize': '10pt', # Additional stuff for the LaTeX preamble. #'preamble': '', } # Grouping the document tree into LaTeX files. List of tuples # (source start file, target name, title, # author, documentclass [howto, manual, or own class]). latex_documents = [ ('index', 'PythonCloudkitty.tex', 'Cloudkitty Release Notes Documentation', 'Cloudkitty developers', 'manual'), ] # The name of an image file (relative to this directory) to place at the top of # the title page. #latex_logo = None # For "manual" documents, if this is true, then toplevel headings are parts, # not chapters. #latex_use_parts = False # If true, show page references after internal links. #latex_show_pagerefs = False # If true, show URL addresses after external links. #latex_show_urls = False # Documents to append as an appendix to all manuals. #latex_appendices = [] # If false, no module index is generated. #latex_domain_indices = True # -- Options for manual page output ------------------------------------------- # One entry per manual page. List of tuples # (source start file, name, description, authors, manual section). man_pages = [ ('index', 'cloudkitty', 'Cloudkitty Release Notes Documentation', ['Cloudkitty developers'], 1) ] # If true, show URL addresses after external links. #man_show_urls = False # -- Options for Texinfo output ----------------------------------------------- # Grouping the document tree into Texinfo files. List of tuples # (source start file, target name, title, author, # dir menu entry, description, category) texinfo_documents = [ ('index', 'cloudkitty', 'Cloudkitty Release Notes Documentation', 'Cloudkitty developers', 'Cloudkitty', 'One line description of project.', 'Miscellaneous'), ] # Documents to append as an appendix to all manuals. #texinfo_appendices = [] # If false, no module index is generated. #texinfo_domain_indices = True # How to display URL addresses: 'footnote', 'no', or 'inline'. #texinfo_show_urls = 'footnote' # If true, do not generate a @detailmenu in the "Top" node's menu. #texinfo_no_detailmenu = False # -- Options for openstackdocstheme ------------------------------------------- openstackdocs_repo_name = 'openstack/cloudkitty' openstackdocs_auto_name = False openstackdocs_use_storyboard = True ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/releasenotes/source/index.rst0000664000175000017500000000176600000000000021654 0ustar00zuulzuul00000000000000.. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at https://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ================================================== Welcome to cloudkitty Release Notes documentation! ================================================== Contents ======== .. 
toctree:: :maxdepth: 2 unreleased 2024.1 2023.2 2023.1 zed yoga xena wallaby victoria ussuri train stein rocky queens pike ocata Indices and tables ================== * :ref:`genindex` * :ref:`search` ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/releasenotes/source/ocata.rst0000664000175000017500000000023000000000000021615 0ustar00zuulzuul00000000000000=================================== Ocata Series Release Notes =================================== .. release-notes:: :branch: origin/stable/ocata ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/releasenotes/source/pike.rst0000664000175000017500000000021700000000000021463 0ustar00zuulzuul00000000000000=================================== Pike Series Release Notes =================================== .. release-notes:: :branch: stable/pike ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/releasenotes/source/queens.rst0000664000175000017500000000022300000000000022030 0ustar00zuulzuul00000000000000=================================== Queens Series Release Notes =================================== .. release-notes:: :branch: stable/queens ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/releasenotes/source/rocky.rst0000664000175000017500000000022100000000000021655 0ustar00zuulzuul00000000000000=================================== Rocky Series Release Notes =================================== .. release-notes:: :branch: stable/rocky ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/releasenotes/source/stein.rst0000664000175000017500000000022100000000000021650 0ustar00zuulzuul00000000000000=================================== Stein Series Release Notes =================================== .. release-notes:: :branch: stable/stein ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/releasenotes/source/train.rst0000664000175000017500000000017600000000000021654 0ustar00zuulzuul00000000000000========================== Train Series Release Notes ========================== .. release-notes:: :branch: stable/train ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/releasenotes/source/unreleased.rst0000664000175000017500000000015300000000000022661 0ustar00zuulzuul00000000000000============================ Current Series Release Notes ============================ .. release-notes:: ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/releasenotes/source/ussuri.rst0000664000175000017500000000020200000000000022057 0ustar00zuulzuul00000000000000=========================== Ussuri Series Release Notes =========================== .. release-notes:: :branch: stable/ussuri ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/releasenotes/source/victoria.rst0000664000175000017500000000022000000000000022345 0ustar00zuulzuul00000000000000============================= Victoria Series Release Notes ============================= .. 
release-notes:: :branch: unmaintained/victoria ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/releasenotes/source/wallaby.rst0000664000175000017500000000021400000000000022163 0ustar00zuulzuul00000000000000============================ Wallaby Series Release Notes ============================ .. release-notes:: :branch: unmaintained/wallaby ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/releasenotes/source/xena.rst0000664000175000017500000000020000000000000021456 0ustar00zuulzuul00000000000000========================= Xena Series Release Notes ========================= .. release-notes:: :branch: unmaintained/xena ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/releasenotes/source/yoga.rst0000664000175000017500000000020000000000000021462 0ustar00zuulzuul00000000000000========================= Yoga Series Release Notes ========================= .. release-notes:: :branch: unmaintained/yoga ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/releasenotes/source/zed.rst0000664000175000017500000000017400000000000021317 0ustar00zuulzuul00000000000000======================== Zed Series Release Notes ======================== .. release-notes:: :branch: unmaintained/zed ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/requirements.txt0000664000175000017500000000263300000000000017300 0ustar00zuulzuul00000000000000# Requirements lower bounds listed here are our best effort to keep them up to # date but we do not test them so no guarantee of having them all correct. If # you find any incorrect lower bounds, let us know or propose a fix. # The order of packages is significant, because pip processes them in the order # of appearance. Changing the order has an impact on the overall integration # process, which may cause wedges in the gate later. 
pbr>=5.5.1 # Apache-2.0 alembic>=1.4.3 # MIT keystonemiddleware>=9.1.0 # Apache-2.0 gnocchiclient>=7.0.6 # Apache-2.0 python-keystoneclient>=4.1.1 # Apache-2.0 keystoneauth1>=4.2.1 # Apache-2.0 iso8601>=0.1.13 # MIT PasteDeploy>=2.1.1 # MIT pecan>=1.3.3 # BSD WSME>=0.10.0 # MIT oslo.config>=8.3.3 # Apache-2.0 oslo.context>=3.1.1 # Apache-2.0 oslo.concurrency>=4.3.1 # Apache-2.0 oslo.db>=8.4.0 # Apache-2.0 oslo.i18n>=5.0.1 # Apache-2.0 oslo.log>=4.4.0 # Apache-2.0 oslo.messaging>=14.1.0 # Apache-2.0 oslo.middleware>=4.1.1 # Apache-2.0 oslo.policy>=3.6.0 # Apache-2.0 oslo.utils>=4.7.0 # Apache-2.0 oslo.upgradecheck>=1.3.0 # Apache-2.0 python-dateutil>=2.8.0 # BSD SQLAlchemy>=1.3.20 # MIT stevedore>=3.2.2 # Apache-2.0 tooz>=2.7.1 # Apache-2.0 voluptuous>=0.12.0 # BSD License influxdb>=5.3.1 # MIT influxdb-client>=1.36.0 # MIT Flask>=2.0.0 # BSD Flask-RESTful>=0.3.9 # BSD cotyledon>=1.7.3 # Apache-2.0 futurist>=2.3.0 # Apache-2.0 datetimerange>=0.6.1 # MIT requests>=2.14.2 # Apache-2.0 ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1727866639.3034875 cloudkitty-21.0.0/setup.cfg0000664000175000017500000000512500000000000015634 0ustar00zuulzuul00000000000000[metadata] name = cloudkitty summary = Rating as a Service component for OpenStack description_file = README.rst author = OpenStack author_email = openstack-discuss@lists.openstack.org home_page = https://docs.openstack.org/cloudkitty/latest python_requires = >=3.8 classifier = Environment :: OpenStack Intended Audience :: Information Technology Intended Audience :: System Administrators License :: OSI Approved :: Apache Software License Operating System :: POSIX :: Linux Programming Language :: Python Programming Language :: Python :: 3 Programming Language :: Python :: 3.8 Programming Language :: Python :: 3.9 Programming Language :: Python :: 3.10 Programming Language :: Python :: 3.11 [files] packages = cloudkitty [entry_points] console_scripts = cloudkitty-dbsync = cloudkitty.cli.dbsync:main cloudkitty-processor = cloudkitty.cli.processor:main cloudkitty-storage-init = cloudkitty.cli.storage:main cloudkitty-writer = cloudkitty.cli.writer:main cloudkitty-status = cloudkitty.cli.status:main wsgi_scripts = cloudkitty-api = cloudkitty.api.app:build_wsgi_app oslo.policy.enforcer = cloudkitty = cloudkitty.common.policy:get_enforcer oslo.policy.policies = cloudkitty = cloudkitty.common.policies:list_rules oslo.config.opts = cloudkitty.common.config = cloudkitty.common.config:list_opts oslo.config.opts.defaults = cloudkitty.common.config = cloudkitty.common.defaults:set_config_defaults cloudkitty.collector.backends = gnocchi = cloudkitty.collector.gnocchi:GnocchiCollector prometheus = cloudkitty.collector.prometheus:PrometheusCollector cloudkitty.fetchers = keystone = cloudkitty.fetcher.keystone:KeystoneFetcher source = cloudkitty.fetcher.source:SourceFetcher gnocchi = cloudkitty.fetcher.gnocchi:GnocchiFetcher prometheus = cloudkitty.fetcher.prometheus:PrometheusFetcher cloudkitty.rating.processors = noop = cloudkitty.rating.noop:Noop hashmap = cloudkitty.rating.hash:HashMap pyscripts = cloudkitty.rating.pyscripts:PyScripts cloudkitty.storage.v1.backends = sqlalchemy = cloudkitty.storage.v1.sqlalchemy:SQLAlchemyStorage hybrid = cloudkitty.storage.v1.hybrid:HybridStorage cloudkitty.storage.v2.backends = influxdb = cloudkitty.storage.v2.influx:InfluxStorage elasticsearch = cloudkitty.storage.v2.elasticsearch:ElasticsearchStorage opensearch = cloudkitty.storage.v2.opensearch:OpenSearchStorage 
cloudkitty.storage.hybrid.backends = gnocchi = cloudkitty.storage.v1.hybrid.backends.gnocchi:GnocchiStorage cloudkitty.output.writers = osrf = cloudkitty.writer.osrf:OSRFBackend csv = cloudkitty.writer.csv_map:CSVMapped [egg_info] tag_build = tag_date = 0 ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/setup.py0000664000175000017500000000137600000000000015531 0ustar00zuulzuul00000000000000# Copyright (c) 2013 Hewlett-Packard Development Company, L.P. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. # THIS FILE IS MANAGED BY THE GLOBAL REQUIREMENTS REPO - DO NOT EDIT import setuptools setuptools.setup( setup_requires=['pbr>=2.0.0'], pbr=True) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/test-requirements.txt0000664000175000017500000000077400000000000020261 0ustar00zuulzuul00000000000000# The order of packages is significant, because pip processes them in the order # of appearance. Changing the order has an impact on the overall integration # process, which may cause wedges in the gate later. # hacking should be first hacking>=6.1.0,<6.2.0 # Apache-2.0 coverage>=5.3 # Apache-2.0 kombu>=5.0.2 # BSD ddt>=1.4.1 # MIT gabbi>=2.0.4 # Apache-2.0 testscenarios>=0.5.0 # Apache-2.0/BSD stestr>=3.0.1 # Apache-2.0 oslotest>=4.4.1 # Apache-2.0 doc8>=0.8.1 # Apache-2.0 bandit>=1.6.0 # Apache-2.0 ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1727866580.0 cloudkitty-21.0.0/tox.ini0000664000175000017500000000622200000000000015325 0ustar00zuulzuul00000000000000[tox] minversion = 3.18.0 envlist = py3,pep8 ignore_basepython_conflict = True [testenv] basepython = python3 allowlist_externals = find rm setenv = VIRTUAL_ENV={envdir} PYTHONWARNINGS=default::DeprecationWarning usedevelop = True deps = -c{env:TOX_CONSTRAINTS_FILE:https://releases.openstack.org/constraints/upper/master} -r{toxinidir}/requirements.txt -r{toxinidir}/test-requirements.txt commands = find . 
-type f -name "*.py[co]" -delete rm -f .testrepository/times.dbm stestr run {posargs} [testenv:debug] commands = oslo_debug_helper {posargs} [testenv:pep8] commands = flake8 {posargs} cloudkitty doc8 {posargs} [testenv:bandit] deps = -r{toxinidir}/test-requirements.txt commands = bandit -r cloudkitty -n5 -x cloudkitty/tests/* -ll [testenv:cover] setenv = VIRTUAL_ENV={envdir} PYTHON=coverage run --source cloudkitty --parallel-mode commands = stestr run {posargs} coverage combine coverage html -d cover coverage xml -o cover/coverage.xml coverage report [testenv:genconfig] commands = oslo-config-generator --config-file etc/oslo-config-generator/cloudkitty.conf [testenv:genpolicy] commands = oslopolicy-sample-generator --config-file=etc/oslo-policy-generator/cloudkitty.conf [testenv:docs] deps = -c{env:TOX_CONSTRAINTS_FILE:https://releases.openstack.org/constraints/upper/master} -r{toxinidir}/requirements.txt -r{toxinidir}/doc/requirements.txt commands = sphinx-build -W --keep-going -b html doc/source doc/build/html # TODO(smcginnis) Temporarily disabling this as it fails. Error is that # something is too large, likely from pulling in one of the conf sample files # [testenv:pdf-docs] # envdir = {toxworkdir}/docs # allowlist_externals = # make # commands = # sphinx-build -W --keep-going -b latex doc/source doc/build/pdf # make -C doc/build/pdf [testenv:api-ref] # This environment is called from CI scripts to test and publish # the API Ref to docs.openstack.org. deps = -r{toxinidir}/doc/requirements.txt allowlist_externals = rm commands = rm -rf api-ref/build sphinx-build -W -b html -d api-ref/build/doctrees api-ref/source api-ref/build/html [testenv:venv] commands = {posargs} [flake8] filename = *.py,app.wsgi exclude = .git,.venv,.tox,dist,doc,*egg,build,.ropeproject,releasenotes [doc8] ignore-path = .venv,.git,.tox,.tmp,*cloudkitty/locale*,*lib/python*,cloudkitty.egg*,doc/build,releasenotes/* [hacking] import_exceptions = cloudkitty.i18n [flake8:local-plugins] extension = C310 = checks:CheckLoggingFormatArgs C311 = checks:validate_assertIsNone C312 = checks:validate_assertTrue C313 = checks:no_translate_logs C314 = checks:CheckForStrUnicodeExc C315 = checks:CheckForTransAdd C317 = checks:check_oslo_namespace_imports C318 = checks:dict_constructor_with_list_copy C319 = checks:no_xrange C320 = checks:no_log_warn_check C321 = checks:check_explicit_underscore_import C322 = checks:assert_raises_regexp paths = ./cloudkitty/hacking [testenv:releasenotes] deps = {[testenv:docs]deps} commands = sphinx-build -a -E -W -d releasenotes/build/doctrees --keep-going -b html releasenotes/source releasenotes/build/html
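The tox.ini above defines the test and documentation environments; as a usage sketch (assuming tox is installed), each environment can be invoked individually, for example:

tox -e pep8          # style checks via flake8 and doc8
tox -e cover         # unit tests with coverage reporting
tox -e releasenotes  # build the release notes with reno and Sphinx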