==> python-prometheus-client-0.19.0+ds1/.coveragerc <==

[run]
branch = True
source = prometheus_client
omit = prometheus_client/decorator.py

[paths]
source =
    prometheus_client
    .tox/*/lib/python*/site-packages/prometheus_client
    .tox/pypy/site-packages/prometheus_client

[report]
show_missing = True

==> python-prometheus-client-0.19.0+ds1/CODE_OF_CONDUCT.md <==

# Prometheus Community Code of Conduct

Prometheus follows the [CNCF Code of Conduct](https://github.com/cncf/foundation/blob/main/code-of-conduct.md).

==> python-prometheus-client-0.19.0+ds1/CONTRIBUTING.md <==

# Contributing

Prometheus uses GitHub to manage reviews of pull requests.

* If you have a trivial fix or improvement, go ahead and create a pull
  request, addressing (with `@...`) the maintainer of this repository (see
  [MAINTAINERS.md](MAINTAINERS.md)) in the description of the pull request.

* If you plan to do something more involved, first discuss your ideas on
  [our mailing list]. This will avoid unnecessary work and surely give you
  and us a good deal of inspiration.

* Before your contributions can be landed, they must be signed off under the
  [Developer Certificate of Origin], which asserts that you own and have the
  right to submit the change under the open source licence used by the
  project.

## Testing

Submitted changes should pass the current tests and be covered by new test
cases when adding functionality.

* Run the tests locally using [tox], which executes the full suite on all
  supported Python versions installed.
* Each pull request is gated using [Travis CI], with the results linked on
  the GitHub page. This must pass before the change can land; note that
  pushing a new change will trigger a retest.

## Style

* Code style should generally follow [PEP 8], and can be checked by running
  `tox -e flake8`.
* Import statements can be automatically formatted using [isort].

[our mailing list]: https://groups.google.com/forum/?fromgroups#!forum/prometheus-developers
[Developer Certificate of Origin]: https://github.com/prometheus/prometheus/wiki/DCO-signing
[isort]: https://pypi.org/project/isort/
[PEP 8]: https://www.python.org/dev/peps/pep-0008/
[tox]: https://tox.readthedocs.io/en/latest/
[Travis CI]: https://docs.travis-ci.com/

==> python-prometheus-client-0.19.0+ds1/LICENSE <==

                                 Apache License
                           Version 2.0, January 2004
                        http://www.apache.org/licenses/

[This file contains the standard, unmodified text of the Apache License,
Version 2.0: definitions; copyright and patent license grants;
redistribution conditions; submission of contributions; trademarks;
disclaimer of warranty; limitation of liability; accepting warranty or
additional liability; and the appendix on how to apply the license.]

==> python-prometheus-client-0.19.0+ds1/MAINTAINERS.md <==

* Chris Marchbanks @csmarchbanks

==> python-prometheus-client-0.19.0+ds1/MANIFEST.in <==

graft tests
global-exclude *.py[cod]
prune __pycache__
prune */__pycache__

==> python-prometheus-client-0.19.0+ds1/NOTICE <==

Prometheus instrumentation library for Python applications
Copyright 2015 The Prometheus Authors

This product bundles decorator 4.0.10 which is available under a "2-clause BSD"
license. For details, see prometheus_client/decorator.py.
==> python-prometheus-client-0.19.0+ds1/README.md <==

# Prometheus Python Client

The official Python client for [Prometheus](https://prometheus.io).

## Installation

```
pip install prometheus-client
```

This package can be found on [PyPI](https://pypi.python.org/pypi/prometheus_client).

## Documentation

Documentation is available at https://prometheus.github.io/client_python

## Links

* [Releases](https://github.com/prometheus/client_python/releases): The releases page shows the history of the project and acts as a changelog.
* [PyPI](https://pypi.python.org/pypi/prometheus_client)

==> python-prometheus-client-0.19.0+ds1/SECURITY.md <==

# Reporting a security issue

The Prometheus security policy, including how to report vulnerabilities, can
be found here: <https://prometheus.io/docs/operating/security/>

==> python-prometheus-client-0.19.0+ds1/mypy.ini <==

[mypy]
exclude = prometheus_client/decorator.py|prometheus_client/twisted|tests/test_twisted.py
implicit_reexport = False
disallow_incomplete_defs = True

[mypy-prometheus_client.decorator]
follow_imports = skip
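Before the package sources below, a minimal end-to-end sketch of the client
the README installs. The metric name and port are illustrative, not part of
the repository:

# Example (not part of the repository): quickstart for prometheus-client.
import time

from prometheus_client import Counter, start_http_server

REQUESTS = Counter('demo_requests_total', 'Requests processed')  # illustrative metric

if __name__ == '__main__':
    start_http_server(8000)   # metrics now served at http://localhost:8000/
    while True:
        REQUESTS.inc()        # simulate work being counted
        time.sleep(1)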
==> python-prometheus-client-0.19.0+ds1/prometheus_client/__init__.py <==

#!/usr/bin/env python

from . import (
    exposition, gc_collector, metrics, metrics_core, platform_collector,
    process_collector, registry,
)
from .exposition import (
    CONTENT_TYPE_LATEST, delete_from_gateway, generate_latest,
    instance_ip_grouping_key, make_asgi_app, make_wsgi_app, MetricsHandler,
    push_to_gateway, pushadd_to_gateway, start_http_server, start_wsgi_server,
    write_to_textfile,
)
from .gc_collector import GC_COLLECTOR, GCCollector
from .metrics import (
    Counter, disable_created_metrics, enable_created_metrics, Enum, Gauge,
    Histogram, Info, Summary,
)
from .metrics_core import Metric
from .platform_collector import PLATFORM_COLLECTOR, PlatformCollector
from .process_collector import PROCESS_COLLECTOR, ProcessCollector
from .registry import CollectorRegistry, REGISTRY

__all__ = (
    'CollectorRegistry',
    'REGISTRY',
    'Metric',
    'Counter',
    'Gauge',
    'Summary',
    'Histogram',
    'Info',
    'Enum',
    'enable_created_metrics',
    'disable_created_metrics',
    'CONTENT_TYPE_LATEST',
    'generate_latest',
    'MetricsHandler',
    'make_wsgi_app',
    'make_asgi_app',
    'start_http_server',
    'start_wsgi_server',
    'write_to_textfile',
    'push_to_gateway',
    'pushadd_to_gateway',
    'delete_from_gateway',
    'instance_ip_grouping_key',
    'ProcessCollector',
    'PROCESS_COLLECTOR',
    'PlatformCollector',
    'PLATFORM_COLLECTOR',
    'GCCollector',
    'GC_COLLECTOR',
)

if __name__ == '__main__':
    c = Counter('cc', 'A counter')
    c.inc()

    g = Gauge('gg', 'A gauge')
    g.set(17)

    s = Summary('ss', 'A summary', ['a', 'b'])
    s.labels('c', 'd').observe(17)

    h = Histogram('hh', 'A histogram')
    h.observe(.6)

    start_http_server(8000)
    import time
    while True:
        time.sleep(1)

==> python-prometheus-client-0.19.0+ds1/prometheus_client/asgi.py <==

from typing import Callable
from urllib.parse import parse_qs

from .exposition import _bake_output
from .registry import CollectorRegistry, REGISTRY


def make_asgi_app(registry: CollectorRegistry = REGISTRY, disable_compression: bool = False) -> Callable:
    """Create an ASGI app which serves the metrics from a registry."""

    async def prometheus_app(scope, receive, send):
        assert scope.get("type") == "http"
        # Prepare parameters
        params = parse_qs(scope.get('query_string', b''))
        accept_header = ",".join([
            value.decode("utf8") for (name, value) in scope.get('headers')
            if name.decode("utf8").lower() == 'accept'
        ])
        accept_encoding_header = ",".join([
            value.decode("utf8") for (name, value) in scope.get('headers')
            if name.decode("utf8").lower() == 'accept-encoding'
        ])
        # Bake output
        status, headers, output = _bake_output(registry, accept_header, accept_encoding_header, params, disable_compression)
        formatted_headers = []
        for header in headers:
            formatted_headers.append(tuple(x.encode('utf8') for x in header))
        # Return output
        payload = await receive()
        if payload.get("type") == "http.request":
            await send(
                {
                    "type": "http.response.start",
                    "status": int(status.split(' ')[0]),
                    "headers": formatted_headers,
                }
            )
            await send({"type": "http.response.body", "body": output})

    return prometheus_app
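A hedged sketch of how the app above is typically served. The choice of
uvicorn, the module name, and the mount path are assumptions for
illustration; any ASGI server or framework works the same way:

# Example (not part of the repository): serving the ASGI metrics app.
from prometheus_client import make_asgi_app

app = make_asgi_app()  # serves the default REGISTRY

# Run with an ASGI server of your choice, e.g.:
#   uvicorn metrics_app:app --port 8000
# Or mount inside a larger ASGI application (Starlette/FastAPI pattern):
#   main_app.mount("/metrics", make_asgi_app())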
==> python-prometheus-client-0.19.0+ds1/prometheus_client/bridge/__init__.py <==

(empty file)

==> python-prometheus-client-0.19.0+ds1/prometheus_client/bridge/graphite.py <==

#!/usr/bin/env python
import logging
import re
import socket
import threading
import time
from timeit import default_timer
from typing import Callable, Tuple

from ..registry import CollectorRegistry, REGISTRY

# Roughly, have to keep to what works as a file name.
# We also remove periods, so labels can be distinguished.
_INVALID_GRAPHITE_CHARS = re.compile(r"[^a-zA-Z0-9_-]")


def _sanitize(s):
    return _INVALID_GRAPHITE_CHARS.sub('_', s)


class _RegularPush(threading.Thread):
    def __init__(self, pusher, interval, prefix):
        super().__init__()
        self._pusher = pusher
        self._interval = interval
        self._prefix = prefix

    def run(self):
        wait_until = default_timer()
        while True:
            while True:
                now = default_timer()
                if now >= wait_until:
                    # May need to skip some pushes.
                    while wait_until < now:
                        wait_until += self._interval
                    break
                # time.sleep can return early.
                time.sleep(wait_until - now)
            try:
                self._pusher.push(prefix=self._prefix)
            except OSError:
                logging.exception("Push failed")


class GraphiteBridge:
    def __init__(self,
                 address: Tuple[str, int],
                 registry: CollectorRegistry = REGISTRY,
                 timeout_seconds: float = 30,
                 _timer: Callable[[], float] = time.time,
                 tags: bool = False,
                 ):
        self._address = address
        self._registry = registry
        self._tags = tags
        self._timeout = timeout_seconds
        self._timer = _timer

    def push(self, prefix: str = '') -> None:
        now = int(self._timer())
        output = []
        prefixstr = ''
        if prefix:
            prefixstr = prefix + '.'
        for metric in self._registry.collect():
            for s in metric.samples:
                if s.labels:
                    if self._tags:
                        sep = ';'
                        fmt = '{0}={1}'
                    else:
                        sep = '.'
                        fmt = '{0}.{1}'
                    labelstr = sep + sep.join(
                        [fmt.format(
                            _sanitize(k), _sanitize(v))
                            for k, v in sorted(s.labels.items())])
                else:
                    labelstr = ''
                output.append(f'{prefixstr}{_sanitize(s.name)}{labelstr} {float(s.value)} {now}\n')

        conn = socket.create_connection(self._address, self._timeout)
        conn.sendall(''.join(output).encode('ascii'))
        conn.close()

    def start(self, interval: float = 60.0, prefix: str = '') -> None:
        t = _RegularPush(self, interval, prefix)
        t.daemon = True
        t.start()
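A hedged usage sketch for the bridge above. The Graphite host, port,
interval, and prefix are placeholders:

# Example (not part of the repository): pushing the default registry to a
# Graphite/Carbon plaintext listener.
from prometheus_client.bridge.graphite import GraphiteBridge

gb = GraphiteBridge(('graphite.example.org', 2003))
gb.push(prefix='myapp')                   # one-off push
gb.start(interval=10.0, prefix='myapp')   # or push every 10s from a daemon thread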
==> python-prometheus-client-0.19.0+ds1/prometheus_client/context_managers.py <==

from timeit import default_timer
from types import TracebackType
from typing import (
    Any, Callable, Literal, Optional, Tuple, Type, TYPE_CHECKING, TypeVar,
    Union,
)

from .decorator import decorate

if TYPE_CHECKING:
    from . import Counter

F = TypeVar("F", bound=Callable[..., Any])


class ExceptionCounter:
    def __init__(self, counter: "Counter", exception: Union[Type[BaseException], Tuple[Type[BaseException], ...]]) -> None:
        self._counter = counter
        self._exception = exception

    def __enter__(self) -> None:
        pass

    def __exit__(self, typ: Optional[Type[BaseException]], value: Optional[BaseException], traceback: Optional[TracebackType]) -> Literal[False]:
        if isinstance(value, self._exception):
            self._counter.inc()
        return False

    def __call__(self, f: "F") -> "F":
        def wrapped(func, *args, **kwargs):
            with self:
                return func(*args, **kwargs)

        return decorate(f, wrapped)


class InprogressTracker:
    def __init__(self, gauge):
        self._gauge = gauge

    def __enter__(self):
        self._gauge.inc()

    def __exit__(self, typ, value, traceback):
        self._gauge.dec()

    def __call__(self, f: "F") -> "F":
        def wrapped(func, *args, **kwargs):
            with self:
                return func(*args, **kwargs)

        return decorate(f, wrapped)


class Timer:
    def __init__(self, metric, callback_name):
        self._metric = metric
        self._callback_name = callback_name

    def _new_timer(self):
        return self.__class__(self._metric, self._callback_name)

    def __enter__(self):
        self._start = default_timer()
        return self

    def __exit__(self, typ, value, traceback):
        # Time can go backwards.
        duration = max(default_timer() - self._start, 0)
        callback = getattr(self._metric, self._callback_name)
        callback(duration)

    def labels(self, *args, **kw):
        self._metric = self._metric.labels(*args, **kw)

    def __call__(self, f: "F") -> "F":
        def wrapped(func, *args, **kwargs):
            # Obtaining new instance of timer every time
            # ensures thread safety and reentrancy.
            with self._new_timer():
                return func(*args, **kwargs)

        return decorate(f, wrapped)
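The three classes above are reached in practice through the metric helper
methods rather than instantiated directly. A short sketch (metric names are
illustrative):

# Example (not part of the repository): the context managers via metric helpers.
from prometheus_client import Counter, Gauge, Summary

ERRORS = Counter('demo_errors_total', 'Errors raised')
IN_FLIGHT = Gauge('demo_in_flight', 'Requests in flight')
LATENCY = Summary('demo_latency_seconds', 'Request latency')

@ERRORS.count_exceptions(ValueError)   # ExceptionCounter
@IN_FLIGHT.track_inprogress()          # InprogressTracker
@LATENCY.time()                        # Timer
def handle_request():
    pass                               # each call is timed and tracked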
""" from __future__ import print_function import collections import inspect import itertools import operator import re import sys __version__ = '4.0.10' if sys.version_info >= (3,): from inspect import getfullargspec def get_init(cls): return cls.__init__ else: class getfullargspec(object): "A quick and dirty replacement for getfullargspec for Python 2.X" def __init__(self, f): self.args, self.varargs, self.varkw, self.defaults = \ inspect.getargspec(f) self.kwonlyargs = [] self.kwonlydefaults = None def __iter__(self): yield self.args yield self.varargs yield self.varkw yield self.defaults getargspec = inspect.getargspec def get_init(cls): return cls.__init__.__func__ # getargspec has been deprecated in Python 3.5 ArgSpec = collections.namedtuple( 'ArgSpec', 'args varargs varkw defaults') def getargspec(f): """A replacement for inspect.getargspec""" spec = getfullargspec(f) return ArgSpec(spec.args, spec.varargs, spec.varkw, spec.defaults) DEF = re.compile(r'\s*def\s*([_\w][_\w\d]*)\s*\(') # basic functionality class FunctionMaker(object): """ An object with the ability to create functions with a given signature. It has attributes name, doc, module, signature, defaults, dict and methods update and make. """ # Atomic get-and-increment provided by the GIL _compile_count = itertools.count() def __init__(self, func=None, name=None, signature=None, defaults=None, doc=None, module=None, funcdict=None): self.shortsignature = signature if func: # func can be a class or a callable, but not an instance method self.name = func.__name__ if self.name == '': # small hack for lambda functions self.name = '_lambda_' self.doc = func.__doc__ self.module = func.__module__ if inspect.isfunction(func): argspec = getfullargspec(func) self.annotations = getattr(func, '__annotations__', {}) for a in ('args', 'varargs', 'varkw', 'defaults', 'kwonlyargs', 'kwonlydefaults'): setattr(self, a, getattr(argspec, a)) for i, arg in enumerate(self.args): setattr(self, 'arg%d' % i, arg) if sys.version_info < (3,): # easy way self.shortsignature = self.signature = ( inspect.formatargspec( formatvalue=lambda val: "", *argspec)[1:-1]) else: # Python 3 way allargs = list(self.args) allshortargs = list(self.args) if self.varargs: allargs.append('*' + self.varargs) allshortargs.append('*' + self.varargs) elif self.kwonlyargs: allargs.append('*') # single star syntax for a in self.kwonlyargs: allargs.append('%s=None' % a) allshortargs.append('%s=%s' % (a, a)) if self.varkw: allargs.append('**' + self.varkw) allshortargs.append('**' + self.varkw) self.signature = ', '.join(allargs) self.shortsignature = ', '.join(allshortargs) self.dict = func.__dict__.copy() # func=None happens when decorating a caller if name: self.name = name if signature is not None: self.signature = signature if defaults: self.defaults = defaults if doc: self.doc = doc if module: self.module = module if funcdict: self.dict = funcdict # check existence required attributes assert hasattr(self, 'name') if not hasattr(self, 'signature'): raise TypeError('You are decorating a non function: %s' % func) def update(self, func, **kw): "Update the signature of func with the data in self" func.__name__ = self.name func.__doc__ = getattr(self, 'doc', None) func.__dict__ = getattr(self, 'dict', {}) func.__defaults__ = getattr(self, 'defaults', ()) func.__kwdefaults__ = getattr(self, 'kwonlydefaults', None) func.__annotations__ = getattr(self, 'annotations', None) try: frame = sys._getframe(3) except AttributeError: # for IronPython and similar implementations callermodule 
= '?' else: callermodule = frame.f_globals.get('__name__', '?') func.__module__ = getattr(self, 'module', callermodule) func.__dict__.update(kw) def make(self, src_templ, evaldict=None, addsource=False, **attrs): "Make a new function from a given template and update the signature" src = src_templ % vars(self) # expand name and signature evaldict = evaldict or {} mo = DEF.match(src) if mo is None: raise SyntaxError('not a valid function template\n%s' % src) name = mo.group(1) # extract the function name names = set([name] + [arg.strip(' *') for arg in self.shortsignature.split(',')]) for n in names: if n in ('_func_', '_call_'): raise NameError('%s is overridden in\n%s' % (n, src)) if not src.endswith('\n'): # add a newline for old Pythons src += '\n' # Ensure each generated function has a unique filename for profilers # (such as cProfile) that depend on the tuple of (, # , ) being unique. filename = '' % (next(self._compile_count),) try: code = compile(src, filename, 'single') exec(code, evaldict) except: print('Error in generated code:', file=sys.stderr) print(src, file=sys.stderr) raise func = evaldict[name] if addsource: attrs['__source__'] = src self.update(func, **attrs) return func @classmethod def create(cls, obj, body, evaldict, defaults=None, doc=None, module=None, addsource=True, **attrs): """ Create a function from the strings name, signature and body. evaldict is the evaluation dictionary. If addsource is true an attribute __source__ is added to the result. The attributes attrs are added, if any. """ if isinstance(obj, str): # "name(signature)" name, rest = obj.strip().split('(', 1) signature = rest[:-1] # strip a right parens func = None else: # a function name = None signature = None func = obj self = cls(func, name, signature, defaults, doc, module) ibody = '\n'.join(' ' + line for line in body.splitlines()) return self.make('def %(name)s(%(signature)s):\n' + ibody, evaldict, addsource, **attrs) def decorate(func, caller): """ decorate(func, caller) decorates a function using a caller. 
""" evaldict = dict(_call_=caller, _func_=func) fun = FunctionMaker.create( func, "return _call_(_func_, %(shortsignature)s)", evaldict, __wrapped__=func) if hasattr(func, '__qualname__'): fun.__qualname__ = func.__qualname__ return fun def decorator(caller, _func=None): """decorator(caller) converts a caller function into a decorator""" if _func is not None: # return a decorated function # this is obsolete behavior; you should use decorate instead return decorate(_func, caller) # else return a decorator function if inspect.isclass(caller): name = caller.__name__.lower() doc = 'decorator(%s) converts functions/generators into ' \ 'factories of %s objects' % (caller.__name__, caller.__name__) elif inspect.isfunction(caller): if caller.__name__ == '': name = '_lambda_' else: name = caller.__name__ doc = caller.__doc__ else: # assume caller is an object with a __call__ method name = caller.__class__.__name__.lower() doc = caller.__call__.__doc__ evaldict = dict(_call_=caller, _decorate_=decorate) return FunctionMaker.create( '%s(func)' % name, 'return _decorate_(func, _call_)', evaldict, doc=doc, module=caller.__module__, __wrapped__=caller) # ####################### contextmanager ####################### # try: # Python >= 3.2 from contextlib import _GeneratorContextManager except ImportError: # Python >= 2.5 from contextlib import GeneratorContextManager as _GeneratorContextManager class ContextManager(_GeneratorContextManager): def __call__(self, func): """Context manager decorator""" return FunctionMaker.create( func, "with _self_: return _func_(%(shortsignature)s)", dict(_self_=self, _func_=func), __wrapped__=func) init = getfullargspec(_GeneratorContextManager.__init__) n_args = len(init.args) if n_args == 2 and not init.varargs: # (self, genobj) Python 2.7 def __init__(self, g, *a, **k): return _GeneratorContextManager.__init__(self, g(*a, **k)) ContextManager.__init__ = __init__ elif n_args == 2 and init.varargs: # (self, gen, *a, **k) Python 3.4 pass elif n_args == 4: # (self, gen, args, kwds) Python 3.5 def __init__(self, g, *a, **k): return _GeneratorContextManager.__init__(self, g, a, k) ContextManager.__init__ = __init__ contextmanager = decorator(ContextManager) # ############################ dispatch_on ############################ # def append(a, vancestors): """ Append ``a`` to the list of the virtual ancestors, unless it is already included. """ add = True for j, va in enumerate(vancestors): if issubclass(va, a): add = False break if issubclass(a, va): vancestors[j] = a add = False if add: vancestors.append(a) # inspired from simplegeneric by P.J. Eby and functools.singledispatch def dispatch_on(*dispatch_args): """ Factory of decorators turning a function into a generic function dispatching on the given arguments. 
""" assert dispatch_args, 'No dispatch args passed' dispatch_str = '(%s,)' % ', '.join(dispatch_args) def check(arguments, wrong=operator.ne, msg=''): """Make sure one passes the expected number of arguments""" if wrong(len(arguments), len(dispatch_args)): raise TypeError('Expected %d arguments, got %d%s' % (len(dispatch_args), len(arguments), msg)) def gen_func_dec(func): """Decorator turning a function into a generic function""" # first check the dispatch arguments argset = set(getfullargspec(func).args) if not set(dispatch_args) <= argset: raise NameError('Unknown dispatch arguments %s' % dispatch_str) typemap = {} def vancestors(*types): """ Get a list of sets of virtual ancestors for the given types """ check(types) ras = [[] for _ in range(len(dispatch_args))] for types_ in typemap: for t, type_, ra in zip(types, types_, ras): if issubclass(t, type_) and type_ not in t.__mro__: append(type_, ra) return [set(ra) for ra in ras] def ancestors(*types): """ Get a list of virtual MROs, one for each type """ check(types) lists = [] for t, vas in zip(types, vancestors(*types)): n_vas = len(vas) if n_vas > 1: raise RuntimeError( 'Ambiguous dispatch for %s: %s' % (t, vas)) elif n_vas == 1: va, = vas mro = type('t', (t, va), {}).__mro__[1:] else: mro = t.__mro__ lists.append(mro[:-1]) # discard t and object return lists def register(*types): """ Decorator to register an implementation for the given types """ check(types) def dec(f): check(getfullargspec(f).args, operator.lt, ' in ' + f.__name__) typemap[types] = f return f return dec def dispatch_info(*types): """ An utility to introspect the dispatch algorithm """ check(types) lst = [] for anc in itertools.product(*ancestors(*types)): lst.append(tuple(a.__name__ for a in anc)) return lst def _dispatch(dispatch_args, *args, **kw): types = tuple(type(arg) for arg in dispatch_args) try: # fast path f = typemap[types] except KeyError: pass else: return f(*args, **kw) combinations = itertools.product(*ancestors(*types)) next(combinations) # the first one has been already tried for types_ in combinations: f = typemap.get(types_) if f is not None: return f(*args, **kw) # else call the default implementation return func(*args, **kw) return FunctionMaker.create( func, 'return _f_(%s, %%(shortsignature)s)' % dispatch_str, dict(_f_=_dispatch), register=register, default=func, typemap=typemap, vancestors=vancestors, ancestors=ancestors, dispatch_info=dispatch_info, __wrapped__=func) gen_func_dec.__name__ = 'dispatch_on' + dispatch_str return gen_func_dec python-prometheus-client-0.19.0+ds1/prometheus_client/exposition.py000066400000000000000000000616601454301344400255710ustar00rootroot00000000000000import base64 from contextlib import closing import gzip from http.server import BaseHTTPRequestHandler import os import socket from socketserver import ThreadingMixIn import ssl import sys import threading from typing import Any, Callable, Dict, List, Optional, Sequence, Tuple, Union from urllib.error import HTTPError from urllib.parse import parse_qs, quote_plus, urlparse from urllib.request import ( BaseHandler, build_opener, HTTPHandler, HTTPRedirectHandler, HTTPSHandler, Request, ) from wsgiref.simple_server import make_server, WSGIRequestHandler, WSGIServer from .openmetrics import exposition as openmetrics from .registry import CollectorRegistry, REGISTRY from .utils import floatToGoString __all__ = ( 'CONTENT_TYPE_LATEST', 'delete_from_gateway', 'generate_latest', 'instance_ip_grouping_key', 'make_asgi_app', 'make_wsgi_app', 'MetricsHandler', 
'push_to_gateway', 'pushadd_to_gateway', 'start_http_server', 'start_wsgi_server', 'write_to_textfile', ) CONTENT_TYPE_LATEST = 'text/plain; version=0.0.4; charset=utf-8' """Content type of the latest text format""" class _PrometheusRedirectHandler(HTTPRedirectHandler): """ Allow additional methods (e.g. PUT) and data forwarding in redirects. Use of this class constitute a user's explicit agreement to the redirect responses the Prometheus client will receive when using it. You should only use this class if you control or otherwise trust the redirect behavior involved and are certain it is safe to full transfer the original request (method and data) to the redirected URL. For example, if you know there is a cosmetic URL redirect in front of a local deployment of a Prometheus server, and all redirects are safe, this is the class to use to handle redirects in that case. The standard HTTPRedirectHandler does not forward request data nor does it allow redirected PUT requests (which Prometheus uses for some operations, for example `push_to_gateway`) because these cannot generically guarantee no violations of HTTP RFC 2616 requirements for the user to explicitly confirm redirects that could have unexpected side effects (such as rendering a PUT request non-idempotent or creating multiple resources not named in the original request). """ def redirect_request(self, req, fp, code, msg, headers, newurl): """ Apply redirect logic to a request. See parent HTTPRedirectHandler.redirect_request for parameter info. If the redirect is disallowed, this raises the corresponding HTTP error. If the redirect can't be determined, return None to allow other handlers to try. If the redirect is allowed, return the new request. This method specialized for the case when (a) the user knows that the redirect will not cause unacceptable side effects for any request method, and (b) the user knows that any request data should be passed through to the redirect. If either condition is not met, this should not be used. """ # note that requests being provided by a handler will use get_method to # indicate the method, by monkeypatching this, instead of setting the # Request object's method attribute. m = getattr(req, "method", req.get_method()) if not (code in (301, 302, 303, 307) and m in ("GET", "HEAD") or code in (301, 302, 303) and m in ("POST", "PUT")): raise HTTPError(req.full_url, code, msg, headers, fp) new_request = Request( newurl.replace(' ', '%20'), # space escaping in new url if needed. headers=req.headers, origin_req_host=req.origin_req_host, unverifiable=True, data=req.data, ) new_request.method = m return new_request def _bake_output(registry, accept_header, accept_encoding_header, params, disable_compression): """Bake output for metrics output.""" # Choose the correct plain text format of the output. encoder, content_type = choose_encoder(accept_header) if 'name[]' in params: registry = registry.restricted_registry(params['name[]']) output = encoder(registry) headers = [('Content-Type', content_type)] # If gzip encoding required, gzip the output. 
if not disable_compression and gzip_accepted(accept_encoding_header): output = gzip.compress(output) headers.append(('Content-Encoding', 'gzip')) return '200 OK', headers, output def make_wsgi_app(registry: CollectorRegistry = REGISTRY, disable_compression: bool = False) -> Callable: """Create a WSGI app which serves the metrics from a registry.""" def prometheus_app(environ, start_response): # Prepare parameters accept_header = environ.get('HTTP_ACCEPT') accept_encoding_header = environ.get('HTTP_ACCEPT_ENCODING') params = parse_qs(environ.get('QUERY_STRING', '')) if environ['PATH_INFO'] == '/favicon.ico': # Serve empty response for browsers status = '200 OK' headers = [('', '')] output = b'' else: # Bake output status, headers, output = _bake_output(registry, accept_header, accept_encoding_header, params, disable_compression) # Return output start_response(status, headers) return [output] return prometheus_app class _SilentHandler(WSGIRequestHandler): """WSGI handler that does not log requests.""" def log_message(self, format, *args): """Log nothing.""" class ThreadingWSGIServer(ThreadingMixIn, WSGIServer): """Thread per request HTTP server.""" # Make worker threads "fire and forget". Beginning with Python 3.7 this # prevents a memory leak because ``ThreadingMixIn`` starts to gather all # non-daemon threads in a list in order to join on them at server close. daemon_threads = True def _get_best_family(address, port): """Automatically select address family depending on address""" # HTTPServer defaults to AF_INET, which will not start properly if # binding an ipv6 address is requested. # This function is based on what upstream python did for http.server # in https://github.com/python/cpython/pull/11767 infos = socket.getaddrinfo(address, port) family, _, _, _, sockaddr = next(iter(infos)) return family, sockaddr[0] def _get_ssl_ctx( certfile: str, keyfile: str, protocol: int, cafile: Optional[str] = None, capath: Optional[str] = None, client_auth_required: bool = False, ) -> ssl.SSLContext: """Load context supports SSL.""" ssl_cxt = ssl.SSLContext(protocol=protocol) if cafile is not None or capath is not None: try: ssl_cxt.load_verify_locations(cafile, capath) except IOError as exc: exc_type = type(exc) msg = str(exc) raise exc_type(f"Cannot load CA certificate chain from file " f"{cafile!r} or directory {capath!r}: {msg}") else: try: ssl_cxt.load_default_certs(purpose=ssl.Purpose.CLIENT_AUTH) except IOError as exc: exc_type = type(exc) msg = str(exc) raise exc_type(f"Cannot load default CA certificate chain: {msg}") if client_auth_required: ssl_cxt.verify_mode = ssl.CERT_REQUIRED try: ssl_cxt.load_cert_chain(certfile=certfile, keyfile=keyfile) except IOError as exc: exc_type = type(exc) msg = str(exc) raise exc_type(f"Cannot load server certificate file {certfile!r} or " f"its private key file {keyfile!r}: {msg}") return ssl_cxt def start_wsgi_server( port: int, addr: str = '0.0.0.0', registry: CollectorRegistry = REGISTRY, certfile: Optional[str] = None, keyfile: Optional[str] = None, client_cafile: Optional[str] = None, client_capath: Optional[str] = None, protocol: int = ssl.PROTOCOL_TLS_SERVER, client_auth_required: bool = False, ) -> None: """Starts a WSGI server for prometheus metrics as a daemon thread.""" class TmpServer(ThreadingWSGIServer): """Copy of ThreadingWSGIServer to update address_family locally""" TmpServer.address_family, addr = _get_best_family(addr, port) app = make_wsgi_app(registry) httpd = make_server(addr, port, app, TmpServer, handler_class=_SilentHandler) if 
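A hedged usage sketch for the server entry point just defined, plain and
with TLS. The certificate paths are placeholders:

# Example (not part of the repository): starting the metrics HTTP server.
from prometheus_client import start_http_server

start_http_server(8000)  # plain HTTP on all interfaces

# With certfile/keyfile the same call wraps the listening socket in TLS:
start_http_server(8443, certfile='server.crt', keyfile='server.key')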
def generate_latest(registry: CollectorRegistry = REGISTRY) -> bytes:
    """Returns the metrics from the registry in latest text format as a string."""

    def sample_line(line):
        if line.labels:
            labelstr = '{{{0}}}'.format(','.join(
                ['{}="{}"'.format(
                    k, v.replace('\\', r'\\').replace('\n', r'\n').replace('"', r'\"'))
                    for k, v in sorted(line.labels.items())]))
        else:
            labelstr = ''
        timestamp = ''
        if line.timestamp is not None:
            # Convert to milliseconds.
            timestamp = f' {int(float(line.timestamp) * 1000):d}'
        return f'{line.name}{labelstr} {floatToGoString(line.value)}{timestamp}\n'

    output = []
    for metric in registry.collect():
        try:
            mname = metric.name
            mtype = metric.type
            # Munging from OpenMetrics into Prometheus format.
            if mtype == 'counter':
                mname = mname + '_total'
            elif mtype == 'info':
                mname = mname + '_info'
                mtype = 'gauge'
            elif mtype == 'stateset':
                mtype = 'gauge'
            elif mtype == 'gaugehistogram':
                # A gauge histogram is really a gauge,
                # but this captures the structure better.
                mtype = 'histogram'
            elif mtype == 'unknown':
                mtype = 'untyped'

            output.append('# HELP {} {}\n'.format(
                mname, metric.documentation.replace('\\', r'\\').replace('\n', r'\n')))
            output.append(f'# TYPE {mname} {mtype}\n')

            om_samples: Dict[str, List[str]] = {}
            for s in metric.samples:
                for suffix in ['_created', '_gsum', '_gcount']:
                    if s.name == metric.name + suffix:
                        # OpenMetrics specific sample, put in a gauge at the end.
                        om_samples.setdefault(suffix, []).append(sample_line(s))
                        break
                else:
                    output.append(sample_line(s))
        except Exception as exception:
            exception.args = (exception.args or ('',)) + (metric,)
            raise

        for suffix, lines in sorted(om_samples.items()):
            output.append('# HELP {}{} {}\n'.format(
                metric.name, suffix,
                metric.documentation.replace('\\', r'\\').replace('\n', r'\n')))
            output.append(f'# TYPE {metric.name}{suffix} gauge\n')
            output.extend(lines)
    return ''.join(output).encode('utf-8')


def choose_encoder(accept_header: str) -> Tuple[Callable[[CollectorRegistry], bytes], str]:
    accept_header = accept_header or ''
    for accepted in accept_header.split(','):
        if accepted.split(';')[0].strip() == 'application/openmetrics-text':
            return (openmetrics.generate_latest,
                    openmetrics.CONTENT_TYPE_LATEST)
    return generate_latest, CONTENT_TYPE_LATEST


def gzip_accepted(accept_encoding_header: str) -> bool:
    accept_encoding_header = accept_encoding_header or ''
    for accepted in accept_encoding_header.split(','):
        if accepted.split(';')[0].strip().lower() == 'gzip':
            return True
    return False


class MetricsHandler(BaseHTTPRequestHandler):
    """HTTP handler that gives metrics from ``REGISTRY``."""
    registry: CollectorRegistry = REGISTRY

    def do_GET(self) -> None:
        # Prepare parameters
        registry = self.registry
        accept_header = self.headers.get('Accept')
        accept_encoding_header = self.headers.get('Accept-Encoding')
        params = parse_qs(urlparse(self.path).query)
        # Bake output
        status, headers, output = _bake_output(registry, accept_header, accept_encoding_header, params, False)
        # Return output
        self.send_response(int(status.split(' ')[0]))
        for header in headers:
            self.send_header(*header)
        self.end_headers()
        self.wfile.write(output)

    def log_message(self, format: str, *args: Any) -> None:
        """Log nothing."""

    @classmethod
    def factory(cls, registry: CollectorRegistry) -> type:
        """Returns a dynamic MetricsHandler class tied
           to the passed registry.
        """
        # This implementation relies on MetricsHandler.registry
        # (defined above and defaulted to REGISTRY).

        # As we have unicode_literals, we need to create a str()
        # object for type().
        cls_name = str(cls.__name__)
        MyMetricsHandler = type(cls_name, (cls, object),
                                {"registry": registry})
        return MyMetricsHandler


def write_to_textfile(path: str, registry: CollectorRegistry) -> None:
    """Write metrics to the given path.

    This is intended for use with the Node exporter textfile collector.
    The path must end in .prom for the textfile collector to process it."""
    tmppath = f'{path}.{os.getpid()}.{threading.current_thread().ident}'
    with open(tmppath, 'wb') as f:
        f.write(generate_latest(registry))

    # rename(2) is atomic but fails on Windows if the destination file exists
    if os.name == 'nt':
        os.replace(tmppath, path)
    else:
        os.rename(tmppath, path)
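A short sketch of write_to_textfile() for the node_exporter textfile
collector. The output directory is an assumption; use whatever
--collector.textfile.directory your node_exporter is started with:

# Example (not part of the repository): textfile collector output.
from prometheus_client import CollectorRegistry, Gauge, write_to_textfile

registry = CollectorRegistry()
g = Gauge('raid_status', '1 if raid array is okay', registry=registry)
g.set(1)
write_to_textfile('/var/lib/node_exporter/raid_status.prom', registry)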
""" if username is not None and password is not None: auth_value = f'{username}:{password}'.encode() auth_token = base64.b64encode(auth_value) auth_header = b'Basic ' + auth_token headers.append(('Authorization', auth_header)) default_handler(url, method, timeout, headers, data)() return handle def tls_auth_handler( url: str, method: str, timeout: Optional[float], headers: List[Tuple[str, str]], data: bytes, certfile: str, keyfile: str, cafile: Optional[str] = None, protocol: int = ssl.PROTOCOL_TLS_CLIENT, insecure_skip_verify: bool = False, ) -> Callable[[], None]: """Handler that implements an HTTPS connection with TLS Auth. The default protocol (ssl.PROTOCOL_TLS_CLIENT) will also enable ssl.CERT_REQUIRED and SSLContext.check_hostname by default. This can be disabled by setting insecure_skip_verify to True. Both this handler and the TLS feature on pushgateay are experimental.""" context = ssl.SSLContext(protocol=protocol) if cafile is not None: context.load_verify_locations(cafile) else: context.load_default_certs() if insecure_skip_verify: context.check_hostname = False context.verify_mode = ssl.CERT_NONE context.load_cert_chain(certfile=certfile, keyfile=keyfile) handler = HTTPSHandler(context=context) return _make_handler(url, method, timeout, headers, data, handler) def push_to_gateway( gateway: str, job: str, registry: CollectorRegistry, grouping_key: Optional[Dict[str, Any]] = None, timeout: Optional[float] = 30, handler: Callable = default_handler, ) -> None: """Push metrics to the given pushgateway. `gateway` the url for your push gateway. Either of the form 'http://pushgateway.local', or 'pushgateway.local'. Scheme defaults to 'http' if none is provided `job` is the job label to be attached to all pushed metrics `registry` is an instance of CollectorRegistry `grouping_key` please see the pushgateway documentation for details. Defaults to None `timeout` is how long push will attempt to connect before giving up. Defaults to 30s, can be set to None for no timeout. `handler` is an optional function which can be provided to perform requests to the 'gateway'. Defaults to None, in which case an http or https request will be carried out by a default handler. If not None, the argument must be a function which accepts the following arguments: url, method, timeout, headers, and content May be used to implement additional functionality not supported by the built-in default handler (such as SSL client certicates, and HTTP authentication mechanisms). 'url' is the URL for the request, the 'gateway' argument described earlier will form the basis of this URL. 'method' is the HTTP method which should be used when carrying out the request. 'timeout' requests not successfully completed after this many seconds should be aborted. If timeout is None, then the handler should not set a timeout. 'headers' is a list of ("header-name","header-value") tuples which must be passed to the pushgateway in the form of HTTP request headers. The function should raise an exception (e.g. IOError) on failure. 'content' is the data which should be used to form the HTTP Message Body. This overwrites all metrics with the same job and grouping_key. This uses the PUT HTTP method.""" _use_gateway('PUT', gateway, job, registry, grouping_key, timeout, handler) def pushadd_to_gateway( gateway: str, job: str, registry: Optional[CollectorRegistry], grouping_key: Optional[Dict[str, Any]] = None, timeout: Optional[float] = 30, handler: Callable = default_handler, ) -> None: """PushAdd metrics to the given pushgateway. 
def push_to_gateway(
        gateway: str,
        job: str,
        registry: CollectorRegistry,
        grouping_key: Optional[Dict[str, Any]] = None,
        timeout: Optional[float] = 30,
        handler: Callable = default_handler,
) -> None:
    """Push metrics to the given pushgateway.

    `gateway` the url for your push gateway. Either of the form
              'http://pushgateway.local', or 'pushgateway.local'.
              Scheme defaults to 'http' if none is provided
    `job` is the job label to be attached to all pushed metrics
    `registry` is an instance of CollectorRegistry
    `grouping_key` please see the pushgateway documentation for details.
                   Defaults to None
    `timeout` is how long push will attempt to connect before giving up.
              Defaults to 30s, can be set to None for no timeout.
    `handler` is an optional function which can be provided to perform
              requests to the 'gateway'.
              Defaults to None, in which case an http or https request
              will be carried out by a default handler.
              If not None, the argument must be a function which accepts
              the following arguments:
              url, method, timeout, headers, and content
              May be used to implement additional functionality not
              supported by the built-in default handler (such as SSL
              client certificates and HTTP authentication mechanisms).
              'url' is the URL for the request, the 'gateway' argument
              described earlier will form the basis of this URL.
              'method' is the HTTP method which should be used when
              carrying out the request.
              'timeout' requests not successfully completed after this
              many seconds should be aborted. If timeout is None, then
              the handler should not set a timeout.
              'headers' is a list of ("header-name","header-value") tuples
              which must be passed to the pushgateway in the form of HTTP
              request headers.
              The function should raise an exception (e.g. IOError) on
              failure.
              'content' is the data which should be used to form the HTTP
              Message Body.

    This overwrites all metrics with the same job and grouping_key.
    This uses the PUT HTTP method."""
    _use_gateway('PUT', gateway, job, registry, grouping_key, timeout, handler)


def pushadd_to_gateway(
        gateway: str,
        job: str,
        registry: Optional[CollectorRegistry],
        grouping_key: Optional[Dict[str, Any]] = None,
        timeout: Optional[float] = 30,
        handler: Callable = default_handler,
) -> None:
    """PushAdd metrics to the given pushgateway.

    `gateway` the url for your push gateway. Either of the form
              'http://pushgateway.local', or 'pushgateway.local'.
              Scheme defaults to 'http' if none is provided
    `job` is the job label to be attached to all pushed metrics
    `registry` is an instance of CollectorRegistry
    `grouping_key` please see the pushgateway documentation for details.
                   Defaults to None
    `timeout` is how long push will attempt to connect before giving up.
              Defaults to 30s, can be set to None for no timeout.
    `handler` is an optional function which can be provided to perform
              requests to the 'gateway'.
              Defaults to None, in which case an http or https request
              will be carried out by a default handler.
              See the 'prometheus_client.push_to_gateway' documentation
              for implementation requirements.

    This replaces metrics with the same name, job and grouping_key.
    This uses the POST HTTP method."""
    _use_gateway('POST', gateway, job, registry, grouping_key, timeout, handler)


def delete_from_gateway(
        gateway: str,
        job: str,
        grouping_key: Optional[Dict[str, Any]] = None,
        timeout: Optional[float] = 30,
        handler: Callable = default_handler,
) -> None:
    """Delete metrics from the given pushgateway.

    `gateway` the url for your push gateway. Either of the form
              'http://pushgateway.local', or 'pushgateway.local'.
              Scheme defaults to 'http' if none is provided
    `job` is the job label to be attached to all pushed metrics
    `grouping_key` please see the pushgateway documentation for details.
                   Defaults to None
    `timeout` is how long delete will attempt to connect before giving up.
              Defaults to 30s, can be set to None for no timeout.
    `handler` is an optional function which can be provided to perform
              requests to the 'gateway'.
              Defaults to None, in which case an http or https request
              will be carried out by a default handler.
              See the 'prometheus_client.push_to_gateway' documentation
              for implementation requirements.

    This deletes metrics with the given job and grouping_key.
    This uses the DELETE HTTP method."""
    _use_gateway('DELETE', gateway, job, None, grouping_key, timeout, handler)


def _use_gateway(
        method: str,
        gateway: str,
        job: str,
        registry: Optional[CollectorRegistry],
        grouping_key: Optional[Dict[str, Any]],
        timeout: Optional[float],
        handler: Callable,
) -> None:
    gateway_url = urlparse(gateway)
    # See https://bugs.python.org/issue27657 for details on urlparse in py>=3.7.6.
    if not gateway_url.scheme or gateway_url.scheme not in ['http', 'https']:
        gateway = f'http://{gateway}'

    gateway = gateway.rstrip('/')
    url = '{}/metrics/{}/{}'.format(gateway, *_escape_grouping_key("job", job))

    data = b''
    if method != 'DELETE':
        if registry is None:
            registry = REGISTRY
        data = generate_latest(registry)

    if grouping_key is None:
        grouping_key = {}
    url += ''.join(
        '/{}/{}'.format(*_escape_grouping_key(str(k), str(v)))
        for k, v in sorted(grouping_key.items()))

    handler(
        url=url, method=method, timeout=timeout,
        headers=[('Content-Type', CONTENT_TYPE_LATEST)], data=data,
    )()


def _escape_grouping_key(k, v):
    if v == "":
        # Per https://github.com/prometheus/pushgateway/pull/346.
        return k + "@base64", "="
    elif '/' in v:
        # Added in Pushgateway 0.9.0.
        return k + "@base64", base64.urlsafe_b64encode(v.encode("utf-8")).decode("utf-8")
    else:
        return k, quote_plus(v)


def instance_ip_grouping_key() -> Dict[str, Any]:
    """Grouping key with instance set to the IP Address of this host."""
    with closing(socket.socket(socket.AF_INET, socket.SOCK_DGRAM)) as s:
        if sys.platform == 'darwin':
            # This check is done this way only on MacOS devices
            # it is done this way because the localhost method does
            # not work.
            # This method was adapted from this StackOverflow answer:
            # https://stackoverflow.com/a/28950776
            s.connect(('10.255.255.255', 1))
        else:
            s.connect(('localhost', 0))

        return {'instance': s.getsockname()[0]}


from .asgi import make_asgi_app  # noqa
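A hedged sketch putting the three gateway verbs side by side;
instance_ip_grouping_key() groups the pushed series by this host's IP, and
the gateway address and job name are placeholders:

# Example (not part of the repository): the pushgateway verbs.
from prometheus_client import (
    CollectorRegistry, Counter, delete_from_gateway, instance_ip_grouping_key,
    push_to_gateway, pushadd_to_gateway,
)

registry = CollectorRegistry()
c = Counter('jobs_processed_total', 'Jobs processed', registry=registry)
c.inc()

key = instance_ip_grouping_key()
push_to_gateway('localhost:9091', job='batch', registry=registry, grouping_key=key)     # PUT: overwrite group
pushadd_to_gateway('localhost:9091', job='batch', registry=registry, grouping_key=key)  # POST: replace same-named
delete_from_gateway('localhost:9091', job='batch', grouping_key=key)                    # DELETE: remove group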
# This method was adapted from this StackOverflow answer: # https://stackoverflow.com/a/28950776 s.connect(('10.255.255.255', 1)) else: s.connect(('localhost', 0)) return {'instance': s.getsockname()[0]} from .asgi import make_asgi_app # noqa python-prometheus-client-0.19.0+ds1/prometheus_client/gc_collector.py000066400000000000000000000027521454301344400260240ustar00rootroot00000000000000import gc import platform from typing import Iterable from .metrics_core import CounterMetricFamily, Metric from .registry import Collector, CollectorRegistry, REGISTRY class GCCollector(Collector): """Collector for Garbage collection statistics.""" def __init__(self, registry: CollectorRegistry = REGISTRY): if not hasattr(gc, 'get_stats') or platform.python_implementation() != 'CPython': return registry.register(self) def collect(self) -> Iterable[Metric]: collected = CounterMetricFamily( 'python_gc_objects_collected', 'Objects collected during gc', labels=['generation'], ) uncollectable = CounterMetricFamily( 'python_gc_objects_uncollectable', 'Uncollectable objects found during GC', labels=['generation'], ) collections = CounterMetricFamily( 'python_gc_collections', 'Number of times this generation was collected', labels=['generation'], ) for gen, stat in enumerate(gc.get_stats()): generation = str(gen) collected.add_metric([generation], value=stat['collected']) uncollectable.add_metric([generation], value=stat['uncollectable']) collections.add_metric([generation], value=stat['collections']) return [collected, uncollectable, collections] GC_COLLECTOR = GCCollector() """Default GCCollector in default Registry REGISTRY.""" python-prometheus-client-0.19.0+ds1/prometheus_client/metrics.py000066400000000000000000000647541454301344400250450ustar00rootroot00000000000000import os from threading import Lock import time import types from typing import ( Any, Callable, Dict, Iterable, List, Literal, Optional, Sequence, Tuple, Type, TypeVar, Union, ) from . import values # retain this import style for testability from .context_managers import ExceptionCounter, InprogressTracker, Timer from .metrics_core import ( Metric, METRIC_LABEL_NAME_RE, METRIC_NAME_RE, RESERVED_METRIC_LABEL_NAME_RE, ) from .registry import Collector, CollectorRegistry, REGISTRY from .samples import Exemplar, Sample from .utils import floatToGoString, INF T = TypeVar('T', bound='MetricWrapperBase') F = TypeVar("F", bound=Callable[..., Any]) def _build_full_name(metric_type, name, namespace, subsystem, unit): full_name = '' if namespace: full_name += namespace + '_' if subsystem: full_name += subsystem + '_' full_name += name if metric_type == 'counter' and full_name.endswith('_total'): full_name = full_name[:-6] # Munge to OpenMetrics. 
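    # As a sketch of the result (names are illustrative): for a Counter declared
    # as Counter('requests_total', '...', namespace='app', subsystem='http'),
    # this function returns 'app_http_requests'; the '_total' suffix stripped
    # here is re-added to the individual samples at exposition time.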
    if unit and not full_name.endswith("_" + unit):
        full_name += "_" + unit
    if unit and metric_type in ('info', 'stateset'):
        raise ValueError('Metric name is of a type that cannot have a unit: ' + full_name)
    return full_name


def _validate_labelname(l):
    if not METRIC_LABEL_NAME_RE.match(l):
        raise ValueError('Invalid label metric name: ' + l)
    if RESERVED_METRIC_LABEL_NAME_RE.match(l):
        raise ValueError('Reserved label metric name: ' + l)


def _validate_labelnames(cls, labelnames):
    labelnames = tuple(labelnames)
    for l in labelnames:
        _validate_labelname(l)
        if l in cls._reserved_labelnames:
            raise ValueError('Reserved label metric name: ' + l)
    return labelnames


def _validate_exemplar(exemplar):
    runes = 0
    for k, v in exemplar.items():
        _validate_labelname(k)
        runes += len(k)
        runes += len(v)
    if runes > 128:
        raise ValueError('Exemplar labels have %d UTF-8 characters, exceeding the limit of 128' % runes)


def _get_use_created() -> bool:
    return os.environ.get("PROMETHEUS_DISABLE_CREATED_SERIES", 'False').lower() not in ('true', '1', 't')


_use_created = _get_use_created()


def disable_created_metrics():
    """Disable exporting _created metrics on counters, histograms, and summaries."""
    global _use_created
    _use_created = False


def enable_created_metrics():
    """Enable exporting _created metrics on counters, histograms, and summaries."""
    global _use_created
    _use_created = True


class MetricWrapperBase(Collector):
    _type: Optional[str] = None
    _reserved_labelnames: Sequence[str] = ()

    def _is_observable(self):
        # Whether this metric is observable, i.e.
        # * a metric without label names and values, or
        # * the child of a labelled metric.
        return not self._labelnames or (self._labelnames and self._labelvalues)

    def _raise_if_not_observable(self):
        # Functions that mutate the state of the metric, for example incrementing
        # a counter, will fail if the metric is not observable, because only if a
        # metric is observable will the value be initialized.
        if not self._is_observable():
            raise ValueError('%s metric is missing label values' % str(self._type))

    def _is_parent(self):
        return self._labelnames and not self._labelvalues

    def _get_metric(self):
        return Metric(self._name, self._documentation, self._type, self._unit)

    def describe(self) -> Iterable[Metric]:
        return [self._get_metric()]

    def collect(self) -> Iterable[Metric]:
        metric = self._get_metric()
        for suffix, labels, value, timestamp, exemplar in self._samples():
            metric.add_sample(self._name + suffix, labels, value, timestamp, exemplar)
        return [metric]

    def __str__(self) -> str:
        return f"{self._type}:{self._name}"

    def __repr__(self) -> str:
        metric_type = type(self)
        return f"{metric_type.__module__}.{metric_type.__name__}({self._name})"

    def __init__(self: T,
                 name: str,
                 documentation: str,
                 labelnames: Iterable[str] = (),
                 namespace: str = '',
                 subsystem: str = '',
                 unit: str = '',
                 registry: Optional[CollectorRegistry] = REGISTRY,
                 _labelvalues: Optional[Sequence[str]] = None,
                 ) -> None:
        self._name = _build_full_name(self._type, name, namespace, subsystem, unit)
        self._labelnames = _validate_labelnames(self, labelnames)
        self._labelvalues = tuple(_labelvalues or ())
        self._kwargs: Dict[str, Any] = {}
        self._documentation = documentation
        self._unit = unit

        if not METRIC_NAME_RE.match(self._name):
            raise ValueError('Invalid metric name: ' + self._name)

        if self._is_parent():
            # Prepare the fields needed for child metrics.
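            # A sketch of the flow (label values are illustrative): a metric
            # constructed with labelnames=('method',) is a parent, so
            # _metric_init() is deferred; a later call such as
            # parent.labels('get') creates a child and caches it in
            # self._metrics under the key ('get',).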
            self._lock = Lock()
            self._metrics: Dict[Sequence[str], T] = {}

        if self._is_observable():
            self._metric_init()

        if not self._labelvalues:
            # Register the multi-wrapper parent metric, or if a label-less metric, the whole shebang.
            if registry:
                registry.register(self)

    def labels(self: T, *labelvalues: Any, **labelkwargs: Any) -> T:
        """Return the child for the given labelset.

        All metrics can have labels, allowing grouping of related time series.
        Taking a counter as an example:

            from prometheus_client import Counter

            c = Counter('my_requests_total', 'HTTP Failures', ['method', 'endpoint'])
            c.labels('get', '/').inc()
            c.labels('post', '/submit').inc()

        Labels can also be provided as keyword arguments:

            from prometheus_client import Counter

            c = Counter('my_requests_total', 'HTTP Failures', ['method', 'endpoint'])
            c.labels(method='get', endpoint='/').inc()
            c.labels(method='post', endpoint='/submit').inc()

        See the best practices on [naming](http://prometheus.io/docs/practices/naming/)
        and [labels](http://prometheus.io/docs/practices/instrumentation/#use-labels).
        """
        if not self._labelnames:
            raise ValueError('No label names were set when constructing %s' % self)

        if self._labelvalues:
            raise ValueError('{} already has labels set ({}); can not chain calls to .labels()'.format(
                self,
                dict(zip(self._labelnames, self._labelvalues))
            ))

        if labelvalues and labelkwargs:
            raise ValueError("Can't pass both *args and **kwargs")

        if labelkwargs:
            if sorted(labelkwargs) != sorted(self._labelnames):
                raise ValueError('Incorrect label names')
            labelvalues = tuple(str(labelkwargs[l]) for l in self._labelnames)
        else:
            if len(labelvalues) != len(self._labelnames):
                raise ValueError('Incorrect label count')
            labelvalues = tuple(str(l) for l in labelvalues)
        with self._lock:
            if labelvalues not in self._metrics:
                self._metrics[labelvalues] = self.__class__(
                    self._name,
                    documentation=self._documentation,
                    labelnames=self._labelnames,
                    unit=self._unit,
                    _labelvalues=labelvalues,
                    **self._kwargs
                )
            return self._metrics[labelvalues]

    def remove(self, *labelvalues: Any) -> None:
        """Remove the given labelset from the metric."""
        if not self._labelnames:
            raise ValueError('No label names were set when constructing %s' % self)

        if len(labelvalues) != len(self._labelnames):
            raise ValueError('Incorrect label count (expected %d, got %s)' % (len(self._labelnames), labelvalues))
        labelvalues = tuple(str(l) for l in labelvalues)
        with self._lock:
            del self._metrics[labelvalues]

    def clear(self) -> None:
        """Remove all labelsets from the metric."""
        with self._lock:
            self._metrics = {}

    def _samples(self) -> Iterable[Sample]:
        if self._is_parent():
            return self._multi_samples()
        else:
            return self._child_samples()

    def _multi_samples(self) -> Iterable[Sample]:
        with self._lock:
            metrics = self._metrics.copy()
        for labels, metric in metrics.items():
            series_labels = list(zip(self._labelnames, labels))
            for suffix, sample_labels, value, timestamp, exemplar in metric._samples():
                yield Sample(suffix, dict(series_labels + list(sample_labels.items())), value, timestamp, exemplar)

    def _child_samples(self) -> Iterable[Sample]:  # pragma: no cover
        raise NotImplementedError('_child_samples() must be implemented by %r' % self)

    def _metric_init(self):  # pragma: no cover
        """
        Initialize the metric object as a child, i.e. when it has labels (if any) set.

        This is factored as a separate function to allow for deferred initialization.
        """
        raise NotImplementedError('_metric_init() must be implemented by %r' % self)


class Counter(MetricWrapperBase):
    """A Counter tracks counts of events or running totals.
Example use cases for Counters: - Number of requests processed - Number of items that were inserted into a queue - Total amount of data that a system has processed Counters can only go up (and be reset when the process restarts). If your use case can go down, you should use a Gauge instead. An example for a Counter: from prometheus_client import Counter c = Counter('my_failures_total', 'Description of counter') c.inc() # Increment by 1 c.inc(1.6) # Increment by given value There are utilities to count exceptions raised: @c.count_exceptions() def f(): pass with c.count_exceptions(): pass # Count only one type of exception with c.count_exceptions(ValueError): pass """ _type = 'counter' def _metric_init(self) -> None: self._value = values.ValueClass(self._type, self._name, self._name + '_total', self._labelnames, self._labelvalues, self._documentation) self._created = time.time() def inc(self, amount: float = 1, exemplar: Optional[Dict[str, str]] = None) -> None: """Increment counter by the given amount.""" self._raise_if_not_observable() if amount < 0: raise ValueError('Counters can only be incremented by non-negative amounts.') self._value.inc(amount) if exemplar: _validate_exemplar(exemplar) self._value.set_exemplar(Exemplar(exemplar, amount, time.time())) def count_exceptions(self, exception: Union[Type[BaseException], Tuple[Type[BaseException], ...]] = Exception) -> ExceptionCounter: """Count exceptions in a block of code or function. Can be used as a function decorator or context manager. Increments the counter when an exception of the given type is raised up out of the code. """ self._raise_if_not_observable() return ExceptionCounter(self, exception) def _child_samples(self) -> Iterable[Sample]: sample = Sample('_total', {}, self._value.get(), None, self._value.get_exemplar()) if _use_created: return ( sample, Sample('_created', {}, self._created, None, None) ) return (sample,) class Gauge(MetricWrapperBase): """Gauge metric, to report instantaneous values. Examples of Gauges include: - Inprogress requests - Number of items in a queue - Free memory - Total memory - Temperature Gauges can go both up and down. from prometheus_client import Gauge g = Gauge('my_inprogress_requests', 'Description of gauge') g.inc() # Increment by 1 g.dec(10) # Decrement by given value g.set(4.2) # Set to a given value There are utilities for common use cases: g.set_to_current_time() # Set to current unixtime # Increment when entered, decrement when exited. 
@g.track_inprogress() def f(): pass with g.track_inprogress(): pass A Gauge can also take its value from a callback: d = Gauge('data_objects', 'Number of objects') my_dict = {} d.set_function(lambda: len(my_dict)) """ _type = 'gauge' _MULTIPROC_MODES = frozenset(('all', 'liveall', 'min', 'livemin', 'max', 'livemax', 'sum', 'livesum', 'mostrecent', 'livemostrecent')) _MOST_RECENT_MODES = frozenset(('mostrecent', 'livemostrecent')) def __init__(self, name: str, documentation: str, labelnames: Iterable[str] = (), namespace: str = '', subsystem: str = '', unit: str = '', registry: Optional[CollectorRegistry] = REGISTRY, _labelvalues: Optional[Sequence[str]] = None, multiprocess_mode: Literal['all', 'liveall', 'min', 'livemin', 'max', 'livemax', 'sum', 'livesum', 'mostrecent', 'livemostrecent'] = 'all', ): self._multiprocess_mode = multiprocess_mode if multiprocess_mode not in self._MULTIPROC_MODES: raise ValueError('Invalid multiprocess mode: ' + multiprocess_mode) super().__init__( name=name, documentation=documentation, labelnames=labelnames, namespace=namespace, subsystem=subsystem, unit=unit, registry=registry, _labelvalues=_labelvalues, ) self._kwargs['multiprocess_mode'] = self._multiprocess_mode self._is_most_recent = self._multiprocess_mode in self._MOST_RECENT_MODES def _metric_init(self) -> None: self._value = values.ValueClass( self._type, self._name, self._name, self._labelnames, self._labelvalues, self._documentation, multiprocess_mode=self._multiprocess_mode ) def inc(self, amount: float = 1) -> None: """Increment gauge by the given amount.""" if self._is_most_recent: raise RuntimeError("inc must not be used with the mostrecent mode") self._raise_if_not_observable() self._value.inc(amount) def dec(self, amount: float = 1) -> None: """Decrement gauge by the given amount.""" if self._is_most_recent: raise RuntimeError("dec must not be used with the mostrecent mode") self._raise_if_not_observable() self._value.inc(-amount) def set(self, value: float) -> None: """Set gauge to the given value.""" self._raise_if_not_observable() if self._is_most_recent: self._value.set(float(value), timestamp=time.time()) else: self._value.set(float(value)) def set_to_current_time(self) -> None: """Set gauge to the current unixtime.""" self.set(time.time()) def track_inprogress(self) -> InprogressTracker: """Track inprogress blocks of code or functions. Can be used as a function decorator or context manager. Increments the gauge when the code is entered, and decrements when it is exited. """ self._raise_if_not_observable() return InprogressTracker(self) def time(self) -> Timer: """Time a block of code or function, and set the duration in seconds. Can be used as a function decorator or context manager. """ return Timer(self, 'set') def set_function(self, f: Callable[[], float]) -> None: """Call the provided function to return the Gauge value. The function must return a float, and may be called from multiple threads. All other methods of the Gauge become NOOPs. """ self._raise_if_not_observable() def samples(_: Gauge) -> Iterable[Sample]: return (Sample('', {}, float(f()), None, None),) self._child_samples = types.MethodType(samples, self) # type: ignore def _child_samples(self) -> Iterable[Sample]: return (Sample('', {}, self._value.get(), None, None),) class Summary(MetricWrapperBase): """A Summary tracks the size and number of events. 
Example use cases for Summaries: - Response latency - Request size Example for a Summary: from prometheus_client import Summary s = Summary('request_size_bytes', 'Request size (bytes)') s.observe(512) # Observe 512 (bytes) Example for a Summary using time: from prometheus_client import Summary REQUEST_TIME = Summary('response_latency_seconds', 'Response latency (seconds)') @REQUEST_TIME.time() def create_response(request): '''A dummy function''' time.sleep(1) Example for using the same Summary object as a context manager: with REQUEST_TIME.time(): pass # Logic to be timed """ _type = 'summary' _reserved_labelnames = ['quantile'] def _metric_init(self) -> None: self._count = values.ValueClass(self._type, self._name, self._name + '_count', self._labelnames, self._labelvalues, self._documentation) self._sum = values.ValueClass(self._type, self._name, self._name + '_sum', self._labelnames, self._labelvalues, self._documentation) self._created = time.time() def observe(self, amount: float) -> None: """Observe the given amount. The amount is usually positive or zero. Negative values are accepted but prevent current versions of Prometheus from properly detecting counter resets in the sum of observations. See https://prometheus.io/docs/practices/histograms/#count-and-sum-of-observations for details. """ self._raise_if_not_observable() self._count.inc(1) self._sum.inc(amount) def time(self) -> Timer: """Time a block of code or function, and observe the duration in seconds. Can be used as a function decorator or context manager. """ return Timer(self, 'observe') def _child_samples(self) -> Iterable[Sample]: samples = [ Sample('_count', {}, self._count.get(), None, None), Sample('_sum', {}, self._sum.get(), None, None), ] if _use_created: samples.append(Sample('_created', {}, self._created, None, None)) return tuple(samples) class Histogram(MetricWrapperBase): """A Histogram tracks the size and number of events in buckets. You can use Histograms for aggregatable calculation of quantiles. Example use cases: - Response latency - Request size Example for a Histogram: from prometheus_client import Histogram h = Histogram('request_size_bytes', 'Request size (bytes)') h.observe(512) # Observe 512 (bytes) Example for a Histogram using time: from prometheus_client import Histogram REQUEST_TIME = Histogram('response_latency_seconds', 'Response latency (seconds)') @REQUEST_TIME.time() def create_response(request): '''A dummy function''' time.sleep(1) Example of using the same Histogram object as a context manager: with REQUEST_TIME.time(): pass # Logic to be timed The default buckets are intended to cover a typical web/rpc request from milliseconds to seconds. They can be overridden by passing `buckets` keyword argument to `Histogram`. 
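
    For example, to override them (the bucket bounds below are illustrative
    only):

        from prometheus_client import Histogram

        h = Histogram('request_latency_seconds', 'Request latency (seconds)',
                      buckets=(.1, .5, 1.0, 5.0))  # a +Inf bucket is appended automatically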
""" _type = 'histogram' _reserved_labelnames = ['le'] DEFAULT_BUCKETS = (.005, .01, .025, .05, .075, .1, .25, .5, .75, 1.0, 2.5, 5.0, 7.5, 10.0, INF) def __init__(self, name: str, documentation: str, labelnames: Iterable[str] = (), namespace: str = '', subsystem: str = '', unit: str = '', registry: Optional[CollectorRegistry] = REGISTRY, _labelvalues: Optional[Sequence[str]] = None, buckets: Sequence[Union[float, str]] = DEFAULT_BUCKETS, ): self._prepare_buckets(buckets) super().__init__( name=name, documentation=documentation, labelnames=labelnames, namespace=namespace, subsystem=subsystem, unit=unit, registry=registry, _labelvalues=_labelvalues, ) self._kwargs['buckets'] = buckets def _prepare_buckets(self, source_buckets: Sequence[Union[float, str]]) -> None: buckets = [float(b) for b in source_buckets] if buckets != sorted(buckets): # This is probably an error on the part of the user, # so raise rather than sorting for them. raise ValueError('Buckets not in sorted order') if buckets and buckets[-1] != INF: buckets.append(INF) if len(buckets) < 2: raise ValueError('Must have at least two buckets') self._upper_bounds = buckets def _metric_init(self) -> None: self._buckets: List[values.ValueClass] = [] self._created = time.time() bucket_labelnames = self._labelnames + ('le',) self._sum = values.ValueClass(self._type, self._name, self._name + '_sum', self._labelnames, self._labelvalues, self._documentation) for b in self._upper_bounds: self._buckets.append(values.ValueClass( self._type, self._name, self._name + '_bucket', bucket_labelnames, self._labelvalues + (floatToGoString(b),), self._documentation) ) def observe(self, amount: float, exemplar: Optional[Dict[str, str]] = None) -> None: """Observe the given amount. The amount is usually positive or zero. Negative values are accepted but prevent current versions of Prometheus from properly detecting counter resets in the sum of observations. See https://prometheus.io/docs/practices/histograms/#count-and-sum-of-observations for details. """ self._raise_if_not_observable() self._sum.inc(amount) for i, bound in enumerate(self._upper_bounds): if amount <= bound: self._buckets[i].inc(1) if exemplar: _validate_exemplar(exemplar) self._buckets[i].set_exemplar(Exemplar(exemplar, amount, time.time())) break def time(self) -> Timer: """Time a block of code or function, and observe the duration in seconds. Can be used as a function decorator or context manager. """ return Timer(self, 'observe') def _child_samples(self) -> Iterable[Sample]: samples = [] acc = 0.0 for i, bound in enumerate(self._upper_bounds): acc += self._buckets[i].get() samples.append(Sample('_bucket', {'le': floatToGoString(bound)}, acc, None, self._buckets[i].get_exemplar())) samples.append(Sample('_count', {}, acc, None, None)) if self._upper_bounds[0] >= 0: samples.append(Sample('_sum', {}, self._sum.get(), None, None)) if _use_created: samples.append(Sample('_created', {}, self._created, None, None)) return tuple(samples) class Info(MetricWrapperBase): """Info metric, key-value pairs. Examples of Info include: - Build information - Version information - Potential target metadata Example usage: from prometheus_client import Info i = Info('my_build', 'Description of info') i.info({'version': '1.2.3', 'buildhost': 'foo@bar'}) Info metrics do not work in multiprocess mode. 
""" _type = 'info' def _metric_init(self): self._labelname_set = set(self._labelnames) self._lock = Lock() self._value = {} def info(self, val: Dict[str, str]) -> None: """Set info metric.""" if self._labelname_set.intersection(val.keys()): raise ValueError('Overlapping labels for Info metric, metric: {} child: {}'.format( self._labelnames, val)) with self._lock: self._value = dict(val) def _child_samples(self) -> Iterable[Sample]: with self._lock: return (Sample('_info', self._value, 1.0, None, None),) class Enum(MetricWrapperBase): """Enum metric, which of a set of states is true. Example usage: from prometheus_client import Enum e = Enum('task_state', 'Description of enum', states=['starting', 'running', 'stopped']) e.state('running') The first listed state will be the default. Enum metrics do not work in multiprocess mode. """ _type = 'stateset' def __init__(self, name: str, documentation: str, labelnames: Sequence[str] = (), namespace: str = '', subsystem: str = '', unit: str = '', registry: Optional[CollectorRegistry] = REGISTRY, _labelvalues: Optional[Sequence[str]] = None, states: Optional[Sequence[str]] = None, ): super().__init__( name=name, documentation=documentation, labelnames=labelnames, namespace=namespace, subsystem=subsystem, unit=unit, registry=registry, _labelvalues=_labelvalues, ) if name in labelnames: raise ValueError(f'Overlapping labels for Enum metric: {name}') if not states: raise ValueError(f'No states provided for Enum metric: {name}') self._kwargs['states'] = self._states = states def _metric_init(self) -> None: self._value = 0 self._lock = Lock() def state(self, state: str) -> None: """Set enum metric state.""" self._raise_if_not_observable() with self._lock: self._value = self._states.index(state) def _child_samples(self) -> Iterable[Sample]: with self._lock: return [ Sample('', {self._name: s}, 1 if i == self._value else 0, None, None) for i, s in enumerate(self._states) ] python-prometheus-client-0.19.0+ds1/prometheus_client/metrics_core.py000066400000000000000000000362741454301344400260510ustar00rootroot00000000000000import re from typing import Dict, List, Optional, Sequence, Tuple, Union from .samples import Exemplar, Sample, Timestamp METRIC_TYPES = ( 'counter', 'gauge', 'summary', 'histogram', 'gaugehistogram', 'unknown', 'info', 'stateset', ) METRIC_NAME_RE = re.compile(r'^[a-zA-Z_:][a-zA-Z0-9_:]*$') METRIC_LABEL_NAME_RE = re.compile(r'^[a-zA-Z_][a-zA-Z0-9_]*$') RESERVED_METRIC_LABEL_NAME_RE = re.compile(r'^__.*$') class Metric: """A single metric family and its samples. This is intended only for internal use by the instrumentation client. Custom collectors should use GaugeMetricFamily, CounterMetricFamily and SummaryMetricFamily instead. """ def __init__(self, name: str, documentation: str, typ: str, unit: str = ''): if unit and not name.endswith("_" + unit): name += "_" + unit if not METRIC_NAME_RE.match(name): raise ValueError('Invalid metric name: ' + name) self.name: str = name self.documentation: str = documentation self.unit: str = unit if typ == 'untyped': typ = 'unknown' if typ not in METRIC_TYPES: raise ValueError('Invalid metric type: ' + typ) self.type: str = typ self.samples: List[Sample] = [] def add_sample(self, name: str, labels: Dict[str, str], value: float, timestamp: Optional[Union[Timestamp, float]] = None, exemplar: Optional[Exemplar] = None) -> None: """Add a sample to the metric. 
Internal-only, do not use.""" self.samples.append(Sample(name, labels, value, timestamp, exemplar)) def __eq__(self, other: object) -> bool: return (isinstance(other, Metric) and self.name == other.name and self.documentation == other.documentation and self.type == other.type and self.unit == other.unit and self.samples == other.samples) def __repr__(self) -> str: return "Metric({}, {}, {}, {}, {})".format( self.name, self.documentation, self.type, self.unit, self.samples, ) def _restricted_metric(self, names): """Build a snapshot of a metric with samples restricted to a given set of names.""" samples = [s for s in self.samples if s[0] in names] if samples: m = Metric(self.name, self.documentation, self.type) m.samples = samples return m return None class UnknownMetricFamily(Metric): """A single unknown metric and its samples. For use by custom collectors. """ def __init__(self, name: str, documentation: str, value: Optional[float] = None, labels: Optional[Sequence[str]] = None, unit: str = '', ): Metric.__init__(self, name, documentation, 'unknown', unit) if labels is not None and value is not None: raise ValueError('Can only specify at most one of value and labels.') if labels is None: labels = [] self._labelnames = tuple(labels) if value is not None: self.add_metric([], value) def add_metric(self, labels: Sequence[str], value: float, timestamp: Optional[Union[Timestamp, float]] = None) -> None: """Add a metric to the metric family. Args: labels: A list of label values value: The value of the metric. """ self.samples.append(Sample(self.name, dict(zip(self._labelnames, labels)), value, timestamp)) # For backward compatibility. UntypedMetricFamily = UnknownMetricFamily class CounterMetricFamily(Metric): """A single counter and its samples. For use by custom collectors. """ def __init__(self, name: str, documentation: str, value: Optional[float] = None, labels: Optional[Sequence[str]] = None, created: Optional[float] = None, unit: str = '', ): # Glue code for pre-OpenMetrics metrics. if name.endswith('_total'): name = name[:-6] Metric.__init__(self, name, documentation, 'counter', unit) if labels is not None and value is not None: raise ValueError('Can only specify at most one of value and labels.') if labels is None: labels = [] self._labelnames = tuple(labels) if value is not None: self.add_metric([], value, created) def add_metric(self, labels: Sequence[str], value: float, created: Optional[float] = None, timestamp: Optional[Union[Timestamp, float]] = None, ) -> None: """Add a metric to the metric family. Args: labels: A list of label values value: The value of the metric created: Optional unix timestamp the child was created at. """ self.samples.append(Sample(self.name + '_total', dict(zip(self._labelnames, labels)), value, timestamp)) if created is not None: self.samples.append(Sample(self.name + '_created', dict(zip(self._labelnames, labels)), created, timestamp)) class GaugeMetricFamily(Metric): """A single gauge and its samples. For use by custom collectors. 
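
    For example (the collector, metric name and value are illustrative):

        class QueueCollector(Collector):
            def collect(self):
                g = GaugeMetricFamily('queue_depth', 'Items in queue', labels=['queue'])
                g.add_metric(['default'], 42)
                yield g

        REGISTRY.register(QueueCollector())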
""" def __init__(self, name: str, documentation: str, value: Optional[float] = None, labels: Optional[Sequence[str]] = None, unit: str = '', ): Metric.__init__(self, name, documentation, 'gauge', unit) if labels is not None and value is not None: raise ValueError('Can only specify at most one of value and labels.') if labels is None: labels = [] self._labelnames = tuple(labels) if value is not None: self.add_metric([], value) def add_metric(self, labels: Sequence[str], value: float, timestamp: Optional[Union[Timestamp, float]] = None) -> None: """Add a metric to the metric family. Args: labels: A list of label values value: A float """ self.samples.append(Sample(self.name, dict(zip(self._labelnames, labels)), value, timestamp)) class SummaryMetricFamily(Metric): """A single summary and its samples. For use by custom collectors. """ def __init__(self, name: str, documentation: str, count_value: Optional[int] = None, sum_value: Optional[float] = None, labels: Optional[Sequence[str]] = None, unit: str = '', ): Metric.__init__(self, name, documentation, 'summary', unit) if (sum_value is None) != (count_value is None): raise ValueError('count_value and sum_value must be provided together.') if labels is not None and count_value is not None: raise ValueError('Can only specify at most one of value and labels.') if labels is None: labels = [] self._labelnames = tuple(labels) # The and clause is necessary only for typing, the above ValueError will raise if only one is set. if count_value is not None and sum_value is not None: self.add_metric([], count_value, sum_value) def add_metric(self, labels: Sequence[str], count_value: int, sum_value: float, timestamp: Optional[Union[float, Timestamp]] = None ) -> None: """Add a metric to the metric family. Args: labels: A list of label values count_value: The count value of the metric. sum_value: The sum value of the metric. """ self.samples.append(Sample(self.name + '_count', dict(zip(self._labelnames, labels)), count_value, timestamp)) self.samples.append(Sample(self.name + '_sum', dict(zip(self._labelnames, labels)), sum_value, timestamp)) class HistogramMetricFamily(Metric): """A single histogram and its samples. For use by custom collectors. """ def __init__(self, name: str, documentation: str, buckets: Optional[Sequence[Union[Tuple[str, float], Tuple[str, float, Exemplar]]]] = None, sum_value: Optional[float] = None, labels: Optional[Sequence[str]] = None, unit: str = '', ): Metric.__init__(self, name, documentation, 'histogram', unit) if sum_value is not None and buckets is None: raise ValueError('sum value cannot be provided without buckets.') if labels is not None and buckets is not None: raise ValueError('Can only specify at most one of buckets and labels.') if labels is None: labels = [] self._labelnames = tuple(labels) if buckets is not None: self.add_metric([], buckets, sum_value) def add_metric(self, labels: Sequence[str], buckets: Sequence[Union[Tuple[str, float], Tuple[str, float, Exemplar]]], sum_value: Optional[float], timestamp: Optional[Union[Timestamp, float]] = None) -> None: """Add a metric to the metric family. Args: labels: A list of label values buckets: A list of lists. Each inner list can be a pair of bucket name and value, or a triple of bucket name, value, and exemplar. The buckets must be sorted, and +Inf present. sum_value: The sum value of the metric. 
""" for b in buckets: bucket, value = b[:2] exemplar = None if len(b) == 3: exemplar = b[2] # type: ignore self.samples.append(Sample( self.name + '_bucket', dict(list(zip(self._labelnames, labels)) + [('le', bucket)]), value, timestamp, exemplar, )) # Don't include sum and thus count if there's negative buckets. if float(buckets[0][0]) >= 0 and sum_value is not None: # +Inf is last and provides the count value. self.samples.append( Sample(self.name + '_count', dict(zip(self._labelnames, labels)), buckets[-1][1], timestamp)) self.samples.append( Sample(self.name + '_sum', dict(zip(self._labelnames, labels)), sum_value, timestamp)) class GaugeHistogramMetricFamily(Metric): """A single gauge histogram and its samples. For use by custom collectors. """ def __init__(self, name: str, documentation: str, buckets: Optional[Sequence[Tuple[str, float]]] = None, gsum_value: Optional[float] = None, labels: Optional[Sequence[str]] = None, unit: str = '', ): Metric.__init__(self, name, documentation, 'gaugehistogram', unit) if labels is not None and buckets is not None: raise ValueError('Can only specify at most one of buckets and labels.') if labels is None: labels = [] self._labelnames = tuple(labels) if buckets is not None: self.add_metric([], buckets, gsum_value) def add_metric(self, labels: Sequence[str], buckets: Sequence[Tuple[str, float]], gsum_value: Optional[float], timestamp: Optional[Union[float, Timestamp]] = None, ) -> None: """Add a metric to the metric family. Args: labels: A list of label values buckets: A list of pairs of bucket names and values. The buckets must be sorted, and +Inf present. gsum_value: The sum value of the metric. """ for bucket, value in buckets: self.samples.append(Sample( self.name + '_bucket', dict(list(zip(self._labelnames, labels)) + [('le', bucket)]), value, timestamp)) # +Inf is last and provides the count value. self.samples.extend([ Sample(self.name + '_gcount', dict(zip(self._labelnames, labels)), buckets[-1][1], timestamp), # TODO: Handle None gsum_value correctly. Currently a None will fail exposition but is allowed here. Sample(self.name + '_gsum', dict(zip(self._labelnames, labels)), gsum_value, timestamp), # type: ignore ]) class InfoMetricFamily(Metric): """A single info and its samples. For use by custom collectors. """ def __init__(self, name: str, documentation: str, value: Optional[Dict[str, str]] = None, labels: Optional[Sequence[str]] = None, ): Metric.__init__(self, name, documentation, 'info') if labels is not None and value is not None: raise ValueError('Can only specify at most one of value and labels.') if labels is None: labels = [] self._labelnames = tuple(labels) if value is not None: self.add_metric([], value) def add_metric(self, labels: Sequence[str], value: Dict[str, str], timestamp: Optional[Union[Timestamp, float]] = None, ) -> None: """Add a metric to the metric family. Args: labels: A list of label values value: A dict of labels """ self.samples.append(Sample( self.name + '_info', dict(dict(zip(self._labelnames, labels)), **value), 1, timestamp, )) class StateSetMetricFamily(Metric): """A single stateset and its samples. For use by custom collectors. 
""" def __init__(self, name: str, documentation: str, value: Optional[Dict[str, bool]] = None, labels: Optional[Sequence[str]] = None, ): Metric.__init__(self, name, documentation, 'stateset') if labels is not None and value is not None: raise ValueError('Can only specify at most one of value and labels.') if labels is None: labels = [] self._labelnames = tuple(labels) if value is not None: self.add_metric([], value) def add_metric(self, labels: Sequence[str], value: Dict[str, bool], timestamp: Optional[Union[Timestamp, float]] = None, ) -> None: """Add a metric to the metric family. Args: labels: A list of label values value: A dict of string state names to booleans """ labels = tuple(labels) for state, enabled in sorted(value.items()): v = (1 if enabled else 0) self.samples.append(Sample( self.name, dict(zip(self._labelnames + (self.name,), labels + (state,))), v, timestamp, )) python-prometheus-client-0.19.0+ds1/prometheus_client/mmap_dict.py000066400000000000000000000124211454301344400253140ustar00rootroot00000000000000import json import mmap import os import struct from typing import List _INITIAL_MMAP_SIZE = 1 << 16 _pack_integer_func = struct.Struct(b'i').pack _pack_two_doubles_func = struct.Struct(b'dd').pack _unpack_integer = struct.Struct(b'i').unpack_from _unpack_two_doubles = struct.Struct(b'dd').unpack_from # struct.pack_into has atomicity issues because it will temporarily write 0 into # the mmap, resulting in false reads to 0 when experiencing a lot of writes. # Using direct assignment solves this issue. def _pack_two_doubles(data, pos, value, timestamp): data[pos:pos + 16] = _pack_two_doubles_func(value, timestamp) def _pack_integer(data, pos, value): data[pos:pos + 4] = _pack_integer_func(value) def _read_all_values(data, used=0): """Yield (key, value, timestamp, pos). No locking is performed.""" if used <= 0: # If not valid `used` value is passed in, read it from the file. used = _unpack_integer(data, 0)[0] pos = 8 while pos < used: encoded_len = _unpack_integer(data, pos)[0] # check we are not reading beyond bounds if encoded_len + pos > used: raise RuntimeError('Read beyond file size detected, file is corrupted.') pos += 4 encoded_key = data[pos:pos + encoded_len] padded_len = encoded_len + (8 - (encoded_len + 4) % 8) pos += padded_len value, timestamp = _unpack_two_doubles(data, pos) yield encoded_key.decode('utf-8'), value, timestamp, pos pos += 16 class MmapedDict: """A dict of doubles, backed by an mmapped file. The file starts with a 4 byte int, indicating how much of it is used. Then 4 bytes of padding. There's then a number of entries, consisting of a 4 byte int which is the size of the next field, a utf-8 encoded string key, padding to a 8 byte alignment, and then a 8 byte float which is the value and a 8 byte float which is a UNIX timestamp in seconds. Not thread safe. 
""" def __init__(self, filename, read_mode=False): self._f = open(filename, 'rb' if read_mode else 'a+b') self._fname = filename capacity = os.fstat(self._f.fileno()).st_size if capacity == 0: self._f.truncate(_INITIAL_MMAP_SIZE) capacity = _INITIAL_MMAP_SIZE self._capacity = capacity self._m = mmap.mmap(self._f.fileno(), self._capacity, access=mmap.ACCESS_READ if read_mode else mmap.ACCESS_WRITE) self._positions = {} self._used = _unpack_integer(self._m, 0)[0] if self._used == 0: self._used = 8 _pack_integer(self._m, 0, self._used) else: if not read_mode: for key, _, _, pos in self._read_all_values(): self._positions[key] = pos @staticmethod def read_all_values_from_file(filename): with open(filename, 'rb') as infp: # Read the first block of data, including the first 4 bytes which tell us # how much of the file (which is preallocated to _INITIAL_MMAP_SIZE bytes) is occupied. data = infp.read(mmap.PAGESIZE) used = _unpack_integer(data, 0)[0] if used > len(data): # Then read in the rest, if needed. data += infp.read(used - len(data)) return _read_all_values(data, used) def _init_value(self, key): """Initialize a value. Lock must be held by caller.""" encoded = key.encode('utf-8') # Pad to be 8-byte aligned. padded = encoded + (b' ' * (8 - (len(encoded) + 4) % 8)) value = struct.pack(f'i{len(padded)}sdd'.encode(), len(encoded), padded, 0.0, 0.0) while self._used + len(value) > self._capacity: self._capacity *= 2 self._f.truncate(self._capacity) self._m = mmap.mmap(self._f.fileno(), self._capacity) self._m[self._used:self._used + len(value)] = value # Update how much space we've used. self._used += len(value) _pack_integer(self._m, 0, self._used) self._positions[key] = self._used - 16 def _read_all_values(self): """Yield (key, value, pos). No locking is performed.""" return _read_all_values(data=self._m, used=self._used) def read_all_values(self): """Yield (key, value, timestamp). 
No locking is performed.""" for k, v, ts, _ in self._read_all_values(): yield k, v, ts def read_value(self, key): if key not in self._positions: self._init_value(key) pos = self._positions[key] return _unpack_two_doubles(self._m, pos) def write_value(self, key, value, timestamp): if key not in self._positions: self._init_value(key) pos = self._positions[key] _pack_two_doubles(self._m, pos, value, timestamp) def close(self): if self._f: self._m.close() self._m = None self._f.close() self._f = None def mmap_key(metric_name: str, name: str, labelnames: List[str], labelvalues: List[str], help_text: str) -> str: """Format a key for use in the mmap file.""" # ensure labels are in consistent order for identity labels = dict(zip(labelnames, labelvalues)) return json.dumps([metric_name, name, labels, help_text], sort_keys=True) python-prometheus-client-0.19.0+ds1/prometheus_client/multiprocess.py000066400000000000000000000165631454301344400261230ustar00rootroot00000000000000from collections import defaultdict import glob import json import os import warnings from .metrics import Gauge from .metrics_core import Metric from .mmap_dict import MmapedDict from .samples import Sample from .utils import floatToGoString try: # Python3 FileNotFoundError except NameError: # Python >= 2.5 FileNotFoundError = IOError class MultiProcessCollector: """Collector for files for multi-process mode.""" def __init__(self, registry, path=None): if path is None: # This deprecation warning can go away in a few releases when removing the compatibility if 'prometheus_multiproc_dir' in os.environ and 'PROMETHEUS_MULTIPROC_DIR' not in os.environ: os.environ['PROMETHEUS_MULTIPROC_DIR'] = os.environ['prometheus_multiproc_dir'] warnings.warn("prometheus_multiproc_dir variable has been deprecated in favor of the upper case naming PROMETHEUS_MULTIPROC_DIR", DeprecationWarning) path = os.environ.get('PROMETHEUS_MULTIPROC_DIR') if not path or not os.path.isdir(path): raise ValueError('env PROMETHEUS_MULTIPROC_DIR is not set or not a directory') self._path = path if registry: registry.register(self) @staticmethod def merge(files, accumulate=True): """Merge metrics from given mmap files. By default, histograms are accumulated, as per prometheus wire format. But if writing the merged data back to mmap files, use accumulate=False to avoid compound accumulation. 
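
        For example (the directory path is illustrative):

            files = glob.glob(os.path.join('/path/to/multiproc/dir', '*.db'))
            merged = MultiProcessCollector.merge(files, accumulate=False)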
""" metrics = MultiProcessCollector._read_metrics(files) return MultiProcessCollector._accumulate_metrics(metrics, accumulate) @staticmethod def _read_metrics(files): metrics = {} key_cache = {} def _parse_key(key): val = key_cache.get(key) if not val: metric_name, name, labels, help_text = json.loads(key) labels_key = tuple(sorted(labels.items())) val = key_cache[key] = (metric_name, name, labels, labels_key, help_text) return val for f in files: parts = os.path.basename(f).split('_') typ = parts[0] try: file_values = MmapedDict.read_all_values_from_file(f) except FileNotFoundError: if typ == 'gauge' and parts[1].startswith('live'): # Files for 'live*' gauges can be deleted between the glob of collect # and now (via a mark_process_dead call) so don't fail if # the file is missing continue raise for key, value, timestamp, _ in file_values: metric_name, name, labels, labels_key, help_text = _parse_key(key) metric = metrics.get(metric_name) if metric is None: metric = Metric(metric_name, help_text, typ) metrics[metric_name] = metric if typ == 'gauge': pid = parts[2][:-3] metric._multiprocess_mode = parts[1] metric.add_sample(name, labels_key + (('pid', pid),), value, timestamp) else: # The duplicates and labels are fixed in the next for. metric.add_sample(name, labels_key, value) return metrics @staticmethod def _accumulate_metrics(metrics, accumulate): for metric in metrics.values(): samples = defaultdict(float) sample_timestamps = defaultdict(float) buckets = defaultdict(lambda: defaultdict(float)) samples_setdefault = samples.setdefault for s in metric.samples: name, labels, value, timestamp, exemplar = s if metric.type == 'gauge': without_pid_key = (name, tuple(l for l in labels if l[0] != 'pid')) if metric._multiprocess_mode in ('min', 'livemin'): current = samples_setdefault(without_pid_key, value) if value < current: samples[without_pid_key] = value elif metric._multiprocess_mode in ('max', 'livemax'): current = samples_setdefault(without_pid_key, value) if value > current: samples[without_pid_key] = value elif metric._multiprocess_mode in ('sum', 'livesum'): samples[without_pid_key] += value elif metric._multiprocess_mode in ('mostrecent', 'livemostrecent'): current_timestamp = sample_timestamps[without_pid_key] timestamp = float(timestamp or 0) if current_timestamp < timestamp: samples[without_pid_key] = value sample_timestamps[without_pid_key] = timestamp else: # all/liveall samples[(name, labels)] = value elif metric.type == 'histogram': # A for loop with early exit is faster than a genexpr # or a listcomp that ends up building unnecessary things for l in labels: if l[0] == 'le': bucket_value = float(l[1]) # _bucket without_le = tuple(l for l in labels if l[0] != 'le') buckets[without_le][bucket_value] += value break else: # did not find the `le` key # _sum/_count samples[(name, labels)] += value else: # Counter and Summary. samples[(name, labels)] += value # Accumulate bucket values. if metric.type == 'histogram': for labels, values in buckets.items(): acc = 0.0 for bucket, value in sorted(values.items()): sample_key = ( metric.name + '_bucket', labels + (('le', floatToGoString(bucket)),), ) if accumulate: acc += value samples[sample_key] = acc else: samples[sample_key] = value if accumulate: samples[(metric.name + '_count', labels)] = acc # Convert to correct sample format. 
metric.samples = [Sample(name_, dict(labels), value) for (name_, labels), value in samples.items()] return metrics.values() def collect(self): files = glob.glob(os.path.join(self._path, '*.db')) return self.merge(files, accumulate=True) _LIVE_GAUGE_MULTIPROCESS_MODES = {m for m in Gauge._MULTIPROC_MODES if m.startswith('live')} def mark_process_dead(pid, path=None): """Do bookkeeping for when one process dies in a multi-process setup.""" if path is None: path = os.environ.get('PROMETHEUS_MULTIPROC_DIR', os.environ.get('prometheus_multiproc_dir')) for mode in _LIVE_GAUGE_MULTIPROCESS_MODES: for f in glob.glob(os.path.join(path, f'gauge_{mode}_{pid}.db')): os.remove(f) python-prometheus-client-0.19.0+ds1/prometheus_client/openmetrics/000077500000000000000000000000001454301344400253355ustar00rootroot00000000000000python-prometheus-client-0.19.0+ds1/prometheus_client/openmetrics/__init__.py000066400000000000000000000000001454301344400274340ustar00rootroot00000000000000python-prometheus-client-0.19.0+ds1/prometheus_client/openmetrics/exposition.py000066400000000000000000000056611454301344400301200ustar00rootroot00000000000000#!/usr/bin/env python from ..utils import floatToGoString CONTENT_TYPE_LATEST = 'application/openmetrics-text; version=0.0.1; charset=utf-8' """Content type of the latest OpenMetrics text format""" def _is_valid_exemplar_metric(metric, sample): if metric.type == 'counter' and sample.name.endswith('_total'): return True if metric.type in ('histogram', 'gaugehistogram') and sample.name.endswith('_bucket'): return True return False def generate_latest(registry): '''Returns the metrics from the registry in latest text format as a string.''' output = [] for metric in registry.collect(): try: mname = metric.name output.append('# HELP {} {}\n'.format( mname, metric.documentation.replace('\\', r'\\').replace('\n', r'\n').replace('"', r'\"'))) output.append(f'# TYPE {mname} {metric.type}\n') if metric.unit: output.append(f'# UNIT {mname} {metric.unit}\n') for s in metric.samples: if s.labels: labelstr = '{{{0}}}'.format(','.join( ['{}="{}"'.format( k, v.replace('\\', r'\\').replace('\n', r'\n').replace('"', r'\"')) for k, v in sorted(s.labels.items())])) else: labelstr = '' if s.exemplar: if not _is_valid_exemplar_metric(metric, s): raise ValueError(f"Metric {metric.name} has exemplars, but is not a histogram bucket or counter") labels = '{{{0}}}'.format(','.join( ['{}="{}"'.format( k, v.replace('\\', r'\\').replace('\n', r'\n').replace('"', r'\"')) for k, v in sorted(s.exemplar.labels.items())])) if s.exemplar.timestamp is not None: exemplarstr = ' # {} {} {}'.format( labels, floatToGoString(s.exemplar.value), s.exemplar.timestamp, ) else: exemplarstr = ' # {} {}'.format( labels, floatToGoString(s.exemplar.value), ) else: exemplarstr = '' timestamp = '' if s.timestamp is not None: timestamp = f' {s.timestamp}' output.append('{}{} {}{}{}\n'.format( s.name, labelstr, floatToGoString(s.value), timestamp, exemplarstr, )) except Exception as exception: exception.args = (exception.args or ('',)) + (metric,) raise output.append('# EOF\n') return ''.join(output).encode('utf-8') python-prometheus-client-0.19.0+ds1/prometheus_client/openmetrics/parser.py000066400000000000000000000531551454301344400272140ustar00rootroot00000000000000#!/usr/bin/env python import io as StringIO import math import re from ..metrics_core import Metric, METRIC_LABEL_NAME_RE from ..samples import Exemplar, Sample, Timestamp from ..utils import floatToGoString def text_string_to_metric_families(text): """Parse 
Openmetrics text format from a unicode string. See text_fd_to_metric_families. """ yield from text_fd_to_metric_families(StringIO.StringIO(text)) _CANONICAL_NUMBERS = {float("inf")} def _isUncanonicalNumber(s): f = float(s) if f not in _CANONICAL_NUMBERS: return False # Only the canonical numbers are required to be canonical. return s != floatToGoString(f) ESCAPE_SEQUENCES = { '\\\\': '\\', '\\n': '\n', '\\"': '"', } def _replace_escape_sequence(match): return ESCAPE_SEQUENCES[match.group(0)] ESCAPING_RE = re.compile(r'\\[\\n"]') def _replace_escaping(s): return ESCAPING_RE.sub(_replace_escape_sequence, s) def _unescape_help(text): result = [] slash = False for char in text: if slash: if char == '\\': result.append('\\') elif char == '"': result.append('"') elif char == 'n': result.append('\n') else: result.append('\\' + char) slash = False else: if char == '\\': slash = True else: result.append(char) if slash: result.append('\\') return ''.join(result) def _parse_value(value): value = ''.join(value) if value != value.strip() or '_' in value: raise ValueError(f"Invalid value: {value!r}") try: return int(value) except ValueError: return float(value) def _parse_timestamp(timestamp): timestamp = ''.join(timestamp) if not timestamp: return None if timestamp != timestamp.strip() or '_' in timestamp: raise ValueError(f"Invalid timestamp: {timestamp!r}") try: # Simple int. return Timestamp(int(timestamp), 0) except ValueError: try: # aaaa.bbbb. Nanosecond resolution supported. parts = timestamp.split('.', 1) return Timestamp(int(parts[0]), int(parts[1][:9].ljust(9, "0"))) except ValueError: # Float. ts = float(timestamp) if math.isnan(ts) or math.isinf(ts): raise ValueError(f"Invalid timestamp: {timestamp!r}") return ts def _is_character_escaped(s, charpos): num_bslashes = 0 while (charpos > num_bslashes and s[charpos - 1 - num_bslashes] == '\\'): num_bslashes += 1 return num_bslashes % 2 == 1 def _parse_labels_with_state_machine(text): # The { has already been parsed. 
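    # For example (illustrative input): given the remainder of a sample line
    #     a="1",b="2"} 3.0
    # this returns ({'a': '1', 'b': '2'}, 12), i.e. the label dict plus the
    # number of characters consumed up to and including the closing brace.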
state = 'startoflabelname' labelname = [] labelvalue = [] labels = {} labels_len = 0 for char in text: if state == 'startoflabelname': if char == '}': state = 'endoflabels' else: state = 'labelname' labelname.append(char) elif state == 'labelname': if char == '=': state = 'labelvaluequote' else: labelname.append(char) elif state == 'labelvaluequote': if char == '"': state = 'labelvalue' else: raise ValueError("Invalid line: " + text) elif state == 'labelvalue': if char == '\\': state = 'labelvalueslash' elif char == '"': ln = ''.join(labelname) if not METRIC_LABEL_NAME_RE.match(ln): raise ValueError("Invalid line, bad label name: " + text) if ln in labels: raise ValueError("Invalid line, duplicate label name: " + text) labels[ln] = ''.join(labelvalue) labelname = [] labelvalue = [] state = 'endoflabelvalue' else: labelvalue.append(char) elif state == 'endoflabelvalue': if char == ',': state = 'labelname' elif char == '}': state = 'endoflabels' else: raise ValueError("Invalid line: " + text) elif state == 'labelvalueslash': state = 'labelvalue' if char == '\\': labelvalue.append('\\') elif char == 'n': labelvalue.append('\n') elif char == '"': labelvalue.append('"') else: labelvalue.append('\\' + char) elif state == 'endoflabels': if char == ' ': break else: raise ValueError("Invalid line: " + text) labels_len += 1 return labels, labels_len def _parse_labels(text): labels = {} # Raise error if we don't have valid labels if text and "=" not in text: raise ValueError # Copy original labels sub_labels = text try: # Process one label at a time while sub_labels: # The label name is before the equal value_start = sub_labels.index("=") label_name = sub_labels[:value_start] sub_labels = sub_labels[value_start + 1:] # Check for missing quotes if not sub_labels or sub_labels[0] != '"': raise ValueError # The first quote is guaranteed to be after the equal value_substr = sub_labels[1:] # Check for extra commas if not label_name or label_name[0] == ',': raise ValueError if not value_substr or value_substr[-1] == ',': raise ValueError # Find the last unescaped quote i = 0 while i < len(value_substr): i = value_substr.index('"', i) if not _is_character_escaped(value_substr[:i], i): break i += 1 # The label value is between the first and last quote quote_end = i + 1 label_value = sub_labels[1:quote_end] # Replace escaping if needed if "\\" in label_value: label_value = _replace_escaping(label_value) if not METRIC_LABEL_NAME_RE.match(label_name): raise ValueError("invalid line, bad label name: " + text) if label_name in labels: raise ValueError("invalid line, duplicate label name: " + text) labels[label_name] = label_value # Remove the processed label from the sub-slice for next iteration sub_labels = sub_labels[quote_end + 1:] if sub_labels.startswith(","): next_comma = 1 else: next_comma = 0 sub_labels = sub_labels[next_comma:] # Check for missing commas if sub_labels and next_comma == 0: raise ValueError return labels except ValueError: raise ValueError("Invalid labels: " + text) def _parse_sample(text): separator = " # " # Detect the labels in the text label_start = text.find("{") if label_start == -1 or separator in text[:label_start]: # We don't have labels, but there could be an exemplar. 
name_end = text.index(" ") name = text[:name_end] # Parse the remaining text after the name remaining_text = text[name_end + 1:] value, timestamp, exemplar = _parse_remaining_text(remaining_text) return Sample(name, {}, value, timestamp, exemplar) # The name is before the labels name = text[:label_start] if separator not in text: # Line doesn't contain an exemplar # We can use `rindex` to find `label_end` label_end = text.rindex("}") label = text[label_start + 1:label_end] labels = _parse_labels(label) else: # Line potentially contains an exemplar # Fallback to parsing labels with a state machine labels, labels_len = _parse_labels_with_state_machine(text[label_start + 1:]) label_end = labels_len + len(name) # Parsing labels succeeded, continue parsing the remaining text remaining_text = text[label_end + 2:] value, timestamp, exemplar = _parse_remaining_text(remaining_text) return Sample(name, labels, value, timestamp, exemplar) def _parse_remaining_text(text): split_text = text.split(" ", 1) val = _parse_value(split_text[0]) if len(split_text) == 1: # We don't have timestamp or exemplar return val, None, None timestamp = [] exemplar_value = [] exemplar_timestamp = [] exemplar_labels = None state = 'timestamp' text = split_text[1] it = iter(text) for char in it: if state == 'timestamp': if char == '#' and not timestamp: state = 'exemplarspace' elif char == ' ': state = 'exemplarhash' else: timestamp.append(char) elif state == 'exemplarhash': if char == '#': state = 'exemplarspace' else: raise ValueError("Invalid line: " + text) elif state == 'exemplarspace': if char == ' ': state = 'exemplarstartoflabels' else: raise ValueError("Invalid line: " + text) elif state == 'exemplarstartoflabels': if char == '{': label_start, label_end = text.index("{"), text.rindex("}") exemplar_labels = _parse_labels(text[label_start + 1:label_end]) state = 'exemplarparsedlabels' else: raise ValueError("Invalid line: " + text) elif state == 'exemplarparsedlabels': if char == '}': state = 'exemplarvaluespace' elif state == 'exemplarvaluespace': if char == ' ': state = 'exemplarvalue' else: raise ValueError("Invalid line: " + text) elif state == 'exemplarvalue': if char == ' ' and not exemplar_value: raise ValueError("Invalid line: " + text) elif char == ' ': state = 'exemplartimestamp' else: exemplar_value.append(char) elif state == 'exemplartimestamp': exemplar_timestamp.append(char) # Trailing space after value. if state == 'timestamp' and not timestamp: raise ValueError("Invalid line: " + text) # Trailing space after value. if state == 'exemplartimestamp' and not exemplar_timestamp: raise ValueError("Invalid line: " + text) # Incomplete exemplar. if state in ['exemplarhash', 'exemplarspace', 'exemplarstartoflabels', 'exemplarparsedlabels']: raise ValueError("Invalid line: " + text) ts = _parse_timestamp(timestamp) exemplar = None if exemplar_labels is not None: exemplar_length = sum(len(k) + len(v) for k, v in exemplar_labels.items()) if exemplar_length > 128: raise ValueError("Exemplar labels are too long: " + text) exemplar = Exemplar( exemplar_labels, _parse_value(exemplar_value), _parse_timestamp(exemplar_timestamp), ) return val, ts, exemplar def _group_for_sample(sample, name, typ): if typ == 'info': # We can't distinguish between groups for info metrics. 
        return {}
    if typ == 'summary' and sample.name == name:
        d = sample.labels.copy()
        del d['quantile']
        return d
    if typ == 'stateset':
        d = sample.labels.copy()
        del d[name]
        return d
    if typ in ['histogram', 'gaugehistogram'] and sample.name == name + '_bucket':
        d = sample.labels.copy()
        del d['le']
        return d
    return sample.labels


def _check_histogram(samples, name):
    group = None
    timestamp = None

    def do_checks():
        if bucket != float('+Inf'):
            raise ValueError("+Inf bucket missing: " + name)
        if count is not None and value != count:
            raise ValueError("Count does not match +Inf value: " + name)
        if has_sum and count is None:
            raise ValueError("_count must be present if _sum is present: " + name)
        if has_gsum and count is None:
            raise ValueError("_gcount must be present if _gsum is present: " + name)
        if not (has_sum or has_gsum) and count is not None:
            raise ValueError("_sum/_gsum must be present if _count is present: " + name)
        if has_negative_buckets and has_sum:
            raise ValueError("Cannot have _sum with negative buckets: " + name)
        if not has_negative_buckets and has_negative_gsum:
            raise ValueError("Cannot have negative _gsum with non-negative buckets: " + name)

    for s in samples:
        suffix = s.name[len(name):]
        g = _group_for_sample(s, name, 'histogram')
        if g != group or s.timestamp != timestamp:
            if group is not None:
                do_checks()
            count = None
            bucket = None
            has_negative_buckets = False
            has_sum = False
            has_gsum = False
            has_negative_gsum = False
            value = 0
        group = g
        timestamp = s.timestamp

        if suffix == '_bucket':
            b = float(s.labels['le'])
            if b < 0:
                has_negative_buckets = True
            if bucket is not None and b <= bucket:
                raise ValueError("Buckets out of order: " + name)
            if s.value < value:
                raise ValueError("Bucket values out of order: " + name)
            bucket = b
            value = s.value
        elif suffix in ['_count', '_gcount']:
            count = s.value
        elif suffix in ['_sum']:
            has_sum = True
        elif suffix in ['_gsum']:
            has_gsum = True
            if s.value < 0:
                has_negative_gsum = True

    if group is not None:
        do_checks()


def text_fd_to_metric_families(fd):
    """Parse Prometheus text format from a file descriptor.

    This is a laxer parser than the main Go parser,
    so successful parsing does not imply that the parsed
    text meets the specification.

    Yields Metric's.
""" name = None allowed_names = [] eof = False seen_names = set() type_suffixes = { 'counter': ['_total', '_created'], 'summary': ['', '_count', '_sum', '_created'], 'histogram': ['_count', '_sum', '_bucket', '_created'], 'gaugehistogram': ['_gcount', '_gsum', '_bucket'], 'info': ['_info'], } def build_metric(name, documentation, typ, unit, samples): if typ is None: typ = 'unknown' for suffix in set(type_suffixes.get(typ, []) + [""]): if name + suffix in seen_names: raise ValueError("Clashing name: " + name + suffix) seen_names.add(name + suffix) if documentation is None: documentation = '' if unit is None: unit = '' if unit and not name.endswith("_" + unit): raise ValueError("Unit does not match metric name: " + name) if unit and typ in ['info', 'stateset']: raise ValueError("Units not allowed for this metric type: " + name) if typ in ['histogram', 'gaugehistogram']: _check_histogram(samples, name) metric = Metric(name, documentation, typ, unit) # TODO: check labelvalues are valid utf8 metric.samples = samples return metric for line in fd: if line[-1] == '\n': line = line[:-1] if eof: raise ValueError("Received line after # EOF: " + line) if not line: raise ValueError("Received blank line") if line == '# EOF': eof = True elif line.startswith('#'): parts = line.split(' ', 3) if len(parts) < 4: raise ValueError("Invalid line: " + line) if parts[2] == name and samples: raise ValueError("Received metadata after samples: " + line) if parts[2] != name: if name is not None: yield build_metric(name, documentation, typ, unit, samples) # New metric name = parts[2] unit = None typ = None documentation = None group = None seen_groups = set() group_timestamp = None group_timestamp_samples = set() samples = [] allowed_names = [parts[2]] if parts[1] == 'HELP': if documentation is not None: raise ValueError("More than one HELP for metric: " + line) documentation = _unescape_help(parts[3]) elif parts[1] == 'TYPE': if typ is not None: raise ValueError("More than one TYPE for metric: " + line) typ = parts[3] if typ == 'untyped': raise ValueError("Invalid TYPE for metric: " + line) allowed_names = [name + n for n in type_suffixes.get(typ, [''])] elif parts[1] == 'UNIT': if unit is not None: raise ValueError("More than one UNIT for metric: " + line) unit = parts[3] else: raise ValueError("Invalid line: " + line) else: sample = _parse_sample(line) if sample.name not in allowed_names: if name is not None: yield build_metric(name, documentation, typ, unit, samples) # Start an unknown metric. 
                name = sample.name
                documentation = None
                unit = None
                typ = 'unknown'
                samples = []
                group = None
                group_timestamp = None
                group_timestamp_samples = set()
                seen_groups = set()
                allowed_names = [sample.name]

            if typ == 'stateset' and name not in sample.labels:
                raise ValueError("Stateset missing label: " + line)
            if (name + '_bucket' == sample.name
                    and (sample.labels.get('le', "NaN") == "NaN"
                         or _isUncanonicalNumber(sample.labels['le']))):
                raise ValueError("Invalid le label: " + line)
            if (name + '_bucket' == sample.name
                    and (not isinstance(sample.value, int) and not sample.value.is_integer())):
                raise ValueError("Bucket value must be an integer: " + line)
            if ((name + '_count' == sample.name or name + '_gcount' == sample.name)
                    and (not isinstance(sample.value, int) and not sample.value.is_integer())):
                raise ValueError("Count value must be an integer: " + line)
            if (typ == 'summary' and name == sample.name
                    and (not (0 <= float(sample.labels.get('quantile', -1)) <= 1)
                         or _isUncanonicalNumber(sample.labels['quantile']))):
                raise ValueError("Invalid quantile label: " + line)

            g = tuple(sorted(_group_for_sample(sample, name, typ).items()))
            if group is not None and g != group and g in seen_groups:
                raise ValueError("Invalid metric grouping: " + line)
            if group is not None and g == group:
                if (sample.timestamp is None) != (group_timestamp is None):
                    raise ValueError("Mix of timestamp presence within a group: " + line)
                if group_timestamp is not None and group_timestamp > sample.timestamp and typ != 'info':
                    raise ValueError("Timestamps went backwards within a group: " + line)
            else:
                group_timestamp_samples = set()

            series_id = (sample.name, tuple(sorted(sample.labels.items())))
            if sample.timestamp != group_timestamp or series_id not in group_timestamp_samples:
                # Not a duplicate due to timestamp truncation.
                samples.append(sample)
            group_timestamp_samples.add(series_id)

            group = g
            group_timestamp = sample.timestamp
            seen_groups.add(g)

            if typ == 'stateset' and sample.value not in [0, 1]:
                raise ValueError("Stateset samples can only have values zero and one: " + line)
            if typ == 'info' and sample.value != 1:
                raise ValueError("Info samples can only have value one: " + line)
            if typ == 'summary' and name == sample.name and sample.value < 0:
                raise ValueError("Quantile values cannot be negative: " + line)
            if sample.name[len(name):] in ['_total', '_sum', '_count', '_bucket', '_gcount', '_gsum'] and math.isnan(
                    sample.value):
                raise ValueError("Counter-like samples cannot be NaN: " + line)
            if sample.name[len(name):] in ['_total', '_sum', '_count', '_bucket', '_gcount'] and sample.value < 0:
                raise ValueError("Counter-like samples cannot be negative: " + line)
            if sample.exemplar and not (
                    (typ in ['histogram', 'gaugehistogram'] and sample.name.endswith('_bucket'))
                    or (typ in ['counter'] and sample.name.endswith('_total'))):
                raise ValueError("Invalid line only histogram/gaugehistogram buckets and counters can have exemplars: " + line)

    if name is not None:
        yield build_metric(name, documentation, typ, unit, samples)

    if not eof:
        raise ValueError("Missing # EOF at end")
python-prometheus-client-0.19.0+ds1/prometheus_client/parser.py000066400000000000000000000164121454301344400246570ustar00rootroot00000000000000import io as StringIO
import re
from typing import Dict, Iterable, List, Match, Optional, TextIO, Tuple

from .metrics_core import Metric
from .samples import Sample


def text_string_to_metric_families(text: str) -> Iterable[Metric]:
    """Parse Prometheus text format from a unicode string.

    See text_fd_to_metric_families.
""" yield from text_fd_to_metric_families(StringIO.StringIO(text)) ESCAPE_SEQUENCES = { '\\\\': '\\', '\\n': '\n', '\\"': '"', } def replace_escape_sequence(match: Match[str]) -> str: return ESCAPE_SEQUENCES[match.group(0)] HELP_ESCAPING_RE = re.compile(r'\\[\\n]') ESCAPING_RE = re.compile(r'\\[\\n"]') def _replace_help_escaping(s: str) -> str: return HELP_ESCAPING_RE.sub(replace_escape_sequence, s) def _replace_escaping(s: str) -> str: return ESCAPING_RE.sub(replace_escape_sequence, s) def _is_character_escaped(s: str, charpos: int) -> bool: num_bslashes = 0 while (charpos > num_bslashes and s[charpos - 1 - num_bslashes] == '\\'): num_bslashes += 1 return num_bslashes % 2 == 1 def _parse_labels(labels_string: str) -> Dict[str, str]: labels: Dict[str, str] = {} # Return if we don't have valid labels if "=" not in labels_string: return labels escaping = False if "\\" in labels_string: escaping = True # Copy original labels sub_labels = labels_string try: # Process one label at a time while sub_labels: # The label name is before the equal value_start = sub_labels.index("=") label_name = sub_labels[:value_start] sub_labels = sub_labels[value_start + 1:].lstrip() # Find the first quote after the equal quote_start = sub_labels.index('"') + 1 value_substr = sub_labels[quote_start:] # Find the last unescaped quote i = 0 while i < len(value_substr): i = value_substr.index('"', i) if not _is_character_escaped(value_substr, i): break i += 1 # The label value is between the first and last quote quote_end = i + 1 label_value = sub_labels[quote_start:quote_end] # Replace escaping if needed if escaping: label_value = _replace_escaping(label_value) labels[label_name.strip()] = label_value # Remove the processed label from the sub-slice for next iteration sub_labels = sub_labels[quote_end + 1:] next_comma = sub_labels.find(",") + 1 sub_labels = sub_labels[next_comma:].lstrip() return labels except ValueError: raise ValueError("Invalid labels: %s" % labels_string) # If we have multiple values only consider the first def _parse_value_and_timestamp(s: str) -> Tuple[float, Optional[float]]: s = s.lstrip() separator = " " if separator not in s: separator = "\t" values = [value.strip() for value in s.split(separator) if value.strip()] if not values: return float(s), None value = float(values[0]) timestamp = (float(values[-1]) / 1000) if len(values) > 1 else None return value, timestamp def _parse_sample(text: str) -> Sample: # Detect the labels in the text try: label_start, label_end = text.index("{"), text.rindex("}") # The name is before the labels name = text[:label_start].strip() # We ignore the starting curly brace label = text[label_start + 1:label_end] # The value is after the label end (ignoring curly brace) value, timestamp = _parse_value_and_timestamp(text[label_end + 1:]) return Sample(name, _parse_labels(label), value, timestamp) # We don't have labels except ValueError: # Detect what separator is used separator = " " if separator not in text: separator = "\t" name_end = text.index(separator) name = text[:name_end] # The value is after the name value, timestamp = _parse_value_and_timestamp(text[name_end:]) return Sample(name, {}, value, timestamp) def text_fd_to_metric_families(fd: TextIO) -> Iterable[Metric]: """Parse Prometheus text format from a file descriptor. This is a laxer parser than the main Go parser, so successful parsing does not imply that the parsed text meets the specification. Yields Metric's. 
""" name = '' documentation = '' typ = 'untyped' samples: List[Sample] = [] allowed_names = [] def build_metric(name: str, documentation: str, typ: str, samples: List[Sample]) -> Metric: # Munge counters into OpenMetrics representation # used internally. if typ == 'counter': if name.endswith('_total'): name = name[:-6] else: new_samples = [] for s in samples: new_samples.append(Sample(s[0] + '_total', *s[1:])) samples = new_samples metric = Metric(name, documentation, typ) metric.samples = samples return metric for line in fd: line = line.strip() if line.startswith('#'): parts = line.split(None, 3) if len(parts) < 2: continue if parts[1] == 'HELP': if parts[2] != name: if name != '': yield build_metric(name, documentation, typ, samples) # New metric name = parts[2] typ = 'untyped' samples = [] allowed_names = [parts[2]] if len(parts) == 4: documentation = _replace_help_escaping(parts[3]) else: documentation = '' elif parts[1] == 'TYPE': if parts[2] != name: if name != '': yield build_metric(name, documentation, typ, samples) # New metric name = parts[2] documentation = '' samples = [] typ = parts[3] allowed_names = { 'counter': [''], 'gauge': [''], 'summary': ['_count', '_sum', ''], 'histogram': ['_count', '_sum', '_bucket'], }.get(typ, ['']) allowed_names = [name + n for n in allowed_names] else: # Ignore other comment tokens pass elif line == '': # Ignore blank lines pass else: sample = _parse_sample(line) if sample.name not in allowed_names: if name != '': yield build_metric(name, documentation, typ, samples) # New metric, yield immediately as untyped singleton name = '' documentation = '' typ = 'untyped' samples = [] allowed_names = [] yield build_metric(sample[0], documentation, typ, [sample]) else: samples.append(sample) if name != '': yield build_metric(name, documentation, typ, samples) python-prometheus-client-0.19.0+ds1/prometheus_client/platform_collector.py000066400000000000000000000035271454301344400272600ustar00rootroot00000000000000import platform as pf from typing import Any, Iterable, Optional from .metrics_core import GaugeMetricFamily, Metric from .registry import Collector, CollectorRegistry, REGISTRY class PlatformCollector(Collector): """Collector for python platform information""" def __init__(self, registry: Optional[CollectorRegistry] = REGISTRY, platform: Optional[Any] = None, ): self._platform = pf if platform is None else platform info = self._info() system = self._platform.system() if system == "Java": info.update(self._java()) self._metrics = [ self._add_metric("python_info", "Python platform information", info) ] if registry: registry.register(self) def collect(self) -> Iterable[Metric]: return self._metrics @staticmethod def _add_metric(name, documentation, data): labels = data.keys() values = [data[k] for k in labels] g = GaugeMetricFamily(name, documentation, labels=labels) g.add_metric(values, 1) return g def _info(self): major, minor, patchlevel = self._platform.python_version_tuple() return { "version": self._platform.python_version(), "implementation": self._platform.python_implementation(), "major": major, "minor": minor, "patchlevel": patchlevel } def _java(self): java_version, _, vminfo, osinfo = self._platform.java_ver() vm_name, vm_release, vm_vendor = vminfo return { "jvm_version": java_version, "jvm_release": vm_release, "jvm_vendor": vm_vendor, "jvm_name": vm_name } PLATFORM_COLLECTOR = PlatformCollector() """PlatformCollector in default Registry REGISTRY""" 
python-prometheus-client-0.19.0+ds1/prometheus_client/process_collector.py000066400000000000000000000074301454301344400271070ustar00rootroot00000000000000import os
from typing import Callable, Iterable, Optional, Union

from .metrics_core import CounterMetricFamily, GaugeMetricFamily, Metric
from .registry import Collector, CollectorRegistry, REGISTRY

try:
    import resource

    _PAGESIZE = resource.getpagesize()
except ImportError:
    # Not Unix
    _PAGESIZE = 4096


class ProcessCollector(Collector):
    """Collector for Standard Exports such as cpu and memory."""

    def __init__(self,
                 namespace: str = '',
                 pid: Callable[[], Union[int, str]] = lambda: 'self',
                 proc: str = '/proc',
                 registry: Optional[CollectorRegistry] = REGISTRY):
        self._namespace = namespace
        self._pid = pid
        self._proc = proc
        if namespace:
            self._prefix = namespace + '_process_'
        else:
            self._prefix = 'process_'
        self._ticks = 100.0
        try:
            self._ticks = os.sysconf('SC_CLK_TCK')
        except (ValueError, TypeError, AttributeError, OSError):
            pass

        self._pagesize = _PAGESIZE

        # This is used to test if we can access /proc.
        self._btime = 0
        try:
            self._btime = self._boot_time()
        except OSError:
            pass
        if registry:
            registry.register(self)

    def _boot_time(self):
        with open(os.path.join(self._proc, 'stat'), 'rb') as stat:
            for line in stat:
                if line.startswith(b'btime '):
                    return float(line.split()[1])

    def collect(self) -> Iterable[Metric]:
        if not self._btime:
            return []

        pid = os.path.join(self._proc, str(self._pid()).strip())

        result = []
        try:
            with open(os.path.join(pid, 'stat'), 'rb') as stat:
                parts = (stat.read().split(b')')[-1].split())

            vmem = GaugeMetricFamily(self._prefix + 'virtual_memory_bytes',
                                     'Virtual memory size in bytes.', value=float(parts[20]))
            rss = GaugeMetricFamily(self._prefix + 'resident_memory_bytes', 'Resident memory size in bytes.',
                                    value=float(parts[21]) * self._pagesize)
            start_time_secs = float(parts[19]) / self._ticks
            start_time = GaugeMetricFamily(self._prefix + 'start_time_seconds',
                                           'Start time of the process since unix epoch in seconds.',
                                           value=start_time_secs + self._btime)
            utime = float(parts[11]) / self._ticks
            stime = float(parts[12]) / self._ticks
            cpu = CounterMetricFamily(self._prefix + 'cpu_seconds_total',
                                      'Total user and system CPU time spent in seconds.',
                                      value=utime + stime)
            result.extend([vmem, rss, start_time, cpu])
        except OSError:
            pass

        try:
            with open(os.path.join(pid, 'limits'), 'rb') as limits:
                for line in limits:
                    if line.startswith(b'Max open file'):
                        max_fds = GaugeMetricFamily(self._prefix + 'max_fds',
                                                    'Maximum number of open file descriptors.',
                                                    value=float(line.split()[3]))
                        break
            open_fds = GaugeMetricFamily(self._prefix + 'open_fds',
                                         'Number of open file descriptors.',
                                         len(os.listdir(os.path.join(pid, 'fd'))))
            result.extend([open_fds, max_fds])
        except OSError:
            pass

        return result


PROCESS_COLLECTOR = ProcessCollector()
"""Default ProcessCollector in default Registry REGISTRY."""
python-prometheus-client-0.19.0+ds1/prometheus_client/py.typed000066400000000000000000000000001454301344400244720ustar00rootroot00000000000000python-prometheus-client-0.19.0+ds1/prometheus_client/registry.py000066400000000000000000000140641454301344400252340ustar00rootroot00000000000000from abc import ABC, abstractmethod
import copy
from threading import Lock
from typing import Dict, Iterable, List, Optional

from .metrics_core import Metric

# Ideally this would be a Protocol, but Protocols are only available in Python >= 3.8.
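# (typing.Protocol would let any object exposing a collect() method satisfy
# type checkers structurally, without having to subclass Collector.)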
class Collector(ABC):
    @abstractmethod
    def collect(self) -> Iterable[Metric]:
        pass


class _EmptyCollector(Collector):
    def collect(self) -> Iterable[Metric]:
        return []


class CollectorRegistry(Collector):
    """Metric collector registry.

    Collectors must have a no-argument method 'collect' that returns a list of
    Metric objects. The returned metrics should be consistent with the
    Prometheus exposition formats.
    """

    def __init__(self, auto_describe: bool = False, target_info: Optional[Dict[str, str]] = None):
        self._collector_to_names: Dict[Collector, List[str]] = {}
        self._names_to_collectors: Dict[str, Collector] = {}
        self._auto_describe = auto_describe
        self._lock = Lock()
        self._target_info: Optional[Dict[str, str]] = {}
        self.set_target_info(target_info)

    def register(self, collector: Collector) -> None:
        """Add a collector to the registry."""
        with self._lock:
            names = self._get_names(collector)
            duplicates = set(self._names_to_collectors).intersection(names)
            if duplicates:
                raise ValueError(
                    'Duplicated timeseries in CollectorRegistry: {}'.format(
                        duplicates))
            for name in names:
                self._names_to_collectors[name] = collector
            self._collector_to_names[collector] = names

    def unregister(self, collector: Collector) -> None:
        """Remove a collector from the registry."""
        with self._lock:
            for name in self._collector_to_names[collector]:
                del self._names_to_collectors[name]
            del self._collector_to_names[collector]

    def _get_names(self, collector):
        """Get names of timeseries the collector produces and clashes with."""
        desc_func = None
        # If there's a describe function, use it.
        try:
            desc_func = collector.describe
        except AttributeError:
            pass
        # Otherwise, if auto describe is enabled use the collect function.
        if not desc_func and self._auto_describe:
            desc_func = collector.collect

        if not desc_func:
            return []

        result = []
        type_suffixes = {
            'counter': ['_total', '_created'],
            'summary': ['_sum', '_count', '_created'],
            'histogram': ['_bucket', '_sum', '_count', '_created'],
            'gaugehistogram': ['_bucket', '_gsum', '_gcount'],
            'info': ['_info'],
        }
        for metric in desc_func():
            result.append(metric.name)
            for suffix in type_suffixes.get(metric.type, []):
                result.append(metric.name + suffix)
        return result

    def collect(self) -> Iterable[Metric]:
        """Yields metrics from the collectors in the registry."""
        collectors = None
        ti = None
        with self._lock:
            collectors = copy.copy(self._collector_to_names)
            if self._target_info:
                ti = self._target_info_metric()
        if ti:
            yield ti
        for collector in collectors:
            yield from collector.collect()

    def restricted_registry(self, names: Iterable[str]) -> "RestrictedRegistry":
        """Returns object that only collects some metrics.

        Returns an object which upon collect() will return
        only samples with the given names.
        Intended usage is:
            generate_latest(REGISTRY.restricted_registry(['a_timeseries']))

        Experimental."""
        names = set(names)
        return RestrictedRegistry(names, self)

    def set_target_info(self, labels: Optional[Dict[str, str]]) -> None:
        with self._lock:
            if labels:
                if not self._target_info and 'target_info' in self._names_to_collectors:
                    raise ValueError('CollectorRegistry already contains a target_info metric')
                self._names_to_collectors['target_info'] = _EmptyCollector()
            elif self._target_info:
                self._names_to_collectors.pop('target_info', None)
            self._target_info = labels

    def get_target_info(self) -> Optional[Dict[str, str]]:
        with self._lock:
            return self._target_info

    def _target_info_metric(self):
        m = Metric('target', 'Target metadata', 'info')
        m.add_sample('target_info', self._target_info, 1)
        return m

    def get_sample_value(self, name: str, labels: Optional[Dict[str, str]] = None) -> Optional[float]:
        """Returns the sample value, or None if not found.

        This is inefficient, and intended only for use in unittests.
        """
        if labels is None:
            labels = {}
        for metric in self.collect():
            for s in metric.samples:
                if s.name == name and s.labels == labels:
                    return s.value
        return None


class RestrictedRegistry:
    def __init__(self, names: Iterable[str], registry: CollectorRegistry):
        self._name_set = set(names)
        self._registry = registry

    def collect(self) -> Iterable[Metric]:
        collectors = set()
        target_info_metric = None
        with self._registry._lock:
            if 'target_info' in self._name_set and self._registry._target_info:
                target_info_metric = self._registry._target_info_metric()
            for name in self._name_set:
                if name != 'target_info' and name in self._registry._names_to_collectors:
                    collectors.add(self._registry._names_to_collectors[name])
        if target_info_metric:
            yield target_info_metric
        for collector in collectors:
            for metric in collector.collect():
                m = metric._restricted_metric(self._name_set)
                if m:
                    yield m


REGISTRY = CollectorRegistry(auto_describe=True)
python-prometheus-client-0.19.0+ds1/prometheus_client/samples.py000066400000000000000000000031411454301344400250220ustar00rootroot00000000000000from typing import Dict, NamedTuple, Optional, Union


class Timestamp:
    """A nanosecond-resolution timestamp."""

    def __init__(self, sec: float, nsec: float) -> None:
        if nsec < 0 or nsec >= 1e9:
            raise ValueError(f"Invalid value for nanoseconds in Timestamp: {nsec}")
        if sec < 0:
            nsec = -nsec
        self.sec: int = int(sec)
        self.nsec: int = int(nsec)

    def __str__(self) -> str:
        return f"{self.sec}.{self.nsec:09d}"

    def __repr__(self) -> str:
        return f"Timestamp({self.sec}, {self.nsec})"

    def __float__(self) -> float:
        return float(self.sec) + float(self.nsec) / 1e9

    def __eq__(self, other: object) -> bool:
        return isinstance(other, Timestamp) and self.sec == other.sec and self.nsec == other.nsec

    def __ne__(self, other: object) -> bool:
        return not self == other

    def __gt__(self, other: "Timestamp") -> bool:
        return self.sec > other.sec or self.nsec > other.nsec

    def __lt__(self, other: "Timestamp") -> bool:
        return self.sec < other.sec or self.nsec < other.nsec


# Timestamp and exemplar are optional.
# Value can be an int or a float.
# Timestamp can be a float containing a unixtime in seconds,
# a Timestamp object, or None.
# Exemplar can be an Exemplar object, or None.
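# Illustrative examples (hypothetical series names, not part of the module):
#   Exemplar({'trace_id': 'abc'}, 0.5, 123.456)
#   Sample('http_requests_total', {'method': 'get'}, 1.0)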
class Exemplar(NamedTuple):
    labels: Dict[str, str]
    value: float
    timestamp: Optional[Union[float, Timestamp]] = None


class Sample(NamedTuple):
    name: str
    labels: Dict[str, str]
    value: float
    timestamp: Optional[Union[float, Timestamp]] = None
    exemplar: Optional[Exemplar] = None
python-prometheus-client-0.19.0+ds1/prometheus_client/twisted/000077500000000000000000000000001454301344400244705ustar00rootroot00000000000000python-prometheus-client-0.19.0+ds1/prometheus_client/twisted/__init__.py000066400000000000000000000001101454301344400265710ustar00rootroot00000000000000from ._exposition import MetricsResource

__all__ = ['MetricsResource']
python-prometheus-client-0.19.0+ds1/prometheus_client/twisted/_exposition.py000066400000000000000000000003721454301344400274040ustar00rootroot00000000000000from twisted.internet import reactor
from twisted.web.wsgi import WSGIResource

from .. import exposition, REGISTRY

MetricsResource = lambda registry=REGISTRY: WSGIResource(
    reactor, reactor.getThreadPool(), exposition.make_wsgi_app(registry)
)
python-prometheus-client-0.19.0+ds1/prometheus_client/utils.py000066400000000000000000000011221454301344400245130ustar00rootroot00000000000000import math

INF = float("inf")
MINUS_INF = float("-inf")
NaN = float("NaN")


def floatToGoString(d):
    d = float(d)
    if d == INF:
        return '+Inf'
    elif d == MINUS_INF:
        return '-Inf'
    elif math.isnan(d):
        return 'NaN'
    else:
        s = repr(d)
        dot = s.find('.')
        # Go switches to exponents sooner than Python.
        # We only need to care about positive values for le/quantile.
        if d > 0 and dot > 6:
            mantissa = f'{s[0]}.{s[1:dot]}{s[dot + 1:]}'.rstrip('0.')
            return f'{mantissa}e+0{dot - 1}'
        return s
python-prometheus-client-0.19.0+ds1/prometheus_client/values.py000066400000000000000000000116121454301344400246570ustar00rootroot00000000000000import os
from threading import Lock
import warnings

from .mmap_dict import mmap_key, MmapedDict


class MutexValue:
    """A float protected by a mutex."""

    _multiprocess = False

    def __init__(self, typ, metric_name, name, labelnames, labelvalues, help_text, **kwargs):
        self._value = 0.0
        self._exemplar = None
        self._lock = Lock()

    def inc(self, amount):
        with self._lock:
            self._value += amount

    def set(self, value, timestamp=None):
        with self._lock:
            self._value = value

    def set_exemplar(self, exemplar):
        with self._lock:
            self._exemplar = exemplar

    def get(self):
        with self._lock:
            return self._value

    def get_exemplar(self):
        with self._lock:
            return self._exemplar


def MultiProcessValue(process_identifier=os.getpid):
    """Returns a MmapedValue class based on a process_identifier function.

    The 'process_identifier' function MUST comply with this simple rule:
    when called in simultaneously running processes it MUST return distinct values.

    Using a different function than the default 'os.getpid' is at your own risk.
    """
    files = {}
    values = []
    pid = {'value': process_identifier()}
    # Use a single global lock when in multi-processing mode
    # as we presume this means there is no threading going on.
    # This avoids the need to also have mutexes in __MmapDict.
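    # (The single lock also keeps the fork check in __check_for_pid_change
    # atomic with the value read/write that follows it.)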
    lock = Lock()

    class MmapedValue:
        """A float protected by a mutex backed by a per-process mmaped file."""

        _multiprocess = True

        def __init__(self, typ, metric_name, name, labelnames, labelvalues, help_text, multiprocess_mode='', **kwargs):
            self._params = typ, metric_name, name, labelnames, labelvalues, help_text, multiprocess_mode
            # This deprecation warning can go away in a few releases when removing the compatibility
            if 'prometheus_multiproc_dir' in os.environ and 'PROMETHEUS_MULTIPROC_DIR' not in os.environ:
                os.environ['PROMETHEUS_MULTIPROC_DIR'] = os.environ['prometheus_multiproc_dir']
                warnings.warn("prometheus_multiproc_dir variable has been deprecated in favor of the upper case naming PROMETHEUS_MULTIPROC_DIR", DeprecationWarning)
            with lock:
                self.__check_for_pid_change()
                self.__reset()
                values.append(self)

        def __reset(self):
            typ, metric_name, name, labelnames, labelvalues, help_text, multiprocess_mode = self._params
            if typ == 'gauge':
                file_prefix = typ + '_' + multiprocess_mode
            else:
                file_prefix = typ
            if file_prefix not in files:
                filename = os.path.join(
                    os.environ.get('PROMETHEUS_MULTIPROC_DIR'),
                    '{}_{}.db'.format(file_prefix, pid['value']))
                files[file_prefix] = MmapedDict(filename)
            self._file = files[file_prefix]
            self._key = mmap_key(metric_name, name, labelnames, labelvalues, help_text)
            self._value, self._timestamp = self._file.read_value(self._key)

        def __check_for_pid_change(self):
            actual_pid = process_identifier()
            if pid['value'] != actual_pid:
                pid['value'] = actual_pid
                # There has been a fork(), reset all the values.
                for f in files.values():
                    f.close()
                files.clear()
                for value in values:
                    value.__reset()

        def inc(self, amount):
            with lock:
                self.__check_for_pid_change()
                self._value += amount
                self._timestamp = 0.0
                self._file.write_value(self._key, self._value, self._timestamp)

        def set(self, value, timestamp=None):
            with lock:
                self.__check_for_pid_change()
                self._value = value
                self._timestamp = timestamp or 0.0
                self._file.write_value(self._key, self._value, self._timestamp)

        def set_exemplar(self, exemplar):
            # TODO: Implement exemplars for multiprocess mode.
            return

        def get(self):
            with lock:
                self.__check_for_pid_change()
                return self._value

        def get_exemplar(self):
            # TODO: Implement exemplars for multiprocess mode.
            return None

    return MmapedValue


def get_value_class():
    # Should we enable multi-process mode?
    # This needs to be chosen before the first metric is constructed,
    # and as that may be in some arbitrary library the user/admin has
    # no control over we use an environment variable.
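    # e.g. exporting PROMETHEUS_MULTIPROC_DIR=/path/to/metrics-dir (a
    # hypothetical path) before starting the process and its workers.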
    if 'prometheus_multiproc_dir' in os.environ or 'PROMETHEUS_MULTIPROC_DIR' in os.environ:
        return MultiProcessValue()
    else:
        return MutexValue


ValueClass = get_value_class()
python-prometheus-client-0.19.0+ds1/setup.py000066400000000000000000000033501454301344400207670ustar00rootroot00000000000000from os import path

from setuptools import setup

with open(path.join(path.abspath(path.dirname(__file__)), 'README.md')) as f:
    long_description = f.read()

setup(
    name="prometheus_client",
    version="0.19.0",
    author="Brian Brazil",
    author_email="brian.brazil@robustperception.io",
    description="Python client for the Prometheus monitoring system.",
    long_description=long_description,
    long_description_content_type='text/markdown',
    license="Apache Software License 2.0",
    keywords="prometheus monitoring instrumentation client",
    url="https://github.com/prometheus/client_python",
    packages=[
        'prometheus_client',
        'prometheus_client.bridge',
        'prometheus_client.openmetrics',
        'prometheus_client.twisted',
    ],
    package_data={
        'prometheus_client': ['py.typed']
    },
    extras_require={
        'twisted': ['twisted'],
    },
    test_suite="tests",
    python_requires=">=3.8",
    classifiers=[
        "Development Status :: 4 - Beta",
        "Intended Audience :: Developers",
        "Intended Audience :: Information Technology",
        "Intended Audience :: System Administrators",
        "Programming Language :: Python",
        "Programming Language :: Python :: 3",
        "Programming Language :: Python :: 3.8",
        "Programming Language :: Python :: 3.9",
        "Programming Language :: Python :: 3.10",
        "Programming Language :: Python :: 3.11",
        "Programming Language :: Python :: 3.12",
        "Programming Language :: Python :: Implementation :: CPython",
        "Programming Language :: Python :: Implementation :: PyPy",
        "Topic :: System :: Monitoring",
        "License :: OSI Approved :: Apache Software License",
    ],
)
python-prometheus-client-0.19.0+ds1/tests/000077500000000000000000000000001454301344400204165ustar00rootroot00000000000000python-prometheus-client-0.19.0+ds1/tests/__init__.py000066400000000000000000000000001454301344400225150ustar00rootroot00000000000000python-prometheus-client-0.19.0+ds1/tests/certs/000077500000000000000000000000001454301344400215365ustar00rootroot00000000000000python-prometheus-client-0.19.0+ds1/tests/certs/cert.pem000066400000000000000000000037101454301344400231770ustar00rootroot00000000000000-----BEGIN CERTIFICATE-----
MIIFkzCCA3ugAwIBAgIUWhYKmrh+gF5pZ7Yqj41cWJ2pRzkwDQYJKoZIhvcNAQEL
BQAwWTELMAkGA1UEBhMCVVMxEzARBgNVBAgMClNvbWUtU3RhdGUxITAfBgNVBAoM
GEludGVybmV0IFdpZGdpdHMgUHR5IEx0ZDESMBAGA1UEAwwJbG9jYWxob3N0MB4X
DTIyMDkyMDE4NTQzOFoXDTIyMTAyMDE4NTQzOFowWTELMAkGA1UEBhMCVVMxEzAR
BgNVBAgMClNvbWUtU3RhdGUxITAfBgNVBAoMGEludGVybmV0IFdpZGdpdHMgUHR5
IEx0ZDESMBAGA1UEAwwJbG9jYWxob3N0MIICIjANBgkqhkiG9w0BAQEFAAOCAg8A
MIICCgKCAgEAxs6v9c/bNtA0TjNOzSoIq2CQuBhjWt8nwVG7Ujn3JBSvC263foRq
OG0oGhaFrbt4HrbikYeSo5u/0CDBfKBIr7nuWifpISUovNSm2EVgVL7YQhgPOSvb
PiJhrflfkUBZWUlNf0EAcFOn29xSvw3ElDUtxqUql39RRXUah1JDIIJpFPENwbmB
4jN5AoumdzSZde3S1xAXqoPf7gwxvsIgBKYGxRZs95DLx3HwesDAmRdmXcLShNZm
RpUbIbomn4w3mf7S0QZ7H49/IHRghw61/TGKCX6ieJ8QTWwnBgo9EiwhhuTCVtFw
IFhzQy5sP2b6mYkxQK4enYiHkTEHESdV9ipDoyZfE3G+WojEa65OYkrBFaZbuJ8q
lBjRCA+pyXfxa40XkK4x3aibvdKeH1CGE+rYhxPYC6emu0Jk1wMO5TNff2Gv+eJv
GEQXuyQPC5SmgSpy0tWwO9ReYmDU1++gbmFZc1QVB0GI/WxgF+PpBBEa8naFXxbe
ZWG3Q6pFfrIJ3pbhb1lkTlk2zpJO2hTDUIBIn6pdVBbv+QAqCyu94gItfUxWWNL9
SHT+dfyX4CtpF9R9m9VltoKqXKcVhnjkPc9wG03TBcJdCZT3e9sg8bSz09hwx+HO
5x8RrLkqUd7PFznhX59k/xhXTSIdtdWEfrhPjy2L36o2bEEpPL70qfMCAwEAAaNT
MFEwHQYDVR0OBBYEFJOiGRDrPKW5+qvVZzs+Yu8qPNJnMB8GA1UdIwQYMBaAFJOi
GRDrPKW5+qvVZzs+Yu8qPNJnMA8GA1UdEwEB/wQFMAMBAf8wDQYJKoZIhvcNAQEL
BQADggIBALbV73MLFz24tQpC8Ax6vh9okeuc5n2eVOk/ofe2A9DL16IeMyJo8QAG
dK2j+mfZBvPfYYX/Uz596bOLauRKAsXm3ERmPW74nemLE4CiMhod2snIUMbSHrem
6iFLIqGGosncoqHIPJej4sG+XLKsqw8zLzflUSsvNveaHEwt2PSZys37bExADbB5
uj4JU+vEBhagZ6ieZ3hENn3Uj5c46kimXrEStD3chT2PWoLAS4Am6VJyXUtqXQZj
1ef+S+6caCbldaJNfNEVU1jQliJpo0u+EfiF/DqU799xnnoW4D132AxiG0L1jPFr
7GbIme6H2ZbNCZngf6eCbdsoHAGkQZpD3wYFHLmTFDCNmjMJteb3YFMNHyU5eh8C
2bTp4D+pifgFRo3Mc23kuRXiIzyrhg/eOHl+Qi2V8BpXP/7bI27FZXglp9rzhP57
YMfBeoOejRgRHJTeMPhBbEevQyqpb9ecZxpuGV77Pi3S5IA26fe3pZQCHRceufZr
YKWLOCt2jZJ0y9KM4wdVg40yr6BVZqy2OPqDn64q0pAgMHG24XrxxuaFISjgPOgx
bYhhTUG/4Dkb1BWhfEv4EwEf/uDxhHE8k6LoKk3Qb+aOzPr2PKGvW3yBHEFNSmKH
x3SB8xxj/IOiA+em+TLtKd66gG0T0b8cPt037k+D3a0BqC9d2sVs
-----END CERTIFICATE-----
python-prometheus-client-0.19.0+ds1/tests/certs/key.pem000066400000000000000000000063041454301344400230340ustar00rootroot00000000000000-----BEGIN PRIVATE KEY-----
MIIJQQIBADANBgkqhkiG9w0BAQEFAASCCSswggknAgEAAoICAQDGzq/1z9s20DRO
M07NKgirYJC4GGNa3yfBUbtSOfckFK8Lbrd+hGo4bSgaFoWtu3getuKRh5Kjm7/Q
IMF8oEivue5aJ+khJSi81KbYRWBUvthCGA85K9s+ImGt+V+RQFlZSU1/QQBwU6fb
3FK/DcSUNS3GpSqXf1FFdRqHUkMggmkU8Q3BuYHiM3kCi6Z3NJl17dLXEBeqg9/u
DDG+wiAEpgbFFmz3kMvHcfB6wMCZF2ZdwtKE1mZGlRshuiafjDeZ/tLRBnsfj38g
dGCHDrX9MYoJfqJ4nxBNbCcGCj0SLCGG5MJW0XAgWHNDLmw/ZvqZiTFArh6diIeR
MQcRJ1X2KkOjJl8Tcb5aiMRrrk5iSsEVplu4nyqUGNEID6nJd/FrjReQrjHdqJu9
0p4fUIYT6tiHE9gLp6a7QmTXAw7lM19/Ya/54m8YRBe7JA8LlKaBKnLS1bA71F5i
YNTX76BuYVlzVBUHQYj9bGAX4+kEERrydoVfFt5lYbdDqkV+sgneluFvWWROWTbO
kk7aFMNQgEifql1UFu/5ACoLK73iAi19TFZY0v1IdP51/JfgK2kX1H2b1WW2gqpc
pxWGeOQ9z3AbTdMFwl0JlPd72yDxtLPT2HDH4c7nHxGsuSpR3s8XOeFfn2T/GFdN
Ih211YR+uE+PLYvfqjZsQSk8vvSp8wIDAQABAoICAAPyVHHnx21GItOulxDhlbx5
NUZCTa6fIXXn/nT6a5qOwo7Sitf7HvSxzgr+iXbScucBMGw9Kb8Pt3YVQGIN+INs
iHvHsQwUZcOh4RIIBoqII1jki2DSKw8HtbKzcZ87jMqF9wDgtHaGYp2tuQLL7iwX
BiqcWsUZJO7hDT7Edkqt7BIbWu+OlDJ+XRec2BgjtiwuJXJZgm7DIW3jVhV4WxRc
i2PcNxuPB0yVSXXWX7xqR4Dy/iTe8LbT/O7leCDQssXe1iaKH2WX/qkRRl1IAHrf
QeNAXU9RsQwoannnOCElOSEpZ2Y70CMEPn2F7WYw0Ca+H3kuO7Na434RYBeKFV29
saVw6SzToqQIR/NYagNGQ2d7SRmiXgZYsMY7AOLQtTn0HratOGVEouIm+XgeZxMZ
qKvjLaITVorZMrPmb/fKQ/z/pxcMEwRHUtws1PT6jSJSuAABIFxvHSi64XQQ6Qo7
nfl6OgBkpoURmrH7oK1q3ZRBsm9Hy+8i3tTilcTRJNeavF0nPdByEbzypo/BLYT5
0WFN2q5inzncwtqFkD+6+QB09HVt9HtlPPu6/NYTum+N9paeFyP84oK1CClnlwkl
abpXGlpAFwfjiJhwX6BxpQlB2WR9SJeUy870dftZKm7Jbd3o2XJ7wjYpoaI+UCFY
4BkAcL6sc8HtMhBNFbthAoIBAQDdZ/cXqtUT+gmIvJe6WmCENXFK/8NdX0gxH1Gt
/28743h4G2Ukl+Yh2GUq4VWgnV8r0euygxx1kwJFDpC1NEWgNMHzxNeTI1H4mqZt
jA2wu164+t8TCsNdX4yTRp1IEKkCfPh+IIzwBFzabPWVOzxRPBoxEC95F/y/OfXm
0R1e5tErulYPFgWFvCWt42mTg6hCVLuAAmfzaKsmJ89nGT5YDORSYzVDa/WuapFf
QMKzbpa8ZyvFUyi1+CGN1/+JGyAbNXtM6KS5Oi76DfSR+sSPEQEXGmOxf+bq827S
QL0K9GTPMIjU1PzrXC6W5hA9lgG/2lwRD0JsjiJIBynnpVAjAoIBAQDl3ss8CX9W
l1IyNfIvHGgK43PQt6tgKgAB10+13F4/+1KlFmF3SZaC/p3AA83YNYt6AP1m74wH
QYiihEzFH6U8PMp8qK8qfLOI2qVmmKyGCUPsYyYb6Kf9eHiGfCL4afZvJwmnB3FV
3TrQkdAwo7xtOh9yxqdnnt7u193qSWejfKIQcnvzVs6nXbsdlHwqax0O2q40Mlcf
eQqO5de53aUx89iKylTefZPD8fTBb04oL2FV2ImCccapYEHg/UsquwAZl1phMoZV
0mDjn1nWaxUWY+Aog8ds5fOIpz5wd+KF+cvT1cP0/rm5JHT1HuDVDdd8ArPkECuq
MMYlNj5MYfPxAoIBAF737E3zkeg6tQI42uAtSf8LqWfhIxyW9TFU3MVErqLCpHbo
UU8L9MOJvYNSGleFiUATkAUHJhrsjumuILYJEOByIMt+IHXVjaCUPVT54RlwlWXE
/hB96mTPyk2V2XsC4mvVzQTU039Ub7ulRwXW3b1+iUGITsSjXF9t7iMuiWmemhQm
nilkacP+ey8GP8/thivFipOS9KG8wMTiCJ2Rf2NnTDxmn38m/L/uqCJyddFfWzq/
ClBepjS/lSzxfIOD5halrxjDJXzqDyJlAAXpyYwQYCZXxHFrilI3Ts7SxAPB5sfU
aqzYGxCdfsJtNoQkJuXzNNCAeh50LRI2OGxLRX8CggEARm4s9wgx6+YRWTEOM0EQ
38UxBxI/gAdeWTIPSjlq50+p0ss4scPqSdiZnOuNdmFxisAi5BchYFfD9YdzvjIj
/oDhybAle28Z0ySq6PR+Z9MO7K60TnjKf+8ZfpsqW9KbnxLm8jZlk1llW+JRV5XT
deQJHrGfOTCEPcoGRHKZPo5BWai6MaS3TLB7VGTaZmTLUnHOTk/eQdZkVcQ2hMxU
gSmlf2DfAAyZ6b+IrnvcBpP9zr+54i3aIKtNhBIXpdAGB9FH79/7KPB8n0GD1R6a
J3ISjFdUExmhtI0JpIwW69XNjepBUB976C4zZ6c+XAkRrP1nAMmzl0G6dExaaizZ
AQKCAQATD1sklHGmkwNtw5g51G7eC8mJeS8TKK+nlWNaIPcd8LIWc5XnECYlRrt/
pYv74Dwqw+ze6I8W2FlhvSYAheGgYlijzhDlbArdr7jQLFcyRESu27koqwHSTkxM
Bt46VS4UrlIk4bAp8/WwXUrGrQ3P094R7wJ2jN3Jp4/tG0C+igti4b12KfM+srkC
/N5BiyLLl4H4l1TMFyhuQyY7QsqgWkEJQoYbI+see7m/8IlnU+mxGj8q6aWiTmVG
52ZOak9AV0SoHSIPChpin5J3kNDQ2z/oC3UhyHZBHwWCGj8AOTy+M1HIcVNPzCga
YdxZTB2pN96tqnBm8vyvi81Cdy/x
-----END PRIVATE KEY-----
python-prometheus-client-0.19.0+ds1/tests/openmetrics/000077500000000000000000000000001454301344400227465ustar00rootroot00000000000000python-prometheus-client-0.19.0+ds1/tests/openmetrics/__init__.py000066400000000000000000000000001454301344400250450ustar00rootroot00000000000000python-prometheus-client-0.19.0+ds1/tests/openmetrics/test_exposition.py000066400000000000000000000223041454301344400265610ustar00rootroot00000000000000import time
import unittest

from prometheus_client import (
    CollectorRegistry, Counter, Enum, Gauge, Histogram, Info, Metric, Summary,
)
from prometheus_client.core import (
    Exemplar, GaugeHistogramMetricFamily, Timestamp,
)
from prometheus_client.openmetrics.exposition import generate_latest


class TestGenerateText(unittest.TestCase):
    def setUp(self):
        self.registry = CollectorRegistry()

        # Mock time so _created values are fixed.
        self.old_time = time.time
        time.time = lambda: 123.456

    def tearDown(self):
        time.time = self.old_time

    def custom_collector(self, metric_family):
        class CustomCollector:
            def collect(self):
                return [metric_family]

        self.registry.register(CustomCollector())

    def test_counter(self):
        c = Counter('cc', 'A counter', registry=self.registry)
        c.inc()
        self.assertEqual(b'# HELP cc A counter\n# TYPE cc counter\ncc_total 1.0\ncc_created 123.456\n# EOF\n',
                         generate_latest(self.registry))

    def test_counter_total(self):
        c = Counter('cc_total', 'A counter', registry=self.registry)
        c.inc()
        self.assertEqual(b'# HELP cc A counter\n# TYPE cc counter\ncc_total 1.0\ncc_created 123.456\n# EOF\n',
                         generate_latest(self.registry))

    def test_counter_unit(self):
        c = Counter('cc_seconds', 'A counter', registry=self.registry, unit="seconds")
        c.inc()
        self.assertEqual(b'# HELP cc_seconds A counter\n# TYPE cc_seconds counter\n# UNIT cc_seconds seconds\ncc_seconds_total 1.0\ncc_seconds_created 123.456\n# EOF\n',
                         generate_latest(self.registry))

    def test_gauge(self):
        g = Gauge('gg', 'A gauge', registry=self.registry)
        g.set(17)
        self.assertEqual(b'# HELP gg A gauge\n# TYPE gg gauge\ngg 17.0\n# EOF\n', generate_latest(self.registry))

    def test_summary(self):
        s = Summary('ss', 'A summary', ['a', 'b'], registry=self.registry)
        s.labels('c', 'd').observe(17)
        self.assertEqual(b"""# HELP ss A summary
# TYPE ss summary
ss_count{a="c",b="d"} 1.0
ss_sum{a="c",b="d"} 17.0
ss_created{a="c",b="d"} 123.456
# EOF
""", generate_latest(self.registry))

    def test_histogram(self):
        s = Histogram('hh', 'A histogram', registry=self.registry)
        s.observe(0.05)
        self.assertEqual(b"""# HELP hh A histogram
# TYPE hh histogram
hh_bucket{le="0.005"} 0.0
hh_bucket{le="0.01"} 0.0
hh_bucket{le="0.025"} 0.0
hh_bucket{le="0.05"} 1.0
hh_bucket{le="0.075"} 1.0
hh_bucket{le="0.1"} 1.0
hh_bucket{le="0.25"} 1.0
hh_bucket{le="0.5"} 1.0
hh_bucket{le="0.75"} 1.0
hh_bucket{le="1.0"} 1.0
hh_bucket{le="2.5"} 1.0
hh_bucket{le="5.0"} 1.0
hh_bucket{le="7.5"} 1.0 hh_bucket{le="10.0"} 1.0 hh_bucket{le="+Inf"} 1.0 hh_count 1.0 hh_sum 0.05 hh_created 123.456 # EOF """, generate_latest(self.registry)) def test_histogram_negative_buckets(self): s = Histogram('hh', 'A histogram', buckets=[-1, -0.5, 0, 0.5, 1], registry=self.registry) s.observe(-0.5) self.assertEqual(b"""# HELP hh A histogram # TYPE hh histogram hh_bucket{le="-1.0"} 0.0 hh_bucket{le="-0.5"} 1.0 hh_bucket{le="0.0"} 1.0 hh_bucket{le="0.5"} 1.0 hh_bucket{le="1.0"} 1.0 hh_bucket{le="+Inf"} 1.0 hh_count 1.0 hh_created 123.456 # EOF """, generate_latest(self.registry)) def test_histogram_exemplar(self): s = Histogram('hh', 'A histogram', buckets=[1, 2, 3, 4], registry=self.registry) s.observe(0.5, {'a': 'b'}) s.observe(1.5, {'le': '7'}) s.observe(2.5, {'a': 'b'}) s.observe(3.5, {'a': '\n"\\'}) print(generate_latest(self.registry)) self.assertEqual(b"""# HELP hh A histogram # TYPE hh histogram hh_bucket{le="1.0"} 1.0 # {a="b"} 0.5 123.456 hh_bucket{le="2.0"} 2.0 # {le="7"} 1.5 123.456 hh_bucket{le="3.0"} 3.0 # {a="b"} 2.5 123.456 hh_bucket{le="4.0"} 4.0 # {a="\\n\\"\\\\"} 3.5 123.456 hh_bucket{le="+Inf"} 4.0 hh_count 4.0 hh_sum 8.0 hh_created 123.456 # EOF """, generate_latest(self.registry)) def test_counter_exemplar(self): c = Counter('cc', 'A counter', registry=self.registry) c.inc(exemplar={'a': 'b'}) self.assertEqual(b"""# HELP cc A counter # TYPE cc counter cc_total 1.0 # {a="b"} 1.0 123.456 cc_created 123.456 # EOF """, generate_latest(self.registry)) def test_untyped_exemplar(self): class MyCollector: def collect(self): metric = Metric("hh", "help", 'untyped') # This is not sane, but it covers all the cases. metric.add_sample("hh_bucket", {}, 0, None, Exemplar({'a': 'b'}, 0.5)) yield metric self.registry.register(MyCollector()) with self.assertRaises(ValueError): generate_latest(self.registry) def test_histogram_non_bucket_exemplar(self): class MyCollector: def collect(self): metric = Metric("hh", "help", 'histogram') # This is not sane, but it covers all the cases. 
metric.add_sample("hh_count", {}, 0, None, Exemplar({'a': 'b'}, 0.5)) yield metric self.registry.register(MyCollector()) with self.assertRaises(ValueError): generate_latest(self.registry) def test_counter_non_total_exemplar(self): class MyCollector: def collect(self): metric = Metric("cc", "A counter", 'counter') metric.add_sample("cc_total", {}, 1, None, None) metric.add_sample("cc_created", {}, 123.456, None, Exemplar({'a': 'b'}, 1.0, 123.456)) yield metric self.registry.register(MyCollector()) with self.assertRaises(ValueError): generate_latest(self.registry) def test_gaugehistogram(self): self.custom_collector( GaugeHistogramMetricFamily('gh', 'help', buckets=[('1.0', 4), ('+Inf', (5))], gsum_value=7)) self.assertEqual(b"""# HELP gh help # TYPE gh gaugehistogram gh_bucket{le="1.0"} 4.0 gh_bucket{le="+Inf"} 5.0 gh_gcount 5.0 gh_gsum 7.0 # EOF """, generate_latest(self.registry)) def test_gaugehistogram_negative_buckets(self): self.custom_collector( GaugeHistogramMetricFamily('gh', 'help', buckets=[('-1.0', 4), ('+Inf', (5))], gsum_value=-7)) self.assertEqual(b"""# HELP gh help # TYPE gh gaugehistogram gh_bucket{le="-1.0"} 4.0 gh_bucket{le="+Inf"} 5.0 gh_gcount 5.0 gh_gsum -7.0 # EOF """, generate_latest(self.registry)) def test_info(self): i = Info('ii', 'A info', ['a', 'b'], registry=self.registry) i.labels('c', 'd').info({'foo': 'bar'}) self.assertEqual(b"""# HELP ii A info # TYPE ii info ii_info{a="c",b="d",foo="bar"} 1.0 # EOF """, generate_latest(self.registry)) def test_enum(self): i = Enum('ee', 'An enum', ['a', 'b'], registry=self.registry, states=['foo', 'bar']) i.labels('c', 'd').state('bar') self.assertEqual(b"""# HELP ee An enum # TYPE ee stateset ee{a="c",b="d",ee="foo"} 0.0 ee{a="c",b="d",ee="bar"} 1.0 # EOF """, generate_latest(self.registry)) def test_unicode(self): c = Counter('cc', '\u4500', ['l'], registry=self.registry) c.labels('\u4500').inc() self.assertEqual(b"""# HELP cc \xe4\x94\x80 # TYPE cc counter cc_total{l="\xe4\x94\x80"} 1.0 cc_created{l="\xe4\x94\x80"} 123.456 # EOF """, generate_latest(self.registry)) def test_escaping(self): c = Counter('cc', 'A\ncount\\er\"', ['a'], registry=self.registry) c.labels('\\x\n"').inc(1) self.assertEqual(b"""# HELP cc A\\ncount\\\\er\\" # TYPE cc counter cc_total{a="\\\\x\\n\\""} 1.0 cc_created{a="\\\\x\\n\\""} 123.456 # EOF """, generate_latest(self.registry)) def test_nonnumber(self): class MyNumber: def __repr__(self): return "MyNumber(123)" def __float__(self): return 123.0 class MyCollector: def collect(self): metric = Metric("nonnumber", "Non number", 'untyped') metric.add_sample("nonnumber", {}, MyNumber()) yield metric self.registry.register(MyCollector()) self.assertEqual(b'# HELP nonnumber Non number\n# TYPE nonnumber unknown\nnonnumber 123.0\n# EOF\n', generate_latest(self.registry)) def test_timestamp(self): class MyCollector: def collect(self): metric = Metric("ts", "help", 'unknown') metric.add_sample("ts", {"foo": "a"}, 0, 123.456) metric.add_sample("ts", {"foo": "b"}, 0, -123.456) metric.add_sample("ts", {"foo": "c"}, 0, 123) metric.add_sample("ts", {"foo": "d"}, 0, Timestamp(123, 456000000)) metric.add_sample("ts", {"foo": "e"}, 0, Timestamp(123, 456000)) metric.add_sample("ts", {"foo": "f"}, 0, Timestamp(123, 456)) yield metric self.registry.register(MyCollector()) self.assertEqual(b"""# HELP ts help # TYPE ts unknown ts{foo="a"} 0.0 123.456 ts{foo="b"} 0.0 -123.456 ts{foo="c"} 0.0 123 ts{foo="d"} 0.0 123.456000000 ts{foo="e"} 0.0 123.000456000 ts{foo="f"} 0.0 123.000000456 # EOF """, 
                         generate_latest(self.registry))


if __name__ == '__main__':
    unittest.main()
python-prometheus-client-0.19.0+ds1/tests/openmetrics/test_parser.py000066400000000000000000001036601454301344400256610ustar00rootroot00000000000000import math
import unittest

from prometheus_client.core import (
    CollectorRegistry, CounterMetricFamily, Exemplar,
    GaugeHistogramMetricFamily, GaugeMetricFamily, HistogramMetricFamily,
    InfoMetricFamily, Metric, Sample, StateSetMetricFamily,
    SummaryMetricFamily, Timestamp,
)
from prometheus_client.openmetrics.exposition import generate_latest
from prometheus_client.openmetrics.parser import text_string_to_metric_families


class TestParse(unittest.TestCase):
    def test_simple_counter(self):
        families = text_string_to_metric_families("""# TYPE a counter
# HELP a help
a_total 1
# EOF
""")
        self.assertEqual([CounterMetricFamily("a", "help", value=1)], list(families))

    def test_uint64_counter(self):
        families = text_string_to_metric_families("""# TYPE a counter
# HELP a help
a_total 9223372036854775808
# EOF
""")
        self.assertEqual([CounterMetricFamily("a", "help", value=9223372036854775808)], list(families))

    def test_simple_gauge(self):
        families = text_string_to_metric_families("""# TYPE a gauge
# HELP a help
a 1
# EOF
""")
        self.assertEqual([GaugeMetricFamily("a", "help", value=1)], list(families))

    def test_float_gauge(self):
        families = text_string_to_metric_families("""# TYPE a gauge
# HELP a help
a 1.2
# EOF
""")
        self.assertEqual([GaugeMetricFamily("a", "help", value=1.2)], list(families))

    def test_leading_zeros_simple_gauge(self):
        families = text_string_to_metric_families("""# TYPE a gauge
# HELP a help
a 0000000000000000000000000000000000000000001
# EOF
""")
        self.assertEqual([GaugeMetricFamily("a", "help", value=1)], list(families))

    def test_leading_zeros_float_gauge(self):
        families = text_string_to_metric_families("""# TYPE a gauge
# HELP a help
a 0000000000000000000000000000000000000000001.2e-1
# EOF
""")
        self.assertEqual([GaugeMetricFamily("a", "help", value=.12)], list(families))

    def test_nan_gauge(self):
        families = text_string_to_metric_families("""# TYPE a gauge
# HELP a help
a NaN
# EOF
""")
        self.assertTrue(math.isnan(list(families)[0].samples[0].value))

    def test_unit_gauge(self):
        families = text_string_to_metric_families("""# TYPE a_seconds gauge
# UNIT a_seconds seconds
# HELP a_seconds help
a_seconds 1
# EOF
""")
        self.assertEqual([GaugeMetricFamily("a_seconds", "help", value=1, unit='seconds')], list(families))

    def test_simple_summary(self):
        families = text_string_to_metric_families("""# TYPE a summary
# HELP a help
a_count 1
a_sum 2
# EOF
""")
        summary = SummaryMetricFamily("a", "help", count_value=1, sum_value=2)
        self.assertEqual([summary], list(families))

    def test_summary_quantiles(self):
        families = text_string_to_metric_families("""# TYPE a summary
# HELP a help
a_count 1
a_sum 2
a{quantile="0.5"} 0.7
a{quantile="1"} 0.8
# EOF
""")
        # The Python client doesn't support quantiles, but we
        # still need to be able to parse them.
        metric_family = SummaryMetricFamily("a", "help", count_value=1, sum_value=2)
        metric_family.add_sample("a", {"quantile": "0.5"}, 0.7)
        metric_family.add_sample("a", {"quantile": "1"}, 0.8)
        self.assertEqual([metric_family], list(families))

    def test_simple_histogram(self):
        families = text_string_to_metric_families("""# TYPE a histogram
# HELP a help
a_bucket{le="1.0"} 0
a_bucket{le="+Inf"} 3
a_count 3
a_sum 2
# EOF
""")
        self.assertEqual([HistogramMetricFamily("a", "help", sum_value=2, buckets=[("1.0", 0.0), ("+Inf", 3.0)])], list(families))

    def test_simple_histogram_float_values(self):
        families = text_string_to_metric_families("""# TYPE a histogram
# HELP a help
a_bucket{le="1.0"} 0.0
a_bucket{le="+Inf"} 3.0
a_count 3.0
a_sum 2.0
# EOF
""")
        self.assertEqual([HistogramMetricFamily("a", "help", sum_value=2, buckets=[("1.0", 0.0), ("+Inf", 3.0)])], list(families))

    def test_histogram_noncanonical(self):
        families = text_string_to_metric_families("""# TYPE a histogram
# HELP a help
a_bucket{le="0"} 0
a_bucket{le="0.00000000001"} 0
a_bucket{le="0.0000000001"} 0
a_bucket{le="1e-04"} 0
a_bucket{le="1.1e-4"} 0
a_bucket{le="1.1e-3"} 0
a_bucket{le="1.1e-2"} 0
a_bucket{le="1"} 0
a_bucket{le="1e+05"} 0
a_bucket{le="10000000000"} 0
a_bucket{le="100000000000.0"} 0
a_bucket{le="+Inf"} 3
a_count 3
a_sum 2
# EOF
""")
        list(families)

    def test_negative_bucket_histogram(self):
        families = text_string_to_metric_families("""# TYPE a histogram
# HELP a help
a_bucket{le="-1.0"} 0
a_bucket{le="1.0"} 1
a_bucket{le="+Inf"} 3
# EOF
""")
        self.assertEqual([HistogramMetricFamily("a", "help", buckets=[("-1.0", 0.0), ("1.0", 1.0), ("+Inf", 3.0)])], list(families))

    def test_histogram_exemplars(self):
        families = text_string_to_metric_families("""# TYPE a histogram
# HELP a help
a_bucket{le="1.0"} 0 # {a="b"} 0.5
a_bucket{le="2.0"} 2 # {a="c"} 0.5
a_bucket{le="+Inf"} 3 # {a="2345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678"} 4 123
# EOF
""")
        hfm = HistogramMetricFamily("a", "help")
        hfm.add_sample("a_bucket", {"le": "1.0"}, 0.0, None, Exemplar({"a": "b"}, 0.5))
        hfm.add_sample("a_bucket", {"le": "2.0"}, 2.0, None, Exemplar({"a": "c"}, 0.5)),
        hfm.add_sample("a_bucket", {"le": "+Inf"}, 3.0, None, Exemplar({"a": "2345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678"}, 4, Timestamp(123, 0)))
        self.assertEqual([hfm], list(families))

    def test_simple_gaugehistogram(self):
        families = text_string_to_metric_families("""# TYPE a gaugehistogram
# HELP a help
a_bucket{le="1.0"} 0
a_bucket{le="+Inf"} 3
a_gcount 3
a_gsum 2
# EOF
""")
        self.assertEqual([GaugeHistogramMetricFamily("a", "help", gsum_value=2, buckets=[("1.0", 0.0), ("+Inf", 3.0)])], list(families))

    def test_negative_bucket_gaugehistogram(self):
        families = text_string_to_metric_families("""# TYPE a gaugehistogram
# HELP a help
a_bucket{le="-1.0"} 1
a_bucket{le="1.0"} 2
a_bucket{le="+Inf"} 3
a_gcount 3
a_gsum -5
# EOF
""")
        self.assertEqual([GaugeHistogramMetricFamily("a", "help", gsum_value=-5, buckets=[("-1.0", 1.0), ("1.0", 2.0), ("+Inf", 3.0)])], list(families))

    def test_gaugehistogram_exemplars(self):
        families = text_string_to_metric_families("""# TYPE a gaugehistogram
# HELP a help
a_bucket{le="1.0"} 0 123 # {a="b"} 0.5
a_bucket{le="2.0"} 2 123 # {a="c"} 0.5
a_bucket{le="+Inf"} 3 123 # {a="d"} 4 123
# EOF
""")
        hfm = GaugeHistogramMetricFamily("a", "help")
        hfm.add_sample("a_bucket", {"le": "1.0"}, 0.0, Timestamp(123, 0), Exemplar({"a": "b"}, 0.5))
        hfm.add_sample("a_bucket", {"le": "2.0"}, 2.0, Timestamp(123, 0), Exemplar({"a": "c"}, 0.5)),
        hfm.add_sample("a_bucket", {"le": "+Inf"}, 3.0, Timestamp(123, 0), Exemplar({"a": "d"}, 4, Timestamp(123, 0)))
        self.assertEqual([hfm], list(families))

    def test_counter_exemplars(self):
        families = text_string_to_metric_families("""# TYPE a counter
# HELP a help
a_total 0 123 # {a="b"} 0.5
# EOF
""")
        cfm = CounterMetricFamily("a", "help")
        cfm.add_sample("a_total", {}, 0.0, Timestamp(123, 0), Exemplar({"a": "b"}, 0.5))
        self.assertEqual([cfm], list(families))

    def test_counter_exemplars_empty_brackets(self):
        families = text_string_to_metric_families("""# TYPE a counter
# HELP a help
a_total{} 0 123 # {a="b"} 0.5
# EOF
""")
        cfm = CounterMetricFamily("a", "help")
        cfm.add_sample("a_total", {}, 0.0, Timestamp(123, 0), Exemplar({"a": "b"}, 0.5))
        self.assertEqual([cfm], list(families))

    def test_simple_info(self):
        families = text_string_to_metric_families("""# TYPE a info
# HELP a help
a_info{foo="bar"} 1
# EOF
""")
        self.assertEqual([InfoMetricFamily("a", "help", {'foo': 'bar'})], list(families))

    def test_info_timestamps(self):
        families = text_string_to_metric_families("""# TYPE a info
# HELP a help
a_info{a="1",foo="bar"} 1 1
a_info{a="2",foo="bar"} 1 0
# EOF
""")
        imf = InfoMetricFamily("a", "help")
        imf.add_sample("a_info", {"a": "1", "foo": "bar"}, 1, Timestamp(1, 0))
        imf.add_sample("a_info", {"a": "2", "foo": "bar"}, 1, Timestamp(0, 0))
        self.assertEqual([imf], list(families))

    def test_simple_stateset(self):
        families = text_string_to_metric_families("""# TYPE a stateset
# HELP a help
a{a="bar"} 0
a{a="foo"} 1.0
# EOF
""")
        self.assertEqual([StateSetMetricFamily("a", "help", {'foo': True, 'bar': False})], list(families))

    def test_duplicate_timestamps(self):
        families = text_string_to_metric_families("""# TYPE a gauge
# HELP a help
a{a="1",foo="bar"} 1 0.0000000000
a{a="1",foo="bar"} 2 0.0000000001
a{a="1",foo="bar"} 3 0.0000000010
a{a="2",foo="bar"} 4 0.0000000000
a{a="2",foo="bar"} 5 0.0000000001
# EOF
""")
        imf = GaugeMetricFamily("a", "help")
        imf.add_sample("a", {"a": "1", "foo": "bar"}, 1, Timestamp(0, 0))
        imf.add_sample("a", {"a": "1", "foo": "bar"}, 3, Timestamp(0, 1))
        imf.add_sample("a", {"a": "2", "foo": "bar"}, 4, Timestamp(0, 0))
        self.assertEqual([imf], list(families))

    def test_no_metadata(self):
        families = text_string_to_metric_families("""a 1
# EOF
""")
        metric_family = Metric("a", "", "untyped")
        metric_family.add_sample("a", {}, 1)
        self.assertEqual([metric_family], list(families))

    def test_empty_metadata(self):
        families = text_string_to_metric_families("""# HELP a 
# UNIT a 
# EOF
""")
        metric_family = Metric("a", "", "untyped")
        self.assertEqual([metric_family], list(families))

    def test_untyped(self):
        # https://github.com/prometheus/client_python/issues/79
        families = text_string_to_metric_families("""# HELP redis_connected_clients Redis connected clients
# TYPE redis_connected_clients unknown
redis_connected_clients{instance="rough-snowflake-web",port="6380"} 10.0
redis_connected_clients{instance="rough-snowflake-web",port="6381"} 12.0
# EOF
""")
        m = Metric("redis_connected_clients", "Redis connected clients", "untyped")
        m.samples = [
            Sample("redis_connected_clients", {"instance": "rough-snowflake-web", "port": "6380"}, 10),
            Sample("redis_connected_clients", {"instance": "rough-snowflake-web", "port": "6381"}, 12),
        ]
        self.assertEqual([m], list(families))

    def test_type_help_switched(self):
        families = text_string_to_metric_families("""# HELP a help
# TYPE a counter
a_total 1
# EOF
""")
        self.assertEqual([CounterMetricFamily("a", "help", value=1)], list(families))

    def test_labels_with_curly_braces(self):
        families = text_string_to_metric_families("""# TYPE a counter
# HELP a help
a_total{foo="bar",bar="b{a}z"} 1
# EOF
""")
        metric_family = CounterMetricFamily("a", "help", labels=["foo", "bar"])
        metric_family.add_metric(["bar", "b{a}z"], 1)
        self.assertEqual([metric_family], list(families))

    def test_empty_help(self):
        families = text_string_to_metric_families("""# TYPE a counter
# HELP a 
a_total 1
# EOF
""")
        self.assertEqual([CounterMetricFamily("a", "", value=1)], list(families))

    def test_labels_and_infinite(self):
        families = text_string_to_metric_families("""# TYPE a gauge
# HELP a help
a{foo="bar"} +Inf
a{foo="baz"} -Inf
# EOF
""")
        metric_family = GaugeMetricFamily("a", "help", labels=["foo"])
        metric_family.add_metric(["bar"], float('inf'))
        metric_family.add_metric(["baz"], float('-inf'))
        self.assertEqual([metric_family], list(families))

    def test_empty_brackets(self):
        families = text_string_to_metric_families("""# TYPE a counter
# HELP a help
a_total{} 1
# EOF
""")
        self.assertEqual([CounterMetricFamily("a", "help", value=1)], list(families))

    def test_nan(self):
        families = text_string_to_metric_families("""a NaN
# EOF
""")
        self.assertTrue(math.isnan(list(families)[0].samples[0][2]))

    def test_no_newline_after_eof(self):
        families = text_string_to_metric_families("""# TYPE a gauge
# HELP a help
a 1
# EOF""")
        self.assertEqual([GaugeMetricFamily("a", "help", value=1)], list(families))

    def test_empty_label(self):
        families = text_string_to_metric_families("""# TYPE a counter
# HELP a help
a_total{foo="bar"} 1
a_total{foo=""} 2
# EOF
""")
        metric_family = CounterMetricFamily("a", "help", labels=["foo"])
        metric_family.add_metric(["bar"], 1)
        metric_family.add_metric([""], 2)
        self.assertEqual([metric_family], list(families))

    def test_label_escaping(self):
        for escaped_val, unescaped_val in [
                ('foo', 'foo'),
                ('\\foo', '\\foo'),
                ('\\\\foo', '\\foo'),
                ('foo\\\\', 'foo\\'),
                ('\\\\', '\\'),
                ('\\n', '\n'),
                ('\\\\n', '\\n'),
                ('\\\\\\n', '\\\n'),
                ('\\"', '"'),
                ('\\\\\\"', '\\"')]:
            families = list(text_string_to_metric_families("""# TYPE a counter
# HELP a help
a_total{foo="%s",bar="baz"} 1
# EOF
""" % escaped_val))
            metric_family = CounterMetricFamily(
                "a", "help", labels=["foo", "bar"])
            metric_family.add_metric([unescaped_val, "baz"], 1)
            self.assertEqual([metric_family], list(families))

    def test_help_escaping(self):
        for escaped_val, unescaped_val in [
                ('foo', 'foo'),
                ('\\foo', '\\foo'),
                ('\\\\foo', '\\foo'),
                ('foo\\', 'foo\\'),
                ('foo\\\\', 'foo\\'),
                ('\\n', '\n'),
                ('\\\\n', '\\n'),
                ('\\\\\\n', '\\\n'),
                ('\\"', '"'),
                ('\\\\"', '\\"'),
                ('\\\\\\"', '\\"')]:
            families = list(text_string_to_metric_families("""# TYPE a counter
# HELP a %s
a_total{foo="bar"} 1
# EOF
""" % escaped_val))
            metric_family = CounterMetricFamily("a", unescaped_val, labels=["foo"])
            metric_family.add_metric(["bar"], 1)
            self.assertEqual([metric_family], list(families))

    def test_escaping(self):
        families = text_string_to_metric_families("""# TYPE a counter
# HELP a he\\n\\\\l\\tp
a_total{foo="b\\"a\\nr"} 1
a_total{foo="b\\\\a\\z"} 2
a_total{foo="b\\"a\\nr # "} 3
a_total{foo="b\\\\a\\z # "} 4
# EOF
""")
        metric_family = CounterMetricFamily("a", "he\n\\l\\tp", labels=["foo"])
        metric_family.add_metric(["b\"a\nr"], 1)
        metric_family.add_metric(["b\\a\\z"], 2)
        metric_family.add_metric(["b\"a\nr # "], 3)
        metric_family.add_metric(["b\\a\\z # "], 4)
        self.assertEqual([metric_family], list(families))

    def test_null_byte(self):
        families = text_string_to_metric_families("""# TYPE a counter
# HELP a he\0lp
# EOF
""")
        metric_family = CounterMetricFamily("a", "he\0lp")
        self.assertEqual([metric_family], list(families))

    def test_timestamps(self):
        families = text_string_to_metric_families("""# TYPE a counter
# HELP a help
a_total{foo="1"} 1 000
a_total{foo="2"} 1 0.0
a_total{foo="3"} 1 1.1
a_total{foo="4"} 1 12345678901234567890.1234567890
a_total{foo="5"} 1 1.5e3
# TYPE b counter
# HELP b help
b_total 2 1234567890
# EOF
""")
        a = CounterMetricFamily("a", "help", labels=["foo"])
        a.add_metric(["1"], 1, timestamp=Timestamp(0, 0))
        a.add_metric(["2"], 1, timestamp=Timestamp(0, 0))
        a.add_metric(["3"], 1, timestamp=Timestamp(1, 100000000))
        a.add_metric(["4"], 1, timestamp=Timestamp(12345678901234567890, 123456789))
        a.add_metric(["5"], 1, timestamp=1500.0)
        b = CounterMetricFamily("b", "help")
        b.add_metric([], 2, timestamp=Timestamp(1234567890, 0))
        self.assertEqual([a, b], list(families))

    def test_hash_in_label_value(self):
        families = text_string_to_metric_families("""# TYPE a counter
# HELP a help
a_total{foo="foo # bar"} 1
a_total{foo="} foo # bar # "} 1
# EOF
""")
        a = CounterMetricFamily("a", "help", labels=["foo"])
        a.add_metric(["foo # bar"], 1)
        a.add_metric(["} foo # bar # "], 1)
        self.assertEqual([a], list(families))

    def test_exemplars_with_hash_in_label_values(self):
        families = text_string_to_metric_families("""# TYPE a histogram
# HELP a help
a_bucket{le="1.0",foo="bar # "} 0 # {a="b",foo="bar # bar"} 0.5
a_bucket{le="2.0",foo="bar # "} 2 # {a="c",foo="bar # bar"} 0.5
a_bucket{le="+Inf",foo="bar # "} 3 # {a="d",foo="bar # bar"} 4
# EOF
""")
        hfm = HistogramMetricFamily("a", "help")
        hfm.add_sample("a_bucket", {"le": "1.0", "foo": "bar # "}, 0.0, None, Exemplar({"a": "b", "foo": "bar # bar"}, 0.5))
        hfm.add_sample("a_bucket", {"le": "2.0", "foo": "bar # "}, 2.0, None, Exemplar({"a": "c", "foo": "bar # bar"}, 0.5))
        hfm.add_sample("a_bucket", {"le": "+Inf", "foo": "bar # "}, 3.0, None, Exemplar({"a": "d", "foo": "bar # bar"}, 4))
        self.assertEqual([hfm], list(families))

    def test_fallback_to_state_machine_label_parsing(self):
        from unittest.mock import patch

        from prometheus_client.openmetrics.parser import _parse_sample

        parse_sample_function = "prometheus_client.openmetrics.parser._parse_sample"
        parse_labels_function = "prometheus_client.openmetrics.parser._parse_labels"
        parse_remaining_function = "prometheus_client.openmetrics.parser._parse_remaining_text"
        state_machine_function = "prometheus_client.openmetrics.parser._parse_labels_with_state_machine"

        parse_sample_return_value = Sample("a_total", {"foo": "foo # bar"}, 1)
        with patch(parse_sample_function, return_value=parse_sample_return_value) as mock:
            families = text_string_to_metric_families("""# TYPE a counter
# HELP a help
a_total{foo="foo # bar"} 1
# EOF
""")
            a = CounterMetricFamily("a", "help", labels=["foo"])
            a.add_metric(["foo # bar"], 1)
            self.assertEqual([a], list(families))
            mock.assert_called_once_with('a_total{foo="foo # bar"} 1')

        # First fallback case
        state_machine_return_values = [{"foo": "foo # bar"}, len('foo="foo # bar"}')]
        parse_remaining_values = [1, None, None]
        with patch(parse_labels_function) as mock1:
            with patch(state_machine_function, return_value=state_machine_return_values) as mock2:
                with patch(parse_remaining_function, return_value=parse_remaining_values) as mock3:
                    sample = _parse_sample('a_total{foo="foo # bar"} 1')
                    s = Sample("a_total", {"foo": "foo # bar"}, 1)
                    self.assertEqual(s, sample)
                    mock1.assert_not_called()
                    mock2.assert_called_once_with('foo="foo # bar"} 1')
                    mock3.assert_called_once_with('1')

        # Second fallback case
        state_machine_return_values = [{"le": "1.0"}, len('le="1.0"}')]
        parse_remaining_values = [0.0, Timestamp(123, 0), Exemplar({"a": "b"}, 0.5)]
        with patch(parse_labels_function) as mock1:
            with patch(state_machine_function, return_value=state_machine_return_values) as mock2:
                with patch(parse_remaining_function, return_value=parse_remaining_values) as mock3:
                    sample = _parse_sample('a_bucket{le="1.0"} 0 123 # {a="b"} 0.5')
                    s = Sample("a_bucket", {"le": "1.0"}, 0.0, Timestamp(123, 0), Exemplar({"a": "b"}, 0.5))
                    self.assertEqual(s, sample)
                    mock1.assert_not_called()
                    mock2.assert_called_once_with('le="1.0"} 0 123 # {a="b"} 0.5')
                    mock3.assert_called_once_with('0 123 # {a="b"} 0.5')

        # No need to fallback case
        parse_labels_return_values = {"foo": "foo#bar"}
        parse_remaining_values = [1, None, None]
        with patch(parse_labels_function, return_value=parse_labels_return_values) as mock1:
            with patch(state_machine_function) as mock2:
                with patch(parse_remaining_function, return_value=parse_remaining_values) as mock3:
                    sample = _parse_sample('a_total{foo="foo#bar"} 1')
                    s = Sample("a_total", {"foo": "foo#bar"}, 1)
                    self.assertEqual(s, sample)
                    mock1.assert_called_once_with('foo="foo#bar"')
                    mock2.assert_not_called()
                    mock3.assert_called_once_with('1')

    def test_roundtrip(self):
        text = """# HELP go_gc_duration_seconds A summary of the GC invocation durations.
# TYPE go_gc_duration_seconds summary
go_gc_duration_seconds{quantile="0.0"} 0.013300656000000001
go_gc_duration_seconds{quantile="0.25"} 0.013638736
go_gc_duration_seconds{quantile="0.5"} 0.013759906
go_gc_duration_seconds{quantile="0.75"} 0.013962066
go_gc_duration_seconds{quantile="1.0"} 0.021383540000000003
go_gc_duration_seconds_sum 56.12904785
go_gc_duration_seconds_count 7476.0
# HELP go_goroutines Number of goroutines that currently exist.
# TYPE go_goroutines gauge
go_goroutines 166.0
# HELP prometheus_local_storage_indexing_batch_duration_milliseconds Quantiles for batch indexing duration in milliseconds.
# TYPE prometheus_local_storage_indexing_batch_duration_milliseconds summary
prometheus_local_storage_indexing_batch_duration_milliseconds{quantile="0.5"} NaN
prometheus_local_storage_indexing_batch_duration_milliseconds{quantile="0.9"} NaN
prometheus_local_storage_indexing_batch_duration_milliseconds{quantile="0.99"} NaN
prometheus_local_storage_indexing_batch_duration_milliseconds_sum 871.5665949999999
prometheus_local_storage_indexing_batch_duration_milliseconds_count 229.0
# HELP process_cpu_seconds Total user and system CPU time spent in seconds.
# TYPE process_cpu_seconds counter
process_cpu_seconds_total 29323.4
# HELP process_virtual_memory_bytes Virtual memory size in bytes.
# TYPE process_virtual_memory_bytes gauge
process_virtual_memory_bytes 2.478268416e+09
# HELP prometheus_build_info A metric with a constant '1' value labeled by version, revision, and branch from which Prometheus was built.
# TYPE prometheus_build_info gauge
prometheus_build_info{branch="HEAD",revision="ef176e5",version="0.16.0rc1"} 1.0
# HELP prometheus_local_storage_chunk_ops The total number of chunk operations by their type.
# TYPE prometheus_local_storage_chunk_ops counter prometheus_local_storage_chunk_ops_total{type="clone"} 28.0 prometheus_local_storage_chunk_ops_total{type="create"} 997844.0 prometheus_local_storage_chunk_ops_total{type="drop"} 1.345758e+06 prometheus_local_storage_chunk_ops_total{type="load"} 1641.0 prometheus_local_storage_chunk_ops_total{type="persist"} 981408.0 prometheus_local_storage_chunk_ops_total{type="pin"} 32662.0 prometheus_local_storage_chunk_ops_total{type="transcode"} 980180.0 prometheus_local_storage_chunk_ops_total{type="unpin"} 32662.0 # HELP foo histogram Testing histogram buckets # TYPE foo histogram foo_bucket{le="0.0"} 0.0 foo_bucket{le="1e-05"} 0.0 foo_bucket{le="0.0001"} 0.0 foo_bucket{le="0.1"} 8.0 foo_bucket{le="1.0"} 10.0 foo_bucket{le="10.0"} 17.0 foo_bucket{le="100000.0"} 17.0 foo_bucket{le="1e+06"} 17.0 foo_bucket{le="1.55555555555552e+06"} 17.0 foo_bucket{le="1e+23"} 17.0 foo_bucket{le="+Inf"} 17.0 foo_count 17.0 foo_sum 324789.3 foo_created 1.520430000123e+09 # HELP bar histogram Testing with labels # TYPE bar histogram bar_bucket{a="b",le="+Inf"} 0.0 bar_bucket{a="c",le="+Inf"} 0.0 # EOF """ families = list(text_string_to_metric_families(text)) class TextCollector: def collect(self): return families registry = CollectorRegistry() registry.register(TextCollector()) self.assertEqual(text.encode('utf-8'), generate_latest(registry)) def test_invalid_input(self): for case in [ # No EOF. (''), # Blank line ('a 1\n\n# EOF\n'), # Text after EOF. ('a 1\n# EOF\nblah'), ('a 1\n# EOFblah'), # Missing or wrong quotes on label value. ('a{a=1} 1\n# EOF\n'), ('a{a="1} 1\n# EOF\n'), ('a{a=\'1\'} 1\n# EOF\n'), # Missing equal or label value. ('a{a} 1\n# EOF\n'), ('a{a"value"} 1\n# EOF\n'), ('a{a""} 1\n# EOF\n'), ('a{a=} 1\n# EOF\n'), ('a{a="} 1\n# EOF\n'), # Missing delimiters. ('a{a="1"}1\n# EOF\n'), # Missing or extra commas. ('a{a="1"b="2"} 1\n# EOF\n'), ('a{a="1",,b="2"} 1\n# EOF\n'), ('a{a="1",b="2",} 1\n# EOF\n'), # Invalid labels. ('a{1="1"} 1\n# EOF\n'), ('a{1="1"}1\n# EOF\n'), ('a{a="1",a="1"} 1\n# EOF\n'), ('a{a="1"b} 1\n# EOF\n'), ('a{1=" # "} 1\n# EOF\n'), ('a{a=" # ",a=" # "} 1\n# EOF\n'), ('a{a=" # "}1\n# EOF\n'), ('a{a=" # ",b=}1\n# EOF\n'), ('a{a=" # "b}1\n# EOF\n'), # Missing value. ('a\n# EOF\n'), ('a \n# EOF\n'), # Bad HELP. ('# HELP\n# EOF\n'), ('# HELP \n# EOF\n'), ('# HELP a\n# EOF\n'), ('# HELP a\t\n# EOF\n'), (' # HELP a meh\n# EOF\n'), # Bad TYPE. ('# TYPE\n# EOF\n'), ('# TYPE \n# EOF\n'), ('# TYPE a\n# EOF\n'), ('# TYPE a\t\n# EOF\n'), ('# TYPE a meh\n# EOF\n'), ('# TYPE a meh \n# EOF\n'), ('# TYPE a gauge \n# EOF\n'), ('# TYPE a untyped\n# EOF\n'), # Bad UNIT. ('# UNIT\n# EOF\n'), ('# UNIT \n# EOF\n'), ('# UNIT a\n# EOF\n'), ('# UNIT a\t\n# EOF\n'), ('# UNIT a seconds\n# EOF\n'), ('# UNIT a_seconds seconds \n# EOF\n'), ('# TYPE x_u info\n# UNIT x_u u\n# EOF\n'), ('# TYPE x_u stateset\n# UNIT x_u u\n# EOF\n'), # Metadata in wrong place. ('# HELP a x\na 1\n# TYPE a gauge\n# EOF\n'), ('# TYPE a gauge\na 1\n# HELP a gauge\n# EOF\n'), ('# TYPE a_s gauge\na_s 1\n# UNIT a_s s\n# EOF\n'), # Repeated metadata. ('# HELP a \n# HELP a \n# EOF\n'), ('# HELP a x\n# HELP a x\n# EOF\n'), ('# TYPE a untyped\n# TYPE a untyped\n# EOF\n'), ('# UNIT a_s s\n# UNIT a_s s\n# EOF\n'), # Bad metadata. ('# FOO a x\n# EOF\n'), # Bad metric names. ('0a 1\n# EOF\n'), ('a.b 1\n# EOF\n'), ('a-b 1\n# EOF\n'), # Bad value. 
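            # Added note: the cases below exercise non-decimal float syntaxes
            # (hex, binary, octal, underscore separators) and stray whitespace
            # around the value, all of which the parser must reject.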
            ('a a\n# EOF\n'),
            ('a  1\n# EOF\n'),
            ('a 1\t\n# EOF\n'),
            ('a 1 \n# EOF\n'),
            ('a 1_2\n# EOF\n'),
            ('a 0x1p-3\n# EOF\n'),
            ('a 0x1P-3\n# EOF\n'),
            ('a 0b1\n# EOF\n'),
            ('a 0B1\n# EOF\n'),
            ('a 0x1\n# EOF\n'),
            ('a 0X1\n# EOF\n'),
            ('a 0o1\n# EOF\n'),
            ('a 0O1\n# EOF\n'),
            # Bad timestamp.
            ('a 1 z\n# EOF\n'),
            ('a 1 1z\n# EOF\n'),
            ('a 1 1_2\n# EOF\n'),
            ('a 1 1.1.1\n# EOF\n'),
            ('a 1 NaN\n# EOF\n'),
            ('a 1 Inf\n# EOF\n'),
            ('a 1 +Inf\n# EOF\n'),
            ('a 1 -Inf\n# EOF\n'),
            ('a 1 0x1p-3\n# EOF\n'),
            # Bad exemplars.
            ('# TYPE a histogram\na_bucket{le="+Inf"} 1 #\n# EOF\n'),
            ('# TYPE a histogram\na_bucket{le="+Inf"} 1# {} 1\n# EOF\n'),
            ('# TYPE a histogram\na_bucket{le="+Inf"} 1 #{} 1\n# EOF\n'),
            ('# TYPE a histogram\na_bucket{le="+Inf"} 1 # {}1\n# EOF\n'),
            ('# TYPE a histogram\na_bucket{le="+Inf"} 1 # {} 1 \n# EOF\n'),
            ('# TYPE a histogram\na_bucket{le="+Inf"} 1 # {} 1 1 \n# EOF\n'),
            ('# TYPE a histogram\na_bucket{le="+Inf"} 1 # '
             '{a="23456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789"} 1 1\n# EOF\n'),
            ('# TYPE a histogram\na_bucket{le="+Inf"} 1 # {} 0x1p-3\n# EOF\n'),
            ('# TYPE a histogram\na_bucket{le="+Inf"} 1 # {} 1 0x1p-3\n# EOF\n'),
            ('# TYPE a counter\na_total 1 1 # {id="a"} \n# EOF\n'),
            ('# TYPE a counter\na_total 1 1 # id="a"} 1\n# EOF\n'),
            ('# TYPE a counter\na_total 1 1 #id=" # "} 1\n# EOF\n'),
            ('# TYPE a counter\na_total 1 1 id=" # "} 1\n# EOF\n'),
            # Exemplars on unallowed samples.
            ('# TYPE a histogram\na_sum 1 # {a="b"} 0.5\n# EOF\n'),
            ('# TYPE a gaugehistogram\na_sum 1 # {a="b"} 0.5\n# EOF\n'),
            ('# TYPE a_bucket gauge\na_bucket 1 # {a="b"} 0.5\n# EOF\n'),
            ('# TYPE a counter\na_created 1 # {a="b"} 0.5\n# EOF\n'),
            # Exemplars on unallowed metric types.
            ('# TYPE a gauge\na 1 # {a="b"} 1\n# EOF\n'),
            ('# TYPE a info\na_info 1 # {a="b"} 1\n# EOF\n'),
            ('# TYPE a stateset\na{a="b"} 1 # {c="d"} 1\n# EOF\n'),
            # Bad stateset/info values.
            ('# TYPE a stateset\na 2\n# EOF\n'),
            ('# TYPE a info\na 2\n# EOF\n'),
            ('# TYPE a stateset\na 2.0\n# EOF\n'),
            ('# TYPE a info\na 2.0\n# EOF\n'),
            # Missing or invalid labels for a type.
            ('# TYPE a summary\na 0\n# EOF\n'),
            ('# TYPE a summary\na{quantile="-1"} 0\n# EOF\n'),
            ('# TYPE a summary\na{quantile="foo"} 0\n# EOF\n'),
            ('# TYPE a summary\na{quantile="1.01"} 0\n# EOF\n'),
            ('# TYPE a summary\na{quantile="NaN"} 0\n# EOF\n'),
            ('# TYPE a histogram\na_bucket 0\n# EOF\n'),
            ('# TYPE a gaugehistogram\na_bucket 0\n# EOF\n'),
            ('# TYPE a stateset\na 0\n# EOF\n'),
            # Bad counter values.
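            # Added note: counter totals (and histogram/summary sums, counts, and
            # buckets) must be non-negative and non-NaN, and bucket series must be
            # cumulative; the cases below exercise each violation.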
('# TYPE a counter\na_total NaN\n# EOF\n'), ('# TYPE a counter\na_total -1\n# EOF\n'), ('# TYPE a histogram\na_sum NaN\n# EOF\n'), ('# TYPE a histogram\na_count NaN\n# EOF\n'), ('# TYPE a histogram\na_bucket{le="+Inf"} NaN\n# EOF\n'), ('# TYPE a histogram\na_sum -1\n# EOF\n'), ('# TYPE a histogram\na_count -1\n# EOF\n'), ('# TYPE a histogram\na_bucket{le="+Inf"} -1\n# EOF\n'), ('# TYPE a histogram\na_bucket{le="-1.0"} 1\na_bucket{le="+Inf"} 2\na_sum -1\n# EOF\n'), ('# TYPE a histogram\na_bucket{le="-1.0"} 1\na_bucket{le="+Inf"} 2\na_sum 1\n# EOF\n'), ('# TYPE a histogram\na_bucket{le="+Inf"} 0.5\n# EOF\n'), ('# TYPE a histogram\na_bucket{le="+Inf"} 0.5\na_count 0.5\na_sum 0\n# EOF\n'), ('# TYPE a gaugehistogram\na_bucket{le="+Inf"} NaN\n# EOF\n'), ('# TYPE a gaugehistogram\na_bucket{le="+Inf"} -1\na_gcount -1\n# EOF\n'), ('# TYPE a gaugehistogram\na_bucket{le="+Inf"} -1\n# EOF\n'), ('# TYPE a gaugehistogram\na_bucket{le="+Inf"} 1\na_gsum -1\n# EOF\n'), ('# TYPE a gaugehistogram\na_bucket{le="+Inf"} 1\na_gsum NaN\n# EOF\n'), ('# TYPE a gaugehistogram\na_bucket{le="+Inf"} 0.5\n# EOF\n'), ('# TYPE a gaugehistogram\na_bucket{le="+Inf"} 0.5\na_gsum 0.5\na_gcount 0\n# EOF\n'), ('# TYPE a summary\na_sum NaN\n# EOF\n'), ('# TYPE a summary\na_count NaN\n# EOF\n'), ('# TYPE a summary\na_sum -1\n# EOF\n'), ('# TYPE a summary\na_count -1\n# EOF\n'), ('# TYPE a summary\na_count 0.5\n# EOF\n'), ('# TYPE a summary\na{quantile="0.5"} -1\n# EOF\n'), # Bad info and stateset values. ('# TYPE a info\na_info{foo="bar"} 2\n# EOF\n'), ('# TYPE a stateset\na{a="bar"} 2\n# EOF\n'), # Bad histograms. ('# TYPE a histogram\na_sum 1\n# EOF\n'), ('# TYPE a histogram\na_bucket{le="+Inf"} 0\na_sum 0\n# EOF\n'), ('# TYPE a histogram\na_bucket{le="+Inf"} 0\na_count 0\n# EOF\n'), ('# TYPE a histogram\na_bucket{le="-1"} 0\na_bucket{le="+Inf"} 0\na_sum 0\na_count 0\n# EOF\n'), ('# TYPE a gaugehistogram\na_gsum 1\n# EOF\n'), ('# TYPE a gaugehistogram\na_bucket{le="+Inf"} 0\na_gsum 0\n# EOF\n'), ('# TYPE a gaugehistogram\na_bucket{le="+Inf"} 0\na_gcount 0\n# EOF\n'), ('# TYPE a gaugehistogram\na_bucket{le="+Inf"} 1\na_gsum -1\na_gcount 1\n# EOF\n'), ('# TYPE a histogram\na_count 1\na_bucket{le="+Inf"} 0\n# EOF\n'), ('# TYPE a histogram\na_bucket{le="+Inf"} 0\na_count 1\n# EOF\n'), ('# TYPE a histogram\na_bucket{le="+INF"} 0\n# EOF\n'), ('# TYPE a histogram\na_bucket{le="2"} 0\na_bucket{le="1"} 0\na_bucket{le="+Inf"} 0\n# EOF\n'), ('# TYPE a histogram\na_bucket{le="1"} 1\na_bucket{le="2"} 1\na_bucket{le="+Inf"} 0\n# EOF\n'), # Bad grouping or ordering. ('# TYPE a histogram\na_sum{a="1"} 0\na_sum{a="2"} 0\na_count{a="1"} 0\n# EOF\n'), ('# TYPE a histogram\na_bucket{a="1",le="1"} 0\na_bucket{a="2",le="+Inf""} ' '0\na_bucket{a="1",le="+Inf"} 0\n# EOF\n'), ('# TYPE a gaugehistogram\na_gsum{a="1"} 0\na_gsum{a="2"} 0\na_gcount{a="1"} 0\n# EOF\n'), ('# TYPE a summary\nquantile{quantile="0"} 0\na_sum{a="1"} 0\nquantile{quantile="1"} 0\n# EOF\n'), ('# TYPE a gauge\na 0 -1\na 0 -2\n# EOF\n'), ('# TYPE a gauge\na 0 -1\na 0 -1.1\n# EOF\n'), ('# TYPE a gauge\na 0 1\na 0 -1\n# EOF\n'), ('# TYPE a gauge\na 0 1.1\na 0 1\n# EOF\n'), ('# TYPE a gauge\na 0 1\na 0 0\n# EOF\n'), ('# TYPE a gauge\na 0\na 0 0\n# EOF\n'), ('# TYPE a gauge\na 0 0\na 0\n# EOF\n'), # Clashing names. 
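            # Added note: a family name may carry only one TYPE line, and suffixed
            # series such as a_created must not collide with another family.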
('# TYPE a counter\n# TYPE a counter\n# EOF\n'), ('# TYPE a info\n# TYPE a counter\n# EOF\n'), ('# TYPE a_created gauge\n# TYPE a counter\n# EOF\n'), ]: with self.assertRaises(ValueError, msg=case): list(text_string_to_metric_families(case)) if __name__ == '__main__': unittest.main() python-prometheus-client-0.19.0+ds1/tests/proc/000077500000000000000000000000001454301344400213615ustar00rootroot00000000000000python-prometheus-client-0.19.0+ds1/tests/proc/26231/000077500000000000000000000000001454301344400220365ustar00rootroot00000000000000python-prometheus-client-0.19.0+ds1/tests/proc/26231/fd/000077500000000000000000000000001454301344400224275ustar00rootroot00000000000000python-prometheus-client-0.19.0+ds1/tests/proc/26231/fd/0000066400000000000000000000000001454301344400224770ustar00rootroot00000000000000python-prometheus-client-0.19.0+ds1/tests/proc/26231/fd/1000066400000000000000000000000001454301344400225000ustar00rootroot00000000000000python-prometheus-client-0.19.0+ds1/tests/proc/26231/fd/2000066400000000000000000000000001454301344400225010ustar00rootroot00000000000000python-prometheus-client-0.19.0+ds1/tests/proc/26231/fd/3000066400000000000000000000000001454301344400225020ustar00rootroot00000000000000python-prometheus-client-0.19.0+ds1/tests/proc/26231/fd/4000066400000000000000000000000001454301344400225030ustar00rootroot00000000000000python-prometheus-client-0.19.0+ds1/tests/proc/26231/limits000066400000000000000000000021051454301344400232600ustar00rootroot00000000000000Limit Soft Limit Hard Limit Units Max cpu time unlimited unlimited seconds Max file size unlimited unlimited bytes Max data size unlimited unlimited bytes Max stack size 8388608 unlimited bytes Max core file size 0 unlimited bytes Max resident set unlimited unlimited bytes Max processes 62898 62898 processes Max open files 2048 4096 files Max locked memory 65536 65536 bytes Max address space unlimited unlimited bytes Max file locks unlimited unlimited locks Max pending signals 62898 62898 signals Max msgqueue size 819200 819200 bytes Max nice priority 0 0 python-prometheus-client-0.19.0+ds1/tests/proc/26231/stat000066400000000000000000000005121454301344400227320ustar00rootroot0000000000000026231 (vim) R 5392 7446 5392 34835 7446 4218880 32533 309516 26 82 1677 44 158 99 20 0 1 0 82375 56274944 1981 18446744073709551615 4194304 6294284 140736914091744 140736914087944 139965136429984 0 0 12288 1870679807 0 0 0 17 0 0 0 31 0 0 8391624 8481048 16420864 140736914093252 140736914093279 140736914093279 140736914096107 0 python-prometheus-client-0.19.0+ds1/tests/proc/584/000077500000000000000000000000001454301344400217015ustar00rootroot00000000000000python-prometheus-client-0.19.0+ds1/tests/proc/584/stat000066400000000000000000000004721454301344400226020ustar00rootroot000000000000001020 ((a b ) ( c d) ) R 28378 1020 28378 34842 1020 4218880 286 0 0 0 0 0 0 0 20 0 1 0 10839175 10395648 155 18446744073709551615 4194304 4238788 140736466511168 140736466511168 140609271124624 0 0 0 0 0 0 0 17 5 0 0 0 0 0 6336016 6337300 25579520 140736466515030 140736466515061 140736466515061 140736466518002 0 python-prometheus-client-0.19.0+ds1/tests/proc/stat000066400000000000000000000040561454301344400222640ustar00rootroot00000000000000cpu 301854 612 111922 8979004 3552 2 3944 0 0 0 cpu0 44490 19 21045 1087069 220 1 3410 0 0 0 cpu1 47869 23 16474 1110787 591 0 46 0 0 0 cpu2 46504 36 15916 1112321 441 0 326 0 0 0 cpu3 47054 102 15683 1113230 533 0 60 0 0 0 cpu4 28413 25 10776 1140321 217 0 8 0 0 0 cpu5 29271 101 11586 1136270 672 0 30 0 0 0 
cpu6 29152 36 10276 1139721 319 0 29 0 0 0 cpu7 29098 268 10164 1139282 555 0 31 0 0 0 intr 8885917 17 0 0 0 0 0 0 0 1 79281 0 0 0 0 0 0 0 231237 0 0 0 0 250586 103 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 223424 190745 13 906 1283803 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0intr 8885917 17 0 0 0 0 0 0 0 1 79281 0 0 0 0 00 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 ctxt 38014093 btime 1418183276 processes 26442 procs_running 2 procs_blocked 0 softirq 5057579 250191 1481983 1647 211099 186066 0 1783454 622196 12499 508444 python-prometheus-client-0.19.0+ds1/tests/test_asgi.py000066400000000000000000000157241454301344400227630ustar00rootroot00000000000000import gzip from unittest import skipUnless, TestCase from prometheus_client import CollectorRegistry, Counter from prometheus_client.exposition import CONTENT_TYPE_LATEST try: # Python >3.5 only import asyncio from asgiref.testing import ApplicationCommunicator from prometheus_client import make_asgi_app HAVE_ASYNCIO_AND_ASGI = True except ImportError: HAVE_ASYNCIO_AND_ASGI = False def setup_testing_defaults(scope): scope.update( { "client": ("127.0.0.1", 32767), "headers": [], "http_version": "1.0", "method": "GET", "path": "/", "query_string": b"", "scheme": "http", "server": ("127.0.0.1", 80), "type": "http", } ) class ASGITest(TestCase): @skipUnless(HAVE_ASYNCIO_AND_ASGI, "Don't have asyncio/asgi installed.") def setUp(self): self.registry = CollectorRegistry() self.captured_status = None self.captured_headers = None # Setup ASGI scope self.scope = {} setup_testing_defaults(self.scope) self.communicator = None def tearDown(self): if self.communicator: asyncio.get_event_loop().run_until_complete( self.communicator.wait() ) def seed_app(self, app): self.communicator = ApplicationCommunicator(app, self.scope) def send_input(self, payload): asyncio.get_event_loop().run_until_complete( self.communicator.send_input(payload) ) def send_default_request(self): self.send_input({"type": "http.request", "body": b""}) def get_output(self): output = asyncio.get_event_loop().run_until_complete( self.communicator.receive_output(0) ) return output def get_all_output(self): outputs = [] while True: try: outputs.append(self.get_output()) except asyncio.TimeoutError: break return outputs def get_all_response_headers(self): 
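        # Drain all pending ASGI messages and return the headers carried by the
        # "http.response.start" message.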
outputs = self.get_all_output() response_start = next(o for o in outputs if o["type"] == "http.response.start") return response_start["headers"] def get_response_header_value(self, header_name): response_headers = self.get_all_response_headers() return next( value.decode("utf-8") for name, value in response_headers if name.decode("utf-8") == header_name ) def increment_metrics(self, metric_name, help_text, increments): c = Counter(metric_name, help_text, registry=self.registry) for _ in range(increments): c.inc() def assert_outputs(self, outputs, metric_name, help_text, increments, compressed): self.assertEqual(len(outputs), 2) response_start = outputs[0] self.assertEqual(response_start['type'], 'http.response.start') response_body = outputs[1] self.assertEqual(response_body['type'], 'http.response.body') # Status code self.assertEqual(response_start['status'], 200) # Headers num_of_headers = 2 if compressed else 1 self.assertEqual(len(response_start['headers']), num_of_headers) self.assertIn((b"Content-Type", CONTENT_TYPE_LATEST.encode('utf8')), response_start['headers']) if compressed: self.assertIn((b"Content-Encoding", b"gzip"), response_start['headers']) # Body if compressed: output = gzip.decompress(response_body['body']).decode('utf8') else: output = response_body['body'].decode('utf8') self.assertIn("# HELP " + metric_name + "_total " + help_text + "\n", output) self.assertIn("# TYPE " + metric_name + "_total counter\n", output) self.assertIn(metric_name + "_total " + str(increments) + ".0\n", output) def validate_metrics(self, metric_name, help_text, increments): """ ASGI app serves the metrics from the provided registry. """ self.increment_metrics(metric_name, help_text, increments) # Create and run ASGI app app = make_asgi_app(self.registry) self.seed_app(app) self.send_default_request() # Assert outputs outputs = self.get_all_output() self.assert_outputs(outputs, metric_name, help_text, increments, compressed=False) def test_report_metrics_1(self): self.validate_metrics("counter", "A counter", 2) def test_report_metrics_2(self): self.validate_metrics("counter", "Another counter", 3) def test_report_metrics_3(self): self.validate_metrics("requests", "Number of requests", 5) def test_report_metrics_4(self): self.validate_metrics("failed_requests", "Number of failed requests", 7) def test_gzip(self): # Increment a metric. metric_name = "counter" help_text = "A counter" increments = 2 self.increment_metrics(metric_name, help_text, increments) app = make_asgi_app(self.registry) self.seed_app(app) # Send input with gzip header. self.scope["headers"] = [(b"accept-encoding", b"gzip")] self.send_input({"type": "http.request", "body": b""}) # Assert outputs are compressed. outputs = self.get_all_output() self.assert_outputs(outputs, metric_name, help_text, increments, compressed=True) def test_gzip_disabled(self): # Increment a metric. metric_name = "counter" help_text = "A counter" increments = 2 self.increment_metrics(metric_name, help_text, increments) # Disable compression explicitly. app = make_asgi_app(self.registry, disable_compression=True) self.seed_app(app) # Send input with gzip header. self.scope["headers"] = [(b"accept-encoding", b"gzip")] self.send_input({"type": "http.request", "body": b""}) # Assert outputs are not compressed. 
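        # (disable_compression=True must win even though the client advertised
        # Accept-Encoding: gzip above.)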
        outputs = self.get_all_output()
        self.assert_outputs(outputs, metric_name, help_text, increments, compressed=False)

    def test_openmetrics_encoding(self):
        """Response content type is application/openmetrics-text when appropriate Accept header is in request"""
        app = make_asgi_app(self.registry)
        self.seed_app(app)
        self.scope["headers"] = [(b"Accept", b"application/openmetrics-text")]
        self.send_input({"type": "http.request", "body": b""})
        content_type = self.get_response_header_value('Content-Type').split(";")[0]
        assert content_type == "application/openmetrics-text"

    def test_plaintext_encoding(self):
        """Response content type is text/plain when Accept header is missing in request"""
        app = make_asgi_app(self.registry)
        self.seed_app(app)
        self.send_input({"type": "http.request", "body": b""})
        content_type = self.get_response_header_value('Content-Type').split(";")[0]
        assert content_type == "text/plain"
python-prometheus-client-0.19.0+ds1/tests/test_core.py
from concurrent.futures import ThreadPoolExecutor
import os
import time
import unittest

import pytest

from prometheus_client import metrics
from prometheus_client.core import (
    CollectorRegistry, Counter, CounterMetricFamily, Enum, Gauge,
    GaugeHistogramMetricFamily, GaugeMetricFamily, Histogram,
    HistogramMetricFamily, Info, InfoMetricFamily, Metric, Sample,
    StateSetMetricFamily, Summary, SummaryMetricFamily, UntypedMetricFamily,
)
from prometheus_client.decorator import getargspec
from prometheus_client.metrics import _get_use_created


def assert_not_observable(fn, *args, **kwargs):
    """
    Assert that a function call fails with a ValueError exception containing
    'missing label values'
    """
    try:
        fn(*args, **kwargs)
    except ValueError as e:
        assert 'missing label values' in str(e)
        return

    assert False, "Did not raise a 'missing label values' exception"


class TestCounter(unittest.TestCase):
    def setUp(self):
        self.registry = CollectorRegistry()
        self.counter = Counter('c_total', 'help', registry=self.registry)

    def test_increment(self):
        self.assertEqual(0, self.registry.get_sample_value('c_total'))
        self.counter.inc()
        self.assertEqual(1, self.registry.get_sample_value('c_total'))
        self.counter.inc(7)
        self.assertEqual(8, self.registry.get_sample_value('c_total'))

    def test_repr(self):
        self.assertEqual(repr(self.counter), "prometheus_client.metrics.Counter(c)")

    def test_negative_increment_raises(self):
        self.assertRaises(ValueError, self.counter.inc, -1)

    def test_function_decorator(self):
        @self.counter.count_exceptions(ValueError)
        def f(r):
            if r:
                raise ValueError
            else:
                raise TypeError

        self.assertEqual((["r"], None, None, None), getargspec(f))

        try:
            f(False)
        except TypeError:
            pass
        self.assertEqual(0, self.registry.get_sample_value('c_total'))

        try:
            f(True)
        except ValueError:
            pass
        self.assertEqual(1, self.registry.get_sample_value('c_total'))

    def test_block_decorator(self):
        with self.counter.count_exceptions():
            pass
        self.assertEqual(0, self.registry.get_sample_value('c_total'))

        raised = False
        try:
            with self.counter.count_exceptions():
                raise ValueError
        except:
            raised = True
        self.assertTrue(raised)
        self.assertEqual(1, self.registry.get_sample_value('c_total'))

    def test_count_exceptions_not_observable(self):
        counter = Counter('counter', 'help', labelnames=('label',), registry=self.registry)
        assert_not_observable(counter.count_exceptions)

    def test_inc_not_observable(self):
        """.inc() must fail if the counter is not observable."""
        counter = Counter('counter', 'help', labelnames=('label',),
registry=self.registry) assert_not_observable(counter.inc) def test_exemplar_invalid_label_name(self): self.assertRaises(ValueError, self.counter.inc, exemplar={':o)': 'smile'}) self.assertRaises(ValueError, self.counter.inc, exemplar={'1': 'number'}) def test_exemplar_unicode(self): # 128 characters should not raise, even using characters larger than 1 byte. self.counter.inc(exemplar={ 'abcdefghijklmnopqrstuvwxyz': '26+16 characters', 'x123456': '7+15 characters', 'zyxwvutsrqponmlkjihgfedcba': '26+16 characters', 'unicode': '7+15 chars å¹³', }) def test_exemplar_too_long(self): # 129 characters should fail. self.assertRaises(ValueError, self.counter.inc, exemplar={ 'abcdefghijklmnopqrstuvwxyz': '26+16 characters', 'x1234567': '8+15 characters', 'zyxwvutsrqponmlkjihgfedcba': '26+16 characters', 'y123456': '7+15 characters', }) class TestDisableCreated(unittest.TestCase): def setUp(self): self.registry = CollectorRegistry() os.environ['PROMETHEUS_DISABLE_CREATED_SERIES'] = 'True' metrics._use_created = _get_use_created() def tearDown(self): os.environ.pop('PROMETHEUS_DISABLE_CREATED_SERIES', None) metrics._use_created = _get_use_created() def test_counter(self): counter = Counter('c_total', 'help', registry=self.registry) counter.inc() self.assertEqual(None, self.registry.get_sample_value('c_created')) def test_histogram(self): histogram = Histogram('h', 'help', registry=self.registry) histogram.observe(3.2) self.assertEqual(None, self.registry.get_sample_value('h_created')) def test_summary(self): summary = Summary('s', 'help', registry=self.registry) summary.observe(8.2) self.assertEqual(None, self.registry.get_sample_value('s_created')) class TestGauge(unittest.TestCase): def setUp(self): self.registry = CollectorRegistry() self.gauge = Gauge('g', 'help', registry=self.registry) self.gauge_with_label = Gauge('g2', 'help', labelnames=("label1",), registry=self.registry) def test_repr(self): self.assertEqual(repr(self.gauge), "prometheus_client.metrics.Gauge(g)") def test_gauge(self): self.assertEqual(0, self.registry.get_sample_value('g')) self.gauge.inc() self.assertEqual(1, self.registry.get_sample_value('g')) self.gauge.dec(3) self.assertEqual(-2, self.registry.get_sample_value('g')) self.gauge.set(9) self.assertEqual(9, self.registry.get_sample_value('g')) def test_inc_not_observable(self): """.inc() must fail if the gauge is not observable.""" assert_not_observable(self.gauge_with_label.inc) def test_dec_not_observable(self): """.dec() must fail if the gauge is not observable.""" assert_not_observable(self.gauge_with_label.dec) def test_set_not_observable(self): """.set() must fail if the gauge is not observable.""" assert_not_observable(self.gauge_with_label.set, 1) def test_inprogress_function_decorator(self): self.assertEqual(0, self.registry.get_sample_value('g')) @self.gauge.track_inprogress() def f(): self.assertEqual(1, self.registry.get_sample_value('g')) self.assertEqual(([], None, None, None), getargspec(f)) f() self.assertEqual(0, self.registry.get_sample_value('g')) def test_inprogress_block_decorator(self): self.assertEqual(0, self.registry.get_sample_value('g')) with self.gauge.track_inprogress(): self.assertEqual(1, self.registry.get_sample_value('g')) self.assertEqual(0, self.registry.get_sample_value('g')) def test_gauge_function(self): x = {} self.gauge.set_function(lambda: len(x)) self.assertEqual(0, self.registry.get_sample_value('g')) self.gauge.inc() self.assertEqual(0, self.registry.get_sample_value('g')) x['a'] = None self.assertEqual(1, 
self.registry.get_sample_value('g')) def test_set_function_not_observable(self): """.set_function() must fail if the gauge is not observable.""" assert_not_observable(self.gauge_with_label.set_function, lambda: 1) def test_time_function_decorator(self): self.assertEqual(0, self.registry.get_sample_value('g')) @self.gauge.time() def f(): time.sleep(.001) self.assertEqual(([], None, None, None), getargspec(f)) f() self.assertNotEqual(0, self.registry.get_sample_value('g')) def test_function_decorator_multithread(self): self.assertEqual(0, self.registry.get_sample_value('g')) workers = 2 pool = ThreadPoolExecutor(max_workers=workers) @self.gauge.time() def f(duration): time.sleep(duration) expected_duration = 1 pool.submit(f, expected_duration) time.sleep(0.7 * expected_duration) pool.submit(f, expected_duration * 2) time.sleep(expected_duration) rounding_coefficient = 0.9 adjusted_expected_duration = expected_duration * rounding_coefficient self.assertLess(adjusted_expected_duration, self.registry.get_sample_value('g')) pool.shutdown(wait=True) def test_time_block_decorator(self): self.assertEqual(0, self.registry.get_sample_value('g')) with self.gauge.time(): time.sleep(.001) self.assertNotEqual(0, self.registry.get_sample_value('g')) def test_time_block_decorator_with_label(self): value = self.registry.get_sample_value self.assertEqual(None, value('g2', {'label1': 'foo'})) with self.gauge_with_label.time() as metric: metric.labels('foo') self.assertLess(0, value('g2', {'label1': 'foo'})) def test_track_in_progress_not_observable(self): g = Gauge('test', 'help', labelnames=('label',), registry=self.registry) assert_not_observable(g.track_inprogress) def test_timer_not_observable(self): g = Gauge('test', 'help', labelnames=('label',), registry=self.registry) def manager(): with g.time(): pass assert_not_observable(manager) class TestSummary(unittest.TestCase): def setUp(self): self.registry = CollectorRegistry() self.summary = Summary('s', 'help', registry=self.registry) self.summary_with_labels = Summary('s_with_labels', 'help', labelnames=("label1",), registry=self.registry) def test_repr(self): self.assertEqual(repr(self.summary), "prometheus_client.metrics.Summary(s)") def test_summary(self): self.assertEqual(0, self.registry.get_sample_value('s_count')) self.assertEqual(0, self.registry.get_sample_value('s_sum')) self.summary.observe(10) self.assertEqual(1, self.registry.get_sample_value('s_count')) self.assertEqual(10, self.registry.get_sample_value('s_sum')) def test_summary_not_observable(self): """.observe() must fail if the Summary is not observable.""" assert_not_observable(self.summary_with_labels.observe, 1) def test_function_decorator(self): self.assertEqual(0, self.registry.get_sample_value('s_count')) @self.summary.time() def f(): pass self.assertEqual(([], None, None, None), getargspec(f)) f() self.assertEqual(1, self.registry.get_sample_value('s_count')) def test_function_decorator_multithread(self): self.assertEqual(0, self.registry.get_sample_value('s_count')) summary2 = Summary('s2', 'help', registry=self.registry) workers = 3 duration = 0.1 pool = ThreadPoolExecutor(max_workers=workers) @self.summary.time() def f(): time.sleep(duration / 2) # Testing that different instances of timer do not interfere summary2.time()(lambda: time.sleep(duration / 2))() jobs = workers * 3 for i in range(jobs): pool.submit(f) pool.shutdown(wait=True) self.assertEqual(jobs, self.registry.get_sample_value('s_count')) rounding_coefficient = 0.9 total_expected_duration = jobs * duration * 
rounding_coefficient self.assertLess(total_expected_duration, self.registry.get_sample_value('s_sum')) self.assertLess(total_expected_duration / 2, self.registry.get_sample_value('s2_sum')) def test_function_decorator_reentrancy(self): self.assertEqual(0, self.registry.get_sample_value('s_count')) iterations = 2 sleep = 0.1 @self.summary.time() def f(i=1): time.sleep(sleep) if i == iterations: return f(i + 1) f() self.assertEqual(iterations, self.registry.get_sample_value('s_count')) # Arithmetic series with d == a_1 total_expected_duration = sleep * (iterations ** 2 + iterations) / 2 rounding_coefficient = 0.9 total_expected_duration *= rounding_coefficient self.assertLess(total_expected_duration, self.registry.get_sample_value('s_sum')) def test_block_decorator(self): self.assertEqual(0, self.registry.get_sample_value('s_count')) with self.summary.time(): pass self.assertEqual(1, self.registry.get_sample_value('s_count')) def test_block_decorator_with_label(self): value = self.registry.get_sample_value self.assertEqual(None, value('s_with_labels_count', {'label1': 'foo'})) with self.summary_with_labels.time() as metric: metric.labels('foo') self.assertEqual(1, value('s_with_labels_count', {'label1': 'foo'})) def test_timer_not_observable(self): s = Summary('test', 'help', labelnames=('label',), registry=self.registry) def manager(): with s.time(): pass assert_not_observable(manager) class TestHistogram(unittest.TestCase): def setUp(self): self.registry = CollectorRegistry() self.histogram = Histogram('h', 'help', registry=self.registry) self.labels = Histogram('hl', 'help', ['l'], registry=self.registry) def test_repr(self): self.assertEqual(repr(self.histogram), "prometheus_client.metrics.Histogram(h)") self.assertEqual(repr(self.labels), "prometheus_client.metrics.Histogram(hl)") def test_histogram(self): self.assertEqual(0, self.registry.get_sample_value('h_bucket', {'le': '1.0'})) self.assertEqual(0, self.registry.get_sample_value('h_bucket', {'le': '2.5'})) self.assertEqual(0, self.registry.get_sample_value('h_bucket', {'le': '5.0'})) self.assertEqual(0, self.registry.get_sample_value('h_bucket', {'le': '+Inf'})) self.assertEqual(0, self.registry.get_sample_value('h_count')) self.assertEqual(0, self.registry.get_sample_value('h_sum')) self.histogram.observe(2) self.assertEqual(0, self.registry.get_sample_value('h_bucket', {'le': '1.0'})) self.assertEqual(1, self.registry.get_sample_value('h_bucket', {'le': '2.5'})) self.assertEqual(1, self.registry.get_sample_value('h_bucket', {'le': '5.0'})) self.assertEqual(1, self.registry.get_sample_value('h_bucket', {'le': '+Inf'})) self.assertEqual(1, self.registry.get_sample_value('h_count')) self.assertEqual(2, self.registry.get_sample_value('h_sum')) self.histogram.observe(2.5) self.assertEqual(0, self.registry.get_sample_value('h_bucket', {'le': '1.0'})) self.assertEqual(2, self.registry.get_sample_value('h_bucket', {'le': '2.5'})) self.assertEqual(2, self.registry.get_sample_value('h_bucket', {'le': '5.0'})) self.assertEqual(2, self.registry.get_sample_value('h_bucket', {'le': '+Inf'})) self.assertEqual(2, self.registry.get_sample_value('h_count')) self.assertEqual(4.5, self.registry.get_sample_value('h_sum')) self.histogram.observe(float("inf")) self.assertEqual(0, self.registry.get_sample_value('h_bucket', {'le': '1.0'})) self.assertEqual(2, self.registry.get_sample_value('h_bucket', {'le': '2.5'})) self.assertEqual(2, self.registry.get_sample_value('h_bucket', {'le': '5.0'})) self.assertEqual(3, 
self.registry.get_sample_value('h_bucket', {'le': '+Inf'}))
        self.assertEqual(3, self.registry.get_sample_value('h_count'))
        self.assertEqual(float("inf"), self.registry.get_sample_value('h_sum'))

    def test_histogram_not_observable(self):
        """.observe() must fail if the histogram is not observable."""
        assert_not_observable(self.labels.observe, 1)

    def test_setting_buckets(self):
        h = Histogram('h', 'help', registry=None, buckets=[0, 1, 2])
        self.assertEqual([0.0, 1.0, 2.0, float("inf")], h._upper_bounds)

        h = Histogram('h', 'help', registry=None, buckets=[0, 1, 2, float("inf")])
        self.assertEqual([0.0, 1.0, 2.0, float("inf")], h._upper_bounds)

        self.assertRaises(ValueError, Histogram, 'h', 'help', registry=None, buckets=[])
        self.assertRaises(ValueError, Histogram, 'h', 'help', registry=None, buckets=[float("inf")])
        self.assertRaises(ValueError, Histogram, 'h', 'help', registry=None, buckets=[3, 1])

    def test_labels(self):
        self.assertRaises(ValueError, Histogram, 'h', 'help', registry=None, labelnames=['le'])

        self.labels.labels('a').observe(2)
        self.assertEqual(0, self.registry.get_sample_value('hl_bucket', {'le': '1.0', 'l': 'a'}))
        self.assertEqual(1, self.registry.get_sample_value('hl_bucket', {'le': '2.5', 'l': 'a'}))
        self.assertEqual(1, self.registry.get_sample_value('hl_bucket', {'le': '5.0', 'l': 'a'}))
        self.assertEqual(1, self.registry.get_sample_value('hl_bucket', {'le': '+Inf', 'l': 'a'}))
        self.assertEqual(1, self.registry.get_sample_value('hl_count', {'l': 'a'}))
        self.assertEqual(2, self.registry.get_sample_value('hl_sum', {'l': 'a'}))

    def test_function_decorator(self):
        self.assertEqual(0, self.registry.get_sample_value('h_count'))
        self.assertEqual(0, self.registry.get_sample_value('h_bucket', {'le': '+Inf'}))

        @self.histogram.time()
        def f():
            pass

        self.assertEqual(([], None, None, None), getargspec(f))

        f()
        self.assertEqual(1, self.registry.get_sample_value('h_count'))
        self.assertEqual(1, self.registry.get_sample_value('h_bucket', {'le': '+Inf'}))

    def test_function_decorator_multithread(self):
        self.assertEqual(0, self.registry.get_sample_value('h_count'))
        workers = 3
        duration = 0.1
        pool = ThreadPoolExecutor(max_workers=workers)

        @self.histogram.time()
        def f():
            time.sleep(duration)

        jobs = workers * 3
        for i in range(jobs):
            pool.submit(f)
        pool.shutdown(wait=True)

        self.assertEqual(jobs, self.registry.get_sample_value('h_count'))

        rounding_coefficient = 0.9
        total_expected_duration = jobs * duration * rounding_coefficient
        self.assertLess(total_expected_duration, self.registry.get_sample_value('h_sum'))

    def test_block_decorator(self):
        self.assertEqual(0, self.registry.get_sample_value('h_count'))
        self.assertEqual(0, self.registry.get_sample_value('h_bucket', {'le': '+Inf'}))
        with self.histogram.time():
            pass
        self.assertEqual(1, self.registry.get_sample_value('h_count'))
        self.assertEqual(1, self.registry.get_sample_value('h_bucket', {'le': '+Inf'}))

    def test_block_decorator_with_label(self):
        value = self.registry.get_sample_value
        self.assertEqual(None, value('hl_count', {'l': 'a'}))
        self.assertEqual(None, value('hl_bucket', {'le': '+Inf', 'l': 'a'}))
        with self.labels.time() as metric:
            metric.labels('a')
        self.assertEqual(1, value('hl_count', {'l': 'a'}))
        self.assertEqual(1, value('hl_bucket', {'le': '+Inf', 'l': 'a'}))

    def test_exemplar_invalid_label_name(self):
        self.assertRaises(ValueError, self.histogram.observe, 3.0, exemplar={':o)': 'smile'})
        self.assertRaises(ValueError, self.histogram.observe, 3.0, exemplar={'1': 'number'})

    def test_exemplar_too_long(self):
        # 129 characters in total should fail.
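        # Added note: the OpenMetrics exposition format caps the combined length
        # of an exemplar's label names and values at 128 UTF-8 characters;
        # TestCounter.test_exemplar_unicode covers the accepted 128-character case.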
self.assertRaises(ValueError, self.histogram.observe, 1.0, exemplar={ 'abcdefghijklmnopqrstuvwxyz': '26+16 characters', 'x1234567': '8+15 characters', 'zyxwvutsrqponmlkjihgfedcba': '26+16 characters', 'y123456': '7+15 characters', }) class TestInfo(unittest.TestCase): def setUp(self): self.registry = CollectorRegistry() self.info = Info('i', 'help', registry=self.registry) self.labels = Info('il', 'help', ['l'], registry=self.registry) def test_repr(self): self.assertEqual(repr(self.info), "prometheus_client.metrics.Info(i)") self.assertEqual(repr(self.labels), "prometheus_client.metrics.Info(il)") def test_info(self): self.assertEqual(1, self.registry.get_sample_value('i_info', {})) self.info.info({'a': 'b', 'c': 'd'}) self.assertEqual(None, self.registry.get_sample_value('i_info', {})) self.assertEqual(1, self.registry.get_sample_value('i_info', {'a': 'b', 'c': 'd'})) def test_labels(self): self.assertRaises(ValueError, self.labels.labels('a').info, {'l': ''}) self.labels.labels('a').info({'foo': 'bar'}) self.assertEqual(1, self.registry.get_sample_value('il_info', {'l': 'a', 'foo': 'bar'})) class TestEnum(unittest.TestCase): def setUp(self): self.registry = CollectorRegistry() self.enum = Enum('e', 'help', states=['a', 'b', 'c'], registry=self.registry) self.labels = Enum('el', 'help', ['l'], states=['a', 'b', 'c'], registry=self.registry) def test_enum(self): self.assertEqual(1, self.registry.get_sample_value('e', {'e': 'a'})) self.assertEqual(0, self.registry.get_sample_value('e', {'e': 'b'})) self.assertEqual(0, self.registry.get_sample_value('e', {'e': 'c'})) self.enum.state('b') self.assertEqual(0, self.registry.get_sample_value('e', {'e': 'a'})) self.assertEqual(1, self.registry.get_sample_value('e', {'e': 'b'})) self.assertEqual(0, self.registry.get_sample_value('e', {'e': 'c'})) self.assertRaises(ValueError, self.enum.state, 'd') self.assertRaises(ValueError, Enum, 'e', 'help', registry=None) def test_labels(self): self.labels.labels('a').state('c') self.assertEqual(0, self.registry.get_sample_value('el', {'l': 'a', 'el': 'a'})) self.assertEqual(0, self.registry.get_sample_value('el', {'l': 'a', 'el': 'b'})) self.assertEqual(1, self.registry.get_sample_value('el', {'l': 'a', 'el': 'c'})) self.assertRaises(ValueError, self.labels.state, 'a') def test_overlapping_labels(self): with pytest.raises(ValueError): Enum('e', 'help', registry=None, labelnames=['e']) class TestMetricWrapper(unittest.TestCase): def setUp(self): self.registry = CollectorRegistry() self.counter = Counter('c_total', 'help', labelnames=['l'], registry=self.registry) self.two_labels = Counter('two', 'help', labelnames=['a', 'b'], registry=self.registry) def test_child(self): self.counter.labels('x').inc() self.assertEqual(1, self.registry.get_sample_value('c_total', {'l': 'x'})) self.two_labels.labels('x', 'y').inc(2) self.assertEqual(2, self.registry.get_sample_value('two_total', {'a': 'x', 'b': 'y'})) def test_remove(self): self.counter.labels('x').inc() self.counter.labels('y').inc(2) self.assertEqual(1, self.registry.get_sample_value('c_total', {'l': 'x'})) self.assertEqual(2, self.registry.get_sample_value('c_total', {'l': 'y'})) self.counter.remove('x') self.assertEqual(None, self.registry.get_sample_value('c_total', {'l': 'x'})) self.assertEqual(2, self.registry.get_sample_value('c_total', {'l': 'y'})) def test_clear(self): self.counter.labels('x').inc() self.counter.labels('y').inc(2) self.assertEqual(1, self.registry.get_sample_value('c_total', {'l': 'x'})) self.assertEqual(2, 
self.registry.get_sample_value('c_total', {'l': 'y'})) self.counter.clear() self.assertEqual(None, self.registry.get_sample_value('c_total', {'l': 'x'})) self.assertEqual(None, self.registry.get_sample_value('c_total', {'l': 'y'})) def test_incorrect_label_count_raises(self): self.assertRaises(ValueError, self.counter.labels) self.assertRaises(ValueError, self.counter.labels, 'a', 'b') self.assertRaises(ValueError, self.counter.remove) self.assertRaises(ValueError, self.counter.remove, 'a', 'b') def test_labels_on_labels(self): with pytest.raises(ValueError): self.counter.labels('a').labels('b') def test_labels_coerced_to_string(self): self.counter.labels(None).inc() self.counter.labels(l=None).inc() self.assertEqual(2, self.registry.get_sample_value('c_total', {'l': 'None'})) self.counter.remove(None) self.assertEqual(None, self.registry.get_sample_value('c_total', {'l': 'None'})) def test_non_string_labels_raises(self): class Test: __str__ = None self.assertRaises(TypeError, self.counter.labels, Test()) self.assertRaises(TypeError, self.counter.labels, l=Test()) def test_namespace_subsystem_concatenated(self): c = Counter('c_total', 'help', namespace='a', subsystem='b', registry=self.registry) c.inc() self.assertEqual(1, self.registry.get_sample_value('a_b_c_total')) def test_labels_by_kwarg(self): self.counter.labels(l='x').inc() self.assertEqual(1, self.registry.get_sample_value('c_total', {'l': 'x'})) self.assertRaises(ValueError, self.counter.labels, l='x', m='y') self.assertRaises(ValueError, self.counter.labels, m='y') self.assertRaises(ValueError, self.counter.labels) self.two_labels.labels(a='x', b='y').inc() self.assertEqual(1, self.registry.get_sample_value('two_total', {'a': 'x', 'b': 'y'})) self.assertRaises(ValueError, self.two_labels.labels, a='x', b='y', c='z') self.assertRaises(ValueError, self.two_labels.labels, a='x', c='z') self.assertRaises(ValueError, self.two_labels.labels, b='y', c='z') self.assertRaises(ValueError, self.two_labels.labels, c='z') self.assertRaises(ValueError, self.two_labels.labels) self.assertRaises(ValueError, self.two_labels.labels, {'a': 'x'}, b='y') def test_invalid_names_raise(self): self.assertRaises(ValueError, Counter, '', 'help') self.assertRaises(ValueError, Counter, '^', 'help') self.assertRaises(ValueError, Counter, '', 'help', namespace='&') self.assertRaises(ValueError, Counter, '', 'help', subsystem='(') self.assertRaises(ValueError, Counter, 'c_total', '', labelnames=['^']) self.assertRaises(ValueError, Counter, 'c_total', '', labelnames=['a:b']) self.assertRaises(ValueError, Counter, 'c_total', '', labelnames=['__reserved']) self.assertRaises(ValueError, Summary, 'c_total', '', labelnames=['quantile']) def test_empty_labels_list(self): Histogram('h', 'help', [], registry=self.registry) self.assertEqual(0, self.registry.get_sample_value('h_sum')) def test_unit_appended(self): Histogram('h', 'help', [], registry=self.registry, unit="seconds") self.assertEqual(0, self.registry.get_sample_value('h_seconds_sum')) def test_unit_notappended(self): Histogram('h_seconds', 'help', [], registry=self.registry, unit="seconds") self.assertEqual(0, self.registry.get_sample_value('h_seconds_sum')) def test_no_units_for_info_enum(self): self.assertRaises(ValueError, Info, 'foo', 'help', unit="x") self.assertRaises(ValueError, Enum, 'foo', 'help', unit="x") def test_name_cleanup_before_unit_append(self): c = Counter('b_total', 'help', unit="total", labelnames=['l'], registry=self.registry) self.assertEqual(c._name, 'b_total') class 
TestMetricFamilies(unittest.TestCase): def setUp(self): self.registry = CollectorRegistry() def custom_collector(self, metric_family): class CustomCollector: def collect(self): return [metric_family] self.registry.register(CustomCollector()) def test_untyped(self): self.custom_collector(UntypedMetricFamily('u', 'help', value=1)) self.assertEqual(1, self.registry.get_sample_value('u', {})) def test_untyped_labels(self): cmf = UntypedMetricFamily('u', 'help', labels=['a', 'c']) cmf.add_metric(['b', 'd'], 2) self.custom_collector(cmf) self.assertEqual(2, self.registry.get_sample_value('u', {'a': 'b', 'c': 'd'})) def test_untyped_unit(self): self.custom_collector(UntypedMetricFamily('u', 'help', value=1, unit='unit')) self.assertEqual(1, self.registry.get_sample_value('u_unit', {})) def test_counter(self): self.custom_collector(CounterMetricFamily('c_total', 'help', value=1)) self.assertEqual(1, self.registry.get_sample_value('c_total', {})) def test_counter_total(self): self.custom_collector(CounterMetricFamily('c_total', 'help', value=1)) self.assertEqual(1, self.registry.get_sample_value('c_total', {})) def test_counter_labels(self): cmf = CounterMetricFamily('c_total', 'help', labels=['a', 'c_total']) cmf.add_metric(['b', 'd'], 2) self.custom_collector(cmf) self.assertEqual(2, self.registry.get_sample_value('c_total', {'a': 'b', 'c_total': 'd'})) def test_gauge(self): self.custom_collector(GaugeMetricFamily('g', 'help', value=1)) self.assertEqual(1, self.registry.get_sample_value('g', {})) def test_gauge_labels(self): cmf = GaugeMetricFamily('g', 'help', labels=['a']) cmf.add_metric(['b'], 2) self.custom_collector(cmf) self.assertEqual(2, self.registry.get_sample_value('g', {'a': 'b'})) def test_summary(self): self.custom_collector(SummaryMetricFamily('s', 'help', count_value=1, sum_value=2)) self.assertEqual(1, self.registry.get_sample_value('s_count', {})) self.assertEqual(2, self.registry.get_sample_value('s_sum', {})) def test_summary_labels(self): cmf = SummaryMetricFamily('s', 'help', labels=['a']) cmf.add_metric(['b'], count_value=1, sum_value=2) self.custom_collector(cmf) self.assertEqual(1, self.registry.get_sample_value('s_count', {'a': 'b'})) self.assertEqual(2, self.registry.get_sample_value('s_sum', {'a': 'b'})) def test_histogram(self): self.custom_collector(HistogramMetricFamily('h', 'help', buckets=[('0', 1), ('+Inf', 2)], sum_value=3)) self.assertEqual(1, self.registry.get_sample_value('h_bucket', {'le': '0'})) self.assertEqual(2, self.registry.get_sample_value('h_bucket', {'le': '+Inf'})) self.assertEqual(2, self.registry.get_sample_value('h_count', {})) self.assertEqual(3, self.registry.get_sample_value('h_sum', {})) def test_histogram_labels(self): cmf = HistogramMetricFamily('h', 'help', labels=['a']) cmf.add_metric(['b'], buckets=[('0', 1), ('+Inf', 2)], sum_value=3) self.custom_collector(cmf) self.assertEqual(1, self.registry.get_sample_value('h_bucket', {'a': 'b', 'le': '0'})) self.assertEqual(2, self.registry.get_sample_value('h_bucket', {'a': 'b', 'le': '+Inf'})) self.assertEqual(2, self.registry.get_sample_value('h_count', {'a': 'b'})) self.assertEqual(3, self.registry.get_sample_value('h_sum', {'a': 'b'})) def test_gaugehistogram(self): self.custom_collector(GaugeHistogramMetricFamily('h', 'help', buckets=[('0', 1), ('+Inf', 2)])) self.assertEqual(1, self.registry.get_sample_value('h_bucket', {'le': '0'})) self.assertEqual(2, self.registry.get_sample_value('h_bucket', {'le': '+Inf'})) def test_gaugehistogram_labels(self): cmf = GaugeHistogramMetricFamily('h', 
'help', labels=['a'])
        cmf.add_metric(['b'], buckets=[('0', 1), ('+Inf', 2)], gsum_value=3)
        self.custom_collector(cmf)
        self.assertEqual(1, self.registry.get_sample_value('h_bucket', {'a': 'b', 'le': '0'}))
        self.assertEqual(2, self.registry.get_sample_value('h_bucket', {'a': 'b', 'le': '+Inf'}))
        self.assertEqual(2, self.registry.get_sample_value('h_gcount', {'a': 'b'}))
        self.assertEqual(3, self.registry.get_sample_value('h_gsum', {'a': 'b'}))

    def test_info(self):
        self.custom_collector(InfoMetricFamily('i', 'help', value={'a': 'b'}))
        self.assertEqual(1, self.registry.get_sample_value('i_info', {'a': 'b'}))

    def test_info_labels(self):
        cmf = InfoMetricFamily('i', 'help', labels=['a'])
        cmf.add_metric(['b'], {'c': 'd'})
        self.custom_collector(cmf)
        self.assertEqual(1, self.registry.get_sample_value('i_info', {'a': 'b', 'c': 'd'}))

    def test_stateset(self):
        self.custom_collector(StateSetMetricFamily('s', 'help', value={'a': True, 'b': True, }))
        self.assertEqual(1, self.registry.get_sample_value('s', {'s': 'a'}))
        self.assertEqual(1, self.registry.get_sample_value('s', {'s': 'b'}))

    def test_stateset_labels(self):
        cmf = StateSetMetricFamily('s', 'help', labels=['foo'])
        cmf.add_metric(['bar'], {'a': False, 'b': False, })
        self.custom_collector(cmf)
        self.assertEqual(0, self.registry.get_sample_value('s', {'foo': 'bar', 's': 'a'}))
        self.assertEqual(0, self.registry.get_sample_value('s', {'foo': 'bar', 's': 'b'}))

    def test_bad_constructors(self):
        self.assertRaises(ValueError, UntypedMetricFamily, 'u', 'help', value=1, labels=[])
        self.assertRaises(ValueError, UntypedMetricFamily, 'u', 'help', value=1, labels=['a'])
        self.assertRaises(ValueError, CounterMetricFamily, 'c_total', 'help', value=1, labels=[])
        self.assertRaises(ValueError, CounterMetricFamily, 'c_total', 'help', value=1, labels=['a'])
        self.assertRaises(ValueError, GaugeMetricFamily, 'g', 'help', value=1, labels=[])
        self.assertRaises(ValueError, GaugeMetricFamily, 'g', 'help', value=1, labels=['a'])
        self.assertRaises(ValueError, SummaryMetricFamily, 's', 'help', sum_value=1)
        self.assertRaises(ValueError, SummaryMetricFamily, 's', 'help', count_value=1)
        self.assertRaises(ValueError, SummaryMetricFamily, 's', 'help', count_value=1, labels=['a'])
        self.assertRaises(ValueError, SummaryMetricFamily, 's', 'help', sum_value=1, labels=['a'])
        self.assertRaises(ValueError, SummaryMetricFamily, 's', 'help', count_value=1, sum_value=1, labels=['a'])
        self.assertRaises(ValueError, HistogramMetricFamily, 'h', 'help', sum_value=1)
        self.assertRaises(KeyError, HistogramMetricFamily, 'h', 'help', buckets={})
        self.assertRaises(ValueError, HistogramMetricFamily, 'h', 'help', sum_value=1, labels=['a'])
        self.assertRaises(ValueError, HistogramMetricFamily, 'h', 'help', buckets={}, labels=['a'])
        self.assertRaises(ValueError, HistogramMetricFamily, 'h', 'help', buckets={}, sum_value=1, labels=['a'])
        self.assertRaises(KeyError, HistogramMetricFamily, 'h', 'help', buckets={}, sum_value=1)
        self.assertRaises(ValueError, InfoMetricFamily, 'i', 'help', value={}, labels=[])
        self.assertRaises(ValueError, InfoMetricFamily, 'i', 'help', value={}, labels=['a'])
        self.assertRaises(ValueError, StateSetMetricFamily, 's', 'help', value={'a': True}, labels=[])
        self.assertRaises(ValueError, StateSetMetricFamily, 's', 'help', value={'a': True}, labels=['a'])

    def test_labelnames(self):
        cmf = UntypedMetricFamily('u', 'help', labels=iter(['a']))
        self.assertEqual(('a',), cmf._labelnames)
        cmf = CounterMetricFamily('c_total', 'help', labels=iter(['a']))
        self.assertEqual(('a',), cmf._labelnames)
        gmf = GaugeMetricFamily('g', 'help', labels=iter(['a']))
        self.assertEqual(('a',), gmf._labelnames)
        smf = SummaryMetricFamily('s', 'help', labels=iter(['a']))
        self.assertEqual(('a',), smf._labelnames)
        hmf = HistogramMetricFamily('h', 'help', labels=iter(['a']))
        self.assertEqual(('a',), hmf._labelnames)


class TestCollectorRegistry(unittest.TestCase):
    def test_duplicate_metrics_raises(self):
        registry = CollectorRegistry()
        Counter('c_total', 'help', registry=registry)
        self.assertRaises(ValueError, Counter, 'c_total', 'help', registry=registry)
        self.assertRaises(ValueError, Gauge, 'c_total', 'help', registry=registry)
        self.assertRaises(ValueError, Gauge, 'c_created', 'help', registry=registry)

        Gauge('g_created', 'help', registry=registry)
        self.assertRaises(ValueError, Gauge, 'g_created', 'help', registry=registry)
        self.assertRaises(ValueError, Counter, 'g', 'help', registry=registry)

        Summary('s', 'help', registry=registry)
        self.assertRaises(ValueError, Summary, 's', 'help', registry=registry)
        self.assertRaises(ValueError, Gauge, 's_created', 'help', registry=registry)
        self.assertRaises(ValueError, Gauge, 's_sum', 'help', registry=registry)
        self.assertRaises(ValueError, Gauge, 's_count', 'help', registry=registry)
        # We don't currently expose quantiles, but let's prevent future
        # clashes anyway.
        self.assertRaises(ValueError, Gauge, 's', 'help', registry=registry)

        Histogram('h', 'help', registry=registry)
        self.assertRaises(ValueError, Histogram, 'h', 'help', registry=registry)
        # Clashes against various suffixes.
        self.assertRaises(ValueError, Summary, 'h', 'help', registry=registry)
        self.assertRaises(ValueError, Gauge, 'h_count', 'help', registry=registry)
        self.assertRaises(ValueError, Gauge, 'h_sum', 'help', registry=registry)
        self.assertRaises(ValueError, Gauge, 'h_bucket', 'help', registry=registry)
        self.assertRaises(ValueError, Gauge, 'h_created', 'help', registry=registry)
        # The name of the histogram itself is also taken.
        self.assertRaises(ValueError, Gauge, 'h', 'help', registry=registry)

        Info('i', 'help', registry=registry)
        self.assertRaises(ValueError, Gauge, 'i_info', 'help', registry=registry)

    def test_unregister_works(self):
        registry = CollectorRegistry()
        s = Summary('s', 'help', registry=registry)
        self.assertRaises(ValueError, Gauge, 's_count', 'help', registry=registry)
        registry.unregister(s)
        Gauge('s_count', 'help', registry=registry)

    def custom_collector(self, metric_family, registry):
        class CustomCollector:
            def collect(self):
                return [metric_family]
        registry.register(CustomCollector())

    def test_autodescribe_disabled_by_default(self):
        registry = CollectorRegistry()
        self.custom_collector(CounterMetricFamily('c_total', 'help', value=1), registry)
        self.custom_collector(CounterMetricFamily('c_total', 'help', value=1), registry)

        registry = CollectorRegistry(auto_describe=True)
        self.custom_collector(CounterMetricFamily('c_total', 'help', value=1), registry)
        self.assertRaises(ValueError, self.custom_collector, CounterMetricFamily('c_total', 'help', value=1), registry)

    def test_restricted_registry(self):
        registry = CollectorRegistry()
        Counter('c_total', 'help', registry=registry)
        Summary('s', 'help', registry=registry).observe(7)

        m = Metric('s', 'help', 'summary')
        m.samples = [Sample('s_sum', {}, 7)]
        self.assertEqual([m], list(registry.restricted_registry(['s_sum']).collect()))

    def test_restricted_registry_adds_new_metrics(self):
        registry = CollectorRegistry()
        Counter('c_total', 'help', registry=registry)
        restricted_registry = registry.restricted_registry(['s_sum'])
        Summary('s', 'help', registry=registry).observe(7)

        m = Metric('s', 'help', 'summary')
        m.samples = [Sample('s_sum', {}, 7)]
        self.assertEqual([m], list(restricted_registry.collect()))

    def test_target_info_injected(self):
        registry = CollectorRegistry(target_info={'foo': 'bar'})
        self.assertEqual(1, registry.get_sample_value('target_info', {'foo': 'bar'}))

    def test_target_info_duplicate_detected(self):
        registry = CollectorRegistry(target_info={'foo': 'bar'})
        self.assertRaises(ValueError, Info, 'target', 'help', registry=registry)

        registry.set_target_info({})
        i = Info('target', 'help', registry=registry)
        registry.set_target_info({})
        self.assertRaises(ValueError, Info, 'target', 'help', registry=registry)
        self.assertRaises(ValueError, registry.set_target_info, {'foo': 'bar'})
        registry.unregister(i)
        registry.set_target_info({'foo': 'bar'})

    def test_target_info_restricted_registry(self):
        registry = CollectorRegistry(target_info={'foo': 'bar'})
        Summary('s', 'help', registry=registry).observe(7)

        m = Metric('s', 'help', 'summary')
        m.samples = [Sample('s_sum', {}, 7)]
        self.assertEqual([m], list(registry.restricted_registry(['s_sum']).collect()))

        m = Metric('target', 'Target metadata', 'info')
        m.samples = [Sample('target_info', {'foo': 'bar'}, 1)]
        self.assertEqual([m], list(registry.restricted_registry(['target_info']).collect()))

    def test_restricted_registry_does_not_call_extra(self):
        from unittest.mock import MagicMock
        registry = CollectorRegistry()
        mock_collector = MagicMock()
        mock_collector.describe.return_value = [Metric('foo', 'help', 'summary')]
        registry.register(mock_collector)
        Summary('s', 'help', registry=registry).observe(7)

        m = Metric('s', 'help', 'summary')
        m.samples = [Sample('s_sum', {}, 7)]
        self.assertEqual([m], list(registry.restricted_registry(['s_sum']).collect()))
        mock_collector.collect.assert_not_called()

    def test_restricted_registry_does_not_yield_while_locked(self):
        registry = CollectorRegistry(target_info={'foo': 'bar'})
        Summary('s', 'help', registry=registry).observe(7)

        m = Metric('s', 'help', 'summary')
        m.samples = [Sample('s_sum', {}, 7)]
        self.assertEqual([m], list(registry.restricted_registry(['s_sum']).collect()))

        m = Metric('target', 'Target metadata', 'info')
        m.samples = [Sample('target_info', {'foo': 'bar'}, 1)]
        for _ in registry.restricted_registry(['target_info', 's_sum']).collect():
            self.assertFalse(registry._lock.locked())


if __name__ == '__main__':
    unittest.main()
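The metric-family classes exercised above are the building blocks for custom collectors. As a minimal illustrative sketch (not part of this test suite; the metric name my_queue_depth and the queues mapping are invented), a collector only needs a collect() method that yields metric families:

from prometheus_client.core import CollectorRegistry, GaugeMetricFamily


class QueueDepthCollector:
    """Illustrative collector: reports one gauge sample per queue."""

    def __init__(self, queues):
        self.queues = queues  # hypothetical mapping of queue name -> depth

    def collect(self):
        g = GaugeMetricFamily('my_queue_depth', 'Items waiting per queue', labels=['queue'])
        for name, depth in self.queues.items():
            g.add_metric([name], depth)
        yield g


registry = CollectorRegistry()
registry.register(QueueDepthCollector({'email': 3, 'sms': 0}))
assert registry.get_sample_value('my_queue_depth', {'queue': 'email'}) == 3.0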
hh_bucket{le="1.0"} 1.0 hh_bucket{le="2.5"} 1.0 hh_bucket{le="5.0"} 1.0 hh_bucket{le="7.5"} 1.0 hh_bucket{le="10.0"} 1.0 hh_bucket{le="+Inf"} 1.0 hh_count 1.0 hh_sum 0.05 # HELP hh_created A histogram # TYPE hh_created gauge hh_created 123.456 """, generate_latest(self.registry)) def test_gaugehistogram(self): self.custom_collector(GaugeHistogramMetricFamily('gh', 'help', buckets=[('1.0', 4), ('+Inf', 5)], gsum_value=7)) self.assertEqual(b"""# HELP gh help # TYPE gh histogram gh_bucket{le="1.0"} 4.0 gh_bucket{le="+Inf"} 5.0 # HELP gh_gcount help # TYPE gh_gcount gauge gh_gcount 5.0 # HELP gh_gsum help # TYPE gh_gsum gauge gh_gsum 7.0 """, generate_latest(self.registry)) def test_info(self): i = Info('ii', 'A info', ['a', 'b'], registry=self.registry) i.labels('c', 'd').info({'foo': 'bar'}) self.assertEqual(b'# HELP ii_info A info\n# TYPE ii_info gauge\nii_info{a="c",b="d",foo="bar"} 1.0\n', generate_latest(self.registry)) def test_enum(self): i = Enum('ee', 'An enum', ['a', 'b'], registry=self.registry, states=['foo', 'bar']) i.labels('c', 'd').state('bar') self.assertEqual( b'# HELP ee An enum\n# TYPE ee gauge\nee{a="c",b="d",ee="foo"} 0.0\nee{a="c",b="d",ee="bar"} 1.0\n', generate_latest(self.registry)) def test_unicode(self): c = Gauge('cc', '\u4500', ['l'], registry=self.registry) c.labels('\u4500').inc() self.assertEqual(b'# HELP cc \xe4\x94\x80\n# TYPE cc gauge\ncc{l="\xe4\x94\x80"} 1.0\n', generate_latest(self.registry)) def test_escaping(self): g = Gauge('cc', 'A\ngaug\\e', ['a'], registry=self.registry) g.labels('\\x\n"').inc(1) self.assertEqual(b'# HELP cc A\\ngaug\\\\e\n# TYPE cc gauge\ncc{a="\\\\x\\n\\""} 1.0\n', generate_latest(self.registry)) def test_nonnumber(self): class MyNumber: def __repr__(self): return "MyNumber(123)" def __float__(self): return 123.0 class MyCollector: def collect(self): metric = Metric("nonnumber", "Non number", 'untyped') metric.add_sample("nonnumber", {}, MyNumber()) yield metric self.registry.register(MyCollector()) self.assertEqual(b'# HELP nonnumber Non number\n# TYPE nonnumber untyped\nnonnumber 123.0\n', generate_latest(self.registry)) def test_timestamp(self): class MyCollector: def collect(self): metric = Metric("ts", "help", 'untyped') metric.add_sample("ts", {"foo": "a"}, 0, 123.456) metric.add_sample("ts", {"foo": "b"}, 0, -123.456) metric.add_sample("ts", {"foo": "c"}, 0, 123) metric.add_sample("ts", {"foo": "d"}, 0, Timestamp(123, 456000000)) metric.add_sample("ts", {"foo": "e"}, 0, Timestamp(123, 456000)) metric.add_sample("ts", {"foo": "f"}, 0, Timestamp(123, 456)) yield metric self.registry.register(MyCollector()) self.assertEqual(b"""# HELP ts help # TYPE ts untyped ts{foo="a"} 0.0 123456 ts{foo="b"} 0.0 -123456 ts{foo="c"} 0.0 123000 ts{foo="d"} 0.0 123456 ts{foo="e"} 0.0 123000 ts{foo="f"} 0.0 123000 """, generate_latest(self.registry)) class TestPushGateway(unittest.TestCase): def setUp(self): redirect_flag = 'testFlag' self.redirect_flag = redirect_flag # preserve a copy for downstream test assertions self.registry = CollectorRegistry() self.counter = Gauge('g', 'help', registry=self.registry) self.requests = requests = [] class TestHandler(BaseHTTPRequestHandler): def do_PUT(self): if 'with_basic_auth' in self.requestline and self.headers['authorization'] != 'Basic Zm9vOmJhcg==': self.send_response(401) elif 'redirect' in self.requestline and redirect_flag not in self.requestline: # checks for an initial test request with 'redirect' but without the redirect_flag, # and simulates a redirect to a url with the redirect_flag 
(which will produce a 201) self.send_response(301) self.send_header('Location', getattr(self, 'redirect_address', None)) else: self.send_response(201) length = int(self.headers['content-length']) requests.append((self, self.rfile.read(length))) self.end_headers() do_POST = do_PUT do_DELETE = do_PUT # set up a separate server to serve a fake redirected request. # the redirected URL will have `redirect_flag` added to it, # which will cause the request handler to return 201. httpd_redirect = HTTPServer(('localhost', 0), TestHandler) self.redirect_address = TestHandler.redirect_address = \ f'http://localhost:{httpd_redirect.server_address[1]}/{redirect_flag}' class TestRedirectServer(threading.Thread): def run(self): httpd_redirect.handle_request() self.redirect_server = TestRedirectServer() self.redirect_server.daemon = True self.redirect_server.start() # set up the normal server to serve the example requests across test cases. httpd = HTTPServer(('localhost', 0), TestHandler) self.address = f'http://localhost:{httpd.server_address[1]}' class TestServer(threading.Thread): def run(self): httpd.handle_request() self.server = TestServer() self.server.daemon = True self.server.start() def test_push(self): push_to_gateway(self.address, "my_job", self.registry) self.assertEqual(self.requests[0][0].command, 'PUT') self.assertEqual(self.requests[0][0].path, '/metrics/job/my_job') self.assertEqual(self.requests[0][0].headers.get('content-type'), CONTENT_TYPE_LATEST) self.assertEqual(self.requests[0][1], b'# HELP g help\n# TYPE g gauge\ng 0.0\n') def test_push_schemeless_url(self): push_to_gateway(self.address.replace('http://', ''), "my_job", self.registry) self.assertEqual(self.requests[0][0].command, 'PUT') self.assertEqual(self.requests[0][0].path, '/metrics/job/my_job') self.assertEqual(self.requests[0][0].headers.get('content-type'), CONTENT_TYPE_LATEST) self.assertEqual(self.requests[0][1], b'# HELP g help\n# TYPE g gauge\ng 0.0\n') def test_push_with_groupingkey(self): push_to_gateway(self.address, "my_job", self.registry, {'a': 9}) self.assertEqual(self.requests[0][0].command, 'PUT') self.assertEqual(self.requests[0][0].path, '/metrics/job/my_job/a/9') self.assertEqual(self.requests[0][0].headers.get('content-type'), CONTENT_TYPE_LATEST) self.assertEqual(self.requests[0][1], b'# HELP g help\n# TYPE g gauge\ng 0.0\n') def test_push_with_groupingkey_empty_label(self): push_to_gateway(self.address, "my_job", self.registry, {'a': ''}) self.assertEqual(self.requests[0][0].command, 'PUT') self.assertEqual(self.requests[0][0].path, '/metrics/job/my_job/a@base64/=') self.assertEqual(self.requests[0][0].headers.get('content-type'), CONTENT_TYPE_LATEST) self.assertEqual(self.requests[0][1], b'# HELP g help\n# TYPE g gauge\ng 0.0\n') def test_push_with_complex_groupingkey(self): push_to_gateway(self.address, "my_job", self.registry, {'a': 9, 'b': 'a/ z'}) self.assertEqual(self.requests[0][0].command, 'PUT') self.assertEqual(self.requests[0][0].path, '/metrics/job/my_job/a/9/b@base64/YS8geg==') self.assertEqual(self.requests[0][0].headers.get('content-type'), CONTENT_TYPE_LATEST) self.assertEqual(self.requests[0][1], b'# HELP g help\n# TYPE g gauge\ng 0.0\n') def test_push_with_complex_job(self): push_to_gateway(self.address, "my/job", self.registry) self.assertEqual(self.requests[0][0].command, 'PUT') self.assertEqual(self.requests[0][0].path, '/metrics/job@base64/bXkvam9i') self.assertEqual(self.requests[0][0].headers.get('content-type'), CONTENT_TYPE_LATEST) self.assertEqual(self.requests[0][1], b'# 
HELP g help\n# TYPE g gauge\ng 0.0\n') def test_pushadd(self): pushadd_to_gateway(self.address, "my_job", self.registry) self.assertEqual(self.requests[0][0].command, 'POST') self.assertEqual(self.requests[0][0].path, '/metrics/job/my_job') self.assertEqual(self.requests[0][0].headers.get('content-type'), CONTENT_TYPE_LATEST) self.assertEqual(self.requests[0][1], b'# HELP g help\n# TYPE g gauge\ng 0.0\n') def test_pushadd_with_groupingkey(self): pushadd_to_gateway(self.address, "my_job", self.registry, {'a': 9}) self.assertEqual(self.requests[0][0].command, 'POST') self.assertEqual(self.requests[0][0].path, '/metrics/job/my_job/a/9') self.assertEqual(self.requests[0][0].headers.get('content-type'), CONTENT_TYPE_LATEST) self.assertEqual(self.requests[0][1], b'# HELP g help\n# TYPE g gauge\ng 0.0\n') def test_delete(self): delete_from_gateway(self.address, "my_job") self.assertEqual(self.requests[0][0].command, 'DELETE') self.assertEqual(self.requests[0][0].path, '/metrics/job/my_job') self.assertEqual(self.requests[0][0].headers.get('content-type'), CONTENT_TYPE_LATEST) self.assertEqual(self.requests[0][1], b'') def test_delete_with_groupingkey(self): delete_from_gateway(self.address, "my_job", {'a': 9}) self.assertEqual(self.requests[0][0].command, 'DELETE') self.assertEqual(self.requests[0][0].path, '/metrics/job/my_job/a/9') self.assertEqual(self.requests[0][0].headers.get('content-type'), CONTENT_TYPE_LATEST) self.assertEqual(self.requests[0][1], b'') def test_push_with_handler(self): def my_test_handler(url, method, timeout, headers, data): headers.append(['X-Test-Header', 'foobar']) # Handler should be passed sane default timeout self.assertEqual(timeout, 30) return default_handler(url, method, timeout, headers, data) push_to_gateway(self.address, "my_job", self.registry, handler=my_test_handler) self.assertEqual(self.requests[0][0].command, 'PUT') self.assertEqual(self.requests[0][0].path, '/metrics/job/my_job') self.assertEqual(self.requests[0][0].headers.get('content-type'), CONTENT_TYPE_LATEST) self.assertEqual(self.requests[0][0].headers.get('x-test-header'), 'foobar') self.assertEqual(self.requests[0][1], b'# HELP g help\n# TYPE g gauge\ng 0.0\n') def test_push_with_basic_auth_handler(self): def my_auth_handler(url, method, timeout, headers, data): return basic_auth_handler(url, method, timeout, headers, data, "foo", "bar") push_to_gateway(self.address, "my_job_with_basic_auth", self.registry, handler=my_auth_handler) self.assertEqual(self.requests[0][0].command, 'PUT') self.assertEqual(self.requests[0][0].path, '/metrics/job/my_job_with_basic_auth') self.assertEqual(self.requests[0][0].headers.get('content-type'), CONTENT_TYPE_LATEST) self.assertEqual(self.requests[0][1], b'# HELP g help\n# TYPE g gauge\ng 0.0\n') def test_push_with_tls_auth_handler(self): def my_auth_handler(url, method, timeout, headers, data): certs_dir = os.path.join(os.path.dirname(os.path.realpath(__file__)), 'certs') return tls_auth_handler(url, method, timeout, headers, data, os.path.join(certs_dir, "cert.pem"), os.path.join(certs_dir, "key.pem")) push_to_gateway(self.address, "my_job_with_tls_auth", self.registry, handler=my_auth_handler) self.assertEqual(self.requests[0][0].command, 'PUT') self.assertEqual(self.requests[0][0].path, '/metrics/job/my_job_with_tls_auth') self.assertEqual(self.requests[0][0].headers.get('content-type'), CONTENT_TYPE_LATEST) self.assertEqual(self.requests[0][1], b'# HELP g help\n# TYPE g gauge\ng 0.0\n') def test_push_with_redirect_handler(self): def 
my_redirect_handler(url, method, timeout, headers, data): return passthrough_redirect_handler(url, method, timeout, headers, data) push_to_gateway(self.address, "my_job_with_redirect", self.registry, handler=my_redirect_handler) self.assertEqual(self.requests[0][0].command, 'PUT') self.assertEqual(self.requests[0][0].path, '/metrics/job/my_job_with_redirect') self.assertEqual(self.requests[0][0].headers.get('content-type'), CONTENT_TYPE_LATEST) self.assertEqual(self.requests[0][1], b'# HELP g help\n# TYPE g gauge\ng 0.0\n') # ensure the redirect preserved request settings from the initial request. self.assertEqual(self.requests[0][0].command, self.requests[1][0].command) self.assertEqual( self.requests[0][0].headers.get('content-type'), self.requests[1][0].headers.get('content-type') ) self.assertEqual(self.requests[0][1], self.requests[1][1]) # ensure the redirect took place at the expected redirect location. self.assertEqual(self.requests[1][0].path, "/" + self.redirect_flag) def test_push_with_trailing_slash(self): address = self.address + '/' push_to_gateway(address, "my_job_with_trailing_slash", self.registry) self.assertNotIn('//', self.requests[0][0].path) def test_instance_ip_grouping_key(self): self.assertTrue('' != instance_ip_grouping_key()['instance']) def test_metrics_handler(self): handler = MetricsHandler.factory(self.registry) self.assertEqual(handler.registry, self.registry) def test_metrics_handler_subclassing(self): subclass = type('MetricsHandlerSubclass', (MetricsHandler, object), {}) handler = subclass.factory(self.registry) self.assertTrue(issubclass(handler, (MetricsHandler, subclass))) @pytest.fixture def registry(): return core.CollectorRegistry() class Collector: def __init__(self, metric_family, *values): self.metric_family = metric_family self.values = values def collect(self): self.metric_family.add_metric([], *self.values) return [self.metric_family] def _expect_metric_exception(registry, expected_error): try: generate_latest(registry) except expected_error as exception: assert isinstance(exception.args[-1], core.Metric) # Got a valid error as expected, return quietly return raise RuntimeError('Expected exception not raised') @pytest.mark.parametrize('MetricFamily', [ core.CounterMetricFamily, core.GaugeMetricFamily, ]) @pytest.mark.parametrize('value,error', [ (None, TypeError), ('', ValueError), ('x', ValueError), ([], TypeError), ({}, TypeError), ]) def test_basic_metric_families(registry, MetricFamily, value, error): metric_family = MetricFamily(MetricFamily.__name__, 'help') registry.register(Collector(metric_family, value)) _expect_metric_exception(registry, error) @pytest.mark.parametrize('count_value,sum_value,error', [ (None, 0, TypeError), (0, None, TypeError), ('', 0, ValueError), (0, '', ValueError), ([], 0, TypeError), (0, [], TypeError), ({}, 0, TypeError), (0, {}, TypeError), ]) def test_summary_metric_family(registry, count_value, sum_value, error): metric_family = core.SummaryMetricFamily('summary', 'help') registry.register(Collector(metric_family, count_value, sum_value)) _expect_metric_exception(registry, error) @pytest.mark.parametrize('MetricFamily', [ core.GaugeHistogramMetricFamily, ]) @pytest.mark.parametrize('buckets,sum_value,error', [ ([('spam', 0), ('eggs', 0)], None, TypeError), ([('spam', 0), ('eggs', None)], 0, TypeError), ([('spam', 0), (None, 0)], 0, AttributeError), ([('spam', None), ('eggs', 0)], 0, TypeError), ([(None, 0), ('eggs', 0)], 0, AttributeError), ([('spam', 0), ('eggs', 0)], '', ValueError), ([('spam', 0), 
('eggs', '')], 0, ValueError), ([('spam', ''), ('eggs', 0)], 0, ValueError), ]) def test_histogram_metric_families(MetricFamily, registry, buckets, sum_value, error): metric_family = MetricFamily(MetricFamily.__name__, 'help') registry.register(Collector(metric_family, buckets, sum_value)) _expect_metric_exception(registry, error) def test_choose_encoder(): assert choose_encoder(None) == (generate_latest, CONTENT_TYPE_LATEST) assert choose_encoder(CONTENT_TYPE_LATEST) == (generate_latest, CONTENT_TYPE_LATEST) assert choose_encoder(openmetrics.CONTENT_TYPE_LATEST) == (openmetrics.generate_latest, openmetrics.CONTENT_TYPE_LATEST) if __name__ == '__main__': unittest.main() python-prometheus-client-0.19.0+ds1/tests/test_gc_collector.py000066400000000000000000000033771454301344400245000ustar00rootroot00000000000000import gc import platform import unittest from prometheus_client import CollectorRegistry, GCCollector SKIP = platform.python_implementation() != "CPython" @unittest.skipIf(SKIP, "Test requires CPython") class TestGCCollector(unittest.TestCase): def setUp(self): gc.disable() gc.collect() self.registry = CollectorRegistry() def test_working(self): GCCollector(registry=self.registry) self.registry.collect() before = self.registry.get_sample_value('python_gc_objects_collected_total', labels={"generation": "0"}) # add targets for gc a = [] a.append(a) del a b = [] b.append(b) del b gc.collect(0) self.registry.collect() after = self.registry.get_sample_value('python_gc_objects_collected_total', labels={"generation": "0"}) self.assertEqual(2, after - before) self.assertEqual(0, self.registry.get_sample_value( 'python_gc_objects_uncollectable_total', labels={"generation": "0"})) def test_empty(self): GCCollector(registry=self.registry) self.registry.collect() before = self.registry.get_sample_value('python_gc_objects_collected_total', labels={"generation": "0"}) gc.collect(0) self.registry.collect() after = self.registry.get_sample_value('python_gc_objects_collected_total', labels={"generation": "0"}) self.assertEqual(0, after - before) def tearDown(self): gc.enable() python-prometheus-client-0.19.0+ds1/tests/test_graphite_bridge.py000066400000000000000000000057561454301344400251630ustar00rootroot00000000000000import socketserver as SocketServer import threading import unittest from prometheus_client import CollectorRegistry, Gauge from prometheus_client.bridge.graphite import GraphiteBridge def fake_timer(): return 1434898897.5 class TestGraphiteBridge(unittest.TestCase): def setUp(self): self.registry = CollectorRegistry() self.data = '' class TCPHandler(SocketServer.BaseRequestHandler): def handle(s): self.data = s.request.recv(1024) server = SocketServer.TCPServer(('', 0), TCPHandler) class ServingThread(threading.Thread): def run(self): server.handle_request() server.socket.close() self.t = ServingThread() self.t.start() # Explicitly use localhost as the target host, since connecting to 0.0.0.0 fails on Windows self.address = ('localhost', server.server_address[1]) self.gb = GraphiteBridge(self.address, self.registry, _timer=fake_timer) def _use_tags(self): self.gb = GraphiteBridge(self.address, self.registry, tags=True, _timer=fake_timer) def test_nolabels(self): gauge = Gauge('g', 'help', registry=self.registry) gauge.inc() self.gb.push() self.t.join() self.assertEqual(b'g 1.0 1434898897\n', self.data) def test_labels(self): labels = Gauge('labels', 'help', ['a', 'b'], registry=self.registry) labels.labels('c', 'd').inc() self.gb.push() self.t.join() self.assertEqual(b'labels.a.c.b.d 
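For orientation alongside the gateway tests above, a typical push looks like this minimal sketch; localhost:9091 is the conventional Pushgateway address and a gateway must actually be listening there for the call to succeed:

from prometheus_client import CollectorRegistry, Gauge, push_to_gateway

registry = CollectorRegistry()
duration = Gauge('my_job_duration_seconds', 'Duration of my batch job', registry=registry)
duration.set(4.2)
# Replaces all metrics for this job on the gateway (HTTP PUT), as exercised
# by test_push above; use pushadd_to_gateway for an additive POST instead.
push_to_gateway('localhost:9091', job='my_batch_job', registry=registry)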
python-prometheus-client-0.19.0+ds1/tests/test_gc_collector.py000066400000000000000000000033771454301344400245000ustar00rootroot00000000000000import gc
import platform
import unittest

from prometheus_client import CollectorRegistry, GCCollector

SKIP = platform.python_implementation() != "CPython"


@unittest.skipIf(SKIP, "Test requires CPython")
class TestGCCollector(unittest.TestCase):
    def setUp(self):
        gc.disable()
        gc.collect()
        self.registry = CollectorRegistry()

    def test_working(self):
        GCCollector(registry=self.registry)
        self.registry.collect()
        before = self.registry.get_sample_value('python_gc_objects_collected_total',
                                                labels={"generation": "0"})

        # add targets for gc
        a = []
        a.append(a)
        del a
        b = []
        b.append(b)
        del b

        gc.collect(0)
        self.registry.collect()

        after = self.registry.get_sample_value('python_gc_objects_collected_total',
                                               labels={"generation": "0"})
        self.assertEqual(2, after - before)
        self.assertEqual(0, self.registry.get_sample_value(
            'python_gc_objects_uncollectable_total',
            labels={"generation": "0"}))

    def test_empty(self):
        GCCollector(registry=self.registry)
        self.registry.collect()
        before = self.registry.get_sample_value('python_gc_objects_collected_total',
                                                labels={"generation": "0"})
        gc.collect(0)
        self.registry.collect()

        after = self.registry.get_sample_value('python_gc_objects_collected_total',
                                               labels={"generation": "0"})
        self.assertEqual(0, after - before)

    def tearDown(self):
        gc.enable()
python-prometheus-client-0.19.0+ds1/tests/test_graphite_bridge.py000066400000000000000000000057561454301344400251610ustar00rootroot00000000000000import socketserver as SocketServer
import threading
import unittest

from prometheus_client import CollectorRegistry, Gauge
from prometheus_client.bridge.graphite import GraphiteBridge


def fake_timer():
    return 1434898897.5


class TestGraphiteBridge(unittest.TestCase):
    def setUp(self):
        self.registry = CollectorRegistry()

        self.data = ''

        class TCPHandler(SocketServer.BaseRequestHandler):
            def handle(s):
                self.data = s.request.recv(1024)

        server = SocketServer.TCPServer(('', 0), TCPHandler)

        class ServingThread(threading.Thread):
            def run(self):
                server.handle_request()
                server.socket.close()

        self.t = ServingThread()
        self.t.start()

        # Explicitly use localhost as the target host, since connecting to 0.0.0.0 fails on Windows
        self.address = ('localhost', server.server_address[1])
        self.gb = GraphiteBridge(self.address, self.registry, _timer=fake_timer)

    def _use_tags(self):
        self.gb = GraphiteBridge(self.address, self.registry, tags=True, _timer=fake_timer)

    def test_nolabels(self):
        gauge = Gauge('g', 'help', registry=self.registry)
        gauge.inc()

        self.gb.push()
        self.t.join()

        self.assertEqual(b'g 1.0 1434898897\n', self.data)

    def test_labels(self):
        labels = Gauge('labels', 'help', ['a', 'b'], registry=self.registry)
        labels.labels('c', 'd').inc()

        self.gb.push()
        self.t.join()

        self.assertEqual(b'labels.a.c.b.d 1.0 1434898897\n', self.data)

    def test_labels_tags(self):
        self._use_tags()

        labels = Gauge('labels', 'help', ['a', 'b'], registry=self.registry)
        labels.labels('c', 'd').inc()

        self.gb.push()
        self.t.join()

        self.assertEqual(b'labels;a=c;b=d 1.0 1434898897\n', self.data)

    def test_prefix(self):
        labels = Gauge('labels', 'help', ['a', 'b'], registry=self.registry)
        labels.labels('c', 'd').inc()

        self.gb.push(prefix='pre.fix')
        self.t.join()

        self.assertEqual(b'pre.fix.labels.a.c.b.d 1.0 1434898897\n', self.data)

    def test_prefix_tags(self):
        self._use_tags()

        labels = Gauge('labels', 'help', ['a', 'b'], registry=self.registry)
        labels.labels('c', 'd').inc()

        self.gb.push(prefix='pre.fix')
        self.t.join()

        self.assertEqual(b'pre.fix.labels;a=c;b=d 1.0 1434898897\n', self.data)

    def test_sanitizing(self):
        labels = Gauge('labels', 'help', ['a'], registry=self.registry)
        labels.labels('c.:8').inc()

        self.gb.push()
        self.t.join()

        self.assertEqual(b'labels.a.c__8 1.0 1434898897\n', self.data)

    def test_sanitizing_tags(self):
        self._use_tags()

        labels = Gauge('labels', 'help', ['a'], registry=self.registry)
        labels.labels('c.:8').inc()

        self.gb.push()
        self.t.join()

        self.assertEqual(b'labels;a=c__8 1.0 1434898897\n', self.data)
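A minimal usage sketch for the bridge tested above (the Graphite host is a placeholder; push() opens a TCP connection to it, so nothing is sent unless a plaintext listener is actually running there):

from prometheus_client import CollectorRegistry, Gauge
from prometheus_client.bridge.graphite import GraphiteBridge

registry = CollectorRegistry()
Gauge('queue_depth', 'Items waiting', registry=registry).set(7)
# Sends 'myapp.queue_depth 7.0 <timestamp>' in Graphite's plaintext protocol;
# 2003 is the conventional Carbon plaintext port.
gb = GraphiteBridge(('graphite.example.org', 2003), registry)
gb.push(prefix='myapp')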
python-prometheus-client-0.19.0+ds1/tests/test_multiprocess.py000066400000000000000000000430531454301344400245650ustar00rootroot00000000000000import glob
import os
import shutil
import tempfile
import unittest
import warnings

from prometheus_client import mmap_dict, values
from prometheus_client.core import (
    CollectorRegistry, Counter, Gauge, Histogram, Sample, Summary,
)
from prometheus_client.multiprocess import (
    mark_process_dead, MultiProcessCollector,
)
from prometheus_client.values import (
    get_value_class, MultiProcessValue, MutexValue,
)


class TestMultiProcessDeprecation(unittest.TestCase):
    def setUp(self):
        self.tempdir = tempfile.mkdtemp()

    def tearDown(self):
        os.environ.pop('prometheus_multiproc_dir', None)
        os.environ.pop('PROMETHEUS_MULTIPROC_DIR', None)
        values.ValueClass = MutexValue
        shutil.rmtree(self.tempdir)

    def test_deprecation_warning(self):
        os.environ['prometheus_multiproc_dir'] = self.tempdir
        with warnings.catch_warnings(record=True) as w:
            values.ValueClass = get_value_class()
            registry = CollectorRegistry()
            collector = MultiProcessCollector(registry)
            Counter('c', 'help', registry=None)

        assert os.environ['PROMETHEUS_MULTIPROC_DIR'] == self.tempdir
        assert len(w) == 1
        assert issubclass(w[-1].category, DeprecationWarning)
        assert "PROMETHEUS_MULTIPROC_DIR" in str(w[-1].message)

    def test_mark_process_dead_respects_lowercase(self):
        os.environ['prometheus_multiproc_dir'] = self.tempdir
        # Just test that this does not raise with a lowercase env var. The
        # logic is tested elsewhere.
        mark_process_dead(123)


class TestMultiProcess(unittest.TestCase):
    def setUp(self):
        self.tempdir = tempfile.mkdtemp()
        os.environ['PROMETHEUS_MULTIPROC_DIR'] = self.tempdir
        values.ValueClass = MultiProcessValue(lambda: 123)
        self.registry = CollectorRegistry()
        self.collector = MultiProcessCollector(self.registry)

    @property
    def _value_class(self):
        return

    def tearDown(self):
        del os.environ['PROMETHEUS_MULTIPROC_DIR']
        shutil.rmtree(self.tempdir)
        values.ValueClass = MutexValue

    def test_counter_adds(self):
        c1 = Counter('c', 'help', registry=None)
        values.ValueClass = MultiProcessValue(lambda: 456)
        c2 = Counter('c', 'help', registry=None)
        self.assertEqual(0, self.registry.get_sample_value('c_total'))
        c1.inc(1)
        c2.inc(2)
        self.assertEqual(3, self.registry.get_sample_value('c_total'))

    def test_summary_adds(self):
        s1 = Summary('s', 'help', registry=None)
        values.ValueClass = MultiProcessValue(lambda: 456)
        s2 = Summary('s', 'help', registry=None)
        self.assertEqual(0, self.registry.get_sample_value('s_count'))
        self.assertEqual(0, self.registry.get_sample_value('s_sum'))
        s1.observe(1)
        s2.observe(2)
        self.assertEqual(2, self.registry.get_sample_value('s_count'))
        self.assertEqual(3, self.registry.get_sample_value('s_sum'))

    def test_histogram_adds(self):
        h1 = Histogram('h', 'help', registry=None)
        values.ValueClass = MultiProcessValue(lambda: 456)
        h2 = Histogram('h', 'help', registry=None)
        self.assertEqual(0, self.registry.get_sample_value('h_count'))
        self.assertEqual(0, self.registry.get_sample_value('h_sum'))
        self.assertEqual(0, self.registry.get_sample_value('h_bucket', {'le': '5.0'}))
        h1.observe(1)
        h2.observe(2)
        self.assertEqual(2, self.registry.get_sample_value('h_count'))
        self.assertEqual(3, self.registry.get_sample_value('h_sum'))
        self.assertEqual(2, self.registry.get_sample_value('h_bucket', {'le': '5.0'}))

    def test_gauge_all(self):
        g1 = Gauge('g', 'help', registry=None)
        values.ValueClass = MultiProcessValue(lambda: 456)
        g2 = Gauge('g', 'help', registry=None)
        self.assertEqual(0, self.registry.get_sample_value('g', {'pid': '123'}))
        self.assertEqual(0, self.registry.get_sample_value('g', {'pid': '456'}))
        g1.set(1)
        g2.set(2)
        mark_process_dead(123)
        self.assertEqual(1, self.registry.get_sample_value('g', {'pid': '123'}))
        self.assertEqual(2, self.registry.get_sample_value('g', {'pid': '456'}))

    def test_gauge_liveall(self):
        g1 = Gauge('g', 'help', registry=None, multiprocess_mode='liveall')
        values.ValueClass = MultiProcessValue(lambda: 456)
        g2 = Gauge('g', 'help', registry=None, multiprocess_mode='liveall')
        self.assertEqual(0, self.registry.get_sample_value('g', {'pid': '123'}))
        self.assertEqual(0, self.registry.get_sample_value('g', {'pid': '456'}))
        g1.set(1)
        g2.set(2)
        self.assertEqual(1, self.registry.get_sample_value('g', {'pid': '123'}))
        self.assertEqual(2, self.registry.get_sample_value('g', {'pid': '456'}))
        mark_process_dead(123, os.environ['PROMETHEUS_MULTIPROC_DIR'])
        self.assertEqual(None, self.registry.get_sample_value('g', {'pid': '123'}))
        self.assertEqual(2, self.registry.get_sample_value('g', {'pid': '456'}))

    def test_gauge_min(self):
        g1 = Gauge('g', 'help', registry=None, multiprocess_mode='min')
        values.ValueClass = MultiProcessValue(lambda: 456)
        g2 = Gauge('g', 'help', registry=None, multiprocess_mode='min')
        self.assertEqual(0, self.registry.get_sample_value('g'))
        g1.set(1)
        g2.set(2)
        self.assertEqual(1, self.registry.get_sample_value('g'))

    def test_gauge_livemin(self):
        g1 = Gauge('g', 'help', registry=None, multiprocess_mode='livemin')
        values.ValueClass = MultiProcessValue(lambda: 456)
        g2 = Gauge('g', 'help', registry=None, multiprocess_mode='livemin')
        self.assertEqual(0, self.registry.get_sample_value('g'))
        g1.set(1)
        g2.set(2)
        self.assertEqual(1, self.registry.get_sample_value('g'))
        mark_process_dead(123, os.environ['PROMETHEUS_MULTIPROC_DIR'])
        self.assertEqual(2, self.registry.get_sample_value('g'))

    def test_gauge_max(self):
        g1 = Gauge('g', 'help', registry=None, multiprocess_mode='max')
        values.ValueClass = MultiProcessValue(lambda: 456)
        g2 = Gauge('g', 'help', registry=None, multiprocess_mode='max')
        self.assertEqual(0, self.registry.get_sample_value('g'))
        g1.set(1)
        g2.set(2)
        self.assertEqual(2, self.registry.get_sample_value('g'))

    def test_gauge_livemax(self):
        g1 = Gauge('g', 'help', registry=None, multiprocess_mode='livemax')
        values.ValueClass = MultiProcessValue(lambda: 456)
        g2 = Gauge('g', 'help', registry=None, multiprocess_mode='livemax')
        self.assertEqual(0, self.registry.get_sample_value('g'))
        g1.set(2)
        g2.set(1)
        self.assertEqual(2, self.registry.get_sample_value('g'))
        mark_process_dead(123, os.environ['PROMETHEUS_MULTIPROC_DIR'])
        self.assertEqual(1, self.registry.get_sample_value('g'))

    def test_gauge_sum(self):
        g1 = Gauge('g', 'help', registry=None, multiprocess_mode='sum')
        values.ValueClass = MultiProcessValue(lambda: 456)
        g2 = Gauge('g', 'help', registry=None, multiprocess_mode='sum')
        self.assertEqual(0, self.registry.get_sample_value('g'))
        g1.set(1)
        g2.set(2)
        self.assertEqual(3, self.registry.get_sample_value('g'))
        mark_process_dead(123, os.environ['PROMETHEUS_MULTIPROC_DIR'])
        self.assertEqual(3, self.registry.get_sample_value('g'))

    def test_gauge_livesum(self):
        g1 = Gauge('g', 'help', registry=None, multiprocess_mode='livesum')
        values.ValueClass = MultiProcessValue(lambda: 456)
        g2 = Gauge('g', 'help', registry=None, multiprocess_mode='livesum')
        self.assertEqual(0, self.registry.get_sample_value('g'))
        g1.set(1)
        g2.set(2)
        self.assertEqual(3, self.registry.get_sample_value('g'))
        mark_process_dead(123, os.environ['PROMETHEUS_MULTIPROC_DIR'])
        self.assertEqual(2, self.registry.get_sample_value('g'))

    def test_gauge_mostrecent(self):
        g1 = Gauge('g', 'help', registry=None, multiprocess_mode='mostrecent')
        values.ValueClass = MultiProcessValue(lambda: 456)
        g2 = Gauge('g', 'help', registry=None, multiprocess_mode='mostrecent')
        g2.set(2)
        g1.set(1)
        self.assertEqual(1, self.registry.get_sample_value('g'))
        mark_process_dead(123, os.environ['PROMETHEUS_MULTIPROC_DIR'])
        self.assertEqual(1, self.registry.get_sample_value('g'))

    def test_gauge_livemostrecent(self):
        g1 = Gauge('g', 'help', registry=None, multiprocess_mode='livemostrecent')
        values.ValueClass = MultiProcessValue(lambda: 456)
        g2 = Gauge('g', 'help', registry=None, multiprocess_mode='livemostrecent')
        g2.set(2)
        g1.set(1)
        self.assertEqual(1, self.registry.get_sample_value('g'))
        mark_process_dead(123, os.environ['PROMETHEUS_MULTIPROC_DIR'])
        self.assertEqual(2, self.registry.get_sample_value('g'))

    def test_namespace_subsystem(self):
        c1 = Counter('c', 'help', registry=None, namespace='ns', subsystem='ss')
        c1.inc(1)
        self.assertEqual(1, self.registry.get_sample_value('ns_ss_c_total'))

    def test_counter_across_forks(self):
        pid = 0
        values.ValueClass = MultiProcessValue(lambda: pid)
        c1 = Counter('c', 'help', registry=None)
        self.assertEqual(0, self.registry.get_sample_value('c_total'))
        c1.inc(1)
        c1.inc(1)
        pid = 1
        c1.inc(1)
        self.assertEqual(3, self.registry.get_sample_value('c_total'))
        self.assertEqual(1, c1._value.get())

    def test_initialization_detects_pid_change(self):
        pid = 0
        values.ValueClass = MultiProcessValue(lambda: pid)

        # can not inspect the files cache directly, as it's a closure, so we
        # check for the actual files themselves
        def files():
            fs = os.listdir(os.environ['PROMETHEUS_MULTIPROC_DIR'])
            fs.sort()
            return fs

        c1 = Counter('c1', 'c1', registry=None)
        self.assertEqual(files(), ['counter_0.db'])
        c2 = Counter('c2', 'c2', registry=None)
        self.assertEqual(files(), ['counter_0.db'])
        pid = 1
        c3 = Counter('c3', 'c3', registry=None)
        self.assertEqual(files(), ['counter_0.db', 'counter_1.db'])

    def test_collect(self):
        pid = 0
        values.ValueClass = MultiProcessValue(lambda: pid)
        labels = {i: i for i in 'abcd'}

        def add_label(key, value):
            l = labels.copy()
            l[key] = value
            return l

        c = Counter('c', 'help', labelnames=labels.keys(), registry=None)
        g = Gauge('g', 'help', labelnames=labels.keys(), registry=None)
        h = Histogram('h', 'help', labelnames=labels.keys(), registry=None)

        c.labels(**labels).inc(1)
        g.labels(**labels).set(1)
        h.labels(**labels).observe(1)

        pid = 1

        c.labels(**labels).inc(1)
        g.labels(**labels).set(1)
        h.labels(**labels).observe(5)

        metrics = {m.name: m for m in self.collector.collect()}

        self.assertEqual(
            metrics['c'].samples, [Sample('c_total', labels, 2.0)]
        )
        metrics['g'].samples.sort(key=lambda x: x[1]['pid'])
        self.assertEqual(metrics['g'].samples, [
            Sample('g', add_label('pid', '0'), 1.0),
            Sample('g', add_label('pid', '1'), 1.0),
        ])

        metrics['h'].samples.sort(
            key=lambda x: (x[0], float(x[1].get('le', 0)))
        )
        expected_histogram = [
            Sample('h_bucket', add_label('le', '0.005'), 0.0),
            Sample('h_bucket', add_label('le', '0.01'), 0.0),
            Sample('h_bucket', add_label('le', '0.025'), 0.0),
            Sample('h_bucket', add_label('le', '0.05'), 0.0),
            Sample('h_bucket', add_label('le', '0.075'), 0.0),
            Sample('h_bucket', add_label('le', '0.1'), 0.0),
            Sample('h_bucket', add_label('le', '0.25'), 0.0),
            Sample('h_bucket', add_label('le', '0.5'), 0.0),
            Sample('h_bucket', add_label('le', '0.75'), 0.0),
            Sample('h_bucket', add_label('le', '1.0'), 1.0),
            Sample('h_bucket', add_label('le', '2.5'), 1.0),
            Sample('h_bucket', add_label('le', '5.0'), 2.0),
            Sample('h_bucket', add_label('le', '7.5'), 2.0),
            Sample('h_bucket', add_label('le', '10.0'), 2.0),
            Sample('h_bucket', add_label('le', '+Inf'), 2.0),
            Sample('h_count', labels, 2.0),
            Sample('h_sum', labels, 6.0),
        ]

        self.assertEqual(metrics['h'].samples, expected_histogram)

    def test_collect_preserves_help(self):
        pid = 0
        values.ValueClass = MultiProcessValue(lambda: pid)
        labels = {i: i for i in 'abcd'}

        c = Counter('c', 'c help', labelnames=labels.keys(), registry=None)
        g = Gauge('g', 'g help', labelnames=labels.keys(), registry=None)
        h = Histogram('h', 'h help', labelnames=labels.keys(), registry=None)

        c.labels(**labels).inc(1)
        g.labels(**labels).set(1)
        h.labels(**labels).observe(1)

        pid = 1

        c.labels(**labels).inc(1)
        g.labels(**labels).set(1)
        h.labels(**labels).observe(5)

        metrics = {m.name: m for m in self.collector.collect()}

        self.assertEqual(metrics['c'].documentation, 'c help')
        self.assertEqual(metrics['g'].documentation, 'g help')
        self.assertEqual(metrics['h'].documentation, 'h help')

    def test_merge_no_accumulate(self):
        pid = 0
        values.ValueClass = MultiProcessValue(lambda: pid)
        labels = {i: i for i in 'abcd'}

        def add_label(key, value):
            l = labels.copy()
            l[key] = value
            return l

        h = Histogram('h', 'help', labelnames=labels.keys(), registry=None)
        h.labels(**labels).observe(1)
        pid = 1
        h.labels(**labels).observe(5)

        path = os.path.join(os.environ['PROMETHEUS_MULTIPROC_DIR'], '*.db')
        files = glob.glob(path)
        metrics = {
            m.name: m for m in self.collector.merge(files, accumulate=False)
        }

        metrics['h'].samples.sort(
            key=lambda x: (x[0], float(x[1].get('le', 0)))
        )
        expected_histogram = [
            Sample('h_bucket', add_label('le', '0.005'), 0.0),
            Sample('h_bucket', add_label('le', '0.01'), 0.0),
            Sample('h_bucket', add_label('le', '0.025'), 0.0),
            Sample('h_bucket', add_label('le', '0.05'), 0.0),
            Sample('h_bucket', add_label('le', '0.075'), 0.0),
            Sample('h_bucket', add_label('le', '0.1'), 0.0),
            Sample('h_bucket', add_label('le', '0.25'), 0.0),
            Sample('h_bucket', add_label('le', '0.5'), 0.0),
            Sample('h_bucket', add_label('le', '0.75'), 0.0),
            Sample('h_bucket', add_label('le', '1.0'), 1.0),
            Sample('h_bucket', add_label('le', '2.5'), 0.0),
            Sample('h_bucket', add_label('le', '5.0'), 1.0),
            Sample('h_bucket', add_label('le', '7.5'), 0.0),
            Sample('h_bucket', add_label('le', '10.0'), 0.0),
            Sample('h_bucket', add_label('le', '+Inf'), 0.0),
            Sample('h_sum', labels, 6.0),
        ]

        self.assertEqual(metrics['h'].samples, expected_histogram)

    def test_missing_gauge_file_during_merge(self):
        # These files don't exist, just like if mark_process_dead(9999999) had been
        # called during self.collector.collect(), after the glob found it
        # but before the merge actually happened.
        # This should not raise and return no metrics
        self.assertFalse(self.collector.merge([
            os.path.join(self.tempdir, 'gauge_liveall_9999999.db'),
            os.path.join(self.tempdir, 'gauge_livesum_9999999.db'),
        ]))


class TestMmapedDict(unittest.TestCase):
    def setUp(self):
        fd, self.tempfile = tempfile.mkstemp()
        os.close(fd)
        self.d = mmap_dict.MmapedDict(self.tempfile)

    def test_process_restart(self):
        self.d.write_value('abc', 123.0, 987.0)
        self.d.close()
        self.d = mmap_dict.MmapedDict(self.tempfile)
        self.assertEqual((123, 987.0), self.d.read_value('abc'))
        self.assertEqual([('abc', 123.0, 987.0)], list(self.d.read_all_values()))

    def test_expansion(self):
        key = 'a' * mmap_dict._INITIAL_MMAP_SIZE
        self.d.write_value(key, 123.0, 987.0)
        self.assertEqual([(key, 123.0, 987.0)], list(self.d.read_all_values()))

    def test_multi_expansion(self):
        key = 'a' * mmap_dict._INITIAL_MMAP_SIZE * 4
        self.d.write_value('abc', 42.0, 987.0)
        self.d.write_value(key, 123.0, 876.0)
        self.d.write_value('def', 17.0, 765.0)
        self.assertEqual(
            [('abc', 42.0, 987.0), (key, 123.0, 876.0), ('def', 17.0, 765.0)],
            list(self.d.read_all_values()))

    def test_corruption_detected(self):
        self.d.write_value('abc', 42.0, 987.0)
        # corrupt the written data
        self.d._m[8:16] = b'somejunk'
        with self.assertRaises(RuntimeError):
            list(self.d.read_all_values())

    def tearDown(self):
        os.unlink(self.tempfile)


class TestUnsetEnv(unittest.TestCase):
    def setUp(self):
        self.registry = CollectorRegistry()
        fp, self.tmpfl = tempfile.mkstemp()
        os.close(fp)

    def test_unset_syncdir_env(self):
        self.assertRaises(
            ValueError, MultiProcessCollector, self.registry)

    def test_file_syncpath(self):
        registry = CollectorRegistry()
        self.assertRaises(
            ValueError, MultiProcessCollector, registry, self.tmpfl)

    def tearDown(self):
        os.remove(self.tmpfl)
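As a companion to the tests above, a minimal collection-side sketch for multiprocess mode; the directory path is a placeholder that must exist and be writable, and in real deployments PROMETHEUS_MULTIPROC_DIR is set in the environment before the application (and prometheus_client) starts:

import os

mp_dir = '/tmp/prom_mp'  # placeholder path for the shared value files
os.makedirs(mp_dir, exist_ok=True)
os.environ.setdefault('PROMETHEUS_MULTIPROC_DIR', mp_dir)

from prometheus_client import CollectorRegistry, generate_latest
from prometheus_client.multiprocess import MultiProcessCollector

# One collector aggregates the per-process .db files (counter_<pid>.db, ...)
# from the directory on every scrape.
registry = CollectorRegistry()
MultiProcessCollector(registry)
print(generate_latest(registry).decode())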
python-prometheus-client-0.19.0+ds1/tests/test_parser.py000066400000000000000000000304121454301344400233230ustar00rootroot00000000000000import math
import unittest

from prometheus_client.core import (
    CollectorRegistry, CounterMetricFamily, GaugeMetricFamily,
    HistogramMetricFamily, Metric, Sample, SummaryMetricFamily,
)
from prometheus_client.exposition import generate_latest
from prometheus_client.parser import text_string_to_metric_families


class TestParse(unittest.TestCase):
    def assertEqualMetrics(self, first, second, msg=None):
        super().assertEqual(first, second, msg)
        # Test that samples are actually named tuples of type Sample.
        for a, b in zip(first, second):
            for sa, sb in zip(a.samples, b.samples):
                assert sa.name == sb.name

    def test_simple_counter(self):
        families = text_string_to_metric_families("""# TYPE a counter
# HELP a help
a 1
""")
        self.assertEqualMetrics([CounterMetricFamily("a", "help", value=1)], list(families))

    def test_simple_gauge(self):
        families = text_string_to_metric_families("""# TYPE a gauge
# HELP a help
a 1
""")
        self.assertEqualMetrics([GaugeMetricFamily("a", "help", value=1)], list(families))

    def test_simple_summary(self):
        families = text_string_to_metric_families("""# TYPE a summary
# HELP a help
a_count 1
a_sum 2
""")
        summary = SummaryMetricFamily("a", "help", count_value=1, sum_value=2)
        self.assertEqualMetrics([summary], list(families))

    def test_summary_quantiles(self):
        families = text_string_to_metric_families("""# TYPE a summary
# HELP a help
a_count 1
a_sum 2
a{quantile="0.5"} 0.7
""")
        # The Python client doesn't support quantiles, but we
        # still need to be able to parse them.
        metric_family = SummaryMetricFamily("a", "help", count_value=1, sum_value=2)
        metric_family.add_sample("a", {"quantile": "0.5"}, 0.7)
        self.assertEqualMetrics([metric_family], list(families))

    def test_simple_histogram(self):
        families = text_string_to_metric_families("""# TYPE a histogram
# HELP a help
a_bucket{le="1"} 0
a_bucket{le="+Inf"} 3
a_count 3
a_sum 2
""")
        self.assertEqualMetrics([HistogramMetricFamily("a", "help", sum_value=2, buckets=[("1", 0.0), ("+Inf", 3.0)])],
                                list(families))

    def test_no_metadata(self):
        families = text_string_to_metric_families("""a 1
""")
        metric_family = Metric("a", "", "untyped")
        metric_family.add_sample("a", {}, 1)
        self.assertEqualMetrics([metric_family], list(families))

    def test_untyped(self):
        # https://github.com/prometheus/client_python/issues/79
        families = text_string_to_metric_families("""# HELP redis_connected_clients Redis connected clients
# TYPE redis_connected_clients untyped
redis_connected_clients{instance="rough-snowflake-web",port="6380"} 10.0
redis_connected_clients{instance="rough-snowflake-web",port="6381"} 12.0
""")
        m = Metric("redis_connected_clients", "Redis connected clients", "untyped")
        m.samples = [
            Sample("redis_connected_clients", {"instance": "rough-snowflake-web", "port": "6380"}, 10),
            Sample("redis_connected_clients", {"instance": "rough-snowflake-web", "port": "6381"}, 12),
        ]
        self.assertEqualMetrics([m], list(families))

    def test_type_help_switched(self):
        families = text_string_to_metric_families("""# HELP a help
# TYPE a counter
a 1
""")
        self.assertEqualMetrics([CounterMetricFamily("a", "help", value=1)], list(families))

    def test_blank_lines_and_comments(self):
        families = text_string_to_metric_families("""
# TYPE a counter
# FOO a
# BAR b
# HELP a help
a 1
""")
        self.assertEqualMetrics([CounterMetricFamily("a", "help", value=1)], list(families))

    def test_tabs(self):
        families = text_string_to_metric_families("""#\tTYPE\ta\tcounter
#\tHELP\ta\thelp
a\t1
""")
        self.assertEqualMetrics([CounterMetricFamily("a", "help", value=1)], list(families))

    def test_labels_with_curly_braces(self):
        families = text_string_to_metric_families("""# TYPE a counter
# HELP a help
a{foo="bar", bar="b{a}z"} 1
""")
        metric_family = CounterMetricFamily("a", "help", labels=["foo", "bar"])
        metric_family.add_metric(["bar", "b{a}z"], 1)
        self.assertEqualMetrics([metric_family], list(families))

    def test_empty_help(self):
        families = text_string_to_metric_families("""# TYPE a counter
# HELP a
a 1
""")
        self.assertEqualMetrics([CounterMetricFamily("a", "", value=1)], list(families))

    def test_labels_and_infinite(self):
        families = text_string_to_metric_families("""# TYPE a counter
# HELP a help
a{foo="bar"} +Inf
a{foo="baz"} -Inf
""")
        metric_family = CounterMetricFamily("a", "help", labels=["foo"])
        metric_family.add_metric(["bar"], float('inf'))
        metric_family.add_metric(["baz"], float('-inf'))
        self.assertEqualMetrics([metric_family], list(families))

    def test_spaces(self):
        families = text_string_to_metric_families("""# TYPE a counter
# HELP a help
a{ foo = "bar" } 1
a\t\t{\t\tfoo\t\t=\t\t"baz"\t\t}\t\t2
a { foo = "buz" } 3
a\t { \t foo\t = "biz"\t } \t 4
a \t{\t foo = "boz"\t}\t 5
a{foo="bez"}6
""")
        metric_family = CounterMetricFamily("a", "help", labels=["foo"])
        metric_family.add_metric(["bar"], 1)
        metric_family.add_metric(["baz"], 2)
        metric_family.add_metric(["buz"], 3)
        metric_family.add_metric(["biz"], 4)
        metric_family.add_metric(["boz"], 5)
        metric_family.add_metric(["bez"], 6)
        self.assertEqualMetrics([metric_family], list(families))

    def test_commas(self):
        families = text_string_to_metric_families("""# TYPE a counter
# HELP a help
a{foo="bar",} 1
a{foo="baz", } 1
# TYPE b counter
# HELP b help
b{,} 2
# TYPE c counter
# HELP c help
c{ ,} 3
# TYPE d counter
# HELP d help
d{, } 4
""")
        a = CounterMetricFamily("a", "help", labels=["foo"])
        a.add_metric(["bar"], 1)
        a.add_metric(["baz"], 1)
        b = CounterMetricFamily("b", "help", value=2)
        c = CounterMetricFamily("c", "help", value=3)
        d = CounterMetricFamily("d", "help", value=4)
        self.assertEqualMetrics([a, b, c, d], list(families))

    def test_multiple_trailing_commas(self):
        text = """# TYPE a counter
# HELP a help
a{foo="bar",, } 1
"""
        self.assertRaises(ValueError,
                          lambda: list(text_string_to_metric_families(text)))

    def test_empty_brackets(self):
        families = text_string_to_metric_families("""# TYPE a counter
# HELP a help
a{} 1
""")
        self.assertEqualMetrics([CounterMetricFamily("a", "help", value=1)], list(families))

    def test_nan(self):
        families = text_string_to_metric_families("""a NaN
""")
        # Can't use a simple comparison as nan != nan.
        self.assertTrue(math.isnan(list(families)[0].samples[0][2]))

    def test_empty_label(self):
        families = text_string_to_metric_families("""# TYPE a counter
# HELP a help
a{foo="bar"} 1
a{foo=""} 2
""")
        metric_family = CounterMetricFamily("a", "help", labels=["foo"])
        metric_family.add_metric(["bar"], 1)
        metric_family.add_metric([""], 2)
        self.assertEqualMetrics([metric_family], list(families))

    def test_label_escaping(self):
        for escaped_val, unescaped_val in [
                ('foo', 'foo'),
                ('\\foo', '\\foo'),
                ('\\\\foo', '\\foo'),
                ('foo\\\\', 'foo\\'),
                ('\\\\', '\\'),
                ('\\n', '\n'),
                ('\\\\n', '\\n'),
                ('\\\\\\n', '\\\n'),
                ('\\"', '"'),
                ('\\\\\\"', '\\"')]:
            families = list(text_string_to_metric_families("""
# TYPE a counter
# HELP a help
a{foo="%s",bar="baz"} 1
""" % escaped_val))
            metric_family = CounterMetricFamily(
                "a", "help", labels=["foo", "bar"])
            metric_family.add_metric([unescaped_val, "baz"], 1)
            self.assertEqualMetrics([metric_family], list(families))

    def test_help_escaping(self):
        for escaped_val, unescaped_val in [
                ('foo', 'foo'),
                ('\\foo', '\\foo'),
                ('\\\\foo', '\\foo'),
                ('foo\\', 'foo\\'),
                ('foo\\\\', 'foo\\'),
                ('\\n', '\n'),
                ('\\\\n', '\\n'),
                ('\\\\\\n', '\\\n'),
                ('\\"', '\\"'),
                ('\\\\"', '\\"'),
                ('\\\\\\"', '\\\\"')]:
            families = list(text_string_to_metric_families("""
# TYPE a counter
# HELP a %s
a{foo="bar"} 1
""" % escaped_val))
            metric_family = CounterMetricFamily("a", unescaped_val, labels=["foo"])
            metric_family.add_metric(["bar"], 1)
            self.assertEqualMetrics([metric_family], list(families))

    def test_escaping(self):
        families = text_string_to_metric_families("""# TYPE a counter
# HELP a he\\n\\\\l\\tp
a{foo="b\\"a\\nr"} 1
a{foo="b\\\\a\\z"} 2
""")
        metric_family = CounterMetricFamily("a", "he\n\\l\\tp", labels=["foo"])
        metric_family.add_metric(["b\"a\nr"], 1)
        metric_family.add_metric(["b\\a\\z"], 2)
        self.assertEqualMetrics([metric_family], list(families))

    def test_timestamps(self):
        families = text_string_to_metric_families("""# TYPE a counter
# HELP a help
a{foo="bar"} 1\t000
# TYPE b counter
# HELP b help
b 2 1234567890
b 88 1234566000
""")
        a = CounterMetricFamily("a", "help", labels=["foo"])
        a.add_metric(["bar"], 1, timestamp=0)
        b = CounterMetricFamily("b", "help")
        b.add_metric([], 2, timestamp=1234567.89)
        b.add_metric([], 88, timestamp=1234566)
        self.assertEqualMetrics([a, b], list(families))

    def test_roundtrip(self):
        text = """# HELP go_gc_duration_seconds A summary of the GC invocation durations.
# TYPE go_gc_duration_seconds summary
go_gc_duration_seconds{quantile="0"} 0.013300656000000001
go_gc_duration_seconds{quantile="0.25"} 0.013638736
go_gc_duration_seconds{quantile="0.5"} 0.013759906
go_gc_duration_seconds{quantile="0.75"} 0.013962066
go_gc_duration_seconds{quantile="1"} 0.021383540000000003
go_gc_duration_seconds_sum 56.12904785
go_gc_duration_seconds_count 7476.0
# HELP go_goroutines Number of goroutines that currently exist.
# TYPE go_goroutines gauge
go_goroutines 166.0
# HELP prometheus_local_storage_indexing_batch_duration_milliseconds Quantiles for batch indexing duration in milliseconds.
# TYPE prometheus_local_storage_indexing_batch_duration_milliseconds summary
prometheus_local_storage_indexing_batch_duration_milliseconds{quantile="0.5"} NaN
prometheus_local_storage_indexing_batch_duration_milliseconds{quantile="0.9"} NaN
prometheus_local_storage_indexing_batch_duration_milliseconds{quantile="0.99"} NaN
prometheus_local_storage_indexing_batch_duration_milliseconds_sum 871.5665949999999
prometheus_local_storage_indexing_batch_duration_milliseconds_count 229.0
# HELP process_cpu_seconds_total Total user and system CPU time spent in seconds.
# TYPE process_cpu_seconds_total counter
process_cpu_seconds_total 29323.4
# HELP process_virtual_memory_bytes Virtual memory size in bytes.
# TYPE process_virtual_memory_bytes gauge
process_virtual_memory_bytes 2.478268416e+09
# HELP prometheus_build_info A metric with a constant '1' value labeled by version, revision, and branch from which Prometheus was built.
# TYPE prometheus_build_info gauge
prometheus_build_info{branch="HEAD",revision="ef176e5",version="0.16.0rc1"} 1.0
# HELP prometheus_local_storage_chunk_ops_total The total number of chunk operations by their type.
# TYPE prometheus_local_storage_chunk_ops_total counter
prometheus_local_storage_chunk_ops_total{type="clone"} 28.0
prometheus_local_storage_chunk_ops_total{type="create"} 997844.0
prometheus_local_storage_chunk_ops_total{type="drop"} 1.345758e+06
prometheus_local_storage_chunk_ops_total{type="load"} 1641.0
prometheus_local_storage_chunk_ops_total{type="persist"} 981408.0
prometheus_local_storage_chunk_ops_total{type="pin"} 32662.0
prometheus_local_storage_chunk_ops_total{type="transcode"} 980180.0
prometheus_local_storage_chunk_ops_total{type="unpin"} 32662.0
"""
        families = list(text_string_to_metric_families(text))

        class TextCollector:
            def collect(self):
                return families

        registry = CollectorRegistry()
        registry.register(TextCollector())
        self.assertEqual(text.encode('utf-8'), generate_latest(registry))


if __name__ == '__main__':
    unittest.main()
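A minimal standalone sketch of the parser exercised above; the exposition text is inline, so this runs as-is:

from prometheus_client.parser import text_string_to_metric_families

text = """# HELP requests_total Total requests.
# TYPE requests_total counter
requests_total{path="/"} 42.0
"""
for family in text_string_to_metric_families(text):
    for sample in family.samples:
        # Each sample is a Sample named tuple with name, labels and value fields.
        print(sample.name, sample.labels, sample.value)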
return ( "jv_release", "jv_vendor", ("vm_name", "vm_release", "vm_vendor"), ("os_name", "os_version", "os_arch") ) python-prometheus-client-0.19.0+ds1/tests/test_process_collector.py000066400000000000000000000071231454301344400255560ustar00rootroot00000000000000import os import unittest from prometheus_client import CollectorRegistry, ProcessCollector class TestProcessCollector(unittest.TestCase): def setUp(self): self.registry = CollectorRegistry() self.test_proc = os.path.join(os.path.dirname(os.path.realpath(__file__)), 'proc') def test_working(self): collector = ProcessCollector(proc=self.test_proc, pid=lambda: 26231, registry=self.registry) collector._ticks = 100 collector._pagesize = 4096 self.assertEqual(17.21, self.registry.get_sample_value('process_cpu_seconds_total')) self.assertEqual(56274944.0, self.registry.get_sample_value('process_virtual_memory_bytes')) self.assertEqual(8114176, self.registry.get_sample_value('process_resident_memory_bytes')) self.assertEqual(1418184099.75, self.registry.get_sample_value('process_start_time_seconds')) self.assertEqual(2048.0, self.registry.get_sample_value('process_max_fds')) self.assertEqual(5.0, self.registry.get_sample_value('process_open_fds')) self.assertEqual(None, self.registry.get_sample_value('process_fake_namespace')) def test_namespace(self): collector = ProcessCollector(proc=self.test_proc, pid=lambda: 26231, registry=self.registry, namespace='n') collector._ticks = 100 collector._pagesize = 4096 self.assertEqual(17.21, self.registry.get_sample_value('n_process_cpu_seconds_total')) self.assertEqual(56274944.0, self.registry.get_sample_value('n_process_virtual_memory_bytes')) self.assertEqual(8114176, self.registry.get_sample_value('n_process_resident_memory_bytes')) self.assertEqual(1418184099.75, self.registry.get_sample_value('n_process_start_time_seconds')) self.assertEqual(2048.0, self.registry.get_sample_value('n_process_max_fds')) self.assertEqual(5.0, self.registry.get_sample_value('n_process_open_fds')) self.assertEqual(None, self.registry.get_sample_value('process_cpu_seconds_total')) def test_working_584(self): collector = ProcessCollector(proc=self.test_proc, pid=lambda: "584\n", registry=self.registry) collector._ticks = 100 collector._pagesize = 4096 self.assertEqual(0.0, self.registry.get_sample_value('process_cpu_seconds_total')) self.assertEqual(10395648.0, self.registry.get_sample_value('process_virtual_memory_bytes')) self.assertEqual(634880, self.registry.get_sample_value('process_resident_memory_bytes')) self.assertEqual(1418291667.75, self.registry.get_sample_value('process_start_time_seconds')) self.assertEqual(None, self.registry.get_sample_value('process_max_fds')) self.assertEqual(None, self.registry.get_sample_value('process_open_fds')) def test_working_fake_pid(self): collector = ProcessCollector(proc=self.test_proc, pid=lambda: 123, registry=self.registry) collector._ticks = 100 collector._pagesize = 4096 self.assertEqual(None, self.registry.get_sample_value('process_cpu_seconds_total')) self.assertEqual(None, self.registry.get_sample_value('process_virtual_memory_bytes')) self.assertEqual(None, self.registry.get_sample_value('process_resident_memory_bytes')) self.assertEqual(None, self.registry.get_sample_value('process_start_time_seconds')) self.assertEqual(None, self.registry.get_sample_value('process_max_fds')) self.assertEqual(None, self.registry.get_sample_value('process_open_fds')) self.assertEqual(None, self.registry.get_sample_value('process_fake_namespace')) if __name__ == '__main__': 
    unittest.main()
python-prometheus-client-0.19.0+ds1/tests/test_samples.py000066400000000000000000000025031454301344400234730ustar00rootroot00000000000000import unittest

from prometheus_client import samples


class TestSamples(unittest.TestCase):
    def test_gt(self):
        self.assertEqual(samples.Timestamp(1, 1) > samples.Timestamp(1, 1), False)
        self.assertEqual(samples.Timestamp(1, 1) > samples.Timestamp(1, 2), False)
        self.assertEqual(samples.Timestamp(1, 1) > samples.Timestamp(2, 1), False)
        self.assertEqual(samples.Timestamp(1, 1) > samples.Timestamp(2, 2), False)
        self.assertEqual(samples.Timestamp(1, 2) > samples.Timestamp(1, 1), True)
        self.assertEqual(samples.Timestamp(2, 1) > samples.Timestamp(1, 1), True)
        self.assertEqual(samples.Timestamp(2, 2) > samples.Timestamp(1, 1), True)

    def test_lt(self):
        self.assertEqual(samples.Timestamp(1, 1) < samples.Timestamp(1, 1), False)
        self.assertEqual(samples.Timestamp(1, 1) < samples.Timestamp(1, 2), True)
        self.assertEqual(samples.Timestamp(1, 1) < samples.Timestamp(2, 1), True)
        self.assertEqual(samples.Timestamp(1, 1) < samples.Timestamp(2, 2), True)
        self.assertEqual(samples.Timestamp(1, 2) < samples.Timestamp(1, 1), False)
        self.assertEqual(samples.Timestamp(2, 1) < samples.Timestamp(1, 1), False)
        self.assertEqual(samples.Timestamp(2, 2) < samples.Timestamp(1, 1), False)


if __name__ == '__main__':
    unittest.main()
python-prometheus-client-0.19.0+ds1/tests/test_twisted.py000066400000000000000000000031531454301344400235140ustar00rootroot00000000000000from unittest import skipUnless

from prometheus_client import CollectorRegistry, Counter, generate_latest

try:
    from warnings import filterwarnings

    from twisted.internet import reactor
    from twisted.trial.unittest import TestCase
    from twisted.web.client import Agent, readBody
    from twisted.web.resource import Resource
    from twisted.web.server import Site

    from prometheus_client.twisted import MetricsResource

    HAVE_TWISTED = True
except ImportError:
    from unittest import TestCase

    HAVE_TWISTED = False


class MetricsResourceTest(TestCase):
    @skipUnless(HAVE_TWISTED, "Don't have twisted installed.")
    def setUp(self):
        self.registry = CollectorRegistry()

    def test_reports_metrics(self):
        """
        ``MetricsResource`` serves the metrics from the provided registry.
        """
        c = Counter('cc', 'A counter', registry=self.registry)
        c.inc()

        root = Resource()
        root.putChild(b'metrics', MetricsResource(registry=self.registry))
        server = reactor.listenTCP(0, Site(root))
        self.addCleanup(server.stopListening)

        agent = Agent(reactor)
        port = server.getHost().port
        url = f"http://localhost:{port}/metrics"
        d = agent.request(b"GET", url.encode("ascii"))

        # Ignore expected DeprecationWarning.
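        # (Editorial gloss: readBody warns when the transport lacks an
        # abortConnection method; the narrowly-scoped message filter
        # below silences only that warning so the test log stays clean.)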
        filterwarnings("ignore", category=DeprecationWarning,
                       message="Using readBody "
                               "with a transport that does not have an abortConnection method")

        d.addCallback(readBody)
        d.addCallback(self.assertEqual, generate_latest(self.registry))
        return d
python-prometheus-client-0.19.0+ds1/tests/test_wsgi.py000066400000000000000000000111671454301344400230060ustar00rootroot00000000000000import gzip
from unittest import TestCase
from wsgiref.util import setup_testing_defaults

from prometheus_client import CollectorRegistry, Counter, make_wsgi_app
from prometheus_client.exposition import _bake_output, CONTENT_TYPE_LATEST


class WSGITest(TestCase):
    def setUp(self):
        self.registry = CollectorRegistry()
        self.captured_status = None
        self.captured_headers = None
        # Setup WSGI environment
        self.environ = {}
        setup_testing_defaults(self.environ)

    def capture(self, status, header):
        self.captured_status = status
        self.captured_headers = header

    def increment_metrics(self, metric_name, help_text, increments):
        c = Counter(metric_name, help_text, registry=self.registry)
        for _ in range(increments):
            c.inc()

    def assert_outputs(self, outputs, metric_name, help_text, increments, compressed):
        self.assertEqual(len(outputs), 1)
        if compressed:
            output = gzip.decompress(outputs[0]).decode(encoding="utf-8")
        else:
            output = outputs[0].decode('utf8')
        # Status code
        self.assertEqual(self.captured_status, "200 OK")
        # Headers
        num_of_headers = 2 if compressed else 1
        self.assertEqual(len(self.captured_headers), num_of_headers)
        self.assertIn(("Content-Type", CONTENT_TYPE_LATEST), self.captured_headers)
        if compressed:
            self.assertIn(("Content-Encoding", "gzip"), self.captured_headers)
        # Body
        self.assertIn("# HELP " + metric_name + "_total " + help_text + "\n", output)
        self.assertIn("# TYPE " + metric_name + "_total counter\n", output)
        self.assertIn(metric_name + "_total " + str(increments) + ".0\n", output)

    def validate_metrics(self, metric_name, help_text, increments):
        """
        WSGI app serves the metrics from the provided registry.
        """
        self.increment_metrics(metric_name, help_text, increments)

        # Create and run WSGI app
        app = make_wsgi_app(self.registry)
        outputs = app(self.environ, self.capture)

        # Assert outputs
        self.assert_outputs(outputs, metric_name, help_text, increments, compressed=False)

    def test_report_metrics_1(self):
        self.validate_metrics("counter", "A counter", 2)

    def test_report_metrics_2(self):
        self.validate_metrics("counter", "Another counter", 3)

    def test_report_metrics_3(self):
        self.validate_metrics("requests", "Number of requests", 5)

    def test_report_metrics_4(self):
        self.validate_metrics("failed_requests", "Number of failed requests", 7)

    def test_favicon_path(self):
        from unittest.mock import patch

        # Create mock to enable counting access of _bake_output
        with patch("prometheus_client.exposition._bake_output", side_effect=_bake_output) as mock:
            # Create and run WSGI app
            app = make_wsgi_app(self.registry)

            # Try accessing the favicon path
            favicon_environ = dict(self.environ)
            favicon_environ['PATH_INFO'] = '/favicon.ico'
            outputs = app(favicon_environ, self.capture)

            # Test empty response
            self.assertEqual(outputs, [b''])
            self.assertEqual(mock.call_count, 0)

            # Try accessing normal paths
            app(self.environ, self.capture)
            self.assertEqual(mock.call_count, 1)

    def test_gzip(self):
        # Increment a metric
        metric_name = "counter"
        help_text = "A counter"
        increments = 2
        self.increment_metrics(metric_name, help_text, increments)

        app = make_wsgi_app(self.registry)

        # Try accessing metrics using the gzip Accept-Encoding header.
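        # (Editorial gloss: per PEP 3333, the request's Accept-Encoding
        # header surfaces in the WSGI environ as HTTP_ACCEPT_ENCODING,
        # which is the key the app inspects before gzip-compressing the
        # response body.)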
        gzip_environ = dict(self.environ)
        gzip_environ['HTTP_ACCEPT_ENCODING'] = 'gzip'
        outputs = app(gzip_environ, self.capture)

        # Assert outputs are compressed.
        self.assert_outputs(outputs, metric_name, help_text, increments, compressed=True)

    def test_gzip_disabled(self):
        # Increment a metric
        metric_name = "counter"
        help_text = "A counter"
        increments = 2
        self.increment_metrics(metric_name, help_text, increments)

        # Disable compression explicitly.
        app = make_wsgi_app(self.registry, disable_compression=True)

        # Try accessing metrics using the gzip Accept-Encoding header.
        gzip_environ = dict(self.environ)
        gzip_environ['HTTP_ACCEPT_ENCODING'] = 'gzip'
        outputs = app(gzip_environ, self.capture)

        # Assert outputs are not compressed.
        self.assert_outputs(outputs, metric_name, help_text, increments, compressed=False)
python-prometheus-client-0.19.0+ds1/tox.ini000066400000000000000000000030331454301344400205660ustar00rootroot00000000000000[tox]
envlist = coverage-clean,py{3.8,3.9,3.10,3.11,3.12,py3.8,3.9-nooptionals},coverage-report,flake8,isort,mypy

[testenv]
deps =
    coverage
    pytest
    attrs
    {py3.8,pypy3.8}: twisted
    py3.8: asgiref
    # See https://github.com/django/asgiref/issues/393 for why we need to pin asgiref for pypy
    pypy3.8: asgiref==3.6.0
commands = coverage run --parallel -m pytest {posargs}

[testenv:py3.9-nooptionals]
commands = coverage run --parallel -m pytest {posargs}

[testenv:coverage-clean]
deps = coverage
skip_install = true
commands = coverage erase

[testenv:coverage-report]
deps = coverage
skip_install = true
commands =
    coverage combine
    coverage report

[testenv:flake8]
deps =
    flake8==6.0.0
    flake8-docstrings==1.6.0
    flake8-import-order==0.18.2
skip_install = true
commands = flake8 prometheus_client/ tests/ setup.py

[testenv:isort]
deps =
    isort==5.10.1
skip_install = true
commands = isort --check prometheus_client/ tests/ setup.py

[testenv:mypy]
deps =
    pytest
    asgiref
    mypy==0.991
skip_install = true
commands = mypy --install-types --non-interactive prometheus_client/ tests/

[flake8]
ignore = D, E303, E402, E501, E722, E741, F821, F841, W291, W293, W503, E129, E731
per-file-ignores = prometheus_client/__init__.py:F401
import-order-style = google
application-import-names = prometheus_client

[isort]
force_alphabetical_sort_within_sections = True
force_sort_within_sections = True
include_trailing_comma = True
multi_line_output = 5
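
The WSGI tests above exercise make_wsgi_app's content negotiation end to end. As a minimal, hedged usage sketch (not part of the package sources; the registry, metric name, and port are illustrative assumptions), the same app can be served with the stdlib wsgiref server, including the gzip negotiation exercised in test_gzip:

# Hedged usage sketch, not part of this repository: serve the metrics app
# built by make_wsgi_app() with the stdlib wsgiref server. The registry,
# metric name, and port below are illustrative assumptions.
from wsgiref.simple_server import make_server

from prometheus_client import CollectorRegistry, Counter, make_wsgi_app

registry = CollectorRegistry()
c = Counter('demo_requests', 'Requests observed by the demo', registry=registry)
c.inc()

# make_wsgi_app() gzips the body when the scraper sends
# Accept-Encoding: gzip, unless disable_compression=True is passed.
app = make_wsgi_app(registry)

if __name__ == '__main__':
    with make_server('', 8000, app) as httpd:
        # e.g. curl --compressed http://localhost:8000/metrics
        httpd.handle_request()  # serve a single scrape, then exit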