==> prometheus-0.16.2+ds/.dockerignore <==
data/
prometheus
promtool

==> prometheus-0.16.2+ds/.gitignore <==
*#
*.[568ao]
*.a[568o]
*.cgo*.c
*.cgo*.go
/*.yaml
/*.yml
/*.rules
*.exe
*.orig
*.pyc
*.rej
*.so

# Editor files #
################
*~
.*.swp
.*.swo
*.iml
.idea
.DS_Store
._*
.nfs.*
/.git
[568a].out
_cgo_*
core
*-stamp
/prometheus
/promtool
benchmark.txt
/data
/.build
.#*
command-line-arguments.test
*BACKUP*
*BASE*
*LOCAL*
*REMOTE*

==> prometheus-0.16.2+ds/.travis.yml <==
sudo: false
language: go
go:
- 1.5
script:
- make style test

==> prometheus-0.16.2+ds/AUTHORS.md <==
The Prometheus project was started by Matt T. Proud (emeritus) and Julius Volz in 2012.

Maintainers of this repository:

* Björn Rabenstein
* Fabian Reinartz
* Julius Volz

The following individuals have contributed code to this repository (listed in alphabetical order):

* Alan Braithwaite
* Alexander Staubo
* Andres Suarez
* Bernerd Schaefer
* Björn Rabenstein
* Brian Brazil
* Ceesjan Luiten
* Chad Metcalf
* Conor Hennessy
* Dan Williams
* Daniel Bornkessel
* Deomid Ryabkov
* Fabian Reinartz
* Florian Pfitzer
* Jimmi Dyson
* Johannes 'fish' Ziemke
* Joonas Bergius
* Joseph Wilk
* Julius Volz
* Laurie Malau
* Marko Mikulicic
* Matt T. Proud
* Miek Gieben
* Miquel Sabaté
* Mitsuhiro Tanda
* Peter Bourgon
* Robey Pointer
* Sabra Melamed
* Sam McLeod
* Scott Worley
* Sergiusz 'q3k' Bazański
* Sharif Nassar
* Sindre Myren
* Stephan Erb
* Stephen Shirley
* Steve Durrheimer
* Stuart Nelson
* Tobias Gesellchen
* Tobias Schmidt
* Tom Prince
* Tomás Senart
* Ursula Kallio
* Will Rouesnel

==> prometheus-0.16.2+ds/CHANGELOG.md <==
## 0.16.2 / 2016-01-18

* [FEATURE] Multiple authentication options for EC2 discovery added
* [FEATURE] Several meta labels for EC2 discovery added
* [FEATURE] Allow full URLs in static target groups (used e.g. by the `blackbox_exporter`)
* [FEATURE] Add Graphite remote-storage integration
* [FEATURE] Create separate Kubernetes targets for services and their endpoints
* [FEATURE] Add `clamp_{min,max}` functions to PromQL
* [FEATURE] Omitted time parameter in API query defaults to now
* [ENHANCEMENT] Less frequent time series file truncation
* [ENHANCEMENT] Instrument number of manually deleted time series
* [ENHANCEMENT] Ignore lost+found directory during storage version detection
* [CHANGE] Kubernetes `masters` renamed to `api_servers`
* [CHANGE] "Healthy" and "unhealthy" targets are now called "up" and "down" in the web UI
* [CHANGE] Remove undocumented 2nd argument of the `delta` function. (This is a BREAKING CHANGE for users of the undocumented 2nd argument.)
* [BUGFIX] Return proper HTTP status codes on API errors
* [BUGFIX] Fix Kubernetes authentication configuration
* [BUGFIX] Fix stripped `OFFSET` in rule evaluation and display
* [BUGFIX] Do not crash on failing Consul SD initialization
* [BUGFIX] Revert changes to metric auto-completion
* [BUGFIX] Add config overflow validation for TLS configuration
* [BUGFIX] Skip already watched Zookeeper nodes in serverset SD
* [BUGFIX] Don't federate stale samples
* [BUGFIX] Move NaN to end of result for `topk/bottomk/sort/sort_desc/min/max`
* [BUGFIX] Limit extrapolation of `delta/rate/increase`
* [BUGFIX] Fix unhandled error in rule evaluation

Some changes to the Kubernetes service discovery were integrated since it was released as a beta feature.

## 0.16.1 / 2015-10-16

* [FEATURE] Add `irate()` function.
* [ENHANCEMENT] Improved auto-completion in expression browser.
* [CHANGE] Kubernetes SD moves node label to instance label.
* [BUGFIX] Escape regexes in console templates.

## 0.16.0 / 2015-10-09

BREAKING CHANGES:

* Release tarballs now contain the built binaries in a nested directory.
* The `hash_mod` relabeling action now uses MD5 hashes instead of FNV hashes to achieve a better distribution.
* The DNS-SD meta label `__meta_dns_srv_name` was renamed to `__meta_dns_name` to reflect support for DNS record types other than `SRV`.
* The default full refresh interval for the file-based service discovery has been increased from 30 seconds to 5 minutes.
* In relabeling, parts of a source label that weren't matched by the specified regular expression are no longer included in the replacement output.
* Queries no longer interpolate between two data points. Instead, the resulting value will always be the latest value before the evaluation query timestamp.
* Regular expressions supplied via the configuration are now anchored to match full strings instead of substrings.
* Global labels are not appended upon storing time series anymore. Instead, they are only appended when communicating with external systems (Alertmanager, remote storages, federation). They have thus also been renamed from `global.labels` to `global.external_labels`.
* The names and units of metrics related to remote storage sample appends have been changed.
* The experimental support for writing to InfluxDB has been updated to work with InfluxDB 0.9.x. 0.8.x versions of InfluxDB are not supported anymore.
* Escape sequences in double- and single-quoted string literals in rules or query expressions are now interpreted like escape sequences in Go string literals (https://golang.org/ref/spec#String_literals).

Future breaking changes / deprecated features:

* The `delta()` function had an undocumented optional second boolean argument to make it behave like `increase()`. This second argument will be removed in the future. Migrate any occurrences of `delta(x, 1)` to use `increase(x)` instead.
* Support for filter operators between two scalar values (like `2 > 1`) will be removed in the future. These will require a `bool` modifier on the operator, e.g. `2 > bool 1`.

All changes:

* [CHANGE] Renamed `global.labels` to `global.external_labels`.
* [CHANGE] Vendoring is now done via govendor instead of godep.
* [CHANGE] Change web UI root page to show the graphing interface instead of the server status page.
* [CHANGE] Append global labels only when communicating with external systems instead of storing them locally.
* [CHANGE] Change all regexes in the configuration to do full-string matches instead of substring matches.
* [CHANGE] Remove interpolation of vector values in queries.
* [CHANGE] For alert `SUMMARY`/`DESCRIPTION` template fields, cast the alert value to `float64` to work with common templating functions.
* [CHANGE] In relabeling, don't include unmatched source label parts in the replacement.
* [CHANGE] Change default full refresh interval for the file-based service discovery from 30 seconds to 5 minutes.
* [CHANGE] Rename the DNS-SD meta label `__meta_dns_srv_name` to `__meta_dns_name` to reflect support for record types other than `SRV`.
* [CHANGE] Release tarballs now contain the binaries in a nested directory.
* [CHANGE] Update InfluxDB write support to work with InfluxDB 0.9.x.
* [FEATURE] Support full "Go-style" escape sequences in strings and add raw string literals.
* [FEATURE] Add EC2 service discovery support.
* [FEATURE] Allow configuring TLS options in scrape configurations.
* [FEATURE] Add instrumentation around configuration reloads.
* [FEATURE] Add `bool` modifier to comparison operators to enable boolean (`0`/`1`) output instead of filtering.
* [FEATURE] In Zookeeper serverset discovery, provide `__meta_serverset_shard` label with the serverset shard number.
* [FEATURE] Provide `__meta_consul_service_id` meta label in Consul service discovery.
* [FEATURE] Allow scalar expressions in recording rules to enable use cases such as building constant metrics.
* [FEATURE] Add `label_replace()` and `vector()` query language functions.
* [FEATURE] In Consul service discovery, fill in the `__meta_consul_dc` datacenter label from the Consul agent when it's not set in the Consul SD config.
* [FEATURE] Scrape all services upon empty services list in Consul service discovery.
* [FEATURE] Add `labelmap` relabeling action to map a set of input labels to a set of output labels using regular expressions.
* [FEATURE] Introduce `__tmp` as a relabeling label prefix that is guaranteed to not be used by Prometheus internally.
* [FEATURE] Kubernetes-based service discovery.
* [FEATURE] Marathon-based service discovery.
* [FEATURE] Support multiple series names in console graphs JavaScript library.
* [FEATURE] Allow reloading configuration via web handler at `/-/reload`.
* [FEATURE] Updates to promtool to reflect new Prometheus configuration features.
* [FEATURE] Add `proxy_url` parameter to scrape configurations to enable use of proxy servers.
* [FEATURE] Add console templates for Prometheus itself.
* [FEATURE] Allow relabeling the protocol scheme of targets.
* [FEATURE] Add `predict_linear()` query language function.
* [FEATURE] Support for authentication using bearer tokens, client certs, and CA certs.
* [FEATURE] Implement unary expressions for vector types (`-foo`, `+foo`).
* [FEATURE] Add console templates for the SNMP exporter.
* [FEATURE] Make it possible to relabel target scrape query parameters.
* [FEATURE] Add support for `A` and `AAAA` records in DNS service discovery.
* [ENHANCEMENT] Fix several flaky tests.
* [ENHANCEMENT] Switch to common routing package.
* [ENHANCEMENT] Use more resilient metric decoder.
* [ENHANCEMENT] Update vendored dependencies.
* [ENHANCEMENT] Add compression to more HTTP handlers.
* [ENHANCEMENT] Make `-web.external-url` flag help string more verbose.
* [ENHANCEMENT] Improve metrics around remote storage queues.
* [ENHANCEMENT] Use Go 1.5.1 instead of Go 1.4.2 in builds.
* [ENHANCEMENT] Update the architecture diagram in the `README.md`.
* [ENHANCEMENT] Time out sample appends in retrieval layer if the storage is backlogging.
* [ENHANCEMENT] Make `hash_mod` relabeling action use MD5 instead of FNV to enable better hash distribution.
* [ENHANCEMENT] Better tracking of targets between same service discovery mechanisms in one scrape configuration.
* [ENHANCEMENT] Handle parser and query evaluation runtime panics more gracefully.
* [ENHANCEMENT] Add IDs to H2 tags on status page to allow anchored linking.
* [BUGFIX] Fix watching multiple paths with Zookeeper serverset discovery.
* [BUGFIX] Fix high CPU usage on configuration reload.
* [BUGFIX] Fix disappearing `__params` on configuration reload.
* [BUGFIX] Make `labelmap` action available through configuration.
* [BUGFIX] Fix direct access of protobuf fields.
* [BUGFIX] Fix panic on Consul request error.
* [BUGFIX] Fix redirect of graph endpoint for prefixed setups.
* [BUGFIX] Fix series file deletion behavior when purging archived series.
* [BUGFIX] Fix error checking and logging around checkpointing.
* [BUGFIX] Fix map initialization in target manager.
* [BUGFIX] Fix draining of file watcher events in file-based service discovery.
* [BUGFIX] Add `POST` handler for `/debug` endpoints to fix CPU profiling.
* [BUGFIX] Fix several flaky tests.
* [BUGFIX] Fix busylooping in case a scrape configuration has no target providers defined.
* [BUGFIX] Fix exit behavior of static target provider.
* [BUGFIX] Fix configuration reloading loop upon shutdown.
* [BUGFIX] Add missing check for nil expression in expression parser.
* [BUGFIX] Fix error handling bug in test code.
* [BUGFIX] Fix Consul port meta label.
* [BUGFIX] Fix lexer bug that treated non-Latin Unicode digits as digits.
* [CLEANUP] Remove obsolete federation example from console templates.
* [CLEANUP] Remove duplicated Bootstrap JS inclusion on graph page.
* [CLEANUP] Switch to common log package.
* [CLEANUP] Update build environment scripts and Makefiles to work better with native Go build mechanisms and new Go 1.5 experimental vendoring support.
* [CLEANUP] Remove logged notice about 0.14.x configuration file format change.
* [CLEANUP] Move scrape-time metric label modification into SampleAppenders.
* [CLEANUP] Switch from `github.com/client_golang/model` to `github.com/common/model` and related type cleanups.
* [CLEANUP] Switch from `github.com/client_golang/extraction` to `github.com/common/expfmt` and related type cleanups.
* [CLEANUP] Exit Prometheus when the web server encounters a startup error.
* [CLEANUP] Remove non-functional alert-silencing links on alerting page.
* [CLEANUP] General cleanups to comments and code, derived from `golint`, `go vet`, or otherwise.
* [CLEANUP] When entering crash recovery, tell users how to cleanly shut down Prometheus.
* [CLEANUP] Remove internal support for multi-statement queries in query engine.
* [CLEANUP] Update AUTHORS.md.
* [CLEANUP] Don't warn/increment metric upon encountering equal timestamps for the same series upon append.
* [CLEANUP] Resolve relative paths during configuration loading.

## 0.15.1 / 2015-07-27

* [BUGFIX] Fix vector matching behavior when there is a mix of equality and non-equality matchers in a vector selector and one matcher matches no series.
* [ENHANCEMENT] Allow overriding `GOARCH` and `GOOS` in Makefile.INCLUDE.
* [ENHANCEMENT] Update vendored dependencies.

## 0.15.0 / 2015-07-21

BREAKING CHANGES:

* Relative paths for rule files are now evaluated relative to the config file.
* External reachability flags (`-web.*`) consolidated.
* The default storage directory has been changed from `/tmp/metrics` to `data` in the local directory.
* The `rule_checker` tool has been replaced by `promtool` with different flags and more functionality.
* Empty labels are now removed upon ingestion into the storage. Matching empty labels is now equivalent to matching unset labels (`mymetric{label=""}` now matches series that don't have `label` set at all).
* The special `__meta_consul_tags` label in Consul service discovery now starts and ends with tag separators to enable easier regex matching.
* The default scrape interval has been changed back from 10 seconds to 1 minute.

All changes:

* [CHANGE] Change default storage directory to `data` in the current working directory.
* [CHANGE] Consolidate external reachability flags (`-web.*`) into one.
* [CHANGE] Deprecate `keeping_extra` modifier keyword, rename it to `keep_common`.
* [CHANGE] Improve label matching performance and treat unset labels like empty labels in label matchers.
* [CHANGE] Remove `rule_checker` tool and add generic `promtool` CLI tool which allows checking rules and configuration files.
* [CHANGE] Resolve rule files relative to config file.
* [CHANGE] Restore default ScrapeInterval of 1 minute instead of 10 seconds.
* [CHANGE] Surround `__meta_consul_tags` value with tag separators.
* [CHANGE] Update node disk console for new filesystem labels.
* [FEATURE] Add Consul's `ServiceAddress`, `Address`, and `ServicePort` as meta labels to enable setting a custom scrape address if needed.
* [FEATURE] Add `hashmod` relabel action to allow for horizontal sharding of Prometheus servers.
* [FEATURE] Add `honor_labels` scrape configuration option to not overwrite any labels exposed by the target.
* [FEATURE] Add basic federation support on `/federate`.
* [FEATURE] Add optional `RUNBOOK` field to alert statements.
* [FEATURE] Add pre-relabel target labels to status page.
* [FEATURE] Add version information endpoint under `/version`.
* [FEATURE] Added initial stable API version 1 under `/api/v1`, including ability to delete series and query more metadata.
* [FEATURE] Allow configuring query parameters when scraping metrics endpoints.
* [FEATURE] Allow deleting time series via the new v1 API.
* [FEATURE] Allow individual ingested metrics to be relabeled.
* [FEATURE] Allow loading rule files from an entire directory.
* [FEATURE] Allow scalar expressions in range queries, improve error messages.
* [FEATURE] Support Zookeeper Serversets as a service discovery mechanism.
* [ENHANCEMENT] Add circleci yaml for Dockerfile test build.
* [ENHANCEMENT] Always show selected graph range, regardless of available data.
* [ENHANCEMENT] Change expression input field to multi-line textarea.
* [ENHANCEMENT] Enforce strict monotonicity of time stamps within a series.
* [ENHANCEMENT] Export build information as metric.
* [ENHANCEMENT] Improve UI of `/alerts` page.
* [ENHANCEMENT] Improve display of target labels on status page.
* [ENHANCEMENT] Improve initialization and routing functionality of web service.
* [ENHANCEMENT] Improve target URL handling and display.
* [ENHANCEMENT] New dockerfile using alpine-glibc base image and make.
* [ENHANCEMENT] Other minor fixes.
* [ENHANCEMENT] Preserve alert state across reloads.
* [ENHANCEMENT] Prettify flag help output even more.
* [ENHANCEMENT] README.md updates.
* [ENHANCEMENT] Raise error on unknown config parameters.
* [ENHANCEMENT] Refine v1 HTTP API output.
* [ENHANCEMENT] Show original configuration file contents on status page instead of serialized YAML.
* [ENHANCEMENT] Start HUP signal handler earlier to not exit upon HUP during startup.
* [ENHANCEMENT] Updated vendored dependencies.
* [BUGFIX] Do not panic in `StringToDuration()` on wrong duration unit.
* [BUGFIX] Exit on invalid rule files on startup.
* [BUGFIX] Fix a regression in the `.Path` console template variable.
* [BUGFIX] Fix chunk descriptor loading.
* [BUGFIX] Fix consoles "Prometheus" link to point to /.
* [BUGFIX] Fix empty configuration file cases.
* [BUGFIX] Fix float to int conversions in chunk encoding, which were broken for some architectures.
* [BUGFIX] Fix overflow detection for serverset config.
* [BUGFIX] Fix race conditions in retrieval layer.
* [BUGFIX] Fix shutdown deadlock in Consul SD code.
* [BUGFIX] Fix the race condition targets in the Makefile.
* [BUGFIX] Fix value display error in web console.
* [BUGFIX] Hide authentication credentials in config `String()` output.
* [BUGFIX] Increment dirty counter metric in storage only if `setDirty(true)` is called.
* [BUGFIX] Periodically refresh services in Consul to recover from missing events.
* [BUGFIX] Prevent overwrite of default global config when loading a configuration.
* [BUGFIX] Properly lex `\r` as whitespace in expression language.
* [BUGFIX] Validate label names in JSON target groups.
* [BUGFIX] Validate presence of regex field in relabeling configurations.
* [CLEANUP] Clean up initialization of remote storage queues.
* [CLEANUP] Fix `go vet` and `golint` violations.
* [CLEANUP] General cleanup of rules and query language code.
* [CLEANUP] Improve and simplify Dockerfile build steps.
* [CLEANUP] Improve and simplify build infrastructure, use go-bindata for web assets. Allow building without git.
* [CLEANUP] Move all utility packages into common `util` subdirectory.
* [CLEANUP] Refactor main, flag handling, and web package.
* [CLEANUP] Remove unused methods from `Rule` interface.
* [CLEANUP] Simplify default config handling.
* [CLEANUP] Switch human-readable times on web UI to UTC.
* [CLEANUP] Use `templates.TemplateExpander` for all page templates.
* [CLEANUP] Use new v1 HTTP API for querying and graphing.

## 0.14.0 / 2015-06-01

* [CHANGE] Configuration format changed and switched to YAML. (See the provided [migration tool](https://github.com/prometheus/migrate/releases).)
* [ENHANCEMENT] Redesign of state-preserving target discovery.
* [ENHANCEMENT] Allow specifying scrape URL scheme and basic HTTP auth for non-static targets.
* [FEATURE] Allow attaching meaningful labels to targets via relabeling.
* [FEATURE] Configuration/rule reloading at runtime.
* [FEATURE] Target discovery via file watches.
* [FEATURE] Target discovery via Consul.
* [ENHANCEMENT] Simplified binary operation evaluation.
* [ENHANCEMENT] More stable component initialization.
* [ENHANCEMENT] Added internal expression testing language.
* [BUGFIX] Fix graph links with path prefix.
* [ENHANCEMENT] Allow building from source without git.
* [ENHANCEMENT] Improve storage iterator performance.
* [ENHANCEMENT] Change logging output format and flags.
* [BUGFIX] Fix memory alignment bug for 32bit systems.
* [ENHANCEMENT] Improve web redirection behavior.
* [ENHANCEMENT] Allow overriding default hostname for Prometheus URLs.
* [BUGFIX] Fix double slash in URL sent to alertmanager.
* [FEATURE] Add resets() query function to count counter resets.
* [FEATURE] Add changes() query function to count the number of times a gauge changed.
* [FEATURE] Add increase() query function to calculate a counter's increase.
* [ENHANCEMENT] Limit retrievable samples to the storage's retention window.
## 0.13.4 / 2015-05-23

* [BUGFIX] Fix a race while checkpointing fingerprint mappings.

## 0.13.3 / 2015-05-11

* [BUGFIX] Handle fingerprint collisions properly.
* [CHANGE] Comments in rules file must start with `#`. (The undocumented `//` and `/*...*/` comment styles are no longer supported.)
* [ENHANCEMENT] Switch to custom expression language parser and evaluation engine, which generates better error messages, fixes some parsing edge-cases, and enables other future enhancements (like the ones below).
* [ENHANCEMENT] Limit maximum number of concurrent queries.
* [ENHANCEMENT] Terminate running queries during shutdown.

## 0.13.2 / 2015-05-05

* [MAINTENANCE] Updated vendored dependencies to their newest versions.
* [MAINTENANCE] Include rule_checker and console templates in release tarball.
* [BUGFIX] Sort NaN as the lowest value.
* [ENHANCEMENT] Add square root, stddev and stdvar functions.
* [BUGFIX] Use scrape_timeout for scrape timeout, not scrape_interval.
* [ENHANCEMENT] Improve chunk and chunkDesc loading, increase performance when reading from disk.
* [BUGFIX] Show correct error on wrong DNS response.

## 0.13.1 / 2015-04-09

* [BUGFIX] Treat memory series with zero chunks correctly in series maintenance.
* [ENHANCEMENT] Improve readability of usage text even more.

## 0.13.0 / 2015-04-08

* [ENHANCEMENT] Double-delta encoding for chunks, saving typically 40% of space, both in RAM and on disk.
* [ENHANCEMENT] Redesign of chunk persistence queuing, increasing performance on spinning disks significantly.
* [ENHANCEMENT] Redesign of sample ingestion, increasing ingestion performance.
* [FEATURE] Added ln, log2, log10 and exp functions to the query language.
* [FEATURE] Experimental write support to InfluxDB.
* [FEATURE] Allow custom timestamps in instant query API.
* [FEATURE] Configurable path prefix for URLs to support proxies.
* [ENHANCEMENT] Increase of rule_checker CLI usability.
* [CHANGE] Show special float values as gaps.
* [ENHANCEMENT] Made usage output more readable.
* [ENHANCEMENT] Increased resilience of the storage against data corruption.
* [ENHANCEMENT] Various improvements around chunk encoding.
* [ENHANCEMENT] Nicer formatting of target health table on /status.
* [CHANGE] Rename UNREACHABLE to UNHEALTHY, ALIVE to HEALTHY.
* [BUGFIX] Strip trailing slash in alertmanager URL.
* [BUGFIX] Avoid +InfYs and similar, just display +Inf.
* [BUGFIX] Fixed HTML-escaping at various places.
* [BUGFIX] Fixed special value handling in division and modulo of the query language.
* [BUGFIX] Fix embed-static.sh.
* [CLEANUP] Added initial HTTP API tests.
* [CLEANUP] Misc. other code cleanups.
* [MAINTENANCE] Updated vendored dependencies to their newest versions.

## 0.12.0 / 2015-03-04

* [CHANGE] Use client_golang v0.3.1. THIS CHANGES FINGERPRINTING AND INVALIDATES ALL PERSISTED FINGERPRINTS. You have to wipe your storage to use this or later versions. There is a version guard in place that will prevent you from running Prometheus with the stored data of an older Prometheus.
* [BUGFIX] The change above fixes a weakness in the fingerprinting algorithm.
* [ENHANCEMENT] The change above makes fingerprinting faster and less allocation intensive.
* [FEATURE] OR operator and vector matching options. See docs for details.
* [ENHANCEMENT] Scientific notation and special float values (Inf, NaN) now supported by the expression language.
* [CHANGE] Dockerfile makes Prometheus use the Docker volume to store data (rather than /tmp/metrics).
* [CHANGE] Makefile uses Go 1.4.2.
## 0.11.1 / 2015-02-27

* [BUGFIX] Make series maintenance complete again. (Ever since 0.9.0rc4, or commit 0851945, series would not be archived, chunk descriptors would not be evicted, and stale head chunks would never be closed. This happened due to accidental deletion of a line calling a (well tested :) function.)
* [BUGFIX] Do not double count head chunks read from checkpoint on startup. Also fix a related but less severe bug in counting chunk descriptors.
* [BUGFIX] Check last time in head chunk for head chunk timeout, not first.
* [CHANGE] Update vendoring due to vendoring changes in client_golang.
* [CLEANUP] Code cleanups.
* [ENHANCEMENT] Limit the number of 'dirty' series counted during checkpointing.

## 0.11.0 / 2015-02-23

* [FEATURE] Introduce new metric type Histogram with server-side aggregation.
* [FEATURE] Add offset operator.
* [FEATURE] Add floor, ceil and round functions.
* [CHANGE] Change instance identifiers to be host:port.
* [CHANGE] Dependency management and vendoring changed/improved.
* [CHANGE] Flag name changes to create consistency between various Prometheus binaries.
* [CHANGE] Show unlimited number of metrics in autocomplete.
* [CHANGE] Add query timeout.
* [CHANGE] Remove labels on persist error counter.
* [ENHANCEMENT] Various performance improvements for sample ingestion.
* [ENHANCEMENT] Various Makefile improvements.
* [ENHANCEMENT] Various console template improvements, including proof-of-concept for federation via console templates.
* [ENHANCEMENT] Fix graph JS glitches and simplify graphing code.
* [ENHANCEMENT] Dramatically decrease resources for file embedding.
* [ENHANCEMENT] Crash recovery saves lost series data in 'orphaned' directory.
* [BUGFIX] Fix aggregation grouping key calculation.
* [BUGFIX] Fix Go download path for various architectures.
* [BUGFIX] Fixed the link of the Travis build status image.
* [BUGFIX] Fix Rickshaw/D3 version mismatch.
* [CLEANUP] Various code cleanups.

## 0.10.0 / 2015-01-26

* [CHANGE] More efficient JSON result format in query API. This requires up-to-date versions of PromDash and prometheus_cli, too.
* [ENHANCEMENT] Excluded non-minified Bootstrap assets and the Bootstrap maps from embedding into the binary. Those files are only used for debugging, and then you can use -web.use-local-assets. By including fewer files, the RAM usage during compilation is much more manageable.
* [ENHANCEMENT] Help link points to http://prometheus.github.io now.
* [FEATURE] Consoles for haproxy and cloudwatch.
* [BUGFIX] Several fixes to graphs in consoles.
* [CLEANUP] Removed a file size check that did not check anything.

## 0.9.0 / 2015-01-23

* [CHANGE] Reworked command line flags, now more consistent and taking into account needs of the new storage backend (see below).
* [CHANGE] Metric names are dropped after certain transformations.
* [CHANGE] Changed partitioning of summary metrics exported by Prometheus.
* [CHANGE] Got rid of Gerrit as a review tool.
* [CHANGE] 'Tabular' view now the default (rather than 'Graph') to avoid running very expensive queries accidentally.
* [CHANGE] On-disk format for stored samples changed. For upgrading, you have to nuke your old files completely. See "Complete rewrite of the storage layer" below.
* [CHANGE] Removed 2nd argument from `delta`.
* [FEATURE] Added a `deriv` function.
* [FEATURE] Console templates.
* [FEATURE] Added `absent` function.
* [FEATURE] Allow omitting the metric name in queries.
* [BUGFIX] Removed all known race conditions.
* [BUGFIX] Metric mutations now handled correctly in all cases.
* [ENHANCEMENT] Proper double-start protection.
* [ENHANCEMENT] Complete rewrite of the storage layer. Benefits include:
  * Better query performance.
  * More samples in less RAM.
  * Better memory management.
  * Scales up to millions of time series and thousands of samples ingested per second.
  * Purging of obsolete samples much cleaner now, up to completely "forgetting" obsolete time series.
  * Proper instrumentation to diagnose the storage layer with... well... Prometheus.
  * Pure Go implementation, no need for cgo and shared C libraries anymore.
  * Better concurrency.
* [ENHANCEMENT] Copy-on-write semantics in the AST layer.
* [ENHANCEMENT] Switched from Go 1.3 to Go 1.4.
* [ENHANCEMENT] Vendored external dependencies with godeps.
* [ENHANCEMENT] Numerous Web UI improvements, moved to Bootstrap3 and Rickshaw 1.5.1.
* [ENHANCEMENT] Improved Docker integration.
* [ENHANCEMENT] Simplified the Makefile contraption.
* [CLEANUP] Put meta-data files into proper shape (LICENSE, README.md etc.)
* [CLEANUP] Removed all legitimate 'go vet' and 'golint' warnings.
* [CLEANUP] Removed dead code.

## 0.8.0 / 2014-09-04

* [ENHANCEMENT] Stagger scrapes to spread out load.
* [BUGFIX] Correctly quote HTTP Accept header.

## 0.7.0 / 2014-08-06

* [FEATURE] Added new functions: abs(), topk(), bottomk(), drop_common_labels().
* [FEATURE] Let console templates get graph links from expressions.
* [FEATURE] Allow console templates to dynamically include other templates.
* [FEATURE] Template consoles now have access to their URL.
* [BUGFIX] Fixed time() function to return evaluation time, not wallclock time.
* [BUGFIX] Fixed HTTP connection leak when targets returned a non-200 status.
* [BUGFIX] Fixed link to console templates in UI.
* [PERFORMANCE] Removed extra memory copies while scraping targets.
* [ENHANCEMENT] Switched from Go 1.2.1 to Go 1.3.
* [ENHANCEMENT] Made metrics exported by Prometheus itself more consistent.
* [ENHANCEMENT] Removed incremental backoffs for unhealthy targets.
* [ENHANCEMENT] Dockerfile also builds Prometheus support tools now.

## 0.6.0 / 2014-06-30

* [FEATURE] Added console and alert templates support, along with various template functions.
* [PERFORMANCE] Much faster and more memory-efficient flushing to disk.
* [ENHANCEMENT] Query results are now only logged when debugging.
* [ENHANCEMENT] Upgraded to new Prometheus client library for exposing metrics.
* [BUGFIX] Samples are now kept in memory until fully flushed to disk.
* [BUGFIX] Non-200 target scrapes are now treated as an error.
* [BUGFIX] Added installation step for missing dependency to Dockerfile.
* [BUGFIX] Removed broken and unused "User Dashboard" link.

## 0.5.0 / 2014-05-28

* [BUGFIX] Fixed next retrieval time display on status page.
* [BUGFIX] Updated some variable references in tools subdir.
* [FEATURE] Added support for scraping metrics via the new text format.
* [PERFORMANCE] Improved label matcher performance.
* [PERFORMANCE] Removed JSON indentation in query API, leading to smaller response sizes.
* [ENHANCEMENT] Added internal check to verify temporal order of streams.
* [ENHANCEMENT] Some internal refactorings.

## 0.4.0 / 2014-04-17

* [FEATURE] Vectors and scalars may now be reversed in binary operations (`<scalar> <binary-op> <vector>`).
* [FEATURE] It's possible to shutdown Prometheus via a `/-/quit` web endpoint now.
* [BUGFIX] Fix for a deadlock race condition in the memory storage.
* [BUGFIX] Mac OS X build fixed.
* [BUGFIX] Built from Go 1.2.1, which has internal fixes to race conditions in garbage collection handling.
* [ENHANCEMENT] Internal storage interface refactoring that allows building e.g. the `rule_checker` tool without LevelDB dynamic library dependencies.
* [ENHANCEMENT] Cleanups around shutdown handling.
* [PERFORMANCE] Preparations for better memory reuse during marshalling / unmarshalling.

==> prometheus-0.16.2+ds/CONTRIBUTING.md <==
# Contributing

Prometheus uses GitHub to manage reviews of pull requests.

* If you have a trivial fix or improvement, go ahead and create a pull request, addressing (with `@...`) one or more of the maintainers (see [AUTHORS.md](AUTHORS.md)) in the description of the pull request.

* If you plan to do something more involved, first discuss your ideas on our [mailing list](https://groups.google.com/forum/?fromgroups#!forum/prometheus-developers). This will avoid unnecessary work and surely give you and us a good deal of inspiration.

* Relevant coding style guidelines are the [Go Code Review Comments](https://code.google.com/p/go-wiki/wiki/CodeReviewComments) and the _Formatting and style_ section of Peter Bourgon's [Go: Best Practices for Production Environments](http://peter.bourgon.org/go-in-production/#formatting-and-style).

==> prometheus-0.16.2+ds/Dockerfile <==
FROM sdurrheimer/alpine-glibc
MAINTAINER The Prometheus Authors

WORKDIR /gopath/src/github.com/prometheus/prometheus
COPY . /gopath/src/github.com/prometheus/prometheus

RUN apk add --update -t build-deps tar openssl git make bash \
    && source ./scripts/goenv.sh /go /gopath \
    && make build \
    && cp prometheus promtool /bin/ \
    && mkdir -p /etc/prometheus \
    && mv ./documentation/examples/prometheus.yml /etc/prometheus/prometheus.yml \
    && mv ./console_libraries/ ./consoles/ /etc/prometheus/ \
    && apk del --purge build-deps \
    && rm -rf /go /gopath /var/cache/apk/*

EXPOSE 9090
VOLUME [ "/prometheus" ]
WORKDIR /prometheus
ENTRYPOINT [ "/bin/prometheus" ]
CMD [ "-config.file=/etc/prometheus/prometheus.yml", \
      "-storage.local.path=/prometheus", \
      "-web.console.libraries=/etc/prometheus/console_libraries", \
      "-web.console.templates=/etc/prometheus/consoles" ]

==> prometheus-0.16.2+ds/LICENSE <==
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/

TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION

1. Definitions.

"License" shall mean the terms and conditions for use, reproduction, and distribution as defined by Sections 1 through 9 of this document.

"Licensor" shall mean the copyright owner or entity authorized by the copyright owner that is granting the License.

"Legal Entity" shall mean the union of the acting entity and all other entities that control, are controlled by, or are under common control with that entity. For the purposes of this definition, "control" means (i) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the outstanding shares, or (iii) beneficial ownership of such entity.

"You" (or "Your") shall mean an individual or Legal Entity exercising permissions granted by this License.

"Source" form shall mean the preferred form for making modifications, including but not limited to software source code, documentation source, and configuration files.
"Object" form shall mean any form resulting from mechanical transformation or translation of a Source form, including but not limited to compiled object code, generated documentation, and conversions to other media types. "Work" shall mean the work of authorship, whether in Source or Object form, made available under the License, as indicated by a copyright notice that is included in or attached to the work (an example is provided in the Appendix below). "Derivative Works" shall mean any work, whether in Source or Object form, that is based on (or derived from) the Work and for which the editorial revisions, annotations, elaborations, or other modifications represent, as a whole, an original work of authorship. For the purposes of this License, Derivative Works shall not include works that remain separable from, or merely link (or bind by name) to the interfaces of, the Work and Derivative Works thereof. "Contribution" shall mean any work of authorship, including the original version of the Work and any modifications or additions to that Work or Derivative Works thereof, that is intentionally submitted to Licensor for inclusion in the Work by the copyright owner or by an individual or Legal Entity authorized to submit on behalf of the copyright owner. For the purposes of this definition, "submitted" means any form of electronic, verbal, or written communication sent to the Licensor or its representatives, including but not limited to communication on electronic mailing lists, source code control systems, and issue tracking systems that are managed by, or on behalf of, the Licensor for the purpose of discussing and improving the Work, but excluding communication that is conspicuously marked or otherwise designated in writing by the copyright owner as "Not a Contribution." "Contributor" shall mean Licensor and any individual or Legal Entity on behalf of whom a Contribution has been received by Licensor and subsequently incorporated within the Work. 2. Grant of Copyright License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare Derivative Works of, publicly display, publicly perform, sublicense, and distribute the Work and such Derivative Works in Source or Object form. 3. Grant of Patent License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Work, where such license applies only to those patent claims licensable by such Contributor that are necessarily infringed by their Contribution(s) alone or by combination of their Contribution(s) with the Work to which such Contribution(s) was submitted. If You institute patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Work or a Contribution incorporated within the Work constitutes direct or contributory patent infringement, then any patent licenses granted to You under this License for that Work shall terminate as of the date such litigation is filed. 4. Redistribution. 
You may reproduce and distribute copies of the Work or Derivative Works thereof in any medium, with or without modifications, and in Source or Object form, provided that You meet the following conditions:

(a) You must give any other recipients of the Work or Derivative Works a copy of this License; and

(b) You must cause any modified files to carry prominent notices stating that You changed the files; and

(c) You must retain, in the Source form of any Derivative Works that You distribute, all copyright, patent, trademark, and attribution notices from the Source form of the Work, excluding those notices that do not pertain to any part of the Derivative Works; and

(d) If the Work includes a "NOTICE" text file as part of its distribution, then any Derivative Works that You distribute must include a readable copy of the attribution notices contained within such NOTICE file, excluding those notices that do not pertain to any part of the Derivative Works, in at least one of the following places: within a NOTICE text file distributed as part of the Derivative Works; within the Source form or documentation, if provided along with the Derivative Works; or, within a display generated by the Derivative Works, if and wherever such third-party notices normally appear. The contents of the NOTICE file are for informational purposes only and do not modify the License. You may add Your own attribution notices within Derivative Works that You distribute, alongside or as an addendum to the NOTICE text from the Work, provided that such additional attribution notices cannot be construed as modifying the License.

You may add Your own copyright statement to Your modifications and may provide additional or different license terms and conditions for use, reproduction, or distribution of Your modifications, or for any such Derivative Works as a whole, provided Your use, reproduction, and distribution of the Work otherwise complies with the conditions stated in this License.

5. Submission of Contributions. Unless You explicitly state otherwise, any Contribution intentionally submitted for inclusion in the Work by You to the Licensor shall be under the terms and conditions of this License, without any additional terms or conditions. Notwithstanding the above, nothing herein shall supersede or modify the terms of any separate license agreement you may have executed with Licensor regarding such Contributions.

6. Trademarks. This License does not grant permission to use the trade names, trademarks, service marks, or product names of the Licensor, except as required for reasonable and customary use in describing the origin of the Work and reproducing the content of the NOTICE file.

7. Disclaimer of Warranty. Unless required by applicable law or agreed to in writing, Licensor provides the Work (and each Contributor provides its Contributions) on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are solely responsible for determining the appropriateness of using or redistributing the Work and assume any risks associated with Your exercise of permissions under this License.

8. Limitation of Liability.
In no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall any Contributor be liable to You for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising as a result of this License or out of the use or inability to use the Work (including but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses), even if such Contributor has been advised of the possibility of such damages.

9. Accepting Warranty or Additional Liability. While redistributing the Work or Derivative Works thereof, You may choose to offer, and charge a fee for, acceptance of support, warranty, indemnity, or other liability obligations and/or rights consistent with this License. However, in accepting such obligations, You may act only on Your own behalf and on Your sole responsibility, not on behalf of any other Contributor, and only if You agree to indemnify, defend, and hold each Contributor harmless for any liability incurred by, or claims asserted against, such Contributor by reason of your accepting any such warranty or additional liability.

END OF TERMS AND CONDITIONS

APPENDIX: How to apply the Apache License to your work.

To apply the Apache License to your work, attach the following boilerplate notice, with the fields enclosed by brackets "[]" replaced with your own identifying information. (Don't include the brackets!) The text should be enclosed in the appropriate comment syntax for the file format. We also recommend that a file or class name and description of purpose be included on the same "printed page" as the copyright notice for easier identification within third-party archives.

Copyright [yyyy] [name of copyright owner]

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

==> prometheus-0.16.2+ds/Makefile <==
# Copyright 2015 The Prometheus Authors
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

GO   := GO15VENDOREXPERIMENT=1 go
pkgs  = $(shell $(GO) list ./... | grep -v /vendor/)

all: format build test

style:
	@echo ">> checking code style"
	@! gofmt -d **/*.go | grep '^'
test:
	@echo ">> running tests"
	@$(GO) test -short $(pkgs)

format:
	@echo ">> formatting code"
	@$(GO) fmt $(pkgs)

vet:
	@echo ">> vetting code"
	@$(GO) vet $(pkgs)

build:
	@echo ">> building binaries"
	@./scripts/build.sh

tarballs:
	@echo ">> building release tarballs"
	@./scripts/release_tarballs.sh

docker:
	@docker build -t prometheus:$(shell git rev-parse --short HEAD) .

assets:
	@echo ">> writing assets"
	@$(GO) get -u github.com/jteeuwen/go-bindata/...
	@$(GO) generate ./web/blob
	@$(GO) fmt ./web/blob >/dev/null

.PHONY: all style format build test vet docker assets tarballs

==> prometheus-0.16.2+ds/NOTICE <==
The Prometheus systems and service monitoring server
Copyright 2012-2015 The Prometheus Authors

This product includes software developed at
SoundCloud Ltd. (http://soundcloud.com/).

The following components are included in this product:

Bootstrap
http://getbootstrap.com
Copyright 2011-2014 Twitter, Inc.
Licensed under the MIT License

bootstrap3-typeahead.js
https://github.com/bassjobsen/Bootstrap-3-Typeahead
Original written by @mdo and @fat
Copyright 2014 Bass Jobsen @bassjobsen
Licensed under the Apache License, Version 2.0

bootstrap-datetimepicker.js
http://www.eyecon.ro/bootstrap-datepicker
Copyright 2012 Stefan Petre
Licensed under the Apache License, Version 2.0

Rickshaw
https://github.com/shutterstock/rickshaw
Copyright 2011-2014 by Shutterstock Images, LLC
See https://github.com/shutterstock/rickshaw/blob/master/LICENSE for license details

handlebars.js
Copyright 2011 by Yehuda Katz
See web/static/vendor/js/handlebars.js for license details

jQuery
https://jquery.org
Copyright jQuery Foundation and other contributors
Licensed under the MIT License

Go support for Protocol Buffers - Google's data interchange format
http://github.com/golang/protobuf/
Copyright 2010 The Go Authors
See source code for license details.

Go support for leveled logs, analogous to
https://code.google.com/p/google-glog/
Copyright 2013 Google Inc.
Licensed under the Apache License, Version 2.0

Support for streaming Protocol Buffer messages for the Go language (golang).
https://github.com/matttproud/golang_protobuf_extensions
Copyright 2013 Matt T. Proud
Licensed under the Apache License, Version 2.0

DNS library in Go
http://miek.nl/posts/2014/Aug/16/go-dns-package/
Copyright 2009 The Go Authors, 2011 Miek Gieben
See https://github.com/miekg/dns/blob/master/LICENSE for license details.

LevelDB key/value database in Go
https://github.com/syndtr/goleveldb
Copyright 2012 Suryandaru Triandana
See https://github.com/syndtr/goleveldb/blob/master/LICENSE for license details.

gosnappy - a fork of code.google.com/p/snappy-go
https://github.com/syndtr/gosnappy
Copyright 2011 The Snappy-Go Authors
See https://github.com/syndtr/gosnappy/blob/master/LICENSE for license details.

go-zookeeper - Native ZooKeeper client for Go
https://github.com/samuel/go-zookeeper
Copyright (c) 2013, Samuel Stauffer
See https://github.com/samuel/go-zookeeper/blob/master/LICENSE for license details.

==> prometheus-0.16.2+ds/README.md <==
# Prometheus [![Build Status](https://travis-ci.org/prometheus/prometheus.svg)](https://travis-ci.org/prometheus/prometheus) [![Circle CI](https://circleci.com/gh/prometheus/prometheus/tree/master.svg?style=svg)](https://circleci.com/gh/prometheus/prometheus/tree/master)

Prometheus is a systems and service monitoring system.
It collects metrics from configured targets at given intervals, evaluates rule expressions, displays the results, and can trigger alerts if some condition is observed to be true.

Prometheus' main distinguishing features as compared to other monitoring systems are:

- a **multi-dimensional** data model (timeseries defined by metric name and set of key/value dimensions)
- a **flexible query language** to leverage this dimensionality
- no dependency on distributed storage; **single server nodes are autonomous**
- timeseries collection happens via a **pull model** over HTTP
- **pushing timeseries** is supported via an intermediary gateway
- targets are discovered via **service discovery** or **static configuration**
- multiple modes of **graphing and dashboarding support**
- support for hierarchical and horizontal **federation**

## Architecture overview

![](https://cdn.rawgit.com/prometheus/prometheus/e761f0d/documentation/images/architecture.svg)

## Install

There are various ways of installing Prometheus.

### Precompiled binaries

Precompiled binaries for released versions are available in the [*releases* section](https://github.com/prometheus/prometheus/releases) of the GitHub repository. Using the latest production release binary is the recommended way of installing Prometheus. Debian and RPM packages are being worked on.

### Building from source

To build Prometheus from the source code yourself you need to have a working Go environment with [version 1.5 or greater installed](http://golang.org/doc/install).

You can directly use the `go` tool to download and install the `prometheus` and `promtool` binaries into your `GOPATH`. We use Go 1.5's experimental vendoring feature, so you will also need to set the `GO15VENDOREXPERIMENT=1` environment variable in this case:

    $ GO15VENDOREXPERIMENT=1 go get github.com/prometheus/prometheus/cmd/...
    $ prometheus -config.file=your_config.yml

You can also clone the repository yourself and build using `make`:

    $ mkdir -p $GOPATH/src/github.com/prometheus
    $ cd $GOPATH/src/github.com/prometheus
    $ git clone https://github.com/prometheus/prometheus.git
    $ cd prometheus
    $ make
    $ ./prometheus -config.file=your_config.yml

The Makefile provides several targets:

* *build*: build the `prometheus` and `promtool` binaries
* *test*: run the tests
* *format*: format the source code
* *vet*: check the source code for common errors
* *assets*: rebuild the static assets
* *docker*: build a docker container for the current `HEAD`

## More information

* The source code is periodically indexed: [Prometheus Core](http://godoc.org/github.com/prometheus/prometheus).
* You will find a Travis CI configuration in `.travis.yml`.
* All of the core developers are accessible via the [Prometheus Developers Mailinglist](https://groups.google.com/forum/?fromgroups#!forum/prometheus-developers) and the `#prometheus` channel on `irc.freenode.net`.

## Contributing

Refer to [CONTRIBUTING.md](CONTRIBUTING.md)

## License

Apache License 2.0, see [LICENSE](LICENSE).
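As a companion to the build and run instructions above: the `-config.file` flag expects a YAML configuration file. A minimal sketch is shown below (illustrative only; field names assume the 0.16-era configuration format, and the example actually shipped in `documentation/examples/prometheus.yml` is the authoritative starting point):

    # Hypothetical minimal configuration for self-scraping; see
    # documentation/examples/prometheus.yml for the shipped example.
    global:
      scrape_interval: 15s

    scrape_configs:
      - job_name: 'prometheus'
        target_groups:
          - targets: ['localhost:9090']

With a file like this, Prometheus scrapes its own `/metrics` endpoint every 15 seconds.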
==> prometheus-0.16.2+ds/circle.yml <==
machine:
  services:
    - docker

dependencies:
  override:
    - make docker

test:
  override:
    - /bin/true

==> prometheus-0.16.2+ds/cmd/prometheus/config.go <==
// Copyright 2015 The Prometheus Authors
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

package main

import (
	"flag"
	"fmt"
	"net"
	"net/url"
	"os"
	"strings"
	"text/template"
	"time"
	"unicode"

	"github.com/prometheus/common/log"

	"github.com/prometheus/prometheus/notification"
	"github.com/prometheus/prometheus/promql"
	"github.com/prometheus/prometheus/storage/local"
	"github.com/prometheus/prometheus/storage/local/index"
	"github.com/prometheus/prometheus/storage/remote"
	"github.com/prometheus/prometheus/web"
)

// cfg contains immutable configuration parameters for a running Prometheus
// server. It is populated by its flag set.
var cfg = struct {
	fs *flag.FlagSet

	printVersion bool
	configFile   string

	storage      local.MemorySeriesStorageOptions
	notification notification.NotificationHandlerOptions
	queryEngine  promql.EngineOptions
	web          web.Options
	remote       remote.Options

	prometheusURL string
	influxdbURL   string
}{}

func init() {
	flag.CommandLine.Init(os.Args[0], flag.ContinueOnError)
	flag.CommandLine.Usage = usage

	cfg.fs = flag.CommandLine

	// Set additional defaults.
	cfg.storage.SyncStrategy = local.Adaptive

	cfg.fs.BoolVar(
		&cfg.printVersion, "version", false,
		"Print version information.",
	)
	cfg.fs.StringVar(
		&cfg.configFile, "config.file", "prometheus.yml",
		"Prometheus configuration file name.",
	)

	// Web.
	cfg.fs.StringVar(
		&cfg.web.ListenAddress, "web.listen-address", ":9090",
		"Address to listen on for the web interface, API, and telemetry.",
	)
	cfg.fs.StringVar(
		&cfg.prometheusURL, "web.external-url", "",
		"The URL under which Prometheus is externally reachable (for example, if Prometheus is served via a reverse proxy). Used for generating relative and absolute links back to Prometheus itself. If the URL has a path portion, it will be used to prefix all HTTP endpoints served by Prometheus. If omitted, relevant URL components will be derived automatically.",
	)
	cfg.fs.StringVar(
		&cfg.web.MetricsPath, "web.telemetry-path", "/metrics",
		"Path under which to expose metrics.",
	)
	cfg.fs.BoolVar(
		&cfg.web.UseLocalAssets, "web.use-local-assets", false,
		"Read assets/templates from file instead of binary.",
	)
	cfg.fs.StringVar(
		&cfg.web.UserAssetsPath, "web.user-assets", "",
		"Path to static asset directory, available at /user.",
	)
	cfg.fs.BoolVar(
		&cfg.web.EnableQuit, "web.enable-remote-shutdown", false,
		"Enable remote service shutdown.",
	)
	cfg.fs.StringVar(
		&cfg.web.ConsoleTemplatesPath, "web.console.templates", "consoles",
		"Path to the console template directory, available at /consoles.",
	)
	cfg.fs.StringVar(
		&cfg.web.ConsoleLibrariesPath, "web.console.libraries", "console_libraries",
		"Path to the console library directory.",
	)

	// Storage.
	cfg.fs.StringVar(
		&cfg.storage.PersistenceStoragePath, "storage.local.path", "data",
		"Base path for metrics storage.",
	)
	cfg.fs.IntVar(
		&cfg.storage.MemoryChunks, "storage.local.memory-chunks", 1024*1024,
		"How many chunks to keep in memory. While the size of a chunk is 1kiB, the total memory usage will be significantly higher than this value * 1kiB. Furthermore, for various reasons, more chunks might have to be kept in memory temporarily.",
	)
	cfg.fs.DurationVar(
		&cfg.storage.PersistenceRetentionPeriod, "storage.local.retention", 15*24*time.Hour,
		"How long to retain samples in the local storage.",
	)
	cfg.fs.IntVar(
		&cfg.storage.MaxChunksToPersist, "storage.local.max-chunks-to-persist", 1024*1024,
		"How many chunks can be waiting for persistence before sample ingestion will stop. Many chunks waiting to be persisted will increase the checkpoint size.",
	)
	cfg.fs.DurationVar(
		&cfg.storage.CheckpointInterval, "storage.local.checkpoint-interval", 5*time.Minute,
		"The period at which the in-memory metrics and the chunks not yet persisted to series files are checkpointed.",
	)
	cfg.fs.IntVar(
		&cfg.storage.CheckpointDirtySeriesLimit, "storage.local.checkpoint-dirty-series-limit", 5000,
		"If approx. that many time series are in a state that would require a recovery operation after a crash, a checkpoint is triggered, even if the checkpoint interval hasn't passed yet. A recovery operation requires a disk seek. The default limit intends to keep the recovery time below 1min even on spinning disks. With SSD, recovery is much faster, so you might want to increase this value in that case to avoid overly frequent checkpoints.",
	)
	cfg.fs.Var(
		&cfg.storage.SyncStrategy, "storage.local.series-sync-strategy",
		"When to sync series files after modification. Possible values: 'never', 'always', 'adaptive'. Sync'ing slows down storage performance but reduces the risk of data loss in case of an OS crash. With the 'adaptive' strategy, series files are sync'd for as long as the storage is not too much behind on chunk persistence.",
	)
	cfg.fs.Float64Var(
		&cfg.storage.MinShrinkRatio, "storage.local.series-file-shrink-ratio", 0.1,
		"A series file is only truncated (to delete samples that have exceeded the retention period) if it shrinks by at least the provided ratio. This saves I/O operations while causing only a limited storage space overhead. If 0 or smaller, truncation will be performed even for a single dropped chunk, while 1 or larger will effectively prevent any truncation.",
	)
If 0 or smaller, truncation will be performed even for a single dropped chunk, while 1 or larger will effectively prevent any truncation.", ) cfg.fs.BoolVar( &cfg.storage.Dirty, "storage.local.dirty", false, "If set, the local storage layer will perform crash recovery even if the last shutdown appears to be clean.", ) cfg.fs.BoolVar( &cfg.storage.PedanticChecks, "storage.local.pedantic-checks", false, "If set, a crash recovery will perform checks on each series file. This might take a very long time.", ) cfg.fs.Var( &local.DefaultChunkEncoding, "storage.local.chunk-encoding-version", "Which chunk encoding version to use for newly created chunks. Currently supported is 0 (delta encoding) and 1 (double-delta encoding).", ) // Index cache sizes. cfg.fs.IntVar( &index.FingerprintMetricCacheSize, "storage.local.index-cache-size.fingerprint-to-metric", index.FingerprintMetricCacheSize, "The size in bytes for the fingerprint to metric index cache.", ) cfg.fs.IntVar( &index.FingerprintTimeRangeCacheSize, "storage.local.index-cache-size.fingerprint-to-timerange", index.FingerprintTimeRangeCacheSize, "The size in bytes for the metric time range index cache.", ) cfg.fs.IntVar( &index.LabelNameLabelValuesCacheSize, "storage.local.index-cache-size.label-name-to-label-values", index.LabelNameLabelValuesCacheSize, "The size in bytes for the label name to label values index cache.", ) cfg.fs.IntVar( &index.LabelPairFingerprintsCacheSize, "storage.local.index-cache-size.label-pair-to-fingerprints", index.LabelPairFingerprintsCacheSize, "The size in bytes for the label pair to fingerprints index cache.", ) // Remote storage. cfg.fs.StringVar( &cfg.remote.GraphiteAddress, "storage.remote.graphite-address", "", "The host:port of the remote Graphite server to send samples to. None, if empty.", ) cfg.fs.StringVar( &cfg.remote.GraphiteTransport, "storage.remote.graphite-transport", "tcp", "Transport protocol to use to communicate with Graphite. 'tcp', if empty.", ) cfg.fs.StringVar( &cfg.remote.GraphitePrefix, "storage.remote.graphite-prefix", "", "The prefix to prepend to all metrics exported to Graphite. None, if empty.", ) cfg.fs.StringVar( &cfg.remote.OpentsdbURL, "storage.remote.opentsdb-url", "", "The URL of the remote OpenTSDB server to send samples to. None, if empty.", ) cfg.fs.StringVar( &cfg.influxdbURL, "storage.remote.influxdb-url", "", "The URL of the remote InfluxDB server to send samples to. None, if empty.", ) cfg.fs.StringVar( &cfg.remote.InfluxdbRetentionPolicy, "storage.remote.influxdb.retention-policy", "default", "The InfluxDB retention policy to use.", ) cfg.fs.StringVar( &cfg.remote.InfluxdbUsername, "storage.remote.influxdb.username", "", "The username to use when sending samples to InfluxDB. The corresponding password must be provided via the INFLUXDB_PW environment variable.", ) cfg.fs.StringVar( &cfg.remote.InfluxdbDatabase, "storage.remote.influxdb.database", "prometheus", "The name of the database to use for storing samples in InfluxDB.", ) cfg.fs.DurationVar( &cfg.remote.StorageTimeout, "storage.remote.timeout", 30*time.Second, "The timeout to use when sending samples to the remote storage.", ) // Alertmanager. 
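// As an illustration (the URL below is hypothetical), a server that should deliver alerts to a local Alertmanager might be started with: // prometheus -alertmanager.url=http://localhost:9093 -alertmanager.notification-queue-capacity=1000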
cfg.fs.StringVar( &cfg.notification.AlertmanagerURL, "alertmanager.url", "", "The URL of the alert manager to send notifications to.", ) cfg.fs.IntVar( &cfg.notification.QueueCapacity, "alertmanager.notification-queue-capacity", 100, "The capacity of the queue for pending alert manager notifications.", ) cfg.fs.DurationVar( &cfg.notification.Deadline, "alertmanager.http-deadline", 10*time.Second, "Alert manager HTTP API timeout.", ) // Query engine. cfg.fs.DurationVar( &promql.StalenessDelta, "query.staleness-delta", promql.StalenessDelta, "Staleness delta allowance during expression evaluations.", ) cfg.fs.DurationVar( &cfg.queryEngine.Timeout, "query.timeout", 2*time.Minute, "Maximum time a query may take before being aborted.", ) cfg.fs.IntVar( &cfg.queryEngine.MaxConcurrentQueries, "query.max-concurrency", 20, "Maximum number of queries executed concurrently.", ) } func parse(args []string) error { err := cfg.fs.Parse(args) if err != nil { if err != flag.ErrHelp { log.Errorf("Invalid command line arguments. Help: %s -h", os.Args[0]) } return err } if err := parsePrometheusURL(); err != nil { return err } if err := parseInfluxdbURL(); err != nil { return err } cfg.remote.InfluxdbPassword = os.Getenv("INFLUXDB_PW") return nil } func parsePrometheusURL() error { if cfg.prometheusURL == "" { hostname, err := os.Hostname() if err != nil { return err } _, port, err := net.SplitHostPort(cfg.web.ListenAddress) if err != nil { return err } cfg.prometheusURL = fmt.Sprintf("http://%s:%s/", hostname, port) } promURL, err := url.Parse(cfg.prometheusURL) if err != nil { return err } cfg.web.ExternalURL = promURL ppref := strings.TrimRight(cfg.web.ExternalURL.Path, "/") if ppref != "" && !strings.HasPrefix(ppref, "/") { ppref = "/" + ppref } cfg.web.ExternalURL.Path = ppref return nil } func parseInfluxdbURL() error { if cfg.influxdbURL == "" { return nil } url, err := url.Parse(cfg.influxdbURL) if err != nil { return err } cfg.remote.InfluxdbURL = url return nil } var helpTmpl = ` usage: prometheus [] {{ range $cat, $flags := . }}{{ if ne $cat "." }} == {{ $cat | upper }} =={{ end }} {{ range $flags }} -{{ .Name }} {{ .DefValue | quote }} {{ .Usage | wrap 80 6 }} {{ end }} {{ end }} ` func usage() { helpTmpl = strings.TrimSpace(helpTmpl) t := template.New("usage") t = t.Funcs(template.FuncMap{ "wrap": func(width, indent int, s string) (ns string) { width = width - indent length := indent for _, w := range strings.SplitAfter(s, " ") { if length+len(w) > width { ns += "\n" + strings.Repeat(" ", indent) length = 0 } ns += w length += len(w) } return strings.TrimSpace(ns) }, "quote": func(s string) string { if len(s) == 0 || s == "false" || s == "true" || unicode.IsDigit(rune(s[0])) { return s } return fmt.Sprintf("%q", s) }, "upper": strings.ToUpper, }) t = template.Must(t.Parse(helpTmpl)) groups := make(map[string][]*flag.Flag) // Bucket flags into groups based on the first of their dot-separated levels. cfg.fs.VisitAll(func(fl *flag.Flag) { parts := strings.SplitN(fl.Name, ".", 2) if len(parts) == 1 { groups["."] = append(groups["."], fl) } else { name := parts[0] groups[name] = append(groups[name], fl) } }) for cat, fl := range groups { if len(fl) < 2 && cat != "." { groups["."] = append(groups["."], fl...) 
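// The category's lone flag has been folded into the top-level group, so drop the category itself to avoid printing the flag twice.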
delete(groups, cat) } } if err := t.Execute(os.Stdout, groups); err != nil { panic(fmt.Errorf("error executing usage template: %s", err)) } } prometheus-0.16.2+ds/cmd/prometheus/main.go000066400000000000000000000142761265137125100206230ustar00rootroot00000000000000// Copyright 2015 The Prometheus Authors // Licensed under the Apache License, Version 2.0 (the "License"); // you may not use this file except in compliance with the License. // You may obtain a copy of the License at // // http://www.apache.org/licenses/LICENSE-2.0 // // Unless required by applicable law or agreed to in writing, software // distributed under the License is distributed on an "AS IS" BASIS, // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. // See the License for the specific language governing permissions and // limitations under the License. // The main package for the Prometheus server executeable. package main import ( "bytes" "flag" "fmt" _ "net/http/pprof" // Comment this line to disable pprof endpoint. "os" "os/signal" "strings" "syscall" "text/template" "time" "github.com/prometheus/common/log" "github.com/prometheus/client_golang/prometheus" "github.com/prometheus/prometheus/config" "github.com/prometheus/prometheus/notification" "github.com/prometheus/prometheus/promql" "github.com/prometheus/prometheus/retrieval" "github.com/prometheus/prometheus/rules" "github.com/prometheus/prometheus/storage" "github.com/prometheus/prometheus/storage/local" "github.com/prometheus/prometheus/storage/remote" "github.com/prometheus/prometheus/version" "github.com/prometheus/prometheus/web" ) func main() { os.Exit(Main()) } var ( configSuccess = prometheus.NewGauge(prometheus.GaugeOpts{ Namespace: "prometheus", Name: "config_last_reload_successful", Help: "Whether the last configuration reload attempt was successful.", }) configSuccessTime = prometheus.NewGauge(prometheus.GaugeOpts{ Namespace: "prometheus", Name: "config_last_reload_success_timestamp_seconds", Help: "Timestamp of the last successful configuration reload.", }) ) // Main manages the startup and shutdown lifecycle of the entire Prometheus server. func Main() int { if err := parse(os.Args[1:]); err != nil { return 2 } printVersion() if cfg.printVersion { return 0 } var reloadables []Reloadable var ( memStorage = local.NewMemorySeriesStorage(&cfg.storage) remoteStorage = remote.New(&cfg.remote) sampleAppender = storage.Fanout{memStorage} ) if remoteStorage != nil { sampleAppender = append(sampleAppender, remoteStorage) reloadables = append(reloadables, remoteStorage) } var ( notificationHandler = notification.NewNotificationHandler(&cfg.notification) targetManager = retrieval.NewTargetManager(sampleAppender) queryEngine = promql.NewEngine(memStorage, &cfg.queryEngine) ) ruleManager := rules.NewManager(&rules.ManagerOptions{ SampleAppender: sampleAppender, NotificationHandler: notificationHandler, QueryEngine: queryEngine, ExternalURL: cfg.web.ExternalURL, }) flags := map[string]string{} cfg.fs.VisitAll(func(f *flag.Flag) { flags[f.Name] = f.Value.String() }) status := &web.PrometheusStatus{ TargetPools: targetManager.Pools, Rules: ruleManager.Rules, Flags: flags, Birth: time.Now(), } webHandler := web.New(memStorage, queryEngine, ruleManager, status, &cfg.web) reloadables = append(reloadables, status, targetManager, ruleManager, webHandler, notificationHandler) if !reloadConfig(cfg.configFile, reloadables...) { return 1 } // Wait for reload or termination signals. 
Start the handler for SIGHUP as // early as possible, but ignore it until we are ready to handle reloading // our config. hup := make(chan os.Signal) hupReady := make(chan bool) signal.Notify(hup, syscall.SIGHUP) go func() { <-hupReady for { select { case <-hup: case <-webHandler.Reload(): } reloadConfig(cfg.configFile, reloadables...) } }() // Start all components. if err := memStorage.Start(); err != nil { log.Errorln("Error opening memory series storage:", err) return 1 } defer func() { if err := memStorage.Stop(); err != nil { log.Errorln("Error stopping storage:", err) } }() if remoteStorage != nil { prometheus.MustRegister(remoteStorage) go remoteStorage.Run() defer remoteStorage.Stop() } // The storage has to be fully initialized before registering. prometheus.MustRegister(memStorage) prometheus.MustRegister(notificationHandler) prometheus.MustRegister(configSuccess) prometheus.MustRegister(configSuccessTime) go ruleManager.Run() defer ruleManager.Stop() go notificationHandler.Run() defer notificationHandler.Stop() go targetManager.Run() defer targetManager.Stop() defer queryEngine.Stop() go webHandler.Run() // Wait for reload or termination signals. close(hupReady) // Unblock SIGHUP handler. term := make(chan os.Signal) signal.Notify(term, os.Interrupt, syscall.SIGTERM) select { case <-term: log.Warn("Received SIGTERM, exiting gracefully...") case <-webHandler.Quit(): log.Warn("Received termination request via web service, exiting gracefully...") case err := <-webHandler.ListenError(): log.Errorln("Error starting web server, exiting gracefully:", err) } log.Info("See you next time!") return 0 } // Reloadable things can change their internal state to match a new config // and handle failure gracefully. type Reloadable interface { ApplyConfig(*config.Config) bool } func reloadConfig(filename string, rls ...Reloadable) (success bool) { log.Infof("Loading configuration file %s", filename) defer func() { if success { configSuccess.Set(1) configSuccessTime.Set(float64(time.Now().Unix())) } else { configSuccess.Set(0) } }() conf, err := config.LoadFile(filename) if err != nil { log.Errorf("Couldn't load configuration (-config.file=%s): %v", filename, err) return false } success = true for _, rl := range rls { success = success && rl.ApplyConfig(conf) } return success } var versionInfoTmpl = ` prometheus, version {{.version}} (branch: {{.branch}}, revision: {{.revision}}) build user: {{.buildUser}} build date: {{.buildDate}} go version: {{.goVersion}} ` func printVersion() { t := template.Must(template.New("version").Parse(versionInfoTmpl)) var buf bytes.Buffer if err := t.ExecuteTemplate(&buf, "version", version.Map); err != nil { panic(err) } fmt.Fprintln(os.Stdout, strings.TrimSpace(buf.String())) } prometheus-0.16.2+ds/cmd/promtool/000077500000000000000000000000001265137125100170165ustar00rootroot00000000000000prometheus-0.16.2+ds/cmd/promtool/main.go000066400000000000000000000130221265137125100202670ustar00rootroot00000000000000// Copyright 2015 The Prometheus Authors // Licensed under the Apache License, Version 2.0 (the "License"); // you may not use this file except in compliance with the License. // You may obtain a copy of the License at // // http://www.apache.org/licenses/LICENSE-2.0 // // Unless required by applicable law or agreed to in writing, software // distributed under the License is distributed on an "AS IS" BASIS, // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
// See the License for the specific language governing permissions and // limitations under the License. package main import ( "bytes" "fmt" "io/ioutil" "os" "path/filepath" "strings" "text/template" "github.com/prometheus/prometheus/config" "github.com/prometheus/prometheus/promql" "github.com/prometheus/prometheus/util/cli" "github.com/prometheus/prometheus/version" ) // CheckConfigCmd validates configuration files. func CheckConfigCmd(t cli.Term, args ...string) int { if len(args) == 0 { t.Infof("usage: promtool check-config ") return 2 } failed := false for _, arg := range args { ruleFiles, err := checkConfig(t, arg) if err != nil { t.Errorf(" FAILED: %s", err) failed = true } else { t.Infof(" SUCCESS: %d rule files found", len(ruleFiles)) } t.Infof("") for _, rf := range ruleFiles { if n, err := checkRules(t, rf); err != nil { t.Errorf(" FAILED: %s", err) failed = true } else { t.Infof(" SUCCESS: %d rules found", n) } t.Infof("") } } if failed { return 1 } return 0 } func checkFileExists(fn string) error { // Nothing set, nothing to error on. if fn == "" { return nil } _, err := os.Stat(fn) return err } func checkConfig(t cli.Term, filename string) ([]string, error) { t.Infof("Checking %s", filename) if stat, err := os.Stat(filename); err != nil { return nil, fmt.Errorf("cannot get file info") } else if stat.IsDir() { return nil, fmt.Errorf("is a directory") } cfg, err := config.LoadFile(filename) if err != nil { return nil, err } var ruleFiles []string for _, rf := range cfg.RuleFiles { rfs, err := filepath.Glob(rf) if err != nil { return nil, err } // If an explicit file was given, error if it is not accessible. if !strings.Contains(rf, "*") { if len(rfs) == 0 { return nil, fmt.Errorf("%q does not point to an existing file", rf) } if err := checkFileExists(rfs[0]); err != nil { return nil, fmt.Errorf("error checking rule file %q: %s", rfs[0], err) } } ruleFiles = append(ruleFiles, rfs...) } for _, scfg := range cfg.ScrapeConfigs { if err := checkFileExists(scfg.BearerTokenFile); err != nil { return nil, fmt.Errorf("error checking bearer token file %q: %s", scfg.BearerTokenFile, err) } if err := checkTLSConfig(scfg.TLSConfig); err != nil { return nil, err } for _, kd := range scfg.KubernetesSDConfigs { if err := checkTLSConfig(kd.TLSConfig); err != nil { return nil, err } } } return ruleFiles, nil } func checkTLSConfig(tlsConfig config.TLSConfig) error { if err := checkFileExists(tlsConfig.CertFile); err != nil { return fmt.Errorf("error checking client cert file %q: %s", tlsConfig.CertFile, err) } if err := checkFileExists(tlsConfig.KeyFile); err != nil { return fmt.Errorf("error checking client key file %q: %s", tlsConfig.KeyFile, err) } if len(tlsConfig.CertFile) > 0 && len(tlsConfig.KeyFile) == 0 { return fmt.Errorf("client cert file %q specified without client key file", tlsConfig.CertFile) } if len(tlsConfig.KeyFile) > 0 && len(tlsConfig.CertFile) == 0 { return fmt.Errorf("client key file %q specified without client cert file", tlsConfig.KeyFile) } return nil } // CheckRulesCmd validates rule files. 
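// It returns 0 if all rule files parse, 1 if any check fails, and 2 when no file arguments are given. A typical invocation (file names are hypothetical): // promtool check-rules ./first.rules ./second.rules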
func CheckRulesCmd(t cli.Term, args ...string) int { if len(args) == 0 { t.Infof("usage: promtool check-rules ") return 2 } failed := false for _, arg := range args { if n, err := checkRules(t, arg); err != nil { t.Errorf(" FAILED: %s", err) failed = true } else { t.Infof(" SUCCESS: %d rules found", n) } t.Infof("") } if failed { return 1 } return 0 } func checkRules(t cli.Term, filename string) (int, error) { t.Infof("Checking %s", filename) if stat, err := os.Stat(filename); err != nil { return 0, fmt.Errorf("cannot get file info") } else if stat.IsDir() { return 0, fmt.Errorf("is a directory") } content, err := ioutil.ReadFile(filename) if err != nil { return 0, err } rules, err := promql.ParseStmts(string(content)) if err != nil { return 0, err } return len(rules), nil } var versionInfoTmpl = ` prometheus, version {{.version}} (branch: {{.branch}}, revision: {{.revision}}) build user: {{.buildUser}} build date: {{.buildDate}} go version: {{.goVersion}} ` // VersionCmd prints the binaries version information. func VersionCmd(t cli.Term, _ ...string) int { tmpl := template.Must(template.New("version").Parse(versionInfoTmpl)) var buf bytes.Buffer if err := tmpl.ExecuteTemplate(&buf, "version", version.Map); err != nil { panic(err) } t.Out(strings.TrimSpace(buf.String())) return 0 } func main() { app := cli.NewApp("promtool") app.Register("check-config", &cli.Command{ Desc: "validate configuration files for correctness", Run: CheckConfigCmd, }) app.Register("check-rules", &cli.Command{ Desc: "validate rule files for correctness", Run: CheckRulesCmd, }) app.Register("version", &cli.Command{ Desc: "print the version of this binary", Run: VersionCmd, }) t := cli.BasicTerm(os.Stdout, os.Stderr) os.Exit(app.Run(t, os.Args[1:]...)) } prometheus-0.16.2+ds/config/000077500000000000000000000000001265137125100156455ustar00rootroot00000000000000prometheus-0.16.2+ds/config/config.go000066400000000000000000000676571265137125100174660ustar00rootroot00000000000000// Copyright 2015 The Prometheus Authors // Licensed under the Apache License, Version 2.0 (the "License"); // you may not use this file except in compliance with the License. // You may obtain a copy of the License at // // http://www.apache.org/licenses/LICENSE-2.0 // // Unless required by applicable law or agreed to in writing, software // distributed under the License is distributed on an "AS IS" BASIS, // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. // See the License for the specific language governing permissions and // limitations under the License. package config import ( "encoding/json" "fmt" "io/ioutil" "net/url" "path/filepath" "regexp" "strings" "time" "github.com/prometheus/common/model" "gopkg.in/yaml.v2" "github.com/prometheus/prometheus/util/strutil" ) var ( patJobName = regexp.MustCompile(`^[a-zA-Z_][a-zA-Z0-9_-]*$`) patFileSDName = regexp.MustCompile(`^[^*]*(\*[^/]*)?\.(json|yml|yaml|JSON|YML|YAML)$`) patRulePath = regexp.MustCompile(`^[^*]*(\*[^/]*)?$`) patAuthLine = regexp.MustCompile(`((?:password|bearer_token|secret_key):\s+)(".+"|'.+'|[^\s]+)`) ) // Load parses the YAML input s into a Config. func Load(s string) (*Config, error) { cfg := &Config{} // If the entire config body is empty the UnmarshalYAML method is // never called. We thus have to set the DefaultConfig at the entry // point as well. *cfg = DefaultConfig err := yaml.Unmarshal([]byte(s), cfg) if err != nil { return nil, err } cfg.original = s return cfg, nil } // LoadFile parses the given YAML file into a Config. 
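// Relative rule-file and credential paths in the result are resolved against the directory of filename (see resolveFilepaths). A minimal sketch of use from a calling package, error handling elided: // cfg, err := config.LoadFile("prometheus.yml")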
func LoadFile(filename string) (*Config, error) { content, err := ioutil.ReadFile(filename) if err != nil { return nil, err } cfg, err := Load(string(content)) if err != nil { return nil, err } resolveFilepaths(filepath.Dir(filename), cfg) return cfg, nil } // The defaults applied before parsing the respective config sections. var ( // DefaultConfig is the default top-level configuration. DefaultConfig = Config{ GlobalConfig: DefaultGlobalConfig, } // DefaultGlobalConfig is the default global configuration. DefaultGlobalConfig = GlobalConfig{ ScrapeInterval: Duration(1 * time.Minute), ScrapeTimeout: Duration(10 * time.Second), EvaluationInterval: Duration(1 * time.Minute), } // DefaultScrapeConfig is the default scrape configuration. DefaultScrapeConfig = ScrapeConfig{ // ScrapeTimeout and ScrapeInterval default to the // configured globals. MetricsPath: "/metrics", Scheme: "http", HonorLabels: false, } // DefaultRelabelConfig is the default Relabel configuration. DefaultRelabelConfig = RelabelConfig{ Action: RelabelReplace, Separator: ";", } // DefaultDNSSDConfig is the default DNS SD configuration. DefaultDNSSDConfig = DNSSDConfig{ RefreshInterval: Duration(30 * time.Second), Type: "SRV", } // DefaultFileSDConfig is the default file SD configuration. DefaultFileSDConfig = FileSDConfig{ RefreshInterval: Duration(5 * time.Minute), } // DefaultConsulSDConfig is the default Consul SD configuration. DefaultConsulSDConfig = ConsulSDConfig{ TagSeparator: ",", Scheme: "http", } // DefaultServersetSDConfig is the default Serverset SD configuration. DefaultServersetSDConfig = ServersetSDConfig{ Timeout: Duration(10 * time.Second), } // DefaultMarathonSDConfig is the default Marathon SD configuration. DefaultMarathonSDConfig = MarathonSDConfig{ RefreshInterval: Duration(30 * time.Second), } // DefaultKubernetesSDConfig is the default Kubernetes SD configuration DefaultKubernetesSDConfig = KubernetesSDConfig{ KubeletPort: 10255, RequestTimeout: Duration(10 * time.Second), RetryInterval: Duration(1 * time.Second), } // DefaultEC2SDConfig is the default EC2 SD configuration. DefaultEC2SDConfig = EC2SDConfig{ Port: 80, RefreshInterval: Duration(60 * time.Second), } ) // URL is a custom URL type that allows validation at configuration load time. type URL struct { *url.URL } // UnmarshalYAML implements the yaml.Unmarshaler interface for URLs. func (u *URL) UnmarshalYAML(unmarshal func(interface{}) error) error { var s string if err := unmarshal(&s); err != nil { return err } urlp, err := url.Parse(s) if err != nil { return err } u.URL = urlp return nil } // MarshalYAML implements the yaml.Marshaler interface for URLs. func (u URL) MarshalYAML() (interface{}, error) { if u.URL != nil { return u.String(), nil } return nil, nil } // Config is the top-level configuration for Prometheus's config files. type Config struct { GlobalConfig GlobalConfig `yaml:"global"` RuleFiles []string `yaml:"rule_files,omitempty"` ScrapeConfigs []*ScrapeConfig `yaml:"scrape_configs,omitempty"` // Catches all undefined fields and must be empty after parsing. XXX map[string]interface{} `yaml:",inline"` // original is the input from which the config was parsed. original string } // resolveFilepaths joins all relative paths in a configuration // with a given base directory. 
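// For example, with baseDir "/etc/prometheus" a rule file entry "first.rules" becomes "/etc/prometheus/first.rules", while an absolute entry such as "/absolute/second.rules" is left untouched.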
func resolveFilepaths(baseDir string, cfg *Config) { join := func(fp string) string { if len(fp) > 0 && !filepath.IsAbs(fp) { fp = filepath.Join(baseDir, fp) } return fp } for i, rf := range cfg.RuleFiles { cfg.RuleFiles[i] = join(rf) } for _, scfg := range cfg.ScrapeConfigs { scfg.BearerTokenFile = join(scfg.BearerTokenFile) scfg.TLSConfig.CAFile = join(scfg.TLSConfig.CAFile) scfg.TLSConfig.CertFile = join(scfg.TLSConfig.CertFile) scfg.TLSConfig.KeyFile = join(scfg.TLSConfig.KeyFile) for _, kcfg := range scfg.KubernetesSDConfigs { kcfg.BearerTokenFile = join(kcfg.BearerTokenFile) kcfg.TLSConfig.CAFile = join(kcfg.TLSConfig.CAFile) kcfg.TLSConfig.CertFile = join(kcfg.TLSConfig.CertFile) kcfg.TLSConfig.KeyFile = join(kcfg.TLSConfig.KeyFile) } } } func checkOverflow(m map[string]interface{}, ctx string) error { if len(m) > 0 { var keys []string for k := range m { keys = append(keys, k) } return fmt.Errorf("unknown fields in %s: %s", ctx, strings.Join(keys, ", ")) } return nil } func (c Config) String() string { var s string if c.original != "" { s = c.original } else { b, err := yaml.Marshal(c) if err != nil { return fmt.Sprintf("", err) } s = string(b) } return patAuthLine.ReplaceAllString(s, "${1}") } // UnmarshalYAML implements the yaml.Unmarshaler interface. func (c *Config) UnmarshalYAML(unmarshal func(interface{}) error) error { *c = DefaultConfig // We want to set c to the defaults and then overwrite it with the input. // To make unmarshal fill the plain data struct rather than calling UnmarshalYAML // again, we have to hide it using a type indirection. type plain Config if err := unmarshal((*plain)(c)); err != nil { return err } // If a global block was open but empty the default global config is overwritten. // We have to restore it here. if c.GlobalConfig.isZero() { c.GlobalConfig = DefaultGlobalConfig } for _, rf := range c.RuleFiles { if !patRulePath.MatchString(rf) { return fmt.Errorf("invalid rule file path %q", rf) } } // Do global overrides and validate unique names. jobNames := map[string]struct{}{} for _, scfg := range c.ScrapeConfigs { if scfg.ScrapeInterval == 0 { scfg.ScrapeInterval = c.GlobalConfig.ScrapeInterval } if scfg.ScrapeTimeout == 0 { scfg.ScrapeTimeout = c.GlobalConfig.ScrapeTimeout } if _, ok := jobNames[scfg.JobName]; ok { return fmt.Errorf("found multiple scrape configs with job name %q", scfg.JobName) } jobNames[scfg.JobName] = struct{}{} } return checkOverflow(c.XXX, "config") } // GlobalConfig configures values that are used across other configuration // objects. type GlobalConfig struct { // How frequently to scrape targets by default. ScrapeInterval Duration `yaml:"scrape_interval,omitempty"` // The default timeout when scraping targets. ScrapeTimeout Duration `yaml:"scrape_timeout,omitempty"` // How frequently to evaluate rules by default. EvaluationInterval Duration `yaml:"evaluation_interval,omitempty"` // The labels to add to any timeseries that this Prometheus instance scrapes. ExternalLabels model.LabelSet `yaml:"external_labels,omitempty"` // Catches all undefined fields and must be empty after parsing. XXX map[string]interface{} `yaml:",inline"` } // UnmarshalYAML implements the yaml.Unmarshaler interface. func (c *GlobalConfig) UnmarshalYAML(unmarshal func(interface{}) error) error { *c = DefaultGlobalConfig type plain GlobalConfig if err := unmarshal((*plain)(c)); err != nil { return err } return checkOverflow(c.XXX, "global config") } // isZero returns true iff the global config is the zero value. 
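// Config.UnmarshalYAML uses it to detect a "global:" block that was present but empty, in which case the defaults were overwritten and must be restored.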
func (c *GlobalConfig) isZero() bool { return c.ExternalLabels == nil && c.ScrapeInterval == 0 && c.ScrapeTimeout == 0 && c.EvaluationInterval == 0 } // TLSConfig configures the options for TLS connections. type TLSConfig struct { // The CA cert to use for the targets. CAFile string `yaml:"ca_file,omitempty"` // The client cert file for the targets. CertFile string `yaml:"cert_file,omitempty"` // The client key file for the targets. KeyFile string `yaml:"key_file,omitempty"` // Disable target certificate validation. InsecureSkipVerify bool `yaml:"insecure_skip_verify"` // Catches all undefined fields and must be empty after parsing. XXX map[string]interface{} `yaml:",inline"` } // UnmarshalYAML implements the yaml.Unmarshaler interface. func (c *TLSConfig) UnmarshalYAML(unmarshal func(interface{}) error) error { type plain TLSConfig if err := unmarshal((*plain)(c)); err != nil { return err } return checkOverflow(c.XXX, "TLS config") } // ScrapeConfig configures a scraping unit for Prometheus. type ScrapeConfig struct { // The job name to which the job label is set by default. JobName string `yaml:"job_name"` // Indicator whether the scraped metrics should remain unmodified. HonorLabels bool `yaml:"honor_labels,omitempty"` // A set of query parameters with which the target is scraped. Params url.Values `yaml:"params,omitempty"` // How frequently to scrape the targets of this scrape config. ScrapeInterval Duration `yaml:"scrape_interval,omitempty"` // The timeout for scraping targets of this config. ScrapeTimeout Duration `yaml:"scrape_timeout,omitempty"` // The HTTP resource path on which to fetch metrics from targets. MetricsPath string `yaml:"metrics_path,omitempty"` // The URL scheme with which to fetch metrics from targets. Scheme string `yaml:"scheme,omitempty"` // The HTTP basic authentication credentials for the targets. BasicAuth *BasicAuth `yaml:"basic_auth,omitempty"` // The bearer token for the targets. BearerToken string `yaml:"bearer_token,omitempty"` // The bearer token file for the targets. BearerTokenFile string `yaml:"bearer_token_file,omitempty"` // HTTP proxy server to use to connect to the targets. ProxyURL URL `yaml:"proxy_url,omitempty"` // TLSConfig to use to connect to the targets. TLSConfig TLSConfig `yaml:"tls_config,omitempty"` // List of labeled target groups for this job. TargetGroups []*TargetGroup `yaml:"target_groups,omitempty"` // List of DNS service discovery configurations. DNSSDConfigs []*DNSSDConfig `yaml:"dns_sd_configs,omitempty"` // List of file service discovery configurations. FileSDConfigs []*FileSDConfig `yaml:"file_sd_configs,omitempty"` // List of Consul service discovery configurations. ConsulSDConfigs []*ConsulSDConfig `yaml:"consul_sd_configs,omitempty"` // List of Serverset service discovery configurations. ServersetSDConfigs []*ServersetSDConfig `yaml:"serverset_sd_configs,omitempty"` // MarathonSDConfigs is a list of Marathon service discovery configurations. MarathonSDConfigs []*MarathonSDConfig `yaml:"marathon_sd_configs,omitempty"` // List of Kubernetes service discovery configurations. KubernetesSDConfigs []*KubernetesSDConfig `yaml:"kubernetes_sd_configs,omitempty"` // List of EC2 service discovery configurations. EC2SDConfigs []*EC2SDConfig `yaml:"ec2_sd_configs,omitempty"` // List of target relabel configurations. RelabelConfigs []*RelabelConfig `yaml:"relabel_configs,omitempty"` // List of metric relabel configurations. 
MetricRelabelConfigs []*RelabelConfig `yaml:"metric_relabel_configs,omitempty"` // Catches all undefined fields and must be empty after parsing. XXX map[string]interface{} `yaml:",inline"` } // UnmarshalYAML implements the yaml.Unmarshaler interface. func (c *ScrapeConfig) UnmarshalYAML(unmarshal func(interface{}) error) error { *c = DefaultScrapeConfig type plain ScrapeConfig err := unmarshal((*plain)(c)) if err != nil { return err } if !patJobName.MatchString(c.JobName) { return fmt.Errorf("%q is not a valid job name", c.JobName) } if len(c.BearerToken) > 0 && len(c.BearerTokenFile) > 0 { return fmt.Errorf("at most one of bearer_token & bearer_token_file must be configured") } if c.BasicAuth != nil && (len(c.BearerToken) > 0 || len(c.BearerTokenFile) > 0) { return fmt.Errorf("at most one of basic_auth, bearer_token & bearer_token_file must be configured") } // Check for users putting URLs in target groups. if len(c.RelabelConfigs) == 0 { for _, tg := range c.TargetGroups { for _, t := range tg.Targets { if err = CheckTargetAddress(t[model.AddressLabel]); err != nil { return err } } } } return checkOverflow(c.XXX, "scrape_config") } // CheckTargetAddress checks if target address is valid. func CheckTargetAddress(address model.LabelValue) error { // For now check for a URL, we may want to expand this later. if strings.Contains(string(address), "/") { return fmt.Errorf("%q is not a valid hostname", address) } return nil } // BasicAuth contains basic HTTP authentication credentials. type BasicAuth struct { Username string `yaml:"username"` Password string `yaml:"password"` // Catches all undefined fields and must be empty after parsing. XXX map[string]interface{} `yaml:",inline"` } // ClientCert contains client cert credentials. type ClientCert struct { Cert string `yaml:"cert"` Key string `yaml:"key"` // Catches all undefined fields and must be empty after parsing. XXX map[string]interface{} `yaml:",inline"` } // UnmarshalYAML implements the yaml.Unmarshaler interface. func (a *BasicAuth) UnmarshalYAML(unmarshal func(interface{}) error) error { type plain BasicAuth err := unmarshal((*plain)(a)) if err != nil { return err } return checkOverflow(a.XXX, "basic_auth") } // TargetGroup is a set of targets with a common label set. type TargetGroup struct { // Targets is a list of targets identified by a label set. Each target is // uniquely identifiable in the group by its address label. Targets []model.LabelSet // Labels is a set of labels that is common across all targets in the group. Labels model.LabelSet // Source is an identifier that describes a group of targets. Source string } func (tg TargetGroup) String() string { return tg.Source } // UnmarshalYAML implements the yaml.Unmarshaler interface. func (tg *TargetGroup) UnmarshalYAML(unmarshal func(interface{}) error) error { g := struct { Targets []string `yaml:"targets"` Labels model.LabelSet `yaml:"labels"` XXX map[string]interface{} `yaml:",inline"` }{} if err := unmarshal(&g); err != nil { return err } tg.Targets = make([]model.LabelSet, 0, len(g.Targets)) for _, t := range g.Targets { tg.Targets = append(tg.Targets, model.LabelSet{ model.AddressLabel: model.LabelValue(t), }) } tg.Labels = g.Labels return checkOverflow(g.XXX, "target_group") } // MarshalYAML implements the yaml.Marshaler interface. 
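// A group marshals back to an equivalent of its input form, e.g. (values illustrative): // targets: ['localhost:9090', 'localhost:9191'] // labels: {my: label}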
func (tg TargetGroup) MarshalYAML() (interface{}, error) { g := &struct { Targets []string `yaml:"targets"` Labels model.LabelSet `yaml:"labels,omitempty"` }{ Targets: make([]string, 0, len(tg.Targets)), Labels: tg.Labels, } for _, t := range tg.Targets { g.Targets = append(g.Targets, string(t[model.AddressLabel])) } return g, nil } // UnmarshalJSON implements the json.Unmarshaler interface. func (tg *TargetGroup) UnmarshalJSON(b []byte) error { g := struct { Targets []string `json:"targets"` Labels model.LabelSet `json:"labels"` }{} if err := json.Unmarshal(b, &g); err != nil { return err } tg.Targets = make([]model.LabelSet, 0, len(g.Targets)) for _, t := range g.Targets { if strings.Contains(t, "/") { return fmt.Errorf("%q is not a valid hostname", t) } tg.Targets = append(tg.Targets, model.LabelSet{ model.AddressLabel: model.LabelValue(t), }) } tg.Labels = g.Labels return nil } // DNSSDConfig is the configuration for DNS based service discovery. type DNSSDConfig struct { Names []string `yaml:"names"` RefreshInterval Duration `yaml:"refresh_interval,omitempty"` Type string `yaml:"type"` Port int `yaml:"port"` // Ignored for SRV records // Catches all undefined fields and must be empty after parsing. XXX map[string]interface{} `yaml:",inline"` } // UnmarshalYAML implements the yaml.Unmarshaler interface. func (c *DNSSDConfig) UnmarshalYAML(unmarshal func(interface{}) error) error { *c = DefaultDNSSDConfig type plain DNSSDConfig err := unmarshal((*plain)(c)) if err != nil { return err } if len(c.Names) == 0 { return fmt.Errorf("DNS-SD config must contain at least one SRV record name") } switch strings.ToUpper(c.Type) { case "SRV": case "A", "AAAA": if c.Port == 0 { return fmt.Errorf("a port is required in DNS-SD configs for all record types except SRV") } default: return fmt.Errorf("invalid DNS-SD records type %s", c.Type) } return checkOverflow(c.XXX, "dns_sd_config") } // FileSDConfig is the configuration for file based discovery. type FileSDConfig struct { Names []string `yaml:"names"` RefreshInterval Duration `yaml:"refresh_interval,omitempty"` // Catches all undefined fields and must be empty after parsing. XXX map[string]interface{} `yaml:",inline"` } // UnmarshalYAML implements the yaml.Unmarshaler interface. func (c *FileSDConfig) UnmarshalYAML(unmarshal func(interface{}) error) error { *c = DefaultFileSDConfig type plain FileSDConfig err := unmarshal((*plain)(c)) if err != nil { return err } if len(c.Names) == 0 { return fmt.Errorf("file service discovery config must contain at least one path name") } for _, name := range c.Names { if !patFileSDName.MatchString(name) { return fmt.Errorf("path name %q is not valid for file discovery", name) } } return checkOverflow(c.XXX, "file_sd_config") } // ConsulSDConfig is the configuration for Consul service discovery. type ConsulSDConfig struct { Server string `yaml:"server"` Token string `yaml:"token,omitempty"` Datacenter string `yaml:"datacenter,omitempty"` TagSeparator string `yaml:"tag_separator,omitempty"` Scheme string `yaml:"scheme,omitempty"` Username string `yaml:"username,omitempty"` Password string `yaml:"password,omitempty"` // The list of services for which targets are discovered. // Defaults to all services if empty. Services []string `yaml:"services"` // Catches all undefined fields and must be empty after parsing. XXX map[string]interface{} `yaml:",inline"` } // UnmarshalYAML implements the yaml.Unmarshaler interface. 
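// A minimal consul_sd_config needs only the server address; a sketch (the address is hypothetical): // consul_sd_configs: // - server: 'localhost:8500' // services: ['nginx'] # optional; defaults to all services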
func (c *ConsulSDConfig) UnmarshalYAML(unmarshal func(interface{}) error) error { *c = DefaultConsulSDConfig type plain ConsulSDConfig err := unmarshal((*plain)(c)) if err != nil { return err } if strings.TrimSpace(c.Server) == "" { return fmt.Errorf("Consul SD configuration requires a server address") } return checkOverflow(c.XXX, "consul_sd_config") } // ServersetSDConfig is the configuration for Twitter serversets in Zookeeper based discovery. type ServersetSDConfig struct { Servers []string `yaml:"servers"` Paths []string `yaml:"paths"` Timeout Duration `yaml:"timeout,omitempty"` // Catches all undefined fields and must be empty after parsing. XXX map[string]interface{} `yaml:",inline"` } // UnmarshalYAML implements the yaml.Unmarshaler interface. func (c *ServersetSDConfig) UnmarshalYAML(unmarshal func(interface{}) error) error { *c = DefaultServersetSDConfig type plain ServersetSDConfig err := unmarshal((*plain)(c)) if err != nil { return err } if len(c.Servers) == 0 { return fmt.Errorf("serverset SD config must contain at least one Zookeeper server") } if len(c.Paths) == 0 { return fmt.Errorf("serverset SD config must contain at least one path") } for _, path := range c.Paths { if !strings.HasPrefix(path, "/") { return fmt.Errorf("serverset SD config paths must begin with '/': %s", path) } } return checkOverflow(c.XXX, "serverset_sd_config") } // MarathonSDConfig is the configuration for services running on Marathon. type MarathonSDConfig struct { Servers []string `yaml:"servers,omitempty"` RefreshInterval Duration `yaml:"refresh_interval,omitempty"` // Catches all undefined fields and must be empty after parsing. XXX map[string]interface{} `yaml:",inline"` } // UnmarshalYAML implements the yaml.Unmarshaler interface. func (c *MarathonSDConfig) UnmarshalYAML(unmarshal func(interface{}) error) error { *c = DefaultMarathonSDConfig type plain MarathonSDConfig err := unmarshal((*plain)(c)) if err != nil { return err } if len(c.Servers) == 0 { return fmt.Errorf("Marathon SD config must contain at least one Marathon server") } return checkOverflow(c.XXX, "marathon_sd_config") } // KubernetesSDConfig is the configuration for Kubernetes service discovery. type KubernetesSDConfig struct { APIServers []URL `yaml:"api_servers"` KubeletPort int `yaml:"kubelet_port,omitempty"` InCluster bool `yaml:"in_cluster,omitempty"` BasicAuth *BasicAuth `yaml:"basic_auth,omitempty"` BearerToken string `yaml:"bearer_token,omitempty"` BearerTokenFile string `yaml:"bearer_token_file,omitempty"` RetryInterval Duration `yaml:"retry_interval,omitempty"` RequestTimeout Duration `yaml:"request_timeout,omitempty"` TLSConfig TLSConfig `yaml:"tls_config,omitempty"` // Catches all undefined fields and must be empty after parsing. XXX map[string]interface{} `yaml:",inline"` } // UnmarshalYAML implements the yaml.Unmarshaler interface. 
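// At least one api_servers entry is required, and at most one of basic_auth, bearer_token and bearer_token_file may be set. A sketch (the address is hypothetical): // kubernetes_sd_configs: // - api_servers: // - 'https://localhost:1234'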
func (c *KubernetesSDConfig) UnmarshalYAML(unmarshal func(interface{}) error) error { *c = DefaultKubernetesSDConfig type plain KubernetesSDConfig err := unmarshal((*plain)(c)) if err != nil { return err } if len(c.APIServers) == 0 { return fmt.Errorf("Kubernetes SD configuration requires at least one Kubernetes API server") } if len(c.BearerToken) > 0 && len(c.BearerTokenFile) > 0 { return fmt.Errorf("at most one of bearer_token & bearer_token_file must be configured") } if c.BasicAuth != nil && (len(c.BearerToken) > 0 || len(c.BearerTokenFile) > 0) { return fmt.Errorf("at most one of basic_auth, bearer_token & bearer_token_file must be configured") } return checkOverflow(c.XXX, "kubernetes_sd_config") } // EC2SDConfig is the configuration for EC2 based service discovery. type EC2SDConfig struct { Region string `yaml:"region"` AccessKey string `yaml:"access_key,omitempty"` SecretKey string `yaml:"secret_key,omitempty"` RefreshInterval Duration `yaml:"refresh_interval,omitempty"` Port int `yaml:"port"` // Catches all undefined fields and must be empty after parsing. XXX map[string]interface{} `yaml:",inline"` } // UnmarshalYAML implements the yaml.Unmarshaler interface. func (c *EC2SDConfig) UnmarshalYAML(unmarshal func(interface{}) error) error { *c = DefaultEC2SDConfig type plain EC2SDConfig err := unmarshal((*plain)(c)) if err != nil { return err } if c.Region == "" { return fmt.Errorf("EC2 SD configuration requires a region") } return checkOverflow(c.XXX, "ec2_sd_config") } // RelabelAction is the action to be performed on relabeling. type RelabelAction string const ( // RelabelReplace performs a regex replacement. RelabelReplace RelabelAction = "replace" // RelabelKeep drops targets for which the input does not match the regex. RelabelKeep RelabelAction = "keep" // RelabelDrop drops targets for which the input does match the regex. RelabelDrop RelabelAction = "drop" // RelabelHashMod sets a label to the modulus of a hash of labels. RelabelHashMod RelabelAction = "hashmod" // RelabelLabelMap copies labels to other labelnames based on a regex. RelabelLabelMap RelabelAction = "labelmap" ) // UnmarshalYAML implements the yaml.Unmarshaler interface. func (a *RelabelAction) UnmarshalYAML(unmarshal func(interface{}) error) error { var s string if err := unmarshal(&s); err != nil { return err } switch act := RelabelAction(strings.ToLower(s)); act { case RelabelReplace, RelabelKeep, RelabelDrop, RelabelHashMod, RelabelLabelMap: *a = act return nil } return fmt.Errorf("unknown relabel action %q", s) } // RelabelConfig is the configuration for relabeling of target label sets. type RelabelConfig struct { // A list of labels from which values are taken and concatenated // with the configured separator in order. SourceLabels model.LabelNames `yaml:"source_labels,flow"` // Separator is the string between concatenated values from the source labels. Separator string `yaml:"separator,omitempty"` // Regex against which the concatenation is matched. Regex *Regexp `yaml:"regex,omitempty"` // Modulus to take of the hash of concatenated values from the source labels. Modulus uint64 `yaml:"modulus,omitempty"` // The label to which the resulting string is written in a replacement. TargetLabel model.LabelName `yaml:"target_label,omitempty"` // Replacement is the regex replacement pattern to be used. Replacement string `yaml:"replacement,omitempty"` // Action is the action to be performed for the relabeling. 
Action RelabelAction `yaml:"action,omitempty"` // Catches all undefined fields and must be empty after parsing. XXX map[string]interface{} `yaml:",inline"` } // UnmarshalYAML implements the yaml.Unmarshaler interface. func (c *RelabelConfig) UnmarshalYAML(unmarshal func(interface{}) error) error { *c = DefaultRelabelConfig type plain RelabelConfig if err := unmarshal((*plain)(c)); err != nil { return err } if c.Regex == nil && c.Action != RelabelHashMod { return fmt.Errorf("relabel configuration requires a regular expression") } if c.Modulus == 0 && c.Action == RelabelHashMod { return fmt.Errorf("relabel configuration for hashmod requires non-zero modulus") } return checkOverflow(c.XXX, "relabel_config") } // Regexp encapsulates a regexp.Regexp and makes it YAML marshallable. type Regexp struct { regexp.Regexp original string } // NewRegexp creates a new anchored Regexp and returns an error if the // passed-in regular expression does not compile. func NewRegexp(s string) (*Regexp, error) { regex, err := regexp.Compile("^(?:" + s + ")$") if err != nil { return nil, err } return &Regexp{ Regexp: *regex, original: s, }, nil } // MustNewRegexp works like NewRegexp, but panics if the regular expression does not compile. func MustNewRegexp(s string) *Regexp { re, err := NewRegexp(s) if err != nil { panic(err) } return re } // UnmarshalYAML implements the yaml.Unmarshaler interface. func (re *Regexp) UnmarshalYAML(unmarshal func(interface{}) error) error { var s string if err := unmarshal(&s); err != nil { return err } r, err := NewRegexp(s) if err != nil { return err } *re = *r return nil } // MarshalYAML implements the yaml.Marshaler interface. func (re *Regexp) MarshalYAML() (interface{}, error) { if re != nil { return re.original, nil } return nil, nil } // Duration encapsulates a time.Duration and makes it YAML marshallable. // // TODO(fabxc): Since we have custom types for most things, including timestamps, // we might want to move this into our model as well, eventually. type Duration time.Duration // UnmarshalYAML implements the yaml.Unmarshaler interface. func (d *Duration) UnmarshalYAML(unmarshal func(interface{}) error) error { var s string if err := unmarshal(&s); err != nil { return err } dur, err := strutil.StringToDuration(s) if err != nil { return err } *d = Duration(dur) return nil } // MarshalYAML implements the yaml.Marshaler interface. func (d Duration) MarshalYAML() (interface{}, error) { return strutil.DurationToString(time.Duration(d)), nil } prometheus-0.16.2+ds/config/config_test.go000066400000000000000000000240231265137125100205010ustar00rootroot00000000000000// Copyright 2015 The Prometheus Authors // Licensed under the Apache License, Version 2.0 (the "License"); // you may not use this file except in compliance with the License. // You may obtain a copy of the License at // // http://www.apache.org/licenses/LICENSE-2.0 // // Unless required by applicable law or agreed to in writing, software // distributed under the License is distributed on an "AS IS" BASIS, // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. // See the License for the specific language governing permissions and // limitations under the License. 
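// The tests below parse testdata/conf.good.yml and compare the result against expectedConf, and verify that each *.bad.yml fixture fails with its expected error message.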
package config import ( "encoding/json" "io/ioutil" "net/url" "reflect" "strings" "testing" "time" "github.com/prometheus/common/model" "gopkg.in/yaml.v2" ) var expectedConf = &Config{ GlobalConfig: GlobalConfig{ ScrapeInterval: Duration(15 * time.Second), ScrapeTimeout: DefaultGlobalConfig.ScrapeTimeout, EvaluationInterval: Duration(30 * time.Second), ExternalLabels: model.LabelSet{ "monitor": "codelab", "foo": "bar", }, }, RuleFiles: []string{ "testdata/first.rules", "/absolute/second.rules", "testdata/my/*.rules", }, ScrapeConfigs: []*ScrapeConfig{ { JobName: "prometheus", HonorLabels: true, ScrapeInterval: Duration(15 * time.Second), ScrapeTimeout: DefaultGlobalConfig.ScrapeTimeout, MetricsPath: DefaultScrapeConfig.MetricsPath, Scheme: DefaultScrapeConfig.Scheme, BearerTokenFile: "testdata/valid_token_file", TargetGroups: []*TargetGroup{ { Targets: []model.LabelSet{ {model.AddressLabel: "localhost:9090"}, {model.AddressLabel: "localhost:9191"}, }, Labels: model.LabelSet{ "my": "label", "your": "label", }, }, }, FileSDConfigs: []*FileSDConfig{ { Names: []string{"foo/*.slow.json", "foo/*.slow.yml", "single/file.yml"}, RefreshInterval: Duration(10 * time.Minute), }, { Names: []string{"bar/*.yaml"}, RefreshInterval: Duration(5 * time.Minute), }, }, RelabelConfigs: []*RelabelConfig{ { SourceLabels: model.LabelNames{"job", "__meta_dns_srv_name"}, TargetLabel: "job", Separator: ";", Regex: MustNewRegexp("(.*)some-[regex]"), Replacement: "foo-${1}", Action: RelabelReplace, }, }, }, { JobName: "service-x", ScrapeInterval: Duration(50 * time.Second), ScrapeTimeout: Duration(5 * time.Second), BasicAuth: &BasicAuth{ Username: "admin_name", Password: "admin_password", }, MetricsPath: "/my_path", Scheme: "https", DNSSDConfigs: []*DNSSDConfig{ { Names: []string{ "first.dns.address.domain.com", "second.dns.address.domain.com", }, RefreshInterval: Duration(15 * time.Second), Type: "SRV", }, { Names: []string{ "first.dns.address.domain.com", }, RefreshInterval: Duration(30 * time.Second), Type: "SRV", }, }, RelabelConfigs: []*RelabelConfig{ { SourceLabels: model.LabelNames{"job"}, Regex: MustNewRegexp("(.*)some-[regex]"), Separator: ";", Action: RelabelDrop, }, { SourceLabels: model.LabelNames{"__address__"}, TargetLabel: "__tmp_hash", Modulus: 8, Separator: ";", Action: RelabelHashMod, }, { SourceLabels: model.LabelNames{"__tmp_hash"}, Regex: MustNewRegexp("1"), Separator: ";", Action: RelabelKeep, }, { Regex: MustNewRegexp("1"), Separator: ";", Action: RelabelLabelMap, }, }, MetricRelabelConfigs: []*RelabelConfig{ { SourceLabels: model.LabelNames{"__name__"}, Regex: MustNewRegexp("expensive_metric.*"), Separator: ";", Action: RelabelDrop, }, }, }, { JobName: "service-y", ScrapeInterval: Duration(15 * time.Second), ScrapeTimeout: DefaultGlobalConfig.ScrapeTimeout, MetricsPath: DefaultScrapeConfig.MetricsPath, Scheme: DefaultScrapeConfig.Scheme, ConsulSDConfigs: []*ConsulSDConfig{ { Server: "localhost:1234", Services: []string{"nginx", "cache", "mysql"}, TagSeparator: DefaultConsulSDConfig.TagSeparator, Scheme: DefaultConsulSDConfig.Scheme, }, }, }, { JobName: "service-z", ScrapeInterval: Duration(15 * time.Second), ScrapeTimeout: Duration(10 * time.Second), MetricsPath: "/metrics", Scheme: "http", TLSConfig: TLSConfig{ CertFile: "testdata/valid_cert_file", KeyFile: "testdata/valid_key_file", }, BearerToken: "avalidtoken", }, { JobName: "service-kubernetes", ScrapeInterval: Duration(15 * time.Second), ScrapeTimeout: DefaultGlobalConfig.ScrapeTimeout, MetricsPath: DefaultScrapeConfig.MetricsPath, Scheme: 
DefaultScrapeConfig.Scheme, KubernetesSDConfigs: []*KubernetesSDConfig{ { APIServers: []URL{kubernetesSDHostURL()}, BasicAuth: &BasicAuth{ Username: "myusername", Password: "mypassword", }, KubeletPort: 10255, RequestTimeout: Duration(10 * time.Second), RetryInterval: Duration(1 * time.Second), }, }, }, { JobName: "service-marathon", ScrapeInterval: Duration(15 * time.Second), ScrapeTimeout: DefaultGlobalConfig.ScrapeTimeout, MetricsPath: DefaultScrapeConfig.MetricsPath, Scheme: DefaultScrapeConfig.Scheme, MarathonSDConfigs: []*MarathonSDConfig{ { Servers: []string{ "http://marathon.example.com:8080", }, RefreshInterval: Duration(30 * time.Second), }, }, }, { JobName: "service-ec2", ScrapeInterval: Duration(15 * time.Second), ScrapeTimeout: DefaultGlobalConfig.ScrapeTimeout, MetricsPath: DefaultScrapeConfig.MetricsPath, Scheme: DefaultScrapeConfig.Scheme, EC2SDConfigs: []*EC2SDConfig{ { Region: "us-east-1", AccessKey: "access", SecretKey: "secret", RefreshInterval: Duration(60 * time.Second), Port: 80, }, }, }, }, original: "", } func TestLoadConfig(t *testing.T) { // Parse a valid file that sets a global scrape timeout. This tests whether parsing // an overwritten default field in the global config permanently changes the default. if _, err := LoadFile("testdata/global_timeout.good.yml"); err != nil { t.Errorf("Error parsing %s: %s", "testdata/conf.good.yml", err) } c, err := LoadFile("testdata/conf.good.yml") if err != nil { t.Fatalf("Error parsing %s: %s", "testdata/conf.good.yml", err) } bgot, err := yaml.Marshal(c) if err != nil { t.Fatalf("%s", err) } bexp, err := yaml.Marshal(expectedConf) if err != nil { t.Fatalf("%s", err) } expectedConf.original = c.original if !reflect.DeepEqual(c, expectedConf) { t.Fatalf("%s: unexpected config result: \n\n%s\n expected\n\n%s", "testdata/conf.good.yml", bgot, bexp) } // String method must not reveal authentication credentials. 
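// (String applies patAuthLine from config.go, which masks the secret value of password, bearer_token and secret_key entries.)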
s := c.String() if strings.Contains(s, "admin_password") { t.Fatalf("config's String method reveals authentication credentials.") } } var expectedErrors = []struct { filename string errMsg string }{ { filename: "jobname.bad.yml", errMsg: `"prom^etheus" is not a valid job name`, }, { filename: "jobname_dup.bad.yml", errMsg: `found multiple scrape configs with job name "prometheus"`, }, { filename: "labelname.bad.yml", errMsg: `"not$allowed" is not a valid label name`, }, { filename: "labelname2.bad.yml", errMsg: `"not:allowed" is not a valid label name`, }, { filename: "regex.bad.yml", errMsg: "error parsing regexp", }, { filename: "regex_missing.bad.yml", errMsg: "relabel configuration requires a regular expression", }, { filename: "modulus_missing.bad.yml", errMsg: "relabel configuration for hashmod requires non-zero modulus", }, { filename: "rules.bad.yml", errMsg: "invalid rule file path", }, { filename: "unknown_attr.bad.yml", errMsg: "unknown fields in scrape_config: consult_sd_configs", }, { filename: "bearertoken.bad.yml", errMsg: "at most one of bearer_token & bearer_token_file must be configured", }, { filename: "bearertoken_basicauth.bad.yml", errMsg: "at most one of basic_auth, bearer_token & bearer_token_file must be configured", }, { filename: "kubernetes_bearertoken.bad.yml", errMsg: "at most one of bearer_token & bearer_token_file must be configured", }, { filename: "kubernetes_bearertoken_basicauth.bad.yml", errMsg: "at most one of basic_auth, bearer_token & bearer_token_file must be configured", }, { filename: "marathon_no_servers.bad.yml", errMsg: "Marathon SD config must contain at least one Marathon server", }, { filename: "url_in_targetgroup.bad.yml", errMsg: "\"http://bad\" is not a valid hostname", }, } func TestBadConfigs(t *testing.T) { for _, ee := range expectedErrors { _, err := LoadFile("testdata/" + ee.filename) if err == nil { t.Errorf("Expected error parsing %s but got none", ee.filename) continue } if !strings.Contains(err.Error(), ee.errMsg) { t.Errorf("Expected error for %s to contain %q but got: %s", ee.filename, ee.errMsg, err) } } } func TestBadTargetGroup(t *testing.T) { content, err := ioutil.ReadFile("testdata/tgroup.bad.json") if err != nil { t.Fatal(err) } var tg TargetGroup err = json.Unmarshal(content, &tg) if err == nil { t.Errorf("Expected unmarshal error but got none.") } } func TestEmptyConfig(t *testing.T) { c, err := Load("") if err != nil { t.Fatalf("Unexpected error parsing empty config file: %s", err) } exp := DefaultConfig if !reflect.DeepEqual(*c, exp) { t.Fatalf("want %v, got %v", exp, c) } } func TestEmptyGlobalBlock(t *testing.T) { c, err := Load("global:\n") if err != nil { t.Fatalf("Unexpected error parsing empty config file: %s", err) } exp := DefaultConfig exp.original = "global:\n" if !reflect.DeepEqual(*c, exp) { t.Fatalf("want %v, got %v", exp, c) } } func kubernetesSDHostURL() URL { tURL, _ := url.Parse("https://localhost:1234") return URL{URL: tURL} } prometheus-0.16.2+ds/config/testdata/000077500000000000000000000000001265137125100174565ustar00rootroot00000000000000prometheus-0.16.2+ds/config/testdata/bearertoken.bad.yml000066400000000000000000000001421265137125100232240ustar00rootroot00000000000000scrape_configs: - job_name: prometheus bearer_token: 1234 bearer_token_file: somefile prometheus-0.16.2+ds/config/testdata/bearertoken_basicauth.bad.yml000066400000000000000000000002001265137125100252420ustar00rootroot00000000000000scrape_configs: - job_name: prometheus bearer_token: 1234 basic_auth: username: user password: 
password prometheus-0.16.2+ds/config/testdata/conf.good.yml000066400000000000000000000047431265137125100220650ustar00rootroot00000000000000# my global config global: scrape_interval: 15s evaluation_interval: 30s # scrape_timeout is set to the global default (10s). external_labels: monitor: codelab foo: bar rule_files: - "first.rules" - "/absolute/second.rules" - "my/*.rules" scrape_configs: - job_name: prometheus honor_labels: true # scrape_interval is defined by the configured global (15s). # scrape_timeout is defined by the global default (10s). # metrics_path defaults to '/metrics' # scheme defaults to 'http'. file_sd_configs: - names: - foo/*.slow.json - foo/*.slow.yml - single/file.yml refresh_interval: 10m - names: - bar/*.yaml target_groups: - targets: ['localhost:9090', 'localhost:9191'] labels: my: label your: label relabel_configs: - source_labels: [job, __meta_dns_srv_name] regex: (.*)some-[regex] target_label: job replacement: foo-${1} # action defaults to 'replace' bearer_token_file: valid_token_file - job_name: service-x basic_auth: username: admin_name password: admin_password scrape_interval: 50s scrape_timeout: 5s metrics_path: /my_path scheme: https dns_sd_configs: - refresh_interval: 15s names: - first.dns.address.domain.com - second.dns.address.domain.com - names: - first.dns.address.domain.com # refresh_interval defaults to 30s. relabel_configs: - source_labels: [job] regex: (.*)some-[regex] action: drop - source_labels: [__address__] modulus: 8 target_label: __tmp_hash action: hashmod - source_labels: [__tmp_hash] regex: 1 action: keep - action: labelmap regex: 1 metric_relabel_configs: - source_labels: [__name__] regex: expensive_metric.* action: drop - job_name: service-y consul_sd_configs: - server: 'localhost:1234' services: ['nginx', 'cache', 'mysql'] - job_name: service-z tls_config: cert_file: valid_cert_file key_file: valid_key_file bearer_token: avalidtoken - job_name: service-kubernetes kubernetes_sd_configs: - api_servers: - 'https://localhost:1234' basic_auth: username: 'myusername' password: 'mypassword' - job_name: service-marathon marathon_sd_configs: - servers: - 'http://marathon.example.com:8080' - job_name: service-ec2 ec2_sd_configs: - region: us-east-1 access_key: access secret_key: secret prometheus-0.16.2+ds/config/testdata/global_timeout.good.yml000066400000000000000000000000351265137125100241340ustar00rootroot00000000000000global: scrape_timeout: 1h prometheus-0.16.2+ds/config/testdata/jobname.bad.yml000066400000000000000000000000521265137125100223360ustar00rootroot00000000000000scrape_configs: - job_name: prom^etheus prometheus-0.16.2+ds/config/testdata/jobname_dup.bad.yml000066400000000000000000000002301265137125100232040ustar00rootroot00000000000000# Two scrape configs with the same job names are not allowed. 
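# Loading this file must fail with: found multiple scrape configs with job name "prometheus"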
scrape_configs: - job_name: prometheus - job_name: service-x - job_name: prometheus prometheus-0.16.2+ds/config/testdata/kubernetes_bearertoken.bad.yml000066400000000000000000000002661265137125100254620ustar00rootroot00000000000000scrape_configs: - job_name: prometheus kubernetes_sd_configs: - api_servers: - 'https://localhost:1234' bearer_token: 1234 bearer_token_file: somefile prometheus-0.16.2+ds/config/testdata/kubernetes_bearertoken_basicauth.bad.yml000066400000000000000000000003301265137125100274750ustar00rootroot00000000000000scrape_configs: - job_name: prometheus kubernetes_sd_configs: - api_servers: - 'https://localhost:1234' bearer_token: 1234 basic_auth: username: user password: password prometheus-0.16.2+ds/config/testdata/labelname.bad.yml000066400000000000000000000000621265137125100226440ustar00rootroot00000000000000global: external_labels: not$allowed: value prometheus-0.16.2+ds/config/testdata/labelname2.bad.yml000066400000000000000000000000641265137125100227300ustar00rootroot00000000000000global: external_labels: 'not:allowed': value prometheus-0.16.2+ds/config/testdata/marathon_no_servers.bad.yml000066400000000000000000000002441265137125100250040ustar00rootroot00000000000000# my global config global: scrape_interval: 15s evaluation_interval: 30s scrape_configs: - job_name: service-marathon marathon_sd_configs: - servers: prometheus-0.16.2+ds/config/testdata/modulus_missing.bad.yml000066400000000000000000000001541265137125100241470ustar00rootroot00000000000000scrape_configs: - job_name: prometheus relabel_configs: - regex: abcdef action: hashmod prometheus-0.16.2+ds/config/testdata/regex.bad.yml000066400000000000000000000001251265137125100220360ustar00rootroot00000000000000scrape_configs: - job_name: prometheus relabel_configs: - regex: abc(def prometheus-0.16.2+ds/config/testdata/regex_missing.bad.yml000066400000000000000000000001361265137125100235710ustar00rootroot00000000000000scrape_configs: - job_name: prometheus relabel_configs: - source_labels: ['blub'] prometheus-0.16.2+ds/config/testdata/rules.bad.yml000066400000000000000000000000671265137125100220630ustar00rootroot00000000000000rule_files: - 'my_rule' # fine - 'my/*/rule' # bad prometheus-0.16.2+ds/config/testdata/tgroup.bad.json000066400000000000000000000001741265137125100224200ustar00rootroot00000000000000{ "targets": ["1.2.3.4:9100"], "labels": { "some_valid_label": "foo", "oops:this-label-is-invalid": "bar" } } prometheus-0.16.2+ds/config/testdata/unknown_attr.bad.yml000066400000000000000000000005531265137125100234620ustar00rootroot00000000000000# my global config global: scrape_interval: 15s evaluation_interval: 30s # scrape_timeout is set to the global default (10s). external_labels: monitor: codelab foo: bar rule_files: - "first.rules" - "second.rules" - "my/*.rules" scrape_configs: - job_name: prometheus consult_sd_configs: - server: 'localhost:1234' prometheus-0.16.2+ds/config/testdata/url_in_targetgroup.bad.yml000066400000000000000000000001261265137125100246400ustar00rootroot00000000000000scrape_configs: - job_name: prometheus target_groups: - targets: - http://bad prometheus-0.16.2+ds/console_libraries/000077500000000000000000000000001265137125100200765ustar00rootroot00000000000000prometheus-0.16.2+ds/console_libraries/menu.lib000066400000000000000000000111131265137125100215270ustar00rootroot00000000000000{{/* vim: set ft=html: */}} {{/* Navbar, should be passed . */}} {{ define "navbar" }} {{ end }} {{/* LHS menu, should be passed . */}} {{ define "menu" }}
    {{ template "_menuItem" (args . "index.html.example" "Overview") }} {{ if query "up{job='haproxy'}" }} {{ template "_menuItem" (args . "haproxy.html" "HAProxy") }} {{ if match "^haproxy" .Path }}
      {{ template "_menuItem" (args . "haproxy-frontends.html" "Frontends") }} {{ if .Params.frontend }}
    • {{ end }} {{ template "_menuItem" (args . "haproxy-backends.html" "Backends") }} {{ if .Params.backend }}
    • {{ end }}
    {{ end }} {{ end }} {{ if query "up{job='cassandra'}" }} {{ template "_menuItem" (args . "cassandra.html" "Cassandra") }} {{ end }} {{ if query "up{job='blackbox'}" }} {{ template "_menuItem" (args . "blackbox.html" "Blackbox") }} {{ end }} {{ if query "up{job='node'}" }} {{ template "_menuItem" (args . "node.html" "Node") }} {{ if match "^node" .Path }} {{ if .Params.instance }} {{ end }} {{ end }} {{ end }} {{ if query "up{job='prometheus'}" }} {{ template "_menuItem" (args . "prometheus.html" "Prometheus") }} {{ if match "^prometheus" .Path }} {{ if .Params.instance }} {{ end }} {{ end }} {{ end }} {{ if query "up{job='snmp'}" }} {{ template "_menuItem" (args . "snmp.html" "SNMP") }} {{ if match "^snmp" .Path }} {{ if .Params.instance }} {{ end }} {{ end }} {{ end }} {{ if query "up{job='cloudwatch'}" }} {{ template "_menuItem" (args . "cloudwatch.html" "CloudWatch") }} {{ end }} {{ if query "aws_elasticache_cpuutilization_average{job='aws_elasticache'}" }} {{ template "_menuItem" (args . "aws_elasticache.html" "ElastiCache") }} {{ end }} {{ if query "aws_elb_healthy_host_count_average{job='aws_elb'}" }} {{ template "_menuItem" (args . "aws_elb.html" "ELB") }} {{ end }} {{ if query "aws_redshift_health_status_average{job='aws_redshift'}" }} {{ template "_menuItem" (args . "aws_redshift.html" "Redshift") }} {{ if and (eq "aws_redshift-cluster.html" .Path) .Params.cluster_identifier }}
    • {{ reReplaceAll "^(.{8}).{8,}(.{8})$" "$1...$2" .Params.cluster_identifier }}
    {{ end }} {{ end }}
{{ end }} {{/* Helper, pass (args . path name) */}} {{ define "_menuItem" }}
  • {{ .arg2 }}
  • {{ end }} prometheus-0.16.2+ds/console_libraries/prom.lib000066400000000000000000000130341265137125100215440ustar00rootroot00000000000000{{/* vim: set ft=html: */}} {{/* Load Prometheus console library JS/CSS. Should go in */}} {{ define "prom_console_head" }} {{ end }} {{/* Top of all pages. */}} {{ define "head" }} {{ template "prom_console_head" }} {{ template "navbar" . }} {{ template "menu" . }} {{ end }} {{ define "__prom_query_drilldown_noop" }}{{ . }}{{ end }} {{ define "humanize" }}{{ humanize . }}{{ end }} {{ define "humanizeNoSmallPrefix" }}{{ if and (lt . 1.0) (gt . -1.0) }}{{ printf "%.3g" . }}{{ else }}{{ humanize . }}{{ end }}{{ end }} {{ define "humanize1024" }}{{ humanize1024 . }}{{ end }} {{ define "humanizeDuration" }}{{ humanizeDuration . }}{{ end }} {{ define "humanizeTimestamp" }}{{ humanizeTimestamp . }}{{ end }} {{ define "printf.1f" }}{{ printf "%.1f" . }}{{ end }} {{ define "printf.3g" }}{{ printf "%.3g" . }}{{ end }} {{/* prom_query_drilldown (args expr suffix? renderTemplate?) Displays the result of the expression, with a link to /graph for it. renderTemplate is the name of the template to use to render the value. */}} {{ define "prom_query_drilldown" }} {{ $expr := .arg0 }}{{ $suffix := (or .arg1 "") }}{{ $renderTemplate := (or .arg2 "__prom_query_drilldown_noop") }} {{ with query $expr }}{{tmpl $renderTemplate ( . | first | value )}}{{ $suffix }}{{ else }}-{{ end }} {{ end }} {{ define "prom_path" }}/consoles/{{ .Path }}?{{ range $param, $value := .Params }}{{ $param }}={{ $value }}&{{ end }}{{ end }}" {{ define "prom_right_table_head" }}
    {{ end }} {{ define "prom_right_table_tail" }}
    {{ end }} {{/* RHS table head, pass job name. Should be used after prom_right_table_head. */}} {{ define "prom_right_table_job_head" }} {{ . }} {{ template "prom_query_drilldown" (args (printf "sum(up{job='%s'})" .)) }} / {{ template "prom_query_drilldown" (args (printf "count(up{job='%s'})" .)) }} CPU {{ template "prom_query_drilldown" (args (printf "avg by(job)(irate(process_cpu_seconds_total{job='%s'}[5m]))" .) "s/s" "humanizeNoSmallPrefix") }} Memory {{ template "prom_query_drilldown" (args (printf "avg by(job)(process_resident_memory_bytes{job='%s'})" .) "B" "humanize1024") }} {{ end }} {{ define "prom_content_head" }}
    {{ template "prom_graph_timecontrol" . }} {{ end }} {{ define "prom_content_tail" }}
    {{ end }} {{ define "prom_graph_timecontrol" }}
    {{ end }} {{/* Bottom of all pages. */}} {{ define "tail" }} {{ end }} prometheus-0.16.2+ds/consoles/000077500000000000000000000000001265137125100162255ustar00rootroot00000000000000prometheus-0.16.2+ds/consoles/aws_elasticache.html000066400000000000000000000034031265137125100222320ustar00rootroot00000000000000{{ template "head" . }} {{ template "prom_right_table_head" }} {{ range printf "sum by (cache_cluster_id)(aws_elasticache_cpuutilization_average{job='aws_elasticache'})" | query | sortByLabel "cache_cluster_id" }} {{ .Labels.cache_cluster_id }} CPU {{ template "prom_query_drilldown" (args (printf "aws_elasticache_cpuutilization_average{job='aws_elasticache',cache_cluster_id='%s'}" .Labels.cache_cluster_id) "%" "printf.3g") }} Cache Size {{ template "prom_query_drilldown" (args (printf "aws_elasticache_bytes_used_for_cache_average{job='aws_elasticache',cache_cluster_id='%s'}" .Labels.cache_cluster_id) "B" "humanize1024") }} Cache Items {{ template "prom_query_drilldown" (args (printf "aws_elasticache_curr_items_average{job='aws_elasticache',cache_cluster_id='%s'}" .Labels.cache_cluster_id) "" "humanize") }} Freeable Memory {{ template "prom_query_drilldown" (args (printf "aws_elasticache_freeable_memory_average{job='aws_elasticache',cache_cluster_id='%s'}" .Labels.cache_cluster_id) "B" "humanize1024") }} {{ end }} {{ template "prom_right_table_tail" }} {{ template "prom_content_head" . }}

    AWS ElastiCache

    CPU

    {{ template "prom_content_tail" . }} {{ template "tail" }} prometheus-0.16.2+ds/consoles/aws_elb.html000066400000000000000000000041341265137125100205310ustar00rootroot00000000000000{{ template "head" . }} {{ template "prom_right_table_head" }} {{ range query "sum by (load_balancer_name)(aws_elb_healthy_host_count_average{job='aws_elb'})" | sortByLabel "load_balancer_name" }} {{ .Labels.load_balancer_name }} Healthy Hosts {{ template "prom_query_drilldown" (args (printf "avg(aws_elb_healthy_host_count_average{job='aws_elb',load_balancer_name='%s'})" .Labels.load_balancer_name) ) }} / {{ template "prom_query_drilldown" (args (printf "avg(aws_elb_healthy_host_count_average{job='aws_elb',load_balancer_name='%s'}) + avg(aws_elb_un_healthy_host_count_average{job='aws_elb',load_balancer_name='%s'})" .Labels.load_balancer_name .Labels.load_balancer_name) ) }} Queries {{ template "prom_query_drilldown" (args (printf "sum(aws_elb_request_count_sum{job='aws_elb',load_balancer_name='%s'}) / 60" .Labels.load_balancer_name) "/s" "humanizeNoSmallPrefix") }} Latency {{ template "prom_query_drilldown" (args (printf "avg(aws_elb_latency_average{job='aws_elb',load_balancer_name='%s'})" .Labels.load_balancer_name) "s" "humanize") }} Surge Queue {{ template "prom_query_drilldown" (args (printf "sum(aws_elb_surge_queue_length_sum{job='aws_elb',load_balancer_name='%s'})" .Labels.load_balancer_name) "" "humanize") }} {{ end }} {{ template "prom_right_table_tail" }} {{ template "prom_content_head" . }}

    AWS Elastic Load Balancer

    This console assumes that period_seconds in the CloudWatch Exporter is left at its default of 60s.
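{{/* A worked example of the arithmetic behind the Queries drilldown above, assuming the
     default 60s period: aws_elb_request_count_sum counts requests per CloudWatch period,
     so sum(aws_elb_request_count_sum{job='aws_elb'}) / 60 converts it to requests per
     second, e.g. 1200 requests in one period / 60 = 20 /s. */}}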

    Queries

    {{ template "prom_content_tail" . }} {{ template "tail" }} prometheus-0.16.2+ds/consoles/aws_redshift-cluster.html000066400000000000000000000077411265137125100232650ustar00rootroot00000000000000{{ template "head" . }} {{ template "prom_right_table_head" }} Nodes {{ template "prom_query_drilldown" (args (printf "count(aws_redshift_percentage_disk_space_used_average{job='aws_redshift',cluster_identifier='%s'})" .Params.cluster_identifier)) }} Healthy {{ with printf "aws_redshift_health_status_average{job='aws_redshift',cluster_identifier='%s'}" .Params.cluster_identifier | query }}{{ if eq (. | first | value) 1.0 }}Yes{{ else }}No{{ end }} {{ end }} Maintenance mode {{ with printf "aws_redshift_maintenance_mode_average{job='aws_redshift',cluster_identifier='%s'}" .Params.cluster_identifier | query }}{{ if eq (. | first | value) 1.0 }}Yes{{ else }}No{{ end }} {{ end }} Connections {{ template "prom_query_drilldown" (args (printf "aws_redshift_database_connections_average{job='aws_redshift',cluster_identifier='%s'}" .Params.cluster_identifier)) }} CPU {{ template "prom_query_drilldown" (args (printf "avg(aws_redshift_cpuutilization_average{job='aws_redshift',cluster_identifier='%s'})" .Params.cluster_identifier) "%" "printf.3g") }} Disk Used {{ template "prom_query_drilldown" (args (printf "max(aws_redshift_percentage_disk_space_used_average{job='aws_redshift',cluster_identifier='%s'})" .Params.cluster_identifier) "%" "printf.3g") }} Network Transmitted {{ template "prom_query_drilldown" (args (printf "avg(aws_redshift_network_transmit_throughput_average{job='aws_redshift',cluster_identifier='%s'})" .Params.cluster_identifier) "B/s" "humanize") }} Network Received {{ template "prom_query_drilldown" (args (printf "avg(aws_redshift_network_receive_throughput_average{job='aws_redshift',cluster_identifier='%s'})" .Params.cluster_identifier) "B/s" "humanize") }} Read Throughput {{ template "prom_query_drilldown" (args (printf "avg(aws_redshift_read_throughput_average{job='aws_redshift',cluster_identifier='%s'})" .Params.cluster_identifier) "B/s" "humanize") }} Read IOPS {{ template "prom_query_drilldown" (args (printf "avg(aws_redshift_read_iops_average{job='aws_redshift',cluster_identifier='%s'})" .Params.cluster_identifier) "/s" "humanizeNoSmallPrefix") }} Read Latency {{ template "prom_query_drilldown" (args (printf "avg(aws_redshift_read_latency_average{job='aws_redshift',cluster_identifier='%s'})" .Params.cluster_identifier) "s" "humanize") }} Write Throughput {{ template "prom_query_drilldown" (args (printf "avg(aws_redshift_write_throughput_average{job='aws_redshift',cluster_identifier='%s'})" .Params.cluster_identifier) "B/s" "humanize") }} Write IOPS {{ template "prom_query_drilldown" (args (printf "avg(aws_redshift_write_iops_average{job='aws_redshift',cluster_identifier='%s'})" .Params.cluster_identifier) "/s" "humanizeNoSmallPrefix") }} Write Latency {{ template "prom_query_drilldown" (args (printf "avg(aws_redshift_write_latency_average{job='aws_redshift',cluster_identifier='%s'})" .Params.cluster_identifier) "s" "humanize") }} {{ template "prom_right_table_tail" }} {{ template "prom_content_head" . }}

    AWS Redshift

    Cluster: {{ .Params.cluster_identifier }}

    CPU Usage

    {{ template "prom_content_tail" . }} {{ template "tail" }} prometheus-0.16.2+ds/consoles/aws_redshift.html000066400000000000000000000027451265137125100216050ustar00rootroot00000000000000{{ template "head" . }} {{ template "prom_content_head" . }}

    AWS Redshift

    Overview

    {{ range printf "sum by (cluster_identifier)(aws_redshift_health_status_average{job='aws_redshift'})" | query | sortByLabel "cluster_identifier" }} {{ end }}
    Cluster Healthy Maintenance Mode Nodes Disk Used
    {{ .Labels.cluster_identifier }} {{ with printf "aws_redshift_health_status_average{job='aws_redshift',cluster_identifier='%s'}" .Labels.cluster_identifier | query }}{{ if eq (. | first | value) 1.0 }}Yes{{ else }}No{{ end }} {{ end }} {{ with printf "aws_redshift_maintenance_mode_average{job='aws_redshift',cluster_identifier='%s'}" .Labels.cluster_identifier | query }}{{ if eq (. | first | value) 1.0 }}Yes{{ else }}No{{ end }} {{ end }} {{ template "prom_query_drilldown" (args (printf "count(aws_redshift_percentage_disk_space_used_average{job='aws_redshift',cluster_identifier='%s'})" .Labels.cluster_identifier)) }} {{ template "prom_query_drilldown" (args (printf "max(aws_redshift_percentage_disk_space_used_average{job='aws_redshift',cluster_identifier='%s'})" .Labels.cluster_identifier) "%" "printf.3g") }}
    {{ template "prom_content_tail" . }} {{ template "tail" }} prometheus-0.16.2+ds/consoles/blackbox.html000066400000000000000000000026431265137125100207050ustar00rootroot00000000000000{{ template "head" . }} {{ template "prom_right_table_head" }} Blackbox {{ template "prom_query_drilldown" (args "sum(up{job='blackbox'})") }} / {{ template "prom_query_drilldown" (args "count(up{job='blackbox'})") }} Currently {{ range query "probe_success{job='blackbox'}" | sortByLabel "instance" }} {{ .Labels.instance }} Up{{ else }} class="alert-danger">Down{{ end }} {{ end }} Past Day % {{ range query "avg_over_time(probe_success{job='blackbox'}[1d]) * 100" | sortByLabel "instance" }} {{ .Labels.instance }} {{ (. | value | printf "%.2f") }}% {{ end }} {{ template "prom_right_table_tail" }} {{ template "prom_content_head" .}}

    Blackbox

    Response times

    {{ template "prom_content_tail" . }} {{ template "tail" }} prometheus-0.16.2+ds/consoles/cassandra.html000066400000000000000000000062331265137125100210560ustar00rootroot00000000000000{{ template "head" . }} {{ template "prom_right_table_head" }} {{ template "prom_right_table_job_head" "cassandra" }} Queries {{ template "prom_query_drilldown" (args "sum by (job)(irate(cassandra_clientrequest_latency{job='cassandra'}[5m]))" "/s" "humanizeNoSmallPrefix") }} Timeout Ratio {{ template "prom_query_drilldown" (args "sum by (job)(irate(cassandra_clientrequest_timeouts{job='cassandra'}[5m])) / sum by (job)(rate(cassandra_clientrequest_latency{job='cassandra'}[5m]))" "" "humanizeNoSmallPrefix") }} Unavailable Ratio {{ template "prom_query_drilldown" (args "sum by (job)(irate(cassandra_clientrequest_unavailables{job='cassandra'}[5m])) / sum by (job)(rate(cassandra_clientrequest_latency{job='cassandra'}[5m]))" "" "humanizeNoSmallPrefix") }} Internals Hints Inprogress {{ template "prom_query_drilldown" (args "sum by (job)(cassandra_storage_totalhintsinprogress{job='cassandra'})" "" "humanize") }} Blocked Tasks {{ template "prom_query_drilldown" (args "sum by (job)(cassandra_threadpools_currentlyblockedtasks{job='cassandra'})" "" "humanize") }} Average Node Disk Compacted {{ template "prom_query_drilldown" (args "avg by (job)(irate(cassandra_compaction_bytescompacted{job='cassandra'}[5m]))" "B/s" "humanize1024") }} Live CF {{ template "prom_query_drilldown" (args "avg by (job)(sum by (job, instance)(cassandra_columnfamily_totaldiskspaceused{job='cassandra'}))" "B" "humanize1024") }} Total CF {{ template "prom_query_drilldown" (args "avg by (job)(sum by (job, instance)(cassandra_columnfamily_totaldiskspaceused{job='cassandra'}))" "B" "humanize1024") }} Commit Log {{ template "prom_query_drilldown" (args "avg by (job)(cassandra_commitlog_totalcommitlogsize{job='cassandra'})" "B" "humanize1024") }} {{ template "prom_right_table_tail" }} {{ template "prom_content_head" .}}

    Cassandra

    Client Queries

    Client Latency

    {{ template "prom_content_tail" . }} {{ template "tail" }} prometheus-0.16.2+ds/consoles/cloudwatch.html000066400000000000000000000011501265137125100212450ustar00rootroot00000000000000{{ template "head" . }} {{ template "prom_right_table_head" }} cloudwatch {{ template "prom_query_drilldown" (args "sum(up{job='cloudwatch'})") }} / {{ template "prom_query_drilldown" (args "count(up{job='cloudwatch'})") }} API Requests {{ template "prom_query_drilldown" (args "sum by (job)(irate(cloudwatch_requests_total{job='cloudwatch'}[5m]))" "/s" "humanizeNoSmallPrefix") }} {{ template "prom_right_table_tail" }} {{ template "prom_content_head" . }}

    CloudWatch Exporter

    {{ template "prom_content_tail" . }} {{ template "tail" }} prometheus-0.16.2+ds/consoles/haproxy-backend.html000066400000000000000000000055511265137125100222000ustar00rootroot00000000000000{{ template "head" . }} {{ template "prom_right_table_head" }} {{ .Params.backend }}{{ template "prom_query_drilldown" (args (printf "sum(min by (server)(haproxy_server_up{job='haproxy',backend='%s'}))" .Params.backend)) }} / {{ template "prom_query_drilldown" (args (printf "count(sum by (server)(haproxy_server_up{job='haproxy',backend='%s'}))" .Params.backend))}} Responses {{ template "prom_query_drilldown" (args (printf "sum(irate(haproxy_backend_http_responses_total{job='haproxy',backend='%s'}[5m]))" .Params.backend) "/s" "humanizeNoSmallPrefix") }} Data In {{ template "prom_query_drilldown" (args (printf "sum(irate(haproxy_backend_bytes_in_total{job='haproxy',backend='%s'}[5m]))" .Params.backend) "B/s" "humanize") }} Data Out {{ template "prom_query_drilldown" (args (printf "sum(irate(haproxy_backend_bytes_out_total{job='haproxy',backend='%s'}[5m]))" .Params.backend) "B/s" "humanize") }} Current Sessions {{ template "prom_query_drilldown" (args (printf "sum(haproxy_backend_current_sessions{job='haproxy',backend='%s'})" .Params.backend) "" "humanize") }} Current Queue {{ template "prom_query_drilldown" (args (printf "sum(haproxy_backend_current_queue{job='haproxy',backend='%s'})" .Params.backend) "" "humanize") }} Server Errors Connection Errors {{ template "prom_query_drilldown" (args (printf "sum(irate(haproxy_backend_connection_errors_total{job='haproxy',backend='%s'}[5m]))" .Params.backend) "/s" "humanizeNoSmallPrefix") }} Response Errors {{ template "prom_query_drilldown" (args (printf "sum(irate(haproxy_backend_connection_errors_total{job='haproxy',backend='%s'}[5m]))" .Params.backend) "/s" "humanizeNoSmallPrefix") }} Retry Warnings {{ template "prom_query_drilldown" (args (printf "sum(irate(haproxy_backend_retry_warnings_total{job='haproxy',backend='%s'}[5m]))" .Params.backend) "/s" "humanizeNoSmallPrefix") }} {{ template "prom_right_table_tail" }} {{ template "prom_content_head" . }}

    HAProxy Backend - {{ .Params.backend }}

    Responses

    {{ template "prom_content_tail" . }} {{ template "tail" }} prometheus-0.16.2+ds/consoles/haproxy-backends.html000066400000000000000000000027231265137125100223610ustar00rootroot00000000000000{{ template "head" . }} {{ template "prom_content_head" . }}

    HAProxy Backends

    {{ range query "count by (backend)(haproxy_backend_http_responses_total{job='haproxy'})" | sortByLabel "backend" }} {{ else }} {{ end }} {{ template "prom_content_tail" . }} {{ template "tail" }} prometheus-0.16.2+ds/consoles/haproxy-frontend.html000066400000000000000000000036101265137125100224220ustar00rootroot00000000000000{{ template "head" . }} {{ template "prom_right_table_head" }} {{ template "prom_right_table_tail" }} {{ template "prom_content_head" . }}

    HAProxy Frontend - {{ .Params.frontend }}

    Responses

    {{ template "prom_content_tail" . }} {{ template "tail" }} prometheus-0.16.2+ds/consoles/haproxy-frontends.html000066400000000000000000000017431265137125100226120ustar00rootroot00000000000000{{ template "head" . }} {{ template "prom_content_head" . }}

    HAProxy Frontends

    Backend Servers Healthy Responses Sessions Queue
    {{ .Labels.backend }} {{ template "prom_query_drilldown" (args (printf "sum(min by (server)(haproxy_server_up{job='haproxy',backend='%s'}))" .Labels.backend)) }} / {{ template "prom_query_drilldown" (args (printf "count(sum by (server)(haproxy_server_up{job='haproxy',backend='%s'}))" .Labels.backend))}} {{ template "prom_query_drilldown" (args (printf "sum by(backend)(irate(haproxy_backend_http_responses_total{job='haproxy',backend='%s'}[5m]))" .Labels.backend) "/s" "humanizeNoSmallPrefix") }} {{ template "prom_query_drilldown" (args (printf "sum by(backend)(haproxy_backend_current_sessions{job='haproxy',backend='%s'})" .Labels.backend) "" "humanize") }} {{ template "prom_query_drilldown" (args (printf "sum by(backend)(haproxy_backend_current_queue{job='haproxy',backend='%s'})" .Labels.backend) "" "humanize") }}
    No backends found.
    {{ .Params.frontend }}
    Requests {{ template "prom_query_drilldown" (args (printf "sum(irate(haproxy_frontend_http_requests_total{job='haproxy',frontend='%s'}[5m]))" .Params.frontend) "/s" "humanizeNoSmallPrefix") }}
    Requests Denied {{ template "prom_query_drilldown" (args (printf "sum(irate(haproxy_frontend_requests_denied_total{job='haproxy',frontend='%s'}[5m]))" .Params.frontend) "/s" "humanizeNoSmallPrefix") }}
    Data In {{ template "prom_query_drilldown" (args (printf "sum(irate(haproxy_frontend_bytes_in_total{job='haproxy',frontend='%s'}[5m]))" .Params.frontend) "B/s" "humanize") }}
    Data Out {{ template "prom_query_drilldown" (args (printf "sum(irate(haproxy_frontend_bytes_out_total{job='haproxy',frontend='%s'}[5m]))" .Params.frontend) "B/s" "humanize") }}
    Current Sessions {{ template "prom_query_drilldown" (args (printf "sum(haproxy_frontend_current_sessions{job='haproxy',frontend='%s'})" .Params.frontend) "" "humanize") }}
    {{ range query "count by (frontend)(haproxy_frontend_http_requests_total{job='haproxy'})" | sortByLabel "frontend" }} {{ else }} {{ end }} {{ template "prom_content_tail" . }} {{ template "tail" }} prometheus-0.16.2+ds/consoles/haproxy.html000066400000000000000000000056371265137125100206200ustar00rootroot00000000000000{{ template "head" . }} {{ template "prom_right_table_head" }} {{ template "prom_right_table_tail" }} {{ template "prom_content_head" . }}

    HAProxy

    Frontend Requests

    Backend Responses

    Current Sessions

    {{ template "prom_content_tail" . }} {{ template "tail" }} prometheus-0.16.2+ds/consoles/index.html.example000066400000000000000000000020001265137125100216440ustar00rootroot00000000000000{{ template "head" . }} {{ template "prom_right_table_head" }} {{ template "prom_right_table_tail" }} {{ template "prom_content_head" . }}

    Overview

    These are example consoles for Prometheus; they are still under development.

    These consoles expect exporters to have the following job labels:

    Frontend Requests Sessions
    {{ .Labels.frontend }} {{ template "prom_query_drilldown" (args (printf "sum by(frontend)(irate(haproxy_frontend_http_requests_total{job='haproxy',frontend='%s'}[5m]))" .Labels.frontend) "/s" "humanizeNoSmallPrefix") }} {{ template "prom_query_drilldown" (args (printf "sum by(frontend)(haproxy_frontend_current_sessions{job='haproxy',frontend='%s'})" .Labels.frontend) "" "humanize") }}
    No frontends found.
    HAProxy {{ template "prom_query_drilldown" (args "sum(haproxy_up{job='haproxy'})") }} / {{ template "prom_query_drilldown" (args "count(up{job='haproxy'})") }}
    CPU {{ template "prom_query_drilldown" (args "avg by(job)(irate(haproxy_process_cpu_seconds_total{job='haproxy'}[5m]))" "s/s" "humanizeNoSmallPrefix") }}
    Memory {{ template "prom_query_drilldown" (args "avg by(job)(haproxy_process_resident_memory_bytes{job='haproxy'})" "B" "humanize1024") }}
    Frontend
    Requests {{ template "prom_query_drilldown" (args "sum(irate(haproxy_frontend_http_requests_total{job='haproxy'}[5m]))" "/s" "humanizeNoSmallPrefix") }}
    Requests Denied {{ template "prom_query_drilldown" (args "sum(irate(haproxy_frontend_requests_denied_total{job='haproxy'}[5m]))" "/s" "humanizeNoSmallPrefix") }}
    Data In {{ template "prom_query_drilldown" (args "sum(irate(haproxy_frontend_bytes_in_total{job='haproxy'}[5m]))" "B/s" "humanize") }}
    Data Out {{ template "prom_query_drilldown" (args "sum(irate(haproxy_frontend_bytes_out_total{job='haproxy'}[5m]))" "B/s" "humanize") }}
    Current Sessions {{ template "prom_query_drilldown" (args "sum(haproxy_frontend_current_sessions{job='haproxy'})" "" "humanize") }}
    Exporter Job label
    Node Exporter node
    Prometheus prometheus
    SNMP Exporter snmp
    HAProxy Exporter haproxy
    CloudWatch Exporter cloudwatch
    Cassandra (JMX Exporter) cassandra
    Blackbox (Prober) blackbox
    {{ template "prom_content_tail" . }} {{ template "tail" }} prometheus-0.16.2+ds/consoles/node-cpu.html000066400000000000000000000050331265137125100206260ustar00rootroot00000000000000{{ template "head" . }} {{ template "prom_right_table_head" }} CPU(s): {{ template "prom_query_drilldown" (args (printf "scalar(count(count by (cpu)(node_cpu{job='node',instance='%s'})))" .Params.instance)) }} {{ range printf "sum by (mode)(irate(node_cpu{job='node',instance='%s'}[5m])) * 100 / scalar(count(count by (cpu)(node_cpu{job='node',instance='%s'})))" .Params.instance .Params.instance | query | sortByLabel "mode" }} {{ .Labels.mode | title }} CPU {{ .Value | printf "%.1f" }}% {{ end }} Misc Processes Running {{ template "prom_query_drilldown" (args (printf "node_procs_running{job='node',instance='%s'}" .Params.instance) "" "humanize") }} Processes Blocked {{ template "prom_query_drilldown" (args (printf "node_procs_blocked{job='node',instance='%s'}" .Params.instance) "" "humanize") }} Forks {{ template "prom_query_drilldown" (args (printf "irate(node_forks{job='node',instance='%s'}[5m])" .Params.instance) "/s" "humanize") }} Context Switches {{ template "prom_query_drilldown" (args (printf "irate(node_context_switches{job='node',instance='%s'}[5m])" .Params.instance) "/s" "humanize") }} Interrupts {{ template "prom_query_drilldown" (args (printf "irate(node_intr{job='node',instance='%s'}[5m])" .Params.instance) "/s" "humanize") }} 1m Loadavg {{ template "prom_query_drilldown" (args (printf "node_load1{job='node',instance='%s'}" .Params.instance)) }} {{ template "prom_right_table_tail" }} {{ template "prom_content_head" . }}

    Node CPU - {{ reReplaceAll "(.*?://)([^:/]+?)(:\\d+)?/.*" "$2" .Params.instance }}

    CPU Usage
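{{/* How the per-mode CPU percentages in the table above are derived:
     irate(node_cpu{...}[5m]) is seconds of CPU time consumed per second, per CPU and mode;
     sum by (mode) adds this across CPUs, and dividing by
     scalar(count(count by (cpu)(node_cpu{...}))), the number of distinct CPUs, gives the
     average fraction per CPU, which * 100 turns into a percentage. A worked example with
     assumed numbers: 2 CPUs each spending 0.5 s/s in user mode gives
     (0.5 + 0.5) / 2 * 100 = 50%. */}}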

    {{ template "prom_content_tail" . }} {{ template "tail" }} prometheus-0.16.2+ds/consoles/node-disk.html000066400000000000000000000065651265137125100210040ustar00rootroot00000000000000{{ template "head" . }} {{ template "prom_right_table_head" }} Disks {{ range printf "node_disk_io_time_ms{job='node',instance='%s'}" .Params.instance | query | sortByLabel "device" }} {{ .Labels.device }} Utilization {{ template "prom_query_drilldown" (args (printf "irate(node_disk_io_time_ms{job='node',instance='%s',device='%s'}[5m]) / 1000 * 100" .Labels.instance .Labels.device) "%" "printf.1f") }} Throughput {{ template "prom_query_drilldown" (args (printf "irate(node_disk_sectors_read{job='node',instance='%s',device='%s'}[5m]) * 512 + irate(node_disk_sectors_written{job='node',instance='%s',device='%s'}[5m]) * 512" .Labels.instance .Labels.device .Labels.instance .Labels.device) "B/s" "humanize") }} Avg Read Time {{ template "prom_query_drilldown" (args (printf "irate(node_disk_read_time_ms{job='node',instance='%s',device='%s'}[5m]) / 1000 / irate(node_disk_reads_completed{job='node',instance='%s',device='%s'}[5m])" .Labels.instance .Labels.device .Labels.instance .Labels.device) "s" "humanize") }} Avg Write Time {{ template "prom_query_drilldown" (args (printf "irate(node_disk_write_time_ms{job='node',instance='%s',device='%s'}[5m]) / 1000 / irate(node_disk_writes_completed{job='node',instance='%s',device='%s'}[5m])" .Labels.instance .Labels.device .Labels.instance .Labels.device) "s" "humanize") }} {{ end }} Filesystem Fullness {{ define "roughlyNearZero" }} {{ if gt .1 . }}~0{{ else }}{{ printf "%.1f" . }}{{ end }} {{ end }} {{ range printf "node_filesystem_size{job='node',instance='%s'}" .Params.instance | query | sortByLabel "mountpoint" }} {{ .Labels.mountpoint }} {{ template "prom_query_drilldown" (args (printf "100 - node_filesystem_free{job='node',instance='%s',mountpoint='%s'} / node_filesystem_size{job='node'} * 100" .Labels.instance .Labels.mountpoint) "%" "roughlyNearZero") }} {{ end }} {{ template "prom_right_table_tail" }} {{ template "prom_content_head" . }}

    Node Disk - {{ reReplaceAll "(.*?://)([^:/]+?)(:\\d+)?/.*" "$2" .Params.instance }}

    Disk I/O Utilization

    Filesystem Usage

    {{ template "prom_content_tail" . }} {{ template "tail" }} prometheus-0.16.2+ds/consoles/node-overview.html000066400000000000000000000127351265137125100217140ustar00rootroot00000000000000{{ template "head" . }} {{ template "prom_right_table_head" }} Overview User CPU {{ template "prom_query_drilldown" (args (printf "sum(irate(node_cpu{job='node',instance='%s',mode='user'}[5m])) * 100 / count(count by (cpu)(node_cpu{job='node',instance='%s'}))" .Params.instance .Params.instance) "%" "printf.1f") }} System CPU {{ template "prom_query_drilldown" (args (printf "sum(irate(node_cpu{job='node',instance='%s',mode='system'}[5m])) * 100 / count(count by (cpu)(node_cpu{job='node',instance='%s'}))" .Params.instance .Params.instance) "%" "printf.1f") }} Memory Total {{ template "prom_query_drilldown" (args (printf "node_memory_MemTotal{job='node',instance='%s'}" .Params.instance) "B" "humanize1024") }} Memory Free {{ template "prom_query_drilldown" (args (printf "node_memory_MemFree{job='node',instance='%s'}" .Params.instance) "B" "humanize1024") }} Network {{ range printf "node_network_receive_bytes{job='node',instance='%s',device!='lo'}" .Params.instance | query | sortByLabel "device" }} {{ .Labels.device }} Received {{ template "prom_query_drilldown" (args (printf "irate(node_network_receive_bytes{job='node',instance='%s',device='%s'}[5m])" .Labels.instance .Labels.device) "B/s" "humanize") }} {{ .Labels.device }} Transmitted {{ template "prom_query_drilldown" (args (printf "irate(node_network_transmit_bytes{job='node',instance='%s',device='%s'}[5m])" .Labels.instance .Labels.device) "B/s" "humanize") }} {{ end }} Disks {{ range printf "node_disk_io_time_ms{job='node',instance='%s',device!~'^(md\\\\d+$|dm-)'}" .Params.instance | query | sortByLabel "device" }} {{ .Labels.device }} Utilization {{ template "prom_query_drilldown" (args (printf "irate(node_disk_io_time_ms{job='node',instance='%s',device='%s'}[5m]) / 1000 * 100" .Labels.instance .Labels.device) "%" "printf.1f") }} {{ end }} {{ range printf "node_disk_io_time_ms{job='node',instance='%s'}" .Params.instance | query | sortByLabel "device" }} {{ .Labels.device }} Throughput {{ template "prom_query_drilldown" (args (printf "irate(node_disk_sectors_read{job='node',instance='%s',device='%s'}[5m]) * 512 + rate(node_disk_sectors_written{job='node',instance='%s',device='%s'}[5m]) * 512" .Labels.instance .Labels.device .Labels.instance .Labels.device) "B/s" "humanize") }} {{ end }} Filesystem Fullness {{ define "roughlyNearZero" }} {{ if gt .1 . }}~0{{ else }}{{ printf "%.1f" . }}{{ end }} {{ end }} {{ range printf "node_filesystem_size{job='node',instance='%s'}" .Params.instance | query | sortByLabel "mountpoint" }} {{ .Labels.mountpoint }} {{ template "prom_query_drilldown" (args (printf "100 - node_filesystem_free{job='node',instance='%s',mountpoint='%s'} / node_filesystem_size{job='node'} * 100" .Labels.instance .Labels.mountpoint) "%" "roughlyNearZero") }} {{ end }} {{ template "prom_right_table_tail" }} {{ template "prom_content_head" . }}

    Node Overview - {{ reReplaceAll "(.*?://)([^:/]+?)(:\\d+)?/.*" "$2" .Params.instance }}

    CPU Usage

    Disk I/O Utilization

    Memory

    {{ template "prom_content_tail" . }} {{ template "tail" }} prometheus-0.16.2+ds/consoles/node.html000066400000000000000000000026041265137125100200420ustar00rootroot00000000000000{{ template "head" . }} {{ template "prom_right_table_head" }} Node {{ template "prom_query_drilldown" (args "sum(up{job='node'})") }} / {{ template "prom_query_drilldown" (args "count(up{job='node'})") }} {{ template "prom_right_table_tail" }} {{ template "prom_content_head" . }}

    Node

    {{ range query "up{job='node'}" | sortByLabel "instance" }} Yes{{ else }} class="alert-danger">No{{ end }} {{ else }} {{ end }} {{ template "prom_content_tail" . }} {{ template "tail" }} prometheus-0.16.2+ds/consoles/prometheus-overview.html000066400000000000000000000126651265137125100231640ustar00rootroot00000000000000{{ template "head" . }} {{ template "prom_right_table_head" }} {{ range printf "http_request_duration_microseconds_count{job='prometheus',instance='%s',handler=~'^(query.*|federate|consoles)$'}" .Params.instance | query | sortByLabel "handler" }} {{ end }} {{ template "prom_right_table_tail" }} {{ template "prom_content_head" . }}

    Prometheus Overview - {{ .Params.instance }}

    Ingested Samples

    Time Series

    HTTP Server

    {{ template "prom_content_tail" . }} {{ template "tail" }} prometheus-0.16.2+ds/consoles/prometheus.html000066400000000000000000000030131265137125100213030ustar00rootroot00000000000000{{ template "head" . }} {{ template "prom_right_table_head" }} {{ template "prom_right_table_tail" }} {{ template "prom_content_head" . }}

    Prometheus

    Node Up CPU Used Memory Available
    {{ reReplaceAll "(.*?://)([^:/]+?)(:\\d+)?/.*" "$2" .Labels.instance }} {{ template "prom_query_drilldown" (args (printf "100 * (1 - avg by(instance)(irate(node_cpu{job='node',mode='idle',instance='%s'}[5m])))" .Labels.instance) "%" "printf.1f") }} {{ template "prom_query_drilldown" (args (printf "node_memory_MemFree{job='node',instance='%s'} + node_memory_Cached{job='node',instance='%s'} + node_memory_Buffers{job='node',instance='%s'}" .Labels.instance .Labels.instance .Labels.instance) "B" "humanize1024") }}
    No nodes found.
    Overview
    CPU {{ template "prom_query_drilldown" (args (printf "irate(process_cpu_seconds_total{job='prometheus',instance='%s'}[5m])" .Params.instance) "s/s" "humanizeNoSmallPrefix") }}
    Memory {{ template "prom_query_drilldown" (args (printf "process_resident_memory_bytes{job='prometheus',instance='%s'}" .Params.instance) "B" "humanize1024") }}
    Version {{ with query (printf "prometheus_build_info{job='prometheus',instance='%s'}" .Params.instance) }}{{. | first | label "version"}}{{end}}
    Storage
    Ingested Samples {{ template "prom_query_drilldown" (args (printf "irate(prometheus_local_storage_ingested_samples_total{job='prometheus',instance='%s'}[5m])" .Params.instance) "/s" "humanizeNoSmallPrefix") }}
    Time Series {{ template "prom_query_drilldown" (args (printf "prometheus_local_storage_memory_series{job='prometheus',instance='%s'}" .Params.instance) "" "humanize") }}
    Indexing Queue {{ template "prom_query_drilldown" (args (printf "prometheus_local_storage_indexing_queue_length{job='prometheus',instance='%s'}" .Params.instance) "" "humanize") }}
    Chunks {{ template "prom_query_drilldown" (args (printf "prometheus_local_storage_memory_chunks{job='prometheus',instance='%s'}" .Params.instance) "" "humanize") }}
    Chunk Descriptors {{ template "prom_query_drilldown" (args (printf "prometheus_local_storage_memory_chunkdescs{job='prometheus',instance='%s'}" .Params.instance) "" "humanize") }}
    Chunks To Persist {{ template "prom_query_drilldown" (args (printf "prometheus_local_storage_chunks_to_persist{job='prometheus',instance='%s'}" .Params.instance) "" "humanize") }}
    Checkpoint Duration {{ template "prom_query_drilldown" (args (printf "prometheus_local_storage_checkpoint_duration_milliseconds{job='prometheus',instance='%s'} / 1000" .Params.instance) "" "humanizeDuration") }}
    Rules
    Evaluation Duration {{ template "prom_query_drilldown" (args (printf "irate(prometheus_evaluator_duration_milliseconds_sum{job='prometheus',instance='%s'}[5m]) / rate(prometheus_evaluator_duration_milliseconds_count{job='prometheus',instance='%s'}[5m]) / 1000" .Params.instance .Params.instance) "" "humanizeDuration") }}
    Notification Latency {{ template "prom_query_drilldown" (args (printf "irate(prometheus_notifications_latency_milliseconds_sum{job='prometheus',instance='%s'}[5m]) / rate(prometheus_notifications_latency_milliseconds_count{job='prometheus',instance='%s'}[5m]) / 1000" .Params.instance .Params.instance) "" "humanizeDuration") }}
    Notification Queue {{ template "prom_query_drilldown" (args (printf "prometheus_notifications_queue_length{job='prometheus',instance='%s'}" .Params.instance) "" "humanize") }}
    HTTP Server
    {{ .Labels.handler }} {{ template "prom_query_drilldown" (args (printf "irate(http_request_duration_microseconds_count{job='prometheus',instance='%s',handler='%s'}[5m])" .Labels.instance .Labels.handler) "/s" "humanizeNoSmallPrefix") }}
    Prometheus {{ template "prom_query_drilldown" (args "sum(up{job='prometheus'})") }} / {{ template "prom_query_drilldown" (args "count(up{job='prometheus'})") }}
    {{ range query "up{job='prometheus'}" | sortByLabel "instance" }} {{ else }} {{ end }} {{ template "prom_content_tail" . }} {{ template "tail" }} prometheus-0.16.2+ds/consoles/snmp-overview.html000066400000000000000000000046311265137125100217400ustar00rootroot00000000000000{{ template "head" . }} {{ template "prom_content_head" . }}

    SNMP Device Overview - {{ .Params.instance }}

    Prometheus Up Ingested Samples Time Series Memory
    {{ .Labels.instance }} Yes{{ else }} class="alert-danger">No{{ end }} {{ template "prom_query_drilldown" (args (printf "irate(prometheus_local_storage_ingested_samples_total{job='prometheus',instance='%s'}[5m])" .Labels.instance) "/s" "humanizeNoSmallPrefix") }} {{ template "prom_query_drilldown" (args (printf "prometheus_local_storage_memory_series{job='prometheus',instance='%s'}" .Labels.instance) "" "humanize") }} {{ template "prom_query_drilldown" (args (printf "process_resident_memory_bytes{job='prometheus',instance='%s'}" .Labels.instance) "B" "humanize1024")}}
    No devices found.
    {{ range query (printf "ifOperStatus{job='snmp',instance='%s'}" .Params.instance) | sortByLabel "ifDescr" }} {{ end }} {{ template "prom_content_tail" . }} {{ template "tail" }} prometheus-0.16.2+ds/consoles/snmp.html000066400000000000000000000043601265137125100200730ustar00rootroot00000000000000{{ template "head" . }} {{ template "prom_right_table_head" }} {{ template "prom_right_table_tail" }} {{ template "prom_content_head" . }}

    SNMP

    Port Status Speed In Out Discards Errors
    {{ .Labels.ifDescr }} up {{ else if eq (. | value) 2.0}}">down {{ else if eq (. | value) 3.0}}">testing {{ else if eq (. | value) 4.0}}">unknown {{ else if eq (. | value) 5.0}}">dormant {{ else if eq (. | value) 6.0}}">notPresent {{ else if eq (. | value) 7.0}}">lowerLayerDown {{else}}">{{ end }} {{ template "prom_query_drilldown" (args (printf "ifHighSpeed{job='snmp',instance='%s',ifDescr='%s'} * 1e6 or ifSpeed{job='snmp',instance='%s',ifDescr='%s'}" .Labels.instance .Labels.ifDescr .Labels.instance .Labels.ifDescr) "b/s" "humanize")}} {{ template "prom_query_drilldown" (args (printf "irate(ifHCInOctets{job='snmp',instance='%s',ifDescr='%s'}[5m]) * 8 or rate(ifInOctets{job='snmp',instance='%s',ifDescr='%s'}[5m]) * 8" .Labels.instance .Labels.ifDescr .Labels.instance .Labels.ifDescr) "b/s" "humanize")}} {{ template "prom_query_drilldown" (args (printf "irate(ifHCOutOctets{job='snmp',instance='%s',ifDescr='%s'}[5m]) * 8 or rate(ifOutOctets{job='snmp',instance='%s',ifDescr='%s'}[5m]) * 8" .Labels.instance .Labels.ifDescr .Labels.instance .Labels.ifDescr) "b/s" "humanize")}} {{ template "prom_query_drilldown" (args (printf "irate(ifInDiscards{job='snmp',instance='%s',ifDescr='%s'}[5m]) + rate(ifOutDiscards{job='snmp',instance='%s',ifDescr='%s'}[5m]) * 8" .Labels.instance .Labels.ifDescr .Labels.instance .Labels.ifDescr) "/s" "humanizeNoSmallPrefix")}} {{ template "prom_query_drilldown" (args (printf "irate(ifInErrors{job='snmp',instance='%s',ifDescr='%s'}[5m]) + rate(ifOutErrors{job='snmp',instance='%s',ifDescr='%s'}[5m]) * 8" .Labels.instance .Labels.ifDescr .Labels.instance .Labels.ifDescr) "/s" "humanizeNoSmallPrefix")}}
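{{/* Unit note for the Speed column in the row above: ifHighSpeed reports megabits per
     second, so it is multiplied by 1e6 to obtain bits per second; the PromQL `or` falls
     back to the 32-bit ifSpeed counter, which is already in bit/s, where ifHighSpeed is
     absent. For example, an assumed ifHighSpeed of 1000 renders as 1e9 b/s. */}}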
    SNMP {{ template "prom_query_drilldown" (args "sum(up{job='snmp'})") }} / {{ template "prom_query_drilldown" (args "count(up{job='snmp'})") }}
    {{ range query "up{job='snmp'}" | sortByLabel "instance" }} {{ else }} {{ end }} {{ template "prom_content_tail" . }} {{ template "tail" }} prometheus-0.16.2+ds/documentation/000077500000000000000000000000001265137125100172515ustar00rootroot00000000000000prometheus-0.16.2+ds/documentation/examples/000077500000000000000000000000001265137125100210675ustar00rootroot00000000000000prometheus-0.16.2+ds/documentation/examples/prometheus-kubernetes.yml000066400000000000000000000100471265137125100261540ustar00rootroot00000000000000# A scrape configuration for running Prometheus on a Kubernetes cluster. # This uses separate scrape configs for cluster components (i.e. API server, node) # and services to allow each to use different authentication configs. # # Kubernetes labels will be added as Prometheus labels on metrics via the # `labelmap` relabeling action. # Scrape config for cluster components. scrape_configs: - job_name: 'kubernetes-cluster' # This TLS & bearer token file config is used to connect to the actual scrape # endpoints for cluster components. This is separate to discovery auth # configuration (`in_cluster` below) because discovery & scraping are two # separate concerns in Prometheus. tls_config: ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token kubernetes_sd_configs: - api_servers: - 'https://kubernetes.default.svc' in_cluster: true relabel_configs: - source_labels: [__meta_kubernetes_role] action: keep regex: (?:apiserver|node) - action: labelmap regex: __meta_kubernetes_node_label_(.+) - source_labels: [__meta_kubernetes_role] action: replace target_label: kubernetes_role # Scrape config for service endpoints. # # The relabeling allows the actual service scrape endpoint to be configured # via the following annotations: # # * `prometheus.io/scrape`: Only scrape services that have a value of `true` # * `prometheus.io/scheme`: If the metrics endpoint is secured then you will need # to set this to `https` & most likely set the `tls_config` of the scrape config. # * `prometheus.io/path`: If the metrics path is not `/metrics` override this. # * `prometheus.io/port`: If the metrics are exposed on a different port to the # service then set this appropriately. - job_name: 'kubernetes-service-endpoints' kubernetes_sd_configs: - api_servers: - 'https://kubernetes.default.svc' in_cluster: true relabel_configs: - source_labels: [__meta_kubernetes_role, __meta_kubernetes_service_annotation_prometheus_io_scrape] action: keep regex: endpoint;true - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scheme] action: replace target_label: __scheme__ regex: (https?) - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_path] action: replace target_label: __metrics_path__ regex: (.+) - source_labels: [__address__, __meta_kubernetes_service_annotation_prometheus_io_port] action: replace target_label: __address__ regex: (.+)(?::\d+);(\d+) replacement: $1:$2 - action: labelmap regex: __meta_kubernetes_service_label_(.+) - source_labels: [__meta_kubernetes_role] action: replace target_label: kubernetes_role - source_labels: [__meta_kubernetes_service_namespace] action: replace target_label: kubernetes_namespace - source_labels: [__meta_kubernetes_service_name] action: replace target_label: kubernetes_name # Example scrape config for probing services via the Blackbox Exporter. 
# # The relabeling allows the actual service scrape endpoint to be configured # via the following annotations: # # * `prometheus.io/probe`: Only probe services that have a value of `true` - job_name: 'kubernetes-services' metrics_path: /probe params: module: [http_2xx] kubernetes_sd_configs: - api_servers: - 'https://kubernetes.default.svc' in_cluster: true relabel_configs: - source_labels: [__meta_kubernetes_role, __meta_kubernetes_service_annotation_prometheus_io_probe] action: keep regex: service;true - source_labels: [__address__] target_label: __param_target - target_label: __address__ replacement: blackbox - source_labels: [__param_target] target_label: instance - action: labelmap regex: __meta_kubernetes_service_label_(.+) - source_labels: [__meta_kubernetes_role] target_label: kubernetes_role - source_labels: [__meta_kubernetes_service_namespace] target_label: kubernetes_namespace - source_labels: [__meta_kubernetes_service_name] target_label: kubernetes_name prometheus-0.16.2+ds/documentation/examples/prometheus.yml000066400000000000000000000020461265137125100240070ustar00rootroot00000000000000# my global config global: scrape_interval: 15s # By default, scrape targets every 15 seconds. evaluation_interval: 15s # By default, evaluate rules every 15 seconds. # scrape_timeout is set to the global default (10s). # Attach these labels to any time series or alerts when communicating with # external systems (federation, remote storage, Alertmanager). external_labels: monitor: 'codelab-monitor' # Load and evaluate rules in this file every 'evaluation_interval' seconds. rule_files: # - "first.rules" # - "second.rules" # A scrape configuration containing exactly one endpoint to scrape: # Here it's Prometheus itself. scrape_configs: # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config. - job_name: 'prometheus' # Override the global default and scrape targets from this job every 5 seconds. scrape_interval: 5s scrape_timeout: 10s # metrics_path defaults to '/metrics' # scheme defaults to 'http'. target_groups: - targets: ['localhost:9090'] prometheus-0.16.2+ds/documentation/images/000077500000000000000000000000001265137125100205165ustar00rootroot00000000000000prometheus-0.16.2+ds/documentation/images/architecture.svg000066400000000000000000000622031265137125100237240ustar00rootroot00000000000000
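A minimal Kubernetes Service manifest matching the annotation scheme described in the scrape configs above; the name, port, and label here are illustrative, not part of the original example:

apiVersion: v1
kind: Service
metadata:
  name: my-app                      # hypothetical service name
  annotations:
    prometheus.io/scrape: 'true'    # opt in to the kubernetes-service-endpoints job
    prometheus.io/port: '8080'      # scrape this port instead of the service port
    prometheus.io/path: '/metrics'  # the default, shown for completeness
spec:
  ports:
  - port: 8080
  selector:
    app: my-app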
    [architecture.svg text labels: Prometheus Server (Retrieval, Storage on HDD/SSD, PromQL); Service Discovery (DNS, Kubernetes, Consul, custom integration) to find targets; Retrieval pulls metrics from Jobs/Exporters and Nodes; short-lived jobs push to a Pushgateway; the server pushes alerts to the Alertmanager, which notifies PagerDuty, Email, etc.; Web UI, PromDash, Grafana, and API clients query via PromQL]
    prometheus-0.16.2+ds/documentation/images/architecture.xml000066400000000000000000000074771265137125100237410ustar00rootroot000000000000007R1dd+q48dfkMTkQvpLHEJK7be9u09KebR8dEOBeY1FjbnL31+9I1hh7JHNtPDYhCZzDwWMhSzOj+Za46N2vX79E3mb1q5yL4OK6M3+96E0urq9vB334VIAfCWA46CWAZeTPE1B3D5j6fwgD7Bjozp+Lba5hLGUQ+5s8cCbDUMziHGwhg/wjNt4Su98DpjMvsKG/+/N4lUBvrod7+C/CX67wMd3hbXJnG//APuZi4e2C+FKD4J66vfawLz2r3gMgLJISulHf1q/3IlBIQ4QkU38suJsOMhKhGcjhH/TNKL57wc4M0hr1y8qPxXTjzdT1C1DyojdexesArrrwdbvy5vIFLjpwsQy8rUK++r6NI/lN3MtARrqj3k1HvdM7iEbVycIPgkzLh6F6q/4ib+7DXPBeKEMYxdgMWkSxMMzkmLkGmWl/EXIt4ugHNDE/uL41XGSYrzc01y976l4jbJWh7MjAPMNky7TrPZ7hi0G1G+31sS4jL1QMSzH3+Phwfw+jKMLcEdSae9uVUANXz20A8+l6/hnmuwMG1BuZcwj1gKON+gpT8oJABL960TcBWMlxPUVXf6LeTBgaGkGIGEKey2CoayaSRRAHfgYl8GMt37w40MhbvwIHblZXi0C+zFZeFF9tIjkTit/GLt5mQBsKM1zRLqwZeZhDW48BbUYNnCPaBgRtsOhbQ9vofNE2JGhDqdYG2m4sLIk5WCvmUkbxSi5l6AUPe+g4krtwruW4EuoZHIpwfhdFWik8B3L2LQE9gl4pKevUsw8jEQwzL1oKFFNuvEYi8GL/e74rF5bMT5+kDw/ZC80OUSvIx9hFMgTzq6xNRDtC3VPU0VbuopmwOgIkemoe2GyjGmyLB9xFvUees+eBpMc9R6TIKsUktv15TkyiBSo/k1Da4qrlpi2ufPIcLtoi65wpcbXYb5y4PZQI3MRFEU6ew0ZcM/M6TpmlNMsb/mP95lGSKQlwHQyMfZnRkmimZZUkXZbHKMkug29robE9zKHWOYA5bMKOOeN+nCfm+shQp+C5Eu5TZcxZLv4jBEh4cEVc8VSyZY1YhGVx1efA1Rkb/+hppxrARhtKPm7bv2uM/7P0mfJoQy5qBW285vCrH/9Hga8G5uq/mTtPIvJhgCpmpH/Yin2VGDdZHmE3sKjHkoqGn7hYR9hAaD9zGbjw4Ay91GVKsLaN30RH8Fu/VO3RQDWX9YvqlDyHz7Wxzd/NTjdRvfgzGNswAFyO5/53+LpUXxH0HFEIPM7R7qifEn4EMajgGTaLxNb/w3vWDRQHGlRC68H4YqA5pjf2An8ZAmwGPKKjykqi+pBoujM31v58rtk58J5FMPZm35aasUkkP8/s5c37lHtLZGEerx8P5BIWMoxNVg5cluS6UkTcJOYMxtJkVW4BZdJiLo1y2bnqYyjcMOSl6f3Y9YRN5GKxhTVbT471mvDlsvQlNBuM1JvJkO6bnxwKqLv0NUceoufy3pJ1qfhMcw/Oevj/nUqYwpx7C/3KgpLV+8sE2sGwHuFzOlW/wSWe9FZ6jefXE66EDBEMqPwyd5FfQmtllwFkBe0E/KB4iXOQGngmnwSwfaahaZKlNK7YWpS2U3JPu+1q6cXiRWmmD0qSdGWhseWyltEgY6eJ7cdOV2B0XQYgUOdw43/yWdsIH5I0w7wF1LfXCvrV7HSx84x/1ZQwou3hdQNUEtHHpc2IpuVs4jh1Fgt17IjGNIZKDfAPPio50rQoeguOMhtXrQcLOexIyT+VByEA8HEJQiPyDoK4oqMsBDExmKymj+T6H18/rt1FnfYWqYHWvcPC1m5vhh5oTSuyXBoM30GLQCyU7Uxs7dSjdvYR+KG4xLmoTrpX6rmkD8UXIl6JndJtUxEBljM2O/Rc01V/p4ubprc7Du3nKkphqaxzZY1STkgo8xvUALvp+CmOCx2vpozJfolaSLbiX2C7Ccy1Sn6rueLfIQ1ku3wr1zrB9VUL7Qw5wgaifbe3UDh8INrHUddKtJ0L6yiJcoW/SK5aaLddJ6VXfEAxsKa/namlDz02GSB+nzIsrUMo9oedSWQWGWZcrjMthLptpsyth2FnWgLxk1zQTzvq0qRSQcavalKpaMB8SSWnceKt1XoJn7dJfhvm1ipo4YcQS6PNmEzwE8+tHChhnzSj95lxewMZt0SkFkp7SKHnQ2x1820oAnKdEnnFkovDcbdeOdNgAVdqL6HudeXoXMqXo6YG92u+K4TSGiXXdpjGEOpIhbkCIW/Y0CwpJDkoRb0Jh+vcWIJsYLyJs7E7m6+2p05aWRsz3cxARRizjUmfgwOuOi4umxRT7ue6i0sL/wbYKm8I9OhqZSpjS6UAeQ4XdXF1cNeb3mbqTffVp7Ti9JANCuoVOQU3hSuYYRVSr2pa7FlpL7yrFqVq3cbv6tJ8FiZ8uT3Uguew8UuJQjGbLhkWyW3TVxdPXgwMoZQ4dN3pu7iio1+WZwI4qKFXMurXFcxDWO0QR74II72uHuIg9axoRrCHOMiA8+qnPgOV8G4cSuLQmQYHDuzI65ZMffX1jTLE0wrrztVQe6oaQKXUkfXTBYqnJb7D6GZds4emfq2OmPiuTxUdN9+VcAJb4LvBSFW9ZvhuMGLmu+R4iVPxXb/Aeq3MdwOacaQGFhffUfOZeZfs0PaV7wLwJ9deCAVXn04yagWS6XQUmDRWD4d1ko1VmGw3XlijyGQDZcZqioptkhpKtki3O6gPCEuGXHqPyqlRdDD52fZcKmdgGTwu/J4oFYhLH95NVtG9quqq2wrxcMriOE/NKPCc8qvrvZWWGXaa/nehKsn+/ZeqVRLN52vaKFWhJ4m4AvyuKCfHFm3c8kHi0RPwLj+JkTXCWiGGXc4M52ouwNx5k7SAV0NIR4u6FaTbJct3TyCJOrNAsZi9z+J9o951yFlTqEdBxxxBteOnqM5Tf9Hc66kDSZwOZOta/jg9jro0q8gLnNhq3mmxL1gjPY87d5qmN0cEvGwktQbtmjkja0CPFaKn3jB5/0MaZTDp8aKoBG3PfWDayESpzpK5+AVDM+eDnIy58pnZ9pnLhKI+JdfeZGJnLovotBa/IebCOCYbs9Qv/z/WiCx/AHtqRDbnSw2IbenYDOA6347l1GE7xvCkYsaTXfy2zxtoL25sUafc6YMcUeOR7XJ9yAVCNym1uECw349OghHVBTYJEMROArse5GHt+W9743qLeS1yBq9DPrkowyGfbpux9nInCSaWXJpmAJvh1HmGI2u+DK5yhz/XtQ9LE8r1pwxJ3mgH7fKZpMCnkEPHTPXn6m1nqia/TQuPlkrB9qN4Hv633TPUe0GYKrufpuUx3Mtwq5F7oudfXV2dcPK7bSzXigNBoIJWin2p8qOVRwPALIO6BX6hsK4hmBvcunQ4T5ncldFcRORORlSqif8dl
JAPVnpvoo0E/bsnufU1qh2q7CtpkGIJlZ/OOOs0+cwPl3ioHtlnZe7+S4Lim1yqQiMD+ap/PbmEs4xT2FjGigsAqmouK2/ROn47tSsz6KpIov9ncZRs5S1eL6MET68Dq22/Y9GVLGfFVg6BEMbKh8vssB/ZCcIcXbvl/bebQlZ7l+zEHoYbkcBAQ5t1KrIgHVXF5uxHGBcfXlq2EirS8rrBsq7TbwYPZewv0hM3StaTfW4JP8GW8EQEHzqEeXAD2fI3ewpzWuGaWZDaUzj/WEpt+teIhjUWc4HL/X/gJvTe/4Nw7+FPprometheus-0.16.2+ds/documentation/images/diagram_note.md000066400000000000000000000006041265137125100234710ustar00rootroot00000000000000The architecture image was drawn on https://www.draw.io/. The native draw.io source file is called `architecture.xml`, while `architecture.svg` is its SVG export. To change the architecture diagram, go to https://www.draw.io/ and import the XML source file. After making changes to the diagram, export the result as SVG. Update both the source file and the SVG export in this directory. prometheus-0.16.2+ds/notification/000077500000000000000000000000001265137125100170665ustar00rootroot00000000000000prometheus-0.16.2+ds/notification/notification.go000066400000000000000000000171531265137125100221120ustar00rootroot00000000000000// Copyright 2013 The Prometheus Authors // Licensed under the Apache License, Version 2.0 (the "License"); // you may not use this file except in compliance with the License. // You may obtain a copy of the License at // // http://www.apache.org/licenses/LICENSE-2.0 // // Unless required by applicable law or agreed to in writing, software // distributed under the License is distributed on an "AS IS" BASIS, // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. // See the License for the specific language governing permissions and // limitations under the License. package notification import ( "bytes" "encoding/json" "io" "io/ioutil" "net/http" "strings" "sync" "time" "github.com/prometheus/client_golang/prometheus" "github.com/prometheus/common/log" "github.com/prometheus/common/model" "github.com/prometheus/prometheus/config" "github.com/prometheus/prometheus/util/httputil" ) const ( alertmanagerAPIEventsPath = "/api/alerts" contentTypeJSON = "application/json" ) // String constants for instrumentation. const ( namespace = "prometheus" subsystem = "notifications" ) // NotificationReq is a request for sending a notification to the alert manager // for a single alert vector element. type NotificationReq struct { // Short-form alert summary. May contain text/template-style interpolations. Summary string // Longer alert description. May contain text/template-style interpolations. Description string // A reference to the runbook for the alert. Runbook string // Labels associated with this alert notification, including alert name. Labels model.LabelSet // Current value of alert Value model.SampleValue // Since when this alert has been active (pending or firing). ActiveSince time.Time // A textual representation of the rule that triggered the alert. RuleString string // Prometheus console link to alert expression. GeneratorURL string } // NotificationReqs is just a short-hand for []*NotificationReq. No methods // attached. Arguably, it's more confusing than helpful. Perhaps we should // remove it... type NotificationReqs []*NotificationReq type httpPoster interface { Post(url string, bodyType string, body io.Reader) (*http.Response, error) } // NotificationHandler is responsible for dispatching alert notifications to an // alert manager service. type NotificationHandler struct { // The URL of the alert manager to send notifications to. alertmanagerURL string // Buffer of notifications that have not yet been sent. 
pendingNotifications chan NotificationReqs // HTTP client with custom timeout settings. httpClient httpPoster notificationLatency prometheus.Summary notificationErrors prometheus.Counter notificationDropped prometheus.Counter notificationsQueueLength prometheus.Gauge notificationsQueueCapacity prometheus.Metric externalLabels model.LabelSet mtx sync.RWMutex stopped chan struct{} } // NotificationHandlerOptions are the configurable parameters of a NotificationHandler. type NotificationHandlerOptions struct { AlertmanagerURL string QueueCapacity int Deadline time.Duration } // NewNotificationHandler constructs a new NotificationHandler. func NewNotificationHandler(o *NotificationHandlerOptions) *NotificationHandler { return &NotificationHandler{ alertmanagerURL: strings.TrimRight(o.AlertmanagerURL, "/"), pendingNotifications: make(chan NotificationReqs, o.QueueCapacity), httpClient: httputil.NewDeadlineClient(o.Deadline, nil), notificationLatency: prometheus.NewSummary(prometheus.SummaryOpts{ Namespace: namespace, Subsystem: subsystem, Name: "latency_milliseconds", Help: "Latency quantiles for sending alert notifications (not including dropped notifications).", }), notificationErrors: prometheus.NewCounter(prometheus.CounterOpts{ Namespace: namespace, Subsystem: subsystem, Name: "errors_total", Help: "Total number of errors sending alert notifications.", }), notificationDropped: prometheus.NewCounter(prometheus.CounterOpts{ Namespace: namespace, Subsystem: subsystem, Name: "dropped_total", Help: "Total number of alert notifications dropped due to alert manager missing in configuration.", }), notificationsQueueLength: prometheus.NewGauge(prometheus.GaugeOpts{ Namespace: namespace, Subsystem: subsystem, Name: "queue_length", Help: "The number of alert notifications in the queue.", }), notificationsQueueCapacity: prometheus.MustNewConstMetric( prometheus.NewDesc( prometheus.BuildFQName(namespace, subsystem, "queue_capacity"), "The capacity of the alert notifications queue.", nil, nil, ), prometheus.GaugeValue, float64(o.QueueCapacity), ), stopped: make(chan struct{}), } } // ApplyConfig updates the status state as the new config requires. // Returns true on success. func (n *NotificationHandler) ApplyConfig(conf *config.Config) bool { n.mtx.Lock() defer n.mtx.Unlock() n.externalLabels = conf.GlobalConfig.ExternalLabels return true } // Send a list of notifications to the configured alert manager. func (n *NotificationHandler) sendNotifications(reqs NotificationReqs) error { n.mtx.RLock() defer n.mtx.RUnlock() alerts := make([]map[string]interface{}, 0, len(reqs)) for _, req := range reqs { for ln, lv := range n.externalLabels { if _, ok := req.Labels[ln]; !ok { req.Labels[ln] = lv } } alerts = append(alerts, map[string]interface{}{ "summary": req.Summary, "description": req.Description, "runbook": req.Runbook, "labels": req.Labels, "payload": map[string]interface{}{ "value": req.Value, "activeSince": req.ActiveSince, "generatorURL": req.GeneratorURL, "alertingRule": req.RuleString, }, }) } buf, err := json.Marshal(alerts) if err != nil { return err } log.Debugln("Sending notifications to alertmanager:", string(buf)) resp, err := n.httpClient.Post( n.alertmanagerURL+alertmanagerAPIEventsPath, contentTypeJSON, bytes.NewBuffer(buf), ) if err != nil { return err } defer resp.Body.Close() _, err = ioutil.ReadAll(resp.Body) if err != nil { return err } // BUG: Do we need to check the response code? return nil } // Run dispatches notifications continuously. 
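//
// A typical wiring of the handler (a sketch; the types and methods are the
// ones defined in this file, while the Alertmanager address, capacity, and
// label values are placeholders):
//
//	h := NewNotificationHandler(&NotificationHandlerOptions{
//		AlertmanagerURL: "http://localhost:9093",
//		QueueCapacity:   100,
//		Deadline:        10 * time.Second,
//	})
//	go h.Run()
//	h.SubmitReqs(NotificationReqs{{
//		Summary: "Something is on fire",
//		Labels:  model.LabelSet{"alertname": "SomethingOnFire"},
//	}})
//	h.Stop() // closes the queue, lets Run drain it, and waits for it to exit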
func (n *NotificationHandler) Run() { for reqs := range n.pendingNotifications { if n.alertmanagerURL == "" { log.Warn("No alert manager configured, not dispatching notification") n.notificationDropped.Inc() continue } begin := time.Now() err := n.sendNotifications(reqs) if err != nil { log.Error("Error sending notification: ", err) n.notificationErrors.Inc() } n.notificationLatency.Observe(float64(time.Since(begin) / time.Millisecond)) } close(n.stopped) } // SubmitReqs queues the given notification requests for processing. func (n *NotificationHandler) SubmitReqs(reqs NotificationReqs) { n.pendingNotifications <- reqs } // Stop shuts down the notification handler. func (n *NotificationHandler) Stop() { log.Info("Stopping notification handler...") close(n.pendingNotifications) <-n.stopped log.Info("Notification handler stopped.") } // Describe implements prometheus.Collector. func (n *NotificationHandler) Describe(ch chan<- *prometheus.Desc) { n.notificationLatency.Describe(ch) ch <- n.notificationsQueueLength.Desc() ch <- n.notificationsQueueCapacity.Desc() } // Collect implements prometheus.Collector. func (n *NotificationHandler) Collect(ch chan<- prometheus.Metric) { n.notificationLatency.Collect(ch) n.notificationsQueueLength.Set(float64(len(n.pendingNotifications))) ch <- n.notificationsQueueLength ch <- n.notificationsQueueCapacity } prometheus-0.16.2+ds/notification/notification_test.go000066400000000000000000000050611265137125100231440ustar00rootroot00000000000000// Copyright 2013 The Prometheus Authors // Licensed under the Apache License, Version 2.0 (the "License"); // you may not use this file except in compliance with the License. // You may obtain a copy of the License at // // http://www.apache.org/licenses/LICENSE-2.0 // // Unless required by applicable law or agreed to in writing, software // distributed under the License is distributed on an "AS IS" BASIS, // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. // See the License for the specific language governing permissions and // limitations under the License. package notification import ( "bytes" "io" "io/ioutil" "net/http" "testing" "time" "github.com/prometheus/common/model" ) type testHTTPPoster struct { message string receivedPost chan<- bool } func (p *testHTTPPoster) Post(url string, bodyType string, body io.Reader) (*http.Response, error) { var buf bytes.Buffer buf.ReadFrom(body) p.message = buf.String() p.receivedPost <- true return &http.Response{ Body: ioutil.NopCloser(&bytes.Buffer{}), }, nil } type testNotificationScenario struct { description string summary string message string runbook string } func (s *testNotificationScenario) test(i int, t *testing.T) { h := NewNotificationHandler(&NotificationHandlerOptions{ AlertmanagerURL: "alertmanager_url", QueueCapacity: 0, Deadline: 10 * time.Second, }) defer h.Stop() receivedPost := make(chan bool, 1) poster := testHTTPPoster{receivedPost: receivedPost} h.httpClient = &poster go h.Run() h.SubmitReqs(NotificationReqs{ { Summary: s.summary, Description: s.description, Runbook: s.runbook, Labels: model.LabelSet{ model.LabelName("instance"): model.LabelValue("testinstance"), }, Value: model.SampleValue(1.0 / 3.0), ActiveSince: time.Time{}, RuleString: "Test rule string", GeneratorURL: "prometheus_url", }, }) <-receivedPost if poster.message != s.message { t.Fatalf("%d. Expected '%s', received '%s'", i, s.message, poster.message) } } func TestNotificationHandler(t *testing.T) { scenarios := []testNotificationScenario{ { // Correct message. 
summary: "Summary", description: "Description", runbook: "Runbook", message: `[{"description":"Description","labels":{"instance":"testinstance"},"payload":{"activeSince":"0001-01-01T00:00:00Z","alertingRule":"Test rule string","generatorURL":"prometheus_url","value":"0.3333333333333333"},"runbook":"Runbook","summary":"Summary"}]`, }, } for i, s := range scenarios { s.test(i, t) } } prometheus-0.16.2+ds/promql/000077500000000000000000000000001265137125100157125ustar00rootroot00000000000000prometheus-0.16.2+ds/promql/analyzer.go000066400000000000000000000125421265137125100200720ustar00rootroot00000000000000// Copyright 2013 The Prometheus Authors // Licensed under the Apache License, Version 2.0 (the "License"); // you may not use this file except in compliance with the License. // You may obtain a copy of the License at // // http://www.apache.org/licenses/LICENSE-2.0 // // Unless required by applicable law or agreed to in writing, software // distributed under the License is distributed on an "AS IS" BASIS, // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. // See the License for the specific language governing permissions and // limitations under the License. package promql import ( "errors" "time" "github.com/prometheus/common/model" "golang.org/x/net/context" "github.com/prometheus/prometheus/storage/local" ) // An Analyzer traverses an expression and determines which data has to be requested // from the storage. It is bound to a context that allows cancellation and timing out. type Analyzer struct { // The storage from which to query data. Storage local.Storage // The expression being analyzed. Expr Expr // The time range for evaluation of Expr. Start, End model.Time // The preload times for different query time offsets. offsetPreloadTimes map[time.Duration]preloadTimes } // preloadTimes tracks which instants or ranges to preload for a set of // fingerprints. One of these structs is collected for each offset by the query // analyzer. type preloadTimes struct { // Instants require single samples to be loaded along the entire query // range, with intervals between the samples corresponding to the query // resolution. instants map[model.Fingerprint]struct{} // Ranges require loading a range of samples at each resolution step, // stretching backwards from the current evaluation timestamp. The length of // the range into the past is given by the duration, as in "foo[5m]". ranges map[model.Fingerprint]time.Duration } // Analyze the provided expression and attach metrics and fingerprints to data-selecting // AST nodes that are later used to preload the data from the storage. func (a *Analyzer) Analyze(ctx context.Context) error { a.offsetPreloadTimes = map[time.Duration]preloadTimes{} getPreloadTimes := func(offset time.Duration) preloadTimes { if _, ok := a.offsetPreloadTimes[offset]; !ok { a.offsetPreloadTimes[offset] = preloadTimes{ instants: map[model.Fingerprint]struct{}{}, ranges: map[model.Fingerprint]time.Duration{}, } } return a.offsetPreloadTimes[offset] } // Retrieve fingerprints and metrics for the required time range for // each metric or matrix selector node. Inspect(a.Expr, func(node Node) bool { switch n := node.(type) { case *VectorSelector: n.metrics = a.Storage.MetricsForLabelMatchers(n.LabelMatchers...) n.iterators = make(map[model.Fingerprint]local.SeriesIterator, len(n.metrics)) pt := getPreloadTimes(n.Offset) for fp := range n.metrics { // Only add the fingerprint to the instants if not yet present in the // ranges. 
Ranges always contain more points and span more time than // instants for the same offset. if _, alreadyInRanges := pt.ranges[fp]; !alreadyInRanges { pt.instants[fp] = struct{}{} } } case *MatrixSelector: n.metrics = a.Storage.MetricsForLabelMatchers(n.LabelMatchers...) n.iterators = make(map[model.Fingerprint]local.SeriesIterator, len(n.metrics)) pt := getPreloadTimes(n.Offset) for fp := range n.metrics { if pt.ranges[fp] < n.Range { pt.ranges[fp] = n.Range // Delete the fingerprint from the instants. Ranges always contain more // points and span more time than instants, so we don't need to track // an instant for the same fingerprint, should we have one. delete(pt.instants, fp) } } } return true }) // Currently we do not return an error but we might place a context check in here // or extend the stage in some other way. return nil } // Prepare the expression evaluation by preloading all required chunks from the storage // and setting the respective storage iterators in the AST nodes. func (a *Analyzer) Prepare(ctx context.Context) (local.Preloader, error) { const env = "query preparation" if a.offsetPreloadTimes == nil { return nil, errors.New("analysis must be performed before preparing query") } var err error // The preloader must not be closed unless an error occurred as closing // unpins the preloaded chunks. p := a.Storage.NewPreloader() defer func() { if err != nil { p.Close() } }() // Preload all analyzed ranges. for offset, pt := range a.offsetPreloadTimes { start := a.Start.Add(-offset) end := a.End.Add(-offset) for fp, rangeDuration := range pt.ranges { if err = contextDone(ctx, env); err != nil { return nil, err } err = p.PreloadRange(fp, start.Add(-rangeDuration), end, StalenessDelta) if err != nil { return nil, err } } for fp := range pt.instants { if err = contextDone(ctx, env); err != nil { return nil, err } err = p.PreloadRange(fp, start, end, StalenessDelta) if err != nil { return nil, err } } } // Attach storage iterators to AST nodes. Inspect(a.Expr, func(node Node) bool { switch n := node.(type) { case *VectorSelector: for fp := range n.metrics { n.iterators[fp] = a.Storage.NewIterator(fp) } case *MatrixSelector: for fp := range n.metrics { n.iterators[fp] = a.Storage.NewIterator(fp) } } return true }) return p, nil }
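// The instants/ranges bookkeeping in Analyze follows a single rule: a range
// subsumes an instant for the same fingerprint and offset. A minimal sketch
// of that rule extracted into a standalone helper (hypothetical; not part of
// this package):
func mergePreload(pt preloadTimes, fp model.Fingerprint, rng time.Duration) {
	if rng > 0 {
		// Matrix selector: keep the widest requested range, drop any instant.
		if pt.ranges[fp] < rng {
			pt.ranges[fp] = rng
		}
		delete(pt.instants, fp)
		return
	}
	// Vector selector: an instant is only needed if no range covers fp yet.
	if _, ok := pt.ranges[fp]; !ok {
		pt.instants[fp] = struct{}{}
	}
}

// So for a query like `rate(foo[5m]) + foo`, each matching fingerprint ends
// up only in ranges (with a 5m duration), never in instants as well.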
prometheus-0.16.2+ds/promql/ast.go000066400000000000000000000212541265137125100170340ustar00rootroot00000000000000// Copyright 2015 The Prometheus Authors // Licensed under the Apache License, Version 2.0 (the "License"); // you may not use this file except in compliance with the License. // You may obtain a copy of the License at // // http://www.apache.org/licenses/LICENSE-2.0 // // Unless required by applicable law or agreed to in writing, software // distributed under the License is distributed on an "AS IS" BASIS, // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. // See the License for the specific language governing permissions and // limitations under the License. package promql import ( "fmt" "time" "github.com/prometheus/common/model" "github.com/prometheus/prometheus/storage/local" "github.com/prometheus/prometheus/storage/metric" ) // Node is a generic interface for all nodes in an AST. // // Whenever numerous nodes are listed such as in a switch-case statement // or a chain of function definitions (e.g. String(), expr(), etc.) convention is // to list them as follows: // // - Statements // - statement types (alphabetical) // - ... // - Expressions // - expression types (alphabetical) // - ... // type Node interface { // String representation of the node that returns the given node when parsed // as part of a valid query. String() string } // Statement is a generic interface for all statements. type Statement interface { Node // stmt ensures that no other type accidentally implements the interface stmt() } // Statements is a list of statement nodes that implements Node. type Statements []Statement // AlertStmt represents an added alert rule. type AlertStmt struct { Name string Expr Expr Duration time.Duration Labels model.LabelSet Summary string Description string Runbook string } // EvalStmt holds an expression and information on the range it should // be evaluated on. type EvalStmt struct { Expr Expr // Expression to be evaluated. // The time boundaries for the evaluation. If Start equals End an instant // is evaluated. Start, End model.Time // Time between two evaluated instants for the range [Start:End]. Interval time.Duration } // RecordStmt represents an added recording rule. type RecordStmt struct { Name string Expr Expr Labels model.LabelSet } func (*AlertStmt) stmt() {} func (*EvalStmt) stmt() {} func (*RecordStmt) stmt() {} // Expr is a generic interface for all expression types. type Expr interface { Node // Type returns the type the expression evaluates to. It does not perform // in-depth checks as this is done at parsing-time. Type() model.ValueType // expr ensures that no other types accidentally implement the interface. expr() } // Expressions is a list of expression nodes that implements Node. type Expressions []Expr // AggregateExpr represents an aggregation operation on a vector. type AggregateExpr struct { Op itemType // The used aggregation operation. Expr Expr // The vector expression over which is aggregated. Grouping model.LabelNames // The labels by which to group the vector. KeepExtraLabels bool // Whether to keep extra labels common among result elements. } // BinaryExpr represents a binary expression between two child expressions. type BinaryExpr struct { Op itemType // The operation of the expression. LHS, RHS Expr // The operands on the respective sides of the operator. // The matching behavior for the operation if both operands are vectors. // If they are not this field is nil. VectorMatching *VectorMatching // If a comparison operator, return 0/1 rather than filtering. ReturnBool bool } // Call represents a function call. type Call struct { Func *Function // The function that was called. Args Expressions // Arguments used in the call. } // MatrixSelector represents a matrix selection. type MatrixSelector struct { Name string Range time.Duration Offset time.Duration LabelMatchers metric.LabelMatchers // The series iterators are populated at query analysis time. iterators map[model.Fingerprint]local.SeriesIterator metrics map[model.Fingerprint]metric.Metric } // NumberLiteral represents a number. type NumberLiteral struct { Val model.SampleValue } // ParenExpr wraps an expression so it cannot be disassembled as a consequence // of operator precedence. type ParenExpr struct { Expr Expr } // StringLiteral represents a string. type StringLiteral struct { Val string } // UnaryExpr represents a unary operation on another expression. // Currently unary operations are only supported for scalars. type UnaryExpr struct { Op itemType Expr Expr } // VectorSelector represents a vector selection. type VectorSelector struct { Name string Offset time.Duration LabelMatchers metric.LabelMatchers // The series iterators are populated at query analysis time.
iterators map[model.Fingerprint]local.SeriesIterator metrics map[model.Fingerprint]metric.Metric } func (e *AggregateExpr) Type() model.ValueType { return model.ValVector } func (e *Call) Type() model.ValueType { return e.Func.ReturnType } func (e *MatrixSelector) Type() model.ValueType { return model.ValMatrix } func (e *NumberLiteral) Type() model.ValueType { return model.ValScalar } func (e *ParenExpr) Type() model.ValueType { return e.Expr.Type() } func (e *StringLiteral) Type() model.ValueType { return model.ValString } func (e *UnaryExpr) Type() model.ValueType { return e.Expr.Type() } func (e *VectorSelector) Type() model.ValueType { return model.ValVector } func (e *BinaryExpr) Type() model.ValueType { if e.LHS.Type() == model.ValScalar && e.RHS.Type() == model.ValScalar { return model.ValScalar } return model.ValVector } func (*AggregateExpr) expr() {} func (*BinaryExpr) expr() {} func (*Call) expr() {} func (*MatrixSelector) expr() {} func (*NumberLiteral) expr() {} func (*ParenExpr) expr() {} func (*StringLiteral) expr() {} func (*UnaryExpr) expr() {} func (*VectorSelector) expr() {} // VectorMatchCardinality describes the cardinality relationship // of two vectors in a binary operation. type VectorMatchCardinality int const ( CardOneToOne VectorMatchCardinality = iota CardManyToOne CardOneToMany CardManyToMany ) func (vmc VectorMatchCardinality) String() string { switch vmc { case CardOneToOne: return "one-to-one" case CardManyToOne: return "many-to-one" case CardOneToMany: return "one-to-many" case CardManyToMany: return "many-to-many" } panic("promql.VectorMatchCardinality.String: unknown match cardinality") } // VectorMatching describes how elements from two vectors in a binary // operation are supposed to be matched. type VectorMatching struct { // The cardinality of the two vectors. Card VectorMatchCardinality // On contains the labels which define equality of a pair // of elements from the vectors. On model.LabelNames // Include contains additional labels that should be included in // the result from the side with the higher cardinality. Include model.LabelNames } // Visitor allows visiting a Node and its child nodes. The Visit method is // invoked for each node encountered by Walk. If the result visitor w is not // nil, Walk visits each of the children of node with the visitor w, followed // by a call of w.Visit(nil). type Visitor interface { Visit(node Node) (w Visitor) } // Walk traverses an AST in depth-first order: It starts by calling // v.Visit(node); node must not be nil. If the visitor w returned by // v.Visit(node) is not nil, Walk is invoked recursively with visitor // w for each of the non-nil children of node, followed by a call of // w.Visit(nil). 
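//
// A typical traversal uses the Inspect convenience wrapper defined further
// below. For instance, collecting all vector selectors of a parsed expression
// (a sketch; `expr` and `selectors` are placeholder names):
//
//	var selectors []*VectorSelector
//	Inspect(expr, func(n Node) bool {
//		if vs, ok := n.(*VectorSelector); ok {
//			selectors = append(selectors, vs)
//		}
//		return true
//	})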
func Walk(v Visitor, node Node) { if v = v.Visit(node); v == nil { return } switch n := node.(type) { case Statements: for _, s := range n { Walk(v, s) } case *AlertStmt: Walk(v, n.Expr) case *EvalStmt: Walk(v, n.Expr) case *RecordStmt: Walk(v, n.Expr) case Expressions: for _, e := range n { Walk(v, e) } case *AggregateExpr: Walk(v, n.Expr) case *BinaryExpr: Walk(v, n.LHS) Walk(v, n.RHS) case *Call: Walk(v, n.Args) case *ParenExpr: Walk(v, n.Expr) case *UnaryExpr: Walk(v, n.Expr) case *MatrixSelector, *NumberLiteral, *StringLiteral, *VectorSelector: // nothing to do default: panic(fmt.Errorf("promql.Walk: unhandled node type %T", node)) } v.Visit(nil) } type inspector func(Node) bool func (f inspector) Visit(node Node) Visitor { if f(node) { return f } return nil } // Inspect traverses an AST in depth-first order: It starts by calling // f(node); node must not be nil. If f returns true, Inspect invokes f // for all the non-nil children of node, recursively. func Inspect(node Node, f func(Node) bool) { Walk(inspector(f), node) } prometheus-0.16.2+ds/promql/engine.go000066400000000000000000001003571265137125100175140ustar00rootroot00000000000000// Copyright 2013 The Prometheus Authors // Licensed under the Apache License, Version 2.0 (the "License"); // you may not use this file except in compliance with the License. // You may obtain a copy of the License at // // http://www.apache.org/licenses/LICENSE-2.0 // // Unless required by applicable law or agreed to in writing, software // distributed under the License is distributed on an "AS IS" BASIS, // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. // See the License for the specific language governing permissions and // limitations under the License. package promql import ( "fmt" "math" "runtime" "sort" "time" "github.com/prometheus/common/log" "github.com/prometheus/common/model" "golang.org/x/net/context" "github.com/prometheus/prometheus/storage/local" "github.com/prometheus/prometheus/storage/metric" "github.com/prometheus/prometheus/util/stats" ) // sampleStream is a stream of Values belonging to an attached COWMetric. type sampleStream struct { Metric metric.Metric Values []model.SamplePair } // sample is a single sample belonging to a COWMetric. type sample struct { Metric metric.Metric Value model.SampleValue Timestamp model.Time } // vector is basically only an alias for model.Samples, but the // contract is that in a Vector, all Samples have the same timestamp. type vector []*sample func (vector) Type() model.ValueType { return model.ValVector } func (vec vector) String() string { return vec.value().String() } func (vec vector) value() model.Vector { val := make(model.Vector, len(vec)) for i, s := range vec { val[i] = &model.Sample{ Metric: s.Metric.Copy().Metric, Value: s.Value, Timestamp: s.Timestamp, } } return val } // matrix is a slice of SampleStreams that implements sort.Interface and // has a String method. type matrix []*sampleStream func (matrix) Type() model.ValueType { return model.ValMatrix } func (mat matrix) String() string { return mat.value().String() } func (mat matrix) value() model.Matrix { val := make(model.Matrix, len(mat)) for i, ss := range mat { val[i] = &model.SampleStream{ Metric: ss.Metric.Copy().Metric, Values: ss.Values, } } return val } // Result holds the resulting value of an execution or an error // if any occurred. type Result struct { Err error Value model.Value } // Vector returns a vector if the result value is one. 
An error is returned if // the result was an error or the result value is not a vector. func (r *Result) Vector() (model.Vector, error) { if r.Err != nil { return nil, r.Err } v, ok := r.Value.(model.Vector) if !ok { return nil, fmt.Errorf("query result is not a vector") } return v, nil } // Matrix returns a matrix. An error is returned if // the result was an error or the result value is not a matrix. func (r *Result) Matrix() (model.Matrix, error) { if r.Err != nil { return nil, r.Err } v, ok := r.Value.(model.Matrix) if !ok { return nil, fmt.Errorf("query result is not a matrix") } return v, nil } // Scalar returns a scalar value. An error is returned if // the result was an error or the result value is not a scalar. func (r *Result) Scalar() (*model.Scalar, error) { if r.Err != nil { return nil, r.Err } v, ok := r.Value.(*model.Scalar) if !ok { return nil, fmt.Errorf("query result is not a scalar") } return v, nil } func (r *Result) String() string { if r.Err != nil { return r.Err.Error() } if r.Value == nil { return "" } return r.Value.String() } type ( // ErrQueryTimeout is returned if a query timed out during processing. ErrQueryTimeout string // ErrQueryCanceled is returned if a query was canceled during processing. ErrQueryCanceled string ) func (e ErrQueryTimeout) Error() string { return fmt.Sprintf("query timed out in %s", string(e)) } func (e ErrQueryCanceled) Error() string { return fmt.Sprintf("query was canceled in %s", string(e)) } // A Query is derived from a raw query string and can be run against an engine // it is associated with. type Query interface { // Exec processes the query and returns the result. Exec() *Result // Statement returns the parsed statement of the query. Statement() Statement // Stats returns statistics about the lifetime of the query. Stats() *stats.TimerGroup // Cancel signals that a running query execution should be aborted. Cancel() } // query implements the Query interface. type query struct { // The original query string. q string // Statement of the parsed query. stmt Statement // Timer stats for the query execution. stats *stats.TimerGroup // Cancelation function for the query. cancel func() // The engine against which the query is executed. ng *Engine } // Statement implements the Query interface. func (q *query) Statement() Statement { return q.stmt } // Stats implements the Query interface. func (q *query) Stats() *stats.TimerGroup { return q.stats } // Cancel implements the Query interface. func (q *query) Cancel() { if q.cancel != nil { q.cancel() } } // Exec implements the Query interface. func (q *query) Exec() *Result { res, err := q.ng.exec(q) return &Result{Err: err, Value: res} } // contextDone returns an error if the context was canceled or timed out. func contextDone(ctx context.Context, env string) error { select { case <-ctx.Done(): err := ctx.Err() switch err { case context.Canceled: return ErrQueryCanceled(env) case context.DeadlineExceeded: return ErrQueryTimeout(env) default: return err } default: return nil } } // Engine handles the lifetime of queries from beginning to end. // It is connected to a storage. type Engine struct { // The storage on which the engine operates. storage local.Storage // The base context for all queries and its cancellation function. baseCtx context.Context cancelQueries func() // The gate limiting the maximum number of concurrent and waiting queries. gate *queryGate options *EngineOptions } // NewEngine returns a new engine.
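// Passing a nil *EngineOptions falls back to DefaultEngineOptions. A minimal
// end-to-end sketch (assuming `storage` is an initialized local.Storage; the
// metric name is only a placeholder):
//
//	eng := NewEngine(storage, nil)
//	defer eng.Stop()
//	qry, err := eng.NewInstantQuery(`sum(rate(http_requests_total[5m]))`, model.Now())
//	if err != nil {
//		// handle parse error
//	}
//	res := qry.Exec()
//	vec, err := res.Vector()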
func NewEngine(storage local.Storage, o *EngineOptions) *Engine { if o == nil { o = DefaultEngineOptions } ctx, cancel := context.WithCancel(context.Background()) return &Engine{ storage: storage, baseCtx: ctx, cancelQueries: cancel, gate: newQueryGate(o.MaxConcurrentQueries), options: o, } } // EngineOptions contains configuration parameters for an Engine. type EngineOptions struct { MaxConcurrentQueries int Timeout time.Duration } // DefaultEngineOptions are the default engine options. var DefaultEngineOptions = &EngineOptions{ MaxConcurrentQueries: 20, Timeout: 2 * time.Minute, } // Stop the engine and cancel all running queries. func (ng *Engine) Stop() { ng.cancelQueries() } // NewInstantQuery returns an evaluation query for the given expression at the given time. func (ng *Engine) NewInstantQuery(qs string, ts model.Time) (Query, error) { expr, err := ParseExpr(qs) if err != nil { return nil, err } qry := ng.newQuery(expr, ts, ts, 0) qry.q = qs return qry, nil } // NewRangeQuery returns an evaluation query for the given time range and with // the resolution set by the interval. func (ng *Engine) NewRangeQuery(qs string, start, end model.Time, interval time.Duration) (Query, error) { expr, err := ParseExpr(qs) if err != nil { return nil, err } if expr.Type() != model.ValVector && expr.Type() != model.ValScalar { return nil, fmt.Errorf("invalid expression type %q for range query, must be scalar or vector", expr.Type()) } qry := ng.newQuery(expr, start, end, interval) qry.q = qs return qry, nil } func (ng *Engine) newQuery(expr Expr, start, end model.Time, interval time.Duration) *query { es := &EvalStmt{ Expr: expr, Start: start, End: end, Interval: interval, } qry := &query{ stmt: es, ng: ng, stats: stats.NewTimerGroup(), } return qry } // testStmt is an internal helper statement that allows execution // of an arbitrary function during handling. It is used to test the Engine. type testStmt func(context.Context) error func (testStmt) String() string { return "test statement" } func (testStmt) DotGraph() string { return "test statement" } func (testStmt) stmt() {} func (ng *Engine) newTestQuery(f func(context.Context) error) Query { qry := &query{ q: "test statement", stmt: testStmt(f), ng: ng, stats: stats.NewTimerGroup(), } return qry } // exec executes the query. // // At this point per query only one EvalStmt is evaluated. Alert and record // statements are not handled by the Engine. func (ng *Engine) exec(q *query) (model.Value, error) { ctx, cancel := context.WithTimeout(q.ng.baseCtx, ng.options.Timeout) q.cancel = cancel queueTimer := q.stats.GetTimer(stats.ExecQueueTime).Start() if err := ng.gate.Start(ctx); err != nil { return nil, err } defer ng.gate.Done() queueTimer.Stop() // Cancel when execution is done or an error was raised. defer q.cancel() const env = "query execution" evalTimer := q.stats.GetTimer(stats.TotalEvalTime).Start() defer evalTimer.Stop() // The base context might already be canceled on the first iteration (e.g. during shutdown). if err := contextDone(ctx, env); err != nil { return nil, err } switch s := q.Statement().(type) { case *EvalStmt: return ng.execEvalStmt(ctx, q, s) case testStmt: return nil, s(ctx) } panic(fmt.Errorf("promql.Engine.exec: unhandled statement of type %T", q.Statement())) } // execEvalStmt evaluates the expression of an evaluation statement for the given time range. 
func (ng *Engine) execEvalStmt(ctx context.Context, query *query, s *EvalStmt) (model.Value, error) { prepareTimer := query.stats.GetTimer(stats.TotalQueryPreparationTime).Start() analyzeTimer := query.stats.GetTimer(stats.QueryAnalysisTime).Start() // Only one execution statement per query is allowed. analyzer := &Analyzer{ Storage: ng.storage, Expr: s.Expr, Start: s.Start, End: s.End, } err := analyzer.Analyze(ctx) if err != nil { analyzeTimer.Stop() prepareTimer.Stop() return nil, err } analyzeTimer.Stop() preloadTimer := query.stats.GetTimer(stats.PreloadTime).Start() closer, err := analyzer.Prepare(ctx) if err != nil { preloadTimer.Stop() prepareTimer.Stop() return nil, err } defer closer.Close() preloadTimer.Stop() prepareTimer.Stop() evalTimer := query.stats.GetTimer(stats.InnerEvalTime).Start() // Instant evaluation. if s.Start == s.End && s.Interval == 0 { evaluator := &evaluator{ Timestamp: s.Start, ctx: ctx, } val, err := evaluator.Eval(s.Expr) if err != nil { return nil, err } // Turn matrix and vector types with protected metrics into // model.* types. switch v := val.(type) { case vector: val = v.value() case matrix: val = v.value() } evalTimer.Stop() return val, nil } numSteps := int(s.End.Sub(s.Start) / s.Interval) // Range evaluation. sampleStreams := map[model.Fingerprint]*sampleStream{} for ts := s.Start; !ts.After(s.End); ts = ts.Add(s.Interval) { if err := contextDone(ctx, "range evaluation"); err != nil { return nil, err } evaluator := &evaluator{ Timestamp: ts, ctx: ctx, } val, err := evaluator.Eval(s.Expr) if err != nil { return nil, err } switch v := val.(type) { case *model.Scalar: // As the expression type does not change we can safely default to 0 // as the fingerprint for scalar expressions. ss := sampleStreams[0] if ss == nil { ss = &sampleStream{Values: make([]model.SamplePair, 0, numSteps)} sampleStreams[0] = ss } ss.Values = append(ss.Values, model.SamplePair{ Value: v.Value, Timestamp: v.Timestamp, }) case vector: for _, sample := range v { fp := sample.Metric.Metric.Fingerprint() ss := sampleStreams[fp] if ss == nil { ss = &sampleStream{ Metric: sample.Metric, Values: make([]model.SamplePair, 0, numSteps), } sampleStreams[fp] = ss } ss.Values = append(ss.Values, model.SamplePair{ Value: sample.Value, Timestamp: sample.Timestamp, }) } default: panic(fmt.Errorf("promql.Engine.exec: invalid expression type %q", val.Type())) } } evalTimer.Stop() if err := contextDone(ctx, "expression evaluation"); err != nil { return nil, err } appendTimer := query.stats.GetTimer(stats.ResultAppendTime).Start() mat := matrix{} for _, ss := range sampleStreams { mat = append(mat, ss) } appendTimer.Stop() if err := contextDone(ctx, "expression evaluation"); err != nil { return nil, err } // Turn matrix type with protected metric into model.Matrix. resMatrix := mat.value() sortTimer := query.stats.GetTimer(stats.ResultSortTime).Start() sort.Sort(resMatrix) sortTimer.Stop() return resMatrix, nil } // An evaluator evaluates given expressions at a fixed timestamp. It is attached to an // engine through which it connects to a storage and reports errors. On timeout or // cancellation of its context it terminates. type evaluator struct { ctx context.Context Timestamp model.Time } // errorf causes a panic with the input formatted into an error. func (ev *evaluator) errorf(format string, args ...interface{}) { ev.error(fmt.Errorf(format, args...)) }
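// errorf/error panic rather than return errors so that the deeply nested
// evaluation code stays free of error plumbing; evaluator.recover (below)
// turns such panics back into ordinary error returns at the top level, and
// runtime errors are additionally logged with a stack trace. The pattern in
// isolation (a sketch; hypothetical helper, not part of this package):
//
//	func safeEval(f func() model.Value) (v model.Value, err error) {
//		defer func() {
//			if e := recover(); e != nil {
//				err = e.(error)
//			}
//		}()
//		return f(), nil
//	}
//
// error causes a panic with the given error.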
func (ev *evaluator) error(err error) { panic(err) } // recover is the handler that turns panics into returns from the top level of evaluation. func (ev *evaluator) recover(errp *error) { e := recover() if e != nil { if _, ok := e.(runtime.Error); ok { // Print the stack trace but do not inhibit the running application. buf := make([]byte, 64<<10) buf = buf[:runtime.Stack(buf, false)] log.Errorf("parser panic: %v\n%s", e, buf) *errp = fmt.Errorf("unexpected error") } else { *errp = e.(error) } } } // evalScalar attempts to evaluate e to a scalar value and errors otherwise. func (ev *evaluator) evalScalar(e Expr) *model.Scalar { val := ev.eval(e) sv, ok := val.(*model.Scalar) if !ok { ev.errorf("expected scalar but got %s", val.Type()) } return sv } // evalVector attempts to evaluate e to a vector value and errors otherwise. func (ev *evaluator) evalVector(e Expr) vector { val := ev.eval(e) vec, ok := val.(vector) if !ok { ev.errorf("expected vector but got %s", val.Type()) } return vec } // evalInt attempts to evaluate e into an integer and errors otherwise. func (ev *evaluator) evalInt(e Expr) int { sc := ev.evalScalar(e) return int(sc.Value) } // evalFloat attempts to evaluate e into a float and errors otherwise. func (ev *evaluator) evalFloat(e Expr) float64 { sc := ev.evalScalar(e) return float64(sc.Value) } // evalMatrix attempts to evaluate e into a matrix and errors otherwise. func (ev *evaluator) evalMatrix(e Expr) matrix { val := ev.eval(e) mat, ok := val.(matrix) if !ok { ev.errorf("expected matrix but got %s", val.Type()) } return mat } // evalMatrixBounds attempts to evaluate e to matrix boundaries and errors otherwise. func (ev *evaluator) evalMatrixBounds(e Expr) matrix { ms, ok := e.(*MatrixSelector) if !ok { ev.errorf("matrix bounds can only be evaluated for matrix selectors, got %T", e) } return ev.matrixSelectorBounds(ms) } // evalString attempts to evaluate e to a string value and errors otherwise. func (ev *evaluator) evalString(e Expr) *model.String { val := ev.eval(e) sv, ok := val.(*model.String) if !ok { ev.errorf("expected string but got %s", val.Type()) } return sv } // evalOneOf evaluates e and errors unless the result is of one of the given types. func (ev *evaluator) evalOneOf(e Expr, t1, t2 model.ValueType) model.Value { val := ev.eval(e) if val.Type() != t1 && val.Type() != t2 { ev.errorf("expected %s or %s but got %s", t1, t2, val.Type()) } return val } func (ev *evaluator) Eval(expr Expr) (v model.Value, err error) { defer ev.recover(&err) return ev.eval(expr), nil } // eval evaluates the given expression as the given AST expression node requires. func (ev *evaluator) eval(expr Expr) model.Value { // This is the top-level evaluation method. // Thus, we check for timeout/cancelation here. 
if err := contextDone(ev.ctx, "expression evaluation"); err != nil { ev.error(err) } switch e := expr.(type) { case *AggregateExpr: vector := ev.evalVector(e.Expr) return ev.aggregation(e.Op, e.Grouping, e.KeepExtraLabels, vector) case *BinaryExpr: lhs := ev.evalOneOf(e.LHS, model.ValScalar, model.ValVector) rhs := ev.evalOneOf(e.RHS, model.ValScalar, model.ValVector) switch lt, rt := lhs.Type(), rhs.Type(); { case lt == model.ValScalar && rt == model.ValScalar: return &model.Scalar{ Value: scalarBinop(e.Op, lhs.(*model.Scalar).Value, rhs.(*model.Scalar).Value), Timestamp: ev.Timestamp, } case lt == model.ValVector && rt == model.ValVector: switch e.Op { case itemLAND: return ev.vectorAnd(lhs.(vector), rhs.(vector), e.VectorMatching) case itemLOR: return ev.vectorOr(lhs.(vector), rhs.(vector), e.VectorMatching) default: return ev.vectorBinop(e.Op, lhs.(vector), rhs.(vector), e.VectorMatching, e.ReturnBool) } case lt == model.ValVector && rt == model.ValScalar: return ev.vectorScalarBinop(e.Op, lhs.(vector), rhs.(*model.Scalar), false, e.ReturnBool) case lt == model.ValScalar && rt == model.ValVector: return ev.vectorScalarBinop(e.Op, rhs.(vector), lhs.(*model.Scalar), true, e.ReturnBool) } case *Call: return e.Func.Call(ev, e.Args) case *MatrixSelector: return ev.matrixSelector(e) case *NumberLiteral: return &model.Scalar{Value: e.Val, Timestamp: ev.Timestamp} case *ParenExpr: return ev.eval(e.Expr) case *StringLiteral: return &model.String{Value: e.Val, Timestamp: ev.Timestamp} case *UnaryExpr: se := ev.evalOneOf(e.Expr, model.ValScalar, model.ValVector) // Only + and - are possible operators. if e.Op == itemSUB { switch v := se.(type) { case *model.Scalar: v.Value = -v.Value case vector: for i, sv := range v { v[i].Value = -sv.Value } } } return se case *VectorSelector: return ev.vectorSelector(e) } panic(fmt.Errorf("unhandled expression of type: %T", expr)) } // vectorSelector evaluates a *VectorSelector expression. func (ev *evaluator) vectorSelector(node *VectorSelector) vector { vec := vector{} for fp, it := range node.iterators { sampleCandidates := it.ValueAtTime(ev.Timestamp.Add(-node.Offset)) samplePair := chooseClosestBefore(sampleCandidates, ev.Timestamp.Add(-node.Offset)) if samplePair != nil { vec = append(vec, &sample{ Metric: node.metrics[fp], Value: samplePair.Value, Timestamp: ev.Timestamp, }) } } return vec } // matrixSelector evaluates a *MatrixSelector expression. func (ev *evaluator) matrixSelector(node *MatrixSelector) matrix { interval := metric.Interval{ OldestInclusive: ev.Timestamp.Add(-node.Range - node.Offset), NewestInclusive: ev.Timestamp.Add(-node.Offset), } sampleStreams := make([]*sampleStream, 0, len(node.iterators)) for fp, it := range node.iterators { samplePairs := it.RangeValues(interval) if len(samplePairs) == 0 { continue } if node.Offset != 0 { for _, sp := range samplePairs { sp.Timestamp = sp.Timestamp.Add(node.Offset) } } sampleStream := &sampleStream{ Metric: node.metrics[fp], Values: samplePairs, } sampleStreams = append(sampleStreams, sampleStream) } return matrix(sampleStreams) } // matrixSelectorBounds evaluates the boundaries of a *MatrixSelector. 
func (ev *evaluator) matrixSelectorBounds(node *MatrixSelector) matrix { interval := metric.Interval{ OldestInclusive: ev.Timestamp.Add(-node.Range - node.Offset), NewestInclusive: ev.Timestamp.Add(-node.Offset), } sampleStreams := make([]*sampleStream, 0, len(node.iterators)) for fp, it := range node.iterators { samplePairs := it.BoundaryValues(interval) if len(samplePairs) == 0 { continue } ss := &sampleStream{ Metric: node.metrics[fp], Values: samplePairs, } sampleStreams = append(sampleStreams, ss) } return matrix(sampleStreams) } func (ev *evaluator) vectorAnd(lhs, rhs vector, matching *VectorMatching) vector { if matching.Card != CardManyToMany { panic("logical operations must always be many-to-many matching") } // If no matching labels are specified, match by all labels. sigf := signatureFunc(matching.On...) var result vector // The set of signatures for the right-hand side vector. rightSigs := map[uint64]struct{}{} // Add all rhs samples to a map so we can easily find matches later. for _, rs := range rhs { rightSigs[sigf(rs.Metric)] = struct{}{} } for _, ls := range lhs { // If there's a matching entry in the right-hand side vector, add the sample. if _, ok := rightSigs[sigf(ls.Metric)]; ok { result = append(result, ls) } } return result } func (ev *evaluator) vectorOr(lhs, rhs vector, matching *VectorMatching) vector { if matching.Card != CardManyToMany { panic("logical operations must always be many-to-many matching") } sigf := signatureFunc(matching.On...) var result vector leftSigs := map[uint64]struct{}{} // Add everything from the left-hand-side vector. for _, ls := range lhs { leftSigs[sigf(ls.Metric)] = struct{}{} result = append(result, ls) } // Add all right-hand side elements which have not been added from the left-hand side. for _, rs := range rhs { if _, ok := leftSigs[sigf(rs.Metric)]; !ok { result = append(result, rs) } } return result } // vectorBinop evaluates a binary operation between two vector, excluding AND and OR. func (ev *evaluator) vectorBinop(op itemType, lhs, rhs vector, matching *VectorMatching, returnBool bool) vector { if matching.Card == CardManyToMany { panic("many-to-many only allowed for AND and OR") } var ( result = vector{} sigf = signatureFunc(matching.On...) resultLabels = append(matching.On, matching.Include...) ) // The control flow below handles one-to-one or many-to-one matching. // For one-to-many, swap sidedness and account for the swap when calculating // values. if matching.Card == CardOneToMany { lhs, rhs = rhs, lhs } // All samples from the rhs hashed by the matching label/values. rightSigs := map[uint64]*sample{} // Add all rhs samples to a map so we can easily find matches later. for _, rs := range rhs { sig := sigf(rs.Metric) // The rhs is guaranteed to be the 'one' side. Having multiple samples // with the same signature means that the matching is many-to-many. if _, found := rightSigs[sig]; found { // Many-to-many matching not allowed. ev.errorf("many-to-many matching not allowed: matching labels must be unique on one side") } rightSigs[sig] = rs } // Tracks the match-signature. For one-to-one operations the value is nil. For many-to-one // the value is a set of signatures to detect duplicated result elements. matchedSigs := map[uint64]map[uint64]struct{}{} // For all lhs samples find a respective rhs sample and perform // the binary operation. for _, ls := range lhs { sig := sigf(ls.Metric) rs, found := rightSigs[sig] // Look for a match in the rhs vector. if !found { continue } // Account for potentially swapped sidedness. 
vl, vr := ls.Value, rs.Value if matching.Card == CardOneToMany { vl, vr = vr, vl } value, keep := vectorElemBinop(op, vl, vr) if returnBool { if keep { value = 1.0 } else { value = 0.0 } } else if !keep { continue } metric := resultMetric(ls.Metric, op, resultLabels...) insertedSigs, exists := matchedSigs[sig] if matching.Card == CardOneToOne { if exists { ev.errorf("multiple matches for labels: many-to-one matching must be explicit (group_left/group_right)") } matchedSigs[sig] = nil // Set existence to true. } else { // In many-to-one matching the grouping labels have to ensure a unique metric // for the result vector. Check whether those labels have already been added for // the same matching labels. insertSig := model.SignatureForLabels(metric.Metric, matching.Include...) if !exists { insertedSigs = map[uint64]struct{}{} matchedSigs[sig] = insertedSigs } else if _, duplicate := insertedSigs[insertSig]; duplicate { ev.errorf("multiple matches for labels: grouping labels must ensure unique matches") } insertedSigs[insertSig] = struct{}{} } result = append(result, &sample{ Metric: metric, Value: value, Timestamp: ev.Timestamp, }) } return result } // signatureFunc returns a function that calculates the signature for a metric // based on the provided labels. func signatureFunc(labels ...model.LabelName) func(m metric.Metric) uint64 { if len(labels) == 0 { return func(m metric.Metric) uint64 { m.Del(model.MetricNameLabel) return uint64(m.Metric.Fingerprint()) } } return func(m metric.Metric) uint64 { return model.SignatureForLabels(m.Metric, labels...) } } // resultMetric returns the metric for the given sample(s) based on the vector // binary operation and the matching options. func resultMetric(met metric.Metric, op itemType, labels ...model.LabelName) metric.Metric { if len(labels) == 0 { if shouldDropMetricName(op) { met.Del(model.MetricNameLabel) } return met } // As we definitely write, creating a new metric is the easiest solution. m := model.Metric{} for _, ln := range labels { // Included labels from the `group_x` modifier are taken from the "many"-side. if v, ok := met.Metric[ln]; ok { m[ln] = v } } return metric.Metric{Metric: m, Copied: false} } // vectorScalarBinop evaluates a binary operation between a vector and a scalar. func (ev *evaluator) vectorScalarBinop(op itemType, lhs vector, rhs *model.Scalar, swap, returnBool bool) vector { vec := make(vector, 0, len(lhs)) for _, lhsSample := range lhs { lv, rv := lhsSample.Value, rhs.Value // lhs always contains the vector. If the original position was different // swap for calculating the value. if swap { lv, rv = rv, lv } value, keep := vectorElemBinop(op, lv, rv) if returnBool { if keep { value = 1.0 } else { value = 0.0 } keep = true } if keep { lhsSample.Value = value if shouldDropMetricName(op) { lhsSample.Metric.Del(model.MetricNameLabel) } vec = append(vec, lhsSample) } }	return vec }
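// A note on the matching bookkeeping in vectorBinop above: PromQL only allows
// each signature on the "one" side of a match to occur once. For a query such
// as `foo * on(instance) group_left bar` (hypothetical series), several foo
// samples per instance may each match the single bar sample for that
// instance, but a second bar sample with the same instance aborts with the
// "many-to-many matching not allowed" error; the insertedSigs set then guards
// against two "many"-side samples collapsing onto identical result labels.
//
// scalarBinop evaluates a binary operation between two scalars.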
func scalarBinop(op itemType, lhs, rhs model.SampleValue) model.SampleValue { switch op { case itemADD: return lhs + rhs case itemSUB: return lhs - rhs case itemMUL: return lhs * rhs case itemDIV: return lhs / rhs case itemMOD: if rhs != 0 { return model.SampleValue(int(lhs) % int(rhs)) } return model.SampleValue(math.NaN()) case itemEQL: return btos(lhs == rhs) case itemNEQ: return btos(lhs != rhs) case itemGTR: return btos(lhs > rhs) case itemLSS: return btos(lhs < rhs) case itemGTE: return btos(lhs >= rhs) case itemLTE: return btos(lhs <= rhs) } panic(fmt.Errorf("operator %q not allowed for scalar operations", op)) } // vectorElemBinop evaluates a binary operation between two vector elements. func vectorElemBinop(op itemType, lhs, rhs model.SampleValue) (model.SampleValue, bool) { switch op { case itemADD: return lhs + rhs, true case itemSUB: return lhs - rhs, true case itemMUL: return lhs * rhs, true case itemDIV: return lhs / rhs, true case itemMOD: if rhs != 0 { return model.SampleValue(int(lhs) % int(rhs)), true } return model.SampleValue(math.NaN()), true case itemEQL: return lhs, lhs == rhs case itemNEQ: return lhs, lhs != rhs case itemGTR: return lhs, lhs > rhs case itemLSS: return lhs, lhs < rhs case itemGTE: return lhs, lhs >= rhs case itemLTE: return lhs, lhs <= rhs } panic(fmt.Errorf("operator %q not allowed for operations between vectors", op)) } // labelIntersection returns the metric of common label/value pairs of two input metrics. func labelIntersection(metric1, metric2 metric.Metric) metric.Metric { for label, value := range metric1.Metric { if metric2.Metric[label] != value { metric1.Del(label) } } return metric1 } type groupedAggregation struct { labels metric.Metric value model.SampleValue valuesSquaredSum model.SampleValue groupCount int } // aggregation evaluates an aggregation operation on a vector. func (ev *evaluator) aggregation(op itemType, grouping model.LabelNames, keepExtra bool, vec vector) vector { result := map[uint64]*groupedAggregation{} for _, sample := range vec { groupingKey := model.SignatureForLabels(sample.Metric.Metric, grouping...) groupedResult, ok := result[groupingKey] // Add a new group if it doesn't exist. if !ok { var m metric.Metric if keepExtra { m = sample.Metric m.Del(model.MetricNameLabel) } else { m = metric.Metric{ Metric: model.Metric{}, Copied: true, } for _, l := range grouping { if v, ok := sample.Metric.Metric[l]; ok { m.Set(l, v) } } } result[groupingKey] = &groupedAggregation{ labels: m, value: sample.Value, valuesSquaredSum: sample.Value * sample.Value, groupCount: 1, } continue } // Add the sample to the existing group. if keepExtra { groupedResult.labels = labelIntersection(groupedResult.labels, sample.Metric) } switch op { case itemSum: groupedResult.value += sample.Value case itemAvg: groupedResult.value += sample.Value groupedResult.groupCount++ case itemMax: if groupedResult.value < sample.Value || math.IsNaN(float64(groupedResult.value)) { groupedResult.value = sample.Value } case itemMin: if groupedResult.value > sample.Value || math.IsNaN(float64(groupedResult.value)) { groupedResult.value = sample.Value } case itemCount: groupedResult.groupCount++ case itemStdvar, itemStddev: groupedResult.value += sample.Value groupedResult.valuesSquaredSum += sample.Value * sample.Value groupedResult.groupCount++ default: panic(fmt.Errorf("expected aggregation operator but got %q", op)) } } // Construct the result vector from the aggregated groups. 
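// For stdvar/stddev the finalization below uses the textbook identity
// Var(X) = E[X^2] - (E[X])^2: `value` has accumulated the sum of the samples
// and `valuesSquaredSum` the sum of their squares, so dividing each by
// groupCount yields the two expectations. (The single-pass form can lose
// precision for values with a large mean and small spread; it is used here
// for simplicity.)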
resultVector := make(vector, 0, len(result)) for _, aggr := range result { switch op { case itemAvg: aggr.value = aggr.value / model.SampleValue(aggr.groupCount) case itemCount: aggr.value = model.SampleValue(aggr.groupCount) case itemStdvar: avg := float64(aggr.value) / float64(aggr.groupCount) aggr.value = model.SampleValue(float64(aggr.valuesSquaredSum)/float64(aggr.groupCount) - avg*avg) case itemStddev: avg := float64(aggr.value) / float64(aggr.groupCount) aggr.value = model.SampleValue(math.Sqrt(float64(aggr.valuesSquaredSum)/float64(aggr.groupCount) - avg*avg)) default: // For other aggregations, we already have the right value. } sample := &sample{ Metric: aggr.labels, Value: aggr.value, Timestamp: ev.Timestamp, } resultVector = append(resultVector, sample) } return resultVector } // btos returns 1 if b is true, 0 otherwise. func btos(b bool) model.SampleValue { if b { return 1 } return 0 } // shouldDropMetricName returns whether the metric name should be dropped in the // result of the op operation. func shouldDropMetricName(op itemType) bool { switch op { case itemADD, itemSUB, itemDIV, itemMUL, itemMOD: return true default: return false } } // StalenessDelta determines the time since the last sample after which a time // series is considered stale. var StalenessDelta = 5 * time.Minute // chooseClosestBefore chooses the closest sample of a list of samples // before or at a given target time. func chooseClosestBefore(samples []model.SamplePair, timestamp model.Time) *model.SamplePair { for _, candidate := range samples { delta := candidate.Timestamp.Sub(timestamp) // Samples before or at target time. if delta <= 0 { // Ignore samples outside of staleness policy window. if -delta > StalenessDelta { continue } return &candidate } } return nil } // A queryGate controls the maximum number of concurrently running and waiting queries. type queryGate struct { ch chan struct{} } // newQueryGate returns a query gate that limits the number of queries // being concurrently executed. func newQueryGate(length int) *queryGate { return &queryGate{ ch: make(chan struct{}, length), } } // Start blocks until the gate has a free spot or the context is done. func (g *queryGate) Start(ctx context.Context) error { select { case <-ctx.Done(): return contextDone(ctx, "query queue") case g.ch <- struct{}{}: return nil } } // Done releases a single spot in the gate. func (g *queryGate) Done() { select { case <-g.ch: default: panic("engine.queryGate.Done: more operations done than started") } }
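// A queryGate usage sketch (hypothetical helper, not part of this package):
// every caller brackets its work with Start/Done, so at most as many queries
// as the gate's channel capacity make progress, and later callers block in
// Start until a slot frees up or their context is canceled.
func runGated(ctx context.Context, g *queryGate, work func() error) error {
	if err := g.Start(ctx); err != nil {
		return err
	}
	defer g.Done()
	return work()
}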
prometheus-0.16.2+ds/promql/engine_test.go000066400000000000000000000113711265137125100205500ustar00rootroot00000000000000package promql import ( "fmt" "testing" "time" "golang.org/x/net/context" ) var noop = testStmt(func(context.Context) error { return nil }) func TestQueryConcurrency(t *testing.T) { engine := NewEngine(nil, nil) defer engine.Stop() block := make(chan struct{}) processing := make(chan struct{}) f := func(context.Context) error { processing <- struct{}{} <-block return nil } for i := 0; i < DefaultEngineOptions.MaxConcurrentQueries; i++ { q := engine.newTestQuery(f) go q.Exec() select { case <-processing: // Expected. case <-time.After(20 * time.Millisecond): t.Fatalf("Query within concurrency threshold not being executed") } } q := engine.newTestQuery(f) go q.Exec() select { case <-processing: t.Fatalf("Query above concurrency threshold being executed") case <-time.After(20 * time.Millisecond): // Expected. } // Terminate a running query. block <- struct{}{} select { case <-processing: // Expected. case <-time.After(20 * time.Millisecond): t.Fatalf("Query within concurrency threshold not being executed") } // Terminate remaining queries. for i := 0; i < DefaultEngineOptions.MaxConcurrentQueries; i++ { block <- struct{}{} } } func TestQueryTimeout(t *testing.T) { engine := NewEngine(nil, &EngineOptions{ Timeout: 5 * time.Millisecond, MaxConcurrentQueries: 20, }) defer engine.Stop() query := engine.newTestQuery(func(ctx context.Context) error { time.Sleep(50 * time.Millisecond) return contextDone(ctx, "test statement execution") }) res := query.Exec() if res.Err == nil { t.Fatalf("expected timeout error but got none") } if _, ok := res.Err.(ErrQueryTimeout); res.Err != nil && !ok { t.Fatalf("expected timeout error but got: %s", res.Err) } } func TestQueryCancel(t *testing.T) { engine := NewEngine(nil, nil) defer engine.Stop() // Cancel a running query before it completes. block := make(chan struct{}) processing := make(chan struct{}) query1 := engine.newTestQuery(func(ctx context.Context) error { processing <- struct{}{} <-block return contextDone(ctx, "test statement execution") }) var res *Result go func() { res = query1.Exec() processing <- struct{}{} }() <-processing query1.Cancel() block <- struct{}{} <-processing if res.Err == nil { t.Fatalf("expected cancellation error for query1 but got none") } if ee := ErrQueryCanceled("test statement execution"); res.Err != ee { t.Fatalf("expected error %q, got %q", ee, res.Err) } // Canceling a query before starting it must have no effect. query2 := engine.newTestQuery(func(ctx context.Context) error { return contextDone(ctx, "test statement execution") }) query2.Cancel() res = query2.Exec() if res.Err != nil { t.Fatalf("unexpected error on executing query2: %s", res.Err) } } func TestEngineShutdown(t *testing.T) { engine := NewEngine(nil, nil) block := make(chan struct{}) processing := make(chan struct{}) // Shutdown engine on first handler execution. Should handler execution ever become // concurrent this test has to be adjusted accordingly. f := func(ctx context.Context) error { processing <- struct{}{} <-block return contextDone(ctx, "test statement execution") } query1 := engine.newTestQuery(f) // Stopping the engine must cancel the base context. While executing queries is // still possible, their context is canceled from the beginning and execution should // terminate immediately. var res *Result go func() { res = query1.Exec() processing <- struct{}{} }() <-processing engine.Stop() block <- struct{}{} <-processing if res.Err == nil { t.Fatalf("expected error on shutdown during query but got none") } if ee := ErrQueryCanceled("test statement execution"); res.Err != ee { t.Fatalf("expected error %q, got %q", ee, res.Err) } query2 := engine.newTestQuery(func(context.Context) error { t.Fatalf("reached query execution unexpectedly") return nil }) // The second query is started after the engine shut down. It must // be canceled immediately. res2 := query2.Exec() if res2.Err == nil { t.Fatalf("expected error on querying shutdown engine but got none") } if _, ok := res2.Err.(ErrQueryCanceled); !ok { t.Fatalf("expected cancelation error, got %q", res2.Err) } } func TestRecoverEvaluatorRuntime(t *testing.T) { var ev *evaluator var err error defer ev.recover(&err) // Cause a runtime panic.
var a []int a[123] = 1 if err.Error() != "unexpected error" { t.Fatalf("wrong error message: %q, expected %q", err, "unexpected error") } } func TestRecoverEvaluatorError(t *testing.T) { var ev *evaluator var err error e := fmt.Errorf("custom error") defer func() { if err.Error() != e.Error() { t.Fatalf("wrong error message: %q, expected %q", err, e) } }() defer ev.recover(&err) panic(e) } prometheus-0.16.2+ds/promql/functions.go000066400000000000000000000705471265137125100202660ustar00rootroot00000000000000// Copyright 2015 The Prometheus Authors // Licensed under the Apache License, Version 2.0 (the "License"); // you may not use this file except in compliance with the License. // You may obtain a copy of the License at // // http://www.apache.org/licenses/LICENSE-2.0 // // Unless required by applicable law or agreed to in writing, software // distributed under the License is distributed on an "AS IS" BASIS, // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. // See the License for the specific language governing permissions and // limitations under the License. package promql import ( "container/heap" "math" "regexp" "sort" "strconv" "github.com/prometheus/common/model" "github.com/prometheus/prometheus/storage/metric" ) // Function represents a function of the expression language and is // used by function nodes. type Function struct { Name string ArgTypes []model.ValueType OptionalArgs int ReturnType model.ValueType Call func(ev *evaluator, args Expressions) model.Value } // === time() model.SampleValue === func funcTime(ev *evaluator, args Expressions) model.Value { return &model.Scalar{ Value: model.SampleValue(ev.Timestamp.Unix()), Timestamp: ev.Timestamp, } } // extrapolatedRate is a utility function for rate/increase/delta. // It calculates the rate (allowing for counter resets if isCounter is true), // extrapolates if the first/last sample is close to the boundary, and returns // the result as either per-second (if isRate is true) or overall. func extrapolatedRate(ev *evaluator, arg Expr, isCounter bool, isRate bool) model.Value { ms := arg.(*MatrixSelector) rangeStart := ev.Timestamp.Add(-ms.Range - ms.Offset) rangeEnd := ev.Timestamp.Add(-ms.Offset) resultVector := vector{} matrixValue := ev.evalMatrix(ms) for _, samples := range matrixValue { // No sense in trying to compute a rate without at least two points. Drop // this vector element. if len(samples.Values) < 2 { continue } var ( counterCorrection model.SampleValue lastValue model.SampleValue ) for _, sample := range samples.Values { currentValue := sample.Value if isCounter && currentValue < lastValue { counterCorrection += lastValue - currentValue } lastValue = currentValue } resultValue := lastValue - samples.Values[0].Value + counterCorrection // Duration between first/last samples and boundary of range. durationToStart := samples.Values[0].Timestamp.Sub(rangeStart).Seconds() durationToEnd := rangeEnd.Sub(samples.Values[len(samples.Values)-1].Timestamp).Seconds() sampledInterval := samples.Values[len(samples.Values)-1].Timestamp.Sub(samples.Values[0].Timestamp).Seconds() averageDurationBetweenSamples := sampledInterval / float64(len(samples.Values)-1) if isCounter && resultValue > 0 && samples.Values[0].Value >= 0 { // Counters cannot be negative. If we have any slope at // all (i.e. resultValue went up), we can extrapolate // the zero point of the counter. 
If the duration to the // zero point is shorter than the durationToStart, we // take the zero point as the start of the series, // thereby avoiding extrapolation to negative counter // values. durationToZero := sampledInterval * float64(samples.Values[0].Value/resultValue) if durationToZero < durationToStart { durationToStart = durationToZero } } // If the first/last samples are close to the boundaries of the range, // extrapolate the result. This is as we expect that another sample // will exist given the spacing between samples we've seen thus far, // with an allowance for noise. extrapolationThreshold := averageDurationBetweenSamples * 1.1 extrapolateToInterval := sampledInterval if durationToStart < extrapolationThreshold { extrapolateToInterval += durationToStart } else { extrapolateToInterval += averageDurationBetweenSamples / 2 } if durationToEnd < extrapolationThreshold { extrapolateToInterval += durationToEnd } else { extrapolateToInterval += averageDurationBetweenSamples / 2 } resultValue = resultValue * model.SampleValue(extrapolateToInterval/sampledInterval) if isRate { resultValue = resultValue / model.SampleValue(ms.Range.Seconds()) } resultSample := &sample{ Metric: samples.Metric, Value: resultValue, Timestamp: ev.Timestamp, } resultSample.Metric.Del(model.MetricNameLabel) resultVector = append(resultVector, resultSample) } return resultVector } // === delta(matrix model.ValMatrix) Vector === func funcDelta(ev *evaluator, args Expressions) model.Value { return extrapolatedRate(ev, args[0], false, false) } // === rate(node model.ValMatrix) Vector === func funcRate(ev *evaluator, args Expressions) model.Value { return extrapolatedRate(ev, args[0], true, true) } // === increase(node model.ValMatrix) Vector === func funcIncrease(ev *evaluator, args Expressions) model.Value { return extrapolatedRate(ev, args[0], true, false) } // === irate(node model.ValMatrix) Vector === func funcIrate(ev *evaluator, args Expressions) model.Value { resultVector := vector{} for _, samples := range ev.evalMatrix(args[0]) { // No sense in trying to compute a rate without at least two points. Drop // this vector element. if len(samples.Values) < 2 { continue } lastSample := samples.Values[len(samples.Values)-1] previousSample := samples.Values[len(samples.Values)-2] var resultValue model.SampleValue if lastSample.Value < previousSample.Value { // Counter reset. resultValue = lastSample.Value } else { resultValue = lastSample.Value - previousSample.Value } sampledInterval := lastSample.Timestamp.Sub(previousSample.Timestamp) if sampledInterval == 0 { // Avoid dividing by 0. continue } // Convert to per-second. resultValue /= model.SampleValue(sampledInterval.Seconds()) resultSample := &sample{ Metric: samples.Metric, Value: resultValue, Timestamp: ev.Timestamp, } resultSample.Metric.Del(model.MetricNameLabel) resultVector = append(resultVector, resultSample) } return resultVector } // === sort(node model.ValVector) Vector === func funcSort(ev *evaluator, args Expressions) model.Value { // NaN should sort to the bottom, so take descending sort with NaN first and // reverse it. byValueSorter := vectorByReverseValueHeap(ev.evalVector(args[0])) sort.Sort(sort.Reverse(byValueSorter)) return vector(byValueSorter) } // === sortDesc(node model.ValVector) Vector === func funcSortDesc(ev *evaluator, args Expressions) model.Value { // NaN should sort to the bottom, so take ascending sort with NaN first and // reverse it. 
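// Worked example for extrapolatedRate above (illustrative numbers): with
// samples 10, 20, 5, 15 in the window and isCounter set, the reset at
// 20 -> 5 adds a counterCorrection of 15, so the raw increase is
// 15 - 10 + 15 = 20. If the first and last samples lie within 1.1x the
// average sample spacing of the range boundaries, the result is scaled up
// to cover the full window; rate() then divides by the range in seconds.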
byValueSorter := vectorByValueHeap(ev.evalVector(args[0])) sort.Sort(sort.Reverse(byValueSorter)) return vector(byValueSorter) } // === topk(k model.ValScalar, node model.ValVector) Vector === func funcTopk(ev *evaluator, args Expressions) model.Value { k := ev.evalInt(args[0]) if k < 1 { return vector{} } vec := ev.evalVector(args[1]) topk := make(vectorByValueHeap, 0, k) for _, el := range vec { if len(topk) < k || topk[0].Value < el.Value || math.IsNaN(float64(topk[0].Value)) { if len(topk) == k { heap.Pop(&topk) } heap.Push(&topk, el) } } // The heap keeps the lowest value on top, so reverse it. sort.Sort(sort.Reverse(topk)) return vector(topk) } // === bottomk(k model.ValScalar, node model.ValVector) Vector === func funcBottomk(ev *evaluator, args Expressions) model.Value { k := ev.evalInt(args[0]) if k < 1 { return vector{} } vec := ev.evalVector(args[1]) bottomk := make(vectorByReverseValueHeap, 0, k) for _, el := range vec { if len(bottomk) < k || bottomk[0].Value > el.Value || math.IsNaN(float64(bottomk[0].Value)) { if len(bottomk) == k { heap.Pop(&bottomk) } heap.Push(&bottomk, el) } } // The heap keeps the highest value on top, so reverse it. sort.Sort(sort.Reverse(bottomk)) return vector(bottomk) } // === clamp_max(vector model.ValVector, max Scalar) Vector === func funcClampMax(ev *evaluator, args Expressions) model.Value { vec := ev.evalVector(args[0]) max := ev.evalFloat(args[1]) for _, el := range vec { el.Metric.Del(model.MetricNameLabel) el.Value = model.SampleValue(math.Min(max, float64(el.Value))) } return vec } // === clamp_min(vector model.ValVector, min Scalar) Vector === func funcClampMin(ev *evaluator, args Expressions) model.Value { vec := ev.evalVector(args[0]) min := ev.evalFloat(args[1]) for _, el := range vec { el.Metric.Del(model.MetricNameLabel) el.Value = model.SampleValue(math.Max(min, float64(el.Value))) } return vec } // === drop_common_labels(node model.ValVector) Vector === func funcDropCommonLabels(ev *evaluator, args Expressions) model.Value { vec := ev.evalVector(args[0]) if len(vec) < 1 { return vector{} } common := model.LabelSet{} for k, v := range vec[0].Metric.Metric { // TODO(julius): Should we also drop common metric names? if k == model.MetricNameLabel { continue } common[k] = v } for _, el := range vec[1:] { for k, v := range common { if el.Metric.Metric[k] != v { // Deletion of map entries while iterating over them is safe. // From http://golang.org/ref/spec#For_statements: // "If map entries that have not yet been reached are deleted during // iteration, the corresponding iteration values will not be produced." delete(common, k) } } } for _, el := range vec { for k := range el.Metric.Metric { if _, ok := common[k]; ok { el.Metric.Del(k) } } } return vec } // === round(vector model.ValVector, toNearest=1 Scalar) Vector === func funcRound(ev *evaluator, args Expressions) model.Value { // round returns a number rounded to toNearest. // Ties are solved by rounding up. toNearest := float64(1) if len(args) >= 2 { toNearest = ev.evalFloat(args[1]) } // Invert as it seems to cause fewer floating point accuracy issues. 
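// For example, with toNearest = 0.5 and a sample value of 2.7 the code below
// computes floor(2.7*2 + 0.5) / 2 = floor(5.9) / 2 = 2.5, and the tie
// round(2.25, 0.5) resolves upwards: floor(2.25*2 + 0.5) / 2 = 2.5.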
toNearestInverse := 1.0 / toNearest vec := ev.evalVector(args[0]) for _, el := range vec { el.Metric.Del(model.MetricNameLabel) el.Value = model.SampleValue(math.Floor(float64(el.Value)*toNearestInverse+0.5) / toNearestInverse) } return vec } // === scalar(node model.ValVector) Scalar === func funcScalar(ev *evaluator, args Expressions) model.Value { v := ev.evalVector(args[0]) if len(v) != 1 { return &model.Scalar{ Value: model.SampleValue(math.NaN()), Timestamp: ev.Timestamp, } } return &model.Scalar{ Value: model.SampleValue(v[0].Value), Timestamp: ev.Timestamp, } } // === count_scalar(vector model.ValVector) model.SampleValue === func funcCountScalar(ev *evaluator, args Expressions) model.Value { return &model.Scalar{ Value: model.SampleValue(len(ev.evalVector(args[0]))), Timestamp: ev.Timestamp, } } func aggrOverTime(ev *evaluator, args Expressions, aggrFn func([]model.SamplePair) model.SampleValue) model.Value { mat := ev.evalMatrix(args[0]) resultVector := vector{} for _, el := range mat { if len(el.Values) == 0 { continue } el.Metric.Del(model.MetricNameLabel) resultVector = append(resultVector, &sample{ Metric: el.Metric, Value: aggrFn(el.Values), Timestamp: ev.Timestamp, }) } return resultVector } // === avg_over_time(matrix model.ValMatrix) Vector === func funcAvgOverTime(ev *evaluator, args Expressions) model.Value { return aggrOverTime(ev, args, func(values []model.SamplePair) model.SampleValue { var sum model.SampleValue for _, v := range values { sum += v.Value } return sum / model.SampleValue(len(values)) }) } // === count_over_time(matrix model.ValMatrix) Vector === func funcCountOverTime(ev *evaluator, args Expressions) model.Value { return aggrOverTime(ev, args, func(values []model.SamplePair) model.SampleValue { return model.SampleValue(len(values)) }) } // === floor(vector model.ValVector) Vector === func funcFloor(ev *evaluator, args Expressions) model.Value { vector := ev.evalVector(args[0]) for _, el := range vector { el.Metric.Del(model.MetricNameLabel) el.Value = model.SampleValue(math.Floor(float64(el.Value))) } return vector } // === max_over_time(matrix model.ValMatrix) Vector === func funcMaxOverTime(ev *evaluator, args Expressions) model.Value { return aggrOverTime(ev, args, func(values []model.SamplePair) model.SampleValue { max := math.Inf(-1) for _, v := range values { max = math.Max(max, float64(v.Value)) } return model.SampleValue(max) }) } // === min_over_time(matrix model.ValMatrix) Vector === func funcMinOverTime(ev *evaluator, args Expressions) model.Value { return aggrOverTime(ev, args, func(values []model.SamplePair) model.SampleValue { min := math.Inf(1) for _, v := range values { min = math.Min(min, float64(v.Value)) } return model.SampleValue(min) }) } // === sum_over_time(matrix model.ValMatrix) Vector === func funcSumOverTime(ev *evaluator, args Expressions) model.Value { return aggrOverTime(ev, args, func(values []model.SamplePair) model.SampleValue { var sum model.SampleValue for _, v := range values { sum += v.Value } return sum }) } // === abs(vector model.ValVector) Vector === func funcAbs(ev *evaluator, args Expressions) model.Value { vector := ev.evalVector(args[0]) for _, el := range vector { el.Metric.Del(model.MetricNameLabel) el.Value = model.SampleValue(math.Abs(float64(el.Value))) } return vector } // === absent(vector model.ValVector) Vector === func funcAbsent(ev *evaluator, args Expressions) model.Value { if len(ev.evalVector(args[0])) > 0 { return vector{} } m := model.Metric{} if vs, ok := args[0].(*VectorSelector); ok { for 
_, matcher := range vs.LabelMatchers { if matcher.Type == metric.Equal && matcher.Name != model.MetricNameLabel { m[matcher.Name] = matcher.Value } } } return vector{ &sample{ Metric: metric.Metric{ Metric: m, Copied: true, }, Value: 1, Timestamp: ev.Timestamp, }, } } // === ceil(vector model.ValVector) Vector === func funcCeil(ev *evaluator, args Expressions) model.Value { vector := ev.evalVector(args[0]) for _, el := range vector { el.Metric.Del(model.MetricNameLabel) el.Value = model.SampleValue(math.Ceil(float64(el.Value))) } return vector } // === exp(vector model.ValVector) Vector === func funcExp(ev *evaluator, args Expressions) model.Value { vector := ev.evalVector(args[0]) for _, el := range vector { el.Metric.Del(model.MetricNameLabel) el.Value = model.SampleValue(math.Exp(float64(el.Value))) } return vector } // === sqrt(vector VectorNode) Vector === func funcSqrt(ev *evaluator, args Expressions) model.Value { vector := ev.evalVector(args[0]) for _, el := range vector { el.Metric.Del(model.MetricNameLabel) el.Value = model.SampleValue(math.Sqrt(float64(el.Value))) } return vector } // === ln(vector model.ValVector) Vector === func funcLn(ev *evaluator, args Expressions) model.Value { vector := ev.evalVector(args[0]) for _, el := range vector { el.Metric.Del(model.MetricNameLabel) el.Value = model.SampleValue(math.Log(float64(el.Value))) } return vector } // === log2(vector model.ValVector) Vector === func funcLog2(ev *evaluator, args Expressions) model.Value { vector := ev.evalVector(args[0]) for _, el := range vector { el.Metric.Del(model.MetricNameLabel) el.Value = model.SampleValue(math.Log2(float64(el.Value))) } return vector } // === log10(vector model.ValVector) Vector === func funcLog10(ev *evaluator, args Expressions) model.Value { vector := ev.evalVector(args[0]) for _, el := range vector { el.Metric.Del(model.MetricNameLabel) el.Value = model.SampleValue(math.Log10(float64(el.Value))) } return vector } // === deriv(node model.ValMatrix) Vector === func funcDeriv(ev *evaluator, args Expressions) model.Value { resultVector := vector{} mat := ev.evalMatrix(args[0]) for _, samples := range mat { // No sense in trying to compute a derivative without at least two points. // Drop this vector element. if len(samples.Values) < 2 { continue } // Least squares. var ( n model.SampleValue sumX, sumY model.SampleValue sumXY, sumX2 model.SampleValue ) for _, sample := range samples.Values { x := model.SampleValue(sample.Timestamp.UnixNano() / 1e9) n += 1.0 sumY += sample.Value sumX += x sumXY += x * sample.Value sumX2 += x * x } numerator := sumXY - sumX*sumY/n denominator := sumX2 - (sumX*sumX)/n resultValue := numerator / denominator resultSample := &sample{ Metric: samples.Metric, Value: resultValue, Timestamp: ev.Timestamp, } resultSample.Metric.Del(model.MetricNameLabel) resultVector = append(resultVector, resultSample) } return resultVector } // === predict_linear(node model.ValMatrix, k model.ValScalar) Vector === func funcPredictLinear(ev *evaluator, args Expressions) model.Value { vec := funcDeriv(ev, args[0:1]).(vector) duration := model.SampleValue(model.SampleValue(ev.evalFloat(args[1]))) excludedLabels := map[model.LabelName]struct{}{ model.MetricNameLabel: {}, } // Calculate predicted delta over the duration. signatureToDelta := map[uint64]model.SampleValue{} for _, el := range vec { signature := model.SignatureWithoutLabels(el.Metric.Metric, excludedLabels) signatureToDelta[signature] = el.Value * duration } // add predicted delta to last value. 
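// For example, if deriv reports a slope of 0.2 per second for a series and
// predict_linear was called with t = 300 seconds, the delta stored here is
// 0.2 * 300 = 60, and the loop below emits the series' last value plus 60.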
matrixBounds := ev.evalMatrixBounds(args[0]) outVec := make(vector, 0, len(signatureToDelta)) for _, samples := range matrixBounds { if len(samples.Values) < 2 { continue } signature := model.SignatureWithoutLabels(samples.Metric.Metric, excludedLabels) delta, ok := signatureToDelta[signature] if ok { samples.Metric.Del(model.MetricNameLabel) outVec = append(outVec, &sample{ Metric: samples.Metric, Value: delta + samples.Values[1].Value, Timestamp: ev.Timestamp, }) } } return outVec } // === histogram_quantile(k model.ValScalar, vector model.ValVector) Vector === func funcHistogramQuantile(ev *evaluator, args Expressions) model.Value { q := model.SampleValue(ev.evalFloat(args[0])) inVec := ev.evalVector(args[1]) outVec := vector{} signatureToMetricWithBuckets := map[uint64]*metricWithBuckets{} for _, el := range inVec { upperBound, err := strconv.ParseFloat( string(el.Metric.Metric[model.BucketLabel]), 64, ) if err != nil { // Oops, no bucket label or malformed label value. Skip. // TODO(beorn7): Issue a warning somehow. continue } signature := model.SignatureWithoutLabels(el.Metric.Metric, excludedLabels) mb, ok := signatureToMetricWithBuckets[signature] if !ok { el.Metric.Del(model.BucketLabel) el.Metric.Del(model.MetricNameLabel) mb = &metricWithBuckets{el.Metric, nil} signatureToMetricWithBuckets[signature] = mb } mb.buckets = append(mb.buckets, bucket{upperBound, el.Value}) } for _, mb := range signatureToMetricWithBuckets { outVec = append(outVec, &sample{ Metric: mb.metric, Value: model.SampleValue(quantile(q, mb.buckets)), Timestamp: ev.Timestamp, }) } return outVec } // === resets(matrix model.ValMatrix) Vector === func funcResets(ev *evaluator, args Expressions) model.Value { in := ev.evalMatrix(args[0]) out := make(vector, 0, len(in)) for _, samples := range in { resets := 0 prev := model.SampleValue(samples.Values[0].Value) for _, sample := range samples.Values[1:] { current := sample.Value if current < prev { resets++ } prev = current } rs := &sample{ Metric: samples.Metric, Value: model.SampleValue(resets), Timestamp: ev.Timestamp, } rs.Metric.Del(model.MetricNameLabel) out = append(out, rs) } return out } // === changes(matrix model.ValMatrix) Vector === func funcChanges(ev *evaluator, args Expressions) model.Value { in := ev.evalMatrix(args[0]) out := make(vector, 0, len(in)) for _, samples := range in { changes := 0 prev := model.SampleValue(samples.Values[0].Value) for _, sample := range samples.Values[1:] { current := sample.Value if current != prev { changes++ } prev = current } rs := &sample{ Metric: samples.Metric, Value: model.SampleValue(changes), Timestamp: ev.Timestamp, } rs.Metric.Del(model.MetricNameLabel) out = append(out, rs) } return out } // === label_replace(vector model.ValVector, dst_label, replacement, src_labelname, regex model.ValString) Vector === func funcLabelReplace(ev *evaluator, args Expressions) model.Value { var ( vector = ev.evalVector(args[0]) dst = model.LabelName(ev.evalString(args[1]).Value) repl = ev.evalString(args[2]).Value src = model.LabelName(ev.evalString(args[3]).Value) regexStr = ev.evalString(args[4]).Value ) regex, err := regexp.Compile("^(?:" + regexStr + ")$") if err != nil { ev.errorf("invalid regular expression in label_replace(): %s", regexStr) } if !model.LabelNameRE.MatchString(string(dst)) { ev.errorf("invalid destination label name in label_replace(): %s", dst) } outSet := make(map[model.Fingerprint]struct{}, len(vector)) for _, el := range vector { srcVal := string(el.Metric.Metric[src]) indexes := 
regex.FindStringSubmatchIndex(srcVal) // If there is no match, no replacement should take place. if indexes == nil { continue } res := regex.ExpandString([]byte{}, repl, srcVal, indexes) if len(res) == 0 { el.Metric.Del(dst) } else { el.Metric.Set(dst, model.LabelValue(res)) } fp := el.Metric.Metric.Fingerprint() if _, exists := outSet[fp]; exists { ev.errorf("duplicated label set in output of label_replace(): %s", el.Metric.Metric) } else { outSet[fp] = struct{}{} } } return vector } // === vector(s scalar) Vector === func funcVector(ev *evaluator, args Expressions) model.Value { return vector{ &sample{ Metric: metric.Metric{}, Value: model.SampleValue(ev.evalFloat(args[0])), Timestamp: ev.Timestamp, }, } } var functions = map[string]*Function{ "abs": { Name: "abs", ArgTypes: []model.ValueType{model.ValVector}, ReturnType: model.ValVector, Call: funcAbs, }, "absent": { Name: "absent", ArgTypes: []model.ValueType{model.ValVector}, ReturnType: model.ValVector, Call: funcAbsent, }, "increase": { Name: "increase", ArgTypes: []model.ValueType{model.ValMatrix}, ReturnType: model.ValVector, Call: funcIncrease, }, "avg_over_time": { Name: "avg_over_time", ArgTypes: []model.ValueType{model.ValMatrix}, ReturnType: model.ValVector, Call: funcAvgOverTime, }, "bottomk": { Name: "bottomk", ArgTypes: []model.ValueType{model.ValScalar, model.ValVector}, ReturnType: model.ValVector, Call: funcBottomk, }, "ceil": { Name: "ceil", ArgTypes: []model.ValueType{model.ValVector}, ReturnType: model.ValVector, Call: funcCeil, }, "changes": { Name: "changes", ArgTypes: []model.ValueType{model.ValMatrix}, ReturnType: model.ValVector, Call: funcChanges, }, "clamp_max": { Name: "clamp_max", ArgTypes: []model.ValueType{model.ValVector, model.ValScalar}, ReturnType: model.ValVector, Call: funcClampMax, }, "clamp_min": { Name: "clamp_min", ArgTypes: []model.ValueType{model.ValVector, model.ValScalar}, ReturnType: model.ValVector, Call: funcClampMin, }, "count_over_time": { Name: "count_over_time", ArgTypes: []model.ValueType{model.ValMatrix}, ReturnType: model.ValVector, Call: funcCountOverTime, }, "count_scalar": { Name: "count_scalar", ArgTypes: []model.ValueType{model.ValVector}, ReturnType: model.ValScalar, Call: funcCountScalar, }, "delta": { Name: "delta", ArgTypes: []model.ValueType{model.ValMatrix}, ReturnType: model.ValVector, Call: funcDelta, }, "deriv": { Name: "deriv", ArgTypes: []model.ValueType{model.ValMatrix}, ReturnType: model.ValVector, Call: funcDeriv, }, "drop_common_labels": { Name: "drop_common_labels", ArgTypes: []model.ValueType{model.ValVector}, ReturnType: model.ValVector, Call: funcDropCommonLabels, }, "exp": { Name: "exp", ArgTypes: []model.ValueType{model.ValVector}, ReturnType: model.ValVector, Call: funcExp, }, "floor": { Name: "floor", ArgTypes: []model.ValueType{model.ValVector}, ReturnType: model.ValVector, Call: funcFloor, }, "histogram_quantile": { Name: "histogram_quantile", ArgTypes: []model.ValueType{model.ValScalar, model.ValVector}, ReturnType: model.ValVector, Call: funcHistogramQuantile, }, "irate": { Name: "irate", ArgTypes: []model.ValueType{model.ValMatrix}, ReturnType: model.ValVector, Call: funcIrate, }, "label_replace": { Name: "label_replace", ArgTypes: []model.ValueType{model.ValVector, model.ValString, model.ValString, model.ValString, model.ValString}, ReturnType: model.ValVector, Call: funcLabelReplace, }, "ln": { Name: "ln", ArgTypes: []model.ValueType{model.ValVector}, ReturnType: model.ValVector, Call: funcLn, }, "log10": { Name: "log10", ArgTypes: 
[]model.ValueType{model.ValVector}, ReturnType: model.ValVector, Call: funcLog10, }, "log2": { Name: "log2", ArgTypes: []model.ValueType{model.ValVector}, ReturnType: model.ValVector, Call: funcLog2, }, "max_over_time": { Name: "max_over_time", ArgTypes: []model.ValueType{model.ValMatrix}, ReturnType: model.ValVector, Call: funcMaxOverTime, }, "min_over_time": { Name: "min_over_time", ArgTypes: []model.ValueType{model.ValMatrix}, ReturnType: model.ValVector, Call: funcMinOverTime, }, "predict_linear": { Name: "predict_linear", ArgTypes: []model.ValueType{model.ValMatrix, model.ValScalar}, ReturnType: model.ValVector, Call: funcPredictLinear, }, "rate": { Name: "rate", ArgTypes: []model.ValueType{model.ValMatrix}, ReturnType: model.ValVector, Call: funcRate, }, "resets": { Name: "resets", ArgTypes: []model.ValueType{model.ValMatrix}, ReturnType: model.ValVector, Call: funcResets, }, "round": { Name: "round", ArgTypes: []model.ValueType{model.ValVector, model.ValScalar}, OptionalArgs: 1, ReturnType: model.ValVector, Call: funcRound, }, "scalar": { Name: "scalar", ArgTypes: []model.ValueType{model.ValVector}, ReturnType: model.ValScalar, Call: funcScalar, }, "sort": { Name: "sort", ArgTypes: []model.ValueType{model.ValVector}, ReturnType: model.ValVector, Call: funcSort, }, "sort_desc": { Name: "sort_desc", ArgTypes: []model.ValueType{model.ValVector}, ReturnType: model.ValVector, Call: funcSortDesc, }, "sqrt": { Name: "sqrt", ArgTypes: []model.ValueType{model.ValVector}, ReturnType: model.ValVector, Call: funcSqrt, }, "sum_over_time": { Name: "sum_over_time", ArgTypes: []model.ValueType{model.ValMatrix}, ReturnType: model.ValVector, Call: funcSumOverTime, }, "time": { Name: "time", ArgTypes: []model.ValueType{}, ReturnType: model.ValScalar, Call: funcTime, }, "topk": { Name: "topk", ArgTypes: []model.ValueType{model.ValScalar, model.ValVector}, ReturnType: model.ValVector, Call: funcTopk, }, "vector": { Name: "vector", ArgTypes: []model.ValueType{model.ValScalar}, ReturnType: model.ValVector, Call: funcVector, }, } // getFunction returns a predefined Function object for the given name. func getFunction(name string) (*Function, bool) { function, ok := functions[name] return function, ok } type vectorByValueHeap vector func (s vectorByValueHeap) Len() int { return len(s) } func (s vectorByValueHeap) Less(i, j int) bool { if math.IsNaN(float64(s[i].Value)) { return true } return s[i].Value < s[j].Value } func (s vectorByValueHeap) Swap(i, j int) { s[i], s[j] = s[j], s[i] } func (s *vectorByValueHeap) Push(x interface{}) { *s = append(*s, x.(*sample)) } func (s *vectorByValueHeap) Pop() interface{} { old := *s n := len(old) el := old[n-1] *s = old[0 : n-1] return el } type vectorByReverseValueHeap vector func (s vectorByReverseValueHeap) Len() int { return len(s) } func (s vectorByReverseValueHeap) Less(i, j int) bool { if math.IsNaN(float64(s[i].Value)) { return true } return s[i].Value > s[j].Value } func (s vectorByReverseValueHeap) Swap(i, j int) { s[i], s[j] = s[j], s[i] } func (s *vectorByReverseValueHeap) Push(x interface{}) { *s = append(*s, x.(*sample)) } func (s *vectorByReverseValueHeap) Pop() interface{} { old := *s n := len(old) el := old[n-1] *s = old[0 : n-1] return el } prometheus-0.16.2+ds/promql/lex.go000066400000000000000000000472531265137125100170440ustar00rootroot00000000000000// Copyright 2015 The Prometheus Authors // Licensed under the Apache License, Version 2.0 (the "License"); // you may not use this file except in compliance with the License. 
// You may obtain a copy of the License at // // http://www.apache.org/licenses/LICENSE-2.0 // // Unless required by applicable law or agreed to in writing, software // distributed under the License is distributed on an "AS IS" BASIS, // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. // See the License for the specific language governing permissions and // limitations under the License. package promql import ( "fmt" "strings" "unicode" "unicode/utf8" ) // item represents a token or text string returned from the scanner. type item struct { typ itemType // The type of this item. pos Pos // The starting position, in bytes, of this item in the input string. val string // The value of this item. } // String returns a descriptive string for the item. func (i item) String() string { switch { case i.typ == itemEOF: return "EOF" case i.typ == itemError: return i.val case i.typ == itemIdentifier || i.typ == itemMetricIdentifier: return fmt.Sprintf("%q", i.val) case i.typ.isKeyword(): return fmt.Sprintf("<%s>", i.val) case i.typ.isOperator(): return fmt.Sprintf("<op:%s>", i.val) case i.typ.isAggregator(): return fmt.Sprintf("<aggr:%s>", i.val) case len(i.val) > 10: return fmt.Sprintf("%.10q...", i.val) } return fmt.Sprintf("%q", i.val) } // isOperator returns true if the item corresponds to a logical or arithmetic operator. // Returns false otherwise. func (i itemType) isOperator() bool { return i > operatorsStart && i < operatorsEnd } // isAggregator returns true if the item belongs to the aggregator functions. // Returns false otherwise. func (i itemType) isAggregator() bool { return i > aggregatorsStart && i < aggregatorsEnd } // isKeyword returns true if the item corresponds to a keyword. // Returns false otherwise. func (i itemType) isKeyword() bool { return i > keywordsStart && i < keywordsEnd } // isComparisonOperator returns true if the item corresponds to a comparison operator. // Returns false otherwise. func (i itemType) isComparisonOperator() bool { switch i { case itemEQL, itemNEQ, itemLTE, itemLSS, itemGTE, itemGTR: return true default: return false } } // Constants for operator precedence in expressions. // const LowestPrec = 0 // Non-operators. // Precedence returns the operator precedence of the binary // operator op. If op is not a binary operator, the result // is LowestPrec. func (i itemType) precedence() int { switch i { case itemLOR: return 1 case itemLAND: return 2 case itemEQL, itemNEQ, itemLTE, itemLSS, itemGTE, itemGTR: return 3 case itemADD, itemSUB: return 4 case itemMUL, itemDIV, itemMOD: return 5 default: return LowestPrec } } type itemType int const ( itemError itemType = iota // Error occurred, value is error message itemEOF itemComment itemIdentifier itemMetricIdentifier itemLeftParen itemRightParen itemLeftBrace itemRightBrace itemLeftBracket itemRightBracket itemComma itemAssign itemSemicolon itemString itemNumber itemDuration itemBlank itemTimes operatorsStart // Operators. itemSUB itemADD itemMUL itemMOD itemDIV itemLAND itemLOR itemEQL itemNEQ itemLTE itemLSS itemGTE itemGTR itemEQLRegex itemNEQRegex operatorsEnd aggregatorsStart // Aggregators. itemAvg itemCount itemSum itemMin itemMax itemStddev itemStdvar aggregatorsEnd keywordsStart // Keywords. itemAlert itemIf itemFor itemWith itemSummary itemRunbook itemDescription itemKeepCommon itemOffset itemBy itemOn itemGroupLeft itemGroupRight itemBool keywordsEnd ) var key = map[string]itemType{ // Operators. "and": itemLAND, "or": itemLOR, // Aggregators. 
"sum": itemSum, "avg": itemAvg, "count": itemCount, "min": itemMin, "max": itemMax, "stddev": itemStddev, "stdvar": itemStdvar, // Keywords. "alert": itemAlert, "if": itemIf, "for": itemFor, "with": itemWith, "summary": itemSummary, "runbook": itemRunbook, "description": itemDescription, "offset": itemOffset, "by": itemBy, "keeping_extra": itemKeepCommon, "keep_common": itemKeepCommon, "on": itemOn, "group_left": itemGroupLeft, "group_right": itemGroupRight, "bool": itemBool, } // These are the default string representations for common items. It does not // imply that those are the only character sequences that can be lexed to such an item. var itemTypeStr = map[itemType]string{ itemLeftParen: "(", itemRightParen: ")", itemLeftBrace: "{", itemRightBrace: "}", itemLeftBracket: "[", itemRightBracket: "]", itemComma: ",", itemAssign: "=", itemSemicolon: ";", itemBlank: "_", itemTimes: "x", itemSUB: "-", itemADD: "+", itemMUL: "*", itemMOD: "%", itemDIV: "/", itemEQL: "==", itemNEQ: "!=", itemLTE: "<=", itemLSS: "<", itemGTE: ">=", itemGTR: ">", itemEQLRegex: "=~", itemNEQRegex: "!~", } func init() { // Add keywords to item type strings. for s, ty := range key { itemTypeStr[ty] = s } // Special numbers. key["inf"] = itemNumber key["nan"] = itemNumber } func (i itemType) String() string { if s, ok := itemTypeStr[i]; ok { return s } return fmt.Sprintf("<item %d>", i) } func (i item) desc() string { if _, ok := itemTypeStr[i.typ]; ok { return i.String() } if i.typ == itemEOF { return i.typ.desc() } return fmt.Sprintf("%s %s", i.typ.desc(), i) } func (i itemType) desc() string { switch i { case itemError: return "error" case itemEOF: return "end of input" case itemComment: return "comment" case itemIdentifier: return "identifier" case itemMetricIdentifier: return "metric identifier" case itemString: return "string" case itemNumber: return "number" case itemDuration: return "duration" } return fmt.Sprintf("%q", i) } const eof = -1 // stateFn represents the state of the scanner as a function that returns the next state. type stateFn func(*lexer) stateFn // Pos is the position in a string. type Pos int // lexer holds the state of the scanner. type lexer struct { input string // The string being scanned. state stateFn // The next lexing function to enter. pos Pos // Current position in the input. start Pos // Start position of this item. width Pos // Width of last rune read from input. lastPos Pos // Position of most recent item returned by nextItem. items chan item // Channel of scanned items. parenDepth int // Nesting depth of ( ) exprs. braceOpen bool // Whether a { is opened. bracketOpen bool // Whether a [ is opened. stringOpen rune // Quote rune of the string currently being read. // seriesDesc is set when a series description for the testing // language is lexed. seriesDesc bool } // next returns the next rune in the input. func (l *lexer) next() rune { if int(l.pos) >= len(l.input) { l.width = 0 return eof } r, w := utf8.DecodeRuneInString(l.input[l.pos:]) l.width = Pos(w) l.pos += l.width return r } // peek returns but does not consume the next rune in the input. func (l *lexer) peek() rune { r := l.next() l.backup() return r } // backup steps back one rune. Can only be called once per call of next. func (l *lexer) backup() { l.pos -= l.width } // emit passes an item back to the client. func (l *lexer) emit(t itemType) { l.items <- item{t, l.start, l.input[l.start:l.pos]} l.start = l.pos } // ignore skips over the pending input before this point. 
func (l *lexer) ignore() { l.start = l.pos } // accept consumes the next rune if it's from the valid set. func (l *lexer) accept(valid string) bool { if strings.IndexRune(valid, l.next()) >= 0 { return true } l.backup() return false } // acceptRun consumes a run of runes from the valid set. func (l *lexer) acceptRun(valid string) { for strings.IndexRune(valid, l.next()) >= 0 { // consume } l.backup() } // lineNumber reports which line we're on, based on the position of // the previous item returned by nextItem. Doing it this way // means we don't have to worry about peek double counting. func (l *lexer) lineNumber() int { return 1 + strings.Count(l.input[:l.lastPos], "\n") } // linePosition reports at which character in the current line // we are on. func (l *lexer) linePosition() int { lb := strings.LastIndex(l.input[:l.lastPos], "\n") if lb == -1 { return 1 + int(l.lastPos) } return 1 + int(l.lastPos) - lb } // errorf returns an error token and terminates the scan by passing // back a nil pointer that will be the next state, terminating l.nextItem. func (l *lexer) errorf(format string, args ...interface{}) stateFn { l.items <- item{itemError, l.start, fmt.Sprintf(format, args...)} return nil } // nextItem returns the next item from the input. func (l *lexer) nextItem() item { item := <-l.items l.lastPos = item.pos return item } // lex creates a new scanner for the input string. func lex(input string) *lexer { l := &lexer{ input: input, items: make(chan item), } go l.run() return l } // run runs the state machine for the lexer. func (l *lexer) run() { for l.state = lexStatements; l.state != nil; { l.state = l.state(l) } close(l.items) } // lineComment is the character that starts a line comment. const lineComment = "#" // lexStatements is the top-level state for lexing. func lexStatements(l *lexer) stateFn { if l.braceOpen { return lexInsideBraces } if strings.HasPrefix(l.input[l.pos:], lineComment) { return lexLineComment } switch r := l.next(); { case r == eof: if l.parenDepth != 0 { return l.errorf("unclosed left parenthesis") } else if l.bracketOpen { return l.errorf("unclosed left bracket") } l.emit(itemEOF) return nil case r == ',': l.emit(itemComma) case isSpace(r): return lexSpace case r == '*': l.emit(itemMUL) case r == '/': l.emit(itemDIV) case r == '%': l.emit(itemMOD) case r == '+': l.emit(itemADD) case r == '-': l.emit(itemSUB) case r == '=': if t := l.peek(); t == '=' { l.next() l.emit(itemEQL) } else if t == '~' { return l.errorf("unexpected character after '=': %q", t) } else { l.emit(itemAssign) } case r == '!': if t := l.next(); t == '=' { l.emit(itemNEQ) } else { return l.errorf("unexpected character after '!': %q", t) } case r == '<': if t := l.peek(); t == '=' { l.next() l.emit(itemLTE) } else { l.emit(itemLSS) } case r == '>': if t := l.peek(); t == '=' { l.next() l.emit(itemGTE) } else { l.emit(itemGTR) } case isDigit(r) || (r == '.' 
&& isDigit(l.peek())): l.backup() return lexNumberOrDuration case r == '"' || r == '\'': l.stringOpen = r return lexString case r == '`': l.stringOpen = r return lexRawString case isAlpha(r) || r == ':': l.backup() return lexKeywordOrIdentifier case r == '(': l.emit(itemLeftParen) l.parenDepth++ return lexStatements case r == ')': l.emit(itemRightParen) l.parenDepth-- if l.parenDepth < 0 { return l.errorf("unexpected right parenthesis %q", r) } return lexStatements case r == '{': l.emit(itemLeftBrace) l.braceOpen = true return lexInsideBraces(l) case r == '[': if l.bracketOpen { return l.errorf("unexpected left bracket %q", r) } l.emit(itemLeftBracket) l.bracketOpen = true return lexDuration case r == ']': if !l.bracketOpen { return l.errorf("unexpected right bracket %q", r) } l.emit(itemRightBracket) l.bracketOpen = false default: return l.errorf("unexpected character: %q", r) } return lexStatements } // lexInsideBraces scans the inside of a vector selector. Keywords are ignored and // scanned as identifiers. func lexInsideBraces(l *lexer) stateFn { if strings.HasPrefix(l.input[l.pos:], lineComment) { return lexLineComment } switch r := l.next(); { case r == eof: return l.errorf("unexpected end of input inside braces") case isSpace(r): return lexSpace case isAlpha(r): l.backup() return lexIdentifier case r == ',': l.emit(itemComma) case r == '"' || r == '\'': l.stringOpen = r return lexString case r == '`': l.stringOpen = r return lexRawString case r == '=': if l.next() == '~' { l.emit(itemEQLRegex) break } l.backup() l.emit(itemEQL) case r == '!': switch nr := l.next(); { case nr == '~': l.emit(itemNEQRegex) case nr == '=': l.emit(itemNEQ) default: return l.errorf("unexpected character after '!' inside braces: %q", nr) } case r == '{': return l.errorf("unexpected left brace %q", r) case r == '}': l.emit(itemRightBrace) l.braceOpen = false if l.seriesDesc { return lexValueSequence } return lexStatements default: return l.errorf("unexpected character inside braces: %q", r) } return lexInsideBraces } // lexValueSequence scans a value sequence of a series description. func lexValueSequence(l *lexer) stateFn { switch r := l.next(); { case r == eof: return lexStatements case isSpace(r): lexSpace(l) case r == '+': l.emit(itemADD) case r == '-': l.emit(itemSUB) case r == 'x': l.emit(itemTimes) case r == '_': l.emit(itemBlank) case isDigit(r) || (r == '.' && isDigit(l.peek())): l.backup() lexNumber(l) case isAlpha(r): l.backup() // We might lex invalid items here but this will be caught by the parser. return lexKeywordOrIdentifier default: return l.errorf("unexpected character in series sequence: %q", r) } return lexValueSequence } // lexEscape scans a string escape sequence. The initial escaping character (\) // has already been seen. // // NOTE: This function as well as the helper function digitVal() and associated // tests have been adapted from the corresponding functions in the "go/scanner" // package of the Go standard library to work for Prometheus-style strings. // None of the actual escaping/quoting logic was changed in this function - it // was only modified to integrate with our lexer. 
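// For example (each form matching a case below):
//
//	\n          single-character escape
//	\101        exactly 3 octal digits, maximum value 255
//	\x41        exactly 2 hex digits, maximum value 255
//	\u263a      exactly 4 hex digits, any valid Unicode code point
//	\U0001F600  exactly 8 hex digits, any valid Unicode code point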
func lexEscape(l *lexer) { var n int var base, max uint32 ch := l.next() switch ch { case 'a', 'b', 'f', 'n', 'r', 't', 'v', '\\', l.stringOpen: return case '0', '1', '2', '3', '4', '5', '6', '7': n, base, max = 3, 8, 255 case 'x': ch = l.next() n, base, max = 2, 16, 255 case 'u': ch = l.next() n, base, max = 4, 16, unicode.MaxRune case 'U': ch = l.next() n, base, max = 8, 16, unicode.MaxRune case eof: l.errorf("escape sequence not terminated") default: l.errorf("unknown escape sequence %#U", ch) } var x uint32 for n > 0 { d := uint32(digitVal(ch)) if d >= base { if ch == eof { l.errorf("escape sequence not terminated") } l.errorf("illegal character %#U in escape sequence", ch) } x = x*base + d ch = l.next() n-- } if x > max || 0xD800 <= x && x < 0xE000 { l.errorf("escape sequence is an invalid Unicode code point") } } // digitVal returns the digit value of a rune or 16 in case the rune does not // represent a valid digit. func digitVal(ch rune) int { switch { case '0' <= ch && ch <= '9': return int(ch - '0') case 'a' <= ch && ch <= 'f': return int(ch - 'a' + 10) case 'A' <= ch && ch <= 'F': return int(ch - 'A' + 10) } return 16 // Larger than any legal digit val. } // lexString scans a quoted string. The initial quote has already been seen. func lexString(l *lexer) stateFn { Loop: for { switch l.next() { case '\\': lexEscape(l) case eof, '\n': return l.errorf("unterminated quoted string") case l.stringOpen: break Loop } } l.emit(itemString) return lexStatements } // lexRawString scans a raw quoted string. The initial quote has already been seen. func lexRawString(l *lexer) stateFn { Loop: for { switch l.next() { case eof: return l.errorf("unterminated raw string") case l.stringOpen: break Loop } } l.emit(itemString) return lexStatements } // lexSpace scans a run of space characters. One space has already been seen. func lexSpace(l *lexer) stateFn { for isSpace(l.peek()) { l.next() } l.ignore() return lexStatements } // lexLineComment scans a line comment. Left comment marker is known to be present. func lexLineComment(l *lexer) stateFn { l.pos += Pos(len(lineComment)) for r := l.next(); !isEndOfLine(r) && r != eof; { r = l.next() } l.backup() l.emit(itemComment) return lexStatements } func lexDuration(l *lexer) stateFn { if l.scanNumber() { return l.errorf("missing unit character in duration") } // Next two chars must be a valid unit and a non-alphanumeric. if l.accept("smhdwy") { if isAlphaNumeric(l.next()) { return l.errorf("bad duration syntax: %q", l.input[l.start:l.pos]) } l.backup() l.emit(itemDuration) return lexStatements } return l.errorf("bad duration syntax: %q", l.input[l.start:l.pos]) } // lexNumber scans a number: decimal, hex, oct or float. func lexNumber(l *lexer) stateFn { if !l.scanNumber() { return l.errorf("bad number syntax: %q", l.input[l.start:l.pos]) } l.emit(itemNumber) return lexStatements } // lexNumberOrDuration scans a number or a duration item. func lexNumberOrDuration(l *lexer) stateFn { if l.scanNumber() { l.emit(itemNumber) return lexStatements } // Next two chars must be a valid unit and a non-alphanumeric. if l.accept("smhdwy") { if isAlphaNumeric(l.next()) { return l.errorf("bad number or duration syntax: %q", l.input[l.start:l.pos]) } l.backup() l.emit(itemDuration) return lexStatements } return l.errorf("bad number or duration syntax: %q", l.input[l.start:l.pos]) } // scanNumber scans numbers of different formats. The scanned item is // not necessarily a valid number. This case is caught by the parser. 
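// For example, it accepts "4.23", ".3", "5.", "0x1f", and "2e-3". In series
// descriptions, hexadecimal is disallowed and scanning stops before a
// trailing 'x', so that "1x4" is read as the number 1 followed by the
// repetition token.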
func (l *lexer) scanNumber() bool { digits := "0123456789" // Disallow hexadecimal in series descriptions as the syntax is ambiguous. if !l.seriesDesc && l.accept("0") && l.accept("xX") { digits = "0123456789abcdefABCDEF" } l.acceptRun(digits) if l.accept(".") { l.acceptRun(digits) } if l.accept("eE") { l.accept("+-") l.acceptRun("0123456789") } // Next thing must not be alphanumeric unless it's the times token // for series repetitions. if r := l.peek(); (l.seriesDesc && r == 'x') || !isAlphaNumeric(r) { return true } return false } // lexIdentifier scans an alphanumeric identifier. The next character // is known to be a letter. func lexIdentifier(l *lexer) stateFn { for isAlphaNumeric(l.next()) { // absorb } l.backup() l.emit(itemIdentifier) return lexStatements } // lexKeywordOrIdentifier scans an alphanumeric identifier which may contain // a colon rune. If the identifier is a keyword the respective keyword item // is scanned. func lexKeywordOrIdentifier(l *lexer) stateFn { Loop: for { switch r := l.next(); { case isAlphaNumeric(r) || r == ':': // absorb. default: l.backup() word := l.input[l.start:l.pos] if kw, ok := key[strings.ToLower(word)]; ok { l.emit(kw) } else if !strings.Contains(word, ":") { l.emit(itemIdentifier) } else { l.emit(itemMetricIdentifier) } break Loop } } if l.seriesDesc && l.peek() != '{' { return lexValueSequence } return lexStatements } func isSpace(r rune) bool { return r == ' ' || r == '\t' || r == '\n' || r == '\r' } // isEndOfLine reports whether r is an end-of-line character. func isEndOfLine(r rune) bool { return r == '\r' || r == '\n' } // isAlphaNumeric reports whether r is an alphabetic, digit, or underscore. func isAlphaNumeric(r rune) bool { return isAlpha(r) || isDigit(r) } // isDigit reports whether r is a digit. Note: we cannot use unicode.IsDigit() // instead because that also classifies non-Latin digits as digits. See // https://github.com/prometheus/prometheus/issues/939. func isDigit(r rune) bool { return '0' <= r && r <= '9' } // isAlpha reports whether r is an alphabetic or underscore. func isAlpha(r rune) bool { return r == '_' || ('a' <= r && r <= 'z') || ('A' <= r && r <= 'Z') } prometheus-0.16.2+ds/promql/lex_test.go000066400000000000000000000244231265137125100200750ustar00rootroot00000000000000// Copyright 2015 The Prometheus Authors // Licensed under the Apache License, Version 2.0 (the "License"); // you may not use this file except in compliance with the License. // You may obtain a copy of the License at // // http://www.apache.org/licenses/LICENSE-2.0 // // Unless required by applicable law or agreed to in writing, software // distributed under the License is distributed on an "AS IS" BASIS, // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. // See the License for the specific language governing permissions and // limitations under the License. package promql import ( "fmt" "reflect" "testing" ) var tests = []struct { input string expected []item fail bool seriesDesc bool // Whether to lex a series description. }{ // Test common stuff. { input: ",", expected: []item{{itemComma, 0, ","}}, }, { input: "()", expected: []item{{itemLeftParen, 0, `(`}, {itemRightParen, 1, `)`}}, }, { input: "{}", expected: []item{{itemLeftBrace, 0, `{`}, {itemRightBrace, 1, `}`}}, }, { input: "[5m]", expected: []item{ {itemLeftBracket, 0, `[`}, {itemDuration, 1, `5m`}, {itemRightBracket, 3, `]`}, }, }, { input: "\r\n\r", expected: []item{}, }, // Test numbers. 
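// Illustrative cases derived from scanNumber's rules: an exponent is
// scanned as part of the number.
{
	input:    "2e-3",
	expected: []item{{itemNumber, 0, "2e-3"}},
},
{
	input:    "2e5",
	expected: []item{{itemNumber, 0, "2e5"}},
},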
{ input: "1", expected: []item{{itemNumber, 0, "1"}}, }, { input: "4.23", expected: []item{{itemNumber, 0, "4.23"}}, }, { input: ".3", expected: []item{{itemNumber, 0, ".3"}}, }, { input: "5.", expected: []item{{itemNumber, 0, "5."}}, }, { input: "NaN", expected: []item{{itemNumber, 0, "NaN"}}, }, { input: "nAN", expected: []item{{itemNumber, 0, "nAN"}}, }, { input: "NaN 123", expected: []item{{itemNumber, 0, "NaN"}, {itemNumber, 4, "123"}}, }, { input: "NaN123", expected: []item{{itemIdentifier, 0, "NaN123"}}, }, { input: "iNf", expected: []item{{itemNumber, 0, "iNf"}}, }, { input: "Inf", expected: []item{{itemNumber, 0, "Inf"}}, }, { input: "+Inf", expected: []item{{itemADD, 0, "+"}, {itemNumber, 1, "Inf"}}, }, { input: "+Inf 123", expected: []item{{itemADD, 0, "+"}, {itemNumber, 1, "Inf"}, {itemNumber, 5, "123"}}, }, { input: "-Inf", expected: []item{{itemSUB, 0, "-"}, {itemNumber, 1, "Inf"}}, }, { input: "Infoo", expected: []item{{itemIdentifier, 0, "Infoo"}}, }, { input: "-Infoo", expected: []item{{itemSUB, 0, "-"}, {itemIdentifier, 1, "Infoo"}}, }, { input: "-Inf 123", expected: []item{{itemSUB, 0, "-"}, {itemNumber, 1, "Inf"}, {itemNumber, 5, "123"}}, }, { input: "0x123", expected: []item{{itemNumber, 0, "0x123"}}, }, { // See https://github.com/prometheus/prometheus/issues/939. input: ".٩", fail: true, }, // Test duration. { input: "5s", expected: []item{{itemDuration, 0, "5s"}}, }, { input: "123m", expected: []item{{itemDuration, 0, "123m"}}, }, { input: "1h", expected: []item{{itemDuration, 0, "1h"}}, }, { input: "3w", expected: []item{{itemDuration, 0, "3w"}}, }, { input: "1y", expected: []item{{itemDuration, 0, "1y"}}, }, // Test identifiers. { input: "abc", expected: []item{{itemIdentifier, 0, "abc"}}, }, { input: "a:bc", expected: []item{{itemMetricIdentifier, 0, "a:bc"}}, }, { input: "abc d", expected: []item{{itemIdentifier, 0, "abc"}, {itemIdentifier, 4, "d"}}, }, { input: ":bc", expected: []item{{itemMetricIdentifier, 0, ":bc"}}, }, { input: "0a:bc", fail: true, }, // Test comments. { input: "# some comment", expected: []item{{itemComment, 0, "# some comment"}}, }, { input: "5 # 1+1\n5", expected: []item{ {itemNumber, 0, "5"}, {itemComment, 2, "# 1+1"}, {itemNumber, 8, "5"}, }, }, // Test operators. { input: `=`, expected: []item{{itemAssign, 0, `=`}}, }, { // Inside braces equality is a single '=' character. input: `{=}`, expected: []item{{itemLeftBrace, 0, `{`}, {itemEQL, 1, `=`}, {itemRightBrace, 2, `}`}}, }, { input: `==`, expected: []item{{itemEQL, 0, `==`}}, }, { input: `!=`, expected: []item{{itemNEQ, 0, `!=`}}, }, { input: `<`, expected: []item{{itemLSS, 0, `<`}}, }, { input: `>`, expected: []item{{itemGTR, 0, `>`}}, }, { input: `>=`, expected: []item{{itemGTE, 0, `>=`}}, }, { input: `<=`, expected: []item{{itemLTE, 0, `<=`}}, }, { input: `+`, expected: []item{{itemADD, 0, `+`}}, }, { input: `-`, expected: []item{{itemSUB, 0, `-`}}, }, { input: `*`, expected: []item{{itemMUL, 0, `*`}}, }, { input: `/`, expected: []item{{itemDIV, 0, `/`}}, }, { input: `%`, expected: []item{{itemMOD, 0, `%`}}, }, { input: `AND`, expected: []item{{itemLAND, 0, `AND`}}, }, { input: `or`, expected: []item{{itemLOR, 0, `or`}}, }, // Test aggregators. 
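// Illustrative case: keyword lookup lowercases the scanned word first
// (see lexKeywordOrIdentifier), so mixed-case aggregator names lex to the
// same item.
{
	input:    `stdVar`,
	expected: []item{{itemStdvar, 0, `stdVar`}},
},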
{ input: `sum`, expected: []item{{itemSum, 0, `sum`}}, }, { input: `AVG`, expected: []item{{itemAvg, 0, `AVG`}}, }, { input: `MAX`, expected: []item{{itemMax, 0, `MAX`}}, }, { input: `min`, expected: []item{{itemMin, 0, `min`}}, }, { input: `count`, expected: []item{{itemCount, 0, `count`}}, }, { input: `stdvar`, expected: []item{{itemStdvar, 0, `stdvar`}}, }, { input: `stddev`, expected: []item{{itemStddev, 0, `stddev`}}, }, // Test keywords. { input: "alert", expected: []item{{itemAlert, 0, "alert"}}, }, { input: "keeping_extra", expected: []item{{itemKeepCommon, 0, "keeping_extra"}}, }, { input: "keep_common", expected: []item{{itemKeepCommon, 0, "keep_common"}}, }, { input: "if", expected: []item{{itemIf, 0, "if"}}, }, { input: "for", expected: []item{{itemFor, 0, "for"}}, }, { input: "with", expected: []item{{itemWith, 0, "with"}}, }, { input: "description", expected: []item{{itemDescription, 0, "description"}}, }, { input: "summary", expected: []item{{itemSummary, 0, "summary"}}, }, { input: "runbook", expected: []item{{itemRunbook, 0, "runbook"}}, }, { input: "offset", expected: []item{{itemOffset, 0, "offset"}}, }, { input: "by", expected: []item{{itemBy, 0, "by"}}, }, { input: "on", expected: []item{{itemOn, 0, "on"}}, }, { input: "group_left", expected: []item{{itemGroupLeft, 0, "group_left"}}, }, { input: "group_right", expected: []item{{itemGroupRight, 0, "group_right"}}, }, { input: "bool", expected: []item{{itemBool, 0, "bool"}}, }, // Test Selector. { input: `台北`, fail: true, }, { input: `{台北='a'}`, fail: true, }, { input: `{0a='a'}`, fail: true, }, { input: `{foo='bar'}`, expected: []item{ {itemLeftBrace, 0, `{`}, {itemIdentifier, 1, `foo`}, {itemEQL, 4, `=`}, {itemString, 5, `'bar'`}, {itemRightBrace, 10, `}`}, }, }, { input: `{foo="bar"}`, expected: []item{ {itemLeftBrace, 0, `{`}, {itemIdentifier, 1, `foo`}, {itemEQL, 4, `=`}, {itemString, 5, `"bar"`}, {itemRightBrace, 10, `}`}, }, }, { input: `{foo="bar\"bar"}`, expected: []item{ {itemLeftBrace, 0, `{`}, {itemIdentifier, 1, `foo`}, {itemEQL, 4, `=`}, {itemString, 5, `"bar\"bar"`}, {itemRightBrace, 15, `}`}, }, }, { input: `{NaN != "bar" }`, expected: []item{ {itemLeftBrace, 0, `{`}, {itemIdentifier, 1, `NaN`}, {itemNEQ, 5, `!=`}, {itemString, 8, `"bar"`}, {itemRightBrace, 14, `}`}, }, }, { input: `{alert=~"bar" }`, expected: []item{ {itemLeftBrace, 0, `{`}, {itemIdentifier, 1, `alert`}, {itemEQLRegex, 6, `=~`}, {itemString, 8, `"bar"`}, {itemRightBrace, 14, `}`}, }, }, { input: `{on!~"bar"}`, expected: []item{ {itemLeftBrace, 0, `{`}, {itemIdentifier, 1, `on`}, {itemNEQRegex, 3, `!~`}, {itemString, 5, `"bar"`}, {itemRightBrace, 10, `}`}, }, }, { input: `{alert!#"bar"}`, fail: true, }, { input: `{foo:a="bar"}`, fail: true, }, // Test common errors. { input: `=~`, fail: true, }, { input: `!~`, fail: true, }, { input: `!(`, fail: true, }, { input: "1a", fail: true, }, // Test mismatched parens. { input: `(`, fail: true, }, { input: `())`, fail: true, }, { input: `(()`, fail: true, }, { input: `{`, fail: true, }, { input: `}`, fail: true, }, { input: "{{", fail: true, }, { input: "{{}}", fail: true, }, { input: `[`, fail: true, }, { input: `[[`, fail: true, }, { input: `[]]`, fail: true, }, { input: `[[]]`, fail: true, }, { input: `]`, fail: true, }, // Test series description. 
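// Illustrative case mirroring `metric 1+1x4` with a negative expansion.
{
	input: `metric 1-1x2`,
	expected: []item{
		{itemIdentifier, 0, `metric`},
		{itemNumber, 7, `1`},
		{itemSUB, 8, `-`},
		{itemNumber, 9, `1`},
		{itemTimes, 10, `x`},
		{itemNumber, 11, `2`},
	},
	seriesDesc: true,
},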
{ input: `{} _ 1 x .3`, expected: []item{ {itemLeftBrace, 0, `{`}, {itemRightBrace, 1, `}`}, {itemBlank, 3, `_`}, {itemNumber, 5, `1`}, {itemTimes, 7, `x`}, {itemNumber, 9, `.3`}, }, seriesDesc: true, }, { input: `metric +Inf Inf NaN`, expected: []item{ {itemIdentifier, 0, `metric`}, {itemADD, 7, `+`}, {itemNumber, 8, `Inf`}, {itemNumber, 12, `Inf`}, {itemNumber, 16, `NaN`}, }, seriesDesc: true, }, { input: `metric 1+1x4`, expected: []item{ {itemIdentifier, 0, `metric`}, {itemNumber, 7, `1`}, {itemADD, 8, `+`}, {itemNumber, 9, `1`}, {itemTimes, 10, `x`}, {itemNumber, 11, `4`}, }, seriesDesc: true, }, } // TestLexer tests basic functionality of the lexer. More elaborate tests are implemented // for the parser to avoid duplicated effort. func TestLexer(t *testing.T) { for i, test := range tests { l := lex(test.input) l.seriesDesc = test.seriesDesc out := []item{} for it := range l.items { out = append(out, it) } lastItem := out[len(out)-1] if test.fail { if lastItem.typ != itemError { t.Logf("%d: input %q", i, test.input) t.Fatalf("expected lexing error but did not fail") } continue } if lastItem.typ == itemError { t.Logf("%d: input %q", i, test.input) t.Fatalf("unexpected lexing error at position %d: %s", lastItem.pos, lastItem) } if !reflect.DeepEqual(lastItem, item{itemEOF, Pos(len(test.input)), ""}) { t.Logf("%d: input %q", i, test.input) t.Fatalf("lexing error: expected output to end with EOF item.\ngot:\n%s", expectedList(out)) } out = out[:len(out)-1] if !reflect.DeepEqual(out, test.expected) { t.Logf("%d: input %q", i, test.input) t.Fatalf("lexing mismatch:\nexpected:\n%s\ngot:\n%s", expectedList(test.expected), expectedList(out)) } } } func expectedList(exp []item) string { s := "" for _, it := range exp { s += fmt.Sprintf("\t%#v\n", it) } return s } prometheus-0.16.2+ds/promql/parse.go000066400000000000000000000645461265137125100173660ustar00rootroot00000000000000// Copyright 2015 The Prometheus Authors // Licensed under the Apache License, Version 2.0 (the "License"); // you may not use this file except in compliance with the License. // You may obtain a copy of the License at // // http://www.apache.org/licenses/LICENSE-2.0 // // Unless required by applicable law or agreed to in writing, software // distributed under the License is distributed on an "AS IS" BASIS, // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. // See the License for the specific language governing permissions and // limitations under the License. package promql import ( "fmt" "runtime" "strconv" "strings" "time" "github.com/prometheus/common/log" "github.com/prometheus/common/model" "github.com/prometheus/prometheus/storage/metric" "github.com/prometheus/prometheus/util/strutil" ) type parser struct { lex *lexer token [3]item peekCount int } // ParseErr wraps a parsing error with line and position context. // If the parsing input was a single line, line will be 0 and omitted // from the error string. type ParseErr struct { Line, Pos int Err error } func (e *ParseErr) Error() string { if e.Line == 0 { return fmt.Sprintf("parse error at char %d: %s", e.Pos, e.Err) } return fmt.Sprintf("parse error at line %d, char %d: %s", e.Line, e.Pos, e.Err) } // ParseStmts parses the input and returns the resulting statements or any occurring error. func ParseStmts(input string) (Statements, error) { p := newParser(input) stmts, err := p.parseStmts() if err != nil { return nil, err } err = p.typecheck(stmts) return stmts, err } // ParseExpr returns the expression parsed from the input. 
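// A typical call (illustrative metric name):
//
//	expr, err := ParseExpr(`rate(http_requests_total[5m])`)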
func ParseExpr(input string) (Expr, error) { p := newParser(input) expr, err := p.parseExpr() if err != nil { return nil, err } err = p.typecheck(expr) return expr, err } // ParseMetric parses the input into a metric func ParseMetric(input string) (m model.Metric, err error) { p := newParser(input) defer p.recover(&err) m = p.metric() if p.peek().typ != itemEOF { p.errorf("could not parse remaining input %.15q...", p.lex.input[p.lex.lastPos:]) } return m, nil } // ParseMetricSelector parses the provided textual metric selector into a list of // label matchers. func ParseMetricSelector(input string) (m metric.LabelMatchers, err error) { p := newParser(input) defer p.recover(&err) name := "" if t := p.peek().typ; t == itemMetricIdentifier || t == itemIdentifier { name = p.next().val } vs := p.vectorSelector(name) if p.peek().typ != itemEOF { p.errorf("could not parse remaining input %.15q...", p.lex.input[p.lex.lastPos:]) } return vs.LabelMatchers, nil } // parseSeriesDesc parses the description of a time series. func parseSeriesDesc(input string) (model.Metric, []sequenceValue, error) { p := newParser(input) p.lex.seriesDesc = true return p.parseSeriesDesc() } // newParser returns a new parser. func newParser(input string) *parser { p := &parser{ lex: lex(input), } return p } // parseStmts parses a sequence of statements from the input. func (p *parser) parseStmts() (stmts Statements, err error) { defer p.recover(&err) stmts = Statements{} for p.peek().typ != itemEOF { if p.peek().typ == itemComment { continue } stmts = append(stmts, p.stmt()) } return } // parseExpr parses a single expression from the input. func (p *parser) parseExpr() (expr Expr, err error) { defer p.recover(&err) for p.peek().typ != itemEOF { if p.peek().typ == itemComment { continue } if expr != nil { p.errorf("could not parse remaining input %.15q...", p.lex.input[p.lex.lastPos:]) } expr = p.expr() } if expr == nil { p.errorf("no expression found in input") } return } // sequenceValue is an omittable value in a sequence of time series values. type sequenceValue struct { value model.SampleValue omitted bool } func (v sequenceValue) String() string { if v.omitted { return "_" } return v.value.String() } // parseSeriesDesc parses a description of a time series into its metric and value sequence. func (p *parser) parseSeriesDesc() (m model.Metric, vals []sequenceValue, err error) { defer p.recover(&err) m = p.metric() const ctx = "series values" for { if p.peek().typ == itemEOF { break } // Extract blanks. if p.peek().typ == itemBlank { p.next() times := uint64(1) if p.peek().typ == itemTimes { p.next() times, err = strconv.ParseUint(p.expect(itemNumber, ctx).val, 10, 64) if err != nil { p.errorf("invalid repetition in %s: %s", ctx, err) } } for i := uint64(0); i < times; i++ { vals = append(vals, sequenceValue{omitted: true}) } continue } // Extract values. sign := 1.0 if t := p.peek().typ; t == itemSUB || t == itemADD { if p.next().typ == itemSUB { sign = -1 } } k := sign * p.number(p.expect(itemNumber, ctx).val) vals = append(vals, sequenceValue{ value: model.SampleValue(k), }) // If there are no offset repetitions specified, proceed with the next value. if t := p.peek().typ; t == itemNumber || t == itemBlank { continue } else if t == itemEOF { break } else if t != itemADD && t != itemSUB { p.errorf("expected next value or relative expansion in %s but got %s", ctx, t.desc()) } // Expand the repeated offsets into values. 
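// For example, `my_metric{a="b"} 1 2 3-10x4` (see the parser tests below)
// takes the last parsed value 3 and applies the offset -10 four times,
// yielding the sequence 1 2 3 -7 -17 -27 -37.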
sign = 1.0 if p.next().typ == itemSUB { sign = -1.0 } offset := sign * p.number(p.expect(itemNumber, ctx).val) p.expect(itemTimes, ctx) times, err := strconv.ParseUint(p.expect(itemNumber, ctx).val, 10, 64) if err != nil { p.errorf("invalid repetition in %s: %s", ctx, err) } for i := uint64(0); i < times; i++ { k += offset vals = append(vals, sequenceValue{ value: model.SampleValue(k), }) } } return m, vals, nil } // typecheck checks correct typing of the parsed statements or expression. func (p *parser) typecheck(node Node) (err error) { defer p.recover(&err) p.checkType(node) return nil } // next returns the next token. func (p *parser) next() item { if p.peekCount > 0 { p.peekCount-- } else { t := p.lex.nextItem() // Skip comments. for t.typ == itemComment { t = p.lex.nextItem() } p.token[0] = t } if p.token[p.peekCount].typ == itemError { p.errorf("%s", p.token[p.peekCount].val) } return p.token[p.peekCount] } // peek returns but does not consume the next token. func (p *parser) peek() item { if p.peekCount > 0 { return p.token[p.peekCount-1] } p.peekCount = 1 t := p.lex.nextItem() // Skip comments. for t.typ == itemComment { t = p.lex.nextItem() } p.token[0] = t return p.token[0] } // backup backs the input stream up one token. func (p *parser) backup() { p.peekCount++ } // errorf formats the error and terminates processing. func (p *parser) errorf(format string, args ...interface{}) { p.error(fmt.Errorf(format, args...)) } // error terminates processing. func (p *parser) error(err error) { perr := &ParseErr{ Line: p.lex.lineNumber(), Pos: p.lex.linePosition(), Err: err, } if strings.Count(strings.TrimSpace(p.lex.input), "\n") == 0 { perr.Line = 0 } panic(perr) } // expect consumes the next token and guarantees it has the required type. func (p *parser) expect(exp itemType, context string) item { token := p.next() if token.typ != exp { p.errorf("unexpected %s in %s, expected %s", token.desc(), context, exp.desc()) } return token } // expectOneOf consumes the next token and guarantees it has one of the required types. func (p *parser) expectOneOf(exp1, exp2 itemType, context string) item { token := p.next() if token.typ != exp1 && token.typ != exp2 { p.errorf("unexpected %s in %s, expected %s or %s", token.desc(), context, exp1.desc(), exp2.desc()) } return token } var errUnexpected = fmt.Errorf("unexpected error") // recover is the handler that turns panics into returns from the top level of Parse. func (p *parser) recover(errp *error) { e := recover() if e != nil { if _, ok := e.(runtime.Error); ok { // Print the stack trace but do not inhibit the running application. buf := make([]byte, 64<<10) buf = buf[:runtime.Stack(buf, false)] log.Errorf("parser panic: %v\n%s", e, buf) *errp = errUnexpected } else { *errp = e.(error) } } return } // stmt parses any statement. // // alertStatement | recordStatement // func (p *parser) stmt() Statement { switch tok := p.peek(); tok.typ { case itemAlert: return p.alertStmt() case itemIdentifier, itemMetricIdentifier: return p.recordStmt() } p.errorf("no valid statement detected") return nil } // alertStmt parses an alert rule. // // ALERT name IF expr [FOR duration] [WITH label_set] // SUMMARY "summary" // DESCRIPTION "description" // func (p *parser) alertStmt() *AlertStmt { const ctx = "alert statement" p.expect(itemAlert, ctx) name := p.expect(itemIdentifier, ctx) // Alerts require a vector typed expression. p.expect(itemIf, ctx) expr := p.expr() // Optional for clause. 
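// This is the `FOR <duration>` part of a rule such as (illustrative, all
// names made up):
//
//	ALERT HighErrorRate
//	  IF job:request_errors:rate5m > 0.5
//	  FOR 5m
//	  WITH {severity = "page"}
//	  SUMMARY "..."
//	  DESCRIPTION "..."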
var duration time.Duration var err error if p.peek().typ == itemFor { p.next() dur := p.expect(itemDuration, ctx) duration, err = parseDuration(dur.val) if err != nil { p.error(err) } } lset := model.LabelSet{} if p.peek().typ == itemWith { p.expect(itemWith, ctx) lset = p.labelSet() } var ( hasSum, hasDesc, hasRunbook bool sum, desc, runbook string ) Loop: for { switch p.next().typ { case itemSummary: if hasSum { p.errorf("summary must not be defined twice") } hasSum = true sum = p.unquoteString(p.expect(itemString, ctx).val) case itemDescription: if hasDesc { p.errorf("description must not be defined twice") } hasDesc = true desc = p.unquoteString(p.expect(itemString, ctx).val) case itemRunbook: if hasRunbook { p.errorf("runbook must not be defined twice") } hasRunbook = true runbook = p.unquoteString(p.expect(itemString, ctx).val) default: p.backup() break Loop } } if sum == "" { p.errorf("alert summary missing") } if desc == "" { p.errorf("alert description missing") } return &AlertStmt{ Name: name.val, Expr: expr, Duration: duration, Labels: lset, Summary: sum, Description: desc, Runbook: runbook, } } // recordStmt parses a recording rule. func (p *parser) recordStmt() *RecordStmt { const ctx = "record statement" name := p.expectOneOf(itemIdentifier, itemMetricIdentifier, ctx).val var lset model.LabelSet if p.peek().typ == itemLeftBrace { lset = p.labelSet() } p.expect(itemAssign, ctx) expr := p.expr() return &RecordStmt{ Name: name, Labels: lset, Expr: expr, } } // expr parses any expression. func (p *parser) expr() Expr { // Parse the starting expression. expr := p.unaryExpr() // Loop through the operations and construct a binary operation tree based // on the operators' precedence. for { // If the next token is not an operator the expression is done. op := p.peek().typ if !op.isOperator() { return expr } p.next() // Consume operator. // Parse optional operator matching options. Its validity // is checked in the type-checking stage. vecMatching := &VectorMatching{ Card: CardOneToOne, } if op == itemLAND || op == itemLOR { vecMatching.Card = CardManyToMany } returnBool := false // Parse bool modifier. if p.peek().typ == itemBool { if !op.isComparisonOperator() { p.errorf("bool modifier can only be used on comparison operators") } p.next() returnBool = true } // Parse ON clause. if p.peek().typ == itemOn { p.next() vecMatching.On = p.labels() // Parse grouping. if t := p.peek().typ; t == itemGroupLeft { p.next() vecMatching.Card = CardManyToOne vecMatching.Include = p.labels() } else if t == itemGroupRight { p.next() vecMatching.Card = CardOneToMany vecMatching.Include = p.labels() } } for _, ln := range vecMatching.On { for _, ln2 := range vecMatching.Include { if ln == ln2 { p.errorf("label %q must not occur in ON and INCLUDE clause at once", ln) } } } // Parse the next operand. rhs := p.unaryExpr() // Assign the new root based on the precendence of the LHS and RHS operators. 
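// For example, when parsing `a - b * c` the tree (a - b) has already been
// built when `*` is seen; because `*` binds tighter than `-`, the right
// operand of `-` is pushed down, giving a - (b * c).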
if lhs, ok := expr.(*BinaryExpr); ok && lhs.Op.precedence() < op.precedence() { expr = &BinaryExpr{ Op: lhs.Op, LHS: lhs.LHS, RHS: &BinaryExpr{ Op: op, LHS: lhs.RHS, RHS: rhs, VectorMatching: vecMatching, ReturnBool: returnBool, }, VectorMatching: lhs.VectorMatching, } if op.isComparisonOperator() && !returnBool && rhs.Type() == model.ValScalar && lhs.RHS.Type() == model.ValScalar { p.errorf("comparisons between scalars must use BOOL modifier") } } else { expr = &BinaryExpr{ Op: op, LHS: expr, RHS: rhs, VectorMatching: vecMatching, ReturnBool: returnBool, } if op.isComparisonOperator() && !returnBool && rhs.Type() == model.ValScalar && expr.Type() == model.ValScalar { p.errorf("comparisons between scalars must use BOOL modifier") } } } } // unaryExpr parses a unary expression. // // | | (+|-) | '(' ')' // func (p *parser) unaryExpr() Expr { switch t := p.peek(); t.typ { case itemADD, itemSUB: p.next() e := p.unaryExpr() // Simplify unary expressions for number literals. if nl, ok := e.(*NumberLiteral); ok { if t.typ == itemSUB { nl.Val *= -1 } return nl } return &UnaryExpr{Op: t.typ, Expr: e} case itemLeftParen: p.next() e := p.expr() p.expect(itemRightParen, "paren expression") return &ParenExpr{Expr: e} } e := p.primaryExpr() // Expression might be followed by a range selector. if p.peek().typ == itemLeftBracket { vs, ok := e.(*VectorSelector) if !ok { p.errorf("range specification must be preceded by a metric selector, but follows a %T instead", e) } e = p.rangeSelector(vs) } return e } // rangeSelector parses a matrix selector based on a given vector selector. // // '[' ']' // func (p *parser) rangeSelector(vs *VectorSelector) *MatrixSelector { const ctx = "matrix selector" p.next() var erange, offset time.Duration var err error erangeStr := p.expect(itemDuration, ctx).val erange, err = parseDuration(erangeStr) if err != nil { p.error(err) } p.expect(itemRightBracket, ctx) // Parse optional offset. if p.peek().typ == itemOffset { p.next() offi := p.expect(itemDuration, ctx) offset, err = parseDuration(offi.val) if err != nil { p.error(err) } } e := &MatrixSelector{ Name: vs.Name, LabelMatchers: vs.LabelMatchers, Range: erange, Offset: offset, } return e } // parseNumber parses a number. func (p *parser) number(val string) float64 { n, err := strconv.ParseInt(val, 0, 64) f := float64(n) if err != nil { f, err = strconv.ParseFloat(val, 64) } if err != nil { p.errorf("error parsing number: %s", err) } return f } // primaryExpr parses a primary expression. // // | | | // func (p *parser) primaryExpr() Expr { switch t := p.next(); { case t.typ == itemNumber: f := p.number(t.val) return &NumberLiteral{model.SampleValue(f)} case t.typ == itemString: return &StringLiteral{p.unquoteString(t.val)} case t.typ == itemLeftBrace: // Metric selector without metric name. p.backup() return p.vectorSelector("") case t.typ == itemIdentifier: // Check for function call. if p.peek().typ == itemLeftParen { return p.call(t.val) } fallthrough // Else metric selector. case t.typ == itemMetricIdentifier: return p.vectorSelector(t.val) case t.typ.isAggregator(): p.backup() return p.aggrExpr() default: p.errorf("no valid expression found") } return nil } // labels parses a list of labelnames. // // '(' , ... 
')' // func (p *parser) labels() model.LabelNames { const ctx = "grouping opts" p.expect(itemLeftParen, ctx) labels := model.LabelNames{} for { id := p.expect(itemIdentifier, ctx) labels = append(labels, model.LabelName(id.val)) if p.peek().typ != itemComma { break } p.next() } p.expect(itemRightParen, ctx) return labels } // aggrExpr parses an aggregation expression. // // () [by ] [keep_common] // [by ] [keep_common] () // func (p *parser) aggrExpr() *AggregateExpr { const ctx = "aggregation" agop := p.next() if !agop.typ.isAggregator() { p.errorf("expected aggregation operator but got %s", agop) } var grouping model.LabelNames var keepExtra bool modifiersFirst := false if p.peek().typ == itemBy { p.next() grouping = p.labels() modifiersFirst = true } if p.peek().typ == itemKeepCommon { p.next() keepExtra = true modifiersFirst = true } p.expect(itemLeftParen, ctx) e := p.expr() p.expect(itemRightParen, ctx) if !modifiersFirst { if p.peek().typ == itemBy { if len(grouping) > 0 { p.errorf("aggregation must only contain one grouping clause") } p.next() grouping = p.labels() } if p.peek().typ == itemKeepCommon { p.next() keepExtra = true } } return &AggregateExpr{ Op: agop.typ, Expr: e, Grouping: grouping, KeepExtraLabels: keepExtra, } } // call parses a function call. // // '(' [ , ...] ')' // func (p *parser) call(name string) *Call { const ctx = "function call" fn, exist := getFunction(name) if !exist { p.errorf("unknown function with name %q", name) } p.expect(itemLeftParen, ctx) // Might be call without args. if p.peek().typ == itemRightParen { p.next() // Consume. return &Call{fn, nil} } var args []Expr for { e := p.expr() args = append(args, e) // Terminate if no more arguments. if p.peek().typ != itemComma { break } p.next() } // Call must be closed. p.expect(itemRightParen, ctx) return &Call{Func: fn, Args: args} } // labelSet parses a set of label matchers // // '{' [ '=' , ... ] '}' // func (p *parser) labelSet() model.LabelSet { set := model.LabelSet{} for _, lm := range p.labelMatchers(itemEQL) { set[lm.Name] = lm.Value } return set } // labelMatchers parses a set of label matchers. // // '{' [ , ... ] '}' // func (p *parser) labelMatchers(operators ...itemType) metric.LabelMatchers { const ctx = "label matching" matchers := metric.LabelMatchers{} p.expect(itemLeftBrace, ctx) // Check if no matchers are provided. if p.peek().typ == itemRightBrace { p.next() return matchers } for { label := p.expect(itemIdentifier, ctx) op := p.next().typ if !op.isOperator() { p.errorf("expected label matching operator but got %s", op) } var validOp = false for _, allowedOp := range operators { if op == allowedOp { validOp = true } } if !validOp { p.errorf("operator must be one of %q, is %q", operators, op) } val := p.unquoteString(p.expect(itemString, ctx).val) // Map the item to the respective match type. var matchType metric.MatchType switch op { case itemEQL: matchType = metric.Equal case itemNEQ: matchType = metric.NotEqual case itemEQLRegex: matchType = metric.RegexMatch case itemNEQRegex: matchType = metric.RegexNoMatch default: p.errorf("item %q is not a metric match type", op) } m, err := metric.NewLabelMatcher( matchType, model.LabelName(label.val), model.LabelValue(val), ) if err != nil { p.error(err) } matchers = append(matchers, m) // Terminate list if last matcher. if p.peek().typ != itemComma { break } p.next() } p.expect(itemRightBrace, ctx) return matchers } // metric parses a metric. 
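// Accepted forms are a bare label set, a metric name, or a name followed
// by a label set, e.g. `{a="b"}`, `my_metric`, or `my_metric{a="b"}`
// (cf. the series tests in parse_test.go):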
//
//	<label_set>
//	<metric_identifier> [<label_set>]
//
func (p *parser) metric() model.Metric {
	name := ""
	m := model.Metric{}

	t := p.peek().typ
	if t == itemIdentifier || t == itemMetricIdentifier {
		name = p.next().val
		t = p.peek().typ
	}
	if t != itemLeftBrace && name == "" {
		p.errorf("missing metric name or metric selector")
	}
	if t == itemLeftBrace {
		m = model.Metric(p.labelSet())
	}
	if name != "" {
		m[model.MetricNameLabel] = model.LabelValue(name)
	}
	return m
}

// vectorSelector parses a new vector selector.
//
//	<metric_identifier> [<label_matchers>] [ offset <duration> ]
//	[<metric_identifier>] <label_matchers> [ offset <duration> ]
//
func (p *parser) vectorSelector(name string) *VectorSelector {
	const ctx = "metric selector"

	var matchers metric.LabelMatchers
	// Parse label matching if any.
	if t := p.peek(); t.typ == itemLeftBrace {
		matchers = p.labelMatchers(itemEQL, itemNEQ, itemEQLRegex, itemNEQRegex)
	}
	// The metric name must not be set both as the selector prefix and in
	// the label matchers.
	if name != "" {
		for _, m := range matchers {
			if m.Name == model.MetricNameLabel {
				p.errorf("metric name must not be set twice: %q or %q", name, m.Value)
			}
		}
		// Set name label matching.
		matchers = append(matchers, &metric.LabelMatcher{
			Type:  metric.Equal,
			Name:  model.MetricNameLabel,
			Value: model.LabelValue(name),
		})
	}

	if len(matchers) == 0 {
		p.errorf("vector selector must contain label matchers or metric name")
	}
	// A vector selector must contain at least one non-empty matcher to prevent
	// implicit selection of all metrics (e.g. by a typo).
	notEmpty := false
	for _, lm := range matchers {
		// Matching changes the inner state of the regex and causes reflect.DeepEqual
		// to return false, which breaks tests.
		// Thus, we create a new label matcher for this check.
		lm, err := metric.NewLabelMatcher(lm.Type, lm.Name, lm.Value)
		if err != nil {
			p.error(err)
		}
		if !lm.Match("") {
			notEmpty = true
			break
		}
	}
	if !notEmpty {
		p.errorf("vector selector must contain at least one non-empty matcher")
	}

	var err error
	var offset time.Duration
	// Parse optional offset.
	if p.peek().typ == itemOffset {
		p.next()
		offi := p.expect(itemDuration, ctx)

		offset, err = parseDuration(offi.val)
		if err != nil {
			p.error(err)
		}
	}
	return &VectorSelector{
		Name:          name,
		LabelMatchers: matchers,
		Offset:        offset,
	}
}

// expectType checks the type of the node and raises an error if it
// is not of the expected type.
func (p *parser) expectType(node Node, want model.ValueType, context string) {
	t := p.checkType(node)
	if t != want {
		p.errorf("expected type %s in %s, got %s", want, context, t)
	}
}

// checkType checks the types of the children of each node and raises an error
// if they do not form a valid node.
//
// Some of these checks are redundant as the parsing stage does not allow
// them, but the costs are small and might reveal errors when making changes.
func (p *parser) checkType(node Node) (typ model.ValueType) {
	// For expressions the type is determined by their Type function.
	// Statements and lists do not have a type but are not invalid either.
	switch n := node.(type) {
	case Statements, Expressions, Statement:
		typ = model.ValNone
	case Expr:
		typ = n.Type()
	default:
		p.errorf("unknown node type: %T", node)
	}

	// Recursively check correct typing for child nodes and raise
	// errors in case of bad typing.
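	// For example, `floor(1)` parses syntactically but is rejected here
	// because floor expects a vector argument, and `1 and 1` is rejected
	// because AND/OR are not defined between scalars (see the parser tests).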
switch n := node.(type) { case Statements: for _, s := range n { p.expectType(s, model.ValNone, "statement list") } case *AlertStmt: p.expectType(n.Expr, model.ValVector, "alert statement") case *EvalStmt: ty := p.checkType(n.Expr) if ty == model.ValNone { p.errorf("evaluation statement must have a valid expression type but got %s", ty) } case *RecordStmt: ty := p.checkType(n.Expr) if ty != model.ValVector && ty != model.ValScalar { p.errorf("record statement must have a valid expression of type vector or scalar but got %s", ty) } case Expressions: for _, e := range n { ty := p.checkType(e) if ty == model.ValNone { p.errorf("expression must have a valid expression type but got %s", ty) } } case *AggregateExpr: if !n.Op.isAggregator() { p.errorf("aggregation operator expected in aggregation expression but got %q", n.Op) } p.expectType(n.Expr, model.ValVector, "aggregation expression") case *BinaryExpr: lt := p.checkType(n.LHS) rt := p.checkType(n.RHS) if !n.Op.isOperator() { p.errorf("only logical and arithmetic operators allowed in binary expression, got %q", n.Op) } if (lt != model.ValScalar && lt != model.ValVector) || (rt != model.ValScalar && rt != model.ValVector) { p.errorf("binary expression must contain only scalar and vector types") } if (lt != model.ValVector || rt != model.ValVector) && n.VectorMatching != nil { if len(n.VectorMatching.On) > 0 { p.errorf("vector matching only allowed between vectors") } n.VectorMatching = nil } else { // Both operands are vectors. if n.Op == itemLAND || n.Op == itemLOR { if n.VectorMatching.Card == CardOneToMany || n.VectorMatching.Card == CardManyToOne { p.errorf("no grouping allowed for AND and OR operations") } if n.VectorMatching.Card != CardManyToMany { p.errorf("AND and OR operations must always be many-to-many") } } } if (lt == model.ValScalar || rt == model.ValScalar) && (n.Op == itemLAND || n.Op == itemLOR) { p.errorf("AND and OR not allowed in binary scalar expression") } case *Call: nargs := len(n.Func.ArgTypes) if na := nargs - n.Func.OptionalArgs; na > len(n.Args) { p.errorf("expected at least %d argument(s) in call to %q, got %d", na, n.Func.Name, len(n.Args)) } if nargs < len(n.Args) { p.errorf("expected at most %d argument(s) in call to %q, got %d", nargs, n.Func.Name, len(n.Args)) } for i, arg := range n.Args { p.expectType(arg, n.Func.ArgTypes[i], fmt.Sprintf("call to function %q", n.Func.Name)) } case *ParenExpr: p.checkType(n.Expr) case *UnaryExpr: if n.Op != itemADD && n.Op != itemSUB { p.errorf("only + and - operators allowed for unary expressions") } if t := p.checkType(n.Expr); t != model.ValScalar && t != model.ValVector { p.errorf("unary expression only allowed on expressions of type scalar or vector, got %q", t) } case *NumberLiteral, *MatrixSelector, *StringLiteral, *VectorSelector: // Nothing to do for terminals. 
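// Their types are fixed, e.g. a NumberLiteral is always a scalar and a
// MatrixSelector always a matrix, so no child checks are needed.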
default: p.errorf("unknown node type: %T", node) } return } func (p *parser) unquoteString(s string) string { unquoted, err := strutil.Unquote(s) if err != nil { p.errorf("error unquoting string %q: %s", s, err) } return unquoted } func parseDuration(ds string) (time.Duration, error) { dur, err := strutil.StringToDuration(ds) if err != nil { return 0, err } if dur == 0 { return 0, fmt.Errorf("duration must be greater than 0") } return dur, nil } prometheus-0.16.2+ds/promql/parse_test.go000066400000000000000000001167721265137125100204300ustar00rootroot00000000000000// Copyright 2015 The Prometheus Authors // Licensed under the Apache License, Version 2.0 (the "License"); // you may not use this file except in compliance with the License. // You may obtain a copy of the License at // // http://www.apache.org/licenses/LICENSE-2.0 // // Unless required by applicable law or agreed to in writing, software // distributed under the License is distributed on an "AS IS" BASIS, // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. // See the License for the specific language governing permissions and // limitations under the License. package promql import ( "fmt" "math" "reflect" "strings" "testing" "time" "github.com/prometheus/common/model" "github.com/prometheus/prometheus/storage/metric" ) var testExpr = []struct { input string // The input to be parsed. expected Expr // The expected expression AST. fail bool // Whether parsing is supposed to fail. errMsg string // If not empty the parsing error has to contain this string. }{ // Scalars and scalar-to-scalar operations. { input: "1", expected: &NumberLiteral{1}, }, { input: "+Inf", expected: &NumberLiteral{model.SampleValue(math.Inf(1))}, }, { input: "-Inf", expected: &NumberLiteral{model.SampleValue(math.Inf(-1))}, }, { input: ".5", expected: &NumberLiteral{0.5}, }, { input: "5.", expected: &NumberLiteral{5}, }, { input: "123.4567", expected: &NumberLiteral{123.4567}, }, { input: "5e-3", expected: &NumberLiteral{0.005}, }, { input: "5e3", expected: &NumberLiteral{5000}, }, { input: "0xc", expected: &NumberLiteral{12}, }, { input: "0755", expected: &NumberLiteral{493}, }, { input: "+5.5e-3", expected: &NumberLiteral{0.0055}, }, { input: "-0755", expected: &NumberLiteral{-493}, }, { input: "1 + 1", expected: &BinaryExpr{itemADD, &NumberLiteral{1}, &NumberLiteral{1}, nil, false}, }, { input: "1 - 1", expected: &BinaryExpr{itemSUB, &NumberLiteral{1}, &NumberLiteral{1}, nil, false}, }, { input: "1 * 1", expected: &BinaryExpr{itemMUL, &NumberLiteral{1}, &NumberLiteral{1}, nil, false}, }, { input: "1 % 1", expected: &BinaryExpr{itemMOD, &NumberLiteral{1}, &NumberLiteral{1}, nil, false}, }, { input: "1 / 1", expected: &BinaryExpr{itemDIV, &NumberLiteral{1}, &NumberLiteral{1}, nil, false}, }, { input: "1 == bool 1", expected: &BinaryExpr{itemEQL, &NumberLiteral{1}, &NumberLiteral{1}, nil, true}, }, { input: "1 != bool 1", expected: &BinaryExpr{itemNEQ, &NumberLiteral{1}, &NumberLiteral{1}, nil, true}, }, { input: "1 > bool 1", expected: &BinaryExpr{itemGTR, &NumberLiteral{1}, &NumberLiteral{1}, nil, true}, }, { input: "1 >= bool 1", expected: &BinaryExpr{itemGTE, &NumberLiteral{1}, &NumberLiteral{1}, nil, true}, }, { input: "1 < bool 1", expected: &BinaryExpr{itemLSS, &NumberLiteral{1}, &NumberLiteral{1}, nil, true}, }, { input: "1 <= bool 1", expected: &BinaryExpr{itemLTE, &NumberLiteral{1}, &NumberLiteral{1}, nil, true}, }, { input: "+1 + -2 * 1", expected: &BinaryExpr{ Op: itemADD, LHS: &NumberLiteral{1}, RHS: &BinaryExpr{ Op: 
itemMUL, LHS: &NumberLiteral{-2}, RHS: &NumberLiteral{1}, }, }, }, { input: "1 + 2/(3*1)", expected: &BinaryExpr{ Op: itemADD, LHS: &NumberLiteral{1}, RHS: &BinaryExpr{ Op: itemDIV, LHS: &NumberLiteral{2}, RHS: &ParenExpr{&BinaryExpr{ Op: itemMUL, LHS: &NumberLiteral{3}, RHS: &NumberLiteral{1}, }}, }, }, }, { input: "-some_metric", expected: &UnaryExpr{ Op: itemSUB, Expr: &VectorSelector{ Name: "some_metric", LabelMatchers: metric.LabelMatchers{ {Type: metric.Equal, Name: model.MetricNameLabel, Value: "some_metric"}, }, }, }, }, { input: "+some_metric", expected: &UnaryExpr{ Op: itemADD, Expr: &VectorSelector{ Name: "some_metric", LabelMatchers: metric.LabelMatchers{ {Type: metric.Equal, Name: model.MetricNameLabel, Value: "some_metric"}, }, }, }, }, { input: "", fail: true, errMsg: "no expression found in input", }, { input: "# just a comment\n\n", fail: true, errMsg: "no expression found in input", }, { input: "1+", fail: true, errMsg: "no valid expression found", }, { input: ".", fail: true, errMsg: "unexpected character: '.'", }, { input: "2.5.", fail: true, errMsg: "could not parse remaining input \".\"...", }, { input: "100..4", fail: true, errMsg: "could not parse remaining input \".4\"...", }, { input: "0deadbeef", fail: true, errMsg: "bad number or duration syntax: \"0de\"", }, { input: "1 /", fail: true, errMsg: "no valid expression found", }, { input: "*1", fail: true, errMsg: "no valid expression found", }, { input: "(1))", fail: true, errMsg: "could not parse remaining input \")\"...", }, { input: "((1)", fail: true, errMsg: "unclosed left parenthesis", }, { input: "999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999", fail: true, errMsg: "out of range", }, { input: "(", fail: true, errMsg: "unclosed left parenthesis", }, { input: "1 and 1", fail: true, errMsg: "AND and OR not allowed in binary scalar expression", }, { input: "1 == 1", fail: true, errMsg: "parse error at char 7: comparisons between scalars must use BOOL modifier", }, { input: "1 or 1", fail: true, errMsg: "AND and OR not allowed in binary scalar expression", }, { input: "1 !~ 1", fail: true, errMsg: "could not parse remaining input \"!~ 1\"...", }, { input: "1 =~ 1", fail: true, errMsg: "could not parse remaining input \"=~ 1\"...", }, { input: `-"string"`, fail: true, errMsg: `unary expression only allowed on expressions of type scalar or vector, got "string"`, }, { input: `-test[5m]`, fail: true, errMsg: `unary expression only allowed on expressions of type scalar or vector, got "matrix"`, }, { input: `*test`, fail: true, errMsg: "no valid expression found", }, // Vector binary operations. 
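// In the expected ASTs below, VectorMatching.Card records the matching
// cardinality: CardOneToOne for plain vector-to-vector arithmetic,
// CardManyToMany for `and`/`or`, and CardManyToOne/CardOneToMany when
// group_left/group_right is used.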
{ input: "foo * bar", expected: &BinaryExpr{ Op: itemMUL, LHS: &VectorSelector{ Name: "foo", LabelMatchers: metric.LabelMatchers{ {Type: metric.Equal, Name: model.MetricNameLabel, Value: "foo"}, }, }, RHS: &VectorSelector{ Name: "bar", LabelMatchers: metric.LabelMatchers{ {Type: metric.Equal, Name: model.MetricNameLabel, Value: "bar"}, }, }, VectorMatching: &VectorMatching{Card: CardOneToOne}, }, }, { input: "foo == 1", expected: &BinaryExpr{ Op: itemEQL, LHS: &VectorSelector{ Name: "foo", LabelMatchers: metric.LabelMatchers{ {Type: metric.Equal, Name: model.MetricNameLabel, Value: "foo"}, }, }, RHS: &NumberLiteral{1}, }, }, { input: "foo == bool 1", expected: &BinaryExpr{ Op: itemEQL, LHS: &VectorSelector{ Name: "foo", LabelMatchers: metric.LabelMatchers{ {Type: metric.Equal, Name: model.MetricNameLabel, Value: "foo"}, }, }, RHS: &NumberLiteral{1}, ReturnBool: true, }, }, { input: "2.5 / bar", expected: &BinaryExpr{ Op: itemDIV, LHS: &NumberLiteral{2.5}, RHS: &VectorSelector{ Name: "bar", LabelMatchers: metric.LabelMatchers{ {Type: metric.Equal, Name: model.MetricNameLabel, Value: "bar"}, }, }, }, }, { input: "foo and bar", expected: &BinaryExpr{ Op: itemLAND, LHS: &VectorSelector{ Name: "foo", LabelMatchers: metric.LabelMatchers{ {Type: metric.Equal, Name: model.MetricNameLabel, Value: "foo"}, }, }, RHS: &VectorSelector{ Name: "bar", LabelMatchers: metric.LabelMatchers{ {Type: metric.Equal, Name: model.MetricNameLabel, Value: "bar"}, }, }, VectorMatching: &VectorMatching{Card: CardManyToMany}, }, }, { input: "foo or bar", expected: &BinaryExpr{ Op: itemLOR, LHS: &VectorSelector{ Name: "foo", LabelMatchers: metric.LabelMatchers{ {Type: metric.Equal, Name: model.MetricNameLabel, Value: "foo"}, }, }, RHS: &VectorSelector{ Name: "bar", LabelMatchers: metric.LabelMatchers{ {Type: metric.Equal, Name: model.MetricNameLabel, Value: "bar"}, }, }, VectorMatching: &VectorMatching{Card: CardManyToMany}, }, }, { // Test and/or precedence and reassigning of operands. input: "foo + bar or bla and blub", expected: &BinaryExpr{ Op: itemLOR, LHS: &BinaryExpr{ Op: itemADD, LHS: &VectorSelector{ Name: "foo", LabelMatchers: metric.LabelMatchers{ {Type: metric.Equal, Name: model.MetricNameLabel, Value: "foo"}, }, }, RHS: &VectorSelector{ Name: "bar", LabelMatchers: metric.LabelMatchers{ {Type: metric.Equal, Name: model.MetricNameLabel, Value: "bar"}, }, }, VectorMatching: &VectorMatching{Card: CardOneToOne}, }, RHS: &BinaryExpr{ Op: itemLAND, LHS: &VectorSelector{ Name: "bla", LabelMatchers: metric.LabelMatchers{ {Type: metric.Equal, Name: model.MetricNameLabel, Value: "bla"}, }, }, RHS: &VectorSelector{ Name: "blub", LabelMatchers: metric.LabelMatchers{ {Type: metric.Equal, Name: model.MetricNameLabel, Value: "blub"}, }, }, VectorMatching: &VectorMatching{Card: CardManyToMany}, }, VectorMatching: &VectorMatching{Card: CardManyToMany}, }, }, { // Test precedence and reassigning of operands. 
input: "bar + on(foo) bla / on(baz, buz) group_right(test) blub", expected: &BinaryExpr{ Op: itemADD, LHS: &VectorSelector{ Name: "bar", LabelMatchers: metric.LabelMatchers{ {Type: metric.Equal, Name: model.MetricNameLabel, Value: "bar"}, }, }, RHS: &BinaryExpr{ Op: itemDIV, LHS: &VectorSelector{ Name: "bla", LabelMatchers: metric.LabelMatchers{ {Type: metric.Equal, Name: model.MetricNameLabel, Value: "bla"}, }, }, RHS: &VectorSelector{ Name: "blub", LabelMatchers: metric.LabelMatchers{ {Type: metric.Equal, Name: model.MetricNameLabel, Value: "blub"}, }, }, VectorMatching: &VectorMatching{ Card: CardOneToMany, On: model.LabelNames{"baz", "buz"}, Include: model.LabelNames{"test"}, }, }, VectorMatching: &VectorMatching{ Card: CardOneToOne, On: model.LabelNames{"foo"}, }, }, }, { input: "foo * on(test,blub) bar", expected: &BinaryExpr{ Op: itemMUL, LHS: &VectorSelector{ Name: "foo", LabelMatchers: metric.LabelMatchers{ {Type: metric.Equal, Name: model.MetricNameLabel, Value: "foo"}, }, }, RHS: &VectorSelector{ Name: "bar", LabelMatchers: metric.LabelMatchers{ {Type: metric.Equal, Name: model.MetricNameLabel, Value: "bar"}, }, }, VectorMatching: &VectorMatching{ Card: CardOneToOne, On: model.LabelNames{"test", "blub"}, }, }, }, { input: "foo and on(test,blub) bar", expected: &BinaryExpr{ Op: itemLAND, LHS: &VectorSelector{ Name: "foo", LabelMatchers: metric.LabelMatchers{ {Type: metric.Equal, Name: model.MetricNameLabel, Value: "foo"}, }, }, RHS: &VectorSelector{ Name: "bar", LabelMatchers: metric.LabelMatchers{ {Type: metric.Equal, Name: model.MetricNameLabel, Value: "bar"}, }, }, VectorMatching: &VectorMatching{ Card: CardManyToMany, On: model.LabelNames{"test", "blub"}, }, }, }, { input: "foo / on(test,blub) group_left(bar) bar", expected: &BinaryExpr{ Op: itemDIV, LHS: &VectorSelector{ Name: "foo", LabelMatchers: metric.LabelMatchers{ {Type: metric.Equal, Name: model.MetricNameLabel, Value: "foo"}, }, }, RHS: &VectorSelector{ Name: "bar", LabelMatchers: metric.LabelMatchers{ {Type: metric.Equal, Name: model.MetricNameLabel, Value: "bar"}, }, }, VectorMatching: &VectorMatching{ Card: CardManyToOne, On: model.LabelNames{"test", "blub"}, Include: model.LabelNames{"bar"}, }, }, }, { input: "foo - on(test,blub) group_right(bar,foo) bar", expected: &BinaryExpr{ Op: itemSUB, LHS: &VectorSelector{ Name: "foo", LabelMatchers: metric.LabelMatchers{ {Type: metric.Equal, Name: model.MetricNameLabel, Value: "foo"}, }, }, RHS: &VectorSelector{ Name: "bar", LabelMatchers: metric.LabelMatchers{ {Type: metric.Equal, Name: model.MetricNameLabel, Value: "bar"}, }, }, VectorMatching: &VectorMatching{ Card: CardOneToMany, On: model.LabelNames{"test", "blub"}, Include: model.LabelNames{"bar", "foo"}, }, }, }, { input: "foo and 1", fail: true, errMsg: "AND and OR not allowed in binary scalar expression", }, { input: "1 and foo", fail: true, errMsg: "AND and OR not allowed in binary scalar expression", }, { input: "foo or 1", fail: true, errMsg: "AND and OR not allowed in binary scalar expression", }, { input: "1 or foo", fail: true, errMsg: "AND and OR not allowed in binary scalar expression", }, { input: "1 or on(bar) foo", fail: true, errMsg: "vector matching only allowed between vectors", }, { input: "foo == on(bar) 10", fail: true, errMsg: "vector matching only allowed between vectors", }, { input: "foo and on(bar) group_left(baz) bar", fail: true, errMsg: "no grouping allowed for AND and OR operations", }, { input: "foo and on(bar) group_right(baz) bar", fail: true, errMsg: "no grouping allowed for AND and 
OR operations", }, { input: "foo or on(bar) group_left(baz) bar", fail: true, errMsg: "no grouping allowed for AND and OR operations", }, { input: "foo or on(bar) group_right(baz) bar", fail: true, errMsg: "no grouping allowed for AND and OR operations", }, { input: `http_requests{group="production"} / on(instance) group_left cpu_count{type="smp"}`, fail: true, errMsg: "unexpected identifier \"cpu_count\" in grouping opts, expected \"(\"", }, { input: `http_requests{group="production"} + on(instance) group_left(job,instance) cpu_count{type="smp"}`, fail: true, errMsg: "label \"instance\" must not occur in ON and INCLUDE clause at once", }, { input: "foo + bool bar", fail: true, errMsg: "bool modifier can only be used on comparison operators", }, { input: "foo + bool 10", fail: true, errMsg: "bool modifier can only be used on comparison operators", }, { input: "foo and bool 10", fail: true, errMsg: "bool modifier can only be used on comparison operators", }, // Test vector selector. { input: "foo", expected: &VectorSelector{ Name: "foo", Offset: 0, LabelMatchers: metric.LabelMatchers{ {Type: metric.Equal, Name: model.MetricNameLabel, Value: "foo"}, }, }, }, { input: "foo offset 5m", expected: &VectorSelector{ Name: "foo", Offset: 5 * time.Minute, LabelMatchers: metric.LabelMatchers{ {Type: metric.Equal, Name: model.MetricNameLabel, Value: "foo"}, }, }, }, { input: `foo:bar{a="bc"}`, expected: &VectorSelector{ Name: "foo:bar", Offset: 0, LabelMatchers: metric.LabelMatchers{ {Type: metric.Equal, Name: "a", Value: "bc"}, {Type: metric.Equal, Name: model.MetricNameLabel, Value: "foo:bar"}, }, }, }, { input: `foo{NaN='bc'}`, expected: &VectorSelector{ Name: "foo", Offset: 0, LabelMatchers: metric.LabelMatchers{ {Type: metric.Equal, Name: "NaN", Value: "bc"}, {Type: metric.Equal, Name: model.MetricNameLabel, Value: "foo"}, }, }, }, { input: `foo{a="b", foo!="bar", test=~"test", bar!~"baz"}`, expected: &VectorSelector{ Name: "foo", Offset: 0, LabelMatchers: metric.LabelMatchers{ {Type: metric.Equal, Name: "a", Value: "b"}, {Type: metric.NotEqual, Name: "foo", Value: "bar"}, mustLabelMatcher(metric.RegexMatch, "test", "test"), mustLabelMatcher(metric.RegexNoMatch, "bar", "baz"), {Type: metric.Equal, Name: model.MetricNameLabel, Value: "foo"}, }, }, }, { input: `{`, fail: true, errMsg: "unexpected end of input inside braces", }, { input: `}`, fail: true, errMsg: "unexpected character: '}'", }, { input: `some{`, fail: true, errMsg: "unexpected end of input inside braces", }, { input: `some}`, fail: true, errMsg: "could not parse remaining input \"}\"...", }, { input: `some_metric{a=b}`, fail: true, errMsg: "unexpected identifier \"b\" in label matching, expected string", }, { input: `some_metric{a:b="b"}`, fail: true, errMsg: "unexpected character inside braces: ':'", }, { input: `foo{a*"b"}`, fail: true, errMsg: "unexpected character inside braces: '*'", }, { input: `foo{a>="b"}`, fail: true, // TODO(fabxc): willingly lexing wrong tokens allows for more precrise error // messages from the parser - consider if this is an option. 
errMsg: "unexpected character inside braces: '>'", }, { input: `foo{gibberish}`, fail: true, errMsg: "expected label matching operator but got }", }, { input: `foo{1}`, fail: true, errMsg: "unexpected character inside braces: '1'", }, { input: `{}`, fail: true, errMsg: "vector selector must contain label matchers or metric name", }, { input: `{x=""}`, fail: true, errMsg: "vector selector must contain at least one non-empty matcher", }, { input: `{x=~".*"}`, fail: true, errMsg: "vector selector must contain at least one non-empty matcher", }, { input: `{x!~".+"}`, fail: true, errMsg: "vector selector must contain at least one non-empty matcher", }, { input: `{x!="a"}`, fail: true, errMsg: "vector selector must contain at least one non-empty matcher", }, { input: `foo{__name__="bar"}`, fail: true, errMsg: "metric name must not be set twice: \"foo\" or \"bar\"", // }, { // input: `:foo`, // fail: true, // errMsg: "bla", }, // Test matrix selector. { input: "test[5s]", expected: &MatrixSelector{ Name: "test", Offset: 0, Range: 5 * time.Second, LabelMatchers: metric.LabelMatchers{ {Type: metric.Equal, Name: model.MetricNameLabel, Value: "test"}, }, }, }, { input: "test[5m]", expected: &MatrixSelector{ Name: "test", Offset: 0, Range: 5 * time.Minute, LabelMatchers: metric.LabelMatchers{ {Type: metric.Equal, Name: model.MetricNameLabel, Value: "test"}, }, }, }, { input: "test[5h] OFFSET 5m", expected: &MatrixSelector{ Name: "test", Offset: 5 * time.Minute, Range: 5 * time.Hour, LabelMatchers: metric.LabelMatchers{ {Type: metric.Equal, Name: model.MetricNameLabel, Value: "test"}, }, }, }, { input: "test[5d] OFFSET 10s", expected: &MatrixSelector{ Name: "test", Offset: 10 * time.Second, Range: 5 * 24 * time.Hour, LabelMatchers: metric.LabelMatchers{ {Type: metric.Equal, Name: model.MetricNameLabel, Value: "test"}, }, }, }, { input: "test[5w] offset 2w", expected: &MatrixSelector{ Name: "test", Offset: 14 * 24 * time.Hour, Range: 5 * 7 * 24 * time.Hour, LabelMatchers: metric.LabelMatchers{ {Type: metric.Equal, Name: model.MetricNameLabel, Value: "test"}, }, }, }, { input: `test{a="b"}[5y] OFFSET 3d`, expected: &MatrixSelector{ Name: "test", Offset: 3 * 24 * time.Hour, Range: 5 * 365 * 24 * time.Hour, LabelMatchers: metric.LabelMatchers{ {Type: metric.Equal, Name: "a", Value: "b"}, {Type: metric.Equal, Name: model.MetricNameLabel, Value: "test"}, }, }, }, { input: `foo[5mm]`, fail: true, errMsg: "bad duration syntax: \"5mm\"", }, { input: `foo[0m]`, fail: true, errMsg: "duration must be greater than 0", }, { input: `foo[5m30s]`, fail: true, errMsg: "bad duration syntax: \"5m3\"", }, { input: `foo[5m] OFFSET 1h30m`, fail: true, errMsg: "bad number or duration syntax: \"1h3\"", }, { input: `foo["5m"]`, fail: true, }, { input: `foo[]`, fail: true, errMsg: "missing unit character in duration", }, { input: `foo[1]`, fail: true, errMsg: "missing unit character in duration", }, { input: `some_metric[5m] OFFSET 1`, fail: true, errMsg: "unexpected number \"1\" in matrix selector, expected duration", }, { input: `some_metric[5m] OFFSET 1mm`, fail: true, errMsg: "bad number or duration syntax: \"1mm\"", }, { input: `some_metric[5m] OFFSET`, fail: true, errMsg: "unexpected end of input in matrix selector, expected duration", }, { input: `(foo + bar)[5m]`, fail: true, errMsg: "could not parse remaining input \"[5m]\"...", }, // Test aggregation. 
{ input: "sum by (foo)(some_metric)", expected: &AggregateExpr{ Op: itemSum, Expr: &VectorSelector{ Name: "some_metric", LabelMatchers: metric.LabelMatchers{ {Type: metric.Equal, Name: model.MetricNameLabel, Value: "some_metric"}, }, }, Grouping: model.LabelNames{"foo"}, }, }, { input: "sum by (foo) keep_common (some_metric)", expected: &AggregateExpr{ Op: itemSum, KeepExtraLabels: true, Expr: &VectorSelector{ Name: "some_metric", LabelMatchers: metric.LabelMatchers{ {Type: metric.Equal, Name: model.MetricNameLabel, Value: "some_metric"}, }, }, Grouping: model.LabelNames{"foo"}, }, }, { input: "sum (some_metric) by (foo,bar) keep_common", expected: &AggregateExpr{ Op: itemSum, KeepExtraLabels: true, Expr: &VectorSelector{ Name: "some_metric", LabelMatchers: metric.LabelMatchers{ {Type: metric.Equal, Name: model.MetricNameLabel, Value: "some_metric"}, }, }, Grouping: model.LabelNames{"foo", "bar"}, }, }, { input: "avg by (foo)(some_metric)", expected: &AggregateExpr{ Op: itemAvg, Expr: &VectorSelector{ Name: "some_metric", LabelMatchers: metric.LabelMatchers{ {Type: metric.Equal, Name: model.MetricNameLabel, Value: "some_metric"}, }, }, Grouping: model.LabelNames{"foo"}, }, }, { input: "COUNT by (foo) keep_common (some_metric)", expected: &AggregateExpr{ Op: itemCount, Expr: &VectorSelector{ Name: "some_metric", LabelMatchers: metric.LabelMatchers{ {Type: metric.Equal, Name: model.MetricNameLabel, Value: "some_metric"}, }, }, Grouping: model.LabelNames{"foo"}, KeepExtraLabels: true, }, }, { input: "MIN (some_metric) by (foo) keep_common", expected: &AggregateExpr{ Op: itemMin, Expr: &VectorSelector{ Name: "some_metric", LabelMatchers: metric.LabelMatchers{ {Type: metric.Equal, Name: model.MetricNameLabel, Value: "some_metric"}, }, }, Grouping: model.LabelNames{"foo"}, KeepExtraLabels: true, }, }, { input: "max by (foo)(some_metric)", expected: &AggregateExpr{ Op: itemMax, Expr: &VectorSelector{ Name: "some_metric", LabelMatchers: metric.LabelMatchers{ {Type: metric.Equal, Name: model.MetricNameLabel, Value: "some_metric"}, }, }, Grouping: model.LabelNames{"foo"}, }, }, { input: "stddev(some_metric)", expected: &AggregateExpr{ Op: itemStddev, Expr: &VectorSelector{ Name: "some_metric", LabelMatchers: metric.LabelMatchers{ {Type: metric.Equal, Name: model.MetricNameLabel, Value: "some_metric"}, }, }, }, }, { input: "stdvar by (foo)(some_metric)", expected: &AggregateExpr{ Op: itemStdvar, Expr: &VectorSelector{ Name: "some_metric", LabelMatchers: metric.LabelMatchers{ {Type: metric.Equal, Name: model.MetricNameLabel, Value: "some_metric"}, }, }, Grouping: model.LabelNames{"foo"}, }, }, { input: `sum some_metric by (test)`, fail: true, errMsg: "unexpected identifier \"some_metric\" in aggregation, expected \"(\"", }, { input: `sum (some_metric) by test`, fail: true, errMsg: "unexpected identifier \"test\" in grouping opts, expected \"(\"", }, { input: `sum (some_metric) by ()`, fail: true, errMsg: "unexpected \")\" in grouping opts, expected identifier", }, { input: `sum (some_metric) by test`, fail: true, errMsg: "unexpected identifier \"test\" in grouping opts, expected \"(\"", }, { input: `some_metric[5m] OFFSET`, fail: true, errMsg: "unexpected end of input in matrix selector, expected duration", }, { input: `sum () by (test)`, fail: true, errMsg: "no valid expression found", }, { input: "MIN keep_common (some_metric) by (foo)", fail: true, errMsg: "could not parse remaining input \"by (foo)\"...", }, { input: "MIN by(test) (some_metric) keep_common", fail: true, errMsg: "could not parse 
remaining input \"keep_common\"...", }, // Test function calls. { input: "time()", expected: &Call{ Func: mustGetFunction("time"), }, }, { input: `floor(some_metric{foo!="bar"})`, expected: &Call{ Func: mustGetFunction("floor"), Args: Expressions{ &VectorSelector{ Name: "some_metric", LabelMatchers: metric.LabelMatchers{ {Type: metric.NotEqual, Name: "foo", Value: "bar"}, {Type: metric.Equal, Name: model.MetricNameLabel, Value: "some_metric"}, }, }, }, }, }, { input: "rate(some_metric[5m])", expected: &Call{ Func: mustGetFunction("rate"), Args: Expressions{ &MatrixSelector{ Name: "some_metric", LabelMatchers: metric.LabelMatchers{ {Type: metric.Equal, Name: model.MetricNameLabel, Value: "some_metric"}, }, Range: 5 * time.Minute, }, }, }, }, { input: "round(some_metric)", expected: &Call{ Func: mustGetFunction("round"), Args: Expressions{ &VectorSelector{ Name: "some_metric", LabelMatchers: metric.LabelMatchers{ {Type: metric.Equal, Name: model.MetricNameLabel, Value: "some_metric"}, }, }, }, }, }, { input: "round(some_metric, 5)", expected: &Call{ Func: mustGetFunction("round"), Args: Expressions{ &VectorSelector{ Name: "some_metric", LabelMatchers: metric.LabelMatchers{ {Type: metric.Equal, Name: model.MetricNameLabel, Value: "some_metric"}, }, }, &NumberLiteral{5}, }, }, }, { input: "floor()", fail: true, errMsg: "expected at least 1 argument(s) in call to \"floor\", got 0", }, { input: "floor(some_metric, other_metric)", fail: true, errMsg: "expected at most 1 argument(s) in call to \"floor\", got 2", }, { input: "floor(1)", fail: true, errMsg: "expected type vector in call to function \"floor\", got scalar", }, { input: "non_existant_function_far_bar()", fail: true, errMsg: "unknown function with name \"non_existant_function_far_bar\"", }, { input: "rate(some_metric)", fail: true, errMsg: "expected type matrix in call to function \"rate\", got vector", }, // Fuzzing regression tests. { input: "-=", fail: true, errMsg: `no valid expression found`, }, { input: "++-++-+-+-<", fail: true, errMsg: `no valid expression found`, }, { input: "e-+=/(0)", fail: true, errMsg: `no valid expression found`, }, { input: "-If", fail: true, errMsg: `no valid expression found`, }, // String quoting and escape sequence interpretation tests. { input: `"double-quoted string \" with escaped quote"`, expected: &StringLiteral{ Val: "double-quoted string \" with escaped quote", }, }, { input: `'single-quoted string \' with escaped quote'`, expected: &StringLiteral{ Val: "single-quoted string ' with escaped quote", }, }, { input: "`backtick-quoted string`", expected: &StringLiteral{ Val: "backtick-quoted string", }, }, { input: `"\a\b\f\n\r\t\v\\\" - \xFF\377\u1234\U00010111\U0001011111☺"`, expected: &StringLiteral{ Val: "\a\b\f\n\r\t\v\\\" - \xFF\377\u1234\U00010111\U0001011111☺", }, }, { input: `'\a\b\f\n\r\t\v\\\' - \xFF\377\u1234\U00010111\U0001011111☺'`, expected: &StringLiteral{ Val: "\a\b\f\n\r\t\v\\' - \xFF\377\u1234\U00010111\U0001011111☺", }, }, { input: "`" + `\a\b\f\n\r\t\v\\\"\' - \xFF\377\u1234\U00010111\U0001011111☺` + "`", expected: &StringLiteral{ Val: `\a\b\f\n\r\t\v\\\"\' - \xFF\377\u1234\U00010111\U0001011111☺`, }, }, { input: "`\\``", fail: true, errMsg: "could not parse remaining input", }, { input: `"\`, fail: true, errMsg: "escape sequence not terminated", }, { input: `"\c"`, fail: true, errMsg: "unknown escape sequence U+0063 'c'", }, { input: `"\x."`, fail: true, errMsg: "illegal character U+002E '.' 
in escape sequence", }, } func TestParseExpressions(t *testing.T) { for _, test := range testExpr { parser := newParser(test.input) expr, err := parser.parseExpr() // Unexpected errors are always caused by a bug. if err == errUnexpected { t.Fatalf("unexpected error occurred") } if !test.fail && err != nil { t.Errorf("error in input '%s'", test.input) t.Fatalf("could not parse: %s", err) } if test.fail && err != nil { if !strings.Contains(err.Error(), test.errMsg) { t.Errorf("unexpected error on input '%s'", test.input) t.Fatalf("expected error to contain %q but got %q", test.errMsg, err) } continue } err = parser.typecheck(expr) if !test.fail && err != nil { t.Errorf("error on input '%s'", test.input) t.Fatalf("typecheck failed: %s", err) } if test.fail { if err != nil { if !strings.Contains(err.Error(), test.errMsg) { t.Errorf("unexpected error on input '%s'", test.input) t.Fatalf("expected error to contain %q but got %q", test.errMsg, err) } continue } t.Errorf("error on input '%s'", test.input) t.Fatalf("failure expected, but passed with result: %q", expr) } if !reflect.DeepEqual(expr, test.expected) { t.Errorf("error on input '%s'", test.input) t.Fatalf("no match\n\nexpected:\n%s\ngot: \n%s\n", Tree(test.expected), Tree(expr)) } } } // NaN has no equality. Thus, we need a separate test for it. func TestNaNExpression(t *testing.T) { parser := newParser("NaN") expr, err := parser.parseExpr() if err != nil { t.Errorf("error on input 'NaN'") t.Fatalf("coud not parse: %s", err) } nl, ok := expr.(*NumberLiteral) if !ok { t.Errorf("error on input 'NaN'") t.Fatalf("expected number literal but got %T", expr) } if !math.IsNaN(float64(nl.Val)) { t.Errorf("error on input 'NaN'") t.Fatalf("expected 'NaN' in number literal but got %v", nl.Val) } } var testStatement = []struct { input string expected Statements fail bool }{ { // Test a file-like input. input: ` # A simple test recording rule. dc:http_request:rate5m = sum(rate(http_request_count[5m])) by (dc) # A simple test alerting rule. ALERT GlobalRequestRateLow IF(dc:http_request:rate5m < 10000) FOR 5m WITH { service = "testservice" # ... more fields here ... 
} SUMMARY "Global request rate low" DESCRIPTION "The global request rate is low" foo = bar{label1="value1"} ALERT BazAlert IF foo > 10 DESCRIPTION "BazAlert" RUNBOOK "http://my.url" SUMMARY "Baz" `, expected: Statements{ &RecordStmt{ Name: "dc:http_request:rate5m", Expr: &AggregateExpr{ Op: itemSum, Grouping: model.LabelNames{"dc"}, Expr: &Call{ Func: mustGetFunction("rate"), Args: Expressions{ &MatrixSelector{ Name: "http_request_count", LabelMatchers: metric.LabelMatchers{ {Type: metric.Equal, Name: model.MetricNameLabel, Value: "http_request_count"}, }, Range: 5 * time.Minute, }, }, }, }, Labels: nil, }, &AlertStmt{ Name: "GlobalRequestRateLow", Expr: &ParenExpr{&BinaryExpr{ Op: itemLSS, LHS: &VectorSelector{ Name: "dc:http_request:rate5m", LabelMatchers: metric.LabelMatchers{ {Type: metric.Equal, Name: model.MetricNameLabel, Value: "dc:http_request:rate5m"}, }, }, RHS: &NumberLiteral{10000}, }}, Labels: model.LabelSet{"service": "testservice"}, Duration: 5 * time.Minute, Summary: "Global request rate low", Description: "The global request rate is low", }, &RecordStmt{ Name: "foo", Expr: &VectorSelector{ Name: "bar", LabelMatchers: metric.LabelMatchers{ {Type: metric.Equal, Name: "label1", Value: "value1"}, {Type: metric.Equal, Name: model.MetricNameLabel, Value: "bar"}, }, }, Labels: nil, }, &AlertStmt{ Name: "BazAlert", Expr: &BinaryExpr{ Op: itemGTR, LHS: &VectorSelector{ Name: "foo", LabelMatchers: metric.LabelMatchers{ {Type: metric.Equal, Name: model.MetricNameLabel, Value: "foo"}, }, }, RHS: &NumberLiteral{10}, }, Labels: model.LabelSet{}, Summary: "Baz", Description: "BazAlert", Runbook: "http://my.url", }, }, }, { input: `foo{x="", a="z"} = bar{a="b", x=~"y"}`, expected: Statements{ &RecordStmt{ Name: "foo", Expr: &VectorSelector{ Name: "bar", LabelMatchers: metric.LabelMatchers{ {Type: metric.Equal, Name: "a", Value: "b"}, mustLabelMatcher(metric.RegexMatch, "x", "y"), {Type: metric.Equal, Name: model.MetricNameLabel, Value: "bar"}, }, }, Labels: model.LabelSet{"x": "", "a": "z"}, }, }, }, { input: `ALERT SomeName IF some_metric > 1 SUMMARY "Global request rate low" DESCRIPTION "The global request rate is low" `, expected: Statements{ &AlertStmt{ Name: "SomeName", Expr: &BinaryExpr{ Op: itemGTR, LHS: &VectorSelector{ Name: "some_metric", LabelMatchers: metric.LabelMatchers{ {Type: metric.Equal, Name: model.MetricNameLabel, Value: "some_metric"}, }, }, RHS: &NumberLiteral{1}, }, Labels: model.LabelSet{}, Summary: "Global request rate low", Description: "The global request rate is low", }, }, }, { input: ` # A simple test alerting rule. ALERT GlobalRequestRateLow IF(dc:http_request:rate5m < 10000) FOR 5 WITH { service = "testservice" # ... more fields here ... 
} SUMMARY "Global request rate low" DESCRIPTION "The global request rate is low" `, fail: true, }, { input: "", expected: Statements{}, }, { input: "foo = time()", expected: Statements{ &RecordStmt{ Name: "foo", Expr: &Call{Func: mustGetFunction("time")}, Labels: nil, }}, }, { input: "foo = 1", expected: Statements{ &RecordStmt{ Name: "foo", Expr: &NumberLiteral{1}, Labels: nil, }}, }, { input: "foo = bar[5m]", fail: true, }, { input: `foo = "test"`, fail: true, }, { input: `foo = `, fail: true, }, { input: `foo{a!="b"} = bar`, fail: true, }, { input: `foo{a=~"b"} = bar`, fail: true, }, { input: `foo{a!~"b"} = bar`, fail: true, }, { input: `ALERT SomeName IF time() WITH {} SUMMARY "Global request rate low" DESCRIPTION "The global request rate is low" `, fail: true, }, { input: `ALERT SomeName IF some_metric > 1 WITH {} SUMMARY "Global request rate low" `, fail: true, }, { input: `ALERT SomeName IF some_metric > 1 DESCRIPTION "The global request rate is low" `, fail: true, }, // Fuzzing regression tests. { input: `I=-/`, fail: true, }, { input: `I=3E8/-=`, fail: true, }, { input: `M=-=-0-0`, fail: true, }, } func TestParseStatements(t *testing.T) { for _, test := range testStatement { parser := newParser(test.input) stmts, err := parser.parseStmts() // Unexpected errors are always caused by a bug. if err == errUnexpected { t.Fatalf("unexpected error occurred") } if !test.fail && err != nil { t.Errorf("error in input: \n\n%s\n", test.input) t.Fatalf("could not parse: %s", err) } if test.fail && err != nil { continue } err = parser.typecheck(stmts) if !test.fail && err != nil { t.Errorf("error in input: \n\n%s\n", test.input) t.Fatalf("typecheck failed: %s", err) } if test.fail { if err != nil { continue } t.Errorf("error in input: \n\n%s\n", test.input) t.Fatalf("failure expected, but passed") } if !reflect.DeepEqual(stmts, test.expected) { t.Errorf("error in input: \n\n%s\n", test.input) t.Fatalf("no match\n\nexpected:\n%s\ngot: \n%s\n", Tree(test.expected), Tree(stmts)) } } } func mustLabelMatcher(mt metric.MatchType, name model.LabelName, val model.LabelValue) *metric.LabelMatcher { m, err := metric.NewLabelMatcher(mt, name, val) if err != nil { panic(err) } return m } func mustGetFunction(name string) *Function { f, ok := getFunction(name) if !ok { panic(fmt.Errorf("function %q does not exist", name)) } return f } var testSeries = []struct { input string expectedMetric model.Metric expectedValues []sequenceValue fail bool }{ { input: `{} 1 2 3`, expectedMetric: model.Metric{}, expectedValues: newSeq(1, 2, 3), }, { input: `{a="b"} -1 2 3`, expectedMetric: model.Metric{ "a": "b", }, expectedValues: newSeq(-1, 2, 3), }, { input: `my_metric 1 2 3`, expectedMetric: model.Metric{ model.MetricNameLabel: "my_metric", }, expectedValues: newSeq(1, 2, 3), }, { input: `my_metric{} 1 2 3`, expectedMetric: model.Metric{ model.MetricNameLabel: "my_metric", }, expectedValues: newSeq(1, 2, 3), }, { input: `my_metric{a="b"} 1 2 3`, expectedMetric: model.Metric{ model.MetricNameLabel: "my_metric", "a": "b", }, expectedValues: newSeq(1, 2, 3), }, { input: `my_metric{a="b"} 1 2 3-10x4`, expectedMetric: model.Metric{ model.MetricNameLabel: "my_metric", "a": "b", }, expectedValues: newSeq(1, 2, 3, -7, -17, -27, -37), }, { input: `my_metric{a="b"} 1 2 3-0x4`, expectedMetric: model.Metric{ model.MetricNameLabel: "my_metric", "a": "b", }, expectedValues: newSeq(1, 2, 3, 3, 3, 3, 3), }, { input: `my_metric{a="b"} 1 3 _ 5 _x4`, expectedMetric: model.Metric{ model.MetricNameLabel: "my_metric", "a": "b", }, 
expectedValues: newSeq(1, 3, none, 5, none, none, none, none), }, { input: `my_metric{a="b"} 1 3 _ 5 _a4`, fail: true, }, } // For these tests only, we use the smallest float64 to signal an omitted value. const none = math.SmallestNonzeroFloat64 func newSeq(vals ...float64) (res []sequenceValue) { for _, v := range vals { if v == none { res = append(res, sequenceValue{omitted: true}) } else { res = append(res, sequenceValue{value: model.SampleValue(v)}) } } return res } func TestParseSeries(t *testing.T) { for _, test := range testSeries { parser := newParser(test.input) parser.lex.seriesDesc = true metric, vals, err := parser.parseSeriesDesc() // Unexpected errors are always caused by a bug. if err == errUnexpected { t.Fatalf("unexpected error occurred") } if !test.fail && err != nil { t.Errorf("error in input: \n\n%s\n", test.input) t.Fatalf("could not parse: %s", err) } if test.fail && err != nil { continue } if test.fail { if err != nil { continue } t.Errorf("error in input: \n\n%s\n", test.input) t.Fatalf("failure expected, but passed") } if !reflect.DeepEqual(vals, test.expectedValues) || !reflect.DeepEqual(metric, test.expectedMetric) { t.Errorf("error in input: \n\n%s\n", test.input) t.Fatalf("no match\n\nexpected:\n%s %s\ngot: \n%s %s\n", test.expectedMetric, test.expectedValues, metric, vals) } } } func TestRecoverParserRuntime(t *testing.T) { var p *parser var err error defer p.recover(&err) // Cause a runtime panic. var a []int a[123] = 1 if err != errUnexpected { t.Fatalf("wrong error message: %q, expected %q", err, errUnexpected) } } func TestRecoverParserError(t *testing.T) { var p *parser var err error e := fmt.Errorf("custom error") defer func() { if err.Error() != e.Error() { t.Fatalf("wrong error message: %q, expected %q", err, e) } }() defer p.recover(&err) panic(e) } prometheus-0.16.2+ds/promql/printer.go000066400000000000000000000123721265137125100177310ustar00rootroot00000000000000// Copyright 2015 The Prometheus Authors // Licensed under the Apache License, Version 2.0 (the "License"); // you may not use this file except in compliance with the License. // You may obtain a copy of the License at // // http://www.apache.org/licenses/LICENSE-2.0 // // Unless required by applicable law or agreed to in writing, software // distributed under the License is distributed on an "AS IS" BASIS, // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. // See the License for the specific language governing permissions and // limitations under the License. package promql import ( "fmt" "sort" "strings" "time" "github.com/prometheus/common/model" "github.com/prometheus/prometheus/storage/metric" "github.com/prometheus/prometheus/util/strutil" ) // Tree returns a string of the tree structure of the given node. func Tree(node Node) string { return tree(node, "") } func tree(node Node, level string) string { if node == nil { return fmt.Sprintf("%s |---- %T\n", level, node) } typs := strings.Split(fmt.Sprintf("%T", node), ".")[1] var t string // Only print the number of statements for readability. 
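// For all other nodes the node's own String() form is shown, e.g.
// (illustrative output for the expression sum(foo)):
//
//	 |---- AggregateExpr :: sum(foo)
//	 · · · |---- VectorSelector :: foo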
if stmts, ok := node.(Statements); ok { t = fmt.Sprintf("%s |---- %s :: %d\n", level, typs, len(stmts)) } else { t = fmt.Sprintf("%s |---- %s :: %s\n", level, typs, node) } level += " · · ·" switch n := node.(type) { case Statements: for _, s := range n { t += tree(s, level) } case *AlertStmt: t += tree(n.Expr, level) case *EvalStmt: t += tree(n.Expr, level) case *RecordStmt: t += tree(n.Expr, level) case Expressions: for _, e := range n { t += tree(e, level) } case *AggregateExpr: t += tree(n.Expr, level) case *BinaryExpr: t += tree(n.LHS, level) t += tree(n.RHS, level) case *Call: t += tree(n.Args, level) case *ParenExpr: t += tree(n.Expr, level) case *UnaryExpr: t += tree(n.Expr, level) case *MatrixSelector, *NumberLiteral, *StringLiteral, *VectorSelector: // nothing to do default: panic("promql.Tree: not all node types covered") } return t } func (stmts Statements) String() (s string) { if len(stmts) == 0 { return "" } for _, stmt := range stmts { s += stmt.String() s += "\n\n" } return s[:len(s)-2] } func (node *AlertStmt) String() string { s := fmt.Sprintf("ALERT %s", node.Name) s += fmt.Sprintf("\n\tIF %s", node.Expr) if node.Duration > 0 { s += fmt.Sprintf("\n\tFOR %s", strutil.DurationToString(node.Duration)) } if len(node.Labels) > 0 { s += fmt.Sprintf("\n\tWITH %s", node.Labels) } s += fmt.Sprintf("\n\tSUMMARY %q", node.Summary) s += fmt.Sprintf("\n\tDESCRIPTION %q", node.Description) return s } func (node *EvalStmt) String() string { return "EVAL " + node.Expr.String() } func (node *RecordStmt) String() string { s := fmt.Sprintf("%s%s = %s", node.Name, node.Labels, node.Expr) return s } func (es Expressions) String() (s string) { if len(es) == 0 { return "" } for _, e := range es { s += e.String() s += ", " } return s[:len(s)-2] } func (node *AggregateExpr) String() string { aggrString := fmt.Sprintf("%s(%s)", node.Op, node.Expr) if len(node.Grouping) > 0 { format := "%s BY (%s)" if node.KeepExtraLabels { format += " KEEP_COMMON" } return fmt.Sprintf(format, aggrString, node.Grouping) } return aggrString } func (node *BinaryExpr) String() string { returnBool := "" if node.ReturnBool { returnBool = " BOOL" } matching := "" vm := node.VectorMatching if vm != nil && len(vm.On) > 0 { matching = fmt.Sprintf(" ON(%s)", vm.On) if vm.Card == CardManyToOne { matching += fmt.Sprintf(" GROUP_LEFT(%s)", vm.Include) } if vm.Card == CardOneToMany { matching += fmt.Sprintf(" GROUP_RIGHT(%s)", vm.Include) } } return fmt.Sprintf("%s %s%s%s %s", node.LHS, node.Op, returnBool, matching, node.RHS) } func (node *Call) String() string { return fmt.Sprintf("%s(%s)", node.Func.Name, node.Args) } func (node *MatrixSelector) String() string { vecSelector := &VectorSelector{ Name: node.Name, LabelMatchers: node.LabelMatchers, } offset := "" if node.Offset != time.Duration(0) { offset = fmt.Sprintf(" OFFSET %s", strutil.DurationToString(node.Offset)) } return fmt.Sprintf("%s[%s]%s", vecSelector.String(), strutil.DurationToString(node.Range), offset) } func (node *NumberLiteral) String() string { return fmt.Sprint(node.Val) } func (node *ParenExpr) String() string { return fmt.Sprintf("(%s)", node.Expr) } func (node *StringLiteral) String() string { return fmt.Sprintf("%q", node.Val) } func (node *UnaryExpr) String() string { return fmt.Sprintf("%s%s", node.Op, node.Expr) } func (node *VectorSelector) String() string { labelStrings := make([]string, 0, len(node.LabelMatchers)-1) for _, matcher := range node.LabelMatchers { // Only include the __name__ label if its no equality matching. 
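// The metric name is already printed before the braces, so repeating an
// equality match on __name__ would be redundant; any other match type on
// __name__ (e.g. {__name__=~".+"}) must still be rendered inside them.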
if matcher.Name == model.MetricNameLabel && matcher.Type == metric.Equal { continue } labelStrings = append(labelStrings, matcher.String()) } offset := "" if node.Offset != time.Duration(0) { offset = fmt.Sprintf(" OFFSET %s", strutil.DurationToString(node.Offset)) } if len(labelStrings) == 0 { return fmt.Sprintf("%s%s", node.Name, offset) } sort.Strings(labelStrings) return fmt.Sprintf("%s{%s}%s", node.Name, strings.Join(labelStrings, ","), offset) } prometheus-0.16.2+ds/promql/printer_test.go000066400000000000000000000027141265137125100207670ustar00rootroot00000000000000// Copyright 2015 The Prometheus Authors // Licensed under the Apache License, Version 2.0 (the "License"); // you may not use this file except in compliance with the License. // You may obtain a copy of the License at // // http://www.apache.org/licenses/LICENSE-2.0 // // Unless required by applicable law or agreed to in writing, software // distributed under the License is distributed on an "AS IS" BASIS, // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. // See the License for the specific language governing permissions and // limitations under the License. package promql import ( "testing" ) func TestExprString(t *testing.T) { // A list of valid expressions that are expected to be // returned as out when calling String(). If out is empty the output // is expected to equal the input. inputs := []struct { in, out string }{ { in: `sum(task:errors:rate10s{job="s"}) BY (code)`, }, { in: `sum(task:errors:rate10s{job="s"}) BY (code) KEEP_COMMON`, }, { in: `up > BOOL 0`, }, { in: `a OFFSET 1m`, }, { in: `a{c="d"}[5m] OFFSET 1m`, }, { in: `a[5m] OFFSET 1m`, }, } for _, test := range inputs { expr, err := ParseExpr(test.in) if err != nil { t.Fatalf("parsing error for %q: %s", test.in, err) } exp := test.in if test.out != "" { exp = test.out } if expr.String() != exp { t.Fatalf("expected %q to be returned as:\n%s\ngot:\n%s\n", test.in, exp, expr.String()) } } } prometheus-0.16.2+ds/promql/promql_test.go000066400000000000000000000017771265137125100206260ustar00rootroot00000000000000// Copyright 2015 The Prometheus Authors // Licensed under the Apache License, Version 2.0 (the "License"); // you may not use this file except in compliance with the License. // You may obtain a copy of the License at // // http://www.apache.org/licenses/LICENSE-2.0 // // Unless required by applicable law or agreed to in writing, software // distributed under the License is distributed on an "AS IS" BASIS, // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. // See the License for the specific language governing permissions and // limitations under the License. package promql import ( "path/filepath" "testing" ) func TestEvaluations(t *testing.T) { files, err := filepath.Glob("testdata/*.test") if err != nil { t.Fatal(err) } for _, fn := range files { test, err := newTestFromFile(t, fn) if err != nil { t.Errorf("error creating test for %s: %s", fn, err) } err = test.Run() if err != nil { t.Errorf("error running test %s: %s", fn, err) } test.Close() } } prometheus-0.16.2+ds/promql/quantile.go000066400000000000000000000064051265137125100200700ustar00rootroot00000000000000// Copyright 2015 The Prometheus Authors // Licensed under the Apache License, Version 2.0 (the "License"); // you may not use this file except in compliance with the License. 
// You may obtain a copy of the License at // // http://www.apache.org/licenses/LICENSE-2.0 // // Unless required by applicable law or agreed to in writing, software // distributed under the License is distributed on an "AS IS" BASIS, // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. // See the License for the specific language governing permissions and // limitations under the License. package promql import ( "math" "sort" "github.com/prometheus/common/model" "github.com/prometheus/prometheus/storage/metric" ) // Helpers to calculate quantiles. // excludedLabels are the labels to exclude from signature calculation for // quantiles. var excludedLabels = map[model.LabelName]struct{}{ model.MetricNameLabel: {}, model.BucketLabel: {}, } type bucket struct { upperBound float64 count model.SampleValue } // buckets implements sort.Interface. type buckets []bucket func (b buckets) Len() int { return len(b) } func (b buckets) Swap(i, j int) { b[i], b[j] = b[j], b[i] } func (b buckets) Less(i, j int) bool { return b[i].upperBound < b[j].upperBound } type metricWithBuckets struct { metric metric.Metric buckets buckets } // quantile calculates the quantile 'q' based on the given buckets. The buckets // will be sorted by upperBound by this function (i.e. no sorting needed before // calling this function). The quantile value is interpolated assuming a linear // distribution within a bucket. However, if the quantile falls into the highest // bucket, the upper bound of the 2nd highest bucket is returned. A natural // lower bound of 0 is assumed if the upper bound of the lowest bucket is // greater 0. In that case, interpolation in the lowest bucket happens linearly // between 0 and the upper bound of the lowest bucket. However, if the lowest // bucket has an upper bound less or equal 0, this upper bound is returned if // the quantile falls into the lowest bucket. // // There are a number of special cases (once we have a way to report errors // happening during evaluations of AST functions, we should report those // explicitly): // // If 'buckets' has fewer than 2 elements, NaN is returned. // // If the highest bucket is not +Inf, NaN is returned. // // If q<0, -Inf is returned. // // If q>1, +Inf is returned. func quantile(q model.SampleValue, buckets buckets) float64 { if q < 0 { return math.Inf(-1) } if q > 1 { return math.Inf(+1) } if len(buckets) < 2 { return math.NaN() } sort.Sort(buckets) if !math.IsInf(buckets[len(buckets)-1].upperBound, +1) { return math.NaN() } rank := q * buckets[len(buckets)-1].count b := sort.Search(len(buckets)-1, func(i int) bool { return buckets[i].count >= rank }) if b == len(buckets)-1 { return buckets[len(buckets)-2].upperBound } if b == 0 && buckets[0].upperBound <= 0 { return buckets[0].upperBound } var ( bucketStart float64 bucketEnd = buckets[b].upperBound count = buckets[b].count ) if b > 0 { bucketStart = buckets[b-1].upperBound count -= buckets[b-1].count rank -= buckets[b-1].count } return bucketStart + (bucketEnd-bucketStart)*float64(rank/count) } prometheus-0.16.2+ds/promql/test.go000066400000000000000000000307711265137125100172300ustar00rootroot00000000000000// Copyright 2015 The Prometheus Authors // Licensed under the Apache License, Version 2.0 (the "License"); // you may not use this file except in compliance with the License. 
// You may obtain a copy of the License at // // http://www.apache.org/licenses/LICENSE-2.0 // // Unless required by applicable law or agreed to in writing, software // distributed under the License is distributed on an "AS IS" BASIS, // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. // See the License for the specific language governing permissions and // limitations under the License. package promql import ( "fmt" "io/ioutil" "math" "regexp" "strconv" "strings" "time" "github.com/prometheus/common/model" "github.com/prometheus/prometheus/storage" "github.com/prometheus/prometheus/storage/local" "github.com/prometheus/prometheus/util/strutil" "github.com/prometheus/prometheus/util/testutil" ) var ( minNormal = math.Float64frombits(0x0010000000000000) // The smallest positive normal value of type float64. patSpace = regexp.MustCompile("[\t ]+") patLoad = regexp.MustCompile(`^load\s+(.+?)$`) patEvalInstant = regexp.MustCompile(`^eval(?:_(fail|ordered))?\s+instant\s+(?:at\s+(.+?))?\s+(.+)$`) ) const ( testStartTime = model.Time(0) epsilon = 0.000001 // Relative error allowed for sample values. ) // Test is a sequence of read and write commands that are run // against a test storage. type Test struct { testutil.T cmds []testCommand storage local.Storage closeStorage func() queryEngine *Engine } // NewTest returns an initialized empty Test. func NewTest(t testutil.T, input string) (*Test, error) { test := &Test{ T: t, cmds: []testCommand{}, } err := test.parse(input) test.clear() return test, err } func newTestFromFile(t testutil.T, filename string) (*Test, error) { content, err := ioutil.ReadFile(filename) if err != nil { return nil, err } return NewTest(t, string(content)) } // QueryEngine returns the test's query engine. func (t *Test) QueryEngine() *Engine { return t.queryEngine } // Storage returns the test's storage. func (t *Test) Storage() local.Storage { return t.storage } func raise(line int, format string, v ...interface{}) error { return &ParseErr{ Line: line + 1, Err: fmt.Errorf(format, v...), } } func (t *Test) parseLoad(lines []string, i int) (int, *loadCmd, error) { if !patLoad.MatchString(lines[i]) { return i, nil, raise(i, "invalid load command. (load )") } parts := patLoad.FindStringSubmatch(lines[i]) gap, err := strutil.StringToDuration(parts[1]) if err != nil { return i, nil, raise(i, "invalid step definition %q: %s", parts[1], err) } cmd := newLoadCmd(gap) for i+1 < len(lines) { i++ defLine := lines[i] if len(defLine) == 0 { i-- break } metric, vals, err := parseSeriesDesc(defLine) if err != nil { if perr, ok := err.(*ParseErr); ok { perr.Line = i + 1 } return i, nil, err } cmd.set(metric, vals...) } return i, cmd, nil } func (t *Test) parseEval(lines []string, i int) (int, *evalCmd, error) { if !patEvalInstant.MatchString(lines[i]) { return i, nil, raise(i, "invalid evaluation command. 
(eval[_fail|_ordered] instant [at <duration>] <query>") } parts := patEvalInstant.FindStringSubmatch(lines[i]) var ( mod = parts[1] at = parts[2] qry = parts[3] ) expr, err := ParseExpr(qry) if err != nil { if perr, ok := err.(*ParseErr); ok { perr.Line = i + 1 perr.Pos += strings.Index(lines[i], qry) } return i, nil, err } offset, err := strutil.StringToDuration(at) if err != nil { return i, nil, raise(i, "invalid duration definition %q: %s", at, err) } ts := testStartTime.Add(offset) cmd := newEvalCmd(expr, ts, ts, 0) switch mod { case "ordered": cmd.ordered = true case "fail": cmd.fail = true } for j := 1; i+1 < len(lines); j++ { i++ defLine := lines[i] if len(defLine) == 0 { i-- break } if f, err := parseNumber(defLine); err == nil { cmd.expect(0, nil, sequenceValue{value: model.SampleValue(f)}) break } metric, vals, err := parseSeriesDesc(defLine) if err != nil { if perr, ok := err.(*ParseErr); ok { perr.Line = i + 1 } return i, nil, err } // Currently, we are not expecting any matrices. if len(vals) > 1 { return i, nil, raise(i, "expecting multiple values in instant evaluation is not allowed") } cmd.expect(j, metric, vals...) } return i, cmd, nil } // parse parses the given command sequence and appends it to the test. func (t *Test) parse(input string) error { // Trim lines and remove comments. lines := strings.Split(input, "\n") for i, l := range lines { l = strings.TrimSpace(l) if strings.HasPrefix(l, "#") { l = "" } lines[i] = l } var err error // Scan for steps line by line. for i := 0; i < len(lines); i++ { l := lines[i] if len(l) == 0 { continue } var cmd testCommand switch c := strings.ToLower(patSpace.Split(l, 2)[0]); { case c == "clear": cmd = &clearCmd{} case c == "load": i, cmd, err = t.parseLoad(lines, i) case strings.HasPrefix(c, "eval"): i, cmd, err = t.parseEval(lines, i) default: return raise(i, "invalid command %q", l) } if err != nil { return err } t.cmds = append(t.cmds, cmd) } return nil } // testCommand is an interface that ensures that only the package internal // types can be a valid command for a test. type testCommand interface { testCmd() } func (*clearCmd) testCmd() {} func (*loadCmd) testCmd() {} func (*evalCmd) testCmd() {} // loadCmd is a command that loads sequences of sample values for specific // metrics into the storage. type loadCmd struct { gap time.Duration metrics map[model.Fingerprint]model.Metric defs map[model.Fingerprint][]model.SamplePair } func newLoadCmd(gap time.Duration) *loadCmd { return &loadCmd{ gap: gap, metrics: map[model.Fingerprint]model.Metric{}, defs: map[model.Fingerprint][]model.SamplePair{}, } } func (cmd loadCmd) String() string { return "load" } // set sets a sequence of sample values for the given metric. func (cmd *loadCmd) set(m model.Metric, vals ...sequenceValue) { fp := m.Fingerprint() samples := make([]model.SamplePair, 0, len(vals)) ts := testStartTime for _, v := range vals { if !v.omitted { samples = append(samples, model.SamplePair{ Timestamp: ts, Value: v.value, }) } ts = ts.Add(cmd.gap) } cmd.defs[fp] = samples cmd.metrics[fp] = m } // append appends the defined time series to the storage. func (cmd *loadCmd) append(a storage.SampleAppender) { for fp, samples := range cmd.defs { met := cmd.metrics[fp] for _, smpl := range samples { s := &model.Sample{ Metric: met, Value: smpl.Value, Timestamp: smpl.Timestamp, } a.Append(s) } } } // evalCmd is a command that evaluates an expression for the given time (range) // and expects a specific result.
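// For instance (illustrative only, mirroring the test-script syntax parsed
// above; the metric name is made up), a script such as
//
//	load 5m
//	    http_requests{job="api"} 0+10x10
//
//	eval instant at 50m rate(http_requests[50m])
//	    {job="api"} 0.03333333333333333
//
// produces one loadCmd and one evalCmd with a single expected series.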
type evalCmd struct { expr Expr start, end model.Time interval time.Duration instant bool fail, ordered bool metrics map[model.Fingerprint]model.Metric expected map[model.Fingerprint]entry } type entry struct { pos int vals []sequenceValue } func (e entry) String() string { return fmt.Sprintf("%d: %s", e.pos, e.vals) } func newEvalCmd(expr Expr, start, end model.Time, interval time.Duration) *evalCmd { return &evalCmd{ expr: expr, start: start, end: end, interval: interval, instant: start == end && interval == 0, metrics: map[model.Fingerprint]model.Metric{}, expected: map[model.Fingerprint]entry{}, } } func (ev *evalCmd) String() string { return "eval" } // expect adds a new metric with a sequence of values to the set of expected // results for the query. func (ev *evalCmd) expect(pos int, m model.Metric, vals ...sequenceValue) { if m == nil { ev.expected[0] = entry{pos: pos, vals: vals} return } fp := m.Fingerprint() ev.metrics[fp] = m ev.expected[fp] = entry{pos: pos, vals: vals} } // compareResult compares the result value with the defined expectation. func (ev *evalCmd) compareResult(result model.Value) error { switch val := result.(type) { case model.Matrix: if ev.instant { return fmt.Errorf("received range result on instant evaluation") } seen := map[model.Fingerprint]bool{} for pos, v := range val { fp := v.Metric.Fingerprint() if _, ok := ev.metrics[fp]; !ok { return fmt.Errorf("unexpected metric %s in result", v.Metric) } exp := ev.expected[fp] if ev.ordered && exp.pos != pos+1 { return fmt.Errorf("expected metric %s with %v at position %d but was at %d", v.Metric, exp.vals, exp.pos, pos+1) } for i, expVal := range exp.vals { if !almostEqual(float64(expVal.value), float64(v.Values[i].Value)) { return fmt.Errorf("expected %v for %s but got %v", expVal, v.Metric, v.Values) } } seen[fp] = true } for fp, expVals := range ev.expected { if !seen[fp] { return fmt.Errorf("expected metric %s with %v not found", ev.metrics[fp], expVals) } } case model.Vector: if !ev.instant { return fmt.Errorf("received instant result on range evaluation") } seen := map[model.Fingerprint]bool{} for pos, v := range val { fp := v.Metric.Fingerprint() if _, ok := ev.metrics[fp]; !ok { return fmt.Errorf("unexpected metric %s in result", v.Metric) } exp := ev.expected[fp] if ev.ordered && exp.pos != pos+1 { return fmt.Errorf("expected metric %s with %v at position %d but was at %d", v.Metric, exp.vals, exp.pos, pos+1) } if !almostEqual(float64(exp.vals[0].value), float64(v.Value)) { return fmt.Errorf("expected %v for %s but got %v", exp.vals[0].value, v.Metric, v.Value) } seen[fp] = true } for fp, expVals := range ev.expected { if !seen[fp] { return fmt.Errorf("expected metric %s with %v not found", ev.metrics[fp], expVals) } } case *model.Scalar: if !almostEqual(float64(ev.expected[0].vals[0].value), float64(val.Value)) { return fmt.Errorf("expected scalar %v but got %v", val.Value, ev.expected[0].vals[0].value) } default: panic(fmt.Errorf("promql.Test.compareResult: unexpected result type %T", result)) } return nil } // clearCmd is a command that wipes the test's storage state. type clearCmd struct{} func (cmd clearCmd) String() string { return "clear" } // Run executes the command sequence of the test. Until the maximum error number // is reached, evaluation errors do not terminate execution. func (t *Test) Run() error { for _, cmd := range t.cmds { err := t.exec(cmd) // TODO(fabxc): aggregate command errors, yield diffs for result // comparison errors. 
if err != nil { return err } } return nil } // exec processes a single step of the test func (t *Test) exec(tc testCommand) error { switch cmd := tc.(type) { case *clearCmd: t.clear() case *loadCmd: cmd.append(t.storage) t.storage.WaitForIndexing() case *evalCmd: q := t.queryEngine.newQuery(cmd.expr, cmd.start, cmd.end, cmd.interval) res := q.Exec() if res.Err != nil { if cmd.fail { return nil } return fmt.Errorf("error evaluating query: %s", res.Err) } if res.Err == nil && cmd.fail { return fmt.Errorf("expected error evaluating query but got none") } err := cmd.compareResult(res.Value) if err != nil { return fmt.Errorf("error in %s %s: %s", cmd, cmd.expr, err) } default: panic("promql.Test.exec: unknown test command type") } return nil } // clear the current test storage of all inserted samples. func (t *Test) clear() { if t.closeStorage != nil { t.closeStorage() } if t.queryEngine != nil { t.queryEngine.Stop() } var closer testutil.Closer t.storage, closer = local.NewTestStorage(t, 1) t.closeStorage = closer.Close t.queryEngine = NewEngine(t.storage, nil) } // Close closes resources associated with the Test. func (t *Test) Close() { t.queryEngine.Stop() t.closeStorage() } // samplesAlmostEqual returns true if the two sample lines only differ by a // small relative error in their sample value. func almostEqual(a, b float64) bool { // NaN has no equality but for testing we still want to know whether both values // are NaN. if math.IsNaN(a) && math.IsNaN(b) { return true } // Cf. http://floating-point-gui.de/errors/comparison/ if a == b { return true } diff := math.Abs(a - b) if a == 0 || b == 0 || diff < minNormal { return diff < epsilon*minNormal } return diff/(math.Abs(a)+math.Abs(b)) < epsilon } func parseNumber(s string) (float64, error) { n, err := strconv.ParseInt(s, 0, 64) f := float64(n) if err != nil { f, err = strconv.ParseFloat(s, 64) } if err != nil { return 0, fmt.Errorf("error parsing number: %s", err) } return f, nil } prometheus-0.16.2+ds/promql/testdata/000077500000000000000000000000001265137125100175235ustar00rootroot00000000000000prometheus-0.16.2+ds/promql/testdata/comparison.test000066400000000000000000000026521265137125100226030ustar00rootroot00000000000000load 5m http_requests{job="api-server", instance="0", group="production"} 0+10x10 http_requests{job="api-server", instance="1", group="production"} 0+20x10 http_requests{job="api-server", instance="0", group="canary"} 0+30x10 http_requests{job="api-server", instance="1", group="canary"} 0+40x10 http_requests{job="app-server", instance="0", group="production"} 0+50x10 http_requests{job="app-server", instance="1", group="production"} 0+60x10 http_requests{job="app-server", instance="0", group="canary"} 0+70x10 http_requests{job="app-server", instance="1", group="canary"} 0+80x10 eval instant at 50m SUM(http_requests) BY (job) > 1000 {job="app-server"} 2600 eval instant at 50m 1000 < SUM(http_requests) BY (job) {job="app-server"} 1000 eval instant at 50m SUM(http_requests) BY (job) <= 1000 {job="api-server"} 1000 eval instant at 50m SUM(http_requests) BY (job) != 1000 {job="app-server"} 2600 eval instant at 50m SUM(http_requests) BY (job) == 1000 {job="api-server"} 1000 eval instant at 50m SUM(http_requests) BY (job) == bool 1000 {job="api-server"} 1 {job="app-server"} 0 eval instant at 50m SUM(http_requests) BY (job) == bool SUM(http_requests) BY (job) {job="api-server"} 1 {job="app-server"} 1 eval instant at 50m SUM(http_requests) BY (job) != bool SUM(http_requests) BY (job) {job="api-server"} 0 {job="app-server"} 0 
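# Comparisons between two scalars require the bool modifier and always
# return a scalar: 1 if the comparison holds, 0 otherwise.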
eval instant at 50m 0 == bool 1 0 eval instant at 50m 1 == bool 1 1 prometheus-0.16.2+ds/promql/testdata/functions.test000066400000000000000000000247051265137125100224440ustar00rootroot00000000000000# Testdata for resets() and changes(). load 5m http_requests{path="/foo"} 1 2 3 0 1 0 0 1 2 0 http_requests{path="/bar"} 1 2 3 4 5 1 2 3 4 5 http_requests{path="/biz"} 0 0 0 0 0 1 1 1 1 1 # Tests for resets(). eval instant at 50m resets(http_requests[5m]) {path="/foo"} 0 {path="/bar"} 0 {path="/biz"} 0 eval instant at 50m resets(http_requests[20m]) {path="/foo"} 1 {path="/bar"} 0 {path="/biz"} 0 eval instant at 50m resets(http_requests[30m]) {path="/foo"} 2 {path="/bar"} 1 {path="/biz"} 0 eval instant at 50m resets(http_requests[50m]) {path="/foo"} 3 {path="/bar"} 1 {path="/biz"} 0 eval instant at 50m resets(nonexistent_metric[50m]) # Tests for changes(). eval instant at 50m changes(http_requests[5m]) {path="/foo"} 0 {path="/bar"} 0 {path="/biz"} 0 eval instant at 50m changes(http_requests[20m]) {path="/foo"} 3 {path="/bar"} 3 {path="/biz"} 0 eval instant at 50m changes(http_requests[30m]) {path="/foo"} 4 {path="/bar"} 5 {path="/biz"} 1 eval instant at 50m changes(http_requests[50m]) {path="/foo"} 8 {path="/bar"} 9 {path="/biz"} 1 eval instant at 50m changes(nonexistent_metric[50m]) clear # Tests for increase(). load 5m http_requests{path="/foo"} 0+10x10 http_requests{path="/bar"} 0+10x5 0+10x5 # Tests for increase(). eval instant at 50m increase(http_requests[50m]) {path="/foo"} 100 {path="/bar"} 90 eval instant at 50m increase(http_requests[100m]) {path="/foo"} 100 {path="/bar"} 90 clear # Tests for irate(). load 5m http_requests{path="/foo"} 0+10x10 http_requests{path="/bar"} 0+10x5 0+10x5 eval instant at 50m irate(http_requests[50m]) {path="/foo"} .03333333333333333333 {path="/bar"} .03333333333333333333 # Counter reset. eval instant at 30m irate(http_requests[50m]) {path="/foo"} .03333333333333333333 {path="/bar"} 0 clear # Tests for deriv() and predict_linear(). load 5m testcounter_reset_middle 0+10x4 0+10x5 http_requests{job="app-server", instance="1", group="canary"} 0+80x10 # deriv should return the same as rate in simple cases. eval instant at 50m rate(http_requests{group="canary", instance="1", job="app-server"}[50m]) {group="canary", instance="1", job="app-server"} 0.26666666666666666 eval instant at 50m deriv(http_requests{group="canary", instance="1", job="app-server"}[50m]) {group="canary", instance="1", job="app-server"} 0.26666666666666666 # deriv should return correct result. eval instant at 50m deriv(testcounter_reset_middle[100m]) {} 0.010606060606060607 # predict_linear should return correct result. eval instant at 50m predict_linear(testcounter_reset_middle[100m], 3600) {} 88.181818181818185200 # predict_linear is syntactic sugar around deriv. eval instant at 50m predict_linear(http_requests[50m], 3600) - (http_requests + deriv(http_requests[50m]) * 3600) {group="canary", instance="1", job="app-server"} 0 eval instant at 50m predict_linear(testcounter_reset_middle[100m], 3600) - (testcounter_reset_middle + deriv(testcounter_reset_middle[100m]) * 3600) {} 0 clear # Tests for label_replace. load 5m testmetric{src="source-value-10",dst="original-destination-value"} 0 testmetric{src="source-value-20",dst="original-destination-value"} 1 # label_replace does a full-string match and replace. 
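# The regex is anchored, so "source-value-(.*)" must match the entire source
# label value; "$1" in the replacement refers to the first capture group.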
eval instant at 0m label_replace(testmetric, "dst", "destination-value-$1", "src", "source-value-(.*)") testmetric{src="source-value-10",dst="destination-value-10"} 0 testmetric{src="source-value-20",dst="destination-value-20"} 1 # label_replace does not do a sub-string match. eval instant at 0m label_replace(testmetric, "dst", "destination-value-$1", "src", "value-(.*)") testmetric{src="source-value-10",dst="original-destination-value"} 0 testmetric{src="source-value-20",dst="original-destination-value"} 1 # label_replace works with multiple capture groups. eval instant at 0m label_replace(testmetric, "dst", "$1-value-$2", "src", "(.*)-value-(.*)") testmetric{src="source-value-10",dst="source-value-10"} 0 testmetric{src="source-value-20",dst="source-value-20"} 1 # label_replace does not overwrite the destination label if the source label # does not exist. eval instant at 0m label_replace(testmetric, "dst", "value-$1", "nonexistent-src", "source-value-(.*)") testmetric{src="source-value-10",dst="original-destination-value"} 0 testmetric{src="source-value-20",dst="original-destination-value"} 1 # label_replace overwrites the destination label if the source label is empty, # but matched. eval instant at 0m label_replace(testmetric, "dst", "value-$1", "nonexistent-src", "(.*)") testmetric{src="source-value-10",dst="value-"} 0 testmetric{src="source-value-20",dst="value-"} 1 # label_replace does not overwrite the destination label if the source label # is not matched. eval instant at 0m label_replace(testmetric, "dst", "value-$1", "src", "non-matching-regex") testmetric{src="source-value-10",dst="original-destination-value"} 0 testmetric{src="source-value-20",dst="original-destination-value"} 1 # label_replace drops labels that are set to empty values. eval instant at 0m label_replace(testmetric, "dst", "", "dst", ".*") testmetric{src="source-value-10"} 0 testmetric{src="source-value-20"} 1 # label_replace fails when the regex is invalid. eval_fail instant at 0m label_replace(testmetric, "dst", "value-$1", "src", "(.*") # label_replace fails when the destination label name is not a valid Prometheus label name. eval_fail instant at 0m label_replace(testmetric, "invalid-label-name", "", "src", "(.*)") # label_replace fails when there would be duplicated identical output label sets. eval_fail instant at 0m label_replace(testmetric, "src", "", "", "") clear # Tests for vector. eval instant at 0m vector(1) {} 1 eval instant at 60m vector(time()) {} 3600 clear # Tests for clamp_max and clamp_min(). load 5m test_clamp{src="clamp-a"} -50 test_clamp{src="clamp-b"} 0 test_clamp{src="clamp-c"} 100 eval instant at 0m clamp_max(test_clamp, 75) {src="clamp-a"} -50 {src="clamp-b"} 0 {src="clamp-c"} 75 eval instant at 0m clamp_min(test_clamp, -25) {src="clamp-a"} -25 {src="clamp-b"} 0 {src="clamp-c"} 100 eval instant at 0m clamp_max(clamp_min(test_clamp, -20), 70) {src="clamp-a"} -20 {src="clamp-b"} 0 {src="clamp-c"} 70 clear # Tests for topk/bottomk. 
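# topk(k, v) returns the k largest elements of v by sample value, and
# bottomk(k, v) the k smallest; NaN compares as neither larger nor smaller
# and is therefore expected to sort away from the top/bottom (see the NaN
# cases below).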
load 5m http_requests{job="api-server", instance="0", group="production"} 0+10x10 http_requests{job="api-server", instance="1", group="production"} 0+20x10 http_requests{job="api-server", instance="2", group="production"} NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN http_requests{job="api-server", instance="0", group="canary"} 0+30x10 http_requests{job="api-server", instance="1", group="canary"} 0+40x10 http_requests{job="app-server", instance="0", group="production"} 0+50x10 http_requests{job="app-server", instance="1", group="production"} 0+60x10 http_requests{job="app-server", instance="0", group="canary"} 0+70x10 http_requests{job="app-server", instance="1", group="canary"} 0+80x10 eval_ordered instant at 50m topk(3, http_requests) http_requests{group="canary", instance="1", job="app-server"} 800 http_requests{group="canary", instance="0", job="app-server"} 700 http_requests{group="production", instance="1", job="app-server"} 600 eval_ordered instant at 50m topk(5, http_requests{group="canary",job="app-server"}) http_requests{group="canary", instance="1", job="app-server"} 800 http_requests{group="canary", instance="0", job="app-server"} 700 eval_ordered instant at 50m bottomk(3, http_requests) http_requests{group="production", instance="0", job="api-server"} 100 http_requests{group="production", instance="1", job="api-server"} 200 http_requests{group="canary", instance="0", job="api-server"} 300 eval_ordered instant at 50m bottomk(5, http_requests{group="canary",job="app-server"}) http_requests{group="canary", instance="0", job="app-server"} 700 http_requests{group="canary", instance="1", job="app-server"} 800 # Test NaN is sorted away from the top/bottom. eval_ordered instant at 50m topk(3, http_requests{job="api-server",group="production"}) http_requests{job="api-server", instance="1", group="production"} 200 http_requests{job="api-server", instance="0", group="production"} 100 http_requests{job="api-server", instance="2", group="production"} NaN eval_ordered instant at 50m bottomk(3, http_requests{job="api-server",group="production"}) http_requests{job="api-server", instance="0", group="production"} 100 http_requests{job="api-server", instance="1", group="production"} 200 http_requests{job="api-server", instance="2", group="production"} NaN # Tests for sort/sort_desc. 
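# sort() orders the result vector by ascending sample value, sort_desc() by
# descending value; in both orders NaN is expected at the end of the result.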
clear load 5m http_requests{job="api-server", instance="0", group="production"} 0+10x10 http_requests{job="api-server", instance="1", group="production"} 0+20x10 http_requests{job="api-server", instance="0", group="canary"} 0+30x10 http_requests{job="api-server", instance="1", group="canary"} 0+40x10 http_requests{job="api-server", instance="2", group="canary"} NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN http_requests{job="app-server", instance="0", group="production"} 0+50x10 http_requests{job="app-server", instance="1", group="production"} 0+60x10 http_requests{job="app-server", instance="0", group="canary"} 0+70x10 http_requests{job="app-server", instance="1", group="canary"} 0+80x10 eval_ordered instant at 50m sort(http_requests) http_requests{group="production", instance="0", job="api-server"} 100 http_requests{group="production", instance="1", job="api-server"} 200 http_requests{group="canary", instance="0", job="api-server"} 300 http_requests{group="canary", instance="1", job="api-server"} 400 http_requests{group="production", instance="0", job="app-server"} 500 http_requests{group="production", instance="1", job="app-server"} 600 http_requests{group="canary", instance="0", job="app-server"} 700 http_requests{group="canary", instance="1", job="app-server"} 800 http_requests{group="canary", instance="2", job="api-server"} NaN eval_ordered instant at 50m sort_desc(http_requests) http_requests{group="canary", instance="1", job="app-server"} 800 http_requests{group="canary", instance="0", job="app-server"} 700 http_requests{group="production", instance="1", job="app-server"} 600 http_requests{group="production", instance="0", job="app-server"} 500 http_requests{group="canary", instance="1", job="api-server"} 400 http_requests{group="canary", instance="0", job="api-server"} 300 http_requests{group="production", instance="1", job="api-server"} 200 http_requests{group="production", instance="0", job="api-server"} 100 http_requests{group="canary", instance="2", job="api-server"} NaN prometheus-0.16.2+ds/promql/testdata/histograms.test000066400000000000000000000133241265137125100226070ustar00rootroot00000000000000# Two histograms with 4 buckets each (x_sum and x_count not included, # only buckets). Lowest bucket for one histogram < 0, for the other > # 0. They have the same name, just separated by label. Not useful in # practice, but can happen (if clients change bucketing), and the # server has to cope with it. # Test histogram. load 5m testhistogram_bucket{le="0.1", start="positive"} 0+5x10 testhistogram_bucket{le=".2", start="positive"} 0+7x10 testhistogram_bucket{le="1e0", start="positive"} 0+11x10 testhistogram_bucket{le="+Inf", start="positive"} 0+12x10 testhistogram_bucket{le="-.2", start="negative"} 0+1x10 testhistogram_bucket{le="-0.1", start="negative"} 0+2x10 testhistogram_bucket{le="0.3", start="negative"} 0+2x10 testhistogram_bucket{le="+Inf", start="negative"} 0+3x10 # Now a more realistic histogram per job and instance to test aggregation. 
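# When aggregating before histogram_quantile(), the "le" label must be
# preserved in the by() clause, since the bucket boundaries are encoded in it.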
load 5m request_duration_seconds_bucket{job="job1", instance="ins1", le="0.1"} 0+1x10 request_duration_seconds_bucket{job="job1", instance="ins1", le="0.2"} 0+3x10 request_duration_seconds_bucket{job="job1", instance="ins1", le="+Inf"} 0+4x10 request_duration_seconds_bucket{job="job1", instance="ins2", le="0.1"} 0+2x10 request_duration_seconds_bucket{job="job1", instance="ins2", le="0.2"} 0+5x10 request_duration_seconds_bucket{job="job1", instance="ins2", le="+Inf"} 0+6x10 request_duration_seconds_bucket{job="job2", instance="ins1", le="0.1"} 0+3x10 request_duration_seconds_bucket{job="job2", instance="ins1", le="0.2"} 0+4x10 request_duration_seconds_bucket{job="job2", instance="ins1", le="+Inf"} 0+6x10 request_duration_seconds_bucket{job="job2", instance="ins2", le="0.1"} 0+4x10 request_duration_seconds_bucket{job="job2", instance="ins2", le="0.2"} 0+7x10 request_duration_seconds_bucket{job="job2", instance="ins2", le="+Inf"} 0+9x10 # Quantile too low. eval instant at 50m histogram_quantile(-0.1, testhistogram_bucket) {start="positive"} -Inf {start="negative"} -Inf # Quantile too high. eval instant at 50m histogram_quantile(1.01, testhistogram_bucket) {start="positive"} +Inf {start="negative"} +Inf # Quantile value in lowest bucket, which is positive. eval instant at 50m histogram_quantile(0, testhistogram_bucket{start="positive"}) {start="positive"} 0 # Quantile value in lowest bucket, which is negative. eval instant at 50m histogram_quantile(0, testhistogram_bucket{start="negative"}) {start="negative"} -0.2 # Quantile value in highest bucket. eval instant at 50m histogram_quantile(1, testhistogram_bucket) {start="positive"} 1 {start="negative"} 0.3 # Finally some useful quantiles. eval instant at 50m histogram_quantile(0.2, testhistogram_bucket) {start="positive"} 0.048 {start="negative"} -0.2 eval instant at 50m histogram_quantile(0.5, testhistogram_bucket) {start="positive"} 0.15 {start="negative"} -0.15 eval instant at 50m histogram_quantile(0.8, testhistogram_bucket) {start="positive"} 0.72 {start="negative"} 0.3 # More realistic with rates. eval instant at 50m histogram_quantile(0.2, rate(testhistogram_bucket[5m])) {start="positive"} 0.048 {start="negative"} -0.2 eval instant at 50m histogram_quantile(0.5, rate(testhistogram_bucket[5m])) {start="positive"} 0.15 {start="negative"} -0.15 eval instant at 50m histogram_quantile(0.8, rate(testhistogram_bucket[5m])) {start="positive"} 0.72 {start="negative"} 0.3 # Aggregated histogram: Everything in one. eval instant at 50m histogram_quantile(0.3, sum(rate(request_duration_seconds_bucket[5m])) by (le)) {} 0.075 eval instant at 50m histogram_quantile(0.5, sum(rate(request_duration_seconds_bucket[5m])) by (le)) {} 0.1277777777777778 # Aggregated histogram: Everything in one. Now with avg, which does not change anything. eval instant at 50m histogram_quantile(0.3, avg(rate(request_duration_seconds_bucket[5m])) by (le)) {} 0.075 eval instant at 50m histogram_quantile(0.5, avg(rate(request_duration_seconds_bucket[5m])) by (le)) {} 0.12777777777777778 # Aggregated histogram: By job. eval instant at 50m histogram_quantile(0.3, sum(rate(request_duration_seconds_bucket[5m])) by (le, instance)) {instance="ins1"} 0.075 {instance="ins2"} 0.075 eval instant at 50m histogram_quantile(0.5, sum(rate(request_duration_seconds_bucket[5m])) by (le, instance)) {instance="ins1"} 0.1333333333 {instance="ins2"} 0.125 # Aggregated histogram: By instance. 
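# Hand-checked sanity derivation for the first expectation below: for job1,
# sum(rate(...)) by (le, job) gives per-second bucket rates of 3/300
# (le=0.1), 8/300 (le=0.2) and 10/300 (le=+Inf). The 0.3-quantile rank is
# 0.3 * 10/300 = 3/300, which falls exactly at the top of the first bucket,
# so linear interpolation from the implicit lower bound 0 yields
# 0.1 * (3/300)/(3/300) = 0.1.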
eval instant at 50m histogram_quantile(0.3, sum(rate(request_duration_seconds_bucket[5m])) by (le, job)) {job="job1"} 0.1 {job="job2"} 0.0642857142857143 eval instant at 50m histogram_quantile(0.5, sum(rate(request_duration_seconds_bucket[5m])) by (le, job)) {job="job1"} 0.14 {job="job2"} 0.1125 # Aggregated histogram: By job and instance. eval instant at 50m histogram_quantile(0.3, sum(rate(request_duration_seconds_bucket[5m])) by (le, job, instance)) {instance="ins1", job="job1"} 0.11 {instance="ins2", job="job1"} 0.09 {instance="ins1", job="job2"} 0.06 {instance="ins2", job="job2"} 0.0675 eval instant at 50m histogram_quantile(0.5, sum(rate(request_duration_seconds_bucket[5m])) by (le, job, instance)) {instance="ins1", job="job1"} 0.15 {instance="ins2", job="job1"} 0.1333333333333333 {instance="ins1", job="job2"} 0.1 {instance="ins2", job="job2"} 0.1166666666666667 # The unaggregated histogram for comparison. Same result as the previous one. eval instant at 50m histogram_quantile(0.3, rate(request_duration_seconds_bucket[5m])) {instance="ins1", job="job1"} 0.11 {instance="ins2", job="job1"} 0.09 {instance="ins1", job="job2"} 0.06 {instance="ins2", job="job2"} 0.0675 eval instant at 50m histogram_quantile(0.5, rate(request_duration_seconds_bucket[5m])) {instance="ins1", job="job1"} 0.15 {instance="ins2", job="job1"} 0.13333333333333333 {instance="ins1", job="job2"} 0.1 {instance="ins2", job="job2"} 0.11666666666666667 prometheus-0.16.2+ds/promql/testdata/legacy.test000066400000000000000000000545551265137125100217060ustar00rootroot00000000000000load 5m http_requests{job="api-server", instance="0", group="production"} 0+10x10 http_requests{job="api-server", instance="1", group="production"} 0+20x10 http_requests{job="api-server", instance="0", group="canary"} 0+30x10 http_requests{job="api-server", instance="1", group="canary"} 0+40x10 http_requests{job="app-server", instance="0", group="production"} 0+50x10 http_requests{job="app-server", instance="1", group="production"} 0+60x10 http_requests{job="app-server", instance="0", group="canary"} 0+70x10 http_requests{job="app-server", instance="1", group="canary"} 0+80x10 load 5m x{y="testvalue"} 0+10x10 load 5m testcounter_reset_middle 0+10x4 0+10x5 testcounter_reset_end 0+10x9 0 10 load 4m testcounter_zero_cutoff{start="0m"} 0+240x10 testcounter_zero_cutoff{start="1m"} 60+240x10 testcounter_zero_cutoff{start="2m"} 120+240x10 testcounter_zero_cutoff{start="3m"} 180+240x10 testcounter_zero_cutoff{start="4m"} 240+240x10 testcounter_zero_cutoff{start="5m"} 300+240x10 load 5m label_grouping_test{a="aa", b="bb"} 0+10x10 label_grouping_test{a="a", b="abb"} 0+20x10 load 5m vector_matching_a{l="x"} 0+1x100 vector_matching_a{l="y"} 0+2x50 vector_matching_b{l="x"} 0+4x25 load 5m cpu_count{instance="0", type="numa"} 0+30x10 cpu_count{instance="0", type="smp"} 0+10x20 cpu_count{instance="1", type="smp"} 0+20x10 eval instant at 50m SUM(http_requests) {} 3600 eval instant at 50m SUM(http_requests{instance="0"}) BY(job) {job="api-server"} 400 {job="app-server"} 1200 eval instant at 50m SUM(http_requests{instance="0"}) BY(job) KEEP_COMMON {instance="0", job="api-server"} 400 {instance="0", job="app-server"} 1200 eval instant at 50m SUM(http_requests) BY (job) {job="api-server"} 1000 {job="app-server"} 2600 # Non-existent labels mentioned in BY-clauses shouldn't propagate to output. 
eval instant at 50m SUM(http_requests) BY (job, nonexistent) {job="api-server"} 1000 {job="app-server"} 2600 eval instant at 50m COUNT(http_requests) BY (job) {job="api-server"} 4 {job="app-server"} 4 eval instant at 50m SUM(http_requests) BY (job, group) {group="canary", job="api-server"} 700 {group="canary", job="app-server"} 1500 {group="production", job="api-server"} 300 {group="production", job="app-server"} 1100 eval instant at 50m AVG(http_requests) BY (job) {job="api-server"} 250 {job="app-server"} 650 eval instant at 50m MIN(http_requests) BY (job) {job="api-server"} 100 {job="app-server"} 500 eval instant at 50m MAX(http_requests) BY (job) {job="api-server"} 400 {job="app-server"} 800 eval instant at 50m SUM(http_requests) BY (job) - COUNT(http_requests) BY (job) {job="api-server"} 996 {job="app-server"} 2596 eval instant at 50m 2 - SUM(http_requests) BY (job) {job="api-server"} -998 {job="app-server"} -2598 eval instant at 50m 1000 / SUM(http_requests) BY (job) {job="api-server"} 1 {job="app-server"} 0.38461538461538464 eval instant at 50m SUM(http_requests) BY (job) - 2 {job="api-server"} 998 {job="app-server"} 2598 eval instant at 50m SUM(http_requests) BY (job) % 3 {job="api-server"} 1 {job="app-server"} 2 eval instant at 50m SUM(http_requests) BY (job) / 0 {job="api-server"} +Inf {job="app-server"} +Inf eval instant at 50m SUM(http_requests) BY (job) + SUM(http_requests) BY (job) {job="api-server"} 2000 {job="app-server"} 5200 eval instant at 50m http_requests{job="api-server", group="canary"} http_requests{group="canary", instance="0", job="api-server"} 300 http_requests{group="canary", instance="1", job="api-server"} 400 eval instant at 50m http_requests{job="api-server", group="canary"} + rate(http_requests{job="api-server"}[5m]) * 5 * 60 {group="canary", instance="0", job="api-server"} 330 {group="canary", instance="1", job="api-server"} 440 eval instant at 50m rate(http_requests[25m]) * 25 * 60 {group="canary", instance="0", job="api-server"} 150 {group="canary", instance="0", job="app-server"} 350 {group="canary", instance="1", job="api-server"} 200 {group="canary", instance="1", job="app-server"} 400 {group="production", instance="0", job="api-server"} 50 {group="production", instance="0", job="app-server"} 249.99999999999997 {group="production", instance="1", job="api-server"} 100 {group="production", instance="1", job="app-server"} 300 # Single-letter label names and values. eval instant at 50m x{y="testvalue"} x{y="testvalue"} 100 # Lower-cased aggregation operators should work too. eval instant at 50m sum(http_requests) by (job) + min(http_requests) by (job) + max(http_requests) by (job) + avg(http_requests) by (job) {job="app-server"} 4550 {job="api-server"} 1750 # Deltas should be adjusted for target interval vs. samples under target interval. eval instant at 50m delta(http_requests{group="canary", instance="1", job="app-server"}[18m]) {group="canary", instance="1", job="app-server"} 288 # Rates should calculate per-second rates. eval instant at 50m rate(http_requests{group="canary", instance="1", job="app-server"}[50m]) {group="canary", instance="1", job="app-server"} 0.26666666666666666 # Counter resets at in the middle of range are handled correctly by rate(). eval instant at 50m rate(testcounter_reset_middle[50m]) {} 0.03 # Counter resets at end of range are ignored by rate(). eval instant at 50m rate(testcounter_reset_end[5m]) {} 0 # Zero cutoff for left-side extrapolation. 
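# Hand-checked example of the extrapolation limits: at t=10m the [20m] window
# of testcounter_zero_cutoff{start="0m"} contains the samples 0, 240, 480
# (at 0m, 4m, 8m), i.e. an increase of 480 over 480s. The rate is
# extrapolated half a sample interval (120s) towards the window end, but on
# the left never past the point where the counter would have been zero,
# giving (480 + 120) / 1200s = 0.5. For start="1m" the zero point lies only
# 60s before the first sample, so (480 + 120 + 60) / 1200s = 0.55.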
eval instant at 10m rate(testcounter_zero_cutoff[20m]) {start="0m"} 0.5 {start="1m"} 0.55 {start="2m"} 0.6 {start="3m"} 0.65 {start="4m"} 0.7 {start="5m"} 0.6 # Normal half-interval cutoff for left-side extrapolation. eval instant at 50m rate(testcounter_zero_cutoff[20m]) {start="0m"} 0.6 {start="1m"} 0.6 {start="2m"} 0.6 {start="3m"} 0.6 {start="4m"} 0.6 {start="5m"} 0.6 # count_scalar for a non-empty vector should return scalar element count. eval instant at 50m count_scalar(http_requests) 8 # count_scalar for an empty vector should return scalar 0. eval instant at 50m count_scalar(nonexistent) 0 eval instant at 50m http_requests{group!="canary"} http_requests{group="production", instance="1", job="app-server"} 600 http_requests{group="production", instance="0", job="app-server"} 500 http_requests{group="production", instance="1", job="api-server"} 200 http_requests{group="production", instance="0", job="api-server"} 100 eval instant at 50m http_requests{job=~"server",group!="canary"} http_requests{group="production", instance="1", job="app-server"} 600 http_requests{group="production", instance="0", job="app-server"} 500 http_requests{group="production", instance="1", job="api-server"} 200 http_requests{group="production", instance="0", job="api-server"} 100 eval instant at 50m http_requests{job!~"api",group!="canary"} http_requests{group="production", instance="1", job="app-server"} 600 http_requests{group="production", instance="0", job="app-server"} 500 eval instant at 50m count_scalar(http_requests{job=~"^server$"}) 0 eval instant at 50m http_requests{group="production",job=~"^api"} http_requests{group="production", instance="0", job="api-server"} 100 http_requests{group="production", instance="1", job="api-server"} 200 eval instant at 50m abs(-1 * http_requests{group="production",job="api-server"}) {group="production", instance="0", job="api-server"} 100 {group="production", instance="1", job="api-server"} 200 eval instant at 50m floor(0.004 * http_requests{group="production",job="api-server"}) {group="production", instance="0", job="api-server"} 0 {group="production", instance="1", job="api-server"} 0 eval instant at 50m ceil(0.004 * http_requests{group="production",job="api-server"}) {group="production", instance="0", job="api-server"} 1 {group="production", instance="1", job="api-server"} 1 eval instant at 50m round(0.004 * http_requests{group="production",job="api-server"}) {group="production", instance="0", job="api-server"} 0 {group="production", instance="1", job="api-server"} 1 # Round should correctly handle negative numbers. eval instant at 50m round(-1 * (0.004 * http_requests{group="production",job="api-server"})) {group="production", instance="0", job="api-server"} 0 {group="production", instance="1", job="api-server"} -1 # Round should round half up. 
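# "Half up" means ties are rounded towards +Inf, so round(0.5) is 1 while
# round(-0.5) is 0, as asserted below.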
eval instant at 50m round(0.005 * http_requests{group="production",job="api-server"}) {group="production", instance="0", job="api-server"} 1 {group="production", instance="1", job="api-server"} 1 eval instant at 50m round(-1 * (0.005 * http_requests{group="production",job="api-server"})) {group="production", instance="0", job="api-server"} 0 {group="production", instance="1", job="api-server"} -1 eval instant at 50m round(1 + 0.005 * http_requests{group="production",job="api-server"}) {group="production", instance="0", job="api-server"} 2 {group="production", instance="1", job="api-server"} 2 eval instant at 50m round(-1 * (1 + 0.005 * http_requests{group="production",job="api-server"})) {group="production", instance="0", job="api-server"} -1 {group="production", instance="1", job="api-server"} -2 # Round should accept the number to round nearest to. eval instant at 50m round(0.0005 * http_requests{group="production",job="api-server"}, 0.1) {group="production", instance="0", job="api-server"} 0.1 {group="production", instance="1", job="api-server"} 0.1 eval instant at 50m round(2.1 + 0.0005 * http_requests{group="production",job="api-server"}, 0.1) {group="production", instance="0", job="api-server"} 2.2 {group="production", instance="1", job="api-server"} 2.2 eval instant at 50m round(5.2 + 0.0005 * http_requests{group="production",job="api-server"}, 0.1) {group="production", instance="0", job="api-server"} 5.3 {group="production", instance="1", job="api-server"} 5.3 # Round should work correctly with negative numbers and multiple decimal places. eval instant at 50m round(-1 * (5.2 + 0.0005 * http_requests{group="production",job="api-server"}), 0.1) {group="production", instance="0", job="api-server"} -5.2 {group="production", instance="1", job="api-server"} -5.3 # Round should work correctly with big toNearests. 
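# round(v, n) effectively computes round(v/n) * n with the same half-up tie
# breaking, e.g. round(2.5, 5) = 5 and round(9, 5) = 10 below.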
eval instant at 50m round(0.025 * http_requests{group="production",job="api-server"}, 5) {group="production", instance="0", job="api-server"} 5 {group="production", instance="1", job="api-server"} 5 eval instant at 50m round(0.045 * http_requests{group="production",job="api-server"}, 5) {group="production", instance="0", job="api-server"} 5 {group="production", instance="1", job="api-server"} 10 eval instant at 50m avg_over_time(http_requests{group="production",job="api-server"}[1h]) {group="production", instance="0", job="api-server"} 50 {group="production", instance="1", job="api-server"} 100 eval instant at 50m count_over_time(http_requests{group="production",job="api-server"}[1h]) {group="production", instance="0", job="api-server"} 11 {group="production", instance="1", job="api-server"} 11 eval instant at 50m max_over_time(http_requests{group="production",job="api-server"}[1h]) {group="production", instance="0", job="api-server"} 100 {group="production", instance="1", job="api-server"} 200 eval instant at 50m min_over_time(http_requests{group="production",job="api-server"}[1h]) {group="production", instance="0", job="api-server"} 0 {group="production", instance="1", job="api-server"} 0 eval instant at 50m sum_over_time(http_requests{group="production",job="api-server"}[1h]) {group="production", instance="0", job="api-server"} 550 {group="production", instance="1", job="api-server"} 1100 eval instant at 50m time() 3000 eval instant at 50m drop_common_labels(http_requests{group="production",job="api-server"}) http_requests{instance="0"} 100 http_requests{instance="1"} 200 eval instant at 50m {__name__=~".+"} http_requests{group="canary", instance="0", job="api-server"} 300 http_requests{group="canary", instance="0", job="app-server"} 700 http_requests{group="canary", instance="1", job="api-server"} 400 http_requests{group="canary", instance="1", job="app-server"} 800 http_requests{group="production", instance="0", job="api-server"} 100 http_requests{group="production", instance="0", job="app-server"} 500 http_requests{group="production", instance="1", job="api-server"} 200 http_requests{group="production", instance="1", job="app-server"} 600 testcounter_reset_end 0 testcounter_reset_middle 50 x{y="testvalue"} 100 label_grouping_test{a="a", b="abb"} 200 label_grouping_test{a="aa", b="bb"} 100 vector_matching_a{l="x"} 10 vector_matching_a{l="y"} 20 vector_matching_b{l="x"} 40 cpu_count{instance="1", type="smp"} 200 cpu_count{instance="0", type="smp"} 100 cpu_count{instance="0", type="numa"} 300 eval instant at 50m {job=~"server", job!~"api"} http_requests{group="canary", instance="0", job="app-server"} 700 http_requests{group="canary", instance="1", job="app-server"} 800 http_requests{group="production", instance="0", job="app-server"} 500 http_requests{group="production", instance="1", job="app-server"} 600 # Test alternative "by"-clause order. eval instant at 50m sum by (group) (http_requests{job="api-server"}) {group="canary"} 700 {group="production"} 300 # Test alternative "by"-clause order with "keep_common". eval instant at 50m sum by (group) keep_common (http_requests{job="api-server"}) {group="canary", job="api-server"} 700 {group="production", job="api-server"} 300 # Test both alternative "by"-clause orders in one expression. # Public health warning: stick to one form within an expression (or even # in an organization), or risk serious user confusion. 
eval instant at 50m sum(sum by (group) keep_common (http_requests{job="api-server"})) by (job) {job="api-server"} 1000 eval instant at 50m http_requests{group="canary"} and http_requests{instance="0"} http_requests{group="canary", instance="0", job="api-server"} 300 http_requests{group="canary", instance="0", job="app-server"} 700 eval instant at 50m (http_requests{group="canary"} + 1) and http_requests{instance="0"} {group="canary", instance="0", job="api-server"} 301 {group="canary", instance="0", job="app-server"} 701 eval instant at 50m (http_requests{group="canary"} + 1) and on(instance, job) http_requests{instance="0", group="production"} {group="canary", instance="0", job="api-server"} 301 {group="canary", instance="0", job="app-server"} 701 eval instant at 50m (http_requests{group="canary"} + 1) and on(instance) http_requests{instance="0", group="production"} {group="canary", instance="0", job="api-server"} 301 {group="canary", instance="0", job="app-server"} 701 eval instant at 50m http_requests{group="canary"} or http_requests{group="production"} http_requests{group="canary", instance="0", job="api-server"} 300 http_requests{group="canary", instance="0", job="app-server"} 700 http_requests{group="canary", instance="1", job="api-server"} 400 http_requests{group="canary", instance="1", job="app-server"} 800 http_requests{group="production", instance="0", job="api-server"} 100 http_requests{group="production", instance="0", job="app-server"} 500 http_requests{group="production", instance="1", job="api-server"} 200 http_requests{group="production", instance="1", job="app-server"} 600 # On overlap the rhs samples must be dropped. eval instant at 50m (http_requests{group="canary"} + 1) or http_requests{instance="1"} {group="canary", instance="0", job="api-server"} 301 {group="canary", instance="0", job="app-server"} 701 {group="canary", instance="1", job="api-server"} 401 {group="canary", instance="1", job="app-server"} 801 http_requests{group="production", instance="1", job="api-server"} 200 http_requests{group="production", instance="1", job="app-server"} 600 # Matching only on instance excludes everything that has instance=0/1 but includes # entries without the instance label. eval instant at 50m (http_requests{group="canary"} + 1) or on(instance) (http_requests or cpu_count or vector_matching_a) {group="canary", instance="0", job="api-server"} 301 {group="canary", instance="0", job="app-server"} 701 {group="canary", instance="1", job="api-server"} 401 {group="canary", instance="1", job="app-server"} 801 vector_matching_a{l="x"} 10 vector_matching_a{l="y"} 20 eval instant at 50m http_requests{group="canary"} / on(instance,job) http_requests{group="production"} {instance="0", job="api-server"} 3 {instance="0", job="app-server"} 1.4 {instance="1", job="api-server"} 2 {instance="1", job="app-server"} 1.3333333333333333 # Include labels must guarantee uniquely identifiable time series. eval_fail instant at 50m http_requests{group="production"} / on(instance) group_left(group) cpu_count{type="smp"} # Many-to-many matching is not allowed. eval_fail instant at 50m http_requests{group="production"} / on(instance) group_left(job,type) cpu_count # Many-to-one matching must be explicit. 
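# group_left declares the left-hand side as the "many" side of a many-to-one
# match; the labels listed in parentheses are carried into the result and
# must keep the output series unique. group_right mirrors this for
# one-to-many matching.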
eval_fail instant at 50m http_requests{group="production"} / on(instance) cpu_count{type="smp"} eval instant at 50m http_requests{group="production"} / on(instance) group_left(job) cpu_count{type="smp"} {instance="1", job="api-server"} 1 {instance="0", job="app-server"} 5 {instance="1", job="app-server"} 3 {instance="0", job="api-server"} 1 # Ensure sidedness of grouping preserves operand sides. eval instant at 50m cpu_count{type="smp"} / on(instance) group_right(job) http_requests{group="production"} {instance="1", job="app-server"} 0.3333333333333333 {instance="0", job="app-server"} 0.2 {instance="1", job="api-server"} 1 {instance="0", job="api-server"} 1 # Include labels from both sides. eval instant at 50m http_requests{group="production"} / on(instance) group_left(job) cpu_count{type="smp"} {instance="1", job="api-server"} 1 {instance="0", job="app-server"} 5 {instance="1", job="app-server"} 3 {instance="0", job="api-server"} 1 eval instant at 50m http_requests{group="production"} < on(instance,job) http_requests{group="canary"} {instance="1", job="app-server"} 600 {instance="0", job="app-server"} 500 {instance="1", job="api-server"} 200 {instance="0", job="api-server"} 100 eval instant at 50m http_requests{group="production"} > on(instance,job) http_requests{group="canary"} # no output eval instant at 50m http_requests{group="production"} == on(instance,job) http_requests{group="canary"} # no output eval instant at 50m http_requests > on(instance) group_left(group,job) cpu_count{type="smp"} {group="canary", instance="0", job="app-server"} 700 {group="canary", instance="1", job="app-server"} 800 {group="canary", instance="0", job="api-server"} 300 {group="canary", instance="1", job="api-server"} 400 {group="production", instance="0", job="app-server"} 500 {group="production", instance="1", job="app-server"} 600 eval instant at 50m {l="x"} + on(__name__) {l="y"} vector_matching_a 30 eval instant at 50m absent(nonexistent) {} 1 eval instant at 50m absent(nonexistent{job="testjob", instance="testinstance", method=~".x"}) {instance="testinstance", job="testjob"} 1 eval instant at 50m count_scalar(absent(http_requests)) 0 eval instant at 50m count_scalar(absent(sum(http_requests))) 0 eval instant at 50m absent(sum(nonexistent{job="testjob", instance="testinstance"})) {} 1 eval instant at 50m http_requests{group="production",job="api-server"} offset 5m http_requests{group="production", instance="0", job="api-server"} 90 http_requests{group="production", instance="1", job="api-server"} 180 eval instant at 50m rate(http_requests{group="production",job="api-server"}[10m] offset 5m) {group="production", instance="0", job="api-server"} 0.03333333333333333 {group="production", instance="1", job="api-server"} 0.06666666666666667 # Regression test for missing separator byte in labelsToGroupingKey. 
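# Without a separator byte between label values, a="a",b="abb" and
# a="aa",b="bb" would both collapse to the grouping key "aabb" and be summed
# into one series; the expected output below checks that they stay distinct.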
eval instant at 50m sum(label_grouping_test) by (a, b) {a="a", b="abb"} 200 {a="aa", b="bb"} 100 eval instant at 50m http_requests{group="canary", instance="0", job="api-server"} / 0 {group="canary", instance="0", job="api-server"} +Inf eval instant at 50m -1 * http_requests{group="canary", instance="0", job="api-server"} / 0 {group="canary", instance="0", job="api-server"} -Inf eval instant at 50m 0 * http_requests{group="canary", instance="0", job="api-server"} / 0 {group="canary", instance="0", job="api-server"} NaN eval instant at 50m 0 * http_requests{group="canary", instance="0", job="api-server"} % 0 {group="canary", instance="0", job="api-server"} NaN eval instant at 50m exp(vector_matching_a) {l="x"} 22026.465794806718 {l="y"} 485165195.4097903 eval instant at 50m exp(vector_matching_a - 10) {l="y"} 22026.465794806718 {l="x"} 1 eval instant at 50m exp(vector_matching_a - 20) {l="x"} 4.5399929762484854e-05 {l="y"} 1 eval instant at 50m ln(vector_matching_a) {l="x"} 2.302585092994046 {l="y"} 2.995732273553991 eval instant at 50m ln(vector_matching_a - 10) {l="y"} 2.302585092994046 {l="x"} -Inf eval instant at 50m ln(vector_matching_a - 20) {l="y"} -Inf {l="x"} NaN eval instant at 50m exp(ln(vector_matching_a)) {l="y"} 20 {l="x"} 10 eval instant at 50m sqrt(vector_matching_a) {l="x"} 3.1622776601683795 {l="y"} 4.47213595499958 eval instant at 50m log2(vector_matching_a) {l="x"} 3.3219280948873626 {l="y"} 4.321928094887363 eval instant at 50m log2(vector_matching_a - 10) {l="y"} 3.3219280948873626 {l="x"} -Inf eval instant at 50m log2(vector_matching_a - 20) {l="x"} NaN {l="y"} -Inf eval instant at 50m log10(vector_matching_a) {l="x"} 1 {l="y"} 1.301029995663981 eval instant at 50m log10(vector_matching_a - 10) {l="y"} 1 {l="x"} -Inf eval instant at 50m log10(vector_matching_a - 20) {l="x"} NaN {l="y"} -Inf eval instant at 50m stddev(http_requests) {} 229.12878474779 eval instant at 50m stddev by (instance)(http_requests) {instance="0"} 223.60679774998 {instance="1"} 223.60679774998 eval instant at 50m stdvar(http_requests) {} 52500 eval instant at 50m stdvar by (instance)(http_requests) {instance="0"} 50000 {instance="1"} 50000 # Matrix tests. clear load 1h testmetric{testlabel="1"} 1 1 testmetric{testlabel="2"} _ 2 eval instant at 0h drop_common_labels(testmetric) testmetric 1 eval instant at 1h drop_common_labels(testmetric) testmetric{testlabel="1"} 1 testmetric{testlabel="2"} 2 clear load 1h testmetric{testlabel="1"} 1 1 testmetric{testlabel="2"} 2 _ eval instant at 0h sum(testmetric) keep_common {} 3 eval instant at 1h sum(testmetric) keep_common {testlabel="1"} 1 clear load 1h testmetric{aa="bb"} 1 testmetric{a="abb"} 2 eval instant at 0h testmetric testmetric{aa="bb"} 1 testmetric{a="abb"} 2 prometheus-0.16.2+ds/promql/testdata/literals.test000066400000000000000000000011551265137125100222450ustar00rootroot00000000000000eval instant at 50m 12.34e6 12340000 eval instant at 50m 12.34e+6 12340000 eval instant at 50m 12.34e-6 0.00001234 eval instant at 50m 1+1 2 eval instant at 50m 1-1 0 eval instant at 50m 1 - -1 2 eval instant at 50m .2 0.2 eval instant at 50m +0.2 0.2 eval instant at 50m -0.2e-6 -0.0000002 eval instant at 50m +Inf +Inf eval instant at 50m inF +Inf eval instant at 50m -inf -Inf eval instant at 50m NaN NaN eval instant at 50m nan NaN eval instant at 50m 2. 
2 eval instant at 50m 1 / 0 +Inf eval instant at 50m -1 / 0 -Inf eval instant at 50m 0 / 0 NaN eval instant at 50m 1 % 0 NaN prometheus-0.16.2+ds/promql/testdata/operators.test000066400000000000000000000012301265137125100224360ustar00rootroot00000000000000# Tests for min/max. clear load 5m http_requests{job="api-server", instance="0", group="production"} 1 http_requests{job="api-server", instance="1", group="production"} 2 http_requests{job="api-server", instance="0", group="canary"} NaN http_requests{job="api-server", instance="1", group="canary"} 3 http_requests{job="api-server", instance="2", group="canary"} 4 eval instant at 0m max(http_requests) {} 4 eval instant at 0m min(http_requests) {} 1 eval instant at 0m max by (group) (http_requests) {group="production"} 2 {group="canary"} 4 eval instant at 0m min by (group) (http_requests) {group="production"} 1 {group="canary"} 3 prometheus-0.16.2+ds/retrieval/000077500000000000000000000000001265137125100163755ustar00rootroot00000000000000prometheus-0.16.2+ds/retrieval/discovery/000077500000000000000000000000001265137125100204045ustar00rootroot00000000000000prometheus-0.16.2+ds/retrieval/discovery/consul.go000066400000000000000000000210211265137125100222320ustar00rootroot00000000000000// Copyright 2015 The Prometheus Authors // Licensed under the Apache License, Version 2.0 (the "License"); // you may not use this file except in compliance with the License. // You may obtain a copy of the License at // // http://www.apache.org/licenses/LICENSE-2.0 // // Unless required by applicable law or agreed to in writing, software // distributed under the License is distributed on an "AS IS" BASIS, // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. // See the License for the specific language governing permissions and // limitations under the License. package discovery import ( "fmt" "net/http" "strconv" "strings" "sync" "time" consul "github.com/hashicorp/consul/api" "github.com/prometheus/common/log" "github.com/prometheus/common/model" "github.com/prometheus/prometheus/config" ) const ( consulWatchTimeout = 30 * time.Second consulRetryInterval = 15 * time.Second // consulAddressLabel is the name for the label containing a target's address. consulAddressLabel = model.MetaLabelPrefix + "consul_address" // consulNodeLabel is the name for the label containing a target's node name. consulNodeLabel = model.MetaLabelPrefix + "consul_node" // consulTagsLabel is the name of the label containing the tags assigned to the target. consulTagsLabel = model.MetaLabelPrefix + "consul_tags" // consulServiceLabel is the name of the label containing the service name. consulServiceLabel = model.MetaLabelPrefix + "consul_service" // consulServiceAddressLabel is the name of the label containing the (optional) service address. consulServiceAddressLabel = model.MetaLabelPrefix + "consul_service_address" // consulServicePortLabel is the name of the label containing the service port. consulServicePortLabel = model.MetaLabelPrefix + "consul_service_port" // consulDCLabel is the name of the label containing the datacenter ID. consulDCLabel = model.MetaLabelPrefix + "consul_dc" // consulServiceIDLabel is the name of the label containing the service ID. consulServiceIDLabel = model.MetaLabelPrefix + "consul_service_id" ) // ConsulDiscovery retrieves target information from a Consul server // and updates them via watches. 
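//
// A minimal usage sketch (illustrative only, not part of the original
// source), based on the constructor and TargetProvider methods below:
//
//	cd, err := NewConsulDiscovery(conf) // conf is a *config.ConsulSDConfig
//	if err != nil {
//		log.Fatal(err)
//	}
//	ch := make(chan config.TargetGroup)
//	done := make(chan struct{})
//	go cd.Run(ch, done) // Run closes ch once done is closed.
//	for tg := range ch {
//		// Consume target group updates here.
//	}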
type ConsulDiscovery struct { client *consul.Client clientConf *consul.Config clientDatacenter string tagSeparator string scrapedServices map[string]struct{} mu sync.RWMutex services map[string]*consulService } // consulService contains data belonging to the same service. type consulService struct { name string tgroup config.TargetGroup lastIndex uint64 removed bool running bool done chan struct{} } // NewConsulDiscovery returns a new ConsulDiscovery for the given config. func NewConsulDiscovery(conf *config.ConsulSDConfig) (*ConsulDiscovery, error) { clientConf := &consul.Config{ Address: conf.Server, Scheme: conf.Scheme, Datacenter: conf.Datacenter, Token: conf.Token, HttpAuth: &consul.HttpBasicAuth{ Username: conf.Username, Password: conf.Password, }, } client, err := consul.NewClient(clientConf) if err != nil { return nil, err } cd := &ConsulDiscovery{ client: client, clientConf: clientConf, tagSeparator: conf.TagSeparator, scrapedServices: map[string]struct{}{}, services: map[string]*consulService{}, } // If the datacenter isn't set in the clientConf, let's get it from the local Consul agent // (Consul default is to use local node's datacenter if one isn't given for a query). if clientConf.Datacenter == "" { info, err := client.Agent().Self() if err != nil { return nil, err } cd.clientDatacenter = info["Config"]["Datacenter"].(string) } else { cd.clientDatacenter = clientConf.Datacenter } for _, name := range conf.Services { cd.scrapedServices[name] = struct{}{} } return cd, nil } // Sources implements the TargetProvider interface. func (cd *ConsulDiscovery) Sources() []string { clientConf := *cd.clientConf clientConf.HttpClient = &http.Client{Timeout: 5 * time.Second} client, err := consul.NewClient(&clientConf) if err != nil { // NewClient always returns a nil error. panic(fmt.Errorf("discovery.ConsulDiscovery.Sources: %s", err)) } srvs, _, err := client.Catalog().Services(nil) if err != nil { log.Errorf("Error refreshing service list: %s", err) return nil } cd.mu.Lock() defer cd.mu.Unlock() srcs := make([]string, 0, len(srvs)) for name := range srvs { if _, ok := cd.scrapedServices[name]; len(cd.scrapedServices) == 0 || ok { srcs = append(srcs, name) } } return srcs } // Run implements the TargetProvider interface. func (cd *ConsulDiscovery) Run(ch chan<- config.TargetGroup, done <-chan struct{}) { defer close(ch) defer cd.stop() update := make(chan *consulService, 10) go cd.watchServices(update, done) for { select { case <-done: return case srv := <-update: if srv.removed { close(srv.done) // Send clearing update. ch <- config.TargetGroup{Source: srv.name} break } // Launch watcher for the service. if !srv.running { go cd.watchService(srv, ch) srv.running = true } } } } func (cd *ConsulDiscovery) stop() { // The lock prevents Run from terminating while the watchers attempt // to send on their channels. cd.mu.Lock() defer cd.mu.Unlock() for _, srv := range cd.services { close(srv.done) } } // watchServices retrieves updates from Consul's services endpoint and sends // potential updates to the update channel. func (cd *ConsulDiscovery) watchServices(update chan<- *consulService, done <-chan struct{}) { var lastIndex uint64 for { catalog := cd.client.Catalog() srvs, meta, err := catalog.Services(&consul.QueryOptions{ WaitIndex: lastIndex, WaitTime: consulWatchTimeout, }) if err != nil { log.Errorf("Error refreshing service list: %s", err) time.Sleep(consulRetryInterval) continue } // If the index equals the previous one, the watch timed out with no update. 
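// (This is Consul's blocking-query mechanism: the WaitIndex/WaitTime options
// passed above make Catalog().Services block until the catalog changes or
// consulWatchTimeout elapses, whichever comes first.)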
if meta.LastIndex == lastIndex { continue } lastIndex = meta.LastIndex cd.mu.Lock() select { case <-done: cd.mu.Unlock() return default: // Continue. } // Check for new services. for name := range srvs { if _, ok := cd.scrapedServices[name]; len(cd.scrapedServices) > 0 && !ok { continue } srv, ok := cd.services[name] if !ok { srv = &consulService{ name: name, done: make(chan struct{}), } srv.tgroup.Source = name cd.services[name] = srv } srv.tgroup.Labels = model.LabelSet{ consulServiceLabel: model.LabelValue(name), consulDCLabel: model.LabelValue(cd.clientDatacenter), } update <- srv } // Check for removed services. for name, srv := range cd.services { if _, ok := srvs[name]; !ok { srv.removed = true update <- srv delete(cd.services, name) } } cd.mu.Unlock() } } // watchService retrieves updates about srv from Consul's service endpoint. // On a potential update the resulting target group is sent to ch. func (cd *ConsulDiscovery) watchService(srv *consulService, ch chan<- config.TargetGroup) { catalog := cd.client.Catalog() for { nodes, meta, err := catalog.Service(srv.name, "", &consul.QueryOptions{ WaitIndex: srv.lastIndex, WaitTime: consulWatchTimeout, }) if err != nil { log.Errorf("Error refreshing service %s: %s", srv.name, err) time.Sleep(consulRetryInterval) continue } // If the index equals the previous one, the watch timed out with no update. if meta.LastIndex == srv.lastIndex { continue } srv.lastIndex = meta.LastIndex srv.tgroup.Targets = make([]model.LabelSet, 0, len(nodes)) for _, node := range nodes { addr := fmt.Sprintf("%s:%d", node.Address, node.ServicePort) // We surround the separated list with the separator as well. This way regular expressions // in relabeling rules don't have to consider tag positions. tags := cd.tagSeparator + strings.Join(node.ServiceTags, cd.tagSeparator) + cd.tagSeparator srv.tgroup.Targets = append(srv.tgroup.Targets, model.LabelSet{ model.AddressLabel: model.LabelValue(addr), consulAddressLabel: model.LabelValue(node.Address), consulNodeLabel: model.LabelValue(node.Node), consulTagsLabel: model.LabelValue(tags), consulServiceAddressLabel: model.LabelValue(node.ServiceAddress), consulServicePortLabel: model.LabelValue(strconv.Itoa(node.ServicePort)), consulServiceIDLabel: model.LabelValue(node.ServiceID), }) } cd.mu.Lock() select { case <-srv.done: cd.mu.Unlock() return default: // Continue. } ch <- srv.tgroup cd.mu.Unlock() } } prometheus-0.16.2+ds/retrieval/discovery/dns.go000066400000000000000000000134431265137125100215240ustar00rootroot00000000000000// Copyright 2015 The Prometheus Authors // Licensed under the Apache License, Version 2.0 (the "License"); // you may not use this file except in compliance with the License. // You may obtain a copy of the License at // // http://www.apache.org/licenses/LICENSE-2.0 // // Unless required by applicable law or agreed to in writing, software // distributed under the License is distributed on an "AS IS" BASIS, // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. // See the License for the specific language governing permissions and // limitations under the License. package discovery import ( "fmt" "net" "strings" "sync" "time" "github.com/miekg/dns" "github.com/prometheus/client_golang/prometheus" "github.com/prometheus/common/log" "github.com/prometheus/common/model" "github.com/prometheus/prometheus/config" ) const ( resolvConf = "/etc/resolv.conf" dnsNameLabel = model.MetaLabelPrefix + "dns_name" // Constants for instrumentation. 
namespace = "prometheus" interval = "interval" ) var ( dnsSDLookupsCount = prometheus.NewCounter( prometheus.CounterOpts{ Namespace: namespace, Name: "dns_sd_lookups_total", Help: "The number of DNS-SD lookups.", }) dnsSDLookupFailuresCount = prometheus.NewCounter( prometheus.CounterOpts{ Namespace: namespace, Name: "dns_sd_lookup_failures_total", Help: "The number of DNS-SD lookup failures.", }) ) func init() { prometheus.MustRegister(dnsSDLookupFailuresCount) prometheus.MustRegister(dnsSDLookupsCount) } // DNSDiscovery periodically performs DNS-SD requests. It implements // the TargetProvider interface. type DNSDiscovery struct { names []string done chan struct{} interval time.Duration m sync.RWMutex port int qtype uint16 } // NewDNSDiscovery returns a new DNSDiscovery which periodically refreshes its targets. func NewDNSDiscovery(conf *config.DNSSDConfig) *DNSDiscovery { qtype := dns.TypeSRV switch strings.ToUpper(conf.Type) { case "A": qtype = dns.TypeA case "AAAA": qtype = dns.TypeAAAA case "SRV": qtype = dns.TypeSRV } return &DNSDiscovery{ names: conf.Names, done: make(chan struct{}), interval: time.Duration(conf.RefreshInterval), qtype: qtype, port: conf.Port, } } // Run implements the TargetProvider interface. func (dd *DNSDiscovery) Run(ch chan<- config.TargetGroup, done <-chan struct{}) { defer close(ch) ticker := time.NewTicker(dd.interval) defer ticker.Stop() // Get an initial set right away. dd.refreshAll(ch) for { select { case <-ticker.C: dd.refreshAll(ch) case <-done: return } } } // Sources implements the TargetProvider interface. func (dd *DNSDiscovery) Sources() []string { var srcs []string for _, name := range dd.names { srcs = append(srcs, name) } return srcs } func (dd *DNSDiscovery) refreshAll(ch chan<- config.TargetGroup) { var wg sync.WaitGroup wg.Add(len(dd.names)) for _, name := range dd.names { go func(n string) { if err := dd.refresh(n, ch); err != nil { log.Errorf("Error refreshing DNS targets: %s", err) } wg.Done() }(name) } wg.Wait() } func (dd *DNSDiscovery) refresh(name string, ch chan<- config.TargetGroup) error { response, err := lookupAll(name, dd.qtype) dnsSDLookupsCount.Inc() if err != nil { dnsSDLookupFailuresCount.Inc() return err } var tg config.TargetGroup for _, record := range response.Answer { target := model.LabelValue("") switch addr := record.(type) { case *dns.SRV: // Remove the final dot from rooted DNS names to make them look more usual. 
addr.Target = strings.TrimRight(addr.Target, ".") target = model.LabelValue(fmt.Sprintf("%s:%d", addr.Target, addr.Port)) case *dns.A: target = model.LabelValue(fmt.Sprintf("%s:%d", addr.A, dd.port)) case *dns.AAAA: target = model.LabelValue(fmt.Sprintf("%s:%d", addr.AAAA, dd.port)) default: log.Warnf("%q is not a valid A, AAAA or SRV record", record) continue } tg.Targets = append(tg.Targets, model.LabelSet{ model.AddressLabel: target, dnsNameLabel: model.LabelValue(name), }) } tg.Source = name ch <- tg return nil } func lookupAll(name string, qtype uint16) (*dns.Msg, error) { conf, err := dns.ClientConfigFromFile(resolvConf) if err != nil { return nil, fmt.Errorf("could not load resolv.conf: %s", err) } client := &dns.Client{} response := &dns.Msg{} for _, server := range conf.Servers { servAddr := net.JoinHostPort(server, conf.Port) for _, suffix := range conf.Search { response, err = lookup(name, qtype, client, servAddr, suffix, false) if err != nil { log.Warnf("resolving %s.%s failed: %s", name, suffix, err) continue } if len(response.Answer) > 0 { return response, nil } } response, err = lookup(name, qtype, client, servAddr, "", false) if err == nil { return response, nil } } return response, fmt.Errorf("could not resolve %s: no server responded", name) } func lookup(name string, queryType uint16, client *dns.Client, servAddr string, suffix string, edns bool) (*dns.Msg, error) { msg := &dns.Msg{} lname := strings.Join([]string{name, suffix}, ".") msg.SetQuestion(dns.Fqdn(lname), queryType) if edns { opt := &dns.OPT{ Hdr: dns.RR_Header{ Name: ".", Rrtype: dns.TypeOPT, }, } opt.SetUDPSize(dns.DefaultMsgSize) msg.Extra = append(msg.Extra, opt) } response, _, err := client.Exchange(msg, servAddr) if err != nil { return nil, err } if msg.Id != response.Id { return nil, fmt.Errorf("DNS ID mismatch, request: %d, response: %d", msg.Id, response.Id) } if response.MsgHdr.Truncated { if client.Net == "tcp" { return nil, fmt.Errorf("got truncated message on tcp") } if edns { // Truncated even though EDNS is used. client.Net = "tcp" } return lookup(name, queryType, client, servAddr, suffix, !edns) } return response, nil } prometheus-0.16.2+ds/retrieval/discovery/ec2.go000066400000000000000000000107531265137125100214120ustar00rootroot00000000000000// Copyright 2015 The Prometheus Authors // Licensed under the Apache License, Version 2.0 (the "License"); // you may not use this file except in compliance with the License. // You may obtain a copy of the License at // // http://www.apache.org/licenses/LICENSE-2.0 // // Unless required by applicable law or agreed to in writing, software // distributed under the License is distributed on an "AS IS" BASIS, // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. // See the License for the specific language governing permissions and // limitations under the License.
package discovery import ( "fmt" "strings" "time" "github.com/aws/aws-sdk-go/aws" "github.com/aws/aws-sdk-go/aws/credentials" "github.com/aws/aws-sdk-go/aws/defaults" "github.com/prometheus/common/log" "github.com/prometheus/common/model" "github.com/aws/aws-sdk-go/service/ec2" "github.com/prometheus/prometheus/config" "github.com/prometheus/prometheus/util/strutil" ) const ( ec2Label = model.MetaLabelPrefix + "ec2_" ec2LabelAZ = ec2Label + "availability_zone" ec2LabelInstanceID = ec2Label + "instance_id" ec2LabelPublicDNS = ec2Label + "public_dns_name" ec2LabelPublicIP = ec2Label + "public_ip" ec2LabelPrivateIP = ec2Label + "private_ip" ec2LabelSubnetID = ec2Label + "subnet_id" ec2LabelTag = ec2Label + "tag_" ec2LabelVPCID = ec2Label + "vpc_id" subnetSeparator = "," ) // EC2Discovery periodically performs EC2-SD requests. It implements // the TargetProvider interface. type EC2Discovery struct { aws *aws.Config done chan struct{} interval time.Duration port int } // NewEC2Discovery returns a new EC2Discovery which periodically refreshes its targets. func NewEC2Discovery(conf *config.EC2SDConfig) *EC2Discovery { creds := credentials.NewStaticCredentials(conf.AccessKey, conf.SecretKey, "") if conf.AccessKey == "" && conf.SecretKey == "" { creds = defaults.DefaultChainCredentials } return &EC2Discovery{ aws: &aws.Config{ Region: &conf.Region, Credentials: creds, }, done: make(chan struct{}), interval: time.Duration(conf.RefreshInterval), port: conf.Port, } } // Run implements the TargetProvider interface. func (ed *EC2Discovery) Run(ch chan<- config.TargetGroup, done <-chan struct{}) { defer close(ch) ticker := time.NewTicker(ed.interval) defer ticker.Stop() // Get an initial set right away. tg, err := ed.refresh() if err != nil { log.Error(err) } else { ch <- *tg } for { select { case <-ticker.C: tg, err := ed.refresh() if err != nil { log.Error(err) } else { ch <- *tg } case <-done: return } } } // Sources implements the TargetProvider interface. 
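// For EC2 there is exactly one source: the configured region. The target
// group emitted by refresh carries the same region string as its Source.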
func (ed *EC2Discovery) Sources() []string { return []string{*ed.aws.Region} } func (ed *EC2Discovery) refresh() (*config.TargetGroup, error) { ec2s := ec2.New(ed.aws) tg := &config.TargetGroup{ Source: *ed.aws.Region, } if err := ec2s.DescribeInstancesPages(nil, func(p *ec2.DescribeInstancesOutput, lastPage bool) bool { for _, r := range p.Reservations { for _, inst := range r.Instances { if inst.PrivateIpAddress == nil { continue } labels := model.LabelSet{ ec2LabelInstanceID: model.LabelValue(*inst.InstanceId), } labels[ec2LabelPrivateIP] = model.LabelValue(*inst.PrivateIpAddress) addr := fmt.Sprintf("%s:%d", *inst.PrivateIpAddress, ed.port) labels[model.AddressLabel] = model.LabelValue(addr) if inst.PublicIpAddress != nil { labels[ec2LabelPublicIP] = model.LabelValue(*inst.PublicIpAddress) labels[ec2LabelPublicDNS] = model.LabelValue(*inst.PublicDnsName) } labels[ec2LabelAZ] = model.LabelValue(*inst.Placement.AvailabilityZone) if inst.VpcId != nil { labels[ec2LabelVPCID] = model.LabelValue(*inst.VpcId) subnetsMap := make(map[string]struct{}) for _, eni := range inst.NetworkInterfaces { subnetsMap[*eni.SubnetId] = struct{}{} } subnets := []string{} for k := range subnetsMap { subnets = append(subnets, k) } labels[ec2LabelSubnetID] = model.LabelValue( subnetSeparator + strings.Join(subnets, subnetSeparator) + subnetSeparator) } for _, t := range inst.Tags { name := strutil.SanitizeLabelName(*t.Key) labels[ec2LabelTag+model.LabelName(name)] = model.LabelValue(*t.Value) } tg.Targets = append(tg.Targets, labels) } } return true }); err != nil { return nil, fmt.Errorf("could not describe instances: %s", err) } return tg, nil } prometheus-0.16.2+ds/retrieval/discovery/file.go000066400000000000000000000153111265137125100216530ustar00rootroot00000000000000// Copyright 2015 The Prometheus Authors // Licensed under the Apache License, Version 2.0 (the "License"); // you may not use this file except in compliance with the License. // You may obtain a copy of the License at // // http://www.apache.org/licenses/LICENSE-2.0 // // Unless required by applicable law or agreed to in writing, software // distributed under the License is distributed on an "AS IS" BASIS, // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. // See the License for the specific language governing permissions and // limitations under the License. package discovery import ( "encoding/json" "fmt" "io/ioutil" "path/filepath" "strings" "time" "github.com/prometheus/common/log" "github.com/prometheus/common/model" "gopkg.in/fsnotify.v1" "gopkg.in/yaml.v2" "github.com/prometheus/prometheus/config" ) const fileSDFilepathLabel = model.MetaLabelPrefix + "filepath" // FileDiscovery provides service discovery functionality based // on files that contain target groups in JSON or YAML format. Refreshing // happens using file watches and periodic refreshes. type FileDiscovery struct { paths []string watcher *fsnotify.Watcher interval time.Duration // lastRefresh stores which files were found during the last refresh // and how many target groups they contained. // This is used to detect deleted target groups. lastRefresh map[string]int } // NewFileDiscovery returns a new file discovery for the given paths. func NewFileDiscovery(conf *config.FileSDConfig) *FileDiscovery { return &FileDiscovery{ paths: conf.Names, interval: time.Duration(conf.RefreshInterval), } } // Sources implements the TargetProvider interface. 
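// Every target group within a matched file is its own source, identified as
// "<path>:<index>" (see fileSource below), so a single file may contribute
// several sources.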
func (fd *FileDiscovery) Sources() []string { var srcs []string // As we allow multiple target groups per file we have no choice // but to parse them all. for _, p := range fd.listFiles() { tgroups, err := readFile(p) if err != nil { log.Errorf("Error reading file %q: %s", p, err) } for _, tg := range tgroups { srcs = append(srcs, tg.Source) } } return srcs } // listFiles returns a list of all files that match the configured patterns. func (fd *FileDiscovery) listFiles() []string { var paths []string for _, p := range fd.paths { files, err := filepath.Glob(p) if err != nil { log.Errorf("Error expanding glob %q: %s", p, err) continue } paths = append(paths, files...) } return paths } // watchFiles sets watches on all full paths or directories that were configured for // this file discovery. func (fd *FileDiscovery) watchFiles() { if fd.watcher == nil { panic("no watcher configured") } for _, p := range fd.paths { if idx := strings.LastIndex(p, "/"); idx > -1 { p = p[:idx] } else { p = "./" } if err := fd.watcher.Add(p); err != nil { log.Errorf("Error adding file watch for %q: %s", p, err) } } } // Run implements the TargetProvider interface. func (fd *FileDiscovery) Run(ch chan<- config.TargetGroup, done <-chan struct{}) { defer close(ch) defer fd.stop() watcher, err := fsnotify.NewWatcher() if err != nil { log.Errorf("Error creating file watcher: %s", err) return } fd.watcher = watcher fd.refresh(ch) ticker := time.NewTicker(fd.interval) defer ticker.Stop() for { // Stopping has priority over refreshing. Thus we wrap the actual select // clause to always catch done signals. select { case <-done: return default: select { case <-done: return case event := <-fd.watcher.Events: // fsnotify sometimes sends a bunch of events without name or operation. // It's unclear what they are and why they are sent - filter them out. if len(event.Name) == 0 { break } // Everything but a chmod requires rereading. if event.Op^fsnotify.Chmod == 0 { break } // Changes to a file can spawn various sequences of events with // different combinations of operations. For all practical purposes // this is inaccurate. // The most reliable solution is to reload everything if anything happens. fd.refresh(ch) case <-ticker.C: // Setting a new watch after an update might fail. Make sure we don't lose // those files forever. fd.refresh(ch) case err := <-fd.watcher.Errors: if err != nil { log.Errorf("Error on file watch: %s", err) } } } } } // stop shuts down the file watcher. func (fd *FileDiscovery) stop() { log.Debugf("Stopping file discovery for %s...", fd.paths) done := make(chan struct{}) defer close(done) // Closing the watcher will deadlock unless all events and errors are drained. go func() { for { select { case <-fd.watcher.Errors: case <-fd.watcher.Events: // Drain all events and errors. case <-done: return } } }() if err := fd.watcher.Close(); err != nil { log.Errorf("Error closing file watcher for %s: %s", fd.paths, err) } log.Debugf("File discovery for %s stopped.", fd.paths) } // refresh reads all files matching the discovery's patterns and sends the respective // updated target groups through the channel. func (fd *FileDiscovery) refresh(ch chan<- config.TargetGroup) { ref := map[string]int{} for _, p := range fd.listFiles() { tgroups, err := readFile(p) if err != nil { log.Errorf("Error reading file %q: %s", p, err) // Prevent deletion down below. ref[p] = fd.lastRefresh[p] continue } for _, tg := range tgroups { ch <- *tg } ref[p] = len(tgroups) } // Send empty updates for sources that disappeared. 
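// A file that vanished, or that now contains fewer groups than before,
// leaves stale sources behind; emitting a target group with the old source
// ID and no targets tells the consumer to drop those targets.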
for f, n := range fd.lastRefresh { m, ok := ref[f] if !ok || n > m { for i := m; i < n; i++ { ch <- config.TargetGroup{Source: fileSource(f, i)} } } } fd.lastRefresh = ref fd.watchFiles() } // fileSource returns a source ID for the i-th target group in the file. func fileSource(filename string, i int) string { return fmt.Sprintf("%s:%d", filename, i) } // readFile reads a JSON or YAML list of targets groups from the file, depending on its // file extension. It returns full configuration target groups. func readFile(filename string) ([]*config.TargetGroup, error) { content, err := ioutil.ReadFile(filename) if err != nil { return nil, err } var targetGroups []*config.TargetGroup switch ext := filepath.Ext(filename); strings.ToLower(ext) { case ".json": if err := json.Unmarshal(content, &targetGroups); err != nil { return nil, err } case ".yml", ".yaml": if err := yaml.Unmarshal(content, &targetGroups); err != nil { return nil, err } default: panic(fmt.Errorf("retrieval.FileDiscovery.readFile: unhandled file extension %q", ext)) } for i, tg := range targetGroups { tg.Source = fileSource(filename, i) if tg.Labels == nil { tg.Labels = model.LabelSet{} } tg.Labels[fileSDFilepathLabel] = model.LabelValue(filename) } return targetGroups, nil } prometheus-0.16.2+ds/retrieval/discovery/file_test.go000066400000000000000000000051401265137125100227110ustar00rootroot00000000000000package discovery import ( "fmt" "io" "os" "testing" "time" "github.com/prometheus/prometheus/config" ) func TestFileSD(t *testing.T) { defer os.Remove("fixtures/_test.yml") defer os.Remove("fixtures/_test.json") testFileSD(t, ".yml") testFileSD(t, ".json") } func testFileSD(t *testing.T, ext string) { // As interval refreshing is more of a fallback, we only want to test // whether file watches work as expected. var conf config.FileSDConfig conf.Names = []string{"fixtures/_*" + ext} conf.RefreshInterval = config.Duration(1 * time.Hour) var ( fsd = NewFileDiscovery(&conf) ch = make(chan config.TargetGroup) done = make(chan struct{}) ) go fsd.Run(ch, done) select { case <-time.After(25 * time.Millisecond): // Expected. case tg := <-ch: t.Fatalf("Unexpected target group in file discovery: %s", tg) } newf, err := os.Create("fixtures/_test" + ext) if err != nil { t.Fatal(err) } defer newf.Close() f, err := os.Open("fixtures/target_groups" + ext) if err != nil { t.Fatal(err) } defer f.Close() _, err = io.Copy(newf, f) if err != nil { t.Fatal(err) } newf.Close() // The files contain two target groups which are read and sent in order. select { case <-time.After(15 * time.Second): t.Fatalf("Expected new target group but got none") case tg := <-ch: if _, ok := tg.Labels["foo"]; !ok { t.Fatalf("Label not parsed") } if tg.String() != fmt.Sprintf("fixtures/_test%s:0", ext) { t.Fatalf("Unexpected target group %s", tg) } } select { case <-time.After(15 * time.Second): t.Fatalf("Expected new target group but got none") case tg := <-ch: if tg.String() != fmt.Sprintf("fixtures/_test%s:1", ext) { t.Fatalf("Unexpected target group %s", tg) } } // Based on unknown circumstances, sometimes fsnotify will trigger more events in // some runs (which might be empty, chains of different operations etc.). // We have to drain those (as the target manager would) to avoid deadlocking and must // not try to make sense of it all... drained := make(chan struct{}) go func() { for tg := range ch { // Below we will change the file to a bad syntax. Previously extracted target // groups must not be deleted via sending an empty target group. 
if len(tg.Targets) == 0 { t.Errorf("Unexpected empty target group received: %s", tg) } } close(drained) }() newf, err = os.Create("fixtures/_test.new") if err != nil { t.Fatal(err) } defer os.Remove(newf.Name()) if _, err := newf.Write([]byte("]gibberish\n][")); err != nil { t.Fatal(err) } newf.Close() os.Rename(newf.Name(), "fixtures/_test"+ext) close(done) <-drained } prometheus-0.16.2+ds/retrieval/discovery/fixtures/000077500000000000000000000000001265137125100222555ustar00rootroot00000000000000prometheus-0.16.2+ds/retrieval/discovery/fixtures/target_groups.json000066400000000000000000000002021265137125100260270ustar00rootroot00000000000000[ { "targets": ["localhost:9090", "example.org:443"], "labels": { "foo": "bar" } }, { "targets": ["my.domain"] } ] prometheus-0.16.2+ds/retrieval/discovery/fixtures/target_groups.yml000066400000000000000000000001421265137125100256620ustar00rootroot00000000000000- targets: ['localhost:9090', 'example.org:443'] labels: foo: bar - targets: ['my.domain'] prometheus-0.16.2+ds/retrieval/discovery/kubernetes.go000066400000000000000000000020421265137125100231000ustar00rootroot00000000000000// Copyright 2015 The Prometheus Authors // Licensed under the Apache License, Version 2.0 (the "License"); // you may not use this file except in compliance with the License. // You may obtain a copy of the License at // // http://www.apache.org/licenses/LICENSE-2.0 // // Unless required by applicable law or agreed to in writing, software // distributed under the License is distributed on an "AS IS" BASIS, // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. // See the License for the specific language governing permissions and // limitations under the License. package discovery import ( "github.com/prometheus/prometheus/config" "github.com/prometheus/prometheus/retrieval/discovery/kubernetes" ) // NewKubernetesDiscovery creates a Kubernetes service discovery based on the passed-in configuration. func NewKubernetesDiscovery(conf *config.KubernetesSDConfig) (*kubernetes.Discovery, error) { kd := &kubernetes.Discovery{ Conf: conf, } err := kd.Initialize() if err != nil { return nil, err } return kd, nil } prometheus-0.16.2+ds/retrieval/discovery/kubernetes/000077500000000000000000000000001265137125100225535ustar00rootroot00000000000000prometheus-0.16.2+ds/retrieval/discovery/kubernetes/discovery.go000066400000000000000000000460721265137125100251220ustar00rootroot00000000000000// Copyright 2015 The Prometheus Authors // Licensed under the Apache License, Version 2.0 (the "License"); // you may not use this file except in compliance with the License. // You may obtain a copy of the License at // // http://www.apache.org/licenses/LICENSE-2.0 // // Unless required by applicable law or agreed to in writing, software // distributed under the License is distributed on an "AS IS" BASIS, // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. // See the License for the specific language governing permissions and // limitations under the License. package kubernetes import ( "encoding/json" "fmt" "io/ioutil" "net" "net/http" "os" "sync" "time" "github.com/prometheus/common/log" "github.com/prometheus/common/model" "github.com/prometheus/prometheus/config" "github.com/prometheus/prometheus/util/httputil" "github.com/prometheus/prometheus/util/strutil" ) const ( sourceServicePrefix = "services" // kubernetesMetaLabelPrefix is the meta prefix used for all meta labels. // in this discovery. 
metaLabelPrefix = model.MetaLabelPrefix + "kubernetes_" // serviceNamespaceLabel is the name for the label containing a target's service namespace. serviceNamespaceLabel = metaLabelPrefix + "service_namespace" // serviceNameLabel is the name for the label containing a target's service name. serviceNameLabel = metaLabelPrefix + "service_name" // nodeLabelPrefix is the prefix for the node labels. nodeLabelPrefix = metaLabelPrefix + "node_label_" // serviceLabelPrefix is the prefix for the service labels. serviceLabelPrefix = metaLabelPrefix + "service_label_" // serviceAnnotationPrefix is the prefix for the service annotations. serviceAnnotationPrefix = metaLabelPrefix + "service_annotation_" // nodesTargetGroupName is the name given to the target group for nodes. nodesTargetGroupName = "nodes" // apiServersTargetGroupName is the name given to the target group for API servers. apiServersTargetGroupName = "apiServers" // roleLabel is the name for the label containing a target's role. roleLabel = metaLabelPrefix + "role" serviceAccountToken = "/var/run/secrets/kubernetes.io/serviceaccount/token" serviceAccountCACert = "/var/run/secrets/kubernetes.io/serviceaccount/ca.crt" apiVersion = "v1" apiPrefix = "/api/" + apiVersion nodesURL = apiPrefix + "/nodes" servicesURL = apiPrefix + "/services" endpointsURL = apiPrefix + "/endpoints" serviceEndpointsURL = apiPrefix + "/namespaces/%s/endpoints/%s" ) // Discovery implements a TargetProvider for Kubernetes services. type Discovery struct { client *http.Client Conf *config.KubernetesSDConfig apiServers []config.URL apiServersMu sync.RWMutex nodesResourceVersion string servicesResourceVersion string endpointsResourceVersion string nodes map[string]*Node services map[string]map[string]*Service nodesMu sync.RWMutex servicesMu sync.RWMutex runDone chan struct{} } // Initialize sets up the discovery for usage. func (kd *Discovery) Initialize() error { client, err := newKubernetesHTTPClient(kd.Conf) if err != nil { return err } kd.apiServers = kd.Conf.APIServers kd.client = client kd.nodes = map[string]*Node{} kd.services = map[string]map[string]*Service{} kd.runDone = make(chan struct{}) return nil } // Sources implements the TargetProvider interface. func (kd *Discovery) Sources() []string { sourceNames := make([]string, 0, len(kd.apiServers)) for _, apiServer := range kd.apiServers { sourceNames = append(sourceNames, apiServersTargetGroupName+":"+apiServer.Host) } res, err := kd.queryAPIServerPath(nodesURL) if err != nil { // If we can't list nodes then we can't watch them. Assume this is a misconfiguration // & log & return empty. log.Errorf("Unable to list Kubernetes nodes: %s", err) return []string{} } defer res.Body.Close() if res.StatusCode != http.StatusOK { log.Errorf("Unable to list Kubernetes nodes. Unexpected response: %d %s", res.StatusCode, res.Status) return []string{} } var nodes NodeList if err := json.NewDecoder(res.Body).Decode(&nodes); err != nil { body, _ := ioutil.ReadAll(res.Body) log.Errorf("Unable to list Kubernetes nodes. Unexpected response body: %s", string(body)) return []string{} } kd.nodesMu.Lock() defer kd.nodesMu.Unlock() kd.nodesResourceVersion = nodes.ResourceVersion for idx, node := range nodes.Items { sourceNames = append(sourceNames, nodesTargetGroupName+":"+node.ObjectMeta.Name) kd.nodes[node.ObjectMeta.Name] = &nodes.Items[idx] } res, err = kd.queryAPIServerPath(servicesURL) if err != nil { // If we can't list services then we can't watch them. Assume this is a misconfiguration // & log & return empty. 
log.Errorf("Unable to list Kubernetes services: %s", err) return []string{} } defer res.Body.Close() if res.StatusCode != http.StatusOK { log.Errorf("Unable to list Kubernetes services. Unexpected response: %d %s", res.StatusCode, res.Status) return []string{} } var services ServiceList if err := json.NewDecoder(res.Body).Decode(&services); err != nil { body, _ := ioutil.ReadAll(res.Body) log.Errorf("Unable to list Kubernetes services. Unexpected response body: %s", string(body)) return []string{} } kd.servicesMu.Lock() defer kd.servicesMu.Unlock() kd.servicesResourceVersion = services.ResourceVersion for idx, service := range services.Items { sourceNames = append(sourceNames, serviceSource(&service)) namespace, ok := kd.services[service.ObjectMeta.Namespace] if !ok { namespace = map[string]*Service{} kd.services[service.ObjectMeta.Namespace] = namespace } namespace[service.ObjectMeta.Name] = &services.Items[idx] } return sourceNames } // Run implements the TargetProvider interface. func (kd *Discovery) Run(ch chan<- config.TargetGroup, done <-chan struct{}) { defer close(ch) if tg := kd.updateAPIServersTargetGroup(); tg != nil { select { case ch <- *tg: case <-done: return } } if tg := kd.updateNodesTargetGroup(); tg != nil { select { case ch <- *tg: case <-done: return } } for _, ns := range kd.services { for _, service := range ns { tg := kd.addService(service) if tg == nil { continue } select { case ch <- *tg: case <-done: return } } } retryInterval := time.Duration(kd.Conf.RetryInterval) update := make(chan interface{}, 10) go kd.watchNodes(update, done, retryInterval) go kd.watchServices(update, done, retryInterval) go kd.watchServiceEndpoints(update, done, retryInterval) var tg *config.TargetGroup for { select { case <-done: return case event := <-update: switch obj := event.(type) { case *nodeEvent: kd.updateNode(obj.Node, obj.EventType) tg = kd.updateNodesTargetGroup() case *serviceEvent: tg = kd.updateService(obj.Service, obj.EventType) case *endpointsEvent: tg = kd.updateServiceEndpoints(obj.Endpoints, obj.EventType) } } if tg == nil { continue } select { case ch <- *tg: case <-done: return } } } func (kd *Discovery) queryAPIServerPath(path string) (*http.Response, error) { req, err := http.NewRequest("GET", path, nil) if err != nil { return nil, err } return kd.queryAPIServerReq(req) } func (kd *Discovery) queryAPIServerReq(req *http.Request) (*http.Response, error) { // Lock in case we need to rotate API servers to request. kd.apiServersMu.Lock() defer kd.apiServersMu.Unlock() var lastErr error for i := 0; i < len(kd.apiServers); i++ { cloneReq := *req cloneReq.URL.Host = kd.apiServers[0].Host cloneReq.URL.Scheme = kd.apiServers[0].Scheme res, err := kd.client.Do(&cloneReq) if err == nil { return res, nil } lastErr = err kd.rotateAPIServers() } return nil, fmt.Errorf("Unable to query any API servers: %v", lastErr) } func (kd *Discovery) rotateAPIServers() { if len(kd.apiServers) > 1 { kd.apiServers = append(kd.apiServers[1:], kd.apiServers[0]) } } func (kd *Discovery) updateAPIServersTargetGroup() *config.TargetGroup { tg := &config.TargetGroup{ Source: apiServersTargetGroupName, Labels: model.LabelSet{ roleLabel: model.LabelValue("apiserver"), }, } for _, apiServer := range kd.apiServers { apiServerAddress := apiServer.Host _, _, err := net.SplitHostPort(apiServerAddress) // If error then no port is specified - use default for scheme. 
if err != nil { switch apiServer.Scheme { case "http": apiServerAddress = net.JoinHostPort(apiServerAddress, "80") case "https": apiServerAddress = net.JoinHostPort(apiServerAddress, "443") } } t := model.LabelSet{ model.AddressLabel: model.LabelValue(apiServerAddress), model.SchemeLabel: model.LabelValue(apiServer.Scheme), } tg.Targets = append(tg.Targets, t) } return tg } func (kd *Discovery) updateNodesTargetGroup() *config.TargetGroup { kd.nodesMu.Lock() defer kd.nodesMu.Unlock() tg := &config.TargetGroup{ Source: nodesTargetGroupName, Labels: model.LabelSet{ roleLabel: model.LabelValue("node"), }, } // Now let's loop through the nodes & add them to the target group with appropriate labels. for nodeName, node := range kd.nodes { address := fmt.Sprintf("%s:%d", node.Status.Addresses[0].Address, kd.Conf.KubeletPort) t := model.LabelSet{ model.AddressLabel: model.LabelValue(address), model.InstanceLabel: model.LabelValue(nodeName), } for k, v := range node.ObjectMeta.Labels { labelName := strutil.SanitizeLabelName(nodeLabelPrefix + k) t[model.LabelName(labelName)] = model.LabelValue(v) } tg.Targets = append(tg.Targets, t) } return tg } func (kd *Discovery) updateNode(node *Node, eventType EventType) { kd.nodesMu.Lock() defer kd.nodesMu.Unlock() updatedNodeName := node.ObjectMeta.Name switch eventType { case deleted: // Deleted - remove from nodes map. delete(kd.nodes, updatedNodeName) case added, modified: // Added/Modified - update the node in the nodes map. kd.nodes[updatedNodeName] = node } } // watchNodes watches nodes as they come & go. func (kd *Discovery) watchNodes(events chan interface{}, done <-chan struct{}, retryInterval time.Duration) { until(func() { req, err := http.NewRequest("GET", nodesURL, nil) if err != nil { log.Errorf("Failed to watch nodes: %s", err) return } values := req.URL.Query() values.Add("watch", "true") values.Add("resourceVersion", kd.nodesResourceVersion) req.URL.RawQuery = values.Encode() res, err := kd.queryAPIServerReq(req) if err != nil { log.Errorf("Failed to watch nodes: %s", err) return } defer res.Body.Close() if res.StatusCode != http.StatusOK { log.Errorf("Failed to watch nodes: %d", res.StatusCode) return } d := json.NewDecoder(res.Body) for { var event nodeEvent if err := d.Decode(&event); err != nil { log.Errorf("Failed to watch nodes: %s", err) return } kd.nodesResourceVersion = event.Node.ObjectMeta.ResourceVersion select { case events <- &event: case <-done: } } }, retryInterval, done) } // watchServices watches services as they come & go. 
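// It uses the Kubernetes watch API: a GET request with "watch=true" and the
// last seen resourceVersion returns a stream of JSON-encoded serviceEvent
// objects, which is decoded until an error forces a retry (rate-limited by
// retryInterval via until).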
func (kd *Discovery) watchServices(events chan interface{}, done <-chan struct{}, retryInterval time.Duration) { until(func() { req, err := http.NewRequest("GET", servicesURL, nil) if err != nil { log.Errorf("Failed to watch services: %s", err) return } values := req.URL.Query() values.Add("watch", "true") values.Add("resourceVersion", kd.servicesResourceVersion) req.URL.RawQuery = values.Encode() res, err := kd.queryAPIServerReq(req) if err != nil { log.Errorf("Failed to watch services: %s", err) return } defer res.Body.Close() if res.StatusCode != http.StatusOK { log.Errorf("Failed to watch services: %d", res.StatusCode) return } d := json.NewDecoder(res.Body) for { var event serviceEvent if err := d.Decode(&event); err != nil { log.Errorf("Unable to watch services: %s", err) return } kd.servicesResourceVersion = event.Service.ObjectMeta.ResourceVersion select { case events <- &event: case <-done: } } }, retryInterval, done) } func (kd *Discovery) updateService(service *Service, eventType EventType) *config.TargetGroup { kd.servicesMu.Lock() defer kd.servicesMu.Unlock() var ( name = service.ObjectMeta.Name namespace = service.ObjectMeta.Namespace _, exists = kd.services[namespace][name] ) switch eventType { case deleted: if exists { return kd.deleteService(service) } case added, modified: return kd.addService(service) } return nil } func (kd *Discovery) deleteService(service *Service) *config.TargetGroup { tg := &config.TargetGroup{Source: serviceSource(service)} delete(kd.services[service.ObjectMeta.Namespace], service.ObjectMeta.Name) if len(kd.services[service.ObjectMeta.Namespace]) == 0 { delete(kd.services, service.ObjectMeta.Namespace) } return tg } func (kd *Discovery) addService(service *Service) *config.TargetGroup { namespace, ok := kd.services[service.ObjectMeta.Namespace] if !ok { namespace = map[string]*Service{} kd.services[service.ObjectMeta.Namespace] = namespace } namespace[service.ObjectMeta.Name] = service endpointURL := fmt.Sprintf(serviceEndpointsURL, service.ObjectMeta.Namespace, service.ObjectMeta.Name) res, err := kd.queryAPIServerPath(endpointURL) if err != nil { log.Errorf("Error getting service endpoints: %s", err) return nil } defer res.Body.Close() if res.StatusCode != http.StatusOK { log.Errorf("Failed to get service endpoints: %d", res.StatusCode) return nil } var eps Endpoints if err := json.NewDecoder(res.Body).Decode(&eps); err != nil { log.Errorf("Error getting service endpoints: %s", err) return nil } return kd.updateServiceTargetGroup(service, &eps) } func (kd *Discovery) updateServiceTargetGroup(service *Service, eps *Endpoints) *config.TargetGroup { tg := &config.TargetGroup{ Source: serviceSource(service), Labels: model.LabelSet{ serviceNamespaceLabel: model.LabelValue(service.ObjectMeta.Namespace), serviceNameLabel: model.LabelValue(service.ObjectMeta.Name), }, } for k, v := range service.ObjectMeta.Labels { labelName := strutil.SanitizeLabelName(serviceLabelPrefix + k) tg.Labels[model.LabelName(labelName)] = model.LabelValue(v) } for k, v := range service.ObjectMeta.Annotations { labelName := strutil.SanitizeLabelName(serviceAnnotationPrefix + k) tg.Labels[model.LabelName(labelName)] = model.LabelValue(v) } serviceAddress := service.ObjectMeta.Name + "." + service.ObjectMeta.Namespace + ".svc" // Append the first TCP service port if one exists. 
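// For example, a service "web" in namespace "frontend" (hypothetical names)
// exposing TCP port 8080 yields the target address "web.frontend.svc:8080".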
for _, port := range service.Spec.Ports { if port.Protocol == ProtocolTCP { serviceAddress += fmt.Sprintf(":%d", port.Port) break } } t := model.LabelSet{ model.AddressLabel: model.LabelValue(serviceAddress), roleLabel: model.LabelValue("service"), } tg.Targets = append(tg.Targets, t) // Now let's loop through the endpoints & add them to the target group with appropriate labels. for _, ss := range eps.Subsets { epPort := ss.Ports[0].Port for _, addr := range ss.Addresses { ipAddr := addr.IP if len(ipAddr) == net.IPv6len { ipAddr = "[" + ipAddr + "]" } address := fmt.Sprintf("%s:%d", ipAddr, epPort) t := model.LabelSet{ model.AddressLabel: model.LabelValue(address), roleLabel: model.LabelValue("endpoint"), } tg.Targets = append(tg.Targets, t) } } return tg } // watchServiceEndpoints watches service endpoints as they come & go. func (kd *Discovery) watchServiceEndpoints(events chan interface{}, done <-chan struct{}, retryInterval time.Duration) { until(func() { req, err := http.NewRequest("GET", endpointsURL, nil) if err != nil { log.Errorf("Failed to watch service endpoints: %s", err) return } values := req.URL.Query() values.Add("watch", "true") values.Add("resourceVersion", kd.servicesResourceVersion) req.URL.RawQuery = values.Encode() res, err := kd.queryAPIServerReq(req) if err != nil { log.Errorf("Failed to watch service endpoints: %s", err) return } defer res.Body.Close() if res.StatusCode != http.StatusOK { log.Errorf("Failed to watch service endpoints: %d", res.StatusCode) return } d := json.NewDecoder(res.Body) for { var event endpointsEvent if err := d.Decode(&event); err != nil { log.Errorf("Unable to watch service endpoints: %s", err) return } kd.servicesResourceVersion = event.Endpoints.ObjectMeta.ResourceVersion select { case events <- &event: case <-done: } } }, retryInterval, done) } func (kd *Discovery) updateServiceEndpoints(endpoints *Endpoints, eventType EventType) *config.TargetGroup { kd.servicesMu.Lock() defer kd.servicesMu.Unlock() serviceNamespace := endpoints.ObjectMeta.Namespace serviceName := endpoints.ObjectMeta.Name if service, ok := kd.services[serviceNamespace][serviceName]; ok { return kd.updateServiceTargetGroup(service, endpoints) } return nil } func newKubernetesHTTPClient(conf *config.KubernetesSDConfig) (*http.Client, error) { bearerTokenFile := conf.BearerTokenFile caFile := conf.TLSConfig.CAFile if conf.InCluster { if len(bearerTokenFile) == 0 { bearerTokenFile = serviceAccountToken } if len(caFile) == 0 { // With recent versions, the CA certificate is mounted as a secret // but we need to handle older versions too. In this case, don't // set the CAFile & the configuration will have to use InsecureSkipVerify. if _, err := os.Stat(serviceAccountCACert); err == nil { caFile = serviceAccountCACert } } } tlsOpts := httputil.TLSOptions{ InsecureSkipVerify: conf.TLSConfig.InsecureSkipVerify, CAFile: caFile, CertFile: conf.TLSConfig.CertFile, KeyFile: conf.TLSConfig.KeyFile, } tlsConfig, err := httputil.NewTLSConfig(tlsOpts) if err != nil { return nil, err } var rt http.RoundTripper = &http.Transport{ Dial: func(netw, addr string) (c net.Conn, err error) { c, err = net.DialTimeout(netw, addr, time.Duration(conf.RequestTimeout)) return }, TLSClientConfig: tlsConfig, } // If a bearer token is provided, create a round tripper that will set the // Authorization header correctly on each request. 
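// Round trippers compose by wrapping: the bearer-auth round tripper wraps
// the base transport, and a basic-auth round tripper may wrap that in turn,
// so each request passes through every configured authentication layer.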
bearerToken := conf.BearerToken if len(bearerToken) == 0 && len(bearerTokenFile) > 0 { b, err := ioutil.ReadFile(bearerTokenFile) if err != nil { return nil, fmt.Errorf("unable to read bearer token file %s: %s", bearerTokenFile, err) } bearerToken = string(b) } if len(bearerToken) > 0 { rt = httputil.NewBearerAuthRoundTripper(bearerToken, rt) } if conf.BasicAuth != nil { rt = httputil.NewBasicAuthRoundTripper(conf.BasicAuth.Username, conf.BasicAuth.Password, rt) } return &http.Client{ Transport: rt, }, nil } func serviceSource(service *Service) string { return sourceServicePrefix + ":" + service.ObjectMeta.Namespace + "/" + service.ObjectMeta.Name } // Until loops until stop channel is closed, running f every period. // f may not be invoked if stop channel is already closed. func until(f func(), period time.Duration, stopCh <-chan struct{}) { select { case <-stopCh: return default: f() } for { select { case <-stopCh: return case <-time.After(period): f() } } } prometheus-0.16.2+ds/retrieval/discovery/kubernetes/types.go000066400000000000000000000252601265137125100242530ustar00rootroot00000000000000// Copyright 2015 The Prometheus Authors // Licensed under the Apache License, Version 2.0 (the "License"); // you may not use this file except in compliance with the License. // You may obtain a copy of the License at // // http://www.apache.org/licenses/LICENSE-2.0 // // Unless required by applicable law or agreed to in writing, software // distributed under the License is distributed on an "AS IS" BASIS, // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. // See the License for the specific language governing permissions and // limitations under the License. package kubernetes type EventType string const ( added EventType = "ADDED" modified EventType = "MODIFIED" deleted EventType = "DELETED" ) type nodeEvent struct { EventType EventType `json:"type"` Node *Node `json:"object"` } type serviceEvent struct { EventType EventType `json:"type"` Service *Service `json:"object"` } type endpointsEvent struct { EventType EventType `json:"type"` Endpoints *Endpoints `json:"object"` } // From here down types are copied from // https://github.com/GoogleCloudPlatform/kubernetes/blob/master/pkg/api/v1/types.go // with all currently irrelevant types/fields stripped out. This removes the // need for any kubernetes dependencies, with the drawback of having to keep // this file up to date. // ListMeta describes metadata that synthetic resources must have, including lists and // various status objects. type ListMeta struct { // An opaque value that represents the version of this response for use with optimistic // concurrency and change monitoring endpoints. Clients must treat these values as opaque // and values may only be valid for a particular resource or set of resources. Only servers // will generate resource versions. ResourceVersion string `json:"resourceVersion,omitempty" description:"string that identifies the internal version of this object that can be used by clients to determine when objects have changed; populated by the system, read-only; value must be treated as opaque by clients and passed unmodified back to the server: http://releases.k8s.io/HEAD/docs/devel/api-conventions.md#concurrency-control-and-consistency"` } // ObjectMeta is metadata that all persisted resources must have, which includes all objects // users must create. type ObjectMeta struct { // Name is unique within a namespace. 
Name is required when creating resources, although // some resources may allow a client to request the generation of an appropriate name // automatically. Name is primarily intended for creation idempotence and configuration // definition. Name string `json:"name,omitempty" description:"string that identifies an object. Must be unique within a namespace; cannot be updated; see http://releases.k8s.io/HEAD/docs/user-guide/identifiers.md#names"` // Namespace defines the space within which name must be unique. An empty namespace is // equivalent to the "default" namespace, but "default" is the canonical representation. // Not all objects are required to be scoped to a namespace - the value of this field for // those objects will be empty. Namespace string `json:"namespace,omitempty" description:"namespace of the object; must be a DNS_LABEL; cannot be updated; see http://releases.k8s.io/HEAD/docs/user-guide/namespaces.md"` ResourceVersion string `json:"resourceVersion,omitempty" description:"string that identifies the internal version of this object that can be used by clients to determine when objects have changed; populated by the system, read-only; value must be treated as opaque by clients and passed unmodified back to the server: http://releases.k8s.io/HEAD/docs/devel/api-conventions.md#concurrency-control-and-consistency"` // TODO: replace map[string]string with labels.LabelSet type Labels map[string]string `json:"labels,omitempty" description:"map of string keys and values that can be used to organize and categorize objects; may match selectors of replication controllers and services; see http://releases.k8s.io/HEAD/docs/user-guide/labels.md"` // Annotations are unstructured key value data stored with a resource that may be set by // external tooling. They are not queryable and should be preserved when modifying // objects. Annotations map[string]string `json:"annotations,omitempty" description:"map of string keys and values that can be used by external tooling to store and retrieve arbitrary metadata about objects; see http://releases.k8s.io/HEAD/docs/user-guide/annotations.md"` } // Protocol defines network protocols supported for things like container ports. type Protocol string const ( // ProtocolTCP is the TCP protocol. ProtocolTCP Protocol = "TCP" // ProtocolUDP is the UDP protocol. ProtocolUDP Protocol = "UDP" ) const ( // NamespaceAll is the default argument to specify on a context when you want to list or filter resources across all namespaces NamespaceAll string = "" ) // Container represents a single container that is expected to be run on the host. type Container struct { // Required: This must be a DNS_LABEL. Each container in a pod must // have a unique name. Name string `json:"name" description:"name of the container; must be a DNS_LABEL and unique within the pod; cannot be updated"` // Optional. Image string `json:"image,omitempty" description:"Docker image name; see http://releases.k8s.io/HEAD/docs/user-guide/images.md"` } // Service is a named abstraction of software service (for example, mysql) consisting of local port // (for example 3306) that the proxy listens on, and the selector that determines which pods // will answer requests sent through the proxy. type Service struct { ObjectMeta `json:"metadata,omitempty" description:"standard object metadata; see http://releases.k8s.io/HEAD/docs/devel/api-conventions.md#metadata"` // Spec defines the behavior of a service. 
// http://releases.k8s.io/HEAD/docs/devel/api-conventions.md#spec-and-status Spec ServiceSpec `json:"spec,omitempty"` } // ServiceSpec describes the attributes that a user creates on a service. type ServiceSpec struct { // The list of ports that are exposed by this service. // More info: http://releases.k8s.io/HEAD/docs/user-guide/services.md#virtual-ips-and-service-proxies Ports []ServicePort `json:"ports"` } // ServicePort contains information on service's port. type ServicePort struct { // The IP protocol for this port. Supports "TCP" and "UDP". // Default is TCP. Protocol Protocol `json:"protocol,omitempty"` // The port that will be exposed by this service. Port int32 `json:"port"` } // ServiceList holds a list of services. type ServiceList struct { ListMeta `json:"metadata,omitempty" description:"standard list metadata; see http://releases.k8s.io/HEAD/docs/devel/api-conventions.md#metadata"` Items []Service `json:"items" description:"list of services"` } // Endpoints is a collection of endpoints that implement the actual service. Example: // Name: "mysvc", // Subsets: [ // { // Addresses: [{"ip": "10.10.1.1"}, {"ip": "10.10.2.2"}], // Ports: [{"name": "a", "port": 8675}, {"name": "b", "port": 309}] // }, // { // Addresses: [{"ip": "10.10.3.3"}], // Ports: [{"name": "a", "port": 93}, {"name": "b", "port": 76}] // }, // ] type Endpoints struct { ObjectMeta `json:"metadata,omitempty" description:"standard object metadata; see http://releases.k8s.io/HEAD/docs/devel/api-conventions.md#metadata"` // The set of all endpoints is the union of all subsets. Subsets []EndpointSubset `json:"subsets" description:"sets of addresses and ports that comprise a service"` } // EndpointSubset is a group of addresses with a common set of ports. The // expanded set of endpoints is the Cartesian product of Addresses x Ports. // For example, given: // { // Addresses: [{"ip": "10.10.1.1"}, {"ip": "10.10.2.2"}], // Ports: [{"name": "a", "port": 8675}, {"name": "b", "port": 309}] // } // The resulting set of endpoints can be viewed as: // a: [ 10.10.1.1:8675, 10.10.2.2:8675 ], // b: [ 10.10.1.1:309, 10.10.2.2:309 ] type EndpointSubset struct { Addresses []EndpointAddress `json:"addresses,omitempty" description:"IP addresses which offer the related ports"` Ports []EndpointPort `json:"ports,omitempty" description:"port numbers available on the related IP addresses"` } // EndpointAddress is a tuple that describes a single IP address. type EndpointAddress struct { // The IP of this endpoint. // TODO: This should allow hostname or IP, see #4447. IP string `json:"ip" description:"IP address of the endpoint"` } // EndpointPort is a tuple that describes a single port. type EndpointPort struct { // The port number. Port int `json:"port" description:"port number of the endpoint"` // The IP protocol for this port. Protocol Protocol `json:"protocol,omitempty" description:"protocol for this port; must be UDP or TCP; TCP if unspecified"` } // EndpointsList is a list of endpoints. type EndpointsList struct { ListMeta `json:"metadata,omitempty" description:"standard list metadata; see http://releases.k8s.io/HEAD/docs/devel/api-conventions.md#metadata"` Items []Endpoints `json:"items" description:"list of endpoints"` } // NodeStatus is information about the current status of a node. type NodeStatus struct { // Queried from cloud provider, if available. 
Addresses []NodeAddress `json:"addresses,omitempty" description:"list of addresses reachable to the node; see http://releases.k8s.io/HEAD/docs/admin/node.md#node-addresses" patchStrategy:"merge" patchMergeKey:"type"` } type NodeAddressType string // These are valid address type of node. const ( NodeHostName NodeAddressType = "Hostname" NodeExternalIP NodeAddressType = "ExternalIP" NodeInternalIP NodeAddressType = "InternalIP" ) type NodeAddress struct { Type NodeAddressType `json:"type" description:"node address type, one of Hostname, ExternalIP or InternalIP"` Address string `json:"address" description:"the node address"` } // Node is a worker node in Kubernetes, formerly known as minion. // Each node will have a unique identifier in the cache (i.e. in etcd). type Node struct { ObjectMeta `json:"metadata,omitempty" description:"standard object metadata; see http://releases.k8s.io/HEAD/docs/devel/api-conventions.md#metadata"` // Status describes the current status of a Node Status NodeStatus `json:"status,omitempty" description:"most recently observed status of the node; populated by the system, read-only; http://releases.k8s.io/HEAD/docs/devel/api-conventions.md#spec-and-status"` } // NodeList is the whole list of all Nodes which have been registered with master. type NodeList struct { ListMeta `json:"metadata,omitempty" description:"standard list metadata; see http://releases.k8s.io/HEAD/docs/devel/api-conventions.md#metadata"` Items []Node `json:"items" description:"list of nodes"` } prometheus-0.16.2+ds/retrieval/discovery/marathon.go000066400000000000000000000054211265137125100225460ustar00rootroot00000000000000// Copyright 2015 The Prometheus Authors // Licensed under the Apache License, Version 2.0 (the "License"); // you may not use this file except in compliance with the License. // You may obtain a copy of the License at // // http://www.apache.org/licenses/LICENSE-2.0 // // Unless required by applicable law or agreed to in writing, software // distributed under the License is distributed on an "AS IS" BASIS, // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. // See the License for the specific language governing permissions and // limitations under the License. package discovery import ( "time" "github.com/prometheus/common/log" "github.com/prometheus/prometheus/config" "github.com/prometheus/prometheus/retrieval/discovery/marathon" ) // MarathonDiscovery provides service discovery based on a Marathon instance. type MarathonDiscovery struct { servers []string refreshInterval time.Duration done chan struct{} lastRefresh map[string]*config.TargetGroup client marathon.AppListClient } // NewMarathonDiscovery creates a new Marathon based discovery. func NewMarathonDiscovery(conf *config.MarathonSDConfig) *MarathonDiscovery { return &MarathonDiscovery{ servers: conf.Servers, refreshInterval: time.Duration(conf.RefreshInterval), done: make(chan struct{}), client: marathon.FetchMarathonApps, } } // Sources implements the TargetProvider interface. func (md *MarathonDiscovery) Sources() []string { var sources []string tgroups, err := md.fetchTargetGroups() if err == nil { for source := range tgroups { sources = append(sources, source) } } return sources } // Run implements the TargetProvider interface. 
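// A hedged usage sketch (not part of the original file) of how a caller might
// drive this provider; conf and handleGroup are hypothetical:
//
//	md := NewMarathonDiscovery(conf)
//	ch := make(chan config.TargetGroup)
//	done := make(chan struct{})
//	go md.Run(ch, done)
//	for tg := range ch {
//		handleGroup(&tg)
//	}
//
// Closing done stops the periodic refresh; Run closes ch on the way out.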
func (md *MarathonDiscovery) Run(ch chan<- config.TargetGroup, done <-chan struct{}) { defer close(ch) for { select { case <-done: return case <-time.After(md.refreshInterval): err := md.updateServices(ch) if err != nil { log.Errorf("Error while updating services: %s", err) } } } } func (md *MarathonDiscovery) updateServices(ch chan<- config.TargetGroup) error { targetMap, err := md.fetchTargetGroups() if err != nil { return err } // Update services which are still present for _, tg := range targetMap { ch <- *tg } // Remove services that have disappeared for source := range md.lastRefresh { _, ok := targetMap[source] if !ok { log.Debugf("Removing group for %s", source) ch <- config.TargetGroup{Source: source} } } md.lastRefresh = targetMap return nil } func (md *MarathonDiscovery) fetchTargetGroups() (map[string]*config.TargetGroup, error) { url := marathon.RandomAppsURL(md.servers) apps, err := md.client(url) if err != nil { return nil, err } groups := marathon.AppsToTargetGroups(apps) return groups, nil } prometheus-0.16.2+ds/retrieval/discovery/marathon/000077500000000000000000000000001265137125100222155ustar00rootroot00000000000000prometheus-0.16.2+ds/retrieval/discovery/marathon/client.go000066400000000000000000000024071265137125100240250ustar00rootroot00000000000000// Copyright 2015 The Prometheus Authors // Licensed under the Apache License, Version 2.0 (the "License"); // you may not use this file except in compliance with the License. // You may obtain a copy of the License at // // http://www.apache.org/licenses/LICENSE-2.0 // // Unless required by applicable law or agreed to in writing, software // distributed under the License is distributed on an "AS IS" BASIS, // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. // See the License for the specific language governing permissions and // limitations under the License. package marathon import ( "encoding/json" "io/ioutil" "net/http" ) // AppListClient defines a function that can be used to get an application list from marathon. type AppListClient func(url string) (*AppList, error) // FetchMarathonApps requests a list of applications from a marathon server. func FetchMarathonApps(url string) (*AppList, error) { resp, err := http.Get(url) if err != nil { return nil, err } // Close the body to avoid leaking the underlying connection. defer resp.Body.Close() body, err := ioutil.ReadAll(resp.Body) if err != nil { return nil, err } return parseAppJSON(body) } func parseAppJSON(body []byte) (*AppList, error) { apps := &AppList{} err := json.Unmarshal(body, apps) if err != nil { return nil, err } return apps, nil } prometheus-0.16.2+ds/retrieval/discovery/marathon/constants.go000066400000000000000000000024071265137125100245630ustar00rootroot00000000000000// Copyright 2015 The Prometheus Authors // Licensed under the Apache License, Version 2.0 (the "License"); // you may not use this file except in compliance with the License. // You may obtain a copy of the License at // // http://www.apache.org/licenses/LICENSE-2.0 // // Unless required by applicable law or agreed to in writing, software // distributed under the License is distributed on an "AS IS" BASIS, // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. // See the License for the specific language governing permissions and // limitations under the License. package marathon import ( "github.com/prometheus/common/model" ) const ( // metaLabelPrefix is the meta prefix used for all meta labels in this discovery. metaLabelPrefix = model.MetaLabelPrefix + "marathon_" // appLabelPrefix is the prefix for the application labels. 
appLabelPrefix = metaLabelPrefix + "app_label_" // appLabel is used for the name of the app in Marathon. appLabel model.LabelName = metaLabelPrefix + "app" // imageLabel is the label that is used for the docker image running the service. imageLabel model.LabelName = metaLabelPrefix + "image" // taskLabel contains the mesos task name of the app instance. taskLabel model.LabelName = metaLabelPrefix + "task" ) prometheus-0.16.2+ds/retrieval/discovery/marathon/conversion.go000066400000000000000000000036231265137125100247350ustar00rootroot00000000000000// Copyright 2015 The Prometheus Authors // Licensed under the Apache License, Version 2.0 (the "License"); // you may not use this file except in compliance with the License. // You may obtain a copy of the License at // // http://www.apache.org/licenses/LICENSE-2.0 // // Unless required by applicable law or agreed to in writing, software // distributed under the License is distributed on an "AS IS" BASIS, // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. // See the License for the specific language governing permissions and // limitations under the License. package marathon import ( "fmt" "github.com/prometheus/common/model" "github.com/prometheus/prometheus/config" ) // AppsToTargetGroups takes an array of Marathon apps and converts them into target groups. func AppsToTargetGroups(apps *AppList) map[string]*config.TargetGroup { tgroups := map[string]*config.TargetGroup{} for _, a := range apps.Apps { group := createTargetGroup(&a) tgroups[group.Source] = group } return tgroups } func createTargetGroup(app *App) *config.TargetGroup { var ( targets = targetsForApp(app) appName = model.LabelValue(app.ID) image = model.LabelValue(app.Container.Docker.Image) ) tg := &config.TargetGroup{ Targets: targets, Labels: model.LabelSet{ appLabel: appName, imageLabel: image, }, Source: app.ID, } for ln, lv := range app.Labels { ln = appLabelPrefix + ln tg.Labels[model.LabelName(ln)] = model.LabelValue(lv) } return tg } func targetsForApp(app *App) []model.LabelSet { targets := make([]model.LabelSet, 0, len(app.Tasks)) for _, t := range app.Tasks { target := targetForTask(&t) targets = append(targets, model.LabelSet{ model.AddressLabel: model.LabelValue(target), taskLabel: model.LabelValue(t.ID), }) } return targets } func targetForTask(task *Task) string { return fmt.Sprintf("%s:%d", task.Host, task.Ports[0]) } prometheus-0.16.2+ds/retrieval/discovery/marathon/objects.go000066400000000000000000000026471265137125100242060ustar00rootroot00000000000000// Copyright 2015 The Prometheus Authors // Licensed under the Apache License, Version 2.0 (the "License"); // you may not use this file except in compliance with the License. // You may obtain a copy of the License at // // http://www.apache.org/licenses/LICENSE-2.0 // // Unless required by applicable law or agreed to in writing, software // distributed under the License is distributed on an "AS IS" BASIS, // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. // See the License for the specific language governing permissions and // limitations under the License. package marathon // Task describes one instance of a service running on Marathon. type Task struct { ID string `json:"id"` Host string `json:"host"` Ports []uint32 `json:"ports"` } // DockerContainer describes a container which uses the docker runtime. type DockerContainer struct { Image string `json:"image"` } // Container describes the runtime an app is running in. 
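// A minimal sketch of the JSON these marathon types decode, based on the
// struct tags in this file; all values are hypothetical:
//
//	{
//	  "apps": [{
//	    "id": "/product/api",
//	    "tasks": [{"id": "task-1", "host": "slave1", "ports": [31000]}],
//	    "tasksRunning": 1,
//	    "labels": {"prometheus": "yes"},
//	    "container": {"docker": {"image": "repo/image:tag"}}
//	  }]
//	}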
type Container struct { Docker DockerContainer `json:"docker"` } // App describes a service running on Marathon. type App struct { ID string `json:"id"` Tasks []Task `json:"tasks"` RunningTasks int `json:"tasksRunning"` Labels map[string]string `json:"labels"` Container Container `json:"container"` } // AppList is a list of Marathon apps. type AppList struct { Apps []App `json:"apps"` } prometheus-0.16.2+ds/retrieval/discovery/marathon/url.go000066400000000000000000000017701265137125100233530ustar00rootroot00000000000000// Copyright 2015 The Prometheus Authors // Licensed under the Apache License, Version 2.0 (the "License"); // you may not use this file except in compliance with the License. // You may obtain a copy of the License at // // http://www.apache.org/licenses/LICENSE-2.0 // // Unless required by applicable law or agreed to in writing, software // distributed under the License is distributed on an "AS IS" BASIS, // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. // See the License for the specific language governing permissions and // limitations under the License. package marathon import ( "fmt" "math/rand" ) const appListPath string = "/v2/apps/?embed=apps.tasks" // RandomAppsURL randomly selects a server from an array and creates // a URL pointing to the app list. func RandomAppsURL(servers []string) string { // TODO: If possible update server list from Marathon at some point. server := servers[rand.Intn(len(servers))] return fmt.Sprintf("%s%s", server, appListPath) } prometheus-0.16.2+ds/retrieval/discovery/marathon_test.go000066400000000000000000000110271265137125100236040ustar00rootroot00000000000000// Copyright 2015 The Prometheus Authors // Licensed under the Apache License, Version 2.0 (the "License"); // you may not use this file except in compliance with the License. // You may obtain a copy of the License at // // http://www.apache.org/licenses/LICENSE-2.0 // // Unless required by applicable law or agreed to in writing, software // distributed under the License is distributed on an "AS IS" BASIS, // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. // See the License for the specific language governing permissions and // limitations under the License. 
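// A quick illustration of url.go above (hypothetical hosts): given servers
// []string{"http://m1:8080", "http://m2:8080"}, RandomAppsURL returns one of
//
//	http://m1:8080/v2/apps/?embed=apps.tasks
//	http://m2:8080/v2/apps/?embed=apps.tasks
//
// picked uniformly at random via rand.Intn.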
package discovery import ( "errors" "testing" "time" "github.com/prometheus/common/model" "github.com/prometheus/prometheus/config" "github.com/prometheus/prometheus/retrieval/discovery/marathon" ) var marathonValidLabel = map[string]string{"prometheus": "yes"} func newTestDiscovery(client marathon.AppListClient) (chan config.TargetGroup, *MarathonDiscovery) { ch := make(chan config.TargetGroup) md := NewMarathonDiscovery(&config.MarathonSDConfig{ Servers: []string{"http://localhost:8080"}, }) md.client = client return ch, md } func TestMarathonSDHandleError(t *testing.T) { var errTesting = errors.New("testing failure") ch, md := newTestDiscovery(func(url string) (*marathon.AppList, error) { return nil, errTesting }) go func() { select { case tg := <-ch: t.Fatalf("Got group: %s", tg) default: } }() err := md.updateServices(ch) if err != errTesting { t.Fatalf("Expected error: %s", err) } } func TestMarathonSDEmptyList(t *testing.T) { ch, md := newTestDiscovery(func(url string) (*marathon.AppList, error) { return &marathon.AppList{}, nil }) go func() { select { case tg := <-ch: t.Fatalf("Got group: %v", tg) default: } }() err := md.updateServices(ch) if err != nil { t.Fatalf("Got error: %s", err) } } func marathonTestAppList(labels map[string]string, runningTasks int) *marathon.AppList { task := marathon.Task{ ID: "test-task-1", Host: "mesos-slave1", Ports: []uint32{31000}, } docker := marathon.DockerContainer{Image: "repo/image:tag"} container := marathon.Container{Docker: docker} app := marathon.App{ ID: "test-service", Tasks: []marathon.Task{task}, RunningTasks: runningTasks, Labels: labels, Container: container, } return &marathon.AppList{ Apps: []marathon.App{app}, } } func TestMarathonSDSendGroup(t *testing.T) { ch, md := newTestDiscovery(func(url string) (*marathon.AppList, error) { return marathonTestAppList(marathonValidLabel, 1), nil }) go func() { select { case tg := <-ch: if tg.Source != "test-service" { t.Fatalf("Wrong target group name: %s", tg.Source) } if len(tg.Targets) != 1 { t.Fatalf("Wrong number of targets: %v", tg.Targets) } tgt := tg.Targets[0] if tgt[model.AddressLabel] != "mesos-slave1:31000" { t.Fatalf("Wrong target address: %s", tgt[model.AddressLabel]) } default: t.Fatal("Did not get a target group.") } }() err := md.updateServices(ch) if err != nil { t.Fatalf("Got error: %s", err) } } func TestMarathonSDRemoveApp(t *testing.T) { ch, md := newTestDiscovery(func(url string) (*marathon.AppList, error) { return marathonTestAppList(marathonValidLabel, 1), nil }) go func() { up1 := <-ch up2 := <-ch if up2.Source != up1.Source { t.Fatalf("Source is different: %s", up2) if len(up2.Targets) > 0 { t.Fatalf("Got a non-empty target set: %s", up2.Targets) } } }() err := md.updateServices(ch) if err != nil { t.Fatalf("Got error on first update: %s", err) } md.client = func(url string) (*marathon.AppList, error) { return marathonTestAppList(marathonValidLabel, 0), nil } err = md.updateServices(ch) if err != nil { t.Fatalf("Got error on second update: %s", err) } } func TestMarathonSDSources(t *testing.T) { _, md := newTestDiscovery(func(url string) (*marathon.AppList, error) { return marathonTestAppList(marathonValidLabel, 1), nil }) sources := md.Sources() if len(sources) != 1 { t.Fatalf("Wrong number of sources: %s", sources) } } func TestMarathonSDRunAndStop(t *testing.T) { ch, md := newTestDiscovery(func(url string) (*marathon.AppList, error) { return marathonTestAppList(marathonValidLabel, 1), nil }) md.refreshInterval = time.Millisecond * 10 done := make(chan struct{}) go 
func() { select { case <-ch: close(done) case <-time.After(md.refreshInterval * 3): close(done) t.Fatalf("Update took too long.") } }() md.Run(ch, done) select { case <-ch: default: t.Fatalf("Channel not closed.") } } prometheus-0.16.2+ds/retrieval/discovery/serverset.go000066400000000000000000000233571265137125100227670ustar00rootroot00000000000000// Copyright 2015 The Prometheus Authors // Licensed under the Apache License, Version 2.0 (the "License"); // you may not use this file except in compliance with the License. // You may obtain a copy of the License at // // http://www.apache.org/licenses/LICENSE-2.0 // // Unless required by applicable law or agreed to in writing, software // distributed under the License is distributed on an "AS IS" BASIS, // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. // See the License for the specific language governing permissions and // limitations under the License. package discovery import ( "bytes" "encoding/json" "fmt" "strconv" "strings" "sync" "time" "github.com/prometheus/common/log" "github.com/prometheus/common/model" "github.com/samuel/go-zookeeper/zk" "github.com/prometheus/prometheus/config" "github.com/prometheus/prometheus/util/strutil" ) const ( serversetNodePrefix = "member_" serversetLabelPrefix = model.MetaLabelPrefix + "serverset_" serversetStatusLabel = serversetLabelPrefix + "status" serversetPathLabel = serversetLabelPrefix + "path" serversetEndpointLabelPrefix = serversetLabelPrefix + "endpoint" serversetShardLabel = serversetLabelPrefix + "shard" ) type serversetMember struct { ServiceEndpoint serversetEndpoint AdditionalEndpoints map[string]serversetEndpoint Status string `json:"status"` Shard int `json:"shard"` } type serversetEndpoint struct { Host string Port int } type zookeeperLogger struct { } // Implements zk.Logger func (zl zookeeperLogger) Printf(s string, i ...interface{}) { log.Infof(s, i...) } // ServersetDiscovery retrieves target information from a Serverset server // and updates them via watches. type ServersetDiscovery struct { conf *config.ServersetSDConfig conn *zk.Conn mu sync.RWMutex sources map[string]*config.TargetGroup sdUpdates *chan<- config.TargetGroup updates chan zookeeperTreeCacheEvent treeCaches []*zookeeperTreeCache } // NewServersetDiscovery returns a new ServersetDiscovery for the given config. func NewServersetDiscovery(conf *config.ServersetSDConfig) *ServersetDiscovery { conn, _, err := zk.Connect(conf.Servers, time.Duration(conf.Timeout)) conn.SetLogger(zookeeperLogger{}) if err != nil { return nil } updates := make(chan zookeeperTreeCacheEvent) sd := &ServersetDiscovery{ conf: conf, conn: conn, updates: updates, sources: map[string]*config.TargetGroup{}, } go sd.processUpdates() for _, path := range conf.Paths { sd.treeCaches = append(sd.treeCaches, newZookeeperTreeCache(conn, path, updates)) } return sd } // Sources implements the TargetProvider interface. 
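// For illustration, a minimal serverset member znode payload matching the
// serversetMember/serversetEndpoint structs above (values hypothetical):
//
//	{
//	  "serviceEndpoint": {"host": "10.0.0.1", "port": 9090},
//	  "additionalEndpoints": {"http": {"host": "10.0.0.1", "port": 8080}},
//	  "status": "ALIVE",
//	  "shard": 3
//	}
//
// parseServersetMember below turns this into labels along the lines of
// __address__="10.0.0.1:9090", __meta_serverset_endpoint_host_http="10.0.0.1",
// __meta_serverset_status="ALIVE" and __meta_serverset_shard="3".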
func (sd *ServersetDiscovery) Sources() []string { sd.mu.RLock() defer sd.mu.RUnlock() srcs := []string{} for t := range sd.sources { srcs = append(srcs, t) } return srcs } func (sd *ServersetDiscovery) processUpdates() { defer sd.conn.Close() for event := range sd.updates { tg := &config.TargetGroup{ Source: event.Path, } sd.mu.Lock() if event.Data != nil { labelSet, err := parseServersetMember(*event.Data, event.Path) if err == nil { tg.Targets = []model.LabelSet{*labelSet} sd.sources[event.Path] = tg } else { delete(sd.sources, event.Path) } } else { delete(sd.sources, event.Path) } sd.mu.Unlock() if sd.sdUpdates != nil { *sd.sdUpdates <- *tg } } if sd.sdUpdates != nil { close(*sd.sdUpdates) } } // Run implements the TargetProvider interface. func (sd *ServersetDiscovery) Run(ch chan<- config.TargetGroup, done <-chan struct{}) { // Send on everything we have seen so far. sd.mu.Lock() for _, targetGroup := range sd.sources { ch <- *targetGroup } // Tell processUpdates to send future updates. sd.sdUpdates = &ch sd.mu.Unlock() <-done for _, tc := range sd.treeCaches { tc.Stop() } } func parseServersetMember(data []byte, path string) (*model.LabelSet, error) { member := serversetMember{} err := json.Unmarshal(data, &member) if err != nil { return nil, fmt.Errorf("error unmarshaling serverset member %q: %s", path, err) } labels := model.LabelSet{} labels[serversetPathLabel] = model.LabelValue(path) labels[model.AddressLabel] = model.LabelValue( fmt.Sprintf("%s:%d", member.ServiceEndpoint.Host, member.ServiceEndpoint.Port)) labels[serversetEndpointLabelPrefix+"_host"] = model.LabelValue(member.ServiceEndpoint.Host) labels[serversetEndpointLabelPrefix+"_port"] = model.LabelValue(fmt.Sprintf("%d", member.ServiceEndpoint.Port)) for name, endpoint := range member.AdditionalEndpoints { cleanName := model.LabelName(strutil.SanitizeLabelName(name)) labels[serversetEndpointLabelPrefix+"_host_"+cleanName] = model.LabelValue( endpoint.Host) labels[serversetEndpointLabelPrefix+"_port_"+cleanName] = model.LabelValue( fmt.Sprintf("%d", endpoint.Port)) } labels[serversetStatusLabel] = model.LabelValue(member.Status) labels[serversetShardLabel] = model.LabelValue(strconv.Itoa(member.Shard)) return &labels, nil } type zookeeperTreeCache struct { conn *zk.Conn prefix string events chan zookeeperTreeCacheEvent zkEvents chan zk.Event stop chan struct{} head *zookeeperTreeCacheNode } type zookeeperTreeCacheEvent struct { Path string Data *[]byte } type zookeeperTreeCacheNode struct { data *[]byte events chan zk.Event done chan struct{} stopped bool children map[string]*zookeeperTreeCacheNode } func newZookeeperTreeCache(conn *zk.Conn, path string, events chan zookeeperTreeCacheEvent) *zookeeperTreeCache { tc := &zookeeperTreeCache{ conn: conn, prefix: path, events: events, stop: make(chan struct{}), } tc.head = &zookeeperTreeCacheNode{ events: make(chan zk.Event), children: map[string]*zookeeperTreeCacheNode{}, stopped: true, } err := tc.recursiveNodeUpdate(path, tc.head) if err != nil { log.Errorf("Error during initial read of Zookeeper: %s", err) } go tc.loop(err != nil) return tc } func (tc *zookeeperTreeCache) Stop() { tc.stop <- struct{}{} } func (tc *zookeeperTreeCache) loop(failureMode bool) { retryChan := make(chan struct{}) failure := func() { failureMode = true time.AfterFunc(time.Second*10, func() { retryChan <- struct{}{} }) } if failureMode { failure() } for { select { case ev := <-tc.head.events: log.Debugf("Received Zookeeper event: %s", ev) if failureMode { continue } if ev.Type == 
zk.EventNotWatching { log.Infof("Lost connection to Zookeeper.") failure() } else { path := strings.TrimPrefix(ev.Path, tc.prefix) parts := strings.Split(path, "/") node := tc.head for _, part := range parts[1:] { childNode := node.children[part] if childNode == nil { childNode = &zookeeperTreeCacheNode{ events: tc.head.events, children: map[string]*zookeeperTreeCacheNode{}, done: make(chan struct{}, 1), } node.children[part] = childNode } node = childNode } err := tc.recursiveNodeUpdate(ev.Path, node) if err != nil { log.Errorf("Error during processing of Zookeeper event: %s", err) failure() } else if tc.head.data == nil { log.Errorf("Error during processing of Zookeeper event: path %s no longer exists", tc.prefix) failure() } } case <-retryChan: log.Infof("Attempting to resync state with Zookeeper") // Reset root child nodes before traversing the Zookeeper path. tc.head.children = make(map[string]*zookeeperTreeCacheNode) err := tc.recursiveNodeUpdate(tc.prefix, tc.head) if err != nil { log.Errorf("Error during Zookeeper resync: %s", err) failure() } else { log.Infof("Zookeeper resync successful") failureMode = false } case <-tc.stop: close(tc.events) return } } } func (tc *zookeeperTreeCache) recursiveNodeUpdate(path string, node *zookeeperTreeCacheNode) error { data, _, dataWatcher, err := tc.conn.GetW(path) if err == zk.ErrNoNode { tc.recursiveDelete(path, node) if node == tc.head { return fmt.Errorf("path %s does not exist", path) } return nil } else if err != nil { return err } if node.data == nil || !bytes.Equal(*node.data, data) { node.data = &data tc.events <- zookeeperTreeCacheEvent{Path: path, Data: node.data} } children, _, childWatcher, err := tc.conn.ChildrenW(path) if err == zk.ErrNoNode { tc.recursiveDelete(path, node) return nil } else if err != nil { return err } currentChildren := map[string]struct{}{} for _, child := range children { currentChildren[child] = struct{}{} childNode := node.children[child] // Does not already exist, or we previously had a watch that // triggered. if childNode == nil || childNode.stopped { node.children[child] = &zookeeperTreeCacheNode{ events: node.events, children: map[string]*zookeeperTreeCacheNode{}, done: make(chan struct{}, 1), } err = tc.recursiveNodeUpdate(path+"/"+child, node.children[child]) if err != nil { return err } } } // Remove nodes that no longer exist for name, childNode := range node.children { if _, ok := currentChildren[name]; !ok || node.data == nil { tc.recursiveDelete(path+"/"+name, childNode) delete(node.children, name) } } go func() { // Pass up zookeeper events, until the node is deleted. select { case event := <-dataWatcher: node.events <- event case event := <-childWatcher: node.events <- event case <-node.done: } }() return nil } func (tc *zookeeperTreeCache) recursiveDelete(path string, node *zookeeperTreeCacheNode) { if !node.stopped { node.done <- struct{}{} node.stopped = true } if node.data != nil { tc.events <- zookeeperTreeCacheEvent{Path: path, Data: nil} node.data = nil } for name, childNode := range node.children { tc.recursiveDelete(path+"/"+name, childNode) } } prometheus-0.16.2+ds/retrieval/helpers_test.go000066400000000000000000000031201265137125100214210ustar00rootroot00000000000000// Copyright 2013 The Prometheus Authors // Licensed under the Apache License, Version 2.0 (the "License"); // you may not use this file except in compliance with the License. 
// You may obtain a copy of the License at // // http://www.apache.org/licenses/LICENSE-2.0 // // Unless required by applicable law or agreed to in writing, software // distributed under the License is distributed on an "AS IS" BASIS, // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. // See the License for the specific language governing permissions and // limitations under the License. package retrieval import ( "time" "github.com/prometheus/common/model" "github.com/prometheus/prometheus/config" ) type nopAppender struct{} func (a nopAppender) Append(*model.Sample) { } type slowAppender struct{} func (a slowAppender) Append(*model.Sample) { time.Sleep(time.Millisecond) } type collectResultAppender struct { result model.Samples } func (a *collectResultAppender) Append(s *model.Sample) { for ln, lv := range s.Metric { if len(lv) == 0 { delete(s.Metric, ln) } } a.result = append(a.result, s) } // fakeTargetProvider implements a TargetProvider and allows manual injection // of TargetGroups through the update channel. type fakeTargetProvider struct { sources []string update chan *config.TargetGroup } func (tp *fakeTargetProvider) Run(ch chan<- config.TargetGroup, done <-chan struct{}) { defer close(ch) for { select { case tg := <-tp.update: ch <- *tg case <-done: return } } } func (tp *fakeTargetProvider) Sources() []string { return tp.sources } prometheus-0.16.2+ds/retrieval/relabel.go000066400000000000000000000055541265137125100203430ustar00rootroot00000000000000// Copyright 2015 The Prometheus Authors // Licensed under the Apache License, Version 2.0 (the "License"); // you may not use this file except in compliance with the License. // You may obtain a copy of the License at // // http://www.apache.org/licenses/LICENSE-2.0 // // Unless required by applicable law or agreed to in writing, software // distributed under the License is distributed on an "AS IS" BASIS, // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. // See the License for the specific language governing permissions and // limitations under the License. package retrieval import ( "crypto/md5" "fmt" "strings" "github.com/prometheus/common/model" "github.com/prometheus/prometheus/config" ) // Relabel returns a relabeled copy of the given label set. The relabel configurations // are applied in order of input. // If a label set is dropped, nil is returned. func Relabel(labels model.LabelSet, cfgs ...*config.RelabelConfig) (model.LabelSet, error) { out := model.LabelSet{} for ln, lv := range labels { out[ln] = lv } var err error for _, cfg := range cfgs { if out, err = relabel(out, cfg); err != nil { return nil, err } if out == nil { return nil, nil } } return out, nil } func relabel(labels model.LabelSet, cfg *config.RelabelConfig) (model.LabelSet, error) { values := make([]string, 0, len(cfg.SourceLabels)) for _, ln := range cfg.SourceLabels { values = append(values, string(labels[ln])) } val := strings.Join(values, cfg.Separator) switch cfg.Action { case config.RelabelDrop: if cfg.Regex.MatchString(val) { return nil, nil } case config.RelabelKeep: if !cfg.Regex.MatchString(val) { return nil, nil } case config.RelabelReplace: indexes := cfg.Regex.FindStringSubmatchIndex(val) // If there is no match no replacement must take place. 
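// (Worked example mirroring a case from relabel_test.go: with source value
// "foo", regex "f(.*)" and replacement "ch${1}-ch${1}", the expansion below
// writes "choo-choo" to the target label; with source value "boo" the regex
// does not match, indexes is nil, and the label set is returned unchanged.)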
if indexes == nil { break } res := cfg.Regex.ExpandString([]byte{}, cfg.Replacement, val, indexes) if len(res) == 0 { delete(labels, cfg.TargetLabel) } else { labels[cfg.TargetLabel] = model.LabelValue(res) } case config.RelabelHashMod: mod := sum64(md5.Sum([]byte(val))) % cfg.Modulus labels[cfg.TargetLabel] = model.LabelValue(fmt.Sprintf("%d", mod)) case config.RelabelLabelMap: out := make(model.LabelSet, len(labels)) // Take a copy to avoid infinite loops. for ln, lv := range labels { out[ln] = lv } for ln, lv := range labels { if cfg.Regex.MatchString(string(ln)) { res := cfg.Regex.ReplaceAllString(string(ln), cfg.Replacement) out[model.LabelName(res)] = lv } } labels = out default: panic(fmt.Errorf("retrieval.relabel: unknown relabel action type %q", cfg.Action)) } return labels, nil } // sum64 sums the md5 hash to an uint64. func sum64(hash [md5.Size]byte) uint64 { var s uint64 for i, b := range hash { shift := uint64((md5.Size - i - 1) * 8) s |= uint64(b) << shift } return s } prometheus-0.16.2+ds/retrieval/relabel_test.go000066400000000000000000000141431265137125100213740ustar00rootroot00000000000000// Copyright 2015 The Prometheus Authors // Licensed under the Apache License, Version 2.0 (the "License"); // you may not use this file except in compliance with the License. // You may obtain a copy of the License at // // http://www.apache.org/licenses/LICENSE-2.0 // // Unless required by applicable law or agreed to in writing, software // distributed under the License is distributed on an "AS IS" BASIS, // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. // See the License for the specific language governing permissions and // limitations under the License. package retrieval import ( "reflect" "testing" "github.com/prometheus/common/model" "github.com/prometheus/prometheus/config" ) func TestRelabel(t *testing.T) { tests := []struct { input model.LabelSet relabel []*config.RelabelConfig output model.LabelSet }{ { input: model.LabelSet{ "a": "foo", "b": "bar", "c": "baz", }, relabel: []*config.RelabelConfig{ { SourceLabels: model.LabelNames{"a"}, Regex: config.MustNewRegexp("f(.*)"), TargetLabel: model.LabelName("d"), Separator: ";", Replacement: "ch${1}-ch${1}", Action: config.RelabelReplace, }, }, output: model.LabelSet{ "a": "foo", "b": "bar", "c": "baz", "d": "choo-choo", }, }, { input: model.LabelSet{ "a": "foo", "b": "bar", "c": "baz", }, relabel: []*config.RelabelConfig{ { SourceLabels: model.LabelNames{"a", "b"}, Regex: config.MustNewRegexp("f(.*);(.*)r"), TargetLabel: model.LabelName("a"), Separator: ";", Replacement: "b${1}${2}m", // boobam Action: config.RelabelReplace, }, { SourceLabels: model.LabelNames{"c", "a"}, Regex: config.MustNewRegexp("(b).*b(.*)ba(.*)"), TargetLabel: model.LabelName("d"), Separator: ";", Replacement: "$1$2$2$3", Action: config.RelabelReplace, }, }, output: model.LabelSet{ "a": "boobam", "b": "bar", "c": "baz", "d": "boooom", }, }, { input: model.LabelSet{ "a": "foo", }, relabel: []*config.RelabelConfig{ { SourceLabels: model.LabelNames{"a"}, Regex: config.MustNewRegexp(".*o.*"), Action: config.RelabelDrop, }, { SourceLabels: model.LabelNames{"a"}, Regex: config.MustNewRegexp("f(.*)"), TargetLabel: model.LabelName("d"), Separator: ";", Replacement: "ch$1-ch$1", Action: config.RelabelReplace, }, }, output: nil, }, { input: model.LabelSet{ "a": "abc", }, relabel: []*config.RelabelConfig{ { SourceLabels: model.LabelNames{"a"}, Regex: config.MustNewRegexp(".*(b).*"), TargetLabel: model.LabelName("d"), Separator: ";", 
Replacement: "$1", Action: config.RelabelReplace, }, }, output: model.LabelSet{ "a": "abc", "d": "b", }, }, { input: model.LabelSet{ "a": "foo", }, relabel: []*config.RelabelConfig{ { SourceLabels: model.LabelNames{"a"}, Regex: config.MustNewRegexp("no-match"), Action: config.RelabelDrop, }, }, output: model.LabelSet{ "a": "foo", }, }, { input: model.LabelSet{ "a": "foo", }, relabel: []*config.RelabelConfig{ { SourceLabels: model.LabelNames{"a"}, Regex: config.MustNewRegexp("f|o"), Action: config.RelabelDrop, }, }, output: model.LabelSet{ "a": "foo", }, }, { input: model.LabelSet{ "a": "foo", }, relabel: []*config.RelabelConfig{ { SourceLabels: model.LabelNames{"a"}, Regex: config.MustNewRegexp("no-match"), Action: config.RelabelKeep, }, }, output: nil, }, { input: model.LabelSet{ "a": "foo", }, relabel: []*config.RelabelConfig{ { SourceLabels: model.LabelNames{"a"}, Regex: config.MustNewRegexp("f.*"), Action: config.RelabelKeep, }, }, output: model.LabelSet{ "a": "foo", }, }, { // No replacement must be applied if there is no match. input: model.LabelSet{ "a": "boo", }, relabel: []*config.RelabelConfig{ { SourceLabels: model.LabelNames{"a"}, Regex: config.MustNewRegexp("f"), TargetLabel: model.LabelName("b"), Replacement: "bar", Action: config.RelabelReplace, }, }, output: model.LabelSet{ "a": "boo", }, }, { input: model.LabelSet{ "a": "foo", "b": "bar", "c": "baz", }, relabel: []*config.RelabelConfig{ { SourceLabels: model.LabelNames{"c"}, TargetLabel: model.LabelName("d"), Separator: ";", Action: config.RelabelHashMod, Modulus: 1000, }, }, output: model.LabelSet{ "a": "foo", "b": "bar", "c": "baz", "d": "976", }, }, { input: model.LabelSet{ "a": "foo", "b1": "bar", "b2": "baz", }, relabel: []*config.RelabelConfig{ { Regex: config.MustNewRegexp("(b.*)"), Replacement: "bar_${1}", Action: config.RelabelLabelMap, }, }, output: model.LabelSet{ "a": "foo", "b1": "bar", "b2": "baz", "bar_b1": "bar", "bar_b2": "baz", }, }, { input: model.LabelSet{ "a": "foo", "__meta_my_bar": "aaa", "__meta_my_baz": "bbb", "__meta_other": "ccc", }, relabel: []*config.RelabelConfig{ { Regex: config.MustNewRegexp("__meta_(my.*)"), Replacement: "${1}", Action: config.RelabelLabelMap, }, }, output: model.LabelSet{ "a": "foo", "__meta_my_bar": "aaa", "__meta_my_baz": "bbb", "__meta_other": "ccc", "my_bar": "aaa", "my_baz": "bbb", }, }, } for i, test := range tests { res, err := Relabel(test.input, test.relabel...) if err != nil { t.Errorf("Test %d: error relabeling: %s", i+1, err) } if !reflect.DeepEqual(res, test.output) { t.Errorf("Test %d: relabel output mismatch: expected %#v, got %#v", i+1, test.output, res) } } } prometheus-0.16.2+ds/retrieval/target.go000066400000000000000000000407431265137125100202220ustar00rootroot00000000000000// Copyright 2013 The Prometheus Authors // Licensed under the Apache License, Version 2.0 (the "License"); // you may not use this file except in compliance with the License. // You may obtain a copy of the License at // // http://www.apache.org/licenses/LICENSE-2.0 // // Unless required by applicable law or agreed to in writing, software // distributed under the License is distributed on an "AS IS" BASIS, // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. // See the License for the specific language governing permissions and // limitations under the License. 
package retrieval import ( "errors" "fmt" "io" "io/ioutil" "math/rand" "net/http" "net/url" "strings" "sync" "time" "github.com/prometheus/client_golang/prometheus" "github.com/prometheus/common/expfmt" "github.com/prometheus/common/log" "github.com/prometheus/common/model" "github.com/prometheus/prometheus/config" "github.com/prometheus/prometheus/storage" "github.com/prometheus/prometheus/util/httputil" ) const ( scrapeHealthMetricName = "up" scrapeDurationMetricName = "scrape_duration_seconds" // Capacity of the channel to buffer samples during ingestion. ingestedSamplesCap = 256 // Constants for instrumentation. namespace = "prometheus" interval = "interval" ) var ( errIngestChannelFull = errors.New("ingestion channel full") targetIntervalLength = prometheus.NewSummaryVec( prometheus.SummaryOpts{ Namespace: namespace, Name: "target_interval_length_seconds", Help: "Actual intervals between scrapes.", Objectives: map[float64]float64{0.01: 0.001, 0.05: 0.005, 0.5: 0.05, 0.90: 0.01, 0.99: 0.001}, }, []string{interval}, ) ) func init() { prometheus.MustRegister(targetIntervalLength) } // TargetHealth describes the health state of a target. type TargetHealth int func (t TargetHealth) String() string { switch t { case HealthUnknown: return "unknown" case HealthGood: return "up" case HealthBad: return "down" } panic("unknown state") } func (t TargetHealth) value() model.SampleValue { if t == HealthGood { return 1 } return 0 } const ( // HealthUnknown is the state of a Target before it is first scraped. HealthUnknown TargetHealth = iota // HealthGood is the state of a Target that has been successfully scraped. HealthGood // HealthBad is the state of a Target that was scraped unsuccessfully. HealthBad ) // TargetStatus contains information about the current status of a scrape target. type TargetStatus struct { lastError error lastScrape time.Time health TargetHealth mu sync.RWMutex } // LastError returns the error encountered during the last scrape. func (ts *TargetStatus) LastError() error { ts.mu.RLock() defer ts.mu.RUnlock() return ts.lastError } // LastScrape returns the time of the last scrape. func (ts *TargetStatus) LastScrape() time.Time { ts.mu.RLock() defer ts.mu.RUnlock() return ts.lastScrape } // Health returns the last known health state of the target. func (ts *TargetStatus) Health() TargetHealth { ts.mu.RLock() defer ts.mu.RUnlock() return ts.health } func (ts *TargetStatus) setLastScrape(t time.Time) { ts.mu.Lock() defer ts.mu.Unlock() ts.lastScrape = t } func (ts *TargetStatus) setLastError(err error) { ts.mu.Lock() defer ts.mu.Unlock() if err == nil { ts.health = HealthGood } else { ts.health = HealthBad } ts.lastError = err } // Target refers to a singular HTTP or HTTPS endpoint. type Target struct { // The status object for the target. It is only set once on initialization. status *TargetStatus // Closing scraperStopping signals that scraping should stop. scraperStopping chan struct{} // Closing scraperStopped signals that scraping has been stopped. scraperStopped chan struct{} // Channel to buffer ingested samples. ingestedSamples chan model.Vector // Mutex protects the members below. sync.RWMutex // The HTTP client used to scrape the target's endpoint. httpClient *http.Client // url is the URL to be scraped. Its host is immutable. url *url.URL // Labels before any processing. metaLabels model.LabelSet // Any base labels that are added to this target and its metrics. baseLabels model.LabelSet // Internal labels, such as scheme. 
internalLabels model.LabelSet // What is the deadline for the HTTP or HTTPS against this endpoint. deadline time.Duration // The time between two scrapes. scrapeInterval time.Duration // Whether the target's labels have precedence over the base labels // assigned by the scraping instance. honorLabels bool // Metric relabel configuration. metricRelabelConfigs []*config.RelabelConfig } // NewTarget creates a reasonably configured target for querying. func NewTarget(cfg *config.ScrapeConfig, baseLabels, metaLabels model.LabelSet) *Target { t := &Target{ url: &url.URL{ Scheme: string(baseLabels[model.SchemeLabel]), Host: string(baseLabels[model.AddressLabel]), }, status: &TargetStatus{}, scraperStopping: make(chan struct{}), scraperStopped: make(chan struct{}), } t.Update(cfg, baseLabels, metaLabels) return t } // Status returns the status of the target. func (t *Target) Status() *TargetStatus { return t.status } // Update overwrites settings in the target that are derived from the job config // it belongs to. func (t *Target) Update(cfg *config.ScrapeConfig, baseLabels, metaLabels model.LabelSet) { t.Lock() defer t.Unlock() httpClient, err := newHTTPClient(cfg) if err != nil { log.Errorf("cannot create HTTP client: %v", err) return } t.httpClient = httpClient t.url.Scheme = string(baseLabels[model.SchemeLabel]) t.url.Path = string(baseLabels[model.MetricsPathLabel]) t.internalLabels = model.LabelSet{} t.internalLabels[model.SchemeLabel] = baseLabels[model.SchemeLabel] t.internalLabels[model.MetricsPathLabel] = baseLabels[model.MetricsPathLabel] t.internalLabels[model.AddressLabel] = model.LabelValue(t.url.Host) params := url.Values{} for k, v := range cfg.Params { params[k] = make([]string, len(v)) copy(params[k], v) } for k, v := range baseLabels { if strings.HasPrefix(string(k), model.ParamLabelPrefix) { if len(params[string(k[len(model.ParamLabelPrefix):])]) > 0 { params[string(k[len(model.ParamLabelPrefix):])][0] = string(v) } else { params[string(k[len(model.ParamLabelPrefix):])] = []string{string(v)} } t.internalLabels[model.ParamLabelPrefix+k[len(model.ParamLabelPrefix):]] = v } } t.url.RawQuery = params.Encode() t.scrapeInterval = time.Duration(cfg.ScrapeInterval) t.deadline = time.Duration(cfg.ScrapeTimeout) t.honorLabels = cfg.HonorLabels t.metaLabels = metaLabels t.baseLabels = model.LabelSet{} // All remaining internal labels will not be part of the label set. for name, val := range baseLabels { if !strings.HasPrefix(string(name), model.ReservedLabelPrefix) { t.baseLabels[name] = val } } if _, ok := t.baseLabels[model.InstanceLabel]; !ok { t.baseLabels[model.InstanceLabel] = model.LabelValue(t.InstanceIdentifier()) } t.metricRelabelConfigs = cfg.MetricRelabelConfigs } func newHTTPClient(cfg *config.ScrapeConfig) (*http.Client, error) { rt := httputil.NewDeadlineRoundTripper(time.Duration(cfg.ScrapeTimeout), cfg.ProxyURL.URL) tlsOpts := httputil.TLSOptions{ InsecureSkipVerify: cfg.TLSConfig.InsecureSkipVerify, CAFile: cfg.TLSConfig.CAFile, } if len(cfg.TLSConfig.CertFile) > 0 && len(cfg.TLSConfig.KeyFile) > 0 { tlsOpts.CertFile = cfg.TLSConfig.CertFile tlsOpts.KeyFile = cfg.TLSConfig.KeyFile } tlsConfig, err := httputil.NewTLSConfig(tlsOpts) if err != nil { return nil, err } // Get a default roundtripper with the scrape timeout. tr := rt.(*http.Transport) // Set the TLS config from above tr.TLSClientConfig = tlsConfig rt = tr // If a bearer token is provided, create a round tripper that will set the // Authorization header correctly on each request. 
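// (Sketch of the wrapping order built below, assuming both auth options are
// set; in practice they are alternatives. The client invokes the outermost
// wrapper first:
//
//	http.Client -> basic-auth RT -> bearer-auth RT -> http.Transport (TLS, deadline)
//
// Each wrapper injects its credentials before delegating inward.)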
bearerToken := cfg.BearerToken if len(bearerToken) == 0 && len(cfg.BearerTokenFile) > 0 { b, err := ioutil.ReadFile(cfg.BearerTokenFile) if err != nil { return nil, fmt.Errorf("unable to read bearer token file %s: %s", cfg.BearerTokenFile, err) } bearerToken = string(b) } if len(bearerToken) > 0 { rt = httputil.NewBearerAuthRoundTripper(bearerToken, rt) } if cfg.BasicAuth != nil { rt = httputil.NewBasicAuthRoundTripper(cfg.BasicAuth.Username, cfg.BasicAuth.Password, rt) } // Return a new client with the configured round tripper. return httputil.NewClient(rt), nil } func (t *Target) String() string { return t.url.Host } // RunScraper implements Target. func (t *Target) RunScraper(sampleAppender storage.SampleAppender) { defer close(t.scraperStopped) t.RLock() lastScrapeInterval := t.scrapeInterval t.RUnlock() log.Debugf("Starting scraper for target %v...", t) jitterTimer := time.NewTimer(time.Duration(float64(lastScrapeInterval) * rand.Float64())) select { case <-jitterTimer.C: case <-t.scraperStopping: jitterTimer.Stop() return } jitterTimer.Stop() ticker := time.NewTicker(lastScrapeInterval) defer ticker.Stop() t.status.setLastScrape(time.Now()) t.scrape(sampleAppender) // Explanation of the contraption below: // // In case t.scraperStopping has something to receive, we want to read // from that channel rather than starting a new scrape (which might take very // long). That's why the outer select has no ticker.C. Should t.scraperStopping // not have anything to receive, we go into the inner select, where ticker.C // is in the mix. for { select { case <-t.scraperStopping: return default: select { case <-t.scraperStopping: return case <-ticker.C: took := time.Since(t.status.LastScrape()) t.status.setLastScrape(time.Now()) intervalStr := lastScrapeInterval.String() t.RLock() // On changed scrape interval the new interval becomes effective // after the next scrape. if lastScrapeInterval != t.scrapeInterval { ticker.Stop() ticker = time.NewTicker(t.scrapeInterval) lastScrapeInterval = t.scrapeInterval } t.RUnlock() targetIntervalLength.WithLabelValues(intervalStr).Observe( float64(took) / float64(time.Second), // Sub-second precision. ) t.scrape(sampleAppender) } } } } // StopScraper implements Target. func (t *Target) StopScraper() { log.Debugf("Stopping scraper for target %v...", t) close(t.scraperStopping) <-t.scraperStopped log.Debugf("Scraper for target %v stopped.", t) } func (t *Target) ingest(s model.Vector) error { t.RLock() deadline := t.deadline t.RUnlock() // Since the regular case is that ingestedSamples is ready to receive, // first try without setting a timeout so that we don't need to allocate // a timer most of the time. select { case t.ingestedSamples <- s: return nil default: select { case t.ingestedSamples <- s: return nil case <-time.After(deadline / 10): return errIngestChannelFull } } } const acceptHeader = `application/vnd.google.protobuf;proto=io.prometheus.client.MetricFamily;encoding=delimited;q=0.7,text/plain;version=0.0.4;q=0.3,application/json;schema="prometheus/telemetry";version=0.0.2;q=0.2,*/*;q=0.1` func (t *Target) scrape(appender storage.SampleAppender) (err error) { start := time.Now() baseLabels := t.BaseLabels() defer func(appender storage.SampleAppender) { t.status.setLastError(err) recordScrapeHealth(appender, start, baseLabels, t.status.Health(), time.Since(start)) }(appender) t.RLock() // The relabelAppender has to be inside the label-modifying appenders // so the relabeling rules are applied to the correct label set. 
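// (Flow sketch for the wrapping below: a scraped sample enters the outermost
// appender and travels
//
//	honorLabels/ruleLabels appender -> relabelAppender -> storage appender
//
// so target labels are attached first, and metric relabeling sees the fully
// labeled sample before it reaches storage.)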
if len(t.metricRelabelConfigs) > 0 { appender = relabelAppender{ app: appender, relabelings: t.metricRelabelConfigs, } } if t.honorLabels { appender = honorLabelsAppender{ app: appender, labels: baseLabels, } } else { appender = ruleLabelsAppender{ app: appender, labels: baseLabels, } } httpClient := t.httpClient t.RUnlock() req, err := http.NewRequest("GET", t.URL().String(), nil) if err != nil { return err } req.Header.Add("Accept", acceptHeader) resp, err := httpClient.Do(req) if err != nil { return err } defer resp.Body.Close() if resp.StatusCode != http.StatusOK { return fmt.Errorf("server returned HTTP status %s", resp.Status) } dec := expfmt.NewDecoder(resp.Body, expfmt.ResponseFormat(resp.Header)) sdec := expfmt.SampleDecoder{ Dec: dec, Opts: &expfmt.DecodeOptions{ Timestamp: model.TimeFromUnixNano(start.UnixNano()), }, } t.ingestedSamples = make(chan model.Vector, ingestedSamplesCap) go func() { for { // TODO(fabxc): Change the SampleAppender interface to return an error // so we can proceed based on the status and don't leak goroutines trying // to append a single sample after dropping all the other ones. // // This will also allow us to reuse this vector and save allocations. var samples model.Vector if err = sdec.Decode(&samples); err != nil { break } if err = t.ingest(samples); err != nil { break } } close(t.ingestedSamples) }() for samples := range t.ingestedSamples { for _, s := range samples { appender.Append(s) } } if err == io.EOF { return nil } return err } // Merges the ingested sample's metric with the label set. On a collision the // value of the ingested label is stored in a label prefixed with 'exported_'. type ruleLabelsAppender struct { app storage.SampleAppender labels model.LabelSet } func (app ruleLabelsAppender) Append(s *model.Sample) { for ln, lv := range app.labels { if v, ok := s.Metric[ln]; ok && v != "" { s.Metric[model.ExportedLabelPrefix+ln] = v } s.Metric[ln] = lv } app.app.Append(s) } type honorLabelsAppender struct { app storage.SampleAppender labels model.LabelSet } // Merges the sample's metric with the given labels if the label is not // already present in the metric. // This also considers labels explicitly set to the empty string. func (app honorLabelsAppender) Append(s *model.Sample) { for ln, lv := range app.labels { if _, ok := s.Metric[ln]; !ok { s.Metric[ln] = lv } } app.app.Append(s) } // Applies a set of relabel configurations to the sample's metric // before actually appending it. type relabelAppender struct { app storage.SampleAppender relabelings []*config.RelabelConfig } func (app relabelAppender) Append(s *model.Sample) { labels, err := Relabel(model.LabelSet(s.Metric), app.relabelings...) if err != nil { log.Errorf("Error while relabeling metric %s: %s", s.Metric, err) return } // Check if the timeseries was dropped. if labels == nil { return } s.Metric = model.Metric(labels) app.app.Append(s) } // URL returns a copy of the target's URL. func (t *Target) URL() *url.URL { t.RLock() defer t.RUnlock() u := &url.URL{} *u = *t.url return u } // InstanceIdentifier returns the identifier for the target. func (t *Target) InstanceIdentifier() string { return t.url.Host } // fullLabels returns the base labels plus internal labels defining the target. 
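// For illustration (hypothetical target): a target scraping
// http://example.com:80/metrics with base labels
// {job="some_job", instance="example.com:80"} would additionally carry the
// internal labels __scheme__="http", __metrics_path__="/metrics" and
// __address__="example.com:80" in its full label set.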
func (t *Target) fullLabels() model.LabelSet { t.RLock() defer t.RUnlock() lset := make(model.LabelSet, len(t.baseLabels)+len(t.internalLabels)) for ln, lv := range t.baseLabels { lset[ln] = lv } for k, v := range t.internalLabels { lset[k] = v } return lset } // BaseLabels returns a copy of the target's base labels. func (t *Target) BaseLabels() model.LabelSet { t.RLock() defer t.RUnlock() lset := make(model.LabelSet, len(t.baseLabels)) for ln, lv := range t.baseLabels { lset[ln] = lv } return lset } // MetaLabels returns a copy of the target's labels before any processing. func (t *Target) MetaLabels() model.LabelSet { t.RLock() defer t.RUnlock() lset := make(model.LabelSet, len(t.metaLabels)) for ln, lv := range t.metaLabels { lset[ln] = lv } return lset } func recordScrapeHealth( sampleAppender storage.SampleAppender, timestamp time.Time, baseLabels model.LabelSet, health TargetHealth, scrapeDuration time.Duration, ) { healthMetric := make(model.Metric, len(baseLabels)+1) durationMetric := make(model.Metric, len(baseLabels)+1) healthMetric[model.MetricNameLabel] = scrapeHealthMetricName durationMetric[model.MetricNameLabel] = scrapeDurationMetricName for ln, lv := range baseLabels { healthMetric[ln] = lv durationMetric[ln] = lv } ts := model.TimeFromUnixNano(timestamp.UnixNano()) healthSample := &model.Sample{ Metric: healthMetric, Timestamp: ts, Value: health.value(), } durationSample := &model.Sample{ Metric: durationMetric, Timestamp: ts, Value: model.SampleValue(float64(scrapeDuration) / float64(time.Second)), } sampleAppender.Append(healthSample) sampleAppender.Append(durationSample) } prometheus-0.16.2+ds/retrieval/target_test.go000066400000000000000000000400471265137125100212560ustar00rootroot00000000000000// Copyright 2013 The Prometheus Authors // Licensed under the Apache License, Version 2.0 (the "License"); // you may not use this file except in compliance with the License. // You may obtain a copy of the License at // // http://www.apache.org/licenses/LICENSE-2.0 // // Unless required by applicable law or agreed to in writing, software // distributed under the License is distributed on an "AS IS" BASIS, // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. // See the License for the specific language governing permissions and // limitations under the License. 
package retrieval import ( "crypto/tls" "crypto/x509" "errors" "fmt" "io/ioutil" "net/http" "net/http/httptest" "net/url" "reflect" "strings" "testing" "time" "github.com/prometheus/common/model" "github.com/prometheus/prometheus/config" ) func TestBaseLabels(t *testing.T) { target := newTestTarget("example.com:80", 0, model.LabelSet{"job": "some_job", "foo": "bar"}) want := model.LabelSet{ model.JobLabel: "some_job", model.InstanceLabel: "example.com:80", "foo": "bar", } got := target.BaseLabels() if !reflect.DeepEqual(want, got) { t.Errorf("want base labels %v, got %v", want, got) } } func TestOverwriteLabels(t *testing.T) { type test struct { metric string resultNormal model.Metric resultHonor model.Metric } var tests []test server := httptest.NewServer( http.HandlerFunc( func(w http.ResponseWriter, r *http.Request) { w.Header().Set("Content-Type", `text/plain; version=0.0.4`) for _, test := range tests { w.Write([]byte(test.metric)) w.Write([]byte(" 1\n")) } }, ), ) defer server.Close() addr := model.LabelValue(strings.Split(server.URL, "://")[1]) tests = []test{ { metric: `foo{}`, resultNormal: model.Metric{ model.MetricNameLabel: "foo", model.InstanceLabel: addr, }, resultHonor: model.Metric{ model.MetricNameLabel: "foo", model.InstanceLabel: addr, }, }, { metric: `foo{instance=""}`, resultNormal: model.Metric{ model.MetricNameLabel: "foo", model.InstanceLabel: addr, }, resultHonor: model.Metric{ model.MetricNameLabel: "foo", }, }, { metric: `foo{instance="other_instance"}`, resultNormal: model.Metric{ model.MetricNameLabel: "foo", model.InstanceLabel: addr, model.ExportedLabelPrefix + model.InstanceLabel: "other_instance", }, resultHonor: model.Metric{ model.MetricNameLabel: "foo", model.InstanceLabel: "other_instance", }, }, } target := newTestTarget(server.URL, time.Second, nil) target.honorLabels = false app := &collectResultAppender{} if err := target.scrape(app); err != nil { t.Fatal(err) } for i, test := range tests { if !reflect.DeepEqual(app.result[i].Metric, test.resultNormal) { t.Errorf("Error comparing %q:\nExpected:\n%s\nGot:\n%s\n", test.metric, test.resultNormal, app.result[i].Metric) } } target.honorLabels = true app = &collectResultAppender{} if err := target.scrape(app); err != nil { t.Fatal(err) } for i, test := range tests { if !reflect.DeepEqual(app.result[i].Metric, test.resultHonor) { t.Errorf("Error comparing %q:\nExpected:\n%s\nGot:\n%s\n", test.metric, test.resultHonor, app.result[i].Metric) } } } func TestTargetScrapeUpdatesState(t *testing.T) { testTarget := newTestTarget("bad schema", 0, nil) testTarget.scrape(nopAppender{}) if testTarget.status.Health() != HealthBad { t.Errorf("Expected target state %v, actual: %v", HealthBad, testTarget.status.Health()) } } func TestTargetScrapeWithFullChannel(t *testing.T) { server := httptest.NewServer( http.HandlerFunc( func(w http.ResponseWriter, r *http.Request) { w.Header().Set("Content-Type", `text/plain; version=0.0.4`) for i := 0; i < 2*ingestedSamplesCap; i++ { w.Write([]byte( fmt.Sprintf("test_metric_%d{foo=\"bar\"} 123.456\n", i), )) } }, ), ) defer server.Close() testTarget := newTestTarget(server.URL, time.Second, model.LabelSet{"dings": "bums"}) // Affects full channel but not HTTP fetch testTarget.deadline = 0 testTarget.scrape(slowAppender{}) if testTarget.status.Health() != HealthBad { t.Errorf("Expected target state %v, actual: %v", HealthBad, testTarget.status.Health()) } if testTarget.status.LastError() != errIngestChannelFull { t.Errorf("Expected target error %q, actual: %q", errIngestChannelFull, 
testTarget.status.LastError()) } } func TestTargetScrapeMetricRelabelConfigs(t *testing.T) { server := httptest.NewServer( http.HandlerFunc( func(w http.ResponseWriter, r *http.Request) { w.Header().Set("Content-Type", `text/plain; version=0.0.4`) w.Write([]byte("test_metric_drop 0\n")) w.Write([]byte("test_metric_relabel 1\n")) }, ), ) defer server.Close() testTarget := newTestTarget(server.URL, time.Second, model.LabelSet{}) testTarget.metricRelabelConfigs = []*config.RelabelConfig{ { SourceLabels: model.LabelNames{"__name__"}, Regex: config.MustNewRegexp(".*drop.*"), Action: config.RelabelDrop, }, { SourceLabels: model.LabelNames{"__name__"}, Regex: config.MustNewRegexp(".*(relabel|up).*"), TargetLabel: "foo", Replacement: "bar", Action: config.RelabelReplace, }, } appender := &collectResultAppender{} if err := testTarget.scrape(appender); err != nil { t.Fatal(err) } // Remove variables part of result. for _, sample := range appender.result { sample.Timestamp = 0 sample.Value = 0 } expected := []*model.Sample{ { Metric: model.Metric{ model.MetricNameLabel: "test_metric_relabel", "foo": "bar", model.InstanceLabel: model.LabelValue(testTarget.url.Host), }, Timestamp: 0, Value: 0, }, // The metrics about the scrape are not affected. { Metric: model.Metric{ model.MetricNameLabel: scrapeHealthMetricName, model.InstanceLabel: model.LabelValue(testTarget.url.Host), }, Timestamp: 0, Value: 0, }, { Metric: model.Metric{ model.MetricNameLabel: scrapeDurationMetricName, model.InstanceLabel: model.LabelValue(testTarget.url.Host), }, Timestamp: 0, Value: 0, }, } if !appender.result.Equal(expected) { t.Fatalf("Expected and actual samples not equal. Expected: %s, actual: %s", expected, appender.result) } } func TestTargetRecordScrapeHealth(t *testing.T) { testTarget := newTestTarget("example.url:80", 0, model.LabelSet{model.JobLabel: "testjob"}) now := model.Now() appender := &collectResultAppender{} testTarget.status.setLastError(nil) recordScrapeHealth(appender, now.Time(), testTarget.BaseLabels(), testTarget.status.Health(), 2*time.Second) result := appender.result if len(result) != 2 { t.Fatalf("Expected two samples, got %d", len(result)) } actual := result[0] expected := &model.Sample{ Metric: model.Metric{ model.MetricNameLabel: scrapeHealthMetricName, model.InstanceLabel: "example.url:80", model.JobLabel: "testjob", }, Timestamp: now, Value: 1, } if !actual.Equal(expected) { t.Fatalf("Expected and actual samples not equal. Expected: %v, actual: %v", expected, actual) } actual = result[1] expected = &model.Sample{ Metric: model.Metric{ model.MetricNameLabel: scrapeDurationMetricName, model.InstanceLabel: "example.url:80", model.JobLabel: "testjob", }, Timestamp: now, Value: 2.0, } if !actual.Equal(expected) { t.Fatalf("Expected and actual samples not equal. 
Expected: %v, actual: %v", expected, actual) } } func TestTargetScrapeTimeout(t *testing.T) { signal := make(chan bool, 1) server := httptest.NewServer( http.HandlerFunc( func(w http.ResponseWriter, r *http.Request) { <-signal w.Header().Set("Content-Type", `text/plain; version=0.0.4`) w.Write([]byte{}) }, ), ) defer server.Close() testTarget := newTestTarget(server.URL, 50*time.Millisecond, model.LabelSet{}) appender := nopAppender{} // scrape once without timeout signal <- true if err := testTarget.scrape(appender); err != nil { t.Fatal(err) } // let the deadline lapse time.Sleep(55 * time.Millisecond) // now scrape again signal <- true if err := testTarget.scrape(appender); err != nil { t.Fatal(err) } // now timeout if err := testTarget.scrape(appender); err == nil { t.Fatal("expected scrape to timeout") } else { signal <- true // let handler continue } // now scrape again without timeout signal <- true if err := testTarget.scrape(appender); err != nil { t.Fatal(err) } } func TestTargetScrape404(t *testing.T) { server := httptest.NewServer( http.HandlerFunc( func(w http.ResponseWriter, r *http.Request) { w.WriteHeader(http.StatusNotFound) }, ), ) defer server.Close() testTarget := newTestTarget(server.URL, time.Second, model.LabelSet{}) appender := nopAppender{} want := errors.New("server returned HTTP status 404 Not Found") got := testTarget.scrape(appender) if got == nil || want.Error() != got.Error() { t.Fatalf("want err %q, got %q", want, got) } } func TestTargetRunScraperScrapes(t *testing.T) { testTarget := newTestTarget("bad schema", 0, nil) go testTarget.RunScraper(nopAppender{}) // Enough time for a scrape to happen. time.Sleep(20 * time.Millisecond) if testTarget.status.LastScrape().IsZero() { t.Errorf("Scrape hasn't occured.") } testTarget.StopScraper() // Wait for it to take effect. time.Sleep(20 * time.Millisecond) last := testTarget.status.LastScrape() // Enough time for a scrape to happen. 
time.Sleep(20 * time.Millisecond) if testTarget.status.LastScrape() != last { t.Errorf("Scrape occured after it was stopped.") } } func BenchmarkScrape(b *testing.B) { server := httptest.NewServer( http.HandlerFunc( func(w http.ResponseWriter, r *http.Request) { w.Header().Set("Content-Type", `text/plain; version=0.0.4`) w.Write([]byte("test_metric{foo=\"bar\"} 123.456\n")) }, ), ) defer server.Close() testTarget := newTestTarget(server.URL, time.Second, model.LabelSet{"dings": "bums"}) appender := nopAppender{} b.ResetTimer() for i := 0; i < b.N; i++ { if err := testTarget.scrape(appender); err != nil { b.Fatal(err) } } } func TestURLParams(t *testing.T) { server := httptest.NewServer( http.HandlerFunc( func(w http.ResponseWriter, r *http.Request) { w.Header().Set("Content-Type", `text/plain; version=0.0.4`) w.Write([]byte{}) r.ParseForm() if r.Form["foo"][0] != "bar" { t.Fatalf("URL parameter 'foo' had unexpected first value '%v'", r.Form["foo"][0]) } if r.Form["foo"][1] != "baz" { t.Fatalf("URL parameter 'foo' had unexpected second value '%v'", r.Form["foo"][1]) } }, ), ) defer server.Close() serverURL, err := url.Parse(server.URL) if err != nil { t.Fatal(err) } target := NewTarget( &config.ScrapeConfig{ JobName: "test_job1", ScrapeInterval: config.Duration(1 * time.Minute), ScrapeTimeout: config.Duration(1 * time.Second), Scheme: serverURL.Scheme, Params: url.Values{ "foo": []string{"bar", "baz"}, }, }, model.LabelSet{ model.SchemeLabel: model.LabelValue(serverURL.Scheme), model.AddressLabel: model.LabelValue(serverURL.Host), "__param_foo": "bar", }, nil) app := &collectResultAppender{} if err = target.scrape(app); err != nil { t.Fatal(err) } } func newTestTarget(targetURL string, deadline time.Duration, baseLabels model.LabelSet) *Target { cfg := &config.ScrapeConfig{ ScrapeTimeout: config.Duration(deadline), } c, _ := newHTTPClient(cfg) t := &Target{ url: &url.URL{ Scheme: "http", Host: strings.TrimLeft(targetURL, "http://"), Path: "/metrics", }, deadline: deadline, status: &TargetStatus{}, scrapeInterval: 1 * time.Millisecond, httpClient: c, scraperStopping: make(chan struct{}), scraperStopped: make(chan struct{}), } t.baseLabels = model.LabelSet{ model.InstanceLabel: model.LabelValue(t.InstanceIdentifier()), } for baseLabel, baseValue := range baseLabels { t.baseLabels[baseLabel] = baseValue } return t } func TestNewHTTPBearerToken(t *testing.T) { server := httptest.NewServer( http.HandlerFunc( func(w http.ResponseWriter, r *http.Request) { expected := "Bearer 1234" received := r.Header.Get("Authorization") if expected != received { t.Fatalf("Authorization header was not set correctly: expected '%v', got '%v'", expected, received) } }, ), ) defer server.Close() cfg := &config.ScrapeConfig{ ScrapeTimeout: config.Duration(1 * time.Second), BearerToken: "1234", } c, err := newHTTPClient(cfg) if err != nil { t.Fatal(err) } _, err = c.Get(server.URL) if err != nil { t.Fatal(err) } } func TestNewHTTPBearerTokenFile(t *testing.T) { server := httptest.NewServer( http.HandlerFunc( func(w http.ResponseWriter, r *http.Request) { expected := "Bearer 12345" received := r.Header.Get("Authorization") if expected != received { t.Fatalf("Authorization header was not set correctly: expected '%v', got '%v'", expected, received) } }, ), ) defer server.Close() cfg := &config.ScrapeConfig{ ScrapeTimeout: config.Duration(1 * time.Second), BearerTokenFile: "testdata/bearertoken.txt", } c, err := newHTTPClient(cfg) if err != nil { t.Fatal(err) } _, err = c.Get(server.URL) if err != nil { t.Fatal(err) } } 
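// newHTTPClient (see target.go) composes authentication as layered
// RoundTrippers: the TLS settings form the base transport, a bearer-token
// round tripper is added when bearer_token or bearer_token_file is set, and
// basic auth wraps the result. Hedged sketch of an equivalent manual
// construction (token and credentials hypothetical):
//
//	var rt http.RoundTripper = &http.Transport{TLSClientConfig: tlsConfig}
//	rt = httputil.NewBearerAuthRoundTripper("1234", rt)        // sets Authorization: Bearer 1234
//	rt = httputil.NewBasicAuthRoundTripper("user", "pass", rt) // adds basic auth credentials
//	client := httputil.NewClient(rt)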
func TestNewHTTPBasicAuth(t *testing.T) { server := httptest.NewServer( http.HandlerFunc( func(w http.ResponseWriter, r *http.Request) { username, password, ok := r.BasicAuth() if !(ok && username == "user" && password == "password123") { t.Fatalf("Basic authorization header was not set correctly: expected '%v:%v', got '%v:%v'", "user", "password123", username, password) } }, ), ) defer server.Close() cfg := &config.ScrapeConfig{ ScrapeTimeout: config.Duration(1 * time.Second), BasicAuth: &config.BasicAuth{ Username: "user", Password: "password123", }, } c, err := newHTTPClient(cfg) if err != nil { t.Fatal(err) } _, err = c.Get(server.URL) if err != nil { t.Fatal(err) } } func TestNewHTTPCACert(t *testing.T) { server := httptest.NewUnstartedServer( http.HandlerFunc( func(w http.ResponseWriter, r *http.Request) { w.Header().Set("Content-Type", `text/plain; version=0.0.4`) w.Write([]byte{}) }, ), ) server.TLS = newTLSConfig(t) server.StartTLS() defer server.Close() cfg := &config.ScrapeConfig{ ScrapeTimeout: config.Duration(1 * time.Second), TLSConfig: config.TLSConfig{ CAFile: "testdata/ca.cer", }, } c, err := newHTTPClient(cfg) if err != nil { t.Fatal(err) } _, err = c.Get(server.URL) if err != nil { t.Fatal(err) } } func TestNewHTTPClientCert(t *testing.T) { server := httptest.NewUnstartedServer( http.HandlerFunc( func(w http.ResponseWriter, r *http.Request) { w.Header().Set("Content-Type", `text/plain; version=0.0.4`) w.Write([]byte{}) }, ), ) tlsConfig := newTLSConfig(t) tlsConfig.ClientAuth = tls.RequireAndVerifyClientCert tlsConfig.ClientCAs = tlsConfig.RootCAs tlsConfig.BuildNameToCertificate() server.TLS = tlsConfig server.StartTLS() defer server.Close() cfg := &config.ScrapeConfig{ ScrapeTimeout: config.Duration(1 * time.Second), TLSConfig: config.TLSConfig{ CAFile: "testdata/ca.cer", CertFile: "testdata/client.cer", KeyFile: "testdata/client.key", }, } c, err := newHTTPClient(cfg) if err != nil { t.Fatal(err) } _, err = c.Get(server.URL) if err != nil { t.Fatal(err) } } func newTLSConfig(t *testing.T) *tls.Config { tlsConfig := &tls.Config{} caCertPool := x509.NewCertPool() caCert, err := ioutil.ReadFile("testdata/ca.cer") if err != nil { t.Fatalf("Couldn't set up TLS server: %v", err) } caCertPool.AppendCertsFromPEM(caCert) tlsConfig.RootCAs = caCertPool tlsConfig.ServerName = "127.0.0.1" cert, err := tls.LoadX509KeyPair("testdata/server.cer", "testdata/server.key") if err != nil { t.Errorf("Unable to use specified server cert (%s) & key (%v): %s", "testdata/server.cer", "testdata/server.key", err) } tlsConfig.Certificates = []tls.Certificate{cert} tlsConfig.BuildNameToCertificate() return tlsConfig } prometheus-0.16.2+ds/retrieval/targetmanager.go000066400000000000000000000352141265137125100215520ustar00rootroot00000000000000// Copyright 2013 The Prometheus Authors // Licensed under the Apache License, Version 2.0 (the "License"); // you may not use this file except in compliance with the License. // You may obtain a copy of the License at // // http://www.apache.org/licenses/LICENSE-2.0 // // Unless required by applicable law or agreed to in writing, software // distributed under the License is distributed on an "AS IS" BASIS, // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. // See the License for the specific language governing permissions and // limitations under the License. 
package retrieval import ( "fmt" "strings" "sync" "github.com/prometheus/common/log" "github.com/prometheus/common/model" "github.com/prometheus/prometheus/config" "github.com/prometheus/prometheus/retrieval/discovery" "github.com/prometheus/prometheus/storage" ) // A TargetProvider provides information about target groups. It maintains a set // of sources from which TargetGroups can originate. Whenever a target provider // detects a potential change, it sends the TargetGroup through its provided channel. // // The TargetProvider does not have to guarantee that an actual change happened. // It does guarantee that it sends the new TargetGroup whenever a change happens. // // Sources() is guaranteed to be called exactly once before each call to Run(). // On a call to Run() implementing types must send a valid target group for each of // the sources they declared in the last call to Sources(). type TargetProvider interface { // Sources returns the source identifiers the provider is currently aware of. Sources() []string // Run hands a channel to the target provider through which it can send // updated target groups. The channel must be closed by the target provider // if no more updates will be sent. // On receiving from done Run must return. Run(up chan<- config.TargetGroup, done <-chan struct{}) } // TargetManager maintains a set of targets, starts and stops their scraping and // creates the new targets based on the target groups it receives from various // target providers. type TargetManager struct { mtx sync.RWMutex sampleAppender storage.SampleAppender running bool done chan struct{} // Targets by their source ID. targets map[string][]*Target // Providers by the scrape configs they are derived from. providers map[*config.ScrapeConfig][]TargetProvider } // NewTargetManager creates a new TargetManager. func NewTargetManager(sampleAppender storage.SampleAppender) *TargetManager { tm := &TargetManager{ sampleAppender: sampleAppender, targets: map[string][]*Target{}, } return tm } // merge multiple target group channels into a single output channel. func merge(done <-chan struct{}, cs ...<-chan targetGroupUpdate) <-chan targetGroupUpdate { var wg sync.WaitGroup out := make(chan targetGroupUpdate) // Start an output goroutine for each input channel in cs. output // copies values from c to out until c or done is closed, then calls // wg.Done. redir := func(c <-chan targetGroupUpdate) { defer wg.Done() for n := range c { select { case out <- n: case <-done: return } } } wg.Add(len(cs)) for _, c := range cs { go redir(c) } // Close the out channel if all inbound channels are closed. go func() { wg.Wait() close(out) }() return out } // targetGroupUpdate is a potentially changed/new target group // for the given scrape configuration. type targetGroupUpdate struct { tg config.TargetGroup scfg *config.ScrapeConfig } // Run starts background processing to handle target updates. func (tm *TargetManager) Run() { log.Info("Starting target manager...") tm.done = make(chan struct{}) sources := map[string]struct{}{} updates := []<-chan targetGroupUpdate{} for scfg, provs := range tm.providers { for _, prov := range provs { // Get an initial set of available sources so we don't remove // target groups from the last run that are still available. for _, src := range prov.Sources() { sources[src] = struct{}{} } tgc := make(chan config.TargetGroup) // Run the target provider after cleanup of the stale targets is done. 
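// Deferring the provider start means prov.Run only begins once this Run
// method returns, i.e. after removeTargets below has pruned groups whose
// sources disappeared, so a fast provider cannot race the cleanup.
// Pattern sketch (illustration only):
//
//	defer func() { go prov.Run(tgc, done) }() // fires after the cleanup at the end of Run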
defer func(prov TargetProvider, tgc chan<- config.TargetGroup, done <-chan struct{}) { go prov.Run(tgc, done) }(prov, tgc, tm.done) tgupc := make(chan targetGroupUpdate) updates = append(updates, tgupc) go func(scfg *config.ScrapeConfig, done <-chan struct{}) { defer close(tgupc) for { select { case tg := <-tgc: tgupc <- targetGroupUpdate{tg: tg, scfg: scfg} case <-done: return } } }(scfg, tm.done) } } // Merge all channels of incoming target group updates into a single // one and keep applying the updates. go tm.handleUpdates(merge(tm.done, updates...), tm.done) tm.mtx.Lock() defer tm.mtx.Unlock() // Remove old target groups that are no longer in the set of sources. tm.removeTargets(func(src string) bool { if _, ok := sources[src]; ok { return false } return true }) tm.running = true } // handleUpdates receives target group updates and handles them in the // context of the given job config. func (tm *TargetManager) handleUpdates(ch <-chan targetGroupUpdate, done <-chan struct{}) { for { select { case update, ok := <-ch: if !ok { return } log.Debugf("Received potential update for target group %q", update.tg.Source) if err := tm.updateTargetGroup(&update.tg, update.scfg); err != nil { log.Errorf("Error updating targets: %s", err) } case <-done: return } } } // Stop all background processing. func (tm *TargetManager) Stop() { tm.mtx.RLock() if tm.running { defer tm.stop(true) } // Return the lock before calling tm.stop(). defer tm.mtx.RUnlock() } // stop background processing of the target manager. If removeTargets is true, // existing targets will be stopped and removed. func (tm *TargetManager) stop(removeTargets bool) { log.Info("Stopping target manager...") defer log.Info("Target manager stopped.") close(tm.done) tm.mtx.Lock() defer tm.mtx.Unlock() if removeTargets { tm.removeTargets(nil) } tm.running = false } // removeTargets stops and removes targets for sources where f(source) is true // or if f is nil. This method is not thread-safe. func (tm *TargetManager) removeTargets(f func(string) bool) { if f == nil { f = func(string) bool { return true } } var wg sync.WaitGroup for src, targets := range tm.targets { if !f(src) { continue } wg.Add(len(targets)) for _, target := range targets { go func(t *Target) { t.StopScraper() wg.Done() }(target) } delete(tm.targets, src) } wg.Wait() } // updateTargetGroup creates new targets for the group and replaces the old targets // for the source ID. func (tm *TargetManager) updateTargetGroup(tgroup *config.TargetGroup, cfg *config.ScrapeConfig) error { newTargets, err := tm.targetsFromGroup(tgroup, cfg) if err != nil { return err } tm.mtx.Lock() defer tm.mtx.Unlock() if !tm.running { return nil } oldTargets, ok := tm.targets[tgroup.Source] if ok { var wg sync.WaitGroup // Replace the old targets with the new ones while keeping the state // of intersecting targets. for i, tnew := range newTargets { var match *Target for j, told := range oldTargets { if told == nil { continue } if tnew.InstanceIdentifier() == told.InstanceIdentifier() { match = told oldTargets[j] = nil break } } // Update the existing target and discard the new equivalent. // Otherwise start scraping the new target. if match != nil { // Updating is blocked during a scrape. We don't want those wait times // to build up. wg.Add(1) go func(t *Target) { match.Update(cfg, t.fullLabels(), t.metaLabels) wg.Done() }(tnew) newTargets[i] = match } else { go tnew.RunScraper(tm.sampleAppender) } } // Remove all old targets that disappeared. 
for _, told := range oldTargets { if told != nil { wg.Add(1) go func(t *Target) { t.StopScraper() wg.Done() }(told) } } wg.Wait() } else { // The source ID is new, start all target scrapers. for _, tnew := range newTargets { go tnew.RunScraper(tm.sampleAppender) } } if len(newTargets) > 0 { tm.targets[tgroup.Source] = newTargets } else { delete(tm.targets, tgroup.Source) } return nil } // Pools returns the targets currently being scraped bucketed by their job name. func (tm *TargetManager) Pools() map[string][]*Target { tm.mtx.RLock() defer tm.mtx.RUnlock() pools := map[string][]*Target{} for _, ts := range tm.targets { for _, t := range ts { job := string(t.BaseLabels()[model.JobLabel]) pools[job] = append(pools[job], t) } } return pools } // ApplyConfig resets the manager's target providers and job configurations as defined // by the new cfg. The state of targets that are valid in the new configuration remains unchanged. // Returns true on success. func (tm *TargetManager) ApplyConfig(cfg *config.Config) bool { tm.mtx.RLock() running := tm.running tm.mtx.RUnlock() if running { tm.stop(false) // Even if updating the config failed, we want to continue rather than stop scraping anything. defer tm.Run() } providers := map[*config.ScrapeConfig][]TargetProvider{} for _, scfg := range cfg.ScrapeConfigs { providers[scfg] = providersFromConfig(scfg) } tm.mtx.Lock() defer tm.mtx.Unlock() tm.providers = providers return true } // prefixedTargetProvider wraps TargetProvider and prefixes source strings // to make the sources unique across a configuration. type prefixedTargetProvider struct { TargetProvider job string mechanism string idx int } func (tp *prefixedTargetProvider) prefix(src string) string { return fmt.Sprintf("%s:%s:%d:%s", tp.job, tp.mechanism, tp.idx, src) } func (tp *prefixedTargetProvider) Sources() []string { srcs := tp.TargetProvider.Sources() for i, src := range srcs { srcs[i] = tp.prefix(src) } return srcs } func (tp *prefixedTargetProvider) Run(ch chan<- config.TargetGroup, done <-chan struct{}) { defer close(ch) ch2 := make(chan config.TargetGroup) go tp.TargetProvider.Run(ch2, done) for { select { case <-done: return case tg := <-ch2: tg.Source = tp.prefix(tg.Source) ch <- tg } } } // providersFromConfig returns all TargetProviders configured in cfg. 
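// For example (sketch; job name and discovery sections hypothetical), a
// scrape config with two file_sd_configs and one static target group yields
// three wrapped providers whose source IDs follow the
// "job:mechanism:index:source" scheme of prefixedTargetProvider:
//
//	cfg := &config.ScrapeConfig{JobName: "node" /* plus FileSDConfigs, TargetGroups */}
//	provs := providersFromConfig(cfg)
//	// source IDs like "node:file:0:<path>", "node:file:1:<path>", "node:static:0:0"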
func providersFromConfig(cfg *config.ScrapeConfig) []TargetProvider { var providers []TargetProvider app := func(mech string, i int, tp TargetProvider) { providers = append(providers, &prefixedTargetProvider{ job: cfg.JobName, mechanism: mech, idx: i, TargetProvider: tp, }) } for i, c := range cfg.DNSSDConfigs { app("dns", i, discovery.NewDNSDiscovery(c)) } for i, c := range cfg.FileSDConfigs { app("file", i, discovery.NewFileDiscovery(c)) } for i, c := range cfg.ConsulSDConfigs { k, err := discovery.NewConsulDiscovery(c) if err != nil { log.Errorf("Cannot create Consul discovery: %s", err) continue } app("consul", i, k) } for i, c := range cfg.MarathonSDConfigs { app("marathon", i, discovery.NewMarathonDiscovery(c)) } for i, c := range cfg.KubernetesSDConfigs { k, err := discovery.NewKubernetesDiscovery(c) if err != nil { log.Errorf("Cannot create Kubernetes discovery: %s", err) continue } app("kubernetes", i, k) } for i, c := range cfg.ServersetSDConfigs { app("serverset", i, discovery.NewServersetDiscovery(c)) } for i, c := range cfg.EC2SDConfigs { app("ec2", i, discovery.NewEC2Discovery(c)) } if len(cfg.TargetGroups) > 0 { app("static", 0, NewStaticProvider(cfg.TargetGroups)) } return providers } // targetsFromGroup builds targets based on the given TargetGroup and config. func (tm *TargetManager) targetsFromGroup(tg *config.TargetGroup, cfg *config.ScrapeConfig) ([]*Target, error) { tm.mtx.RLock() defer tm.mtx.RUnlock() targets := make([]*Target, 0, len(tg.Targets)) for i, labels := range tg.Targets { addr := string(labels[model.AddressLabel]) // If no port was provided, infer it based on the used scheme. if !strings.Contains(addr, ":") { switch cfg.Scheme { case "http": addr = fmt.Sprintf("%s:80", addr) case "https": addr = fmt.Sprintf("%s:443", addr) default: panic(fmt.Errorf("targetsFromGroup: invalid scheme %q", cfg.Scheme)) } labels[model.AddressLabel] = model.LabelValue(addr) } for k, v := range cfg.Params { if len(v) > 0 { labels[model.LabelName(model.ParamLabelPrefix+k)] = model.LabelValue(v[0]) } } // Copy labels into the labelset for the target if they are not // set already. Apply the labelsets in order of decreasing precedence. labelsets := []model.LabelSet{ tg.Labels, { model.SchemeLabel: model.LabelValue(cfg.Scheme), model.MetricsPathLabel: model.LabelValue(cfg.MetricsPath), model.JobLabel: model.LabelValue(cfg.JobName), }, } for _, lset := range labelsets { for ln, lv := range lset { if _, ok := labels[ln]; !ok { labels[ln] = lv } } } if _, ok := labels[model.AddressLabel]; !ok { return nil, fmt.Errorf("instance %d in target group %s has no address", i, tg) } preRelabelLabels := labels labels, err := Relabel(labels, cfg.RelabelConfigs...) if err != nil { return nil, fmt.Errorf("error while relabeling instance %d in target group %s: %s", i, tg, err) } // Check if the target was dropped. if labels == nil { continue } if err = config.CheckTargetAddress(labels[model.AddressLabel]); err != nil { return nil, err } for ln := range labels { // Meta labels are deleted after relabelling. Other internal labels propagate to // the target which decides whether they will be part of their label set. if strings.HasPrefix(string(ln), model.MetaLabelPrefix) { delete(labels, ln) } } tr := NewTarget(cfg, labels, preRelabelLabels) targets = append(targets, tr) } return targets, nil } // StaticProvider holds a list of target groups that never change. 
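// Usage sketch (illustration; the target address is hypothetical):
//
//	groups := []*config.TargetGroup{
//		{Targets: []model.LabelSet{{model.AddressLabel: "localhost:9090"}}},
//	}
//	sp := NewStaticProvider(groups) // numbers the group sources "0", "1", ...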
type StaticProvider struct { TargetGroups []*config.TargetGroup } // NewStaticProvider returns a StaticProvider configured with the given // target groups. func NewStaticProvider(groups []*config.TargetGroup) *StaticProvider { for i, tg := range groups { tg.Source = fmt.Sprintf("%d", i) } return &StaticProvider{ TargetGroups: groups, } } // Run implements the TargetProvider interface. func (sd *StaticProvider) Run(ch chan<- config.TargetGroup, done <-chan struct{}) { defer close(ch) for _, tg := range sd.TargetGroups { select { case <-done: return case ch <- *tg: } } <-done } // Sources returns the provider's sources. func (sd *StaticProvider) Sources() (srcs []string) { for _, tg := range sd.TargetGroups { srcs = append(srcs, tg.Source) } return srcs } prometheus-0.16.2+ds/retrieval/targetmanager_test.go000066400000000000000000000331051265137125100226060ustar00rootroot00000000000000// Copyright 2013 The Prometheus Authors // Licensed under the Apache License, Version 2.0 (the "License"); // you may not use this file except in compliance with the License. // You may obtain a copy of the License at // // http://www.apache.org/licenses/LICENSE-2.0 // // Unless required by applicable law or agreed to in writing, software // distributed under the License is distributed on an "AS IS" BASIS, // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. // See the License for the specific language governing permissions and // limitations under the License. package retrieval import ( "net/url" "reflect" "testing" "time" "github.com/prometheus/common/model" "github.com/prometheus/prometheus/config" ) func TestPrefixedTargetProvider(t *testing.T) { targetGroups := []*config.TargetGroup{ { Targets: []model.LabelSet{ {model.AddressLabel: "test-1:1234"}, }, }, { Targets: []model.LabelSet{ {model.AddressLabel: "test-1:1235"}, }, }, } tp := &prefixedTargetProvider{ job: "job-x", mechanism: "static", idx: 123, TargetProvider: NewStaticProvider(targetGroups), } expSources := []string{ "job-x:static:123:0", "job-x:static:123:1", } if !reflect.DeepEqual(tp.Sources(), expSources) { t.Fatalf("expected sources %v, got %v", expSources, tp.Sources()) } ch := make(chan config.TargetGroup) done := make(chan struct{}) defer close(done) go tp.Run(ch, done) expGroup1 := *targetGroups[0] expGroup2 := *targetGroups[1] expGroup1.Source = "job-x:static:123:0" expGroup2.Source = "job-x:static:123:1" // The static target provider sends on the channel once per target group. 
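// Through the prefixedTargetProvider wrapper the fixture's groups are
// expected to arrive with rewritten sources, in declaration order
// (sketch of the receive sequence checked below):
//
//	<-ch // Source: "job-x:static:123:0"
//	<-ch // Source: "job-x:static:123:1"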
if tg := <-ch; !reflect.DeepEqual(tg, expGroup1) { t.Fatalf("expected target group %v, got %v", expGroup1, tg) } if tg := <-ch; !reflect.DeepEqual(tg, expGroup2) { t.Fatalf("expected target group %v, got %v", expGroup2, tg) } } func TestTargetManagerChan(t *testing.T) { testJob1 := &config.ScrapeConfig{ JobName: "test_job1", ScrapeInterval: config.Duration(1 * time.Minute), TargetGroups: []*config.TargetGroup{{ Targets: []model.LabelSet{ {model.AddressLabel: "example.org:80"}, {model.AddressLabel: "example.com:80"}, }, }}, } prov1 := &fakeTargetProvider{ sources: []string{"src1", "src2"}, update: make(chan *config.TargetGroup), } targetManager := &TargetManager{ sampleAppender: nopAppender{}, providers: map[*config.ScrapeConfig][]TargetProvider{ testJob1: {prov1}, }, targets: make(map[string][]*Target), } go targetManager.Run() defer targetManager.Stop() sequence := []struct { tgroup *config.TargetGroup expected map[string][]model.LabelSet }{ { tgroup: &config.TargetGroup{ Source: "src1", Targets: []model.LabelSet{ {model.AddressLabel: "test-1:1234"}, {model.AddressLabel: "test-2:1234", "label": "set"}, {model.AddressLabel: "test-3:1234"}, }, }, expected: map[string][]model.LabelSet{ "src1": { {model.JobLabel: "test_job1", model.InstanceLabel: "test-1:1234"}, {model.JobLabel: "test_job1", model.InstanceLabel: "test-2:1234", "label": "set"}, {model.JobLabel: "test_job1", model.InstanceLabel: "test-3:1234"}, }, }, }, { tgroup: &config.TargetGroup{ Source: "src2", Targets: []model.LabelSet{ {model.AddressLabel: "test-1:1235"}, {model.AddressLabel: "test-2:1235"}, {model.AddressLabel: "test-3:1235"}, }, Labels: model.LabelSet{"group": "label"}, }, expected: map[string][]model.LabelSet{ "src1": { {model.JobLabel: "test_job1", model.InstanceLabel: "test-1:1234"}, {model.JobLabel: "test_job1", model.InstanceLabel: "test-2:1234", "label": "set"}, {model.JobLabel: "test_job1", model.InstanceLabel: "test-3:1234"}, }, "src2": { {model.JobLabel: "test_job1", model.InstanceLabel: "test-1:1235", "group": "label"}, {model.JobLabel: "test_job1", model.InstanceLabel: "test-2:1235", "group": "label"}, {model.JobLabel: "test_job1", model.InstanceLabel: "test-3:1235", "group": "label"}, }, }, }, { tgroup: &config.TargetGroup{ Source: "src2", Targets: []model.LabelSet{}, }, expected: map[string][]model.LabelSet{ "src1": { {model.JobLabel: "test_job1", model.InstanceLabel: "test-1:1234"}, {model.JobLabel: "test_job1", model.InstanceLabel: "test-2:1234", "label": "set"}, {model.JobLabel: "test_job1", model.InstanceLabel: "test-3:1234"}, }, }, }, { tgroup: &config.TargetGroup{ Source: "src1", Targets: []model.LabelSet{ {model.AddressLabel: "test-1:1234", "added": "label"}, {model.AddressLabel: "test-3:1234"}, {model.AddressLabel: "test-4:1234", "fancy": "label"}, }, }, expected: map[string][]model.LabelSet{ "src1": { {model.JobLabel: "test_job1", model.InstanceLabel: "test-1:1234", "added": "label"}, {model.JobLabel: "test_job1", model.InstanceLabel: "test-3:1234"}, {model.JobLabel: "test_job1", model.InstanceLabel: "test-4:1234", "fancy": "label"}, }, }, }, } for i, step := range sequence { prov1.update <- step.tgroup time.Sleep(20 * time.Millisecond) if len(targetManager.targets) != len(step.expected) { t.Fatalf("step %d: sources mismatch %v, %v", i, targetManager.targets, step.expected) } for source, actTargets := range targetManager.targets { expTargets, ok := step.expected[source] if !ok { t.Fatalf("step %d: unexpected source %q: %v", i, source, actTargets) } for _, expt := range expTargets { found := false 
for _, actt := range actTargets { if reflect.DeepEqual(expt, actt.BaseLabels()) { found = true break } } if !found { t.Errorf("step %d: expected target %v not found in actual targets", i, expt) } } } } } func TestTargetManagerConfigUpdate(t *testing.T) { testJob1 := &config.ScrapeConfig{ JobName: "test_job1", ScrapeInterval: config.Duration(1 * time.Minute), Params: url.Values{ "testParam": []string{"paramValue", "secondValue"}, }, TargetGroups: []*config.TargetGroup{{ Targets: []model.LabelSet{ {model.AddressLabel: "example.org:80"}, {model.AddressLabel: "example.com:80"}, }, }}, RelabelConfigs: []*config.RelabelConfig{ { // Copy out the URL parameter. SourceLabels: model.LabelNames{"__param_testParam"}, Regex: config.MustNewRegexp("(.*)"), TargetLabel: "testParam", Replacement: "$1", Action: config.RelabelReplace, }, }, } testJob2 := &config.ScrapeConfig{ JobName: "test_job2", ScrapeInterval: config.Duration(1 * time.Minute), TargetGroups: []*config.TargetGroup{ { Targets: []model.LabelSet{ {model.AddressLabel: "example.org:8080"}, {model.AddressLabel: "example.com:8081"}, }, Labels: model.LabelSet{ "foo": "bar", "boom": "box", }, }, { Targets: []model.LabelSet{ {model.AddressLabel: "test.com:1234"}, }, }, { Targets: []model.LabelSet{ {model.AddressLabel: "test.com:1235"}, }, Labels: model.LabelSet{"instance": "fixed"}, }, }, RelabelConfigs: []*config.RelabelConfig{ { SourceLabels: model.LabelNames{model.AddressLabel}, Regex: config.MustNewRegexp(`test\.(.*?):(.*)`), Replacement: "foo.${1}:${2}", TargetLabel: model.AddressLabel, Action: config.RelabelReplace, }, { // Add a new label for example.* targets. SourceLabels: model.LabelNames{model.AddressLabel, "boom", "foo"}, Regex: config.MustNewRegexp("example.*?-b([a-z-]+)r"), TargetLabel: "new", Replacement: "$1", Separator: "-", Action: config.RelabelReplace, }, { // Drop an existing label. SourceLabels: model.LabelNames{"boom"}, Regex: config.MustNewRegexp(".*"), TargetLabel: "boom", Replacement: "", Action: config.RelabelReplace, }, }, } // Test that targets without host:port addresses are dropped. 
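// The relabel rule in testJob3 rewrites __address__ to "http://$1", so the
// resulting address "http://example.net:80" is no longer a plain host:port,
// config.CheckTargetAddress rejects it, and the whole group is discarded;
// hence the empty expectation for this job (reasoning sketch):
//
//	"example.net:80" -> __address__ = "http://example.net:80" // fails CheckTargetAddress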
testJob3 := &config.ScrapeConfig{ JobName: "test_job1", ScrapeInterval: config.Duration(1 * time.Minute), TargetGroups: []*config.TargetGroup{{ Targets: []model.LabelSet{ {model.AddressLabel: "example.net:80"}, }, }}, RelabelConfigs: []*config.RelabelConfig{ { SourceLabels: model.LabelNames{model.AddressLabel}, Regex: config.MustNewRegexp("(.*)"), TargetLabel: "__address__", Replacement: "http://$1", Action: config.RelabelReplace, }, }, } sequence := []struct { scrapeConfigs []*config.ScrapeConfig expected map[string][]model.LabelSet }{ { scrapeConfigs: []*config.ScrapeConfig{testJob1}, expected: map[string][]model.LabelSet{ "test_job1:static:0:0": { {model.JobLabel: "test_job1", model.InstanceLabel: "example.org:80", "testParam": "paramValue", model.SchemeLabel: "", model.MetricsPathLabel: "", model.AddressLabel: "example.org:80", model.ParamLabelPrefix + "testParam": "paramValue"}, {model.JobLabel: "test_job1", model.InstanceLabel: "example.com:80", "testParam": "paramValue", model.SchemeLabel: "", model.MetricsPathLabel: "", model.AddressLabel: "example.com:80", model.ParamLabelPrefix + "testParam": "paramValue"}, }, }, }, { scrapeConfigs: []*config.ScrapeConfig{testJob1}, expected: map[string][]model.LabelSet{ "test_job1:static:0:0": { {model.JobLabel: "test_job1", model.InstanceLabel: "example.org:80", "testParam": "paramValue", model.SchemeLabel: "", model.MetricsPathLabel: "", model.AddressLabel: "example.org:80", model.ParamLabelPrefix + "testParam": "paramValue"}, {model.JobLabel: "test_job1", model.InstanceLabel: "example.com:80", "testParam": "paramValue", model.SchemeLabel: "", model.MetricsPathLabel: "", model.AddressLabel: "example.com:80", model.ParamLabelPrefix + "testParam": "paramValue"}, }, }, }, { scrapeConfigs: []*config.ScrapeConfig{testJob1, testJob2}, expected: map[string][]model.LabelSet{ "test_job1:static:0:0": { {model.JobLabel: "test_job1", model.InstanceLabel: "example.org:80", "testParam": "paramValue", model.SchemeLabel: "", model.MetricsPathLabel: "", model.AddressLabel: "example.org:80", model.ParamLabelPrefix + "testParam": "paramValue"}, {model.JobLabel: "test_job1", model.InstanceLabel: "example.com:80", "testParam": "paramValue", model.SchemeLabel: "", model.MetricsPathLabel: "", model.AddressLabel: "example.com:80", model.ParamLabelPrefix + "testParam": "paramValue"}, }, "test_job2:static:0:0": { {model.JobLabel: "test_job2", model.InstanceLabel: "example.org:8080", "foo": "bar", "new": "ox-ba", model.SchemeLabel: "", model.MetricsPathLabel: "", model.AddressLabel: "example.org:8080"}, {model.JobLabel: "test_job2", model.InstanceLabel: "example.com:8081", "foo": "bar", "new": "ox-ba", model.SchemeLabel: "", model.MetricsPathLabel: "", model.AddressLabel: "example.com:8081"}, }, "test_job2:static:0:1": { {model.JobLabel: "test_job2", model.InstanceLabel: "foo.com:1234", model.SchemeLabel: "", model.MetricsPathLabel: "", model.AddressLabel: "foo.com:1234"}, }, "test_job2:static:0:2": { {model.JobLabel: "test_job2", model.InstanceLabel: "fixed", model.SchemeLabel: "", model.MetricsPathLabel: "", model.AddressLabel: "foo.com:1235"}, }, }, }, { scrapeConfigs: []*config.ScrapeConfig{}, expected: map[string][]model.LabelSet{}, }, { scrapeConfigs: []*config.ScrapeConfig{testJob2}, expected: map[string][]model.LabelSet{ "test_job2:static:0:0": { {model.JobLabel: "test_job2", model.InstanceLabel: "example.org:8080", "foo": "bar", "new": "ox-ba", model.SchemeLabel: "", model.MetricsPathLabel: "", model.AddressLabel: "example.org:8080"}, {model.JobLabel: 
"test_job2", model.InstanceLabel: "example.com:8081", "foo": "bar", "new": "ox-ba", model.SchemeLabel: "", model.MetricsPathLabel: "", model.AddressLabel: "example.com:8081"}, }, "test_job2:static:0:1": { {model.JobLabel: "test_job2", model.InstanceLabel: "foo.com:1234", model.SchemeLabel: "", model.MetricsPathLabel: "", model.AddressLabel: "foo.com:1234"}, }, "test_job2:static:0:2": { {model.JobLabel: "test_job2", model.InstanceLabel: "fixed", model.SchemeLabel: "", model.MetricsPathLabel: "", model.AddressLabel: "foo.com:1235"}, }, }, }, { scrapeConfigs: []*config.ScrapeConfig{testJob3}, expected: map[string][]model.LabelSet{}, }, } conf := &config.Config{} *conf = config.DefaultConfig targetManager := NewTargetManager(nopAppender{}) targetManager.ApplyConfig(conf) targetManager.Run() defer targetManager.Stop() for i, step := range sequence { conf.ScrapeConfigs = step.scrapeConfigs targetManager.ApplyConfig(conf) time.Sleep(50 * time.Millisecond) if len(targetManager.targets) != len(step.expected) { t.Fatalf("step %d: sources mismatch: expected %v, got %v", i, step.expected, targetManager.targets) } for source, actTargets := range targetManager.targets { expTargets, ok := step.expected[source] if !ok { t.Fatalf("step %d: unexpected source %q: %v", i, source, actTargets) } for _, expt := range expTargets { found := false for _, actt := range actTargets { if reflect.DeepEqual(expt, actt.fullLabels()) { found = true break } } if !found { t.Errorf("step %d: expected target %v for %q not found in actual targets", i, expt, source) } } } } } func TestHandleUpdatesReturnsWhenUpdateChanIsClosed(t *testing.T) { tm := NewTargetManager(nopAppender{}) ch := make(chan targetGroupUpdate) close(ch) tm.handleUpdates(ch, make(chan struct{})) } prometheus-0.16.2+ds/retrieval/testdata/000077500000000000000000000000001265137125100202065ustar00rootroot00000000000000prometheus-0.16.2+ds/retrieval/testdata/bearertoken.txt000066400000000000000000000000061265137125100232440ustar00rootroot0000000000000012345 prometheus-0.16.2+ds/retrieval/testdata/ca.cer000066400000000000000000000024221265137125100212640ustar00rootroot00000000000000-----BEGIN CERTIFICATE----- MIIDkTCCAnmgAwIBAgIJAJNsnimNN3tmMA0GCSqGSIb3DQEBCwUAMF8xCzAJBgNV BAYTAlhYMRUwEwYDVQQHDAxEZWZhdWx0IENpdHkxHDAaBgNVBAoME0RlZmF1bHQg Q29tcGFueSBMdGQxGzAZBgNVBAMMElByb21ldGhldXMgVGVzdCBDQTAeFw0xNTA4 MDQxNDA5MjFaFw0yNTA4MDExNDA5MjFaMF8xCzAJBgNVBAYTAlhYMRUwEwYDVQQH DAxEZWZhdWx0IENpdHkxHDAaBgNVBAoME0RlZmF1bHQgQ29tcGFueSBMdGQxGzAZ BgNVBAMMElByb21ldGhldXMgVGVzdCBDQTCCASIwDQYJKoZIhvcNAQEBBQADggEP ADCCAQoCggEBAOlSBU3yWpUELbhzizznR0hnAL7dbEHzfEtEc6N3PoSvMNcqrUVq t4kjBRWzqkZ5uJVkzBPERKEBoOI9pWcrqtMTBkMzHJY2Ep7GHTab10e9KC2IFQT6 FKP/jCYixaIVx3azEfajRJooD8r79FGoagWUfHdHyCFWJb/iLt8z8+S91kelSRMS yB9M1ypWomzBz1UFXZp1oiNO5o7/dgXW4MgLUfC2obJ9j5xqpc6GkhWMW4ZFwEr/ VLjuzxG9B8tLfQuhnXKGn1W8+WzZVWCWMD/sLfZfmjKaWlwcXzL51g8E+IEIBJqV w51aMI6lDkcvAM7gLq1auLZMVXyKWSKw7XMCAwEAAaNQME4wHQYDVR0OBBYEFMz1 BZnlqxJp2HiJSjHK8IsLrWYbMB8GA1UdIwQYMBaAFMz1BZnlqxJp2HiJSjHK8IsL rWYbMAwGA1UdEwQFMAMBAf8wDQYJKoZIhvcNAQELBQADggEBAI2iA3w3TK5J15Pu e4fPFB4jxQqsbUwuyXbCCv/jKLeFNCD4BjM181WZEYjPMumeTBVzU3aF45LWQIG1 0DJcrCL4mjMz9qgAoGqA7aDDXiJGbukMgYYsn7vrnVmrZH8T3E8ySlltr7+W578k pJ5FxnbCroQwn0zLyVB3sFbS8E3vpBr3L8oy8PwPHhIScexcNVc3V6/m4vTZsXTH U+vUm1XhDgpDcFMTg2QQiJbfpOYUkwIgnRDAT7t282t2KQWtnlqc3zwPQ1F/6Cpx j19JeNsaF1DArkD7YlyKj/GhZLtHwFHG5cxznH0mLDJTW7bQvqqh2iQTeXmBk1lU mM5lH/s= -----END CERTIFICATE----- 
prometheus-0.16.2+ds/retrieval/testdata/ca.key000066400000000000000000000032171265137125100213060ustar00rootroot00000000000000-----BEGIN RSA PRIVATE KEY----- MIIEpgIBAAKCAQEA6VIFTfJalQQtuHOLPOdHSGcAvt1sQfN8S0Rzo3c+hK8w1yqt RWq3iSMFFbOqRnm4lWTME8REoQGg4j2lZyuq0xMGQzMcljYSnsYdNpvXR70oLYgV BPoUo/+MJiLFohXHdrMR9qNEmigPyvv0UahqBZR8d0fIIVYlv+Iu3zPz5L3WR6VJ ExLIH0zXKlaibMHPVQVdmnWiI07mjv92BdbgyAtR8Lahsn2PnGqlzoaSFYxbhkXA Sv9UuO7PEb0Hy0t9C6GdcoafVbz5bNlVYJYwP+wt9l+aMppaXBxfMvnWDwT4gQgE mpXDnVowjqUORy8AzuAurVq4tkxVfIpZIrDtcwIDAQABAoIBAQCcVDd3pYWpyLX1 m31UnkX1rgYi3Gs3uTOznra4dSIvds6LrG2SUFGPEibLBql1NQNHHdVa/StakaPB UrqraOe5K0sL5Ygm4S4Ssf1K5JoW2Be+gipLPmBsDcJSnwO6eUs/LfZAQd6qR2Nl hvGJcQUwne/TYAYox/bdHWh4Zu/odz4NrZKZLbnXkdLLDEhZbjA0HpwJZ7NpMcB7 Z6NayOm5dAZncfqBjY+3GNL0VjvDjwwYbESM8GkAbojMgcpODGk0h9arRWCP2RqT SVgmiFI2mVT7sW1XLdVXmyCL2jzak7sktpbLbVgngwOrBmLO/m4NBftzcZrgvxj3 YakCPH/hAoGBAP1v85pIxqWr5dFdRlOW0MG35ifL+YXpavcs233jGDHYNZefrR5q Mw8eA20zwj41OdryqGh58nLYm3zYM0vPFrRJrzWYQfcWDmQELAylr9z9vsMj8gRq IZQD6wzFmLi1PN2QDmovF+2y/CLAq03XK6FQlNsVQxubfjh4hcX5+nXDAoGBAOut /pQaIBbIhaI8y3KjpizU24jxIkV8R/q1yE5V01YCl2OC5hEd4iZP14YLDRXLSHKT e/dyJ/OEyTIzUeDg0ZF3ao9ugbWuASgrnrrdPEooi7C9n9PeaLFTK5oVZoVP2A7E BwhSFW3VdEzQkdJczVE2jOY6JdBKMndjoDQnhT6RAoGBAL4WMO1gdnYeZ0JQJoZd kPgrOZpR2DaDa3I3F+3k3enM0+2EmzE70E4fYcyPTLqh62H4LS4ngRx4sK7D7j2G 9u2EcsDNEXUE+wgzROK7hxtGysTMeiKrg8Hj6nFq53Bqp1s7SESGS/lCDPD398Rr hdL5gJyN5waW6uXqJ9Pk+eFHAoGBAKV/YGcV1XTKSPT9ZgxRmM6ghq0qT1umA1Gt t0QzBp2+Yhqx/+cDKhynMnxhZEXqoyw6HvJLSny5wSMsYJHeratNxRmFizZOQ2e3 AdbMppqY0EdDUWnRI4lqExM3de+let4bj6irI3smSm3qhIvJOTCPcu/04zrZ74hh AE2/dtTRAoGBAO6bENEqLgxZOvX5NnbytTuuoEnbceUPiIvc6S/nWJPEoGXVN2EJ a3OaIOQmknE6bjXIWrHTaXJhwejvPUz9DVa4GxU5aJhs4gpocVGf+owQFvk4nJO8 JL+QVVdXp3XdrXIGyvXJfy0fXXgJg5czrnDHjSTE8/2POtyuZ6VyBtQc -----END RSA PRIVATE KEY----- prometheus-0.16.2+ds/retrieval/testdata/client.cer000066400000000000000000000030051265137125100221550ustar00rootroot00000000000000-----BEGIN CERTIFICATE----- MIIERjCCAy6gAwIBAgIBZDANBgkqhkiG9w0BAQUFADBfMQswCQYDVQQGEwJYWDEV MBMGA1UEBwwMRGVmYXVsdCBDaXR5MRwwGgYDVQQKDBNEZWZhdWx0IENvbXBhbnkg THRkMRswGQYDVQQDDBJQcm9tZXRoZXVzIFRlc3QgQ0EwHhcNMTUwODA0MTQ0MTE2 WhcNNDIxMjIwMTQ0MTE2WjBVMQswCQYDVQQGEwJYWDEVMBMGA1UEBwwMRGVmYXVs dCBDaXR5MRwwGgYDVQQKDBNEZWZhdWx0IENvbXBhbnkgTHRkMREwDwYDVQQDDAh0 ZXN0dXNlcjCCAiIwDQYJKoZIhvcNAQEBBQADggIPADCCAgoCggIBAOKBBXx35X9+ BLGqY/cC2+lQYZzn13Z8ZEDrUKpv5n91QA0B/YZE3gDSnk2yry8dxmp1NJtXm8Wr rIQSBnsTGOKwyIwR1gcggUYPD9fCyy7T7y7YbzBG8drEcxiK/YIWyio0fpRCfT9b 2+fOEeY+0+tgFV++XjbXVzXRCBMmsZ22cOm4t2t7GHKBZhYoUoPgKjDn+4t/rr0r 1od6yVOocYCo6RruQHsWPHj6QlU8VGutkD7PpvLS+w2l/6JqmZDHlY6o6pDidC8a kp8i/t3pNBlexk6st/8YZ5S9j6LjqC6bUnerUZB40b6L8OXXwWS3S5y6t07A1QIn Pv2DZKGbn8Uuj7RvS5OAZdDn1P+M5aVlRLoYbdTHJILrLg+bxyDIokqONbLgj78A FT6a013eJAZJBkeoaN7Djbf/d5FjRDadH2bX0Uur3APh4cbv+0Fo13CPPSckA9EU o42qBmKLWys858D8vRKyS/mq/IeRL0AIwKuaEIJtPtiwCTnk6PvFfQvO80z/Eyq+ uvRBoZbrWHb+3GR8rNzu8Gc1UbTC+jnGYtbQhxx1/7nae52XGRpplnwPO9cb+px2 Zf802h+lP3SMY/XS+nyTAp/jcy/jOAwrZKY4rgz+5ZmKCI61NZ0iovaK7Jqo9qTM iSjykZCamFhm4pg8itECD5FhnUetJ6axAgMBAAGjFzAVMBMGA1UdJQQMMAoGCCsG AQUFBwMCMA0GCSqGSIb3DQEBBQUAA4IBAQDEQyFfY9WAkdnzb+vIlICgfkydceYx KVJZ2WRMvrn2ZoRoSaK3CfGlz4nrCOgDjQxfX8OpKzudr/ghuBQCbDHHzxRrOen5 0Zig9Q+pxTZNrtds/SwX2dHJ7PVEwGxXXaKl8S19bNEdO0syFrRJU6I50ZbeEkJe RI9IEFvBHcuG/GnEfqWj2ozI/+VhIOb4cTItg67ClmIPe8lteT2wj+/aydF9PKgF QhooCe/G1nok1uiaGjo1HzFEn4HzI3s4mrolc8PpBBVsS+HckCOrHpRPWnYuCFEm 0yzS6tGaMrnITywwB2/uJ2aBAZIx2Go1zFhPf0YvFJc3e2x8cAuqBRLu -----END CERTIFICATE----- 
prometheus-0.16.2+ds/retrieval/testdata/client.key000066400000000000000000000062531265137125100222040ustar00rootroot00000000000000-----BEGIN RSA PRIVATE KEY----- MIIJKAIBAAKCAgEA4oEFfHflf34Esapj9wLb6VBhnOfXdnxkQOtQqm/mf3VADQH9 hkTeANKeTbKvLx3GanU0m1ebxaushBIGexMY4rDIjBHWByCBRg8P18LLLtPvLthv MEbx2sRzGIr9ghbKKjR+lEJ9P1vb584R5j7T62AVX75eNtdXNdEIEyaxnbZw6bi3 a3sYcoFmFihSg+AqMOf7i3+uvSvWh3rJU6hxgKjpGu5AexY8ePpCVTxUa62QPs+m 8tL7DaX/omqZkMeVjqjqkOJ0LxqSnyL+3ek0GV7GTqy3/xhnlL2PouOoLptSd6tR kHjRvovw5dfBZLdLnLq3TsDVAic+/YNkoZufxS6PtG9Lk4Bl0OfU/4zlpWVEuhht 1MckgusuD5vHIMiiSo41suCPvwAVPprTXd4kBkkGR6ho3sONt/93kWNENp0fZtfR S6vcA+Hhxu/7QWjXcI89JyQD0RSjjaoGYotbKzznwPy9ErJL+ar8h5EvQAjAq5oQ gm0+2LAJOeTo+8V9C87zTP8TKr669EGhlutYdv7cZHys3O7wZzVRtML6OcZi1tCH HHX/udp7nZcZGmmWfA871xv6nHZl/zTaH6U/dIxj9dL6fJMCn+NzL+M4DCtkpjiu DP7lmYoIjrU1nSKi9orsmqj2pMyJKPKRkJqYWGbimDyK0QIPkWGdR60nprECAwEA AQKCAgEA18az1ERf9Fm33Q0GmE039IdnxlMy9qQ/2XyS5xsdCXVIZFvuClhW6Y+7 0ScVLpx95fLr/8SxF9mYymRlmh+ySFrDYnSnYTi9DmHQ5OmkKGMr64OyQNqFErSt NMdMA/7z7sr9fv3sVUyMLMMqWB6oQgXRttki5bm1UgZlW+EzuZwQ6wbWbWTiAEt3 VkppeUo2x0poXxdu/rXhdEUrwC+qmTfQgaBQ+zFOwK0gPhTwE3hP/xZQ4+jL08+8 vRwyWTNZLYOLmiSxLCJzZXiwNfUwda7M2iw+SJ0WKCOBz1pzYJsFMA2b8Ta4EX89 Kailiu328UMK19Jp2dhLcLUYS8B2rVVAK5b/O6iKV8UpKTriXDiCKSpcugpsQ1ML zq/6vR0SQXD+/W0MesGaNa33votBXJSsf9kZnYJw43n+W4Z/XFUE5pyNM/+TGAqw yuF4FX2sJL1uP5VMOh2HdthTr+/ewx/Trn9/re0p54z83plVlp4qbcORLiQ2uDf6 ZZ0/gHzNTp4Fzz81ZvHLm9smpe8cLvojrKLvCl0hv5zAf3QtsajpTN9uM7AsshV1 QVZSuAxb5n9bcij5F2za1/dd7WLlvsSzgNJ4Td/gEDI8qepB0+7PGlJ17sMg0nWP nFxUfGIsCF1KOoPwLyaNHHrRGjJigFUufqkbmSWkOzgC6pZVUXECggEBAP81To16 O5BlGDahcQkjKkqUwUtkhjE9/KQBh3zHqxsitI8f0U7eL3Ge1qhbgEgvHwHOjWSV pcG9atE55b7qlqqGQboiO1jfyLfIVLfamj0fHLinO/pV/wcBNy6Hz4rP7DNJDCMz 0agz/Ys3VXrZIk5sO0sUBYMBxho1x0n65Z06iK1SwD/x4Xg3/Psyx+ujEEkSsv5I Gg7aOTHLRSIPUx/OK+4M3sp58PeMGfEYNYxNiEoMiUQgu/srKRjs+pUKXCkEraNW 8s/ODYJ7iso6Z1z4NxfBH+hh+UrxTffh7t0Sz5gdUwUnBNb2I4EdeCcCTOnWYkut /GKW8oHD7f9VDS0CggEBAOM06rrp9rSsl6UhTu8LS5bjBeyUxab4HLZKP5YBitQO ltcPS05MxQ3UQ1BAMDRjXE2nrKlWMOAybrffEXBi4U1jYt7CiuCwwsPyaYNWT5qO Iwdjebkeq3+Mh8c48swhOwRLWSGg6mtRoR/c5cthYU62+s2zdxc/yhVTQ0WNFabT 23PYtjjW41WuR6K7Dhrdcw0MwIs1arZHTsDdU6Hln9raTSNwlHMBWVz/tzuwLieQ WEUXvsQvPtgPyohmDd0ueXiuS2FiYaXKFIMFj5/JyyJc1OCr1vIQN8mMcUjNbk2I VaeeSPawgKIiYARhbjJtjwjY6D59gOZrNGYASQOTGhUCggEAJPOB8SgekbShgd90 L1+BExVgu1rNtzmDZ/e0t1Ntqdsni4WO172B3xChgfTlqQ3xjmBqxoKIYnnbinm4 kyECOaSAxcOJFkAonruJ0Kj9JhZoITBNldx3tXruk3UkjrO2PmK4OCybkaAdeNfF L6lat0Iif6dheOt71HWu6j5CmrZL7dSKc3fBLpfksDZVDgApLntfoUOtSjM8jsIg u2K+pV9Dqw7//w8S3bTSWL8pmavsLNSN12hp7177b1l4mrXKTEIaJglD1OS/vgHH QaqdJq/lwjG7PflZkAlKQbbbz/SWTC8Kwzc4EyvGTj6HFBbYLg9VYiHJ5jh22mUV A6A77QKCAQAM6DWpdp8QNnnK5LCCPecGZFEy1mTADno7FM6169KCJ24EO5cwlIXh Ojy0s2DJqRdWRf82A3J1WggWI/Luqn9YERxNwUl4aDI4RW4fCuksw4RT6B/DF23w qgAQnjiUxhJ/NPSUR3rpq9J2Z+sZ+ac4fIaU5uwOAw6s1XUN32zqdECUPSxk4Dg7 5tGk+fFcL1ZY2G+buOYeAsEDjc8xdET3fs1BBSU5v0rfUJuNJX4Ju1Z4Xlf09yYf yg3cX8fL19cItwYLOzaG34r4wnkdP65tfk6NkNV+HNO+fF73Hsx0VRlgk0pb0T0N eNxxg0NqU/T7MK9I1YJcFJz+ame7b0DdAoIBAFw3Sf9LbVVNh8ef4OqjBZR8RCYq 4HeG0FPYvMLzUtFi7j4uBfiL4+pNpSFvecSuLRKE8Pr5dPRJNPNgJud5gvuykBZX Q9ktQJTAPZK8Q5neLeXfAdoF3szJuEZbDdGSps4JFokVIX+h3c+uFRD9QMSh+bz1 nEXCdYvmTs+bsTL+l7cbXq2iIKk1QnEcL+cRYr3VjP5xxZ/hGnuYqe9wmyo2MVkS NVUmCifIvE34TO072HH49gVPrhj9qIZsfBh4LBpl75eKwXTXx+HFqHhP8OfzuK6U v/JQn9JUGGzkmoMazQ9o5D5h/o0t/OGOPnQeqWL4BIPXdHv/dua6jLnAoU8= -----END RSA PRIVATE KEY----- prometheus-0.16.2+ds/retrieval/testdata/server.cer000066400000000000000000000022641265137125100222130ustar00rootroot00000000000000-----BEGIN CERTIFICATE----- 
MIIDSzCCAjOgAwIBAgIJAPn0lI/95RQVMA0GCSqGSIb3DQEBBQUAMF8xCzAJBgNV BAYTAlhYMRUwEwYDVQQHDAxEZWZhdWx0IENpdHkxHDAaBgNVBAoME0RlZmF1bHQg Q29tcGFueSBMdGQxGzAZBgNVBAMMElByb21ldGhldXMgVGVzdCBDQTAeFw0xNTA4 MDQxNDE5MjRaFw00MjEyMjAxNDE5MjRaMFYxCzAJBgNVBAYTAlhYMRUwEwYDVQQH DAxEZWZhdWx0IENpdHkxHDAaBgNVBAoME0RlZmF1bHQgQ29tcGFueSBMdGQxEjAQ BgNVBAMMCWxvY2FsaG9zdDCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEB AMQhH0walZlA+Gy5ZB3YzzxZta7mhTX3P+yBeQ6G6yrei4H7gv+MTCJj5qUBc+BS cta8loKKUQWjoppjyh4tz8awkTD5sEyedE7/G3DS7mLgmx0PslwqrkXFBQhm/C2f aZfSO69TZ8uu1dgCmmGe9K2XqPnR6fu9egtLpK8RT0s/Cx04bFnaPS0ecyj+3q7A xzDsH84Z1KPo4LHgqNWlHqFsQPqH+7W9ajhF6lnO4ArEDJ3KuLDlgrENzCsDabls 0U2XsccBJzP+Ls+iQwMfKpx2ISQDHqniopSICw+sPufiAv+OGnnG6rGGWQjUstqf w4DnU4DZvkrcEWoGa6fq26kCAwEAAaMTMBEwDwYDVR0RBAgwBocEfwAAATANBgkq hkiG9w0BAQUFAAOCAQEAVPs8IZffawWuRqbXJSvFz7a1q95febWQFjvvMe8ZJeCZ y1k9laQ5ZLHYuQ6NUWn09UbQNtK3fCLF4sJx5PCPCp1vZWx4nJs8N5mNyqdQ1Zfk oyoYTOR2izNcIj6ZUFRoOR/7B9hl2JouCXrbExr96oO13xIfsdslScINz1X68oyW KjU0yUrY+lWG1zEkUGXti9K6ujtXa7YY2n3nK/CvIqny5nVToYUgEMpjUR9S+KgN JUtawY3VQKyp6ZXlHqa0ihsuvY9Hrlh14h0AsZchPAHUtDFv2nEQob/Kf1XynKw6 itVKcj/UFpkhsnc/19aP1gWje76fejXl0tzyPXDXFg== -----END CERTIFICATE----- prometheus-0.16.2+ds/retrieval/testdata/server.key000066400000000000000000000032171265137125100222310ustar00rootroot00000000000000-----BEGIN RSA PRIVATE KEY----- MIIEpAIBAAKCAQEAxCEfTBqVmUD4bLlkHdjPPFm1ruaFNfc/7IF5DobrKt6LgfuC /4xMImPmpQFz4FJy1ryWgopRBaOimmPKHi3PxrCRMPmwTJ50Tv8bcNLuYuCbHQ+y XCquRcUFCGb8LZ9pl9I7r1Nny67V2AKaYZ70rZeo+dHp+716C0ukrxFPSz8LHThs Wdo9LR5zKP7ersDHMOwfzhnUo+jgseCo1aUeoWxA+of7tb1qOEXqWc7gCsQMncq4 sOWCsQ3MKwNpuWzRTZexxwEnM/4uz6JDAx8qnHYhJAMeqeKilIgLD6w+5+IC/44a ecbqsYZZCNSy2p/DgOdTgNm+StwRagZrp+rbqQIDAQABAoIBACeOjqNo0TdhtTko gxrJ+bIwXcZy0/c4cPogeuwFJjU1QWnr8lXcVBazk3dAPcDGoEbTLoARqZm7kTYW XlOL5dYrEn2QPpCVfNvZ9AzjXhUvO9m2qsCQEyobPJKfQslo14E5c7Q+3DZmgtbY X47E4pCIgBoyzkBpzM2uaf6tPRLtv8QcLklcf7lP5rd0Zypc325RR6+J5nxfCoFp fD3sj7t/lJLS8Xb6m4/YFjsVJ2qEAelZ086v8unMBEj324Vv/VqrkPFtFNJKI+Az Pd9xFDBdsKijBn1Yam9/dj7CiyZYKaVZ9p/w7Oqkpbrt8J8S8OtNHZ4fz9FJgRu9 uu+VTikCgYEA5ZkDmozDseA/c9JTUGAiPfAt5OrnqlKQNzp2m19GKh+Mlwg4k6O5 uE+0vaQEfc0cX3o8qntWNsb63XC9h6oHewrdyVFMZNS4nzzmKEvGWt9ON6qfQDUs 1cgZ0Y/uKydDX/3hk/hnJbeRW429rk0/GTuSHHilBzhE0uXJ11xPG48CgYEA2q7a yqTdqPmZFIAYT9ny099PhnGYE6cJljTUMX9Xhk4POqcigcq9kvNNsly2O1t0Eq0H 2tYo91xTCZc3Cb0N+Vx3meLIljnzhEtwzU9w6W5VGJHWiqovjGwtCdm/W28OlMzY zM+0gVCJzZLhL0vOwBLwGUJvjgfpvgIb/W+C2UcCgYB5TJ3ayQOath7P0g6yKBfv ITUd+/zovzXx97Ex5OPs3T4pjO5XEejMt0+F4WF+FR8oUiw65W5nAjkHRMjdI7dQ Ci2ibpEttDTV7Bass1vYJqHsRvhbs7w8NbtuO9xYcCXoUPkcc+AKzTC+beQIckcj zZUj9Zk6dz/lLAG3Bc3FgQKBgQC+MmZI6auAU9Y4ZlC+4qi4bfkUzaefMCC+a6RC iKbvQOUt9j+k81h+fu6MuuYkKh6CP8wdITbwLXRrWwGbjrqgrzO2u/AJ+M07uwGZ EAb8f+GzROR8JhjE4TEq6B/uvmDIOoI1YFF2Rz4TdjQ0lpJzrAT3czjjJy68+8is XFhJ8QKBgQCMPpB7taMLQzuilEGabL6Xas9UxryiGoBHk4Umb107GVWgwXxWT6fk YSlvbMQHCgVeaJe374Bghyw33Z3WilWM1fCWya/CxXlw9wakjQHiqFCIOCxdgosX Sr35bRFWJMnHXD+jD0Vr8WrtbGzFSZb3ZrjT6WhWRIGCHcaMANN9ew== -----END RSA PRIVATE KEY----- prometheus-0.16.2+ds/rules/000077500000000000000000000000001265137125100155325ustar00rootroot00000000000000prometheus-0.16.2+ds/rules/alerting.go000066400000000000000000000201311265137125100176630ustar00rootroot00000000000000// Copyright 2013 The Prometheus Authors // Licensed under the Apache License, Version 2.0 (the "License"); // you may not use this file except in compliance with the License. 
// You may obtain a copy of the License at // // http://www.apache.org/licenses/LICENSE-2.0 // // Unless required by applicable law or agreed to in writing, software // distributed under the License is distributed on an "AS IS" BASIS, // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. // See the License for the specific language governing permissions and // limitations under the License. package rules import ( "fmt" "html/template" "sync" "time" "github.com/prometheus/common/model" "github.com/prometheus/prometheus/promql" "github.com/prometheus/prometheus/util/strutil" ) const ( // AlertMetricName is the metric name for synthetic alert timeseries. alertMetricName model.LabelValue = "ALERTS" // AlertNameLabel is the label name indicating the name of an alert. alertNameLabel model.LabelName = "alertname" // AlertStateLabel is the label name indicating the state of an alert. alertStateLabel model.LabelName = "alertstate" ) // AlertState denotes the state of an active alert. type AlertState int func (s AlertState) String() string { switch s { case StateInactive: return "inactive" case StatePending: return "pending" case StateFiring: return "firing" default: panic("undefined") } } const ( // StateInactive is the state of an alert that is either firing nor pending. StateInactive AlertState = iota // StatePending is the state of an alert that has been active for less than // the configured threshold duration. StatePending // StateFiring is the state of an alert that has been active for longer than // the configured threshold duration. StateFiring ) // Alert is used to track active (pending/firing) alerts over time. type Alert struct { // The name of the alert. Name string // The vector element labelset triggering this alert. Labels model.LabelSet // The state of the alert (Pending or Firing). State AlertState // The time when the alert first transitioned into Pending state. ActiveSince model.Time // The value of the alert expression for this vector element. Value model.SampleValue } // sample returns a Sample suitable for recording the alert. func (a Alert) sample(timestamp model.Time, value model.SampleValue) *model.Sample { recordedMetric := make(model.Metric, len(a.Labels)+3) for label, value := range a.Labels { recordedMetric[label] = value } recordedMetric[model.MetricNameLabel] = alertMetricName recordedMetric[alertNameLabel] = model.LabelValue(a.Name) recordedMetric[alertStateLabel] = model.LabelValue(a.State.String()) return &model.Sample{ Metric: recordedMetric, Value: value, Timestamp: timestamp, } } // An AlertingRule generates alerts from its vector expression. type AlertingRule struct { // The name of the alert. name string // The vector expression from which to generate alerts. vector promql.Expr // The duration for which a labelset needs to persist in the expression // output vector before an alert transitions from Pending to Firing state. holdDuration time.Duration // Extra labels to attach to the resulting alert sample vectors. labels model.LabelSet // Short alert summary, suitable for email subjects. summary string // More detailed alert description. description string // A reference to a runbook for the alert. runbook string // Protects the below. mutex sync.Mutex // A map of alerts which are currently active (Pending or Firing), keyed by // the fingerprint of the labelset they correspond to. activeAlerts map[model.Fingerprint]*Alert } // NewAlertingRule constructs a new AlertingRule. 
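// Usage sketch (illustration; the expression, duration, and label values are
// hypothetical):
//
//	expr, err := promql.ParseExpr(`http_errors_total > 100`)
//	if err != nil { /* handle parse error */ }
//	rule := NewAlertingRule("HighErrors", expr, 5*time.Minute,
//		model.LabelSet{"severity": "page"},
//		"High error count", "More than 100 errors for 5 minutes", "http://runbook.example/high-errors")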
func NewAlertingRule(
	name string,
	vector promql.Expr,
	holdDuration time.Duration,
	labels model.LabelSet,
	summary string,
	description string,
	runbook string,
) *AlertingRule {
	return &AlertingRule{
		name:         name,
		vector:       vector,
		holdDuration: holdDuration,
		labels:       labels,
		summary:      summary,
		description:  description,
		runbook:      runbook,
		activeAlerts: map[model.Fingerprint]*Alert{},
	}
}

// Name returns the name of the alert.
func (rule *AlertingRule) Name() string {
	return rule.name
}

// eval evaluates the rule expression and then creates pending alerts and fires
// or removes previously pending alerts accordingly.
func (rule *AlertingRule) eval(timestamp model.Time, engine *promql.Engine) (model.Vector, error) {
	query, err := engine.NewInstantQuery(rule.vector.String(), timestamp)
	if err != nil {
		return nil, err
	}
	exprResult, err := query.Exec().Vector()
	if err != nil {
		return nil, err
	}

	rule.mutex.Lock()
	defer rule.mutex.Unlock()

	// Create pending alerts for any new vector elements in the alert expression
	// or update the expression value for existing elements.
	resultFPs := map[model.Fingerprint]struct{}{}
	for _, sample := range exprResult {
		fp := sample.Metric.Fingerprint()
		resultFPs[fp] = struct{}{}

		if alert, ok := rule.activeAlerts[fp]; !ok {
			labels := model.LabelSet(sample.Metric.Clone())
			labels = labels.Merge(rule.labels)
			if _, ok := labels[model.MetricNameLabel]; ok {
				delete(labels, model.MetricNameLabel)
			}
			rule.activeAlerts[fp] = &Alert{
				Name:        rule.name,
				Labels:      labels,
				State:       StatePending,
				ActiveSince: timestamp,
				Value:       sample.Value,
			}
		} else {
			alert.Value = sample.Value
		}
	}

	var vector model.Vector

	// Check if any pending alerts should be removed or fire now. Write out alert timeseries.
	for fp, activeAlert := range rule.activeAlerts {
		if _, ok := resultFPs[fp]; !ok {
			vector = append(vector, activeAlert.sample(timestamp, 0))
			delete(rule.activeAlerts, fp)
			continue
		}

		if activeAlert.State == StatePending && timestamp.Sub(activeAlert.ActiveSince) >= rule.holdDuration {
			vector = append(vector, activeAlert.sample(timestamp, 0))
			activeAlert.State = StateFiring
		}

		vector = append(vector, activeAlert.sample(timestamp, 1))
	}

	return vector, nil
}

func (rule *AlertingRule) String() string {
	s := fmt.Sprintf("ALERT %s", rule.name)
	s += fmt.Sprintf("\n\tIF %s", rule.vector)
	if rule.holdDuration > 0 {
		s += fmt.Sprintf("\n\tFOR %s", strutil.DurationToString(rule.holdDuration))
	}
	if len(rule.labels) > 0 {
		s += fmt.Sprintf("\n\tWITH %s", rule.labels)
	}
	s += fmt.Sprintf("\n\tSUMMARY %q", rule.summary)
	s += fmt.Sprintf("\n\tDESCRIPTION %q", rule.description)
	s += fmt.Sprintf("\n\tRUNBOOK %q", rule.runbook)
	return s
}

// HTMLSnippet returns an HTML snippet representing this alerting rule. The
// resulting snippet is expected to be presented in a <pre> element, so that
// line breaks and other returned whitespace is respected.
    func (rule *AlertingRule) HTMLSnippet(pathPrefix string) template.HTML {
    	alertMetric := model.Metric{
    		model.MetricNameLabel: alertMetricName,
    		alertNameLabel:        model.LabelValue(rule.name),
    	}
    	s := fmt.Sprintf("ALERT %s", pathPrefix+strutil.GraphLinkForExpression(alertMetric.String()), rule.name)
    	s += fmt.Sprintf("\n  IF %s", pathPrefix+strutil.GraphLinkForExpression(rule.vector.String()), rule.vector)
    	if rule.holdDuration > 0 {
    		s += fmt.Sprintf("\n  FOR %s", strutil.DurationToString(rule.holdDuration))
    	}
    	if len(rule.labels) > 0 {
    		s += fmt.Sprintf("\n  WITH %s", rule.labels)
    	}
    	s += fmt.Sprintf("\n  SUMMARY %q", rule.summary)
    	s += fmt.Sprintf("\n  DESCRIPTION %q", rule.description)
    	s += fmt.Sprintf("\n  RUNBOOK %q", rule.runbook)
    	return template.HTML(s)
    }
    
    // State returns the "maximum" state: firing > pending > inactive.
    func (rule *AlertingRule) State() AlertState {
    	rule.mutex.Lock()
    	defer rule.mutex.Unlock()
    
    	maxState := StateInactive
    	for _, activeAlert := range rule.activeAlerts {
    		if activeAlert.State > maxState {
    			maxState = activeAlert.State
    		}
    	}
    	return maxState
    }
    
    // ActiveAlerts returns a slice of active alerts.
    func (rule *AlertingRule) ActiveAlerts() []Alert {
    	rule.mutex.Lock()
    	defer rule.mutex.Unlock()
    
    	alerts := make([]Alert, 0, len(rule.activeAlerts))
    	for _, alert := range rule.activeAlerts {
    		alerts = append(alerts, *alert)
    	}
    	return alerts
    }
    prometheus-0.16.2+ds/rules/manager.go000066400000000000000000000245641265137125100175060ustar00rootroot00000000000000// Copyright 2013 The Prometheus Authors
    // Licensed under the Apache License, Version 2.0 (the "License");
    // you may not use this file except in compliance with the License.
    // You may obtain a copy of the License at
    //
    // http://www.apache.org/licenses/LICENSE-2.0
    //
    // Unless required by applicable law or agreed to in writing, software
    // distributed under the License is distributed on an "AS IS" BASIS,
    // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    // See the License for the specific language governing permissions and
    // limitations under the License.
    
    package rules
    
    import (
    	"fmt"
    	"io/ioutil"
    	"net/url"
    	"path/filepath"
    	"sync"
    	"time"
    
    	html_template "html/template"
    
    	"github.com/prometheus/client_golang/prometheus"
    	"github.com/prometheus/common/log"
    	"github.com/prometheus/common/model"
    
    	"github.com/prometheus/prometheus/config"
    	"github.com/prometheus/prometheus/notification"
    	"github.com/prometheus/prometheus/promql"
    	"github.com/prometheus/prometheus/storage"
    	"github.com/prometheus/prometheus/template"
    	"github.com/prometheus/prometheus/util/strutil"
    )
    
    // Constants for instrumentation.
    const (
    	namespace = "prometheus"
    
    	ruleTypeLabel     = "rule_type"
    	ruleTypeAlerting  = "alerting"
    	ruleTypeRecording = "recording"
    )
    
    var (
    	evalDuration = prometheus.NewSummaryVec(
    		prometheus.SummaryOpts{
    			Namespace: namespace,
    			Name:      "rule_evaluation_duration_milliseconds",
    			Help:      "The duration for a rule to execute.",
    		},
    		[]string{ruleTypeLabel},
    	)
    	evalFailures = prometheus.NewCounter(
    		prometheus.CounterOpts{
    			Namespace: namespace,
    			Name:      "rule_evaluation_failures_total",
    			Help:      "The total number of rule evaluation failures.",
    		},
    	)
    	iterationDuration = prometheus.NewSummary(prometheus.SummaryOpts{
    		Namespace:  namespace,
    		Name:       "evaluator_duration_milliseconds",
    		Help:       "The duration for all evaluations to execute.",
    		Objectives: map[float64]float64{0.01: 0.001, 0.05: 0.005, 0.5: 0.05, 0.90: 0.01, 0.99: 0.001},
    	})
    )
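
// With the namespace above, these surface as
// prometheus_rule_evaluation_duration_milliseconds (partitioned by
// rule_type="alerting" / rule_type="recording"),
// prometheus_rule_evaluation_failures_total, and
// prometheus_evaluator_duration_milliseconds in the server's own
// /metrics output.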
    
    func init() {
    	prometheus.MustRegister(iterationDuration)
    	prometheus.MustRegister(evalFailures)
    	prometheus.MustRegister(evalDuration)
    }
    
    // A Rule encapsulates a vector expression which is evaluated at a specified
    // interval and acted upon (currently either recorded or used for alerting).
    type Rule interface {
    	// Name returns the name of the rule.
    	Name() string
	// eval evaluates the rule, including any associated recording or alerting actions.
    	eval(model.Time, *promql.Engine) (model.Vector, error)
    	// String returns a human-readable string representation of the rule.
    	String() string
    	// HTMLSnippet returns a human-readable string representation of the rule,
	// decorated with HTML elements for use in the web frontend.
    	HTMLSnippet(pathPrefix string) html_template.HTML
    }
    
    // The Manager manages recording and alerting rules.
    type Manager struct {
    	// Protects the rules list.
    	sync.Mutex
    	rules []Rule
    
    	done chan bool
    
    	interval    time.Duration
    	queryEngine *promql.Engine
    
    	sampleAppender      storage.SampleAppender
    	notificationHandler *notification.NotificationHandler
    
    	externalURL *url.URL
    }
    
    // ManagerOptions bundles options for the Manager.
    type ManagerOptions struct {
    	EvaluationInterval time.Duration
    	QueryEngine        *promql.Engine
    
    	NotificationHandler *notification.NotificationHandler
    	SampleAppender      storage.SampleAppender
    
    	ExternalURL *url.URL
    }
    
// NewManager returns a new Manager, ready to be started by calling the Run
// method.
    func NewManager(o *ManagerOptions) *Manager {
    	manager := &Manager{
    		rules: []Rule{},
    		done:  make(chan bool),
    
    		interval:            o.EvaluationInterval,
    		sampleAppender:      o.SampleAppender,
    		queryEngine:         o.QueryEngine,
    		notificationHandler: o.NotificationHandler,
    		externalURL:         o.ExternalURL,
    	}
    	return manager
    }
    
    // Run the rule manager's periodic rule evaluation.
    func (m *Manager) Run() {
    	defer log.Info("Rule manager stopped.")
    
    	m.Lock()
    	lastInterval := m.interval
    	m.Unlock()
    
    	ticker := time.NewTicker(lastInterval)
    	defer ticker.Stop()
    
    	for {
    		// The outer select clause makes sure that m.done is looked at
    		// first. Otherwise, if m.runIteration takes longer than
    		// m.interval, there is only a 50% chance that m.done will be
    		// looked at before the next m.runIteration call happens.
    		select {
    		case <-m.done:
    			return
    		default:
    			select {
    			case <-ticker.C:
    				start := time.Now()
    				m.runIteration()
    				iterationDuration.Observe(float64(time.Since(start) / time.Millisecond))
    
    				m.Lock()
    				if lastInterval != m.interval {
    					ticker.Stop()
    					ticker = time.NewTicker(m.interval)
    					lastInterval = m.interval
    				}
    				m.Unlock()
    			case <-m.done:
    				return
    			}
    		}
    	}
    }
    
    // Stop the rule manager's rule evaluation cycles.
    func (m *Manager) Stop() {
    	log.Info("Stopping rule manager...")
    	m.done <- true
    }
    
    func (m *Manager) queueAlertNotifications(rule *AlertingRule, timestamp model.Time) {
    	activeAlerts := rule.ActiveAlerts()
    	if len(activeAlerts) == 0 {
    		return
    	}
    
    	notifications := make(notification.NotificationReqs, 0, len(activeAlerts))
    	for _, aa := range activeAlerts {
    		if aa.State != StateFiring {
    			// BUG: In the future, make AlertManager support pending alerts?
    			continue
    		}
    
    		// Provide the alert information to the template.
    		l := map[string]string{}
    		for k, v := range aa.Labels {
    			l[string(k)] = string(v)
    		}
    		tmplData := struct {
    			Labels map[string]string
    			Value  float64
    		}{
    			Labels: l,
    			Value:  float64(aa.Value),
    		}
    		// Inject some convenience variables that are easier to remember for users
    		// who are not used to Go's templating system.
    		defs := "{{$labels := .Labels}}{{$value := .Value}}"
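		// For example, a summary template such as
		//   "Instance {{$labels.instance}} is at {{$value}}"
		// is expanded with the firing element's label set and sample
		// value (names here are illustrative only).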
    
    		expand := func(text string) string {
    			tmpl := template.NewTemplateExpander(defs+text, "__alert_"+rule.Name(), tmplData, timestamp, m.queryEngine, m.externalURL.Path)
    			result, err := tmpl.Expand()
    			if err != nil {
    				result = err.Error()
    				log.Warnf("Error expanding alert template %v with data '%v': %v", rule.Name(), tmplData, err)
    			}
    			return result
    		}
    
		notifications = append(notifications, &notification.NotificationReq{
    			Summary:     expand(rule.summary),
    			Description: expand(rule.description),
    			Runbook:     rule.runbook,
    			Labels: aa.Labels.Merge(model.LabelSet{
    				alertNameLabel: model.LabelValue(rule.Name()),
    			}),
    			Value:        aa.Value,
    			ActiveSince:  aa.ActiveSince.Time(),
    			RuleString:   rule.String(),
    			GeneratorURL: m.externalURL.String() + strutil.GraphLinkForExpression(rule.vector.String()),
    		})
    	}
    	m.notificationHandler.SubmitReqs(notifications)
    }
    
    func (m *Manager) runIteration() {
    	now := model.Now()
    	wg := sync.WaitGroup{}
    
    	m.Lock()
    	rulesSnapshot := make([]Rule, len(m.rules))
    	copy(rulesSnapshot, m.rules)
    	m.Unlock()
    
    	for _, rule := range rulesSnapshot {
    		wg.Add(1)
    		// BUG(julius): Look at fixing thundering herd.
    		go func(rule Rule) {
    			defer wg.Done()
    
    			start := time.Now()
    			vector, err := rule.eval(now, m.queryEngine)
    			duration := time.Since(start)
    
    			if err != nil {
    				evalFailures.Inc()
    				log.Warnf("Error while evaluating rule %q: %s", rule, err)
    				return
    			}
    
    			switch r := rule.(type) {
    			case *AlertingRule:
    				m.queueAlertNotifications(r, now)
    				evalDuration.WithLabelValues(ruleTypeAlerting).Observe(
    					float64(duration / time.Millisecond),
    				)
    			case *RecordingRule:
    				evalDuration.WithLabelValues(ruleTypeRecording).Observe(
    					float64(duration / time.Millisecond),
    				)
    			default:
    				panic(fmt.Errorf("unknown rule type: %T", rule))
    			}
    
    			for _, s := range vector {
    				m.sampleAppender.Append(s)
    			}
    		}(rule)
    	}
    	wg.Wait()
    }
    
// transferAlertState makes a copy of the state of alerting rules and returns a function
// that restores the saved state into the matching alerting rules of the current rule set.
    func (m *Manager) transferAlertState() func() {
    
    	alertingRules := map[string]*AlertingRule{}
    	for _, r := range m.rules {
    		if ar, ok := r.(*AlertingRule); ok {
    			alertingRules[ar.name] = ar
    		}
    	}
    
    	return func() {
    		// Restore alerting rule state.
    		for _, r := range m.rules {
    			ar, ok := r.(*AlertingRule)
    			if !ok {
    				continue
    			}
    			if old, ok := alertingRules[ar.name]; ok {
    				ar.activeAlerts = old.activeAlerts
    			}
    		}
    	}
    }
    
// ApplyConfig updates the rule manager's state as the config requires. If
// loading the new rules fails, the old rule set is restored. Returns true on success.
    func (m *Manager) ApplyConfig(conf *config.Config) bool {
    	m.Lock()
    	defer m.Unlock()
    
    	defer m.transferAlertState()()
    
    	success := true
    	m.interval = time.Duration(conf.GlobalConfig.EvaluationInterval)
    
    	rulesSnapshot := make([]Rule, len(m.rules))
    	copy(rulesSnapshot, m.rules)
    	m.rules = m.rules[:0]
    
    	var files []string
    	for _, pat := range conf.RuleFiles {
    		fs, err := filepath.Glob(pat)
    		if err != nil {
    			// The only error can be a bad pattern.
    			log.Errorf("Error retrieving rule files for %s: %s", pat, err)
    			success = false
    		}
    		files = append(files, fs...)
    	}
    	if err := m.loadRuleFiles(files...); err != nil {
    		// If loading the new rules failed, restore the old rule set.
    		m.rules = rulesSnapshot
    		log.Errorf("Error loading rules, previous rule set restored: %s", err)
    		success = false
    	}
    
    	return success
    }
    
    // loadRuleFiles loads alerting and recording rules from the given files.
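// As a rough sketch of the statement syntax parsed here, mirroring the
// String() methods above (all names and values are illustrative only):
//
//	ALERT HTTPRequestRateLow
//	  IF http_requests < 100
//	  FOR 1m
//	  WITH {severity="critical"}
//	  SUMMARY "summary"
//	  DESCRIPTION "description"
//	  RUNBOOK "runbook"
//
//	recorded_metric{label="value"} = some_expression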
    func (m *Manager) loadRuleFiles(filenames ...string) error {
    	for _, fn := range filenames {
    		content, err := ioutil.ReadFile(fn)
    		if err != nil {
    			return err
    		}
    		stmts, err := promql.ParseStmts(string(content))
    		if err != nil {
    			return fmt.Errorf("error parsing %s: %s", fn, err)
    		}
    
    		for _, stmt := range stmts {
    			switch r := stmt.(type) {
    			case *promql.AlertStmt:
    				rule := NewAlertingRule(r.Name, r.Expr, r.Duration, r.Labels, r.Summary, r.Description, r.Runbook)
    				m.rules = append(m.rules, rule)
    			case *promql.RecordStmt:
    				rule := NewRecordingRule(r.Name, r.Expr, r.Labels)
    				m.rules = append(m.rules, rule)
    			default:
    				panic("retrieval.Manager.LoadRuleFiles: unknown statement type")
    			}
    		}
    	}
    	return nil
    }
    
    // Rules returns the list of the manager's rules.
    func (m *Manager) Rules() []Rule {
    	m.Lock()
    	defer m.Unlock()
    
    	rules := make([]Rule, len(m.rules))
    	copy(rules, m.rules)
    	return rules
    }
    
    // AlertingRules returns the list of the manager's alerting rules.
    func (m *Manager) AlertingRules() []*AlertingRule {
    	m.Lock()
    	defer m.Unlock()
    
    	alerts := []*AlertingRule{}
    	for _, rule := range m.rules {
    		if alertingRule, ok := rule.(*AlertingRule); ok {
    			alerts = append(alerts, alertingRule)
    		}
    	}
    	return alerts
    }
    prometheus-0.16.2+ds/rules/manager_test.go000066400000000000000000000127771265137125100205500ustar00rootroot00000000000000// Copyright 2013 The Prometheus Authors
    // Licensed under the Apache License, Version 2.0 (the "License");
    // you may not use this file except in compliance with the License.
    // You may obtain a copy of the License at
    //
    // http://www.apache.org/licenses/LICENSE-2.0
    //
    // Unless required by applicable law or agreed to in writing, software
    // distributed under the License is distributed on an "AS IS" BASIS,
    // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    // See the License for the specific language governing permissions and
    // limitations under the License.
    
    package rules
    
    import (
    	"fmt"
    	"reflect"
    	"strings"
    	"testing"
    	"time"
    
    	"github.com/prometheus/common/model"
    
    	"github.com/prometheus/prometheus/promql"
    )
    
    func TestAlertingRule(t *testing.T) {
    	suite, err := promql.NewTest(t, `
    		load 5m
    			http_requests{job="api-server", instance="0", group="production"}	0+10x10
    			http_requests{job="api-server", instance="1", group="production"}	0+20x10
    			http_requests{job="api-server", instance="0", group="canary"}		0+30x10
    			http_requests{job="api-server", instance="1", group="canary"}		0+40x10
    			http_requests{job="app-server", instance="0", group="production"}	0+50x10
    			http_requests{job="app-server", instance="1", group="production"}	0+60x10
    			http_requests{job="app-server", instance="0", group="canary"}		0+70x10
    			http_requests{job="app-server", instance="1", group="canary"}		0+80x10
    	`)
    	if err != nil {
    		t.Fatal(err)
    	}
    	defer suite.Close()
    
    	if err := suite.Run(); err != nil {
    		t.Fatal(err)
    	}
    
    	expr, err := promql.ParseExpr(`http_requests{group="canary", job="app-server"} < 100`)
    	if err != nil {
    		t.Fatalf("Unable to parse alert expression: %s", err)
    	}
    
    	rule := NewAlertingRule(
    		"HTTPRequestRateLow",
    		expr,
    		time.Minute,
    		model.LabelSet{"severity": "critical"},
    		"summary", "description", "runbook",
    	)
    
    	var tests = []struct {
    		time   time.Duration
    		result []string
    	}{
    		{
    			time: 0,
    			result: []string{
    				`ALERTS{alertname="HTTPRequestRateLow", alertstate="pending", group="canary", instance="0", job="app-server", severity="critical"} => 1 @[%v]`,
    				`ALERTS{alertname="HTTPRequestRateLow", alertstate="pending", group="canary", instance="1", job="app-server", severity="critical"} => 1 @[%v]`,
    			},
    		}, {
    			time: 5 * time.Minute,
    			result: []string{
    				`ALERTS{alertname="HTTPRequestRateLow", alertstate="pending", group="canary", instance="0", job="app-server", severity="critical"} => 0 @[%v]`,
    				`ALERTS{alertname="HTTPRequestRateLow", alertstate="firing", group="canary", instance="0", job="app-server", severity="critical"} => 1 @[%v]`,
    				`ALERTS{alertname="HTTPRequestRateLow", alertstate="pending", group="canary", instance="1", job="app-server", severity="critical"} => 0 @[%v]`,
    				`ALERTS{alertname="HTTPRequestRateLow", alertstate="firing", group="canary", instance="1", job="app-server", severity="critical"} => 1 @[%v]`,
    			},
    		}, {
    			time: 10 * time.Minute,
    			result: []string{
    				`ALERTS{alertname="HTTPRequestRateLow", alertstate="firing", group="canary", instance="1", job="app-server", severity="critical"} => 0 @[%v]`,
    				`ALERTS{alertname="HTTPRequestRateLow", alertstate="firing", group="canary", instance="0", job="app-server", severity="critical"} => 0 @[%v]`,
    			},
    		},
    		{
    			time:   15 * time.Minute,
    			result: nil,
    		},
    		{
    			time:   20 * time.Minute,
    			result: nil,
    		},
    	}
    
    	for i, test := range tests {
    		evalTime := model.Time(0).Add(test.time)
    
    		res, err := rule.eval(evalTime, suite.QueryEngine())
    		if err != nil {
    			t.Fatalf("Error during alerting rule evaluation: %s", err)
    		}
    
    		actual := strings.Split(res.String(), "\n")
    		expected := annotateWithTime(test.result, evalTime)
    		if actual[0] == "" {
    			actual = []string{}
    		}
    
    		if len(actual) != len(expected) {
    			t.Errorf("%d. Number of samples in expected and actual output don't match (%d vs. %d)", i, len(expected), len(actual))
    		}
    
    		for j, expectedSample := range expected {
    			found := false
    			for _, actualSample := range actual {
    				if actualSample == expectedSample {
    					found = true
    				}
    			}
    			if !found {
    				t.Errorf("%d.%d. Couldn't find expected sample in output: '%v'", i, j, expectedSample)
    			}
    		}
    
    		if t.Failed() {
    			t.Errorf("%d. Expected and actual outputs don't match:", i)
    			t.Fatalf("Expected:\n%v\n----\nActual:\n%v", strings.Join(expected, "\n"), strings.Join(actual, "\n"))
    		}
    	}
    }
    
    func annotateWithTime(lines []string, timestamp model.Time) []string {
    	annotatedLines := []string{}
    	for _, line := range lines {
    		annotatedLines = append(annotatedLines, fmt.Sprintf(line, timestamp))
    	}
    	return annotatedLines
    }
    
    func TestTransferAlertState(t *testing.T) {
    	m := NewManager(&ManagerOptions{})
    
    	alert := &Alert{
    		Name:  "testalert",
    		State: StateFiring,
    	}
    
    	arule := AlertingRule{
    		name:         "test",
    		activeAlerts: map[model.Fingerprint]*Alert{},
    	}
    	aruleCopy := arule
    
    	m.rules = append(m.rules, &arule)
    
    	// Set an alert.
    	arule.activeAlerts[0] = alert
    
    	// Save state and get the restore function.
    	restore := m.transferAlertState()
    
    	// Remove arule from the rule list and add an unrelated rule and the
    	// stateless copy of arule.
    	m.rules = []Rule{
    		&AlertingRule{
    			name:         "test_other",
    			activeAlerts: map[model.Fingerprint]*Alert{},
    		},
    		&aruleCopy,
    	}
    
    	// Apply the restore function.
    	restore()
    
    	if ar := m.rules[0].(*AlertingRule); len(ar.activeAlerts) != 0 {
    		t.Fatalf("unexpected alert for unrelated alerting rule")
    	}
    	if ar := m.rules[1].(*AlertingRule); !reflect.DeepEqual(ar.activeAlerts[0], alert) {
    		t.Fatalf("alert state was not restored")
    	}
    }
    prometheus-0.16.2+ds/rules/recording.go000066400000000000000000000055651265137125100200500ustar00rootroot00000000000000// Copyright 2013 The Prometheus Authors
    // Licensed under the Apache License, Version 2.0 (the "License");
    // you may not use this file except in compliance with the License.
    // You may obtain a copy of the License at
    //
    // http://www.apache.org/licenses/LICENSE-2.0
    //
    // Unless required by applicable law or agreed to in writing, software
    // distributed under the License is distributed on an "AS IS" BASIS,
    // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    // See the License for the specific language governing permissions and
    // limitations under the License.
    
    package rules
    
    import (
    	"fmt"
    	"html/template"
    
    	"github.com/prometheus/common/model"
    
    	"github.com/prometheus/prometheus/promql"
    	"github.com/prometheus/prometheus/util/strutil"
    )
    
    // A RecordingRule records its vector expression into new timeseries.
    type RecordingRule struct {
    	name   string
    	vector promql.Expr
    	labels model.LabelSet
    }
    
    // NewRecordingRule returns a new recording rule.
    func NewRecordingRule(name string, vector promql.Expr, labels model.LabelSet) *RecordingRule {
    	return &RecordingRule{
    		name:   name,
    		vector: vector,
    		labels: labels,
    	}
    }
    
    // Name returns the rule name.
    func (rule RecordingRule) Name() string { return rule.name }
    
    // eval evaluates the rule and then overrides the metric names and labels accordingly.
    func (rule RecordingRule) eval(timestamp model.Time, engine *promql.Engine) (model.Vector, error) {
    	query, err := engine.NewInstantQuery(rule.vector.String(), timestamp)
    	if err != nil {
    		return nil, err
    	}
    
    	var (
    		result = query.Exec()
    		vector model.Vector
    	)
    	if result.Err != nil {
		return nil, result.Err
    	}
    
    	switch result.Value.(type) {
    	case model.Vector:
    		vector, err = result.Vector()
    		if err != nil {
    			return nil, err
    		}
    	case *model.Scalar:
    		scalar, err := result.Scalar()
    		if err != nil {
    			return nil, err
    		}
    		vector = model.Vector{&model.Sample{
    			Value:     scalar.Value,
    			Timestamp: scalar.Timestamp,
    			Metric:    model.Metric{},
    		}}
    	default:
    		return nil, fmt.Errorf("rule result is not a vector or scalar")
    	}
    
    	// Override the metric name and labels.
    	for _, sample := range vector {
    		sample.Metric[model.MetricNameLabel] = model.LabelValue(rule.name)
    
    		for label, value := range rule.labels {
    			if value == "" {
    				delete(sample.Metric, label)
    			} else {
    				sample.Metric[label] = value
    			}
    		}
    	}
    
    	return vector, nil
    }
    
    func (rule RecordingRule) String() string {
    	return fmt.Sprintf("%s%s = %s\n", rule.name, rule.labels, rule.vector)
    }
    
    // HTMLSnippet returns an HTML snippet representing this rule.
    func (rule RecordingRule) HTMLSnippet(pathPrefix string) template.HTML {
    	ruleExpr := rule.vector.String()
    	return template.HTML(fmt.Sprintf(
		`<a href="%s">%s</a>%s = <a href="%s">%s</a>`,
    		pathPrefix+strutil.GraphLinkForExpression(rule.name),
    		rule.name,
    		rule.labels,
    		pathPrefix+strutil.GraphLinkForExpression(ruleExpr),
    		ruleExpr))
    }
    prometheus-0.16.2+ds/rules/recording_test.go000066400000000000000000000035221265137125100210760ustar00rootroot00000000000000// Copyright 2013 The Prometheus Authors
    // Licensed under the Apache License, Version 2.0 (the "License");
    // you may not use this file except in compliance with the License.
    // You may obtain a copy of the License at
    //
    // http://www.apache.org/licenses/LICENSE-2.0
    //
    // Unless required by applicable law or agreed to in writing, software
    // distributed under the License is distributed on an "AS IS" BASIS,
    // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    // See the License for the specific language governing permissions and
    // limitations under the License.
    
    package rules
    
    import (
    	"reflect"
    	"testing"
    
    	"github.com/prometheus/common/model"
    
    	"github.com/prometheus/prometheus/promql"
    	"github.com/prometheus/prometheus/storage/local"
    )
    
    func TestRuleEval(t *testing.T) {
    	storage, closer := local.NewTestStorage(t, 1)
    	defer closer.Close()
    	engine := promql.NewEngine(storage, nil)
    	now := model.Now()
    
    	suite := []struct {
    		name   string
    		expr   promql.Expr
    		labels model.LabelSet
    		result model.Vector
    	}{
    		{
    			name:   "nolabels",
    			expr:   &promql.NumberLiteral{Val: 1},
    			labels: model.LabelSet{},
    			result: model.Vector{&model.Sample{
    				Value:     1,
    				Timestamp: now,
    				Metric:    model.Metric{"__name__": "nolabels"},
    			}},
    		},
    		{
    			name:   "labels",
    			expr:   &promql.NumberLiteral{Val: 1},
    			labels: model.LabelSet{"foo": "bar"},
    			result: model.Vector{&model.Sample{
    				Value:     1,
    				Timestamp: now,
    				Metric:    model.Metric{"__name__": "labels", "foo": "bar"},
    			}},
    		},
    	}
    
    	for _, test := range suite {
    		rule := NewRecordingRule(test.name, test.expr, test.labels)
    		result, err := rule.eval(now, engine)
    		if err != nil {
    			t.Fatalf("Error evaluating %s", test.name)
    		}
    		if !reflect.DeepEqual(result, test.result) {
    			t.Fatalf("Error: expected %q, got %q", test.result, result)
    		}
    	}
    }
    prometheus-0.16.2+ds/scripts/000077500000000000000000000000001265137125100160675ustar00rootroot00000000000000prometheus-0.16.2+ds/scripts/build.sh000077500000000000000000000030311265137125100175220ustar00rootroot00000000000000#!/usr/bin/env bash
    
    # Copyright 2015 The Prometheus Authors
    # Licensed under the Apache License, Version 2.0 (the "License");
    # you may not use this file except in compliance with the License.
    # You may obtain a copy of the License at
    #
    # http://www.apache.org/licenses/LICENSE-2.0
    #
    # Unless required by applicable law or agreed to in writing, software
    # distributed under the License is distributed on an "AS IS" BASIS,
    # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    # See the License for the specific language governing permissions and
    # limitations under the License.
    
    set -e
    
    repo_path="github.com/prometheus/prometheus"
    
    version=$( cat version/VERSION )
    revision=$( git rev-parse --short HEAD 2> /dev/null || echo 'unknown' )
    branch=$( git rev-parse --abbrev-ref HEAD 2> /dev/null || echo 'unknown' )
    host=$( hostname -f )
    build_date=$( date +%Y%m%d-%H:%M:%S )
    go_version=$( go version | sed -e 's/^[^0-9.]*\([0-9.]*\).*/\1/' )
    
    if [ "$(go env GOOS)" = "windows" ]; then
    	ext=".exe"
    fi
    
    ldflags="
      -X ${repo_path}/version.Version=${version}
      -X ${repo_path}/version.Revision=${revision}
      -X ${repo_path}/version.Branch=${branch}
      -X ${repo_path}/version.BuildUser=${USER}@${host}
      -X ${repo_path}/version.BuildDate=${build_date}
      -X ${repo_path}/version.GoVersion=${go_version}"
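
# For illustration: with version/VERSION containing 0.16.2, the linker is
# passed -X github.com/prometheus/prometheus/version.Version=0.16.2, which
# stamps that value into the binary at link time.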
    
    export GO15VENDOREXPERIMENT="1"
    
    echo " >   prometheus"
    go build -ldflags "${ldflags}" -o prometheus${ext} ${repo_path}/cmd/prometheus
    
    echo " >   promtool"
    go build -ldflags "${ldflags}" -o promtool${ext} ${repo_path}/cmd/promtool
    
    exit 0
    prometheus-0.16.2+ds/scripts/goenv.sh000077500000000000000000000026611265137125100175510ustar00rootroot00000000000000# Copyright 2015 The Prometheus Authors
    # Licensed under the Apache License, Version 2.0 (the "License");
    # you may not use this file except in compliance with the License.
    # You may obtain a copy of the License at
    #
    # http://www.apache.org/licenses/LICENSE-2.0
    #
    # Unless required by applicable law or agreed to in writing, software
    # distributed under the License is distributed on an "AS IS" BASIS,
    # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    # See the License for the specific language governing permissions and
    # limitations under the License.
    
    goroot="$1"
    gopath="$2"
    
    go_version_min="1.5"
    go_version_install="1.5.3"
    
    vernum() {
    	printf "%03d%03d%03d" $(echo "$1" | tr '.' ' ')
    }
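
# For illustration: vernum "1.5.3" prints 001005003 and vernum "1.5" prints
# 001005000 (printf substitutes zero for the missing field), so version
# strings can be compared numerically with -ge below.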
    
    if command -v "go" >/dev/null; then
        go_version=$(go version | sed -e 's/^[^0-9.]*\([0-9.]*\).*/\1/')
    fi
    
    # If we satisfy the version requirement, there is nothing to do. Otherwise
    # proceed downloading and installing a go environment.
    if [ $(vernum ${go_version}) -ge $(vernum ${go_version_min}) ]; then
    	return
    fi
    
    export GOPATH="${gopath}"
    export GOROOT="${goroot}/${go_version_install}"
    
    export PATH="$PATH:$GOROOT/bin"
    
    if [ ! -x "${GOROOT}/bin/go" ]; then
    
    	mkdir -p "${GOROOT}"
    
    	os=$(uname | tr A-Z a-z)
    	arch=$(uname -m | sed -e 's/x86_64/amd64/' | sed -e 's/i.86/386/')
    
    	url="https://golang.org/dl"
    	tarball="go${go_version_install}.${os}-${arch}.tar.gz"
    
    	wget -qO- "${url}/${tarball}" | tar -C "${GOROOT}" --strip 1 -xz
    fi
    prometheus-0.16.2+ds/scripts/release_tarballs.sh000077500000000000000000000023421265137125100217330ustar00rootroot00000000000000#!/usr/bin/env bash
    
    # Copyright 2015 The Prometheus Authors
    # Licensed under the Apache License, Version 2.0 (the "License");
    # you may not use this file except in compliance with the License.
    # You may obtain a copy of the License at
    #
    # http://www.apache.org/licenses/LICENSE-2.0
    #
    # Unless required by applicable law or agreed to in writing, software
    # distributed under the License is distributed on an "AS IS" BASIS,
    # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    # See the License for the specific language governing permissions and
    # limitations under the License.
    
    set -e
    
    version=$(cat version/VERSION)
    
    for GOOS in "darwin" "freebsd" "linux" "windows"; do
      for GOARCH in "amd64" "386"; do
        export GOARCH
        export GOOS
        make build
    
        tarball_dir="prometheus-${version}.${GOOS}-${GOARCH}"
        tarball="${tarball_dir}.tar.gz"
    
        if [ "$(go env GOOS)" = "windows" ]; then
          ext=".exe"
        fi
    
        echo " >   $tarball"
        mkdir -p "${tarball_dir}"
        cp -a "prometheus${ext}" "promtool${ext}" consoles console_libraries "${tarball_dir}"
        tar --owner=root --group=root -czf "${tarball}" "${tarball_dir}"
        rm -rf "${tarball_dir}"
        rm "prometheus${ext}" "promtool${ext}"
      done
    done
    
    exit 0
    prometheus-0.16.2+ds/storage/000077500000000000000000000000001265137125100160445ustar00rootroot00000000000000prometheus-0.16.2+ds/storage/local/000077500000000000000000000000001265137125100171365ustar00rootroot00000000000000prometheus-0.16.2+ds/storage/local/chunk.go000066400000000000000000000173431265137125100206050ustar00rootroot00000000000000// Copyright 2014 The Prometheus Authors
    // Licensed under the Apache License, Version 2.0 (the "License");
    // you may not use this file except in compliance with the License.
    // You may obtain a copy of the License at
    //
    // http://www.apache.org/licenses/LICENSE-2.0
    //
    // Unless required by applicable law or agreed to in writing, software
    // distributed under the License is distributed on an "AS IS" BASIS,
    // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    // See the License for the specific language governing permissions and
    // limitations under the License.
    
    package local
    
    import (
    	"container/list"
    	"fmt"
    	"io"
    	"sync"
    	"sync/atomic"
    
    	"github.com/prometheus/common/model"
    
    	"github.com/prometheus/prometheus/storage/metric"
    )
    
    // The DefaultChunkEncoding can be changed via a flag.
    var DefaultChunkEncoding = doubleDelta
    
    type chunkEncoding byte
    
    // String implements flag.Value.
    func (ce chunkEncoding) String() string {
    	return fmt.Sprintf("%d", ce)
    }
    
    // Set implements flag.Value.
    func (ce *chunkEncoding) Set(s string) error {
    	switch s {
    	case "0":
    		*ce = delta
    	case "1":
    		*ce = doubleDelta
    	default:
    		return fmt.Errorf("invalid chunk encoding: %s", s)
    	}
    	return nil
    }
    
    const (
    	delta chunkEncoding = iota
    	doubleDelta
    )
    
    // chunkDesc contains meta-data for a chunk. Many of its methods are
    // goroutine-safe proxies for chunk methods.
    type chunkDesc struct {
    	sync.Mutex
    	c              chunk // nil if chunk is evicted.
    	rCnt           int
    	chunkFirstTime model.Time // Used if chunk is evicted.
    	chunkLastTime  model.Time // Used if chunk is evicted.
    
    	// evictListElement is nil if the chunk is not in the evict list.
    	// evictListElement is _not_ protected by the chunkDesc mutex.
    	// It must only be touched by the evict list handler in memorySeriesStorage.
    	evictListElement *list.Element
    }
    
    // newChunkDesc creates a new chunkDesc pointing to the provided chunk. The
    // provided chunk is assumed to be not persisted yet. Therefore, the refCount of
    // the new chunkDesc is 1 (preventing eviction prior to persisting).
    func newChunkDesc(c chunk) *chunkDesc {
    	chunkOps.WithLabelValues(createAndPin).Inc()
    	atomic.AddInt64(&numMemChunks, 1)
    	numMemChunkDescs.Inc()
    	return &chunkDesc{c: c, rCnt: 1}
    }
    
    func (cd *chunkDesc) add(s *model.SamplePair) []chunk {
    	cd.Lock()
    	defer cd.Unlock()
    
    	return cd.c.add(s)
    }
    
    // pin increments the refCount by one. Upon increment from 0 to 1, this
    // chunkDesc is removed from the evict list. To enable the latter, the
    // evictRequests channel has to be provided.
    func (cd *chunkDesc) pin(evictRequests chan<- evictRequest) {
    	cd.Lock()
    	defer cd.Unlock()
    
    	if cd.rCnt == 0 {
    		// Remove ourselves from the evict list.
    		evictRequests <- evictRequest{cd, false}
    	}
    	cd.rCnt++
    }
    
    // unpin decrements the refCount by one. Upon decrement from 1 to 0, this
    // chunkDesc is added to the evict list. To enable the latter, the evictRequests
    // channel has to be provided.
    func (cd *chunkDesc) unpin(evictRequests chan<- evictRequest) {
    	cd.Lock()
    	defer cd.Unlock()
    
    	if cd.rCnt == 0 {
    		panic("cannot unpin already unpinned chunk")
    	}
    	cd.rCnt--
    	if cd.rCnt == 0 {
    		// Add ourselves to the back of the evict list.
    		evictRequests <- evictRequest{cd, true}
    	}
    }
    
    func (cd *chunkDesc) refCount() int {
    	cd.Lock()
    	defer cd.Unlock()
    
    	return cd.rCnt
    }
    
    func (cd *chunkDesc) firstTime() model.Time {
    	cd.Lock()
    	defer cd.Unlock()
    
    	if cd.c == nil {
    		return cd.chunkFirstTime
    	}
    	return cd.c.firstTime()
    }
    
    func (cd *chunkDesc) lastTime() model.Time {
    	cd.Lock()
    	defer cd.Unlock()
    
    	if cd.c == nil {
    		return cd.chunkLastTime
    	}
    	return cd.c.newIterator().lastTimestamp()
    }
    
    func (cd *chunkDesc) lastSamplePair() *model.SamplePair {
    	cd.Lock()
    	defer cd.Unlock()
    
    	if cd.c == nil {
    		return nil
    	}
    	it := cd.c.newIterator()
    	return &model.SamplePair{
    		Timestamp: it.lastTimestamp(),
    		Value:     it.lastSampleValue(),
    	}
    }
    
    func (cd *chunkDesc) isEvicted() bool {
    	cd.Lock()
    	defer cd.Unlock()
    
    	return cd.c == nil
    }
    
    func (cd *chunkDesc) contains(t model.Time) bool {
    	return !t.Before(cd.firstTime()) && !t.After(cd.lastTime())
    }
    
    func (cd *chunkDesc) chunk() chunk {
    	cd.Lock()
    	defer cd.Unlock()
    
    	return cd.c
    }
    
    func (cd *chunkDesc) setChunk(c chunk) {
    	cd.Lock()
    	defer cd.Unlock()
    
    	if cd.c != nil {
    		panic("chunk already set")
    	}
    	cd.c = c
    }
    
    // maybeEvict evicts the chunk if the refCount is 0. It returns whether the chunk
    // is now evicted, which includes the case that the chunk was evicted even
    // before this method was called.
    func (cd *chunkDesc) maybeEvict() bool {
    	cd.Lock()
    	defer cd.Unlock()
    
    	if cd.c == nil {
    		return true
    	}
    	if cd.rCnt != 0 {
    		return false
    	}
    	cd.chunkFirstTime = cd.c.firstTime()
    	cd.chunkLastTime = cd.c.newIterator().lastTimestamp()
    	cd.c = nil
    	chunkOps.WithLabelValues(evict).Inc()
    	atomic.AddInt64(&numMemChunks, -1)
    	return true
    }
    
    // chunk is the interface for all chunks. Chunks are generally not
    // goroutine-safe.
    type chunk interface {
	// add adds a SamplePair to the chunk, performs any necessary
	// re-encoding, and adds any necessary overflow chunks. It returns the
	// new version of the original chunk, followed by overflow chunks, if
	// any. The first chunk returned might be the same as the original one
	// or a newly allocated version. In any case, take the returned chunk as
	// the relevant one and discard the original chunk.
    	add(sample *model.SamplePair) []chunk
    	clone() chunk
    	firstTime() model.Time
    	newIterator() chunkIterator
    	marshal(io.Writer) error
    	unmarshal(io.Reader) error
    	unmarshalFromBuf([]byte)
    	encoding() chunkEncoding
    }
    
    // A chunkIterator enables efficient access to the content of a chunk. It is
    // generally not safe to use a chunkIterator concurrently with or after chunk
    // mutation.
    type chunkIterator interface {
    	// length returns the number of samples in the chunk.
    	length() int
    	// Gets the timestamp of the n-th sample in the chunk.
    	timestampAtIndex(int) model.Time
    	// Gets the last timestamp in the chunk.
    	lastTimestamp() model.Time
    	// Gets the sample value of the n-th sample in the chunk.
    	sampleValueAtIndex(int) model.SampleValue
    	// Gets the last sample value in the chunk.
    	lastSampleValue() model.SampleValue
    	// Gets the two values that are immediately adjacent to a given time. In
	// case a value exists at precisely the given time, only that single
    	// value is returned. Only the first or last value is returned (as a
    	// single value), if the given time is before or after the first or last
    	// value, respectively.
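	// For example, with samples at times 10 and 20: valueAtTime(15)
	// returns both, valueAtTime(10) returns only the sample at 10, and
	// valueAtTime(5) returns only the first sample (the one at 10).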
    	valueAtTime(model.Time) []model.SamplePair
    	// Gets all values contained within a given interval.
    	rangeValues(metric.Interval) []model.SamplePair
    	// Whether a given timestamp is contained between first and last value
    	// in the chunk.
    	contains(model.Time) bool
    	// values returns a channel, from which all sample values in the chunk
    	// can be received in order. The channel is closed after the last
    	// one. It is generally not safe to mutate the chunk while the channel
    	// is still open.
    	values() <-chan *model.SamplePair
    }
    
    func transcodeAndAdd(dst chunk, src chunk, s *model.SamplePair) []chunk {
    	chunkOps.WithLabelValues(transcode).Inc()
    
    	head := dst
    	body := []chunk{}
    	for v := range src.newIterator().values() {
    		newChunks := head.add(v)
    		body = append(body, newChunks[:len(newChunks)-1]...)
    		head = newChunks[len(newChunks)-1]
    	}
    	newChunks := head.add(s)
    	return append(body, newChunks...)
    }
    
    // newChunk creates a new chunk according to the encoding set by the
    // defaultChunkEncoding flag.
    func newChunk() chunk {
    	return newChunkForEncoding(DefaultChunkEncoding)
    }
    
    func newChunkForEncoding(encoding chunkEncoding) chunk {
    	switch encoding {
    	case delta:
    		return newDeltaEncodedChunk(d1, d0, true, chunkLen)
    	case doubleDelta:
    		return newDoubleDeltaEncodedChunk(d1, d0, true, chunkLen)
    	default:
    		panic(fmt.Errorf("unknown chunk encoding: %v", encoding))
    	}
    }
    prometheus-0.16.2+ds/storage/local/codable/000077500000000000000000000000001265137125100205275ustar00rootroot00000000000000prometheus-0.16.2+ds/storage/local/codable/codable.go000066400000000000000000000306001265137125100224460ustar00rootroot00000000000000// Copyright 2014 The Prometheus Authors
    // Licensed under the Apache License, Version 2.0 (the "License");
    // you may not use this file except in compliance with the License.
    // You may obtain a copy of the License at
    //
    // http://www.apache.org/licenses/LICENSE-2.0
    //
    // Unless required by applicable law or agreed to in writing, software
    // distributed under the License is distributed on an "AS IS" BASIS,
    // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    // See the License for the specific language governing permissions and
    // limitations under the License.
    
    // Package codable provides types that implement encoding.BinaryMarshaler and
    // encoding.BinaryUnmarshaler and functions that help to encode and decode
    // primitives. The Prometheus storage backend uses them to persist objects to
    // files and to save objects in LevelDB.
    //
    // The encodings used in this package are designed in a way that objects can be
    // unmarshaled from a continuous byte stream, i.e. the information when to stop
    // reading is determined by the format. No separate termination information is
    // needed.
    //
    // Strings are encoded as the length of their bytes as a varint followed by
    // their bytes.
    //
    // Slices are encoded as their length as a varint followed by their elements.
    //
    // Maps are encoded as the number of mappings as a varint, followed by the
    // mappings, each of which consists of the key followed by the value.
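//
// For example, with the signed varint encoding used below, the string "foo"
// encodes to the single length byte 0x06 (the zig-zag varint encoding of
// length 3) followed by the three bytes 'f', 'o', 'o'.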
    package codable
    
    import (
    	"bytes"
    	"encoding/binary"
    	"fmt"
    	"io"
    	"sync"
    
    	"github.com/prometheus/common/model"
    )
    
    // A byteReader is an io.ByteReader that also implements the vanilla io.Reader
    // interface.
    type byteReader interface {
    	io.Reader
    	io.ByteReader
    }
    
    // bufPool is a pool for staging buffers. Using a pool allows concurrency-safe
// reuse of buffers.
    var bufPool sync.Pool
    
    // getBuf returns a buffer from the pool. The length of the returned slice is l.
    func getBuf(l int) []byte {
    	x := bufPool.Get()
    	if x == nil {
    		return make([]byte, l)
    	}
    	buf := x.([]byte)
    	if cap(buf) < l {
    		return make([]byte, l)
    	}
    	return buf[:l]
    }
    
    // putBuf returns a buffer to the pool.
    func putBuf(buf []byte) {
    	bufPool.Put(buf)
    }
    
    // EncodeVarint encodes an int64 as a varint and writes it to an io.Writer.
    // It returns the number of bytes written.
    // This is a GC-friendly implementation that takes the required staging buffer
    // from a buffer pool.
    func EncodeVarint(w io.Writer, i int64) (int, error) {
    	buf := getBuf(binary.MaxVarintLen64)
    	defer putBuf(buf)
    
    	bytesWritten := binary.PutVarint(buf, i)
    	_, err := w.Write(buf[:bytesWritten])
    	return bytesWritten, err
    }
    
    // EncodeUvarint encodes an uint64 as a varint and writes it to an io.Writer.
    // It returns the number of bytes written.
    // This is a GC-friendly implementation that takes the required staging buffer
    // from a buffer pool.
    func EncodeUvarint(w io.Writer, i uint64) (int, error) {
    	buf := getBuf(binary.MaxVarintLen64)
    	defer putBuf(buf)
    
    	bytesWritten := binary.PutUvarint(buf, i)
    	_, err := w.Write(buf[:bytesWritten])
    	return bytesWritten, err
    }
    
    // EncodeUint64 writes an uint64 to an io.Writer in big-endian byte-order.
    // This is a GC-friendly implementation that takes the required staging buffer
    // from a buffer pool.
    func EncodeUint64(w io.Writer, u uint64) error {
    	buf := getBuf(8)
    	defer putBuf(buf)
    
    	binary.BigEndian.PutUint64(buf, u)
    	_, err := w.Write(buf)
    	return err
    }
    
    // DecodeUint64 reads an uint64 from an io.Reader in big-endian byte-order.
    // This is a GC-friendly implementation that takes the required staging buffer
    // from a buffer pool.
    func DecodeUint64(r io.Reader) (uint64, error) {
    	buf := getBuf(8)
    	defer putBuf(buf)
    
    	if _, err := io.ReadFull(r, buf); err != nil {
    		return 0, err
    	}
    	return binary.BigEndian.Uint64(buf), nil
    }
    
    // encodeString writes the varint encoded length followed by the bytes of s to
    // b.
    func encodeString(b *bytes.Buffer, s string) error {
    	if _, err := EncodeVarint(b, int64(len(s))); err != nil {
    		return err
    	}
    	if _, err := b.WriteString(s); err != nil {
    		return err
    	}
    	return nil
    }
    
    // decodeString decodes a string encoded by encodeString.
    func decodeString(b byteReader) (string, error) {
    	length, err := binary.ReadVarint(b)
    	if err != nil {
    		return "", err
    	}
    
    	buf := getBuf(int(length))
    	defer putBuf(buf)
    
    	if _, err := io.ReadFull(b, buf); err != nil {
    		return "", err
    	}
    	return string(buf), nil
    }
    
    // A Metric is a model.Metric that implements
    // encoding.BinaryMarshaler and encoding.BinaryUnmarshaler.
    type Metric model.Metric
    
    // MarshalBinary implements encoding.BinaryMarshaler.
    func (m Metric) MarshalBinary() ([]byte, error) {
    	buf := &bytes.Buffer{}
    	if _, err := EncodeVarint(buf, int64(len(m))); err != nil {
    		return nil, err
    	}
    	for l, v := range m {
    		if err := encodeString(buf, string(l)); err != nil {
    			return nil, err
    		}
    		if err := encodeString(buf, string(v)); err != nil {
    			return nil, err
    		}
    	}
    	return buf.Bytes(), nil
    }
    
    // UnmarshalBinary implements encoding.BinaryUnmarshaler. It can be used with the
    // zero value of Metric.
    func (m *Metric) UnmarshalBinary(buf []byte) error {
    	return m.UnmarshalFromReader(bytes.NewReader(buf))
    }
    
    // UnmarshalFromReader unmarshals a Metric from a reader that implements
    // both, io.Reader and io.ByteReader. It can be used with the zero value of
    // Metric.
    func (m *Metric) UnmarshalFromReader(r byteReader) error {
    	numLabelPairs, err := binary.ReadVarint(r)
    	if err != nil {
    		return err
    	}
    	*m = make(Metric, numLabelPairs)
    
    	for ; numLabelPairs > 0; numLabelPairs-- {
    		ln, err := decodeString(r)
    		if err != nil {
    			return err
    		}
    		lv, err := decodeString(r)
    		if err != nil {
    			return err
    		}
    		(*m)[model.LabelName(ln)] = model.LabelValue(lv)
    	}
    	return nil
    }
    
    // A Fingerprint is a model.Fingerprint that implements
    // encoding.BinaryMarshaler and encoding.BinaryUnmarshaler. The implementation
    // depends on model.Fingerprint to be convertible to uint64. It encodes
    // the fingerprint as a big-endian uint64.
    type Fingerprint model.Fingerprint
    
    // MarshalBinary implements encoding.BinaryMarshaler.
    func (fp Fingerprint) MarshalBinary() ([]byte, error) {
    	b := make([]byte, 8)
    	binary.BigEndian.PutUint64(b, uint64(fp))
    	return b, nil
    }
    
    // UnmarshalBinary implements encoding.BinaryUnmarshaler.
    func (fp *Fingerprint) UnmarshalBinary(buf []byte) error {
    	*fp = Fingerprint(binary.BigEndian.Uint64(buf))
    	return nil
    }
    
    // FingerprintSet is a map[model.Fingerprint]struct{} that
    // implements encoding.BinaryMarshaler and encoding.BinaryUnmarshaler. Its
    // binary form is identical to that of Fingerprints.
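// For illustration (map iteration order aside), a set holding the
// fingerprints 1 and 2 marshals to the length byte 0x04 (the signed varint
// for 2) followed by two big-endian 8-byte fingerprint values.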
    type FingerprintSet map[model.Fingerprint]struct{}
    
    // MarshalBinary implements encoding.BinaryMarshaler.
    func (fps FingerprintSet) MarshalBinary() ([]byte, error) {
    	b := make([]byte, binary.MaxVarintLen64+len(fps)*8)
    	lenBytes := binary.PutVarint(b, int64(len(fps)))
    	offset := lenBytes
    
    	for fp := range fps {
    		binary.BigEndian.PutUint64(b[offset:], uint64(fp))
    		offset += 8
    	}
    	return b[:len(fps)*8+lenBytes], nil
    }
    
    // UnmarshalBinary implements encoding.BinaryUnmarshaler.
    func (fps *FingerprintSet) UnmarshalBinary(buf []byte) error {
    	numFPs, offset := binary.Varint(buf)
    	if offset <= 0 {
    		return fmt.Errorf("could not decode length of Fingerprints, varint decoding returned %d", offset)
    	}
    	*fps = make(FingerprintSet, numFPs)
    
    	for i := 0; i < int(numFPs); i++ {
    		(*fps)[model.Fingerprint(binary.BigEndian.Uint64(buf[offset+i*8:]))] = struct{}{}
    	}
    	return nil
    }
    
    // Fingerprints is a model.Fingerprints that implements
    // encoding.BinaryMarshaler and encoding.BinaryUnmarshaler. Its binary form is
    // identical to that of FingerprintSet.
    type Fingerprints model.Fingerprints
    
    // MarshalBinary implements encoding.BinaryMarshaler.
    func (fps Fingerprints) MarshalBinary() ([]byte, error) {
    	b := make([]byte, binary.MaxVarintLen64+len(fps)*8)
    	lenBytes := binary.PutVarint(b, int64(len(fps)))
    
    	for i, fp := range fps {
    		binary.BigEndian.PutUint64(b[i*8+lenBytes:], uint64(fp))
    	}
    	return b[:len(fps)*8+lenBytes], nil
    }
    
    // UnmarshalBinary implements encoding.BinaryUnmarshaler.
    func (fps *Fingerprints) UnmarshalBinary(buf []byte) error {
    	numFPs, offset := binary.Varint(buf)
    	if offset <= 0 {
    		return fmt.Errorf("could not decode length of Fingerprints, varint decoding returned %d", offset)
    	}
    	*fps = make(Fingerprints, numFPs)
    
    	for i := range *fps {
    		(*fps)[i] = model.Fingerprint(binary.BigEndian.Uint64(buf[offset+i*8:]))
    	}
    	return nil
    }
    
    // LabelPair is a model.LabelPair that implements
    // encoding.BinaryMarshaler and encoding.BinaryUnmarshaler.
    type LabelPair model.LabelPair
    
    // MarshalBinary implements encoding.BinaryMarshaler.
    func (lp LabelPair) MarshalBinary() ([]byte, error) {
    	buf := &bytes.Buffer{}
    	if err := encodeString(buf, string(lp.Name)); err != nil {
    		return nil, err
    	}
    	if err := encodeString(buf, string(lp.Value)); err != nil {
    		return nil, err
    	}
    	return buf.Bytes(), nil
    }
    
    // UnmarshalBinary implements encoding.BinaryUnmarshaler.
    func (lp *LabelPair) UnmarshalBinary(buf []byte) error {
    	r := bytes.NewReader(buf)
    	n, err := decodeString(r)
    	if err != nil {
    		return err
    	}
    	v, err := decodeString(r)
    	if err != nil {
    		return err
    	}
    	lp.Name = model.LabelName(n)
    	lp.Value = model.LabelValue(v)
    	return nil
    }
    
    // LabelName is a model.LabelName that implements
    // encoding.BinaryMarshaler and encoding.BinaryUnmarshaler.
    type LabelName model.LabelName
    
    // MarshalBinary implements encoding.BinaryMarshaler.
    func (l LabelName) MarshalBinary() ([]byte, error) {
    	buf := &bytes.Buffer{}
    	if err := encodeString(buf, string(l)); err != nil {
    		return nil, err
    	}
    	return buf.Bytes(), nil
    }
    
    // UnmarshalBinary implements encoding.BinaryUnmarshaler.
    func (l *LabelName) UnmarshalBinary(buf []byte) error {
    	r := bytes.NewReader(buf)
    	n, err := decodeString(r)
    	if err != nil {
    		return err
    	}
    	*l = LabelName(n)
    	return nil
    }
    
    // LabelValueSet is a map[model.LabelValue]struct{} that implements
    // encoding.BinaryMarshaler and encoding.BinaryUnmarshaler. Its binary form is
    // identical to that of LabelValues.
    type LabelValueSet map[model.LabelValue]struct{}
    
    // MarshalBinary implements encoding.BinaryMarshaler.
    func (vs LabelValueSet) MarshalBinary() ([]byte, error) {
    	buf := &bytes.Buffer{}
    	if _, err := EncodeVarint(buf, int64(len(vs))); err != nil {
    		return nil, err
    	}
    	for v := range vs {
    		if err := encodeString(buf, string(v)); err != nil {
    			return nil, err
    		}
    	}
    	return buf.Bytes(), nil
    }
    
    // UnmarshalBinary implements encoding.BinaryUnmarshaler.
    func (vs *LabelValueSet) UnmarshalBinary(buf []byte) error {
    	r := bytes.NewReader(buf)
    	numValues, err := binary.ReadVarint(r)
    	if err != nil {
    		return err
    	}
    	*vs = make(LabelValueSet, numValues)
    
    	for i := int64(0); i < numValues; i++ {
    		v, err := decodeString(r)
    		if err != nil {
    			return err
    		}
    		(*vs)[model.LabelValue(v)] = struct{}{}
    	}
    	return nil
    }
    
    // LabelValues is a model.LabelValues that implements
    // encoding.BinaryMarshaler and encoding.BinaryUnmarshaler. Its binary form is
    // identical to that of LabelValueSet.
    type LabelValues model.LabelValues
    
    // MarshalBinary implements encoding.BinaryMarshaler.
    func (vs LabelValues) MarshalBinary() ([]byte, error) {
    	buf := &bytes.Buffer{}
    	if _, err := EncodeVarint(buf, int64(len(vs))); err != nil {
    		return nil, err
    	}
    	for _, v := range vs {
    		if err := encodeString(buf, string(v)); err != nil {
    			return nil, err
    		}
    	}
    	return buf.Bytes(), nil
    }
    
    // UnmarshalBinary implements encoding.BinaryUnmarshaler.
    func (vs *LabelValues) UnmarshalBinary(buf []byte) error {
    	r := bytes.NewReader(buf)
    	numValues, err := binary.ReadVarint(r)
    	if err != nil {
    		return err
    	}
    	*vs = make(LabelValues, numValues)
    
    	for i := range *vs {
    		v, err := decodeString(r)
    		if err != nil {
    			return err
    		}
    		(*vs)[i] = model.LabelValue(v)
    	}
    	return nil
    }
    
    // TimeRange is used to define a time range and implements
    // encoding.BinaryMarshaler and encoding.BinaryUnmarshaler.
    type TimeRange struct {
    	First, Last model.Time
    }
    
    // MarshalBinary implements encoding.BinaryMarshaler.
    func (tr TimeRange) MarshalBinary() ([]byte, error) {
    	buf := &bytes.Buffer{}
    	if _, err := EncodeVarint(buf, int64(tr.First)); err != nil {
    		return nil, err
    	}
    	if _, err := EncodeVarint(buf, int64(tr.Last)); err != nil {
    		return nil, err
    	}
    	return buf.Bytes(), nil
    }
    
    // UnmarshalBinary implements encoding.BinaryUnmarshaler.
    func (tr *TimeRange) UnmarshalBinary(buf []byte) error {
    	r := bytes.NewReader(buf)
    	first, err := binary.ReadVarint(r)
    	if err != nil {
    		return err
    	}
    	last, err := binary.ReadVarint(r)
    	if err != nil {
    		return err
    	}
    	tr.First = model.Time(first)
    	tr.Last = model.Time(last)
    	return nil
    }
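
// Illustrative round-trip sketch (not part of the original file): every type
// in this package follows the same MarshalBinary/UnmarshalBinary pattern, so
// a TimeRange can be persisted and restored like this. Error handling is
// elided for brevity.
//
//	tr := TimeRange{First: 42, Last: 2001}
//	buf, _ := tr.MarshalBinary()
//	var restored TimeRange
//	_ = restored.UnmarshalBinary(buf)
//	// restored now equals tr.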
    prometheus-0.16.2+ds/storage/local/codable/codable_test.go000066400000000000000000000072761265137125100235220ustar00rootroot00000000000000// Copyright 2014 The Prometheus Authors
    // Licensed under the Apache License, Version 2.0 (the "License");
    // you may not use this file except in compliance with the License.
    // You may obtain a copy of the License at
    //
    // http://www.apache.org/licenses/LICENSE-2.0
    //
    // Unless required by applicable law or agreed to in writing, software
    // distributed under the License is distributed on an "AS IS" BASIS,
    // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    // See the License for the specific language governing permissions and
    // limitations under the License.
    
    package codable
    
    import (
    	"bytes"
    	"encoding"
    	"reflect"
    	"testing"
    )
    
    func newFingerprint(fp int64) *Fingerprint {
    	cfp := Fingerprint(fp)
    	return &cfp
    }
    
    func newLabelName(ln string) *LabelName {
    	cln := LabelName(ln)
    	return &cln
    }
    
    func TestUint64(t *testing.T) {
    	var b bytes.Buffer
    	const n = uint64(422010471112345)
    	if err := EncodeUint64(&b, n); err != nil {
    		t.Fatal(err)
    	}
    	got, err := DecodeUint64(&b)
    	if err != nil {
    		t.Fatal(err)
    	}
    	if got != n {
    		t.Errorf("want %d, got %d", n, got)
    	}
    }
    
    var scenarios = []struct {
    	in    encoding.BinaryMarshaler
    	out   encoding.BinaryUnmarshaler
    	equal func(in, out interface{}) bool
    }{
    	{
    		in: &Metric{
    			"label_1": "value_2",
    			"label_2": "value_2",
    			"label_3": "value_3",
    		},
    		out: &Metric{},
    	}, {
    		in:  newFingerprint(12345),
    		out: newFingerprint(0),
    	}, {
    		in:  &Fingerprints{1, 2, 56, 1234},
    		out: &Fingerprints{},
    	}, {
    		in:  &Fingerprints{1, 2, 56, 1234},
    		out: &FingerprintSet{},
    		equal: func(in, out interface{}) bool {
    			inSet := FingerprintSet{}
    			for _, fp := range *(in.(*Fingerprints)) {
    				inSet[fp] = struct{}{}
    			}
    			return reflect.DeepEqual(inSet, *(out.(*FingerprintSet)))
    		},
    	}, {
    		in: &FingerprintSet{
    			1:    struct{}{},
    			2:    struct{}{},
    			56:   struct{}{},
    			1234: struct{}{},
    		},
    		out: &FingerprintSet{},
    	}, {
    		in: &FingerprintSet{
    			1:    struct{}{},
    			2:    struct{}{},
    			56:   struct{}{},
    			1234: struct{}{},
    		},
    		out: &Fingerprints{},
    		equal: func(in, out interface{}) bool {
    			outSet := FingerprintSet{}
    			for _, fp := range *(out.(*Fingerprints)) {
    				outSet[fp] = struct{}{}
    			}
    			return reflect.DeepEqual(outSet, *(in.(*FingerprintSet)))
    		},
    	}, {
    		in: &LabelPair{
    			Name:  "label_name",
    			Value: "label_value",
    		},
    		out: &LabelPair{},
    	}, {
    		in:  newLabelName("label_name"),
    		out: newLabelName(""),
    	}, {
    		in:  &LabelValues{"value_1", "value_2", "value_3"},
    		out: &LabelValues{},
    	}, {
    		in:  &LabelValues{"value_1", "value_2", "value_3"},
    		out: &LabelValueSet{},
    		equal: func(in, out interface{}) bool {
    			inSet := LabelValueSet{}
    			for _, lv := range *(in.(*LabelValues)) {
    				inSet[lv] = struct{}{}
    			}
    			return reflect.DeepEqual(inSet, *(out.(*LabelValueSet)))
    		},
    	}, {
    		in: &LabelValueSet{
    			"value_1": struct{}{},
    			"value_2": struct{}{},
    			"value_3": struct{}{},
    		},
    		out: &LabelValueSet{},
    	}, {
    		in: &LabelValueSet{
    			"value_1": struct{}{},
    			"value_2": struct{}{},
    			"value_3": struct{}{},
    		},
    		out: &LabelValues{},
    		equal: func(in, out interface{}) bool {
    			outSet := LabelValueSet{}
    			for _, lv := range *(out.(*LabelValues)) {
    				outSet[lv] = struct{}{}
    			}
    			return reflect.DeepEqual(outSet, *(in.(*LabelValueSet)))
    		},
    	}, {
    		in:  &TimeRange{42, 2001},
    		out: &TimeRange{},
    	},
    }
    
    func TestCodec(t *testing.T) {
    	for i, s := range scenarios {
    		encoded, err := s.in.MarshalBinary()
    		if err != nil {
    			t.Fatal(err)
    		}
    		if err := s.out.UnmarshalBinary(encoded); err != nil {
    			t.Fatal(err)
    		}
    		equal := s.equal
    		if equal == nil {
    			equal = reflect.DeepEqual
    		}
    		if !equal(s.in, s.out) {
    			t.Errorf("%d. Got: %v; want %v; encoded bytes are: %v", i, s.out, s.in, encoded)
    		}
    	}
    }
    prometheus-0.16.2+ds/storage/local/crashrecovery.go000066400000000000000000000376071265137125100223610ustar00rootroot00000000000000// Copyright 2015 The Prometheus Authors
    // Licensed under the Apache License, Version 2.0 (the "License");
    // you may not use this file except in compliance with the License.
    // You may obtain a copy of the License at
    //
    // http://www.apache.org/licenses/LICENSE-2.0
    //
    // Unless required by applicable law or agreed to in writing, software
    // distributed under the License is distributed on an "AS IS" BASIS,
    // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    // See the License for the specific language governing permissions and
    // limitations under the License.
    
    package local
    
    import (
    	"fmt"
    	"io"
    	"os"
    	"path"
    	"strings"
    	"sync/atomic"
    
    	"github.com/prometheus/common/log"
    	"github.com/prometheus/common/model"
    
    	"github.com/prometheus/prometheus/storage/local/codable"
    	"github.com/prometheus/prometheus/storage/local/index"
    )
    
    // recoverFromCrash is called by loadSeriesMapAndHeads if the persistence
    // appears to be dirty after the loading (either because the loading resulted in
    // an error or because the persistence was dirty from the start). Not goroutine
    // safe. Only call before anything else is running (except index processing
    // queue as started by newPersistence).
    func (p *persistence) recoverFromCrash(fingerprintToSeries map[model.Fingerprint]*memorySeries) error {
    	// TODO(beorn): We need proper tests for the crash recovery.
    	log.Warn("Starting crash recovery. Prometheus is inoperational until complete.")
    	log.Warn("To avoid crash recovery in the future, shut down Prometheus with SIGTERM or a HTTP POST to /-/quit.")
    
    	fpsSeen := map[model.Fingerprint]struct{}{}
    	count := 0
    	seriesDirNameFmt := fmt.Sprintf("%%0%dx", seriesDirNameLen)
    
    	// Delete the fingerprint mapping file as it might be stale or
    	// corrupt. We'll rebuild the mappings as we go.
    	if err := os.RemoveAll(p.mappingsFileName()); err != nil {
    		return fmt.Errorf("couldn't remove old fingerprint mapping file %s: %s", p.mappingsFileName(), err)
    	}
    	// The mappings to rebuild.
    	fpm := fpMappings{}
    
    	log.Info("Scanning files.")
    	for i := 0; i < 1<<(seriesDirNameLen*4); i++ {
    		dirname := path.Join(p.basePath, fmt.Sprintf(seriesDirNameFmt, i))
    		dir, err := os.Open(dirname)
    		if os.IsNotExist(err) {
    			continue
    		}
    		if err != nil {
    			return err
    		}
    		defer dir.Close()
    		for fis := []os.FileInfo{}; err != io.EOF; fis, err = dir.Readdir(1024) {
    			if err != nil {
    				return err
    			}
    			for _, fi := range fis {
    				fp, ok := p.sanitizeSeries(dirname, fi, fingerprintToSeries, fpm)
    				if ok {
    					fpsSeen[fp] = struct{}{}
    				}
    				count++
    				if count%10000 == 0 {
    					log.Infof("%d files scanned.", count)
    				}
    			}
    		}
    	}
    	log.Infof("File scan complete. %d series found.", len(fpsSeen))
    
    	log.Info("Checking for series without series file.")
    	for fp, s := range fingerprintToSeries {
    		if _, seen := fpsSeen[fp]; !seen {
    			// fp exists in fingerprintToSeries, but has no representation on disk.
    			if s.persistWatermark == len(s.chunkDescs) {
    				// Oops, everything including the head chunk was
    				// already persisted, but nothing on disk.
    				// Thus, we lost that series completely. Clean
    				// up the remnants.
    				delete(fingerprintToSeries, fp)
    				if err := p.purgeArchivedMetric(fp); err != nil {
    					// Purging the archived metric didn't work, so try
    					// to unindex it, just in case it's in the indexes.
    					p.unindexMetric(fp, s.metric)
    				}
    				log.Warnf("Lost series detected: fingerprint %v, metric %v.", fp, s.metric)
    				continue
    			}
    			// If we are here, the only chunks we have are the chunks in the checkpoint.
    			// Adjust things accordingly.
    			if s.persistWatermark > 0 || s.chunkDescsOffset != 0 {
    				minLostChunks := s.persistWatermark + s.chunkDescsOffset
    				if minLostChunks <= 0 {
    					log.Warnf(
    						"Possible loss of chunks for fingerprint %v, metric %v.",
    						fp, s.metric,
    					)
    				} else {
    					log.Warnf(
    						"Lost at least %d chunks for fingerprint %v, metric %v.",
    						minLostChunks, fp, s.metric,
    					)
    				}
    				s.chunkDescs = append(
    					make([]*chunkDesc, 0, len(s.chunkDescs)-s.persistWatermark),
    					s.chunkDescs[s.persistWatermark:]...,
    				)
    				numMemChunkDescs.Sub(float64(s.persistWatermark))
    				s.persistWatermark = 0
    				s.chunkDescsOffset = 0
    			}
    			maybeAddMapping(fp, s.metric, fpm)
    			fpsSeen[fp] = struct{}{} // Add so that fpsSeen is complete.
    		}
    	}
    	log.Info("Check for series without series file complete.")
    
    	if err := p.cleanUpArchiveIndexes(fingerprintToSeries, fpsSeen, fpm); err != nil {
    		return err
    	}
    	if err := p.rebuildLabelIndexes(fingerprintToSeries); err != nil {
    		return err
    	}
    	// Finally rewrite the mappings file if there are any mappings.
    	if len(fpm) > 0 {
    		if err := p.checkpointFPMappings(fpm); err != nil {
    			return err
    		}
    	}
    
    	p.setDirty(false)
    	log.Warn("Crash recovery complete.")
    	return nil
    }
    
    // sanitizeSeries sanitizes a series based on its series file as defined by the
    // provided directory and FileInfo.  The method returns the fingerprint as
    // derived from the directory and file name, and whether the provided file has
    // been sanitized. A file that failed to be sanitized is moved into the
    // "orphaned" sub-directory, if possible.
    //
    // The following steps are performed:
    //
    // - A file whose name doesn't comply with the naming scheme of a series file is
    //   simply moved into the orphaned directory.
    //
    // - If the size of the series file isn't a multiple of the chunk size,
    //   extraneous bytes are truncated.  If the truncation fails, the file is
    //   moved into the orphaned directory.
    //
    // - A file that is empty (after truncation) is deleted.
    //
    // - A series that is not archived (i.e. it is in the fingerprintToSeries map)
    //   is checked for consistency of its various parameters (like persist
    //   watermark, offset of chunkDescs etc.). In particular, overlap between an
    //   in-memory head chunk with the most recent persisted chunk is
    //   checked. Inconsistencies are rectified.
    //
    // - A series that is archived (i.e. it is not in the fingerprintToSeries map)
    //   is checked for its presence in the index of archived series. If it cannot
    //   be found there, it is moved into the orphaned directory.
    func (p *persistence) sanitizeSeries(
    	dirname string, fi os.FileInfo,
    	fingerprintToSeries map[model.Fingerprint]*memorySeries,
    	fpm fpMappings,
    ) (model.Fingerprint, bool) {
    	filename := path.Join(dirname, fi.Name())
    	purge := func() {
    		var err error
    		defer func() {
    			if err != nil {
    				log.Errorf("Failed to move lost series file %s to orphaned directory, deleting it instead. Error was: %s", filename, err)
    				if err = os.Remove(filename); err != nil {
    					log.Errorf("Even deleting file %s did not work: %s", filename, err)
    				}
    			}
    		}()
    		orphanedDir := path.Join(p.basePath, "orphaned", path.Base(dirname))
    		if err = os.MkdirAll(orphanedDir, 0700); err != nil {
    			return
    		}
    		if err = os.Rename(filename, path.Join(orphanedDir, fi.Name())); err != nil {
    			return
    		}
    	}
    
    	var fp model.Fingerprint
    	var err error
    
    	if len(fi.Name()) != fpLen-seriesDirNameLen+len(seriesFileSuffix) ||
    		!strings.HasSuffix(fi.Name(), seriesFileSuffix) {
    		log.Warnf("Unexpected series file name %s.", filename)
    		purge()
    		return fp, false
    	}
    	if fp, err = model.FingerprintFromString(path.Base(dirname) + fi.Name()[:fpLen-seriesDirNameLen]); err != nil {
    		log.Warnf("Error parsing file name %s: %s", filename, err)
    		purge()
    		return fp, false
    	}
    
    	bytesToTrim := fi.Size() % int64(chunkLenWithHeader)
    	chunksInFile := int(fi.Size()) / chunkLenWithHeader
    	modTime := fi.ModTime()
    	if bytesToTrim != 0 {
    		log.Warnf(
    			"Truncating file %s to exactly %d chunks, trimming %d extraneous bytes.",
    			filename, chunksInFile, bytesToTrim,
    		)
    		f, err := os.OpenFile(filename, os.O_WRONLY, 0640)
    		if err != nil {
    			log.Errorf("Could not open file %s: %s", filename, err)
    			purge()
    			return fp, false
		}
		// Make sure the opened series file is closed again.
		defer f.Close()
    		if err := f.Truncate(fi.Size() - bytesToTrim); err != nil {
    			log.Errorf("Failed to truncate file %s: %s", filename, err)
    			purge()
    			return fp, false
    		}
    	}
    	if chunksInFile == 0 {
    		log.Warnf("No chunks left in file %s.", filename)
    		purge()
    		return fp, false
    	}
    
    	s, ok := fingerprintToSeries[fp]
    	if ok { // This series is supposed to not be archived.
    		if s == nil {
    			panic("fingerprint mapped to nil pointer")
    		}
    		maybeAddMapping(fp, s.metric, fpm)
    		if !p.pedanticChecks &&
    			bytesToTrim == 0 &&
    			s.chunkDescsOffset != -1 &&
    			chunksInFile == s.chunkDescsOffset+s.persistWatermark &&
    			modTime.Equal(s.modTime) {
    			// Everything is consistent. We are good.
    			return fp, true
    		}
    		// If we are here, we cannot be sure the series file is
    		// consistent with the checkpoint, so we have to take a closer
    		// look.
    		if s.headChunkClosed {
    			// This is the easy case as we have all chunks on
    			// disk. Treat this series as a freshly unarchived one
    			// by loading the chunkDescs and setting all parameters
    			// based on the loaded chunkDescs.
    			cds, err := p.loadChunkDescs(fp, 0)
    			if err != nil {
    				log.Errorf(
    					"Failed to load chunk descriptors for metric %v, fingerprint %v: %s",
    					s.metric, fp, err,
    				)
    				purge()
    				return fp, false
    			}
    			log.Warnf(
    				"Treating recovered metric %v, fingerprint %v, as freshly unarchived, with %d chunks in series file.",
    				s.metric, fp, len(cds),
    			)
    			s.chunkDescs = cds
    			s.chunkDescsOffset = 0
    			s.savedFirstTime = cds[0].firstTime()
    			s.lastTime = cds[len(cds)-1].lastTime()
    			s.persistWatermark = len(cds)
    			s.modTime = modTime
    			return fp, true
    		}
    		// This is the tricky one: We have chunks from heads.db, but
    		// some of those chunks might already be in the series
    		// file. Strategy: Take the last time of the most recent chunk
    		// in the series file. Then find the oldest chunk among those
    		// from heads.db that has a first time later or equal to the
    		// last time from the series file. Throw away the older chunks
    		// from heads.db and stitch the parts together.
    
    		// First, throw away the chunkDescs without chunks.
    		s.chunkDescs = s.chunkDescs[s.persistWatermark:]
    		numMemChunkDescs.Sub(float64(s.persistWatermark))
    		cds, err := p.loadChunkDescs(fp, 0)
    		if err != nil {
    			log.Errorf(
    				"Failed to load chunk descriptors for metric %v, fingerprint %v: %s",
    				s.metric, fp, err,
    			)
    			purge()
    			return fp, false
    		}
    		s.persistWatermark = len(cds)
    		s.chunkDescsOffset = 0
    		s.savedFirstTime = cds[0].firstTime()
    		s.modTime = modTime
    
    		lastTime := cds[len(cds)-1].lastTime()
    		keepIdx := -1
    		for i, cd := range s.chunkDescs {
    			if cd.firstTime() >= lastTime {
    				keepIdx = i
    				break
    			}
    		}
    		if keepIdx == -1 {
    			log.Warnf(
    				"Recovered metric %v, fingerprint %v: all %d chunks recovered from series file.",
    				s.metric, fp, chunksInFile,
    			)
    			numMemChunkDescs.Sub(float64(len(s.chunkDescs)))
    			atomic.AddInt64(&numMemChunks, int64(-len(s.chunkDescs)))
    			s.chunkDescs = cds
    			s.headChunkClosed = true
    			return fp, true
    		}
    		log.Warnf(
    			"Recovered metric %v, fingerprint %v: recovered %d chunks from series file, recovered %d chunks from checkpoint.",
    			s.metric, fp, chunksInFile, len(s.chunkDescs)-keepIdx,
    		)
    		numMemChunkDescs.Sub(float64(keepIdx))
    		atomic.AddInt64(&numMemChunks, int64(-keepIdx))
    		s.chunkDescs = append(cds, s.chunkDescs[keepIdx:]...)
    		return fp, true
    	}
    	// This series is supposed to be archived.
    	metric, err := p.archivedMetric(fp)
    	if err != nil {
    		log.Errorf(
    			"Fingerprint %v assumed archived but couldn't be looked up in archived index: %s",
    			fp, err,
    		)
    		purge()
    		return fp, false
    	}
    	if metric == nil {
    		log.Warnf(
    			"Fingerprint %v assumed archived but couldn't be found in archived index.",
    			fp,
    		)
    		purge()
    		return fp, false
    	}
    	// This series looks like a properly archived one.
    	maybeAddMapping(fp, metric, fpm)
    	return fp, true
    }
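
// Illustrative sketch (not part of the original source): a file that fails
// sanitizing is moved out of its series directory into a parallel "orphaned"
// tree, preserving the shard directory name, e.g.
//
//	<basePath>/b0/04b821ca50ba26.db  ->  <basePath>/orphaned/b0/04b821ca50ba26.db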
    
    func (p *persistence) cleanUpArchiveIndexes(
    	fpToSeries map[model.Fingerprint]*memorySeries,
    	fpsSeen map[model.Fingerprint]struct{},
    	fpm fpMappings,
    ) error {
    	log.Info("Cleaning up archive indexes.")
    	var fp codable.Fingerprint
    	var m codable.Metric
    	count := 0
    	if err := p.archivedFingerprintToMetrics.ForEach(func(kv index.KeyValueAccessor) error {
    		count++
    		if count%10000 == 0 {
    			log.Infof("%d archived metrics checked.", count)
    		}
    		if err := kv.Key(&fp); err != nil {
    			return err
    		}
    		_, fpSeen := fpsSeen[model.Fingerprint(fp)]
    		inMemory := false
    		if fpSeen {
    			_, inMemory = fpToSeries[model.Fingerprint(fp)]
    		}
    		if !fpSeen || inMemory {
    			if inMemory {
    				log.Warnf("Archive clean-up: Fingerprint %v is not archived. Purging from archive indexes.", model.Fingerprint(fp))
    			}
    			if !fpSeen {
    				log.Warnf("Archive clean-up: Fingerprint %v is unknown. Purging from archive indexes.", model.Fingerprint(fp))
    			}
    			// It's fine if the fp is not in the archive indexes.
    			if _, err := p.archivedFingerprintToMetrics.Delete(fp); err != nil {
    				return err
    			}
    			// Delete from timerange index, too.
    			_, err := p.archivedFingerprintToTimeRange.Delete(fp)
    			return err
    		}
    		// fp is legitimately archived. Now we need the metric to check for a mapped fingerprint.
    		if err := kv.Value(&m); err != nil {
    			return err
    		}
    		maybeAddMapping(model.Fingerprint(fp), model.Metric(m), fpm)
    		// Make sure it is in timerange index, too.
    		has, err := p.archivedFingerprintToTimeRange.Has(fp)
    		if err != nil {
    			return err
    		}
    		if has {
    			return nil // All good.
    		}
    		log.Warnf("Archive clean-up: Fingerprint %v is not in time-range index. Unarchiving it for recovery.")
    		// Again, it's fine if fp is not in the archive index.
    		if _, err := p.archivedFingerprintToMetrics.Delete(fp); err != nil {
    			return err
    		}
    		cds, err := p.loadChunkDescs(model.Fingerprint(fp), 0)
    		if err != nil {
    			return err
    		}
    		series := newMemorySeries(model.Metric(m), cds, p.seriesFileModTime(model.Fingerprint(fp)))
    		fpToSeries[model.Fingerprint(fp)] = series
    		return nil
    	}); err != nil {
    		return err
    	}
    	count = 0
    	if err := p.archivedFingerprintToTimeRange.ForEach(func(kv index.KeyValueAccessor) error {
    		count++
    		if count%10000 == 0 {
    			log.Infof("%d archived time ranges checked.", count)
    		}
    		if err := kv.Key(&fp); err != nil {
    			return err
    		}
    		has, err := p.archivedFingerprintToMetrics.Has(fp)
    		if err != nil {
    			return err
    		}
    		if has {
    			return nil // All good.
    		}
    		log.Warnf("Archive clean-up: Purging unknown fingerprint %v in time-range index.", fp)
    		deleted, err := p.archivedFingerprintToTimeRange.Delete(fp)
    		if err != nil {
    			return err
    		}
    		if !deleted {
    			log.Errorf("Fingerprint %v to be deleted from archivedFingerprintToTimeRange not found. This should never happen.", fp)
    		}
    		return nil
    	}); err != nil {
    		return err
    	}
    	log.Info("Clean-up of archive indexes complete.")
    	return nil
    }
    
    func (p *persistence) rebuildLabelIndexes(
    	fpToSeries map[model.Fingerprint]*memorySeries,
    ) error {
    	count := 0
    	log.Info("Rebuilding label indexes.")
    	log.Info("Indexing metrics in memory.")
    	for fp, s := range fpToSeries {
    		p.indexMetric(fp, s.metric)
    		count++
    		if count%10000 == 0 {
    			log.Infof("%d metrics queued for indexing.", count)
    		}
    	}
    	log.Info("Indexing archived metrics.")
    	var fp codable.Fingerprint
    	var m codable.Metric
    	if err := p.archivedFingerprintToMetrics.ForEach(func(kv index.KeyValueAccessor) error {
    		if err := kv.Key(&fp); err != nil {
    			return err
    		}
    		if err := kv.Value(&m); err != nil {
    			return err
    		}
    		p.indexMetric(model.Fingerprint(fp), model.Metric(m))
    		count++
    		if count%10000 == 0 {
    			log.Infof("%d metrics queued for indexing.", count)
    		}
    		return nil
    	}); err != nil {
    		return err
    	}
    	log.Info("All requests for rebuilding the label indexes queued. (Actual processing may lag behind.)")
    	return nil
    }
    
    // maybeAddMapping adds a fingerprint mapping to fpm if the FastFingerprint of m is different from fp.
    func maybeAddMapping(fp model.Fingerprint, m model.Metric, fpm fpMappings) {
    	if rawFP := m.FastFingerprint(); rawFP != fp {
    		log.Warnf(
    			"Metric %v with fingerprint %v is mapped from raw fingerprint %v.",
    			m, fp, rawFP,
    		)
    		if mappedFPs, ok := fpm[rawFP]; ok {
    			mappedFPs[metricToUniqueString(m)] = fp
    		} else {
    			fpm[rawFP] = map[string]model.Fingerprint{
    				metricToUniqueString(m): fp,
    			}
    		}
    	}
    }
    prometheus-0.16.2+ds/storage/local/delta.go000066400000000000000000000273621265137125100205700ustar00rootroot00000000000000// Copyright 2014 The Prometheus Authors
    // Licensed under the Apache License, Version 2.0 (the "License");
    // you may not use this file except in compliance with the License.
    // You may obtain a copy of the License at
    //
    // http://www.apache.org/licenses/LICENSE-2.0
    //
    // Unless required by applicable law or agreed to in writing, software
    // distributed under the License is distributed on an "AS IS" BASIS,
    // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    // See the License for the specific language governing permissions and
    // limitations under the License.
    
    package local
    
    import (
    	"encoding/binary"
    	"fmt"
    	"io"
    	"math"
    	"sort"
    
    	"github.com/prometheus/common/model"
    
    	"github.com/prometheus/prometheus/storage/metric"
    )
    
    // The 21-byte header of a delta-encoded chunk looks like:
    //
// - time delta bytes:  1 byte
// - value delta bytes: 1 byte
    // - is integer:        1 byte
    // - base time:         8 bytes
    // - base value:        8 bytes
    // - used buf bytes:    2 bytes
    const (
    	deltaHeaderBytes = 21
    
    	deltaHeaderTimeBytesOffset  = 0
    	deltaHeaderValueBytesOffset = 1
    	deltaHeaderIsIntOffset      = 2
    	deltaHeaderBaseTimeOffset   = 3
    	deltaHeaderBaseValueOffset  = 11
    	deltaHeaderBufLenOffset     = 19
    )
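
// Worked layout sketch (illustrative, not part of the original source): a
// chunk created with d2 time deltas and d1 integer value deltas stores each
// sample in 3 bytes directly after the 21-byte header (the first sample's
// deltas are zero, since it doubles as the base):
//
//	offset  0: 0x02            time delta bytes (d2)
//	offset  1: 0x01            value delta bytes (d1)
//	offset  2: 0x01            is integer
//	offset  3: base time       (8 bytes, little-endian)
//	offset 11: base value      (8 bytes, IEEE 754 float64 bits)
//	offset 19: used buf bytes  (2 bytes, filled in by marshal)
//	offset 21: sample 0        2-byte time delta + 1-byte value delta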
    
    // A deltaEncodedChunk adaptively stores sample timestamps and values with a
    // delta encoding of various types (int, float) and bit widths. However, once 8
    // bytes would be needed to encode a delta value, a fall-back to the absolute
    // numbers happens (so that timestamps are saved directly as int64 and values as
    // float64). It implements the chunk interface.
    type deltaEncodedChunk []byte
    
    // newDeltaEncodedChunk returns a newly allocated deltaEncodedChunk.
    func newDeltaEncodedChunk(tb, vb deltaBytes, isInt bool, length int) *deltaEncodedChunk {
    	if tb < 1 {
    		panic("need at least 1 time delta byte")
    	}
    	if length < deltaHeaderBytes+16 {
    		panic(fmt.Errorf(
    			"chunk length %d bytes is insufficient, need at least %d",
    			length, deltaHeaderBytes+16,
    		))
    	}
    	c := make(deltaEncodedChunk, deltaHeaderIsIntOffset+1, length)
    
    	c[deltaHeaderTimeBytesOffset] = byte(tb)
    	c[deltaHeaderValueBytesOffset] = byte(vb)
    	if vb < d8 && isInt { // Only use int for fewer than 8 value delta bytes.
    		c[deltaHeaderIsIntOffset] = 1
    	} else {
    		c[deltaHeaderIsIntOffset] = 0
    	}
    
    	return &c
    }
    
    // add implements chunk.
    func (c deltaEncodedChunk) add(s *model.SamplePair) []chunk {
    	if c.len() == 0 {
    		c = c[:deltaHeaderBytes]
    		binary.LittleEndian.PutUint64(c[deltaHeaderBaseTimeOffset:], uint64(s.Timestamp))
    		binary.LittleEndian.PutUint64(c[deltaHeaderBaseValueOffset:], math.Float64bits(float64(s.Value)))
    	}
    
    	remainingBytes := cap(c) - len(c)
    	sampleSize := c.sampleSize()
    
    	// Do we generally have space for another sample in this chunk? If not,
    	// overflow into a new one.
    	if remainingBytes < sampleSize {
    		overflowChunks := newChunk().add(s)
    		return []chunk{&c, overflowChunks[0]}
    	}
    
    	baseValue := c.baseValue()
    	dt := s.Timestamp - c.baseTime()
    	if dt < 0 {
    		panic("time delta is less than zero")
    	}
    
    	dv := s.Value - baseValue
    	tb := c.timeBytes()
    	vb := c.valueBytes()
    	isInt := c.isInt()
    
    	// If the new sample is incompatible with the current encoding, reencode the
    	// existing chunk data into new chunk(s).
    
    	ntb, nvb, nInt := tb, vb, isInt
    	if isInt && !isInt64(dv) {
    		// int->float.
    		nvb = d4
    		nInt = false
    	} else if !isInt && vb == d4 && baseValue+model.SampleValue(float32(dv)) != s.Value {
    		// float32->float64.
    		nvb = d8
    	} else {
    		if tb < d8 {
    			// Maybe more bytes for timestamp.
    			ntb = max(tb, bytesNeededForUnsignedTimestampDelta(dt))
    		}
    		if c.isInt() && vb < d8 {
    			// Maybe more bytes for sample value.
    			nvb = max(vb, bytesNeededForIntegerSampleValueDelta(dv))
    		}
    	}
    	if tb != ntb || vb != nvb || isInt != nInt {
    		if len(c)*2 < cap(c) {
    			return transcodeAndAdd(newDeltaEncodedChunk(ntb, nvb, nInt, cap(c)), &c, s)
    		}
    		// Chunk is already half full. Better create a new one and save the transcoding efforts.
    		overflowChunks := newChunk().add(s)
    		return []chunk{&c, overflowChunks[0]}
    	}
    
    	offset := len(c)
    	c = c[:offset+sampleSize]
    
    	switch tb {
    	case d1:
    		c[offset] = byte(dt)
    	case d2:
    		binary.LittleEndian.PutUint16(c[offset:], uint16(dt))
    	case d4:
    		binary.LittleEndian.PutUint32(c[offset:], uint32(dt))
    	case d8:
    		// Store the absolute value (no delta) in case of d8.
    		binary.LittleEndian.PutUint64(c[offset:], uint64(s.Timestamp))
    	default:
    		panic("invalid number of bytes for time delta")
    	}
    
    	offset += int(tb)
    
    	if c.isInt() {
    		switch vb {
    		case d0:
    			// No-op. Constant value is stored as base value.
    		case d1:
    			c[offset] = byte(int8(dv))
    		case d2:
    			binary.LittleEndian.PutUint16(c[offset:], uint16(int16(dv)))
    		case d4:
    			binary.LittleEndian.PutUint32(c[offset:], uint32(int32(dv)))
    		// d8 must not happen. Those samples are encoded as float64.
    		default:
    			panic("invalid number of bytes for integer delta")
    		}
    	} else {
    		switch vb {
    		case d4:
    			binary.LittleEndian.PutUint32(c[offset:], math.Float32bits(float32(dv)))
    		case d8:
    			// Store the absolute value (no delta) in case of d8.
    			binary.LittleEndian.PutUint64(c[offset:], math.Float64bits(float64(s.Value)))
    		default:
    			panic("invalid number of bytes for floating point delta")
    		}
    	}
    	return []chunk{&c}
    }
    
    // clone implements chunk.
    func (c deltaEncodedChunk) clone() chunk {
    	clone := make(deltaEncodedChunk, len(c), cap(c))
    	copy(clone, c)
    	return &clone
    }
    
    // firstTime implements chunk.
    func (c deltaEncodedChunk) firstTime() model.Time {
    	return c.baseTime()
    }
    
    // newIterator implements chunk.
    func (c *deltaEncodedChunk) newIterator() chunkIterator {
    	return &deltaEncodedChunkIterator{
    		c:      *c,
    		len:    c.len(),
    		baseT:  c.baseTime(),
    		baseV:  c.baseValue(),
    		tBytes: c.timeBytes(),
    		vBytes: c.valueBytes(),
    		isInt:  c.isInt(),
    	}
    }
    
    // marshal implements chunk.
    func (c deltaEncodedChunk) marshal(w io.Writer) error {
    	if len(c) > math.MaxUint16 {
    		panic("chunk buffer length would overflow a 16 bit uint.")
    	}
    	binary.LittleEndian.PutUint16(c[deltaHeaderBufLenOffset:], uint16(len(c)))
    
    	n, err := w.Write(c[:cap(c)])
    	if err != nil {
    		return err
    	}
    	if n != cap(c) {
		return fmt.Errorf("wanted to write %d bytes, wrote %d", cap(c), n)
    	}
    	return nil
    }
    
    // unmarshal implements chunk.
    func (c *deltaEncodedChunk) unmarshal(r io.Reader) error {
    	*c = (*c)[:cap(*c)]
    	if _, err := io.ReadFull(r, *c); err != nil {
    		return err
    	}
    	*c = (*c)[:binary.LittleEndian.Uint16((*c)[deltaHeaderBufLenOffset:])]
    	return nil
    }
    
    // unmarshalFromBuf implements chunk.
    func (c *deltaEncodedChunk) unmarshalFromBuf(buf []byte) {
    	*c = (*c)[:cap(*c)]
    	copy(*c, buf)
    	*c = (*c)[:binary.LittleEndian.Uint16((*c)[deltaHeaderBufLenOffset:])]
    }
    
    // encoding implements chunk.
    func (c deltaEncodedChunk) encoding() chunkEncoding { return delta }
    
    func (c deltaEncodedChunk) timeBytes() deltaBytes {
    	return deltaBytes(c[deltaHeaderTimeBytesOffset])
    }
    
    func (c deltaEncodedChunk) valueBytes() deltaBytes {
    	return deltaBytes(c[deltaHeaderValueBytesOffset])
    }
    
    func (c deltaEncodedChunk) isInt() bool {
    	return c[deltaHeaderIsIntOffset] == 1
    }
    
    func (c deltaEncodedChunk) baseTime() model.Time {
    	return model.Time(binary.LittleEndian.Uint64(c[deltaHeaderBaseTimeOffset:]))
    }
    
    func (c deltaEncodedChunk) baseValue() model.SampleValue {
    	return model.SampleValue(math.Float64frombits(binary.LittleEndian.Uint64(c[deltaHeaderBaseValueOffset:])))
    }
    
    func (c deltaEncodedChunk) sampleSize() int {
    	return int(c.timeBytes() + c.valueBytes())
    }
    
    func (c deltaEncodedChunk) len() int {
    	if len(c) < deltaHeaderBytes {
    		return 0
    	}
    	return (len(c) - deltaHeaderBytes) / c.sampleSize()
    }
    
    // deltaEncodedChunkIterator implements chunkIterator.
    type deltaEncodedChunkIterator struct {
    	c              deltaEncodedChunk
    	len            int
    	baseT          model.Time
    	baseV          model.SampleValue
    	tBytes, vBytes deltaBytes
    	isInt          bool
    }
    
    // length implements chunkIterator.
    func (it *deltaEncodedChunkIterator) length() int { return it.len }
    
    // valueAtTime implements chunkIterator.
    func (it *deltaEncodedChunkIterator) valueAtTime(t model.Time) []model.SamplePair {
    	i := sort.Search(it.len, func(i int) bool {
    		return !it.timestampAtIndex(i).Before(t)
    	})
    
    	switch i {
    	case 0:
    		return []model.SamplePair{{
    			Timestamp: it.timestampAtIndex(0),
    			Value:     it.sampleValueAtIndex(0),
    		}}
    	case it.len:
    		return []model.SamplePair{{
    			Timestamp: it.timestampAtIndex(it.len - 1),
    			Value:     it.sampleValueAtIndex(it.len - 1),
    		}}
    	default:
    		ts := it.timestampAtIndex(i)
    		if ts.Equal(t) {
    			return []model.SamplePair{{
    				Timestamp: ts,
    				Value:     it.sampleValueAtIndex(i),
    			}}
    		}
    		return []model.SamplePair{
    			{
    				Timestamp: it.timestampAtIndex(i - 1),
    				Value:     it.sampleValueAtIndex(i - 1),
    			},
    			{
    				Timestamp: ts,
    				Value:     it.sampleValueAtIndex(i),
    			},
    		}
    	}
    }
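
// Illustrative semantics sketch (not part of the original source): with
// samples at t=10 and t=20, valueAtTime(15) returns both neighbors,
// valueAtTime(10) returns only the exact match, and valueAtTime(5) clamps to
// the first sample (as valueAtTime(25) clamps to the last).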
    
    // rangeValues implements chunkIterator.
    func (it *deltaEncodedChunkIterator) rangeValues(in metric.Interval) []model.SamplePair {
    	oldest := sort.Search(it.len, func(i int) bool {
    		return !it.timestampAtIndex(i).Before(in.OldestInclusive)
    	})
    
    	newest := sort.Search(it.len, func(i int) bool {
    		return it.timestampAtIndex(i).After(in.NewestInclusive)
    	})
    
    	if oldest == it.len {
    		return nil
    	}
    
    	result := make([]model.SamplePair, 0, newest-oldest)
    	for i := oldest; i < newest; i++ {
    		result = append(result, model.SamplePair{
    			Timestamp: it.timestampAtIndex(i),
    			Value:     it.sampleValueAtIndex(i),
    		})
    	}
    	return result
    }
    
    // contains implements chunkIterator.
    func (it *deltaEncodedChunkIterator) contains(t model.Time) bool {
    	return !t.Before(it.baseT) && !t.After(it.timestampAtIndex(it.len-1))
    }
    
    // values implements chunkIterator.
    func (it *deltaEncodedChunkIterator) values() <-chan *model.SamplePair {
    	valuesChan := make(chan *model.SamplePair)
    	go func() {
    		for i := 0; i < it.len; i++ {
    			valuesChan <- &model.SamplePair{
    				Timestamp: it.timestampAtIndex(i),
    				Value:     it.sampleValueAtIndex(i),
    			}
    		}
    		close(valuesChan)
    	}()
    	return valuesChan
    }
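
// Illustrative usage sketch (not part of the original source): the
// channel-based iterator is drained with a plain range loop; the producing
// goroutine closes the channel once all samples have been sent.
//
//	for sp := range it.values() {
//		fmt.Println(sp.Timestamp, sp.Value)
//	}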
    
    // timestampAtIndex implements chunkIterator.
    func (it *deltaEncodedChunkIterator) timestampAtIndex(idx int) model.Time {
    	offset := deltaHeaderBytes + idx*int(it.tBytes+it.vBytes)
    
    	switch it.tBytes {
    	case d1:
    		return it.baseT + model.Time(uint8(it.c[offset]))
    	case d2:
    		return it.baseT + model.Time(binary.LittleEndian.Uint16(it.c[offset:]))
    	case d4:
    		return it.baseT + model.Time(binary.LittleEndian.Uint32(it.c[offset:]))
    	case d8:
    		// Take absolute value for d8.
    		return model.Time(binary.LittleEndian.Uint64(it.c[offset:]))
    	default:
    		panic("invalid number of bytes for time delta")
    	}
    }
    
    // lastTimestamp implements chunkIterator.
    func (it *deltaEncodedChunkIterator) lastTimestamp() model.Time {
    	return it.timestampAtIndex(it.len - 1)
    }
    
    // sampleValueAtIndex implements chunkIterator.
    func (it *deltaEncodedChunkIterator) sampleValueAtIndex(idx int) model.SampleValue {
    	offset := deltaHeaderBytes + idx*int(it.tBytes+it.vBytes) + int(it.tBytes)
    
    	if it.isInt {
    		switch it.vBytes {
    		case d0:
    			return it.baseV
    		case d1:
    			return it.baseV + model.SampleValue(int8(it.c[offset]))
    		case d2:
    			return it.baseV + model.SampleValue(int16(binary.LittleEndian.Uint16(it.c[offset:])))
    		case d4:
    			return it.baseV + model.SampleValue(int32(binary.LittleEndian.Uint32(it.c[offset:])))
    		// No d8 for ints.
    		default:
    			panic("invalid number of bytes for integer delta")
    		}
    	} else {
    		switch it.vBytes {
    		case d4:
    			return it.baseV + model.SampleValue(math.Float32frombits(binary.LittleEndian.Uint32(it.c[offset:])))
    		case d8:
    			// Take absolute value for d8.
    			return model.SampleValue(math.Float64frombits(binary.LittleEndian.Uint64(it.c[offset:])))
    		default:
    			panic("invalid number of bytes for floating point delta")
    		}
    	}
    }
    
    // lastSampleValue implements chunkIterator.
    func (it *deltaEncodedChunkIterator) lastSampleValue() model.SampleValue {
    	return it.sampleValueAtIndex(it.len - 1)
    }
    prometheus-0.16.2+ds/storage/local/delta_helpers.go000066400000000000000000000037131265137125100223040ustar00rootroot00000000000000// Copyright 2015 The Prometheus Authors
    // Licensed under the Apache License, Version 2.0 (the "License");
    // you may not use this file except in compliance with the License.
    // You may obtain a copy of the License at
    //
    // http://www.apache.org/licenses/LICENSE-2.0
    //
    // Unless required by applicable law or agreed to in writing, software
    // distributed under the License is distributed on an "AS IS" BASIS,
    // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    // See the License for the specific language governing permissions and
    // limitations under the License.
    
    package local
    
    import (
    	"math"
    
    	"github.com/prometheus/common/model"
    )
    
    type deltaBytes byte
    
    const (
    	d0 deltaBytes = 0
    	d1 deltaBytes = 1
    	d2 deltaBytes = 2
    	d4 deltaBytes = 4
    	d8 deltaBytes = 8
    )
    
    func bytesNeededForUnsignedTimestampDelta(deltaT model.Time) deltaBytes {
    	switch {
    	case deltaT > math.MaxUint32:
    		return d8
    	case deltaT > math.MaxUint16:
    		return d4
    	case deltaT > math.MaxUint8:
    		return d2
    	default:
    		return d1
    	}
    }
    
    func bytesNeededForSignedTimestampDelta(deltaT model.Time) deltaBytes {
    	switch {
    	case deltaT > math.MaxInt32 || deltaT < math.MinInt32:
    		return d8
    	case deltaT > math.MaxInt16 || deltaT < math.MinInt16:
    		return d4
    	case deltaT > math.MaxInt8 || deltaT < math.MinInt8:
    		return d2
    	default:
    		return d1
    	}
    }
    
    func bytesNeededForIntegerSampleValueDelta(deltaV model.SampleValue) deltaBytes {
    	switch {
    	case deltaV < math.MinInt32 || deltaV > math.MaxInt32:
    		return d8
    	case deltaV < math.MinInt16 || deltaV > math.MaxInt16:
    		return d4
    	case deltaV < math.MinInt8 || deltaV > math.MaxInt8:
    		return d2
    	case deltaV != 0:
    		return d1
    	default:
    		return d0
    	}
    }
    
    func max(a, b deltaBytes) deltaBytes {
    	if a > b {
    		return a
    	}
    	return b
    }
    
    // isInt64 returns true if v can be represented as an int64.
    func isInt64(v model.SampleValue) bool {
    	// Note: Using math.Modf is slower than the conversion approach below.
    	return model.SampleValue(int64(v)) == v
    }
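
// Illustrative sketch (not part of the original file): the sizing helpers
// always pick the narrowest width that can hold a given delta, e.g.
//
//	bytesNeededForUnsignedTimestampDelta(300) // d2, since 255 < 300 <= 65535
//	bytesNeededForIntegerSampleValueDelta(0)  // d0, constant value
//	bytesNeededForIntegerSampleValueDelta(-7) // d1, fits in an int8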
    prometheus-0.16.2+ds/storage/local/doubledelta.go000066400000000000000000000401131265137125100217500ustar00rootroot00000000000000// Copyright 2014 The Prometheus Authors
    // Licensed under the Apache License, Version 2.0 (the "License");
    // you may not use this file except in compliance with the License.
    // You may obtain a copy of the License at
    //
    // http://www.apache.org/licenses/LICENSE-2.0
    //
    // Unless required by applicable law or agreed to in writing, software
    // distributed under the License is distributed on an "AS IS" BASIS,
    // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    // See the License for the specific language governing permissions and
    // limitations under the License.
    
    package local
    
    import (
    	"encoding/binary"
    	"fmt"
    	"io"
    	"math"
    	"sort"
    
    	"github.com/prometheus/common/model"
    
    	"github.com/prometheus/prometheus/storage/metric"
    )
    
// The 37-byte header of a double-delta-encoded chunk looks like:
    //
    // - used buf bytes:           2 bytes
// - time double-delta bytes:  1 byte
// - value double-delta bytes: 1 byte
    // - is integer:               1 byte
    // - base time:                8 bytes
    // - base value:               8 bytes
    // - base time delta:          8 bytes
    // - base value delta:         8 bytes
    const (
    	doubleDeltaHeaderBytes = 37
    
    	doubleDeltaHeaderBufLenOffset         = 0
    	doubleDeltaHeaderTimeBytesOffset      = 2
    	doubleDeltaHeaderValueBytesOffset     = 3
    	doubleDeltaHeaderIsIntOffset          = 4
    	doubleDeltaHeaderBaseTimeOffset       = 5
    	doubleDeltaHeaderBaseValueOffset      = 13
    	doubleDeltaHeaderBaseTimeDeltaOffset  = 21
    	doubleDeltaHeaderBaseValueDeltaOffset = 29
    )
    
    // A doubleDeltaEncodedChunk adaptively stores sample timestamps and values with
    // a double-delta encoding of various types (int, float) and bit widths. A base
    // value and timestamp and a base delta for each is saved in the header. The
    // payload consists of double-deltas, i.e. deviations from the values and
    // timestamps calculated by applying the base value and time and the base deltas.
    // However, once 8 bytes would be needed to encode a double-delta value, a
    // fall-back to the absolute numbers happens (so that timestamps are saved
    // directly as int64 and values as float64).
    // doubleDeltaEncodedChunk implements the chunk interface.
    type doubleDeltaEncodedChunk []byte
    
    // newDoubleDeltaEncodedChunk returns a newly allocated doubleDeltaEncodedChunk.
    func newDoubleDeltaEncodedChunk(tb, vb deltaBytes, isInt bool, length int) *doubleDeltaEncodedChunk {
    	if tb < 1 {
    		panic("need at least 1 time delta byte")
    	}
    	if length < doubleDeltaHeaderBytes+16 {
    		panic(fmt.Errorf(
    			"chunk length %d bytes is insufficient, need at least %d",
    			length, doubleDeltaHeaderBytes+16,
    		))
    	}
    	c := make(doubleDeltaEncodedChunk, doubleDeltaHeaderIsIntOffset+1, length)
    
    	c[doubleDeltaHeaderTimeBytesOffset] = byte(tb)
    	c[doubleDeltaHeaderValueBytesOffset] = byte(vb)
    	if vb < d8 && isInt { // Only use int for fewer than 8 value double-delta bytes.
    		c[doubleDeltaHeaderIsIntOffset] = 1
    	} else {
    		c[doubleDeltaHeaderIsIntOffset] = 0
    	}
    	return &c
    }
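
// Worked projection sketch (illustrative, not part of the original source):
// sample 0 is the base itself, sample 1 is base plus base delta (or, at d8
// widths, is stored verbatim), and every later sample i is reconstructed from
// its stored double-deltas. With base time 1000, base time delta 10, base
// value 5, and base value delta 2:
//
//	t(i) = 1000 + i*10 + ddt(i)
//	v(i) = 5 + i*2 + ddv(i)
//
// so the sample at index 3 with ddt=1 and ddv=-1 decodes to t=1031, v=10.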
    
    // add implements chunk.
    func (c doubleDeltaEncodedChunk) add(s *model.SamplePair) []chunk {
    	if c.len() == 0 {
    		return c.addFirstSample(s)
    	}
    
    	tb := c.timeBytes()
    	vb := c.valueBytes()
    
    	if c.len() == 1 {
    		return c.addSecondSample(s, tb, vb)
    	}
    
    	remainingBytes := cap(c) - len(c)
    	sampleSize := c.sampleSize()
    
    	// Do we generally have space for another sample in this chunk? If not,
    	// overflow into a new one.
    	if remainingBytes < sampleSize {
    		overflowChunks := newChunk().add(s)
    		return []chunk{&c, overflowChunks[0]}
    	}
    
    	projectedTime := c.baseTime() + model.Time(c.len())*c.baseTimeDelta()
    	ddt := s.Timestamp - projectedTime
    
    	projectedValue := c.baseValue() + model.SampleValue(c.len())*c.baseValueDelta()
    	ddv := s.Value - projectedValue
    
    	ntb, nvb, nInt := tb, vb, c.isInt()
    	// If the new sample is incompatible with the current encoding, reencode the
    	// existing chunk data into new chunk(s).
    	if c.isInt() && !isInt64(ddv) {
    		// int->float.
    		nvb = d4
    		nInt = false
    	} else if !c.isInt() && vb == d4 && projectedValue+model.SampleValue(float32(ddv)) != s.Value {
    		// float32->float64.
    		nvb = d8
    	} else {
    		if tb < d8 {
    			// Maybe more bytes for timestamp.
    			ntb = max(tb, bytesNeededForSignedTimestampDelta(ddt))
    		}
    		if c.isInt() && vb < d8 {
    			// Maybe more bytes for sample value.
    			nvb = max(vb, bytesNeededForIntegerSampleValueDelta(ddv))
    		}
    	}
    	if tb != ntb || vb != nvb || c.isInt() != nInt {
    		if len(c)*2 < cap(c) {
    			return transcodeAndAdd(newDoubleDeltaEncodedChunk(ntb, nvb, nInt, cap(c)), &c, s)
    		}
    		// Chunk is already half full. Better create a new one and save the transcoding efforts.
    		overflowChunks := newChunk().add(s)
    		return []chunk{&c, overflowChunks[0]}
    	}
    
    	offset := len(c)
    	c = c[:offset+sampleSize]
    
    	switch tb {
    	case d1:
    		c[offset] = byte(ddt)
    	case d2:
    		binary.LittleEndian.PutUint16(c[offset:], uint16(ddt))
    	case d4:
    		binary.LittleEndian.PutUint32(c[offset:], uint32(ddt))
    	case d8:
    		// Store the absolute value (no delta) in case of d8.
    		binary.LittleEndian.PutUint64(c[offset:], uint64(s.Timestamp))
    	default:
    		panic("invalid number of bytes for time delta")
    	}
    
    	offset += int(tb)
    
    	if c.isInt() {
    		switch vb {
    		case d0:
    			// No-op. Constant delta is stored as base value.
    		case d1:
    			c[offset] = byte(int8(ddv))
    		case d2:
    			binary.LittleEndian.PutUint16(c[offset:], uint16(int16(ddv)))
    		case d4:
    			binary.LittleEndian.PutUint32(c[offset:], uint32(int32(ddv)))
    		// d8 must not happen. Those samples are encoded as float64.
    		default:
    			panic("invalid number of bytes for integer delta")
    		}
    	} else {
    		switch vb {
    		case d4:
    			binary.LittleEndian.PutUint32(c[offset:], math.Float32bits(float32(ddv)))
    		case d8:
    			// Store the absolute value (no delta) in case of d8.
    			binary.LittleEndian.PutUint64(c[offset:], math.Float64bits(float64(s.Value)))
    		default:
    			panic("invalid number of bytes for floating point delta")
    		}
    	}
    	return []chunk{&c}
    }
    
    // clone implements chunk.
    func (c doubleDeltaEncodedChunk) clone() chunk {
    	clone := make(doubleDeltaEncodedChunk, len(c), cap(c))
    	copy(clone, c)
    	return &clone
    }
    
    // firstTime implements chunk.
    func (c doubleDeltaEncodedChunk) firstTime() model.Time {
    	return c.baseTime()
    }
    
    // newIterator implements chunk.
    func (c *doubleDeltaEncodedChunk) newIterator() chunkIterator {
    	return &doubleDeltaEncodedChunkIterator{
    		c:      *c,
    		len:    c.len(),
    		baseT:  c.baseTime(),
    		baseΔT: c.baseTimeDelta(),
    		baseV:  c.baseValue(),
    		baseΔV: c.baseValueDelta(),
    		tBytes: c.timeBytes(),
    		vBytes: c.valueBytes(),
    		isInt:  c.isInt(),
    	}
    }
    
    // marshal implements chunk.
    func (c doubleDeltaEncodedChunk) marshal(w io.Writer) error {
    	if len(c) > math.MaxUint16 {
    		panic("chunk buffer length would overflow a 16 bit uint.")
    	}
    	binary.LittleEndian.PutUint16(c[doubleDeltaHeaderBufLenOffset:], uint16(len(c)))
    
    	n, err := w.Write(c[:cap(c)])
    	if err != nil {
    		return err
    	}
    	if n != cap(c) {
		return fmt.Errorf("wanted to write %d bytes, wrote %d", cap(c), n)
    	}
    	return nil
    }
    
    // unmarshal implements chunk.
    func (c *doubleDeltaEncodedChunk) unmarshal(r io.Reader) error {
    	*c = (*c)[:cap(*c)]
    	if _, err := io.ReadFull(r, *c); err != nil {
    		return err
    	}
    	*c = (*c)[:binary.LittleEndian.Uint16((*c)[doubleDeltaHeaderBufLenOffset:])]
    	return nil
    }
    
    // unmarshalFromBuf implements chunk.
    func (c *doubleDeltaEncodedChunk) unmarshalFromBuf(buf []byte) {
    	*c = (*c)[:cap(*c)]
    	copy(*c, buf)
    	*c = (*c)[:binary.LittleEndian.Uint16((*c)[doubleDeltaHeaderBufLenOffset:])]
    }
    
    // encoding implements chunk.
    func (c doubleDeltaEncodedChunk) encoding() chunkEncoding { return doubleDelta }
    
    func (c doubleDeltaEncodedChunk) baseTime() model.Time {
    	return model.Time(
    		binary.LittleEndian.Uint64(
    			c[doubleDeltaHeaderBaseTimeOffset:],
    		),
    	)
    }
    
    func (c doubleDeltaEncodedChunk) baseValue() model.SampleValue {
    	return model.SampleValue(
    		math.Float64frombits(
    			binary.LittleEndian.Uint64(
    				c[doubleDeltaHeaderBaseValueOffset:],
    			),
    		),
    	)
    }
    
    func (c doubleDeltaEncodedChunk) baseTimeDelta() model.Time {
    	if len(c) < doubleDeltaHeaderBaseTimeDeltaOffset+8 {
    		return 0
    	}
    	return model.Time(
    		binary.LittleEndian.Uint64(
    			c[doubleDeltaHeaderBaseTimeDeltaOffset:],
    		),
    	)
    }
    
    func (c doubleDeltaEncodedChunk) baseValueDelta() model.SampleValue {
    	if len(c) < doubleDeltaHeaderBaseValueDeltaOffset+8 {
    		return 0
    	}
    	return model.SampleValue(
    		math.Float64frombits(
    			binary.LittleEndian.Uint64(
    				c[doubleDeltaHeaderBaseValueDeltaOffset:],
    			),
    		),
    	)
    }
    
    func (c doubleDeltaEncodedChunk) timeBytes() deltaBytes {
    	return deltaBytes(c[doubleDeltaHeaderTimeBytesOffset])
    }
    
    func (c doubleDeltaEncodedChunk) valueBytes() deltaBytes {
    	return deltaBytes(c[doubleDeltaHeaderValueBytesOffset])
    }
    
    func (c doubleDeltaEncodedChunk) sampleSize() int {
    	return int(c.timeBytes() + c.valueBytes())
    }
    
    func (c doubleDeltaEncodedChunk) len() int {
    	if len(c) <= doubleDeltaHeaderIsIntOffset+1 {
    		return 0
    	}
    	if len(c) <= doubleDeltaHeaderBaseValueOffset+8 {
    		return 1
    	}
    	return (len(c)-doubleDeltaHeaderBytes)/c.sampleSize() + 2
    }
    
    func (c doubleDeltaEncodedChunk) isInt() bool {
    	return c[doubleDeltaHeaderIsIntOffset] == 1
    }
    
    // addFirstSample is a helper method only used by c.add(). It adds timestamp and
    // value as base time and value.
    func (c doubleDeltaEncodedChunk) addFirstSample(s *model.SamplePair) []chunk {
    	c = c[:doubleDeltaHeaderBaseValueOffset+8]
    	binary.LittleEndian.PutUint64(
    		c[doubleDeltaHeaderBaseTimeOffset:],
    		uint64(s.Timestamp),
    	)
    	binary.LittleEndian.PutUint64(
    		c[doubleDeltaHeaderBaseValueOffset:],
    		math.Float64bits(float64(s.Value)),
    	)
    	return []chunk{&c}
    }
    
    // addSecondSample is a helper method only used by c.add(). It calculates the
    // base delta from the provided sample and adds it to the chunk.
    func (c doubleDeltaEncodedChunk) addSecondSample(s *model.SamplePair, tb, vb deltaBytes) []chunk {
    	baseTimeDelta := s.Timestamp - c.baseTime()
    	if baseTimeDelta < 0 {
    		panic("base time delta is less than zero")
    	}
    	c = c[:doubleDeltaHeaderBytes]
    	if tb >= d8 || bytesNeededForUnsignedTimestampDelta(baseTimeDelta) >= d8 {
    		// If already the base delta needs d8 (or we are at d8
    		// already, anyway), we better encode this timestamp
    		// directly rather than as a delta and switch everything
    		// to d8.
    		c[doubleDeltaHeaderTimeBytesOffset] = byte(d8)
    		binary.LittleEndian.PutUint64(
    			c[doubleDeltaHeaderBaseTimeDeltaOffset:],
    			uint64(s.Timestamp),
    		)
    	} else {
    		binary.LittleEndian.PutUint64(
    			c[doubleDeltaHeaderBaseTimeDeltaOffset:],
    			uint64(baseTimeDelta),
    		)
    	}
    	baseValue := c.baseValue()
    	baseValueDelta := s.Value - baseValue
    	if vb >= d8 || baseValue+baseValueDelta != s.Value {
    		// If we can't reproduce the original sample value (or
    		// if we are at d8 already, anyway), we better encode
    		// this value directly rather than as a delta and switch
    		// everything to d8.
    		c[doubleDeltaHeaderValueBytesOffset] = byte(d8)
    		c[doubleDeltaHeaderIsIntOffset] = 0
    		binary.LittleEndian.PutUint64(
    			c[doubleDeltaHeaderBaseValueDeltaOffset:],
    			math.Float64bits(float64(s.Value)),
    		)
    	} else {
    		binary.LittleEndian.PutUint64(
    			c[doubleDeltaHeaderBaseValueDeltaOffset:],
    			math.Float64bits(float64(baseValueDelta)),
    		)
    	}
    	return []chunk{&c}
    }
    
    // doubleDeltaEncodedChunkIterator implements chunkIterator.
    type doubleDeltaEncodedChunkIterator struct {
    	c              doubleDeltaEncodedChunk
    	len            int
    	baseT, baseΔT  model.Time
    	baseV, baseΔV  model.SampleValue
    	tBytes, vBytes deltaBytes
    	isInt          bool
    }
    
    // length implements chunkIterator.
    func (it *doubleDeltaEncodedChunkIterator) length() int { return it.len }
    
    // valueAtTime implements chunkIterator.
    func (it *doubleDeltaEncodedChunkIterator) valueAtTime(t model.Time) []model.SamplePair {
    	i := sort.Search(it.len, func(i int) bool {
    		return !it.timestampAtIndex(i).Before(t)
    	})
    
    	switch i {
    	case 0:
    		return []model.SamplePair{{
    			Timestamp: it.timestampAtIndex(0),
    			Value:     it.sampleValueAtIndex(0),
    		}}
    	case it.len:
    		return []model.SamplePair{{
    			Timestamp: it.timestampAtIndex(it.len - 1),
    			Value:     it.sampleValueAtIndex(it.len - 1),
    		}}
    	default:
    		ts := it.timestampAtIndex(i)
    		if ts.Equal(t) {
    			return []model.SamplePair{{
    				Timestamp: ts,
    				Value:     it.sampleValueAtIndex(i),
    			}}
    		}
    		return []model.SamplePair{
    			{
    				Timestamp: it.timestampAtIndex(i - 1),
    				Value:     it.sampleValueAtIndex(i - 1),
    			},
    			{
    				Timestamp: ts,
    				Value:     it.sampleValueAtIndex(i),
    			},
    		}
    	}
    }
    
    // rangeValues implements chunkIterator.
    func (it *doubleDeltaEncodedChunkIterator) rangeValues(in metric.Interval) []model.SamplePair {
    	oldest := sort.Search(it.len, func(i int) bool {
    		return !it.timestampAtIndex(i).Before(in.OldestInclusive)
    	})
    
    	newest := sort.Search(it.len, func(i int) bool {
    		return it.timestampAtIndex(i).After(in.NewestInclusive)
    	})
    
    	if oldest == it.len {
    		return nil
    	}
    
    	result := make([]model.SamplePair, 0, newest-oldest)
    	for i := oldest; i < newest; i++ {
    		result = append(result, model.SamplePair{
    			Timestamp: it.timestampAtIndex(i),
    			Value:     it.sampleValueAtIndex(i),
    		})
    	}
    	return result
    }
    
    // contains implements chunkIterator.
    func (it *doubleDeltaEncodedChunkIterator) contains(t model.Time) bool {
    	return !t.Before(it.baseT) && !t.After(it.timestampAtIndex(it.len-1))
    }
    
    // values implements chunkIterator.
    func (it *doubleDeltaEncodedChunkIterator) values() <-chan *model.SamplePair {
    	valuesChan := make(chan *model.SamplePair)
    	go func() {
    		for i := 0; i < it.len; i++ {
    			valuesChan <- &model.SamplePair{
    				Timestamp: it.timestampAtIndex(i),
    				Value:     it.sampleValueAtIndex(i),
    			}
    		}
    		close(valuesChan)
    	}()
    	return valuesChan
    }
    
    // timestampAtIndex implements chunkIterator.
    func (it *doubleDeltaEncodedChunkIterator) timestampAtIndex(idx int) model.Time {
    	if idx == 0 {
    		return it.baseT
    	}
    	if idx == 1 {
    		// If time bytes are at d8, the time is saved directly rather
    		// than as a difference.
    		if it.tBytes == d8 {
    			return it.baseΔT
    		}
    		return it.baseT + it.baseΔT
    	}
    
    	offset := doubleDeltaHeaderBytes + (idx-2)*int(it.tBytes+it.vBytes)
    
    	switch it.tBytes {
    	case d1:
    		return it.baseT +
    			model.Time(idx)*it.baseΔT +
    			model.Time(int8(it.c[offset]))
    	case d2:
    		return it.baseT +
    			model.Time(idx)*it.baseΔT +
    			model.Time(int16(binary.LittleEndian.Uint16(it.c[offset:])))
    	case d4:
    		return it.baseT +
    			model.Time(idx)*it.baseΔT +
    			model.Time(int32(binary.LittleEndian.Uint32(it.c[offset:])))
    	case d8:
		// For d8, the complete timestamp is stored, not a delta.
    		return model.Time(binary.LittleEndian.Uint64(it.c[offset:]))
    	default:
    		panic("invalid number of bytes for time delta")
    	}
    }
    
    // lastTimestamp implements chunkIterator.
    func (it *doubleDeltaEncodedChunkIterator) lastTimestamp() model.Time {
    	return it.timestampAtIndex(it.len - 1)
    }
    
    // sampleValueAtIndex implements chunkIterator.
    func (it *doubleDeltaEncodedChunkIterator) sampleValueAtIndex(idx int) model.SampleValue {
    	if idx == 0 {
    		return it.baseV
    	}
    	if idx == 1 {
		// If the value deltas are stored in eight bytes (d8), the
		// second value is saved directly rather than as a delta.
    		if it.vBytes == d8 {
    			return it.baseΔV
    		}
    		return it.baseV + it.baseΔV
    	}
    
    	offset := doubleDeltaHeaderBytes + (idx-2)*int(it.tBytes+it.vBytes) + int(it.tBytes)
    
    	if it.isInt {
    		switch it.vBytes {
    		case d0:
    			return it.baseV +
    				model.SampleValue(idx)*it.baseΔV
    		case d1:
    			return it.baseV +
    				model.SampleValue(idx)*it.baseΔV +
    				model.SampleValue(int8(it.c[offset]))
    		case d2:
    			return it.baseV +
    				model.SampleValue(idx)*it.baseΔV +
    				model.SampleValue(int16(binary.LittleEndian.Uint16(it.c[offset:])))
    		case d4:
    			return it.baseV +
    				model.SampleValue(idx)*it.baseΔV +
    				model.SampleValue(int32(binary.LittleEndian.Uint32(it.c[offset:])))
    		// No d8 for ints.
    		default:
    			panic("invalid number of bytes for integer delta")
    		}
    	} else {
    		switch it.vBytes {
    		case d4:
    			return it.baseV +
    				model.SampleValue(idx)*it.baseΔV +
    				model.SampleValue(math.Float32frombits(binary.LittleEndian.Uint32(it.c[offset:])))
    		case d8:
			// For d8, the complete value is stored, not a delta.
    			return model.SampleValue(math.Float64frombits(binary.LittleEndian.Uint64(it.c[offset:])))
    		default:
    			panic("invalid number of bytes for floating point delta")
    		}
    	}
    }
    
    // lastSampleValue implements chunkIterator.
    func (it *doubleDeltaEncodedChunkIterator) lastSampleValue() model.SampleValue {
    	return it.sampleValueAtIndex(it.len - 1)
    }
prometheus-0.16.2+ds/storage/local/fixtures/b0/04b821ca50ba26.db [binary chunk fixture; contents omitted]
prometheus-0.16.2+ds/storage/local/fixtures/b0/37c21e884e4fc5.db [binary chunk fixture; contents omitted]
prometheus-0.16.2+ds/storage/local/fixtures/b0/37de1e884e5469.db [binary chunk fixture; contents omitted]
prometheus-0.16.2+ds/storage/local/index/index.go
// Copyright 2014 The Prometheus Authors
    // Licensed under the Apache License, Version 2.0 (the "License");
    // you may not use this file except in compliance with the License.
    // You may obtain a copy of the License at
    //
    // http://www.apache.org/licenses/LICENSE-2.0
    //
    // Unless required by applicable law or agreed to in writing, software
    // distributed under the License is distributed on an "AS IS" BASIS,
    // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    // See the License for the specific language governing permissions and
    // limitations under the License.
    
    // Package index provides a number of indexes backed by persistent key-value
    // stores.  The only supported implementation of a key-value store is currently
    // goleveldb, but other implementations can easily be added.
    package index
    
    import (
    	"os"
    	"path"
    
    	"github.com/prometheus/common/model"
    
    	"github.com/prometheus/prometheus/storage/local/codable"
    )
    
    const (
    	fingerprintToMetricDir     = "archived_fingerprint_to_metric"
    	fingerprintTimeRangeDir    = "archived_fingerprint_to_timerange"
    	labelNameToLabelValuesDir  = "labelname_to_labelvalues"
    	labelPairToFingerprintsDir = "labelpair_to_fingerprints"
    )
    
    // LevelDB cache sizes, changeable via flags.
    var (
    	FingerprintMetricCacheSize     = 10 * 1024 * 1024
    	FingerprintTimeRangeCacheSize  = 5 * 1024 * 1024
    	LabelNameLabelValuesCacheSize  = 10 * 1024 * 1024
    	LabelPairFingerprintsCacheSize = 20 * 1024 * 1024
    )
    
    // FingerprintMetricMapping is an in-memory map of fingerprints to metrics.
    type FingerprintMetricMapping map[model.Fingerprint]model.Metric
    
    // FingerprintMetricIndex models a database mapping fingerprints to metrics.
    type FingerprintMetricIndex struct {
    	KeyValueStore
    }
    
    // IndexBatch indexes a batch of mappings from fingerprints to metrics.
    //
    // This method is goroutine-safe, but note that no specific order of execution
    // can be guaranteed (especially critical if IndexBatch and UnindexBatch are
    // called concurrently for the same fingerprint).
    func (i *FingerprintMetricIndex) IndexBatch(mapping FingerprintMetricMapping) error {
    	b := i.NewBatch()
    
    	for fp, m := range mapping {
    		if err := b.Put(codable.Fingerprint(fp), codable.Metric(m)); err != nil {
    			return err
    		}
    	}
    
    	return i.Commit(b)
    }
    
    // UnindexBatch unindexes a batch of mappings from fingerprints to metrics.
    //
    // This method is goroutine-safe, but note that no specific order of execution
    // can be guaranteed (especially critical if IndexBatch and UnindexBatch are
    // called concurrently for the same fingerprint).
    func (i *FingerprintMetricIndex) UnindexBatch(mapping FingerprintMetricMapping) error {
    	b := i.NewBatch()
    
    	for fp := range mapping {
    		if err := b.Delete(codable.Fingerprint(fp)); err != nil {
    			return err
    		}
    	}
    
    	return i.Commit(b)
    }
    
    // Lookup looks up a metric by fingerprint. Looking up a non-existing
    // fingerprint is not an error. In that case, (nil, false, nil) is returned.
    //
    // This method is goroutine-safe.
    func (i *FingerprintMetricIndex) Lookup(fp model.Fingerprint) (metric model.Metric, ok bool, err error) {
    	ok, err = i.Get(codable.Fingerprint(fp), (*codable.Metric)(&metric))
    	return
    }
    
    // NewFingerprintMetricIndex returns a LevelDB-backed FingerprintMetricIndex
    // ready to use.
    func NewFingerprintMetricIndex(basePath string) (*FingerprintMetricIndex, error) {
    	fingerprintToMetricDB, err := NewLevelDB(LevelDBOptions{
    		Path:           path.Join(basePath, fingerprintToMetricDir),
    		CacheSizeBytes: FingerprintMetricCacheSize,
    	})
    	if err != nil {
    		return nil, err
    	}
    	return &FingerprintMetricIndex{
    		KeyValueStore: fingerprintToMetricDB,
    	}, nil
    }
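// Usage sketch (error handling elided; the path is a made-up example):
//
//	idx, err := NewFingerprintMetricIndex("/var/lib/prometheus")
//	m := model.Metric{"__name__": "up", "job": "api"}
//	fp := m.FastFingerprint()
//	idx.IndexBatch(FingerprintMetricMapping{fp: m})
//	metric, ok, err := idx.Lookup(fp) // ok is false for unknown fingerprints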
    
    // LabelNameLabelValuesMapping is an in-memory map of label names to
    // label values.
    type LabelNameLabelValuesMapping map[model.LabelName]codable.LabelValueSet
    
    // LabelNameLabelValuesIndex is a KeyValueStore that maps existing label names
    // to all label values stored for that label name.
    type LabelNameLabelValuesIndex struct {
    	KeyValueStore
    }
    
    // IndexBatch adds a batch of label name to label values mappings to the
    // index. A mapping of a label name to an empty slice of label values results in
    // a deletion of that mapping from the index.
    //
    // While this method is fundamentally goroutine-safe, note that the order of
    // execution for multiple batches executed concurrently is undefined.
    func (i *LabelNameLabelValuesIndex) IndexBatch(b LabelNameLabelValuesMapping) error {
    	batch := i.NewBatch()
    
    	for name, values := range b {
    		if len(values) == 0 {
    			if err := batch.Delete(codable.LabelName(name)); err != nil {
    				return err
    			}
    		} else {
    			if err := batch.Put(codable.LabelName(name), values); err != nil {
    				return err
    			}
    		}
    	}
    
    	return i.Commit(batch)
    }
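// Sketch of the deletion-by-empty-set semantics described above (the label
// names and values are invented):
//
//	i.IndexBatch(LabelNameLabelValuesMapping{
//		"job":      codable.LabelValueSet{"api": {}, "db": {}}, // upsert
//		"instance": codable.LabelValueSet{},                    // delete entry
//	})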
    
    // Lookup looks up all label values for a given label name and returns them as
    // model.LabelValues (which is a slice). Looking up a non-existing label
    // name is not an error. In that case, (nil, false, nil) is returned.
    //
    // This method is goroutine-safe.
    func (i *LabelNameLabelValuesIndex) Lookup(l model.LabelName) (values model.LabelValues, ok bool, err error) {
    	ok, err = i.Get(codable.LabelName(l), (*codable.LabelValues)(&values))
    	return
    }
    
    // LookupSet looks up all label values for a given label name and returns them
    // as a set. Looking up a non-existing label name is not an error. In that case,
    // (nil, false, nil) is returned.
    //
    // This method is goroutine-safe.
    func (i *LabelNameLabelValuesIndex) LookupSet(l model.LabelName) (values map[model.LabelValue]struct{}, ok bool, err error) {
    	ok, err = i.Get(codable.LabelName(l), (*codable.LabelValueSet)(&values))
    	if values == nil {
    		values = map[model.LabelValue]struct{}{}
    	}
    	return
    }
    
    // NewLabelNameLabelValuesIndex returns a LevelDB-backed
    // LabelNameLabelValuesIndex ready to use.
    func NewLabelNameLabelValuesIndex(basePath string) (*LabelNameLabelValuesIndex, error) {
    	labelNameToLabelValuesDB, err := NewLevelDB(LevelDBOptions{
    		Path:           path.Join(basePath, labelNameToLabelValuesDir),
    		CacheSizeBytes: LabelNameLabelValuesCacheSize,
    	})
    	if err != nil {
    		return nil, err
    	}
    	return &LabelNameLabelValuesIndex{
    		KeyValueStore: labelNameToLabelValuesDB,
    	}, nil
    }
    
    // DeleteLabelNameLabelValuesIndex deletes the LevelDB-backed
    // LabelNameLabelValuesIndex. Use only for a not yet opened index.
    func DeleteLabelNameLabelValuesIndex(basePath string) error {
    	return os.RemoveAll(path.Join(basePath, labelNameToLabelValuesDir))
    }
    
    // LabelPairFingerprintsMapping is an in-memory map of label pairs to
    // fingerprints.
    type LabelPairFingerprintsMapping map[model.LabelPair]codable.FingerprintSet
    
    // LabelPairFingerprintIndex is a KeyValueStore that maps existing label pairs
    // to the fingerprints of all metrics containing those label pairs.
    type LabelPairFingerprintIndex struct {
    	KeyValueStore
    }
    
    // IndexBatch indexes a batch of mappings from label pairs to fingerprints. A
    // mapping to an empty slice of fingerprints results in deletion of that mapping
    // from the index.
    //
    // While this method is fundamentally goroutine-safe, note that the order of
    // execution for multiple batches executed concurrently is undefined.
    func (i *LabelPairFingerprintIndex) IndexBatch(m LabelPairFingerprintsMapping) (err error) {
    	batch := i.NewBatch()
    
    	for pair, fps := range m {
    		if len(fps) == 0 {
    			err = batch.Delete(codable.LabelPair(pair))
    		} else {
    			err = batch.Put(codable.LabelPair(pair), fps)
    		}
    
    		if err != nil {
    			return err
    		}
    	}
    
    	return i.Commit(batch)
    }
    
    // Lookup looks up all fingerprints for a given label pair.  Looking up a
    // non-existing label pair is not an error. In that case, (nil, false, nil) is
    // returned.
    //
    // This method is goroutine-safe.
    func (i *LabelPairFingerprintIndex) Lookup(p model.LabelPair) (fps model.Fingerprints, ok bool, err error) {
    	ok, err = i.Get((codable.LabelPair)(p), (*codable.Fingerprints)(&fps))
    	return
    }
    
    // LookupSet looks up all fingerprints for a given label pair.  Looking up a
    // non-existing label pair is not an error. In that case, (nil, false, nil) is
    // returned.
    //
    // This method is goroutine-safe.
    func (i *LabelPairFingerprintIndex) LookupSet(p model.LabelPair) (fps map[model.Fingerprint]struct{}, ok bool, err error) {
    	ok, err = i.Get((codable.LabelPair)(p), (*codable.FingerprintSet)(&fps))
    	if fps == nil {
    		fps = map[model.Fingerprint]struct{}{}
    	}
    	return
    }
    
    // NewLabelPairFingerprintIndex returns a LevelDB-backed
    // LabelPairFingerprintIndex ready to use.
    func NewLabelPairFingerprintIndex(basePath string) (*LabelPairFingerprintIndex, error) {
    	labelPairToFingerprintsDB, err := NewLevelDB(LevelDBOptions{
    		Path:           path.Join(basePath, labelPairToFingerprintsDir),
    		CacheSizeBytes: LabelPairFingerprintsCacheSize,
    	})
    	if err != nil {
    		return nil, err
    	}
    	return &LabelPairFingerprintIndex{
    		KeyValueStore: labelPairToFingerprintsDB,
    	}, nil
    }
    
    // DeleteLabelPairFingerprintIndex deletes the LevelDB-backed
    // LabelPairFingerprintIndex. Use only for a not yet opened index.
    func DeleteLabelPairFingerprintIndex(basePath string) error {
    	return os.RemoveAll(path.Join(basePath, labelPairToFingerprintsDir))
    }
    
    // FingerprintTimeRangeIndex models a database tracking the time ranges
    // of metrics by their fingerprints.
    type FingerprintTimeRangeIndex struct {
    	KeyValueStore
    }
    
    // Lookup returns the time range for the given fingerprint.  Looking up a
    // non-existing fingerprint is not an error. In that case, (0, 0, false, nil) is
    // returned.
    //
    // This method is goroutine-safe.
    func (i *FingerprintTimeRangeIndex) Lookup(fp model.Fingerprint) (firstTime, lastTime model.Time, ok bool, err error) {
    	var tr codable.TimeRange
    	ok, err = i.Get(codable.Fingerprint(fp), &tr)
    	return tr.First, tr.Last, ok, err
    }
    
    // NewFingerprintTimeRangeIndex returns a LevelDB-backed
    // FingerprintTimeRangeIndex ready to use.
    func NewFingerprintTimeRangeIndex(basePath string) (*FingerprintTimeRangeIndex, error) {
    	fingerprintTimeRangeDB, err := NewLevelDB(LevelDBOptions{
    		Path:           path.Join(basePath, fingerprintTimeRangeDir),
    		CacheSizeBytes: FingerprintTimeRangeCacheSize,
    	})
    	if err != nil {
    		return nil, err
    	}
    	return &FingerprintTimeRangeIndex{
    		KeyValueStore: fingerprintTimeRangeDB,
    	}, nil
    }
prometheus-0.16.2+ds/storage/local/index/interface.go
// Copyright 2014 The Prometheus Authors
    // Licensed under the Apache License, Version 2.0 (the "License");
    // you may not use this file except in compliance with the License.
    // You may obtain a copy of the License at
    //
    // http://www.apache.org/licenses/LICENSE-2.0
    //
    // Unless required by applicable law or agreed to in writing, software
    // distributed under the License is distributed on an "AS IS" BASIS,
    // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    // See the License for the specific language governing permissions and
    // limitations under the License.
    
    package index
    
    import "encoding"
    
    // KeyValueStore persists key/value pairs. Implementations must be fundamentally
    // goroutine-safe. However, it is the caller's responsibility that keys and
    // values can be safely marshaled and unmarshaled (via the MarshalBinary and
    // UnmarshalBinary methods of the keys and values). For example, if you call the
    // Put method of a KeyValueStore implementation, but the key or the value are
    // modified concurrently while being marshaled into its binary representation,
    // you obviously have a problem. Methods of KeyValueStore return only after
    // (un)marshaling is complete.
    type KeyValueStore interface {
    	Put(key, value encoding.BinaryMarshaler) error
    	// Get unmarshals the result into value. It returns false if no entry
    	// could be found for key. If value is nil, Get behaves like Has.
    	Get(key encoding.BinaryMarshaler, value encoding.BinaryUnmarshaler) (bool, error)
    	Has(key encoding.BinaryMarshaler) (bool, error)
    	// Delete returns (false, nil) if key does not exist.
    	Delete(key encoding.BinaryMarshaler) (bool, error)
    
    	NewBatch() Batch
    	Commit(b Batch) error
    
    	// ForEach iterates through the complete KeyValueStore and calls the
    	// supplied function for each mapping.
    	ForEach(func(kv KeyValueAccessor) error) error
    
    	Close() error
    }
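// Round-trip sketch (hedged; assumes the codable types implement the
// marshaling interfaces, as their use elsewhere in this package suggests):
//
//	kv.Put(codable.Fingerprint(fp), codable.Metric(m))
//	var got codable.Metric
//	ok, err := kv.Get(codable.Fingerprint(fp), &got)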
    
    // KeyValueAccessor allows access to the key and value of an entry in a
    // KeyValueStore.
    type KeyValueAccessor interface {
    	Key(encoding.BinaryUnmarshaler) error
    	Value(encoding.BinaryUnmarshaler) error
    }
    
    // Batch allows KeyValueStore mutations to be pooled and committed together. An
    // implementation does not have to be goroutine-safe. Never modify a Batch
    // concurrently or commit the same batch multiple times concurrently. Marshaling
    // of keys and values is guaranteed to be complete when the Put or Delete methods
    // have returned.
    type Batch interface {
    	Put(key, value encoding.BinaryMarshaler) error
    	Delete(key encoding.BinaryMarshaler) error
    	Reset()
    }
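// Batch usage sketch (k1, v1, k2 are placeholder marshalers): mutations are
// buffered in the Batch and become visible only after Commit.
//
//	b := kv.NewBatch()
//	b.Put(k1, v1)
//	b.Delete(k2)
//	err := kv.Commit(b)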
prometheus-0.16.2+ds/storage/local/index/leveldb.go
// Copyright 2014 The Prometheus Authors
    // Licensed under the Apache License, Version 2.0 (the "License");
    // you may not use this file except in compliance with the License.
    // You may obtain a copy of the License at
    //
    // http://www.apache.org/licenses/LICENSE-2.0
    //
    // Unless required by applicable law or agreed to in writing, software
    // distributed under the License is distributed on an "AS IS" BASIS,
    // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    // See the License for the specific language governing permissions and
    // limitations under the License.
    
    package index
    
    import (
    	"encoding"
    
    	"github.com/syndtr/goleveldb/leveldb"
    	leveldb_filter "github.com/syndtr/goleveldb/leveldb/filter"
    	leveldb_iterator "github.com/syndtr/goleveldb/leveldb/iterator"
    	leveldb_opt "github.com/syndtr/goleveldb/leveldb/opt"
    	leveldb_util "github.com/syndtr/goleveldb/leveldb/util"
    )
    
    var (
    	keyspace = &leveldb_util.Range{
    		Start: nil,
    		Limit: nil,
    	}
    
    	iteratorOpts = &leveldb_opt.ReadOptions{
    		DontFillCache: true,
    	}
    )
    
    // LevelDB is a LevelDB-backed sorted KeyValueStore.
    type LevelDB struct {
    	storage   *leveldb.DB
    	readOpts  *leveldb_opt.ReadOptions
    	writeOpts *leveldb_opt.WriteOptions
    }
    
    // LevelDBOptions provides options for a LevelDB.
    type LevelDBOptions struct {
    	Path           string // Base path to store files.
    	CacheSizeBytes int
    }
    
    // NewLevelDB returns a newly allocated LevelDB-backed KeyValueStore ready to
    // use.
    func NewLevelDB(o LevelDBOptions) (KeyValueStore, error) {
    	options := &leveldb_opt.Options{
    		BlockCacheCapacity: o.CacheSizeBytes,
    		Filter:             leveldb_filter.NewBloomFilter(10),
    	}
    
    	storage, err := leveldb.OpenFile(o.Path, options)
    	if err != nil {
    		return nil, err
    	}
    
    	return &LevelDB{
    		storage:   storage,
    		readOpts:  &leveldb_opt.ReadOptions{},
    		writeOpts: &leveldb_opt.WriteOptions{},
    	}, nil
    }
    
    // NewBatch implements KeyValueStore.
    func (l *LevelDB) NewBatch() Batch {
    	return &LevelDBBatch{
    		batch: &leveldb.Batch{},
    	}
    }
    
    // Close implements KeyValueStore.
    func (l *LevelDB) Close() error {
    	return l.storage.Close()
    }
    
    // Get implements KeyValueStore.
    func (l *LevelDB) Get(key encoding.BinaryMarshaler, value encoding.BinaryUnmarshaler) (bool, error) {
    	k, err := key.MarshalBinary()
    	if err != nil {
    		return false, err
    	}
    	raw, err := l.storage.Get(k, l.readOpts)
    	if err == leveldb.ErrNotFound {
    		return false, nil
    	}
    	if err != nil {
    		return false, err
    	}
    	if value == nil {
    		return true, nil
    	}
    	return true, value.UnmarshalBinary(raw)
    }
    
    // Has implements KeyValueStore.
    func (l *LevelDB) Has(key encoding.BinaryMarshaler) (has bool, err error) {
    	return l.Get(key, nil)
    }
    
    // Delete implements KeyValueStore.
    func (l *LevelDB) Delete(key encoding.BinaryMarshaler) (bool, error) {
    	k, err := key.MarshalBinary()
    	if err != nil {
    		return false, err
    	}
    	// Note that Delete returns nil if k does not exist. So we have to test
    	// for existence with Has first.
    	if has, err := l.storage.Has(k, l.readOpts); !has || err != nil {
    		return false, err
    	}
    	if err = l.storage.Delete(k, l.writeOpts); err != nil {
    		return false, err
    	}
    	return true, nil
    }
    
    // Put implements KeyValueStore.
    func (l *LevelDB) Put(key, value encoding.BinaryMarshaler) error {
    	k, err := key.MarshalBinary()
    	if err != nil {
    		return err
    	}
    	v, err := value.MarshalBinary()
    	if err != nil {
    		return err
    	}
    	return l.storage.Put(k, v, l.writeOpts)
    }
    
    // Commit implements KeyValueStore.
    func (l *LevelDB) Commit(b Batch) error {
    	return l.storage.Write(b.(*LevelDBBatch).batch, l.writeOpts)
    }
    
    // ForEach implements KeyValueStore.
    func (l *LevelDB) ForEach(cb func(kv KeyValueAccessor) error) error {
    	snap, err := l.storage.GetSnapshot()
    	if err != nil {
    		return err
    	}
    	defer snap.Release()
    
    	iter := snap.NewIterator(keyspace, iteratorOpts)
    
    	kv := &levelDBKeyValueAccessor{it: iter}
    
    	for valid := iter.First(); valid; valid = iter.Next() {
    		if err = iter.Error(); err != nil {
    			return err
    		}
    
    		if err := cb(kv); err != nil {
    			return err
    		}
    	}
    	return nil
    }
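// Callback sketch (assuming codable.Fingerprint implements
// encoding.BinaryUnmarshaler, as its use elsewhere suggests):
//
//	err := l.ForEach(func(kv KeyValueAccessor) error {
//		var fp codable.Fingerprint
//		if err := kv.Key(&fp); err != nil {
//			return err
//		}
//		// inspect fp here ...
//		return nil
//	})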
    
    // LevelDBBatch is a Batch implementation for LevelDB.
    type LevelDBBatch struct {
    	batch *leveldb.Batch
    }
    
    // Put implements Batch.
    func (b *LevelDBBatch) Put(key, value encoding.BinaryMarshaler) error {
    	k, err := key.MarshalBinary()
    	if err != nil {
    		return err
    	}
    	v, err := value.MarshalBinary()
    	if err != nil {
    		return err
    	}
    	b.batch.Put(k, v)
    	return nil
    }
    
    // Delete implements Batch.
    func (b *LevelDBBatch) Delete(key encoding.BinaryMarshaler) error {
    	k, err := key.MarshalBinary()
    	if err != nil {
    		return err
    	}
    	b.batch.Delete(k)
    	return nil
    }
    
    // Reset implements Batch.
    func (b *LevelDBBatch) Reset() {
    	b.batch.Reset()
    }
    
    // levelDBKeyValueAccessor implements KeyValueAccessor.
    type levelDBKeyValueAccessor struct {
    	it leveldb_iterator.Iterator
    }
    
    func (i *levelDBKeyValueAccessor) Key(key encoding.BinaryUnmarshaler) error {
    	return key.UnmarshalBinary(i.it.Key())
    }
    
    func (i *levelDBKeyValueAccessor) Value(value encoding.BinaryUnmarshaler) error {
    	return value.UnmarshalBinary(i.it.Value())
    }
prometheus-0.16.2+ds/storage/local/instrumentation.go
// Copyright 2014 The Prometheus Authors
    // Licensed under the Apache License, Version 2.0 (the "License");
    // you may not use this file except in compliance with the License.
    // You may obtain a copy of the License at
    //
    // http://www.apache.org/licenses/LICENSE-2.0
    //
    // Unless required by applicable law or agreed to in writing, software
    // distributed under the License is distributed on an "AS IS" BASIS,
    // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    // See the License for the specific language governing permissions and
    // limitations under the License.
    
    package local
    
    import "github.com/prometheus/client_golang/prometheus"
    
    // Usually, a separate file for instrumentation is frowned upon. Metrics should
    // be close to where they are used. However, the metrics below are set all over
    // the place, so we go for a separate instrumentation file in this case.
    var (
    	chunkOps = prometheus.NewCounterVec(
    		prometheus.CounterOpts{
    			Namespace: namespace,
    			Subsystem: subsystem,
    			Name:      "chunk_ops_total",
    			Help:      "The total number of chunk operations by their type.",
    		},
    		[]string{opTypeLabel},
    	)
    	chunkDescOps = prometheus.NewCounterVec(
    		prometheus.CounterOpts{
    			Namespace: namespace,
    			Subsystem: subsystem,
    			Name:      "chunkdesc_ops_total",
    			Help:      "The total number of chunk descriptor operations by their type.",
    		},
    		[]string{opTypeLabel},
    	)
    	numMemChunkDescs = prometheus.NewGauge(prometheus.GaugeOpts{
    		Namespace: namespace,
    		Subsystem: subsystem,
    		Name:      "memory_chunkdescs",
    		Help:      "The current number of chunk descriptors in memory.",
    	})
    )
    
    const (
    	namespace = "prometheus"
    	subsystem = "local_storage"
    
    	opTypeLabel = "type"
    
    	// Op-types for seriesOps.
    	create             = "create"
    	archive            = "archive"
    	unarchive          = "unarchive"
    	memoryPurge        = "purge_from_memory"
    	archivePurge       = "purge_from_archive"
    	requestedPurge     = "purge_on_request"
    	memoryMaintenance  = "maintenance_in_memory"
    	archiveMaintenance = "maintenance_in_archive"
    
    	// Op-types for chunkOps.
    	createAndPin    = "create" // A chunkDesc creation with refCount=1.
    	persistAndUnpin = "persist"
    	pin             = "pin"   // Excluding the pin on creation.
    	unpin           = "unpin" // Excluding the unpin on persisting.
    	clone           = "clone"
    	transcode       = "transcode"
    	drop            = "drop"
    
    	// Op-types for chunkOps and chunkDescOps.
    	evict = "evict"
    	load  = "load"
    
    	seriesLocationLabel = "location"
    
    	// Maintenance types for maintainSeriesDuration.
    	maintainInMemory = "memory"
    	maintainArchived = "archived"
    )
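// As a sketch of how these pieces fit together, a chunk eviction elsewhere in
// the package would be counted via the op-type label value defined above:
//
//	chunkOps.WithLabelValues(evict).Inc()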
    
    func init() {
    	prometheus.MustRegister(chunkOps)
    	prometheus.MustRegister(chunkDescOps)
    	prometheus.MustRegister(numMemChunkDescs)
    }
    
    var (
	// Global counter, also used internally, so not implemented as a
	// metric. It is collected in memorySeriesStorage.Collect.
    	// TODO(beorn7): As it is used internally, it is actually very bad style
    	// to have it as a global variable.
    	numMemChunks int64
    
    	// Metric descriptors for the above.
    	numMemChunksDesc = prometheus.NewDesc(
    		prometheus.BuildFQName(namespace, subsystem, "memory_chunks"),
    		"The current number of chunks in memory, excluding cloned chunks (i.e. chunks without a descriptor).",
    		nil, nil,
    	)
    )
prometheus-0.16.2+ds/storage/local/interface.go
// Copyright 2014 The Prometheus Authors
    // Licensed under the Apache License, Version 2.0 (the "License");
    // you may not use this file except in compliance with the License.
    // You may obtain a copy of the License at
    //
    // http://www.apache.org/licenses/LICENSE-2.0
    //
    // Unless required by applicable law or agreed to in writing, software
    // distributed under the License is distributed on an "AS IS" BASIS,
    // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    // See the License for the specific language governing permissions and
    // limitations under the License.
    
    package local
    
    import (
    	"time"
    
    	"github.com/prometheus/client_golang/prometheus"
    	"github.com/prometheus/common/model"
    
    	"github.com/prometheus/prometheus/storage/metric"
    )
    
    // Storage ingests and manages samples, along with various indexes. All methods
    // are goroutine-safe. Storage implements storage.SampleAppender.
    type Storage interface {
    	prometheus.Collector
    	// Append stores a sample in the Storage. Multiple samples for the same
    	// fingerprint need to be submitted in chronological order, from oldest
    	// to newest. When Append has returned, the appended sample might not be
    	// queryable immediately. (Use WaitForIndexing to wait for complete
    	// processing.) The implementation might remove labels with empty value
    	// from the provided Sample as those labels are considered equivalent to
    	// a label not present at all.
    	Append(*model.Sample)
    	// NewPreloader returns a new Preloader which allows preloading and pinning
    	// series data into memory for use within a query.
    	NewPreloader() Preloader
    	// MetricsForLabelMatchers returns the metrics from storage that satisfy the given
    	// label matchers. At least one label matcher must be specified that does not
    	// match the empty string.
    	MetricsForLabelMatchers(...*metric.LabelMatcher) map[model.Fingerprint]metric.Metric
    	// LastSamplePairForFingerprint returns the last sample pair for the
    	// provided fingerprint. If the respective time series does not exist or
    	// has an evicted head chunk, nil is returned.
    	LastSamplePairForFingerprint(model.Fingerprint) *model.SamplePair
    	// Get all of the label values that are associated with a given label name.
    	LabelValuesForLabelName(model.LabelName) model.LabelValues
    	// Get the metric associated with the provided fingerprint.
    	MetricForFingerprint(model.Fingerprint) metric.Metric
    	// Construct an iterator for a given fingerprint.
    	// The iterator will never return samples older than retention time,
    	// relative to the time NewIterator was called.
    	NewIterator(model.Fingerprint) SeriesIterator
    	// Drop all time series associated with the given fingerprints.
    	DropMetricsForFingerprints(...model.Fingerprint)
    	// Run the various maintenance loops in goroutines. Returns when the
    	// storage is ready to use. Keeps everything running in the background
    	// until Stop is called.
    	Start() error
    	// Stop shuts down the Storage gracefully, flushes all pending
	// operations, stops all maintenance loops, and frees all resources.
    	Stop() error
    	// WaitForIndexing returns once all samples in the storage are
	// indexed. Indexing is needed for MetricsForLabelMatchers and
    	// LabelValuesForLabelName and may lag behind.
    	WaitForIndexing()
    }
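// Append/indexing sketch (hedged; a minimal interaction, not a full example):
//
//	s.Append(&model.Sample{
//		Metric:    model.Metric{"__name__": "up"},
//		Value:     1,
//		Timestamp: model.Now(),
//	})
//	s.WaitForIndexing() // now the sample is visible to the lookup methods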
    
    // SeriesIterator enables efficient access of sample values in a series. Its
    // methods are not goroutine-safe. A SeriesIterator iterates over a snapshot of
    // a series, i.e. it is safe to continue using a SeriesIterator after or during
    // modifying the corresponding series, but the iterator will represent the state
// of the series prior to the modification.
    type SeriesIterator interface {
    	// Gets the two values that are immediately adjacent to a given time. In
	// case a value exists at precisely the given time, only that single
    	// value is returned. Only the first or last value is returned (as a
    	// single value), if the given time is before or after the first or last
    	// value, respectively.
    	ValueAtTime(model.Time) []model.SamplePair
    	// Gets the boundary values of an interval: the first and last value
    	// within a given interval.
    	BoundaryValues(metric.Interval) []model.SamplePair
    	// Gets all values contained within a given interval.
    	RangeValues(metric.Interval) []model.SamplePair
    }
    
    // A Preloader preloads series data necessary for a query into memory and pins
    // them until released via Close(). Its methods are generally not
    // goroutine-safe.
    type Preloader interface {
    	PreloadRange(
    		fp model.Fingerprint,
    		from model.Time, through model.Time,
    		stalenessDelta time.Duration,
    	) error
    	// Close unpins any previously requested series data from memory.
    	Close()
    }
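// Preloader lifecycle sketch (fp, from, and through are placeholders):
//
//	p := s.NewPreloader()
//	defer p.Close() // unpin the preloaded chunks when the query is done
//	if err := p.PreloadRange(fp, from, through, 5*time.Minute); err != nil {
//		return err
//	}
//	it := s.NewIterator(fp)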
prometheus-0.16.2+ds/storage/local/locker.go
package local
    
    import (
    	"sync"
    
    	"github.com/prometheus/common/model"
    )
    
    // fingerprintLocker allows locking individual fingerprints. To limit the number
    // of mutexes needed for that, only a fixed number of mutexes are
    // allocated. Fingerprints to be locked are assigned to those pre-allocated
    // mutexes by their value. (Note that fingerprints are calculated by a hash
    // function, so that an approximately equal distribution over the mutexes is
    // expected, even without additional hashing of the fingerprint value.)
    // Collisions are not detected. If two fingerprints get assigned to the same
    // mutex, only one of them can be locked at the same time. As long as the number
    // of pre-allocated mutexes is much larger than the number of goroutines
    // requiring a fingerprint lock concurrently, the loss in efficiency is
    // small. However, a goroutine must never lock more than one fingerprint at the
// same time. (Otherwise, a collision could make the goroutine try to acquire
// the same mutex twice, deadlocking itself.)
    type fingerprintLocker struct {
    	fpMtxs    []sync.Mutex
    	numFpMtxs uint
    }
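// Usage sketch (the mutex count 4096 is an arbitrary example value):
//
//	locker := newFingerprintLocker(4096)
//	locker.Lock(fp)
//	defer locker.Unlock(fp)
//	// ... work on the series identified by fp ...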
    
    // newFingerprintLocker returns a new fingerprintLocker ready for use.
    func newFingerprintLocker(preallocatedMutexes int) *fingerprintLocker {
    	return &fingerprintLocker{
    		make([]sync.Mutex, preallocatedMutexes),
    		uint(preallocatedMutexes),
    	}
    }
    
    // Lock locks the given fingerprint.
    func (l *fingerprintLocker) Lock(fp model.Fingerprint) {
    	l.fpMtxs[uint(fp)%l.numFpMtxs].Lock()
    }
    
    // Unlock unlocks the given fingerprint.
    func (l *fingerprintLocker) Unlock(fp model.Fingerprint) {
    	l.fpMtxs[uint(fp)%l.numFpMtxs].Unlock()
    }
prometheus-0.16.2+ds/storage/local/locker_test.go
package local
    
    import (
    	"sync"
    	"testing"
    
    	"github.com/prometheus/common/model"
    )
    
    func BenchmarkFingerprintLockerParallel(b *testing.B) {
    	numGoroutines := 10
    	numFingerprints := 10
    	numLockOps := b.N
    	locker := newFingerprintLocker(100)
    
    	wg := sync.WaitGroup{}
    	b.ResetTimer()
    	for i := 0; i < numGoroutines; i++ {
    		wg.Add(1)
    		go func(i int) {
    			for j := 0; j < numLockOps; j++ {
    				fp1 := model.Fingerprint(j % numFingerprints)
    				fp2 := model.Fingerprint(j%numFingerprints + numFingerprints)
    				locker.Lock(fp1)
    				locker.Lock(fp2)
    				locker.Unlock(fp2)
    				locker.Unlock(fp1)
    			}
    			wg.Done()
    		}(i)
    	}
    	wg.Wait()
    }
    
    func BenchmarkFingerprintLockerSerial(b *testing.B) {
    	numFingerprints := 10
    	locker := newFingerprintLocker(100)
    
    	b.ResetTimer()
    	for i := 0; i < b.N; i++ {
    		fp := model.Fingerprint(i % numFingerprints)
    		locker.Lock(fp)
    		locker.Unlock(fp)
    	}
    }
prometheus-0.16.2+ds/storage/local/mapper.go
package local
    
    import (
    	"fmt"
    	"sort"
    	"strings"
    	"sync"
    	"sync/atomic"
    
    	"github.com/prometheus/client_golang/prometheus"
    	"github.com/prometheus/common/log"
    
    	"github.com/prometheus/common/model"
    )
    
    const maxMappedFP = 1 << 20 // About 1M fingerprints reserved for mapping.
    
    var separatorString = string([]byte{model.SeparatorByte})
    
    // fpMappings maps original fingerprints to a map of string representations of
    // metrics to the truly unique fingerprint.
    type fpMappings map[model.Fingerprint]map[string]model.Fingerprint
    
    // fpMapper is used to map fingerprints in order to work around fingerprint
    // collisions.
    type fpMapper struct {
    	// highestMappedFP has to be aligned for atomic operations.
    	highestMappedFP model.Fingerprint
    
    	mtx      sync.RWMutex // Protects mappings.
    	mappings fpMappings
    
    	fpToSeries *seriesMap
    	p          *persistence
    
    	mappingsCounter prometheus.Counter
    }
    
    // newFPMapper loads the collision map from the persistence and
    // returns an fpMapper ready to use.
    func newFPMapper(fpToSeries *seriesMap, p *persistence) (*fpMapper, error) {
    	m := &fpMapper{
    		fpToSeries: fpToSeries,
    		p:          p,
    		mappingsCounter: prometheus.NewCounter(prometheus.CounterOpts{
    			Namespace: namespace,
    			Subsystem: subsystem,
    			Name:      "fingerprint_mappings_total",
    			Help:      "The total number of fingerprints being mapped to avoid collisions.",
    		}),
    	}
    	mappings, nextFP, err := p.loadFPMappings()
    	if err != nil {
    		return nil, err
    	}
    	m.mappings = mappings
    	m.mappingsCounter.Set(float64(len(m.mappings)))
    	m.highestMappedFP = nextFP
    	return m, nil
    }
    
// mapFP takes a raw fingerprint (as returned by Metric.FastFingerprint) and
    // returns a truly unique fingerprint. The caller must have locked the raw
    // fingerprint.
    //
    // If an error is encountered, it is returned together with the unchanged raw
    // fingerprint.
    func (m *fpMapper) mapFP(fp model.Fingerprint, metric model.Metric) (model.Fingerprint, error) {
    	// First check if we are in the reserved FP space, in which case this is
    	// automatically a collision that has to be mapped.
    	if fp <= maxMappedFP {
    		return m.maybeAddMapping(fp, metric)
    	}
    
    	// Then check the most likely case: This fp belongs to a series that is
    	// already in memory.
    	s, ok := m.fpToSeries.get(fp)
    	if ok {
    		// FP exists in memory, but is it for the same metric?
    		if metric.Equal(s.metric) {
    			// Yupp. We are done.
    			return fp, nil
    		}
    		// Collision detected!
    		return m.maybeAddMapping(fp, metric)
    	}
    	// Metric is not in memory. Before doing the expensive archive lookup,
    	// check if we have a mapping for this metric in place already.
    	m.mtx.RLock()
    	mappedFPs, fpAlreadyMapped := m.mappings[fp]
    	m.mtx.RUnlock()
    	if fpAlreadyMapped {
    		// We indeed have mapped fp historically.
    		ms := metricToUniqueString(metric)
		// fp is locked by the caller, so no further locking of the
		// mappings for fp is required (they are specific to fp).
    		mappedFP, ok := mappedFPs[ms]
    		if ok {
    			// Historical mapping found, return the mapped FP.
    			return mappedFP, nil
    		}
    	}
    	// If we are here, FP does not exist in memory and is either not mapped
    	// at all, or existing mappings for FP are not for m. Check if we have
    	// something for FP in the archive.
    	archivedMetric, err := m.p.archivedMetric(fp)
    	if err != nil {
    		return fp, err
    	}
    	if archivedMetric != nil {
    		// FP exists in archive, but is it for the same metric?
    		if metric.Equal(archivedMetric) {
    			// Yupp. We are done.
    			return fp, nil
    		}
    		// Collision detected!
    		return m.maybeAddMapping(fp, metric)
    	}
    	// As fp does not exist, neither in memory nor in archive, we can safely
    	// keep it unmapped.
    	return fp, nil
    }
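// Worked example (made-up fingerprints; rawFP > maxMappedFP): let metrics m1
// and m2 collide on the same raw fingerprint rawFP.
//
//	fp1, _ := m.mapFP(rawFP, m1) // == rawFP: no series and no mapping yet
//	// ... m1's series is then created under rawFP ...
//	fp2, _ := m.mapFP(rawFP, m2) // != rawFP: collision detected, a fresh
//	                             // fingerprint <= maxMappedFP is assigned
//	                             // and checkpointed to disk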
    
    // maybeAddMapping is only used internally. It takes a detected collision and
// adds it to the mappings map if not yet there. In any case, it returns the
    // truly unique fingerprint for the colliding metric.
    func (m *fpMapper) maybeAddMapping(
    	fp model.Fingerprint,
    	collidingMetric model.Metric,
    ) (model.Fingerprint, error) {
    	ms := metricToUniqueString(collidingMetric)
    	m.mtx.RLock()
    	mappedFPs, ok := m.mappings[fp]
    	m.mtx.RUnlock()
    	if ok {
    		// fp is locked by the caller, so no further locking required.
    		mappedFP, ok := mappedFPs[ms]
    		if ok {
    			return mappedFP, nil // Existing mapping.
    		}
    		// A new mapping has to be created.
    		mappedFP = m.nextMappedFP()
    		mappedFPs[ms] = mappedFP
    		m.mtx.Lock()
    		// Checkpoint mappings after each change.
    		err := m.p.checkpointFPMappings(m.mappings)
    		m.mtx.Unlock()
    		log.Infof(
    			"Collision detected for fingerprint %v, metric %v, mapping to new fingerprint %v.",
    			fp, collidingMetric, mappedFP,
    		)
    		return mappedFP, err
    	}
    	// This is the first collision for fp.
    	mappedFP := m.nextMappedFP()
    	mappedFPs = map[string]model.Fingerprint{ms: mappedFP}
    	m.mtx.Lock()
    	m.mappings[fp] = mappedFPs
    	m.mappingsCounter.Inc()
    	// Checkpoint mappings after each change.
    	err := m.p.checkpointFPMappings(m.mappings)
    	m.mtx.Unlock()
    	log.Infof(
    		"Collision detected for fingerprint %v, metric %v, mapping to new fingerprint %v.",
    		fp, collidingMetric, mappedFP,
    	)
    	return mappedFP, err
    }
    
    func (m *fpMapper) nextMappedFP() model.Fingerprint {
    	mappedFP := model.Fingerprint(atomic.AddUint64((*uint64)(&m.highestMappedFP), 1))
    	if mappedFP > maxMappedFP {
    		panic(fmt.Errorf("more than %v fingerprints mapped in collision detection", maxMappedFP))
    	}
    	return mappedFP
    }
    
    // Describe implements prometheus.Collector.
    func (m *fpMapper) Describe(ch chan<- *prometheus.Desc) {
    	ch <- m.mappingsCounter.Desc()
    }
    
    // Collect implements prometheus.Collector.
    func (m *fpMapper) Collect(ch chan<- prometheus.Metric) {
    	ch <- m.mappingsCounter
    }
    
    // metricToUniqueString turns a metric into a string in a reproducible and
    // unique way, i.e. the same metric will always create the same string, and
    // different metrics will always create different strings. In a way, it is the
    // "ideal" fingerprint function, only that it is more expensive than the
    // FastFingerprint function, and its result is not suitable as a key for maps
    // and indexes as it might become really large, causing a lot of hashing effort
    // in maps and a lot of storage overhead in indexes.
    func metricToUniqueString(m model.Metric) string {
    	parts := make([]string, 0, len(m))
    	for ln, lv := range m {
    		parts = append(parts, string(ln)+separatorString+string(lv))
    	}
    	sort.Strings(parts)
    	return strings.Join(parts, separatorString)
    }
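// Example (sketch; <sep> stands for the byte model.SeparatorByte):
//
//	metricToUniqueString(model.Metric{"b": "2", "a": "1"})
//	// => "a<sep>1<sep>b<sep>2" (name/value parts sorted, then joined)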
prometheus-0.16.2+ds/storage/local/mapper_test.go
package local
    
    import (
    	"testing"
    
    	"github.com/prometheus/common/model"
    )
    
    var (
    	// cm11, cm12, cm13 are colliding with fp1.
    	// cm21, cm22 are colliding with fp2.
    	// cm31, cm32 are colliding with fp3, which is below maxMappedFP.
    	// Note that fingerprints are set and not actually calculated.
    	// The collision detection is independent from the actually used
    	// fingerprinting algorithm.
    	fp1  = model.Fingerprint(maxMappedFP + 1)
    	fp2  = model.Fingerprint(maxMappedFP + 2)
    	fp3  = model.Fingerprint(1)
    	cm11 = model.Metric{
    		"foo":   "bar",
    		"dings": "bumms",
    	}
    	cm12 = model.Metric{
    		"bar": "foo",
    	}
    	cm13 = model.Metric{
    		"foo": "bar",
    	}
    	cm21 = model.Metric{
    		"foo":   "bumms",
    		"dings": "bar",
    	}
    	cm22 = model.Metric{
    		"dings": "foo",
    		"bar":   "bumms",
    	}
    	cm31 = model.Metric{
    		"bumms": "dings",
    	}
    	cm32 = model.Metric{
    		"bumms": "dings",
    		"bar":   "foo",
    	}
    )
    
    func TestFPMapper(t *testing.T) {
    	sm := newSeriesMap()
    
    	p, closer := newTestPersistence(t, 1)
    	defer closer.Close()
    
    	mapper, err := newFPMapper(sm, p)
    	if err != nil {
    		t.Fatal(err)
    	}
    
    	// Everything is empty, resolving a FP should do nothing.
    	gotFP, err := mapper.mapFP(fp1, cm11)
    	if err != nil {
    		t.Fatal(err)
    	}
    	if wantFP := fp1; gotFP != wantFP {
    		t.Errorf("got fingerprint %v, want fingerprint %v", gotFP, wantFP)
    	}
    	gotFP, err = mapper.mapFP(fp1, cm12)
    	if err != nil {
    		t.Fatal(err)
    	}
    	if wantFP := fp1; gotFP != wantFP {
    		t.Errorf("got fingerprint %v, want fingerprint %v", gotFP, wantFP)
    	}
    
    	// cm11 is in sm. Adding cm11 should do nothing. Mapping cm12 should resolve
    	// the collision.
    	sm.put(fp1, &memorySeries{metric: cm11})
    	gotFP, err = mapper.mapFP(fp1, cm11)
    	if err != nil {
    		t.Fatal(err)
    	}
    	if wantFP := fp1; gotFP != wantFP {
    		t.Errorf("got fingerprint %v, want fingerprint %v", gotFP, wantFP)
    	}
    	gotFP, err = mapper.mapFP(fp1, cm12)
    	if err != nil {
    		t.Fatal(err)
    	}
    	if wantFP := model.Fingerprint(1); gotFP != wantFP {
    		t.Errorf("got fingerprint %v, want fingerprint %v", gotFP, wantFP)
    	}
    
    	// The mapped cm12 is added to sm, too. That should not change the outcome.
    	sm.put(model.Fingerprint(1), &memorySeries{metric: cm12})
    	gotFP, err = mapper.mapFP(fp1, cm11)
    	if err != nil {
    		t.Fatal(err)
    	}
    	if wantFP := fp1; gotFP != wantFP {
    		t.Errorf("got fingerprint %v, want fingerprint %v", gotFP, wantFP)
    	}
    	gotFP, err = mapper.mapFP(fp1, cm12)
    	if err != nil {
    		t.Fatal(err)
    	}
    	if wantFP := model.Fingerprint(1); gotFP != wantFP {
    		t.Errorf("got fingerprint %v, want fingerprint %v", gotFP, wantFP)
    	}
    
    	// Now map cm13, should reproducibly result in the next mapped FP.
    	gotFP, err = mapper.mapFP(fp1, cm13)
    	if err != nil {
    		t.Fatal(err)
    	}
    	if wantFP := model.Fingerprint(2); gotFP != wantFP {
    		t.Errorf("got fingerprint %v, want fingerprint %v", gotFP, wantFP)
    	}
    	gotFP, err = mapper.mapFP(fp1, cm13)
    	if err != nil {
    		t.Fatal(err)
    	}
    	if wantFP := model.Fingerprint(2); gotFP != wantFP {
    		t.Errorf("got fingerprint %v, want fingerprint %v", gotFP, wantFP)
    	}
    
    	// Add cm13 to sm. Should not change anything.
    	sm.put(model.Fingerprint(2), &memorySeries{metric: cm13})
    	gotFP, err = mapper.mapFP(fp1, cm11)
    	if err != nil {
    		t.Fatal(err)
    	}
    	if wantFP := fp1; gotFP != wantFP {
    		t.Errorf("got fingerprint %v, want fingerprint %v", gotFP, wantFP)
    	}
    	gotFP, err = mapper.mapFP(fp1, cm12)
    	if err != nil {
    		t.Fatal(err)
    	}
    	if wantFP := model.Fingerprint(1); gotFP != wantFP {
    		t.Errorf("got fingerprint %v, want fingerprint %v", gotFP, wantFP)
    	}
    	gotFP, err = mapper.mapFP(fp1, cm13)
    	if err != nil {
    		t.Fatal(err)
    	}
    	if wantFP := model.Fingerprint(2); gotFP != wantFP {
    		t.Errorf("got fingerprint %v, want fingerprint %v", gotFP, wantFP)
    	}
    
    	// Now add cm21 and cm22 in the same way, checking the mapped FPs.
    	gotFP, err = mapper.mapFP(fp2, cm21)
    	if err != nil {
    		t.Fatal(err)
    	}
    	if wantFP := fp2; gotFP != wantFP {
    		t.Errorf("got fingerprint %v, want fingerprint %v", gotFP, wantFP)
    	}
    	sm.put(fp2, &memorySeries{metric: cm21})
    	gotFP, err = mapper.mapFP(fp2, cm21)
    	if err != nil {
    		t.Fatal(err)
    	}
    	if wantFP := fp2; gotFP != wantFP {
    		t.Errorf("got fingerprint %v, want fingerprint %v", gotFP, wantFP)
    	}
    	gotFP, err = mapper.mapFP(fp2, cm22)
    	if err != nil {
    		t.Fatal(err)
    	}
    	if wantFP := model.Fingerprint(3); gotFP != wantFP {
    		t.Errorf("got fingerprint %v, want fingerprint %v", gotFP, wantFP)
    	}
    	sm.put(model.Fingerprint(3), &memorySeries{metric: cm22})
    	gotFP, err = mapper.mapFP(fp2, cm21)
    	if err != nil {
    		t.Fatal(err)
    	}
    	if wantFP := fp2; gotFP != wantFP {
    		t.Errorf("got fingerprint %v, want fingerprint %v", gotFP, wantFP)
    	}
    	gotFP, err = mapper.mapFP(fp2, cm22)
    	if err != nil {
    		t.Fatal(err)
    	}
    	if wantFP := model.Fingerprint(3); gotFP != wantFP {
    		t.Errorf("got fingerprint %v, want fingerprint %v", gotFP, wantFP)
    	}
    
    	// Map cm31, resulting in a mapping straight away.
    	gotFP, err = mapper.mapFP(fp3, cm31)
    	if err != nil {
    		t.Fatal(err)
    	}
    	if wantFP := model.Fingerprint(4); gotFP != wantFP {
    		t.Errorf("got fingerprint %v, want fingerprint %v", gotFP, wantFP)
    	}
    	sm.put(model.Fingerprint(4), &memorySeries{metric: cm31})
    
    	// Map cm32, which is now mapped for two reasons...
    	gotFP, err = mapper.mapFP(fp3, cm32)
    	if err != nil {
    		t.Fatal(err)
    	}
    	if wantFP := model.Fingerprint(5); gotFP != wantFP {
    		t.Errorf("got fingerprint %v, want fingerprint %v", gotFP, wantFP)
    	}
    	sm.put(model.Fingerprint(5), &memorySeries{metric: cm32})
    
    	// Now check ALL the mappings, just to be sure.
    	gotFP, err = mapper.mapFP(fp1, cm11)
    	if err != nil {
    		t.Fatal(err)
    	}
    	if wantFP := fp1; gotFP != wantFP {
    		t.Errorf("got fingerprint %v, want fingerprint %v", gotFP, wantFP)
    	}
    	gotFP, err = mapper.mapFP(fp1, cm12)
    	if err != nil {
    		t.Fatal(err)
    	}
    	if wantFP := model.Fingerprint(1); gotFP != wantFP {
    		t.Errorf("got fingerprint %v, want fingerprint %v", gotFP, wantFP)
    	}
    	gotFP, err = mapper.mapFP(fp1, cm13)
    	if err != nil {
    		t.Fatal(err)
    	}
    	if wantFP := model.Fingerprint(2); gotFP != wantFP {
    		t.Errorf("got fingerprint %v, want fingerprint %v", gotFP, wantFP)
    	}
    	gotFP, err = mapper.mapFP(fp2, cm21)
    	if err != nil {
    		t.Fatal(err)
    	}
    	if wantFP := fp2; gotFP != wantFP {
    		t.Errorf("got fingerprint %v, want fingerprint %v", gotFP, wantFP)
    	}
    	gotFP, err = mapper.mapFP(fp2, cm22)
    	if err != nil {
    		t.Fatal(err)
    	}
    	if wantFP := model.Fingerprint(3); gotFP != wantFP {
    		t.Errorf("got fingerprint %v, want fingerprint %v", gotFP, wantFP)
    	}
    	gotFP, err = mapper.mapFP(fp3, cm31)
    	if err != nil {
    		t.Fatal(err)
    	}
    	if wantFP := model.Fingerprint(4); gotFP != wantFP {
    		t.Errorf("got fingerprint %v, want fingerprint %v", gotFP, wantFP)
    	}
    	gotFP, err = mapper.mapFP(fp3, cm32)
    	if err != nil {
    		t.Fatal(err)
    	}
    	if wantFP := model.Fingerprint(5); gotFP != wantFP {
    		t.Errorf("got fingerprint %v, want fingerprint %v", gotFP, wantFP)
    	}
    
    	// Remove all the fingerprints from sm, which should change nothing, as
    	// the existing mappings stay and should be detected.
    	sm.del(fp1)
    	sm.del(fp2)
    	sm.del(fp3)
    	gotFP, err = mapper.mapFP(fp1, cm11)
    	if err != nil {
    		t.Fatal(err)
    	}
    	if wantFP := fp1; gotFP != wantFP {
    		t.Errorf("got fingerprint %v, want fingerprint %v", gotFP, wantFP)
    	}
    	gotFP, err = mapper.mapFP(fp1, cm12)
    	if err != nil {
    		t.Fatal(err)
    	}
    	if wantFP := model.Fingerprint(1); gotFP != wantFP {
    		t.Errorf("got fingerprint %v, want fingerprint %v", gotFP, wantFP)
    	}
    	gotFP, err = mapper.mapFP(fp1, cm13)
    	if err != nil {
    		t.Fatal(err)
    	}
    	if wantFP := model.Fingerprint(2); gotFP != wantFP {
    		t.Errorf("got fingerprint %v, want fingerprint %v", gotFP, wantFP)
    	}
    	gotFP, err = mapper.mapFP(fp2, cm21)
    	if err != nil {
    		t.Fatal(err)
    	}
    	if wantFP := fp2; gotFP != wantFP {
    		t.Errorf("got fingerprint %v, want fingerprint %v", gotFP, wantFP)
    	}
    	gotFP, err = mapper.mapFP(fp2, cm22)
    	if err != nil {
    		t.Fatal(err)
    	}
    	if wantFP := model.Fingerprint(3); gotFP != wantFP {
    		t.Errorf("got fingerprint %v, want fingerprint %v", gotFP, wantFP)
    	}
    	gotFP, err = mapper.mapFP(fp3, cm31)
    	if err != nil {
    		t.Fatal(err)
    	}
    	if wantFP := model.Fingerprint(4); gotFP != wantFP {
    		t.Errorf("got fingerprint %v, want fingerprint %v", gotFP, wantFP)
    	}
    	gotFP, err = mapper.mapFP(fp3, cm32)
    	if err != nil {
    		t.Fatal(err)
    	}
    	if wantFP := model.Fingerprint(5); gotFP != wantFP {
    		t.Errorf("got fingerprint %v, want fingerprint %v", gotFP, wantFP)
    	}
    
    	// Load the mapper anew from disk and then check all the mappings again
    	// to make sure all changes have made it to disk.
    	mapper, err = newFPMapper(sm, p)
    	if err != nil {
    		t.Fatal(err)
    	}
    	gotFP, err = mapper.mapFP(fp1, cm11)
    	if err != nil {
    		t.Fatal(err)
    	}
    	if wantFP := fp1; gotFP != wantFP {
    		t.Errorf("got fingerprint %v, want fingerprint %v", gotFP, wantFP)
    	}
    	gotFP, err = mapper.mapFP(fp1, cm12)
    	if err != nil {
    		t.Fatal(err)
    	}
    	if wantFP := model.Fingerprint(1); gotFP != wantFP {
    		t.Errorf("got fingerprint %v, want fingerprint %v", gotFP, wantFP)
    	}
    	gotFP, err = mapper.mapFP(fp1, cm13)
    	if err != nil {
    		t.Fatal(err)
    	}
    	if wantFP := model.Fingerprint(2); gotFP != wantFP {
    		t.Errorf("got fingerprint %v, want fingerprint %v", gotFP, wantFP)
    	}
    	gotFP, err = mapper.mapFP(fp2, cm21)
    	if err != nil {
    		t.Fatal(err)
    	}
    	if wantFP := fp2; gotFP != wantFP {
    		t.Errorf("got fingerprint %v, want fingerprint %v", gotFP, wantFP)
    	}
    	gotFP, err = mapper.mapFP(fp2, cm22)
    	if err != nil {
    		t.Fatal(err)
    	}
    	if wantFP := model.Fingerprint(3); gotFP != wantFP {
    		t.Errorf("got fingerprint %v, want fingerprint %v", gotFP, wantFP)
    	}
    	gotFP, err = mapper.mapFP(fp3, cm31)
    	if err != nil {
    		t.Fatal(err)
    	}
    	if wantFP := model.Fingerprint(4); gotFP != wantFP {
    		t.Errorf("got fingerprint %v, want fingerprint %v", gotFP, wantFP)
    	}
    	gotFP, err = mapper.mapFP(fp3, cm32)
    	if err != nil {
    		t.Fatal(err)
    	}
    	if wantFP := model.Fingerprint(5); gotFP != wantFP {
    		t.Errorf("got fingerprint %v, want fingerprint %v", gotFP, wantFP)
    	}
    
	// To make sure that the mapping layer is not queried if the FP is found
	// in sm, but that it is queried before going to the archive, now put
	// fp1 with cm12 into sm and fp2 with cm22 into the archive (which will
	// never happen in practice, as only mapped FPs are put into sm and the
	// archive).
    	sm.put(fp1, &memorySeries{metric: cm12})
    	p.archiveMetric(fp2, cm22, 0, 0)
    	gotFP, err = mapper.mapFP(fp1, cm12)
    	if err != nil {
    		t.Fatal(err)
    	}
    	if wantFP := fp1; gotFP != wantFP { // No mapping happened.
    		t.Errorf("got fingerprint %v, want fingerprint %v", gotFP, wantFP)
    	}
    	gotFP, err = mapper.mapFP(fp2, cm22)
    	if err != nil {
    		t.Fatal(err)
    	}
    	if wantFP := model.Fingerprint(3); gotFP != wantFP { // Old mapping still applied.
    		t.Errorf("got fingerprint %v, want fingerprint %v", gotFP, wantFP)
    	}
    
    	// If we now map cm21, we should get a mapping as the collision with the
    	// archived metric is detected. Again, this is a pathological situation
    	// that must never happen in real operations. It's just staged here to
    	// test the expected behavior.
    	gotFP, err = mapper.mapFP(fp2, cm21)
    	if err != nil {
    		t.Fatal(err)
    	}
    	if wantFP := model.Fingerprint(6); gotFP != wantFP {
    		t.Errorf("got fingerprint %v, want fingerprint %v", gotFP, wantFP)
	}
}
    prometheus-0.16.2+ds/storage/local/persistence.go000066400000000000000000001436051265137125100220220ustar00rootroot00000000000000// Copyright 2014 The Prometheus Authors
    // Licensed under the Apache License, Version 2.0 (the "License");
    // you may not use this file except in compliance with the License.
    // You may obtain a copy of the License at
    //
    // http://www.apache.org/licenses/LICENSE-2.0
    //
    // Unless required by applicable law or agreed to in writing, software
    // distributed under the License is distributed on an "AS IS" BASIS,
    // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    // See the License for the specific language governing permissions and
    // limitations under the License.
    
    package local
    
    import (
    	"bufio"
    	"encoding/binary"
    	"fmt"
    	"io"
    	"io/ioutil"
    	"os"
    	"path"
    	"path/filepath"
    	"strconv"
    	"strings"
    	"sync"
    	"sync/atomic"
    	"time"
    
    	"github.com/prometheus/client_golang/prometheus"
    	"github.com/prometheus/common/log"
    	"github.com/prometheus/common/model"
    
    	"github.com/prometheus/prometheus/storage/local/codable"
    	"github.com/prometheus/prometheus/storage/local/index"
    	"github.com/prometheus/prometheus/util/flock"
    )
    
    const (
    	// Version of the storage as it can be found in the version file.
    	// Increment to protect against incompatible changes.
    	Version         = 1
    	versionFileName = "VERSION"
    
    	seriesFileSuffix     = ".db"
    	seriesTempFileSuffix = ".db.tmp"
    	seriesDirNameLen     = 2 // How many bytes of the fingerprint in dir name.
    
    	headsFileName            = "heads.db"
    	headsTempFileName        = "heads.db.tmp"
    	headsFormatVersion       = 2
    	headsFormatLegacyVersion = 1 // Can read, but will never write.
    	headsMagicString         = "PrometheusHeads"
    
    	mappingsFileName      = "mappings.db"
    	mappingsTempFileName  = "mappings.db.tmp"
    	mappingsFormatVersion = 1
    	mappingsMagicString   = "PrometheusMappings"
    
    	dirtyFileName = "DIRTY"
    
    	fileBufSize = 1 << 16 // 64kiB.
    
    	chunkHeaderLen             = 17
    	chunkHeaderTypeOffset      = 0
    	chunkHeaderFirstTimeOffset = 1
    	chunkHeaderLastTimeOffset  = 9
    	chunkLenWithHeader         = chunkLen + chunkHeaderLen
    	chunkMaxBatchSize          = 64 // How many chunks to load at most in one batch.
    
    	indexingMaxBatchSize  = 1024 * 1024
    	indexingBatchTimeout  = 500 * time.Millisecond // Commit batch when idle for that long.
    	indexingQueueCapacity = 1024 * 16
    )
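
// Illustrative layout sketch (not part of the original source): with the
// constants above, every chunk entry in a series file occupies
// chunkLenWithHeader bytes, laid out as
//
//	byte  0      : chunk type (chunkHeaderTypeOffset)
//	bytes 1..8   : first sample time, little-endian uint64 (chunkHeaderFirstTimeOffset)
//	bytes 9..16  : last sample time, little-endian uint64 (chunkHeaderLastTimeOffset)
//	bytes 17..   : the chunkLen bytes of chunk data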
    
    var fpLen = len(model.Fingerprint(0).String()) // Length of a fingerprint as string.
    
    const (
    	flagHeadChunkPersisted byte = 1 << iota
    	// Add more flags here like:
    	// flagFoo
    	// flagBar
    )
    
    type indexingOpType byte
    
    const (
    	add indexingOpType = iota
    	remove
    )
    
    type indexingOp struct {
    	fingerprint model.Fingerprint
    	metric      model.Metric
    	opType      indexingOpType
    }
    
// A persistence is used by a Storage implementation to store samples
    // persistently across restarts. The methods are only goroutine-safe if
    // explicitly marked as such below. The chunk-related methods persistChunks,
    // dropChunks, loadChunks, and loadChunkDescs can be called concurrently with
    // each other if each call refers to a different fingerprint.
    type persistence struct {
    	basePath string
    
    	archivedFingerprintToMetrics   *index.FingerprintMetricIndex
    	archivedFingerprintToTimeRange *index.FingerprintTimeRangeIndex
    	labelPairToFingerprints        *index.LabelPairFingerprintIndex
    	labelNameToLabelValues         *index.LabelNameLabelValuesIndex
    
    	indexingQueue   chan indexingOp
    	indexingStopped chan struct{}
    	indexingFlush   chan chan int
    
    	indexingQueueLength   prometheus.Gauge
    	indexingQueueCapacity prometheus.Metric
    	indexingBatchSizes    prometheus.Summary
    	indexingBatchDuration prometheus.Summary
    	checkpointDuration    prometheus.Gauge
    	dirtyCounter          prometheus.Counter
    
    	dirtyMtx       sync.Mutex     // Protects dirty and becameDirty.
    	dirty          bool           // true if persistence was started in dirty state.
    	becameDirty    bool           // true if an inconsistency came up during runtime.
    	pedanticChecks bool           // true if crash recovery should check each series.
    	dirtyFileName  string         // The file used for locking and to mark dirty state.
    	fLock          flock.Releaser // The file lock to protect against concurrent usage.
    
    	shouldSync syncStrategy
    
    	minShrinkRatio float64 // How much a series file has to shrink to justify dropping chunks.
    
    	bufPool sync.Pool
    }
    
    // newPersistence returns a newly allocated persistence backed by local disk storage, ready to use.
    func newPersistence(
    	basePath string,
    	dirty, pedanticChecks bool,
    	shouldSync syncStrategy,
    	minShrinkRatio float64,
    ) (*persistence, error) {
    	dirtyPath := filepath.Join(basePath, dirtyFileName)
    	versionPath := filepath.Join(basePath, versionFileName)
    
    	if versionData, err := ioutil.ReadFile(versionPath); err == nil {
    		if persistedVersion, err := strconv.Atoi(strings.TrimSpace(string(versionData))); err != nil {
    			return nil, fmt.Errorf("cannot parse content of %s: %s", versionPath, versionData)
    		} else if persistedVersion != Version {
    			return nil, fmt.Errorf("found storage version %d on disk, need version %d - please wipe storage or run a version of Prometheus compatible with storage version %d", persistedVersion, Version, persistedVersion)
    		}
    	} else if os.IsNotExist(err) {
    		// No version file found. Let's create the directory (in case
    		// it's not there yet) and then check if it is actually
    		// empty. If not, we have found an old storage directory without
    		// version file, so we have to bail out.
    		if err := os.MkdirAll(basePath, 0700); err != nil {
    			return nil, err
    		}
    		fis, err := ioutil.ReadDir(basePath)
    		if err != nil {
    			return nil, err
    		}
    		if len(fis) > 0 && !(len(fis) == 1 && fis[0].Name() == "lost+found" && fis[0].IsDir()) {
    			return nil, fmt.Errorf("could not detect storage version on disk, assuming version 0, need version %d - please wipe storage or run a version of Prometheus compatible with storage version 0", Version)
    		}
    		// Finally we can write our own version into a new version file.
    		file, err := os.Create(versionPath)
    		if err != nil {
    			return nil, err
    		}
    		defer file.Close()
    		if _, err := fmt.Fprintf(file, "%d\n", Version); err != nil {
    			return nil, err
    		}
    	} else {
    		return nil, err
    	}
    
    	fLock, dirtyfileExisted, err := flock.New(dirtyPath)
    	if err != nil {
    		log.Errorf("Could not lock %s, Prometheus already running?", dirtyPath)
    		return nil, err
    	}
    	if dirtyfileExisted {
    		dirty = true
    	}
    
    	archivedFingerprintToMetrics, err := index.NewFingerprintMetricIndex(basePath)
    	if err != nil {
    		return nil, err
    	}
    	archivedFingerprintToTimeRange, err := index.NewFingerprintTimeRangeIndex(basePath)
    	if err != nil {
    		return nil, err
    	}
    
    	p := &persistence{
    		basePath: basePath,
    
    		archivedFingerprintToMetrics:   archivedFingerprintToMetrics,
    		archivedFingerprintToTimeRange: archivedFingerprintToTimeRange,
    
    		indexingQueue:   make(chan indexingOp, indexingQueueCapacity),
    		indexingStopped: make(chan struct{}),
    		indexingFlush:   make(chan chan int),
    
    		indexingQueueLength: prometheus.NewGauge(prometheus.GaugeOpts{
    			Namespace: namespace,
    			Subsystem: subsystem,
    			Name:      "indexing_queue_length",
    			Help:      "The number of metrics waiting to be indexed.",
    		}),
    		indexingQueueCapacity: prometheus.MustNewConstMetric(
    			prometheus.NewDesc(
    				prometheus.BuildFQName(namespace, subsystem, "indexing_queue_capacity"),
    				"The capacity of the indexing queue.",
    				nil, nil,
    			),
    			prometheus.GaugeValue,
    			float64(indexingQueueCapacity),
    		),
    		indexingBatchSizes: prometheus.NewSummary(
    			prometheus.SummaryOpts{
    				Namespace: namespace,
    				Subsystem: subsystem,
    				Name:      "indexing_batch_sizes",
    				Help:      "Quantiles for indexing batch sizes (number of metrics per batch).",
    			},
    		),
    		indexingBatchDuration: prometheus.NewSummary(
    			prometheus.SummaryOpts{
    				Namespace: namespace,
    				Subsystem: subsystem,
    				Name:      "indexing_batch_duration_milliseconds",
    				Help:      "Quantiles for batch indexing duration in milliseconds.",
    			},
    		),
    		checkpointDuration: prometheus.NewGauge(prometheus.GaugeOpts{
    			Namespace: namespace,
    			Subsystem: subsystem,
    			Name:      "checkpoint_duration_milliseconds",
    			Help:      "The duration (in milliseconds) it took to checkpoint in-memory metrics and head chunks.",
    		}),
    		dirtyCounter: prometheus.NewCounter(prometheus.CounterOpts{
    			Namespace: namespace,
    			Subsystem: subsystem,
    			Name:      "inconsistencies_total",
    			Help:      "A counter incremented each time an inconsistency in the local storage is detected. If this is greater zero, restart the server as soon as possible.",
    		}),
    		dirty:          dirty,
    		pedanticChecks: pedanticChecks,
    		dirtyFileName:  dirtyPath,
    		fLock:          fLock,
    		shouldSync:     shouldSync,
		// Create buffers of length 3*chunkLenWithHeader by default because that is still reasonably small
		// and at the same time enough for many uses. The contract is to never return a buffer smaller than
		// that to the pool so that callers can rely on a minimum buffer size.
    		bufPool: sync.Pool{New: func() interface{} { return make([]byte, 0, 3*chunkLenWithHeader) }},
    	}
    
    	if p.dirty {
    		// Blow away the label indexes. We'll rebuild them later.
    		if err := index.DeleteLabelPairFingerprintIndex(basePath); err != nil {
    			return nil, err
    		}
    		if err := index.DeleteLabelNameLabelValuesIndex(basePath); err != nil {
    			return nil, err
    		}
    	}
    	labelPairToFingerprints, err := index.NewLabelPairFingerprintIndex(basePath)
    	if err != nil {
    		return nil, err
    	}
    	labelNameToLabelValues, err := index.NewLabelNameLabelValuesIndex(basePath)
    	if err != nil {
    		return nil, err
    	}
    	p.labelPairToFingerprints = labelPairToFingerprints
    	p.labelNameToLabelValues = labelNameToLabelValues
    
    	return p, nil
    }
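
// A minimal usage sketch for newPersistence (illustrative only; it assumes
// that syncStrategy is the func() bool type defined elsewhere in this
// package, and the path and parameters are made up):
//
//	p, err := newPersistence("/tmp/prom-data", false, false, func() bool { return true }, 0.1)
//	if err != nil {
//		// handle error
//	}
//	go p.run()      // start processing the indexing queue
//	defer p.close() // flush and release resources on shutdown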
    
    func (p *persistence) run() {
    	p.processIndexingQueue()
    }
    
    // Describe implements prometheus.Collector.
    func (p *persistence) Describe(ch chan<- *prometheus.Desc) {
    	ch <- p.indexingQueueLength.Desc()
    	ch <- p.indexingQueueCapacity.Desc()
    	p.indexingBatchSizes.Describe(ch)
    	p.indexingBatchDuration.Describe(ch)
    	ch <- p.checkpointDuration.Desc()
    	ch <- p.dirtyCounter.Desc()
    }
    
    // Collect implements prometheus.Collector.
    func (p *persistence) Collect(ch chan<- prometheus.Metric) {
    	p.indexingQueueLength.Set(float64(len(p.indexingQueue)))
    
    	ch <- p.indexingQueueLength
    	ch <- p.indexingQueueCapacity
    	p.indexingBatchSizes.Collect(ch)
    	p.indexingBatchDuration.Collect(ch)
    	ch <- p.checkpointDuration
    	ch <- p.dirtyCounter
    }
    
    // isDirty returns the dirty flag in a goroutine-safe way.
    func (p *persistence) isDirty() bool {
    	p.dirtyMtx.Lock()
    	defer p.dirtyMtx.Unlock()
    	return p.dirty
    }
    
    // setDirty sets the dirty flag in a goroutine-safe way. Once the dirty flag was
    // set to true with this method, it cannot be set to false again. (If we became
    // dirty during our runtime, there is no way back. If we were dirty from the
    // start, a clean-up might make us clean again.)
    func (p *persistence) setDirty(dirty bool) {
    	if dirty {
    		p.dirtyCounter.Inc()
    	}
    	p.dirtyMtx.Lock()
    	defer p.dirtyMtx.Unlock()
    	if p.becameDirty {
    		return
    	}
    	p.dirty = dirty
    	if dirty {
    		p.becameDirty = true
    		log.Error("The storage is now inconsistent. Restart Prometheus ASAP to initiate recovery.")
    	}
    }
    
// fingerprintsForLabelPair returns the fingerprints for the given label
// pair. This method is goroutine-safe, but note that metrics queued for
// indexing with indexMetric might not have made it into the index yet.
// (The same applies correspondingly to unindexMetric.)
    func (p *persistence) fingerprintsForLabelPair(lp model.LabelPair) (model.Fingerprints, error) {
    	fps, _, err := p.labelPairToFingerprints.Lookup(lp)
    	if err != nil {
    		return nil, err
    	}
    	return fps, nil
    }
    
// labelValuesForLabelName returns the label values for the given label
// name. This method is goroutine-safe, but note that metrics queued for
// indexing with indexMetric might not have made it into the index yet.
// (The same applies correspondingly to unindexMetric.)
    func (p *persistence) labelValuesForLabelName(ln model.LabelName) (model.LabelValues, error) {
    	lvs, _, err := p.labelNameToLabelValues.Lookup(ln)
    	if err != nil {
    		return nil, err
    	}
    	return lvs, nil
    }
    
    // persistChunks persists a number of consecutive chunks of a series. It is the
    // caller's responsibility to not modify the chunks concurrently and to not
    // persist or drop anything for the same fingerprint concurrently. It returns
    // the (zero-based) index of the first persisted chunk within the series
    // file. In case of an error, the returned index is -1 (to avoid the
    // misconception that the chunk was written at position 0).
    func (p *persistence) persistChunks(fp model.Fingerprint, chunks []chunk) (index int, err error) {
    	defer func() {
    		if err != nil {
    			log.Error("Error persisting chunks: ", err)
    			p.setDirty(true)
    		}
    	}()
    
    	f, err := p.openChunkFileForWriting(fp)
    	if err != nil {
    		return -1, err
    	}
    	defer p.closeChunkFile(f)
    
    	if err := writeChunks(f, chunks); err != nil {
    		return -1, err
    	}
    
    	// Determine index within the file.
    	offset, err := f.Seek(0, os.SEEK_CUR)
    	if err != nil {
    		return -1, err
    	}
    	index, err = chunkIndexForOffset(offset)
    	if err != nil {
    		return -1, err
    	}
    
    	return index - len(chunks), err
    }
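
// The index arithmetic above relies on the fixed size of a chunk entry. As a
// sketch (the helpers are defined elsewhere in this package; the semantics
// shown here are assumptions for illustration):
//
//	offsetForChunkIndex(i) == int64(i) * chunkLenWithHeader
//	chunkIndexForOffset(o) == int(o) / chunkLenWithHeader,
//	                          returning an error if o is not a multiple
//	                          of chunkLenWithHeader
//
// Hence, after appending n chunks, the first of them sits at index
// chunkIndexForOffset(newFileSize) - n, which is what persistChunks returns.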
    
    // loadChunks loads a group of chunks of a timeseries by their index. The chunk
    // with the earliest time will have index 0, the following ones will have
    // incrementally larger indexes. The indexOffset denotes the offset to be added to
    // each index in indexes. It is the caller's responsibility to not persist or
    // drop anything for the same fingerprint concurrently.
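// For example (illustrative), indexes {0, 1, 2, 7} with indexOffset 0 are
// loaded as two batches: chunks 0 through 2 in one sequential read, and
// chunk 7 in another.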
    func (p *persistence) loadChunks(fp model.Fingerprint, indexes []int, indexOffset int) ([]chunk, error) {
    	f, err := p.openChunkFileForReading(fp)
    	if err != nil {
    		return nil, err
    	}
    	defer f.Close()
    
    	chunks := make([]chunk, 0, len(indexes))
    	buf := p.bufPool.Get().([]byte)
    	defer func() {
    		// buf may change below, so wrap returning to the pool in a function.
    		// A simple 'defer p.bufPool.Put(buf)' would only return the original buf.
    		p.bufPool.Put(buf)
    	}()
    
    	for i := 0; i < len(indexes); i++ {
    		// This loads chunks in batches. A batch is a streak of
    		// consecutive chunks, read from disk in one go.
    		batchSize := 1
    		if _, err := f.Seek(offsetForChunkIndex(indexes[i]+indexOffset), os.SEEK_SET); err != nil {
    			return nil, err
    		}
    
    		for ; batchSize < chunkMaxBatchSize &&
    			i+1 < len(indexes) &&
    			indexes[i]+1 == indexes[i+1]; i, batchSize = i+1, batchSize+1 {
    		}
    		readSize := batchSize * chunkLenWithHeader
    		if cap(buf) < readSize {
    			buf = make([]byte, readSize)
    		}
    		buf = buf[:readSize]
    
    		if _, err := io.ReadFull(f, buf); err != nil {
    			return nil, err
    		}
    		for c := 0; c < batchSize; c++ {
    			chunk := newChunkForEncoding(chunkEncoding(buf[c*chunkLenWithHeader+chunkHeaderTypeOffset]))
    			chunk.unmarshalFromBuf(buf[c*chunkLenWithHeader+chunkHeaderLen:])
    			chunks = append(chunks, chunk)
    		}
    	}
    	chunkOps.WithLabelValues(load).Add(float64(len(chunks)))
    	atomic.AddInt64(&numMemChunks, int64(len(chunks)))
    	return chunks, nil
    }
    
    // loadChunkDescs loads the chunkDescs for a series from disk. offsetFromEnd is
    // the number of chunkDescs to skip from the end of the series file. It is the
    // caller's responsibility to not persist or drop anything for the same
    // fingerprint concurrently.
    func (p *persistence) loadChunkDescs(fp model.Fingerprint, offsetFromEnd int) ([]*chunkDesc, error) {
    	f, err := p.openChunkFileForReading(fp)
    	if os.IsNotExist(err) {
    		return nil, nil
    	}
    	if err != nil {
    		return nil, err
    	}
    	defer f.Close()
    
    	fi, err := f.Stat()
    	if err != nil {
    		return nil, err
    	}
    	if fi.Size()%int64(chunkLenWithHeader) != 0 {
    		p.setDirty(true)
    		return nil, fmt.Errorf(
    			"size of series file for fingerprint %v is %d, which is not a multiple of the chunk length %d",
    			fp, fi.Size(), chunkLenWithHeader,
    		)
    	}
    
    	numChunks := int(fi.Size())/chunkLenWithHeader - offsetFromEnd
    	cds := make([]*chunkDesc, numChunks)
    	chunkTimesBuf := make([]byte, 16)
    	for i := 0; i < numChunks; i++ {
    		_, err := f.Seek(offsetForChunkIndex(i)+chunkHeaderFirstTimeOffset, os.SEEK_SET)
    		if err != nil {
    			return nil, err
    		}
    
    		_, err = io.ReadAtLeast(f, chunkTimesBuf, 16)
    		if err != nil {
    			return nil, err
    		}
    		cds[i] = &chunkDesc{
    			chunkFirstTime: model.Time(binary.LittleEndian.Uint64(chunkTimesBuf)),
    			chunkLastTime:  model.Time(binary.LittleEndian.Uint64(chunkTimesBuf[8:])),
    		}
    	}
    	chunkDescOps.WithLabelValues(load).Add(float64(len(cds)))
    	numMemChunkDescs.Add(float64(len(cds)))
    	return cds, nil
    }
    
    // checkpointSeriesMapAndHeads persists the fingerprint to memory-series mapping
    // and all non persisted chunks. Do not call concurrently with
    // loadSeriesMapAndHeads. This method will only write heads format v2, but
    // loadSeriesMapAndHeads can also understand v1.
    //
// Description of the file format (for both v1 and v2):
    //
    // (1) Magic string (const headsMagicString).
    //
    // (2) Varint-encoded format version (const headsFormatVersion).
    //
    // (3) Number of series in checkpoint as big-endian uint64.
    //
    // (4) Repeated once per series:
    //
    // (4.1) A flag byte, see flag constants above. (Present but unused in v2.)
    //
    // (4.2) The fingerprint as big-endian uint64.
    //
    // (4.3) The metric as defined by codable.Metric.
    //
    // (4.4) The varint-encoded persistWatermark. (Missing in v1.)
    //
    // (4.5) The modification time of the series file as nanoseconds elapsed since
    // January 1, 1970 UTC. -1 if the modification time is unknown or no series file
    // exists yet. (Missing in v1.)
    //
// (4.6) The varint-encoded chunkDescsOffset.
//
// (4.7) The varint-encoded savedFirstTime.
//
// (4.8) The varint-encoded number of chunk descriptors.
//
// (4.9) Repeated once per chunk descriptor, oldest to most recent, either
// variant 4.9.1 (if index < persistWatermark) or variant 4.9.2 (if index >=
// persistWatermark). In v1, everything is variant 4.9.1 except for a
// non-persisted head-chunk (determined by the flags).
//
// (4.9.1.1) The varint-encoded first time.
// (4.9.1.2) The varint-encoded last time.
//
// (4.9.2.1) A byte defining the chunk type.
// (4.9.2.2) The chunk itself, marshaled with the marshal() method.
    //
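// For example (illustrative), a v2 checkpoint containing 3 series starts
// with the ASCII bytes "PrometheusHeads", the varint-encoded version 2, and
// the number 3 as a big-endian uint64.
//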
    func (p *persistence) checkpointSeriesMapAndHeads(fingerprintToSeries *seriesMap, fpLocker *fingerprintLocker) (err error) {
    	log.Info("Checkpointing in-memory metrics and chunks...")
    	begin := time.Now()
    	f, err := os.OpenFile(p.headsTempFileName(), os.O_WRONLY|os.O_TRUNC|os.O_CREATE, 0640)
    	if err != nil {
    		return err
    	}
    
    	defer func() {
    		syncErr := f.Sync()
    		closeErr := f.Close()
    		if err != nil {
    			return
    		}
    		err = syncErr
    		if err != nil {
    			return
    		}
    		err = closeErr
    		if err != nil {
    			return
    		}
    		err = os.Rename(p.headsTempFileName(), p.headsFileName())
    		duration := time.Since(begin)
    		p.checkpointDuration.Set(float64(duration) / float64(time.Millisecond))
    		log.Infof("Done checkpointing in-memory metrics and chunks in %v.", duration)
    	}()
    
    	w := bufio.NewWriterSize(f, fileBufSize)
    
    	if _, err = w.WriteString(headsMagicString); err != nil {
    		return err
    	}
    	var numberOfSeriesOffset int
    	if numberOfSeriesOffset, err = codable.EncodeVarint(w, headsFormatVersion); err != nil {
    		return err
    	}
    	numberOfSeriesOffset += len(headsMagicString)
    	numberOfSeriesInHeader := uint64(fingerprintToSeries.length())
    	// We have to write the number of series as uint64 because we might need
    	// to overwrite it later, and a varint might change byte width then.
    	if err = codable.EncodeUint64(w, numberOfSeriesInHeader); err != nil {
    		return err
    	}
    
    	iter := fingerprintToSeries.iter()
    	defer func() {
    		// Consume the iterator in any case to not leak goroutines.
    		for range iter {
    		}
    	}()
    
    	var realNumberOfSeries uint64
    	for m := range iter {
    		func() { // Wrapped in function to use defer for unlocking the fp.
    			fpLocker.Lock(m.fp)
    			defer fpLocker.Unlock(m.fp)
    
    			if len(m.series.chunkDescs) == 0 {
    				// This series was completely purged or archived in the meantime. Ignore.
    				return
    			}
    			realNumberOfSeries++
    			// seriesFlags left empty in v2.
    			if err = w.WriteByte(0); err != nil {
    				return
    			}
    			if err = codable.EncodeUint64(w, uint64(m.fp)); err != nil {
    				return
    			}
    			var buf []byte
    			buf, err = codable.Metric(m.series.metric).MarshalBinary()
    			if err != nil {
    				return
    			}
    			if _, err = w.Write(buf); err != nil {
    				return
    			}
    			if _, err = codable.EncodeVarint(w, int64(m.series.persistWatermark)); err != nil {
    				return
    			}
    			if m.series.modTime.IsZero() {
    				if _, err = codable.EncodeVarint(w, -1); err != nil {
    					return
    				}
    			} else {
    				if _, err = codable.EncodeVarint(w, m.series.modTime.UnixNano()); err != nil {
    					return
    				}
    			}
    			if _, err = codable.EncodeVarint(w, int64(m.series.chunkDescsOffset)); err != nil {
    				return
    			}
    			if _, err = codable.EncodeVarint(w, int64(m.series.savedFirstTime)); err != nil {
    				return
    			}
    			if _, err = codable.EncodeVarint(w, int64(len(m.series.chunkDescs))); err != nil {
    				return
    			}
    			for i, chunkDesc := range m.series.chunkDescs {
    				if i < m.series.persistWatermark {
    					if _, err = codable.EncodeVarint(w, int64(chunkDesc.firstTime())); err != nil {
    						return
    					}
    					if _, err = codable.EncodeVarint(w, int64(chunkDesc.lastTime())); err != nil {
    						return
    					}
    				} else {
    					// This is a non-persisted chunk. Fully marshal it.
    					if err = w.WriteByte(byte(chunkDesc.c.encoding())); err != nil {
    						return
    					}
    					if err = chunkDesc.c.marshal(w); err != nil {
    						return
    					}
    				}
    			}
    			// Series is checkpointed now, so declare it clean. In case the entire
    			// checkpoint fails later on, this is fine, as the storage's series
    			// maintenance will mark these series newly dirty again, continuously
    			// increasing the total number of dirty series as seen by the storage.
    			// This has the effect of triggering a new checkpoint attempt even
    			// earlier than if we hadn't incorrectly set "dirty" to "false" here
    			// already.
    			m.series.dirty = false
    		}()
    		if err != nil {
    			return err
    		}
    	}
    	if err = w.Flush(); err != nil {
    		return err
    	}
    	if realNumberOfSeries != numberOfSeriesInHeader {
    		// The number of series has changed in the meantime.
    		// Rewrite it in the header.
    		if _, err = f.Seek(int64(numberOfSeriesOffset), os.SEEK_SET); err != nil {
    			return err
    		}
    		if err = codable.EncodeUint64(f, realNumberOfSeries); err != nil {
    			return err
    		}
    	}
    	return err
    }
    
    // loadSeriesMapAndHeads loads the fingerprint to memory-series mapping and all
    // the chunks contained in the checkpoint (and thus not yet persisted to series
    // files). The method is capable of loading the checkpoint format v1 and v2. If
    // recoverable corruption is detected, or if the dirty flag was set from the
    // beginning, crash recovery is run, which might take a while. If an
    // unrecoverable error is encountered, it is returned. Call this method during
    // start-up while nothing else is running in storage land. This method is
    // utterly goroutine-unsafe.
    func (p *persistence) loadSeriesMapAndHeads() (sm *seriesMap, chunksToPersist int64, err error) {
    	var chunkDescsTotal int64
    	fingerprintToSeries := make(map[model.Fingerprint]*memorySeries)
    	sm = &seriesMap{m: fingerprintToSeries}
    
    	defer func() {
    		if sm != nil && p.dirty {
    			log.Warn("Persistence layer appears dirty.")
    			err = p.recoverFromCrash(fingerprintToSeries)
    			if err != nil {
    				sm = nil
    			}
    		}
    		if err == nil {
    			numMemChunkDescs.Add(float64(chunkDescsTotal))
    		}
    	}()
    
    	f, err := os.Open(p.headsFileName())
    	if os.IsNotExist(err) {
    		return sm, 0, nil
    	}
    	if err != nil {
    		log.Warn("Could not open heads file:", err)
    		p.dirty = true
    		return
    	}
    	defer f.Close()
    	r := bufio.NewReaderSize(f, fileBufSize)
    
    	buf := make([]byte, len(headsMagicString))
    	if _, err := io.ReadFull(r, buf); err != nil {
    		log.Warn("Could not read from heads file:", err)
    		p.dirty = true
    		return sm, 0, nil
    	}
    	magic := string(buf)
    	if magic != headsMagicString {
    		log.Warnf(
    			"unexpected magic string, want %q, got %q",
    			headsMagicString, magic,
    		)
    		p.dirty = true
    		return
    	}
    	version, err := binary.ReadVarint(r)
    	if (version != headsFormatVersion && version != headsFormatLegacyVersion) || err != nil {
    		log.Warnf("unknown heads format version, want %d", headsFormatVersion)
    		p.dirty = true
    		return sm, 0, nil
    	}
    	numSeries, err := codable.DecodeUint64(r)
    	if err != nil {
    		log.Warn("Could not decode number of series:", err)
    		p.dirty = true
    		return sm, 0, nil
    	}
    
    	for ; numSeries > 0; numSeries-- {
    		seriesFlags, err := r.ReadByte()
    		if err != nil {
    			log.Warn("Could not read series flags:", err)
    			p.dirty = true
    			return sm, chunksToPersist, nil
    		}
    		headChunkPersisted := seriesFlags&flagHeadChunkPersisted != 0
    		fp, err := codable.DecodeUint64(r)
    		if err != nil {
    			log.Warn("Could not decode fingerprint:", err)
    			p.dirty = true
    			return sm, chunksToPersist, nil
    		}
    		var metric codable.Metric
    		if err := metric.UnmarshalFromReader(r); err != nil {
    			log.Warn("Could not decode metric:", err)
    			p.dirty = true
    			return sm, chunksToPersist, nil
    		}
    		var persistWatermark int64
    		var modTime time.Time
    		if version != headsFormatLegacyVersion {
    			// persistWatermark only present in v2.
    			persistWatermark, err = binary.ReadVarint(r)
    			if err != nil {
    				log.Warn("Could not decode persist watermark:", err)
    				p.dirty = true
    				return sm, chunksToPersist, nil
    			}
    			modTimeNano, err := binary.ReadVarint(r)
    			if err != nil {
    				log.Warn("Could not decode modification time:", err)
    				p.dirty = true
    				return sm, chunksToPersist, nil
    			}
    			if modTimeNano != -1 {
    				modTime = time.Unix(0, modTimeNano)
    			}
    		}
    		chunkDescsOffset, err := binary.ReadVarint(r)
    		if err != nil {
    			log.Warn("Could not decode chunk descriptor offset:", err)
    			p.dirty = true
    			return sm, chunksToPersist, nil
    		}
    		savedFirstTime, err := binary.ReadVarint(r)
    		if err != nil {
    			log.Warn("Could not decode saved first time:", err)
    			p.dirty = true
    			return sm, chunksToPersist, nil
    		}
    		numChunkDescs, err := binary.ReadVarint(r)
    		if err != nil {
    			log.Warn("Could not decode number of chunk descriptors:", err)
    			p.dirty = true
    			return sm, chunksToPersist, nil
    		}
    		chunkDescs := make([]*chunkDesc, numChunkDescs)
    		if version == headsFormatLegacyVersion {
    			if headChunkPersisted {
    				persistWatermark = numChunkDescs
    			} else {
    				persistWatermark = numChunkDescs - 1
    			}
    		}
    
    		for i := int64(0); i < numChunkDescs; i++ {
    			if i < persistWatermark {
    				firstTime, err := binary.ReadVarint(r)
    				if err != nil {
    					log.Warn("Could not decode first time:", err)
    					p.dirty = true
    					return sm, chunksToPersist, nil
    				}
    				lastTime, err := binary.ReadVarint(r)
    				if err != nil {
    					log.Warn("Could not decode last time:", err)
    					p.dirty = true
    					return sm, chunksToPersist, nil
    				}
    				chunkDescs[i] = &chunkDesc{
    					chunkFirstTime: model.Time(firstTime),
    					chunkLastTime:  model.Time(lastTime),
    				}
    				chunkDescsTotal++
    			} else {
    				// Non-persisted chunk.
    				encoding, err := r.ReadByte()
    				if err != nil {
    					log.Warn("Could not decode chunk type:", err)
    					p.dirty = true
    					return sm, chunksToPersist, nil
    				}
    				chunk := newChunkForEncoding(chunkEncoding(encoding))
    				if err := chunk.unmarshal(r); err != nil {
    					log.Warn("Could not decode chunk:", err)
    					p.dirty = true
    					return sm, chunksToPersist, nil
    				}
    				chunkDescs[i] = newChunkDesc(chunk)
    				chunksToPersist++
    			}
    		}
    
    		fingerprintToSeries[model.Fingerprint(fp)] = &memorySeries{
    			metric:           model.Metric(metric),
    			chunkDescs:       chunkDescs,
    			persistWatermark: int(persistWatermark),
    			modTime:          modTime,
    			chunkDescsOffset: int(chunkDescsOffset),
    			savedFirstTime:   model.Time(savedFirstTime),
    			lastTime:         chunkDescs[len(chunkDescs)-1].lastTime(),
    			headChunkClosed:  persistWatermark >= numChunkDescs,
    		}
    	}
    	return sm, chunksToPersist, nil
    }
    
    // dropAndPersistChunks deletes all chunks from a series file whose last sample
    // time is before beforeTime, and then appends the provided chunks, leaving out
    // those whose last sample time is before beforeTime. It returns the timestamp
    // of the first sample in the oldest chunk _not_ dropped, the offset within the
    // series file of the first chunk persisted (out of the provided chunks), the
    // number of deleted chunks, and true if all chunks of the series have been
    // deleted (in which case the returned timestamp will be 0 and must be ignored).
    // It is the caller's responsibility to make sure nothing is persisted or loaded
    // for the same fingerprint concurrently.
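//
// For example (illustrative): if the series file holds three chunks with
// last sample times 10, 20, and 30, and beforeTime is 25, then the first two
// chunks are candidates for dropping (subject to minShrinkRatio),
// firstTimeNotDropped becomes the first sample time of the third chunk, and
// any provided chunks whose last sample time is before 25 are skipped as
// well.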
    func (p *persistence) dropAndPersistChunks(
    	fp model.Fingerprint, beforeTime model.Time, chunks []chunk,
    ) (
    	firstTimeNotDropped model.Time,
    	offset int,
    	numDropped int,
    	allDropped bool,
    	err error,
    ) {
    	// Style note: With the many return values, it was decided to use naked
    	// returns in this method. They make the method more readable, but
    	// please handle with care!
    	defer func() {
    		if err != nil {
    			log.Error("Error dropping and/or persisting chunks: ", err)
    			p.setDirty(true)
    		}
    	}()
    
    	if len(chunks) > 0 {
    		// We have chunks to persist. First check if those are already
    		// too old. If that's the case, the chunks in the series file
    		// are all too old, too.
    		i := 0
    		for ; i < len(chunks) && chunks[i].newIterator().lastTimestamp().Before(beforeTime); i++ {
    		}
    		if i < len(chunks) {
    			firstTimeNotDropped = chunks[i].firstTime()
    		}
    		if i > 0 || firstTimeNotDropped.Before(beforeTime) {
    			// Series file has to go.
    			if numDropped, err = p.deleteSeriesFile(fp); err != nil {
    				return
    			}
    			numDropped += i
    			if i == len(chunks) {
    				allDropped = true
    				return
    			}
    			// Now simply persist what has to be persisted to a new file.
    			_, err = p.persistChunks(fp, chunks[i:])
    			return
    		}
    	}
    
    	// If we are here, we have to check the series file itself.
    	f, err := p.openChunkFileForReading(fp)
    	if os.IsNotExist(err) {
		// No series file. We only need to create a new file with the
		// chunks to persist, if there are any.
    		if len(chunks) == 0 {
    			allDropped = true
    			err = nil // Do not report not-exist err.
    			return
    		}
    		offset, err = p.persistChunks(fp, chunks)
    		return
    	}
    	if err != nil {
    		return
    	}
    	defer f.Close()
    
    	headerBuf := make([]byte, chunkHeaderLen)
    	var firstTimeInFile model.Time
    	// Find the first chunk in the file that should be kept.
    	for ; ; numDropped++ {
    		_, err = f.Seek(offsetForChunkIndex(numDropped), os.SEEK_SET)
    		if err != nil {
    			return
    		}
    		_, err = io.ReadFull(f, headerBuf)
    		if err == io.EOF {
    			// We ran into the end of the file without finding any chunks that should
    			// be kept. Remove the whole file.
    			if numDropped, err = p.deleteSeriesFile(fp); err != nil {
    				return
    			}
    			if len(chunks) == 0 {
    				allDropped = true
    				return
    			}
    			offset, err = p.persistChunks(fp, chunks)
    			return
    		}
    		if err != nil {
    			return
    		}
    		if numDropped == 0 {
    			firstTimeInFile = model.Time(
    				binary.LittleEndian.Uint64(headerBuf[chunkHeaderFirstTimeOffset:]),
    			)
    		}
    		lastTime := model.Time(
    			binary.LittleEndian.Uint64(headerBuf[chunkHeaderLastTimeOffset:]),
    		)
    		if !lastTime.Before(beforeTime) {
    			break
    		}
    	}
    
    	// We've found the first chunk that should be kept.
	// First check if the shrink ratio is good enough to perform the
	// actual drop, or leave it for next time if it is not worth the effort.
    	fi, err := f.Stat()
    	if err != nil {
    		return
    	}
    	totalChunks := int(fi.Size())/chunkLenWithHeader + len(chunks)
    	if numDropped == 0 || float64(numDropped)/float64(totalChunks) < p.minShrinkRatio {
    		// Nothing to drop. Just adjust the return values and append the chunks (if any).
    		numDropped = 0
    		firstTimeNotDropped = firstTimeInFile
    		if len(chunks) > 0 {
    			offset, err = p.persistChunks(fp, chunks)
    		}
    		return
    	}
    	// If we are here, we have to drop some chunks for real. So we need to
    	// record firstTimeNotDropped from the last read header, seek backwards
    	// to the beginning of its header, and start copying everything from
    	// there into a new file. Then append the chunks to the new file.
    	firstTimeNotDropped = model.Time(
    		binary.LittleEndian.Uint64(headerBuf[chunkHeaderFirstTimeOffset:]),
    	)
    	chunkOps.WithLabelValues(drop).Add(float64(numDropped))
    	_, err = f.Seek(-chunkHeaderLen, os.SEEK_CUR)
    	if err != nil {
    		return
    	}
    
    	temp, err := os.OpenFile(p.tempFileNameForFingerprint(fp), os.O_WRONLY|os.O_CREATE, 0640)
    	if err != nil {
    		return
    	}
    	defer func() {
    		p.closeChunkFile(temp)
    		if err == nil {
    			err = os.Rename(p.tempFileNameForFingerprint(fp), p.fileNameForFingerprint(fp))
    		}
    	}()
    
    	written, err := io.Copy(temp, f)
    	if err != nil {
    		return
    	}
    	offset = int(written / chunkLenWithHeader)
    
    	if len(chunks) > 0 {
    		if err = writeChunks(temp, chunks); err != nil {
    			return
    		}
    	}
    	return
    }
    
    // deleteSeriesFile deletes a series file belonging to the provided
    // fingerprint. It returns the number of chunks that were contained in the
    // deleted file.
    func (p *persistence) deleteSeriesFile(fp model.Fingerprint) (int, error) {
    	fname := p.fileNameForFingerprint(fp)
    	fi, err := os.Stat(fname)
    	if os.IsNotExist(err) {
    		// Great. The file is already gone.
    		return 0, nil
    	}
    	if err != nil {
    		return -1, err
    	}
    	numChunks := int(fi.Size() / chunkLenWithHeader)
    	if err := os.Remove(fname); err != nil {
    		return -1, err
    	}
    	chunkOps.WithLabelValues(drop).Add(float64(numChunks))
    	return numChunks, nil
    }
    
    // seriesFileModTime returns the modification time of the series file belonging
    // to the provided fingerprint. In case of an error, the zero value of time.Time
    // is returned.
    func (p *persistence) seriesFileModTime(fp model.Fingerprint) time.Time {
    	var modTime time.Time
    	if fi, err := os.Stat(p.fileNameForFingerprint(fp)); err == nil {
    		return fi.ModTime()
    	}
    	return modTime
    }
    
    // indexMetric queues the given metric for addition to the indexes needed by
    // fingerprintsForLabelPair, labelValuesForLabelName, and
    // fingerprintsModifiedBefore.  If the queue is full, this method blocks until
    // the metric can be queued.  This method is goroutine-safe.
    func (p *persistence) indexMetric(fp model.Fingerprint, m model.Metric) {
    	p.indexingQueue <- indexingOp{fp, m, add}
    }
    
    // unindexMetric queues references to the given metric for removal from the
    // indexes used for fingerprintsForLabelPair, labelValuesForLabelName, and
    // fingerprintsModifiedBefore. The index of fingerprints to archived metrics is
    // not affected by this removal. (In fact, never call this method for an
    // archived metric. To purge an archived metric, call purgeArchivedMetric.)
    // If the queue is full, this method blocks until the metric can be queued. This
    // method is goroutine-safe.
    func (p *persistence) unindexMetric(fp model.Fingerprint, m model.Metric) {
    	p.indexingQueue <- indexingOp{fp, m, remove}
    }
    
    // waitForIndexing waits until all items in the indexing queue are processed. If
    // queue processing is currently on hold (to gather more ops for batching), this
    // method will trigger an immediate start of processing. This method is
    // goroutine-safe.
    func (p *persistence) waitForIndexing() {
    	wait := make(chan int)
    	for {
    		p.indexingFlush <- wait
    		if <-wait == 0 {
    			break
    		}
    	}
    }
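
// A typical pattern (illustrative sketch) for callers that need the indexes
// to be up to date, e.g. in tests:
//
//	p.indexMetric(fp, metric)
//	p.waitForIndexing()
//	fps, err := p.fingerprintsForLabelPair(model.LabelPair{
//		Name: "job", Value: "prometheus",
//	})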
    
    // archiveMetric persists the mapping of the given fingerprint to the given
    // metric, together with the first and last timestamp of the series belonging to
    // the metric. The caller must have locked the fingerprint.
    func (p *persistence) archiveMetric(
    	fp model.Fingerprint, m model.Metric, first, last model.Time,
    ) error {
    	if err := p.archivedFingerprintToMetrics.Put(codable.Fingerprint(fp), codable.Metric(m)); err != nil {
    		p.setDirty(true)
    		return err
    	}
    	if err := p.archivedFingerprintToTimeRange.Put(codable.Fingerprint(fp), codable.TimeRange{First: first, Last: last}); err != nil {
    		p.setDirty(true)
    		return err
    	}
    	return nil
    }
    
// hasArchivedMetric returns whether the archived metric for the given
// fingerprint exists and, if yes, what the first and last timestamps of the
// corresponding series are. This method is goroutine-safe.
    func (p *persistence) hasArchivedMetric(fp model.Fingerprint) (
    	hasMetric bool, firstTime, lastTime model.Time, err error,
    ) {
    	firstTime, lastTime, hasMetric, err = p.archivedFingerprintToTimeRange.Lookup(fp)
    	return
    }
    
    // updateArchivedTimeRange updates an archived time range. The caller must make
    // sure that the fingerprint is currently archived (the time range will
    // otherwise be added without the corresponding metric in the archive).
    func (p *persistence) updateArchivedTimeRange(
    	fp model.Fingerprint, first, last model.Time,
    ) error {
    	return p.archivedFingerprintToTimeRange.Put(codable.Fingerprint(fp), codable.TimeRange{First: first, Last: last})
    }
    
    // fingerprintsModifiedBefore returns the fingerprints of archived timeseries
    // that have live samples before the provided timestamp. This method is
    // goroutine-safe.
    func (p *persistence) fingerprintsModifiedBefore(beforeTime model.Time) ([]model.Fingerprint, error) {
    	var fp codable.Fingerprint
    	var tr codable.TimeRange
    	fps := []model.Fingerprint{}
    	err := p.archivedFingerprintToTimeRange.ForEach(func(kv index.KeyValueAccessor) error {
    		if err := kv.Value(&tr); err != nil {
    			return err
    		}
    		if tr.First.Before(beforeTime) {
    			if err := kv.Key(&fp); err != nil {
    				return err
    			}
    			fps = append(fps, model.Fingerprint(fp))
    		}
    		return nil
    	})
    	return fps, err
    }
    
    // archivedMetric retrieves the archived metric with the given fingerprint. This
    // method is goroutine-safe.
    func (p *persistence) archivedMetric(fp model.Fingerprint) (model.Metric, error) {
    	metric, _, err := p.archivedFingerprintToMetrics.Lookup(fp)
    	return metric, err
    }
    
    // purgeArchivedMetric deletes an archived fingerprint and its corresponding
    // metric entirely. It also queues the metric for un-indexing (no need to call
    // unindexMetric for the deleted metric.) It does not touch the series file,
    // though. The caller must have locked the fingerprint.
    func (p *persistence) purgeArchivedMetric(fp model.Fingerprint) (err error) {
    	defer func() {
    		if err != nil {
    			p.setDirty(true)
    		}
    	}()
    
    	metric, err := p.archivedMetric(fp)
    	if err != nil || metric == nil {
    		return err
    	}
    	deleted, err := p.archivedFingerprintToMetrics.Delete(codable.Fingerprint(fp))
    	if err != nil {
    		return err
    	}
    	if !deleted {
    		log.Errorf("Tried to delete non-archived fingerprint %s from archivedFingerprintToMetrics index. This should never happen.", fp)
    	}
    	deleted, err = p.archivedFingerprintToTimeRange.Delete(codable.Fingerprint(fp))
    	if err != nil {
    		return err
    	}
    	if !deleted {
    		log.Errorf("Tried to delete non-archived fingerprint %s from archivedFingerprintToTimeRange index. This should never happen.", fp)
    	}
    	p.unindexMetric(fp, metric)
    	return nil
    }
    
// unarchiveMetric deletes an archived fingerprint and its metric, but (in
// contrast to purgeArchivedMetric) does not un-index the metric. The method
// returns true if a metric was actually deleted. The caller must have locked
// the fingerprint.
    func (p *persistence) unarchiveMetric(fp model.Fingerprint) (deletedAnything bool, err error) {
    	defer func() {
    		if err != nil {
    			p.setDirty(true)
    		}
    	}()
    
    	deleted, err := p.archivedFingerprintToMetrics.Delete(codable.Fingerprint(fp))
    	if err != nil || !deleted {
    		return false, err
    	}
    	deleted, err = p.archivedFingerprintToTimeRange.Delete(codable.Fingerprint(fp))
    	if err != nil {
    		return false, err
    	}
    	if !deleted {
    		log.Errorf("Tried to delete non-archived fingerprint %s from archivedFingerprintToTimeRange index. This should never happen.", fp)
    	}
    	return true, nil
    }
    
    // close flushes the indexing queue and other buffered data and releases any
    // held resources. It also removes the dirty marker file if successful and if
    // the persistence is currently not marked as dirty.
    func (p *persistence) close() error {
    	close(p.indexingQueue)
    	<-p.indexingStopped
    
    	var lastError, dirtyFileRemoveError error
    	if err := p.archivedFingerprintToMetrics.Close(); err != nil {
    		lastError = err
    		log.Error("Error closing archivedFingerprintToMetric index DB: ", err)
    	}
    	if err := p.archivedFingerprintToTimeRange.Close(); err != nil {
    		lastError = err
    		log.Error("Error closing archivedFingerprintToTimeRange index DB: ", err)
    	}
    	if err := p.labelPairToFingerprints.Close(); err != nil {
    		lastError = err
    		log.Error("Error closing labelPairToFingerprints index DB: ", err)
    	}
    	if err := p.labelNameToLabelValues.Close(); err != nil {
    		lastError = err
    		log.Error("Error closing labelNameToLabelValues index DB: ", err)
    	}
    	if lastError == nil && !p.isDirty() {
    		dirtyFileRemoveError = os.Remove(p.dirtyFileName)
    	}
    	if err := p.fLock.Release(); err != nil {
    		lastError = err
    		log.Error("Error releasing file lock: ", err)
    	}
    	if dirtyFileRemoveError != nil {
    		// On Windows, removing the dirty file before unlocking is not
    		// possible.  So remove it here if it failed above.
    		lastError = os.Remove(p.dirtyFileName)
    	}
    	return lastError
    }
    
    func (p *persistence) dirNameForFingerprint(fp model.Fingerprint) string {
    	fpStr := fp.String()
    	return path.Join(p.basePath, fpStr[0:seriesDirNameLen])
    }
    
    func (p *persistence) fileNameForFingerprint(fp model.Fingerprint) string {
    	fpStr := fp.String()
    	return path.Join(p.basePath, fpStr[0:seriesDirNameLen], fpStr[seriesDirNameLen:]+seriesFileSuffix)
    }
    
    func (p *persistence) tempFileNameForFingerprint(fp model.Fingerprint) string {
    	fpStr := fp.String()
    	return path.Join(p.basePath, fpStr[0:seriesDirNameLen], fpStr[seriesDirNameLen:]+seriesTempFileSuffix)
    }
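
// For example (illustrative), a fingerprint whose string form is
// "0123456789abcdef" lives in <basePath>/01/23456789abcdef.db, with
// <basePath>/01/23456789abcdef.db.tmp used as the temporary file while
// rewriting it.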
    
    func (p *persistence) openChunkFileForWriting(fp model.Fingerprint) (*os.File, error) {
    	if err := os.MkdirAll(p.dirNameForFingerprint(fp), 0700); err != nil {
    		return nil, err
    	}
    	return os.OpenFile(p.fileNameForFingerprint(fp), os.O_WRONLY|os.O_APPEND|os.O_CREATE, 0640)
    	// NOTE: Although the file was opened for append,
    	//     f.Seek(0, os.SEEK_CUR)
    	// would now return '0, nil', so we cannot check for a consistent file length right now.
    	// However, the chunkIndexForOffset function is doing that check, so a wrong file length
    	// would still be detected.
    }
    
// closeChunkFile first syncs the provided file if the sync strategy mandates
// it. Then it closes the file. Errors are logged.
    func (p *persistence) closeChunkFile(f *os.File) {
    	if p.shouldSync() {
    		if err := f.Sync(); err != nil {
    			log.Error("Error syncing file:", err)
    		}
    	}
    	if err := f.Close(); err != nil {
    		log.Error("Error closing chunk file:", err)
    	}
    }
    
    func (p *persistence) openChunkFileForReading(fp model.Fingerprint) (*os.File, error) {
    	return os.Open(p.fileNameForFingerprint(fp))
    }
    
    func (p *persistence) headsFileName() string {
    	return path.Join(p.basePath, headsFileName)
    }
    
    func (p *persistence) headsTempFileName() string {
    	return path.Join(p.basePath, headsTempFileName)
    }
    
    func (p *persistence) mappingsFileName() string {
    	return path.Join(p.basePath, mappingsFileName)
    }
    
    func (p *persistence) mappingsTempFileName() string {
    	return path.Join(p.basePath, mappingsTempFileName)
    }
    
    func (p *persistence) processIndexingQueue() {
    	batchSize := 0
    	nameToValues := index.LabelNameLabelValuesMapping{}
    	pairToFPs := index.LabelPairFingerprintsMapping{}
    	batchTimeout := time.NewTimer(indexingBatchTimeout)
    	defer batchTimeout.Stop()
    
    	commitBatch := func() {
    		p.indexingBatchSizes.Observe(float64(batchSize))
    		defer func(begin time.Time) {
    			p.indexingBatchDuration.Observe(
    				float64(time.Since(begin)) / float64(time.Millisecond),
    			)
    		}(time.Now())
    
    		if err := p.labelPairToFingerprints.IndexBatch(pairToFPs); err != nil {
    			log.Error("Error indexing label pair to fingerprints batch: ", err)
    		}
    		if err := p.labelNameToLabelValues.IndexBatch(nameToValues); err != nil {
    			log.Error("Error indexing label name to label values batch: ", err)
    		}
    		batchSize = 0
    		nameToValues = index.LabelNameLabelValuesMapping{}
    		pairToFPs = index.LabelPairFingerprintsMapping{}
    		batchTimeout.Reset(indexingBatchTimeout)
    	}
    
    	var flush chan chan int
    loop:
    	for {
    		// Only process flush requests if the queue is currently empty.
    		if len(p.indexingQueue) == 0 {
    			flush = p.indexingFlush
    		} else {
    			flush = nil
    		}
    		select {
    		case <-batchTimeout.C:
    			// Only commit if we have something to commit _and_
    			// nothing is waiting in the queue to be picked up. That
    			// prevents a death spiral if the LookupSet calls below
    			// are slow for some reason.
    			if batchSize > 0 && len(p.indexingQueue) == 0 {
    				commitBatch()
    			} else {
    				batchTimeout.Reset(indexingBatchTimeout)
    			}
    		case r := <-flush:
    			if batchSize > 0 {
    				commitBatch()
    			}
    			r <- len(p.indexingQueue)
    		case op, ok := <-p.indexingQueue:
    			if !ok {
    				if batchSize > 0 {
    					commitBatch()
    				}
    				break loop
    			}
    
    			batchSize++
    			for ln, lv := range op.metric {
    				lp := model.LabelPair{Name: ln, Value: lv}
    				baseFPs, ok := pairToFPs[lp]
    				if !ok {
    					var err error
    					baseFPs, _, err = p.labelPairToFingerprints.LookupSet(lp)
    					if err != nil {
    						log.Errorf("Error looking up label pair %v: %s", lp, err)
    						continue
    					}
    					pairToFPs[lp] = baseFPs
    				}
    				baseValues, ok := nameToValues[ln]
    				if !ok {
    					var err error
    					baseValues, _, err = p.labelNameToLabelValues.LookupSet(ln)
    					if err != nil {
    						log.Errorf("Error looking up label name %v: %s", ln, err)
    						continue
    					}
    					nameToValues[ln] = baseValues
    				}
    				switch op.opType {
    				case add:
    					baseFPs[op.fingerprint] = struct{}{}
    					baseValues[lv] = struct{}{}
    				case remove:
    					delete(baseFPs, op.fingerprint)
    					if len(baseFPs) == 0 {
    						delete(baseValues, lv)
    					}
    				default:
    					panic("unknown op type")
    				}
    			}
    
    			if batchSize >= indexingMaxBatchSize {
    				commitBatch()
    			}
    		}
    	}
    	close(p.indexingStopped)
    }
    
    // checkpointFPMappings persists the fingerprint mappings. This method is not
    // goroutine-safe.
    //
    // Description of the file format, v1:
    //
    // (1) Magic string (const mappingsMagicString).
    //
    // (2) Uvarint-encoded format version (const mappingsFormatVersion).
    //
    // (3) Uvarint-encoded number of mappings in fpMappings.
    //
    // (4) Repeated once per mapping:
    //
    // (4.1) The raw fingerprint as big-endian uint64.
    //
    // (4.2) The uvarint-encoded number of sub-mappings for the raw fingerprint.
    //
    // (4.3) Repeated once per sub-mapping:
    //
    // (4.3.1) The uvarint-encoded length of the unique metric string.
    // (4.3.2) The unique metric string.
    // (4.3.3) The mapped fingerprint as big-endian uint64.
    func (p *persistence) checkpointFPMappings(fpm fpMappings) (err error) {
    	log.Info("Checkpointing fingerprint mappings...")
    	begin := time.Now()
    	f, err := os.OpenFile(p.mappingsTempFileName(), os.O_WRONLY|os.O_TRUNC|os.O_CREATE, 0640)
    	if err != nil {
    		return
    	}
    
    	defer func() {
    		syncErr := f.Sync()
    		closeErr := f.Close()
    		if err != nil {
    			return
    		}
    		err = syncErr
    		if err != nil {
    			return
    		}
    		err = closeErr
    		if err != nil {
    			return
    		}
    		err = os.Rename(p.mappingsTempFileName(), p.mappingsFileName())
    		duration := time.Since(begin)
    		log.Infof("Done checkpointing fingerprint mappings in %v.", duration)
    	}()
    
    	w := bufio.NewWriterSize(f, fileBufSize)
    
    	if _, err = w.WriteString(mappingsMagicString); err != nil {
    		return
    	}
    	if _, err = codable.EncodeUvarint(w, mappingsFormatVersion); err != nil {
    		return
    	}
    	if _, err = codable.EncodeUvarint(w, uint64(len(fpm))); err != nil {
    		return
    	}
    
    	for fp, mappings := range fpm {
    		if err = codable.EncodeUint64(w, uint64(fp)); err != nil {
    			return
    		}
    		if _, err = codable.EncodeUvarint(w, uint64(len(mappings))); err != nil {
    			return
    		}
    		for ms, mappedFP := range mappings {
    			if _, err = codable.EncodeUvarint(w, uint64(len(ms))); err != nil {
    				return
    			}
    			if _, err = w.WriteString(ms); err != nil {
    				return
    			}
    			if err = codable.EncodeUint64(w, uint64(mappedFP)); err != nil {
    				return
    			}
    		}
    	}
    	err = w.Flush()
    	return
    }
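
// Note on the error handling above: checkpointFPMappings follows the usual
// write-to-temporary-file-then-rename pattern. The mappings are written and
// flushed to the temporary file, and the deferred function syncs and closes
// it before renaming it to the final mappings file name. Since the rename
// only happens if writing, syncing, and closing all succeeded, a crash in
// the middle of a checkpoint leaves the previous mappings file intact.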
    
    // loadFPMappings loads the fingerprint mappings. It also returns the highest
    // mapped fingerprint and any error encountered. If p.mappingsFileName is not
    // found, the method returns (fpMappings{}, 0, nil). Do not call concurrently
    // with checkpointFPMappings.
    func (p *persistence) loadFPMappings() (fpMappings, model.Fingerprint, error) {
    	fpm := fpMappings{}
    	var highestMappedFP model.Fingerprint
    
    	f, err := os.Open(p.mappingsFileName())
    	if os.IsNotExist(err) {
    		return fpm, 0, nil
    	}
    	if err != nil {
    		return nil, 0, err
    	}
    	defer f.Close()
    	r := bufio.NewReaderSize(f, fileBufSize)
    
    	buf := make([]byte, len(mappingsMagicString))
    	if _, err := io.ReadFull(r, buf); err != nil {
    		return nil, 0, err
    	}
    	magic := string(buf)
    	if magic != mappingsMagicString {
    		return nil, 0, fmt.Errorf(
    			"unexpected magic string, want %q, got %q",
    			mappingsMagicString, magic,
    		)
    	}
	version, err := binary.ReadUvarint(r)
	if err != nil {
		return nil, 0, err
	}
	if version != mappingsFormatVersion {
		return nil, 0, fmt.Errorf("unknown fingerprint mappings format version, want %d, got %d", mappingsFormatVersion, version)
	}
    	numRawFPs, err := binary.ReadUvarint(r)
    	if err != nil {
    		return nil, 0, err
    	}
    	for ; numRawFPs > 0; numRawFPs-- {
    		rawFP, err := codable.DecodeUint64(r)
    		if err != nil {
    			return nil, 0, err
    		}
    		numMappings, err := binary.ReadUvarint(r)
    		if err != nil {
    			return nil, 0, err
    		}
    		mappings := make(map[string]model.Fingerprint, numMappings)
    		for ; numMappings > 0; numMappings-- {
    			lenMS, err := binary.ReadUvarint(r)
    			if err != nil {
    				return nil, 0, err
    			}
    			buf := make([]byte, lenMS)
    			if _, err := io.ReadFull(r, buf); err != nil {
    				return nil, 0, err
    			}
    			fp, err := codable.DecodeUint64(r)
    			if err != nil {
    				return nil, 0, err
    			}
    			mappedFP := model.Fingerprint(fp)
    			if mappedFP > highestMappedFP {
    				highestMappedFP = mappedFP
    			}
    			mappings[string(buf)] = mappedFP
    		}
    		fpm[model.Fingerprint(rawFP)] = mappings
    	}
    	return fpm, highestMappedFP, nil
    }
    
    func offsetForChunkIndex(i int) int64 {
    	return int64(i * chunkLenWithHeader)
    }
    
    func chunkIndexForOffset(offset int64) (int, error) {
    	if int(offset)%chunkLenWithHeader != 0 {
    		return -1, fmt.Errorf(
    			"offset %d is not a multiple of on-disk chunk length %d",
    			offset, chunkLenWithHeader,
    		)
    	}
    	return int(offset) / chunkLenWithHeader, nil
    }
    
    func writeChunkHeader(w io.Writer, c chunk) error {
    	header := make([]byte, chunkHeaderLen)
    	header[chunkHeaderTypeOffset] = byte(c.encoding())
    	binary.LittleEndian.PutUint64(
    		header[chunkHeaderFirstTimeOffset:],
    		uint64(c.firstTime()),
    	)
    	binary.LittleEndian.PutUint64(
    		header[chunkHeaderLastTimeOffset:],
    		uint64(c.newIterator().lastTimestamp()),
    	)
    	_, err := w.Write(header)
    	return err
    }
    
    func writeChunks(w io.Writer, chunks []chunk) error {
    	b := bufio.NewWriterSize(w, len(chunks)*chunkLenWithHeader)
    	for _, chunk := range chunks {
    		if err := writeChunkHeader(b, chunk); err != nil {
    			return err
    		}
    
    		if err := chunk.marshal(b); err != nil {
    			return err
    		}
    	}
    	return b.Flush()
    }
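
// For orientation, the resulting on-disk layout per chunk record (see
// writeChunkHeader and offsetForChunkIndex above) is: a header of
// chunkHeaderLen bytes holding the encoding type byte and the first and last
// sample timestamps as little-endian uint64 values, followed by the
// marshaled chunk data, for a fixed total of chunkLenWithHeader bytes per
// chunk.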
prometheus-0.16.2+ds/storage/local/persistence_test.go
// Copyright 2014 The Prometheus Authors
    // Licensed under the Apache License, Version 2.0 (the "License");
    // you may not use this file except in compliance with the License.
    // You may obtain a copy of the License at
    //
    // http://www.apache.org/licenses/LICENSE-2.0
    //
    // Unless required by applicable law or agreed to in writing, software
    // distributed under the License is distributed on an "AS IS" BASIS,
    // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    // See the License for the specific language governing permissions and
    // limitations under the License.
    
    package local
    
    import (
    	"reflect"
    	"sync"
    	"testing"
    	"time"
    
    	"github.com/prometheus/common/model"
    
    	"github.com/prometheus/prometheus/storage/local/codable"
    	"github.com/prometheus/prometheus/storage/local/index"
    	"github.com/prometheus/prometheus/util/testutil"
    )
    
    var (
    	m1 = model.Metric{"label": "value1"}
    	m2 = model.Metric{"label": "value2"}
    	m3 = model.Metric{"label": "value3"}
    	m4 = model.Metric{"label": "value4"}
    	m5 = model.Metric{"label": "value5"}
    )
    
    func newTestPersistence(t *testing.T, encoding chunkEncoding) (*persistence, testutil.Closer) {
    	DefaultChunkEncoding = encoding
    	dir := testutil.NewTemporaryDirectory("test_persistence", t)
    	p, err := newPersistence(dir.Path(), false, false, func() bool { return false }, 0.1)
    	if err != nil {
    		dir.Close()
    		t.Fatal(err)
    	}
    	go p.run()
    	return p, testutil.NewCallbackCloser(func() {
    		p.close()
    		dir.Close()
    	})
    }
    
    func buildTestChunks(encoding chunkEncoding) map[model.Fingerprint][]chunk {
    	fps := model.Fingerprints{
    		m1.FastFingerprint(),
    		m2.FastFingerprint(),
    		m3.FastFingerprint(),
    	}
    	fpToChunks := map[model.Fingerprint][]chunk{}
    
    	for _, fp := range fps {
    		fpToChunks[fp] = make([]chunk, 0, 10)
    		for i := 0; i < 10; i++ {
    			fpToChunks[fp] = append(fpToChunks[fp], newChunkForEncoding(encoding).add(&model.SamplePair{
    				Timestamp: model.Time(i),
    				Value:     model.SampleValue(fp),
    			})[0])
    		}
    	}
    	return fpToChunks
    }
    
    func chunksEqual(c1, c2 chunk) bool {
    	values2 := c2.newIterator().values()
    	for v1 := range c1.newIterator().values() {
    		v2 := <-values2
    		if !v1.Equal(v2) {
    			return false
    		}
    	}
    	return true
    }
    
    func testPersistLoadDropChunks(t *testing.T, encoding chunkEncoding) {
    	p, closer := newTestPersistence(t, encoding)
    	defer closer.Close()
    
    	fpToChunks := buildTestChunks(encoding)
    
    	for fp, chunks := range fpToChunks {
    		firstTimeNotDropped, offset, numDropped, allDropped, err :=
    			p.dropAndPersistChunks(fp, model.Earliest, chunks)
    		if err != nil {
    			t.Fatal(err)
    		}
		if got, want := firstTimeNotDropped, model.Time(0); got != want {
			t.Errorf("Want firstTimeNotDropped %v, got %v.", want, got)
		}
		if got, want := offset, 0; got != want {
			t.Errorf("Want offset %v, got %v.", want, got)
		}
		if got, want := numDropped, 0; got != want {
			t.Errorf("Want numDropped %v, got %v.", want, got)
		}
    		if allDropped {
    			t.Error("All dropped.")
    		}
    	}
    
    	for fp, expectedChunks := range fpToChunks {
    		indexes := make([]int, 0, len(expectedChunks))
    		for i := range expectedChunks {
    			indexes = append(indexes, i)
    		}
    		actualChunks, err := p.loadChunks(fp, indexes, 0)
    		if err != nil {
    			t.Fatal(err)
    		}
    		for _, i := range indexes {
    			if !chunksEqual(expectedChunks[i], actualChunks[i]) {
    				t.Errorf("%d. Chunks not equal.", i)
    			}
    		}
		// Load all chunk descs.
		actualChunkDescs, err := p.loadChunkDescs(fp, 0)
		if err != nil {
			t.Fatal(err)
		}
		if len(actualChunkDescs) != 10 {
			t.Errorf("Got %d chunkDescs, want %d.", len(actualChunkDescs), 10)
		}
    		for i, cd := range actualChunkDescs {
    			if cd.firstTime() != model.Time(i) || cd.lastTime() != model.Time(i) {
    				t.Errorf(
    					"Want ts=%v, got firstTime=%v, lastTime=%v.",
    					i, cd.firstTime(), cd.lastTime(),
    				)
    			}
    
    		}
		// Load chunk descs partially.
		actualChunkDescs, err = p.loadChunkDescs(fp, 5)
		if err != nil {
			t.Fatal(err)
		}
		if len(actualChunkDescs) != 5 {
			t.Errorf("Got %d chunkDescs, want %d.", len(actualChunkDescs), 5)
		}
    		for i, cd := range actualChunkDescs {
    			if cd.firstTime() != model.Time(i) || cd.lastTime() != model.Time(i) {
    				t.Errorf(
    					"Want ts=%v, got firstTime=%v, lastTime=%v.",
    					i, cd.firstTime(), cd.lastTime(),
    				)
    			}
    
    		}
    	}
    	// Drop half of the chunks.
    	for fp, expectedChunks := range fpToChunks {
    		firstTime, offset, numDropped, allDropped, err := p.dropAndPersistChunks(fp, 5, nil)
    		if err != nil {
    			t.Fatal(err)
    		}
    		if offset != 5 {
    			t.Errorf("want offset 5, got %d", offset)
    		}
    		if firstTime != 5 {
    			t.Errorf("want first time 5, got %d", firstTime)
    		}
    		if numDropped != 5 {
    			t.Errorf("want 5 dropped chunks, got %v", numDropped)
    		}
    		if allDropped {
    			t.Error("all chunks dropped")
    		}
    		indexes := make([]int, 5)
    		for i := range indexes {
    			indexes[i] = i
    		}
    		actualChunks, err := p.loadChunks(fp, indexes, 0)
    		if err != nil {
    			t.Fatal(err)
    		}
    		for _, i := range indexes {
    			if !chunksEqual(expectedChunks[i+5], actualChunks[i]) {
    				t.Errorf("%d. Chunks not equal.", i)
    			}
    		}
    	}
    	// Drop all the chunks.
    	for fp := range fpToChunks {
		firstTime, offset, numDropped, allDropped, err := p.dropAndPersistChunks(fp, 100, nil)
		if err != nil {
			t.Fatal(err)
		}
		if firstTime != 0 {
			t.Errorf("want first time 0, got %d", firstTime)
		}
    		if offset != 0 {
    			t.Errorf("want offset 0, got %d", offset)
    		}
    		if numDropped != 5 {
    			t.Errorf("want 5 dropped chunks, got %v", numDropped)
    		}
    		if !allDropped {
    			t.Error("not all chunks dropped")
    		}
    	}
    	// Re-add first two of the chunks.
    	for fp, chunks := range fpToChunks {
    		firstTimeNotDropped, offset, numDropped, allDropped, err :=
    			p.dropAndPersistChunks(fp, model.Earliest, chunks[:2])
    		if err != nil {
    			t.Fatal(err)
    		}
		if got, want := firstTimeNotDropped, model.Time(0); got != want {
			t.Errorf("Want firstTimeNotDropped %v, got %v.", want, got)
		}
		if got, want := offset, 0; got != want {
			t.Errorf("Want offset %v, got %v.", want, got)
		}
		if got, want := numDropped, 0; got != want {
			t.Errorf("Want numDropped %v, got %v.", want, got)
		}
    		if allDropped {
    			t.Error("All dropped.")
    		}
    	}
    	// Drop the first of the chunks while adding two more.
    	for fp, chunks := range fpToChunks {
    		firstTime, offset, numDropped, allDropped, err := p.dropAndPersistChunks(fp, 1, chunks[2:4])
    		if err != nil {
    			t.Fatal(err)
    		}
    		if offset != 1 {
    			t.Errorf("want offset 1, got %d", offset)
    		}
    		if firstTime != 1 {
    			t.Errorf("want first time 1, got %d", firstTime)
    		}
    		if numDropped != 1 {
    			t.Errorf("want 1 dropped chunk, got %v", numDropped)
    		}
    		if allDropped {
    			t.Error("all chunks dropped")
    		}
    		wantChunks := chunks[1:4]
    		indexes := make([]int, len(wantChunks))
    		for i := range indexes {
    			indexes[i] = i
    		}
    		gotChunks, err := p.loadChunks(fp, indexes, 0)
    		if err != nil {
    			t.Fatal(err)
    		}
    		for i, wantChunk := range wantChunks {
    			if !chunksEqual(wantChunk, gotChunks[i]) {
    				t.Errorf("%d. Chunks not equal.", i)
    			}
    		}
    	}
    	// Drop all the chunks while adding two more.
    	for fp, chunks := range fpToChunks {
    		firstTime, offset, numDropped, allDropped, err := p.dropAndPersistChunks(fp, 4, chunks[4:6])
    		if err != nil {
    			t.Fatal(err)
    		}
    		if offset != 0 {
    			t.Errorf("want offset 0, got %d", offset)
    		}
    		if firstTime != 4 {
    			t.Errorf("want first time 4, got %d", firstTime)
    		}
    		if numDropped != 3 {
    			t.Errorf("want 3 dropped chunks, got %v", numDropped)
    		}
    		if allDropped {
    			t.Error("all chunks dropped")
    		}
    		wantChunks := chunks[4:6]
    		indexes := make([]int, len(wantChunks))
    		for i := range indexes {
    			indexes[i] = i
    		}
    		gotChunks, err := p.loadChunks(fp, indexes, 0)
    		if err != nil {
    			t.Fatal(err)
    		}
    		for i, wantChunk := range wantChunks {
    			if !chunksEqual(wantChunk, gotChunks[i]) {
    				t.Errorf("%d. Chunks not equal.", i)
    			}
    		}
    	}
    	// While adding two more, drop all but one of the added ones.
    	for fp, chunks := range fpToChunks {
    		firstTime, offset, numDropped, allDropped, err := p.dropAndPersistChunks(fp, 7, chunks[6:8])
    		if err != nil {
    			t.Fatal(err)
    		}
    		if offset != 0 {
    			t.Errorf("want offset 0, got %d", offset)
    		}
    		if firstTime != 7 {
    			t.Errorf("want first time 7, got %d", firstTime)
    		}
    		if numDropped != 3 {
    			t.Errorf("want 3 dropped chunks, got %v", numDropped)
    		}
    		if allDropped {
    			t.Error("all chunks dropped")
    		}
    		wantChunks := chunks[7:8]
    		indexes := make([]int, len(wantChunks))
    		for i := range indexes {
    			indexes[i] = i
    		}
    		gotChunks, err := p.loadChunks(fp, indexes, 0)
    		if err != nil {
    			t.Fatal(err)
    		}
    		for i, wantChunk := range wantChunks {
    			if !chunksEqual(wantChunk, gotChunks[i]) {
    				t.Errorf("%d. Chunks not equal.", i)
    			}
    		}
    	}
    	// While adding two more, drop all chunks including the added ones.
    	for fp, chunks := range fpToChunks {
    		firstTime, offset, numDropped, allDropped, err := p.dropAndPersistChunks(fp, 10, chunks[8:])
    		if err != nil {
    			t.Fatal(err)
    		}
    		if offset != 0 {
    			t.Errorf("want offset 0, got %d", offset)
    		}
    		if firstTime != 0 {
    			t.Errorf("want first time 0, got %d", firstTime)
    		}
    		if numDropped != 3 {
    			t.Errorf("want 3 dropped chunks, got %v", numDropped)
    		}
    		if !allDropped {
    			t.Error("not all chunks dropped")
    		}
    	}
    	// Now set minShrinkRatio to 0.25 and play with it.
    	p.minShrinkRatio = 0.25
    	// Re-add 8 chunks.
    	for fp, chunks := range fpToChunks {
    		firstTimeNotDropped, offset, numDropped, allDropped, err :=
    			p.dropAndPersistChunks(fp, model.Earliest, chunks[:8])
    		if err != nil {
    			t.Fatal(err)
    		}
		if got, want := firstTimeNotDropped, model.Time(0); got != want {
			t.Errorf("Want firstTimeNotDropped %v, got %v.", want, got)
		}
		if got, want := offset, 0; got != want {
			t.Errorf("Want offset %v, got %v.", want, got)
		}
		if got, want := numDropped, 0; got != want {
			t.Errorf("Want numDropped %v, got %v.", want, got)
		}
    		if allDropped {
    			t.Error("All dropped.")
    		}
    	}
	// Dropping only the first chunk should not happen, but persistence should still work.
    	for fp, chunks := range fpToChunks {
    		firstTime, offset, numDropped, allDropped, err := p.dropAndPersistChunks(fp, 1, chunks[8:9])
    		if err != nil {
    			t.Fatal(err)
    		}
    		if offset != 8 {
    			t.Errorf("want offset 8, got %d", offset)
    		}
    		if firstTime != 0 {
    			t.Errorf("want first time 0, got %d", firstTime)
    		}
    		if numDropped != 0 {
    			t.Errorf("want 0 dropped chunk, got %v", numDropped)
    		}
    		if allDropped {
    			t.Error("all chunks dropped")
    		}
    	}
	// Dropping only the first two chunks should not happen, either.
    	for fp := range fpToChunks {
    		firstTime, offset, numDropped, allDropped, err := p.dropAndPersistChunks(fp, 2, nil)
    		if err != nil {
    			t.Fatal(err)
    		}
    		if offset != 0 {
    			t.Errorf("want offset 0, got %d", offset)
    		}
    		if firstTime != 0 {
    			t.Errorf("want first time 0, got %d", firstTime)
    		}
    		if numDropped != 0 {
    			t.Errorf("want 0 dropped chunk, got %v", numDropped)
    		}
    		if allDropped {
    			t.Error("all chunks dropped")
    		}
    	}
	// Dropping the first three chunks should finally work.
    	for fp, chunks := range fpToChunks {
    		firstTime, offset, numDropped, allDropped, err := p.dropAndPersistChunks(fp, 3, chunks[9:])
    		if err != nil {
    			t.Fatal(err)
    		}
    		if offset != 6 {
    			t.Errorf("want offset 6, got %d", offset)
    		}
    		if firstTime != 3 {
    			t.Errorf("want first time 3, got %d", firstTime)
    		}
    		if numDropped != 3 {
    			t.Errorf("want 3 dropped chunk, got %v", numDropped)
    		}
    		if allDropped {
    			t.Error("all chunks dropped")
    		}
    	}
    }
    
    func TestPersistLoadDropChunksType0(t *testing.T) {
    	testPersistLoadDropChunks(t, 0)
    }
    
    func TestPersistLoadDropChunksType1(t *testing.T) {
    	testPersistLoadDropChunks(t, 1)
    }
    
    func testCheckpointAndLoadSeriesMapAndHeads(t *testing.T, encoding chunkEncoding) {
    	p, closer := newTestPersistence(t, encoding)
    	defer closer.Close()
    
    	fpLocker := newFingerprintLocker(10)
    	sm := newSeriesMap()
    	s1 := newMemorySeries(m1, nil, time.Time{})
    	s2 := newMemorySeries(m2, nil, time.Time{})
    	s3 := newMemorySeries(m3, nil, time.Time{})
    	s4 := newMemorySeries(m4, nil, time.Time{})
    	s5 := newMemorySeries(m5, nil, time.Time{})
    	s1.add(&model.SamplePair{Timestamp: 1, Value: 3.14})
    	s3.add(&model.SamplePair{Timestamp: 2, Value: 2.7})
    	s3.headChunkClosed = true
    	s3.persistWatermark = 1
    	for i := 0; i < 10000; i++ {
    		s4.add(&model.SamplePair{
    			Timestamp: model.Time(i),
    			Value:     model.SampleValue(i) / 2,
    		})
    		s5.add(&model.SamplePair{
    			Timestamp: model.Time(i),
    			Value:     model.SampleValue(i * i),
    		})
    	}
    	s5.persistWatermark = 3
    	chunkCountS4 := len(s4.chunkDescs)
    	chunkCountS5 := len(s5.chunkDescs)
    	sm.put(m1.FastFingerprint(), s1)
    	sm.put(m2.FastFingerprint(), s2)
    	sm.put(m3.FastFingerprint(), s3)
    	sm.put(m4.FastFingerprint(), s4)
    	sm.put(m5.FastFingerprint(), s5)
    
    	if err := p.checkpointSeriesMapAndHeads(sm, fpLocker); err != nil {
    		t.Fatal(err)
    	}
    
    	loadedSM, _, err := p.loadSeriesMapAndHeads()
    	if err != nil {
    		t.Fatal(err)
    	}
    	if loadedSM.length() != 4 {
    		t.Errorf("want 4 series in map, got %d", loadedSM.length())
    	}
    	if loadedS1, ok := loadedSM.get(m1.FastFingerprint()); ok {
    		if !reflect.DeepEqual(loadedS1.metric, m1) {
    			t.Errorf("want metric %v, got %v", m1, loadedS1.metric)
    		}
    		if !reflect.DeepEqual(loadedS1.head().c, s1.head().c) {
    			t.Error("head chunks differ")
    		}
    		if loadedS1.chunkDescsOffset != 0 {
    			t.Errorf("want chunkDescsOffset 0, got %d", loadedS1.chunkDescsOffset)
    		}
    		if loadedS1.headChunkClosed {
    			t.Error("headChunkClosed is true")
    		}
    	} else {
    		t.Errorf("couldn't find %v in loaded map", m1)
    	}
    	if loadedS3, ok := loadedSM.get(m3.FastFingerprint()); ok {
    		if !reflect.DeepEqual(loadedS3.metric, m3) {
    			t.Errorf("want metric %v, got %v", m3, loadedS3.metric)
    		}
    		if loadedS3.head().c != nil {
    			t.Error("head chunk not evicted")
    		}
    		if loadedS3.chunkDescsOffset != 0 {
    			t.Errorf("want chunkDescsOffset 0, got %d", loadedS3.chunkDescsOffset)
    		}
    		if !loadedS3.headChunkClosed {
    			t.Error("headChunkClosed is false")
    		}
    	} else {
    		t.Errorf("couldn't find %v in loaded map", m3)
    	}
    	if loadedS4, ok := loadedSM.get(m4.FastFingerprint()); ok {
    		if !reflect.DeepEqual(loadedS4.metric, m4) {
    			t.Errorf("want metric %v, got %v", m4, loadedS4.metric)
    		}
    		if got, want := len(loadedS4.chunkDescs), chunkCountS4; got != want {
    			t.Errorf("got %d chunkDescs, want %d", got, want)
    		}
    		if got, want := loadedS4.persistWatermark, 0; got != want {
    			t.Errorf("got persistWatermark %d, want %d", got, want)
    		}
    		if loadedS4.chunkDescs[2].isEvicted() {
    			t.Error("3rd chunk evicted")
    		}
    		if loadedS4.chunkDescs[3].isEvicted() {
    			t.Error("4th chunk evicted")
    		}
    		if loadedS4.chunkDescsOffset != 0 {
    			t.Errorf("want chunkDescsOffset 0, got %d", loadedS4.chunkDescsOffset)
    		}
    		if loadedS4.headChunkClosed {
    			t.Error("headChunkClosed is true")
    		}
    	} else {
    		t.Errorf("couldn't find %v in loaded map", m4)
    	}
    	if loadedS5, ok := loadedSM.get(m5.FastFingerprint()); ok {
    		if !reflect.DeepEqual(loadedS5.metric, m5) {
    			t.Errorf("want metric %v, got %v", m5, loadedS5.metric)
    		}
    		if got, want := len(loadedS5.chunkDescs), chunkCountS5; got != want {
    			t.Errorf("got %d chunkDescs, want %d", got, want)
    		}
    		if got, want := loadedS5.persistWatermark, 3; got != want {
    			t.Errorf("got persistWatermark %d, want %d", got, want)
    		}
    		if !loadedS5.chunkDescs[2].isEvicted() {
    			t.Error("3rd chunk not evicted")
    		}
    		if loadedS5.chunkDescs[3].isEvicted() {
    			t.Error("4th chunk evicted")
    		}
    		if loadedS5.chunkDescsOffset != 0 {
    			t.Errorf("want chunkDescsOffset 0, got %d", loadedS5.chunkDescsOffset)
    		}
    		if loadedS5.headChunkClosed {
    			t.Error("headChunkClosed is true")
    		}
    	} else {
    		t.Errorf("couldn't find %v in loaded map", m5)
    	}
    }
    
    func TestCheckpointAndLoadSeriesMapAndHeadsChunkType0(t *testing.T) {
    	testCheckpointAndLoadSeriesMapAndHeads(t, 0)
    }
    
    func TestCheckpointAndLoadSeriesMapAndHeadsChunkType1(t *testing.T) {
    	testCheckpointAndLoadSeriesMapAndHeads(t, 1)
    }
    
    func TestCheckpointAndLoadFPMappings(t *testing.T) {
    	p, closer := newTestPersistence(t, 1)
    	defer closer.Close()
    
    	in := fpMappings{
    		1: map[string]model.Fingerprint{
    			"foo": 1,
    			"bar": 2,
    		},
    		3: map[string]model.Fingerprint{
    			"baz": 4,
    		},
    	}
    
    	if err := p.checkpointFPMappings(in); err != nil {
    		t.Fatal(err)
    	}
    
    	out, fp, err := p.loadFPMappings()
    	if err != nil {
    		t.Fatal(err)
    	}
    	if got, want := fp, model.Fingerprint(4); got != want {
    		t.Errorf("got highest FP %v, want %v", got, want)
    	}
    	if !reflect.DeepEqual(in, out) {
    		t.Errorf("got collision map %v, want %v", out, in)
    	}
    }
    
    func testFingerprintsModifiedBefore(t *testing.T, encoding chunkEncoding) {
    	p, closer := newTestPersistence(t, encoding)
    	defer closer.Close()
    
    	m1 := model.Metric{"n1": "v1"}
    	m2 := model.Metric{"n2": "v2"}
    	m3 := model.Metric{"n1": "v2"}
    	p.archiveMetric(1, m1, 2, 4)
    	p.archiveMetric(2, m2, 1, 6)
    	p.archiveMetric(3, m3, 5, 5)
    
    	expectedFPs := map[model.Time][]model.Fingerprint{
    		0: {},
    		1: {},
    		2: {2},
    		3: {1, 2},
    		4: {1, 2},
    		5: {1, 2},
    		6: {1, 2, 3},
    	}
    
    	for ts, want := range expectedFPs {
    		got, err := p.fingerprintsModifiedBefore(ts)
    		if err != nil {
    			t.Fatal(err)
    		}
    		if !reflect.DeepEqual(want, got) {
    			t.Errorf("timestamp: %v, want FPs %v, got %v", ts, want, got)
    		}
    	}
    
    	unarchived, err := p.unarchiveMetric(1)
    	if err != nil {
    		t.Fatal(err)
    	}
    	if !unarchived {
    		t.Error("expected actual unarchival")
    	}
    	unarchived, err = p.unarchiveMetric(1)
    	if err != nil {
    		t.Fatal(err)
    	}
    	if unarchived {
    		t.Error("expected no unarchival")
    	}
    
    	expectedFPs = map[model.Time][]model.Fingerprint{
    		0: {},
    		1: {},
    		2: {2},
    		3: {2},
    		4: {2},
    		5: {2},
    		6: {2, 3},
    	}
    
    	for ts, want := range expectedFPs {
    		got, err := p.fingerprintsModifiedBefore(ts)
    		if err != nil {
    			t.Fatal(err)
    		}
    		if !reflect.DeepEqual(want, got) {
    			t.Errorf("timestamp: %v, want FPs %v, got %v", ts, want, got)
    		}
    	}
    }
    
    func TestFingerprintsModifiedBeforeChunkType0(t *testing.T) {
    	testFingerprintsModifiedBefore(t, 0)
    }
    
    func TestFingerprintsModifiedBeforeChunkType1(t *testing.T) {
    	testFingerprintsModifiedBefore(t, 1)
    }
    
    func testDropArchivedMetric(t *testing.T, encoding chunkEncoding) {
    	p, closer := newTestPersistence(t, encoding)
    	defer closer.Close()
    
    	m1 := model.Metric{"n1": "v1"}
    	m2 := model.Metric{"n2": "v2"}
    	p.archiveMetric(1, m1, 2, 4)
    	p.archiveMetric(2, m2, 1, 6)
    	p.indexMetric(1, m1)
    	p.indexMetric(2, m2)
    	p.waitForIndexing()
    
    	outFPs, err := p.fingerprintsForLabelPair(model.LabelPair{Name: "n1", Value: "v1"})
    	if err != nil {
    		t.Fatal(err)
    	}
    	want := model.Fingerprints{1}
    	if !reflect.DeepEqual(outFPs, want) {
    		t.Errorf("want %#v, got %#v", want, outFPs)
    	}
    	outFPs, err = p.fingerprintsForLabelPair(model.LabelPair{Name: "n2", Value: "v2"})
    	if err != nil {
    		t.Fatal(err)
    	}
    	want = model.Fingerprints{2}
    	if !reflect.DeepEqual(outFPs, want) {
    		t.Errorf("want %#v, got %#v", want, outFPs)
    	}
    	if archived, _, _, err := p.hasArchivedMetric(1); err != nil || !archived {
    		t.Error("want FP 1 archived")
    	}
    	if archived, _, _, err := p.hasArchivedMetric(2); err != nil || !archived {
    		t.Error("want FP 2 archived")
    	}
    
	if err := p.purgeArchivedMetric(1); err != nil {
		t.Fatal(err)
	}
	// Purging something that has not been archived is not an error.
	if err := p.purgeArchivedMetric(3); err != nil {
		t.Fatal(err)
	}
    	p.waitForIndexing()
    
    	outFPs, err = p.fingerprintsForLabelPair(model.LabelPair{Name: "n1", Value: "v1"})
    	if err != nil {
    		t.Fatal(err)
    	}
    	want = nil
    	if !reflect.DeepEqual(outFPs, want) {
    		t.Errorf("want %#v, got %#v", want, outFPs)
    	}
    	outFPs, err = p.fingerprintsForLabelPair(model.LabelPair{Name: "n2", Value: "v2"})
    	if err != nil {
    		t.Fatal(err)
    	}
    	want = model.Fingerprints{2}
    	if !reflect.DeepEqual(outFPs, want) {
    		t.Errorf("want %#v, got %#v", want, outFPs)
    	}
    	if archived, _, _, err := p.hasArchivedMetric(1); err != nil || archived {
    		t.Error("want FP 1 not archived")
    	}
    	if archived, _, _, err := p.hasArchivedMetric(2); err != nil || !archived {
    		t.Error("want FP 2 archived")
    	}
    }
    
    func TestDropArchivedMetricChunkType0(t *testing.T) {
    	testDropArchivedMetric(t, 0)
    }
    
    func TestDropArchivedMetricChunkType1(t *testing.T) {
    	testDropArchivedMetric(t, 1)
    }
    
    type incrementalBatch struct {
    	fpToMetric      index.FingerprintMetricMapping
    	expectedLnToLvs index.LabelNameLabelValuesMapping
    	expectedLpToFps index.LabelPairFingerprintsMapping
    }
    
    func testIndexing(t *testing.T, encoding chunkEncoding) {
    	batches := []incrementalBatch{
    		{
    			fpToMetric: index.FingerprintMetricMapping{
    				0: {
    					model.MetricNameLabel: "metric_0",
    					"label_1":             "value_1",
    				},
    				1: {
    					model.MetricNameLabel: "metric_0",
    					"label_2":             "value_2",
    					"label_3":             "value_3",
    				},
    				2: {
    					model.MetricNameLabel: "metric_1",
    					"label_1":             "value_2",
    				},
    			},
    			expectedLnToLvs: index.LabelNameLabelValuesMapping{
    				model.MetricNameLabel: codable.LabelValueSet{
    					"metric_0": struct{}{},
    					"metric_1": struct{}{},
    				},
    				"label_1": codable.LabelValueSet{
    					"value_1": struct{}{},
    					"value_2": struct{}{},
    				},
    				"label_2": codable.LabelValueSet{
    					"value_2": struct{}{},
    				},
    				"label_3": codable.LabelValueSet{
    					"value_3": struct{}{},
    				},
    			},
    			expectedLpToFps: index.LabelPairFingerprintsMapping{
    				model.LabelPair{
    					Name:  model.MetricNameLabel,
    					Value: "metric_0",
    				}: codable.FingerprintSet{0: struct{}{}, 1: struct{}{}},
    				model.LabelPair{
    					Name:  model.MetricNameLabel,
    					Value: "metric_1",
    				}: codable.FingerprintSet{2: struct{}{}},
    				model.LabelPair{
    					Name:  "label_1",
    					Value: "value_1",
    				}: codable.FingerprintSet{0: struct{}{}},
    				model.LabelPair{
    					Name:  "label_1",
    					Value: "value_2",
    				}: codable.FingerprintSet{2: struct{}{}},
    				model.LabelPair{
    					Name:  "label_2",
    					Value: "value_2",
    				}: codable.FingerprintSet{1: struct{}{}},
    				model.LabelPair{
    					Name:  "label_3",
    					Value: "value_3",
    				}: codable.FingerprintSet{1: struct{}{}},
    			},
    		}, {
    			fpToMetric: index.FingerprintMetricMapping{
    				3: {
    					model.MetricNameLabel: "metric_0",
    					"label_1":             "value_3",
    				},
    				4: {
    					model.MetricNameLabel: "metric_2",
    					"label_2":             "value_2",
    					"label_3":             "value_1",
    				},
    				5: {
    					model.MetricNameLabel: "metric_1",
    					"label_1":             "value_3",
    				},
    			},
    			expectedLnToLvs: index.LabelNameLabelValuesMapping{
    				model.MetricNameLabel: codable.LabelValueSet{
    					"metric_0": struct{}{},
    					"metric_1": struct{}{},
    					"metric_2": struct{}{},
    				},
    				"label_1": codable.LabelValueSet{
    					"value_1": struct{}{},
    					"value_2": struct{}{},
    					"value_3": struct{}{},
    				},
    				"label_2": codable.LabelValueSet{
    					"value_2": struct{}{},
    				},
    				"label_3": codable.LabelValueSet{
    					"value_1": struct{}{},
    					"value_3": struct{}{},
    				},
    			},
    			expectedLpToFps: index.LabelPairFingerprintsMapping{
    				model.LabelPair{
    					Name:  model.MetricNameLabel,
    					Value: "metric_0",
    				}: codable.FingerprintSet{0: struct{}{}, 1: struct{}{}, 3: struct{}{}},
    				model.LabelPair{
    					Name:  model.MetricNameLabel,
    					Value: "metric_1",
    				}: codable.FingerprintSet{2: struct{}{}, 5: struct{}{}},
    				model.LabelPair{
    					Name:  model.MetricNameLabel,
    					Value: "metric_2",
    				}: codable.FingerprintSet{4: struct{}{}},
    				model.LabelPair{
    					Name:  "label_1",
    					Value: "value_1",
    				}: codable.FingerprintSet{0: struct{}{}},
    				model.LabelPair{
    					Name:  "label_1",
    					Value: "value_2",
    				}: codable.FingerprintSet{2: struct{}{}},
    				model.LabelPair{
    					Name:  "label_1",
    					Value: "value_3",
    				}: codable.FingerprintSet{3: struct{}{}, 5: struct{}{}},
    				model.LabelPair{
    					Name:  "label_2",
    					Value: "value_2",
    				}: codable.FingerprintSet{1: struct{}{}, 4: struct{}{}},
    				model.LabelPair{
    					Name:  "label_3",
    					Value: "value_1",
    				}: codable.FingerprintSet{4: struct{}{}},
    				model.LabelPair{
    					Name:  "label_3",
    					Value: "value_3",
    				}: codable.FingerprintSet{1: struct{}{}},
    			},
    		},
    	}
    
    	p, closer := newTestPersistence(t, encoding)
    	defer closer.Close()
    
    	indexedFpsToMetrics := index.FingerprintMetricMapping{}
    	for i, b := range batches {
    		for fp, m := range b.fpToMetric {
    			p.indexMetric(fp, m)
    			if err := p.archiveMetric(fp, m, 1, 2); err != nil {
    				t.Fatal(err)
    			}
    			indexedFpsToMetrics[fp] = m
    		}
    		verifyIndexedState(i, t, b, indexedFpsToMetrics, p)
    	}
    
    	for i := len(batches) - 1; i >= 0; i-- {
    		b := batches[i]
    		verifyIndexedState(i, t, batches[i], indexedFpsToMetrics, p)
    		for fp, m := range b.fpToMetric {
    			p.unindexMetric(fp, m)
    			unarchived, err := p.unarchiveMetric(fp)
    			if err != nil {
    				t.Fatal(err)
    			}
    			if !unarchived {
    				t.Errorf("%d. metric not unarchived", i)
    			}
    			delete(indexedFpsToMetrics, fp)
    		}
    	}
    }
    
    func TestIndexingChunkType0(t *testing.T) {
    	testIndexing(t, 0)
    }
    
    func TestIndexingChunkType1(t *testing.T) {
    	testIndexing(t, 1)
    }
    
    func verifyIndexedState(i int, t *testing.T, b incrementalBatch, indexedFpsToMetrics index.FingerprintMetricMapping, p *persistence) {
    	p.waitForIndexing()
    	for fp, m := range indexedFpsToMetrics {
    		// Compare archived metrics with input metrics.
    		mOut, err := p.archivedMetric(fp)
    		if err != nil {
    			t.Fatal(err)
    		}
    		if !mOut.Equal(m) {
    			t.Errorf("%d. %v: Got: %s; want %s", i, fp, mOut, m)
    		}
    
    		// Check that archived metrics are in membership index.
    		has, first, last, err := p.hasArchivedMetric(fp)
    		if err != nil {
    			t.Fatal(err)
    		}
    		if !has {
    			t.Errorf("%d. fingerprint %v not found", i, fp)
    		}
    		if first != 1 || last != 2 {
    			t.Errorf(
    				"%d. %v: Got first: %d, last %d; want first: %d, last %d",
    				i, fp, first, last, 1, 2,
    			)
    		}
    	}
    
    	// Compare label name -> label values mappings.
    	for ln, lvs := range b.expectedLnToLvs {
    		outLvs, err := p.labelValuesForLabelName(ln)
    		if err != nil {
    			t.Fatal(err)
    		}
    
    		outSet := codable.LabelValueSet{}
    		for _, lv := range outLvs {
    			outSet[lv] = struct{}{}
    		}
    
    		if !reflect.DeepEqual(lvs, outSet) {
    			t.Errorf("%d. label values don't match. Got: %v; want %v", i, outSet, lvs)
    		}
    	}
    
    	// Compare label pair -> fingerprints mappings.
    	for lp, fps := range b.expectedLpToFps {
    		outFPs, err := p.fingerprintsForLabelPair(lp)
    		if err != nil {
    			t.Fatal(err)
    		}
    
    		outSet := codable.FingerprintSet{}
    		for _, fp := range outFPs {
    			outSet[fp] = struct{}{}
    		}
    
    		if !reflect.DeepEqual(fps, outSet) {
    			t.Errorf("%d. %v: fingerprints don't match. Got: %v; want %v", i, lp, outSet, fps)
    		}
    	}
    }
    
    var fpStrings = []string{
    	"b004b821ca50ba26",
    	"b037c21e884e4fc5",
    	"b037de1e884e5469",
    }
    
    func BenchmarkLoadChunksSequentially(b *testing.B) {
    	p := persistence{
    		basePath: "fixtures",
    		bufPool:  sync.Pool{New: func() interface{} { return make([]byte, 0, 3*chunkLenWithHeader) }},
    	}
    	sequentialIndexes := make([]int, 47)
    	for i := range sequentialIndexes {
    		sequentialIndexes[i] = i
    	}
    
    	var fp model.Fingerprint
    	for i := 0; i < b.N; i++ {
    		for _, s := range fpStrings {
    			fp, _ = model.FingerprintFromString(s)
    			cds, err := p.loadChunks(fp, sequentialIndexes, 0)
    			if err != nil {
    				b.Error(err)
    			}
    			if len(cds) == 0 {
    				b.Error("could not read any chunks")
    			}
    		}
    	}
    }
    
    func BenchmarkLoadChunksRandomly(b *testing.B) {
    	p := persistence{
    		basePath: "fixtures",
    		bufPool:  sync.Pool{New: func() interface{} { return make([]byte, 0, 3*chunkLenWithHeader) }},
    	}
    	randomIndexes := []int{1, 5, 6, 8, 11, 14, 18, 23, 29, 33, 42, 46}
    
    	var fp model.Fingerprint
    	for i := 0; i < b.N; i++ {
    		for _, s := range fpStrings {
    			fp, _ = model.FingerprintFromString(s)
    			cds, err := p.loadChunks(fp, randomIndexes, 0)
    			if err != nil {
    				b.Error(err)
    			}
    			if len(cds) == 0 {
    				b.Error("could not read any chunks")
    			}
    		}
    	}
    }
    
    func BenchmarkLoadChunkDescs(b *testing.B) {
    	p := persistence{
    		basePath: "fixtures",
    	}
    
    	var fp model.Fingerprint
    	for i := 0; i < b.N; i++ {
    		for _, s := range fpStrings {
    			fp, _ = model.FingerprintFromString(s)
    			cds, err := p.loadChunkDescs(fp, 0)
    			if err != nil {
    				b.Error(err)
    			}
    			if len(cds) == 0 {
    				b.Error("could not read any chunk descs")
    			}
    		}
    	}
    }
prometheus-0.16.2+ds/storage/local/preload.go
// Copyright 2014 The Prometheus Authors
    // Licensed under the Apache License, Version 2.0 (the "License");
    // you may not use this file except in compliance with the License.
    // You may obtain a copy of the License at
    //
    // http://www.apache.org/licenses/LICENSE-2.0
    //
    // Unless required by applicable law or agreed to in writing, software
    // distributed under the License is distributed on an "AS IS" BASIS,
    // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    // See the License for the specific language governing permissions and
    // limitations under the License.
    
    package local
    
    import (
    	"time"
    
    	"github.com/prometheus/common/model"
    )
    
    // memorySeriesPreloader is a Preloader for the memorySeriesStorage.
    type memorySeriesPreloader struct {
    	storage          *memorySeriesStorage
    	pinnedChunkDescs []*chunkDesc
    }
    
    // PreloadRange implements Preloader.
    func (p *memorySeriesPreloader) PreloadRange(
    	fp model.Fingerprint,
    	from model.Time, through model.Time,
    	stalenessDelta time.Duration,
    ) error {
    	cds, err := p.storage.preloadChunksForRange(fp, from, through, stalenessDelta)
    	if err != nil {
    		return err
    	}
    	p.pinnedChunkDescs = append(p.pinnedChunkDescs, cds...)
    	return nil
    }
    
    /*
    // MetricAtTime implements Preloader.
    func (p *memorySeriesPreloader) MetricAtTime(fp model.Fingerprint, t model.Time) error {
    	cds, err := p.storage.preloadChunks(fp, &timeSelector{
    		from:    t,
    		through: t,
    	})
    	if err != nil {
    		return err
    	}
    	p.pinnedChunkDescs = append(p.pinnedChunkDescs, cds...)
    	return nil
    }
    
    // MetricAtInterval implements Preloader.
    func (p *memorySeriesPreloader) MetricAtInterval(fp model.Fingerprint, from, through model.Time, interval time.Duration) error {
    	cds, err := p.storage.preloadChunks(fp, &timeSelector{
    		from:     from,
    		through:  through,
    		interval: interval,
    	})
    	if err != nil {
    		return err
    	}
    	p.pinnedChunkDescs = append(p.pinnedChunkDescs, cds...)
	return nil
    }
    
    // MetricRange implements Preloader.
    func (p *memorySeriesPreloader) MetricRange(fp model.Fingerprint, t model.Time, rangeDuration time.Duration) error {
    	cds, err := p.storage.preloadChunks(fp, &timeSelector{
    		from:          t,
    		through:       t,
		rangeDuration: rangeDuration,
    	})
    	if err != nil {
    		return err
    	}
    	p.pinnedChunkDescs = append(p.pinnedChunkDescs, cds...)
	return nil
    }
    
    // MetricRangeAtInterval implements Preloader.
    func (p *memorySeriesPreloader) MetricRangeAtInterval(fp model.Fingerprint, from, through model.Time, interval, rangeDuration time.Duration) error {
    	cds, err := p.storage.preloadChunks(fp, &timeSelector{
    		from:          from,
    		through:       through,
    		interval:      interval,
    		rangeDuration: rangeDuration,
    	})
    	if err != nil {
    		return err
    	}
    	p.pinnedChunkDescs = append(p.pinnedChunkDescs, cds...)
	return nil
    }
    */
    
    // Close implements Preloader.
    func (p *memorySeriesPreloader) Close() {
    	for _, cd := range p.pinnedChunkDescs {
    		cd.unpin(p.storage.evictRequests)
    	}
	chunkOps.WithLabelValues(unpin).Add(float64(len(p.pinnedChunkDescs)))
}
prometheus-0.16.2+ds/storage/local/series.go
// Copyright 2014 The Prometheus Authors
    // Licensed under the Apache License, Version 2.0 (the "License");
    // you may not use this file except in compliance with the License.
    // You may obtain a copy of the License at
    //
    // http://www.apache.org/licenses/LICENSE-2.0
    //
    // Unless required by applicable law or agreed to in writing, software
    // distributed under the License is distributed on an "AS IS" BASIS,
    // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    // See the License for the specific language governing permissions and
    // limitations under the License.
    
    package local
    
    import (
    	"sort"
    	"sync"
    	"time"
    
    	"github.com/prometheus/common/model"
    
    	"github.com/prometheus/prometheus/storage/metric"
    )
    
    const (
	// chunkDescEvictionFactor is a factor used for chunkDesc eviction (as
	// opposed to evictions of chunks, see method evictOlderThan). A chunk
	// takes about 20x more memory than a chunkDesc. With a
	// chunkDescEvictionFactor of 10, not more than a third of the total
	// memory taken by a series will be used for chunkDescs.
    	chunkDescEvictionFactor = 10
    
    	headChunkTimeout = time.Hour // Close head chunk if not touched for that long.
    )
    
    // fingerprintSeriesPair pairs a fingerprint with a memorySeries pointer.
    type fingerprintSeriesPair struct {
    	fp     model.Fingerprint
    	series *memorySeries
    }
    
// seriesMap maps fingerprints to memory series. All its methods are
// goroutine-safe. A seriesMap is effectively a goroutine-safe version of
// map[model.Fingerprint]*memorySeries.
    type seriesMap struct {
    	mtx sync.RWMutex
    	m   map[model.Fingerprint]*memorySeries
    }
    
    // newSeriesMap returns a newly allocated empty seriesMap. To create a seriesMap
    // based on a prefilled map, use an explicit initializer.
    func newSeriesMap() *seriesMap {
    	return &seriesMap{m: make(map[model.Fingerprint]*memorySeries)}
    }
    
    // length returns the number of mappings in the seriesMap.
    func (sm *seriesMap) length() int {
    	sm.mtx.RLock()
    	defer sm.mtx.RUnlock()
    
    	return len(sm.m)
    }
    
    // get returns a memorySeries for a fingerprint. Return values have the same
    // semantics as the native Go map.
    func (sm *seriesMap) get(fp model.Fingerprint) (s *memorySeries, ok bool) {
    	sm.mtx.RLock()
    	defer sm.mtx.RUnlock()
    
    	s, ok = sm.m[fp]
    	return
    }
    
    // put adds a mapping to the seriesMap. It panics if s == nil.
    func (sm *seriesMap) put(fp model.Fingerprint, s *memorySeries) {
    	sm.mtx.Lock()
    	defer sm.mtx.Unlock()
    
    	if s == nil {
    		panic("tried to add nil pointer to seriesMap")
    	}
    	sm.m[fp] = s
    }
    
// del removes a mapping from the seriesMap.
    func (sm *seriesMap) del(fp model.Fingerprint) {
    	sm.mtx.Lock()
    	defer sm.mtx.Unlock()
    
    	delete(sm.m, fp)
    }
    
// iter returns a channel that produces all mappings in the seriesMap. The
// channel will be closed once all mappings have been received. Not consuming
// all mappings from the channel will leak a goroutine. The semantics of
// concurrent modification of the seriesMap are similar to those of iterating
// over a map with a 'range' clause. However, if the next element in iteration
// order is removed after the current element has been received from the
// channel, it will still be produced by the channel.
    func (sm *seriesMap) iter() <-chan fingerprintSeriesPair {
    	ch := make(chan fingerprintSeriesPair)
    	go func() {
    		sm.mtx.RLock()
    		for fp, s := range sm.m {
    			sm.mtx.RUnlock()
    			ch <- fingerprintSeriesPair{fp, s}
    			sm.mtx.RLock()
    		}
    		sm.mtx.RUnlock()
    		close(ch)
    	}()
    	return ch
    }
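
// A minimal usage sketch for iter (hypothetical caller code, handle is a
// made-up helper): the returned channel must be drained completely, or the
// goroutine started by iter is leaked.
//
//	for pair := range sm.iter() {
//		handle(pair.fp, pair.series)
//	}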
    
// fpIter returns a channel that produces all fingerprints in the seriesMap.
// The channel will be closed once all fingerprints have been received. Not
// consuming all fingerprints from the channel will leak a goroutine. The
// semantics of concurrent modification of the seriesMap are similar to those
// of iterating over a map with a 'range' clause. However, if the next element
// in iteration order is removed after the current element has been received
// from the channel, it will still be produced by the channel.
    func (sm *seriesMap) fpIter() <-chan model.Fingerprint {
    	ch := make(chan model.Fingerprint)
    	go func() {
    		sm.mtx.RLock()
    		for fp := range sm.m {
    			sm.mtx.RUnlock()
    			ch <- fp
    			sm.mtx.RLock()
    		}
    		sm.mtx.RUnlock()
    		close(ch)
    	}()
    	return ch
    }
    
    type memorySeries struct {
    	metric model.Metric
    	// Sorted by start time, overlapping chunk ranges are forbidden.
    	chunkDescs []*chunkDesc
    	// The index (within chunkDescs above) of the first chunkDesc that
    	// points to a non-persisted chunk. If all chunks are persisted, then
    	// persistWatermark == len(chunkDescs).
    	persistWatermark int
    	// The modification time of the series file. The zero value of time.Time
    	// is used to mark an unknown modification time.
    	modTime time.Time
    	// The chunkDescs in memory might not have all the chunkDescs for the
    	// chunks that are persisted to disk. The missing chunkDescs are all
    	// contiguous and at the tail end. chunkDescsOffset is the index of the
    	// chunk on disk that corresponds to the first chunkDesc in memory. If
    	// it is 0, the chunkDescs are all loaded. A value of -1 denotes a
    	// special case: There are chunks on disk, but the offset to the
    	// chunkDescs in memory is unknown. Also, in this special case, there is
    	// no overlap between chunks on disk and chunks in memory (implying that
    	// upon first persisting of a chunk in memory, the offset has to be
    	// set).
    	chunkDescsOffset int
    	// The savedFirstTime field is used as a fallback when the
    	// chunkDescsOffset is not 0. It can be used to save the firstTime of the
    	// first chunk before its chunk desc is evicted. In doubt, this field is
    	// just set to the oldest possible timestamp.
    	savedFirstTime model.Time
    	// The timestamp of the last sample in this series. Needed for fast access to
    	// ensure timestamp monotonicity during ingestion.
    	lastTime model.Time
    	// Whether the current head chunk has already been finished.  If true,
    	// the current head chunk must not be modified anymore.
    	headChunkClosed bool
    	// Whether the current head chunk is used by an iterator. In that case,
    	// a non-closed head chunk has to be cloned before more samples are
    	// appended.
    	headChunkUsedByIterator bool
    	// Whether the series is inconsistent with the last checkpoint in a way
    	// that would require a disk seek during crash recovery.
    	dirty bool
    }
    
    // newMemorySeries returns a pointer to a newly allocated memorySeries for the
    // given metric. chunkDescs and modTime in the new series are set according to
    // the provided parameters. chunkDescs can be nil or empty if this is a
    // genuinely new time series (i.e. not one that is being unarchived). In that
    // case, headChunkClosed is set to false, and firstTime and lastTime are both
    // set to model.Earliest. The zero value for modTime can be used if the
    // modification time of the series file is unknown (e.g. if this is a genuinely
    // new series).
    func newMemorySeries(m model.Metric, chunkDescs []*chunkDesc, modTime time.Time) *memorySeries {
    	firstTime := model.Earliest
    	lastTime := model.Earliest
    	if len(chunkDescs) > 0 {
    		firstTime = chunkDescs[0].firstTime()
    		lastTime = chunkDescs[len(chunkDescs)-1].lastTime()
    	}
    	return &memorySeries{
    		metric:           m,
    		chunkDescs:       chunkDescs,
    		headChunkClosed:  len(chunkDescs) > 0,
    		savedFirstTime:   firstTime,
    		lastTime:         lastTime,
    		persistWatermark: len(chunkDescs),
    		modTime:          modTime,
    	}
    }
    
    // add adds a sample pair to the series. It returns the number of newly
    // completed chunks (which are now eligible for persistence).
    //
    // The caller must have locked the fingerprint of the series.
    func (s *memorySeries) add(v *model.SamplePair) int {
    	if len(s.chunkDescs) == 0 || s.headChunkClosed {
    		newHead := newChunkDesc(newChunk())
    		s.chunkDescs = append(s.chunkDescs, newHead)
    		s.headChunkClosed = false
    	} else if s.headChunkUsedByIterator && s.head().refCount() > 1 {
    		// We only need to clone the head chunk if the current head
    		// chunk was used in an iterator at all and if the refCount is
    		// still greater than the 1 we always have because the head
    		// chunk is not yet persisted. The latter is just an
    		// approximation. We will still clone unnecessarily if an older
    		// iterator using a previous version of the head chunk is still
    		// around and keep the head chunk pinned. We needed to track
    		// pins by version of the head chunk, which is probably not
    		// worth the effort.
    		chunkOps.WithLabelValues(clone).Inc()
    		// No locking needed here because a non-persisted head chunk can
    		// not get evicted concurrently.
    		s.head().c = s.head().c.clone()
    		s.headChunkUsedByIterator = false
    	}
    
    	chunks := s.head().add(v)
    	s.head().c = chunks[0]
    
    	for _, c := range chunks[1:] {
    		s.chunkDescs = append(s.chunkDescs, newChunkDesc(c))
    	}
    
    	s.lastTime = v.Timestamp
    	return len(chunks) - 1
    }
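
// A worked example for the return value of add: if adding the sample
// overflows the head chunk and the chunk implementation returns two chunks
// (the completed old head plus a new chunk holding the overflowing sample),
// one new chunkDesc is appended and add returns len(chunks)-1 == 1, i.e. one
// newly completed chunk now eligible for persistence.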
    
    // maybeCloseHeadChunk closes the head chunk if it has not been touched for the
    // duration of headChunkTimeout. It returns whether the head chunk was closed.
    // If the head chunk is already closed, the method is a no-op and returns false.
    //
    // The caller must have locked the fingerprint of the series.
    func (s *memorySeries) maybeCloseHeadChunk() bool {
    	if s.headChunkClosed {
    		return false
    	}
    	if time.Now().Sub(s.lastTime.Time()) > headChunkTimeout {
    		s.headChunkClosed = true
    		// Since we cannot modify the head chunk from now on, we
    		// don't need to bother with cloning anymore.
    		s.headChunkUsedByIterator = false
    		return true
    	}
    	return false
    }
    
    // evictChunkDescs evicts chunkDescs if there are chunkDescEvictionFactor times
    // more than non-evicted chunks. iOldestNotEvicted is the index within the
    // current chunkDescs of the oldest chunk that is not evicted.
    func (s *memorySeries) evictChunkDescs(iOldestNotEvicted int) {
    	lenToKeep := chunkDescEvictionFactor * (len(s.chunkDescs) - iOldestNotEvicted)
    	if lenToKeep < len(s.chunkDescs) {
    		s.savedFirstTime = s.firstTime()
    		lenEvicted := len(s.chunkDescs) - lenToKeep
    		s.chunkDescsOffset += lenEvicted
    		s.persistWatermark -= lenEvicted
    		chunkDescOps.WithLabelValues(evict).Add(float64(lenEvicted))
    		numMemChunkDescs.Sub(float64(lenEvicted))
    		s.chunkDescs = append(
    			make([]*chunkDesc, 0, lenToKeep),
    			s.chunkDescs[lenEvicted:]...,
    		)
    		s.dirty = true
    	}
    }
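
// A worked example for the arithmetic above, assuming the default
// chunkDescEvictionFactor of 10: with len(s.chunkDescs) == 25 and
// iOldestNotEvicted == 23 (i.e. 2 chunkDescs point to non-evicted chunks),
// lenToKeep is 10 * 2 == 20, so the 5 oldest chunkDescs are dropped,
// chunkDescsOffset grows by 5, and persistWatermark shrinks by 5.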
    
    // dropChunks removes chunkDescs older than t. The caller must have locked the
    // fingerprint of the series.
    func (s *memorySeries) dropChunks(t model.Time) {
    	keepIdx := len(s.chunkDescs)
    	for i, cd := range s.chunkDescs {
    		if !cd.lastTime().Before(t) {
    			keepIdx = i
    			break
    		}
    	}
    	if keepIdx > 0 {
    		s.chunkDescs = append(
    			make([]*chunkDesc, 0, len(s.chunkDescs)-keepIdx),
    			s.chunkDescs[keepIdx:]...,
    		)
    		s.persistWatermark -= keepIdx
    		if s.persistWatermark < 0 {
    			panic("dropped unpersisted chunks from memory")
    		}
    		if s.chunkDescsOffset != -1 {
    			s.chunkDescsOffset += keepIdx
    		}
    		numMemChunkDescs.Sub(float64(keepIdx))
    		s.dirty = true
    	}
    }
    
    // preloadChunks is an internal helper method.
    func (s *memorySeries) preloadChunks(
    	indexes []int, fp model.Fingerprint, mss *memorySeriesStorage,
    ) ([]*chunkDesc, error) {
    	loadIndexes := []int{}
    	pinnedChunkDescs := make([]*chunkDesc, 0, len(indexes))
    	for _, idx := range indexes {
    		cd := s.chunkDescs[idx]
    		pinnedChunkDescs = append(pinnedChunkDescs, cd)
    		cd.pin(mss.evictRequests) // Have to pin everything first to prevent immediate eviction on chunk loading.
    		if cd.isEvicted() {
    			loadIndexes = append(loadIndexes, idx)
    		}
    	}
    	chunkOps.WithLabelValues(pin).Add(float64(len(pinnedChunkDescs)))
    
    	if len(loadIndexes) > 0 {
    		if s.chunkDescsOffset == -1 {
    			panic("requested loading chunks from persistence in a situation where we must not have persisted data for chunk descriptors in memory")
    		}
    		chunks, err := mss.loadChunks(fp, loadIndexes, s.chunkDescsOffset)
    		if err != nil {
    			// Unpin the chunks since we won't return them as pinned chunks now.
    			for _, cd := range pinnedChunkDescs {
    				cd.unpin(mss.evictRequests)
    			}
    			chunkOps.WithLabelValues(unpin).Add(float64(len(pinnedChunkDescs)))
    			return nil, err
    		}
    		for i, c := range chunks {
    			s.chunkDescs[loadIndexes[i]].setChunk(c)
    		}
    	}
    	return pinnedChunkDescs, nil
    }
    
    /*
    func (s *memorySeries) preloadChunksAtTime(t model.Time, p *persistence) (chunkDescs, error) {
    	s.mtx.Lock()
    	defer s.mtx.Unlock()
    
    	if len(s.chunkDescs) == 0 {
    		return nil, nil
    	}
    
    	var pinIndexes []int
    	// Find first chunk where lastTime() is after or equal to t.
    	i := sort.Search(len(s.chunkDescs), func(i int) bool {
    		return !s.chunkDescs[i].lastTime().Before(t)
    	})
    	switch i {
    	case 0:
    		pinIndexes = []int{0}
    	case len(s.chunkDescs):
    		pinIndexes = []int{i - 1}
    	default:
    		if s.chunkDescs[i].contains(t) {
    			pinIndexes = []int{i}
    		} else {
    			pinIndexes = []int{i - 1, i}
    		}
    	}
    
    	return s.preloadChunks(pinIndexes, p)
    }
    */
    
    // preloadChunksForRange loads chunks for the given range from the persistence.
    // The caller must have locked the fingerprint of the series.
    func (s *memorySeries) preloadChunksForRange(
    	from model.Time, through model.Time,
    	fp model.Fingerprint, mss *memorySeriesStorage,
    ) ([]*chunkDesc, error) {
    	firstChunkDescTime := model.Latest
    	if len(s.chunkDescs) > 0 {
    		firstChunkDescTime = s.chunkDescs[0].firstTime()
    	}
    	if s.chunkDescsOffset != 0 && from.Before(firstChunkDescTime) {
    		cds, err := mss.loadChunkDescs(fp, s.persistWatermark)
    		if err != nil {
    			return nil, err
    		}
    		s.chunkDescs = append(cds, s.chunkDescs...)
    		s.chunkDescsOffset = 0
    		s.persistWatermark += len(cds)
    	}
    
    	if len(s.chunkDescs) == 0 {
    		return nil, nil
    	}
    
    	// Find first chunk with start time after "from".
    	fromIdx := sort.Search(len(s.chunkDescs), func(i int) bool {
    		return s.chunkDescs[i].firstTime().After(from)
    	})
    	// Find first chunk with start time after "through".
    	throughIdx := sort.Search(len(s.chunkDescs), func(i int) bool {
    		return s.chunkDescs[i].firstTime().After(through)
    	})
    	if fromIdx > 0 {
    		fromIdx--
    	}
    	if throughIdx == len(s.chunkDescs) {
    		throughIdx--
    	}
    
    	pinIndexes := make([]int, 0, throughIdx-fromIdx+1)
    	for i := fromIdx; i <= throughIdx; i++ {
    		pinIndexes = append(pinIndexes, i)
    	}
    	return s.preloadChunks(pinIndexes, fp, mss)
    }
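
// An illustrative trace of the two searches above (hypothetical chunk
// layout): with chunks starting at times 0, 10, 20, 30 and a query for
// [12, 25], sort.Search yields fromIdx = 2 and throughIdx = 3. fromIdx is
// then decremented to 1 because the chunk starting at 10 may still contain
// samples at or after 12. pinIndexes becomes {1, 2, 3}, i.e. one chunk on
// either side of the interval is included so that samples straddling the
// boundaries remain available.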
    
    // newIterator returns a new SeriesIterator. The caller must have locked the
    // fingerprint of the memorySeries.
    func (s *memorySeries) newIterator() SeriesIterator {
    	chunks := make([]chunk, 0, len(s.chunkDescs))
    	for i, cd := range s.chunkDescs {
    		if chunk := cd.chunk(); chunk != nil {
    			if i == len(s.chunkDescs)-1 && !s.headChunkClosed {
    				s.headChunkUsedByIterator = true
    			}
    			chunks = append(chunks, chunk)
    		}
    	}
    
    	return &memorySeriesIterator{
    		chunks:   chunks,
    		chunkIts: make([]chunkIterator, len(chunks)),
    	}
    }
    
    // head returns a pointer to the head chunk descriptor. The caller must have
    // locked the fingerprint of the memorySeries. This method will panic if this
    // series has no chunk descriptors.
    func (s *memorySeries) head() *chunkDesc {
    	return s.chunkDescs[len(s.chunkDescs)-1]
    }
    
    // firstTime returns the timestamp of the first sample in the series. The caller
    // must have locked the fingerprint of the memorySeries.
    func (s *memorySeries) firstTime() model.Time {
    	if s.chunkDescsOffset == 0 && len(s.chunkDescs) > 0 {
    		return s.chunkDescs[0].firstTime()
    	}
    	return s.savedFirstTime
    }
    
    // chunksToPersist returns a slice of chunkDescs eligible for persistence. It's
    // the caller's responsibility to actually persist the returned chunks
    // afterwards. The method sets the persistWatermark and the dirty flag
    // accordingly.
    //
    // The caller must have locked the fingerprint of the series.
    func (s *memorySeries) chunksToPersist() []*chunkDesc {
    	newWatermark := len(s.chunkDescs)
    	if !s.headChunkClosed {
    		newWatermark--
    	}
    	if newWatermark == s.persistWatermark {
    		return nil
    	}
    	cds := s.chunkDescs[s.persistWatermark:newWatermark]
    	s.dirty = true
    	s.persistWatermark = newWatermark
    	return cds
    }
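
// An illustrative example (hypothetical state): with five chunkDescs, an
// open head chunk, and persistWatermark == 2, this method returns
// chunkDescs[2:4] (the open head chunk is excluded) and advances
// persistWatermark to 4. A second call before any further chunk completes
// returns nil.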
    
    // memorySeriesIterator implements SeriesIterator.
    type memorySeriesIterator struct {
    	chunkIt  chunkIterator   // Last chunkIterator used by ValueAtTime.
    	chunkIts []chunkIterator // Caches chunkIterators.
    	chunks   []chunk
    }
    
    // ValueAtTime implements SeriesIterator.
    func (it *memorySeriesIterator) ValueAtTime(t model.Time) []model.SamplePair {
    	// The most common case. We are iterating through a chunk.
    	if it.chunkIt != nil && it.chunkIt.contains(t) {
    		return it.chunkIt.valueAtTime(t)
    	}
    
    	if len(it.chunks) == 0 {
    		return nil
    	}
    
    	// Before or exactly on the first sample of the series.
    	it.chunkIt = it.chunkIterator(0)
    	ts := it.chunkIt.timestampAtIndex(0)
    	if !t.After(ts) {
    		// return first value of first chunk
    		return []model.SamplePair{{
    			Timestamp: ts,
    			Value:     it.chunkIt.sampleValueAtIndex(0),
    		}}
    	}
    
    	// After or exactly on the last sample of the series.
    	it.chunkIt = it.chunkIterator(len(it.chunks) - 1)
    	ts = it.chunkIt.lastTimestamp()
    	if !t.Before(ts) {
    		// return last value of last chunk
    		return []model.SamplePair{{
    			Timestamp: ts,
    			Value:     it.chunkIt.sampleValueAtIndex(it.chunkIt.length() - 1),
    		}}
    	}
    
    	// Find last chunk where firstTime() is before or equal to t.
    	l := len(it.chunks) - 1
    	i := sort.Search(len(it.chunks), func(i int) bool {
    		return !it.chunks[l-i].firstTime().After(t)
    	})
    	if i == len(it.chunks) {
    		panic("out of bounds")
    	}
    	it.chunkIt = it.chunkIterator(l - i)
    	ts = it.chunkIt.lastTimestamp()
    	if t.After(ts) {
    		// We ended up between two chunks.
    		sp1 := model.SamplePair{
    			Timestamp: ts,
    			Value:     it.chunkIt.sampleValueAtIndex(it.chunkIt.length() - 1),
    		}
    		it.chunkIt = it.chunkIterator(l - i + 1)
    		return []model.SamplePair{
    			sp1,
    			{
    				Timestamp: it.chunkIt.timestampAtIndex(0),
    				Value:     it.chunkIt.sampleValueAtIndex(0),
    			},
    		}
    	}
    	return it.chunkIt.valueAtTime(t)
    }
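
// An illustrative example (hypothetical samples): for two chunks holding
// samples at t = 0..10 and t = 20..30, ValueAtTime(5) returns the
// surrounding pair(s) from within the first chunk, while ValueAtTime(15)
// hits the "between two chunks" case above and returns both neighboring
// pairs, (10, v1) and (20, v2), leaving it to the caller to pick the
// relevant one.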
    
    // BoundaryValues implements SeriesIterator.
    func (it *memorySeriesIterator) BoundaryValues(in metric.Interval) []model.SamplePair {
    	// Find the first chunk for which the first sample is within the interval.
    	i := sort.Search(len(it.chunks), func(i int) bool {
    		return !it.chunks[i].firstTime().Before(in.OldestInclusive)
    	})
    	// Only now check the last timestamp of the previous chunk (which is
    	// fairly expensive).
    	if i > 0 && !it.chunkIterator(i-1).lastTimestamp().Before(in.OldestInclusive) {
    		i--
    	}
    
    	values := make([]model.SamplePair, 0, 2)
    	for j, c := range it.chunks[i:] {
    		if c.firstTime().After(in.NewestInclusive) {
    			if len(values) == 1 {
    				// We found the first value before but are now
    				// already past the last value. The value we
    				// want must be the last value of the previous
    				// chunk. So backtrack...
    				chunkIt := it.chunkIterator(i + j - 1)
    				values = append(values, model.SamplePair{
    					Timestamp: chunkIt.lastTimestamp(),
    					Value:     chunkIt.lastSampleValue(),
    				})
    			}
    			break
    		}
    		chunkIt := it.chunkIterator(i + j)
    		if len(values) == 0 {
    			firstValues := chunkIt.valueAtTime(in.OldestInclusive)
    			switch len(firstValues) {
    			case 2:
    				values = append(values, firstValues[1])
    			case 1:
    				values = firstValues
    			default:
    				panic("unexpected return from valueAtTime")
    			}
    		}
    		if chunkIt.lastTimestamp().After(in.NewestInclusive) {
    			values = append(values, chunkIt.valueAtTime(in.NewestInclusive)[0])
    			break
    		}
    	}
    	if len(values) == 1 {
    		// We found exactly one value. In that case, add the most recent we know.
    		chunkIt := it.chunkIterator(len(it.chunks) - 1)
    		values = append(values, model.SamplePair{
    			Timestamp: chunkIt.lastTimestamp(),
    			Value:     chunkIt.lastSampleValue(),
    		})
    	}
    	if len(values) == 2 && values[0].Equal(&values[1]) {
    		return values[:1]
    	}
    	return values
    }
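
// An illustrative example (hypothetical samples): for chunks covering
// [0,10], [11,20], [21,30] and the interval [12, 25], the search above
// settles on the chunk starting at 11 (its last sample, at 20, lies inside
// the interval). The method then returns the first sample at or after 12
// and the last sample at or before 25, i.e. the innermost samples of the
// interval.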
    
    // RangeValues implements SeriesIterator.
    func (it *memorySeriesIterator) RangeValues(in metric.Interval) []model.SamplePair {
    	// Find the first chunk for which the first sample is within the interval.
    	i := sort.Search(len(it.chunks), func(i int) bool {
    		return !it.chunks[i].firstTime().Before(in.OldestInclusive)
    	})
    	// Only now check the last timestamp of the previous chunk (which is
    	// fairly expensive).
    	if i > 0 && !it.chunkIterator(i-1).lastTimestamp().Before(in.OldestInclusive) {
    		i--
    	}
    
    	values := []model.SamplePair{}
    	for j, c := range it.chunks[i:] {
    		if c.firstTime().After(in.NewestInclusive) {
    			break
    		}
    		values = append(values, it.chunkIterator(i+j).rangeValues(in)...)
    	}
    	return values
    }
    
    // chunkIterator returns the chunkIterator for the chunk at position i (and
    // creates it if needed).
    func (it *memorySeriesIterator) chunkIterator(i int) chunkIterator {
    	chunkIt := it.chunkIts[i]
    	if chunkIt == nil {
    		chunkIt = it.chunks[i].newIterator()
    		it.chunkIts[i] = chunkIt
    	}
    	return chunkIt
    }
    
// nopSeriesIterator implements SeriesIterator. It never returns any values.
    type nopSeriesIterator struct{}
    
    // ValueAtTime implements SeriesIterator.
    func (i nopSeriesIterator) ValueAtTime(t model.Time) []model.SamplePair {
    	return []model.SamplePair{}
    }
    
    // BoundaryValues implements SeriesIterator.
    func (i nopSeriesIterator) BoundaryValues(in metric.Interval) []model.SamplePair {
    	return []model.SamplePair{}
    }
    
    // RangeValues implements SeriesIterator.
    func (i nopSeriesIterator) RangeValues(in metric.Interval) []model.SamplePair {
    	return []model.SamplePair{}
    }
prometheus-0.16.2+ds/storage/local/storage.go
// Copyright 2014 The Prometheus Authors
    // Licensed under the Apache License, Version 2.0 (the "License");
    // you may not use this file except in compliance with the License.
    // You may obtain a copy of the License at
    //
    // http://www.apache.org/licenses/LICENSE-2.0
    //
    // Unless required by applicable law or agreed to in writing, software
    // distributed under the License is distributed on an "AS IS" BASIS,
    // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    // See the License for the specific language governing permissions and
    // limitations under the License.
    
    // Package local contains the local time series storage used by Prometheus.
    package local
    
    import (
    	"container/list"
    	"fmt"
    	"sync/atomic"
    	"time"
    
    	"github.com/prometheus/client_golang/prometheus"
    	"github.com/prometheus/common/log"
    	"github.com/prometheus/common/model"
    
    	"github.com/prometheus/prometheus/storage/metric"
    )
    
    const (
    	evictRequestsCap = 1024
    	chunkLen         = 1024
    
    	// See waitForNextFP.
    	fpMaxSweepTime    = 6 * time.Hour
    	fpMaxWaitDuration = 10 * time.Second
    
    	// See waitForNextFP.
    	maxEvictInterval = time.Minute
    
	// If numChunksToPersist is this percentage of maxChunksToPersist, we
    	// consider the storage in "graceful degradation mode", i.e. we do not
    	// checkpoint anymore based on the dirty series count, and we do not
    	// sync series files anymore if using the adaptive sync strategy.
    	percentChunksToPersistForDegradation = 80
    )
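
// A worked example of the degradation threshold (hypothetical numbers): with
// MaxChunksToPersist set to 1,000,000, the storage enters graceful
// degradation mode once more than 800,000 chunks (80%) are waiting for
// persistence; see isDegraded and persistenceBacklogScore below.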
    
    var (
    	numChunksToPersistDesc = prometheus.NewDesc(
    		prometheus.BuildFQName(namespace, subsystem, "chunks_to_persist"),
    		"The current number of chunks waiting for persistence.",
    		nil, nil,
    	)
    	maxChunksToPersistDesc = prometheus.NewDesc(
    		prometheus.BuildFQName(namespace, subsystem, "max_chunks_to_persist"),
    		"The maximum number of chunks that can be waiting for persistence before sample ingestion will stop.",
    		nil, nil,
    	)
    )
    
    type evictRequest struct {
    	cd    *chunkDesc
    	evict bool
    }
    
    // SyncStrategy is an enum to select a sync strategy for series files.
    type SyncStrategy int
    
    // String implements flag.Value.
    func (ss SyncStrategy) String() string {
    	switch ss {
    	case Adaptive:
    		return "adaptive"
    	case Always:
    		return "always"
    	case Never:
    		return "never"
    	}
    	return ""
    }
    
    // Set implements flag.Value.
    func (ss *SyncStrategy) Set(s string) error {
    	switch s {
    	case "adaptive":
    		*ss = Adaptive
    	case "always":
    		*ss = Always
    	case "never":
    		*ss = Never
    	default:
    		return fmt.Errorf("invalid sync strategy: %s", s)
    	}
    	return nil
    }
    
    // Possible values for SyncStrategy.
    const (
    	_ SyncStrategy = iota
    	Never
    	Always
    	Adaptive
    )
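
// Because *SyncStrategy implements flag.Value via the String and Set methods
// above, it can be registered directly with the standard flag package. A
// minimal sketch (the flag name below is made up for illustration):
//
//	var strategy SyncStrategy = Adaptive
//	flag.Var(&strategy, "series-sync-strategy",
//		"When to sync series files: 'never', 'always', or 'adaptive'.")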
    
    // A syncStrategy is a function that returns whether series files should be
    // synced or not. It does not need to be goroutine safe.
    type syncStrategy func() bool
    
    type memorySeriesStorage struct {
    	// numChunksToPersist has to be aligned for atomic operations.
    	numChunksToPersist int64 // The number of chunks waiting for persistence.
    	maxChunksToPersist int   // If numChunksToPersist reaches this threshold, ingestion will stall.
    	degraded           bool
    
    	fpLocker   *fingerprintLocker
    	fpToSeries *seriesMap
    
    	options *MemorySeriesStorageOptions
    
    	loopStopping, loopStopped  chan struct{}
    	maxMemoryChunks            int
    	dropAfter                  time.Duration
    	checkpointInterval         time.Duration
    	checkpointDirtySeriesLimit int
    
    	persistence *persistence
    	mapper      *fpMapper
    
    	evictList                   *list.List
    	evictRequests               chan evictRequest
    	evictStopping, evictStopped chan struct{}
    
    	persistErrors               prometheus.Counter
    	numSeries                   prometheus.Gauge
    	seriesOps                   *prometheus.CounterVec
    	ingestedSamplesCount        prometheus.Counter
    	outOfOrderSamplesCount      prometheus.Counter
    	invalidPreloadRequestsCount prometheus.Counter
    	maintainSeriesDuration      *prometheus.SummaryVec
    }
    
    // MemorySeriesStorageOptions contains options needed by
    // NewMemorySeriesStorage. It is not safe to leave any of those at their zero
    // values.
    type MemorySeriesStorageOptions struct {
    	MemoryChunks               int           // How many chunks to keep in memory.
    	MaxChunksToPersist         int           // Max number of chunks waiting to be persisted.
    	PersistenceStoragePath     string        // Location of persistence files.
    	PersistenceRetentionPeriod time.Duration // Chunks at least that old are dropped.
    	CheckpointInterval         time.Duration // How often to checkpoint the series map and head chunks.
    	CheckpointDirtySeriesLimit int           // How many dirty series will trigger an early checkpoint.
    	Dirty                      bool          // Force the storage to consider itself dirty on startup.
    	PedanticChecks             bool          // If dirty, perform crash-recovery checks on each series file.
    	SyncStrategy               SyncStrategy  // Which sync strategy to apply to series files.
    	MinShrinkRatio             float64       // Minimum ratio a series file has to shrink during truncation.
    }
    
// NewMemorySeriesStorage returns a newly allocated Storage. Storage.Start still
// has to be called to start the storage.
    func NewMemorySeriesStorage(o *MemorySeriesStorageOptions) Storage {
    	s := &memorySeriesStorage{
    		fpLocker: newFingerprintLocker(1024),
    
    		options: o,
    
    		loopStopping:               make(chan struct{}),
    		loopStopped:                make(chan struct{}),
    		maxMemoryChunks:            o.MemoryChunks,
    		dropAfter:                  o.PersistenceRetentionPeriod,
    		checkpointInterval:         o.CheckpointInterval,
    		checkpointDirtySeriesLimit: o.CheckpointDirtySeriesLimit,
    
    		maxChunksToPersist: o.MaxChunksToPersist,
    
    		evictList:     list.New(),
    		evictRequests: make(chan evictRequest, evictRequestsCap),
    		evictStopping: make(chan struct{}),
    		evictStopped:  make(chan struct{}),
    
    		persistErrors: prometheus.NewCounter(prometheus.CounterOpts{
    			Namespace: namespace,
    			Subsystem: subsystem,
    			Name:      "persist_errors_total",
    			Help:      "The total number of errors while persisting chunks.",
    		}),
    		numSeries: prometheus.NewGauge(prometheus.GaugeOpts{
    			Namespace: namespace,
    			Subsystem: subsystem,
    			Name:      "memory_series",
    			Help:      "The current number of series in memory.",
    		}),
    		seriesOps: prometheus.NewCounterVec(
    			prometheus.CounterOpts{
    				Namespace: namespace,
    				Subsystem: subsystem,
    				Name:      "series_ops_total",
    				Help:      "The total number of series operations by their type.",
    			},
    			[]string{opTypeLabel},
    		),
    		ingestedSamplesCount: prometheus.NewCounter(prometheus.CounterOpts{
    			Namespace: namespace,
    			Subsystem: subsystem,
    			Name:      "ingested_samples_total",
    			Help:      "The total number of samples ingested.",
    		}),
    		outOfOrderSamplesCount: prometheus.NewCounter(prometheus.CounterOpts{
    			Namespace: namespace,
    			Subsystem: subsystem,
    			Name:      "out_of_order_samples_total",
    			Help:      "The total number of samples that were discarded because their timestamps were at or before the last received sample for a series.",
    		}),
    		invalidPreloadRequestsCount: prometheus.NewCounter(prometheus.CounterOpts{
    			Namespace: namespace,
    			Subsystem: subsystem,
    			Name:      "invalid_preload_requests_total",
    			Help:      "The total number of preload requests referring to a non-existent series. This is an indication of outdated label indexes.",
    		}),
    		maintainSeriesDuration: prometheus.NewSummaryVec(
    			prometheus.SummaryOpts{
    				Namespace: namespace,
    				Subsystem: subsystem,
    				Name:      "maintain_series_duration_milliseconds",
    				Help:      "The duration (in milliseconds) it took to perform maintenance on a series.",
    			},
    			[]string{seriesLocationLabel},
    		),
    	}
    	return s
    }
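
// A minimal usage sketch (all option values below are made up for
// illustration):
//
//	s := NewMemorySeriesStorage(&MemorySeriesStorageOptions{
//		MemoryChunks:               1024 * 1024,
//		MaxChunksToPersist:         512 * 1024,
//		PersistenceStoragePath:     "data",
//		PersistenceRetentionPeriod: 15 * 24 * time.Hour,
//		CheckpointInterval:         5 * time.Minute,
//		CheckpointDirtySeriesLimit: 5000,
//		SyncStrategy:               Adaptive,
//		MinShrinkRatio:             0.1,
//	})
//	if err := s.Start(); err != nil {
//		log.Fatal(err)
//	}
//	defer s.Stop()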
    
    // Start implements Storage.
    func (s *memorySeriesStorage) Start() (err error) {
    	var syncStrategy syncStrategy
    	switch s.options.SyncStrategy {
    	case Never:
    		syncStrategy = func() bool { return false }
    	case Always:
    		syncStrategy = func() bool { return true }
    	case Adaptive:
    		syncStrategy = func() bool { return !s.isDegraded() }
    	default:
    		panic("unknown sync strategy")
    	}
    
    	var p *persistence
    	p, err = newPersistence(
    		s.options.PersistenceStoragePath,
    		s.options.Dirty, s.options.PedanticChecks,
    		syncStrategy,
    		s.options.MinShrinkRatio,
    	)
    	if err != nil {
    		return err
    	}
    	s.persistence = p
    	// Persistence must start running before loadSeriesMapAndHeads() is called.
    	go s.persistence.run()
    
    	defer func() {
    		if err != nil {
    			if e := p.close(); e != nil {
    				log.Errorln("Error closing persistence:", e)
    			}
    		}
    	}()
    
    	log.Info("Loading series map and head chunks...")
    	s.fpToSeries, s.numChunksToPersist, err = p.loadSeriesMapAndHeads()
    	if err != nil {
    		return err
    	}
    	log.Infof("%d series loaded.", s.fpToSeries.length())
    	s.numSeries.Set(float64(s.fpToSeries.length()))
    
    	s.mapper, err = newFPMapper(s.fpToSeries, p)
    	if err != nil {
    		return err
    	}
    
    	go s.handleEvictList()
    	go s.loop()
    
    	return nil
    }
    
    // Stop implements Storage.
    func (s *memorySeriesStorage) Stop() error {
    	log.Info("Stopping local storage...")
    
    	log.Info("Stopping maintenance loop...")
    	close(s.loopStopping)
    	<-s.loopStopped
    
    	log.Info("Stopping chunk eviction...")
    	close(s.evictStopping)
    	<-s.evictStopped
    
    	// One final checkpoint of the series map and the head chunks.
    	if err := s.persistence.checkpointSeriesMapAndHeads(s.fpToSeries, s.fpLocker); err != nil {
    		return err
    	}
    
    	if err := s.persistence.close(); err != nil {
    		return err
    	}
    	log.Info("Local storage stopped.")
    	return nil
    }
    
    // WaitForIndexing implements Storage.
    func (s *memorySeriesStorage) WaitForIndexing() {
    	s.persistence.waitForIndexing()
    }
    
    // NewIterator implements Storage.
    func (s *memorySeriesStorage) NewIterator(fp model.Fingerprint) SeriesIterator {
    	s.fpLocker.Lock(fp)
    	defer s.fpLocker.Unlock(fp)
    
    	series, ok := s.fpToSeries.get(fp)
    	if !ok {
    		// Oops, no series for fp found. That happens if, after
    		// preloading is done, the whole series is identified as old
    		// enough for purging and hence purged for good. As there is no
    		// data left to iterate over, return an iterator that will never
    		// return any values.
    		return nopSeriesIterator{}
    	}
    	return &boundedIterator{
    		it:    series.newIterator(),
    		start: model.Now().Add(-s.dropAfter),
    	}
    }
    
// LastSamplePairForFingerprint implements Storage.
    func (s *memorySeriesStorage) LastSamplePairForFingerprint(fp model.Fingerprint) *model.SamplePair {
    	s.fpLocker.Lock(fp)
    	defer s.fpLocker.Unlock(fp)
    
    	series, ok := s.fpToSeries.get(fp)
    	if !ok {
    		return nil
    	}
    	return series.head().lastSamplePair()
    }
    
    // boundedIterator wraps a SeriesIterator and does not allow fetching
    // data from earlier than the configured start time.
    type boundedIterator struct {
    	it    SeriesIterator
    	start model.Time
    }
    
    // ValueAtTime implements the SeriesIterator interface.
    func (bit *boundedIterator) ValueAtTime(ts model.Time) []model.SamplePair {
    	if ts < bit.start {
    		return []model.SamplePair{}
    	}
    	return bit.it.ValueAtTime(ts)
    }
    
    // BoundaryValues implements the SeriesIterator interface.
    func (bit *boundedIterator) BoundaryValues(interval metric.Interval) []model.SamplePair {
    	if interval.NewestInclusive < bit.start {
    		return []model.SamplePair{}
    	}
    	if interval.OldestInclusive < bit.start {
    		interval.OldestInclusive = bit.start
    	}
    	return bit.it.BoundaryValues(interval)
    }
    
    // RangeValues implements the SeriesIterator interface.
    func (bit *boundedIterator) RangeValues(interval metric.Interval) []model.SamplePair {
    	if interval.NewestInclusive < bit.start {
    		return []model.SamplePair{}
    	}
    	if interval.OldestInclusive < bit.start {
    		interval.OldestInclusive = bit.start
    	}
    	return bit.it.RangeValues(interval)
    }
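
// An illustrative example (hypothetical times): with start = 1000, a call to
// RangeValues over [500, 900] returns no samples at all, while a call over
// [500, 1500] is clamped to [1000, 1500] before being delegated to the
// wrapped iterator. BoundaryValues clamps analogously, and ValueAtTime
// simply returns nothing for timestamps before start.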
    
    // NewPreloader implements Storage.
    func (s *memorySeriesStorage) NewPreloader() Preloader {
    	return &memorySeriesPreloader{
    		storage: s,
    	}
    }
    
// fingerprintsForLabelPairs returns the set of fingerprints that carry all of
// the given label pairs (i.e. the intersection over all pairs).
// This does not work with empty label values.
    func (s *memorySeriesStorage) fingerprintsForLabelPairs(pairs ...model.LabelPair) map[model.Fingerprint]struct{} {
    	var result map[model.Fingerprint]struct{}
    	for _, pair := range pairs {
    		intersection := map[model.Fingerprint]struct{}{}
    		fps, err := s.persistence.fingerprintsForLabelPair(pair)
    		if err != nil {
    			log.Error("Error getting fingerprints for label pair: ", err)
    		}
    		if len(fps) == 0 {
    			return nil
    		}
    		for _, fp := range fps {
    			if _, ok := result[fp]; ok || result == nil {
    				intersection[fp] = struct{}{}
    			}
    		}
    		if len(intersection) == 0 {
    			return nil
    		}
    		result = intersection
    	}
    	return result
    }
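
// An illustrative example (hypothetical label pairs): for the pairs
// {job="api"} and {env="prod"}, the method fetches the fingerprint set of
// each pair from the index and intersects them, so only series carrying both
// label pairs survive. An empty set at any step short-circuits to nil.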
    
    // MetricsForLabelMatchers implements Storage.
    func (s *memorySeriesStorage) MetricsForLabelMatchers(matchers ...*metric.LabelMatcher) map[model.Fingerprint]metric.Metric {
    	var (
    		equals  []model.LabelPair
    		filters []*metric.LabelMatcher
    	)
    	for _, lm := range matchers {
    		if lm.Type == metric.Equal && lm.Value != "" {
    			equals = append(equals, model.LabelPair{
    				Name:  lm.Name,
    				Value: lm.Value,
    			})
    		} else {
    			filters = append(filters, lm)
    		}
    	}
    
    	var resFPs map[model.Fingerprint]struct{}
    	if len(equals) > 0 {
    		resFPs = s.fingerprintsForLabelPairs(equals...)
    	} else {
		// If we cannot make a preselection based on equality matchers, expanding the other
		// matchers to their matching label values and intersecting the resulting fingerprint
		// sets is still likely to be the best choice.
    		var remaining metric.LabelMatchers
    		for _, matcher := range filters {
			// Matchers that also match empty label values cannot be
			// resolved via the index (it only contains non-empty
			// values), so keep them for plain filtering later.
    			if matcher.Match("") {
    				remaining = append(remaining, matcher)
    				continue
    			}
    			intersection := map[model.Fingerprint]struct{}{}
    
    			matches := matcher.Filter(s.LabelValuesForLabelName(matcher.Name))
    			if len(matches) == 0 {
    				return nil
    			}
    			for _, v := range matches {
    				fps := s.fingerprintsForLabelPairs(model.LabelPair{
    					Name:  matcher.Name,
    					Value: v,
    				})
    				for fp := range fps {
    					if _, ok := resFPs[fp]; ok || resFPs == nil {
    						intersection[fp] = struct{}{}
    					}
    				}
    			}
    			resFPs = intersection
    		}
    		// The intersected matchers no longer need to be compared against the actual metrics.
    		filters = remaining
    	}
    
    	result := make(map[model.Fingerprint]metric.Metric, len(resFPs))
    	for fp := range resFPs {
    		result[fp] = s.MetricForFingerprint(fp)
    	}
    	for _, matcher := range filters {
    		for fp, met := range result {
    			if !matcher.Match(met.Metric[matcher.Name]) {
    				delete(result, fp)
    			}
    		}
    	}
    	return result
    }
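
// An illustrative example (hypothetical matchers): for
// {job="api", code=~"5.."}, the equality matcher job="api" preselects
// candidate fingerprints via the index, and the regex matcher is then
// applied as a plain filter, deleting non-matching fingerprints from the
// result map.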
    
    // LabelValuesForLabelName implements Storage.
    func (s *memorySeriesStorage) LabelValuesForLabelName(labelName model.LabelName) model.LabelValues {
    	lvs, err := s.persistence.labelValuesForLabelName(labelName)
    	if err != nil {
    		log.Errorf("Error getting label values for label name %q: %v", labelName, err)
    	}
    	return lvs
    }
    
    // MetricForFingerprint implements Storage.
    func (s *memorySeriesStorage) MetricForFingerprint(fp model.Fingerprint) metric.Metric {
    	s.fpLocker.Lock(fp)
    	defer s.fpLocker.Unlock(fp)
    
    	series, ok := s.fpToSeries.get(fp)
    	if ok {
    		// Wrap the returned metric in a copy-on-write (COW) metric here because
    		// the caller might mutate it.
    		return metric.Metric{
    			Metric: series.metric,
    		}
    	}
    	met, err := s.persistence.archivedMetric(fp)
    	if err != nil {
    		log.Errorf("Error retrieving archived metric for fingerprint %v: %v", fp, err)
    	}
    
    	return metric.Metric{
    		Metric: met,
    		Copied: false,
    	}
    }
    
// DropMetricsForFingerprints implements Storage.
    func (s *memorySeriesStorage) DropMetricsForFingerprints(fps ...model.Fingerprint) {
    	for _, fp := range fps {
    		s.fpLocker.Lock(fp)
    
    		if series, ok := s.fpToSeries.get(fp); ok {
    			s.fpToSeries.del(fp)
    			s.numSeries.Dec()
    			s.persistence.unindexMetric(fp, series.metric)
    		} else if err := s.persistence.purgeArchivedMetric(fp); err != nil {
    			log.Errorf("Error purging metric with fingerprint %v: %v", fp, err)
    		}
    		// Attempt to delete series file in any case.
    		if _, err := s.persistence.deleteSeriesFile(fp); err != nil {
    			log.Errorf("Error deleting series file for %v: %v", fp, err)
    		}
    
    		s.fpLocker.Unlock(fp)
    		s.seriesOps.WithLabelValues(requestedPurge).Inc()
    	}
    }
    
    // Append implements Storage.
    func (s *memorySeriesStorage) Append(sample *model.Sample) {
    	for ln, lv := range sample.Metric {
    		if len(lv) == 0 {
    			delete(sample.Metric, ln)
    		}
    	}
    	if s.getNumChunksToPersist() >= s.maxChunksToPersist {
    		log.Warnf(
    			"%d chunks waiting for persistence, sample ingestion suspended.",
    			s.getNumChunksToPersist(),
    		)
    		for s.getNumChunksToPersist() >= s.maxChunksToPersist {
    			time.Sleep(time.Second)
    		}
    		log.Warn("Sample ingestion resumed.")
    	}
    	rawFP := sample.Metric.FastFingerprint()
    	s.fpLocker.Lock(rawFP)
    	fp, err := s.mapper.mapFP(rawFP, sample.Metric)
    	if err != nil {
    		log.Errorf("Error while mapping fingerprint %v: %v", rawFP, err)
    		s.persistence.setDirty(true)
    	}
    	if fp != rawFP {
    		// Switch locks.
    		s.fpLocker.Unlock(rawFP)
    		s.fpLocker.Lock(fp)
    	}
    	series := s.getOrCreateSeries(fp, sample.Metric)
    
    	if sample.Timestamp <= series.lastTime {
    		// Don't log and track equal timestamps, as they are a common occurrence
    		// when using client-side timestamps (e.g. Pushgateway or federation).
    		// It would be even better to also compare the sample values here, but
    		// we don't have efficient access to a series's last value.
    		if sample.Timestamp != series.lastTime {
    			log.Warnf("Ignoring sample with out-of-order timestamp for fingerprint %v (%v): %v is not after %v", fp, series.metric, sample.Timestamp, series.lastTime)
    			s.outOfOrderSamplesCount.Inc()
    		}
    		s.fpLocker.Unlock(fp)
    		return
    	}
    	completedChunksCount := series.add(&model.SamplePair{
    		Value:     sample.Value,
    		Timestamp: sample.Timestamp,
    	})
    	s.fpLocker.Unlock(fp)
    	s.ingestedSamplesCount.Inc()
    	s.incNumChunksToPersist(completedChunksCount)
    }
    
    func (s *memorySeriesStorage) getOrCreateSeries(fp model.Fingerprint, m model.Metric) *memorySeries {
    	series, ok := s.fpToSeries.get(fp)
    	if !ok {
    		var cds []*chunkDesc
    		var modTime time.Time
    		unarchived, err := s.persistence.unarchiveMetric(fp)
    		if err != nil {
    			log.Errorf("Error unarchiving fingerprint %v (metric %v): %v", fp, m, err)
    		}
    		if unarchived {
    			s.seriesOps.WithLabelValues(unarchive).Inc()
    			// We have to load chunkDescs anyway to do anything with
    			// the series, so let's do it right now so that we don't
    			// end up with a series without any chunkDescs for a
    			// while (which is confusing as it makes the series
    			// appear as archived or purged).
    			cds, err = s.loadChunkDescs(fp, 0)
    			if err != nil {
    				log.Errorf("Error loading chunk descs for fingerprint %v (metric %v): %v", fp, m, err)
    			}
    			modTime = s.persistence.seriesFileModTime(fp)
    		} else {
    			// This was a genuinely new series, so index the metric.
    			s.persistence.indexMetric(fp, m)
    			s.seriesOps.WithLabelValues(create).Inc()
    		}
    		series = newMemorySeries(m, cds, modTime)
    		s.fpToSeries.put(fp, series)
    		s.numSeries.Inc()
    	}
    	return series
    }
    
    func (s *memorySeriesStorage) preloadChunksForRange(
    	fp model.Fingerprint,
    	from model.Time, through model.Time,
    	stalenessDelta time.Duration,
    ) ([]*chunkDesc, error) {
    	s.fpLocker.Lock(fp)
    	defer s.fpLocker.Unlock(fp)
    
    	series, ok := s.fpToSeries.get(fp)
    	if !ok {
    		has, first, last, err := s.persistence.hasArchivedMetric(fp)
    		if err != nil {
    			return nil, err
    		}
    		if !has {
    			s.invalidPreloadRequestsCount.Inc()
    			return nil, nil
    		}
    		if from.Add(-stalenessDelta).Before(last) && through.Add(stalenessDelta).After(first) {
    			metric, err := s.persistence.archivedMetric(fp)
    			if err != nil {
    				return nil, err
    			}
    			series = s.getOrCreateSeries(fp, metric)
    		} else {
    			return nil, nil
    		}
    	}
    	return series.preloadChunksForRange(from, through, fp, s)
    }
    
    func (s *memorySeriesStorage) handleEvictList() {
    	ticker := time.NewTicker(maxEvictInterval)
    	count := 0
    
    	for {
    		// To batch up evictions a bit, this tries evictions at least
    		// once per evict interval, but earlier if the number of evict
    		// requests with evict==true that have happened since the last
    		// evict run is more than maxMemoryChunks/1000.
    		select {
    		case req := <-s.evictRequests:
    			if req.evict {
    				req.cd.evictListElement = s.evictList.PushBack(req.cd)
    				count++
    				if count > s.maxMemoryChunks/1000 {
    					s.maybeEvict()
    					count = 0
    				}
    			} else {
    				if req.cd.evictListElement != nil {
    					s.evictList.Remove(req.cd.evictListElement)
    					req.cd.evictListElement = nil
    				}
    			}
    		case <-ticker.C:
    			if s.evictList.Len() > 0 {
    				s.maybeEvict()
    			}
    		case <-s.evictStopping:
    			// Drain evictRequests forever in a goroutine to not let
    			// requesters hang.
    			go func() {
    				for {
    					<-s.evictRequests
    				}
    			}()
    			ticker.Stop()
    			log.Info("Chunk eviction stopped.")
    			close(s.evictStopped)
    			return
    		}
    	}
    }
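
// An illustrative example (hypothetical numbers): with maxMemoryChunks =
// 1,000,000, an eviction pass runs once more than 1,000 evictable chunks
// (maxMemoryChunks/1000) have queued up since the last pass, and at the
// latest after maxEvictInterval (one minute) if the evict list is non-empty.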
    
    // maybeEvict is a local helper method. Must only be called by handleEvictList.
    func (s *memorySeriesStorage) maybeEvict() {
    	numChunksToEvict := int(atomic.LoadInt64(&numMemChunks)) - s.maxMemoryChunks
    	if numChunksToEvict <= 0 {
    		return
    	}
    	chunkDescsToEvict := make([]*chunkDesc, numChunksToEvict)
    	for i := range chunkDescsToEvict {
    		e := s.evictList.Front()
    		if e == nil {
    			break
    		}
    		cd := e.Value.(*chunkDesc)
    		cd.evictListElement = nil
    		chunkDescsToEvict[i] = cd
    		s.evictList.Remove(e)
    	}
    	// Do the actual eviction in a goroutine as we might otherwise deadlock,
    	// in the following way: A chunk was unpinned completely and therefore
    	// scheduled for eviction. At the time we actually try to evict it,
    	// another goroutine is pinning the chunk. The pinning goroutine has
    	// currently locked the chunk and tries to send the evict request (to
    	// remove the chunk from the evict list) to the evictRequests
    	// channel. The send blocks because evictRequests is full. However, the
    	// goroutine that is supposed to empty the channel is waiting for the
    	// chunkDesc lock to try to evict the chunk.
    	go func() {
    		for _, cd := range chunkDescsToEvict {
    			if cd == nil {
    				break
    			}
    			cd.maybeEvict()
    			// We don't care if the eviction succeeds. If the chunk
    			// was pinned in the meantime, it will be added to the
    			// evict list once it gets unpinned again.
    		}
    	}()
    }
    
    // waitForNextFP waits an estimated duration, after which we want to process
    // another fingerprint so that we will process all fingerprints in a tenth of
    // s.dropAfter assuming that the system is doing nothing else, e.g. if we want
    // to drop chunks after 40h, we want to cycle through all fingerprints within
// 4h. The estimation is based on the total number of fingerprints as passed
    // in. However, the maximum sweep time is capped at fpMaxSweepTime. Also, the
    // method will never wait for longer than fpMaxWaitDuration.
    //
    // The maxWaitDurationFactor can be used to reduce the waiting time if a faster
    // processing is required (for example because unpersisted chunks pile up too
    // much).
    //
    // Normally, the method returns true once the wait duration has passed. However,
    // if s.loopStopped is closed, it will return false immediately.
    func (s *memorySeriesStorage) waitForNextFP(numberOfFPs int, maxWaitDurationFactor float64) bool {
    	d := fpMaxWaitDuration
    	if numberOfFPs != 0 {
    		sweepTime := s.dropAfter / 10
    		if sweepTime > fpMaxSweepTime {
    			sweepTime = fpMaxSweepTime
    		}
    		calculatedWait := time.Duration(float64(sweepTime) / float64(numberOfFPs) * maxWaitDurationFactor)
    		if calculatedWait < d {
    			d = calculatedWait
    		}
    	}
    	if d == 0 {
    		return true
    	}
    	t := time.NewTimer(d)
    	select {
    	case <-t.C:
    		return true
    	case <-s.loopStopping:
    		return false
    	}
    }
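
// A worked example of the wait calculation (hypothetical numbers): with
// s.dropAfter = 40h, the target sweep time is 4h (under the 6h cap of
// fpMaxSweepTime). For 100,000 fingerprints and a maxWaitDurationFactor of
// 1, the calculated wait is 4h / 100,000 = 144ms per fingerprint. With only
// 100 fingerprints, the calculation would yield 2.4 minutes, so the 10s
// fpMaxWaitDuration applies instead.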
    
    // cycleThroughMemoryFingerprints returns a channel that emits fingerprints for
    // series in memory in a throttled fashion. It continues to cycle through all
    // fingerprints in memory until s.loopStopping is closed.
    func (s *memorySeriesStorage) cycleThroughMemoryFingerprints() chan model.Fingerprint {
    	memoryFingerprints := make(chan model.Fingerprint)
    	go func() {
    		var fpIter <-chan model.Fingerprint
    
    		defer func() {
    			if fpIter != nil {
    				for range fpIter {
    					// Consume the iterator.
    				}
    			}
    			close(memoryFingerprints)
    		}()
    
    		for {
    			// Initial wait, also important if there are no FPs yet.
    			if !s.waitForNextFP(s.fpToSeries.length(), 1) {
    				return
    			}
    			begin := time.Now()
    			fpIter = s.fpToSeries.fpIter()
    			count := 0
    			for fp := range fpIter {
    				select {
    				case memoryFingerprints <- fp:
    				case <-s.loopStopping:
    					return
    				}
    				// Reduce the wait time by the backlog score.
    				s.waitForNextFP(s.fpToSeries.length(), s.persistenceBacklogScore())
    				count++
    			}
    			if count > 0 {
    				log.Infof(
    					"Completed maintenance sweep through %d in-memory fingerprints in %v.",
    					count, time.Since(begin),
    				)
    			}
    		}
    	}()
    
    	return memoryFingerprints
    }
    
    // cycleThroughArchivedFingerprints returns a channel that emits fingerprints
    // for archived series in a throttled fashion. It continues to cycle through all
    // archived fingerprints until s.loopStopping is closed.
    func (s *memorySeriesStorage) cycleThroughArchivedFingerprints() chan model.Fingerprint {
    	archivedFingerprints := make(chan model.Fingerprint)
    	go func() {
    		defer close(archivedFingerprints)
    
    		for {
    			archivedFPs, err := s.persistence.fingerprintsModifiedBefore(
    				model.Now().Add(-s.dropAfter),
    			)
    			if err != nil {
    				log.Error("Failed to lookup archived fingerprint ranges: ", err)
    				s.waitForNextFP(0, 1)
    				continue
    			}
    			// Initial wait, also important if there are no FPs yet.
    			if !s.waitForNextFP(len(archivedFPs), 1) {
    				return
    			}
    			begin := time.Now()
    			for _, fp := range archivedFPs {
    				select {
    				case archivedFingerprints <- fp:
    				case <-s.loopStopping:
    					return
    				}
    				// Never speed up maintenance of archived FPs.
    				s.waitForNextFP(len(archivedFPs), 1)
    			}
    			if len(archivedFPs) > 0 {
    				log.Infof(
    					"Completed maintenance sweep through %d archived fingerprints in %v.",
    					len(archivedFPs), time.Since(begin),
    				)
    			}
    		}
    	}()
    	return archivedFingerprints
    }
    
    func (s *memorySeriesStorage) loop() {
    	checkpointTimer := time.NewTimer(s.checkpointInterval)
    
    	dirtySeriesCount := 0
    
    	defer func() {
    		checkpointTimer.Stop()
    		log.Info("Maintenance loop stopped.")
    		close(s.loopStopped)
    	}()
    
    	memoryFingerprints := s.cycleThroughMemoryFingerprints()
    	archivedFingerprints := s.cycleThroughArchivedFingerprints()
    
    loop:
    	for {
    		select {
    		case <-s.loopStopping:
    			break loop
    		case <-checkpointTimer.C:
    			err := s.persistence.checkpointSeriesMapAndHeads(s.fpToSeries, s.fpLocker)
    			if err != nil {
    				log.Errorln("Error while checkpointing:", err)
    			} else {
    				dirtySeriesCount = 0
    			}
    			checkpointTimer.Reset(s.checkpointInterval)
    		case fp := <-memoryFingerprints:
    			if s.maintainMemorySeries(fp, model.Now().Add(-s.dropAfter)) {
    				dirtySeriesCount++
    				// Check if we have enough "dirty" series so that we need an early checkpoint.
    				// However, if we are already behind persisting chunks, creating a checkpoint
    				// would be counterproductive, as it would slow down chunk persisting even more,
    				// while in a situation like that, where we are clearly lacking speed of disk
    				// maintenance, the best we can do for crash recovery is to persist chunks as
    				// quickly as possible. So only checkpoint if the storage is not in "graceful
    				// degradation mode".
    				if dirtySeriesCount >= s.checkpointDirtySeriesLimit && !s.isDegraded() {
    					checkpointTimer.Reset(0)
    				}
    			}
    		case fp := <-archivedFingerprints:
    			s.maintainArchivedSeries(fp, model.Now().Add(-s.dropAfter))
    		}
    	}
    	// Wait until both channels are closed.
    	for range memoryFingerprints {
    	}
    	for range archivedFingerprints {
    	}
    }
    
    // maintainMemorySeries maintains a series that is in memory (i.e. not
// archived). It returns true if the series has changed from clean to dirty
// (i.e. it is now inconsistent with the latest checkpoint, so that in case of
// a crash a recovery operation requiring a disk seek would need to be applied).
    //
    // The method first closes the head chunk if it was not touched for the duration
    // of headChunkTimeout.
    //
    // Then it determines the chunks that need to be purged and the chunks that need
    // to be persisted. Depending on the result, it does the following:
    //
    // - If all chunks of a series need to be purged, the whole series is deleted
    // for good and the method returns false. (Detecting non-existence of a series
    // file does not require a disk seek.)
    //
    // - If any chunks need to be purged (but not all of them), it purges those
    // chunks from memory and rewrites the series file on disk, leaving out the
    // purged chunks and appending all chunks not yet persisted (with the exception
    // of a still open head chunk).
    //
    // - If no chunks on disk need to be purged, but chunks need to be persisted,
    // those chunks are simply appended to the existing series file (or the file is
    // created if it does not exist yet).
    //
    // - If no chunks need to be purged and no chunks need to be persisted, nothing
    // happens in this step.
    //
    // Next, the method checks if all chunks in the series are evicted. In that
    // case, it archives the series and returns true.
    //
    // Finally, it evicts chunkDescs if there are too many.
    func (s *memorySeriesStorage) maintainMemorySeries(
    	fp model.Fingerprint, beforeTime model.Time,
    ) (becameDirty bool) {
    	defer func(begin time.Time) {
    		s.maintainSeriesDuration.WithLabelValues(maintainInMemory).Observe(
    			float64(time.Since(begin)) / float64(time.Millisecond),
    		)
    	}(time.Now())
    
    	s.fpLocker.Lock(fp)
    	defer s.fpLocker.Unlock(fp)
    
    	series, ok := s.fpToSeries.get(fp)
    	if !ok {
    		// Series is actually not in memory, perhaps archived or dropped in the meantime.
    		return false
    	}
    
    	defer s.seriesOps.WithLabelValues(memoryMaintenance).Inc()
    
    	if series.maybeCloseHeadChunk() {
    		s.incNumChunksToPersist(1)
    	}
    
    	seriesWasDirty := series.dirty
    
    	if s.writeMemorySeries(fp, series, beforeTime) {
    		// Series is gone now, we are done.
    		return false
    	}
    
    	iOldestNotEvicted := -1
    	for i, cd := range series.chunkDescs {
    		if !cd.isEvicted() {
    			iOldestNotEvicted = i
    			break
    		}
    	}
    
    	// Archive if all chunks are evicted.
    	if iOldestNotEvicted == -1 {
    		s.fpToSeries.del(fp)
    		s.numSeries.Dec()
    		if err := s.persistence.archiveMetric(
    			fp, series.metric, series.firstTime(), series.lastTime,
    		); err != nil {
    			log.Errorf("Error archiving metric %v: %v", series.metric, err)
    			return
    		}
    		s.seriesOps.WithLabelValues(archive).Inc()
    		return
    	}
    	// If we are here, the series is not archived, so check for chunkDesc
    	// eviction next.
    	series.evictChunkDescs(iOldestNotEvicted)
    
    	return series.dirty && !seriesWasDirty
    }
    
    // writeMemorySeries (re-)writes a memory series file. While doing so, it drops
    // chunks older than beforeTime from both the series file (if it exists) as well
    // as from memory. The provided chunksToPersist are appended to the newly
    // written series file. If no chunks need to be purged, but chunksToPersist is
    // not empty, those chunks are simply appended to the series file. If the series
    // contains no chunks after dropping old chunks, it is purged entirely. In that
    // case, the method returns true.
    //
    // The caller must have locked the fp.
    func (s *memorySeriesStorage) writeMemorySeries(
    	fp model.Fingerprint, series *memorySeries, beforeTime model.Time,
    ) bool {
    	cds := series.chunksToPersist()
    	defer func() {
    		for _, cd := range cds {
    			cd.unpin(s.evictRequests)
    		}
    		s.incNumChunksToPersist(-len(cds))
    		chunkOps.WithLabelValues(persistAndUnpin).Add(float64(len(cds)))
    		series.modTime = s.persistence.seriesFileModTime(fp)
    	}()
    
    	// Get the actual chunks from underneath the chunkDescs.
    	// No lock required as chunks still to persist cannot be evicted.
    	chunks := make([]chunk, len(cds))
    	for i, cd := range cds {
    		chunks[i] = cd.c
    	}
    
    	if !series.firstTime().Before(beforeTime) {
    		// Oldest sample not old enough, just append chunks, if any.
    		if len(cds) == 0 {
    			return false
    		}
    		offset, err := s.persistence.persistChunks(fp, chunks)
    		if err != nil {
    			s.persistErrors.Inc()
    			return false
    		}
    		if series.chunkDescsOffset == -1 {
    			// This is the first chunk persisted for a newly created
    			// series that had prior chunks on disk. Finally, we can
    			// set the chunkDescsOffset.
    			series.chunkDescsOffset = offset
    		}
    		return false
    	}
    
    	newFirstTime, offset, numDroppedFromPersistence, allDroppedFromPersistence, err :=
    		s.persistence.dropAndPersistChunks(fp, beforeTime, chunks)
    	if err != nil {
    		s.persistErrors.Inc()
    		return false
    	}
    	series.dropChunks(beforeTime)
    	if len(series.chunkDescs) == 0 && allDroppedFromPersistence {
    		// All chunks dropped from both memory and persistence. Delete the series for good.
    		s.fpToSeries.del(fp)
    		s.numSeries.Dec()
    		s.seriesOps.WithLabelValues(memoryPurge).Inc()
    		s.persistence.unindexMetric(fp, series.metric)
    		return true
    	}
    	series.savedFirstTime = newFirstTime
    	if series.chunkDescsOffset == -1 {
    		series.chunkDescsOffset = offset
    	} else {
    		series.chunkDescsOffset -= numDroppedFromPersistence
    		if series.chunkDescsOffset < 0 {
    			log.Errorf("Dropped more chunks from persistence than from memory for fingerprint %v, series %v.", fp, series)
    			s.persistence.setDirty(true)
    			series.chunkDescsOffset = -1 // Makes sure it will be looked at during crash recovery.
    		}
    	}
    	return false
    }
    
    // maintainArchivedSeries drops chunks older than beforeTime from an archived
    // series. If the series contains no chunks after that, it is purged entirely.
    func (s *memorySeriesStorage) maintainArchivedSeries(fp model.Fingerprint, beforeTime model.Time) {
    	defer func(begin time.Time) {
    		s.maintainSeriesDuration.WithLabelValues(maintainArchived).Observe(
    			float64(time.Since(begin)) / float64(time.Millisecond),
    		)
    	}(time.Now())
    
    	s.fpLocker.Lock(fp)
    	defer s.fpLocker.Unlock(fp)
    
    	has, firstTime, lastTime, err := s.persistence.hasArchivedMetric(fp)
    	if err != nil {
    		log.Error("Error looking up archived time range: ", err)
    		return
    	}
    	if !has || !firstTime.Before(beforeTime) {
    		// Oldest sample not old enough, or metric purged or unarchived in the meantime.
    		return
    	}
    
    	defer s.seriesOps.WithLabelValues(archiveMaintenance).Inc()
    
    	newFirstTime, _, _, allDropped, err := s.persistence.dropAndPersistChunks(fp, beforeTime, nil)
    	if err != nil {
    		log.Error("Error dropping persisted chunks: ", err)
    	}
    	if allDropped {
    		if err := s.persistence.purgeArchivedMetric(fp); err != nil {
    			log.Errorf("Error purging archived metric for fingerprint %v: %v", fp, err)
    			return
    		}
    		s.seriesOps.WithLabelValues(archivePurge).Inc()
    		return
    	}
    	if err := s.persistence.updateArchivedTimeRange(fp, newFirstTime, lastTime); err != nil {
    		log.Errorf("Error updating archived time range for fingerprint %v: %s", fp, err)
    	}
    }
    
    // See persistence.loadChunks for detailed explanation.
    func (s *memorySeriesStorage) loadChunks(fp model.Fingerprint, indexes []int, indexOffset int) ([]chunk, error) {
    	return s.persistence.loadChunks(fp, indexes, indexOffset)
    }
    
    // See persistence.loadChunkDescs for detailed explanation.
    func (s *memorySeriesStorage) loadChunkDescs(fp model.Fingerprint, offsetFromEnd int) ([]*chunkDesc, error) {
    	return s.persistence.loadChunkDescs(fp, offsetFromEnd)
    }
    
    // getNumChunksToPersist returns numChunksToPersist in a goroutine-safe way.
    func (s *memorySeriesStorage) getNumChunksToPersist() int {
    	return int(atomic.LoadInt64(&s.numChunksToPersist))
    }
    
    // incNumChunksToPersist increments numChunksToPersist in a goroutine-safe way. Use a
    // negative 'by' to decrement.
    func (s *memorySeriesStorage) incNumChunksToPersist(by int) {
    	atomic.AddInt64(&s.numChunksToPersist, int64(by))
    }
    
    // isDegraded returns whether the storage is in "graceful degradation mode",
    // which is the case if the number of chunks waiting for persistence has reached
    // a percentage of maxChunksToPersist that exceeds
    // percentChunksToPersistForDegradation. The method is not goroutine safe (but
    // only ever called from the goroutine dealing with series maintenance).
    // Changes of degradation mode are logged.
    func (s *memorySeriesStorage) isDegraded() bool {
    	nowDegraded := s.getNumChunksToPersist() > s.maxChunksToPersist*percentChunksToPersistForDegradation/100
    	if s.degraded && !nowDegraded {
    		log.Warn("Storage has left graceful degradation mode. Things are back to normal.")
    	} else if !s.degraded && nowDegraded {
    		log.Warnf(
    			"%d chunks waiting for persistence (%d%% of the allowed maximum %d). Storage is now in graceful degradation mode. Series files are not synced anymore if following the adaptive strategy. Checkpoints are not performed more often than every %v. Series maintenance happens as frequently as possible.",
    			s.getNumChunksToPersist(),
    			s.getNumChunksToPersist()*100/s.maxChunksToPersist,
    			s.maxChunksToPersist,
    			s.checkpointInterval)
    	}
    	s.degraded = nowDegraded
    	return s.degraded
    }
    
// persistenceBacklogScore works similarly to isDegraded, but returns a score
    // about how close we are to degradation. This score is 1.0 if no chunks are
    // waiting for persistence and 0.0 if we are at or above the degradation
    // threshold.
    func (s *memorySeriesStorage) persistenceBacklogScore() float64 {
    	score := 1 - float64(s.getNumChunksToPersist())/float64(s.maxChunksToPersist*percentChunksToPersistForDegradation/100)
    	if score < 0 {
    		return 0
    	}
    	return score
    }
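
// A worked example (hypothetical numbers): with maxChunksToPersist =
// 1,000,000, the degradation threshold is 800,000 chunks. At 200,000 waiting
// chunks, the score is 1 - 200,000/800,000 = 0.75; at 800,000 or more it is
// clamped to 0. cycleThroughMemoryFingerprints feeds this score into
// waitForNextFP to shorten the per-fingerprint wait as the backlog grows.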
    
    // Describe implements prometheus.Collector.
    func (s *memorySeriesStorage) Describe(ch chan<- *prometheus.Desc) {
    	s.persistence.Describe(ch)
    	s.mapper.Describe(ch)
    
    	ch <- s.persistErrors.Desc()
    	ch <- maxChunksToPersistDesc
    	ch <- numChunksToPersistDesc
    	ch <- s.numSeries.Desc()
    	s.seriesOps.Describe(ch)
    	ch <- s.ingestedSamplesCount.Desc()
    	ch <- s.outOfOrderSamplesCount.Desc()
    	ch <- s.invalidPreloadRequestsCount.Desc()
    	ch <- numMemChunksDesc
    	s.maintainSeriesDuration.Describe(ch)
    }
    
    // Collect implements prometheus.Collector.
    func (s *memorySeriesStorage) Collect(ch chan<- prometheus.Metric) {
    	s.persistence.Collect(ch)
    	s.mapper.Collect(ch)
    
    	ch <- s.persistErrors
    	ch <- prometheus.MustNewConstMetric(
    		maxChunksToPersistDesc,
    		prometheus.GaugeValue,
    		float64(s.maxChunksToPersist),
    	)
    	ch <- prometheus.MustNewConstMetric(
    		numChunksToPersistDesc,
    		prometheus.GaugeValue,
    		float64(s.getNumChunksToPersist()),
    	)
    	ch <- s.numSeries
    	s.seriesOps.Collect(ch)
    	ch <- s.ingestedSamplesCount
    	ch <- s.outOfOrderSamplesCount
    	ch <- s.invalidPreloadRequestsCount
    	ch <- prometheus.MustNewConstMetric(
    		numMemChunksDesc,
    		prometheus.GaugeValue,
    		float64(atomic.LoadInt64(&numMemChunks)),
    	)
    	s.maintainSeriesDuration.Collect(ch)
    }
prometheus-0.16.2+ds/storage/local/storage_test.go
// Copyright 2014 The Prometheus Authors
    // Licensed under the Apache License, Version 2.0 (the "License");
    // you may not use this file except in compliance with the License.
    // You may obtain a copy of the License at
    //
    // http://www.apache.org/licenses/LICENSE-2.0
    //
    // Unless required by applicable law or agreed to in writing, software
    // distributed under the License is distributed on an "AS IS" BASIS,
    // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    // See the License for the specific language governing permissions and
    // limitations under the License.
    
    package local
    
    import (
    	"fmt"
    	"hash/fnv"
    	"math/rand"
    	"os"
    	"reflect"
    	"testing"
    	"testing/quick"
    	"time"
    
    	"github.com/prometheus/common/log"
    	"github.com/prometheus/common/model"
    
    	"github.com/prometheus/prometheus/storage/metric"
    	"github.com/prometheus/prometheus/util/testutil"
    )
    
    func TestMatches(t *testing.T) {
    	storage, closer := NewTestStorage(t, 1)
    	defer closer.Close()
    
    	samples := make([]*model.Sample, 100)
    	fingerprints := make(model.Fingerprints, 100)
    
    	for i := range samples {
    		metric := model.Metric{
    			model.MetricNameLabel: model.LabelValue(fmt.Sprintf("test_metric_%d", i)),
    			"label1":              model.LabelValue(fmt.Sprintf("test_%d", i/10)),
    			"label2":              model.LabelValue(fmt.Sprintf("test_%d", (i+5)/10)),
    			"all":                 "const",
    		}
    		samples[i] = &model.Sample{
    			Metric:    metric,
    			Timestamp: model.Time(i),
    			Value:     model.SampleValue(i),
    		}
    		fingerprints[i] = metric.FastFingerprint()
    	}
    	for _, s := range samples {
    		storage.Append(s)
    	}
    	storage.WaitForIndexing()
    
    	newMatcher := func(matchType metric.MatchType, name model.LabelName, value model.LabelValue) *metric.LabelMatcher {
    		lm, err := metric.NewLabelMatcher(matchType, name, value)
    		if err != nil {
    			t.Fatalf("error creating label matcher: %s", err)
    		}
    		return lm
    	}
    
    	var matcherTests = []struct {
    		matchers metric.LabelMatchers
    		expected model.Fingerprints
    	}{
    		{
    			matchers: metric.LabelMatchers{newMatcher(metric.Equal, "label1", "x")},
    			expected: model.Fingerprints{},
    		},
    		{
    			matchers: metric.LabelMatchers{newMatcher(metric.Equal, "label1", "test_0")},
    			expected: fingerprints[:10],
    		},
    		{
    			matchers: metric.LabelMatchers{
    				newMatcher(metric.Equal, "label1", "test_0"),
    				newMatcher(metric.Equal, "label2", "test_1"),
    			},
    			expected: fingerprints[5:10],
    		},
    		{
    			matchers: metric.LabelMatchers{
    				newMatcher(metric.Equal, "all", "const"),
    				newMatcher(metric.NotEqual, "label1", "x"),
    			},
    			expected: fingerprints,
    		},
    		{
    			matchers: metric.LabelMatchers{
    				newMatcher(metric.Equal, "all", "const"),
    				newMatcher(metric.NotEqual, "label1", "test_0"),
    			},
    			expected: fingerprints[10:],
    		},
    		{
    			matchers: metric.LabelMatchers{
    				newMatcher(metric.Equal, "all", "const"),
    				newMatcher(metric.NotEqual, "label1", "test_0"),
    				newMatcher(metric.NotEqual, "label1", "test_1"),
    				newMatcher(metric.NotEqual, "label1", "test_2"),
    			},
    			expected: fingerprints[30:],
    		},
    		{
    			matchers: metric.LabelMatchers{
    				newMatcher(metric.Equal, "label1", ""),
    			},
    			expected: fingerprints[:0],
    		},
    		{
    			matchers: metric.LabelMatchers{
    				newMatcher(metric.NotEqual, "label1", "test_0"),
    				newMatcher(metric.Equal, "label1", ""),
    			},
    			expected: fingerprints[:0],
    		},
    		{
    			matchers: metric.LabelMatchers{
    				newMatcher(metric.NotEqual, "label1", "test_0"),
    				newMatcher(metric.Equal, "label2", ""),
    			},
    			expected: fingerprints[:0],
    		},
    		{
    			matchers: metric.LabelMatchers{
    				newMatcher(metric.Equal, "all", "const"),
    				newMatcher(metric.NotEqual, "label1", "test_0"),
    				newMatcher(metric.Equal, "not_existant", ""),
    			},
    			expected: fingerprints[10:],
    		},
    		{
    			matchers: metric.LabelMatchers{
    				newMatcher(metric.RegexMatch, "label1", `test_[3-5]`),
    			},
    			expected: fingerprints[30:60],
    		},
    		{
    			matchers: metric.LabelMatchers{
    				newMatcher(metric.Equal, "all", "const"),
    				newMatcher(metric.RegexNoMatch, "label1", `test_[3-5]`),
    			},
    			expected: append(append(model.Fingerprints{}, fingerprints[:30]...), fingerprints[60:]...),
    		},
    		{
    			matchers: metric.LabelMatchers{
    				newMatcher(metric.RegexMatch, "label1", `test_[3-5]`),
    				newMatcher(metric.RegexMatch, "label2", `test_[4-6]`),
    			},
    			expected: fingerprints[35:60],
    		},
    		{
    			matchers: metric.LabelMatchers{
    				newMatcher(metric.RegexMatch, "label1", `test_[3-5]`),
    				newMatcher(metric.NotEqual, "label2", `test_4`),
    			},
    			expected: append(append(model.Fingerprints{}, fingerprints[30:35]...), fingerprints[45:60]...),
    		},
    		{
    			matchers: metric.LabelMatchers{
    				newMatcher(metric.Equal, "label1", `nonexistent`),
    				newMatcher(metric.RegexMatch, "label2", `test`),
    			},
    			expected: model.Fingerprints{},
    		},
    		{
    			matchers: metric.LabelMatchers{
    				newMatcher(metric.Equal, "label1", `test_0`),
    				newMatcher(metric.RegexMatch, "label2", `nonexistent`),
    			},
    			expected: model.Fingerprints{},
    		},
    	}
    
    	for _, mt := range matcherTests {
    		res := storage.MetricsForLabelMatchers(mt.matchers...)
    		if len(mt.expected) != len(res) {
    			t.Fatalf("expected %d matches for %q, found %d", len(mt.expected), mt.matchers, len(res))
    		}
    		for fp1 := range res {
    			found := false
    			for _, fp2 := range mt.expected {
    				if fp1 == fp2 {
    					found = true
    					break
    				}
    			}
    			if !found {
    				t.Errorf("expected fingerprint %s for %q not in result", fp1, mt.matchers)
    			}
    		}
    	}
    }
    
    func TestFingerprintsForLabels(t *testing.T) {
    	storage, closer := NewTestStorage(t, 1)
    	defer closer.Close()
    
    	samples := make([]*model.Sample, 100)
    	fingerprints := make(model.Fingerprints, 100)
    
    	for i := range samples {
    		metric := model.Metric{
    			model.MetricNameLabel: model.LabelValue(fmt.Sprintf("test_metric_%d", i)),
    			"label1":              model.LabelValue(fmt.Sprintf("test_%d", i/10)),
    			"label2":              model.LabelValue(fmt.Sprintf("test_%d", (i+5)/10)),
    		}
    		samples[i] = &model.Sample{
    			Metric:    metric,
    			Timestamp: model.Time(i),
    			Value:     model.SampleValue(i),
    		}
    		fingerprints[i] = metric.FastFingerprint()
    	}
    	for _, s := range samples {
    		storage.Append(s)
    	}
    	storage.WaitForIndexing()
    
    	var matcherTests = []struct {
    		pairs    []model.LabelPair
    		expected model.Fingerprints
    	}{
    		{
    			pairs:    []model.LabelPair{{"label1", "x"}},
    			expected: fingerprints[:0],
    		},
    		{
    			pairs:    []model.LabelPair{{"label1", "test_0"}},
    			expected: fingerprints[:10],
    		},
    		{
    			pairs: []model.LabelPair{
    				{"label1", "test_0"},
    				{"label1", "test_1"},
    			},
    			expected: fingerprints[:0],
    		},
    		{
    			pairs: []model.LabelPair{
    				{"label1", "test_0"},
    				{"label2", "test_1"},
    			},
    			expected: fingerprints[5:10],
    		},
    		{
    			pairs: []model.LabelPair{
    				{"label1", "test_1"},
    				{"label2", "test_2"},
    			},
    			expected: fingerprints[15:20],
    		},
    	}
    
    	for _, mt := range matcherTests {
    		resfps := storage.fingerprintsForLabelPairs(mt.pairs...)
    		if len(mt.expected) != len(resfps) {
    			t.Fatalf("expected %d matches for %q, found %d", len(mt.expected), mt.pairs, len(resfps))
    		}
    		for fp1 := range resfps {
    			found := false
    			for _, fp2 := range mt.expected {
    				if fp1 == fp2 {
    					found = true
    					break
    				}
    			}
    			if !found {
    				t.Errorf("expected fingerprint %s for %q not in result", fp1, mt.pairs)
    			}
    		}
    	}
    }
    
    var benchLabelMatchingRes map[model.Fingerprint]metric.Metric
    
    func BenchmarkLabelMatching(b *testing.B) {
    	s, closer := NewTestStorage(b, 1)
    	defer closer.Close()
    
    	h := fnv.New64a()
    	lbl := func(x int) model.LabelValue {
    		h.Reset()
    		h.Write([]byte(fmt.Sprintf("%d", x)))
    		return model.LabelValue(fmt.Sprintf("%d", h.Sum64()))
    	}
    
    	M := 32
    	met := model.Metric{}
    	for i := 0; i < M; i++ {
    		met["label_a"] = lbl(i)
    		for j := 0; j < M; j++ {
    			met["label_b"] = lbl(j)
    			for k := 0; k < M; k++ {
    				met["label_c"] = lbl(k)
    				for l := 0; l < M; l++ {
    					met["label_d"] = lbl(l)
    					s.Append(&model.Sample{
    						Metric:    met.Clone(),
    						Timestamp: 0,
    						Value:     1,
    					})
    				}
    			}
    		}
    	}
    	s.WaitForIndexing()
    
    	newMatcher := func(matchType metric.MatchType, name model.LabelName, value model.LabelValue) *metric.LabelMatcher {
    		lm, err := metric.NewLabelMatcher(matchType, name, value)
    		if err != nil {
    			b.Fatalf("error creating label matcher: %s", err)
    		}
    		return lm
    	}
    
    	var matcherTests = []metric.LabelMatchers{
    		{
    			newMatcher(metric.Equal, "label_a", lbl(1)),
    		},
    		{
    			newMatcher(metric.Equal, "label_a", lbl(3)),
    			newMatcher(metric.Equal, "label_c", lbl(3)),
    		},
    		{
    			newMatcher(metric.Equal, "label_a", lbl(3)),
    			newMatcher(metric.Equal, "label_c", lbl(3)),
    			newMatcher(metric.NotEqual, "label_d", lbl(3)),
    		},
    		{
    			newMatcher(metric.Equal, "label_a", lbl(3)),
    			newMatcher(metric.Equal, "label_b", lbl(3)),
    			newMatcher(metric.Equal, "label_c", lbl(3)),
    			newMatcher(metric.NotEqual, "label_d", lbl(3)),
    		},
    		{
    			newMatcher(metric.RegexMatch, "label_a", ".+"),
    		},
    		{
    			newMatcher(metric.Equal, "label_a", lbl(3)),
    			newMatcher(metric.RegexMatch, "label_a", ".+"),
    		},
    		{
    			newMatcher(metric.Equal, "label_a", lbl(1)),
    			newMatcher(metric.RegexMatch, "label_c", "("+lbl(3)+"|"+lbl(10)+")"),
    		},
    		{
    			newMatcher(metric.Equal, "label_a", lbl(3)),
    			newMatcher(metric.Equal, "label_a", lbl(4)),
    			newMatcher(metric.RegexMatch, "label_c", "("+lbl(3)+"|"+lbl(10)+")"),
    		},
    	}
    
    	b.ReportAllocs()
    	b.ResetTimer()
    
    	for i := 0; i < b.N; i++ {
    		benchLabelMatchingRes = map[model.Fingerprint]metric.Metric{}
    		for _, mt := range matcherTests {
    			benchLabelMatchingRes = s.MetricsForLabelMatchers(mt...)
    		}
    	}
    	// Stop timer to not count the storage closing.
    	b.StopTimer()
    }
    
    func TestRetentionCutoff(t *testing.T) {
    	now := model.Now()
    	insertStart := now.Add(-2 * time.Hour)
    
    	s, closer := NewTestStorage(t, 1)
    	defer closer.Close()
    
    	// Stop maintenance loop to prevent actual purging.
    	s.loopStopping <- struct{}{}
    
    	s.dropAfter = 1 * time.Hour
    
    	for i := 0; i < 120; i++ {
    		smpl := &model.Sample{
    			Metric:    model.Metric{"job": "test"},
    			Timestamp: insertStart.Add(time.Duration(i) * time.Minute), // 1 minute intervals.
    			Value:     1,
    		}
    		s.Append(smpl)
    	}
    	s.WaitForIndexing()
    
    	var fp model.Fingerprint
    	for f := range s.fingerprintsForLabelPairs(model.LabelPair{Name: "job", Value: "test"}) {
    		fp = f
    		break
    	}
    
    	pl := s.NewPreloader()
    	defer pl.Close()
    
    	// Preload everything.
    	err := pl.PreloadRange(fp, insertStart, now, 5*time.Minute)
    	if err != nil {
    		t.Fatalf("Error preloading outdated chunks: %s", err)
    	}
    
    	it := s.NewIterator(fp)
    
    	vals := it.ValueAtTime(now.Add(-61 * time.Minute))
    	if len(vals) != 0 {
    		t.Errorf("unexpected result for timestamp before retention period")
    	}
    
    	vals = it.RangeValues(metric.Interval{OldestInclusive: insertStart, NewestInclusive: now})
	// We get 59 values here because model.Now() as used by the storage is
	// slightly later than our recorded now.
    	if len(vals) != 59 {
    		t.Errorf("expected 59 values but got %d", len(vals))
    	}
    	if expt := now.Add(-1 * time.Hour).Add(time.Minute); vals[0].Timestamp != expt {
    		t.Errorf("unexpected timestamp for first sample: %v, expected %v", vals[0].Timestamp.Time(), expt.Time())
    	}
    
    	vals = it.BoundaryValues(metric.Interval{OldestInclusive: insertStart, NewestInclusive: now})
    	if len(vals) != 2 {
    		t.Errorf("expected 2 values but got %d", len(vals))
    	}
    	if expt := now.Add(-1 * time.Hour).Add(time.Minute); vals[0].Timestamp != expt {
    		t.Errorf("unexpected timestamp for first sample: %v, expected %v", vals[0].Timestamp.Time(), expt.Time())
    	}
    }
    
    func TestDropMetrics(t *testing.T) {
    	now := model.Now()
    	insertStart := now.Add(-2 * time.Hour)
    
    	s, closer := NewTestStorage(t, 1)
    	defer closer.Close()
    
    	chunkFileExists := func(fp model.Fingerprint) (bool, error) {
    		f, err := s.persistence.openChunkFileForReading(fp)
    		if err == nil {
    			f.Close()
    			return true, nil
    		}
    		if os.IsNotExist(err) {
    			return false, nil
    		}
    		return false, err
    	}
    
    	m1 := model.Metric{model.MetricNameLabel: "test", "n1": "v1"}
    	m2 := model.Metric{model.MetricNameLabel: "test", "n1": "v2"}
    	m3 := model.Metric{model.MetricNameLabel: "test", "n1": "v3"}
    
    	N := 120000
    
    	for j, m := range []model.Metric{m1, m2, m3} {
    		for i := 0; i < N; i++ {
    			smpl := &model.Sample{
    				Metric:    m,
    				Timestamp: insertStart.Add(time.Duration(i) * time.Millisecond), // 1 millisecond intervals.
    				Value:     model.SampleValue(j),
    			}
    			s.Append(smpl)
    		}
    	}
    	s.WaitForIndexing()
    
    	// Archive m3, but first maintain it so that at least something is written to disk.
    	fpToBeArchived := m3.FastFingerprint()
    	s.maintainMemorySeries(fpToBeArchived, 0)
    	s.fpLocker.Lock(fpToBeArchived)
    	s.fpToSeries.del(fpToBeArchived)
    	if err := s.persistence.archiveMetric(
    		fpToBeArchived, m3, 0, insertStart.Add(time.Duration(N-1)*time.Millisecond),
    	); err != nil {
    		t.Error(err)
    	}
    	s.fpLocker.Unlock(fpToBeArchived)
    
    	fps := s.fingerprintsForLabelPairs(model.LabelPair{Name: model.MetricNameLabel, Value: "test"})
    	if len(fps) != 3 {
    		t.Errorf("unexpected number of fingerprints: %d", len(fps))
    	}
    
    	fpList := model.Fingerprints{m1.FastFingerprint(), m2.FastFingerprint(), fpToBeArchived}
    
    	s.DropMetricsForFingerprints(fpList[0])
    	s.WaitForIndexing()
    
    	fps2 := s.fingerprintsForLabelPairs(model.LabelPair{
    		Name: model.MetricNameLabel, Value: "test",
    	})
    	if len(fps2) != 2 {
    		t.Errorf("unexpected number of fingerprints: %d", len(fps2))
    	}
    
    	it := s.NewIterator(fpList[0])
    	if vals := it.RangeValues(metric.Interval{OldestInclusive: insertStart, NewestInclusive: now}); len(vals) != 0 {
    		t.Errorf("unexpected number of samples: %d", len(vals))
    	}
    	it = s.NewIterator(fpList[1])
    	if vals := it.RangeValues(metric.Interval{OldestInclusive: insertStart, NewestInclusive: now}); len(vals) != N {
    		t.Errorf("unexpected number of samples: %d", len(vals))
    	}
    	exists, err := chunkFileExists(fpList[2])
    	if err != nil {
    		t.Fatal(err)
    	}
    	if !exists {
    		t.Errorf("chunk file does not exist for fp=%v", fpList[2])
    	}
    
    	s.DropMetricsForFingerprints(fpList...)
    	s.WaitForIndexing()
    
    	fps3 := s.fingerprintsForLabelPairs(model.LabelPair{
    		Name: model.MetricNameLabel, Value: "test",
    	})
    	if len(fps3) != 0 {
    		t.Errorf("unexpected number of fingerprints: %d", len(fps3))
    	}
    
    	it = s.NewIterator(fpList[0])
    	if vals := it.RangeValues(metric.Interval{OldestInclusive: insertStart, NewestInclusive: now}); len(vals) != 0 {
    		t.Errorf("unexpected number of samples: %d", len(vals))
    	}
    	it = s.NewIterator(fpList[1])
    	if vals := it.RangeValues(metric.Interval{OldestInclusive: insertStart, NewestInclusive: now}); len(vals) != 0 {
    		t.Errorf("unexpected number of samples: %d", len(vals))
    	}
    	exists, err = chunkFileExists(fpList[2])
    	if err != nil {
    		t.Fatal(err)
    	}
    	if exists {
    		t.Errorf("chunk file still exists for fp=%v", fpList[2])
    	}
    }
    
// TestLoop is just a smoke test for the loop method: check whether we can
// switch it on and off without disaster.
    func TestLoop(t *testing.T) {
    	if testing.Short() {
    		t.Skip("Skipping test in short mode.")
    	}
    	samples := make(model.Samples, 1000)
    	for i := range samples {
    		samples[i] = &model.Sample{
    			Timestamp: model.Time(2 * i),
    			Value:     model.SampleValue(float64(i) * 0.2),
    		}
    	}
    	directory := testutil.NewTemporaryDirectory("test_storage", t)
    	defer directory.Close()
    	o := &MemorySeriesStorageOptions{
    		MemoryChunks:               50,
    		MaxChunksToPersist:         1000000,
    		PersistenceRetentionPeriod: 24 * 7 * time.Hour,
    		PersistenceStoragePath:     directory.Path(),
    		CheckpointInterval:         250 * time.Millisecond,
    		SyncStrategy:               Adaptive,
    		MinShrinkRatio:             0.1,
    	}
    	storage := NewMemorySeriesStorage(o)
    	if err := storage.Start(); err != nil {
    		t.Errorf("Error starting storage: %s", err)
    	}
    	for _, s := range samples {
    		storage.Append(s)
    	}
    	storage.WaitForIndexing()
    	series, _ := storage.(*memorySeriesStorage).fpToSeries.get(model.Metric{}.FastFingerprint())
    	cdsBefore := len(series.chunkDescs)
    	time.Sleep(fpMaxWaitDuration + time.Second) // TODO(beorn7): Ugh, need to wait for maintenance to kick in.
    	cdsAfter := len(series.chunkDescs)
    	storage.Stop()
    	if cdsBefore <= cdsAfter {
    		t.Errorf(
    			"Number of chunk descriptors should have gone down by now. Got before %d, after %d.",
    			cdsBefore, cdsAfter,
    		)
    	}
    }
    
    func testChunk(t *testing.T, encoding chunkEncoding) {
    	samples := make(model.Samples, 500000)
    	for i := range samples {
    		samples[i] = &model.Sample{
    			Timestamp: model.Time(i),
    			Value:     model.SampleValue(float64(i) * 0.2),
    		}
    	}
    	s, closer := NewTestStorage(t, encoding)
    	defer closer.Close()
    
    	for _, sample := range samples {
    		s.Append(sample)
    	}
    	s.WaitForIndexing()
    
    	for m := range s.fpToSeries.iter() {
    		s.fpLocker.Lock(m.fp)
    
    		var values []model.SamplePair
    		for _, cd := range m.series.chunkDescs {
    			if cd.isEvicted() {
    				continue
    			}
    			for sample := range cd.c.newIterator().values() {
    				values = append(values, *sample)
    			}
    		}
    
    		for i, v := range values {
    			if samples[i].Timestamp != v.Timestamp {
    				t.Errorf("%d. Got %v; want %v", i, v.Timestamp, samples[i].Timestamp)
    			}
    			if samples[i].Value != v.Value {
    				t.Errorf("%d. Got %v; want %v", i, v.Value, samples[i].Value)
    			}
    		}
    		s.fpLocker.Unlock(m.fp)
    	}
    	log.Info("test done, closing")
    }
    
    func TestChunkType0(t *testing.T) {
    	testChunk(t, 0)
    }
    
    func TestChunkType1(t *testing.T) {
    	testChunk(t, 1)
    }
    
    func testValueAtTime(t *testing.T, encoding chunkEncoding) {
    	samples := make(model.Samples, 10000)
    	for i := range samples {
    		samples[i] = &model.Sample{
    			Timestamp: model.Time(2 * i),
    			Value:     model.SampleValue(float64(i) * 0.2),
    		}
    	}
    	s, closer := NewTestStorage(t, encoding)
    	defer closer.Close()
    
    	for _, sample := range samples {
    		s.Append(sample)
    	}
    	s.WaitForIndexing()
    
    	fp := model.Metric{}.FastFingerprint()
    
    	it := s.NewIterator(fp)
    
    	// #1 Exactly on a sample.
    	for i, expected := range samples {
    		actual := it.ValueAtTime(expected.Timestamp)
    
    		if len(actual) != 1 {
    			t.Fatalf("1.%d. Expected exactly one result, got %d.", i, len(actual))
    		}
    		if expected.Timestamp != actual[0].Timestamp {
    			t.Errorf("1.%d. Got %v; want %v", i, actual[0].Timestamp, expected.Timestamp)
    		}
    		if expected.Value != actual[0].Value {
    			t.Errorf("1.%d. Got %v; want %v", i, actual[0].Value, expected.Value)
    		}
    	}
    
    	// #2 Between samples.
    	for i, expected1 := range samples {
    		if i == len(samples)-1 {
    			continue
    		}
    		expected2 := samples[i+1]
    		actual := it.ValueAtTime(expected1.Timestamp + 1)
    
    		if len(actual) != 2 {
    			t.Fatalf("2.%d. Expected exactly 2 results, got %d.", i, len(actual))
    		}
    		if expected1.Timestamp != actual[0].Timestamp {
    			t.Errorf("2.%d. Got %v; want %v", i, actual[0].Timestamp, expected1.Timestamp)
    		}
    		if expected1.Value != actual[0].Value {
    			t.Errorf("2.%d. Got %v; want %v", i, actual[0].Value, expected1.Value)
    		}
    		if expected2.Timestamp != actual[1].Timestamp {
    			t.Errorf("2.%d. Got %v; want %v", i, actual[1].Timestamp, expected1.Timestamp)
    		}
    		if expected2.Value != actual[1].Value {
    			t.Errorf("2.%d. Got %v; want %v", i, actual[1].Value, expected1.Value)
    		}
    	}
    
    	// #3 Corner cases: Just before the first sample, just after the last.
    	expected := samples[0]
    	actual := it.ValueAtTime(expected.Timestamp - 1)
    	if len(actual) != 1 {
    		t.Fatalf("3.1. Expected exactly one result, got %d.", len(actual))
    	}
    	if expected.Timestamp != actual[0].Timestamp {
    		t.Errorf("3.1. Got %v; want %v", actual[0].Timestamp, expected.Timestamp)
    	}
    	if expected.Value != actual[0].Value {
    		t.Errorf("3.1. Got %v; want %v", actual[0].Value, expected.Value)
    	}
    	expected = samples[len(samples)-1]
    	actual = it.ValueAtTime(expected.Timestamp + 1)
    	if len(actual) != 1 {
    		t.Fatalf("3.2. Expected exactly one result, got %d.", len(actual))
    	}
    	if expected.Timestamp != actual[0].Timestamp {
    		t.Errorf("3.2. Got %v; want %v", actual[0].Timestamp, expected.Timestamp)
    	}
    	if expected.Value != actual[0].Value {
    		t.Errorf("3.2. Got %v; want %v", actual[0].Value, expected.Value)
    	}
    }
    
    func TestValueAtTimeChunkType0(t *testing.T) {
    	testValueAtTime(t, 0)
    }
    
    func TestValueAtTimeChunkType1(t *testing.T) {
    	testValueAtTime(t, 1)
    }
    
    func benchmarkValueAtTime(b *testing.B, encoding chunkEncoding) {
    	samples := make(model.Samples, 10000)
    	for i := range samples {
    		samples[i] = &model.Sample{
    			Timestamp: model.Time(2 * i),
    			Value:     model.SampleValue(float64(i) * 0.2),
    		}
    	}
    	s, closer := NewTestStorage(b, encoding)
    	defer closer.Close()
    
    	for _, sample := range samples {
    		s.Append(sample)
    	}
    	s.WaitForIndexing()
    
    	fp := model.Metric{}.FastFingerprint()
    
    	b.ResetTimer()
    
    	for i := 0; i < b.N; i++ {
    		it := s.NewIterator(fp)
    
    		// #1 Exactly on a sample.
    		for i, expected := range samples {
    			actual := it.ValueAtTime(expected.Timestamp)
    
    			if len(actual) != 1 {
    				b.Fatalf("1.%d. Expected exactly one result, got %d.", i, len(actual))
    			}
    			if expected.Timestamp != actual[0].Timestamp {
    				b.Errorf("1.%d. Got %v; want %v", i, actual[0].Timestamp, expected.Timestamp)
    			}
    			if expected.Value != actual[0].Value {
    				b.Errorf("1.%d. Got %v; want %v", i, actual[0].Value, expected.Value)
    			}
    		}
    
    		// #2 Between samples.
    		for i, expected1 := range samples {
    			if i == len(samples)-1 {
    				continue
    			}
    			expected2 := samples[i+1]
    			actual := it.ValueAtTime(expected1.Timestamp + 1)
    
    			if len(actual) != 2 {
    				b.Fatalf("2.%d. Expected exactly 2 results, got %d.", i, len(actual))
    			}
    			if expected1.Timestamp != actual[0].Timestamp {
    				b.Errorf("2.%d. Got %v; want %v", i, actual[0].Timestamp, expected1.Timestamp)
    			}
    			if expected1.Value != actual[0].Value {
    				b.Errorf("2.%d. Got %v; want %v", i, actual[0].Value, expected1.Value)
    			}
    			if expected2.Timestamp != actual[1].Timestamp {
    				b.Errorf("2.%d. Got %v; want %v", i, actual[1].Timestamp, expected1.Timestamp)
    			}
    			if expected2.Value != actual[1].Value {
    				b.Errorf("2.%d. Got %v; want %v", i, actual[1].Value, expected1.Value)
    			}
    		}
    	}
    }
    
    func BenchmarkValueAtTimeChunkType0(b *testing.B) {
    	benchmarkValueAtTime(b, 0)
    }
    
    func BenchmarkValueAtTimeChunkType1(b *testing.B) {
    	benchmarkValueAtTime(b, 1)
    }
    
    func testRangeValues(t *testing.T, encoding chunkEncoding) {
    	samples := make(model.Samples, 10000)
    	for i := range samples {
    		samples[i] = &model.Sample{
    			Timestamp: model.Time(2 * i),
    			Value:     model.SampleValue(float64(i) * 0.2),
    		}
    	}
    	s, closer := NewTestStorage(t, encoding)
    	defer closer.Close()
    
    	for _, sample := range samples {
    		s.Append(sample)
    	}
    	s.WaitForIndexing()
    
    	fp := model.Metric{}.FastFingerprint()
    
    	it := s.NewIterator(fp)
    
    	// #1 Zero length interval at sample.
    	for i, expected := range samples {
    		actual := it.RangeValues(metric.Interval{
    			OldestInclusive: expected.Timestamp,
    			NewestInclusive: expected.Timestamp,
    		})
    
    		if len(actual) != 1 {
    			t.Fatalf("1.%d. Expected exactly one result, got %d.", i, len(actual))
    		}
    		if expected.Timestamp != actual[0].Timestamp {
    			t.Errorf("1.%d. Got %v; want %v.", i, actual[0].Timestamp, expected.Timestamp)
    		}
    		if expected.Value != actual[0].Value {
    			t.Errorf("1.%d. Got %v; want %v.", i, actual[0].Value, expected.Value)
    		}
    	}
    
    	// #2 Zero length interval off sample.
    	for i, expected := range samples {
    		actual := it.RangeValues(metric.Interval{
    			OldestInclusive: expected.Timestamp + 1,
    			NewestInclusive: expected.Timestamp + 1,
    		})
    
    		if len(actual) != 0 {
    			t.Fatalf("2.%d. Expected no result, got %d.", i, len(actual))
    		}
    	}
    
    	// #3 2sec interval around sample.
    	for i, expected := range samples {
    		actual := it.RangeValues(metric.Interval{
    			OldestInclusive: expected.Timestamp - 1,
    			NewestInclusive: expected.Timestamp + 1,
    		})
    
    		if len(actual) != 1 {
    			t.Fatalf("3.%d. Expected exactly one result, got %d.", i, len(actual))
    		}
    		if expected.Timestamp != actual[0].Timestamp {
    			t.Errorf("3.%d. Got %v; want %v.", i, actual[0].Timestamp, expected.Timestamp)
    		}
    		if expected.Value != actual[0].Value {
    			t.Errorf("3.%d. Got %v; want %v.", i, actual[0].Value, expected.Value)
    		}
    	}
    
    	// #4 2sec interval sample to sample.
    	for i, expected1 := range samples {
    		if i == len(samples)-1 {
    			continue
    		}
    		expected2 := samples[i+1]
    		actual := it.RangeValues(metric.Interval{
    			OldestInclusive: expected1.Timestamp,
    			NewestInclusive: expected1.Timestamp + 2,
    		})
    
    		if len(actual) != 2 {
    			t.Fatalf("4.%d. Expected exactly 2 results, got %d.", i, len(actual))
    		}
    		if expected1.Timestamp != actual[0].Timestamp {
    			t.Errorf("4.%d. Got %v for 1st result; want %v.", i, actual[0].Timestamp, expected1.Timestamp)
    		}
    		if expected1.Value != actual[0].Value {
    			t.Errorf("4.%d. Got %v for 1st result; want %v.", i, actual[0].Value, expected1.Value)
    		}
    		if expected2.Timestamp != actual[1].Timestamp {
    			t.Errorf("4.%d. Got %v for 2nd result; want %v.", i, actual[1].Timestamp, expected2.Timestamp)
    		}
    		if expected2.Value != actual[1].Value {
    			t.Errorf("4.%d. Got %v for 2nd result; want %v.", i, actual[1].Value, expected2.Value)
    		}
    	}
    
    	// #5 corner cases: Interval ends at first sample, interval starts
    	// at last sample, interval entirely before/after samples.
    	expected := samples[0]
    	actual := it.RangeValues(metric.Interval{
    		OldestInclusive: expected.Timestamp - 2,
    		NewestInclusive: expected.Timestamp,
    	})
    	if len(actual) != 1 {
    		t.Fatalf("5.1. Expected exactly one result, got %d.", len(actual))
    	}
    	if expected.Timestamp != actual[0].Timestamp {
    		t.Errorf("5.1. Got %v; want %v.", actual[0].Timestamp, expected.Timestamp)
    	}
    	if expected.Value != actual[0].Value {
    		t.Errorf("5.1. Got %v; want %v.", actual[0].Value, expected.Value)
    	}
    	expected = samples[len(samples)-1]
    	actual = it.RangeValues(metric.Interval{
    		OldestInclusive: expected.Timestamp,
    		NewestInclusive: expected.Timestamp + 2,
    	})
    	if len(actual) != 1 {
    		t.Fatalf("5.2. Expected exactly one result, got %d.", len(actual))
    	}
    	if expected.Timestamp != actual[0].Timestamp {
    		t.Errorf("5.2. Got %v; want %v.", actual[0].Timestamp, expected.Timestamp)
    	}
    	if expected.Value != actual[0].Value {
    		t.Errorf("5.2. Got %v; want %v.", actual[0].Value, expected.Value)
    	}
    	firstSample := samples[0]
    	actual = it.RangeValues(metric.Interval{
    		OldestInclusive: firstSample.Timestamp - 4,
    		NewestInclusive: firstSample.Timestamp - 2,
    	})
    	if len(actual) != 0 {
    		t.Fatalf("5.3. Expected no results, got %d.", len(actual))
    	}
    	lastSample := samples[len(samples)-1]
    	actual = it.RangeValues(metric.Interval{
    		OldestInclusive: lastSample.Timestamp + 2,
    		NewestInclusive: lastSample.Timestamp + 4,
    	})
    	if len(actual) != 0 {
    		t.Fatalf("5.3. Expected no results, got %d.", len(actual))
    	}
    }
    
    func TestRangeValuesChunkType0(t *testing.T) {
    	testRangeValues(t, 0)
    }
    
    func TestRangeValuesChunkType1(t *testing.T) {
    	testRangeValues(t, 1)
    }
    
    func benchmarkRangeValues(b *testing.B, encoding chunkEncoding) {
    	samples := make(model.Samples, 10000)
    	for i := range samples {
    		samples[i] = &model.Sample{
    			Timestamp: model.Time(2 * i),
    			Value:     model.SampleValue(float64(i) * 0.2),
    		}
    	}
    	s, closer := NewTestStorage(b, encoding)
    	defer closer.Close()
    
    	for _, sample := range samples {
    		s.Append(sample)
    	}
    	s.WaitForIndexing()
    
    	fp := model.Metric{}.FastFingerprint()
    
    	b.ResetTimer()
    
    	for i := 0; i < b.N; i++ {
    
    		it := s.NewIterator(fp)
    
    		for _, sample := range samples {
    			actual := it.RangeValues(metric.Interval{
    				OldestInclusive: sample.Timestamp - 20,
    				NewestInclusive: sample.Timestamp + 20,
    			})
    
    			if len(actual) < 10 {
    				b.Fatalf("not enough samples found")
    			}
    		}
    	}
    }
    
    func BenchmarkRangeValuesChunkType0(b *testing.B) {
    	benchmarkRangeValues(b, 0)
    }
    
    func BenchmarkRangeValuesChunkType1(b *testing.B) {
    	benchmarkRangeValues(b, 1)
    }
    
    func testEvictAndPurgeSeries(t *testing.T, encoding chunkEncoding) {
    	samples := make(model.Samples, 10000)
    	for i := range samples {
    		samples[i] = &model.Sample{
    			Timestamp: model.Time(2 * i),
    			Value:     model.SampleValue(float64(i * i)),
    		}
    	}
    	s, closer := NewTestStorage(t, encoding)
    	defer closer.Close()
    
    	for _, sample := range samples {
    		s.Append(sample)
    	}
    	s.WaitForIndexing()
    
    	fp := model.Metric{}.FastFingerprint()
    
    	// Drop ~half of the chunks.
    	s.maintainMemorySeries(fp, 10000)
    	it := s.NewIterator(fp)
    	actual := it.BoundaryValues(metric.Interval{
    		OldestInclusive: 0,
    		NewestInclusive: 100000,
    	})
    	if len(actual) != 2 {
    		t.Fatal("expected two results after purging half of series")
    	}
    	if actual[0].Timestamp < 6000 || actual[0].Timestamp > 10000 {
    		t.Errorf("1st timestamp out of expected range: %v", actual[0].Timestamp)
    	}
    	want := model.Time(19998)
    	if actual[1].Timestamp != want {
    		t.Errorf("2nd timestamp: want %v, got %v", want, actual[1].Timestamp)
    	}
    
    	// Drop everything.
    	s.maintainMemorySeries(fp, 100000)
    	it = s.NewIterator(fp)
    	actual = it.BoundaryValues(metric.Interval{
    		OldestInclusive: 0,
    		NewestInclusive: 100000,
    	})
    	if len(actual) != 0 {
    		t.Fatal("expected zero results after purging the whole series")
    	}
    
    	// Recreate series.
    	for _, sample := range samples {
    		s.Append(sample)
    	}
    	s.WaitForIndexing()
    
    	series, ok := s.fpToSeries.get(fp)
    	if !ok {
    		t.Fatal("could not find series")
    	}
    
    	// Persist head chunk so we can safely archive.
    	series.headChunkClosed = true
    	s.maintainMemorySeries(fp, model.Earliest)
    
    	// Archive metrics.
    	s.fpToSeries.del(fp)
    	if err := s.persistence.archiveMetric(
    		fp, series.metric, series.firstTime(), series.head().lastTime(),
    	); err != nil {
    		t.Fatal(err)
    	}
    
    	archived, _, _, err := s.persistence.hasArchivedMetric(fp)
    	if err != nil {
    		t.Fatal(err)
    	}
    	if !archived {
    		t.Fatal("not archived")
    	}
    
    	// Drop ~half of the chunks of an archived series.
    	s.maintainArchivedSeries(fp, 10000)
    	archived, _, _, err = s.persistence.hasArchivedMetric(fp)
    	if err != nil {
    		t.Fatal(err)
    	}
    	if !archived {
    		t.Fatal("archived series purged although only half of the chunks dropped")
    	}
    
    	// Drop everything.
    	s.maintainArchivedSeries(fp, 100000)
    	archived, _, _, err = s.persistence.hasArchivedMetric(fp)
    	if err != nil {
    		t.Fatal(err)
    	}
    	if archived {
    		t.Fatal("archived series not dropped")
    	}
    
    	// Recreate series.
    	for _, sample := range samples {
    		s.Append(sample)
    	}
    	s.WaitForIndexing()
    
    	series, ok = s.fpToSeries.get(fp)
    	if !ok {
    		t.Fatal("could not find series")
    	}
    
    	// Persist head chunk so we can safely archive.
    	series.headChunkClosed = true
    	s.maintainMemorySeries(fp, model.Earliest)
    
    	// Archive metrics.
    	s.fpToSeries.del(fp)
    	if err := s.persistence.archiveMetric(
    		fp, series.metric, series.firstTime(), series.head().lastTime(),
    	); err != nil {
    		t.Fatal(err)
    	}
    
    	archived, _, _, err = s.persistence.hasArchivedMetric(fp)
    	if err != nil {
    		t.Fatal(err)
    	}
    	if !archived {
    		t.Fatal("not archived")
    	}
    
    	// Unarchive metrics.
    	s.getOrCreateSeries(fp, model.Metric{})
    
    	series, ok = s.fpToSeries.get(fp)
    	if !ok {
    		t.Fatal("could not find series")
    	}
    	archived, _, _, err = s.persistence.hasArchivedMetric(fp)
    	if err != nil {
    		t.Fatal(err)
    	}
    	if archived {
    		t.Fatal("archived")
    	}
    
    	// This will archive again, but must not drop it completely, despite the
    	// memorySeries being empty.
    	s.maintainMemorySeries(fp, 10000)
    	archived, _, _, err = s.persistence.hasArchivedMetric(fp)
    	if err != nil {
    		t.Fatal(err)
    	}
    	if !archived {
    		t.Fatal("series purged completely")
    	}
    }
    
    func TestEvictAndPurgeSeriesChunkType0(t *testing.T) {
    	testEvictAndPurgeSeries(t, 0)
    }
    
    func TestEvictAndPurgeSeriesChunkType1(t *testing.T) {
    	testEvictAndPurgeSeries(t, 1)
    }
    
    func testEvictAndLoadChunkDescs(t *testing.T, encoding chunkEncoding) {
    	samples := make(model.Samples, 10000)
    	for i := range samples {
    		samples[i] = &model.Sample{
    			Timestamp: model.Time(2 * i),
    			Value:     model.SampleValue(float64(i * i)),
    		}
    	}
    	// Give last sample a timestamp of now so that the head chunk will not
    	// be closed (which would then archive the time series later as
    	// everything will get evicted).
    	samples[len(samples)-1] = &model.Sample{
    		Timestamp: model.Now(),
    		Value:     model.SampleValue(3.14),
    	}
    
    	s, closer := NewTestStorage(t, encoding)
    	defer closer.Close()
    
    	// Adjust memory chunks to lower value to see evictions.
    	s.maxMemoryChunks = 1
    
    	for _, sample := range samples {
    		s.Append(sample)
    	}
    	s.WaitForIndexing()
    
    	fp := model.Metric{}.FastFingerprint()
    
    	series, ok := s.fpToSeries.get(fp)
    	if !ok {
    		t.Fatal("could not find series")
    	}
    
    	oldLen := len(series.chunkDescs)
    	// Maintain series without any dropped chunks.
    	s.maintainMemorySeries(fp, 0)
    	// Give the evict goroutine an opportunity to run.
    	time.Sleep(50 * time.Millisecond)
	// Maintain series again to trigger chunkDesc eviction.
    	s.maintainMemorySeries(fp, 0)
    
    	if oldLen <= len(series.chunkDescs) {
    		t.Errorf("Expected number of chunkDescs to decrease, old number %d, current number %d.", oldLen, len(series.chunkDescs))
    	}
    
    	// Load everything back.
    	p := s.NewPreloader()
    	p.PreloadRange(fp, 0, 100000, time.Hour)
    
    	if oldLen != len(series.chunkDescs) {
    		t.Errorf("Expected number of chunkDescs to have reached old value again, old number %d, current number %d.", oldLen, len(series.chunkDescs))
    	}
    
    	p.Close()
    
    	// Now maintain series with drops to make sure nothing crazy happens.
    	s.maintainMemorySeries(fp, 100000)
    
    	if len(series.chunkDescs) != 1 {
    		t.Errorf("Expected exactly one chunkDesc left, got %d.", len(series.chunkDescs))
    	}
    }
    
    func TestEvictAndLoadChunkDescsType0(t *testing.T) {
    	testEvictAndLoadChunkDescs(t, 0)
    }
    
    func TestEvictAndLoadChunkDescsType1(t *testing.T) {
    	testEvictAndLoadChunkDescs(t, 1)
    }
    
    func benchmarkAppend(b *testing.B, encoding chunkEncoding) {
    	samples := make(model.Samples, b.N)
    	for i := range samples {
    		samples[i] = &model.Sample{
    			Metric: model.Metric{
    				model.MetricNameLabel: model.LabelValue(fmt.Sprintf("test_metric_%d", i%10)),
    				"label1":              model.LabelValue(fmt.Sprintf("test_metric_%d", i%10)),
    				"label2":              model.LabelValue(fmt.Sprintf("test_metric_%d", i%10)),
    			},
    			Timestamp: model.Time(i),
    			Value:     model.SampleValue(i),
    		}
    	}
    	b.ResetTimer()
    	s, closer := NewTestStorage(b, encoding)
    	defer closer.Close()
    
    	for _, sample := range samples {
    		s.Append(sample)
    	}
    }
    
    func BenchmarkAppendType0(b *testing.B) {
    	benchmarkAppend(b, 0)
    }
    
    func BenchmarkAppendType1(b *testing.B) {
    	benchmarkAppend(b, 1)
    }
    
    // Append a large number of random samples and then check if we can get them out
    // of the storage alright.
    func testFuzz(t *testing.T, encoding chunkEncoding) {
    	if testing.Short() {
    		t.Skip("Skipping test in short mode.")
    	}
    
    	check := func(seed int64) bool {
    		rand.Seed(seed)
    		s, c := NewTestStorage(t, encoding)
    		defer c.Close()
    
    		samples := createRandomSamples("test_fuzz", 10000)
    		for _, sample := range samples {
    			s.Append(sample)
    		}
    		return verifyStorage(t, s, samples, 24*7*time.Hour)
    	}
    
    	if err := quick.Check(check, nil); err != nil {
    		t.Fatal(err)
    	}
    }
    
    func TestFuzzChunkType0(t *testing.T) {
    	testFuzz(t, 0)
    }
    
    func TestFuzzChunkType1(t *testing.T) {
    	testFuzz(t, 1)
    }
    
    // benchmarkFuzz is the benchmark version of testFuzz. The storage options are
    // set such that evictions, checkpoints, and purging will happen concurrently,
    // too. This benchmark will have a very long runtime (up to minutes). You can
    // use it as an actual benchmark. Run it like this:
    //
    // go test -cpu 1,2,4,8 -run=NONE -bench BenchmarkFuzzChunkType -benchmem
    //
    // You can also use it as a test for races. In that case, run it like this (will
    // make things even slower):
    //
    // go test -race -cpu 8 -short -bench BenchmarkFuzzChunkType
    func benchmarkFuzz(b *testing.B, encoding chunkEncoding) {
    	DefaultChunkEncoding = encoding
    	const samplesPerRun = 100000
    	rand.Seed(42)
    	directory := testutil.NewTemporaryDirectory("test_storage", b)
    	defer directory.Close()
    	o := &MemorySeriesStorageOptions{
    		MemoryChunks:               100,
    		MaxChunksToPersist:         1000000,
    		PersistenceRetentionPeriod: time.Hour,
    		PersistenceStoragePath:     directory.Path(),
    		CheckpointInterval:         time.Second,
    		SyncStrategy:               Adaptive,
    		MinShrinkRatio:             0.1,
    	}
    	s := NewMemorySeriesStorage(o)
    	if err := s.Start(); err != nil {
    		b.Fatalf("Error starting storage: %s", err)
    	}
    	defer s.Stop()
    
    	samples := createRandomSamples("benchmark_fuzz", samplesPerRun*b.N)
    
    	b.ResetTimer()
    
    	for i := 0; i < b.N; i++ {
    		start := samplesPerRun * i
    		end := samplesPerRun * (i + 1)
    		middle := (start + end) / 2
    		for _, sample := range samples[start:middle] {
    			s.Append(sample)
    		}
    		verifyStorage(b, s.(*memorySeriesStorage), samples[:middle], o.PersistenceRetentionPeriod)
    		for _, sample := range samples[middle:end] {
    			s.Append(sample)
    		}
    		verifyStorage(b, s.(*memorySeriesStorage), samples[:end], o.PersistenceRetentionPeriod)
    	}
    }
    
    func BenchmarkFuzzChunkType0(b *testing.B) {
    	benchmarkFuzz(b, 0)
    }
    
    func BenchmarkFuzzChunkType1(b *testing.B) {
    	benchmarkFuzz(b, 1)
    }
    
    func createRandomSamples(metricName string, minLen int) model.Samples {
    	type valueCreator func() model.SampleValue
    	type deltaApplier func(model.SampleValue) model.SampleValue
    
    	var (
    		maxMetrics         = 5
    		maxStreakLength    = 500
    		maxTimeDelta       = 10000
    		maxTimeDeltaFactor = 10
    		timestamp          = model.Now() - model.Time(maxTimeDelta*maxTimeDeltaFactor*minLen/4) // So that some timestamps are in the future.
    		generators         = []struct {
    			createValue valueCreator
    			applyDelta  []deltaApplier
    		}{
    			{ // "Boolean".
    				createValue: func() model.SampleValue {
    					return model.SampleValue(rand.Intn(2))
    				},
    				applyDelta: []deltaApplier{
    					func(_ model.SampleValue) model.SampleValue {
    						return model.SampleValue(rand.Intn(2))
    					},
    				},
    			},
    			{ // Integer with int deltas of various byte length.
    				createValue: func() model.SampleValue {
    					return model.SampleValue(rand.Int63() - 1<<62)
    				},
    				applyDelta: []deltaApplier{
    					func(v model.SampleValue) model.SampleValue {
    						return model.SampleValue(rand.Intn(1<<8) - 1<<7 + int(v))
    					},
    					func(v model.SampleValue) model.SampleValue {
    						return model.SampleValue(rand.Intn(1<<16) - 1<<15 + int(v))
    					},
    					func(v model.SampleValue) model.SampleValue {
    						return model.SampleValue(rand.Int63n(1<<32) - 1<<31 + int64(v))
    					},
    				},
    			},
    			{ // Float with float32 and float64 deltas.
    				createValue: func() model.SampleValue {
    					return model.SampleValue(rand.NormFloat64())
    				},
    				applyDelta: []deltaApplier{
    					func(v model.SampleValue) model.SampleValue {
    						return v + model.SampleValue(float32(rand.NormFloat64()))
    					},
    					func(v model.SampleValue) model.SampleValue {
    						return v + model.SampleValue(rand.NormFloat64())
    					},
    				},
    			},
    		}
    	)
    
    	// Prefill result with two samples with colliding metrics (to test fingerprint mapping).
    	result := model.Samples{
    		&model.Sample{
    			Metric: model.Metric{
    				"instance": "ip-10-33-84-73.l05.ams5.s-cloud.net:24483",
    				"status":   "503",
    			},
    			Value:     42,
    			Timestamp: timestamp,
    		},
    		&model.Sample{
    			Metric: model.Metric{
    				"instance": "ip-10-33-84-73.l05.ams5.s-cloud.net:24480",
    				"status":   "500",
    			},
    			Value:     2010,
    			Timestamp: timestamp + 1,
    		},
    	}
    
    	metrics := []model.Metric{}
    	for n := rand.Intn(maxMetrics); n >= 0; n-- {
    		metrics = append(metrics, model.Metric{
    			model.MetricNameLabel:                             model.LabelValue(metricName),
    			model.LabelName(fmt.Sprintf("labelname_%d", n+1)): model.LabelValue(fmt.Sprintf("labelvalue_%d", rand.Int())),
    		})
    	}
    
    	for len(result) < minLen {
    		// Pick a metric for this cycle.
    		metric := metrics[rand.Intn(len(metrics))]
    		timeDelta := rand.Intn(maxTimeDelta) + 1
    		generator := generators[rand.Intn(len(generators))]
    		createValue := generator.createValue
    		applyDelta := generator.applyDelta[rand.Intn(len(generator.applyDelta))]
    		incTimestamp := func() { timestamp += model.Time(timeDelta * (rand.Intn(maxTimeDeltaFactor) + 1)) }
    		switch rand.Intn(4) {
    		case 0: // A single sample.
    			result = append(result, &model.Sample{
    				Metric:    metric,
    				Value:     createValue(),
    				Timestamp: timestamp,
    			})
    			incTimestamp()
    		case 1: // A streak of random sample values.
    			for n := rand.Intn(maxStreakLength); n >= 0; n-- {
    				result = append(result, &model.Sample{
    					Metric:    metric,
    					Value:     createValue(),
    					Timestamp: timestamp,
    				})
    				incTimestamp()
    			}
    		case 2: // A streak of sample values with incremental changes.
    			value := createValue()
    			for n := rand.Intn(maxStreakLength); n >= 0; n-- {
    				result = append(result, &model.Sample{
    					Metric:    metric,
    					Value:     value,
    					Timestamp: timestamp,
    				})
    				incTimestamp()
    				value = applyDelta(value)
    			}
    		case 3: // A streak of constant sample values.
    			value := createValue()
    			for n := rand.Intn(maxStreakLength); n >= 0; n-- {
    				result = append(result, &model.Sample{
    					Metric:    metric,
    					Value:     value,
    					Timestamp: timestamp,
    				})
    				incTimestamp()
    			}
    		}
    	}
    
    	return result
    }
    
    func verifyStorage(t testing.TB, s *memorySeriesStorage, samples model.Samples, maxAge time.Duration) bool {
    	s.WaitForIndexing()
    	result := true
    	for _, i := range rand.Perm(len(samples)) {
    		sample := samples[i]
    		if sample.Timestamp.Before(model.TimeFromUnixNano(time.Now().Add(-maxAge).UnixNano())) {
    			continue
    			// TODO: Once we have a guaranteed cutoff at the
    			// retention period, we can verify here that no results
    			// are returned.
    		}
    		fp, err := s.mapper.mapFP(sample.Metric.FastFingerprint(), sample.Metric)
    		if err != nil {
    			t.Fatal(err)
    		}
    		p := s.NewPreloader()
    		p.PreloadRange(fp, sample.Timestamp, sample.Timestamp, time.Hour)
    		found := s.NewIterator(fp).ValueAtTime(sample.Timestamp)
    		if len(found) != 1 {
    			t.Errorf("Sample %#v: Expected exactly one value, found %d.", sample, len(found))
    			result = false
    			p.Close()
    			continue
    		}
    		want := sample.Value
    		got := found[0].Value
    		if want != got || sample.Timestamp != found[0].Timestamp {
    			t.Errorf(
    				"Value (or timestamp) mismatch, want %f (at time %v), got %f (at time %v).",
    				want, sample.Timestamp, got, found[0].Timestamp,
    			)
    			result = false
    		}
    		p.Close()
    	}
    	return result
    }
    
    func TestAppendOutOfOrder(t *testing.T) {
    	s, closer := NewTestStorage(t, 1)
    	defer closer.Close()
    
    	m := model.Metric{
    		model.MetricNameLabel: "out_of_order",
    	}
    
	for i, ts := range []int{0, 2, 2, 1} {
		s.Append(&model.Sample{
			Metric:    m,
			Timestamp: model.Time(ts),
    			Value:     model.SampleValue(i),
    		})
    	}
    
    	fp, err := s.mapper.mapFP(m.FastFingerprint(), m)
    	if err != nil {
    		t.Fatal(err)
    	}
    
    	pl := s.NewPreloader()
    	defer pl.Close()
    
    	err = pl.PreloadRange(fp, 0, 2, 5*time.Minute)
    	if err != nil {
    		t.Fatalf("Error preloading chunks: %s", err)
    	}
    
    	it := s.NewIterator(fp)
    
    	want := []model.SamplePair{
    		{
    			Timestamp: 0,
    			Value:     0,
    		},
    		{
    			Timestamp: 2,
    			Value:     1,
    		},
    	}
    	got := it.RangeValues(metric.Interval{OldestInclusive: 0, NewestInclusive: 2})
    	if !reflect.DeepEqual(want, got) {
    		t.Fatalf("want %v, got %v", want, got)
    	}
    }
prometheus-0.16.2+ds/storage/local/test_helpers.go
// Copyright 2014 The Prometheus Authors
    // Licensed under the Apache License, Version 2.0 (the "License");
    // you may not use this file except in compliance with the License.
    // You may obtain a copy of the License at
    //
    // http://www.apache.org/licenses/LICENSE-2.0
    //
    // Unless required by applicable law or agreed to in writing, software
    // distributed under the License is distributed on an "AS IS" BASIS,
    // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    // See the License for the specific language governing permissions and
    // limitations under the License.
    
// NOTE ON FILENAME: Do not rename this file to helpers_test.go (which might
// seem the obvious choice). We need NewTestStorage in tests outside of the
// local package, too. On the other hand, moving NewTestStorage into its own
// package would cause circular dependencies in the tests of package local.
    
    package local
    
    import (
    	"time"
    
    	"github.com/prometheus/prometheus/util/testutil"
    )
    
    type testStorageCloser struct {
    	storage   Storage
    	directory testutil.Closer
    }
    
    func (t *testStorageCloser) Close() {
    	if err := t.storage.Stop(); err != nil {
    		panic(err)
    	}
    	t.directory.Close()
    }
    
    // NewTestStorage creates a storage instance backed by files in a temporary
    // directory. The returned storage is already in serving state. Upon closing the
    // returned test.Closer, the temporary directory is cleaned up.
    func NewTestStorage(t testutil.T, encoding chunkEncoding) (*memorySeriesStorage, testutil.Closer) {
    	DefaultChunkEncoding = encoding
    	directory := testutil.NewTemporaryDirectory("test_storage", t)
    	o := &MemorySeriesStorageOptions{
    		MemoryChunks:               1000000,
    		MaxChunksToPersist:         1000000,
    		PersistenceRetentionPeriod: 24 * time.Hour * 365 * 100, // Enough to never trigger purging.
    		PersistenceStoragePath:     directory.Path(),
    		CheckpointInterval:         time.Hour,
    		SyncStrategy:               Adaptive,
    	}
    	storage := NewMemorySeriesStorage(o)
    	if err := storage.Start(); err != nil {
    		directory.Close()
    		t.Fatalf("Error creating storage: %s", err)
    	}
    
    	closer := &testStorageCloser{
    		storage:   storage,
    		directory: directory,
    	}
    
    	return storage.(*memorySeriesStorage), closer
    }
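
// The following sketch is not part of the original file; it illustrates,
// with an invented metric name, how tests typically use NewTestStorage:
// create a storage, append samples, wait for indexing, then read the data
// back through an iterator.
//
//	func TestStorageSketch(t *testing.T) {
//		s, closer := NewTestStorage(t, 1) // 1 = the ChunkType1 encoding.
//		defer closer.Close()
//
//		m := model.Metric{model.MetricNameLabel: "sketch_metric"}
//		s.Append(&model.Sample{Metric: m, Timestamp: 1234, Value: 42})
//		s.WaitForIndexing()
//
//		it := s.NewIterator(m.FastFingerprint())
//		if vals := it.ValueAtTime(1234); len(vals) != 1 || vals[0].Value != 42 {
//			t.Errorf("unexpected result: %v", vals)
//		}
//	}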
prometheus-0.16.2+ds/storage/metric/matcher.go
// Copyright 2014 The Prometheus Authors
    // Licensed under the Apache License, Version 2.0 (the "License");
    // you may not use this file except in compliance with the License.
    // You may obtain a copy of the License at
    //
    // http://www.apache.org/licenses/LICENSE-2.0
    //
    // Unless required by applicable law or agreed to in writing, software
    // distributed under the License is distributed on an "AS IS" BASIS,
    // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    // See the License for the specific language governing permissions and
    // limitations under the License.
    
    package metric
    
    import (
    	"fmt"
    	"regexp"
    
    	"github.com/prometheus/common/model"
    )
    
    // MatchType is an enum for label matching types.
    type MatchType int
    
    // Possible MatchTypes.
    const (
    	Equal MatchType = iota
    	NotEqual
    	RegexMatch
    	RegexNoMatch
    )
    
    func (m MatchType) String() string {
    	typeToStr := map[MatchType]string{
    		Equal:        "=",
    		NotEqual:     "!=",
    		RegexMatch:   "=~",
    		RegexNoMatch: "!~",
    	}
    	if str, ok := typeToStr[m]; ok {
    		return str
    	}
    	panic("unknown match type")
    }
    
    // LabelMatchers is a slice of LabelMatcher objects.
    type LabelMatchers []*LabelMatcher
    
    // LabelMatcher models the matching of a label.
    type LabelMatcher struct {
    	Type  MatchType
    	Name  model.LabelName
    	Value model.LabelValue
    	re    *regexp.Regexp
    }
    
    // NewLabelMatcher returns a LabelMatcher object ready to use.
    func NewLabelMatcher(matchType MatchType, name model.LabelName, value model.LabelValue) (*LabelMatcher, error) {
    	m := &LabelMatcher{
    		Type:  matchType,
    		Name:  name,
    		Value: value,
    	}
    	if matchType == RegexMatch || matchType == RegexNoMatch {
    		re, err := regexp.Compile(string(value))
    		if err != nil {
    			return nil, err
    		}
    		m.re = re
    	}
    	return m, nil
    }
    
    func (m *LabelMatcher) String() string {
    	return fmt.Sprintf("%s%s%q", m.Name, m.Type, m.Value)
    }
    
    // Match returns true if the label matcher matches the supplied label value.
    func (m *LabelMatcher) Match(v model.LabelValue) bool {
    	switch m.Type {
    	case Equal:
    		return m.Value == v
    	case NotEqual:
    		return m.Value != v
    	case RegexMatch:
    		return m.re.MatchString(string(v))
    	case RegexNoMatch:
    		return !m.re.MatchString(string(v))
    	default:
    		panic("invalid match type")
    	}
    }
    
    // Filter takes a list of label values and returns all label values which match
    // the label matcher.
    func (m *LabelMatcher) Filter(in model.LabelValues) model.LabelValues {
    	out := model.LabelValues{}
    	for _, v := range in {
    		if m.Match(v) {
    			out = append(out, v)
    		}
    	}
    	return out
    }
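
// A minimal illustrative sketch (not from the original source): building a
// matcher and filtering label values with it. The label name and values are
// invented. Note that, as implemented above, regex matchers are unanchored,
// i.e. they match anywhere in the value.
func exampleFilterSketch() {
	m, err := NewLabelMatcher(RegexMatch, "job", "^api")
	if err != nil {
		panic(err)
	}
	in := model.LabelValues{"api-server", "api-gateway", "node-exporter"}
	// Prints [api-server api-gateway]: only values matching the regex remain.
	fmt.Println(m.Filter(in))
}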
prometheus-0.16.2+ds/storage/metric/metric.go
// Copyright 2014 The Prometheus Authors
    // Licensed under the Apache License, Version 2.0 (the "License");
    // you may not use this file except in compliance with the License.
    // You may obtain a copy of the License at
    //
    // http://www.apache.org/licenses/LICENSE-2.0
    //
    // Unless required by applicable law or agreed to in writing, software
    // distributed under the License is distributed on an "AS IS" BASIS,
    // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    // See the License for the specific language governing permissions and
    // limitations under the License.
    
    package metric
    
    import "github.com/prometheus/common/model"
    
    // Metric wraps a model.Metric and copies it upon modification if Copied is false.
    type Metric struct {
    	Copied bool
    	Metric model.Metric
    }
    
    // Set sets a label name in the wrapped Metric to a given value and copies the
    // Metric initially, if it is not already a copy.
    func (m *Metric) Set(ln model.LabelName, lv model.LabelValue) {
    	m.Copy()
    	m.Metric[ln] = lv
    }
    
    // Del deletes a given label name from the wrapped Metric and copies the
    // Metric initially, if it is not already a copy.
    func (m *Metric) Del(ln model.LabelName) {
    	m.Copy()
    	delete(m.Metric, ln)
    }
    
    // Get the value for the given label name. An empty value is returned
    // if the label does not exist in the metric.
    func (m *Metric) Get(ln model.LabelName) model.LabelValue {
    	return m.Metric[ln]
    }
    
    // Gets behaves as Get but the returned boolean is false iff the label
    // does not exist.
    func (m *Metric) Gets(ln model.LabelName) (model.LabelValue, bool) {
    	lv, ok := m.Metric[ln]
    	return lv, ok
    }
    
    // Copy the underlying Metric if it is not already a copy.
    func (m *Metric) Copy() *Metric {
    	if !m.Copied {
    		m.Metric = m.Metric.Clone()
    		m.Copied = true
    	}
    	return m
    }
    
    // String implements fmt.Stringer.
    func (m Metric) String() string {
    	return m.Metric.String()
    }
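
// A minimal illustrative sketch (not from the original source) of the
// copy-on-write behavior: the first mutation through the wrapper clones the
// underlying model.Metric, so the map passed in by the caller is never
// modified.
func copyOnWriteSketch() {
	orig := model.Metric{"job": "api", "instance": "a:9090"} // invented labels
	m := Metric{Metric: orig, Copied: false}
	m.Set("instance", "b:9090") // triggers the one-time clone
	// orig still maps "instance" to "a:9090"; m.Metric now has "b:9090".
}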
prometheus-0.16.2+ds/storage/metric/metric_test.go
// Copyright 2014 The Prometheus Authors
    // Licensed under the Apache License, Version 2.0 (the "License");
    // you may not use this file except in compliance with the License.
    // You may obtain a copy of the License at
    //
    // http://www.apache.org/licenses/LICENSE-2.0
    //
    // Unless required by applicable law or agreed to in writing, software
    // distributed under the License is distributed on an "AS IS" BASIS,
    // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    // See the License for the specific language governing permissions and
    // limitations under the License.
    
    package metric
    
    import (
    	"testing"
    
    	"github.com/prometheus/common/model"
    )
    
    func TestMetric(t *testing.T) {
    	testMetric := model.Metric{
    		"to_delete": "test1",
    		"to_change": "test2",
    	}
    
    	scenarios := []struct {
    		fn  func(*Metric)
    		out model.Metric
    	}{
    		{
    			fn: func(cm *Metric) {
    				cm.Del("to_delete")
    			},
    			out: model.Metric{
    				"to_change": "test2",
    			},
    		},
    		{
    			fn: func(cm *Metric) {
    				cm.Set("to_change", "changed")
    			},
    			out: model.Metric{
    				"to_delete": "test1",
    				"to_change": "changed",
    			},
    		},
    	}
    
    	for i, s := range scenarios {
    		orig := testMetric.Clone()
    		cm := &Metric{
    			Metric: orig,
    			Copied: false,
    		}
    
    		s.fn(cm)
    
    		// Test that the original metric was not modified.
    		if !orig.Equal(testMetric) {
    			t.Fatalf("%d. original metric changed; expected %v, got %v", i, testMetric, orig)
    		}
    
    		// Test that the new metric has the right changes.
    		if !cm.Metric.Equal(s.out) {
    			t.Fatalf("%d. copied metric doesn't contain expected changes; expected %v, got %v", i, s.out, cm.Metric)
    		}
    	}
    }
prometheus-0.16.2+ds/storage/metric/sample.go
// Copyright 2013 The Prometheus Authors
    // Licensed under the Apache License, Version 2.0 (the "License");
    // you may not use this file except in compliance with the License.
    // You may obtain a copy of the License at
    //
    // http://www.apache.org/licenses/LICENSE-2.0
    //
    // Unless required by applicable law or agreed to in writing, software
    // distributed under the License is distributed on an "AS IS" BASIS,
    // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    // See the License for the specific language governing permissions and
    // limitations under the License.
    
    package metric
    
    import "github.com/prometheus/common/model"
    
    // Interval describes the inclusive interval between two Timestamps.
    type Interval struct {
    	OldestInclusive model.Time
    	NewestInclusive model.Time
    }
prometheus-0.16.2+ds/storage/remote/graphite/client.go
// Copyright 2015 The Prometheus Authors
    // Licensed under the Apache License, Version 2.0 (the "License");
    // you may not use this file except in compliance with the License.
    // You may obtain a copy of the License at
    //
    // http://www.apache.org/licenses/LICENSE-2.0
    //
    // Unless required by applicable law or agreed to in writing, software
    // distributed under the License is distributed on an "AS IS" BASIS,
    // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    // See the License for the specific language governing permissions and
    // limitations under the License.
    
    package graphite
    
    import (
    	"bytes"
    	"fmt"
    	"math"
    	"net"
    	"sort"
    	"time"
    
    	"github.com/prometheus/common/log"
    	"github.com/prometheus/common/model"
    )
    
    // Client allows sending batches of Prometheus samples to Graphite.
    type Client struct {
    	address   string
    	transport string
    	timeout   time.Duration
    	prefix    string
    }
    
    // NewClient creates a new Client.
    func NewClient(address string, transport string, timeout time.Duration, prefix string) *Client {
    	return &Client{
    		address:   address,
    		transport: transport,
    		timeout:   timeout,
    		prefix:    prefix,
    	}
    }
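// Illustrative usage (not part of the original file): a client pointed at a
// local Graphite plaintext listener, assuming its conventional port 2003.
//
//	c := NewClient("localhost:2003", "tcp", 30*time.Second, "prometheus.")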
    
    func pathFromMetric(m model.Metric, prefix string) string {
    	var buffer bytes.Buffer
    
    	buffer.WriteString(prefix)
    	buffer.WriteString(escape(m[model.MetricNameLabel]))
    
	// Collect and sort the label names so the generated path is deterministic.
	labels := make(model.LabelNames, 0, len(m))
	for l := range m {
    		labels = append(labels, l)
    	}
    	sort.Sort(labels)
    
	// For each label, in order, add ".<label>.<value>" to the path.
	for _, l := range labels {
		v := m[l]

		if l == model.MetricNameLabel || len(l) == 0 {
			continue
		}
		buffer.WriteString(fmt.Sprintf(".%s.%s", string(l), escape(v)))
	}
	return buffer.String()
}
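// Illustrative mapping (assuming escape replaces characters that are unsafe
// in Graphite paths with '_'): with prefix "prometheus.", the metric
// http_requests_total{method="post"} becomes
// "prometheus.http_requests_total.method.post".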
    "); return; } for (var i = 0; i < data.result.length; i++) { var s = data.result[i]; var tsName = self.metricToTsName(s.metric); tBody.append(""); } break; case "matrix": if (data.result.length === 0) { tBody.append(""); return; } for (var i = 0; i < data.result.length; i++) { var v = data.result[i]; var tsName = self.metricToTsName(v.metric); var valueText = ""; for (var j = 0; j < v.values.length; j++) { valueText += v.values[j][1] + " @" + v.values[j][0] + "
    "; } tBody.append("
    "); } break; case "scalar": tBody.append(""); break; case "string": tBody.append(""); break; default: self.showError("Unsupported value type!"); break; } }; function parseGraphOptionsFromURL() { var hashOptions = window.location.hash.slice(1); if (!hashOptions) { return []; } var optionsJSON = decodeURIComponent(window.location.hash.slice(1)); options = JSON.parse(optionsJSON); return options; } // NOTE: This needs to be kept in sync with rules/helpers.go:GraphLinkForExpression! function storeGraphOptionsInURL() { var allGraphsOptions = []; for (var i = 0; i < graphs.length; i++) { allGraphsOptions.push(graphs[i].getOptions()); } var optionsJSON = JSON.stringify(allGraphsOptions); window.location.hash = encodeURIComponent(optionsJSON); } function addGraph(options) { var graph = new Prometheus.Graph($("#graph_container"), options); graphs.push(graph); graph.onChange(function() { storeGraphOptionsInURL(); }); $(window).resize(function() { graph.resizeGraph(); }); } function escapeHTML(string) { var entityMap = { "&": "&", "<": "<", ">": ">", '"': '"', "'": ''', "/": '/' }; return String(string).replace(/[&<>"'\/]/g, function (s) { return entityMap[s]; }); } function init() { $.ajaxSetup({ cache: false }); $.ajax({ url: PATH_PREFIX + "/static/js/graph_template.handlebar", success: function(data) { graphTemplate = Handlebars.compile(data); var options = parseGraphOptionsFromURL(); if (options.length === 0) { options.push({}); } for (var i = 0; i < options.length; i++) { addGraph(options[i]); } $("#add_graph").click(function() { addGraph({}); }); } }); } $(init); prometheus-0.16.2+ds/web/blob/static/js/graph_template.handlebar000066400000000000000000000137661265137125100246510ustar00rootroot00000000000000
    ajax_spinner
    Device Up Ports Up Ports Total In Out Discards Errors
    {{ .Labels.instance }} Yes{{ else }} class="alert-danger">No{{ end }} {{ query (printf "ifOperStatus{job='snmp',instance='%s'} == 1" .Labels.instance) | len }} {{ template "prom_query_drilldown" (args (printf "count(ifOperStatus{job='snmp',instance='%s'})" .Labels.instance) ) }} {{ template "prom_query_drilldown" (args (printf "8 * sum by (instance)(irate(ifHCInOctets{job='snmp',instance='%s'}[5m]) or rate(ifInOctets{job='snmp',instance='%s'}[5m]))" .Labels.instance .Labels.instance) "b/s" "humanize")}} {{ template "prom_query_drilldown" (args (printf "8 * sum by (instance)(irate(ifHCOutOctets{job='snmp',instance='%s'}[5m]) or rate(ifOutOctets{job='snmp',instance='%s'}[5m]))" .Labels.instance .Labels.instance) "b/s" "humanize")}} {{ template "prom_query_drilldown" (args (printf "8 * sum by (instance)(irate(ifInDiscards{job='snmp',instance='%s'}[5m]) or rate(ifOutDiscards{job='snmp',instance='%s'}[5m]))" .Labels.instance .Labels.instance) "/s" "humanizeNoSmallPrefix")}} {{ template "prom_query_drilldown" (args (printf "8 * sum by (instance)(irate(ifInErrors{job='snmp',instance='%s'}[5m]) or rate(ifOutErrors{job='snmp',instance='%s'}[5m]))" .Labels.instance .Labels.instance) "/s" "humanizeNoSmallPrefix")}}
    No devices found.
    no data
    " + escapeHTML(tsName) + "" + s.value[1] + "
    no data
    " + escapeHTML(tsName) + "" + valueText + "
    scalar" + data.result[1] + "
    string" + escapeHTML(data.result[1]) + "
    Element Value
    no data
    prometheus-0.16.2+ds/web/blob/static/js/prom_console.js000066400000000000000000000507171265137125100230450ustar00rootroot00000000000000/* * Functions to make it easier to write prometheus consoles, such * as graphs. * */ PromConsole = {}; PromConsole.NumberFormatter = {}; PromConsole.NumberFormatter.prefixesBig = ["k", "M", "G", "T", "P", "E", "Z", "Y"]; PromConsole.NumberFormatter.prefixesBig1024 = ["ki", "Mi", "Gi", "Ti", "Pi", "Ei", "Zi", "Yi"]; PromConsole.NumberFormatter.prefixesSmall = ["m", "u", "n", "p", "f", "a", "z", "y"]; PromConsole._stripTrailingZero = function(x) { if (x.indexOf("e") == -1) { // It's not safe to strip if it's scientific notation. return x.replace(/\.?0*$/, ''); } return x; }; // Humanize a number. PromConsole.NumberFormatter.humanize = function(x) { var ret = PromConsole.NumberFormatter._humanize( x, PromConsole.NumberFormatter.prefixesBig, PromConsole.NumberFormatter.prefixesSmall, 1000); x = ret[0]; var prefix = ret[1]; if (Math.abs(x) < 1) { return x.toExponential(3) + prefix; } return PromConsole._stripTrailingZero(x.toFixed(3)) + prefix; }; // Humanize a number, don't use milli/micro/etc. prefixes. PromConsole.NumberFormatter.humanizeNoSmallPrefix = function(x) { if (Math.abs(x) < 1) { return PromConsole._stripTrailingZero(x.toPrecision(3)); } var ret = PromConsole.NumberFormatter._humanize( x, PromConsole.NumberFormatter.prefixesBig, [], 1000); x = ret[0]; var prefix = ret[1]; return PromConsole._stripTrailingZero(x.toFixed(3)) + prefix; }; // Humanize a number with 1024 as the base, rather than 1000. PromConsole.NumberFormatter.humanize1024 = function(x) { var ret = PromConsole.NumberFormatter._humanize( x, PromConsole.NumberFormatter.prefixesBig1024, [], 1024); x = ret[0]; var prefix = ret[1]; if (Math.abs(x) < 1) { return x.toExponential(3) + prefix; } return PromConsole._stripTrailingZero(x.toFixed(3)) + prefix; }; // Humanize a number, returning an exact representation. PromConsole.NumberFormatter.humanizeExact = function(x) { var ret = PromConsole.NumberFormatter._humanize( x, PromConsole.NumberFormatter.prefixesBig, PromConsole.NumberFormatter.prefixesSmall, 1000); return ret[0] + ret[1]; }; PromConsole.NumberFormatter._humanize = function(x, prefixesBig, prefixesSmall, factor) { var prefix = ""; if (x === 0) { /* Do nothing. 
*/ } else if (Math.abs(x) >= 1) { for (var i=0; i < prefixesBig.length && Math.abs(x) >= factor; ++i) { x /= factor; prefix = prefixesBig[i]; } } else { for (var i=0; i < prefixesSmall.length && Math.abs(x) < 1; ++i) { x *= factor; prefix = prefixesSmall[i]; } } return [x, prefix]; }; PromConsole.TimeControl = function() { document.getElementById("prom_graph_duration_shrink").onclick = this.decreaseDuration.bind(this); document.getElementById("prom_graph_duration_grow").onclick = this.increaseDuration.bind(this); document.getElementById("prom_graph_time_back").onclick = this.decreaseEnd.bind(this); document.getElementById("prom_graph_time_forward").onclick = this.increaseEnd.bind(this); document.getElementById("prom_graph_refresh_button").onclick = this.refresh.bind(this); this.durationElement = document.getElementById("prom_graph_duration"); this.endElement = document.getElementById("prom_graph_time_end"); this.durationElement.oninput = this.dispatch.bind(this); this.endElement.oninput = this.dispatch.bind(this); this.endElement.oninput = this.dispatch.bind(this); this.refreshValueElement = document.getElementById("prom_graph_refresh_button_value"); var refreshList = document.getElementById("prom_graph_refresh_intervals"); var refreshIntervals = ["Off", "1m", "5m", "15m", "1h"]; for (var i=0; i < refreshIntervals.length; ++i) { var li = document.createElement("li"); li.onclick = this.setRefresh.bind(this, refreshIntervals[i]); li.textContent = refreshIntervals[i]; refreshList.appendChild(li); } this.durationElement.value = PromConsole.TimeControl.prototype.getHumanDuration( PromConsole.TimeControl._initialValues.duration); if (PromConsole.TimeControl._initialValues.endTimeNow === undefined) { this.endElement.value = PromConsole.TimeControl.prototype.getHumanDate( new Date(PromConsole.TimeControl._initialValues.endTime * 1000)); } }; PromConsole.TimeControl.timeFactors = { "y": 60 * 60 * 24 * 365, "w": 60 * 60 * 24 * 7, "d": 60 * 60 * 24, "h": 60 * 60, "m": 60, "s": 1 }; PromConsole.TimeControl.stepValues = [ "10s", "1m", "5m", "15m", "30m", "1h", "2h", "6h", "12h", "1d", "2d", "1w", "2w", "4w", "8w", "1y", "2y" ]; PromConsole.TimeControl.prototype._setHash = function() { var duration = this.parseDuration(this.durationElement.value); var endTime = this.getEndDate() / 1000; window.location.hash = "#pctc" + encodeURIComponent(JSON.stringify( {duration: duration, endTime: endTime})); }; PromConsole.TimeControl._initialValues = function() { var hash = window.location.hash; if (hash.indexOf('#pctc') === 0) { return JSON.parse(decodeURIComponent(hash.substring(5))); } return {duration: 3600, endTime: new Date().getTime() / 1000, endTimeNow: true}; }(); PromConsole.TimeControl.prototype.parseDuration = function(durationText) { var durationRE = new RegExp("^([0-9]+)([ywdhms]?)$"); var matches = durationText.match(durationRE); if (!matches) { return 3600; } var value = parseInt(matches[1]); var unit = matches[2] || 's'; return value * PromConsole.TimeControl.timeFactors[unit]; }; PromConsole.TimeControl.prototype.getHumanDuration = function(duration) { var units = []; for (var key in PromConsole.TimeControl.timeFactors) { units.push([PromConsole.TimeControl.timeFactors[key], key]); } units.sort(function(a, b) { return b[0] - a[0]; }); for (var i = 0; i < units.length; ++i) { if (duration % units[i][0] === 0) { return (duration / units[i][0]) + units[i][1]; } } return duration; }; PromConsole.TimeControl.prototype.increaseDuration = function() { var durationSeconds = 
this.parseDuration(this.durationElement.value); for (var i = 0; i < PromConsole.TimeControl.stepValues.length; i++) { if (durationSeconds < this.parseDuration(PromConsole.TimeControl.stepValues[i])) { this.setDuration(PromConsole.TimeControl.stepValues[i]); this.dispatch(); return; } } }; PromConsole.TimeControl.prototype.decreaseDuration = function() { var durationSeconds = this.parseDuration(this.durationElement.value); for (var i = PromConsole.TimeControl.stepValues.length - 1; i >= 0; i--) { if (durationSeconds > this.parseDuration(PromConsole.TimeControl.stepValues[i])) { this.setDuration(PromConsole.TimeControl.stepValues[i]); this.dispatch(); return; } } }; PromConsole.TimeControl.prototype.setDuration = function(duration) { this.durationElement.value = duration; this._setHash(); }; PromConsole.TimeControl.prototype.getEndDate = function() { if (this.endElement.value === '') { return null; } return new Date(this.endElement.value).getTime(); }; PromConsole.TimeControl.prototype.getOrSetEndDate = function() { var date = this.getEndDate(); if (date) { return date; } date = new Date(); this.setEndDate(date); return date; }; PromConsole.TimeControl.prototype.getHumanDate = function(date) { var hours = date.getHours() < 10 ? '0' + date.getHours() : date.getHours(); var minutes = date.getMinutes() < 10 ? '0' + date.getMinutes() : date.getMinutes(); return date.getFullYear() + "-" + (date.getMonth()+1) + "-" + date.getDate() + " " + hours + ":" + minutes; }; PromConsole.TimeControl.prototype.setEndDate = function(date) { this.setRefresh("Off"); this.endElement.value = this.getHumanDate(date); this._setHash(); }; PromConsole.TimeControl.prototype.increaseEnd = function() { // Increase duration 25% range & convert ms to s. this.setEndDate(new Date(this.getOrSetEndDate() + this.parseDuration(this.durationElement.value) * 1000/4 )); this.dispatch(); }; PromConsole.TimeControl.prototype.decreaseEnd = function() { this.setEndDate(new Date(this.getOrSetEndDate() - this.parseDuration(this.durationElement.value) * 1000/4 )); this.dispatch(); }; PromConsole.TimeControl.prototype.refresh = function() { this.endElement.value = ''; this._setHash(); this.dispatch(); }; PromConsole.TimeControl.prototype.dispatch = function() { var durationSeconds = this.parseDuration(this.durationElement.value); var end = this.getEndDate(); if (end === null) { end = new Date().getTime(); } for (var i = 0; i< PromConsole._graph_registry.length; i++) { var graph = PromConsole._graph_registry[i]; graph.params.duration = durationSeconds; graph.params.endTime = end / 1000; graph.dispatch(); } }; PromConsole.TimeControl.prototype._refreshInterval = null; PromConsole.TimeControl.prototype.setRefresh = function(duration) { if (this._refreshInterval !== null) { window.clearInterval(this._refreshInterval); this._refreshInterval = null; } if (duration != "Off") { if (this.endElement.value !== '') { this.refresh(); } var durationSeconds = this.parseDuration(duration); this._refreshInterval = window.setInterval(this.dispatch.bind(this), durationSeconds * 1000); } this.refreshValueElement.textContent = duration; }; // List of all graphs, used by time controls. PromConsole._graph_registry = []; PromConsole.graphDefaults = { expr: null, // Expression to graph. Can be a list of strings. node: null, // DOM node to place graph under. // How long the graph is over, in seconds. duration: PromConsole.TimeControl._initialValues.duration, // The unixtime the graph ends at. 
endTime: PromConsole.TimeControl._initialValues.endTime, width: null, // Height of the graph div, excluding titles and legends. // Defaults to auto-detection. height: 200, // Height of the graph div, excluding titles and legends. min: "auto", // Minimum Y-axis value, defaults to lowest data value. max: undefined, // Maximum Y-axis value, defaults to highest data value. renderer: 'line', // Type of graphs, options are 'line' and 'area'. name: null, // What to call plots, defaults to trying to do // something reasonable. // If a string, it'll use that. [[ label ]] will be substituted. // If a function it'll be called with a map of keys to values, // and should return the name to use. // Can be a list of strings/functions, each element // will be applied to the plots from the corresponding // element of the expr list. xTitle: "Time", // The title of the x axis. yUnits: "", // The units of the y axis. yTitle: "", // The title of the y axis. // Number formatter for y axis. yAxisFormatter: PromConsole.NumberFormatter.humanize, // Number formatter for y values hover detail. yHoverFormatter: PromConsole.NumberFormatter.humanizeExact, }; PromConsole.Graph = function(params) { for (var k in PromConsole.graphDefaults) { if (!(k in params)) { params[k] = PromConsole.graphDefaults[k]; } } if (typeof params.expr == "string") { params.expr = [params.expr]; } if (typeof params.name == "string" || typeof params.name == "function") { var name = []; for (var i = 0; i < params.expr.length; i++) { name.push(params.name); } params.name = name; } this.params = params; this.rendered_data = null; PromConsole._graph_registry.push(this); /* * Table layout: * | yTitle | Graph | * | | xTitle | * | /graph | Legend | */ var table = document.createElement("table"); table.className = "prom_graph_table"; params.node.appendChild(table); var tr = document.createElement("tr"); table.appendChild(tr); var yTitleTd = document.createElement("td"); tr.appendChild(yTitleTd); var yTitleDiv = document.createElement("td"); yTitleTd.appendChild(yTitleDiv); yTitleDiv.className = "prom_graph_ytitle"; yTitleDiv.textContent = params.yTitle + (params.yUnits ? " (" + params.yUnits.trim() + ")" : ""); this.graphTd = document.createElement("td"); tr.appendChild(this.graphTd); this.graphTd.className = "rickshaw_graph"; this.graphTd.width = params.width; this.graphTd.height = params.height; tr = document.createElement("tr"); table.appendChild(tr); tr.appendChild(document.createElement("td")); var xTitleTd = document.createElement("td"); tr.appendChild(xTitleTd); xTitleTd.className = "prom_graph_xtitle"; xTitleTd.textContent = params.xTitle; tr = document.createElement("tr"); table.appendChild(tr); var graphLinkTd = document.createElement("td"); tr.appendChild(graphLinkTd); var graphLinkA = document.createElement("a"); graphLinkTd.appendChild(graphLinkA); graphLinkA.className = "prom_graph_link"; graphLinkA.textContent = "+"; graphLinkA.href = PromConsole._graphsToSlashGraphURL(params.expr); var legendTd = document.createElement("td"); tr.appendChild(legendTd); this.legendDiv = document.createElement("div"); legendTd.width = params.width; legendTd.appendChild(this.legendDiv); window.addEventListener('resize', function() { if(this.rendered_data !== null) { this._render(this.rendered_data); } }.bind(this)); this.dispatch(); }; PromConsole.Graph.prototype._parseValue = function(value) { var val = parseFloat(value); if (isNaN(val)) { // "+Inf", "-Inf", "+Inf" will be parsed into NaN by parseFloat(). 
They can't be graphed, so show them as gaps (null). return null; } return val; }; PromConsole.Graph.prototype._escapeHTML = function(string) { var entityMap = { "&": "&amp;", "<": "&lt;", ">": "&gt;", '"': '&quot;', "'": '&#39;', "/": '&#x2F;' }; return string.replace(/[&<>"'\/]/g, function (s) { return entityMap[s]; }); }; PromConsole.Graph.prototype._render = function(data) { var self = this; var palette = new Rickshaw.Color.Palette(); var series = []; // This will be used on resize. this.rendered_data = data; var nameFuncs = []; if (this.params.name === null) { var chooser = PromConsole._chooseNameFunction(data); for (var i = 0; i < this.params.expr.length; i++) { nameFuncs.push(chooser); } } else { for (var i = 0; i < this.params.name.length; i++) { if (typeof this.params.name[i] == "string") { nameFuncs.push(function(i, metric) { return PromConsole._interpolateName(this.params.name[i], metric); }.bind(this, i)); } else { nameFuncs.push(this.params.name[i]); } } } // Get the data into the right format. var seriesLen = 0; for (var e = 0; e < data.length; e++) { for (var i = 0; i < data[e].value.length; i++) { series[seriesLen++] = { data: data[e].value[i].values.map(function(s) { return {x: s[0], y: self._parseValue(s[1])}; }), color: palette.color(), name: self._escapeHTML(nameFuncs[e](data[e].value[i].metric)), }; } } this._clearGraph(); if (!series.length) { var errorText = document.createElement("div"); errorText.className = 'prom_graph_error'; errorText.textContent = 'No timeseries returned'; this.graphTd.appendChild(errorText); return; } // Render. var graph = new Rickshaw.Graph({ interpolation: "linear", width: this.graphTd.offsetWidth, height: this.params.height, element: this.graphTd, renderer: this.params.renderer, max: this.params.max, min: this.params.min, series: series }); var hoverDetail = new Rickshaw.Graph.HoverDetail({ graph: graph, onRender: function() { var xLabel = this.element.getElementsByClassName("x_label")[0]; var item = this.element.getElementsByClassName("item")[0]; if (xLabel.offsetWidth + xLabel.offsetLeft + this.element.offsetLeft > graph.element.offsetWidth || item.offsetWidth + item.offsetLeft + this.element.offsetLeft > graph.element.offsetWidth) { xLabel.classList.add("prom_graph_hover_flipped"); item.classList.add("prom_graph_hover_flipped"); } else { xLabel.classList.remove("prom_graph_hover_flipped"); item.classList.remove("prom_graph_hover_flipped"); } }, yFormatter: function(y) { return this.params.yHoverFormatter(y) + this.params.yUnits; }.bind(this) }); var yAxis = new Rickshaw.Graph.Axis.Y({ graph: graph, tickFormat: this.params.yAxisFormatter }); var xAxis = new Rickshaw.Graph.Axis.Time({ graph: graph, }); var legend = new Rickshaw.Graph.Legend({ graph: graph, element: this.legendDiv }); xAxis.render(); yAxis.render(); graph.render(); }; PromConsole.Graph.prototype._clearGraph = function() { while (this.graphTd.lastChild) { this.graphTd.removeChild(this.graphTd.lastChild); } while (this.legendDiv.lastChild) { this.legendDiv.removeChild(this.legendDiv.lastChild); } }; PromConsole.Graph.prototype._xhrs = []; PromConsole.Graph.prototype.dispatch = function() { for (var j = 0; j < this._xhrs.length; j++) { this._xhrs[j].abort(); } var all_data = new Array(this.params.expr.length); this._xhrs = new Array(this.params.expr.length); var pending_requests = this.params.expr.length; for (var i = 0; i < this.params.expr.length; ++i) { var endTime = this.params.endTime; var url = PATH_PREFIX + "/api/query_range?expr=" + encodeURIComponent(this.params.expr[i]) + "&step=" +
this.params.duration / this.graphTd.offsetWidth + "&range=" + this.params.duration + "&end=" + endTime; var xhr = new XMLHttpRequest(); xhr.open('get', url, true); xhr.responseType = 'json'; xhr.onerror = function(xhr, i) { this._clearGraph(); var errorText = document.createElement("div"); errorText.className = 'prom_graph_error'; errorText.textContent = 'Error loading data'; this.graphTd.appendChild(errorText); console.log('Error loading data for ' + this.params.expr[i]); pending_requests = 0; // onabort gets any aborts. for (var j = 0; j < pending_requests; j++) { this._xhrs[j].abort(); } }.bind(this, xhr, i); xhr.onload = function(xhr, i) { if (pending_requests === 0) { // Got an error before this success. return; } var data = xhr.response; pending_requests -= 1; all_data[i] = data; if (pending_requests === 0) { this._xhrs = []; this._render(all_data); } }.bind(this, xhr, i); xhr.send(); this._xhrs[i] = xhr; } var loadingImg = document.createElement("img"); loadingImg.src = PATH_PREFIX + '/static/img/ajax-loader.gif'; loadingImg.alt = 'Loading...'; loadingImg.className = 'prom_graph_loading'; this.graphTd.appendChild(loadingImg); }; // Substitute the value of 'label' for [[ label ]]. PromConsole._interpolateName = function(name, metric) { var re = /(.*?)\[\[\s*(\w+)+\s*\]\](.*?)/g; var result = ''; while (match = re.exec(name)) { result = result + match[1] + metric[match[2]] + match[3]; } if (!result) { return name; } return result; }; // Given the data returned by the API, return an appropriate function // to return plot names. PromConsole._chooseNameFunction = function(data) { // By default, use the full metric name. var nameFunc = function (metric) { name = metric.__name__ + "{"; for (var label in metric) { if (label.substring(0,2) == "__") { continue; } name += label + "='" + metric[label] + "',"; } return name + "}"; }; // If only one label varies, use that value. var labelValues = {}; for (var e = 0; e < data.length; e++) { for (var i = 0; i < data[e].value.length; i++) { for (var label in data[e].value[i].metric) { if (!(label in labelValues)) { labelValues[label] = {}; } labelValues[label][data[e].value[i].metric[label]] = 1; } } } var multiValueLabels = []; for (var label in labelValues) { if (Object.keys(labelValues[label]).length > 1) { multiValueLabels.push(label); } } if (multiValueLabels.length == 1) { nameFunc = function(metric) { return metric[multiValueLabels[0]]; }; } return nameFunc; }; // Given a list of expressions, produce the /graph url for them. 
PromConsole._graphsToSlashGraphURL = function(exprs) { var data = []; for (var i = 0; i < exprs.length; ++i) { data.push({'expr': exprs[i], 'tab': 0}); } return PATH_PREFIX + '/graph#' + encodeURIComponent(JSON.stringify(data)); }; prometheus-0.16.2+ds/web/blob/static/vendor/000077500000000000000000000000001265137125100206575ustar00rootroot00000000000000prometheus-0.16.2+ds/web/blob/static/vendor/bootstrap-datetimepicker/000077500000000000000000000000001265137125100256645ustar00rootroot00000000000000prometheus-0.16.2+ds/web/blob/static/vendor/bootstrap-datetimepicker/bootstrap-datetimepicker.js000066400000000000000000001477011265137125100332410ustar00rootroot00000000000000/** * version 1.0.4 * @license * ========================================================= * bootstrap-datetimepicker.js * http://www.eyecon.ro/bootstrap-datepicker * ========================================================= * Copyright 2012 Stefan Petre * * Contributions: * - Andrew Rowls * - Thiago de Arruda * - updated for Bootstrap v3 by Jonathan Peterson @Eonasdan * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * ========================================================= */ (function ($) { // Picker object var smartPhone = (window.orientation !== undefined); var DateTimePicker = function (element, options) { this.id = dpgId++; this.init(element, options); }; var dateToDate = function (dt) { if (typeof dt === 'string') { return new Date(dt); } return dt; }; DateTimePicker.prototype = { constructor: DateTimePicker, init: function (element, options) { var icon = false; if (!(options.pickTime || options.pickDate)) throw new Error('Must choose at least one picker'); this.options = options; this.$element = $(element); this.language = options.language in dates ? options.language : 'en'; this.pickDate = options.pickDate; this.pickTime = options.pickTime; this.isInput = this.$element.is('input'); this.component = false; if (this.$element.hasClass('input-group')) this.component = this.$element.find('.input-group-addon'); this.format = options.format; if (!this.format) { if (dates[this.language].format != null) this.format = dates[this.language].format; else if (this.isInput) this.format = this.$element.data('format'); else this.format = this.$element.find('input').data('format'); if (!this.format) this.format = (this.pickDate ? 'MM/dd/yyyy' : '') this.format += (this.pickTime ? ' hh:mm' : '') + (this.pickSeconds ? 
':ss' : ''); } this._compileFormat(); if (this.component) icon = this.component.find('span'); if (this.pickTime) { if (icon && icon.length) { this.timeIcon = icon.data('time-icon'); this.upIcon = icon.data('up-icon'); this.downIcon = icon.data('down-icon'); } if (!this.timeIcon) this.timeIcon = 'glyphicon glyphicon-time'; if (!this.upIcon) this.upIcon = 'glyphicon glyphicon-chevron-up'; if (!this.downIcon) this.downIcon = 'glyphicon glyphicon-chevron-down'; if (icon) icon.addClass(this.timeIcon); } if (this.pickDate) { if (icon && icon.length) this.dateIcon = icon.data('date-icon'); if (!this.dateIcon) this.dateIcon = 'glyphicon glyphicon-calendar'; if (icon) { icon.removeClass(this.timeIcon); icon.addClass(this.dateIcon); } } this.widget = $(getTemplate(this.timeIcon, this.upIcon, this.downIcon, options.pickDate, options.pickTime, options.pick12HourFormat, options.pickSeconds, options.collapse)).appendTo('body'); this.minViewMode = options.minViewMode || this.$element.data('date-minviewmode') || 0; if (typeof this.minViewMode === 'string') { switch (this.minViewMode) { case 'months': this.minViewMode = 1; break; case 'years': this.minViewMode = 2; break; default: this.minViewMode = 0; break; } } this.viewMode = options.viewMode || this.$element.data('date-viewmode') || 0; if (typeof this.viewMode === 'string') { switch (this.viewMode) { case 'months': this.viewMode = 1; break; case 'years': this.viewMode = 2; break; default: this.viewMode = 0; break; } } this.startViewMode = this.viewMode; this.weekStart = options.weekStart || this.$element.data('date-weekstart') || 0; this.weekEnd = this.weekStart === 0 ? 6 : this.weekStart - 1; this.setStartDate(options.startDate || this.$element.data('date-startdate')); this.setEndDate(options.endDate || this.$element.data('date-enddate')); this.fillDow(); this.fillMonths(); this.fillHours(); this.fillMinutes(); this.fillSeconds(); this.update(); this.showMode(); this._attachDatePickerEvents(); }, show: function (e) { this.widget.show(); this.height = this.component ? 
this.component.outerHeight() : this.$element.outerHeight(); this.place(); this.$element.trigger({ type: 'show', date: this._date }); this._attachDatePickerGlobalEvents(); if (e) { e.stopPropagation(); e.preventDefault(); } }, disable: function () { this.$element.find('input').prop('disabled', true); this._detachDatePickerEvents(); }, enable: function () { this.$element.find('input').prop('disabled', false); this._attachDatePickerEvents(); }, hide: function () { // Ignore event if in the middle of a picker transition var collapse = this.widget.find('.collapse'); for (var i = 0; i < collapse.length; i++) { var collapseData = collapse.eq(i).data('collapse'); if (collapseData && collapseData.transitioning) return; } this.widget.hide(); this.viewMode = this.startViewMode; this.showMode(); this.$element.trigger({ type: 'hide', date: this._date }); this._detachDatePickerGlobalEvents(); }, set: function () { var formatted = ''; if (!this._unset) formatted = this.formatDate(this._date); if (!this.isInput) { if (this.component) { var input = this.$element.find('input'); input.val(formatted); this._resetMaskPos(input); } this.$element.data('date', formatted); } else { this.$element.val(formatted); this._resetMaskPos(this.$element); } if (!this.pickTime) this.hide(); }, setValue: function (newDate) { if (!newDate) { this._unset = true; } else { this._unset = false; } if (typeof newDate === 'string') { this._date = this.parseDate(newDate); } else if (newDate) { this._date = new Date(newDate); } this.set(); this.viewDate = new UTCDate(this._date.getUTCFullYear(), this._date.getUTCMonth(), 1, 0, 0, 0, 0); this.fillDate(); this.fillTime(); }, getDate: function () { if (this._unset) return null; return new Date(this._date.valueOf()); }, setDate: function (date) { if (!date) this.setValue(null); else this.setValue(date.valueOf()); }, setStartDate: function (date) { if (date instanceof Date) { this.startDate = date; } else if (typeof date === 'string') { this.startDate = new UTCDate(date); if (!this.startDate.getUTCFullYear()) { this.startDate = -Infinity; } } else { this.startDate = -Infinity; } if (this.viewDate) { this.update(); } }, setEndDate: function (date) { if (date instanceof Date) { this.endDate = date; } else if (typeof date === 'string') { this.endDate = new UTCDate(date); if (!this.endDate.getUTCFullYear()) { this.endDate = Infinity; } } else { this.endDate = Infinity; } if (this.viewDate) { this.update(); } }, getLocalDate: function () { if (this._unset) return null; var d = this._date; return new Date(d.getUTCFullYear(), d.getUTCMonth(), d.getUTCDate(), d.getUTCHours(), d.getUTCMinutes(), d.getUTCSeconds(), d.getUTCMilliseconds()); }, setLocalDate: function (localDate) { if (!localDate) this.setValue(null); else this.setValue(Date.UTC( localDate.getFullYear(), localDate.getMonth(), localDate.getDate(), localDate.getHours(), localDate.getMinutes(), localDate.getSeconds(), localDate.getMilliseconds())); }, place: function () { var position = 'absolute'; var offset = this.component ? this.component.offset() : this.$element.offset(); this.width = this.component ? 
this.component.outerWidth() : this.$element.outerWidth(); offset.top = offset.top + this.height; var $window = $(window); if (this.options.width !== undefined) { this.widget.width(this.options.width); } if (this.options.orientation === 'left') { this.widget.addClass('left-oriented'); offset.left = offset.left - this.widget.width() + 20; } if (this._isInFixed()) { position = 'fixed'; offset.top -= $window.scrollTop(); offset.left -= $window.scrollLeft(); } if ($window.width() < offset.left + this.widget.outerWidth()) { offset.right = $window.width() - offset.left - this.width; offset.left = 'auto'; this.widget.addClass('pull-right'); } else { offset.right = 'auto'; this.widget.removeClass('pull-right'); } this.widget.css({ position: position, top: offset.top, left: offset.left, right: offset.right }); }, notifyChange: function () { this.$element.trigger({ type: 'changeDate', date: this.getDate(), localDate: this.getLocalDate() }); }, update: function (newDate) { var dateStr = newDate; if (!dateStr) { if (this.isInput) { dateStr = this.$element.val(); } else { dateStr = this.$element.find('input').val(); } if (dateStr) { this._date = this.parseDate(dateStr); } if (!this._date) { var tmp = new Date(); this._date = new UTCDate(tmp.getFullYear(), tmp.getMonth(), tmp.getDate(), tmp.getHours(), tmp.getMinutes(), tmp.getSeconds(), tmp.getMilliseconds()); } } this.viewDate = new UTCDate(this._date.getUTCFullYear(), this._date.getUTCMonth(), 1, 0, 0, 0, 0); this.fillDate(); this.fillTime(); }, fillDow: function () { var dowCnt = this.weekStart; var html = $(''); while (dowCnt < this.weekStart + 7) { html.append('' + dates[this.language].daysMin[(dowCnt++) % 7] + ''); } this.widget.find('.datepicker-days thead').append(html); }, fillMonths: function () { var html = ''; var i = 0; while (i < 12) { html += '' + dates[this.language].monthsShort[i++] + ''; } this.widget.find('.datepicker-months td').append(html); }, fillDate: function () { var year = this.viewDate.getUTCFullYear(); var month = this.viewDate.getUTCMonth(); var currentDate = UTCDate( this._date.getUTCFullYear(), this._date.getUTCMonth(), this._date.getUTCDate(), 0, 0, 0, 0 ); var startYear = typeof this.startDate === 'object' ? this.startDate.getUTCFullYear() : -Infinity; var startMonth = typeof this.startDate === 'object' ? this.startDate.getUTCMonth() : -1; var endYear = typeof this.endDate === 'object' ? this.endDate.getUTCFullYear() : Infinity; var endMonth = typeof this.endDate === 'object' ? 
this.endDate.getUTCMonth() : 12; this.widget.find('.datepicker-days').find('.disabled').removeClass('disabled'); this.widget.find('.datepicker-months').find('.disabled').removeClass('disabled'); this.widget.find('.datepicker-years').find('.disabled').removeClass('disabled'); this.widget.find('.datepicker-days th:eq(1)').text( dates[this.language].months[month] + ' ' + year); var prevMonth = UTCDate(year, month - 1, 28, 0, 0, 0, 0); var day = DPGlobal.getDaysInMonth( prevMonth.getUTCFullYear(), prevMonth.getUTCMonth()); prevMonth.setUTCDate(day); prevMonth.setUTCDate(day - (prevMonth.getUTCDay() - this.weekStart + 7) % 7); if ((year == startYear && month <= startMonth) || year < startYear) { this.widget.find('.datepicker-days th:eq(0)').addClass('disabled'); } if ((year == endYear && month >= endMonth) || year > endYear) { this.widget.find('.datepicker-days th:eq(2)').addClass('disabled'); } var nextMonth = new Date(prevMonth.valueOf()); nextMonth.setUTCDate(nextMonth.getUTCDate() + 42); nextMonth = nextMonth.valueOf(); var html = []; var row; var clsName; while (prevMonth.valueOf() < nextMonth) { if (prevMonth.getUTCDay() === this.weekStart) { row = $(''); html.push(row); } clsName = ''; if (prevMonth.getUTCFullYear() < year || (prevMonth.getUTCFullYear() == year && prevMonth.getUTCMonth() < month)) { clsName += ' old'; } else if (prevMonth.getUTCFullYear() > year || (prevMonth.getUTCFullYear() == year && prevMonth.getUTCMonth() > month)) { clsName += ' new'; } if (prevMonth.valueOf() === currentDate.valueOf()) { clsName += ' active'; } if ((prevMonth.valueOf() + 86400000) <= this.startDate) { clsName += ' disabled'; } if (prevMonth.valueOf() > this.endDate) { clsName += ' disabled'; } row.append('' + prevMonth.getUTCDate() + ''); prevMonth.setUTCDate(prevMonth.getUTCDate() + 1); } this.widget.find('.datepicker-days tbody').empty().append(html); var currentYear = this._date.getUTCFullYear(); var months = this.widget.find('.datepicker-months').find( 'th:eq(1)').text(year).end().find('span').removeClass('active'); if (currentYear === year) { months.eq(this._date.getUTCMonth()).addClass('active'); } if (currentYear - 1 < startYear) { this.widget.find('.datepicker-months th:eq(0)').addClass('disabled'); } if (currentYear + 1 > endYear) { this.widget.find('.datepicker-months th:eq(2)').addClass('disabled'); } for (var i = 0; i < 12; i++) { if ((year == startYear && startMonth > i) || (year < startYear)) { $(months[i]).addClass('disabled'); } else if ((year == endYear && endMonth < i) || (year > endYear)) { $(months[i]).addClass('disabled'); } } html = ''; year = parseInt(year / 10, 10) * 10; var yearCont = this.widget.find('.datepicker-years').find( 'th:eq(1)').text(year + '-' + (year + 9)).end().find('td'); this.widget.find('.datepicker-years').find('th').removeClass('disabled'); if (startYear > year) { this.widget.find('.datepicker-years').find('th:eq(0)').addClass('disabled'); } if (endYear < year + 9) { this.widget.find('.datepicker-years').find('th:eq(2)').addClass('disabled'); } year -= 1; for (var i = -1; i < 11; i++) { html += '' + year + ''; year += 1; } yearCont.html(html); }, fillHours: function () { var table = this.widget.find( '.timepicker .timepicker-hours table'); table.parent().hide(); var html = ''; if (this.options.pick12HourFormat) { var current = 1; for (var i = 0; i < 3; i += 1) { html += ''; for (var j = 0; j < 4; j += 1) { var c = current.toString(); html += '' + padLeft(c, 2, '0') + ''; current++; } html += ''; } } else { var current = 0; for (var i = 0; i < 6; i += 1) 
{ html += ''; for (var j = 0; j < 4; j += 1) { var c = current.toString(); html += '' + padLeft(c, 2, '0') + ''; current++; } html += ''; } } table.html(html); }, fillMinutes: function () { var table = this.widget.find( '.timepicker .timepicker-minutes table'); table.parent().hide(); var html = ''; var current = 0; for (var i = 0; i < 5; i++) { html += ''; for (var j = 0; j < 4; j += 1) { var c = current.toString(); html += '' + padLeft(c, 2, '0') + ''; current += 3; } html += ''; } table.html(html); }, fillSeconds: function () { var table = this.widget.find( '.timepicker .timepicker-seconds table'); table.parent().hide(); var html = ''; var current = 0; for (var i = 0; i < 5; i++) { html += ''; for (var j = 0; j < 4; j += 1) { var c = current.toString(); html += '' + padLeft(c, 2, '0') + ''; current += 3; } html += ''; } table.html(html); }, fillTime: function () { if (!this._date) return; var timeComponents = this.widget.find('.timepicker span[data-time-component]'); var table = timeComponents.closest('table'); var is12HourFormat = this.options.pick12HourFormat; var hour = this._date.getUTCHours(); var period = 'AM'; if (is12HourFormat) { if (hour >= 12) period = 'PM'; if (hour === 0) hour = 12; else if (hour != 12) hour = hour % 12; this.widget.find( '.timepicker [data-action=togglePeriod]').text(period); } hour = padLeft(hour.toString(), 2, '0'); var minute = padLeft(this._date.getUTCMinutes().toString(), 2, '0'); var second = padLeft(this._date.getUTCSeconds().toString(), 2, '0'); timeComponents.filter('[data-time-component=hours]').text(hour); timeComponents.filter('[data-time-component=minutes]').text(minute); timeComponents.filter('[data-time-component=seconds]').text(second); }, click: function (e) { e.stopPropagation(); e.preventDefault(); this._unset = false; var target = $(e.target).closest('span, td, th'); if (target.length === 1) { if (!target.is('.disabled')) { switch (target[0].nodeName.toLowerCase()) { case 'th': switch (target[0].className) { case 'switch': this.showMode(1); break; case 'prev': case 'next': var vd = this.viewDate; var navFnc = DPGlobal.modes[this.viewMode].navFnc; var step = DPGlobal.modes[this.viewMode].navStep; if (target[0].className === 'prev') step = step * -1; vd['set' + navFnc](vd['get' + navFnc]() + step); this.fillDate(); break; } break; case 'span': if (target.is('.month')) { var month = target.parent().find('span').index(target); this.viewDate.setUTCMonth(month); } else { var year = parseInt(target.text(), 10) || 0; this.viewDate.setUTCFullYear(year); } if (this.viewMode !== 0) { this._date = UTCDate( this.viewDate.getUTCFullYear(), this.viewDate.getUTCMonth(), this.viewDate.getUTCDate(), this._date.getUTCHours(), this._date.getUTCMinutes(), this._date.getUTCSeconds(), this._date.getUTCMilliseconds() ); this.notifyChange(); } this.showMode(-1); this.fillDate(); break; case 'td': if (target.is('.day')) { var day = parseInt(target.text(), 10) || 1; var month = this.viewDate.getUTCMonth(); var year = this.viewDate.getUTCFullYear(); if (target.is('.old')) { if (month === 0) { month = 11; year -= 1; } else { month -= 1; } } else if (target.is('.new')) { if (month == 11) { month = 0; year += 1; } else { month += 1; } } this._date = UTCDate( year, month, day, this._date.getUTCHours(), this._date.getUTCMinutes(), this._date.getUTCSeconds(), this._date.getUTCMilliseconds() ); this.viewDate = UTCDate( year, month, Math.min(28, day), 0, 0, 0, 0); this.fillDate(); this.set(); this.notifyChange(); } break; } } } }, actions: { incrementHours: function (e) { 
this._date.setUTCHours(this._date.getUTCHours() + 1); }, incrementMinutes: function (e) { this._date.setUTCMinutes(this._date.getUTCMinutes() + 1); }, incrementSeconds: function (e) { this._date.setUTCSeconds(this._date.getUTCSeconds() + 1); }, decrementHours: function (e) { this._date.setUTCHours(this._date.getUTCHours() - 1); }, decrementMinutes: function (e) { this._date.setUTCMinutes(this._date.getUTCMinutes() - 1); }, decrementSeconds: function (e) { this._date.setUTCSeconds(this._date.getUTCSeconds() - 1); }, togglePeriod: function (e) { var hour = this._date.getUTCHours(); if (hour >= 12) hour -= 12; else hour += 12; this._date.setUTCHours(hour); }, showPicker: function () { this.widget.find('.timepicker > div:not(.timepicker-picker)').hide(); this.widget.find('.timepicker .timepicker-picker').show(); }, showHours: function () { this.widget.find('.timepicker .timepicker-picker').hide(); this.widget.find('.timepicker .timepicker-hours').show(); }, showMinutes: function () { this.widget.find('.timepicker .timepicker-picker').hide(); this.widget.find('.timepicker .timepicker-minutes').show(); }, showSeconds: function () { this.widget.find('.timepicker .timepicker-picker').hide(); this.widget.find('.timepicker .timepicker-seconds').show(); }, selectHour: function (e) { var tgt = $(e.target); var value = parseInt(tgt.text(), 10); if (this.options.pick12HourFormat) { var current = this._date.getUTCHours(); if (current >= 12) { if (value != 12) value = (value + 12) % 24; } else { if (value === 12) value = 0; else value = value % 12; } } this._date.setUTCHours(value); this.actions.showPicker.call(this); }, selectMinute: function (e) { var tgt = $(e.target); var value = parseInt(tgt.text(), 10); this._date.setUTCMinutes(value); this.actions.showPicker.call(this); }, selectSecond: function (e) { var tgt = $(e.target); var value = parseInt(tgt.text(), 10); this._date.setUTCSeconds(value); this.actions.showPicker.call(this); } }, doAction: function (e) { e.stopPropagation(); e.preventDefault(); if (!this._date) this._date = UTCDate(1970, 0, 0, 0, 0, 0, 0); var action = $(e.currentTarget).data('action'); var rv = this.actions[action].apply(this, arguments); this.set(); this.fillTime(); this.notifyChange(); return rv; }, stopEvent: function (e) { e.stopPropagation(); e.preventDefault(); }, // part of the following code was taken from // http://cloud.github.com/downloads/digitalBush/jquery.maskedinput/jquery.maskedinput-1.3.js keydown: function (e) { var self = this, k = e.which, input = $(e.target); if (k == 8 || k == 46) { // backspace and delete cause the maskPosition // to be recalculated setTimeout(function () { self._resetMaskPos(input); }); } }, keypress: function (e) { var k = e.which; if (k == 8 || k == 46) { // For those browsers which will trigger // keypress on backspace/delete return; } var input = $(e.target); var c = String.fromCharCode(k); var val = input.val() || ''; val += c; var mask = this._mask[this._maskPos]; if (!mask) { return false; } if (mask.end != val.length) { return; } if (!mask.pattern.test(val.slice(mask.start))) { val = val.slice(0, val.length - 1); while ((mask = this._mask[this._maskPos]) && mask.character) { val += mask.character; // advance mask position past static // part this._maskPos++; } val += c; if (mask.end != val.length) { input.val(val); return false; } else { if (!mask.pattern.test(val.slice(mask.start))) { input.val(val.slice(0, mask.start)); return false; } else { input.val(val); this._maskPos++; return false; } } } else { this._maskPos++; } }, 
change: function (e) { var input = $(e.target); var val = input.val(); if (this._formatPattern.test(val)) { this.update(); this.setValue(this._date.getTime()); this.notifyChange(); this.set(); } else if (val && val.trim()) { this.setValue(this._date.getTime()); if (this._date) this.set(); else input.val(''); } else { if (this._date) { this.setValue(null); // unset the date when the input is // erased this.notifyChange(); this._unset = true; } } this._resetMaskPos(input); }, showMode: function (dir) { if (dir) { this.viewMode = Math.max(this.minViewMode, Math.min( 2, this.viewMode + dir)); } this.widget.find('.datepicker > div').hide().filter( '.datepicker-' + DPGlobal.modes[this.viewMode].clsName).show(); }, destroy: function () { this._detachDatePickerEvents(); this._detachDatePickerGlobalEvents(); this.widget.remove(); this.$element.removeData('datetimepicker'); this.component.removeData('datetimepicker'); }, formatDate: function (d) { return this.format.replace(formatReplacer, function (match) { var methodName, property, rv, len = match.length; if (match === 'ms') len = 1; property = dateFormatComponents[match].property; if (property === 'Hours12') { rv = d.getUTCHours(); if (rv === 0) rv = 12; else if (rv !== 12) rv = rv % 12; } else if (property === 'Period12') { if (d.getUTCHours() >= 12) return 'PM'; else return 'AM'; } else { methodName = 'get' + property; rv = d[methodName](); } if (methodName === 'getUTCMonth') rv = rv + 1; if (methodName === 'getUTCYear') rv = rv + 1900 - 2000; return padLeft(rv.toString(), len, '0'); }); }, parseDate: function (str) { var match, i, property, methodName, value, parsed = {}; if (!(match = this._formatPattern.exec(str))) return null; for (i = 1; i < match.length; i++) { property = this._propertiesByIndex[i]; if (!property) continue; value = match[i]; if (/^\d+$/.test(value)) value = parseInt(value, 10); parsed[property] = value; } return this._finishParsingDate(parsed); }, _resetMaskPos: function (input) { var val = input.val(); for (var i = 0; i < this._mask.length; i++) { if (this._mask[i].end > val.length) { // If the mask has ended then jump to // the next this._maskPos = i; break; } else if (this._mask[i].end === val.length) { this._maskPos = i + 1; break; } } }, _finishParsingDate: function (parsed) { var year, month, date, hours, minutes, seconds, milliseconds; year = parsed.UTCFullYear; if (parsed.UTCYear) year = 2000 + parsed.UTCYear; if (!year) year = 1970; if (parsed.UTCMonth) month = parsed.UTCMonth - 1; else month = 0; date = parsed.UTCDate || 1; hours = parsed.UTCHours || 0; minutes = parsed.UTCMinutes || 0; seconds = parsed.UTCSeconds || 0; milliseconds = parsed.UTCMilliseconds || 0; if (parsed.Hours12) { hours = parsed.Hours12; } if (parsed.Period12) { if (/pm/i.test(parsed.Period12)) { if (hours != 12) hours = (hours + 12) % 24; } else { hours = hours % 12; } } return UTCDate(year, month, date, hours, minutes, seconds, milliseconds); }, _compileFormat: function () { var match, component, components = [], mask = [], str = this.format, propertiesByIndex = {}, i = 0, pos = 0; while (match = formatComponent.exec(str)) { component = match[0]; if (component in dateFormatComponents) { i++; propertiesByIndex[i] = dateFormatComponents[component].property; components.push('\\s*' + dateFormatComponents[component].getPattern( this) + '\\s*'); mask.push({ pattern: new RegExp(dateFormatComponents[component].getPattern( this)), property: dateFormatComponents[component].property, start: pos, end: pos += component.length }); } else { 
components.push(escapeRegExp(component)); mask.push({ pattern: new RegExp(escapeRegExp(component)), character: component, start: pos, end: ++pos }); } str = str.slice(component.length); } this._mask = mask; this._maskPos = 0; this._formatPattern = new RegExp( '^\\s*' + components.join('') + '\\s*$'); this._propertiesByIndex = propertiesByIndex; }, _attachDatePickerEvents: function () { var self = this; // this handles date picker clicks this.widget.on('click', '.datepicker *', $.proxy(this.click, this)); // this handles time picker clicks this.widget.on('click', '[data-action]', $.proxy(this.doAction, this)); this.widget.on('mousedown', $.proxy(this.stopEvent, this)); if (this.pickDate && this.pickTime) { this.widget.on('click.togglePicker', '.accordion-toggle', function (e) { e.stopPropagation(); var $this = $(this); var $parent = $this.closest('ul'); var expanded = $parent.find('.in'); var closed = $parent.find('.collapse:not(.in)'); if (expanded && expanded.length) { var collapseData = expanded.data('collapse'); if (collapseData && collapseData.transitioning) return; expanded.collapse('hide'); closed.collapse('show'); $this.find('span').toggleClass(self.timeIcon + ' ' + self.dateIcon); self.$element.find('.input-group-addon span').toggleClass(self.timeIcon + ' ' + self.dateIcon); } }); } if (this.isInput) { this.$element.on({ 'focus': $.proxy(this.show, this), 'change': $.proxy(this.change, this), 'blur': $.proxy(this.hide, this) }); if (this.options.maskInput) { this.$element.on({ 'keydown': $.proxy(this.keydown, this), 'keypress': $.proxy(this.keypress, this) }); } } else { this.$element.on({ 'change': $.proxy(this.change, this) }, 'input'); if (this.options.maskInput) { this.$element.on({ 'keydown': $.proxy(this.keydown, this), 'keypress': $.proxy(this.keypress, this) }, 'input'); } if (this.component) { this.component.on('click', $.proxy(this.show, this)); } else { this.$element.on('click', $.proxy(this.show, this)); } } }, _attachDatePickerGlobalEvents: function () { $(window).on( 'resize.datetimepicker' + this.id, $.proxy(this.place, this)); if (!this.isInput) { $(document).on( 'mousedown.datetimepicker' + this.id, $.proxy(this.hide, this)); } }, _detachDatePickerEvents: function () { this.widget.off('click', '.datepicker *', this.click); this.widget.off('click', '[data-action]'); this.widget.off('mousedown', this.stopEvent); if (this.pickDate && this.pickTime) { this.widget.off('click.togglePicker'); } if (this.isInput) { this.$element.off({ 'focus': this.show, 'change': this.change }); if (this.options.maskInput) { this.$element.off({ 'keydown': this.keydown, 'keypress': this.keypress }); } } else { this.$element.off({ 'change': this.change }, 'input'); if (this.options.maskInput) { this.$element.off({ 'keydown': this.keydown, 'keypress': this.keypress }, 'input'); } if (this.component) { this.component.off('click', this.show); } else { this.$element.off('click', this.show); } } }, _detachDatePickerGlobalEvents: function () { $(window).off('resize.datetimepicker' + this.id); if (!this.isInput) { $(document).off('mousedown.datetimepicker' + this.id); } }, _isInFixed: function () { if (this.$element) { var parents = this.$element.parents(); var inFixed = false; for (var i = 0; i < parents.length; i++) { if ($(parents[i]).css('position') == 'fixed') { inFixed = true; break; } }; return inFixed; } else { return false; } } }; $.fn.datetimepicker = function (option, val) { return this.each(function () { var $this = $(this), data = $this.data('datetimepicker'), options = typeof option 
=== 'object' && option; if (!data) { $this.data('datetimepicker', (data = new DateTimePicker( this, $.extend({}, $.fn.datetimepicker.defaults, options)))); } if (typeof option === 'string') data[option](val); }); }; $.fn.datetimepicker.defaults = { maskInput: false, pickDate: true, pickTime: true, pick12HourFormat: false, pickSeconds: true, startDate: -Infinity, endDate: Infinity, collapse: true, defaultDate: "" }; $.fn.datetimepicker.Constructor = DateTimePicker; var dpgId = 0; var dates = $.fn.datetimepicker.dates = { en: { days: ["Sunday", "Monday", "Tuesday", "Wednesday", "Thursday", "Friday", "Saturday", "Sunday"], daysShort: ["Sun", "Mon", "Tue", "Wed", "Thu", "Fri", "Sat", "Sun"], daysMin: ["Su", "Mo", "Tu", "We", "Th", "Fr", "Sa", "Su"], months: ["January", "February", "March", "April", "May", "June", "July", "August", "September", "October", "November", "December"], monthsShort: ["Jan", "Feb", "Mar", "Apr", "May", "Jun", "Jul", "Aug", "Sep", "Oct", "Nov", "Dec"] } }; var dateFormatComponents = { dd: { property: 'UTCDate', getPattern: function () { return '(0?[1-9]|[1-2][0-9]|3[0-1])\\b'; } }, MM: { property: 'UTCMonth', getPattern: function () { return '(0?[1-9]|1[0-2])\\b'; } }, yy: { property: 'UTCYear', getPattern: function () { return '(\\d{2})\\b'; } }, yyyy: { property: 'UTCFullYear', getPattern: function () { return '(\\d{4})\\b'; } }, hh: { property: 'UTCHours', getPattern: function () { return '(0?[0-9]|1[0-9]|2[0-3])\\b'; } }, mm: { property: 'UTCMinutes', getPattern: function () { return '(0?[0-9]|[1-5][0-9])\\b'; } }, ss: { property: 'UTCSeconds', getPattern: function () { return '(0?[0-9]|[1-5][0-9])\\b'; } }, ms: { property: 'UTCMilliseconds', getPattern: function () { return '([0-9]{1,3})\\b'; } }, HH: { property: 'Hours12', getPattern: function () { return '(0?[1-9]|1[0-2])\\b'; } }, PP: { property: 'Period12', getPattern: function () { return '(AM|PM|am|pm|Am|aM|Pm|pM)\\b'; } } }; var keys = []; for (var k in dateFormatComponents) keys.push(k); keys[keys.length - 1] += '\\b'; keys.push('.'); var formatComponent = new RegExp(keys.join('\\b|')); keys.pop(); var formatReplacer = new RegExp(keys.join('\\b|'), 'g'); function escapeRegExp(str) { // http://stackoverflow.com/questions/3446170/escape-string-for-use-in-javascript-regex return str.replace(/[\-\[\]\/\{\}\(\)\*\+\?\.\\\^\$\|]/g, "\\$&"); } function padLeft(s, l, c) { if (l < s.length) return s; else return Array(l - s.length + 1).join(c || ' ') + s; } function getTemplate(timeIcon, upIcon, downIcon, pickDate, pickTime, is12Hours, showSeconds, collapse) { if (pickDate && pickTime) { return ( '' ); } else if (pickTime) { return ( '' ); } else { return ( '' ); } } function UTCDate() { return new Date(Date.UTC.apply(Date, arguments)); } var DPGlobal = { modes: [ { clsName: 'days', navFnc: 'UTCMonth', navStep: 1 }, { clsName: 'months', navFnc: 'UTCFullYear', navStep: 1 }, { clsName: 'years', navFnc: 'UTCFullYear', navStep: 10 }], isLeapYear: function (year) { return (((year % 4 === 0) && (year % 100 !== 0)) || (year % 400 === 0)); }, getDaysInMonth: function (year, month) { return [31, (DPGlobal.isLeapYear(year) ? 29 : 28), 31, 30, 31, 30, 31, 31, 30, 31, 30, 31][month]; }, headTemplate: '' + '' + '‹' + '' + '›' + '' + '', contTemplate: '' }; DPGlobal.template = '
    ' + '' + DPGlobal.headTemplate + '' + '
    ' + '
    ' + '
    ' + '' + DPGlobal.headTemplate + DPGlobal.contTemplate + '
    ' + '
    ' + '
    ' + '' + DPGlobal.headTemplate + DPGlobal.contTemplate + '
    ' + '
    '; var TPGlobal = { hourTemplate: '', minuteTemplate: '', secondTemplate: '' }; TPGlobal.getTemplate = function (is12Hours, showSeconds, upIcon, downIcon) { return ( '
    ' + '' + '' + '' + '' + '' + (showSeconds ? '' + '' : '') + (is12Hours ? '' : '') + '' + '' + ' ' + '' + ' ' + (showSeconds ? '' + '' : '') + (is12Hours ? '' + '' : '') + '' + '' + '' + '' + '' + (showSeconds ? '' + '' : '') + (is12Hours ? '' : '') + '' + '
    ' + TPGlobal.hourTemplate + ':' + TPGlobal.minuteTemplate + ':' + TPGlobal.secondTemplate + '' + '' + '
    ' + '
    ' + '
    ' + '' + '
    ' + '
    ' + '
    ' + '' + '
    ' + '
    ' + (showSeconds ? '
    ' + '' + '
' + '
' : '') ); }; })(window.jQuery)

prometheus-0.16.2+ds/web/blob/static/vendor/bootstrap-datetimepicker/bootstrap-datetimepicker.less

/*!
 * Datepicker for Bootstrap
 *
 * Copyright 2012 Stefan Petre
 * Licensed under the Apache License v2.0
 * http://www.apache.org/licenses/LICENSE-2.0
 *
 */
//@import "../../bootstrap/variables.less";
@import "../bootstrap/less/variables.less";

.bootstrap-datetimepicker-widget {
  top: 0;
  left: 0;
  width: 250px;
  padding: 4px;
  margin-top: 1px;
  z-index: 3000;
  border-radius: 4px;

  .btn {
    padding: 6px;
  }

  &:before {
    content: '';
    display: inline-block;
    border-left: 7px solid transparent;
    border-right: 7px solid transparent;
    border-bottom: 7px solid #ccc;
    border-bottom-color: rgba(0,0,0,.2);
    position: absolute;
    top: -7px;
    left: 6px;
  }

  &:after {
    content: '';
    display: inline-block;
    border-left: 6px solid transparent;
    border-right: 6px solid transparent;
    border-bottom: 6px solid white;
    position: absolute;
    top: -6px;
    left: 7px;
  }

  &.pull-right {
    &:before {
      left: auto;
      right: 6px;
    }
    &:after {
      left: auto;
      right: 7px;
    }
  }

  > ul {
    list-style-type: none;
    margin: 0;
  }

  .timepicker-hour,
  .timepicker-minute,
  .timepicker-second {
    width: 100%;
    font-weight: bold;
    font-size: 1.2em;
  }

  table[data-hour-format="12"] .separator {
    width: 4px;
    padding: 0;
    margin: 0;
  }

  .datepicker > div {
    display: none;
  }

  .picker-switch {
    text-align: center;
  }

  table {
    width: 100%;
    margin: 0;
  }

  td, th {
    text-align: center;
    width: 20px;
    height: 20px;
    border-radius: 4px;
  }

  td {
    &.day:hover,
    &.hour:hover,
    &.minute:hover,
    &.second:hover {
      background: @gray-lighter;
      cursor: pointer;
    }
    &.old,
    &.new {
      color: @gray-light;
    }
    &.active,
    &.active:hover {
      background-color: @btn-primary-bg;
      color: #fff;
      text-shadow: 0 -1px 0 rgba(0,0,0,.25);
    }
    &.disabled,
    &.disabled:hover {
      background: none;
      color: @gray-light;
      cursor: not-allowed;
    }
    span {
      display: block;
      width: 47px;
      height: 54px;
      line-height: 54px;
      float: left;
      margin: 2px;
      cursor: pointer;
      border-radius: 4px;
      &:hover {
        background: @gray-lighter;
      }
      &.active {
        background-color: @btn-primary-bg;
        color: #fff;
        text-shadow: 0 -1px 0 rgba(0,0,0,.25);
      }
      &.old {
        color: @gray-light;
      }
      &.disabled,
      &.disabled:hover {
        background: none;
        color: @gray-light;
        cursor: not-allowed;
      }
    }
  }

  th {
    &.switch {
      width: 145px;
    }
    &.next,
    &.prev {
      font-size: @font-size-base * 1.5;
    }
    &.disabled,
    &.disabled:hover {
      background: none;
      color: @gray-light;
      cursor: not-allowed;
    }
  }

  thead tr:first-child th {
    cursor: pointer;
    &:hover {
      background: @gray-lighter;
    }
  }

  /*.dow {
    border-top: 1px solid #ddd !important;
  }*/
}

.input-group {
  &.date {
    .input-group-addon span {
      display: block;
      cursor: pointer;
      width: 16px;
      height: 16px;
    }
  }
}

.bootstrap-datetimepicker-widget.left-oriented {
  &:before {
    left: auto;
    right: 6px;
  }
  &:after {
    left: auto;
    right: 7px;
  }
}

.bootstrap-datetimepicker-widget ul.list-unstyled li.in div.timepicker div.timepicker-picker table.table-condensed tbody > tr > td {
  padding: 0px !important;
}

prometheus-0.16.2+ds/web/blob/static/vendor/bootstrap/LICENSE

The MIT License (MIT)

Copyright (c) 2011-2014 Twitter, Inc

Permission is hereby granted, free of charge, to any person obtaining a copy of
this software and associated documentation files (the "Software"), to deal in
the Software without restriction, including without limitation the rights to
use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies
of the Software, and to permit persons to whom the Software is furnished to do
so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

prometheus-0.16.2+ds/web/blob/static/vendor/bootstrap/less/variables.less

//
// Variables
// --------------------------------------------------


//== Colors
//
//## Gray and brand colors for use across Bootstrap.

@gray-base:              #000;
@gray-darker:            lighten(@gray-base, 13.5%); // #222
@gray-dark:              lighten(@gray-base, 20%);   // #333
@gray:                   lighten(@gray-base, 33.5%); // #555
@gray-light:             lighten(@gray-base, 46.7%); // #777
@gray-lighter:           lighten(@gray-base, 93.5%); // #eee

@brand-primary:          #428bca;
@brand-success:          #5cb85c;
@brand-info:             #5bc0de;
@brand-warning:          #f0ad4e;
@brand-danger:           #d9534f;


//== Scaffolding
//
//## Settings for some of the most global styles.

//** Background color for `<body>`.
@body-bg:                 #fff;
//** Global text color on `<body>`.
@text-color:              @gray-dark;

//** Global textual link color.
@link-color:              @brand-primary;
//** Link hover color set via `darken()` function.
@link-hover-color:        darken(@link-color, 15%);


//== Typography
//
//## Font, line-height, and color for body text, headings, and more.

@font-family-sans-serif:  "Helvetica Neue", Helvetica, Arial, sans-serif;
@font-family-serif:       Georgia, "Times New Roman", Times, serif;
//** Default monospace fonts for `<code>`, `<kbd>`, and `<pre>`.
    @font-family-monospace:   Menlo, Monaco, Consolas, "Courier New", monospace;
    @font-family-base:        @font-family-sans-serif;
    
    @font-size-base:          14px;
    @font-size-large:         ceil((@font-size-base * 1.25)); // ~18px
    @font-size-small:         ceil((@font-size-base * 0.85)); // ~12px
    
    @font-size-h1:            floor((@font-size-base * 2.6)); // ~36px
    @font-size-h2:            floor((@font-size-base * 2.15)); // ~30px
    @font-size-h3:            ceil((@font-size-base * 1.7)); // ~24px
    @font-size-h4:            ceil((@font-size-base * 1.25)); // ~18px
    @font-size-h5:            @font-size-base;
    @font-size-h6:            ceil((@font-size-base * 0.85)); // ~12px
    
    //** Unit-less `line-height` for use in components like buttons.
    @line-height-base:        1.428571429; // 20/14
    //** Computed "line-height" (`font-size` * `line-height`) for use with `margin`, `padding`, etc.
    @line-height-computed:    floor((@font-size-base * @line-height-base)); // ~20px
    
//** By default, this inherits from the `<body>`.
    @headings-font-family:    inherit;
    @headings-font-weight:    500;
    @headings-line-height:    1.1;
    @headings-color:          inherit;
    
    
    //== Iconography
    //
    //## Specify custom location and filename of the included Glyphicons icon font. Useful for those including Bootstrap via Bower.
    
    //** Load fonts from this directory.
    @icon-font-path:          "../fonts/";
    //** File name for all font files.
    @icon-font-name:          "glyphicons-halflings-regular";
    //** Element ID within SVG icon file.
    @icon-font-svg-id:        "glyphicons_halflingsregular";
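// For example, a build that serves fonts from its own static root could
// override this variable before compiling; the path below is a hypothetical
// illustration, not a shipped default:
//
//   @icon-font-path: "/static/fonts/";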
    
    
    //== Components
    //
    //## Define common padding and border radius sizes and more. Values based on 14px text and 1.428 line-height (~20px to start).
    
    @padding-base-vertical:     6px;
    @padding-base-horizontal:   12px;
    
    @padding-large-vertical:    10px;
    @padding-large-horizontal:  16px;
    
    @padding-small-vertical:    5px;
    @padding-small-horizontal:  10px;
    
    @padding-xs-vertical:       1px;
    @padding-xs-horizontal:     5px;
    
    @line-height-large:         1.33;
    @line-height-small:         1.5;
    
    @border-radius-base:        4px;
    @border-radius-large:       6px;
    @border-radius-small:       3px;
    
    //** Global color for active items (e.g., navs or dropdowns).
    @component-active-color:    #fff;
    //** Global background color for active items (e.g., navs or dropdowns).
    @component-active-bg:       @brand-primary;
    
//** Width of the `border` for generating carets that indicate dropdowns.
    @caret-width-base:          4px;
    //** Carets increase slightly in size for larger components.
    @caret-width-large:         5px;
    
    
    //== Tables
    //
    //## Customizes the `.table` component with basic values, each used across all table variations.
    
//** Padding for `<th>`s and `<td>`s.
    @table-cell-padding:            8px;
    //** Padding for cells in `.table-condensed`.
    @table-condensed-cell-padding:  5px;
    
    //** Default background color used for all tables.
    @table-bg:                      transparent;
    //** Background color used for `.table-striped`.
    @table-bg-accent:               #f9f9f9;
    //** Background color used for `.table-hover`.
    @table-bg-hover:                #f5f5f5;
    @table-bg-active:               @table-bg-hover;
    
    //** Border color for table and cell borders.
    @table-border-color:            #ddd;
    
    
    //== Buttons
    //
    //## For each of Bootstrap's buttons, define text, background and border color.
    
    @btn-font-weight:                normal;
    
    @btn-default-color:              #333;
    @btn-default-bg:                 #fff;
    @btn-default-border:             #ccc;
    
    @btn-primary-color:              #fff;
    @btn-primary-bg:                 @brand-primary;
    @btn-primary-border:             darken(@btn-primary-bg, 5%);
    
    @btn-success-color:              #fff;
    @btn-success-bg:                 @brand-success;
    @btn-success-border:             darken(@btn-success-bg, 5%);
    
    @btn-info-color:                 #fff;
    @btn-info-bg:                    @brand-info;
    @btn-info-border:                darken(@btn-info-bg, 5%);
    
    @btn-warning-color:              #fff;
    @btn-warning-bg:                 @brand-warning;
    @btn-warning-border:             darken(@btn-warning-bg, 5%);
    
    @btn-danger-color:               #fff;
    @btn-danger-bg:                  @brand-danger;
    @btn-danger-border:              darken(@btn-danger-bg, 5%);
    
    @btn-link-disabled-color:        @gray-light;
    
    
    //== Forms
    //
    //##
    
//** `<input>` background color
@input-bg:                       #fff;
//** `<input disabled>` background color
@input-bg-disabled:              @gray-lighter;

//** Text color for `<input>`s
@input-color:                    @gray;
//** `<input>` border color
@input-border:                   #ccc;
//** `<input>` border radius
    @input-border-radius:            @border-radius-base;
    //** Border color for inputs on focus
    @input-border-focus:             #66afe9;
    
    //** Placeholder text color
    @input-color-placeholder:        @gray-light;
    
    //** Default `.form-control` height
    @input-height-base:              (@line-height-computed + (@padding-base-vertical * 2) + 2);
    //** Large `.form-control` height
    @input-height-large:             (ceil(@font-size-large * @line-height-large) + (@padding-large-vertical * 2) + 2);
    //** Small `.form-control` height
    @input-height-small:             (floor(@font-size-small * @line-height-small) + (@padding-small-vertical * 2) + 2);
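// Worked example with the defaults above: @input-height-base resolves to
// @line-height-computed (20px) + 2 * @padding-base-vertical (2 * 6px) + 2px
// of top and bottom border = 34px.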
    
    @legend-color:                   @gray-dark;
    @legend-border-color:            #e5e5e5;
    
    //** Background color for textual input addons
    @input-group-addon-bg:           @gray-lighter;
    //** Border color for textual input addons
    @input-group-addon-border-color: @input-border;
    
    
    //== Dropdowns
    //
    //## Dropdown menu container and contents.
    
    //** Background for the dropdown menu.
    @dropdown-bg:                    #fff;
    //** Dropdown menu `border-color`.
    @dropdown-border:                rgba(0,0,0,.15);
    //** Dropdown menu `border-color` **for IE8**.
    @dropdown-fallback-border:       #ccc;
    //** Divider color for between dropdown items.
    @dropdown-divider-bg:            #e5e5e5;
    
    //** Dropdown link text color.
    @dropdown-link-color:            @gray-dark;
    //** Hover color for dropdown links.
    @dropdown-link-hover-color:      darken(@gray-dark, 5%);
    //** Hover background for dropdown links.
    @dropdown-link-hover-bg:         #f5f5f5;
    
    //** Active dropdown menu item text color.
    @dropdown-link-active-color:     @component-active-color;
    //** Active dropdown menu item background color.
    @dropdown-link-active-bg:        @component-active-bg;
    
    //** Disabled dropdown menu item background color.
    @dropdown-link-disabled-color:   @gray-light;
    
    //** Text color for headers within dropdown menus.
    @dropdown-header-color:          @gray-light;
    
    //** Deprecated `@dropdown-caret-color` as of v3.1.0
    @dropdown-caret-color:           #000;
    
    
    //-- Z-index master list
    //
    // Warning: Avoid customizing these values. They're used for a bird's eye view
    // of components dependent on the z-axis and are designed to all work together.
    //
    // Note: These variables are not generated into the Customizer.
    
    @zindex-navbar:            1000;
    @zindex-dropdown:          1000;
    @zindex-popover:           1060;
    @zindex-tooltip:           1070;
    @zindex-navbar-fixed:      1030;
    @zindex-modal-background:  1040;
    @zindex-modal:             1050;
    
    
    //== Media queries breakpoints
    //
    //## Define the breakpoints at which your layout will change, adapting to different screen sizes.
    
    // Extra small screen / phone
    //** Deprecated `@screen-xs` as of v3.0.1
    @screen-xs:                  480px;
    //** Deprecated `@screen-xs-min` as of v3.2.0
    @screen-xs-min:              @screen-xs;
    //** Deprecated `@screen-phone` as of v3.0.1
    @screen-phone:               @screen-xs-min;
    
    // Small screen / tablet
    //** Deprecated `@screen-sm` as of v3.0.1
    @screen-sm:                  768px;
    @screen-sm-min:              @screen-sm;
    //** Deprecated `@screen-tablet` as of v3.0.1
    @screen-tablet:              @screen-sm-min;
    
    // Medium screen / desktop
    //** Deprecated `@screen-md` as of v3.0.1
    @screen-md:                  992px;
    @screen-md-min:              @screen-md;
    //** Deprecated `@screen-desktop` as of v3.0.1
    @screen-desktop:             @screen-md-min;
    
    // Large screen / wide desktop
    //** Deprecated `@screen-lg` as of v3.0.1
    @screen-lg:                  1200px;
    @screen-lg-min:              @screen-lg;
    //** Deprecated `@screen-lg-desktop` as of v3.0.1
    @screen-lg-desktop:          @screen-lg-min;
    
    // So media queries don't overlap when required, provide a maximum
    @screen-xs-max:              (@screen-sm-min - 1);
    @screen-sm-max:              (@screen-md-min - 1);
    @screen-md-max:              (@screen-lg-min - 1);
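// Illustrative only: a single band is targeted with the matching min/max
// pair, so adjacent queries never overlap. `.sidebar` below is a
// hypothetical selector, not one defined by Bootstrap:
//
//   @media (min-width: @screen-sm-min) and (max-width: @screen-sm-max) {
//     .sidebar { display: none; }
//   }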
    
    
    //== Grid system
    //
    //## Define your custom responsive grid.
    
    //** Number of columns in the grid.
    @grid-columns:              12;
    //** Padding between columns. Gets divided in half for the left and right.
    @grid-gutter-width:         30px;
    // Navbar collapse
    //** Point at which the navbar becomes uncollapsed.
    @grid-float-breakpoint:     @screen-sm-min;
    //** Point at which the navbar begins collapsing.
    @grid-float-breakpoint-max: (@grid-float-breakpoint - 1);
    
    
    //== Container sizes
    //
    //## Define the maximum width of `.container` for different screen sizes.
    
    // Small screen / tablet
    @container-tablet:             ((720px + @grid-gutter-width));
    //** For `@screen-sm-min` and up.
    @container-sm:                 @container-tablet;
    
    // Medium screen / desktop
    @container-desktop:            ((940px + @grid-gutter-width));
    //** For `@screen-md-min` and up.
    @container-md:                 @container-desktop;
    
    // Large screen / wide desktop
    @container-large-desktop:      ((1140px + @grid-gutter-width));
    //** For `@screen-lg-min` and up.
    @container-lg:                 @container-large-desktop;
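// With the default 30px @grid-gutter-width these resolve to 750px, 970px
// and 1170px, the familiar Bootstrap 3 container widths.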
    
    
    //== Navbar
    //
    //##
    
    // Basics of a navbar
    @navbar-height:                    50px;
    @navbar-margin-bottom:             @line-height-computed;
    @navbar-border-radius:             @border-radius-base;
    @navbar-padding-horizontal:        floor((@grid-gutter-width / 2));
    @navbar-padding-vertical:          ((@navbar-height - @line-height-computed) / 2);
    @navbar-collapse-max-height:       340px;
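// Note on @navbar-padding-vertical above: with the defaults it computes to
// (50px - 20px) / 2 = 15px, which vertically centers one line of text.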
    
    @navbar-default-color:             #777;
    @navbar-default-bg:                #f8f8f8;
    @navbar-default-border:            darken(@navbar-default-bg, 6.5%);
    
    // Navbar links
    @navbar-default-link-color:                #777;
    @navbar-default-link-hover-color:          #333;
    @navbar-default-link-hover-bg:             transparent;
    @navbar-default-link-active-color:         #555;
    @navbar-default-link-active-bg:            darken(@navbar-default-bg, 6.5%);
    @navbar-default-link-disabled-color:       #ccc;
    @navbar-default-link-disabled-bg:          transparent;
    
    // Navbar brand label
    @navbar-default-brand-color:               @navbar-default-link-color;
    @navbar-default-brand-hover-color:         darken(@navbar-default-brand-color, 10%);
    @navbar-default-brand-hover-bg:            transparent;
    
    // Navbar toggle
    @navbar-default-toggle-hover-bg:           #ddd;
    @navbar-default-toggle-icon-bar-bg:        #888;
    @navbar-default-toggle-border-color:       #ddd;
    
    
    // Inverted navbar
    // Reset inverted navbar basics
    @navbar-inverse-color:                      lighten(@gray-light, 12%);
    @navbar-inverse-bg:                         #222;
    @navbar-inverse-border:                     darken(@navbar-inverse-bg, 10%);
    
    // Inverted navbar links
    @navbar-inverse-link-color:                 @gray-light;
    @navbar-inverse-link-hover-color:           #fff;
    @navbar-inverse-link-hover-bg:              transparent;
    @navbar-inverse-link-active-color:          @navbar-inverse-link-hover-color;
    @navbar-inverse-link-active-bg:             darken(@navbar-inverse-bg, 10%);
    @navbar-inverse-link-disabled-color:        #444;
    @navbar-inverse-link-disabled-bg:           transparent;
    
    // Inverted navbar brand label
    @navbar-inverse-brand-color:                @navbar-inverse-link-color;
    @navbar-inverse-brand-hover-color:          #fff;
    @navbar-inverse-brand-hover-bg:             transparent;
    
    // Inverted navbar toggle
    @navbar-inverse-toggle-hover-bg:            #333;
    @navbar-inverse-toggle-icon-bar-bg:         #fff;
    @navbar-inverse-toggle-border-color:        #333;
    
    
    //== Navs
    //
    //##
    
    //=== Shared nav styles
    @nav-link-padding:                          10px 15px;
    @nav-link-hover-bg:                         @gray-lighter;
    
    @nav-disabled-link-color:                   @gray-light;
    @nav-disabled-link-hover-color:             @gray-light;
    
    @nav-open-link-hover-color:                 #fff;
    
    //== Tabs
    @nav-tabs-border-color:                     #ddd;
    
    @nav-tabs-link-hover-border-color:          @gray-lighter;
    
    @nav-tabs-active-link-hover-bg:             @body-bg;
    @nav-tabs-active-link-hover-color:          @gray;
    @nav-tabs-active-link-hover-border-color:   #ddd;
    
    @nav-tabs-justified-link-border-color:            #ddd;
    @nav-tabs-justified-active-link-border-color:     @body-bg;
    
    //== Pills
    @nav-pills-border-radius:                   @border-radius-base;
    @nav-pills-active-link-hover-bg:            @component-active-bg;
    @nav-pills-active-link-hover-color:         @component-active-color;
    
    
    //== Pagination
    //
    //##
    
    @pagination-color:                     @link-color;
    @pagination-bg:                        #fff;
    @pagination-border:                    #ddd;
    
    @pagination-hover-color:               @link-hover-color;
    @pagination-hover-bg:                  @gray-lighter;
    @pagination-hover-border:              #ddd;
    
    @pagination-active-color:              #fff;
    @pagination-active-bg:                 @brand-primary;
    @pagination-active-border:             @brand-primary;
    
    @pagination-disabled-color:            @gray-light;
    @pagination-disabled-bg:               #fff;
    @pagination-disabled-border:           #ddd;
    
    
    //== Pager
    //
    //##
    
    @pager-bg:                             @pagination-bg;
    @pager-border:                         @pagination-border;
    @pager-border-radius:                  15px;
    
    @pager-hover-bg:                       @pagination-hover-bg;
    
    @pager-active-bg:                      @pagination-active-bg;
    @pager-active-color:                   @pagination-active-color;
    
    @pager-disabled-color:                 @pagination-disabled-color;
    
    
    //== Jumbotron
    //
    //##
    
    @jumbotron-padding:              30px;
    @jumbotron-color:                inherit;
    @jumbotron-bg:                   @gray-lighter;
    @jumbotron-heading-color:        inherit;
    @jumbotron-font-size:            ceil((@font-size-base * 1.5));
    
    
    //== Form states and alerts
    //
    //## Define colors for form feedback states and, by default, alerts.
    
    @state-success-text:             #3c763d;
    @state-success-bg:               #dff0d8;
    @state-success-border:           darken(spin(@state-success-bg, -10), 5%);
    
    @state-info-text:                #31708f;
    @state-info-bg:                  #d9edf7;
    @state-info-border:              darken(spin(@state-info-bg, -10), 7%);
    
    @state-warning-text:             #8a6d3b;
    @state-warning-bg:               #fcf8e3;
    @state-warning-border:           darken(spin(@state-warning-bg, -10), 5%);
    
    @state-danger-text:              #a94442;
    @state-danger-bg:                #f2dede;
    @state-danger-border:            darken(spin(@state-danger-bg, -10), 5%);
    
    
    //== Tooltips
    //
    //##
    
    //** Tooltip max width
    @tooltip-max-width:           200px;
    //** Tooltip text color
    @tooltip-color:               #fff;
    //** Tooltip background color
    @tooltip-bg:                  #000;
    @tooltip-opacity:             .9;
    
    //** Tooltip arrow width
    @tooltip-arrow-width:         5px;
    //** Tooltip arrow color
    @tooltip-arrow-color:         @tooltip-bg;
    
    
    //== Popovers
    //
    //##
    
    //** Popover body background color
    @popover-bg:                          #fff;
    //** Popover maximum width
    @popover-max-width:                   276px;
    //** Popover border color
    @popover-border-color:                rgba(0,0,0,.2);
    //** Popover fallback border color
    @popover-fallback-border-color:       #ccc;
    
    //** Popover title background color
    @popover-title-bg:                    darken(@popover-bg, 3%);
    
    //** Popover arrow width
    @popover-arrow-width:                 10px;
    //** Popover arrow color
    @popover-arrow-color:                 #fff;
    
    //** Popover outer arrow width
    @popover-arrow-outer-width:           (@popover-arrow-width + 1);
    //** Popover outer arrow color
    @popover-arrow-outer-color:           fadein(@popover-border-color, 5%);
    //** Popover outer arrow fallback color
    @popover-arrow-outer-fallback-color:  darken(@popover-fallback-border-color, 20%);
    
    
    //== Labels
    //
    //##
    
    //** Default label background color
    @label-default-bg:            @gray-light;
    //** Primary label background color
    @label-primary-bg:            @brand-primary;
    //** Success label background color
    @label-success-bg:            @brand-success;
    //** Info label background color
    @label-info-bg:               @brand-info;
    //** Warning label background color
    @label-warning-bg:            @brand-warning;
    //** Danger label background color
    @label-danger-bg:             @brand-danger;
    
    //** Default label text color
    @label-color:                 #fff;
    //** Default text color of a linked label
    @label-link-hover-color:      #fff;
    
    
    //== Modals
    //
    //##
    
    //** Padding applied to the modal body
    @modal-inner-padding:         15px;
    
    //** Padding applied to the modal title
    @modal-title-padding:         15px;
    //** Modal title line-height
    @modal-title-line-height:     @line-height-base;
    
    //** Background color of modal content area
    @modal-content-bg:                             #fff;
    //** Modal content border color
    @modal-content-border-color:                   rgba(0,0,0,.2);
    //** Modal content border color **for IE8**
    @modal-content-fallback-border-color:          #999;
    
    //** Modal backdrop background color
    @modal-backdrop-bg:           #000;
    //** Modal backdrop opacity
    @modal-backdrop-opacity:      .5;
    //** Modal header border color
    @modal-header-border-color:   #e5e5e5;
    //** Modal footer border color
    @modal-footer-border-color:   @modal-header-border-color;
    
    @modal-lg:                    900px;
    @modal-md:                    600px;
    @modal-sm:                    300px;
    
    
    //== Alerts
    //
    //## Define alert colors, border radius, and padding.
    
    @alert-padding:               15px;
    @alert-border-radius:         @border-radius-base;
    @alert-link-font-weight:      bold;
    
    @alert-success-bg:            @state-success-bg;
    @alert-success-text:          @state-success-text;
    @alert-success-border:        @state-success-border;
    
    @alert-info-bg:               @state-info-bg;
    @alert-info-text:             @state-info-text;
    @alert-info-border:           @state-info-border;
    
    @alert-warning-bg:            @state-warning-bg;
    @alert-warning-text:          @state-warning-text;
    @alert-warning-border:        @state-warning-border;
    
    @alert-danger-bg:             @state-danger-bg;
    @alert-danger-text:           @state-danger-text;
    @alert-danger-border:         @state-danger-border;
    
    
    //== Progress bars
    //
    //##
    
    //** Background color of the whole progress component
    @progress-bg:                 #f5f5f5;
    //** Progress bar text color
    @progress-bar-color:          #fff;
    
    //** Default progress bar color
    @progress-bar-bg:             @brand-primary;
    //** Success progress bar color
    @progress-bar-success-bg:     @brand-success;
    //** Warning progress bar color
    @progress-bar-warning-bg:     @brand-warning;
    //** Danger progress bar color
    @progress-bar-danger-bg:      @brand-danger;
    //** Info progress bar color
    @progress-bar-info-bg:        @brand-info;
    
    
    //== List group
    //
    //##
    
    //** Background color on `.list-group-item`
    @list-group-bg:                 #fff;
    //** `.list-group-item` border color
    @list-group-border:             #ddd;
    //** List group border radius
    @list-group-border-radius:      @border-radius-base;
    
    //** Background color of single list items on hover
    @list-group-hover-bg:           #f5f5f5;
    //** Text color of active list items
    @list-group-active-color:       @component-active-color;
    //** Background color of active list items
    @list-group-active-bg:          @component-active-bg;
    //** Border color of active list elements
    @list-group-active-border:      @list-group-active-bg;
    //** Text color for content within active list items
    @list-group-active-text-color:  lighten(@list-group-active-bg, 40%);
    
    //** Text color of disabled list items
    @list-group-disabled-color:      @gray-light;
    //** Background color of disabled list items
    @list-group-disabled-bg:         @gray-lighter;
    //** Text color for content within disabled list items
    @list-group-disabled-text-color: @list-group-disabled-color;
    
    @list-group-link-color:         #555;
    @list-group-link-hover-color:   @list-group-link-color;
    @list-group-link-heading-color: #333;
    
    
    //== Panels
    //
    //##
    
    @panel-bg:                    #fff;
    @panel-body-padding:          15px;
    @panel-heading-padding:       10px 15px;
    @panel-footer-padding:        @panel-heading-padding;
    @panel-border-radius:         @border-radius-base;
    
    //** Border color for elements within panels
    @panel-inner-border:          #ddd;
    @panel-footer-bg:             #f5f5f5;
    
    @panel-default-text:          @gray-dark;
    @panel-default-border:        #ddd;
    @panel-default-heading-bg:    #f5f5f5;
    
    @panel-primary-text:          #fff;
    @panel-primary-border:        @brand-primary;
    @panel-primary-heading-bg:    @brand-primary;
    
    @panel-success-text:          @state-success-text;
    @panel-success-border:        @state-success-border;
    @panel-success-heading-bg:    @state-success-bg;
    
    @panel-info-text:             @state-info-text;
    @panel-info-border:           @state-info-border;
    @panel-info-heading-bg:       @state-info-bg;
    
    @panel-warning-text:          @state-warning-text;
    @panel-warning-border:        @state-warning-border;
    @panel-warning-heading-bg:    @state-warning-bg;
    
    @panel-danger-text:           @state-danger-text;
    @panel-danger-border:         @state-danger-border;
    @panel-danger-heading-bg:     @state-danger-bg;
    
    
    //== Thumbnails
    //
    //##
    
    //** Padding around the thumbnail image
    @thumbnail-padding:           4px;
    //** Thumbnail background color
    @thumbnail-bg:                @body-bg;
    //** Thumbnail border color
    @thumbnail-border:            #ddd;
    //** Thumbnail border radius
    @thumbnail-border-radius:     @border-radius-base;
    
    //** Custom text color for thumbnail captions
    @thumbnail-caption-color:     @text-color;
    //** Padding around the thumbnail caption
    @thumbnail-caption-padding:   9px;
    
    
    //== Wells
    //
    //##
    
    @well-bg:                     #f5f5f5;
    @well-border:                 darken(@well-bg, 7%);
    
    
    //== Badges
    //
    //##
    
    @badge-color:                 #fff;
    //** Linked badge text color on hover
    @badge-link-hover-color:      #fff;
    @badge-bg:                    @gray-light;
    
    //** Badge text color in active nav link
    @badge-active-color:          @link-color;
    //** Badge background color in active nav link
    @badge-active-bg:             #fff;
    
    @badge-font-weight:           bold;
    @badge-line-height:           1;
    @badge-border-radius:         10px;
    
    
    //== Breadcrumbs
    //
    //##
    
    @breadcrumb-padding-vertical:   8px;
    @breadcrumb-padding-horizontal: 15px;
    //** Breadcrumb background color
    @breadcrumb-bg:                 #f5f5f5;
    //** Breadcrumb text color
    @breadcrumb-color:              #ccc;
    //** Text color of current page in the breadcrumb
    @breadcrumb-active-color:       @gray-light;
    //** Textual separator for between breadcrumb elements
    @breadcrumb-separator:          "/";
    
    
    //== Carousel
    //
    //##
    
    @carousel-text-shadow:                        0 1px 2px rgba(0,0,0,.6);
    
    @carousel-control-color:                      #fff;
    @carousel-control-width:                      15%;
    @carousel-control-opacity:                    .5;
    @carousel-control-font-size:                  20px;
    
    @carousel-indicator-active-bg:                #fff;
    @carousel-indicator-border-color:             #fff;
    
    @carousel-caption-color:                      #fff;
    
    
    //== Close
    //
    //##
    
    @close-font-weight:           bold;
    @close-color:                 #000;
    @close-text-shadow:           0 1px 0 #fff;
    
    
    //== Code
    //
    //##
    
    @code-color:                  #c7254e;
    @code-bg:                     #f9f2f4;
    
    @kbd-color:                   #fff;
    @kbd-bg:                      #333;
    
    @pre-bg:                      #f5f5f5;
    @pre-color:                   @gray-dark;
    @pre-border-color:            #ccc;
    @pre-scrollable-max-height:   340px;
    
    
    //== Type
    //
    //##
    
    //** Horizontal offset for forms and lists.
    @component-offset-horizontal: 180px;
    //** Text muted color
    @text-muted:                  @gray-light;
    //** Abbreviations and acronyms border color
    @abbr-border-color:           @gray-light;
    //** Headings small color
    @headings-small-color:        @gray-light;
    //** Blockquote small color
    @blockquote-small-color:      @gray-light;
    //** Blockquote font size
    @blockquote-font-size:        (@font-size-base * 1.25);
    //** Blockquote border color
    @blockquote-border-color:     @gray-lighter;
    //** Page header border color
    @page-header-border-color:    @gray-lighter;
    //** Width of horizontal description list titles
    @dl-horizontal-offset:        @component-offset-horizontal;
    //** Horizontal line color.
    @hr-border:                   @gray-lighter;
    
    
prometheus-0.16.2+ds/web/blob/static/vendor/bootstrap3-typeahead/bootstrap3-typeahead.js

/* =============================================================
     * bootstrap3-typeahead.js v3.1.0
     * https://github.com/bassjobsen/Bootstrap-3-Typeahead
     * =============================================================
     * Original written by @mdo and @fat
     * =============================================================
     * Copyright 2014 Bass Jobsen @bassjobsen
     *
     * Licensed under the Apache License, Version 2.0 (the 'License');
     * you may not use this file except in compliance with the License.
     * You may obtain a copy of the License at
     *
     * http://www.apache.org/licenses/LICENSE-2.0
     *
     * Unless required by applicable law or agreed to in writing, software
     * distributed under the License is distributed on an 'AS IS' BASIS,
     * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
     * See the License for the specific language governing permissions and
     * limitations under the License.
     * ============================================================ */
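
// Illustrative usage (the selector and source values below are hypothetical,
// not part of this file): once the plugin is loaded, a text input is wired
// up with
//
//   $('#expression-input').typeahead({
//     source: ['up', 'node_load1'],  // candidate completions
//     items: 8,
//     minLength: 1
//   });
//
// Any option left out falls back to $.fn.typeahead.defaults near the bottom
// of this file.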
    
    
    (function (root, factory) {
    
      'use strict';
    
      // CommonJS module is defined
      if (typeof module !== 'undefined' && module.exports) {
        module.exports = factory(require('jquery'));
      }
      // AMD module is defined
      else if (typeof define === 'function' && define.amd) {
        define(['jquery'], function ($) {
          return factory ($);
        });
      } else {
        factory(root.jQuery);
      }
    
    }(this, function ($) {
    
      'use strict';
      // jshint laxcomma: true
    
    
     /* TYPEAHEAD PUBLIC CLASS DEFINITION
      * ================================= */
    
      var Typeahead = function (element, options) {
        this.$element = $(element);
        this.options = $.extend({}, $.fn.typeahead.defaults, options);
        this.matcher = this.options.matcher || this.matcher;
        this.sorter = this.options.sorter || this.sorter;
        this.select = this.options.select || this.select;
        this.autoSelect = typeof this.options.autoSelect == 'boolean' ? this.options.autoSelect : true;
        this.highlighter = this.options.highlighter || this.highlighter;
        this.render = this.options.render || this.render;
        this.updater = this.options.updater || this.updater;
        this.displayText = this.options.displayText || this.displayText;
        this.source = this.options.source;
        this.delay = this.options.delay;
        this.$menu = $(this.options.menu);
        this.$appendTo = this.options.appendTo ? $(this.options.appendTo) : null;   
        this.shown = false;
        this.listen();
        this.showHintOnFocus = typeof this.options.showHintOnFocus == 'boolean' ? this.options.showHintOnFocus : false;
        this.afterSelect = this.options.afterSelect;
        this.addItem = false;
      };
    
      Typeahead.prototype = {
    
        constructor: Typeahead,
    
        select: function () {
          var val = this.$menu.find('.active').data('value');
          this.$element.data('active', val);
          if(this.autoSelect || val) {
            var newVal = this.updater(val);
            this.$element
              .val(this.displayText(newVal) || newVal)
              .change();
            this.afterSelect(newVal);
          }
          return this.hide();
        },
    
        updater: function (item) {
          return item;
        },
    
        setSource: function (source) {
          this.source = source;
        },
    
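    // show() positions the dropdown directly below the input: the menu is
    // appended to `appendTo` (or inserted after the input), then offset by
    // the input's rendered height plus the configurable scrollHeight value.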
        show: function () {
          var pos = $.extend({}, this.$element.position(), {
            height: this.$element[0].offsetHeight
          }), scrollHeight;
    
          scrollHeight = typeof this.options.scrollHeight == 'function' ?
              this.options.scrollHeight.call() :
              this.options.scrollHeight;
    
          (this.$appendTo ? this.$menu.appendTo(this.$appendTo) : this.$menu.insertAfter(this.$element))
            .css({
              top: pos.top + pos.height + scrollHeight
            , left: pos.left
            })
            .show();
    
          this.shown = true;
          return this;
        },
    
        hide: function () {
          this.$menu.hide();
          this.shown = false;
          return this;
        },
    
        lookup: function (query) {
          var items;
          if (typeof(query) != 'undefined' && query !== null) {
            this.query = query;
          } else {
            this.query = this.$element.val() ||  '';
          }
    
          if (this.query.length < this.options.minLength) {
            return this.shown ? this.hide() : this;
          }
    
          var worker = $.proxy(function() {
            
            if($.isFunction(this.source)) this.source(this.query, $.proxy(this.process, this));
            else if (this.source) {
              this.process(this.source);
            }
          }, this);
    
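      // Debounce: cancel any pending lookup and re-arm the timer, so the
      // source is only consulted after `delay` ms without further input.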
          clearTimeout(this.lookupWorker);
          this.lookupWorker = setTimeout(worker, this.delay);
        },
    
        process: function (items) {
          var that = this;
    
          items = $.grep(items, function (item) {
            return that.matcher(item);
          });
    
          items = this.sorter(items);
    
          if (!items.length && !this.options.addItem) {
            return this.shown ? this.hide() : this;
          }
          
          if (items.length > 0) {
            this.$element.data('active', items[0]);
          } else {
            this.$element.data('active', null);
          }
          
          // Add item
          if (this.options.addItem){
            items.push(this.options.addItem);
          }
    
          if (this.options.items == 'all') {
            return this.render(items).show();
          } else {
            return this.render(items.slice(0, this.options.items)).show();
          }
        },
    
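    // `~indexOf(...)` maps a miss (-1) to 0 (falsy) and any hit to a
    // non-zero (truthy) value, i.e. a case-insensitive "contains" test.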
        matcher: function (item) {
      var it = this.displayText(item);
          return ~it.toLowerCase().indexOf(this.query.toLowerCase());
        },
    
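    // Rank matches in three buckets: display texts that start with the
    // query, then case-sensitive substring matches, then the rest. For the
    // query "go", ["Mongo", "golang", "GOPATH"] sorts to
    // ["golang", "GOPATH", "Mongo"].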
        sorter: function (items) {
          var beginswith = []
            , caseSensitive = []
            , caseInsensitive = []
            , item;
    
          while ((item = items.shift())) {
            var it = this.displayText(item);
            if (!it.toLowerCase().indexOf(this.query.toLowerCase())) beginswith.push(item);
            else if (~it.indexOf(this.query)) caseSensitive.push(item);
            else caseInsensitive.push(item);
          }
    
          return beginswith.concat(caseSensitive, caseInsensitive);
        },
    
        highlighter: function (item) {
      var html = $('<div></div>');
      var query = this.query;
      var i = item.toLowerCase().indexOf(query.toLowerCase());
      var len, leftPart, middlePart, rightPart, strong;
      len = query.length;
      if (len === 0) {
        return html.text(item).html();
      }
      while (i > -1) {
        leftPart = item.substr(0, i);
        middlePart = item.substr(i, len);
        rightPart = item.substr(i + len);
        strong = $('<strong></strong>').text(middlePart);
        html
          .append(document.createTextNode(leftPart))
          .append(strong);
        item = rightPart;
        i = item.toLowerCase().indexOf(query.toLowerCase());
      }
      return html.append(document.createTextNode(item)).html();
    },

    render: function (items) {
      var that = this;
      var self = this;
      var activeFound = false;
      items = $(items).map(function (i, item) {
        var text = self.displayText(item);
        i = $(that.options.item).data('value', item);
        i.find('a').html(that.highlighter(text));
        if (text == self.$element.val()) {
          i.addClass('active');
          self.$element.data('active', item);
          activeFound = true;
        }
        return i[0];
      });

      if (this.autoSelect && !activeFound) {
        items.first().addClass('active');
        this.$element.data('active', items.first().data('value'));
      }
      this.$menu.html(items);
      return this;
    },

    displayText: function (item) {
      return item.name || item;
    },

    next: function (event) {
      var active = this.$menu.find('.active').removeClass('active')
        , next = active.next();

      if (!next.length) {
        next = $(this.$menu.find('li')[0]);
      }

      next.addClass('active');
    },

    prev: function (event) {
      var active = this.$menu.find('.active').removeClass('active')
        , prev = active.prev();

      if (!prev.length) {
        prev = this.$menu.find('li').last();
      }

      prev.addClass('active');
    },

    listen: function () {
      this.$element
        .on('focus', $.proxy(this.focus, this))
        .on('blur', $.proxy(this.blur, this))
        .on('keypress', $.proxy(this.keypress, this))
        .on('keyup', $.proxy(this.keyup, this));

      if (this.eventSupported('keydown')) {
        this.$element.on('keydown', $.proxy(this.keydown, this));
      }

      this.$menu
        .on('click', $.proxy(this.click, this))
        .on('mouseenter', 'li', $.proxy(this.mouseenter, this))
        .on('mouseleave', 'li', $.proxy(this.mouseleave, this));
    },

    destroy: function () {
      this.$element.data('typeahead', null);
      this.$element.data('active', null);
      this.$element
        .off('focus')
        .off('blur')
        .off('keypress')
        .off('keyup');

      if (this.eventSupported('keydown')) {
        this.$element.off('keydown');
      }

      this.$menu.remove();
    },

    eventSupported: function (eventName) {
      var isSupported = eventName in this.$element;
      if (!isSupported) {
        this.$element.setAttribute(eventName, 'return;');
        isSupported = typeof this.$element[eventName] === 'function';
      }
      return isSupported;
    },

    move: function (e) {
      if (!this.shown) return;

      switch (e.keyCode) {
        case 9: // tab
        case 13: // enter
        case 27: // escape
          e.preventDefault();
          break;

        case 38: // up arrow
          // with the shiftKey (this is actually the left parenthesis)
          if (e.shiftKey) return;
          e.preventDefault();
          this.prev();
          break;

        case 40: // down arrow
          // with the shiftKey (this is actually the right parenthesis)
          if (e.shiftKey) return;
          e.preventDefault();
          this.next();
          break;
      }

      e.stopPropagation();
    },

    keydown: function (e) {
      this.suppressKeyPressRepeat = ~$.inArray(e.keyCode, [40, 38, 9, 13, 27]);
      if (!this.shown && e.keyCode == 40) {
        this.lookup();
      } else {
        this.move(e);
      }
    },

    keypress: function (e) {
      if (this.suppressKeyPressRepeat) return;
      this.move(e);
    },

    keyup: function (e) {
      switch (e.keyCode) {
        case 40: // down arrow
        case 38: // up arrow
        case 16: // shift
        case 17: // ctrl
        case 18: // alt
          break;

        case 9: // tab
        case 13: // enter
          if (!this.shown) return;
          this.select();
          break;

        case 27: // escape
          if (!this.shown) return;
          this.hide();
          break;

        default:
          this.lookup();
      }

      e.stopPropagation();
      e.preventDefault();
    },

    focus: function (e) {
      if (!this.focused) {
        this.focused = true;
        if (this.options.showHintOnFocus) {
          this.lookup('');
        }
      }
    },

    blur: function (e) {
      this.focused = false;
      if (!this.mousedover && this.shown) this.hide();
    },

    click: function (e) {
      e.stopPropagation();
      e.preventDefault();
      this.select();
      this.$element.focus();
    },

    mouseenter: function (e) {
      this.mousedover = true;
      this.$menu.find('.active').removeClass('active');
      $(e.currentTarget).addClass('active');
    },

    mouseleave: function (e) {
      this.mousedover = false;
      if (!this.focused && this.shown) this.hide();
    }

  };


  /* TYPEAHEAD PLUGIN DEFINITION
   * =========================== */

  var old = $.fn.typeahead;

  $.fn.typeahead = function (option) {
    var arg = arguments;
    if (typeof option == 'string' && option == 'getActive') {
      return this.data('active');
    }
    return this.each(function () {
      var $this = $(this)
        , data = $this.data('typeahead')
        , options = typeof option == 'object' && option;
      if (!data) $this.data('typeahead', (data = new Typeahead(this, options)));
      if (typeof option == 'string') {
        if (arg.length > 1) {
          data[option].apply(data, Array.prototype.slice.call(arg, 1));
        } else {
          data[option]();
        }
      }
    });
  };

  // The menu/item markup below is reconstructed: the dump had stripped the
  // HTML string literals, leaving only a rendered bullet. The minimal
  // <ul>/<li><a> structure is what render() and select() above require.
  $.fn.typeahead.defaults = {
    source: [],
    items: 8,
    menu: '<ul class="typeahead dropdown-menu"></ul>',
    item: '<li><a href="#"></a></li>',
    minLength: 1,
    scrollHeight: 0,
    autoSelect: true,
    afterSelect: $.noop,
    addItem: false,
    delay: 0
  };

  $.fn.typeahead.Constructor = Typeahead;


  /* TYPEAHEAD NO CONFLICT
   * =================== */

  $.fn.typeahead.noConflict = function () {
    $.fn.typeahead = old;
    return this;
  };


  /* TYPEAHEAD DATA-API
   * ================== */

  $(document).on('focus.typeahead.data-api', '[data-provide="typeahead"]', function (e) {
    var $this = $(this);
    if ($this.data('typeahead')) return;
    $this.typeahead($this.data());
  });

}));

prometheus-0.16.2+ds/web/blob/static/vendor/jquery/jquery.js

/*! * jQuery JavaScript Library v1.11.2 * http://jquery.com/ * * Includes Sizzle.js * http://sizzlejs.com/ * * Copyright 2005, 2014 jQuery Foundation, Inc. and other contributors * Released under the MIT license * http://jquery.org/license * * Date: 2014-12-17T15:27Z */ (function( global, factory ) { if ( typeof module === "object" && typeof module.exports === "object" ) { // For CommonJS and CommonJS-like environments where a proper window is present, // execute the factory and get jQuery // For environments that do not inherently possess a window with a document // (such as Node.js), expose a jQuery-making factory as module.exports // This accentuates the need for the creation of a real window // e.g. var jQuery = require("jquery")(window); // See ticket #14549 for more info module.exports = global.document ? factory( global, true ) : function( w ) { if ( !w.document ) { throw new Error( "jQuery requires a window with a document" ); } return factory( w ); }; } else { factory( global ); } // Pass this if window is not defined yet }(typeof window !== "undefined" ? window : this, function( window, noGlobal ) { // Can't do this because several apps including ASP.NET trace // the stack via arguments.caller.callee and Firefox dies if // you try to trace through "use strict" call chains. (#13335) // Support: Firefox 18+ // var deletedIds = []; var slice = deletedIds.slice; var concat = deletedIds.concat; var push = deletedIds.push; var indexOf = deletedIds.indexOf; var class2type = {}; var toString = class2type.toString; var hasOwn = class2type.hasOwnProperty; var support = {}; var version = "1.11.2", // Define a local copy of jQuery jQuery = function( selector, context ) { // The jQuery object is actually just the init constructor 'enhanced' // Need init if jQuery is called (just allow error to be thrown if not included) return new jQuery.fn.init( selector, context ); }, // Support: Android<4.1, IE<9 // Make sure we trim BOM and NBSP rtrim = /^[\s\uFEFF\xA0]+|[\s\uFEFF\xA0]+$/g, // Matches dashed string for camelizing rmsPrefix = /^-ms-/, rdashAlpha = /-([\da-z])/gi, // Used by jQuery.camelCase as callback to replace() fcamelCase = function( all, letter ) { return letter.toUpperCase(); }; jQuery.fn = jQuery.prototype = { // The current version of jQuery being used jquery: version, constructor: jQuery, // Start with an empty selector selector: "", // The default length of a jQuery object is 0 length: 0, toArray: function() { return slice.call( this ); }, // Get the Nth element in the matched element set OR // Get the whole matched element set as a clean array get: function( num ) { return num != null ? // Return just the one element from the set ( num < 0 ?
this[ num + this.length ] : this[ num ] ) : // Return all the elements in a clean array slice.call( this ); }, // Take an array of elements and push it onto the stack // (returning the new matched element set) pushStack: function( elems ) { // Build a new jQuery matched element set var ret = jQuery.merge( this.constructor(), elems ); // Add the old object onto the stack (as a reference) ret.prevObject = this; ret.context = this.context; // Return the newly-formed element set return ret; }, // Execute a callback for every element in the matched set. // (You can seed the arguments with an array of args, but this is // only used internally.) each: function( callback, args ) { return jQuery.each( this, callback, args ); }, map: function( callback ) { return this.pushStack( jQuery.map(this, function( elem, i ) { return callback.call( elem, i, elem ); })); }, slice: function() { return this.pushStack( slice.apply( this, arguments ) ); }, first: function() { return this.eq( 0 ); }, last: function() { return this.eq( -1 ); }, eq: function( i ) { var len = this.length, j = +i + ( i < 0 ? len : 0 ); return this.pushStack( j >= 0 && j < len ? [ this[j] ] : [] ); }, end: function() { return this.prevObject || this.constructor(null); }, // For internal use only. // Behaves like an Array's method, not like a jQuery method. push: push, sort: deletedIds.sort, splice: deletedIds.splice }; jQuery.extend = jQuery.fn.extend = function() { var src, copyIsArray, copy, name, options, clone, target = arguments[0] || {}, i = 1, length = arguments.length, deep = false; // Handle a deep copy situation if ( typeof target === "boolean" ) { deep = target; // skip the boolean and the target target = arguments[ i ] || {}; i++; } // Handle case when target is a string or something (possible in deep copy) if ( typeof target !== "object" && !jQuery.isFunction(target) ) { target = {}; } // extend jQuery itself if only one argument is passed if ( i === length ) { target = this; i--; } for ( ; i < length; i++ ) { // Only deal with non-null/undefined values if ( (options = arguments[ i ]) != null ) { // Extend the base object for ( name in options ) { src = target[ name ]; copy = options[ name ]; // Prevent never-ending loop if ( target === copy ) { continue; } // Recurse if we're merging plain objects or arrays if ( deep && copy && ( jQuery.isPlainObject(copy) || (copyIsArray = jQuery.isArray(copy)) ) ) { if ( copyIsArray ) { copyIsArray = false; clone = src && jQuery.isArray(src) ? src : []; } else { clone = src && jQuery.isPlainObject(src) ? src : {}; } // Never move original objects, clone them target[ name ] = jQuery.extend( deep, clone, copy ); // Don't bring in undefined values } else if ( copy !== undefined ) { target[ name ] = copy; } } } } // Return the modified object return target; }; jQuery.extend({ // Unique for each copy of jQuery on the page expando: "jQuery" + ( version + Math.random() ).replace( /\D/g, "" ), // Assume jQuery is ready without the ready module isReady: true, error: function( msg ) { throw new Error( msg ); }, noop: function() {}, // See test/unit/core.js for details concerning isFunction. // Since version 1.3, DOM methods and functions like alert // aren't supported. They return false on IE (#2968). 
isFunction: function( obj ) { return jQuery.type(obj) === "function"; }, isArray: Array.isArray || function( obj ) { return jQuery.type(obj) === "array"; }, isWindow: function( obj ) { /* jshint eqeqeq: false */ return obj != null && obj == obj.window; }, isNumeric: function( obj ) { // parseFloat NaNs numeric-cast false positives (null|true|false|"") // ...but misinterprets leading-number strings, particularly hex literals ("0x...") // subtraction forces infinities to NaN // adding 1 corrects loss of precision from parseFloat (#15100) return !jQuery.isArray( obj ) && (obj - parseFloat( obj ) + 1) >= 0; }, isEmptyObject: function( obj ) { var name; for ( name in obj ) { return false; } return true; }, isPlainObject: function( obj ) { var key; // Must be an Object. // Because of IE, we also have to check the presence of the constructor property. // Make sure that DOM nodes and window objects don't pass through, as well if ( !obj || jQuery.type(obj) !== "object" || obj.nodeType || jQuery.isWindow( obj ) ) { return false; } try { // Not own constructor property must be Object if ( obj.constructor && !hasOwn.call(obj, "constructor") && !hasOwn.call(obj.constructor.prototype, "isPrototypeOf") ) { return false; } } catch ( e ) { // IE8,9 Will throw exceptions on certain host objects #9897 return false; } // Support: IE<9 // Handle iteration over inherited properties before own properties. if ( support.ownLast ) { for ( key in obj ) { return hasOwn.call( obj, key ); } } // Own properties are enumerated firstly, so to speed up, // if last one is own, then all properties are own. for ( key in obj ) {} return key === undefined || hasOwn.call( obj, key ); }, type: function( obj ) { if ( obj == null ) { return obj + ""; } return typeof obj === "object" || typeof obj === "function" ? class2type[ toString.call(obj) ] || "object" : typeof obj; }, // Evaluates a script in a global context // Workarounds based on findings by Jim Driscoll // http://weblogs.java.net/blog/driscoll/archive/2009/09/08/eval-javascript-global-context globalEval: function( data ) { if ( data && jQuery.trim( data ) ) { // We use execScript on Internet Explorer // We use an anonymous function so that context is window // rather than jQuery in Firefox ( window.execScript || function( data ) { window[ "eval" ].call( window, data ); } )( data ); } }, // Convert dashed to camelCase; used by the css and data modules // Microsoft forgot to hump their vendor prefix (#9572) camelCase: function( string ) { return string.replace( rmsPrefix, "ms-" ).replace( rdashAlpha, fcamelCase ); }, nodeName: function( elem, name ) { return elem.nodeName && elem.nodeName.toLowerCase() === name.toLowerCase(); }, // args is for internal usage only each: function( obj, callback, args ) { var value, i = 0, length = obj.length, isArray = isArraylike( obj ); if ( args ) { if ( isArray ) { for ( ; i < length; i++ ) { value = callback.apply( obj[ i ], args ); if ( value === false ) { break; } } } else { for ( i in obj ) { value = callback.apply( obj[ i ], args ); if ( value === false ) { break; } } } // A special, fast, case for the most common use of each } else { if ( isArray ) { for ( ; i < length; i++ ) { value = callback.call( obj[ i ], i, obj[ i ] ); if ( value === false ) { break; } } } else { for ( i in obj ) { value = callback.call( obj[ i ], i, obj[ i ] ); if ( value === false ) { break; } } } } return obj; }, // Support: Android<4.1, IE<9 trim: function( text ) { return text == null ? 
"" : ( text + "" ).replace( rtrim, "" ); }, // results is for internal usage only makeArray: function( arr, results ) { var ret = results || []; if ( arr != null ) { if ( isArraylike( Object(arr) ) ) { jQuery.merge( ret, typeof arr === "string" ? [ arr ] : arr ); } else { push.call( ret, arr ); } } return ret; }, inArray: function( elem, arr, i ) { var len; if ( arr ) { if ( indexOf ) { return indexOf.call( arr, elem, i ); } len = arr.length; i = i ? i < 0 ? Math.max( 0, len + i ) : i : 0; for ( ; i < len; i++ ) { // Skip accessing in sparse arrays if ( i in arr && arr[ i ] === elem ) { return i; } } } return -1; }, merge: function( first, second ) { var len = +second.length, j = 0, i = first.length; while ( j < len ) { first[ i++ ] = second[ j++ ]; } // Support: IE<9 // Workaround casting of .length to NaN on otherwise arraylike objects (e.g., NodeLists) if ( len !== len ) { while ( second[j] !== undefined ) { first[ i++ ] = second[ j++ ]; } } first.length = i; return first; }, grep: function( elems, callback, invert ) { var callbackInverse, matches = [], i = 0, length = elems.length, callbackExpect = !invert; // Go through the array, only saving the items // that pass the validator function for ( ; i < length; i++ ) { callbackInverse = !callback( elems[ i ], i ); if ( callbackInverse !== callbackExpect ) { matches.push( elems[ i ] ); } } return matches; }, // arg is for internal usage only map: function( elems, callback, arg ) { var value, i = 0, length = elems.length, isArray = isArraylike( elems ), ret = []; // Go through the array, translating each of the items to their new values if ( isArray ) { for ( ; i < length; i++ ) { value = callback( elems[ i ], i, arg ); if ( value != null ) { ret.push( value ); } } // Go through every key on the object, } else { for ( i in elems ) { value = callback( elems[ i ], i, arg ); if ( value != null ) { ret.push( value ); } } } // Flatten any nested arrays return concat.apply( [], ret ); }, // A global GUID counter for objects guid: 1, // Bind a function to a context, optionally partially applying any // arguments. proxy: function( fn, context ) { var args, proxy, tmp; if ( typeof context === "string" ) { tmp = fn[ context ]; context = fn; fn = tmp; } // Quick check to determine if target is callable, in the spec // this throws a TypeError, but we will just return undefined. if ( !jQuery.isFunction( fn ) ) { return undefined; } // Simulated bind args = slice.call( arguments, 2 ); proxy = function() { return fn.apply( context || this, args.concat( slice.call( arguments ) ) ); }; // Set the guid of unique handler to the same of original handler, so it can be removed proxy.guid = fn.guid = fn.guid || jQuery.guid++; return proxy; }, now: function() { return +( new Date() ); }, // jQuery.support is not used in Core but other projects attach their // properties to it so it needs to exist. support: support }); // Populate the class2type map jQuery.each("Boolean Number String Function Array Date RegExp Object Error".split(" "), function(i, name) { class2type[ "[object " + name + "]" ] = name.toLowerCase(); }); function isArraylike( obj ) { var length = obj.length, type = jQuery.type( obj ); if ( type === "function" || jQuery.isWindow( obj ) ) { return false; } if ( obj.nodeType === 1 && length ) { return true; } return type === "array" || length === 0 || typeof length === "number" && length > 0 && ( length - 1 ) in obj; } var Sizzle = /*! * Sizzle CSS Selector Engine v2.2.0-pre * http://sizzlejs.com/ * * Copyright 2008, 2014 jQuery Foundation, Inc. 
and other contributors * Released under the MIT license * http://jquery.org/license * * Date: 2014-12-16 */ (function( window ) { var i, support, Expr, getText, isXML, tokenize, compile, select, outermostContext, sortInput, hasDuplicate, // Local document vars setDocument, document, docElem, documentIsHTML, rbuggyQSA, rbuggyMatches, matches, contains, // Instance-specific data expando = "sizzle" + 1 * new Date(), preferredDoc = window.document, dirruns = 0, done = 0, classCache = createCache(), tokenCache = createCache(), compilerCache = createCache(), sortOrder = function( a, b ) { if ( a === b ) { hasDuplicate = true; } return 0; }, // General-purpose constants MAX_NEGATIVE = 1 << 31, // Instance methods hasOwn = ({}).hasOwnProperty, arr = [], pop = arr.pop, push_native = arr.push, push = arr.push, slice = arr.slice, // Use a stripped-down indexOf as it's faster than native // http://jsperf.com/thor-indexof-vs-for/5 indexOf = function( list, elem ) { var i = 0, len = list.length; for ( ; i < len; i++ ) { if ( list[i] === elem ) { return i; } } return -1; }, booleans = "checked|selected|async|autofocus|autoplay|controls|defer|disabled|hidden|ismap|loop|multiple|open|readonly|required|scoped", // Regular expressions // Whitespace characters http://www.w3.org/TR/css3-selectors/#whitespace whitespace = "[\\x20\\t\\r\\n\\f]", // http://www.w3.org/TR/css3-syntax/#characters characterEncoding = "(?:\\\\.|[\\w-]|[^\\x00-\\xa0])+", // Loosely modeled on CSS identifier characters // An unquoted value should be a CSS identifier http://www.w3.org/TR/css3-selectors/#attribute-selectors // Proper syntax: http://www.w3.org/TR/CSS21/syndata.html#value-def-identifier identifier = characterEncoding.replace( "w", "w#" ), // Attribute selectors: http://www.w3.org/TR/selectors/#attribute-selectors attributes = "\\[" + whitespace + "*(" + characterEncoding + ")(?:" + whitespace + // Operator (capture 2) "*([*^$|!~]?=)" + whitespace + // "Attribute values must be CSS identifiers [capture 5] or strings [capture 3 or capture 4]" "*(?:'((?:\\\\.|[^\\\\'])*)'|\"((?:\\\\.|[^\\\\\"])*)\"|(" + identifier + "))|)" + whitespace + "*\\]", pseudos = ":(" + characterEncoding + ")(?:\\((" + // To reduce the number of selectors needing tokenize in the preFilter, prefer arguments: // 1. quoted (capture 3; capture 4 or capture 5) "('((?:\\\\.|[^\\\\'])*)'|\"((?:\\\\.|[^\\\\\"])*)\")|" + // 2. simple (capture 6) "((?:\\\\.|[^\\\\()[\\]]|" + attributes + ")*)|" + // 3. 
anything else (capture 2) ".*" + ")\\)|)", // Leading and non-escaped trailing whitespace, capturing some non-whitespace characters preceding the latter rwhitespace = new RegExp( whitespace + "+", "g" ), rtrim = new RegExp( "^" + whitespace + "+|((?:^|[^\\\\])(?:\\\\.)*)" + whitespace + "+$", "g" ), rcomma = new RegExp( "^" + whitespace + "*," + whitespace + "*" ), rcombinators = new RegExp( "^" + whitespace + "*([>+~]|" + whitespace + ")" + whitespace + "*" ), rattributeQuotes = new RegExp( "=" + whitespace + "*([^\\]'\"]*?)" + whitespace + "*\\]", "g" ), rpseudo = new RegExp( pseudos ), ridentifier = new RegExp( "^" + identifier + "$" ), matchExpr = { "ID": new RegExp( "^#(" + characterEncoding + ")" ), "CLASS": new RegExp( "^\\.(" + characterEncoding + ")" ), "TAG": new RegExp( "^(" + characterEncoding.replace( "w", "w*" ) + ")" ), "ATTR": new RegExp( "^" + attributes ), "PSEUDO": new RegExp( "^" + pseudos ), "CHILD": new RegExp( "^:(only|first|last|nth|nth-last)-(child|of-type)(?:\\(" + whitespace + "*(even|odd|(([+-]|)(\\d*)n|)" + whitespace + "*(?:([+-]|)" + whitespace + "*(\\d+)|))" + whitespace + "*\\)|)", "i" ), "bool": new RegExp( "^(?:" + booleans + ")$", "i" ), // For use in libraries implementing .is() // We use this for POS matching in `select` "needsContext": new RegExp( "^" + whitespace + "*[>+~]|:(even|odd|eq|gt|lt|nth|first|last)(?:\\(" + whitespace + "*((?:-\\d)?\\d*)" + whitespace + "*\\)|)(?=[^-]|$)", "i" ) }, rinputs = /^(?:input|select|textarea|button)$/i, rheader = /^h\d$/i, rnative = /^[^{]+\{\s*\[native \w/, // Easily-parseable/retrievable ID or TAG or CLASS selectors rquickExpr = /^(?:#([\w-]+)|(\w+)|\.([\w-]+))$/, rsibling = /[+~]/, rescape = /'|\\/g, // CSS escapes http://www.w3.org/TR/CSS21/syndata.html#escaped-characters runescape = new RegExp( "\\\\([\\da-f]{1,6}" + whitespace + "?|(" + whitespace + ")|.)", "ig" ), funescape = function( _, escaped, escapedWhitespace ) { var high = "0x" + escaped - 0x10000; // NaN means non-codepoint // Support: Firefox<24 // Workaround erroneous numeric interpretation of +"0x" return high !== high || escapedWhitespace ? escaped : high < 0 ? // BMP codepoint String.fromCharCode( high + 0x10000 ) : // Supplemental Plane codepoint (surrogate pair) String.fromCharCode( high >> 10 | 0xD800, high & 0x3FF | 0xDC00 ); }, // Used for iframes // See setDocument() // Removing the function wrapper causes a "Permission Denied" // error in IE unloadHandler = function() { setDocument(); }; // Optimize for push.apply( _, NodeList ) try { push.apply( (arr = slice.call( preferredDoc.childNodes )), preferredDoc.childNodes ); // Support: Android<4.0 // Detect silently failing push.apply arr[ preferredDoc.childNodes.length ].nodeType; } catch ( e ) { push = { apply: arr.length ? // Leverage slice if possible function( target, els ) { push_native.apply( target, slice.call(els) ); } : // Support: IE<9 // Otherwise append directly function( target, els ) { var j = target.length, i = 0; // Can't trust NodeList.length while ( (target[j++] = els[i++]) ) {} target.length = j - 1; } }; } function Sizzle( selector, context, results, seed ) { var match, elem, m, nodeType, // QSA vars i, groups, old, nid, newContext, newSelector; if ( ( context ? 
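// Illustrative sketch (not part of the upstream source): how runescape
// and funescape above cooperate to decode CSS escape sequences. An id
// that starts with a digit must be escaped in a selector, and a single
// trailing whitespace character may terminate the escape:
//
//   "\\31 23".replace( runescape, funescape );  // -> "123"
//   "\\E9 l".replace( runescape, funescape );   // -> "él"
//
// Code points above the BMP fall through to the surrogate-pair branch
// of funescape.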
context.ownerDocument || context : preferredDoc ) !== document ) { setDocument( context ); } context = context || document; results = results || []; nodeType = context.nodeType; if ( typeof selector !== "string" || !selector || nodeType !== 1 && nodeType !== 9 && nodeType !== 11 ) { return results; } if ( !seed && documentIsHTML ) { // Try to shortcut find operations when possible (e.g., not under DocumentFragment) if ( nodeType !== 11 && (match = rquickExpr.exec( selector )) ) { // Speed-up: Sizzle("#ID") if ( (m = match[1]) ) { if ( nodeType === 9 ) { elem = context.getElementById( m ); // Check parentNode to catch when Blackberry 4.6 returns // nodes that are no longer in the document (jQuery #6963) if ( elem && elem.parentNode ) { // Handle the case where IE, Opera, and Webkit return items // by name instead of ID if ( elem.id === m ) { results.push( elem ); return results; } } else { return results; } } else { // Context is not a document if ( context.ownerDocument && (elem = context.ownerDocument.getElementById( m )) && contains( context, elem ) && elem.id === m ) { results.push( elem ); return results; } } // Speed-up: Sizzle("TAG") } else if ( match[2] ) { push.apply( results, context.getElementsByTagName( selector ) ); return results; // Speed-up: Sizzle(".CLASS") } else if ( (m = match[3]) && support.getElementsByClassName ) { push.apply( results, context.getElementsByClassName( m ) ); return results; } } // QSA path if ( support.qsa && (!rbuggyQSA || !rbuggyQSA.test( selector )) ) { nid = old = expando; newContext = context; newSelector = nodeType !== 1 && selector; // qSA works strangely on Element-rooted queries // We can work around this by specifying an extra ID on the root // and working up from there (Thanks to Andrew Dupont for the technique) // IE 8 doesn't work on object elements if ( nodeType === 1 && context.nodeName.toLowerCase() !== "object" ) { groups = tokenize( selector ); if ( (old = context.getAttribute("id")) ) { nid = old.replace( rescape, "\\$&" ); } else { context.setAttribute( "id", nid ); } nid = "[id='" + nid + "'] "; i = groups.length; while ( i-- ) { groups[i] = nid + toSelector( groups[i] ); } newContext = rsibling.test( selector ) && testContext( context.parentNode ) || context; newSelector = groups.join(","); } if ( newSelector ) { try { push.apply( results, newContext.querySelectorAll( newSelector ) ); return results; } catch(qsaError) { } finally { if ( !old ) { context.removeAttribute("id"); } } } } } // All others return select( selector.replace( rtrim, "$1" ), context, results, seed ); } /** * Create key-value caches of limited size * @returns {Function(string, Object)} Returns the Object data after storing it on itself with * property name the (space-suffixed) string and (if the cache is larger than Expr.cacheLength) * deleting the oldest entry */ function createCache() { var keys = []; function cache( key, value ) { // Use (key + " ") to avoid collision with native prototype properties (see Issue #157) if ( keys.push( key + " " ) > Expr.cacheLength ) { // Only keep the most recent entries delete cache[ keys.shift() ]; } return (cache[ key + " " ] = value); } return cache; } /** * Mark a function for special use by Sizzle * @param {Function} fn The function to mark */ function markFunction( fn ) { fn[ expando ] = true; return fn; } /** * Support testing using an element * @param {Function} fn Passed the created div and expects a boolean result */ function assert( fn ) { var div = document.createElement("div"); try { return !!fn( div ); } catch 
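// Illustrative sketch (not part of the upstream source): a cache built
// by createCache() above is both the lookup function and its own store.
// Keys get a trailing space to dodge Object.prototype property names,
// and once more than Expr.cacheLength (50) entries exist, the oldest
// key is evicted first-in-first-out:
//
//   var cache = createCache();
//   cache( "div > p", compiledMatcher );  // stored under "div > p "
//   cache[ "div > p " ];                  // -> compiledMatcher
//
// (compiledMatcher is a placeholder for any value worth caching.)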
(e) { return false; } finally { // Remove from its parent by default if ( div.parentNode ) { div.parentNode.removeChild( div ); } // release memory in IE div = null; } } /** * Adds the same handler for all of the specified attrs * @param {String} attrs Pipe-separated list of attributes * @param {Function} handler The method that will be applied */ function addHandle( attrs, handler ) { var arr = attrs.split("|"), i = arr.length; while ( i-- ) { Expr.attrHandle[ arr[i] ] = handler; } } /** * Checks document order of two siblings * @param {Element} a * @param {Element} b * @returns {Number} Returns less than 0 if a precedes b, greater than 0 if a follows b */ function siblingCheck( a, b ) { var cur = b && a, diff = cur && a.nodeType === 1 && b.nodeType === 1 && ( ~b.sourceIndex || MAX_NEGATIVE ) - ( ~a.sourceIndex || MAX_NEGATIVE ); // Use IE sourceIndex if available on both nodes if ( diff ) { return diff; } // Check if b follows a if ( cur ) { while ( (cur = cur.nextSibling) ) { if ( cur === b ) { return -1; } } } return a ? 1 : -1; } /** * Returns a function to use in pseudos for input types * @param {String} type */ function createInputPseudo( type ) { return function( elem ) { var name = elem.nodeName.toLowerCase(); return name === "input" && elem.type === type; }; } /** * Returns a function to use in pseudos for buttons * @param {String} type */ function createButtonPseudo( type ) { return function( elem ) { var name = elem.nodeName.toLowerCase(); return (name === "input" || name === "button") && elem.type === type; }; } /** * Returns a function to use in pseudos for positionals * @param {Function} fn */ function createPositionalPseudo( fn ) { return markFunction(function( argument ) { argument = +argument; return markFunction(function( seed, matches ) { var j, matchIndexes = fn( [], seed.length, argument ), i = matchIndexes.length; // Match elements found at the specified indexes while ( i-- ) { if ( seed[ (j = matchIndexes[i]) ] ) { seed[j] = !(matches[j] = seed[j]); } } }); }); } /** * Checks a node for validity as a Sizzle context * @param {Element|Object=} context * @returns {Element|Object|Boolean} The input node if acceptable, otherwise a falsy value */ function testContext( context ) { return context && typeof context.getElementsByTagName !== "undefined" && context; } // Expose support vars for convenience support = Sizzle.support = {}; /** * Detects XML nodes * @param {Element|Object} elem An element or a document * @returns {Boolean} True iff elem is a non-HTML XML node */ isXML = Sizzle.isXML = function( elem ) { // documentElement is verified for cases where it doesn't yet exist // (such as loading iframes in IE - #4833) var documentElement = elem && (elem.ownerDocument || elem).documentElement; return documentElement ? documentElement.nodeName !== "HTML" : false; }; /** * Sets document-related variables once based on the current document * @param {Element|Object} [doc] An element or document object to use to set the document * @returns {Object} Returns the current document */ setDocument = Sizzle.setDocument = function( node ) { var hasCompare, parent, doc = node ?
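// Illustrative note (not part of the upstream source): a positional
// pseudo is built by createPositionalPseudo() above from a function
// that maps a collection length (plus optional argument) to the
// matching indexes; ":first" further down is literally
//
//   "first": createPositionalPseudo(function() {
//       return [ 0 ];
//   })
//
// and the generated wrapper flips the seed/matches entries at exactly
// those indexes. ":eq", ":lt", ":gt", ":even" and ":odd" follow the
// same pattern.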
node.ownerDocument || node : preferredDoc; // If no document and documentElement is available, return if ( doc === document || doc.nodeType !== 9 || !doc.documentElement ) { return document; } // Set our document document = doc; docElem = doc.documentElement; parent = doc.defaultView; // Support: IE>8 // If iframe document is assigned to "document" variable and if iframe has been reloaded, // IE will throw "permission denied" error when accessing "document" variable, see jQuery #13936 // IE6-8 do not support the defaultView property so parent will be undefined if ( parent && parent !== parent.top ) { // IE11 does not have attachEvent, so all must suffer if ( parent.addEventListener ) { parent.addEventListener( "unload", unloadHandler, false ); } else if ( parent.attachEvent ) { parent.attachEvent( "onunload", unloadHandler ); } } /* Support tests ---------------------------------------------------------------------- */ documentIsHTML = !isXML( doc ); /* Attributes ---------------------------------------------------------------------- */ // Support: IE<8 // Verify that getAttribute really returns attributes and not properties // (excepting IE8 booleans) support.attributes = assert(function( div ) { div.className = "i"; return !div.getAttribute("className"); }); /* getElement(s)By* ---------------------------------------------------------------------- */ // Check if getElementsByTagName("*") returns only elements support.getElementsByTagName = assert(function( div ) { div.appendChild( doc.createComment("") ); return !div.getElementsByTagName("*").length; }); // Support: IE<9 support.getElementsByClassName = rnative.test( doc.getElementsByClassName ); // Support: IE<10 // Check if getElementById returns elements by name // The broken getElementById methods don't pick up programatically-set names, // so use a roundabout getElementsByName test support.getById = assert(function( div ) { docElem.appendChild( div ).id = expando; return !doc.getElementsByName || !doc.getElementsByName( expando ).length; }); // ID find and filter if ( support.getById ) { Expr.find["ID"] = function( id, context ) { if ( typeof context.getElementById !== "undefined" && documentIsHTML ) { var m = context.getElementById( id ); // Check parentNode to catch when Blackberry 4.6 returns // nodes that are no longer in the document #6963 return m && m.parentNode ? [ m ] : []; } }; Expr.filter["ID"] = function( id ) { var attrId = id.replace( runescape, funescape ); return function( elem ) { return elem.getAttribute("id") === attrId; }; }; } else { // Support: IE6/7 // getElementById is not reliable as a find shortcut delete Expr.find["ID"]; Expr.filter["ID"] = function( id ) { var attrId = id.replace( runescape, funescape ); return function( elem ) { var node = typeof elem.getAttributeNode !== "undefined" && elem.getAttributeNode("id"); return node && node.value === attrId; }; }; } // Tag Expr.find["TAG"] = support.getElementsByTagName ? 
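// Illustrative note (not part of the upstream source): when getById is
// false (old IE, where getElementById may also match an element whose
// *name* equals the id), the fallback filter above verifies the actual
// id attribute node. Given
//
//   <input name="main"> ... <div id="main"></div>
//
// only the div passes Expr.filter["ID"]("main"), because only its
// getAttributeNode("id").value is "main".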
function( tag, context ) { if ( typeof context.getElementsByTagName !== "undefined" ) { return context.getElementsByTagName( tag ); // DocumentFragment nodes don't have gEBTN } else if ( support.qsa ) { return context.querySelectorAll( tag ); } } : function( tag, context ) { var elem, tmp = [], i = 0, // By happy coincidence, a (broken) gEBTN appears on DocumentFragment nodes too results = context.getElementsByTagName( tag ); // Filter out possible comments if ( tag === "*" ) { while ( (elem = results[i++]) ) { if ( elem.nodeType === 1 ) { tmp.push( elem ); } } return tmp; } return results; }; // Class Expr.find["CLASS"] = support.getElementsByClassName && function( className, context ) { if ( documentIsHTML ) { return context.getElementsByClassName( className ); } }; /* QSA/matchesSelector ---------------------------------------------------------------------- */ // QSA and matchesSelector support // matchesSelector(:active) reports false when true (IE9/Opera 11.5) rbuggyMatches = []; // qSa(:focus) reports false when true (Chrome 21) // We allow this because of a bug in IE8/9 that throws an error // whenever `document.activeElement` is accessed on an iframe // So, we allow :focus to pass through QSA all the time to avoid the IE error // See http://bugs.jquery.com/ticket/13378 rbuggyQSA = []; if ( (support.qsa = rnative.test( doc.querySelectorAll )) ) { // Build QSA regex // Regex strategy adopted from Diego Perini assert(function( div ) { // Select is set to empty string on purpose // This is to test IE's treatment of not explicitly // setting a boolean content attribute, // since its presence should be enough // http://bugs.jquery.com/ticket/12359 docElem.appendChild( div ).innerHTML = "" + ""; // Support: IE8, Opera 11-12.16 // Nothing should be selected when empty strings follow ^= or $= or *= // The test attribute must be unknown in Opera but "safe" for WinRT // http://msdn.microsoft.com/en-us/library/ie/hh465388.aspx#attribute_section if ( div.querySelectorAll("[msallowcapture^='']").length ) { rbuggyQSA.push( "[*^$]=" + whitespace + "*(?:''|\"\")" ); } // Support: IE8 // Boolean attributes and "value" are not treated correctly if ( !div.querySelectorAll("[selected]").length ) { rbuggyQSA.push( "\\[" + whitespace + "*(?:value|" + booleans + ")" ); } // Support: Chrome<29, Android<4.2+, Safari<7.0+, iOS<7.0+, PhantomJS<1.9.7+ if ( !div.querySelectorAll( "[id~=" + expando + "-]" ).length ) { rbuggyQSA.push("~="); } // Webkit/Opera - :checked should return selected option elements // http://www.w3.org/TR/2011/REC-css3-selectors-20110929/#checked // IE8 throws error here and will not see later tests if ( !div.querySelectorAll(":checked").length ) { rbuggyQSA.push(":checked"); } // Support: Safari 8+, iOS 8+ // https://bugs.webkit.org/show_bug.cgi?id=136851 // In-page `selector#id sibing-combinator selector` fails if ( !div.querySelectorAll( "a#" + expando + "+*" ).length ) { rbuggyQSA.push(".#.+[+~]"); } }); assert(function( div ) { // Support: Windows 8 Native Apps // The type and name attributes are restricted during .innerHTML assignment var input = doc.createElement("input"); input.setAttribute( "type", "hidden" ); div.appendChild( input ).setAttribute( "name", "D" ); // Support: IE8 // Enforce case-sensitivity of name attribute if ( div.querySelectorAll("[name=d]").length ) { rbuggyQSA.push( "name" + whitespace + "*[*^$|!~]?=" ); } // FF 3.5 - :enabled/:disabled and hidden elements (hidden elements are still enabled) // IE8 throws error here and will not see later tests if ( 
!div.querySelectorAll(":enabled").length ) { rbuggyQSA.push( ":enabled", ":disabled" ); } // Opera 10-11 does not throw on post-comma invalid pseudos div.querySelectorAll("*,:x"); rbuggyQSA.push(",.*:"); }); } if ( (support.matchesSelector = rnative.test( (matches = docElem.matches || docElem.webkitMatchesSelector || docElem.mozMatchesSelector || docElem.oMatchesSelector || docElem.msMatchesSelector) )) ) { assert(function( div ) { // Check to see if it's possible to do matchesSelector // on a disconnected node (IE 9) support.disconnectedMatch = matches.call( div, "div" ); // This should fail with an exception // Gecko does not error, returns false instead matches.call( div, "[s!='']:x" ); rbuggyMatches.push( "!=", pseudos ); }); } rbuggyQSA = rbuggyQSA.length && new RegExp( rbuggyQSA.join("|") ); rbuggyMatches = rbuggyMatches.length && new RegExp( rbuggyMatches.join("|") ); /* Contains ---------------------------------------------------------------------- */ hasCompare = rnative.test( docElem.compareDocumentPosition ); // Element contains another // Purposefully does not implement inclusive descendent // As in, an element does not contain itself contains = hasCompare || rnative.test( docElem.contains ) ? function( a, b ) { var adown = a.nodeType === 9 ? a.documentElement : a, bup = b && b.parentNode; return a === bup || !!( bup && bup.nodeType === 1 && ( adown.contains ? adown.contains( bup ) : a.compareDocumentPosition && a.compareDocumentPosition( bup ) & 16 )); } : function( a, b ) { if ( b ) { while ( (b = b.parentNode) ) { if ( b === a ) { return true; } } } return false; }; /* Sorting ---------------------------------------------------------------------- */ // Document order sorting sortOrder = hasCompare ? function( a, b ) { // Flag for duplicate removal if ( a === b ) { hasDuplicate = true; return 0; } // Sort on method existence if only one input has compareDocumentPosition var compare = !a.compareDocumentPosition - !b.compareDocumentPosition; if ( compare ) { return compare; } // Calculate position if both inputs belong to the same document compare = ( a.ownerDocument || a ) === ( b.ownerDocument || b ) ? a.compareDocumentPosition( b ) : // Otherwise we know they are disconnected 1; // Disconnected nodes if ( compare & 1 || (!support.sortDetached && b.compareDocumentPosition( a ) === compare) ) { // Choose the first element that is related to our preferred document if ( a === doc || a.ownerDocument === preferredDoc && contains(preferredDoc, a) ) { return -1; } if ( b === doc || b.ownerDocument === preferredDoc && contains(preferredDoc, b) ) { return 1; } // Maintain original order return sortInput ? ( indexOf( sortInput, a ) - indexOf( sortInput, b ) ) : 0; } return compare & 4 ? -1 : 1; } : function( a, b ) { // Exit early if the nodes are identical if ( a === b ) { hasDuplicate = true; return 0; } var cur, i = 0, aup = a.parentNode, bup = b.parentNode, ap = [ a ], bp = [ b ]; // Parentless nodes are either documents or disconnected if ( !aup || !bup ) { return a === doc ? -1 : b === doc ? 1 : aup ? -1 : bup ? 1 : sortInput ? 
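// Illustrative note (not part of the upstream source): the
// compareDocumentPosition-based sortOrder above inspects only two bits
// of the returned mask:
//
//   1  DOCUMENT_POSITION_DISCONNECTED  - nodes live in different trees
//   4  DOCUMENT_POSITION_FOLLOWING     - b comes after a in the document
//
// so `compare & 4 ? -1 : 1` sorts a before b exactly when b follows a,
// while the `compare & 1` branch falls back to preferred-document and
// original-input order for disconnected nodes.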
( indexOf( sortInput, a ) - indexOf( sortInput, b ) ) : 0; // If the nodes are siblings, we can do a quick check } else if ( aup === bup ) { return siblingCheck( a, b ); } // Otherwise we need full lists of their ancestors for comparison cur = a; while ( (cur = cur.parentNode) ) { ap.unshift( cur ); } cur = b; while ( (cur = cur.parentNode) ) { bp.unshift( cur ); } // Walk down the tree looking for a discrepancy while ( ap[i] === bp[i] ) { i++; } return i ? // Do a sibling check if the nodes have a common ancestor siblingCheck( ap[i], bp[i] ) : // Otherwise nodes in our document sort first ap[i] === preferredDoc ? -1 : bp[i] === preferredDoc ? 1 : 0; }; return doc; }; Sizzle.matches = function( expr, elements ) { return Sizzle( expr, null, null, elements ); }; Sizzle.matchesSelector = function( elem, expr ) { // Set document vars if needed if ( ( elem.ownerDocument || elem ) !== document ) { setDocument( elem ); } // Make sure that attribute selectors are quoted expr = expr.replace( rattributeQuotes, "='$1']" ); if ( support.matchesSelector && documentIsHTML && ( !rbuggyMatches || !rbuggyMatches.test( expr ) ) && ( !rbuggyQSA || !rbuggyQSA.test( expr ) ) ) { try { var ret = matches.call( elem, expr ); // IE 9's matchesSelector returns false on disconnected nodes if ( ret || support.disconnectedMatch || // As well, disconnected nodes are said to be in a document // fragment in IE 9 elem.document && elem.document.nodeType !== 11 ) { return ret; } } catch (e) {} } return Sizzle( expr, document, null, [ elem ] ).length > 0; }; Sizzle.contains = function( context, elem ) { // Set document vars if needed if ( ( context.ownerDocument || context ) !== document ) { setDocument( context ); } return contains( context, elem ); }; Sizzle.attr = function( elem, name ) { // Set document vars if needed if ( ( elem.ownerDocument || elem ) !== document ) { setDocument( elem ); } var fn = Expr.attrHandle[ name.toLowerCase() ], // Don't get fooled by Object.prototype properties (jQuery #13807) val = fn && hasOwn.call( Expr.attrHandle, name.toLowerCase() ) ? fn( elem, name, !documentIsHTML ) : undefined; return val !== undefined ? val : support.attributes || !documentIsHTML ? elem.getAttribute( name ) : (val = elem.getAttributeNode(name)) && val.specified ? 
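// Illustrative sketch (not part of the upstream source), assuming a
// browser context: Sizzle.matchesSelector above first normalizes
// unquoted attribute values, then tries the native engine, and only
// falls back to a full Sizzle run for known-buggy selectors or nodes:
//
//   Sizzle.matchesSelector( elem, "[data-kind=primary]" );
//   // rattributeQuotes rewrites this to "[data-kind='primary']"
//   // before handing it to matches/webkitMatchesSelector/etc.
//
// (elem and the data-kind attribute are placeholders.)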
val.value : null; }; Sizzle.error = function( msg ) { throw new Error( "Syntax error, unrecognized expression: " + msg ); }; /** * Document sorting and removing duplicates * @param {ArrayLike} results */ Sizzle.uniqueSort = function( results ) { var elem, duplicates = [], j = 0, i = 0; // Unless we *know* we can detect duplicates, assume their presence hasDuplicate = !support.detectDuplicates; sortInput = !support.sortStable && results.slice( 0 ); results.sort( sortOrder ); if ( hasDuplicate ) { while ( (elem = results[i++]) ) { if ( elem === results[ i ] ) { j = duplicates.push( i ); } } while ( j-- ) { results.splice( duplicates[ j ], 1 ); } } // Clear input after sorting to release objects // See https://github.com/jquery/sizzle/pull/225 sortInput = null; return results; }; /** * Utility function for retrieving the text value of an array of DOM nodes * @param {Array|Element} elem */ getText = Sizzle.getText = function( elem ) { var node, ret = "", i = 0, nodeType = elem.nodeType; if ( !nodeType ) { // If no nodeType, this is expected to be an array while ( (node = elem[i++]) ) { // Do not traverse comment nodes ret += getText( node ); } } else if ( nodeType === 1 || nodeType === 9 || nodeType === 11 ) { // Use textContent for elements // innerText usage removed for consistency of new lines (jQuery #11153) if ( typeof elem.textContent === "string" ) { return elem.textContent; } else { // Traverse its children for ( elem = elem.firstChild; elem; elem = elem.nextSibling ) { ret += getText( elem ); } } } else if ( nodeType === 3 || nodeType === 4 ) { return elem.nodeValue; } // Do not include comment or processing instruction nodes return ret; }; Expr = Sizzle.selectors = { // Can be adjusted by the user cacheLength: 50, createPseudo: markFunction, match: matchExpr, attrHandle: {}, find: {}, relative: { ">": { dir: "parentNode", first: true }, " ": { dir: "parentNode" }, "+": { dir: "previousSibling", first: true }, "~": { dir: "previousSibling" } }, preFilter: { "ATTR": function( match ) { match[1] = match[1].replace( runescape, funescape ); // Move the given value to match[3] whether quoted or unquoted match[3] = ( match[3] || match[4] || match[5] || "" ).replace( runescape, funescape ); if ( match[2] === "~=" ) { match[3] = " " + match[3] + " "; } return match.slice( 0, 4 ); }, "CHILD": function( match ) { /* matches from matchExpr["CHILD"] 1 type (only|nth|...) 2 what (child|of-type) 3 argument (even|odd|\d*|\d*n([+-]\d+)?|...) 4 xn-component of xn+y argument ([+-]?\d*n|) 5 sign of xn-component 6 x of xn-component 7 sign of y-component 8 y of y-component */ match[1] = match[1].toLowerCase(); if ( match[1].slice( 0, 3 ) === "nth" ) { // nth-* requires argument if ( !match[3] ) { Sizzle.error( match[0] ); } // numeric x and y parameters for Expr.filter.CHILD // remember that false/true cast respectively to 0/1 match[4] = +( match[4] ? 
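// Illustrative note (not part of the upstream source): after this
// preFilter normalizes an `an+b` argument, the CHILD token carries two
// plain numbers and Expr.filter["CHILD"] never re-parses strings:
//
//   ":nth-child(2n+1)" -> match[4] = 2, match[5] = 1
//   ":nth-child(odd)"  -> match[4] = 2, match[5] = 1   (same cycle)
//   ":nth-child(3)"    -> match[4] = 0, match[5] = 3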
match[5] + (match[6] || 1) : 2 * ( match[3] === "even" || match[3] === "odd" ) ); match[5] = +( ( match[7] + match[8] ) || match[3] === "odd" ); // other types prohibit arguments } else if ( match[3] ) { Sizzle.error( match[0] ); } return match; }, "PSEUDO": function( match ) { var excess, unquoted = !match[6] && match[2]; if ( matchExpr["CHILD"].test( match[0] ) ) { return null; } // Accept quoted arguments as-is if ( match[3] ) { match[2] = match[4] || match[5] || ""; // Strip excess characters from unquoted arguments } else if ( unquoted && rpseudo.test( unquoted ) && // Get excess from tokenize (recursively) (excess = tokenize( unquoted, true )) && // advance to the next closing parenthesis (excess = unquoted.indexOf( ")", unquoted.length - excess ) - unquoted.length) ) { // excess is a negative index match[0] = match[0].slice( 0, excess ); match[2] = unquoted.slice( 0, excess ); } // Return only captures needed by the pseudo filter method (type and argument) return match.slice( 0, 3 ); } }, filter: { "TAG": function( nodeNameSelector ) { var nodeName = nodeNameSelector.replace( runescape, funescape ).toLowerCase(); return nodeNameSelector === "*" ? function() { return true; } : function( elem ) { return elem.nodeName && elem.nodeName.toLowerCase() === nodeName; }; }, "CLASS": function( className ) { var pattern = classCache[ className + " " ]; return pattern || (pattern = new RegExp( "(^|" + whitespace + ")" + className + "(" + whitespace + "|$)" )) && classCache( className, function( elem ) { return pattern.test( typeof elem.className === "string" && elem.className || typeof elem.getAttribute !== "undefined" && elem.getAttribute("class") || "" ); }); }, "ATTR": function( name, operator, check ) { return function( elem ) { var result = Sizzle.attr( elem, name ); if ( result == null ) { return operator === "!="; } if ( !operator ) { return true; } result += ""; return operator === "=" ? result === check : operator === "!=" ? result !== check : operator === "^=" ? check && result.indexOf( check ) === 0 : operator === "*=" ? check && result.indexOf( check ) > -1 : operator === "$=" ? check && result.slice( -check.length ) === check : operator === "~=" ? ( " " + result.replace( rwhitespace, " " ) + " " ).indexOf( check ) > -1 : operator === "|=" ? result === check || result.slice( 0, check.length + 1 ) === check + "-" : false; }; }, "CHILD": function( type, what, argument, first, last ) { var simple = type.slice( 0, 3 ) !== "nth", forward = type.slice( -4 ) !== "last", ofType = what === "of-type"; return first === 1 && last === 0 ? // Shortcut for :nth-*(n) function( elem ) { return !!elem.parentNode; } : function( elem, context, xml ) { var cache, outerCache, node, diff, nodeIndex, start, dir = simple !== forward ? "nextSibling" : "previousSibling", parent = elem.parentNode, name = ofType && elem.nodeName.toLowerCase(), useCache = !xml && !ofType; if ( parent ) { // :(first|last|only)-(child|of-type) if ( simple ) { while ( dir ) { node = elem; while ( (node = node[ dir ]) ) { if ( ofType ? node.nodeName.toLowerCase() === name : node.nodeType === 1 ) { return false; } } // Reverse direction for :only-* (if we haven't yet done so) start = dir = type === "only" && !start && "nextSibling"; } return true; } start = [ forward ? parent.firstChild : parent.lastChild ]; // non-xml :nth-child(...) 
stores cache data on `parent` if ( forward && useCache ) { // Seek `elem` from a previously-cached index outerCache = parent[ expando ] || (parent[ expando ] = {}); cache = outerCache[ type ] || []; nodeIndex = cache[0] === dirruns && cache[1]; diff = cache[0] === dirruns && cache[2]; node = nodeIndex && parent.childNodes[ nodeIndex ]; while ( (node = ++nodeIndex && node && node[ dir ] || // Fallback to seeking `elem` from the start (diff = nodeIndex = 0) || start.pop()) ) { // When found, cache indexes on `parent` and break if ( node.nodeType === 1 && ++diff && node === elem ) { outerCache[ type ] = [ dirruns, nodeIndex, diff ]; break; } } // Use previously-cached element index if available } else if ( useCache && (cache = (elem[ expando ] || (elem[ expando ] = {}))[ type ]) && cache[0] === dirruns ) { diff = cache[1]; // xml :nth-child(...) or :nth-last-child(...) or :nth(-last)?-of-type(...) } else { // Use the same loop as above to seek `elem` from the start while ( (node = ++nodeIndex && node && node[ dir ] || (diff = nodeIndex = 0) || start.pop()) ) { if ( ( ofType ? node.nodeName.toLowerCase() === name : node.nodeType === 1 ) && ++diff ) { // Cache the index of each encountered element if ( useCache ) { (node[ expando ] || (node[ expando ] = {}))[ type ] = [ dirruns, diff ]; } if ( node === elem ) { break; } } } } // Incorporate the offset, then check against cycle size diff -= last; return diff === first || ( diff % first === 0 && diff / first >= 0 ); } }; }, "PSEUDO": function( pseudo, argument ) { // pseudo-class names are case-insensitive // http://www.w3.org/TR/selectors/#pseudo-classes // Prioritize by case sensitivity in case custom pseudos are added with uppercase letters // Remember that setFilters inherits from pseudos var args, fn = Expr.pseudos[ pseudo ] || Expr.setFilters[ pseudo.toLowerCase() ] || Sizzle.error( "unsupported pseudo: " + pseudo ); // The user may use createPseudo to indicate that // arguments are needed to create the filter function // just as Sizzle does if ( fn[ expando ] ) { return fn( argument ); } // But maintain support for old signatures if ( fn.length > 1 ) { args = [ pseudo, pseudo, "", argument ]; return Expr.setFilters.hasOwnProperty( pseudo.toLowerCase() ) ? markFunction(function( seed, matches ) { var idx, matched = fn( seed, argument ), i = matched.length; while ( i-- ) { idx = indexOf( seed, matched[i] ); seed[ idx ] = !( matches[ idx ] = matched[i] ); } }) : function( elem ) { return fn( elem, 0, args ); }; } return fn; } }, pseudos: { // Potentially complex pseudos "not": markFunction(function( selector ) { // Trim the selector passed to compile // to avoid treating leading and trailing // spaces as combinators var input = [], results = [], matcher = compile( selector.replace( rtrim, "$1" ) ); return matcher[ expando ] ? 
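// Illustrative sketch (not part of the upstream source): createPseudo
// (markFunction) flags a pseudo as argument-aware, so the PSEUDO filter
// above calls it once with its argument and reuses the returned element
// test. A hypothetical ":data-role(x)" pseudo could be registered as:
//
//   Expr.pseudos[ "data-role" ] = Expr.createPseudo(function( role ) {
//       return function( elem ) {
//           return elem.getAttribute( "data-role" ) === role;
//       };
//   });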
markFunction(function( seed, matches, context, xml ) { var elem, unmatched = matcher( seed, null, xml, [] ), i = seed.length; // Match elements unmatched by `matcher` while ( i-- ) { if ( (elem = unmatched[i]) ) { seed[i] = !(matches[i] = elem); } } }) : function( elem, context, xml ) { input[0] = elem; matcher( input, null, xml, results ); // Don't keep the element (issue #299) input[0] = null; return !results.pop(); }; }), "has": markFunction(function( selector ) { return function( elem ) { return Sizzle( selector, elem ).length > 0; }; }), "contains": markFunction(function( text ) { text = text.replace( runescape, funescape ); return function( elem ) { return ( elem.textContent || elem.innerText || getText( elem ) ).indexOf( text ) > -1; }; }), // "Whether an element is represented by a :lang() selector // is based solely on the element's language value // being equal to the identifier C, // or beginning with the identifier C immediately followed by "-". // The matching of C against the element's language value is performed case-insensitively. // The identifier C does not have to be a valid language name." // http://www.w3.org/TR/selectors/#lang-pseudo "lang": markFunction( function( lang ) { // lang value must be a valid identifier if ( !ridentifier.test(lang || "") ) { Sizzle.error( "unsupported lang: " + lang ); } lang = lang.replace( runescape, funescape ).toLowerCase(); return function( elem ) { var elemLang; do { if ( (elemLang = documentIsHTML ? elem.lang : elem.getAttribute("xml:lang") || elem.getAttribute("lang")) ) { elemLang = elemLang.toLowerCase(); return elemLang === lang || elemLang.indexOf( lang + "-" ) === 0; } } while ( (elem = elem.parentNode) && elem.nodeType === 1 ); return false; }; }), // Miscellaneous "target": function( elem ) { var hash = window.location && window.location.hash; return hash && hash.slice( 1 ) === elem.id; }, "root": function( elem ) { return elem === docElem; }, "focus": function( elem ) { return elem === document.activeElement && (!document.hasFocus || document.hasFocus()) && !!(elem.type || elem.href || ~elem.tabIndex); }, // Boolean properties "enabled": function( elem ) { return elem.disabled === false; }, "disabled": function( elem ) { return elem.disabled === true; }, "checked": function( elem ) { // In CSS3, :checked should return both checked and selected elements // http://www.w3.org/TR/2011/REC-css3-selectors-20110929/#checked var nodeName = elem.nodeName.toLowerCase(); return (nodeName === "input" && !!elem.checked) || (nodeName === "option" && !!elem.selected); }, "selected": function( elem ) { // Accessing this property makes selected-by-default // options in Safari work properly if ( elem.parentNode ) { elem.parentNode.selectedIndex; } return elem.selected === true; }, // Contents "empty": function( elem ) { // http://www.w3.org/TR/selectors/#empty-pseudo // :empty is negated by element (1) or content nodes (text: 3; cdata: 4; entity ref: 5), // but not by others (comment: 8; processing instruction: 7; etc.) 
// nodeType < 6 works because attributes (2) do not appear as children for ( elem = elem.firstChild; elem; elem = elem.nextSibling ) { if ( elem.nodeType < 6 ) { return false; } } return true; }, "parent": function( elem ) { return !Expr.pseudos["empty"]( elem ); }, // Element/input types "header": function( elem ) { return rheader.test( elem.nodeName ); }, "input": function( elem ) { return rinputs.test( elem.nodeName ); }, "button": function( elem ) { var name = elem.nodeName.toLowerCase(); return name === "input" && elem.type === "button" || name === "button"; }, "text": function( elem ) { var attr; return elem.nodeName.toLowerCase() === "input" && elem.type === "text" && // Support: IE<8 // New HTML5 attribute values (e.g., "search") appear with elem.type === "text" ( (attr = elem.getAttribute("type")) == null || attr.toLowerCase() === "text" ); }, // Position-in-collection "first": createPositionalPseudo(function() { return [ 0 ]; }), "last": createPositionalPseudo(function( matchIndexes, length ) { return [ length - 1 ]; }), "eq": createPositionalPseudo(function( matchIndexes, length, argument ) { return [ argument < 0 ? argument + length : argument ]; }), "even": createPositionalPseudo(function( matchIndexes, length ) { var i = 0; for ( ; i < length; i += 2 ) { matchIndexes.push( i ); } return matchIndexes; }), "odd": createPositionalPseudo(function( matchIndexes, length ) { var i = 1; for ( ; i < length; i += 2 ) { matchIndexes.push( i ); } return matchIndexes; }), "lt": createPositionalPseudo(function( matchIndexes, length, argument ) { var i = argument < 0 ? argument + length : argument; for ( ; --i >= 0; ) { matchIndexes.push( i ); } return matchIndexes; }), "gt": createPositionalPseudo(function( matchIndexes, length, argument ) { var i = argument < 0 ? argument + length : argument; for ( ; ++i < length; ) { matchIndexes.push( i ); } return matchIndexes; }) } }; Expr.pseudos["nth"] = Expr.pseudos["eq"]; // Add button/input type pseudos for ( i in { radio: true, checkbox: true, file: true, password: true, image: true } ) { Expr.pseudos[ i ] = createInputPseudo( i ); } for ( i in { submit: true, reset: true } ) { Expr.pseudos[ i ] = createButtonPseudo( i ); } // Easy API for creating new setFilters function setFilters() {} setFilters.prototype = Expr.filters = Expr.pseudos; Expr.setFilters = new setFilters(); tokenize = Sizzle.tokenize = function( selector, parseOnly ) { var matched, match, tokens, type, soFar, groups, preFilters, cached = tokenCache[ selector + " " ]; if ( cached ) { return parseOnly ? 
0 : cached.slice( 0 ); } soFar = selector; groups = []; preFilters = Expr.preFilter; while ( soFar ) { // Comma and first run if ( !matched || (match = rcomma.exec( soFar )) ) { if ( match ) { // Don't consume trailing commas as valid soFar = soFar.slice( match[0].length ) || soFar; } groups.push( (tokens = []) ); } matched = false; // Combinators if ( (match = rcombinators.exec( soFar )) ) { matched = match.shift(); tokens.push({ value: matched, // Cast descendant combinators to space type: match[0].replace( rtrim, " " ) }); soFar = soFar.slice( matched.length ); } // Filters for ( type in Expr.filter ) { if ( (match = matchExpr[ type ].exec( soFar )) && (!preFilters[ type ] || (match = preFilters[ type ]( match ))) ) { matched = match.shift(); tokens.push({ value: matched, type: type, matches: match }); soFar = soFar.slice( matched.length ); } } if ( !matched ) { break; } } // Return the length of the invalid excess // if we're just parsing // Otherwise, throw an error or return tokens return parseOnly ? soFar.length : soFar ? Sizzle.error( selector ) : // Cache the tokens tokenCache( selector, groups ).slice( 0 ); }; function toSelector( tokens ) { var i = 0, len = tokens.length, selector = ""; for ( ; i < len; i++ ) { selector += tokens[i].value; } return selector; } function addCombinator( matcher, combinator, base ) { var dir = combinator.dir, checkNonElements = base && dir === "parentNode", doneName = done++; return combinator.first ? // Check against closest ancestor/preceding element function( elem, context, xml ) { while ( (elem = elem[ dir ]) ) { if ( elem.nodeType === 1 || checkNonElements ) { return matcher( elem, context, xml ); } } } : // Check against all ancestor/preceding elements function( elem, context, xml ) { var oldCache, outerCache, newCache = [ dirruns, doneName ]; // We can't set arbitrary data on XML nodes, so they don't benefit from dir caching if ( xml ) { while ( (elem = elem[ dir ]) ) { if ( elem.nodeType === 1 || checkNonElements ) { if ( matcher( elem, context, xml ) ) { return true; } } } } else { while ( (elem = elem[ dir ]) ) { if ( elem.nodeType === 1 || checkNonElements ) { outerCache = elem[ expando ] || (elem[ expando ] = {}); if ( (oldCache = outerCache[ dir ]) && oldCache[ 0 ] === dirruns && oldCache[ 1 ] === doneName ) { // Assign to newCache so results back-propagate to previous elements return (newCache[ 2 ] = oldCache[ 2 ]); } else { // Reuse newcache so results back-propagate to previous elements outerCache[ dir ] = newCache; // A match means we're done; a fail means we have to keep checking if ( (newCache[ 2 ] = matcher( elem, context, xml )) ) { return true; } } } } } }; } function elementMatcher( matchers ) { return matchers.length > 1 ? 
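// Illustrative note (not part of the upstream source): tokenize() above
// yields one token array per comma-separated group, roughly:
//
//   tokenize( "ul > li.item" );
//   // -> [ [ { value: "ul",    type: "TAG" },
//   //        { value: " > ",   type: ">" },
//   //        { value: "li",    type: "TAG" },
//   //        { value: ".item", type: "CLASS" } ] ]
//
// (matches arrays omitted for brevity). With parseOnly set, it instead
// returns the length of the trailing unparseable excess, which the
// PSEUDO preFilter uses to trim unquoted arguments.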
function( elem, context, xml ) { var i = matchers.length; while ( i-- ) { if ( !matchers[i]( elem, context, xml ) ) { return false; } } return true; } : matchers[0]; } function multipleContexts( selector, contexts, results ) { var i = 0, len = contexts.length; for ( ; i < len; i++ ) { Sizzle( selector, contexts[i], results ); } return results; } function condense( unmatched, map, filter, context, xml ) { var elem, newUnmatched = [], i = 0, len = unmatched.length, mapped = map != null; for ( ; i < len; i++ ) { if ( (elem = unmatched[i]) ) { if ( !filter || filter( elem, context, xml ) ) { newUnmatched.push( elem ); if ( mapped ) { map.push( i ); } } } } return newUnmatched; } function setMatcher( preFilter, selector, matcher, postFilter, postFinder, postSelector ) { if ( postFilter && !postFilter[ expando ] ) { postFilter = setMatcher( postFilter ); } if ( postFinder && !postFinder[ expando ] ) { postFinder = setMatcher( postFinder, postSelector ); } return markFunction(function( seed, results, context, xml ) { var temp, i, elem, preMap = [], postMap = [], preexisting = results.length, // Get initial elements from seed or context elems = seed || multipleContexts( selector || "*", context.nodeType ? [ context ] : context, [] ), // Prefilter to get matcher input, preserving a map for seed-results synchronization matcherIn = preFilter && ( seed || !selector ) ? condense( elems, preMap, preFilter, context, xml ) : elems, matcherOut = matcher ? // If we have a postFinder, or filtered seed, or non-seed postFilter or preexisting results, postFinder || ( seed ? preFilter : preexisting || postFilter ) ? // ...intermediate processing is necessary [] : // ...otherwise use results directly results : matcherIn; // Find primary matches if ( matcher ) { matcher( matcherIn, matcherOut, context, xml ); } // Apply postFilter if ( postFilter ) { temp = condense( matcherOut, postMap ); postFilter( temp, [], context, xml ); // Un-match failing elements by moving them back to matcherIn i = temp.length; while ( i-- ) { if ( (elem = temp[i]) ) { matcherOut[ postMap[i] ] = !(matcherIn[ postMap[i] ] = elem); } } } if ( seed ) { if ( postFinder || preFilter ) { if ( postFinder ) { // Get the final matcherOut by condensing this intermediate into postFinder contexts temp = []; i = matcherOut.length; while ( i-- ) { if ( (elem = matcherOut[i]) ) { // Restore matcherIn since elem is not yet a final match temp.push( (matcherIn[i] = elem) ); } } postFinder( null, (matcherOut = []), temp, xml ); } // Move matched elements from seed to results to keep them synchronized i = matcherOut.length; while ( i-- ) { if ( (elem = matcherOut[i]) && (temp = postFinder ? indexOf( seed, elem ) : preMap[i]) > -1 ) { seed[temp] = !(results[temp] = elem); } } } // Add elements to results, through postFinder if defined } else { matcherOut = condense( matcherOut === results ? matcherOut.splice( preexisting, matcherOut.length ) : matcherOut ); if ( postFinder ) { postFinder( null, results, matcherOut, xml ); } else { push.apply( results, matcherOut ); } } }); } function matcherFromTokens( tokens ) { var checkContext, matcher, j, len = tokens.length, leadingRelative = Expr.relative[ tokens[0].type ], implicitRelative = leadingRelative || Expr.relative[" "], i = leadingRelative ? 
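// Illustrative note (not part of the upstream source): elementMatcher()
// above AND-combines per-token element tests, so a compound selector
// such as "input.wide[disabled]" compiles to one function that accepts
// an element only if every TAG/CLASS/ATTR matcher does:
//
//   var m = elementMatcher([ isInput, hasWideClass, isDisabled ]);
//   m( elem, context, xml );  // true only if all three accept elem
//
// (the three matchers are placeholders). A single-entry array returns
// the original matcher untouched.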
1 : 0, // The foundational matcher ensures that elements are reachable from top-level context(s) matchContext = addCombinator( function( elem ) { return elem === checkContext; }, implicitRelative, true ), matchAnyContext = addCombinator( function( elem ) { return indexOf( checkContext, elem ) > -1; }, implicitRelative, true ), matchers = [ function( elem, context, xml ) { var ret = ( !leadingRelative && ( xml || context !== outermostContext ) ) || ( (checkContext = context).nodeType ? matchContext( elem, context, xml ) : matchAnyContext( elem, context, xml ) ); // Avoid hanging onto element (issue #299) checkContext = null; return ret; } ]; for ( ; i < len; i++ ) { if ( (matcher = Expr.relative[ tokens[i].type ]) ) { matchers = [ addCombinator(elementMatcher( matchers ), matcher) ]; } else { matcher = Expr.filter[ tokens[i].type ].apply( null, tokens[i].matches ); // Return special upon seeing a positional matcher if ( matcher[ expando ] ) { // Find the next relative operator (if any) for proper handling j = ++i; for ( ; j < len; j++ ) { if ( Expr.relative[ tokens[j].type ] ) { break; } } return setMatcher( i > 1 && elementMatcher( matchers ), i > 1 && toSelector( // If the preceding token was a descendant combinator, insert an implicit any-element `*` tokens.slice( 0, i - 1 ).concat({ value: tokens[ i - 2 ].type === " " ? "*" : "" }) ).replace( rtrim, "$1" ), matcher, i < j && matcherFromTokens( tokens.slice( i, j ) ), j < len && matcherFromTokens( (tokens = tokens.slice( j )) ), j < len && toSelector( tokens ) ); } matchers.push( matcher ); } } return elementMatcher( matchers ); } function matcherFromGroupMatchers( elementMatchers, setMatchers ) { var bySet = setMatchers.length > 0, byElement = elementMatchers.length > 0, superMatcher = function( seed, context, xml, results, outermost ) { var elem, j, matcher, matchedCount = 0, i = "0", unmatched = seed && [], setMatched = [], contextBackup = outermostContext, // We must always have either seed elements or outermost context elems = seed || byElement && Expr.find["TAG"]( "*", outermost ), // Use integer dirruns iff this is the outermost matcher dirrunsUnique = (dirruns += contextBackup == null ? 
1 : Math.random() || 0.1), len = elems.length; if ( outermost ) { outermostContext = context !== document && context; } // Add elements passing elementMatchers directly to results // Keep `i` a string if there are no elements so `matchedCount` will be "00" below // Support: IE<9, Safari // Tolerate NodeList properties (IE: "length"; Safari: ) matching elements by id for ( ; i !== len && (elem = elems[i]) != null; i++ ) { if ( byElement && elem ) { j = 0; while ( (matcher = elementMatchers[j++]) ) { if ( matcher( elem, context, xml ) ) { results.push( elem ); break; } } if ( outermost ) { dirruns = dirrunsUnique; } } // Track unmatched elements for set filters if ( bySet ) { // They will have gone through all possible matchers if ( (elem = !matcher && elem) ) { matchedCount--; } // Lengthen the array for every element, matched or not if ( seed ) { unmatched.push( elem ); } } } // Apply set filters to unmatched elements matchedCount += i; if ( bySet && i !== matchedCount ) { j = 0; while ( (matcher = setMatchers[j++]) ) { matcher( unmatched, setMatched, context, xml ); } if ( seed ) { // Reintegrate element matches to eliminate the need for sorting if ( matchedCount > 0 ) { while ( i-- ) { if ( !(unmatched[i] || setMatched[i]) ) { setMatched[i] = pop.call( results ); } } } // Discard index placeholder values to get only actual matches setMatched = condense( setMatched ); } // Add matches to results push.apply( results, setMatched ); // Seedless set matches succeeding multiple successful matchers stipulate sorting if ( outermost && !seed && setMatched.length > 0 && ( matchedCount + setMatchers.length ) > 1 ) { Sizzle.uniqueSort( results ); } } // Override manipulation of globals by nested matchers if ( outermost ) { dirruns = dirrunsUnique; outermostContext = contextBackup; } return unmatched; }; return bySet ? 
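// Illustrative note (not part of the upstream source): superMatcher
// above works in two passes. Element matchers (TAG/CLASS/ATTR/...) test
// candidates one at a time and push hits straight into results; set
// matchers (positional pseudos such as ":first" or ":gt(2)") then run
// on the collected set, since "position within the match" is only
// defined once the whole set is known.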
markFunction( superMatcher ) : superMatcher; } compile = Sizzle.compile = function( selector, match /* Internal Use Only */ ) { var i, setMatchers = [], elementMatchers = [], cached = compilerCache[ selector + " " ]; if ( !cached ) { // Generate a function of recursive functions that can be used to check each element if ( !match ) { match = tokenize( selector ); } i = match.length; while ( i-- ) { cached = matcherFromTokens( match[i] ); if ( cached[ expando ] ) { setMatchers.push( cached ); } else { elementMatchers.push( cached ); } } // Cache the compiled function cached = compilerCache( selector, matcherFromGroupMatchers( elementMatchers, setMatchers ) ); // Save selector and tokenization cached.selector = selector; } return cached; }; /** * A low-level selection function that works with Sizzle's compiled * selector functions * @param {String|Function} selector A selector or a pre-compiled * selector function built with Sizzle.compile * @param {Element} context * @param {Array} [results] * @param {Array} [seed] A set of elements to match against */ select = Sizzle.select = function( selector, context, results, seed ) { var i, tokens, token, type, find, compiled = typeof selector === "function" && selector, match = !seed && tokenize( (selector = compiled.selector || selector) ); results = results || []; // Try to minimize operations if there is no seed and only one group if ( match.length === 1 ) { // Take a shortcut and set the context if the root selector is an ID tokens = match[0] = match[0].slice( 0 ); if ( tokens.length > 2 && (token = tokens[0]).type === "ID" && support.getById && context.nodeType === 9 && documentIsHTML && Expr.relative[ tokens[1].type ] ) { context = ( Expr.find["ID"]( token.matches[0].replace(runescape, funescape), context ) || [] )[0]; if ( !context ) { return results; // Precompiled matchers will still verify ancestry, so step up a level } else if ( compiled ) { context = context.parentNode; } selector = selector.slice( tokens.shift().value.length ); } // Fetch a seed set for right-to-left matching i = matchExpr["needsContext"].test( selector ) ? 
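// Illustrative sketch (not part of the upstream source): a hot selector
// can be compiled once and handed back to select(), which recognizes
// the function, reads its .selector for the seed-finding shortcuts, and
// skips recompilation:
//
//   var matcher = Sizzle.compile( ".alert > a[href]" );
//   Sizzle.select( matcher, document, [] );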
0 : tokens.length; while ( i-- ) { token = tokens[i]; // Abort if we hit a combinator if ( Expr.relative[ (type = token.type) ] ) { break; } if ( (find = Expr.find[ type ]) ) { // Search, expanding context for leading sibling combinators if ( (seed = find( token.matches[0].replace( runescape, funescape ), rsibling.test( tokens[0].type ) && testContext( context.parentNode ) || context )) ) { // If seed is empty or no tokens remain, we can return early tokens.splice( i, 1 ); selector = seed.length && toSelector( tokens ); if ( !selector ) { push.apply( results, seed ); return results; } break; } } } } // Compile and execute a filtering function if one is not provided // Provide `match` to avoid retokenization if we modified the selector above ( compiled || compile( selector, match ) )( seed, context, !documentIsHTML, results, rsibling.test( selector ) && testContext( context.parentNode ) || context ); return results; }; // One-time assignments // Sort stability support.sortStable = expando.split("").sort( sortOrder ).join("") === expando; // Support: Chrome 14-35+ // Always assume duplicates if they aren't passed to the comparison function support.detectDuplicates = !!hasDuplicate; // Initialize against the default document setDocument(); // Support: Webkit<537.32 - Safari 6.0.3/Chrome 25 (fixed in Chrome 27) // Detached nodes confoundingly follow *each other* support.sortDetached = assert(function( div1 ) { // Should return 1, but returns 4 (following) return div1.compareDocumentPosition( document.createElement("div") ) & 1; }); // Support: IE<8 // Prevent attribute/property "interpolation" // http://msdn.microsoft.com/en-us/library/ms536429%28VS.85%29.aspx if ( !assert(function( div ) { div.innerHTML = ""; return div.firstChild.getAttribute("href") === "#" ; }) ) { addHandle( "type|href|height|width", function( elem, name, isXML ) { if ( !isXML ) { return elem.getAttribute( name, name.toLowerCase() === "type" ? 1 : 2 ); } }); } // Support: IE<9 // Use defaultValue in place of getAttribute("value") if ( !support.attributes || !assert(function( div ) { div.innerHTML = ""; div.firstChild.setAttribute( "value", "" ); return div.firstChild.getAttribute( "value" ) === ""; }) ) { addHandle( "value", function( elem, name, isXML ) { if ( !isXML && elem.nodeName.toLowerCase() === "input" ) { return elem.defaultValue; } }); } // Support: IE<9 // Use getAttributeNode to fetch booleans when getAttribute lies if ( !assert(function( div ) { return div.getAttribute("disabled") == null; }) ) { addHandle( booleans, function( elem, name, isXML ) { var val; if ( !isXML ) { return elem[ name ] === true ? name.toLowerCase() : (val = elem.getAttributeNode( name )) && val.specified ? 
val.value : null; } }); } return Sizzle; })( window ); jQuery.find = Sizzle; jQuery.expr = Sizzle.selectors; jQuery.expr[":"] = jQuery.expr.pseudos; jQuery.unique = Sizzle.uniqueSort; jQuery.text = Sizzle.getText; jQuery.isXMLDoc = Sizzle.isXML; jQuery.contains = Sizzle.contains; var rneedsContext = jQuery.expr.match.needsContext; var rsingleTag = (/^<(\w+)\s*\/?>(?:<\/\1>|)$/); var risSimple = /^.[^:#\[\.,]*$/; // Implement the identical functionality for filter and not function winnow( elements, qualifier, not ) { if ( jQuery.isFunction( qualifier ) ) { return jQuery.grep( elements, function( elem, i ) { /* jshint -W018 */ return !!qualifier.call( elem, i, elem ) !== not; }); } if ( qualifier.nodeType ) { return jQuery.grep( elements, function( elem ) { return ( elem === qualifier ) !== not; }); } if ( typeof qualifier === "string" ) { if ( risSimple.test( qualifier ) ) { return jQuery.filter( qualifier, elements, not ); } qualifier = jQuery.filter( qualifier, elements ); } return jQuery.grep( elements, function( elem ) { return ( jQuery.inArray( elem, qualifier ) >= 0 ) !== not; }); } jQuery.filter = function( expr, elems, not ) { var elem = elems[ 0 ]; if ( not ) { expr = ":not(" + expr + ")"; } return elems.length === 1 && elem.nodeType === 1 ? jQuery.find.matchesSelector( elem, expr ) ? [ elem ] : [] : jQuery.find.matches( expr, jQuery.grep( elems, function( elem ) { return elem.nodeType === 1; })); }; jQuery.fn.extend({ find: function( selector ) { var i, ret = [], self = this, len = self.length; if ( typeof selector !== "string" ) { return this.pushStack( jQuery( selector ).filter(function() { for ( i = 0; i < len; i++ ) { if ( jQuery.contains( self[ i ], this ) ) { return true; } } }) ); } for ( i = 0; i < len; i++ ) { jQuery.find( selector, self[ i ], ret ); } // Needed because $( selector, context ) becomes $( context ).find( selector ) ret = this.pushStack( len > 1 ? jQuery.unique( ret ) : ret ); ret.selector = this.selector ? this.selector + " " + selector : selector; return ret; }, filter: function( selector ) { return this.pushStack( winnow(this, selector || [], false) ); }, not: function( selector ) { return this.pushStack( winnow(this, selector || [], true) ); }, is: function( selector ) { return !!winnow( this, // If this is a positional/relative selector, check membership in the returned set // so $("p:first").is("p:last") won't return true for a doc with two "p". typeof selector === "string" && rneedsContext.test( selector ) ? 
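// Illustrative sketch (not part of the upstream source): winnow() above
// is the shared engine behind .filter() and .not(), so the same
// qualifier kinds work for both:
//
//   var banner = document.getElementById( "banner" );  // placeholder id
//   jQuery( "li" ).filter( ".active" );    // string -> jQuery.filter
//   jQuery( "li" ).not( banner );          // node   -> identity test
//   jQuery( "li" ).filter(function( i ) {  // function -> grep
//       return i % 2 === 0;
//   });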
jQuery( selector ) : selector || [], false ).length; } }); // Initialize a jQuery object // A central reference to the root jQuery(document) var rootjQuery, // Use the correct document accordingly with window argument (sandbox) document = window.document, // A simple way to check for HTML strings // Prioritize #id over to avoid XSS via location.hash (#9521) // Strict HTML recognition (#11290: must start with <) rquickExpr = /^(?:\s*(<[\w\W]+>)[^>]*|#([\w-]*))$/, init = jQuery.fn.init = function( selector, context ) { var match, elem; // HANDLE: $(""), $(null), $(undefined), $(false) if ( !selector ) { return this; } // Handle HTML strings if ( typeof selector === "string" ) { if ( selector.charAt(0) === "<" && selector.charAt( selector.length - 1 ) === ">" && selector.length >= 3 ) { // Assume that strings that start and end with <> are HTML and skip the regex check match = [ null, selector, null ]; } else { match = rquickExpr.exec( selector ); } // Match html or make sure no context is specified for #id if ( match && (match[1] || !context) ) { // HANDLE: $(html) -> $(array) if ( match[1] ) { context = context instanceof jQuery ? context[0] : context; // scripts is true for back-compat // Intentionally let the error be thrown if parseHTML is not present jQuery.merge( this, jQuery.parseHTML( match[1], context && context.nodeType ? context.ownerDocument || context : document, true ) ); // HANDLE: $(html, props) if ( rsingleTag.test( match[1] ) && jQuery.isPlainObject( context ) ) { for ( match in context ) { // Properties of context are called as methods if possible if ( jQuery.isFunction( this[ match ] ) ) { this[ match ]( context[ match ] ); // ...and otherwise set as attributes } else { this.attr( match, context[ match ] ); } } } return this; // HANDLE: $(#id) } else { elem = document.getElementById( match[2] ); // Check parentNode to catch when Blackberry 4.6 returns // nodes that are no longer in the document #6963 if ( elem && elem.parentNode ) { // Handle the case where IE and Opera return items // by name instead of ID if ( elem.id !== match[2] ) { return rootjQuery.find( selector ); } // Otherwise, we inject the element directly into the jQuery object this.length = 1; this[0] = elem; } this.context = document; this.selector = selector; return this; } // HANDLE: $(expr, $(...)) } else if ( !context || context.jquery ) { return ( context || rootjQuery ).find( selector ); // HANDLE: $(expr, context) // (which is just equivalent to: $(context).find(expr) } else { return this.constructor( context ).find( selector ); } // HANDLE: $(DOMElement) } else if ( selector.nodeType ) { this.context = this[0] = selector; this.length = 1; return this; // HANDLE: $(function) // Shortcut for document ready } else if ( jQuery.isFunction( selector ) ) { return typeof rootjQuery.ready !== "undefined" ? 
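// Illustrative note (not part of the upstream source): the dispatch in
// init above means
//
//   jQuery( "<p>hi</p>" )        // parseHTML branch
//   jQuery( "#main" )            // direct getElementById shortcut
//   jQuery( ".item", someElem )  // -> jQuery( someElem ).find( ".item" )
//   jQuery( domNode )            // wraps the node, length 1
//   jQuery( fn )                 // document-ready shorthand
//
// (someElem, domNode and fn are placeholders for an element, a DOM node
// and a function.)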
rootjQuery.ready( selector ) : // Execute immediately if ready is not present selector( jQuery ); } if ( selector.selector !== undefined ) { this.selector = selector.selector; this.context = selector.context; } return jQuery.makeArray( selector, this ); }; // Give the init function the jQuery prototype for later instantiation init.prototype = jQuery.fn; // Initialize central reference rootjQuery = jQuery( document ); var rparentsprev = /^(?:parents|prev(?:Until|All))/, // methods guaranteed to produce a unique set when starting from a unique set guaranteedUnique = { children: true, contents: true, next: true, prev: true }; jQuery.extend({ dir: function( elem, dir, until ) { var matched = [], cur = elem[ dir ]; while ( cur && cur.nodeType !== 9 && (until === undefined || cur.nodeType !== 1 || !jQuery( cur ).is( until )) ) { if ( cur.nodeType === 1 ) { matched.push( cur ); } cur = cur[dir]; } return matched; }, sibling: function( n, elem ) { var r = []; for ( ; n; n = n.nextSibling ) { if ( n.nodeType === 1 && n !== elem ) { r.push( n ); } } return r; } }); jQuery.fn.extend({ has: function( target ) { var i, targets = jQuery( target, this ), len = targets.length; return this.filter(function() { for ( i = 0; i < len; i++ ) { if ( jQuery.contains( this, targets[i] ) ) { return true; } } }); }, closest: function( selectors, context ) { var cur, i = 0, l = this.length, matched = [], pos = rneedsContext.test( selectors ) || typeof selectors !== "string" ? jQuery( selectors, context || this.context ) : 0; for ( ; i < l; i++ ) { for ( cur = this[i]; cur && cur !== context; cur = cur.parentNode ) { // Always skip document fragments if ( cur.nodeType < 11 && (pos ? pos.index(cur) > -1 : // Don't pass non-elements to Sizzle cur.nodeType === 1 && jQuery.find.matchesSelector(cur, selectors)) ) { matched.push( cur ); break; } } } return this.pushStack( matched.length > 1 ? jQuery.unique( matched ) : matched ); }, // Determine the position of an element within // the matched set of elements index: function( elem ) { // No argument, return index in parent if ( !elem ) { return ( this[0] && this[0].parentNode ) ? this.first().prevAll().length : -1; } // index in selector if ( typeof elem === "string" ) { return jQuery.inArray( this[0], jQuery( elem ) ); } // Locate the position of the desired element return jQuery.inArray( // If it receives a jQuery object, the first element is used elem.jquery ? elem[0] : elem, this ); }, add: function( selector, context ) { return this.pushStack( jQuery.unique( jQuery.merge( this.get(), jQuery( selector, context ) ) ) ); }, addBack: function( selector ) { return this.add( selector == null ? this.prevObject : this.prevObject.filter(selector) ); } }); function sibling( cur, dir ) { do { cur = cur[ dir ]; } while ( cur && cur.nodeType !== 1 ); return cur; } jQuery.each({ parent: function( elem ) { var parent = elem.parentNode; return parent && parent.nodeType !== 11 ? 
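// Illustrative note (not part of the upstream source): jQuery.dir()
// above walks a single node pointer until it hits a document or an
// `until` match, which is all the *Until traversal methods are:
//
//   jQuery.dir( elem, "parentNode" );            // ancestor elements
//   jQuery.dir( elem, "nextSibling", ".stop" );  // following element
//                                                // siblings before ".stop"
//
// (elem and ".stop" are placeholders.)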
parent : null; }, parents: function( elem ) { return jQuery.dir( elem, "parentNode" ); }, parentsUntil: function( elem, i, until ) { return jQuery.dir( elem, "parentNode", until ); }, next: function( elem ) { return sibling( elem, "nextSibling" ); }, prev: function( elem ) { return sibling( elem, "previousSibling" ); }, nextAll: function( elem ) { return jQuery.dir( elem, "nextSibling" ); }, prevAll: function( elem ) { return jQuery.dir( elem, "previousSibling" ); }, nextUntil: function( elem, i, until ) { return jQuery.dir( elem, "nextSibling", until ); }, prevUntil: function( elem, i, until ) { return jQuery.dir( elem, "previousSibling", until ); }, siblings: function( elem ) { return jQuery.sibling( ( elem.parentNode || {} ).firstChild, elem ); }, children: function( elem ) { return jQuery.sibling( elem.firstChild ); }, contents: function( elem ) { return jQuery.nodeName( elem, "iframe" ) ? elem.contentDocument || elem.contentWindow.document : jQuery.merge( [], elem.childNodes ); } }, function( name, fn ) { jQuery.fn[ name ] = function( until, selector ) { var ret = jQuery.map( this, fn, until ); if ( name.slice( -5 ) !== "Until" ) { selector = until; } if ( selector && typeof selector === "string" ) { ret = jQuery.filter( selector, ret ); } if ( this.length > 1 ) { // Remove duplicates if ( !guaranteedUnique[ name ] ) { ret = jQuery.unique( ret ); } // Reverse order for parents* and prev-derivatives if ( rparentsprev.test( name ) ) { ret = ret.reverse(); } } return this.pushStack( ret ); }; }); var rnotwhite = (/\S+/g); // String to Object options format cache var optionsCache = {}; // Convert String-formatted options into Object-formatted ones and store in cache function createOptions( options ) { var object = optionsCache[ options ] = {}; jQuery.each( options.match( rnotwhite ) || [], function( _, flag ) { object[ flag ] = true; }); return object; } /* * Create a callback list using the following parameters: * * options: an optional list of space-separated options that will change how * the callback list behaves or a more traditional option object * * By default a callback list will act like an event callback list and can be * "fired" multiple times. * * Possible options: * * once: will ensure the callback list can only be fired once (like a Deferred) * * memory: will keep track of previous values and will call any callback added * after the list has been fired right away with the latest "memorized" * values (like a Deferred) * * unique: will ensure a callback can only be added once (no duplicate in the list) * * stopOnFalse: interrupt callings when a callback returns false * */ jQuery.Callbacks = function( options ) { // Convert options from String-formatted to Object-formatted if needed // (we check in cache first) options = typeof options === "string" ? 
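	// A minimal usage sketch of the flags documented above (illustrative only,
	// not part of the library):
	//   var list = jQuery.Callbacks( "once memory" );
	//   list.add(function( v ) { console.log( "first:", v ); });
	//   list.fire( 42 );                                         // logs "first: 42"
	//   list.add(function( v ) { console.log( "late:", v ); });  // "memory" replays: logs "late: 42"
	//   list.fire( 7 );                                          // ignored: "once" already fired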
( optionsCache[ options ] || createOptions( options ) ) : jQuery.extend( {}, options ); var // Flag to know if list is currently firing firing, // Last fire value (for non-forgettable lists) memory, // Flag to know if list was already fired fired, // End of the loop when firing firingLength, // Index of currently firing callback (modified by remove if needed) firingIndex, // First callback to fire (used internally by add and fireWith) firingStart, // Actual callback list list = [], // Stack of fire calls for repeatable lists stack = !options.once && [], // Fire callbacks fire = function( data ) { memory = options.memory && data; fired = true; firingIndex = firingStart || 0; firingStart = 0; firingLength = list.length; firing = true; for ( ; list && firingIndex < firingLength; firingIndex++ ) { if ( list[ firingIndex ].apply( data[ 0 ], data[ 1 ] ) === false && options.stopOnFalse ) { memory = false; // To prevent further calls using add break; } } firing = false; if ( list ) { if ( stack ) { if ( stack.length ) { fire( stack.shift() ); } } else if ( memory ) { list = []; } else { self.disable(); } } }, // Actual Callbacks object self = { // Add a callback or a collection of callbacks to the list add: function() { if ( list ) { // First, we save the current length var start = list.length; (function add( args ) { jQuery.each( args, function( _, arg ) { var type = jQuery.type( arg ); if ( type === "function" ) { if ( !options.unique || !self.has( arg ) ) { list.push( arg ); } } else if ( arg && arg.length && type !== "string" ) { // Inspect recursively add( arg ); } }); })( arguments ); // Do we need to add the callbacks to the // current firing batch? if ( firing ) { firingLength = list.length; // With memory, if we're not firing then // we should call right away } else if ( memory ) { firingStart = start; fire( memory ); } } return this; }, // Remove a callback from the list remove: function() { if ( list ) { jQuery.each( arguments, function( _, arg ) { var index; while ( ( index = jQuery.inArray( arg, list, index ) ) > -1 ) { list.splice( index, 1 ); // Handle firing indexes if ( firing ) { if ( index <= firingLength ) { firingLength--; } if ( index <= firingIndex ) { firingIndex--; } } } }); } return this; }, // Check if a given callback is in the list. // If no argument is given, return whether or not list has callbacks attached. has: function( fn ) { return fn ? jQuery.inArray( fn, list ) > -1 : !!( list && list.length ); }, // Remove all callbacks from the list empty: function() { list = []; firingLength = 0; return this; }, // Have the list do nothing anymore disable: function() { list = stack = memory = undefined; return this; }, // Is it disabled? disabled: function() { return !list; }, // Lock the list in its current state lock: function() { stack = undefined; if ( !memory ) { self.disable(); } return this; }, // Is it locked? locked: function() { return !stack; }, // Call all callbacks with the given context and arguments fireWith: function( context, args ) { if ( list && ( !fired || stack ) ) { args = args || []; args = [ context, args.slice ? 
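				// Real arrays are defensively copied here so a queued fire sees a
				// stable snapshot even if the caller mutates the array afterwards;
				// array-like objects without .slice pass through unchanged.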
args.slice() : args ]; if ( firing ) { stack.push( args ); } else { fire( args ); } } return this; }, // Call all the callbacks with the given arguments fire: function() { self.fireWith( this, arguments ); return this; }, // To know if the callbacks have already been called at least once fired: function() { return !!fired; } }; return self; }; jQuery.extend({ Deferred: function( func ) { var tuples = [ // action, add listener, listener list, final state [ "resolve", "done", jQuery.Callbacks("once memory"), "resolved" ], [ "reject", "fail", jQuery.Callbacks("once memory"), "rejected" ], [ "notify", "progress", jQuery.Callbacks("memory") ] ], state = "pending", promise = { state: function() { return state; }, always: function() { deferred.done( arguments ).fail( arguments ); return this; }, then: function( /* fnDone, fnFail, fnProgress */ ) { var fns = arguments; return jQuery.Deferred(function( newDefer ) { jQuery.each( tuples, function( i, tuple ) { var fn = jQuery.isFunction( fns[ i ] ) && fns[ i ]; // deferred[ done | fail | progress ] for forwarding actions to newDefer deferred[ tuple[1] ](function() { var returned = fn && fn.apply( this, arguments ); if ( returned && jQuery.isFunction( returned.promise ) ) { returned.promise() .done( newDefer.resolve ) .fail( newDefer.reject ) .progress( newDefer.notify ); } else { newDefer[ tuple[ 0 ] + "With" ]( this === promise ? newDefer.promise() : this, fn ? [ returned ] : arguments ); } }); }); fns = null; }).promise(); }, // Get a promise for this deferred // If obj is provided, the promise aspect is added to the object promise: function( obj ) { return obj != null ? jQuery.extend( obj, promise ) : promise; } }, deferred = {}; // Keep pipe for back-compat promise.pipe = promise.then; // Add list-specific methods jQuery.each( tuples, function( i, tuple ) { var list = tuple[ 2 ], stateString = tuple[ 3 ]; // promise[ done | fail | progress ] = list.add promise[ tuple[1] ] = list.add; // Handle state if ( stateString ) { list.add(function() { // state = [ resolved | rejected ] state = stateString; // [ reject_list | resolve_list ].disable; progress_list.lock }, tuples[ i ^ 1 ][ 2 ].disable, tuples[ 2 ][ 2 ].lock ); } // deferred[ resolve | reject | notify ] deferred[ tuple[0] ] = function() { deferred[ tuple[0] + "With" ]( this === deferred ? promise : this, arguments ); return this; }; deferred[ tuple[0] + "With" ] = list.fireWith; }); // Make the deferred a promise promise.promise( deferred ); // Call given func if any if ( func ) { func.call( deferred, deferred ); } // All done! return deferred; }, // Deferred helper when: function( subordinate /* , ..., subordinateN */ ) { var i = 0, resolveValues = slice.call( arguments ), length = resolveValues.length, // the count of uncompleted subordinates remaining = length !== 1 || ( subordinate && jQuery.isFunction( subordinate.promise ) ) ? length : 0, // the master Deferred. If resolveValues consist of only a single Deferred, just use that. deferred = remaining === 1 ? subordinate : jQuery.Deferred(), // Update function for both resolve and progress values updateFunc = function( i, contexts, values ) { return function( value ) { contexts[ i ] = this; values[ i ] = arguments.length > 1 ? 
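				// A subordinate that resolves/notifies with several values keeps them
				// all (as an array); a single value is stored as-is. Illustrative
				// usage of the aggregate (a sketch, assuming deferreds d1 and d2):
				//   jQuery.when( d1, d2 ).done(function( v1, v2 ) { /* one slot each */ });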
slice.call( arguments ) : value; if ( values === progressValues ) { deferred.notifyWith( contexts, values ); } else if ( !(--remaining) ) { deferred.resolveWith( contexts, values ); } }; }, progressValues, progressContexts, resolveContexts; // add listeners to Deferred subordinates; treat others as resolved if ( length > 1 ) { progressValues = new Array( length ); progressContexts = new Array( length ); resolveContexts = new Array( length ); for ( ; i < length; i++ ) { if ( resolveValues[ i ] && jQuery.isFunction( resolveValues[ i ].promise ) ) { resolveValues[ i ].promise() .done( updateFunc( i, resolveContexts, resolveValues ) ) .fail( deferred.reject ) .progress( updateFunc( i, progressContexts, progressValues ) ); } else { --remaining; } } } // if we're not waiting on anything, resolve the master if ( !remaining ) { deferred.resolveWith( resolveContexts, resolveValues ); } return deferred.promise(); } }); // The deferred used on DOM ready var readyList; jQuery.fn.ready = function( fn ) { // Add the callback jQuery.ready.promise().done( fn ); return this; }; jQuery.extend({ // Is the DOM ready to be used? Set to true once it occurs. isReady: false, // A counter to track how many items to wait for before // the ready event fires. See #6781 readyWait: 1, // Hold (or release) the ready event holdReady: function( hold ) { if ( hold ) { jQuery.readyWait++; } else { jQuery.ready( true ); } }, // Handle when the DOM is ready ready: function( wait ) { // Abort if there are pending holds or we're already ready if ( wait === true ? --jQuery.readyWait : jQuery.isReady ) { return; } // Make sure body exists, at least, in case IE gets a little overzealous (ticket #5443). if ( !document.body ) { return setTimeout( jQuery.ready ); } // Remember that the DOM is ready jQuery.isReady = true; // If a normal DOM Ready event fired, decrement, and wait if need be if ( wait !== true && --jQuery.readyWait > 0 ) { return; } // If there are functions bound, to execute readyList.resolveWith( document, [ jQuery ] ); // Trigger any bound ready events if ( jQuery.fn.triggerHandler ) { jQuery( document ).triggerHandler( "ready" ); jQuery( document ).off( "ready" ); } } }); /** * Clean-up method for dom ready events */ function detach() { if ( document.addEventListener ) { document.removeEventListener( "DOMContentLoaded", completed, false ); window.removeEventListener( "load", completed, false ); } else { document.detachEvent( "onreadystatechange", completed ); window.detachEvent( "onload", completed ); } } /** * The ready event handler and self cleanup method */ function completed() { // readyState === "complete" is good enough for us to call the dom ready in oldIE if ( document.addEventListener || event.type === "load" || document.readyState === "complete" ) { detach(); jQuery.ready(); } } jQuery.ready.promise = function( obj ) { if ( !readyList ) { readyList = jQuery.Deferred(); // Catch cases where $(document).ready() is called after the browser event has already occurred. 
// we once tried to use readyState "interactive" here, but it caused issues like the one // discovered by ChrisS here: http://bugs.jquery.com/ticket/12282#comment:15 if ( document.readyState === "complete" ) { // Handle it asynchronously to allow scripts the opportunity to delay ready setTimeout( jQuery.ready ); // Standards-based browsers support DOMContentLoaded } else if ( document.addEventListener ) { // Use the handy event callback document.addEventListener( "DOMContentLoaded", completed, false ); // A fallback to window.onload, that will always work window.addEventListener( "load", completed, false ); // If IE event model is used } else { // Ensure firing before onload, maybe late but safe also for iframes document.attachEvent( "onreadystatechange", completed ); // A fallback to window.onload, that will always work window.attachEvent( "onload", completed ); // If IE and not a frame // continually check to see if the document is ready var top = false; try { top = window.frameElement == null && document.documentElement; } catch(e) {} if ( top && top.doScroll ) { (function doScrollCheck() { if ( !jQuery.isReady ) { try { // Use the trick by Diego Perini // http://javascript.nwbox.com/IEContentLoaded/ top.doScroll("left"); } catch(e) { return setTimeout( doScrollCheck, 50 ); } // detach all dom ready events detach(); // and execute any waiting functions jQuery.ready(); } })(); } } } return readyList.promise( obj ); }; var strundefined = typeof undefined; // Support: IE<9 // Iteration over object's inherited properties before its own var i; for ( i in jQuery( support ) ) { break; } support.ownLast = i !== "0"; // Note: most support tests are defined in their respective modules. // false until the test is run support.inlineBlockNeedsLayout = false; // Execute ASAP in case we need to set body.style.zoom jQuery(function() { // Minified: var a,b,c,d var val, div, body, container; body = document.getElementsByTagName( "body" )[ 0 ]; if ( !body || !body.style ) { // Return for frameset docs that don't have a body return; } // Setup div = document.createElement( "div" ); container = document.createElement( "div" ); container.style.cssText = "position:absolute;border:0;width:0;height:0;top:0;left:-9999px"; body.appendChild( container ).appendChild( div ); if ( typeof div.style.zoom !== strundefined ) { // Support: IE<8 // Check if natively block-level elements act like inline-block // elements when setting their display to 'inline' and giving // them layout div.style.cssText = "display:inline;margin:0;border:0;padding:1px;width:1px;zoom:1"; support.inlineBlockNeedsLayout = val = div.offsetWidth === 3; if ( val ) { // Prevent IE 6 from affecting layout for positioned elements #11048 // Prevent IE from shrinking the body in IE 7 mode #12869 // Support: IE<8 body.style.zoom = 1; } } body.removeChild( container ); }); (function() { var div = document.createElement( "div" ); // Execute the test only if not already executed in another module. if (support.deleteExpando == null) { // Support: IE<9 support.deleteExpando = true; try { delete div.test; } catch( e ) { support.deleteExpando = false; } } // Null elements to avoid leaks in IE. div = null; })(); /** * Determines whether an object can have data */ jQuery.acceptData = function( elem ) { var noData = jQuery.noData[ (elem.nodeName + " ").toLowerCase() ], nodeType = +elem.nodeType || 1; // Do not set data on non-element DOM nodes because it will not be cleared (#8335). return nodeType !== 1 && nodeType !== 9 ? 
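	// Only element (1) and document (9) nodes may carry data; e.g. (illustrative)
	// jQuery.acceptData( document.createTextNode( "x" ) ) is false, which is why
	// .data() is a no-op on text nodes.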
false : // Nodes accept data unless otherwise specified; rejection can be conditional !noData || noData !== true && elem.getAttribute("classid") === noData; }; var rbrace = /^(?:\{[\w\W]*\}|\[[\w\W]*\])$/, rmultiDash = /([A-Z])/g; function dataAttr( elem, key, data ) { // If nothing was found internally, try to fetch any // data from the HTML5 data-* attribute if ( data === undefined && elem.nodeType === 1 ) { var name = "data-" + key.replace( rmultiDash, "-$1" ).toLowerCase(); data = elem.getAttribute( name ); if ( typeof data === "string" ) { try { data = data === "true" ? true : data === "false" ? false : data === "null" ? null : // Only convert to a number if it doesn't change the string +data + "" === data ? +data : rbrace.test( data ) ? jQuery.parseJSON( data ) : data; } catch( e ) {} // Make sure we set the data so it isn't changed later jQuery.data( elem, key, data ); } else { data = undefined; } } return data; } // checks a cache object for emptiness function isEmptyDataObject( obj ) { var name; for ( name in obj ) { // if the public data object is empty, the private is still empty if ( name === "data" && jQuery.isEmptyObject( obj[name] ) ) { continue; } if ( name !== "toJSON" ) { return false; } } return true; } function internalData( elem, name, data, pvt /* Internal Use Only */ ) { if ( !jQuery.acceptData( elem ) ) { return; } var ret, thisCache, internalKey = jQuery.expando, // We have to handle DOM nodes and JS objects differently because IE6-7 // can't GC object references properly across the DOM-JS boundary isNode = elem.nodeType, // Only DOM nodes need the global jQuery cache; JS object data is // attached directly to the object so GC can occur automatically cache = isNode ? jQuery.cache : elem, // Only defining an ID for JS objects if its cache already exists allows // the code to shortcut on the same path as a DOM node with no cache id = isNode ? elem[ internalKey ] : elem[ internalKey ] && internalKey; // Avoid doing any more work than we need to when trying to get data on an // object that has no data at all if ( (!id || !cache[id] || (!pvt && !cache[id].data)) && data === undefined && typeof name === "string" ) { return; } if ( !id ) { // Only DOM nodes need a new unique ID for each element since their data // ends up in the global cache if ( isNode ) { id = elem[ internalKey ] = deletedIds.pop() || jQuery.guid++; } else { id = internalKey; } } if ( !cache[ id ] ) { // Avoid exposing jQuery metadata on plain JS objects when the object // is serialized using JSON.stringify cache[ id ] = isNode ? {} : { toJSON: jQuery.noop }; } // An object can be passed to jQuery.data instead of a key/value pair; this gets // shallow copied over onto the existing cache if ( typeof name === "object" || typeof name === "function" ) { if ( pvt ) { cache[ id ] = jQuery.extend( cache[ id ], name ); } else { cache[ id ].data = jQuery.extend( cache[ id ].data, name ); } } thisCache = cache[ id ]; // jQuery data() is stored in a separate object inside the object's internal data // cache in order to avoid key collisions between internal data and user-defined // data. 
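	// Rough sketch of the public/private split (illustrative only), for a DOM
	// element el:
	//   jQuery.data( el, "role", "admin" );  // public; readable via jQuery( el ).data( "role" )
	//   jQuery._data( el, "events" );        // private; internal bookkeeping (bound handlers etc.)
	// Both live under the same jQuery.cache entry, but public values sit in the
	// nested .data object set up just below, so the two kinds of keys cannot collide.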
if ( !pvt ) { if ( !thisCache.data ) { thisCache.data = {}; } thisCache = thisCache.data; } if ( data !== undefined ) { thisCache[ jQuery.camelCase( name ) ] = data; } // Check for both converted-to-camel and non-converted data property names // If a data property was specified if ( typeof name === "string" ) { // First Try to find as-is property data ret = thisCache[ name ]; // Test for null|undefined property data if ( ret == null ) { // Try to find the camelCased property ret = thisCache[ jQuery.camelCase( name ) ]; } } else { ret = thisCache; } return ret; } function internalRemoveData( elem, name, pvt ) { if ( !jQuery.acceptData( elem ) ) { return; } var thisCache, i, isNode = elem.nodeType, // See jQuery.data for more information cache = isNode ? jQuery.cache : elem, id = isNode ? elem[ jQuery.expando ] : jQuery.expando; // If there is already no cache entry for this object, there is no // purpose in continuing if ( !cache[ id ] ) { return; } if ( name ) { thisCache = pvt ? cache[ id ] : cache[ id ].data; if ( thisCache ) { // Support array or space separated string names for data keys if ( !jQuery.isArray( name ) ) { // try the string as a key before any manipulation if ( name in thisCache ) { name = [ name ]; } else { // split the camel cased version by spaces unless a key with the spaces exists name = jQuery.camelCase( name ); if ( name in thisCache ) { name = [ name ]; } else { name = name.split(" "); } } } else { // If "name" is an array of keys... // When data is initially created, via ("key", "val") signature, // keys will be converted to camelCase. // Since there is no way to tell _how_ a key was added, remove // both plain key and camelCase key. #12786 // This will only penalize the array argument path. name = name.concat( jQuery.map( name, jQuery.camelCase ) ); } i = name.length; while ( i-- ) { delete thisCache[ name[i] ]; } // If there is no data left in the cache, we want to continue // and let the cache object itself get destroyed if ( pvt ? !isEmptyDataObject(thisCache) : !jQuery.isEmptyObject(thisCache) ) { return; } } } // See jQuery.data for more information if ( !pvt ) { delete cache[ id ].data; // Don't destroy the parent cache unless the internal data object // had been the only thing left in it if ( !isEmptyDataObject( cache[ id ] ) ) { return; } } // Destroy the cache if ( isNode ) { jQuery.cleanData( [ elem ], true ); // Use delete when supported for expandos or `cache` is not a window per isWindow (#10080) /* jshint eqeqeq: false */ } else if ( support.deleteExpando || cache != cache.window ) { /* jshint eqeqeq: true */ delete cache[ id ]; // When all else fails, null } else { cache[ id ] = null; } } jQuery.extend({ cache: {}, // The following elements (space-suffixed to avoid Object.prototype collisions) // throw uncatchable exceptions if you attempt to set expando properties noData: { "applet ": true, "embed ": true, // ...but Flash objects (which have this classid) *can* handle expandos "object ": "clsid:D27CDB6E-AE6D-11cf-96B8-444553540000" }, hasData: function( elem ) { elem = elem.nodeType ? jQuery.cache[ elem[jQuery.expando] ] : elem[ jQuery.expando ]; return !!elem && !isEmptyDataObject( elem ); }, data: function( elem, name, data ) { return internalData( elem, name, data ); }, removeData: function( elem, name ) { return internalRemoveData( elem, name ); }, // For internal use only. 
_data: function( elem, name, data ) {
		return internalData( elem, name, data, true );
	},

	_removeData: function( elem, name ) {
		return internalRemoveData( elem, name, true );
	}
});

jQuery.fn.extend({
	data: function( key, value ) {
		var i, name, data,
			elem = this[0],
			attrs = elem && elem.attributes;

		// Special exceptions of .data basically thwart jQuery.access,
		// so implement the relevant behavior ourselves

		// Gets all values
		if ( key === undefined ) {
			if ( this.length ) {
				data = jQuery.data( elem );

				if ( elem.nodeType === 1 && !jQuery._data( elem, "parsedAttrs" ) ) {
					i = attrs.length;
					while ( i-- ) {

						// Support: IE11+
						// The attrs elements can be null (#14894)
						if ( attrs[ i ] ) {
							name = attrs[ i ].name;
							if ( name.indexOf( "data-" ) === 0 ) {
								name = jQuery.camelCase( name.slice(5) );
								dataAttr( elem, name, data[ name ] );
							}
						}
					}
					jQuery._data( elem, "parsedAttrs", true );
				}
			}

			return data;
		}

		// Sets multiple values
		if ( typeof key === "object" ) {
			return this.each(function() {
				jQuery.data( this, key );
			});
		}

		return arguments.length > 1 ?

			// Sets one value
			this.each(function() {
				jQuery.data( this, key, value );
			}) :

			// Gets one value
			// Try to fetch any internally stored data first
			elem ? dataAttr( elem, key, jQuery.data( elem, key ) ) : undefined;
	},

	removeData: function( key ) {
		return this.each(function() {
			jQuery.removeData( this, key );
		});
	}
});


jQuery.extend({
	queue: function( elem, type, data ) {
		var queue;

		if ( elem ) {
			type = ( type || "fx" ) + "queue";
			queue = jQuery._data( elem, type );

			// Speed up dequeue by getting out quickly if this is just a lookup
			if ( data ) {
				if ( !queue || jQuery.isArray(data) ) {
					queue = jQuery._data( elem, type, jQuery.makeArray(data) );
				} else {
					queue.push( data );
				}
			}
			return queue || [];
		}
	},

	dequeue: function( elem, type ) {
		type = type || "fx";

		var queue = jQuery.queue( elem, type ),
			startLength = queue.length,
			fn = queue.shift(),
			hooks = jQuery._queueHooks( elem, type ),
			next = function() {
				jQuery.dequeue( elem, type );
			};

		// If the fx queue is dequeued, always remove the progress sentinel
		if ( fn === "inprogress" ) {
			fn = queue.shift();
			startLength--;
		}

		if ( fn ) {

			// Add a progress sentinel to prevent the fx queue from being
			// automatically dequeued
			if ( type === "fx" ) {
				queue.unshift( "inprogress" );
			}

			// clear up the last queue stop function
			delete hooks.stop;
			fn.call( elem, next, hooks );
		}

		if ( !startLength && hooks ) {
			hooks.empty.fire();
		}
	},

	// not intended for public consumption - generates a queueHooks object, or returns the current one
	_queueHooks: function( elem, type ) {
		var key = type + "queueHooks";
		return jQuery._data( elem, key ) || jQuery._data( elem, key, {
			empty: jQuery.Callbacks("once memory").add(function() {
				jQuery._removeData( elem, type + "queue" );
				jQuery._removeData( elem, key );
			})
		});
	}
});

jQuery.fn.extend({
	queue: function( type, data ) {
		var setter = 2;

		if ( typeof type !== "string" ) {
			data = type;
			type = "fx";
			setter--;
		}

		if ( arguments.length < setter ) {
			return jQuery.queue( this[0], type );
		}

		return data === undefined ?
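			// Getter when called without data; setter otherwise. Illustrative usage
			// (a sketch, not part of the library; doWork is hypothetical):
			//   jQuery( "#box" ).queue(function( next ) { doWork(); next(); });  // "fx" queue
			//   jQuery( "#box" ).queue( "fx" ).length;                           // inspect it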
			this :
			this.each(function() {
				var queue = jQuery.queue( this, type, data );

				// ensure hooks exist for this queue
				jQuery._queueHooks( this, type );

				if ( type === "fx" && queue[0] !== "inprogress" ) {
					jQuery.dequeue( this, type );
				}
			});
	},
	dequeue: function( type ) {
		return this.each(function() {
			jQuery.dequeue( this, type );
		});
	},
	clearQueue: function( type ) {
		return this.queue( type || "fx", [] );
	},
	// Get a promise resolved when queues of a certain type
	// are emptied (fx is the type by default)
	promise: function( type, obj ) {
		var tmp,
			count = 1,
			defer = jQuery.Deferred(),
			elements = this,
			i = this.length,
			resolve = function() {
				if ( !( --count ) ) {
					defer.resolveWith( elements, [ elements ] );
				}
			};

		if ( typeof type !== "string" ) {
			obj = type;
			type = undefined;
		}
		type = type || "fx";

		while ( i-- ) {
			tmp = jQuery._data( elements[ i ], type + "queueHooks" );
			if ( tmp && tmp.empty ) {
				count++;
				tmp.empty.add( resolve );
			}
		}
		resolve();
		return defer.promise( obj );
	}
});
var pnum = (/[+-]?(?:\d*\.|)\d+(?:[eE][+-]?\d+|)/).source;

var cssExpand = [ "Top", "Right", "Bottom", "Left" ];

var isHidden = function( elem, el ) {
		// isHidden might be called from jQuery#filter function;
		// in that case, element will be second argument
		elem = el || elem;
		return jQuery.css( elem, "display" ) === "none" || !jQuery.contains( elem.ownerDocument, elem );
	};



// Multifunctional method to get and set values of a collection
// The value(s) can optionally be executed if it's a function
var access = jQuery.access = function( elems, fn, key, value, chainable, emptyGet, raw ) {
	var i = 0,
		length = elems.length,
		bulk = key == null;

	// Sets many values
	if ( jQuery.type( key ) === "object" ) {
		chainable = true;
		for ( i in key ) {
			jQuery.access( elems, fn, i, key[i], true, emptyGet, raw );
		}

	// Sets one value
	} else if ( value !== undefined ) {
		chainable = true;

		if ( !jQuery.isFunction( value ) ) {
			raw = true;
		}

		if ( bulk ) {
			// Bulk operations run against the entire set
			if ( raw ) {
				fn.call( elems, value );
				fn = null;

			// ...except when executing function values
			} else {
				bulk = fn;
				fn = function( elem, key, value ) {
					return bulk.call( jQuery( elem ), value );
				};
			}
		}

		if ( fn ) {
			for ( ; i < length; i++ ) {
				fn( elems[i], key, raw ? value : value.call( elems[i], i, fn( elems[i], key ) ) );
			}
		}
	}

	return chainable ?
		elems :

		// Gets
		bulk ?
			fn.call( elems ) :
			length ? fn( elems[0], key ) : emptyGet;
};
var rcheckableType = (/^(?:checkbox|radio)$/i);

(function() {
	// Minified: var a,b,c
	var input = document.createElement( "input" ),
		div = document.createElement( "div" ),
		fragment = document.createDocumentFragment();

	// Setup
	div.innerHTML = "  <link/><table></table><a href='/a'>a</a><input type='checkbox'/>";

	// IE strips leading whitespace when .innerHTML is used
	support.leadingWhitespace = div.firstChild.nodeType === 3;

	// Make sure that tbody elements aren't automatically inserted
	// IE will insert them into empty tables
	support.tbody = !div.getElementsByTagName( "tbody" ).length;

	// Make sure that link elements get serialized correctly by innerHTML
	// This requires a wrapper element in IE
	support.htmlSerialize = !!div.getElementsByTagName( "link" ).length;

	// Makes sure cloning an html5 element does not cause problems
	// Where outerHTML is undefined, this still works
	support.html5Clone =
		document.createElement( "nav" ).cloneNode( true ).outerHTML !== "<:nav>";

	// Check if a disconnected checkbox will retain its checked
	// value of true after appended to the DOM (IE6/7)
	input.type = "checkbox";
	input.checked = true;
	fragment.appendChild( input );
	support.appendChecked = input.checked;

	// Make sure textarea (and checkbox) defaultValue is properly cloned
	// Support: IE6-IE11+
	div.innerHTML = "<textarea>x</textarea>";
	support.noCloneChecked = !!div.cloneNode( true ).lastChild.defaultValue;

	// #11217 - WebKit loses check when the name is after the checked attribute
	fragment.appendChild( div );
	div.innerHTML = "<input type='radio' checked='checked' name='t'/>";

	// Support: Safari 5.1, iOS 5.1, Android 4.x, Android 2.3
	// old WebKit doesn't clone checked state correctly in fragments
	support.checkClone = div.cloneNode( true ).cloneNode( true ).lastChild.checked;

	// Support: IE<9
	// Opera does not clone events (and typeof div.attachEvent === undefined).
	// IE9-10 clones events bound via attachEvent, but they don't trigger with .click()
	support.noCloneEvent = true;
	if ( div.attachEvent ) {
		div.attachEvent( "onclick", function() {
			support.noCloneEvent = false;
		});

		div.cloneNode( true ).click();
	}

	// Execute the test only if not already executed in another module.
	if (support.deleteExpando == null) {
		// Support: IE<9
		support.deleteExpando = true;
		try {
			delete div.test;
		} catch( e ) {
			support.deleteExpando = false;
		}
	}
})();


(function() {
	var i, eventName,
		div = document.createElement( "div" );

	// Support: IE<9 (lack submit/change bubble), Firefox 23+ (lack focusin event)
	for ( i in { submit: true, change: true, focusin: true }) {
		eventName = "on" + i;

		if ( !(support[ i + "Bubbles" ] = eventName in window) ) {
			// Beware of CSP restrictions (https://developer.mozilla.org/en/Security/CSP)
			div.setAttribute( eventName, "t" );
			support[ i + "Bubbles" ] = div.attributes[ eventName ].expando === false;
		}
	}

	// Null elements to avoid leaks in IE.
	div = null;
})();


var rformElems = /^(?:input|select|textarea)$/i,
	rkeyEvent = /^key/,
	rmouseEvent = /^(?:mouse|pointer|contextmenu)|click/,
	rfocusMorph = /^(?:focusinfocus|focusoutblur)$/,
	rtypenamespace = /^([^.]*)(?:\.(.+)|)$/;

function returnTrue() {
	return true;
}

function returnFalse() {
	return false;
}

function safeActiveElement() {
	try {
		return document.activeElement;
	} catch ( err ) { }
}

/*
 * Helper functions for managing events -- not part of the public interface.
 * Props to Dean Edwards' addEvent library for many of the ideas.
*/ jQuery.event = { global: {}, add: function( elem, types, handler, data, selector ) { var tmp, events, t, handleObjIn, special, eventHandle, handleObj, handlers, type, namespaces, origType, elemData = jQuery._data( elem ); // Don't attach events to noData or text/comment nodes (but allow plain objects) if ( !elemData ) { return; } // Caller can pass in an object of custom data in lieu of the handler if ( handler.handler ) { handleObjIn = handler; handler = handleObjIn.handler; selector = handleObjIn.selector; } // Make sure that the handler has a unique ID, used to find/remove it later if ( !handler.guid ) { handler.guid = jQuery.guid++; } // Init the element's event structure and main handler, if this is the first if ( !(events = elemData.events) ) { events = elemData.events = {}; } if ( !(eventHandle = elemData.handle) ) { eventHandle = elemData.handle = function( e ) { // Discard the second event of a jQuery.event.trigger() and // when an event is called after a page has unloaded return typeof jQuery !== strundefined && (!e || jQuery.event.triggered !== e.type) ? jQuery.event.dispatch.apply( eventHandle.elem, arguments ) : undefined; }; // Add elem as a property of the handle fn to prevent a memory leak with IE non-native events eventHandle.elem = elem; } // Handle multiple events separated by a space types = ( types || "" ).match( rnotwhite ) || [ "" ]; t = types.length; while ( t-- ) { tmp = rtypenamespace.exec( types[t] ) || []; type = origType = tmp[1]; namespaces = ( tmp[2] || "" ).split( "." ).sort(); // There *must* be a type, no attaching namespace-only handlers if ( !type ) { continue; } // If event changes its type, use the special event handlers for the changed type special = jQuery.event.special[ type ] || {}; // If selector defined, determine special event api type, otherwise given type type = ( selector ? 
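			// Delegated bindings listen for special.delegateType (e.g. "focus" is
			// mapped to the bubbling "focusin") so the handler can actually fire on
			// an ancestor; direct bindings use bindType.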
special.delegateType : special.bindType ) || type; // Update special based on newly reset type special = jQuery.event.special[ type ] || {}; // handleObj is passed to all event handlers handleObj = jQuery.extend({ type: type, origType: origType, data: data, handler: handler, guid: handler.guid, selector: selector, needsContext: selector && jQuery.expr.match.needsContext.test( selector ), namespace: namespaces.join(".") }, handleObjIn ); // Init the event handler queue if we're the first if ( !(handlers = events[ type ]) ) { handlers = events[ type ] = []; handlers.delegateCount = 0; // Only use addEventListener/attachEvent if the special events handler returns false if ( !special.setup || special.setup.call( elem, data, namespaces, eventHandle ) === false ) { // Bind the global event handler to the element if ( elem.addEventListener ) { elem.addEventListener( type, eventHandle, false ); } else if ( elem.attachEvent ) { elem.attachEvent( "on" + type, eventHandle ); } } } if ( special.add ) { special.add.call( elem, handleObj ); if ( !handleObj.handler.guid ) { handleObj.handler.guid = handler.guid; } } // Add to the element's handler list, delegates in front if ( selector ) { handlers.splice( handlers.delegateCount++, 0, handleObj ); } else { handlers.push( handleObj ); } // Keep track of which events have ever been used, for event optimization jQuery.event.global[ type ] = true; } // Nullify elem to prevent memory leaks in IE elem = null; }, // Detach an event or set of events from an element remove: function( elem, types, handler, selector, mappedTypes ) { var j, handleObj, tmp, origCount, t, events, special, handlers, type, namespaces, origType, elemData = jQuery.hasData( elem ) && jQuery._data( elem ); if ( !elemData || !(events = elemData.events) ) { return; } // Once for each type.namespace in types; type may be omitted types = ( types || "" ).match( rnotwhite ) || [ "" ]; t = types.length; while ( t-- ) { tmp = rtypenamespace.exec( types[t] ) || []; type = origType = tmp[1]; namespaces = ( tmp[2] || "" ).split( "." ).sort(); // Unbind all events (on this namespace, if provided) for the element if ( !type ) { for ( type in events ) { jQuery.event.remove( elem, type + types[ t ], handler, selector, true ); } continue; } special = jQuery.event.special[ type ] || {}; type = ( selector ? 
special.delegateType : special.bindType ) || type; handlers = events[ type ] || []; tmp = tmp[2] && new RegExp( "(^|\\.)" + namespaces.join("\\.(?:.*\\.|)") + "(\\.|$)" ); // Remove matching events origCount = j = handlers.length; while ( j-- ) { handleObj = handlers[ j ]; if ( ( mappedTypes || origType === handleObj.origType ) && ( !handler || handler.guid === handleObj.guid ) && ( !tmp || tmp.test( handleObj.namespace ) ) && ( !selector || selector === handleObj.selector || selector === "**" && handleObj.selector ) ) { handlers.splice( j, 1 ); if ( handleObj.selector ) { handlers.delegateCount--; } if ( special.remove ) { special.remove.call( elem, handleObj ); } } } // Remove generic event handler if we removed something and no more handlers exist // (avoids potential for endless recursion during removal of special event handlers) if ( origCount && !handlers.length ) { if ( !special.teardown || special.teardown.call( elem, namespaces, elemData.handle ) === false ) { jQuery.removeEvent( elem, type, elemData.handle ); } delete events[ type ]; } } // Remove the expando if it's no longer used if ( jQuery.isEmptyObject( events ) ) { delete elemData.handle; // removeData also checks for emptiness and clears the expando if empty // so use it instead of delete jQuery._removeData( elem, "events" ); } }, trigger: function( event, data, elem, onlyHandlers ) { var handle, ontype, cur, bubbleType, special, tmp, i, eventPath = [ elem || document ], type = hasOwn.call( event, "type" ) ? event.type : event, namespaces = hasOwn.call( event, "namespace" ) ? event.namespace.split(".") : []; cur = tmp = elem = elem || document; // Don't do events on text and comment nodes if ( elem.nodeType === 3 || elem.nodeType === 8 ) { return; } // focus/blur morphs to focusin/out; ensure we're not firing them right now if ( rfocusMorph.test( type + jQuery.event.triggered ) ) { return; } if ( type.indexOf(".") >= 0 ) { // Namespaced trigger; create a regexp to match event type in handle() namespaces = type.split("."); type = namespaces.shift(); namespaces.sort(); } ontype = type.indexOf(":") < 0 && "on" + type; // Caller can pass in a jQuery.Event object, Object, or just an event type string event = event[ jQuery.expando ] ? event : new jQuery.Event( type, typeof event === "object" && event ); // Trigger bitmask: & 1 for native handlers; & 2 for jQuery (always true) event.isTrigger = onlyHandlers ? 2 : 3; event.namespace = namespaces.join("."); event.namespace_re = event.namespace ? new RegExp( "(^|\\.)" + namespaces.join("\\.(?:.*\\.|)") + "(\\.|$)" ) : null; // Clean up the event in case it is being reused event.result = undefined; if ( !event.target ) { event.target = elem; } // Clone any incoming data and prepend the event, creating the handler arg list data = data == null ? 
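		// The jQuery.Event object is always prepended, so handlers receive
		// ( event, ...extraArgs ). Illustrative only:
		//   jQuery( "#box" ).trigger( "custom", [ "a", "b" ] );  // handler( event, "a", "b" )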
		[ event ] :
		jQuery.makeArray( data, [ event ] );

	// Allow special events to draw outside the lines
	special = jQuery.event.special[ type ] || {};
	if ( !onlyHandlers && special.trigger && special.trigger.apply( elem, data ) === false ) {
		return;
	}

	// Determine event propagation path in advance, per W3C events spec (#9951)
	// Bubble up to document, then to window; watch for a global ownerDocument var (#9724)
	if ( !onlyHandlers && !special.noBubble && !jQuery.isWindow( elem ) ) {

		bubbleType = special.delegateType || type;
		if ( !rfocusMorph.test( bubbleType + type ) ) {
			cur = cur.parentNode;
		}
		for ( ; cur; cur = cur.parentNode ) {
			eventPath.push( cur );
			tmp = cur;
		}

		// Only add window if we got to document (e.g., not plain obj or detached DOM)
		if ( tmp === (elem.ownerDocument || document) ) {
			eventPath.push( tmp.defaultView || tmp.parentWindow || window );
		}
	}

	// Fire handlers on the event path
	i = 0;
	while ( (cur = eventPath[i++]) && !event.isPropagationStopped() ) {

		event.type = i > 1 ?
			bubbleType :
			special.bindType || type;

		// jQuery handler
		handle = ( jQuery._data( cur, "events" ) || {} )[ event.type ] && jQuery._data( cur, "handle" );
		if ( handle ) {
			handle.apply( cur, data );
		}

		// Native handler
		handle = ontype && cur[ ontype ];
		if ( handle && handle.apply && jQuery.acceptData( cur ) ) {
			event.result = handle.apply( cur, data );
			if ( event.result === false ) {
				event.preventDefault();
			}
		}
	}
	event.type = type;

	// If nobody prevented the default action, do it now
	if ( !onlyHandlers && !event.isDefaultPrevented() ) {

		if ( (!special._default || special._default.apply( eventPath.pop(), data ) === false) &&
			jQuery.acceptData( elem ) ) {

			// Call a native DOM method on the target with the same name as the event.
			// Can't use an .isFunction() check here because IE6/7 fails that test.
// Don't do default actions on window, that's where global variables be (#6170) if ( ontype && elem[ type ] && !jQuery.isWindow( elem ) ) { // Don't re-trigger an onFOO event when we call its FOO() method tmp = elem[ ontype ]; if ( tmp ) { elem[ ontype ] = null; } // Prevent re-triggering of the same event, since we already bubbled it above jQuery.event.triggered = type; try { elem[ type ](); } catch ( e ) { // IE<9 dies on focus/blur to hidden element (#1486,#12518) // only reproducible on winXP IE8 native, not IE9 in IE8 mode } jQuery.event.triggered = undefined; if ( tmp ) { elem[ ontype ] = tmp; } } } } return event.result; }, dispatch: function( event ) { // Make a writable jQuery.Event from the native event object event = jQuery.event.fix( event ); var i, ret, handleObj, matched, j, handlerQueue = [], args = slice.call( arguments ), handlers = ( jQuery._data( this, "events" ) || {} )[ event.type ] || [], special = jQuery.event.special[ event.type ] || {}; // Use the fix-ed jQuery.Event rather than the (read-only) native event args[0] = event; event.delegateTarget = this; // Call the preDispatch hook for the mapped type, and let it bail if desired if ( special.preDispatch && special.preDispatch.call( this, event ) === false ) { return; } // Determine handlers handlerQueue = jQuery.event.handlers.call( this, event, handlers ); // Run delegates first; they may want to stop propagation beneath us i = 0; while ( (matched = handlerQueue[ i++ ]) && !event.isPropagationStopped() ) { event.currentTarget = matched.elem; j = 0; while ( (handleObj = matched.handlers[ j++ ]) && !event.isImmediatePropagationStopped() ) { // Triggered event must either 1) have no namespace, or // 2) have namespace(s) a subset or equal to those in the bound event (both can have no namespace). if ( !event.namespace_re || event.namespace_re.test( handleObj.namespace ) ) { event.handleObj = handleObj; event.data = handleObj.data; ret = ( (jQuery.event.special[ handleObj.origType ] || {}).handle || handleObj.handler ) .apply( matched.elem, args ); if ( ret !== undefined ) { if ( (event.result = ret) === false ) { event.preventDefault(); event.stopPropagation(); } } } } } // Call the postDispatch hook for the mapped type if ( special.postDispatch ) { special.postDispatch.call( this, event ); } return event.result; }, handlers: function( event, handlers ) { var sel, handleObj, matches, i, handlerQueue = [], delegateCount = handlers.delegateCount, cur = event.target; // Find delegate handlers // Black-hole SVG instance trees (#13180) // Avoid non-left-click bubbling in Firefox (#3861) if ( delegateCount && cur.nodeType && (!event.button || event.type !== "click") ) { /* jshint eqeqeq: false */ for ( ; cur != this; cur = cur.parentNode || this ) { /* jshint eqeqeq: true */ // Don't check non-elements (#13208) // Don't process clicks on disabled elements (#6911, #8165, #11382, #11764) if ( cur.nodeType === 1 && (cur.disabled !== true || event.type !== "click") ) { matches = []; for ( i = 0; i < delegateCount; i++ ) { handleObj = handlers[ i ]; // Don't conflict with Object.prototype properties (#13203) sel = handleObj.selector + " "; if ( matches[ sel ] === undefined ) { matches[ sel ] = handleObj.needsContext ? 
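					// Memoize the match result per selector (keyed by sel) so each
					// delegated selector is evaluated at most once for this element
					// on the propagation path.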
jQuery( sel, this ).index( cur ) >= 0 : jQuery.find( sel, this, null, [ cur ] ).length; } if ( matches[ sel ] ) { matches.push( handleObj ); } } if ( matches.length ) { handlerQueue.push({ elem: cur, handlers: matches }); } } } } // Add the remaining (directly-bound) handlers if ( delegateCount < handlers.length ) { handlerQueue.push({ elem: this, handlers: handlers.slice( delegateCount ) }); } return handlerQueue; }, fix: function( event ) { if ( event[ jQuery.expando ] ) { return event; } // Create a writable copy of the event object and normalize some properties var i, prop, copy, type = event.type, originalEvent = event, fixHook = this.fixHooks[ type ]; if ( !fixHook ) { this.fixHooks[ type ] = fixHook = rmouseEvent.test( type ) ? this.mouseHooks : rkeyEvent.test( type ) ? this.keyHooks : {}; } copy = fixHook.props ? this.props.concat( fixHook.props ) : this.props; event = new jQuery.Event( originalEvent ); i = copy.length; while ( i-- ) { prop = copy[ i ]; event[ prop ] = originalEvent[ prop ]; } // Support: IE<9 // Fix target property (#1925) if ( !event.target ) { event.target = originalEvent.srcElement || document; } // Support: Chrome 23+, Safari? // Target should not be a text node (#504, #13143) if ( event.target.nodeType === 3 ) { event.target = event.target.parentNode; } // Support: IE<9 // For mouse/key events, metaKey==false if it's undefined (#3368, #11328) event.metaKey = !!event.metaKey; return fixHook.filter ? fixHook.filter( event, originalEvent ) : event; }, // Includes some event props shared by KeyEvent and MouseEvent props: "altKey bubbles cancelable ctrlKey currentTarget eventPhase metaKey relatedTarget shiftKey target timeStamp view which".split(" "), fixHooks: {}, keyHooks: { props: "char charCode key keyCode".split(" "), filter: function( event, original ) { // Add which for key events if ( event.which == null ) { event.which = original.charCode != null ? original.charCode : original.keyCode; } return event; } }, mouseHooks: { props: "button buttons clientX clientY fromElement offsetX offsetY pageX pageY screenX screenY toElement".split(" "), filter: function( event, original ) { var body, eventDoc, doc, button = original.button, fromElement = original.fromElement; // Calculate pageX/Y if missing and clientX/Y available if ( event.pageX == null && original.clientX != null ) { eventDoc = event.target.ownerDocument || document; doc = eventDoc.documentElement; body = eventDoc.body; event.pageX = original.clientX + ( doc && doc.scrollLeft || body && body.scrollLeft || 0 ) - ( doc && doc.clientLeft || body && body.clientLeft || 0 ); event.pageY = original.clientY + ( doc && doc.scrollTop || body && body.scrollTop || 0 ) - ( doc && doc.clientTop || body && body.clientTop || 0 ); } // Add relatedTarget, if necessary if ( !event.relatedTarget && fromElement ) { event.relatedTarget = fromElement === event.target ? original.toElement : fromElement; } // Add which for click: 1 === left; 2 === middle; 3 === right // Note: button is not normalized, so don't use it if ( !event.which && button !== undefined ) { event.which = ( button & 1 ? 1 : ( button & 2 ? 3 : ( button & 4 ? 
2 : 0 ) ) ); } return event; } }, special: { load: { // Prevent triggered image.load events from bubbling to window.load noBubble: true }, focus: { // Fire native event if possible so blur/focus sequence is correct trigger: function() { if ( this !== safeActiveElement() && this.focus ) { try { this.focus(); return false; } catch ( e ) { // Support: IE<9 // If we error on focus to hidden element (#1486, #12518), // let .trigger() run the handlers } } }, delegateType: "focusin" }, blur: { trigger: function() { if ( this === safeActiveElement() && this.blur ) { this.blur(); return false; } }, delegateType: "focusout" }, click: { // For checkbox, fire native event so checked state will be right trigger: function() { if ( jQuery.nodeName( this, "input" ) && this.type === "checkbox" && this.click ) { this.click(); return false; } }, // For cross-browser consistency, don't fire native .click() on links _default: function( event ) { return jQuery.nodeName( event.target, "a" ); } }, beforeunload: { postDispatch: function( event ) { // Support: Firefox 20+ // Firefox doesn't alert if the returnValue field is not set. if ( event.result !== undefined && event.originalEvent ) { event.originalEvent.returnValue = event.result; } } } }, simulate: function( type, elem, event, bubble ) { // Piggyback on a donor event to simulate a different one. // Fake originalEvent to avoid donor's stopPropagation, but if the // simulated event prevents default then we do the same on the donor. var e = jQuery.extend( new jQuery.Event(), event, { type: type, isSimulated: true, originalEvent: {} } ); if ( bubble ) { jQuery.event.trigger( e, null, elem ); } else { jQuery.event.dispatch.call( elem, e ); } if ( e.isDefaultPrevented() ) { event.preventDefault(); } } }; jQuery.removeEvent = document.removeEventListener ? function( elem, type, handle ) { if ( elem.removeEventListener ) { elem.removeEventListener( type, handle, false ); } } : function( elem, type, handle ) { var name = "on" + type; if ( elem.detachEvent ) { // #8545, #7054, preventing memory leaks for custom events in IE6-8 // detachEvent needed property on element, by name of that event, to properly expose it to GC if ( typeof elem[ name ] === strundefined ) { elem[ name ] = null; } elem.detachEvent( name, handle ); } }; jQuery.Event = function( src, props ) { // Allow instantiation without the 'new' keyword if ( !(this instanceof jQuery.Event) ) { return new jQuery.Event( src, props ); } // Event object if ( src && src.type ) { this.originalEvent = src; this.type = src.type; // Events bubbling up the document may have been marked as prevented // by a handler lower down the tree; reflect the correct value. this.isDefaultPrevented = src.defaultPrevented || src.defaultPrevented === undefined && // Support: IE < 9, Android < 4.0 src.returnValue === false ? 
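	// Illustrative construction of a synthetic event (a sketch, not part of the
	// library):
	//   var e = jQuery.Event( "keydown", { which: 13 } );
	//   jQuery( "input" ).trigger( e );   // handlers observe e.which === 13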
		returnTrue :
			returnFalse;

	// Event type
	} else {
		this.type = src;
	}

	// Put explicitly provided properties onto the event object
	if ( props ) {
		jQuery.extend( this, props );
	}

	// Create a timestamp if incoming event doesn't have one
	this.timeStamp = src && src.timeStamp || jQuery.now();

	// Mark it as fixed
	this[ jQuery.expando ] = true;
};

// jQuery.Event is based on DOM3 Events as specified by the ECMAScript Language Binding
// http://www.w3.org/TR/2003/WD-DOM-Level-3-Events-20030331/ecma-script-binding.html
jQuery.Event.prototype = {
	isDefaultPrevented: returnFalse,
	isPropagationStopped: returnFalse,
	isImmediatePropagationStopped: returnFalse,

	preventDefault: function() {
		var e = this.originalEvent;

		this.isDefaultPrevented = returnTrue;
		if ( !e ) {
			return;
		}

		// If preventDefault exists, run it on the original event
		if ( e.preventDefault ) {
			e.preventDefault();

		// Support: IE
		// Otherwise set the returnValue property of the original event to false
		} else {
			e.returnValue = false;
		}
	},
	stopPropagation: function() {
		var e = this.originalEvent;

		this.isPropagationStopped = returnTrue;
		if ( !e ) {
			return;
		}
		// If stopPropagation exists, run it on the original event
		if ( e.stopPropagation ) {
			e.stopPropagation();
		}

		// Support: IE
		// Set the cancelBubble property of the original event to true
		e.cancelBubble = true;
	},
	stopImmediatePropagation: function() {
		var e = this.originalEvent;

		this.isImmediatePropagationStopped = returnTrue;

		if ( e && e.stopImmediatePropagation ) {
			e.stopImmediatePropagation();
		}

		this.stopPropagation();
	}
};

// Create mouseenter/leave events using mouseover/out and event-time checks
jQuery.each({
	mouseenter: "mouseover",
	mouseleave: "mouseout",
	pointerenter: "pointerover",
	pointerleave: "pointerout"
}, function( orig, fix ) {
	jQuery.event.special[ orig ] = {
		delegateType: fix,
		bindType: fix,

		handle: function( event ) {
			var ret,
				target = this,
				related = event.relatedTarget,
				handleObj = event.handleObj;

			// For mouseenter/leave call the handler if related is outside the target.
			// NB: No relatedTarget if the mouse left/entered the browser window
			if ( !related || (related !== target && !jQuery.contains( target, related )) ) {
				event.type = handleObj.origType;
				ret = handleObj.handler.apply( this, arguments );
				event.type = fix;
			}
			return ret;
		}
	};
});

// IE submit delegation
if ( !support.submitBubbles ) {

	jQuery.event.special.submit = {
		setup: function() {
			// Only need this for delegated form submit events
			if ( jQuery.nodeName( this, "form" ) ) {
				return false;
			}

			// Lazy-add a submit handler when a descendant form may potentially be submitted
			jQuery.event.add( this, "click._submit keypress._submit", function( e ) {
				// Node name check avoids a VML-related crash in IE (#9807)
				var elem = e.target,
					form = jQuery.nodeName( elem, "input" ) || jQuery.nodeName( elem, "button" ) ?
elem.form : undefined; if ( form && !jQuery._data( form, "submitBubbles" ) ) { jQuery.event.add( form, "submit._submit", function( event ) { event._submit_bubble = true; }); jQuery._data( form, "submitBubbles", true ); } }); // return undefined since we don't need an event listener }, postDispatch: function( event ) { // If form was submitted by the user, bubble the event up the tree if ( event._submit_bubble ) { delete event._submit_bubble; if ( this.parentNode && !event.isTrigger ) { jQuery.event.simulate( "submit", this.parentNode, event, true ); } } }, teardown: function() { // Only need this for delegated form submit events if ( jQuery.nodeName( this, "form" ) ) { return false; } // Remove delegated handlers; cleanData eventually reaps submit handlers attached above jQuery.event.remove( this, "._submit" ); } }; } // IE change delegation and checkbox/radio fix if ( !support.changeBubbles ) { jQuery.event.special.change = { setup: function() { if ( rformElems.test( this.nodeName ) ) { // IE doesn't fire change on a check/radio until blur; trigger it on click // after a propertychange. Eat the blur-change in special.change.handle. // This still fires onchange a second time for check/radio after blur. if ( this.type === "checkbox" || this.type === "radio" ) { jQuery.event.add( this, "propertychange._change", function( event ) { if ( event.originalEvent.propertyName === "checked" ) { this._just_changed = true; } }); jQuery.event.add( this, "click._change", function( event ) { if ( this._just_changed && !event.isTrigger ) { this._just_changed = false; } // Allow triggered, simulated change events (#11500) jQuery.event.simulate( "change", this, event, true ); }); } return false; } // Delegated event; lazy-add a change handler on descendant inputs jQuery.event.add( this, "beforeactivate._change", function( e ) { var elem = e.target; if ( rformElems.test( elem.nodeName ) && !jQuery._data( elem, "changeBubbles" ) ) { jQuery.event.add( elem, "change._change", function( event ) { if ( this.parentNode && !event.isSimulated && !event.isTrigger ) { jQuery.event.simulate( "change", this.parentNode, event, true ); } }); jQuery._data( elem, "changeBubbles", true ); } }); }, handle: function( event ) { var elem = event.target; // Swallow native change events from checkbox/radio, we already triggered them above if ( this !== elem || event.isSimulated || event.isTrigger || (elem.type !== "radio" && elem.type !== "checkbox") ) { return event.handleObj.handler.apply( this, arguments ); } }, teardown: function() { jQuery.event.remove( this, "._change" ); return !rformElems.test( this.nodeName ); } }; } // Create "bubbling" focus and blur events if ( !support.focusinBubbles ) { jQuery.each({ focus: "focusin", blur: "focusout" }, function( orig, fix ) { // Attach a single capturing handler on the document while someone wants focusin/focusout var handler = function( event ) { jQuery.event.simulate( fix, event.target, jQuery.event.fix( event ), true ); }; jQuery.event.special[ fix ] = { setup: function() { var doc = this.ownerDocument || this, attaches = jQuery._data( doc, fix ); if ( !attaches ) { doc.addEventListener( orig, handler, true ); } jQuery._data( doc, fix, ( attaches || 0 ) + 1 ); }, teardown: function() { var doc = this.ownerDocument || this, attaches = jQuery._data( doc, fix ) - 1; if ( !attaches ) { doc.removeEventListener( orig, handler, true ); jQuery._removeData( doc, fix ); } else { jQuery._data( doc, fix, attaches ); } } }; }); } jQuery.fn.extend({ on: function( types, selector, data, fn, 
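	// Argument juggling below supports the public overloads, e.g. (illustrative):
	//   jQuery( "ul" ).on( "click", "li", handler )              // delegated
	//   jQuery( el ).on({ mouseenter: fIn, mouseleave: fOut })   // types-object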
		/*INTERNAL*/ one ) {
		var type, origFn;

		// Types can be a map of types/handlers
		if ( typeof types === "object" ) {
			// ( types-Object, selector, data )
			if ( typeof selector !== "string" ) {
				// ( types-Object, data )
				data = data || selector;
				selector = undefined;
			}
			for ( type in types ) {
				this.on( type, selector, data, types[ type ], one );
			}
			return this;
		}

		if ( data == null && fn == null ) {
			// ( types, fn )
			fn = selector;
			data = selector = undefined;
		} else if ( fn == null ) {
			if ( typeof selector === "string" ) {
				// ( types, selector, fn )
				fn = data;
				data = undefined;
			} else {
				// ( types, data, fn )
				fn = data;
				data = selector;
				selector = undefined;
			}
		}
		if ( fn === false ) {
			fn = returnFalse;
		} else if ( !fn ) {
			return this;
		}

		if ( one === 1 ) {
			origFn = fn;
			fn = function( event ) {
				// Can use an empty set, since event contains the info
				jQuery().off( event );
				return origFn.apply( this, arguments );
			};
			// Use same guid so caller can remove using origFn
			fn.guid = origFn.guid || ( origFn.guid = jQuery.guid++ );
		}
		return this.each( function() {
			jQuery.event.add( this, types, fn, data, selector );
		});
	},
	one: function( types, selector, data, fn ) {
		return this.on( types, selector, data, fn, 1 );
	},
	off: function( types, selector, fn ) {
		var handleObj, type;
		if ( types && types.preventDefault && types.handleObj ) {
			// ( event )  dispatched jQuery.Event
			handleObj = types.handleObj;
			jQuery( types.delegateTarget ).off(
				handleObj.namespace ?
					handleObj.origType + "." + handleObj.namespace :
					handleObj.origType,
				handleObj.selector,
				handleObj.handler
			);
			return this;
		}
		if ( typeof types === "object" ) {
			// ( types-object [, selector] )
			for ( type in types ) {
				this.off( type, selector, types[ type ] );
			}
			return this;
		}
		if ( selector === false || typeof selector === "function" ) {
			// ( types [, fn] )
			fn = selector;
			selector = undefined;
		}
		if ( fn === false ) {
			fn = returnFalse;
		}
		return this.each(function() {
			jQuery.event.remove( this, types, fn, selector );
		});
	},
	trigger: function( type, data ) {
		return this.each(function() {
			jQuery.event.trigger( type, data, this );
		});
	},
	triggerHandler: function( type, data ) {
		var elem = this[0];
		if ( elem ) {
			return jQuery.event.trigger( type, data, elem, true );
		}
	}
});


function createSafeFragment( document ) {
	var list = nodeNames.split( "|" ),
		safeFrag = document.createDocumentFragment();

	if ( safeFrag.createElement ) {
		while ( list.length ) {
			safeFrag.createElement(
				list.pop()
			);
		}
	}
	return safeFrag;
}

var nodeNames = "abbr|article|aside|audio|bdi|canvas|data|datalist|details|figcaption|figure|footer|" +
		"header|hgroup|mark|meter|nav|output|progress|section|summary|time|video",
	rinlinejQuery = / jQuery\d+="(?:null|\d+)"/g,
	rnoshimcache = new RegExp("<(?:" + nodeNames + ")[\\s/>]", "i"),
	rleadingWhitespace = /^\s+/,
	rxhtmlTag = /<(?!area|br|col|embed|hr|img|input|link|meta|param)(([\w:]+)[^>]*)\/>/gi,
	rtagName = /<([\w:]+)/,
	rtbody = /<tbody/i,
	rhtml = /<|&#?\w+;/,
	rnoInnerhtml = /<(?:script|style|link)/i,
	// checked="checked" or checked
	rchecked = /checked\s*(?:[^=]|=\s*.checked.)/i,
	rscriptType = /^$|\/(?:java|ecma)script/i,
	rscriptTypeMasked = /^true\/(.*)/,
	rcleanScript = /^\s*<!(?:\[CDATA\[|--)|(?:\]\]|--)>\s*$/g,

	// We have to close these tags to support XHTML (#13200)
	wrapMap = {
		option: [ 1, "<select multiple='multiple'>", "</select>" ],
		legend: [ 1, "<fieldset>", "</fieldset>" ],
		area: [ 1, "<map>", "</map>" ],
		param: [ 1, "<object>", "</object>" ],
		thead: [ 1, "<table>", "</table>" ],
		tr: [ 2, "<table><tbody>", "</tbody></table>" ],
		col: [ 2, "<table><tbody></tbody><colgroup>", "</colgroup></table>" ],
		td: [ 3, "<table><tbody><tr>", "</tr></tbody></table>" ],

		// IE6-8 can't serialize link, script, style, or any html5 (NoScope) tags,
		// unless wrapped in a div with non-breaking characters in front of it.
		_default: support.htmlSerialize ? [ 0, "", "" ] : [ 1, "X<div>", "</div>
    " ] }, safeFragment = createSafeFragment( document ), fragmentDiv = safeFragment.appendChild( document.createElement("div") ); wrapMap.optgroup = wrapMap.option; wrapMap.tbody = wrapMap.tfoot = wrapMap.colgroup = wrapMap.caption = wrapMap.thead; wrapMap.th = wrapMap.td; function getAll( context, tag ) { var elems, elem, i = 0, found = typeof context.getElementsByTagName !== strundefined ? context.getElementsByTagName( tag || "*" ) : typeof context.querySelectorAll !== strundefined ? context.querySelectorAll( tag || "*" ) : undefined; if ( !found ) { for ( found = [], elems = context.childNodes || context; (elem = elems[i]) != null; i++ ) { if ( !tag || jQuery.nodeName( elem, tag ) ) { found.push( elem ); } else { jQuery.merge( found, getAll( elem, tag ) ); } } } return tag === undefined || tag && jQuery.nodeName( context, tag ) ? jQuery.merge( [ context ], found ) : found; } // Used in buildFragment, fixes the defaultChecked property function fixDefaultChecked( elem ) { if ( rcheckableType.test( elem.type ) ) { elem.defaultChecked = elem.checked; } } // Support: IE<8 // Manipulating tables requires a tbody function manipulationTarget( elem, content ) { return jQuery.nodeName( elem, "table" ) && jQuery.nodeName( content.nodeType !== 11 ? content : content.firstChild, "tr" ) ? elem.getElementsByTagName("tbody")[0] || elem.appendChild( elem.ownerDocument.createElement("tbody") ) : elem; } // Replace/restore the type attribute of script elements for safe DOM manipulation function disableScript( elem ) { elem.type = (jQuery.find.attr( elem, "type" ) !== null) + "/" + elem.type; return elem; } function restoreScript( elem ) { var match = rscriptTypeMasked.exec( elem.type ); if ( match ) { elem.type = match[1]; } else { elem.removeAttribute("type"); } return elem; } // Mark scripts as having already been evaluated function setGlobalEval( elems, refElements ) { var elem, i = 0; for ( ; (elem = elems[i]) != null; i++ ) { jQuery._data( elem, "globalEval", !refElements || jQuery._data( refElements[i], "globalEval" ) ); } } function cloneCopyEvent( src, dest ) { if ( dest.nodeType !== 1 || !jQuery.hasData( src ) ) { return; } var type, i, l, oldData = jQuery._data( src ), curData = jQuery._data( dest, oldData ), events = oldData.events; if ( events ) { delete curData.handle; curData.events = {}; for ( type in events ) { for ( i = 0, l = events[ type ].length; i < l; i++ ) { jQuery.event.add( dest, type, events[ type ][ i ] ); } } } // make the cloned public data object a copy from the original if ( curData.data ) { curData.data = jQuery.extend( {}, curData.data ); } } function fixCloneNodeIssues( src, dest ) { var nodeName, e, data; // We do not need to do anything for non-Elements if ( dest.nodeType !== 1 ) { return; } nodeName = dest.nodeName.toLowerCase(); // IE6-8 copies events bound via attachEvent when using cloneNode. if ( !support.noCloneEvent && dest[ jQuery.expando ] ) { data = jQuery._data( dest ); for ( e in data.events ) { jQuery.removeEvent( dest, e, data.handle ); } // Event data gets referenced instead of copied if the expando gets copied too dest.removeAttribute( jQuery.expando ); } // IE blanks contents when cloning scripts, and tries to evaluate newly-set text if ( nodeName === "script" && dest.text !== src.text ) { disableScript( dest ).text = src.text; restoreScript( dest ); // IE6-10 improperly clones children of object elements using classid. // IE10 throws NoModificationAllowedError if parent is null, #12132. 
} else if ( nodeName === "object" ) { if ( dest.parentNode ) { dest.outerHTML = src.outerHTML; } // This path appears unavoidable for IE9. When cloning an object // element in IE9, the outerHTML strategy above is not sufficient. // If the src has innerHTML and the destination does not, // copy the src.innerHTML into the dest.innerHTML. #10324 if ( support.html5Clone && ( src.innerHTML && !jQuery.trim(dest.innerHTML) ) ) { dest.innerHTML = src.innerHTML; } } else if ( nodeName === "input" && rcheckableType.test( src.type ) ) { // IE6-8 fails to persist the checked state of a cloned checkbox // or radio button. Worse, IE6-7 fail to give the cloned element // a checked appearance if the defaultChecked value isn't also set dest.defaultChecked = dest.checked = src.checked; // IE6-7 get confused and end up setting the value of a cloned // checkbox/radio button to an empty string instead of "on" if ( dest.value !== src.value ) { dest.value = src.value; } // IE6-8 fails to return the selected option to the default selected // state when cloning options } else if ( nodeName === "option" ) { dest.defaultSelected = dest.selected = src.defaultSelected; // IE6-8 fails to set the defaultValue to the correct value when // cloning other types of input fields } else if ( nodeName === "input" || nodeName === "textarea" ) { dest.defaultValue = src.defaultValue; } } jQuery.extend({ clone: function( elem, dataAndEvents, deepDataAndEvents ) { var destElements, node, clone, i, srcElements, inPage = jQuery.contains( elem.ownerDocument, elem ); if ( support.html5Clone || jQuery.isXMLDoc(elem) || !rnoshimcache.test( "<" + elem.nodeName + ">" ) ) { clone = elem.cloneNode( true ); // IE<=8 does not properly clone detached, unknown element nodes } else { fragmentDiv.innerHTML = elem.outerHTML; fragmentDiv.removeChild( clone = fragmentDiv.firstChild ); } if ( (!support.noCloneEvent || !support.noCloneChecked) && (elem.nodeType === 1 || elem.nodeType === 11) && !jQuery.isXMLDoc(elem) ) { // We eschew Sizzle here for performance reasons: http://jsperf.com/getall-vs-sizzle/2 destElements = getAll( clone ); srcElements = getAll( elem ); // Fix all IE cloning issues for ( i = 0; (node = srcElements[i]) != null; ++i ) { // Ensure that the destination node is not null; Fixes #9587 if ( destElements[i] ) { fixCloneNodeIssues( node, destElements[i] ); } } } // Copy the events from the original to the clone if ( dataAndEvents ) { if ( deepDataAndEvents ) { srcElements = srcElements || getAll( elem ); destElements = destElements || getAll( clone ); for ( i = 0; (node = srcElements[i]) != null; i++ ) { cloneCopyEvent( node, destElements[i] ); } } else { cloneCopyEvent( elem, clone ); } } // Preserve script evaluation history destElements = getAll( clone, "script" ); if ( destElements.length > 0 ) { setGlobalEval( destElements, !inPage && getAll( elem, "script" ) ); } destElements = srcElements = node = null; // Return the cloned set return clone; }, buildFragment: function( elems, context, scripts, selection ) { var j, elem, contains, tmp, tag, tbody, wrap, l = elems.length, // Ensure a safe fragment safe = createSafeFragment( context ), nodes = [], i = 0; for ( ; i < l; i++ ) { elem = elems[ i ]; if ( elem || elem === 0 ) { // Add nodes directly if ( jQuery.type( elem ) === "object" ) { jQuery.merge( nodes, elem.nodeType ? 
[ elem ] : elem );
// Convert non-html into a text node
} else if ( !rhtml.test( elem ) ) { nodes.push( context.createTextNode( elem ) );
// Convert html into DOM nodes
} else { tmp = tmp || safe.appendChild( context.createElement("div") );
// Deserialize a standard representation
tag = (rtagName.exec( elem ) || [ "", "" ])[ 1 ].toLowerCase(); wrap = wrapMap[ tag ] || wrapMap._default; tmp.innerHTML = wrap[1] + elem.replace( rxhtmlTag, "<$1></$2>" ) + wrap[2];
// Descend through wrappers to the right content
j = wrap[0]; while ( j-- ) { tmp = tmp.lastChild; }
// Manually add leading whitespace removed by IE
if ( !support.leadingWhitespace && rleadingWhitespace.test( elem ) ) { nodes.push( context.createTextNode( rleadingWhitespace.exec( elem )[0] ) ); }
// Remove IE's autoinserted <tbody> from table fragments
if ( !support.tbody ) {
// String was a <table>, *may* have spurious <tbody>
elem = tag === "table" && !rtbody.test( elem ) ? tmp.firstChild :
// String was a bare <thead> or <tfoot>
wrap[1] === "<table>" && !rtbody.test( elem ) ? tmp : 0;
j = elem && elem.childNodes.length; while ( j-- ) { if ( jQuery.nodeName( (tbody = elem.childNodes[j]), "tbody" ) && !tbody.childNodes.length ) { elem.removeChild( tbody ); } } } jQuery.merge( nodes, tmp.childNodes );
// Fix #12392 for WebKit and IE > 9
tmp.textContent = "";
// Fix #12392 for oldIE
while ( tmp.firstChild ) { tmp.removeChild( tmp.firstChild ); }
// Remember the top-level container for proper cleanup
tmp = safe.lastChild; } } }
// Fix #11356: Clear elements from fragment
if ( tmp ) { safe.removeChild( tmp ); }
// Reset defaultChecked for any radios and checkboxes
// about to be appended to the DOM in IE 6/7 (#8060)
if ( !support.appendChecked ) { jQuery.grep( getAll( nodes, "input" ), fixDefaultChecked ); } i = 0; while ( (elem = nodes[ i++ ]) ) {
// #4087 - If origin and destination elements are the same, and this is
// that element, do not do anything
if ( selection && jQuery.inArray( elem, selection ) !== -1 ) { continue; } contains = jQuery.contains( elem.ownerDocument, elem );
// Append to fragment
tmp = getAll( safe.appendChild( elem ), "script" );
// Preserve script evaluation history
if ( contains ) { setGlobalEval( tmp ); }
// Capture executables
if ( scripts ) { j = 0; while ( (elem = tmp[ j++ ]) ) { if ( rscriptType.test( elem.type || "" ) ) { scripts.push( elem ); } } } } tmp = null; return safe; },
cleanData: function( elems, /* internal */ acceptData ) { var elem, type, id, data, i = 0, internalKey = jQuery.expando, cache = jQuery.cache, deleteExpando = support.deleteExpando, special = jQuery.event.special; for ( ; (elem = elems[i]) != null; i++ ) { if ( acceptData || jQuery.acceptData( elem ) ) { id = elem[ internalKey ]; data = id && cache[ id ]; if ( data ) { if ( data.events ) { for ( type in data.events ) { if ( special[ type ] ) { jQuery.event.remove( elem, type );
// This is a shortcut to avoid jQuery.event.remove's overhead
} else { jQuery.removeEvent( elem, type, data.handle ); } } }
// Remove cache only if it was not already removed by jQuery.event.remove
if ( cache[ id ] ) { delete cache[ id ];
// IE does not allow us to delete expando properties from nodes,
// nor does it have a removeAttribute function on Document nodes;
// we must handle all of these cases
if ( deleteExpando ) { delete elem[ internalKey ]; } else if ( typeof elem.removeAttribute !== strundefined ) { elem.removeAttribute( internalKey ); } else { elem[ internalKey ] = null; } deletedIds.push( id ); } } } } } });
jQuery.fn.extend({ text: function( value ) { return access( this, function( value ) { return value === undefined ?
jQuery.text( this ) : this.empty().append( ( this[0] && this[0].ownerDocument || document ).createTextNode( value ) ); }, null, value, arguments.length ); }, append: function() { return this.domManip( arguments, function( elem ) { if ( this.nodeType === 1 || this.nodeType === 11 || this.nodeType === 9 ) { var target = manipulationTarget( this, elem ); target.appendChild( elem ); } }); }, prepend: function() { return this.domManip( arguments, function( elem ) { if ( this.nodeType === 1 || this.nodeType === 11 || this.nodeType === 9 ) { var target = manipulationTarget( this, elem ); target.insertBefore( elem, target.firstChild ); } }); }, before: function() { return this.domManip( arguments, function( elem ) { if ( this.parentNode ) { this.parentNode.insertBefore( elem, this ); } }); }, after: function() { return this.domManip( arguments, function( elem ) { if ( this.parentNode ) { this.parentNode.insertBefore( elem, this.nextSibling ); } }); }, remove: function( selector, keepData /* Internal Use Only */ ) { var elem, elems = selector ? jQuery.filter( selector, this ) : this, i = 0; for ( ; (elem = elems[i]) != null; i++ ) { if ( !keepData && elem.nodeType === 1 ) { jQuery.cleanData( getAll( elem ) ); } if ( elem.parentNode ) { if ( keepData && jQuery.contains( elem.ownerDocument, elem ) ) { setGlobalEval( getAll( elem, "script" ) ); } elem.parentNode.removeChild( elem ); } } return this; }, empty: function() { var elem, i = 0; for ( ; (elem = this[i]) != null; i++ ) { // Remove element nodes and prevent memory leaks if ( elem.nodeType === 1 ) { jQuery.cleanData( getAll( elem, false ) ); } // Remove any remaining nodes while ( elem.firstChild ) { elem.removeChild( elem.firstChild ); } // If this is a select, ensure that it displays empty (#12336) // Support: IE<9 if ( elem.options && jQuery.nodeName( elem, "select" ) ) { elem.options.length = 0; } } return this; }, clone: function( dataAndEvents, deepDataAndEvents ) { dataAndEvents = dataAndEvents == null ? false : dataAndEvents; deepDataAndEvents = deepDataAndEvents == null ? dataAndEvents : deepDataAndEvents; return this.map(function() { return jQuery.clone( this, dataAndEvents, deepDataAndEvents ); }); }, html: function( value ) { return access( this, function( value ) { var elem = this[ 0 ] || {}, i = 0, l = this.length; if ( value === undefined ) { return elem.nodeType === 1 ? 
elem.innerHTML.replace( rinlinejQuery, "" ) : undefined; }
// See if we can take a shortcut and just use innerHTML
if ( typeof value === "string" && !rnoInnerhtml.test( value ) && ( support.htmlSerialize || !rnoshimcache.test( value ) ) && ( support.leadingWhitespace || !rleadingWhitespace.test( value ) ) && !wrapMap[ (rtagName.exec( value ) || [ "", "" ])[ 1 ].toLowerCase() ] ) { value = value.replace( rxhtmlTag, "<$1></$2>" ); try { for (; i < l; i++ ) {
// Remove element nodes and prevent memory leaks
elem = this[i] || {}; if ( elem.nodeType === 1 ) { jQuery.cleanData( getAll( elem, false ) ); elem.innerHTML = value; } } elem = 0;
// If using innerHTML throws an exception, use the fallback method
} catch(e) {} } if ( elem ) { this.empty().append( value ); } }, null, value, arguments.length ); },
replaceWith: function() { var arg = arguments[ 0 ];
// Make the changes, replacing each context element with the new content
this.domManip( arguments, function( elem ) { arg = this.parentNode; jQuery.cleanData( getAll( this ) ); if ( arg ) { arg.replaceChild( elem, this ); } });
// Force removal if there was no new content (e.g., from empty arguments)
return arg && (arg.length || arg.nodeType) ? this : this.remove(); },
detach: function( selector ) { return this.remove( selector, true ); },
domManip: function( args, callback ) {
// Flatten any nested arrays
args = concat.apply( [], args ); var first, node, hasScripts, scripts, doc, fragment, i = 0, l = this.length, set = this, iNoClone = l - 1, value = args[0], isFunction = jQuery.isFunction( value );
// We can't cloneNode fragments that contain checked, in WebKit
if ( isFunction || ( l > 1 && typeof value === "string" && !support.checkClone && rchecked.test( value ) ) ) { return this.each(function( index ) { var self = set.eq( index ); if ( isFunction ) { args[0] = value.call( this, index, self.html() ); } self.domManip( args, callback ); }); } if ( l ) { fragment = jQuery.buildFragment( args, this[ 0 ].ownerDocument, false, this ); first = fragment.firstChild; if ( fragment.childNodes.length === 1 ) { fragment = first; } if ( first ) { scripts = jQuery.map( getAll( fragment, "script" ), disableScript ); hasScripts = scripts.length;
// Use the original fragment for the last item instead of the first because it can end up
// being emptied incorrectly in certain situations (#8070).
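// (Illustrative note, not part of the upstream jQuery source: for a call such
// as jQuery( "p" ).append( "<b>hi</b>" ) the loop below runs once per matched
// <p>; every target except the last ( i !== iNoClone ) receives a deep clone
// of the fragment, copied together with data and events, so each insertion
// stays independent of the others.)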
for ( ; i < l; i++ ) { node = fragment; if ( i !== iNoClone ) { node = jQuery.clone( node, true, true );
// Keep references to cloned scripts for later restoration
if ( hasScripts ) { jQuery.merge( scripts, getAll( node, "script" ) ); } } callback.call( this[i], node, i ); } if ( hasScripts ) { doc = scripts[ scripts.length - 1 ].ownerDocument;
// Reenable scripts
jQuery.map( scripts, restoreScript );
// Evaluate executable scripts on first document insertion
for ( i = 0; i < hasScripts; i++ ) { node = scripts[ i ]; if ( rscriptType.test( node.type || "" ) && !jQuery._data( node, "globalEval" ) && jQuery.contains( doc, node ) ) { if ( node.src ) {
// Optional AJAX dependency, but won't run scripts if not present
if ( jQuery._evalUrl ) { jQuery._evalUrl( node.src ); } } else { jQuery.globalEval( ( node.text || node.textContent || node.innerHTML || "" ).replace( rcleanScript, "" ) ); } } } }
// Fix #11809: Avoid leaking memory
fragment = first = null; } } return this; } });
jQuery.each({ appendTo: "append", prependTo: "prepend", insertBefore: "before", insertAfter: "after", replaceAll: "replaceWith" }, function( name, original ) { jQuery.fn[ name ] = function( selector ) { var elems, i = 0, ret = [], insert = jQuery( selector ), last = insert.length - 1; for ( ; i <= last; i++ ) { elems = i === last ? this : this.clone(true); jQuery( insert[i] )[ original ]( elems );
// Modern browsers can apply jQuery collections as arrays, but oldIE needs a .get()
push.apply( ret, elems.get() ); } return this.pushStack( ret ); }; });
var iframe, elemdisplay = {};
/**
 * Retrieve the actual display of an element
 * @param {String} name nodeName of the element
 * @param {Object} doc Document object
 */
// Called only from within defaultDisplay
function actualDisplay( name, doc ) { var style, elem = jQuery( doc.createElement( name ) ).appendTo( doc.body ),
// getDefaultComputedStyle might be reliably used only on attached element
display = window.getDefaultComputedStyle && ( style = window.getDefaultComputedStyle( elem[ 0 ] ) ) ?
// Use of this method is a temporary fix (more like optimization) until something better comes along,
// since it was removed from specification and supported only in FF
style.display : jQuery.css( elem[ 0 ], "display" );
// We don't have any data stored on the element,
// so use "detach" method as fast way to get rid of the element
elem.detach(); return display; }
/**
 * Try to determine the default display value of an element
 * @param {String} nodeName
 */
function defaultDisplay( nodeName ) { var doc = document, display = elemdisplay[ nodeName ]; if ( !display ) { display = actualDisplay( nodeName, doc );
// If the simple way fails, read from inside an iframe
if ( display === "none" || !display ) {
// Use the already-created iframe if possible
iframe = (iframe || jQuery( "