gnocchi-4.2.5/.mailmap

gord chung

gnocchi-4.2.5/doc/source/contributing.rst

==============
 Contributing
==============

Issues
------

We use the `GitHub issue tracker`_ for reporting issues. Before opening a new
issue, ensure the bug has not already been reported by searching the issue
tracker first.

If you are unable to find an open issue addressing the problem, open a new
one. Be sure to include a title and a clear description, as much relevant
information as possible, and a code sample or an executable test case
demonstrating the expected behavior that is not occurring.

If you are looking to contribute for the first time, some issues are tagged
with the "`good first issue`_" label and are easy targets for newcomers.

.. _`GitHub issue tracker`: https://github.com/gnocchixyz/gnocchi/issues
.. _`good first issue`: https://github.com/gnocchixyz/gnocchi/issues?q=is%3Aissue+is%3Aopen+label%3A%22good+first+issue%22

Pull-requests
-------------

When opening a pull-request, make sure that:

* You write a comprehensive summary of your problem and the solution you
  implemented.

* If you update or fix your pull-request, make sure the commits are atomic.
  Do not include fix-up commits in your history; rewrite it properly using
  e.g. `git rebase --interactive` and/or `git commit --amend`.

* We recommend using `git pull-request`_ to send your pull-requests.

.. _`git pull-request`: https://github.com/jd/git-pull-request

Running the Tests
-----------------

Tests are run using `tox <https://tox.readthedocs.io/>`_. Tox creates a
virtual environment for each test environment, so make sure you are using an
up-to-date version of `virtualenv <https://virtualenv.pypa.io/>`_.

The different test environments and configurations can be found by running
the ``tox -l`` command.

For example, to run tests with Python 2.7, PostgreSQL as indexer, and file as
storage backend:

::

    tox -e py27-postgresql-file

To run tests with Python 3.5, MySQL as indexer, and Ceph as storage backend:

::

    tox -e py35-mysql-ceph

gnocchi-4.2.5/doc/source/rest.j2

================
 REST API Usage
================

Authentication
==============

By default, authentication is configured to the `"basic" mode`_. You need to
provide an `Authorization` header in your HTTP requests with a valid username
(the password is not used). The "admin" username is granted all privileges,
whereas any other username is recognized as having standard permissions.

.. _"basic" mode: https://tools.ietf.org/html/rfc7617

You can customize permissions by specifying a different `policy_file` than
the default one.

If you set the `api.auth_mode` value to `keystone`, the OpenStack Keystone
middleware will be enabled for authentication. You then need to authenticate
against Keystone and provide an `X-Auth-Token` header with a valid token for
each request sent to Gnocchi's API.

If you set the `api.auth_mode` value to `remoteuser`, Gnocchi will look at
the HTTP server's REMOTE_USER environment variable to get the username. The
permissions model is then the same as in the "basic" mode.
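For example, with the default "basic" mode, a request only needs an
`Authorization` header carrying the username. A minimal sketch (the host,
port and username below are illustrative, not mandated by Gnocchi):

::

    # Sketch: list generic resources as "admin" with HTTP basic auth.
    # RFC 7617 credentials are base64("username:password"); Gnocchi
    # ignores the password part, so it is left empty here.
    curl http://localhost:8041/v1/resource/generic \
         -H "Authorization: Basic $(echo -n 'admin:' | base64)"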
Metrics ======= Gnocchi provides an object type that is called |metric|. A |metric| designates any thing that can be measured: the CPU usage of a server, the temperature of a room or the number of bytes sent by a network interface. A |metric| only has a few properties: a UUID to identify it, a name, the |archive policy| that will be used to store and aggregate the |measures|. Create ------ To create a |metric|, the following API request should be used: {{ scenarios['create-metric']['doc'] }} .. note:: Once the |metric| is created, the |archive policy| attribute is fixed and unchangeable. The definition of the |archive policy| can be changed through the :ref:`archive_policy endpoint` though. Read ---- Once created, you can retrieve the |metric| information: {{ scenarios['get-metric']['doc'] }} List ---- To retrieve the list of all the |metrics| created, use the following request: {{ scenarios['list-metric']['doc'] }} Pagination ~~~~~~~~~~ Considering the large volume of |metrics| Gnocchi will store, query results are limited to `max_limit` value set in the configuration file. Returned results are ordered by |metrics|' id values. To retrieve the next page of results, the id of a |metric| should be given as `marker` for the beginning of the next page of results. Default ordering and limits as well as page start can be modified using query parameters: {{ scenarios['list-metric-pagination']['doc'] }} Delete ------ Metrics can be deleted through a request: {{ scenarios['delete-metric']['doc'] }} See also :ref:`Resources ` for similar operations specific to metrics associated with a |resource|. Measures ======== Push ---- It is possible to send |measures| to the |metric|: {{ scenarios['post-measures']['doc'] }} If there are no errors, Gnocchi does not return a response body, only a simple status code. It is possible to provide any number of |measures|. .. IMPORTANT:: While it is possible to send any number of (timestamp, value), they still need to honor constraints defined by the |archive policy| used by the |metric|, such as the maximum |timespan|. Batch ~~~~~ It is also possible to batch |measures| sending, i.e. send several |measures| for different |metrics| in a simple call: {{ scenarios['post-measures-batch']['doc'] }} Or using named |metrics| of |resources|: {{ scenarios['post-measures-batch-named']['doc'] }} If some named |metrics| specified in the batch request do not exist, Gnocchi can try to create them as long as an |archive policy| rule matches: {{ scenarios['post-measures-batch-named-create']['doc'] }} Read ---- Once |measures| are sent, it is possible to retrieve |aggregates| using *GET* on the same endpoint: {{ scenarios['get-measures']['doc'] }} The list of points returned is composed of tuples with (timestamp, |granularity|, value) sorted by timestamp. The |granularity| is the |timespan| covered by aggregation for this point. Refresh ~~~~~~~ Depending on the driver, there may be some lag after pushing |measures| before they are processed and queryable. To ensure your query returns all |aggregates| that have been pushed and processed, you can force any unprocessed |measures| to be handled: {{ scenarios['get-measures-refresh']['doc'] }} .. note:: Depending on the amount of data that is unprocessed, `refresh` may add some overhead to your query. Filter ~~~~~~ Time range `````````` It is possible to filter the |aggregates| over a time range by specifying the *start* and/or *stop* parameters to the query with timestamp. 
The timestamp format can be either a floating number (UNIX epoch) or an ISO8601 formated timestamp: {{ scenarios['get-measures-from']['doc'] }} Aggregation ``````````` By default, the aggregated values that are returned use the *mean* |aggregation method|. It is possible to request for any other method defined by the policy by specifying the *aggregation* query parameter: {{ scenarios['get-measures-max']['doc'] }} Granularity ``````````` It's possible to provide the |granularity| argument to specify the |granularity| to retrieve, rather than all the |granularities| available: {{ scenarios['get-measures-granularity']['doc'] }} Resample ~~~~~~~~ In addition to |granularities| defined by the |archive policy|, |aggregates| can be resampled to a new |granularity|. {{ scenarios['get-measures-resample']['doc'] }} .. note:: If you plan to execute the query often, it is recommended for performance to leverage an |archive policy| with the needed |granularity| instead of resampling the time series on each query. .. note:: Depending on the |aggregation method| and frequency of |measures|, resampled data may lack accuracy as it is working against previously aggregated data. .. note:: Gnocchi has an :ref:`aggregates ` endpoint which provides resampling as well as additional capabilities. Archive Policy ============== When sending |measures| for a |metric| to Gnocchi, the values are dynamically aggregated. That means that Gnocchi does not store all sent |measures|, but aggregates them over a certain period of time. Gnocchi provides several |aggregation methods| that are builtin. The list of |aggregation method| available is: *mean*, *sum*, *last*, *max*, *min*, *std*, *median*, *first*, *count* and *Npct* (with 0 < N < 100). Those can be prefix by `rate:` to compute the rate of change before doing the aggregation. An |archive policy| is defined by a list of items in the `definition` field. Each item is composed of: the |timespan|; the |granularity|, which is the level of precision that must be kept when aggregating data; and the number of points. The |archive policy| is determined using at least 2 of the points, |granularity| and |timespan| fields. For example, an item might be defined as 12 points over 1 hour (one point every 5 minutes), or 1 point every 1 hour over 1 day (24 points). By default, new |measures| can only be processed if they have timestamps in the future or part of the last aggregation period. The last aggregation period size is based on the largest |granularity| defined in the |archive policy| definition. To allow processing |measures| that are older than the period, the `back_window` parameter can be used to set the number of coarsest periods to keep. That way it is possible to process |measures| that are older than the last timestamp period boundary. For example, if an |archive policy| is defined with coarsest aggregation of 1 hour, and the last point processed has a timestamp of 14:34, it's possible to process |measures| back to 14:00 with a `back_window` of 0. If the `back_window` is set to 2, it will be possible to send |measures| with timestamp back to 12:00 (14:00 minus 2 times 1 hour). Create ------ The REST API allows to create |archive policies| in this way: {{ scenarios['create-archive-policy']['doc'] }} By default, the |aggregation methods| computed and stored are the ones defined with `default_aggregation_methods` in the configuration file. 
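As a sketch of what that option can look like in `gnocchi.conf` (the option
group and the listed methods here are assumptions illustrating a typical
setup, not a verified default; check your deployment's configuration
reference):

::

    [archive_policy]
    # Aggregation methods computed and stored for archive policies
    # that do not declare their own aggregation_methods list.
    default_aggregation_methods = mean,min,max,sum,std,count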
It is possible to change the |aggregation methods| used in an |archive policy| by specifying the list of |aggregation method| to use in the `aggregation_methods` attribute of an |archive policy|. {{ scenarios['create-archive-policy-without-max']['doc'] }} The list of |aggregation methods| can either be: - a list of |aggregation methods| to use, e.g. `["mean", "max"]` - a list of methods to remove (prefixed by `-`) and/or to add (prefixed by `+`) to the default list (e.g. `["+mean", "-last"]`) If `*` is included in the list, it's substituted by the list of all supported |aggregation methods|. Read ---- Once the |archive policy| is created, the complete set of properties is computed and returned, with the URL of the |archive policy|. This URL can be used to retrieve the details of the |archive policy| later: {{ scenarios['get-archive-policy']['doc'] }} List ---- It is also possible to list |archive policies|: {{ scenarios['list-archive-policy']['doc'] }} .. _archive-policy-patch: Update ------ Existing |archive policies| can be modified to retain more or less data depending on requirements. If the policy coverage is expanded, |aggregates| are not retroactively calculated as backfill to accommodate the new |timespan|: {{ scenarios['update-archive-policy']['doc'] }} .. note:: |Granularities| cannot be changed to a different rate. Also, |granularities| cannot be added or dropped from a policy. Delete ------ It is possible to delete an |archive policy| if it is not used by any |metric|: {{ scenarios['delete-archive-policy']['doc'] }} .. note:: An |archive policy| cannot be deleted until all |metrics| associated with it are removed by a metricd daemon. Archive Policy Rule =================== Gnocchi provides the ability to define a mapping called `archive_policy_rule`. An |archive policy| rule defines a mapping between a |metric| and an |archive policy|. This gives users the ability to pre-define rules so an |archive policy| is assigned to |metrics| based on a matched pattern. An |archive policy| rule has a few properties: a name to identify it, an |archive policy| name that will be used to store the policy name and |metric| pattern to match |metric| names. An |archive policy| rule for example could be a mapping to default a medium |archive policy| for any volume |metric| with a pattern matching `volume.*`. When a sample |metric| is posted with a name of `volume.size`, that would match the pattern and the rule applies and sets the |archive policy| to medium. If multiple rules match, the longest matching rule is taken. For example, if two rules exists which match `*` and `disk.*`, a `disk.io.rate` |metric| would match the `disk.*` rule rather than `*` rule. Create ------ To create a rule, the following API request should be used: {{ scenarios['create-archive-policy-rule']['doc'] }} The `metric_pattern` is used to pattern match so as some examples, - `*` matches anything - `disk.*` matches disk.io - `disk.io.*` matches disk.io.rate Read ---- Once created, you can retrieve the rule information: {{ scenarios['get-archive-policy-rule']['doc'] }} List ---- It is also possible to list |archive policy| rules. The result set is ordered by the `metric_pattern`, in reverse alphabetical order: {{ scenarios['list-archive-policy-rule']['doc'] }} Update ------ It is possible to rename an archive policy rule: {{ scenarios['rename-archive-policy-rule']['doc'] }} Delete ------ It is possible to delete an |archive policy| rule: {{ scenarios['delete-archive-policy-rule']['doc'] }} .. 
_resources-endpoint: Resources ========= Gnocchi provides the ability to store and index |resources|. Each |resource| has a type. The basic type of |resources| is *generic*, but more specialized subtypes also exist, especially to describe OpenStack resources. Create ------ To create a generic |resource|: {{ scenarios['create-resource-generic']['doc'] }} The *user_id* and *project_id* attributes may be any arbitrary string. The :ref:`timestamps` describing the lifespan of the |resource| are optional, and *started_at* is by default set to the current timestamp. The *id* attribute may be a UUID or some other arbitrary string. If it is a UUID, Gnocchi will use it verbatim. If it is not a UUID, the original value will be stored in the *original_resource_id* attribute and Gnocchi will generate a new UUID that is unique for the user. That is, if two users submit create requests with the same non-UUID *id* attribute, the resulting resources will have different UUID values in their respective *id* attributes. You may use either of the *id* or the *original_resource_id* attributes to refer to the |resource|. The value returned by the create operation includes a `Location` header referencing the *id*. Non-generic resources ~~~~~~~~~~~~~~~~~~~~~ More specialized |resources| can be created. For example, the *instance* is used to describe an OpenStack instance as managed by Nova_. {{ scenarios['create-resource-instance']['doc'] }} All specialized types have their own optional and mandatory attributes, but they all include attributes from the generic type as well. .. _Nova: http://launchpad.net/nova With metrics ~~~~~~~~~~~~ Each |resource| can be linked to any number of |metrics| on creation: {{ scenarios['create-resource-instance-with-metrics']['doc'] }} It is also possible to create |metrics| at the same time you create a |resource| to save some requests: {{ scenarios['create-resource-instance-with-dynamic-metrics']['doc'] }} Read ---- To retrieve a |resource| by its URL provided by the `Location` header at creation time: {{ scenarios['get-resource-generic']['doc'] }} List ---- All |resources| can be listed, either by using the `generic` type that will list all types of |resources|, or by filtering on their |resource| type: {{ scenarios['list-resource-generic']['doc'] }} Specific resource type ~~~~~~~~~~~~~~~~~~~~~~ No attributes specific to the |resource| type are retrieved when using the `generic` endpoint. To retrieve the details, either list using the specific |resource| type endpoint: {{ scenarios['list-resource-instance']['doc'] }} With details ~~~~~~~~~~~~ To retrieve a more detailed view of the resources, use `details=true` in the query parameter: {{ scenarios['list-resource-generic-details']['doc'] }} Limit attributes ~~~~~~~~~~~~~~~~ To limit response attributes, use `attrs=id&attrs=started_at&attrs=user_id` in the query parameter: {{ scenarios['list-resource-generic-limit-attrs']['doc'] }} Pagination ~~~~~~~~~~ Similar to |metric| list, query results are limited to `max_limit` value set in the configuration file. 
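That limit is a server-side setting. As an illustrative sketch (the option
group and value shown are assumptions, not a verified default), it could be
tuned in `gnocchi.conf`:

::

    [api]
    # Maximum number of resources or metrics returned in a single page.
    max_limit = 1000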
Returned results represent a single page of data and are ordered by
resources' revision_start time and started_at values:

{{ scenarios['list-resource-generic-pagination']['doc'] }}

List resource metrics
---------------------

The |metrics| associated with a |resource| can be accessed and manipulated
using the usual `/v1/metric` endpoint or using the named relationship with
the |resource|:

{{ scenarios['get-resource-named-metrics-measures']['doc'] }}

Update
------

It's possible to modify a |resource| by re-uploading it partially with the
modified fields:

{{ scenarios['patch-resource']['doc'] }}

It is also possible to associate additional |metrics| with a |resource|:

{{ scenarios['append-metrics-to-resource']['doc'] }}

History
-------

And to retrieve a |resource|'s modification history:

{{ scenarios['get-patched-instance-history']['doc'] }}

Delete
------

It is possible to delete a |resource| altogether:

{{ scenarios['delete-resource-generic']['doc'] }}

It is also possible to delete a batch of |resources| based on attribute
values; the number of deleted |resources| is returned.

Batch
~~~~~

To delete |resources| based on ids:

{{ scenarios['delete-resources-by-ids']['doc'] }}

or to delete |resources| based on time:

{{ scenarios['delete-resources-by-time']['doc']}}

.. IMPORTANT::

  When a |resource| is deleted, all its associated |metrics| are deleted at
  the same time.

  When a batch of |resources| is deleted, an attribute filter is required to
  avoid deletion of the entire database.

Resource Types
==============

Gnocchi is able to manage |resource| types with custom attributes.

Create
------

To create a new |resource| type:

{{ scenarios['create-resource-type']['doc'] }}

Read
----

Then to retrieve its description:

{{ scenarios['get-resource-type']['doc'] }}

List
----

All |resource| types can be listed like this:

{{ scenarios['list-resource-type']['doc'] }}

Update
------

Attributes can be added or removed:

{{ scenarios['patch-resource-type']['doc'] }}

Delete
------

It can also be deleted if no more |resources| are associated to it:

{{ scenarios['delete-resource-type']['doc'] }}

.. note::

  Creating a |resource| type means creating new tables on the indexer
  backend. This is a heavy operation that will lock some tables for a short
  amount of time. When the |resource| type is created, its initial `state`
  is `creating`. When the new tables have been created, the state switches
  to `active` and the new |resource| type is ready to be used. If something
  unexpected occurs during this step, the state switches to
  `creation_error`.

  The same behavior occurs when the |resource| type is deleted. The state
  first switches to `deleting` and the |resource| type is no longer usable.
  Then the tables are removed and the resource_type is finally deleted from
  the database. If some unexpected error occurs, the state switches to
  `deletion_error`.

Search
======

Gnocchi's search API supports the ability to execute a query across
|resources| or |metrics|. This API provides a language to construct more
complex matching constraints beyond basic filtering.

Usage
-----

You can specify a time range to look for by specifying the `start` and/or
`stop` query parameter, and the |aggregation method| to use by specifying
the `aggregation` query parameter.
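For instance, a value search restricted to a time range and a specific
aggregation method could look like the following sketch (the host, port,
metric id and threshold are placeholders, not values from a real
deployment):

::

    # Sketch: search a metric's mean aggregates between two dates for
    # values greater than or equal to 50.
    curl -X POST \
         'http://localhost:8041/v1/search/metric?metric_id=9b9ad4bf-bcc9-4a6e-aab0-2f52ab09dfd8&start=2018-01-01&stop=2018-01-02&aggregation=mean' \
         -H "Content-Type: application/json" \
         -H "Authorization: Basic YWRtaW46" \
         -d '{">=": 50}'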
The supported operators are: equal to (`=`, `==` or `eq`), lesser than (`<`
or `lt`), greater than (`>` or `gt`), less than or equal to (`<=`, `le` or
`≤`), greater than or equal to (`>=`, `ge` or `≥`), not equal to (`!=`, `ne`
or `≠`), addition (`+` or `add`), subtraction (`-` or `sub`), multiplication
(`*`, `mul` or `×`), division (`/`, `div` or `÷`). These operations take
either a single argument, which is then the value the searched value is
compared against or combined with, or a list of two arguments. The operators
or (`or` or `∨`), and (`and` or `∧`) and `not` are also supported, and take
a list of arguments as parameters.

.. _search-resource:

Resource
--------

It's possible to search for |resources| using a query mechanism, using the
`POST` method and uploading a JSON formatted query.

Single filter
~~~~~~~~~~~~~

When listing |resources|, it is possible to filter |resources| based on
attribute values:

{{ scenarios['search-resource-for-user']['doc'] }}

Or even:

{{ scenarios['search-resource-for-host-like']['doc'] }}

Multiple filters
~~~~~~~~~~~~~~~~

Complex operators such as `and` and `or` are also available:

{{ scenarios['search-resource-for-user-after-timestamp']['doc'] }}

With details
~~~~~~~~~~~~

Details about the |resource| can also be retrieved at the same time:

{{ scenarios['search-resource-for-user-details']['doc'] }}

Limit attributes
~~~~~~~~~~~~~~~~

To limit response attributes, use `attrs=id&attrs=started_at&attrs=user_id`
in the query parameter:

{{ scenarios['search-resource-for-user-limit-attrs']['doc'] }}

History
~~~~~~~

It's possible to search for old revisions of |resources| in the same way:

{{ scenarios['search-resource-history']['doc'] }}

Time range
``````````

The time range of the history can be set, too:

{{ scenarios['search-resource-history-partial']['doc'] }}

Magic
~~~~~

The special attribute `lifespan`, which is equivalent to `ended_at -
started_at`, is also available in the filtering queries.

{{ scenarios['search-resource-lifespan']['doc'] }}

Metric
------

It is possible to search for values in |metrics|. For example, this will
look for all values that are greater than or equal to 50 if we add 23 to
them and that are not equal to 55. You have to specify the list of |metrics|
to look into by using the `metric_id` query parameter several times.

{{ scenarios['search-value-in-metric']['doc'] }}

And it is possible to search for values in |metrics| by using one or more
|granularities|:

{{ scenarios['search-value-in-metrics-by-granularity']['doc'] }}

.. _aggregates:

Dynamic Aggregates
==================

Gnocchi supports the ability to make on-the-fly reaggregations of existing
|metrics| and the ability to manipulate and transform |metrics| as required.
This is accomplished by passing an `operations` value describing the actions
to apply to the |metrics|.

.. note:: `operations` can also be passed as a string, for example:
   `"operations": "(aggregate mean (metric (metric-id aggregation) (metric-id aggregation)))"`

Cross-metric Usage
------------------

Aggregation across multiple |metrics| has different behavior depending on
whether boundary values are set (`start` and `stop`) and if `needed_overlap`
is set.

Overlap percentage
~~~~~~~~~~~~~~~~~~

Gnocchi expects that time series have a certain percentage of timestamps in
common. This percentage is controlled by the `needed_overlap` parameter,
which by default expects 100% overlap. If this percentage is not reached, an
error is returned.
.. note:: If the `start` or `stop` boundary is not set, Gnocchi will set the
   missing boundary to the first or last timestamp common across all series.

Backfill
~~~~~~~~

The ability to fill in missing points from a subset of time series is
supported by specifying a `fill` value. Valid fill values include any float,
`dropna` or `null`. In the case of `null`, Gnocchi will compute the
aggregation using only the existing points. `dropna` is like `null` but
removes NaN values from the result. The `fill` parameter will not backfill
timestamps which contain no points in any of the time series. Only
timestamps which have datapoints in at least one of the time series are
returned.

{{ scenarios['get-aggregates-by-metric-ids-fill']['doc'] }}

Search and aggregate
--------------------

It's also possible to do that aggregation on |metrics| linked to
|resources|. In order to select these |resources|, the following endpoint
accepts a query such as the one described in the
:ref:`resource search API <search-resource>`.

{{ scenarios['get-aggregates-by-attributes-lookup']['doc'] }}

And the metric name can be a wildcard too:

{{ scenarios['get-aggregates-by-attributes-lookup-wildcard']['doc'] }}

Groupby
~~~~~~~

It is possible to group the |resource| search results by any attribute of
the requested |resource| type, and then compute the aggregation:

{{ scenarios['get-aggregates-by-attributes-lookup-groupby']['doc'] }}

List of supported operations
----------------------------

Get one or more metrics
~~~~~~~~~~~~~~~~~~~~~~~

::

  (metric <metric-id> <aggregation>)
  (metric ((<metric-id> <aggregation>), (<metric-id> <aggregation>), ...))

metric-id: the id of a metric to retrieve

aggregation: the aggregation method to retrieve

.. note:: When used alone, this provides the ability to retrieve multiple
   |metrics| in a single request.

Rolling window aggregation
~~~~~~~~~~~~~~~~~~~~~~~~~~

::

  (rolling <aggregation method> <rolling window> (<operations>))

aggregation method: the aggregation method to use to compute the rolling
window (mean, median, std, min, max, sum, var, count)

rolling window: number of previous values to aggregate

Aggregation across metrics
~~~~~~~~~~~~~~~~~~~~~~~~~~

::

  (aggregate <aggregation method> ((<operations>), (<operations>), ...))

aggregation method: the aggregation method to use to compute the aggregate
between metrics (mean, median, std, min, max, sum, var, count)

Resample
~~~~~~~~

::

  (resample <aggregation method> <granularity> (<operations>))

aggregation method: the aggregation method to use to compute the aggregate
between metrics (mean, median, std, min, max, sum, var, count)

granularity: the granularity (e.g. 1d, 60s, ...)

.. note:: If you plan to execute the query often, it is recommended for
   performance to leverage an |archive policy| with the needed |granularity|
   instead of resampling the time series on each query.

Math operations
~~~~~~~~~~~~~~~

::

  (<operator> <operations> <operations>)

operator: %, mod, +, add, -, sub, *, ×, mul, /, ÷, div, **, ^, pow

Boolean operations
~~~~~~~~~~~~~~~~~~

::

  (<operator> <operations> <operations>)

operator: =, ==, eq, <, lt, >, gt, <=, ≤, le, >=, ≥, ge, !=, ≠, ne

Function operations
~~~~~~~~~~~~~~~~~~~

::

  (abs (<operations>))
  (absolute (<operations>))
  (neg (<operations>))
  (negative (<operations>))
  (cos (<operations>))
  (sin (<operations>))
  (tan (<operations>))
  (floor (<operations>))
  (ceil (<operations>))
  (rateofchange (<operations>))

Examples
--------

Aggregate then math
~~~~~~~~~~~~~~~~~~~

The following computes the mean aggregates with `all` metrics listed in
`metrics` and then multiplies it by `4`.
{{ scenarios['get-aggregates-by-metric-ids']['doc'] }}

Between metrics
~~~~~~~~~~~~~~~

Operations between metrics can also be done, such as:

{{ scenarios['get-aggregates-between-metrics']['doc'] }}

List the top N resources that consume the most CPU during the last hour
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The following is configured so that `stop` - `start` = `granularity` will
get only one point per instance. This gives all the information needed to
order the `cpu.util` timeseries, which can then be filtered down to N
results.

{{ scenarios['use-case1-top-cpuutil-per-instances']['doc'] }}

Aggregation across metrics (deprecated)
========================================

.. Note:: This API has been replaced by the more flexible
   :ref:`aggregates API <aggregates>`.

Gnocchi supports on-the-fly aggregation of previously aggregated data of
|metrics|. It can be done by providing the list of |metrics| to aggregate:

{{ scenarios['get-across-metrics-measures-by-metric-ids']['doc'] }}

.. Note:: This aggregation is done against the |aggregates| built and
   updated for a |metric| when new measurements are posted in Gnocchi.
   Therefore, the aggregate of this already aggregated data may not make
   sense for certain kinds of |aggregation method| (e.g. stdev).

By default, the |measures| are aggregated using the |aggregation method|
provided, e.g. you'll get a mean of means, or a max of maxes. You can
specify what method to use over the retrieved aggregation by using the
`reaggregation` parameter:

{{ scenarios['get-across-metrics-measures-by-metric-ids-reaggregate']['doc'] }}

It's also possible to do that aggregation on |metrics| linked to
|resources|. In order to select these |resources|, the following endpoint
accepts a query such as the one described in the
:ref:`resource search API <search-resource>`.

{{ scenarios['get-across-metrics-measures-by-attributes-lookup']['doc'] }}

It is possible to group the |resource| search results by any attribute of
the requested |resource| type, and then compute the aggregation:

{{ scenarios['get-across-metrics-measures-by-attributes-lookup-groupby']['doc'] }}

Similar to retrieving |aggregates| for a single |metric|, the `refresh`
parameter can be provided to force all POSTed |measures| to be processed
across all |metrics| before computing the result. The `resample` parameter
may be used as well.

.. note:: Transformations (e.g. resample, absolute, ...) are done prior to
   any reaggregation if both parameters are specified.

Also, aggregation across |metrics| has different behavior depending on
whether boundary values are set (`start` and `stop`) and if `needed_overlap`
is set.

Gnocchi expects that time series have a certain percentage of timestamps in
common. This percentage is controlled by `needed_overlap`, which by default
expects 100% overlap. If this percentage is not reached, an error is
returned.

.. note:: If the `start` or `stop` boundary is not set, Gnocchi will set the
   missing boundary to the first or last timestamp common across all series.

The ability to fill in missing points from a subset of time series is
supported by specifying a `fill` value. Valid fill values include any float,
`dropna` or `null`. In the case of `null`, Gnocchi will compute the
aggregation using only the existing points. `dropna` is like `null` but
removes NaN values from the result. The `fill` parameter will not backfill
timestamps which contain no points in any of the time series. Only
timestamps which have datapoints in at least one of the time series are
returned.
{{ scenarios['get-across-metrics-measures-by-metric-ids-fill']['doc'] }} Capabilities ============ The list |aggregation methods| that can be used in Gnocchi are extendable and can differ between deployments. It is possible to get the supported list of |aggregation methods| from the API server: {{ scenarios['get-capabilities']['doc'] }} Status ====== The overall status of the Gnocchi installation can be retrieved via an API call reporting values such as the number of new |measures| to process for each |metric|: {{ scenarios['get-status']['doc'] }} .. _timestamp-format: Timestamp format ================ Timestamps used in Gnocchi are always returned using the ISO 8601 format. Gnocchi is able to understand a few formats of timestamp when querying or creating |resources|, for example - "2014-01-01 12:12:34" or "2014-05-20T10:00:45.856219", ISO 8601 timestamps. - "10 minutes", which means "10 minutes from now". - "-2 days", which means "2 days ago". - 1421767030, a Unix epoch based timestamp. .. include:: include/term-substitution.rst gnocchi-4.2.5/doc/source/statsd.rst0000664000372000037200000000374613310505170020104 0ustar travistravis00000000000000=================== Statsd Daemon Usage =================== What Is It? =========== `Statsd`_ is a network daemon that listens for statistics sent over the network using TCP or UDP, and then sends |aggregates| to another backend. Gnocchi provides a daemon that is compatible with the statsd protocol and can listen to |metrics| sent over the network, named `gnocchi-statsd`. .. _`Statsd`: https://github.com/etsy/statsd/ How Does It Work? ================= In order to enable statsd support in Gnocchi, you need to configure the `[statsd]` option group in the configuration file. You need to provide a |resource| ID that will be used as the main generic |resource| where all the |metrics| will be attached, a user and project id that will be associated with the |resource| and |metrics|, and an |archive policy| name that will be used to create the |metrics|. All the |metrics| will be created dynamically as the |metrics| are sent to `gnocchi-statsd`, and attached with the provided name to the |resource| ID you configured. The `gnocchi-statsd` may be scaled, but trade-offs have been made due to the nature of the statsd protocol. That means that if you use |metrics| of type `counter`_ or sampling (`c` in the protocol), you should always send those |metrics| to the same daemon – or not use them at all. The other supported types (`timing`_ and `gauges`_) do not suffer this limitation, but be aware that you might have more |measures| than expected if you send the same |metric| to different `gnocchi-statsd` servers, as neither their cache nor their flush delay are synchronized. .. _`counter`: https://github.com/etsy/statsd/blob/master/docs/metric_types.md#counting .. _`timing`: https://github.com/etsy/statsd/blob/master/docs/metric_types.md#timing .. _`gauges`: https://github.com/etsy/statsd/blob/master/docs/metric_types.md#gauges .. note :: The statsd protocol support is incomplete: relative gauge values with +/- and sets are not supported yet. .. include:: include/term-substitution.rst gnocchi-4.2.5/doc/source/include/0000775000372000037200000000000013310505365017467 5ustar travistravis00000000000000gnocchi-4.2.5/doc/source/include/term-substitution.rst0000664000372000037200000000167113310505170023741 0ustar travistravis00000000000000.. |resource| replace:: :term:`resource` .. |resources| replace:: :term:`resources` .. |metric| replace:: :term:`metric` .. 
|metrics| replace:: :term:`metrics` .. |measure| replace:: :term:`measure` .. |measures| replace:: :term:`measures` .. |archive policy| replace:: :term:`archive policy` .. |archive policies| replace:: :term:`archive policies` .. |granularity| replace:: :term:`granularity` .. |granularities| replace:: :term:`granularities` .. |time series| replace:: :term:`time series