redis_exporter-1.69.0/000077500000000000000000000000001476520031400146475ustar00rootroot00000000000000redis_exporter-1.69.0/.github/000077500000000000000000000000001476520031400162075ustar00rootroot00000000000000redis_exporter-1.69.0/.github/ISSUE_TEMPLATE/000077500000000000000000000000001476520031400203725ustar00rootroot00000000000000redis_exporter-1.69.0/.github/ISSUE_TEMPLATE/bug_report.md000066400000000000000000000017161476520031400230700ustar00rootroot00000000000000--- name: Bug report about: File a bug report title: '' labels: bug assignees: oliver006 --- **Describe the problem** A clear and concise description of what the bug is. **What version of redis_exporter are you running?** Please run `redis_exporter --version` if you're not sure what version you're running. [ ] 0.3x.x [ ] 1.x.x **Running the exporter** What's the full command you're using to run the exporter? (please remove passwords and other sensitive data) **Expected behavior** What metrics are missing? What metrics are wrong? Is something missing that was present in an earlier version? Did you upgrade from 0.3x.x to 1.0 and are scraping multiple hosts? [Have a look here](https://github.com/oliver006/redis_exporter#prometheus-configuration-to-scrape-multiple-redis-hosts) to see how the configuration changed. **Screenshots** If applicable, add screenshots to help explain your problem. **Additional context** Add any other context about the problem here. redis_exporter-1.69.0/.github/ISSUE_TEMPLATE/config.yml000066400000000000000000000000341476520031400223570ustar00rootroot00000000000000blank_issues_enabled: false redis_exporter-1.69.0/.github/ISSUE_TEMPLATE/question.md000066400000000000000000000013251476520031400225640ustar00rootroot00000000000000--- name: Question about: Ask a question title: '' labels: question assignees: oliver006 --- **Describe the problem** A clear and concise description of what the question is. **What version of redis_exporter are you running?** Please run `redis_exporter --version` if you're not sure what version you're running. [ ] 0.3x.x [ ] 1.x.x **Running the exporter** What's the full command you're using to run the exporter? (please remove passwords and other sensitive data) Please include details about env variables, command line parameters, your orchestration setup, etc. **Screenshots** If applicable, add screenshots to help explain your question. **Additional context** Add any other context about the question here. 
redis_exporter-1.69.0/.github/dependabot.yml000066400000000000000000000005771476520031400210500ustar00rootroot00000000000000version: 2 updates: # https://docs.github.com/en/github/administering-a-repository/configuration-options-for-dependency-updates - package-ecosystem: "gomod" directory: "/" schedule: interval: "weekly" open-pull-requests-limit: 10 - package-ecosystem: "github-actions" directory: "/" schedule: interval: "weekly" open-pull-requests-limit: 10 redis_exporter-1.69.0/.github/renovate.json000066400000000000000000000002061476520031400207230ustar00rootroot00000000000000{ "ignorePaths": [ ".drone.yml" ], "packageRules": [ { "matchPackageNames": ["redis"], "enabled": false } ] } redis_exporter-1.69.0/.github/workflows/000077500000000000000000000000001476520031400202445ustar00rootroot00000000000000redis_exporter-1.69.0/.github/workflows/codeql-analysis.yml000066400000000000000000000035231476520031400240620ustar00rootroot00000000000000name: "Code scanning - action" on: push: branches: [master, ] pull_request: schedule: - cron: '0 15 * * 5' permissions: contents: read jobs: CodeQL-Build: permissions: actions: read # for github/codeql-action/init to get workflow details contents: read # for actions/checkout to fetch code security-events: write # for github/codeql-action/autobuild to send a status report runs-on: ubuntu-latest steps: - name: Checkout repository uses: actions/checkout@v4 with: # We must fetch at least the immediate parents so that if this is # a pull request then we can checkout the head. fetch-depth: 2 # If this run was triggered by a pull request event, then checkout # the head of the pull request instead of the merge commit. - run: git checkout HEAD^2 if: ${{ github.event_name == 'pull_request' }} # Initializes the CodeQL tools for scanning. - name: Initialize CodeQL uses: github/codeql-action/init@v3 # Override language selection by uncommenting this and choosing your languages # with: # languages: go, javascript, csharp, python, cpp, java # Autobuild attempts to build any compiled languages (C/C++, C#, or Java). # If this step fails, then you should remove it and run the build manually (see below) - name: Autobuild uses: github/codeql-action/autobuild@v3 # â„šī¸ Command-line programs to run using the OS shell. 
# 📚 https://git.io/JvXDl # âœī¸ If the Autobuild fails above, remove it and uncomment the following three lines # and modify them (or add more) to build your code if your project # uses a compiled language #- run: | # make bootstrap # make release - name: Perform CodeQL Analysis uses: github/codeql-action/analyze@v3 redis_exporter-1.69.0/.github/workflows/depsreview.yaml000066400000000000000000000004501476520031400233040ustar00rootroot00000000000000name: 'Dependency Review' on: [pull_request] permissions: contents: read jobs: dependency-review: runs-on: ubuntu-latest steps: - name: 'Checkout Repository' uses: actions/checkout@v4 - name: 'Dependency Review' uses: actions/dependency-review-action@v4 redis_exporter-1.69.0/.github/workflows/release.yml000066400000000000000000000057721476520031400224220ustar00rootroot00000000000000name: Release on: push: tags: - 'v*' jobs: release-binaries: runs-on: ubuntu-latest steps: - name: Checkout code uses: actions/checkout@v4 - name: Setup Go uses: actions/setup-go@v5 with: go-version: '1.24' - name: Build binaries run: | make build-all-binaries ls -la ls -la .build/ ./package-github-binaries.sh ls -la dist/ - name: Add binaries to release uses: ncipollo/release-action@v1 with: artifacts: "dist/*" allowUpdates: true omitBodyDuringUpdate: true build-and-push-docker-images: runs-on: ubuntu-latest permissions: contents: read packages: write attestations: write id-token: write steps: - name: Checkout code uses: actions/checkout@v4 - name: Set up QEMU uses: docker/setup-qemu-action@v3 - name: Set up Docker Buildx uses: docker/setup-buildx-action@v3 - name: Login to Docker Hub uses: docker/login-action@v3 with: username: ${{ secrets.DOCKERHUB_USERNAME }} password: ${{ secrets.DOCKERHUB_TOKEN }} - name: Login to ghcr.io uses: docker/login-action@v3 with: registry: ghcr.io username: ${{ github.repository_owner }} password: ${{ secrets.GITHUB_TOKEN }} - name: Login to quay.io uses: docker/login-action@v3 with: registry: quay.io username: ${{ secrets.QUAY_USERNAME }} password: ${{ secrets.QUAY_TOKEN }} - name: Docker meta id: meta uses: docker/metadata-action@v5 with: # list of Docker images to use as base name for tags images: | oliver006/redis_exporter ghcr.io/oliver006/redis_exporter quay.io/oliver006/redis_exporter - name: Build and push scratch image uses: docker/build-push-action@v6 with: context: . target: scratch-release platforms: linux/amd64,linux/arm,linux/arm64 push: true tags: ${{ steps.meta.outputs.tags }} labels: ${{ steps.meta.outputs.labels }} build-args: | TAG=${{ github.ref_name }} SHA1=${{ github.sha }} - name: Build and push alpine image uses: docker/build-push-action@v6 with: context: . 
target: alpine platforms: linux/amd64,linux/arm,linux/arm64 push: true tags: oliver006/redis_exporter:${{ github.ref_name }}-alpine,ghcr.io/oliver006/redis_exporter:${{ github.ref_name }}-alpine,quay.io/oliver006/redis_exporter:${{ github.ref_name }}-alpine,oliver006/redis_exporter:alpine,ghcr.io/oliver006/redis_exporter:alpine,quay.io/oliver006/redis_exporter:alpine labels: ${{ steps.meta.outputs.labels }} build-args: | TAG=${{ github.ref_name }} SHA1=${{ github.sha }} redis_exporter-1.69.0/.github/workflows/tests.yml000066400000000000000000000064371476520031400221430ustar00rootroot00000000000000name: Tests on: pull_request: push: branches: - master - "v*" jobs: test-stuff: runs-on: ubuntu-latest steps: - name: Checkout code uses: actions/checkout@v4 - name: Set up Docker uses: docker/setup-buildx-action@v3 - name: Set up Docker Compose run: sudo apt-get install docker-compose # need to do this before we start the services as they need the TLS creds - name: Create test certs for TLS run: | make test-certs chmod 777 ./contrib/tls/* - name: Start services run: docker-compose up -d working-directory: ./ - name: Setup Go uses: actions/setup-go@v5 with: go-version: '1.24' - name: Install Dependencies run: go mod tidy - name: Docker logs run: | echo "${{ toJson(job) }}" docker ps -a echo "ok" - name: Run tests env: LOG_LEVEL: "info" run: | sleep 15 make test - name: Run tests - valkey 8 env: LOG_LEVEL: "info" TEST_REDIS_URI: "redis://localhost:16382" TEST_VALKEY8_TLS_URI: "valkeys://localhost:16386" TEST_PWD_REDIS_URI: "redis://:redis-password@localhost:16380" run: | go test -v -race -p 1 ./... - name: Upload coverage to Codecov uses: codecov/codecov-action@v5 with: fail_ci_if_error: true files: ./coverage.txt token: ${{ secrets.CODECOV_TOKEN }} # required verbose: true - name: Upload coverage to Coveralls uses: coverallsapp/github-action@v2 with: file: coverage.txt - name: Stop services run: docker-compose down working-directory: ./ lint-stuff: runs-on: ubuntu-latest steps: - name: Checkout code uses: actions/checkout@v4 - name: Setup Go uses: actions/setup-go@v5 with: go-version: '1.24' - name: Install Dependencies run: go mod tidy - name: golangci-lint uses: golangci/golangci-lint-action@v6 with: version: v1.64 args: "--tests=false" - name: Run checks env: LOG_LEVEL: "info" run: | make checks build-stuff: runs-on: ubuntu-latest steps: - name: Checkout code uses: actions/checkout@v4 - name: Setup Go uses: actions/setup-go@v5 with: go-version: '1.24' - name: Install Dependencies run: go mod tidy - name: Build some binaries run: make build-some-amd64-binaries - name: Generate mixin run: make mixin - name: Set up Docker Buildx uses: docker/setup-buildx-action@v3 - name: Test Docker Image Build - Alpine uses: docker/build-push-action@v6 with: push: false target: alpine tags: user/app:tst file: Dockerfile build-args: "GOARCH=amd64" - name: Test Docker Image Build - Scratch uses: docker/build-push-action@v6 with: push: false target: scratch-release tags: user/app:tst file: Dockerfile build-args: "GOARCH=amd64" redis_exporter-1.69.0/.gitignore000066400000000000000000000003051476520031400166350ustar00rootroot00000000000000redis_exporter coverage.out coverage.txt dist/ pkg/ src/ .build/ .DS_Store .idea .vscode/ *.rdb contrib/tls/ca.crt contrib/tls/ca.key contrib/tls/ca.txt contrib/tls/redis.crt contrib/tls/redis.key redis_exporter-1.69.0/Dockerfile000066400000000000000000000025541476520031400166470ustar00rootroot00000000000000ARG TARGETPLATFORM # # build container # FROM --platform=$BUILDPLATFORM 
golang:1.24-alpine AS builder WORKDIR /go/src/github.com/oliver006/redis_exporter/ ADD . /go/src/github.com/oliver006/redis_exporter/ ARG SHA1="[no-sha]" ARG TAG="[no-tag]" ARG TARGETOS ARG TARGETARCH #RUN printf "nameserver 1.1.1.1\nnameserver 8.8.8.8"> /etc/resolv.conf \ && apk --no-cache add ca-certificates git RUN apk --no-cache add ca-certificates git RUN BUILD_DATE=$(date +%F-%T) CGO_ENABLED=0 GOOS=${TARGETOS} GOARCH=${TARGETARCH} go build -o /redis_exporter \ -ldflags "-s -w -extldflags \"-static\" -X main.BuildVersion=$TAG -X main.BuildCommitSha=$SHA1 -X main.BuildDate=$BUILD_DATE" . RUN [ "$TARGETARCH" = "amd64" ] && /redis_exporter -version || ls -la /redis_exporter # # scratch release container # FROM scratch AS scratch-release COPY --from=builder /redis_exporter /redis_exporter COPY --from=builder /etc/ssl/certs /etc/ssl/certs COPY --from=builder /etc/nsswitch.conf /etc/nsswitch.conf # Run as non-root user for secure environments USER 59000:59000 EXPOSE 9121 ENTRYPOINT [ "/redis_exporter" ] # # Alpine release container # FROM alpine:3.21 AS alpine COPY --from=builder /redis_exporter /redis_exporter COPY --from=builder /etc/ssl/certs /etc/ssl/certs # Run as non-root user for secure environments USER 59000:59000 EXPOSE 9121 ENTRYPOINT [ "/redis_exporter" ] redis_exporter-1.69.0/LICENSE000066400000000000000000000020471476520031400156570ustar00rootroot00000000000000MIT License Copyright (c) 2016 Oliver Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. redis_exporter-1.69.0/Makefile000066400000000000000000000073761476520031400163240ustar00rootroot00000000000000.DEFAULT_GOAL := build DOCKER_COMPOSE := $(if $(shell which docker-compose),docker-compose,docker compose) .PHONY: build build: go build . 
.PHONY: docker-all docker-all: docker-env-up docker-test docker-env-down .PHONY: docker-env-up docker-env-up: $(DOCKER_COMPOSE) -f docker-compose.yml up -d .PHONY: docker-env-down docker-env-down: $(DOCKER_COMPOSE) -f docker-compose.yml down .PHONY: docker-test docker-test: $(DOCKER_COMPOSE) -f docker-compose.yml up -d $(DOCKER_COMPOSE) -f docker-compose.yml run --rm tests bash -c 'make test' .PHONY: test-certs test-certs: contrib/tls/gen-test-certs.sh .PHONY: test test: TEST_VALKEY7_URI="valkey://localhost:16384" \ TEST_VALKEY8_URI="valkey://localhost:16382" \ TEST_VALKEY8_TLS_URI="valkeys://localhost:16386" \ TEST_REDIS7_TLS_URI="rediss://localhost:16387" \ TEST_REDIS_URI="redis://localhost:16385" \ TEST_REDIS7_URI="redis://localhost:16385" \ TEST_REDIS5_URI="redis://localhost:16383" \ TEST_REDIS6_URI="redis://localhost:16379" \ TEST_REDIS_2_8_URI="redis://localhost:16381" \ TEST_KEYDB01_URI="redis://localhost:16401" \ TEST_KEYDB02_URI="redis://localhost:16402" \ TEST_PWD_REDIS_URI="redis://:redis-password@localhost:16380" \ TEST_USER_PWD_REDIS_URI="redis://exporter:exporter-password@localhost:16390" \ TEST_REDIS_CLUSTER_MASTER_URI="redis://localhost:17000" \ TEST_REDIS_CLUSTER_SLAVE_URI="redis://localhost:17005" \ TEST_REDIS_CLUSTER_PASSWORD_URI="redis://localhost:17006" \ TEST_TILE38_URI="redis://localhost:19851" \ TEST_REDIS_SENTINEL_URI="redis://localhost:26379" \ TEST_REDIS_MODULES_URI="redis://localhost:36379" \ go test -v -covermode=atomic -cover -race -coverprofile=coverage.txt -p 1 ./... .PHONY: lint lint: # # this will run the default linters on non-test files # and then all but the "errcheck" linters on the tests golangci-lint run --tests=false --exclude-use-default golangci-lint run -D=errcheck --exclude-use-default .PHONY: checks checks: go vet ./... echo "checking gofmt" @if [ "$(shell gofmt -e -l . 
| wc -l)" -ne 0 ]; then exit 1; fi echo "checking gofmt - DONE" .PHONY: mixin mixin: cd contrib/redis-mixin && \ $(MAKE) all && \ cd ../../ BUILD_DT:=$(shell date +%F-%T) GO_LDFLAGS:="-s -w -extldflags \"-static\" -X main.BuildVersion=${GITHUB_REF_NAME} -X main.BuildCommitSha=${GITHUB_SHA} -X main.BuildDate=$(BUILD_DT)" .PHONE: build-some-amd64-binaries build-some-amd64-binaries: go install github.com/oliver006/gox@master rm -rf .build | true export CGO_ENABLED=0 ; \ gox -os="linux windows" -arch="amd64" -verbose -rebuild -ldflags $(GO_LDFLAGS) -output ".build/redis_exporter-${GITHUB_REF_NAME}.{{.OS}}-{{.Arch}}/{{.Dir}}" && echo "done" .PHONE: build-all-binaries build-all-binaries: go install github.com/oliver006/gox@master rm -rf .build | true export CGO_ENABLED=0 ; \ gox -os="linux windows freebsd netbsd openbsd" -arch="amd64 386" -verbose -rebuild -ldflags $(GO_LDFLAGS) -output ".build/redis_exporter-${GITHUB_REF_NAME}.{{.OS}}-{{.Arch}}/{{.Dir}}" && \ gox -os="darwin solaris illumos" -arch="amd64" -verbose -rebuild -ldflags $(GO_LDFLAGS) -output ".build/redis_exporter-${GITHUB_REF_NAME}.{{.OS}}-{{.Arch}}/{{.Dir}}" && \ gox -os="darwin" -arch="arm64" -verbose -rebuild -ldflags $(GO_LDFLAGS) -output ".build/redis_exporter-${GITHUB_REF_NAME}.{{.OS}}-{{.Arch}}/{{.Dir}}" && \ gox -os="linux freebsd netbsd" -arch="arm" -verbose -rebuild -ldflags $(GO_LDFLAGS) -output ".build/redis_exporter-${GITHUB_REF_NAME}.{{.OS}}-{{.Arch}}/{{.Dir}}" && \ gox -os="linux" -arch="arm64 mips64 mips64le ppc64 ppc64le s390x" -verbose -rebuild -ldflags $(GO_LDFLAGS) -output ".build/redis_exporter-${GITHUB_REF_NAME}.{{.OS}}-{{.Arch}}/{{.Dir}}" && \ echo "done" redis_exporter-1.69.0/README.md000066400000000000000000001441471476520031400161410ustar00rootroot00000000000000# Prometheus ValKey & Redis Metrics Exporter [![Tests](https://github.com/oliver006/redis_exporter/actions/workflows/tests.yml/badge.svg)](https://github.com/oliver006/redis_exporter/actions/workflows/tests.yml) [![Coverage Status](https://coveralls.io/repos/github/oliver006/redis_exporter/badge.svg?branch=master)](https://coveralls.io/github/oliver006/redis_exporter?branch=master) [![codecov](https://codecov.io/gh/oliver006/redis_exporter/branch/master/graph/badge.svg)](https://codecov.io/gh/oliver006/redis_exporter) [![docker_pulls](https://img.shields.io/docker/pulls/oliver006/redis_exporter.svg)](https://img.shields.io/docker/pulls/oliver006/redis_exporter.svg) [![Stand With Ukraine](https://raw.githubusercontent.com/vshymanskyy/StandWithUkraine/main/badges/StandWithUkraine.svg)](https://stand-with-ukraine.pp.ua) Prometheus exporter for ValKey metrics (Redis-compatible).\ Supports ValKey and Redis 2.x, 3.x, 4.x, 5.x, 6.x, and 7.x #### Ukraine is still suffering from Russian aggression, [please consider supporting Ukraine with a donation](https://www.supportukraine.co/). [![Stand With Ukraine](https://raw.githubusercontent.com/vshymanskyy/StandWithUkraine/main/banner2-direct.svg)](https://stand-with-ukraine.pp.ua) ## Building and running the exporter ### Build and run locally ```sh git clone https://github.com/oliver006/redis_exporter.git cd redis_exporter go build . ./redis_exporter --version ``` ### Pre-build binaries For pre-built binaries please take a look at [the releases](https://github.com/oliver006/redis_exporter/releases). 
### Basic Prometheus Configuration

Add a block to the `scrape_configs` of your prometheus.yml config file:

```yaml
scrape_configs:
  - job_name: redis_exporter
    static_configs:
      - targets: ['<<REDIS-EXPORTER-HOSTNAME>>:9121']
```

and adjust the host name accordingly.

### Kubernetes SD configurations

To have instances in the drop-down as human readable names rather than IPs, it is suggested to use [instance relabelling](https://www.robustperception.io/controlling-the-instance-label).

For example, if the metrics are being scraped via the pod role, one could add:

```yaml
- source_labels: [__meta_kubernetes_pod_name]
  action: replace
  target_label: instance
  regex: (.*redis.*)
```

as a relabel config to the corresponding scrape config. As per the regex value, only pods with "redis" in their name will be relabelled as such.

Similar approaches can be taken with [other role types](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#kubernetes_sd_config) depending on how scrape targets are retrieved.

### Prometheus Configuration to Scrape Multiple Redis Hosts

The Prometheus docs have a [very informative article](https://prometheus.io/docs/guides/multi-target-exporter/) on how multi-target exporters are intended to work.

Run the exporter with the command line flag `--redis.addr=` so it won't try to access the local instance every time the `/metrics` endpoint is scraped. With the config below, Prometheus will query the exporter's `/scrape` endpoint instead of the `/metrics` endpoint. As an example, the first target will be queried with this web request:

http://exporterhost:9121/scrape?target=first-redis-host:6379

```yaml
scrape_configs:
  ## config for the multiple Redis targets that the exporter will scrape
  - job_name: 'redis_exporter_targets'
    static_configs:
      - targets:
        - redis://first-redis-host:6379
        - redis://second-redis-host:6379
        - redis://second-redis-host:6380
        - redis://second-redis-host:6381
    metrics_path: /scrape
    relabel_configs:
      - source_labels: [__address__]
        target_label: __param_target
      - source_labels: [__param_target]
        target_label: instance
      - target_label: __address__
        replacement: <<REDIS-EXPORTER-HOSTNAME>>:9121

  ## config for scraping the exporter itself
  - job_name: 'redis_exporter'
    static_configs:
      - targets:
        - <<REDIS-EXPORTER-HOSTNAME>>:9121
```

The Redis instances are listed under `targets`, the Redis exporter hostname is configured via the last relabel_config rule.\
If authentication is needed for the Redis instances then you can set the password via the `--redis.password` command line option of the exporter (this means you can currently only use one password across the instances you try to scrape this way. Use several exporters if this is a problem).\
You can also use a json file to supply multiple targets by using `file_sd_configs` like so:

```yaml
scrape_configs:
  - job_name: 'redis_exporter_targets'
    file_sd_configs:
      - files:
        - targets-redis-instances.json
    metrics_path: /scrape
    relabel_configs:
      - source_labels: [__address__]
        target_label: __param_target
      - source_labels: [__param_target]
        target_label: instance
      - target_label: __address__
        replacement: <<REDIS-EXPORTER-HOSTNAME>>:9121

  ## config for scraping the exporter itself
  - job_name: 'redis_exporter'
    static_configs:
      - targets:
        - <<REDIS-EXPORTER-HOSTNAME>>:9121
```

The `targets-redis-instances.json` should look something like this:

```json
[
  {
    "targets": [ "redis://redis-host-01:6379", "redis://redis-host-02:6379"],
    "labels": { }
  }
]
```

Prometheus uses file watches and all changes to the json file are applied immediately.
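
Before wiring this into Prometheus, you can exercise the `/scrape` endpoint by hand. This is a quick sketch assuming the exporter runs on `exporterhost:9121` as in the example above (host names are placeholders):

```sh
# Ask the exporter to scrape one specific Redis instance and print its metrics
curl 'http://exporterhost:9121/scrape?target=redis://first-redis-host:6379'
```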
### Command line flags

| Name | Environment Variable Name | Description |
|------|---------------------------|-------------|
| redis.addr | REDIS_ADDR | Address of the Redis instance, defaults to `redis://localhost:6379`. If TLS is enabled, the address must be like the following `rediss://localhost:6379` |
| redis.user | REDIS_USER | User name to use for authentication (Redis ACL for Redis 6.0 and newer). |
| redis.password | REDIS_PASSWORD | Password of the Redis instance, defaults to `""` (no password). |
| redis.password-file | REDIS_PASSWORD_FILE | Password file of the Redis instance to scrape, defaults to `""` (no password file). |
| check-keys | REDIS_EXPORTER_CHECK_KEYS | Comma separated list of key patterns to export value and length/size, eg: `db3=user_count` will export key `user_count` from db `3`. db defaults to `0` if omitted. The key patterns specified with this flag will be found using [SCAN](https://redis.io/commands/scan). Use this option if you need glob pattern matching; `check-single-keys` is faster for non-pattern keys. Warning: using `--check-keys` to match a very large number of keys can slow down the exporter to the point where it doesn't finish scraping the redis instance. --check-keys doesn't work in cluster mode as "SCAN" does not work across multiple instances. |
| check-single-keys | REDIS_EXPORTER_CHECK_SINGLE_KEYS | Comma separated list of keys to export value and length/size, eg: `db3=user_count` will export key `user_count` from db `3`. db defaults to `0` if omitted. The keys specified with this flag will be looked up directly without any glob pattern matching. Use this option if you don't need glob pattern matching; it is faster than `check-keys`. |
| check-streams | REDIS_EXPORTER_CHECK_STREAMS | Comma separated list of stream-patterns to export info about streams, groups and consumers. Syntax is the same as `check-keys`. |
| check-single-streams | REDIS_EXPORTER_CHECK_SINGLE_STREAMS | Comma separated list of streams to export info about streams, groups and consumers. The streams specified with this flag will be looked up directly without any glob pattern matching. Use this option if you don't need glob pattern matching; it is faster than `check-streams`. |
| streams-exclude-consumer-metrics | REDIS_EXPORTER_STREAMS_EXCLUDE_CONSUMER_METRICS | Don't collect per consumer metrics for streams (decreases amount of metrics and cardinality). |
| check-keys-batch-size | REDIS_EXPORTER_CHECK_KEYS_BATCH_SIZE | Approximate number of keys to process in each execution. This is basically the COUNT option that will be passed into the SCAN command as part of the execution of the key or key group metrics, see [COUNT option](https://redis.io/commands/scan#the-count-option). A larger value speeds up scanning, but keep in mind that Redis is single-threaded, so a huge `COUNT` can affect a production environment. |
| count-keys | REDIS_EXPORTER_COUNT_KEYS | Comma separated list of patterns to count, eg: `db3=sessions:*` will count all keys with prefix `sessions:` from db `3`. db defaults to `0` if omitted. Warning: The exporter runs SCAN to count the keys. This might not perform well on large databases. |
| script | REDIS_EXPORTER_SCRIPT | Comma separated list of path(s) to Redis Lua script(s) for gathering extra metrics. |
| debug | REDIS_EXPORTER_DEBUG | Verbose debug output |
| log-format | REDIS_EXPORTER_LOG_FORMAT | Log format, valid options are `txt` (default) and `json`. |
| namespace | REDIS_EXPORTER_NAMESPACE | Namespace for the metrics, defaults to `redis`. |
| connection-timeout | REDIS_EXPORTER_CONNECTION_TIMEOUT | Timeout for connection to Redis instance, defaults to "15s" (in Golang duration format) |
| web.listen-address | REDIS_EXPORTER_WEB_LISTEN_ADDRESS | Address to listen on for web interface and telemetry, defaults to `0.0.0.0:9121`. |
| web.telemetry-path | REDIS_EXPORTER_WEB_TELEMETRY_PATH | Path under which to expose metrics, defaults to `/metrics`. |
| redis-only-metrics | REDIS_EXPORTER_REDIS_ONLY_METRICS | Whether to also export go runtime metrics, defaults to false. |
| include-config-metrics | REDIS_EXPORTER_INCL_CONFIG_METRICS | Whether to include all config settings as metrics, defaults to false. |
| include-system-metrics | REDIS_EXPORTER_INCL_SYSTEM_METRICS | Whether to include system metrics like `total_system_memory_bytes`, defaults to false. |
| include-modules-metrics | REDIS_EXPORTER_INCL_MODULES_METRICS | Whether to collect Redis Modules metrics, defaults to false. |
| exclude-latency-histogram-metrics | REDIS_EXPORTER_EXCLUDE_LATENCY_HISTOGRAM_METRICS | Do not try to collect latency histogram metrics (to avoid `WARNING, LOGGED ONCE ONLY: cmd LATENCY HISTOGRAM` error on Redis < v7). |
| redact-config-metrics | REDIS_EXPORTER_REDACT_CONFIG_METRICS | Whether to redact config settings that include potentially sensitive information like passwords. |
| ping-on-connect | REDIS_EXPORTER_PING_ON_CONNECT | Whether to ping the redis instance after connecting and record the duration as a metric, defaults to false. |
| is-tile38 | REDIS_EXPORTER_IS_TILE38 | Whether to scrape Tile38 specific metrics, defaults to false. |
| is-cluster | REDIS_EXPORTER_IS_CLUSTER | Whether this is a redis cluster (Enable this if you need to fetch key level data on a Redis Cluster). |
| export-client-list | REDIS_EXPORTER_EXPORT_CLIENT_LIST | Whether to scrape Client List specific metrics, defaults to false. |
| export-client-port | REDIS_EXPORTER_EXPORT_CLIENT_PORT | Whether to include the client's port when exporting the client list. Warning: including the port increases the number of metrics generated and will make your Prometheus server take up more memory. |
| skip-tls-verification | REDIS_EXPORTER_SKIP_TLS_VERIFICATION | Whether to skip TLS verification when the exporter connects to a Redis instance |
| tls-client-key-file | REDIS_EXPORTER_TLS_CLIENT_KEY_FILE | Name of the client key file (including full path) if the server requires TLS client authentication |
| tls-client-cert-file | REDIS_EXPORTER_TLS_CLIENT_CERT_FILE | Name of the client cert file (including full path) if the server requires TLS client authentication |
| tls-server-key-file | REDIS_EXPORTER_TLS_SERVER_KEY_FILE | Name of the server key file (including full path) if the web interface and telemetry should use TLS |
| tls-server-cert-file | REDIS_EXPORTER_TLS_SERVER_CERT_FILE | Name of the server certificate file (including full path) if the web interface and telemetry should use TLS |
| tls-server-ca-cert-file | REDIS_EXPORTER_TLS_SERVER_CA_CERT_FILE | Name of the CA certificate file (including full path) if the web interface and telemetry should use TLS |
| tls-server-min-version | REDIS_EXPORTER_TLS_SERVER_MIN_VERSION | Minimum TLS version that is acceptable by the web interface and telemetry when using TLS, defaults to `TLS1.2` (supports `TLS1.0`,`TLS1.1`,`TLS1.2`,`TLS1.3`). |
| tls-ca-cert-file | REDIS_EXPORTER_TLS_CA_CERT_FILE | Name of the CA certificate file (including full path) if the server requires TLS client authentication |
| set-client-name | REDIS_EXPORTER_SET_CLIENT_NAME | Whether to set client name to redis_exporter, defaults to true. |
| check-key-groups | REDIS_EXPORTER_CHECK_KEY_GROUPS | Comma separated list of [LUA regexes](https://www.lua.org/pil/20.1.html) for classifying keys into groups. The regexes are applied in specified order to individual keys, and the group name is generated by concatenating all capture groups of the first regex that matches a key. A key will be tracked under the `unclassified` group if none of the specified regexes matches it. |
| max-distinct-key-groups | REDIS_EXPORTER_MAX_DISTINCT_KEY_GROUPS | Maximum number of distinct key groups that can be tracked independently *per Redis database*. If exceeded, only key groups with the highest memory consumption within the limit will be tracked separately, all remaining key groups will be tracked under a single `overflow` key group. |
| config-command | REDIS_EXPORTER_CONFIG_COMMAND | What to use for the CONFIG command, defaults to `CONFIG`; set to "-" to skip config metrics extraction. |
| basic-auth-username | REDIS_EXPORTER_BASIC_AUTH_USERNAME | Username for Basic Authentication with the redis exporter; needs to be set together with basic-auth-password to be effective. |
| basic-auth-password | REDIS_EXPORTER_BASIC_AUTH_PASSWORD | Password for Basic Authentication with the redis exporter; needs to be set together with basic-auth-username to be effective. |

Redis instance addresses can be tcp addresses: `redis://localhost:6379`, `redis.example.com:6379` or e.g. unix sockets: `unix:///tmp/redis.sock`.\
SSL is supported by using the `rediss://` schema, for example: `rediss://azure-ssl-enabled-host.redis.cache.windows.net:6380` (note that the port is required when connecting on a port other than the default 6379, e.g. with Azure Redis instances).\
Command line settings take precedence over any configurations provided by the environment variables.
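
To make the table above concrete, here is a hedged example invocation that combines a few of the documented flags; the host name, key name, and password file are placeholders, not recommendations:

```sh
./redis_exporter \
  --redis.addr=redis://my-redis-host:6379 \
  --redis.password-file=contrib/sample-pwd-file.json \
  --check-single-keys=db0=user_count \
  --web.listen-address=0.0.0.0:9121
```

Since command line settings take precedence, the same setup could also be expressed via the environment variables from the table, e.g. `REDIS_ADDR=redis://my-redis-host:6379 ./redis_exporter`.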
### Authenticating with Redis

If your Redis instance requires authentication then there are several ways you can supply a username (new in Redis 6.x with ACLs) and a password.

You can provide the username and password as part of the address, see [here](https://www.iana.org/assignments/uri-schemes/prov/redis) for the official documentation of the `redis://` scheme.

You can set `-redis.password-file=sample-pwd-file.json` to specify a password file. It's used whenever the exporter connects to a Redis instance, no matter if you're using the `/scrape` endpoint for multiple instances or the normal `/metrics` endpoint when scraping just one instance. It only takes effect when `redis.password == ""`. See [contrib/sample-pwd-file.json](contrib/sample-pwd-file.json) for a working example, and make sure to always include the `redis://` in your password file entries (a minimal sketch of the file format follows at the end of this chapter).

An example for a URI including a password is: `redis://<<username>>:<<password>>@<<hostname>>:<<port>>`

Alternatively, you can provide the username and/or password using the `--redis.user` and `--redis.password` flags directly to the redis_exporter.

If you want to use a dedicated Redis user for the redis_exporter (instead of the default user) then you need to enable a list of commands for that user. You can use the following Redis command to set up the user, just replace `<<username>>` and `<<password>>` with your desired values.

```
ACL SETUSER <<username>> -@all +@connection +memory -readonly +strlen +config|get +xinfo +pfcount -quit +zcard +type +xlen -readwrite -command +client -wait +scard +llen +hlen +get +eval +slowlog +cluster|info -hello -echo +info +latency +scan -reset -auth -asking ><<password>>
```

For monitoring a Sentinel node you may use the following command with the right ACL:

```
ACL SETUSER <<username>> -@all +@connection -command +client -hello +info -auth +sentinel|masters +sentinel|replicas +sentinel|slaves +sentinel|sentinels +sentinel|ckquorum ><<password>>
```

### Run via Docker

The latest release is automatically published to the [Docker registry](https://hub.docker.com/r/oliver006/redis_exporter/).

You can run it like this:

```sh
docker run -d --name redis_exporter -p 9121:9121 oliver006/redis_exporter
```

Docker images are also published to the [quay.io docker repo](https://quay.io/oliver006/redis_exporter) so you can pull them from there if, for instance, you run into rate limiting issues with Docker hub.

```sh
docker run -d --name redis_exporter -p 9121:9121 quay.io/oliver006/redis_exporter
```

The `latest` docker image contains only the exporter binary. If, e.g. for debugging purposes, you need the exporter running in an image that has a shell then you can run the `alpine` image:

```sh
docker run -d --name redis_exporter -p 9121:9121 oliver006/redis_exporter:alpine
```

If you try to access a Redis instance running on the host node, you'll need to add `--network host` so the redis_exporter container can access it:

```sh
docker run -d --name redis_exporter --network host oliver006/redis_exporter
```

### Run on Kubernetes

[Here](contrib/k8s-redis-and-exporter-deployment.yaml) is an example Kubernetes deployment configuration for how to deploy the redis_exporter as a sidecar to a Redis instance.

### Tile38

[Tile38](https://tile38.com) now has native Prometheus support for exporting server metrics and basic stats about number of objects, strings, etc. You can also use redis_exporter to export Tile38 metrics, especially more advanced metrics by using Lua scripts or the `-check-keys` flag.\
To enable Tile38 support, run the exporter with `--is-tile38=true`.
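
Returning to the password file mentioned in the authentication chapter above: the entries map `redis://` URIs to passwords. This is a minimal sketch of the expected shape (host names and passwords are placeholders; see [contrib/sample-pwd-file.json](contrib/sample-pwd-file.json) for the canonical example):

```json
{
  "redis://first-redis-host:6379": "first-password",
  "redis://second-redis-host:6379": "second-password"
}
```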
## What's exported

Most items from the INFO command are exported, see [Redis documentation](https://redis.io/commands/info) for details.\
In addition, for every database there are metrics for total keys, expiring keys and the average TTL for keys in the database.\
You can also export values of keys by using the `-check-keys` (or related) flag. The exporter will also export the size (or, depending on the data type, the length) of the key. This can be used to export the number of elements in (sorted) sets, hashes, lists, streams, etc. If a key is in string format and matches with `--check-keys` (or related) then its string value will be exported as a label in the `key_value_as_string` metric.

If you require custom metric collection, you can provide a comma separated list of path(s) to [Redis Lua script(s)](https://redis.io/commands/eval) using the `-script` flag. If you pass only one script, you can omit the comma. An example can be found [in the contrib folder](./contrib/sample_collect_script.lua).

### The redis_memory_max_bytes metric

The metric `redis_memory_max_bytes` will show the maximum number of bytes Redis can use.\
It is zero if no memory limit is set for the Redis instance you're scraping (this is the default setting for Redis).\
You can confirm that's the case by checking if the metric `redis_config_maxmemory` is zero or by connecting to the Redis instance via redis-cli and running the command `CONFIG GET MAXMEMORY`.

## What it looks like

Example [Grafana](http://grafana.org/) screenshots:

![redis_exporter_screen_01](https://cloud.githubusercontent.com/assets/1222339/19412031/897549c6-92da-11e6-84a0-b091f9deb81d.png)

![redis_exporter_screen_02](https://cloud.githubusercontent.com/assets/1222339/19412041/dee6d7bc-92da-11e6-84f8-610c025d6182.png)

The Grafana dashboard is available on [grafana.com](https://grafana.com/grafana/dashboards/763-redis-dashboard-for-prometheus-redis-exporter-1-x/) and/or [github.com](contrib/grafana_prometheus_redis_dashboard.json).

### Viewing multiple Redis simultaneously

If running [Redis Sentinel](https://redis.io/topics/sentinel), it may be desirable to view the metrics of the various cluster members simultaneously. For this reason the dashboard's drop down is of the multi-value type, allowing for the selection of multiple Redis instances. Please note that there is a caveat: the single stat panels up top, namely `uptime`, `total memory use` and `clients`, do not work when viewing multiple Redis instances.

## Using the mixin

There is a set of sample rules, alerts and dashboards available in [redis-mixin](contrib/redis-mixin/)

## Upgrading from 0.x to 1.x

[PR #256](https://github.com/oliver006/redis_exporter/pull/256) introduced breaking changes which were released as version v1.0.0.

If you only scrape one Redis instance and use command line flags `--redis.address` and `--redis.password` then you're most probably not affected. Otherwise, please see [PR #256](https://github.com/oliver006/redis_exporter/pull/256) and [this README](https://github.com/oliver006/redis_exporter#prometheus-configuration-to-scrape-multiple-redis-hosts) for more information.

## Memory Usage Aggregation by Key Groups

When a single Redis instance is used for multiple purposes, it is useful to be able to see how Redis memory is consumed among the different usage scenarios. This is particularly important when a Redis instance with no eviction policy is running low on memory, as we want to identify whether certain applications are misbehaving (e.g.
not deleting keys that are no longer in use) or the Redis instance needs to be scaled up to handle the increased resource demand. Fortunately, most applications using Redis will employ some sort of naming conventions for keys tied to their specific purpose, such as (hierarchical) namespace prefixes, which can be exploited by the check-keys, check-single-keys, and count-keys parameters of redis_exporter to surface the memory usage metrics of specific scenarios.

*Memory usage aggregation by key groups* takes this one step further by harnessing the flexibility of Redis LUA scripting support to classify all keys on a Redis instance into groups through a list of user-defined [LUA regular expressions](https://www.lua.org/pil/20.1.html) so memory usage metrics can be aggregated into readily identifiable groups.

To enable memory usage aggregation by key groups, simply specify a non-empty comma-separated list of LUA regular expressions through the `check-key-groups` redis_exporter parameter. On each aggregation of memory metrics by key groups, redis_exporter will set up a `SCAN` cursor through all keys for each Redis database to be processed in batches via a LUA script. Each key batch is then processed by the same LUA script on a key-by-key basis as follows:

1. The `MEMORY USAGE` command is called to gather memory usage for each key
2. The specified LUA regexes are applied to each key in the specified order, and the group name that a given key belongs to will be derived from concatenating the capture groups of the first regex that matches the key. For example, applying the regex `^(.*)_[^_]+$` to the key `key_exp_Nick` would yield a group name of `key_exp`. If none of the specified regexes matches a key, the key will be assigned to the `unclassified` group

Once a key has been classified, the memory usage and key counter for the corresponding group will be incremented in a local LUA table. This aggregated metrics table will then be returned alongside the next `SCAN` cursor position to redis_exporter when all keys in a batch have been processed, and redis_exporter can aggregate the data from all batches into a single table of grouped memory usage metrics for the Prometheus metrics scraper.

Besides making the full flexibility of LUA regexes available for classifying keys into groups, the LUA script also has the benefit of reducing network traffic by executing all `MEMORY USAGE` commands on the Redis server and returning aggregated data to redis_exporter in a far more compact format than key-level data. The use of a `SCAN` cursor over batches of keys processed by a server-side LUA script also helps prevent unbounded latency spikes in Redis's single processing thread, and the batch size can be tailored to specific environments via the `check-keys-batch-size` parameter.

Scanning the entire key space of a Redis instance may sound a little extravagant, but it takes only a single scan to classify all keys into groups, and on a moderately sized system with ~780K keys and a rather complex list of 17 regexes, it takes an average of ~5s to perform a full aggregation of memory usage by key groups. Of course, the actual performance for specific systems will vary widely depending on the total number of keys, the number and complexity of regexes used for classification, and the configured batch size.
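
As a concrete illustration of the mechanism described above, here is a hedged sketch of enabling key group aggregation; the regexes and numbers are placeholders, not recommendations:

```sh
./redis_exporter \
  --check-key-groups='^(session)_.*$,^(cache)_.*$' \
  --check-keys-batch-size=1000 \
  --max-distinct-key-groups=100
```

With these regexes, a key like `session_abc123` would be tracked under the `session` group, while keys matching neither regex would fall under `unclassified`.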
To protect Prometheus from being overwhelmed by a large number of time series resulting from a misconfigured group classification regular expression (e.g. applying the regular expression `^(.*)$`, where each key would be classified into its own distinct group), a limit on the number of distinct key groups *per Redis database* can be configured via the `max-distinct-key-groups` parameter. If the `max-distinct-key-groups` limit is exceeded, only the key groups with the highest memory usage within the limit will be tracked separately; all remaining key groups will be reported under a single `overflow` key group.

Here is a list of additional metrics that will be exposed when memory usage aggregation by key groups is enabled:

| Name | Labels | Description |
|------|--------|-------------|
| redis_key_group_count | db,key_group | Number of keys in a key group |
| redis_key_group_memory_usage_bytes | db,key_group | Memory usage by key group |
| redis_number_of_distinct_key_groups | db | Number of distinct key groups in a Redis database when the `overflow` group is fully expanded |
| redis_last_key_groups_scrape_duration_milliseconds | | Duration of the last memory usage aggregation by key groups in milliseconds |

### Script to collect Redis lists and respective sizes

If using a Redis version < 4.0, most of the helpful metrics which we need to gather based on length or memory are not available via the default redis_exporter. With the help of LUA scripts, we can gather these metrics. One of these scripts, [contrib/collect_lists_length_growing.lua](./contrib/collect_lists_length_growing.lua), will help to collect the lengths of redis lists. With these counts we can then, for example, create alerts or dashboards in Grafana or similar tools from the resulting Prometheus metrics.

## Development

The tests require a variety of real Redis instances to not only verify correctness of the exporter but also compatibility with older versions of Redis and with Redis-like systems like KeyDB or Tile38.\
The [docker-compose.yml](docker-compose.yml) file has service definitions for everything that's needed.\
You can bring up the Redis test instances first by running `make docker-env-up` and then, every time you want to run the tests, you can run `make docker-test`. This will mount the current directory (with the .go source files) into a docker container and kick off the tests.\
Once you're done testing you can bring down the stack by running `make docker-env-down`.\
Or you can bring up the stack, run the tests, and then tear down the stack, all in one shot, by running `make docker-all`.

***Note.** Test initialization can lead to unexpected results when using a persistent testing environment. When `make docker-env-up` is executed once and `make docker-test` is constantly run or stopped during execution, the number of keys in the database changes, which can lead to unexpected failures of tests. Use `make docker-env-down` periodically to clean up as a workaround.*

## Communal effort

Open an issue or PR if you have more suggestions, questions or ideas about what to add.
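
As a closing example for the Lua script section above, here is a hedged sketch of wiring up [contrib/collect_lists_length_growing.lua](./contrib/collect_lists_length_growing.lua) via the `-script` flag. Note that the script source contains the literal placeholders `DB_NO` and `KEYS_PATTERN`, which presumably need to be replaced with a concrete database number and key pattern before use:

```sh
# Substitute the placeholders in a copy of the script (values are examples)
sed 's/DB_NO/0/; s/KEYS_PATTERN/mylist:*/' \
  contrib/collect_lists_length_growing.lua > my_collect_lists.lua

./redis_exporter --script=./my_collect_lists.lua
```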
redis_exporter-1.69.0/contrib/000077500000000000000000000000001476520031400163075ustar00rootroot00000000000000redis_exporter-1.69.0/contrib/collect_lists_length_growing.lua000066400000000000000000000011761476520031400247570ustar00rootroot00000000000000local result = {} local function lengthOfList (key) return redis.call("LLEN", key) end redis.call("SELECT", DB_NO) local keysPresent = redis.call("KEYS", "KEYS_PATTERN") if keysPresent ~= nil then for _,key in ipairs(keysPresent) do --error catching and status=true for success calls local status, retval = pcall(lengthOfList, key) if status == true then local keyName = "redis_list_length_" .. key local keyValue = retval .. "" table.insert(result, keyName) -- store the keyname table.insert(result, keyValue) --store the bit count end end end return result redis_exporter-1.69.0/contrib/grafana_prometheus_redis_dashboard.json000066400000000000000000001165151476520031400262620ustar00rootroot00000000000000{ "__inputs": [ { "name": "DS_PROM", "label": "prom", "description": "", "type": "datasource", "pluginId": "prometheus", "pluginName": "Prometheus" } ], "__elements": {}, "__requires": [ { "type": "panel", "id": "gauge", "name": "Gauge", "version": "" }, { "type": "grafana", "id": "grafana", "name": "Grafana", "version": "10.3.3" }, { "type": "datasource", "id": "prometheus", "name": "Prometheus", "version": "1.0.0" }, { "type": "panel", "id": "stat", "name": "Stat", "version": "" }, { "type": "panel", "id": "timeseries", "name": "Time series", "version": "" } ], "annotations": { "list": [ { "builtIn": 1, "datasource": { "type": "datasource", "uid": "grafana" }, "enable": true, "hide": true, "iconColor": "rgba(0, 211, 255, 1)", "name": "Annotations & Alerts", "type": "dashboard" } ] }, "description": "Redis Dashboard for Prometheus Redis Exporter 1.x", "editable": true, "fiscalYearStartMonth": 0, "gnetId": 763, "graphTooltip": 1, "id": null, "links": [], "liveNow": false, "panels": [ { "datasource": { "type": "prometheus", "uid": "${DS_PROM}" }, "fieldConfig": { "defaults": { "color": { "fixedColor": "rgb(31, 120, 193)", "mode": "fixed" }, "decimals": 0, "mappings": [ { "options": { "match": "null", "result": { "text": "N/A" } }, "type": "special" } ], "thresholds": { "mode": "absolute", "steps": [ { "color": "green", "value": null }, { "color": "red", "value": 80 } ] }, "unit": "s", "unitScale": true }, "overrides": [] }, "gridPos": { "h": 7, "w": 3, "x": 0, "y": 0 }, "id": 9, "links": [], "maxDataPoints": 100, "options": { "colorMode": "none", "graphMode": "area", "justifyMode": "auto", "orientation": "horizontal", "reduceOptions": { "calcs": [ "lastNotNull" ], "fields": "", "values": false }, "showPercentChange": false, "textMode": "auto", "wideLayout": true }, "pluginVersion": "10.3.3", "targets": [ { "datasource": { "type": "prometheus", "uid": "${DS_PROM}" }, "expr": "max(max_over_time(redis_uptime_in_seconds{instance=~\"$instance\"}[$__interval]))", "format": "time_series", "interval": "", "intervalFactor": 2, "legendFormat": "", "metric": "", "refId": "A", "step": 1800 } ], "title": "Max Uptime", "type": "stat" }, { "datasource": { "type": "prometheus", "uid": "${DS_PROM}" }, "fieldConfig": { "defaults": { "color": { "fixedColor": "rgb(31, 120, 193)", "mode": "fixed" }, "decimals": 0, "mappings": [ { "options": { "match": "null", "result": { "text": "N/A" } }, "type": "special" } ], "thresholds": { "mode": "absolute", "steps": [ { "color": "green", "value": null }, { "color": "red", "value": 80 } ] }, "unit": "none", "unitScale": true }, 
"overrides": [] }, "gridPos": { "h": 7, "w": 2, "x": 3, "y": 0 }, "hideTimeOverride": true, "id": 12, "links": [], "maxDataPoints": 100, "options": { "colorMode": "none", "graphMode": "area", "justifyMode": "auto", "orientation": "horizontal", "reduceOptions": { "calcs": [ "lastNotNull" ], "fields": "", "values": false }, "showPercentChange": false, "textMode": "auto", "wideLayout": true }, "pluginVersion": "10.3.3", "targets": [ { "datasource": { "type": "prometheus", "uid": "${DS_PROM}" }, "expr": "sum(redis_connected_clients{instance=~\"$instance\"})", "format": "time_series", "intervalFactor": 2, "legendFormat": "", "metric": "", "refId": "A", "step": 2 } ], "timeFrom": "1m", "title": "Clients", "type": "stat" }, { "datasource": { "type": "prometheus", "uid": "${DS_PROM}" }, "fieldConfig": { "defaults": { "color": { "mode": "thresholds" }, "decimals": 0, "mappings": [ { "options": { "match": "null", "result": { "text": "N/A" } }, "type": "special" } ], "max": 100, "min": 0, "thresholds": { "mode": "absolute", "steps": [ { "color": "rgba(50, 172, 45, 0.97)", "value": null }, { "color": "rgba(237, 129, 40, 0.89)", "value": 80 }, { "color": "rgba(245, 54, 54, 0.9)", "value": 95 } ] }, "unit": "percent", "unitScale": true }, "overrides": [] }, "gridPos": { "h": 7, "w": 3, "x": 5, "y": 0 }, "hideTimeOverride": true, "id": 11, "links": [], "maxDataPoints": 100, "options": { "minVizHeight": 75, "minVizWidth": 75, "orientation": "horizontal", "reduceOptions": { "calcs": [ "lastNotNull" ], "fields": "", "values": false }, "showThresholdLabels": false, "showThresholdMarkers": true, "sizing": "auto" }, "pluginVersion": "10.3.3", "targets": [ { "datasource": { "type": "prometheus", "uid": "${DS_PROM}" }, "expr": "sum(100 * (redis_memory_used_bytes{instance=~\"$instance\"} / redis_memory_max_bytes{instance=~\"$instance\"}))", "format": "time_series", "intervalFactor": 2, "legendFormat": "", "metric": "", "refId": "A", "step": 2 } ], "timeFrom": "1m", "title": "Memory Usage", "type": "gauge" }, { "datasource": { "type": "prometheus", "uid": "${DS_PROM}" }, "fieldConfig": { "defaults": { "color": { "mode": "palette-classic" }, "custom": { "axisBorderShow": false, "axisCenteredZero": false, "axisColorMode": "text", "axisLabel": "", "axisPlacement": "auto", "barAlignment": 0, "drawStyle": "line", "fillOpacity": 80, "gradientMode": "none", "hideFrom": { "legend": false, "tooltip": false, "viz": false }, "insertNulls": false, "lineInterpolation": "linear", "lineWidth": 1, "pointSize": 5, "scaleDistribution": { "type": "linear" }, "showPoints": "never", "spanNulls": true, "stacking": { "group": "A", "mode": "normal" }, "thresholdsStyle": { "mode": "off" } }, "links": [], "mappings": [], "thresholds": { "mode": "absolute", "steps": [ { "color": "green", "value": null }, { "color": "red", "value": 80 } ] }, "unit": "short", "unitScale": true }, "overrides": [] }, "gridPos": { "h": 7, "w": 8, "x": 8, "y": 0 }, "id": 18, "links": [], "options": { "legend": { "calcs": [], "displayMode": "list", "placement": "bottom", "showLegend": false }, "tooltip": { "mode": "multi", "sort": "desc" } }, "pluginVersion": "10.3.3", "targets": [ { "datasource": { "type": "prometheus", "uid": "${DS_PROM}" }, "expr": "sum(rate(redis_commands_total{instance=~\"$instance\"} [1m])) by (cmd)", "format": "time_series", "interval": "", "intervalFactor": 2, "legendFormat": "{{ cmd }}", "metric": "redis_command_calls_total", "refId": "A", "step": 240 } ], "title": "Total Commands / sec", "type": "timeseries" }, { "datasource": { "type": 
"prometheus", "uid": "${DS_PROM}" }, "fieldConfig": { "defaults": { "color": { "mode": "palette-classic" }, "custom": { "axisBorderShow": false, "axisCenteredZero": false, "axisColorMode": "text", "axisLabel": "", "axisPlacement": "auto", "barAlignment": 0, "drawStyle": "line", "fillOpacity": 10, "gradientMode": "none", "hideFrom": { "legend": false, "tooltip": false, "viz": false }, "insertNulls": false, "lineInterpolation": "linear", "lineWidth": 2, "pointSize": 5, "scaleDistribution": { "type": "linear" }, "showPoints": "never", "spanNulls": true, "stacking": { "group": "A", "mode": "none" }, "thresholdsStyle": { "mode": "off" } }, "links": [], "mappings": [], "min": 0, "thresholds": { "mode": "absolute", "steps": [ { "color": "green", "value": null }, { "color": "red", "value": 80 } ] }, "unit": "short", "unitScale": true }, "overrides": [] }, "gridPos": { "h": 7, "w": 8, "x": 16, "y": 0 }, "id": 1, "links": [], "options": { "legend": { "calcs": [], "displayMode": "list", "placement": "bottom", "showLegend": false }, "tooltip": { "mode": "multi", "sort": "none" } }, "pluginVersion": "10.3.3", "targets": [ { "datasource": { "type": "prometheus", "uid": "${DS_PROM}" }, "expr": "irate(redis_keyspace_hits_total{instance=~\"$instance\"}[5m])", "format": "time_series", "hide": false, "interval": "", "intervalFactor": 2, "legendFormat": "hits, {{ instance }}", "metric": "", "refId": "A", "step": 240, "target": "" }, { "datasource": { "type": "prometheus", "uid": "${DS_PROM}" }, "expr": "irate(redis_keyspace_misses_total{instance=~\"$instance\"}[5m])", "format": "time_series", "hide": false, "interval": "", "intervalFactor": 2, "legendFormat": "misses, {{ instance }}", "metric": "", "refId": "B", "step": 240, "target": "" } ], "title": "Hits / Misses per Sec", "type": "timeseries" }, { "datasource": { "type": "prometheus", "uid": "${DS_PROM}" }, "fieldConfig": { "defaults": { "color": { "mode": "palette-classic" }, "custom": { "axisBorderShow": false, "axisCenteredZero": false, "axisColorMode": "text", "axisLabel": "", "axisPlacement": "auto", "barAlignment": 0, "drawStyle": "line", "fillOpacity": 10, "gradientMode": "none", "hideFrom": { "legend": false, "tooltip": false, "viz": false }, "insertNulls": false, "lineInterpolation": "linear", "lineWidth": 2, "pointSize": 5, "scaleDistribution": { "type": "linear" }, "showPoints": "never", "spanNulls": false, "stacking": { "group": "A", "mode": "none" }, "thresholdsStyle": { "mode": "off" } }, "links": [], "mappings": [], "min": 0, "thresholds": { "mode": "absolute", "steps": [ { "color": "green", "value": null }, { "color": "red", "value": 80 } ] }, "unit": "bytes", "unitScale": true }, "overrides": [ { "matcher": { "id": "byName", "options": "max" }, "properties": [ { "id": "color", "value": { "fixedColor": "#BF1B00", "mode": "fixed" } } ] } ] }, "gridPos": { "h": 7, "w": 12, "x": 0, "y": 7 }, "id": 7, "links": [], "options": { "legend": { "calcs": [], "displayMode": "list", "placement": "bottom", "showLegend": true }, "tooltip": { "mode": "multi", "sort": "none" } }, "pluginVersion": "10.3.3", "targets": [ { "datasource": { "type": "prometheus", "uid": "${DS_PROM}" }, "expr": "redis_memory_used_bytes{instance=~\"$instance\"}", "format": "time_series", "intervalFactor": 2, "legendFormat": "used, {{ instance }}", "metric": "", "refId": "A", "step": 240, "target": "" }, { "datasource": { "type": "prometheus", "uid": "${DS_PROM}" }, "expr": "redis_memory_max_bytes{instance=~\"$instance\"}", "format": "time_series", "hide": false, "intervalFactor": 
2, "legendFormat": "max, {{ instance }}", "refId": "B", "step": 240 } ], "title": "Total Memory Usage", "type": "timeseries" }, { "datasource": { "type": "prometheus", "uid": "${DS_PROM}" }, "fieldConfig": { "defaults": { "color": { "mode": "palette-classic" }, "custom": { "axisBorderShow": false, "axisCenteredZero": false, "axisColorMode": "text", "axisLabel": "", "axisPlacement": "auto", "barAlignment": 0, "drawStyle": "line", "fillOpacity": 10, "gradientMode": "none", "hideFrom": { "legend": false, "tooltip": false, "viz": false }, "insertNulls": false, "lineInterpolation": "linear", "lineWidth": 2, "pointSize": 5, "scaleDistribution": { "type": "linear" }, "showPoints": "never", "spanNulls": true, "stacking": { "group": "A", "mode": "none" }, "thresholdsStyle": { "mode": "off" } }, "links": [], "mappings": [], "thresholds": { "mode": "absolute", "steps": [ { "color": "green", "value": null }, { "color": "red", "value": 80 } ] }, "unit": "bytes", "unitScale": true }, "overrides": [] }, "gridPos": { "h": 7, "w": 12, "x": 12, "y": 7 }, "id": 10, "links": [], "options": { "legend": { "calcs": [], "displayMode": "list", "placement": "bottom", "showLegend": true }, "tooltip": { "mode": "multi", "sort": "none" } }, "pluginVersion": "10.3.3", "targets": [ { "datasource": { "type": "prometheus", "uid": "${DS_PROM}" }, "expr": "sum(rate(redis_net_input_bytes_total{instance=~\"$instance\"}[5m]))", "format": "time_series", "intervalFactor": 2, "legendFormat": "{{ input }}", "refId": "A", "step": 240 }, { "datasource": { "type": "prometheus", "uid": "${DS_PROM}" }, "expr": "sum(rate(redis_net_output_bytes_total{instance=~\"$instance\"}[5m]))", "format": "time_series", "interval": "", "intervalFactor": 2, "legendFormat": "{{ output }}", "refId": "B", "step": 240 } ], "title": "Network I/O", "type": "timeseries" }, { "datasource": { "type": "prometheus", "uid": "${DS_PROM}" }, "fieldConfig": { "defaults": { "color": { "mode": "palette-classic" }, "custom": { "axisBorderShow": false, "axisCenteredZero": false, "axisColorMode": "text", "axisLabel": "", "axisPlacement": "auto", "barAlignment": 0, "drawStyle": "line", "fillOpacity": 70, "gradientMode": "none", "hideFrom": { "legend": false, "tooltip": false, "viz": false }, "insertNulls": false, "lineInterpolation": "linear", "lineWidth": 2, "pointSize": 5, "scaleDistribution": { "type": "linear" }, "showPoints": "never", "spanNulls": true, "stacking": { "group": "A", "mode": "normal" }, "thresholdsStyle": { "mode": "off" } }, "links": [], "mappings": [], "thresholds": { "mode": "absolute", "steps": [ { "color": "green", "value": null }, { "color": "red", "value": 80 } ] }, "unit": "short", "unitScale": true }, "overrides": [ { "matcher": { "id": "byValue", "options": { "op": "gte", "reducer": "allIsZero", "value": 0 } }, "properties": [ { "id": "custom.hideFrom", "value": { "legend": true, "tooltip": true, "viz": false } } ] } ] }, "gridPos": { "h": 7, "w": 12, "x": 0, "y": 14 }, "id": 5, "links": [], "options": { "legend": { "calcs": [ "lastNotNull" ], "displayMode": "list", "placement": "bottom", "showLegend": true }, "tooltip": { "mode": "multi", "sort": "none" } }, "pluginVersion": "10.3.3", "targets": [ { "datasource": { "type": "prometheus", "uid": "${DS_PROM}" }, "expr": "sum (redis_db_keys{instance=~\"$instance\"}) by (db, instance)", "format": "time_series", "interval": "", "intervalFactor": 2, "legendFormat": "{{ db }}, {{ instance }}", "refId": "A", "step": 240, "target": "" } ], "title": "Total Items per DB", "type": "timeseries" }, { 
"datasource": { "type": "prometheus", "uid": "${DS_PROM}" }, "fieldConfig": { "defaults": { "color": { "mode": "palette-classic" }, "custom": { "axisBorderShow": false, "axisCenteredZero": false, "axisColorMode": "text", "axisLabel": "", "axisPlacement": "auto", "barAlignment": 0, "drawStyle": "line", "fillOpacity": 70, "gradientMode": "none", "hideFrom": { "legend": false, "tooltip": false, "viz": false }, "insertNulls": false, "lineInterpolation": "linear", "lineWidth": 2, "pointSize": 5, "scaleDistribution": { "type": "linear" }, "showPoints": "never", "spanNulls": true, "stacking": { "group": "A", "mode": "normal" }, "thresholdsStyle": { "mode": "off" } }, "links": [], "mappings": [], "thresholds": { "mode": "absolute", "steps": [ { "color": "green", "value": null }, { "color": "red", "value": 80 } ] }, "unit": "short", "unitScale": true }, "overrides": [] }, "gridPos": { "h": 7, "w": 12, "x": 12, "y": 14 }, "id": 13, "links": [], "options": { "legend": { "calcs": [], "displayMode": "list", "placement": "bottom", "showLegend": true }, "tooltip": { "mode": "multi", "sort": "none" } }, "pluginVersion": "10.3.3", "targets": [ { "datasource": { "type": "prometheus", "uid": "${DS_PROM}" }, "expr": "sum (redis_db_keys{instance=~\"$instance\"}) by (instance) - sum (redis_db_keys_expiring{instance=~\"$instance\"}) by (instance)", "format": "time_series", "interval": "", "intervalFactor": 2, "legendFormat": "not expiring, {{ instance }}", "refId": "A", "step": 240, "target": "" }, { "datasource": { "type": "prometheus", "uid": "${DS_PROM}" }, "expr": "sum (redis_db_keys_expiring{instance=~\"$instance\"}) by (instance)", "format": "time_series", "interval": "", "intervalFactor": 2, "legendFormat": "expiring, {{ instance }}", "metric": "", "refId": "B", "step": 240 } ], "title": "Expiring vs Not-Expiring Keys", "type": "timeseries" }, { "datasource": { "type": "prometheus", "uid": "${DS_PROM}" }, "fieldConfig": { "defaults": { "color": { "mode": "palette-classic" }, "custom": { "axisBorderShow": false, "axisCenteredZero": false, "axisColorMode": "text", "axisLabel": "", "axisPlacement": "auto", "barAlignment": 0, "drawStyle": "line", "fillOpacity": 10, "gradientMode": "none", "hideFrom": { "legend": false, "tooltip": false, "viz": false }, "insertNulls": false, "lineInterpolation": "linear", "lineWidth": 2, "pointSize": 5, "scaleDistribution": { "type": "linear" }, "showPoints": "never", "spanNulls": true, "stacking": { "group": "A", "mode": "none" }, "thresholdsStyle": { "mode": "off" } }, "links": [], "mappings": [], "thresholds": { "mode": "absolute", "steps": [ { "color": "green" }, { "color": "red", "value": 80 } ] }, "unit": "short", "unitScale": true }, "overrides": [ { "matcher": { "id": "byName", "options": "evicts" }, "properties": [ { "id": "color", "value": { "fixedColor": "#890F02", "mode": "fixed" } } ] }, { "matcher": { "id": "byName", "options": "memcached_items_evicted_total{instance=\"172.17.0.1:9150\",job=\"prometheus\"}" }, "properties": [ { "id": "color", "value": { "fixedColor": "#890F02", "mode": "fixed" } } ] }, { "matcher": { "id": "byName", "options": "reclaims" }, "properties": [ { "id": "color", "value": { "fixedColor": "#3F6833", "mode": "fixed" } } ] } ] }, "gridPos": { "h": 7, "w": 12, "x": 0, "y": 21 }, "id": 8, "links": [], "options": { "legend": { "calcs": [], "displayMode": "list", "placement": "bottom", "showLegend": true }, "tooltip": { "mode": "multi", "sort": "none" } }, "pluginVersion": "10.3.3", "targets": [ { "datasource": { "type": "prometheus", "uid": 
"${DS_PROM}" }, "expr": "sum(rate(redis_expired_keys_total{instance=~\"$instance\"}[5m])) by (instance)", "format": "time_series", "hide": false, "interval": "", "intervalFactor": 2, "legendFormat": "expired, {{ instance }}", "metric": "", "refId": "A", "step": 240, "target": "" }, { "datasource": { "type": "prometheus", "uid": "${DS_PROM}" }, "expr": "sum(rate(redis_evicted_keys_total{instance=~\"$instance\"}[5m])) by (instance)", "format": "time_series", "interval": "", "intervalFactor": 2, "legendFormat": "evicted, {{ instance }}", "refId": "B", "step": 240 } ], "title": "Expired/Evicted Keys", "type": "timeseries" }, { "datasource": { "type": "prometheus", "uid": "${DS_PROM}" }, "fieldConfig": { "defaults": { "color": { "mode": "palette-classic" }, "custom": { "axisBorderShow": false, "axisCenteredZero": false, "axisColorMode": "text", "axisLabel": "", "axisPlacement": "auto", "barAlignment": 0, "drawStyle": "line", "fillOpacity": 10, "gradientMode": "none", "hideFrom": { "legend": false, "tooltip": false, "viz": false }, "insertNulls": false, "lineInterpolation": "linear", "lineWidth": 1, "pointSize": 5, "scaleDistribution": { "type": "linear" }, "showPoints": "never", "spanNulls": false, "stacking": { "group": "A", "mode": "none" }, "thresholdsStyle": { "mode": "off" } }, "links": [], "mappings": [], "thresholds": { "mode": "absolute", "steps": [ { "color": "green" }, { "color": "red", "value": 80 } ] }, "unit": "short", "unitScale": true }, "overrides": [] }, "gridPos": { "h": 7, "w": 12, "x": 12, "y": 21 }, "id": 16, "links": [], "options": { "legend": { "calcs": [], "displayMode": "list", "placement": "bottom", "showLegend": true }, "tooltip": { "mode": "multi", "sort": "none" } }, "pluginVersion": "10.3.3", "targets": [ { "datasource": { "type": "prometheus", "uid": "${DS_PROM}" }, "expr": "sum(redis_connected_clients{instance=~\"$instance\"})", "format": "time_series", "intervalFactor": 1, "legendFormat": "connected", "refId": "A" }, { "datasource": { "type": "prometheus", "uid": "${DS_PROM}" }, "expr": "sum(redis_blocked_clients{instance=~\"$instance\"})", "format": "time_series", "intervalFactor": 1, "legendFormat": "blocked", "refId": "B" } ], "title": "Connected/Blocked Clients", "type": "timeseries" }, { "datasource": { "type": "prometheus", "uid": "${DS_PROM}" }, "fieldConfig": { "defaults": { "color": { "mode": "palette-classic" }, "custom": { "axisBorderShow": false, "axisCenteredZero": false, "axisColorMode": "text", "axisLabel": "", "axisPlacement": "auto", "barAlignment": 0, "drawStyle": "line", "fillOpacity": 0, "gradientMode": "none", "hideFrom": { "legend": false, "tooltip": false, "viz": false }, "insertNulls": false, "lineInterpolation": "linear", "lineWidth": 1, "pointSize": 5, "scaleDistribution": { "type": "linear" }, "showPoints": "never", "spanNulls": true, "stacking": { "group": "A", "mode": "none" }, "thresholdsStyle": { "mode": "off" } }, "links": [], "mappings": [], "thresholds": { "mode": "absolute", "steps": [ { "color": "green" }, { "color": "red", "value": 80 } ] }, "unit": "s", "unitScale": true }, "overrides": [ { "matcher": { "id": "byValue", "options": { "op": "gte", "reducer": "allIsZero", "value": 0 } }, "properties": [ { "id": "custom.hideFrom", "value": { "legend": true, "tooltip": true, "viz": false } } ] } ] }, "gridPos": { "h": 7, "w": 12, "x": 0, "y": 28 }, "id": 20, "links": [], "options": { "legend": { "calcs": [], "displayMode": "list", "placement": "bottom", "showLegend": true }, "tooltip": { "mode": "multi", "sort": "desc" } }, 
"pluginVersion": "10.3.3", "targets": [ { "datasource": { "type": "prometheus", "uid": "${DS_PROM}" }, "expr": "sum(irate(redis_commands_duration_seconds_total{instance =~ \"$instance\"}[1m])) by (cmd)\n /\nsum(irate(redis_commands_total{instance =~ \"$instance\"}[1m])) by (cmd)\n", "format": "time_series", "interval": "", "intervalFactor": 2, "legendFormat": "{{ cmd }}", "metric": "redis_command_calls_total", "refId": "A", "step": 240 } ], "title": "Average Time Spent by Command / sec", "type": "timeseries" }, { "datasource": { "type": "prometheus", "uid": "${DS_PROM}" }, "fieldConfig": { "defaults": { "color": { "mode": "palette-classic" }, "custom": { "axisBorderShow": false, "axisCenteredZero": false, "axisColorMode": "text", "axisLabel": "", "axisPlacement": "auto", "barAlignment": 0, "drawStyle": "line", "fillOpacity": 80, "gradientMode": "none", "hideFrom": { "legend": false, "tooltip": false, "viz": false }, "insertNulls": false, "lineInterpolation": "linear", "lineWidth": 1, "pointSize": 5, "scaleDistribution": { "type": "linear" }, "showPoints": "never", "spanNulls": true, "stacking": { "group": "A", "mode": "normal" }, "thresholdsStyle": { "mode": "off" } }, "links": [], "mappings": [], "thresholds": { "mode": "absolute", "steps": [ { "color": "green" }, { "color": "red", "value": 80 } ] }, "unit": "s", "unitScale": true }, "overrides": [] }, "gridPos": { "h": 7, "w": 12, "x": 12, "y": 28 }, "id": 14, "links": [], "options": { "legend": { "calcs": [], "displayMode": "list", "placement": "bottom", "showLegend": true }, "tooltip": { "mode": "multi", "sort": "desc" } }, "pluginVersion": "10.3.3", "targets": [ { "datasource": { "type": "prometheus", "uid": "${DS_PROM}" }, "expr": "sum(irate(redis_commands_duration_seconds_total{instance=~\"$instance\"}[1m])) by (cmd) != 0", "format": "time_series", "interval": "", "intervalFactor": 2, "legendFormat": "{{ cmd }}", "metric": "redis_command_calls_total", "refId": "A", "step": 240 } ], "title": "Total Time Spent by Command / sec", "type": "timeseries" } ], "refresh": "", "schemaVersion": 39, "tags": [ "prometheus", "redis" ], "templating": { "list": [ { "current": { "selected": false, "text": "default", "value": "default" }, "hide": 0, "includeAll": false, "label": "datasource", "multi": false, "name": "DS_PROM", "options": [], "query": "prometheus", "refresh": 1, "regex": "", "skipUrlSync": false, "type": "datasource" }, { "current": {}, "datasource": { "type": "prometheus", "uid": "${DS_PROM}" }, "definition": "label_values(redis_up, namespace)", "hide": 0, "includeAll": false, "multi": false, "name": "namespace", "options": [], "query": "label_values(redis_up, namespace)", "refresh": 2, "regex": "", "skipUrlSync": false, "sort": 1, "tagValuesQuery": "", "tagsQuery": "", "type": "query", "useTags": false }, { "current": {}, "datasource": { "type": "prometheus", "uid": "${DS_PROM}" }, "definition": "label_values(redis_up{namespace=~\"$namespace\"}, instance)", "hide": 0, "includeAll": false, "multi": true, "name": "instance", "options": [], "query": "label_values(redis_up{namespace=~\"$namespace\"}, instance)", "refresh": 2, "regex": "", "skipUrlSync": false, "sort": 1, "tagValuesQuery": "", "tagsQuery": "", "type": "query", "useTags": false } ] }, "time": { "from": "now-24h", "to": "now" }, "timepicker": { "refresh_intervals": [ "5s", "10s", "30s", "1m", "5m", "15m", "30m", "1h", "2h", "1d" ], "time_options": [ "5m", "15m", "1h", "6h", "12h", "24h", "2d", "7d", "30d" ] }, "timezone": "browser", "title": "Redis Dashboard for Prometheus 
Redis Exporter 1.x", "uid": "e008bc3f-81a2-40f9-baf2-a33fd8dec7ec", "version": 2, "weekStart": "" }redis_exporter-1.69.0/contrib/grafana_prometheus_redis_dashboard_exporter_version_0.3x.json000066400000000000000000000660501476520031400325250ustar00rootroot00000000000000{ "__inputs": [ { "name": "DS_PROMETHEUS", "label": "Prometheus", "description": "", "type": "datasource", "pluginId": "prometheus", "pluginName": "Prometheus" } ], "__requires": [ { "type": "panel", "id": "singlestat", "name": "Singlestat", "version": "" }, { "type": "panel", "id": "graph", "name": "Graph", "version": "" }, { "type": "grafana", "id": "grafana", "name": "Grafana", "version": "3.1.1" }, { "type": "datasource", "id": "prometheus", "name": "Prometheus", "version": "1.0.0" } ], "id": null, "title": "Prometheus Redis", "description": "Prometheus dashboard for Redis servers", "tags": [ "prometheus", "redis" ], "style": "dark", "timezone": "browser", "editable": true, "hideControls": false, "sharedCrosshair": false, "panels": [ { "cacheTimeout": null, "colorBackground": false, "colorValue": false, "colors": [ "rgba(245, 54, 54, 0.9)", "rgba(237, 129, 40, 0.89)", "rgba(50, 172, 45, 0.97)" ], "datasource": "${DS_PROMETHEUS}", "decimals": 0, "editable": true, "error": false, "format": "s", "gauge": { "maxValue": 100, "minValue": 0, "show": false, "thresholdLabels": false, "thresholdMarkers": true }, "gridPos": { "h": 7, "w": 2, "x": 0, "y": 0 }, "id": 9, "interval": null, "isNew": true, "links": [], "mappingType": 1, "mappingTypes": [ { "name": "value to text", "value": 1 }, { "name": "range to text", "value": 2 } ], "maxDataPoints": 100, "nullPointMode": "connected", "nullText": null, "options": {}, "postfix": "", "postfixFontSize": "50%", "prefix": "", "prefixFontSize": "50%", "rangeMaps": [ { "from": "null", "text": "N/A", "to": "null" } ], "sparkline": { "fillColor": "rgba(31, 118, 189, 0.18)", "full": false, "lineColor": "rgb(31, 120, 193)", "show": true }, "tableColumn": "", "targets": [ { "expr": "max(max_over_time(redis_uptime_in_seconds{addr=\"$addr\"}[$__interval]))", "intervalFactor": 2, "legendFormat": "", "metric": "", "refId": "A", "step": 1800 } ], "thresholds": "", "title": "Uptime", "type": "singlestat", "valueFontSize": "70%", "valueMaps": [ { "op": "=", "text": "N/A", "value": "null" } ], "valueName": "current" }, { "cacheTimeout": null, "colorBackground": false, "colorValue": false, "colors": [ "rgba(245, 54, 54, 0.9)", "rgba(237, 129, 40, 0.89)", "rgba(50, 172, 45, 0.97)" ], "datasource": "${DS_PROMETHEUS}", "decimals": 0, "editable": true, "error": false, "format": "none", "gauge": { "maxValue": 100, "minValue": 0, "show": false, "thresholdLabels": false, "thresholdMarkers": true }, "gridPos": { "h": 7, "w": 2, "x": 2, "y": 0 }, "hideTimeOverride": true, "id": 12, "interval": null, "isNew": true, "links": [], "mappingType": 1, "mappingTypes": [ { "name": "value to text", "value": 1 }, { "name": "range to text", "value": 2 } ], "maxDataPoints": 100, "nullPointMode": "connected", "nullText": null, "options": {}, "postfix": "", "postfixFontSize": "50%", "prefix": "", "prefixFontSize": "50%", "rangeMaps": [ { "from": "null", "text": "N/A", "to": "null" } ], "sparkline": { "fillColor": "rgba(31, 118, 189, 0.18)", "full": false, "lineColor": "rgb(31, 120, 193)", "show": true }, "tableColumn": "", "targets": [ { "expr": "redis_connected_clients{addr=\"$addr\"}", "intervalFactor": 2, "legendFormat": "", "metric": "", "refId": "A", "step": 2 } ], "thresholds": "", "timeFrom": "1m", "timeShift": null, 
"title": "Clients", "type": "singlestat", "valueFontSize": "80%", "valueMaps": [ { "op": "=", "text": "N/A", "value": "null" } ], "valueName": "current" }, { "cacheTimeout": null, "colorBackground": false, "colorValue": false, "colors": [ "rgba(50, 172, 45, 0.97)", "rgba(237, 129, 40, 0.89)", "rgba(245, 54, 54, 0.9)" ], "datasource": "${DS_PROMETHEUS}", "decimals": 0, "editable": true, "error": false, "format": "percent", "gauge": { "maxValue": 100, "minValue": 0, "show": true, "thresholdLabels": false, "thresholdMarkers": true }, "gridPos": { "h": 7, "w": 4, "x": 4, "y": 0 }, "hideTimeOverride": true, "id": 11, "interval": null, "isNew": true, "links": [], "mappingType": 1, "mappingTypes": [ { "name": "value to text", "value": 1 }, { "name": "range to text", "value": 2 } ], "maxDataPoints": 100, "nullPointMode": "connected", "nullText": null, "options": {}, "postfix": "", "postfixFontSize": "50%", "prefix": "", "prefixFontSize": "50%", "rangeMaps": [ { "from": "null", "text": "N/A", "to": "null" } ], "sparkline": { "fillColor": "rgba(31, 118, 189, 0.18)", "full": false, "lineColor": "rgb(31, 120, 193)", "show": true }, "tableColumn": "", "targets": [ { "expr": "100 * (redis_memory_used_bytes{addr=~\"$addr\"} / redis_memory_max_bytes{addr=~\"$addr\"} )", "intervalFactor": 2, "legendFormat": "", "metric": "", "refId": "A", "step": 2 } ], "thresholds": "80,95", "timeFrom": "1m", "timeShift": null, "title": "Memory Usage", "type": "singlestat", "valueFontSize": "80%", "valueMaps": [ { "op": "=", "text": "N/A", "value": "null" } ], "valueName": "current" }, { "aliasColors": {}, "bars": false, "dashLength": 10, "dashes": false, "datasource": "${DS_PROMETHEUS}", "editable": true, "error": false, "fill": 1, "grid": {}, "gridPos": { "h": 7, "w": 8, "x": 8, "y": 0 }, "id": 2, "isNew": true, "legend": { "avg": false, "current": false, "max": false, "min": false, "show": false, "total": false, "values": false }, "lines": true, "linewidth": 2, "links": [], "nullPointMode": "connected", "options": {}, "percentage": false, "pointradius": 5, "points": false, "renderer": "flot", "seriesOverrides": [], "spaceLength": 10, "stack": false, "steppedLine": false, "targets": [ { "expr": "rate(redis_commands_processed_total{addr=~\"$addr\"}[5m])", "interval": "", "intervalFactor": 2, "legendFormat": "", "metric": "A", "refId": "A", "step": 240, "target": "" } ], "thresholds": [], "timeFrom": null, "timeRegions": [], "timeShift": null, "title": "Commands Executed / sec", "tooltip": { "msResolution": false, "shared": true, "sort": 0, "value_type": "cumulative" }, "type": "graph", "xaxis": { "buckets": null, "mode": "time", "name": null, "show": true, "values": [] }, "yaxes": [ { "format": "short", "label": null, "logBase": 1, "max": null, "min": null, "show": true }, { "format": "short", "label": null, "logBase": 1, "max": null, "min": null, "show": true } ], "yaxis": { "align": false, "alignLevel": null } }, { "aliasColors": {}, "bars": false, "dashLength": 10, "dashes": false, "datasource": "${DS_PROMETHEUS}", "decimals": 2, "editable": true, "error": false, "fill": 1, "grid": {}, "gridPos": { "h": 7, "w": 8, "x": 16, "y": 0 }, "id": 1, "isNew": true, "legend": { "avg": false, "current": false, "max": false, "min": false, "show": false, "total": false, "values": false }, "lines": true, "linewidth": 2, "links": [], "nullPointMode": "connected", "options": {}, "percentage": true, "pointradius": 5, "points": false, "renderer": "flot", "seriesOverrides": [], "spaceLength": 10, "stack": false, "steppedLine": false, 
"targets": [ { "expr": "irate(redis_keyspace_hits_total{addr=\"$addr\"}[5m])", "hide": false, "interval": "", "intervalFactor": 2, "legendFormat": "hits", "metric": "", "refId": "A", "step": 240, "target": "" }, { "expr": "irate(redis_keyspace_misses_total{addr=\"$addr\"}[5m])", "hide": false, "interval": "", "intervalFactor": 2, "legendFormat": "misses", "metric": "", "refId": "B", "step": 240, "target": "" } ], "thresholds": [], "timeFrom": null, "timeRegions": [], "timeShift": null, "title": "Hits / Misses per Sec", "tooltip": { "msResolution": false, "shared": true, "sort": 0, "value_type": "individual" }, "type": "graph", "xaxis": { "buckets": null, "mode": "time", "name": null, "show": true, "values": [] }, "yaxes": [ { "format": "short", "label": "", "logBase": 1, "max": null, "min": 0, "show": true }, { "format": "short", "label": null, "logBase": 1, "max": null, "min": null, "show": true } ], "yaxis": { "align": false, "alignLevel": null } }, { "aliasColors": { "max": "#BF1B00" }, "bars": false, "dashLength": 10, "dashes": false, "datasource": "${DS_PROMETHEUS}", "editable": true, "error": false, "fill": 1, "grid": {}, "gridPos": { "h": 7, "w": 12, "x": 0, "y": 7 }, "id": 7, "isNew": true, "legend": { "avg": false, "current": false, "hideEmpty": false, "hideZero": false, "max": false, "min": false, "show": true, "total": false, "values": false }, "lines": true, "linewidth": 2, "links": [], "nullPointMode": "null as zero", "options": {}, "percentage": false, "pointradius": 5, "points": false, "renderer": "flot", "seriesOverrides": [], "spaceLength": 10, "stack": false, "steppedLine": false, "targets": [ { "expr": "redis_memory_used_bytes{addr=~\"$addr\"} ", "intervalFactor": 2, "legendFormat": "used", "metric": "", "refId": "A", "step": 240, "target": "" }, { "expr": "redis_memory_max_bytes{addr=~\"$addr\"} ", "hide": false, "intervalFactor": 2, "legendFormat": "max", "refId": "B", "step": 240 } ], "thresholds": [], "timeFrom": null, "timeRegions": [], "timeShift": null, "title": "Total Memory Usage", "tooltip": { "msResolution": false, "shared": true, "sort": 0, "value_type": "cumulative" }, "type": "graph", "xaxis": { "buckets": null, "mode": "time", "name": null, "show": true, "values": [] }, "yaxes": [ { "format": "bytes", "label": null, "logBase": 1, "max": null, "min": 0, "show": true }, { "format": "short", "label": null, "logBase": 1, "max": null, "min": null, "show": true } ], "yaxis": { "align": false, "alignLevel": null } }, { "aliasColors": {}, "bars": false, "dashLength": 10, "dashes": false, "datasource": "${DS_PROMETHEUS}", "editable": true, "error": false, "fill": 1, "grid": {}, "gridPos": { "h": 7, "w": 12, "x": 12, "y": 7 }, "id": 10, "isNew": true, "legend": { "avg": false, "current": false, "max": false, "min": false, "show": true, "total": false, "values": false }, "lines": true, "linewidth": 2, "links": [], "nullPointMode": "connected", "options": {}, "percentage": false, "pointradius": 5, "points": false, "renderer": "flot", "seriesOverrides": [], "spaceLength": 10, "stack": false, "steppedLine": false, "targets": [ { "expr": "rate(redis_net_input_bytes_total{addr=\"$addr\"}[5m])", "intervalFactor": 2, "legendFormat": "{{ input }}", "refId": "A", "step": 240 }, { "expr": "rate(redis_net_output_bytes_total{addr=\"$addr\"}[5m])", "interval": "", "intervalFactor": 2, "legendFormat": "{{ output }}", "refId": "B", "step": 240 } ], "thresholds": [], "timeFrom": null, "timeRegions": [], "timeShift": null, "title": "Network I/O", "tooltip": { "msResolution": true, 
"shared": true, "sort": 0, "value_type": "cumulative" }, "type": "graph", "xaxis": { "buckets": null, "mode": "time", "name": null, "show": true, "values": [] }, "yaxes": [ { "format": "bytes", "label": null, "logBase": 1, "max": null, "min": null, "show": true }, { "format": "short", "label": null, "logBase": 1, "max": null, "min": null, "show": true } ], "yaxis": { "align": false, "alignLevel": null } }, { "aliasColors": {}, "bars": false, "dashLength": 10, "dashes": false, "datasource": "${DS_PROMETHEUS}", "editable": true, "error": false, "fill": 7, "grid": {}, "gridPos": { "h": 7, "w": 12, "x": 0, "y": 14 }, "id": 5, "isNew": true, "legend": { "alignAsTable": true, "avg": false, "current": true, "max": false, "min": false, "rightSide": true, "show": true, "total": false, "values": true }, "lines": true, "linewidth": 2, "links": [], "nullPointMode": "connected", "options": {}, "percentage": false, "pointradius": 5, "points": false, "renderer": "flot", "seriesOverrides": [], "spaceLength": 10, "stack": true, "steppedLine": false, "targets": [ { "expr": "sum (redis_db_keys{addr=~\"$addr\"}) by (db)", "interval": "", "intervalFactor": 2, "legendFormat": "{{ db }} ", "refId": "A", "step": 240, "target": "" } ], "thresholds": [], "timeFrom": null, "timeRegions": [], "timeShift": null, "title": "Total Items per DB", "tooltip": { "msResolution": false, "shared": true, "sort": 0, "value_type": "individual" }, "type": "graph", "xaxis": { "buckets": null, "mode": "time", "name": null, "show": true, "values": [] }, "yaxes": [ { "format": "none", "label": null, "logBase": 1, "max": null, "min": null, "show": true }, { "format": "short", "label": null, "logBase": 1, "max": null, "min": null, "show": true } ], "yaxis": { "align": false, "alignLevel": null } }, { "aliasColors": {}, "bars": false, "dashLength": 10, "dashes": false, "datasource": "${DS_PROMETHEUS}", "editable": true, "error": false, "fill": 7, "grid": {}, "gridPos": { "h": 7, "w": 12, "x": 12, "y": 14 }, "id": 13, "isNew": true, "legend": { "avg": false, "current": false, "max": false, "min": false, "show": true, "total": false, "values": false }, "lines": true, "linewidth": 2, "links": [], "nullPointMode": "connected", "options": {}, "percentage": false, "pointradius": 5, "points": false, "renderer": "flot", "seriesOverrides": [], "spaceLength": 10, "stack": true, "steppedLine": false, "targets": [ { "expr": "sum (redis_db_keys{addr=~\"$addr\"}) - sum (redis_db_keys_expiring{addr=~\"$addr\"}) ", "interval": "", "intervalFactor": 2, "legendFormat": "not expiring", "refId": "A", "step": 240, "target": "" }, { "expr": "sum (redis_db_keys_expiring{addr=~\"$addr\"}) ", "interval": "", "intervalFactor": 2, "legendFormat": "expiring", "metric": "", "refId": "B", "step": 240 } ], "thresholds": [], "timeFrom": null, "timeRegions": [], "timeShift": null, "title": "Expiring vs Not-Expiring Keys", "tooltip": { "msResolution": false, "shared": true, "sort": 0, "value_type": "individual" }, "type": "graph", "xaxis": { "buckets": null, "mode": "time", "name": null, "show": true, "values": [] }, "yaxes": [ { "format": "short", "label": null, "logBase": 1, "max": null, "min": null, "show": true }, { "format": "short", "label": null, "logBase": 1, "max": null, "min": null, "show": true } ], "yaxis": { "align": false, "alignLevel": null } }, { "aliasColors": { "evicts": "#890F02", "memcached_items_evicted_total{instance=\"172.17.0.1:9150\",job=\"prometheus\"}": "#890F02", "reclaims": "#3F6833" }, "bars": false, "dashLength": 10, "dashes": false, 
"datasource": "${DS_PROMETHEUS}", "editable": true, "error": false, "fill": 1, "grid": {}, "gridPos": { "h": 7, "w": 12, "x": 0, "y": 21 }, "id": 8, "isNew": true, "legend": { "avg": false, "current": false, "max": false, "min": false, "show": true, "total": false, "values": false }, "lines": true, "linewidth": 2, "links": [], "nullPointMode": "connected", "options": {}, "percentage": false, "pointradius": 5, "points": false, "renderer": "flot", "seriesOverrides": [ { "alias": "reclaims", "yaxis": 2 } ], "spaceLength": 10, "stack": false, "steppedLine": false, "targets": [ { "expr": "sum(rate(redis_expired_keys_total{addr=~\"$addr\"}[5m])) by (addr)", "interval": "", "intervalFactor": 2, "legendFormat": "expired", "metric": "", "refId": "A", "step": 240, "target": "" }, { "expr": "sum(rate(redis_evicted_keys_total{addr=~\"$addr\"}[5m])) by (addr)", "interval": "", "intervalFactor": 2, "legendFormat": "evicted", "refId": "B", "step": 240 } ], "thresholds": [], "timeFrom": null, "timeRegions": [], "timeShift": null, "title": "Expired / Evicted", "tooltip": { "msResolution": false, "shared": true, "sort": 0, "value_type": "cumulative" }, "type": "graph", "xaxis": { "buckets": null, "mode": "time", "name": null, "show": true, "values": [] }, "yaxes": [ { "format": "short", "label": null, "logBase": 1, "max": null, "min": null, "show": true }, { "format": "short", "label": null, "logBase": 1, "max": null, "min": null, "show": true } ], "yaxis": { "align": false, "alignLevel": null } }, { "aliasColors": {}, "bars": false, "dashLength": 10, "dashes": false, "datasource": "${DS_PROMETHEUS}", "editable": true, "error": false, "fill": 8, "grid": {}, "gridPos": { "h": 7, "w": 12, "x": 12, "y": 21 }, "id": 14, "isNew": true, "legend": { "avg": false, "current": false, "max": false, "min": false, "show": true, "total": false, "values": false }, "lines": true, "linewidth": 1, "links": [], "nullPointMode": "connected", "options": {}, "percentage": false, "pointradius": 5, "points": false, "renderer": "flot", "seriesOverrides": [], "spaceLength": 10, "stack": true, "steppedLine": false, "targets": [ { "expr": "topk(5, irate(redis_commands_total{addr=~\"$addr\"} [1m]))", "interval": "", "intervalFactor": 2, "legendFormat": "{{ cmd }}", "metric": "redis_command_calls_total", "refId": "A", "step": 240 } ], "thresholds": [], "timeFrom": null, "timeRegions": [], "timeShift": null, "title": "Command Calls / sec", "tooltip": { "msResolution": true, "shared": true, "sort": 0, "value_type": "cumulative" }, "type": "graph", "xaxis": { "buckets": null, "mode": "time", "name": null, "show": true, "values": [] }, "yaxes": [ { "format": "short", "label": null, "logBase": 1, "max": null, "min": null, "show": true }, { "format": "short", "label": null, "logBase": 1, "max": null, "min": null, "show": true } ], "yaxis": { "align": false, "alignLevel": null } }, { "aliasColors": {}, "bars": false, "dashLength": 10, "dashes": false, "datasource": "${DS_PROMETHEUS}", "fill": 1, "gridPos": { "h": 8, "w": 12, "x": 0, "y": 28 }, "id": 16, "legend": { "avg": false, "current": false, "max": false, "min": false, "show": true, "total": false, "values": false }, "lines": true, "linewidth": 1, "links": [], "nullPointMode": "null", "options": {}, "percentage": false, "pointradius": 2, "points": false, "renderer": "flot", "seriesOverrides": [], "spaceLength": 10, "stack": false, "steppedLine": false, "targets": [ { "expr": "redis_connected_clients{addr=\"$addr\"}", "format": "time_series", "intervalFactor": 1, "refId": "A" } ], 
"thresholds": [], "timeFrom": null, "timeRegions": [], "timeShift": null, "title": "Clients", "tooltip": { "shared": true, "sort": 0, "value_type": "individual" }, "type": "graph", "xaxis": { "buckets": null, "mode": "time", "name": null, "show": true, "values": [] }, "yaxes": [ { "format": "short", "label": null, "logBase": 1, "max": null, "min": null, "show": true }, { "format": "short", "label": null, "logBase": 1, "max": null, "min": null, "show": true } ], "yaxis": { "align": false, "alignLevel": null } } ], "time": { "from": "now-24h", "to": "now" }, "timepicker": { "refresh_intervals": [ "5s", "10s", "30s", "1m", "5m", "15m", "30m", "1h", "2h", "1d" ], "time_options": [ "5m", "15m", "1h", "6h", "12h", "24h", "2d", "7d", "30d" ] }, "templating": { "list": [ { "current": {}, "datasource": "${DS_PROMETHEUS}", "hide": 0, "includeAll": false, "multi": false, "name": "addr", "options": [], "query": "label_values(redis_up, addr)", "refresh": 1, "regex": "", "type": "query" } ] }, "annotations": { "list": [] }, "refresh": "30s", "schemaVersion": 12, "version": 52, "links": [], "gnetId": 37 } redis_exporter-1.69.0/contrib/k8s-redis-and-exporter-deployment.yaml000066400000000000000000000016701476520031400255740ustar00rootroot00000000000000--- apiVersion: v1 kind: Namespace metadata: name: redis --- apiVersion: apps/v1 kind: Deployment metadata: namespace: redis name: redis spec: replicas: 1 selector: matchLabels: app: redis template: metadata: annotations: prometheus.io/scrape: "true" prometheus.io/port: "9121" labels: app: redis spec: containers: - name: redis image: redis:4 resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 - name: redis-exporter image: oliver006/redis_exporter:latest securityContext: runAsUser: 59000 runAsGroup: 59000 allowPrivilegeEscalation: false capabilities: drop: - ALL resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 9121 redis_exporter-1.69.0/contrib/manifest.yml000066400000000000000000000001741476520031400206420ustar00rootroot00000000000000--- buildpack: go_buildpack command: redis_exporter --use-cf-bindings --web.listen-address=:8080 env: GOPACKAGENAME: main redis_exporter-1.69.0/contrib/openshift-template.yaml000066400000000000000000000064361476520031400230140ustar00rootroot00000000000000apiVersion: v1 kind: Template labels: template: redis-exporter app: redis-exporter tier: redis metadata: annotations: openshift.io/display-name: Openshift Redis Exporter deployment template description: >- Deploy a Redis exporter for Prometheus into a specific namespace together with image stream tags: 'redis-exporter' name: redis-exporter parameters: - name: NAME description: The name of the application displayName: Name required: true value: redis-exporter - name: NAMESPACE description: The namespace of the application displayName: Namespace required: true - name: SOURCE_REPOSITORY_URL description: The URL of the repository with your application source code. 
displayName: Git Repository URL required: true value: 'https://github.com/oliver006/redis_exporter.git' - name: SOURCE_REPOSITORY_REF description: Set the branch name if you are not using master branch displayName: Git Reference value: master required: false - name: REDIS_ADDR description: Set the service names of the Redis instances that you like to export displayName: Redis Addresses required: true - name: REDIS_PASSWORD description: Set the password for the Redis instances that you like to export displayName: Redis Password required: false - name: REDIS_ALIAS description: Set the service alias of the Redis instances that you like to export displayName: Redis Alias required: false - name: REDIS_FILE description: Set the Redis file that contains one or more redis nodes, separated by newline displayName: Redis file required: false objects: - apiVersion: v1 kind: ImageStream metadata: generation: 2 labels: app: redis-exporter name: redis-exporter name: redis-exporter spec: dockerImageRepository: oliver006/redis_exporter - apiVersion: v1 kind: DeploymentConfig metadata: labels: app: redis-exporter name: redis-exporter spec: replicas: 1 selector: app: redis-exporter template: metadata: labels: app: redis-exporter spec: containers: - image: docker-registry.default.svc:5000/${NAMESPACE}/redis-exporter imagePullPolicy: Always name: redis-exporter ports: - containerPort: 9121 env: - name: REDIS_ADDR value: "${REDIS_ADDR}" - name: REDIS_PASSWORD value: "${REDIS_PASSWORD}" - name: REDIS_ALIAS value: "${REDIS_ALIAS}" - name: REDIS_FILE value: "${REDIS_FILE}" resources: {} dnsPolicy: ClusterFirst restartPolicy: Always securityContext: {} terminationGracePeriodSeconds: 30 test: false triggers: [] status: {} - apiVersion: v1 kind: Service metadata: labels: name: redis-exporter role: service name: redis-exporter spec: ports: - port: 9121 targetPort: 9121 selector: app: "redis-exporter" redis_exporter-1.69.0/contrib/redis-mixin/000077500000000000000000000000001476520031400205375ustar00rootroot00000000000000redis_exporter-1.69.0/contrib/redis-mixin/.gitignore000077500000000000000000000000461476520031400225320ustar00rootroot00000000000000alerts.yaml rules.yaml dashboards_out redis_exporter-1.69.0/contrib/redis-mixin/Makefile000077500000000000000000000013711476520031400222040ustar00rootroot00000000000000JSONNET_FMT := jsonnetfmt -n 2 --max-blank-lines 2 --string-style s --comment-style s # no lint for now, fails with a lot of errors and needs cleaning up first all: deps fmt build clean deps: go install github.com/monitoring-mixins/mixtool/cmd/mixtool@master go install github.com/google/go-jsonnet/cmd/jsonnetfmt@latest fmt: find . -name 'vendor' -prune -o -name '*.libsonnet' -print -o -name '*.jsonnet' -print | \ xargs -n 1 -- $(JSONNET_FMT) -i lint: find . -name 'vendor' -prune -o -name '*.libsonnet' -print -o -name '*.jsonnet' -print | \ while read f; do \ $(JSONNET_FMT) "$$f" | diff -u "$$f" -; \ done mixtool lint mixin.libsonnet build: mixtool generate all mixin.libsonnet clean: rm -rf dashboards_out alerts.yaml rules.yaml redis_exporter-1.69.0/contrib/redis-mixin/README.md000077500000000000000000000025661476520031400220320ustar00rootroot00000000000000# Redis Mixin _This is a work in progress. We aim for it to become a good role model for alerts and dashboards eventually, but it is not quite there yet._ The Redis Mixin is a set of configurable, reusable, and extensible alerts and dashboards based on the metrics exported by the Redis Exporter. 
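As a rough sketch of what "configurable" means here: the knobs live in `config.libsonnet` and are overridden by whoever imports the mixin. The file name and the `job` label below are assumed examples for illustration, not part of this repo:

```jsonnet
// my-redis-mixin.libsonnet (hypothetical consumer file).
// Overrides the defaults from config.libsonnet; adjust the job label
// to match your own Prometheus scrape config.
(import 'mixin.libsonnet') + {
  _config+:: {
    redisExporterSelector: 'job="redis-exporter"',  // default: job="redis"
    redisConnectionsThreshold: '250',  // default: '100'
  },
}
```

Every alert expression in `alerts/redis.libsonnet` is templated with `%(redisExporterSelector)s`, so an override like this flows through all generated rules.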
The mixin creates recording and alerting rules for Prometheus and suitable dashboard descriptions for Grafana. To use them, you need to have `mixtool` and `jsonnetfmt` installed. If you have a working Go development environment, it's easiest to run the following: ```bash # go >= 1.17 # Using `go get` to install binaries is deprecated. $ go install github.com/monitoring-mixins/mixtool/cmd/mixtool@latest $ go install github.com/google/go-jsonnet/cmd/jsonnetfmt@latest # go < 1.17 $ go get github.com/monitoring-mixins/mixtool/cmd/mixtool $ go get github.com/google/go-jsonnet/cmd/jsonnetfmt ``` You can then build the Prometheus rules files `alerts.yaml` and `rules.yaml` and a directory `dashboards_out` with the JSON dashboard files for Grafana: ```bash $ make build ``` The mixin currently treats each redis instance independently - it has no notion of replication or clustering. We aim to support these concepts in future versions. The mixin dashboard is a fork of the one in the [contrib](contrib/) directory. For more advanced uses of mixins, see https://github.com/monitoring-mixins/docs. redis_exporter-1.69.0/contrib/redis-mixin/alerts/000077500000000000000000000000001476520031400220315ustar00rootroot00000000000000redis_exporter-1.69.0/contrib/redis-mixin/alerts/redis.libsonnet000077500000000000000000000061561476520031400250670ustar00rootroot00000000000000{ prometheusAlerts+:: { groups+: [ { name: 'redis', rules: [ { alert: 'RedisDown', expr: 'redis_up{%(redisExporterSelector)s} == 0' % $._config, 'for': '5m', labels: { severity: 'critical', }, annotations: { summary: 'Redis down (instance {{ $labels.instance }})', description: 'Redis instance is down\n VALUE = {{ $value }}\n LABELS: {{ $labels }}', }, }, { alert: 'RedisOutOfMemory', expr: 'redis_memory_used_bytes{%(redisExporterSelector)s} / redis_total_system_memory_bytes{%(redisExporterSelector)s} * 100 > 90' % $._config, 'for': '5m', labels: { severity: 'warning', }, annotations: { summary: 'Redis out of memory (instance {{ $labels.instance }})', description: 'Redis is running out of memory (> 90%)\n VALUE = {{ $value }}\n LABELS: {{ $labels }}', }, }, { alert: 'RedisTooManyConnections', expr: 'redis_connected_clients{%(redisExporterSelector)s} > %(redisConnectionsThreshold)s' % $._config, 'for': '5m', labels: { severity: 'warning', }, annotations: { summary: 'Redis too many connections (instance {{ $labels.instance }})', description: 'Redis instance has too many connections\n VALUE = {{ $value }}\n LABELS: {{ $labels }}', }, }, { alert: 'RedisClusterSlotFail', expr: 'redis_cluster_slots_fail{%(redisExporterSelector)s} > 0' % $._config, 'for': '5m', labels: { severity: 'warning', }, annotations: { summary: 'Number of hash slots mapping to a node in FAIL state (instance {{ $labels.instance }})', description: 'Redis cluster has slots fail\n VALUE = {{ $value }}\n LABELS: {{ $labels }}', }, }, { alert: 'RedisClusterSlotPfail', expr: 'redis_cluster_slots_pfail{%(redisExporterSelector)s} > 0' % $._config, 'for': '5m', labels: { severity: 'warning', }, annotations: { summary: 'Number of hash slots mapping to a node in PFAIL state (instance {{ $labels.instance }})', description: 'Redis cluster has slots pfail\n VALUE = {{ $value }}\n LABELS: {{ $labels }}', }, }, { alert: 'RedisClusterStateNotOk', expr: 'redis_cluster_state{%(redisExporterSelector)s} == 0' % $._config, 'for': '5m', labels: { severity: 'critical', }, annotations: { summary: 'Redis cluster state is not ok (instance {{ $labels.instance }})', description: 'Redis cluster is not ok\n VALUE = 
{{ $value }}\n LABELS: {{ $labels }}', }, }, ], }, ], }, } redis_exporter-1.69.0/contrib/redis-mixin/config.libsonnet000066400000000000000000000001501476520031400237170ustar00rootroot00000000000000{ _config+:: { redisConnectionsThreshold: '100', redisExporterSelector: 'job="redis"', }, } redis_exporter-1.69.0/contrib/redis-mixin/dashboards/000077500000000000000000000000001476520031400226515ustar00rootroot00000000000000redis_exporter-1.69.0/contrib/redis-mixin/dashboards/redis-overview.json000077500000000000000000000746361476520031400265410ustar00rootroot00000000000000{ "annotations": { "list": [ { "builtIn": 1, "datasource": "-- Grafana --", "enable": true, "hide": true, "iconColor": "rgba(0, 211, 255, 1)", "name": "Annotations & Alerts", "type": "dashboard" } ] }, "description": "Redis Dashboard for Prometheus Redis Exporter 1.x", "editable": true, "gnetId": 763, "graphTooltip": 1, "id": 24, "iteration": 1602758020790, "links": [], "panels": [ { "collapsed": false, "datasource": null, "gridPos": { "h": 1, "w": 24, "x": 0, "y": 0 }, "id": 24, "panels": [], "title": "Performance", "type": "row" }, { "aliasColors": {}, "bars": false, "dashLength": 10, "dashes": false, "datasource": "$datasource", "description": "Average taken across instances", "editable": true, "error": false, "fieldConfig": { "defaults": { "custom": {} }, "overrides": [] }, "fill": 1, "fillGradient": 0, "grid": {}, "gridPos": { "h": 7, "w": 8, "x": 0, "y": 1 }, "hiddenSeries": false, "id": 18, "isNew": true, "legend": { "avg": false, "current": false, "hideEmpty": false, "hideZero": false, "max": false, "min": false, "show": true, "total": false, "values": false }, "lines": true, "linewidth": 1, "links": [], "nullPointMode": "connected", "options": { "dataLinks": [] }, "percentage": false, "pointradius": 5, "points": false, "renderer": "flot", "seriesOverrides": [], "spaceLength": 10, "stack": true, "steppedLine": false, "targets": [ { "expr": "avg(irate(redis_commands_total{instance=~\"$instance\"} [$__rate_interval])) by (cmd)", "format": "time_series", "interval": "", "intervalFactor": 2, "legendFormat": "{{cmd}}", "metric": "redis_command_calls_total", "refId": "A", "step": 240 } ], "thresholds": [], "timeFrom": null, "timeRegions": [], "timeShift": null, "title": "Commands per second", "tooltip": { "msResolution": true, "shared": true, "sort": 2, "value_type": "individual" }, "type": "graph", "xaxis": { "buckets": null, "mode": "time", "name": null, "show": true, "values": [] }, "yaxes": [ { "format": "short", "label": null, "logBase": 1, "max": null, "min": null, "show": true }, { "format": "short", "label": null, "logBase": 1, "max": null, "min": null, "show": true } ], "yaxis": { "align": false, "alignLevel": null } }, { "aliasColors": {}, "bars": false, "dashLength": 10, "dashes": false, "datasource": "$datasource", "description": "Average taken across instances", "editable": true, "error": false, "fieldConfig": { "defaults": { "custom": {} }, "overrides": [] }, "fill": 1, "fillGradient": 0, "grid": {}, "gridPos": { "h": 7, "w": 8, "x": 8, "y": 1 }, "hiddenSeries": false, "id": 20, "isNew": true, "legend": { "avg": false, "current": false, "hideEmpty": false, "hideZero": true, "max": false, "min": false, "show": true, "total": false, "values": false }, "lines": true, "linewidth": 1, "links": [], "nullPointMode": "connected", "options": { "dataLinks": [] }, "percentage": false, "pointradius": 5, "points": false, "renderer": "flot", "seriesOverrides": [], "spaceLength": 10, "stack": false, "steppedLine": false, 
"targets": [ { "expr": "avg(irate(redis_commands_duration_seconds_total{instance=~\"$instance\"}[$__rate_interval])) by (cmd)\n /\navg(irate(redis_commands_total{instance=~\"$instance\"}[$__rate_interval])) by (cmd)\n", "format": "time_series", "interval": "", "intervalFactor": 2, "legendFormat": "{{ cmd }}", "metric": "redis_command_calls_total", "refId": "A", "step": 240 } ], "thresholds": [], "timeFrom": null, "timeRegions": [], "timeShift": null, "title": "Command latency per second", "tooltip": { "msResolution": true, "shared": true, "sort": 2, "value_type": "individual" }, "type": "graph", "xaxis": { "buckets": null, "mode": "time", "name": null, "show": true, "values": [] }, "yaxes": [ { "format": "s", "label": null, "logBase": 1, "max": null, "min": null, "show": true }, { "format": "short", "label": null, "logBase": 1, "max": null, "min": null, "show": true } ], "yaxis": { "align": false, "alignLevel": null } }, { "aliasColors": { "Hit ratio": "blue" }, "bars": false, "dashLength": 10, "dashes": false, "datasource": "$datasource", "decimals": 2, "description": "Hit rate shows the percentage of key space lookups that hit a key.", "editable": true, "error": false, "fieldConfig": { "defaults": { "custom": {} }, "overrides": [] }, "fill": 1, "fillGradient": 0, "grid": {}, "gridPos": { "h": 7, "w": 8, "x": 16, "y": 1 }, "hiddenSeries": false, "id": 1, "isNew": true, "legend": { "avg": false, "current": false, "max": false, "min": false, "show": true, "total": false, "values": false }, "lines": true, "linewidth": 1, "links": [], "nullPointMode": "connected", "options": { "dataLinks": [] }, "percentage": true, "pointradius": 5, "points": false, "renderer": "flot", "seriesOverrides": [ { "alias": "/Target/", "color": "#56A64B", "dashes": true, "fill": 0, "hideTooltip": true, "linewidth": 1 } ], "spaceLength": 10, "stack": false, "steppedLine": false, "targets": [ { "expr": "avg(irate(redis_keyspace_hits_total{instance=~\"$instance\"}[$__rate_interval]) / (irate(redis_keyspace_misses_total{instance=~\"$instance\"}[$__rate_interval]) + irate(redis_keyspace_hits_total{instance=~\"$instance\"}[$__rate_interval]))) by (instance)", "format": "time_series", "hide": false, "interval": "", "intervalFactor": 2, "legendFormat": "{{instance}}", "metric": "", "refId": "A", "step": 240, "target": "" }, { "expr": "1", "interval": "", "legendFormat": "Target hit ratio for cache", "refId": "B" } ], "thresholds": [], "timeFrom": null, "timeRegions": [], "timeShift": null, "title": "Hit ratio per instance", "tooltip": { "msResolution": false, "shared": true, "sort": 0, "value_type": "individual" }, "type": "graph", "xaxis": { "buckets": null, "mode": "time", "name": null, "show": true, "values": [] }, "yaxes": [ { "format": "percentunit", "label": "", "logBase": 1, "max": null, "min": 0, "show": true }, { "format": "short", "label": null, "logBase": 1, "max": null, "min": null, "show": true } ], "yaxis": { "align": false, "alignLevel": null } }, { "collapsed": false, "datasource": null, "gridPos": { "h": 1, "w": 24, "x": 0, "y": 8 }, "id": 22, "panels": [], "title": "Memory", "type": "row" }, { "aliasColors": { "max": "#BF1B00" }, "bars": false, "dashLength": 10, "dashes": false, "datasource": "$datasource", "description": "Total taken across instances", "editable": true, "error": false, "fieldConfig": { "defaults": { "custom": {} }, "overrides": [] }, "fill": 1, "fillGradient": 0, "grid": {}, "gridPos": { "h": 7, "w": 8, "x": 0, "y": 9 }, "hiddenSeries": false, "id": 7, "isNew": true, "legend": { "avg": 
false, "current": false, "hideEmpty": false, "hideZero": false, "max": false, "min": false, "show": true, "total": false, "values": false }, "lines": true, "linewidth": 1, "links": [], "nullPointMode": "null as zero", "options": { "dataLinks": [] }, "percentage": false, "pointradius": 5, "points": false, "renderer": "flot", "seriesOverrides": [ { "alias": "/max/", "color": "#E02F44", "dashes": true, "fill": 0, "linewidth": 1 } ], "spaceLength": 10, "stack": false, "steppedLine": false, "targets": [ { "expr": "sum(redis_memory_used_bytes{instance=~\"$instance\"})", "format": "time_series", "interval": "", "intervalFactor": 2, "legendFormat": "Used Memory", "metric": "", "refId": "A", "step": 240, "target": "" }, { "expr": "sum(redis_memory_max_bytes{instance=~\"$instance\"})", "format": "time_series", "hide": false, "interval": "", "intervalFactor": 2, "legendFormat": "Configured max memory", "refId": "B", "step": 240 }, { "expr": "sum(redis_memory_used_rss_bytes{instance=~\"$instance\"})", "interval": "", "legendFormat": "Used RSS memory", "refId": "C" } ], "thresholds": [], "timeFrom": null, "timeRegions": [], "timeShift": null, "title": "Total Memory Usage", "tooltip": { "msResolution": false, "shared": true, "sort": 0, "value_type": "cumulative" }, "type": "graph", "xaxis": { "buckets": null, "mode": "time", "name": null, "show": true, "values": [] }, "yaxes": [ { "format": "bytes", "label": null, "logBase": 1, "max": null, "min": 0, "show": true }, { "format": "short", "label": null, "logBase": 1, "max": null, "min": null, "show": true } ], "yaxis": { "align": false, "alignLevel": null } }, { "aliasColors": { "Recommend restart redis": "red" }, "bars": false, "dashLength": 10, "dashes": false, "datasource": "$datasource", "editable": true, "error": false, "fieldConfig": { "defaults": { "custom": {} }, "overrides": [] }, "fill": 1, "fillGradient": 0, "grid": {}, "gridPos": { "h": 7, "w": 8, "x": 8, "y": 9 }, "hiddenSeries": false, "id": 10, "isNew": true, "legend": { "avg": false, "current": false, "max": false, "min": false, "show": true, "total": false, "values": false }, "lines": true, "linewidth": 1, "links": [], "nullPointMode": "connected", "options": { "dataLinks": [] }, "percentage": false, "pointradius": 5, "points": false, "renderer": "flot", "seriesOverrides": [ { "alias": "/restart/", "color": "#E02F44", "dashes": true, "fill": 0, "linewidth": 1 } ], "spaceLength": 10, "stack": false, "steppedLine": false, "targets": [ { "expr": "redis_memory_fragmentation_ratio{instance=~\"$instance\"}", "hide": false, "interval": "", "legendFormat": "{{instance}}", "refId": "C" } ], "thresholds": [], "timeFrom": null, "timeRegions": [], "timeShift": null, "title": "Memory fragmentation ratio per instance", "tooltip": { "msResolution": true, "shared": true, "sort": 0, "value_type": "cumulative" }, "type": "graph", "xaxis": { "buckets": null, "mode": "time", "name": null, "show": true, "values": [] }, "yaxes": [ { "format": "short", "label": null, "logBase": 1, "max": null, "min": null, "show": true }, { "format": "short", "label": null, "logBase": 1, "max": null, "min": null, "show": true } ], "yaxis": { "align": false, "alignLevel": null } }, { "aliasColors": { "Evictions": "red", "evicts": "#890F02", "memcached_items_evicted_total{instance=\"172.17.0.1:9150\",job=\"prometheus\"}": "#890F02", "reclaims": "#3F6833", "{container=\"redis-exporter\", instance=\"redis-86cb5d76d7-fcdln:redis-exporter:redis-metrics\", job=\"default/redis\", namespace=\"default\", pod=\"redis-86cb5d76d7-fcdln\"}": 
"red", "{instance=\"redis-86cb5d76d7-fcdln:redis-exporter:redis-metrics\"}": "red" }, "bars": false, "dashLength": 10, "dashes": false, "datasource": "$datasource", "editable": true, "error": false, "fieldConfig": { "defaults": { "custom": {} }, "overrides": [] }, "fill": 1, "fillGradient": 0, "grid": {}, "gridPos": { "h": 7, "w": 8, "x": 16, "y": 9 }, "hiddenSeries": false, "id": 8, "isNew": true, "legend": { "avg": false, "current": false, "max": false, "min": false, "show": true, "total": false, "values": false }, "lines": true, "linewidth": 1, "links": [], "nullPointMode": "connected", "options": { "dataLinks": [] }, "percentage": false, "pointradius": 5, "points": false, "renderer": "flot", "seriesOverrides": [ { "alias": "reclaims", "yaxis": 2 } ], "spaceLength": 10, "stack": false, "steppedLine": false, "targets": [ { "expr": "irate(redis_evicted_keys_total{instance=~\"$instance\"}[$__rate_interval])", "format": "time_series", "interval": "", "intervalFactor": 2, "legendFormat": "{{instance}}", "refId": "B", "step": 240 } ], "thresholds": [], "timeFrom": null, "timeRegions": [], "timeShift": null, "title": "Key evictions per second per instance", "tooltip": { "msResolution": false, "shared": true, "sort": 0, "value_type": "cumulative" }, "type": "graph", "xaxis": { "buckets": null, "mode": "time", "name": null, "show": true, "values": [] }, "yaxes": [ { "format": "short", "label": null, "logBase": 1, "max": null, "min": null, "show": true }, { "format": "short", "label": null, "logBase": 1, "max": null, "min": null, "show": true } ], "yaxis": { "align": false, "alignLevel": null } }, { "collapsed": false, "datasource": null, "gridPos": { "h": 1, "w": 24, "x": 0, "y": 16 }, "id": 26, "panels": [], "title": "Basic activity", "type": "row" }, { "aliasColors": {}, "bars": false, "dashLength": 10, "dashes": false, "datasource": "$datasource", "description": "Sum taken across instances", "fieldConfig": { "defaults": { "custom": {} }, "overrides": [] }, "fill": 1, "fillGradient": 0, "gridPos": { "h": 7, "w": 8, "x": 0, "y": 17 }, "hiddenSeries": false, "id": 16, "legend": { "avg": false, "current": false, "max": false, "min": false, "show": true, "total": false, "values": false }, "lines": true, "linewidth": 1, "links": [], "nullPointMode": "null", "options": { "dataLinks": [] }, "percentage": false, "pointradius": 2, "points": false, "renderer": "flot", "seriesOverrides": [], "spaceLength": 10, "stack": false, "steppedLine": false, "targets": [ { "expr": "sum(redis_connected_clients{instance=~\"$instance\"})", "format": "time_series", "interval": "", "intervalFactor": 1, "legendFormat": "Connected", "refId": "A" }, { "expr": "sum(redis_blocked_clients{instance=~\"$instance\"})", "format": "time_series", "interval": "", "intervalFactor": 1, "legendFormat": "Blocked", "refId": "B" } ], "thresholds": [], "timeFrom": null, "timeRegions": [], "timeShift": null, "title": "Connected/Blocked Clients", "tooltip": { "shared": true, "sort": 0, "value_type": "individual" }, "type": "graph", "xaxis": { "buckets": null, "mode": "time", "name": null, "show": true, "values": [] }, "yaxes": [ { "format": "short", "label": null, "logBase": 1, "max": null, "min": null, "show": true }, { "format": "short", "label": null, "logBase": 1, "max": null, "min": null, "show": true } ], "yaxis": { "align": false, "alignLevel": null } }, { "aliasColors": { "db1": "yellow" }, "bars": false, "dashLength": 10, "dashes": false, "datasource": "$datasource", "description": "Sum taken across instances", "editable": true, 
"error": false, "fieldConfig": { "defaults": { "custom": {} }, "overrides": [] }, "fill": 1, "fillGradient": 0, "grid": {}, "gridPos": { "h": 7, "w": 8, "x": 8, "y": 17 }, "hiddenSeries": false, "id": 5, "isNew": true, "legend": { "alignAsTable": false, "avg": false, "current": false, "hideEmpty": false, "hideZero": true, "max": false, "min": false, "rightSide": false, "show": true, "total": false, "values": false }, "lines": true, "linewidth": 1, "links": [], "nullPointMode": "null", "options": { "dataLinks": [] }, "percentage": false, "pointradius": 5, "points": false, "renderer": "flot", "seriesOverrides": [], "spaceLength": 10, "stack": true, "steppedLine": false, "targets": [ { "expr": "sum (redis_db_keys{instance=~\"$instance\"}) by (db)", "format": "time_series", "interval": "", "intervalFactor": 1, "legendFormat": "{{ db }}", "refId": "A", "step": 240, "target": "" } ], "thresholds": [], "timeFrom": null, "timeRegions": [], "timeShift": null, "title": "Total Items per DB", "tooltip": { "msResolution": false, "shared": true, "sort": 1, "value_type": "individual" }, "type": "graph", "xaxis": { "buckets": null, "mode": "time", "name": null, "show": true, "values": [] }, "yaxes": [ { "format": "none", "label": null, "logBase": 1, "max": null, "min": null, "show": true }, { "format": "short", "label": null, "logBase": 1, "max": null, "min": null, "show": true } ], "yaxis": { "align": false, "alignLevel": null } }, { "aliasColors": {}, "bars": false, "dashLength": 10, "dashes": false, "datasource": "$datasource", "description": "Sum taken across instances", "editable": true, "error": false, "fieldConfig": { "defaults": { "custom": {} }, "overrides": [] }, "fill": 1, "fillGradient": 0, "grid": {}, "gridPos": { "h": 7, "w": 8, "x": 16, "y": 17 }, "hiddenSeries": false, "id": 13, "isNew": true, "legend": { "avg": false, "current": false, "max": false, "min": false, "show": true, "total": false, "values": false }, "lines": true, "linewidth": 1, "links": [], "nullPointMode": "connected", "options": { "dataLinks": [] }, "percentage": false, "pointradius": 5, "points": false, "renderer": "flot", "seriesOverrides": [], "spaceLength": 10, "stack": true, "steppedLine": false, "targets": [ { "expr": "sum (redis_db_keys{instance=~\"$instance\"}) - sum (redis_db_keys_expiring{instance=~\"$instance\"})", "format": "time_series", "hide": false, "interval": "", "intervalFactor": 2, "legendFormat": "Not expiring", "refId": "A", "step": 240, "target": "" }, { "expr": "sum(redis_db_keys_expiring{instance=~\"$instance\"})", "format": "time_series", "interval": "", "intervalFactor": 2, "legendFormat": "Expiring", "metric": "", "refId": "B", "step": 240 } ], "thresholds": [], "timeFrom": null, "timeRegions": [], "timeShift": null, "title": "Expiring vs Not-Expiring Keys", "tooltip": { "msResolution": false, "shared": true, "sort": 0, "value_type": "individual" }, "type": "graph", "xaxis": { "buckets": null, "mode": "time", "name": null, "show": true, "values": [] }, "yaxes": [ { "format": "short", "label": null, "logBase": 1, "max": null, "min": null, "show": true }, { "format": "short", "label": null, "logBase": 1, "max": null, "min": null, "show": true } ], "yaxis": { "align": false, "alignLevel": null } }, { "aliasColors": {}, "bars": false, "dashLength": 10, "dashes": false, "datasource": "$datasource", "description": "This metric will only be non-zero if the instance is a master", "fieldConfig": { "defaults": { "custom": {} }, "overrides": [] }, "fill": 1, "fillGradient": 0, "gridPos": { "h": 7, "w": 8, 
"x": 0, "y": 24 }, "hiddenSeries": false, "id": 28, "legend": { "avg": false, "current": false, "max": false, "min": false, "show": true, "total": false, "values": false }, "lines": true, "linewidth": 1, "nullPointMode": "null", "options": { "dataLinks": [] }, "percentage": false, "pointradius": 2, "points": false, "renderer": "flot", "seriesOverrides": [], "spaceLength": 10, "stack": false, "steppedLine": false, "targets": [ { "expr": "sum(redis_connected_slaves{instance=~\"$instance\"}) by (instance)", "interval": "", "legendFormat": "{{instance}}", "refId": "A" } ], "thresholds": [], "timeFrom": null, "timeRegions": [], "timeShift": null, "title": "Connected slaves by instance", "tooltip": { "shared": true, "sort": 0, "value_type": "individual" }, "type": "graph", "xaxis": { "buckets": null, "mode": "time", "name": null, "show": true, "values": [] }, "yaxes": [ { "format": "short", "label": null, "logBase": 1, "max": null, "min": null, "show": true }, { "format": "short", "label": null, "logBase": 1, "max": null, "min": null, "show": true } ], "yaxis": { "align": false, "alignLevel": null } }, { "aliasColors": {}, "bars": false, "dashLength": 10, "dashes": false, "datasource": "$datasource", "description": "This metric is only exported if the instance is a slave.", "fieldConfig": { "defaults": { "custom": {} }, "overrides": [] }, "fill": 1, "fillGradient": 0, "gridPos": { "h": 7, "w": 8, "x": 8, "y": 24 }, "hiddenSeries": false, "id": 30, "legend": { "avg": false, "current": false, "max": false, "min": false, "show": true, "total": false, "values": false }, "lines": true, "linewidth": 1, "nullPointMode": "null", "options": { "dataLinks": [] }, "percentage": false, "pointradius": 2, "points": false, "renderer": "flot", "seriesOverrides": [], "spaceLength": 10, "stack": false, "steppedLine": false, "targets": [ { "expr": "redis_master_last_io_seconds_ago{instance=~\"$instance\"}", "interval": "", "legendFormat": "{{instance}}", "refId": "A" } ], "thresholds": [], "timeFrom": null, "timeRegions": [], "timeShift": null, "title": "Time since last master connection", "tooltip": { "shared": true, "sort": 0, "value_type": "individual" }, "type": "graph", "xaxis": { "buckets": null, "mode": "time", "name": null, "show": true, "values": [] }, "yaxes": [ { "format": "s", "label": null, "logBase": 1, "max": null, "min": null, "show": true }, { "format": "short", "label": null, "logBase": 1, "max": null, "min": null, "show": true } ], "yaxis": { "align": false, "alignLevel": null } } ], "refresh": false, "schemaVersion": 25, "style": "dark", "tags": [ "prometheus", "redis" ], "templating": { "list": [ { "allValue": null, "current": { "selected": true, "tags": [], "text": "redisdb-7d6b98cd98-kjt5x + redisdb-7d6b98cd98-pbkg6", "value": [ "redisdb-7d6b98cd98-kjt5x", "redisdb-7d6b98cd98-pbkg6" ] }, "datasource": "${datasource}", "definition": "label_values(redis_up, instance)", "hide": 0, "includeAll": false, "label": null, "multi": true, "name": "instance", "options": [], "query": "label_values(redis_up, instance)", "refresh": 2, "regex": "", "skipUrlSync": false, "sort": 1, "tagValuesQuery": "", "tags": [], "tagsQuery": "", "type": "query", "useTags": false }, { "current": { "selected": false, "text": "prometheus", "value": "prometheus" }, "hide": 0, "includeAll": false, "label": "Data Source", "multi": false, "name": "datasource", "options": [], "query": "prometheus", "queryValue": "", "refresh": 1, "regex": "", "skipUrlSync": false, "type": "datasource" } ] }, "time": { "from": "now-12h", "to": 
"now" }, "timepicker": { "refresh_intervals": [ "10s", "30s", "1m", "5m", "15m", "30m", "1h", "2h", "1d" ], "time_options": [ "5m", "15m", "1h", "6h", "12h", "24h", "2d", "7d", "30d" ] }, "timezone": "browser", "title": "Redis Dashboard for Prometheus Redis Exporter 1.x", "uid": "bRd48yKMdd", "version": 5 } redis_exporter-1.69.0/contrib/redis-mixin/dashboards/redis.libsonnet000066400000000000000000000001351476520031400256750ustar00rootroot00000000000000{ grafanaDashboards+:: { 'redis-overview.json': (import 'redis-overview.json'), }, } redis_exporter-1.69.0/contrib/redis-mixin/mixin.libsonnet000077500000000000000000000002131476520031400236010ustar00rootroot00000000000000(import 'alerts/redis.libsonnet') + (import 'rules/redis.libsonnet') + (import 'dashboards/redis.libsonnet') + (import 'config.libsonnet') redis_exporter-1.69.0/contrib/redis-mixin/rules/000077500000000000000000000000001476520031400216715ustar00rootroot00000000000000redis_exporter-1.69.0/contrib/redis-mixin/rules/redis.libsonnet000077500000000000000000000005361476520031400247250ustar00rootroot00000000000000{ prometheusRules+:: { groups+: [ { name: 'redis.rules', rules: [ { record: 'redis_memory_fragmentation_ratio', expr: 'redis_memory_used_rss_bytes{%(redisExporterSelector)s} / redis_memory_used_bytes{%(redisExporterSelector)s}' % $._config, }, ], }, ], }, } redis_exporter-1.69.0/contrib/sample-pwd-file.json000066400000000000000000000002161476520031400221670ustar00rootroot00000000000000{ "redis://localhost:16379": "", "redis://exporter@localhost:16390": "exporter-password", "redis://localhost:16380": "redis-password" } redis_exporter-1.69.0/contrib/sample-pwd-file.json-malformed000066400000000000000000000001161476520031400241320ustar00rootroot00000000000000{ "redis://redis6:6379": "", "redis://pwd-redis5:6380": "redis-password" redis_exporter-1.69.0/contrib/sample_collect_script.lua000066400000000000000000000012131476520031400233610ustar00rootroot00000000000000-- Example collect script for -script option -- This returns a Lua table with alternating keys and values. -- Both keys and values must be strings, similar to a HGETALL result. -- More info about Redis Lua scripting: https://redis.io/commands/eval local result = {} -- Add all keys and values from some hash in db 5 redis.call("SELECT", 5) local r = redis.call("HGETALL", "some-hash-with-stats") if r ~= nil then for _,v in ipairs(r) do table.insert(result, v) -- alternating keys and values end end -- Set foo to 42 table.insert(result, "foo") table.insert(result, "42") -- note the string, use tostring() if needed return result redis_exporter-1.69.0/contrib/tls/000077500000000000000000000000001476520031400171115ustar00rootroot00000000000000redis_exporter-1.69.0/contrib/tls/gen-test-certs.sh000077500000000000000000000014351476520031400223170ustar00rootroot00000000000000#!/bin/bash # Generate test certificates: # # ca.{crt,key} Self signed CA certificate. # redis.{crt,key} A certificate with no key usage/policy restrictions. 
dir=`dirname $0` # Generate CA openssl genrsa -out ${dir}/ca.key 4096 openssl req \ -x509 -new -nodes -sha256 \ -key ${dir}/ca.key \ -days 3650 \ -subj '/O=redis_exporter/CN=Certificate Authority' \ -out ${dir}/ca.crt # Generate cert openssl genrsa -out ${dir}/redis.key 2048 openssl req \ -new -sha256 \ -subj "/O=redis_exporter/CN=localhost" \ -key ${dir}/redis.key | \ openssl x509 \ -req -sha256 \ -CA ${dir}/ca.crt \ -CAkey ${dir}/ca.key \ -CAserial ${dir}/ca.txt \ -CAcreateserial \ -days 3650 \ -out ${dir}/redis.crt redis_exporter-1.69.0/docker-compose.yml000066400000000000000000000055171476520031400203140ustar00rootroot00000000000000services: redis7: image: redis:7.4 command: "redis-server --enable-debug-command yes --protected-mode no" ports: - "16385:6379" - "6379:6379" redis7-tls: image: redis:7.4 volumes: - ./contrib/tls:/tls command: | redis-server --enable-debug-command yes --protected-mode no --tls-port 6379 --port 0 --tls-cert-file /tls/redis.crt --tls-key-file /tls/redis.key --tls-ca-cert-file /tls/ca.crt ports: - "16387:6379" valkey8: image: valkey/valkey:8 command: "valkey-server --enable-debug-command yes --protected-mode no" ports: - "16382:6379" valkey8-tls: image: valkey/valkey:8 volumes: - ./contrib/tls:/tls command: | valkey-server --enable-debug-command yes --protected-mode no --tls-port 6379 --port 0 --tls-cert-file /tls/redis.crt --tls-key-file /tls/redis.key --tls-ca-cert-file /tls/ca.crt ports: - "16386:6379" valkey7: image: valkey/valkey:7.2 command: "valkey-server --enable-debug-command yes --protected-mode no" ports: - "16384:6379" redis6: image: redis:6.2 command: "redis-server --protected-mode no" ports: - "16379:6379" redis5: image: redis:5 command: "redis-server" ports: - "16383:6379" pwd-redis7: image: redis:7.4 command: "redis-server --protected-mode no --requirepass redis-password" ports: - "16380:6379" pwd-user-redis7: image: redis:7.4 command: "redis-server --protected-mode no --requirepass dummy --user exporter on +CLIENT +INFO +SELECT +SLOWLOG +LATENCY '>exporter-password'" ports: - "16390:6379" redis-2-8: image: redis:2.8 command: "redis-server" ports: - "16381:6379" keydb-01: image: "eqalpha/keydb:x86_64_v6.3.4" command: "keydb-server --protected-mode no" ports: - "16401:6379" keydb-02: image: "eqalpha/keydb:x86_64_v6.3.1" command: "keydb-server --protected-mode no --active-replica yes --replicaof keydb-01 6379" ports: - "16402:6379" redis-cluster: image: grokzen/redis-cluster:6.2.14 environment: - IP=0.0.0.0 ports: - 7000-7005:7000-7005 - 17000-17005:7000-7005 redis-cluster-password: image: bitnami/redis-cluster:7.4 environment: - REDIS_PORT_NUMBER=7006 - REDIS_PASSWORD=redis-password - REDIS_CLUSTER_CREATOR=yes - REDIS_NODES=redis-cluster-password:7006 ports: - "17006:7006" redis-sentinel: image: docker.io/bitnami/redis-sentinel:6.2-debian-10 environment: - REDIS_MASTER_HOST=redis6 ports: - "26379:26379" tile38: image: tile38/tile38:latest ports: - "19851:9851" redis-stack: image: redis/redis-stack-server:7.4.0-v0 ports: - "36379:6379" redis_exporter-1.69.0/exporter/000077500000000000000000000000001476520031400165175ustar00rootroot00000000000000redis_exporter-1.69.0/exporter/clients.go000066400000000000000000000236241476520031400205160ustar00rootroot00000000000000package exporter import ( "regexp" "strconv" "strings" "time" "github.com/gomodule/redigo/redis" "github.com/prometheus/client_golang/prometheus" log "github.com/sirupsen/logrus" ) type ClientInfo struct { Id, Name, User, Flags, Db, Host, Port, Resp string CreatedAt, IdleSince, Sub, Psub, 
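// Ssub (CLIENT LIST field "ssub", Redis 7.0.3+) and Watch ("watch", Redis 7.4+)
// are set to -1 by the parser when the server does not report them.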
Ssub, Watch, Qbuf, QbufFree, Obl, Oll, OMem, TotMem int64 } /* Valid Examples id=11 addr=127.0.0.1:63508 fd=8 name= age=6321 idle=6320 flags=N db=0 sub=0 psub=0 multi=-1 qbuf=0 qbuf-free=0 obl=0 oll=0 omem=0 events=r cmd=setex user=default resp=2 id=14 addr=127.0.0.1:64958 fd=9 name= age=5 idle=0 flags=N db=0 sub=0 psub=0 multi=-1 qbuf=26 qbuf-free=32742 obl=0 oll=0 omem=0 events=r cmd=client user=default resp=3 id=40253233 addr=fd40:1481:21:dbe0:7021:300:a03:1a06:44426 fd=19 name= age=782 idle=0 flags=N db=0 sub=0 psub=0 multi=-1 qbuf=26 qbuf-free=32742 argv-mem=10 obl=0 oll=0 omem=0 tot-mem=61466 ow=0 owmem=0 events=r cmd=client user=default lib-name=redis-py lib-ver=5.0.1 numops=9 */ func parseClientListString(clientInfo string) (*ClientInfo, bool) { if matched, _ := regexp.MatchString(`^id=\d+ addr=\S+`, clientInfo); !matched { return nil, false } connectedClient := ClientInfo{} connectedClient.Ssub = -1 // mark it as missing - introduced in Redis 7.0.3 connectedClient.Watch = -1 // mark it as missing - introduced in Redis 7.4 for _, kvPart := range strings.Split(clientInfo, " ") { vPart := strings.Split(kvPart, "=") if len(vPart) != 2 { log.Debugf("Invalid format for client list string, got: %s", kvPart) return nil, false } switch vPart[0] { case "id": connectedClient.Id = vPart[1] case "name": connectedClient.Name = vPart[1] case "user": connectedClient.User = vPart[1] case "age": createdAt, err := durationFieldToTimestamp(vPart[1]) if err != nil { log.Debugf("could not parse 'age' field(%s): %s", vPart[1], err.Error()) return nil, false } connectedClient.CreatedAt = createdAt case "idle": idleSinceTs, err := durationFieldToTimestamp(vPart[1]) if err != nil { log.Debugf("could not parse 'idle' field(%s): %s", vPart[1], err.Error()) return nil, false } connectedClient.IdleSince = idleSinceTs case "flags": connectedClient.Flags = vPart[1] case "db": connectedClient.Db = vPart[1] case "sub": connectedClient.Sub, _ = strconv.ParseInt(vPart[1], 10, 64) case "psub": connectedClient.Psub, _ = strconv.ParseInt(vPart[1], 10, 64) case "ssub": connectedClient.Ssub, _ = strconv.ParseInt(vPart[1], 10, 64) case "watch": connectedClient.Watch, _ = strconv.ParseInt(vPart[1], 10, 64) case "qbuf": connectedClient.Qbuf, _ = strconv.ParseInt(vPart[1], 10, 64) case "qbuf-free": connectedClient.QbufFree, _ = strconv.ParseInt(vPart[1], 10, 64) case "obl": connectedClient.Obl, _ = strconv.ParseInt(vPart[1], 10, 64) case "oll": connectedClient.Oll, _ = strconv.ParseInt(vPart[1], 10, 64) case "omem": connectedClient.OMem, _ = strconv.ParseInt(vPart[1], 10, 64) case "tot-mem": connectedClient.TotMem, _ = strconv.ParseInt(vPart[1], 10, 64) case "addr": hostPortString := strings.Split(vPart[1], ":") if len(hostPortString) < 2 { log.Debug("Invalid value for 'addr' found in client info") return nil, false } connectedClient.Host = strings.Join(hostPortString[:len(hostPortString)-1], ":") connectedClient.Port = hostPortString[len(hostPortString)-1] case "resp": connectedClient.Resp = vPart[1] } } return &connectedClient, true } func durationFieldToTimestamp(field string) (int64, error) { parsed, err := strconv.ParseInt(field, 10, 64) if err != nil { return 0, err } return time.Now().Unix() - parsed, nil } func (e *Exporter) extractConnectedClientMetrics(ch chan<- prometheus.Metric, c redis.Conn) { reply, err := redis.String(doRedisCmd(c, "CLIENT", "LIST")) if err != nil { log.Errorf("CLIENT LIST err: %s", err) return } for _, c := range strings.Split(reply, "\n") { if info, ok := parseClientListString(c); ok { 
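// Each parsed client yields one connected_client_info series with the full
// descriptive label set, plus one gauge per numeric attribute labelled only
// by client id and name.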
clientInfoLabels := []string{"id", "name", "flags", "db", "host"} clientInfoLabelsValues := []string{info.Id, info.Name, info.Flags, info.Db, info.Host} if e.options.ExportClientsInclPort { clientInfoLabels = append(clientInfoLabels, "port") clientInfoLabelsValues = append(clientInfoLabelsValues, info.Port) } if user := info.User; user != "" { clientInfoLabels = append(clientInfoLabels, "user") clientInfoLabelsValues = append(clientInfoLabelsValues, user) } // introduced in Redis 7.0 if resp := info.Resp; resp != "" { clientInfoLabels = append(clientInfoLabels, "resp") clientInfoLabelsValues = append(clientInfoLabelsValues, resp) } e.metricDescriptions["connected_client_info"] = newMetricDescr( e.options.Namespace, "connected_client_info", "Details about a connected client", clientInfoLabels, ) e.registerConstMetricGauge( ch, "connected_client_info", 1.0, clientInfoLabelsValues..., ) clientBaseLabels := []string{"id", "name"} clientBaseLabelsValues := []string{info.Id, info.Name} e.metricDescriptions["connected_client_output_buffer_memory_usage_bytes"] = newMetricDescr( e.options.Namespace, "connected_client_output_buffer_memory_usage_bytes", "A connected client's output buffer memory usage in bytes", clientBaseLabels, ) e.registerConstMetricGauge( ch, "connected_client_output_buffer_memory_usage_bytes", float64(info.OMem), clientBaseLabelsValues..., ) e.metricDescriptions["connected_client_total_memory_consumed_bytes"] = newMetricDescr( e.options.Namespace, "connected_client_total_memory_consumed_bytes", "Total memory consumed by a client in its various buffers", clientBaseLabels, ) e.registerConstMetricGauge( ch, "connected_client_total_memory_consumed_bytes", float64(info.TotMem), clientBaseLabelsValues..., ) e.metricDescriptions["connected_client_created_at_timestamp"] = newMetricDescr( e.options.Namespace, "connected_client_created_at_timestamp", "A connected client's creation timestamp", clientBaseLabels, ) e.registerConstMetricGauge( ch, "connected_client_created_at_timestamp", float64(info.CreatedAt), clientBaseLabelsValues..., ) e.metricDescriptions["connected_client_idle_since_timestamp"] = newMetricDescr( e.options.Namespace, "connected_client_idle_since_timestamp", "A connected client's idle since timestamp", clientBaseLabels, ) e.registerConstMetricGauge( ch, "connected_client_idle_since_timestamp", float64(info.IdleSince), clientBaseLabelsValues..., ) e.metricDescriptions["connected_client_channel_subscriptions_count"] = newMetricDescr( e.options.Namespace, "connected_client_channel_subscriptions_count", "A connected client's number of channel subscriptions", clientBaseLabels, ) e.registerConstMetricGauge( ch, "connected_client_channel_subscriptions_count", float64(info.Sub), clientBaseLabelsValues..., ) e.metricDescriptions["connected_client_pattern_matching_subscriptions_count"] = newMetricDescr( e.options.Namespace, "connected_client_pattern_matching_subscriptions_count", "A connected client's number of pattern matching subscriptions", clientBaseLabels, ) e.registerConstMetricGauge( ch, "connected_client_pattern_matching_subscriptions_count", float64(info.Psub), clientBaseLabelsValues..., ) if info.Ssub != -1 { e.metricDescriptions["connected_client_shard_channel_subscriptions_count"] = newMetricDescr( e.options.Namespace, "connected_client_shard_channel_subscriptions_count", "a connected client's number of shard channel subscriptions", clientBaseLabels, ) e.registerConstMetricGauge( ch, "connected_client_shard_channel_subscriptions_count", float64(info.Ssub), 
clientBaseLabelsValues..., ) } if info.Watch != -1 { e.metricDescriptions["connected_client_shard_channel_watched_keys"] = newMetricDescr( e.options.Namespace, "connected_client_shard_channel_watched_keys", "a connected client's number of keys it's currently watching", clientBaseLabels, ) e.registerConstMetricGauge( ch, "connected_client_shard_channel_watched_keys", float64(info.Watch), clientBaseLabelsValues..., ) } e.metricDescriptions["connected_client_query_buffer_length_bytes"] = newMetricDescr( e.options.Namespace, "connected_client_query_buffer_length_bytes", "A connected client's query buffer length in bytes (0 means no query pending)", clientBaseLabels, ) e.registerConstMetricGauge( ch, "connected_client_query_buffer_length_bytes", float64(info.Qbuf), clientBaseLabelsValues..., ) e.metricDescriptions["connected_client_query_buffer_free_space_bytes"] = newMetricDescr( e.options.Namespace, "connected_client_query_buffer_free_space_bytes", "A connected client's free space of the query buffer in bytes (0 means the buffer is full)", clientBaseLabels, ) e.registerConstMetricGauge( ch, "connected_client_query_buffer_free_space_bytes", float64(info.QbufFree), clientBaseLabelsValues..., ) e.metricDescriptions["connected_client_output_buffer_length_bytes"] = newMetricDescr( e.options.Namespace, "connected_client_output_buffer_length_bytes", "A connected client's output buffer length in bytes", clientBaseLabels, ) e.registerConstMetricGauge( ch, "connected_client_output_buffer_length_bytes", float64(info.Obl), clientBaseLabelsValues..., ) e.metricDescriptions["connected_client_output_list_length"] = newMetricDescr( e.options.Namespace, "connected_client_output_list_length", "A connected client's output list length (replies are queued in this list when the buffer is full)", clientBaseLabels, ) e.registerConstMetricGauge( ch, "connected_client_output_list_length", float64(info.Oll), clientBaseLabelsValues..., ) } } } redis_exporter-1.69.0/exporter/clients_test.go000066400000000000000000000225001476520031400215450ustar00rootroot00000000000000package exporter import ( "os" "strings" "testing" "time" "github.com/prometheus/client_golang/prometheus" ) func TestDurationFieldToTimestamp(t *testing.T) { nowTs := time.Now().Unix() for _, tst := range []struct { in string expectedOk bool expectedVal int64 }{ { in: "123", expectedOk: true, expectedVal: nowTs - 123, }, { in: "0", expectedOk: true, expectedVal: nowTs - 0, }, { in: "abc", expectedOk: false, }, } { res, err := durationFieldToTimestamp(tst.in) if err == nil && !tst.expectedOk { t.Fatalf("expected not ok, but got no error, input: [%s]", tst.in) } else if err != nil && tst.expectedOk { t.Fatalf("expected ok, but got error: %s, input: [%s]", err, tst.in) } if tst.expectedOk { if res != tst.expectedVal { t.Fatalf("expected %d, but got: %d", tst.expectedVal, res) } } } } func TestParseClientListString(t *testing.T) { convertDurationToTimestampInt64 := func(duration string) int64 { ts, err := durationFieldToTimestamp(duration) if err != nil { panic(err) } return ts } tsts := []struct { in string expectedOk bool expectedInfo ClientInfo }{ { in: "id=11 addr=127.0.0.1:63508 fd=8 name= age=6321 idle=6320 flags=N db=0 sub=0 psub=0 multi=-1 qbuf=0 qbuf-free=0 obl=3 oll=8 omem=0 tot-mem=0 events=r cmd=setex", expectedOk: true, expectedInfo: ClientInfo{Id: "11", CreatedAt: convertDurationToTimestampInt64("6321"), IdleSince: convertDurationToTimestampInt64("6320"), Flags: "N", Db: "0", Ssub: -1, Watch: -1, Obl: 3, Oll: 8, OMem: 0, TotMem: 0, Host: 
"127.0.0.1", Port: "63508"}, }, { in: "id=14 addr=127.0.0.1:64958 fd=9 name=foo age=5 idle=0 flags=N db=1 sub=0 psub=0 multi=-1 qbuf=26 qbuf-free=32742 obl=0 oll=0 omem=0 tot-mem=0 events=r cmd=client", expectedOk: true, expectedInfo: ClientInfo{Id: "14", Name: "foo", CreatedAt: convertDurationToTimestampInt64("5"), IdleSince: convertDurationToTimestampInt64("0"), Flags: "N", Db: "1", Ssub: -1, Watch: -1, Qbuf: 26, QbufFree: 32742, OMem: 0, TotMem: 0, Host: "127.0.0.1", Port: "64958"}, }, { in: "id=14 addr=127.0.0.1:64959 fd=9 name= age=5 idle=0 flags=N db=0 sub=0 psub=0 multi=-1 qbuf=26 qbuf-free=32742 obl=0 oll=0 omem=0 tot-mem=0 events=r cmd=client user=default resp=3", expectedOk: true, expectedInfo: ClientInfo{Id: "14", CreatedAt: convertDurationToTimestampInt64("5"), IdleSince: convertDurationToTimestampInt64("0"), Flags: "N", Db: "0", Ssub: -1, Watch: -1, Qbuf: 26, QbufFree: 32742, OMem: 0, TotMem: 0, Host: "127.0.0.1", Port: "64959", User: "default", Resp: "3"}, }, { in: "id=40253233 addr=fd40:1481:21:dbe0:7021:300:a03:1a06:44426 fd=19 name= age=782 idle=0 flags=N db=0 sub=896 psub=18 ssub=17 watch=3 multi=-1 qbuf=26 qbuf-free=32742 argv-mem=10 obl=0 oll=555 omem=0 tot-mem=61466 ow=0 owmem=0 events=r cmd=client user=default lib-name=redis-py lib-ver=5.0.1 numops=9", expectedOk: true, expectedInfo: ClientInfo{Id: "40253233", CreatedAt: convertDurationToTimestampInt64("782"), IdleSince: convertDurationToTimestampInt64("0"), Flags: "N", Db: "0", Sub: 896, Psub: 18, Ssub: 17, Watch: 3, Qbuf: 26, QbufFree: 32742, Oll: 555, OMem: 0, TotMem: 61466, Host: "fd40:1481:21:dbe0:7021:300:a03:1a06", Port: "44426", User: "default"}, }, { in: "id=14 addr=127.0.0.1:64958 fd=9 name=foo age=ABCDE idle=0 flags=N db=1 sub=0 psub=0 multi=-1 qbuf=26 qbuf-free=32742 obl=0 oll=0 omem=0 tot-mem=0 events=r cmd=client", expectedOk: false, }, { in: "id=14 addr=127.0.0.1:64958 fd=9 name=foo age=5 idle=NOPE flags=N db=1 sub=0 psub=0 multi=-1 qbuf=26 qbuf-free=32742 obl=0 oll=0 omem=0 tot-mem=0 events=r cmd=client", expectedOk: false, }, { in: "id=14 addr=127.0.0.1:64958 sub=ERR", expectedOk: true, expectedInfo: ClientInfo{Id: "14", Sub: 0, Ssub: -1, Watch: -1, Host: "127.0.0.1", Port: "64958"}, }, { in: "id=14 addr=127.0.0.1:64958 psub=ERR", expectedOk: true, expectedInfo: ClientInfo{Id: "14", Psub: 0, Ssub: -1, Watch: -1, Host: "127.0.0.1", Port: "64958"}, }, { in: "id=14 addr=127.0.0.1:64958 ssub=ERR", expectedOk: true, expectedInfo: ClientInfo{Id: "14", Ssub: 0, Watch: -1, Host: "127.0.0.1", Port: "64958"}, }, { in: "id=14 addr=127.0.0.1:64958 watch=ERR", expectedOk: true, expectedInfo: ClientInfo{Id: "14", Ssub: -1, Watch: 0, Host: "127.0.0.1", Port: "64958"}, }, { in: "id=14 addr=127.0.0.1:64958 qbuf=ERR", expectedOk: true, expectedInfo: ClientInfo{Id: "14", Ssub: -1, Watch: -1, Qbuf: 0, Host: "127.0.0.1", Port: "64958"}, }, { in: "id=14 addr=127.0.0.1:64958 qbuf-free=ERR", expectedOk: true, expectedInfo: ClientInfo{Id: "14", Ssub: -1, Watch: -1, QbufFree: 0, Host: "127.0.0.1", Port: "64958"}, }, { in: "id=14 addr=127.0.0.1:64958 obl=ERR", expectedOk: true, expectedInfo: ClientInfo{Id: "14", Ssub: -1, Watch: -1, Obl: 0, Host: "127.0.0.1", Port: "64958"}, }, { in: "id=14 addr=127.0.0.1:64958 oll=ERR", expectedOk: true, expectedInfo: ClientInfo{Id: "14", Ssub: -1, Watch: -1, Oll: 0, Host: "127.0.0.1", Port: "64958"}, }, { in: "id=14 addr=127.0.0.1:64958 omem=ERR", expectedOk: true, expectedInfo: ClientInfo{Id: "14", Ssub: -1, Watch: -1, OMem: 0, Host: "127.0.0.1", Port: "64958"}, }, { in: "id=14 
addr=127.0.0.1:64958 tot-mem=ERR", expectedOk: true, expectedInfo: ClientInfo{Id: "14", Ssub: -1, Watch: -1, TotMem: 0, Host: "127.0.0.1", Port: "64958"}, }, { in: "", expectedOk: false, }, } for _, tst := range tsts { info, ok := parseClientListString(tst.in) if !tst.expectedOk { if ok { t.Errorf("expected NOT ok, but got ok, input: %s", tst.in) } continue } if *info != tst.expectedInfo { t.Errorf("TestParseClientListString( %s ) error. Given: %#v Wanted: %#v", tst.in, info, tst.expectedInfo) } } } func TestExportClientList(t *testing.T) { for _, isExportClientList := range []bool{true, false} { e := getTestExporterWithOptions(Options{ Namespace: "test", Registry: prometheus.NewRegistry(), ExportClientList: isExportClientList, }) chM := make(chan prometheus.Metric) go func() { e.Collect(chM) close(chM) }() tsts := []struct { in string found bool }{ {in: "connected_client_info"}, {in: "connected_client_output_buffer_memory_usage_bytes"}, {in: "connected_client_total_memory_consumed_bytes"}, {in: "connected_client_created_at_timestamp"}, {in: "connected_client_idle_since_timestamp"}, {in: "connected_client_channel_subscriptions_count"}, {in: "connected_client_pattern_matching_subscriptions_count"}, {in: "connected_client_query_buffer_length_bytes"}, {in: "connected_client_query_buffer_free_space_bytes"}, {in: "connected_client_output_buffer_length_bytes"}, {in: "connected_client_output_list_length"}, {in: "connected_client_shard_channel_subscriptions_count"}, {in: "connected_client_info"}, } for m := range chM { desc := m.Desc().String() for i := range tsts { if strings.Contains(desc, tsts[i].in) { tsts[i].found = true } } } for _, tst := range tsts { if isExportClientList && !tst.found { t.Errorf("%s was *not* found in isExportClientList metrics but expected", tst.in) } else if !isExportClientList && tst.found { t.Errorf("%s was *found* in isExportClientList metrics but *not* expected", tst.in) } } } } /* some metrics are only in redis 7 but not yet in valkey 7.2 like "connected_client_shard_channel_watched_keys" */ func TestExportClientListRedis7(t *testing.T) { redisSevenAddr := os.Getenv("TEST_REDIS7_URI") if redisSevenAddr == "" { t.Skipf("Skipping TestExportClientListRedis7, env var TEST_REDIS7_URI not set") } e := getTestExporterWithAddrAndOptions(redisSevenAddr, Options{ Namespace: "test", Registry: prometheus.NewRegistry(), ExportClientList: true, }) chM := make(chan prometheus.Metric) go func() { e.Collect(chM) close(chM) }() tsts := []struct { in string found bool }{ { in: "connected_client_shard_channel_subscriptions_count", }, { in: "connected_client_shard_channel_watched_keys", }, } for m := range chM { desc := m.Desc().String() for i := range tsts { if strings.Contains(desc, tsts[i].in) { tsts[i].found = true } } } for _, tst := range tsts { if !tst.found { t.Errorf(`%s was *not* found in isExportClientList metrics but expected`, tst.in) } } } func TestExportClientListInclPort(t *testing.T) { for _, inclPort := range []bool{true, false} { e := getTestExporterWithOptions(Options{ Namespace: "test", Registry: prometheus.NewRegistry(), ExportClientList: true, ExportClientsInclPort: inclPort, }) chM := make(chan prometheus.Metric) go func() { e.Collect(chM) close(chM) }() found := false for m := range chM { desc := m.Desc().String() if strings.Contains(desc, "connected_client_info") { if strings.Contains(desc, "port") { found = true } } } if inclPort && !found { t.Errorf(`connected_client_info did *not* include "port" in isExportClientList metrics but was expected`) } else if 
!inclPort && found { t.Errorf(`connected_client_info did *include* "port" in isExportClientList metrics but was *not* expected`) } } } redis_exporter-1.69.0/exporter/exporter.go000066400000000000000000001073141476520031400207240ustar00rootroot00000000000000package exporter import ( "fmt" "net/http" "net/url" "runtime" "strconv" "strings" "sync" "time" "github.com/gomodule/redigo/redis" "github.com/prometheus/client_golang/prometheus" "github.com/prometheus/client_golang/prometheus/promhttp" log "github.com/sirupsen/logrus" ) type BuildInfo struct { Version string CommitSha string Date string } // Exporter implements the prometheus.Exporter interface, and exports Redis metrics. type Exporter struct { sync.Mutex redisAddr string totalScrapes prometheus.Counter scrapeDuration prometheus.Summary targetScrapeRequestErrors prometheus.Counter metricDescriptions map[string]*prometheus.Desc options Options metricMapCounters map[string]string metricMapGauges map[string]string mux *http.ServeMux buildInfo BuildInfo } type Options struct { User string Password string Namespace string PasswordMap map[string]string ConfigCommandName string CheckKeys string CheckSingleKeys string CheckStreams string CheckSingleStreams string StreamsExcludeConsumerMetrics bool CheckKeysBatchSize int64 CheckKeyGroups string MaxDistinctKeyGroups int64 CountKeys string LuaScript map[string][]byte ClientCertFile string ClientKeyFile string CaCertFile string InclConfigMetrics bool InclModulesMetrics bool DisableExportingKeyValues bool ExcludeLatencyHistogramMetrics bool RedactConfigMetrics bool InclSystemMetrics bool SkipTLSVerification bool SetClientName bool IsTile38 bool IsCluster bool ExportClientList bool ExportClientsInclPort bool ConnectionTimeouts time.Duration MetricsPath string RedisMetricsOnly bool PingOnConnect bool RedisPwdFile string Registry *prometheus.Registry BuildInfo BuildInfo BasicAuthUsername string BasicAuthPassword string } // NewRedisExporter returns a new exporter of Redis metrics. 
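//
// A minimal usage sketch (assumes a Redis instance reachable at
// redis://localhost:6379, and relies on the Exporter's ServeHTTP handler,
// defined elsewhere in this package, to serve the configured MetricsPath):
//
//	e, err := NewRedisExporter("redis://localhost:6379",
//		Options{Namespace: "redis", Registry: prometheus.NewRegistry()})
//	if err != nil {
//		log.Fatal(err)
//	}
//	log.Fatal(http.ListenAndServe(":9121", e))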
func NewRedisExporter(uri string, opts Options) (*Exporter, error) { log.Debugf("NewRedisExporter options: %#v", opts) switch { case strings.HasPrefix(uri, "valkey://"): uri = strings.Replace(uri, "valkey://", "redis://", 1) case strings.HasPrefix(uri, "valkeys://"): uri = strings.Replace(uri, "valkeys://", "rediss://", 1) } log.Debugf("NewRedisExporter = using redis uri: %s", uri) e := &Exporter{ redisAddr: uri, options: opts, buildInfo: opts.BuildInfo, totalScrapes: prometheus.NewCounter(prometheus.CounterOpts{ Namespace: opts.Namespace, Name: "exporter_scrapes_total", Help: "Current total redis scrapes.", }), scrapeDuration: prometheus.NewSummary(prometheus.SummaryOpts{ Namespace: opts.Namespace, Name: "exporter_scrape_duration_seconds", Help: "Durations of scrapes by the exporter", }), targetScrapeRequestErrors: prometheus.NewCounter(prometheus.CounterOpts{ Namespace: opts.Namespace, Name: "target_scrape_request_errors_total", Help: "Errors in requests to the exporter", }), metricMapGauges: map[string]string{ // # Server "uptime_in_seconds": "uptime_in_seconds", "process_id": "process_id", "io_threads_active": "io_threads_active", // # Clients "connected_clients": "connected_clients", "blocked_clients": "blocked_clients", "maxclients": "max_clients", "tracking_clients": "tracking_clients", "clients_in_timeout_table": "clients_in_timeout_table", "pubsub_clients": "pubsub_clients", // Added in Redis 7.4 "watching_clients": "watching_clients", // Added in Redis 7.4 "total_watched_keys": "total_watched_keys", // Added in Redis 7.4 "total_blocking_keys": "total_blocking_keys", // Added in Redis 7.2 "total_blocking_keys_on_nokey": "total_blocking_keys_on_nokey", // Added in Redis 7.2 // redis 2,3,4.x "client_longest_output_list": "client_longest_output_list", "client_biggest_input_buf": "client_biggest_input_buf", // the above two metrics were renamed in redis 5.x "client_recent_max_output_buffer": "client_recent_max_output_buffer_bytes", "client_recent_max_input_buffer": "client_recent_max_input_buffer_bytes", // # Memory "allocator_active": "allocator_active_bytes", "allocator_allocated": "allocator_allocated_bytes", "allocator_resident": "allocator_resident_bytes", "allocator_frag_ratio": "allocator_frag_ratio", "allocator_frag_bytes": "allocator_frag_bytes", "allocator_rss_ratio": "allocator_rss_ratio", "allocator_rss_bytes": "allocator_rss_bytes", "used_memory": "memory_used_bytes", "used_memory_rss": "memory_used_rss_bytes", "used_memory_peak": "memory_used_peak_bytes", "used_memory_lua": "memory_used_lua_bytes", "used_memory_vm_eval": "memory_used_vm_eval_bytes", // Added in Redis 7.0 "used_memory_scripts_eval": "memory_used_scripts_eval_bytes", // Added in Redis 7.0 "used_memory_overhead": "memory_used_overhead_bytes", "used_memory_startup": "memory_used_startup_bytes", "used_memory_dataset": "memory_used_dataset_bytes", "number_of_cached_scripts": "number_of_cached_scripts", // Added in Redis 7.0 "number_of_functions": "number_of_functions", // Added in Redis 7.0 "number_of_libraries": "number_of_libraries", // Added in Redis 7.4 "used_memory_vm_functions": "memory_used_vm_functions_bytes", // Added in Redis 7.0 "used_memory_scripts": "memory_used_scripts_bytes", // Added in Redis 7.0 "used_memory_functions": "memory_used_functions_bytes", // Added in Redis 7.0 "used_memory_vm_total": "memory_used_vm_total", // Added in Redis 7.0 "maxmemory": "memory_max_bytes", "maxmemory_reservation": "memory_max_reservation_bytes", "maxmemory_desired_reservation": 
"memory_max_reservation_desired_bytes", "maxfragmentationmemory_reservation": "memory_max_fragmentation_reservation_bytes", "maxfragmentationmemory_desired_reservation": "memory_max_fragmentation_reservation_desired_bytes", "mem_fragmentation_ratio": "mem_fragmentation_ratio", "mem_fragmentation_bytes": "mem_fragmentation_bytes", "mem_clients_slaves": "mem_clients_slaves", "mem_clients_normal": "mem_clients_normal", "expired_stale_perc": "expired_stale_percentage", // https://github.com/antirez/redis/blob/17bf0b25c1171486e3a1b089f3181fff2bc0d4f0/src/evict.c#L349-L352 // ... the sum of AOF and slaves buffer .... "mem_not_counted_for_evict": "mem_not_counted_for_eviction_bytes", "mem_total_replication_buffers": "mem_total_replication_buffers_bytes", // Added in Redis 7.0 "mem_overhead_db_hashtable_rehashing": "mem_overhead_db_hashtable_rehashing_bytes", // Added in Redis 7.4 "lazyfree_pending_objects": "lazyfree_pending_objects", "active_defrag_running": "active_defrag_running", "migrate_cached_sockets": "migrate_cached_sockets_total", "active_defrag_hits": "defrag_hits", "active_defrag_misses": "defrag_misses", "active_defrag_key_hits": "defrag_key_hits", "active_defrag_key_misses": "defrag_key_misses", // https://github.com/antirez/redis/blob/0af467d18f9d12b137af3b709c0af579c29d8414/src/expire.c#L297-L299 "expired_time_cap_reached_count": "expired_time_cap_reached_total", // # Persistence "loading": "loading_dump_file", "async_loading": "async_loading", // Added in Redis 7.0 "rdb_changes_since_last_save": "rdb_changes_since_last_save", "rdb_bgsave_in_progress": "rdb_bgsave_in_progress", "rdb_last_save_time": "rdb_last_save_timestamp_seconds", "rdb_last_bgsave_status": "rdb_last_bgsave_status", "rdb_last_bgsave_time_sec": "rdb_last_bgsave_duration_sec", "rdb_current_bgsave_time_sec": "rdb_current_bgsave_duration_sec", "rdb_saves": "rdb_saves_total", "rdb_last_cow_size": "rdb_last_cow_size_bytes", "rdb_last_load_keys_expired": "rdb_last_load_expired_keys", // Added in Redis 7.0 "rdb_last_load_keys_loaded": "rdb_last_load_loaded_keys", // Added in Redis 7.0 "aof_enabled": "aof_enabled", "aof_rewrite_in_progress": "aof_rewrite_in_progress", "aof_rewrite_scheduled": "aof_rewrite_scheduled", "aof_last_rewrite_time_sec": "aof_last_rewrite_duration_sec", "aof_current_rewrite_time_sec": "aof_current_rewrite_duration_sec", "aof_last_cow_size": "aof_last_cow_size_bytes", "aof_current_size": "aof_current_size_bytes", "aof_base_size": "aof_base_size_bytes", "aof_pending_rewrite": "aof_pending_rewrite", "aof_buffer_length": "aof_buffer_length", "aof_rewrite_buffer_length": "aof_rewrite_buffer_length", // Added in Redis 7.0 "aof_pending_bio_fsync": "aof_pending_bio_fsync", "aof_delayed_fsync": "aof_delayed_fsync", "aof_last_bgrewrite_status": "aof_last_bgrewrite_status", "aof_last_write_status": "aof_last_write_status", "module_fork_in_progress": "module_fork_in_progress", "module_fork_last_cow_size": "module_fork_last_cow_size", // # Stats "current_eviction_exceeded_time": "current_eviction_exceeded_time_ms", "pubsub_channels": "pubsub_channels", "pubsub_patterns": "pubsub_patterns", "pubsubshard_channels": "pubsubshard_channels", // Added in Redis 7.0.3 "latest_fork_usec": "latest_fork_usec", "tracking_total_keys": "tracking_total_keys", "tracking_total_items": "tracking_total_items", "tracking_total_prefixes": "tracking_total_prefixes", // # Replication "connected_slaves": "connected_slaves", "repl_backlog_size": "replication_backlog_bytes", "repl_backlog_active": "repl_backlog_is_active", 
"repl_backlog_first_byte_offset": "repl_backlog_first_byte_offset", "repl_backlog_histlen": "repl_backlog_history_bytes", "master_repl_offset": "master_repl_offset", "second_repl_offset": "second_repl_offset", "slave_expires_tracked_keys": "slave_expires_tracked_keys", "slave_priority": "slave_priority", "sync_full": "replica_resyncs_full", "sync_partial_ok": "replica_partial_resync_accepted", "sync_partial_err": "replica_partial_resync_denied", // # Cluster "cluster_stats_messages_sent": "cluster_messages_sent_total", "cluster_stats_messages_received": "cluster_messages_received_total", // # Tile38 // based on https://tile38.com/commands/server/ "tile38_aof_size": "tile38_aof_size_bytes", "tile38_avg_point_size": "tile38_avg_item_size_bytes", "tile38_sys_cpus": "tile38_cpus_total", "tile38_heap_released_bytes": "tile38_heap_released_bytes", "tile38_heap_alloc_bytes": "tile38_heap_size_bytes", "tile38_http_transport": "tile38_http_transport", "tile38_in_memory_size": "tile38_in_memory_size_bytes", "tile38_max_heap_size": "tile38_max_heap_size_bytes", "tile38_alloc_bytes": "tile38_mem_alloc_bytes", "tile38_num_collections": "tile38_num_collections_total", "tile38_num_hooks": "tile38_num_hooks_total", "tile38_num_objects": "tile38_num_objects_total", "tile38_num_points": "tile38_num_points_total", "tile38_pointer_size": "tile38_pointer_size_bytes", "tile38_read_only": "tile38_read_only", "tile38_go_threads": "tile38_threads_total", "tile38_go_goroutines": "tile38_go_goroutines_total", "tile38_last_gc_time_seconds": "tile38_last_gc_time_seconds", "tile38_next_gc_bytes": "tile38_next_gc_bytes", // addtl. KeyDB metrics "server_threads": "server_threads_total", "long_lock_waits": "long_lock_waits_total", "current_client_thread": "current_client_thread", // Redis Modules metrics, RediSearch module "search_number_of_indexes": "search_number_of_indexes", "search_used_memory_indexes": "search_used_memory_indexes_bytes", "search_global_idle": "search_global_idle", "search_global_total": "search_global_total", "search_bytes_collected": "search_collected_bytes", "search_dialect_1": "search_dialect_1", "search_dialect_2": "search_dialect_2", "search_dialect_3": "search_dialect_3", "search_dialect_4": "search_dialect_4", }, metricMapCounters: map[string]string{ "total_connections_received": "connections_received_total", "total_commands_processed": "commands_processed_total", "rejected_connections": "rejected_connections_total", "total_net_input_bytes": "net_input_bytes_total", "total_net_output_bytes": "net_output_bytes_total", "total_net_repl_input_bytes": "net_repl_input_bytes_total", "total_net_repl_output_bytes": "net_repl_output_bytes_total", "expired_subkeys": "expired_subkeys_total", "expired_keys": "expired_keys_total", "expired_time_cap_reached_count": "expired_time_cap_reached_total", "expire_cycle_cpu_milliseconds": "expire_cycle_cpu_time_ms_total", "evicted_keys": "evicted_keys_total", "evicted_clients": "evicted_clients_total", // Added in Redis 7.0 "evicted_scripts": "evicted_scripts_total", // Added in Redis 7.4 "total_eviction_exceeded_time": "eviction_exceeded_time_ms_total", "keyspace_hits": "keyspace_hits_total", "keyspace_misses": "keyspace_misses_total", "used_cpu_sys": "cpu_sys_seconds_total", "used_cpu_user": "cpu_user_seconds_total", "used_cpu_sys_children": "cpu_sys_children_seconds_total", "used_cpu_user_children": "cpu_user_children_seconds_total", "used_cpu_sys_main_thread": "cpu_sys_main_thread_seconds_total", "used_cpu_user_main_thread": "cpu_user_main_thread_seconds_total", 
"unexpected_error_replies": "unexpected_error_replies", "total_error_replies": "total_error_replies", "dump_payload_sanitizations": "dump_payload_sanitizations", "total_reads_processed": "total_reads_processed", "total_writes_processed": "total_writes_processed", "io_threaded_reads_processed": "io_threaded_reads_processed", "io_threaded_writes_processed": "io_threaded_writes_processed", "client_query_buffer_limit_disconnections": "client_query_buffer_limit_disconnections_total", "client_output_buffer_limit_disconnections": "client_output_buffer_limit_disconnections_total", "reply_buffer_shrinks": "reply_buffer_shrinks_total", "reply_buffer_expands": "reply_buffer_expands_total", "acl_access_denied_auth": "acl_access_denied_auth_total", "acl_access_denied_cmd": "acl_access_denied_cmd_total", "acl_access_denied_key": "acl_access_denied_key_total", "acl_access_denied_channel": "acl_access_denied_channel_total", // addtl. KeyDB metrics "cached_keys": "cached_keys_total", "storage_provider_read_hits": "storage_provider_read_hits", "storage_provider_read_misses": "storage_provider_read_misses", // Redis Modules metrics, RediSearch module "search_total_indexing_time": "search_indexing_time_ms_total", "search_total_cycles": "search_cycles_total", "search_total_ms_run": "search_run_ms_total", }, } if e.options.ConfigCommandName == "" { e.options.ConfigCommandName = "CONFIG" } if keys, err := parseKeyArg(opts.CheckKeys); err != nil { return nil, fmt.Errorf("couldn't parse check-keys: %s", err) } else { log.Debugf("keys: %#v", keys) } if singleKeys, err := parseKeyArg(opts.CheckSingleKeys); err != nil { return nil, fmt.Errorf("couldn't parse check-single-keys: %s", err) } else { log.Debugf("singleKeys: %#v", singleKeys) } if streams, err := parseKeyArg(opts.CheckStreams); err != nil { return nil, fmt.Errorf("couldn't parse check-streams: %s", err) } else { log.Debugf("streams: %#v", streams) } if singleStreams, err := parseKeyArg(opts.CheckSingleStreams); err != nil { return nil, fmt.Errorf("couldn't parse check-single-streams: %s", err) } else { log.Debugf("singleStreams: %#v", singleStreams) } if countKeys, err := parseKeyArg(opts.CountKeys); err != nil { return nil, fmt.Errorf("couldn't parse count-keys: %s", err) } else { log.Debugf("countKeys: %#v", countKeys) } if opts.InclSystemMetrics { e.metricMapGauges["total_system_memory"] = "total_system_memory_bytes" } e.metricDescriptions = map[string]*prometheus.Desc{} for k, desc := range map[string]struct { txt string lbls []string }{ "commands_duration_seconds_total": {txt: `Total amount of time in seconds spent per command`, lbls: []string{"cmd"}}, "commands_failed_calls_total": {txt: `Total number of errors prior command execution per command`, lbls: []string{"cmd"}}, "commands_rejected_calls_total": {txt: `Total number of errors within command execution per command`, lbls: []string{"cmd"}}, "commands_total": {txt: `Total number of calls per command`, lbls: []string{"cmd"}}, "commands_latencies_usec": {txt: `A histogram of latencies per command`, lbls: []string{"cmd"}}, "latency_percentiles_usec": {txt: `A summary of latency percentile distribution per command`, lbls: []string{"cmd"}}, "config_client_output_buffer_limit_bytes": {txt: `The configured buffer limits per class`, lbls: []string{"class", "limit"}}, "config_client_output_buffer_limit_overcome_seconds": {txt: `How long for buffer limits per class to be exceeded before replicas are dropped`, lbls: []string{"class", "limit"}}, "config_key_value": {txt: `Config key and value`, lbls: 
[]string{"key", "value"}}, "config_value": {txt: `Config key and value as metric`, lbls: []string{"key"}}, "connected_slave_lag_seconds": {txt: "Lag of connected slave", lbls: []string{"slave_ip", "slave_port", "slave_state"}}, "connected_slave_offset_bytes": {txt: "Offset of connected slave", lbls: []string{"slave_ip", "slave_port", "slave_state"}}, "db_avg_ttl_seconds": {txt: "Avg TTL in seconds", lbls: []string{"db"}}, "db_keys": {txt: "Total number of keys by DB", lbls: []string{"db"}}, "db_keys_expiring": {txt: "Total number of expiring keys by DB", lbls: []string{"db"}}, "db_keys_cached": {txt: "Total number of cached keys by DB", lbls: []string{"db"}}, "errors_total": {txt: `Total number of errors per error type`, lbls: []string{"err"}}, "exporter_last_scrape_error": {txt: "The last scrape error status.", lbls: []string{"err"}}, "instance_info": {txt: "Information about the Redis instance", lbls: []string{"role", "redis_version", "redis_build_id", "redis_mode", "os", "maxmemory_policy", "tcp_port", "run_id", "process_id", "master_replid"}}, "key_group_count": {txt: `Count of keys in key group`, lbls: []string{"db", "key_group"}}, "key_group_memory_usage_bytes": {txt: `Total memory usage of key group in bytes`, lbls: []string{"db", "key_group"}}, "key_size": {txt: `The length or size of "key"`, lbls: []string{"db", "key"}}, "key_value": {txt: `The value of "key"`, lbls: []string{"db", "key"}}, "key_value_as_string": {txt: `The value of "key" as a string`, lbls: []string{"db", "key", "val"}}, "key_memory_usage_bytes": {txt: `The memory usage of "key" in bytes`, lbls: []string{"db", "key"}}, "keys_count": {txt: `Count of keys`, lbls: []string{"db", "key"}}, "last_key_groups_scrape_duration_milliseconds": {txt: `Duration of the last key group metrics scrape in milliseconds`}, "last_slow_execution_duration_seconds": {txt: `The amount of time needed for last slow execution, in seconds`}, "latency_spike_duration_seconds": {txt: `Length of the last latency spike in seconds`, lbls: []string{"event_name"}}, "latency_spike_last": {txt: `When the latency spike last occurred`, lbls: []string{"event_name"}}, "master_last_io_seconds_ago": {txt: "Master last io seconds ago", lbls: []string{"master_host", "master_port"}}, "master_link_up": {txt: "Master link status on Redis slave", lbls: []string{"master_host", "master_port"}}, "master_sync_in_progress": {txt: "Master sync in progress", lbls: []string{"master_host", "master_port"}}, "number_of_distinct_key_groups": {txt: `Number of distinct key groups`, lbls: []string{"db"}}, "script_result": {txt: "Result of the collect script evaluation", lbls: []string{"filename"}}, "script_values": {txt: "Values returned by the collect script", lbls: []string{"key", "filename"}}, "sentinel_master_ok_sentinels": {txt: "The number of okay sentinels monitoring this master", lbls: []string{"master_name", "master_address"}}, "sentinel_master_ok_slaves": {txt: "The number of okay slaves of the master", lbls: []string{"master_name", "master_address"}}, "sentinel_master_sentinels": {txt: "The number of sentinels monitoring this master", lbls: []string{"master_name", "master_address"}}, "sentinel_master_slaves": {txt: "The number of slaves of the master", lbls: []string{"master_name", "master_address"}}, "sentinel_master_status": {txt: "Master status on Sentinel", lbls: []string{"master_name", "master_address", "master_status"}}, "sentinel_master_ckquorum_status": {txt: "Master ckquorum status", lbls: []string{"master_name", "message"}}, "sentinel_masters": {txt: "The 
number of masters this sentinel is watching"}, "sentinel_master_setting_ckquorum": {txt: "Show the current ckquorum config for each master", lbls: []string{"master_name", "master_address"}}, "sentinel_master_setting_failover_timeout": {txt: "Show the current failover-timeout config for each master", lbls: []string{"master_name", "master_address"}}, "sentinel_master_setting_parallel_syncs": {txt: "Show the current parallel-syncs config for each master", lbls: []string{"master_name", "master_address"}}, "sentinel_master_setting_down_after_milliseconds": {txt: "Show the current down-after-milliseconds config for each master", lbls: []string{"master_name", "master_address"}}, "sentinel_running_scripts": {txt: "Number of scripts in execution right now"}, "sentinel_scripts_queue_length": {txt: "Queue of user scripts to execute"}, "sentinel_simulate_failure_flags": {txt: "Failure simulation flags"}, "sentinel_tilt": {txt: "Sentinel is in TILT mode"}, "slave_info": {txt: "Information about the Redis slave", lbls: []string{"master_host", "master_port", "read_only"}}, "slave_repl_offset": {txt: "Slave replication offset", lbls: []string{"master_host", "master_port"}}, "slowlog_last_id": {txt: `Last id of slowlog`}, "slowlog_length": {txt: `Total number of slowlog entries`}, "start_time_seconds": {txt: "Start time of the Redis instance since unix epoch in seconds."}, "stream_group_consumer_idle_seconds": {txt: `Consumer idle time in seconds`, lbls: []string{"db", "stream", "group", "consumer"}}, "stream_group_consumer_messages_pending": {txt: `Pending number of messages for this specific consumer`, lbls: []string{"db", "stream", "group", "consumer"}}, "stream_group_consumers": {txt: `Consumers count of stream group`, lbls: []string{"db", "stream", "group"}}, "stream_group_entries_read": {txt: `Total number of entries read from the stream group`, lbls: []string{"db", "stream", "group"}}, "stream_group_lag": {txt: `The number of messages waiting to be delivered to the stream group's consumers`, lbls: []string{"db", "stream", "group"}}, "stream_group_last_delivered_id": {txt: `The epoch timestamp (ms) of the last delivered message`, lbls: []string{"db", "stream", "group"}}, "stream_group_messages_pending": {txt: `Pending number of messages in that stream group`, lbls: []string{"db", "stream", "group"}}, "stream_groups": {txt: `Groups count of stream`, lbls: []string{"db", "stream"}}, "stream_last_generated_id": {txt: `The epoch timestamp (ms) of the latest message on the stream`, lbls: []string{"db", "stream"}}, "stream_length": {txt: `The number of elements of the stream`, lbls: []string{"db", "stream"}}, "stream_max_deleted_entry_id": {txt: `The epoch timestamp (ms) of the last message deleted from the stream`, lbls: []string{"db", "stream"}}, "stream_first_entry_id": {txt: `The epoch timestamp (ms) of the first message in the stream`, lbls: []string{"db", "stream"}}, "stream_last_entry_id": {txt: `The epoch timestamp (ms) of the last message in the stream`, lbls: []string{"db", "stream"}}, "stream_radix_tree_keys": {txt: `Radix tree keys count`, lbls: []string{"db", "stream"}}, "stream_radix_tree_nodes": {txt: `Radix tree nodes count`, lbls: []string{"db", "stream"}}, "up": {txt: "Whether the last scrape of the Redis instance was successful (1) or not (0)"}, "module_info": {txt: "Information about loaded Redis module", lbls: []string{"name", "ver", "api", "filters", "usedby", "using"}}, } { e.metricDescriptions[k] = newMetricDescr(opts.Namespace, k, desc.txt, desc.lbls) } if e.options.MetricsPath == "" { e.options.MetricsPath = "/metrics" } e.mux =
http.NewServeMux() if e.options.Registry != nil { e.options.Registry.MustRegister(e) e.mux.Handle(e.options.MetricsPath, promhttp.HandlerFor( e.options.Registry, promhttp.HandlerOpts{ErrorHandling: promhttp.ContinueOnError}, )) if !e.options.RedisMetricsOnly { buildInfoCollector := prometheus.NewGaugeVec(prometheus.GaugeOpts{ Namespace: opts.Namespace, Name: "exporter_build_info", Help: "redis exporter build_info", }, []string{"version", "commit_sha", "build_date", "golang_version"}) buildInfoCollector.WithLabelValues(e.buildInfo.Version, e.buildInfo.CommitSha, e.buildInfo.Date, runtime.Version()).Set(1) e.options.Registry.MustRegister(buildInfoCollector) } } e.mux.HandleFunc("/", e.indexHandler) e.mux.HandleFunc("/scrape", e.scrapeHandler) e.mux.HandleFunc("/health", e.healthHandler) e.mux.HandleFunc("/-/reload", e.reloadPwdFile) return e, nil } // Describe outputs Redis metric descriptions. func (e *Exporter) Describe(ch chan<- *prometheus.Desc) { for _, desc := range e.metricDescriptions { ch <- desc } for _, v := range e.metricMapGauges { ch <- newMetricDescr(e.options.Namespace, v, v+" metric", nil) } for _, v := range e.metricMapCounters { ch <- newMetricDescr(e.options.Namespace, v, v+" metric", nil) } ch <- e.totalScrapes.Desc() ch <- e.scrapeDuration.Desc() ch <- e.targetScrapeRequestErrors.Desc() } // Collect fetches new metrics from the RedisHost and updates the appropriate metrics. func (e *Exporter) Collect(ch chan<- prometheus.Metric) { e.Lock() defer e.Unlock() e.totalScrapes.Inc() if e.redisAddr != "" { startTime := time.Now() var up float64 if err := e.scrapeRedisHost(ch); err != nil { e.registerConstMetricGauge(ch, "exporter_last_scrape_error", 1.0, fmt.Sprintf("%s", err)) } else { up = 1 e.registerConstMetricGauge(ch, "exporter_last_scrape_error", 0, "") } e.registerConstMetricGauge(ch, "up", up) took := time.Since(startTime).Seconds() e.scrapeDuration.Observe(took) e.registerConstMetricGauge(ch, "exporter_last_scrape_duration_seconds", took) } ch <- e.totalScrapes ch <- e.scrapeDuration ch <- e.targetScrapeRequestErrors } func (e *Exporter) extractConfigMetrics(ch chan<- prometheus.Metric, config []interface{}) (dbCount int, err error) { if len(config)%2 != 0 { return 0, fmt.Errorf("invalid config: %#v", config) } for pos := 0; pos < len(config)/2; pos++ { strKey, err := redis.String(config[pos*2], nil) if err != nil { log.Errorf("invalid config key name, err: %s, skipped", err) continue } strVal, err := redis.String(config[pos*2+1], nil) if err != nil { log.Debugf("invalid config value for key name %s, err: %s, skipped", strKey, err) continue } if strKey == "databases" { if dbCount, err = strconv.Atoi(strVal); err != nil { return 0, fmt.Errorf("invalid config value for key databases: %#v", strVal) } } if e.options.InclConfigMetrics { if redact := map[string]bool{ "masterauth": true, "requirepass": true, "tls-key-file-pass": true, "tls-client-key-file-pass": true, }[strKey]; !redact || !e.options.RedactConfigMetrics { e.registerConstMetricGauge(ch, "config_key_value", 1.0, strKey, strVal) if val, err := strconv.ParseFloat(strVal, 64); err == nil { e.registerConstMetricGauge(ch, "config_value", val, strKey) } } } if map[string]bool{ "io-threads": true, "maxclients": true, "maxmemory": true, }[strKey] { if val, err := strconv.ParseFloat(strVal, 64); err == nil { strKey = strings.ReplaceAll(strKey, "-", "_") e.registerConstMetricGauge(ch, fmt.Sprintf("config_%s", strKey), val) } } if strKey == "client-output-buffer-limit" { // client-output-buffer-limit "normal 0 0 0 slave 
1610612736 1610612736 0 pubsub 33554432 8388608 60" splitVal := strings.Split(strVal, " ") for i := 0; i < len(splitVal); i += 4 { class := splitVal[i] if val, err := strconv.ParseFloat(splitVal[i+1], 64); err == nil { e.registerConstMetricGauge(ch, "config_client_output_buffer_limit_bytes", val, class, "hard") } if val, err := strconv.ParseFloat(splitVal[i+2], 64); err == nil { e.registerConstMetricGauge(ch, "config_client_output_buffer_limit_bytes", val, class, "soft") } if val, err := strconv.ParseFloat(splitVal[i+3], 64); err == nil { e.registerConstMetricGauge(ch, "config_client_output_buffer_limit_overcome_seconds", val, class, "soft") } } } } return } func (e *Exporter) scrapeRedisHost(ch chan<- prometheus.Metric) error { defer log.Debugf("scrapeRedisHost() done") startTime := time.Now() c, err := e.connectToRedis() connectTookSeconds := time.Since(startTime).Seconds() e.registerConstMetricGauge(ch, "exporter_last_scrape_connect_time_seconds", connectTookSeconds) if err != nil { var redactedAddr string if redisURL, err2 := url.Parse(e.redisAddr); err2 != nil { log.Debugf("url.Parse( %s ) err: %s", e.redisAddr, err2) redactedAddr = "" } else { redactedAddr = redisURL.Redacted() } log.Errorf("Couldn't connect to redis instance (%s)", redactedAddr) log.Debugf("connectToRedis( %s ) err: %s", e.redisAddr, err) return err } defer c.Close() log.Debugf("connected to: %s", e.redisAddr) log.Debugf("connecting took %f seconds", connectTookSeconds) if e.options.PingOnConnect { startTime := time.Now() if _, err := doRedisCmd(c, "PING"); err != nil { log.Errorf("Couldn't PING server, err: %s", err) } else { pingTookSeconds := time.Since(startTime).Seconds() e.registerConstMetricGauge(ch, "exporter_last_scrape_ping_time_seconds", pingTookSeconds) log.Debugf("PING took %f seconds", pingTookSeconds) } } if e.options.SetClientName { if _, err := doRedisCmd(c, "CLIENT", "SETNAME", "redis_exporter"); err != nil { log.Errorf("Couldn't set client name, err: %s", err) } } dbCount := 0 if e.options.ConfigCommandName == "-" { log.Debugf("Skipping extractConfigMetrics()") } else { if config, err := redis.Values(doRedisCmd(c, e.options.ConfigCommandName, "GET", "*")); err == nil { dbCount, err = e.extractConfigMetrics(ch, config) if err != nil { log.Errorf("Redis extractConfigMetrics() err: %s", err) return err } } else { log.Debugf("Redis CONFIG err: %s", err) } } infoAll, err := redis.String(doRedisCmd(c, "INFO", "ALL")) if err != nil || infoAll == "" { log.Debugf("Redis INFO ALL err: %s", err) infoAll, err = redis.String(doRedisCmd(c, "INFO")) if err != nil { log.Errorf("Redis INFO err: %s", err) return err } } log.Debugf("Redis INFO ALL result: [%#v]", infoAll) if strings.Contains(infoAll, "cluster_enabled:1") { if clusterInfo, err := redis.String(doRedisCmd(c, "CLUSTER", "INFO")); err == nil { e.extractClusterInfoMetrics(ch, clusterInfo) // in cluster mode Redis only supports one database so no extra DB number padding needed dbCount = 1 } else { log.Errorf("Redis CLUSTER INFO err: %s", err) } } else if dbCount == 0 { // in non-cluster mode, if dbCount is zero then "CONFIG" failed to retrieve a valid // number of databases and we use the Redis config default which is 16 dbCount = 16 } log.Debugf("dbCount: %d", dbCount) e.extractInfoMetrics(ch, infoAll, dbCount) if !e.options.ExcludeLatencyHistogramMetrics { e.extractLatencyMetrics(ch, infoAll, c) } if e.options.IsCluster { clusterClient, err := e.connectToRedisCluster() if err != nil { log.Errorf("Couldn't connect to redis cluster") return err } defer 
clusterClient.Close() e.extractCheckKeyMetrics(ch, clusterClient) } else { e.extractCheckKeyMetrics(ch, c) } e.extractSlowLogMetrics(ch, c) e.extractStreamMetrics(ch, c) e.extractCountKeysMetrics(ch, c) e.extractKeyGroupMetrics(ch, c, dbCount) if strings.Contains(infoAll, "# Sentinel") { e.extractSentinelMetrics(ch, c) } if e.options.ExportClientList { e.extractConnectedClientMetrics(ch, c) } if e.options.IsTile38 { e.extractTile38Metrics(ch, c) } if e.options.InclModulesMetrics { e.extractModulesMetrics(ch, c) } if len(e.options.LuaScript) > 0 { for filename, script := range e.options.LuaScript { if err := e.extractLuaScriptMetrics(ch, c, filename, script); err != nil { return err } } } return nil } redis_exporter-1.69.0/exporter/exporter_test.go000066400000000000000000000374311476520031400217650ustar00rootroot00000000000000package exporter /* to run the tests with redis running on anything but localhost:6379 use $ go test --redis.addr=: for html coverage report run $ go test -coverprofile=coverage.out && go tool cover -html=coverage.out */ import ( "fmt" "github.com/mna/redisc" "net/http/httptest" "os" "strings" "testing" "time" "github.com/gomodule/redigo/redis" "github.com/prometheus/client_golang/prometheus" "github.com/prometheus/client_golang/prometheus/promhttp" dto "github.com/prometheus/client_model/go" log "github.com/sirupsen/logrus" ) var ( dbNumStr = "11" altDBNumStr = "12" anotherAltDbNumStr = "14" invalidDBNumStr = "16" dbNumStrFull = fmt.Sprintf("db%s", dbNumStr) ) const ( TestKeysSetName = "test-set" TestKeysZSetName = "test-zset" TestKeysStreamName = "test-stream" TestKeysHllName = "test-hll" TestKeysHashName = "test-hash" TestKeyGroup1 = "test_group_1" TestKeyGroup2 = "test_group_2" ) var ( testKeySingleString string testKeys []string testKeysExpiring []string testKeysList []string AllTestKeys = []string{ TestKeysSetName, TestKeysZSetName, TestKeysStreamName, TestKeysHllName, TestKeysHashName, TestKeyGroup1, TestKeyGroup2, } ) func getTestExporter() *Exporter { return getTestExporterWithOptions(Options{Namespace: "test", Registry: prometheus.NewRegistry()}) } func getTestExporterWithOptions(opt Options) *Exporter { addr := os.Getenv("TEST_REDIS_URI") if addr == "" { panic("missing env var TEST_REDIS_URI") } e, _ := NewRedisExporter(addr, opt) return e } func getTestExporterWithAddr(addr string) *Exporter { e, _ := NewRedisExporter(addr, Options{Namespace: "test", Registry: prometheus.NewRegistry()}) return e } func getTestExporterWithAddrAndOptions(addr string, opt Options) *Exporter { e, _ := NewRedisExporter(addr, opt) return e } func setupKeys(t *testing.T, c redis.Conn, dbNum string) error { if _, err := doRedisCmd(c, "SELECT", dbNum); err != nil { // not failing on this one - cluster doesn't allow for SELECT so we log and ignore the error log.Printf("setupTestKeys() - couldn't setup redis, err: %s ", err) } testValue := 1234.56 for _, key := range testKeys { if _, err := doRedisCmd(c, "SET", key, testValue); err != nil { t.Errorf("couldn't setup redis, err: %s ", err) return err } } // set to expire in 600 seconds, should be plenty for a test run for _, key := range testKeysExpiring { if _, err := doRedisCmd(c, "SETEX", key, "600", testValue); err != nil { t.Errorf("couldn't setup redis, err: %s ", err) return err } } for _, key := range testKeysList { for _, val := range testKeys { if _, err := doRedisCmd(c, "LPUSH", key, val); err != nil { t.Errorf("couldn't setup redis, err: %s ", err) return err } } } if _, err := doRedisCmd(c, "PFADD", TestKeysHllName, 
"val1"); err != nil { t.Errorf("PFADD err: %s", err) return err } if _, err := doRedisCmd(c, "PFADD", TestKeysHllName, "val22"); err != nil { t.Errorf("PFADD err: %s", err) return err } if _, err := doRedisCmd(c, "PFADD", TestKeysHllName, "val333"); err != nil { t.Errorf("PFADD err: %s", err) return err } if _, err := doRedisCmd(c, "SADD", TestKeysSetName, "test-val-1"); err != nil { t.Errorf("SADD err: %s", err) return err } if _, err := doRedisCmd(c, "SADD", TestKeysSetName, "test-val-2"); err != nil { t.Errorf("SADD err: %s", err) return err } if _, err := doRedisCmd(c, "ZADD", TestKeysZSetName, "12", "test-zzzval-1"); err != nil { t.Errorf("ZADD err: %s", err) return err } if _, err := doRedisCmd(c, "ZADD", TestKeysZSetName, "23", "test-zzzval-2"); err != nil { t.Errorf("ZADD err: %s", err) return err } if _, err := doRedisCmd(c, "ZADD", TestKeysZSetName, "45", "test-zzzval-3"); err != nil { t.Errorf("ZADD err: %s", err) return err } if _, err := doRedisCmd(c, "SET", testKeySingleString, "this-is-a-string"); err != nil { t.Errorf("SET %s err: %s", testKeySingleString, err) return err } if _, err := doRedisCmd(c, "HSET", TestKeysHashName, "field1", "Hello"); err != nil { t.Errorf("HSET err: %s", err) return err } if _, err := doRedisCmd(c, "HSET", TestKeysHashName, "field2", "World"); err != nil { t.Errorf("HSET err: %s", err) return err } if _, err := doRedisCmd(c, "HSET", TestKeysHashName, "field3", "What's"); err != nil { t.Errorf("HSET err: %s", err) return err } if _, err := doRedisCmd(c, "HSET", TestKeysHashName, "field4", "new?"); err != nil { t.Errorf("HSET err: %s", err) return err } if x, err := redis.String(doRedisCmd(c, "HGET", TestKeysHashName, "field4")); err != nil || x != "new?" { t.Errorf("HGET %s err: %s x: %s", TestKeysHashName, err, x) } // Create test streams doRedisCmd(c, "XGROUP", "CREATE", TestKeysStreamName, TestKeyGroup1, "$", "MKSTREAM") doRedisCmd(c, "XGROUP", "CREATE", TestKeysStreamName, TestKeyGroup2, "$", "MKSTREAM") doRedisCmd(c, "XADD", TestKeysStreamName, TestStreamTimestamps[0], "field_1", "str_1") doRedisCmd(c, "XADD", TestKeysStreamName, TestStreamTimestamps[1], "field_2", "str_2") // Process messages to assign Consumers to their groups doRedisCmd(c, "XREADGROUP", "GROUP", TestKeyGroup1, "test_consumer_1", "COUNT", "1", "STREAMS", TestKeysStreamName, ">") doRedisCmd(c, "XREADGROUP", "GROUP", TestKeyGroup1, "test_consumer_2", "COUNT", "1", "STREAMS", TestKeysStreamName, ">") doRedisCmd(c, "XREADGROUP", "GROUP", TestKeyGroup2, "test_consumer_1", "COUNT", "1", "STREAMS", TestKeysStreamName, "0") t.Logf("setupKeys %s - DONE", dbNum) time.Sleep(time.Millisecond * 100) return nil } func deleteKeys(c redis.Conn, dbNum string) { if _, err := doRedisCmd(c, "SELECT", dbNum); err != nil { log.Printf("deleteTestKeys() - couldn't setup redis, err: %s ", err) // not failing on this one - cluster doesn't allow for SELECT so we log and ignore the error } for _, key := range AllTestKeys { doRedisCmd(c, "DEL", key) } } func setupTestKeys(t *testing.T, uri string) { log.Debugf("setupTestKeys uri: %s", uri) c, err := redis.DialURL(uri) if err != nil { t.Fatalf("couldn't setup redis for uri %s, err: %s ", uri, err) return } defer c.Close() if err := setupKeys(t, c, dbNumStr); err != nil { t.Fatalf("couldn't setup test keys, err: %s ", err) } if err := setupKeys(t, c, altDBNumStr); err != nil { t.Fatalf("couldn't setup test keys, err: %s ", err) } if err := setupKeys(t, c, anotherAltDbNumStr); err != nil { t.Fatalf("couldn't setup test keys, err: %s ", err) } } func 
setupTestKeysCluster(t *testing.T, uri string) { log.Debugf("Creating cluster object") cluster := redisc.Cluster{ StartupNodes: []string{ strings.Replace(uri, "redis://", "", 1), }, DialOptions: []redis.DialOption{}, } if err := cluster.Refresh(); err != nil { log.Fatalf("Refresh failed: %v", err) } conn, err := cluster.Dial() if err != nil { log.Errorf("Dial() failed: %v", err) } c, err := redisc.RetryConn(conn, 10, 100*time.Millisecond) if err != nil { log.Errorf("RetryConn() failed: %v", err) } // cluster only supports db==0 if err := setupKeys(t, c, "0"); err != nil { t.Fatalf("couldn't setup test keys, err: %s ", err) return } time.Sleep(time.Second) if x, err := redis.Strings(doRedisCmd(c, "KEYS", "*")); err != nil { t.Errorf("KEYS * err: %s", err) } else { t.Logf("KEYS * -> %#v", x) } } func deleteTestKeys(t *testing.T, addr string) error { c, err := redis.DialURL(addr) if err != nil { t.Errorf("couldn't setup redis, err: %s ", err) return err } defer c.Close() deleteKeys(c, dbNumStr) deleteKeys(c, altDBNumStr) deleteKeys(c, anotherAltDbNumStr) return nil } func deleteTestKeysCluster(addr string) error { e, _ := NewRedisExporter(addr, Options{}) c, err := e.connectToRedisCluster() if err != nil { return err } defer c.Close() // cluster only supports db==0 deleteKeys(c, "0") return nil } func TestIncludeSystemMemoryMetric(t *testing.T) { for _, inc := range []bool{false, true} { r := prometheus.NewRegistry() ts := httptest.NewServer(promhttp.HandlerFor(r, promhttp.HandlerOpts{})) e, _ := NewRedisExporter(os.Getenv("TEST_REDIS_URI"), Options{Namespace: "test", InclSystemMetrics: inc}) r.Register(e) body := downloadURL(t, ts.URL+"/metrics") if inc && !strings.Contains(body, "total_system_memory_bytes") { t.Errorf("want metrics to include total_system_memory_bytes, have:\n%s", body) } else if !inc && strings.Contains(body, "total_system_memory_bytes") { t.Errorf("did NOT want metrics to include total_system_memory_bytes, have:\n%s", body) } ts.Close() } } func TestIncludeConfigMetrics(t *testing.T) { for _, inc := range []bool{false, true} { r := prometheus.NewRegistry() ts := httptest.NewServer(promhttp.HandlerFor(r, promhttp.HandlerOpts{})) e, _ := NewRedisExporter(os.Getenv("TEST_REDIS_URI"), Options{Namespace: "test", InclConfigMetrics: inc}) r.Register(e) what := `test_config_key_value{key="appendonly",value="no"}` body := downloadURL(t, ts.URL+"/metrics") if inc && !strings.Contains(body, what) { t.Errorf("want metrics to include test_config_key_value, have:\n%s", body) } else if !inc && strings.Contains(body, what) { t.Errorf("did NOT want metrics to include test_config_key_value, have:\n%s", body) } ts.Close() } } func TestClientOutputBufferLimitMetrics(t *testing.T) { for _, class := range []string{ `normal`, `pubsub`, `slave`, } { for _, limit := range []string{ `hard`, `soft`, } { want := fmt.Sprintf("%s{class=\"%s\",limit=\"%s\"}", "config_client_output_buffer_limit_bytes", class, limit) r := prometheus.NewRegistry() ts := httptest.NewServer(promhttp.HandlerFor(r, promhttp.HandlerOpts{})) e, _ := NewRedisExporter(os.Getenv("TEST_REDIS_URI"), Options{Namespace: "test"}) r.Register(e) body := downloadURL(t, ts.URL+"/metrics") if !strings.Contains(body, want) { t.Errorf("want metrics to include %s, have:\n%s", want, body) } } want := fmt.Sprintf("%s{class=\"%s\",limit=\"soft\"}", "config_client_output_buffer_limit_overcome_seconds", class) r := prometheus.NewRegistry() ts := httptest.NewServer(promhttp.HandlerFor(r, promhttp.HandlerOpts{})) e, _ := 
NewRedisExporter(os.Getenv("TEST_REDIS_URI"), Options{Namespace: "test"}) r.Register(e) body := downloadURL(t, ts.URL+"/metrics") if !strings.Contains(body, want) { t.Errorf("want metrics to include %s, have:\n%s", want, body) } } } func TestExcludeConfigMetricsViaCONFIGCommand(t *testing.T) { for _, inc := range []bool{false, true} { r := prometheus.NewRegistry() ts := httptest.NewServer(promhttp.HandlerFor(r, promhttp.HandlerOpts{})) e, _ := NewRedisExporter(os.Getenv("TEST_REDIS_URI"), Options{ Namespace: "test", ConfigCommandName: "-", InclConfigMetrics: inc}) r.Register(e) what := `test_config_key_value{key="appendonly",value="no"}` body := downloadURL(t, ts.URL+"/metrics") if strings.Contains(body, what) { t.Fatalf("found test_config_key_value but should have skipped CONFIG call") } ts.Close() } } func TestNonExistingHost(t *testing.T) { e, _ := NewRedisExporter("unix:///tmp/doesnt.exist", Options{Namespace: "test"}) chM := make(chan prometheus.Metric) go func() { e.Collect(chM) close(chM) }() want := map[string]float64{"test_exporter_last_scrape_error": 1.0, "test_exporter_scrapes_total": 1.0} for m := range chM { descString := m.Desc().String() for k := range want { if strings.Contains(descString, k) { g := &dto.Metric{} m.Write(g) val := 0.0 if g.GetGauge() != nil { val = *g.GetGauge().Value } else if g.GetCounter() != nil { val = *g.GetCounter().Value } else { continue } if val == want[k] { want[k] = -1.0 } } } } for k, v := range want { if v > 0 { t.Errorf("didn't find %s", k) } } } func TestKeysReset(t *testing.T) { e, _ := NewRedisExporter(os.Getenv("TEST_REDIS_URI"), Options{Namespace: "test", CheckSingleKeys: dbNumStrFull + "=" + testKeys[0], Registry: prometheus.NewRegistry()}) ts := httptest.NewServer(e) defer ts.Close() setupTestKeys(t, os.Getenv("TEST_REDIS_URI")) defer deleteTestKeys(t, os.Getenv("TEST_REDIS_URI")) chM := make(chan prometheus.Metric, 10000) go func() { e.Collect(chM) close(chM) }() body := downloadURL(t, ts.URL+"/metrics") if !strings.Contains(body, testKeys[0]) { t.Errorf("Did not find key %q\n%s", testKeys[0], body) } deleteTestKeys(t, os.Getenv("TEST_REDIS_URI")) body = downloadURL(t, ts.URL+"/metrics") if !strings.Contains(body, testKeys[0]) { t.Errorf("Key %q (from check-single-keys) should be available in metrics with default value 0\n%s", testKeys[0], body) } } func TestRedisMetricsOnly(t *testing.T) { for _, inc := range []bool{false, true} { r := prometheus.NewRegistry() ts := httptest.NewServer(promhttp.HandlerFor(r, promhttp.HandlerOpts{})) _, err := NewRedisExporter(os.Getenv("TEST_REDIS_URI"), Options{Namespace: "test", Registry: r, RedisMetricsOnly: inc}) if err != nil { t.Fatalf(`error when creating exporter with registry: %s`, err) } body := downloadURL(t, ts.URL+"/metrics") if inc && strings.Contains(body, "exporter_build_info") { t.Errorf("did NOT want metrics to include exporter_build_info, have:\n%s", body) } else if !inc && !strings.Contains(body, "exporter_build_info") { t.Errorf("want metrics to include exporter_build_info, have:\n%s", body) } ts.Close() } } func TestConnectionDurations(t *testing.T) { metric1 := "exporter_last_scrape_ping_time_seconds" metric2 := "exporter_last_scrape_connect_time_seconds" for _, inclPing := range []bool{false, true} { r := prometheus.NewRegistry() ts := httptest.NewServer(promhttp.HandlerFor(r, promhttp.HandlerOpts{})) e, _ := NewRedisExporter(os.Getenv("TEST_REDIS_URI"), Options{Namespace: "test", PingOnConnect: inclPing}) r.Register(e) body := downloadURL(t, ts.URL+"/metrics") if inclPing &&
!strings.Contains(body, metric1) { t.Fatalf("want metrics to include %s, have:\n%s", metric1, body) } else if !inclPing && strings.Contains(body, metric1) { t.Fatalf("did NOT want metrics to include %s, have:\n%s", metric1, body) } // always expect this one if !strings.Contains(body, metric2) { t.Fatalf("want metrics to include %s, have:\n%s", metric2, body) } ts.Close() } } func TestKeyDbMetrics(t *testing.T) { if os.Getenv("TEST_KEYDB01_URI") == "" { t.Skipf("Skipping due to missing TEST_KEYDB01_URI") } setupTestKeys(t, os.Getenv("TEST_KEYDB01_URI")) defer deleteTestKeys(t, os.Getenv("TEST_KEYDB01_URI")) for _, want := range []string{ `test_db_keys_cached`, `test_storage_provider_read_hits`, } { r := prometheus.NewRegistry() ts := httptest.NewServer(promhttp.HandlerFor(r, promhttp.HandlerOpts{})) e, _ := NewRedisExporter(os.Getenv("TEST_KEYDB01_URI"), Options{Namespace: "test"}) r.Register(e) body := downloadURL(t, ts.URL+"/metrics") if !strings.Contains(body, want) { t.Errorf("want metrics to include %s, have:\n%s", want, body) } ts.Close() } } func init() { ll := strings.ToLower(os.Getenv("LOG_LEVEL")) if pl, err := log.ParseLevel(ll); err == nil { log.Printf("Setting log level to: %s", ll) log.SetLevel(pl) } else { log.SetLevel(log.InfoLevel) } testTimestamp := time.Now().Unix() for _, n := range []string{"john", "paul", "ringo", "george"} { testKeys = append(testKeys, fmt.Sprintf("key_%s_%d", n, testTimestamp)) } testKeySingleString = fmt.Sprintf("key_string_%d", testTimestamp) AllTestKeys = append(AllTestKeys, testKeySingleString) testKeysList = append(testKeysList, "test_beatles_list") for _, n := range []string{"A.J.", "Howie", "Nick", "Kevin", "Brian"} { testKeysExpiring = append(testKeysExpiring, fmt.Sprintf("key_exp_%s_%d", n, testTimestamp)) } AllTestKeys = append(AllTestKeys, testKeys...) AllTestKeys = append(AllTestKeys, testKeysList...) AllTestKeys = append(AllTestKeys, testKeysExpiring...) } redis_exporter-1.69.0/exporter/http.go000066400000000000000000000074761476520031400200430ustar00rootroot00000000000000package exporter import ( "crypto/subtle" "errors" "fmt" "net/http" "net/url" "strings" "github.com/prometheus/client_golang/prometheus" "github.com/prometheus/client_golang/prometheus/promhttp" log "github.com/sirupsen/logrus" ) func (e *Exporter) ServeHTTP(w http.ResponseWriter, r *http.Request) { if err := e.verifyBasicAuth(r.BasicAuth()); err != nil { w.Header().Set("WWW-Authenticate", `Basic realm="redis-exporter, charset=UTF-8"`) http.Error(w, err.Error(), http.StatusUnauthorized) return } e.mux.ServeHTTP(w, r) } func (e *Exporter) healthHandler(w http.ResponseWriter, r *http.Request) { _, _ = w.Write([]byte(`ok`)) } func (e *Exporter) indexHandler(w http.ResponseWriter, r *http.Request) { _, _ = w.Write([]byte(` Redis Exporter ` + e.buildInfo.Version + `
<body>
<h1>Redis Exporter ` + e.buildInfo.Version + `</h1>
<p><a href='` + e.options.MetricsPath + `'>Metrics</a></p>
</body>
</html>
`)) } func (e *Exporter) scrapeHandler(w http.ResponseWriter, r *http.Request) { target := r.URL.Query().Get("target") if target == "" { http.Error(w, "'target' parameter must be specified", http.StatusBadRequest) e.targetScrapeRequestErrors.Inc() return } if !strings.Contains(target, "://") { target = "redis://" + target } u, err := url.Parse(target) if err != nil { http.Error(w, fmt.Sprintf("Invalid 'target' parameter, parse err: %s", err), http.StatusBadRequest) e.targetScrapeRequestErrors.Inc() return } opts := e.options // get rid of username/password info in "target" so users don't send them in plain text via http // and save "user" in options so we can use it later when connecting to the redis instance // the password will be looked up from the password file if u.User != nil { opts.User = u.User.Username() u.User = nil } target = u.String() if ck := r.URL.Query().Get("check-keys"); ck != "" { opts.CheckKeys = ck } if csk := r.URL.Query().Get("check-single-keys"); csk != "" { opts.CheckSingleKeys = csk } if cs := r.URL.Query().Get("check-streams"); cs != "" { opts.CheckStreams = cs } if css := r.URL.Query().Get("check-single-streams"); css != "" { opts.CheckSingleStreams = css } if cntk := r.URL.Query().Get("count-keys"); cntk != "" { opts.CountKeys = cntk } registry := prometheus.NewRegistry() opts.Registry = registry _, err = NewRedisExporter(target, opts) if err != nil { http.Error(w, "NewRedisExporter() err: "+err.Error(), http.StatusBadRequest) e.targetScrapeRequestErrors.Inc() return } promhttp.HandlerFor( registry, promhttp.HandlerOpts{ErrorHandling: promhttp.ContinueOnError}, ).ServeHTTP(w, r) } func (e *Exporter) reloadPwdFile(w http.ResponseWriter, r *http.Request) { if e.options.RedisPwdFile == "" { http.Error(w, "There is no pwd file specified", http.StatusBadRequest) return } log.Debugf("Reload redisPwdFile") passwordMap, err := LoadPwdFile(e.options.RedisPwdFile) if err != nil { log.Errorf("Error reloading redis passwords from file %s, err: %s", e.options.RedisPwdFile, err) http.Error(w, "failed to reload passwords file: "+err.Error(), http.StatusInternalServerError) return } e.Lock() e.options.PasswordMap = passwordMap e.Unlock() _, _ = w.Write([]byte(`ok`)) } func (e *Exporter) isBasicAuthConfigured() bool { return e.options.BasicAuthUsername != "" && e.options.BasicAuthPassword != "" } func (e *Exporter) verifyBasicAuth(user, password string, authHeaderSet bool) error { if !e.isBasicAuthConfigured() { return nil } if !authHeaderSet { return errors.New("Unauthorized") } userCorrect := subtle.ConstantTimeCompare([]byte(user), []byte(e.options.BasicAuthUsername)) passCorrect := subtle.ConstantTimeCompare([]byte(password), []byte(e.options.BasicAuthPassword)) if userCorrect == 0 || passCorrect == 0 { return errors.New("Unauthorized") } return nil } redis_exporter-1.69.0/exporter/http_test.go000066400000000000000000000425531476520031400210730ustar00rootroot00000000000000package exporter import ( "fmt" "io" "math/rand" "net" "net/http" "net/http/httptest" "net/url" "os" "strings" "sync" "testing" "github.com/prometheus/client_golang/prometheus" log "github.com/sirupsen/logrus" ) func TestHTTPScrapeMetricsEndpoints(t *testing.T) { if os.Getenv("TEST_REDIS_URI") == "" || os.Getenv("TEST_PWD_REDIS_URI") == "" { t.Skipf("Skipping TestHTTPScrapeMetricsEndpoints, missing env vars") } setupTestKeys(t, os.Getenv("TEST_REDIS_URI")) defer deleteTestKeys(t, os.Getenv("TEST_REDIS_URI")) setupTestKeys(t, os.Getenv("TEST_PWD_REDIS_URI")) defer deleteTestKeys(t, os.Getenv("TEST_PWD_REDIS_URI"))
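// Note on the query-parameter format exercised below: check-single-keys and
// check-single-streams take exact "db<N>=<name>" entries, while count-keys
// takes a "db<N>=<glob-pattern>" entry (hence the trailing "*").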
csk := dbNumStrFull + "=" + url.QueryEscape(testKeys[0]) // check-single-keys css := dbNumStrFull + "=" + TestKeysStreamName // check-single-streams cntk := dbNumStrFull + "=" + testKeys[0] + "*" // count-keys u, err := url.Parse(os.Getenv("TEST_REDIS_URI")) if err != nil { t.Fatalf("url.Parse() err: %s", err) } testRedisIPAddress := "" testRedisHostname := u.Hostname() if testRedisHostname == "localhost" { testRedisIPAddress = "127.0.0.1" } else { ips, err := net.LookupIP(testRedisHostname) if err != nil { t.Fatalf("Could not get IP address: %s", err) } if len(ips) == 0 { t.Fatal("No IP addresses found") } testRedisIPAddress = ips[0].String() } testRedisIPAddress = fmt.Sprintf("%s:%s", testRedisIPAddress, u.Port()) testRedisHostname = fmt.Sprintf("%s:%s", testRedisHostname, u.Port()) t.Logf("testRedisIPAddress: %s", testRedisIPAddress) t.Logf("testRedisHostname: %s", testRedisHostname) for _, tst := range []struct { name string addr string ck string csk string cs string scrapeCs string css string cntk string pwd string scrape bool target string wantStatusCode int }{ {name: "ip-addr", addr: testRedisIPAddress, csk: csk, css: css, cntk: cntk}, {name: "hostname", addr: testRedisHostname, csk: csk, css: css, cntk: cntk}, {name: "check-keys", addr: os.Getenv("TEST_REDIS_URI"), ck: csk, cs: css, cntk: cntk}, {name: "check-single-keys", addr: os.Getenv("TEST_REDIS_URI"), csk: csk, css: css, cntk: cntk}, {name: "addr-no-prefix", addr: strings.TrimPrefix(os.Getenv("TEST_REDIS_URI"), "redis://"), csk: csk, css: css, cntk: cntk}, {name: "scrape-target-no-prefix", pwd: "", scrape: true, target: strings.TrimPrefix(os.Getenv("TEST_REDIS_URI"), "redis://"), ck: csk, cs: css, cntk: cntk}, {name: "scrape-broken-target", wantStatusCode: http.StatusBadRequest, scrape: true, target: "://nope"}, {name: "scrape-broken-target2", wantStatusCode: http.StatusBadRequest, scrape: true, target: os.Getenv("TEST_REDIS_URI") + "-", csk: csk, css: css, cntk: cntk}, {name: "scrape-broken-cs", wantStatusCode: http.StatusBadRequest, scrape: true, target: os.Getenv("TEST_REDIS_URI"), scrapeCs: "1=2=3=4"}, {name: "scrape-ck", pwd: "", scrape: true, target: os.Getenv("TEST_REDIS_URI"), ck: csk, scrapeCs: css, cntk: cntk}, {name: "scrape-csk", pwd: "", scrape: true, target: os.Getenv("TEST_REDIS_URI"), csk: csk, css: css, cntk: cntk}, {name: "scrape-pwd-ck", pwd: "redis-password", scrape: true, target: os.Getenv("TEST_PWD_REDIS_URI"), ck: csk, scrapeCs: css, cntk: cntk}, {name: "scrape-pwd-csk", pwd: "redis-password", scrape: true, target: os.Getenv("TEST_PWD_REDIS_URI"), csk: csk, scrapeCs: css, cntk: cntk}, {name: "error-scrape-no-target", wantStatusCode: http.StatusBadRequest, scrape: true, target: ""}, } { t.Run(tst.name, func(t *testing.T) { options := Options{ Namespace: "test", Password: tst.pwd, LuaScript: map[string][]byte{ "test.lua": []byte(`return {"a", "11", "b", "12", "c", "13"}`), }, Registry: prometheus.NewRegistry(), } options.CheckSingleKeys = tst.csk options.CheckKeys = tst.ck options.CheckSingleStreams = tst.css options.CheckStreams = tst.cs options.CountKeys = tst.cntk options.CheckKeysBatchSize = 1000 e, _ := NewRedisExporter(tst.addr, options) ts := httptest.NewServer(e) u := ts.URL if tst.scrape { u += "/scrape" v := url.Values{} v.Add("target", tst.target) v.Add("check-single-keys", tst.csk) v.Add("check-keys", tst.ck) v.Add("check-streams", tst.scrapeCs) v.Add("check-single-streams", tst.css) v.Add("count-keys", tst.cntk) up, _ := url.Parse(u) up.RawQuery = v.Encode() u = up.String() } else { u += 
"/metrics" } wantStatusCode := http.StatusOK if tst.wantStatusCode != 0 { wantStatusCode = tst.wantStatusCode } gotStatusCode, body := downloadURLWithStatusCode(t, u) if gotStatusCode != wantStatusCode { t.Fatalf("got status code: %d wanted: %d", gotStatusCode, wantStatusCode) return } // we can stop here if we expected a non-200 response if wantStatusCode != http.StatusOK { return } wants := []string{ // metrics `test_connected_clients`, `test_commands_processed_total`, `test_instance_info`, "db_keys", "db_avg_ttl_seconds", "cpu_sys_seconds_total", "loading_dump_file", // testing renames "config_maxmemory", // testing config extraction "config_maxclients", // testing config extraction "slowlog_length", "slowlog_last_id", "start_time_seconds", "uptime_in_seconds", // labels and label values `redis_mode`, `cmd="config`, "maxmemory_policy", `test_script_value`, // lua script `test_key_size{db="db11",key="` + testKeys[0] + `"} 7`, `test_key_value{db="db11",key="` + testKeys[0] + `"} 1234.56`, `test_keys_count{db="db11",key="` + testKeys[0] + `*"} 1`, `test_db_keys{db="db11"} `, `test_db_keys_expiring{db="db11"} `, // streams `stream_length`, `stream_groups`, `stream_radix_tree_keys`, `stream_radix_tree_nodes`, `stream_group_consumers`, `stream_group_messages_pending`, `stream_group_consumer_messages_pending`, `stream_group_consumer_idle_seconds`, `test_up 1`, } for _, want := range wants { if !strings.Contains(body, want) { t.Errorf("url: %s want metrics to include %q, have:\n%s", u, want, body) break } } ts.Close() }) } } func TestSimultaneousMetricsHttpRequests(t *testing.T) { if os.Getenv("TEST_REDIS_URI") == "" || os.Getenv("TEST_REDIS_2_8_URI") == "" || os.Getenv("TEST_KEYDB01_URI") == "" || os.Getenv("TEST_KEYDB02_URI") == "" || os.Getenv("TEST_REDIS5_URI") == "" || os.Getenv("TEST_REDIS6_URI") == "" || os.Getenv("TEST_REDIS_CLUSTER_MASTER_URI") == "" || os.Getenv("TEST_REDIS_CLUSTER_SLAVE_URI") == "" || os.Getenv("TEST_TILE38_URI") == "" || os.Getenv("TEST_REDIS_MODULES_URI") == "" { t.Skipf("Skipping TestSimultaneousMetricsHttpRequests, missing env vars") } setupTestKeys(t, os.Getenv("TEST_REDIS_URI")) defer deleteTestKeys(t, os.Getenv("TEST_REDIS_URI")) e, _ := NewRedisExporter("", Options{Namespace: "test", InclSystemMetrics: false, Registry: prometheus.NewRegistry()}) ts := httptest.NewServer(e) defer ts.Close() uris := []string{ os.Getenv("TEST_REDIS_URI"), os.Getenv("TEST_REDIS_2_8_URI"), os.Getenv("TEST_REDIS7_URI"), os.Getenv("TEST_VALKEY7_URI"), os.Getenv("TEST_VALKEY8_URI"), os.Getenv("TEST_KEYDB01_URI"), os.Getenv("TEST_KEYDB02_URI"), os.Getenv("TEST_REDIS5_URI"), os.Getenv("TEST_REDIS6_URI"), os.Getenv("TEST_REDIS_MODULES_URI"), // tile38 & Cluster need to be last in this list so we can identify them when selected, down in line 229 os.Getenv("TEST_REDIS_CLUSTER_MASTER_URI"), os.Getenv("TEST_REDIS_CLUSTER_SLAVE_URI"), os.Getenv("TEST_TILE38_URI"), } t.Logf("uris: %#v", uris) goroutines := 20 var wg sync.WaitGroup wg.Add(goroutines) for ; goroutines > 0; goroutines-- { go func() { requests := 100 for ; requests > 0; requests-- { v := url.Values{} uriIdx := rand.Intn(len(uris)) target := uris[uriIdx] v.Add("target", target) // not appending this param for Tile38 and cluster (the last two in the list) // Tile38 & cluster don't support the SELECT command so this test will fail and spam the logs if uriIdx < len(uris)-3 { v.Add("check-single-keys", dbNumStrFull+"="+url.QueryEscape(testKeys[0])) } up, _ := url.Parse(ts.URL + "/scrape") up.RawQuery = v.Encode() fullURL := 
up.String() body := downloadURL(t, fullURL) wants := []string{ `test_connected_clients`, `test_commands_processed_total`, `test_instance_info`, `test_up 1`, } for _, want := range wants { if !strings.Contains(body, want) { t.Errorf("fullURL: %s - want metrics to include %q, have:\n%s", fullURL, want, body) break } } } wg.Done() }() } wg.Wait() } func TestHttpHandlers(t *testing.T) { if os.Getenv("TEST_PWD_REDIS_URI") == "" { t.Skipf("TEST_PWD_REDIS_URI not set - skipping") } e, _ := NewRedisExporter(os.Getenv("TEST_PWD_REDIS_URI"), Options{Namespace: "test", Registry: prometheus.NewRegistry()}) ts := httptest.NewServer(e) defer ts.Close() for _, tst := range []struct { path string want string }{ { path: "/", want: `Redis Exporter `, }, { path: "/health", want: `ok`, }, } { t.Run(fmt.Sprintf("path: %s", tst.path), func(t *testing.T) { body := downloadURL(t, ts.URL+tst.path) if !strings.Contains(body, tst.want) { t.Fatalf(`error, expected string "%s" in body, got body: \n\n%s`, tst.want, body) } }) } } func TestReloadHandlers(t *testing.T) { if os.Getenv("TEST_PWD_REDIS_URI") == "" { t.Skipf("TEST_PWD_REDIS_URI not set - skipping") } eWithPwdfile, _ := NewRedisExporter(os.Getenv("TEST_PWD_REDIS_URI"), Options{Namespace: "test", Registry: prometheus.NewRegistry(), RedisPwdFile: "../contrib/sample-pwd-file.json"}) ts := httptest.NewServer(eWithPwdfile) defer ts.Close() for _, tst := range []struct { e *Exporter path string want string }{ { path: "/-/reload", want: `ok`, }, } { t.Run(fmt.Sprintf("path: %s", tst.path), func(t *testing.T) { body := downloadURL(t, ts.URL+tst.path) if !strings.Contains(body, tst.want) { t.Fatalf(`error, expected string "%s" in body, got body: \n\n%s`, tst.want, body) } }) } eWithnoPwdfile, _ := NewRedisExporter(os.Getenv("TEST_PWD_REDIS_URI"), Options{Namespace: "test", Registry: prometheus.NewRegistry()}) ts2 := httptest.NewServer(eWithnoPwdfile) defer ts2.Close() for _, tst := range []struct { e *Exporter path string want string }{ { path: "/-/reload", want: `There is no pwd file specified`, }, } { t.Run(fmt.Sprintf("path: %s", tst.path), func(t *testing.T) { body := downloadURL(t, ts2.URL+tst.path) if !strings.Contains(body, tst.want) { t.Fatalf(`error, expected string "%s" in body, got body: \n\n%s`, tst.want, body) } }) } eWithMalformedPwdfile, _ := NewRedisExporter(os.Getenv("TEST_PWD_REDIS_URI"), Options{Namespace: "test", Registry: prometheus.NewRegistry(), RedisPwdFile: "../contrib/sample-pwd-file.json-malformed"}) ts3 := httptest.NewServer(eWithMalformedPwdfile) defer ts3.Close() for _, tst := range []struct { e *Exporter path string want string }{ { path: "/-/reload", want: `failed to reload passwords file: unexpected end of JSON input`, }, } { t.Run(fmt.Sprintf("path: %s", tst.path), func(t *testing.T) { body := downloadURL(t, ts3.URL+tst.path) if !strings.Contains(body, tst.want) { t.Fatalf(`error, expected string "%s" in body, got body: \n\n%s`, tst.want, body) } }) } } func TestIsBasicAuthConfigured(t *testing.T) { tests := []struct { name string username string password string want bool }{ { name: "no credentials configured", username: "", password: "", want: false, }, { name: "only username configured", username: "user", password: "", want: false, }, { name: "only password configured", username: "", password: "pass", want: false, }, { name: "both credentials configured", username: "user", password: "pass", want: true, }, } for _, tt := range tests { t.Run(tt.name, func(t *testing.T) { e, _ := NewRedisExporter("", Options{ BasicAuthUsername: 
tt.username, BasicAuthPassword: tt.password, }) if got := e.isBasicAuthConfigured(); got != tt.want { t.Errorf("isBasicAuthConfigured() = %v, want %v", got, tt.want) } }) } } func TestVerifyBasicAuth(t *testing.T) { tests := []struct { name string configUser string configPass string providedUser string providedPass string authHeaderSet bool wantErr bool wantErrString string }{ { name: "no auth configured - no credentials provided", configUser: "", configPass: "", providedUser: "", providedPass: "", authHeaderSet: false, wantErr: false, }, { name: "auth configured - no auth header", configUser: "user", configPass: "pass", providedUser: "", providedPass: "", authHeaderSet: false, wantErr: true, wantErrString: "Unauthorized", }, { name: "auth configured - correct credentials", configUser: "user", configPass: "pass", providedUser: "user", providedPass: "pass", authHeaderSet: true, wantErr: false, }, { name: "auth configured - wrong username", configUser: "user", configPass: "pass", providedUser: "wronguser", providedPass: "pass", authHeaderSet: true, wantErr: true, wantErrString: "Unauthorized", }, { name: "auth configured - wrong password", configUser: "user", configPass: "pass", providedUser: "user", providedPass: "wrongpass", authHeaderSet: true, wantErr: true, wantErrString: "Unauthorized", }, } for _, tt := range tests { t.Run(tt.name, func(t *testing.T) { e, _ := NewRedisExporter("", Options{ BasicAuthUsername: tt.configUser, BasicAuthPassword: tt.configPass, }) err := e.verifyBasicAuth(tt.providedUser, tt.providedPass, tt.authHeaderSet) if (err != nil) != tt.wantErr { t.Errorf("verifyBasicAuth() error = %v, wantErr %v", err, tt.wantErr) return } if err != nil && err.Error() != tt.wantErrString { t.Errorf("verifyBasicAuth() error = %v, wantErrString %v", err, tt.wantErrString) } }) } } func TestBasicAuth(t *testing.T) { if os.Getenv("TEST_REDIS_URI") == "" { t.Skipf("TEST_REDIS_URI not set - skipping") } tests := []struct { name string username string password string configUsername string configPassword string wantStatusCode int }{ { name: "No auth configured - no credentials provided", username: "", password: "", configUsername: "", configPassword: "", wantStatusCode: http.StatusOK, }, { name: "Auth configured - correct credentials", username: "testuser", password: "testpass", configUsername: "testuser", configPassword: "testpass", wantStatusCode: http.StatusOK, }, { name: "Auth configured - wrong username", username: "wronguser", password: "testpass", configUsername: "testuser", configPassword: "testpass", wantStatusCode: http.StatusUnauthorized, }, { name: "Auth configured - wrong password", username: "testuser", password: "wrongpass", configUsername: "testuser", configPassword: "testpass", wantStatusCode: http.StatusUnauthorized, }, { name: "Auth configured - no credentials provided", username: "", password: "", configUsername: "testuser", configPassword: "testpass", wantStatusCode: http.StatusUnauthorized, }, } for _, tt := range tests { t.Run(tt.name, func(t *testing.T) { e, _ := NewRedisExporter(os.Getenv("TEST_REDIS_URI"), Options{ Namespace: "test", Registry: prometheus.NewRegistry(), BasicAuthUsername: tt.configUsername, BasicAuthPassword: tt.configPassword, }) ts := httptest.NewServer(e) defer ts.Close() client := &http.Client{} req, err := http.NewRequest("GET", ts.URL+"/metrics", nil) if err != nil { t.Fatalf("Failed to create request: %v", err) } if tt.username != "" || tt.password != "" { req.SetBasicAuth(tt.username, tt.password) } resp, err := client.Do(req) if err != 
nil { t.Fatalf("Failed to send request: %v", err) } defer resp.Body.Close() if resp.StatusCode != tt.wantStatusCode { t.Errorf("Expected status code %d, got %d", tt.wantStatusCode, resp.StatusCode) } body, err := io.ReadAll(resp.Body) if err != nil { t.Fatalf("Failed to read response body: %v", err) } if tt.wantStatusCode == http.StatusOK { if !strings.Contains(string(body), "test_up") { t.Errorf("Expected body to contain 'test_up', got: %s", string(body)) } } else { if !strings.Contains(resp.Header.Get("WWW-Authenticate"), "Basic realm=\"redis-exporter") { t.Errorf("Expected WWW-Authenticate header, got: %s", resp.Header.Get("WWW-Authenticate")) } } }) } } func downloadURL(t *testing.T, u string) string { _, res := downloadURLWithStatusCode(t, u) return res } func downloadURLWithStatusCode(t *testing.T, u string) (int, string) { log.Debugf("downloadURL() %s", u) resp, err := http.Get(u) if err != nil { t.Fatal(err) } defer resp.Body.Close() body, err := io.ReadAll(resp.Body) if err != nil { t.Fatal(err) } return resp.StatusCode, string(body) } �����������������������������������������������������������������������������������������������������������������������������������������������������redis_exporter-1.69.0/exporter/info.go��������������������������������������������������������������0000664�0000000�0000000�00000037731�14765200314�0020014�0����������������������������������������������������������������������������������������������������ustar�00root����������������������������root����������������������������0000000�0000000������������������������������������������������������������������������������������������������������������������������������������������������������������������������package exporter import ( "errors" "fmt" "regexp" "strconv" "strings" "time" "github.com/prometheus/client_golang/prometheus" log "github.com/sirupsen/logrus" ) // precompiled regexps // keydb multimaster /* master_host:kdb0.server.local master_port:6377 master_1_host:kdb1.server.local master_1_port:6377 */ var reMasterHost = regexp.MustCompile(`^master(_[0-9]+)?_host`) var reMasterPort = regexp.MustCompile(`^master(_[0-9]+)?_port`) var reMasterLinkStatus = regexp.MustCompile(`^master(_[0-9]+)?_link_status`) // info fieldKey:fieldValue -> metric redis_fieldKey{master_host, master_port} fieldValue var reMasterDirect = regexp.MustCompile(`^(master(_[0-9]+)?_(last_io_seconds_ago|sync_in_progress)|slave_repl_offset)`) // numbered slaves /* slave0:ip=10.254.11.1,port=6379,state=online,offset=1751844676,lag=0 slave1:ip=10.254.11.2,port=6379,state=online,offset=1751844222,lag=0 */ var reSlave = regexp.MustCompile(`^slave\d+`) func extractVal(s string) (val float64, err error) { split := strings.Split(s, "=") if len(split) != 2 { return 0, fmt.Errorf("nope") } val, err = strconv.ParseFloat(split[1], 64) if err != nil { return 0, fmt.Errorf("nope") } return } func extractPercentileVal(s string) (percentile float64, val float64, err error) { split := strings.Split(s, "=") if len(split) != 2 { return } percentile, err = strconv.ParseFloat(split[0][1:], 64) if err != nil { return } val, err = strconv.ParseFloat(split[1], 64) return } func (e *Exporter) extractInfoMetrics(ch chan<- prometheus.Metric, info string, dbCount int) { keyValues := map[string]string{} handledDBs := map[string]bool{} cmdCount := map[string]uint64{} cmdSum := map[string]float64{} cmdLatencyMap := map[string]map[float64]float64{} fieldClass := "" lines := strings.Split(info, "\n") masterHost := "" masterPort := "" for _, line := range lines 
{ line = strings.TrimSpace(line) log.Debugf("info: %s", line) if len(line) > 0 && strings.HasPrefix(line, "# ") { fieldClass = line[2:] log.Debugf("set fieldClass: %s", fieldClass) continue } if (len(line) < 2) || (!strings.Contains(line, ":")) { continue } split := strings.SplitN(line, ":", 2) fieldKey := split[0] fieldValue := split[1] keyValues[fieldKey] = fieldValue if reMasterHost.MatchString(fieldKey) { masterHost = fieldValue } if reMasterPort.MatchString(fieldKey) { masterPort = fieldValue } switch fieldClass { case "Replication": if ok := e.handleMetricsReplication(ch, masterHost, masterPort, fieldKey, fieldValue); ok { continue } case "Server": e.handleMetricsServer(ch, fieldKey, fieldValue) case "Commandstats": cmd, calls, usecsTotal := e.handleMetricsCommandStats(ch, fieldKey, fieldValue) cmdCount[cmd] = uint64(calls) cmdSum[cmd] = usecsTotal continue case "Latencystats": e.handleMetricsLatencyStats(fieldKey, fieldValue, cmdLatencyMap) continue case "Errorstats": e.handleMetricsErrorStats(ch, fieldKey, fieldValue) continue case "Keyspace": if keysTotal, keysEx, avgTTL, keysCached, ok := parseDBKeyspaceString(fieldKey, fieldValue); ok { dbName := fieldKey e.registerConstMetricGauge(ch, "db_keys", keysTotal, dbName) e.registerConstMetricGauge(ch, "db_keys_expiring", keysEx, dbName) if keysCached > -1 { e.registerConstMetricGauge(ch, "db_keys_cached", keysCached, dbName) } if avgTTL > -1 { e.registerConstMetricGauge(ch, "db_avg_ttl_seconds", avgTTL, dbName) } handledDBs[dbName] = true continue } case "Sentinel": e.handleMetricsSentinel(ch, fieldKey, fieldValue) } if !e.includeMetric(fieldKey) { continue } e.parseAndRegisterConstMetric(ch, fieldKey, fieldValue) } // To be able to generate the latency summaries we need the count and sum that we get // from #Commandstats processing and the percentile info that we get from the #Latencystats processing e.generateCommandLatencySummaries(ch, cmdLatencyMap, cmdCount, cmdSum) for dbIndex := 0; dbIndex < dbCount; dbIndex++ { dbName := "db" + strconv.Itoa(dbIndex) if _, exists := handledDBs[dbName]; !exists { e.registerConstMetricGauge(ch, "db_keys", 0, dbName) e.registerConstMetricGauge(ch, "db_keys_expiring", 0, dbName) } } e.registerConstMetricGauge(ch, "instance_info", 1, keyValues["role"], keyValues["redis_version"], keyValues["redis_build_id"], keyValues["redis_mode"], keyValues["os"], keyValues["maxmemory_policy"], keyValues["tcp_port"], keyValues["run_id"], keyValues["process_id"], keyValues["master_replid"], ) if keyValues["role"] == "slave" { e.registerConstMetricGauge(ch, "slave_info", 1, keyValues["master_host"], keyValues["master_port"], keyValues["slave_read_only"]) } } func (e *Exporter) generateCommandLatencySummaries(ch chan<- prometheus.Metric, cmdLatencyMap map[string]map[float64]float64, cmdCount map[string]uint64, cmdSum map[string]float64) { for cmd, latencyMap := range cmdLatencyMap { count, okCount := cmdCount[cmd] sum, okSum := cmdSum[cmd] if okCount && okSum { e.registerConstSummary(ch, "latency_percentiles_usec", []string{"cmd"}, count, sum, latencyMap, cmd) } } } func (e *Exporter) extractClusterInfoMetrics(ch chan<- prometheus.Metric, info string) { lines := strings.Split(info, "\r\n") for _, line := range lines { log.Debugf("info: %s", line) split := strings.Split(line, ":") if len(split) != 2 { continue } fieldKey := split[0] fieldValue := split[1] if !e.includeMetric(fieldKey) { continue } e.parseAndRegisterConstMetric(ch, fieldKey, fieldValue) } } /* valid example: db0:keys=1,expires=0,avg_ttl=0,cached_keys=0 
*/ func parseDBKeyspaceString(inputKey string, inputVal string) (keysTotal float64, keysExpiringTotal float64, avgTTL float64, keysCachedTotal float64, ok bool) { log.Debugf("parseDBKeyspaceString inputKey: [%s] inputVal: [%s]", inputKey, inputVal) if !strings.HasPrefix(inputKey, "db") { log.Debugf("parseDBKeyspaceString inputKey not starting with 'db': [%s]", inputKey) return } split := strings.Split(inputVal, ",") if len(split) < 2 || len(split) > 4 { log.Debugf("parseDBKeyspaceString strings.Split(inputVal) invalid: %#v", split) return } var err error if keysTotal, err = extractVal(split[0]); err != nil { log.Debugf("parseDBKeyspaceString extractVal(split[0]) invalid, err: %s", err) return } if keysExpiringTotal, err = extractVal(split[1]); err != nil { log.Debugf("parseDBKeyspaceString extractVal(split[1]) invalid, err: %s", err) return } avgTTL = -1 if len(split) > 2 { if avgTTL, err = extractVal(split[2]); err != nil { log.Debugf("parseDBKeyspaceString extractVal(split[2]) invalid, err: %s", err) return } avgTTL /= 1000 } keysCachedTotal = -1 if len(split) > 3 { if keysCachedTotal, err = extractVal(split[3]); err != nil { log.Debugf("parseDBKeyspaceString extractVal(split[3]) invalid, err: %s", err) return } } ok = true return } /* slave0:ip=10.254.11.1,port=6379,state=online,offset=1751844676,lag=0 slave1:ip=10.254.11.2,port=6379,state=online,offset=1751844222,lag=0 */ func parseConnectedSlaveString(slaveName string, keyValues string) (offset float64, ip string, port string, state string, lag float64, ok bool) { ok = false if !reSlave.MatchString(slaveName) { return } connectedkeyValues := make(map[string]string) for _, kvPart := range strings.Split(keyValues, ",") { x := strings.Split(kvPart, "=") if len(x) != 2 { log.Debugf("Invalid format for connected slave string, got: %s", kvPart) return } connectedkeyValues[x[0]] = x[1] } offset, err := strconv.ParseFloat(connectedkeyValues["offset"], 64) if err != nil { log.Debugf("Can not parse connected slave offset, got: %s", connectedkeyValues["offset"]) return } if lagStr, exists := connectedkeyValues["lag"]; !exists { // Prior to Redis 3.0, "lag" property does not exist lag = -1 } else { lag, err = strconv.ParseFloat(lagStr, 64) if err != nil { log.Debugf("Can not parse connected slave lag, got: %s", lagStr) return } } ok = true ip = connectedkeyValues["ip"] port = connectedkeyValues["port"] state = connectedkeyValues["state"] return } func (e *Exporter) handleMetricsReplication(ch chan<- prometheus.Metric, masterHost string, masterPort string, fieldKey string, fieldValue string) bool { // only slaves have this field if reMasterLinkStatus.MatchString(fieldKey) { if fieldValue == "up" { e.registerConstMetricGauge(ch, "master_link_up", 1, masterHost, masterPort) } else { e.registerConstMetricGauge(ch, "master_link_up", 0, masterHost, masterPort) } return true } if reMasterDirect.MatchString(fieldKey) { if strings.HasSuffix(fieldKey, "last_io_seconds_ago") { fieldKey = "master_last_io_seconds_ago" } else if strings.HasSuffix(fieldKey, "sync_in_progress") { fieldKey = "master_sync_in_progress" } val, _ := strconv.Atoi(fieldValue) e.registerConstMetricGauge(ch, fieldKey, float64(val), masterHost, masterPort) return true } // not a slave, try extracting master metrics if slaveOffset, slaveIP, slavePort, slaveState, slaveLag, ok := parseConnectedSlaveString(fieldKey, fieldValue); ok { e.registerConstMetricGauge(ch, "connected_slave_offset_bytes", slaveOffset, slaveIP, slavePort, slaveState, ) if slaveLag > -1 { e.registerConstMetricGauge(ch, 
"connected_slave_lag_seconds", slaveLag, slaveIP, slavePort, slaveState, ) } return true } return false } func (e *Exporter) handleMetricsServer(ch chan<- prometheus.Metric, fieldKey string, fieldValue string) { if fieldKey == "uptime_in_seconds" { if uptime, err := strconv.ParseFloat(fieldValue, 64); err == nil { e.registerConstMetricGauge(ch, "start_time_seconds", float64(time.Now().Unix())-uptime) } } if fieldKey == "configured_hz" { if hz, err := strconv.ParseInt(fieldValue, 10, 64); err == nil { e.registerConstMetricGauge(ch, "configured_hz", float64(hz)) } } if fieldKey == "hz" { if hz, err := strconv.ParseInt(fieldValue, 10, 64); err == nil { e.registerConstMetricGauge(ch, "hz", float64(hz)) } } } func parseMetricsCommandStats(fieldKey string, fieldValue string) (cmd string, calls float64, rejectedCalls float64, failedCalls float64, usecTotal float64, extendedStats bool, errorOut error) { /* There are 2 formats. (One before Redis 6.2 and one after it) Format before v6.2: cmdstat_get:calls=21,usec=175,usec_per_call=8.33 cmdstat_set:calls=61,usec=3139,usec_per_call=51.46 cmdstat_setex:calls=75,usec=1260,usec_per_call=16.80 cmdstat_georadius_ro:calls=75,usec=1260,usec_per_call=16.80 Format from v6.2 forward: cmdstat_get:calls=21,usec=175,usec_per_call=8.33,rejected_calls=0,failed_calls=0 cmdstat_set:calls=61,usec=3139,usec_per_call=51.46,rejected_calls=0,failed_calls=0 cmdstat_setex:calls=75,usec=1260,usec_per_call=16.80,rejected_calls=0,failed_calls=0 cmdstat_georadius_ro:calls=75,usec=1260,usec_per_call=16.80,rejected_calls=0,failed_calls=0 broken up like this: fieldKey = cmdstat_get fieldValue= calls=21,usec=175,usec_per_call=8.33 */ const cmdPrefix = "cmdstat_" extendedStats = false if !strings.HasPrefix(fieldKey, cmdPrefix) { errorOut = errors.New("Invalid fieldKey") return } cmd = strings.TrimPrefix(fieldKey, cmdPrefix) splitValue := strings.Split(fieldValue, ",") splitLen := len(splitValue) if splitLen < 3 { errorOut = errors.New("Invalid fieldValue") return } // internal error variable var err error calls, err = extractVal(splitValue[0]) if err != nil { errorOut = errors.New("Invalid splitValue[0]") return } usecTotal, err = extractVal(splitValue[1]) if err != nil { errorOut = errors.New("Invalid splitValue[1]") return } // pre 6.2 did not include rejected/failed calls stats so if we have less than 5 tokens we're done here if splitLen < 5 { return } rejectedCalls, err = extractVal(splitValue[3]) if err != nil { errorOut = errors.New("Invalid rejected_calls while parsing splitValue[3]") return } failedCalls, err = extractVal(splitValue[4]) if err != nil { errorOut = errors.New("Invalid failed_calls while parsing splitValue[4]") return } extendedStats = true return } func parseMetricsLatencyStats(fieldKey string, fieldValue string) (cmd string, percentileMap map[float64]float64, errorOut error) { /* # Latencystats latency_percentiles_usec_rpop:p50=0.001,p99=1.003,p99.9=4.015 latency_percentiles_usec_zadd:p50=0.001,p99=1.003,p99.9=4.015 latency_percentiles_usec_hset:p50=0.001,p99=1.003,p99.9=3.007 latency_percentiles_usec_set:p50=0.001,p99=1.003,p99.9=4.015 latency_percentiles_usec_lpop:p50=0.001,p99=1.003,p99.9=4.015 latency_percentiles_usec_lpush:p50=0.001,p99=1.003,p99.9=4.015 latency_percentiles_usec_lrange:p50=17.023,p99=21.119,p99.9=27.007 latency_percentiles_usec_get:p50=0.001,p99=1.003,p99.9=3.007 latency_percentiles_usec_mset:p50=1.003,p99=1.003,p99.9=1.003 latency_percentiles_usec_spop:p50=0.001,p99=1.003,p99.9=1.003 
latency_percentiles_usec_incr:p50=0.001,p99=1.003,p99.9=3.007 latency_percentiles_usec_rpush:p50=0.001,p99=1.003,p99.9=4.015 latency_percentiles_usec_zpopmin:p50=0.001,p99=1.003,p99.9=3.007 latency_percentiles_usec_config|resetstat:p50=280.575,p99=280.575,p99.9=280.575 latency_percentiles_usec_config|get:p50=8.031,p99=27.007,p99.9=27.007 latency_percentiles_usec_ping:p50=0.001,p99=1.003,p99.9=1.003 latency_percentiles_usec_sadd:p50=0.001,p99=1.003,p99.9=3.007 broken up like this: fieldKey = latency_percentiles_usec_ping fieldValue= p50=0.001,p99=1.003,p99.9=3.007 */ const cmdPrefix = "latency_percentiles_usec_" percentileMap = map[float64]float64{} if !strings.HasPrefix(fieldKey, cmdPrefix) { errorOut = errors.New("Invalid fieldKey") return } cmd = strings.TrimPrefix(fieldKey, cmdPrefix) splitValue := strings.Split(fieldValue, ",") splitLen := len(splitValue) if splitLen < 1 { errorOut = errors.New("Invalid fieldValue") return } for pos, kv := range splitValue { percentile, value, err := extractPercentileVal(kv) if err != nil { errorOut = fmt.Errorf("Invalid splitValue[%d]", pos) return } percentileMap[percentile] = value } return } func parseMetricsErrorStats(fieldKey string, fieldValue string) (errorType string, count float64, errorOut error) { /* Format: errorstat_ERR:count=4 errorstat_NOAUTH:count=3 broken up like this: fieldKey = errorstat_ERR fieldValue= count=3 */ const prefix = "errorstat_" if !strings.HasPrefix(fieldKey, prefix) { errorOut = errors.New("Invalid fieldKey. errorstat_ prefix not present") return } errorType = strings.TrimPrefix(fieldKey, prefix) count, err := extractVal(fieldValue) if err != nil { errorOut = errors.New("Invalid error type on splitValue[0]") return } return } func (e *Exporter) handleMetricsCommandStats(ch chan<- prometheus.Metric, fieldKey string, fieldValue string) (cmd string, calls float64, usecTotal float64) { cmd, calls, rejectedCalls, failedCalls, usecTotal, extendedStats, err := parseMetricsCommandStats(fieldKey, fieldValue) if err != nil { log.Debugf("parseMetricsCommandStats( %s , %s ) err: %s", fieldKey, fieldValue, err) return } e.registerConstMetric(ch, "commands_total", calls, prometheus.CounterValue, cmd) e.registerConstMetric(ch, "commands_duration_seconds_total", usecTotal/1e6, prometheus.CounterValue, cmd) if extendedStats { e.registerConstMetric(ch, "commands_rejected_calls_total", rejectedCalls, prometheus.CounterValue, cmd) e.registerConstMetric(ch, "commands_failed_calls_total", failedCalls, prometheus.CounterValue, cmd) } return } func (e *Exporter) handleMetricsLatencyStats(fieldKey string, fieldValue string, cmdLatencyMap map[string]map[float64]float64) { cmd, latencyMap, err := parseMetricsLatencyStats(fieldKey, fieldValue) if err == nil { cmdLatencyMap[cmd] = latencyMap } } func (e *Exporter) handleMetricsErrorStats(ch chan<- prometheus.Metric, fieldKey string, fieldValue string) { if errorPrefix, count, err := parseMetricsErrorStats(fieldKey, fieldValue); err == nil { e.registerConstMetric(ch, "errors_total", count, prometheus.CounterValue, errorPrefix) } } 
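// Illustrative sketch (hypothetical; not part of the upstream package): how
// the parsers above decompose raw INFO fields, using the example lines from
// their doc comments. It assumes it is compiled inside package exporter,
// where the unexported parsers and the "fmt" import are in scope.
func exampleInfoFieldParsing() {
	// Commandstats, pre-6.2 format: extendedStats stays false because the
	// rejected_calls/failed_calls tokens are absent.
	cmd, calls, _, _, usecTotal, extendedStats, err := parseMetricsCommandStats(
		"cmdstat_get", "calls=21,usec=175,usec_per_call=8.33")
	fmt.Printf("cmd=%s calls=%.0f usec=%.0f extended=%t err=%v\n",
		cmd, calls, usecTotal, extendedStats, err) // cmd=get calls=21 usec=175 extended=false err=<nil>

	// Errorstats: "errorstat_ERR:count=4" -> errorType "ERR", count 4.
	errorType, count, err := parseMetricsErrorStats("errorstat_ERR", "count=4")
	fmt.Printf("errorType=%s count=%.0f err=%v\n", errorType, count, err)

	// Keyspace: avg_ttl is reported in milliseconds and converted to seconds;
	// keysCached comes back as -1 when the cached_keys field is absent.
	keysTotal, keysEx, avgTTL, keysCached, ok := parseDBKeyspaceString(
		"db0", "keys=1,expires=0,avg_ttl=0")
	fmt.Println(keysTotal, keysEx, avgTTL, keysCached, ok) // 1 0 0 -1 true
}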
���������������������������������������redis_exporter-1.69.0/exporter/info_test.go���������������������������������������������������������0000664�0000000�0000000�00000031412�14765200314�0021041�0����������������������������������������������������������������������������������������������������ustar�00root����������������������������root����������������������������0000000�0000000������������������������������������������������������������������������������������������������������������������������������������������������������������������������package exporter import ( "fmt" "net/http/httptest" "os" "reflect" "regexp" "strings" "testing" "github.com/prometheus/client_golang/prometheus" log "github.com/sirupsen/logrus" ) func TestKeyspaceStringParser(t *testing.T) { tsts := []struct { db string stats string keysTotal, keysEx, keysCached, avgTTL float64 ok bool }{ {db: "xxx", stats: "", ok: false}, {db: "xxx", stats: "keys=1,expires=0,avg_ttl=0", ok: false}, {db: "db0", stats: "xxx", ok: false}, {db: "db1", stats: "keys=abcd,expires=0,avg_ttl=0", ok: false}, {db: "db2", stats: "keys=1234=1234,expires=0,avg_ttl=0", ok: false}, {db: "db3", stats: "keys=abcde,expires=0", ok: false}, {db: "db3", stats: "keys=213,expires=xxx", ok: false}, {db: "db3", stats: "keys=123,expires=0,avg_ttl=zzz", ok: false}, {db: "db3", stats: "keys=1,expires=0,avg_ttl=zzz,cached_keys=0", ok: false}, {db: "db3", stats: "keys=1,expires=0,avg_ttl=0,cached_keys=zzz", ok: false}, {db: "db3", stats: "keys=1,expires=0,avg_ttl=0,cached_keys=0,extra=0", ok: false}, {db: "db0", stats: "keys=1,expires=0,avg_ttl=0", keysTotal: 1, keysEx: 0, avgTTL: 0, keysCached: -1, ok: true}, {db: "db0", stats: "keys=1,expires=0,avg_ttl=0,cached_keys=0", keysTotal: 1, keysEx: 0, avgTTL: 0, keysCached: 0, ok: true}, } for _, tst := range tsts { if kt, kx, ttl, kc, ok := parseDBKeyspaceString(tst.db, tst.stats); true { if ok != tst.ok { t.Errorf("failed for: db:%s stats:%s", tst.db, tst.stats) continue } if ok && (kt != tst.keysTotal || kx != tst.keysEx || kc != tst.keysCached || ttl != tst.avgTTL) { t.Errorf("values not matching, db:%s stats:%s %f %f %f %f", tst.db, tst.stats, kt, kx, kc, ttl) } } } } type slaveData struct { k, v string ip, state, port string offset float64 lag float64 ok bool } func TestParseConnectedSlaveString(t *testing.T) { tsts := []slaveData{ {k: "slave0", v: "ip=10.254.11.1,port=6379,state=online,offset=1751844676,lag=0", offset: 1751844676, ip: "10.254.11.1", port: "6379", state: "online", ok: true, lag: 0}, {k: "slave0", v: "ip=2a00:1450:400e:808::200e,port=6379,state=online,offset=1751844676,lag=0", offset: 1751844676, ip: "2a00:1450:400e:808::200e", port: "6379", state: "online", ok: true, lag: 0}, {k: "slave1", v: "offset=1,lag=0", offset: 1, ok: true}, {k: "slave1", v: "offset=1", offset: 1, ok: true, lag: -1}, {k: "slave2", v: "ip=1.2.3.4,state=online,offset=123,lag=42", offset: 123, ip: "1.2.3.4", state: "online", ok: true, lag: 42}, {k: "slave", v: "offset=1751844676,lag=0", ok: false}, {k: "slaveA", v: "offset=1751844676,lag=0", ok: false}, {k: "slave0", v: "offset=abc,lag=0", ok: false}, {k: "slave0", v: "offset=0,lag=abc", ok: false}, } for _, tst := range tsts { t.Run(fmt.Sprintf("%s---%s", tst.k, tst.v), func(t *testing.T) { offset, ip, port, state, lag, ok := parseConnectedSlaveString(tst.k, tst.v) if ok != tst.ok { t.Errorf("failed for: db:%s stats:%s", tst.k, tst.v) return } if offset != tst.offset || ip != tst.ip || port != tst.port || state != tst.state || lag != tst.lag { t.Errorf("values not 
matching, string:%s %f %s %s %s %f", tst.v, offset, ip, port, state, lag) } }) } } func TestCommandStats(t *testing.T) { defaultAddr := os.Getenv("TEST_REDIS_URI") e := getTestExporterWithAddr(defaultAddr) setupTestKeys(t, defaultAddr) want := map[string]bool{"test_commands_duration_seconds_total": false, "test_commands_total": false} commandStatsCheck(t, e, want) deleteTestKeys(t, defaultAddr) redisSixTwoAddr := os.Getenv("TEST_REDIS6_URI") if redisSixTwoAddr != "" { // Since Redis v6.2 we should expect extra failed calls and rejected calls e = getTestExporterWithAddr(redisSixTwoAddr) setupTestKeys(t, redisSixTwoAddr) want = map[string]bool{"test_commands_duration_seconds_total": false, "test_commands_total": false, "commands_failed_calls_total": false, "commands_rejected_calls_total": false, "errors_total": false} commandStatsCheck(t, e, want) deleteTestKeys(t, redisSixTwoAddr) } } func commandStatsCheck(t *testing.T, e *Exporter, want map[string]bool) { chM := make(chan prometheus.Metric) go func() { e.Collect(chM) close(chM) }() for m := range chM { for k := range want { if strings.Contains(m.Desc().String(), k) { want[k] = true } } } for k, found := range want { if !found { t.Errorf("didn't find %s", k) } } } func TestClusterMaster(t *testing.T) { if os.Getenv("TEST_REDIS_CLUSTER_MASTER_URI") == "" { t.Skipf("TEST_REDIS_CLUSTER_MASTER_URI not set - skipping") } addr := os.Getenv("TEST_REDIS_CLUSTER_MASTER_URI") e, _ := NewRedisExporter(addr, Options{Namespace: "test", Registry: prometheus.NewRegistry()}) ts := httptest.NewServer(e) defer ts.Close() chM := make(chan prometheus.Metric, 10000) go func() { e.Collect(chM) close(chM) }() body := downloadURL(t, ts.URL+"/metrics") log.Debugf("master - body: %s", body) for _, want := range []string{ "test_instance_info{", "test_master_repl_offset", } { if !strings.Contains(body, want) { t.Errorf("Did not find key [%s] \nbody: %s", want, body) } } } func TestClusterSlave(t *testing.T) { if os.Getenv("TEST_REDIS_CLUSTER_SLAVE_URI") == "" { t.Skipf("TEST_REDIS_CLUSTER_SLAVE_URI not set - skipping") } addr := os.Getenv("TEST_REDIS_CLUSTER_SLAVE_URI") e, _ := NewRedisExporter(addr, Options{Namespace: "test", Registry: prometheus.NewRegistry()}) ts := httptest.NewServer(e) defer ts.Close() chM := make(chan prometheus.Metric, 10000) go func() { e.Collect(chM) close(chM) }() body := downloadURL(t, ts.URL+"/metrics") log.Debugf("slave - body: %s", body) for _, want := range []string{ "test_instance_info", "test_master_last_io_seconds", "test_slave_info", } { if !strings.Contains(body, want) { t.Errorf("Did not find key [%s] \nbody: %s", want, body) } } hostReg, _ := regexp.Compile(`master_host="([0,1]?\d{1,2}|2([0-4][0-9]|5[0-5]))(\.([0,1]?\d{1,2}|2([0-4][0-9]|5[0-5]))){3}"`) masterHost := hostReg.FindString(body) portReg, _ := regexp.Compile(`master_port="(\d+)"`) masterPort := portReg.FindString(body) for wantedKey, wantedVal := range map[string]int{ masterHost: 5, masterPort: 5, } { if res := strings.Count(body, wantedKey); res != wantedVal { t.Errorf("Result: %s -> %d, Wanted: %d \nbody: %s", wantedKey, res, wantedVal, body) } } } func TestParseCommandStats(t *testing.T) { for _, tst := range []struct { fieldKey string fieldValue string wantSuccess bool wantExtraStats bool wantCmd string wantCalls float64 wantRejectedCalls float64 wantFailedCalls float64 wantUsecTotal float64 }{ { fieldKey: "cmdstat_get", fieldValue: "calls=21,usec=175,usec_per_call=8.33", wantSuccess: true, wantCmd: "get", wantCalls: 21, wantUsecTotal: 175, }, { fieldKey: 
"cmdstat_georadius_ro", fieldValue: "calls=75,usec=1260,usec_per_call=16.80", wantSuccess: true, wantCmd: "georadius_ro", wantCalls: 75, wantUsecTotal: 1260, }, { fieldKey: "borked_stats", fieldValue: "calls=75,usec=1260,usec_per_call=16.80", wantSuccess: false, }, { fieldKey: "cmdstat_georadius_ro", fieldValue: "borked_values", wantSuccess: false, }, { fieldKey: "cmdstat_georadius_ro", fieldValue: "usec_per_call=16.80", wantSuccess: false, }, { fieldKey: "cmdstat_georadius_ro", fieldValue: "calls=ABC,usec=1260,usec_per_call=16.80", wantSuccess: false, }, { fieldKey: "cmdstat_georadius_ro", fieldValue: "calls=75,usec=DEF,usec_per_call=16.80", wantSuccess: false, }, { fieldKey: "cmdstat_georadius_ro", fieldValue: "calls=75,usec=1024,usec_per_call=16.80,rejected_calls=5,failed_calls=10", wantCmd: "georadius_ro", wantCalls: 75, wantUsecTotal: 1024, wantSuccess: true, wantExtraStats: true, wantFailedCalls: 10, wantRejectedCalls: 5, }, { fieldKey: "cmdstat_georadius_ro", fieldValue: "calls=75,usec=1024,usec_per_call=16.80,rejected_calls=ABC,failed_calls=10", wantSuccess: false, }, { fieldKey: "cmdstat_georadius_ro", fieldValue: "calls=75,usec=1024,usec_per_call=16.80,rejected_calls=5,failed_calls=ABC", wantSuccess: false, }, } { t.Run(tst.fieldKey+tst.fieldValue, func(t *testing.T) { cmd, calls, rejectedCalls, failedCalls, usecTotal, _, err := parseMetricsCommandStats(tst.fieldKey, tst.fieldValue) if tst.wantSuccess && err != nil { t.Fatalf("err: %s", err) return } if !tst.wantSuccess && err == nil { t.Fatalf("expected err!") return } if !tst.wantSuccess { return } if cmd != tst.wantCmd { t.Fatalf("cmd not matching, got: %s, wanted: %s", cmd, tst.wantCmd) } if calls != tst.wantCalls { t.Fatalf("cmd not matching, got: %f, wanted: %f", calls, tst.wantCalls) } if rejectedCalls != tst.wantRejectedCalls { t.Fatalf("cmd not matching, got: %f, wanted: %f", rejectedCalls, tst.wantRejectedCalls) } if failedCalls != tst.wantFailedCalls { t.Fatalf("cmd not matching, got: %f, wanted: %f", failedCalls, tst.wantFailedCalls) } if usecTotal != tst.wantUsecTotal { t.Fatalf("cmd not matching, got: %f, wanted: %f", usecTotal, tst.wantUsecTotal) } }) } } func TestParseErrorStats(t *testing.T) { for _, tst := range []struct { fieldKey string fieldValue string wantSuccess bool wantErrorPrefix string wantCount float64 }{ { fieldKey: "errorstat_ERR", fieldValue: "count=4", wantSuccess: true, wantErrorPrefix: "ERR", wantCount: 4, }, { fieldKey: "borked_stats", fieldValue: "count=4", wantSuccess: false, }, { fieldKey: "errorstat_ERR", fieldValue: "borked_values", wantSuccess: false, }, { fieldKey: "errorstat_ERR", fieldValue: "count=ABC", wantSuccess: false, }, } { t.Run(tst.fieldKey+tst.fieldValue, func(t *testing.T) { errorPrefix, count, err := parseMetricsErrorStats(tst.fieldKey, tst.fieldValue) if tst.wantSuccess && err != nil { t.Fatalf("err: %s", err) return } if !tst.wantSuccess && err == nil { t.Fatalf("expected err!") return } if !tst.wantSuccess { return } if errorPrefix != tst.wantErrorPrefix { t.Fatalf("cmd not matching, got: %s, wanted: %s", errorPrefix, tst.wantErrorPrefix) } if count != tst.wantCount { t.Fatalf("cmd not matching, got: %f, wanted: %f", count, tst.wantCount) } }) } } func Test_parseMetricsLatencyStats(t *testing.T) { type args struct { fieldKey string fieldValue string } tests := []struct { name string args args wantCmd string wantPercentileMap map[float64]float64 wantErr bool }{ { name: "simple", args: args{fieldKey: "latency_percentiles_usec_ping", fieldValue: 
"p50=0.001,p99=1.003,p99.9=3.007"}, wantCmd: "ping", wantPercentileMap: map[float64]float64{50.0: 0.001, 99.0: 1.003, 99.9: 3.007}, wantErr: false, }, { name: "single-percentile", args: args{fieldKey: "latency_percentiles_usec_ping", fieldValue: "p50=0.001"}, wantCmd: "ping", wantPercentileMap: map[float64]float64{50.0: 0.001}, wantErr: false, }, { name: "empty", args: args{fieldKey: "latency_percentiles_usec_ping", fieldValue: ""}, wantCmd: "ping", wantPercentileMap: map[float64]float64{0: 0}, wantErr: false, }, { name: "invalid-percentile", args: args{fieldKey: "latency_percentiles_usec_ping", fieldValue: "p50=a"}, wantCmd: "ping", wantPercentileMap: map[float64]float64{}, wantErr: true, }, { name: "invalid prefix", args: args{fieldKey: "wrong_prefix_", fieldValue: "p50=0.001,p99=1.003,p99.9=3.007"}, wantCmd: "", wantPercentileMap: map[float64]float64{}, wantErr: true, }, } for _, tt := range tests { t.Run(tt.name, func(t *testing.T) { gotCmd, gotPercentileMap, err := parseMetricsLatencyStats(tt.args.fieldKey, tt.args.fieldValue) if (err != nil) != tt.wantErr { t.Errorf("test %s. parseMetricsLatencyStats() error = %v, wantErr %v", tt.name, err, tt.wantErr) return } if gotCmd != tt.wantCmd { t.Errorf("parseMetricsLatencyStats() gotCmd = %v, want %v", gotCmd, tt.wantCmd) } if !reflect.DeepEqual(gotPercentileMap, tt.wantPercentileMap) { t.Errorf("parseMetricsLatencyStats() gotPercentileMap = %v, want %v", gotPercentileMap, tt.wantPercentileMap) } }) } } ������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������redis_exporter-1.69.0/exporter/key_groups.go��������������������������������������������������������0000664�0000000�0000000�00000015375�14765200314�0021250�0����������������������������������������������������������������������������������������������������ustar�00root����������������������������root����������������������������0000000�0000000������������������������������������������������������������������������������������������������������������������������������������������������������������������������package exporter import ( "encoding/csv" "fmt" "sort" "strings" "time" "github.com/gomodule/redigo/redis" "github.com/prometheus/client_golang/prometheus" log "github.com/sirupsen/logrus" ) type keyGroupMetrics struct { keyGroup string count int64 memoryUsage int64 } type overflowedKeyGroupMetrics struct { topMemoryUsageKeyGroups []*keyGroupMetrics overflowKeyGroupAggregate keyGroupMetrics keyGroupsCount int64 } type keyGroupsScrapeResult struct { duration time.Duration metrics []map[string]*keyGroupMetrics overflowedMetrics []*overflowedKeyGroupMetrics } func (e *Exporter) extractKeyGroupMetrics(ch chan<- prometheus.Metric, c redis.Conn, dbCount int) { allDbKeyGroupMetrics := e.gatherKeyGroupsMetricsForAllDatabases(c, dbCount) if allDbKeyGroupMetrics == nil { return } for db, dbKeyGroupMetrics := range allDbKeyGroupMetrics.metrics { dbLabel := fmt.Sprintf("db%d", db) registerKeyGroupMetrics := func(metrics *keyGroupMetrics) { e.registerConstMetricGauge( ch, "key_group_count", float64(metrics.count), dbLabel, metrics.keyGroup, ) e.registerConstMetricGauge( ch, "key_group_memory_usage_bytes", float64(metrics.memoryUsage), dbLabel, metrics.keyGroup, ) } if allDbKeyGroupMetrics.overflowedMetrics[db] != nil { overflowedMetrics := allDbKeyGroupMetrics.overflowedMetrics[db] for _, metrics := range 
overflowedMetrics.topMemoryUsageKeyGroups { registerKeyGroupMetrics(metrics) } registerKeyGroupMetrics(&overflowedMetrics.overflowKeyGroupAggregate) e.registerConstMetricGauge(ch, "number_of_distinct_key_groups", float64(overflowedMetrics.keyGroupsCount), dbLabel) } else if dbKeyGroupMetrics != nil { for _, metrics := range dbKeyGroupMetrics { registerKeyGroupMetrics(metrics) } e.registerConstMetricGauge(ch, "number_of_distinct_key_groups", float64(len(dbKeyGroupMetrics)), dbLabel) } } e.registerConstMetricGauge(ch, "last_key_groups_scrape_duration_milliseconds", float64(allDbKeyGroupMetrics.duration.Milliseconds())) } func (e *Exporter) gatherKeyGroupsMetricsForAllDatabases(c redis.Conn, dbCount int) *keyGroupsScrapeResult { start := time.Now() allMetrics := &keyGroupsScrapeResult{ metrics: make([]map[string]*keyGroupMetrics, dbCount), overflowedMetrics: make([]*overflowedKeyGroupMetrics, dbCount), } defer func() { allMetrics.duration = time.Since(start) }() if strings.TrimSpace(e.options.CheckKeyGroups) == "" { return allMetrics } keyGroups, err := csv.NewReader( strings.NewReader(e.options.CheckKeyGroups), ).Read() if err != nil { log.Errorf("Failed to parse key groups as csv: %s", err) return allMetrics } for i, v := range keyGroups { keyGroups[i] = strings.TrimSpace(v) } keyGroupsNoEmptyStrings := make([]string, 0) for _, v := range keyGroups { if len(v) > 0 { keyGroupsNoEmptyStrings = append(keyGroupsNoEmptyStrings, v) } } if len(keyGroupsNoEmptyStrings) == 0 { return allMetrics } for db := 0; db < dbCount; db++ { if _, err := doRedisCmd(c, "SELECT", db); err != nil { log.Errorf("Couldn't select database %d when getting key info.", db) continue } allGroups, err := gatherKeyGroupMetrics(c, e.options.CheckKeysBatchSize, keyGroupsNoEmptyStrings) if err != nil { log.Error(err) continue } allMetrics.metrics[db] = allGroups if int64(len(allGroups)) > e.options.MaxDistinctKeyGroups { metricsSlice := make([]*keyGroupMetrics, 0, len(allGroups)) for _, v := range allGroups { metricsSlice = append(metricsSlice, v) } sort.Slice(metricsSlice, func(i, j int) bool { if metricsSlice[i].memoryUsage == metricsSlice[j].memoryUsage { if metricsSlice[i].count == metricsSlice[j].count { return metricsSlice[i].keyGroup < metricsSlice[j].keyGroup } return metricsSlice[i].count < metricsSlice[j].count } return metricsSlice[i].memoryUsage > metricsSlice[j].memoryUsage }) var overflowedCount, overflowedMemoryUsage int64 for _, v := range metricsSlice[e.options.MaxDistinctKeyGroups:] { overflowedCount += v.count overflowedMemoryUsage += v.memoryUsage } allMetrics.overflowedMetrics[db] = &overflowedKeyGroupMetrics{ topMemoryUsageKeyGroups: metricsSlice[:e.options.MaxDistinctKeyGroups], overflowKeyGroupAggregate: keyGroupMetrics{ keyGroup: "overflow", count: overflowedCount, memoryUsage: overflowedMemoryUsage, }, keyGroupsCount: int64(len(allGroups)), } } } return allMetrics } func gatherKeyGroupMetrics(c redis.Conn, batchSize int64, keyGroups []string) (map[string]*keyGroupMetrics, error) { allGroups := make(map[string]*keyGroupMetrics) keysAndArgs := []interface{}{0, batchSize} for _, keyGroup := range keyGroups { keysAndArgs = append(keysAndArgs, keyGroup) } script := redis.NewScript( 0, ` local result = {} local batch = redis.call("SCAN", ARGV[1], "COUNT", ARGV[2]) local groups = {} local usage = 0 local group_index = 0 local group = nil local value = {} local key_match_result = {} local status = false local err = nil for i=3,#ARGV do status, err = pcall(string.find, " ", ARGV[i]) if not status then error(err 
.. ARGV[i]) end end for i,key in ipairs(batch[2]) do local reply = redis.pcall("MEMORY", "USAGE", key) if type(reply) == "number" then usage = reply; end group = nil for i=3,#ARGV do key_match_result = {string.find(key, ARGV[i])} if key_match_result[1] ~= nil then group = table.concat({unpack(key_match_result, 3, #key_match_result)}, "") break end end if group == nil then group = "unclassified" end value = groups[group] if value == nil then groups[group] = {1, usage} else groups[group] = {value[1] + 1, value[2] + usage} end end for group,value in pairs(groups) do result[#result+1] = {group, value[1], value[2]} end return {batch[1], result}`, ) for { arr, err := redis.Values(script.Do(c, keysAndArgs...)) if err != nil { return nil, err } if len(arr) != 2 { return nil, fmt.Errorf("invalid response from key group metrics lua script for groups: %s", strings.Join(keyGroups, ", ")) } groups, _ := redis.Values(arr[1], nil) for _, group := range groups { metricsArr, _ := redis.Values(group, nil) name, _ := redis.String(metricsArr[0], nil) count, _ := redis.Int64(metricsArr[1], nil) memoryUsage, _ := redis.Int64(metricsArr[2], nil) if currentMetrics, ok := allGroups[name]; ok { currentMetrics.count += count currentMetrics.memoryUsage += memoryUsage } else { allGroups[name] = &keyGroupMetrics{ keyGroup: name, count: count, memoryUsage: memoryUsage, } } } if keysAndArgs[0], _ = redis.Int(arr[0], nil); keysAndArgs[0].(int) == 0 { break } } return allGroups, nil } �������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������redis_exporter-1.69.0/exporter/key_groups_test.go���������������������������������������������������0000664�0000000�0000000�00000011721�14765200314�0022276�0����������������������������������������������������������������������������������������������������ustar�00root����������������������������root����������������������������0000000�0000000������������������������������������������������������������������������������������������������������������������������������������������������������������������������package exporter import ( "os" "reflect" "strconv" "strings" "testing" "time" "github.com/gomodule/redigo/redis" "github.com/prometheus/client_golang/prometheus" dto "github.com/prometheus/client_model/go" ) func getDBCount(c redis.Conn) (dbCount int, err error) { dbCount = 16 var config []string if config, err = redis.Strings(doRedisCmd(c, "CONFIG", "GET", "*")); err != nil { return } for pos := 0; pos < len(config)/2; pos++ { strKey := config[pos*2] strVal := config[pos*2+1] if strKey == "databases" { if dbCount, err = strconv.Atoi(strVal); err != nil { dbCount = 16 } return } } return } type keyGroupData struct { name string checkKeyGroups string maxDistinctKeyGroups int64 wantedCount map[string]int wantedMemory map[string]bool wantedDistintKeyGroups int } func TestKeyGroupMetrics(t *testing.T) { if os.Getenv("TEST_REDIS_URI") == "" { t.Skipf("TEST_REDIS_URI not set - skipping") } addr := os.Getenv("TEST_REDIS_URI") c, err := redis.DialURL(addr) if err != nil { t.Fatalf("Couldn't connect to %#v: %#v", addr, err) } var dbCount int if dbCount, err = getDBCount(c); err != nil { t.Fatalf("Couldn't get dbCount: %#v", err) } setupTestKeys(t, addr) defer deleteTestKeys(t, addr) tsts := []keyGroupData{ { name: "synchronous with unclassified keys", checkKeyGroups: 
"^(key_ringo)_[0-9]+$,^(key_paul)_[0-9]+$,^(key_exp)_.+$", maxDistinctKeyGroups: 100, // The actual counts are a function of keys (all types) being set up in the init() function // and the CheckKeyGroups regexes for initializing the Redis exporter above. The count below // will need to be updated if either of the aforementioned things have changed. wantedCount: map[string]int{ "key_ringo": 1, "key_paul": 1, "unclassified": 9, "key_exp": 5, }, wantedMemory: map[string]bool{ "key_ringo": true, "key_paul": true, "unclassified": true, "key_exp": true, }, wantedDistintKeyGroups: 4, }, { name: "synchronous with overflow keys", checkKeyGroups: "^(.*)$", // Each key is a distinct key group maxDistinctKeyGroups: 1, // The actual counts depend on the largest key being set up in the init() // function (test-stream at the time this code was written) and the total // of keys (all types). This will need to be updated to match future // updates of the init() function wantedCount: map[string]int{ "overflow": 15, "test-stream": 1, }, wantedMemory: map[string]bool{ "overflow": true, "test-stream": true, }, wantedDistintKeyGroups: 16, }, } for _, tst := range tsts { t.Run(tst.name, func(t *testing.T) { e, _ := NewRedisExporter( addr, Options{ Namespace: "test", CheckKeyGroups: tst.checkKeyGroups, CheckKeysBatchSize: 1000, MaxDistinctKeyGroups: tst.maxDistinctKeyGroups, }, ) for { chM := make(chan prometheus.Metric) go func() { e.extractKeyGroupMetrics(chM, c, dbCount) close(chM) }() actualCount := make(map[string]int) actualMemory := make(map[string]bool) actualDistinctKeyGroups := 0 receivedMetrics := false for m := range chM { receivedMetrics = true got := &dto.Metric{} m.Write(got) if strings.Contains(m.Desc().String(), "test_key_group_count") { for _, label := range got.GetLabel() { if *label.Name == "key_group" { actualCount[*label.Value] = int(*got.Gauge.Value) } } } else if strings.Contains(m.Desc().String(), "test_key_group_memory_usage_bytes") { for _, label := range got.GetLabel() { if *label.Name == "key_group" { actualMemory[*label.Value] = true } } } else if strings.Contains(m.Desc().String(), "test_number_of_distinct_key_groups") { for _, label := range got.GetLabel() { if *label.Name == "db" && *label.Value == "db"+dbNumStr { actualDistinctKeyGroups = int(*got.Gauge.Value) } } } } if !receivedMetrics { time.Sleep(100 * time.Millisecond) continue } if !reflect.DeepEqual(tst.wantedCount, actualCount) { t.Errorf("Key group count metrics are not expected:\n Expected: %#v\nActual: %#v\n", tst.wantedCount, actualCount) } // It's a little fragile to anticipate how much memory // will be allocated for specific key groups, so we // are only going to check for presence of memory usage // metrics for expected key groups here. 
if !reflect.DeepEqual(tst.wantedMemory, actualMemory) { t.Errorf("Key group memory usage metrics are not expected:\n Expected: %#v\nActual: %#v\n", tst.wantedMemory, actualMemory) } if actualDistinctKeyGroups != tst.wantedDistintKeyGroups { t.Errorf("Unexpected number of distinct key groups, expected: %d, actual: %d", tst.wantedDistintKeyGroups, actualDistinctKeyGroups) } break } }) } } �����������������������������������������������redis_exporter-1.69.0/exporter/keys.go��������������������������������������������������������������0000664�0000000�0000000�00000034735�14765200314�0020035�0����������������������������������������������������������������������������������������������������ustar�00root����������������������������root����������������������������0000000�0000000������������������������������������������������������������������������������������������������������������������������������������������������������������������������package exporter import ( "fmt" "net/url" "regexp" "strconv" "strings" "github.com/gomodule/redigo/redis" "github.com/prometheus/client_golang/prometheus" log "github.com/sirupsen/logrus" ) type dbKeyPair struct { db string key string } func getStringInfoNotPipelined(c redis.Conn, key string) (strVal string, keyType string, size int64, err error) { if strVal, err = redis.String(doRedisCmd(c, "GET", key)); err != nil { log.Errorf("GET %s err: %s", key, err) } // Check PFCOUNT first because STRLEN on HyperLogLog strings returns the wrong length // while PFCOUNT only works on HLL strings and returns an error on regular strings. // // no pipelining / batching for cluster mode, it's not supported if size, err = redis.Int64(doRedisCmd(c, "PFCOUNT", key)); err == nil { // hyperloglog keyType = "HLL" return } else if size, err = redis.Int64(doRedisCmd(c, "STRLEN", key)); err == nil { keyType = "string" return } return } func (e *Exporter) getKeyInfo(ch chan<- prometheus.Metric, c redis.Conn, dbLabel string, keyType string, keyName string) { var err error var size int64 var strVal string switch keyType { case "none": log.Debugf("Key '%s' not found when trying to get type and size: using default '0.0'", keyName) e.registerConstMetricGauge(ch, "key_size", 0.0, dbLabel, keyName) return case "string": strVal, keyType, size, err = getStringInfoNotPipelined(c, keyName) case "list": size, err = redis.Int64(doRedisCmd(c, "LLEN", keyName)) case "set": size, err = redis.Int64(doRedisCmd(c, "SCARD", keyName)) case "zset": size, err = redis.Int64(doRedisCmd(c, "ZCARD", keyName)) case "hash": size, err = redis.Int64(doRedisCmd(c, "HLEN", keyName)) case "stream": size, err = redis.Int64(doRedisCmd(c, "XLEN", keyName)) default: err = fmt.Errorf("unknown type: %v for key: %v", keyType, keyName) } if err != nil { log.Errorf("getKeyInfo() err: %s", err) return } e.registerConstMetricGauge(ch, "key_size", float64(size), dbLabel, keyName) // Only run on single value strings if keyType == "string" && !e.options.DisableExportingKeyValues && strVal != "" { if val, err := strconv.ParseFloat(strVal, 64); err == nil { // Only record value metric if value is float-y e.registerConstMetricGauge(ch, "key_value", val, dbLabel, keyName) } else { // if it's not float-y then we'll record the value as a string label e.registerConstMetricGauge(ch, "key_value_as_string", 1.0, dbLabel, keyName, strVal) } } } func (e *Exporter) extractCheckKeyMetrics(ch chan<- prometheus.Metric, c redis.Conn) { keys, err := parseKeyArg(e.options.CheckKeys) if err != nil { log.Errorf("Couldn't parse check-keys: %#v", err) 
return } log.Debugf("keys: %#v", keys) singleKeys, err := parseKeyArg(e.options.CheckSingleKeys) if err != nil { log.Errorf("Couldn't parse check-single-keys: %#v", err) return } log.Debugf("e.singleKeys: %#v", singleKeys) allKeys := append([]dbKeyPair{}, singleKeys...) log.Debugf("e.keys: %#v", keys) if scannedKeys, err := getKeysFromPatterns(c, keys, e.options.CheckKeysBatchSize); err == nil { allKeys = append(allKeys, scannedKeys...) } else { log.Errorf("Error expanding key patterns: %#v", err) } log.Debugf("allKeys: %#v", allKeys) /* important: when adding, modifying, removing metrics both paths here (pipelined/non-pipelined) need to be modified */ if e.options.IsCluster { e.extractCheckKeyMetricsNotPipelined(ch, c, allKeys) } else { e.extractCheckKeyMetricsPipelined(ch, c, allKeys) } } func (e *Exporter) extractCheckKeyMetricsPipelined(ch chan<- prometheus.Metric, c redis.Conn, allKeys []dbKeyPair) { // // the following commands are all pipelined/batched to improve performance // by removing one roundtrip to the redis instance // see https://github.com/oliver006/redis_exporter/issues/980 // /* group keys by DB so we don't have to do repeated SELECT calls and jump between DBs --> saves roundtrips, improves latency */ keysByDb := map[string][]string{} for _, k := range allKeys { if a, ok := keysByDb[k.db]; ok { // exists already a = append(a, k.key) keysByDb[k.db] = a } else { // first time - got to init the array keysByDb[k.db] = []string{k.key} } } log.Debugf("keysByDb: %#v", keysByDb) for dbNum, arrayOfKeys := range keysByDb { dbLabel := "db" + dbNum log.Debugf("c.Send() SELECT [%s]", dbNum) if err := c.Send("SELECT", dbNum); err != nil { log.Errorf("Couldn't select database [%s] when getting key info.", dbNum) continue } /* first pipeline (batch) all the TYPE & MEMORY USAGE calls and ship them to the redis instance everything else is dependent on the TYPE of the key */ for _, keyName := range arrayOfKeys { log.Debugf("c.Send() TYPE [%v]", keyName) if err := c.Send("TYPE", keyName); err != nil { log.Errorf("c.Send() TYPE err: %s", err) return } log.Debugf("c.Send() MEMORY USAGE [%v]", keyName) if err := c.Send("MEMORY", "USAGE", keyName); err != nil { log.Errorf("c.Send() MEMORY USAGE err: %s", err) return } } log.Debugf("c.Flush()") if err := c.Flush(); err != nil { log.Errorf("FLUSH err: %s", err) return } // throwaway Receive() call for the response of the SELECT() call if _, err := redis.String(c.Receive()); err != nil { log.Errorf("Receive() err: %s", err) continue } /* populate "keyTypes" with the batched TYPE responses from the redis instance and collect MEMORY USAGE responses and immediately emit that metric */ keyTypes := make([]string, len(arrayOfKeys)) for idx, keyName := range arrayOfKeys { var err error keyTypes[idx], err = redis.String(c.Receive()) if err != nil { log.Errorf("key: [%s] - Receive err: %s", keyName, err) continue } memUsageInBytes, err := redis.Int64(c.Receive()) if err != nil { log.Errorf("key: [%s] - memUsageInBytes Receive() err: %s", keyName, err) continue } e.registerConstMetricGauge(ch, "key_memory_usage_bytes", float64(memUsageInBytes), dbLabel, keyName) } /* now that we have the types for all the keys we can gather information about each key like size & length and value (redis cmd used is dependent on TYPE) */ e.getKeyInfoPipelined(ch, c, dbLabel, arrayOfKeys, keyTypes) } } func (e *Exporter) getKeyInfoPipelined(ch chan<- prometheus.Metric, c redis.Conn, dbLabel string, arrayOfKeys []string, keyTypes []string) { for idx, keyName := range arrayOfKeys { keyType := keyTypes[idx] switch keyType { case "none": continue case "string": log.Debugf("c.Send() PFCOUNT args: [%v]", keyName) if err := c.Send("PFCOUNT", keyName); err != nil { log.Errorf("PFCOUNT err: %s", err) return } log.Debugf("c.Send() STRLEN args: [%v]", keyName) if err := c.Send("STRLEN", keyName); err != nil { log.Errorf("STRLEN err: %s", err) return } log.Debugf("c.Send() GET args: [%v]", keyName) if err := c.Send("GET", keyName); err != nil { log.Errorf("GET err: %s", err) return } case "list": log.Debugf("c.Send() LLEN args: [%v]", keyName) if err := c.Send("LLEN", keyName); err != nil { log.Errorf("LLEN err: %s", err) return } case "set": log.Debugf("c.Send() SCARD args: [%v]", keyName) if err := c.Send("SCARD", keyName); err != nil { log.Errorf("SCARD err: %s", err) return } case "zset": log.Debugf("c.Send() ZCARD args: [%v]", keyName) if err := c.Send("ZCARD", keyName); err != nil { log.Errorf("ZCARD err: %s", err) return } case "hash": log.Debugf("c.Send() HLEN args: [%v]", keyName) if err := c.Send("HLEN", keyName); err != nil { log.Errorf("HLEN err: %s", err) return } case "stream": log.Debugf("c.Send() XLEN args: [%v]", keyName) if err := c.Send("XLEN", keyName); err != nil { log.Errorf("XLEN err: %s", err) return } default: log.Errorf("unknown type: %v for key: %v", keyType, keyName) continue } } log.Debugf("c.Flush()") if err := c.Flush(); err != nil { log.Errorf("Flush() err: %s", err) return } for idx, keyName := range arrayOfKeys { keyType := keyTypes[idx] var err error var size int64 var strVal string switch keyType { case "none": log.Debugf("Key '%s' not found when trying to get type and size: using default '0.0'", keyName) e.registerConstMetricGauge(ch, "key_size", 0.0, dbLabel, keyName) continue /* nothing was pipelined for a missing key; move on to the next one */ case "string": hllSize, hllErr := redis.Int64(c.Receive()) strSize, strErr := redis.Int64(c.Receive()) var strValErr error if strVal, strValErr = redis.String(c.Receive()); strValErr != nil { log.Errorf("c.Receive() for GET %s err: %s", keyName, strValErr) } log.Debugf("Done with c.Receive() x 3") if hllErr == nil { // hyperloglog size = hllSize // "TYPE" reports hll as string // this will prevent treating the result as a string by the caller (e.g. call GET) keyType = "HLL" } else if strErr == nil { // not hll so possibly a string?
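/* the replies arrive in the same order the commands were pipelined above (PFCOUNT, STRLEN, GET); PFCOUNT succeeds only on HyperLogLog-encoded strings (see getStringInfoNotPipelined), so reaching this branch with a successful STRLEN means the key holds a plain string */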
size = strSize keyType = "string" } else { continue } case "hash", "list", "set", "stream", "zset": size, err = redis.Int64(c.Receive()) default: err = fmt.Errorf("unknown type: %v for key: %v", keyType, keyName) } if err != nil { log.Errorf("getKeyInfo() err: %s", err) continue } if keyType == "string" && !e.options.DisableExportingKeyValues && strVal != "" { if val, err := strconv.ParseFloat(strVal, 64); err == nil { // Only record value metric if value is float-y e.registerConstMetricGauge(ch, "key_value", val, dbLabel, keyName) } else { // if it's not float-y then we'll record the value as a string label e.registerConstMetricGauge(ch, "key_value_as_string", 1.0, dbLabel, keyName, strVal) } } e.registerConstMetricGauge(ch, "key_size", float64(size), dbLabel, keyName) } } func (e *Exporter) extractCheckKeyMetricsNotPipelined(ch chan<- prometheus.Metric, c redis.Conn, allKeys []dbKeyPair) { // Cluster mode only has one db // no need to run "SELECT" but got to set it to "0" in the loop because it's used as the label for _, k := range allKeys { k.db = "0" keyType, err := redis.String(doRedisCmd(c, "TYPE", k.key)) if err != nil { log.Errorf("TYPE %s err: %s", k.key, err) continue } if memUsageInBytes, err := redis.Int64(doRedisCmd(c, "MEMORY", "USAGE", k.key)); err == nil { e.registerConstMetricGauge(ch, "key_memory_usage_bytes", float64(memUsageInBytes), "db"+k.db, k.key) } else { log.Errorf("MEMORY USAGE %s err: %s", k.key, err) } dbLabel := "db" + k.db e.getKeyInfo(ch, c, dbLabel, keyType, k.key) } } func (e *Exporter) extractCountKeysMetrics(ch chan<- prometheus.Metric, c redis.Conn) { cntKeys, err := parseKeyArg(e.options.CountKeys) if err != nil { log.Errorf("Couldn't parse given count keys: %s", err) return } for _, k := range cntKeys { if _, err := doRedisCmd(c, "SELECT", k.db); err != nil { log.Errorf("Couldn't select database '%s' when counting keys", k.db) continue } cnt, err := getKeysCount(c, k.key, e.options.CheckKeysBatchSize) if err != nil { log.Errorf("couldn't get key count for '%s', err: %s", k.key, err) continue } dbLabel := "db" + k.db e.registerConstMetricGauge(ch, "keys_count", float64(cnt), dbLabel, k.key) } } func getKeysCount(c redis.Conn, pattern string, count int64) (int, error) { keysCount := 0 keys, err := scanKeys(c, pattern, count) if err != nil { return keysCount, fmt.Errorf("error retrieving '%s' keys err: %s", pattern, err) } keysCount = len(keys) return keysCount, nil } // Regexp pattern to check if given key contains any // glob-style pattern symbol. // // https://redis.io/commands/scan#the-match-option var globPattern = regexp.MustCompile(`[\?\*\[\]\^]+`) // getKeysFromPatterns does a SCAN for a key if the key contains pattern characters func getKeysFromPatterns(c redis.Conn, keys []dbKeyPair, count int64) (expandedKeys []dbKeyPair, err error) { expandedKeys = []dbKeyPair{} for _, k := range keys { if globPattern.MatchString(k.key) { if _, err := doRedisCmd(c, "SELECT", k.db); err != nil { return expandedKeys, err } keyNames, err := redis.Strings(scanKeys(c, k.key, count)) if err != nil { log.Errorf("error with SCAN for pattern: %#v err: %s", k.key, err) continue } for _, keyName := range keyNames { expandedKeys = append(expandedKeys, dbKeyPair{db: k.db, key: keyName}) } } else { expandedKeys = append(expandedKeys, k) } } return expandedKeys, err } // parseKeyArg splits a command-line supplied argument into a slice of dbKeyPairs.
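/* Accepted entries are comma-separated and take the form "key", "db0=key" or "0=key": a bare key defaults to database 0, keys may be URL-escaped, empty entries are skipped, and a non-numeric or negative database index is rejected with an error. */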
func parseKeyArg(keysArgString string) (keys []dbKeyPair, err error) { if keysArgString == "" { log.Debugf("parseKeyArg(): Got empty key arguments, parsing skipped") return keys, err } for _, k := range strings.Split(keysArgString, ",") { var db string var key string if k == "" { continue } frags := strings.Split(k, "=") switch len(frags) { case 1: db = "0" key, err = url.QueryUnescape(strings.TrimSpace(frags[0])) case 2: db = strings.Replace(strings.TrimSpace(frags[0]), "db", "", -1) key, err = url.QueryUnescape(strings.TrimSpace(frags[1])) default: return keys, fmt.Errorf("invalid key list argument: %s", k) } if err != nil { return keys, fmt.Errorf("couldn't parse db/key string: %s", k) } // We want to guarantee at the top level that invalid values // will not fall into the final Redis call. if db == "" || key == "" { log.Errorf("parseKeyArg(): Empty value parsed in pair '%s=%s', skip", db, key) continue } number, err := strconv.Atoi(db) if err != nil || number < 0 { return keys, fmt.Errorf("Invalid database index for db \"%s\": %s", db, err) } keys = append(keys, dbKeyPair{db, key}) } return keys, err } // scanKeys returns a list of keys matching `pattern` by using `SCAN`, which is safer for production systems than using `KEYS`. // This function was adapted from: https://github.com/reisinger/examples-redigo func scanKeys(c redis.Conn, pattern string, count int64) (keys []interface{}, err error) { if pattern == "" { return keys, fmt.Errorf("Pattern shouldn't be empty") } iter := 0 for { arr, err := redis.Values(doRedisCmd(c, "SCAN", iter, "MATCH", pattern, "COUNT", count)) if err != nil { return keys, fmt.Errorf("error retrieving '%s' keys err: %s", pattern, err) } if len(arr) != 2 { return keys, fmt.Errorf("invalid response from SCAN for pattern: %s", pattern) } k, _ := redis.Values(arr[1], nil) keys = append(keys, k...)
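/* the first element of the SCAN reply is the updated cursor; the iteration is complete once the server returns a cursor of 0 */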
if iter, _ = redis.Int(arr[0], nil); iter == 0 { break } } return keys, nil } �����������������������������������redis_exporter-1.69.0/exporter/keys_test.go���������������������������������������������������������0000664�0000000�0000000�00000052053�14765200314�0021065�0����������������������������������������������������������������������������������������������������ustar�00root����������������������������root����������������������������0000000�0000000������������������������������������������������������������������������������������������������������������������������������������������������������������������������package exporter import ( "fmt" "net/http/httptest" "net/url" "os" "reflect" "sort" "strings" "testing" "github.com/gomodule/redigo/redis" "github.com/prometheus/client_golang/prometheus" log "github.com/sirupsen/logrus" ) // defaultCount is used for `SCAN whatever COUNT defaultCount` command const ( defaultCount int64 = 10 invalidCount int64 = 0 ) func TestKeyValuesAndSizes(t *testing.T) { e, _ := NewRedisExporter( os.Getenv("TEST_REDIS_URI"), Options{ Namespace: "test", CheckSingleKeys: dbNumStrFull + "=" + url.QueryEscape(testKeys[0]), Registry: prometheus.NewRegistry()}, ) ts := httptest.NewServer(e) defer ts.Close() setupTestKeys(t, os.Getenv("TEST_REDIS_URI")) defer deleteTestKeys(t, os.Getenv("TEST_REDIS_URI")) chM := make(chan prometheus.Metric, 10000) go func() { e.Collect(chM) close(chM) }() body := downloadURL(t, ts.URL+"/metrics") for _, want := range []string{ "test_key_size", "test_key_value", } { if !strings.Contains(body, want) { t.Fatalf("didn't find %s, body: %s", want, body) return } } } func TestKeyValuesAsLabel(t *testing.T) { setupTestKeys(t, os.Getenv("TEST_REDIS_URI")) defer deleteTestKeys(t, os.Getenv("TEST_REDIS_URI")) for _, exc := range []bool{true, false} { e, _ := NewRedisExporter( os.Getenv("TEST_REDIS_URI"), Options{ Namespace: "test", CheckSingleKeys: dbNumStrFull + "=" + url.QueryEscape(testKeySingleString), DisableExportingKeyValues: exc, Registry: prometheus.NewRegistry()}, ) ts := httptest.NewServer(e) chM := make(chan prometheus.Metric, 10000) go func() { e.Collect(chM) close(chM) }() body := downloadURL(t, ts.URL+"/metrics") for _, match := range []string{ "key_value_as_string", "test_key_value", } { if exc && strings.Contains(body, match) { t.Fatalf("didn't expect %s with DisableExportingKeyValues enabled, body: %s", match, body) } else if !exc && !strings.Contains(body, match) { t.Fatalf("didn't find %s with DisableExportingKeyValues disabled, body: %s", match, body) } } ts.Close() } } func TestClusterKeyValuesAndSizes(t *testing.T) { clusterUri := os.Getenv("TEST_REDIS_CLUSTER_MASTER_URI") if clusterUri == "" { t.Skipf("Skipping TestClusterKeyValuesAndSizes, don't have env var TEST_REDIS_CLUSTER_MASTER_URI") } setupTestKeysCluster(t, clusterUri) defer deleteTestKeysCluster(clusterUri) for _, disableExportingValues := range []bool{true, false} { e, _ := NewRedisExporter( clusterUri, Options{ Namespace: "test", DisableExportingKeyValues: disableExportingValues, CheckSingleKeys: fmt.Sprintf( "%s=%s,%s=%s", dbNumStrFull, url.QueryEscape(testKeys[0]), dbNumStrFull, url.QueryEscape(TestKeysSetName), ), IsCluster: true, }, ) chM := make(chan prometheus.Metric) go func() { e.Collect(chM) close(chM) }() foundExpectedKey := map[string]bool{ "test_key_size": false, "test_key_value": false, "test_key_memory_usage_bytes": false, } for m := range chM { for k := range foundExpectedKey { if strings.Contains(m.Desc().String(), k) { foundExpectedKey[k] 
= true } } } for k, found := range foundExpectedKey { if k == "test_key_value" { if found && disableExportingValues { t.Errorf("didn't expect %s with DisableExportingKeyValues enabled", k) } else if !found && !disableExportingValues { t.Errorf("didn't find %s with DisableExportingKeyValues disabled", k) } } else if !found { t.Errorf("didn't find %s", k) } } } } func TestParseKeyArg(t *testing.T) { for _, test := range []struct { name string keyArgs string expected []dbKeyPair expectSuccess bool }{ // positive tests {"empty_args", "", []dbKeyPair{}, true}, {"default_database", "my-key", []dbKeyPair{{"0", "my-key"}}, true}, {"prefixed_database", "db0=my-key", []dbKeyPair{{"0", "my-key"}}, true}, {"indexed_database", "0=my-key", []dbKeyPair{{"0", "my-key"}}, true}, {"triple_key", "check-key-01", []dbKeyPair{{"0", "check-key-01"}}, true}, { name: "default_database_multiple_keys", keyArgs: "my-key1,my-key2", expected: []dbKeyPair{ {"0", "my-key1"}, {"0", "my-key2"}, }, expectSuccess: true, }, { name: "key_with_leading_space", keyArgs: "my-key-noSpace, my-key-withSpace", expected: []dbKeyPair{ {"0", "my-key-noSpace"}, {"0", "my-key-withSpace"}, }, expectSuccess: true, }, { name: "key_with_spaces", keyArgs: "my-key-noSpace1, my-key-withSpaces ,my-key-noSpace2", expected: []dbKeyPair{ {"0", "my-key-noSpace1"}, {"0", "my-key-withSpaces"}, {"0", "my-key-noSpace2"}, }, expectSuccess: true, }, { name: "different_databases", keyArgs: "db0=key1,db1=key1", expected: []dbKeyPair{ {"0", "key1"}, {"1", "key1"}, }, expectSuccess: true, }, { name: "dbdb_replace", keyArgs: "dbdbdb0=key1,db1=key1", expected: []dbKeyPair{ {"0", "key1"}, {"1", "key1"}, }, expectSuccess: true, }, { name: "default_database_with_another", keyArgs: "key1,db1=key1", expected: []dbKeyPair{ {"0", "key1"}, {"1", "key1"}, }, expectSuccess: true, }, { "invalid_args_with_args_separator_skipped", "=", []dbKeyPair{}, true, }, { "empty_args_with_comma_separators_skipped", ",,,my-key", []dbKeyPair{{"0", "my-key"}}, true, }, { "multiple_invalid_args_skipped", "=,=,,0=my-key", []dbKeyPair{{"0", "my-key"}}, true, }, { "empty_key_with_args_separator_skipped", "0=", []dbKeyPair{}, true, }, { "empty_database_with_args_separator_skipped", "=my-key", []dbKeyPair{}, true, }, // negative tests { "string_database_index", "wrong=my-key", []dbKeyPair{}, false, }, { "prefixed_string_database_index", "dbwrong=my-key", []dbKeyPair{}, false, }, { "wrong_args_count", "wrong=wrong=wrong", []dbKeyPair{}, false, }, { "wrong_args", "wrong=wrong=1", []dbKeyPair{}, false, }, { "negative_database_index", "db-1=my-key", []dbKeyPair{}, false, }, } { t.Run(test.name, func(t *testing.T) { parsed, err := parseKeyArg(test.keyArgs) if test.expectSuccess && err != nil { t.Errorf("Expected success for test: %s, got err: %s", test.name, err) return } if len(parsed) != len(test.expected) { t.Errorf("Parsed elements count don't match expected: parsed %d; expected %d", len(parsed), len(test.expected)) return } for i, pair := range test.expected { if pair != parsed[i] { t.Errorf("Parsed elements don't match expected dbKeyPair:\n parsed %#v;\nexpected %#v", parsed[i], pair) return } } if !test.expectSuccess && err == nil { t.Errorf("Expected failure for test: %s, got no err", test.name) return } if !test.expectSuccess && err != nil { t.Logf("Expected failure for test: %s, got err: %s", test.name, err) return } }) } } type keyFixture struct { command string key string args []interface{} } func newKeyFixture(command string, key string, args ...interface{}) keyFixture { return 
keyFixture{command, key, args} } func createKeyFixtures(t *testing.T, c redis.Conn, fixtures []keyFixture) { for _, f := range fixtures { args := append([]interface{}{f.key}, f.args...) if _, err := c.Do(f.command, args...); err != nil { t.Fatalf("Error creating fixture: %#v, %#v", f, err) } } } func deleteKeyFixtures(t *testing.T, c redis.Conn, fixtures []keyFixture) { for _, f := range fixtures { if _, err := c.Do("DEL", f.key); err != nil { t.Errorf("Error deleting fixture: %#v, %#v", f, err) } } } func TestScanKeys(t *testing.T) { numKeys := 1000 var fixtures []keyFixture // Make 1000 keys that match for i := 0; i < numKeys; i++ { key := fmt.Sprintf("get_keys_test_shouldmatch_%v", i) fixtures = append(fixtures, newKeyFixture("SET", key, "Woohoo!")) } // And 1000 that don't for i := 0; i < numKeys; i++ { key := fmt.Sprintf("get_keys_test_shouldnotmatch_%v", i) fixtures = append(fixtures, newKeyFixture("SET", key, "Rats!")) } addr := os.Getenv("TEST_REDIS_URI") db := dbNumStr c, err := redis.DialURL(addr) if err != nil { t.Fatalf("Couldn't connect to %#v: %#v", addr, err) } _, err = c.Do("SELECT", db) if err != nil { t.Errorf("Couldn't select database %#v", db) } defer func() { deleteKeyFixtures(t, c, fixtures) c.Close() }() createKeyFixtures(t, c, fixtures) matches, err := redis.Strings(scanKeys(c, "get_keys_test_*shouldmatch*", defaultCount)) if err != nil { t.Errorf("Error getting keys matching a pattern: %#v", err) } numMatches := len(matches) if numMatches != numKeys { t.Errorf("Expected %#v matches, got %#v.", numKeys, numMatches) } for _, match := range matches { if !strings.HasPrefix(match, "get_keys_test_shouldmatch") { t.Errorf("Expected match to have prefix: get_keys_test_shouldmatch") } } // Test expected errors separately invalidFixtures := map[string]int64{ // empty string is a string after all "": 100, "pattern": invalidCount, } for pattern, count := range invalidFixtures { got, err := redis.Strings(scanKeys(c, pattern, count)) if err != nil { t.Logf("Expected error, got: %#v", err) if pattern == "" && err.Error() != "Pattern shouldn't be empty" { t.Errorf("\"Empty pattern\" error message expected, but got: %s", err.Error()) } } else { t.Errorf("Error expected, got valid response: %#v", got) } } } func TestGetKeysFromPatterns(t *testing.T) { addr := os.Getenv("TEST_REDIS_URI") dbMain := dbNumStr dbAlt := altDBNumStr dbInvalid := invalidDBNumStr dbMainFixtures := []keyFixture{ newKeyFixture("SET", "dbMainNoPattern1", "woohoo!"), newKeyFixture("SET", "dbMainSomePattern1", "woohoo!"), newKeyFixture("SET", "dbMainSomePattern2", "woohoo!"), } dbAltFixtures := []keyFixture{ newKeyFixture("SET", "dbAltNoPattern1", "woohoo!"), newKeyFixture("SET", "dbAltSomePattern1", "woohoo!"), newKeyFixture("SET", "dbAltSomePattern2", "woohoo!"), } keys := []dbKeyPair{ {db: dbMain, key: "dbMainNoPattern1"}, {db: dbMain, key: "*SomePattern*"}, {db: dbAlt, key: "dbAltNoPattern1"}, {db: dbAlt, key: "*SomePattern*"}, } invalidKeys := []dbKeyPair{ {db: dbInvalid, key: "someUnusedPattern*"}, } c, err := redis.DialURL(addr) if err != nil { t.Fatalf("Couldn't connect to %#v: %#v", addr, err) } defer func() { _, err = c.Do("SELECT", dbMain) if err != nil { t.Errorf("Couldn't select database %#v", dbMain) } deleteKeyFixtures(t, c, dbMainFixtures) _, err = c.Do("SELECT", dbAlt) if err != nil { t.Errorf("Couldn't select database %#v", dbAlt) } deleteKeyFixtures(t, c, dbAltFixtures) c.Close() }() _, err = c.Do("SELECT", dbMain) if err != nil { t.Errorf("Couldn't select
database %#v", dbMain) } createKeyFixtures(t, c, dbMainFixtures) _, err = c.Do("SELECT", dbAlt) if err != nil { t.Errorf("Couldn't select database %#v", dbAlt) } createKeyFixtures(t, c, dbAltFixtures) expandedKeys, err := getKeysFromPatterns(c, keys, defaultCount) if err != nil { t.Errorf("Error getting keys from patterns: %#v", err) } expectedKeys := []dbKeyPair{ {db: dbMain, key: "dbMainNoPattern1"}, {db: dbMain, key: "dbMainSomePattern1"}, {db: dbMain, key: "dbMainSomePattern2"}, {db: dbAlt, key: "dbAltNoPattern1"}, {db: dbAlt, key: "dbAltSomePattern1"}, {db: dbAlt, key: "dbAltSomePattern2"}, } sort.Slice(expectedKeys, func(i, j int) bool { return (expectedKeys[i].db + expectedKeys[i].key) < (expectedKeys[j].db + expectedKeys[j].key) }) sort.Slice(expandedKeys, func(i, j int) bool { return (expandedKeys[i].db + expandedKeys[i].key) < (expandedKeys[j].db + expandedKeys[j].key) }) if !reflect.DeepEqual(expectedKeys, expandedKeys) { t.Errorf("When expanding keys:\nexpected: %#v\nactual: %#v", expectedKeys, expandedKeys) } got, err := getKeysFromPatterns(c, invalidKeys, defaultCount) if err != nil { t.Logf("Expected error - \"invalid DB\": %#v", err) } else { if len(got) != 0 { t.Errorf("Error expected with invalid database %#v, got valid response: %#v", invalidKeys, got) } } } /* func TestGetKeyInfo(t *testing.T) { addr := os.Getenv("TEST_REDIS_URI") db := dbNumStr c, err := redis.DialURL(addr) if err != nil { t.Fatalf("Couldn't connect to %#v: %#v", addr, err) } _, err = c.Do("SELECT", db) if err != nil { t.Errorf("Couldn't select database %#v", db) } fixtures := []keyFixture{ {"SET", "key_info_test_string", []interface{}{"Woohoo!"}}, {"HSET", "key_info_test_hash", []interface{}{"hashkey1", "hashval1"}}, {"PFADD", "key_info_test_hll", []interface{}{"hllval1", "hllval2"}}, {"PFADD", "key_info_test_hll2", []interface{}{"hll2val_1", "hll2val_2", "hll2val_3"}}, {"LPUSH", "key_info_test_list", []interface{}{"listval1", "listval2", "listval3"}}, {"SADD", "key_info_test_set", []interface{}{"setval1", "setval2", "setval3", "setval4"}}, {"ZADD", "key_info_test_zset", []interface{}{ "1", "zsetval1", "2", "zsetval2", "3", "zsetval3", "4", "zsetval4", "5", "zsetval5", }}, {"XADD", "key_info_test_stream", []interface{}{"*", "field1", "str1"}}, } createKeyFixtures(t, c, fixtures) defer func() { deleteKeyFixtures(t, c, fixtures) c.Close() }() expectedSizes := map[string]float64{ "key_info_test_string": 7, "key_info_test_hash": 1, "key_info_test_hll": 2, "key_info_test_hll2": 3, "key_info_test_list": 3, "key_info_test_set": 4, "key_info_test_zset": 5, "key_info_test_stream": 1, } // Test all known types for _, f := range fixtures { keyType, err := redis.String(c.Do("TYPE", f.key)) if err != nil { t.Fatalf("TYPE err: %s", err) } info, err := getKeyInfo(c, keyType, f.key, false) if err != nil { t.Fatalf("Error getting key info for %#v.", f.key) } expected := expectedSizes[f.key] if info.size != expected { t.Errorf("Wrong size for key: %#v. Expected: %#v; Actual: %#v", f.key, expected, info.size) t.Logf("info: %#v", info) } } absentKeyName := "absent_key" // Test absent key returns the correct error keyType, err := redis.String(c.Do("TYPE", absentKeyName)) if err != nil { t.Fatalf("TYPE err: %s", err) } _, err = getKeyInfo(c, keyType, absentKeyName, false) if !errors.Is(err, errKeyTypeNotFound) { t.Errorf("Expected `errKeyTypeNotFound` for absent key. 
Got a different error, err: %#v", err) } } */ func TestKeySizeList(t *testing.T) { s := dbNumStrFull + "=" + testKeysList[0] e, _ := NewRedisExporter( os.Getenv("TEST_REDIS_URI"), Options{Namespace: "test", CheckSingleKeys: s}, ) setupTestKeys(t, os.Getenv("TEST_REDIS_URI")) defer deleteTestKeys(t, os.Getenv("TEST_REDIS_URI")) chM := make(chan prometheus.Metric) go func() { e.Collect(chM) close(chM) }() found := false for m := range chM { if strings.Contains(m.Desc().String(), "test_key_size") { found = true } } if !found { t.Errorf("didn't find the key") } } func TestKeyValueInvalidDB(t *testing.T) { e, _ := NewRedisExporter( os.Getenv("TEST_REDIS_URI"), Options{ Namespace: "test", CheckSingleKeys: "999=" + url.QueryEscape(testKeys[0]), }, ) chM := make(chan prometheus.Metric) go func() { e.Collect(chM) close(chM) }() dontWant := map[string]bool{"test_key_size": false} for m := range chM { switch m.(type) { case prometheus.Gauge: for k := range dontWant { if strings.Contains(m.Desc().String(), k) { log.Println(m.Desc().String()) dontWant[k] = true } } default: log.Debugf("default: m: %#v", m) } } for k, found := range dontWant { if found { t.Errorf("we found %s but it shouldn't be there", k) } } } func TestCheckKeys(t *testing.T) { for _, tst := range []struct { SingleCheckKey string CheckKeys string ExpectSuccess bool }{ {"", "", true}, {"db1=key3", "", true}, {"check-key-01", "", true}, {"", "check-key-02", true}, {"wrong=wrong=1", "", false}, {"", "wrong=wrong=2", false}, } { _, err := NewRedisExporter(os.Getenv("TEST_REDIS_URI"), Options{Namespace: "test", CheckSingleKeys: tst.SingleCheckKey, CheckKeys: tst.CheckKeys}) if tst.ExpectSuccess && err != nil { t.Errorf("Expected success for test: %#v, got err: %s", tst, err) return } if !tst.ExpectSuccess && err == nil { t.Errorf("Expected failure for test: %#v, got no err", tst) return } } } func TestCheckSingleKeyDefaultsTo0(t *testing.T) { uri := os.Getenv("TEST_REDIS_URI") e, _ := NewRedisExporter(uri, Options{Namespace: "test", CheckSingleKeys: "single", Registry: prometheus.NewRegistry()}) ts := httptest.NewServer(e) defer ts.Close() setupTestKeys(t, uri) defer deleteTestKeys(t, uri) body := downloadURL(t, ts.URL+"/metrics") if !strings.Contains(body, `test_key_size{db="db0",key="single"} 0`) { t.Errorf("Expected metric `test_key_size` with key=`single` and value 0 but got:\n%s", body) } } func TestCheckKeysMultipleDBs(t *testing.T) { uri := os.Getenv("TEST_REDIS_URI") e, _ := NewRedisExporter(uri, Options{Namespace: "test", CheckSingleKeys: "single," + dbNumStr + "=" + testKeys[0] + "," + dbNumStr + "=" + testKeySingleString + "," + altDBNumStr + "=" + TestKeysHllName + "," + altDBNumStr + "=" + testKeySingleString + "," + anotherAltDbNumStr + "=" + testKeys[0], CheckKeys: dbNumStr + "=" + "test*", CheckKeysBatchSize: 1000, Registry: prometheus.NewRegistry(), }) ts := httptest.NewServer(e) defer ts.Close() setupTestKeys(t, uri) defer deleteTestKeys(t, uri) body := downloadURL(t, ts.URL+"/metrics") for _, k := range []string{ `test_key_size{db="db0",key="single"} 0`, // non-existent key fmt.Sprintf(`test_key_size{db="db%s",key="%s"} 16`, dbNumStr, testKeySingleString), fmt.Sprintf(`test_key_size{db="db%s",key="%s"} 3`, dbNumStr, TestKeysZSetName), fmt.Sprintf(`test_key_size{db="db%s",key="%s"} 4`, dbNumStr, TestKeysHashName), fmt.Sprintf(`test_key_size{db="db%s",key="%s"} 3`, altDBNumStr, TestKeysHllName), fmt.Sprintf(`test_key_size{db="db%s",key="%s"} 16`, altDBNumStr, testKeySingleString), 
fmt.Sprintf(`test_key_size{db="db%s",key="%s"} 7`, anotherAltDbNumStr, testKeys[0]), fmt.Sprintf(`test_key_value{db="db%s",key="%s"} 1234.56`, dbNumStr, testKeys[0]), fmt.Sprintf(`test_key_value{db="db%s",key="%s"} 1234.56`, anotherAltDbNumStr, testKeys[0]), fmt.Sprintf(`key_memory_usage_bytes{db="db%s",key="%s"}`, dbNumStr, testKeySingleString), fmt.Sprintf(`key_memory_usage_bytes{db="db%s",key="%s"}`, altDBNumStr, testKeySingleString), fmt.Sprintf(`key_memory_usage_bytes{db="db%s",key="%s"}`, anotherAltDbNumStr, testKeys[0]), } { if !strings.Contains(body, k) { t.Errorf("Expected metric: %s but got:\n%s", k, body) } } } func TestClusterGetKeyInfo(t *testing.T) { clusterUri := os.Getenv("TEST_REDIS_CLUSTER_MASTER_URI") if clusterUri == "" { t.Skipf("Skipping TestClusterGetKeyInfo, don't have env var TEST_REDIS_CLUSTER_MASTER_URI") } e, _ := NewRedisExporter( clusterUri, Options{ Namespace: "test", CheckSingleKeys: strings.Join(AllTestKeys, ","), Registry: prometheus.NewRegistry(), IsCluster: true, }, ) ts := httptest.NewServer(e) defer ts.Close() setupTestKeysCluster(t, clusterUri) defer deleteTestKeysCluster(clusterUri) body := downloadURL(t, ts.URL+"/metrics") for _, want := range []string{ "key_value_as_string", `test_key_size{db="db0",key="test-hll"} 3`, } { if !strings.Contains(body, want) { t.Errorf("Expected metric: %s but got:\n%s", want, body) } } } func TestGetKeysCount(t *testing.T) { addr := os.Getenv("TEST_REDIS_URI") db := dbNumStr c, err := redis.DialURL(addr) if err != nil { t.Fatalf("Couldn't connect to %#v: %#v", addr, err) } _, err = c.Do("SELECT", db) if err != nil { t.Errorf("Couldn't select database %#v", db) } fixtures := []keyFixture{ {"SET", "count_test:keys_count_test_string1", []interface{}{"Woohoo!"}}, {"SET", "count_test:keys_count_test_string2", []interface{}{"!oohooW"}}, {"LPUSH", "count_test:keys_count_test_list1", []interface{}{"listval1", "listval2", "listval3"}}, {"LPUSH", "count_test:keys_count_test_list2", []interface{}{"listval1", "listval2", "listval3"}}, {"LPUSH", "count_test:keys_count_test_list3", []interface{}{"listval1", "listval2", "listval3"}}, } createKeyFixtures(t, c, fixtures) defer func() { deleteKeyFixtures(t, c, fixtures) c.Close() }() expectedCount := map[string]int{ "count_test:keys_count_test_string*": 2, "count_test:keys_count_test_list*": 3, "count_test:*": 5, } for pattern, count := range expectedCount { actualCount, err := getKeysCount(c, pattern, defaultCount) if err != nil { t.Errorf("Error getting count for pattern \"%#v\"", pattern) } if actualCount != count { t.Errorf("Wrong count for pattern \"%#v\". Expected: %#v; Actual: %#v", pattern, count, actualCount) } } got, err := getKeysCount(c, "pattern", invalidCount) if err != nil { t.Logf("Expected error - \"error retrieving keys\": %#v", err) } else { t.Errorf("Error expected with invalidCount option \"%#v\", got valid response: %#v", invalidCount, got) } }
redis_exporter-1.69.0/exporter/latency.go
package exporter import ( "regexp" "strconv" "strings" "sync" "github.com/gomodule/redigo/redis" "github.com/prometheus/client_golang/prometheus" log "github.com/sirupsen/logrus" ) var ( logLatestErrOnce, logHistogramErrOnce sync.Once extractUsecRegexp = regexp.MustCompile(`(?m)^cmdstat_([a-zA-Z0-9\|]+):.*usec=([0-9]+).*$`) ) func (e *Exporter) extractLatencyMetrics(ch chan<- prometheus.Metric, infoAll string, c redis.Conn) { e.extractLatencyLatestMetrics(ch, c) e.extractLatencyHistogramMetrics(ch, infoAll, c) } func (e *Exporter) extractLatencyLatestMetrics(outChan chan<- prometheus.Metric, redisConn redis.Conn) { reply, err := redis.Values(doRedisCmd(redisConn, "LATENCY", "LATEST")) if err != nil { /* this can be a little too verbose, see e.g.
https://github.com/oliver006/redis_exporter/issues/495 we're logging this only once as an Error and always as Debugf() */ logLatestErrOnce.Do(func() { log.Errorf("WARNING, LOGGED ONCE ONLY: cmd LATENCY LATEST, err: %s", err) }) log.Debugf("cmd LATENCY LATEST, err: %s", err) return } for _, l := range reply { if latencyResult, err := redis.Values(l, nil); err == nil { var eventName string var spikeLast, spikeDuration, maxLatency int64 if _, err := redis.Scan(latencyResult, &eventName, &spikeLast, &spikeDuration, &maxLatency); err == nil { spikeDurationSeconds := float64(spikeDuration) / 1e3 e.registerConstMetricGauge(outChan, "latency_spike_last", float64(spikeLast), eventName) e.registerConstMetricGauge(outChan, "latency_spike_duration_seconds", spikeDurationSeconds, eventName) } } } } /* https://redis.io/docs/latest/commands/latency-histogram/ */ func (e *Exporter) extractLatencyHistogramMetrics(outChan chan<- prometheus.Metric, infoAll string, redisConn redis.Conn) { reply, err := redis.Values(doRedisCmd(redisConn, "LATENCY", "HISTOGRAM")) if err != nil { logHistogramErrOnce.Do(func() { log.Errorf("WARNING, LOGGED ONCE ONLY: cmd LATENCY HISTOGRAM, err: %s", err) }) log.Debugf("cmd LATENCY HISTOGRAM, err: %s", err) return } for i := 0; i < len(reply); i += 2 { cmd, _ := redis.String(reply[i], nil) details, _ := redis.Values(reply[i+1], nil) var totalCalls uint64 var bucketInfo []uint64 if _, err := redis.Scan(details, nil, &totalCalls, nil, &bucketInfo); err != nil { break } buckets := map[float64]uint64{} for j := 0; j < len(bucketInfo); j += 2 { usec := float64(bucketInfo[j]) count := bucketInfo[j+1] buckets[usec] = count } totalUsecs := extractTotalUsecForCommand(infoAll, cmd) labelValues := []string{"cmd"} e.registerConstHistogram(outChan, "commands_latencies_usec", labelValues, totalCalls, float64(totalUsecs), buckets, cmd) } } func extractTotalUsecForCommand(infoAll string, cmd string) uint64 { total := uint64(0) matches := extractUsecRegexp.FindAllStringSubmatch(infoAll, -1) for _, match := range matches { if !strings.HasPrefix(match[1], cmd) { continue } usecs, err := strconv.ParseUint(match[2], 10, 0) if err != nil { log.Warnf("Unable to parse uint from string \"%s\": %v", match[2], err) continue } total += usecs } return total } ������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������redis_exporter-1.69.0/exporter/latency_test.go������������������������������������������������������0000664�0000000�0000000�00000010637�14765200314�0021553�0����������������������������������������������������������������������������������������������������ustar�00root����������������������������root����������������������������0000000�0000000������������������������������������������������������������������������������������������������������������������������������������������������������������������������package exporter import ( "fmt" "math" "os" "strings" "testing" "time" "github.com/gomodule/redigo/redis" "github.com/prometheus/client_golang/prometheus" dto "github.com/prometheus/client_model/go" ) const ( latencyTestTimeToSleepInMillis = 200 ) func TestLatencySpike(t *testing.T) { e := getTestExporter() setupLatency(t, os.Getenv("TEST_REDIS_URI")) defer resetLatency(t, 
os.Getenv("TEST_REDIS_URI")) chM := make(chan prometheus.Metric) go func() { e.Collect(chM) close(chM) }() for m := range chM { if strings.Contains(m.Desc().String(), "latency_spike_duration_seconds") { got := &dto.Metric{} m.Write(got) // The metric value is in seconds, but our sleep interval is specified // in milliseconds, so we need to convert val := got.GetGauge().GetValue() * 1000 // Because we're dealing with latency, there might be a slight delay // even after sleeping for a specific amount of time so checking // to see if we're between +-5 of our expected value if math.Abs(float64(latencyTestTimeToSleepInMillis)-val) > 5 { t.Errorf("values not matching, %f != %f", float64(latencyTestTimeToSleepInMillis), val) } } } resetLatency(t, os.Getenv("TEST_REDIS_URI")) chM = make(chan prometheus.Metric) go func() { e.Collect(chM) close(chM) }() for m := range chM { switch m := m.(type) { case prometheus.Gauge: if strings.Contains(m.Desc().String(), "latency_spike_duration_seconds") { t.Errorf("latency threshold was not reset") } } } } func setupLatency(t *testing.T, addr string) error { c, err := redis.DialURL(addr) if err != nil { t.Errorf("couldn't setup redis, err: %s ", err) return err } defer c.Close() _, err = c.Do("CONFIG", "SET", "LATENCY-MONITOR-THRESHOLD", 100) if err != nil { t.Errorf("couldn't setup redis, err: %s ", err) return err } // Have to pass in the sleep time in seconds so we have to divide // the number of milliseconds by 1000 to get number of seconds _, err = c.Do("DEBUG", "SLEEP", latencyTestTimeToSleepInMillis/1000.0) if err != nil { t.Errorf("couldn't setup redis, err: %s ", err) return err } time.Sleep(time.Millisecond * 50) return nil } func resetLatency(t *testing.T, addr string) error { c, err := redis.DialURL(addr) if err != nil { t.Errorf("couldn't setup redis, err: %s ", err) return err } defer c.Close() _, err = c.Do("LATENCY", "RESET") if err != nil { t.Errorf("couldn't setup redis, err: %s ", err) return err } time.Sleep(time.Millisecond * 50) return nil } func TestLatencyHistogram(t *testing.T) { addr := os.Getenv("TEST_REDIS_URI") // Since Redis 7.0.0 we should have latency histogram stats e := getTestExporterWithAddr(addr) setupTestKeys(t, addr) want := map[string]bool{"commands_latencies_usec": false} commandStatsCheck(t, e, want) deleteTestKeys(t, addr) } func TestExtractTotalUsecForCommand(t *testing.T) { statsOutString := `# Commandstats cmdstat_testerr|1:calls=1,usec_per_call=5.00,rejected_calls=0,failed_calls=0 cmdstat_testerr:calls=1,usec=2,usec_per_call=5.00,rejected_calls=0,failed_calls=0 cmdstat_testerr2:calls=1,usec=-2,usec_per_call=5.00,rejected_calls=0,failed_calls=0 cmdstat_testerr3:calls=1,usec=` + fmt.Sprintf("%d1", uint64(math.MaxUint64)) + `,usec_per_call=5.00,rejected_calls=0,failed_calls=0 cmdstat_config|get:calls=69103,usec=15005068,usec_per_call=217.14,rejected_calls=0,failed_calls=0 cmdstat_config|set:calls=3,usec=58,usec_per_call=19.33,rejected_calls=0,failed_calls=3 # Latencystats latency_percentiles_usec_pubsub|channels:p50=5.023,p99=5.023,p99.9=5.023 latency_percentiles_usec_config|get:p50=272.383,p99=346.111,p99.9=395.263 latency_percentiles_usec_config|set:p50=23.039,p99=27.007,p99.9=27.007` testMap := map[string]uint64{ "config|set": 58, "config": 58 + 15005068, "testerr|1": 0, "testerr": 2 + 0, "testerr2": 0, "testerr3": 0, } for cmd, expected := range testMap { if res := extractTotalUsecForCommand(statsOutString, cmd); res != expected { t.Errorf("Incorrect usec extracted. 
Expected %d but got %d!", expected, res) } } }
func TestLatencyStats(t *testing.T) { redisSevenAddr := os.Getenv("TEST_REDIS_URI") // Since Redis v7 we should have extended latency stats (summary of command latencies)
e := getTestExporterWithAddr(redisSevenAddr) setupTestKeys(t, redisSevenAddr) want := map[string]bool{"latency_percentiles_usec": false} commandStatsCheck(t, e, want) deleteTestKeys(t, redisSevenAddr) }
redis_exporter-1.69.0/exporter/lua.go
package exporter import ( "strconv" "github.com/gomodule/redigo/redis" "github.com/prometheus/client_golang/prometheus" log "github.com/sirupsen/logrus" )
func (e *Exporter) extractLuaScriptMetrics(ch chan<- prometheus.Metric, c redis.Conn, filename string, script []byte) error { log.Debugf("Evaluating e.options.LuaScript: %s", filename) kv, err := redis.StringMap(doRedisCmd(c, "EVAL", script, 0, 0)) if err != nil { log.Errorf("LuaScript error: %v", err) e.registerConstMetricGauge(ch, "script_result", 0, filename) return err } if len(kv) == 0 { log.Debugf("Lua script returned no results") e.registerConstMetricGauge(ch, "script_result", 2, filename) return nil } for key, stringVal := range kv { val, err := strconv.ParseFloat(stringVal, 64) if err != nil { log.Errorf("Error parsing lua script results, err: %s", err) e.registerConstMetricGauge(ch, "script_result", 0, filename) return err } e.registerConstMetricGauge(ch, "script_values", val, key, filename) } e.registerConstMetricGauge(ch, "script_result", 1, filename) return nil }
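// Illustrative sketch (not part of the exporter source): a script supplied via
// Options.LuaScript is run with EVAL and is expected to return a flat array of
// alternating keys and numeric string values. Assuming a script file "test.lua"
// containing
//
//	return {"a", "11", "b", "12", "c", "13"}
//
// extractLuaScriptMetrics above emits, namespace prefix omitted:
//
//	script_values{filename="test.lua",key="a"} 11
//	script_values{filename="test.lua",key="b"} 12
//	script_values{filename="test.lua",key="c"} 13
//	script_result{filename="test.lua"} 1
//
// lua_test.go below exercises the full set of cases, including error results.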
redis_exporter-1.69.0/exporter/lua_test.go
package exporter import ( "net/http/httptest" "os" "strings" "testing" "github.com/prometheus/client_golang/prometheus" )
func TestLuaScript(t *testing.T) { for _, tst := range []struct { Name string Script string ExpectedKeys int ExpectedError bool Wants []string }{ { Name: "ok1", Script: `return {"a", "11", "b", "12", "c", "13"}`, ExpectedKeys: 4, Wants: []string{`test_exporter_last_scrape_error{err=""} 0`, `test_script_values{filename="test.lua",key="a"} 11`, `test_script_values{filename="test.lua",key="b"} 12`, `test_script_values{filename="test.lua",key="c"} 13`, `test_script_result{filename="test.lua"} 1`}, }, { Name: "ok2", Script: `return {"key1", "6389"}`, ExpectedKeys: 4, Wants: []string{`test_exporter_last_scrape_error{err=""} 0`, `test_script_values{filename="test.lua",key="key1"} 6389`, `test_script_result{filename="test.lua"} 1`}, }, { Name: "ok3", Script: `return {} `, ExpectedKeys: 1, Wants: []string{`test_script_result{filename="test.lua"} 2`}, }, { Name: "borked1", Script: `return {"key1" BROKEN `, ExpectedKeys: 1, ExpectedError: true, Wants: []string{`test_exporter_last_scrape_error{err="ERR Error compiling script`, `test_script_result{filename="test.lua"} 0`}, }, { Name: "borked2", Script: `return {"key1", "abc"}`, ExpectedKeys: 1, ExpectedError: true, Wants: []string{`test_exporter_last_scrape_error{err="strconv.ParseFloat: parsing \"abc\": invalid syntax"} 1`, `test_script_result{filename="test.lua"} 0`}, }, } { t.Run(tst.Name, func(t *testing.T) { e, _ := NewRedisExporter( os.Getenv("TEST_REDIS_URI"), Options{ Namespace: "test", Registry: prometheus.NewRegistry(), LuaScript: map[string][]byte{"test.lua": []byte(tst.Script)}, }) ts := httptest.NewServer(e) defer ts.Close() chM := make(chan prometheus.Metric, 10000) go func() { e.Collect(chM) close(chM) }() body := downloadURL(t, ts.URL+"/metrics") for _, want := range tst.Wants { if !strings.Contains(body, want) { t.Errorf(`error, expected string "%s" in body, got body: \n\n%s`, want, body) } } }) } }
redis_exporter-1.69.0/exporter/metrics.go
package exporter import ( "regexp" "strconv" "strings" "github.com/prometheus/client_golang/prometheus" log "github.com/sirupsen/logrus" ) var metricNameRE = regexp.MustCompile(`[^a-zA-Z0-9_]`)
func sanitizeMetricName(n string) string { return metricNameRE.ReplaceAllString(n, "_") }
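// For example (mirroring TestSanitizeMetricName in metrics_test.go below),
// sanitizeMetricName maps "cluster_stats_messages_auth-req_received" to
// "cluster_stats_messages_auth_req_received", because '-' falls outside
// [a-zA-Z0-9_].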
err: %s", fieldValue, err) return } t := prometheus.GaugeValue if e.metricMapCounters[orgMetricName] != "" { t = prometheus.CounterValue } switch metricName { case "latest_fork_usec": metricName = "latest_fork_seconds" val = val / 1e6 } e.registerConstMetric(ch, metricName, val, t) } func (e *Exporter) registerConstMetricGauge(ch chan<- prometheus.Metric, metric string, val float64, labels ...string) { e.registerConstMetric(ch, metric, val, prometheus.GaugeValue, labels...) } func (e *Exporter) registerConstMetric(ch chan<- prometheus.Metric, metric string, val float64, valType prometheus.ValueType, labelValues ...string) { description := e.findOrCreateMetricDescription(metric, labelValues) m, err := prometheus.NewConstMetric(description, valType, val, labelValues...) if err != nil { log.Debugf("registerConstMetric( %s , %.2f) err: %s", metric, val, err) return } ch <- m } func (e *Exporter) registerConstSummary(ch chan<- prometheus.Metric, metric string, labelValues []string, count uint64, sum float64, latencyMap map[float64]float64, cmd string) { description := e.findOrCreateMetricDescription(metric, labelValues) // Create a constant summary from values we got from a 3rd party telemetry system. summary := prometheus.MustNewConstSummary( description, count, sum, latencyMap, cmd, ) ch <- summary } func (e *Exporter) registerConstHistogram(ch chan<- prometheus.Metric, metric string, labelValues []string, count uint64, sum float64, buckets map[float64]uint64, cmd string) { description := e.findOrCreateMetricDescription(metric, labelValues) histogram := prometheus.MustNewConstHistogram( description, count, sum, buckets, cmd, ) ch <- histogram } func (e *Exporter) findOrCreateMetricDescription(metricName string, labels []string) *prometheus.Desc { description, found := e.metricDescriptions[metricName] if !found { description = newMetricDescr(e.options.Namespace, metricName, metricName+" metric", labels) e.metricDescriptions[metricName] = description } return description } �������������������������������������������������������������������������������������������������������������������������redis_exporter-1.69.0/exporter/metrics_test.go������������������������������������������������������0000664�0000000�0000000�00000003413�14765200314�0021554�0����������������������������������������������������������������������������������������������������ustar�00root����������������������������root����������������������������0000000�0000000������������������������������������������������������������������������������������������������������������������������������������������������������������������������package exporter import ( "strings" "testing" "github.com/prometheus/client_golang/prometheus" ) func TestSanitizeMetricName(t *testing.T) { tsts := map[string]string{ "cluster_stats_messages_auth-req_received": "cluster_stats_messages_auth_req_received", "cluster_stats_messages_auth_req_received": "cluster_stats_messages_auth_req_received", } for m, want := range tsts { if got := sanitizeMetricName(m); got != want { t.Errorf("sanitizeMetricName( %s ) error, want: %s, got: %s", m, want, got) } } } func TestRegisterConstHistogram(t *testing.T) { exp := getTestExporter() metricName := "foo" ch := make(chan prometheus.Metric) go func() { exp.registerConstHistogram(ch, metricName, []string{"bar"}, 12, .24, map[float64]uint64{}, "test") close(ch) }() for m := range ch { if strings.Contains(m.Desc().String(), metricName) { return } } t.Errorf("Histogram was not registered") } func 
redis_exporter-1.69.0/exporter/metrics_test.go
package exporter import ( "strings" "testing" "github.com/prometheus/client_golang/prometheus" )
func TestSanitizeMetricName(t *testing.T) { tsts := map[string]string{ "cluster_stats_messages_auth-req_received": "cluster_stats_messages_auth_req_received", "cluster_stats_messages_auth_req_received": "cluster_stats_messages_auth_req_received", } for m, want := range tsts { if got := sanitizeMetricName(m); got != want { t.Errorf("sanitizeMetricName( %s ) error, want: %s, got: %s", m, want, got) } } }
func TestRegisterConstHistogram(t *testing.T) { exp := getTestExporter() metricName := "foo" ch := make(chan prometheus.Metric) go func() { exp.registerConstHistogram(ch, metricName, []string{"bar"}, 12, .24, map[float64]uint64{}, "test") close(ch) }() for m := range ch { if strings.Contains(m.Desc().String(), metricName) { return } } t.Errorf("Histogram was not registered") }
func TestFindOrCreateMetricsDescriptionFindExisting(t *testing.T) { exp := getTestExporter() exp.metricDescriptions = map[string]*prometheus.Desc{} metricName := "foo" labels := []string{"1", "2"} ret := exp.findOrCreateMetricDescription(metricName, labels) ret2 := exp.findOrCreateMetricDescription(metricName, labels) if ret == nil || ret2 == nil || ret != ret2 { t.Errorf("Unexpected return values: (%v, %v)", ret, ret2) } if len(exp.metricDescriptions) != 1 { t.Errorf("Unexpected metricDescriptions entry count.") } }
func TestFindOrCreateMetricsDescriptionCreateNew(t *testing.T) { exp := getTestExporter() exp.metricDescriptions = map[string]*prometheus.Desc{} metricName := "foo" labels := []string{"1", "2"} ret := exp.findOrCreateMetricDescription(metricName, labels) if ret == nil { t.Errorf("Unexpected return value: %s", ret) } }
redis_exporter-1.69.0/exporter/modules.go
package exporter import ( "strings" "github.com/gomodule/redigo/redis" "github.com/prometheus/client_golang/prometheus" log "github.com/sirupsen/logrus" )
func (e *Exporter) extractModulesMetrics(ch chan<- prometheus.Metric, c redis.Conn) { info, err := redis.String(doRedisCmd(c, "INFO", "MODULES")) if err != nil { log.Errorf("extractModulesMetrics() err: %s", err) return } lines := strings.Split(info, "\r\n") for _, line := range lines { log.Debugf("info: %s", line) split := strings.Split(line, ":") if len(split) != 2 { continue } if split[0] == "module" { // module format: 'module:name=<module-name>,ver=21005,api=1,filters=0,usedby=[],using=[],options=[]'
module := strings.Split(split[1], ",") if len(module) != 7 { continue } e.registerConstMetricGauge(ch, "module_info", 1, strings.Split(module[0], "=")[1], strings.Split(module[1], "=")[1], strings.Split(module[2], "=")[1], strings.Split(module[3], "=")[1], strings.Split(module[4], "=")[1], strings.Split(module[5], "=")[1], ) continue } fieldKey := split[0] fieldValue := split[1] if !e.includeMetric(fieldKey) { continue } e.parseAndRegisterConstMetric(ch, fieldKey, fieldValue) } }
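// Illustrative sketch of an INFO MODULES line consumed by the parser above,
// following the format comment in the code; the module name "search" is a
// made-up example:
//
//	module:name=search,ver=21005,api=1,filters=0,usedby=[],using=[],options=[]
//
// This yields a module_info gauge carrying the first six "="-separated values
// as label values.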
redis_exporter-1.69.0/exporter/modules_test.go
package exporter import ( "os" "strings" "testing" "github.com/prometheus/client_golang/prometheus" )
func TestModules(t *testing.T) { if os.Getenv("TEST_REDIS_MODULES_URI") == "" { t.Skipf("TEST_REDIS_MODULES_URI not set - skipping") } tsts := []struct { addr string inclModulesMetrics bool wantModulesMetrics bool }{ {addr: os.Getenv("TEST_REDIS_MODULES_URI"), inclModulesMetrics: true, wantModulesMetrics: true}, {addr: os.Getenv("TEST_REDIS_MODULES_URI"), inclModulesMetrics: false, wantModulesMetrics: false}, {addr: os.Getenv("TEST_REDIS_URI"), inclModulesMetrics: true, wantModulesMetrics: false}, {addr: os.Getenv("TEST_REDIS_URI"), inclModulesMetrics: false, wantModulesMetrics: false}, } for _, tst := range tsts { e, _ := NewRedisExporter(tst.addr, Options{Namespace: "test", InclModulesMetrics: tst.inclModulesMetrics}) chM := make(chan prometheus.Metric) go func() { e.Collect(chM) close(chM) }() wantedMetrics := map[string]bool{ "module_info": false, "search_number_of_indexes": false, "search_used_memory_indexes_bytes": false, "search_indexing_time_ms_total": false, "search_global_idle": false, "search_global_total": false, "search_collected_bytes": false, "search_cycles_total": false, "search_run_ms_total": false, "search_dialect_1": false, "search_dialect_2": false, "search_dialect_3": false, "search_dialect_4": false, } for m := range chM { for want := range wantedMetrics { if strings.Contains(m.Desc().String(), want) { wantedMetrics[want] = true } } } if tst.wantModulesMetrics { for want, found := range wantedMetrics { if !found { t.Errorf("%s was *not* found in Redis Modules metrics but expected", want) } } } else { for want, found := range wantedMetrics { if found { t.Errorf("%s was *found* in Redis Modules metrics but *not* expected", want) } } } } }
redis_exporter-1.69.0/exporter/pwd_file.go
package exporter import ( "encoding/json" "os" log "github.com/sirupsen/logrus" )
// LoadPwdFile reads the redis password file and returns the password map
func LoadPwdFile(passwordFile string) (map[string]string, error) { res := make(map[string]string) log.Debugf("start load password file: %s", passwordFile) bytes, err := os.ReadFile(passwordFile) if err != nil { log.Warnf("load password file failed: %s", err) return nil, err } err = json.Unmarshal(bytes, &res) if err != nil { log.Warnf("password file format error: %s", err) return nil, err } log.Infof("Loaded %d entries from %s", len(res), passwordFile) for k := range res { log.Debugf("%s", k) } return res, nil }
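// Illustrative sketch of the password file format (assumed from the
// map[string]string shape above and the contrib/sample-pwd-file.json fixture
// used by the tests): a flat JSON object mapping redis URIs to passwords, e.g.
//
//	{
//	  "redis://localhost:16380": "redis-password"
//	}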
redis_exporter-1.69.0/exporter/pwd_file_test.go
package exporter import ( "fmt" "net/http" "net/http/httptest" "net/url" "os" "strings" "testing" "github.com/prometheus/client_golang/prometheus" )
func TestLoadPwdFile(t *testing.T) { for _, tst := range []struct { name string pwdFile string ok bool }{ { name: "load-password-file-success", pwdFile: "../contrib/sample-pwd-file.json", ok: true, }, { name: "load-password-file-missing", pwdFile: "non-existent.json", ok: false, }, { name: "load-password-file-malformed", pwdFile: "../contrib/sample-pwd-file.json-malformed", ok: false, }, } { t.Run(tst.name, func(t *testing.T) { _, err := LoadPwdFile(tst.pwdFile) if err == nil && !tst.ok { t.Fatalf("Test Failed, result is not what we want") } if err != nil && tst.ok { t.Fatalf("Test Failed, result is not what we want") } }) } }
func TestPasswordMap(t *testing.T) { pwdFile := "../contrib/sample-pwd-file.json" passwordMap, err := LoadPwdFile(pwdFile) if err != nil { t.Fatalf("Test Failed, error: %v", err) } if len(passwordMap) == 0 { t.Fatalf("Password map is empty - failing") } for _, tst := range []struct { name string addr string want string }{ {name: "password-hit", addr: "redis://localhost:16380", want: "redis-password"}, {name: "password-missed", addr: "Non-existent-redis-host", want: ""}, } { t.Run(tst.name, func(t *testing.T) { pwd := passwordMap[tst.addr] if !strings.Contains(pwd, tst.want) { t.Errorf("redis host: %s password is not what we want", tst.addr) } }) } }
func TestHTTPScrapeWithPasswordFile(t *testing.T) { if os.Getenv("TEST_PWD_REDIS_URI") == "" { t.Skipf("Skipping TestHTTPScrapeWithPasswordFile, missing env variables") } pwdFile := "../contrib/sample-pwd-file.json" passwordMap, err := LoadPwdFile(pwdFile) if err != nil { t.Fatalf("Test Failed, error: %v", err) } if len(passwordMap) == 0 { t.Fatalf("Password map is empty!") } for _, tst := range []struct { name string addr string wants []string useWrongPassword bool wantStatusCode int }{ {name: "scrape-pwd-file", addr: os.Getenv("TEST_PWD_REDIS_URI"), wants: []string{ "uptime_in_seconds", "test_up 1", }}, {name: "scrape-pwd-file-wrong-password", addr: "redis://localhost:16380", useWrongPassword: true, wants: []string{ "test_up 0", }}, } { if tst.useWrongPassword { passwordMap[tst.addr] = "wrong-password" } options := Options{ Namespace: "test", PasswordMap: passwordMap, LuaScript: map[string][]byte{ "test.lua": []byte(`return {"a", "11", "b", "12", "c", "13"}`), }, Registry: prometheus.NewRegistry(), } t.Run(tst.name, func(t *testing.T) { e, _ := NewRedisExporter(tst.addr, options) ts := httptest.NewServer(e) u := ts.URL u += "/scrape" v := url.Values{} v.Add("target", tst.addr) up, _ := url.Parse(u) up.RawQuery = v.Encode() u = up.String() wantStatusCode := http.StatusOK if tst.wantStatusCode != 0 { wantStatusCode = tst.wantStatusCode } gotStatusCode, body := downloadURLWithStatusCode(t, u) if gotStatusCode != wantStatusCode { t.Fatalf("got status code: %d wanted: %d", gotStatusCode, wantStatusCode) return } // we can stop here if we expected a non-200 response
if wantStatusCode != http.StatusOK { return } for _, want := range tst.wants { if !strings.Contains(body, want) { t.Errorf("url: %s want metrics to include %q, have:\n%s", u, want, body) break } } ts.Close() }) } }
func TestHTTPScrapeWithUsername(t *testing.T) { if os.Getenv("TEST_USER_PWD_REDIS_URI") == "" { t.Skipf("Skipping TestHTTPScrapeWithUsername, missing env variables") } pwdFile := "../contrib/sample-pwd-file.json" passwordMap, err := LoadPwdFile(pwdFile) if err != nil { t.Fatalf("Test Failed, error: %v", err) } if len(passwordMap) == 0 { t.Fatalf("Password map is empty!") } // use provided uri but remove password before sending it over the wire
// after all, we want to test the lookup in the password map
u, err := url.Parse(os.Getenv("TEST_USER_PWD_REDIS_URI")) if err != nil { t.Fatalf("url.Parse() failed, err: %v", err) } u.User = url.User(u.User.Username()) uriWithUser := u.String() uriWithUser = strings.Replace(uriWithUser, fmt.Sprintf(":@%s", u.Host), fmt.Sprintf("@%s", u.Host), 1) for _, tst := range []struct { name string addr string wants []string wantStatusCode int }{ { name: "scrape-pwd-file", wantStatusCode: http.StatusOK, addr: uriWithUser, wants: []string{ "uptime_in_seconds", "test_up 1", }}, } { options := Options{ Namespace: "test", PasswordMap: passwordMap, Registry: prometheus.NewRegistry(), } t.Run(tst.name, func(t *testing.T) { e, _ := NewRedisExporter(tst.addr, options) ts := httptest.NewServer(e) u := ts.URL u += "/scrape" v := url.Values{} v.Add("target", tst.addr) up, _ := url.Parse(u) up.RawQuery = v.Encode() u = up.String() gotStatusCode, body := downloadURLWithStatusCode(t, u) if gotStatusCode != tst.wantStatusCode { t.Fatalf("got status code: %d wanted: %d", gotStatusCode, tst.wantStatusCode) return } // we can stop here if we expected a non-200 response
if tst.wantStatusCode != http.StatusOK { return } for _, want := range tst.wants { if !strings.Contains(body, want) { t.Errorf("url: %s want metrics to include %q, have:\n%s", u, want, body) break } } ts.Close() }) } }
redis_exporter-1.69.0/exporter/redis.go
package exporter import ( "fmt" "net/url" "strings" "time" "github.com/gomodule/redigo/redis" "github.com/mna/redisc" log "github.com/sirupsen/logrus" )
func (e *Exporter) configureOptions(uri string) ([]redis.DialOption, error) {
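// The dial options below are assembled in order: connection timeouts, TLS
// settings (TLS is enabled when the configured address uses the rediss://
// scheme), then credentials from Options and, last, any password found in the
// password map for this target. Since redigo applies dial options
// sequentially, a password-map entry is assumed to take precedence over a
// password set via Options.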
tlsConfig, err := e.CreateClientTLSConfig() if err != nil { return nil, err } options := []redis.DialOption{ redis.DialConnectTimeout(e.options.ConnectionTimeouts), redis.DialReadTimeout(e.options.ConnectionTimeouts), redis.DialWriteTimeout(e.options.ConnectionTimeouts), redis.DialTLSConfig(tlsConfig), redis.DialUseTLS(strings.HasPrefix(e.redisAddr, "rediss://")), } if e.options.User != "" { options = append(options, redis.DialUsername(e.options.User)) } if e.options.Password != "" { options = append(options, redis.DialPassword(e.options.Password)) } if pwd, ok := e.lookupPasswordInPasswordMap(uri); ok && pwd != "" { options = append(options, redis.DialPassword(pwd)) } return options, nil } func (e *Exporter) lookupPasswordInPasswordMap(uri string) (string, bool) { u, err := url.Parse(uri) if err != nil { return "", false } if e.options.User != "" { u.User = url.User(e.options.User) } uri = u.String() // strip solo ":" if present in uri that has a username (and no pwd) uri = strings.Replace(uri, fmt.Sprintf(":@%s", u.Host), fmt.Sprintf("@%s", u.Host), 1) log.Debugf("looking up in pwd map, uri: %s", uri) if pwd, ok := e.options.PasswordMap[uri]; ok && pwd != "" { return pwd, true } return "", false } func (e *Exporter) connectToRedis() (redis.Conn, error) { uri := e.redisAddr if !strings.Contains(uri, "://") { uri = "redis://" + uri } options, err := e.configureOptions(uri) if err != nil { return nil, err } log.Debugf("Trying DialURL(): %s", uri) c, err := redis.DialURL(uri, options...) if err != nil { log.Debugf("DialURL() failed, err: %s", err) if frags := strings.Split(e.redisAddr, "://"); len(frags) == 2 { log.Debugf("Trying: Dial(): %s %s", frags[0], frags[1]) c, err = redis.Dial(frags[0], frags[1], options...) } else { log.Debugf("Trying: Dial(): tcp %s", e.redisAddr) c, err = redis.Dial("tcp", e.redisAddr, options...) } } return c, err } func (e *Exporter) connectToRedisCluster() (redis.Conn, error) { uri := e.redisAddr if !strings.Contains(uri, "://") { uri = "redis://" + uri } options, err := e.configureOptions(uri) if err != nil { return nil, err } // remove url scheme for redis.Cluster.StartupNodes if strings.Contains(uri, "://") { u, _ := url.Parse(uri) if u.Port() == "" { uri = u.Host + ":6379" } else { uri = u.Host } } else { if frags := strings.Split(uri, ":"); len(frags) != 2 { uri = uri + ":6379" } } log.Debugf("Creating cluster object") cluster := redisc.Cluster{ StartupNodes: []string{uri}, DialOptions: options, } log.Debugf("Running refresh on cluster object") if err := cluster.Refresh(); err != nil { log.Errorf("Cluster refresh failed: %v", err) } log.Debugf("Creating redis connection object") conn, err := cluster.Dial() if err != nil { log.Errorf("Dial failed: %v", err) } c, err := redisc.RetryConn(conn, 10, 100*time.Millisecond) if err != nil { log.Errorf("RetryConn failed: %v", err) } return c, err } func doRedisCmd(c redis.Conn, cmd string, args ...interface{}) (interface{}, error) { log.Debugf("c.Do() - running command: %s args: [%v]", cmd, args) res, err := c.Do(cmd, args...) 
if err != nil { log.Debugf("c.Do() - err: %s", err) } log.Debugf("c.Do() - done") return res, err }
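// Note, summarizing connectToRedis above: an address may be a plain
// "host:port" (a redis:// scheme is prepended), a redis:// or rediss:// URL
// handled by redis.DialURL, or use another scheme, in which case the exporter
// falls back to redis.Dial with that scheme as the network, and to plain
// "tcp" for scheme-less addresses; see TestHostVariations in redis_test.go
// below.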
redis_exporter-1.69.0/exporter/redis_test.go
package exporter import ( "bytes" "net/http/httptest" "net/url" "os" "strings" "testing" "github.com/prometheus/client_golang/prometheus" log "github.com/sirupsen/logrus" )
func TestHostVariations(t *testing.T) { host := strings.ReplaceAll(os.Getenv("TEST_REDIS_URI"), "redis://", "") for _, prefix := range []string{"", "redis://", "tcp://"} { e, _ := NewRedisExporter(prefix+host, Options{SkipTLSVerification: true}) c, err := e.connectToRedis() if err != nil { t.Errorf("connectToRedis() err: %s", err) continue } if _, err := c.Do("PING", ""); err != nil { t.Errorf("PING err: %s", err) } c.Close() } }
func TestValkeyScheme(t *testing.T) { host := os.Getenv("TEST_VALKEY8_URI") e, _ := NewRedisExporter(host, Options{SkipTLSVerification: true}) c, err := e.connectToRedis() if err != nil { t.Fatalf("connectToRedis() err: %s", err) } if _, err := c.Do("PING", ""); err != nil { t.Errorf("PING err: %s", err) } c.Close() }
func TestPasswordProtectedInstance(t *testing.T) { userAddr := os.Getenv("TEST_USER_PWD_REDIS_URI") if userAddr == "" { t.Skipf("Skipping TestPasswordProtectedInstance, missing env variables") } parsedPassword := "" parsed, err := url.Parse(userAddr) if err == nil && parsed.User != nil { parsedPassword, _ = parsed.User.Password() } tsts := []struct { name string addr string user string pwd string }{ { name: "TEST_PWD_REDIS_URI", addr: os.Getenv("TEST_PWD_REDIS_URI"), }, { name: "TEST_USER_PWD_REDIS_URI", addr: userAddr, }, { name: "parsed-TEST_USER_PWD_REDIS_URI", addr: parsed.Host, user: parsed.User.Username(), pwd: parsedPassword, }, } for _, tst := range tsts { t.Run(tst.name, func(t *testing.T) { e, _ := NewRedisExporter( tst.addr, Options{ Namespace: "test", Registry: prometheus.NewRegistry(), User: tst.user, Password: tst.pwd, }) ts := httptest.NewServer(e) defer ts.Close() chM := make(chan prometheus.Metric, 10000) go func() { e.Collect(chM) close(chM) }() body := downloadURL(t, ts.URL+"/metrics") if !strings.Contains(body, "test_up 1") { t.Errorf(`%s - response to /metric doesn't contain "test_up 1"`, tst) } }) } }
func TestPasswordInvalid(t *testing.T) { if os.Getenv("TEST_PWD_REDIS_URI") == "" { t.Skipf("TEST_PWD_REDIS_URI not set - skipping") } testPwd := "redis-password" uri := strings.Replace(os.Getenv("TEST_PWD_REDIS_URI"), testPwd, "wrong-pwd", -1) e, _ := NewRedisExporter(uri, Options{Namespace: "test", Registry: prometheus.NewRegistry()}) ts := httptest.NewServer(e) defer ts.Close() chM := make(chan prometheus.Metric, 10000) go func() { e.Collect(chM) close(chM) }() want := `test_exporter_last_scrape_error{err="dial redis: unknown network redis"} 1` body := downloadURL(t, ts.URL+"/metrics") if !strings.Contains(body, want) { t.Errorf(`error, expected string "%s" in body, got body: \n\n%s`, want, body) } }
func TestConnectToClusterUsingPasswordFile(t *testing.T) { clusterUri := os.Getenv("TEST_REDIS_CLUSTER_PASSWORD_URI") if clusterUri == "" { t.Skipf("TEST_REDIS_CLUSTER_PASSWORD_URI is not set") } passMap := map[string]string{clusterUri: "redis-password"} wrongPassMap := map[string]string{"redis://redis-cluster-password-wrong:7006": "redis-password"} tsts := []struct { name string isCluster bool passMap map[string]string refreshError bool }{ {name: "ConnectToCluster using password file with cluster mode", isCluster: true, passMap: passMap, refreshError: false}, {name: "ConnectToCluster using password file without cluster mode", isCluster: false, passMap: passMap, refreshError: false}, {name: "ConnectToCluster using password file with cluster mode failed", isCluster: false, passMap: wrongPassMap, refreshError: true}, } for _, tst := range tsts { t.Run(tst.name, func(t *testing.T) { e, _ := NewRedisExporter(clusterUri, Options{ SkipTLSVerification: true, PasswordMap: tst.passMap, IsCluster: tst.isCluster, }) var buf bytes.Buffer log.SetOutput(&buf) defer func() { log.SetOutput(os.Stderr) }() _, err := e.connectToRedisCluster() t.Logf("connectToRedisCluster() err: %s", err) if strings.Contains(buf.String(), "Cluster refresh failed:") && !tst.refreshError { t.Errorf("Test Cluster connection Failed error") } if err != nil { t.Errorf("Test Cluster connection Failed-connection error") } }) } }
redis_exporter-1.69.0/exporter/sentinels.go
package exporter import ( "regexp" "strconv" "strings" "github.com/gomodule/redigo/redis" "github.com/prometheus/client_golang/prometheus" log "github.com/sirupsen/logrus" )
func (e *Exporter) handleMetricsSentinel(ch chan<- prometheus.Metric, fieldKey string, fieldValue string) { switch fieldKey { case "sentinel_masters", "sentinel_tilt", "sentinel_running_scripts", "sentinel_scripts_queue_length", "sentinel_simulate_failure_flags": val, _ := strconv.Atoi(fieldValue) e.registerConstMetricGauge(ch, fieldKey, float64(val)) return } if masterName, masterStatus, masterAddress, masterSlaves, masterSentinels, ok := parseSentinelMasterString(fieldKey, fieldValue); ok { masterStatusNum := 0.0 if masterStatus == "ok" { masterStatusNum = 1 } e.registerConstMetricGauge(ch, "sentinel_master_status", masterStatusNum, masterName, masterAddress, masterStatus) e.registerConstMetricGauge(ch, "sentinel_master_slaves", masterSlaves, masterName, masterAddress) e.registerConstMetricGauge(ch, "sentinel_master_sentinels", masterSentinels, masterName, masterAddress) return } }
func (e *Exporter) extractSentinelMetrics(ch chan<- prometheus.Metric, c redis.Conn) { masterDetails, err := redis.Values(doRedisCmd(c, "SENTINEL", "MASTERS")) if err != nil { log.Debugf("Error getting sentinel master details: %s", err) return } log.Debugf("Sentinel master details: %#v", masterDetails) for _, masterDetail := range masterDetails { masterDetailMap, err := redis.StringMap(masterDetail, nil) if err != nil { log.Debugf("Error getting masterDetailmap from masterDetail: %s, err: %s", masterDetail, err) continue } masterName, ok := masterDetailMap["name"] if !ok { continue } masterIp, ok := masterDetailMap["ip"] if !ok { continue }
masterPort, ok := masterDetailMap["port"] if !ok { continue } masterAddr := masterIp + ":" + masterPort masterCkquorumMsg, err := redis.String(doRedisCmd(c, "SENTINEL", "CKQUORUM", masterName)) log.Debugf("Sentinel ckquorum status for master %s: %s %s", masterName, masterCkquorumMsg, err) masterCkquorumStatus := 1 if err != nil { masterCkquorumStatus = 0 masterCkquorumMsg = err.Error() } e.registerConstMetricGauge(ch, "sentinel_master_ckquorum_status", float64(masterCkquorumStatus), masterName, masterCkquorumMsg) masterCkquorum, _ := strconv.ParseFloat(masterDetailMap["quorum"], 64) masterFailoverTimeout, _ := strconv.ParseFloat(masterDetailMap["failover-timeout"], 64) masterParallelSyncs, _ := strconv.ParseFloat(masterDetailMap["parallel-syncs"], 64) masterDownAfterMs, _ := strconv.ParseFloat(masterDetailMap["down-after-milliseconds"], 64) e.registerConstMetricGauge(ch, "sentinel_master_setting_ckquorum", masterCkquorum, masterName, masterAddr) e.registerConstMetricGauge(ch, "sentinel_master_setting_failover_timeout", masterFailoverTimeout, masterName, masterAddr) e.registerConstMetricGauge(ch, "sentinel_master_setting_parallel_syncs", masterParallelSyncs, masterName, masterAddr) e.registerConstMetricGauge(ch, "sentinel_master_setting_down_after_milliseconds", masterDownAfterMs, masterName, masterAddr) sentinelDetails, _ := redis.Values(doRedisCmd(c, "SENTINEL", "SENTINELS", masterName)) log.Debugf("Sentinel details for master %s: %s", masterName, sentinelDetails) e.processSentinelSentinels(ch, sentinelDetails, masterName, masterAddr) slaveDetails, _ := redis.Values(doRedisCmd(c, "SENTINEL", "SLAVES", masterName)) log.Debugf("Slave details for master %s: %s", masterName, slaveDetails) e.processSentinelSlaves(ch, slaveDetails, masterName, masterAddr) } } func (e *Exporter) processSentinelSentinels(ch chan<- prometheus.Metric, sentinelDetails []interface{}, labels ...string) { // If we are here then this master is in ok state masterOkSentinels := 1 for _, sentinelDetail := range sentinelDetails { sentinelDetailMap, err := redis.StringMap(sentinelDetail, nil) if err != nil { log.Debugf("Error getting sentinelDetailMap from sentinelDetail: %s, err: %s", sentinelDetail, err) continue } sentinelFlags, ok := sentinelDetailMap["flags"] if !ok { continue } if strings.Contains(sentinelFlags, "o_down") { continue } if strings.Contains(sentinelFlags, "s_down") { continue } masterOkSentinels = masterOkSentinels + 1 } e.registerConstMetricGauge(ch, "sentinel_master_ok_sentinels", float64(masterOkSentinels), labels...) } func (e *Exporter) processSentinelSlaves(ch chan<- prometheus.Metric, slaveDetails []interface{}, labels ...string) { masterOkSlaves := 0 for _, slaveDetail := range slaveDetails { slaveDetailMap, err := redis.StringMap(slaveDetail, nil) if err != nil { log.Debugf("Error getting slavedetailMap from slaveDetail: %s, err: %s", slaveDetail, err) continue } slaveFlags, ok := slaveDetailMap["flags"] if !ok { continue } if strings.Contains(slaveFlags, "o_down") { continue } if strings.Contains(slaveFlags, "s_down") { continue } masterOkSlaves = masterOkSlaves + 1 } e.registerConstMetricGauge(ch, "sentinel_master_ok_slaves", float64(masterOkSlaves), labels...) 
}
/* valid examples: master0:name=user03,status=sdown,address=192.169.2.52:6381,slaves=1,sentinels=5 master1:name=user02,status=ok,address=192.169.2.54:6380,slaves=1,sentinels=5 */ func parseSentinelMasterString(master string, masterInfo string) (masterName string, masterStatus string, masterAddr string, masterSlaves float64, masterSentinels float64, ok bool) { ok = false if matched, _ := regexp.MatchString(`^master\d+`, master); !matched { return } matchedMasterInfo := make(map[string]string) for _, kvPart := range strings.Split(masterInfo, ",") { x := strings.Split(kvPart, "=") if len(x) != 2 { log.Errorf("Invalid format for sentinel's master string, got: %s", kvPart) continue } matchedMasterInfo[x[0]] = x[1] } masterName = matchedMasterInfo["name"] masterStatus = matchedMasterInfo["status"] masterAddr = matchedMasterInfo["address"] masterSlaves, err := strconv.ParseFloat(matchedMasterInfo["slaves"], 64) if err != nil { log.Debugf("parseSentinelMasterString(): couldn't parse slaves value, got: %s, err: %s", matchedMasterInfo["slaves"], err) return } masterSentinels, err = strconv.ParseFloat(matchedMasterInfo["sentinels"], 64) if err != nil { log.Debugf("parseSentinelMasterString(): couldn't parse sentinels value, got: %s, err: %s", matchedMasterInfo["sentinels"], err) return } ok = true return }
redis_exporter-1.69.0/exporter/sentinels_test.go
package exporter import ( "fmt" "os" "strings" "testing" "github.com/gomodule/redigo/redis" "github.com/prometheus/client_golang/prometheus" dto "github.com/prometheus/client_model/go" )
func TestExtractInfoMetricsSentinel(t *testing.T) { if os.Getenv("TEST_REDIS_SENTINEL_URI") == "" { t.Skipf("TEST_REDIS_SENTINEL_URI not set - skipping") } addr := os.Getenv("TEST_REDIS_SENTINEL_URI") e, _ := NewRedisExporter( addr, Options{Namespace: "test"}, ) c, err := redis.DialURL(addr) if err != nil { t.Fatalf("Couldn't connect to %#v: %#v", addr, err) } infoAll, err := redis.String(doRedisCmd(c, "INFO", "ALL")) if err != nil { t.Logf("Redis INFO ALL err: %s", err) infoAll, err = redis.String(doRedisCmd(c, "INFO")) if err != nil { t.Fatalf("Redis INFO err: %s", err) } } chM := make(chan prometheus.Metric) go func() { e.extractInfoMetrics(chM, infoAll, 0) close(chM) }() want := map[string]bool{ "sentinel_tilt": false, "sentinel_running_scripts": false, "sentinel_scripts_queue_length": false, "sentinel_simulate_failure_flags": false, "sentinel_masters": false, "sentinel_master_status": false, "sentinel_master_slaves": false, "sentinel_master_sentinels": false, } for m := range chM { for k := range want { if strings.Contains(m.Desc().String(), k) { want[k] = true } } } for k, found := range want { if !found { t.Errorf("didn't find %s", k) } } }
type sentinelData struct { k, v string name, status, address string slaves, sentinels float64 ok bool }
func TestParseSentinelMasterString(t *testing.T) { tsts := []sentinelData{ {k: "master0", v:
"name=user03,status=sdown,address=192.169.2.52:6381,slaves=1,sentinels=5", name: "user03", status: "sdown", address: "192.169.2.52:6381", slaves: 1, sentinels: 5, ok: true}, {k: "master1", v: "name=master,status=ok,address=127.0.0.1:6379,slaves=999,sentinels=500", name: "master", status: "ok", address: "127.0.0.1:6379", slaves: 999, sentinels: 500, ok: true}, {k: "master", v: "name=user03", ok: false}, {k: "masterA", v: "status=ko", ok: false}, {k: "master0", v: "slaves=abc,sentinels=0", ok: false}, {k: "master0", v: "slaves=0,sentinels=abc", ok: false}, } for _, tst := range tsts { name := fmt.Sprintf("%s---%s", tst.k, tst.v) t.Run(name, func(t *testing.T) { if masterName, masterStatus, masterAddress, masterSlaves, masterSentinels, ok := parseSentinelMasterString(tst.k, tst.v); true { if ok != tst.ok { t.Errorf("failed for: master:%s data:%s", tst.k, tst.v) return } if masterName != tst.name || masterStatus != tst.status || masterAddress != tst.address || masterSlaves != tst.slaves || masterSentinels != tst.sentinels { t.Errorf("values not matching:\nstring:%s\ngot:%s %s %s %f %f", tst.v, masterName, masterStatus, masterAddress, masterSlaves, masterSentinels) } } }) } } func TestExtractSentinelMetricsForRedis(t *testing.T) { if os.Getenv("TEST_REDIS_URI") == "" { t.Skipf("TEST_REDIS_URI not set - skipping") } addr := os.Getenv("TEST_REDIS_URI") e, _ := NewRedisExporter( addr, Options{Namespace: "test"}, ) c, err := redis.DialURL(addr) if err != nil { t.Fatalf("Couldn't connect to %#v: %#v", addr, err) } defer c.Close() chM := make(chan prometheus.Metric) go func() { e.extractSentinelMetrics(chM, c) close(chM) }() want := map[string]bool{ "sentinel_master_ok_sentinels": false, "sentinel_master_ok_slaves": false, } for m := range chM { for k := range want { if strings.Contains(m.Desc().String(), k) { want[k] = true } } } for k, found := range want { if found { t.Errorf("Found sentinel metric %s for redis instance", k) } } } func TestExtractSentinelMetricsForSentinel(t *testing.T) { if os.Getenv("TEST_REDIS_SENTINEL_URI") == "" { t.Skipf("TEST_REDIS_SENTINEL_URI not set - skipping") } addr := os.Getenv("TEST_REDIS_SENTINEL_URI") e, _ := NewRedisExporter( addr, Options{Namespace: "test"}, ) c, err := redis.DialURL(addr) if err != nil { t.Fatalf("Couldn't connect to %#v: %#v", addr, err) } defer c.Close() infoAll, err := redis.String(doRedisCmd(c, "INFO", "ALL")) if err != nil { t.Logf("Redis INFO ALL err: %s", err) infoAll, err = redis.String(doRedisCmd(c, "INFO")) if err != nil { t.Fatalf("Redis INFO err: %s", err) } } chM := make(chan prometheus.Metric) if strings.Contains(infoAll, "# Sentinel") { go func() { e.extractSentinelMetrics(chM, c) close(chM) }() } else { t.Fatalf("Couldn't find sentinel section in Redis INFO: %s", infoAll) } want := map[string]bool{ "sentinel_master_ok_sentinels": false, "sentinel_master_ok_slaves": false, "sentinel_master_ckquorum_status": false, "sentinel_master_setting_ckquorum": false, "sentinel_master_setting_failover_timeout": false, "sentinel_master_setting_parallel_syncs": false, "sentinel_master_setting_down_after_milliseconds": false, } for m := range chM { for k := range want { if strings.Contains(m.Desc().String(), k) { want[k] = true } } } for k, found := range want { if !found { t.Errorf("didn't find metric %s", k) } } } type sentinelSentinelsData struct { name string sentinelDetails []interface{} labels []string expectedMetricValue map[string]int } func TestProcessSentinelSentinels(t *testing.T) { if os.Getenv("TEST_REDIS_SENTINEL_URI") == "" { 
t.Skipf("TEST_REDIS_SENTINEL_URI not set - skipping") } addr := os.Getenv("TEST_REDIS_SENTINEL_URI") e, _ := NewRedisExporter( addr, Options{Namespace: "test"}, ) oneOkSentinelExpectedMetricValue := map[string]int{ "sentinel_master_ok_sentinels": 1, } twoOkSentinelExpectedMetricValue := map[string]int{ "sentinel_master_ok_sentinels": 2, } tsts := []sentinelSentinelsData{ {"1/1 okay sentinel", []interface{}{[]interface{}{[]byte("")}}, []string{"mymaster", "172.17.0.7:26379"}, oneOkSentinelExpectedMetricValue}, {"1/3 okay sentinel", []interface{}{[]interface{}{[]byte("name"), []byte("284bc2ef46881bd71e81610152cb96031d211d28"), []byte("ip"), []byte("172.17.0.8"), []byte("port"), []byte("26379"), []byte("runid"), []byte("284bc2ef46881bd71e81610152cb96031d211d28"), []byte("flags"), []byte("o_down,s_down,sentinel"), []byte("link-pending-commands"), []byte("38"), []byte("link-refcount"), []byte("1"), []byte("last-ping-sent"), []byte("11828891"), []byte("last-ok-ping-reply"), []byte("11829539"), []byte("last-ping-reply"), []byte("11829539"), []byte("s-down-time"), []byte("11823816"), []byte("down-after-milliseconds"), []byte("5000"), []byte("last-hello-message"), []byte("11829434"), []byte("voted-leader"), []byte("?"), []byte("voted-leader-epoch"), []byte("0")}, []interface{}{[]byte("name"), []byte("c3ab3cdcaeb193bb49b16d4d3da88def984ab3bf"), []byte("ip"), []byte("172.17.0.7"), []byte("port"), []byte("26379"), []byte("runid"), []byte("c3ab3cdcaeb193bb49b16d4d3da88def984ab3bf"), []byte("flags"), []byte("s_down,sentinel"), []byte("link-pending-commands"), []byte("38"), []byte("link-refcount"), []byte("1"), []byte("last-ping-sent"), []byte("11828891"), []byte("last-ok-ping-reply"), []byte("11829539"), []byte("last-ping-reply"), []byte("11829539"), []byte("s-down-time"), []byte("11823815"), []byte("down-after-milliseconds"), []byte("5000"), []byte("last-hello-message"), []byte("11829434"), []byte("voted-leader"), []byte("?"), []byte("voted-leader-epoch"), []byte("0")}}, []string{"mymaster", "172.17.0.7:26379"}, oneOkSentinelExpectedMetricValue}, {"2/3 okay sentinel(string is not byte slice)", []interface{}{[]interface{}{[]byte("name"), []byte("284bc2ef46881bd71e81610152cb96031d211d28"), []byte("ip"), []byte("172.17.0.8"), []byte("port"), []byte("26379"), []byte("runid"), []byte("284bc2ef46881bd71e81610152cb96031d211d28"), []byte("flags"), []byte("sentinel"), []byte("link-pending-commands"), []byte("38"), []byte("link-refcount"), []byte("1"), []byte("last-ping-sent"), []byte("11828891"), []byte("last-ok-ping-reply"), []byte("11829539"), []byte("last-ping-reply"), []byte("11829539"), []byte("s-down-time"), []byte("11823816"), []byte("down-after-milliseconds"), []byte("5000"), []byte("last-hello-message"), []byte("11829434"), []byte("voted-leader"), []byte("?"), []byte("voted-leader-epoch"), []byte("0")}, []interface{}{[]byte("name"), []byte("c3ab3cdcaeb193bb49b16d4d3da88def984ab3bf"), []byte("ip"), []byte("172.17.0.7"), []byte("port"), []byte("26379"), []byte("runid"), []byte("c3ab3cdcaeb193bb49b16d4d3da88def984ab3bf"), []byte("flags"), "sentinel", []byte("link-pending-commands"), []byte("38"), []byte("link-refcount"), []byte("1"), []byte("last-ping-sent"), []byte("11828891"), []byte("last-ok-ping-reply"), []byte("11829539"), []byte("last-ping-reply"), []byte("11829539"), []byte("s-down-time"), []byte("11823815"), []byte("down-after-milliseconds"), []byte("5000"), []byte("last-hello-message"), []byte("11829434"), []byte("voted-leader"), []byte("?"), []byte("voted-leader-epoch"), []byte("0")}}, 
[]string{"mymaster", "172.17.0.7:26379"}, twoOkSentinelExpectedMetricValue}, {"2/3 okay sentinel", []interface{}{[]interface{}{[]byte("name"), []byte("284bc2ef46881bd71e81610152cb96031d211d28"), []byte("ip"), []byte("172.17.0.8"), []byte("port"), []byte("26379"), []byte("runid"), []byte("284bc2ef46881bd71e81610152cb96031d211d28"), []byte("flags"), []byte("sentinel"), []byte("link-pending-commands"), []byte("38"), []byte("link-refcount"), []byte("1"), []byte("last-ping-sent"), []byte("11828891"), []byte("last-ok-ping-reply"), []byte("11829539"), []byte("last-ping-reply"), []byte("11829539"), []byte("s-down-time"), []byte("11823816"), []byte("down-after-milliseconds"), []byte("5000"), []byte("last-hello-message"), []byte("11829434"), []byte("voted-leader"), []byte("?"), []byte("voted-leader-epoch"), []byte("0")}, []interface{}{[]byte("name"), []byte("c3ab3cdcaeb193bb49b16d4d3da88def984ab3bf"), []byte("ip"), []byte("172.17.0.7"), []byte("port"), []byte("26379"), []byte("runid"), []byte("c3ab3cdcaeb193bb49b16d4d3da88def984ab3bf"), []byte("flags"), []byte("s_down,sentinel"), []byte("link-pending-commands"), []byte("38"), []byte("link-refcount"), []byte("1"), []byte("last-ping-sent"), []byte("11828891"), []byte("last-ok-ping-reply"), []byte("11829539"), []byte("last-ping-reply"), []byte("11829539"), []byte("s-down-time"), []byte("11823815"), []byte("down-after-milliseconds"), []byte("5000"), []byte("last-hello-message"), []byte("11829434"), []byte("voted-leader"), []byte("?"), []byte("voted-leader-epoch"), []byte("0")}}, []string{"mymaster", "172.17.0.7:26379"}, twoOkSentinelExpectedMetricValue}, {"2/3 okay sentinel(missing flags)", []interface{}{[]interface{}{[]byte("name"), []byte("284bc2ef46881bd71e81610152cb96031d211d28"), []byte("ip"), []byte("172.17.0.8"), []byte("port"), []byte("26379"), []byte("runid"), []byte("284bc2ef46881bd71e81610152cb96031d211d28"), []byte("flags"), []byte("sentinel"), []byte("link-pending-commands"), []byte("38"), []byte("link-refcount"), []byte("1"), []byte("last-ping-sent"), []byte("11828891"), []byte("last-ok-ping-reply"), []byte("11829539"), []byte("last-ping-reply"), []byte("11829539"), []byte("s-down-time"), []byte("11823816"), []byte("down-after-milliseconds"), []byte("5000"), []byte("last-hello-message"), []byte("11829434"), []byte("voted-leader"), []byte("?"), []byte("voted-leader-epoch"), []byte("0")}, []interface{}{[]byte("name"), []byte("c3ab3cdcaeb193bb49b16d4d3da88def984ab3bf"), []byte("ip"), []byte("172.17.0.7"), []byte("port"), []byte("26379"), []byte("runid"), []byte("c3ab3cdcaeb193bb49b16d4d3da88def984ab3bf"), []byte("link-pending-commands"), []byte("38"), []byte("link-refcount"), []byte("1"), []byte("last-ping-sent"), []byte("11828891"), []byte("last-ok-ping-reply"), []byte("11829539"), []byte("last-ping-reply"), []byte("11829539"), []byte("s-down-time"), []byte("11823815"), []byte("down-after-milliseconds"), []byte("5000"), []byte("last-hello-message"), []byte("11829434"), []byte("voted-leader"), []byte("?"), []byte("voted-leader-epoch"), []byte("0")}}, []string{"mymaster", "172.17.0.7:26379"}, twoOkSentinelExpectedMetricValue}, } for _, tst := range tsts { t.Run(tst.name, func(t *testing.T) { chM := make(chan prometheus.Metric) go func() { e.processSentinelSentinels(chM, tst.sentinelDetails, tst.labels...) 
close(chM) }() want := map[string]bool{ "sentinel_master_ok_sentinels": false, } for m := range chM { for k := range want { if strings.Contains(m.Desc().String(), k) { want[k] = true got := &dto.Metric{} m.Write(got) val := got.GetGauge().GetValue() if int(val) != tst.expectedMetricValue[k] { t.Errorf("Expected metric value %d didn't match to reported value %d for test %s", tst.expectedMetricValue[k], int(val), tst.name) } } } } for k, found := range want { if !found { t.Errorf("didn't find metric %s", k) } } }) } } type sentinelSlavesData struct { name string slaveDetails []interface{} labels []string expectedMetricValue map[string]int } func TestProcessSentinelSlaves(t *testing.T) { if os.Getenv("TEST_REDIS_SENTINEL_URI") == "" { t.Skipf("TEST_REDIS_SENTINEL_URI not set - skipping") } addr := os.Getenv("TEST_REDIS_SENTINEL_URI") e, _ := NewRedisExporter( addr, Options{Namespace: "test"}, ) zeroOkSlaveExpectedMetricValue := map[string]int{ "sentinel_master_ok_slaves": 0, } oneOkSlaveExpectedMetricValue := map[string]int{ "sentinel_master_ok_slaves": 1, } twoOkSlaveExpectedMetricValue := map[string]int{ "sentinel_master_ok_slaves": 2, } tsts := []sentinelSlavesData{ {"0/1 okay slave(string is not byte slice)", []interface{}{[]interface{}{[]string{"name"}, []byte("172.17.0.3:6379"), []byte("ip"), []byte("172.17.0.3"), []byte("port"), []byte("6379"), []byte("runid"), []byte("42ebb784f2bd560903de9fb7d4533263d5db558a"), []byte("flags"), []byte("slave"), []byte("link-pending-commands"), []byte("0"), []byte("link-refcount"), []byte("1"), []byte("last-ping-sent"), []byte("0"), []byte("last-ok-ping-reply"), []byte("490"), []byte("last-ping-reply"), []byte("490"), []byte("down-after-milliseconds"), []byte("5000"), []byte("info-refresh"), []byte("2636"), []byte("role-reported"), []byte("slave"), []byte("role-reported-time"), []byte("48279581"), []byte("master-link-down-time"), []byte("0"), []byte("master-link-status"), []byte("ok"), []byte("master-host"), []byte("172.17.0.2"), []byte("master-port"), []byte("6379"), []byte("slave-priority"), []byte("100"), []byte("slave-repl-offset"), []byte("765829")}}, []string{"mymaster", "172.17.0.7:26379"}, zeroOkSlaveExpectedMetricValue}, {"1/1 okay slave", []interface{}{[]interface{}{[]byte("name"), []byte("172.17.0.3:6379"), []byte("ip"), []byte("172.17.0.3"), []byte("port"), []byte("6379"), []byte("runid"), []byte("42ebb784f2bd560903de9fb7d4533263d5db558a"), []byte("flags"), []byte("slave"), []byte("link-pending-commands"), []byte("0"), []byte("link-refcount"), []byte("1"), []byte("last-ping-sent"), []byte("0"), []byte("last-ok-ping-reply"), []byte("490"), []byte("last-ping-reply"), []byte("490"), []byte("down-after-milliseconds"), []byte("5000"), []byte("info-refresh"), []byte("2636"), []byte("role-reported"), []byte("slave"), []byte("role-reported-time"), []byte("48279581"), []byte("master-link-down-time"), []byte("0"), []byte("master-link-status"), []byte("ok"), []byte("master-host"), []byte("172.17.0.2"), []byte("master-port"), []byte("6379"), []byte("slave-priority"), []byte("100"), []byte("slave-repl-offset"), []byte("765829")}}, []string{"mymaster", "172.17.0.7:26379"}, oneOkSlaveExpectedMetricValue}, {"1/3 okay slave", []interface{}{[]interface{}{[]byte("name"), []byte("172.17.0.6:6379"), []byte("ip"), []byte("172.17.0.6"), []byte("port"), []byte("6379"), []byte("runid"), []byte("254576b435fcd73121a6497d3b03f3a464de9a10"), []byte("flags"), []byte("slave"), []byte("link-pending-commands"), []byte("0"), []byte("last-ok-ping-reply"), []byte("1021"), 
[]byte("last-ping-reply"), []byte("1021"), []byte("down-after-milliseconds"), []byte("5000"), []byte("info-refresh"), []byte("6293"), []byte("role-reported"), []byte("slave"), []byte("role-reported-time"), []byte("36490"), []byte("master-link-down-time"), []byte("0"), []byte("master-link-status"), []byte("ok"), []byte("master-host"), []byte("172.17.0.2"), []byte("master-port"), []byte("6379"), []byte("slave-priority"), []byte("100"), []byte("slave-repl-offset"), []byte("1316759")}, []interface{}{[]byte("name"), []byte("172.17.0.3:6379"), []byte("ip"), []byte("172.17.0.3"), []byte("port"), []byte("6379"), []byte("runid"), []byte("42ebb784f2bd560903de9fb7d4533263d5db558a"), []byte("flags"), []byte("s_down,slave"), []byte("link-pending-commands"), []byte("0"), []byte("link-refcount"), []byte("1"), []byte("last-ping-sent"), []byte("0"), []byte("last-ok-ping-reply"), []byte("655"), []byte("last-ping-reply"), []byte("655"), []byte("down-after-milliseconds"), []byte("5000"), []byte("info-refresh"), []byte("6394"), []byte("role-reported"), []byte("slave"), []byte("role-reported-time"), []byte("56525539"), []byte("master-link-down-time"), []byte("0"), []byte("master-link-status"), []byte("ok"), []byte("master-host"), []byte("172.17.0.2"), []byte("master-port"), []byte("6379"), []byte("slave-priority"), []byte("100"), []byte("slave-repl-offset"), []byte("1316759")}, []interface{}{[]byte("name"), []byte("172.17.0.5:6379"), []byte("ip"), []byte("172.17.0.5"), []byte("port"), []byte("6379"), []byte("runid"), []byte("8f4b14e820fab7b38cad640208803dfb9fa225ca"), []byte("flags"), []byte("o_down,s_down,slave"), []byte("link-pending-commands"), []byte("100"), []byte("link-refcount"), []byte("1"), []byte("last-ping-sent"), []byte("23792"), []byte("last-ok-ping-reply"), []byte("23902"), []byte("last-ping-reply"), []byte("23902"), []byte("s-down-time"), []byte("18785"), []byte("down-after-milliseconds"), []byte("5000"), []byte("info-refresh"), []byte("26352"), []byte("role-reported"), []byte("slave"), []byte("role-reported-time"), []byte("36493"), []byte("master-link-down-time"), []byte("0"), []byte("master-link-status"), []byte("ok"), []byte("master-host"), []byte("redis-master"), []byte("master-port"), []byte("6379"), []byte("slave-priority"), []byte("100"), []byte("slave-repl-offset"), []byte("1315493")}}, []string{"mymaster", "172.17.0.7:26379"}, oneOkSlaveExpectedMetricValue}, {"2/3 okay slave", []interface{}{[]interface{}{[]byte("name"), []byte("172.17.0.6:6379"), []byte("ip"), []byte("172.17.0.6"), []byte("port"), []byte("6379"), []byte("runid"), []byte("254576b435fcd73121a6497d3b03f3a464de9a10"), []byte("flags"), []byte("slave"), []byte("link-pending-commands"), []byte("0"), []byte("last-ok-ping-reply"), []byte("1021"), []byte("last-ping-reply"), []byte("1021"), []byte("down-after-milliseconds"), []byte("5000"), []byte("info-refresh"), []byte("6293"), []byte("role-reported"), []byte("slave"), []byte("role-reported-time"), []byte("36490"), []byte("master-link-down-time"), []byte("0"), []byte("master-link-status"), []byte("ok"), []byte("master-host"), []byte("172.17.0.2"), []byte("master-port"), []byte("6379"), []byte("slave-priority"), []byte("100"), []byte("slave-repl-offset"), []byte("1316759")}, []interface{}{[]byte("name"), []byte("172.17.0.3:6379"), []byte("ip"), []byte("172.17.0.3"), []byte("port"), []byte("6379"), []byte("runid"), []byte("42ebb784f2bd560903de9fb7d4533263d5db558a"), []byte("flags"), []byte("slave"), []byte("link-pending-commands"), []byte("0"), []byte("link-refcount"), []byte("1"), 
[]byte("last-ping-sent"), []byte("0"), []byte("last-ok-ping-reply"), []byte("655"), []byte("last-ping-reply"), []byte("655"), []byte("down-after-milliseconds"), []byte("5000"), []byte("info-refresh"), []byte("6394"), []byte("role-reported"), []byte("slave"), []byte("role-reported-time"), []byte("56525539"), []byte("master-link-down-time"), []byte("0"), []byte("master-link-status"), []byte("ok"), []byte("master-host"), []byte("172.17.0.2"), []byte("master-port"), []byte("6379"), []byte("slave-priority"), []byte("100"), []byte("slave-repl-offset"), []byte("1316759")}, []interface{}{[]byte("name"), []byte("172.17.0.5:6379"), []byte("ip"), []byte("172.17.0.5"), []byte("port"), []byte("6379"), []byte("runid"), []byte("8f4b14e820fab7b38cad640208803dfb9fa225ca"), []byte("flags"), []byte("s_down,slave"), []byte("link-pending-commands"), []byte("100"), []byte("link-refcount"), []byte("1"), []byte("last-ping-sent"), []byte("23792"), []byte("last-ok-ping-reply"), []byte("23902"), []byte("last-ping-reply"), []byte("23902"), []byte("s-down-time"), []byte("18785"), []byte("down-after-milliseconds"), []byte("5000"), []byte("info-refresh"), []byte("26352"), []byte("role-reported"), []byte("slave"), []byte("role-reported-time"), []byte("36493"), []byte("master-link-down-time"), []byte("0"), []byte("master-link-status"), []byte("ok"), []byte("master-host"), []byte("redis-master"), []byte("master-port"), []byte("6379"), []byte("slave-priority"), []byte("100"), []byte("slave-repl-offset"), []byte("1315493")}}, []string{"mymaster", "172.17.0.7:26379"}, twoOkSlaveExpectedMetricValue}, {"2/3 okay slave(missing flags)", []interface{}{[]interface{}{[]byte("name"), []byte("172.17.0.6:6379"), []byte("ip"), []byte("172.17.0.6"), []byte("port"), []byte("6379"), []byte("runid"), []byte("254576b435fcd73121a6497d3b03f3a464de9a10"), []byte("flags"), []byte("slave"), []byte("link-pending-commands"), []byte("0"), []byte("last-ok-ping-reply"), []byte("1021"), []byte("last-ping-reply"), []byte("1021"), []byte("down-after-milliseconds"), []byte("5000"), []byte("info-refresh"), []byte("6293"), []byte("role-reported"), []byte("slave"), []byte("role-reported-time"), []byte("36490"), []byte("master-link-down-time"), []byte("0"), []byte("master-link-status"), []byte("ok"), []byte("master-host"), []byte("172.17.0.2"), []byte("master-port"), []byte("6379"), []byte("slave-priority"), []byte("100"), []byte("slave-repl-offset"), []byte("1316759")}, []interface{}{[]byte("name"), []byte("172.17.0.3:6379"), []byte("ip"), []byte("172.17.0.3"), []byte("port"), []byte("6379"), []byte("runid"), []byte("42ebb784f2bd560903de9fb7d4533263d5db558a"), []byte("flags"), []byte("slave"), []byte("link-pending-commands"), []byte("0"), []byte("link-refcount"), []byte("1"), []byte("last-ping-sent"), []byte("0"), []byte("last-ok-ping-reply"), []byte("655"), []byte("last-ping-reply"), []byte("655"), []byte("down-after-milliseconds"), []byte("5000"), []byte("info-refresh"), []byte("6394"), []byte("role-reported"), []byte("slave"), []byte("role-reported-time"), []byte("56525539"), []byte("master-link-down-time"), []byte("0"), []byte("master-link-status"), []byte("ok"), []byte("master-host"), []byte("172.17.0.2"), []byte("master-port"), []byte("6379"), []byte("slave-priority"), []byte("100"), []byte("slave-repl-offset"), []byte("1316759")}, []interface{}{[]byte("name"), []byte("172.17.0.5:6379"), []byte("ip"), []byte("172.17.0.5"), []byte("port"), []byte("6379"), []byte("runid"), []byte("8f4b14e820fab7b38cad640208803dfb9fa225ca"), 
[]byte("link-pending-commands"), []byte("100"), []byte("link-refcount"), []byte("1"), []byte("last-ping-sent"), []byte("23792"), []byte("last-ok-ping-reply"), []byte("23902"), []byte("last-ping-reply"), []byte("23902"), []byte("s-down-time"), []byte("18785"), []byte("down-after-milliseconds"), []byte("5000"), []byte("info-refresh"), []byte("26352"), []byte("role-reported"), []byte("slave"), []byte("role-reported-time"), []byte("36493"), []byte("master-link-down-time"), []byte("0"), []byte("master-link-status"), []byte("ok"), []byte("master-host"), []byte("redis-master"), []byte("master-port"), []byte("6379"), []byte("slave-priority"), []byte("100"), []byte("slave-repl-offset"), []byte("1315493")}}, []string{"mymaster", "172.17.0.7:26379"}, twoOkSlaveExpectedMetricValue}, } for _, tst := range tsts { t.Run(tst.name, func(t *testing.T) { chM := make(chan prometheus.Metric) go func() { e.processSentinelSlaves(chM, tst.slaveDetails, tst.labels...) close(chM) }() want := map[string]bool{ "sentinel_master_ok_slaves": false, } for m := range chM { for k := range want { if strings.Contains(m.Desc().String(), k) { want[k] = true got := &dto.Metric{} m.Write(got) val := got.GetGauge().GetValue() if int(val) != tst.expectedMetricValue[k] { t.Errorf("Expected metric value %d didn't match to reported value %d for test %s", tst.expectedMetricValue[k], int(val), tst.name) } } } } for k, found := range want { if !found { t.Errorf("didn't find metric %s", k) } } }) } } ����������������������������������������������������������������������������������������������������������������������������redis_exporter-1.69.0/exporter/slowlog.go�����������������������������������������������������������0000664�0000000�0000000�00000001667�14765200314�0020546�0����������������������������������������������������������������������������������������������������ustar�00root����������������������������root����������������������������0000000�0000000������������������������������������������������������������������������������������������������������������������������������������������������������������������������package exporter import ( "github.com/gomodule/redigo/redis" "github.com/prometheus/client_golang/prometheus" ) func (e *Exporter) extractSlowLogMetrics(ch chan<- prometheus.Metric, c redis.Conn) { if reply, err := redis.Int64(doRedisCmd(c, "SLOWLOG", "LEN")); err == nil { e.registerConstMetricGauge(ch, "slowlog_length", float64(reply)) } values, err := redis.Values(doRedisCmd(c, "SLOWLOG", "GET", "1")) if err != nil { return } var slowlogLastID int64 var lastSlowExecutionDurationSeconds float64 if len(values) > 0 { if values, err = redis.Values(values[0], err); err == nil && len(values) > 0 { slowlogLastID = values[0].(int64) if len(values) > 2 { lastSlowExecutionDurationSeconds = float64(values[2].(int64)) / 1e6 } } } e.registerConstMetricGauge(ch, "slowlog_last_id", float64(slowlogLastID)) e.registerConstMetricGauge(ch, "last_slow_execution_duration_seconds", lastSlowExecutionDurationSeconds) } 
---- redis_exporter-1.69.0/exporter/slowlog_test.go ----

package exporter

import (
	"os"
	"strings"
	"testing"
	"time"

	"github.com/gomodule/redigo/redis"
	"github.com/prometheus/client_golang/prometheus"
	dto "github.com/prometheus/client_model/go"
)

func TestSlowLog(t *testing.T) {
	e := getTestExporter()

	chM := make(chan prometheus.Metric)
	go func() {
		e.Collect(chM)
		close(chM)
	}()

	oldSlowLogID := float64(0)
	for m := range chM {
		switch m := m.(type) {
		case prometheus.Gauge:
			if strings.Contains(m.Desc().String(), "slowlog_last_id") {
				got := &dto.Metric{}
				m.Write(got)
				oldSlowLogID = got.GetGauge().GetValue()
			}
		}
	}

	setupSlowLog(t, os.Getenv("TEST_REDIS_URI"))
	defer resetSlowLog(t, os.Getenv("TEST_REDIS_URI"))

	chM = make(chan prometheus.Metric)
	go func() {
		e.Collect(chM)
		close(chM)
	}()

	for m := range chM {
		switch m := m.(type) {
		case prometheus.Gauge:
			if strings.Contains(m.Desc().String(), "slowlog_last_id") {
				got := &dto.Metric{}
				m.Write(got)
				val := got.GetGauge().GetValue()
				if oldSlowLogID > val {
					t.Errorf("no new slowlogs found")
				}
			}
			if strings.Contains(m.Desc().String(), "slowlog_length") {
				got := &dto.Metric{}
				m.Write(got)
				val := got.GetGauge().GetValue()
				if val == 0 {
					t.Errorf("slowlog length is zero")
				}
			}
		}
	}

	resetSlowLog(t, os.Getenv("TEST_REDIS_URI"))

	chM = make(chan prometheus.Metric)
	go func() {
		e.Collect(chM)
		close(chM)
	}()

	for m := range chM {
		switch m := m.(type) {
		case prometheus.Gauge:
			if strings.Contains(m.Desc().String(), "slowlog_length") {
				got := &dto.Metric{}
				m.Write(got)
				val := got.GetGauge().GetValue()
				if val != 0 {
					t.Errorf("Slowlog was not reset")
				}
			}
		}
	}
}

func setupSlowLog(t *testing.T, addr string) error {
	c, err := redis.DialURL(addr)
	if err != nil {
		t.Errorf("couldn't setup redis, err: %s ", err)
		return err
	}
	defer c.Close()

	_, err = c.Do("CONFIG", "SET", "SLOWLOG-LOG-SLOWER-THAN", 10000)
	if err != nil {
		t.Errorf("couldn't setup redis, err: %s ", err)
		return err
	}

	// Have to pass in the sleep time in seconds so we have to divide
	// the number of milliseconds by 1000 to get number of seconds
	_, err = c.Do("DEBUG", "SLEEP", latencyTestTimeToSleepInMillis/1000.0)
	if err != nil {
		t.Errorf("couldn't setup redis, err: %s ", err)
		return err
	}

	time.Sleep(time.Millisecond * 50)
	return nil
}

func resetSlowLog(t *testing.T, addr string) error {
	c, err := redis.DialURL(addr)
	if err != nil {
		t.Errorf("couldn't setup redis, err: %s ", err)
		return err
	}
	defer c.Close()

	_, err = c.Do("SLOWLOG", "RESET")
	if err != nil {
		t.Errorf("couldn't setup redis, err: %s ", err)
		return err
	}

	time.Sleep(time.Millisecond * 50)
	return nil
}
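The setup above relies on DEBUG SLEEP crossing the configured 10000-microsecond threshold. When experimenting locally, a simpler way to force entries deterministically (a sketch, not how this test suite does it) is to tell Redis to log every command:

// forceSlowlogEntry is an illustrative helper, not part of the exporter:
// with slowlog-log-slower-than set to 0, Redis records every command,
// so any follow-up command creates a slowlog entry. Assumes a disposable
// local instance and a redigo redis.Conn; don't do this in production.
func forceSlowlogEntry(c redis.Conn) error {
	if _, err := c.Do("CONFIG", "SET", "SLOWLOG-LOG-SLOWER-THAN", 0); err != nil {
		return err
	}
	_, err := c.Do("PING") // even PING is now recorded in SLOWLOG
	return err
}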
---- redis_exporter-1.69.0/exporter/streams.go ----

package exporter

import (
	"strconv"
	"strings"

	"github.com/gomodule/redigo/redis"
	"github.com/prometheus/client_golang/prometheus"
	log "github.com/sirupsen/logrus"
)

// All fields of the streamInfo struct must be exported
// because of redis.ScanStruct (reflect) limitations
type streamInfo struct {
	Length            int64  `redis:"length"`
	RadixTreeKeys     int64  `redis:"radix-tree-keys"`
	RadixTreeNodes    int64  `redis:"radix-tree-nodes"`
	LastGeneratedId   string `redis:"last-generated-id"`
	Groups            int64  `redis:"groups"`
	MaxDeletedEntryId string `redis:"max-deleted-entry-id"`
	FirstEntryId      string
	LastEntryId       string
	StreamGroupsInfo  []streamGroupsInfo
}

type streamGroupsInfo struct {
	Name                     string `redis:"name"`
	Consumers                int64  `redis:"consumers"`
	Pending                  int64  `redis:"pending"`
	LastDeliveredId          string `redis:"last-delivered-id"`
	EntriesRead              int64  `redis:"entries-read"`
	Lag                      int64  `redis:"lag"`
	StreamGroupConsumersInfo []streamGroupConsumersInfo
}

type streamGroupConsumersInfo struct {
	Name    string `redis:"name"`
	Pending int64  `redis:"pending"`
	Idle    int64  `redis:"idle"`
}

func getStreamInfo(c redis.Conn, key string) (*streamInfo, error) {
	values, err := redis.Values(doRedisCmd(c, "XINFO", "STREAM", key))
	if err != nil {
		return nil, err
	}

	// Scan slice to struct
	var stream streamInfo
	if err := redis.ScanStruct(values, &stream); err != nil {
		return nil, err
	}

	// Extract first and last id from slice
	for idx, v := range values {
		vbytes, ok := v.([]byte)
		if !ok {
			continue
		}
		if string(vbytes) == "first-entry" {
			stream.FirstEntryId = getStreamEntryId(values, idx+1)
		}
		if string(vbytes) == "last-entry" {
			stream.LastEntryId = getStreamEntryId(values, idx+1)
		}
	}

	stream.StreamGroupsInfo, err = scanStreamGroups(c, key)
	if err != nil {
		return nil, err
	}

	log.Debugf("getStreamInfo() stream: %#v", &stream)
	return &stream, nil
}

func getStreamEntryId(redisValue []interface{}, index int) string {
	if values, ok := redisValue[index].([]interface{}); !ok || len(values) < 2 {
		log.Debugf("Failed to parse StreamEntryId")
		return ""
	}

	if len(redisValue) < index || redisValue[index] == nil {
		log.Debugf("Failed to parse StreamEntryId")
		return ""
	}

	entryId, ok := redisValue[index].([]interface{})[0].([]byte)
	if !ok {
		log.Debugf("Failed to parse StreamEntryId")
		return ""
	}
	return string(entryId)
}

func scanStreamGroups(c redis.Conn, stream string) ([]streamGroupsInfo, error) {
	groups, err := redis.Values(doRedisCmd(c, "XINFO", "GROUPS", stream))
	if err != nil {
		return nil, err
	}

	var result []streamGroupsInfo
	for _, g := range groups {
		v, err := redis.Values(g, nil)
		if err != nil {
			log.Errorf("Couldn't convert group values for stream '%s': %s", stream, err)
			continue
		}
		log.Debugf("streamGroupsInfo value: %#v", v)

		var group streamGroupsInfo
		if err := redis.ScanStruct(v, &group); err != nil {
			log.Errorf("Couldn't scan group in stream '%s': %s", stream, err)
			continue
		}

		group.StreamGroupConsumersInfo, err = scanStreamGroupConsumers(c, stream, group.Name)
		if err != nil {
			return nil, err
		}
		result = append(result, group)
	}

	log.Debugf("groups: %v", result)
	return result, nil
}

func scanStreamGroupConsumers(c redis.Conn, stream string, group string) ([]streamGroupConsumersInfo, error) {
	consumers, err := redis.Values(doRedisCmd(c, "XINFO", "CONSUMERS", stream, group))
	if err != nil {
		return nil, err
	}

	var result []streamGroupConsumersInfo
	for _, c := range consumers {
		v, err := redis.Values(c, nil)
		if err != nil {
			log.Errorf("Couldn't convert consumer values for group '%s' in stream '%s': %s", group, stream, err)
			continue
		}
		log.Debugf("streamGroupConsumersInfo value: %#v", v)

		var consumer streamGroupConsumersInfo
		if err := redis.ScanStruct(v, &consumer); err != nil {
			log.Errorf("Couldn't scan consumers for group '%s' in stream '%s': %s", group, stream, err)
			continue
		}
		result = append(result, consumer)
	}

	log.Debugf("consumers: %v", result)
	return result, nil
}

func parseStreamItemId(id string) float64 {
	if strings.TrimSpace(id) == "" {
		return 0
	}
	frags := strings.Split(id, "-")
	if len(frags) == 0 {
		log.Errorf("Couldn't parse StreamItemId: %s", id)
		return 0
	}
	parsedId, err := strconv.ParseFloat(strings.Split(id, "-")[0], 64)
	if err != nil {
		log.Errorf("Couldn't parse given StreamItemId: [%s] err: %s", id, err)
	}
	return parsedId
}

func (e *Exporter) extractStreamMetrics(ch chan<- prometheus.Metric, c redis.Conn) {
	streams, err := parseKeyArg(e.options.CheckStreams)
	if err != nil {
		log.Errorf("Couldn't parse given stream keys: %s", err)
		return
	}

	singleStreams, err := parseKeyArg(e.options.CheckSingleStreams)
	if err != nil {
		log.Errorf("Couldn't parse check-single-streams: %s", err)
		return
	}

	allStreams := append([]dbKeyPair{}, singleStreams...)
	scannedStreams, err := getKeysFromPatterns(c, streams, e.options.CheckKeysBatchSize)
	if err != nil {
		log.Errorf("Error expanding key patterns: %s", err)
	} else {
		allStreams = append(allStreams, scannedStreams...)
	}
	log.Debugf("allStreams: %#v", allStreams)

	for _, k := range allStreams {
		if _, err := doRedisCmd(c, "SELECT", k.db); err != nil {
			log.Debugf("Couldn't select database '%s' when getting stream info", k.db)
			continue
		}

		info, err := getStreamInfo(c, k.key)
		if err != nil {
			log.Errorf("couldn't get info for stream '%s', err: %s", k.key, err)
			continue
		}
		dbLabel := "db" + k.db

		e.registerConstMetricGauge(ch, "stream_length", float64(info.Length), dbLabel, k.key)
		e.registerConstMetricGauge(ch, "stream_radix_tree_keys", float64(info.RadixTreeKeys), dbLabel, k.key)
		e.registerConstMetricGauge(ch, "stream_radix_tree_nodes", float64(info.RadixTreeNodes), dbLabel, k.key)
		e.registerConstMetricGauge(ch, "stream_last_generated_id", parseStreamItemId(info.LastGeneratedId), dbLabel, k.key)
		e.registerConstMetricGauge(ch, "stream_groups", float64(info.Groups), dbLabel, k.key)
		e.registerConstMetricGauge(ch, "stream_max_deleted_entry_id", parseStreamItemId(info.MaxDeletedEntryId), dbLabel, k.key)
		e.registerConstMetricGauge(ch, "stream_first_entry_id", parseStreamItemId(info.FirstEntryId), dbLabel, k.key)
		e.registerConstMetricGauge(ch, "stream_last_entry_id", parseStreamItemId(info.LastEntryId), dbLabel, k.key)

		for _, g := range info.StreamGroupsInfo {
			e.registerConstMetricGauge(ch, "stream_group_consumers", float64(g.Consumers), dbLabel, k.key, g.Name)
			e.registerConstMetricGauge(ch, "stream_group_messages_pending", float64(g.Pending), dbLabel, k.key, g.Name)
			e.registerConstMetricGauge(ch, "stream_group_last_delivered_id", parseStreamItemId(g.LastDeliveredId), dbLabel, k.key, g.Name)
			e.registerConstMetricGauge(ch, "stream_group_entries_read", float64(g.EntriesRead), dbLabel, k.key, g.Name)
			e.registerConstMetricGauge(ch, "stream_group_lag", float64(g.Lag), dbLabel, k.key, g.Name)

			if !e.options.StreamsExcludeConsumerMetrics {
				for _, c := range g.StreamGroupConsumersInfo {
					e.registerConstMetricGauge(ch, "stream_group_consumer_messages_pending", float64(c.Pending), dbLabel, k.key, g.Name, c.Name)
					e.registerConstMetricGauge(ch, "stream_group_consumer_idle_seconds", float64(c.Idle)/1e3, dbLabel, k.key, g.Name, c.Name)
				}
			}
		}
	}
}
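Stream entry IDs have the form "<milliseconds>-<sequence>"; parseStreamItemId keeps only the millisecond component so an ID can be exposed as a single gauge value, and returns 0 for empty IDs. A small illustration (a sketch assuming it runs inside this package with fmt imported; the values are made up):

// Illustrative only: how parseStreamItemId flattens a stream ID.
func exampleParseStreamItemId() {
	fmt.Println(parseStreamItemId("1638006862416-0")) // 1.638006862416e+12 - the "-0" sequence part is dropped
	fmt.Println(parseStreamItemId(""))                // 0
}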
} log.Debugf("allStreams: %#v", allStreams) for _, k := range allStreams { if _, err := doRedisCmd(c, "SELECT", k.db); err != nil { log.Debugf("Couldn't select database '%s' when getting stream info", k.db) continue } info, err := getStreamInfo(c, k.key) if err != nil { log.Errorf("couldn't get info for stream '%s', err: %s", k.key, err) continue } dbLabel := "db" + k.db e.registerConstMetricGauge(ch, "stream_length", float64(info.Length), dbLabel, k.key) e.registerConstMetricGauge(ch, "stream_radix_tree_keys", float64(info.RadixTreeKeys), dbLabel, k.key) e.registerConstMetricGauge(ch, "stream_radix_tree_nodes", float64(info.RadixTreeNodes), dbLabel, k.key) e.registerConstMetricGauge(ch, "stream_last_generated_id", parseStreamItemId(info.LastGeneratedId), dbLabel, k.key) e.registerConstMetricGauge(ch, "stream_groups", float64(info.Groups), dbLabel, k.key) e.registerConstMetricGauge(ch, "stream_max_deleted_entry_id", parseStreamItemId(info.MaxDeletedEntryId), dbLabel, k.key) e.registerConstMetricGauge(ch, "stream_first_entry_id", parseStreamItemId(info.FirstEntryId), dbLabel, k.key) e.registerConstMetricGauge(ch, "stream_last_entry_id", parseStreamItemId(info.LastEntryId), dbLabel, k.key) for _, g := range info.StreamGroupsInfo { e.registerConstMetricGauge(ch, "stream_group_consumers", float64(g.Consumers), dbLabel, k.key, g.Name) e.registerConstMetricGauge(ch, "stream_group_messages_pending", float64(g.Pending), dbLabel, k.key, g.Name) e.registerConstMetricGauge(ch, "stream_group_last_delivered_id", parseStreamItemId(g.LastDeliveredId), dbLabel, k.key, g.Name) e.registerConstMetricGauge(ch, "stream_group_entries_read", float64(g.EntriesRead), dbLabel, k.key, g.Name) e.registerConstMetricGauge(ch, "stream_group_lag", float64(g.Lag), dbLabel, k.key, g.Name) if !e.options.StreamsExcludeConsumerMetrics { for _, c := range g.StreamGroupConsumersInfo { e.registerConstMetricGauge(ch, "stream_group_consumer_messages_pending", float64(c.Pending), dbLabel, k.key, g.Name, c.Name) e.registerConstMetricGauge(ch, "stream_group_consumer_idle_seconds", float64(c.Idle)/1e3, dbLabel, k.key, g.Name, c.Name) } } } } } �������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������redis_exporter-1.69.0/exporter/streams_test.go������������������������������������������������������0000664�0000000�0000000�00000046502�14765200314�0021572�0����������������������������������������������������������������������������������������������������ustar�00root����������������������������root����������������������������0000000�0000000������������������������������������������������������������������������������������������������������������������������������������������������������������������������package exporter import ( "os" "strings" "testing" "github.com/gomodule/redigo/redis" "github.com/prometheus/client_golang/prometheus" log "github.com/sirupsen/logrus" ) type scanStreamFixture struct { name string stream string streamInfo streamInfo groups []streamGroupsInfo consumers []streamGroupConsumersInfo } var ( TestStreamTimestamps = []string{ "1638006862416-0", "1638006862417-2", } ) func isNotTestTimestamp(returned string) bool { for _, expected := range TestStreamTimestamps { if parseStreamItemId(expected) == parseStreamItemId(returned) { return false } } return true } func TestStreamsGetStreamInfo(t *testing.T) { if os.Getenv("TEST_REDIS6_URI") == "" { t.Skipf("TEST_REDIS6_URI not set - skipping") 
} addr := os.Getenv("TEST_REDIS6_URI") c, err := redis.DialURL(addr) if err != nil { t.Fatalf("Couldn't connect to %#v: %#v", addr, err) } defer c.Close() setupTestKeys(t, addr) defer deleteTestKeys(t, addr) if _, err = c.Do("SELECT", dbNumStr); err != nil { t.Errorf("Couldn't select database %#v", dbNumStr) } tsts := []scanStreamFixture{ { name: "Stream test", stream: TestKeysStreamName, streamInfo: streamInfo{ Length: 2, RadixTreeKeys: 1, RadixTreeNodes: 2, Groups: 2, }, }, } for _, tst := range tsts { t.Run(tst.name, func(t *testing.T) { info, err := getStreamInfo(c, tst.stream) if err != nil { t.Fatalf("Error getting stream info for %#v: %s", tst.stream, err) } if info.Length != tst.streamInfo.Length { t.Errorf("Stream length mismatch.\nActual: %#v \nExpected: %#v", info.Length, tst.streamInfo.Length) } if info.RadixTreeKeys != tst.streamInfo.RadixTreeKeys { t.Errorf("Stream RadixTreeKeys mismatch.\nActual: %#v \nExpected: %#v", info.RadixTreeKeys, tst.streamInfo.RadixTreeKeys) } if info.RadixTreeNodes != tst.streamInfo.RadixTreeNodes { t.Errorf("Stream RadixTreeNodes mismatch.\nActual: %#v \nExpected: %#v", info.RadixTreeNodes, tst.streamInfo.RadixTreeNodes) } if info.Groups != tst.streamInfo.Groups { t.Errorf("Stream Groups mismatch.\nActual: %#v \nExpected: %#v", info.Groups, tst.streamInfo.Groups) } if isNotTestTimestamp(info.LastGeneratedId) { t.Errorf("Stream LastGeneratedId mismatch.\nActual: %#v \nExpected any of: %#v", info.LastGeneratedId, TestStreamTimestamps) } if info.FirstEntryId != TestStreamTimestamps[0] { t.Errorf("Stream FirstEntryId mismatch.\nActual: %#v \nExpected any of: %#v", info.FirstEntryId, TestStreamTimestamps) } if info.LastEntryId != TestStreamTimestamps[len(TestStreamTimestamps)-1] { t.Errorf("Stream LastEntryId mismatch.\nActual: %#v \nExpected any of: %#v", info.LastEntryId, TestStreamTimestamps) } if info.MaxDeletedEntryId != "" { t.Errorf("Stream MaxDeletedEntryId mismatch.\nActual: %#v \nExpected: %#v", info.MaxDeletedEntryId, "0-0") } }) } } func TestStreamsGetStreamInfoUsingValKey7(t *testing.T) { if os.Getenv("TEST_VALKEY7_URI") == "" { t.Skipf("TEST_VALKEY7_URI not set - skipping") } addr := strings.Replace(os.Getenv("TEST_VALKEY7_URI"), "valkey://", "redis://", 1) c, err := redis.DialURL(addr) if err != nil { t.Fatalf("Couldn't connect to %#v: %#v", addr, err) } defer c.Close() setupTestKeys(t, addr) defer deleteTestKeys(t, addr) if _, err = c.Do("SELECT", dbNumStr); err != nil { t.Errorf("Couldn't select database %#v", dbNumStr) } tsts := []scanStreamFixture{ { name: "Stream test", stream: TestKeysStreamName, streamInfo: streamInfo{ Length: 2, RadixTreeKeys: 1, RadixTreeNodes: 2, Groups: 2, }, }, } for _, tst := range tsts { t.Run(tst.name, func(t *testing.T) { info, err := getStreamInfo(c, tst.stream) if err != nil { t.Fatalf("Error getting stream info for %#v: %s", tst.stream, err) } if info.Length != tst.streamInfo.Length { t.Errorf("Stream length mismatch.\nActual: %#v \nExpected: %#v", info.Length, tst.streamInfo.Length) } if info.RadixTreeKeys != tst.streamInfo.RadixTreeKeys { t.Errorf("Stream RadixTreeKeys mismatch.\nActual: %#v \nExpected: %#v", info.RadixTreeKeys, tst.streamInfo.RadixTreeKeys) } if info.RadixTreeNodes != tst.streamInfo.RadixTreeNodes { t.Errorf("Stream RadixTreeNodes mismatch.\nActual: %#v \nExpected: %#v", info.RadixTreeNodes, tst.streamInfo.RadixTreeNodes) } if info.Groups != tst.streamInfo.Groups { t.Errorf("Stream Groups mismatch.\nActual: %#v \nExpected: %#v", info.Groups, tst.streamInfo.Groups) } if 
isNotTestTimestamp(info.LastGeneratedId) { t.Errorf("Stream LastGeneratedId mismatch.\nActual: %#v \nExpected any of: %#v", info.LastGeneratedId, TestStreamTimestamps) } if info.FirstEntryId != TestStreamTimestamps[0] { t.Errorf("Stream FirstEntryId mismatch.\nActual: %#v \nExpected any of: %#v", info.FirstEntryId, TestStreamTimestamps) } if info.LastEntryId != TestStreamTimestamps[len(TestStreamTimestamps)-1] { t.Errorf("Stream LastEntryId mismatch.\nActual: %#v \nExpected any of: %#v", info.LastEntryId, TestStreamTimestamps) } if info.MaxDeletedEntryId != "0-0" { t.Errorf("Stream MaxDeletedEntryId mismatch.\nActual: %#v \nExpected: %#v", info.MaxDeletedEntryId, "0-0") } }) } } func TestStreamsScanStreamGroups123(t *testing.T) { if os.Getenv("TEST_REDIS6_URI") == "" { t.Skipf("TEST_REDIS6_URI not set - skipping") } addr := os.Getenv("TEST_REDIS6_URI") c, err := redis.DialURL(addr) if err != nil { t.Fatalf("Couldn't connect to %#v: %#v", addr, err) } if _, err = c.Do("SELECT", dbNumStr); err != nil { t.Errorf("Couldn't select database %#v", dbNumStr) } fixtures := []keyFixture{ {"XADD", "test_stream_1", []interface{}{"1638006862521-0", "field_1", "str_1"}}, {"XADD", "test_stream_2", []interface{}{"1638006862522-0", "field_pattern_1", "str_pattern_1"}}, } // Create test streams c.Do("XGROUP", "CREATE", "test_stream_1", "test_group_1", "$", "MKSTREAM") c.Do("XGROUP", "CREATE", "test_stream_2", "test_group_1", "$", "MKSTREAM") c.Do("XGROUP", "CREATE", "test_stream_2", "test_group_2", "$") // Add simple values defer func() { deleteKeyFixtures(t, c, fixtures) c.Close() }() createKeyFixtures(t, c, fixtures) // Process messages to assign Consumers to their groups c.Do("XREADGROUP", "GROUP", "test_group_1", "test_consumer_1", "COUNT", "1", "STREAMS", "test_stream_1", ">") c.Do("XREADGROUP", "GROUP", "test_group_1", "test_consumer_1", "COUNT", "1", "STREAMS", "test_stream_2", ">") c.Do("XREADGROUP", "GROUP", "test_group_1", "test_consumer_2", "COUNT", "1", "STREAMS", "test_stream_2", "0") tsts := []scanStreamFixture{ { name: "Single group test", stream: "test_stream_1", groups: []streamGroupsInfo{ { Name: "test_group_1", Consumers: 1, Pending: 1, EntriesRead: 0, Lag: 0, LastDeliveredId: "1638006862521-0", StreamGroupConsumersInfo: []streamGroupConsumersInfo{ { Name: "test_consumer_1", Pending: 1, }, }, }, }}, { name: "Multiple groups test", stream: "test_stream_2", groups: []streamGroupsInfo{ { Name: "test_group_1", Consumers: 2, Pending: 1, Lag: 0, EntriesRead: 0, LastDeliveredId: "1638006862522-0", }, { Name: "test_group_2", Consumers: 0, Pending: 0, Lag: 0, }, }}, } for _, tst := range tsts { t.Run(tst.name, func(t *testing.T) { scannedGroup, err := scanStreamGroups(c, tst.stream) t.Logf("scanStreamGroups() err: %s", err) if err != nil { t.Fatalf("Err: %s", err) return } if len(scannedGroup) == len(tst.groups) { for i := range scannedGroup { if scannedGroup[i].Name != tst.groups[i].Name { t.Errorf("%d) Group name mismatch.\nExpected: %#v \nActual: %#v", i, tst.groups[i].Name, scannedGroup[i].Name) } if scannedGroup[i].Consumers != tst.groups[i].Consumers { t.Errorf("%d) Consumers count mismatch.\nExpected: %#v \nActual: %#v", i, tst.groups[i].Consumers, scannedGroup[i].Consumers) } if scannedGroup[i].Pending != tst.groups[i].Pending { t.Errorf("%d) Pending items mismatch.\nExpected: %#v \nActual: %#v", i, tst.groups[i].Pending, scannedGroup[i].Pending) } if parseStreamItemId(scannedGroup[i].LastDeliveredId) != parseStreamItemId(tst.groups[i].LastDeliveredId) { t.Errorf("%d) LastDeliveredId items 
mismatch.\nExpected: %#v \nActual: %#v", i, tst.groups[i].LastDeliveredId, scannedGroup[i].LastDeliveredId) } if scannedGroup[i].Lag != tst.groups[i].Lag { t.Errorf("%d) Lag mismatch.\nExpected: %#v \nActual: %#v", i, tst.groups[i].Lag, scannedGroup[i].Lag) } if scannedGroup[i].EntriesRead != tst.groups[i].EntriesRead { t.Errorf("%d) EntriesRead mismatch.\nExpected: %#v \nActual: %#v", i, tst.groups[i].EntriesRead, scannedGroup[i].EntriesRead) } } } else { t.Errorf("Consumers entries mismatch.\nExpected: %d\nActual: %d", len(tst.consumers), len(scannedGroup)) } }) } } func TestStreamsScanStreamGroupsUsingValKey7(t *testing.T) { if os.Getenv("TEST_VALKEY7_URI") == "" { t.Skipf("TEST_VALKEY7_URI not set - skipping") } addr := strings.Replace(os.Getenv("TEST_VALKEY7_URI"), "valkey://", "redis://", 1) db := dbNumStr c, err := redis.DialURL(addr) if err != nil { t.Fatalf("Couldn't connect to %#v: %#v", addr, err) } if _, err = c.Do("SELECT", db); err != nil { t.Errorf("Couldn't select database %#v", db) } fixtures := []keyFixture{ {"XADD", "test_stream_1", []interface{}{"1638006862521-0", "field_1", "str_1"}}, {"XADD", "test_stream_2", []interface{}{"1638006862522-0", "field_pattern_1", "str_pattern_1"}}, } // Create test streams c.Do("XGROUP", "CREATE", "test_stream_1", "test_group_1", "$", "MKSTREAM") c.Do("XGROUP", "CREATE", "test_stream_2", "test_group_1", "$", "MKSTREAM") c.Do("XGROUP", "CREATE", "test_stream_2", "test_group_2", "$") // Add simple values defer func() { deleteKeyFixtures(t, c, fixtures) c.Close() }() createKeyFixtures(t, c, fixtures) // Process messages to assign Consumers to their groups c.Do("XREADGROUP", "GROUP", "test_group_1", "test_consumer_1", "COUNT", "1", "STREAMS", "test_stream_1", ">") c.Do("XREADGROUP", "GROUP", "test_group_1", "test_consumer_1", "COUNT", "1", "STREAMS", "test_stream_2", ">") c.Do("XREADGROUP", "GROUP", "test_group_1", "test_consumer_2", "COUNT", "1", "STREAMS", "test_stream_2", "0") tsts := []scanStreamFixture{ { name: "Single group test", stream: "test_stream_1", groups: []streamGroupsInfo{ { Name: "test_group_1", Consumers: 1, Pending: 1, EntriesRead: 1, Lag: 0, LastDeliveredId: "1638006862521-0", StreamGroupConsumersInfo: []streamGroupConsumersInfo{ { Name: "test_consumer_1", Pending: 1, }, }, }, }}, { name: "Multiple groups test", stream: "test_stream_2", groups: []streamGroupsInfo{ { Name: "test_group_1", Consumers: 2, Pending: 1, Lag: 0, EntriesRead: 1, LastDeliveredId: "1638006862522-0", }, { Name: "test_group_2", Consumers: 0, Pending: 0, Lag: 1, }, }}, } for _, tst := range tsts { t.Run(tst.name, func(t *testing.T) { scannedGroup, err := scanStreamGroups(c, tst.stream) t.Logf("scanStreamGroups() err: %s", err) if err != nil { t.Errorf("Err: %s", err) } if len(scannedGroup) == len(tst.groups) { for i := range scannedGroup { if scannedGroup[i].Name != tst.groups[i].Name { t.Errorf("Group name mismatch.\nExpected: %#v \nActual: %#v", tst.groups[i].Name, scannedGroup[i].Name) } if scannedGroup[i].Consumers != tst.groups[i].Consumers { t.Errorf("Consumers count mismatch.\nExpected: %#v \nActual: %#v", tst.groups[i].Consumers, scannedGroup[i].Consumers) } if scannedGroup[i].Pending != tst.groups[i].Pending { t.Errorf("Pending items mismatch.\nExpected: %#v \nActual: %#v", tst.groups[i].Pending, scannedGroup[i].Pending) } if parseStreamItemId(scannedGroup[i].LastDeliveredId) != parseStreamItemId(tst.groups[i].LastDeliveredId) { t.Errorf("LastDeliveredId items mismatch.\nExpected: %#v \nActual: %#v", tst.groups[i].LastDeliveredId, 
scannedGroup[i].LastDeliveredId) } if scannedGroup[i].Lag != tst.groups[i].Lag { t.Errorf("Lag mismatch.\nExpected: %#v \nActual: %#v", tst.groups[i].Lag, scannedGroup[i].Lag) } if scannedGroup[i].EntriesRead != tst.groups[i].EntriesRead { t.Errorf("EntriesRead mismatch.\nExpected: %#v \nActual: %#v", tst.groups[i].EntriesRead, scannedGroup[i].EntriesRead) } } } else { t.Errorf("Consumers entries mismatch.\nExpected: %d\nActual: %d", len(tst.consumers), len(scannedGroup)) } }) } } func TestStreamsScanStreamGroupsConsumers(t *testing.T) { if os.Getenv("TEST_REDIS_URI") == "" { t.Skipf("TEST_REDIS_URI not set - skipping") } addr := os.Getenv("TEST_REDIS_URI") db := dbNumStr c, err := redis.DialURL(addr) if err != nil { t.Fatalf("Couldn't connect to %#v: %#v", addr, err) } if _, err = c.Do("SELECT", db); err != nil { t.Errorf("Couldn't select database %#v", db) } fixtures := []keyFixture{ {"XADD", "single_consumer_stream", []interface{}{"*", "field_1", "str_1"}}, {"XADD", "multiple_consumer_stream", []interface{}{"*", "field_pattern_1", "str_pattern_1"}}, } // Create test streams c.Do("XGROUP", "CREATE", "single_consumer_stream", "test_group_1", "$", "MKSTREAM") c.Do("XGROUP", "CREATE", "multiple_consumer_stream", "test_group_1", "$", "MKSTREAM") // Add simple test items to streams defer func() { deleteKeyFixtures(t, c, fixtures) c.Close() }() createKeyFixtures(t, c, fixtures) // Process messages to assign Consumers to their groups c.Do("XREADGROUP", "GROUP", "test_group_1", "test_consumer_1", "COUNT", "1", "STREAMS", "single_consumer_stream", ">") c.Do("XREADGROUP", "GROUP", "test_group_1", "test_consumer_1", "COUNT", "1", "STREAMS", "multiple_consumer_stream", ">") c.Do("XREADGROUP", "GROUP", "test_group_1", "test_consumer_2", "COUNT", "1", "STREAMS", "multiple_consumer_stream", "0") tsts := []scanStreamFixture{ { name: "Single group test", stream: "single_consumer_stream", groups: []streamGroupsInfo{{Name: "test_group_1"}}, consumers: []streamGroupConsumersInfo{ { Name: "test_consumer_1", Pending: 1, }, }, }, { name: "Multiple consumers test", stream: "multiple_consumer_stream", groups: []streamGroupsInfo{{Name: "test_group_1"}}, consumers: []streamGroupConsumersInfo{ { Name: "test_consumer_1", Pending: 1, }, { Name: "test_consumer_2", Pending: 0, }, }, }, } for _, tst := range tsts { t.Run(tst.name, func(t *testing.T) { // For each group for _, g := range tst.groups { g.StreamGroupConsumersInfo, err = scanStreamGroupConsumers(c, tst.stream, g.Name) if err != nil { t.Errorf("Err: %s", err) } if len(g.StreamGroupConsumersInfo) == len(tst.consumers) { for i := range g.StreamGroupConsumersInfo { if g.StreamGroupConsumersInfo[i].Name != tst.consumers[i].Name { t.Errorf("Consumer name mismatch.\nExpected: %#v \nActual: %#v", tst.consumers[i].Name, g.StreamGroupConsumersInfo[i].Name) } if g.StreamGroupConsumersInfo[i].Pending != tst.consumers[i].Pending { t.Errorf("Pending items mismatch for %s.\nExpected: %#v \nActual: %#v", g.StreamGroupConsumersInfo[i].Name, tst.consumers[i].Pending, g.StreamGroupConsumersInfo[i].Pending) } } } else { t.Errorf("Consumers entries mismatch.\nExpected: %d\nActual: %d", len(tst.consumers), len(g.StreamGroupConsumersInfo)) } } }) } } func TestStreamsExtractStreamMetrics(t *testing.T) { if os.Getenv("TEST_REDIS_URI") == "" { t.Skipf("TEST_REDIS_URI not set - skipping") } addr := os.Getenv("TEST_REDIS_URI") e, _ := NewRedisExporter( addr, Options{Namespace: "test", CheckSingleStreams: dbNumStrFull + "=" + TestKeysStreamName}, ) c, err := redis.DialURL(addr) if err 
!= nil { t.Fatalf("Couldn't connect to %#v: %#v", addr, err) } setupTestKeys(t, addr) defer deleteTestKeys(t, addr) chM := make(chan prometheus.Metric) go func() { e.extractStreamMetrics(chM, c) close(chM) }() want := map[string]bool{ "stream_length": false, "stream_radix_tree_keys": false, "stream_radix_tree_nodes": false, "stream_last_generated_id": false, "stream_groups": false, "stream_max_deleted_entry_id": false, "stream_first_entry_id": false, "stream_last_entry_id": false, "stream_group_consumers": false, "stream_group_messages_pending": false, "stream_group_last_delivered_id": false, "stream_group_entries_read": false, "stream_group_lag": false, "stream_group_consumer_messages_pending": false, "stream_group_consumer_idle_seconds": false, } for m := range chM { for k := range want { log.Debugf("metric: %s", m.Desc().String()) log.Debugf("want: %s", k) if strings.Contains(m.Desc().String(), k) { want[k] = true } } } for k, found := range want { if !found { t.Errorf("didn't find %s", k) } } } func TestStreamsExtractStreamMetricsExcludeConsumer(t *testing.T) { if os.Getenv("TEST_REDIS_URI") == "" { t.Skipf("TEST_REDIS_URI not set - skipping") } addr := os.Getenv("TEST_REDIS_URI") e, _ := NewRedisExporter( addr, Options{Namespace: "test", CheckSingleStreams: dbNumStrFull + "=" + TestKeysStreamName, StreamsExcludeConsumerMetrics: true}, ) c, err := redis.DialURL(addr) if err != nil { t.Fatalf("Couldn't connect to %#v: %#v", addr, err) } setupTestKeys(t, addr) defer deleteTestKeys(t, addr) chM := make(chan prometheus.Metric) go func() { e.extractStreamMetrics(chM, c) close(chM) }() want := map[string]bool{ "stream_length": false, "stream_radix_tree_keys": false, "stream_radix_tree_nodes": false, "stream_last_generated_id": false, "stream_groups": false, "stream_max_deleted_entry_id": false, "stream_first_entry_id": false, "stream_last_entry_id": false, "stream_group_consumers": false, "stream_group_messages_pending": false, "stream_group_last_delivered_id": false, "stream_group_entries_read": false, "stream_group_lag": false, } dontWant := map[string]bool{ "stream_group_consumer_messages_pending": false, "stream_group_consumer_idle_seconds": false, } for m := range chM { for k := range want { log.Debugf("metric: %s", m.Desc().String()) log.Debugf("want: %s", k) if strings.Contains(m.Desc().String(), k) { want[k] = true } } for k := range dontWant { log.Debugf("metric: %s", m.Desc().String()) log.Debugf("don't want: %s", k) if strings.Contains(m.Desc().String(), k) { dontWant[k] = true } } } for k, found := range want { if !found { t.Errorf("didn't find %s metric, which should be collected", k) } } for k, found := range dontWant { if found { t.Errorf("found %s metric, which shouldn't be collected", k) } } } ����������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������redis_exporter-1.69.0/exporter/tile38.go������������������������������������������������������������0000664�0000000�0000000�00000001343�14765200314�0020157�0����������������������������������������������������������������������������������������������������ustar�00root����������������������������root����������������������������0000000�0000000������������������������������������������������������������������������������������������������������������������������������������������������������������������������package exporter import ( "strings" "github.com/gomodule/redigo/redis" 
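For context on what these tests scan: XINFO GROUPS replies are flat field/value arrays whose field names line up with the redis struct tags on streamGroupsInfo in streams.go. A sketch of one hypothetical group entry in the shape redigo would hand to redis.ScanStruct (the values are made up):

// Illustrative only: a hypothetical XINFO GROUPS entry - alternating
// field names and values matching the `redis:"..."` tags on streamGroupsInfo.
func exampleGroupReply() []interface{} {
	return []interface{}{
		[]byte("name"), []byte("test_group_1"),
		[]byte("consumers"), int64(1),
		[]byte("pending"), int64(1),
		[]byte("last-delivered-id"), []byte("1638006862521-0"),
		[]byte("entries-read"), int64(1),
		[]byte("lag"), int64(0),
	}
}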
"github.com/prometheus/client_golang/prometheus" log "github.com/sirupsen/logrus" ) func (e *Exporter) extractTile38Metrics(ch chan<- prometheus.Metric, c redis.Conn) { info, err := redis.Strings(doRedisCmd(c, "SERVER", "EXT")) if err != nil { log.Errorf("extractTile38Metrics() err: %s", err) return } for i := 0; i < len(info); i += 2 { fieldKey := info[i] if !strings.HasPrefix(fieldKey, "tile38_") { fieldKey = "tile38_" + fieldKey } fieldValue := info[i+1] log.Debugf("tile38 key:%s val:%s", fieldKey, fieldValue) if !e.includeMetric(fieldKey) { continue } e.parseAndRegisterConstMetric(ch, fieldKey, fieldValue) } } ���������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������redis_exporter-1.69.0/exporter/tile38_test.go�������������������������������������������������������0000664�0000000�0000000�00000003077�14765200314�0021224�0����������������������������������������������������������������������������������������������������ustar�00root����������������������������root����������������������������0000000�0000000������������������������������������������������������������������������������������������������������������������������������������������������������������������������package exporter import ( "os" "strings" "testing" "github.com/prometheus/client_golang/prometheus" ) func TestTile38(t *testing.T) { if os.Getenv("TEST_TILE38_URI") == "" { t.Skipf("TEST_TILE38_URI not set - skipping") } tsts := []struct { addr string isTile38 bool wantTile38Metrics bool }{ {addr: os.Getenv("TEST_TILE38_URI"), isTile38: true, wantTile38Metrics: true}, {addr: os.Getenv("TEST_TILE38_URI"), isTile38: false, wantTile38Metrics: false}, {addr: os.Getenv("TEST_REDIS_URI"), isTile38: true, wantTile38Metrics: false}, {addr: os.Getenv("TEST_REDIS_URI"), isTile38: false, wantTile38Metrics: false}, } for _, tst := range tsts { e, _ := NewRedisExporter(tst.addr, Options{Namespace: "test", IsTile38: tst.isTile38}) chM := make(chan prometheus.Metric) go func() { e.Collect(chM) close(chM) }() wantedMetrics := map[string]bool{ "tile38_threads_total": false, "tile38_cpus_total": false, "tile38_go_goroutines_total": false, "tile38_avg_item_size_bytes": false, } for m := range chM { for want := range wantedMetrics { if strings.Contains(m.Desc().String(), want) { wantedMetrics[want] = true } } } if tst.wantTile38Metrics { for want, found := range wantedMetrics { if !found { t.Errorf("%s was *not* found in tile38 metrics but expected", want) } } } else if !tst.wantTile38Metrics { for want, found := range wantedMetrics { if found { t.Errorf("%s was *found* in tile38 metrics but *not* expected", want) } } } } } 
---- redis_exporter-1.69.0/exporter/tls.go ----

package exporter

import (
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"os"

	log "github.com/sirupsen/logrus"
)

// CreateClientTLSConfig verifies the configured files and returns a prepared tls.Config
func (e *Exporter) CreateClientTLSConfig() (*tls.Config, error) {
	tlsConfig := tls.Config{
		InsecureSkipVerify: e.options.SkipTLSVerification,
	}

	if e.options.ClientCertFile != "" && e.options.ClientKeyFile != "" {
		cert, err := LoadKeyPair(e.options.ClientCertFile, e.options.ClientKeyFile)
		if err != nil {
			return nil, err
		}
		tlsConfig.Certificates = []tls.Certificate{*cert}
	}

	if e.options.CaCertFile != "" {
		certificates, err := LoadCAFile(e.options.CaCertFile)
		if err != nil {
			return nil, err
		}
		tlsConfig.RootCAs = certificates
	} else {
		// Load the system certificate pool
		rootCAs, err := x509.SystemCertPool()
		if err != nil {
			return nil, err
		}
		tlsConfig.RootCAs = rootCAs
	}

	return &tlsConfig, nil
}

var tlsVersions = map[string]uint16{
	"TLS1.3": tls.VersionTLS13,
	"TLS1.2": tls.VersionTLS12,
	"TLS1.1": tls.VersionTLS11,
	"TLS1.0": tls.VersionTLS10,
}

// CreateServerTLSConfig verifies the configuration and returns a prepared tls.Config
func (e *Exporter) CreateServerTLSConfig(certFile, keyFile, caCertFile, minVersionString string) (*tls.Config, error) {
	// Verify that the initial key pair is accepted
	_, err := LoadKeyPair(certFile, keyFile)
	if err != nil {
		return nil, err
	}

	// Get minimum acceptable TLS version from the config string
	minVersion, ok := tlsVersions[minVersionString]
	if !ok {
		return nil, fmt.Errorf("configured minimum TLS version unknown: '%s'", minVersionString)
	}

	tlsConfig := tls.Config{
		MinVersion:     minVersion,
		GetCertificate: GetServerCertificateFunc(certFile, keyFile),
	}

	if caCertFile != "" {
		// Verify that the initial CA file is accepted when configured
		_, err := LoadCAFile(caCertFile)
		if err != nil {
			return nil, err
		}
		tlsConfig.GetConfigForClient = GetConfigForClientFunc(certFile, keyFile, caCertFile)
	}

	return &tlsConfig, nil
}

// GetServerCertificateFunc returns a function for tls.Config.GetCertificate
func GetServerCertificateFunc(certFile, keyFile string) func(*tls.ClientHelloInfo) (*tls.Certificate, error) {
	return func(*tls.ClientHelloInfo) (*tls.Certificate, error) {
		return LoadKeyPair(certFile, keyFile)
	}
}

// GetConfigForClientFunc returns a function for tls.Config.GetConfigForClient
func GetConfigForClientFunc(certFile, keyFile, caCertFile string) func(*tls.ClientHelloInfo) (*tls.Config, error) {
	return func(*tls.ClientHelloInfo) (*tls.Config, error) {
		certificates, err := LoadCAFile(caCertFile)
		if err != nil {
			return nil, err
		}

		tlsConfig := tls.Config{
			ClientAuth:     tls.RequireAndVerifyClientCert,
			ClientCAs:      certificates,
			GetCertificate: GetServerCertificateFunc(certFile, keyFile),
		}
		return &tlsConfig, nil
	}
}

// LoadKeyPair reads and parses a public/private key pair from a pair of files.
// The files must contain PEM encoded data.
func LoadKeyPair(certFile, keyFile string) (*tls.Certificate, error) {
	log.Debugf("Load key pair: %s %s", certFile, keyFile)
	cert, err := tls.LoadX509KeyPair(certFile, keyFile)
	if err != nil {
		return nil, err
	}
	return &cert, nil
}

// LoadCAFile reads and parses CA certificates from a file into a pool.
// The file must contain PEM encoded data.
func LoadCAFile(caFile string) (*x509.CertPool, error) {
	log.Debugf("Load CA cert file: %s", caFile)
	pemCerts, err := os.ReadFile(caFile)
	if err != nil {
		return nil, err
	}

	pool := x509.NewCertPool()
	pool.AppendCertsFromPEM(pemCerts)
	return pool, nil
}
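A note on the design: because the config wires GetCertificate and GetConfigForClient to closures that re-read the files, certificate and CA rotation is picked up on each TLS handshake without restarting the process. A minimal sketch of serving with such a config (the file names, address, and standalone function are illustrative assumptions, not how the exporter's main wires this up):

// Illustrative only: using the reload-on-handshake config with net/http.
// ListenAndServeTLS gets empty file names because certificates come from
// the GetCertificate callback inside the config. Assumes "net/http" is imported.
func serveWithReloadingTLS(e *Exporter) error {
	tlsConfig, err := e.CreateServerTLSConfig("server.crt", "server.key", "", "TLS1.2") // assumed file names
	if err != nil {
		return err
	}
	srv := &http.Server{Addr: ":9121", TLSConfig: tlsConfig}
	return srv.ListenAndServeTLS("", "")
}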
{in: "commands_total"}, {in: "total_connections_received"}, {in: "used_memory"}, } for m := range chM { desc := m.Desc().String() for i := range tsts { if strings.Contains(desc, tsts[i].in) { tsts[i].found = true } } } }) } } func TestCreateServerTLSConfig(t *testing.T) { e := getTestExporter() // positive tests _, err := e.CreateServerTLSConfig("../contrib/tls/redis.crt", "../contrib/tls/redis.key", "", "TLS1.1") if err != nil { t.Errorf("CreateServerTLSConfig() err: %s", err) } _, err = e.CreateServerTLSConfig("../contrib/tls/redis.crt", "../contrib/tls/redis.key", "../contrib/tls/ca.crt", "TLS1.0") if err != nil { t.Errorf("CreateServerTLSConfig() err: %s", err) } // negative tests _, err = e.CreateServerTLSConfig("/nonexisting/file", "/nonexisting/file", "", "TLS1.1") if err == nil { t.Errorf("Expected CreateServerTLSConfig() to fail") } _, err = e.CreateServerTLSConfig("/nonexisting/file", "/nonexisting/file", "/nonexisting/file", "TLS1.2") if err == nil { t.Errorf("Expected CreateServerTLSConfig() to fail") } _, err = e.CreateServerTLSConfig("../contrib/tls/redis.crt", "../contrib/tls/redis.key", "/nonexisting/file", "TLS1.3") if err == nil { t.Errorf("Expected CreateServerTLSConfig() to fail") } _, err = e.CreateServerTLSConfig("../contrib/tls/redis.crt", "../contrib/tls/redis.key", "../contrib/tls/ca.crt", "TLSX") if err == nil { t.Errorf("Expected CreateServerTLSConfig() to fail") } } func TestGetServerCertificateFunc(t *testing.T) { // positive test _, err := GetServerCertificateFunc("../contrib/tls/ca.crt", "../contrib/tls/ca.key")(nil) if err != nil { t.Errorf("GetServerCertificateFunc() err: %s", err) } // negative test _, err = GetServerCertificateFunc("/nonexisting/file", "/nonexisting/file")(nil) if err == nil { t.Errorf("Expected GetServerCertificateFunc() to fail") } } func TestGetConfigForClientFunc(t *testing.T) { // positive test _, err := GetConfigForClientFunc("../contrib/tls/redis.crt", "../contrib/tls/redis.key", "../contrib/tls/ca.crt")(nil) if err != nil { t.Errorf("GetConfigForClientFunc() err: %s", err) } // negative test _, err = GetConfigForClientFunc("/nonexisting/file", "/nonexisting/file", "/nonexisting/file")(nil) if err == nil { t.Errorf("Expected GetConfigForClientFunc() to fail") } } ���������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������redis_exporter-1.69.0/go.mod������������������������������������������������������������������������0000664�0000000�0000000�00000001242�14765200314�0015754�0����������������������������������������������������������������������������������������������������ustar�00root����������������������������root����������������������������0000000�0000000������������������������������������������������������������������������������������������������������������������������������������������������������������������������module github.com/oliver006/redis_exporter go 1.21 require ( github.com/gomodule/redigo v1.9.2 github.com/mna/redisc v1.4.0 github.com/prometheus/client_golang v1.21.1 github.com/prometheus/client_model v0.6.1 github.com/sirupsen/logrus v1.9.3 ) require ( github.com/beorn7/perks v1.0.1 // indirect github.com/cespare/xxhash/v2 v2.3.0 // indirect github.com/klauspost/compress v1.17.11 // indirect github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 // 
---- redis_exporter-1.69.0/go.mod ----

module github.com/oliver006/redis_exporter

go 1.21

require (
	github.com/gomodule/redigo v1.9.2
	github.com/mna/redisc v1.4.0
	github.com/prometheus/client_golang v1.21.1
	github.com/prometheus/client_model v0.6.1
	github.com/sirupsen/logrus v1.9.3
)

require (
	github.com/beorn7/perks v1.0.1 // indirect
	github.com/cespare/xxhash/v2 v2.3.0 // indirect
	github.com/klauspost/compress v1.17.11 // indirect
	github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 // indirect
	github.com/prometheus/common v0.62.0 // indirect
	github.com/prometheus/procfs v0.15.1 // indirect
	golang.org/x/sys v0.28.0 // indirect
	google.golang.org/protobuf v1.36.1 // indirect
)
---- redis_exporter-1.69.0/go.sum ----

github.com/beorn7/perks v1.0.1 h1:VlbKKnNfV8bJzeqoa4cOKqO6bYr3WgKZxO8Z16+hsOM=
github.com/beorn7/perks v1.0.1/go.mod h1:G2ZrVWU2WbWT9wwq4/hrbKbnv/1ERSJQ0ibhJ6rlkpw=
github.com/cespare/xxhash/v2 v2.3.0 h1:UL815xU9SqsFlibzuggzjXhog7bL6oX9BbNZnL2UFvs=
github.com/cespare/xxhash/v2 v2.3.0/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs=
github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/gomodule/redigo v1.8.5/go.mod h1:P9dn9mFrCBvWhGE1wpxx6fgq7BAeLBk+UUUzlpkBYO0=
github.com/gomodule/redigo v1.9.2 h1:HrutZBLhSIU8abiSfW8pj8mPhOyMYjZT/wcA4/L9L9s=
github.com/gomodule/redigo v1.9.2/go.mod h1:KsU3hiK/Ay8U42qpaJk+kuNa3C+spxapWpM+ywhcgtw=
github.com/google/go-cmp v0.6.0 h1:ofyhxvXcZhMsU5ulbFiLKl/XBFqE1GSq7atu8tAmTRI=
github.com/google/go-cmp v0.6.0/go.mod h1:17dUlkBOakJ0+DkrSSNjCkIjxS6bF9zb3elmeNGIjoY=
github.com/klauspost/compress v1.17.11 h1:In6xLpyWOi1+C7tXUUWv2ot1QvBjxevKAaI6IXrJmUc=
github.com/klauspost/compress v1.17.11/go.mod h1:pMDklpSncoRMuLFrf1W9Ss9KT+0rH90U12bZKk7uwG0=
github.com/kylelemons/godebug v1.1.0 h1:RPNrshWIDI6G2gRW9EHilWtl7Z6Sb1BR0xunSBf0SNc=
github.com/kylelemons/godebug v1.1.0/go.mod h1:9/0rRGxNHcop5bhtWyNeEfOS8JIWk580+fNqagV/RAw=
github.com/mna/redisc v1.4.0 h1:rBKXyGO/39SGmYoRKCyzXcBpoMMKqkikg8E1G8YIfSA=
github.com/mna/redisc v1.4.0/go.mod h1:CplIoaSTDi5h9icnj4FLbRgHoNKCHDNJDVRztWDGeSQ=
github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 h1:C3w9PqII01/Oq1c1nUAm88MOHcQC9l5mIlSMApZMrHA=
github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822/go.mod h1:+n7T8mK8HuQTcFwEeznm/DIxMOiR9yIdICNftLE1DvQ=
github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/prometheus/client_golang v1.21.1 h1:DOvXXTqVzvkIewV/CDPFdejpMCGeMcbGCQ8YOmu+Ibk=
github.com/prometheus/client_golang v1.21.1/go.mod h1:U9NM32ykUErtVBxdvD3zfi+EuFkkaBvMb09mIfe0Zgg=
github.com/prometheus/client_model v0.6.1 h1:ZKSh/rekM+n3CeS952MLRAdFwIKqeY8b62p8ais2e9E=
github.com/prometheus/client_model v0.6.1/go.mod h1:OrxVMOVHjw3lKMa8+x6HeMGkHMQyHDk9E3jmP2AmGiY=
github.com/prometheus/common v0.62.0 h1:xasJaQlnWAeyHdUBeGjXmutelfJHWMRr+Fg4QszZ2Io=
github.com/prometheus/common v0.62.0/go.mod h1:vyBcEuLSvWos9B1+CyL7JZ2up+uFzXhkqml0W5zIY1I=
github.com/prometheus/procfs v0.15.1 h1:YagwOFzUgYfKKHX6Dr+sHT7km/hxC76UB0learggepc=
github.com/prometheus/procfs v0.15.1/go.mod h1:fB45yRUv8NstnjriLhBQLuOUt+WW4BsoGhij/e3PBqk=
github.com/sirupsen/logrus v1.9.3 h1:dueUQJ1C2q9oE3F7wvmSGAaVtTmUizReu6fjN8uqzbQ=
github.com/sirupsen/logrus v1.9.3/go.mod h1:naHLuLoDiP4jHNo9R0sCBMtWGeIprob74mVsIT4qYEQ=
github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/testify v1.5.1/go.mod h1:5W2xD1RspED5o8YsWQXVCued0rvSQ+mT+I5cxcmMvtA=
github.com/stretchr/testify v1.7.0/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
github.com/stretchr/testify v1.10.0 h1:Xv5erBjTwe/5IxqUQTdXv5kgmIvbHo3QQyRwhJsOfJA=
github.com/stretchr/testify v1.10.0/go.mod h1:r2ic/lqez/lEtzL7wO/rwa5dbSLXVDPFyf8C91i36aY=
golang.org/x/sys v0.0.0-20220715151400-c0bba94af5f8/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.28.0 h1:Fksou7UEQUWlKvIdsqzJmUmCX3cZuD2+P3XyyzwMhlA=
golang.org/x/sys v0.28.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
google.golang.org/protobuf v1.36.1 h1:yBPeRvTftaleIgM3PZ/WBIZ7XM/eEYAaEyCwvyjq/gk=
google.golang.org/protobuf v1.36.1/go.mod h1:9fA7Ob0pmnwhb644+1+CVWFRbNajQ6iRojtC/QF5bRE=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/yaml.v2 v2.2.2/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
gopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA=
gopkg.in/yaml.v3 v3.0.1/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
		envInt64, err := strconv.ParseInt(envVal, 10, 64)
		if err == nil {
			return envInt64
		}
	}
	return defaultVal
}

func main() {
	var (
		redisAddr          = flag.String("redis.addr", getEnv("REDIS_ADDR", "redis://localhost:6379"), "Address of the Redis instance to scrape")
		redisUser          = flag.String("redis.user", getEnv("REDIS_USER", ""), "User name to use for authentication (Redis ACL for Redis 6.0 and newer)")
		redisPwd           = flag.String("redis.password", getEnv("REDIS_PASSWORD", ""), "Password of the Redis instance to scrape")
		redisPwdFile       = flag.String("redis.password-file", getEnv("REDIS_PASSWORD_FILE", ""), "Password file of the Redis instance to scrape")
		namespace          = flag.String("namespace", getEnv("REDIS_EXPORTER_NAMESPACE", "redis"), "Namespace for metrics")
		checkKeys          = flag.String("check-keys", getEnv("REDIS_EXPORTER_CHECK_KEYS", ""), "Comma separated list of key-patterns to export value and length/size, searched for with SCAN")
		checkSingleKeys    = flag.String("check-single-keys", getEnv("REDIS_EXPORTER_CHECK_SINGLE_KEYS", ""), "Comma separated list of single keys to export value and length/size")
		checkKeyGroups     = flag.String("check-key-groups", getEnv("REDIS_EXPORTER_CHECK_KEY_GROUPS", ""), "Comma separated list of lua regex for grouping keys")
		checkStreams       = flag.String("check-streams", getEnv("REDIS_EXPORTER_CHECK_STREAMS", ""), "Comma separated list of stream-patterns to export info about streams, groups and consumers, searched for with SCAN")
		checkSingleStreams = flag.String("check-single-streams", getEnv("REDIS_EXPORTER_CHECK_SINGLE_STREAMS", ""), "Comma separated list of single streams to export info about streams, groups and consumers")

		streamsExcludeConsumerMetrics = flag.Bool("streams-exclude-consumer-metrics", getEnvBool("REDIS_EXPORTER_STREAMS_EXCLUDE_CONSUMER_METRICS", false), "Don't collect per consumer metrics for streams (decreases cardinality)")

		countKeys          = flag.String("count-keys", getEnv("REDIS_EXPORTER_COUNT_KEYS", ""), "Comma separated list of patterns to count (eg: 'db0=production_*,db3=sessions:*'), searched for with SCAN")
		checkKeysBatchSize = flag.Int64("check-keys-batch-size", getEnvInt64("REDIS_EXPORTER_CHECK_KEYS_BATCH_SIZE", 1000), "Approximate number of keys to process in each execution; a larger value speeds up scanning.\nWARNING: Redis is single-threaded, so a huge COUNT can affect a production environment.")
		scriptPath         = flag.String("script", getEnv("REDIS_EXPORTER_SCRIPT", ""), "Comma separated list of path(s) to Redis Lua script(s) for gathering extra metrics")
		listenAddress      = flag.String("web.listen-address", getEnv("REDIS_EXPORTER_WEB_LISTEN_ADDRESS", ":9121"), "Address to listen on for web interface and telemetry.")
		metricPath         = flag.String("web.telemetry-path", getEnv("REDIS_EXPORTER_WEB_TELEMETRY_PATH", "/metrics"), "Path under which to expose metrics.")
		logFormat          = flag.String("log-format", getEnv("REDIS_EXPORTER_LOG_FORMAT", "txt"), "Log format, valid options are txt and json")
		configCommand      = flag.String("config-command", getEnv("REDIS_EXPORTER_CONFIG_COMMAND", "CONFIG"), "What to use for the CONFIG command, set to \"-\" to skip config metrics extraction")
		connectionTimeout  = flag.String("connection-timeout", getEnv("REDIS_EXPORTER_CONNECTION_TIMEOUT", "15s"), "Timeout for connection to Redis instance")
		tlsClientKeyFile   = flag.String("tls-client-key-file", getEnv("REDIS_EXPORTER_TLS_CLIENT_KEY_FILE", ""), "Name of the client key file (including full path) if the server requires TLS client authentication")
		tlsClientCertFile  = flag.String("tls-client-cert-file",
			getEnv("REDIS_EXPORTER_TLS_CLIENT_CERT_FILE", ""), "Name of the client certificate file (including full path) if the server requires TLS client authentication")
		tlsCaCertFile       = flag.String("tls-ca-cert-file", getEnv("REDIS_EXPORTER_TLS_CA_CERT_FILE", ""), "Name of the CA certificate file (including full path) if the server requires TLS client authentication")
		tlsServerKeyFile    = flag.String("tls-server-key-file", getEnv("REDIS_EXPORTER_TLS_SERVER_KEY_FILE", ""), "Name of the server key file (including full path) if the web interface and telemetry should use TLS")
		tlsServerCertFile   = flag.String("tls-server-cert-file", getEnv("REDIS_EXPORTER_TLS_SERVER_CERT_FILE", ""), "Name of the server certificate file (including full path) if the web interface and telemetry should use TLS")
		tlsServerCaCertFile = flag.String("tls-server-ca-cert-file", getEnv("REDIS_EXPORTER_TLS_SERVER_CA_CERT_FILE", ""), "Name of the CA certificate file (including full path) if the web interface and telemetry should require TLS client authentication")
		tlsServerMinVersion = flag.String("tls-server-min-version", getEnv("REDIS_EXPORTER_TLS_SERVER_MIN_VERSION", "TLS1.2"), "Minimum TLS version that is acceptable by the web interface and telemetry when using TLS")

		maxDistinctKeyGroups = flag.Int64("max-distinct-key-groups", getEnvInt64("REDIS_EXPORTER_MAX_DISTINCT_KEY_GROUPS", 100), "The maximum number of distinct key groups with the most memory utilization to present as distinct metrics per database; the leftover key groups will be aggregated in the 'overflow' bucket")

		isDebug          = flag.Bool("debug", getEnvBool("REDIS_EXPORTER_DEBUG", false), "Output verbose debug information")
		setClientName    = flag.Bool("set-client-name", getEnvBool("REDIS_EXPORTER_SET_CLIENT_NAME", true), "Whether to set client name to redis_exporter")
		isTile38         = flag.Bool("is-tile38", getEnvBool("REDIS_EXPORTER_IS_TILE38", false), "Whether to scrape Tile38 specific metrics")
		isCluster        = flag.Bool("is-cluster", getEnvBool("REDIS_EXPORTER_IS_CLUSTER", false), "Whether this is a redis cluster (Enable this if you need to fetch key level data on a Redis Cluster).")
		exportClientList = flag.Bool("export-client-list", getEnvBool("REDIS_EXPORTER_EXPORT_CLIENT_LIST", false), "Whether to scrape Client List specific metrics")
		exportClientPort = flag.Bool("export-client-port", getEnvBool("REDIS_EXPORTER_EXPORT_CLIENT_PORT", false), "Whether to include the client's port when exporting the client list. Warning: including the port increases the number of metrics generated and will make your Prometheus server take up more memory")
		showVersion      = flag.Bool("version", false, "Show version information and exit")
		redisMetricsOnly = flag.Bool("redis-only-metrics", getEnvBool("REDIS_EXPORTER_REDIS_ONLY_METRICS", false), "Whether to export only Redis metrics and skip the Go runtime metrics")
		pingOnConnect    = flag.Bool("ping-on-connect", getEnvBool("REDIS_EXPORTER_PING_ON_CONNECT", false), "Whether to ping the redis instance after connecting")

		inclConfigMetrics  = flag.Bool("include-config-metrics", getEnvBool("REDIS_EXPORTER_INCL_CONFIG_METRICS", false), "Whether to include all config settings as metrics")
		inclModulesMetrics = flag.Bool("include-modules-metrics", getEnvBool("REDIS_EXPORTER_INCL_MODULES_METRICS", false), "Whether to collect Redis Modules metrics")

		disableExportingKeyValues      = flag.Bool("disable-exporting-key-values", getEnvBool("REDIS_EXPORTER_DISABLE_EXPORTING_KEY_VALUES", false), "Whether to disable exporting the values of keys stored in redis as labels when using check-keys/check-single-key")
		excludeLatencyHistogramMetrics = flag.Bool("exclude-latency-histogram-metrics", getEnvBool("REDIS_EXPORTER_EXCLUDE_LATENCY_HISTOGRAM_METRICS", false), "Do not try to collect latency histogram metrics")

		redactConfigMetrics = flag.Bool("redact-config-metrics", getEnvBool("REDIS_EXPORTER_REDACT_CONFIG_METRICS", true), "Whether to redact config settings that include potentially sensitive information like passwords")
		inclSystemMetrics   = flag.Bool("include-system-metrics", getEnvBool("REDIS_EXPORTER_INCL_SYSTEM_METRICS", false), "Whether to include system metrics like e.g. redis_total_system_memory_bytes")
		skipTLSVerification = flag.Bool("skip-tls-verification", getEnvBool("REDIS_EXPORTER_SKIP_TLS_VERIFICATION", false), "Whether to skip TLS verification")
		basicAuthUsername   = flag.String("basic-auth-username", getEnv("REDIS_EXPORTER_BASIC_AUTH_USERNAME", ""), "Username for basic authentication")
		basicAuthPassword   = flag.String("basic-auth-password", getEnv("REDIS_EXPORTER_BASIC_AUTH_PASSWORD", ""), "Password for basic authentication")
	)
	flag.Parse()

	switch *logFormat {
	case "json":
		log.SetFormatter(&log.JSONFormatter{})
	default:
		log.SetFormatter(&log.TextFormatter{})
	}
	if *showVersion {
		log.SetOutput(os.Stdout)
	}
	log.Printf("Redis Metrics Exporter %s build date: %s sha1: %s Go: %s GOOS: %s GOARCH: %s",
		BuildVersion, BuildDate, BuildCommitSha,
		runtime.Version(),
		runtime.GOOS, runtime.GOARCH,
	)
	if *showVersion {
		return
	}
	if *isDebug {
		log.SetLevel(log.DebugLevel)
		log.Debugln("Enabling debug output")
	} else {
		log.SetLevel(log.InfoLevel)
	}

	to, err := time.ParseDuration(*connectionTimeout)
	if err != nil {
		log.Fatalf("Couldn't parse connection timeout duration, err: %s", err)
	}

	passwordMap := make(map[string]string)
	if *redisPwd == "" && *redisPwdFile != "" {
		passwordMap, err = exporter.LoadPwdFile(*redisPwdFile)
		if err != nil {
			log.Fatalf("Error loading redis passwords from file %s, err: %s", *redisPwdFile, err)
		}
	}

	var ls map[string][]byte
	if *scriptPath != "" {
		scripts := strings.Split(*scriptPath, ",")
		ls = make(map[string][]byte, len(scripts))
		for _, script := range scripts {
			if ls[script], err = os.ReadFile(script); err != nil {
				log.Fatalf("Error loading script file %s err: %s", script, err)
			}
		}
	}

	// Use a dedicated registry when only Redis metrics are wanted; otherwise
	// reuse the default registry, which also carries the Go runtime metrics.
	registry := prometheus.NewRegistry()
	if !*redisMetricsOnly {
		registry = prometheus.DefaultRegisterer.(*prometheus.Registry)
	}

	exp, err := exporter.NewRedisExporter(
		*redisAddr,
		exporter.Options{
			User:     *redisUser,
			Password: *redisPwd,
			PasswordMap:                    passwordMap,
			Namespace:                      *namespace,
			ConfigCommandName:              *configCommand,
			CheckKeys:                      *checkKeys,
			CheckSingleKeys:                *checkSingleKeys,
			CheckKeysBatchSize:             *checkKeysBatchSize,
			CheckKeyGroups:                 *checkKeyGroups,
			MaxDistinctKeyGroups:           *maxDistinctKeyGroups,
			CheckStreams:                   *checkStreams,
			CheckSingleStreams:             *checkSingleStreams,
			StreamsExcludeConsumerMetrics:  *streamsExcludeConsumerMetrics,
			CountKeys:                      *countKeys,
			LuaScript:                      ls,
			InclSystemMetrics:              *inclSystemMetrics,
			InclConfigMetrics:              *inclConfigMetrics,
			DisableExportingKeyValues:      *disableExportingKeyValues,
			ExcludeLatencyHistogramMetrics: *excludeLatencyHistogramMetrics,
			RedactConfigMetrics:            *redactConfigMetrics,
			SetClientName:                  *setClientName,
			IsTile38:                       *isTile38,
			IsCluster:                      *isCluster,
			InclModulesMetrics:             *inclModulesMetrics,
			ExportClientList:               *exportClientList,
			ExportClientsInclPort:          *exportClientPort,
			SkipTLSVerification:            *skipTLSVerification,
			ClientCertFile:                 *tlsClientCertFile,
			ClientKeyFile:                  *tlsClientKeyFile,
			CaCertFile:                     *tlsCaCertFile,
			ConnectionTimeouts:             to,
			MetricsPath:                    *metricPath,
			RedisMetricsOnly:               *redisMetricsOnly,
			PingOnConnect:                  *pingOnConnect,
			RedisPwdFile:                   *redisPwdFile,
			Registry:                       registry,
			BuildInfo: exporter.BuildInfo{
				Version:   BuildVersion,
				CommitSha: BuildCommitSha,
				Date:      BuildDate,
			},
			BasicAuthUsername: *basicAuthUsername,
			BasicAuthPassword: *basicAuthPassword,
		},
	)
	if err != nil {
		log.Fatal(err)
	}

	// Verify that initial client keypair and CA are accepted
	if (*tlsClientCertFile != "") != (*tlsClientKeyFile != "") {
		log.Fatal("TLS client key file and cert file should both be present")
	}
	_, err = exp.CreateClientTLSConfig()
	if err != nil {
		log.Fatal(err)
	}

	log.Infof("Providing metrics at %s%s", *listenAddress, *metricPath)
	log.Debugf("Configured redis addr: %#v", *redisAddr)
	server := &http.Server{
		Addr:    *listenAddress,
		Handler: exp,
	}
	go func() {
		if *tlsServerCertFile != "" && *tlsServerKeyFile != "" {
			log.Debugf("Bind as TLS using cert %s and key %s", *tlsServerCertFile, *tlsServerKeyFile)

			tlsConfig, err := exp.CreateServerTLSConfig(*tlsServerCertFile, *tlsServerKeyFile, *tlsServerCaCertFile, *tlsServerMinVersion)
			if err != nil {
				log.Fatal(err)
			}
			server.TLSConfig = tlsConfig
			if err := server.ListenAndServeTLS("", ""); err != nil && !errors.Is(err, http.ErrServerClosed) {
				log.Fatalf("TLS Server error: %v", err)
			}
		} else {
			if err := server.ListenAndServe(); err != nil && !errors.Is(err, http.ErrServerClosed) {
				log.Fatalf("Server error: %v", err)
			}
		}
	}()

	// graceful shutdown
	quit := make(chan os.Signal, 1)
	signal.Notify(quit, syscall.SIGINT, syscall.SIGTERM)
	_quit := <-quit
	log.Infof("Received %s signal, exiting", _quit.String())

	// Create a context with a timeout
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	// Shutdown the HTTP server gracefully
	if err := server.Shutdown(ctx); err != nil {
		log.Fatalf("Server shutdown failed: %v", err)
	}
	log.Infof("Server shut down gracefully")
}
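For orientation, a minimal smoke test of the binary built from the main.go above. The flag names and defaults come from the flag definitions in this file; the Redis address and the use of curl are illustrative assumptions, not taken from the repository:

    ./redis_exporter --redis.addr=redis://localhost:6379 --web.listen-address=:9121
    curl http://localhost:9121/metrics

Because --web.telemetry-path defaults to /metrics and --web.listen-address to :9121, both flags could be omitted here; they are spelled out only to show the knobs.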

redis_exporter-1.69.0/package-github-binaries.sh
#!/usr/bin/env bash
set -u -e -o pipefail

mkdir -p dist

for build in $(ls .build); do
  echo "Creating archive for ${build}"

  cp LICENSE README.md ".build/${build}/"

  if [[ "${build}" =~ windows-.*$ ]] ; then
    # Make sure to clear out zip files to prevent zip from appending to the archive.
    rm "dist/${build}.zip" || true
    cd ".build/" && zip -r --quiet -9 "../dist/${build}.zip" "${build}" && cd ../
  else
    tar -C ".build/" -czf "dist/${build}.tar.gz" "${build}"
  fi
done

cd dist
sha256sum *.gz *.zip > sha256sums.txt
ls -la
cd ..
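As a sketch of what package-github-binaries.sh consumes and produces: the script archives every subdirectory of .build/, zipping builds whose name matches windows-.*$ and tarring the rest, then writes checksums over all archives. The build directory names below are hypothetical; only the .build/ input, dist/ output, and sha256sums.txt conventions come from the script itself:

    .build/redis_exporter-v1.69.0.linux-amd64/    ->  dist/redis_exporter-v1.69.0.linux-amd64.tar.gz
    .build/redis_exporter-v1.69.0.windows-amd64/  ->  dist/redis_exporter-v1.69.0.windows-amd64.zip
                                                      dist/sha256sums.txt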