pax_global_header00006660000000000000000000000064141335110760014513gustar00rootroot0000000000000052 comment=580003c91bbfdca0ffab80fb07639d77a28a7f22 prometheus-postfix-exporter-0.3.0/000077500000000000000000000000001413351107600172465ustar00rootroot00000000000000prometheus-postfix-exporter-0.3.0/.dockerignore000066400000000000000000000001011413351107600217120ustar00rootroot00000000000000# Created by .ignore support plugin (hsz.mobi) .idea/ *.iml .git/prometheus-postfix-exporter-0.3.0/.gitignore000066400000000000000000000001641413351107600212370ustar00rootroot00000000000000# Editor files *~ .idea/ # Test binary, build with `go test -c` *.test # Binaries postfix_exporter *.iml vendor/ prometheus-postfix-exporter-0.3.0/.travis.yml000066400000000000000000000014011413351107600213530ustar00rootroot00000000000000language: go matrix: include: - go: 1.16.x env: VET=1 GO111MODULE=on - go: 1.16.x env: RACE=1 GO111MODULE=on - go: 1.16.x env: RUN386=1 - go: 1.15.x env: VET=1 GO111MODULE=on - go: 1.15.x env: RACE=1 GO111MODULE=on - go: 1.15.x env: RUN386=1 - go: 1.14.x env: VET=1 GO111MODULE=on - go: 1.14.x env: RACE=1 GO111MODULE=on - go: 1.14.x env: RUN386=1 - go: 1.13.x env: VET=1 GO111MODULE=on - go: 1.13.x env: RACE=1 GO111MODULE=on - go: 1.13.x env: RUN386=1 - go: 1.12.x env: GO111MODULE=on - go: 1.11.x env: GO111MODULE=on - go: stable addons: apt: packages: - libsystemd-dev env: global: GO111MODULE: on prometheus-postfix-exporter-0.3.0/CHANGELOG.md000066400000000000000000000004521413351107600210600ustar00rootroot00000000000000## 0.1.3 / 2021-05-02 * [BUGFIX] Fix default for mail log path (/var/log/mail.log) ## 0.1.2 / 2018-05-04 * [ENHANCEMENT] Build tag for systemd ## 0.1.1 / 2018-04-19 * [BUGFIX] Non-updating metrics from systemd-journal fix ## 0.1.0 / 2018-02-23 * [ENHANCEMENT] Initial release, add changelog prometheus-postfix-exporter-0.3.0/Dockerfile000066400000000000000000000007541413351107600212460ustar00rootroot00000000000000FROM golang:1.16 AS builder WORKDIR /src # avoid downloading the dependencies on succesive builds RUN apt-get update -qq && apt-get install -qqy \ build-essential \ libsystemd-dev COPY go.mod go.sum ./ RUN go mod download RUN go mod verify COPY . . # Force the go compiler to use modules ENV GO111MODULE=on RUN go test RUN go build -o /bin/postfix_exporter FROM debian:latest EXPOSE 9154 WORKDIR / COPY --from=builder /bin/postfix_exporter /bin/ ENTRYPOINT ["/bin/postfix_exporter"] prometheus-postfix-exporter-0.3.0/LICENSE000066400000000000000000000261351413351107600202620ustar00rootroot00000000000000 Apache License Version 2.0, January 2004 http://www.apache.org/licenses/ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION 1. Definitions. "License" shall mean the terms and conditions for use, reproduction, and distribution as defined by Sections 1 through 9 of this document. "Licensor" shall mean the copyright owner or entity authorized by the copyright owner that is granting the License. "Legal Entity" shall mean the union of the acting entity and all other entities that control, are controlled by, or are under common control with that entity. For the purposes of this definition, "control" means (i) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the outstanding shares, or (iii) beneficial ownership of such entity. "You" (or "Your") shall mean an individual or Legal Entity exercising permissions granted by this License. 
"Source" form shall mean the preferred form for making modifications, including but not limited to software source code, documentation source, and configuration files. "Object" form shall mean any form resulting from mechanical transformation or translation of a Source form, including but not limited to compiled object code, generated documentation, and conversions to other media types. "Work" shall mean the work of authorship, whether in Source or Object form, made available under the License, as indicated by a copyright notice that is included in or attached to the work (an example is provided in the Appendix below). "Derivative Works" shall mean any work, whether in Source or Object form, that is based on (or derived from) the Work and for which the editorial revisions, annotations, elaborations, or other modifications represent, as a whole, an original work of authorship. For the purposes of this License, Derivative Works shall not include works that remain separable from, or merely link (or bind by name) to the interfaces of, the Work and Derivative Works thereof. "Contribution" shall mean any work of authorship, including the original version of the Work and any modifications or additions to that Work or Derivative Works thereof, that is intentionally submitted to Licensor for inclusion in the Work by the copyright owner or by an individual or Legal Entity authorized to submit on behalf of the copyright owner. For the purposes of this definition, "submitted" means any form of electronic, verbal, or written communication sent to the Licensor or its representatives, including but not limited to communication on electronic mailing lists, source code control systems, and issue tracking systems that are managed by, or on behalf of, the Licensor for the purpose of discussing and improving the Work, but excluding communication that is conspicuously marked or otherwise designated in writing by the copyright owner as "Not a Contribution." "Contributor" shall mean Licensor and any individual or Legal Entity on behalf of whom a Contribution has been received by Licensor and subsequently incorporated within the Work. 2. Grant of Copyright License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare Derivative Works of, publicly display, publicly perform, sublicense, and distribute the Work and such Derivative Works in Source or Object form. 3. Grant of Patent License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Work, where such license applies only to those patent claims licensable by such Contributor that are necessarily infringed by their Contribution(s) alone or by combination of their Contribution(s) with the Work to which such Contribution(s) was submitted. If You institute patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Work or a Contribution incorporated within the Work constitutes direct or contributory patent infringement, then any patent licenses granted to You under this License for that Work shall terminate as of the date such litigation is filed. 4. Redistribution. 
You may reproduce and distribute copies of the Work or Derivative Works thereof in any medium, with or without modifications, and in Source or Object form, provided that You meet the following conditions: (a) You must give any other recipients of the Work or Derivative Works a copy of this License; and (b) You must cause any modified files to carry prominent notices stating that You changed the files; and (c) You must retain, in the Source form of any Derivative Works that You distribute, all copyright, patent, trademark, and attribution notices from the Source form of the Work, excluding those notices that do not pertain to any part of the Derivative Works; and (d) If the Work includes a "NOTICE" text file as part of its distribution, then any Derivative Works that You distribute must include a readable copy of the attribution notices contained within such NOTICE file, excluding those notices that do not pertain to any part of the Derivative Works, in at least one of the following places: within a NOTICE text file distributed as part of the Derivative Works; within the Source form or documentation, if provided along with the Derivative Works; or, within a display generated by the Derivative Works, if and wherever such third-party notices normally appear. The contents of the NOTICE file are for informational purposes only and do not modify the License. You may add Your own attribution notices within Derivative Works that You distribute, alongside or as an addendum to the NOTICE text from the Work, provided that such additional attribution notices cannot be construed as modifying the License. You may add Your own copyright statement to Your modifications and may provide additional or different license terms and conditions for use, reproduction, or distribution of Your modifications, or for any such Derivative Works as a whole, provided Your use, reproduction, and distribution of the Work otherwise complies with the conditions stated in this License. 5. Submission of Contributions. Unless You explicitly state otherwise, any Contribution intentionally submitted for inclusion in the Work by You to the Licensor shall be under the terms and conditions of this License, without any additional terms or conditions. Notwithstanding the above, nothing herein shall supersede or modify the terms of any separate license agreement you may have executed with Licensor regarding such Contributions. 6. Trademarks. This License does not grant permission to use the trade names, trademarks, service marks, or product names of the Licensor, except as required for reasonable and customary use in describing the origin of the Work and reproducing the content of the NOTICE file. 7. Disclaimer of Warranty. Unless required by applicable law or agreed to in writing, Licensor provides the Work (and each Contributor provides its Contributions) on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are solely responsible for determining the appropriateness of using or redistributing the Work and assume any risks associated with Your exercise of permissions under this License. 8. Limitation of Liability. 
In no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall any Contributor be liable to You for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising as a result of this License or out of the use or inability to use the Work (including but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses), even if such Contributor has been advised of the possibility of such damages. 9. Accepting Warranty or Additional Liability. While redistributing the Work or Derivative Works thereof, You may choose to offer, and charge a fee for, acceptance of support, warranty, indemnity, or other liability obligations and/or rights consistent with this License. However, in accepting such obligations, You may act only on Your own behalf and on Your sole responsibility, not on behalf of any other Contributor, and only if You agree to indemnify, defend, and hold each Contributor harmless for any liability incurred by, or claims asserted against, such Contributor by reason of your accepting any such warranty or additional liability. END OF TERMS AND CONDITIONS APPENDIX: How to apply the Apache License to your work. To apply the Apache License to your work, attach the following boilerplate notice, with the fields enclosed by brackets "[]" replaced with your own identifying information. (Don't include the brackets!) The text should be enclosed in the appropriate comment syntax for the file format. We also recommend that a file or class name and description of purpose be included on the same "printed page" as the copyright notice for easier identification within third-party archives. Copyright [yyyy] [name of copyright owner] Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. prometheus-postfix-exporter-0.3.0/README.md000066400000000000000000000065541413351107600205370ustar00rootroot00000000000000# Prometheus Postfix exporter Prometheus metrics exporter for [the Postfix mail server](http://www.postfix.org/). This exporter provides histogram metrics for the size and age of messages stored in the mail queue. It extracts these metrics from Postfix by connecting to a UNIX socket under `/var/spool`. It also counts events by parsing Postfix's log entries, using regular expression matching. The log entries are retrieved from the systemd journal, the Docker logs, or from a log file. 
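As a quick smoke test of a running exporter, a short Go program such as the minimal sketch below can dump just the Postfix series. It assumes the default listen address and telemetry path listed under Options below (`:9154` and `/metrics`); adjust the URL if you changed those flags.

```
package main

import (
	"bufio"
	"fmt"
	"log"
	"net/http"
	"strings"
)

func main() {
	// Scrape the exporter the same way Prometheus would.
	resp, err := http.Get("http://localhost:9154/metrics")
	if err != nil {
		log.Fatalf("scrape failed: %v", err)
	}
	defer resp.Body.Close()

	// Print only the postfix_* samples and their HELP/TYPE metadata lines.
	scanner := bufio.NewScanner(resp.Body)
	for scanner.Scan() {
		if line := scanner.Text(); strings.Contains(line, "postfix_") {
			fmt.Println(line)
		}
	}
	if err := scanner.Err(); err != nil {
		log.Fatalf("reading response: %v", err)
	}
}
```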
## Options The following options can be used when starting `postfix_exporter`: | Flag | Description | Default | |--------------------------|-------------------------------------------------------|-----------------------------------| | `--web.listen-address` | Address to listen on for web interface and telemetry | `:9154` | | `--web.telemetry-path` | Path under which to expose metrics | `/metrics` | | `--postfix.showq_path` | Path at which Postfix places its showq socket | `/var/spool/postfix/public/showq` | | `--postfix.logfile_path` | Path where Postfix writes log entries | `/var/log/mail.log` | | `--log.unsupported` | Log all unsupported lines | `false` | | `--docker.enable` | Read from the Docker logs instead of a file | `false` | | `--docker.container.id` | The container to read Docker logs from | `postfix` | | `--systemd.enable` | Read from the systemd journal instead of a file | `false` | | `--systemd.unit` | Name of the Postfix systemd unit | `postfix.service` | | `--systemd.slice` | Name of the Postfix systemd slice | `""` | | `--systemd.journal_path` | Path to the systemd journal | `""` | ## Events from Docker Postfix servers running in a [Docker](https://www.docker.com/) container can be monitored using the `--docker.enable` flag. The default container ID is `postfix`, but it can be customized with the `--docker.container.id` flag. By default the exporter connects to the local Docker daemon; this can be changed using [the `DOCKER_HOST` and similar](https://pkg.go.dev/github.com/docker/docker/client?tab=doc#NewEnvClient) environment variables. ## Events from log file The log file is tailed continuously. Rotating the log files while the exporter is running is OK. The path to the log file is specified with the `--postfix.logfile_path` flag. ## Events from systemd Retrieval from the systemd journal is enabled with the `--systemd.enable` flag. This overrides the log file setting. It is possible to specify the unit (with `--systemd.unit`) or slice (with `--systemd.slice`). Additionally, it is possible to read the journal from a directory with the `--systemd.journal_path` flag. ## Build options By default, the exporter is built with systemd journal support (reading from the journal is still disabled at runtime unless `--systemd.enable` is set). Because the systemd headers are required to build this support, the exporter can also be built without it by using the build tag `nosystemd`. ``` go build -tags nosystemd ``` prometheus-postfix-exporter-0.3.0/build_static.sh000077500000000000000000000004351413351107600222550ustar00rootroot00000000000000#!/bin/sh docker run -i -v `pwd`:/postfix_exporter golang:1.16 /bin/sh << 'EOF' set -ex # Install prerequisites for the build process. apt-get update -q apt-get install -yq libsystemd-dev cd /postfix_exporter go get -d ./... 
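# Build the exporter with the static_all tag, then strip the resulting binary.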
go build -a -tags static_all strip postfix_exporter EOF prometheus-postfix-exporter-0.3.0/go.mod000066400000000000000000000011461413351107600203560ustar00rootroot00000000000000module github.com/kumina/postfix_exporter go 1.16 require ( github.com/Microsoft/go-winio v0.5.0 // indirect github.com/alecthomas/kingpin v2.2.6+incompatible github.com/coreos/go-systemd/v22 v22.0.0 github.com/docker/distribution v2.7.1+incompatible // indirect github.com/docker/docker v1.13.1 github.com/docker/go-connections v0.4.0 // indirect github.com/docker/go-units v0.4.0 // indirect github.com/nxadm/tail v1.4.8 github.com/opencontainers/go-digest v1.0.0 // indirect github.com/prometheus/client_golang v1.4.1 github.com/prometheus/client_model v0.2.0 github.com/stretchr/testify v1.4.0 ) prometheus-postfix-exporter-0.3.0/go.sum000066400000000000000000000274761413351107600204210ustar00rootroot00000000000000github.com/Microsoft/go-winio v0.5.0 h1:Elr9Wn+sGKPlkaBvwu4mTrxtmOp3F3yV9qhaHbXGjwU= github.com/Microsoft/go-winio v0.5.0/go.mod h1:JPGBdM1cNvN/6ISo+n8V5iA4v8pBzdOpzfwIujj1a84= github.com/alecthomas/kingpin v2.2.6+incompatible h1:5svnBTFgJjZvGKyYBtMB0+m5wvrbUHiqye8wRJMlnYI= github.com/alecthomas/kingpin v2.2.6+incompatible/go.mod h1:59OFYbFVLKQKq+mqrL6Rw5bR0c3ACQaawgXx0QYndlE= github.com/alecthomas/template v0.0.0-20160405071501-a0175ee3bccc/go.mod h1:LOuyumcjzFXgccqObfd/Ljyb9UuFJ6TxHnclSeseNhc= github.com/alecthomas/template v0.0.0-20190718012654-fb15b899a751 h1:JYp7IbQjafoB+tBA3gMyHYHrpOtNuDiK/uB5uXxq5wM= github.com/alecthomas/template v0.0.0-20190718012654-fb15b899a751/go.mod h1:LOuyumcjzFXgccqObfd/Ljyb9UuFJ6TxHnclSeseNhc= github.com/alecthomas/units v0.0.0-20151022065526-2efee857e7cf/go.mod h1:ybxpYRFXyAe+OPACYpWeL0wqObRcbAqCMya13uyzqw0= github.com/alecthomas/units v0.0.0-20190717042225-c3de453c63f4 h1:Hs82Z41s6SdL1CELW+XaDYmOH4hkBN4/N9og/AsOv7E= github.com/alecthomas/units v0.0.0-20190717042225-c3de453c63f4/go.mod h1:ybxpYRFXyAe+OPACYpWeL0wqObRcbAqCMya13uyzqw0= github.com/beorn7/perks v0.0.0-20180321164747-3a771d992973/go.mod h1:Dwedo/Wpr24TaqPxmxbtue+5NUziq4I4S80YR8gNf3Q= github.com/beorn7/perks v1.0.0/go.mod h1:KWe93zE9D1o94FZ5RNwFwVgaQK1VOXiVxmqh+CedLV8= github.com/beorn7/perks v1.0.1 h1:VlbKKnNfV8bJzeqoa4cOKqO6bYr3WgKZxO8Z16+hsOM= github.com/beorn7/perks v1.0.1/go.mod h1:G2ZrVWU2WbWT9wwq4/hrbKbnv/1ERSJQ0ibhJ6rlkpw= github.com/cespare/xxhash/v2 v2.1.1 h1:6MnRN8NT7+YBpUIWxHtefFZOKTAPgGjpQSxqLNn0+qY= github.com/cespare/xxhash/v2 v2.1.1/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs= github.com/coreos/go-systemd/v22 v22.0.0 h1:XJIw/+VlJ+87J+doOxznsAWIdmWuViOVhkQamW5YV28= github.com/coreos/go-systemd/v22 v22.0.0/go.mod h1:xO0FLkIi5MaZafQlIrOotqXZ90ih+1atmu1JpKERPPk= github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38= github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c= github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38= github.com/docker/distribution v2.7.1+incompatible h1:a5mlkVzth6W5A4fOsS3D2EO5BUmsJpcB+cRlLU7cSug= github.com/docker/distribution v2.7.1+incompatible/go.mod h1:J2gT2udsDAN96Uj4KfcMRqY0/ypR+oyYUYmja8H+y+w= github.com/docker/docker v1.13.1 h1:IkZjBSIc8hBjLpqeAbeE5mca5mNgeatLHBy3GO78BWo= github.com/docker/docker v1.13.1/go.mod h1:eEKB0N0r5NX/I1kEveEz05bcu8tLC/8azJZsviup8Sk= github.com/docker/go-connections v0.4.0 h1:El9xVISelRB7BuFusrZozjnkIM5YnzCViNKohAFqRJQ= github.com/docker/go-connections v0.4.0/go.mod h1:Gbd7IOopHjR8Iph03tsViu4nIes5XhDvyHbTtUxmeec= github.com/docker/go-units v0.4.0 
h1:3uh0PgVws3nIA0Q+MwDC8yjEPf9zjRfZZWXZYDct3Tw= github.com/docker/go-units v0.4.0/go.mod h1:fgPhTUdO+D/Jk86RDLlptpiXQzgHJF7gydDDbaIK4Dk= github.com/fsnotify/fsnotify v1.4.9 h1:hsms1Qyu0jgnwNXIxa+/V/PDsU6CfLf6CNO8H7IWoS4= github.com/fsnotify/fsnotify v1.4.9/go.mod h1:znqG4EE+3YCdAaPaxE2ZRY/06pZUdp0tY4IgpuI1SZQ= github.com/go-kit/kit v0.8.0/go.mod h1:xBxKIO96dXMWWy0MnWVtmwkA9/13aqxPnvrjFYMA2as= github.com/go-kit/kit v0.9.0/go.mod h1:xBxKIO96dXMWWy0MnWVtmwkA9/13aqxPnvrjFYMA2as= github.com/go-logfmt/logfmt v0.3.0/go.mod h1:Qt1PoO58o5twSAckw1HlFXLmHsOX5/0LbT9GBnD5lWE= github.com/go-logfmt/logfmt v0.4.0/go.mod h1:3RMwSq7FuexP4Kalkev3ejPJsZTpXXBr9+V4qmtdjCk= github.com/go-stack/stack v1.8.0/go.mod h1:v0f6uXyyMGvRgIKkXu+yp6POWl0qKG85gN/melR3HDY= github.com/godbus/dbus/v5 v5.0.3/go.mod h1:xhWf0FNVPg57R7Z0UbKHbJfkEywrmjJnf7w5xrFpKfA= github.com/gogo/protobuf v1.1.1/go.mod h1:r8qH/GZQm5c6nD/R0oafs1akxWv10x8SbQlK7atdtwQ= github.com/golang/protobuf v1.2.0/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U= github.com/golang/protobuf v1.3.1/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U= github.com/golang/protobuf v1.3.2 h1:6nsPYzhq5kReh6QImI3k5qWzO4PEbvbIW2cwSfR/6xs= github.com/golang/protobuf v1.3.2/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U= github.com/google/go-cmp v0.3.1/go.mod h1:8QqcDgzrUqlUb/G2PQTWiueGozuR1884gddMywk6iLU= github.com/google/go-cmp v0.4.0 h1:xsAVV57WRhGj6kEIi8ReJzQlHHqcBYCElAvkovg3B/4= github.com/google/go-cmp v0.4.0/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE= github.com/google/gofuzz v1.0.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg= github.com/json-iterator/go v1.1.6/go.mod h1:+SdeFBvtyEkXs7REEP0seUULqWtbJapLOCVDaaPEHmU= github.com/json-iterator/go v1.1.9/go.mod h1:KdQUCv79m/52Kvf8AW2vK1V8akMuk1QjK/uOdHXbAo4= github.com/julienschmidt/httprouter v1.2.0/go.mod h1:SYymIcj16QtmaHHD7aYtjjsJG7VTCxuUUipMqKk8s4w= github.com/konsorten/go-windows-terminal-sequences v1.0.1/go.mod h1:T0+1ngSBFLxvqU3pZ+m/2kptfBszLMUkC4ZK/EgS/cQ= github.com/kr/logfmt v0.0.0-20140226030751-b84e30acd515/go.mod h1:+0opPa2QZZtGFBFZlji/RkVcI2GknAs/DXo4wKdlNEc= github.com/kr/pretty v0.1.0 h1:L/CwN0zerZDmRFUapSPitk6f+Q3+0za1rQkzVuMiMFI= github.com/kr/pretty v0.1.0/go.mod h1:dAy3ld7l9f0ibDNOQOHHMYYIIbhfbHSm3C4ZsoJORNo= github.com/kr/pty v1.1.1/go.mod h1:pFQYn66WHrOpPYNljwOMqo10TkYh1fy3cYio2l3bCsQ= github.com/kr/text v0.1.0 h1:45sCR5RtlFHMR4UwH9sdQ5TC8v0qDQCHnXt+kaKSTVE= github.com/kr/text v0.1.0/go.mod h1:4Jbv+DJW3UT/LiOwJeYQe1efqtUx/iVham/4vfdArNI= github.com/matttproud/golang_protobuf_extensions v1.0.1 h1:4hp9jkHxhMHkqkrB3Ix0jegS5sx/RkqARlsWZ6pIwiU= github.com/matttproud/golang_protobuf_extensions v1.0.1/go.mod h1:D8He9yQNgCq6Z5Ld7szi9bcBfOoFv/3dc6xSMkL2PC0= github.com/modern-go/concurrent v0.0.0-20180228061459-e0a39a4cb421/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q= github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q= github.com/modern-go/reflect2 v0.0.0-20180701023420-4b7aa43c6742/go.mod h1:bx2lNnkwVCuqBIxFjflWJWanXIb3RllmbCylyMrvgv0= github.com/modern-go/reflect2 v1.0.1/go.mod h1:bx2lNnkwVCuqBIxFjflWJWanXIb3RllmbCylyMrvgv0= github.com/mwitkow/go-conntrack v0.0.0-20161129095857-cc309e4a2223/go.mod h1:qRWi+5nqEBWmkhHvq77mSJWrCKwh8bxhgT7d/eI7P4U= github.com/nxadm/tail v1.4.8 h1:nPr65rt6Y5JFSKQO7qToXr7pePgD6Gwiw05lkbyAQTE= github.com/nxadm/tail v1.4.8/go.mod h1:+ncqLTQzXmGhMZNUePPaPqPvBxHAIsmXswZKocGu+AU= github.com/opencontainers/go-digest v1.0.0 h1:apOUWs51W5PlhuyGyz9FCeeBIOUDA/6nW8Oi/yOhh5U= 
github.com/opencontainers/go-digest v1.0.0/go.mod h1:0JzlMkj0TRzQZfJkVvzbP0HBR3IKzErnv2BNG4W4MAM= github.com/pkg/errors v0.8.0/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0= github.com/pkg/errors v0.8.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0= github.com/pkg/errors v0.9.1 h1:FEBLx1zS214owpjy7qsBeixbURkuhQAwrK5UwLGTwt4= github.com/pkg/errors v0.9.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0= github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM= github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4= github.com/prometheus/client_golang v0.9.1/go.mod h1:7SWBe2y4D6OKWSNQJUaRYU/AaXPKyh/dDVn+NZz0KFw= github.com/prometheus/client_golang v1.0.0/go.mod h1:db9x61etRT2tGnBNRi70OPL5FsnadC4Ky3P0J6CfImo= github.com/prometheus/client_golang v1.4.1 h1:FFSuS004yOQEtDdTq+TAOLP5xUq63KqAFYyOi8zA+Y8= github.com/prometheus/client_golang v1.4.1/go.mod h1:e9GMxYsXl05ICDXkRhurwBS4Q3OK1iX/F2sw+iXX5zU= github.com/prometheus/client_model v0.0.0-20180712105110-5c3871d89910/go.mod h1:MbSGuTsp3dbXC40dX6PRTWyKYBIrTGTE9sqQNg2J8bo= github.com/prometheus/client_model v0.0.0-20190129233127-fd36f4220a90/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA= github.com/prometheus/client_model v0.2.0 h1:uq5h0d+GuxiXLJLNABMgp2qUWDPiLvgCzz2dUR+/W/M= github.com/prometheus/client_model v0.2.0/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA= github.com/prometheus/common v0.4.1/go.mod h1:TNfzLD0ON7rHzMJeJkieUDPYmFC7Snx/y86RQel1bk4= github.com/prometheus/common v0.9.1 h1:KOMtN28tlbam3/7ZKEYKHhKoJZYYj3gMH4uc62x7X7U= github.com/prometheus/common v0.9.1/go.mod h1:yhUN8i9wzaXS3w1O07YhxHEBxD+W35wd8bs7vj7HSQ4= github.com/prometheus/procfs v0.0.0-20181005140218-185b4288413d/go.mod h1:c3At6R/oaqEKCNdg8wHV1ftS6bRYblBhIjjI8uT2IGk= github.com/prometheus/procfs v0.0.2/go.mod h1:TjEm7ze935MbeOT/UhFTIMYKhuLP4wbCsTZCD3I8kEA= github.com/prometheus/procfs v0.0.8 h1:+fpWZdT24pJBiqJdAwYBjPSk+5YmQzYNPYzQsdzLkt8= github.com/prometheus/procfs v0.0.8/go.mod h1:7Qr8sr6344vo1JqZ6HhLceV9o3AJ1Ff+GxbHq6oeK9A= github.com/sirupsen/logrus v1.2.0/go.mod h1:LxeOpSwHxABJmUn/MG1IvRgCAasNZTLOkJPxbbu5VWo= github.com/sirupsen/logrus v1.4.2/go.mod h1:tLMulIdttU9McNUspp0xgXVQah82FyeX6MwdIuYE2rE= github.com/sirupsen/logrus v1.7.0/go.mod h1:yWOB1SBYBC5VeMP7gHvWumXLIWorT60ONWic61uBYv0= github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME= github.com/stretchr/objx v0.1.1/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME= github.com/stretchr/testify v1.2.2/go.mod h1:a8OnRcib4nhh0OaRAV+Yts87kKdq0PP7pXfy6kDkUVs= github.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UVUgZn+9EI= github.com/stretchr/testify v1.4.0 h1:2E4SXV/wtOkTonXsotYi4li6zVWxYlZuYNCXe9XRJyk= github.com/stretchr/testify v1.4.0/go.mod h1:j7eGeouHqKxXV5pUuKE4zz7dFj8WfuZ+81PSLYec5m4= golang.org/x/crypto v0.0.0-20180904163835-0709b304e793/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4= golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w= golang.org/x/net v0.0.0-20181114220301-adae6a3d119a/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4= golang.org/x/net v0.0.0-20190613194153-d28f0bde5980 h1:dfGZHvZk057jK2MCeWus/TowKpJ8y4AmooUzdBSR9GU= golang.org/x/net v0.0.0-20190613194153-d28f0bde5980/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s= golang.org/x/sync v0.0.0-20181108010431-42b317875d0f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= golang.org/x/sync 
v0.0.0-20181221193216-37e7f081c4d4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= golang.org/x/sync v0.0.0-20190911185100-cd5d95a43a6e/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= golang.org/x/sys v0.0.0-20180905080454-ebe1bf3edb33/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= golang.org/x/sys v0.0.0-20181116152217-5ac8a444bdc5/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= golang.org/x/sys v0.0.0-20190422165155-953cdadca894/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys v0.0.0-20191005200804-aed5e4c7ecf9/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys v0.0.0-20191026070338-33540a1f6037/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys v0.0.0-20200122134326-e047566fdf82/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys v0.0.0-20210124154548-22da62e12c0c h1:VwygUrnw9jn88c4u8GD3rZQbqrP/tgas88tPUbBxQrk= golang.org/x/sys v0.0.0-20210124154548-22da62e12c0c/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ= golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0= gopkg.in/alecthomas/kingpin.v2 v2.2.6/go.mod h1:FMv+mEhP44yOT+4EoQTLFTRgOQ1FBLkstjWtayDeSgw= gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0= gopkg.in/check.v1 v1.0.0-20190902080502-41f04d3bba15 h1:YR8cESwS4TdDjEe65xsg0ogRM/Nc3DYOhEAlW+xobZo= gopkg.in/check.v1 v1.0.0-20190902080502-41f04d3bba15/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0= gopkg.in/tomb.v1 v1.0.0-20141024135613-dd632973f1e7 h1:uRGJdciOHaEIrze2W8Q3AKkepLTh2hOroT7a+7czfdQ= gopkg.in/tomb.v1 v1.0.0-20141024135613-dd632973f1e7/go.mod h1:dt/ZhP58zS4L8KSrWDmTeBkI65Dw0HsyUHuEVlX15mw= gopkg.in/yaml.v2 v2.2.1/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI= gopkg.in/yaml.v2 v2.2.2/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI= gopkg.in/yaml.v2 v2.2.4/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI= gopkg.in/yaml.v2 v2.2.5 h1:ymVxjfMaHvXD8RqPRmzHHsB3VvucivSkIAvJFDI5O3c= gopkg.in/yaml.v2 v2.2.5/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI= prometheus-postfix-exporter-0.3.0/logsource.go000066400000000000000000000032261413351107600216020ustar00rootroot00000000000000package main import ( "context" "fmt" "io" "github.com/alecthomas/kingpin" ) // A LogSourceFactory provides a repository of log sources that can be // instantiated from command line flags. type LogSourceFactory interface { // Init adds the factory's struct fields as flags in the // application. Init(*kingpin.Application) // New attempts to create a new log source. This is called after // flags have been parsed. Returning `nil, nil`, means the user // didn't want this log source. New(context.Context) (LogSourceCloser, error) } type LogSourceCloser interface { io.Closer LogSource } var logSourceFactories []LogSourceFactory // RegisterLogSourceFactory can be called from module `init` functions // to register factories. func RegisterLogSourceFactory(lsf LogSourceFactory) { logSourceFactories = append(logSourceFactories, lsf) } // InitLogSourceFactories runs Init on all factories. The // initialization order is arbitrary, except `fileLogSourceFactory` is // always last (the fallback). The file log source must be last since // it's enabled by default. 
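//
// Additional sources follow the same pattern: a factory registers itself
// from its own `init` function, as the Docker and systemd factories do.
// An illustrative sketch (myLogSourceFactory is not a real type in this
// package):
//
//	func init() {
//		RegisterLogSourceFactory(&myLogSourceFactory{})
//	}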
func InitLogSourceFactories(app *kingpin.Application) { RegisterLogSourceFactory(&fileLogSourceFactory{}) for _, f := range logSourceFactories { f.Init(app) } } // NewLogSourceFromFactories iterates through the factories and // attempts to instantiate a log source. The first factory to return // success wins. func NewLogSourceFromFactories(ctx context.Context) (LogSourceCloser, error) { for _, f := range logSourceFactories { src, err := f.New(ctx) if err != nil { return nil, err } if src != nil { return src, nil } } return nil, fmt.Errorf("no log source configured") } prometheus-postfix-exporter-0.3.0/logsource_docker.go000066400000000000000000000046531413351107600231360ustar00rootroot00000000000000// +build !nodocker package main import ( "bufio" "context" "io" "log" "strings" "github.com/alecthomas/kingpin" "github.com/docker/docker/api/types" "github.com/docker/docker/client" ) // A DockerLogSource reads log records from the given Docker // journal. type DockerLogSource struct { client DockerClient containerID string reader *bufio.Reader } // A DockerClient is the client interface that client.Client // provides. See https://pkg.go.dev/github.com/docker/docker/client type DockerClient interface { io.Closer ContainerLogs(context.Context, string, types.ContainerLogsOptions) (io.ReadCloser, error) } // NewDockerLogSource returns a log source for reading Docker logs. func NewDockerLogSource(ctx context.Context, c DockerClient, containerID string) (*DockerLogSource, error) { r, err := c.ContainerLogs(ctx, containerID, types.ContainerLogsOptions{ ShowStdout: true, ShowStderr: true, Follow: true, Tail: "0", }) if err != nil { return nil, err } logSrc := &DockerLogSource{ client: c, containerID: containerID, reader: bufio.NewReader(r), } return logSrc, nil } func (s *DockerLogSource) Close() error { return s.client.Close() } func (s *DockerLogSource) Path() string { return "docker:" + s.containerID } func (s *DockerLogSource) Read(ctx context.Context) (string, error) { line, err := s.reader.ReadString('\n') if err != nil { return "", err } return strings.TrimSpace(line), nil } // A dockerLogSourceFactory is a factory that can create // DockerLogSources from command line flags. type dockerLogSourceFactory struct { enable bool containerID string } func (f *dockerLogSourceFactory) Init(app *kingpin.Application) { app.Flag("docker.enable", "Read from Docker logs. Environment variable DOCKER_HOST can be used to change the address. 
See https://pkg.go.dev/github.com/docker/docker/client?tab=doc#NewEnvClient for more information.").Default("false").BoolVar(&f.enable) app.Flag("docker.container.id", "ID/name of the Postfix Docker container.").Default("postfix").StringVar(&f.containerID) } func (f *dockerLogSourceFactory) New(ctx context.Context) (LogSourceCloser, error) { if !f.enable { return nil, nil } log.Println("Reading log events from Docker") c, err := client.NewEnvClient() if err != nil { return nil, err } return NewDockerLogSource(ctx, c, f.containerID) } func init() { RegisterLogSourceFactory(&dockerLogSourceFactory{}) } prometheus-postfix-exporter-0.3.0/logsource_docker_test.go000066400000000000000000000036231413351107600241710ustar00rootroot00000000000000// +build !nodocker package main import ( "context" "io" "io/ioutil" "strings" "testing" "github.com/docker/docker/api/types" "github.com/stretchr/testify/assert" ) func TestNewDockerLogSource(t *testing.T) { ctx := context.Background() c := &fakeDockerClient{} src, err := NewDockerLogSource(ctx, c, "acontainer") if err != nil { t.Fatalf("NewDockerLogSource failed: %v", err) } assert.Equal(t, []string{"acontainer"}, c.containerLogsCalls, "A call to ContainerLogs should be made.") if err := src.Close(); err != nil { t.Fatalf("Close failed: %v", err) } assert.Equal(t, 1, c.closeCalls, "A call to Close should be made.") } func TestDockerLogSource_Path(t *testing.T) { ctx := context.Background() c := &fakeDockerClient{} src, err := NewDockerLogSource(ctx, c, "acontainer") if err != nil { t.Fatalf("NewDockerLogSource failed: %v", err) } defer src.Close() assert.Equal(t, "docker:acontainer", src.Path(), "Path should be set by New.") } func TestDockerLogSource_Read(t *testing.T) { ctx := context.Background() c := &fakeDockerClient{ logsReader: ioutil.NopCloser(strings.NewReader("Feb 13 23:31:30 ahost anid[123]: aline\n")), } src, err := NewDockerLogSource(ctx, c, "acontainer") if err != nil { t.Fatalf("NewDockerLogSource failed: %v", err) } defer src.Close() s, err := src.Read(ctx) if err != nil { t.Fatalf("Read failed: %v", err) } assert.Equal(t, "Feb 13 23:31:30 ahost anid[123]: aline", s, "Read should get data from the journal entry.") } type fakeDockerClient struct { logsReader io.ReadCloser containerLogsCalls []string closeCalls int } func (c *fakeDockerClient) ContainerLogs(ctx context.Context, containerID string, opts types.ContainerLogsOptions) (io.ReadCloser, error) { c.containerLogsCalls = append(c.containerLogsCalls, containerID) return c.logsReader, nil } func (c *fakeDockerClient) Close() error { c.closeCalls++ return nil } prometheus-postfix-exporter-0.3.0/logsource_file.go000066400000000000000000000037701413351107600226050ustar00rootroot00000000000000package main import ( "context" "io" "log" "github.com/alecthomas/kingpin" "github.com/nxadm/tail" ) // A FileLogSource can read lines from a file. type FileLogSource struct { tailer *tail.Tail } // NewFileLogSource creates a new log source, tailing the given file. 
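//
// A minimal usage sketch (the path here is just the exporter's default):
//
//	src, err := NewFileLogSource("/var/log/mail.log")
//	if err != nil {
//		// handle the error
//	}
//	defer src.Close()
//	line, err := src.Read(ctx)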
func NewFileLogSource(path string) (*FileLogSource, error) { tailer, err := tail.TailFile(path, tail.Config{ ReOpen: true, // reopen the file if it's rotated MustExist: true, // fail immediately if the file is missing or has incorrect permissions Follow: true, // run in follow mode Location: &tail.SeekInfo{Whence: io.SeekEnd}, // seek to end of file Logger: tail.DiscardingLogger, }) if err != nil { return nil, err } return &FileLogSource{tailer}, nil } func (s *FileLogSource) Close() error { defer s.tailer.Cleanup() go func() { // Stop() waits for the tailer goroutine to shut down, but it // can be blocking on sending on the Lines channel... for range s.tailer.Lines { } }() return s.tailer.Stop() } func (s *FileLogSource) Path() string { return s.tailer.Filename } func (s *FileLogSource) Read(ctx context.Context) (string, error) { select { case line, ok := <-s.tailer.Lines: if !ok { return "", io.EOF } return line.Text, nil case <-ctx.Done(): return "", ctx.Err() } } // A fileLogSourceFactory is a factory than can create log sources // from command line flags. // // Because this factory is enabled by default, it must always be // registered last. type fileLogSourceFactory struct { path string } func (f *fileLogSourceFactory) Init(app *kingpin.Application) { app.Flag("postfix.logfile_path", "Path where Postfix writes log entries.").Default("/var/log/mail.log").StringVar(&f.path) } func (f *fileLogSourceFactory) New(ctx context.Context) (LogSourceCloser, error) { if f.path == "" { return nil, nil } log.Printf("Reading log events from %s", f.path) return NewFileLogSource(f.path) } prometheus-postfix-exporter-0.3.0/logsource_file_test.go000066400000000000000000000032331413351107600236360ustar00rootroot00000000000000package main import ( "context" "fmt" "io/ioutil" "os" "sync" "testing" "time" "github.com/stretchr/testify/assert" ) func TestFileLogSource_Path(t *testing.T) { path, close, err := setupFakeLogFile() if err != nil { t.Fatalf("setupFakeTailer failed: %v", err) } defer close() src, err := NewFileLogSource(path) if err != nil { t.Fatalf("NewFileLogSource failed: %v", err) } defer src.Close() assert.Equal(t, path, src.Path(), "Path should be set by New.") } func TestFileLogSource_Read(t *testing.T) { ctx := context.Background() path, close, err := setupFakeLogFile() if err != nil { t.Fatalf("setupFakeTailer failed: %v", err) } defer close() src, err := NewFileLogSource(path) if err != nil { t.Fatalf("NewFileLogSource failed: %v", err) } defer src.Close() s, err := src.Read(ctx) if err != nil { t.Fatalf("Read failed: %v", err) } assert.Equal(t, "Feb 13 23:31:30 ahost anid[123]: aline", s, "Read should get data from the journal entry.") } func setupFakeLogFile() (string, func(), error) { f, err := ioutil.TempFile("", "filelogsource") if err != nil { return "", nil, err } ctx, cancel := context.WithCancel(context.Background()) var wg sync.WaitGroup wg.Add(1) go func() { defer wg.Done() defer os.Remove(f.Name()) defer f.Close() for { // The tailer seeks to the end and then does a // follow. Keep writing lines so we know it wakes up and // returns lines. 
fmt.Fprintln(f, "Feb 13 23:31:30 ahost anid[123]: aline") select { case <-time.After(10 * time.Millisecond): // continue case <-ctx.Done(): return } } }() return f.Name(), func() { cancel() wg.Wait() }, nil } prometheus-postfix-exporter-0.3.0/logsource_systemd.go000066400000000000000000000072071413351107600233550ustar00rootroot00000000000000// +build !nosystemd,linux package main import ( "context" "fmt" "io" "log" "time" "github.com/alecthomas/kingpin" "github.com/coreos/go-systemd/v22/sdjournal" ) // timeNow is a test fake injection point. var timeNow = time.Now // A SystemdLogSource reads log records from the given Systemd // journal. type SystemdLogSource struct { journal SystemdJournal path string } // A SystemdJournal is the journal interface that sdjournal.Journal // provides. See https://pkg.go.dev/github.com/coreos/go-systemd/sdjournal?tab=doc type SystemdJournal interface { io.Closer AddMatch(match string) error GetEntry() (*sdjournal.JournalEntry, error) Next() (uint64, error) SeekRealtimeUsec(usec uint64) error Wait(timeout time.Duration) int } // NewSystemdLogSource returns a log source for reading Systemd // journal entries. `unit` and `slice` provide filtering if non-empty // (with `slice` taking precedence). func NewSystemdLogSource(j SystemdJournal, path, unit, slice string) (*SystemdLogSource, error) { logSrc := &SystemdLogSource{journal: j, path: path} var err error if slice != "" { err = logSrc.journal.AddMatch("_SYSTEMD_SLICE=" + slice) } else if unit != "" { err = logSrc.journal.AddMatch("_SYSTEMD_UNIT=" + unit) } if err != nil { logSrc.journal.Close() return nil, err } // Start at end of journal if err := logSrc.journal.SeekRealtimeUsec(uint64(timeNow().UnixNano() / 1000)); err != nil { logSrc.journal.Close() return nil, err } if r := logSrc.journal.Wait(1 * time.Second); r < 0 { logSrc.journal.Close() return nil, err } return logSrc, nil } func (s *SystemdLogSource) Close() error { return s.journal.Close() } func (s *SystemdLogSource) Path() string { return s.path } func (s *SystemdLogSource) Read(ctx context.Context) (string, error) { c, err := s.journal.Next() if err != nil { return "", err } if c == 0 { return "", io.EOF } e, err := s.journal.GetEntry() if err != nil { return "", err } ts := time.Unix(0, int64(e.RealtimeTimestamp)*int64(time.Microsecond)) return fmt.Sprintf( "%s %s %s[%s]: %s", ts.Format(time.Stamp), e.Fields["_HOSTNAME"], e.Fields["SYSLOG_IDENTIFIER"], e.Fields["_PID"], e.Fields["MESSAGE"], ), nil } // A systemdLogSourceFactory is a factory that can create // SystemdLogSources from command line flags. type systemdLogSourceFactory struct { enable bool unit, slice, path string } func (f *systemdLogSourceFactory) Init(app *kingpin.Application) { app.Flag("systemd.enable", "Read from the systemd journal instead of log").Default("false").BoolVar(&f.enable) app.Flag("systemd.unit", "Name of the Postfix systemd unit.").Default("postfix.service").StringVar(&f.unit) app.Flag("systemd.slice", "Name of the Postfix systemd slice. Overrides the systemd unit.").Default("").StringVar(&f.slice) app.Flag("systemd.journal_path", "Path to the systemd journal").Default("").StringVar(&f.path) } func (f *systemdLogSourceFactory) New(ctx context.Context) (LogSourceCloser, error) { if !f.enable { return nil, nil } log.Println("Reading log events from systemd") j, path, err := newSystemdJournal(f.path) if err != nil { return nil, err } return NewSystemdLogSource(j, path, f.unit, f.slice) } // newSystemdJournal creates a journal handle. 
It returns the handle // and a string representation of it. If `path` is empty, it connects // to the local journald. func newSystemdJournal(path string) (*sdjournal.Journal, string, error) { if path != "" { j, err := sdjournal.NewJournalFromDir(path) return j, path, err } j, err := sdjournal.NewJournal() return j, "journald", err } func init() { RegisterLogSourceFactory(&systemdLogSourceFactory{}) } prometheus-postfix-exporter-0.3.0/logsource_systemd_test.go000066400000000000000000000072251413351107600244140ustar00rootroot00000000000000// +build !nosystemd,linux package main import ( "context" "io" "os" "testing" "time" "github.com/coreos/go-systemd/v22/sdjournal" "github.com/stretchr/testify/assert" ) func TestNewSystemdLogSource(t *testing.T) { j := &fakeSystemdJournal{} src, err := NewSystemdLogSource(j, "apath", "aunit", "aslice") if err != nil { t.Fatalf("NewSystemdLogSource failed: %v", err) } assert.Equal(t, []string{"_SYSTEMD_SLICE=aslice"}, j.addMatchCalls, "A match should be added for slice.") assert.Equal(t, []uint64{1234567890000000}, j.seekRealtimeUsecCalls, "A call to SeekRealtimeUsec should be made.") assert.Equal(t, []time.Duration{1 * time.Second}, j.waitCalls, "A call to Wait should be made.") if err := src.Close(); err != nil { t.Fatalf("Close failed: %v", err) } assert.Equal(t, 1, j.closeCalls, "A call to Close should be made.") } func TestSystemdLogSource_Path(t *testing.T) { j := &fakeSystemdJournal{} src, err := NewSystemdLogSource(j, "apath", "aunit", "aslice") if err != nil { t.Fatalf("NewSystemdLogSource failed: %v", err) } defer src.Close() assert.Equal(t, "apath", src.Path(), "Path should be set by New.") } func TestSystemdLogSource_Read(t *testing.T) { ctx := context.Background() j := &fakeSystemdJournal{ getEntryValues: []sdjournal.JournalEntry{ { Fields: map[string]string{ "_HOSTNAME": "ahost", "SYSLOG_IDENTIFIER": "anid", "_PID": "123", "MESSAGE": "aline", }, RealtimeTimestamp: 1234567890000000, }, }, nextValues: []uint64{1}, } src, err := NewSystemdLogSource(j, "apath", "aunit", "aslice") if err != nil { t.Fatalf("NewSystemdLogSource failed: %v", err) } defer src.Close() s, err := src.Read(ctx) if err != nil { t.Fatalf("Read failed: %v", err) } assert.Equal(t, "Feb 13 23:31:30 ahost anid[123]: aline", s, "Read should get data from the journal entry.") } func TestSystemdLogSource_ReadEOF(t *testing.T) { ctx := context.Background() j := &fakeSystemdJournal{ nextValues: []uint64{0}, } src, err := NewSystemdLogSource(j, "apath", "aunit", "aslice") if err != nil { t.Fatalf("NewSystemdLogSource failed: %v", err) } defer src.Close() _, err = src.Read(ctx) assert.Equal(t, io.EOF, err, "Should interpret Next 0 as EOF.") } func TestMain(m *testing.M) { // We compare Unix timestamps to date strings, so make it deterministic. 
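	// timeNow is the package-level test hook declared in logsource_systemd.go;
	// pinning it here (together with TZ below) keeps the expected
	// SeekRealtimeUsec argument and the formatted timestamps in the
	// assertions deterministic.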
os.Setenv("TZ", "UTC") timeNow = func() time.Time { return time.Date(2009, 2, 13, 23, 31, 30, 0, time.UTC) } defer func() { timeNow = time.Now }() os.Exit(m.Run()) } type fakeSystemdJournal struct { getEntryValues []sdjournal.JournalEntry getEntryError error nextValues []uint64 nextError error addMatchCalls []string closeCalls int seekRealtimeUsecCalls []uint64 waitCalls []time.Duration } func (j *fakeSystemdJournal) AddMatch(match string) error { j.addMatchCalls = append(j.addMatchCalls, match) return nil } func (j *fakeSystemdJournal) Close() error { j.closeCalls++ return nil } func (j *fakeSystemdJournal) GetEntry() (*sdjournal.JournalEntry, error) { if len(j.getEntryValues) == 0 { return nil, j.getEntryError } e := j.getEntryValues[0] j.getEntryValues = j.getEntryValues[1:] return &e, nil } func (j *fakeSystemdJournal) Next() (uint64, error) { if len(j.nextValues) == 0 { return 0, j.nextError } v := j.nextValues[0] j.nextValues = j.nextValues[1:] return v, nil } func (j *fakeSystemdJournal) SeekRealtimeUsec(usec uint64) error { j.seekRealtimeUsecCalls = append(j.seekRealtimeUsecCalls, usec) return nil } func (j *fakeSystemdJournal) Wait(timeout time.Duration) int { j.waitCalls = append(j.waitCalls, timeout) return 0 } prometheus-postfix-exporter-0.3.0/main.go000066400000000000000000000035121413351107600205220ustar00rootroot00000000000000package main import ( "context" "log" "net/http" "os" "github.com/alecthomas/kingpin" "github.com/prometheus/client_golang/prometheus" "github.com/prometheus/client_golang/prometheus/promhttp" ) func main() { var ( ctx = context.Background() app = kingpin.New("postfix_exporter", "Prometheus metrics exporter for postfix") listenAddress = app.Flag("web.listen-address", "Address to listen on for web interface and telemetry.").Default(":9154").String() metricsPath = app.Flag("web.telemetry-path", "Path under which to expose metrics.").Default("/metrics").String() postfixShowqPath = app.Flag("postfix.showq_path", "Path at which Postfix places its showq socket.").Default("/var/spool/postfix/public/showq").String() logUnsupportedLines = app.Flag("log.unsupported", "Log all unsupported lines.").Bool() ) InitLogSourceFactories(app) kingpin.MustParse(app.Parse(os.Args[1:])) logSrc, err := NewLogSourceFromFactories(ctx) if err != nil { log.Fatalf("Error opening log source: %s", err) } defer logSrc.Close() exporter, err := NewPostfixExporter( *postfixShowqPath, logSrc, *logUnsupportedLines, ) if err != nil { log.Fatalf("Failed to create PostfixExporter: %s", err) } prometheus.MustRegister(exporter) http.Handle(*metricsPath, promhttp.Handler()) http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) { _, err = w.Write([]byte(` Postfix Exporter
<html>
<head><title>Postfix Exporter</title></head>
<body>
<h1>Postfix Exporter</h1>
<p><a href='` + *metricsPath + `'>Metrics</a></p>
</body>
</html>
`)) if err != nil { panic(err) } }) ctx, cancelFunc := context.WithCancel(ctx) defer cancelFunc() go exporter.StartMetricCollection(ctx) log.Print("Listening on ", *listenAddress) log.Fatal(http.ListenAndServe(*listenAddress, nil)) } prometheus-postfix-exporter-0.3.0/mock/000077500000000000000000000000001413351107600201775ustar00rootroot00000000000000prometheus-postfix-exporter-0.3.0/mock/HistogramVecMock.go000066400000000000000000000021721413351107600237350ustar00rootroot00000000000000package mock import "github.com/prometheus/client_golang/prometheus" type HistorgramVecMock struct { mock HistogramMock } func (m *HistorgramVecMock) Describe(chan<- *prometheus.Desc) { panic("implement me") } func (m *HistorgramVecMock) GetMetricWith(prometheus.Labels) (prometheus.Observer, error) { panic("implement me") } func (m *HistorgramVecMock) GetMetricWithLabelValues(lvs ...string) (prometheus.Observer, error) { panic("implement me") } func (m *HistorgramVecMock) With(prometheus.Labels) prometheus.Observer { panic("implement me") } func (m *HistorgramVecMock) WithLabelValues(...string) prometheus.Observer { return m.mock } func (m *HistorgramVecMock) CurryWith(prometheus.Labels) (prometheus.ObserverVec, error) { panic("implement me") } func (m *HistorgramVecMock) MustCurryWith(prometheus.Labels) prometheus.ObserverVec { panic("implement me") } func (m *HistorgramVecMock) Collect(chan<- prometheus.Metric) { panic("implement me") } func (m *HistorgramVecMock) GetSum() float64 { return *m.mock.sum } func NewHistogramVecMock() *HistorgramVecMock { return &HistorgramVecMock{mock: *NewHistogramMock()} } prometheus-postfix-exporter-0.3.0/mock/HistorgramMock.go000066400000000000000000000012051413351107600234550ustar00rootroot00000000000000package mock import ( "github.com/prometheus/client_golang/prometheus" "github.com/prometheus/client_model/go" ) type HistogramMock struct { sum *float64 } func NewHistogramMock() *HistogramMock { return &HistogramMock{sum: new(float64)} } func (HistogramMock) Desc() *prometheus.Desc { panic("implement me") } func (HistogramMock) Write(*io_prometheus_client.Metric) error { panic("implement me") } func (HistogramMock) Describe(chan<- *prometheus.Desc) { panic("implement me") } func (HistogramMock) Collect(chan<- prometheus.Metric) { panic("implement me") } func (h HistogramMock) Observe(value float64) { *h.sum += value } prometheus-postfix-exporter-0.3.0/postfix_exporter.go000066400000000000000000000604611413351107600232300ustar00rootroot00000000000000// Copyright 2017 Kumina, https://kumina.nl/ // Licensed under the Apache License, Version 2.0 (the "License"); // you may not use this file except in compliance with the License. // You may obtain a copy of the License at // // http://www.apache.org/licenses/LICENSE-2.0 // // Unless required by applicable law or agreed to in writing, software // distributed under the License is distributed on an "AS IS" BASIS, // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. // See the License for the specific language governing permissions and // limitations under the License. 
package main import ( "bufio" "bytes" "context" "errors" "fmt" "io" "log" "net" "regexp" "strconv" "strings" "time" "github.com/prometheus/client_golang/prometheus" ) var ( postfixUpDesc = prometheus.NewDesc( prometheus.BuildFQName("postfix", "", "up"), "Whether scraping Postfix's metrics was successful.", []string{"path"}, nil) ) // PostfixExporter holds the state that should be preserved by the // Postfix Prometheus metrics exporter across scrapes. type PostfixExporter struct { showqPath string logSrc LogSource logUnsupportedLines bool // Metrics that should persist after refreshes, based on logs. cleanupProcesses prometheus.Counter cleanupRejects prometheus.Counter cleanupNotAccepted prometheus.Counter lmtpDelays *prometheus.HistogramVec pipeDelays *prometheus.HistogramVec qmgrInsertsNrcpt prometheus.Histogram qmgrInsertsSize prometheus.Histogram qmgrRemoves prometheus.Counter smtpDelays *prometheus.HistogramVec smtpTLSConnects *prometheus.CounterVec smtpConnectionTimedOut prometheus.Counter smtpDeferreds prometheus.Counter smtpdConnects prometheus.Counter smtpdDisconnects prometheus.Counter smtpdFCrDNSErrors prometheus.Counter smtpdLostConnections *prometheus.CounterVec smtpdProcesses *prometheus.CounterVec smtpdRejects *prometheus.CounterVec smtpdSASLAuthenticationFailures prometheus.Counter smtpdTLSConnects *prometheus.CounterVec unsupportedLogEntries *prometheus.CounterVec smtpStatusDeferred prometheus.Counter opendkimSignatureAdded *prometheus.CounterVec } // A LogSource is an interface to read log lines. type LogSource interface { // Path returns a representation of the log location. Path() string // Read returns the next log line. Returns `io.EOF` at the end of // the log. Read(context.Context) (string, error) } // CollectShowqFromReader parses the output of Postfix's 'showq' command // and turns it into metrics. // // The output format of this command depends on the version of Postfix // used. Postfix 2.x uses a textual format, identical to the output of // the 'mailq' command. Postfix 3.x uses a binary format, where entries // are terminated using null bytes. Auto-detect the format by scanning // for null bytes in the first 128 bytes of output. func CollectShowqFromReader(file io.Reader, ch chan<- prometheus.Metric) error { reader := bufio.NewReader(file) buf, err := reader.Peek(128) if err != nil && err != io.EOF { log.Printf("Could not read postfix output, %v", err) } if bytes.IndexByte(buf, 0) >= 0 { return CollectBinaryShowqFromReader(reader, ch) } return CollectTextualShowqFromReader(reader, ch) } // CollectTextualShowqFromReader parses Postfix's textual showq output. func CollectTextualShowqFromReader(file io.Reader, ch chan<- prometheus.Metric) error { // Histograms tracking the messages by size and age. 
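	// Note: these histogram vectors are local to this scrape. The queue
	// contents are re-observed on every call and the results are handed to
	// the metrics channel via Collect() at the end, rather than being
	// accumulated across scrapes.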
sizeHistogram := prometheus.NewHistogramVec( prometheus.HistogramOpts{ Namespace: "postfix", Name: "showq_message_size_bytes", Help: "Size of messages in Postfix's message queue, in bytes", Buckets: []float64{1e3, 1e4, 1e5, 1e6, 1e7, 1e8, 1e9}, }, []string{"queue"}) ageHistogram := prometheus.NewHistogramVec( prometheus.HistogramOpts{ Namespace: "postfix", Name: "showq_message_age_seconds", Help: "Age of messages in Postfix's message queue, in seconds", Buckets: []float64{1e1, 1e2, 1e3, 1e4, 1e5, 1e6, 1e7, 1e8}, }, []string{"queue"}) err := CollectTextualShowqFromScanner(sizeHistogram, ageHistogram, file) sizeHistogram.Collect(ch) ageHistogram.Collect(ch) return err } func CollectTextualShowqFromScanner(sizeHistogram prometheus.ObserverVec, ageHistogram prometheus.ObserverVec, file io.Reader) error { scanner := bufio.NewScanner(file) scanner.Split(bufio.ScanLines) // Initialize all queue buckets to zero. for _, q := range []string{"active", "hold", "other"} { sizeHistogram.WithLabelValues(q) ageHistogram.WithLabelValues(q) } location, err := time.LoadLocation("Local") if err != nil { log.Println(err) } // Regular expression for matching postqueue's output. Example: // "A07A81514 5156 Tue Feb 14 13:13:54 MAILER-DAEMON" messageLine := regexp.MustCompile(`^[0-9A-F]+([\*!]?) +(\d+) (\w{3} \w{3} +\d+ +\d+:\d{2}:\d{2}) +`) for scanner.Scan() { text := scanner.Text() matches := messageLine.FindStringSubmatch(text) if matches == nil { continue } queueMatch := matches[1] sizeMatch := matches[2] dateMatch := matches[3] // Derive the name of the message queue. queue := "other" if queueMatch == "*" { queue = "active" } else if queueMatch == "!" { queue = "hold" } // Parse the message size. size, err := strconv.ParseFloat(sizeMatch, 64) if err != nil { return err } // Parse the message date. Unfortunately, the // output contains no year number. Assume it // applies to the last year for which the // message date doesn't exceed time.Now(). date, err := time.ParseInLocation("Mon Jan 2 15:04:05", dateMatch, location) if err != nil { return err } now := time.Now() date = date.AddDate(now.Year(), 0, 0) if date.After(now) { date = date.AddDate(-1, 0, 0) } sizeHistogram.WithLabelValues(queue).Observe(size) ageHistogram.WithLabelValues(queue).Observe(now.Sub(date).Seconds()) } return scanner.Err() } // ScanNullTerminatedEntries is a splitting function for bufio.Scanner // to split entries by null bytes. func ScanNullTerminatedEntries(data []byte, atEOF bool) (advance int, token []byte, err error) { if i := bytes.IndexByte(data, 0); i >= 0 { // Valid record found. return i + 1, data[0:i], nil } else if atEOF && len(data) != 0 { // Data at the end of the file without a null terminator. return 0, nil, errors.New("Expected null byte terminator") } else { // Request more data. return 0, nil, nil } } // CollectBinaryShowqFromReader parses Postfix's binary showq format. func CollectBinaryShowqFromReader(file io.Reader, ch chan<- prometheus.Metric) error { scanner := bufio.NewScanner(file) scanner.Split(ScanNullTerminatedEntries) // Histograms tracking the messages by size and age. 
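	// The binary showq stream consists of null-terminated strings forming
	// key/value pairs (queue_name, size, time, ...); an empty key acts as
	// the record separator between messages. The loop below relies on this
	// to reset the queue name between records.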
// CollectBinaryShowqFromReader parses Postfix's binary showq format.
func CollectBinaryShowqFromReader(file io.Reader, ch chan<- prometheus.Metric) error {
	scanner := bufio.NewScanner(file)
	scanner.Split(ScanNullTerminatedEntries)

	// Histograms tracking the messages by size and age.
	sizeHistogram := prometheus.NewHistogramVec(
		prometheus.HistogramOpts{
			Namespace: "postfix",
			Name:      "showq_message_size_bytes",
			Help:      "Size of messages in Postfix's message queue, in bytes",
			Buckets:   []float64{1e3, 1e4, 1e5, 1e6, 1e7, 1e8, 1e9},
		},
		[]string{"queue"})
	ageHistogram := prometheus.NewHistogramVec(
		prometheus.HistogramOpts{
			Namespace: "postfix",
			Name:      "showq_message_age_seconds",
			Help:      "Age of messages in Postfix's message queue, in seconds",
			Buckets:   []float64{1e1, 1e2, 1e3, 1e4, 1e5, 1e6, 1e7, 1e8},
		},
		[]string{"queue"})

	// Initialize all queue buckets to zero.
	for _, q := range []string{"active", "deferred", "hold", "incoming", "maildrop"} {
		sizeHistogram.WithLabelValues(q)
		ageHistogram.WithLabelValues(q)
	}

	now := float64(time.Now().UnixNano()) / 1e9
	queue := "unknown"
	for scanner.Scan() {
		// Parse a key/value entry.
		key := scanner.Text()
		if len(key) == 0 {
			// Empty key means a record separator.
			queue = "unknown"
			continue
		}
		if !scanner.Scan() {
			return fmt.Errorf("key %q does not have a value", key)
		}
		value := scanner.Text()

		if key == "queue_name" {
			// The name of the message queue.
			queue = value
		} else if key == "size" {
			// Message size in bytes.
			size, err := strconv.ParseFloat(value, 64)
			if err != nil {
				return err
			}
			sizeHistogram.WithLabelValues(queue).Observe(size)
		} else if key == "time" {
			// Message time as a UNIX timestamp.
			utime, err := strconv.ParseFloat(value, 64)
			if err != nil {
				return err
			}
			ageHistogram.WithLabelValues(queue).Observe(now - utime)
		}
	}

	sizeHistogram.Collect(ch)
	ageHistogram.Collect(ch)
	return scanner.Err()
}

// CollectShowqFromSocket collects Postfix queue statistics from a socket.
func CollectShowqFromSocket(path string, ch chan<- prometheus.Metric) error {
	fd, err := net.Dial("unix", path)
	if err != nil {
		return err
	}
	defer fd.Close()
	return CollectShowqFromReader(fd, ch)
}

// Patterns for parsing log messages.
var (
	logLine                             = regexp.MustCompile(` ?(postfix|opendkim)(/(\w+))?\[\d+\]: (.*)`)
	lmtpPipeSMTPLine                    = regexp.MustCompile(`, relay=(\S+), .*, delays=([0-9\.]+)/([0-9\.]+)/([0-9\.]+)/([0-9\.]+), `)
	qmgrInsertLine                      = regexp.MustCompile(`:.*, size=(\d+), nrcpt=(\d+) `)
	smtpStatusDeferredLine              = regexp.MustCompile(`, status=deferred`)
	smtpTLSLine                         = regexp.MustCompile(`^(\S+) TLS connection established to \S+: (\S+) with cipher (\S+) \((\d+)/(\d+) bits\)`)
	smtpConnectionTimedOut              = regexp.MustCompile(`^connect\s+to\s+(.*)\[(.*)\]:(\d+):\s+(Connection timed out)$`)
	smtpdFCrDNSErrorsLine               = regexp.MustCompile(`^warning: hostname \S+ does not resolve to address `)
	smtpdProcessesSASLLine              = regexp.MustCompile(`: client=.*, sasl_method=(\S+)`)
	smtpdRejectsLine                    = regexp.MustCompile(`^NOQUEUE: reject: RCPT from \S+: ([0-9]+) `)
	smtpdLostConnectionLine             = regexp.MustCompile(`^lost connection after (\w+) from `)
	smtpdSASLAuthenticationFailuresLine = regexp.MustCompile(`^warning: \S+: SASL \S+ authentication failed: `)
	smtpdTLSLine                        = regexp.MustCompile(`^(\S+) TLS connection established from \S+: (\S+) with cipher (\S+) \((\d+)/(\d+) bits\)`)
	opendkimSignatureAdded              = regexp.MustCompile(`^[\w\d]+: DKIM-Signature field added \(s=(\w+), d=(.*)\)$`)
)
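// Illustrative sketch (not part of the original source): what the logLine pattern
// extracts from a typical syslog line. The sample line is taken from the test data
// below; the function name is an assumption for demonstration only.
func exampleLogLinePattern() {
	sample := "Feb 11 16:49:24 letterman postfix/qmgr[8204]: AAB4D259B1: removed"
	if m := logLine.FindStringSubmatch(sample); m != nil {
		// m[1] is the daemon ("postfix"), m[3] the subprocess ("qmgr"),
		// and m[4] the remainder ("AAB4D259B1: removed").
		log.Printf("daemon=%s subprocess=%s remainder=%q", m[1], m[3], m[4])
	}
}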
// CollectFromLogLine collects metrics from a Postfix log line.
func (e *PostfixExporter) CollectFromLogLine(line string) {
	// Strip off timestamp, hostname, etc.
	logMatches := logLine.FindStringSubmatch(line)
	if logMatches == nil {
		// Unknown log entry format.
		e.addToUnsupportedLine(line, "")
		return
	}
	process := logMatches[1]
	remainder := logMatches[4]
	switch process {
	case "postfix":
		// Group patterns to check by Postfix service.
		subprocess := logMatches[3]
		switch subprocess {
		case "cleanup":
			if strings.Contains(remainder, ": message-id=<") {
				e.cleanupProcesses.Inc()
			} else if strings.Contains(remainder, ": reject: ") {
				e.cleanupRejects.Inc()
			} else {
				e.addToUnsupportedLine(line, subprocess)
			}
		case "lmtp":
			if lmtpMatches := lmtpPipeSMTPLine.FindStringSubmatch(remainder); lmtpMatches != nil {
				addToHistogramVec(e.lmtpDelays, lmtpMatches[2], "LMTP pdelay", "before_queue_manager")
				addToHistogramVec(e.lmtpDelays, lmtpMatches[3], "LMTP adelay", "queue_manager")
				addToHistogramVec(e.lmtpDelays, lmtpMatches[4], "LMTP sdelay", "connection_setup")
				addToHistogramVec(e.lmtpDelays, lmtpMatches[5], "LMTP xdelay", "transmission")
			} else {
				e.addToUnsupportedLine(line, subprocess)
			}
		case "pipe":
			if pipeMatches := lmtpPipeSMTPLine.FindStringSubmatch(remainder); pipeMatches != nil {
				addToHistogramVec(e.pipeDelays, pipeMatches[2], "PIPE pdelay", pipeMatches[1], "before_queue_manager")
				addToHistogramVec(e.pipeDelays, pipeMatches[3], "PIPE adelay", pipeMatches[1], "queue_manager")
				addToHistogramVec(e.pipeDelays, pipeMatches[4], "PIPE sdelay", pipeMatches[1], "connection_setup")
				addToHistogramVec(e.pipeDelays, pipeMatches[5], "PIPE xdelay", pipeMatches[1], "transmission")
			} else {
				e.addToUnsupportedLine(line, subprocess)
			}
		case "qmgr":
			if qmgrInsertMatches := qmgrInsertLine.FindStringSubmatch(remainder); qmgrInsertMatches != nil {
				addToHistogram(e.qmgrInsertsSize, qmgrInsertMatches[1], "QMGR size")
				addToHistogram(e.qmgrInsertsNrcpt, qmgrInsertMatches[2], "QMGR nrcpt")
			} else if strings.HasSuffix(remainder, ": removed") {
				e.qmgrRemoves.Inc()
			} else {
				e.addToUnsupportedLine(line, subprocess)
			}
		case "smtp":
			if smtpMatches := lmtpPipeSMTPLine.FindStringSubmatch(remainder); smtpMatches != nil {
				addToHistogramVec(e.smtpDelays, smtpMatches[2], "before_queue_manager", "")
				addToHistogramVec(e.smtpDelays, smtpMatches[3], "queue_manager", "")
				addToHistogramVec(e.smtpDelays, smtpMatches[4], "connection_setup", "")
				addToHistogramVec(e.smtpDelays, smtpMatches[5], "transmission", "")
				if smtpMatches := smtpStatusDeferredLine.FindStringSubmatch(remainder); smtpMatches != nil {
					e.smtpStatusDeferred.Inc()
				}
			} else if smtpTLSMatches := smtpTLSLine.FindStringSubmatch(remainder); smtpTLSMatches != nil {
				e.smtpTLSConnects.WithLabelValues(smtpTLSMatches[1:]...).Inc()
			} else if smtpMatches := smtpConnectionTimedOut.FindStringSubmatch(remainder); smtpMatches != nil {
				e.smtpConnectionTimedOut.Inc()
			} else {
				e.addToUnsupportedLine(line, subprocess)
			}
		case "smtpd":
			if strings.HasPrefix(remainder, "connect from ") {
				e.smtpdConnects.Inc()
			} else if strings.HasPrefix(remainder, "disconnect from ") {
				e.smtpdDisconnects.Inc()
			} else if smtpdFCrDNSErrorsLine.MatchString(remainder) {
				e.smtpdFCrDNSErrors.Inc()
			} else if smtpdLostConnectionMatches := smtpdLostConnectionLine.FindStringSubmatch(remainder); smtpdLostConnectionMatches != nil {
				e.smtpdLostConnections.WithLabelValues(smtpdLostConnectionMatches[1]).Inc()
			} else if smtpdProcessesSASLMatches := smtpdProcessesSASLLine.FindStringSubmatch(remainder); smtpdProcessesSASLMatches != nil {
				e.smtpdProcesses.WithLabelValues(smtpdProcessesSASLMatches[1]).Inc()
			} else if strings.Contains(remainder, ": client=") {
				e.smtpdProcesses.WithLabelValues("").Inc()
			} else if smtpdRejectsMatches := smtpdRejectsLine.FindStringSubmatch(remainder); smtpdRejectsMatches != nil {
				e.smtpdRejects.WithLabelValues(smtpdRejectsMatches[1]).Inc()
			} else if smtpdSASLAuthenticationFailuresLine.MatchString(remainder) {
				e.smtpdSASLAuthenticationFailures.Inc()
			} else if smtpdTLSMatches := smtpdTLSLine.FindStringSubmatch(remainder); smtpdTLSMatches != nil {
				e.smtpdTLSConnects.WithLabelValues(smtpdTLSMatches[1:]...).Inc()
			} else {
				e.addToUnsupportedLine(line, subprocess)
			}
		default:
			e.addToUnsupportedLine(line, subprocess)
		}
	case "opendkim":
		if opendkimMatches := opendkimSignatureAdded.FindStringSubmatch(remainder); opendkimMatches != nil {
			e.opendkimSignatureAdded.WithLabelValues(opendkimMatches[1], opendkimMatches[2]).Inc()
		} else {
			e.addToUnsupportedLine(line, process)
		}
	default:
		// Unknown log entry format.
		e.addToUnsupportedLine(line, "")
	}
}

func (e *PostfixExporter) addToUnsupportedLine(line string, subprocess string) {
	if e.logUnsupportedLines {
		log.Printf("Unsupported Line: %v", line)
	}
	e.unsupportedLogEntries.WithLabelValues(subprocess).Inc()
}

func addToHistogram(h prometheus.Histogram, value, fieldName string) {
	float, err := strconv.ParseFloat(value, 64)
	if err != nil {
		log.Printf("Couldn't convert value '%s' for %v: %v", value, fieldName, err)
	}
	h.Observe(float)
}

func addToHistogramVec(h *prometheus.HistogramVec, value, fieldName string, labels ...string) {
	float, err := strconv.ParseFloat(value, 64)
	if err != nil {
		log.Printf("Couldn't convert value '%s' for %v: %v", value, fieldName, err)
	}
	h.WithLabelValues(labels...).Observe(float)
}

// NewPostfixExporter creates a new Postfix exporter instance.
func NewPostfixExporter(showqPath string, logSrc LogSource, logUnsupportedLines bool) (*PostfixExporter, error) {
	timeBuckets := []float64{1e-3, 1e-2, 1e-1, 1.0, 10, 1 * 60, 1 * 60 * 60, 24 * 60 * 60, 2 * 24 * 60 * 60}
	return &PostfixExporter{
		logUnsupportedLines: logUnsupportedLines,
		showqPath:           showqPath,
		logSrc:              logSrc,

		cleanupProcesses: prometheus.NewCounter(prometheus.CounterOpts{
			Namespace: "postfix",
			Name:      "cleanup_messages_processed_total",
			Help:      "Total number of messages processed by cleanup.",
		}),
		cleanupRejects: prometheus.NewCounter(prometheus.CounterOpts{
			Namespace: "postfix",
			Name:      "cleanup_messages_rejected_total",
			Help:      "Total number of messages rejected by cleanup.",
		}),
		cleanupNotAccepted: prometheus.NewCounter(prometheus.CounterOpts{
			Namespace: "postfix",
			Name:      "cleanup_messages_not_accepted_total",
			Help:      "Total number of messages not accepted by cleanup.",
		}),
		lmtpDelays: prometheus.NewHistogramVec(
			prometheus.HistogramOpts{
				Namespace: "postfix",
				Name:      "lmtp_delivery_delay_seconds",
				Help:      "LMTP message processing time in seconds.",
				Buckets:   timeBuckets,
			},
			[]string{"stage"}),
		pipeDelays: prometheus.NewHistogramVec(
			prometheus.HistogramOpts{
				Namespace: "postfix",
				Name:      "pipe_delivery_delay_seconds",
				Help:      "Pipe message processing time in seconds.",
				Buckets:   timeBuckets,
			},
			[]string{"relay", "stage"}),
		qmgrInsertsNrcpt: prometheus.NewHistogram(prometheus.HistogramOpts{
			Namespace: "postfix",
			Name:      "qmgr_messages_inserted_receipients",
			Help:      "Number of recipients per message inserted into the mail queues.",
			Buckets:   []float64{1, 2, 4, 8, 16, 32, 64, 128},
		}),
		qmgrInsertsSize: prometheus.NewHistogram(prometheus.HistogramOpts{
			Namespace: "postfix",
			Name:      "qmgr_messages_inserted_size_bytes",
			Help:      "Size of messages inserted into the mail queues in bytes.",
			Buckets:   []float64{1e3, 1e4, 1e5, 1e6, 1e7, 1e8, 1e9},
		}),
		qmgrRemoves: prometheus.NewCounter(prometheus.CounterOpts{
			Namespace: "postfix",
			Name:      "qmgr_messages_removed_total",
			Help:      "Total number of messages removed from mail queues.",
		}),
		smtpDelays: prometheus.NewHistogramVec(
			prometheus.HistogramOpts{
				Namespace: "postfix",
				Name:      "smtp_delivery_delay_seconds",
				Help:      "SMTP message processing time in seconds.",
				Buckets:   timeBuckets,
			},
			[]string{"stage"}),
		smtpTLSConnects: prometheus.NewCounterVec(
			prometheus.CounterOpts{
				Namespace: "postfix",
				Name:      "smtp_tls_connections_total",
				Help:      "Total number of outgoing TLS connections.",
			},
			[]string{"trust", "protocol", "cipher", "secret_bits", "algorithm_bits"}),
		smtpDeferreds: prometheus.NewCounter(prometheus.CounterOpts{
			Namespace: "postfix",
			Name:      "smtp_deferred_messages_total",
			Help:      "Total number of messages that have been deferred on SMTP.",
		}),
		smtpConnectionTimedOut: prometheus.NewCounter(prometheus.CounterOpts{
			Namespace: "postfix",
			Name:      "smtp_connection_timed_out_total",
			Help:      "Total number of connections to remote SMTP servers that timed out.",
		}),
		smtpdConnects: prometheus.NewCounter(prometheus.CounterOpts{
			Namespace: "postfix",
			Name:      "smtpd_connects_total",
			Help:      "Total number of incoming connections.",
		}),
		smtpdDisconnects: prometheus.NewCounter(prometheus.CounterOpts{
			Namespace: "postfix",
			Name:      "smtpd_disconnects_total",
			Help:      "Total number of incoming disconnections.",
		}),
		smtpdFCrDNSErrors: prometheus.NewCounter(prometheus.CounterOpts{
			Namespace: "postfix",
			Name:      "smtpd_forward_confirmed_reverse_dns_errors_total",
			Help:      "Total number of connections for which forward-confirmed DNS cannot be resolved.",
		}),
		smtpdLostConnections: prometheus.NewCounterVec(
			prometheus.CounterOpts{
				Namespace: "postfix",
				Name:      "smtpd_connections_lost_total",
				Help:      "Total number of connections lost.",
			},
			[]string{"after_stage"}),
		smtpdProcesses: prometheus.NewCounterVec(
			prometheus.CounterOpts{
				Namespace: "postfix",
				Name:      "smtpd_messages_processed_total",
				Help:      "Total number of messages processed.",
			},
			[]string{"sasl_method"}),
		smtpdRejects: prometheus.NewCounterVec(
			prometheus.CounterOpts{
				Namespace: "postfix",
				Name:      "smtpd_messages_rejected_total",
				Help:      "Total number of NOQUEUE rejects.",
			},
			[]string{"code"}),
		smtpdSASLAuthenticationFailures: prometheus.NewCounter(prometheus.CounterOpts{
			Namespace: "postfix",
			Name:      "smtpd_sasl_authentication_failures_total",
			Help:      "Total number of SASL authentication failures.",
		}),
		smtpdTLSConnects: prometheus.NewCounterVec(
			prometheus.CounterOpts{
				Namespace: "postfix",
				Name:      "smtpd_tls_connections_total",
				Help:      "Total number of incoming TLS connections.",
			},
			[]string{"trust", "protocol", "cipher", "secret_bits", "algorithm_bits"}),
		unsupportedLogEntries: prometheus.NewCounterVec(
			prometheus.CounterOpts{
				Namespace: "postfix",
				Name:      "unsupported_log_entries_total",
				Help:      "Log entries that could not be processed.",
			},
			[]string{"service"}),
		smtpStatusDeferred: prometheus.NewCounter(prometheus.CounterOpts{
			Namespace: "postfix",
			Name:      "smtp_status_deferred",
			Help:      "Total number of messages deferred.",
		}),
		opendkimSignatureAdded: prometheus.NewCounterVec(
			prometheus.CounterOpts{
				Namespace: "opendkim",
				Name:      "signatures_added_total",
				Help:      "Total number of messages signed.",
			},
			[]string{"subject", "domain"},
		),
	}, nil
}
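// Illustrative usage sketch (not part of the original source): a queue-only exporter.
// With a nil LogSource, Describe and Collect skip the log-derived metrics and only the
// showq socket is scraped. The socket path and function name are assumptions.
func exampleNewPostfixExporter() {
	exporter, err := NewPostfixExporter("/var/spool/postfix/public/showq", nil, false)
	if err != nil {
		log.Fatalf("failed to create exporter: %v", err)
	}
	prometheus.MustRegister(exporter)
}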
// Describe the Prometheus metrics that are going to be exported.
func (e *PostfixExporter) Describe(ch chan<- *prometheus.Desc) {
	ch <- postfixUpDesc
	if e.logSrc == nil {
		return
	}
	ch <- e.cleanupProcesses.Desc()
	ch <- e.cleanupRejects.Desc()
	ch <- e.cleanupNotAccepted.Desc()
	e.lmtpDelays.Describe(ch)
	e.pipeDelays.Describe(ch)
	ch <- e.qmgrInsertsNrcpt.Desc()
	ch <- e.qmgrInsertsSize.Desc()
	ch <- e.qmgrRemoves.Desc()
	e.smtpDelays.Describe(ch)
	e.smtpTLSConnects.Describe(ch)
	ch <- e.smtpDeferreds.Desc()
	ch <- e.smtpdConnects.Desc()
	ch <- e.smtpdDisconnects.Desc()
	ch <- e.smtpdFCrDNSErrors.Desc()
	e.smtpdLostConnections.Describe(ch)
	e.smtpdProcesses.Describe(ch)
	e.smtpdRejects.Describe(ch)
	ch <- e.smtpdSASLAuthenticationFailures.Desc()
	e.smtpdTLSConnects.Describe(ch)
	ch <- e.smtpStatusDeferred.Desc()
	e.unsupportedLogEntries.Describe(ch)
	e.smtpConnectionTimedOut.Describe(ch)
	e.opendkimSignatureAdded.Describe(ch)
}

func (e *PostfixExporter) StartMetricCollection(ctx context.Context) {
	if e.logSrc == nil {
		return
	}

	gaugeVec := prometheus.NewGaugeVec(
		prometheus.GaugeOpts{
			Namespace: "postfix",
			Subsystem: "",
			Name:      "up",
			Help:      "Whether scraping Postfix's metrics was successful.",
		},
		[]string{"path"})
	gauge := gaugeVec.WithLabelValues(e.logSrc.Path())
	defer gauge.Set(0)

	for {
		line, err := e.logSrc.Read(ctx)
		if err != nil {
			if err != io.EOF {
				log.Printf("Couldn't read journal: %v", err)
			}
			return
		}
		e.CollectFromLogLine(line)
		gauge.Set(1)
	}
}

// Collect metrics from Postfix's showq socket and its log file.
func (e *PostfixExporter) Collect(ch chan<- prometheus.Metric) {
	err := CollectShowqFromSocket(e.showqPath, ch)
	if err == nil {
		ch <- prometheus.MustNewConstMetric(
			postfixUpDesc,
			prometheus.GaugeValue,
			1.0,
			e.showqPath)
	} else {
		log.Printf("Failed to scrape showq socket: %s", err)
		ch <- prometheus.MustNewConstMetric(
			postfixUpDesc,
			prometheus.GaugeValue,
			0.0,
			e.showqPath)
	}

	if e.logSrc == nil {
		return
	}
	ch <- e.cleanupProcesses
	ch <- e.cleanupRejects
	ch <- e.cleanupNotAccepted
	e.lmtpDelays.Collect(ch)
	e.pipeDelays.Collect(ch)
	ch <- e.qmgrInsertsNrcpt
	ch <- e.qmgrInsertsSize
	ch <- e.qmgrRemoves
	e.smtpDelays.Collect(ch)
	e.smtpTLSConnects.Collect(ch)
	ch <- e.smtpDeferreds
	ch <- e.smtpdConnects
	ch <- e.smtpdDisconnects
	ch <- e.smtpdFCrDNSErrors
	e.smtpdLostConnections.Collect(ch)
	e.smtpdProcesses.Collect(ch)
	e.smtpdRejects.Collect(ch)
	ch <- e.smtpdSASLAuthenticationFailures
	e.smtpdTLSConnects.Collect(ch)
	ch <- e.smtpStatusDeferred
	e.unsupportedLogEntries.Collect(ch)
	ch <- e.smtpConnectionTimedOut
	e.opendkimSignatureAdded.Collect(ch)
}
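// Illustrative lifecycle sketch (not part of the original source): tailing a LogSource
// until shutdown. The function name, the ten-second deadline and the use of a separate
// registry are assumptions for demonstration only.
func exampleStartMetricCollection(logSrc LogSource) {
	exporter, err := NewPostfixExporter("/var/spool/postfix/public/showq", logSrc, true)
	if err != nil {
		log.Fatalf("failed to create exporter: %v", err)
	}
	registry := prometheus.NewRegistry()
	registry.MustRegister(exporter)

	// StartMetricCollection blocks while reading the log source, so run it in the
	// background and stop it by cancelling the context.
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()
	go exporter.StartMetricCollection(ctx)

	// A scrape through the registry picks up whatever has been parsed so far.
	if families, err := registry.Gather(); err == nil {
		log.Printf("gathered %d metric families", len(families))
	}
}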
prometheus-postfix-exporter-0.3.0/postfix_exporter_test.go

package main

import (
	"testing"

	"github.com/prometheus/client_golang/prometheus"
	io_prometheus_client "github.com/prometheus/client_model/go"
	"github.com/stretchr/testify/assert"
)

func TestPostfixExporter_CollectFromLogline(t *testing.T) {
	type fields struct {
		showqPath                       string
		logSrc                          LogSource
		cleanupProcesses                prometheus.Counter
		cleanupRejects                  prometheus.Counter
		cleanupNotAccepted              prometheus.Counter
		lmtpDelays                      *prometheus.HistogramVec
		pipeDelays                      *prometheus.HistogramVec
		qmgrInsertsNrcpt                prometheus.Histogram
		qmgrInsertsSize                 prometheus.Histogram
		qmgrRemoves                     prometheus.Counter
		smtpDelays                      *prometheus.HistogramVec
		smtpTLSConnects                 *prometheus.CounterVec
		smtpDeferreds                   prometheus.Counter
		smtpdConnects                   prometheus.Counter
		smtpdDisconnects                prometheus.Counter
		smtpdFCrDNSErrors               prometheus.Counter
		smtpdLostConnections            *prometheus.CounterVec
		smtpdProcesses                  *prometheus.CounterVec
		smtpdRejects                    *prometheus.CounterVec
		smtpdSASLAuthenticationFailures prometheus.Counter
		smtpdTLSConnects                *prometheus.CounterVec
		unsupportedLogEntries           *prometheus.CounterVec
	}
	type args struct {
		line                   []string
		removedCount           int
		saslFailedCount        int
		outgoingTLS            int
		smtpdMessagesProcessed int
	}
	tests := []struct {
		name   string
		fields fields
		args   args
	}{
		{
			name: "Single line",
			args: args{
				line: []string{
					"Feb 11 16:49:24 letterman postfix/qmgr[8204]: AAB4D259B1: removed",
				},
				removedCount:    1,
				saslFailedCount: 0,
			},
			fields: fields{
				qmgrRemoves:           prometheus.NewCounter(prometheus.CounterOpts{}),
				unsupportedLogEntries: prometheus.NewCounterVec(prometheus.CounterOpts{}, []string{"process"}),
			},
		},
		{
			name: "Multiple lines",
			args: args{
				line: []string{
					"Feb 11 16:49:24 letterman postfix/qmgr[8204]: AAB4D259B1: removed",
					"Feb 11 16:49:24 letterman postfix/qmgr[8204]: C2032259E6: removed",
					"Feb 11 16:49:24 letterman postfix/qmgr[8204]: B83C4257DC: removed",
					"Feb 11 16:49:24 letterman postfix/qmgr[8204]: 721BE256EA: removed",
					"Feb 11 16:49:25 letterman postfix/qmgr[8204]: CA94A259EB: removed",
					"Feb 11 16:49:25 letterman postfix/qmgr[8204]: AC1E3259E1: removed",
					"Feb 11 16:49:25 letterman postfix/qmgr[8204]: D114D221E3: removed",
					"Feb 11 16:49:25 letterman postfix/qmgr[8204]: A55F82104D: removed",
					"Feb 11 16:49:25 letterman postfix/qmgr[8204]: D6DAA259BC: removed",
					"Feb 11 16:49:25 letterman postfix/qmgr[8204]: E3908259F0: removed",
					"Feb 11 16:49:25 letterman postfix/qmgr[8204]: 0CBB8259BF: removed",
					"Feb 11 16:49:25 letterman postfix/qmgr[8204]: EA3AD259F2: removed",
					"Feb 11 16:49:25 letterman postfix/qmgr[8204]: DDEF824B48: removed",
					"Feb 11 16:49:26 letterman postfix/qmgr[8204]: 289AF21DB9: removed",
					"Feb 11 16:49:26 letterman postfix/qmgr[8204]: 6192B260E8: removed",
					"Feb 11 16:49:26 letterman postfix/qmgr[8204]: F2831259F4: removed",
					"Feb 11 16:49:26 letterman postfix/qmgr[8204]: 09D60259F8: removed",
					"Feb 11 16:49:26 letterman postfix/qmgr[8204]: 13A19259FA: removed",
					"Feb 11 16:49:26 letterman postfix/qmgr[8204]: 2D42722065: removed",
					"Feb 11 16:49:26 letterman postfix/qmgr[8204]: 746E325A0E: removed",
					"Feb 11 16:49:26 letterman postfix/qmgr[8204]: 4D2F125A02: removed",
					"Feb 11 16:49:26 letterman postfix/qmgr[8204]: E30BC259EF: removed",
					"Feb 11 16:49:26 letterman postfix/qmgr[8204]: DC88924DA1: removed",
					"Feb 11 16:49:26 letterman postfix/qmgr[8204]: 2164B259FD: removed",
					"Feb 11 16:49:26 letterman postfix/qmgr[8204]: 8C30525A14: removed",
					"Feb 11 16:49:26 letterman postfix/qmgr[8204]: 8DCCE25A15: removed",
					"Feb 11 16:49:26 letterman postfix/qmgr[8204]: C5217255D5: removed",
					"Feb 11 16:49:27 letterman postfix/qmgr[8204]: D8EE625A28: removed",
					"Feb 11 16:49:27 letterman postfix/qmgr[8204]: 9AD7C25A19: removed",
					"Feb 11 16:49:27 letterman postfix/qmgr[8204]: D0EEE2596C: removed",
					"Feb 11 16:49:27 letterman postfix/qmgr[8204]: DFE732172E: removed",
				},
				removedCount:    31,
				saslFailedCount: 0,
			},
			fields: fields{
				qmgrRemoves:           prometheus.NewCounter(prometheus.CounterOpts{}),
				unsupportedLogEntries: prometheus.NewCounterVec(prometheus.CounterOpts{}, []string{"process"}),
			},
		},
		{
			name: "SASL Failed",
			args: args{
				line: []string{
					"Apr 26 10:55:19 tcc1 postfix/smtpd[21126]: warning: SASL authentication failure: cannot connect to saslauthd server: Permission denied",
					"Apr 26 10:55:19 tcc1 postfix/smtpd[21126]: warning: SASL authentication failure: Password verification failed",
					"Apr 26 10:55:19 tcc1 postfix/smtpd[21126]: warning: laptop.local[192.168.1.2]: SASL PLAIN authentication failed: generic failure",
				},
				saslFailedCount: 1,
				removedCount:    0,
			},
			fields: fields{
				smtpdSASLAuthenticationFailures: prometheus.NewCounter(prometheus.CounterOpts{}),
				unsupportedLogEntries:           prometheus.NewCounterVec(prometheus.CounterOpts{}, []string{"process"}),
			},
		},
		{
			name: "SASL login",
			args: args{
				line: []string{
					"Oct 30 13:19:26 mailgw-out1 postfix/smtpd[27530]: EB4B2C19E2: client=xxx[1.2.3.4], sasl_method=PLAIN, sasl_username=user@domain",
					"Feb 24 16:42:00 letterman postfix/smtpd[24906]: 1CF582025C: client=xxx[2.3.4.5]",
				},
				removedCount:           0,
				saslFailedCount:        0,
				outgoingTLS:            0,
				smtpdMessagesProcessed: 2,
			},
			fields: fields{
				unsupportedLogEntries: prometheus.NewCounterVec(prometheus.CounterOpts{}, []string{"process"}),
				smtpdProcesses:        prometheus.NewCounterVec(prometheus.CounterOpts{}, []string{"sasl_method"}),
			},
		},
		{
			name: "Issue #35",
			args: args{
				line: []string{
					"Jul 24 04:38:17 mail postfix/smtp[30582]: Verified TLS connection established to gmail-smtp-in.l.google.com[108.177.14.26]:25: TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (2048 bits) server-digest SHA256",
					"Jul 24 03:28:15 mail postfix/smtp[24052]: Verified TLS connection established to mx2.comcast.net[2001:558:fe21:2a::6]:25: TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)",
				},
				removedCount:    0,
				saslFailedCount: 0,
				outgoingTLS:     2,
			},
			fields: fields{
				unsupportedLogEntries: prometheus.NewCounterVec(prometheus.CounterOpts{}, []string{"process"}),
				smtpTLSConnects:       prometheus.NewCounterVec(prometheus.CounterOpts{}, []string{"Verified", "TLSv1.2", "ECDHE-RSA-AES256-GCM-SHA384", "256", "256"}),
			},
		},
		{
			name: "Testing delays",
			args: args{
				line: []string{
					"Feb 24 16:18:40 letterman postfix/smtp[59649]: 5270320179: to=, relay=mail.telia.com[81.236.60.210]:25, delay=2017, delays=0.1/2017/0.03/0.05, dsn=2.0.0, status=sent (250 2.0.0 6FVIjIMwUJwU66FVIjAEB0 mail accepted for delivery)",
				},
				removedCount:           0,
				saslFailedCount:        0,
				outgoingTLS:            0,
				smtpdMessagesProcessed: 0,
			},
			fields: fields{
				smtpDelays: prometheus.NewHistogramVec(prometheus.HistogramOpts{}, []string{"stage"}),
			},
		},
	}
	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			e := &PostfixExporter{
				showqPath:                       tt.fields.showqPath,
				logSrc:                          tt.fields.logSrc,
				cleanupProcesses:                tt.fields.cleanupProcesses,
				cleanupRejects:                  tt.fields.cleanupRejects,
				cleanupNotAccepted:              tt.fields.cleanupNotAccepted,
				lmtpDelays:                      tt.fields.lmtpDelays,
				pipeDelays:                      tt.fields.pipeDelays,
				qmgrInsertsNrcpt:                tt.fields.qmgrInsertsNrcpt,
				qmgrInsertsSize:                 tt.fields.qmgrInsertsSize,
				qmgrRemoves:                     tt.fields.qmgrRemoves,
				smtpDelays:                      tt.fields.smtpDelays,
				smtpTLSConnects:                 tt.fields.smtpTLSConnects,
				smtpDeferreds:                   tt.fields.smtpDeferreds,
				smtpdConnects:                   tt.fields.smtpdConnects,
				smtpdDisconnects:                tt.fields.smtpdDisconnects,
				smtpdFCrDNSErrors:               tt.fields.smtpdFCrDNSErrors,
				smtpdLostConnections:            tt.fields.smtpdLostConnections,
				smtpdProcesses:                  tt.fields.smtpdProcesses,
				smtpdRejects:                    tt.fields.smtpdRejects,
				smtpdSASLAuthenticationFailures: tt.fields.smtpdSASLAuthenticationFailures,
				smtpdTLSConnects:                tt.fields.smtpdTLSConnects,
				unsupportedLogEntries:           tt.fields.unsupportedLogEntries,
				logUnsupportedLines:             true,
			}
			for _, line := range tt.args.line {
				e.CollectFromLogLine(line)
			}
			assertCounterEquals(t, e.qmgrRemoves, tt.args.removedCount, "Wrong number of lines counted")
			assertCounterEquals(t, e.smtpdSASLAuthenticationFailures, tt.args.saslFailedCount, "Wrong number of Sasl counter counted")
			assertCounterEquals(t, e.smtpTLSConnects, tt.args.outgoingTLS, "Wrong number of TLS connections counted")
			assertCounterEquals(t, e.smtpdProcesses, tt.args.smtpdMessagesProcessed, "Wrong number of smtpd messages processed")
		})
	}
}

func assertCounterEquals(t *testing.T, counter prometheus.Collector, expected int, message string) {
	if counter != nil && expected > 0 {
		switch counter.(type) {
		case *prometheus.CounterVec:
			counter := counter.(*prometheus.CounterVec)
			metricsChan := make(chan prometheus.Metric)
			go func() {
				counter.Collect(metricsChan)
				close(metricsChan)
			}()
			var count int = 0
			for metric := range metricsChan {
				metricDto := io_prometheus_client.Metric{}
				metric.Write(&metricDto)
				count += int(*metricDto.Counter.Value)
			}
			assert.Equal(t, expected, count, message)
		case prometheus.Counter:
			metricsChan := make(chan prometheus.Metric)
			go func() {
				counter.Collect(metricsChan)
				close(metricsChan)
			}()
			var count int = 0
			for metric := range metricsChan {
				metricDto := io_prometheus_client.Metric{}
				metric.Write(&metricDto)
				count += int(*metricDto.Counter.Value)
			}
			assert.Equal(t, expected, count, message)
		default:
			t.Fatal("Type not implemented")
		}
	}
}

prometheus-postfix-exporter-0.3.0/showq_test.go

package main

import (
	"os"
	"testing"

	"github.com/kumina/postfix_exporter/mock"
	"github.com/stretchr/testify/assert"
)

func TestCollectShowqFromReader(t *testing.T) {
	type args struct {
		file string
	}
	tests := []struct {
		name               string
		args               args
		wantErr            bool
		expectedTotalCount float64
	}{
		{
			name: "basic test",
			args: args{
				file: "testdata/showq.txt",
			},
			wantErr:            false,
			expectedTotalCount: 118702,
		},
	}
	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			file, err := os.Open(tt.args.file)
			if err != nil {
				t.Error(err)
			}

			sizeHistogram := mock.NewHistogramVecMock()
			ageHistogram := mock.NewHistogramVecMock()
			if err := CollectTextualShowqFromScanner(sizeHistogram, ageHistogram, file); (err != nil) != tt.wantErr {
				t.Errorf("CollectShowqFromReader() error = %v, wantErr %v", err, tt.wantErr)
			}
			assert.Equal(t, tt.expectedTotalCount, sizeHistogram.GetSum(), "Expected a lot more data.")
			assert.Less(t, 0.0, ageHistogram.GetSum(), "Age not greater than 0")
		})
	}
}
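// Possible benchmark sketch (not part of the original tests): measures the textual
// showq parser against the bundled fixture, reusing the same mock histograms as the
// test above. The benchmark name and the per-iteration reopening of the fixture are
// assumptions for demonstration only.
func BenchmarkCollectTextualShowqFromScanner(b *testing.B) {
	for i := 0; i < b.N; i++ {
		file, err := os.Open("testdata/showq.txt")
		if err != nil {
			b.Fatal(err)
		}
		sizeHistogram := mock.NewHistogramVecMock()
		ageHistogram := mock.NewHistogramVecMock()
		if err := CollectTextualShowqFromScanner(sizeHistogram, ageHistogram, file); err != nil {
			b.Fatal(err)
		}
		file.Close()
	}
}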
prometheus-postfix-exporter-0.3.0/testdata/
prometheus-postfix-exporter-0.3.0/testdata/showq.txt

-Queue ID- --Size-- ----Arrival Time---- -Sender/Recipient-------
C420820802*     4387 Mon Feb 24 13:35:18  sender@example.com
                                          recipient@lerum.se

8D5D4205B9*     4033 Mon Feb 24 13:22:16  sender@example.com
                                          recipient@lerum.se

7465520414*     4043 Mon Feb 24 13:22:16  sender@example.com
                                          recipient@lerum.se

3E2F72070A*     5301 Mon Feb 24 13:35:39  sender@example.com
                                          recipient@hotmail.se

542032060A*     5828 Mon Feb 24 13:34:46  sender@example.com
                                          recipient@skatteverket.se

4B96A2037C*     9868 Mon Feb 24 13:32:03  sender@example.com
                                          recipient@lerum.se

E88EA20796*     5956 Mon Feb 24 13:34:55  sender@example.com
                                          recipient@edu.halmstad.se

8C9912052C*     4047 Mon Feb 24 13:22:16  sender@example.com
                                          recipient@lerum.se

70BDA2079B*     4404 Mon Feb 24 13:35:18  sender@example.com
                                          recipient@lerum.se

76E6A20536*     3875 Mon Feb 24 13:21:20  sender@example.com
                                          recipient@lerum.se

92C662062A*     3864 Mon Feb 24 13:21:20  sender@example.com
                                          recipient@lerum.se

BA9BC2071E*     4387 Mon Feb 24 13:35:18  sender@example.com
                                          recipient@lerum.se

9A67020670*     4393 Mon Feb 24 13:34:06  sender@example.com
                                          recipient@lerum.se

651AC20138*     3872 Mon Feb 24 13:23:17  sender@example.com
                                          recipient@lerum.se

4F16D20516*     4052 Mon Feb 24 13:24:38  sender@example.com
                                          recipient@lerum.se

C9C4A20501*     5099 Mon Feb 24 13:14:10  sender@example.com
                                          recipient@haninge.se

0572820D64      4098 Sat Feb 22 00:44:54  sender@example.com
(host mail.wekudata.com[37.208.0.7] said: 452 4.2.2 Quota exceeded (rehanna@stahlstierna.se) (in reply to RCPT TO command))
                                          recipient@stahlstierna.se

0B2C320952      4173 Sat Feb 22 00:42:07  sender@example.com
(host alt1.gmail-smtp-in.l.google.com[108.177.97.26] said: 452-4.2.2 The email account that you tried to reach is over quota. Please direct 452-4.2.2 the recipient to 452 4.2.2 https://support.google.com/mail/?p=OverQuotaTemp q24si6538316pgt.498 - gsmtp (in reply to RCPT TO command))
                                          recipient@gmail.com

0CC2B22124     10926 Fri Feb 21 13:31:58  sender@example.com
(host alt1.gmail-smtp-in.l.google.com[108.177.97.26] said: 452-4.2.2 The email account that you tried to reach is over quota. Please direct 452-4.2.2 the recipient to 452 4.2.2 https://support.google.com/mail/?p=OverQuotaTemp f10si11999094pgj.597 - gsmtp (in reply to RCPT TO command))
                                          recipient@gmail.com

0C84020606      4898 Mon Feb 24 08:30:34  sender@example.com
(host alt1.gmail-smtp-in.l.google.com[108.177.97.26] said: 452-4.2.2 The email account that you tried to reach is over quota. Please direct 452-4.2.2 the recipient to 452 4.2.2 https://support.google.com/mail/?p=OverQuotaTemp 2si12536346pld.231 - gsmtp (in reply to RCPT TO command))
                                          recipient@gmail.com

04EAA203C0      4133 Mon Feb 24 12:21:58  sender@example.com
(host alt1.gmail-smtp-in.l.google.com[108.177.97.26] said: 452-4.2.2 The email account that you tried to reach is over quota. Please direct 452-4.2.2 the recipient to 452 4.2.2 https://support.google.com/mail/?p=OverQuotaTemp i16si12220651pfq.60 - gsmtp (in reply to RCPT TO command))
                                          recipient@gmail.com

00C33202B6      4823 Mon Feb 24 11:32:37  sender@example.com
(connect to gafe.se[151.252.30.111]:25: Connection refused)
                                          recipient@gafe.se

046E0218CA      4154 Mon Feb 24 00:13:12  sender@example.com
(host alt1.gmail-smtp-in.l.google.com[108.177.97.26] said: 452-4.2.2 The email account that you tried to reach is over quota. Please direct 452-4.2.2 the recipient to 452 4.2.2 https://support.google.com/mail/?p=OverQuotaTemp y1si11835269pgi.474 - gsmtp (in reply to RCPT TO command))
                                          recipient@gmail.com

06373212DC      4088 Sat Feb 22 00:34:11  sender@example.com
(connect to smtp.falun.se[192.121.234.25]:25: Connection timed out)
                                          recipient@utb.falun.se